It’s no secret that Software Quality Managers are facing increasing pressure to deliver high-quality software at record-breaking speed. But when it comes to software quality, how exactly can we measure success? Time-to-market is a much simpler calculation, but measuring our performance in delivering high-quality software depends on a multitude of factors, such as the project methodology (waterfall, hybrid, agile), the complexity of the software, the level of technical debt involved, the number of interfaces, and much more. In short, the number of variables that play into an acceptable level of high-severity defects should not be underestimated.
To survive in this marketplace, we must evolve continuously – both in our opinions and our measuring sticks – which is why we’ve come up with 8 KPIs that you should add to your Quality Scorecard. Start tracking today to mitigate release risk, improve quality, and measure your success.
Defect Detection Effectiveness (DDE, AKA Defect Detection Percentage)
Overall regression testing effectiveness is calculated as the ratio of defects found prior to release to the total found before and after release by your customers. Defects found after you release are typically known as “incidents” and logged in a helpdesk system, whereas defects found during testing phases (e.g., Unit, System, Regression, or UAT testing) are identified prior to release and documented with tools like Panaya Test Dynamix. To properly calculate this KPI, always record the software version in which each defect was identified prior to its release into your productive environment.
The formula often used for DDE:
Number of Defects Identified Prior to Release /
(Number of Defects Identified Prior to Release + Escaped Defects Identified by End Users (e.g., Incidents))
Here’s a simple illustration: assume 95 defects were found during your regression testing cycle on that last monthly SAP Service Pack and 25 defects were logged after the release. The DDE would be calculated as 95 divided by (95 + 25) = 79%.
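The calculation above can be sketched in a few lines of Python. The function name is ours, not part of any Panaya tooling:

```python
def defect_detection_effectiveness(pre_release_defects: int, escaped_defects: int) -> float:
    """DDE (%) = defects found before release / (defects found before release + escaped defects)."""
    total = pre_release_defects + escaped_defects
    if total == 0:
        return 100.0  # no defects found anywhere: nothing escaped
    return 100.0 * pre_release_defects / total

# The example from the text: 95 defects in regression testing, 25 escaped incidents.
print(round(defect_detection_effectiveness(95, 25)))  # -> 79
```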
Keep in mind that DDE should be monitored with a line chart that starts at 100% the day after releasing to production. As your internal end users and customers begin working with your latest SAP Service Pack, for example, they will inevitably log a few incidents. We’ll often see a “feeding frenzy” occur within the first week or two after a Service Pack hits the productive environment. That is when you’ll notice a quick drop from 100% to about 95% as incidents are logged. If your company is on a monthly Service Pack release cadence, then measure DDE for a 30-day period on each Service Pack. On the other hand, if your company is only running four (4) major release cycles per year – then measure it for 90 days to see how it declines over that period of time.
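The declining line chart described above can be sketched as a daily recomputation of DDE over the measurement window. The data here is hypothetical, chosen to mirror the “feeding frenzy” pattern in the text:

```python
from datetime import date, timedelta

def dde_series(release_day: date, pre_release_defects: int,
               incident_dates: list, window_days: int = 30) -> list:
    """Recompute DDE each day of the window as escaped incidents accumulate.
    The series starts at 100% the day after release and declines as incidents arrive."""
    series = []
    for offset in range(1, window_days + 1):
        day = release_day + timedelta(days=offset)
        escaped = sum(1 for d in incident_dates if d <= day)
        dde = 100.0 * pre_release_defects / (pre_release_defects + escaped)
        series.append((day, round(dde, 1)))
    return series

# Hypothetical data: 95 pre-release defects, 5 incidents logged early in week one.
release = date(2024, 1, 1)
incidents = [release + timedelta(days=d) for d in (2, 2, 3, 4, 6)]
series = dde_series(release, 95, incidents)
print(series[0][1])   # day 1 after release: 100.0
print(series[-1][1])  # day 30: 95.0
```

A quarterly release cadence would simply use `window_days=90`, as the text suggests.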
What is considered a “good DDE”? Much like blood pressure readings, the benchmark varies by organization and evolves over time. Even though the medical community defines the “optimal” blood pressure reading as 120/80, it’s natural to see an increase in systolic blood pressure as we age. With DDE, industry practitioners and thought leaders consider 90% commendable in most industries. However, some organizations achieve >95% DDE on a consistent basis by shifting left with change impact simulation tools such as Panaya’s Impact Analysis.
System-Wide Defects (SWD)
Do you ever encounter multiple defects that are associated with the same objects? Sure you do; it’s a common phenomenon that many test managers encounter. Suddenly, you see a huge uptick in the number of bugs reported in a UAT cycle.
So, what are your options to manage the inevitable drama of “defect inflation”? It’s simple – begin tracking what Panaya calls “System-Wide Defects.” Tracking this manually takes forever. It’s also painful to do with legacy ALM tools, where all you’re left with is the ability to link defects to one another and add a comment. But if you don’t have a choice in tools at the moment, then you’ll need to set aside the time to track System-Wide Defects so you can properly “explain away” why the bug trend line is moving upward toward the end of a testing cycle rather than down. If you get a chance to check out Panaya Test Dynamix, it has SWD built into the engine itself – calculating SWD at the click of a button.
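If you do have to track this manually, the core idea can be sketched as grouping defects by the objects they touch. The defect records and object names below are entirely hypothetical, and this is not Panaya’s actual SWD algorithm:

```python
from collections import defaultdict

# Hypothetical defect records: each defect lists the technical objects it touches.
defects = [
    {"id": "D-101", "objects": ["Z_PRICING_BADI"]},
    {"id": "D-102", "objects": ["Z_PRICING_BADI", "VA01"]},
    {"id": "D-103", "objects": ["VA01"]},
    {"id": "D-104", "objects": ["ME21N"]},
]

def system_wide_defects(defects):
    """Group defects by shared object so an uptick can be traced to a common root cause."""
    by_object = defaultdict(list)
    for defect in defects:
        for obj in defect["objects"]:
            by_object[obj].append(defect["id"])
    # An object touched by 2+ defects is a candidate system-wide defect cluster.
    return {obj: ids for obj, ids in by_object.items() if len(ids) > 1}

print(system_wide_defects(defects))
# {'Z_PRICING_BADI': ['D-101', 'D-102'], 'VA01': ['D-102', 'D-103']}
```

Clusters like these help explain why the bug trend line rises: three of the four defects above trace back to just two shared objects.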
The Spider Web – Residing within the ‘Risk Cockpit’ of Panaya’s platform, this powerful yet simple representation of 6 additional key performance indicators rounds off the most important KPIs that every quality, testing, and release manager should be tracking.
QA managers understand risk at a deeper level that can only be realized with code- or transport-level visibility rolled up to each requirement. This requires the right set of tools. Panaya is the only tool that answers the needs of SAP-run organizations seeking intelligent suggestions for unit tests and risk analysis based on transport activity. This level of tracking is available within Panaya Release Dynamix (RDx).
We live in the age of the customer, which drives every organization’s digital transformation strategy. Today, we can’t afford to be siloed in our thinking or our organizational approach to software quality assurance and delivery. Our traditional ALM models of yesteryear were not designed for the continuous delivery model of today. To combat this old way of thinking, QA and testing managers must embed themselves within the action of application development – and that means having a pulse on the delivery of user stories. It’s not enough to “sit and wait” for a user story to reach a done status. We must follow the evolution of a user story, attend daily Scrum meetings, and talk openly about the risks unfolding with important changes being made to the application under test. This is also available within Panaya Release Dynamix (RDx).
Test Plan Coverage
This is a great KPI to track because you’re not relegated to tracking only system, integration, regression, and UAT coverage. In the true spirit of shifting left, there’s rising importance in tracking unit testing coverage. Sounds crazy, right? It’s not – especially if you have the right tools to make not only the execution of unit tests easy, but the capturing of the actual results (evidence) even easier. With Panaya Test Center’s built-in test record-and-play capability, participation in unit testing will skyrocket. Not only that, you’ll be able to display a Requirements Traceability Matrix showing end-to-end coverage while also easily showcasing actual results to your audit department, from unit through to regression testing.
Change Risk Analysis
Risk is inherent to any change we make to an application under test, but we don’t always know if we are testing the right things. Many organizations have their own definition of what ‘change risk’ means to them. Within the ‘Risk Cockpit’ of Panaya’s Release Dynamix (RDx), you can take the guesswork out of tracking change with Impact Analysis for your project or next release. RDx systematically calculates the risk for each requirement and keeps you abreast of how it changes as you move further into the delivery lifecycle.
Test Execution Risk
It’s all too common for organizations to track KPIs like authored tests, passed tests, automated tests, and tests executed – but what about tracking the actual steps executed within each of the tests? Many of the popular ALM platforms fail to provide out-of-the-box reporting capabilities to track test ‘step’ execution progress. When you have many different ‘hand-offs’ occurring across a UAT cycle, it makes sense to track Test Execution Risk and status, not only at the test-level, but also at the business process level. Panaya Test Center does just that, out-of-the-box.
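The step-level roll-up described above can be sketched as follows. The test names, step statuses, and the idea of treating “passed” or “failed” as an executed step are all our assumptions for illustration:

```python
# Hypothetical test records with per-step status for a UAT cycle.
tests = [
    {"name": "Create Sales Order", "steps": ["passed", "passed", "failed", "not run"]},
    {"name": "Post Goods Issue",   "steps": ["passed", "passed", "passed"]},
]

def step_execution_progress(tests):
    """Roll step status up to the test level and to the overall cycle level."""
    report = {}
    total_steps = total_done = 0
    for test in tests:
        # A step counts as executed once it has a terminal status.
        done = sum(1 for s in test["steps"] if s in ("passed", "failed"))
        report[test["name"]] = round(100.0 * done / len(test["steps"]), 1)
        total_steps += len(test["steps"])
        total_done += done
    report["overall"] = round(100.0 * total_done / total_steps, 1)
    return report

print(step_execution_progress(tests))
# {'Create Sales Order': 75.0, 'Post Goods Issue': 100.0, 'overall': 85.7}
```

Tracking at the step level exposes stalled hand-offs that a test-level “in progress” status would hide.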
With Panaya, Software Quality Managers and all relevant stakeholders can meet their testing KPIs to drive more innovation while reducing efforts by 30-50%, without compromising on scope or quality. Standardize your testing processes and truly measure success as all stakeholders adopt the same testing methodology to gain real-time visibility over all test cycles, including large-scale UAT.