How having the right software testing metrics can help improve software quality
It’s no secret that software testers now face mounting pressure to deliver faster releases and make smarter release decisions. On its own, that responsibility is nothing to sneeze at, but couple it with the need to produce higher-quality software in that shorter timeframe and it starts to look like an impossible ask.
Producing higher-quality software in less time might seem impossible, but the goal is well within reach of any testing team. It simply requires testers to take advantage of what’s already in front of them: software testing metrics.
The good news is that we already have most of the tools we need at our disposal. Specifically, we already have the data; we just need to change how we consolidate and analyze it. Once we do, we’ll have the key to more intelligent software testing, because properly consolidating and analyzing software test metrics helps us make more proactive, informed decisions and complete software testing better and faster. Ultimately, these improvements deliver a higher-quality end result.
How Much Do Metrics Really Matter?
Making better use of test metrics by changing how we look at them sounds all well and good, but how much does it really matter? Can it truly change how we complete software testing and the quality of the resulting software? The short answer is yes. For any skeptics out there, look no further than these three scenarios for proof:
How good is your software? Does it work the way it’s intended to? Are there any bugs that hinder performance? Seemingly simple questions like these typically guide quality testing, but how do you know when you’ve reached satisfactory answers and the software is ready to move on to production?
Without metrics: Johnny and his team ran tests on the software, and they know which of those tests passed and which failed, but they don’t have a clear picture of what’s still outstanding. Additionally, while the team reported defects as they found them, there’s no consensus on how the defects compare to one another in severity and which need to be addressed first. Finally, the team has no insight into where the defects have come from, making it difficult to tell whether any specific areas of the software are especially risky. This inability to prioritize testing needs and identify risk leads to blind decision-making, wasted time, and poor decisions that hurt software quality.
With metrics: Johnny and his team ran tests on the software and consulted several reports along the way to inform their efforts. For instance, the team looked at a test-run summary that depicts what’s happening across all testing activities (including what is passing and failing, what is complete and what is outstanding); a defect report that highlights the priority, severity, and status of defects to make triaging them for resolution easy; and a defect density report that shows how many defects come from particular areas of the software, helping the team identify risk and realign their efforts accordingly. Thanks to these reports, the team has a firm grasp on the overall health of the application and can make informed decisions about where to spend their time and when. In turn, this information gives the team confidence that they are handing over the highest-quality software at the end of their testing cycle.
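As a rough illustration (not any particular tool’s report), the defect density metric described above boils down to counting defects per area of the software and normalizing by that area’s size. The module names, sizes, and defect records below are purely hypothetical:

```python
from collections import Counter

def defect_density(defects, module_sizes):
    """Defects per KLOC for each module, given (module, severity)
    defect records and module sizes in thousands of lines of code."""
    counts = Counter(module for module, _severity in defects)
    return {m: counts[m] / kloc for m, kloc in module_sizes.items()}

# Illustrative data only: module names and sizes are assumptions.
defects = [("checkout", "high"), ("checkout", "low"), ("search", "medium")]
sizes = {"checkout": 2.0, "search": 4.0}
print(defect_density(defects, sizes))  # → {'checkout': 1.0, 'search': 0.25}
```

A report like this makes it obvious that, in the sketch above, the checkout area is producing four times as many defects per line of code as search, so that is where extra testing attention belongs.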
All software testers know that we can never cover 100% of an application during testing. While we’ve accepted this fact, it means that we need to be smart about the areas we do cover to ensure we spread out our efforts across a variety of areas and pay extra attention to particularly risky ones. But how do we know we have the right mix of coverage?
Without metrics: Rita and her team are nearing the end of their software testing cycle, but they lack insight on the percentage of requirements they’ve tested, the test cases they’ve run within each of those requirements, and the number of defects they’ve found within each requirement. This lack of insight means the team is unaware of which requirements have had zero test coverage and which requirements have produced a significant amount of defects. As the team prepares for the final days of testing, they have no way to guide their decisions about where to focus their efforts.
With metrics: Rita and her team are nearing the end of their testing cycle, and they know exactly where to spend their remaining time because they’ve tracked their coverage every step of the way. The team has accomplished this by using metrics to understand how many of the requirements they’ve tested and to identify which test cases they’ve run for each requirement, as well as how many of those test cases have passed or failed. They’ve also tracked which requirements have no test coverage at all and the number of defects found per requirement, which helps them home in on problems endemic to a particular requirement or user story. Taken together, these metrics have helped the team keep a pulse on risks, whether due to a lack of coverage or a high number of defects, so they can be certain they dedicate their time where it’s most needed.
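The coverage tracking described in this scenario can be sketched in a few lines. This is a minimal, hypothetical example: the requirement and test case identifiers are invented, and real test management tools compute these figures for you:

```python
def coverage_report(requirements, test_runs):
    """Summarize per-requirement coverage from a mapping of
    requirement id -> list of (test_case_id, passed) results.
    Returns the per-requirement summary, the list of requirements
    with zero coverage, and the percent of requirements covered."""
    report = {
        req: {
            "cases_run": len(test_runs.get(req, [])),
            "passed": sum(1 for _tc, ok in test_runs.get(req, []) if ok),
            "failed": sum(1 for _tc, ok in test_runs.get(req, []) if not ok),
        }
        for req in requirements
    }
    uncovered = [req for req in requirements if not test_runs.get(req)]
    pct = 100.0 * (len(requirements) - len(uncovered)) / len(requirements)
    return report, uncovered, pct

# Illustrative data only: ids are assumptions.
reqs = ["REQ-1", "REQ-2", "REQ-3"]
runs = {"REQ-1": [("TC-1", True), ("TC-2", False)], "REQ-2": [("TC-3", True)]}
report, uncovered, pct = coverage_report(reqs, runs)
print(uncovered, round(pct, 1))  # → ['REQ-3'] 66.7
```

In this sketch, REQ-3 immediately stands out as the requirement with zero coverage, which is exactly the kind of risk Rita’s team uses these metrics to surface before the final days of testing.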
As we set timelines for testing cycles, it’s important that we don’t promise to take on more than our team can handle. And during testing, teams need to know how to budget their time properly in order to meet deadlines. But how do you know how quickly your team can complete certain tests?
Without metrics: Kyle and his team are scheduled to test a new piece of software for the next two weeks, but as time passes they’re unsure whether they’ll be able to deliver as planned. The team does not have a clear understanding of the number of test cases they’re creating, how the number of defects they’re opening compares to the number they’re closing, or the average complexity of the test cases they’re creating (and therefore how long those test cases will take to complete). At the end of the two weeks, the team has completed most, but not all, of what they had planned. And not only did the team come up short this time, but because they don’t know exactly how the number of tests they executed compared to the number they had planned, there are open questions about what timeline they should promise going forward.
With metrics: Kyle and his team are scheduled to test a new piece of software for the next two weeks, and they are confident they will deliver as planned. Their confidence stems from two places. First, they developed their plan based on metrics from past testing cycles that gave them a clear vision of what they could execute in a given time. Second, throughout the cycle they regularly revisit their metrics to keep an eye on their status. These metrics include their inflow rate for user stories, test cases, and charters; their open vs. close rate for defects; and the average test case complexity, so they know how much time to budget. Because the team planned based on past performance and kept close track of their progress along the way, they know exactly where they stand at all times and what they need to do to deliver as promised.
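The open vs. close rate Kyle’s team watches is simple arithmetic over defect records: if defects are opening faster than they’re closing, the backlog is growing and the deadline is at risk. A minimal sketch, with invented field names and dates:

```python
from datetime import date

def open_vs_close_rate(defects, start, end):
    """Count defects opened and closed within a date window.
    A positive net over several consecutive windows means the
    defect backlog is growing and the schedule is slipping."""
    opened = sum(1 for d in defects if start <= d["opened"] <= end)
    closed = sum(1 for d in defects
                 if d.get("closed") and start <= d["closed"] <= end)
    return {"opened": opened, "closed": closed, "net": opened - closed}

# Illustrative records only: field names and dates are assumptions.
defects = [
    {"opened": date(2024, 3, 4), "closed": date(2024, 3, 6)},
    {"opened": date(2024, 3, 5), "closed": None},
    {"opened": date(2024, 3, 7), "closed": date(2024, 3, 8)},
]
print(open_vs_close_rate(defects, date(2024, 3, 4), date(2024, 3, 8)))
# → {'opened': 3, 'closed': 2, 'net': 1}
```

Tracked window over window, a net like this tells the team early, rather than at the deadline, whether their two-week plan is still realistic.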
How Will the Right Metrics Impact Your Software Testing?