Test Coverage Metrics – Whiteboard Friday

In last week’s Whiteboard Friday episode we talked about measuring software quality metrics to gauge the health of the application under test.  Inevitably, the next core set of metrics you want to analyze revolves around coverage.  For example, “Of these tests that are passing or failing, which artifacts or areas of my application are they designed to cover, and do they ensure we’re producing a high-quality product?”

Today we are going to break coverage down into three categories:

  • New feature coverage
  • Application area coverage, which can also be rolled into regression coverage
  • Code coverage

Watch the full video below:

Catch last week’s episode here: “Measuring Software Quality Metrics”

In this episode you will learn about these test coverage metrics:

  • CORE
    • Number of requirements covered / Total number of requirements * 100 → but this is only one part of coverage
    • Test cases by requirement – The most common view for new feature testing is to see how many tests we have aligned with a user story or requirement
  • EXTRA
    • Defects per requirement – the test cases might be fine, but the requirement itself might be what’s causing all the problems
    • Requirements without test coverage – some requirements might not need auditable test cases with steps, but they still need a charter or exploratory test session created against them for coverage.  It’s important to ask, “We are about to push this user story into production; are we OK that we haven’t done any testing against it?”
  • PRO TIPS
    • Heat map visualization – this is key for understanding risk within your application.  For example, we can apply a heat map to new feature testing to indicate where our troubled requirements are, and we can apply it to an application area to pull in coverage from regression testing.

Full Video Transcript Below:

Hey, everyone. Welcome back to Whiteboard Friday. We’re continuing our series on Metrics That Matter. Last time we talked about software quality metrics: basically measuring, through more results-driven reporting, what’s passing, what’s failing, and how I can align that back to producing better-quality software.

Inevitably, the next question’s going to be: now that I’ve got quality metrics, what are my coverage metrics? Coverage metrics are usually broken down into the new feature testing that actually occurs, your application area coverage, and code coverage. Those are usually the main three that people talk about when they ask how well our tests cover what our quality metrics align to. Just like last time, we’ve got some core reports we want to talk about, some extra ones, and some tips to be aware of as you go through these coverage reports.

The first one is really the basic one: requirements coverage, a report that takes the number of covered requirements divided by the total number of requirements. So if I’ve got 90 covered requirements out of 100 total, I’ve got 90% coverage on those requirements. That’s usually the one everyone wants to know: out of the total requirements I have, what is my coverage percentage? Here we get a 90% coverage rate.
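To make that formula concrete, here’s a minimal Python sketch of the core calculation. The requirement records and their “covered” flag are hypothetical stand-ins, not any particular tool’s API:

```python
# Hypothetical requirement records; "covered" marks whether at least
# one test is linked to the requirement (90 of 100 here).
requirements = [{"id": f"REQ-{n}", "covered": n <= 90} for n in range(1, 101)]

covered = sum(1 for r in requirements if r["covered"])
coverage_pct = covered / len(requirements) * 100

print(f"{covered}/{len(requirements)} requirements covered = {coverage_pct:.0f}%")
# -> 90/100 requirements covered = 90%
```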

The next one is looking at your test cases per requirement. This is really helpful when you break it down into seeing these laid out with the actual result next to each one. You’ve got your requirements and a coverage rate of 90%, but within that 90%, what’s passing and what’s failing? You’re actually aligning results to it, not just saying, “Yeah, we have a test case that covers it.” Usually what happens is that you’ll get a coverage percentage but have no idea what actually goes into the results behind that percentage. So you want to make sure your test cases, with their results, align back to that coverage.
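As a rough sketch of what that alignment looks like, assuming hypothetical test run records that each carry a requirement ID and a result:

```python
from collections import defaultdict

# Hypothetical test runs: (requirement ID, result) pairs.
runs = [
    ("REQ-1", "passed"), ("REQ-1", "failed"),
    ("REQ-2", "passed"), ("REQ-2", "passed"),
]

# Tally pass/fail per requirement, so coverage carries results with it.
results_by_req = defaultdict(lambda: {"passed": 0, "failed": 0})
for req, result in runs:
    results_by_req[req][result] += 1

for req, tally in sorted(results_by_req.items()):
    print(req, tally)
# REQ-1 {'passed': 1, 'failed': 1}
# REQ-2 {'passed': 2, 'failed': 0}
```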

The first of the extra reports we want to talk about is defects per requirement. In our previous talk we discussed a defect density rate that referred to test runs; what we’re doing here is looking at defect density per requirement. For example, you can see that for requirement A I’ve got a count of 5 defects, but for requirement B I’ve got 28. What that does is, one, it gets us out of looking at defects per test run and back to the actual requirement or user story, so we can see whether something is really wrong with that user story, depending on its defect density.
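A quick sketch of that tally, using made-up defect records that each trace back to a requirement:

```python
from collections import Counter

# Hypothetical defects, each traced back to the requirement it was found against.
defects = ["REQ-A"] * 5 + ["REQ-B"] * 28

for req, count in Counter(defects).most_common():
    print(f"{req}: {count} defects")
# REQ-B: 28 defects  <- the requirement itself may be the problem
# REQ-A: 5 defects
```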

The next one is looking at requirements without coverage. This looks back at the 10% we didn’t cover and asks: what’s the status of those requirements? Keep in mind that if you’ve got a ton of requirements in a done status with no coverage against them, it means you’re about to release a requirement or user story into production, or into a staging area for other team members to use, without ever having tested it. This also ties back to a risk-based approach: if you’re getting toward the end of your testing cycle and you’ve missed requirements, look at the ones in done or almost-done statuses and say, “OK, we don’t have time to test everything. We need to test the ones that are going out into production right away.”
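Here’s a small sketch of that risk check, assuming hypothetical requirement records with a workflow status and a set of linked tests:

```python
# Hypothetical requirements: workflow status plus linked test cases
# (an empty set means no coverage).
requirements = [
    {"id": "REQ-10", "status": "done",        "tests": set()},
    {"id": "REQ-11", "status": "in progress", "tests": set()},
    {"id": "REQ-12", "status": "done",        "tests": {"TC-7"}},
]

# Uncovered requirements that are already "done" are about to ship
# untested, so surface those first.
uncovered = [r for r in requirements if not r["tests"]]
at_risk = [r["id"] for r in uncovered if r["status"] == "done"]
print("Done but untested:", at_risk)  # -> ['REQ-10']
```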

From a pro tip perspective, a lot of these metrics are viewed in grids, table formats, and percentages of what’s passing and what’s covering what. The tip here is to look at requirement coverage in a different way: heat map visualization. A heat map takes different metrics, such as the number of requirements and the test runs that make up each requirement, sections them off into blocks, and changes each block’s color based on an algorithm that illustrates how risky that area of the application is.

For example, if I’ve got a bunch of requirements that make up a particular block, I can look and say, “OK, for this particular block, how many runs are passing or failing?” If everything’s passing, then I’m good. But as the block moves from green toward red, more test runs are failing in that requirement, and I can quickly spot critical areas of the application that we need to focus on. This gets out of the grid rows and the percentages and gives more of a visualization, which everyone prefers, around how to prioritize what you need to test and what’s missing from your test coverage.
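As a toy version of that green-to-red idea, here’s a sketch that buckets each requirement’s block into a color by its failure rate; the thresholds are invented for illustration, and a real tool would use its own algorithm:

```python
def block_color(passed: int, failed: int) -> str:
    """Map a block's failure rate to a heat-map color (invented thresholds)."""
    total = passed + failed
    failure_rate = failed / total if total else 0.0
    if failure_rate == 0.0:
        return "green"
    if failure_rate < 0.25:
        return "yellow"
    return "red"

# Hypothetical blocks: requirement -> (passing runs, failing runs).
blocks = {"REQ-1": (10, 0), "REQ-2": (9, 2), "REQ-3": (3, 6)}
for req, (passed, failed) in blocks.items():
    print(req, block_color(passed, failed))
# REQ-1 green, REQ-2 yellow, REQ-3 red
```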

That’s it for today’s talk on test coverage metrics. Stay tuned for next time, and we’ll see you on our next Whiteboard Friday.

Hey everyone, Ryan Yackel here with QASymphony. Thanks for watching Whiteboard Friday. For more videos, visit QASymphony.com or subscribe to our YouTube channel. Hope to see you soon.

Check out last season of Whiteboard Friday below.

View more QASymphony videos on YouTube.  If you have any recommendations for future series, let us know on Twitter at @qasymphony.
