Measuring Software Quality Metrics – Whiteboard Friday

Welcome to week 3 of Whiteboard Friday (series 2). In this video we continue with our theme “Metrics that Matter,” talking about how to measure software quality in terms of metrics.

This is not an exhaustive list of reports; we’re going to separate them into three categories: Core, Extra and Pro Tips.

Watch the full video below:

Catch last week’s episode here: “A Caution on Software Quality Metrics.”

In this episode you will learn how to measure software quality metrics through reports such as:

  • Test Run Summary
  • Defect Priority / Severity and Status
  • By Week/Sprint
  • Results per requirement
  • Defect density
  • Manual vs Automated
  • Days Since Run
  • Flapping

Read the full transcript below:

Hey everyone, welcome to Whiteboard Friday, we’re continuing our discussion on Metrics That Matter. Last week we talked about a cautionary tale around metrics that matter. And really the big takeaway is that you always need to make sure you’re having conversations around those metrics to make sure things aren’t being faked, or any of the other errors we talked about. So check out the last Whiteboard Friday where we talked about that. Today we’re continuing that conversation. We’re talking about how to measure software quality in terms of metrics. And again, this is not an exhaustive list of reports that we’re gonna talk about, but really we’re gonna be separating them into the core category, the extra category and also the pro tip category. So what are the most important? What are some extra ones we can give you? And then some tips to think about.

And before I get started here, one thing I did want to say is, “Why do quality metrics usually revolve around results-driven reporting?” Well, that’s usually because if you think about production, the person who’s monitoring the production and health of the application is measuring things like: What’s red? What’s green? What needs attention? What doesn’t need attention? In the same way, a quality engineer or a director of quality is usually measuring by results-type reporting, which is: What’s passing? What’s failing? What needs attention? And what’s not being executed? So that’s why you predominantly see quality metrics rolled into that category. However, there’s also non-results-driven reporting, like exploratory testing or end-user validation testing, that’s not really focused on what’s passing or failing, but on what the overall user experience is. And we actually did a Whiteboard Friday on exploratory testing, so check out that one. It talks about some non-results-type reporting with exploratory testing.

So with that, let’s go ahead and get into some of the core reports. The one report that we always hear people talk about is a test run summary report. And that’s usually broken up by cycle, by project, or by release. But usually when people wanna see that, they also wanna see it bubbled up to the overall level if they’re gonna be managing across different projects. So what’s my current project status for project A and B, and can I get that in one view? The thing to look out for, though, is you always want to look at the latest test run results in this view. You don’t want it cluttered with every single test run log. And also there could be lots of different fixes that occur. Let’s say you ran a test case ten times, and it failed the first nine times but passed the last time; it could be that you’re actually going through some iterations of other fixes during that time. So always pay attention to the last run executions.
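As a concrete illustration, here’s a minimal Python sketch of that “latest run only” rule, built on a made-up list of run records (the test_id / executed_at / status fields are assumptions, not any particular tool’s export format):

    from collections import Counter

    # Hypothetical run log: several executions of the same test cases.
    runs = [
        {"test_id": "TC-1", "executed_at": "2017-03-01T10:00", "status": "failed"},
        {"test_id": "TC-1", "executed_at": "2017-03-02T09:30", "status": "passed"},
        {"test_id": "TC-2", "executed_at": "2017-03-02T11:15", "status": "failed"},
    ]

    # Keep only the most recent run per test case so old logs don't clutter the view.
    latest = {}
    for run in runs:
        prev = latest.get(run["test_id"])
        if prev is None or run["executed_at"] > prev["executed_at"]:
            latest[run["test_id"]] = run

    # Summarize the latest results: what's passing, what's failing, what needs attention.
    summary = Counter(run["status"] for run in latest.values())
    print(summary)  # e.g. Counter({'failed': 1, 'passed': 1})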

When defects are found, what people usually wanna know is, “Okay, what defects do I need to really pay attention to?” There are defects all the time that come through. Maybe they’re minor issues that really don’t need to be fixed. And all the time we see software that goes out with release notes on known defects. So known defects are okay; you notify your customer base and say, “Yeah, you know there’s a defect here, but it’s not gonna really affect the overall business process of your application. It’s not gonna affect the overall user experience. So we’re not really gonna fix that.” So the things you’re gonna want to keep in mind about which defects need attention are really based off the status, the severity, and the priority of that defect. So being able to know, okay, what’s an open, critical-severity defect that’s high priority? Or maybe something that’s been reopened that’s major, but also has a low priority because we want to fix that critical-severity defect first. So really keep in mind that status, priority and severity are must-haves for those types of reports. Again, more data can be put in there. Those are some of the core reports people want.

Now for the extra category, I’m gonna step over here to kind of talk through that. The first one is looking, week by week or day by day, at your test results in a trend graph that would show: What’s being executed? What’s passing? What’s failing? And what is still not complete yet? So you want to know: How fast are we moving? Are we behind in our current release schedule for testing? Do we need to make up time somewhere else? Do we need to cut testing? You want to be able to know what your overall executions are, and what’s left. It’s a great help to see what’s trending and what resources you’re gonna reallocate.
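For that week-by-week view, a rough sketch (again with hypothetical run records, and weeks bucketed by ISO calendar week) could look like this:

    from collections import Counter, defaultdict
    from datetime import date

    # Hypothetical run records with an execution date and a result status.
    runs = [
        {"executed_at": date(2017, 3, 1), "status": "passed"},
        {"executed_at": date(2017, 3, 2), "status": "failed"},
        {"executed_at": date(2017, 3, 8), "status": "incomplete"},
    ]

    # Bucket results by ISO week to see how fast testing is moving.
    trend = defaultdict(Counter)
    for run in runs:
        year, week, _ = run["executed_at"].isocalendar()
        trend[(year, week)][run["status"]] += 1

    for (year, week), statuses in sorted(trend.items()):
        print(f"{year}-W{week:02d}: {dict(statuses)}")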

The next thing revolves around the requirements, or rather the results per requirement. This is key because a lot of times when you get requirements, your user stories will come into the testing, and you’ll notice that a lot of the testing will sometimes gravitate towards a certain set of requirements. Maybe because the testers are very familiar with those requirements, and they really don’t want to create test cases against the complex requirements. Maybe those requirements are actually too complex and you need to ask to have them broken down. And an indicator of that is when you look and see that requirement 1 has 90% of all your test runs allocated to it, whereas for requirements 2 and 3 you only have 5%. So you want to make sure you’re looking at all my results: when I break them down, where are they all feeding to? Is it really only one requirement [inaudible 04:50], or can we spread that out, or can we have a talk with the D.A.? Because this could really skew our results.
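To make that skew concrete, here’s a small sketch that reports what share of all test runs lands on each requirement; the run-to-requirement mapping is invented purely to mirror the 90% / 5% example above:

    from collections import Counter

    # Invented mapping of test runs to the requirement they cover.
    runs_by_requirement = ["REQ-1"] * 18 + ["REQ-2"] * 1 + ["REQ-3"] * 1

    counts = Counter(runs_by_requirement)
    total = sum(counts.values())
    for requirement, count in counts.most_common():
        print(f"{requirement}: {count / total:.0%} of all test runs")
    # A heavy skew toward one requirement is the signal to go talk to the team.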

And the last thing in the extra category is defect density. What defect density is doing is really taking something like the results per requirement, but you’re taking defects per some variable that sits outside of that. So it could be defects per requirement, just like results per requirement. It could be based off your application area. It could be based off a release. But what you’re doing is trying to see, okay, in application 2 I’ve got ten defects associated to it, whereas in application 3 I’ve got no defects associated to it, or really one defect associated there. So you’re trying to see: where are the risks in the areas of my application? Maybe something was released and you see there’s a huge density of defects on it; that application may need more attention. So track back defects not only by severity and priority, but where do they lie? What density are they really making up according to whatever variable you have?
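Here’s one way that calculation might look, assuming you normalize defects by requirement count per application area (both the data and the choice of denominator are illustrative, not a prescribed formula):

    # Illustrative counts per application area.
    defects_per_area = {"app-1": 2, "app-2": 10, "app-3": 1}
    requirements_per_area = {"app-1": 8, "app-2": 5, "app-3": 9}

    # Defect density = defects divided by whatever variable you choose to track against.
    for area in defects_per_area:
        density = defects_per_area[area] / requirements_per_area[area]
        print(f"{area}: {density:.2f} defects per requirement")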

And then the last thing is I’m gonna give you some pro tips. The first one revolves around looking at your manual versus automated testing. So if you’ve got manual testing and you’re ramping up to automated testing, you always want to make sure that you’re tracking, okay, what’s the quality of my automated testing versus my manual testing? And one of the things that we like to do is actually put them side by side. So if you have a manual test and you automate that test, then to compare manual versus automated, let’s see if the manual test discovers defects, or if the automated test discovers defects. Because what you’ll notice is that maybe we moved the manual test to an automated regression test too fast, and we possibly need to keep it as a manual test because we haven’t really ironed out the automation script so that it delivers the same value the manual test did. So if you take a manual test to automated, see where the value is in actually doing that.
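A side-by-side comparison could be as simple as the sketch below; the paired tests and defect counts are hypothetical and just illustrate the “did automation keep finding what the manual test found?” question:

    # Hypothetical pairs of a manual test and its automated counterpart.
    paired_tests = [
        {"name": "Checkout flow", "manual_defects": 3, "automated_defects": 0},
        {"name": "Login",         "manual_defects": 1, "automated_defects": 1},
    ]

    for pair in paired_tests:
        if pair["manual_defects"] > pair["automated_defects"]:
            print(f"{pair['name']}: manual still finds more, automation may have moved too fast")
        else:
            print(f"{pair['name']}: automated coverage looks comparable")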

The next thing is to look at days since the last test run. Everyone has this issue: even if you’re transitioning from tool to tool, you’ve got this huge library of test cases that have basically been archived and that you’re not really using at all. So look to see, okay, if I haven’t run this test case in the last ten years, or five years, or maybe even a year, is that really a useful test case? Can we get rid of these test cases? Can we make a transition and merge them with other test cases? They really have no value in your system. So look at test cases by latest test run to see if they’re obsolete, or whether we can maybe combine them somehow.
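A quick way to surface those stale test cases, assuming you can export a last-run date per test case (the one-year threshold here is just an example to tune):

    from datetime import date

    # Hypothetical export of test cases with their most recent execution date.
    test_cases = [
        {"id": "TC-101", "last_run": date(2015, 6, 14)},
        {"id": "TC-102", "last_run": date(2017, 3, 1)},
    ]

    threshold_days = 365  # example cutoff: not run in a year
    today = date.today()
    for tc in test_cases:
        idle = (today - tc["last_run"]).days
        if idle > threshold_days:
            print(f"{tc['id']} has not run in {idle} days: candidate to retire or merge")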

And the last thing you want to be aware of is what we call flapping. That is a little different from the test run summary report; it’s really looking at, okay, I’ve got this test case that’s been run, and looking at the consecutive times it’s run, what’s the variation it’s going through? If it’s going from failed to passed, that’s not really a big deal, or if it’s going from passed to failed. But if it’s going from passed to incomplete, to on hold, back to passed, then to failed, then to incomplete, then finally passed again, that’s an indication that something may be wrong either in the application that you’re testing, so maybe some refactoring needs to go on in the source code, or there’s something wrong with the test case and there’s maybe some user error involved with that test case. So pay attention to those variations in all those different results that are going up and down.

So that was today’s talk on measuring software quality metrics. We gave you some to really take hold of and implement in your business. Again, we’ve got a lot more to cover in our Metrics That Matter series, so stay tuned. We’ll see you next Whiteboard Friday.
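As a postscript to the flapping tip in the transcript, here’s a rough sketch that counts how often the status changes between consecutive runs of one test case (the threshold of four changes is arbitrary, and the history is made up):

    # Count status changes between consecutive runs of one test case.
    def flap_count(statuses):
        return sum(1 for prev, cur in zip(statuses, statuses[1:]) if prev != cur)

    history = ["passed", "incomplete", "on hold", "passed", "failed", "incomplete", "passed"]
    changes = flap_count(history)
    if changes >= 4:
        print(f"{changes} status changes: time to look at the test case or the code under test")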

Check out last season of Whiteboard Friday below.

View more QASymphony videos on YouTube. If you have any recommendations for future series, let us know on Twitter at @qasymphony.
