Getting Started with Test Metrics and Reporting – Whiteboard Friday
Welcome back to Whiteboard Friday. This is the last episode of the “Metrics that Matter” series. We have talked through a lot of areas – coverage, velocity, quality and more. Today, we are going to look at how to actually get started with test metrics, and the things you should specifically look at, like tools, people and focus.
We love feedback on these videos – so if you have any ideas for future series or ways to improve, let us know on Twitter at @qasymphony.
In this episode you will learn how to get started with test metrics.
Read the full transcript below:
Hey everyone. Welcome back to Whiteboard Friday. We are continuing our conversation on metrics that matter. Last week we talked about measuring the quality of the application you are testing with software quality metrics. Usually what happens after you get down to quality metrics is that you inevitably move on to talking about…okay, I’ve got all these metrics, but how covered am I? Just like the insurance plan you just bought, the question is always: what am I covered for? What areas of the application are these results, these tests, actually covering? How do I make sure the quality of what I am about to release into production is a good measure of the coverage in that area of the application? When we talk about application coverage in general, it usually falls into three buckets. The first is new feature-type coverage: I’ve got new features coming in, so how can I make sure these new features are properly tested and covered?
The second bucket revolves around application area coverage, and that really gets into: okay, when I move these tests into a regression cycle, or I have an area of the application that I am going to be testing and repeating on pretty heavily, what is my application coverage? Did I make sure I tested that correctly? If I have a point-of-sale system that is going to be talking to an ordering system, how do I make sure both those areas are properly covered? And the last bucket revolves around code coverage: what is the coverage of the actual source code I am writing? How can I make sure that all the source code being created is being properly tested? Again, we’ve got three different areas to talk about today. We’ve got the core reports, we have the extra reports, and we’ve got some pro tips, which I am really excited to talk about today around heat maps, so we will get to that towards the end. First of all, at a high level, what are the core areas you want to look at in terms of coverage? The first one really revolves around the coverage percentage I want to see for the requirements I am testing.
It is a simple formula: you take the requirements that are covered and divide by the total requirements. So if I’ve got nine covered requirements out of ten total, then I am at 90% test coverage. That is the basic idea: can I get a gauge of the percentage of coverage I have for these particular requirements I am testing against? The next report revolves around test cases per requirement. This is the breakdown lots of people want to see: okay, for this particular requirement, I know it is really important, so give me my breakdown. Give me what test cases are passing and what test cases are failing. It is also great, if you have this tracking in place, to see after the feature is pushed out into production whether you can trace a test case re-run all the way back to that requirement. Did the product owner completely change that particular requirement while you’re working on a new release? That could break the test cases associated with it. Those are two reports that are pretty cool when it comes to actual coverage reporting.
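As a rough sketch, the coverage formula described above can be computed in a few lines of Python. The function name and numbers here are made up for illustration; a test management tool would normally calculate this for you:

```python
def coverage_percent(covered_requirements: int, total_requirements: int) -> float:
    """Requirements coverage = covered requirements / total requirements."""
    if total_requirements == 0:
        return 0.0  # avoid dividing by zero when nothing is tracked yet
    return 100.0 * covered_requirements / total_requirements

# Nine covered requirements out of ten total, as in the example above
print(coverage_percent(9, 10))  # 90.0
```

The zero-total guard matters in practice: early in a release, before any requirements are logged, you want the report to show 0% rather than crash.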
The next one I want to talk about: we covered defect density in the last Whiteboard Friday, but we really need to talk about defect density as it relates to requirements. If I’ve got a report that shows me my defects per requirement…say I’ve got requirements A and B, and A has a count of 5 defects but B has a count of 28 defects, that tells me which requirements or user stories are actually risky in this release. When you see these types of reports, this should immediately signal to you: okay, I need to go to the product owner. I need to have a conversation with them to make sure that this requirement is properly documented, that it is something that can be tested, and that the test cases being created are actually going to satisfy a high-quality requirement being released into production. So whenever you see these counts go up, go talk to the product owner about it.
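To illustrate the idea, here is a minimal sketch of flagging risky requirements by defect count. The threshold and the counts are assumptions for the example, not anything prescribed in the episode:

```python
# Hypothetical defect counts per requirement, matching the A/B example above
defects_per_requirement = {"A": 5, "B": 28}

# Assumed cut-off: above this, go have that product-owner conversation
RISK_THRESHOLD = 10

risky = sorted(
    req for req, count in defects_per_requirement.items() if count > RISK_THRESHOLD
)
print(risky)  # ['B'] -- requirement B is the risky one in this release
```

In a real tool the counts would come from defect-to-requirement links rather than a hand-written dictionary, but the triage logic is the same.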
We also talk about requirements without coverage. We have talked about the percentage that is covered; in this example, we had 10% that were not covered. We want to make sure that the requirements without coverage are identified and followed up on, and we also want to see what state a requirement is in when it is found without coverage. For example, if requirement A is in a done status, that is very troubling to me, because it says: guys, this is done, I want to push it out to production. Others are incomplete, so I’ve got time to write test cases against those. There may not be a need to write formal test cases against every requirement. But it is also proper to ensure that if a requirement has been moved to a done state, maybe an exploratory test was done to satisfy it, so the test team can sign off and say: yeah, this requirement is looking good. Or maybe some of this testing was done by the BA, so a formal test case artifact may not be necessary. We also want to call it out just to make sure you know what areas are not covered.
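A sketch of that “requirements without coverage” check might look like the following; the requirement records, state names, and field names are invented for illustration:

```python
# Hypothetical requirement records: workflow state plus linked test case count
requirements = [
    {"name": "A", "state": "done", "test_cases": 0},
    {"name": "B", "state": "incomplete", "test_cases": 0},
    {"name": "C", "state": "done", "test_cases": 4},
]

# A requirement is uncovered when no test cases are linked to it
uncovered = [r for r in requirements if r["test_cases"] == 0]

for r in uncovered:
    if r["state"] == "done":
        # Troubling case: marked done with no coverage -- find out how it was tested
        print(r["name"], "-> done without coverage, investigate")
    else:
        print(r["name"], "-> still time to write test cases")
```

The key point from the transcript is the pairing: it is not just *which* requirements lack coverage, but *what state* they are in when you find them.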
The last area, in the pro tip section, is really about heat map visualization. These are the same percentages presented in a grid format. Previously we talked about showing pie charts and bar graphs, but heat map visualization gives a totally different look into your application. Each of these tiles can represent a requirement you are testing against. They can be application areas, or application areas with regression testing run against them. What it shows is the coverage in terms of: okay, are we looking okay? Do we have a neutral state here, meaning there are incomplete tests and a balance of passing and failing? Is there a critical area of our application where everything is failing? That gets back to making sure the director of QA has a good understanding and good confidence around whether we are covering the areas of the application. With this type of understanding: do we feel good about this, or do we not? Can we release something into production that has this type of visualization behind it? This goes beyond a simple pie chart or grid format. It really gives a different look, more of an image format, to help answer: okay, are we good to go for the release into production? It is also a great way to find out what the risky areas of your application are. This carries over into a new release as well: if you found a critical area of your application in release one, then since it was so critical there, you need to pay extra attention to it in release two to make sure it is not going to turn red like it did in the prior release.
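As a toy illustration of the heat-map idea, here is one way to bucket application areas into green, neutral, or red tiles. The area names, counts, and cut-off rates are all assumptions for the sketch, not thresholds from any particular tool:

```python
# Hypothetical pass/fail/incomplete counts per application area
areas = {
    "checkout":  {"passed": 40, "failed": 2,  "incomplete": 3},
    "ordering":  {"passed": 12, "failed": 11, "incomplete": 10},
    "reporting": {"passed": 1,  "failed": 20, "incomplete": 2},
}

def tile_color(counts: dict) -> str:
    """Map an area's results to a heat-map tile: green, neutral, or red."""
    total = sum(counts.values())
    fail_rate = counts["failed"] / total
    if fail_rate >= 0.5:
        return "red"      # critical: mostly failing
    if counts["incomplete"] > 0 and fail_rate > 0.1:
        return "neutral"  # incomplete work with a balance of pass and fail
    return "green"

for area, counts in areas.items():
    print(f"{area:10s} {tile_color(counts)}")
```

Rendered as colored tiles, that per-area bucketing is exactly the at-a-glance view the transcript describes: a director of QA can spot a red tile in release one and prioritize it in release two.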
That gives you some ideas around test case coverage and how to figure out how we cover areas of our application.
Next week we’ll be talking about velocity, so come back for Whiteboard Friday, where we’ll cover some velocity metrics that your team needs to be aware of.
Check out last season of Whiteboard Friday below.