Velocity Using Test Efficiency Metrics – Whiteboard Friday

Welcome back to Whiteboard Friday! In past episodes we talked about Quality and Coverage; today we are going to discuss Velocity using test efficiency metrics.

When people think about velocity, they usually think about how fast they are going or completing different activities. In the video below, we dive into some of the reports you should be looking at when analyzing velocity.

Watch the full video below:

Catch last week’s episode here — “Test Coverage Metrics”

In this episode you will learn about measuring velocity in testing:

  • Core
    • Requirements inflow rate
    • Test case creation rate
    • Test run rate (cases & steps)
    • % tests complete and blocked
    • Defects opened and closed
  • Extra
    • Avg. test case complexity
    • Testing time spent and remaining
  • Tip
    • Breakdown analytics by tester

Full Video Transcript Below:

Hey, everyone. Welcome back to Whiteboard Friday. We’re continuing with our series on Metrics that Matter. In the last two sessions, we talked about two important areas of metrics. One was quality, and the other was coverage. Today we’re gonna be talking about the last one, which is gonna be around velocity. Every time people think about velocity, they think about how fast we’re moving. How fast are we moving in a test sprint, test cycle, test suite?

How fast are we closing defects? How fast are we doing all these different activities, right? But we also wanna look at things that could be impeding that velocity as well. So we’re gonna talk about, again, some core reports that everyone should be taking a look at, look at some of those extra reports, and then give you some pro tips as well, covering some things you may not have thought of. So let’s get in and talk about some of those core reports.

The first one is a report that everyone really likes to know about, and it’s basically a burn-up chart, but in the testing world it’s usually referred to as a plan-versus-executed report. Everyone wants to know how they’re doing against what’s actually planned. You can see here this blue line shows the number of planned test cases that we have versus the number of tests that have actually occurred. So this tells me a couple of things. One is that we’re not meeting our planned executions, right?

We’ve got a lot of passing and failing and a lot of unexecuted right here. However, what you can also see in this graph is that if we had kept this line going, extending it off to the right here, we might have actually caught up and met that plan. So maybe you’re overplanning the tests you have. The other possibility is that you’re not overplanning, but you’re focusing on areas of the application that halted your progress toward that planned deadline.
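As a rough sketch of the plan-versus-executed comparison (all numbers here are hypothetical, not from the video), you can track the cumulative gap per period and check whether it keeps widening:

```python
# Hypothetical weekly cumulative counts of planned vs. executed test cases.
planned = [20, 45, 70, 110, 150]   # cumulative planned executions per week
executed = [18, 40, 55, 70, 90]    # cumulative actual executions per week

# Gap per week: positive numbers mean we are behind plan.
gaps = [p - e for p, e in zip(planned, executed)]

# A steadily growing gap suggests over-planning (or an impediment).
behind_plan = gaps[-1] > 0 and all(g2 >= g1 for g1, g2 in zip(gaps, gaps[1:]))
print(gaps, behind_plan)  # [2, 5, 15, 40, 60] True
```

A flat or shrinking gap would instead suggest the plan line just needed more runway, which is the "move the line to the right" observation above.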

So right here you can see that there’s a large spike up in the plan, and we didn’t actually match that in terms of execution. So looking at your plan versus executed will help you set up your next sprint or cycle without over- or under-planning. Next is looking at the inflow rate of the user stories you need to test, and also the test cases you’re creating, your charters, whatever you have. As requirements flow in, that rate should match the number of test cases you’re aligning with those particular requirements.

If I had test cases that were way down here and requirements that were shooting way up, that shows me that the inflow rate of requirements is not matching the tests I need to create to actually cover those requirements. So look at this to make sure you’re not overstaffing or understaffing the resources needed to cover those requirements.
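The inflow comparison above can be sketched with a simple per-sprint shortfall calculation. The coverage ratio and the counts here are purely illustrative assumptions:

```python
# Hypothetical per-sprint inflow: new requirements vs. test cases written.
requirements_in = [10, 12, 15, 20]
tests_created = [30, 35, 20, 10]

# Assumed (illustrative only): each requirement needs ~3 test cases to cover it.
TESTS_PER_REQUIREMENT = 3

# Positive shortfall means test creation is not keeping up with requirement
# inflow, a signal to re-check staffing for the coverage work.
shortfall = [r * TESTS_PER_REQUIREMENT - t
             for r, t in zip(requirements_in, tests_created)]
print(shortfall)  # [0, 1, 25, 50]
```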

The next one, and I’ll move over here, is looking at a closed-versus-open rate for your defects. What this means is that if we are opening defects and closing them at the same rate, then we’re doing well; we’re pretty much netting out. However, towards the end up here, you can actually see that the open and close lines deviate from each other. We’re opening more defects than we’re closing. So what’s going on here? We’re finding and opening more severe, critical, and maybe more complex defects towards the end of the cycle.
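That open-versus-closed trend can be tracked as a running net count of open defects (the daily figures below are hypothetical):

```python
from itertools import accumulate

# Hypothetical defects opened and closed per day of the cycle.
opened = [3, 4, 2, 5, 8, 9]
closed = [3, 4, 3, 4, 2, 1]

# Running net of open defects: a late upward swing means we are finding
# defects faster than development can close them.
net_open = list(accumulate(o - c for o, c in zip(opened, closed)))
print(net_open)  # [0, 0, -1, 0, 6, 14]
```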

Our development team simply did not have the time to fix them. So maybe we need to revamp our testing strategy to find the severe defects up front rather than at the very end. For some of those extra reports, one I like to look at is average complexity. What’s the complexity of our test cases, and how is it growing from release to release? If you look at release one and release two here, you can see that the high-complexity test cases stayed constant at 30% and 30%.

But when we went to release three, we moved a bunch of low-complexity test cases to the right, and now 70% of my test cases are high complexity. So what is that gonna do to the overall planned executions I’m tracking? Right? The complexity of your test cases, and the percentage of each complexity in your testing cycles, is gonna affect how much you’re actually able to get done. More complex usually means more time.
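To make "more complex usually means more time" concrete, here is a sketch that converts a complexity mix into expected execution effort. The minutes-per-case figures are assumptions for illustration, not from the video:

```python
# Complexity mix per release (% of test cases); release 3 shifts heavily
# toward high complexity, as in the whiteboard example.
mix = {
    "release 1": {"low": 40, "medium": 30, "high": 30},
    "release 2": {"low": 40, "medium": 30, "high": 30},
    "release 3": {"low": 10, "medium": 20, "high": 70},
}
# Assumed average minutes to manually execute one case at each complexity.
minutes_per_case = {"low": 5, "medium": 15, "high": 40}

# Expected minutes to execute 100 test cases drawn from each release's mix.
effort = {rel: sum(pct * minutes_per_case[c] for c, pct in m.items())
          for rel, m in mix.items()}
print(effort)  # release 3 costs ~70% more execution time than release 1
```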

Then there’s the charter bug detection rate, and this is a pretty cool pie chart. What this does is break down the test cases and the charters you have. For example, in this pie graph, my 75 test cases only yielded 20 bugs, while my 20 charters yielded 30 bugs.

So is our time better spent creating high-complexity, step-by-step test cases that are only gonna yield 20 bugs? Or should we spend more time on lower-overhead charters that could surface even more bugs in the application? This is a good breakdown of where your speed and time should be allocated: test cases versus charters.
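The trade-off above comes down to bugs found per artifact. A quick sketch using the numbers from the pie-chart example:

```python
# Counts from the pie-chart example in the transcript.
test_cases, tc_bugs = 75, 20   # scripted, step-by-step test cases
charters, ch_bugs = 20, 30     # lightweight exploratory charters

tc_yield = tc_bugs / test_cases   # bugs found per test case
ch_yield = ch_bugs / charters     # bugs found per charter

# Here charters yield far more bugs per artifact, suggesting some effort
# could shift from scripting toward exploratory charters.
print(round(tc_yield, 2), round(ch_yield, 2))  # 0.27 1.5
```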

Then some pro tips here: looking at analytics per tester. Again, this is not to create competition between testers but to create a healthy dialogue, to figure out who is actually testing what, what the results of that testing are, and to learn from each other. I’ve got three people here: Ryan, Ali, and Kyle. You can see that Ryan, that’s me, actually did the best on this chart, and I’ll show you why. What I found is that Ryan found five bugs, and four of them were SevOne bugs.

Ali executed 25 test cases, right? Kyle actually found 25 bugs, but only two of them were SevOne bugs. He executed about four times as many test cases as I did: he ran 100 whereas I ran 25. But I found four SevOne defects out of my five bugs, and he found only two out of his 25, so my severe-defect density is actually much better than his. Maybe 23 of his bugs were cosmetic, right? So am I doing better in terms of what I plan to execute and how I design my test cases? It’s good to talk to one another and share these results to help one another out, so that we’re all finding these SevOne defects first rather than at the end of a particular cycle.
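Using the Ryan and Kyle numbers from the transcript, the per-tester breakdown can be computed as two simple rates (the metric names here are my own labels, not from the video):

```python
# Per-tester numbers from the transcript example.
testers = {
    "Ryan": {"executed": 25, "bugs": 5, "sev1": 4},
    "Kyle": {"executed": 100, "bugs": 25, "sev1": 2},
}

for name, t in testers.items():
    density = t["bugs"] / t["executed"]   # bugs found per test case run
    sev1_rate = t["sev1"] / t["bugs"]     # share of bugs that are SevOne
    print(f"{name}: density={density:.2f}, sev1_rate={sev1_rate:.2f}")
# Ryan: density=0.20, sev1_rate=0.80
# Kyle: density=0.25, sev1_rate=0.08
```

Kyle finds slightly more bugs per execution, but 80% of Ryan’s bugs are severe versus 8% of Kyle’s, which is the "defect density" point being made above.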

Then the last one: I was looking at the manual overhead that could be impeding your velocity in executing test cases, and this is what I would do. Look at each test case you have: the average time it’s taking you to run it, the number of times you’ve executed it, and how many days it has stayed manual, that is, how long it’s gone without being moved to automation. Here it’s roughly a third of the year, and we’re still running this test case manually. Then look at what complexity it’s at.

With this data, if the complexity were low, we might say, okay, we can make this a candidate for automation right away. But since it’s high, that may tell us why it’s been 100 days as a manual test case, right? Maybe it’s too complex for us to automate. But even if it is high, look at the average time you’re spending and how many times you’re executing it, and ask, "Can we get rid of this manual overhead and move it to the automated side?"
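That triage logic can be sketched as a small rule. The thresholds and field names below are illustrative assumptions, not part of the video’s method:

```python
# Hypothetical manual-overhead record for one test case.
case = {
    "avg_run_minutes": 20,   # average time per manual run
    "times_executed": 15,    # runs so far this release
    "days_manual": 100,      # days it has stayed manual
    "complexity": "high",
}

def automation_candidate(c):
    """Classify a test case's automation priority.

    Low-complexity cases that keep getting run manually are easy wins;
    high-complexity ones warrant review before automating.
    (Thresholds are illustrative, not from the video.)
    """
    frequently_run = c["times_executed"] >= 10 and c["days_manual"] >= 90
    if not frequently_run:
        return "keep manual for now"
    return "automate now" if c["complexity"] == "low" else "review before automating"

print(automation_candidate(case))  # review before automating
```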

So those are some of the velocity reports that we’ve talked about today: core reports, extra reports, and some pro tips, looking at velocity statistics for speed, but also at things that could impede that velocity. So come back next time. We’ll have a lot more to talk about for Whiteboard Friday on Metrics that Matter. I hope to see you again.

Hey, everyone. Ryan Yackel here with QASymphony. Thanks for watching Whiteboard Friday. For more videos, visit or subscribe to our YouTube channel. Hope to see you soon.

Check out last season of Whiteboard Friday below.

View more QASymphony videos on YouTube. If you have any recommendations for future series, let us know on Twitter at @qasymphony.

