Why Do Software Testers Need Analytics and Visibility? – Whiteboard Friday

Whiteboard Friday is back!

Thank you for your support. We did not expect the first round of Whiteboard Friday to be as popular as it was, and we are extremely excited about all the feedback, which is why we are back for a season 2!

We have a new product announcement coming up at the end of September and in preparation for it I want to discuss software testing metrics.

So sit back.  Relax. And join me every Friday for season 2 of Whiteboard Friday – “Metrics that Matter”.

You can subscribe to the series over on the QASymphony YouTube page.

In this Episode You Will Learn why testers need analytics and visibility:

  • Measure Doneness — In an agile environment, most teams track the stages their issues or user stories progress through. When a story stalls in progress, testing is often the reason.
  • Resource allocation — Are there slower times when I don’t need as many manual testers on new releases?
  • Measuring performance, identifying biases — How do I measure up against my peers, and how can I help them? If I found 5 bugs this past week in X area of the application, what was I doing that others were not? What is the re-open rate of defects, and what are testers doing that affects it?
  • Beyond the check mark — Passing and failing, but what does that mean? It is also about seeing how much of your effort is NOT test execution. You have all this data around passed, failed, and incomplete, but that refers only to test execution. What about defect resolution, defect reporting, and log analysis? All these factors play into how much TIME you are spending overall, not just testing time.
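The metrics above can be made concrete with a quick back-of-the-envelope calculation. A minimal Python sketch, using entirely hypothetical sprint numbers (the counts and hour categories are illustrative, not from any real tool):

```python
# Hypothetical sprint numbers, for illustration only.
executed, passed = 120, 96
reported_defects, reopened_defects = 25, 5
hours = {"test execution": 18, "defect reporting": 6,
         "defect resolution support": 4, "log analysis": 2}

pass_rate = passed / executed                      # beyond the check mark
reopen_rate = reopened_defects / reported_defects  # quality of fixes and reports
testing_share = hours["test execution"] / sum(hours.values())  # 18 of 30 hours

print(f"pass rate:   {pass_rate:.0%}")     # 80%
print(f"reopen rate: {reopen_rate:.0%}")   # 20%
print(f"share of time on pure execution: {testing_share:.0%}")  # 60%
```

The last number is the "beyond the check mark" point: here, 40% of the tester's time went to activities that a pass/fail dashboard never shows.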

Transcription of the video below:

Hey everyone, welcome to Whiteboard Friday. Today we’ve got a brand new series we’re gonna be kicking off. It’s gonna be about metrics, reporting, quality assurance, all that rolled into what metrics we should be looking at and paying attention to as a software test organization, or even as software testers within your scrum, agile, or kanban teams.

So, one of the things that we’re going to be talking about, before we actually get into the analytics themselves, is why do we even need to care about metrics? And we have boiled it down to four things. Again, this is not exhaustive, but we’ve got four things we want to talk about for why you should actually care about metrics.

The first one refers to measuring doneness, and doneness is actually a real word; it’s usually used when you’re talking about cooking meat. You’re throwing a steak on the grill and you’re thinking, “okay, is it done yet?” It’s actually a word for the doneness of that steak. And a lot of developer and testing teams use this to measure how done they are when it comes to completing a sprint. So we’re talking about moving user stories through a kanban board or through a scrum board, and usually there’s testing that accompanies them. And how do we measure the doneness of that user story or of that task so we can actually push it out into production?

But it’s also getting beyond the doneness of that one particular user story. Meaning, you can attach a lot of test cases to a user story, right? You can say “yeah, this is done,” or that the doneness on this user story is all ready to go. However, there could be a lot of other testing you’re doing in that sprint that may not be directly tied to a user story. So we’re also measuring the doneness of all your load testing, your performance testing, maybe your regression testing that sits outside of that brand new feature-type testing. So it’s not just the brand new feature tests that you’re doing and getting listed as done; you’re measuring the doneness of all the testing that’s going on in that sprint or in that release.

The next one is resource allocation. Everyone has resources they’re testing with: actual people on your team, whether they’re automation engineers or manual test engineers. You’ve also got subject matter experts who are going to be helping out in your testing, so grabbing a business owner or a product owner and having them come in and do maybe some session-based testing or exploratory testing. It’s also about the allocation of certain testing environments: can we use a certain environment on a certain day or in a certain week for our testing purposes?

So, the metrics that we’re going to be pulling are going to be mainly around, “okay, can I go without using this resource here or that resource there?” But then it’s also wondering, “okay, well what can testers do when I’m not allocating them for testing in a certain period?” For example, when I was testing back in my consulting days there’d be periods where I’d be doing a lot of execution testing, either through automation or manually, but we’d have code block periods where I didn’t really get a whole lot of time to actually do any testing, so where could my time be spent elsewhere? It’s figuring out, one, where can you put people to start working on stuff? But it’s also, “okay, these people that aren’t working on something, where can I reallocate them to be doing other things for the business?”

The next thing really revolves around measuring your personal performance and also identifying certain biases. And this is kind of a weird thing, people usually don’t like this, but I actually liked taking tests when I was in college and in high school, because they could actually measure where I was in relation to other team members, or other peers, rather, in my classroom. I really did; I’d be salivating over the fact that I got to take a test over all the knowledge that I had. But I also got to measure that against people and see where I stacked up.

Measuring performance is good for helping teams see, okay, how am I comparing to other testers? Normally people think, “well, how many bugs did Johnny find in this application area compared to what I found in mine?” But it’s not about “okay, well Johnny found five bugs and I only found two, I need to get three more bugs than Johnny next sprint or next release.” It’s really, okay, Johnny found five, I found two, how can I learn from Johnny about how he found those bugs? It’s not just a metric to throw up there to say “oh yeah, you didn’t collect as many bugs as Johnny did,” because finding bugs is not the only testing activity you do. It may also be writing session charters or writing test scripts: how is someone doing something better than you that you can learn from to improve yourself? So metrics at both a team level and a personal level are going to help you measure those performances. They’re also going to better your personal career.

The next thing, and kind of hand in hand with that, is identifying personal biases. As you’re going through and measuring how you’re performing personally and within your own team, you’ve got certain biases that everyone needs to be aware of, and those biases can hinder you from providing more business value for the application that’s being developed or enhanced. An example of this is from when I was working at a retail company in their systems department: I really liked testing out the point-of-sale terminals, and I thought I was really good at testing them.

But what happened was I got so locked in that any time there was a new application area or a new feature that needed to be tested, I always jumped at it: yeah, I’ll be the main point on testing the point-of-sale system. Meanwhile I didn’t really have any experience testing out the buying process or the shipping process, everything leading up to that point-of-sale system. The whole point is that I had a bias to only want to test one area of the application, and with that bias I may have missed bugs or issues that were there, because I didn’t have another person come alongside me and test it out in maybe a different way than I would have. So identifying those personal biases is gonna be key, along with measuring those personal and team performances.

And then the last part is really getting beyond the checkmark, or the pass/fail, or the incomplete. Lots of testers like to highlight the fact that hey, we’ve got 90% passing or 80% passing, we’ve only got 20 more tests left in the sprint but 20 are failing and all these nice color-coding type things around what’s passing, what’s going, what’s on hold, and all that. While all that is good to show what’s actually going on in your testing bed or in your testing process, you want to be able to take these results, not just display them nicely but also have conversations about them.

So, getting back to measuring the doneness of a particular sprint, you want to be able to bring this data you have around what’s passing and what’s failing so that you can make a case for whether you should do more testing, or shouldn’t really test in this area anymore. People love to talk about, “okay, well, we’ve got 99% of regression passing.” Well that’s great, it’s been 99% passing for the past 5 years, so should we even care about that? Or should we only care about what hasn’t passed, or what hasn’t been completed, when it comes to certain areas we’ve been testing over and over again?

So it’s getting beyond the passing and failing and really trying to bring these metrics into more of a conversation: maybe there are areas we don’t need to test as much, and as we’re testing more and more, there are new opportunities for us to test. If one area of the application is super green and one is kind of light green, there are different ways to slice up the data so that you’re not always looking at what’s passing and what’s failing; you’re trying to figure out, okay, what value can this bring, what kind of conversations can I bring to the organization as a whole to bring business value? That’s what testers try to do; we’re not just trying to say “hey, look at all these nice pretty pie charts and graphs that we have.”
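One way to turn that "99% green for 5 years" observation into a concrete conversation starter is to flag areas whose regression runs have been fully green for several releases in a row. A minimal sketch, with made-up area names and pass-rate history (nothing here comes from a real tool):

```python
# Made-up pass-rate history per regression area (fraction passing per release).
history = {
    "checkout":  [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
    "shipping":  [0.9, 0.95, 1.0, 0.85, 0.9, 0.92],
    "inventory": [1.0, 1.0, 0.98, 1.0, 1.0, 1.0],
}

# Areas fully green for each of the last N releases: candidates to discuss
# trimming, so that effort can move to newer, riskier features.
N = 5
stable = [area for area, runs in history.items()
          if all(rate == 1.0 for rate in runs[-N:])]
print(stable)  # ['checkout']
```

The output itself isn’t the point; it’s the prompt for the conversation: "checkout regression hasn’t failed in five releases, should we still spend the same time on it?"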

So those are four things to look at on why you should care about metrics. In the coming weeks we’re going to be looking at the things that feed into these charts and bar graphs, the things that are actually going to add value to your team. What we’re also going to be talking about next week is, okay, now that we’ve got software testers excited about analytics and visibility, what are some cautionary things to look out for when it comes to analytics and reporting? We’ll be talking about that next time on Whiteboard Friday. Thanks for dropping by, and hope to see you next time.

Check out last season of Whiteboard Friday below.

Whiteboard Friday Video Series: Exploratory Testing

View more QASymphony videos on YouTube. If you have any recommendations for future series, let us know on Twitter at @qasymphony.
