Aligning Business Goals with QA Metrics: Your Most Frequently Asked Questions Answered

Earlier this month, QASymphony hosted a webinar with guest speaker Adam Satterfield, who shared a different approach to QA reporting that focuses on helping the business measure release readiness and assess overall application quality over time. During the webinar, Satterfield shared the three key areas around which QA leaders and testers should organize their reporting to demonstrate business value: the cost of a defect, efficiency gained from test automation and risk identification through coverage metrics. Then, QASymphony’s Director of Product Ryan Yackel demonstrated how to collect these metrics and develop executive-level reports using qTest Insights.

Watch the Webinar On Demand: Metrics Worth Measuring: Aligning Business Goals to Quality Metrics

After Satterfield’s presentation, a lively discussion ensued, but we couldn’t get to every question during the live event. To answer your most frequently asked questions, we caught up with Adam to put together the responses below.

1. Can you elaborate more on flakiness? Does the tool [qTest Insights] detect duplicate tests in the test suite?

Satterfield: Yes, the tool definitely does detect duplicate tests in the test suite.

In terms of flakiness, we have found that some automation teams spend as much as 40 to 60 percent of their eight-hour workday rewriting and maintaining existing tests instead of creating new tests for new functionality. Being able to understand at a glance how robust a test is and whether it follows a certain set of guideposts becomes very important.

Katalon has a set of guideposts on how to write your tests to ensure that the data you are accessing is good and to make sure you don’t have redundant or multiple branching paths that rely on other tests. Other resources include talks and podcasts by Angie Jones (whom we recently hosted on a webinar about Growing Your Testers to Grow Your Business) and Joe Colantonio.

2. How exactly do you measure coverage by stories or AC? Do you have an example?  

Satterfield: There are a couple of ways you can approach this. You can use something like SonarQube to understand your unit and integration coverage by code path. Alternatively, from an automation standpoint, you can look at hooking some of those results in to do something very similar.

When you think about tracking coverage by either your acceptance criteria or your user stories, it really depends on how granular you get as an organization with your acceptance criteria. If your acceptance criteria are essentially turning into your acceptance tests and you don’t really have an interstitial state between user story, acceptance criteria and acceptance test, you can just track it by the user story. So the user story represents a certain set of functionality. And typically what we would do is have some custom fields in Jira that say “this user story impacts this system or this application.”

And then we can essentially tie that back to whether or not we have a test for it, as well as whether that test actually interacts with that area of the application. Then we can assume a certain level of coverage. The information that the coverage piece is providing is simply letting you know probable risk and probable impact. It should be used as one input into the greater picture, not as proof that “we have 100 percent trust that we are tracking coverage based on our test cases with this information.” I don’t think you can really get that from the coverage piece alone. It is simply another avenue for additional information so you can understand probable risk and any potential areas your QA team may need to look at.
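To make the idea concrete, here is a minimal sketch of coverage-by-area reporting. It assumes a hypothetical “application area” custom field on each user story and a list of test cases that record which stories they exercise; it is an illustration of the concept, not qTest’s implementation.

```python
# A minimal sketch of coverage-by-area reporting, assuming each user story
# carries a hypothetical "application_area" custom field and each test case
# records which stories it exercises. Illustrative data, not qTest's model.
from collections import defaultdict

user_stories = [
    {"key": "STORY-101", "application_area": "Checkout"},
    {"key": "STORY-102", "application_area": "Checkout"},
    {"key": "STORY-103", "application_area": "Search"},
]

test_cases = [
    {"name": "test_checkout_happy_path", "covers": ["STORY-101"]},
    {"name": "test_search_filters", "covers": ["STORY-103"]},
]

# Which stories have at least one test that claims to exercise them?
covered_stories = {key for tc in test_cases for key in tc["covers"]}

area_totals = defaultdict(int)
area_covered = defaultdict(int)
for story in user_stories:
    area_totals[story["application_area"]] += 1
    if story["key"] in covered_stories:
        area_covered[story["application_area"]] += 1

for area, total in area_totals.items():
    pct = 100 * area_covered[area] / total
    print(f"{area}: {area_covered[area]}/{total} stories covered ({pct:.0f}%)")
```

The output is only a probable-risk signal per application area, in line with the caveat above: it shows where coverage looks thin, not a guarantee of what is actually tested.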

3. Early on you mentioned tracking defect time. Can you discuss more?  E.g., should there be an estimate to fix and to test the fix? Is there an actual “time spent” metric that should be captured?

Satterfield: Typically, with most teams, I like to combine the development and testing hours into the overall estimate. I’m not a huge fan of having different story points or hours for the developers and the testers. To me, it’s about how you meet the definition of done for that story. And whether you are treating defects or stories, it’s about the same thing – how do we get this done, and how much effort is the team putting in to get it done? So with that, one of the big things is that yes, actual time spent is where the money is for tracking defect time!
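As a rough illustration of how actual time spent feeds into the “cost of a defect” metric mentioned at the top of this post, here is a small sketch that combines development and testing hours per defect and converts them to a dollar figure. The field names and the blended hourly rate are assumptions for the example, not values pulled from Jira or qTest.

```python
# A minimal sketch of the "cost of a defect" idea: combine development and
# testing hours into one actual-time figure per defect, then translate it
# into a dollar cost. Hours and rate below are illustrative assumptions.
defects = [
    {"key": "BUG-42", "dev_hours": 6.0, "test_hours": 2.5},
    {"key": "BUG-57", "dev_hours": 1.5, "test_hours": 1.0},
]

BLENDED_HOURLY_RATE = 85  # assumed fully loaded team rate, in dollars

for d in defects:
    total_hours = d["dev_hours"] + d["test_hours"]
    print(f"{d['key']}: {total_hours:.1f} h actual, "
          f"~${total_hours * BLENDED_HOURLY_RATE:,.0f} to resolve")
```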

4. Is there any way to track code churn from an automation standpoint? How often automated cases get changed?

Satterfield: There are some ways you can do that depending on the framework that you use, but it is very framework-specific. So yes, it’s definitely possible and there are certain things you can put into your code base that will allow you to track that.

There are also certain tools you can use to track code churn. For instance, if you use something like BitBucket and you are checking your code in and out, and you can ensure that your team is following good code check-in and check-out procedures, you can certainly check the history of the check-ins and check-outs. But you have to make sure you treat your automation code as if it were production code. Treating it that way and using good check-in and check-out procedures is the easiest way to track your code churn.
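For teams whose automation code lives in a git repository (for example, one hosted on BitBucket), a rough way to see churn is simply to count how often each test file changes over a window of time. The sketch below assumes the tests sit under a tests/ directory and uses a 90-day window; both are illustrative choices, not part of any specific framework.

```python
# A minimal sketch of measuring churn on automation code from git history.
# Assumes the repository is checked out locally and tests live under tests/.
import subprocess
from collections import Counter

# List every file path touched by commits in the last 90 days.
log = subprocess.run(
    ["git", "log", "--since=90 days ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(
    path for path in log.splitlines()
    if path.startswith("tests/")  # count only automation test files
)

# The most frequently changed tests are the likeliest flakiness/maintenance hotspots.
for path, touches in churn.most_common(10):
    print(f"{touches:3d} changes  {path}")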

5. In agile scrum, the team owns quality, and thus, many companies who practice this framework do not have conventional QA managers or even conventional QA.  Everyone is considered a developer. In this case, multiple teams may work on the same product but may have different backlogs and projects. Even different definitions of “done.” Any advice on how to bring these teams together?

Satterfield: Really good question! This is where a lot of the culture shift comes in. What is the overall goal of the team itself? Is the team really driven to deliver a high level of quality? Whether they are working with multiple teams, or using a SAFe model or a scrum model to bring those teams together, this is where the culture shift happens and where the ability to fall back on some of the tester’s soft skills comes into play.

You don’t have to track these metrics, and you don’t have to force-feed metrics to the organization to make people track them. What you want to do is partner with the teams and show the value. A lot of times what we see is that organizations say they are tracking metrics simply because metrics are awesome. Instead, they should be saying, “We believe that by tracking these we are going to be able to provide information that is going to show value down the line.” If you can show a team the value of doing something new or different, a lot of times they are going to come along with you. But it really is about not doing something just to say we did it, but actually partnering and, as a team, wanting to grow. Because whether you are a developer or a tester, nobody wants to stay late or work on weekends to resolve production issues. By working, tracking and understanding how to grow from sprint to sprint, you can limit the “firefighting” that you often have to do.

6. When you talk about running automated tests in qTest, is this only Selenium or are there other ways to run automated tests (especially for Angular or React applications)?

Yackel: Yes, qTest Launch provides a universal agent that allows teams to connect any test automation framework for test automation scheduling and reporting.

Check out our blog post about the qTest universal agent to learn more.

7. What is a good way to show issues found in exploratory testing with qTest if you do not create new tickets for defects but rather fail the ticket that caused the issue?

Yackel: There are a couple of ways to do this.

First, you can leverage qTest Explorer’s ability to report all test opportunities you find during a test session. With Explorer’s session module, you are able to go back and review all exploratory test sessions in a central location.  From the session module, you also have the option to link an exploratory session with a new or existing bug from Jira Software.

Check out qTest Explorer’s capabilities here.

Second, you can use qTest Insights to report both on bugs that have associated test runs and on bugs that were NOT found through test runs. For example, if your exploratory session ended up failing a defect in Jira but you did NOT run a Test Run from qTest Manager, you would still be able to report on the failed bug.

Check out our latest blog post on how to report on bugs from Jira that do not have associated test runs for qTest Manager.

8. If the defect rate is high in each area of the application, how do I track high-risk areas?

Satterfield: High-risk areas can be tracked in a number of ways. First, you can identify them through the risk-based testing technique: identifying the probability and impact of a potential issue occurring in an area of functionality, and then applying that information, helps you identify your risk.

Second, you can use a tool like qTest to identify areas in the application that have low test case coverage. That tells you that you may have a low level of confidence that you are testing adequately in that area.

Lastly, you will want to set up application fields in Jira to ensure you are tracking issues and stories as they relate to application areas. Using JQL, you can then search for application areas and identify where most defects occur or where your user stories are most concentrated.
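As one possible illustration of that last step, the sketch below counts Jira bugs per application area through Jira’s REST search API. The “Application Area” custom field, project key, site URL and credentials are assumptions for the example; adjust the JQL to match your own setup.

```python
# A minimal sketch of counting defects per application area via Jira's
# REST search API. Field name, project key, URL and credentials are
# illustrative placeholders.
import requests

JIRA = "https://your-company.atlassian.net"
AUTH = ("qa-reporting@example.com", "api-token")

areas = ["Checkout", "Search", "Payments"]
for area in areas:
    jql = f'project = SHOP AND issuetype = Bug AND "Application Area" = "{area}"'
    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # only the total count is needed
        auth=AUTH,
    )
    resp.raise_for_status()
    print(f"{area}: {resp.json()['total']} defects")
```

Areas with both a high defect count and low test case coverage are the ones to flag as high risk for the next release.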

9. What do you think are the steps to take to automate test cases and link them to Jira if they are currently being maintained in BitBucket?

Satterfield: The best way to start is to ensure your automation tool can talk to Jira. Sometimes this is done via your test case tool. For example, qTest uses a webhook to integrate with Jira user stories, tasks and other custom issue types for a real-time view into test coverage.

Outside of that you can use Jira’s API functionality to connect user stories to automated test cases. It is important to remember to have some level of detail in your user story about the acceptance test for those who do not have access to the automation suite.
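As one hedged example of that API approach, the sketch below attaches a remote link from a Jira user story to the automated test’s source file in BitBucket using Jira’s remote-link endpoint. The issue key, repository URL and credentials are placeholders; this illustrates the general idea rather than the qTest webhook integration described above.

```python
# A minimal sketch of linking an automated test back to its Jira user story
# using Jira's remote-link REST endpoint. URLs, issue key and credentials
# are illustrative assumptions.
import requests

JIRA = "https://your-company.atlassian.net"
AUTH = ("qa-reporting@example.com", "api-token")

story_key = "STORY-101"
test_url = "https://bitbucket.org/your-team/automation/src/main/tests/test_checkout.py"

resp = requests.post(
    f"{JIRA}/rest/api/2/issue/{story_key}/remotelink",
    json={"object": {"url": test_url,
                     "title": "Automated acceptance test: checkout happy path"}},
    auth=AUTH,
)
resp.raise_for_status()
print("Linked", story_key, "to", test_url)
```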

Join us at Quality Jam London to hear more from Keynote Speaker Adam Satterfield!
