3 Ways to Measure Urgency in Software Testing

Software development projects have a natural cadence. In some cases, where the release schedule is relaxed, delivering information about a project every few days is acceptable. Other times, managers need data as soon as possible so that they can make a decision about allocating staff or releasing software. Projects often have delays that leave testers with no sense of urgency: waiting for a bug fix, for a build, for a decision, or for a requirement. Slipping into the assumption that “late is no big deal” can make a project even later.

Some tools do exist to help software testing managers measure that sense of urgency, including measurements, charts, and trend data. The measurements I am going to walk you through today are test team velocity, test coverage, and mean time to failure.

Team Velocity

Team velocity measures how fast a test team worked through a set of tasks. The easiest way to calculate velocity is to choose a base unit — test case, hour, or point — and calculate how much work is done in a given time period. That tells the throughput, or amount of work done in, say, a week. Dividing throughput by the total number of units yields the percentage of the work that is done. If a tester gets information from a test management tool at any point, it will show how far the team is into the planned work. If someone checks the tool at the end of each day, they can plot team velocity and get a feel for how fast or slow the team is going, as well as make predictions about when they might be done (assuming the team moves at a consistent pace). Graphing throughput over time creates a burnup chart; most spreadsheets can project the trend line to show whether the team is on track to deliver the work on time.
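The arithmetic above is simple enough to sketch in a few lines. This example uses test cases as the base unit; the daily counts and total are hypothetical numbers, not data from any real project:

```python
# Sketch of throughput and velocity math, using test cases as the base unit.
# The daily counts below are hypothetical.
completed_per_day = [12, 9, 15, 10, 14]  # test cases finished each workday
total_cases = 300                        # total planned test cases

throughput = sum(completed_per_day)            # work done this week
percent_done = throughput / total_cases * 100  # share of the plan complete

# Naive projection: assume the team keeps the same average daily pace.
avg_per_day = throughput / len(completed_per_day)
days_remaining = (total_cases - throughput) / avg_per_day

print(f"throughput: {throughput} cases/week")
print(f"{percent_done:.0f}% done, ~{days_remaining:.0f} working days left")
```

The same projection a spreadsheet trend line gives you, in other words, just made explicit.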

Velocity lines that stay flat for several days or, worse, start moving in the wrong direction, show that there is a problem. Graphs that look like this are a call to action. Where slogans and speeches fail, a simple trend line graph can often motivate a team to take action to bring the project back on track.
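The "flat for several days" check can even be automated so the call to action fires without anyone staring at the chart. A minimal sketch, assuming you track a list of cumulative completed counts (one entry per day; the numbers here are invented):

```python
# Flag a stalled burnup line: cumulative completed work that has not
# increased over the last few days. Counts are hypothetical.
def is_stalled(cumulative_done, window=3):
    """Return True if completed work has not grown over the last `window` days."""
    if len(cumulative_done) < window + 1:
        return False  # not enough history to judge
    recent = cumulative_done[-(window + 1):]
    return recent[-1] <= recent[0]

print(is_stalled([40, 55, 70, 70, 70, 69]))  # flat, then backward
print(is_stalled([40, 55, 70, 85, 100]))     # steadily rising
```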

Test Coverage

Test coverage is a way to measure what has been tested — and what remains untested. Here are a few of the ways to look at coverage on a test project:

  • The number of features in a software product that have been tested
  • How many of the defined user scenarios have been looked at
  • The percentage of lines of code that have been executed during a test
  • Variable coverage, which measures every place a person can enter data into the user interface
  • Configuration coverage, which measures how many of the product configurations have been tested

Keeping a close eye on test coverage can create urgency in two ways. First, coverage helps a test manager to understand risk by comparing what is currently untested with the amount of time remaining before a release. This can help that manager make decisions about how to prioritize the remaining work and focus coverage on specific areas of the product. Second, measuring test coverage can help expose the areas of the product that may have been overlooked. If a manager discovers that an important part of the product is not in the coverage map, a team can revise their plan so that part of the product will get tested.
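At the feature level, a coverage map can be as simple as two sets: what was planned and what has been tested. A sketch with made-up feature names:

```python
# Feature-level coverage map; feature names are hypothetical.
planned_features = {"login", "search", "checkout", "reports", "export"}
tested_features = {"login", "search", "checkout"}

untested = planned_features - tested_features  # the overlooked areas
coverage_pct = len(tested_features) / len(planned_features) * 100

print(f"coverage: {coverage_pct:.0f}%")
print(f"untested: {sorted(untested)}")
```

The `untested` set is the second use described above: anything important that shows up there means the plan needs revising.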


Mean Time to Failure (MTTF)

While coverage and velocity measure project performance, MTTF measures the software in operational use. MTTF is the average amount of time the software works correctly between failures – errors that make it impossible for a user to perform a key action. Failures can be found by looking for the keywords “error” or “exception” in the application log files, by tracking system downtime, or by testing in production. Agreeing on what a failure means within a development group is an important first step. The smaller the MTTF, the bigger the problem is for the software team.
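Once a team agrees on its failure definition, the calculation itself is just the average gap between consecutive failure timestamps. A sketch with invented timestamps, as if pulled from an application log:

```python
# MTTF as the average uptime between consecutive failures.
# Timestamps are hypothetical, as if parsed from an application log.
from datetime import datetime

failures = [
    datetime(2016, 5, 1, 9, 0),
    datetime(2016, 5, 3, 9, 0),   # 48 hours after the first failure
    datetime(2016, 5, 4, 21, 0),  # 36 hours after the second
]

gaps_hours = [(b - a).total_seconds() / 3600
              for a, b in zip(failures, failures[1:])]
mttf_hours = sum(gaps_hours) / len(gaps_hours)

print(f"MTTF: {mttf_hours:.0f} hours")
```

Plotting this number release over release is what makes the shrinking trend described below visible.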

A low MTTF can point to an urgent need for better development practices, and better testing around the parts of the product that are failing. A test manager seeing MTTF shrink over the course of a few releases knows that they need to start an investigation to find the source of these problems. Likewise, a high MTTF could mean the software is ready to ship without further testing.

Discovering Urgency

Everything seems to be urgent in software testing. Just one bug slipping into production can have severe consequences for a software company. Managers who diligently track coverage, velocity, and mean time to failure can publish those measures and discuss them to create a sense of urgency among the technical staff. Each of these methods can quickly uncover a problem or productivity dip that might otherwise be invisible at first glance.
