QASymphony / Blog / Unlogged Bugs – A Detriment to your Quality Metrics
Unlogged Bugs – A Detriment to your Quality Metrics
As Agile teams face increasing time pressure, reporting bugs properly can start to feel like an unnecessary chore that can be ignored. Many testers take the close collaboration and co-location of Agile teams as an invitation to skip formal bug logging altogether.
Let’s take a look at two common methods for resolving a potential software issue:
In the first, the tester files a bug, which a supervisor reviews the next day to decide whether it should be fixed. The supervisor assigns it to the programmer, who objects, claiming the software is working as expected. The supervisor brings in the tester, who has to explain the bug and persuade the programmer to fix it. The actual fix takes an hour, and the programmer marks it as fixed; the tester verifies the fix the next day. The actual effort is one hour, but the elapsed time can be one to several work days – not good for agile and quick releases.
In the second, the tester could turn around and say, "Hey Samantha, this looks odd – could you help me understand it?" The bug might be fixed and tested at the programmer's machine within an hour or so – great for agile and quick releases, but how do you track important defect-related metrics?
As appealing as this second option sounds, it also robs the team of data. The quality of the software coming out is the same, but management and the team lose sight of churn and rework: the number of features that need rework before they can go to production, the amount of time that rework takes, and the depth of those problems are all lost in the shuffle.
To fix this, we need to talk about how we got here, and what other measures to use instead. Let me walk you through it below:
How We Got Here
Fifteen years ago most teams developed software in phases. You wrote the code (and the bugs), then took enough bugs out during the testing and fixing phase to ship the code to production.
Then Agile Software Development changed the game. All of a sudden, testing was an activity that could be performed as soon as the code was compiled. The very word "team" changed, from a group of specialists, such as "the test team" or "the analyst team," to everyone delivering a specific piece of functionality. Cubes, which had a lot of negative attributes that were hard to measure but were at least cheap, gave way to open-plan offices that are even cheaper.
Now that teams were organized around stories, they typically had two types of testing: new feature testing, and release candidate testing, which might include re-checking functionality for regressions. New feature testing is where bug reports become integrated into the development work or, perhaps, do not get recorded at all.
First off, let’s get real. Most organizations don’t mine their bug trackers for insights into how to improve performance. Many don’t even have the tools to determine test velocity – let alone try to accelerate the team. That’s why we created qTest Insights, to provide a visual display of testing activity that you can act on. But what do we do when the bugs are not recorded in the system?
We suggest tracking rework, and finding ways to reduce it.
A Closer Look At Rework
Most agile tracking tools have a concept of a swim lane, where a feature or story moves from analysis to coding and feature test. Moving a feature to the left on the board (or swim lane) indicates a problem: the code was buggy or the requirements were unclear. This is rework, new work that has to occur because the feature did not do its job properly the first time.
That’s really what we are getting at with bugs at the feature level. How much extra work do we have to do because the feature didn’t work right the first time through? Time spent fixing is time that could be spent moving forward.
Zero percent rework is unrealistic, but most teams we work with can see a ten to fifteen percent improvement in velocity just by cutting rework in half.
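A quick back-of-the-envelope check shows where that figure comes from. Assuming, for illustration, that rework consumes 20 to 30 percent of team time (the shares below are made-up examples, not measured data), halving it frees 10 to 15 percent of capacity for forward work:

```python
# Illustrative arithmetic only: if rework eats a given share of team time,
# cutting it in half frees half of that share for new feature work.
for rework_share in (0.20, 0.30):
    freed = rework_share / 2
    print(f"rework at {rework_share:.0%} of time -> halving it frees {freed:.0%} of capacity")
```

That freed capacity shows up directly as velocity, which is why a modest reduction in rework moves the needle so much.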
That is the kind of information teams should be pulling out of bug tracking tools:
What bugs are being caught?
How can we prevent them, and if we do, will it make a difference?
Configure the tracking tool to generate this data – specifically, the percentage of features (or “stories”) that get “kicked back” from test to development. That is the rework percentage.
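If your tool won't report this directly, the calculation itself is simple. Here is a minimal sketch, assuming you can export each story's lane history as an ordered list; the lane names, story IDs, and data structure are hypothetical, not from any particular tracking tool:

```python
# Hypothetical sketch: computing the rework percentage from board history.
# A story counts as rework if it ever moved left (back) on the board.
LANES = ["analysis", "coding", "feature test", "done"]

def was_kicked_back(history):
    """Return True if the story ever moved to an earlier lane."""
    indices = [LANES.index(lane) for lane in history]
    return any(later < earlier for earlier, later in zip(indices, indices[1:]))

# Made-up example data: each story's lanes in the order visited.
stories = {
    "PAY-101": ["analysis", "coding", "feature test", "done"],
    "PAY-102": ["analysis", "coding", "feature test", "coding",
                "feature test", "done"],   # kicked back to development once
    "PAY-103": ["analysis", "coding", "feature test", "done"],
    "PAY-104": ["analysis", "coding", "feature test", "coding", "done"],
}

kicked_back = sum(was_kicked_back(h) for h in stories.values())
rework_pct = 100 * kicked_back / len(stories)
print(f"Rework: {kicked_back} of {len(stories)} stories ({rework_pct:.0f}%)")
```

In this toy data set, two of four stories were kicked back, a rework percentage of 50 percent.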
Once you have that data, look for rework time, which is the number of hours a week an average team member spends fixing, waiting for a new build, and re-testing. In many cases, it is twenty percent or more of the time.
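The same back-of-the-envelope approach works for rework time. A minimal sketch, with made-up hour figures standing in for a real timesheet export:

```python
# Hypothetical sketch: estimating rework time as a share of the work week.
# The hour figures are illustrative examples, not measured data.
HOURS_PER_WEEK = 40

rework_hours = {
    "fixing": 4.0,             # re-opening and fixing kicked-back stories
    "waiting for build": 2.0,  # idle time waiting for a new build to test
    "re-testing": 3.0,         # re-running checks on the fixed feature
}

total_rework = sum(rework_hours.values())
rework_time_pct = 100 * total_rework / HOURS_PER_WEEK
print(f"Rework time: {total_rework:.0f}h/week ({rework_time_pct:.1f}% of the week)")
```

Even these modest-looking numbers add up to 22.5 percent of the week, right in the "twenty percent or more" range mentioned above.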
If rework time is significant, then it's time to enforce a strict defect tracking process for a long enough period to get a feel for why these bugs are occurring. Even if the team only tracks defects for a couple of sprints, that can provide enough information for a leader to look at the data, conduct a retrospective, and determine some actions to take to prevent those categories of defects.
Once you have the data, track it for another sprint or two. If rework time goes down to something manageable, stop tracking until it becomes a problem again.
The data above is "catch and release" data, designed to figure out what is happening, held as long as it is useful, then thrown away. We still recommend some longer-term, dashboard-style metrics to help you understand team performance. Those might include high-level metrics about where team members spend time (writing tests vs. executing tests), test coverage, flakiness of automated tests, and other critical metrics that are not good "catch and release" candidates because they need to be constantly monitored.