Have you ever tried to hit a target while blindfolded? You could be duplicating that experience when you rely on test automation without removing the blind spots that cause you to send automated testing off course. There are more than a few gaps in test automation visibility and understanding that can affect the quality, coverage, and certainty of your testing. Dive into these seven test automation blind spots and surface with 20/20 vision.
1. Technical debt
If developers are re-working backend systems that power the application you are testing, the changes could affect the user experience in unanticipated ways. Developers often refer to the re-work they are doing as technical debt, but reconfiguring backend systems can also cause technical debt for test automation — especially if testers are not aware of the changes developers are making.
For example, if you have prebuilt scripts running regression tests against the user interface, backend changes could break those scripts. If the testing team isn’t aware of the changes, their automated regression coverage will immediately start to lag.
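To illustrate, here is a minimal, hypothetical sketch (the payload fields and `greeting` function are invented for this example) of how a backend refactor can break a regression script even though the user-facing behavior is unchanged:

```python
# Hypothetical example: a regression script pinned to the old backend
# payload shape. A technical-debt refactor renames "user_name" to
# "username"; the UX is identical, but a hard-coded lookup would break.

OLD_PAYLOAD = {"user_name": "ada", "active": True}
NEW_PAYLOAD = {"username": "ada", "active": True}  # shape after the refactor

def greeting(payload):
    # A brittle script would do payload["user_name"] and raise KeyError
    # after the refactor. A tolerant lookup survives both shapes.
    name = payload.get("user_name") or payload.get("username")
    return f"Hello, {name}!"

print(greeting(OLD_PAYLOAD))  # Hello, ada!
print(greeting(NEW_PAYLOAD))  # Hello, ada!
```

Even so, defensive lookups are a stopgap; the real fix is communication, so testers who know about the rename can update the script deliberately instead of discovering the break in a failed run.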
If fixing that technical debt won’t affect functionality, you need to communicate that, too. If you leave testers in the dark when developers make changes, you will have needlessly created a blind spot for them.
Instead, when developers plan to address technical debt, they should make sure the sprint teams know, so they can adjust regression test automation as needed and repair all bugs during each sprint. As a preventive measure, the scrum master should start each sprint by asking whether there is any technical debt the team plans to work on that is not currently included in the sprint scope. This helps the team address blind spots related to technical debt before they arise.
2. Mobile test automation
There are at least a few quality blind spots in mobile test automation. Open-source tools such as Appium aren’t always up-to-date with new versions of mobile OSes. And outdated tools won’t appropriately test the updated elements of the new software.
When compared with their web-focused counterparts such as Selenium, mobile test automation tools such as Appium have more limited capabilities, which restricts their visibility into the software.
Mobile test automation alone won’t give you a complete picture of software bugs; you need to add traditional and exploratory testing to the mix to eliminate the blind spots. Using a test case management tool such as qTest Manager to track tests and confirm test quality can help.
3 & 4. Code coverage analysis and manual testing
Code coverage can only tell you how much of your code the tests executed, not which scenarios they exercised. Since you will never reach 100 percent code coverage, you can’t be sure you have test automation coverage for every use case that leads to a given line or branch of code.
To limit blind spots, don’t expect complete code coverage. Typically, you may be able to achieve up to about 80 percent code coverage, sometimes better. You’ll have to use manual testing to address the rest.
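Here is a small illustration (the `discount` function and its rates are invented for this sketch) of why even 100 percent line coverage still leaves use-case blind spots:

```python
# The two tests below execute every line of `discount` (100% line
# coverage), yet neither exercises the path where both branches fire --
# and that combined path hides a bug.

def discount(total, is_member):
    pct = 0
    if is_member:
        pct += 10  # members get 10% off
    if total > 100:
        pct = 5    # BUG: '=' overwrites the member rate; should be '+='
    return total * (100 - pct) // 100

# Full line coverage, and both tests pass:
assert discount(50, True) == 45     # member path only
assert discount(200, False) == 190  # bulk path only

# The untested combined path: a member with a large order should get
# 15% off (170), but the bug yields 190 -- and no test catches it.
print(discount(200, True))  # 190
```

Line coverage says this function is fully tested; only a test derived from the use case itself (a member placing a large order) would expose the defect.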
While test automation works well for regression testing and to ensure that your code is working, it is still essential to have manual testers who can focus on business functional testing, cover the edge cases, and test in instances where automated testing falls short.
Take exploratory testing, for example. If you don’t have manual testers navigating the application the way a user would, you’ll miss areas a user might find frustrating, along with opportunities to build empathy with your users and, ultimately, to build a better product. Because a tester’s job involves replicating the experience of a user, testers arguably have the best perspective on a user’s wants, needs, and frustrations. Gaining that understanding can make or break the success of your product, so that’s a blind spot you’ll want to avoid at all costs.
5. Security testing
Automated security testing can’t test for all security vulnerabilities. If you rely entirely on automated tests, you’ll create blind spots. With the amount of code you must check, the constant code updates, and the number of vulnerabilities in most software, there is no way to find and address every one. Most development shops are doing well to find and fix half of their coding flaws.
For visibility, keep a record of all your security tests in the same place where you keep your automated and manual test results. Add outsourced penetration testing to get better testing for security vulnerabilities so you can patch them. And don’t forget that security testing is a shared responsibility.
6. Bug counts as a KPI
The number of bugs you find in preproduction and in production is not in itself a great metric; it doesn’t demonstrate much value, and it leaves blind spots in leaders’ perception of the value QA delivers. If you were going to choose a single KPI, a better one would be the cost to fix each bug. The idea is to keep expensive bugs from making it into production.
Make it a team effort to use test automation to find as many expensive bugs as you can in preproduction, as early as possible. Then use manual testing to find the bugs that automation won’t always catch, and track the cost savings associated with those, too.
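As a sketch of how that KPI might be tracked (the per-phase costs below are illustrative assumptions, not industry figures):

```python
# Illustrative cost-to-fix figures per phase; real numbers would come
# from your own defect and remediation data.
PHASE_COST = {"unit": 50, "integration": 250, "staging": 1000, "production": 5000}

def fix_cost(bugs_by_phase):
    """Total cost to fix bugs, given a {phase: bug_count} mapping."""
    return sum(PHASE_COST[phase] * count for phase, count in bugs_by_phase.items())

# Before shift-left: most bugs surface late.
before = {"staging": 4, "production": 6}                 # 4*1000 + 6*5000 = 34000
# After: automation and manual testing catch most bugs earlier.
after = {"unit": 6, "integration": 3, "production": 1}   # 300 + 750 + 5000 = 6050

savings = fix_cost(before) - fix_cost(after)
print(savings)  # 27950
```

Reporting the savings figure, rather than a raw bug count, ties QA’s work directly to cost avoided.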
7. Errors in test scripts
Error-ridden test scripts will miss the coding errors you are trying to target. You need to develop, test, and code-review your test scripts just as you do the code that ultimately goes into production.
Broadening your gaze
Now that you know about the biggest test automation blind spots, you can take your test automation strategy to the next level. For more information, read our article, “Getting Beyond UI Test Automation.”