A new test estimation technique you can test yourself
My company QASymphony has been working for some time on a new test estimation technique designed to give an accurate figure for the effort that will be expended in the next test execution cycle. We currently call it qEstimation.
What unit should be used for that figure? In agile development person-minutes is usually best; in large sequential-lifecycle projects the number of digits becomes unwieldy, so person-hours or even person-days is more convenient. The technique works in exactly the same way in any lifecycle.
We intend the technique to be simple to apply. Even more importantly, we want how it works to be easily understandable to non-testers. Our motivation is to enable testers to make those responsible for allocating them understand why a specified amount of time and number of people are needed to deliver adequate testing. In this article I will explain how the technique works.
First cut is the deepest
Like most if not all test estimation techniques, qEstimation generates a first-cut estimate and then refines it as new information becomes available. The accuracy of the first cut matters: if it is wrong by too much, it lengthens the time before the current estimate becomes accurate. Refining it in the light of the actual effort expended in the first cycle will inevitably produce an estimate roughly halfway between the two figures, and the likelihood of that being anywhere near right is small. The current estimate may bounce for several risky and expensive cycles before it settles. If, on the other hand, the two figures start out close together, the truth is zeroed in on faster.
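The refinement dynamic can be sketched in a few lines of Python. This assumes, purely for illustration, that each refined estimate is the midpoint of the previous estimate and the actual effort observed, and that the true effort stays constant across cycles; the function name and the numbers are mine, not part of qEstimation.

```python
# Illustrative sketch of first-cut refinement, assuming the refined
# estimate is the midpoint of the previous estimate and the actual effort.

def refine(previous_estimate, actual_effort):
    """Return the next estimate: halfway between the old one and reality."""
    return (previous_estimate + actual_effort) / 2

true_effort = 100.0   # hypothetical actual effort per cycle (person-hours)
estimate = 40.0       # a poor first cut

for cycle in range(1, 5):
    estimate = refine(estimate, true_effort)
    print(f"cycle {cycle}: estimate = {estimate:.1f}")
```

A first cut 60 person-hours off still needs several cycles before the estimate comes close to the truth, which is why the accuracy of the first cut matters.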
In qEstimation the first-cut estimate is computed from the “testing size”, which is in turn computed from four intrinsic, easily quantifiable properties of a test: checkpoints (the number of times it requires actual and expected outcomes to be compared), precondition (the difficulty of reaching the execution preconditions for the test: one of four defined levels), data (the difficulty of preparing the test data needed: also one of four levels) and type (one of ten defined).
Veracity is a function of velocity
The testing size of a test is expressed as its intrinsic number of test case points. A test case point is the testing size of the simplest of all test cases, as described in Test Case Point Analysis (N. Patel et al., 2001: white paper, Cognizant Technology Solutions). However, the precise meaning of a TCP is not important to qEstimation. What matters is its accuracy as an integer proportional to the testing effort required, because that lets the number of TCPs covered in actual testing be counted: it is simply the sum of the TCPs of the completed tests.
Thus when the first cycle is completed, or even before, the test velocity so far can be calculated: it is the number of TCPs covered divided by the person-hours expended. That gives us another simple formula for the effort required for the next cycle: the sum of the TCPs of the planned tests divided by the current test velocity.
Prove it yourself
The algorithm I have described is fully implemented as an Excel workbook you can get free here. If you are a tester working in an agile or otherwise nonsequential lifecycle, you can quickly enter the simple data it requires about your next test cycle (sprint, iteration, increment etc). If you are in a sequential lifecycle it will take longer, but I still hope you will give it a try: perhaps you can find a way to apply it to, and measure the effort required for, just some of your tests. Please let me know whether it proves accurate.