
Test Case Point Analysis

Quality Assurance Management is an essential component of the Software Development Lifecycle. To ensure the quality, applicability, and usefulness of a product, development teams must spend considerable time and resources on testing, which makes estimating software testing effort a critical activity. This white paper proposes an approach, Test Case Point Analysis, for estimating the size and effort of software testing work. The approach measures the size of a software test case based on its checkpoints, preconditions, and test data, as well as the type of testing. The testing effort is then computed from the Test Case Point count of the testing activities.


INTRODUCTION

Software testing is an important activity in software development and maintenance projects, ensuring the quality, applicability, and usefulness of software products. In fact, no software can be released without a reasonable amount of testing. To achieve acceptable quality, software project teams devote a substantial portion of the total development effort to testing. According to industry reports, software testing consumes about 10-25% of the total project effort, and on some projects this figure may reach 50%.

Thus, one factor in ensuring that development teams can meet release dates and allocate the right resources is the ability to provide accurate size and effort estimates for testing activities. Size and effort estimates are key inputs for investment decisions, planning, scheduling, and project bidding, and for computing other metrics such as productivity and defect density used in project monitoring and control. If the estimates are not realistic, these activities rely on incorrect information, with potentially disastrous consequences.


Unfortunately, providing reliable size and effort estimates for software testing is challenging. One reason is that software testing is a human-intensive activity affected by a large number of factors, such as personnel capabilities, software requirements, processes, environments, communications, and technologies. These factors cause high variance in productivity across projects with different characteristics. Another reason is that software quality attributes such as functionality, reliability, usability, efficiency, and scalability are hard to quantify objectively. Moreover, even within a single project there are many different types of software testing and testing contexts. Because of this diversity, an approach that works well for one type of test in one environment may not work when applied to another type of test in another context.

Although many approaches are available to estimate overall software development size and effort, few exist to estimate software testing size and effort. Often, testing effort is estimated as a part of the overall software development effort, and the overall software size is used to predict the effort for testing. This approach, however, cannot take into account specific testing requirements, such as intensive testing or the various platforms, devices, and browsers required by clients, so the testing effort can be highly under- or over-estimated. Another limitation is that this approach cannot be applied to estimating the effort of a test-only project, in which the software to be tested exists but the testing team does not have access to the source code or requirements needed to measure the overall software size in Source Lines of Code (SLOC) or Function Points. Finally, software size in Function Points or SLOC, the two most popular size metrics, may not be an accurate indicator of the amount of testing the testing team must perform. Because the productivity of software testing is computed as the amount of testing performed divided by the effort spent on testing, it may not be accurately measured with these size metrics.
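To make this last point concrete, the minimal sketch below contrasts productivity computed from overall software size with productivity computed from a test-specific size measure. All of the numbers, including the Test Case Point counts, are assumed purely for illustration and are not figures from the whitepaper.

```python
# Illustrative sketch only: the sizes, hours, and Test Case Point figures
# below are assumed for demonstration, not taken from the whitepaper.

def productivity(size: float, effort_hours: float) -> float:
    """Productivity = amount of work delivered / effort spent."""
    return size / effort_hours

# Two projects with identical overall software size (say 100 Function Points),
# where project B must also be tested on three browsers, tripling its test work.
overall_size_fp = 100
effort_a_hours, effort_b_hours = 200, 600      # testing effort actually spent
test_size_a_tcp, test_size_b_tcp = 400, 1200   # testing size in Test Case Points

# Productivity based on overall software size diverges (0.50 vs 0.17 FP/hour)
# even though both teams test at the same rate, while the test-specific size
# measure keeps the two projects comparable (2.0 TCP/hour each).
print(productivity(overall_size_fp, effort_a_hours),
      productivity(overall_size_fp, effort_b_hours))
print(productivity(test_size_a_tcp, effort_a_hours),
      productivity(test_size_b_tcp, effort_b_hours))
```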

To address this issue, this white paper presents an approach to estimating the size and effort of software testing. The sizing method is called Test Case Point Analysis. As the name indicates, the method measures the size of the test case – the core artifact that testers create and use when performing software test execution. The size of a test case is evaluated using four elements of test case complexity: checkpoints, preconditions, test data, and the type of the test case. This article also describes simple approaches to determining testing effort using Test Case Point Analysis.
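As a rough illustration of how such a sizing scheme can be mechanized, the sketch below assigns placeholder weights to the four complexity elements and divides the resulting Test Case Point count by a productivity figure to obtain effort. The weights, type factors, and productivity value are assumptions for demonstration only; the actual values and adjustment rules are defined in the full whitepaper.

```python
# Minimal sketch of Test Case Point sizing and effort estimation.
# The weights, type factors, and productivity figure below are placeholders
# for illustration; they are not the values defined in the full whitepaper.
from dataclasses import dataclass

@dataclass
class TestCase:
    checkpoints: int        # verification points in the test case
    preconditions: int      # conditions that must hold before execution
    test_data_items: int    # distinct data values/records the case needs
    test_type: str          # e.g. "functional", "usability", "security"

# Assumed relative weights per complexity element and per testing type.
ELEMENT_WEIGHTS = {"checkpoints": 1.0, "preconditions": 0.5, "test_data_items": 0.5}
TYPE_FACTORS = {"functional": 1.0, "usability": 1.1, "security": 1.3}

def test_case_points(tc: TestCase) -> float:
    """Size of one test case from its checkpoints, preconditions, data, and type."""
    base = (tc.checkpoints * ELEMENT_WEIGHTS["checkpoints"]
            + tc.preconditions * ELEMENT_WEIGHTS["preconditions"]
            + tc.test_data_items * ELEMENT_WEIGHTS["test_data_items"])
    return base * TYPE_FACTORS.get(tc.test_type, 1.0)

def testing_effort(test_cases: list[TestCase], points_per_hour: float) -> float:
    """Effort (hours) = total Test Case Point count / testing productivity."""
    total_tcp = sum(test_case_points(tc) for tc in test_cases)
    return total_tcp / points_per_hour

suite = [TestCase(5, 2, 3, "functional"), TestCase(8, 1, 6, "security")]
print(testing_effort(suite, points_per_hour=2.0))  # estimated hours for the suite
```

In practice, the productivity figure would be calibrated from historical data for comparable testing work rather than chosen arbitrarily.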

Theoretically, Test Case Point Analysis can be applied to measuring the size of any kind of software testing, given test cases as input. However, this method was developed specifically to estimate the size of system, integration, and acceptance testing activities performed manually by independent verification and validation (IV&V) teams. Unit testing, automation testing, and performance testing that rely on test scripts are beyond the scope of this method…

Download full whitepaper

