Software Test Metrics

When used properly, test metrics can aid in software development process improvement by providing pragmatic, objective evidence of process change initiatives.


April 26, 2007
URL:http://www.drdobbs.com/tools/software-test-metrics/199201553

Shaun Bradshaw is Director of Quality Solutions at Questcon Technologies.


When used properly, test metrics can aid in software development process improvement by providing pragmatic, objective evidence of process change initiatives. Metrics are defined as "standards of measurement" and have long been used in the IT industry to indicate a method of gauging the effectiveness and efficiency of a particular activity within a project. Although test metrics are gathered during the test effort, they can provide measurements of many different activities performed throughout a project. In conjunction with root cause analysis, test metrics can be used to quantitatively track issues from points of occurrence throughout the development process. In addition, accumulating, updating, and reporting test metrics information on a consistent and regular basis ensures that trends can be promptly captured and evaluated.

Test metrics exist in a variety of forms, and the question is not whether metrics should be used, but which ones should be used. The five points I present here serve as a guide for instituting a metrics program.

1. Keep It Simple

Simpler is almost always better. Consider, practically, the resources and time it will take to capture the necessary data, and how meaningful the resulting value will be to the process improvement effort.

First, track the easy metrics. Most test analysts are required to know the number of test cases they will execute, the current state of each test case (executed/unexecuted, passed/failed/blocked, and so on) and the time and date of execution. This is basic information that should be tracked in some way by every test analyst. One way to ensure proper tracking is to simply formalize the process of gathering and tracking data that is already available. Additionally, the metrics should be objectively quantifiable and easy to understand. Metrics are easy to understand when they have clear, unambiguous definitions and explanations. Example 1 presents the definition and explanation of a "blocked" test case.

Example 1. Definition and explanation of a "blocked" test case.

The metric in Example 1 defines the impact that known issues are having on the test team's ability to execute the remaining test cases. A "blocked" test is one that cannot be executed due to a known environmental problem. A test case is also blocked when a previously discovered system defect is known to cause test failure. Because of the potential delays involved, it is useful to know how many tests cannot be executed until program fixes are received and verified.

The definition and explanation in Example 1 are clearly intended to track unintentional blocks. They eliminate confusion as to whether or not these tests were intentionally blocked by the test analyst (that is, flagged so they will not be executed). This metric is also objectively quantifiable. Based on the definition, a test case can be counted as blocked or not, but not both. Also, by sticking to the definition and explanation, a test analyst cannot apply other subjective criteria that would allow a test case to be flagged as blocked.
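The "counted as blocked or not, but not both" rule can be sketched in a few lines of code. The statuses and test-case IDs below are hypothetical, but the point stands: with an unambiguous definition, each test case maps to exactly one status, and the blocked count is a simple, objective tally.

```python
from collections import Counter

# Hypothetical run-log statuses for a small test suite. Per the Example 1
# definition, "blocked" is recorded only when a known environmental problem
# or a previously discovered defect prevents execution.
statuses = ["passed", "failed", "blocked", "unexecuted", "blocked", "passed"]

counts = Counter(statuses)  # each test case contributes to exactly one status
blocked_pct = 100.0 * counts["blocked"] / len(statuses)
print(f"Blocked: {counts['blocked']} of {len(statuses)} ({blocked_pct:.1f}%)")
```

Because every test case carries a single status, two analysts applying the same definition to the same run log will always arrive at the same blocked count.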

2. Create Meaningful Metrics

Test metrics are meaningful if they provide objective feedback to the project team regarding any of the development processes from analysis, to coding, to testing. If a metric is not meaningful, it should not be tracked. Tracking meaningless metrics wastes time and does little to improve the development process. Also, metrics should remain objective, as subjective metrics are difficult to track and interpret, and team members might not trust them. Without trust, it is difficult to implement process changes.

3. Track the Metrics

Tracking test metrics throughout the test effort is extremely important because it allows the project team to see developing trends and provides an historical perspective at the end of the project. Tracking metrics requires effort, but that effort can be minimized through the simple automation of the run log (by using a spreadsheet or simple database) or through customized reports from a test management or defect tracking system. This underscores the "keep it simple" step -- the metrics should be simple to track and simple to understand. The process of tracking test metrics should not create a burden on the test team or test lead.
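As a minimal sketch of the "simple automation of the run log" idea, the snippet below keeps a run log as CSV using only Python's standard library. The column names, test-case IDs, and timestamps are illustrative, not taken from the article; an in-memory buffer stands in for the file a real team would use.

```python
import csv
import io

# In practice this would be a .csv file on disk; an in-memory buffer
# keeps the sketch self-contained.
run_log = io.StringIO()
writer = csv.writer(run_log)
writer.writerow(["test_case", "status", "executed_at"])  # hypothetical columns
writer.writerow(["TC-001", "passed", "2007-04-20 09:15"])
writer.writerow(["TC-002", "failed", "2007-04-20 09:40"])
writer.writerow(["TC-003", "blocked", ""])  # blocked tests have no execution time

# Reading the log back gives the base data for status reports.
run_log.seek(0)
rows = list(csv.DictReader(run_log))
executed = sum(1 for r in rows if r["status"] in ("passed", "failed"))
print(f"{executed} of {len(rows)} test cases executed")
```

A spreadsheet or a simple database table with the same columns works just as well; the point is that the data the analysts already record becomes queryable with almost no extra effort.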

There are several types of metrics to track including base metrics, calculated metrics and S-curves:

Base Metrics

Base metrics constitute the raw data gathered by a test analyst throughout the testing effort. These metrics are used to provide project status reports to the test lead and project manager, and also feed into the formulas used to derive calculated metrics. Every project should track the test metrics in Table 1.

Table 1. Recommended base metrics.

There are other base metrics that can and should be tracked, but this list is sufficient for most test teams that are starting a metrics program.

Calculated Metrics

Calculated metrics convert the base metrics data into more useful information. These types of metrics are generally the responsibility of the test lead and can be tracked at many different levels (by module, tester, or project). The calculated metrics in Table 2 are recommended for implementation in all test efforts.

Table 2. Recommended calculated metrics.
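Table 2 itself is not reproduced here, but calculated metrics of the kind it describes are simple ratios over the base counts. The base numbers below are hypothetical, and the four formulas are common examples of derived metrics rather than a transcription of the table.

```python
# Hypothetical base metrics gathered by the test analysts.
total, executed, passed, failed, blocked = 200, 120, 100, 20, 30

percent_complete = 100.0 * executed / total    # how far along the test effort is
pass_rate        = 100.0 * passed / executed   # quality of what has been tested
failure_rate     = 100.0 * failed / executed   # indicator of rework ahead
blocked_pct      = 100.0 * blocked / total     # impact of known issues on execution

print(f"Complete: {percent_complete:.1f}%  Pass: {pass_rate:.1f}%  "
      f"Fail: {failure_rate:.1f}%  Blocked: {blocked_pct:.1f}%")
```

Because the inputs are the base metrics the analysts already track, a test lead can compute these at any level -- by module, by tester, or for the project as a whole -- just by filtering the run log first.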

The S-Curve

When charting cumulative test case passes and defects, the graph commonly takes on an "S" shape. This shape is a natural function of the testing process: execution starts out slowly because of environment, application, and data setup issues; picks up pace as testing continues, fewer issues are discovered, and more fixes are released to test; and finishes slowly as the most difficult defects are fixed and lower priority test cases are executed.

Figure 1. Test case passes: actual progress versus the theoretical S-curve.

Test execution starts out slowly, picks up toward the middle of the test effort and then finishes slowly. It is useful to include the S-curve as part of a test metrics program because it gives immediate visual feedback on the progress of the test effort and illustrates the risks involved in releasing the application to production. It is helpful to develop two separate graphs, each displaying a theoretical curve to compare against the actual curve. The first graph is used to track test case passes and charts the progress of the test effort (Figure 1). The other graph is used to track defects, and charts the risk of release (Figure 2). The degree to which the actual test curve complies with the theoretical curve becomes the basis for measuring test progress and risk of release.

Figure 2. Defects: actual counts versus the theoretical S-curve.

4. Use Metrics to Manage the Project

Many times, members of a project team intuitively understand that changes in the development lifecycle could improve the quality and reduce the cost of a project, but they are unwilling to implement changes without objective proof of where the changes should occur. By coming together on a regular basis during a project and especially at the end of the project, the team can review the test metrics and other available information to determine what improvements can be made. Here is an example of how metrics can be used to make process changes during a test effort.

Imagine that the test team is halfway through the test execution phase of a project, and the project team is reviewing the existing metrics. One metric stands out -- less than 50 percent of the test cases have been executed. The project manager is concerned that half of the testing time has elapsed, but less than half the tests are completed. Initially this looks bad, but the test lead points out that 30 percent of the test cases are blocked. The test lead explains that one failure in a particular module is preventing the test team from executing the blocked test cases. Moreover, the next test release is scheduled in four days and none of the blocked test cases can be executed until then. At this point, using objective information available from the metrics, the project manager is able to make a decision to push an interim release to the test team with a fix for the problem that is causing so many of the test cases to be blocked.

In this example, if the metrics had not been tracked, the project manager might not have been able to make that critical decision, the test team would have lost four days of testing, and possibly lost the confidence or respect of the project team.

5. The Final Step: Interpretation and Change

As previously mentioned, test metrics should be reviewed and interpreted on a regular basis throughout the test effort, particularly after the application is released into production. During review meetings, the project team should closely examine all available data and use that information to determine the root cause of identified problems. It is important to look at several of the base metrics and calculated metrics in conjunction with one another, as this will allow the project team to have a more complete picture of what took place during a test.

If metrics have been gathered across several projects, a comparison should be done between the results of the current project and the average or baseline results from the other projects. Determine if the current metrics are typical of software projects in your organization. If development process changes were made for the current project, note if there were any visible effects on the metrics.

Metrics Business Case

Metrics can provide information necessary to understand the types of process changes that can improve the quality and reduce the cost of a given project and provide a clear indication of the level of improvement as a result of process changes. Using the five points above can help chart the way to instituting a metrics program and effectively monitoring the benefits of process improvement initiatives.

The following case study demonstrates the successful execution of test metrics within a real-life company setting.

Background

The IT department of a major truck manufacturer had little or no testing across its IT projects. The company's projects were primarily maintenance related and operated in a COBOL/CICS/Mainframe environment. The truck manufacturer wanted to migrate to more up-to-date technologies and felt that testing should accompany this technological shift. The company needed to establish a testing process and also train new test team members.

Course of Action

The test team was introduced to test metrics it should track. The team's first project, Project V, was primarily developed in Visual Basic and HTML, and was accessed via standard web browser technology. By the time the test team became involved in the project, all of the analysis and most of the development had been completed. The test team developed 355 test cases and had a 30.7 percent first run failure rate along with an overall failure rate of 31.4 percent.

Believing that earlier test team involvement would improve the development process, with the company's second project, Project T, the test team instituted requirements and specifications walkthroughs. Project T was slightly more complex in that it added XML to the new development environment of Visual Basic and HTML. Project T required 345 test cases and used substantially the same staff as Project V.

Results

The first run failure rate for Project T was 18.0 percent, and the overall failure rate was 17.9 percent -- dramatically better than the results from Project V. Reducing the overall failure rate also reduced the accumulated costs of rework, creating a cost savings of approximately $170,000.
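The failure-rate metric behind these figures is just failed-first-run executions over total test cases. The article reports only the rounded percentages, so the raw failure counts below are back-of-the-envelope assumptions chosen to reproduce them; they illustrate the calculation, not the project's actual data.

```python
def first_run_failure_rate(failed_first_run, total_cases):
    """First run failure rate as a percentage of all test cases."""
    return 100.0 * failed_first_run / total_cases

# Project V: 355 test cases; ~109 first-run failures reproduces the reported 30.7%.
print(f"Project V: {first_run_failure_rate(109, 355):.1f}%")

# Project T: 345 test cases; ~62 first-run failures reproduces the reported 18.0%.
print(f"Project T: {first_run_failure_rate(62, 345):.1f}%")
```

Tracked the same way on both projects, the metric makes the improvement from earlier test team involvement directly comparable across releases.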

When managing a test effort, test leads and test managers sometimes find it difficult to empirically convey the impacts of scope changes, delays, and defects to the project manager, project team, and other interested parties. Consistently applying a set of well-defined test metrics to track and manage a test effort can dramatically improve the ability to effectively and objectively communicate findings across an organization.
