1. Keep It Simple
Simpler is almost always better, both in terms of the resources and time it takes to capture the necessary data and in terms of how meaningful the resulting values will be to the process improvement effort.
First, track the easy metrics. Most test analysts are required to know the number of test cases they will execute, the current state of each test case (executed/unexecuted, passed/failed/blocked, and so on), and the time and date of execution. This is basic information that should be tracked in some way by every test analyst. One way to ensure proper tracking is simply to formalize the process of gathering and tracking data that is already available. Additionally, the metrics should be objectively quantifiable and easy to understand. Metrics are easy to understand when they have clear, unambiguous definitions and explanations. Example 1 presents the definition and explanation of a "blocked" test case.
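The "easy metrics" above can be captured with a very small amount of structure. The following is a minimal sketch (the record fields and state names are illustrative assumptions, not a prescribed schema) showing how a test analyst might record each case's state and execution time and then tally the counts:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class TestState(Enum):
    """The basic execution states mentioned in the text."""
    UNEXECUTED = "unexecuted"
    PASSED = "passed"
    FAILED = "failed"
    BLOCKED = "blocked"

@dataclass
class TestCaseRecord:
    """One row per test case: identity, current state, and when it ran."""
    case_id: str
    state: TestState = TestState.UNEXECUTED
    executed_at: Optional[datetime] = None

def summarize(records: list[TestCaseRecord]) -> dict[TestState, int]:
    """Count test cases by state -- the easy, objectively quantifiable metric."""
    counts = {state: 0 for state in TestState}
    for record in records:
        counts[record.state] += 1
    return counts

records = [
    TestCaseRecord("TC-001", TestState.PASSED, datetime(2024, 3, 1, 9, 30)),
    TestCaseRecord("TC-002", TestState.BLOCKED),
    TestCaseRecord("TC-003"),
]
print(summarize(records)[TestState.BLOCKED])
```

Because each record carries only facts the analyst already has (case ID, state, timestamp), this formalizes existing data rather than adding new collection work.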
The metric in Example 1 defines the impact that known issues are having on the test team's ability to execute the remaining test cases. A "blocked" test is one that cannot be executed due to a known environmental problem. A test case is also blocked when a previously discovered system defect is known to cause test failure. Because of the potential delays involved, it is useful to know how many tests cannot be executed until program fixes are received and verified.
The definition and explanation in Example 1 are clearly intended to track unintentional blocks. They eliminate confusion as to whether these tests were intentionally blocked by the test analyst (that is, flagged so they will not be executed). This metric is also objectively quantifiable: based on the definition, a test case is counted as either blocked or not blocked, never both. And by adhering to the definition and explanation, a test analyst cannot apply other, subjective criteria to flag a test case as blocked.
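The classification rule described above can be expressed as a simple predicate. This is a sketch under the stated definition (the parameter names are my own illustrative labels): a test case is blocked only by an environmental problem or a known defect that causes failure, and an intentional skip by the analyst never counts as a block:

```python
def is_blocked(environment_problem: bool,
               known_defect_causes_failure: bool,
               intentionally_skipped: bool) -> bool:
    """Return True only for unintentional blocks, per the Example 1 definition.

    A test the analyst deliberately flagged not to run is excluded,
    so the metric stays objectively quantifiable: each case is
    blocked or not blocked, never both.
    """
    if intentionally_skipped:
        return False
    return environment_problem or known_defect_causes_failure

print(is_blocked(True, False, False))
print(is_blocked(False, True, True))
```

Because the function takes only yes/no facts and returns a single boolean, two analysts applying it to the same test case must reach the same count, which is exactly the objectivity the definition is designed to guarantee.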