Selecting Test Cases Based on User Priorities



March 01, 2000
URL: http://www.drdobbs.com/selecting-test-cases-based-on-user-prior/184414580


Your product's release date looms. Though you're scrambling to cover every testing contingency, a worry still gnaws at you: will the user base curse your name in three months' time as a poorly coded module repeatedly causes problems? How do you know you've done every reasonable, appropriate test?

If your system-test strategy is implementation-based, you will attempt to test every line of code, or even every path through the code. You'll certainly find defects, but the process is expensive—and impossible for those portions of the system for which you only have access to the executable, or where there are infinite paths. Specification-based techniques, on the other hand, will cover all of the assumptions and constraints that were imposed by the software's developers.

However, neither approach addresses a crucial point of view: your users' priorities. If you are in the shrink-wrap software business, you may have made only vague assumptions about your users; by contrast, if you are building a product in response to a formal request for proposals, you may be following precisely defined user profiles. Regardless of the rigor of your design process, one thing holds true: the frequency with which each type of user exercises the system reflects the relative importance of the specific system features.

In system testing, this frequency of use has traditionally been represented by an operational profile, which guides the selection of test cases so that the most popular system operations are tested most frequently. This is an effective technique for discovering the defects that users would encounter most often. While operational profiles are easy enough to construct after accumulated experience with the system, they are harder to build prior to release—which is when, of course, they are most useful.
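
For illustration, an operational profile can be treated as a set of weights over operations, with test cases drawn in proportion to those weights. Here is a minimal Python sketch (ours, not from the article; the operation names and percentages are hypothetical):

    import random

    # Hypothetical operational profile: relative frequency of each operation.
    operational_profile = {
        "check balance":   0.50,
        "make deposit":    0.30,
        "make withdrawal": 0.15,
        "make adjustment": 0.05,
    }

    def pick_operations(profile, n_tests):
        """Draw operations to test in proportion to their frequency of use."""
        operations = list(profile)
        weights = [profile[op] for op in operations]
        return random.choices(operations, weights=weights, k=n_tests)

    # Roughly half of a 100-test budget lands on the most frequently used operation.
    sampled = pick_operations(operational_profile, 100)
    print(sampled.count("check balance"), sampled.count("make adjustment"))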

The familiar use-case model of system requirements can play a part in computing the relative frequency of use, thus guiding the selection of test cases for the system. We have developed a set of extensions—including an expanded description of the system actors—to existing use-case templates that capture information relevant to testing.

Increasing System Reliability

The reliability of a software program, as defined by John Musa in Software Reliability Engineering (McGraw-Hill, 1999), is the probability of failure-free operation in a specified context and period of time. During system testing, you can estimate the reliability of the system as it will be experienced in normal operation. Accurate estimates require that you specify the context, which consists in part of the system functions that will be exercised. The context should also describe the operating environment: the operating system version, the run-time system's version (if applicable), and the versions of all DLLs used. One technique for specifying the system-functions portion of the context is to use the same operational profile that drives system testing.

Reliability requirements are stated in terms of a specified period of failure-free operation (for example, "no failures in 24 hours"). The frequencies of operation shown in the operational profile should be based on the user's actions within the same time period as the one expressed in the reliability requirement. This relationship between the two time periods provides clear direction for system testing: using an operational profile designed for the appropriate time interval to direct the tests, and then repairing the failures encountered, produces the fastest possible improvement in system reliability.
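
As a back-of-the-envelope illustration (standard reliability arithmetic, not something the article states), if failures are assumed to arrive at a constant rate \lambda, the probability of failure-free operation over a period t is

    R(t) = e^{-\lambda t},

so a requirement such as a 0.95 probability of no failures in 24 hours corresponds to a failure-intensity objective of

    \lambda \le -\ln(0.95) / 24\,\text{h} \approx 0.0021 \text{ failures per hour}.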

Actors and Use Cases

The use-case technique, incorporated into the Rational Unified Process, provides an easy-to-understand representation of the functional requirements for a system. The technique identifies all external forces (or actors) that trigger system functionality. Each use case provides a description of a use of the system by one or more of the actors.

An actor can represent a human user of the system or a stimulus from another system. Because each actor represents a type of user rather than a specific individual, different actors interact with the system in different ways.

Use cases describe the details of system functionality from the user perspective, with scenario sections detailing the system's response to specific external stimuli. The scenario section also outlines what triggers the use and provides the information needed to establish the criteria that will determine whether the system passed the test. Additionally, the use case describes the preconditions that must be established prior to the execution of the test case.

For our purposes, let's focus on the frequency and criticality fields in the use case template. The criticality attribute defines how necessary a use is to the successful operation of the system; the frequency attribute defines how often a specific use is triggered. By combining these two attributes, you can prioritize uses and tests and thus test the most important, most frequently invoked uses. In our simple banking system, making deposits and making adjustments might have about the same frequency, but making deposits would have a higher criticality and should be tested more rigorously.

Criticality is easy for an expert to judge, so this field can be completed as the use case is constructed. Frequency is more difficult to quantify, however, because different actors may trigger the same use at very different rates.

Actor Profiles

Each actor is described by a brief profile. The major attribute is the actor's use profile, which ranks the frequency with which this actor triggers each individual use. It is usually easy to determine the relative frequency with which a specific actor does this, either by analyzing the responsibilities of the actor or by simply reasoning about the domain.

You can record the frequency attribute with relative rankings (for example, first, second or third), implied rankings (high, medium or low) or the percentage of invocations that apply to this use (0 to 100 percent). However, actors seldom trigger each use with exactly the same percentage in every program execution, making this last approach less accurate.

Though the ranking values are identical, it's often easier to attach meaning to high, medium and low than to 1, 2 and 3. On the other hand, combining numeric rankings is more intuitive than combining subjective values.
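
To make that trade-off concrete, here is a minimal Python sketch (ours, not the authors') that records the actor use profiles from the banking example using high/medium/low labels plus a 3/2/1 mapping so the rankings can be combined arithmetically. The abstract Bank Fiduciary Employee is omitted, and a use marked N/A for an actor is simply absent from that actor's profile:

    # Map the subjective rankings onto numbers so they can be combined later.
    RANK = {"high": 3, "medium": 2, "low": 1}

    # Actor use profiles from the banking example (abstract actor excluded).
    actor_profiles = {
        "Bank Teller":    {"Make Withdrawal": "medium", "Make Deposit": "high"},
        "Head Teller":    {"Make Withdrawal": "high", "Make Deposit": "medium",
                           "Make Adjustment": "medium"},
        "Account Holder": {"Make Withdrawal": "high", "Make Deposit": "medium"},
        "EFT System":     {"Make Withdrawal": "medium", "Make Deposit": "medium"},
    }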

Use-case Profiles

Now you can combine the individual actors' use profiles to rank each use case. Record the ranking in the frequency attribute of the use case (we also summarize it in a table for ease of reference). Combine the actor-profile values with a weighted average, where each weight represents the relative importance of the actor.

For simplicity, in our example we treat all the actors equally, each with a weight of 1. The values in the abstract actor's use profile aren't included in the computation, but they do help determine the values for the specialized actors.

The test priority column is determined by combining the frequency and criticality columns, typically with either a conservative or an averaging strategy. While averaging is self-explanatory, the conservative strategy—choosing the highest rating by default—often comes into play with life- or mission-critical systems.
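
Continuing the sketch from the Actor Profiles discussion (it reuses RANK and actor_profiles), the following computes a use case's frequency as a weighted average of the actor rankings, with every actor weighted 1 as in our example, and then combines frequency and criticality into a test priority using either strategy. The round-half-up convention is our own choice, and the criticality values come from the summary table later in the article:

    def label(score):
        """Map a numeric score back to a high/medium/low label, rounding halves up."""
        return {3: "high", 2: "medium", 1: "low"}[int(score + 0.5)]

    def use_case_frequency(use_case, profiles, weights=None):
        """Weighted average of the rankings of every actor that triggers the use."""
        weights = weights or {actor: 1 for actor in profiles}
        pairs = [(RANK[profile[use_case]], weights[actor])
                 for actor, profile in profiles.items() if use_case in profile]
        return sum(rank * w for rank, w in pairs) / sum(w for _, w in pairs)

    def test_priority(frequency, criticality, strategy="average"):
        """Combine frequency and criticality; 'conservative' takes the higher rating."""
        f, c = RANK[frequency], RANK[criticality]
        return label(max(f, c)) if strategy == "conservative" else label((f + c) / 2)

    freq = label(use_case_frequency("Make Withdrawal", actor_profiles))  # 'high'
    print(test_priority(freq, "high"))                     # averaging -> 'high'
    print(test_priority("medium", "low", "conservative"))  # -> 'medium'

Note that with this rounding, the averaging strategy gives Make Adjustment (Medium frequency, Low criticality) a Medium priority rather than the Low shown in the summary table; how such borderline cases are rounded is a judgment call.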

Allocating Tests

The technique presented here doesn't change the basic process of selecting types of tests and specific input data, nor does it change the calculation of how many test cases can be constructed given the available resources. What does change is that now you can systematically distribute test cases to specific use cases with a calculated test priority.

Once the priorities have been determined, you can compute the number of tests to associate with each use case. One easy method is to value the use cases' test priorities numerically. In our example, the ranks of high, medium and low are replaced with 3, 2 and 1, which sum to 6. Assume that there is time for approximately 100 tests. Then, assigning 3 to high, use case number 1 would rate 100 * 3/6, or 50 tests; use case number 2 would rate 100 * 2/6, or 33 tests; and use case number 3 would rate 100 * 1/6, or 17 tests.
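
That arithmetic in a minimal sketch (ours), using the 3/2/1 values and the 100-test budget from the example:

    PRIORITY_VALUE = {"high": 3, "medium": 2, "low": 1}

    def allocate_tests(test_priorities, budget):
        """Split the test budget across use cases in proportion to test priority."""
        total = sum(PRIORITY_VALUE[p] for p in test_priorities.values())
        return {use_case: round(budget * PRIORITY_VALUE[p] / total)
                for use_case, p in test_priorities.items()}

    print(allocate_tests({"Make Withdrawal": "high",
                          "Make Deposit": "medium",
                          "Make Adjustment": "low"}, budget=100))
    # -> {'Make Withdrawal': 50, 'Make Deposit': 33, 'Make Adjustment': 17}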

When you calculate the number of tests in this fashion, adopt a standard for determining which test cases to construct. In general, allocate your tests to the sections of the use-case description in the following order of priority: first, the basic scenario; second, exceptions; third, alternative courses of action; and last, extensions.

Although the basic scenario and exceptions should receive the majority of the tests, be sure that every section gets at least some coverage.
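
One way to apply that ordering is to weight the sections of the use-case description and split each use case's budget accordingly. The 5/3/2/1 weights below are purely illustrative assumptions on our part; the only point is that the basic scenario and exceptions dominate while every section receives at least one test:

    # Illustrative section weights: basic scenario and exceptions dominate,
    # but every section of the use-case description gets some coverage.
    SECTION_WEIGHTS = {"basic scenario": 5, "exceptions": 3,
                       "alternative courses": 2, "extensions": 1}

    def allocate_within_use_case(n_tests):
        """Split one use case's test budget across its sections, at least one each."""
        total = sum(SECTION_WEIGHTS.values())
        return {section: max(1, round(n_tests * weight / total))
                for section, weight in SECTION_WEIGHTS.items()}

    print(allocate_within_use_case(33))   # e.g. the Make Deposit budget from above
    # -> {'basic scenario': 15, 'exceptions': 9, 'alternative courses': 6, 'extensions': 3}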

Maximizing Testing ROI

Prioritizing the uses of the system produces a weighted operational profile, or use profile. Many of the steps might seem straightforward enough in the context of the simple example presented here, but in a larger project, determining which use cases to stress and which to cover only briefly can be tricky.

Indeed, building use profiles maximizes the return on investment of resources allocated to system testing, directing system testing in such a way that the reliability of the software increases the fastest. That is, test cases are selected so that those operations users depend on most are triggered most often.

The technique also links the system's actors, uses and tests; now, when uses change, you can easily identify which tests to change, and when tests fail, you can easily find the affected uses.

A Unified Modeling Language Use-Case Diagram



To illustrate the use-profile technique, we'll model a simple banking system in which customers access their accounts through automated teller machines, while bank employees access the same accounts directly through the system under development. Customers can add or remove money from their accounts or check their balances. In UML notation, actors are stick figures and use cases are ovals; a dependency arrow shows the association of an actor with a specific use. The actors in the example system include a bank customer, a teller, a head teller and the electronic funds transfer system.

The Use-Profile Process



1. Define a complete set of actors.
2. Define a complete set of use cases including associations with actors.
3. Construct the use profile for each actor.
4. Compute the frequency attribute for each use case from the actor use profiles.
5. Combine the frequency and criticality ratings for a single use case into a "test priority" value.
6. Allocate test cases based on the test priority of the use case.

Source: McGregor, John D. and Sykes, David A., A Practical Guide to Testing Object-Oriented Software, Addison-Wesley, in press.

A Unified Modeling Language Template


Use Case ID: Make Deposit
Use Case Level: System Level
Scenario:
Actor(s): Electronic Funds Transfer, Bank Fiduciary Employee, and Account Holder
Preconditions: Account must be open and active
Description:
Trigger: Actor initiates a deposit to an account.
The system responds by adding the deposit amount to the pre-existing balance.
The system updates counters measuring account activity.
The system checks whether the amount of the deposit requires IRS notification and generates the notification if required.
Relevant requirements: Ability to increase account balances
Post-conditions: Balance has been increased by the amount of the deposit
Alternative Courses of Action: If the account is not active, first activate the account and then make the deposit.
Extensions:
Exceptions: Invalid account number
Concurrent Uses: Making a withdrawal
Related Use Cases: Making a withdrawal

Decision Support
Frequency:
Criticality:
Risk:

Modification History
Owner: MLMajor
Initiation date: 12/20/99
Date last modified: 12/20/99

This template was derived by Software Architects from Ivar Jacobson's Object-Oriented Software Engineering: A Use Case Driven Approach (Addison-Wesley, 1992) and from the work of Alistair Cockburn (compiled at http://members.aol.com/cockburn). We have modified Cockburn's priority attribute to distinguish more clearly between frequency and criticality, and we have added a risk attribute.

Actor Profiles


Each actor is described by a brief profile. The major attribute is the actor's use profile, which ranks the frequency with which this actor triggers each individual use.

Name: Bank Fiduciary Employee
Abstract: Yes
Description (Role): Has access to money accounts
Skill Level: Varies
Actor's Use Profile:
Use Case Name Relative Frequency
Make Withdrawal High
Make Deposit Medium
Make Adjustment Low

Name: Bank Teller
Abstract: No
Description (Role): Directly interfaces with account owners
Skill Level: Trained but varied amounts of experience
Actor's Use Profile:
Use Case Name Relative Frequency
Make Withdrawal Medium
Make Deposit High
Make Adjustment N/A

Name: Head Teller
Abstract: No
Description (Role): Supervises Bank Tellers and manages accounts
Skill Level: Expert
Actor's Use Profile:
Use Case Name Relative Frequency
Make Withdrawal High
Make Deposit Medium
Make Adjustment Medium

Name: Account Holder
Abstract: No
Description (Role): Owns the account and can trigger account activity
Skill Level: Novice
Actor's Use Profile:
Use Case Name Relative Frequency
Make Withdrawal High
Make Deposit Medium
Make Adjustment N/A

Name: Electronic Funds Transfer System
Abstract: No
Description (Role): Triggers account activity within constraints
Skill Level: Expert
Actor's Use Profile:
Use Case Name Relative Frequency
Make Withdrawal Medium
Make Deposit Medium
Make Adjustment N/A

Summary Table of Use-Case Profiles


Use Case Name Frequency Criticality Test Priority
Make Withdrawal High High High
Make Deposit Medium Medium Medium
Make Adjustment Medium Low Low

The individual actors' use profiles are combined to rank the frequency of each use. For the Make Withdrawal use case, the average frequency is (Medium + High + High + Medium) / 4, or (2 + 3 + 3 + 2) / 4 = 2.5, which is recorded as High. For the Make Deposit use case, the average frequency is (High + Medium + Medium + Medium) / 4, or (3 + 2 + 2 + 2) / 4 = 2.25, which is recorded as Medium. For the Make Adjustment use case, only one concrete actor (the head teller) triggers the use, so the frequency is simply Medium.

To save space in this example, the criticality value for each use case is placed directly in the table. The Make Adjustment use has a low criticality rating because the head teller could use combinations of deposits and withdrawals to achieve the required result. Withdrawal is rated as more critical than deposit simply because we must be able to return money that we have accepted. The test priority column is then obtained by combining the frequency and criticality columns, using the same method.

Test-Case Diagram



The test-case diagram extends the UML use-case diagram to provide traceability from the actors to the system test cases that each actor triggers. Changes to an actor may create a need to change specific use cases and test cases, and test reports can easily provide a "by actor" summary of failures. You can then systematically distribute a portion of the possible test cases to specific use cases using a calculated test priority.
