Software Testing Cycles


February 1994

Overcoming testing bottlenecks

Scott is director of technical marketing at Mercury Interactive and can be reached at 408-987-0100 or [email protected].


Over the past decade, application programmers have seen a dramatic improvement in GUI development tools as we've moved from manual coding and API programming to GUI builders, application generators, and user-interface management systems. End users have benefited tremendously from the enhanced application features, but the accompanying growth in program size means quality-assurance (QA) engineers must test an order of magnitude more code. Consequently, testing is often a bottleneck, as each code change, enhancement, bug fix, and platform port necessitates retesting the entire application to ensure a quality release.

The key to testing more code in less time is a clearly defined software test cycle. The test cycle I'll discuss in this article is composed of four steps: test generation, playback, verification, and reuse (maintenance and porting).

Test Generation

Test generation is the only phase still driven entirely by humans, even when an automated test tool is involved. There are two methods for creating tests--programming and recording (capturing)--and two levels at which test commands can be represented--analog and object-oriented.

Programming is most frequently used by testing experts. QA engineers who are not constrained by time can use programming to create robust test suites, often beginning test development before the application is finished. Recording, on the other hand, is used by application experts. This category includes QA engineers less proficient at programming, customer-support engineers, and beta-site customers who have neither the time nor the expertise to program tests. Recording is also used by developers and porting groups who work under short time constraints.

Programming is the only way to create the general framework or structure of a test suite; it allows tests to be generated before the application is ready. Creating programmed tests at the object level enables the test tool to query widget and object information such as existence, state, and contents. However, this approach requires the test developer to program the physical widget descriptions and keep track of the UI class hierarchies and state-handling mechanisms--similar to handcoding the GUI.

The main drawback of programming is the long test-development time required. All of the UI internal-class hierarchies and state-handling mechanisms must be included in the programmatic test script. Testing something as simple as selecting an item from a menu or clicking a button results in lengthy and awkward test scripts that are complex to maintain. Developing UI tests with a programming-only tool requires the test developer to reprogram the tests or create aliases and huge mapping files for each platform. The complexity involved in maintaining these test suites can be as great as that of developing the initial application code. Other shortcomings include poor code maintainability, complexity of use, and the level of expertise required.

Analog recording--recording mouse movements and keystrokes--is the simplest and most productive way to generate tests. The drawback here is that the tests are position-dependent: Each movement of the mouse is to a specific X-Y location. Thus, object-oriented recording was developed, in which test scripts are recorded and reverse-engineered at the widget level rather than at the analog level. As the user interacts with the application, each action is automatically mapped to an application function; see Figure 1.

Object-oriented recording by itself does not solve the problem of reuse, however. To ensure test reusability, the test tool must use what I call the "data-model" approach, which separates the GUI description from the body of the test. If, for example, a menu-item label changes from "Open" to "Start," a single modification to the data model causes all tests to be automatically remapped by the testing tool. By contrast, the "code-model" approach, used by most commercially available test tools, requires physical descriptions to be manually programmed and changed throughout the test script as the application changes. The test script must be constantly modified and recompiled since the GUI elements and their descriptions are embedded in the actual test script. Aliases and large, complex mapping files can be created, although they must be updated as well. The code model does not differentiate between the logical and physical descriptions, and it creates a test script that is very difficult to read and maintain.

The data-model approach separates the test script from the GUI data, which can change from release to release and between platforms. It separates the logical-name information from the physical description: The logical data is a clear, readable description of the GUI object, while the physical data is a detailed list of the object's attributes, used to identify the object uniquely. The test tool is responsible for resolving logical names into physical descriptions at run time, and when a description changes, it automatically remaps the rest of the test suite. The data model provides a basis for the automation of GUI testing and makes the test script more readable.
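To make the separation concrete, here is a minimal sketch of the data-model idea in Python. The names (GUI_MAP, resolve, button_press) and the attribute values are illustrative assumptions, not any particular tool's API:

     # Sketch of the data-model approach: the test script uses only logical
     # names; a separate GUI map resolves them to physical descriptions at
     # run time. All names here are illustrative.

     # GUI map: logical name -> physical description of the widget
     GUI_MAP = {
         "open_button": {"MSW_class": "button", "label": "Open", "MSW_id": 1445},
         "file_menu":   {"MSW_class": "menu",   "label": "File"},
     }

     def resolve(logical_name):
         # Translate a logical name into its physical description at run time.
         return GUI_MAP[logical_name]

     def button_press(logical_name):
         # Stand-in for real playback: locate the widget and press it.
         physical = resolve(logical_name)
         print("pressing widget described by", physical)

     # If the label changes from "Open" to "Start", only the GUI map entry is
     # edited; every script that calls button_press("open_button") still runs.
     button_press("open_button")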

Test Playback

An automated test tool must be able to replay unattended overnight, and it must replay correctly 100 percent of the time. If the test suite loses synchronization with the application after the first few tests, then all subsequent results will be erroneous and the benefits over manual testing will be lost.

The problem of synchronization is partially solved by working at the object level, since tests can keep pace with the application by determining when windows and widgets become active and inactive. This method relies on the operating-system mechanics, assuming that when the test tool gets processing time from the operating system, the application is ready to receive more test inputs. Working at the object level solves synchronization problems for most applications running stand-alone on a non-preemptive operating system. Unfortunately, this approach fails with a true multitasking operating system or on the server side of a client/server application.

The solution to testing client/server systems is output synchronization, which keeps pace with the output by "watching" the screen for visual cues, enabling the tool to emulate a human tester and avoid timing errors. The test tool monitors the application, waiting for data from the server or for a response such as "query complete" before continuing, thus maintaining the integrity of the test suite and the accuracy of the results.
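In outline, output synchronization is a polling loop with a timeout. Here is a minimal sketch in Python; read_status_text is a hypothetical hook that captures the text currently shown in the application's status area (a real tool watches the screen itself):

     import time

     def wait_for_cue(read_status_text, expected="query complete",
                      timeout=60.0, interval=0.5):
         # Poll the screen for a visual cue before sending the next input.
         deadline = time.monotonic() + timeout
         while time.monotonic() < deadline:
             if expected in read_status_text():
                 return                    # server responded; continue the test
             time.sleep(interval)          # wait instead of flooding the UI
         raise TimeoutError("never saw cue %r; test is out of sync" % expected)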

Verification

Speed in developing and updating the baseline results is critical, as the baseline contains the expected responses against which the test tool compares all future results. A test tool should have built-in baseline acquisition and verification; baseline acquisition should happen automatically, in conjunction with the initial test development, and any new results required should be automatically updatable. A test tool also needs multimethod verification, which enables the testing of graphics, text, files, and widgets and objects.
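That workflow can be sketched in a few lines; the first run records the baseline, and later runs compare against it (the file layout and names here are assumptions for illustration):

     from pathlib import Path

     def verify(name, actual, baseline_dir=Path("baselines")):
         # First run records the baseline; later runs compare against it.
         baseline = baseline_dir / name
         if not baseline.exists():               # initial development: acquire
             baseline_dir.mkdir(parents=True, exist_ok=True)
             baseline.write_bytes(actual)
             return "baseline recorded"
         return "pass" if baseline.read_bytes() == actual else "fail"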

Bitmap verification, also known as "image verification," is the only effective means of testing a graphically oriented application. The test tool compares expected and actual areas of the GUI, representing the difference as an overlap (exclusive-OR) of the two images.
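In code, the comparison itself is a straightforward exclusive-OR over the two bitmaps. The sketch below works on raw pixel bytes; a real tool also handles capture, masking, and display of the overlap:

     def bitmap_diff(expected, actual):
         # XOR two equally sized raw bitmaps; an all-zero result means a match,
         # and nonzero bytes mark exactly the pixels that differ.
         if len(expected) != len(actual):
             raise ValueError("bitmaps must have identical dimensions")
         return bytes(e ^ a for e, a in zip(expected, actual))

     # any(bitmap_diff(expected, actual)) is True exactly when the images differ.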

Text verification, also known as "value verification," tests the values in an application, not just the GUI. It uses a form of optical character recognition (OCR) to "read" the bits on the screen and translate them into alphanumeric characters. This is the only way to test the server side of a client/server application.
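With today's off-the-shelf pieces the technique looks roughly like this (Pillow for screen capture, pytesseract for OCR); the tools of the time shipped their own recognizers, so this is a sketch of the idea, not of any product:

     from PIL import ImageGrab     # pip install pillow
     import pytesseract            # pip install pytesseract (plus tesseract itself)

     def read_screen_value(bbox):
         # OCR the screen region bbox = (left, top, right, bottom) into a string.
         region = ImageGrab.grab(bbox=bbox)
         return pytesseract.image_to_string(region).strip()

     # e.g., wait until read_screen_value((400, 300, 560, 330)) == "query complete"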

File verification validates input and output data files processed in the background and not sent to the GUI level.
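A minimal sketch, assuming text-format output files and a stored baseline copy:

     import difflib
     from pathlib import Path

     def verify_output_file(produced, baseline):
         # Compare a background-produced data file against its baseline copy;
         # an empty diff means the file verified cleanly.
         diff = difflib.unified_diff(
             Path(baseline).read_text().splitlines(),
             Path(produced).read_text().splitlines(),
             fromfile=str(baseline), tofile=str(produced), lineterm="")
         return list(diff)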

GUI-level verification reads the window and widget resources to check the existence of a particular field or button and/or to determine its contents. The state of a widget (active or disabled) can also be checked. This method works well for standard GUI elements on the client side of client/server applications.
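A sketch of the checks involved; query_widget is a hypothetical hook that returns a widget's attributes from the window system (or None if the widget does not exist):

     def check_widget(query_widget, logical_name,
                      expect_state=None, expect_contents=None):
         # GUI-level verification: read widget resources rather than pixels.
         info = query_widget(logical_name)
         if info is None:
             return False                              # widget does not exist
         if expect_state is not None and info.get("state") != expect_state:
             return False                              # e.g. active vs. disabled
         if expect_contents is not None and info.get("contents") != expect_contents:
             return False
         return True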

All four methods are necessary for complete application testing.

Test Reusability

Maintaining test code requires substantial time and effort. As a result, reusability--immunity to GUI changes in the application--is critical to successful application testing. In a new version of an application, buttons, widgets, and objects may have moved, and items may have been added to or removed from menus. Much as GUI builders automated GUI design (tracking user-interface descriptions, object-class hierarchies, and state handling), GUI test builders automatically accommodate changes in the user-interface hierarchy via a dynamic GUI map.

For example, translating a GUI-based software application from English to French changes all buttons, labels, and menus, but the underlying application functionality remains the same. Manually programmed tests require the test developer to check through hundreds or thousands of test scripts, finding and replacing each button, label, and menu name; see Figures 2(b) and 3(b). The GUI test builder lets the test designer make logical-name changes once; the tool automatically remaps the new logical names to the constant physical attributes that identify the GUI objects (such as [MSW_class: button, MSW_id: 1445, MSW_style: BA0000, owner: "WinBurger"]). The resulting test script is clear and concise; see Figures 2(a) and 3(a).

Path-specific programmed tests must be rewritten to port an application from one platform to another (Solaris to Windows, for example): All the physical pathnames must be changed throughout the complete suite of tests. By contrast, the GUI test builder lets each window of the ported application be "learned" automatically; then the original logical names can be mapped directly to the new physical names. If, on the Solaris platform, "open" originally mapped to [X_class: vcontrolToggle, X_attachedname: OK, X_path: table_main;work_area;frame_22;option_Btn;open], on the new Windows platform it may map to [MSW_class: button, label: open, MSW_id: 1376, MSW_style: BB0000, owner: "WinBurger"]. The GUI test tool makes the platform-dependent changes in the GUI map automatically. When the test script is replayed, it reads the logical names of the objects and uses the GUI map to translate them into physical descriptions, which are then used to identify the GUI objects--making the test scripts independent of the platform.
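The porting step can be pictured as swapping the physical half of the map while the logical half (and every test script) stays fixed. The dictionaries below echo the article's two descriptions; the surrounding code is illustrative:

     GUI_MAPS = {
         "solaris": {
             "open": {"X_class": "vcontrolToggle", "X_attachedname": "OK",
                      "X_path": "table_main;work_area;frame_22;option_Btn;open"},
         },
         "windows": {
             "open": {"MSW_class": "button", "label": "open", "MSW_id": 1376,
                      "MSW_style": "BB0000", "owner": "WinBurger"},
         },
     }

     def resolve(platform, logical_name):
         # At replay time, translate the logical name for the current platform.
         return GUI_MAPS[platform][logical_name]

     # The same script line -- e.g., button_press("open") -- replays unchanged
     # on either platform; only the GUI map differs.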

Conclusion

Today's automated software-testing tools can construct and maintain all GUI-related information during testing. They support visual management of GUI tests, interactive editing for generating and manipulating GUI descriptions, automatic porting of GUI test files across multiple platforms, and the like. With tools like these, application testers can finally match the productivity gains achieved by application developers who use GUI builders.


Figure 1: (a) Object-oriented; (b) analog scripts.

(a)
     menu_select ("File;Open");

(b)
     move_loc_abs (120,230);
     button_press ("Left");
     move_loc_rel (0, 25);
     button_press ("Left");

Figure 2: (a) Object-oriented recording; (b) programming.

(a)
     menu_select ("File;Open");

(b)
     const WAPP_WND = "/[WndBorder]WinBase - (Point.rev)";
     const WAPP_MENU = "{WAPP_WND}/$Menu";
     const FILE_MENU = "{WAPP_MENU}/File";
     const OPEN = "{FILE_MENU}/Open";
     MenuGrab (OPEN);

Figure 3: (a) Object-oriented recording; (b) programming.

(a)
     button_press ("Ok");

(b)
     const OPEN_WND = "/[WndBorder]WinBase - (Point.rev)/[DialogStyleBox]Open Test";
     const OK_BUTTON = "{OPEN_WND}/[PushButton]OK";
     PushButtonClick(OK_BUTTON);


Copyright © 1994, Dr. Dobb's Journal

