
Designing for Testability


Dr. Dobb's Journal, June 1997

Fred is principal of Advantage Software Technologies and can be contacted at [email protected].


There is no more thankless and grueling chore than continually testing and retesting a system under development. I am speaking, of course, of regression testing. A regression occurs when you unintentionally break existing system functions while fixing defects or adding other functions. Changes must be made, but how can you gain confidence that your software is functioning correctly in the face of those changes, without the cost of retesting driving you out of business? The answer is to make the regression-testing process less labor intensive.

Regression testing is repetitious and boring. It also needs to be done quickly. Therefore, it's a perfect candidate for a task that the computer can do for you. With adequate preparation, you can create a framework that lets you automate the regression testing process.

Prerequisites for Applying Regression Test Automation

To automate regression testing, you need to begin with the end in mind, making sure that your test-automation strategy is both efficient and effective. You want to easily add tests over time. You need test runs to complete quickly and efficiently. You need test results to be meaningful. To achieve these qualitative goals, your design should allow for:

  • Testing parts. Your regression testing runs should execute subsystem tests first, followed by application-level testing only if the subsystems pass their tests.
  • Making tests repeatable. You need to be able to recreate a known state of the system under test as a starting point for regression testing.
  • Automatically assessing pass-or-fail, eliminating the work-intensive task of manually comparing the results of each test run.

Subsystem Testing and Test Drivers

If you read enough about process improvement, you'll eventually run into the phrase "design for testability." You can practice design for testability by designing monitoring points into the software so it can be tested (or test itself) in predetermined ways.

It's a good idea to place each subsystem under test in isolation by writing a specialized client that has the sole purpose of testing whether the client interface to the subsystem behaves as expected. These special clients are called "test drivers."

Test drivers can come in many shapes and sizes. The design of a test driver might include anything from a hard-coded set of tests to a programmable interpreter capable of executing any number of testing commands. The level of sophistication required of a test driver will generally be suggested by the possibilities involved in testing the subsystem. Nevertheless, the test driver needs to obey certain rules to be a good citizen of the testing framework that runs it. Examples of these rules are expressed in the Perl-based solution to drop-in regression testing I present here.
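
For illustration, here is a minimal test driver in the same spirit. The subsystem (an Inventory package providing inv_add_item and inv_count) is hypothetical; the point is that the driver exercises the client interface and reports each check as plain text on standard output:

    require "Inventory.pm";     # hypothetical subsystem under test
    $failures = 0;
    # Each check writes one line to standard output, so the result
    # can be captured and compared from run to run.
    &inv_add_item("widget", 3);
    if (&inv_count("widget") == 3) { print "add/count: PASS\n"; }
    else                           { print "add/count: FAIL\n"; $failures++; }
    exit($failures ? 1 : 0);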

Enabling Drop-in Tests with Perl

If you choose a common test directory structure, Perl can help. Perl can automatically build the master script for a test run, execute that script, and compare the actual results against their ideals. If you can get Perl to do this much, you will have achieved the "drop-in" test goal.

For starters, you need to create a structure of tests in your file system that is predictable in terms of where the test components are and how names of components belonging to the same test are related. Once a drop-in capability exists, you only need to add a test executable to an appropriate directory, and the Perl script will apply it in the next test run.

For purposes of example, I'll use the structure in Listing One as an assumed test directory tree. The test/suite/bin directory is where the executable tests reside. Any file appearing in this directory implicitly defines a test. If I place a file in this directory, I am saying, "This file, when executed, writes a test result to standard output that can be compared from run to run." It makes no difference whether a file contained in test/suite/bin is a program or a shell script, as long as it follows this rule about how it behaves when run.
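
For example, a trivial drop-in test might look like the following (the name sorter.pl and its contents are illustrative). It obeys the one rule that matters: everything it writes to standard output is deterministic, so successive runs can be compared:

    #!/usr/bin/perl
    # sorter.pl -- writes a comparable test result to standard output.
    @unsorted = (3, 1, 2);
    @sorted   = sort { $a <=> $b } @unsorted;
    print "sorter: output = @sorted\n";
    print "sorter: ", ("@sorted" eq "1 2 3" ? "PASS" : "FAIL"), "\n";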

I recommend using a naming convention for test executables and scripts -- something like prefix.suffix to name a test file (.exe for Windows users). The prefixes will be used by the Perl script to derive names of other files associated with a test.

Auxiliary programs and shell scripts supporting the test executables will reside in test/suite/common. The entries in test/suite/bin can use and even share the items in test/suite/common. (You may not need this shared collection of files right away, but it's a good idea to have a place to put such things when they surface.)

The test/suite/pre and test/suite/post directories are used to hold optional processing scripts and executables that should be run before or after a given test in test/suite/bin. Be sure to name the entries in test/suite/pre and test/suite/post with the same prefix as their corresponding item in test/suite/bin. The Perl script will look for these items. If they exist, the Perl script will add them to the test suite master script so that they are run in proper order.

The test/input directory is where the read-only input files belonging to the test/suite/bin programs are kept. If a test needs data to run with, it should look for it there, under a name formed by using its own name as a prefix. You could also opt to pass in the input file for a given test on standard input, but that would restrict you to one input file per test.
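
A test can derive its input pathname from its own name. Here is one way to do it in Perl (a sketch; the .dat suffix is illustrative):

    # Derive the input path from the test's own filename ($0),
    # using the prefix convention and the tree from Listing One.
    ($name)    = reverse( split( m;[/\\];, $0 ) );
    ($prefix)  = split( /\./, $name );
    $inputFile = "/test/input/$prefix.dat";
    open (INPUT, $inputFile) || die "Can't open $inputFile\n";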

When a test is run, its standard output will be directed into a file in the test/actuals directory, bearing the name of the test and having the .log suffix. The Perl script will create commands that do comparisons (diffs) between the actual and ideal versions of the same test's log file. Differences will be highlighted in the results of running these diffs.
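
Putting these conventions together, the files for a single hypothetical test named sorter would be laid out like this (all names are illustrative):

    test/suite/bin/sorter.bat      the test itself (prefix "sorter")
    test/suite/pre/sorter.bat      optional setup, matched by prefix
    test/suite/post/sorter.bat     optional cleanup, matched by prefix
    test/input/sorter.dat          read-only input data
    test/actuals/sorter.log        captured standard output
    test/ideals/sorter.log         approved output for comparison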

The Perl Script

The Perl script that implements the process presented so far needs to take the following actions to support drop-in test capability:

  • Read through the test/suite/bin directory, collecting the names of the tests that need to be executed (keeping track of their file pathnames and their prefixes).
  • Look through the pre and post directories for any programs or scripts that match the test prefixes, and save their file pathnames (again, indexed by the same prefixes).
  • Iterate over the list of test name prefixes, constructing a master test script to run the test suite, with preprocessing and postprocessing inserted in the proper order.
  • Construct a results-comparison script to test if the actual results match the ideal results, and to highlight which specific test results did not match. If ideal files don't exist (true for new tests), create a placeholder for the ideal file (that will be visible in the test results).

The first part of the script introduces a number of constants used elsewhere in the script. I use a good many string values to represent changeable constants in the script. It makes things less readable in places, but more portable when it comes to filename suffixes and the like. (The settings in the script as shown are for DOS and Windows 95/NT. I've used the same script under Sun Solaris by just changing the constants.) The script needs to know about filename suffixes because certain systems require executable files and scripts (batch files) to be distinguished from one another, and for them to be invoked in distinct ways. UNIX is uniform in this respect, but other platforms are less accommodating.

As Listing Two illustrates, each test's log file will be created with the test name as its prefix, and the value stored in $logSuffix as its suffix. A so-called "batch" file will be recognized as such if it ends in $batSuffix. If a test is run by invoking a batch file, you invoke it using the command $batExecCmd when you enter that test into the master test script. Again, the script should allow both batch file scripts and normal executables to appear in the test/suite/bin directory.

The generated scripts will output messages using the echo command. Given the possibility that this command may need to be substituted on some system, the script defines the echo command name in a constant. Next, it establishes constants to hold the names of the files to produce. Specifically, it creates constants representing the names of the master test script, compare script, and log file that the compare script will produce; see Listing Three.

The script next defines constants representing the pathnames of the test directory tree entries. Using this method, you can change the test directory tree's location and organization, and the remainder of the script will continue to interact with the directories as it should. The script also ensures that the required directories exist, using Perl's -e file-test operator. If any required directory is missing, it invokes the die function with a suitable message; see Listing Four. The full pathnames of the master test script, compare script, and compare log (see Listing Five) are established so that the files are located immediately below the test root directory, but you could put them anywhere you like.

Lastly, the script defines strings that represent test master-script entry and exit commands (Listing Six), and compare script entry and exit commands. These are used in this example to emit messages indicating the start and end of each script's execution, but more involved start and end processing could be substituted here (such as sending mail to users when the script completes).
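
For example, on a UNIX system you might have the generated script mail the team when the run finishes. This is only a sketch; the address is a placeholder:

    # Substitute richer exit processing -- notify the team by mail.
    $scriptExitCmds = "$echoPgm Test run end...\n"
                    . "date | mail -s 'test run complete' qa\@yourhost\n";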

In Listing Seven, the script gets the names of the test files from the test/suite/bin directory and saves them as the keys of an associative array, with their prefix names as values. It selects only "normal files" using Perl's -f file-test operator, which lets it ignore any subdirectories located alongside the test executables. The split function extracts the test filename prefix from the filename. If a filename has no suffix, the filename itself is kept as its own prefix; this is expected and harmless.

Next, the script loads the file lists from the "pre" and "post" directories (see Listing Eight). There's no need to die if they aren't present -- they're optional. The script saves the names in two separate arrays which will be scanned later to find matches on the filename prefixes (refer to runtests.pl, available electronically; see "Availability," page 3).

Now that the script has gathered all the necessary information, it creates the "master" test script (Listing Nine), inserting preprocessing and postprocessing steps if they exist. It also creates the compare script for execution after the master script is executed. The associative array %testFilePrefixes holds the filenames of the tests in its keys, as well as each filename's prefix in its corresponding value entry.

Conclusion

Not surprisingly, drop-in test automation capability begins and ends with what you plan to do with the tests that are dropped in. Once you have defined the structure of tests, how those tests are run, and how their results are inspected, you can automate test-script setup using Perl's facilities for scanning directories, manipulating strings, and creating files. I hope you find this strategy and sample Perl script helpful for doing similar things using your own requirements.

DDJ

Listing One

/test
   /suite
      /bin
      /common
      /pre
      /post
   /input
   /actuals
   /ideals


Listing Two

$logSuffix         = ".log";
$batSuffix         = ".bat";
$scriptSuffix      = ".bat";
$batExecCmd        = "call";


Listing Three

$diffPgm           = "diff" ;
$echoPgm           = "echo" ;
$masterScriptName  = "master";
$compareScriptName = "compare";
$masterFile        = "$masterScriptName"  . "$scriptSuffix" ;
$compareFile       = "$compareScriptName" . "$scriptSuffix" ;
$compareLogFile    = "$compareScriptName" . "$logSuffix" ;


Listing Four

$testRoot        = "/test";
$testSuite       =     "$testRoot/suite";
$testSuiteBin    =         "$testSuite/bin";
$testSuiteCommon =         "$testSuite/common";
$testSuitePre    =         "$testSuite/pre";
$testSuitePost   =         "$testSuite/post";
$testInput       =     "$testRoot/input";
$testActuals     =     "$testRoot/actuals";
$testIdeals      =     "$testRoot/ideals";


-e $testRoot     || die "Aborting:  Missing directory $testRoot\n";
-e $testSuite    || die "Aborting:  Missing directory $testSuite\n";
-e $testSuiteBin || die "Aborting:  Missing directory $testSuiteBin\n";
-e $testActuals  || die "Aborting:  Missing directory $testActuals\n";
-e $testIdeals   || die "Aborting:  Missing directory $testIdeals\n";


Listing Five

$masterScriptPath  = "$testRoot/$masterFile" ;
$compareScriptPath = "$testRoot/$compareFile" ;
$compareLogPath    = "$testRoot/$compareLogFile" ;


Listing Six

$scriptEntryCmds   = "\@echo off\n"
                   . "$echoPgm Test run start...\n\n";
$scriptExitCmds    = "$echoPgm Test run end...\n";


$compareEntryCmds  = "$echoPgm Comparing log files...\n"
                   . "$echoPgm Compare log start... > $compareLogPath\n\n";
$compareExitCmds   = "$echoPgm Compare log end...   >> $compareLogPath\n";


Listing Seven

opendir (BIN_DIR, $testSuiteBin)
    || die "Aborting:  Can't read dir $testSuiteBin\n";
while ( $filename = readdir(BIN_DIR) ) {
    if ( -f "$testSuiteBin/$filename" ) {
        ($prefix) = split( /\./, $filename ) ;
        $testFilePrefixes{$filename} = $prefix ;
    }
}
closedir (BIN_DIR);


Listing Eight

opendir (PRE_DIR, $testSuitePre) ;
while ( $filename = readdir(PRE_DIR) ) {
    if ( -f "$testSuitePre/$filename" ) {
        @testPreFiles = ( @testPreFiles, $filename ) ;
    }
}
closedir (PRE_DIR) ;


opendir (POST_DIR, $testSuitePost) ;
while ( $filename = readdir(POST_DIR) ) {
    if ( -f "$testSuitePost/$filename" ) {
        @testPostFiles = ( @testPostFiles, $filename ) ;
    }
}
closedir (POST_DIR) ;


Listing Nine

open (TMASTER, "> $masterScriptPath")
    || die "Aborting:  Can't create $masterScriptPath\n";
print TMASTER "$scriptEntryCmds";


open (TCOMPARE, "> $compareScriptPath") 
    || die "Aborting:  Can't create $compareScriptPath\n";
print TCOMPARE "$compareEntryCmds";
 
foreach $test (keys(%testFilePrefixes)) {
    print "Adding drop-in test $test...\n";
    print TMASTER "$echoPgm $test...\n";


    # --------------------------------------------------------
    # Create preprocessing entries for this test.  If it's a 
    # batch file, invoke it using the $batExecCmd
    #
    foreach $preFile (@testPreFiles) {
        ($prefix) = split( /\./, $preFile ) ;
        if ($prefix eq $testFilePrefixes{$test}) {
            print TMASTER "$batExecCmd " if ($preFile =~ /.*$batSuffix$/ ) ;
            print TMASTER "$testSuitePre/$preFile\n";
        }
    }
    # -----------------------------------------------------
    # Create master script main entry for this test.
    #
    $logFile   = "$testFilePrefixes{$test}$logSuffix";
    $logActual = "$testActuals/$logFile";
    $logIdeal  = "$testIdeals/$logFile";
    print TMASTER "$batExecCmd " if ($test =~ /.*$batSuffix$/ ) ;
    print TMASTER "$testSuiteBin/$test > $logActual\n";


    # --------------------------------------------------------
    # Create postprocessing entries for this test.  If it's a 
    # batch file, invoke it using the $batExecCmd
    #
    foreach $postFile (@testPostFiles) {
        ($prefix) = split( /\./, $postFile ) ;
        if ($prefix eq $testFilePrefixes{$test}) {
            print TMASTER "$batExecCmd " if ($postFile =~ /.*$batSuffix$/ ) ;
            print TMASTER "$testSuitePre/$postFile\n";
        }
    }
    print TMASTER "\n";
    # -----------------------------------------------------
    # Create compare script entry for this test.  If the
    # test's ideal log file doesn't exist, we create a
    # placeholder for the ideal file contents that will
    # be exposed during the compare step.
    #
    print TCOMPARE "$echoPgm **** $logIdeal vs. $logActual ****" .
        " >> $compareLogPath\n"; 
    print TCOMPARE "$diffPgm $logIdeal $logActual >> $compareLogPath\n\n"; 


    if (! -e $logIdeal) {
        open (IDEAL,"> $logIdeal") || die "Aborting: Can't create $logIdeal\n";
        print IDEAL "--replace with ideal contents--\n";
        close (IDEAL);
    }
}
print TMASTER "$scriptExitCmds";
close (TMASTER);


print TCOMPARE "$compareExitCmds";
close (TCOMPARE);



Copyright © 1997, Dr. Dobb's Journal

