Get Test-Inoculated!

May 2002

Testing is painful. For every program we write, we should ensure that it doesn't fail in the myriad ways it might—not only on its own but also in concert with other components of the system, under any number of deployment and user scenarios. But who can afford to do this? In many Java projects, we attack the edges, with developers doing white-box unit testing until they've poked at most methods, and the QA engineers doing black-box system testing until they've prodded the expected use cases. Sometimes, this is enough. Often, it isn't. The problem seems to be that many quality concerns cut across modules in the system; they can't be easily checked by unit tests or controlled in system tests. It would be better if we could crawl inside the program to tweak and twist components where we suspect bugs. Then we could start to think about testing subsystems.

AspectJ, a language developed with DARPA funding at Xerox PARC, can help. How? In the vernacular of aspect-oriented programming (AOP), AspectJ allows programmers to modularize crosscutting concerns. For testing, that means you can encapsulate a quality concern that spans your program in a single module called an aspect, using constructs that clearly express crosscutting program invariants and expected behavior. By working at the language level, AspectJ complements testing techniques and tools such as assertions for local checks, JUnit for unit tests and JMeter for system tests. Seen through the lens of AOP, many quality concerns become clearer; writing in AspectJ, you can avoid the need for some tests altogether, make other tests better, and write new tests that crosscut the system.

Not Writing Tests
A good way to avoid writing tests is to specify and verify program invariants. But many useful invariants reach beyond a single method or even a single class—they crosscut the program structure. Checks for such invariants wind up scattered throughout the code. This makes it tough to verify invariants of any real importance. To simplify this, you can use AspectJ to enforce many such crosscutting invariants at compile-time and runtime.

Consider the simple case of field values. To enforce legal values, you must check that every set of a field value respects the invariant. The compiler helps with private members by restricting visibility to other class members. You may give yourself more help by using setter methods to guard field values, checking incoming parameters for correct values. But that leaves the issue of ensuring that the rest of the class doesn't bypass the setter methods, which requires manually inspecting all the methods of the class. AspectJ helps with this by making it possible to give a name to many different points in the program's execution, and talk about something that should—or should not—happen at those points. For example, AspectJ lets us say things like "Whenever a nonpublic field is set, make sure it is within a setter method."
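Before reaching for AspectJ, it helps to see the gap in plain Java. The sketch below (a hypothetical PrintJob class and field, not from the article) shows a setter guarding an invariant while another method of the same class silently bypasses it; nothing in the language flags the bypass:

```java
// Hypothetical sketch: a setter guards the invariant copies >= 1,
// but nothing stops other methods of the same class from assigning
// the field directly -- the gap the AspectJ declarations close.
class PrintJob {
    private int copies = 1;

    // Guarded write: rejects illegal values.
    public void setCopies(int copies) {
        if (copies < 1) {
            throw new IllegalArgumentException("copies must be >= 1: " + copies);
        }
        this.copies = copies;
    }

    public int getCopies() {
        return copies;
    }

    // The compiler says nothing about this direct write, which
    // bypasses the setter and breaks the invariant.
    public void reset() {
        copies = 0; // bug: violates copies >= 1
    }
}
```

Finding the reset() bug by hand means inspecting every method of the class; the compile-time declarations in the next section automate exactly that audit.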

Compile-Time Invariants
AspectJ lets us declare compile-time warnings and errors so we can check invariants without running a test suite. For example, we can suggest that nonpublic fields be written only by setter methods:

declare warning : within(com.xerox..*) && set(!public * *) 
      && !withincode(* set*(..)) : 
   "writing field outside setter" ;

This code says, "Issue a compile-time warning if, within the com.xerox package and its subpackages, there is a set to a nonpublic field that is not within a setter method. The fields may be of any type or name, and setter methods may return any value and take any number of arguments, so long as they have a name prefixed by 'set'." When processing code that violates this, the compiler prints out:

.\com\xerox\printing\gui\Panel.java:23:13: writing field outside setter (warning)
            privateInt = 1;
            ^   

To not merely suggest but guarantee the invariant for a class, we can declare an error rather than a warning. Now we need only test that the setters handle invalid input. We don't have to inspect code for illegal sets.

As another example, consider a printer stream-handling class with a template method that delegates a certain operation to a subclass. We want to require the subclasses not to handle any IOException (because our base class converts any IOException to a special exception class). It is straightforward to express this constraint in AspectJ:

declare error : handler(IOException+) 
      && withincode(* PrinterStream+.delegate(..)) :
   "do not handle IOException in this method";

This says, "Issue a compile-time error if there is an IOException handler in the delegate method of PrinterStream (or any subclasses)." The compiler will print out a message if it sees offending code.

Finally, consider another hard-to-enforce API rule: using a factory method to create objects. In this case, we'd like our code to use our Point subclass, as obtained from our factory. Writing this in AspectJ:

declare error  : !withincode(Point+ SubPoint+.create(..)) 
      && within(com.xerox..*)  && call(Point+.new(..)) : 
   "use SubPoint.create() to create Point";

Here the compiler signals an error for any attempt to construct a subclass of Point outside of a factory method.

Compile-time declarations let us say that something should not happen. These declarations help fill the gap between compiler errors and testing, so we can improve quality and avoid writing some test cases. They can enforce coding standards or API usage rules that are otherwise enforced, if at all, by testing or code reviews.

Getting Started With AOP
Will joinpoints, pointcuts and advice become household words?

AspectJ is an aspect-oriented extension to the Java programming language that enables the clean modularization of crosscutting concerns. A join point is a well-defined point in the program flow, such as a method execution. You write a pointcut to select join points, and advice to be executed when any join point selected by a pointcut is reached. To check crosscutting assertions at compile-time, you can declare errors based on pointcuts. You can also add fields and supertypes in a type-safe way. An aspect encapsulates all of these; like a class, an aspect is a reusable component that contains the code for a concern.

The AspectJ project produces an open-source compiler, structure browser and API documentation tool, as well as IDE support for JBuilder, Forte and Emacs. Most AspectJ users have found that the easiest way to get their feet wet is by using aspects during development for debugging, testing, configuration management and performance tuning. They then go on to use aspects in production builds, to implement concerns like synchronization, error checking and handling, and context-sensitive behavior.

—Wes Isberg

 

Runtime Invariants
Not all invariants can be checked at compile-time. We try to write test suites and error-handling code to check runtime invariants, but it's hard to write code that checks at all the right times. If we could easily specify invariants to be checked throughout program execution, we might detect bugs missed during a black-box test.

As a simple example, we'd like our factory methods not to return null.

after () returning (Point p) : 
   call(Point+ SubPoint+.create(..)) {
      if (null == p) {
         String err = "Null Point constructed when " + thisJoinPoint.getThis()
            + " called " + thisJoinPoint.getTarget() 
            + " at " + thisJoinPoint.getSignature() 
            + " from " + thisJoinPoint.getSourceLocation()
            + " with args ["
            + Arrays.asList(thisJoinPoint.getArgs());
         throw new Error(err+"]");
      }
   }

This code says, "After returning from a call to these factory methods, if the result is null, signal an error including the caller, the source line of the call, the callee, and the signature of the method called."

This is our first example of advice, which takes the form advice declaration : pointcut { body }. The pointcut picks out join points in the program execution; in this case, calls to the factory methods. In the advice declaration, after returning means that the advice body runs after the factory call join point completes normally, and (Point p) means that the code can access the Point result value as the variable p. The code in the body, after checking for null, creates an error message using thisJoinPoint, a special variable available in advice (as this is available for objects) to represent the join point context.

This after advice checks a postcondition on the return value of the call. You could write similar code to check preconditions on the input values using before advice. Many class and method invariants can be checked using before or after advice on method-call join points.

However, some of the more interesting invariants crosscut control flows. For example, we may want to say that nonstatic fields in our printer stream can be set only during object initialization. We start with a pointcut for sets to any nonstatic field in our class, regardless of type or name:

set(!static * PrinterStream.*)

For clarity's sake, we name the pointcut using the form pointcut name (arguments) : pointcut expression.

pointcut fieldWrites() : set(!static * PrinterStream.*);

We can write a pointcut for constructor execution join points:

pointcut init() : execution(PrinterStream.new(..));

Finally, to pick out join points in the control flow of the init() join points, we use the cflow() designator:

before() : fieldWrites() && !cflow(init()) 
   { throw new Error("set outside of init"); }

Defining names for subparts of pointcuts can clarify advice in exactly the way that splitting a method into smaller named methods does. This before advice now reads "field writes not in the control flow of initialization."

That works for control flow. Invariants may also cut across objects. For field access, we may want to say that only the instance can set the field, not any static method or other instances of the same class. To bind the executing and target object at the join point, we can use pointcut designators for this() and target().

before(Object caller, PrinterStream targ) : this(caller) 
      && target(targ) && fieldWrites() {
   if (caller != targ) 
      throw new Error(caller + " setting fields in " + targ);
}

This code says, "Before any set of a nonstatic field in the printer stream, throw an error if the executing object and the target object are not the same instance."

These examples show how to enforce invariants that crosscut classes and control flow and objects using advice that checks at every join point matched by the pointcut during program execution. Now we see not only how to avoid writing tests, but also how to improve the tests we have.

Writing Tests
In developing tests, we have to set up input, assemble components, run the test and verify the results. From this process, several useful patterns in test design have evolved:

  1. Use recorders and drivers to identify and cover the range of input required to test given code.
  2. Use stubs to vary the behavior of objects in a system, especially for testing failure conditions.
  3. Use "oracles" to determine if the test has failed.

These patterns apply whether we are writing a small, ad hoc test to debug a particular problem or a complex test of multiple subsystems. Tests based on these patterns often crosscut the system because we want to direct many components and verify expected results based on the state of many components.
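The three patterns can be sketched in plain Java before any aspects enter the picture. The following is a minimal, hedged sketch (all names hypothetical): a driver replays a recorded input set through the code under test, and an oracle checks each result:

```java
// Hypothetical sketch of the driver-plus-oracle pattern: a driver
// feeds recorded inputs to the code under test, and an oracle
// decides whether each result passes.
class DriverSketch {
    // Code under test: clamp a requested buffer size into [1, 4096].
    static int clampCapacity(int requested) {
        if (requested < 1) return 1;
        if (requested > 4096) return 4096;
        return requested;
    }

    // Oracle: every result must land in the legal range.
    static boolean resultIsLegal(int result) {
        return result >= 1 && result <= 4096;
    }

    // Driver: replay a recorded input set, counting oracle failures.
    static int countFailures(int[] inputs) {
        int failures = 0;
        for (int i = 0; i < inputs.length; i++) {
            if (!resultIsLegal(clampCapacity(inputs[i]))) {
                failures++;
            }
        }
        return failures;
    }
}
```

The AspectJ versions of these roles, shown in the sections that follow, do the same work without requiring the code under test to expose a testable seam.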

To pull these together in our test design, it's important to express our failure hypotheses. For example, consider the possible failure: "After a printer stream throws an exception, the print job may fail to close the stream, but then completes, and the controller may die if it has no print jobs, which would leave the printer stream without a live controller." Having stated this, we can write aspects to check the liveness and invariants of each component, define when we are printing, provoke, throw or catch exceptions, supply the correct input, and so on. The hypothesis is the overarching concern of a test.

Learn From Your Mistakes
Record the input that led to an exception to make sure it doesn't crop up again.

You can store input that triggers an exception for later replay as a regression test. For example, this aspect stores the input values passed to any main routine if any exceptions were thrown:

aspect MainFailure {
   pointcut main(String[] args) :
      args(args) && execution(static void main(String[]));
   after(String[] args) throwing (Throwable t) : main(args)
      { Log.logFailureCase(args, t, thisJoinPoint); }
}

 

Using Recorders to Capture Input
A recorder captures values to replay or to tell if you have tested all relevant values—values that test each failure hypothesis. For example, if we have a method that multiplies integers, the following set of input is probably sufficient because it addresses failure hypotheses about overflow, sign, identity and correctness.

{0, -1, 1, Integer.MAX_VALUE, Integer.MIN_VALUE, 41, -4096} 
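To see why that input set earns its keep, here is a hedged plain-Java sketch (names illustrative) of a gold-standard oracle for int multiplication: 64-bit arithmetic serves as the reference, so overflow on the extreme values shows up as a mismatch:

```java
// Illustrative sketch: a gold-standard oracle for int multiplication,
// using long arithmetic as the reference result. Overflow shows up
// as a disagreement between the 32-bit and 64-bit computations.
class MultiplyOracle {
    // Code under test: may silently overflow.
    static int multiply(int a, int b) {
        return a * b;
    }

    // Oracle: compare against 64-bit multiplication.
    static boolean overflows(int a, int b) {
        long expected = (long) a * (long) b;
        return (long) multiply(a, b) != expected;
    }
}
```

Only the extreme values in the set exercise the overflow hypothesis; the small values cover sign, identity and plain correctness.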

AspectJ can record input at any join point in program execution, including method calls deep within the system. As an example, our print buffer has a capacity method that may need further testing, depending on inputs not supplied in system testing. This aspect records the current input values.

aspect RecordingInput {
   pointcut capacityCall (int i) : 
      call(public * PrinterBuffer+.capacity (int)) && args(i) ;
   before (int i) : within(com.xerox..*) && capacityCall(i) 
      {log.print("<capacityCall tjp=\"" + thisJoinPoint + " i=\""                  
                + i + "\"/>");}
}

This is our first example aspect. An aspect is like a class, except that it can also include crosscutting code like pointcuts and advice. This code says, "Before calling capacity() from within any com.xerox package, print argument i as input to the XML log." We can inspect the log to see if we have covered our hypothesized failures. We can also rewrite the recorder to handle more methods or automate the completeness test. Recording test input helps identify the unit tests you have to write by telling you what input is covered in system tests.

Using Drivers to Replay Input
To drive methods through relevant input values, we can use around advice. For example, the following aspect works by taking over, whenever a print buffer's capacity method is called, to drive through the range of inputs:

aspect BufferTest {
   int around(int original, PrinterBuffer buffer) :
         call(int PrinterBuffer+.capacity(int)) 
         && args(original) && target(buffer) {
      int[] input = new int[] { 0, 1, 10, 1000, -1, 4096 };
      for (int i = 0; i < input.length; i++) {
         int result = proceed(input[i], buffer); // invoke test
         validateResult(buffer, input[i], result);
      }
      return proceed(original, buffer); // do original processing
    }
    void validateResult(PrinterBuffer buffer, int input, int result) {
      // ... 
    }
} 

Around advice continues with the call by invoking proceed() with the advice argument types. This advice calls proceed() in a loop to repeatedly invoke the capacity() call with each input value, validating the result for each. It returns the actual result to the caller after continuing with the original input.

Around advice can help tame the combinatorial explosion of integration test cases by advising multiple join points in a running program. For example, if the print buffer's resize() method called the capacity() method to see if it needed to be resized, an aspect could use around advice on calls to both methods.

Before, after and around advice enables us to record and play back input for any join point in a running system. Being able to target any point with specific variants helps us assemble repeatable integration tests even on an ad hoc basis, much like a debugger session. Moreover, with some careful coding, we can automate the process of regression testing by capturing and replaying inputs for failures, storing input in a database. But to set up more sophisticated testing behaviors, we may need stubs.

Assembling Tests With Stubs
In integration testing, stubs (or "mock objects") simulate components that may be missing or uncontrollable. For example, rather than setting up the printer to throw an exception, we deploy a stub printer stream to test that clients handle the exception correctly. But in the past, installing stubs could be very difficult. AspectJ avoids the need for some stubs and makes others easy to install at the right points.
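As a baseline, here is what a stub looks like in plain Java, along with the seam problem it depends on. The sketch below (the Stream, StubStream and Client types are illustrative, loosely echoing the article's printer-stream example) tests that a client converts IOException into its own error type; it only works because this client happens to accept the stream in its constructor:

```java
import java.io.IOException;

// Illustrative types, loosely echoing the article's example: a stub
// stream that always fails, used to test that a client converts
// IOException into its own exception class.
interface Stream {
    void write(String data) throws IOException;
}

class StubStream implements Stream {
    // Simulate the failure condition on demand.
    public void write(String data) throws IOException {
        throw new IOException("simulated printer failure");
    }
}

class PrintJobError extends RuntimeException {
    PrintJobError(Throwable cause) { super(cause); }
}

class Client {
    private final Stream stream;
    Client(Stream stream) { this.stream = stream; }

    // Under test: the client must convert IOException to PrintJobError.
    void print(String data) {
        try {
            stream.write(data);
        } catch (IOException e) {
            throw new PrintJobError(e);
        }
    }
}
```

When the component is instead constructed deep inside the system, there is no constructor seam for installing the stub, which is where the advice-based techniques come in.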

To avoid using stubs, we can use advice to change input values, return different values or throw exceptions at any join point. For example, we might hazard that our print job is not converting exceptions from the printer stream. We want our test driver to call the print job, which calls the printer stream, which needs to know to throw the right exception back to the print job, which should convert it and re-throw it to the test driver. This aspect sets up the test:

aspect InjectingIOException { 
   pointcut testEntryPoint(TestDriver driver) : 
      target(driver) && execution(* startTest());
   pointcut testCheckPoint(PrinterStream stream) : 
      execution(public * PrinterStream+.*(..) throws IOException)
      && target(stream); 
   after (TestDriver driver, PrinterStream stream) returning 
      throws IOException :
      cflow(testEntryPoint(driver)) && testCheckPoint(stream) {
         // query driver
         IOException e = driver.createException(stream); 
         if (null != e) throw e;
   }
}

The advice code queries the driver for an exception to throw if this is an error test. It gets the variable driver as bound by the testEntryPoint pointcut and stream as bound by the testCheckPoint pointcut, which picks out any public method declared to throw an IOException. Because this is after returning advice, it runs only where the join point is not throwing an exception (by contrast to after throwing and after advice). Distinguishing abrupt and normal completion enables advice to run only on exception or to avoid interfering with premature failures.

In many cases, we can use around and after advice to change the exceptions or results of a join point to avoid writing stub subclasses. In other cases, it's better to replace a component with a stub. One way is to use around advice to replace input values with your stub objects; another way is to advise constructor calls:

PrinterStream around () : within(PrintJob) && 
   call (PrinterStream+.new(..))  && !call (StubStream+.new(..)) 
      { return new StubStream(thisJoinPoint.getArgs()); }

This advice replaces the result of any PrinterStream constructor call in our print job with a new StubStream.

Stub types open a range of design possibilities. They can be smart (or at least controllable), and we can write pointcuts that pick out join points with only our stub as target:

pointcut stubWrite() : 
   printerStreamTestCalls() && target (StubStream);

This is how we say, "At stream test calls when we are using stub objects …"

Verifying Test Results
We can set up a test hypothesis, but how do we check it? The traditional ways to generalize result verification include gold standards, which compare expected with actual results; substitutes, which compute the same thing using a different algorithm; and round-trips, which make a lossless transformation of the result to another object and back. Round-trips verify not results but process, ensuring that the process did not induce inconsistency in a component. All of these are "oracles" because they determine the fate of the test.

We can use around or after advice to look up golden results or around advice to invoke substitutes and compare the results. Round-trips are less common because, to be useful, they should apply after any possible change. But run early and often, they can detect a failure otherwise unnoticed until much later in the running program.
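For comparison, a substitute oracle is easy to see in plain Java. This hedged sketch (names illustrative) checks a closed-form computation against an independent loop-based algorithm for the same value:

```java
// Illustrative sketch: a "substitute" oracle verifies the code under
// test against an independent algorithm that computes the same value.
class SubstituteOracle {
    // Code under test: closed-form sum 1 + 2 + ... + n.
    static long sumTo(int n) {
        return (long) n * (n + 1) / 2;
    }

    // Substitute: the same value by a different algorithm.
    static long sumToByLoop(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    // Oracle: the two algorithms must agree.
    static boolean agree(int n) {
        return sumTo(n) == sumToByLoop(n);
    }
}
```

A bug in either algorithm is unlikely to be mirrored in the other, so agreement is strong evidence of correctness.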

As an example, we can check our printer streams by converting to buffered form and back, and comparing the resulting stream with the original using equals(). We want to check this on construction and after any public call. Here's the code:

void check(PrinterStream stream) {
   BufferedPrinterStream bufferStream =
      new BufferedPrinterStream(stream);
   PrinterStream newStream = PrinterStream.make(bufferStream);
   if (!stream.equals(newStream)) 
      throw new Error("round-trip failed for " + stream);
}
after () returning (PrinterStream stream) : 
   call (PrinterStream+ com.xerox.printing..*(..)) 
   && !call (PrinterStream
             PrinterStream.make(BufferedPrinterStream)) { 
   check(stream); 
}
after(PrinterStream stream) returning : target(stream) 
   && call(public * *(..)) {
   check(stream);
}

This code first defines the check method. Then the first advice says, "Check after returning normally with a PrinterStream from any call within our packages (except the round-trip method)." The second advice says, "Check after returning normally from any printer stream public method."

As with stubs, oracles can be as generic or as fine-grained as we specify, applying some invariants that are always true and other invariants more carefully.

Small Steps
Testing is important. But some kinds of testing are hard because they interact with code in a crosscutting way. This is most apparent in integration testing, but is also true for unit and system invariants. Since AspectJ modularizes crosscutting concerns, it's a powerful tool for improving quality. Because AspectJ is a seamless extension to Java, you can adopt it in small steps that pay for themselves as you go:

  • Express compile-time invariants, to avoid writing tests.
  • Express runtime invariants, to make your existing tests more effective.
  • Add checks to your existing tests and test cases to your existing suite.
    • Name interesting points in the execution of a program using pointcuts.
    • Check method and class invariants with advice.
    • Assemble integration tests using invariants with recorders, drivers, stubs and oracles.

Each of these steps is valuable on its own, but the big benefit comes from the fact that the code can be reused with each step, as invariants enhance existing tests and aspects combine to form testable descriptions of expected system behavior. When aspects are designed for reuse and you can build new tests from existing ones, you can avoid much of the repetition in testing and become progressively more productive.

The most important benefit of using aspects for testing is being able to work at the right levels of control and abstraction. For integration tests involving crosscutting invariants or structure, you can write tests that specifically target a given failure. When you can easily express failure hypotheses, your testing is both more enjoyable and more effective.

Testing Resources

AspectJ. You can download the documentation, compiler and IDE support for JBuilder, Forte and Emacs, as well as the AspectJ source code under the Mozilla license at http://aspectj.org. Full code listings for this article are available, as well.

Assertions. The new assert keyword in Java 1.4 is documented at
http://java.sun.com/j2se/1.4/docs/guide/lang/assert.html.

Integration Testing. Robert Binder's Testing Object-Oriented Systems (Addison-Wesley, 1997) is a good introduction. He discusses integration test strategies and the technique of failure hypothesis, which he calls "fault models."

JMeter. This popular Web-testing harness is available from http://jakarta.apache.org.

JUnit. This is a popular unit-testing harness (www.junit.org). JUnit's authors describe their excellent philosophy in "JUnit Test-Infected: Programmers Love Writing Tests"
(http://junit.sourceforge.net/doc/testinfected/testing.htm).

—Wes Isberg

