Testing Service Oriented Architectures

Traditional testing techniques assume defective code can be precisely determined. But this isn't always the case with SOA.


October 28, 2008
URL:http://www.drdobbs.com/architecture-and-design/testing-service-oriented-architectures/architecture-and-design/testing-service-oriented-architectures/211600992

Arunava is a Technical Architect at BearingPoint working on SOA and Java Enterprise Architectures. He holds a Ph.D. in Physics from Florida State University and can be contacted at [email protected].


Traditional testing techniques assume defective code can be precisely determined. But this isn't always the case when Service-Oriented Architectures (SOAs) are involved. SOA implies a network of distributed nodes, some of which may be clustered, redundant, or maintained by other organizations. Consequently, isolating problems in a system can be demanding. Also, SOA systems tend to be always online, even when new services are introduced to or removed from live systems.

For these reasons, SOA testing presents a level of opacity that more traditional systems avoid. To address these problems, SOA testing must leverage best practices from existing testing methodologies and create new approaches. In this article, I discuss SOA testing and present a SOA test harness.

SOA Testing

In testing SOA, traditional test practices should not be abandoned. But because of SOA's distributed and technologically heterogeneous nature, testing best practices need to be extended. As with non-SOA code, unit testing of a service is a necessary first step, offering the first line of defense against defects in service implementations. Likewise, integration testing is a necessity as service choreography and orchestration become the means to support business processes. In SOA integration testing, however, services may be provided by other groups without source code, or hosted entirely outside of the testing environment.

With this in mind, it becomes apparent that testing in SOA environments requires the ability to monitor the inputs and outputs of each service, validate data at each node, and assess the behavior of each node under different loads and constraints. As the number of services grows, it becomes essential to monitor the relevant services of a business process or use case in a controlled manner.

One solution to this problem assumes that an Enterprise Service Bus is used and can emit debugging and auditing information as messages pass through (www.crosschecknet.com/soa_testing/TestingInAServiceOrientedWorld.pdf). While useful, this approach often lacks the granularity necessary in debugging complex applications. Another potentially more complete solution to the problem is to isolate the services of interest within a harness that simulates the behavior of the system at the boundaries, then monitor each service in the system.

Unit Testing

Unit testing in SOA is not only the testing of the components comprising a service, but also the testing of individual services in isolation. This can be done using a static or dynamic client that invokes the service. In this manner, both functional and nonfunctional testing of a service can be performed. The validity of the content in the request or response can be assessed via XPath or XQuery statements. This is perhaps the easiest of the tests to perform because preexisting Dynamic Invocation Interface (DII) clients are available (java.sun.com/j2ee/1.4/docs/tutorial-update2/doc/JAXRPC5.html).
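For instance, a minimal sketch of such a unit test in Ruby might use SOAP4R's dynamic driver to invoke a single operation and REXML to apply an XPath assertion to a previously captured response envelope. The WeatherService endpoint and getWeather operation are the same illustrative names used later in Example 2; getweather_response.xml is a placeholder for a recorded message.

require 'soap/rpc/driver'
require 'rexml/document'
include REXML

# bind dynamically to the service under test (illustrative endpoint)
driver = SOAP::RPC::Driver.new(
    'http://localhost:8080/axis2/services/WeatherService',
    'http://ws.apache.org/axis2')
driver.add_method('getWeather')

# functional check: the operation should answer without raising a fault
result = driver.getWeather
raise "empty response" if result.nil?

# content check: apply an XPath assertion to a captured response envelope
# (getweather_response.xml is a placeholder for a recorded message)
envelope = Document.new(File.read('getweather_response.xml'))
temperature = XPath.first(envelope, '//*[local-name()="temperature"]')
raise "temperature element missing" if temperature.nil?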

Integration Testing

Integration testing in SOA is the testing of multiple services in the fulfillment of a use case or business process. Again, challenges arise in transparency within and among services since some of these services can cross organizational boundaries and leverage external systems.

Perhaps the most conceptually straightforward approach to integration testing of SOA is to use service proxies that substitute for existing services. This lets the boundaries of the system of interest be simulated. For proxies to deliver meaningful content to these services, realistic data must be obtained. This can be done by monitoring the node that transmits the data and copying it before sending it forward. The proxy can then leverage the sample data for testing purposes. One means of generating proxies is to use DII and Dynamic Service Interfaces (DSI) together with the service's interface definition. Pulling previously collected data from a data store provides an approximation of the actual service. A mechanism should allow toggling between live and simulated data. Similarly, it should be possible to use the proxies to simulate network load and latency.
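A minimal sketch of such a toggling proxy in Ruby, with illustrative endpoint and operation names and YAML files standing in for the data store, might look like this:

require 'soap/rpc/driver'
require 'yaml'

# Toggling service proxy: in live mode it forwards the call to the real
# service and records the result; otherwise it replays recorded data.
class ServiceProxy
  def initialize(endpoint, namespace, live = false)
    @live = live
    @driver = SOAP::RPC::Driver.new(endpoint, namespace)
    @registered = {}
  end

  # for operations with parameters, their names would also be passed to add_method
  def invoke(operation, *args)
    unless @registered[operation]
      @driver.add_method(operation)
      @registered[operation] = true
    end
    if @live
      result = @driver.send(operation, *args)
      File.open("#{operation}.yml", 'w') { |f| YAML.dump(result, f) }  # record
      result
    else
      YAML.load_file("#{operation}.yml")                               # replay
    end
  end
end

# proxy = ServiceProxy.new('http://localhost:8080/axis2/services/WeatherService',
#                          'http://ws.apache.org/axis2', false)
# puts proxy.invoke('getWeather').inspect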

In addition to proxies, agents can be added to services through plug-in mechanisms. Agents allow monitoring of service inputs and outputs. Additionally, they can be used to monitor various metrics, such as processing time or network behavior. The granularity of the agent (deployed as a proxy, or embedded in the service) allows precise monitoring at multiple levels. If agents possess XPath and XQuery capabilities, content validation can be performed as well.
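As a sketch of the idea in Ruby, an agent could wrap a service invocation, record its inputs and outputs, and time the call. The metrics sink here is simply standard output and would be replaced by a central monitor in practice; the names are purely illustrative.

# Monitoring agent: wraps a callable service operation, records inputs and
# outputs, and measures elapsed processing time.
class MonitoringAgent
  def initialize(name, &operation)
    @name = name
    @operation = operation
  end

  def call(*args)
    started = Time.now
    output  = @operation.call(*args)
    elapsed = Time.now - started
    # in a real harness these would feed a central monitor, not stdout
    puts "#{@name} in:  #{args.inspect}"
    puts "#{@name} out: #{output.inspect}"
    puts "#{@name} took #{elapsed} s"
    output
  end
end

# agent = MonitoringAgent.new('getWeather') { driver.getWeather }
# agent.call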

Code-Level Change and Instrumentation

When source code is available, test harness frameworks can perform code-level monitoring using utility classes or code instrumentation. One approach to code-level monitoring is to use aspects. In addition to introducing cross-cutting concerns into the source code, aspect frameworks (such as AspectJ) can instrument libraries. Consequently, support for pointcuts and join points can be introduced at the code level as needed. To reduce overhead, the aspect can be written to pass information to a separate monitoring program.
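AspectJ applies to Java services; as a loose Ruby analog of the same idea (an illustration only, not part of the harness), a module can be prepended to a service class so that selected methods are wrapped and timing data is handed to a separate monitor.

# Aspect-style instrumentation in Ruby: a prepended module acts as "advice"
# around selected methods and passes timing data to a separate monitor.
module TimingAspect
  MONITOR = []   # stand-in for an external monitoring program

  def self.applied_to(klass, *method_names)
    wrapper = Module.new do
      method_names.each do |name|
        define_method(name) do |*args, &blk|
          started = Time.now
          result  = super(*args, &blk)
          TimingAspect::MONITOR << [klass.name, name, Time.now - started]
          result
        end
      end
    end
    klass.prepend(wrapper)
  end
end

# usage sketch, assuming a hypothetical service implementation class:
# TimingAspect.applied_to(WeatherServiceImpl, :get_weather)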

Another useful outcome of such a tool is coverage testing of services in a dynamic service composition environment. This enables an understanding of compositions favored by processes and an insight into which services are used or avoided by user communities.

A Test Harness in Ruby

To test some of these ideas, my team implemented a simple record-and-playback style test harness in Ruby (www2.ruby-lang.org/en/). We decided that Ruby's dynamic programming and metaprogramming features would be useful in creating a harness that could add methods declaratively at runtime. Since our team consisted of Java developers trying Ruby for the first time, we decided to use JRuby (jruby.codehaus.org). This let us incorporate well-understood Java frameworks from within the Ruby source.

Using SOAP4R (dev.ctor.org/soap4r), we wrote a SOAP Client. Data was then recorded from an existing Apache Axis2 service. The output was saved using YAML (www.yaml.org), a powerful human-readable data serialization mechanism available with Ruby implementations. We also used SOAP4R to create a standalone SOAP Server. Methods to be tested and the corresponding results were maintained as key-value pairs in a Java properties file. The format of each key-value pair is:

methodname_param1param2...paramN=recorded_response.yml
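For a parameterless operation such as the getWeather method recorded in Example 2, an entry might simply read:

getWeather=getweather.yml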

If desired, we could have extended the format to refer to specific service URIs as well. Finally, we added methods at runtime using Ruby's metaprogramming facilities. Example 1 creates the methods. The Java classes (Properties, FileInputStream) are leveraged by way of JRuby. Example 2 presents the corresponding client code used to record data from the web service.

require 'java'
require 'yaml'

# java.util.Properties and java.io.FileInputStream are used through JRuby
java_import 'java.util.Properties'
java_import 'java.io.FileInputStream'

# read the methods and responses from test.properties
fis = FileInputStream.new("test.properties")
@properties = Properties.new
@properties.load(fis)
fis.close
keys = @properties.keys

# add methods to return the recorded response
keys.each do |key|
  puts "Adding method " + key.to_s
  # register the operation with the SOAP server (SOAP4R add_method)
  add_method(self, key.to_s, Object.new)
  # define the playback implementation at runtime
  self.class.send('define_method', key.to_s) { |*args|
    if @properties.containsKey(key.to_s)
      value = @properties.get(key)
      puts "Found operation " + key.to_s + " with value " + value.to_s
      # retrieve the data from the saved result
      if value
        if value.rindex('.yml')
          data = YAML::load_file(value)
          return data
        else
          file = File.new(value)
          data = file.readlines
          file.close
          return data
        end
      end
    end
  }
end

Example 1: Code fragment to add methods.

While this example is simple, it provides the skeleton of a test harness for services. With moderate effort, you can extend it to encompass multiple services with multiple methods. Consequently, the server can be used to create a boundary of services to be invoked by the service(s) under study.

require 'soap/rpc/driver'
require 'yaml'

# point the driver at the live Axis2 service in order to record its output
driver = SOAP::RPC::Driver.new(
    'http://localhost:8080/axis2/services/WeatherService',
    'http://ws.apache.org/axis2')
# alternatively, point it at the standalone playback server
#driver = SOAP::RPC::Driver.new(
#    'http://localhost:9080/', 'urn:weatherSoapServer')
driver.add_method('getWeather')

# invoke the service and save the response as YAML
file = File.new('getweather.yml', 'w')
YAML.dump(driver.getWeather, file)
file.close

Example 2: Client code to store service output.

To the best of my knowledge, two commercial off-the-shelf solutions provide this sort of functionality: iTKO's LISA (www.itko.com) and AmberPoint's Validation (www.amberpoint.com/products/validation.shtml). Both provide service virtualization to varying degrees. While AmberPoint's solution is a development-time record-and-playback utility, iTKO's solution also has elements for continuous testing of services.
