Better, Stronger, Faster Integration Testing

We have the technology to rebuild integration tests to make them more effective and less mind-numbing


July 12, 2006
URL: http://www.drdobbs.com/tools/better-stronger-faster-integration-testi/190302526

Andrew and Jennifer are the authors of Applied Software Project Management (O'Reilly & Associates). They can be contacted at www.stellman-greene.com.


We can all agree that integration testing is important. But it's not fun. Yes, programmers need to make sure that the software they deliver works. It's all too common to send a build off to users or testers, only to have it immediately bounce back with serious problems. We all recognize that sinking feeling when we realize that we're in for a long night of tracking down bugs and patching code. And there's nothing more deflating than thinking that you've built a great piece of software, only to get nothing but negative feedback. So we, as programmers, bite the bullet and run our integration tests. But even when they're effective, they can be monotonous.

Typically, the goal of integration testing is to prove that all of the features have been developed, and that they work together well enough for the project to be delivered. For some projects, that delivery may be to a customer or user. In others, the software is delivered to a testing team for functional, system, and regression tests. On an iterative or Agile project, integration tests happen at the end of each iteration. But in all cases, developers need to convince themselves and others that the software really works before they feel comfortable releasing it.

Every project you've ever worked on with more than one developer has done some sort of integration testing. The software is broken up into pieces, each of which is developed by individual programmers. When programmers feel that their work is done, they integrate it with the rest of the product and make sure that it "plays nice" with the rest of the software. This is done in a variety of ways. On some projects, it's unplanned and ad hoc; developers just play with the software in ways that they think might break it. In more structured environments, a QA team defines a set of tests that exercise the core behavior across multiple features based on integrated code, then passes those tests on to developers to be run. The tests must pass before the QA team accepts the build. But in all cases, developers are left with a task that's relatively unrewarding, time-consuming, and not intellectually engaging.

"We can rebuild..."

One reason many programmers view integration testing as a "necessary evil" of software development is that the interesting work is usually finished before integration tests begin. The architecture decisions have been made and implemented, and the only changes made to the code are done to fix minute details. Often, programmers delegate integration testing tasks to junior team members, who end up mindlessly running a subset of the QA team's test cases. It's not uncommon to draft testers for this task as well. Everyone finishes their piece, then moves to something else, until someone throws a bug back to them.

It doesn't have to be this way. Instead of looking at these tests as a final obstacle before the handoff, the team can view integration testing as an opportunity to take a step back and get a good sense of what they've built. The team can cap off a project by bringing everyone together to figure out how to put it through its paces. If done right, integration testing can actually be interesting, and even a morale booster. Building a house of cards is fun, but kicking it down is even more so! And if team members uncover a serious problem, they can all share in the sense of relief that they got to it first, before anyone else found out about it.

A good integration test takes advantage of the fact that the programmers have intimate knowledge of the code. When the team has just finished writing the code for the project, they know more about it than anyone else at any other time. If you've just spent months building a piece of software, you know where the stitches are. You know how it breaks down into pieces, and you have a better idea than anyone else about what might break it. Integration testing is your chance to prove how good your solution really is. If you treat it that way, then you and the entire team can get behind the product--and deal with any major problems then and there, while the memory of its construction is still fresh.

Better, Stronger, Faster Integration Tests

Users don't use the software one object at a time. They don't deliberately try to break it. They just want to get their work done. But to do that work, a user typically has long, complex interactions with the software. It's these complex interactions that generate the "off-the-wall" bugs that always seem to catch the programmers off guard. It's common for a programmer to be surprised when he finds out how the user actually uses the software, and it's not uncommon for that use to differ significantly from what the programmer intended. That's why it's important to test the software in a way that gets all of its components interacting together. And that's the goal of this practice for integration testing.

This method of integration testing gets the programmers working together in a way that resembles a scaled-down version of a full test cycle. Before it starts, the team should be relatively confident that they have completed their programming, and that the code is ready to roll. This practice requires roughly a day of work for the entire team. To make sure that it only takes about a day (or two), every single developer should take part in this test. A larger team will expend more effort without needing more calendar time. Note that if the project is very large, it usually makes sense to break it into multiple chunks, and have a separate integration test at the end of each one. (A good rule of thumb for sizing each chunk is that if it seems like the integration tests will take far more than two days to do, then the project should probably be broken down further.)

Table 1 shows the steps involved in running the integration tests.

Preparation

1) Each programmer on the team (including the team lead) writes down a list of features that they worked on, and comes up with two to five test cases for each feature.

2) The team meets to combine their results. The first half of the meeting is used to review everyone's work. The team uses the rest of the meeting to brainstorm new test cases that were missed.

3) The test cases are divided into packets for each team member to test, and distributed to the team. No team member executes a test case that they wrote--everyone executes tests written by others.

4) The lead creates an empty spreadsheet for storing test results, which is placed on a shared drive for everyone to update. (If there's already a bug tracking database like Bugzilla installed, use that instead!)

Execution and review

1) The team cuts a build and distributes it to the developers.

2) Each team member runs through one packet of test cases.

3) When a team member finds any behavior that does not match his or her expectations, a bug is added to the spreadsheet (or bug tracker).

4) The team meets to talk about which bugs to fix.

Table 1: Running a better, stronger, faster integration test.
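To make step 4 of the preparation stage concrete, here is a minimal sketch of the shared results sheet kept as a CSV file. The column names, file path, and helper functions are assumptions for illustration, not a prescribed layout; a bug tracker like Bugzilla serves the same purpose.

# A minimal sketch of the shared results sheet from step 4 above.
# The columns are hypothetical -- any layout that captures who ran
# which test, against which build, and what happened will do.
import csv

RESULT_COLUMNS = ["test_id", "tester", "build", "result", "notes"]

def create_results_sheet(path):
    """Create an empty results sheet (as CSV) on the shared drive."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(RESULT_COLUMNS)

def record_result(path, test_id, tester, build, result, notes=""):
    """Append one outcome; result is "pass" or "fail"."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([test_id, tester, build, result, notes])

# Example usage (the path and names are hypothetical):
# create_results_sheet("results.csv")
# record_result("results.csv", 7, "andrew", "build-142", "fail",
#               "replacement text kept the original casing")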

The test is divided into two stages. The goal of the preparation stage is to create a packet of test cases for each team member. A typical test case includes these sections: a name, preconditions, steps, and expected results.

Table 2 is an example of a test case that would exercise one specific behavior in a word processing application. In this example, the programmers intended the word processor's search-and-replace feature to work in such a way that if the original text had been all lowercase, then the replacement text had to be inserted in all lowercase.

Name: Test #7: Verify that lowercase data entry results in lowercase insert

Preconditions: The test document TESTDOC.DOC on the shared drive is loaded. (Line #38 has the text "this is the search term" in it.)

Steps:
1. Click the "Search and Replace" button.
2. Click in the "Search Term" field.
3. Enter: This is the Search Term
4. Click in the "Replacement Text" field.
5. Enter: This IS THE Replacement TeRM
6. Verify that the "Case Sensitivity" checkbox is unchecked.
7. Click the OK button.

Expected Results: The search-and-replace window is dismissed, and the text "this is the search term" has been replaced by "this is the replacement term" in all lowercase.

Table 2: Example of an integration test case
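Because the packets get assembled, traded, and tracked, it can help to capture each test case as a structured record. The following Python sketch mirrors the sections above; the field names are our own and aren't tied to any particular tool.

# A sketch of the test case above as a structured record. The author
# field is used later to make sure nobody runs his or her own tests.
from dataclasses import dataclass

@dataclass
class TestCase:
    test_id: int
    name: str
    preconditions: str
    steps: list          # explicit, numbered steps so any run is reproducible
    expected_results: str
    author: str = ""

lowercase_insert = TestCase(
    test_id=7,
    name="Verify that lowercase data entry results in lowercase insert",
    preconditions='TESTDOC.DOC is loaded; line #38 contains '
                  '"this is the search term".',
    steps=[
        'Click the "Search and Replace" button.',
        'Click in the "Search Term" field.',
        "Enter: This is the Search Term",
        'Click in the "Replacement Text" field.',
        "Enter: This IS THE Replacement TeRM",
        'Verify that the "Case Sensitivity" checkbox is unchecked.',
        "Click the OK button.",
    ],
    expected_results="The dialog is dismissed and the matched text is "
                     "replaced by the replacement term in all lowercase.",
)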

Identifying the Steps

The steps in the test case must be written out explicitly to make sure that everyone runs the test the same way. That way, if bugs are found, they're easy to replicate. This is why it's important that each team member only run test cases that were written by someone else: It ensures that everyone writes their test cases in a way that can be reproduced.

There will almost always be more than one round of execution and review. After the testing session in which all of the team members execute their tests, the team sits down to review the results that they added to the spreadsheet (or the database). For each bug that was found, the team must make a decision about whether or not to fix it. It is entirely possible that they will decide to release the software even though it contains defects that they know about.

The only rule is that if the team makes any changes to the code, they must agree to spend another iteration running the next round of execution and review. It's often tempting to just fix the code and release it, rather than take the time to run another round of tests. But when you change code, you run the risk of introducing more defects. The only way to make sure a change didn't break the software is to re-run the tests. After all, you thought the software was ready to deliver before you started the integration testing. If you hadn't run the tests then, you wouldn't have even known about the bug that's now giving you trouble.

To keep this interesting for the team--and to make sure that they all get a good sense of how the product is behaving--the team members trade packets after each round. No team member gets the packet that he or she created. This ensures that nobody ever tests his or her own code, because people tend to be blind to bugs that they introduced. For each bug that the team decides to fix, the person who fixes the defect must create a new test case to verify the fix, and add it to the packet.
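One simple way to run the trading is a round-by-round rotation. The sketch below is one possible scheme, not the only one; it guarantees that no team member ever receives his or her own packet (for teams of two or more).

# Rotate packets among team members so that the assignment changes
# every round and nobody is ever handed the packet he or she wrote.
def rotate_packets(authors, round_number):
    """Map each member to the author whose packet they run this round."""
    n = len(authors)
    # Shift by 1..n-1 positions, never 0, so self-assignment is impossible.
    shift = (round_number % (n - 1)) + 1
    return {member: authors[(i + shift) % n]
            for i, member in enumerate(authors)}

team = ["Ana", "Ben", "Carla", "Dev"]
print(rotate_packets(team, round_number=0))
# {'Ana': 'Ben', 'Ben': 'Carla', 'Carla': 'Dev', 'Dev': 'Ana'}

Because the shift is the same for everyone in a given round, each packet is run exactly once per round, and the assignment changes from round to round.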

One big advantage to having different people test parts of the software that may be new to them is that it keeps the testing from becoming monotonous. Programmers generally only get to work on a small piece of a project. It is interesting and rewarding for someone to see how her piece fits into the overall project. By working with features that other people created, she can gain perspective about what her teammates have been working on. This can really help the team crystallize, which will help them cooperate better on future projects.

There is no set number of iterations that the team must go through. Instead, the testing process repeats until every single team member feels confident that the product is ready to ship. This consensus is the most valuable product of integration testing. Performing these tests will give the team members a good feel for the quality of the software. If they can run through these tests, discuss the results, and still feel that they can get behind the final product, then it's going to be much more polished when it finally reaches the users. And if users come back with issues, the team members will have a much better sense of how users actually work with the software in real life, which will give the team a head start in understanding and fixing the problem.

What It Is, and What It Isn't

When the team designs the integration tests, they are much more likely to address the real weaknesses of the software. And if a particular test fails, there's a good chance that the team members will know exactly what caused it. In contrast, consider what would happen if the same defect were found two months later in a test cycle. A programmer would need to set up the environment again, trace through the code, replicate the malfunctioning behavior, and reverse-engineer and repair the underlying cause. He wouldn't have the advantage that comes with having recently designed the tests to exercise specific code.

Integration testing is not a replacement for functional and regression testing. But it does give the team a relatively light-weight tool to help them evaluate whether or not the product is close to release.

In an agile environment, where there's little tolerance for long testing schedules, this kind of integration testing can be extremely valuable. Typically, code developed by an agile team will have extensive unit tests. However, unit tests typically verify only that individual objects function correctly. There is often at least a small gap in testing how those objects interact. This gap is compounded when humans are involved, because unit tests must always run automatically and independently, while humans tend to string together behaviors that call upon many areas of the software. This is exactly the sort of condition that automated unit tests don't replicate. And many automated unit tests do not cover the user interface, only the objects that the interface is wired to. By including an integration test at the end of every iteration, an agile team can deliver better code to their users.
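To make that gap concrete, here is a sketch contrasting the two styles. The WordProcessor class and its methods are hypothetical stand-ins for whatever the team actually built, not a real API.

import unittest

class WordProcessor:
    """Toy model: just enough behavior to illustrate the two test styles."""
    def __init__(self):
        self.text = ""

    def type_text(self, s):
        self.text += s

    def search_and_replace(self, term, replacement, case_sensitive=False):
        if not case_sensitive:
            # Mirror the rule from Table 2: a lowercase match gets a
            # lowercase replacement, however the replacement was entered.
            self.text = self.text.replace(term.lower(), replacement.lower())
        else:
            self.text = self.text.replace(term, replacement)

class UnitStyleTest(unittest.TestCase):
    # Checks one object, one behavior, in isolation.
    def test_replace_alone(self):
        wp = WordProcessor()
        wp.text = "this is the search term"
        wp.search_and_replace("This is the Search Term", "The Replacement")
        self.assertEqual(wp.text, "the replacement")

class IntegrationStyleTest(unittest.TestCase):
    # Strings together the kind of sequence a human would perform:
    # type some text, type more, then search and replace across it.
    def test_user_like_session(self):
        wp = WordProcessor()
        wp.type_text("draft: this is the search term, and again: ")
        wp.type_text("this is the search term")
        wp.search_and_replace("This is the Search Term", "DONE")
        self.assertNotIn("search term", wp.text)
        self.assertIn("done", wp.text)

if __name__ == "__main__":
    unittest.main()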

But even an organization with a full testing process will benefit from developer-run integration tests. This integration practice makes an excellent addition to the exit criteria for the software development phase of a project with a "heavy-weight" software process. It catches defects earlier and can reduce the number of regression testing cycles, which will lead to shorter schedules. And, just as importantly, it can serve as a superior smoke test. Many teams use a subset of the system test cases as a smoke test that must be passed before a build can be accepted into testing. This practice will replace that subset with substantially different tests. Those tests will still verify that the build is ready to move forward in the process, but they have the advantage of being able to catch different defects than those that the QA team would have thought to look for. They provide wider coverage, which leads to higher quality. But most importantly, integration tests have the intangible benefit of giving the entire team--and not just the testers--a personal stake in the quality of the product.
