Step 2: Establish a Reliable Regression Base
Whenever new capabilities are added or bugs are fixed in existing code, regression bugs are a common yet unintended side effect. These bugs occur when a software function that previously worked as expected stops working as a result of other code changes (see en.wikipedia.org/wiki/Regression_testing). In fact, studies have shown that, on average, 25 percent of software defects are introduced while programmers are changing and fixing existing code during maintenance; see Successful Software Process Improvement by R.B. Grady (Prentice Hall, 1997). Regression testing is the primary technique for containing such risks. With regression testing, the software behavior captured by previously written tests is verified by rerunning those tests and comparing the current results against the originally recorded "golden set."
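The golden-set comparison at the heart of regression testing can be sketched as follows. This is a minimal illustration, not a tool recommendation: `compute_report` stands in for the module under test, and the JSON file stands in for the recorded reference output.

```python
import json

def compute_report(values):
    # Hypothetical function under test: summarize a list of numbers.
    return {"count": len(values), "total": sum(values)}

def capture_golden(values, path="golden.json"):
    # Run once on a trusted build to record the reference ("golden") output.
    with open(path, "w") as f:
        json.dump(compute_report(values), f, sort_keys=True)

def check_regression(values, path="golden.json"):
    # On every later run, compare the current output against the golden set;
    # any difference signals a regression.
    with open(path) as f:
        golden = json.load(f)
    return compute_report(values) == golden

capture_golden([1, 2, 3])
assert check_regression([1, 2, 3])  # unchanged behavior matches the golden set
```

In practice the "golden set" is the accumulated output of the whole suite, but the capture-then-compare cycle is the same.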
A challenging situation arises when the inherited code to be modified lacks regression tests. In such cases, try to resist the temptation to "hack at it anyway," and first create some regression tests before making any code changes. This safety net will help verify that local code changes do not break something else in the code. Otherwise, attempting to maintain the integrity of the previous functionality while introducing new features becomes a high-stakes crapshoot.
There are two complementary approaches to creating such a test suite. A manual approach involves writing some tests at the highest abstraction level of the code. Because writing tests by hand is generally expensive, it's best to minimize that effort upfront while still capturing the reference behavior. This can be accomplished by identifying the module-under-test's high-level APIs and exercising them first. The tests should exercise the module's "positive" behavior, producing some code coverage for expected conditions. This is efficient because even a few tests aimed at the high-level API exercise many of the module's lower-level functions. The resulting test coverage can then be amended with lower-level tests as required, until coverage reaches a comfortable level, before you change the code.
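A couple of such high-level "positive" tests might look like the sketch below. The module under test is hypothetical, a small key=value configuration parser inherited without tests; the point is that each test at the top-level API drives many lower-level paths (line splitting, trimming, comment handling) at once.

```python
def parse_config(text):
    # Hypothetical inherited module: parse "key = value" lines,
    # ignoring blank lines and "#" comments.
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

def test_basic_pairs():
    # Positive behavior at the highest abstraction level.
    assert parse_config("a=1\nb=2") == {"a": "1", "b": "2"}

def test_comments_and_blanks_ignored():
    # One more expected-conditions case; together these two tests
    # already touch most of the parser's internals.
    assert parse_config("# note\n\na = 1\n") == {"a": "1"}

test_basic_pairs()
test_comments_and_blanks_ignored()
```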
Another way to approach this problem is by creating tests from the bottom up, starting from the leaf-level functions. Such tests typically exercise more paths through the code. However, the bottom-up approach is much more labor intensive because many more tests need to be created. To get results in a reasonable time, you need an automated tool capable of auto-generating API tests (either with semirandom inputs or over specified value ranges) and capturing their results. Some commercial testing tools offer this capability; among open source tools, see CxxTest (cxxtest.sourceforge.net).
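The generate-and-capture idea behind such tools can be sketched in a few lines. Assume a hypothetical leaf-level function `clamp` inherited without tests: semirandom inputs are generated from a fixed seed, the current outputs are captured as the baseline, and later runs replay every case and report mismatches.

```python
import random

def clamp(value, low, high):
    # Hypothetical leaf-level function inherited without tests.
    return max(low, min(high, value))

def generate_baseline(func, n=100, seed=42):
    # Auto-generate semirandom inputs and capture the current outputs
    # as the regression baseline (what a generation tool would record).
    rng = random.Random(seed)
    cases = []
    for _ in range(n):
        args = (rng.randint(-50, 50), rng.randint(-10, 0), rng.randint(1, 10))
        cases.append((args, func(*args)))
    return cases

def replay(func, cases):
    # Rerun every captured case; any mismatch is a regression.
    return [(args, expected) for args, expected in cases
            if func(*args) != expected]

baseline = generate_baseline(clamp)
assert replay(clamp, baseline) == []  # code matches its own captured baseline
```

Seeding the generator makes the "random" suite reproducible, which is essential when the captured results serve as a golden set.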
The primary method for assessing the completeness of such a regression test suite is to track test coverage, a cumulative measurement of how much code the tests actually exercise. Although the commonly used line coverage metric is certainly better than nothing, it is by no means sufficient. While line coverage indicates which code was executed, it largely ignores which conditions led to executing it. In general, programs can be thought of as giant state machines, with code paths representing (sequences of) transitions between the states. Real paths are formed by control branches activated by appropriate values of conditions. Hence, to measure how much of a software system's real behavior is captured by tests, it is typically necessary to track path, branch, and condition coverage metrics. A number of commercial tools support this level of coverage analysis.
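A small, hypothetical example shows why line coverage alone misleads. A single test of `apply_discount` below executes every line (100 percent line coverage), yet it never takes the false branch of the `if` and never isolates the second operand of the `or`, exactly the gaps that branch and condition coverage expose.

```python
def apply_discount(price, is_member, has_coupon):
    # Hypothetical function: integer price, 10% discount for
    # members or coupon holders.
    pct = 0
    if is_member or has_coupon:   # condition coverage: each operand
        pct = 10                  # must decide the outcome at least once
    return price * (100 - pct) // 100

# This one call executes every line above -- 100% line coverage:
assert apply_discount(100, True, True) == 90

# ...yet the 'if' has never been false, and 'has_coupon' alone is untested.
# Branch and condition coverage demand cases like these:
assert apply_discount(100, False, False) == 100  # false branch
assert apply_discount(100, False, True) == 90    # second operand decides
```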
Once the test suite is in place, its execution must be automated so that it can be run on a regular and frequent basis. This is an absolute requirement for reliably checking existing functionality. Use scripts, crontab, CruiseControl, or any other automated execution fixture to run the test suite automatically, and collect coverage from the runs. In addition, I recommend using a runtime memory-error detection tool during test suite execution. These tools accurately detect runtime bugs, cover both your source and third-party libraries, and generally check for errors that can't currently be detected via static analysis.
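A minimal automation fixture can be as simple as the sketch below, suitable for invocation from cron or a wrapper script. All names and paths here are hypothetical: the runner executes whatever suite command you give it and appends a timestamped pass/fail record so trends are visible between runs.

```python
import datetime
import subprocess
import sys

def run_suite(command, log_path="regression.log"):
    # Run the regression suite command and append a timestamped
    # PASS/FAIL record; a nonzero exit code means failures.
    result = subprocess.run(command, capture_output=True, text=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    status = "PASS" if result.returncode == 0 else "FAIL"
    with open(log_path, "a") as log:
        log.write(f"{stamp} {status} {' '.join(command)}\n")
    return result.returncode == 0

# Demonstration with a trivial stand-in for the real suite command;
# in a hypothetical crontab entry this script would run nightly, e.g.:
#   0 2 * * *  /usr/bin/python3 /opt/project/run_regression.py
ok = run_suite([sys.executable, "-c", "pass"])
assert ok
```

Collecting coverage and memory-checker output from the same scheduled run keeps all three signals (failures, coverage, runtime errors) on one cadence.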
At this point, you should be prepared to start changing the code.