Acceptance Test Driven Development
In response to my editorial anent TDD as a design practice rather than a validation process, I received several thoughtful comments. These made the point that while TDD can make you think about your code, unit tests by themselves are insufficient to validate it. I agree wholeheartedly with this perspective, and I should note that most professionals who use TDD would agree as well.
Unit tests must be combined with higher-level tests before developers can begin to feel that their code is reliable and conforms to the user's stated needs. Because of their fine granularity, unit tests can fulfill neither of these mandates. In a letter to me, Capers Jones noted that in studies, unit tests remove only about 35% of defects in software. This number rises to 40% if the tests are written before the code.
The outstanding question, then, is: are unit tests the right level of test for doing TDD successfully? I have long maintained that they are not. My preference has been higher-level tests. For a long time, I thought integration tests were that better level. But as time has passed, my experience has led me to move up the chain even further. I am becoming convinced that the right level to work at before writing code is the acceptance test.
Acceptance tests have several key advantages over lower-level tests:
- As a design framework prior to coding, they focus the developer on the user's needs. This is an important benefit. It is much easier to design functions correctly when you can see them holistically than with the bottom-up approach of TDD, where the focus is on tiny steps developed within the scope of a single function.
- The developer validates the code at the level of the user experience. If the previous benefit was important, this one is crucial. It is entirely possible to write software that enjoys 100% code coverage, where every unit test passes, but whose functionality is still broken. Even if the functionality is correct, the software still might not do what the user requested. In counterpoint, if the code is defined by acceptance tests, it is no longer possible to pass all the tests and still not work, or not do what the user requested. Passing all acceptance tests (presuming they're correctly written) guarantees that the user's needs are satisfied and validates the software.
- Acceptance tests document the software. This last point gets at one of the most pervasive misconceptions about unit tests: that they document the code. This is a canard that keeps being tossed about as a good reason for writing unit tests. It is, in a word, nonsense. If you're looking at unit tests as documentation for the code, something upstream is seriously broken. (Allow me to take this up in a separate column, so as not to get too far off topic here.)
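To make the coverage point concrete, here is a minimal sketch. The domain (a hypothetical `checkout_total` function with a coupon code) is invented for illustration; the point is that each unit can be individually correct while the composed, user-visible behavior is wrong, and only a test written at the level of the user's story catches it.

```python
def discount_rate(code):
    """Unit under test: maps a coupon code to a discount rate."""
    return {"SAVE10": 0.10}.get(code, 0.0)

def checkout_total(items, code):
    """Composes the units, but forgets to apply the discount."""
    subtotal = sum(items)
    rate = discount_rate(code)  # computed, then never used: the bug
    return subtotal             # should be subtotal * (1 - rate)

# Unit test: passes, because the unit in isolation is correct.
assert discount_rate("SAVE10") == 0.10

# Acceptance check, written from the user's story
# ("a SAVE10 coupon takes 10% off my order"): exposes the bug.
total = checkout_total([60.0, 40.0], "SAVE10")
if total == 90.0:
    print("acceptance test passed")
else:
    print(f"acceptance test FAILED: got {total}")
```

Full line coverage of both functions, every unit assertion green, and the software still does not do what the user asked for.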
The use of acceptance tests to drive TDD is a point of view that has gained traction in various parts of the industry over the last decade. Its most established home is in behavior-driven development (BDD), which works by transforming use cases and user requirements into an extensive set of tests that the code must subsequently satisfy. There are many BDD tools in the OSS bazaar. One of my favorites is easyb, which won a Jolt award in 2009.
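The transformation BDD performs can be sketched as follows. Real BDD tools such as easyb express scenarios in a Given/When/Then DSL; this is a plain-Python approximation of the same idea, with an invented `withdraw` operation standing in for the code under specification.

```python
def withdraw(balance, amount):
    """Hypothetical account operation being specified."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Scenario: customer withdraws within their balance.
given_balance = 100                      # given an account with $100
when_result = withdraw(given_balance, 30)  # when they withdraw $30
assert when_result == 70                 # then the balance is $70

# Scenario: customer cannot overdraw.
try:
    withdraw(50, 80)                     # when they try to overdraw
    overdraw_rejected = False
except ValueError:
    overdraw_rejected = True
assert overdraw_rejected                 # then the withdrawal is refused
```

Each scenario reads as a user requirement first and an executable test second, which is precisely the inversion that distinguishes this style from bottom-up unit testing.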
Lately, the term acceptance test-driven development (ATDD) has emerged in its own right; it stresses the use of acceptance testing tools, such as FitNesse, to steer coding. How these tools can drive development is thoughtfully explained in a recent book by Ken Pugh, which I recommend if this approach makes sense to you.
What ATDD does is substitute a higher level of test into the TDD process. It is important to note that it does not denigrate the use of unit tests. It simply forces the developer to think at a higher level and focus on the user's needs throughout the implementation.