Our esteemed editor, Andrew Binstock, stirred up a hornets' nest a few weeks back by commenting that people have become doctrinaire about TDD. He observed: "all the benefits [of TDD that Rob Meyers lists] could be attained equally by writing tests after the code, rather than before." I absolutely agree with the doctrinaire part, but not with that "after the code" part. Let me explain.
To my mind, TDD (and its more-refined cousins, BDD and ATDD) is not a testing methodology at all. It's a design methodology. In fact, let's just call it Test-Driven Design to eliminate the confusion.
Yes, TDD does yield a bunch of unit tests, which is hunky-dory. Having those unit tests in place has scads of benefits — everything from faster development times, to creating a more relaxed programming environment, to making real refactoring possible. However, the tests are not the main point. If they were, then Andrew's comment would be absolutely on the mark; it wouldn't matter whether you wrote the tests before or after you wrote the code. (Though writing tests first means that they actually get written.)
The problem is that tests are not enough. Tests that exercise poorly written code just tell you that the monster lives. They don't stop the monster from accidentally killing innocent children. When you write post-facto tests for Frankenstein code, you tend to test what you have without questioning what you have. That's better than nothing, but it's not ideal. A good testing strategy improves code quality as a side effect of testing. (T/B/AT)DD does that.
By putting the test first and then writing code to make the test pass, (T/B/AT)DD becomes a way to design code. Even better, as you add tests you incrementally improve your design. In an environment that (deliberately) lacks detailed, up-front design thinking, that last characteristic is critical. The process works so well, in fact, that the architecture that emerges from a TDD environment is usually better than the one that I, at least, can create from whole cloth ahead of time.
So, how does that work?
Consider a basic premise of lean/agile development: "eliminate waste." The most wasteful thing you can do is spend time working on something that you don't use. A large, up-front design is the classic example, since most systems that start with that sort of design don't implement it. In a lean world, you have to design incrementally as the system evolves.
Incremental design does not mean "hack together the code and then draw a picture of it." You still want a properly modularized system where the pieces interact over well-defined interfaces. More importantly, you don't want a lot of unnecessary baggage that does nothing but add complexity. The question, then, is how do you develop those minimal interfaces?
We've all worked on systems that got it wrong. You need to do something, but the system fights you every step of the way. You find yourself asking, "Did the clown who came up with this junk actually use it to do anything?" The answer, usually, is "no," which is one of the reasons that up-front design doesn't work. By definition, if you're designing first, then you're not yet using what you've designed. If you don't use it, you don't see the flaws. The same reasoning applies at the product level.
So, rather than making something up and hoping that you get it right, imagine that you start with something real. Take a small piece of a real story, and implement that piece as if the nonexistent subsystem that exposes your API did exist. You invent the API that you need to do the job at hand: no wasted arguments, no convoluted workarounds. The code just does what it does without fuss.
But the module that you're talking to doesn't exist yet. What you've just done is designed an API particularly suited for the task at hand.
1. That chunk of story is your test.
2. Get the test to compile, typically by introducing interfaces and mock implementations.
3. Get the test to pass, initially with bogus implementations of those interfaces. (The mocks return constants, for example.)
4. Incrementally replace those bogus implementations with real code, getting the test to pass after each (small) change.
You now have a working implementation of the subsystem that can do exactly what the story requires of it. (Nothing more; nothing less.)
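To make the steps concrete, here is a minimal Python sketch of the cycle. Everything in it is invented for illustration — the story (summing a customer's posted charges), the names `Ledger`, `StubLedger`, `RealLedger`, and the `balance_for` API are hypothetical, not anything from a real codebase:

```python
from typing import Protocol


# Step 1: the test is a chunk of story, written against the API
# we wish existed. (All names here are invented for this sketch.)
def run_story(ledger: "RealLedger") -> int:
    ledger.post("alice", 30)
    ledger.post("alice", 12)
    return ledger.balance_for("alice")


class Ledger(Protocol):
    """Step 2: introduce an interface so the test can compile."""

    def balance_for(self, customer_id: str) -> int: ...


class StubLedger:
    """Step 3: a bogus implementation; the mock returns a constant."""

    def balance_for(self, customer_id: str) -> int:
        return 42


class RealLedger:
    """Step 4: incrementally replace the stub with real code,
    keeping the test passing after each small change."""

    def __init__(self) -> None:
        self._entries: dict[str, list[int]] = {}

    def post(self, customer_id: str, amount: int) -> None:
        self._entries.setdefault(customer_id, []).append(amount)

    def balance_for(self, customer_id: str) -> int:
        return sum(self._entries.get(customer_id, []))


assert run_story(RealLedger()) == 42  # the story passes with real code
```

The point of the sketch is the order of events: the test dictated the shape of `balance_for` before any implementation existed, and the stub let the test run long before `RealLedger` was written.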
Because you only build code that's actually used by the story implementation/test, you have 100% test coverage. You never get 100% coverage if you add the tests after you write the code, and without 100% coverage, refactoring is risky.
As you work, the API will change (as will the way that you use the API). That's the point. You're uncovering design flaws and fixing them. You're figuring out the best flow. You're cleaning up the mess as you're cooking instead of leaving a huge pile of dirty pots in the sink for tomorrow morning. The API gets more and more optimal as it evolves.
So you now have not only a test and a working implementation, but also an architecture that's optimized for the actual stories. High fives all around!
Apply this methodology recursively to the components that make up the subsystem; you can do that all the way down to the class level. You've just invented TDD. Should you decompose all the way down? In practice, I don't. I've been doing this programming stuff long enough that I can often just write a reasonable implementation and move on. Of course, if I get it wrong, I'm more than willing to fix it. My high-level tests tend to uncover flaws in the low-level code. If they don't, I add a test.
If you read through Kent Beck's Test-Driven Development: By Example (which I highly recommend — this is definitely one of those books that don't become obsolete over time, and it's criminal that it's not an eBook), you'll find that even though Beck presents TDD at the micro level, he repeatedly points out that he's doing that to demonstrate that you can work at that level if you have to, not that you should work at that level all the time. What he's describing is the same architectural process I just covered, but at the micro level.
There are a lot of benefits to working this way. This isn't doctrine, it's just my experience. You certainly don't need to use (T/B/AT)DD to be "agile," but you do need the benefits that come from working this way to be effective. If you can get the same benefits some other way, go for it (and tell me how you do it in the comments!).
So, in sum: testing is valuable, but incrementally developing an optimal architecture as a side effect of testing is priceless.