In a letter to me posted in a recent Dr. Dobb's Update Newsletter, Mr. Ralph Kelsey from the Computer Science department of Ohio University objects in rather aggrieved terms to my comment that "TDD is corrupted by writing tests after code." Says he, "This is about as extreme as the Taliban. Testing is good. Write [tests] before, during, and after." Well, it's easy to agree that in most cases, more testing is good. But that has virtually nothing to do with TDD. TDD is not a way to test code more thoroughly, although it might have that result. But if you're doing TDD so that you'll gain better test coverage, you have not only misunderstood TDD, you've misunderstood testing as well.
Test-Driven Development (TDD) is a practice designed to force developers to think about their code before writing it. This is done in small increments, in which the developer must identify increasingly complex interfaces to other objects while building up the functionality of the object under development. Frequently, a developer must create a shim to represent other objects not yet coded so as to continue writing tests and code. These shims, known in the trade as stubs and mock objects, have their interfaces designed incrementally by the objects that rely on them. By this means, developers are forced to think about interactions and interfaces before banging out their own code. As they do this, they design both their own objects and the objects those depend on. Design is thereby wired into the coding and testing. This cycle, which depends entirely on writing tests before code, is pure TDD. It delivers thoughtfully designed code as well as the tests that prove its functionality (at a unit level). Not bad.
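To make the cycle concrete, here is a minimal sketch of test-first design in Python. The names (ReportService, DataSource, fetch_rows) are hypothetical, invented purely for illustration: the point is that the test and the hand-rolled stub are written first, and in writing them we are forced to decide what interface the not-yet-existing dependency must expose.

```python
import unittest

# Step 1: a stub standing in for a DataSource object that has not
# been written yet. Writing it forces an interface decision: the
# dependency will need a fetch_rows() method returning (name, count) pairs.
class StubDataSource:
    def fetch_rows(self):
        return [("widgets", 3), ("gadgets", 5)]

# Step 2: the test, written before ReportService exists. It pins down
# how ReportService is constructed and what behavior it must provide.
class TestReportService(unittest.TestCase):
    def test_total_count_sums_all_rows(self):
        service = ReportService(StubDataSource())
        self.assertEqual(service.total_count(), 8)

# Step 3: only now is ReportService written -- just enough code to
# make the test above pass.
class ReportService:
    def __init__(self, source):
        self.source = source

    def total_count(self):
        return sum(count for _, count in self.source.fetch_rows())
```

Run with `python -m unittest` against the file. Note that the design work (the constructor signature, the `fetch_rows()` contract) happened in steps 1 and 2, before a line of production code existed; reversing the order would leave the test as mere after-the-fact verification.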
If we accept Mr. Kelsey's comment that it's not important whether the tests are written out before coding, we quickly see how TDD dissolves into nothing but an orientation towards unit tests. If the code is written first, then it's ipso facto not test-driven, nor does it have the benefit of the TDD design approach of incremental integration. The code might be designed using other valid techniques, but then what passes for after-the-fact TDD is simply validation of the code. This approach is certainly workable — even mainstream — but it's categorically not TDD.
On a personal level, I prefer this latter approach over TDD. I think TDD has some limitations; namely, the organic growth by tiny increments, the constant throwing away of tests when the code is refactored, and, most importantly, the very high premium TDD puts on code refactoring skills. Many courses on TDD fail to take this last aspect into account. I think they ultimately do a substantial disservice to neophytes because they give them a powerful technique that they cannot properly use. In my view, TDD instruction should invariably be preceded by a thorough course on refactoring. Until you can work through the excellent Refactoring Workbook, TDD should not be your aim.
Returning to my main point: I understand the source of the confusion. Dave Astels's otherwise very approachable work on TDD, Test-Driven Development: A Practical Guide, seems to imply that tests are the principal goal of TDD. And I'm sure other instructional sources do, too. However, Bob Martin gets it right and states it unambiguously in his masterwork, Agile Software Development: Principles, Patterns, and Practices: "The act of writing a unit test is more an act of design than of verification." The importance he attaches to this view is signaled by its being the first sentence of his chapter on testing.
So, in response to Mr. Kelsey, when I say that writing tests after writing code is a corruption of TDD, this is not extremism, but a statement of first principles.