A key tenet of test-driven development (TDD) is to write unit tests first, that is, before the code. The idea is that writing the tests helps in the design of the code that will fulfill them and clarifies the thinking about what that code will require. A curiosity of TDD is that in the formulation by Kent Beck, which popularized the technique, the term "unit" was never defined. For many practitioners of TDD today, units are classes and the tests are very low level, a sort of bottom-up design that reaches up from almost atomic levels. Within TDD, there is a second school that works at a somewhat higher level and uses tests for exploratory development. This branch, sometimes referred to as the "London" school of TDD, was first illustrated in the excellent book Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce.
I prefer the London approach, but am more comfortable working at a higher level still: that of functional tests. I find that unit tests are simply too low level. They test too little. And many small tests taken together tell you only that the individual grains of code work correctly. They say nothing about whether the functionality works in the large, nor whether it's anything the customer asked for.
Functional tests should reflect a function specified in the requirements or in some kind of feature repository (read: defect tracker). Using functional tests correctly orients the design and the code toward the goal, namely delivering what the user wants.
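To make this concrete, here is a minimal sketch of a functional test in Python. The requirement, the `order_total` function, and the dollar amounts are all hypothetical, invented purely for illustration; the point is that the test mirrors a stated requirement rather than the internals of any one class:

```python
# Hypothetical requirement: "orders of $100 or more receive a 10% discount."

def order_total(items, discount_threshold=100.0, discount_rate=0.10):
    """Compute an order total, applying the requirement's discount rule.

    items is a list of (price, quantity) pairs.
    """
    subtotal = sum(price * qty for price, qty in items)
    if subtotal >= discount_threshold:
        subtotal *= (1 - discount_rate)
    return round(subtotal, 2)

def test_discount_rule():
    # Functional test: it restates the requirement, not the implementation.
    assert order_total([(50.0, 2)]) == 90.0   # $100 order gets 10% off
    assert order_total([(40.0, 2)]) == 80.0   # $80 order gets no discount
```

A suite of such tests tells you directly which required behaviors work, which is exactly the information a class-level unit test cannot give you.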
Even higher-level tests, such as user acceptance tests (UATs), are too coarse for my taste. One feature that a user might demand could have multiple parts consisting of dramatically different actions. For example, suppose a UAT requirement is sub-second response time. This will surely require numerous optimizations. A single UAT of the response time will tell me whether I've succeeded overall, but it is of much less use in designing and validating the individual changes I've made.
This view is not universally accepted. The author Ken Pugh told me several years ago that when he sits down to write code, he always starts with a user acceptance test as the defining goal for what he'll work on. I trust Ken when he says this, but I don't see how I could apply it myself. Much of my development work is simply not a broad-brush endeavor. It frequently consists of small maintenance efforts, optimizations, and the like. Such tasks can rarely be encapsulated in a UAT.
But the user orientation that Pugh finds in his approach, and which I echo in functional tests, is a key differentiator from the mainstream TDD orientation. Other approaches to testing take the core concept of serving the user even further. Principal among these is model-based testing (MBT). I realize I tread lonely ground here: Except in embedded programming, there is a longstanding prejudice in the U.S. against model-based anything. And yet, there is much to recommend it.
In a typical MBT scenario, a model is constructed from the user requirements. Typically, this is done using a modeling language like UML. Then, MBT software reads the model and generates tests that exercise the features. These tests work at various levels: functional, integration, and unit tests, and so on. These tests are then run via custom or standard test frameworks or harnesses.
Modeling for testing is easier than standard UML modeling, which involves design decisions and complex architectural planning. Instead, the model is simply a reflection of the data already captured in the requirements; building it is essentially a process of translating one artifact into another. Moreover, this artifact-to-artifact translation makes it comparatively simple to fold in customer changes in the Agile tradition: when requirements change and the model is updated, a new test suite is automatically generated.
The requirements-based model has one especially attractive capability: It does pure black-box testing. The model knows nothing about the implementation, nor should it. Its only job is to make sure that the requirements are fulfilled. It runs what are called conformance tests.
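As a rough illustration of the idea, the following Python sketch hand-codes a tiny model of a hypothetical login dialog and generates conformance tests from it. Real MBT tools derive the model from UML or a similar notation and produce far richer suites; everything here, including the states, the events, and the `LoginSystem` class standing in for the implementation, is invented for the example:

```python
from itertools import product

# The model: (current state, event) -> next state, per the requirements.
MODEL = {
    ("logged_out", "login_ok"):  "logged_in",
    ("logged_out", "login_bad"): "logged_out",
    ("logged_in",  "logout"):    "logged_out",
}

class LoginSystem:
    """The implementation under test (purely illustrative)."""
    def __init__(self):
        self.state = "logged_out"
    def apply(self, event):
        if event == "login_ok" and self.state == "logged_out":
            self.state = "logged_in"
        elif event == "logout":
            self.state = "logged_out"
        # a bad login leaves the state unchanged

def generate_tests(model, start, length):
    """Enumerate event sequences of a given length that are legal in the model,
    paired with the final state the model predicts."""
    events = sorted({e for (_, e) in model})
    tests = []
    for seq in product(events, repeat=length):
        state, legal = start, True
        for ev in seq:
            if (state, ev) not in model:
                legal = False
                break
            state = model[(state, ev)]
        if legal:
            tests.append((seq, state))
    return tests

def run_conformance(tests):
    """Black-box conformance check: the system must end where the model says,
    with no knowledge of how the system is implemented."""
    for seq, expected in tests:
        sut = LoginSystem()
        for ev in seq:
            sut.apply(ev)
        assert sut.state == expected, (seq, sut.state, expected)

run_conformance(generate_tests(MODEL, "logged_out", 3))
```

Note that the test generator and the conformance checker consult only the model; the implementation could be rewritten entirely and the same generated suite would still validate it against the requirements.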
Because such tests can create numerous false negatives when features have not yet been implemented, MBT products generally enable developers to specify which aspects should be tested in any given run. But a manager can run the full suite to get a very accurate idea of where a project stands in relation to its goal. This benefit, and that of automated requirements-oriented testing, is crucial in large projects, those involving hundreds of thousands of lines and up. On such projects, MBT is well-nigh indispensable.
MBT tools tend to be large packages and are aimed at complex projects where testing all necessary conditions can be very difficult. Vendors include IBM Rational, Conformiq, and SmartTesting, among others. The free tools market is fairly thin. Microsoft Research offers one popular MBT tool called Spec Explorer, which is .NET-centric. In the Java space, there are GraphWalker and fMBT. The open-source tools tend to offer only subsets of the functionality of commercial tools.
Organizations that would like to test from requirements, but don't want to jump into MBT, do have options. Principal among these is behavior-driven development (BDD), in which software behavior is coded as a test in pseudocode and then the tests are run as a suite. Done this way, it's easy to tell where the project stands in relation to its requirements. Cucumber, Spock, and easyb are leading open-source BDD frameworks.
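In a BDD framework such as Cucumber, the behavior is written in plain language and bound to step definitions; the following Python sketch only mimics the given/when/then shape in ordinary test code. The `Account` class and its withdrawal rules are hypothetical:

```python
class Account:
    """Illustrative system under test for the behavior spec below."""
    def __init__(self, balance=0):
        self.balance = balance
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 40
    account.withdraw(40)
    # Then the balance is 60
    assert account.balance == 60
```

The given/when/then comments are the pseudocode spec; a BDD framework turns each such line into an executable step, so the suite's pass/fail report reads as a checklist of required behaviors.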
Either way, if TDD feels like it's too low level, there are many good options for both designing your code and testing it thoroughly from higher levels.