Matthew actively develops working software, writes and speaks about systems improvement, and is a cofounder of the Great Lakes Software Excellence Conference. Read his blog at xndev.blogspot.com.
If we are implementing the project iteratively, then a large software project is just a series of very small ones. If we cannot define truly testable acceptance criteria for our next two weeks of work, then we have big problems. If we can, then why not represent them directly as test cases instead of deriving the test cases later from whatever representation we choose instead?
-- Steve Gordon, PhD, on the Agile Testing List
Steve Gordon is doing more than recommending that we automate our acceptance tests. In fact, he is suggesting that our tests are our requirements. That is to say: If we develop the tests up front, as examples of what the code will do -- what do we need a requirements document for? If the tests pass, the software works. If the tests do not demonstrate that the software works, then we need more tests.
I have to admit, this logic has a certain appeal. First, automated tests are specific, unambiguous, and certainly testable. You can't write a floofy automated test that says "Handles errors appropriately." Instead, you've got to define the error conditions, how the software will respond, and how you will evaluate whether those responses are correct. Likewise, inconsistent tests are a lot easier to spot than inconsistent requirements. ("Wait, we've got two different expected outputs for an input of 123 -- which one is correct?")
But what would that look like in the real world?
Imagine, for a moment, that we have a project designed to create a simple web service -- one that converts Fahrenheit to Celsius. Our business customer, the weather department at BigNewsCorp, has created acceptance tests, which we have automated and can run at the push of a button. Once the software is developed, we log into the testing tool and see these results:
| Function | Input | Expected Output | Actual Output | Result |
|----------|-------|-----------------|---------------|--------|
| FtoC     | 32    | 0               | 0             | Pass   |
| FtoC     | 212   | 100             | 100           | Pass   |
| FtoC     | -40   | -40             | -40           | Pass   |
There, clear as daylight. The acceptance tests pass; the software must work, right?
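For context, the conversion behind the service is the standard formula C = (F - 32) × 5/9. A minimal sketch in Python (the function name is my own, assumed for illustration):

```python
def f_to_c(fahrenheit: float) -> float:
    """Convert Fahrenheit to Celsius using the standard formula."""
    return (fahrenheit - 32) * 5 / 9
```

With this implementation, `f_to_c(212)` returns `100.0` and `f_to_c(-40)` returns `-40.0`, matching the acceptance-test results above.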
Now, imagine for a moment you are the business customer, or an independent, third-party software tester. Can you give "signoff"?
As a test suite, this list is pretty good, but as requirements, I am afraid it leaves a lot of questions unanswered:
- What is the basic logic used to convert Fahrenheit to Celsius? What about fractional results? Should it round up at 0.5? Or give fractions to the tenth, or the hundredth?
- What are the upper and lower bounds of the function?
- Can it handle fractional input?
- What about null input? Or alphanumeric input?
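Each of those questions forces a decision that the tests alone do not record. Here is a hedged sketch of what one set of answers might look like; the rounding rule, the lower bound, and the error handling are all assumptions I am making for illustration, not requirements taken from the project:

```python
def f_to_c(fahrenheit) -> float:
    """Convert Fahrenheit to Celsius, with illustrative (assumed) policies."""
    # Assumed policy: reject null input explicitly.
    if fahrenheit is None:
        raise ValueError("input is required")
    # Assumed policy: reject alphanumeric (non-numeric) input.
    if not isinstance(fahrenheit, (int, float)):
        raise TypeError("input must be numeric")
    # Assumed policy: the lower bound is absolute zero, -459.67 F.
    if fahrenheit < -459.67:
        raise ValueError("below absolute zero")
    # Assumed policy: round results to the nearest tenth of a degree.
    return round((fahrenheit - 32) * 5 / 9, 1)
```

Every `if` statement above is a requirement decision. A different team could answer every question differently and still pass the original acceptance tests.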
To demonstrate these answers, we could add a lot more tests, or a "notes" column for each test. Still, even with that, some of the questions (like the basic logic) simply will not be answered with automated tests.
And that is a problem, because if we want to do random testing on any other numbers, we need a way to know what the right answer is -- we need an oracle. Written requirements could provide us that oracle. Without one, getting the tests to "pass" is as easy as a series of if statements that return the right answer for the tests -- and only for those tests.
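To see how easily tests without an oracle can be gamed, consider a "converter" that is nothing but a lookup table of expected answers. This is a deliberately bad sketch, and the test inputs are assumed for illustration:

```python
def f_to_c(fahrenheit):
    # Hard-coded answers for the known acceptance tests -- and nothing else.
    answers = {32: 0, 212: 100, -40: -40}
    return answers[fahrenheit]  # Any other input raises KeyError.
```

Every acceptance test passes, yet the function performs no conversion at all. Ask it about 50 degrees Fahrenheit and it raises a KeyError instead of answering 10.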
Keep in mind, the scenario above is only an illustration. Real code has to interact with databases, with files, and with multiple other objects, each with multiple variables.
In the environment where I work (and I suspect in yours as well), the pre-defined, up-front tests are good, but not good enough. So we define requirements and do exploratory acceptance testing. Because the project is different every time, and regression failures are so rare, we find more value in varying those exploratory acceptance tests than in simply extending the regression suite.
I think that Brian Marick summed it up best when he wrote:
The claim that the tests are the requirements has wasted untold amounts of time because, well, they aren't. They can, however, be used to achieve the same end by a different means.
Acceptance tests can be a great supplement to written requirements; they can both serve as examples and tell a compelling story about what the software should do. But a story is not an explanation, and tests are not requirements.
References and Footnotes
The free online temperature converter I used to create test cases.
This article has a bug. Can you find it?