Dr. Dobb's is part of the Informa Tech Division of Informa PLC


Are Tests Requirements?

Matthew actively develops working software, writes and speaks about systems improvement, and is a cofounder of the Great Lakes Software Excellence Conference. Read his blog at xndev.blogspot.com.

If we are implementing the project iteratively, then a large software project is just a series of very small ones. If we cannot define truly testable acceptance criteria for our next two weeks of work, then we have big problems. If we can, then why not represent them directly as test cases instead of deriving the test cases later from whatever representation we choose instead?

-- Steve Gordon, PhD, on the Agile Testing List

Steve Gordon is doing more than recommending that we automate our acceptance tests. In fact, he is suggesting that our tests are our requirements. That is to say: If we develop the tests up front, as examples of what the code will do -- what do we need a requirements document for? If the tests pass, the software works. If the tests do not demonstrate that the software works, then we need more tests.

I have to admit, this logic has a certain appeal. First, automated tests are specific, unambiguous, and certainly testable. You can't write a floofy automated test that says "Handles errors appropriately." Instead, you've got to define the error conditions, how the software will respond, and how you will evaluate if those errors are correct. Likewise, inconsistent tests are a lot easier to spot than inconsistent requirements. ("Wait, we've got two different expected outputs for an input of 123 -- which one is correct?")
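That last point, spotting inconsistent expectations, is mechanical enough to sketch in a few lines of code. The table below is hypothetical (note the two conflicting rows for an input of 123, echoing the parenthetical above); it is only meant to show that conflicts in tabular tests can be found automatically, where conflicts buried in prose requirements cannot:

```python
# Sketch: detecting inconsistent expectations in a tabular test suite.
# The cases below are hypothetical -- note the two conflicting rows for 123.
cases = [
    (32, 0),
    (212, 100),
    (123, 51),
    (123, 50),  # same input, different expected output
]

def find_conflicts(cases):
    """Return inputs that appear with more than one expected output."""
    seen = {}
    conflicts = []
    for inp, expected in cases:
        if inp in seen and seen[inp] != expected:
            conflicts.append(inp)
        seen.setdefault(inp, expected)
    return conflicts

print(find_conflicts(cases))  # [123]
```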

But what would that look like in the real world?

Imagine, for a moment, that we have a project designed to create a simple web service -- one that converts Fahrenheit to Celsius. Our business customer, the weather department at BigNewsCorp, has created acceptance tests, which we have automated and can run at the push of a button. Once the software is developed, we log into the testing tool and see these results:

Function                       Input   Expected Output   Actual Output   Result
Convert_Farenheit_To_Celsius      32                 0               0
Convert_Farenheit_To_Celsius     212               100             100
Convert_Farenheit_To_Celsius     100                38              38
Convert_Farenheit_To_Celsius     300               149             149
Convert_Farenheit_To_Celsius       0               -17             -17
Convert_Farenheit_To_Celsius      10               -12             -12
Convert_Farenheit_To_Celsius     -20               -29             -20

There, clear as daylight. The acceptance tests pass; the software must work, right?

Now, imagine for a moment you are the business customer, or an independent, third-party software tester. Can you give "signoff"?

As a test suite, this list is pretty good, but as requirements, I am afraid it leaves a lot of questions unanswered:

  • What is the basic logic used to convert Fahrenheit to Celsius? What about fractional results? Should it round up at 0.5? Or give fractions to the tenth, or the hundredth?
  • What are the upper and lower bounds of the function?
  • Can it handle fractional input?
  • What about null input? Or alphanumeric input?
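The rounding question in particular is not academic. The exact formula, C = (F - 32) × 5/9, rarely yields whole numbers, and different rounding policies produce different "expected" values. A quick sketch (assuming an ordinary floating-point conversion, not the article's hypothetical service):

```python
def f_to_c(f):
    """Exact Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9."""
    return (f - 32) * 5.0 / 9.0

# 100 F is 37.777... C; the "right" integer answer depends on policy:
print(round(f_to_c(100)))     # 38   (round to nearest)
print(int(f_to_c(100)))       # 37   (truncate toward zero)
print(round(f_to_c(100), 1))  # 37.8 (keep tenths)

# For negative results the policies disagree in the other direction:
print(round(f_to_c(0)))       # -18  (round to nearest)
print(int(f_to_c(0)))         # -17  (truncate toward zero)
```

Until a requirement pins down which policy the service must use, two testers can derive two different expected outputs from the same input and both believe they are right.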

To capture these answers, we could add many more tests, or a "notes" column for each test. Still, even with that, some of the questions (like the basic conversion logic) simply will not be answered by automated tests.

And that is a problem, because if we want to do random testing on any other numbers, we need a way to know what the right answer is -- we need an Oracle. Written requirements could provide us that Oracle. Without that Oracle, getting the tests to "pass" is as easy as writing a series of if statements that return the right answer for those tests -- and only for those tests.
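To make that concrete, here is a hypothetical implementation (mine, not anything from the article's project) that merely memorizes the published test cases. It contains no conversion logic at all, yet the suite goes green:

```python
# A "passing" implementation that hard-codes the published test cases.
# It performs no conversion at all -- it just memorizes the answers.
def convert_fahrenheit_to_celsius(f):
    answers = {32: 0, 212: 100, 100: 38, 300: 149, 0: -17, 10: -12}
    if f in answers:
        return answers[f]
    raise ValueError("no memorized answer for %r" % f)

# Every published test passes:
for inp, expected in [(32, 0), (212, 100), (100, 38),
                      (300, 149), (0, -17), (10, -12)]:
    assert convert_fahrenheit_to_celsius(inp) == expected

# But probe any other value -- say 75 -- and it fails outright. Worse,
# without a written requirement we cannot even say what 75 *should* return.
```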

Keep in mind, the scenario above is only an illustration. Real code has to interact with databases, with files, and with multiple other objects, each with multiple variables.

In the environment where I work (and I suspect in yours as well), the pre-defined, up-front tests are good, but not good enough. So we define requirements and do exploratory acceptance testing. Because the project is different every time, and the rate of regression bugs is so low, we find more value in varying those exploratory acceptance tests than in simply extending the regression suite.

I think that Brian Marick summed it up best when he wrote:

The claim that the tests are the requirements has wasted untold amounts of time because, well, they aren't. They can, however, be used to achieve the same end by a different means.

Acceptance tests can be a great supplement to written requirements; they can both serve as examples and tell a compelling story about what the software should do. But a story is not an explanation, and tests are not requirements.

References and Footnotes

The free online temperature converter I used to create test cases.

This article has a bug. Can you find it?
