Unity Output
The tests should be run as part of the automated build. A single command builds and runs your test executable. I prefer to build often, with each small change; this is TDD. I set up my development environment to automatically run make all whenever a file is saved. Test output looks like this:
```
make
compiling SprintfTest.c
Linking BookCode_Unity_tests
Running BookCode_Unity_tests
..
-----------------------
2 Tests 0 Failures 0 Ignored
OK
```
Notice that when all tests are passing, the output is minimal. At quick glance, a single line of text says "OK," which means, "All tests passing." In the UNIX style, the test harness follows the "no news is good news" principle. (When a test case fails, as you will see shortly, it reports a specific error message.) It's pretty self-explanatory, but let's decipher the test output and summary line.
Notice also that a dot (.) is printed before each test case runs. For a long test run, this lets you know something is happening. The line of hyphens (- - -) is just a separator line for the test summary.
- Tests: the total number of TEST cases.
- Failures: the total number of TEST cases that failed.
- Ignored: a count of the number of tests in ignore state. Ignored tests are compiled but are not run.
Let's add a failing test to see what happens. Look at the test output, and the intentional error in this test case will be evident:
```c
TEST(sprintf, NoFormatOperations)
{
    char output[5];
    TEST_ASSERT_EQUAL(4, sprintf(output, "hey"));
    TEST_ASSERT_EQUAL_STRING("hey", output);
}
```
The failure looks like this:
```
make
compiling SprintfTest.c
Linking BookCode_Unity_tests
Running BookCode_Unity_tests
..
TEST(sprintf, NoFormatOperations)
stdio/SprintfTest.c:75: FAIL
Expected 4 Was 3
-----------------------
2 Tests 1 Failures 0 Ignored
FAIL
```
The failure reports the filename and line of the failing test case, the name of the test case, and the reason for failure. Also notice the summary line now shows one test failure.
Now, let's look at CppUTest.
CppUTest: A C++ Unit Test Harness
Now that we've seen Unity, I'll quickly describe CppUTest, my preferred unit test harness for C and C++. In full disclosure, I am partial to CppUTest, not only because it is a capable test harness but also because I am one of its authors. The first examples in this article use Unity; the later examples use CppUTest.
CppUTest was developed to support multiple OS platforms with a specific goal of being usable for embedded development. The CppUTest macros make it so that test cases can be written without knowledge of C++. This makes it easy for C programmers to use the test harness.
CppUTest uses a primitive subset of C++, making it a good choice for embedded development, where not all compilers support the full C++ language. You will see that the test cases are nearly identical between Unity and CppUTest. You can, of course, use whichever test harness you prefer for your product development.
This CppUTest test case is equivalent to the second Unity test case found in the section on sprintf.
```cpp
TEST(sprintf, NoFormatOperations)
{
    char output[5] = "";
    LONGS_EQUAL(3, sprintf(output, "hey"));
    STRCMP_EQUAL("hey", output);
}
```
Besides the macro names, the test cases are the same.
Let's look at a CppUTest test fixture that is equivalent to the earlier example Unity test fixture (discussed in the "Test Fixtures in Unity" section).
```cpp
TEST_GROUP(sprintf)
{
    char output[100];
    const char * expected;

    void setup()
    {
        memset(output, 0xaa, sizeof output);
        expected = "";
    }

    void teardown()
    {
    }

    void expect(const char * s)
    {
        expected = s;
    }

    void given(int charsWritten)
    {
        LONGS_EQUAL(strlen(expected), charsWritten);
        STRCMP_EQUAL(expected, output);
        BYTES_EQUAL(0xaa, output[strlen(expected) + 1]);
    }
};
```
Again, it is very similar to the previous example, with all the same concepts represented. One formatting difference is that the CppUTest TEST_GROUP is followed by a set of curly braces enclosing shared data declarations and functions. Everything between the curly braces is part of the TEST_GROUP and is accessible to each TEST in the group.
The shared data items (output and expected) are initialized by a special helper function called setup. As you might guess, setup is called before each TEST. Another special function, teardown, is called after each TEST. (In this example, it is not used.) expect and given are free-form helper functions that are accessible to all TEST cases in the TEST_GROUP.
These refactored test cases are identical to the Unity test cases:
```cpp
TEST(sprintf, NoFormatOperations)
{
    expect("hey");
    given(sprintf(output, "hey"));
}

TEST(sprintf, InsertString)
{
    expect("Hello World\n");
    given(sprintf(output, "%s\n", "Hello World"));
}
```
One advantage of CppUTest is that tests self-install: there is no need for an external script to generate a test runner or to manually write and maintain test-wiring code like RUN_TEST_CASE, TEST_GROUP_RUNNER, and RUN_TEST_GROUP. On the minor-difference list are the assertion macros; each test harness supports different macros, though there is functional overlap.
You may notice that Unity and CppUTest are suspiciously close in their macros and test structure. Well, there is no real mystery there; they do follow a well-established pattern that I first saw with JUnit, a Java test framework. The more specific similarities are because I contributed the test fixture-related macros to the Unity project.
CppUTest Output
As already explained for Unity, tests run as part of an automated build using make. Test output looks like this:
```
make all
compiling SprintfTest.cpp
Linking BookCode_tests
Running BookCode_tests
..
OK (2 tests, 2 ran, 0 checks, 0 ignored, 0 filtered out)
```
Just like with Unity, when all tests are passing, the output is minimal. Here is how to interpret the summary line of the test run:
- tests: the total number of TEST cases.
- ran: the total number of TEST cases that ran (in this case, they passed, too).
- checks: a count of the number of condition checks made. (Condition checks are calls such as LONGS_EQUAL.)
- ignored: a count of the number of tests in ignore state. Ignored tests are compiled but are not run.
- filtered out: a count of the number of tests that were filtered out of this test run. Command-line options select specific tests to run.
Let's insert an error into the test to see what the output looks like:
```cpp
TEST(sprintf, NoFormatOperations)
{
    char output[5];
    LONGS_EQUAL(4, sprintf(output, "hey"));
    STRCMP_EQUAL("hey", output);
}
```
The failure looks like this:
```
make
compiling SprintfTest.cpp
Linking BookCode_tests
Running BookCode_tests
..
stdio/SprintfTest.cpp:75:
TEST(sprintf, NoFormatOperations)
expected <4 0x4> but was <3 0x3>

Errors (1 failures, 2 tests, 2 ran, 1 checks, 0 ignored, 0 filtered out, 0 ms)
```
The failure reports the line of the failing condition check, the name of the test case, and the reason for failure. Also notice the summary line includes a count of test failures.
If you ever insert an error on purpose into a test case, make sure you remove it, or you risk baking a bug into your code.
Unit Tests Can Crash
One other possible outcome during a test run is a crash. Generally speaking, C is not a safe language; the code can go off into the weeds, never to return. sprintf is a dangerous function. If you pass it an output buffer that is too small, it will corrupt memory. This error might crash the system immediately, or it might cause a crash later; the behavior is undefined. Consequently, a test run may silently exit with an OK, silently exit early showing no errors, or crash with a bang.
When you have a silent failing or crashing test, let the test harness help you confirm what is wrong. Sometimes a production code change will cause a previously passing test to fail, or even crash. So, before chasing the crash, make sure you know which test is failing.
Because the test harness is normally quiet except for test failures, when a test crashes, you probably won't get any useful output. Both Unity and CppUTest have a command-line option for running the tests in verbose mode (-v). With -v, each TEST announces itself before running. Conveniently, the last TEST mentioned is the one that crashed.
You can also filter tests by test group (-g testgroup) and by test case (-n testname). This lets you be very precise about which test cases are running. These options are very helpful for chasing down crashes.
The Four-Phase Test Pattern
In his book xUnit Test Patterns, Gerard Meszaros describes the Four-Phase Test pattern, which is what I use, too. The goal of the pattern is to create concise, readable, and well-structured tests. If you follow this pattern, the test reader can quickly determine what is being tested. Paraphrasing Gerard, here are the four phases:
- Setup: Establish the preconditions to the test.
- Exercise: Do something to the system.
- Verify: Check the expected outcome.
- Cleanup: Return the system under test to its initial state after the test.
To keep your tests clear, make the pattern visible in your tests. When this pattern is broken, the documentation value of the test is diminished; the reader has to work harder to understand the requirements expressed by the test.
Conclusion
At this point, you should have a good overview of Unity and CppUTest, and understand how test fixtures and test cases allow a set of tests to be defined. Whether you use them to practice TDD on your C projects or just to ensure higher code quality is entirely up to you. [If you choose to go the TDD route, though, you might find the remainder of the material in the book from which this article is excerpted to be useful in your work. See below. Ed.]
This article is excerpted and adapted from Test-Driven Development for Embedded C, published by Pragmatic Programmers. You can find the complete code by visiting the book's home page. James W. Grenning invented Planning Poker, an Agile estimation technique, and is one of the original authors of the Manifesto for Agile Software Development.