*Andrew Koenig is a former AT&T researcher and programmer. He is the author of C Traps and Pitfalls and, with Barbara, coauthor of Ruminations on C++ and Accelerated C++.
Barbara Moo is an independent consultant with 20 years of experience in the software field. She is the coauthor, with Stanley Lippman and Josée Lajoie, of C++ Primer, Fourth Edition.*

When you are reading or writing a program that includes a loop, and you are trying to figure out how the loop works, do you do so by tracing its execution in your head? Many programmers do just that: They say something like, "Well, the first time through the loop, iter is equal to c.begin(), so the loop must be working on the first element of c," and so on. Many programmers also make off-by-one errors in their programs, particularly when they think about whether to use < or <= in loop conditions.

We believe that these two facts are connected. Mentally tracing through a loop may help convince you that each iteration is doing approximately the right thing, but getting the details exactly right is hard—especially the crucial detail of whether the loop starts and ends where it should. Off-by-one errors come from missing such details.

There is a technique for thinking about loops that we believe makes off-by-one errors much easier to avoid. The technique is to find an appropriate loop invariant, a term that we suspect scares some people off because it sounds so theoretical. Despite its sound, the notion of loop invariants (and invariants in other contexts as well) can make programs easier to write, read, and understand.

When you are writing a program and thinking about what form a loop should take, you can often simplify your task by thinking about the invariant at the same time. When you are trying to figure out what a loop in someone else's program is doing—especially when you suspect that the loop is not doing what it should—you can form a sharper picture of the loop's workings by asking what the loop invariant is or should be.

In effect, an invariant is a way of formalizing our thinking about part of a program's design. We often find that writing down the invariant that corresponds to part of a program helps us sharpen our thinking about how that part of the program should work, and makes the actual implementation much easier.

Just about any loop has an invariant that will help describe it. Sometimes, the loop's author will have thought explicitly about the invariant, and even been considerate enough to mention it in a comment. More often, programmers don't bother to think about invariants until they are trying to understand code that someone else has written. Accordingly, our discussion of invariants and the code to which those invariants apply will be intertwined, as code and invariants usually are in real life.

### What is an Invariant?

An invariant is a claim about the state of a program that is true every time the program's execution reaches a particular point. That point is often the beginning of a loop. For example, suppose we wish to set the variable sum to the sum of the integers from 0 through 100, inclusive. We might write a program fragment that looks like this:

```cpp
int n = 0;
int sum = 0;
while (n != 100) {
    ++n;
    sum += n;
}
```

The form of this loop may be slightly surprising: Why are we incrementing **n** at the beginning rather than at the end? Why does the loop use **!=** for its comparison even though we specified that 100 should be included in the sum? The answer to both of these questions is that we thought about the invariant first, and then based the loop on the invariant.

The most straightforward way to use an invariant to write or understand a loop is to think of the loop this way:

```
establish the invariant
while (condition) {
    perform a computation that will eventually cause the
    condition to become false while keeping the invariant true
}
```

When the **while** terminates, the condition is false. Because the invariant was true at the beginning of the loop, and each trip through the loop causes the invariant to remain true, the invariant must still be true after the loop finishes—no matter how many times the loop executes.

When we want to find an appropriate invariant for a loop, one way of doing so is to look for a claim that:

- Involves a variable (or variables) that change(s) during the loop.
- Can be made to be trivially true before the first trip through the loop.
- Is equivalent to our problem being solved as soon as the variable's value makes the condition false.

Let's apply this technique to our summation loop. When we're done, we want **sum** to be the sum of the integers from 0 through 100. We need an invariant that uses a variable, so a reasonable first try is to use that variable in place of one of the constants. Accordingly, it makes sense to try "sum is the sum of the integers from 0 through **n**, inclusive" as our invariant.

It is trivial to make this invariant true by setting both **sum** and **n** to zero. If we make our condition **n != 100**, then when the loop finishes, **n** will be 100; and if the code correctly corresponds to the invariant, **sum** will have to be the sum of the integers from 0 through 100. The remaining problem, then, is how to write a loop body that will:

- Leave the invariant true, assuming that it was true to begin with.
- Change the state of the program in such a way that the loop's condition will eventually become false.

Notice that there is no requirement that the invariant remain true while the loop body is executing; the invariant is associated only with the beginning of the loop.

We begin by setting **n** to 0, and we would eventually like **n** to be equal to 100. The easiest way to achieve this state of affairs is to add 1 to **n** in the loop body. Of course, once we have done so, the invariant is no longer true, because we have changed the value of **n**. The former value of **n** is now **n-1**, which means that **sum** is no longer the sum of the integers from 0 through **n**, but rather the sum of the integers from 0 through **n-1**. We can remedy this state of affairs by adding (the new value of) **n** to **sum**, because **n + (the sum of the integers from 0 through n-1)** is the sum of the integers from 0 through **n**. This reasoning gives us the body of the loop:

```cpp
++n;
sum += n;
```

Because the loop maintains the invariant, it is easy to understand the loop's behavior: When it finishes, **n** must be equal to 100, so the invariant forces **sum** to be the sum of the integers from 0 through 100, inclusive.
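One way to make this reasoning concrete is to assert the invariant at the top of each iteration. The following sketch wraps the loop in a function (the name **sum_to_100** is ours, for illustration) and uses the closed form n*(n+1)/2 purely as an independent check of the invariant, not as part of the loop's logic:

```cpp
#include <cassert>

// The summation loop, with the invariant asserted at the
// top of each iteration. The closed form n*(n+1)/2 serves
// only as an independent check on the invariant.
int sum_to_100()
{
    int n = 0;
    int sum = 0;
    while (n != 100) {
        // Invariant: sum is the sum of the integers from 0 through n.
        assert(sum == n * (n + 1) / 2);
        ++n;
        sum += n;
    }
    return sum;
}
```

When the loop exits, **n** is 100 and the invariant forces the result to be 5050.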

The more usual form of this loop looks like this:

```cpp
int n = 1;
int sum = 0;
while (n <= 100) {
    sum += n;
    ++n;
}
```

However, when we tried to find an invariant that corresponds to this version of the loop, we realized that the version we actually chose had a simpler invariant and was easier to explain. We leave it as an exercise to write down the invariant for this alternative version of the loop, and use it to convince yourself that the program works correctly.

### Linear Search

Let's apply the idea of an invariant to help us understand a piece of code that finds the first element in a sequence with a given value. We realize that the Standard Library **find** algorithm solves this problem for us, but it is useful to examine an explicit, concrete solution to better understand how invariants work.

Suppose **c** is a sequential container (**vector**, **list**, **deque**, or a user-defined container that behaves like one of these), **it** is an appropriate iterator for this container, and **x** is a value to be sought in **c**. Then after executing

```cpp
it = c.begin();
while (it != c.end() && *it != x)
    ++it;
```

**it** refers to the first element of **c** that is equal to **x**. If there is no such element, **it** is equal to **c.end()**.

How can we convince ourselves of this claim without stepping through the loop in our heads? As before, the strategy is to come up with an invariant that:

- Involves at least one variable that will change in the loop.
- Is easily made true at the beginning of the loop.
- Gives us what we want when the loop's condition becomes false.

In this case, what we want is for one of two conditions to hold: for ***it** to be equal to **x** or for **it** to be equal to **c.end()**. But the **while** condition already gives us that! So why do we need to bother with the invariant at all?

The answer is that the loop's condition assures us that if the loop terminates, one of the two conditions will be true. However, it does not assure us that the loop will terminate, nor does it assure us that the loop will find the first element of **c** that is equal to **x**. After all, if we had written the loop this way:

```cpp
// This program does not work
it = c.begin();
while (it != c.end() && *it != x) {
    /* Do nothing */
}
```

the condition would be the same, but the loop would terminate only if **c** happened to be empty or its first element happened to be equal to **x**. So we have two tasks before us. First, we must convince ourselves that the loop terminates. Second, we must show that when it does terminate, it has found the first element equal to **x**.

It is easy to see that the loop must terminate because the loop's body increments **it**. If you initialize a variable with **c.begin()** and increment it enough times, you will eventually reach **c.end()**. But how do we know that this loop will find the first element of **c** that is equal to **x**? For that matter, how do we know that the loop will find an appropriate element of **c** at all?

To answer that question, we must find an invariant that we can use to prove that the loop does indeed find the first element of **c** that is equal to **x**. We can do so by using the notion that "None of the elements in the range **[c.begin(), it)** are equal to **x**." Here, we are using the unbalanced brackets **[ )** in the usual way, to denote a range that includes the first given element and excludes the second.

By this definition, initializing **it** to **c.begin()** makes the invariant trivially true: The range **[c.begin(), c.begin())** has no elements at all, so it cannot contain any elements that are equal to **x**. As before, our problem is to verify that if the invariant is true at the beginning of the loop body, it is still true at the end.

In this case, all the loop body does is increment it. Doing so appends one element to the range **[c.begin(), it)**, namely the element at ***it**. The first part of the **while** condition, **it != c.end()**, assures us that this element actually exists; the second part, ***it != x**, assures us that the element is not equal to **x**. Therefore, the invariant is still true: No element equal to **x** can be found in the range **[c.begin(), it)**.

Accordingly, because we have shown that the code maintains the invariant, we have assured ourselves that when the loop terminates, as it must, either ***it** is the first element that is equal to **x** or **it** is equal to **c.end()**.
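As with the summation loop, the invariant can be checked mechanically. The sketch below (our own function name, **find_first**, specialized to **vector&lt;int&gt;** for brevity) asserts at the top of each iteration that no element equal to **x** occurs in **[c.begin(), it)**, using the standard **find** algorithm as the independent check:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Linear search with the loop invariant asserted each iteration:
// none of the elements in [c.begin(), it) is equal to x.
std::vector<int>::const_iterator
find_first(const std::vector<int>& c, int x)
{
    std::vector<int>::const_iterator it = c.begin();
    while (it != c.end() && *it != x) {
        // std::find returns it exactly when x is absent from [c.begin(), it).
        assert(std::find(c.begin(), it, x) == it);
        ++it;
    }
    return it;
}
```

Because the assertion holds on every trip through the loop, the element returned (if any) must be the first one equal to **x**.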

### Generalizing Invariants

The examples that we have seen so far are nearly trivial. In practice, the notion of an invariant applies to more than just loops. Another common use for invariants is to help us think about data structures. For example, suppose we have a function named **compute** that takes a long time to execute. We might decide to try to make our program run more quickly by storing values that **compute** yields in an associative array named **cache**, so that we can fetch them again without having to recompute them.

In order for this strategy to work, we have to know that whenever **cache[x]** is defined (that is, whenever **cache** has an element with a key equal to **x**), the value of **cache[x]** is the same as the result that **compute(x)** would yield. Therefore, we can use exactly that claim as our invariant. We will expect this invariant to be true at any point at which we attempt to use our cache. Therefore, if the cache is to work correctly, we somehow have to ensure that no operation that anyone performs on the cache will ever cause the invariant to be false.

One way to ensure that the invariant remains true is to find every place in the program that refers to the cache and verify that all references are appropriate. A better way, however, is to hide the existence of the cache so that only a small part of the program is even capable of accessing it. For example, suppose that **compute** accepts an **int** and returns another **int**. Then we might hide the cache as part of its implementation:

```cpp
int compute(int n)
{
    // Invariant: if cache[i] is present,
    // its value is equal to what compute(i) would return.
    static map<int, int> cache;
    map<int, int>::const_iterator it = cache.find(n);
    if (it != cache.end())
        return it->second;
    int result;
    // Do the computation here
    cache[n] = result;
    return result;
}
```

In order to verify that the invariant remains true, we need only inspect the body of the **compute** function itself, because no one else is able to touch the cache. Moreover, the invariant makes it relatively easy to understand what operations we can safely perform on the cache. Are we allowed to remove cache elements to save space? Yes, because the invariant talks only about values of elements that exist. What if something causes **compute(n)** to return a different value than it did previously? Then the invariant is no longer true, and we must make it true by removing the appropriate element from the cache—and so on.
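To show how such an invalidation might look, here is a hedged sketch that restructures the cache as a small class, so that removing an entry is the only extra operation exposed. The names **CachedCompute** and **invalidate** are ours, and the squaring function stands in for the genuinely slow computation:

```cpp
#include <cassert>
#include <map>

// Illustrative restructuring of the cached computation as a class.
// Invariant: if cache has an entry for n, its value equals compute(n).
class CachedCompute {
public:
    int operator()(int n) {
        std::map<int, int>::const_iterator it = cache.find(n);
        if (it != cache.end())
            return it->second;      // the invariant guarantees this is compute(n)
        int result = compute(n);    // the (presumably slow) computation
        cache[n] = result;          // invariant holds again: cache[n] == compute(n)
        return result;
    }
    // If compute(n) would now yield a different answer, erasing the
    // entry makes the invariant vacuously true for n again.
    void invalidate(int n) { cache.erase(n); }
private:
    static int compute(int n) { return n * n; } // stand-in for the slow function
    std::map<int, int> cache;
};
```

Hiding the **map** inside the class means that only **operator()** and **invalidate** need to be inspected to verify the invariant.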

This example is still pretty simple, but nevertheless it illustrates an important point: Thinking explicitly about the invariant for our cache encourages us to think about what conditions are necessary to keep the invariant true. In particular, the invariant helps us realize that we need to defend against the possibility of someone changing the cache behind our backs.

### Class Invariants

It is often useful to associate an invariant with a class. When we do so, we effectively use the same invariant for every publicly accessible function that operates on objects of that class, be they public member functions or friend functions. Every such function can assume that the invariant is true when the function begins execution, and is responsible for ensuring that the invariant is true again when the function is about to return.

For example, one fairly common C++ programming technique is to use two classes to define a reference-counted data structure. There are many variations on this technique; one variation uses a nested class to hide the data representation from the users:

```cpp
class Thing {
private:
    struct Data {
        size_t refcount;
        // ...
    };
    Data* dp;
    // ...
};
```

Here, the type **Thing::Data** is private, so it is accessible only to members and friends of **Thing**. Every **Thing** object contains a pointer to a **Data** object, and every **Data** object contains a count (in its **refcount** member) of the number of **Thing** objects that refer to it.

One problem that some C++ programmers find confusing is how to write the constructors, copy constructor, destructor, and copy assignment operator for such a class so as to be assured of maintaining the reference counts correctly. You will not be surprised to learn that we think that an appropriate class invariant will make it easier to handle the reference counts.

We can think of two plausible ways to implement such a class, each of which has its own invariant. The simpler of the two invariants has two parts:

- The **dp** member of every completely constructed **Thing** object points to a **Thing::Data** object.
- The **refcount** member of every **Thing::Data** object is equal to the number of completely constructed **Thing** objects that point to it.

The more complicated design allows a **Thing** object to have a null **dp** member; we have left it as an exercise to find the corresponding invariant. We can assume that the class invariant is true at entry to every member of **Thing**—even constructors—because of the "completely constructed" caveat.

For example, consider how we might use the invariant to help us write the **Thing** copy constructor:

```cpp
Thing::Thing(const Thing& t)
{
    // We need to figure out what to do here
}
```

We know that the formal parameter **t** is a (**const**) reference to the **Thing** object that is to be copied. Because this object has already been completely constructed (or so we must assume!), we know that:

- **t.dp** points to a **Thing::Data** object.
- This object includes a **refcount** member that counts the number of **Thing** objects that refer to it.

Our job is to ensure that when the constructor finishes, these two claims will still be true. To do so, we must initialize **this->dp** to point to a **Thing::Data** object. Because we are implementing a reference-counted data structure, we probably want to use the same **Thing::Data** object as the one to which **t.dp** points, so it suffices to initialize **this->dp** as a copy of **t.dp**. This initialization satisfies the first claim.

We still need to deal with the second claim, because when our constructor returns, the object will become completely constructed. Accordingly, we must adjust the **refcount** member of **this->dp** to account for the new **Thing** object that is pointing to it.

In short, we can satisfy both claims by writing a copy constructor that looks like this:

```cpp
Thing::Thing(const Thing& t): dp(t.dp)
{
    // We can do whatever else we need to do here
    ++dp->refcount;
}
```

Alternatively, we might decide that we do not want to initialize **this->dp** to point to the same object as **t.dp** but instead make it point to a newly created copy of the object to which **t.dp** points. We can do so this way:

```cpp
Thing::Thing(const Thing& t): dp(new Thing::Data(*t.dp))
{
    // We can do whatever else we need to do here
    dp->refcount = 1;
}
```

We leave it as an exercise to verify that this variation still preserves the invariant. We can use the invariant in a similar way to help us implement all of the other copy-control members: the other constructors, destructor, and copy-assignment operator. Moreover, if we decide to add features such as copy-on-write, the invariant will help us keep our bearings.
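As one possible completion, here is a hedged sketch of the remaining copy-control members under the simpler invariant. It is our own filling-in, not the authors' code: the **use_count** accessor is added purely so the behavior can be observed, and the real representation inside **Data** is elided:

```cpp
#include <cstddef>

// Illustrative reference-counted class maintaining the invariant:
// dp points to a Data object whose refcount equals the number of
// completely constructed Thing objects that point to it.
class Thing {
public:
    Thing(): dp(new Data) { dp->refcount = 1; }
    Thing(const Thing& t): dp(t.dp) { ++dp->refcount; }
    Thing& operator=(const Thing& t) {
        // Increment before decrement so self-assignment
        // cannot destroy the shared Data object prematurely.
        ++t.dp->refcount;
        if (--dp->refcount == 0)
            delete dp;
        dp = t.dp;
        return *this;
    }
    ~Thing() {
        // This object is about to stop pointing at *dp,
        // so the count must drop; delete on reaching zero.
        if (--dp->refcount == 0)
            delete dp;
    }
    std::size_t use_count() const { return dp->refcount; } // for observation only
private:
    struct Data {
        std::size_t refcount;
        // ... the real representation goes here
    };
    Data* dp;
};
```

In every member, the invariant holds on entry and is restored before return, which is exactly what makes each function easy to check in isolation.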

### Discussion

It is a common misconception that an invariant is something that is always true. In fact, an invariant is intended to be true only at a specific point or points in a program's text. For an invariant to be useful, it should be associated with a part of the program that can be executed more than once, particularly a loop or a procedure.

Understanding an invariant often helps us understand the corresponding program. The reason is that the invariant corresponds in a deep way to the purpose of that part of the program. So, for example, we saw that in a linear search, we could use the invariant to express the notion that we intend for the search loop to find the first element of a container that has a particular value, rather than finding any such element or skipping over an element even if it is there. In our reference-counted data structure, we saw that we could use an invariant to make a clear claim about what conditions have to occur in order for us to consider the data structure to be valid. Moreover, we saw how the importance of maintaining an invariant leads to the notion of hiding from users code that might be able to falsify the invariant.

We do not pretend that invariants will somehow make all programming problems trivial to solve. However, we have found that when we have a piece of code that we can't quite get to do what we want, we can often sharpen our understanding by asking ourselves: "What is the invariant?"