Algorithms that use general problem-solving techniques are among the simplest and most powerful algorithms available. They successively generate possible solutions to a problem and test each solution for correctness. Such code can be written once and then reused for many different problems, letting a programmer avoid writing a specialized algorithm for each one.

Backtracking is one example of these techniques. Backtracking works on the class of problems called combinatorial problems. A problem is combinatorial if the solution can be expressed as an ordered set of choices from a list provided in the problem statement. For example, scheduling is a combinatorial problem because the solution is an ordered list of jobs.

Backtracking is part of a family of related algorithms (branch-and-bound, hill-climbing, Z*, A*, alpha-beta search, etc.), all of which use general problem solving techniques to solve combinatorial problems. Of these algorithms, backtracking is both simple and representative: it is relatively easy to learn, and the techniques it uses are similar to those of its more complicated cousins.

Unfortunately, many treatments of backtracking ignore its greatest strength — its generality. Specialized algorithms are written for each problem, leading to one algorithm to solve the n-queens problem, another to solve optimal packing, and so on. I use C++ templates and STL to write a generic algorithm that works for any problem solvable by backtracking. My class requires you to write a simple criterion function that the algorithm will use to find the solution.

### A Brute Force Approach

The easiest way to understand backtracking is to start by studying a brute force approach that is simple but impractical. Later I transform it into the efficient and practical technique of backtracking.

A combinatorial problem typically has a finite number of solutions. The brute force algorithm generates every possible solution to a combinatorial problem and then tests it for correctness.

```
do {
    generate the next solution;
    test solution for validity;
} while (solution is invalid);
```

This algorithm is general and slow. It works for any problem that has a finite number of testable solutions, so it is general. It is slow because a combinatorial problem has an exponential number of possible solutions.

However, the algorithm's simplicity makes it easy to use. Generating all possible solutions to a combinatorial problem is trivial; once the code is written it can be used for any future problem. The only remaining task is writing a function that tests if a solution is valid. The problem statement defines how to test the solution, so a few minutes of coding produces an algorithm guaranteed to find an answer, no matter how slowly.

### Optimizing the Process

I use the n-coloring problem as an example. The n-coloring problem requires you to color a map with n colors such that no adjacent regions are colored the same. This problem is easy to understand, but it has many important real-world applications.

Consider using the brute force technique described above to color the United States with four colors. There are over 10^29 possible colorings. No computer could generate and test this many solutions in a reasonable time. There needs to be a better way.

Imagine you are trying to color a map by hand. Would you color the entire map, then check if it was valid? Not if you're smart. You would color one state at a time, and immediately check if it was the same color as a neighbor. If it matched the color of a neighbor you would immediately correct your mistake — there is no point in coloring the rest of the map because the final result is guaranteed to be invalid.

Therefore, you would keep changing the color of the state until it didn't conflict with its neighbors. If you couldn't find a color that was correct, you'd conclude that you made a bad decision earlier. You would then backtrack to a previously colored state, change its color and then proceed forward.

### Decision Trees

Any solution to a combinatorial problem may be regarded as a series of decisions. Coloring the contiguous U.S. map with four colors consists of 49 decisions, one decision for each state. Each decision has four possible choices.

A problem's solution space is defined as the set of all possible solutions. Decision trees represent the solution space in a tree structure. The tree has one level for each decision. The root of the tree represents the initial state. The root has one child for each choice that can be made for the first decision. Each of these children in turn has its own set of children representing the choices available for the second decision. This pattern continues to the bottom of the tree.

Thus, the decision tree for coloring the U.S. would have 49 levels, one for each state, and each node would have four children, one for each color. Any traversal from the root to a leaf represents one possible solution to the problem. Figure 1 illustrates a decision tree for coloring three states with two colors.

##### Figure 1: A decision tree for coloring three states with two colors.

### Pruning the Tree

Backtracking uses a decision tree to formalize the ad hoc hand coloring method described above. It starts at the root of the tree with no states colored. It colors the first state by choosing one of the four children of the root. To keep the algorithm simple, the backtracking algorithm always chooses the leftmost child. It traverses to that child, making it the current node.

The algorithm then colors the next state by choosing the leftmost child of the current node. As with coloring by hand, the algorithm now checks to see if the coloring is valid. Suppose it finds that both states are the same color; assuming the states are adjacent on the map, the coloring is invalid.

Since the coloring is invalid, there is no need to traverse any lower in the tree; all lower branches are guaranteed to yield invalid results as well. Therefore the algorithm prunes the current node from the tree, and backtracks to the previous node.

Pruning the tree guarantees that the invalid colorings are not considered. The gains are not negligible. The case above pruned a subtree with 4^47 leaves from the decision tree, which is over 10^28 potential colorings that were never examined.

The backtrack algorithm continues to traverse the tree, descending level by level as long as the coloring is valid, and backtracking and pruning as soon as it detects an invalid coloring.

Note that at each step the algorithm needs to test only the most recent decision. If the algorithm is coloring the nth state, the n-1 states must be valid since they were just checked in the previous step. Thus, the algorithm only checks to see if the nth state's coloring conflicts with its adjacent states. If not, the coloring of all n states must be valid.

### Storing Decision Trees

Although I described the algorithm as if it traversed a preexisting decision tree, it actually uses a more intelligent technique. The decision tree for even a modest problem would exceed the storage capacities of any known computer.

Instead, the algorithm generates only one node of the tree at a time, which is always the current leaf. Invalid nodes are not stored because they are pruned. This allows the algorithm to store the entire tree in an array. Each array element represents one level in the tree; the nth element of the array represents a child of the node represented by the (n-1)th element.

For the coloring problem, let the colors be numbered 1 to 4. The algorithm colors the first state by appending the first color to an empty array, giving `[1]`. It then colors the next state by appending 1 to the array, giving `[1,1]`. This coloring is invalid (assume the states are adjacent), so the algorithm must backtrack. Conceptually, the algorithm backs up to the previous node, flags it invalid, and then creates a new child node. In practice, the algorithm just increments the last element's value, giving `[1,2]`. This coloring is valid, so the algorithm creates a new child, giving `[1,2,1]`.

What happens when the current node is invalid and its value is already the last valid color? Instead of incrementing the element, the algorithm backtracks by deleting the last element until it finds an element that it can increment. Thus `[3,2,4]` becomes `[3,3]`, and `[4,1,4,4,4]` becomes `[4,2]`. In effect, the algorithm counts in base n arithmetic.

This implementation is very general. The integers in the array merely represent the nodes in the decision tree. By defining what the integers represent, the algorithm will work with any decision tree.

### STL Implementation

Most backtracking algorithms are implemented using an array of `int`s to store the decision tree, and require a user-defined function that validates the decision tree. The algorithm generates a decision, passes it to the user-defined function, and either backtracks or continues forward based on the result of the function call. However, backtracking is a generalized algorithm. It should be implemented in a generic way that works for any data type and any container choice. For example, it would be nice to use `enum`s to represent the color choices for the map coloring problem.

STL provides a powerful paradigm to make the backtracking algorithm generic. In STL, algorithms are implemented as functions that use iterators to traverse over a container. Iterators have the same semantics as C-style pointers. Templates allow the functions to use iterators from any type of container, letting you use C-style arrays, vectors, deques, etc., without any changes to the algorithm's code.

My backtrack algorithm uses templates that allow it to use any STL container to store the decision tree. The template arguments let you specify your data type (`int`, `enum`, etc.), the container that stores the decision tree (`vector`, C-style array, etc.), and the user-defined function that evaluates the decision tree's correctness.