*Jonathan is the chief technical officer and one of the founders of Intrinsa Corp. Jerry is a member of the technical staff at Intrinsa and a veteran of the ANSI C++ Standards Committee. They can be reached at [email protected] and [email protected], respectively.*

An old adage claims that "Anyone can make a bridge that stands, but it takes an engineer to make one that just barely stands." This is as true of software as of bridges. It takes considerable skill to design efficient software that only does as much work as necessary. For example, traditional sorting may be overkill if you really only have a few conditions you need to satisfy. This month, Jonathan and Jerry show how topological sorting does only enough work to satisfy a few constraints. They then extend the basic algorithm to provide good results even when those constraints are contradictory.

-- Tim Kientzle

In developing our PREfix software component simulator, we realized that to get the best results, we should analyze functions in a particular order. For example, if *Foo()* calls *Bar()*, it's best to analyze *Bar()* before analyzing *Foo()*. Given an entire program, we needed to sort the various functions so that a called function was analyzed before the function that called it. Note that there are many different orders that might work; we just needed to find one.

This is an example of a "topological sort." Most sorting techniques require a "total order," where you have to be able to compare any two elements. A topological sort uses a "partial order" -- you may know that A precedes both B and C, but not know (or care) whether B precedes C or C precedes B. Topological sorting is a useful technique in many different domains, including software tools, dependency analysis, constraint analysis, and CAD.

Topological sorting works well in certain situations. If there are very few relations (the partial order is "sparse"), then a topological sort is likely to be faster than a standard sort. Given *n* objects and *m* relations, a topological sort's complexity is *O(n+m)* rather than the *O(n log n)* of a standard sort. In our case, most functions typically call a handful of other functions, meaning the total number of relations (caller/callee pairs) is relatively small, so topological sorting makes sense.

Topological sorts can also deal gracefully with cycles. For example, imagine that two functions X and Y are mutually recursive: X calls Y and Y calls X. In this case, it is useful to detect the cycle and the specific relations that cause the cycle. Standard sorting algorithms, however, will simply fail in this situation. We developed an extension to topological sorting that can produce a "best" order, even in the presence of cycles.

In this article, we present a basic topological sorting algorithm and implementation, then extend the algorithm and implementation to deal with cycles. The implementations presented here make heavy use of the C++ Standard Template Library (STL). Much of the work involves manipulating vectors and queues, and STL's primitives make this easier. Furthermore, packaging the various functions as STL algorithms allows them to be reused on different types of data.

### The Basic Algorithm

For simplicity, we'll describe the algorithm in terms of a generic "precedes" relation: If X precedes Y, we say that X is the predecessor and Y is the successor. Our terminology and discussion follow Donald Knuth's presentation in *The Art of Computer Programming, Volume 1* (Addison-Wesley, 1997).

For example, Figure 1 illustrates the following set of relations on the input set { A, B, C, D }: A precedes B, C precedes D, and B precedes D. For these relations, there are several valid orderings:

{ A, B, C, D }

{ A, C, B, D }

{ C, A, B, D }

However, { C, D, A, B } would not be valid, because it violates the relation that B precedes D.

One way of ensuring a correct order is to simply copy the data, making sure that whenever we copy one item, we know that all of its predecessors have already been copied. As a shorthand for this notion of "already been copied," we'll define a predecessor X as an "active" predecessor of Y if X precedes Y and X has not (yet) been copied. Then this basic algorithm can be summed up quite succinctly:

While there is a member that has no active predecessors, repeat:

1. Pick a member X with no active predecessors.

2. Copy X to the output.

Clearly, the output will indeed be ordered. What is not so obvious is that -- assuming there are no cycles in the input -- the *while* loop will not terminate until every item has been output.

The key to an efficient implementation of this algorithm is to be able to pick members with no active predecessors without having to rescan the entire list at each iteration. One way of accomplishing this is to associate a count of predecessors and a list of successors with each input item, and use an auxiliary queue Q as interim storage of nodes that have no active predecessors but have not yet been output. In fact, Knuth uses the topological sorting algorithm as an example of using a linked list as a queue.

Assume we have an array, *count,* which stores the predecessor count for each member. Using the basic queue operations *add* (at the end of the queue) and *pop* (from the front of the queue), we can refine the algorithm as in Figure 2.

The *while* loop terminates when there are no items left on the queue, either because everything in the input list has been processed, or because there is a cycle.

### Complexity

The overall time complexity of this basic algorithm is *O(n+m)*. The *O(n)* comes from the number of times that the *while* loop (and initial *for* loop) is executed, and the *O(m)* from the nested *for* loop.

Although there is no way to calculate how many times the inner loop will be executed on any one iteration of the outer loop, it will only be executed once for each successor of each member, which means that the total number of times that it will be executed is the total number of successors of all the members -- or the total number of relations.

Space complexity is also *O(n+m)*. The *O(n)* component comes from the predecessor count information stored for each member, and the maximum length of the auxiliary queue. The *O(m)* comes from storing the successors for each member; once again, the total number of successors is the number of relations, so *O(m)*.

### Cycles

The basic algorithm's usefulness may be limited because of its inability to give detailed information about cycles. In practice, there are often one or more cycles in real-world problems. In program analysis, mutually recursive functions cause cycles.

For example, Figure 3 illustrates the set of relations on the set { A, B, C, D, E, F, G }: A precedes B, B precedes C, C precedes D, C precedes B, D precedes B, D precedes E, D precedes F, F precedes G, and G precedes F. Note that there is a cycle involving B, C, and D; and another cycle involving F and G. There is also a cycle involving just B and C, but we can ignore that because of the B, C, D cycle; we are only interested in the maximal cycles.

In *The Design and Analysis of Computer Algorithms* (Addison-Wesley, 1974), Aho, Hopcroft, and Ullman present a complete -- and efficient -- algorithm for identifying strongly connected components in a graph, which we can use for finding cycles.

The basic approach is to iterate through all the members of the set and perform a depth-first search of the relations from each. As we visit each member for the first time, we number it (Aho et al. refer to this as the "depth-first number") and push it onto a stack.

For each member, we also keep a record of the lowest-numbered potential root of a cycle involving that member. Initially, this is simply the member itself; when we are done with the depth-first search from a member, if any of its successors has a lower-numbered potential root, we update the member's potential root.

A member is potentially the base of a cycle if its lowest-numbered potential root is itself. Because the search is depth-first, any cycle will end up at the end of the stack, so we can find the cycle by popping the stack repeatedly until we get to the base node. (Unlike the Aho et al. algorithm, we ignore members that do not have any members following them on the stack; these members are not in any cycle.) Since each member is popped off the stack only once, it is in at most one cycle. The file topsort.h (available electronically; see "Availability," page 3) implements this algorithm.
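The depth-first search just described might be sketched as follows. The names *CycleFinder* and *findCycles* are ours, not the article's; as in the text, only components of two or more members are reported, since singletons are in no cycle.

```cpp
#include <algorithm>
#include <vector>

struct CycleFinder {
    const std::vector<std::vector<int> >& succ;  // successor lists
    std::vector<int> dfn, low;    // depth-first number, lowest-numbered root
    std::vector<bool> onStack;
    std::vector<int> stack;
    std::vector<std::vector<int> > cycles;
    int counter;

    CycleFinder(const std::vector<std::vector<int> >& s)
        : succ(s), dfn(s.size(), -1), low(s.size(), 0),
          onStack(s.size(), false), counter(0) {}

    void visit(int v) {
        dfn[v] = low[v] = counter++;   // number v and push it on the stack
        stack.push_back(v);
        onStack[v] = true;
        for (size_t i = 0; i < succ[v].size(); ++i) {
            int w = succ[v][i];
            if (dfn[w] == -1) {        // unvisited: search depth-first from w
                visit(w);
                low[v] = std::min(low[v], low[w]);
            } else if (onStack[w]) {   // back edge into the current stack
                low[v] = std::min(low[v], dfn[w]);
            }
        }
        if (low[v] == dfn[v]) {        // v's potential root is itself: base of a cycle
            std::vector<int> comp;
            int w;
            do {
                w = stack.back(); stack.pop_back();
                onStack[w] = false;
                comp.push_back(w);
            } while (w != v);
            if (comp.size() > 1)       // ignore members in no cycle
                cycles.push_back(comp);
        }
    }
};

std::vector<std::vector<int> > findCycles(
        const std::vector<std::vector<int> >& succ) {
    CycleFinder f(succ);
    for (size_t v = 0; v < succ.size(); ++v)
        if (f.dfn[v] == -1) f.visit((int)v);
    return f.cycles;
}
```

Applied to Figure 3 (A through G as 0 through 6), this reports exactly the two maximal cycles { B, C, D } and { F, G }.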

### Dealing with Cycles

In a dependency-analysis tool, it might be enough just to identify the cycles so that you know which dependencies overconstrain the problem. In other cases, however, what you want is a "best" ordering -- an ordering that preserves as much of the information as possible. In our case, we want to warn the user of the approximation caused by mutually recursive functions, but still proceed with bottom-up analysis.

In Figure 3, there is no definitive ordering between B, C, and D, or between F and G. However, the first cycle precedes the second cycle because D precedes F (but there is no member of the second cycle preceding any member of the first cycle). Similarly, A precedes the first cycle, and the first cycle precedes E.

Possible "best" orderings for Figure 3 include { A, B, C, D, E, F, G } and { A, D, C, B, F, G, E }, but not { A, E, B, C, D, F, G }.

You can extend the topological sorting algorithm to deal with cycles by first finding the cycles of the set, then creating a set where all members of a cycle are replaced by a single placeholder. Next, topologically sort this smaller set. Finally, replace each placeholder with all the members of the corresponding cycle. Figure 4 shows this extended topological sorting algorithm.

The key point is that the topological sort in step 5 will not encounter any cycles, since we have replaced them all in step 2.

### Complexity

The Aho, Hopcroft, and Ullman cycle-finding algorithm is *O(n+m)*. Since the worst-case number of cycles is *O(n)*, and the number of relations between cycles is *m* (or fewer, if relations between the same cycle can be weeded out efficiently), the extended algorithm is also *O(n+m)*.

Oddly, the worst case for the extended algorithm is the situation where there are no cycles, and it performs a topological sort on all *n* members. If this is the typical case for some data, it may be more efficient to first perform a topological sort with the basic algorithm, and only fall back on the advanced algorithm if cycles actually appear.

In some real-world problems, cycles of length 2 may be much more common than longer cycles. Mutual recursion is a good example of this; pairs of mutually recursive functions are not infrequent, but larger combinations are rarer (although they certainly occur in practice). In this case, you might optimize the basic topological sorting algorithm by making an *O(m)* initial pass through the members to identify mutually recursive pairs; when each member of such a pair has had its predecessor count reduced to 1 (the one remaining predecessor being its partner), the pair is eligible to be put on the output queue. While this computation remains *O(m)*, the constant factor may be high enough that it is not worth doing.

### Implementation

We've tested the implementation on Windows 95/NT with Microsoft Visual C++ 4.2, and on Solaris with the Sparcworks compiler and ObjectSpace's STL library.

The file topsort.h (available electronically; see "Availability," page 3) is the complete implementation of the *topsort*, *cycles*, and *topsortWithCycles* algorithms presented here. The file driver.cpp (also available electronically) illustrates different ways of calling the functions. Example 1 is driver.cpp's output for the example illustrated in Figure 3.

Packaging these algorithms completely in a header file is not ideal, since they define auxiliary types, which include some noninline functions; however, it makes for the clearest exposition.

The *topsort* algorithm produces a linear order for its input obeying the partial order specified by the relations. It returns true if there is a cycle, and false otherwise (but does not give any information about the cycle). The *cycles* algorithm produces a list of cycles in the input. The *topsortWithCycles* algorithm extends topsort to return an order whether or not cycles are present.

The functions are templatized to allow them to operate on arbitrary types. Example 2 shows the declaration of *topsort*.

For efficiency, the underlying *GraphInfo* class operates on integer indices, rather than the iterators that are used in the external interface. The functions *topsort*, *cycles*, and *topsortWithCycles* first convert the relations to relations between integer indices, and use *GraphInfo* to operate on the integers, then use the result to order their output.

While this may appear unnecessary, it is actually important to ensure the time complexity of the algorithm. The aforementioned complexity analysis assumed that the time to index into the count array and get the list of successors of a member is *O(1)*, and that the time to assign a result is also *O(1)*. If strings were being used as indices, for example, array lookups would be *O(log n)* rather than *O(1)*, and so the overall complexity would be *O((n+m) log n)*. For integers, both assignments and lookups are indeed *O(1)*.
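The conversion step itself might look like the following sketch. Each distinct element pays one *O(log n)* map lookup up front, after which the sort proper runs entirely on *O(1)* integer indices. The name *indexRelations* is ours, not the interface used by *GraphInfo* in topsort.h.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Converts relations between arbitrary names into relations between
// integer indices; names[i] recovers the original element for index i.
std::vector<std::pair<int, int> > indexRelations(
        const std::vector<std::pair<std::string, std::string> >& relations,
        std::vector<std::string>& names) {
    std::map<std::string, int> index;  // name -> integer index
    std::vector<std::pair<int, int> > result;
    for (size_t i = 0; i < relations.size(); ++i) {
        const std::string* ends[2] =
            { &relations[i].first, &relations[i].second };
        int idx[2];
        for (int j = 0; j < 2; ++j) {
            std::map<std::string, int>::iterator it = index.find(*ends[j]);
            if (it == index.end()) {   // first time we see this name
                idx[j] = (int)names.size();
                index[*ends[j]] = idx[j];
                names.push_back(*ends[j]);
            } else {
                idx[j] = it->second;
            }
        }
        result.push_back(std::make_pair(idx[0], idx[1]));
    }
    return result;
}
```

The integer order produced by the sort is then translated back to elements by indexing into *names*, which is again *O(1)* per member.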

One additional implementation note: *topsort* checks for relations between an integer and itself, and simply ignores them. Strictly speaking, such a relation is a cycle of length 1, but such a cycle does not affect the linear ordering. It is easy and efficient to deal with this as a special case.

*topsortWithCycles* illustrates the combination of cycle-finding and topological sorting to produce an approximate topological sort in the presence of cycles. For simplicity, this implementation always searches for cycles; if cycles are relatively rare, it may be more efficient to first perform a topological sort, and then find and expand cycles from the remaining relations only if there is at least one cycle.

**DDJ**

*Copyright © 1997, Dr. Dobb's Journal*