The Fastest Sorting Algorithm?

Which sorting algorithm is the fastest? Stefan presents his answer to this age-old question.


April 01, 2000
URL:http://www.drdobbs.com/architecture-and-design/the-fastest-sorting-algorithm/184404062

Which sorting algorithm is the fastest? Ask this question to any group of programmers and you'll get an animated discussion. Of course, there is no one answer. It depends not only on the algorithm, but also on the computer, data, and implementation. However, if you count the number of operations needed to sort integer numbers on a standard von Neumann computer, there is a clear winner -- the algorithm presented in the paper "Sorting In Linear Time?" by A. Andersson, T. Hagerup, S. Nilsson, and R. Raman (Proceedings of the 27th Annual ACM Symposium on the Theory of Computing, 1995). It sorts n integers in time proportional to n log log n. In this article, I'll give you a complete description of this algorithm.

Can it be done even faster? No one knows. We only know that it can't possibly be done using fewer than n operations: An algorithm using fewer operations than that can't look at each of the n numbers and, therefore, might leave some of the numbers out of order.

Even though the n log log n time sorting algorithm came about as a theoretical game, its real-life performance is good. A C implementation like nloglogn.c (available electronically) with no particular optimizations runs faster on a typical 32-bit machine than many standard textbook sorting algorithms.

The Problem

To achieve agreement when discussing the speed of a sorting algorithm, it's necessary to establish some common ground. How do we measure speed? There is a well-established model -- the von Neumann computer, or unit-cost RAM model -- widely used by computer scientists to evaluate algorithms without introducing all the complexities of real-world computers. This model doesn't account for some important things, such as the performance gap between RAM and CPU, but typically gives good approximations of real-life performance.

There is nothing strange about the unit-cost RAM model. It's probably similar to your mental image of a computer. The computer has a RAM divided into words of w bits -- the "word length" of the machine. Each word has an address, and a word can be accessed in constant time given its address. The computer has a CPU capable of performing a small number of instructions, such as reading and writing words in RAM, and performing basic arithmetical and logical operations. Each of these operations can be performed in a constant number of machine cycles.

If you don't use the full power of the RAM model, but only consider sorting algorithms that operate by comparing elements pairwise, it's well known that at least n log n comparisons are needed to sort n elements in the worst case. There are plenty of algorithms, such as mergesort and heapsort, that achieve this bound.

Other sorting techniques may be faster. For example, using radix sorting, you can sort n word-sized integers by scanning the data a few bits at a time. Each scan can be done in linear time, and the number of scans depends on the word length w. Hence, the time for radix sorting is proportional to nw. Here, it's easy to make a mistake. On most computers, w is a small number (typically 32 or 64), and you might be tempted to state that radix sorting is a linear-time sorting algorithm. Not so. The algorithm will only work if w>=log n. If not, you wouldn't even be able to store the numbers: Since each memory address is a word consisting of w bits, the address space won't accommodate n numbers if w<log n. Hence, w>=log n, and in the unit-cost RAM model radix sorting also requires time at least proportional to n log n.
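To make the few-bits-at-a-time idea concrete, here is a minimal LSD radix sort for 32-bit unsigned integers -- my own sketch, not the code from nloglogn.c. Each of the four passes is a linear-time counting sort on one byte, so the total work is proportional to n times w/8 (error handling is kept minimal for brevity):

#include <stdlib.h>
#include <string.h>

/* LSD radix sort of n unsigned 32-bit integers, one byte per pass:
   four linear counting-sort passes over the data. */
void radix_sort_u32(unsigned int *a, size_t n)
{
    unsigned int *buf = malloc(n * sizeof *buf);
    if (buf == NULL || n == 0) { free(buf); return; }

    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[256] = {0};

        /* Count the occurrences of each byte value. */
        for (size_t i = 0; i < n; i++)
            count[(a[i] >> shift) & 0xFF]++;

        /* Turn counts into starting positions (prefix sums). */
        size_t pos = 0;
        for (int b = 0; b < 256; b++) {
            size_t c = count[b];
            count[b] = pos;
            pos += c;
        }

        /* Distribute the elements stably into the buffer. */
        for (size_t i = 0; i < n; i++)
            buf[count[(a[i] >> shift) & 0xFF]++] = a[i];

        memcpy(a, buf, n * sizeof *a);
    }
    free(buf);
}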

I'll now describe an algorithm that does not depend on the word length. For any word length w >= log n, it sorts n word-sized integers in time proportional to n log log n.

The Algorithm

With this algorithm, the sorting is performed in two phases. In the first phase, you reduce the size of the numbers using radix sorting techniques. The elements become smaller and the number of elements doesn't increase. In fact, in linear time, it's possible to cut the number of bits of each number in half.

In the second phase, you perform a mergesort on the shortened numbers. Because the numbers are now much shorter, several numbers will fit in a single machine word. With several numbers in one word, it's possible to do several operations on these numbers in parallel using the shift and bitwise AND/OR/XOR operations found on most computers. In fact, using a scheme known as "packed merging," you can merge two sorted sequences of k numbers, each of which fits in one word, in time proportional to log k. This is an improvement on the standard merging algorithm, which requires time proportional to k.

Given these building blocks, we are almost done. In the first phase, you perform log log n reductions. This is done in n log log n time, because each reduction takes linear time. After halving the number of bits log log n times, the numbers will be so small that you can fit k=log n numbers within one word.

In the second phase, you use the packed merging as a subroutine in a standard mergesort. You perform the merging bottom up. Start by sorting subsequences of length k. Each of the n/k subsequences is sorted in k log k time using a standard comparison-based algorithm, such as mergesort. Plugging in k=log n, you see that this adds up to n log log n.

You now combine these sorted sequences pairwise using packed merging, and thereby create longer sorted sequences. Each sequence is packed in a single machine word. In this first round, you need to merge n/2k pairs; each merge is done in log k time using packed merging. This sums up to n/2k×log k.

In the next round, you need to merge n/4k pairs, each consisting of two words. This is done in n/4k×2 log k=n/2k×log k time, the same time bound as before. In fact, you'll get the same result for each round.

Because the length of the sequences doubles in each round, the number of rounds is log(n/k). Hence, the total time for this version of mergesort is log(n/k)×n/2k×log k. Plugging in k=log n, you see that this expression is also no more than n log log n.

At this point, the data is sorted and no more than n log log n time has elapsed, since both the reduction phase and the mergesorting phase run in time proportional to n log log n.
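To put rough numbers on this, suppose (purely for illustration) that w=64 and n=2^16. Then log n=16 and log log n=4: four linear-time reductions shrink each number from 64 bits to 4 bits, so k=16 of them fit in a 64-bit word. Sorting the n/16 short runs costs about k log k=64 operations each, or 4n in total, and the log(n/16)=12 rounds of packed merging cost about (n/32)×log 16 operations each, or roughly 1.5n in total. (In practice every field also needs a test bit, so the packing is a little looser, but the proportions are the same.)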

Reducing the Number Size

There are several alternative algorithms that can be used for reducing a sorting problem to one with shorter numbers. Which to choose depends on whether you want to optimize the rate of reduction, the speed of the algorithm, or the amount of memory required. Here, I'll show you a simple algorithm that reduces the number of bits by a factor of 2. It runs in linear time, but uses plenty of memory.

Assume that you want to sort n b-bit numbers. I'll use a bucketing scheme with two bucket tables: High and Low. Each table entry can hold a linked list of numbers. I'll explain the algorithm using a small example. In Figure 1, you see seven 6-bit numbers to be sorted. Figure 1 also shows the two bucket tables and a batch list used to record which buckets are actually being used. This is necessary because you cannot afford to inspect every single bucket: The number of buckets may be much larger than the number of elements.

Figure 1: Bucketing on high-order bits. All active buckets are recorded in the batch list.


Data: 100101, 011001, 100111, 001100, 011111, 110111, 001000

High                        Low
------------------------    ------------------------
000:                        000:
001: 001100, 001000         001:
010:                        010:
011: 011001, 011111         011:
100: 100101, 100111         100:
101:                        101:
110: 110111                 110:
111:                        111:

Batch: 100, 011, 001, 110

The first step in Figure 1 is to perform bucketing on the high-order bits; in this example, the first 3 bits of each number. Every time a bucket is used for the first time, the number of this bucket is recorded in the batch list.

Figure 2 shows the second step of the algorithm, where you traverse each non-empty bucket in the High bucket table, moving elements into the Low bucket table. There are several important things to note:

  1. The High buckets are found using the batch, so you never have to inspect an empty bucket.
  2. The minimum element of each High bucket is left behind; being the smallest, it comes before every element that is moved out of that bucket.
  3. The elements moved to the Low table are bucketed on their low-order bits -- in this example, the last 3 bits.
  4. Every time a Low bucket is used for the first time, the number of that bucket is added to the batch.

Figure 2: Bucketing on low-order bits. The minimum element in each High bucket is left behind.


High                        Low
------------------------    ------------------------
000:                        000:
001: 001000                 001:
010:                        010:
011: 011001                 011:
100: 100101                 100: 001100
101:                        101:
110: 110111                 110:
111:                        111: 100111, 011111

Batch: 100, 011, 001, 110, 111, 100

The next step is to sort the batch. This is the reduced sorting problem -- the batch contains at most n numbers and each number has b/2 bits.

Using the sorted batch, it's straightforward to assemble the numbers in ascending order. You start by moving numbers back from the Low bucket table to the High bucket table; Figure 3 shows the result. Once again, you use the batch to find the nonempty buckets. But because the batch is now sorted, the Low buckets will be visited in order. Hence, each High bucket will end up containing a sorted list.

Figure 3: Using the now sorted batch, the elements from the Low buckets have been moved back to High buckets and can be extracted in ascending order.


High                        Low
------------------------    ------------------------
000:                        000:
001: 001000, 001100         001:
010:                        010:
011: 011001, 011111         011:
100: 100101, 100111         100:
101:                        101:
110: 110111                 110:
111:                        111:

Batch: 001, 011, 100, 100, 110, 111

Finally, you traverse the nonempty High buckets, once again using the sorted batch, to produce the final sorted list.
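To tie the four steps together, here is a compact C sketch of one reduction-and-reassembly pass for b-bit values (b even, b<=32). It is my own illustration of the scheme, not the code in nloglogn.c: for brevity the batch is sorted with qsort, where the real algorithm would recurse on the shorter b/2-bit numbers; error handling is omitted; and the bucket tables are initialized eagerly, which is exactly the problem discussed next.

#include <stdlib.h>

typedef struct node { unsigned val; struct node *next; } node;
typedef struct { node *head, *tail; } bucket;    /* a singly linked list */

static int cmpu(const void *x, const void *y)
{
    unsigned a = *(const unsigned *)x, b = *(const unsigned *)y;
    return (a > b) - (a < b);
}

static void append(bucket *b, node *p)
{
    p->next = NULL;
    if (b->head == NULL) b->head = b->tail = p;
    else { b->tail->next = p; b->tail = p; }
}

/* Sort a[0..n-1], where every value has at most b bits. */
void bucket_sort_step(unsigned *a, size_t n, int b)
{
    size_t nb = (size_t)1 << (b / 2);            /* buckets per table */
    unsigned mask = (unsigned)(nb - 1);
    bucket *high = calloc(nb, sizeof *high);     /* eager init, see below */
    bucket *low  = calloc(nb, sizeof *low);
    node *pool = malloc(n * sizeof *pool);
    unsigned *batch = malloc(2 * n * sizeof *batch);
    size_t nbatch = 0, out = 0;

    /* Step 1: bucket on the high-order b/2 bits, recording first use. */
    for (size_t i = 0; i < n; i++) {
        unsigned hi = a[i] >> (b / 2);
        if (high[hi].head == NULL) batch[nbatch++] = hi;
        pool[i].val = a[i];
        append(&high[hi], &pool[i]);
    }

    /* Step 2: leave the minimum of each High bucket behind; move the
       rest to the Low table, bucketed on the low-order b/2 bits. */
    size_t nhigh = nbatch;
    for (size_t j = 0; j < nhigh; j++) {
        bucket *hb = &high[batch[j]];
        node *min = hb->head, *next;
        for (node *p = min->next; p != NULL; p = next) {
            next = p->next;
            if (p->val < min->val) {             /* keep the smallest */
                unsigned t = p->val; p->val = min->val; min->val = t;
            }
            unsigned lo = p->val & mask;
            if (low[lo].head == NULL) batch[nbatch++] = lo;
            append(&low[lo], p);
        }
        min->next = NULL;
        hb->head = hb->tail = min;
    }

    /* Step 3: sort the batch -- the reduced, b/2-bit problem. */
    qsort(batch, nbatch, sizeof *batch, cmpu);

    /* Step 4: visit the Low buckets in sorted order, moving elements back
       to their High buckets, then extract the High buckets in order. */
    for (size_t j = 0; j < nbatch; j++) {
        node *next, *p = low[batch[j]].head;
        low[batch[j]].head = NULL;
        for (; p != NULL; p = next) {
            next = p->next;
            append(&high[p->val >> (b / 2)], p);
        }
    }
    for (size_t j = 0; j < nbatch; j++) {
        for (node *p = high[batch[j]].head; p != NULL; p = p->next)
            a[out++] = p->val;
        high[batch[j]].head = NULL;
    }

    free(high); free(low); free(pool); free(batch);
}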

You might have noticed a problem with this algorithm. How do you know if a bucket is empty? The obvious solution is to initialize each bucket to a null pointer. But you can't afford that: The number of buckets might be larger than n and the algorithm is supposed to run in linear time. The solution is a bit tricky.

How to Avoid Initializing Memory

Consider an algorithm that uses a large memory area. If the running time of the algorithm is smaller than the size of the memory, initializing the memory will take longer than running the algorithm. However, using a shrewd trick, it's possible to refrain from initializing the memory.

The idea is to keep track of which memory positions have been used so far. Using this information, you may postpone the initialization until the first time a memory cell is encountered.

One solution would be to use an array to keep track of active memory positions. Every time a memory position is accessed, check whether the address is in this array of active memory positions. If not, initialize this memory cell and add the address to the array.

For this scheme to work without increasing the overall running time, it's crucial that the lookups in the array can be performed in constant time. To achieve this, use double bookkeeping. Not only do you use an array of active memory positions, but each memory position is also extended to contain a pointer into this array. In Figure 4, for instance, the memory has three active memory cells: cell numbers 0, 4, and 6.

Figure 4: How to avoid initializing memory using double bookkeeping.


Memory                        Active
-------------------------      -------
Address Contents Pointer

0       134431   X---------->  0
1       938434      -------->  4
2       432754      |  ----->  6
3       292343      |  |
4       874944   X---  |
5       002345         |
6       654243   X------
7       112903

Checking whether a memory cell is active is a two-step procedure. First, you check whether the pointer attached to this memory position is valid: Does it point to a position in the active array? If not, the memory position contains garbage and has not been initialized. If the pointer is valid, you also need to check whether the corresponding pointer in the active array actually points to this memory address. If it does, the memory cell is active.

This check only requires two memory accesses. Hence, it can be performed in constant time. Adding a new address to the active list is also done in constant time: Store the memory address in the next free position in the active array and set up the pointer associated with the memory location accordingly.
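In C, the "pointer" can simply be an index into the active array. The following sketch is one possible way to code the trick (my own illustration, not necessarily how nloglogn.c structures it); note that reading indeterminate malloc'd memory is, strictly speaking, undefined behavior in ISO C, so treat it as a conceptual sketch of the technique:

#include <stdlib.h>

/* Double bookkeeping over an uninitialized memory area.  mem[] and ptr[]
   are never cleared; the two cross-checks in lazy_is_active() make their
   garbage contents harmless. */
typedef struct {
    unsigned *mem;     /* payload words, left uninitialized */
    size_t   *ptr;     /* per-cell back-pointer into active[] */
    size_t   *active;  /* list of cells initialized so far */
    size_t    nactive;
    size_t    size;
} lazy_array;

int lazy_init(lazy_array *a, size_t size)
{
    a->mem     = malloc(size * sizeof *a->mem);     /* not cleared */
    a->ptr     = malloc(size * sizeof *a->ptr);     /* not cleared */
    a->active  = malloc(size * sizeof *a->active);
    a->nactive = 0;
    a->size    = size;
    return a->mem && a->ptr && a->active;
}

/* Constant-time test: has cell i been written yet? */
int lazy_is_active(const lazy_array *a, size_t i)
{
    size_t p = a->ptr[i];
    return p < a->nactive && a->active[p] == i;
}

/* Read cell i, treating untouched cells as holding def. */
unsigned lazy_read(const lazy_array *a, size_t i, unsigned def)
{
    return lazy_is_active(a, i) ? a->mem[i] : def;
}

/* Write cell i, registering it in the active list on first use. */
void lazy_write(lazy_array *a, size_t i, unsigned v)
{
    if (!lazy_is_active(a, i)) {
        a->ptr[i] = a->nactive;
        a->active[a->nactive++] = i;
    }
    a->mem[i] = v;
}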

Fast Merging of Short Numbers

All that remains is to merge short sequences of short numbers fast. The idea is to use a parallel algorithm that only requires very simple operations. If the algorithm is simple enough, it may be possible to simulate the parallelism using the inherent parallel nature of bitwise logical operations. For example, the bitwise OR operation performs w OR operations in parallel in just one machine cycle.

Batcher's bitonic merging fulfills the requirements. The algorithm is based on a property of so-called "bitonic sequences." A bitonic sequence is almost sorted: It is the concatenation of an ascending and a descending sequence, or it can be obtained by a rotation (cyclic shift) of such a sequence. For example, the sequence 2, 3, 4, 3, 2, 1 is bitonic since it is the concatenation of the ascending sequence 2, 3, 4 and the descending sequence 3, 2, 1. The sequence 3, 4, 3, 2, 1, 2 is also bitonic, because it can be obtained by rotating the previous bitonic sequence. Bitonic sequences have a special property described in the lemma in Figure 5.

Figure 5: The bitonic lemma.


If the sequence x(1), x(2), ..., x(2k) is bitonic,
the two sequences

  L = min(x(1), x(k+1)), min(x(2), x(k+2)), ..., min(x(k), x(2k))

and
  R = max(x(1), x(k+1)), max(x(2), x(k+2)), ..., max(x(k), x(2k))


are also bitonic. Furthermore, each element of L is smaller
than or equal to each element of R.

In other words:

  1. Start with any bitonic sequence with an even number of elements, cut it in half, and form two new sequences by comparing the two halves element by element.
  2. Form one sequence, L, containing the minimum elements of each pair and another, R, which contains the maximum elements.
  3. Then the sequences L and R will both be bitonic and each element of L will be smaller than or equal to each element of R.

Using this lemma, you can efficiently merge sorted sequences. For instance, by reversing one sequence in Figure 6 and appending the other, you get a bitonic sequence. In the first phase of the merging, you use the lemma to produce two subsequences L and R, where each element of L is no larger than each element of R. To complete the merging, you only need to sort L and R and concatenate the two sequences. This can be done using the lemma once again. Remember that both L and R are bitonic, so in the second phase, you can apply the bitonic lemma to each of them. The process is repeated recursively until the subsequences consist of just one element.

Figure 6: The first phase of merging two sorted sequences using the bitonic lemma.


X = 0, 2, 3, 5
Y = 2, 3, 6, 7

// Form a bitonic sequence S by reversing X and appending Y:
S = 5, 3, 2, 0, 2, 3, 6, 7

// Compute the min and max sequences L and R:
L = min(5, 2), min(3, 3), min(2, 6), min(0, 7) = 2, 3, 2, 0
R = max(5, 2), max(3, 3), max(2, 6), max(0, 7) = 5, 3, 6, 7
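Before any packing into machine words, the recursion itself can be sketched on plain arrays. The following C functions merge two ascending runs of length k (a power of two) exactly as just described: reverse one run, append the other, and apply the bitonic lemma recursively. This is my own illustration of the lemma at work, not the packed version the algorithm actually uses:

/* In-place bitonic merge: s[0..m-1] must be bitonic and m a power of two.
   Afterwards s is sorted in ascending order. */
static void bitonic_merge(unsigned *s, size_t m)
{
    if (m < 2) return;
    size_t k = m / 2;
    for (size_t i = 0; i < k; i++)       /* the bitonic lemma: form L and R */
        if (s[i] > s[i + k]) {
            unsigned t = s[i]; s[i] = s[i + k]; s[i + k] = t;
        }
    bitonic_merge(s, k);                 /* L is bitonic ... */
    bitonic_merge(s + k, k);             /* ... and so is R  */
}

/* Merge the ascending runs x[0..k-1] and y[0..k-1] into out[0..2k-1].
   k must be a power of two. */
void merge_by_bitonic(const unsigned *x, const unsigned *y,
                      unsigned *out, size_t k)
{
    for (size_t i = 0; i < k; i++) out[i] = x[k - 1 - i];   /* reverse X */
    for (size_t i = 0; i < k; i++) out[k + i] = y[i];       /* append Y  */
    bitonic_merge(out, 2 * k);
}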

Packing all of the numbers in one machine word, it's possible to perform each phase of this merging algorithm in constant time; see Figure 7. In this example, the numbers consist of 3 bits. You'll also need a special test bit for each number, so you can fit eight numbers into one 32-bit word. The special bitmask N, which indicates which elements of A are smaller than or equal to the corresponding elements of B, is computed using subtraction and the basic bitwise AND/OR operations. The second phase of the merging can be performed in a similar fashion. In fact, both subproblems can be solved in parallel using the same technique. (The full details can be found in nloglogn.c, available electronically.)

Figure 7: Implementing the computation of Figure 6 in parallel with standard operations only.


A = 0101 0011 0010 0000 0000 0000 0000 0000   // 5 3 2 0 0 0 0 0
B = 0010 0011 0110 0111 0000 0000 0000 0000   // 2 3 6 7 0 0 0 0

// Compute a bitmask N, showing which elements of A are
// smaller than or equal to the corresponding elements of B.
// M is a precomputed constant.

M = 1000 1000 1000 1000 1000 1000 1000 1000   // 8 8 8 8 8 8 8 8
N = ((B OR M) - A) AND M                      // 0 8 8 8 8 8 8 8
N = N - (N>>3)                                // 0 7 7 7 7 7 7 7

// Compute the max and min sequence and concatenate them.

Z =    (B AND N)                             // 0 3 6 7 0 0 0 0
    OR (A - (A AND N))                       // 5 0 0 0 0 0 0 0
    OR ((A AND N)>>16)                       // 0 0 0 0 0 3 2 0
    OR ((B - (B AND N))>>16)                 // 0 0 0 0 2 0 0 0

Each phase of this merging algorithm is implemented using just a constant number of basic operations, so the total time to merge k numbers, small enough to fit within one word, is proportional to log k.
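As a final illustration, here is the computation of Figure 7 transcribed into C: one constant-time phase for eight 3-bit values, each padded with a test bit, packed into a 32-bit word. It is a direct transcription of the figure, not the exact code in nloglogn.c:

#include <stdint.h>
#include <stdio.h>

/* One constant-time phase of packed merging (Figure 7).  For each of the
   eight 4-bit fields, the maximum of the two corresponding values stays in
   place and the minimum is shifted down four fields; with the padding used
   in the figure, the four maxima end up in the high half of the word and
   the four minima in the low half. */
uint32_t packed_merge_phase(uint32_t a, uint32_t b)
{
    const uint32_t M = 0x88888888u;          /* the test bit of every field */

    /* N marks (with 7) the fields where a <= b: the borrow from the
       subtraction clears the test bit exactly where a > b. */
    uint32_t N = ((b | M) - a) & M;
    N = N - (N >> 3);                        /* 8 -> 7 in the marked fields */

    return (b & N)                           /* b where a <= b              */
         | (a - (a & N))                     /* a where a >  b              */
         | ((a & N) >> 16)                   /* a where a <= b, shifted down */
         | ((b - (b & N)) >> 16);            /* b where a >  b, shifted down */
}

int main(void)
{
    /* The values from Figure 7: A = 5 3 2 0 ..., B = 2 3 6 7 ... */
    uint32_t a = 0x53200000u, b = 0x23670000u;
    printf("%08x\n", packed_merge_phase(a, b));   /* prints 53672320 */
    return 0;
}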


Stefan is a lecturer in the Department of Computer Science at the Royal Institute of Technology (KTH) in Stockholm, Sweden. He can be reached at [email protected].

