A sorting network sorts n items by performing a predetermined set of comparisons. A software implementation for a sequential computer might take the form
swap(1,2); swap(4,5); swap(3,5); swap(3,4); swap(1,4); swap(1,3); swap(2,5); swap(2,4); swap(2,3);

where swap(a,b) means: if (b < a) exchange a and b. This is a nine-comparison network for sorting five items. You might try it with pennies, nickels, dimes, quarters, and cheeseburgers. The best-, worst-, and typical-case number of comparisons are the same; only the number of exchanges varies. Nine comparisons are, in fact, the fewest that any fixed sequence of compare-exchanges needs to sort five items, whatever their initial arrangement. When n is known to be small, these simple predetermined sequences outperform the best algorithmic sorts.
In theory, the minimum number of comparisons necessary to sort n items is the ceiling of log2(n!). As n grows larger, log2(n!) approaches n*log2(n). The reason O(n*log2(n)) sorting algorithms, such as Quicksort, are so efficient as n grows larger should be obvious. Sorting networks generated by the Bose-Nelson algorithm (see Listing 1) are O(n^1.585), which diverges from n*log2(n) rapidly. However, the Bose-Nelson network for 16 elements is 65 comparisons, which is very nearly n*log2(n) (64 for n = 16), and sorting 1,000 random 16-element arrays using a Quicksort that pivoted on the last element in the array required 85.43 comparisons on average, which is just over n^1.585 (about 81). This is probably the reverse of what you might have expected. Sorting is so often discussed in terms of increasing n that it is easy to fall into the trap of expecting "efficient" algorithms to behave nicely over the whole range of their inputs. The "gotcha" in Quicksort is the extra comparisons needed to properly partition the array. They are insignificant when sorting a large array but constitute a significant part of the total when the array is small. As n becomes very small, the behavior of sorting networks comes closer to the ideal.
Bose-Nelson generates minimum-comparison networks only for n <= 8. However, minimum-comparison networks are available for 9 <= n <= 16 as well. Those generated by the code in Listing 2 are based on illustrations in The Art of Computer Programming, Vol. 3 (Knuth 1973). Please note that both listings number elements from one, so each element number is one greater than the corresponding array index.
Sorting small arrays is a problem all its own, difficult to understand in the usual terms. If you need to sort a small array in a time-critical section of code, you can count on only two things holding true: your algorithmic sort may perform much worse than expected, and no algorithmic sort can match the minimum comparison networks on all combinations of elements. When that code is part of something like an operating system, the networks have at least two other attractive features: many comparisons can be done in parallel, if appropriate; and they are inherently re-entrant, so that sorting may be interleaved with other tasks.
Knuth, Donald E. 1973. The Art of Computer Programming, Vol. 3. Reading, MA: Addison-Wesley. Pp. 220-229.
Bose, R. C. and Nelson, R. J. 1962. "A Sorting Problem". JACM, Vol. 9. Pp. 282-296.
Davis, Wilbon. September, 1992. "Time Complexity". The C Users Journal, Vol. 10, No. 9. Pp. 29-38.