Benchmarks and Constants
Bentley benchmarked these algorithms on a 400 MHz Pentium II, varying the size of the smaller subarray from 1 to 50, with an overall array of 1 million elements. He found that the Reversal algorithm averaged near 66 nanoseconds per element, Gries-Mills was below 50 ns per element, and the Juggling algorithm was about 190 ns per element. Today's processors, such as the Intel i7 860, run at 2.8 GHz (with TurboBoost, up to 3.46 GHz) and provide larger multilevel caches with automatic prefetching.
Table 1 shows statistics gathered from performance measurements for an input array of 1 million 32-bit elements, split into two subarrays of all possible sizes, which are then Block Exchanged using each of the three algorithms. The code was compiled with Intel C++ Composer XE 2011.
From these performance measurements, the Gries-Mills algorithm has retained its performance lead, with the highest mean performance of 1.22 ns/element, the fastest worst case, and the smallest range. The Reversal algorithm came in a close second, with the smallest standard deviation and the same range; that is, the most consistent performance. The Juggling algorithm's performance is substantially worse: more than 5x slower than Gries-Mills, with a standard deviation 14x worse and over 5x the range. The Gries-Mills algorithm gained over 40x in performance since the Pentium II measurements, substantially more than the frequency ratio alone would account for (3.46 GHz/400 MHz = 8.7x). Thus, CPU and memory improvements other than frequency provided over 4x of the performance gain.
The Reversal algorithm performs |A|/2 swaps in the first step, |B|/2 swaps in the second step, and (|A|+|B|)/2 swaps in the third step, where |A| and |B| are the numbers of elements in the A and B subarrays, for a total of |A|+|B| swaps. Since |A|+|B| = N, the Reversal algorithm performs N swaps. Each swap performs 3 memory reads and 3 memory writes, for a total of 6N memory accesses.
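The three steps above can be sketched in a few lines of C++ (the function name is illustrative; std::reverse performs length/2 swaps per call, matching the counts given):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Reversal block exchange of A = [first, mid) and B = [mid, last):
// reverse A, reverse B, then reverse the whole range.
template <typename It>
void reversalExchange(It first, It mid, It last) {
    std::reverse(first, mid);   // |A|/2 swaps
    std::reverse(mid, last);    // |B|/2 swaps
    std::reverse(first, last);  // (|A|+|B|)/2 swaps
}
```

For example, exchanging A = {1,2,3} with B = {4,5,6,7} in {1,2,3,4,5,6,7} yields {4,5,6,7,1,2,3}.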
The Gries-Mills algorithm performs S = min(|A|,|B|) swaps at each iteration, where S shrinks as the blocks shrink, until no elements remain out of place. The total number of swaps can be approximated by N/S iterations of S swaps each, which is N swaps, or about 6N memory accesses.
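A minimal sketch of the Gries-Mills approach, assuming the array holds ints and the split point i marks the boundary between A and B (names are illustrative; std::swap_ranges performs the block of swaps at each iteration):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Gries-Mills block exchange of A = [0, i) and B = [i, n) within v:
// repeatedly swap the smaller block with its neighbor until done.
void griesMillsExchange(std::vector<int>& v, std::size_t i) {
    std::size_t start = 0;
    std::size_t a = i, b = v.size() - i;  // current sizes of A and B
    while (a != 0 && b != 0) {
        if (a <= b) {
            // swap all of A with the leftmost |A| elements of B
            std::swap_ranges(v.begin() + start, v.begin() + start + a,
                             v.begin() + start + a);
            start += a;
            b -= a;
        } else {
            // swap the rightmost |B| elements of A with all of B
            std::swap_ranges(v.begin() + start + a - b,
                             v.begin() + start + a,
                             v.begin() + start + a);
            a -= b;
        }
    }
}
```

Each iteration places min(|A|,|B|) elements into their final positions, so the swap count sums to about N.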
The Juggling algorithm moves array elements instead of swapping them, where each move is a single read and a single write. In the best case, all array elements are handled in a single cycle, and 2*(N+1) memory accesses are needed (N moves plus a save to and a restore from the temporary storage location). In the worst case, each cycle handles only two elements: the algorithm moves one element into the temporary storage location, moves a single element into its destination, then moves the saved element from temporary storage to its destination, for 3 moves per 2 elements, or 3N memory accesses in total. Thus, the Juggling algorithm performs between 2N and 3N memory accesses.
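A sketch of the Juggling approach, assuming a left rotation by d positions (names are illustrative; the number of cycles is gcd(N, d), and std::gcd requires C++17):

```cpp
#include <cassert>
#include <numeric>  // std::gcd (C++17)
#include <vector>

// Juggling block exchange: rotate v left by d positions using
// gcd(n, d) cycles; each element is moved exactly once, plus one
// save/restore through a temporary per cycle.
void jugglingRotate(std::vector<int>& v, std::size_t d) {
    std::size_t n = v.size();
    if (n == 0 || d % n == 0) return;
    d %= n;
    std::size_t cycles = std::gcd(n, d);
    for (std::size_t start = 0; start < cycles; ++start) {
        int tmp = v[start];            // save first element of the cycle
        std::size_t cur = start;
        for (;;) {
            std::size_t next = (cur + d) % n;
            if (next == start) break;
            v[cur] = v[next];          // one read + one write per element
            cur = next;
        }
        v[cur] = tmp;                  // restore to close the cycle
    }
}
```

When gcd(N, d) = 1 there is a single cycle (the best case above); when gcd(N, d) = N/2 every cycle holds just two elements (the worst case).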
All three algorithms are O(n). From the number of memory accesses, the Juggling algorithm should have the highest performance, as it performs 2x to 3x fewer memory accesses (2N to 3N versus 6N). Performance measurements show the Juggling algorithm to be the slowest, however. The Reversal and Gries-Mills algorithms access memory within the array in a highly coherent way: the next array element accessed is the left or right neighbor of the current element. This memory access pattern is cache friendly, resulting in higher performance.
The Juggling algorithm accesses memory in a non-cache-friendly manner, suffering in performance for it. When the rotation distance is 1, the algorithm accesses the neighboring element. As the rotation distance increases, the percentage of cache-line hits decreases in proportion to the distance. Once the distance exceeds the cache line size, each next-element access becomes a cache miss, which is much slower than a cache hit.
The importance of algorithmic cache-friendliness on swapping algorithms has increased from Pentium II to the present-day Intel i7 860 processor, as the ratio between the best and the worst performance of these algorithms increased from nearly 4x on the Pentium II to over 5x today. The number of memory accesses used by an algorithm does not necessarily provide a good estimate of performance: Knowledge of whether the accesses are cache hits or misses is also necessary.
A swap operation performs a total of 3 reads and 3 writes, but only two of the reads and two of the writes go to system memory (at 32 bits each). Thus, 4 accesses * 4 bytes / 1.22 ns/element = 13.1 GBytes/sec of memory bandwidth. System memory consists of two channels, each 64 bits wide and running at 1333 MHz, for a peak bandwidth of 21.3 GBytes/sec. Thus, the Gries-Mills algorithm uses 62% of the total memory bandwidth from a single CPU core, which leaves little headroom for parallel scaling.
Three sequential O(n) in-place algorithms for swapping/exchanging blocks of unequal sizes were explored. The Gries-Mills algorithm is the performance leader with the Reversal algorithm coming in a close second, and the Juggling algorithm a distant third. The Reversal algorithm is the gem of the three, having high and consistent performance, the smallest and simplest implementation, and being the easiest to understand. The Gries-Mills and Reversal algorithms performed well due to their cache-friendly memory access patterns. The Juggling algorithm performed the fewest memory accesses, but came in over 5x slower due to its cache-unfriendly memory access pattern.
Victor Duvanenko is a frequent contributor to Dr. Dobb's on algorithms for parallel programming.