When the worst case is really, really bad
In my last note, I talked about the idea of something happening "with probability 1." Such events are bound to happen eventually, even though there is no way to predict just how long it might take. A similar notion occurs with algorithms that have a worst case that is much worse than average. Two such algorithms, or categories of algorithms, are quicksort and hashing.
Hashing typically relies on the notion of a hash code with the property that if two values are unequal, their hash codes will usually be unequal as well. Such hash codes can fail in three ways.
First, the hash code might be poorly designed. As a limiting example, one can imagine someone constructing a data structure for which the hash code is always 42, with the intention of coming back later and replacing that silly hash code with something genuinely useful. Forgetting to replace it might slip through testing and cause performance problems in production. More generally, from time to time we see hash codes that just don't work as well in practice as they did in theory.
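Here is a sketch of how such a forgotten placeholder behaves (the class and names are hypothetical). The key observation is that a constant hash code is still *correct*, because equal values get equal hash codes; functional tests pass while every entry collides into a single bucket and lookups quietly degrade from constant to linear time:

```python
class Key:
    """A hypothetical key type whose placeholder hash was never replaced."""

    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Key) and self.value == other.value

    def __hash__(self):
        return 42  # the placeholder that was supposed to be temporary

# Every key hashes to 42, so all 1000 entries share one bucket.
# Lookups still give the right answer -- they just do it in linear time.
table = {Key(i): i for i in range(1000)}
print(table[Key(500)])  # prints 500; correctness tests happily pass
```

The dictionary falls back on equality comparisons to resolve the collisions, which is exactly why ordinary testing tends not to catch the problem.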
The second kind of failure is much less likely: Perhaps someone could come along by accident and supply a bunch of data values that just happen to have the same hash code. The whole point of hashing, of course, is to choose a hash algorithm that will make this eventuality very unlikely in practice. Indeed, if the hash algorithm is effectively random, then the chance of such dire performance problems is usually too small to worry about.
Where such problems become severe is in the third case: Perhaps a malicious user might reverse-engineer the hash code and supply data with the specific purpose of making the hash algorithm perform badly. Such reverse engineering is a kind of denial-of-service attack, and in our increasingly security-sensitive world, has to be taken more seriously than most people realize. The point is that events that are rare in theory can be made common through a coordinated effort.
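To make that third failure mode concrete, here is a sketch built around a deliberately weak, made-up hash function (the sum of the key's character codes; it is not any real library's hash). Once an attacker has worked out the function, mass-producing colliding keys is mechanical:

```python
from itertools import permutations

def weak_hash(key, table_size=1024):
    """A hypothetical, easily reverse-engineered hash: sum of character codes."""
    return sum(ord(c) for c in key) % table_size

# Any two anagrams have the same character sum, so every permutation of
# the same letters lands in the same bucket of the hash table.
hostile_keys = {"".join(p) for p in permutations("attack")}
buckets = {weak_hash(k) for k in hostile_keys}
print(len(hostile_keys), "keys,", len(buckets), "bucket")  # 180 keys, 1 bucket
```

Feeding all 180 keys to a table that uses this hash forces every insertion and lookup to walk the same ever-growing bucket: the rare worst case, made common on purpose.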
Many years ago, I heard about a striking example of such an attack. It took place in a college dorm that had tankless toilets. Such toilets rely on a large flow of water being for the short time during which the toilet is being flushed. A small pressure-activated device in the toilet shuts off the water after it has been running long enough.
What happened in this dorm was that some students decided to see what would happen if they flushed all the toilets in the building at the same time. To do this, they enlisted a bunch of friends, synchronized their watches, and flushed the toilets.
The results were surprising. Allowing water to flow to all of those toilets at once dropped the water pressure in the building to below what was necessary to activate the shutoff mechanisms. So all the toilets continued to allow water to trickle through their pipes, which in turn prevented the water pressure from increasing. In effect, flushing all the toilets at once broke the building's plumbing, and it stayed broken until someone went through the building and shut off enough of the toilets' water supply by hand to allow the remaining toilets to shut themselves off.
This is a graphic example of how to crash a system by arranging for many events to happen at the same time that were never expected to coincide. It's much the same kind of denial-of-service attack that one might mount against a hashing algorithm.
In case you think such an attack is infeasible in practice, consider quicksort. Like hashing, this is an algorithm that works very well most of the time but has a horrible worst case. The basic idea is to pick a random element in the sequence to be sorted, and then partition the sequence into three segments that are respectively less than, equal to, or greater than the chosen element. This can be done in linear time by swapping elements. After partitioning, one runs quicksort recursively on the two segments that are unequal to the chosen element.
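The algorithm just described can be sketched as follows. For clarity, this version copies the segments into new lists rather than partitioning in place by swapping, but the three-way structure is the same:

```python
import random

def quicksort(seq):
    # Sequences of length 0 or 1 are already sorted.
    if len(seq) <= 1:
        return seq
    # Pick a random element, then partition into three segments:
    # less than, equal to, and greater than the chosen element.
    pivot = random.choice(seq)
    less = [x for x in seq if x < pivot]
    equal = [x for x in seq if x == pivot]
    greater = [x for x in seq if x > pivot]
    # Recurse only on the two segments unequal to the chosen element.
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```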
Quicksort works because a randomly chosen element is apt to sort into a random position in the sequence. If by chance one chooses too many elements that are close to the highest or lowest value in the sequence, performance will be horrible; but that almost never happens.
But consider abstract data types, which in principle can have comparison functions that can change at run time. Might it be possible to mount a denial-of-service attack against a quicksort program by observing how it chooses the elements to compare and subverting the comparison function accordingly?
The answer is yes: In 1999, Doug McIlroy wrote a paper, "A Killer Adversary for Quicksort," that explains how to do it.
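The following is a Python sketch modeled on the adversarial comparator in McIlroy's paper, not a transcription of it. Values are assigned lazily: every element starts out as undecided "gas," and whenever the sort compares two gas elements, the adversary freezes the one it suspects is the pivot as the smallest value not yet handed out. The pivot therefore always sorts to one end, each partition strips off only one element, and the comparison count grows quadratically no matter which element the sort picks:

```python
GAS = float("inf")  # sentinel meaning "value not yet decided"

class Adversary:
    def __init__(self, n):
        self.val = [GAS] * n  # lazily assigned values, indexed by element
        self.nsolid = 0       # next concrete value to hand out
        self.candidate = 0    # element currently suspected of being the pivot
        self.ncmp = 0         # total comparisons observed

    def freeze(self, i):
        self.val[i] = self.nsolid
        self.nsolid += 1

    def cmp(self, x, y):
        self.ncmp += 1
        if self.val[x] is GAS and self.val[y] is GAS:
            # Both undecided: freeze the suspected pivot as the smallest
            # remaining value, dooming this partition to be lopsided.
            self.freeze(x if x == self.candidate else y)
        if self.val[x] is GAS:
            self.candidate = x
        elif self.val[y] is GAS:
            self.candidate = y
        return self.val[x] - self.val[y]

def quicksort(items, cmp):
    # A simple model of three-way quicksort, taking an explicit
    # comparison function so the adversary can observe every comparison.
    if len(items) <= 1:
        return items
    pivot = items[0]
    less = [i for i in items[1:] if cmp(i, pivot) < 0]
    equal = [pivot] + [i for i in items[1:] if cmp(i, pivot) == 0]
    greater = [i for i in items[1:] if cmp(i, pivot) > 0]
    return quicksort(less, cmp) + equal + quicksort(greater, cmp)

n = 128
adv = Adversary(n)
quicksort(list(range(n)), adv.cmp)
print(adv.ncmp)  # quadratic growth: far more than the ~n log n one expects
```

Running the adversary once also yields a concrete hostile input: sorting the elements by their frozen values produces a fixed sequence that makes this quicksort quadratic every time, with no adversary attached.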
Fortunately, I have not personally encountered a case where someone actually made a system misbehave by using McIlroy's technique. However, the fact that it exists, together with the results of flushing all those toilets, suggests that when we design systems that are intended to be used by people who might conceivably be hostile toward us, we should be very careful when we think about events that "almost never happen so we can ignore them."