Alexander is currently working as a Senior Software Engineer for Computer Associates, and can be contacted for the purposes of this article at email@example.com.
Some time ago I became interested in finding a form of interprocess (interthread) communication that would place minimal preconditions on the platform and would not require special processor commands. This article describes lock-free interprocess communication that requires no special processor commands; the only precondition for successful interprocess communication is knowledge of the processor word length.
Many recipes for lock-free interprocess communication rely on special atomic processor commands. In this article, I propose a communication type that requires only the atomic writing of a processor word from the processor cache into main memory, and the atomic reading of a processor word from main memory into a processor register or the processor cache. To the best of my knowledge, all platforms satisfy these requirements when the memory address is properly aligned. Later in the article, I'll discuss performance issues related to cache thrashing.
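To make this precondition concrete, the following sketch (my own illustration, not code from the article; the names are hypothetical) shows the only kind of operation being assumed: stores and loads of a single, naturally aligned machine word, which virtually all hardware performs atomically -- a reader sees either the old value or the new one, never a torn mix of bytes.

```cpp
#include <cassert>
#include <cstddef>

// A properly aligned machine word shared between a writer and a reader.
// The article's only hardware assumption is that a store or load of such
// a word is atomic; no compare-and-swap or other special command is used.
struct SharedWord {
    // alignas guarantees natural alignment even inside larger structures.
    alignas(sizeof(std::size_t)) volatile std::size_t value;
};

// Writer side: one aligned word store publishes the value.
inline void publish(SharedWord &w, std::size_t v) { w.value = v; }

// Reader side: one aligned word load observes the value.
inline std::size_t observe(const SharedWord &w) { return w.value; }
```

Here `volatile` merely keeps the compiler from caching or reordering these particular accesses; in modern C++ one would express the same intent with `std::atomic<std::size_t>` and relaxed ordering.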
Based on this communication mechanism, I implement a registry that allows updates while letting read performance scale linearly with the number of processors. Performance testing results are presented for single- as well as multi-processor systems; this testing shows that on some platforms, the lock-free algorithms outperform standard techniques by a factor of 30 or more (>=3000%).
In this article, I'll present four algorithms. Algorithm one demonstrates the main idea. Algorithm two (the light pipe) is mostly a low-level communication primitive for passing data between two processes, but is interesting to some extent in its own right. Algorithm three takes the cache line length into account to improve the performance of algorithm two. Algorithm four is an implementation of a registry optimised for reads; it uses algorithm three (the cache-line-optimised light pipe) as its communication primitive.
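Before the detailed algorithms, it may help to see the general shape of a pipe built on word-sized reads and writes alone. The sketch below is a generic single-writer/single-reader ring in the Lamport style, not the article's light pipe itself; all names are my own, and it assumes strongly ordered hardware (or added barriers) so that the payload store is visible before the index store that publishes it.

```cpp
#include <cassert>
#include <cstddef>

// Single-producer/single-consumer ring relying only on atomic, aligned
// word loads and stores. Each index word has exactly one writer, so no
// compare-and-swap is needed. Illustrative sketch, not the article's code.
template <std::size_t N>
class WordPipe {
    volatile std::size_t head_;  // advanced only by the reader
    volatile std::size_t tail_;  // advanced only by the writer
    std::size_t buf_[N];
public:
    WordPipe() : head_(0), tail_(0) {}

    // Writer: returns false when the ring is full (holds at most N-1 items).
    bool put(std::size_t v) {
        std::size_t t = tail_;
        std::size_t next = (t + 1) % N;
        if (next == head_) return false;  // full
        buf_[t] = v;       // write the payload first...
        tail_ = next;      // ...then publish it with a single word store
        return true;
    }

    // Reader: returns false when the ring is empty.
    bool get(std::size_t &v) {
        std::size_t h = head_;
        if (h == tail_) return false;     // empty
        v = buf_[h];
        head_ = (h + 1) % N;  // a single word store releases the slot
        return true;
    }
};
```

Algorithm three's refinement would additionally pad `head_` and `tail_` apart by a cache line so the writer and reader do not contend for the same line.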