Handling RMS Workloads
High-end applications tend to trickle down to wider use as mainstream systems develop the performance characteristics required to run them. This is the case for recognition, mining, and synthesis (RMS)--applications now performed primarily on powerful supercomputers. As multi-core platforms take over the workplace and home, RMS will play a crucial role in enabling these computers to understand and "see" data the way humans do and help us tap all the digital wealth that surrounds us.
Traditionally we have treated the "R," "M," and "S" components as independent application classes: computer vision (recognition), data mining (mining), and 3D photorealistic graphics (synthesis) are typically built as stand-alone applications. However, integrating these components in a real-time, interactive RMS (iRMS) loop could lead to new uses in everything from the sciences and medicine to business and entertainment.
The Promise of Speculative Threading
All these workloads require significant software innovation. Today we rely on software developers to express parallelism in the application, or depend on automatic tools (compilers) to extract this parallelism. These methods are only partially successful. To run RMS workloads and make effective use of many cores, we need applications that are highly parallel almost everywhere.
For example, compilers must generate code that is correct in all cases. Consequently, any time a compiler wants to parallelize two pieces of code, it must consider every potential dependence: it has to analyze whether one piece of code might write a memory location that the other piece may read. Unfortunately, in most cases the compiler has only an approximate view of the memory locations touched by each instruction, so when it tests for potential dependences it tends to be over-conservative. If the compiler cannot prove that two instructions are independent, it presumes they are dependent. As a result, the generated code assumes a huge number of dependences that don't exist, or rarely exist. The code ends up overly serialized and misses many opportunities for parallelization.
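This conservatism can be sketched in a few lines. The following is an illustrative model, not a real compiler pass: each statement carries approximate read and write sets (with `None` standing for "unanalyzable," e.g. a write through an unknown pointer), and the dependence test must answer "dependent" whenever it cannot prove otherwise. All names here are hypothetical.

```python
def may_alias(locs_a, locs_b):
    """Two access sets may alias if they share a location, or if either
    set is unknown (None), which forces the conservative answer."""
    if locs_a is None or locs_b is None:
        return True  # unknown accesses: must assume they overlap
    return bool(locs_a & locs_b)

def must_serialize(stmt_a, stmt_b):
    """Conservatively decide whether two statements may run in parallel.
    Each statement is a dict with 'reads' and 'writes' sets (None = unknown)."""
    return (may_alias(stmt_a["writes"], stmt_b["reads"]) or   # flow dependence
            may_alias(stmt_a["reads"], stmt_b["writes"]) or   # anti dependence
            may_alias(stmt_a["writes"], stmt_b["writes"]))    # output dependence

# Provably disjoint accesses: safe to parallelize.
a = {"reads": {"x"}, "writes": {"y"}}
b = {"reads": {"z"}, "writes": {"w"}}
print(must_serialize(a, b))  # False

# One side writes through an unanalyzable pointer: assumed dependent,
# even if at run time the two statements never actually conflict.
c = {"reads": {"x"}, "writes": None}
print(must_serialize(a, c))  # True
```

The second case is exactly the problem described above: the statements may well be independent in practice, but the static test cannot prove it, so the compiler serializes them.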
Now consider how we might instead design multi-core platforms that parallelize applications in a "speculative" way. The speculating we're doing here is creating speculative threads--threads that do not necessarily commit and thus are not necessarily correct. At the time they are spawned, it's unknown whether they will work well; at some point, they have to be checked. They might speed up the application, or they might be incorrect and discarded. The key is that their correctness is verified down the line. If a thread turns out to be incorrect, no damage is done, because the errant thread is simply squashed and its activity discarded. There's a built-in safety net, so the processor ends up in the same state it would have been in if the thread had never executed.
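The speculate-check-squash cycle can be sketched as follows. This is a minimal model run sequentially for clarity, not an implementation of any particular processor's mechanism: a speculative task executes against a private write buffer while logging the values it reads; at commit time, the thread commits only if everything it read still matches main memory, and otherwise its writes are discarded, leaving memory untouched. All names are hypothetical.

```python
def run_speculative(task, memory):
    """Execute task against a snapshot and a private write buffer.
    Returns the read log and the buffered writes; memory is untouched."""
    snapshot = dict(memory)
    reads, writes = {}, {}
    def load(addr):
        reads[addr] = snapshot[addr]           # log what we depended on
        return writes.get(addr, snapshot[addr])
    def store(addr, val):
        writes[addr] = val                     # buffered, not yet visible
    task(load, store)
    return reads, writes

def try_commit(reads, writes, memory):
    """Commit only if every value the thread read is still current."""
    if any(memory[addr] != val for addr, val in reads.items()):
        return False          # conflict detected: squash, discard writes
    memory.update(writes)     # safe: make the speculative work visible
    return True

memory = {"a": 1, "b": 0}

def task(load, store):
    store("b", load("a") + 10)   # b = a + 10, computed speculatively

# Nothing changed 'a' before commit, so the thread's work becomes visible.
reads, writes = run_speculative(task, memory)
assert try_commit(reads, writes, memory)
print(memory["b"])  # 11

# This time 'a' changes before the check: the thread is squashed and
# memory is exactly as if it had never run.
reads, writes = run_speculative(task, memory)
memory["a"] = 99
assert not try_commit(reads, writes, memory)
print(memory["b"])  # still 11
```

The squash path is the safety net described above: the buffered writes are simply dropped, so an incorrect speculation costs only the wasted work.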
Speculative threads could revolutionize how we parallelize applications. No longer would compilers have to be conservative. Instead of generating code for the worst case, compilers could generate code for the common case. The result would be a much higher degree of parallelism and a significant gain in performance.
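One simple form of "generate for the common case" can be sketched without full thread-level speculation: the compiler emits a parallel version of a loop guarded by a cheap runtime check, and falls back to the safe serial version only in the rare case where the feared dependence actually exists. The example below is a hypothetical illustration in this spirit for an array copy whose source and destination might overlap; the function names and the overlap test are assumptions for the sketch, not any compiler's actual output.

```python
from concurrent.futures import ThreadPoolExecutor

def copy_serial(dst, src, dst_off, src_off, n):
    """Worst-case version: always correct, even for overlapping regions."""
    for i in range(n):
        dst[dst_off + i] = src[src_off + i]

def copy_common_case(dst, src, dst_off, src_off, n, chunks=4):
    """Common-case version: parallel copy behind a cheap runtime guard."""
    # Rare case: the regions can overlap, so take the safe serial path.
    if dst is src and abs(dst_off - src_off) < n:
        copy_serial(dst, src, dst_off, src_off, n)
        return "serial"
    # Common case: provably disjoint at run time, so chunks run in parallel.
    step = (n + chunks - 1) // chunks
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        for start in range(0, n, step):
            end = min(start + step, n)
            pool.submit(copy_serial, dst, src,
                        dst_off + start, src_off + start, end - start)
    return "parallel"

buf = list(range(8)) + [0] * 8
print(copy_common_case(buf, buf, 8, 0, 8))  # parallel
print(buf[8:])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

A static compiler cannot prove the regions disjoint here, since both are views of the same buffer; deferring that decision to run time lets the common case run in parallel while the check, not conservatism, guarantees correctness.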