Two Parallelism "Holy Grails"
A good parallel program has two qualities: (1) it works as intended, and (2) it scales.

Item one on this wishlist: making a parallel program work generally involves making it deterministic. At least, making it deterministic is the part that feels "new" in parallel programming. A deterministic parallel program, even if it is not working the way you want it to, is debuggable like a serial program. It is when you have a non-deterministic parallel program that debugging can easily become a nightmare.
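To illustrate what "deterministic" means here, consider a minimal sketch (my own example, not Ct code): a data-parallel map in plain C++ threads. Because each thread writes only its own disjoint slice of the output, there are no racing writes, and the result is identical no matter how many threads run or how they are scheduled.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// A data-parallel "map": each thread writes a disjoint slice of the
// output, so there are no conflicting writes and the result matches
// the serial version regardless of thread count or scheduling.
std::vector<int> parallel_square(const std::vector<int>& in, int nthreads) {
    std::vector<int> out(in.size());
    std::vector<std::thread> workers;
    std::size_t chunk = (in.size() + nthreads - 1) / nthreads;
    for (int t = 0; t < nthreads; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = std::min(in.size(), lo + chunk);
        workers.emplace_back([&, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i)
                out[i] = in[i] * in[i];  // disjoint indices: deterministic
        });
    }
    for (auto& w : workers) w.join();
    return out;
}
```

A buggy version that, say, accumulated into one shared sum without synchronization would be non-deterministic: it might pass a test on one run and fail the next, which is exactly the debugging nightmare described above.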
Item two: scaling means a program runs incrementally faster each time you offer it incremental hardware threads. It says nothing about efficiency. It turns out that scaling, even a little, is the critical and most difficult part.
Combine these two, and you have the ultimate "wishlist" for making parallel programming approachable.
Ct Technology, from Intel, comes from research focused on just these two items: a practical approach to making programs much more likely to be deterministic, and more likely to scale.
Intel felt the research had gone far enough that a useful product could be created by the end of 2009.
Perhaps much bigger than Ct itself is our claim that C++ can be extended for data parallelism with certain determinism guarantees and scaling characteristics.
The bar is raised: language extensions should address these two "holy grails" ever more explicitly, not always leave them as an exercise for the programmer.
That sounds like kinder, gentler parallelism to me. A worthy goal, as more and more parallel constructs are proposed.
Coverage of the Ct announcement first appeared on HPCwire.