No Silver Bullet for Parallelism
Parallelism for multicore has been a popular subject for years. You've probably seen plenty of articles, conference presentations, and panel discussions on it. Not coincidentally, it has also been a key topic at practically all of the previous Multicore Expos (of which I am chairman). Does this mean the subject has been mastered? According to industry experts, there will NEVER be a "silver bullet" to save programmers from the complexities of multicore.
Homogeneous multicore processors have become mainstream, but strong constraints on power consumption and performance mean they are often combined with specialized co-processors (e.g., GPGPUs), leaving programmers with a set of heterogeneous cores to deal with.
With the current state of the art in parallel programming, developing applications for these architectures is very difficult and expensive. Compiler technology alone will not fill the gap; compilers are simply not good at making the optimization decisions that multicore requires. This is due to two main issues: many decisions depend on run-time values, and there is no compile-time model that can predict the performance of a parallel program. There is hope, but it will take a very strong combined effort from the industry and research communities to design and agree on a set of standard programming, performance, and debugging tools.
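To make the run-time-values point concrete, here is a minimal sketch (the function names and the threshold value are illustrative, not from any real toolchain) of a decision a compiler cannot make statically: whether fanning work out across cores pays off depends on the input size, which is only known at run time.

```python
# Sketch: the serial-vs-parallel choice depends on a run-time value
# (the input size), so no compile-time model can decide it for us.
from concurrent.futures import ProcessPoolExecutor

# Assumed cutoff; in practice this would be found by measurement on the
# target machine, not chosen at compile time.
PARALLEL_THRESHOLD = 50_000

def _square(v):
    # Module-level worker so it can be pickled for worker processes.
    return v * v

def square_all(values):
    """Square every element, choosing serial or parallel at run time."""
    if len(values) < PARALLEL_THRESHOLD:
        # Small inputs: process-spawn overhead would dominate; stay serial.
        return [_square(v) for v in values]
    # Large inputs: spread the work across cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(_square, values, chunksize=1024))
```

The threshold itself is exactly the kind of machine- and workload-dependent number the panelists argue tools must help developers discover by experiment.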
Andrew Richards of Codeplay Software, also on this panel, has a slightly more positive perspective. He says:
Optimizing code for multicore is a tough task rewarded by elation and frustration, and it's hard to predict which you will experience when. Be prepared to experiment; at least tools can help analyze problems and try out solutions.
Whether you agree or disagree, I invite you to challenge these panelists here in this blog and in the live panel discussion.