Computer architecture guru and UC Berkeley computer-science professor David Patterson has weighed in with an important article on programming for multicore processors -- aka parallel programming -- in the July issue of IEEE Spectrum.
The piece mines the historical record, recalling Intel CEO Paul Otellini's announcement of an inflection point for the industry in 2004, when processors began to move from single to multiple cores. Patterson also recalls the earlier golden age of parallelism, when companies like Thinking Machines, Convex, and nCube attempted to create purpose-built hardware.
Those companies also thought they'd be able to crack the code of parallelism, coming up with techniques that would automatically decompose "dusty decks" (legacy serial code) and enable contemporary programmers to do more than simply exploit coarse-grain, task-level parallelism. In general, this was not to be.
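For readers unfamiliar with the term, coarse-grain, task-level parallelism simply means running a handful of large, independent jobs side by side rather than parallelizing the inner logic of a program. A minimal Python sketch (the function and job list are purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def job(n):
    # Stand-in for one independent, coarse-grain unit of work.
    # (For CPU-bound work in CPython, ProcessPoolExecutor is the usual
    # choice; ThreadPoolExecutor keeps this sketch simple and portable.)
    return sum(i * n for i in range(1000))

# Each task runs on its own, with no shared state to coordinate.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(job, [1, 2, 3, 4]))

print(results)
```

This is the easy kind of parallelism; the hard part Patterson describes is decomposing a single serial program into fine-grained parallel work.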
Yet Patterson sees the ubiquity of high-core-count processors as a renewed opportunity:
But there are reasons for optimism. First off, the whole computer industry is now working on the problem. Also, the shift to parallelism is starting small and growing slowly. Programmers can cut their teeth on dual- and quad-core processors right now, rather than jumping to 128 cores in one fell swoop.
One of the biggest factors, though, is the degree of motivation. In the past, programmers could just wait for transistors to get smaller and faster, allowing microprocessors to become more powerful. So programs would run faster without any new programming effort, which was a big disincentive to anyone tempted to pioneer ways to write parallel code.
Another potential reason for success is the synergy between many-core processing and software as a service, or cloud computing as it is often called. ... Expert programmers can take advantage of the task-level parallelism inherent in cloud computing.
My take: It's a wordy piece, but well worth your time. (This is the first issue of Spectrum in a long while where I immediately opened it up and started reading.) However, Patterson's treatise essentially confirms that a magic bullet is unlikely ever to emerge and that effective data-level parallelism will remain limited to specific applications.
See complete story, The Trouble With Multicore.