SD West: Parallel or Bust

The next decade will be dominated by multi-core, says James Reinders. Are you ready for octo-core and beyond?


March 07, 2008
URL:http://www.drdobbs.com/cpp/sd-west-parallel-or-bust/206902441

We've all been hearing the multi-core drumbeat for some time now. The general consensus is that if you want to wring more performance from your application by taking advantage of tomorrow's processors, you're going to need to embrace parallelism in your code today. Yesterday at SD West, the man beating that drum was James Reinders, Chief Software Evangelist for Intel.

Reinders began his keynote with the now-familiar reasons why we've left the "gigahertz era" and entered the "multi-core era". We're approaching the physical limits of processor performance: each further improvement in clock speed now becomes dramatically more difficult to achieve. Multi-core is the way forward, and is quickly becoming the norm in commercial hardware. Quad-core processors are here now, octo-core processors are coming soon, and it won't stop there.

So it's a trend that's not going away, and we ignore it at our peril. Reinders then posed the question: "Are you ready?" Successful parallel programming, according to Reinders, centers on three things: scaling, debugging, and future-proofing.

Of these, scaling may be the trickiest. No one expects a full 4X speedup from running their app on a quad-core processor, of course, but we'd like to get as close to that ideal as possible. Even so, according to Reinders, the best parallel code optimized for two or four cores can fall apart when it's run on five, six, seven, or eight cores. "At 8 or 16 cores, something happens," he said, noting that latencies loom large at this scale.

Part of the problem here is Amdahl's law, says Reinders: the rule that the speedup you gain from throwing more cores at a problem is limited by the portion of the program that can't be parallelized. Past a certain point the returns diminish sharply, and 16 processors may be just as fast as 6,000.
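
To put rough numbers on that (an illustration, not figures from the keynote): Amdahl's law is usually written as speedup = 1 / ((1 - P) + P/N), where P is the fraction of the program that can run in parallel and N is the number of cores. If, say, only half of a program parallelizes (P = 0.5), then 16 cores deliver about 1 / (0.5 + 0.5/16), roughly 1.9X, while 6,000 cores deliver only about 2.0X. The serial half of the program swallows nearly all of the extra hardware.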

So what to do? There's no one solution of course, but one very good idea is to rely on abstractions. Don't program directly to raw threads, but rely on mechanisms (at least for C/C++ programmers) like libraries, OpenMP, or Intel's Threading Building Blocks (TBB). "Parallel programming shouldn't be about writing thread management code," said Reinders.
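
As a rough sketch of what that buys you (an illustration, not code from the talk), an OpenMP loop in C++ parallelizes a computation without a single line of thread-management code; the runtime decides how many threads to create and how to divide the work:

    // Summing an array in parallel with OpenMP; no explicit threads.
    // Build with an OpenMP-capable compiler, e.g. g++ -fopenmp sum.cpp
    #include <cstdio>
    #include <vector>

    double parallel_sum(const std::vector<double>& v) {
        double total = 0.0;
        // The runtime picks the thread count and splits the iterations;
        // the reduction clause safely combines each thread's partial sum.
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < (long)v.size(); ++i)
            total += v[i];
        return total;
    }

    int main() {
        std::vector<double> data(1000000, 1.0);
        std::printf("sum = %f\n", parallel_sum(data));
        return 0;
    }

Intel's TBB provides a similar loop-level abstraction (tbb::parallel_for), built on a work-stealing task scheduler rather than on threads you manage yourself.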

Another good reason to use abstractions is that future architectures may not use cores of equal speed; indeed, according to Reinders, they certainly will not. Code that assumes identical cores will break on these platforms. Having a mechanism to abstract away the underlying architecture will make life much easier in this situation.
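
One concrete way to honor that advice (again a sketch, not from the keynote; expensive_work here is a stand-in for real per-item computation): don't carve the work into one equal chunk per hardware thread, which bakes in the assumption that every core is equally fast. Hand the runtime small chunks instead and let faster cores take more of them:

    #include <cmath>
    #include <vector>

    // Stand-in for whatever per-item computation the application does.
    double expensive_work(long i) { return std::sqrt((double)i); }

    void process(std::vector<double>& results) {
        const long n = (long)results.size();
        // Dynamic scheduling: idle (or faster) cores grab the next chunk,
        // so the loop balances itself even if the cores aren't identical.
        #pragma omp parallel for schedule(dynamic, 1024)
        for (long i = 0; i < n; ++i)
            results[i] = expensive_work(i);
    }

TBB's work-stealing scheduler gives you this kind of load balancing by default.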

Reinders also offered what he calls a "gem of a tip" for debugging: Give yourself a way to turn off concurrency entirely. It is essential to have a way to separate the concurrency bugs from the other bugs. Debug it sequentially first, then introduce concurrency.
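
One cheap way to build that switch (a sketch, assuming the OpenMP examples above; the MYAPP_SERIAL variable name is made up for this illustration): force the thread count to one at startup, or simply compile without OpenMP so the pragmas are ignored and the code runs sequentially.

    #include <omp.h>
    #include <cstdlib>

    // Call once at program startup, before any parallel regions run.
    void configure_concurrency() {
        // Hypothetical kill switch: run the whole program on one thread
        // so that any remaining bugs are plain logic bugs, not races.
        if (std::getenv("MYAPP_SERIAL") != NULL)
            omp_set_num_threads(1);
    }

Setting the standard OMP_NUM_THREADS environment variable to 1 achieves the same thing without recompiling.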

To get started moving your code into the multi-core era, Reinders suggests a couple of Intel web sites:
