Herb Sutter is a bestselling author and consultant on software development topics, and a software architect at Microsoft. He can be contacted at www.gotw.ca.
Let's say you need to migrate existing code to take advantage of concurrent execution or scale on parallel hardware. In that case, you'll probably find yourself in one of these two common situations, which are actually more similar than different:
- Migrating an application: You're an application developer, and you want to migrate your existing synchronous application to be able to benefit from concurrency.
- Migrating a library or framework: You're a developer on a team that produces a library or framework used by other teams or external customers, and you want to let the library take advantage of concurrency on behalf of the application without requiring application code rewrites.
You have a mountain of opportunities and obstacles before you. Where do you start?
This column is about the most basic strategy to have in our toolchest. It may seem so simple that it goes without saying, but there are a few important details to keep in mind to do it correctly.
Adding Concurrency: Target It, Localize It, Hide It
Perhaps the most effective way to deliver concurrency is to hide it inside "synchronous" APIs. The key is to inject concurrency in a targeted, localized way inside key functions or methods, and let the application code that calls those methods stay happily sequential and blissfully unaware of the concurrency. We want to keep from exposing concurrency throughout the entire application, because the existing code wasn't designed with concurrency and nondeterminism in mind. The less code that has to be upgraded to know about concurrency, the better.
First, identify the places where we can benefit from concurrency. Here are two major kinds of situations to look for:
- High-latency operations that can benefit from concurrent waiting. Here we want to look for places where our lower-level or library code does synchronous work when the caller requests it. If that work could be launched actively sooner (even if sometimes speculatively), then the waiting can overlap with the caller's execution, so that when the caller finally does ask for the result, some or all of the work or waiting is already complete.
- Compute-intensive operations that can benefit from parallel execution. Here, we want to look for "hot spot" functions or methods where our application or library code spends a significant part of its computational effort. Focus on the ones that are amenable to parallelization because we can readily break the work into chunks that can be performed in parallel. Typically, we're looking for natural parallelism that we can exploit in the algorithm (e.g., recursive divide-and-conquer where nonoverlapping subranges can be processed in parallel) or in the data structures (e.g., nonoverlapping subtrees, subgraphs, or hash table buckets can be processed in parallel). These are our initial targets.
Of course, there's overlap between these two categories, as we'll see in the following examples.

Second, once we've found that low-hanging fruit, add concurrency in those functions internally. Note that the word "internally" is essential -- we don't want to expose the concurrency in the API we present to the caller. Ideally, all concurrency is bounded within the scope of the function call: the caller calls our function, we launch the concurrency and join with it before returning, and the function just appears to execute faster. Next best, the concurrency is not bounded but is managed automatically: the caller calls our function, we launch the concurrency and return with something still running under the covers, and the caller doesn't need to know because in some subsequent call into our code (or upon a timeout, or some other automated way) we join with the concurrent work.
With the concurrency thus compartmentalized, the calling code doesn't have to know about the parallel work. As far as the caller is concerned, our library method just happens to execute better or faster than before.