John Gross is chief engineer and Jeremy Orme chief architect at Connective Logic Systems. They can be contacted at firstname.lastname@example.org and email@example.com, respectively.
This multi-part article introduces Blueprint, a technology developed by Connective Logic Systems (where we work) that provides an alternative approach to multi-core development. The Blueprint toolset is built on the premise that we actually have no problem understanding complex concurrency -- until we project it into the one-dimensional world of text. For this reason, a visual representation is much easier to work with than its textual equivalent. There are some cases where text is obviously more intuitive -- such as parallelizing data-parallel loops. In these cases, technologies such as Microsoft's Task Parallel Library (TPL) and Intel's Threading Building Blocks (TBB) can be intuitive and productive. For more background information, see Multi-Core OO: Part 1 and Multi-Core OO: Part 2.
Part 2 and Part 3 of this series proposed a means of maintaining an OO paradigm in a multi-core environment. This addressed the problem of mapping concurrently executing components to an arbitrary number of CPU cores in a symmetric shared-memory environment. However, it did not deal with the more difficult issue of mapping concurrent functionality to multiple processes -- each with its own disparate memory space and each running its own operating system instance. This installment explains why next-generation "many-core" processors are likely to make this problem relevant to mainstream developers, what challenges it raises, and how it can be addressed with the object-oriented model still intact. In addition, this article shows how multiple mappings of functionality to processes can be achieved without the need to modify the application itself (as described in Parts 2 and 3). Assigning functionality to processes is a lightweight task, which means that applications can be developed, debugged, and maintained in a convenient "single-process" form, but deployed in the field with the benefit of scalable processing resources.
Why Is Multiple-Process Programming Important?
Multiple-process programming has always been relevant for distributed applications because a cluster of machines cannot execute a single process, so developers have always had the task of mapping functionality to disparate processes.
Most engineers would agree that, because of this, distributed applications are more difficult to implement than their single-process equivalents. But until now (and despite the arrival of multi-core), mainstream developers haven't needed to consider this particular problem.
This situation is almost certain to change in the very near future because the prevailing view is that symmetric memory architectures are unlikely to scale past 8 CPUs. Among others, video game developers migrating to the Cell processor have already taken the plunge, and Tilera's TILE64 software environment also anticipates message-passing as a next-generation programming model.