
Parallel Pattern 6: Partition



Structured Patterns: An Overview

Parallel Pattern 1: Superscalar Sequences and Task Graphs

Parallel Pattern 2: Selection

Parallel Pattern 3: Map

Parallel Pattern 4: Gather

Parallel Pattern 5: Stencil

Parallel Pattern 6: Partition

Parallel Pattern 7: Reduce

Parallel Pattern 8: Scan

Parallel Pattern 9: Pack


Over a series of articles, I have been discussing various patterns for structured parallel computation. These patterns, when composed, can be used to express computations in a wide variety of domains. Scalable approaches to parallel computation have to take two things into account: tasks and data. Many approaches to parallel computation over-emphasize the former at the expense of the latter. However, in order to achieve scalable parallel performance, especially on modern hierarchical NUMA memory architectures, algorithms need to be designed in a way that emphasizes data locality and isolation.

The partition pattern provides a simple way for the software developer to express both data locality and isolation. In this pattern, a collection of data is broken down into a set of non-overlapping regions, each called a partition, and a single task is assigned to operate on each partition. Since the partitions are non-overlapping, these tasks can operate on them in parallel without interfering with one another. Each task can both read and write the data in its partition without fear of causing race conditions, since the pattern guarantees that only one task accesses each partition. In addition, partitions created from contiguous regions exhibit spatial data locality, and so are a good match for hierarchical memory systems.
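As a concrete illustration, here is a minimal sketch in C++ (not tied to any particular library) of a regular partition: the array is split into equal, non-overlapping chunks, and a separate thread is given exclusive ownership of each chunk. The function names and the per-element work are placeholders.

#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// One task owns each partition; since partitions do not overlap, the writes
// below are race-free without any locking.
void process_partition(std::vector<double>& data, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        data[i] = data[i] * 2.0 + 1.0;   // placeholder per-element work
}

void partitioned_update(std::vector<double>& data, std::size_t num_partitions) {
    const std::size_t chunk = data.size() / num_partitions;   // regular ("dice") partition size
    std::vector<std::thread> workers;
    for (std::size_t p = 0; p < num_partitions; ++p) {
        std::size_t begin = p * chunk;
        // the last partition absorbs any remainder
        std::size_t end = (p + 1 == num_partitions) ? data.size() : begin + chunk;
        workers.emplace_back(process_partition, std::ref(data), begin, end);
    }
    for (std::thread& t : workers) t.join();
}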

The partition pattern can be thought of as a special case of the gather pattern, although because it also provides isolation it can support efficient race-free local writes (scatters) as well. In combination with patterns discussed earlier, a sequence of maps, each using a different partitioning, can be used to achieve global communication even though each individual step only accesses data within a local partition.

There are two variants of the partition pattern: regular and irregular. In a regular partition, every partition is the same size. Unless specified otherwise, it is best to think of this as the default for the partition pattern, although when we want to be specific we can call the regular case the "dice" pattern. In an irregular partition, the partitions are still non-overlapping but can be of different sizes. I will call irregular partitions and the associated pattern "segments" and will discuss them in a later post. In a divide-and-conquer approach to algorithm design, you may also want to apply (possibly irregular) partitioning/segmentation recursively; this is related to yet another pattern, "nesting". For now, let's focus on the simple regular partition.

Even with simple regular partitions, there are a few things to keep in mind. In particular, the hierarchical memory system hardware itself uses this pattern. Caches are divided into blocks (cache lines) of various sizes, virtual memory is organized around pages, and data sharing between processors is managed on a per-block basis. Normally these blocks are powers of two in size and aligned on power-of-two boundaries, and blocks at different levels of the memory hierarchy are nested.

It is usually beneficial to make the partitioning used by software consistent with the partitioning used by the underlying hardware. Software developers should normally use aligned, power-of-two partitions when designing algorithms, so that different software partitions map onto different hardware partitions. If instead the boundaries between software partitions fall in the interior of hardware blocks, the hardware may treat the software partitions as shared even though they are not. This results in extra (and unnecessary) interprocessor communication, driven by the hardware's cache coherency protocol, to keep the contents of the shared blocks consistent whenever multiple processors write to them, even if different processors are updating different parts of those blocks. This "false sharing" adds hidden communication overhead that can severely impact scalability.
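One common way to keep software partitions aligned with hardware blocks is to pad and align per-task data to the cache-line size. The sketch below assumes a 64-byte line, which is typical but not universal; on recent C++17 compilers, std::hardware_destructive_interference_size (from <new>) can be used instead where it is available.

#include <cstddef>

constexpr std::size_t kCacheLine = 64;   // assumed cache-line size

struct alignas(kCacheLine) PaddedCounter {
    long value = 0;
    // alignas pads each PaddedCounter out to a full cache line, so counters owned
    // by different threads never share a hardware block (no false sharing).
};

PaddedCounter counters[8];   // e.g., one counter per worker thread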

The optimal partition size can also be dependent on the hardware memory architecture, and there are often tradeoffs between data locality and algorithmic efficiency. A smaller partition may fit completely into a higher level of the memory hierarchy, allowing processing of that partition to be more efficient, but smaller partitions also usually incur more overhead for communication in the rest of the algorithm. As an example, consider matrix multiplication, which can be written in terms of nested sub-matrix multiplications. Larger sub-matrices give more data reuse and coherence, but use more local cache.
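A blocked matrix multiplication makes this partitioning explicit: each tile of the output is accumulated from tiles of the inputs, and the tile size is chosen so the working set of one tile product fits in cache. The following straightforward (unoptimized) C++ sketch treats the tile size bs as a free tuning parameter.

#include <algorithm>
#include <cstddef>
#include <vector>

// Blocked (tiled) matrix multiplication: C += A * B for n x n row-major matrices.
// bs is the tile size; C must be zero-initialized by the caller.
void matmul_tiled(const std::vector<double>& A, const std::vector<double>& B,
                  std::vector<double>& C, std::size_t n, std::size_t bs) {
    for (std::size_t ii = 0; ii < n; ii += bs)
        for (std::size_t kk = 0; kk < n; kk += bs)
            for (std::size_t jj = 0; jj < n; jj += bs)
                // multiply one bs x bs tile of A by one bs x bs tile of B,
                // accumulating into the corresponding tile of C
                for (std::size_t i = ii; i < std::min(ii + bs, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + bs, n); ++k)
                        for (std::size_t j = jj; j < std::min(jj + bs, n); ++j)
                            C[i * n + j] += A[i * n + k] * B[k * n + j];
}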

To avoid a dependency on any particular memory hardware, algorithms should be written in a generic way using parameterized partition sizes. Then, when the software is installed on a particular processor, some testing can be used to tune the partition size to the optimum for that architecture without any change to the software. Note that cache sizes and organization can vary even between processors of the same generation from the same vendor.
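One simple way to express this is to leave the partition size as a runtime parameter and pick it by measurement at install time. The helper below is purely illustrative (its name and the idea of a candidate list are not from any particular library): it times the partitioned algorithm at each candidate size and returns the fastest one.

#include <chrono>
#include <cstddef>
#include <vector>

// Illustrative install-time tuning: run the partitioned algorithm once per
// candidate partition size and keep the size with the shortest wall-clock time.
template <typename Work>   // Work is callable as run(partition_size)
std::size_t pick_partition_size(Work run, const std::vector<std::size_t>& candidates) {
    std::size_t best = candidates.front();
    auto best_time = std::chrono::steady_clock::duration::max();
    for (std::size_t size : candidates) {
        auto t0 = std::chrono::steady_clock::now();
        run(size);                               // one timed run at this partition size
        auto elapsed = std::chrono::steady_clock::now() - t0;
        if (elapsed < best_time) { best_time = elapsed; best = size; }
    }
    return best;   // record this value; later runs reuse it without recompiling anything
}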

The partition pattern is supported in the Intel RapidMind platform by the "dice" data access operator, which divides an array into a kind of nested array of uniformly sized partitions. The result of the dice can then be passed to a "map" operation that operates on each partition in parallel. Using dice appropriately can significantly improve the performance of algorithm implementations. In addition, the platform supports very powerful mechanisms for generic programming, which make it possible to write parameterized code that can easily be tuned to different processor architectures. In Intel RapidMind this tuning can be done without any need to recompile the software.
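To illustrate the idea only (this is not the RapidMind API), a dice-like operation can be thought of as producing uniformly sized tile descriptors for an n x n array, which a map then processes with one task per tile:

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical dice-like helper: splits an n x n row-major array into uniform tiles.
struct Tile { std::size_t row0, col0, rows, cols; };

std::vector<Tile> dice(std::size_t n, std::size_t tile) {
    std::vector<Tile> tiles;
    for (std::size_t r = 0; r < n; r += tile)
        for (std::size_t c = 0; c < n; c += tile)
            tiles.push_back({r, c, std::min(tile, n - r), std::min(tile, n - c)});
    return tiles;   // each tile can then be handed to a map operation, one task per tile
}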

