
Fundamental Concepts of Parallel Programming


Data Decomposition

Data decomposition, also known as "data-level parallelism", breaks down tasks by the data they work on rather than by the nature of the task. Programs that are broken down via data decomposition generally have many threads performing the same work, just on different data items. For example, consider a program that is recalculating the values in a large spreadsheet. Rather than have one thread perform all the calculations, data decomposition would suggest having two threads, each performing half the calculations, or n threads each performing 1/nth of the work.
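
As a concrete illustration, here is a minimal C++ sketch of data decomposition, assuming the cells can be recalculated independently; recalculate() is a hypothetical stand-in for evaluating a cell's formula:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-cell recalculation; a real spreadsheet would
// evaluate the cell's formula here.
double recalculate(double value) { return std::sqrt(value) * 2.0; }

// Data decomposition: n threads each recalculate 1/nth of the cells.
void recalculate_all(std::vector<double>& cells, unsigned n_threads) {
    std::vector<std::thread> workers;
    const std::size_t chunk = (cells.size() + n_threads - 1) / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(cells.size(), begin + chunk);
        // Each worker performs the same operation on its own slice,
        // so no locking is needed while the workers run.
        workers.emplace_back([&cells, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                cells[i] = recalculate(cells[i]);
        });
    }
    for (auto& w : workers) w.join();  // wait for all slices to finish
}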

If the gardeners used the principle of data decomposition to divide their work, they would both mow half the property and then both weed half the flower beds. As in computing, determining which form of decomposition is more effective depends a lot on the constraints of the system. For example, if the area to mow is so small that it does not warrant two mowers, the mowing would be better done by just one gardener -- functional decomposition -- with data decomposition applied to other task sequences, such as when the mowing is done and both gardeners begin weeding in parallel.

Implications of Different Decompositions

Different decompositions provide different benefits. If the goal, for example, is ease of programming and tasks can be neatly partitioned functionally, then functional decomposition is more often than not the winner. Data decomposition adds some additional code-level complexity to tasks, so it is reserved for cases where the data is easily divided and performance is important.

However, the most common reason for threading an application is performance, and here the choice of decomposition is more difficult. In many instances, the choice is dictated by the problem domain: some tasks are much better suited to one type of decomposition. But some tasks have no clear bias. Consider, for example, processing images in a video stream. In formats with no dependency between frames, programmers have a choice of decompositions. Should they choose functional decomposition, in which one thread does decoding, another color balancing, and so on, or data decomposition, in which each thread does all the work on one frame and then moves on to the next? To return to the analogy of the gardeners, the question would take this form: if two gardeners need to mow two lawns and weed two flower beds, how should they proceed? Should one gardener mow while the other weeds -- that is, should they choose functional decomposition -- or should both gardeners mow together and then weed together?
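
Here is a minimal sketch of the data-decomposition choice for such a stream, assuming frames are independent; decode() and color_balance() are hypothetical stand-ins for the real stages. The functional alternative would instead dedicate one thread to each stage, connected by a queue:

#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

struct Frame { /* pixel data omitted */ };

// Hypothetical stage functions standing in for real codec work.
void decode(Frame&) { /* ... */ }
void color_balance(Frame&) { /* ... */ }

// Data decomposition: each thread performs *all* stages on one frame,
// then claims the next unprocessed frame from a shared counter.
void process_stream(std::vector<Frame>& frames, unsigned n_threads) {
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        workers.emplace_back([&] {
            for (std::size_t i = next++; i < frames.size(); i = next++) {
                decode(frames[i]);         // stage 1
                color_balance(frames[i]);  // stage 2
            }
        });
    }
    for (auto& w : workers) w.join();
}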

In some cases -- for instance, when a resource constraint exists, such as only one mower -- the answer emerges quickly. In others, where each gardener has a mower, the answer comes only through careful analysis of the constituent activities. In the case of the gardeners, functional decomposition looks better, because only one mower must be started up; data decomposition would incur that start-up cost twice. Ultimately, the right answer for parallel programming is determined by careful planning and testing. The empirical approach plays a more significant role in design choices in parallel programming than it does in standard single-threaded programming.

As mentioned previously, producer/consumer (P/C) situations are often unavoidable but nearly always detrimental to performance. The P/C relation runs counter to parallel programming because it inherently makes two activities sequential. This sequential aspect occurs twice in a P/C situation: once at the start, when the consumer thread idles while it waits for the producer to produce some data, and again at the end, when the producer thread has completed its work and idles while it waits for the consumer thread to finish. Hence, developers who recognize a P/C relation are wise to look for another solution that can be parallelized. Unfortunately, in many cases this cannot be done. Where P/C cannot be avoided, developers should work to minimize the delay caused by forcing the consumer to wait for the producer. One approach is to shorten the activity required before the consumer can start up. For example, if the producer must read a file into a buffer, can the consumer be launched after the first read operation rather than after the entire file is read? By this means, the latency caused by the producer can be diminished.
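
Here is a minimal sketch of that technique, assuming the producer reads a file in fixed-size chunks; the 64-KB chunk size and the processing step are placeholders. The consumer is signaled after the first chunk arrives rather than after the entire file is buffered:

#include <condition_variable>
#include <deque>
#include <fstream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::mutex m;
std::condition_variable cv;
std::deque<std::vector<char>> chunks;  // buffer shared by producer and consumer
bool done = false;

// Producer: read the file one chunk at a time, handing each chunk to the
// consumer immediately instead of buffering the whole file first.
void producer(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> buf(64 * 1024);
    while (in.read(buf.data(), buf.size()) || in.gcount() > 0) {
        std::vector<char> chunk(buf.begin(), buf.begin() + in.gcount());
        {
            std::lock_guard<std::mutex> lock(m);
            chunks.push_back(std::move(chunk));
        }
        cv.notify_one();  // consumer can start after the *first* read
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
}

// Consumer: begins work as soon as one chunk is available.
void consumer() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !chunks.empty() || done; });
        if (chunks.empty() && done) break;
        std::vector<char> chunk = std::move(chunks.front());
        chunks.pop_front();
        lock.unlock();
        // ... process the chunk here ...
    }
}

The two threads would be launched together -- for example, std::thread prod(producer, "input.dat"); std::thread cons(consumer); -- and joined when both finish.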

Table 1 summarizes these forms of decomposition.

Table 1: Summary of the Major Forms of Decomposition.

  • Functional decomposition: different activities are assigned to different threads. Often the easiest to program.
  • Data decomposition: multiple threads perform the same work on different data items. Adds code-level complexity; best when the data divides easily and performance matters.
  • Producer/consumer: one thread's output is the input of another thread. Inherently sequential at start-up and shutdown; minimize the consumer's wait.

Challenges

The use of threads enables performance to be significantly improved by allowing two or more activities to occur simultaneously. However, developers must recognize that threads add a measure of complexity that requires thoughtful consideration to navigate correctly. This complexity arises from the inherent fact that more than one activity is occurring in the program. Managing simultaneous activities and their possible interactions means confronting four types of problems:

  • Synchronization is the process by which two or more threads coordinate their activities. For example, one thread waits for another to finish a task before continuing (see the sketch after this list).
  • Resource limitations refer to the inability of threads to work concurrently due to the constraints of a needed resource. For example, a hard drive can only read from one physical location at a time, which deprives threads of the ability to use the drive in parallel.
  • Load balancing refers to the distribution of work across multiple threads so that they all perform roughly the same amount of work.
  • Scalability is the challenge of making efficient use of a larger number of threads when software is run on more-capable systems. For example, if a program is written to make good use of four processors, will it scale properly when run on a system with eight processors?
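
As an illustration of the first item, here is a minimal synchronization sketch, assuming one thread must finish building a hypothetical lookup table before another may read it:

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool table_ready = false;

void build_table() {   // first thread: prepares shared data
    // ... fill in the lookup table ...
    { std::lock_guard<std::mutex> lock(m); table_ready = true; }
    cv.notify_all();   // wake any thread waiting on the table
}

void use_table() {     // second thread: must wait for the table
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return table_ready; });  // block until signaled
    // ... safe to read the table from here on ...
}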

Each of these issues needs to be handled carefully to maximize application performance.


This article is based on material found in the book Programming with Hyper-Threading Technology by Andrew Binstock and Richard Gerber. http://www.intel.com/intelpress/sum_hyperthreading.htm

