
Clay Breshears

Dr. Dobb's Bloggers

Game of Life — Distributed Lists

June 17, 2013

I want to take one more look at the parallelized list-based version of the Game of Life simulation. Where the previous three posts examined different methods to perform a task decomposition of the computations, this time I will look into the issues you will come up against when implementing the parallelism with a domain decomposition. And not just any domain decomposition, but a distributed-memory implementation that uses the Message-Passing Interface (MPI) to share data between processes. Before I get to those details, in case you came into this series late, let me quickly review the algorithmic highlights of the list-based Game of Life.


Two arrays, Map and numNeighbors, hold the status of each grid cell and its count of live neighbors, respectively. The grid can be initialized from the keyboard or from a file. Valid cells entered are added to the newlive list. This list is first traversed to set the initial neighbor counts of the grid cells surrounding each new live cell; the list is then copied to the maydie list so the code can check whether any of the initial cells are overcrowded and should be killed in the next generation. This list copy applies when the programmer has implemented the list data structure and included a copyList() function. As I showed in the previous post, when using the Intel Threading Building Blocks (TBB) concurrent_queue container as a thread-safe list, any grid cell added to the newlive list should also be pushed onto the maydie list.
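To make those data structures concrete, here is a minimal sketch of the setup just described. The grid dimension, the cell encoding in Map, and the addInitialCell() helper are my own assumptions for illustration, not names from the original code.

#include <utility>
#include "tbb/concurrent_queue.h"

const int N = 512;                  // assumed grid dimension
char Map[N][N];                     // cell status: 1 = alive, 0 = dead (assumed encoding)
int  numNeighbors[N][N];            // count of live neighbors per cell

typedef std::pair<int,int> Cell;
tbb::concurrent_queue<Cell> newlive, newdie, maylive, maydie;

// Record one initial live cell: mark it in Map, bump the neighbor counts
// of the surrounding cells, and push it onto both newlive and maydie
// (the concurrent_queue substitute for copyList()).
void addInitialCell(int i, int j)
{
    Map[i][j] = 1;
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;
            int ni = i + di, nj = j + dj;
            if (ni >= 0 && ni < N && nj >= 0 && nj < N)
                ++numNeighbors[ni][nj];
        }
    newlive.push(Cell(i, j));
    maydie.push(Cell(i, j));
}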

For each new generation, the code traverses the maylive list to check whether conditions are right for a dead cell to come alive, and then traverses the maydie list to check whether living cells should be killed. Cells that come to life in the generation are added to the newlive list, and those that are killed are added to the newdie list. The newlive and newdie lists are then traversed, and the neighbor counts of the cells bordering each affected cell are adjusted accordingly. If a cell's neighbor count reaches a critical value, either to become alive or to die, that cell is added to the appropriate maylive or maydie list to be examined in the next generation. After traversal, all lists are cleared (a natural side effect of popping items off a concurrent_queue container). As a task decomposition code, the list traversals are done by concurrent threads. Printing the current state of the grid can be done by a single thread at regular intervals or at the end of the simulation, depending on how often the user wants to see the progress of the simulation.
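A serial sketch of one generation step under those rules might look like the following; try_pop() is the real concurrent_queue interface, but updateNeighbors() and the exact structure are my guesses at the original code, and the multithreaded version from the earlier posts divides these traversals among threads.

// Adjust the neighbor counts around a cell that just came alive
// (delta = +1) or died (delta = -1), queueing any cell whose count
// reaches a critical value for examination in the next generation.
void updateNeighbors(const Cell &c, int delta)
{
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;
            int ni = c.first + di, nj = c.second + dj;
            if (ni < 0 || ni >= N || nj < 0 || nj >= N) continue;
            int n = (numNeighbors[ni][nj] += delta);
            if (!Map[ni][nj] && n == 3) maylive.push(Cell(ni, nj));
            if (Map[ni][nj] && (n < 2 || n > 3)) maydie.push(Cell(ni, nj));
        }
}

void nextGeneration()
{
    Cell c;
    // Dead cells that might come alive: exactly three live neighbors.
    while (maylive.try_pop(c))
        if (!Map[c.first][c.second] && numNeighbors[c.first][c.second] == 3) {
            Map[c.first][c.second] = 1;
            newlive.push(c);
        }
    // Live cells that might die: fewer than two or more than three neighbors.
    while (maydie.try_pop(c)) {
        int n = numNeighbors[c.first][c.second];
        if (Map[c.first][c.second] && (n < 2 || n > 3)) {
            Map[c.first][c.second] = 0;
            newdie.push(c);
        }
    }
    // Propagate the births and deaths to the surrounding neighbor counts;
    // popping empties the lists, clearing them for the next generation.
    while (newlive.try_pop(c)) updateNeighbors(c, +1);
    while (newdie.try_pop(c))  updateNeighbors(c, -1);
}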

When first confronted with coding up the parallel Game of Life, because of the large array used as the grid holding the live cells, my first thought is, unsurprisingly, to look for a domain decomposition solution. Even with the list-based version of the simulation, I can divide up the grid in some logical way, assign each grid section to a different thread, instantiate the four list structures for each thread, and, through the shared numNeighbors array, keep the neighbor counts up to date in each generation (using an atomic increment/decrement operation, as sketched below). Since the threads work on their own portions of the grid independently of the other threads, the only tricky part of this approach is making sure that the threads don't get too far ahead of each other in the separate steps required to compute each generation. Some synchronization will be needed between steps, since I can't predict or guarantee the number of cells being added to each list in any given thread. The very nature of the Game of Life and the separate subgrids assigned to threads lead to some unavoidable load imbalance.
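For the border cells between subgrids, those neighbor-count updates must be thread-safe. Here is a minimal sketch of what I mean by an atomic increment/decrement, assuming the counts are kept in std::atomic<int> (tbb::atomic would serve equally well):

#include <atomic>

std::atomic<int> numNeighbors[N][N];   // redeclared as atomics, shared by all threads

// Adjust the counts around cell (i, j) when it comes alive (delta = +1)
// or dies (delta = -1); fetch_add makes concurrent updates from threads
// that own adjacent subgrids safe.
void adjustNeighborCounts(int i, int j, int delta)
{
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;
            int ni = i + di, nj = j + dj;
            if (ni >= 0 && ni < N && nj >= 0 && nj < N)
                numNeighbors[ni][nj].fetch_add(delta);
        }
}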

That's how I would approach a domain decomposition parallelization in a shared-memory environment. Ho hum, right? If you're not fast asleep by now, think about how to divide up those grid sections, but this time not among threads that can easily "look" into any other thread's assigned portion of Map and numNeighbors. Ask yourself what kind of information needs to be shared between MPI processes that have each been assigned a subgrid (and even before that, how would you divide the grid sections among those processes?), when and how that data can be shared, and how you can synchronize the processing of the same steps within the same generation across those processes. As with many parallelization problems, there are alternative approaches that can be implemented. The remainder of this article examines these possibilities and some of the coding consequences of dividing the two grids (Map and numNeighbors) between MPI processes.
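As a starting point for thinking about that division, here is a sketch of one possible scheme: carve Map and numNeighbors into horizontal strips, one per MPI rank. The strip shape, the even divisibility of N, and the ghost rows are all my assumptions for illustration, not the article's prescribed layout.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int N = 512;                    // assumed global grid dimension
    int rowsPerProc = N / nprocs;         // assumes N is divisible by nprocs
    int firstRow = rank * rowsPerProc;    // first global row owned by this rank
    int lastRow  = firstRow + rowsPerProc - 1;

    // Each process allocates only its strip of Map and numNeighbors,
    // plus one "ghost" row above and below to mirror the border cells
    // of the neighboring ranks' strips.

    // ... per-process initialization and generation loop go here ...

    MPI_Finalize();
    return 0;
}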

Inputting the Initial Configuration

To get started, each process needs to know which of the original set of live grid cells fall within the subgrid assigned to it. Given a file containing the grid cell coordinates of the initial configuration, two obvious methods to get this data to the appropriate process would be to 1) have each process read the input file and keep only those cell coordinates that fall within the logical range of its assigned subgrid, or 2) have one process read the file and distribute the cell coordinates to the appropriate processes. With the second method, I can think of three ways to distribute the data to the correct process: 1) send a message to the correct process after each coordinate pair is read, 2) collect the coordinates in a separate message buffer for each of the other processes and send a single message to each process after the input is complete, or 3) collect all the input data and broadcast the full set of input coordinates to all other processes.

If I use the first scheme, an unknown number of messages will be sent between the input process and the others. This can be handled by having each receiving process keep receiving coordinate pairs until a special sentinel pair (say, 0 0) signals that the end of the input has been reached. With the second scheme, a message of unknown length will be sent; if the receiver has not set aside enough buffer space, parts of the message will be lost. To avoid this, the input process can send two separate messages to each process: the first contains a single integer giving the number of coordinate pairs to be sent in the second message, so each receiving process can make sure enough buffer space has been set aside before receiving the data. The third scheme has the same drawback as the second, since the total number of coordinates is unknown until runtime. It also requires sending a relatively long message full of data that is superfluous to each process, and each receiving process still needs to pick out the relevant coordinates from that long message.
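Here is a sketch of the second scheme's count-then-data exchange. The function names and the flattened (row, column) buffer layout are mine, and rank 0 is assumed to be the input process.

#include <mpi.h>
#include <vector>

// Input process (rank 0): perRank[r] holds the flattened (row, col)
// coordinate pairs destined for rank r.
void distributeInitialCells(const std::vector< std::vector<int> > &perRank,
                            int nprocs)
{
    for (int r = 1; r < nprocs; ++r) {
        int npairs = (int)(perRank[r].size() / 2);
        // First message: how many pairs to expect.
        MPI_Send(&npairs, 1, MPI_INT, r, 0, MPI_COMM_WORLD);
        // Second message: the coordinate pairs themselves.
        if (npairs > 0)
            MPI_Send((void *)&perRank[r][0], 2 * npairs, MPI_INT,
                     r, 1, MPI_COMM_WORLD);
    }
}

// Every other rank: learn the count, size the buffer, then receive.
void receiveInitialCells(std::vector<int> &coords)
{
    int npairs;
    MPI_Recv(&npairs, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    coords.resize(2 * npairs);           // exactly enough buffer space
    if (npairs > 0)
        MPI_Recv(&coords[0], 2 * npairs, MPI_INT, 0, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

The first scheme would instead loop on MPI_Recv one pair at a time until the sentinel (0, 0) pair arrives, trading buffer bookkeeping for many more small messages.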
