Game of Life — Distributed Lists
I want to take one more look at the parallelized list-based version of the Game of Life simulation. Whereas the previous three posts examined different methods to perform a task decomposition of the computations, here I will look into the issues you will come up against when implementing the parallelism with a domain decomposition. And not just any domain decomposition, but a domain decomposition in a distributed-memory implementation using the Message-Passing Interface (MPI) to share data between processes. Before I get to those details, in case you came into this series late, let me quickly review the algorithmic highlights of the list-based Game of Life.
Two arrays, Map and numNeighbors, hold the status of cells and the number of neighbors for each corresponding grid cell, respectively. Grid initialization can be done from the keyboard or from a file. Valid cells entered are added to the newlive list. This list is first traversed to set the initial number of neighbors for each grid cell around each new live cell; then the list is copied to the maydie list in order to see if any of the initial cells are overcrowded and should be killed in the next generation. This list copy applies to cases where the programmer has implemented the list data structure and included a copyList() function. As I showed in the previous post, when using the Intel Threading Building Blocks (TBB) concurrent_queue container as a thread-safe list, any grid cells added to the newlive list should also be pushed onto the maydie list.
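As a reminder of the data layout, here is a minimal sketch of the structures just described. The grid dimensions and the Cell coordinate type are placeholders of my own, not taken from the original code.

#include <tbb/concurrent_queue.h>

const int ROWS = 500, COLS = 500;     // assumed grid dimensions for illustration

struct Cell { int row, col; };        // hypothetical coordinate-pair type

char Map[ROWS][COLS];                 // 1 = alive, 0 = dead
int  numNeighbors[ROWS][COLS];        // live-neighbor count per grid cell

// Thread-safe lists holding coordinate pairs between phases of a generation
tbb::concurrent_queue<Cell> newlive, newdie, maylive, maydie;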
For each new computed generation, the code traverses the maylive list to check whether the conditions are right for a dead cell to become living, and then traverses the maydie list to check whether living cells should be killed. Cells that are new to life in the generation are added to the newlive list and those that are killed are added to the newdie list. The newlive and newdie lists are traversed and the number of neighbors for cells bordering the affected cells is adjusted accordingly. If a cell's number of neighbors reaches a critical value to either become alive or to die, this cell is added to the proper maylive or maydie list that will be examined for the next generation. All lists, after traversal, are cleared (which is a natural side effect of popping things off a concurrent_queue container). As a task decomposition code, the traversal of lists will be done by concurrent threads. Printing the current state of the grid cells can be done by a single thread at regular intervals or at the end of the simulation, depending on how often the user wants to see the progress of the simulation.
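A hedged outline of one generation, building on the sketch above, might look like the following. The exact conditions and phase ordering here are my reading of the description, not the original code.

Cell c;
// Phase 1: dead cells that may come to life (exactly 3 live neighbors)
while (maylive.try_pop(c)) {
    if (!Map[c.row][c.col] && numNeighbors[c.row][c.col] == 3)
        newlive.push(c);
}
// Phase 2: live cells that may die (fewer than 2 or more than 3 neighbors)
while (maydie.try_pop(c)) {
    int n = numNeighbors[c.row][c.col];
    if (Map[c.row][c.col] && (n < 2 || n > 3))
        newdie.push(c);
}
// Phase 3: commit births, update neighbor counts of bordering cells, and
// push any cell that reaches a critical count onto maylive or maydie
while (newlive.try_pop(c)) {
    Map[c.row][c.col] = 1;
    // increment numNeighbors for the 8 surrounding cells here
}
// Phase 4: commit deaths and adjust neighbor counts the same way
while (newdie.try_pop(c)) {
    Map[c.row][c.col] = 0;
    // decrement numNeighbors for the 8 surrounding cells here
}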
When first confronted with coding up the parallel Game of Life, because of the use of a large array as the grid holding the live cells, my first thought is, unsurprisingly, to look for a domain decomposition solution. Even using the list-based simulation version, I can divide up the grid in some logical way, assign each grid section to a different thread, instantiate the four list structures for each thread and, through the shared numNeighbors array, keep the totals for neighbor counts up-to-date in each generation (by using some atomic increment/decrement operation). As the threads will be working on their own portion of the grid independently of the other threads, the only tricky part to this approach is making sure that the threads don't get too far ahead of each other in the separate steps required to compute each generation. Some synchronization will be needed between steps since I can't predict or guarantee the number of cells being added to each list in any given thread. The very nature of the Game of Life and the separate subgrids assigned to threads lead to some unavoidable load imbalance.
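For those shared neighbor-count updates, something along these lines would work. Using std::atomic is my choice for illustration; the original code could just as well use an interlocked intrinsic or a TBB atomic.

#include <atomic>

const int ROWS = 500, COLS = 500;              // assumed grid dimensions
std::atomic<int> numNeighbors[ROWS][COLS];     // counts shared by all threads

// Called when a cell at (r, c) comes to life; each surrounding count is
// updated atomically so threads working on adjacent subgrids don't race.
void incrementNeighbors(int r, int c) {
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0) continue;              // skip the cell itself
            int nr = r + dr, nc = c + dc;
            if (nr >= 0 && nr < ROWS && nc >= 0 && nc < COLS)
                numNeighbors[nr][nc].fetch_add(1);         // atomic increment
        }
}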
That's how I would approach a domain decomposition parallelization in a shared-memory environment. Ho hum, right? If you're not fast asleep by now, think about how to divide up those grid sections, but this time not to separate threads that can easily "look" into any other thread's assigned portion of Map and numNeighbors. Ask yourself what kind of information needs to be shared between MPI processes that have been assigned a subgrid (and even before that, how you would divide up the grid sections among those processes), when and how that data can be shared, and how you can synchronize the processing of the same steps within the same generation across those processes. Like many parallelization approaches, there are alternatives that can be implemented. The remainder of this article will examine these possibilities and some of the coding consequences involved with dividing up the two grids (Map and numNeighbors) between MPI processes.
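As a concrete, if simplified, starting point, a row-wise block partition could assign each rank a contiguous band of rows. The variable names and dimensions below are my own conventions, not the original program's.

#include <mpi.h>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int TOTAL_ROWS = 500;                 // assumed global grid height
    int rowsPerProc = TOTAL_ROWS / nprocs;
    int myFirstRow  = rank * rowsPerProc;
    int myLastRow   = (rank == nprocs - 1) ? TOTAL_ROWS - 1
                                           : myFirstRow + rowsPerProc - 1;
    // Each process would allocate only its band of Map and numNeighbors
    // (plus, eventually, ghost rows for neighbor counts along the seams).

    MPI_Finalize();
    return 0;
}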
Inputting the Initial Configuration
To get started, each process will need to know the original set of live grid cells that fall within the subgrid assigned to it. Using a file containing the grid cell coordinates of the initial configuration, two obvious methods to get this data to the appropriate process would be to 1) have each process read the input file and note only those cell coordinates that fall within the logical range of the subgrid assigned to the process, or 2) have the file read in by one process and have this input process distribute the cell coordinates to the appropriate processes. With the second method, I can think of three ways to distribute the data to the correct process: 1) send a message to the correct process after each coordinate pair is input, 2) collect all the coordinates in separate message buffers, one for each of the other processes, and send a single message to each process after the input is complete, or 3) collect all input data and broadcast the full set of input coordinates to all other processes.
If I use the first scheme, an unknown number of messages will be sent between the input process and the others. This can be done by setting up each receiving process to keep receiving coordinate pairs until a special coordinate pair (say, 0 0) is sent to signal that the end of input has been reached. For the second scheme, a message of unknown length will be sent. If the receiver does not have enough buffer space, parts of the message will be lost. To avoid this, the input process can send two separate messages to each process: the first contains a single integer that is the number of coordinate pairs to be sent in the second message. Before receiving the second message, each receiving process can ensure that enough buffer space has been set aside. The third scheme has the same drawback as the second one, since the total number of coordinates will be unknown until runtime. This scheme also requires sending a relatively long message containing data that is superfluous for each process, and each receiving process will still need to pick out the relevant coordinates from the long message.
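Here is a minimal sketch of that second scheme using MPI point-to-point calls: a count message followed by the data message. The tag names, the flattened row/col integer pairs, and the helper functions are illustrative assumptions, not the original code.

#include <mpi.h>

#define COUNT_TAG 1
#define DATA_TAG  2

// Input process (rank 0): send a worker the number of coordinate pairs
// first, then the pairs themselves, flattened as row,col integers.
void sendCoords(int dest, int count, int* pairs) {
    MPI_Send(&count, 1, MPI_INT, dest, COUNT_TAG, MPI_COMM_WORLD);
    MPI_Send(pairs, 2 * count, MPI_INT, dest, DATA_TAG, MPI_COMM_WORLD);
}

// Worker process: learn the count, size the buffer, then receive the pairs.
int* recvCoords(int* count) {
    MPI_Recv(count, 1, MPI_INT, 0, COUNT_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    int* pairs = new int[2 * (*count)];        // room for row,col pairs
    MPI_Recv(pairs, 2 * (*count), MPI_INT, 0, DATA_TAG, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    return pairs;
}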