
# Hurray! New Life for the 5th Generation Project... Maybe?

February 18, 2010

What about problems that require massive parallelism in order to solve them? In Parallelism Should Inspire You, I blogged about how we are surrounded by parallelism far more than by serial activity. The biological and social systems around us are inherently parallel; it's the only way to think about them.

What about swarm and "collective behavior" systems, with agents working in parallel, communicating and interacting with each other and with their environment? In such domains, performance is critical to generating a solution: it is the difference between requiring years of processing on serial computers and requiring only days or minutes when using massive parallelism.

For example, given a problem X and several algorithms that solve it, Table 1 shows the performance of each algorithm on a serial computer and the time required to solve the problem.
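Since Table 1 isn't reproduced here, a quick back-of-envelope sketch shows why the growth rate dominates everything else. The operation rate (10^9 operations per second) and the input size are illustrative assumptions of mine, not the actual entries from the table:

```python
# Rough running-time estimates for different complexity classes on a
# hypothetical serial machine. The 10**9 ops/sec rate and N = 100,000
# are assumptions for illustration, not figures from Table 1.
import math

N = 100_000
OPS_PER_SECOND = 1e9  # assumed serial throughput

# Total operation counts implied by each growth rate at this N.
complexities = {
    "O(N)":       N,
    "O(N log N)": N * math.log2(N),
    "O(N^2)":     N**2,
    "O(N^3)":     N**3,
}

for name, ops in complexities.items():
    seconds = ops / OPS_PER_SECOND
    print(f"{name:12s} ~ {seconds:,.6f} seconds")
```

At this rate the O(N) and O(N log N) algorithms finish in fractions of a second, the O(N^2) one takes about 10 seconds, and the O(N^3) one takes about 10^6 seconds, or roughly eleven and a half days. An exponential O(2^N) algorithm at N = 100,000 would not finish in the lifetime of the universe, which is exactly the gap the table dramatizes.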

This partial table is from "Computer Algorithms: Introduction to Design and Analysis". When I originally saw the differences in performance, the impact was huge. An algorithm can take years or centuries to solve a problem for an input of a mere 100,000? But with massive parallelism, algorithms with rather ugly polynomial performance such as O(N^2) can be reduced.

Research done at the Naval Research Center in Washington compared the performance of serial computers, workstations, supercomputers, and parallel computers on N-body code. N-body problems are used to predict the motion of groups of objects that interact with each other. Those bodies can be celestial bodies in gravitational simulations in astrophysics, or atomic bodies in particle simulations. At the Naval Research Center, a CM-2 (the Connection Machine, a SIMD parallel supercomputer) showed vast improvement over the serial computer, from O(N^2) down to O(log N) for small values of N and O(N log N) for large values of N. The CM-2 had 65,536 simple 1-bit processors in the late '80s. With exascale computers performing 10^18 operations per second, 1,000,000 N-bodies can be processed. Even for problems with exponential performance, O(2^N), such as cryptographic applications, massive parallelism shows considerable promise.
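To see where the O(N^2) comes from, here is a minimal direct-summation force kernel: every body interacts with every other body, giving N(N-1) pairwise terms. This is my own sketch (unit masses, G = 1, a small softening term to avoid division by zero), not the Naval Research Center's code; the sub-quadratic CM-2 results came from hierarchical methods layered on massively parallel hardware, but the quadratic pairwise loop below is the work those methods attack:

```python
# Direct-summation N-body accelerations: O(N^2) pairwise interactions.
# Assumes unit masses and G = 1; `softening` is a standard trick to keep
# close encounters from blowing up numerically.
def accelerations(positions, softening=1e-3):
    """Return the gravitational acceleration on each body (list of xyz)."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):          # inner loop over all other bodies
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            dz = positions[j][2] - zi
            r2 = dx * dx + dy * dy + dz * dz + softening * softening
            inv_r3 = r2 ** -1.5     # 1 / r^3, so (dx,dy,dz) * inv_r3 = r_hat / r^2
            acc[i][0] += dx * inv_r3
            acc[i][1] += dy * inv_r3
            acc[i][2] += dz * inv_r3
    return acc
```

On a SIMD machine like the CM-2, each processor can own one body and evaluate its row of interactions simultaneously; tree codes such as Barnes-Hut then cut the per-body work from N interactions to roughly log N by approximating distant clusters.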

And what do these applications all have in common that makes them such prime candidates for massive parallelism?

• they are inherently parallel
• they do not require a lot of memory for individual nodes
• they utilize collective communication across all nodes
• they require relatively little I/O
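The "collective communication" item has a characteristic shape: each node computes a partial result on its local data, and the partials are then combined in a single reduction step. Here is a toy version of that pattern; the function names and round-robin chunking are my own illustration, with Python threads standing in for the processors of a real parallel machine:

```python
# Toy scatter/compute/reduce pattern: each worker handles its own slice,
# then all partial results are combined -- the shape of an all-reduce.
# Threads merely illustrate the structure; real codes run this across
# thousands of nodes with message passing.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each node works only on its local slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Scatter: deal the data out round-robin across the workers.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)
    # Reduce: combine all partial results into one global value.
    return sum(partials)
```

For example, `parallel_sum_of_squares(list(range(10)))` returns 285, the same answer as the serial loop; the point is that the per-node memory footprint stays small and the only global step is the final reduction, which is exactly the profile in the list above.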

Table 2 shows a list of applications that are prime candidates for massive parallelism; some are better candidates than others. They are grouped according to the algorithm or numerical method used for these applications. This information was extracted from a graphic in "Getting Ready for Exascale Science", a presentation by Rick Stevens of Argonne National Laboratory and the University of Chicago.

Raster graphics, pattern matching, and symbolic processing are of special interest to Cameron and me: raster graphics as applied to visualization, and pattern matching and symbolic processing as applied to Artificial Intelligence. "Massively Parallel Artificial Intelligence" is an area of research in which, as you can imagine, AI utilizes massively parallel hardware for performance (as every field does), but the massive parallelism also changes the approach to building intelligent systems. Man, this sounds a lot like... yeah, you can imagine what I am going to say: the Fifth Generation Project revisited... Ah yes!
