Parallel Programming Back to the Future
A couple of weeks ago, Tracey and I were lucky enough to present some of our crazy ideas at the 22nd Midwest Conference on Artificial Intelligence and Cognitive Science (MAICS2011), held this year in Cincinnati, Ohio. We always get a kick out of these conferences because, in some respects, it's a little like time travel — we get the opportunity to sneak a peek at what our computing future might hold, or at the very least, we get a look at some of the amazing things being worked on in computer research labs around the world.
This year, Isao Hayashi presented work that he and Suguru Kudoh are doing at Kansai University in Osaka, Japan, in the area of neuro-computing. They are using neurons and fuzzy sets as part of a brain-computer interface. It's the kind of stuff that, even after you see it, you still don't believe. Are those brain cells in that petri dish controlling that computer? OMG! At this year's conference, we were baptized in computationally intelligent agents, evolutionary programming, genetic programming, biometric recognition algorithms, reasoners, classifiers, learners, qualitative visualization schemes, quantitative visualization schemes, and spatio-temporal knowledge representation and reasoning under uncertainty for action recognition in smart homes. Add to this micro ontologies and extreme machine learning and you start to get the picture.
Now, for those of you who have stayed tuned to see what mischief Tracey and I were going to get ourselves into next, recall that we are still held captive by the ghosts of ICOT and the Fifth Generation project. We haven't escaped those AI-complete problems yet. We're hoping that the newly available multicore processors will somehow be part of the solution and the paradigm shift, but to date, no cigar. So whenever we go to these conferences, we are always on the lookout for some clue that will help us solve the mysteries of massive parallelism and parallel programming. At this year's MAICS2011, we lucked out (somewhat). We sat in on a talk about the latest rendition of the Connex Architecture (CA). Basically, the Connex Architecture is a massively parallel computer architecture that supports 1024 processing elements (cores): a parallel programmable VLSI chip consisting of an array of processors. This rendition of the CA is a general-purpose array/vector processor. It is programmed through an extension of the C language called "Vector-C," and, in many respects, the CA represents at least part of the state of the art in parallel programming. On a computer with the CA, the operating system does not call the shots when it comes to how the 1024 processors are used (at least, not in the implementation we were presented with); instead, the programmer writes the parallel processing demands directly in Vector-C. This means there are assembly-language or machine instructions specifically designed to handle parallelism, and the Vector-C compiler produces those instructions. But this all brings us full circle, because to program the CA, you need intimate knowledge of the hardware architecture. There's no black-box programming going on with the Connex. You need to be very up close and personal with this architecture in order to use the 1024 processors effectively through Vector-C programming.
But isn't this kind of mind-numbing detail exactly the thing that all of the multicore, parallel programmers are trying to get away from? Or is it the mutex and semaphore management and all of the locking and unlocking and synchronization schemes? Many programmers are having trouble dealing with 2 to 8 processors. What happens if CA becomes a household word? It has 1024 elements and requires the programmer to personally meet each of them.
Is there any escape from the complexities of massively parallel programming? On the one hand, we're being told not to worry about it: "Vendor X has a solution coming down the pike, and it makes parallel programming transparent to the developer (all you need to do is use our template)." On the other hand, we're at MAICS2011 and we see a state-of-the-art massively parallel architecture that requires the developer (programmer) and the processors to know each other on a first-name basis. The CA is an instance of the computing power that ICOT and the Fifth Generation were going after. But there were some other paradigm shifts implied in ICOT that we have not made yet. Although the Fifth Generation project is over, Tracey and I are now more certain than ever that there were some unturned stones at ICOT when it comes to dealing with massively parallel computers and parallel programming. This year's MAICS2011 is proof positive that the quest for artificially or computationally intelligent computers is still on, and that parallelism is one of the necessary evils to achieve it — but from the Vector-C programming language and the Connex Architecture we saw, it's clear that there are still no shortcuts. The wormholes continue to elude us. If only we could connect to the PSN, perhaps the Animus, Ezio, and the brotherhood hold the answers.