There's Good News and There's Bad News ...
Last week we gave the webinar scene a break. Instead, Tracey talked me into going with her to the new Bill Gates Hillman Complex on the Carnegie Mellon campus in Pittsburgh. Holger Hoos from the University of British Columbia was giving a talk called "Taming the Complexity Monster". I figured, how could we go wrong?
Since we are both currently wandering in the wilderness of AI-Complete problems, and our multicore machinations are turning out to give us deceptively correct answers, I thought just maybe Holger Hoos might have some encouraging words for us. After all, Dr. Hoos is a founding member of the Bioinformatics, Theoretical and Empirical Algorithms Laboratory and a member of the Laboratory for Computational Intelligence, with additional involvement in the Institute for Computational Intelligence and Cognitive Systems. Surely he could shed some light on our massive multicore vs. AI-Complete problem issue.
I remember thinking on the drive down that it felt like we were off to see the wizard. So after taking a wrong turn somewhere around Veteran's Bridge in downtown Pittsburgh and getting lost, we finally made it to the Gates Hillman Complex.
The Bill Gates Hillman Complex on the Carnegie Mellon campus
There were several eye-popping and jaw-dropping moments in the talk. Tracey and I were especially encouraged when he started talking about problems that were similar to the ones we are currently challenged with.
We weren't the only ones captivated by the talk. You could almost feel the excitement in the air. But with one click of the mouse, everything went downhill for our multicore expectations. Out came the PowerPoint. Out came the performance graphs. Out came the truth (at least the truth according to Holger). He demonstrated that if we had one processor for every atom in the universe, and each processor could execute instructions at the speed of light, a parallel computational solution for the class of problems we are trying to solve would still take millions, billions, trillions of years to complete! Ouch! That was very bad news to hear.
He also suggested that for the class of problems Tracey and I are stuck with, even such large-scale massive parallelism would offer only a negligible speedup relative to problem size. How could that be? I immediately pulled out my PSP (really, I did), logged in to our private lab and research cluster, typed some numbers into MuPAD, and slowly the bad news was retraced on the screen of my PSP.
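The arithmetic behind that bad news is easy to reproduce. Here is a back-of-the-envelope sketch in Python, using my own assumed figures (not Holger's slides): roughly 10^80 atoms in the observable universe, and a generous 10^15 operations per second per atom-sized processor, against a brute-force search over 2^n candidate solutions:

```python
# Back-of-the-envelope check of the "one processor per atom" claim.
# Assumed figures (mine, not from the talk): ~10^80 atoms in the
# observable universe, each running at 10^15 operations per second.
PROCESSORS = 10**80
OPS_PER_SEC = 10**15
SECONDS_PER_YEAR = 365 * 24 * 3600

def brute_force_years(num_vars: int) -> float:
    """Years to enumerate all 2**num_vars candidates in parallel."""
    candidates = 2**num_vars
    seconds = candidates / (PROCESSORS * OPS_PER_SEC)
    return seconds / SECONDS_PER_YEAR

# Exponential growth swamps any fixed amount of hardware.
for n in (300, 400, 500):
    print(f"n = {n}: {brute_force_years(n):.2e} years")
```

At n = 300 the universe-sized machine finishes in a blink, but by n = 400 it needs on the order of 10^17 years. The hardware is a constant factor; the problem is an exponential. That, in one function, is why the speedup is negligible relative to problem size.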
And just as we were about to succumb to the gloom of Holger's talk, he introduced a new approach to algorithm design: fresh attacks on some of the classic NP-hard problems that have brought computational solutions within practical reach. He pointed to the Concorde TSP Solver, the DPLL (Davis-Putnam-Logemann-Loveland) algorithm, and the emerging field of empirical algorithmics. As I looked closer, I saw what looks to me like a striking intersection between some of the empirical algorithms and some of the findings of ICOT and the Fifth Generation project. So it would seem there is hope after all, that the ghosts of ICOT are trying to tell us something, and that some of the stones we've turned over could lead to significant paradigm shifts for parallel programming and multicore computers. So our trip turned out to be good news, bad news.
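To give a flavor of what DPLL actually does, here is a toy Python sketch of the procedure, nothing like the industrial-strength solvers Holger discussed, but it shows the two ideas that make DPLL beat brute force: unit propagation and backtracking search. Clauses are sets of integer literals (positive for a variable, negative for its negation):

```python
# A minimal DPLL (Davis-Putnam-Logemann-Loveland) SAT solver sketch.
# A formula is a list of clauses; each clause is a set of integer
# literals (e.g. {1, -2} means "x1 or not x2").

def dpll(clauses):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    clauses = [set(c) for c in clauses]
    assignment = set()
    # Unit propagation: a one-literal clause forces that literal true.
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment.add(lit)
        # Drop satisfied clauses; strip the falsified literal elsewhere.
        clauses = [c - {-lit} for c in clauses if lit not in c]
        if any(len(c) == 0 for c in clauses):
            return None  # derived an empty clause: contradiction
    if not clauses:
        return assignment  # every clause satisfied
    # Branch: pick an unassigned literal, try it true, then false.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(clauses + [{choice}])
        if result is not None:
            return assignment | result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))   # a satisfying assignment
print(dpll([{1}, {-1}]))                  # None: unsatisfiable
```

Unit propagation is the empirical workhorse: on real instances it prunes most of the search space before any branching happens, which is exactly the kind of structure-exploiting behavior that worst-case counting over 2^n assignments never sees.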
The good news was that we heard an approach to solving our particular brand of problems. The bad news was that we heard an approach to solving our particular brand of problems, but there was no shortcut in sight, no easy way out, no silver bullet, no rest for the wicked.