OpenMP vs. OpenMP
If you're a fan of Seinfeld's Cosmo Kramer, you know there's no better way to start the day than with a good old-fashioned catfight. I don't know if it rises to that level, but I do think that the OpenMP back-and-forth between Charles Leiserson, coauthor of Introduction to Algorithms and cofounder of Cilk Arts, and Ruud van der Pas, a senior staff engineer at Sun and coauthor of Using OpenMP, makes for some fascinating reading.
OpenMP is an API that supports shared-memory parallel programming in C/C++ and Fortran. Defined by a group of hardware and software vendors, it is a portable, scalable model that gives programmers a flexible interface for developing parallel applications for platforms ranging from desktops to supercomputers.
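To make the discussion concrete, here is a minimal sketch (not taken from either blog) of OpenMP's signature loop-parallel style in C: a pragma annotates an ordinary for loop, and a reduction clause combines each thread's partial result. The function name `sum_squares` is illustrative; compile with an OpenMP flag such as gcc's `-fopenmp` (without it, the pragma is simply ignored and the loop runs serially).

```c
/* Sum of squares 0^2 + 1^2 + ... + (n-1)^2, parallelized across threads.
   Each thread accumulates a private partial sum; reduction(+:total)
   combines them when the loop ends. */
long sum_squares(int n) {
    long total = 0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++) {
        total += (long)i * i;
    }
    return total;
}
```

This is exactly the "sequence of parallelizable Fortran-style loops" shape that both bloggers agree OpenMP handles well.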
What kicked off the give-and-take was a blog Mr. Leiserson posted last fall entitled The OpenMP Concurrency Platform, in which he pointed out that "when nested parallelism is turned on [in OpenMP compilers], it is common for OpenMP applications to 'blow out' memory at runtime because of the inordinate space demands." Mr. Leiserson summed up his take on OpenMP by saying that "if your code looks like a sequence of parallelizable Fortran-style loops, OpenMP will likely give good speedups. If your control structures are more involved, in particular, involving nested parallelism, you may find that OpenMP isn't quite up to the job."
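For readers unfamiliar with the feature at issue, the following sketch (my own, not from either post) shows what nested parallelism looks like: a parallel region opened inside another parallel region. With nesting enabled, each of the 4 outer threads can spawn its own 4-thread inner team, so up to 16 threads, each with its own stack, may be live at once; deeper nesting multiplies that further, which is the kind of memory growth Mr. Leiserson describes. The function name `nested_region_count` is illustrative.

```c
#ifdef _OPENMP
#include <omp.h>
#endif

/* Counts how many innermost threads execute. Serially this is 1; with
   OpenMP but nesting disabled (the default) it is up to 4; with nesting
   enabled it can reach 4 * 4 = 16. */
int nested_region_count(void) {
    int count = 0;
#ifdef _OPENMP
    omp_set_nested(1);   /* nested parallelism is off by default */
#endif
    #pragma omp parallel num_threads(4)
    {
        /* Each outer thread opens its own inner region here. */
        #pragma omp parallel num_threads(4)
        {
            #pragma omp atomic
            count++;     /* one increment per innermost thread */
        }
    }
    return count;
}
```

The runtime may grant fewer threads than requested, so the exact count varies by build and machine.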
Mr. van der Pas took exception to Mr. Leiserson's analysis and recently responded in a blog posting of his own entitled Demystifying Persistent OpenMP Myths - Part I, taking Mr. Leiserson to task for, he said, perpetuating OpenMP myths and contributing to misperceptions about the API. In particular, Mr. van der Pas focused on OpenMP 3.0, which was released last year, opining that Mr. Leiserson "does not seem to be aware of the huge leap forward made with OpenMP 3.0." As for OpenMP being "not quite up to the job," Mr. van der Pas found this to be a "surprisingly bold and general claim, [and] some more specific information would be helpful. As already mentioned…it is not at all clear why nested parallelism should not be suitable and performant. It actually is and is successfully used for certain kinds of algorithms." He countered Mr. Leiserson's comment that "if your code looks like a sequence of parallelizable Fortran-style loops, OpenMP will likely give good speedups" by calling it "one of those persistent myths. OpenMP has always been more flexible than for 'just' parallelizing loops."
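The "huge leap forward" in OpenMP 3.0 that Mr. van der Pas cites is the task construct, which expresses exactly the irregular, non-loop parallelism under debate. Here is a sketch of the canonical recursive-Fibonacci illustration of tasks (my adaptation, not code from either blog; the function names are illustrative). The pragmas are no-ops in a serial build, so the code also runs, slowly, as plain recursion.

```c
/* Each recursive call spawns two child tasks; taskwait blocks until
   both children finish before the parent combines their results. */
long fib(int n) {
    if (n < 2) return n;
    long x, y;
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait
    return x + y;
}

long parallel_fib(int n) {
    long result = 0;
    /* One thread creates the root task; the whole team then helps
       execute the task graph it spawns. */
    #pragma omp parallel
    #pragma omp single
    result = fib(n);
    return result;
}
```

Whether this scales as well as a work-stealing scheduler like Cilk's is, of course, part of what the two are arguing about.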
Round 3. Mr. Leiserson recently responded to Mr. van der Pas in a blog entitled Debunking an OpenMP "Demystifier," where he started off by saying that he found many of Mr. van der Pas's criticisms "unfair and inaccurate." Following Mr. van der Pas's lead, Mr. Leiserson went point-by-point through the arguments presented, concluding by saying, "I stand by my statement that for nested parallelism, 'you may find that OpenMP isn't up to the job.' I don't think that's a myth, or even hyperbole. As I mention in my original summary...the latest generation of compilers is addressing this issue."
Mr. Leiserson concluded by saying that "it seems to me that the OpenMP 'myths' of which Mr. van der Pas accuses me are unfair and largely of his own making. He repeatedly takes issue with things I said that are true as stated, but which he seems to defensively interpret as if I am implying something other than that which I actually said. ...I hope I have made it clear that my characterization of OpenMP was objectively grounded in concrete coding experience, and not based on 'persistent' gossip, opinions, or myths."
There are lots of details I've spared you from in the interest of clarity and length, but this is fascinating dialogue about an important topic by two parallel programming heavyweights at the top of their game. And me? I'm looking forward to the next round and Mr. van der Pas's response.