Can Fifth-Generation Computer Simulations Solve the Gulf of Mexico Oil Spill?
We're still stuck on the oil spill thing. I was going to make a sarcastic remark about sounding like a broken record, but the whole download-an-MP3-and-play-your-digital-music thing puts that frame of reference out of reach these days. So I'll go with the Groundhog Day analogy (Bill Murray fans will understand).
As I type, the reality is that the top kill procedure has failed to stop the oil from erupting. They said it had a 60-70% chance of working. When I heard that, I couldn't help but ask myself: 60-70% based on what? Were they using inductive reasoning, which would suggest that, from experience, there was a 60-70% chance of success? In other words, BP had used the top kill method n times before, and 6 or 7 out of 10 times it worked. Or did they mean that 6 or 7 out of 10 experts agreed it would work? Or did they run some kind of computer simulation 2,000 or 3,000 times, and 60-70% of the time the top kill procedure worked in the simulation? But I guess so many of us sold our statistical inference books back after the class was over that we probably wouldn't understand the oil industry's methods anyway, right?
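If that 60-70% figure came from simulation, the basic idea is easy to sketch: run a model of the procedure thousands of times with randomized inputs and count how often it succeeds. Here's a minimal Monte Carlo estimate in Python. The physics is a completely made-up toy model; the `mud_density` and `well_pressure` parameters and the success condition are my illustrative assumptions, not anything BP has published:

```python
import random

def top_kill_succeeds(mud_density, well_pressure, rng):
    # Toy model (hypothetical): the attempt succeeds if the injected mud
    # column's pressure, drawn with random noise, exceeds the well's
    # upward pressure. Real reservoir models are vastly more complex.
    injected_pressure = mud_density * rng.gauss(1.0, 0.15)
    return injected_pressure > well_pressure

def estimate_success_rate(trials=10_000, seed=42):
    # Repeat the simulated attempt many times and report the fraction
    # of runs that succeeded -- the Monte Carlo success estimate.
    rng = random.Random(seed)
    successes = sum(
        top_kill_succeeds(mud_density=1.3, well_pressure=1.2, rng=rng)
        for _ in range(trials)
    )
    return successes / trials

rate = estimate_success_rate()
print(f"Estimated success rate: {rate:.0%}")
```

With these made-up parameters the estimate lands in the same 60-70% ballpark, which is exactly the kind of number a simulation campaign would hand to management.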
One of the famous metrics of Fifth-Generation computers is LIPS (Logical Inferences Per Second). The goal of the early pioneers was to develop systems that could perform millions of LIPS. The concept of an inference is a pretty powerful thing: an inference might be a deduction, an induction, or even an abduction. To be able to perform millions of those in a second is pretty darn impressive.

In the early days of the Fifth-Generation project, we had expert systems like Drilling Advisor, Prospector, and Litho that had models of things related to drilling for oil. They helped with exploration, troubleshooting, and problem solving. What happened to those systems? Where are their descendants? Surely they've evolved over the past 20 years. Surely they've been adapted to offer expert or knowledge-based advice on current flows, drilling problems, oil spills, and environmental contamination, right? Surely big oil didn't just throw these systems in the backroom.

So at this point, with Intel's multicore Xeons and AMD's multicore Opterons so affordable, with expert-system design and agent-oriented design so available, and with Linux and Solaris so open, do we have smart systems capable of performing millions or billions of logical inferences per second helping to resolve the oil crisis in the Gulf of Mexico? We have the computing power, right? We have cheap, affordable parallelism. It's now trivial to put together clusters that can perform billions of logical inferences per second, and almost trivial to connect those clusters into super clusters that can perform trillions. So my question is: what mathematical, geological, oceanographic, and environmental models is BP using that only gave them a 60-70% success rate for plugging up the oil well?
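To make the notion of a "logical inference" concrete, here is a minimal forward-chaining sketch in Python, in the spirit of those rule-based expert systems: each time a rule fires and adds a new conclusion to the fact base, that's one inference. The rules and facts below are hypothetical placeholders I made up for illustration, not real drilling knowledge:

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly fire any rule whose premises
    are all known facts, until no new conclusions appear. Each new
    conclusion counts as one logical inference."""
    facts = set(facts)
    inferences = 0
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                inferences += 1
                changed = True
    return facts, inferences

# Hypothetical toy knowledge base -- illustrative only, not domain advice.
rules = [
    ({"high_pressure", "porous_rock"}, "blowout_risk"),
    ({"blowout_risk", "bop_failed"}, "uncontrolled_flow"),
    ({"uncontrolled_flow"}, "attempt_top_kill"),
]

facts, count = forward_chain({"high_pressure", "porous_rock", "bop_failed"}, rules)
print(facts, count)  # three inferences chained from the initial facts
```

A Fifth-Generation machine doing millions of LIPS is, conceptually, just this loop running over an enormous rule base at enormous speed, in parallel.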
With the massively parallel computing power available to virtually anyone these days, we should be able to use inductive logic programming or data mining to learn models that give a better than 60-70% success rate. The fact that we're only getting 60-70% estimates suggests that the problem is not well understood in the first place and that quality data is not readily available.
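As a toy illustration of what "learning a model from data" means here: take historical intervention records, group them by their conditions, and estimate a success rate for each group. The records below are fabricated placeholders; the point is the shape of the computation, not the numbers:

```python
from collections import defaultdict

# Hypothetical historical records: (depth_class, method, succeeded).
# Entirely made up for illustration -- no real data behind these.
records = [
    ("shallow", "top_kill", True), ("shallow", "top_kill", True),
    ("shallow", "top_kill", False), ("deep", "top_kill", False),
    ("deep", "top_kill", False), ("deep", "top_kill", True),
    ("deep", "relief_well", True), ("deep", "relief_well", True),
]

def learn_success_rates(records):
    """Group outcomes by (condition, method) and estimate P(success)
    for each group from the observed frequencies."""
    counts = defaultdict(lambda: [0, 0])  # [successes, trials]
    for depth, method, ok in records:
        counts[(depth, method)][0] += ok
        counts[(depth, method)][1] += 1
    return {key: s / t for key, (s, t) in counts.items()}

rates = learn_success_rates(records)
for key, rate in sorted(rates.items()):
    print(key, f"{rate:.0%}")
```

This is the crudest possible form of the idea; inductive logic programming would instead induce symbolic rules (like the ones an expert system encodes by hand) from the same kind of relational data. Either way, the quality of the learned model is bounded by the quality of the records, which is exactly the point above.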
Expert systems, knowledge-based systems, agent-oriented systems, and massively parallel processors offer so much promise; how come they are not being highlighted during this crisis? Everyone talks about how hard the problem is, but how are computers being used to solve it? From the failures so far, it seems like either computers are not being used, or the oil experts are not using Fifth-Generation simulation (I hope they're not relying on 4GLs and spreadsheets).