Artificial Intelligence Meets Natural Stupidity
Artificial Intelligence Meets Natural Stupidity is a funny, nicely written paper by AI researcher Drew McDermott. He published it in 1976, a time when, as he explains in his introduction, hacker culture and the need to explore weird ideas had combined to cripple AI's self-discipline. To keep AI credible, he says, we must stop repeating mistakes caused by sloppy thinking. The paper ridicules three of these mistakes, which McDermott admits he suffers from himself. The part I enjoyed most is the schematic chronicle of a researcher who acts as though identifying the shortcomings of version I of a program were equivalent to having written version II, and thereby ends up inhibiting further research. I believe this still happens, 33 years on; and I don't believe it happens only in AI. So I'd like to recommend the paper to everyone who programs a new way to solve a problem.
McDermott's original paper was published in SIGART Newsletter (of the Special Interest Group on Artificial Intelligence, of the Association for Computing Machinery), number 57 (April 1976). It has been republished in the book Mind Design, edited by John Haugeland (1981). If you try to buy the book, beware: there seem to be several versions, some of which contain more papers than the copy I have — but not McDermott's. Confusingly, Amazon shows you the cover of one version, but when you "look inside", displays the contents of another. Barnes & Noble do show a version with the paper in. There are several copies on the Web, including the Citeseer one that I linked to above.
As I said in my introduction, the paper focusses on three mistakes made by AI researchers. (It also, very briefly, warns on the final page about a few others.) One mistake is "wishful mnemonics": identifiers named after grand concepts such as "theorem", "understand", or "is a". In other words, identifiers named after what you would like the program to do, rather than what it actually does. Often, what you would like it to do is to imitate some human ability such as understanding natural language. It will never do so, certainly not in the way a human does, and you shouldn't let your identifiers lull you into believing that it does.
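To make the point concrete, here is a hypothetical illustration of my own (it is not from the paper): a function named after the ability we wish the program had, rather than after what it actually does.

```python
# A hypothetical "wishful mnemonic": the name promises understanding,
# but the body only matches keywords against canned replies.

def understand(sentence: str) -> str:
    """Despite the grand name, this just looks for keywords."""
    canned = {
        "hello": "Greetings!",
        "weather": "It might rain.",
    }
    for keyword, reply in canned.items():
        if keyword in sentence.lower():
            return reply
    return "Tell me more."

# Renaming it after what it actually does removes the wishful thinking:
match_keyword_to_canned_reply = understand
```

Nothing about the program changes under the honest name; only the temptation to believe it "understands" disappears.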
McDermott devotes three pages of this section to wishful abuse of "is a". Read these if you model the world using inheritance, including via "is-a" links in semantic networks.
Another mistake is to assume that if only modules could communicate in natural language, all our problems would be solved. I suspect far fewer people make this mistake now than in 1976. But one part of this section discusses: "the"; reference; data structures that represent parts of things, such as "the finger of the hand" or "the left arm of the chair"; and data structures that represent things that have been destroyed and recreated, such as "the barn that burned down and was rebuilt". That part is well worth reading, especially if you believe you can solve your problem by using the data structures AI people call "frames", or any equivalent that has "part-of" links.
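The barn example can be sketched in a few lines. This is my own minimal illustration, not McDermott's, and the class and relation names are invented for the purpose: a naive frame with "part-of" links gives you one node per noun phrase, which conflates the barn that burned down with the barn that replaced it.

```python
# A minimal sketch (my own, not from the paper) of the reference problem.
# The class and relation names here are invented for illustration.

class Frame:
    """A naive frame: a name plus "part-of" links to other frames."""
    def __init__(self, name: str):
        self.name = name
        self.parts: list["Frame"] = []

    def add_part(self, part: "Frame") -> None:
        self.parts.append(part)

# The wishful version: one node stands for "the barn", so the burned
# barn and the rebuilt barn are the same object.
barn = Frame("the barn")
barn.add_part(Frame("the barn door"))

# Distinguishing them requires two individuals plus an explicit relation
# between them, which the naive structure has no place for:
old_barn = Frame("the barn (before the fire)")
new_barn = Frame("the barn (as rebuilt)")
rebuilt_as = {old_barn.name: new_barn.name}
```

The point is that the data structure does not resolve reference for you; "the barn" in English can pick out either individual, and only extra machinery in the program can decide which.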
And the third section is headed "** Only a Preliminary Version of the Program was Actually Implemented". Find it, and enjoy.