The Annoying Chinese Room
Artificial Intelligence may not pay the bills for too many people in our field, and it may not have worked its way into too many products, but I think it still holds a special place of fascination for programmers. For my generation at least, blame it on HAL.
If you actually work for a living, you may not be aware of one of the biggest wet blankets at the AI party, John R. Searle. Back in 1980 Searle posed a thought experiment that has become known as the Chinese Room argument, which many people feel thoroughly disproves the notion that any computer program could acquire true intelligence.
The Chinese Room argument is taken very seriously in the academic world, and entire forests have been destroyed arguing about it. To maintain your street cred among the intelligentsia, you need to bone up on Searle. You probably only need to dedicate 30 minutes of your life. There's just one problem: it will drive you nuts.
Everyone who writes about this problem gets to create their own little overview, and I guess I'm not going to be an exception. But I encourage you to read Searle's original argument; it's cogent and concise and won't take too much of your time.
In short, Searle does a very nice job of setting up a straw man and then knocking it down. He starts with the premise that there exists a computer program that can intelligently converse with humans in written Chinese. Given that this passes the Turing test, we assert that the computer executing the program has achieved understanding and intelligence.
Searle then uses reductio ad absurdum to smite the assertion. He rightfully supposes that we could implement this computer program by putting John Searle in a locked room and giving him a written instruction book, paper, pencils, filing cabinets, erasers and whatnot. He would then accept Chinese text through his mail slot and blindly follow the instruction book to create output, which would intelligently respond to the input text.
And at this point could John Searle be said to understand Chinese? Of course not. (For good measure, Searle also points out the program does nothing to explain human intelligence - but I don't think this stirred up the passions of the first part of the argument.)
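For the programmers in the audience, the room's mechanics amount to pure symbol manipulation: match the shape of the input against a rulebook, emit the prescribed output, understand nothing. Here's a toy sketch of that idea (the rulebook entries are my own hypothetical stand-ins, not anything from Searle's paper):

```python
# A toy Chinese Room: the executor blindly matches input symbols to
# output symbols. Nothing in this code attaches meaning to either side.
# These hypothetical rulebook entries stand in for Searle's instruction book.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What's your name?" -> "I'm Xiao Ming."
}

def chinese_room(message: str) -> str:
    """Follow the rulebook; understand nothing."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```

A real lookup table obviously couldn't pass the Turing test - that's the point of the premise, not an objection to it - but the sketch makes the structure of the argument concrete: wherever the "understanding" lives, it isn't in the thing executing the rules.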
The first problem here is that Searle is a philosopher, and as such he gets off on the wrong foot. In the Chinese Room, John Searle is clearly functioning as a CPU executing a program, and nobody would seriously posit that the CPU itself is intelligent - any more than we posit that a single neuron or DRAM cell is intelligent.
And the fact that the problem is framed in these erroneous terms is at least annoying to most of us, if not downright infuriating. It tends to bring out angry reactions.
However, it doesn't take much jiggering to reframe the problem. In this computer with a human CPU and paper storage, where exactly is the so-called understanding? You can't exactly point to it. So you end up falling back on the notion that the system as a whole is intelligent, which is somehow unsatisfying. It would feel a lot more comfortable if we could point to a robot and say the same thing.
Our Unfortunate Predisposition
As computer programmers, I think we have a natural tendency to believe in the computational model of the mind. We work with computers all day, and we are steeped in deterministic thinking. When that is your framework, it's very easy to think of the mind as another computing device. And fair enough, you can explain most of what it does in terms of computation.
Searle seems to be taking a poke at this model. After all, he's asserting that a computer can't have intelligence. Thus the mind must be something more than a simple computation machine.
What's the Point?
It's easy for anyone to read Searle's paper and immediately toss off a dozen cogent arguments either for or against it. But trust me: you probably don't have anything new to say about it - spend a year or two on the exhaustive references on Wikipedia, then reassess.
Given this, I won't try to formulate any ingenious arguments either. The purpose of my post is just to air the argument, not to refute it.
However, I do think that there are some interesting things popping up right now that may make this paper unimportant in the future.
Turing Was Right
When Alan Turing devised his classic test for machine intelligence, he purposely structured it to avoid getting sucked into a Chinese Room argument. He didn't ask for proof that a machine was intelligent, simply that it exhibited intelligence.
The difference between Turing and Searle is that with Searle we have to get into definitions of elusive concepts such as mind, consciousness, intelligence, intention, and understanding. Great stuff for a philosopher - for a scientist, not so much.
What's really interesting is that neuroscience is now providing some small measure of visibility into the mind, as opposed to the brain. We can actually draw some conclusions about how cognition occurs.
And the outcome of a lot of this research seems to show that our model of the mind may be fatally flawed. Consciousness is turning out to be a shaky concept, as is the notion that the conscious mind has free will. Intelligence and understanding are turning out to not be nearly as clear-cut as we once thought.
It may turn out that the entire semantic model of cognitive processes that we built up over 2,000 years is about as useful as the luminiferous aether turned out to be. And it may also turn out that Turing's intuition is correct: neither humans nor computers can be said to possess intelligence or understanding; they can only exhibit it.
In which case, Searle's work might be described by a line from the Bard:

Told by an idiot, full of sound and fury,
Signifying nothing.
Which would be fine by me. Twenty-eight years after publication, I still find it pretty annoying.