AI: It's OK Again!


Analysis and Synthesis

In the same year that Feigenbaum et al. were publishing The Handbook of Artificial Intelligence, G.E. Hinton and J.A. Anderson came out with their Parallel Models of Associative Memory and David Rumelhart and James McClelland, joined by Hinton, started work on a project that resulted in the two-volume Parallel Distributed Processing.

If The Handbook was the handbook of GOFAI ("good old-fashioned artificial intelligence," the attempt to model human intelligence at the symbolic level), then Parallel Distributed Processing was the handbook of connectionism. Symbolism and connectionism have been competing themes in AI work throughout its history.

The Handbook of Artificial Intelligence, though, shone its light on AI successes, which then were in the symbolist tradition, mostly expert systems: models of specific subject-matter domains that embody domain-specific rules plus an inference engine through which the system draws conclusions from those rules. MYCIN, Edward Shortliffe's medical-advice program, is a good example. Implemented in the mid-1970s, MYCIN engaged in a dialog with a doctor about a patient to assemble information on the basis of which to suggest a diagnosis and recommend a treatment. Its advice compared favorably with that of domain experts in several disease domains.
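
To get the flavor of that rules-plus-inference-engine pattern in code, consider the following minimal sketch in Python. The rules, symptoms, and drug are invented for illustration; a real system such as MYCIN used a far richer rule language along with certainty factors for weighing uncertain evidence.

# Hypothetical, illustrative rules: (premises, conclusion) pairs.
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "gram_negative_stain"}, "suspect_e_coli"),
    ({"suspect_e_coli"}, "consider_ampicillin"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises are all known facts, until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# MYCIN gathered its initial facts through a dialog with the doctor; here they are hard-coded.
print(forward_chain({"fever", "stiff_neck", "gram_negative_stain"}, RULES))

The inference engine knows nothing about medicine; all of the domain knowledge lives in the rules. That separation is what made expert systems attractive to build and, as we will see, brittle at the edges of their domains.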

Expert systems represent only an early example of the symbolist approach. The logic of that approach is that, "since the phenomenon of interest is human symbolic reasoning, we should be modeling at that level, both in order to succeed and in order to understand our success—to understand how human brains work once we have a working AI system," according to Larry Yaeger of Indiana University. Marvin Minsky, Douglas Hofstadter, and Douglas Lenat are among those promulgating the symbolist view today. (Although Hofstadter, whose work on fluid concepts seems squarely in the symbolist tradition, says he hasn't read any AI journals in the past 10 to 15 years. "I just pursue my own goals and ignore just about everyone and everything else," he says. And that is in itself a comment on the state of AI today.)

Today, "the symbolic paradigm...has turned out to be a dead end," Terry Winograd says.

That seems harsh, given that many presentations at AAAI were arguably in the symbolist tradition. There was a whole track on AI and the Web, much of which dealt with Web 3.0 issues like ontologies and semantic descriptions.

Some of those seem pretty intelligent. "It's amazing how intelligent a computer program can seem to be when all it's doing is following a few simple rules...within a limited universe of discourse," says Don Woods, who, as the creator of the classic game Adventure, showed the world how to do just that.

But the limited universe of discourse is the problem. We tend to regard brittleness at the edges of a domain as evidence of a lack of intelligence. "[E]xplain your symptoms in terms of drops rather than drips," says Yaeger, and "the best medical diagnosis software...won't have a clue."

Maybe a bigger universe of discourse is the answer? With more intelligence built into the universe itself? MIT's Rodney Brooks thinks that's important: "We have reached a new threshold in AI brought about by the massive amount of mineable data on the web and the immense amount of computer power in our PCs."

James Hendler points to "an early wave of Web 3.0 applications now starting to hit the Web," and sees big opportunities in nontext search. "Wouldn't it be nice if you could ask a future Google to recommend some potential friends for your MySpace links?" Hendler, Tim Berners-Lee, and Ora Lassila wrote the defining article on the Semantic Web (www.w3.org/2001/sw), and while Berners-Lee says the Semantic Web is not AI, it is tempting to see it as the ultimate AI knowledge base.

Or maybe that would be Doug Lenat's Cyc project (www.cyc.com). "It started with the goal of entering an entire encyclopedia's knowledge into the computer, but extending every entry so that all underlying assumptions—all common sense and background knowledge—[were] also entered," Yaeger says. Cyc has evolved in its goals, but "[i]f there's any hope of making GOFAI work...Cyc seems like its best hope."

But Brooks cautions, "we still have great challenges in making these systems as flexible, as deep, and as intellectually resilient as a two-year-old child." Winograd thinks that the symbolist approach will never get there: "In order to build human-like intelligence," he says, "researchers will need to base it on a deep understanding of how real nervous systems are structured and how they operate." Connectionism, it seems, is ascendant.

The word "connectionist" was first used in the context of mental models by D.O. Hebb in 1949, but its influence on AI researchers dates to Rosenblatt's use in his Perceptrons paper in 1958. Minsky and Papert killed the nive perceptron model stone dead in 1969 and more or less interred connectionism along with it, until Parallel Distributed Processing resurrected it in 1987.

"The idea behind connectionism," Yaeger says, "is that key aspects of brain behavior simply cannot be modeled at the symbolic level, and by working closer to the physical system underlying human thought—the brain and its neurons and synapses—we stand both a much greater chance of succeeding at producing AI and of understanding how it relates to real human thought." Yaeger is wholeheartedly in the connectionist camp, and in particular in the tradition spearheaded by John Holland and advanced by Stephen Wolfram and Chris Langton and others, cellular automata and Artificial Life.

The connectionist approach is basically synthesis, or bottom-up; the symbolist approach is analysis, or top-down. Both are doubtless necessary. "[S]ymbols-only AI is not enough, [but] subsymbolic perceptual processes are not enough either," Winston says.

Science and Engineering

So what about the engineering side of AI, what about real working systems that solve real problems? There the news seems good.

In terms of real engineering and applied science accomplishments, "[t]he most active and productive strand of AI research today is the application of machine learning techniques to a wide variety of problems," Winograd says, "from web search to finance to understanding the molecular basis of living systems." Work like this, and advances in other areas such as robotics, are taking us in the direction of more intelligent artifacts, "and will lead to a world with many 'somewhat intelligent' systems, which will not converge to human-like intelligence."

Rodney Brooks sees great progress being made in practical systems involving language, vision, search, learning, and navigation, systems that are becoming part of our daily lives. Nils Nilsson took time out from writing a book on the history of AI to share some thoughts on its state today, citing practical results of AI work in adjacent fields like genomics, control engineering, data analysis, medicine and surgery, computer games, and animation.

In a forthcoming book, Hamid Ekbia examines the unique tension between the engineering and science goals of AI:

Artificial Intelligence seeks to do three things at the same time:

1. as an engineering practice, AI seeks to build precise working systems;

2. as a scientific practice, it seeks to explain the human mind and human behavior;

3. as a discursive practice, it seeks to use psychological terms (derived from its scientific practice) to describe what its artifacts (built through the engineering practice) do.

This third practice, which acts like a bridge between the other two, is more subjective than the other two.

And that, he argues, is why the field has such dramatic ups and downs and is so often burdened with over-promising and grandiosity. The gap between AI engineering and AI as a model of intelligence is so large that trying to bridge it almost inevitably leads to assertions that later prove embarrassing. McCarthy said AI was "the science and engineering of making intelligent machines." If that is its hope, maybe it can't escape hype.

Winners and Losers

Right now, the balance in AI work seems tipped toward the applied over the theoretical, and toward the connectionist over the symbolist. But if history is a guide, things could shift back. Another tilt noticeable in the work presented at AAAI this summer is toward modesty over hype, a trend that has been under way since the AI Winter of the '90s, which followed the disappointment over the overpromising of the '80s. AI advances are not trumpeted as artificial intelligence so much these days; they are more often seen as advances in some other field. "AI has become more important as it has become less conspicuous," Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world." And that note of modesty may be a good thing, both for the work and for AI.

