Despite the power of computers to crunch numbers with unfathomable speed and perform quadrillions of calculations per second, the machines are still quite primitive in their ability to truly understand human language.
This is a glaring digital deficiency that Nicholas Cassimatis, assistant professor in the Department of Cognitive Science at Rensselaer Polytechnic Institute, is looking to solve. He is leading a multi-university team of researchers to develop unified theories of language and cognition that aim to allow more meaningful linguistic interaction between humans and computers. Only by better understanding the nature of human language, Cassimatis said, can we develop computational systems with human-level language abilities.
The five-year, $8 million project launched in late 2009 and is funded by a Multidisciplinary University Research Initiative (MURI) grant from the U.S. Department of Defense's Office of Naval Research.
"The goal of our project, quite simply, is to make computers understand language," Cassimatis said. "We're enabling computers to jointly reason about the world, the people in it, and language, to create a unified theory that we hope will allow computers to use language in a more natural and flexible way." The research entails developing a cognitive architecture, or computational framework, that models the structure and behavior of an intelligent mind, to account for previously unexplained aspects of language use. Cassimatis said this architecture will pair existing linguistic and artificial intelligence theories with new models, toward the goal of overcoming challenges that have traditionally hindered progress in "teaching" natural language to computers. These challenges include decoding ambiguous or metaphorical language; extracting and using contextual clues from the surrounding environment; and discerning and dealing with the changing beliefs, goals, and intentions of other speakers.
"Computers can use many different words, they can dictate, record, or play back language, but they really have no idea what they, or we, are talking about," Cassimatis said. "They lack common sense, which makes it nearly impossible, right now, for them to answer many simple factual questions. With our new research program, we want to devise a unified theory that will allow humans to better understand our own language capabilities and, in turn, interact more naturally and more effectively with computers."
Potential applications of such a system include computer-based, artificially intelligent agents that allow people to use spoken language to view, manipulate, or otherwise work with data, such as their personal information. Other possible applications include digital agents that allow scientists to access and analyze data, or let physicians access patient medical information, using language. "The mouse and keyboard, and later the Web, dramatically changed the ways in which we interact with computers and data," Cassimatis said. "Our research aims to take this to the next level. What we want to do is enable computers not just to react to pre-programmed voice commands like 'Call Mom' or 'Web Search GE Stock Price,' but to actually listen to what we say and intelligently understand what we're asking or conveying."
Co-investigators of the project are Herbert H. Clark, professor of psychology at Stanford University; Jerry Hobbs, research professor at the University of Southern California; Pat Langley, professor of computing and informatics at Arizona State University; Sergei Nirenburg, professor of computer science and electrical engineering at the University of Maryland; and Ivan Sag, professor of humanities and linguistics at Stanford University.