For over a decade, search engines have been the most popular tools with which to find information on the Web. The almost instant access to information and ease of use have been the main reasons for their popularity. Yet there are drawbacks to using search engines. For instance, it is more natural to express an information need in the form of a question than as a set of keywords and operators. The choice of keywords strongly influences the relevance of the results, and using a keyword that does not match the author's choice of words may fail to return a relevant hit. This occurs more often than you might assume. Slang words can have 10 or more synonyms, while other common words may have at least two or three. For example, the word "bought" has several synonyms, among them "purchased" and "acquired."
A search engine doesn't give an answer in response to a query. Instead, users need to scan the text of a ranked list of documents to find answers. Most question/answering systems address both of these problems. In this article, I describe the design and implementation of one such question/answering (Q&A) system. Other Q&A systems include those at www.brainboost.com, www.answerbus.com, and start.csail.mit.edu.
Design
Most of the initial Q&A system designs were built on existing search engines with preprocessing and postprocessing modules, like those depicted in Figure 1. The preprocessing module parsed a natural-language question into a search-engine query. Translating a question into a query is inherently imprecise because questions can be posed in countless ways. However, using a set of patterns and question words, you can categorize a factoid question into one of several defined categories with reasonable accuracy. Question words include single words and phrases such as "who," "what," "when," "how much," "how long," and so on.
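The pattern-based categorization described above can be sketched as a small classifier. The categories and regular expressions here are illustrative assumptions, not the exact set a production system would use:

```python
import re

# Illustrative (category, pattern) pairs keyed on the question word.
# A real system would use a much larger pattern set.
CATEGORIES = [
    ("quantity",   re.compile(r"^how (much|many)\b", re.I)),
    ("duration",   re.compile(r"^how long\b", re.I)),
    ("person",     re.compile(r"^who\b", re.I)),
    ("time",       re.compile(r"^when\b", re.I)),
    ("place",      re.compile(r"^where\b", re.I)),
    ("definition", re.compile(r"^what (is|are)\b", re.I)),
]

def classify(question: str) -> str:
    """Return the first category whose pattern matches the question."""
    q = question.strip()
    for label, pattern in CATEGORIES:
        if pattern.search(q):
            return label
    return "other"  # fall through when no pattern applies
```

For example, `classify("Who invented the telephone?")` yields `"person"`, while `classify("How long is the Nile?")` yields `"duration"`. The category then tells the postprocessor what kind of entity (a name, a date, a quantity) the answer passage should contain.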
The postprocessing module extracted the text from the top n hits for the generated query, segmented the text into passages, and assigned ranks to passages based on a measure of "closeness" to the original question. Each passage was scored based on the degree of overlap between query words and passage words, as well as a density measure of the occurrence of query words in the passage.
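The overlap-plus-density scoring can be sketched as follows. The exact weighting is an assumption for illustration; real systems tune these terms empirically:

```python
def score_passage(query_words, passage):
    """Score a passage by (1) how many distinct query words it contains
    and (2) how densely the query-word hits cluster together.
    The additive combination of the two terms is an illustrative choice."""
    words = passage.lower().split()
    qset = {w.lower() for w in query_words}

    # Positions of every query-word occurrence in the passage.
    positions = [i for i, w in enumerate(words) if w in qset]
    overlap = len({words[i] for i in positions})  # distinct query words present

    if len(positions) < 2:
        return float(overlap)  # density undefined for fewer than two hits

    # Density: hits per token across the span they cover; closer hits score higher.
    span = positions[-1] - positions[0] + 1
    density = len(positions) / span
    return overlap + density
```

A passage containing both "capital" and "france" a few words apart thus outranks a passage mentioning only "france," matching the intuition that answers tend to sit where query words cluster.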
Factoid questions are among the easier types of questions to answer. The two other types, opinion and summary questions, are more complex and may involve compiling, analyzing, and synthesizing text to generate answers. Most answers to factoid questions can be found by extracting the most appropriate sentence verbatim from the text. Therefore, the challenge in answering factoid questions is to find the passage most likely to contain the answer.