Artificial intelligence, a field that seeks to reproduce human intelligence in machinery, predates the modern computing age. As a field of computer science, it encompasses two broad approaches to the study of human intelligence. The first attempts to identify laws of reasoning and express them in symbols that a computer can manipulate. The second attempts to use computer hardware and software to model the functions of the human brain. Both approaches have experienced fluctuating fortunes, as failures to achieve more ambitious goals repeatedly undermined the field's early successes.
From the start of the computer era, researchers sought to develop a computer system that would be indistinguishable from a human being. This standard, articulated by the mathematician Alan Turing (1912–1954) in 1950, holds that a system possesses artificial intelligence if it can produce responses to questions that are indistinguishable from those of a human being.
From the start, researchers pursued both the symbolic approach to artificial intelligence and efforts to build an artificial brain. In 1951, Marvin Minsky and Dean Edmonds built one of the first artificial neural network machines, and by the mid-1950s Allen Newell and Herbert Simon, working at what became Carnegie Mellon University, were developing a program that could prove theorems within a limited field of mathematics. Researchers quickly identified goals that would remain central to the field, including programs that could translate text from one language to another or play chess. The term "artificial intelligence" was coined in 1955 by the mathematician John McCarthy (1927–2012), then at Dartmouth College.
The field of artificial intelligence quickly fell into a pattern of early successes on simple problems, followed by an inability to solve more complicated problems that more closely modeled human intelligence. An MIT program called SHRDLU, completed around 1970, was typical of this pattern. SHRDLU was designed to process English commands and use them to manipulate objects with a robot arm. It worked well in a simple environment filled with toy blocks, but it could not be adapted to more realistic settings.
In the 1970s and 1980s, this pattern of success on simple problems and failure on more complicated tasks led the major American and European funding agencies to reduce their support for artificial intelligence research. For almost a decade, the field suffered through what researchers called "the AI winter."
The symbolic approach to artificial intelligence recovered briefly in the early 1980s. One of its early successes was the rule-based expert system, which captured human knowledge about a task or activity as a collection of if-then rules. As had happened to the field as a whole, work on rule-based systems also suffered from an inability to meet inflated expectations. Although expert systems became commercial products by the middle of the decade, they were never able to accomplish all that had been claimed for them. The Japanese government invested heavily in them through its Fifth Generation Computer Systems project and had little to show in return. Early hopes of building a system that could diagnose disease were also frustrated.
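In outline, such a system applies its if-then rules to a set of known facts, asserting new conclusions until no rule can fire. The Python sketch below illustrates only this core mechanism; the triage rules are invented for illustration and are not drawn from any actual expert system of the period, which typically held hundreds of rules along with machinery such as certainty factors.

```python
# Minimal sketch of a rule-based (expert-system) inference loop.
# Each rule pairs a set of required facts with a conclusion to assert.
# The rules here are hypothetical examples, not from a real system.
RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
    ({"rash", "fever"}, "possible_measles"),
]

def forward_chain(initial_facts):
    """Fire any rule whose conditions are met until nothing new is derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}))
# Derives 'possible_flu', then 'see_doctor', via chained rules.
```

The appeal of this design, and the source of its commercial promise in the 1980s, was that the knowledge lived in the rules rather than the program: in principle, an expert could extend the system simply by adding rules.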
Artificial intelligence resurged in the early 1990s as researchers explored new approaches to modeling the operation of the brain. These efforts, identified in this period as neural nets, proved capable of recognizing patterns in data, such as objects in photographs and suspicious activity in financial transactions. Perhaps because earlier forms of this work had failed to meet the expectations set for them, researchers tended to describe this work as "pattern recognition" or "machine intelligence" rather than artificial intelligence.
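The underlying idea can be illustrated with the simplest possible case: a single artificial neuron (a perceptron) that learns a decision boundary by nudging its weights after each mistake. The data and parameters below are invented for illustration; practical networks combine many such units in layers.

```python
# Minimal sketch of pattern recognition with one artificial neuron.
# Hypothetical two-feature "transactions" are labeled 1 (flag) or 0.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights whenever the neuron misclassifies a sample."""
    w = [0.0, 0.0]   # one weight per input feature
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - predicted          # -1, 0, or +1
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Invented training data: flag samples whose first feature is large.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.9), 0), ((0.8, 0.7), 1)]
w, b = train_perceptron(data)
print(w, b)  # learned weights roughly separate the two classes
```

Because such a unit learns from labeled examples rather than hand-written rules, this style of work sidestepped the knowledge-capture bottleneck that had limited expert systems.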
By limiting the goals of their work, researchers achieved substantial success in the 1990s and 2000s with programs that played chess, recognized objects in photographs, diagnosed difficult mechanical problems, identified songs, translated text, and even played the television game show Jeopardy! However, the field seemed little closer to the goal of building a system that would pass Turing's test and be indistinguishable from a human being.
Grier, D. A. (2014). Artificial intelligence. In H. R. Slotten (Ed.), The Oxford Encyclopedia of the History of American Science, Medicine, and Technology (1st ed.). Oxford University Press. https://search.credoreference.com/articles/Qm9va0FydGljbGU6NDgxNzk1Mg==?aid=16233
ProCon.org (Ed.). (2024). Artificial intelligence (AI) — top 3 pros and cons. In ProCon Headlines (1st ed.). ProCon. https://search.credoreference.com/articles/Qm9va0FydGljbGU6NDkzMTgzNQ==?aid=16233