Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behavior. Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defined it as "the science and engineering of making intelligent machines".
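The "intelligent agent" definition above can be sketched in a few lines of code: a system that perceives its environment and selects the action expected to maximize its chance of success. This is only an illustrative toy (the `agent`, `comfort`, and thermostat scenario are invented for the example, not taken from any particular AI system):

```python
def agent(percepts, actions, utility):
    """Choose the action with the highest expected utility, given what
    the agent has perceived so far."""
    return max(actions, key=lambda a: utility(percepts, a))

# Toy environment: a thermostat perceiving temperature readings.
def comfort(percepts, action):
    temp = percepts[-1]                      # most recent reading
    target = 21.0                            # desired temperature
    effect = {"heat": +2.0, "cool": -2.0, "off": 0.0}[action]
    return -abs((temp + effect) - target)    # closer to target = better

print(agent([17.5], ["heat", "cool", "off"], comfort))  # → heat
```

Real agents differ enormously in how they perceive and how they estimate success, but they share this perceive–evaluate–act structure.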
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues: some subfields focus on the solution of specific problems, while others focus on one of several possible approaches, on the use of a particular tool, or on particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence remains among the field's long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. A large number of tools are used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. The field is interdisciplinary, drawing on computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology.
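Of the tools named above, search is among the simplest to illustrate. The sketch below shows uninformed breadth-first search finding a shortest path through a state graph; the graph itself is invented for the example:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: return a shortest path from start to goal,
    or None if the goal is unreachable."""
    frontier = deque([[start]])   # queue of partial paths to extend
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Practical AI search problems have state spaces far too large to enumerate, which is why the field developed heuristic and optimization-based variants of this basic idea.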
The field was founded on the claim that a central property of humans, human intelligence—the sapience of ''Homo sapiens''—"can be so precisely described that a machine can be made to simulate it."〔See the Dartmouth proposal, under Philosophy, below.〕 This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism〔The optimism referred to includes the predictions of early AI researchers (see optimism in the history of AI) as well as the ideas of modern transhumanists such as Ray Kurzweil.〕 but has also suffered stunning setbacks.〔The "setbacks" referred to include the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973 and the collapse of the Lisp machine market in 1987.〕 Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.
== History ==
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's ''Frankenstein'' or Karel Čapek's ''R.U.R. (Rossum's Universal Robots)''. Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.
Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.〔This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.〕 This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel, and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing:〔Russell and Norvig write "it was astonishing whenever a computer did anything kind of smartish."〕 computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense, and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off funding for undirected exploratory research in AI. The next few years would later be called an "AI winter", a period when funding for AI projects was hard to find.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.
The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In February 2011, in a ''Jeopardy!'' quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest ''Jeopardy!'' champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,〔"Kinect's AI breakthrough explained"〕 as do intelligent personal assistants in smartphones.〔http://readwrite.com/2013/01/15/virtual-personal-assistants-the-future-of-your-smartphone-infographic〕