Here is the AI FAQ. As its name suggests, this FAQ aims to answer common questions about AI. If you think a question is missing or an answer is incomplete, please drop me an email at email@example.com
My goal is to provide answers that are as concise and clear as possible. I see no point in paraphrasing, so I am always glad to offer a good link or a quote as an answer.
Yes, here is a short list:
There are FAQs for specific AI fields, such as:
In general, http://www.faqs.org/ is a good place to check for the latest FAQs in most areas.
I wrote this FAQ to add a few questions and give my own answers, which may be as simple as a link or a quote.
jmc: It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
comp.ai: Artificial intelligence ("AI") can mean many things to many people. Much confusion arises because the word 'intelligence' is ill-defined. The phrase is so broad that people have found it useful to divide AI into two classes: strong AI and weak AI.
Peter Norvig: We think of AI as understanding the world and deciding how to make good decisions. Dealing with uncertainty but still being able to make good decisions is what separates AI from the rest of computer science.
Machine learning pioneer Arthur Samuel, in his 1983 talk entitled “AI: Where It Has Been and Where It Is Going”, stated that the main goal of the fields of machine learning and artificial intelligence is “to get machines to exhibit behaviour which, if done by humans, would be assumed to involve the use of intelligence.”
comp.ai: Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans and possibly even be conscious of themselves. Weak AI simply states that some "thinking-like" features can be added to computers to make them more useful tools... and this has already started to happen (witness expert systems, drive-by-wire cars and speech recognition software). What does 'think' and 'thinking-like' mean? That's a matter of much debate.
Scholars have not yet reached a consensus on a formal definition of intelligence. See http://en.wikipedia.org/wiki/Intelligence for further attempts at a definition.
jmc: Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
jmc: Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
jmc: Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.
jmc: No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child's age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, "digit span" is trivial for even extremely limited computers.
However, some of the problems on IQ tests are useful challenges for AI.
jmc: A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge. However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human-level intelligence will be achieved.
Yes and no. To the best of our current knowledge, the computations observed so far in human brains can be reproduced on computers. Yet some AI researchers consider the Von Neumann architecture a computational and architectural bottleneck for AI. For example, IBM's SyNAPSE project aims to design computing chips whose architecture maps more closely to the way the human brain functions.
We don't know, for we haven't yet figured out the right algorithms to compute intelligence. My opinion is that increasing our computational power cannot hurt: it lets us focus on functionality rather than on tedious optimization.
Read the article on computing machinery and intelligence written by A. M. Turing. You may also want to read Ray Kurzweil's bestseller "The Age of Spiritual Machines: When Computers Exceed Human Intelligence". Many other articles and books have been written on the subject; few are interesting, mainly because many people avoid trying to define "thinking".
Edsger Dijkstra: Whether computers can think is like the question of whether submarines can swim. In English, we say submarines don’t swim, but we say aeroplanes do fly. In Russian, they say submarines do swim.
jmc: After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.
comp.ai: Statistical AI, arising from machine learning, tends to be more concerned with "inductive" thought: given a set of patterns, induce the trend. Classical AI, on the other hand, is more concerned with "deductive" thought: given a set of constraints, deduce a conclusion. Another difference, as discussed in the question on programming languages, is that C++ tends to be a favorite language for statistical AI while LISP dominates in classical AI.
A system can't be truly intelligent without displaying properties of both inductive and deductive thought. This leads many to believe that, in the end, there will be some kind of synthesis of statistical and classical AI.
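To make the contrast concrete, here is a minimal sketch in Python (the examples, data, and names are my own illustrations, not from comp.ai): the "statistical" side induces a trend from examples, while the "classical" side deduces new facts from explicit rules.

```python
# Inductive: given observed points, induce the trend (least-squares slope).
points = [(1, 2), (2, 4), (3, 6)]
n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
        sum((x - mean_x) ** 2 for x, _ in points)
print(slope)  # induced trend: y = 2x

# Deductive: given facts and rules, deduce conclusions by forward chaining.
facts = {"socrates_is_human"}
rules = [("socrates_is_human", "socrates_is_mortal")]
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
print("socrates_is_mortal" in facts)  # True
```

The first half generalizes from data and would tolerate noise; the second half is brittle but exact, which is the trade-off the comp.ai answer is pointing at.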
comp.ai: We say a game is solved when we know for sure the result when both players play optimally. The result is either a guaranteed win for the first player, a guaranteed win for the second player, or a draw. We find this out by searching the minimax game tree down to the game-ending positions. If you do this for 3x3 tic-tac-toe, it is easy to see that it is a forced draw.
A few other games:
3x3x3 tic-tac-toe: win for the first player.
4x4x4 tic-tac-toe: win for the first player.
Connect-4: win for the first player.
Go-Moku: win for the first player.
List of solved games: http://en.wikipedia.org/wiki/Solved_game
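The minimax search described above fits in a few lines. Here is a minimal Python sketch (board encoding and function names are my own) that solves 3x3 tic-tac-toe exhaustively and confirms the forced draw:

```python
from functools import lru_cache

# Board is a tuple of 9 cells: 'X', 'O', or ' '. X moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """+1 if X can force a win, -1 if O can, 0 if optimal play is a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    values = []
    for i in moves:
        child = board[:i] + (player,) + board[i + 1:]
        values.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

result = minimax((' ',) * 9, 'X')
print({1: 'first-player win', -1: 'second-player win', 0: 'draw'}[result])
```

Memoizing on positions (rather than move sequences) keeps the search tiny; for the larger games listed above the same idea works in principle, but the state spaces demand far more engineering.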
All the following libraries are open-source and free (unless otherwise stated):
Artificial neural network: FANN [C, many bindings available], Neuroph [Java]
Data mining: Weka [Java]
Evolutionary computation: Sferes2 [C++]
Fuzzy logic: Fuzzy Logic Toolbox (not free of charge) [MATLAB], jFuzzyLogic [Java]
Machine learning: Apache Mahout (scalable, can be used with Apache Hadoop) [Java]
I couldn't agree more with Ray Kurzweil's standpoint: "In media appearances, Kurzweil has stressed the extreme potential dangers of nanotechnology, but argues that in practice, progress cannot be stopped, and any attempt to do so will retard the progress of defensive and beneficial technologies more than the malevolent ones, increasing the danger. He suggests that the proper place of regulation is to make sure progress proceeds safely and quickly."
Yes, but it creates other jobs with greater added value for society, just as Gutenberg's printing press did.
A nice answer by Faisal Sikder (note that, strictly speaking, an algorithm need not be deterministic: randomized algorithms are algorithms too):
An algorithm is a set of well-defined instructions for carrying out a particular task. An algorithm is predictable, deterministic, and not subject to chance. An algorithm tells you how to go from point A to point B with no detours, no side trips to points D, E, and F, and no stopping to smell the roses or have a cup of joe.
A heuristic is a technique that helps you look for an answer. Its results are subject to chance because a heuristic tells you only how to look, not what to find. It doesn’t tell you how to get directly from point A to point B; it might not even know where point A and point B are. In effect, a heuristic is an algorithm in a clown suit. It’s less predictable, it’s more fun, and it comes without a 30-day, money-back guarantee.
Here is an algorithm for driving to someone’s house: Take Highway 167 south to Puyallup. Take the South Hill Mall exit and drive 4.5 miles up the hill. Turn right at the light by the grocery store, and then take the first left. Turn into the driveway of the large tan house on the left, at 714 North Cedar.
Here’s a heuristic for getting to someone’s house: Find the last letter we mailed you. Drive to the town in the return address. When you get to town, ask someone where our house is. Everyone knows us—someone will be glad to help you. If you can’t find anyone, call us from a public phone, and we’ll come get you.
The difference between an algorithm and a heuristic is subtle, and the two terms overlap somewhat. The main difference between the two is the level of indirection from the solution. An algorithm gives you the instructions directly. A heuristic tells you how to discover the instructions for yourself, or at least where to look for them.
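The same contrast shows up in code. Here is a small Python sketch (a toy problem and names of my own choosing): exhaustive search is an algorithm, guaranteed to find the global maximum of a list; hill climbing is a heuristic that only inspects neighbours and can get stuck on a local peak.

```python
def exact_max(values):
    """Algorithm: deterministic, guaranteed to return the index of the global maximum."""
    best = 0
    for i in range(len(values)):
        if values[i] > values[best]:
            best = i
    return best

def hill_climb(values, start):
    """Heuristic: move to a better neighbour until none exists; may stop at a local peak."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        better = [j for j in neighbours if values[j] > values[i]]
        if not better:
            return i  # local maximum, not necessarily global
        i = max(better, key=lambda j: values[j])

landscape = [1, 3, 2, 1, 5, 9, 4]
print(exact_max(landscape))      # 5 (global peak, value 9)
print(hill_climb(landscape, 0))  # 1 (stuck on local peak, value 3)
```

The heuristic's answer depends on where it starts looking, exactly as the quote says: it tells you how to look, not what you will find.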
The one you are most proficient with! There are a few exceptions, though: for example, if you want to artificially evolve a program, your life will be easier if you write it in an interpreted language.
Here is a more precise answer from comp.ai, written in 2004, which is worth reading thoroughly once and for all:
There is no authoritative answer for this question, as it really depends on what languages you like programming in. AI programs have been written in just about every language ever created. The most common seem to be Lisp, Prolog, C/C++, recently Java, and even more recently, Python.
LISP: For many years, AI was done as research in universities and laboratories, thus fast prototyping was favored over fast execution. This is one reason why AI has favored high-level languages such as Lisp. This tradition means that current AI Lisp programmers can draw on many resources from the community. Features of the language that are good for AI programming include: garbage collection, dynamic typing, functions as data, uniform syntax, interactive environment, and extensibility. Read Paul Graham's essay, "Beating the Averages" for a discussion of some serious advantages: http://www.paulgraham.com/avg.html
PROLOG: This language wins the 'cool idea' competition. It wasn't until the 70s that people began to realize that a set of logical statements plus a general theorem prover could make up a program. Prolog combines the high-level and traditional advantages of Lisp with a built-in unifier, which is particularly useful in AI. Prolog seems to be good for problems in which logic is intimately involved, or whose solutions have a succinct logical characterization. Its major drawback (IMHO) is that it's hard to learn.
C/C++: The speed demon of the bunch, C/C++ is mostly used when the program is simple, and execution speed is the most important. Statistical AI techniques such as neural networks are common examples of this. Backpropagation is only a couple of pages of C/C++ code, and needs every ounce of speed that the programmer can muster.
Java: The newcomer, Java uses several ideas from Lisp, most notably garbage collection. Its portability makes it desirable for just about any application, and it has a decent set of built-in types. Java is still not as high-level as Lisp or Prolog, and not as fast as C, making it best when portability is paramount.
Python: This language does not have widespread acceptance yet, but several people have suggested to me that it might end up passing Java soon. Apparently the new edition of the Russell-Norvig textbook will include Python source as well as Lisp. According to Peter Norvig, "Python can be seen as either a practical (better libraries) version of Scheme, or as a cleaned-up (no $@&%) version of Perl." For more information, especially on how Python compares to Lisp, go to http://norvig.com/python-lisp.html
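The C/C++ entry above notes that backpropagation is only a couple of pages of code. As a hedged illustration (in Python rather than C/C++, with a network shape and variable names of my own choosing), here is the whole algorithm for a tiny 2-2-1 network trained on XOR:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights: two hidden neurons (2 inputs + bias each), one output neuron (2 inputs + bias).
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                          # output-layer delta
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden deltas
        for j in range(2):                                    # gradient-descent updates
            w_h[j][0] -= lr * d_h[j] * x[0]
            w_h[j][1] -= lr * d_h[j] * x[1]
            w_h[j][2] -= lr * d_h[j]
        for j in range(2):
            w_o[j] -= lr * d_o * h[j]
        w_o[2] -= lr * d_o

print(loss_before, total_loss())  # the squared-error loss should drop substantially
```

In C/C++ this is the same handful of loops, which is why the comp.ai answer describes backpropagation as needing "every ounce of speed": the algorithm itself is tiny; all the cost is in repeating these updates millions of times.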
jmc: No. These theories are relevant but don't address the fundamental problems of AI.
In the 1930s mathematical logicians, especially Kurt Gödel and Alan Turing, established that there did not exist algorithms that were guaranteed to solve all problems in certain important mathematical domains. Whether a sentence of first order logic is a theorem is one example, and whether a polynomial equation in several variables has integer solutions is another. Humans solve problems in these domains all the time, and this has been offered as an argument (usually with some decorations) that computers are intrinsically incapable of doing what people do. Roger Penrose claims this. However, people can't guarantee to solve arbitrary problems in these domains either. See my Review of The Emperor's New Mind by Roger Penrose. More essays and reviews defending AI research are in Defending AI Research: A Collection of Essays and Reviews (John McCarthy, 1996).
In the 1960s computer scientists, especially Steve Cook and Richard Karp, developed the theory of NP-complete problem domains. Problems in these domains are solvable, but seem to take time exponential in the size of the problem. Which sentences of propositional calculus are satisfiable is a basic example of an NP-complete problem domain. Humans often solve problems in NP-complete domains in times much shorter than is guaranteed by the general algorithms, but can't solve them quickly in general.
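To make the "solvable but exponential" point concrete, here is a minimal brute-force satisfiability check in Python (the formula and encoding are my own illustration): it always terminates, but it tries up to 2**n assignments for n variables, which becomes infeasible very quickly.

```python
from itertools import product

# Formula in CNF: (x1 or not x2) and (x2 or x3) and (not x1 or not x3).
# Each literal is (variable_index, is_positive).
clauses = [[(0, True), (1, False)],
           [(1, True), (2, True)],
           [(0, False), (2, False)]]
n_vars = 3

def satisfiable(clauses, n_vars):
    """Exhaustively test all 2**n_vars truth assignments."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[v] == pos for v, pos in clause)
               for clause in clauses):
            return True
    return False

print(satisfiable(clauses, n_vars))  # True, e.g. x1=True, x2=True, x3=False
```

No known algorithm does fundamentally better than exponential time in the worst case, yet humans (and modern SAT solvers) often dispatch particular instances quickly, which is exactly the gap the paragraph above describes.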
What is important for AI is to have algorithms as capable as people at solving problems. The identification of subdomains for which good algorithms exist is important, but a lot of AI problem solvers are not associated with readily identified subdomains.
The theory of the difficulty of general classes of problems is called computational complexity. So far this theory hasn't interacted with AI as much as might have been hoped. Success in problem solving by humans and by AI programs seems to rely on properties of problems and problem-solving methods that neither the complexity researchers nor the AI community have been able to identify precisely.
Algorithmic complexity theory as developed by Solomonoff, Kolmogorov and Chaitin (independently of one another) is also relevant. It defines the complexity of a symbolic object as the length of the shortest program that will generate it. Proving that a candidate program is the shortest or close to the shortest is an unsolvable problem, but representing objects by short programs that generate them should sometimes be illuminating even when you can't prove that the program is the shortest.
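A small illustration of the idea (using compression as a rough, computable stand-in for the uncomputable "shortest program" measure): a highly regular string is generated by a tiny program and compresses well, while a noisy string of the same length has no known short description.

```python
import random
import zlib

random.seed(0)
regular = "ab" * 1000                                              # generated by a tiny program
noisy = ''.join(random.choice('abcdefgh') for _ in range(2000))    # no short description known

c_regular = len(zlib.compress(regular.encode()))
c_noisy = len(zlib.compress(noisy.encode()))
print(c_regular, c_noisy)  # the regular string compresses far better
```

Compressed length only upper-bounds algorithmic complexity, and as the paragraph above notes, proving a description minimal is unsolvable; still, the gap between the two numbers makes the definition tangible.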
jmc: Alexander Kronrod, a Russian AI researcher, said ``Chess is the Drosophila of AI.'' He was making an analogy with geneticists' use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.
Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.
jmc: The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation. Sooner or later, AI research will overcome this scandalous weakness.
Let me quote Seth Godin, who expresses here an idea that is commonly thought, then forgotten:
I'm often stunned by the lack of questions that adults are prepared to ask. When you see kids go on a field trip, the questions pour out of them. Never ending, interesting, deep... even risky. And then the resistance kicks in and we apparently lose the ability.
Is the weather the only thing you can think to ask about? A great question is one you can ask yourself, one that disturbs your status quo and scares you a little bit.
The A part is easy. We're good at answers. Q, not so much.
"Intelligence is ten million rules." Douglas Lenat (US AI researcher)
"Chess is the Drosophila of AI." Alexander Kronrod (Russian AI researcher)
"A computer once beat me at chess, but it was no match for me at kick boxing." Emo Philips (American comedian)
"If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is." John Louis von Neumann