Guest post by Ariel Procaccia:
The recent AI Magazine special issue on AGT is a good excuse to discuss an interesting question: Can AGT/E enable AI in some fundamental way? Or, are we (AI researchers working in AGT/E) betraying the legacy of our founding fathers – Alan Turing, John McCarthy, Marvin Minsky, and, well, Isaac Asimov – by not focusing on our true purpose – building intelligent robots, bringing about the singularity, or at least making better vacuum cleaners? These questions are all the more challenging because for now I want to avoid a related thorny issue, that of defining AI. I argue below that the answer to the first question is yes.
Here is the argument in a nutshell (it seems that a similar argument was independently proposed by the wise Aviv Zohar). One of the classic goals of AI is to create a software agent that seems intelligent to a human observing it (e.g., can pass the Turing test). However, in the last two decades a significant portion of AI research has moved from focusing on single agents to studying multiagent systems. Now, game theory attempts to distill the principles of rational interaction. But rationality is just another word for artificial intelligence: it is not how humans actually behave, but rather how we perceive intelligent behavior. Therefore, a multiagent system in which interactions are governed by game theory (or in which decision making is informed by social choice theory, for that matter) would be perceived as intelligent by a human observing it. In other words, AGT/E enables artificial intelligence on the system-wide level rather than on the individual level.
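As a toy illustration of what "interactions governed by game theory" might look like in practice, here is a minimal sketch (the game, the payoffs, and the update rule below are my own illustrative assumptions, not something from the post): two agents repeatedly best-respond to each other in a small coordination game, and the system quickly settles into a Nash equilibrium, the kind of mutually consistent, purposeful-looking behavior an outside observer would read as rational.

# Toy sketch (illustrative payoffs, not from the post): two agents take turns
# best-responding to each other in a 2x2 coordination game until neither
# wants to deviate, i.e., until they reach a Nash equilibrium.

# payoffs[i][a][b] = payoff to player i when player 0 plays a and player 1 plays b.
payoffs = [
    [[2, 0], [0, 1]],  # player 0
    [[2, 0], [0, 1]],  # player 1 (symmetric coordination game)
]

def best_response(player, other_action):
    """The action maximizing this player's payoff against the other's action."""
    if player == 0:
        return max((0, 1), key=lambda a: payoffs[0][a][other_action])
    return max((0, 1), key=lambda b: payoffs[1][other_action][b])

actions = [0, 1]  # start from a miscoordinated action profile
for step in range(20):
    updated = False
    for player in (0, 1):
        br = best_response(player, actions[1 - player])
        if br != actions[player]:
            actions[player] = br
            updated = True
    if not updated:  # no player can gain by deviating: a Nash equilibrium
        break

print("Equilibrium profile:", actions)

From this starting point the dynamics settle on the (1, 1) equilibrium after a single round of updates; the only point of the sketch is that simple game-theoretic update rules already produce system-wide behavior that looks deliberate to an observer.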
Now that we are convinced that AGT/E plays a fundamental role in AI (but Turing must be turning in his grave, or is Turning turing?), it remains to determine how AGT/E research in AI is distinct from AGT/E research in, say, theory of CS. Looking at the special issue’s table of contents, one notices that some major theory-oriented AGT/E topics are conspicuously missing, e.g., price of anarchy (which is admittedly on the decline even in theory) and algorithmic mechanism design in the Nisan-Ronen sense – truthful approximations for computationally intractable problems. This question was eloquently addressed by Elkind and Leyton-Brown in their editorial for the special issue. They pointed out two (related) distinctions. First, AI researchers are interested in reasoning about practical multiagent systems, and thus tend to consider more realistic models, employ an empirical approach where analysis fails, and test their methods through competitions. Second, many AI researchers do not, in general, view computational hardness as an insurmountable obstacle, and thus employ heuristics where appropriate. I would like to raise a third point. Modern AI encompasses a world of ingenious ideas that, we are discovering, have a considerable conceptual interface with AGT/E. Therefore, some AGT/E work on the AI side emphasizes the connections with machine learning (the fascinating sociology of machine learning and AI is beyond the scope of this post), knowledge representation and reasoning, decision making under uncertainty, planning, and other well-studied areas of AI.
Wait, but what is AI? Unfortunately, no one can be told what AI is. You have to see it for yourself.
The debate between the complexity of achieving certain solutions versus the beauty of the solution itself is certainly a healthy one. But, as you have pointed out, there are AI researchers who do not see computation as an insurmountable obstacle, and instead propose a solution without worrying about how it will be computed. Can you expand on that, perhaps in a later post, and on how this philosophy is positioning itself within the AI community? I would especially be interested in hearing about work that addresses the ‘sociology of machine learning’ and the reverse question, ‘learning in a social system’.
When discussing the Turing test, most people focus on the distinction/relation between human and machine as viewed through an interface. There is a deeper point that is often not discussed, which is perhaps much more relevant to what makes something intelligent – which Turing did discuss in his writings.
A truly intelligent machine would continually improve its performance and could be trained in much the same way as people. So, the way Turing thought one should build AI was to build a machine that is a very good learner and then teach it, much as we do with kids. Moreover, if its abilities are suitably general, one could unleash it on the real world and let it keep learning on its own. So many conceptual issues arise: not just how we should play a given game, no matter how complex, but also what game I find myself in and how I should make sense of it, if I am *not* told anything about it up front. Clearly this kind of learning is hard, so it hasn’t been studied nearly as much as it should be. But, I claim, it is precisely these kinds of issues that uniquely define AI w.r.t. other related areas.
Consider this modified version of the Turing test:
We let you take an agent home, for a period of a month, say, and interact with it continuously. Imagine all the things we can get a young person to do in one month, maybe spending a few hours each day with them. Now imagine the state of the art in what we can get a machine to achieve in the same month. Why is there a difference? This is not the same as asking airy-fairy questions about the singularity. I am simply asking about the rate of change of skill. What scientific principles will allow us to begin to fill the gap? How can game theory play a role here?
Game theory can play a role as the underlying basis for the AI. We can program it to construct a game (or event) from the information available and then play it (interact within it). Learning does not seem to be the issue, as this rests on the AI’s ability to add a new data set for a previously undiscovered event. Let’s say the program does not include the word “A”; then the AI should first figure out that it does not know “A”, then add it to a data set, and then, if possible, link it to other related data, such as other letters, symbols, etc. The issue I see is getting the AI to “create”, which can be done through its ability to link these events. Now, game theory is the science of making decisions that maximize a utility, which may be fundamental to the AI’s ability to link the data sets and hence create.
Despite the ambiguity in definitions, and perhaps as a result of optimism, I would answer in the affirmative. Game-theoretic reasoning can be found throughout nature, and AGT distills this type of intelligence through a mathematical filter. Moreover, reasoning of a game-theoretic nature seems more relevant to intelligence, at least the human kind, than computational aptitude, which in a philosophical sense can be considered an intelligence bounded by logical axioms.
I completely agree that the subject matter of game theory is not the actual behavior of people but our perception of some aspects of that behavior.
However, I don’t agree that rationality and intelligence are the same thing. My intuition about rationality captures the behavior of animals, body cells, and businesses, which I am reluctant to call ‘intelligent’ (though admittedly, after reading this post and the comments, I think maybe AI guys would view these creatures as intelligent). Similarly, Turing’s Test captures an aspect of intelligence which I wouldn’t necessarily identify with rationality.
Of course, I don’t have good definitions for either rationality or intelligence. In fact, for me the whole purpose of game theory is to formalize my intuition about rationality. But my point is that my intuitions of what it means to be rational and what it means to be intelligent are different.
I partially agree: the sentence “rationality is just another word for artificial intelligence” is too strong. I do think, though, that rationality captures one aspect of AI, while the Turing test captures a different aspect. That is, the fact that the Turing test does not necessarily test rationality does not imply that rationality is not an aspect of AI.
By the way, I am reluctant to call animals and body cells “rational”, though I see that game theorists may view them as rational 🙂
[…] for the criticism about the actual contribution of AGT to AI (a glimpse into this debate is in a previous post by Ariel Procaccia). In his ACM research award talk, Joe Halpern considered several solution […]
[…] because I sometimes think of rationality as a possible key to artificial intelligence. Indeed, one can argue that rationality is perceived as intelligence, and therefore a multiagent system that is based on […]
[…] Of course, there is no reason to stop restricting our mode of communication. A natural continuation is to switch to the domain of game theory. The judge sets a two-player game for the human and computer to play. To decide which player is human, the judge only has access to the history of actions the players chose. This is the economic Turing test suggested by Boris Bukh and shared by Ariel Procaccia. The test can be viewed as part of the program of linking intelligence and rationality. […]
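To make the protocol in that last excerpt concrete, here is a minimal sketch of the economic Turing test setup (the two agents and the judging rule are purely illustrative assumptions on my part, not Bukh’s actual proposal): the judge sees only the two players’ action histories from a repeated game and must guess which history the human produced.

import random

def machine_history(rounds):
    """A stylized 'rational' agent: always plays its equilibrium action, 0."""
    return [0] * rounds

def human_history(rounds):
    """A stylized human: mostly plays 0, but occasionally experiments."""
    return [0 if random.random() < 0.85 else 1 for _ in range(rounds)]

def judge(history_a, history_b):
    """The judge sees only the two action histories and guesses that the
    noisier one (more deviations from the equilibrium action) is the human."""
    return "A" if sum(history_a) > sum(history_b) else "B"

random.seed(0)
rounds = 50
guess = judge(machine_history(rounds), human_history(rounds))
print("Judge's guess for the human player:", guess)

The interesting question, of course, is whether any statistic computed from the histories alone can reliably separate the two, since a sufficiently good machine player could deliberately inject human-like deviations.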