Over a recent lunch, Boris Bukh suggested the following variant of the Turing test (call it the economic Turing test): a human and a computer play a game (in the game-theoretic sense). A judge who is observing only their moves must decide with confidence who is the human and who is the computer. The premise is that the human would play irrationally (he's just a random person off the street), and the computer's goal is to play irrationally as well, to avoid detection.
An interesting aspect of the economic Turing test is that it’s not clear whether it’s harder to be the judge or the computer (programmer). On the one hand, how do you program a computer to act irrationally? On the other hand, if a judge has a set of rules that identify humanly irrational behavior (e.g., he read the papers of Kahneman and Tversky), couldn’t the programmer implement the same set of rules?
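To make the symmetry concrete, here is a toy sketch in Python. Everything in it is illustrative: the game (repeated binary choices, as in matching pennies, where the rational strategy is uniform randomization), the judge's single rule (humans trying to act randomly are known to alternate between options more often than chance), and the threshold are all assumptions, not a real test. The point is only that once the judge's rule is written down, the programmer can run it in reverse.

```python
import random


def alternation_rate(moves):
    """Fraction of consecutive moves that differ. A player randomizing
    uniformly averages about 0.5; humans tend to over-alternate."""
    switches = sum(a != b for a, b in zip(moves, moves[1:]))
    return switches / (len(moves) - 1)


def judge(moves, threshold=0.6):
    """Hypothetical judge rule: label the player 'human' if their
    moves alternate more often than a rational randomizer's would."""
    return "human" if alternation_rate(moves) > threshold else "computer"


def biased_player(n, p_switch=0.7):
    """The programmer implementing the judge's own rule: deliberately
    over-alternate between moves 0 and 1 to mimic a human."""
    moves = [random.choice((0, 1))]
    for _ in range(n - 1):
        if random.random() < p_switch:
            moves.append(1 - moves[-1])  # switch, with human-like frequency
        else:
            moves.append(moves[-1])      # repeat the last move
    return moves
```

Under these (made-up) rules, a perfectly rational uniform randomizer gets flagged as the computer, while the deliberately biased program passes as human; which is exactly the worry raised above about whether the judge can stay ahead of the programmer.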
I found this idea especially appealing because I sometimes think of rationality as a possible key to artificial intelligence. Indeed, one can argue that rationality is perceived as (a sufficient condition for) intelligence, and therefore a multiagent system that is based on game-theoretic principles will be perceived as intelligent, so in that sense game theory can enable classic artificial intelligence (albeit on the multiagent level). The economic Turing test turns this argument on its head: to be indistinguishable from a human the computer must strive to be irrational.
The field of AI has so far failed to deliver human-level intelligence; is this the dawn of the age of artificial stupidity?