Over a recent lunch, Boris Bukh suggested the following variant of the Turing test: a human and a computer play a game (in the game-theoretic sense). A judge who is observing only their moves must decide with confidence who is the human and who is the computer. The premise is that the human would play irrationally (he’s just a random person off the street), and the computer’s goal is to also play irrationally to avoid detection.
An interesting aspect of the economic Turing test is that it’s not clear whether it’s harder to be the judge or the computer (programmer). On the one hand, how do you program a computer to act irrationally? On the other hand, if a judge has a set of rules that identify humanly irrational behavior (e.g., he read the papers of Kahneman and Tversky), couldn’t the programmer implement the same set of rules?
I found this idea especially appealing because I sometimes think of rationality as a possible key to artificial intelligence. Indeed, one can argue that rationality is perceived as (a sufficient condition for) intelligence, and therefore a multiagent system that is based on game-theoretic principles will be perceived as intelligent, so in that sense game theory can enable classic artificial intelligence (albeit on the multiagent level). The economic Turing test turns this argument on its head: to be indistinguishable from a human the computer must strive to be irrational.
The field of AI has so far failed to deliver human-level intelligence; is this the dawn of the age of artificial stupidity?
Well, according to an article in The Atlantic a few years ago, advancement in regular Turing tests is also achieved by being, in some sense, less smart and rational.
(A good strategy for the computer to avoid detection is, according to the article, apparently to swear a lot as it chats…)
http://www.theatlantic.com/magazine/archive/2011/03/mind-vs-machine/308386/
Wow, that’s an excellent article. As I see it, stylistic tactics like swearing will not work in the economic Turing test, because the judge can only observe moves. One thing I learned from the article, though, is that the Loebner competition has a “most human human” prize. This got me thinking about an inverse economic Turing test, where the computers play rationally and the human’s goal is to win the “most computer human” prize 🙂
By the way, on Google+ Kobi Gal shared a link to another relevant article in The Atlantic: http://www.theatlantic.com/technology/archive/2012/08/a-conjecture-for-a-better-turing-test/261722/
Unfortunately, this sort of Turing test is impossible, because it rests on a fundamentally faulty assumption: that “game behaviour is a human universal”. However, human behaviour and types of irrationality are extremely culture-sensitive; see the work of Joe Henrich of UBC or this popular Pacific Standard article. At best, you would be testing whether the computer can mimic the cultural norms of the judge. You could have humans classified as computers by judges from different backgrounds. Of course, this could also be used as an argument that, in general, the idea of “human intelligence” is a cultural one, and thus we should really be talking about “western liberal intelligence” or whatever cultural group we happen to belong to.
That’s a good point (I happened to read the interesting article you linked to), but as you were perhaps hinting it is also an argument against the original Turing test. In fact, there the cultural requirement is even more explicit: everyone needs to speak the same language!
I’m enjoying this variant of the Turing test. I will try to address it in a series of posts on TheEGG: an extended response to this comment thread. Unfortunately, it is taking a bit longer than expected to write, so the first entry is mostly a rehashing of your definition, connections to the vague term ‘intelligence’, and a series of questions.
I’m wondering if the computer would have to behave as if it’s conscious, in addition to being irrational… Though I’m not sure how the former could be tested, either 🙂
The problem with the term “irrational” is that it is based on negation, and thus does not carry much content of its own.
A different distinction between behavior types is “deductive” vs. “inductive”.
A great presentation by Brian Arthur is linked below, though the distinction itself is quite a classic one.
While the “rational” player (presumably the computer) derives the next action from the intended solution (e.g. some Bayes-Nash equilibrium it has in mind), people typically apply inductive reasoning by following simple policies that require little information and effort (even when information and time are available).
Of course it is quite easy to follow *some* inductive logic—much easier than following deductive logic and solving complex games. In fact, in sufficiently complicated games (like Chess), computers must resort to inductive methods as they cannot really solve the game.
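The contrast between the two styles of play can be made concrete with a toy sketch (hypothetical, not from the discussion above): in repeated matching pennies, a “deductive” player computes and plays the game’s mixed Nash equilibrium, while an “inductive” player follows a simple low-effort policy such as win-stay/lose-shift.

```python
import random

def deductive_move(history):
    """The 'deductive' player: derives its action from the intended
    solution concept. In matching pennies the mixed Nash equilibrium
    is to play heads or tails with probability 1/2, ignoring history."""
    return random.choice(["H", "T"])

def inductive_move(history):
    """The 'inductive' player: a simple policy requiring little
    information or effort. Win-stay/lose-shift repeats the last move
    after a win and switches after a loss.
    history is a list of (move, won) pairs."""
    if not history:
        return "H"  # arbitrary opening move
    last_move, won = history[-1]
    if won:
        return last_move
    return "T" if last_move == "H" else "H"
```

Note how the judge’s task falls out of this sketch: the equilibrium player’s moves are statistically patternless, while the win-stay/lose-shift player is fully predictable once its policy is guessed.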
The test then boils down to distinguishing the inductive moves used by people from those used by a computer. My impression is that the first type indeed relates to the cognitive biases isolated by Kahneman & Tversky, Ariely, H. Simon, Camerer and others. Computers can try to imitate these patterns, but this is not a “finite set of rules”, and clearly no one can claim that Kahneman has successfully modeled the full range of cognitive biases.
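As one illustration of what “imitating a pattern” might look like, here is a minimal sketch (the function and its parameter are my own, purely illustrative) of a computer mimicking a single well-documented bias, the gambler’s fallacy: when people produce “random” coin-flip sequences, they alternate more often than a fair coin would.

```python
import random

def human_like_flips(n, switch_prob=0.7):
    """Generate an n-flip H/T sequence that over-alternates, mimicking
    the gambler's fallacy. A fair coin switches with probability 0.5;
    switch_prob > 0.5 gives the human-typical avoidance of long runs."""
    seq = [random.choice("HT")]
    for _ in range(n - 1):
        prev = seq[-1]
        if random.random() < switch_prob:
            seq.append("T" if prev == "H" else "H")  # switch (too often)
        else:
            seq.append(prev)  # stay
    return "".join(seq)
```

The point of the comment stands, though: this hard-codes one isolated bias, and no finite list of such rules is known to cover the full range of human behaviour.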
Both the programmer and the referee are therefore required to learn more and to delve deeper into the intricacies of human behavior if they wish to succeed, which is the point of this test if I get it right.
http://www.jstor.org/stable/pdfplus/2117868.pdf?acceptTC=true