Crowd sourcing recently got a visibility boost with DARPA’s Red Balloon contest, won by the MIT team. At the same time, Amazon’s well-established (since 2005!) crowd-sourcing platform, the Mechanical Turk, is gaining attention as a platform for conducting behavioral experiments, especially in behavioral economics and game theory.
Named after the 18th century human-powered chess-playing “machine” called “the Turk”, this platform allows “requesters” to submit “Human Intelligence Tasks” (HITs) to a multitude of human “workers” who solve them for money. A sample of recent typical tasks includes tagging pictures (for 3 cents), writing (and posting on the web) a short essay with a link (for 20 cents), or correcting spelling (for 2 cents, in Japanese). This allows brief and cheap employment of hundreds or thousands of people to work on simple, low-level knowledge tasks doable over the Internet. You may demand various “qualification exams” from these workers, and design such quals of your own. Obviously workers are in it for the money, but apparently not just for that.
The Mechanical Turk has recently been used to conduct behavioral experiments. Gabriele Paolacci is methodically replicating experiments of Kahneman and Tversky and reporting on them in his blog. Panos Ipeirotis reports on his blog both studies of various aspects of the Mechanical Turk itself and results of behavioral game-theory experiments run on it. I’ve seen others report on such experiments too. Markus Jacobsson from PARC gives general tips for conducting such human experiments using the Mechanical Turk.
Turk-based behavioral experimentation has the immense appeal of being cheap, fast, and easy to administer. There are obvious pitfalls, such as getting a good grasp on the population, but any experimental setup has its own. Such a platform may be especially appropriate for Internet-related behavioral experiments, such as figuring out bidding behavior in online auctions, or how best to frame a commercial situation on a web page. Could this be a tool for the not-quite-yet-existent experimental AGT?
One assumption that game-theoretic experiments sometimes make is that actual cost and opportunity cost (or actual profit and opportunity profit) are the same thing.
In reality, most people distinguish between actual cost and opportunity cost. For example, suppose that in an experiment you allocate me $M and ask me to achieve a task while minimizing my cost; completing the task takes $x away from me, leaving me a profit of $M-$x. You would have to choose $M > $x, otherwise I may not participate in your experiment.
The difference between opportunity cost and real cost is whether the initial $M was mine or yours. If it was mine, the cost is real: I lose $x of my own money. If you allocated me the $M, then the $x is an opportunity cost, experienced only as a decrease in profit.
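The arithmetic of the two framings can be made concrete with a minimal sketch (the function names and the $M = 5, $x = 2 figures are illustrative, not from any actual experiment): the monetary outcome is identical either way, so any behavioral difference must come from the framing itself.

```python
# Two framings of the same payoff, as discussed above.
# M: the initial amount, x: the cost of completing the task.

def real_cost_outcome(M: float, x: float) -> float:
    """The $M is the subject's own money: they end with M - x,
    so the $x is a real, out-of-pocket loss."""
    return M - x

def opportunity_cost_outcome(M: float, x: float) -> float:
    """The $M was allocated by the experimenter: the subject's
    profit is M - x, and the $x is only a decrease in profit."""
    return M - x

M, x = 5.0, 2.0  # illustrative values; the experimenter must pick M > x
print(real_cost_outcome(M, x))         # final wealth change: 3.0
print(opportunity_cost_outcome(M, x))  # final profit:        3.0
```

The two functions are deliberately identical: under the assumption questioned above, a subject should behave the same in both conditions, and any measured difference reflects the real-cost vs. opportunity-cost distinction rather than the payoffs.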
One may need some handle on this distinction when running experiments on an unknown population. Some experiments, of course, could run better with a mass population, but others would suffer.