Ido Erev, Eyal Ert, and Ori Plonsky are organizing an interesting competition under the title: From Anomalies to Forecasts: Choice Prediction Competition for Decisions under Risk and Ambiguity (CPC2015). The idea is to quantitatively predict the magnitudes of multiple known human “biases” and “non-rational” behaviors. The first prize is an invitation to co-author a paper about the competition, written by the organizers.
Experimental studies of human choice behavior have documented clear violations of rational economic theory and triggered the development of behavioral economics. Yet the impact of these careful studies on applied economic analyses, and on policy decisions, is not large. One justification for the tendency to ignore the experimental evidence is the assertion that the behavioral literature highlights contradictory deviations from maximization, and it is not easy to predict which deviation is likely to be more important in a specific situation.
To address this problem Kahneman and Tversky (1979) proposed a model (Prospect theory) that captures the joint effect of four of the most important deviations from maximization: the certainty effect (Allais paradox, Allais, 1953), the reflection effect, overweighting of low probability extreme events, and loss aversion (see top four rows in Table 1). The current paper extends this and similar efforts (see e.g., Thaler & Johnson, 1990; Brandstätter, Gigerenzer, & Hertwig, 2006; Birnbaum, 2008; Wakker, 2010; Erev et al., 2010) by facilitating the derivation and comparison of models that capture the joint impact of the four “prospect theory effects” and ten additional phenomena (see Table 1).
These choice phenomena were replicated under one “standard” setting (Hertwig & Ortmann, 2001): choice with real stakes in a space of experimental tasks wide enough to replicate all the phenomena illustrated in Table 1. The results suggest that all 14 phenomena emerge in our setting, yet their magnitudes tend to be smaller than in the original demonstrations.
[[ Table 1 omitted here and appears in the source page]]
The current choice prediction competition focuses on developing models that can capture all of these phenomena but also predict behavior in other choice problems. To calibrate the models we ran an “estimation set” study that included 60 randomly selected choice problems.
The participants in each competition will be allowed to study the results of the estimation set. Their goal will be to develop a model that predicts the results of the competition set. To qualify for the competition, the model will have to capture all 14 choice phenomena of Table 1. The model should be implemented in a computer program that reads the parameters of the problems as input and outputs the predicted proportion of choices of Option B. Thus, we use the generalization criterion methodology (see Busemeyer & Wang, 2000).
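To make the required input/output contract concrete, here is a minimal baseline sketch, not an entry to the competition and not the organizers' format: it assumes a hypothetical simplified parameterization in which each option is a two-outcome gamble (a high payoff with some probability, a low payoff otherwise), and predicts the Option B choice rate with a logistic rule over the expected-value difference. The function names and the `temperature` parameter are illustrative assumptions; a real submission would have to handle the full problem space (ambiguity, feedback, etc.) and reproduce all 14 phenomena.

```python
import math

def expected_value(high, p_high, low):
    """Expected value of a two-outcome gamble: `high` with probability
    `p_high`, otherwise `low`. (Hypothetical simplified problem format.)"""
    return p_high * high + (1 - p_high) * low

def predict_b_rate(ha, p_ha, la, hb, p_hb, lb, temperature=5.0):
    """Baseline prediction of the proportion of Option B choices:
    a logistic (softmax) choice rule over the expected-value difference.
    `temperature` controls choice sensitivity and is an assumed free
    parameter, not part of the competition specification."""
    ev_a = expected_value(ha, p_ha, la)
    ev_b = expected_value(hb, p_hb, lb)
    return 1.0 / (1.0 + math.exp(-(ev_b - ev_a) / temperature))
```

For example, a safe option A paying 0 for sure versus an option B paying 10 with probability 0.5 (else 0) gives an expected-value difference of 5, so this baseline predicts a B-choice rate of about 0.73; equal expected values yield exactly 0.5. Such a maximization baseline captures none of the Table 1 deviations, which is precisely why richer models are needed.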
The deadline for registration is April 20th. Submission deadline is May 17th.