A “textbook system” based on social choice theory would have a centralized mechanism interacting with multiple software agents, each of them representing a user. The centralized mechanism would be designed to optimize some global goal (such as revenue or social welfare), and each software agent would elicit the preferences of its user and then optimize accordingly.
Among other irritating findings, behavioral economics casts doubt on this pretty picture, questioning the very notion that users have preferences; that is, preferences that are independent of the elicitation method. In the world of computation, we have a common example of this “framing” difficulty: the default. Users rarely change it, but we can’t say that they actually prefer the default to the other alternative, since if we change the default they stick with the new one. Judicious choice of defaults can obviously be used for the purposes of the centralized mechanism (default browser = Internet Explorer); but what should we do if we really just want to make the user happy? What does this even mean?
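To make the picture concrete, here is a minimal sketch of such a textbook system (all names and numbers are hypothetical, and preference elicitation is reduced to a dictionary lookup, which is exactly the step questioned below):

```python
# A minimal sketch of the "textbook system": each software agent reports its
# user's (elicited) utility for every alternative, and the centralized
# mechanism picks the alternative maximizing social welfare, i.e., the sum
# of reported utilities. All names and numbers here are hypothetical.

class Agent:
    def __init__(self, user_utilities):
        # user_utilities: a dict mapping alternatives to this user's utility,
        # assumed to have been elicited somehow -- the hard part in practice.
        self.user_utilities = user_utilities

    def report(self, alternative):
        return self.user_utilities.get(alternative, 0.0)

def centralized_mechanism(alternatives, agents):
    # Global goal here: social welfare (sum of reported utilities).
    return max(alternatives,
               key=lambda a: sum(agent.report(a) for agent in agents))

# Toy usage with three users and two alternatives.
agents = [Agent({"A": 3.0, "B": 1.0}),
          Agent({"A": 0.0, "B": 2.0}),
          Agent({"A": 1.0, "B": 1.5})]
print(centralized_mechanism(["A", "B"], agents))  # -> "B" (welfare 4.5 vs 4.0)
```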
The following gripping talk by Dan Ariely demonstrates such issues.
I did not see any example in the talk where people chose the default even though they would have been happier with the other choice.
All his examples fell into the “do not care” category. In all his examples there are two options that are essentially tied, and his third option, or the default, helped people break the tie.
For example, in the Rome-versus-Paris case, did he find a person who wanted to visit Paris but chose Rome because of the paid-coffee option?
Helping people break ties seems wonderful, and it makes the case that the designers of a form should help people choose; not choosing could be even worse. If A and B are incomparable, i.e., people would be almost equally happy with either, so that choosing between them requires a lot of deliberation, then it seems that making A the default yields higher social surplus, in the sense that it saves the cognitive effort of making the choice.
Indeed, if you look at people’s use of technology, defaults have helped the progress of modern technology. If nothing is defaulted, then when a choice arises between A and B, far more people would choose neither. And when the underlying technology has a network effect (that is, the more people who choose it, the happier the other choosers are), setting a default increases the social surplus.
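Here is a minimal sketch of the surplus argument in the last two paragraphs, with entirely made-up numbers (the function and its parameters are hypothetical): a default saves each person the cognitive cost of deciding between near-tied options, and a network effect rewards coordinating everyone on one option.

```python
# Toy model (hypothetical numbers): n people face near-tied options A and B.
# With a default, everyone takes A at no cognitive cost; without one, each
# person pays a cognitive cost c to decide, and the near-tied population
# splits between the two options. A network bonus rewards choices that many
# others also made.

def surplus(u_a, u_b, n, c, network_bonus, with_default):
    if with_default:
        # Everyone coordinates on A for free: base utility plus network term.
        return n * (u_a + network_bonus * n)
    # Without a default, suppose the near-tied population splits evenly.
    half = n // 2
    base = half * u_a + half * u_b
    network = 2 * network_bonus * half ** 2  # two groups of size n/2
    return base + network - n * c            # everyone paid the decision cost

print(surplus(u_a=1.0, u_b=1.05, n=100, c=0.2,
              network_bonus=0.01, with_default=True))   # 200.0
print(surplus(u_a=1.0, u_b=1.05, n=100, c=0.2,
              network_bonus=0.01, with_default=False))  # 132.5
```

In this toy run the default comes out ahead, driven both by the saved decision costs and by the network term.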
So I would need to see an experiment where people obviously preferred B over A, but his other tactics made them choose A. That would be irrationality. Otherwise, it is just labeling people’s dilemma of choice as irrationality.
How about his example with physicians and patients? It seems to me that an unreasonable decision is made there in order to avoid cognitive effort. It is true that the default signals a collective decision that the patient’s hip should be replaced. However, this should not be relevant once clear evidence is presented that the collective decision can be doubted. Further, the evidence shows that the doubtful collective decision can be tested at low cost, and that the outcome of the test can lead to a lower-cost solution for all parties involved (the patient takes ibuprofen instead of getting his hip replaced). It seems that the only way in which the default is preferable is in minimizing the cognitive effort of the decision maker.
Alex, note that it was the patient’s hip that was replaced, while the decisions were made by the physician.
It is not clear whether the physician would be happier if the hip were replaced only after another attempt. If there is only one medication remaining, then the physician might go with it for the sake of comprehensiveness.
The experiment also did not provide full information: how many different medications were tried, how many remained, etc. How relevant were the untried medications? If they were very relevant and obvious, then maybe there were other reasons why the physician skipped them. It looked like a cooked-up situation.
I think Dan Ariely himself might be able to confirm that people behave differently in cooked-up experimental situations versus real ones.
Here is another experiment that I think one could try. Take two different but roughly comparable sets of furniture, and tell a person that he/she may choose one of the sets for free.
Now take two price tags, say $1000 and $1100, attach them to the two sets at random, and ask people to make a choice.
It is very likely that more people would now choose the more expensive set. This does not seem irrational to economists, because they understand that price is a signal: in people’s minds, they are choosing the set with the higher common value (or resale value). Note that the price tags were random. But the very fact that everybody would now choose the expensive set actually makes choosing it somewhat rational, and that is why economists understand it.
Let me, with some stretch, apply a similar model to organ-donation choices and argue that it is rational to go with the default. Suppose people understand that everybody chooses the default. Now if everybody chooses to be an organ donor, then in economists’ terms donated organs are not as scarce, so donating an organ is not that expensive a proposition, so choose it. If nobody is choosing to be an organ donor, then donating is now a more expensive proposition (rare implies expensive), so it is not as easy to donate.
A balanced experiment would be as follows. You create two types of forms, both listing both options. The form says that one of the options has been chosen randomly for you, but that you may switch if you like. Now, do people switch, knowing that the pre-selection was random?
I believe that people avoid expensive decision making when the cognitive cost of making the decision is higher than the impact of the decision itself. This seems rational to me.
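Under this decision rule, the balanced experiment above is easy to simulate (a sketch with made-up utilities and costs; `simulate` and its parameters are hypothetical): a subject switches away from a random pre-selection only if the utility gap to the other option exceeds the cognitive cost of re-deciding.

```python
import random

# Hypothetical model of the balanced experiment: each subject receives a
# randomly pre-selected option and switches only if the utility gap to the
# other option exceeds the cognitive cost c of re-deciding.

def simulate(n_subjects, c, seed=0):
    rng = random.Random(seed)
    switches = 0
    for _ in range(n_subjects):
        utilities = {"A": rng.uniform(0, 1), "B": rng.uniform(0, 1)}
        preselected = rng.choice(["A", "B"])
        other = "B" if preselected == "A" else "A"
        if utilities[other] - utilities[preselected] > c:
            switches += 1  # switching was worth the effort
    return switches / n_subjects

# The higher the cognitive cost, the stickier the random pre-selection,
# even though subjects know it carries no information.
print(simulate(10_000, c=0.05))
print(simulate(10_000, c=0.50))
```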
What I believe is that as computing advances and becomes more and more trustworthy, people will use computers to make such decisions. They will tell their computers their preferences at a macro level, and the computers will use some kind of computation and learning to make decisions at the micro level.
In the Rome-vs-Paris example there were maybe 20% of the people who preferred Rome in one framing and Paris in the other. What do these 20% “really” prefer? Presumably we could interview the people after their trip and ask them ex-post how happy they are with it. We may get different average numbers according to the framing, and we may claim that the framing that resulted in higher happiness is “better”. Similar experiments may be conceived for the default browser, GUIs for dating services, etc. I’d like to see experiments that supply this type of evaluation of the different framings.
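As a sketch of how such an evaluation might be tallied (the ratings and framing labels are entirely made up): average the ex-post happiness reports within each framing and compare.

```python
# Hypothetical ex-post survey data: happiness ratings (1-10) collected after
# the trip, grouped by the framing each subject originally saw.
ratings = {
    "framing_1": [7, 8, 6, 9, 7, 8],  # made-up numbers
    "framing_2": [6, 7, 6, 8, 7, 6],
}

averages = {f: sum(r) / len(r) for f, r in ratings.items()}
# Declare the framing with the higher average ex-post happiness "better".
print(averages, "->", max(averages, key=averages.get))
```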
At least among the people I have experimented with, they feel happier with the default.
I feel happier with the default. My computer comes with a default background, and I do not usually make an attempt to change it. But if somebody changes the background, I might make an attempt to change it.
In these experiments and in policy making, one should also account for the large positive utility that users receive from having a default choice.
In any case, framing does not seem to be related to irrationality. All it says is that people do not see questions and decisions in objective mathematical terms. There are a lot of subjective signals going around. These subjective signals not only help people make decisions; they may actually make people happier. These signals have real utility.
There are generally two types of problems with eliciting preferences: framing effects and context-dependent effects.
Framing effects, in which A is equivalent to A’ but the two are treated differently in choice contexts, may not be hard to diminish: get users to see, before choosing, that A and A’ are equivalent.
Context-dependent effects are much harder to deal with, and strategic context-dependent preferences are the hardest of all.
Context dependency is hard because we have no good alternative to set theory for modeling preference relations: a preference relation is just a set of ordered pairs of alternatives, and a set of pairs leaves no room for context to make a difference.
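To illustrate the signature mismatch (a sketch with hypothetical names; the decoy behavior loosely follows the Rome/Paris example): a classical preference relation compares two alternatives in isolation, while a context-dependent choice function sees the whole menu, so the first cannot express the second.

```python
from typing import Callable, FrozenSet

Alternative = str

# Classical preference: a context-free comparison of two alternatives,
# i.e., just a set of ordered pairs.
ClassicalPreference = Callable[[Alternative, Alternative], bool]

# Context-dependent choice: the whole menu is an input, so adding an
# "irrelevant" decoy can legitimately change the outcome.
ChoiceFunction = Callable[[FrozenSet[Alternative]], Alternative]

def decoy_sensitive_choice(menu: FrozenSet[Alternative]) -> Alternative:
    # Hypothetical behavior: the decoy "Rome-no-coffee" makes "Rome" look
    # better; without it, "Paris" wins.
    if "Rome-no-coffee" in menu and "Rome" in menu:
        return "Rome"
    return "Paris" if "Paris" in menu else next(iter(menu))

print(decoy_sensitive_choice(frozenset({"Rome", "Paris"})))                    # Paris
print(decoy_sensitive_choice(frozenset({"Rome", "Paris", "Rome-no-coffee"})))  # Rome
```

No single context-free relation over {"Rome", "Paris"} reproduces both outputs, which is the modeling difficulty in a nutshell.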