A “textbook system” based on social choice theory would have a centralized mechanism interacting with multiple software agents, each representing a user. The centralized mechanism would be designed to optimize some global goal (such as revenue or social welfare), while each software agent would elicit its user’s preferences and then optimize on the user’s behalf.
Among other irritating findings, behavioral economics casts doubt on this pretty picture, questioning the very notion that users have preferences — that is, preferences that are independent of the elicitation method. In the world of computation, we have a common example of this “framing” difficulty: the default. Users rarely change it, but we can’t say that they actually prefer the default to the alternatives, since if we change the default they stick with the new one. Judicious choice of defaults can obviously be exploited for the purposes of the centralized mechanism (default browser = Internet Explorer); but what should we do if we really just want to make the user happy? What does that even mean?
The following gripping talk by Dan Ariely demonstrates such issues.