Usually I like to pontificate about mundane things like journals and the job market, but for once I’d like to soar above these trivialities and write about welfare and justice. Computer scientists like to talk about social welfare, but usually we mean it in an extremely restricted sense: the sum of players’ utilities. I think we like this interpretation because we love to optimize stuff.
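To make that restricted sense concrete, here is a minimal sketch contrasting the sum-of-utilities notion with the egalitarian (maximin) one; the utility numbers are invented for illustration:

```python
def utilitarian_welfare(utilities):
    """Sum of players' utilities -- the notion CS papers usually mean."""
    return sum(utilities)

def egalitarian_welfare(utilities):
    """Utility of the worst-off player (the Rawlsian / maximin notion)."""
    return min(utilities)

profile = [10, 7, 1]  # made-up utilities of three players
print(utilitarian_welfare(profile))  # 18
print(egalitarian_welfare(profile))  # 1
```

Note that an outcome can score well on one measure and badly on the other, which is exactly why the choice of welfare notion matters.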
But economists take a much broader view of social welfare. Most recently, one of the reviews (likely by an economist) of my CACM article on cake cutting included the following comment:
The author seems to believe that there are only two ways of evaluating social welfare, either utilitarian or egalitarian. Although both of these criteria are certainly interesting, it is far from being true that there are no other ways of making such evaluations. It is also not true, contrarily to what author asserts, that “the very thought of social welfare” requires an “interpersonal comparisons of value”. The Pareto criterion certainly is free of such comparisons and it says something about social welfare. Its conjunction with the no envy criterion, which is also free of such comparisons, provides another example, which is satisfactory from the fairness viewpoint as well.
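The reviewer's point about interpersonal comparisons can be made concrete. In the sketch below (the representation and numbers are my own), both the no-envy test and the Pareto comparison only ever compare two numbers drawn from the same agent's utility function, so the utility scales of different agents never need to be compared:

```python
def envy_free(u):
    """u[i][j] = agent i's value for the bundle assigned to agent j.
    Every comparison u[i][i] >= u[i][j] stays within agent i's own scale."""
    n = len(u)
    return all(u[i][i] >= u[i][j] for i in range(n) for j in range(n))

def pareto_dominates(u_a, u_b):
    """u_a[i], u_b[i]: agent i's utility under allocations A and B.
    A dominates B if everyone is weakly better off and someone strictly;
    again, only within-agent comparisons are needed."""
    return (all(a >= b for a, b in zip(u_a, u_b))
            and any(a > b for a, b in zip(u_a, u_b)))

# Invented example: each agent values its own bundle at least as much
# as the other's, so the allocation is envy-free.
u = [[5, 3],
     [4, 4]]
print(envy_free(u))  # True
```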
My response amounted to an argumentum ad auctoritatem: Many CS papers use “welfare” or “social welfare” in the same way, including a recent CACM review article. I was actually trying to be extra careful by using the term utilitarian social welfare. An EC’12 reviewer raised similar objections to this paper, which is why the introduction of the camera-ready version includes a long (for a conference paper) historical discussion.
These reviewers are right, of course, in asking for a broader perspective. But perhaps part of the objection to (utilitarian) social welfare is also philosophical. Check out, for example, the article on distributive justice in the Stanford Encyclopedia of Philosophy. It turns out that from a philosopher’s point of view, even economists are narrow-minded:
Economists defending some form of welfarism normally state the explicit functional form, while philosophers often avoid this formality, concentrating on developing their theories in answer to two questions: 1) the question of what has intrinsic value, and 2) the question of what actions or policies would maximize the intrinsic value.
Strangely enough, though, “most philosophical activity has concentrated on a variant known as utilitarianism”, which is exactly the notion mentioned above. The Stanford article’s section that deals with welfare-based principles is essentially a critique (more like an indictment) of this approach:
For instance, some people may have a preference that the members of some minority racial group have less material benefits. Under utilitarian theories, in their classical form, this preference or interest counts like any other in determining the best distribution. Hence, if racial preferences are widespread and are not outweighed by the minority’s contrary preferences (perhaps because the minority is relatively few in number compared to the majority), utilitarianism will recommend an inegalitarian distribution based on race if there is not some other utility-maximizing alternative on offer. […] Utilitarians may believe that even more welfare in the long run can be achieved by re-educating the majority so that racist preferences weaken or disappear over time, leading to a more harmonious and happier world. However, the utilitarian must supply an account of why racist or sexist preferences should be discouraged if the same level of total long term utility could be achieved by encouraging the less powerful to be content with a lower position.
But similar questions get even murkier under alternative principles of distributive justice. A seemingly popular principle, equality of opportunities, is advocated by “those who believe that we can show equal concern, respect, or treatment of people without them having the same material goods and services, so long as they have equal economic opportunities.” From this viewpoint, it has been argued that discrimination based on gender or race is bad because people have no control over these parameters, and it is immoral to structure society in a way that one’s draw in this “natural lottery” can profoundly affect one’s opportunities in life. Going one step further, people also cannot control which families they are born into; should we let such factors affect a person’s chances in life? And taking this reasoning to the limit, we can observe that people cannot control the talents with which they are born. Where does one draw the line?
To put in my two cents (worth maybe two Israeli liras in this case): it seems to me that in simple domains, even undesirable mechanisms like random serial dictatorship (see, e.g., Eric Budish’s position paper in Exchanges) satisfy the principle of equality of opportunities; going back to the review snippet above, RSD is also ex-post Pareto efficient. Perhaps maximizing the sum of utilities, when possible, is not that bad after all?
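For readers who haven’t seen RSD, here is a minimal sketch (the function and variable names are mine, not anyone’s reference implementation): a uniformly random ordering of the agents is drawn, and each agent in turn takes its most-preferred remaining item. The uniform lottery is what gives every agent the same ex-ante opportunity, and the greedy picking is what makes the outcome ex-post Pareto efficient.

```python
import random

def random_serial_dictatorship(agents, items, preferences, rng=None):
    """Random serial dictatorship: shuffle the agents uniformly at random,
    then let each agent in turn pick its favorite remaining item.
    preferences[a] is agent a's ranking of the items, best first."""
    rng = rng or random.Random()
    order = list(agents)
    rng.shuffle(order)  # every ordering is equally likely
    remaining = set(items)
    allocation = {}
    for agent in order:
        for item in preferences[agent]:
            if item in remaining:
                allocation[agent] = item
                remaining.remove(item)
                break
    return allocation

# Invented toy instance: both agents most prefer item 'x'.
prefs = {'a': ['x', 'y'], 'b': ['x', 'y']}
print(random_serial_dictatorship(['a', 'b'], ['x', 'y'], prefs))
```

Passing a seeded `random.Random` makes runs reproducible; with contested items, which agent gets its first choice depends entirely on the drawn order.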
I’ll conclude with an ambitious thought. It seems (from ongoing work with Nicolas Christin and Anupam Datta at CMU) that research on distributive justice says little about how to implement and verify those lofty ideals. Is this a challenge for algorithmic philosophy?
Ariel, some of the questions you raise are addressed in John Roemer’s work on distributive justice, which is only briefly mentioned in the Stanford article. Roemer proposes that individuals should be rewarded in proportion to the *effort* they put into a task relative to the “type” of the individual. The type could incorporate external factors such as social background: a high school student from a poor neighborhood might have less time to do homework after school, because he needs to work to earn extra money. Check out Peyton Young’s book “Equity”, a great introductory textbook.
In my opinion, neither theory you mention gets at the issue of fairness in the sense of “non-discrimination”. It’s easy to come up with scenarios where maximizing social welfare (or envy-freeness) is blatantly “unfair” in a deeply intuitive sense of the word. It’s also easy to make a case against equal opportunity in the same way. This situation was part of the motivation for a joint work with Dwork, Pitassi, Reingold and Zemel on fairness in classification: http://arxiv.org/abs/1104.3913
These are good comments. Desert-based principles seem to have their own criticisms…
I know of your interesting paper. It is a rare example of an implementation of justice principles (of course you are working with a very general notion of justice via the similarity metric). Speaking of algorithmic philosophy, one element that can nicely complement your framework is a method of verifying that the company is actually using the classifier you designed (this is one of the things that came up in discussions with Christin and Datta).
By the way, congrats on your cool new blog…
Going off on a bit of a tangent based on your last two words. Is your use of the term algorithmic philosophy based on some standard or popular definition/classification? I had recently started using algorithmic philosophy to describe some of my thoughts, and I am curious to know if I am encroaching on somebody else’s term. Unfortunately, search is no help since both Google and DuckDuckGo give me my own post back as a first result. Do you know if there is an existing history for this term?
Nice post! I don’t know of an existing history for this term. I came up with it independently, but it seems that you have thought about it much more seriously…
Hi. This is the first time I’ve read about distributive justice. The problems I see with this approach are twofold:
1. How do we set the rules for a fair distribution? (if, for example, one person is less intelligent but has a better voice than another person).
2. How do we encourage people to use their full potential? (if somebody is poor, it’s better for him, psychologically, to find a job than to receive a cash benefit for doing nothing).
Can you recommend an interesting source that addresses the above questions? 🙂