Usually I like to pontificate about mundane things like journals and the job market, but for once I’d like to soar above these trivialities and write about welfare and justice. Computer scientists like to talk about social welfare, but usually we mean it in an extremely restricted sense: the sum of players’ utilities. I think we like this interpretation because we love to optimize stuff.
But economists take a much broader view of social welfare. Most recently, one of the reviews (likely by an economist) of my CACM article on cake cutting included the following comment:
The author seems to believe that there are only two ways of evaluating social welfare, either utilitarian or egalitarian. Although both of these criteria are certainly interesting, it is far from being true that there are no other ways of making such evaluations. It is also not true, contrarily to what author asserts, that “the very thought of social welfare” requires an “interpersonal comparisons of value”. The Pareto criterion certainly is free of such comparisons and it says something about social welfare. Its conjunction with the no envy criterion, which is also free of such comparisons, provides another example, which is satisfactory from the fairness viewpoint as well.
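To make the terminology in this exchange concrete, here is a toy sketch contrasting the criteria the reviewer mentions. The numbers and bundles are entirely made up for illustration; the point is just that utilitarian and egalitarian welfare aggregate utilities across players, whereas envy-freeness only ever compares a player's utility with her own utility for someone else's bundle.

```python
# Toy illustration of the welfare notions above. All numbers are invented.
# value[i][j] = player i's utility for bundle j
value = [
    [6, 1, 3],   # player 0
    [2, 5, 2],   # player 1
    [1, 4, 4],   # player 2
]

def utilitarian(alloc):
    """Sum of players' utilities (alloc[i] = bundle given to player i)."""
    return sum(value[i][alloc[i]] for i in range(len(alloc)))

def egalitarian(alloc):
    """Utility of the worst-off player."""
    return min(value[i][alloc[i]] for i in range(len(alloc)))

def envy_free(alloc):
    """No player strictly prefers another player's bundle to her own.
    Note: only intrapersonal comparisons -- no summing across players."""
    return all(value[i][alloc[i]] >= value[i][alloc[j]]
               for i in range(len(alloc)) for j in range(len(alloc)))

alloc = [0, 1, 2]          # give bundle i to player i
print(utilitarian(alloc))  # 6 + 5 + 4 = 15
print(egalitarian(alloc))  # min(6, 5, 4) = 4
print(envy_free(alloc))    # True: each player weakly prefers her own bundle
```

The first two functions require adding or comparing utilities across different people; the third does not, which is the reviewer's point about criteria that are "free of such comparisons."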
My response amounted to an argumentum ad auctoritatem: Many CS papers use “welfare” or “social welfare” in the same way, including a recent CACM review article. I was actually trying to be extra careful by using the term utilitarian social welfare. An EC’12 reviewer raised similar objections to this paper, which is why the introduction of the camera-ready version includes a long (for a conference paper) historical discussion.
These reviewers are right, of course, in asking for a broader perspective. But perhaps part of the objection to (utilitarian) social welfare is also philosophical. Check out, for example, the article on distributive justice in the Stanford Encyclopedia of Philosophy. It turns out that from a philosopher’s point of view, even economists are narrow-minded:
Economists defending some form of welfarism normally state the explicit functional form, while philosophers often avoid this formality, concentrating on developing their theories in answer to two questions: 1) the question of what has intrinsic value, and 2) the question of what actions or policies would maximize the intrinsic value.
Strangely enough, though, “most philosophical activity has concentrated on a variant known as utilitarianism”, which is exactly the notion mentioned above. The Stanford article’s section that deals with welfare-based principles is essentially a critique (more like an indictment) of this approach:
For instance, some people may have a preference that the members of some minority racial group have less material benefits. Under utilitarian theories, in their classical form, this preference or interest counts like any other in determining the best distribution. Hence, if racial preferences are widespread and are not outweighed by the minority’s contrary preferences (perhaps because the minority is relatively few in number compared to the majority), utilitarianism will recommend an inegalitarian distribution based on race if there is not some other utility-maximizing alternative on offer. […] Utilitarians may believe that even more welfare in the long run can be achieved by re-educating the majority so that racist preferences weaken or disappear over time, leading to a more harmonious and happier world. However, the utilitarian must supply an account of why racist or sexist preferences should be discouraged if the same level of total long term utility could be achieved by encouraging the less powerful to be content with a lower position.
But similar questions get even murkier under alternative principles of distributive justice. A seemingly popular principle, equality of opportunities, is advocated by “those who believe that we can show equal concern, respect, or treatment of people without them having the same material goods and services, so long as they have equal economic opportunities.” From this viewpoint, it has been argued that discrimination based on gender or race is bad because people have no control over these parameters, and it is immoral to structure society in a way that one’s draw in this “natural lottery” can profoundly affect one’s opportunities in life. Going one step further, people also cannot control which families they are born into; should we let such factors affect a person’s chances in life? And taking this reasoning to the limit, we can observe that people cannot control the talents with which they are born. Where does one draw the line?
Putting in my two cents (worth maybe two Israeli liras in this case), it seems to me that in simple domains, undesirable mechanisms like random serial dictatorship (see, e.g., Eric Budish’s position paper in Exchanges) satisfy the principle of equality of opportunities; going back to the review snippet above, RSD is also ex-post Pareto efficient. Perhaps maximizing the sum of utilities, when possible, is not that bad after all?
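For readers who haven’t run into it, random serial dictatorship is easy to sketch: draw a uniformly random order over the players, then let each player in turn grab her favorite remaining item. A minimal Python sketch (the players, items, and preference lists are my own invented example):

```python
import random

def random_serial_dictatorship(preferences, rng=random):
    """preferences[i] = player i's ranking of items, best first.
    Returns a dict mapping each player to the item she receives."""
    players = list(preferences)
    rng.shuffle(players)  # every ordering equally likely: ex ante, everyone
                          # has the same chance at each position in line
    remaining = {item for ranking in preferences.values() for item in ranking}
    allocation = {}
    for player in players:  # each "dictator" takes her top remaining item
        for item in preferences[player]:
            if item in remaining:
                allocation[player] = item
                remaining.remove(item)
                break
    return allocation

# Invented example: three students, three course slots.
prefs = {
    "alice": ["algo", "ml", "theory"],
    "bob":   ["algo", "theory", "ml"],
    "carol": ["ml", "algo", "theory"],
}
print(random_serial_dictatorship(prefs))
```

The uniform shuffle is where equality of opportunity comes from, and each realized picking order yields an allocation no player would trade away under a Pareto-improving swap, which is the ex-post Pareto efficiency mentioned above. Yet the mechanism ignores utility intensities entirely, which is why one can find it undesirable despite these properties.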
I’ll conclude with an ambitious thought. It seems (from ongoing work with Nicolas Christin and Anupam Datta at CMU) that research on distributive justice says little about how to implement and verify those lofty ideals. Is this a challenge for algorithmic philosophy?