This is the fifth and final post in a series recapping the talks at the CMU summer school on algorithmic economics (the first four: #1, #2, #3, #4). See the summer school site for abstracts, slides, and (eventually) videos.
Itai Ashlagi
Itai Ashlagi from MIT discussed two domains in which mechanism design theory is playing an important role: matching residents to hospitals, and kidney exchange. He began with the cautionary tale of the Boston School Choice Mechanism, which illustrates the consequences of poor mechanism design. This was a mechanism for assigning students to schools, and it worked as follows. Every student submitted a ranked list of schools. The mechanism went through the students in some order, assigning a student to his/her first choice provided space was still available. The mechanism then performed an iteration using the unassigned students’ second choices, an iteration with the unassigned students’ third choices, and so on. This may sound natural enough, but it has some serious problems — do you see any? The key issue is that if a student’s first choice is full, he/she is skipped and not considered again until the second iteration, at which point his/her second choice might be full, and so on. For this reason, there can be an incentive to rank first a school that is not your first choice, such as one that is pretty good and also has a large capacity. Such strategic behavior was observed empirically. The school choice mechanism has since been changed to have better incentive properties. One natural improvement: when a student’s first choice is already full, proceed to the next school on that student’s list, rather than immediately skipping to the next student.
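To make the mechanics concrete, here is a minimal Python sketch of the (old) Boston mechanism as described above; the data format, the fixed tie-breaking order over students, and the tiny example are illustrative assumptions, not the actual Boston implementation.

```python
# Sketch of the (old) Boston mechanism: students are processed in a fixed priority
# order, and in round k each still-unassigned student is considered only for the
# k-th school on his/her list.
def boston_mechanism(preferences, capacities, student_order):
    """preferences: dict student -> ranked list of schools.
    capacities: dict school -> number of seats.
    student_order: fixed tie-breaking order over students (an assumption here)."""
    remaining = dict(capacities)
    assignment = {}
    num_rounds = max(len(p) for p in preferences.values())
    for k in range(num_rounds):
        for student in student_order:
            if student in assignment or k >= len(preferences[student]):
                continue
            school = preferences[student][k]  # only the k-th choice counts in round k
            if remaining[school] > 0:
                assignment[student] = school
                remaining[school] -= 1
    return assignment

# X and Y each have one seat. Reporting truthfully, student b loses the first-round
# tie-break at X, is skipped, and finds Y already full in the next round; ranking
# the "safer" school Y first would have gotten b a seat.
prefs = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["Y", "X"]}
caps = {"X": 1, "Y": 1}
print(boston_mechanism(prefs, caps, student_order=["a", "b", "c"]))  # {'a': 'X', 'c': 'Y'}
```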
Matching residents to hospitals is the canonical application of the Gale-Shapley stable marriage algorithm. Itai discussed the complications introduced by couples constraints, where two residents want to be matched to nearby hospitals. A lot of the nice theory about stable matchings, including the main existence theorem and the incentive properties, breaks down when couples constraints are added. Nevertheless, the current heuristic algorithms for matching with couples constraints generally find a stable matching. To provide a theoretical explanation for this empirical success, Itai advocated a “large random market” approach. He proposed a model with n residents with random preferences, and O(n) hospitals with arbitrary preferences. The goal (see this paper) is to prove that, provided the number of couples grows sublinearly with n, a stable matching exists with high probability (as n grows large). The gist of the algorithmic proof: first match all of the singles (using Gale-Shapley), and then introduce the couples one by one. A new couple is allowed to evict previously assigned singles, who are then re-assigned by resuming the Gale-Shapley algorithm. This algorithm successfully computes a stable matching as long as no couple evicts another couple. To prove that this is unlikely, Itai defined a subtle notion of conflict between couples and showed that, if there are not too many couples, the conflict graph is acyclic with high probability. Thus, for an appropriate ordering of the couples, the algorithm terminates with a stable matching.
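For reference, here is a minimal sketch of resident-proposing deferred acceptance (Gale-Shapley), the subroutine used above to match the singles; it assumes one position per hospital and strict, complete preference lists (the names and preferences are illustrative), and it omits the capacities and couples handling of the real matching algorithm.

```python
# Resident-proposing deferred acceptance, assuming one position per hospital and
# strict, complete preferences on both sides.
def deferred_acceptance(resident_prefs, hospital_prefs):
    """resident_prefs: dict resident -> ranked list of hospitals.
    hospital_prefs: dict hospital -> ranked list of residents."""
    # rank[h][r] = position of resident r on hospital h's list (lower is better)
    rank = {h: {r: i for i, r in enumerate(lst)} for h, lst in hospital_prefs.items()}
    next_choice = {r: 0 for r in resident_prefs}  # index of the next hospital to propose to
    held = {}                                     # hospital -> tentatively held resident
    free = list(resident_prefs)
    while free:
        r = free.pop()
        if next_choice[r] >= len(resident_prefs[r]):
            continue                              # r has exhausted his/her list
        h = resident_prefs[r][next_choice[r]]
        next_choice[r] += 1
        if h not in held:
            held[h] = r                           # h tentatively accepts r
        elif rank[h][r] < rank[h][held[h]]:
            free.append(held[h])                  # h prefers r; its current resident is evicted
            held[h] = r
        else:
            free.append(r)                        # h rejects r; r will propose again later
    return {r: h for h, r in held.items()}

residents = {"r1": ["h1", "h2"], "r2": ["h1", "h2"]}
hospitals = {"h1": ["r2", "r1"], "h2": ["r1", "r2"]}
print(deferred_acceptance(residents, hospitals))  # {'r2': 'h1', 'r1': 'h2'}
```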
Kidney exchange, in its simplest form, involves two patient-donor pairs, (A,A’) and (B,B’). A’ wants to give a kidney to A but unfortunately they are incompatible (e.g., because of a blood type mismatch). Similarly for B’ and B. But if B’ is compatible with A and A’ with B, then we can transplant compatible kidneys to both of the patients. (Generally, A and A’ have never met B or B’. The two surgeries are always done simultaneously — do you see why?) Itai discussed the frontier of the theory and practice of kidney exchange (see here for details). One issue is that the real players in the “kidney exchange game” are hospitals, rather than individual pairs of patients. (A single hospital can have many patient-donor pairs.) The objective function of a single hospital (save as many of its own patients as possible, do as many surgeries in-house as possible, etc.) is not the same as the global objective (save as many lives as possible). Simplistic mechanisms incentivize hospitals to keep secret their patient-donor pairs that can be matched in-house, reporting only their difficult-to-match pairs to the central exchange. Itai discussed better exchange mechanisms that do not suffer from such incentive problems. The conjecture that hospitals disproportionately report difficult-to-match pairs would also explain why long chains of exchanges seem necessary to maximize the number of matched patients, in contrast to the predictions of early models of kidney exchange (see here for details).
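In the simplest two-way model above, maximizing the number of transplants amounts to a maximum matching problem on a “compatibility graph” whose nodes are incompatible patient-donor pairs; here is a minimal sketch using networkx, with made-up compatibility data (it captures only two-way exchanges, not the longer chains mentioned above).

```python
# Each node is an incompatible patient-donor pair; an edge (i, j) means pair i's donor
# is compatible with pair j's patient and vice versa, so i and j can swap kidneys.
import networkx as nx

pairs = ["AA'", "BB'", "CC'", "DD'"]                      # illustrative pairs
mutually_compatible = [("AA'", "BB'"), ("BB'", "CC'"), ("CC'", "DD'")]

G = nx.Graph()
G.add_nodes_from(pairs)
G.add_edges_from(mutually_compatible)

# A maximum-cardinality matching is a largest set of disjoint two-way exchanges.
exchanges = nx.max_weight_matching(G, maxcardinality=True)
print(exchanges)  # e.g., {("AA'", "BB'"), ("CC'", "DD'")}: all four patients are matched
```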
Vincent Conitzer
Vince Conitzer from Duke gave two lectures on social choice from a computer science perspective. Recall that a social choice function takes preferences as input (e.g., a ranked list of candidates from each voter) and outputs a notion of the “collective preferences” (e.g., a single ranked list). Vince pointed out that there are at least two different reasons to invoke such a function: to recover some notion of “ground truth”; and to compromise among subjective preferences when no “ground truth” exists. He began by reviewing the basics on the latter topic. With only two alternatives, the majority rule satisfies all the properties you might want. With three or more alternatives, Arrow’s Impossibility Theorem says that every voting rule has its problems (for instance, no reasonable rule satisfies “independence of irrelevant alternatives”), and the closely related Gibbard-Satterthwaite theorem says that every reasonable rule can be manipulated by tactical voting.
Moving on to the use of social choice functions to recover the “ground truth”, Vince discussed a very cool question: which social choice functions arise as maximum likelihood estimators (MLEs) with respect to some noise model? (See here for details.) For example, suppose there are only two alternatives, one “right” and one “wrong”. Assume that the votes are i.i.d., with each vote being “right” with some probability greater than .5. Then the majority rule gives the MLE. When there are more than two alternatives and a “correct” ranking of them, the Kemeny rule gives the MLE under a natural noise model (in which each pairwise comparison in the correct ranking is flipped independently with some fixed probability less than .5). A neat way to prove that a voting rule cannot arise as the MLE for any noise model is to exhibit a violation of the following consistency property: if the rule produces the same output on the vote profiles V_1 and V_2 separately, then it must also produce that output on the combined profile V_1+V_2.
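To make the Kemeny rule concrete, here is a minimal brute-force sketch that returns the ranking minimizing the total number of pairwise disagreements with the votes; it is only feasible for a handful of alternatives, and the vote profile is made up.

```python
# Brute-force Kemeny rule: the output ranking minimizes the total number of pairwise
# disagreements with the voters' rankings (the MLE under the noise model in which
# each pairwise comparison is flipped independently).
from itertools import combinations, permutations

def kemeny_ranking(votes):
    """votes: list of rankings, each an ordering of all the alternatives, best first."""
    alternatives = votes[0]

    def disagreements(ranking):
        pos = {a: i for i, a in enumerate(ranking)}
        total = 0
        for vote in votes:
            vpos = {a: i for i, a in enumerate(vote)}
            for a, b in combinations(alternatives, 2):
                # one disagreement whenever the ranking and the vote order a, b oppositely
                if (pos[a] < pos[b]) != (vpos[a] < vpos[b]):
                    total += 1
        return total

    return min(permutations(alternatives), key=disagreements)

votes = [("a", "b", "c"), ("a", "b", "c"), ("b", "c", "a")]
print(kemeny_ranking(votes))  # ('a', 'b', 'c')
```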
Vince’s second lecture focused on computational aspects of social choice. The Gibbard-Satterthwaite theorem states that non-trivial voting rules are manipulable. But perhaps there are non-trivial and computationally tractable rules for which manipulation is computationally intractable? There are rules for which manipulation is NP-hard in the worst case, but these rules are generally easy to manipulate “on average” (see this survey paper). Thus, the Gibbard-Satterthwaite theorem seems equally damning in the computationally bounded world. Vince wrapped up by discussing the communication complexity of preference elicitation (i.e., how many questions of a given form must be asked of the voters in order to evaluate a given voting rule) and compact preference representations for settings where the number of alternatives is ridiculously large.
Epilogue
Thanks to my co-organizer Ariel, all of the speakers, and especially to the students — both for their enthusiastic participation and for the unexpected gift at the end of the summer school!