Archive for June, 2013

I have recently written two survey articles about cake cutting:

  1. Cake Cutting: Not Just Child’s Play, a popular (as opposed to scholarly) article published in the July issue of the Communications of the ACM.
  2. Cake Cutting Algorithms, draft (comments welcome!) of chapter 13 of the upcoming Handbook of Computational Social Choice (which I am editing with Felix Brandt, Vince Conitzer, Ulle Endriss, and Jerome Lang).

These articles explain at length why I think cake cutting is cool. Here I’ll try to convince you in a few paragraphs.

Imagine that the cake is rectangular and heterogeneous, i.e., different players may value the same piece of cake differently. Each player has a valuation function that maps a given piece of cake to its value; assume that the valuations are additive and that each player values the whole cake at 1. An allocation is proportional if each player has value at least 1/n for his piece, where n is the number of players.

Consider the following cake cutting algorithm. Each player makes a mark at the point such that the piece of cake to the left of the mark is worth 1/n to that player. The player who made the leftmost mark gets the piece to the left of his mark, and the satisfied player and allocated piece are removed from the game. We repeat this process until there is only one player, who gets the remaining piece. This algorithm is clearly proportional: Each player before the last has value exactly 1/n for his piece, while the last player has value at most 1/n for each piece that was allocated to another player, and therefore value at least 1-(n-1)/n=1/n for his own piece. Notice that this algorithm requires \Theta(n^2) operations: In round k, the n-k+1 remaining players make marks.
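
To make the procedure concrete, here is a minimal Python sketch. The representation is my own, not the articles': each player is described by a cumulative value function F on [0,1] with F(0)=0 and F(1)=1, and marks are computed by bisection.

    def mark(F, left, target, eps=1e-9):
        """Return x >= left such that the piece [left, x] is worth target to F."""
        base = F(left)
        lo, hi = left, 1.0
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if F(mid) - base < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def leftmost_mark_protocol(cdfs):
        """cdfs[i] is player i's cumulative value function on [0, 1]."""
        n = len(cdfs)
        remaining = list(range(n))
        left, allocation = 0.0, {}
        while len(remaining) > 1:
            # Every remaining player marks off a piece worth 1/n, starting at left.
            marks = {i: mark(cdfs[i], left, 1.0 / n) for i in remaining}
            winner = min(remaining, key=lambda i: marks[i])
            allocation[winner] = (left, marks[winner])
            left = marks[winner]
            remaining.remove(winner)
        allocation[remaining[0]] = (left, 1.0)  # the last player takes the rest
        return allocation

    # Example: two players with uniform valuations, one who only values [0.5, 1].
    cdfs = [lambda x: x, lambda x: x, lambda x: max(0.0, 2 * (x - 0.5))]
    print(leftmost_mark_protocol(cdfs))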

Let us now slightly modify the algorithm. Initially, each player makes a mark such that the piece of cake to the left (and to the right) of the mark is worth 1/2. Suppose for ease of exposition that n is a power of 2. We recursively call the algorithm with the “owners” of the first n/2 marks from the left, and the piece of cake to the left of their rightmost mark; and with the remaining players and remaining cake. When there is only one player, he gets the piece. Each recursive call includes half of the players and a piece of cake whose value for these players is at least half the value of the previous piece, so after \lg n calls the value a player receives is at least 1/2^{\lg n}=1/n. By examining the recursion tree, whose height is \lg n, it is easy to see that the number of operations is \Theta(n\lg n). Strikingly, in a concrete complexity model for cake cutting it is possible to show that no algorithm can do better. How? Read the chapter.
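
Again as a sketch under the same assumptions (reusing mark-by-bisection from above), the divide-and-conquer version looks like this. The sorting below is only for clarity of presentation; in the complexity model mentioned above, only the marks are counted, and each player makes one per level of the recursion.

    def mark_half(F, left, right, eps=1e-9):
        """Return the point splitting the value of [left, right] in two for F."""
        target = (F(right) - F(left)) / 2
        lo, hi = left, right
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if F(mid) - F(left) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def divide_and_conquer(cdfs, players=None, left=0.0, right=1.0, allocation=None):
        """Assumes the number of players is a power of 2, as in the text."""
        if allocation is None:
            players, allocation = list(range(len(cdfs))), {}
        if len(players) == 1:
            allocation[players[0]] = (left, right)
            return allocation
        halves = {i: mark_half(cdfs[i], left, right) for i in players}
        order = sorted(players, key=lambda i: halves[i])
        k = len(players) // 2
        cut = halves[order[k - 1]]  # rightmost mark among the first half
        divide_and_conquer(cdfs, order[:k], left, cut, allocation)
        divide_and_conquer(cdfs, order[k:], cut, right, allocation)
        return allocation

    # Four players with uniform valuations each receive a quarter of the cake.
    print(divide_and_conquer([lambda x: x] * 4))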

Cake cutting is great because it calls for algorithmic insights and computational thinking, yet relatively few computer scientists have so far been involved. And the best thing about cake cutting? How else would you write papers with cool titles like Cake Cutting is Not a Piece of Cake and Throw One’s Cake – and Eat It Too?

The new issue of SIGecom Exchanges is out. Since I already wrote what I had to say about it in the editor’s letter, I’ll just quote myself:

Issue 12.1 of SIGecom Exchanges includes a letter from the program chairs of EC’13, a survey, and six research letters. The purpose of the first letter is to update the community about some of the decisions and discussions surrounding EC’13. The previous Exchanges issue — 11.2 — included a letter from the SIG chair David Parkes, and going forward the plan is to alternate between these two types of updates, publishing a letter from the SIG chair in each December issue and a letter from the EC chairs in each June issue.

The survey continues a new tradition that started with Eric Budish’s survey in issue 11.2. This tradition actually turned out to be very useful — when I want to catch up on a hot area all I need to do is invite a survey, wait a few months, and voila! This time I invited Mallesh Pai and Aaron Roth to contribute a survey on differential privacy and mechanism design. There has been a lot of action in this space lately, including very recent workshops at Caltech and in NYC, and an upcoming EC workshop organized by Mallesh and Aaron. Their survey is a 22-page masterpiece that is as readable and intuitive as it is comprehensive (the area is still young enough to be covered in full). Tip: don’t miss footnote 11!

As a special bonus to the readers of the blog, I’ll add another exclusive piece of Exchanges trivia: Milind Tambe now holds the record for the largest number of letters published in Exchanges, with a whopping four letters!

As pointed out by Michael Mitzenmacher and Rakesh Vohra (who will soon carry the respectable but grammatically weird title “Penn Integrates Knowledge Professor”), the National Science Foundation SSS program is trying an intriguing new peer review method, which is based on a paper by Merrifield and Saari (Merrifield is an astronomer and Saari is a prominent social choice theorist). Vohra gives a concise summary of the proposed method:

1) There are N agents each of whom submits a proposal.
2) Each agent receives m < N proposals to review (not their own).
3) Each agent compiles a ranked list of the m she receives, placing them in the order in which she thinks the community would rank them, not her personal preferences.
4) The N ranked lists are combined to produce an optimized global list ranking all N applications.
5) Failure to carry out this refereeing duty by a set deadline leads to automatic rejection of one’s proposal.
6) Individual rankings are compared to the positions of the same m applications in the globally-optimized list. If both lists appear in approximately the same order, then the proposer is rewarded by having his proposal moved a modest number of places up the final list relative to those from proposers who have not matched the community’s view as well.

An interesting detail worth mentioning is that in step 4, the lists are combined using a well-known voting rule called Borda count: Each agent awards m-k points to the proposal ranked in the k-th position. The purpose of the controversial step 6 is to incentivize good reviewing; but even Merrifield and Saari worry that “this procedure will drive the allocation process toward conservative mediocrity, since referees will be dissuaded from supporting a long-shot application if they think it will put them out of line with the consensus.”
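
For concreteness, here is a toy version of the step-4 aggregation as just described; the proposals, batches, and tie-breaking are my own illustration.

    from collections import defaultdict

    def borda_aggregate(rankings):
        """Each ranking lists a reviewer's batch best-first; position k earns m-k points."""
        scores = defaultdict(int)
        for batch in rankings:
            m = len(batch)
            for k, proposal in enumerate(batch, start=1):
                scores[proposal] += m - k
        # Global list: all proposals by total Borda score, best first.
        return sorted(scores, key=scores.get, reverse=True)

    # N = 4 proposals; each proposer reviews the m = 3 proposals other than her own.
    rankings = [["B", "C", "D"],   # proposer of A
                ["A", "C", "D"],   # proposer of B
                ["A", "B", "D"],   # proposer of C
                ["A", "B", "C"]]   # proposer of D
    print(borda_aggregate(rankings))  # ['A', 'B', 'C', 'D']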

Overall I think the proposal is interesting, and it certainly could be superior to the status quo. The main criticism that immediately comes to mind was also expressed by Vohra: The proposed method feels somewhat unprincipled, in that it gives no theoretical guarantees. But perhaps a method that is actually implemented is better than impossibility results that lead nowhere — and it could be refined over time. Because criticizing is fun (and sometimes even helpful), here are two more points.

First, strategic manipulation — the bane of social choice. Under the current system, NSF panels are populated by people who did not submit proposals, so the incentive for manipulation is small. Under the proposed system, it is not clear that you cannot increase the chances of your own proposal by placing good proposals at the bottom of your ranking. The problem is similar to classic social choice questions, but in a setting where the set of agents and the set of alternatives coincide; one would want the method to be impartial, in the sense that whether your proposal is funded is independent of your submitted ranking.

Recent papers deal with very similar problems: Holzman and Moulin have an Econometrica 2013 paper where the goal is to impartially select a single alternative rather than ranking the alternatives; and my TARK’11 paper with Alon, Fischer, and Tennenholtz looks at the problem of impartially selecting a subset of alternatives (we actually discuss an application to peer review in Section 5). Our mechanism is simple: If you want to select k agents, randomly partition the agents into t subsets, and select the best k/t agents from each subset based only on votes from the other subsets. I know that Holzman and Moulin have tried to design (axiomatically) good mechanisms for impartial ranking and ran into impossibilities, but is ranking actually necessary? NSF just needs to select a subset (of known size) of proposals to be funded. Would I recommend our mechanism, as is, to NSF? Umm, no. But the ideas and approach could be useful.
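
Here is a rough sketch of that mechanism as I described it above. The vote format is an assumption on my part (approval-style votes, where each agent names the peers she supports), and this toy version requires k to be divisible by t.

    import random

    def impartial_select(agents, votes, k, t):
        """votes maps each agent to the set of agents she approves of."""
        pool = list(agents)
        random.shuffle(pool)
        subsets = [pool[i::t] for i in range(t)]
        selected = []
        for subset in subsets:
            outsiders = [a for a in pool if a not in subset]
            # Members of a subset are scored only by votes from other subsets,
            # so no agent's report affects whether she herself is selected.
            score = {a: sum(a in votes[o] for o in outsiders) for a in subset}
            selected += sorted(subset, key=score.get, reverse=True)[: k // t]
        return selected

    # Eight agents; each approves of the next two agents around a circle.
    agents = list(range(8))
    votes = {a: {(a + 1) % 8, (a + 2) % 8} for a in agents}
    print(impartial_select(agents, votes, k=4, t=2))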

Second, randomness. Of course, peer review is random, but usually the randomness is not so explicit. Under the current system, a panelist who happens to get a batch of bad proposals can give bad scores and reviews to all of them. Under the proposed system, an agent who is randomly assigned a batch of bad proposals has to rank one of them on top, thereby giving it m-1 points — the same number of points a fantastic proposal would get from beating a batch of good proposals! And there are no reviews, so essentially the number of points awarded by each reviewer is all that matters. Moreover, it is very difficult to rank a batch of bad proposals, so if you get an unlucky batch you would probably lose your potential bonus from step 6 of the proposed method. The obvious way to ameliorate these issues is to use a large value of m. The NSF program uses m=7, which on the one hand seems too small to prevent randomness from playing a pivotal role, and on the other hand seems very large, as each proposal is read by seven people (instead of just four today). Increasing m further would make the reviewing load unbearable.

That said, kudos to the NSF for experimenting with new peer review methods. I expect that now the NSF will have to find a way to review a wave of proposals about how to review NSF proposals.

Update June 11 10:10am EST: Here is one possible solution to the strategic manipulation problem, which gives some kind of theoretical guarantee. The desired property is that a manipulator cannot improve the position of his own proposal in the global ranking by reporting a ranking of the m proposals in his batch that is different from the ranking of these m proposals based on others’ votes. Agents are penalized for bad reviewing according to the maximum displacement distance between their ranking and the others’ aggregate ranking, i.e., the maximum difference between the position of a proposal in one ranking and the position of the same proposal in the other ranking. By pushing a proposal x down in his ranking, a manipulator lowers x’s Borda score in the global ranking, but his own penalty-adjusted score decreases by at least as many points; therefore the manipulator’s position in the global ranking can only get worse.
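
To pin down the penalty, here is a small sketch of the maximum displacement distance; how it converts into points off the manipulator's score is left abstract, and the examples are mine.

    def max_displacement(reviewer_ranking, aggregate_ranking):
        """Both lists rank the same m proposals, best first."""
        agg_pos = {p: i for i, p in enumerate(aggregate_ranking)}
        return max(abs(i - agg_pos[p]) for i, p in enumerate(reviewer_ranking))

    print(max_displacement(["A", "B", "C", "D"], ["B", "A", "C", "D"]))  # 1
    print(max_displacement(["A", "B", "C", "D"], ["D", "A", "B", "C"]))  # 3

Moving a proposal d positions away from its position in the others’ ranking makes this distance, and hence the penalty, at least d, while costing that proposal at most d Borda points; this is the comparison driving the argument above.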
