
Archive for August, 2012

This is the third post in a series recapping the talks at the CMU summer school on algorithmic economics (here and here are the first two). See the summer school site for abstracts, slides, and (eventually) videos.

Jason Hartline

Jason Hartline from Northwestern gave two lectures on auction design. The first lecture covered the basics of Bayesian optimal auction design (Chapters 2 and 3 of Jason’s book), the second an overview of approximation in auction design (Chapters 1, 5, and 7).

We begin with single-item auctions. By “Bayesian” we mean that the bidders have private valuations for the good that are drawn independently from (possibly different) known distributions. As a bidder, you know your own valuation but only the distributions over the other valuations. For example, suppose there are two bidders, with valuations drawn i.i.d. from the uniform distribution on [0,1]. In the second-price auction, bidding truthfully is a dominant strategy; assuming bidders do this, the expected seller revenue is the expected smaller bid, namely 1/3. What happens in a first-price auction? Bidders do not have dominant strategies, and we expect all of them to maximize their expected utility with respect to what they know — formally, to be at Bayes-Nash equilibrium. In our example, suppose that each player always bids half their valuation. A simple calculation shows that this is a Bayes-Nash equilibrium: from the perspective of a bidder with valuation v, the best response against the other bid (which is uniform in [0,1/2]) is v/2. The expected revenue at this Bayes-Nash equilibrium is the expected value of the high bid — a.k.a. 1/3, the same as in the second-price auction!
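
Here’s a quick Monte Carlo sanity check of these two revenue numbers (a throwaway sketch, with the equilibrium bids hard-coded):

```python
import random

def estimate_revenues(trials=200_000):
    """Two bidders, i.i.d. U[0,1] valuations.
    Second-price: truthful bidding, revenue = min(v1, v2).
    First-price: equilibrium bids v/2, revenue = max(v1, v2)/2."""
    sp = fp = 0.0
    for _ in range(trials):
        v1, v2 = random.random(), random.random()
        sp += min(v1, v2)        # second-price revenue
        fp += max(v1, v2) / 2    # first-price revenue at the Bayes-Nash equilibrium
    return sp / trials, fp / trials

print(estimate_revenues())  # both estimates hover around 1/3
```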

Could this “revenue equivalence” hold more generally? If so, how would we prove it? The answers are yes, and using a fundamental characterization of Bayes-Nash equilibria. In the two-bidder first-price auction, suppose your valuation is v. Then the probability that you win (over the possible competing bids) in this Bayes-Nash equilibrium is also v. In auction theory jargon, your interim allocation rule — the probability that you win, as a function of your valuation — is x(v) = v. The characterization identifies which interim allocation rules arise in Bayes-Nash equilibrium, and what the corresponding expected payments must be. First, if the interim allocation rules are nondecreasing, then there are unique interim payment rules such that these expected allocations and payments arise in Bayes-Nash equilibria (namely, p(v) = v \cdot x(v) - \int_0^v x(z)dz — try it for the second-price auction!). Second, if the interim allocation rules are not monotone, then they do not correspond to any Bayes-Nash equilibrium.
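
As a sanity check of the payment formula, take the two-bidder second-price auction with i.i.d. uniform valuations, where x(v) = v: the formula gives p(v) = v \cdot v - \int_0^v z\,dz = v^2/2, which matches the direct calculation (a bidder with valuation v wins with probability v and then pays the other bid, which averages v/2). Summing over the two bidders, the expected revenue is 2 \cdot E[v^2/2] = 1/3, as computed above.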

Jason wrapped up his first lecture with several applications of this characterization and with a quick proof sketch. The characterization immediately implies “Revenue Equivalence”: if two (auction, equilibrium) pairs produce the same interim allocation rules (as with second-price and first-price auctions with i.i.d. bidder valuations), then they necessarily generate the same expected payments. The characterization is also useful when solving for and proving the uniqueness of Bayes-Nash equilibria (e.g., of first-price and all-pay auctions).

Jason started his second lecture with one of those simple and cool math results that everybody should know, the “prophet inequality”. N prizes arrive online. Each prize has a random value, drawn from a known distribution. Different prizes’ distributions are independent but can be different. At each time step you learn the exact value of the current prize, and you can either take it (ending the game) or move on to the next prize. The goal is to identify a stopping rule with large expected value [1]. One can derive an optimal stopping rule using backward induction. There is also a good “threshold strategy”, meaning that you pick the first prize with value exceeding a fixed threshold. A short but sweet proof shows that, if you set this threshold so that the probability of eventually picking a prize is exactly 50%, then your expected reward is at least half of the expected value of the best prize (a strong benchmark!).
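
Here’s a small simulation of the threshold rule (a sketch with illustrative choices: i.i.d. uniform prizes, and the threshold set so that some prize exceeds it with probability exactly 1/2):

```python
import random

def threshold_vs_prophet(n=5, trials=200_000):
    """Prizes are i.i.d. U[0,1]; the threshold t is chosen so that
    P(some prize > t) = 1/2, i.e. t^n = 1/2. Compares the rule's expected
    reward to the prophet's benchmark E[max prize]."""
    t = 0.5 ** (1 / n)
    reward = prophet = 0.0
    for _ in range(trials):
        prizes = [random.random() for _ in range(n)]
        prophet += max(prizes)
        reward += next((p for p in prizes if p > t), 0.0)  # first prize above t, if any
    return reward / trials, prophet / trials

reward, prophet = threshold_vs_prophet()
print(reward, prophet, reward >= prophet / 2)  # prophet inequality: reward >= prophet/2
```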

Jason argued that the 2-approximate threshold strategy, while suboptimal, is more interesting than the optimal stopping rule. Some of the reasons: it is simple and easy to implement and interpret; it is robust, meaning that it doesn’t change much under perturbations of the distributions; and it is easily adapted to solve variations of the original problem. Note that this “simple is better than optimal” perspective also showed up in Eva’s talk. Jason compared this style of theoretical work in auction design to Picasso’s bulls — one wants just the right level of abstraction to isolate the essential features of good auction formats.

Jason concluded with two applications of approximation in auction design. The first was to revenue maximization in multi-parameter settings, for example when there are multiple goods and each bidder has a different valuation for each. The structure of the optimal auction is poorly understood in such problems (though see Costis’s lecture, below). Assume that each bidder is unit-demand (i.e., wants only one good). One class of simple auctions is sequential posted-price mechanisms: bidders are considered one-by-one, with a bidder offered a price for each remaining good, and the bidder picks its favorite (maximizing value minus price). The results here show that these simple mechanisms can achieve a good constant-factor approximation of the optimal revenue [2]. Interpretations: unit-demand preferences behave similarly to single-parameter preferences and, in such settings, competition between bidders is not essential for near-optimal revenue.

The second application was to prior-independent auctions. These are auctions with robust performance guarantees, in that they are simultaneously near-optimal for a wide range of valuation distributions. A precursor to prior-independent auctions is a beautiful 1996 result of Bulow and Klemperer which states that, for a single-item auction with bidder valuations drawn i.i.d. from some distribution F, the revenue of the second-price auction with n+1 bidders is at least that of an optimal auction (for F) with n bidders. Thus, extra competition is more valuable than information about the underlying valuation distribution. A geometric proof of this result suggests how to design prior-independent auctions in a range of settings, essentially using a guinea pig bidder to supply a single sample from the underlying distribution, which is all that is needed for near-optimal revenue.
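
For intuition, here’s a numerical check of the result in the simplest setting (a sketch assuming i.i.d. uniform [0,1] valuations, for which the optimal auction is a second-price auction with reserve price 1/2):

```python
import random

def revenue_estimates(n=2, trials=300_000):
    """Compare the optimal auction with n bidders (second-price with reserve 1/2,
    which is optimal for U[0,1]) to a plain second-price auction with n+1 bidders."""
    opt = vickrey = 0.0
    for _ in range(trials):
        v = sorted(random.random() for _ in range(n))
        if v[-1] >= 0.5:                                  # reserve is met: item sold
            opt += max(0.5, v[-2]) if n >= 2 else 0.5
        w = sorted(random.random() for _ in range(n + 1))
        vickrey += w[-2]                                  # second-highest bid
    return opt / trials, vickrey / trials

print(revenue_estimates())  # Bulow-Klemperer: the second number is at least the first
```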

Costis Daskalakis (2nd lecture)

Costis’s second lecture was about his recent work in multi-parameter mechanism design (see here and here). For concreteness, we’ll again think about maximizing the seller’s revenue when selling several different goods, where each agent has a private valuation for each but is unit-demand (i.e., only wants one good).

Abstractly, we can think of a (direct-revelation) mechanism as a function from (truthfully reported) valuations to allocations (who gets what) and payments (who pays what), subject to incentive-compatibility constraints (truthful reporting always maximizes expected agent utility, and never yields negative expected utility). In principle, the revenue-maximizing auction can be computed as the solution to a massive linear program, where there are allocation and payment variables for every bidder, good, and (alarmingly) valuation profile. By itself, this observation is neither illuminating nor computationally useful. The research program here is to show that smaller linear programs suffice, and that optimal solutions of these linear programs have informative interpretations.

The key idea is to use a much smaller set of decision variables, as follows. Analogous to the characterization in Jason’s first lecture, an interim allocation rule induced by a mechanism specifies, for each bidder i, valuation vector v (over goods) of i, and good j, the probability that j is allocated to i when its valuation vector is v (where the randomness is over the valuations of the other bidders, and possibly also over coin flips made by the mechanism). Interim payment rules are defined in the same way. We’ll consider a linear program with only these interim values as decision variables — assuming each player’s space of valuations is given explicitly, there are only polynomially many of these. The incentive-compatibility constraints and the objective function (expected sum of payments) can be expressed directly with these variables. The catch is feasibility — after solving this linear program, can we reverse engineer a bona fide mechanism that induces the computed interim allocation rules? The answer is, not necessarily. Thus, we need to add additional constraints to the linear program to enforce the feasibility of the interim allocation rules. These constraints can be explicitly described for single-item auctions (see Rakesh’s talk in the next post), but in general they need to be generated on the fly, using a separation oracle within the ellipsoid method.

In an exciting twist to the plot, the required separation oracle turns out to be precisely the corresponding welfare maximization problem (given some valuations, allocate goods to maximize the sum of valuations). For example, with unit-demand bidders this is simply a maximum-weight matching computation. The first consequence is computational: if welfare-maximization is tractable (or more generally, tractable to approximate to within some factor), then so is revenue-maximization. The second consequence is that revenue-maximizing auctions have a conceptually simple format. To explain it, here’s what’s long been known (since Myerson (1981)) for single-parameter problems: to maximize revenue, just transform the reported valuations into “virtual valuations” and then maximize the corresponding (virtual) welfare. This transformation has a simple closed form [3]. The only new complications in the multi-parameter problems discussed here are that one must use a distribution over virtual valuation transformations, and that these transformations are given by a linear program, not an explicit formula.

1. Amusingly, Jason and I had watched this exact scenario play out two days earlier (with N=2), during the “Mystery Box” contest at a Pirates game.

2. The main steps are: upper-bound the (poorly understood) optimal revenue using that of a (well-understood) single-parameter environment by viewing each agent-good pair as a single-parameter agent (roughly, optimal revenue should only go up from the increased competition); use prophet inequality-style arguments to prove approximation guarantees for sequential posted-price mechanisms in the single-parameter relaxation; and extend these mechanisms to the original multi-parameter problem in a natural way.

3. A valuation v gets mapped to the virtual valuation v - (1-F(v))/f(v), where F and f are the distribution and density functions.
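For example, if valuations are uniform on [0,1], then F(v) = v and f(v) = 1, so the virtual valuation is \phi(v) = 2v - 1; maximizing virtual welfare then amounts to running a second-price auction with reserve price \phi^{-1}(0) = 1/2, which is Myerson’s optimal auction for this distribution.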


This is the second post in a series recapping the talks at the CMU summer school on algorithmic economics (here is the first).  See the summer school site for abstracts, slides, and (eventually) videos.

Avrim Blum

Next I’ll talk about the two lectures by Avrim Blum, a CMU local, about the many connections between online learning and game theory. Avrim began with a masterful development of Littlestone and Warmuth’s randomized weighted majority algorithm [1]. This is an algorithm for making decisions online (e.g., which route to take to work each day) that provides a “no-regret” guarantee: your time-averaged performance (e.g., average travel time) approaches that of the best fixed option in hindsight.

Suppose there are N so-called experts (e.g., newsletters on investing) who each give advice about a binary action (e.g., whether to buy or sell stocks tomorrow). Your job is to choose one of the two actions based on this expert advice. You incur cost 1 if you choose incorrectly, 0 otherwise. To develop some intuition, first make the unrealistic assumption that some expert always predicts correctly — how long before you can identify it? The trick is to always take the action recommended by the majority of the experts who have made no mistakes thus far. Then, every time you make a mistake, you eliminate at least half of the remaining experts from future consideration. Thus, you identify the omniscient expert after at most log N mistakes.
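
A minimal sketch of this “halving” trick (my own code; the experts’ predictions and the true outcomes are supplied as inputs):

```python
def halving(expert_predictions, truths):
    """expert_predictions[t][i] is expert i's binary prediction at time t;
    truths[t] is the correct answer. Assuming some expert is always right,
    this makes at most log2(N) mistakes, where N is the number of experts."""
    alive = set(range(len(expert_predictions[0])))
    mistakes = 0
    for preds, truth in zip(expert_predictions, truths):
        votes = [preds[i] for i in alive]
        guess = max(set(votes), key=votes.count)         # majority of surviving experts
        mistakes += (guess != truth)
        alive = {i for i in alive if preds[i] == truth}  # drop every mistaken expert
    return mistakes
```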

The majority trick seems like a good idea. How can we extend it to the case of no omniscient expert? One idea is to re-start the above algorithm as soon as every expert has made at least one mistake — then the number of mistakes that we make is at most log N times that of the best expert. To do better, we should be more mindful of how experts performed in the past. A simple but great idea is to associate a weight (read: credibility) with each expert, initially 1, and multiply this weight by (1-\epsilon) every time the expert makes a mistake (where \epsilon is a parameter to be tuned later). In the randomized weighted majority algorithm, at every time step you choose an action with probability proportional to the sum of the current weights of the experts that advocate that action.
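
In code, the algorithm is only a few lines (a sketch for the binary-prediction setting; drawing a single expert with probability proportional to its weight induces the same action distribution as the description above):

```python
import random

def randomized_weighted_majority(expert_predictions, truths, eps=0.1):
    """Follows a random expert drawn with probability proportional to its weight
    (equivalently: picks each action with probability proportional to the total
    weight of the experts advocating it), then penalizes mistaken experts."""
    n = len(expert_predictions[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, truth in zip(expert_predictions, truths):
        chosen = random.choices(range(n), weights=weights)[0]
        mistakes += (preds[chosen] != truth)
        weights = [w * (1 - eps) if p != truth else w    # penalize mistaken experts
                   for w, p in zip(weights, preds)]
    return mistakes
```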

The analysis of the algorithm is jaw-droppingly slick. At each time step: if the probability of a mistake by the algorithm is p, then the expected value of the experts’ total weight drops by a (1-\epsilon p) factor. On the other hand, if there is an expert that makes merely m mistakes, then its weight of (1-\epsilon)^m single-handedly provides a lower bound on the experts’ total weight. Using the total weight of the experts as an intermediary, we’ve related the two quantities we care about — the algorithm’s cost and the quality of the best expert. Rewriting (taking logs and using a deft Taylor expansion) shows that the expected number of mistakes made by the algorithm is at most (1+\epsilon)m + \epsilon^{-1} \log N. Choosing \epsilon to balance the two terms gives the desired no-regret guarantee.
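
Spelling out the rewriting: if W_t denotes the total weight after t steps and p_t the algorithm’s mistake probability at step t, then W_T = N \prod_t (1 - \epsilon p_t), while W_T \ge (1-\epsilon)^m. Taking logs and using \ln(1-x) \le -x gives \epsilon \sum_t p_t \le \ln N - m \ln(1-\epsilon), and since -\ln(1-\epsilon) \le \epsilon + \epsilon^2 for \epsilon \le 1/2, the expected number of mistakes \sum_t p_t is at most (1+\epsilon)m + \epsilon^{-1}\ln N (with natural logarithms). Balancing the two error terms by setting \epsilon = \sqrt{\ln N / m} gives additive regret O(\sqrt{m \ln N}).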

The algorithm and analysis are flexible and permit several extensions. For example, to handle general bounded costs (in [0,1], say), just use the weight multiplier (1-\epsilon c_i) when following the ith expert leads to a cost of c_i. An analogous algorithm and analysis handle bounded rewards instead of bounded costs. A more impressive extension is to the “bandit setting”, meaning the only feedback you get is the reward of the chosen strategy (e.g., you only learn the travel time of the route that you chose, not those of unchosen routes). Here Avrim covered the elegant reduction of Auer, Cesa-Bianchi, Freund, and Schapire from the bandit setting to the full-feedback setting. To use a no-regret algorithm as a black box, one must fabricate a reward for every action (the input that the black box expects) from the reward of the action you took (which is all you learn). The first trick is to assign zero reward to all unchosen actions. The second trick is to make the fabricated reward vector an unbiased estimate of the true rewards by dividing the reward of the chosen action by the probability of taking it — note this can increase rewards significantly. The third trick is to control the maximum fabricated reward size by mixing in a little bit of the uniform distribution when choosing actions. The bottom line is that there are no-regret algorithms for the bandit setting, as well; the only catch is that the additive loss blows up from logarithmic in N to polynomial in N.
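
Here’s a sketch of one round of that reduction (my own code; full_feedback_algo stands for a hypothetical no-regret black box with probabilities() and update() methods — the interface is invented for illustration):

```python
import random

def bandit_round(full_feedback_algo, true_rewards, gamma=0.1):
    """One round of the bandit-to-full-feedback reduction. true_rewards[i] is the
    (unobserved) reward of action i; only the chosen action's reward is revealed."""
    n = len(true_rewards)
    p = full_feedback_algo.probabilities()
    p = [(1 - gamma) * pi + gamma / n for pi in p]  # trick 3: mix in uniform exploration
    chosen = random.choices(range(n), weights=p)[0]
    observed = true_rewards[chosen]                 # the only feedback we receive
    fabricated = [0.0] * n                          # trick 1: zero reward for unchosen actions
    fabricated[chosen] = observed / p[chosen]       # trick 2: importance-weight for unbiasedness
    full_feedback_algo.update(fabricated)           # feed the full-feedback black box
    return chosen, observed
```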

Having made all of us experts in online learning, Avrim moved on to applications in game theory. Here’s an amazing one: the existence of a no-regret algorithm, like randomized weighted majority, directly implies von Neumann’s Minimax theorem! Contrast this with the original proof (via a fixed-point theorem) or later proofs that explicitly use linear programming duality. Recall the Minimax theorem says that, under optimal randomized play in a zero-sum two-player game (e.g., randomizing uniformly in rock-paper-scissors), it doesn’t matter if you announce your strategy before or after that of the other player — you get the same expected payoff either way. One inequality is easy — going last can only help you. The reverse inequality can be proved by letting the row and column players play the game repeatedly, each choosing a strategy at each time step using some no-regret algorithm (where strategies correspond to experts, the game payoffs to rewards). The empirical distributions of the row and column players’ chosen strategies approach a minimax pair (equivalently, a Nash equilibrium): essentially, competing with the best expert in hindsight translates to each player doing as well as a best response to the other player’s empirical distribution (i.e., as well as choosing its strategy after that of the other player).
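
To see this numerically, here’s a toy simulation (my own sketch) in which both players of a small zero-sum game run multiplicative-weights updates against each other; the time-averaged payoff approaches the value of the game.

```python
import math

A = [[3, -1],          # row player's payoffs in a zero-sum game; the column
     [-2, 1]]          # player receives the negation; this game's value is 1/7

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

row_w, col_w = [1.0, 1.0], [1.0, 1.0]
avg_payoff, eps, T = 0.0, 0.01, 100_000
for _ in range(T):
    p, q = normalize(row_w), normalize(col_w)
    # expected payoff of each pure strategy against the opponent's current mix
    row_rew = [sum(A[i][j] * q[j] for j in range(2)) for i in range(2)]
    col_rew = [-sum(A[i][j] * p[i] for i in range(2)) for j in range(2)]
    avg_payoff += sum(p[i] * row_rew[i] for i in range(2)) / T
    # multiplicative-weights (Hedge) updates for both players
    row_w = normalize([w * math.exp(eps * r) for w, r in zip(row_w, row_rew)])
    col_w = normalize([w * math.exp(eps * r) for w, r in zip(col_w, col_rew)])

print(avg_payoff)      # approaches the game's value, 1/7
```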

In non-zero-sum games, simultaneous regret-minimization does not generally yield a Nash equilibrium (do you see why?), but the empirical distribution of joint play approaches the set of “coarse correlated equilibria” (see also Eva’s talk), a relaxation of correlated equilibria. There are also simple online algorithms that guarantee convergence of joint play to the smaller set of correlated equilibria — one uses an appropriately more stringent regret notion (swap regret) and constructs online algorithms that have vanishing swap regret (e.g., via a black-box reduction to standard regret-minimization).

Avrim wrapped up by surveying some of his recent work in algorithmic game theory, including item pricing (see here for a cute vertex pricing problem in graphs and here for general results that follow from “random power of 2” pricing schemes) and ways of guiding game dynamics to good Nash equilibria (see here for positive results for fair cost-sharing in networks).

Costis Daskalakis

Costis gave two lectures on different topics.  Here I’ll discuss his lecture on the complexity of equilibria; a future post will discuss his recent work on auctions [2].

Recall from Avrim’s talk the history of the minimax theorem for two-player zero-sum games. von Neumann first proved it using Brouwer’s fixed-point theorem, but later proofs via linear programming duality led to polynomial-time algorithms for computing a Nash equilibrium in such games. The known proofs of Nash’s theorem — stating that every finite noncooperative game has at least one (mixed-strategy) Nash equilibrium — rely on fixed-point theorems. Should we be looking for a new proof of Nash’s theorem, one that results in an efficient algorithm? If not — if computing Nash equilibria is no easier than computing Brouwer fixed points — how would we prove it?

Costis began with a discussion of Brouwer’s fixed-point theorem, which states that every continuous function from a convex compact set to itself has at least one fixed point. He outlined the proof for simplices, which follows from Sperner’s lemma (see also Ariel’s recent post on the whining philosophers problem). This proof reduces approximate fixed-point computation to a certain type of path-following in an exponential-sized graph. In 1990, Papadimitriou defined the complexity class PPAD, essentially as all problems solvable via such a path-following algorithm. Suitable fixed-point computation problems are PPAD-complete — as hard as every problem in PPAD. Computing a Nash equilibrium is a PPAD problem. A tour de force reduction from 2005 (see here and here) proves that computing a Nash equilibrium is PPAD-complete, even in two-player non-zero-sum games. In this sense, fixed-point theorems are fundamental to Nash’s existence result, not an artifact of his proof.

What does this computational intractability mean? Paraphrasing Scarf, it suggests that Nash equilibria cannot always be used for the evaluation of economic activity (since you can’t efficiently solve for your model’s predictions). For similar reasons, Costis argued that the Nash equilibrium concept fails as a universal prediction of human behavior.

So what now? An important research direction is to identify conditions on a game that guarantee tractable equilibrium computation. Costis finished on a positive note with his tractability results for a multi-player generalization of zero-sum games [3]. The model is the following. There is an undirected network. Each node is a player. Each edge is annotated with payoffs for a two-player game, played between its endpoints. Each node picks a single strategy, and its total payoff is the sum of the payoffs from the games in which it participates (one per incident edge). The sum of the players’ total payoffs is assumed to be zero in every strategy profile. (A special case is when every edge’s game is itself zero-sum.) Amazingly, this global zero-sum condition is enough to recover equilibrium tractability: a Nash equilibrium can be computed in polynomial time, the set of equilibria is convex, and play by no-regret learning algorithms converges to the set of equilibria.

1. I fully plan to steal Avrim’s exposition for my own graduate classes.

2. Costis looked surprised when I suggested that almost none of the summer school students would have seen him talk about the complexity of equilibria before — he gave zillions of talks on the topic during 2007–2009, but this is already a new generation of PhD students!

3. Three-player zero-sum games are as general as two-player non-zero-sum games (proof: add a dummy player), so Nash equilibrium computation is PPAD-hard. Recovering tractability thus requires additional restrictions on the game.


Earlier this month, my co-blogger Ariel and I ran a summer school on algorithmic economics [1]. In my highly biased opinion, it was a lot of fun and a big success. For those of you who couldn’t make it, videos will be posted on the summer school site in the near future. The abstracts there should give you a quick idea for the topics covered by the nine speakers.  In a series of posts, I’ll discuss the talk content in a bit more detail. I hope that some of the students who participated in the school discuss their own experiences in the comments. Having now met all of them, I can confidently say that the future of algorithmic game theory/economics is in good hands!

Leeat Yariv

First up was Leeat Yariv from Caltech [2]. She talked about several aspects of modeling and reasoning about social networks. As motivation, she pointed out how interesting phenomena can be explained using non-obvious properties of networks. For example, how did the Medici family grow so powerful, when other families had greater wealth and more seats in the state legislature? Looking at the network of 15th century Florentine marriages provides an explanation — the Medici family was the most central (in a precise sense), enjoying many strategic connections through marriages, and capitalized on this to create a dynasty.

Two major themes in Leeat’s lectures were diffusion and network formation. I’m more familiar with research in computer science on these two topics, and it was cool to hear an economist’s take on them. We first considered diffusion (like the spread of viruses, the adoption of a new technology, the transmission of information, etc.) without an explicit network. This is an old topic — for example, in 1962 Everett Rogers compiled 508 diffusion studies in his book Diffusion of Innovations. Two notable patterns in these studies were “S-shaped adoption” (meaning things are adopted slowly, then very rapidly, and then slowly again) and a correlation between the likelihood of adoption by an individual and the “connectedness” of the individual. The S-curve is nicely explained by Bass’s diffusion model.

Leeat then moved on to her work with Jackson on diffusion in networks. The model is that each node of a network has two strategies, to engage or not. The cost of engaging is some constant. The benefit of engaging is given by a function of a node’s degree and the fraction of its neighbors that are engaged — for example, it could be proportional to the number of engaged neighbors. Rather than keeping track of strategy profile details, a mean field model is used — if x is the overall fraction of engaged nodes, then for the analysis we’ll assume that x is the fraction of engaged neighbors in every node’s neighborhood. Generically, symmetric equilibria correspond to a discrete set of values of x, which alternate between “tipping points” (where after a slight perturbation in x, myopic dynamics lead elsewhere) and “stable equilibria” (where myopic dynamics return to the equilibrium). An interesting problem is to understand ways of modifying preferences or the network structure to increase the amount of engagement. Increasing the degree distribution (in the sense of first-order dominance) or its variance (in the sense of mean-preserving spreads) accomplishes this, in that the values of tipping points and stable equilibria decrease and increase, respectively. Increasing a node’s value for its neighbors’ engagement (via advertising, rebates, etc.) also increases engagement. Simple properties of the degree distribution suggest how to optimally use such increases.
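
To make the fixed-point structure concrete, here’s a toy version of the mean-field calculation (entirely illustrative: the degree distribution, cost, and benefit function are made up, and I gloss over the distinction between a node’s degree distribution and a random neighbor’s):

```python
# toy mean-field diffusion: nodes have degree d with probability p[d]; engaging
# costs c; the benefit to a degree-d node when a fraction x of its neighbors
# engage is b * d * x, so such a node engages exactly when b * d * x >= c
p = {1: 0.1, 3: 0.3, 10: 0.6}
b, c = 1.0, 2.0

def engaged_fraction(x):
    """Fraction of nodes whose best response is to engage, given mean-field x."""
    return sum(prob for d, prob in p.items() if b * d * x >= c)

# symmetric equilibria are fixed points x = engaged_fraction(x); scan for them
xs = [i / 1000 for i in range(1001)]
gaps = [engaged_fraction(x) - x for x in xs]
crossings = [xs[i] for i in range(1, len(xs))
             if (gaps[i - 1] >= 0) != (gaps[i] >= 0)]
print(crossings)  # the crossings alternate between stable equilibria and tipping points
```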

Moving on to network formation, Leeat reviewed some of the best-known generative models, both static (Erdos-Renyi and scale-free networks) and dynamic (like preferential attachment). For recent work, she focused on her model of group formation and its connections to homophily (the widely observed phenomenon that people connected to each other in a social network tend to be similar). The story is as follows. Tonight you plan to go out to dinner and then see some music. You only have enough time to read the food or the music section of the newspaper (for recommendations), not both. Which one you decide to read depends on whether you care more about food or music, as well as on the quality of information you get from your friends — if they’ve already figured out the best concert to go to, your effort is better spent learning about restaurants. The formal model is that agents will partition themselves into groups, with each group sharing all of the information its members have gleaned, and with diminishing returns for additional information on a topic. If you care more about music, you want a majority of your friends to read about music, but you also want a few to read about food. Call a group “stable” if all of its members agree on how many people should read about each topic. Stable groups must have fairly homogeneous preferences, and more so at the extremes (where essentially only one topic matters). Leeat suggested the following interpretation of these results: as connection costs decrease (say, with the Internet), homogeneity among groups increases, especially for extremists.  Obviously, it’s tempting to draw an analogy with political discourse in the 21st century…

Eva Tardos

The next speaker was Eva Tardos, from Cornell. The overarching theme of Eva’s lectures was to understand when simple auction formats perform well. That is, if we take design simplicity as a constraint, how close to an optimal solution can we get? For example, if we sell a bunch of different goods using only single-item auctions (second- or first-price, simultaneously or sequentially), can we approach the (optimal) welfare achieved by the (complex) VCG mechanism?

Eva spent most of her first lecture thoroughly analyzing a concrete example, which introduced several analysis motifs. The example is a special case of a very neat paper by Christodoulou, Kovacs, and Schapira. There are several goods (think potential PhD advisors). Bidders can have different valuations for different goods but only want one good (i.e., the valuation for a bundle is the maximum of the valuations for the goods in the bundle). Suppose the goods are sold using simultaneous second-price auctions — each bidder submits one bid per good, and each good is allocated separately using a Vickrey auction. How good are the Nash equilibria in the corresponding game?

As a warm-up, consider pure Nash equilibria of the full-information version of the game. Here, one starts with a pure Nash equilibrium (that also satisfies a necessary “no overbidding” property), and considers a hypothetical deviation by a player for the item it receives in a welfare-maximizing allocation. Summing over all players and charging lost welfare appropriately back to the equilibrium welfare shows that the welfare of every pure Nash equilibrium is at least 50% of the maximum possible. Next, Eva showed how the structure of this analysis, which only uses the fact that each player has no regret with respect to a single and simple alternative strategy, implies that the 50% bound holds more generally for all “no regret” outcomes. (This encompasses all mixed-strategy Nash and correlated equilibria, plus more; see also Avrim’s lectures for more details.) Still more generally, Eva considered the incomplete information case, which is usually the natural one for auction settings. Here, we assume that each bidder has a common valuation for all goods that it is interested in (and valuation 0 for the rest). There is a different analysis trick in this case, which is to use that each bidder in a Bayes-Nash equilibrium has no regret for bidding half its value on all the items that it is interested in. The result is that the expected welfare of every Bayes-Nash equilibrium is at least 25% of the expected optimal welfare. Interestingly, this holds even if bidders’ valuations are statistically correlated. Eva wrapped up her first lecture by showing how similar ideas can be used to prove bounds on the quality of Bayes-Nash equilibria of the generalized second-price (GSP) sponsored search auction — another example of a simple (second-price) solution being used in lieu of a “complex” (VCG) one — even when advertiser quality factors are unknown (see here for details).
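
In a bit more detail, the charging argument for the 50% bound runs as follows. Let j_i^* be the good that player i receives in a welfare-maximizing allocation, and consider the deviation in which i bids v_i(j_i^*) on j_i^* and zero elsewhere. At a pure Nash equilibrium b, this deviation is unprofitable, so u_i(b) \ge v_i(j_i^*) - \beta_{j_i^*}(b), where \beta_j(b) denotes the highest equilibrium bid on good j (the deviation either wins the good at a price of at most \beta_{j_i^*}(b), or loses it to a competing bid of at least v_i(j_i^*)). Summing over the players gives \sum_i u_i(b) \ge OPT - \sum_j \beta_j(b); no overbidding implies \sum_j \beta_j(b) \le SW(b), and utilities are at most values, so SW(b) \ge \sum_i u_i(b) \ge OPT - SW(b), i.e., SW(b) \ge OPT/2.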

For her second lecture, Eva left behind simultaneous single-item auctions and moved on to the sequential case (see here and here for details). She made a compelling case that first-price auctions need to be used instead of second-price ones — the problem being that the second-price rule permits signalling and threats via high overbids, and these lead to subgame perfect equilibria with very poor welfare. Happily, constant-factor guarantees are possible for equilibria of first-price sequential auctions. The key idea in the analysis is to consider deviations of the form: follow the equilibrium strategy until you reach the auction for the good that you would be assigned in an optimal allocation, and then bid half your value for it.

1. I can’t remember why it used to bother me that junior organizers are expected to do all the work. These days, I think it’s a great tradition!

2. Caveat: I missed Leeat’s first lecture due to a thunderstorm conspiracy, and am basing my summary of it on her slides.


Machine learning conferences tend to be early adopters of innovations in the conference format. A recent example is the so-called Toronto system for matching papers to reviewers via ML-type techniques. It was introduced in a UAI 2011 paper, and this year was already employed by UAI 2012. NIPS 2012 had an unusual double submission policy, which allowed concurrent submission to other venues (provided that acceptance to one venue imposes withdrawal from all others). This caused a bit of mayhem when NIPS papers were concurrently submitted to UAI, which did not endorse the same avant-garde policy.

Not to be outdone by its competitors, ICML 2013 includes the following paragraph in its CFP:

This year there will be three reviewing cycles for main conference papers. This is an experimental step toward a merger of conference and journal formats — ICML may ultimately have six two-month reviewing cycles per year. Accepted papers will appear on-line shortly after acceptance and will be available for citation at that time. As of now we are still calling ICML a conference rather than a journal. A good discussion of the issues of merging conference and journal formats can be found in p40-jagadish.pdf by H. V. Jagadish.

The sentence “As of now we are still calling ICML a conference rather than a journal” seems to imply that the essence of conference-ness lies in the uniqueness of the deadline. Is it naive to think that a conference should be called a “conference” as long as it brings people together?

I read the paper by Jagadish and did some additional research. It turns out that VLDB 2012 had a rolling deadline on the 1st of each month, i.e., 12 submission deadlines in the year preceding the conference. As far as I understand, papers are submitted to a journal-conference hybrid called Proceedings of VLDB, and as of 2011 this is the only way to present papers in the conference. Interestingly, Jagadish’s vision (circa 2008) was to ultimately replace VLDB with a journal called Journal of Data Management Research, but this journal doesn’t seem to exist. Perhaps the reason can be found in the list of possible worries in Jagadish’s paper:

JDMR is not really a journal: To the extent that JDMR is closely associated with conference presentation, and is interested in “conference-style” papers, it is a journal-conference hybrid. I personally believe it is more a journal than a conference proceedings, on account of year-round submissions and multi-round reviews. But there are those in the community who believe strongly that such a hybrid should not be called a journal. This is a dialog in progress.

Of course, this shift is part of larger trends that were discussed on this blog, as well as in many other places. Whether we want to call them journals, conferences, or journferences, personally I think VLDB and ICML are moving in the right direction. To be honest, I would support any solution that ameliorates the IJCAI (2011 deadline: Jan 25), EC (2011 deadline: Feb 4), and AAAI (2011 deadline: Feb 8) deadline frenzy.

Hat tip: Mugizi Rwebangira.


In 2001 I moved into an apartment in Jerusalem with two friends, Naomi and Nir. The apartment had one larger bedroom and two equally-sized, smaller bedrooms. While Naomi and I were running around viewing apartments, Nir was having the time of his life in South America. Nevertheless, when he got back, he argued that he should get the larger room (and pay the same rent) because he did not have a say in choosing the apartment. Strangely enough, this outlandish argument somehow made sense at the time. Nir’s powers of persuasion did not go to waste; today he is a philosopher.

Last week the memories came rushing back when a new CMU grad student—who was attending the summer school on algorithmic economics—asked me for advice on dividing the rent between his housemates. Fortunately, now I know a bit more about fair division than I did in 2001. Here is what I told him.

There is a wonderful 1999 paper by Francis Edward Su, which presents fair division methods through Sperner’s lemma. Suppose for simplicity that there are three housemates: call them Naomi, Nir, and Ariel, and denote them by A, B, and C. We denote the price of room i by x_i, and assume that the prices are nonnegative and the total rent is 1, so x_1+x_2+x_3=1. Our goal is to divide the rent so that each person prefers a different room.

The space S of possible partitions of the rent can be represented as a triangle (or, if you want to be formal, a 2-simplex in \mathbb{R}^3). Now, triangulate S, and assign ownership of each vertex to one of A, B, and C, in a way that each elementary triangle is an ABC triangle (see the corresponding figure in Su’s paper).

Next, we ask the owner of each vertex to tell us which of the rooms he prefers at the prices that are represented by the vertex. In this way, we obtain a new labeling of the triangle by the labels 1, 2, and 3. If we assume that people always want a free room when one is offered to them, the labeling of the edges of the triangle is constrained (because the vertices there have a price of zero for at least one room); again, the paper has a figure showing what the resulting labeling looks like.

A variant of Sperner’s lemma implies that for such a labeling, there must be an elementary 123 triangle, i.e., a “small” triangle whose vertices have different labels. But such a triangle is nothing but three “similar” partitions of the rent in which different people choose different rooms! By choosing a sufficiently fine triangulation we can make these three vertices arbitrarily close, thereby obtaining an approximately envy-free allocation. In the limit (and under an additional mild assumption) we can obtain a completely envy-free allocation. These techniques easily generalize to the case of more than three housemates. By the way, the connection to Sperner also implies computational hardness; and in theory it’s possible to use the completely different techniques of Lipton et al. to obtain an approximately envy-free division.
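
Here’s a rough sketch of the resulting procedure for three housemates (my own code, not Su’s implementation; prefers(person, prices) stands in for asking housemate 0, 1, or 2 which room they would take at the given prices, with the total rent normalized to 1):

```python
from itertools import product

def rent_division(prefers, n=200):
    """Approximate envy-free rent division for three housemates, in the spirit of
    Su's Sperner-based scheme. prefers(person, prices) should return the index
    (0, 1, or 2) of the room that `person` would choose at the given price vector."""
    def owner(i, j):
        # ownership pattern under which every elementary triangle of the
        # triangulation has three distinct owners (an "ABC triangle")
        return (i + 2 * j) % 3

    def query(i, j):
        prices = (i / n, j / n, (n - i - j) / n)
        return prefers(owner(i, j), prices), prices

    triangles = []
    for i, j in product(range(n), repeat=2):
        if i + j <= n - 1:
            triangles.append([(i, j), (i + 1, j), (i, j + 1)])           # "upward"
        if i + j <= n - 2:
            triangles.append([(i + 1, j), (i, j + 1), (i + 1, j + 1)])   # "downward"

    for corners in triangles:
        answers = [query(i, j) for i, j in corners]
        if len({room for room, _ in answers}) == 3:   # fully labeled (123) triangle
            assignment = {room: owner(i, j)           # room -> person who picked it
                          for (i, j), (room, _) in zip(corners, answers)}
            return assignment, answers[0][1]          # prices at one of its corners
    return None
```

For instance, a housemate with values v_1, v_2, v_3 for the rooms and quasi-linear utility would answer each query with a room maximizing v_i - x_i.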

Su actually went ahead and implemented his scheme as a component in his fair division calculator. It’s not much to look at, but keep in mind that it dates back to the era when Java applets were cool. One of the nicest things about the algorithm is that you just need to ask players queries of the following form: “under these prices, which room do you want?” The algorithm iteratively poses these questions to the housemates and uses the answers to refine its solution. Fair division theory has given rise to such amazingly practical and beautiful methods, and I can’t help but wonder whether a flashier fair division calculator (perhaps a smartphone app) would be used by hundreds of thousands.

(A video of Herve Moulin’s mini-course on fair division will soon be available here. For now you can check out his slides, or read a related survey that I wrote a few weeks ago for CACM.)


Yesterday’s fiasco in the Olympic badminton women’s doubles competition provides an unusually vivid illustration of the perils of designing mechanisms that violate incentive constraints. The competition is structured in two stages: a round-robin stage in which the teams compete to earn placement in a second-stage single-elimination tournament. The second-seeded team in the competition, Tian Qing and Zhao Yunlei of China, suffered an upset loss to a Danish team in the first stage. As a result, they were placed as one of the lowest-seeded teams in the second-stage tournament. At that point it became advantageous for the remaining teams that had already secured a second-stage berth to lose their final first-stage match in order to avoid being paired against Qing and Yunlei in the first round of the second stage. Spectators watched in frustration and disbelief as the athletes deliberately served into the net and hit shots out of bounds in an effort to lose the match. Olympic officials afterward decided to disqualify the offending teams from the competition.

There are at least two things that I find striking about this incident. As an algorithmic game theorist (or should I say AGT/E theorist?) I’ve read numerous theory papers containing examples where rational users of non-truthful mechanisms might manipulate them in ways that are shockingly contrary to the mechanism’s intent. Meanwhile, in the real world, people manipulate non-truthful mechanisms all the time but in ways that lack the shock value of the badminton example. (For instance, bidders in a first-price auction shade down their bids a little bit to avoid paying a higher price.) There’s even a growing body of work — for instance, papers such as these — adopting a price-of-anarchy perspective on non-truthful auctions and proving that their equilibria are only moderately inefficient. So what about the aforementioned counterexamples where the outcome of rational behavior is diametrically opposed to the mechanism designer’s intent? The Olympic badminton incident is a welcome example of how these occurrences can happen in real life, not just in the nightmares of theoreticians. Is it time for the International Olympic Committee to start reading research papers on Nonmanipulable Selections from a Tournament?

My other reaction to this news – which stems from my interest in sports, not in game theory – is, “Why doesn’t this happen all the time in sports?” I know of one other example of similarly counterintuitive behavior in sports: the bizarre phenomenon of track sprints in cycling, where racers ride as slowly as possible until the final 200 meters, sometimes coming to a complete standstill, because a racer expends much less effort when “drafting” behind an opponent, creating a disincentive to take the lead.

The bike-racing example is thematically related to what happened in the Olympic badminton tournament, but why don’t we see more occurrences of the exact same type of manipulation as yesterday’s badminton matches? Many sporting tournaments are structured in a way that seems to permit these manipulations. The FIFA World Cup has an initial stage of round-robin tournaments among disjoint groups of teams, with the leading teams in each group advancing to a single-elimination tournament. Have there been cases of teams deliberately losing their final round-robin match to secure a more attractive place in the second-stage tournament? I’m not aware of any such cases, but I’m not well versed in World Cup history. Numerous team sports in the U.S. have a regular season followed by a single-elimination playoff. It’s well known that teams don’t try their hardest after securing a playoff berth (at least in American football where the potential cost of injuring a key player outweighs the potential benefit of improving one’s seeding in the playoff tournament) but that’s different from having an incentive to deliberately lose. The equivalent of the badminton story could easily happen in the NFL: a strong team suffers an anomalous number of losses in the regular season but reaches the playoffs; should opposing teams intentionally lose games to secure a low playoff seeding? I don’t think this has ever happened in the NFL, but why not? One theory is that the ultimate factor motivating teams is profit, not playoff victories, so there is a strong disincentive to engage in behavior that angers the team’s fans. Alternatively the explanation might depend on more superficial aspects of the NFL playoff system, such as the fact that the playoff tree is partitioned into NFC and AFC subtrees irrespective of the relative strengths of the teams, and the fact that the tree doesn’t have constant depth due to wild-card games. Both of these aspects reduce the number of opportunities for an NFL team to perversely benefit in the playoffs by lowering its ranking in the regular season.

