Archive for June, 2010
The theoretical computer science community is off to another round of discussions about its conference system. Lance Fortnow, in his capacity as SIGACT chair, put forward a proposal to reform STOC, a drastic change that could alter the field's whole conferences-as-top-publications culture (something Lance has suggested doing before). He set up a blog with the proposal itself as well as many responses, mostly negative ones. Other recent related posts are here, here, here, and here (added 21/6: and here and here).
While I like the CS conference system, I still think that a change is needed in the pan-TCS conferences as the field is getting bigger and deeper. Like most others who commented on the proposal, I doubt that the suggested new format has enough value to attract attendees. As many suggested, I would start by co-locating with other related conferences, as was done this year (with CCC and EC).
Guest post from Michael Schapira
I just got back from the 10th conference on electronic commerce (EC 10), which is the premier conference in algorithmic game theory. EC 10 was held in Boston from June 9th to 11th (co-located with STOC and CCC) and was chaired by David Parkes (general chair), Chris Dellarocas and Moshe Tennenholtz (program co-chairs). The conference was great. The highlights included, alongside interesting technical talks, several social events and, most notably, a New England lobster dinner boat cruise (!). In addition, Ron Kohavi, who manages the experimentation platform team at Microsoft, gave a very entertaining talk, full of sociological observations and historical anecdotes, about the importance of controlled experiments for the evaluation of industrial innovations.
Of 136 submissions, 45 papers were accepted this year, an all-time-high acceptance rate of roughly 33%. This has generated some discussion (see here). I, for one, think that the PC definitely made the right choice in raising the acceptance rate. My impression is that the higher acceptance rate did not come at the expense of lower quality, and it has greatly contributed to broadening the scope of EC (a long-standing agenda). Indeed, this year’s conference featured interesting papers on topics that were, at best, peripheral at previous ECs (see below). EC is still primarily a CS/econ theory conference, as reflected by the high number of pure theory papers (35/45), but the number of empirical and experimental papers seems to be increasing from year to year (5 accepted papers in each category).
So, what can we learn from this EC about the current trends in algorithmic game theory?
More and more applications. While adword auctions seem to still be the undisputed queen of killer applications of algorithmic game theory, other applications like prediction markets, recommender systems and social networks are slowly but surely taking their place at her side.
Bayesian algorithmic mechanism design. Work in computer science on algorithmic mechanism design, ever since this research agenda was articulated by Nisan and Ronen a decade ago, has mainly focused on worst-case, or “prior-free”, environments, in which the strategic agents have no knowledge whatsoever about each other’s private information. This is in sharp contrast to the standard assumption in economics that agents have (common) probabilistic “beliefs” about each other’s private information. The attempt to reconcile these two conflicting worldviews is the motivation for much recent work by computer scientists (see recent and not so recent work by Jason Hartline and others). Examples from this EC include work on mechanisms for single-parameter domains that do not know the underlying distribution but perform (almost) as well nonetheless, a study of risk-neutral vs. risk-averse bidders, and a comparison between the power of deterministic and randomized mechanisms.
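To make the prior-free/Bayesian contrast concrete, here is a small simulation sketch of my own (not from any EC paper): a second-price auction is truthful with no knowledge of the value distribution, but a Bayesian designer who knows bidders' values are i.i.d. uniform on [0, 1] can add Myerson's optimal reserve price of 0.5 and extract strictly more expected revenue.

```python
import random

def second_price_revenue(bids, reserve=0.0):
    """Revenue of a second-price (Vickrey) auction with an optional reserve.
    The highest bidder wins if she clears the reserve, and pays the larger
    of the reserve and the second-highest bid."""
    bids = sorted(bids, reverse=True)
    if bids[0] < reserve:
        return 0.0  # no sale
    return max(reserve, bids[1])

random.seed(0)
TRIALS = 100_000

# Prior-free: the mechanism knows nothing about the value distribution.
prior_free = sum(
    second_price_revenue([random.random(), random.random()])
    for _ in range(TRIALS)
) / TRIALS

# Bayesian: for i.i.d. uniform [0, 1] values, Myerson's optimal reserve
# is 0.5 (where the virtual value 2v - 1 crosses zero).
bayesian = sum(
    second_price_revenue([random.random(), random.random()], reserve=0.5)
    for _ in range(TRIALS)
) / TRIALS

print(f"no reserve:  {prior_free:.3f}")  # expected revenue is 1/3
print(f"reserve 0.5: {bayesian:.3f}")    # expected revenue is 5/12
```

The two averages converge to the textbook values 1/3 and 5/12, illustrating the gap that prior-free mechanism design tries to close without assuming the distribution is known.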
Computational social choice. Economic voting theory dates back to Arrow’s fundamental impossibility result from 1951. In recent years, voting theory has also been examined through the computational lens. However, for largely historical and sociological reasons, this research has taken place mainly within the Artificial Intelligence community. EC 10 looks like the beginning of the end of this separation between the theoretical CS and AI communities (in this respect, at least…). Vince Conitzer and Ariel Procaccia, two active researchers in this field, gave a tutorial on computational voting theory, and an entire EC session was devoted to the subject. In addition, some of the new work on approximate mechanism design without money can be regarded as an interface between algorithmic mechanism design and computational social choice.
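The kind of obstruction Arrow's theorem formalizes already shows up in the classic Condorcet cycle, which takes only a few lines to verify (an illustrative sketch of my own, not taken from the tutorial):

```python
# Classic Condorcet profile: three voters with cyclically shifted rankings.
# Each inner list ranks the candidates from most to least preferred.
profile = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(profile, x, y):
    """True iff a strict majority of voters ranks x above y."""
    wins = sum(ranking.index(x) < ranking.index(y) for ranking in profile)
    return wins > len(profile) / 2

# Pairwise majority comparisons form a cycle: A beats B, B beats C,
# and C beats A, each by a 2-to-1 margin.
assert majority_prefers(profile, "A", "B")
assert majority_prefers(profile, "B", "C")
assert majority_prefers(profile, "C", "A")

# Hence no candidate beats every other candidate head-to-head:
candidates = {"A", "B", "C"}
condorcet_winner = [
    c for c in candidates
    if all(majority_prefers(profile, c, d) for d in candidates - {c})
]
print(condorcet_winner)  # [] : the majority relation is cyclic, no winner
```

Much of computational voting theory starts from here, asking how hard it is to compute (or manipulate) the rules designed to work around such cycles.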
The following are some of the papers I especially liked:
- Truthful Mechanisms With Implicit Payment Computation, Moshe Babaioff, Bobby Kleinberg and Alex Slivkins. This paper, which received the best paper award, addresses the following interesting question. Truthful mechanisms output an outcome (an allocation of items, etc.) and payments. What is the additional computational/communication cost of computing truth-inducing payments? It was recently shown here that for deterministic truthful mechanisms this overhead can be relatively large. This paper shows that if one is willing to settle for the weaker notion of truthfulness in expectation (and the possibility of rebates), then this “overhead of truthfulness” goes down to zero.
- Market Design for a P2P Backup System, Sven Seuken, Denis Charles, Max Chickering and Sidd Puri. Motivated by the deficiencies of existing information storage methods (CDs, data centers), this work presents a market-based P2P backup system design. The thing I like most about this paper is that, aside from analyzing the theoretical aspects of the proposed design (equilibrium analysis, etc.), much thought has been given to practical implementation details (user interface, deployability), and the proposal is also being evaluated empirically to establish its merits. Such work is still quite rare at EC.
- Matching In Networks with Bilateral Contracts, John W. Hatfield and Scott D. Kominers. This paper presents a clean and general model that encompasses several (all?) classical models in matching theory and establishes results that generalize existing results in a variety of well-studied contexts.