
Archive for December, 2009

AGT decade in review

The end of the decade is always a good opportunity to look back on every aspect of our lives, and this blog gives me a forum to do so for the field of algorithmic game theory.

At the beginning of the decade there were scattered attempts to model and study issues related to the Internet using a combination of computer science with game theory or economic theory. By the end of the decade a full-blown academic discipline has emerged with its own language (game theory + information economics + algorithms + complexity), its own conferences (EC, SAGT, WINE — but no journals), graduate courses, a steady stream of PhDs (some of whom have won awards for their dissertations), and in general a pretty lively community. I would say that this field has been quite warmly accepted by the theoretical CS community, by the AI community, and by the Game-Theory community, but has not yet won the hearts or minds of many economists, and is just now starting to be embraced by the Networking community. Despite the rather common language used on this border between economics, game theory, and computation, there are still pretty well-distinguished sub-communities according to the “land of origin”: theoretical CS, AI, or Networking (with a much smaller number of new immigrants from the lands of economics or game theory).

It is quite clear that this research community currently has mostly a CSey “piranha culture” where packs of researchers quickly devour a problem, each of them taking a small bite that is immediately sent to a conference. Given the initial availability of low-hanging fruit in this uncharted territory, this culture seems to have served the field well, despite its many shortcomings. At this point AGT seems to have become rather faddish (in the appropriate communities, that is), which calls for increased diligence in evaluating new research.

So what did we have in AGT this decade?

Much work during this decade has gone into determining the computational complexity of various types of equilibria in games and in markets. The class PPAD has been pinpointed as capturing the complexity of many of these problems, and in particular of the problem of finding a Nash equilibrium. This last result, by Daskalakis, Goldberg, and Papadimitriou and by Chen and Deng, gets the “AGT result of the decade” award. While there are still open problems regarding the complexity of equilibria, especially in markets, it seems to me that — in the limited sense of pinpointing the right complexity class — most of the picture is understood at this point, with the most significant remaining open problem being the computation of approximate Nash equilibria.

Starting with Koutsoupias–Papadimitriou, much attention has gone into the analysis of the costs that the game-theoretic point of view levies on computational problems. The catchy name “Price of Anarchy”, together with the convincing results of Roughgarden and Tardos, who analyzed these costs in models of network congestion, has been extremely influential on the field. This last paper gets the award for “most influential AGT paper of the decade”. In the early part of the decade using the term “price of X” for many X’s was de rigueur in certain circles, but by the end of the decade the naming template has become passé, while the type of analysis itself — sans the catchy name — became standard.
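For readers who have not seen this type of analysis, here is a toy illustration of what a price-of-anarchy calculation looks like. The instance is the classical Pigou two-link network (the example is standard; the code is just my own sketch): a unit mass of traffic chooses between a link of constant latency 1 and a link whose latency equals its load.

```python
# Pigou's two-link network: a unit mass of selfish traffic travels from s to t
# over two parallel links.  Link 1 has constant latency 1; link 2 has latency
# equal to the fraction x of the traffic that uses it.

def total_cost(x):
    """Total latency experienced when a fraction x of the traffic uses link 2."""
    return (1 - x) * 1 + x * x

# At equilibrium every driver uses link 2 (its latency never exceeds 1),
# so x = 1 and the total cost is 1.
equilibrium_cost = total_cost(1.0)

# The social optimum splits the traffic; a crude grid search finds the
# minimizing split x = 1/2, with total cost 3/4.
optimal_cost = min(total_cost(x / 1000) for x in range(1001))

print(equilibrium_cost / optimal_cost)  # ~4/3: the price of anarchy here
```

The resulting ratio of 4/3 is exactly the worst-case bound that Roughgarden and Tardos proved for selfish routing with linear latency functions.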

Auctions of various forms were a central object of study during this decade. In the first half of the decade much attention went into the combinatorial-auction formalism, which became the paradigmatic problem in Algorithmic Mechanism Design. It is probably fair to summarize that most computational issues have been resolved, while most strategic questions have remained open. The failure to understand the extent of the clash between the strategic and computational considerations applies to the rest of Algorithmic Mechanism Design as well, despite much work and some mild progress (e.g. in single-parameter domains, maximum-in-range mechanisms, and randomized mechanisms). The basic question of “how well can computationally-efficient incentive-compatible combinatorial auctions or incentive-compatible scheduling mechanisms perform” remains as open as it was at the beginning of the decade, and gets my (biased) “AGT open problem of the decade” award.
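To make the clash concrete, here is a minimal sketch of a brute-force VCG combinatorial auction on a toy instance of my own (bidders report bundle values; unreported bundles are valued by their best reported sub-bundle). Exact VCG like this is incentive compatible, but its winner-determination step searches over all ways of dividing the items among the bidders; it is exactly this step that becomes computationally infeasible as the number of items grows.

```python
# Brute-force VCG combinatorial auction on a toy two-item instance (mine).
from itertools import combinations

ITEMS = frozenset({"x", "y"})

# Reported valuations: bidder -> {bundle: value}.
bids = {
    "alice": {frozenset({"x", "y"}): 10},   # wants the pair only
    "bob":   {frozenset({"x"}): 6},
    "carol": {frozenset({"y"}): 7},
}

def value(bidder, bundle):
    """Value of a bundle: best reported sub-bundle (free disposal)."""
    return max((v for b, v in bids[bidder].items() if b <= bundle), default=0)

def best_allocation(bidders):
    """Exhaustively search allocations of ITEMS among `bidders` (exponential)."""
    def rec(remaining, bidders_left):
        if not bidders_left:
            return 0, {}
        b, rest = bidders_left[0], bidders_left[1:]
        best = (float("-inf"), None)
        for k in range(len(remaining) + 1):
            for bundle in map(frozenset, combinations(remaining, k)):
                w, alloc = rec(remaining - bundle, rest)
                w += value(b, bundle)
                if w > best[0]:
                    best = (w, {**alloc, b: bundle})
        return best
    return rec(ITEMS, list(bidders))

welfare, allocation = best_allocation(list(bids))
payments = {}
for b in bids:
    others = [x for x in bids if x != b]
    welfare_without_b, _ = best_allocation(others)
    others_welfare_with_b = welfare - value(b, allocation[b])
    payments[b] = welfare_without_b - others_welfare_with_b   # Clarke payment

print(allocation)   # bob gets {x}, carol gets {y}, alice gets nothing
print(payments)     # each bidder pays the externality he imposes on the others
```

Replacing the exact search with an efficient approximation algorithm is what, in general, destroys the incentive properties, which is the heart of the open problem above.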

The second half of the decade saw much of the focus shift to “ad auctions” of various kinds, an application that obviously wins the “killer AGT application of the decade” award (rather than spectrum auctions, which seemed the likely candidate at the beginning of the decade). While the driver of ad-auction research is certainly the multi-billion-dollar Internet advertising industry that has hired droves of AGT researchers, much of this work focuses on issues of basic theoretical interest in settings of repeated auctions, often departing from the basic models of dominant-strategy worst-case analysis and opting for more delicate models that capture the desired issues better (and in doing so also influencing the rest of algorithmic mechanism design).

The field of AGT is obviously much influenced by developments in its parent fields of computer science, economics, and game theory, as well as in its basic domain of application, the Internet. Significant trends include the continuing march of the Internet toward becoming a platform for everything (from social networks to cloud computing) and the growing roles of information economics and experimental economics within the fields of economics and game theory.

The rest of the world has also changed, and despite many tragedies, conflicts, set-backs, and especially dangers, the main story of the decade is a triumph: the amazing rise of Asia, which has led to the fastest increase in human welfare in history and may give some hope to the billion people who still remain poor.

On this happy note let me wish a happy new year — and decade — to my readers.



A workshop on decentralized mechanism design, distributed computing, and cryptography will be held at the IAS in Princeton on June 3-4, 2010. The organizers request those interested in attending to send them an email by January 25th.


The classic Gibbard-Satterthwaite theorem states that every nontrivial voting method can be manipulated. In the late 1980s an approach to circumvent this impossibility was suggested by Bartholdi, Tovey, and Trick: maybe a voting method can be found whose manipulation is computationally hard — and thus effective manipulation would be practically impossible? Indeed, voting methods whose manipulation problem is NP-complete were found by them as well as in later works. However, this result is not really satisfying: the NP-completeness only ensures that manipulation cannot be solved efficiently in the worst case; it is quite possible that for most instances manipulation is easy, and thus the voting method can still be effectively manipulated. What would be desired is a voting method where manipulation is computationally hard everywhere, except for a negligible fraction of inputs. In the last few years several researchers have indeed attempted “amplifying” the NP-hardness of manipulation in a way that may get closer to this goal.
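To illustrate what “manipulation” means here, consider a toy Borda-count instance (my own example, with ties broken alphabetically, not taken from any of the papers mentioned): by burying the sincere winner at the bottom of his reported ranking, a single voter can change the outcome in his favor.

```python
# A toy Borda-count instance showing how a single voter can manipulate the
# outcome by misreporting his ranking.  A ballot ranks the m candidates best
# to worst and awards them m-1, m-2, ..., 0 points; ties are broken in favor
# of the alphabetically-first candidate.
from itertools import permutations

CANDIDATES = ["A", "B", "C", "D"]

def borda_winner(ballots):
    scores = {c: 0 for c in CANDIDATES}
    for ballot in ballots:
        for points, candidate in enumerate(reversed(ballot)):
            scores[candidate] += points
    return max(CANDIDATES, key=lambda c: scores[c])

# Three sincere voters plus one would-be manipulator.
others = [
    ["B", "A", "C", "D"],
    ["B", "C", "A", "D"],
    ["A", "B", "D", "C"],
]
true_pref = ["A", "B", "C", "D"]                 # the manipulator's honest ranking
rank = {c: i for i, c in enumerate(true_pref)}   # lower index = more preferred

honest_winner = borda_winner(others + [true_pref])   # -> "B"

# Search all 4! possible insincere ballots for one the manipulator prefers.
for ballot in permutations(CANDIDATES):
    w = borda_winner(others + [list(ballot)])
    if rank[w] < rank[honest_winner]:
        print(f"reporting {list(ballot)} elects {w} instead of {honest_winner}")
        break
```

The question raised by Bartholdi, Tovey, and Trick is whether, for a suitable voting rule, finding such a beneficial misreport can be made computationally intractable.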

The new paper of Marcus Isaksson, Guy Kindler, and Elchanan Mossel titled The Geometry of Manipulation – a Quantitative Proof of the Gibbard-Satterthwaite Theorem shatters these hopes. Solving the main open problem in a paper by Ehud Friedgut, Gil Kalai, and myself, they show that every nontrivial neutral voting method can be manipulated on a non-negligible fraction of inputs. Moreover, a random flip of the order of 4 (four) alternatives will be such a manipulation, again with non-negligible probability. This provides a quantitative version of the Gibbard-Satterthwaite theorem, just like Gil Kalai previously obtained a quantitative version of Arrow’s theorem.


Michal Feldman wrote  a short report from WINE 09:

The Workshop on Internet and Network Economics was held last week at the Sapienza University of Rome. The conference was extremely well organized by PC chair Stefano Leonardi, and the papers’ quality was high relative to previous years. The research areas were very diverse, more or less a representative sample of research in algorithmic game theory, including, among others, mechanism design, congestion games, strong equilibria, prediction markets, fair allocations, and ad auctions. It seems that there is no longer a real distinction between WINE and SAGT in topics, participants, and quality.

Peyton Young gave a keynote talk on adaptive learning in systems of interactive agents, which surveyed a family of adaptive learning rules under which a Nash equilibrium will be played a large proportion of the time as an unintended consequence of individual adaptation.

The session on envy-free mechanisms included papers both in the traditional cake-cutting (divisible) setting and the setting of indivisible goods with payments. David Kempe introduced an interesting model of envy-free mechanisms under budget constraints, and showed positive algorithmic results.

Angelina Vidali presented her work on the geometry of truthfulness, in which she introduces a new geometric method for characterizing the allocation graph of truthful mechanisms. This work may prove useful in improving the lower bound for the job-scheduling problem on unrelated machines.

Nicole Immorlica gave a talk on externalities in keyword auctions, where she presented empirical results, based on Microsoft Live, indicating that externality effects (i.e., the ads in the other sponsored positions) are statistically significant.


The Fifth Israeli Computational Game Theory Seminar was held today at the Microsoft R&D center in Herzliya. This was a pretty large affair with about 140 registered participants from all over Israel (and a few from abroad), but with quite a friendly feeling, as the Israeli community is pretty closely knit. There were five talks, chosen by the organizers (Moshe Tennenholtz, Michal Feldman, and myself) in an attempt to optimize interest for the audience while maintaining some diversity in terms of research areas, speaker affiliation, etc.

The first talk was by Dov Samet, a game-theorist/economist/decision-theorist, who talked about a knowledge-based formalization of “the sure thing principle”. This principle states that if you would do X knowing Y, and would do X knowing not(Y), then you should do X even without knowing whether Y holds. This was formalized and shown to be equivalent to another formalization in terms of “kens” (a formal notion capturing bodies of knowledge). Most interestingly, a generalization of the latter turns out to be equivalent to not being able to “agree to disagree” in a knowledge-based abstraction à la Aumann’s classical paper. This talk was somewhat out of the usual scope of the AGT community, which, like most of the TCS community, is not really into “knowledge” formalisms.

The second talk was by Seffi Naor, who talked about best-response dynamics in network creation games. He studied a game in which each player needs to acquire a path in a graph from his origin node to a common root, where each edge has a cost, and if several players share an edge they split its cost evenly among them. It is known that this game has a price of anarchy of n but a price of stability of 1 (i.e., there are both extremely good and extremely bad Nash equilibria, in terms of the total social cost). Seffi asked how good the equilibrium that best-response dynamics converges to is. This of course depends on the starting point; he considered the “empty” starting point, for which he was able to show a polylogarithmic loss of efficiency relative to the optimum.
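For readers unfamiliar with this kind of game, here is a rough sketch of best-response dynamics in a standard toy instance of fair cost sharing (my own illustration, not the exact game or graph from the talk): each player i can connect to the root either via a private edge of cost 1/i or via a shared edge of cost 1 + ε whose cost is split evenly among its users.

```python
# Best-response dynamics in a toy fair cost-sharing game: player i connects
# to the root either through a private edge of cost 1/i or through a shared
# edge of cost 1 + EPS whose cost is split evenly among the players using it.
EPS = 0.05
N = 4
PRIVATE, SHARED = "private", "shared"

def shared_cost(strategies, extra=0):
    """Per-player share of the shared edge if `extra` more players join it."""
    users = sum(1 for s in strategies.values() if s == SHARED) + extra
    return (1 + EPS) / users if users else float("inf")

def best_response(i, strategies):
    private_cost = 1.0 / i
    others = {j: s for j, s in strategies.items() if j != i}
    join_cost = shared_cost(others, extra=1)   # player i's share if he joins
    return PRIVATE if private_cost <= join_cost else SHARED

# Start from the "empty" configuration: players enter one by one, then keep
# updating in round-robin order until no player wants to switch.
strategies = {}
for i in range(1, N + 1):
    strategies[i] = best_response(i, strategies)

changed = True
while changed:
    changed = False
    for i in range(1, N + 1):
        br = best_response(i, strategies)
        if br != strategies[i]:
            strategies[i], changed = br, True

social_cost = sum(1.0 / i for i, s in strategies.items() if s == PRIVATE)
if any(s == SHARED for s in strategies.values()):
    social_cost += 1 + EPS
print(strategies)                            # every player ends up on his private edge
print(social_cost, "vs optimum", 1 + EPS)    # ~2.08 vs 1.05
```

In this instance, the dynamics started from the empty configuration leaves every player on his private edge, a factor of roughly H_n away from the optimum in which everyone shares the single edge, illustrating the kind of efficiency loss the talk quantifies for general graphs.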

The third talk was by Noga Alon, who considered what happens in a win-lose zero-sum game when one player gets to play after seeing some information regarding the realization of the mixed strategy of the other. Two models were considered, and in both of them (different) sharp thresholds were shown: when the number of bits of information leaked is small, there is a mild loss of value, and at a certain point the loss becomes severe.

The fourth talk, by Svetlana Olonetsky, considered mechanisms that are both incentive compatible and envy-free. Her model considers allocations of heterogeneous items among agents whose valuations are additive over the items, but only as long as at most k items are obtained. While each of these two requirements (incentive compatibility and envy-freeness) can be achieved separately, it is still open whether they can be achieved by the same mechanism, and partial results toward this open problem were shown.

The final talk was by Joe Halpern, who suggested several new equilibrium concepts that go beyond the Nash equilibrium to address issues motivated by computational game theory. The first takes into account not just selfish deviations by the players but also worst-case deviations by some players (in coalition), a model in which he can apply cryptographic protocols for secure multi-party computation. (He also suggested thinking about the fact that sometimes we have some “obedient” non-rational players who just do as told, perhaps helping the whole system function well.) The second concept provides a rather general framework for capturing the complexity of computation as a cost within the game. The third, which he only mentioned briefly, models the fact that players may not even be fully aware of the game when they play it.

While these talks are hardly a representative sample of AGT research, I was still left with the feeling that the research area is widening. While just a few years ago much of AGT research used a rather small number of categories and models (e.g. price of anarchy, complexity of equilibrium computation, incentive-compatible mechanism design), the scope is now widening with an influx of new models and issues. I like this exploration trend, but at some point the community will likely need to re-focus on a small set of basic and robust notions.


STACS accepted papers

STACS accepted papers have been announced.  Two AGT-related ones:

  1. Paul Duetting, Monika Henzinger and Ingmar Weber: Sponsored Search, Market Equilibria, and the Hungarian Method
  2. Felix Brandt, Felix Fischer and Markus Holzer: On Iterated Dominance, Matrix Elimination, and Matched Paths


Crowdsourcing has just been given a visibility boost by DARPA’s Red Balloon contest, which was won by the MIT team. At the same time, Amazon’s well-established (since 2005!) platform for crowdsourcing, the Mechanical Turk, is gaining attention as a platform for conducting behavioral experiments, especially in behavioral economics and game theory.

Named after the 18th-century human-powered chess-playing “machine” called “the Turk”, this platform allows “requesters” to submit “Human Intelligence Tasks” (HITs) to a multitude of human “workers” who solve them for money. A sample of recent typical tasks includes tagging pictures (for 3 cents), writing (and posting on the web) a short essay with a link (for 20 cents), or correcting spelling (for 2 cents, in Japanese). This allows brief and cheap employment of hundreds and thousands of people to work on simple, Internet-doable, low-level knowledge tasks. You may demand various “qualification exams” from these workers, and design such quals of your own. Obviously workers are in it for the money, but apparently not just for that.

Recently, the Mechanical Turk has been used to conduct behavioral experiments. Gabriele Paolacci is methodologically repeating experiments of Kahneman and Tversky and reporting on them in his blog. Panos Ipeirotis reports on his blog on studies of some aspects of the Mechanical Turk, as well as results of various behavioral game-theory experiments run on it. I’ve seen others report on such experiments too. Markus Jakobsson from PARC gives general tips for conducting such human experiments using the Mechanical Turk.

Turk-based behavioral experimentation has the immense appeal of being cheap, fast, and easy to administer. There are obviously pitfalls, such as getting a good grasp of the population, but the same is true of any experimental setup. Such a platform may be especially appropriate for Internet-related behavioral experiments, such as figuring out bidding behavior in online auctions, or how best to frame a commercial situation on a web page. Could this be a tool for the not-quite-yet-existent field of experimental AGT?

