The end of the decade is always a good opportunity to look back on every aspect of our lives, and this blog gives me a forum to do so for the field of algorithmic game theory.
At the beginning of the decade there were scattered attempts to model and study issues related to the Internet using a combination of computer science with game theory or economic theory. By the end of the decade a full-blown academic discipline has emerged with its own language (game-theory + information economics + algorithms + complexity), its own conferences (EC, SAGT, WINE — but no journals), graduate courses, a steady stream of PhDs (some of whom have won awards for their dissertations), and in general a pretty lively community. I would say that this field has been quite warmly accepted by the theoretical CS community, by the AI community, and by the Game-Theory community, but has not yet won the hearts or minds of many economists, and is just now starting to be embraced by the Networking community. Despite the rather common language used on this border between economics, game-theory, and computation, there are still pretty well-distinguished sub-communities according to the “land of origin”: theoretical CS, AI, or Networking (with a much smaller number of new immigrants from the lands of economics or game theory).
It is quite clear that this research community currently has a mostly CS-ey “piranha culture”, where packs of researchers quickly devour a problem, each of them taking a small bite that is immediately sent to a conference. Given the initial availability of low-hanging fruit in this uncharted territory, this culture seems to have served the field well, despite its many shortcomings. At this point AGT seems to have become rather faddish (in the appropriate communities, that is), which calls for increased diligence in evaluating new research.
So what did we have in AGT this decade?
Much work during this decade has gone into determining the computational complexity of various types of equilibria in games and in markets. The class PPAD has been pinpointed as capturing the complexity of many of these problems, and in particular of the problem of finding a Nash equilibrium. This last result, due to Daskalakis, Goldberg, and Papadimitriou and to Chen and Deng, gets the “AGT result of the decade” award. While there are still open problems regarding the complexity of equilibria, especially in markets, it seems to me that — in the limited sense of pinpointing the right complexity class — most of the picture is understood at this point, with the most significant remaining open problem being the computation of approximate Nash equilibria.
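For concreteness, here is the standard notion of approximation involved (a textbook definition, with payoffs normalized to $[0,1]$): a mixed-strategy profile $(x_1,\dots,x_n)$ is an $\epsilon$-Nash equilibrium if no player can gain more than $\epsilon$ by unilaterally deviating, i.e., for every player $i$ and every alternative mixed strategy $x_i'$,
$$u_i(x_i, x_{-i}) \;\ge\; u_i(x_i', x_{-i}) - \epsilon.$$
The open problem is, roughly, whether such a profile can be computed in polynomial time for every fixed $\epsilon > 0$, i.e., whether a PTAS exists.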
Starting with Koutsoupias and Papadimitriou, much attention has gone into the analysis of the costs that the game-theoretic point of view levies on computational problems. The catchy name “Price of Anarchy”, together with the convincing results of Roughgarden and Tardos, who analyzed these costs in models of network congestion, has been extremely influential on the field. This last paper gets the award for “most influential AGT paper of the decade”. In the early part of the decade, using the term “price of X” for many X’s was de rigueur in certain circles; by the end of the decade the naming template had become passé, while the type of analysis itself — sans the catchy name — has become standard.
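To recall what this type of analysis measures (a textbook definition and the classic illustrative example, not specific to any one paper): the price of anarchy of a game is the ratio between the cost of the worst equilibrium and the optimal cost,
$$\mathrm{PoA} \;=\; \frac{\max_{s\ \mathrm{an\ equilibrium}} \mathrm{cost}(s)}{\min_{s^*} \mathrm{cost}(s^*)}.$$
In Pigou’s two-link network, a unit of traffic chooses between a link of constant latency $1$ and a link whose latency equals the fraction $x$ of traffic using it. In equilibrium all traffic uses the second link, at cost $1$; the optimum splits the traffic evenly, at cost $\frac12 \cdot 1 + \frac12 \cdot \frac12 = \frac34$; the resulting ratio of $4/3$ is exactly the worst-case bound that Roughgarden and Tardos proved for selfish routing with affine latencies.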
Auctions of various forms were a central object of study during this decade. In the first half of the decade much attention went into the combinatorial auction formalism, which became the paradigmatic problem in Algorithmic Mechanism Design. It is probably fair to summarize that most computational issues have been resolved, while most strategic questions have remained open. The failure to understand the extent of the clash between strategic and computational considerations applies to the rest of Algorithmic Mechanism Design as well, despite much work and some mild progress (e.g., in single-parameter domains, maximum-in-range mechanisms, and randomized mechanisms). The basic question of “how well can computationally-efficient incentive-compatible combinatorial auctions or incentive-compatible scheduling mechanisms perform” remains as open as it was at the beginning of the decade, and gets my (biased) “AGT open problem of the decade” award.
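For readers from outside the area, the underlying allocation problem, in its standard formulation, is this: $m$ items are to be allocated among $n$ bidders, where bidder $i$ has a private valuation $v_i(S)$ for every bundle $S$ of items, and the goal is to find a partition $(S_1,\dots,S_n)$ of the items maximizing the social welfare $\sum_i v_i(S_i)$. This optimization is NP-hard (and hard to approximate well for general valuations), while the generic route to incentive compatibility, the VCG mechanism, requires solving it exactly; the open question above asks to what extent computational efficiency and incentive compatibility can be achieved together.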
The second half of the decade saw much of the focus shift to “ad auctions” of various kinds, an application that obviously wins the “killer AGT application of the decade” award (rather than the spectrum auctions, which seemed the likely candidate at the beginning of the decade). While the driver of ad auction research is certainly the multi-billion-dollar Internet advertising industry that has hired droves of AGT researchers, much of this work focuses on issues of basic theoretical interest in settings of repeated auctions, often departing from the basic models of dominant-strategy worst-case analysis in favor of more delicate models that capture the desired issues better (and in doing so also influencing the rest of algorithmic mechanism design).
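To give the flavor of the models involved (the canonical example, not spelled out above, is the “generalized second-price” (GSP) auction for sponsored search, analyzed for instance by Edelman, Ostrovsky, and Schwarz and by Varian): with $k$ ad slots whose click-through rates satisfy $\alpha_1 \ge \dots \ge \alpha_k$, advertisers submit per-click bids, the $j$-th highest bidder is placed in slot $j$ and pays the $(j+1)$-st highest bid for each click she receives, so her utility is $\alpha_j(v_j - b_{j+1})$, where $v_j$ is her value per click and $b_{j+1}$ the next bid. Unlike VCG, GSP is not truthful, which is part of what makes its repeated-play equilibria such an interesting object of study.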
The field of AGT is obviously much influenced by developments in its parent fields of computer science, economics, and game theory, as well as in its basic domain of application, the Internet. Significant trends include the continuing march of the Internet toward becoming a platform for everything (from social networks to cloud computing) and the growing roles of information economics and experimental economics within economics and game theory.
The rest of the world has also changed, and despite many tragedies, conflicts, setbacks, and especially dangers, the main story of the decade is a triumph: the amazing rise of Asia, which has led to the fastest increase in human welfare in history and may give some hope to the billion people who still remain poor.
On this happy note let me wish a happy new year — and decade — to my readers.