The end of the decade is always a good opportunity to look back on every aspect of our lives, and this blog gives me a forum to do so for the field of algorithmic game theory.
At the beginning of the decade there were scattered attempts to model and study issues related to the Internet using a combination of computer science with game theory or economic theory. By the end of the decade a full-blown academic discipline had emerged with its own language (game theory + information economics + algorithms + complexity), its own conferences (EC, SAGT, WINE — but no journals), graduate courses, a steady stream of PhD's (some of whom have won awards for their dissertations), and in general a pretty lively community. I would say that this field has been quite warmly accepted by the theoretical CS community, by the AI community, and by the game-theory community, but has not yet won the hearts or minds of many economists, and is just now starting to be embraced by the networking community. Despite the common language spoken on this border between economics, game theory, and computation, there are still pretty well-distinguished sub-communities according to "land of origin": theoretical CS, AI, or networking (with a much smaller number of new immigrants from the lands of economics or game theory).
It is quite clear that this research community currently has a mostly CS-y "piranha culture", where packs of researchers quickly devour a problem, each taking a small bite that is immediately sent to a conference. Given the initial abundance of low-hanging fruit in this uncharted territory, this culture seems to have served the field well, despite its many shortcomings. At this point AGT seems to have become rather faddish (in the appropriate communities, that is), which calls for increased diligence in evaluating new research.
So what did we have in AGT this decade?
Much work during this decade has gone into determining the computational complexity of various types of equilibria in games and in markets. The class PPAD has been identified as capturing the complexity of many of these problems, and in particular of the problem of finding a Nash equilibrium. This last result, by Daskalakis, Goldberg, and Papadimitriou and by Chen and Deng, gets the "AGT result of the decade" award. While there are still open problems regarding the complexity of equilibria, especially in markets, it seems to me that — in the limited sense of pinpointing the right complexity class — most of the picture is understood at this point, with the most significant remaining open problem being the computation of approximate Nash equilibria.
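To make the object of study concrete, here is a minimal sketch (my own illustration, not from the post) that solves the easy special case: the fully mixed Nash equilibrium of a 2x2 bimatrix game, obtained from the standard indifference conditions. The function name is mine, and the code assumes a fully mixed equilibrium exists (so the denominators below are nonzero):

```python
from fractions import Fraction as F

def mixed_nash_2x2(A, B):
    """Fully mixed Nash equilibrium of a 2x2 bimatrix game (A = row
    player's payoffs, B = column player's payoffs), assuming one
    exists, i.e., the indifference equations below are solvable."""
    # p = probability the row player puts on row 0, chosen to make the
    # column player indifferent between her two columns:
    #   p*B[0][0] + (1-p)*B[1][0] == p*B[0][1] + (1-p)*B[1][1]
    p = F(B[1][1] - B[1][0], B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # q = probability the column player puts on column 0, chosen to
    # make the row player indifferent between his two rows:
    #   q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1]
    q = F(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: the unique equilibrium has both players mixing 50/50.
p, q = mixed_nash_2x2([[1, -1], [-1, 1]], [[-1, 1], [1, -1]])
```

The 2x2 case admits this closed form; the point of the PPAD-completeness results is that no efficient procedure is known, or expected, for general large games.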
Starting with Koutsoupias–Papadimitriou, much attention has gone into analyzing the costs that the game-theoretic point of view levies on computational problems. The catchy name "Price of Anarchy", together with the convincing results of Roughgarden and Tardos, who analyzed these costs in models of network congestion, has been extremely influential on the field. This last paper gets the award for "most influential AGT paper of the decade". In the early part of the decade using the term "price of X" for many X's was de rigueur in certain circles, but by the end of the decade the naming template had become passé, while the type of analysis itself — sans the catchy name — became standard.
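The flavor of this analysis can be seen in Pigou's classic two-link network, a standard illustration in the Roughgarden–Tardos line of work (the code is my own sketch): one unit of selfish traffic chooses between a link with latency l1(x) = x and a link with constant latency l2(x) = 1.

```python
def total_latency(x):
    """Total travel time when a fraction x of one unit of traffic takes
    the variable link l1(x) = x and the rest takes the constant link
    l2(x) = 1 (Pigou's example)."""
    return x * x + (1 - x) * 1

# Equilibrium: the first link's latency never exceeds 1, so all selfish
# traffic uses it and no single driver gains by switching.
eq_cost = total_latency(1.0)
# Social optimum: split the traffic evenly between the two links.
opt_cost = total_latency(0.5)
# Price of anarchy = equilibrium cost / optimal cost = 4/3, which
# Roughgarden and Tardos showed is the worst case for linear latencies.
price_of_anarchy = eq_cost / opt_cost
```

The equilibrium here is unique, and the gap of 4/3 is tight for all networks with linear latency functions, which is what makes this toy example the canonical one.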
Auctions of various forms were a central object of study during this decade. In the first half of the decade much attention has gone into the combinatorial auction formalism which became the paradigmatic problem in Algorithmic Mechanism Design. It is probably fair to summarize that most computational issues have been resolved, while most strategic questions have remained open. The failure to understand the extent of the clash between the strategic and computational considerations applies to the rest of Algorithmic Mechanism Design as well despite much work and some mild progress (e.g. in single parameter domains, maximum-in-range mechanisms, and randomized mechanisms). The basic question of “how well can computationally-efficient incentive compatible combinatorial auctions or incentive compatible scheduling mechanisms perform” remains as open as in the beginning of the decade, and gets my (biased) “AGT open problem of the decade” award.
The second half of the decade saw much of the focus shift to "ad auctions" of various kinds, an application that obviously wins the "killer AGT application of the decade" award (rather than the spectrum auctions that seemed the candidate at the beginning of the decade). While the driver of ad-auction research is certainly the multi-billion-dollar internet-advertising industry that has hired droves of AGT researchers, much of this work focuses on issues of basic theoretical interest in settings of repeated auctions, often departing from the basic models of dominant-strategy worst-case analysis in favor of more delicate models that capture the desired issues better (and in doing so also influencing the rest of algorithmic mechanism design).
The field of AGT is obviously much influenced by developments in its parental fields of computer science, economics, and game-theory, as well as in its basic domain of application, the Internet. Trends which are significant include the continuing march of the Internet to become a platform for everything (from social networks, to cloud computing) and the growing roles of information economics as well as experimental economics within the fields of economics and game theory.
The rest of the world has also changed, and despite many tragedies, conflicts, setbacks, and especially dangers, the main story of the decade is a triumph: the amazing rise of Asia, which has led to the fastest increase in human welfare in history and may give some hope to the billion people who still remain poor.
On this happy note let me wish a happy new year — and decade — to my readers.
We must also add that AGT researcher of the decade is Noam Nisan. Happy new year.
The best of AGT has yet to come! AGT has not found its distinguishing feature yet: the kind of one-liner that says "aha, this is what AGT is". It is very hard to explain the distinction between AGT and GT, and often the distinction is made not between the fields but between the researchers of the fields.
Complexity theory: things could be fundamentally hard.
Approximation algorithms: some optimizations are likely to be fundamentally hard so let us approximate.
Microeconomics: individual rationality. (A computational hint here behind one of my quotes: rationality can be modeled with computers.)
Classical physics: F = ma.
Quantum computation: superposition of computation.
AGT: computability of an outcome among rational computers?
In hindsight, many of the GT principles turned out to be efficiently computable (e.g., market equilibrium under gross substitutes or scalable utility), and for those which are not, we have not yet proposed a replacement.
What is the replacement of Nash equilibrium?
In computational advertising, outside of our community, economists take the credit. So it is not only economists who have not yet recognized AGT: everybody outside our own community gives virtually all the credit for computational advertising to economics and significantly less to AGT.
Success within the community does not count as much as success outside of the community will count. AGT is likely to get this recognition in the coming decade, when fresh researchers take over and bring a new viewpoint rather than just adapting the "worst-case-complexity-approximation" viewpoint to AGT.
Some of the work by young researchers is indeed very impressive, even if not yet popular.
Hi Noam,
I find your statement
“The failure to understand the extent of the clash between the strategic and computational considerations applies to the rest of Algorithmic Mechanism Design as well“
a bit puzzling in light of work by one of your students
http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=4690959
This is a great paper and one of the most significant advancements on the question, but it may still be considered only a mild step: on the result side, it applies to a problem of their own creation rather than to any previously considered one; on the technical side, this problem is "fully dimensional", hence Roberts' theorem could be adapted to it (and this is exactly why it does not have implications for previously considered problems).
Hate to say it. The decade ends at December 31, 2010.
We count to ten in this part of the universe, 1, 2, 3, 4….10.
Yeah,
I heard about those guys who celebrated the end of the millennium at the end of 2000….
thanks