Archive for October, 2010


I couldn’t help but feel a tinge of sorrow when seeing this photograph from Lance.  These are going to be the last printed FOCS proceedings in history — and nobody wants them.  I still remember the days when my collection of FOCS and STOC proceedings was the single most important research resource in my possession.  I have probably not opened any physical proceedings in the last decade, but I could never bring myself to throw them away.  It may be time now.  Sigh.



I recently looked at the paper A Revealed Preference Approach to Computational Complexity in Economics by Federico Echenique, Daniel Golovin and Adam Wierman that attempts to redirect the study of computational complexity in economic settings.

In the usual models we now have in AGT/E, the players are assumed to have some kind of preferences, and the “equilibrium” is then expected to be with respect to these preferences.  However, the assumption in economics may be taken to be more subtle: it is that players behave as if they have preferences.  I.e., if you look at their behavior then you may deduce from it some revealed preference that can explain their behavior.  While abstractly the notion of revealed preferences does indeed seem easier to swallow than believing that people really have preferences, and much discussion in economic thought seems to have gone into this difference, I must admit that I never gave it much thought.

If we take the view that people have preferences, then the natural computational complexity problem to study is that of the task “INPUT: preferences; OUTPUT: equilibrium according to these preferences”.  On the other hand, if you take the weaker revealed-preferences point of view, then this paper suggests that the natural complexity to study is that of “INPUT: player behavior; OUTPUT: equilibrium according to the revealed preferences”.  This may be a much easier task!

The paper starts with the following basic demonstration in the context of consumer choice: there are m infinitely divisible goods, and a consumer has a utility function u:[0,1]^m \rightarrow \Re^+ specifying his utility for every bundle of goods.  Importantly, we are not assuming anything about the utility function except that it is weakly monotone in each coordinate (free disposal).  Given a vector of prices p_1 ... p_m and a budget b, the consumer’s demand will be the bundle x \in [0,1]^m that maximizes u(x) subject to \sum_j p_j x_j \le b.  Looking at this in the old way, the optimization problem of calculating the demand is hard: without any further structure on u, an exponential number of queries will be needed.  However, the authors suggest the following way to look at it: our input should be a set of responses that we have seen from the consumer.  Each response is a triplet \vec{p}, b, \vec{x}, where \vec{x} is the bundle demanded by our player when presented with prices \vec{p} and budget b.  Given these responses, we should optimize for any u that is consistent with the previous responses — the revealed utility of the player.  This problem turns out to be easy since, by Afriat’s theorem, the revealed utility may always be taken to be a concave function, for which optimization is efficient using convex programming.
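To make the revealed-preference side concrete, here is a small sketch of the consistency test behind Afriat’s theorem: a linear program over hypothetical utility levels U_t and multipliers \lambda_t that is feasible exactly when some monotone concave utility rationalizes the observed choices.  (The function name and the use of scipy are my own choices for illustration, not from the paper; budgets are taken to be p \cdot x, since a weakly monotone consumer may as well spend the whole budget.)

```python
import numpy as np
from scipy.optimize import linprog

def afriat_feasible(prices, bundles):
    """Decide whether observed (price, bundle) choices are consistent with
    maximization of some monotone concave utility, via Afriat's inequalities:
    U_s <= U_t + lam_t * p_t . (x_s - x_t)  for all s, t,  with lam_t >= 1.
    prices, bundles: arrays of shape (T, m), one row per observation."""
    p = np.asarray(prices, dtype=float)
    x = np.asarray(bundles, dtype=float)
    T = p.shape[0]
    if T < 2:
        return True  # a single observation is always rationalizable
    A_ub, b_ub = [], []
    for s in range(T):
        for t in range(T):
            if s == t:
                continue
            row = np.zeros(2 * T)          # variables: U_1..U_T, lam_1..lam_T
            row[s], row[t] = 1.0, -1.0     # U_s - U_t ...
            row[T + t] = -p[t].dot(x[s] - x[t])  # ... - lam_t p_t.(x_s - x_t) <= 0
            A_ub.append(row)
            b_ub.append(0.0)
    bounds = [(None, None)] * T + [(1.0, None)] * T  # U_t free, lam_t >= 1
    res = linprog(c=np.zeros(2 * T), A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=bounds, method="highs")
    return res.status == 0  # 0 means a feasible solution was found
```

By Afriat’s theorem, feasibility of these inequalities is equivalent to the data satisfying GARP, and any feasible solution yields a piecewise-linear concave rationalizing utility u(y) = \min_t (U_t + \lambda_t p_t \cdot (y - x_t)), over which demand can then be computed by convex programming.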

The authors give a few more sophisticated examples of such a difference and moreover argue that this difference is at the root of economists’ refusal to view excessive computational complexity as a critique of an equilibrium concept.  The “usual” argument against taking worst-case complexity results seriously is that perhaps the hard instances never occur in reality.  The usual response to such a claim is that it calls for identifying the special properties of real-world instances that make the problem easy.  This paper suggests such a special property: revealed preferences may indeed be simpler than general ones.


In a long post, Panos Ipeirotis suggests how Amazon should fix Mechanical Turk.   The post ends with two tweets:

Is it worth trying to challenge MTurk? Luis von Ahn, looking at an earlier post of mine, tweeted:

MTurk is TINY (total market size is on the order of $1M/year): Doesn’t seem like it’s worth all the attention.

I will reply with a prior tweet of mine:

Mechanical Turk is for crowdsourcing what AltaVista was for search engines. We now wait to see who will be the Google.


AGT on TCS Stack-Exchange

In the last few months I’ve become a fan of the TCS stack exchange, a question-and-answer forum for theoretical computer science.  Some of this activity is in AGT.  I certainly sometimes feel that this is yet another time sink to be found on the Internet (like writing a blog…), allowing me to keep away from “real work”.  However, on the whole, I would evaluate my time spent there as productive.  On the question side, this is a good place to check out whether someone knows an answer to some small obstacle that got you stuck in your research, or to check a point of view when looking at a new subject, or to find a reference, etc.  I also feel that the “answer side” is beneficial for me: questions that I’m able to answer often look at something that I know from a different perspective — a perspective that it’s good to gain.  In other cases, I look at a question, think that I should know the answer and, when thinking about it a few more minutes, realize that there is a gap in my knowledge — one that I didn’t realize existed.  In such cases, it sometimes means that I get to fill this gap, or sometimes that I find a real “hole” worthy of research.  A recent question asked about efficiency gaps between correlated equilibria and coarse equilibria.  The fact that I wasn’t able to answer it demonstrated to me that I don’t understand coarse equilibria well enough.  When someone finally answers the question (as I’m quite sure will happen — this can’t be hard…), I will learn something.


It is well known that a correlated equilibrium can be computed in polynomial time in the size of the game.  In a beautiful paper published several years ago, Papadimitriou and Roughgarden proved that this extends even to games that are given concisely.  I.e., for many natural ways of succinctly representing “exponential size” games, one can still compute a correlated equilibrium in time that is polynomial in the representation length.  This holds for graphical games, anonymous games, polymatrix games, congestion games, scheduling games, local effect games, as well as several generalizations.  The logic of the algorithm is as follows: a correlated equilibrium can be expressed as a solution to a linear program.  In the case of a succinctly represented game, the LP has exponentially many variables but only polynomially many constraints, and thus its dual can be solved by the Ellipsoid algorithm — provided that an appropriate separation oracle can be provided.  It turns out that, for many succinct representations, this can indeed be done, using a technique of Hart and Schmeidler.  The details are somewhat more delicate as the primal LP is unbounded and thus its dual is known to be infeasible, so “solving the dual” will certainly fail, but the failure will supply enough information to find a solution to the primal.
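For intuition on the LP itself, here is a sketch that solves the primal directly for an explicitly given game (this brute-force primal is only workable for small games, and is not the dual/ellipsoid route that succinct games require); the function name and the scipy formulation are my own illustrative choices:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(utils):
    """Find a correlated equilibrium of an explicitly given finite game by LP.
    utils[i] is player i's payoff array, indexed by the full strategy profile.
    Returns a probability distribution over joint profiles (same shape), or
    None if the solver fails (it shouldn't: a Nash equilibrium always works)."""
    n = len(utils)
    shape = utils[0].shape
    profiles = list(itertools.product(*(range(k) for k in shape)))
    idx = {prof: j for j, prof in enumerate(profiles)}
    A_ub, b_ub = [], []
    # Incentive constraints: a player recommended s must not gain by playing s2.
    for i in range(n):
        for s in range(shape[i]):
            for s2 in range(shape[i]):
                if s == s2:
                    continue
                row = np.zeros(len(profiles))
                for prof in profiles:
                    if prof[i] != s:
                        continue
                    dev = prof[:i] + (s2,) + prof[i + 1:]
                    # sum_prof p(prof) * (u_i(dev) - u_i(prof)) <= 0
                    row[idx[prof]] = utils[i][dev] - utils[i][prof]
                A_ub.append(row)
                b_ub.append(0.0)
    A_eq = [np.ones(len(profiles))]  # probabilities sum to 1
    res = linprog(c=np.zeros(len(profiles)), A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=[1.0],
                  bounds=[(0, None)] * len(profiles), method="highs")
    return res.x.reshape(shape) if res.status == 0 else None
```

Since every Nash equilibrium satisfies the incentive constraints, the LP is always feasible, and one can just as well optimize any linear objective (e.g. total welfare) over the same polytope.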

In a paper by Stein, Parrilo, and Ozdaglar, recently uploaded to the arXiv, the authors claim to have found a bug in the Papadimitriou and Roughgarden paper (I have only skimmed the paper and have not verified it).  They note that the Ellipsoid-based algorithm sketched above returns a correlated equilibrium with three properties:

  1. All its coefficients are rational numbers, provided the input was rational.
  2. It is a convex combination of product distributions.
  3. It is symmetric, provided the game was symmetric.

However, they exhibit a simple 3-player symmetric game whose unique equilibrium satisfying the last two properties has irrational coordinates.  (The game gives each player the strategy set {0,1}, and the common utility of all players is s_1 + s_2 + s_3 \bmod 3.)  They also isolate the problem: when attempting to move from the dual LP to the primal solution, the precision needs to go up, requiring an increase in the bounding ellipsoid.  Finally, they show that the algorithm still finds an approximate correlated equilibrium (in time that is polynomial also in \log \epsilon^{-1}).  The question of whether one can find an exact correlated equilibrium of succinctly represented games in polynomial time remains open.



Interstellar Trade

On the TCS stack-exchange, Ryan Williams asked “What papers should everyone read?”.  One of the papers suggested as an answer is in Economics (some may even say in Algorithmic Game Theory): Paul Krugman’s 1978 paper on “A Theory of Interstellar Trade”, which still stings.


The Center for Research in the Foundations of Electronic Markets in Århus, Denmark,  nicely funded at $4.75M, officially kicks off with an inauguration event on October 13-15.



An International Conference on Game Theory will be held in Tel Aviv, Israel, on June 20-23, 2011.

Shortly before that, on May 22-27, 2011, there will be an Algorithmic Game Theory Workshop in Jerusalem, Israel.

Both should be fun, scientifically and otherwise.


I’ve recently given a general talk about Algorithmic Mechanism Design in which, as usual, one of my motivational slides was the Braess Paradox (if you are a reader of this blog and have not seen the Braess paradox yet — stop reading this post and read about it instead). I would say that the Braess Paradox has been the most influential single intellectual motivator of the whole field of Algorithmic Game Theory.  Computer Scientists love it. Economists like it, but CS/Engineering types love it.  Here are some thoughts about why we, computer scientists, were so rattled by it.

The Braess paradox provides a simple example where selfish individual behavior leads to results that are socially sub-optimal.  This hardly knocks an economist off his feet.  The revelation that individuals may act selfishly (or, as they would say, rationally) is hardly a surprise for an economist — in fact, that is the basis of his field.  The fact that such selfish behaviors may have externalities and can result in a sub-optimal outcome is more interesting, but the tragedy of the commons, much studied and discussed, is certainly a more devastating demonstration of this.  (Aside: it seems that many undergraduate curricula in economics are so focused on perfect-market scenarios that the failure of these models is hardly discussed, and you need to get a graduate degree to hear about the limitations of free markets.)  For engineers, however, there are two revelations here.  The mere suggestion that, even though we design a system in some “perfect” way, someone who has control over part of the system may rationally change it is annoying and surprising.  Yes, we engineers were thinking about “bad guys” (security) and about “broken systems” (distributed computing), but the fact that “ok guys” may circumvent all our designs is an annoying revelation.  While the initial reaction of an engineer would be “well, if they’re not following my design, then I can’t guarantee anything”, the Braess paradox also shows that one can analyze the situation under selfish behavior — an unexpected but pleasant surprise.  So while economists treat the Braess paradox as just a cute example in and of itself, computer scientists immediately view it as a fictional model that stands for a whole new class of examples.
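The paradox itself is easy to reproduce numerically.  Below is a minimal sketch of the classic four-node network (one unit of traffic from s to t, two load-dependent edges, and an optional zero-latency shortcut); the function names and the crude flow-shifting dynamics are my own illustrative choices, not a serious equilibrium algorithm:

```python
def path_latencies(f):
    """Latency of each s-t path in the classic Braess network given path flows.
    Paths: 0 = s-a-t, 1 = s-b-t, 2 = s-a-b-t (via the added shortcut).
    Edges s->a and b->t have latency equal to their load; a->t and s->b have
    constant latency 1; the shortcut a->b has latency 0."""
    load_sa = f[0] + f[2]
    load_bt = f[1] + f[2]
    return [load_sa + 1, 1 + load_bt, load_sa + load_bt]

def equilibrium_cost(use_shortcut, total=1.0, steps=50000, eta=1e-4):
    """Crude dynamics: repeatedly shift a little flow from the slowest used
    path to the fastest available one, then report the average travel time
    at the resulting (approximate) equilibrium."""
    k = 3 if use_shortcut else 2
    f = [total / k] * k + [0.0] * (3 - k)
    for _ in range(steps):
        lat = path_latencies(f)
        best = min(range(k), key=lambda i: lat[i])
        worst = max(range(k), key=lambda i: lat[i] if f[i] > 0 else -1.0)
        move = min(eta, f[worst])
        f[worst] -= move
        f[best] += move
    lat = path_latencies(f)
    return sum(f[i] * lat[i] for i in range(3)) / total
```

Without the shortcut, the equilibrium splits traffic evenly and everyone travels for 1.5; adding the “free” shortcut drives all traffic onto it, and everyone’s travel time rises to 2.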

The second difference in point of view between economists and computer scientists concerns what kind of solutions to this problem we should go after.  It would be clear to an economist (continuing my aside from above: at least one with a graduate degree) that in this case some kind of regulation or taxation is needed.  However, an engineer or computer scientist would require a much more specific and detailed answer: exactly how much taxation?  How would it depend on the network structure?  On the set of requests for routing?  On the delay characteristics?  Economists would also be interested in some form of these questions, but the degree of specificity that would satisfy an economist is much lower than what would satisfy an engineer.  I would say that most economists would not immediately feel that there are interesting open problems around Braess’s paradox.  Engineers and computer scientists would feel otherwise: we want general algorithms that will work.  An analysis that is satisfying for a whole class of future scenarios.  Something that we can implement, analyze, and be confident will work in the future.

Just by itself then, the Braess paradox offers computer scientists a new set of problems (selfishness), tools to address these problems (equilibrium analysis), and challenges (analysis and algorithms for general scenarios).  The appeal is clear, and the main attraction of the seminal AGT paper that introduced the Braess paradox to Computer Scientists was suggesting that the challenges can be addressed:  a general analysis, for a whole class of networks, requests, and delays, is possible.


Medicare Auction Update

The “Letter from 167 Concerned Auction Experts on Medicare Competitive Bidding Program”, which was organized by Peter Cramton, was sent to Chairman Stark, Health Subcommittee, Ways and Means, U.S. House of Representatives, on 26 September 2010.  Chairman Stark forwarded it to the Center for Medicare and Medicaid Services, along with his own letter, which concludes with:

I urge you to give these comments and recommendations serious consideration. I would also request that you inform me in a timely way as to whether CMS plans to incorporate any of the recommended changes and if not, why not.

Peter wrote a Freakonomics Blog post with Ian Ayres that appeared in the Opinion Pages of the New York Times.  You can find more information in the Market Design blog.

