
Archive for October, 2010


I couldn’t help but feel a tinge of sorrow upon seeing this photograph from Lance. These are going to be the last printed FOCS proceedings in history — and nobody wants them. I still remember the days when my collection of FOCS and STOC proceedings was the single most important research resource in my possession. I have probably not opened any physical proceedings in the last decade, but I could never bring myself to throw them away. It may be time now. Sigh.


I recently looked at the paper A Revealed Preference Approach to Computational Complexity in Economics by Federico Echenique, Daniel Golovin and Adam Wierman that attempts to redirect the study of computational complexity in economic settings.

In the usual models we now have in AGT/E, the players are assumed to have some kind of preferences, and the “equilibrium” is then expected to be with respect to these preferences. However, the assumption in economics may be taken to be more subtle: it is that players behave as if they have preferences. I.e., if you look at their behavior, you may deduce from it some revealed preference that can explain it. While abstractly the notion of revealed preferences does indeed seem easier to swallow than believing that people really have preferences, and much discussion in economic thought seems to have gone into this difference, I must admit that I never gave it much thought.

If we take the view that people have preferences, then the natural computational problem to study is “INPUT: preferences; OUTPUT: equilibrium according to these preferences”. On the other hand, if you take the weaker revealed-preferences point of view, then this paper suggests that the natural problem to study is “INPUT: player behavior; OUTPUT: equilibrium according to the revealed preferences”. This may be a much easier task!

The paper starts with the following basic demonstration in the context of consumer choice: there are m infinitely divisible goods, and a consumer has a utility function u:[0,1]^m \rightarrow \Re^+ specifying his utility for every bundle of goods. Importantly, we are not assuming anything about the utility function except that it is weakly monotone in each coordinate (free disposal). Given a vector of prices p_1 ... p_m and a budget b, the consumer’s demand will be the bundle x \in [0,1]^m that maximizes u(x) subject to \sum_j p_j x_j \le b. Looking at this in the old way, the optimization problem of calculating the demand is hard: without any further structure on u, an exponential number of queries will be needed. However, they suggest the following way to look at it: our input should be a set of responses that we have seen from the consumer. Each response is a triplet \vec{p}, b, \vec{x}, where \vec{x} is the bundle demanded by our consumer when presented with prices \vec{p} and budget b. Given these responses, we should optimize for any u that is consistent with these previous responses — the revealed utility of the player. This problem turns out to be easy: by Afriat’s theorem, the revealed preferences may always be taken to be a concave function, for which optimization is efficient using convex programming.
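
To make the Afriat step concrete, here is a minimal sketch of the consistency check as a linear program. This is my own illustration, not code from the paper; the function name and the use of numpy/scipy are my choices. Afriat’s theorem says the data (\vec{p}^i, b_i, \vec{x}^i) is rationalizable by a monotone utility iff there exist numbers U_i and \lambda_i > 0 with U_j \le U_i + \lambda_i \, \vec{p}^i \cdot (\vec{x}^j - \vec{x}^i) for all i, j, in which case u(y) = \min_i \left( U_i + \lambda_i \, \vec{p}^i \cdot (y - \vec{x}^i) \right) is a concave rationalizing utility.

```python
# A sketch (not from the paper): given observed responses (p_i, b_i, x_i),
# solve the Afriat inequalities as an LP to recover a concave, monotone
# utility consistent with the data (budgets satisfy b_i = p_i . x_i).
import numpy as np
from scipy.optimize import linprog

def afriat_rationalize(p, x):
    """p, x: (n, m) arrays of observed prices and demanded bundles, n >= 2.
    Finds U_i and lam_i > 0 with U_j <= U_i + lam_i * p_i . (x_j - x_i)
    for all i, j; then u(y) = min_i U_i + lam_i * p_i . (y - x_i) is a
    concave, monotone utility rationalizing the data. Returns (U, lam),
    or None if the data cannot be rationalized."""
    n, _ = p.shape
    A_ub, b_ub = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Afriat inequality: U_j - U_i - lam_i * p_i . (x_j - x_i) <= 0
            row = np.zeros(2 * n)
            row[j], row[i] = 1.0, -1.0
            row[n + i] = -float(p[i] @ (x[j] - x[i]))
            A_ub.append(row)
            b_ub.append(0.0)
    # lam_i >= 1 w.l.o.g.: the inequalities are invariant to scaling (U, lam)
    bounds = [(None, None)] * n + [(1.0, None)] * n
    res = linprog(c=np.zeros(2 * n), A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=bounds, method="highs")
    if not res.success:
        return None
    return res.x[:n], res.x[n:]
```

Once u is recovered in this piecewise-linear concave form, computing demand under any new prices and budget is a concave maximization over a convex set, which is exactly why the revealed-preference version of the problem becomes tractable.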

The authors give a few more sophisticated examples of such a difference and, moreover, argue that this difference is at the root of economists’ refusal to view excessive computational complexity as a critique of an equilibrium concept. The “usual” argument against taking worst-case complexity results seriously is that perhaps the hard instances never occur in reality. The usual response to such a claim is that it calls for identifying the special properties of real-world instances that make the problem easy. This paper suggests such a special property: revealed preferences may indeed be simpler than general ones.


In a long post, Panos Ipeirotis suggests how Amazon should fix Mechanical Turk. This post ends with two tweets:

Is it worth trying to challenge MTurk? Luis von Ahn, looking at an earlier post of mine, tweeted:

MTurk is TINY (total market size is on the order of $1M/year): Doesn’t seem like it’s worth all the attention.

I will reply with a prior tweet of mine:

Mechanical Turk is for crowdsourcing what AltaVista was for search engines. We now wait to see who will be the Google.


AGT on TCS Stack-Exchange

In the last few months I’ve become a fan of the TCS stack exchange, a question-and-answer forum for theoretical computer science. Some of this activity is in AGT. I certainly sometimes feel that this is yet another time sink to be found on the Internet (like writing a blog…), allowing me to keep away from “real work”. However, on the whole, I would evaluate my time spent there as productive. On the question side, this is a good place to check whether someone knows an answer to some small obstacle that got you stuck in your research, to sanity-check a point of view when looking at a new subject, to find a reference, etc. I also feel that the “answer side” is beneficial for me: questions that I’m able to answer often look at something that I know from a different perspective — a perspective that is good to gain. In other cases, I look at a question, think that I should know the answer, and, after thinking about it a few more minutes, realize that there is a gap in my knowledge — one that I didn’t realize existed. In such cases, it sometimes means that I get to fill this gap, or sometimes that I find a real “hole” worthy of research. A recent question asked about efficiency gaps between correlated equilibria and coarse equilibria. The fact that I wasn’t able to answer it demonstrated to me that I don’t understand coarse equilibria well enough. When someone finally answers the question (as I’m quite sure will happen — this can’t be hard…), I will learn something.


It is well known that a correlated equilibrium can be computed in polynomial time in the size of the game. In a beautiful paper published several years ago, Papadimitriou and Roughgarden proved that this extends even to games that are given concisely. I.e., for many natural ways of succinctly representing “exponential size” games, one can still compute a correlated equilibrium in time that is polynomial in the representation length. This holds for graphical games, anonymous games, polymatrix games, congestion games, scheduling games, local effect games, as well as several generalizations. The logic of the algorithm is as follows: a correlated equilibrium can be expressed as a solution to a linear program. In the case of a succinctly represented game, the LP has exponentially many variables but only polynomially many constraints, and thus its dual can be solved by the Ellipsoid algorithm — provided that an appropriate separation oracle is available. It turns out that, for many succinct representations, this can indeed be done, using a technique of Hart and Schmeidler. The details are somewhat more delicate: the primal LP is unbounded and thus its dual is known to be infeasible, so “solving the dual” will certainly fail, but the failure supplies enough information to find a solution to the primal.
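
As a side note, for a game given explicitly (so the LP already has polynomial size), the correlated-equilibrium LP can simply be written down and solved directly. The sketch below is my own illustration and has nothing to do with the Papadimitriou–Roughgarden machinery for succinct games; the welfare-maximizing objective is just an arbitrary way to select one equilibrium among many, and the use of scipy is my choice.

```python
# A sketch: a correlated equilibrium of an explicitly given game, via the
# standard LP. Variables: one probability x(s) per strategy profile s.
import itertools
import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(utils):
    """utils: list of n payoff arrays, one per player, each of shape
    (k_1, ..., k_n). Returns a correlated equilibrium as an array of the
    same shape (here the welfare-maximizing one, to pick a single point)."""
    n, shape = len(utils), utils[0].shape
    profiles = list(itertools.product(*(range(k) for k in shape)))
    idx = {s: t for t, s in enumerate(profiles)}
    A_ub, b_ub = [], []
    # For each player i and each pair of strategies a != a2: conditioned on
    # being told to play a, deviating to a2 must not gain in expectation.
    for i in range(n):
        for a in range(shape[i]):
            for a2 in range(shape[i]):
                if a == a2:
                    continue
                row = np.zeros(len(profiles))
                for s in profiles:
                    if s[i] == a:
                        s_dev = s[:i] + (a2,) + s[i + 1:]
                        row[idx[s]] = utils[i][s_dev] - utils[i][s]
                A_ub.append(row)  # expected deviation gain <= 0
                b_ub.append(0.0)
    c = -sum(u.reshape(-1) for u in utils)  # maximize social welfare
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.ones((1, len(profiles))), b_eq=[1.0],
                  bounds=[(0, None)] * len(profiles), method="highs")
    return res.x.reshape(shape)

# Example: the game of Chicken (strategy 0 = Dare, 1 = Chicken); the
# welfare-optimal correlated equilibrium puts weight only on the three
# non-crash profiles: 1/2 on (C,C) and 1/4 on each of (D,C) and (C,D).
dare = np.array([[0, 7], [2, 6]])  # row player's payoffs
print(correlated_equilibrium([dare, dare.T]))
```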

In a paper by Stein, Parrilo, and Ozdaglar, recently uploaded to the arXiv, the authors claim to have found a bug in the Papadimitriou and Roughgarden paper (I have only skimmed the paper and have not verified it).  They note that the Ellipsoid-based algorithm sketched above returns a correlated equilibrium with three properties:

  1. All its coefficients are rational numbers, if so was the input.
  2. It is a convex combination of product distributions.
  3. It is symmetric, if so was the game.

However, they exhibit a simple 3-player symmetric game that has a unique correlated equilibrium satisfying the last two properties, and this equilibrium has irrational coordinates, so the first property must fail. (The game gives each player the strategy set {0,1}, and the common utility of all players is (s_1 + s_2 + s_3) \bmod 3.) They also isolate the problem: when attempting to move from the dual LP to the primal solution, the required precision goes up, which forces an increase in the bounding ellipsoid. Finally, they show that the algorithm still finds an \epsilon-approximate correlated equilibrium (in time that is polynomial also in \log \epsilon^{-1}). The question of whether one can find an exact correlated equilibrium of a succinctly represented game in polynomial time remains open.
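
To fix notation (my phrasing; the paper’s normalization may differ slightly): a distribution x over strategy profiles is an \epsilon-approximate correlated equilibrium if, for every player i and every pair of strategies a, a' of that player,

\sum_{s : s_i = a} x(s) \left( u_i(a', s_{-i}) - u_i(s) \right) \le \epsilon,

so that an exact correlated equilibrium is the case \epsilon = 0.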


Interstellar Trade

On the TCS stack-exchange, Ryan Williams asked “What papers should everyone read?”. One of the papers suggested as an answer is in Economics (some may even say in Algorithmic Game Theory): Paul Krugman’s 1978 paper “A Theory of Interstellar Trade” still stings.

Read Full Post »

The Center for Research in the Foundations of Electronic Markets in Århus, Denmark, nicely funded at $4.75M, officially kicks off with an inauguration event on October 13-15.

