Posts Tagged ‘CS vs Econ’

I spent most of last week in the Bertinoro workshop on Frontiers in Mechanism Design organized by Jason Hartline and Tim Roughgarden.  Such a small focused event is really a winning format (when done right — of course): almost every talk was very close to my interests and was really good (since the speakers presented their preferred recent work which usually was also accepted to some other top conference).

One of my take-homes from this event was a strengthened realization that computer scientists are doing more and more Bayesian analysis in Algorithmic Mechanism Design, in this respect getting closer to the economists' way of thinking.  It is not that computer scientists have lost their basic dislike of "average case" analysis, distributional priors, or, especially, common priors; it is just that in more and more places we have reached the limits of "worst case" analysis.  It seems that computer scientists are being very careful about "how much" they rely on Bayesian analysis, obtaining various "hybrid" results that are more robust, in various senses, than straight Bayesian ones.

A particularly good example is the classic result of economists Jeremy Bulow and Paul Klemperer, who show that the revenue of the straight 2nd price auction (for a single item) with n+1 bidders always dominates the revenue of Myerson's optimal auction with n bidders.  Only the analysis is Bayesian: the resulting 2nd price (n+1)-bidder auction is completely "worst-case" — neither the auctioneer nor the bidders need any knowledge of, or agreement on, the underlying distribution.  In spirit, such a result is similar to Valiant's prior-free learning, where the analysis is with respect to an underlying distribution even though the learning algorithms cannot depend on it.  A recent paper, Revenue Maximization with a Single Sample by Dhangwatnotai, Roughgarden and Yan (to appear in EC 2010), gets more general results in this prior-independent vein, although in an approximation sense.
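To make the comparison concrete, here is a minimal Monte Carlo sketch (mine, not from the paper) assuming i.i.d. values uniform on [0,1]; for this prior Myerson's optimal auction is just a 2nd price auction with reserve 1/2, and the simulation shows the plain 2nd price auction with n+1 bidders earning more than the optimal auction with n bidders:

```python
# Monte Carlo sketch of the Bulow-Klemperer comparison (illustrative only).
# Assumes i.i.d. values drawn uniformly from [0,1]; for this distribution
# Myerson's optimal auction is a 2nd-price auction with reserve 1/2.
import random

def second_price_revenue(values, reserve=0.0):
    """Revenue of a 2nd-price auction with an optional reserve price."""
    above = [v for v in values if v >= reserve]
    if not above:
        return 0.0                      # no sale
    if len(above) == 1:
        return reserve                  # single bidder above reserve pays the reserve
    above.sort(reverse=True)
    return max(above[1], reserve)       # winner pays max(2nd bid, reserve)

def estimate(n, trials=100_000):
    rev_plain, rev_opt = 0.0, 0.0
    for _ in range(trials):
        vals = [random.random() for _ in range(n + 1)]
        rev_plain += second_price_revenue(vals)            # n+1 bidders, no reserve
        rev_opt += second_price_revenue(vals[:n], 0.5)     # n bidders, optimal reserve
    return rev_plain / trials, rev_opt / trials

if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        plain, opt = estimate(n)
        print(f"n={n:2d}: 2nd price with n+1 bidders = {plain:.3f}, "
              f"Myerson with n bidders = {opt:.3f}")
```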

A weaker type of result, but still better than full-blown Bayesian, is the 2nd price version of Myerson's auction.  In this version, the auctioneer must "know" the underlying distribution in order to set the optimal reserve price, but once that is done, the bidders see a "worst-case" situation in front of them (2nd price with reserve) and should bid truthfully in the dominant-strategy sense, without needing to know or agree about the underlying prior distribution.  (This is as opposed to the revenue-equivalent-in-principle 1st price version, in which bidders must know and agree on a common prior to have any chance of reaching the desired equilibrium.)  A recent paper, Multi-parameter mechanism design and sequential posted pricing by Chawla, Hartline, Malec, and Sivan (to appear in STOC 2010), gets similar types of results in a unit-demand heterogeneous auction setting where the auctioneer needs to know the distribution in order to set prices (in this case, posted prices), but the resulting mechanism is very simple and truthful in the dominant-strategy sense (again, the optimality guarantee is only in an approximation sense).
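For concreteness, the piece of "knowledge" the auctioneer needs is just the optimal reserve, which for a regular distribution F with density f is the zero of Myerson's virtual value (a standard textbook computation, stated here for the uniform case):

```latex
\phi(r) \;=\; r - \frac{1 - F(r)}{f(r)}, \qquad \phi(r^{\ast}) = 0 .
\qquad \text{For } F = U[0,1]:\; \phi(r) = r - (1 - r) = 2r - 1
\;\Longrightarrow\; r^{\ast} = \tfrac{1}{2}.
```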

A yet weaker version of a similar type of result appears in the paper Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier (to appear in STOC 2010).  In this paper the auctioneer again needs to know the underlying distribution, and he then creates a mechanism that is incentive compatible, but here only in the Bayesian sense.  I.e., the bidders need not know the underlying distribution (as they should just act truthfully), but they should still agree that the auctioneer knows the prior distribution.  This type of result is more fragile than the previously mentioned ones, since the truthfulness of the auction depends on the auctioneer correctly knowing the underlying distribution, rather than just its optimality.  On the plus side, the paper shows that the auctioneer's derivation of the auction can be done effectively just by using a "black box" for sampling the underlying distribution (as is the case for the derivation of Myerson's optimal reserve price).
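Here is a toy illustration (my own sketch, not the paper's construction) of how black-box sample access suffices for deriving a reserve price: estimate the monopoly price by maximizing empirical revenue over the sampled values.

```python
# Illustrative sketch only: with black-box sample access to the value
# distribution, estimate the monopoly reserve as the sample price maximizing
# empirical revenue  price * (fraction of samples >= price).
import random

def empirical_reserve(sample_distribution, num_samples=10_000):
    samples = sorted(sample_distribution() for _ in range(num_samples))
    best_price, best_revenue = 0.0, 0.0
    for i, p in enumerate(samples):
        revenue = p * (num_samples - i) / num_samples   # p * Pr[v >= p], empirically
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

if __name__ == "__main__":
    # With uniform [0,1] samples the estimate should be close to the true reserve 1/2.
    print(empirical_reserve(random.random))
```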

A somewhat dual situation is presented in the paper Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (SODA 2010).  In that paper, auctions are presented in which the auctioneer need not know the underlying distribution and acts in a "detail-free" way, i.e. the auction is independent of the underlying distribution.  However, the approximate optimality holds only when the bidders are in a Bayesian equilibrium, i.e. the bidders must know and agree on a common prior for the analysis to hold.

The last example of "somewhat-Bayesian" results that comes to mind has nothing to do with incentives but is just algorithmic.  The paper The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations by Nikhil Devanur and Thomas Hayes (in EC 2009) considers an online model of repeated auctions in which no distributional assumptions are made on the bidder values, which are assumed to be "worst case", but a distributional assumption is made on the order of arrival, which is assumed to be uniformly random.  This allows them to get arbitrarily good approximations, circumventing the known lower bounds for the completely worst case.
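As a cartoon of this hybrid assumption (worst-case values, random arrival order), here is a secretary-style threshold rule, far simpler than the paper's Adwords algorithm: observe an initial fraction of the randomly ordered stream to set a threshold, then accept the first later value that beats it.  Nothing of this sort works if an adversary also controls the arrival order.

```python
# Cartoon of the random-order idea (not the paper's algorithm): values are
# arbitrary ("worst-case"), only the arrival order is uniformly random.
import random

def threshold_rule(values, eps=0.3):
    values = list(values)
    random.shuffle(values)                    # the only distributional assumption: random order
    cutoff = max(1, int(eps * len(values)))
    threshold = max(values[:cutoff])          # learn a threshold from the initial fraction
    for v in values[cutoff:]:
        if v > threshold:
            return v                          # accept the first later value beating it
    return 0.0

if __name__ == "__main__":
    adversarial_values = [1.0, 5.0, 2.0, 100.0, 3.0]
    wins = sum(threshold_rule(adversarial_values) == max(adversarial_values)
               for _ in range(10_000))
    print(f"picked the best value in {wins / 10_000:.0%} of random orders")
```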

While economists too have looked at weakenings of the fully Bayesian assumptions, as computer scientists are doing now, I see a difference in the two approaches.  Harsanyi's influence on economic thinking seems to be so great that the Bayesian point of view is taken by economists as the correct model, and even its criticisms (cf. the Wilson critique) and results that weaken the assumptions are simply taken as "2nd order" improvements.  Computer scientists, on the other hand, seem to have their basic intellectual foundation in a non-Bayesian setting, and as they move toward Bayesian models they do not seem to have a single model in mind but are rather quite happy to explore the "Pareto frontier" between the strength of the model and the strength of the results obtained.

Finally, as a side note, let me observe that while the gap between computer scientists and economists is narrowing on the Bayesian question, the other great gap — the issue of approximation — seems to be as great as ever.  E.g., all the results by computer scientists mentioned in this post only provide approximate guarantees, while all those by economists provide exact ones.



Eran Shmaya posts about a beautiful demonstration from 1957 by Michael Rabin of the twists that computation puts on game theory.


Via Al Roth's blog, I hear of an email circulated by Susan Athey and Parag Pathak:

The National Bureau of Economic Research workshop on Market Design is a forum to discuss new academic research related to the design of market institutions, broadly defined. The next meeting will be held in Cambridge, Massachusetts, on Friday and Saturday, October 8-9, 2010.

We welcome new and interesting research, and are happy to see papers from a variety of fields. Participants in the past meeting covered a range of topics and methodological approaches. Last year’s program can be viewed at:  http://www.nber.org/~confer/2009/MDs09/program.html

The conference does not publish proceedings or issue NBER working papers – most of the presented papers are presumed to be published later in journals.

There is no requirement to be an NBER-affiliated researcher to participate. Younger researchers are especially encouraged to submit papers. If you are interested in presenting a paper this year, please upload a PDF or Word version by September 1, 2010 to this link http://www.nber.org/confsubmit/backend/cfp?id=MDf10

Preference will be given to papers for which at least a preliminary draft is ready by the time of submission. Only authors of accepted papers will be contacted.

For presenters and discussants in North America, the NBER will cover the travel and hotel costs. For speakers from outside North America, while the NBER will not be able to cover the airfare, it can provide support for hotel accommodation.

Please forward this announcement to any potentially interested scholars. We look forward to hearing from you.

Apparently, the only "call for papers" is the link for uploading your submission (deadline: September 1st, 2010), and I'm not really sure how wide the intended scope is, beyond extrapolating from last year's program and the identity of the organizers.  In fact, I'm not even sure if participation as a listener is open to all.  In any case, this sounds quite different from CS conferences, and you may read the reflections of David Pennock from last year's conference.


One of the main culture clashes between computer scientists and economists on the CS-econ frontier is whether "complexity of equilibria" matters.  The CS-y view of the matter is captured in Kamal Jain's quote: "If your laptop can't find it then neither can the market".  Economists mostly don't care, since they see equilibrium reached every day, contrary to what CS says.  As Jeff Ely quips: "Solving the n-body problem is beyond the capabilities of the world's smartest mathematicians.  How do those rocks-for-brains planets manage to pull it off?"  TCS folks, who see complexity as the map of the world, can't really understand this indifference, as Lance Fortnow tweets: "I'm an economist so I can ignore computational constraints / I'm a computer scientist, so I can ignore gravity."

[Image: Computational Complexity map of the world]

The beautiful thing about studying systems at equilibrium is precisely the fact that this abstracts away the gory details of the process (aka dynamics aka algorithm) of reaching this equilibrium.  In principle, there are many different difficult-to-understand dynamics that all reach the same easy-to-understand equilibrium point.   This is all very nice as long as the equilibrium is indeed magically reached somehow by the “real” dynamics.  The obvious crucial question is whether this is the case in practice. It seems that the general tendency among economists is to claim “yes”, to which the natural next question in line is “why”?  As computer scientists show, this is not a general characteristic of large games or markets.  Understanding the properties of interesting games and markets that make them actually reach equilibrium should be enlightening.  Maybe it is because economists choose to consider only those that do turn out to converge quickly, ignoring another large and interesting class of strategic scenarios? Maybe it is because economists are thinking about “smallish” games and so their insights will not carry over to more complex realistic scenarios?  Maybe there is some deeper interesting structure that guarantees fast convergence?  Distinguishing between these possibilities is especially crucial as we aim to analyze the new artificial games and markets that are to be found — and designed — on the Internet as well as elsewhere.  Which economic and game-theoretic sensibilities will still hold in these complex unnatural  circumstances?

Complexity is all about understanding such processes.  While the foremost question dealt with by computational complexity is that of "time" — how long a computational process needs in order to find the solution, in our case to reach (close to) equilibrium — this is not the only type of question and insight provided by complexity theory.  As one can see in the "map" above, there is a stunning variety of complexity classes, each attempting to capture a different facet of the challenge of finding the solution: how much space (memory) is needed?  Can we even tell when we reach a solution?  Does randomization help?  Does parallelism help?  Are approximations easier?  Does the solution have this or that particular structure?  In the case of finding equilibria, the classes PPAD and PLS give a very nuanced explanation of what is involved.  There are also "concrete" models that explicitly study specific parameters such as communication or queries.  One may dislike the fact that this complexity analysis does not restrict attention to natural dynamics but allows arbitrary and unnatural algorithms.  The kind of natural dynamics that economists usually have in mind are some kind of best-replying in the case of games and some kind of a Walrasian auctioneer in markets.  The problem is that there are many variants of these that make sense: fictitious play, various forms of population dynamics, more sophisticated types of learning such as regret-minimization, and all of these can be enhanced with various orderings, smoothing attempts, tie-breaking rules, strategic look-ahead, re-starts, actions of the central planner, not to mention other more or less complex optimization and learning attempts.  The strength of complexity analysis is that it applies to all of these.  Any "lower bounds" are definitive: any practical system can be simulated by a computer, and thus no dynamics can succeed in general.  (Emphasis on "in general" — as mentioned above, the problems that you may be interested in may have special structure — so what is it?)  A statement of an "upper bound" may be less interesting as stated, but it immediately raises the challenge of either finding a natural algorithm=process=dynamic or pinpointing the complexity reason explaining why this is impossible.

This is a good point to refute several irrelevant objections to the applicability of computational complexity for analyzing the dynamics of games and markets.  The first is the notion that humans or markets can undergo processes that cannot be computed.  Frankly, there is no evidence of this; there is certainly much about the world that we do not understand well enough to simulate, but there is no indication of any natural process that is inherently more powerful than computers.  This is the modern version of the Church-Turing thesis.  (It seems that some quantum phenomena cannot be simulated by usual computers, but that doesn't change the principle — it would just replace classical computers with quantum ones in the few places where it matters.)  Even if you do subscribe to metaphysical dualism, do you want to base economics on it?  The second type of objection concerns the standard technical notions used in complexity, which obviously leave much wiggle-room: "polynomial time", with its hidden constants, is not synonymous with efficient; worst-case analysis is not always the right notion; etc.  The point is that these are simply concise notions that usually seem to capture the issues well.  There are always cases where more refined notions are needed, and in such cases complexity theory can provide more precise answers: for example, in the analysis of basic problems like sorting or matrix multiplication, very exact results are obtained (with no hidden constants); similarly, cryptography is not based on worst-case analysis; etc.  There is no indication so far that the usual somewhat coarse notions of computational complexity miss something significant when applied to games or markets — quite the contrary, in fact.  If such evidence emerges then complexity will not become irrelevant; simply, more refined analysis will be used.

Take for example the stunning difference between reaching a Nash equilibrium and reaching a correlated equilibrium.  While the latter is reached by various natural regret-minimization dynamics, there are no "non-trivial" dynamics that, in general, reach the former.  Let me say a word about this "non-triviality" by giving an example of what I would consider a trivial process: suppose that each player chooses a strategy at random every "turn", unless his last strategy was already a best reply to those of the others (up to some \epsilon indifference).  At some time, the players will happen to "hit" an (\epsilon-) equilibrium.  This type of dynamics, which simply searches over the whole space of strategy profiles, provides no insight and is not useful in most practical situations.  The point of the triviality is not the random choices but rather the essentially exhaustive search over every possibility.  Many other proposed "dynamics" for Nash equilibrium or for market equilibrium are similarly trivial — in some cases they resort to simply trying all possibilities (in some approximate sense).  The dynamics for correlated equilibria are not like this at all — they only look at a tiny "small-dimensional" fraction of the space of possibilities.  Why is that?  Complexity theory explains this phenomenon clearly: correlated equilibria can be found in polynomial time, but finding a Nash equilibrium (or many types of market equilibrium) is PPAD-complete.
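For contrast with the trivial search above, here is a sketch (my own illustration) of external regret matching, one of the natural regret-minimization dynamics alluded to: each player plays actions with probability proportional to positive cumulative regret.  The time-averaged joint play of such no-external-regret dynamics converges to the set of coarse correlated equilibria; the conditional-regret variant of Hart and Mas-Colell reaches correlated equilibria proper.  The 2x2 "chicken" payoffs below are made up for illustration.

```python
# Sketch of external regret matching in a two-player game (illustrative only).
import random

def regret_matching(u0, u1, rounds=100_000):
    n = len(u0)                              # actions per player (square game)
    regrets = [[0.0] * n, [0.0] * n]
    counts = {}                              # empirical joint distribution of play
    for _ in range(rounds):
        actions = []
        for p in range(2):
            pos = [max(r, 0.0) for r in regrets[p]]
            if sum(pos) > 0:
                actions.append(random.choices(range(n), weights=pos)[0])
            else:
                actions.append(random.randrange(n))
        a, b = actions
        counts[(a, b)] = counts.get((a, b), 0) + 1
        # external regret update: how much better would each fixed action have done?
        for alt in range(n):
            regrets[0][alt] += u0[alt][b] - u0[a][b]
            regrets[1][alt] += u1[a][alt] - u1[a][b]
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

if __name__ == "__main__":
    # "Chicken": actions are C=0, D=1; the interesting correlated equilibria mix
    # the two pure equilibria (C,D) and (D,C) like a traffic light.
    u0 = [[6, 2], [7, 0]]   # row player's payoffs
    u1 = [[6, 7], [2, 0]]   # column player's payoffs
    print(regret_matching(u0, u1))
```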


A few days ago, Northwestern hosted a conference in honor of Paul Milgrom winning the Nemmers Prize.  (As reported here and here.)  The speakers seem to be the A-list of Economists in Mechanism Design, and their talk slides, as well as a video of Milgrom's talk, are on the web page.  Computer Scientists were not invited, but it seems that Lance Fortnow was still able to tweet from there gems like:

Preston McAfee: Left to their own devices computer scientists would recreate the Soviet Union

Susan Athey: CS still needs to catch up in auction theory (on why Microsoft hires economists)


Eran Shmaya, in a beautiful recent blog post, adds some sociological angles to the economists' never-ending introspection about whether their field is a science, a discussion whose usual starting point is Popper's falsifiability.  The discussion about whether economic theory has any significant predictive power has certainly picked up lately due to the financial crisis, which convinced many people that the answer is "No".  (With a variant taken by many micro-economists claiming that micro is fine and only macro-economics is a superstition.)

Whenever I hear these types of discussions about the predictive power of economic theory, I always think about the irony that, as an algorithmic game-theorist, I am mostly interested in using economic theory as a branch of mathematics…  Arrow's theorem is simply true, not falsifiable.  So is Myerson's expression for optimal auctions, as well as many other mathematical tidbits that we, computer scientists, pick up from economic theory.  We don't really care how well these theorems relate to human situations, since we use them in different circumstances, those related to computer systems.  I often wish that theoretical economists would allow themselves to stray even farther away from the realities of their human subjects.

This is not to say that we, in algorithmic game theory, need not worry about how well our theories correspond with reality.  We should.  We have not really started doing so.  And it's high time we did.  But our concerns may be more like those of engineers than those of scientists.  From economics, I'm happy to just take the theory…


Games and Economic Behavior (GEB) is the leading academic journal in Game Theory, and as such is a natural place for submission of your Algorithmic Game Theory papers.  (Other natural alternatives include TCS journals, AI journals, Economics journals, and OR journals.)  GEB has been very “CS-friendly” for quite some time, to the point of having added several CS researchers to its editorial board.

Judging papers on an interdisciplinary border is always a tricky business, and in the case of CS there is the additional complication of the conference publication culture in CS.  GEB has recently undergone some internal discussion, reaching general guidelines for the evaluation of CS papers for publication in GEB.  These have been written by Editor David Parkes and recently sent to the editorial board.  With his permission, I am publishing them verbatim.

The evaluation of CS papers for GEB: Guidelines

* On the Role of GEB

To be published in GEB, a paper should be of interest to game theorists; for CS-themed papers it should meet the standards set by the relevant leading field journal. Papers should be original, make a substantial contribution, and provide a broadly accessible exposition.

For CS papers in particular, a unique role that GEB plays is in certification that a piece of work is correct and is of interest to the game-theory community. This is important in meeting GEB’s mission of communicating game theoretic ideas across disciplines.

For the moment, many submitted papers will have previously appeared in “quasi-archival” CS conferences, such as FOCS/STOC, AAAI/IJCAI, EC and WINE. These are heavily reviewed and high quality venues, but publication does not certify interest to the GT community, nor necessarily the correctness of proofs, and the final version may also differ from a reviewed version.

* Originality of Contribution

In considering the role of GEB in certification of interest and correctness and its mission of communicating ideas, and the role of conferences within CS, we can expect the incremental contribution of the GEB paper over a CS conference paper to be much less than that required in other fields.

But the stated policy will remain that there should still be some incremental contribution, judged to be of some technical significance and game-theoretic interest. Authors should be expected to discuss differences from a conference version in a cover letter.

There has been an extensive discussion on whether or not to require an incremental contribution over a conference paper and there is not unanimity here. But this policy seems to best balance considerations about this role of GEB, with other considerations (e.g., that the GEB version be the reference version, and in making efficient use of reviewer resources.)

* On Scope

GEB should be a welcoming place for appropriate papers at the CS/GT interface, and we need to be as clear as possible in communicating with authors about what is considered in scope. It is this issue about which we have received most questions in the past couple of years.

The journal aims to communicate game-theoretic ideas across fields and applications, and CS constitutes a significant group of contributors to both GT and its applications. But what about a paper that doesn’t contribute to GT or its applications (i.e., it does not develop new game-theory per se, no new game-theoretic approach, does not test existing GT, etc.), but rather is computer-science centric in its contributions, be they theoretical, computational, or algorithmic?

This is a difficult area, but a broad guideline is that papers should be in scope if they are computational, but on a topic “of interest to game theory.” For example, new results related to core solution concepts, equilibrium analysis, or pertaining to issues of bounded rationality, would be in scope. On the other hand, results on more applied problems, such as faster algorithms for winner determination in auctions, or for clearing prediction markets, may be out of scope and may be better submitted to a CS or OR journal.  And as already stated, to be acceptable to GEB a paper must also be of a quality acceptable by the relevant leading field journal.

* Most importantly, be Flexible

Again, our overall mission is to communicate GT ideas across disciplines, and the guidelines above were designed towards this goal. But we suggest that every paper be judged with the overall mission in mind, even if it does not meet the guidelines above.  A reviewer should weigh all relevant variables: the importance of the results, the nature of the earlier publication (the refereeing process, visibility, the level of exposition in terms of technical details and applicability, etc.), the marginal contribution to knowledge in the GEB publication beyond the earlier publications, and so forth.


In the last month, Brendan Lucier has uploaded three conceptually related papers to the arXiv (two of them with co-authors):

  1. Beyond Equilibria: Mechanisms for Repeated Combinatorial Auctions by Brendan Lucier
  2. Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier
  3. Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (to appear in SODA 2010)

All three papers aim to improve the approximation ratios obtained by computationally efficient mechanisms.  Each of them gives pretty strong and general results, but at a price: relaxing the notion of implementation from the usual one (in CS) of dominant strategies.  Specifically, the notions used in these three papers are, respectively:

  1. “We consider models of agent behaviour in which they either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies.”
  2. “We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics).”
  3. “We study the price of anarchy for such mechanisms, which is a bound on the approximation ratio obtained at any mixed Nash equilibrium.”

In general, I like the strategy of relaxing the required notion of implementation or equilibrium in order to get strong and general results.  This is especially true in the context of algorithmic mechanism design, for a few reasons: (1) the concept used in CS seems to be so strict that simply not enough is achievable; (2) the CS point of view is anyway much stricter than the Bayesian-Nash one commonly used by economists; (3) much less seems to be needed in practice in order to ensure cooperative behavior (e.g. people just use TCP truthfully despite the clear opportunities for beneficial manipulation).  The difficulty with the relaxation strategy is that there are many possible ways of relaxing the game-theoretic requirements, and it takes some time to really be convinced which notions are most useful, and in which situations.  I would love to see more community discussion of these, as well as other notions of relaxed implementation and equilibrium.


An NSF-targeted workshop on Research Issues at the Interface of Computer Science and Economics has just taken place at Cornell, attended by many central people in the area (but not by me), including several bloggers who reported on it (Lance, Muthu a b c, and Rakesh).  This is another one in a sequence of workshops where economists and computer scientists meet to exchange ideas, results, and points of view.

A fruitful issue in the interface is taking account of computational complexity in economic and game-theoretic settings.  This has received much attention (e.g. in Rakesh’s post), and while it certainly is a key issue, I disagree that this is the core issue in this interface.  I believe that even more interesting is the combination and interplay of the different research attitudes of the two communities, a combination that is needed for studying the future economic issues brought by technological advances.

I believe that much of the difference in attitudes comes from economists' emphasis on "what is" vs. CS's emphasis on "what can be".  Even theoretical economists try to talk about the real world: their examples and motivations are real existing industries and situations, and their philosophical point of view is mostly scientific — understand the world.  Computer scientists prepare for the future, knowing that most of what exists today will soon become obsolete.  Our motivations and examples often assume a few more years of the steady march of Moore's law.  These attitudes are the outcome of the different situations of these disciplines: economic work that did not correspond with reality was eventually ignored, while CS work that was not forward-looking became obsolete.  This distinction is obviously somewhat simplistic and exaggerated, but this does seem like the general spirit of things.  Indeed, the sub-area of economics that was most natural for computer scientists to "blend with" was Mechanism Design, which, unusually for economics, has an engineering spirit of "what can be".  I feel that many of the technical and cultural differences between the communities have their roots in this difference.

Economists and Game-theorists take their models much more seriously and literally than computer scientists do.  While an economist's model should correspond to some real phenomenon, a CS model should correspond to a host of unknown future situations.  A CS reader is more willing to take the leap of imagination between the model as stated and various unknown future scenarios.  In compensation, the CS reader expects a take-away that transcends the model, often found in the form of mathematical depth.  An economics reader is much more skeptical of a model and demands specific justification, but, on the other hand, mathematical depth is not seen as a feature but rather as a nuisance to be buried in the appendix.

The Bayesian point of view of economists vs. the worst-case point of view of computer scientists has a similar root, I think.  In a situation that already exists, there is obviously some empirical distribution, and it is hard to argue against taking it into account.  In a situation that doesn't yet exist, there is no empirical distribution yet, so in order to prepare for it we need either to theoretically guess the future distribution or to prepare for the worst case.  CS has learned that the first option does not work well.

There are other high-level differences I can think of, but let me give two specific ones.  How does one interpret a 4/3-approximation result in a network setting?  While an economist will certainly consider a loss of 33% to be a negative result (as it would be in almost any existing scenario), a computer scientist will view it as giving up a few months of technological advance — a negligible cost compared to the other difficulties facing the adoption of anything new.

In Mechanism Design, computer scientists focus on the simplest model (dominant-strategy implementation; independent private values; risk-neutrality) and spend most of their efforts exploring the limits of the model, inventing new mechanisms or proving impossibility for various scenarios which are obviously unrealistic today.  Economic theory, on the other hand, has moved on to more sophisticated models (risk aversion, dependent values, etc.), and is not as concerned with the limits of the model — until a real application emerges, like spectrum auctions.

I think that the future exploration of the interface of economics and CS should not only combine notions and results from both fields, but also combine attitudes and points of view.  The latter is even more difficult to do well.

