
Archive for March, 2010

ACM just announced that Tim Roughgarden has won the 2009 Grace Murray Hopper award (“awarded to the outstanding young computer professional of the year”) for “his research combining computer science and game theory to analyze network routing among self-interested parties”. Congratulations to Tim as well as to the winners of the other ACM awards.


AGT/E Grad Schools

I was asked which CS departments are recommended for getting your graduate degree in Algorithmic Game Theory (/Economics).  This sounds like an excellent question to leave for commenters on my blog, but let me start with some of the obvious answers:

The three top CS departments are all excellent for AGT as well: Berkeley has the patriarch of the field, as well as an AGT-active networking group (Scott Shenker), and a whole related School of Information.  MIT has Costis and, from the crypto side, Silvio (who in recent years has been advancing a hostile takeover of mechanism design by crypto), as well as a network GT group, and the fantastic nearby Microsoft Research center.  Stanford has Tim and, on the AI-ish side, Yoav, as well as strong AGT-friendly economists (Paul Milgrom, Ilya Segal), and close proximity to Silicon Valley and in particular to the excellent AGT research groups of Microsoft and of Google.

Other noteworthy places include:

  1. The very strong group in Cornell (including Eva Tardos, Jon and Bobby Kleinberg, Joe Halpern and, from the crypto side, Rafael Pass).
  2. The EEconomiCS group in Northwestern (Lance Fortnow, Nicole Immorlica, Jason Hartline) with strong support from the business school (including Ehud Kalai, the unofficial AGT godfather within GT, and Rakesh Vohra, a major communication hub between Econ/GT, OR, and TCS).
  3. Liverpool has a strong Economics and Computation group.
  4. Israel has an unusually strong presence in AGT, including Tel-Aviv, the Technion, and my own CS department at the Hebrew University (Jeff Rosenschein, Daniel Lehmann), with a strong supporting Center for the Study of Rationality (including AGT researchers Liad Blumrosen and Michal Feldman as well as many AGT-close economists, game theorists, and mathematicians like Sergiu Hart, Gil Kalai, and Abraham Neyman).

So let me stop with this incomplete list, and ask readers for more suggestions.


I spent most of last week in the Bertinoro workshop on Frontiers in Mechanism Design organized by Jason Hartline and Tim Roughgarden.  Such a small focused event is really a winning format (when done right — of course): almost every talk was very close to my interests and was really good (since the speakers presented their preferred recent work which usually was also accepted to some other top conference).

One of my take-homes from this event was the strengthened realization that computer scientists are doing more and more Bayesian analysis in Algorithmic Mechanism Design, in this respect getting closer to the economists’ way of thinking.  It is not that computer scientists have lost their basic dislike of “average case analysis”, distributional priors, or, especially, common priors; it is just that we have started reaching the limits of “worst case” analysis in more and more places.  It seems that computer scientists are being very careful about “how much” they rely on Bayesian analysis, obtaining various “hybrid” results that are more robust, in various senses, than straight Bayesian ones.

An extremely good example is the classic result of the economists Jeremy Bulow and Paul Klemperer, who show that the revenue of the straight 2nd price auction (for a single item) with n+1 bidders always dominates the revenue of Myerson’s optimal auction with n bidders.  Only the analysis is Bayesian: the resulting 2nd price (n+1)-bidder auction is completely “worst-case” — neither the auctioneer nor the bidders need any knowledge of, or agreement on, the underlying distribution.   In spirit, such a result is similar to Valiant’s prior-free learning, where the analysis is with respect to an underlying distribution even though the learning algorithms cannot depend on it.  A recent paper, Revenue Maximization with a Single Sample by Dhangwatnotai, Roughgarden and Yan (to appear in EC 2010), gets more general results in this prior-independent vein, although in an approximation sense.
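
To make the Bulow-Klemperer comparison concrete, here is a minimal Monte Carlo sketch (my own illustration, not taken from any of the papers above), assuming i.i.d. Uniform[0,1] values, for which Myerson’s optimal auction happens to be a 2nd price auction with reserve price 1/2:

```python
# Hypothetical illustration: compare (by simulation) the expected revenue of
# Myerson's optimal auction with n Uniform[0,1] bidders (2nd price, reserve 1/2)
# against the plain 2nd price auction with n+1 bidders, as in Bulow-Klemperer.
import random

def second_price_revenue(values, reserve=0.0):
    """Revenue of a single-item 2nd price auction with a reserve price."""
    above = sorted((v for v in values if v >= reserve), reverse=True)
    if not above:
        return 0.0                  # no sale
    if len(above) == 1:
        return reserve              # lone qualifying bidder pays the reserve
    return max(above[1], reserve)   # winner pays max(2nd highest bid, reserve)

def estimate(n, trials=100_000):
    opt_with_n, plain_with_n_plus_1 = 0.0, 0.0
    for _ in range(trials):
        vals = [random.random() for _ in range(n + 1)]
        opt_with_n += second_price_revenue(vals[:n], reserve=0.5)
        plain_with_n_plus_1 += second_price_revenue(vals)
    return opt_with_n / trials, plain_with_n_plus_1 / trials

for n in (1, 2, 5):
    opt, plain = estimate(n)
    print(f"n={n}: optimal with n bidders ~ {opt:.3f}, "
          f"plain 2nd price with n+1 bidders ~ {plain:.3f}")
```

For n=1, for instance, the two estimates should come out near 1/4 and 1/3 respectively, in line with the direction of the theorem.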

A weaker type of result, but still better than full-blown Bayesian, is the 2nd price version of Myerson’s auction.  In this version, the auctioneer must “know” the underlying distribution in order to set the optimal reserve price, but once that is done, the bidders see a “worst-case” situation in front of them (2nd price with reserve) and should bid truthfully in the dominant-strategy sense, without needing to know or agree about the underlying prior distribution.  (This is opposed to the revenue-equivalent-in-principle 1st price version, in which bidders must know and agree on a common prior for them to have any chance of reaching the desired equilibrium.)  A recent paper, Multi-parameter mechanism design and sequential posted pricing by Chawla,  Hartline,  Malec, and Sivan (to appear in STOC 2010), gets similar types of results in a unit-demand heterogeneous auction setting where the auctioneer needs to know the distribution in order to set prices (in this case, posted prices), but the resulting mechanism is very simple and truthful in the dominant-strategy sense (again, the optimality guarantee is only in an approximation sense).
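
To illustrate why posted prices are truthful in the dominant-strategy sense, here is a toy sketch (my own, not the construction of Chawla et al.) of sequential posted pricing for a single item: the prior is used only to precompute the prices, and each bidder then faces a take-it-or-leave-it offer, so accepting exactly when her value meets the price is dominant:

```python
# Hypothetical sketch of sequential posted pricing for a single item.
# The prices are computed offline from the (known) prior; the mechanism
# itself is prior-free from the bidders' point of view.
def sequential_posted_price(values, prices):
    """Offer the item to bidders in order; the first bidder whose value
    meets her personal price buys at that price. Returns (winner, revenue)."""
    for i, (value, price) in enumerate(zip(values, prices)):
        if value >= price:      # accepting iff value >= price is dominant
            return i, price
    return None, 0.0

# Example with the monopoly price 1/2 for Uniform[0,1] bidders.
print(sequential_posted_price([0.3, 0.7, 0.9], [0.5, 0.5, 0.5]))  # -> (1, 0.5)
```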

A yet weaker version of a similar type of result appears in the paper Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier (to appear in STOC 2010).  In this paper the auctioneer again needs to know the underlying distribution, and he then creates a mechanism that is incentive compatible, but here only in the Bayesian sense.  I.e., the bidders need not know the underlying distribution (as they should just act truthfully), but they should still trust that the auctioneer knows the prior distribution.  This type of result is more fragile than the previously mentioned ones, since the truthfulness of the auction depends on the auctioneer correctly knowing the underlying distribution, rather than just its optimality.  On the plus side, the paper shows that the auctioneer’s derivation of the auction can be done effectively just by using a “black box” for sampling the underlying distribution (as is the case for the derivation of Myerson’s optimal reserve price).
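
The “sampling black box” remark can be made concrete with a small sketch (again my own illustration, and only for the simplest single-bidder monopoly-pricing case): the optimal reserve is estimated directly from samples by maximizing the empirical revenue curve, with no explicit description of the distribution needed:

```python
# Hypothetical sketch: estimate the monopoly reserve price using only a
# sampling black box for the value distribution, by maximizing the empirical
# revenue curve p * Pr[value >= p] over the sampled prices.
import random

def empirical_monopoly_reserve(sample, num_samples=10_000):
    values = sorted(sample() for _ in range(num_samples))
    best_price, best_revenue = 0.0, 0.0
    for i, p in enumerate(values):
        revenue = p * (num_samples - i) / num_samples  # p * empirical Pr[v >= p]
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

# For Uniform[0,1] the true optimal reserve is 1/2; the estimate should be close.
print(empirical_monopoly_reserve(random.random))
```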

A somewhat dual situation is presented in the paper Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (SODA 2010).  In that paper, auctions are presented in which the auctioneer need not know the underlying distribution and acts in a “detail-free” way, i.e., the auction is independent of the underlying distribution.  However, the approximate optimality holds when the bidders are in a Bayesian equilibrium, i.e., the bidders must know and agree on a common prior for the analysis to hold.

The last example of “somewhat-Bayesian” results that comes to mind has nothing to do with incentives but is just algorithmic.  The paper The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations by Nikhil Devanur and Thomas Hayes (in EC 2009) considers an online model of repeated auctions in which no distributional assumptions are made on the bidder values, which are taken to be “worst case”, but a distributional assumption is made on the order of arrival, which is assumed to be uniformly random.  This allows them to get arbitrarily good approximations, circumventing the known lower bounds for the completely worst case.
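
Just to fix the allocation setting they study, here is a toy sketch (my own, showing only the naive greedy rule, not the paper’s near-optimal algorithm, which exploits the random arrival order): keywords arrive one by one and each is assigned to the bidder with the highest bid among those whose remaining budget still covers it:

```python
# Hypothetical sketch of the online budgeted-allocation (Adwords) setting,
# using the naive greedy rule: assign each arriving keyword to the highest
# bidder whose remaining budget still covers the bid.
def greedy_adwords(arrivals, bids, budgets):
    """arrivals: keyword ids in arrival order; bids[bidder][keyword]: bid;
    budgets[bidder]: total budget. Returns total revenue collected."""
    remaining = dict(budgets)
    revenue = 0.0
    for keyword in arrivals:
        candidates = [(bids[b].get(keyword, 0.0), b) for b in remaining
                      if 0.0 < bids[b].get(keyword, 0.0) <= remaining[b]]
        if candidates:
            bid, bidder = max(candidates)
            remaining[bidder] -= bid
            revenue += bid
    return revenue

bids = {"a": {"x": 1.0, "y": 0.4}, "b": {"x": 0.8, "y": 0.9}}
budgets = {"a": 1.0, "b": 1.0}
print(greedy_adwords(["x", "y"], bids, budgets))  # -> 1.9
```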

While economists too have looked at weakenings of the fully Bayesian assumptions, as computer scientists are doing now, I see a difference in the two approaches.  Harsanyi‘s influence on economic thinking has been so great that the Bayesian point of view seems to be taken by economists as the correct model, and even its criticisms (cf. the Wilson critique) and results that weaken the assumptions are simply taken as “2nd order” improvements.  Computer scientists, on the other hand, seem to have their basic intellectual foundation in a non-Bayesian setting, and as they move toward Bayesian models they do not seem to have a single model in mind but are rather quite happy to explore the “Pareto frontier” between the strength of the model and the strength of the results obtained.

Finally, as a side note, let me observe that while the gap between computer scientists and economists is narrowing on the Bayesian question, the other great gap — the issue of approximation — seems to be as great as ever.  E.g., all the results by computer scientists mentioned in this post provide only approximate guarantees, while all those by economists provide exact ones.


David Pennock returns to the question of what to call the field that lies at the intersection (or maybe crossroads) of economics and computation. As I described in my first blog post, about a year ago, many of us were loosely using the term “algorithmic game theory” for this field as well, but I have to agree that this puts a wrong spin on the field.  Economics goes beyond game theory, and “CS ∩ Econ” goes beyond “CS ∩ GT”, and so should the name, if only to convey this message.  This would also fit more naturally the various areas that are often considered part of “AGT” but are really not GT: from computation of market equilibria, to prediction markets, to social networks, and much more.

So let me second David’s suggestion of Algorithmic Economics which also combines nicely with Algorithmic Game Theory yielding Algorithmic Game Theory and Economics. This sounds better than the wider “computation and economics”, while the perhaps more precise “computational economics” is sort of taken (although we could think of a hostile take-over, I suppose).

Maybe I should change the name of my blog to Algorithmic Game Theory and Economics?


Co-located with EC…

EC 2010 will be held on June 7-11, 2010, at Harvard (Boston).  Beyond being co-located with the major theory conferences STOC and CCC, several smaller EC-related events will be held there as well.


Non-archival conferences?

A comment on my recent post mentions that “COMSOC is non-archival so it should not have an impact on the number of submissions to EC”.  Frankly, I wasn’t really aware of this fact (despite being on the COMSOC PC), but indeed looking at the COMSOC website I see that:

Accepted papers will be collected in informal workshop notes, printed copies of which will be available at the workshop. To accommodate the publishing needs of different scientific communities, we stress that authors will retain the copyright of their papers and that submitting to COMSOC-2010 does not preclude publication of the same material in a journal or in archival conference proceedings.

Submission and reviewing standards seem like those of usual “archival conferences” except that “recently published” submissions are OK too:

Submissions of papers describing original or recently published work on all aspects of computational social choice are invited…..   All submitted papers will be reviewed by the program committee.

I wonder what this really means.  The issue of copyright is pretty clear but seems scientifically irrelevant, especially since one would hope that most papers will be freely available on the web (preferably on arXiv and findable from the COMSOC website).  The fact that written proceedings will not be made available in “normal” print venues also seems clear but, again, who cares?  Many “archival conferences” don’t have printed proceedings either, mostly since these seem pretty useless given the web.

The fact that papers appearing in COMSOC can supposedly be published in economics journals, unlike those in “archival conferences”, is pure “voodoo”:  you change a few scientifically irrelevant technicalities (like copyright) and, lo and behold, your new conference paper suddenly becomes un-published and acceptable to journals.  In fact, EC takes this to a logical conclusion, allowing the authors to choose the archival/non-archival tag of their paper:

To accommodate the publishing traditions of different fields, authors may instead submit working papers that are under review or nearly ready for journal review. These submissions will be subject to review and considered for presentation at the conference but only a one page abstract will appear in the proceedings with a URL that points to the full paper and that will be reliable for at least two years.

One may ask whether a paper is “really” published if it appeared in a non-archival conference.  My point is that the meaning of “really” is not very real.  There is the question of to what extent the paper was evaluated and refereed, but this does not seem related to the “archival” nature of the conference.  As an example, EC papers are judged in the same way whether or not they choose the non-archival track; I also doubt that the “non-archival” status of COMSOC will have much bearing on how the PC evaluates papers.  There is the question of whether one lists it on one’s CV — but different people may do different things, and even the same person has different versions of their CV according to the standards of those who request it.

Finally, there is the all-important question of how much weight and prestige is attached to the conference publication in hiring, promotion, or grant decisions (as is uniquely done in CS).  This again differs widely between institutions, as well as with the decision in question.  For example, from my limited experience, more weight is given to conference publications in hiring decisions than in tenure decisions, and more weight is given to conference publications by top-ranked departments than by lower-ranked ones.  Is this prestige a function of the “archival” status of the conference?  I doubt it.  The bottom line is that, to a first approximation, any conference will gain or lose prestige according to whether appearing in it is indeed an indication of scientific quality.  In the case of COMSOC, as for any other young conference, this still remains to be seen.

So, what’s my bottom line: I like the notion of a “non-archival” competitive conference that does not expect exclusivity from submitted papers.  I certainly see no reason to “respect” such conference publications less than “archival” ones — respect should just be a function of quality.  Maybe having high-quality highly-competitive “non-archival” conferences can get the best of both worlds in the CS journal-conference debate? (See also my view.)


EC accepted papers

The list of accepted papers for EC 2010 has been posted.


SAGT 2010

The 3rd International Symposium on Algorithmic Game Theory, SAGT 2010, will be held on October 18-20, 2010 in Athens, Greece.  Submission deadline is May 3rd.


This year I’m on the PC of the following two conferences:

The Third International Workshop on Computational Social Choice will take place in Düsseldorf, Germany, September 13–16, 2010.  Deadline for submissions is May 15th.

The Second Symposium on Innovations in Computer Science (ICS 2011) will be held in Beijing, China, January 7-9, 2011.  Deadline for submissions is August 2nd.
