I spent most of last week at the Bertinoro workshop on Frontiers in Mechanism Design, organized by Jason Hartline and Tim Roughgarden. Such a small, focused event is really a winning format (when done right, of course): almost every talk was very close to my interests and was really good (since the speakers presented their preferred recent work, which usually had also been accepted to some other top conference).

One of my take-homes from this event was a strengthened realization that computer scientists are doing more and more *Bayesian analysis in Algorithmic Mechanism Design*, in this respect getting closer to the economists’ way of thinking. It is not that computer scientists have lost their basic dislike of “average case analysis”, distributional priors, or, especially, common priors; it is just that we have started reaching, in more and more places, the limits of “worst case” analysis. Computer scientists seem to be very careful about “how much” they rely on Bayesian analysis, obtaining various “hybrid” results that are more robust, in various senses, than straight Bayesian ones.

An extremely good example is the classic result of the economists Jeremy Bulow and Paul Klemperer, who show that the revenue of the straight 2nd price auction (for a single item) with n+1 bidders always dominates the revenue of Myerson’s optimal auction with n bidders. Only the analysis is Bayesian: the resulting 2nd price (n+1)-bidder auction is completely “worst case”: neither the auctioneer nor the bidders need any knowledge of, or agreement on, the underlying distribution. In spirit, such a result is similar to Valiant’s prior-free learning, where the analysis is with respect to an underlying distribution even though the learning algorithms cannot depend on it. A recent paper, *Revenue Maximization with a Single Sample*, by Dhangwatnotai, Roughgarden and Yan (to appear in EC 2010), obtains more general results in this prior-independent vein, although in an approximation sense.
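For concreteness, the Bulow–Klemperer comparison can be checked by simulation in a simple special case. The sketch below (my own illustration, not from the paper) assumes values drawn i.i.d. from U[0,1], for which Myerson’s optimal auction is known to be a 2nd price auction with reserve price 1/2; the function names are hypothetical.

```python
import random

def second_price_revenue(bids, reserve=0.0):
    # Revenue of a 2nd price auction with a reserve: the item sells only
    # if the highest bid clears the reserve, at a price equal to
    # max(second-highest bid, reserve).
    bids = sorted(bids, reverse=True)
    if not bids or bids[0] < reserve:
        return 0.0
    second = bids[1] if len(bids) > 1 else 0.0
    return max(second, reserve)

def average_revenues(n, trials=200_000, seed=0):
    rng = random.Random(seed)
    bk_total = 0.0       # 2nd price, n+1 bidders, no reserve
    myerson_total = 0.0  # optimal auction for U[0,1], n bidders: reserve 1/2
    for _ in range(trials):
        bk_total += second_price_revenue([rng.random() for _ in range(n + 1)])
        myerson_total += second_price_revenue(
            [rng.random() for _ in range(n)], reserve=0.5)
    return bk_total / trials, myerson_total / trials

rev_bk, rev_opt = average_revenues(n=2)
# The 2nd price auction with 3 bidders should beat Myerson's optimal
# auction with 2 bidders (whose expected revenue is 5/12 for U[0,1]).
print(rev_bk, rev_opt)
```

Note that the simulated 2nd price auction uses no distributional information at all; the U[0,1] assumption enters only in the analysis (and in the benchmark it is compared against).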

A weaker type of result, but still better than full-blown Bayesian, is the 2nd price version of Myerson’s auction. In this version, the auctioneer must “know” the underlying distribution in order to set the optimal reserve price, but once that is done, the bidders see a “worst-case” situation in front of them (2nd price with reserve) and should bid truthfully in the dominant-strategy sense, without needing to know or agree about the underlying prior distribution. (This is opposed to the revenue-equivalent-in-principle 1st price version, in which bidders must know and agree on a common prior for them to have any chance of reaching the desired equilibrium.) A recent paper, *Multi-parameter mechanism design and sequential posted pricing*, by Chawla, Hartline, Malec, and Sivan (to appear in STOC 2010), gets similar types of results in a unit-demand heterogeneous auction setting, where the auctioneer needs to know the distribution in order to set prices (in this case, posted prices), but the resulting mechanism is very simple and truthful in the dominant-strategy sense (again, the optimality guarantee is in an approximation sense).
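To make the “auctioneer must know the distribution” step concrete: for a regular distribution, the optimal reserve price is the zero of Myerson’s virtual value φ(v) = v − (1 − F(v))/f(v). A minimal sketch (my own illustration; the function name and the bisection approach are my assumptions, not from any of the papers):

```python
def myerson_reserve(F, f, lo=0.0, hi=1.0, iters=60):
    # The optimal reserve r solves phi(r) = r - (1 - F(r)) / f(r) = 0,
    # where phi is Myerson's virtual value; bisection finds the zero
    # when phi is increasing (the "regular" case) and phi(lo) < 0 < phi(hi).
    def phi(v):
        return v - (1.0 - F(v)) / f(v)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Uniform[0,1]: phi(v) = 2v - 1, so the optimal reserve is 1/2.
r = myerson_reserve(F=lambda v: v, f=lambda v: 1.0)
print(r)  # ≈ 0.5
```

Once this single number is computed, the mechanism handed to the bidders is an ordinary 2nd price auction with reserve r, and truthful bidding is dominant regardless of what the bidders believe about the distribution.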

A yet weaker version of a similar type of result appears in the paper *Bayesian Algorithmic Mechanism Design* by Jason D. Hartline and Brendan Lucier (to appear in STOC 2010). In this paper, again, the auctioneer does need to know the underlying distribution, and he then creates a mechanism that is incentive compatible, but here only in the Bayesian sense. I.e., the bidders need not *know* the underlying distribution (as they should just act truthfully), but they should still *agree* that the auctioneer knows the prior distribution. This type of result is more fragile than the previously mentioned ones, since the *truthfulness* of the auction depends on the auctioneer correctly knowing the underlying distribution, rather than just its optimality. On the plus side, the paper shows that the auctioneer’s derivation of the auction can be done effectively just by using a “black box” for sampling the underlying distribution (as is the case for the derivation of Myerson’s optimal reserve price).
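What such black-box sampling can look like in the simplest, single-bidder monopoly-pricing case (a sketch under my own assumptions, not the paper’s actual construction): estimate the reserve by maximizing the empirical revenue over the observed sample values.

```python
import random

def reserve_from_samples(samples):
    # Pick the sample value p maximizing the empirical revenue
    # p * (fraction of samples >= p), i.e., price times sale probability.
    samples = sorted(samples, reverse=True)
    best_p, best_rev = 0.0, 0.0
    for i, p in enumerate(samples):
        rev = p * (i + 1) / len(samples)  # i + 1 samples are >= p
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p

rng = random.Random(1)
r_hat = reserve_from_samples([rng.random() for _ in range(100_000)])
print(r_hat)  # should be close to the true optimal reserve 1/2 for U[0,1]
```

The point is that the auctioneer never needs a closed-form description of the distribution, only the ability to draw from it; the fragility is that if the samples come from the wrong distribution, the resulting mechanism’s incentive properties (in the Bayesian sense) break, not just its revenue.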

A somewhat dual situation is presented in the paper *Price of Anarchy for Greedy Auctions* by Brendan Lucier and Allan Borodin (SODA 2010). In that paper, auctions are presented in which the auctioneer need not know the underlying distribution and acts in a “detail-free” way, i.e. the auction is independent of the underlying distribution. However, the approximate optimality holds when the bidders are in a Bayesian equilibrium, i.e. the bidders must know and agree on a common prior for the analysis to hold.

The last example of “somewhat-Bayesian” results that comes to mind has nothing to do with incentives but is purely algorithmic. The paper *The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations* by Nikhil Devanur and Thomas Hayes (in EC 2009) considers an online model of repeated auctions in which no distributional assumptions are made on the bidders’ values, which are assumed to be “worst case”, but a distributional assumption is made on the order of arrival, which is assumed to be uniformly random. This allows them to get arbitrarily good approximations, circumventing the known lower bounds for the completely worst case.

While economists too have looked at weakenings of the fully Bayesian assumptions, as computer scientists are doing now, I see a difference between the two approaches. Harsanyi‘s influence on economic thinking seems to be so great that the Bayesian point of view is taken as *the* correct model by economists, and even its criticisms (cf. the Wilson critique) and results that weaken the assumptions are simply taken as “2nd order” improvements. Computer scientists, on the other hand, seem to have their basic intellectual foundation in a non-Bayesian setting, and as they move toward Bayesian models they do not seem to have a single model in mind but are rather quite happy to explore the “Pareto frontier” between the strength of the model and the strength of the results obtained.

Finally, as a side note, let me observe that while the gap between computer scientists and economists is narrowing on the Bayesian question, the other great gap, the issue of approximation, seems to be as great as ever. E.g., all the results by computer scientists mentioned in this post only provide approximate results, while all those by economists provide exact ones.

on March 22, 2010 at 11:45 am | Anonymous
Where can we find the list of titles/abstracts of all the talks?

on March 22, 2010 at 9:23 pm | anon
I would say that economists almost always use worst-case assumptions: rational, self-interested agents. Not to speak of accounting and finance.

On the other hand, some people in TCS seem to like average-case:

http://en.wordpress.com/tag/average-case-complexity/


on March 23, 2010 at 5:35 pm | anonymous
For an example of worst-case analysis in the economics of invention, see:

http://www.againstmonopoly.org/index.php?limit=10&chunk=0&author=886089000000000866&perm=593056000000002716


