
Archive for September, 2014

A thank you to MSR/SVC


Microsoft Research’s Silicon Valley Center (MSR/SVC) was the home of a truly amazing research team. The team began at Digital Equipment Corporation’s Systems Research Center (DEC/SRC) in the mid-eighties and, over the last thirty years, has been a pioneer of distributed systems research and an example of industrial research at its best. Its thirteen-year run at MSR/SVC continued this tradition:


  • as a collaboration between two pillars of computer science, systems and theory;

  • as an industrial laboratory with unfettered academic freedom;

  • as a lab where research prototypes transferred to deployed systems, like Dryad, which shipped with Microsoft Server;

  • as a lab where fundamental theory of computation was envisioned and brought to maturity, like differential privacy;

  • as a lab that embraced and supported the greater academic community;

  • as a lab that, founded contemporaneously with the then-nascent field of economics and computation (EC), was pivotal in the field’s development.

Perhaps most importantly, MSR/SVC has had a profound impact on several generations of researchers who visited as Ph.D. students in the internship program, as postdocs, or as academic visitors. Indeed, many of these researchers have shared their experiences in the comments on Omer Reingold’s presumably final Windows on Theory blog post.

My own story is similar to that of many others: My advisor Anna Karlin was a member of the team during its DEC/SRC years. (Thank you Anna!) My Ph.D. thesis originated from an auction question posed by the team’s Andrew Goldberg and is based on what became a nine-paper collaboration. (Thank you Andrew!) I clearly remember my 2003 job interview with Roy Levin, before the first academic paper on the subject, where he told me about the problem of sponsored search and the research challenges it posed for auction theory. (Thank you Roy!) On graduation I joined the team as a researcher and spent an amazing four years in which I could not have found a more supportive, stable, and stimulating environment to work on the theory of mechanism design. Collaborations with Andrew Goldberg developed the competitive analysis of auctions. Collaborations with Moshe Babaioff and Alex Slivkins and lab visitors Avrim Blum, Nina Balcan, and Bobby Kleinberg brought connections to machine learning theory. Collaborations with visitors Shuchi Chawla and Bobby Kleinberg initiated the study of approximation in Bayesian mechanism design. Collaborations with lab visitor Madhu Sudan brought connections to coding theory. Discussions with Cynthia Dwork, Frank McSherry, and Kunal Talwar made connections to differential privacy. It was an incredible time to be with such an amazing group. MSR/SVC, thank you!


Last week MSR/SVC closed, and with it closed a chapter in the story of an elite team of researchers, a culture of collaboration, and an institution of research excellence. I hope, just as the team survived DEC’s acquisition by Compaq in 1998 and Compaq’s acquisition by Hewlett-Packard in 2002, that this chapter is not its last. On behalf of myself, my co-bloggers at Turing’s Invisible Hand, and the EC research community, I would like to thank the MSR/SVC team for everything they have done for computer scientists, computer science, and the field of economics and computation.


This year, the Conference on Web and Internet Economics (WINE) will take place in Beijing and will feature a poster session for the presentation of both original results that have not yet been published elsewhere and recently published results of relevance to the WINE community. While the deadline for regular paper submission has already passed, you still have time to submit your work to the poster session (the deadline is September 15). This is a great opportunity to get your work in front of the WINE community. More information can be found here: http://wine2014.amss.ac.cn/dct/page/70011


An important conjecture in prior-free mechanism design was affirmatively resolved this year. The goal of this post is to explain what the conjecture was and why its resolution is fundamental to the theoretical study of algorithms (and mechanisms), and to encourage the study of the important open issues that remain.


The conjecture, originally from Goldberg et al. (2004), is that the lower bound of 2.42 on the approximation factor of a prior-free digital good auction is tight. In other words, the conjecture stated that there exists a digital good auction that, on any input, obtains revenue at least a 1/2.42 fraction of the best revenue from a posted price that at least two bidders accept (henceforth: the benchmark). The number 2.42 arises as the limit as the number of agents n approaches infinity (for finite n the bound is smaller and given by a precise formula). The conjecture was resolved in the affirmative by Ning Chen, Nick Gravin, and Pinyan Lu in the STOC 2014 paper Optimal Competitive Auctions. The resolution of this conjecture suggests that a natural method of proving a lower bound for approximation is generally tight, but we still do not really understand why.

Summary.
To explain the statement of the theorem, let’s consider the n = 2 special case. For n = 2 agents, the benchmark is twice the lower agent’s value. (The optimal posted price that both bidders will accept is the lower bidder’s value; the revenue from this posted price is therefore twice the lower bidder’s value.) The goal of prior-free auction design is to find an auction that approximates this benchmark. For the n = 2 special case there is a natural candidate: the second-price auction. The second-price auction’s revenue for two bidders is equal to the lower bidder’s value. Consequently, the second-price auction is a two-approximation: the ratio of the benchmark to the second-price auction’s revenue is two in the worst case over all inputs (in fact, it is exactly two on all inputs).
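
To make this concrete, here is a quick Python sketch (my own illustration, not code from any of the papers; the helper names are made up) of the two quantities being compared:

    def benchmark(values):
        # Best revenue from a posted price that at least two bidders
        # accept: post the i-th highest value (i >= 2) and sell to the
        # top i bidders at that price.
        v = sorted(values, reverse=True)
        return max(i * v[i - 1] for i in range(2, len(v) + 1))

    def second_price_revenue(v1, v2):
        # Two-bidder second-price (Vickrey) auction: the higher bidder
        # wins at a price equal to the lower bidder's value.
        return min(v1, v2)

    # On every two-bidder input the benchmark is exactly twice the
    # second-price revenue.
    for v1, v2 in [(1.0, 1.0), (3.0, 1.0), (10.0, 7.5)]:
        assert benchmark([v1, v2]) == 2 * second_price_revenue(v1, v2)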


The Goldberg et al. (2004) lower bound for the n = 2 agent special case shows that this two-approximation is optimal. The proof of the lower bound employs the probabilistic method. A distribution over bidder values is considered, the expected benchmark is analyzed, the expected revenue of the optimal auction (for the distribution) is analyzed, and the ratio of these expectations gives the lower bound. The last step follows because any auction obtains at most the optimal auction’s expected revenue, and if the ratio of the expectations has a certain value, there must be an input in the support of the distribution on which the ratio is at least this value. A free parameter in this analysis is the distribution over bidder values. The approach of Goldberg et al. was to use the distribution for which all auctions obtain the same revenue, i.e., the so-called equal revenue or Pareto distribution. This distribution is defined so that an agent with a random value accepts a price p > 1 with probability exactly 1/p, and thus the expected revenue generated is exactly one.
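
Both ingredients of this calculation are easy to check numerically. The following Monte Carlo sketch (again my own, with made-up names) samples the equal-revenue distribution by inverse-CDF sampling and estimates the per-agent revenue of a few posted prices and the expected two-agent benchmark:

    import random

    def equal_revenue_sample():
        # Inverse-CDF sampling: if u is uniform on [0, 1), then
        # v = 1/(1 - u) satisfies Pr[v > p] = 1/p for every p > 1.
        return 1.0 / (1.0 - random.random())

    random.seed(0)
    trials = 500_000

    # A price p > 1 is accepted with probability 1/p, so the expected
    # revenue from a single agent is p * (1/p) = 1 at any price.
    for p in (1.5, 2.0, 10.0):
        rev = sum(p for _ in range(trials) if equal_revenue_sample() >= p)
        print(f"price {p}: per-agent revenue ~ {rev / trials:.3f}")  # ~1.0

    # The expected two-agent benchmark is E[2 min(v1, v2)] = 4; the
    # distribution is heavy-tailed, so this estimate converges slowly.
    bench = sum(2.0 * min(equal_revenue_sample(), equal_revenue_sample())
                for _ in range(trials))
    print(f"benchmark ~ {bench / trials:.3f}")  # ~4.0

The two estimates, roughly one per agent and roughly four in total, are exactly the expectations whose ratio gives the lower bound of two for n = 2.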


For more details see the newly updated Chapter 6 of Mechanism Design and Approximation.


Discussion.
Prior-free mechanism design falls into a genre of algorithm design where there is no pointwise optimal algorithm. For this reason, the worst-case analysis of a mechanism is given relative to a benchmark. (The same is true for the field of online algorithms, where this is referred to as competitive analysis.) In the abstract, the optimal algorithm design problem is the following:


    \min_{\text{ALG}} \max_{\text{INPUT}} \frac{\text{BENCHMARK}(\text{INPUT})}{\text{ALG}(\text{INPUT})}

where ALG is a possibly randomized algorithm. Let ALG* be the optimal algorithm. Yao’s minimax principle states that this is the same as:

    \max_{\text{DIST}} \min_{\text{ALG}} \frac{{\mathbf E}_{\text{INPUT} \sim \text{DIST}}[\text{BENCHMARK}(\text{INPUT})]}{{\mathbf E}_{\text{INPUT} \sim \text{DIST}}[\text{ALG}(\text{INPUT})]}

where DIST is a distribution over inputs and ALG may as well be deterministic. Of course, if instead of maximizing over DIST we consider some particular distribution, we get a lower bound on the worst-case approximation of any algorithm. Let DIST* be the worst distribution for BENCHMARK. In general, DIST* should depend on BENCHMARK.
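
For instance, instantiating DIST with two i.i.d. equal-revenue values recovers the two-agent lower bound from the summary above (the worked calculation below is my addition; it uses only the tail probability Pr[v > p] = 1/p and the fact that every auction’s expected revenue under this distribution is one per agent, hence at most two in total):

    \frac{{\mathbf E}[\text{BENCHMARK}]}{{\mathbf E}[\text{ALG}]} \geq \frac{{\mathbf E}[2 \min(v_1, v_2)]}{2} = \frac{2 \left(1 + \int_1^\infty v^{-2}\, dv\right)}{2} = 2,

so no two-agent auction can beat a two-approximation of the benchmark, matching the guarantee of the second-price auction.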


Let EQDIST denote the product distribution for which the expected value of ALG(INPUT), for INPUT drawn from EQDIST, is a constant for all auctions ALG. For a number of auction problems (not just digital goods), it was conjectured that DIST* = EQDIST. Two things are important in this statement:


  • EQDIST is a product distribution, whereas Yao’s theorem may generally require correlated distributions. (Why is a product distribution sufficient?!)
  • EQDIST is not specific to BENCHMARK. (Why not?!)


Prior to the Chen-Gravin-Lu paper, the equality of EQDIST and DIST* was known to hold for specific benchmarks and the following problems:


  1. Single agent monopoly pricing (for revenue). See Chapter 6 of MDnA.
  2. Two-agent digital-good auctions (for revenue). See Chapter 6 of MDnA.
  3. Three-agent digital-good auctions (for revenue). See Hartline and McGrew (2005).
  4. One-item two-agent auctions (for residual surplus, i.e., value minus payment).

Of these, (1) and (2) are very simple, while (3) and (4) are non-obvious. All of these results come from explicitly exhibiting the optimal auction ALG*.


Chen, Gravin, and Lu give the first highly non-trivial proof that DIST* = EQDIST without explicitly constructing ALG*. Moreover, they do it not just for the standard benchmark (given above) but for any benchmark with certain properties. It’s clear from their proof which properties they use (monotonicity, symmetry, scale invariance, constant in the highest value). It is not so clear which are necessary for the theorem. For example, the benchmarks in (1) and (4) above are not constant in the highest bid.


Conclusion.
Prior-free mechanism design problems are exemplary of a genre of algorithm design problems where there is no pointwise optimal algorithm. (The competitive analysis of online algorithms gives another example.) These problems stress the classical worst-case algorithm design and analysis paradigm. We really do not understand how to search the space of mechanisms for prior-free optimal ones. (Computational hardness results are not known either.) We also do not generally know when and why the lower-bounding approach above gives a tight answer. The Chen-Gravin-Lu result is the most serious recent progress we have seen on these questions. Let’s hope it is just the beginning.
