This is the third post in a series recapping the talks at the CMU summer school on algorithmic economics (here and here are the first two). See the summer school site for abstracts, slides, and (eventually) videos.

**Jason Hartline**

Jason Hartline from Northwestern gave two lectures on auction design. The first lecture covered the basics of Bayesian optimal auction design (Chapters 2 and 3 of Jason’s book), the second an overview of approximation in auction design (Chapters 1, 5, and 7).

We begin with single-item auctions. By “Bayesian” we mean that the bidders have private valuations for the good that are drawn independently from (possibly different) known distributions. As a bidder, you know your own valuation but only the distributions over the other valuations. For example, suppose there are two bidders, with valuations drawn i.i.d. from the uniform distribution on [0,1]. In the second-price auction, bidding truthfully is a dominant strategy; assuming bidders do this, the expected seller revenue is the expected smaller bid, namely 1/3. What happens in a first-price auction? Bidders do not have dominant strategies, and we expect all of them to maximize their expected utility with respect to what they know — formally, to be at Bayes-Nash equilibrium. In our example, suppose that each player always bids half their valuation. A simple calculation shows that this is a Bayes-Nash equilibrium: from the perspective of a bidder with valuation v, the best response against the other bid (which is uniform in [0,1/2]) is v/2. The expected revenue at this Bayes-Nash equilibrium is the expected value of the high bid — a.k.a. 1/3, the same as in the second-price auction!
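The two-bidder calculation above is easy to check by simulation. Here is a minimal sketch (the number of trials and the random seed are arbitrary choices) that estimates the expected revenue of both auction formats under the uniform-[0,1] example, assuming truthful bidding in the second-price auction and the bid-half-your-value equilibrium in the first-price auction:

```python
import random

random.seed(0)
TRIALS = 200_000

second_price_rev = 0.0
first_price_rev = 0.0
for _ in range(TRIALS):
    v1, v2 = random.random(), random.random()  # i.i.d. uniform valuations on [0,1]
    # Second-price auction, truthful bidding: revenue is the losing bid.
    second_price_rev += min(v1, v2)
    # First-price auction at Bayes-Nash equilibrium: each bids v/2,
    # so revenue is the winning bid, max(v1, v2) / 2.
    first_price_rev += max(v1, v2) / 2

print(second_price_rev / TRIALS)  # ≈ 1/3
print(first_price_rev / TRIALS)   # ≈ 1/3
```

Both estimates come out near 1/3, matching the revenue equivalence claimed above.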

Could this “revenue equivalence” hold more generally? If so, how would we prove it? The answers are yes, and using a fundamental characterization of Bayes-Nash equilibria. In the two-bidder first-price auction, suppose your valuation is v. Then the probability that you win (over the possible competing bids) in this Bayes-Nash equilibrium is also v. In auction theory jargon, your *interim allocation rule* — the probability that you win, as a function of your valuation — is x(v) = v. The characterization identifies which interim allocation rules arise in Bayes-Nash equilibrium, and what the corresponding expected payments must be. First, if the interim allocation rules are nondecreasing, then there are unique interim payment rules such that these expected allocations and payments arise in Bayes-Nash equilibria (namely, p(v) = v·x(v) − ∫₀^v x(z) dz — try it for the second-price auction!). Second, if the interim allocation rules are not monotone, then they do not correspond to any Bayes-Nash equilibrium.
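As a sanity check on the payment formula p(v) = v·x(v) − ∫₀^v x(z) dz, the sketch below (illustrative numbers; the valuation v = 0.7 and trial count are arbitrary) compares it against a direct simulation of the second-price auction with two i.i.d. uniform-[0,1] bidders, where the interim allocation rule is x(v) = v:

```python
import random

random.seed(1)

def interim_payment(v, x, grid=10_000):
    # p(v) = v * x(v) - integral_0^v x(z) dz, via a left Riemann sum.
    dz = v / grid
    integral = sum(x(i * dz) for i in range(grid)) * dz
    return v * x(v) - integral

# Second-price auction, two bidders, valuations i.i.d. uniform on [0,1].
# Interim allocation rule: x(v) = Pr[other bid < v] = v.
v = 0.7
predicted = interim_payment(v, lambda z: z)  # v^2 - v^2/2 = v^2/2 = 0.245

# Simulate the expected payment of a truthful bidder with valuation v.
TRIALS = 200_000
total = 0.0
for _ in range(TRIALS):
    other = random.random()
    if other < v:
        total += other  # the winner pays the second-highest bid
simulated = total / TRIALS

print(predicted, simulated)  # both ≈ 0.245
```

The formula predicts an expected payment of v²/2, which is exactly what the simulation recovers: you win with probability v and pay the expected losing bid, v/2.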

Jason wrapped up his first lecture with several applications of this characterization and with a quick proof sketch. The characterization immediately implies “Revenue Equivalence”: if two (auction, equilibrium) pairs produce the same interim allocation rules (as with second-price and first-price auctions with i.i.d. bidder valuations), then they necessarily generate the same expected payments. The characterization is also useful when solving for and proving the uniqueness of Bayes-Nash equilibria (e.g., of first-price and all-pay auctions).

Jason started his second lecture with one of those simple and cool math results that everybody should know, the “prophet inequality”. N prizes arrive online. Each prize has a random value, drawn from a known distribution. Different prizes’ distributions are independent but can be different. At each time step you learn the exact value of the current prize, and you can either take it (ending the game) or move on to the next prize. The goal is to identify a stopping rule with large expected value [1]. One can derive an optimal stopping rule using backward induction. There is also a good “threshold strategy”, meaning that you pick the first prize with value exceeding a fixed threshold. A short but sweet proof shows that, if you set this threshold so that the probability of eventually picking a prize is exactly 50%, then your expected reward is at least half of the expected value of the best prize (a strong benchmark!).
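The 50%-threshold strategy is easy to demonstrate numerically. In this sketch the particular prize distributions (uniform on [0, hi] for a few arbitrary choices of hi) are purely illustrative; the point is that stopping at the first prize above a threshold chosen so that a prize is taken with probability about 1/2 earns at least half the expected value of the best prize:

```python
import random

random.seed(2)
HIGHS = (1, 2, 3, 4, 5)  # prize i is uniform on [0, HIGHS[i]]; an arbitrary example
TRIALS = 100_000

def prob_take(t):
    # Probability that at least one prize exceeds the threshold t.
    p_none = 1.0
    for hi in HIGHS:
        p_none *= min(t / hi, 1.0)
    return 1.0 - p_none

# Scan for the threshold at which a prize is taken with probability ~1/2.
t = min((k * 0.005 for k in range(1, 1001)), key=lambda s: abs(prob_take(s) - 0.5))

threshold_total, best_total = 0.0, 0.0
for _ in range(TRIALS):
    prizes = [random.uniform(0, hi) for hi in HIGHS]
    best_total += max(prizes)  # the "prophet" benchmark: best prize in hindsight
    # Threshold strategy: take the first prize with value at least t (else nothing).
    threshold_total += next((p for p in prizes if p >= t), 0.0)

print(threshold_total / TRIALS, best_total / TRIALS / 2)
```

In this run the threshold strategy comfortably beats half of the prophet benchmark, consistent with the guarantee.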

Jason argued that the 2-approximate threshold strategy, while suboptimal, is more interesting than the optimal stopping rule. Some of the reasons: it is simple and easy to implement and interpret; it is robust, meaning that it doesn’t change much under perturbations of the distributions; and it is easily adapted to solve variations of the original problem. Note that this “simple is better than optimal” perspective also showed up in Eva’s talk. Jason compared this style of theoretical work in auction design to Picasso’s bulls — one wants just the right level of abstraction to isolate the essential features of good auction formats.

Jason concluded with two applications of approximation in auction design. The first was to revenue maximization in multi-parameter settings, for example when there are multiple goods and each bidder has a different valuation for each. The structure of the optimal auction is poorly understood in such problems (though see Costis’s lecture, below). Assume that each bidder is unit-demand (i.e., wants only one good). One class of simple auctions is sequential posted-price mechanisms: bidders are considered one by one; each is offered a price for each remaining good and picks its favorite (maximizing value minus price). The results here show that these simple mechanisms can achieve a good constant-factor approximation of the optimal revenue [2]. Interpretations: unit-demand preferences behave similarly to single-parameter preferences and, in such settings, competition between bidders is not essential for near-optimal revenue.
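To make the mechanics concrete, here is a toy implementation of a sequential posted-price mechanism. The prices here are hypothetical placeholders chosen for illustration, not the carefully computed prices that underlie the approximation guarantees:

```python
def sequential_posted_price(valuations, prices):
    """Toy sequential posted-price mechanism for unit-demand bidders.

    valuations[i][j] is bidder i's value for good j; prices[j] is the
    posted price of good j (illustrative, not optimized)."""
    remaining = set(range(len(prices)))
    revenue, allocation = 0.0, {}
    for i, vals in enumerate(valuations):
        # Bidder i picks the remaining good maximizing value minus price,
        # buying only if that utility is nonnegative.
        best = max(remaining, key=lambda j: vals[j] - prices[j], default=None)
        if best is not None and vals[best] - prices[best] >= 0:
            allocation[i] = best
            revenue += prices[best]
            remaining.remove(best)
    return allocation, revenue

# Two bidders, two goods, made-up valuations and prices.
vals = [[0.9, 0.3], [0.8, 0.7]]
alloc, rev = sequential_posted_price(vals, prices=[0.5, 0.4])
print(alloc, rev)  # {0: 0, 1: 1} 0.9
```

Bidder 0 buys good 0 (utility 0.4 beats −0.1); bidder 1, facing only good 1, buys it for a utility of 0.3, so the mechanism collects 0.9 in total.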

The second application was to prior-independent auctions. These are auctions with robust performance guarantees, in that they are simultaneously near-optimal for a wide range of valuation distributions. A precursor to prior-independent auctions is a beautiful 1996 result of Bulow and Klemperer which states that, for a single-item auction with bidder valuations drawn i.i.d. from some distribution F, the revenue of the second-price auction with n+1 bidders is at least that of an optimal auction (for F) with n bidders. Thus, extra competition is more valuable than information about the underlying valuation distribution. A geometric proof of this result suggests how to design prior-independent auctions in a range of settings, essentially using a guinea pig bidder to supply a single sample from the underlying distribution, which is all that is needed for near-optimal revenue.
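The Bulow-Klemperer result can be checked by simulation for uniform-[0,1] valuations, where the optimal auction is known (by Myerson’s theory) to be a second-price auction with reserve price 1/2. This sketch compares that optimal auction with n bidders against a plain second-price auction with n+1 bidders; the choice n = 2 and the trial count are arbitrary:

```python
import random

random.seed(3)
TRIALS = 200_000
n = 2

vickrey_rev = 0.0  # second-price auction, n+1 bidders, no reserve
optimal_rev = 0.0  # optimal auction for U[0,1] with n bidders:
                   # second-price with reserve 1/2
for _ in range(TRIALS):
    bids = sorted(random.random() for _ in range(n + 1))
    vickrey_rev += bids[-2]  # winner pays the second-highest bid

    bids = sorted(random.random() for _ in range(n))
    if bids[-1] >= 0.5:  # a sale happens only if the high bid clears the reserve
        optimal_rev += max(bids[-2], 0.5)

print(vickrey_rev / TRIALS)  # ≈ 1/2
print(optimal_rev / TRIALS)  # ≈ 5/12
```

With n = 2 the second-price auction with one extra bidder earns about 1/2, beating the optimal two-bidder auction’s 5/12, as Bulow-Klemperer promises.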

**Costis Daskalakis (2nd lecture)**

Costis’s second lecture was about his recent work in multi-parameter mechanism design (see here and here). For concreteness, we’ll again think about maximizing the seller’s revenue when selling several different goods, where each agent has a private valuation for each but is unit-demand (i.e., only wants one good).

Abstractly, we can think of a (direct-revelation) mechanism as a function from (truthfully reported) valuations to allocations (who gets what) and payments (who pays what), subject to incentive-compatibility constraints (truthful reporting always maximizes expected agent utility, and never yields negative expected utility). In principle, the revenue-maximizing auction can be computed as the solution to a massive linear program, where there are allocation and payment variables for every bidder, good, and (alarmingly) valuation profile. By itself, this observation is neither illuminating nor computationally useful. The research program here is to show that smaller linear programs suffice, and that optimal solutions of these linear programs have informative interpretations.

The key idea is to use a much smaller set of decision variables, as follows. Analogous to the characterization in Jason’s first lecture, an interim allocation rule induced by a mechanism specifies, for each bidder i, valuation vector v (over goods) of i, and good j, the probability that j is allocated to i when its valuation vector is v (where the randomness is over the valuations of the other bidders, and possibly also over coin flips made by the mechanism). Interim payment rules are defined in the same way. We’ll consider a linear program with only these interim values as decision variables — assuming each player’s space of valuations is given explicitly, there are only polynomially many of these. The incentive-compatibility constraints and the objective function (expected sum of payments) can be expressed directly with these variables. The catch is *feasibility* — after solving this linear program, can we reverse engineer a bona fide mechanism that induces the computed interim allocation rules? The answer is, not necessarily. Thus, we need to add additional constraints to the linear program to enforce the feasibility of the interim allocation rules. These constraints can be explicitly described for single-item auctions (see Rakesh’s talk in the next post), but in general they need to be generated on the fly, using a separation oracle within the ellipsoid method.

In an exciting twist to the plot, the required separation oracle turns out to be precisely the corresponding welfare maximization problem (given some valuations, allocate goods to maximize the sum of valuations). For example, with unit-demand bidders this is simply a maximum-weight matching computation. The first consequence is computational: if welfare-maximization is tractable (or more generally, tractable to approximate to within some factor), then so is revenue-maximization. The second consequence is that revenue-maximizing auctions have a conceptually simple format. To explain it, here’s what’s long been known (since Myerson (1981)) for single-parameter problems: to maximize revenue, just transform the reported valuations into “virtual valuations” and then maximize the corresponding (virtual) welfare. This transformation has a simple closed form [3]. The only new complications in the multi-parameter problems discussed here are that one must use a distribution over virtual valuation transformations, and that these transformations are given by a linear program, not an explicit formula.
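For the single-parameter case, the virtual-welfare recipe is short enough to write out. This sketch specializes to i.i.d. uniform-[0,1] bidders, where the virtual valuation is φ(v) = v − (1 − F(v))/f(v) = 2v − 1 and maximizing virtual welfare amounts to a second-price auction with reserve 1/2:

```python
def virtual_value(v):
    # Myerson virtual valuation for the uniform distribution on [0,1]:
    # phi(v) = v - (1 - F(v)) / f(v) = v - (1 - v) = 2v - 1.
    return 2 * v - 1

def myerson_auction(bids):
    """Revenue-optimal single-item auction for i.i.d. U[0,1] bidders:
    allocate to the highest nonnegative virtual value; the winner pays
    the lowest bid at which it would still win."""
    phis = [virtual_value(b) for b in bids]
    winner = max(range(len(bids)), key=lambda i: phis[i])
    if phis[winner] < 0:
        return None, 0.0  # no sale: every virtual value is negative
    # Payment: the larger of the reserve (phi^{-1}(0) = 1/2) and the runner-up bid.
    others = [b for i, b in enumerate(bids) if i != winner]
    payment = max(0.5, max(others, default=0.0))
    return winner, payment

print(myerson_auction([0.8, 0.6]))  # (0, 0.6)
print(myerson_auction([0.4, 0.3]))  # (None, 0.0)
```

The multi-parameter story above replaces this single closed-form transformation with a distribution over transformations computed by a linear program, but the allocate-by-virtual-welfare structure is the same.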

1. Amusingly, Jason and I had watched this exact scenario play out two days earlier (with N=2), during the “Mystery Box” contest at a Pirates game.

2. The main steps are: upper-bound the (poorly understood) optimal revenue using that of a (well-understood) single-parameter environment by viewing each agent-good pair as a single-parameter agent (roughly, optimal revenue should only go up from the increased competition); use prophet inequality-style arguments to prove approximation guarantees for sequential posted-price mechanisms in the single-parameter relaxation; and extend these mechanisms to the original multi-parameter problem in a natural way.

3. A valuation v gets mapped to the virtual valuation v − (1 − F(v))/f(v), where F and f are the distribution and density functions.