Posts Tagged ‘auctions’

A “textbook system” based on social choice theory would have a centralized mechanism interacting with multiple software agents, each of them representing a user. The centralized mechanism would be designed to optimize some global goal (such as revenue or social welfare), and each software agent would elicit its user’s preferences and then optimize on the user’s behalf.

Among other irritating findings, behavioral economics also casts doubt on this pretty picture, questioning the very notion that users have preferences; that is, preferences that are independent of the elicitation method.  In the world of computation, we have a common example of this “framing” difficulty: the default.  Users rarely change it, but we can’t say that they actually prefer the default to the other alternatives, since if we change the default then they stick with the new one.  Judicious choice of defaults can obviously be used for the purposes of the centralized mechanism (default browser = Internet Explorer); but what should we do if we really just want to make the user happy?  What does this even mean?

The following gripping talk by Dan Ariely demonstrates such issues.

Read Full Post »

One of the defining characteristics of ad auctions is that they are repeated a large number of times. Every time a user makes a search query or visits a web page with an “ad slot” on it, a new auction takes place among all advertisers that target their ads at this impression. The targeting criteria may be quite sophisticated, taking into account not only the characteristics of the ad slot (web page or search keyword) but also various characteristics of the user, such as his geographic location (and sometimes much more). While much of the early work on ad auctions focused on the single auction, much of the current work focuses explicitly on the repetitions, on the stream. If the different auctions in the stream were totally unrelated, then none of them should affect the others, and indeed they could be analyzed in isolation. In many real-world scenarios, however, there are significant constraints between the different auctions in the stream that need to be taken into account. Looking at the papers soon to be presented at EC’09, we can see several such issues:

  1. Budgets: It is very common for an advertiser to have a budget limit on the total expenditure over all auctions in a certain period. This raises many questions, both game-theoretic and algorithmic, from the bidder’s point of view as well as from the auctioneer’s. The basic paper addressing the auctioneer’s algorithmic problem, due to Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani, presents an online algorithm with a worst-case competitive ratio of 1-1/e. Many variants of the model have been considered in the literature, but the 1-1/e ratio has been hard to beat. The EC’09 paper “The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations” by Nikhil Devanur and Thomas Hayes does so, obtaining a 1-\epsilon approximation by adding a distributional assumption to the model.
  2. Reservations: Advertisers often wish to “reserve” a certain number of impressions of some target type in advance. If this has been done, then once the stream of impressions arrives, the reserved number of impressions should be delivered sometime during the agreed-upon period (with some penalty if the reservation cannot be fulfilled). There are challenges at reservation time (can the auctioneer commit to a requested reservation? how should it be priced?) as well as at delivery time (which reservation should the current impression fulfill?). The EC’09 paper “Selling Ad Campaigns: Online Algorithms with Cancellations” by Moshe Babaioff, Jason Hartline, and Robert Kleinberg studies the added flexibility that the auctioneer gets if he is allowed to renege on past reservations for a cancellation fee.
  3. Learning: Most ad auctions have a pay-per-click rule: bidders pay for the impression that they won only if the ad was “clicked” by the user. This means that the “real bid” to be considered depends on the “click-through rate” (the probability that the ad will be clicked), a random variable that depends on the impression and on the advertiser. These click-through rates can be learned over the stream of auctions and then taken into account in future auctions. Non-strategic analysis of similar situations often falls under the name of multi-armed bandit problems, and two closely related papers in EC’09 take into account the strategic behavior of the bidders: “Characterizing Truthful Multi-Armed Bandit Mechanisms” by Moshe Babaioff, Yogeshwer Sharma, and Aleksandrs Slivkins and “The Price of Truthfulness for Pay-Per-Click Auctions” by Nikhil Devanur and Sham Kakade.
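To make the budgeted-allocation problem of item 1 concrete, here is a small Python sketch in the spirit of the Mehta-Saberi-Vazirani-Vazirani algorithm: each arriving query is assigned to the bidder whose bid, discounted by how much of the budget is already spent, is highest. This is a simplified illustration (the function name and per-query data layout are my own), not the paper’s exact algorithm.

```python
import math

def online_budgeted_allocation(bids, budgets, queries):
    """Greedy online allocation in the spirit of Mehta et al.:
    assign each arriving query to the bidder maximizing
    bid * psi(fraction of budget spent), where psi(x) = 1 - e^(x - 1)
    discounts bidders who have already spent much of their budget.

    bids[i]    : dict mapping query type -> bidder i's bid
    budgets[i] : bidder i's total budget
    queries    : the online stream of query types
    """
    psi = lambda frac: 1.0 - math.exp(frac - 1.0)
    spent = [0.0] * len(budgets)
    revenue = 0.0
    for q in queries:
        best, best_score = None, 0.0
        for i in range(len(bids)):
            bid = bids[i].get(q, 0.0)
            if bid <= 0.0 or spent[i] >= budgets[i]:
                continue  # not targeting this query, or budget exhausted
            score = bid * psi(spent[i] / budgets[i])
            if score > best_score:
                best, best_score = i, score
        if best is not None:
            # Charge at most the remaining budget.
            charge = min(bids[best][q], budgets[best] - spent[best])
            spent[best] += charge
            revenue += charge
    return revenue

# Toy instance: bidder 0 only wants 'a' queries, bidder 1 wants both.
bids = [{'a': 1.0}, {'a': 1.0, 'b': 1.0}]
budgets = [2.0, 2.0]
revenue = online_budgeted_allocation(bids, budgets, ['a', 'a', 'b', 'b'])
```

On this toy stream the discounting sends the second 'a' to bidder 1’s budget anyway, so the online revenue is 3 while the offline optimum is 4, within the 1-1/e guarantee.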

Read Full Post »

There are usually two different measures that auction designers attempt to optimize: efficiency (social welfare) and auctioneer revenue. A “better” auction often improves both efficiency and revenue, but in other cases these are conflicting goals. It is well known that efficiency is optimized by the Vickrey auction while revenue is optimized by Myerson’s optimal auction. I often hear cynical doubts about whether anyone optimizes efficiency rather than revenue, and specifically such disbelief regarding the big companies running ad auctions (such as my current employer, Google). As far as I can tell, reality seems to be quite the opposite: companies aim to optimize their long-term or medium-term revenue rather than the revenue of a single auction. In a competitive environment, the only way to optimize long-term revenue is by gaining and maintaining market share, which in turn requires providing high “added value”, i.e., optimizing efficiency.

In any case, this post points to a paper by Gagan Aggarwal, Gagan Goel and Aranyak Mehta recently posted to the arXiv. Complementing a result of Jeremy Bulow and Paul Klemperer, they show that the difference between the two optimization goals is not very large compared to the effect of increasing the number of bidders. The setting is the classic one of selling a single indivisible good in the private-value model with a commonly known distribution over bidders’ valuations (with some mild restrictions on the distribution). The BK paper shows that the revenue of an efficiency-maximizing auction with k+1 bidders is at least as high as that of the revenue-maximizing one with k bidders. The new AGM paper shows that the efficiency of a revenue-maximizing auction with k + log k bidders is at least as high as that of an efficiency-maximizing one with k bidders (and that the log k term is necessary).
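The BK statement is easy to check numerically in the textbook case of i.i.d. uniform [0,1] values, where Myerson’s optimal auction is a second-price auction with reserve 1/2. The following Monte Carlo sketch (function names mine, purely illustrative) compares the efficient auction with k+1 bidders against the optimal auction with k bidders:

```python
import random

def vickrey_revenue(values):
    """Efficient (second-price, no reserve) auction: revenue is the 2nd highest value."""
    s = sorted(values, reverse=True)
    return s[1]

def myerson_revenue(values, reserve=0.5):
    """Myerson's optimal auction for i.i.d. uniform [0,1] values:
    a second-price auction with reserve price 1/2."""
    s = sorted(values, reverse=True)
    if s[0] < reserve:
        return 0.0  # no sale: the highest value is below the reserve
    second = s[1] if len(s) > 1 else 0.0
    return max(reserve, second)

def average_revenue(revenue_fn, n_bidders, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += revenue_fn([rng.random() for _ in range(n_bidders)])
    return total / trials

k = 3
efficient_k_plus_1 = average_revenue(vickrey_revenue, k + 1)  # analytically 3/5
optimal_k = average_revenue(myerson_revenue, k)               # analytically 17/32
```

With k = 3 the efficient auction with one extra bidder indeed earns more (3/5 versus 17/32), as BK guarantees.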

[Added on 10.6: Thanks to Tim Roughgarden for pointing me to his closely related joint paper with Mukund Sundararajan that generalizes the BK result to an ad-auction setting, as well as provides direct revenue guarantees without increasing the number of bidders.]

Read Full Post »

I recently looked again at Ausubel’s multi-unit auction paper, and really liked the crystallization offered by its first paragraph:

The auctions literature has provided us with two fundamental prescriptions guiding effective auction design. First, an auction should be structured so that the price paid by a player—conditional on winning—is as independent as possible of her own bids (William Vickrey, 1961). Ideally, the winner’s price should depend solely on opposing participants’ bids—as in the sealed-bid, second-price auction—so that each participant has full incentive to reveal truthfully her value for the good. Second, an auction should be structured in an open fashion that maximizes the information made available to each participant at the time she places her bids (Paul R. Milgrom and Robert J. Weber, 1982a).

I would say that this is useful practical insight gained from studying theory.

Read Full Post »

The paper Incentive Compatible Budget Elicitation in Multi-unit Auctions by Sayan Bhattacharya, Vincent Conitzer, Kamesh Munagala, and Lirong Xia has recently been uploaded to the arXiv.

The basic model that they study is the auction of a single infinitely divisible good among bidders with budget limits.  In this model, each bidder i has a private value v_i and a private budget b_i.  The utility of bidder i when allocated x_i units of the infinitely divisible good and charged p_i is assumed to be x_i v_i - p_i, as long as the budget is not exceeded (p_i \le b_i).  If p_i > b_i then the utility is assumed to be negative infinity.

The twist in this model is the budget limit, which takes us out of the quasi-linear model usually assumed in auction theory.  In particular, VCG mechanisms are no longer incentive compatible.  A previous paper by Shahar Dobzinski, Ron Lavi, and myself shows that indeed there is no (anonymous) deterministic incentive-compatible mechanism that produces Pareto-optimal allocations.  This paper shows that there is such a randomized mechanism.

The paper starts with the mechanism that was previously shown to be incentive compatible in the special case that the budgets are public knowledge (an adaptive version of Ausubel’s auction).  The heart of the proof lies in an analysis showing that the only way a bidder can gain by mis-reporting his budget is by over-reporting; under-reporting is never profitable. This requires quite a delicate analysis (which I haven’t fully read yet), but once this characterization is known, an incentive-compatible randomized mechanism follows immediately: run the original mechanism, and instead of asking a bidder to pay p_i, ask him to pay b_i with probability p_i/b_i. The expected payments have not changed, so under-reporting is still not profitable, and this randomized payment makes over-reporting result in a positive probability of infinitely negative utility, and thus it is certainly not profitable.
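The payment-randomization trick in that last step is simple enough to sketch in a few lines of Python (the names are mine, not the paper’s):

```python
import random

def randomized_payment(p_i, b_i, rng):
    """Charge the full reported budget b_i with probability p_i / b_i,
    and nothing otherwise.  The expected payment is exactly p_i, so
    under-reporting the budget stays unprofitable, while an over-reported
    budget carries a positive probability of a charge exceeding the true
    budget, i.e. infinitely negative utility in this model."""
    assert 0 <= p_i <= b_i
    return b_i if rng.random() < p_i / b_i else 0.0

# Sanity check: the empirical mean payment converges to p_i.
rng = random.Random(1)
p, b = 3.0, 10.0
mean_payment = sum(randomized_payment(p, b, rng) for _ in range(100_000)) / 100_000
```

The payment is either 0 or b_i, never in between, which is exactly what makes truthful budget reporting safe and over-reporting risky.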

Read Full Post »

The new GoogleResearch account on Twitter will certainly have some tweets related to algorithmic game theory, such as the announcement of the WWW’09 paper “Bid Optimization for Broad Match Ad Auction” by Eyal Even Dar, Vahab S. Mirrokni, S. Muthukrishnan, Yishay Mansour, and Uri Nadav.

Read Full Post »

Muthu has announced a new paper “Bidding on Configurations in Internet Ad Auctions” in his blog.

Abstract: In Internet advertising, a configuration of ads is determined by the seller, and advertisers buy spaces in the configuration. In this paper, motivated by sponsored search ads, we propose an auction where advertisers directly bid and determine the eventual configuration.

Read Full Post »

Shahar Dobzinski and Shaddin Dughmi just posted a new paper to the arXiv: On the Power of Randomization in Algorithmic Mechanism Design.

The mechanism design problem that they study in this paper is the basic one of a multi-unit auction: There are M identical indivisible units of some good for sale among n bidders, and each bidder i has a valuation function v_i : \{1 ... M\} \rightarrow \Re^+, where v_i(k) specifies his value for acquiring k units.  The normal assumption is free disposal, i.e., that the valuation functions v_i are monotone non-decreasing.  The goal is to find an allocation that gives k_i units to each bidder i (and thus \sum_i k_i \le M), optimizing the social welfare \sum_i v_i(k_i).

This problem is perhaps the simplest one exhibiting the key clash between computational complexity and incentive compatibility: since this is a social-welfare maximization problem, the VCG mechanism that finds the optimal allocation is incentive compatible.  However, if we view the number of units M as large, and require algorithms that run in time polynomial in n and \log M (assuming that the valuation functions are either concisely represented or given as “black boxes”), then finding the optimal allocation is a knapsack problem and is thus computationally hard.  On the other hand, it is well known that the knapsack problem can be well-approximated efficiently (there is an FPTAS).  The clash is that we don’t know how to turn such an approximation algorithm into an incentive-compatible approximation mechanism.
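For intuition about the underlying (non-strategic) optimization problem, here is a standard dynamic program that computes the welfare-maximizing allocation exactly, in time polynomial in M rather than in \log M; this is precisely the regime that becomes infeasible when M is exponentially large. The notation is mine, a sketch rather than anything from the paper:

```python
def max_welfare(valuations, M):
    """Welfare-maximizing allocation of M identical units via dynamic
    programming over bidders, in time O(n * M^2).  Polynomial in M but
    not in log M, which is exactly where the problem turns into a
    computationally hard knapsack variant.

    valuations[i][k] = bidder i's value for k units (k = 0..M),
    monotone non-decreasing with valuations[i][0] == 0.
    """
    best = [0.0] * (M + 1)  # best[m]: max welfare from bidders seen so far using m units
    for v in valuations:
        # Give the current bidder k of the m available units, keep the rest.
        best = [max(best[m - k] + v[k] for k in range(m + 1))
                for m in range(M + 1)]
    return best[M]

# Two bidders, M = 3: optimum gives 1 unit to bidder 0 and 2 to bidder 1.
welfare = max_welfare([[0, 5, 6, 7], [0, 4, 8, 9]], 3)
```

Turning the FPTAS version of this computation into an incentive-compatible mechanism is exactly the difficulty the paper addresses.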

This paper manages to find a randomized incentive-compatible approximation mechanism.  It also highlights the role and necessity of randomization in this setting.

Read Full Post »
