Posts Tagged ‘auctions’

Google Ad Exchange

Google just announced its ad exchange:

The DoubleClick Ad Exchange is a real-time marketplace to buy and sell display advertising space. By establishing an open marketplace where prices are set in a real-time auction, the Ad Exchange enables display ads and ad space to be allocated much more efficiently and easily across the web. It’s just like a stock exchange, which enables stocks to be traded in an open way.

[Image: Google's Ad Exchange]

There are some existing competitors, but Google's entry may be game-changing. Many research problems suggest themselves.


Shahar Dobzinski asks:

Suppose you have an interesting result that has an easy, almost trivial proof. What is the best way to publish it? Writing a full, formal paper takes too much energy. Besides, traveling to a conference just to give a 5-minute presentation is overkill, and journals are just too slow (who reads them anyways?)

The result in question concerns a basic issue in algorithmic mechanism design: to what extent does incentive compatibility penalize computationally efficient approximation algorithms? Shahar observed that known techniques imply that, at least for artificial problems, incentive compatibility may cause an unbounded degradation.

I talked Shahar (my just-graduating student) into writing it up and uploading it to the arXiv, here.

I think that the question that Shahar raises (how to “publish” easy stuff), as well as the answer he gives (unbounded price of incentive compatibility), are both interesting (though not really related) — so here they are.


Undergraduate algorithms courses typically discuss the maximum matching problem in bipartite graphs and present algorithms based on the alternating-paths (Hungarian) method.  This is true in the standard CLR book as well as in the newer KT book (and implicitly in the new DPV book, which just gives the reduction to max-flow).  There is an alternative auction-like algorithm, originally due to Demange, Gale, and Sotomayor, that is not well known in the CS community despite being even simpler.  The algorithm naturally applies also to the weighted version, sometimes termed the assignment problem, and this is how we will present it.

Input: A weighted bipartite graph, with non-negative integer weights.  We will denote the vertices on one side of the graph by B (bidders) and on the other side by G (goods).  The weight between a bidder i and a good j is denoted by w_{ij}.  We interpret w_{ij} as the value that bidder i assigns to good j.

Output: A matching M with maximum total weight \sum_{(i,j) \in M} w_{ij}.  A matching is a subset of B \times G such that no bidder and no good appear more than once in it.

The special case where w_{ij} \in \{0,1\} is the usual maximum matching problem.

Algorithm:

Initialization:

  1. For each good j, set p_j \leftarrow 0 and owner_j \leftarrow null.
  2. Initialize a queue Q to contain all bidders i.
  3. Fix \delta = 1/(n_g+1), where n_g is the number of goods.

While Q is not empty do:

  1. i \leftarrow Q.dequeue().
  2. Find j that maximizes w_{ij} - p_j.
  3. If w_{ij} - p_j \ge 0 then
    1. Enqueue the current owner_j (if not null) into Q.
    2. owner_j \leftarrow i.
    3. p_j \leftarrow p_j + \delta.

Output: the set of pairs (owner_j, j) for all j with owner_j \ne null.
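
For concreteness, here is a minimal Python sketch of the procedure above, taking the weights as a dense matrix (the function name assignment_auction and the representation are mine, not part of the original presentation):

from collections import deque

def assignment_auction(w):
    """Auction algorithm for the assignment problem.
    w[i][j] = non-negative integer value of bidder i for good j.
    Returns a list of (bidder, good) pairs forming a maximum-weight matching."""
    n_bidders, n_goods = len(w), len(w[0])
    delta = 1.0 / (n_goods + 1)            # price increment
    price = [0.0] * n_goods                # p_j
    owner = [None] * n_goods               # owner_j
    Q = deque(range(n_bidders))            # all bidders start unmatched

    while Q:
        i = Q.popleft()
        # find the good maximizing w_ij - p_j
        j = max(range(n_goods), key=lambda g: w[i][g] - price[g])
        if w[i][j] - price[j] >= 0:
            if owner[j] is not None:
                Q.append(owner[j])         # displaced owner re-enters the queue
            owner[j] = i
            price[j] += delta

    return [(owner[j], j) for j in range(n_goods) if owner[j] is not None]

# tiny example: the optimal matching has weight 3 + 2 = 5
w = [[3, 1],
     [2, 2]]
print(assignment_auction(w))               # e.g. [(0, 0), (1, 1)]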

Correctness: The proof of correctness is based on showing that the algorithm gets into an “equilibrium”, a situation where all bidders “are happy”.

Definition: We say that bidder i is \delta-happy if one of the following is true:

  1. For some good j, owner_j=i and for all goods j’ we have that \delta + w_{ij}-p_j \ge w_{ij'}-p_{j'}.
  2. For no good j does it hold that owner_j=i, and for all goods j we have that w_{ij} \le p_{j}.

The key loop invariant is that all bidders, except those that are in Q, are \delta-happy.  This is true at the beginning since Q is initialized to all bidders.  For the bidder i dequeued in an iteration, the loop exactly chooses the j that makes him happy, if such j exists, and the \delta-error is due to the final increase in p_j.  The main point is that this iteration cannot hurt the invariant for any other i’: any increase in p_j for j that is not owned by i’ does not hurt the inequality while an increase for the j that was owned by i’ immediately enqueues i’.

The running time analysis below implies that the algorithm terminates, at which point Q must be empty and thus all bidders must be \delta-happy.

Lemma: if all bidders are \delta-happy then for every matching M’ we have that n\delta + \sum_{i=owner_j} w_{ij} \ge \sum_{(i,j) \in M'} w_{ij}.

Before proving this lemma, we notice that this implies the correctness of the algorithm since by our choice of \delta, we have that n\delta < 1, and as all weights are integers, this implies that our matching does in fact have maximum weight.

We now prove the lemma.  Fix a bidder i, let j denote the good that he got from the algorithm and let j’ be the good he gets in M’ (possibly j=null or j’=null).  Since i is \delta-happy we have that \delta + w_{ij}-p_j \ge w_{ij'}-p_{j'} (with the notational convention that w_{i,null}=0 and p_{null}=0, which also takes care of case 2 in the definition of happy).  Summing up over all i we get \sum_{i=owner_j} (\delta + w_{ij}-p_j) \ge \sum_{(i,j') \in M'} (w_{ij'}-p_{j'}).  Now notice that since both the algorithm and M’ give matchings, each j appears at most once on the left-hand side and at most once on the right-hand side.  Moreover, if some j does not appear on the left-hand side then it was never picked by the algorithm and thus p_j=0.  Thus when we add \sum_j p_j to both sides of the inequality, the LHS becomes at most n\delta + \sum_{i=owner_j} w_{ij} (the price terms cancel, since p_j=0 for goods the algorithm leaves unmatched), while the RHS becomes at least \sum_{(i,j') \in M'} w_{ij'} (since \sum_{(i,j') \in M'} p_{j'} \le \sum_j p_j).  This gives exactly the inequality in the lemma.  QED.

Running Time Analysis:

Each time the main loop is repeated, some p_j is increased by \delta or some bidder is removed from Q forever.  No p_j can ever increase once its value is above C = max_{i,j} w_{ij}.  It follows that the total number of iterations of the main loop is at most Cn/\delta = O(Cn^2) where n is the total number of vertices (goods+bidders).  Each loop can be trivially implemented in O(n) time, giving total running time of O(Cn^3), which for the unweighted case, C=1, matches the running time of the basic alternating paths algorithm on dense graphs.

For non-dense graphs, with only m=o(n^2) edges (where an edge is a non-zero w_{ij}), we can improve the running time by using a better data structure.  Each bidder maintains a priority queue of goods ordered according to the value of w_{ij} - p_j.  Whenever some p_j is increased, all bidders that have an edge to this j need to update the value in their priority queues.  Thus an increase in p_j requires d_j priority-queue operations, where d_j is the degree of j. Since each p_j is increased at most C/\delta = O(Cn) times, and since \sum_j d_j = m, we get a total of O(Cmn) priority-queue operations.  Using a heap to implement the priority queue takes O(\log n) per operation.  However, for our usage, an implementation using an array of linked lists gives O(1) amortized time per operation: entry t of the array contains all j such that w_{ij} - p_j = t\delta, updating the value of j requires moving it down one place in the array, and finding the maximum w_{ij} - p_j is done by marching down the array to find the next non-empty entry (this is the only amortized part).  All in all, the running time for the unweighted case is O(mn).
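
To make the array-of-linked-lists idea concrete, here is a rough Python sketch of such a bucket structure for a single bidder (the class name BucketQueue and its interface are mine; after initialization keys only decrease, which is what makes the marching-down pointer amortized O(1)):

class BucketQueue:
    """Bucket priority queue over goods for one bidder, keyed by w_ij - p_j.
    Keys are kept as integer multiples t of delta; goods whose key would drop
    below zero are discarded, since this bidder will never bid on them."""

    def __init__(self, max_bucket):
        self.buckets = [set() for _ in range(max_bucket + 1)]
        self.key = {}                      # current bucket index of each good
        self.top = 0                       # highest possibly non-empty bucket

    def insert(self, good, t):             # used once per edge, at initialization
        self.buckets[t].add(good)
        self.key[good] = t
        self.top = max(self.top, t)

    def decrease(self, good):              # called when p_j increases by delta
        t = self.key.pop(good)
        self.buckets[t].discard(good)
        if t - 1 >= 0:                     # key below zero: good no longer relevant
            self.buckets[t - 1].add(good)
            self.key[good] = t - 1

    def find_max(self):                    # good with maximal w_ij - p_j, or None
        while self.top >= 0 and not self.buckets[self.top]:
            self.top -= 1                  # pointer only moves down: amortized O(1)
        return None if self.top < 0 else next(iter(self.buckets[self.top]))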

Additional comments:

  • As shown by DGS, a similar procedure terminates with close to VCG prices, which are also the point-wise minimum equilibrium prices.
  • The algorithm was presented for the assignment problem where bidders never desire more than a single item.  It does work more generally as long as bidders are “gross substitutes”.
  • The algorithm, like many auctions, can be viewed as a primal-dual algorithm for the associated linear program.
  • Choosing a small fixed value of, say, \delta=0.01 gives a linear time 1.01-approximation for the maximum matching.
  • Choosing the value \delta = 1/\sqrt{n} gives a matching that misses at most \sqrt{n} edges, that can then be added using \sqrt{n} alternating path computations, for a total running time of O(m \sqrt{n}).
  • Many algorithmic variants were studied by Dimitri Bertsekas.
  • A wider economic context appears in this book.


A “textbook system” based on social choice theory would have a centralized mechanism interacting with multiple software agents, each of them representing  a user.  The centralized mechanism would be designed to optimize some global goal (such as revenue or social welfare) and each software agent would elicit the preferences of its user and then optimize according to user preferences.

Among other irritating findings, behavioral economics also casts doubt on this pretty picture, questioning the very notion that users have preferences; that is, preferences that are independent of the elicitation method.  In the world of computation, we have a common example of this “framing” difficulty: the default.  Users rarely change it, but we can’t say that they actually prefer the default to the other alternative, since if we change the default then they stick with the new one.  Judicious choice of defaults can obviously be used for the purposes of the centralized mechanism (default browser = Internet Explorer); but what should we do if we really just want to make the user happy?  What does this even mean?

The following gripping talk by Dan Ariely demonstrates such issues.


One of the defining characteristics of ad auctions is that they are repeated a large number of times. Every time some user makes a search query or visits a web page with an “ad slot” on it, a new auction takes place among all advertisers that target their ads at this impression. The targeting criteria may be quite sophisticated, taking into account not only the characteristics of the ad slot (web page or search keyword) but also various characteristics of the user, such as his geographic location (and sometimes much more). While much of the early work on ad auctions focused on the single auction, much of the current work on ad auctions focuses explicitly on the repetitions, on the stream. If the different auctions in the stream are totally unrelated then none of them should affect the others, and indeed they should be analyzed in isolation. In many real-world scenarios, however, there are significant constraints between the different auctions in the stream that need to be taken into account. Looking at the papers soon to be presented in EC’09, we can see several such issues:

  1. Budgets: It is very common for an advertiser to have a budget limit for the total expenditure over all auctions in a certain period. This raises many questions, both game-theoretic and algorithmic, from the bidder’s point of view as well as from the auctioneer’s point of view. The basic paper addressing the algorithmic problem of the auctioneer, due to Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani, presents an online algorithm with a worst-case competitive ratio of 1-1/e (a rough sketch of the scaled-bid rule behind it appears after this list). Many variants of the model have been considered in the literature, but the 1-1/e ratio has been hard to beat. The EC’09 paper “The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations” by Nikhil Devanur and Thomas Hayes does so, obtaining a 1-\epsilon approximation by adding a distributional assumption to the model.
  2. Reservations: Advertisers often wish to “reserve” a certain number of impressions of some target type in advance. If this has been done, then once the stream of impressions arrives, the reserved number of impressions should be delivered sometime during the agreed-upon period (with some penalty if the reservation cannot be fulfilled). There are challenges during reservation time (can the auctioneer commit to a requested reservation? how should it be priced?) as well as during delivery time (which reservation should the current impression fulfill?). The EC’09 paper “Selling Ad Campaigns: Online Algorithms with Cancellations” by Moshe Babaioff, Jason Hartline, and Robert Kleinberg studies the added flexibility that the auctioneer gets if he is allowed to renege on past reservations for a cancellation fee.
  3. Learning: Most ad auctions have a pay-per-click rule: the bidders pay for the impression that they won only if the ad was “clicked” by the user. This means that the “real bid” to be considered depends on the “click-through rate” — the probability that the ad will be clicked — a random variable that depends on the impression and on the advertiser. These click-through rates can be learned throughout the stream of auctions, and then taken into account in future auctions. Non-strategic analysis of similar situations often falls under the name of multi-armed bandit problems, and two closely related papers in EC’09 take into account the strategic behavior of the bidders: “Characterizing Truthful Multi-Armed Bandit Mechanisms” by Moshe Babaioff, Yogeshwer Sharma, and Aleksandrs Slivkins and “The Price of Truthfulness for Pay-Per-Click Auctions” by Nikhil Devanur and Sham Kakade.
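
To give a feel for item 1, here is a rough Python sketch of the scaled-bid (“trade-off”) rule from the Mehta-Saberi-Vazirani-Vazirani paper as I recall it: an arriving query goes to the advertiser maximizing its bid times a discount that depends on the fraction of budget already spent. This is a simplified illustration (bids assumed small relative to budgets), not the paper’s exact algorithm:

import math

def adwords_assign(bids, spent, budget):
    """One step of the scaled-bid rule for the AdWords problem.
    bids[i]   : bid of advertiser i for the current query
    spent[i]  : budget already spent by advertiser i (updated in place)
    budget[i] : total budget of advertiser i
    Returns the chosen advertiser, or None if nobody can be charged."""
    # trade-off function from MSVV: decreases from 1 - 1/e at f = 0 to 0 at f = 1
    psi = lambda f: 1.0 - math.exp(f - 1.0)
    best, best_score = None, 0.0
    for i, b in enumerate(bids):
        if b <= 0 or spent[i] + b > budget[i]:
            continue                            # cannot afford this query
        score = b * psi(spent[i] / budget[i])
        if score > best_score:
            best, best_score = i, score
    if best is not None:
        spent[best] += bids[best]               # winner is charged its bid
    return best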

Read Full Post »

There are usually two different measures that auction designers attempt to optimize: efficiency (social welfare) and auctioneer revenue. A “better” auction often improves both efficiency and revenue, but in other cases these are conflicting goals. It is well known that efficiency is optimized by Vickrey auctions while revenue is optimized by Myerson’s optimal auctions. I often hear cynical doubts about whether anyone optimizes efficiency rather than revenue, and specifically such disbelief regarding the big companies running ad auctions (such as my current employer, Google). As far as I can tell, reality seems to be quite the opposite: companies aim to optimize their long-term or medium-term revenue rather than the revenue of a single auction. In a competitive environment, the only way of optimizing long-term revenue is by gaining and maintaining market share, which in turn requires providing high “added value”, i.e., optimizing efficiency.

In any case, this post points to a paper by Gagan Aggarwal, Gagan Goel, and Aranyak Mehta recently posted to the arXiv. Complementing a result of Jeremy Bulow and Paul Klemperer, they show that the difference between the two optimization goals is not very large compared to increasing the number of bidders. The setting is the classic one of selling a single indivisible good in the private-value model with a commonly known distribution over bidders’ valuations (with some mild restrictions on the distribution). The BK paper shows that the revenue of an efficiency-maximizing auction with k+1 bidders is at least as high as that of the revenue-maximizing one with k bidders. The new AGM paper shows that the efficiency of a revenue-maximizing auction with k + \log k bidders is at least as high as that of an efficiency-maximizing one with k bidders (and that the \log k term is necessary).
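
As a quick sanity check of the BK statement in the simplest case I can think of (one item, values i.i.d. uniform on [0,1], where the revenue-maximizing auction is a second-price auction with reserve 1/2), here is a small Monte Carlo sketch; with k=1 it should report roughly 1/3 for Vickrey with two bidders versus 1/4 for Myerson with one bidder. The code and the specific distribution are only my illustration, not part of the papers:

import random

def vickrey_revenue(values):
    """Second-price auction, no reserve: revenue is the second-highest value."""
    return sorted(values)[-2]

def myerson_revenue_uniform(values, reserve=0.5):
    """Optimal auction for i.i.d. U[0,1] values: second price with reserve 1/2."""
    above = [v for v in values if v >= reserve]
    if not above:
        return 0.0
    if len(above) == 1:
        return reserve
    return max(reserve, sorted(values)[-2])

def expected(revenue_fn, n_bidders, trials=200000):
    total = 0.0
    for _ in range(trials):
        total += revenue_fn([random.random() for _ in range(n_bidders)])
    return total / trials

k = 1
print("Vickrey with k+1 bidders:", expected(vickrey_revenue, k + 1))       # ~1/3
print("Myerson with k bidders:  ", expected(myerson_revenue_uniform, k))   # ~1/4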

[Added on 10.6: Thanks to Tim Roughgarden for pointing out to me his closely related joint paper with Mukund Sundararajan, which generalizes the BK result to an ad-auction setting and also provides direct revenue guarantees without increasing the number of bidders.]


I recently looked again at Ausubel’s multi-unit auction paper, and really liked the crisp summary given in its first paragraph:

The auctions literature has provided us with two fundamental prescriptions guiding effective auction design. First, an auction should be structured so that the price paid by a player—conditional on winning—is as independent as possible of her own bids (William Vickrey, 1961). Ideally, the winner’s price should depend solely on opposing participants’ bids—as in the sealed-bid, second-price auction—so that each participant has full incentive to reveal truthfully her value for the good. Second, an auction should be structured in an open fashion that maximizes the information made available to each participant at the time she places her bids (Paul R. Milgrom and Robert J. Weber, 1982a).

I would say that this is useful practical insight gained from studying theory.

