Posts Tagged ‘auctions’

I spent most of last week at the Bertinoro workshop on Frontiers in Mechanism Design, organized by Jason Hartline and Tim Roughgarden.  Such a small, focused event is really a winning format (when done right, of course): almost every talk was very close to my interests and was really good, since the speakers presented their preferred recent work, which had usually also been accepted to some other top conference.

One of my take-homes from this event was a strengthened realization that computer scientists are doing more and more Bayesian analysis in Algorithmic Mechanism Design, in this respect getting closer to the economists’ way of thinking.  It is not that computer scientists have lost their basic dislike of “average case” analysis, distributional priors, or, especially, common priors; it is just that we have started reaching the limits of “worst case” analysis in more and more places.  It seems that computer scientists are being very careful about how much they rely on Bayesian analysis, obtaining various “hybrid” results that are more robust, in various senses, than straight Bayesian ones.

An extremely good example is the classic result of the economists Jeremy Bulow and Paul Klemperer, who show that the revenue of the straight 2nd price auction (for a single item) with n+1 bidders always dominates the revenue of Myerson’s optimal auction with n bidders (for i.i.d. bidders with a regular value distribution).  Only the analysis is Bayesian: the resulting 2nd price (n+1)-bidder auction is completely “worst-case” — neither the auctioneer nor the bidders need any knowledge of, or agreement on, the underlying distribution.   In spirit, such a result is similar to Valiant’s distribution-free learning, where the analysis is with respect to an underlying distribution even though the learning algorithm cannot depend on it.  A recent paper, Revenue Maximization with a Single Sample by Dhangwatnotai, Roughgarden and Yan (to appear in EC 2010), gets more general results in this prior-independent vein, although only in an approximation sense.
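To see the flavor of this concretely, here is a small simulation sketch (mine, not from the paper) for i.i.d. values uniform on [0,1], where Myerson’s optimal auction is simply a second-price auction with reserve 1/2; the second printed quantity should (weakly) dominate the first:

```python
import random

def second_price_revenue(values, reserve=0.0):
    """Revenue of a single-item second-price auction with a reserve price."""
    eligible = sorted((v for v in values if v >= reserve), reverse=True)
    if not eligible:
        return 0.0
    # The winner pays the larger of the reserve and the second-highest eligible bid.
    return max(reserve, eligible[1]) if len(eligible) > 1 else reserve

def average_revenue(n, reserve, trials=200_000):
    total = 0.0
    for _ in range(trials):
        values = [random.random() for _ in range(n)]   # i.i.d. uniform on [0, 1]
        total += second_price_revenue(values, reserve)
    return total / trials

n = 3
# Myerson's optimal auction for uniform [0,1] values: second price with reserve 1/2.
optimal_with_n = average_revenue(n, reserve=0.5)
# Plain second-price auction (no reserve, no prior needed) with one extra bidder.
plain_with_extra_bidder = average_revenue(n + 1, reserve=0.0)
print(f"optimal auction, {n} bidders:          {optimal_with_n:.4f}")
print(f"2nd price, {n + 1} bidders, no reserve:    {plain_with_extra_bidder:.4f}")
```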

A weaker type of result, but still better than full-blown Bayesian, is the 2nd price version of Myerson’s auction.  In this version, the auctioneer must “know” the underlying distribution in order to set the optimal reserve price, but once that is done, the bidders face a “worst-case” situation (2nd price with reserve) and should bid truthfully in the dominant-strategy sense, without needing to know or agree about the underlying prior distribution.  (This is opposed to the revenue-equivalent-in-principle 1st price version, in which bidders must know and agree on a common prior in order to have any chance of reaching the desired equilibrium.)  A recent paper, Multi-parameter mechanism design and sequential posted pricing by Chawla, Hartline, Malec, and Sivan (to appear in STOC 2010), gets similar types of results in a unit-demand heterogeneous auction setting where the auctioneer needs to know the distribution in order to set prices (in this case, posted prices) but the resulting mechanism is very simple and truthful in the dominant-strategy sense (again, the optimality guarantee is only in an approximation sense).
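For concreteness, the reserve price the auctioneer needs is just the standard monopoly reserve: under the usual regularity assumption on the value distribution F (with density f), it is the zero of Myerson’s virtual value,

```latex
\varphi(v) \;=\; v - \frac{1 - F(v)}{f(v)}, \qquad \varphi(r^{*}) = 0 .
```

For values uniform on [0,1] this gives r* = 1/2.  This is the only place the prior enters: the resulting second-price-with-reserve auction remains dominant-strategy truthful even if the prior turns out to be wrong — only its optimality is at stake.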

A yet weaker version of a similar type of result appears in the paper Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier (to appear in STOC 2010).  In this paper the auctioneer again does need to know the underlying distribution, and he then creates a mechanism that is incentive compatible, but here only in the Bayesian sense.  I.e., the bidders need not know the underlying distribution (as they should just act truthfully), but they must still trust that the auctioneer knows the prior distribution.  This type of result is more fragile than the previously mentioned ones, since the truthfulness of the auction depends on the auctioneer correctly knowing the underlying distribution, rather than just its optimality.  On the plus side, the paper shows that the auctioneer’s derivation of the auction can be done effectively just by using a “black box” for sampling the underlying distribution (as is the case for the derivation of Myerson’s optimal reserve price).
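As a toy illustration of the “black box” point (my sketch, not the paper’s construction): given only samples from the value distribution, a reasonable reserve price can be estimated by maximizing the empirical revenue curve, i.e. the price times the empirical probability that a value exceeds it:

```python
import random

def estimate_reserve(samples):
    """Pick the reserve maximizing empirical per-bidder revenue p * Pr[value >= p].

    Uses only samples from the value distribution (a "black box"),
    never the distribution itself.
    """
    samples = sorted(samples, reverse=True)
    n = len(samples)
    best_price, best_revenue = 0.0, 0.0
    for i, p in enumerate(samples):       # candidate prices: the samples themselves
        revenue = p * (i + 1) / n         # exactly i+1 samples are >= p
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

# With uniform [0,1] samples the estimate should come out close to the true reserve 1/2.
samples = [random.random() for _ in range(100_000)]
print(estimate_reserve(samples))
```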

A somewhat dual situation is presented in the paper Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (SODA 2010).  In that paper, auctions are presented in which the auctioneer need not know the underlying distribution and acts in a “detail-free” way, i.e. the auction is independent of the underlying distribution.  However, the approximate optimality holds only when the bidders are in a Bayesian equilibrium, i.e. the bidders must know and agree on a common prior for the analysis to hold.

The last example of “somewhat-Bayesian” results that comes to mind has nothing to do with incentives but is purely algorithmic.  The paper The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations by Nikhil Devanur and Thomas Hayes (in EC 2009) considers an online model of repeated auctions, where no distributional assumptions are made on the bidder values, which are taken to be “worst case”, but a distributional assumption is made on the order of arrival, which is assumed to be a uniformly random permutation.  This allows them to get arbitrarily good approximations, circumventing the known lower bounds for the completely worst case.
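The role of the random arrival order can be seen even in a toy experiment (mine; the paper’s algorithm is much smarter, learning dual prices from an initial fraction of the queries and thereby getting arbitrarily close to optimal): on the classic hard instance for greedy budgeted matching, an adversarial arrival order holds greedy to half the optimal revenue, while a random permutation of the very same queries already does noticeably better (about three quarters):

```python
import random

def greedy_revenue(queries, budgets, bids):
    """Assign each arriving query greedily to an interested advertiser with
    remaining budget (ties broken toward advertiser 'A', the bad tie-break)."""
    budget = dict(budgets)
    revenue = 0
    for q in queries:
        for adv in ("A", "B"):                 # tie-break toward A
            if q in bids[adv] and budget[adv] >= 1:
                budget[adv] -= 1
                revenue += 1
                break
    return revenue

B = 1000
bids = {"A": {"x", "y"}, "B": {"x"}}           # A bids on both keywords, B only on x
budgets = {"A": B, "B": B}
queries = ["x"] * B + ["y"] * B                # optimum assigns x -> B, y -> A: revenue 2B

adversarial = greedy_revenue(queries, budgets, bids)     # all x's first: revenue ~ B
shuffled = list(queries)
random.shuffle(shuffled)                                 # random arrival order
random_order = greedy_revenue(shuffled, budgets, bids)   # revenue ~ 1.5B in expectation

print(f"optimal: {2 * B}, adversarial order: {adversarial}, random order: {random_order}")
```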

While economists too have looked at weakening the fully Bayesian assumptions, as computer scientists are doing now, I see a difference between the two approaches.  Harsanyi’s influence on economic thinking seems to be so great that the Bayesian point of view is taken by economists as the correct model, and even its criticisms (cf. the Wilson critique) and results that weaken the assumptions are treated as “2nd order” improvements.  Computer scientists, on the other hand, seem to have their basic intellectual foundation in a non-Bayesian setting, and as they move toward Bayesian models they do not seem to have a single model in mind but are rather quite happy to explore the “Pareto frontier” between the strength of the model and the strength of the results obtained.

Finally, as a side note, let me observe that while the gap between computer scientists and economists is narrowing on the Bayesian question, the other great gap — the issue of approximation — seems to be as great as ever.  E.g., all the results by computer scientists mentioned in this post only provide approximation guarantees, while all those by economists provide exact ones.

Facebook has announced a pretty nice graduate student fellowship program.  The first three research areas they mention are:

  • Internet Economics: auction theory and algorithmic game theory relevant to online advertising auctions.
  • Cloud Computing: storage, databases, and optimization for computing in a massively distributed environment.
  • Social Computing: models, algorithms and systems around social networks, social media, social search and collaborative environments.

(Hat tip to TechCrunch.)

Google has its own graduate Fellowship program (Nicolas Lambert got the 2009 Fellowship in “Market Algorithms”.)

Google gives its employees $1/day of free adwords advertising.  Beyond being an employee perk, this gives Google’s employees the experience of being an Internet advertiser, i.e. experiencing the point of view of Google’s paying customers.  I have been using my $1/day account to advertise the divorce-consulting business of my sister-in-law (in Hebrew) and did indeed find the experience quite illuminating.

The first thing I learned is that the ad auction itself is just a small part of the whole thing.  Choosing the right text for the ads (all ten words of it), choosing the right keywords to target, etc., is much more prominent than setting the right bids.  “Tiny” issues come up everywhere; e.g., when I needed to choose keywords to target, it turned out that there are four ways to write divorce in Hebrew: גירושים, גירושין, גרושין, גרושים.  I’m not really sure whether these are all “kosher” spellings, but they are all searched for and do need to be taken into account.  Even more, the whole advertising campaign is peripheral, in principle, to the business itself, and frankly the auction logic is not the first concern of the advertiser.  This is obvious, of course, but is easy to forget for one whose work focuses on the auction logic.

Now for the auction itself: altogether I spent several hours setting up the campaign, making up the ads, choosing keywords, looking at reports, and trying a bit of optimization.  The adwords user interface was very easy and convenient to start with, but it didn’t take long until I was attempting things that confused me (e.g. splitting my single campaign into two different ones), at which point I gave up and stayed with what I had achieved, which is actually quite fine.  I was especially impressed with Google’s automatic suggestions of keywords, which were cleverer than what I came up with (I know it’s not really cleverness, just some learning algorithm, but they were eerily good).

I was surprised and disappointed (as an advertiser, but frankly delighted as a Google employee) by the pretty high prices on the keywords that I targeted: my average cost per click is 88 cents, and this is for pretty low slots, on average.  (Divorce is expensive, it seems, also on the Internet.)  This means that my $1 per day suffices for a single click per day, and no more.  I do get this single click almost every day, but have so far been unable to ever get two clicks in one day, neither by optimizing by hand nor by letting Google’s automatic bidder do it.  I was professionally insulted at not being able to beat the automatic bidder, but have still not given up on getting an average of more than one click per day for my $1.  My click-through rate is pretty high (compared to my expectations): 0.79%, so I usually get about 150 impressions every day, of which one is clicked and results in a visit to the website.  Now, these visitors are probably really good leads: not only have they searched for relevant keywords, but they have also clicked on a pretty specific ad.  I suppose that if even 1% of them become clients (this is not so little: we are not talking about buying a sweatshirt; this is about handling a divorce), then the advertising would be considered quite profitable even had my sister-in-law paid for it.  Unfortunately, it is quite hard to gauge whether this is the case: getting and keeping the required statistics is easier to imagine theoretically than to do when you have to handle a small business.  In other words, I haven’t a clue what my valuation of a click is.  (The lack of knowledge of one’s own valuation has been discussed in AGT, but frankly I have not seen a really convincing treatment of this issue.)
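Back-of-the-envelope, using the numbers above (the 1% conversion rate is only the hypothetical guess from this paragraph):

```python
daily_budget = 1.00        # dollars per day (the employee perk)
avg_cpc = 0.88             # average cost per click, in dollars
ctr = 0.0079               # click-through rate (0.79%)
conversion_rate = 0.01     # hypothetical: fraction of visitors who become clients

clicks_per_day = daily_budget / avg_cpc        # ~1.1 clicks/day at best
impressions_per_click = 1 / ctr                # ~127 impressions for each click
cost_per_client = avg_cpc / conversion_rate    # ~$88 per acquired client

print(f"clicks per day:           {clicks_per_day:.2f}")
print(f"impressions per click:    {impressions_per_click:.0f}")
print(f"implied cost per client:  ${cost_per_client:.0f}")
```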

The set of reports about the performance of the campaign that adwords makes available is quite impressive, and they are really nicely and simply done, but somehow I still don’t really know how to optimize my campaign so as to get the most, and the best, customers to the site.  I’m sure that more time on my part, as well as more data-centric handling of my sister-in-law’s business, would improve things, but the difficulty of getting and handling the right data is another lesson that I got from this exercise (again, I knew this theoretically, but now I feel it too).

A few days ago, Northwestern hosted a conference in honor of Paul Milgrom winning the Nemmers Prize (as reported here and here).  The speakers seem to be the A-list of economists in Mechanism Design, and their talk slides, as well as a video of Milgrom’s talk, are on the web page.  Computer scientists were not invited, but it seems that Lance Fortnow was still able to tweet gems from there, like:

Preston McAfee: Left to their own devices computer scientists would recreate the Soviet Union

Susan Athey: CS still needs to catch up in auction theory (on why Microsoft hires economists)

A large number of AGT researchers work in just three large companies: Google, Microsoft, and Yahoo, mostly working on ad auctions. From two or three data points, it seems that Facebook is now attempting to hire such researchers too (although maybe not in “research” roles). A look at Facebook’s “careers” page shows that they are indeed looking for an “advertising auction expert” whose requirements include “Expert knowledge of algorithmic game theory, auction theory, mechanism design and their applications to online advertising” as well as a Ph.D. in CS or OR.

I can well see the potential for Algorithmic Mechanism Design research in Facebook. Facebook’s information about its users is enormous, detailed, and lives within the context of their complex social network. Making good use of this complex information seems to require another level of sophistication relative to today’s ad auctions. Privacy issues are always there, and while Facebook has already had problems in that department, it seems that its users do not consider their “identity” to be too private. I wonder if within a year or two we’ll start seeing research papers coming from Facebook too.

In the last month, Brendan Lucier has uploaded three conceptually related papers to the arXiv (two of them with co-authors):

  1. Beyond Equilibria: Mechanisms for Repeated Combinatorial Auctions by Brendan Lucier
  2. Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier
  3. Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (to appear in SODA 2010)

All three papers aim to improve the approximation ratios obtained by computationally efficient mechanisms.  Each of them gives pretty strong and general results, but at a price: relaxing the notion of implementation from the usual one in CS, dominant strategies.  Specifically, the notions used in these three papers are, respectively:

  1. “We consider models of agent behaviour in which they either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies.”
  2. “We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics).”
  3. “We study the price of anarchy for such mechanisms, which is a bound on the approximation ratio obtained at any mixed Nash equilibrium.”

In general, I like the strategy of relaxing the required notion of implementation or equilibrium in order to get strong and general results.  This is especially true in the context of algorithmic mechanism design, for a few reasons: (1) the concept used in CS seems to be so strict that simply not enough is achievable; (2) the CS point of view is in any case much stricter than the Bayes-Nash one commonly used by economists; (3) much less seems to be needed in practice in order to ensure cooperative behavior (e.g., people just use TCP truthfully despite the clear opportunities for beneficial manipulation).  The difficulty with the relaxation strategy is that there are many possible ways of relaxing the game-theoretic requirements, and it takes some time to really be convinced of which notions are most useful, and in which situations.  I would love to see more community discussion of these, as well as of other notions of relaxed implementation and equilibrium.
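As a toy illustration of the first notion above (regret-minimizing bidders), here is a generic multiplicative-weights learner over a grid of bids, repeatedly playing a single-item second-price auction against random competition.  This is only my sketch of the learning dynamic itself, not the mechanisms or the analysis in the papers:

```python
import math
import random

def no_regret_bidder(value, rounds=10_000, eta=0.1, grid_size=21):
    """A bidder with a fixed value who learns how to bid via multiplicative weights.

    Each round the environment draws a competing bid uniformly at random; the
    bidder wins if its bid is higher and pays the competing bid (second-price
    rule), so its utility is value - competing_bid on a win and 0 otherwise.
    """
    bids = [i / (grid_size - 1) for i in range(grid_size)]  # bid grid on [0, 1]
    weights = [1.0] * grid_size
    realized = 0.0
    for _ in range(rounds):
        choice = random.choices(range(grid_size), weights=weights)[0]
        competitor = random.random()
        if bids[choice] > competitor:
            realized += value - competitor
        # Full-information multiplicative-weights update: every grid bid is
        # credited the utility it would have obtained this round.
        for i, b in enumerate(bids):
            counterfactual = (value - competitor) if b > competitor else 0.0
            weights[i] *= math.exp(eta * counterfactual)
        top = max(weights)
        weights = [w / top for w in weights]  # normalize to avoid overflow
    # In a second-price auction, bidding one's value is optimal, so the weights
    # should end up concentrated near the truthful bid.
    best_bid = bids[max(range(grid_size), key=lambda i: weights[i])]
    return best_bid, realized / rounds

print(no_regret_bidder(value=0.7))  # expect a learned bid close to 0.7
```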

The Stanford Institute for Economic and Policy Research, jointly with Microsoft, is holding a conference on Internet Economics on September 24-25.  Many papers are available on the web site.
