
## Will work for attention

I just got the following email:

Hi Noam,

I just came across your blog, Algorithmic Game Theory and found it to be pretty insightful. I’m trying to get some freelance practice, and was wondering if you would be willing to consider a guest piece from me. I could write on something specific for you, or something that would fit in with your blog. Let me know if you are willing and if there is a process or anything. If you choose to accept my article, I would only ask for a single link back to my site, http://www.onlinedegreeshub.com/. I have included 2 of my previous published posts:

http://www.onlinedegreeshub.com/blog/2009/12-celebs-who-give-geeks-hope-for-fame-and-fortune/

Please get back with me if you would like me to send something over for your consideration.

Thanks for your time,
Tara Miller

I have to admit that I was intrigued by this unusual mode of marketing: the sole desired reward is attention — a link to the website. Of course, the spam filter on this blog routinely removes many automatically-generated comments that attempt to “steal attention”, but this offer does not seem to be either spam or automatically generated, and the offer of a “guest post” seems personalized enough to suggest that real work was being offered. What is being promoted seems to be a rather new web site and blog that intends to be a portal for online education/degrees — probably not useful for most of the audience of this blog — but it still seems to contain some useful links to online material. I was impressed enough with the basic premise of this offer (as well as somewhat flattered by the suggestion that the attention that my blog can bring is worth something) that my response was to use the letter, with the links, directly as a blog post. So, Tara, if you’re actually reading this, a comment on your experience with this marketing effort would be appreciated.


## Ad auctions workshop schedule

The schedule of the ad auctions workshop has just been published.  The schedule of the workshop on The Economics of Networks, Systems, and Computation has not yet been published.  The workshops are taking place on July 6th and 7th, respectively, as part of EC 2009 at Stanford.


## Stream of Auctions

One of the defining characteristics of ad auctions is that they are repeated a large number of times. Every time some user makes a search query or visits a web page with an “ad slot” on it, a new auction takes place among all advertisers that target their ads at this impression. The targeting criteria may be quite sophisticated, taking into account not only the characteristics of the ad slot (web page or search keyword) but also various characteristics of the user, such as his geographic location (and sometimes much more). While much of the early work on ad auctions focused on the single auction, much of the current work on ad auctions focuses explicitly on the repetitions, on the stream. If the different auctions in the stream were totally unrelated then none of them should affect the others, and indeed they should be analyzed in isolation. In many real-world scenarios, however, there are significant constraints between the different auctions in the stream that need to be taken into account. Looking at the papers soon to be presented at EC’09, we can see several such issues:

1. Budgets: It is very common for an advertiser to have a budget limit for the total expenditure over all auctions in a certain period. This raises many questions, both game-theoretic and algorithmic, from the bidder’s point of view as well as from the auctioneer’s point of view. The basic paper addressing the auctioneer’s algorithmic problem, due to Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani, presents an online algorithm with a worst-case competitive ratio of 1-1/e. Many variants of the model have been considered in the literature, but the 1-1/e ratio has been hard to beat. The EC’09 paper “The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations” by Nikhil Devanur and Thomas Hayes does so, obtaining a $1-\epsilon$ approximation by adding a distributional assumption to the model.
2. Reservations: Advertisers often wish to “reserve” a certain number of impressions of some target type in advance. If this has been done, then once the stream of impressions arrives, the reserved number of impressions should be delivered sometime during the agreed-upon period (with some penalty if the reservation cannot be fulfilled). There are challenges at reservation time (can the auctioneer commit to a requested reservation? how should he price it?) as well as at delivery time (which reservation should the current impression fulfill?). The EC’09 paper “Selling Ad Campaigns: Online Algorithms with Cancellations” by Moshe Babaioff, Jason Hartline, and Robert Kleinberg studies the added flexibility that the auctioneer gets if he is allowed to renege on past reservations, for a cancellation fee.
3. Learning: Most ad auctions have a pay-per-click rule: the bidders pay for the impression that they won only if the ad was “clicked” by the user. This means that the “real bid” to be considered depends on the “click-through rate” — the probability that the ad will be clicked — a random variable that depends on the impression and on the advertiser. These click-through rates can be learned throughout the stream of auctions, and then taken into account in future auctions. Non-strategic analysis of similar situations often falls under the name of multi-armed bandit problems, and two closely related papers in EC’09 take into account the strategic behavior of the bidders: “Characterizing Truthful Multi-Armed Bandit Mechanisms” by Moshe Babaioff, Yogeshwer Sharma, and Aleksandrs Slivkins and “The Price of Truthfulness for Pay-Per-Click Auctions” by Nikhil Devanur and Sham Kakade.
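To make the budget-constrained setting of item 1 concrete, here is a small Python sketch of an online allocation rule in the spirit of the Mehta-Saberi-Vazirani-Vazirani algorithm: each arriving query goes to the budget-feasible bidder maximizing bid * (1 - e^(f-1)), where f is the fraction of that bidder's budget already spent. This is only an illustration; the bidder names, bids, and queries are made up.

```python
import math

def msvv_allocate(bids, budgets, queries):
    """Assign each arriving query to the remaining-budget-feasible bidder
    maximizing bid * (1 - exp(f - 1)), where f is the fraction of that
    bidder's budget already spent (the MSVV tradeoff function)."""
    spent = {name: 0.0 for name in budgets}
    revenue = 0.0
    for q in queries:
        best, best_score = None, 0.0
        for name, bid in bids.get(q, {}).items():
            if spent[name] + bid > budgets[name]:
                continue  # this bid would exceed the bidder's budget
            f = spent[name] / budgets[name]
            score = bid * (1 - math.exp(f - 1))
            if score > best_score:
                best, best_score = name, score
        if best is not None:
            revenue += bids[q][best]
            spent[best] += bids[q][best]
    return revenue, spent

# Two advertisers with equal bids and budgets; the tradeoff function
# spreads spending across them rather than draining one budget first.
budgets = {"A": 2.0, "B": 2.0}
bids = {"shoes": {"A": 1.0, "B": 1.0}}
revenue, spent = msvv_allocate(bids, budgets, ["shoes"] * 4)
print(revenue)  # 4.0
```

Hedging spending across bidders this way is exactly what protects the worst case and yields the 1-1/e competitive ratio; a naive greedy rule can exhaust one budget on queries that only that bidder wanted.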


## Jeff Ely’s Micro-Economics Course

Most researchers on the border of CS and economics (aka algorithmic game theory) seem to come from a CS background, as do I.  While the technical definitions and results of economic theory are quite easily understandable to us, the wider point of view of economists is often harder to grok.  Just like autodidacts often miss the traditional context of what they have learned, many of us in the algorithmic game theory community have our own narrative of the game-theoretic and economic notions that we use.  While this is necessary to some extent, as a novel scientific narrative is a required basis for a new scientific area, it is clear that we often miss a useful wider point of view that has been developed by a large economics community over a long time.  I personally try to understand the economists’ point of view, even on topics where I do not accept it and prefer my CS-ey point of view  (examples abound:  worst-case analysis, approximation, discrete models, focus on structure, and much more.)

Jeff Ely, an Economics Professor at Northwestern, has recently been publishing on his blog some notes as well as slides from his intermediate micro-economics course.  These seem quite helpful to me for bridging the “narrative gap”, for two main reasons.  First, as Jeff explains, the organization of the course is not the classical one for teaching micro-economics but rather his unique “institution-free approach”.  So far he has published notes for the first 3 lectures, and I find that this approach blends very well with the CS-ey point of view.  (Is this somehow related to the strong cs-econ group at Northwestern?)  Second, the lecture slides are presented within his own ars-economic commentary about what he teaches, which I find even more fascinating.  These lectures are probably not the right place for a computer scientist to learn micro-economics, as the slides are rather high-level and miss much detail.  Being from an intermediate course, their mathematical level would also likely be too low for the taste of most CS or Math students.  Yet, as a vehicle for getting some feel for the “mind of an economist” they seem excellent.


## Tsinghua Innovations in Computer Science

A recently announced new conference named “Innovations in Computer Science (ICS)” is drawing much attention and debate in the Theory of CS blogosphere (e.g. here, here, here, here, and here).  The aspect that I found most interesting has not been discussed, though: the new conference is in Beijing.  I find the seriousness with which the community is discussing a non-North-American conference to be an interesting indication that the times are a-changing.  There has even been discussion of whether the best “conceptual papers” will be sent to ICS or to the leading prestigious conferences FOCS and STOC.  This is still just talk, but even such talk is quite an accomplishment for a conference that has not been born yet.

This seems to be another indication that China is starting to take its place at the innovative forefront of science, in this case theoretical computer science.  Specifically, the ICS conference is being sponsored by the Institute of Theoretical Computer Science at Tsinghua University, which has gained significant visibility in the last few years.  Led by Turing award winner Andy Yao, supported by a distinguished international “chair professor team”, hosting an amazingly strong stream of visitors, recruiting top students, and, it seems, amply funded, the institute has certainly gained much attention very quickly.  Its goals of scientific excellence seem quite clear from the slides of a talk on “Nurturing Talents in China” that Andy Yao gave last year.

Algorithmic game theory has already seen very significant fruit from this institute: the PPAD-completeness of Nash equilibria of two-player games due to Xi Chen and Xiaotie Deng — the celebrated final step in settling the complexity of the Nash problem!  (And many other publications as well.)

I do not know whether China is investing similar effort in scientific areas other than Computer Science, nor whether they are making similar progress in other fields, but I certainly hope so.  Clearly there remains a huge gap to close before China shoulders its fair share of the scientific progress of humanity.  We can just hope that China does so with the same speed that it is moving on capturing its fair share of world GDP. (Yes, I know that China has a few other tiny problems like health, basic education, freedom, and other 3rd world problems, but let me worry about my own field first, and in any case,  scientific progress will only help in advancing these as well.)


## The Advertising Industry

The current “killer application” for algorithmic game theory is certainly on-line advertising. The traditional advertising industry is undergoing great turmoil due to on-line advertising, as well as other technological changes, which may not only give viewers more relevant and less annoying ads but may also start addressing the main problem from the point of view of advertisers, captured by the famous quote “Half the money I spend on advertising is wasted; the trouble is I don’t know which half”.

This highly entertaining video is from the point of view of traditional advertising (thanks to Eyal Manor for the link).


## Workshop: Search, Mechanism Design and the Internet

A Workshop on “Search, Mechanism Design and the Internet” will be held in London on June 12-13.  (As noted on Haris Aziz’s blog.)  Some of the presented papers can be found here.


## Revenue vs. Efficiency in Auctions

There are usually two different measures that auction designers attempt to optimize: efficiency (social welfare) and auctioneer revenue. A “better” auction often improves both efficiency and revenue, but in other cases these are conflicting goals. It is well known that efficiency is optimized by Vickrey auctions while revenue is optimized by Myerson’s optimal auctions. I often hear cynical doubts about whether anyone optimizes efficiency rather than revenue, and specifically such disbelief regarding the big companies running ad auctions (such as my current employer, Google). As far as I can tell, reality seems to be quite the opposite: companies aim to optimize their long-term or medium-term revenue rather than the revenue of a single auction. In a competitive environment the only way of optimizing long-term revenue is by gaining and maintaining market share, which in turn requires providing high “added value”, i.e. optimizing efficiency.

In any case, this post points to a paper by Gagan Aggarwal, Gagan Goel and Aranyak Mehta recently posted to the arXiv. Complementing a result of Jeremy Bulow and Paul Klemperer, they show that the difference between the two optimization goals is not very large compared to the effect of increasing the number of bidders. The setting is the classic one of selling a single indivisible good in the private value model with a commonly known distribution over bidders’ valuations (with some mild restrictions on the distribution). The BK paper shows that the revenue of an efficiency-maximizing auction with $k+1$ bidders is at least as high as that of the revenue-maximizing one with $k$ bidders. The new AGM paper shows that the efficiency of a revenue-maximizing auction with $k+\log k$ bidders is at least as high as that of an efficiency-maximizing one with $k$ bidders (and that the $\log k$ term is necessary).
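For a quick numerical sanity check of the Bulow-Klemperer statement (my own illustrative Monte Carlo simulation, not from either paper), take i.i.d. Uniform[0,1] values, for which Myerson's optimal auction is known to be a second-price auction with reserve price 1/2:

```python
import random

def bk_check(k, trials=200_000, seed=0):
    """Estimate (i) the revenue of the efficient second-price auction with
    k+1 bidders and (ii) Myerson's optimal revenue with k bidders, for
    i.i.d. Uniform[0,1] values (optimal = second price with reserve 1/2)."""
    rng = random.Random(seed)
    vickrey_rev, myerson_rev = 0.0, 0.0
    for _ in range(trials):
        vs = sorted(rng.random() for _ in range(k + 1))
        vickrey_rev += vs[-2]  # winner pays the second-highest value
        ws = sorted(rng.random() for _ in range(k))
        if ws[-1] >= 0.5:      # a sale happens only above the reserve
            runner_up = ws[-2] if k > 1 else 0.0
            myerson_rev += max(runner_up, 0.5)
    return vickrey_rev / trials, myerson_rev / trials

v, m = bk_check(k=1)
print(v, m)  # close to 1/3 and 1/4, and v >= m as BK predicts
```

For $k=1$ the exact values are $1/3$ (the expected minimum of two uniforms) versus $1/4$ (posted price $1/2$), matching the BK inequality: one extra bidder is worth more than optimal reserve pricing.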

[Added on 10.6: Thanks to Tim Roughgarden for pointing out to me his closely related joint paper with Mukund Sundararajan that generalizes the BK result to an ad-auction setting, as well as provides direct revenue guarantees without increasing the number of bidders.]


## Approximate Nash

The basic computational problem in algorithmic game theory is that of computing a (mixed) Nash equilibrium of a given (non-cooperative) game. The computational complexity status of this problem has recently been settled by showing that it is “PPAD-complete” and thus “unlikely” to be efficiently computable (see this link for a nice overview). This may be considered a not-very-satisfying answer due to our incomplete understanding of the class PPAD, but at least the problem is now reduced to a purely computational-complexity one, with no game-theoretic aspects to it and no special role for the Nash problem. The main related problem remaining is whether approximating a Nash equilibrium may be done efficiently. This post will describe the current status of the approximation problem.

## The approximate-Nash problem

Let us first settle on some notation. We are talking about an $n$-player game, where each player $i$ has $m_i$ possible strategies and a utility function $u_i : \{1..m_1\} \times \{1..m_2\} \times ... \times \{1..m_n\} \rightarrow \Re$. We will denote by $\Delta_i$ the set of probability distributions on $\{ 1 ... m_i\}$, and for $x^1 \in \Delta_1 ... x^n \in \Delta_n$ we use $u_i(x^1 ... x^n)$ as a shorthand for $\sum_{j_1, ... , j_n} \left( \Pi_l x^l_{j_l} \right) u_i(j_1 ... j_n)$, the expected value of $u_i(j_1 ... j_n)$ where each $j_l$ is chosen at random according to the probability distribution $x^l$.
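As a concrete illustration of this notation, the following small Python helper (a toy of my own, not part of the formal development) computes $u_i(x^1 ... x^n)$ by summing over all pure-strategy profiles:

```python
import itertools

def expected_utility(u, mixed):
    """E[u(j_1,...,j_n)] when each j_l is drawn independently from the
    mixed strategy mixed[l].  Here u maps pure-strategy profiles
    (tuples of indices) to payoffs."""
    total = 0.0
    for profile in itertools.product(*(range(len(x)) for x in mixed)):
        prob = 1.0
        for player, j in enumerate(profile):
            prob *= mixed[player][j]
        total += prob * u[profile]
    return total

# Matching pennies, row player's payoff, both players mixing 50/50:
u1 = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(expected_utility(u1, [[0.5, 0.5], [0.5, 0.5]]))  # 0.5
```

Note that this sum has $\Pi_i m_i$ terms, which is polynomial in the input size as defined below, since the input itself contains $n \Pi_i m_i$ numbers.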

We know by Nash’s theorem that a mixed Nash equilibrium of the game exists, and the computational problem is to find one. Unfortunately, it is also known that when $n \ge 3$ the Nash equilibria may all involve irrational probabilities. We thus cannot just ask our algorithm to output a Nash equilibrium, since it may not have a finite representation, and so we need to settle for approximation in order to even define the “exact” problem. So here is the carefully stated common definition of the Nash equilibrium computation problem:

Input: The input is given by $n$ tables $u_1 ... u_n$, each of dimensions $m_1 \times m_2 \times ... \times m_n$. For finiteness of representation, each of the $n \Pi_i m_i$ numbers in the input is a rational number given by a $d$-digit numerator and denominator. We are also given a rational number $\epsilon>0$ (with a $d$-digit numerator and denominator).

Output: $n$ rational vectors $x^1 \in \Delta_1, ..., x^n \in \Delta_n$  that are an $\epsilon$-Nash equilibrium.

Definition: $x^1, ..., x^n$ are $\epsilon$-Nash of the game defined by $u_1 ... u_n$ if for every $i$ and every $x^{i*} \in \Delta_i$ we have that $u_i(x^i, x^{-i}) \ge u_i(x^{i*},x^{-i}) - \epsilon$.

Running time: we want the running time to be polynomial in the input size, i.e. polynomial in $n$, $d$, and $\Pi_i m_i$.
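The output condition can be verified directly. Since a best response against fixed opponent strategies is always attained at some pure strategy, it suffices to check every pure deviation of every player; the following Python sketch (illustrative, with made-up example games) does exactly that:

```python
import itertools

def exp_u(u, mixed):
    # expected utility under independent mixed strategies
    total = 0.0
    for prof in itertools.product(*(range(len(x)) for x in mixed)):
        p = 1.0
        for i, j in enumerate(prof):
            p *= mixed[i][j]
        total += p * u[prof]
    return total

def is_eps_nash(utils, mixed, eps):
    """Check the epsilon-Nash condition; utils[i] maps pure profiles to
    player i's payoff.  Only pure deviations need to be tested, since a
    best response is always attained at a pure strategy."""
    for i, u in enumerate(utils):
        base = exp_u(u, mixed)
        for j in range(len(mixed[i])):
            pure = [0.0] * len(mixed[i])
            pure[j] = 1.0
            if exp_u(u, mixed[:i] + [pure] + mixed[i + 1:]) > base + eps:
                return False
    return True

# Matching pennies: the uniform profile is an exact (hence eps-) equilibrium,
# while the pure profile (0, 0) is not.
u1 = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
u2 = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(is_eps_nash([u1, u2], [[0.5, 0.5], [0.5, 0.5]], 0.01))  # True
print(is_eps_nash([u1, u2], [[1.0, 0.0], [1.0, 0.0]], 0.01))  # False
```

So verifying a proposed solution is easy; the whole difficulty of the problem is in finding one.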

It is quite useful to look carefully at various choices made here, and what we would get with slightly different choices:

1. Allowing any Nash equilibrium: our definition allowed any ($\epsilon$-)Nash equilibrium rather than specifying which one we desire in case multiple ones exist. It is known that most ways of attempting to require a specific Nash point make the problem harder, in fact NP-hard.
2. Approximation: as mentioned above, when $n \ge 3$ we need to settle for an $\epsilon$-approximation just to have a finite output. For $n=2$ there always exists an equilibrium whose probabilities are rational numbers with $poly(d, n, \Pi_i m_i)$-digit-long numerators and denominators, and thus the exact version is equivalent to the problem as stated.
3. Approximation of the best-response condition: our notion of approximation relaxed the best-response condition by $\epsilon$ rather than asking for some rational point that is $\epsilon$-close to an exact Nash equilibrium point. The latter condition seems to be much harder and is not even known to be in NP (or, in fact, even in the polynomial-time hierarchy) and is treated at length in this 57-page paper by Mihalis Yannakakis and Kousha Etessami. The crux of the difficulty is that getting some grip on an exact Nash point seems to require “infinite precision” arithmetic — the same type of problem encountered in trying to determine whether $\sum_i \sqrt{x_i} > T$ (discussed by Richard Lipton; see also this related post of his). (Thanks to Kousha Etessami for explaining these delicate issues to me.)
4. Having $\epsilon$ given in binary: in our definition $\epsilon$ was given by a $d$-digit numerator and denominator, and the running time was required to be polynomial in $d$, i.e. in $\log \epsilon^{-1}$. An alternative version would be to require $\epsilon$ to be “given in unary”, i.e. to allow the algorithm to run in time polynomial in $\epsilon^{-1}$. This version asks for a fully polynomial-time approximation scheme (FPTAS) and could conceivably be easier. However, it turns out that obtaining an FPTAS is also PPAD-hard, so this relaxation does not actually help.
5. Additive approximation: our condition demands $u_i(x^i, x^{-i}) \ge u_i(x^{i*},x^{-i}) - \epsilon$. We cannot use the more natural relative error $u_i(x^i, x^{-i}) \ge u_i(x^{i*},x^{-i}) (1 - \epsilon)$, as the equilibrium utility $u_i$ may be arbitrarily close to zero, which may again require an exponential-length answer.

At this point we can start talking about the approximate version of this problem. For this we will allow an arbitrary dependence of the running time on $\epsilon$, and even allow $\epsilon$ to be fixed (e.g. $\epsilon=0.1$). Since the original problem scales, for the approximation to make sense we need to normalize $\epsilon$ by the sizes of the numbers in the problem.

$\epsilon$-Approximation Problem: This is the same problem as above but scaled so that the utilities all satisfy $0 \le u_i(j_1 ... j_n) \le 1$.

## A Quasi-polynomial-time Algorithm

The key result by Richard Lipton, Evangelos Markakis, and Aranyak Mehta is that the approximation problem can be solved in “quasi-polynomial” time — in this case time $N^{O(\log N)}$ where $N$ is the input size. The algorithm is very simple and is based on the existence of approximate equilibria with strategies that have only small support.  In many “non-uniform” models, randomization over a large domain can be replaced by randomization over a much smaller domain.  (One nice example is the simulation of public coins by private ones in communication complexity.)  This is the situation here, and some background is given in a recent blog post by Richard Lipton.

Definition: Let $k$ be an integer, then a distribution $\tilde{x}^i \in \Delta_i$ is $k$-simple if all its coordinates are integer multiples of $1/k$. I.e. it is a uniform distribution on a multi-set of size $k$ of pure strategies. A random $k$-simplification of $x^i$ is obtained by choosing at random $k$ elements (with repetitions) according to the distribution specified by $x^i$ and then taking the uniform distribution on this multiset.

Proposition: For every $x^1 \in \Delta_1$ and fixed pure strategies $j_2 ... j_n$, if we chose a random $k$-simplification $\tilde{x}^1$ of $x^1$ then $Pr[|u_i(\tilde{x}^1,j_2,...,j_n) - u_i(x^1,j_2,...,j_n)| > \epsilon] \le e^{-\Omega(k\epsilon^2)}$.

The proof is by observing that $u_i(x^1,j_2,...,j_n)$ is the expected value of $u_i(j_1,j_2,...,j_n)$ when $j_1$ is chosen according to $x^1$, while $u_i(\tilde{x}^1,j_2,...,j_n)$ is the average value of $u_i(j_1,j_2,...,j_n)$ over $k$ random choices of $j_1$ according to $x^1$, and thus the proposition is just a direct use of the Hoeffding tail inequalities.
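A random $k$-simplification is straightforward to implement; this Python sketch (illustrative only, with a made-up example distribution) samples $k$ pure strategies i.i.d. from $x$ and returns the uniform distribution on the resulting multiset:

```python
import random

def k_simplify(x, k, rng):
    """Random k-simplification: draw k pure strategies i.i.d. from the
    mixed strategy x, and return the uniform distribution on the sampled
    multiset, so every coordinate is an integer multiple of 1/k."""
    counts = [0] * len(x)
    for j in rng.choices(range(len(x)), weights=x, k=k):
        counts[j] += 1
    return [c / k for c in counts]

rng = random.Random(42)
xt = k_simplify([0.2, 0.5, 0.3], k=100, rng=rng)
print(xt)  # every coordinate is a multiple of 1/100; close to the original
```

Running this a few times shows the concentration claimed by the proposition: each coordinate of the simplification is an empirical average of $k$ i.i.d. indicator samples, and Hoeffding bounds its deviation.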

Now, if we choose $k = O(\log N / \epsilon^2)$, where $N=\Pi_i m_i$, then, with high probability (due to the union bound), the previous proposition holds for all pure strategies $j_2 ... j_n$ and all $u_i$ for $i = 1 ... n$.  In this case, by averaging, it also holds for all mixed strategies $x^2 ... x^n$:

Definition: $\tilde{x}^1$ is an $\epsilon$-approximation of $x^1$ (relative to $u_1 ... u_n$) if for all $x^{2} \in \Delta_2 .... x^{n} \in \Delta_n$ and all $i$ we have $|u_i(\tilde{x}^1,x^{2},...,x^{n}) - u_i(x^1,x^{2},...,x^{n})| \le \epsilon$.

Corollary: For every $x^1 \in \Delta_1$ there exists an $\epsilon$-approximation $\tilde{x}^1$ which is $k$-simple, for $k = O(\log N / \epsilon^2)$.

The main lemma now follows:

Lemma: Every game has an $\epsilon$-Nash equilibrium where all player strategies are $k$-simple, with $k = O(\log N / \epsilon^2)$.

The proof starts with any Nash equilibrium $x^1 ... x^n$ and chooses a $k$-simple $\epsilon'$-approximation $\tilde{x}^i$ for each $x^i$ (where $\epsilon'=\epsilon/(2n)$).  Since $x^1 ... x^n$ is an exact Nash equilibrium, in order to show that $\tilde{x}^1 ... \tilde{x}^n$ is an approximate equilibrium, it suffices to show that changing the $x^i$’s to the $\tilde{x}^i$’s changes things by at most $\epsilon$.  But this is exactly what being an $\epsilon'$-approximation means for replacing a single $x^i$ by $\tilde{x}^i$: each of the $n$ replacements changes any expected utility by at most $\epsilon'$, and since the equilibrium condition compares two such utilities, the errors add up to at most $2n\epsilon' = \epsilon$.

Once the existence of a $k$-simple $\epsilon$-equilibrium is known, an $N^{O(k)}$-time algorithm follows simply by exhaustive search over all profiles of $k$-simple strategies, and thus we get our algorithm.
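The resulting exhaustive search is easy to instantiate for tiny two-player games. The following Python sketch (my own toy version of the search, with made-up example matrices) enumerates all $k$-simple profiles and returns the first $\epsilon$-equilibrium found:

```python
import itertools

def eps_nash_by_search(u1, u2, k, eps):
    """Search all profiles of k-simple strategies of a two-player game
    (payoff matrices u1, u2 given as lists of lists) for an
    eps-equilibrium; one must exist for k = O(log N / eps^2)."""
    m1, m2 = len(u1), len(u1[0])

    def k_simple(m):
        # uniform distributions over all size-k multisets of {0..m-1}
        for combo in itertools.combinations_with_replacement(range(m), k):
            d = [0.0] * m
            for j in combo:
                d[j] += 1.0 / k
            yield d

    def val(u, x, y):
        return sum(x[a] * y[b] * u[a][b] for a in range(m1) for b in range(m2))

    for x in k_simple(m1):
        for y in k_simple(m2):
            best1 = max(sum(y[b] * u1[a][b] for b in range(m2)) for a in range(m1))
            best2 = max(sum(x[a] * u2[a][b] for a in range(m1)) for b in range(m2))
            if val(u1, x, y) >= best1 - eps and val(u2, x, y) >= best2 - eps:
                return x, y
    return None

# Matching pennies: for k = 2 the uniform profile is k-simple and exact.
mp1 = [[1, 0], [0, 1]]
mp2 = [[0, 1], [1, 0]]
print(eps_nash_by_search(mp1, mp2, 2, 0.01))  # ([0.5, 0.5], [0.5, 0.5])
```

Each player has at most ${m+k-1 \choose k}$ many $k$-simple strategies, which for $k = O(\log N / \epsilon^2)$ gives the quasi-polynomial running time.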

Notice that we have actually shown a stronger theorem: not only can we find some $\epsilon$-approximate equilibrium, but in fact we can find an approximation to any equilibrium, and our approximate equilibrium also approximately preserves the players’ utilities.  This, for example, also provides an approximate equilibrium with approximately optimal social welfare.

## A polynomial time algorithm?

The existence of a quasi-polynomial-time algorithm may often be viewed as a hint that a polynomial-time algorithm exists as well.  In our case the main open problem is whether such a polynomial-time algorithm exists for every fixed value of $\epsilon>0$.  The upper bounds for the approximation problem are rather weak, and the best that is known is a polynomial-time approximation for $\epsilon=0.3393...$.

One possible approach is to see whether the parameter $k$ above may be reduced to a constant when $\epsilon$ is constant.  This, however, is not the case.  Consider a zero-sum two-player game where each utility is chosen at random to be 0 or 1.  The following statements can easily be shown to hold with high probability over the random choice of the game: (a) for each row or column, the fraction of 1-entries is very close to 1/2, and thus a value close to 1/2 can be obtained by either player by using a uniform distribution on his rows or columns; (b) for any $k=o(\log n)$ rows there exists a single column which has all-0 entries in these rows.  Thus any mixed strategy that is $o(\log n)$-simple cannot ensure positive value.  It follows that there does not exist an $\epsilon$-approximate Nash equilibrium which is $k$-simple for any $\epsilon<1/2$ and $k = o(\log n)$.  A recent paper of Constantinos Daskalakis and Christos Papadimitriou generalizes this impossibility not only to simple strategies but to more general “oblivious” algorithms.

Another difficulty is shown by a recent paper of Elad Hazan and Robert Krauthgamer.  They consider the harder problem of approximating a specific Nash equilibrium — specifically the one with highest social welfare — for which the technique above also provides a quasi-polynomial-time algorithm.  They show that this problem is at least as hard as the known problem of finding a small randomly planted clique in a graph, providing some evidence for its difficulty.
