
Archive for April, 2009

The new GoogleResearch account on Twitter will certainly have some tweets related to algorithmic game theory, such as the announcement of the WWW’09 paper “Bid Optimization for Broad Match Ad Auctions” by Eyal Even-Dar, Vahab S. Mirrokni, S. Muthukrishnan, Yishay Mansour, and Uri Nadav.

Read Full Post »

Muthu has announced a new paper, “Bidding on Configurations in Internet Ad Auctions”, on his blog.

Abstract: In Internet advertising, a configuration of ads is determined by the seller, and advertisers buy spaces in the configuration. In this paper, motivated by sponsored search ads, we propose an auction where advertisers directly bid and determine the eventual configuration.

Read Full Post »

Shahar Dobzinski and Shaddin Dughmi just posted a new paper to the arXiv: On the Power of Randomization in Algorithmic Mechanism Design.

The mechanism design problem that they study in this paper is the basic one of a multi-unit auction: there are M identical indivisible units of some good for sale among n bidders, and each bidder i has a valuation function v_i : \{1, \dots, M\} \rightarrow \Re^+, where v_i(k) specifies his value for acquiring k units.  The standard assumption is free disposal, i.e., that the valuation functions v_i are monotone non-decreasing.  The goal is to find an allocation that gives k_i units to each bidder i (with \sum_i k_i \le M), maximizing the social welfare \sum_i v_i(k_i).

This problem is perhaps the simplest one exhibiting the key clash between computational complexity and incentive compatibility: since this is a social welfare maximization problem, the VCG mechanism that finds the optimal allocation is incentive compatible.  However, if we view the number of units M as large and require algorithms that work in time polynomial in n and \log M (assuming that the valuation functions are either concisely represented or given as “black boxes”), then finding the optimal allocation is a knapsack problem and is thus computationally hard.  On the other hand, it is well known that the knapsack problem can be well-approximated efficiently (there is an FPTAS).  The clash is that we don’t know how to turn such an approximation algorithm into an incentive-compatible approximation mechanism.
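To make the computational side concrete, here is a minimal sketch (my own illustration, not from the paper) of the exact welfare-maximizing dynamic program.  Note that it runs in time polynomial in M, not in \log M, which is precisely the gap that makes the exact problem knapsack-hard when M is given in binary:

```python
def max_welfare(valuations, M):
    """Exact welfare-maximizing allocation of M identical units.

    valuations[i][k] = v_i(k) for k = 0..M, with v_i(0) = 0 and each
    v_i monotone non-decreasing (free disposal).  Runs in O(n * M^2)
    time -- polynomial in M, hence exponential in the input size when
    M is encoded in binary, which is why poly(n, log M) algorithms
    cannot solve it exactly (it is a knapsack problem).
    """
    n = len(valuations)
    # best[m] = maximum welfare achievable with m units among the
    # bidders processed so far
    best = [0] * (M + 1)
    for i in range(n):
        new = [0] * (M + 1)
        for m in range(M + 1):
            # try giving k of the m available units to bidder i
            new[m] = max(best[m - k] + valuations[i][k]
                         for k in range(m + 1))
        best = new
    return best[M]
```

For example, with two bidders, v_1 = (10, 10, 10) and v_2 = (6, 12, 12) over M = 3 units, the optimum gives one unit to bidder 1 and two to bidder 2, for welfare 22.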

This paper manages to find a randomized incentive-compatible approximation mechanism.  It also highlights the role and necessity of randomization in this setting.

Read Full Post »

As picked up by Lance, Moshe Vardi uses his editor’s letter in the May CACM to start up a debate on the future of conferences in CS.  Moshe asks:

Is the conference-publication “system” serving us well today? Before we try to fix the conference publication system, we must determine whether it is worth fixing.

While I agree with the view that the conference system in CS is cracking and needs an overhaul or replacement, I feel somewhat uncomfortable with the undertones of the question.  Certainly no one is opposed to conferences per se.  It seems that the real issue is “how much should we value conference publications relative to journal publications”, with an implication that this will be used in hiring or promotion decisions.  Indeed, Moshe points specifically to the following decision:

 in 1999, the Computing Research Association published a Best Practices Memo, titled “Evaluating Computer Scientists and Engineers for Promotion and Tenure,” that legitimized conference publication as the primary means of publication in computer research.

In my view, the issue of evaluation of candidates is not the correct starting point for thinking about our conferences.  Top conferences in CS started gaining higher prestige than journals simply because the average scientific quality there was better (i.e., the CRA decision was the result, not the cause, of the higher prestige of conferences).  I have no doubt that we see questions about conference publications arising now simply because publication in many conferences is no longer a signal of high quality.  This is being recognized by the community and will certainly be taken into account by future promotion and hiring committees.  Instead, our question should be how we best use conferences (and journals, and the Internet, and anything else) to advance our field.  Conferences should provide the best service that they can, and so should journals.  The community will certainly come to appreciate the venues that offer the highest signal of quality, be it a conference or a journal.

It seems to me that many of the changes in CS conferences are not really driven by an attempt to advance computer science, but more by attempts to advance computer scientists.  Oded Goldreich puts it bluntly:

My impression is that FOCS and STOC do not function any more as forums devoted to the presentation and exchange of ideas (but rather function as “weight-lifting competitions”)

I’m not sure that I agree with Oded’s point of view regarding FOCS or STOC, but the situation is certainly worse in other conferences.  

Personally, I do have some preliminary ideas about what kind of changes I would like to see in the conference system: fewer conferences with proceedings, smaller PCs (unlike IJCAI’09), fewer accepted papers, and in general more “less”.  I also don’t see any reason to limit in any way the overlap between a journal publication and a conference publication.  If needed, the EC system of allowing a conference submission not to be published in the proceedings, so as to allow future publication in economics journals, seems good.  The point of this post, though, is not to put forward these suggestions but rather to steer the debate away from focusing on evaluation of candidates.

Read Full Post »

Two new conferences in May

Two new conferences related to Algorithmic Game Theory are taking place during this coming May:

  1. The International Conference on Game Theory for Networks (GameNets), taking place in Istanbul during 13-15 May.  Among the plenary speakers is Robert Aumann.
  2. The First Conference on Auctions, Market Mechanisms and Their Applications (AMMA), taking place in Boston during 8-9 May.

Read Full Post »

From Marginal Revolution, a story on how eBay has reduced the looting of antiquities…

Before reading question: Can you guess how this works?

Easier question: why should you expect eBay to actually increase the looting of antiquities?

Read Full Post »

Another round in the everlasting soul-searching of the TCS community about its connection with the real world has lately been going on in the “usual” theory blogs (by Mihai, Bill, and Michael, spurred by a workshop on theory and multi-cores) and also by Mark, a self-professed outsider. Among other issues that he points out, Mark complains about “people who devise complicated (never implemented) algorithms, with basic big-O analysis only, and obsession over the worst case” and asks: “how many researchers who devise new algorithms actually have them implemented?”  Mihai basically argues (in the multi-core case) against being too practical: “there is really no point wasting our time on current technology which is clearly bound to be replaced by some other technology in the next 60-1000 years”, but cannot refrain from mocking “too theoretical” theory (with a stab at algorithmic game theory too): “Which theory community? … The one that thinks Amazon should be happy with an O(\lg^3 n) approximation for its profit?”

I tend to think that many of these complaints come from taking the models and results of theoretical computer science too literally. The contribution that theory provides is usually not that of someone directly implementing a new algorithm. Usually it is the insights behind the algorithm (or the lower bound, or the characterization) that tend to be useful. Such insights may well come from “an O(\lg^3 n) approximation for profit” (which may be near optimal on real distributions, or may suggest a good heuristic, or may combine well with other algorithms and optimization goals — all of these best dealt with practically rather than analyzed theoretically). On the other hand, even, say, a linear-time algorithm for the exact optimal profit may not be directly applicable, since it solves the wrong problem, or due to additional new constraints, or due to the input not being available as assumed, or for a variety of other reasons.  Thus the “technology transfer” from theory to practice is not really composed of “algorithms” and “results” but rather of “insights”, “techniques”, and “ways of thinking”.  Indeed, when I look at how theoreticians at Google contribute to the practice of the company, I can hardly think of a situation where the contribution was simply a great new algorithm that was simply implemented. Usually, what a theoretical researcher contributes is a point of view that leads to a host of suggestions about algorithmic techniques, heuristics, metrics, notions, interfaces, trade-offs and more — all influenced by theoretical understanding — that together yield improved performance.

Thus the contribution of an “algorithm for X with performance Y in model Z” does not require that model Z be “realistic” or that performance Y be “good” — the real contribution is that obtaining performance Y in model Z provides new insights, insights that can then be used in various possible realistic applications for achieving performance improvements in various cases. It seems that many in our field (including program committees) sometimes forget this and simply view improving Y as a goal in itself (this is related to the old TCS conceptual debate). A well-chosen model Z will indeed tend to have the property that improving Y in it correlates well with new insights, but the two are never quite equivalent.  A major difference between theoretical CS (as well as theoretical economics) and pure math is exactly the central place of the art of choosing the right model.  This is especially true for algorithmic game theory, which sits between theoretical CS and theoretical economics, two fields with different sensibilities about good models — but that requires another post.

While judging a single contribution for “insight” is certainly difficult and subjective, in the long run we can tell.  The real test of a community is whether the insights that it has developed over a significant amount of time contribute to other fields. I would claim that theoretical computer science passes this test with flying colors. Looking a generation back, TCS has pretty completely re-shaped computer science and optimization.  The more recent products of theoretical CS, like approximation algorithms, online algorithms, or streaming/sketching algorithms, have already had a major influence on how the fields of OS, DB, or AI treat their own problems, as well as on how the industry looks at many of its problems. I would even claim that the very young field of algorithmic game theory can already boast some of its insights affecting other fields, and certainly much real Internet activity (yes, yes, ad auctions and such). Of course, it is way too early to judge the full contribution of algorithmic game theory or other last-decade fruits of theory.

Read Full Post »
