Posts Tagged ‘New papers’

There are two main reasons why researchers publish papers in conferences and journals: a “real” reason and a “strategic” reason.  The strategic reason is an artifact of the current academic situation, where researchers are judged according to their publication lists when considered for positions, promotions, grants, etc.  Given this state of affairs, we have a strong personal incentive to “publish” whether or not our research is worthwhile and whether or not anyone will ever read it.  The “real” reason for publication is dissemination: letting other researchers learn about our work, so they can use it, continue it, be influenced by it, etc.  This is what the whole “scientific/academic establishment” should aim to promote.  Inherent here is the competition for the attention of other researchers, who must decide whether to spend time and effort reading your paper rather than one of the countless others vying for their attention.

In my view, the main real service that journals and conferences provide in this day and age is exactly the arbitration of this competition for attention: the editors and reviewers of journals and the program committees of conferences look at lots of papers and suggest a few of them for me to read.  When done right, this is indispensable: there is no way that I could spot by myself the important new paper of a young graduate student among the hundreds of unimportant ones out there.  The main reason why I prefer conferences to journals in CS is that the former seem to be doing a much better job (although still far from a perfect one) of identifying important new work.

The Internet has revolutionized the mechanics of disseminating scientific work.  I can still remember when scientific dissemination worked by putting reprints of our papers in envelopes and sending them by (real) mail.  This was replaced by photocopying from conference proceedings, then by sending email attachments, and today we just “put it on the web”.  The standard that seems to be emerging now is to put it on the arXiv.  In comparison, the “social-scientific” structure surrounding “publication” has hardly changed at all: putting your paper on the arXiv is not “counted as a publication”, provides no signal of your paper’s quality or correctness, and usually does not suffice for getting much attention for your work.  I think that the main ingredient missing before “putting your paper on the web” can become the main form of publication is a surrounding mechanism that provides a signal of quality and helps readers focus their attention on the more important work.  How exactly this could work remains to be seen, but I would like to run an experiment in this direction on this blog.

Experiment: Recommend interesting AGT/E papers on the Web

I am asking readers of this blog to put forward — as a comment to this blog post — recommendations for interesting papers in the field of Algorithmic Game Theory/Economics.  Here are the rules of this experiment:

  • Eligible recommenders: Anyone from the “AGT/E research community”.  I will take this in a wide sense: anyone who has published a paper related to AGT in a recognized scientific forum (conference or journal in CS, AI, GT, economics, …)
  • Eligible papers: Papers must be (1) Available openly on the web. (2) Not yet published in a journal or conference with proceedings.  It is OK if they were submitted to or accepted by a conference or journal as long as they have not yet appeared. (3) Related to Algorithmic Game Theory/Economics, taken in a wide sense.
  • What to include: (1) The name of the recommender and a link to their academic home-page — no anonymous recommendations. (2) A link to the paper. (3) A short explanation of what the paper is about and why you think it is interesting.  A recommendation carries no implicit claim of having refereed the paper in any sense.
  • Conflicts: The recommender should follow the usual rules for avoiding conflicts of interest used in program committees: do not recommend (1) your own papers, (2) papers of someone in your own department, (3) papers of a frequent collaborator, (4) papers of a family member/lover, etc.

[Update: following a suggestion, to start things off, I entered my own recommendation in the format that I was thinking of — as a comment.]



My paper with Shahar Dobzinski, titled Multi-Unit Auctions: Beyond Roberts, was just uploaded to the arXiv.

Abstract

We exhibit incentive compatible multi-unit auctions that are not affine maximizers (i.e., are not of the VCG family) and yet approximate the social welfare to within a factor of 1+\epsilon. For the case of two-item two-bidder auctions we show that these auctions, termed Triage mechanisms, are the only scalable ones that give an approximation factor better than 2. “Scalable” means that the allocation does not depend on the units in which the valuations are measured. We deduce from this that any scalable computationally-efficient incentive-compatible auction for m items and n \ge 2 bidders cannot approximate the social welfare to within a factor better than 2. This is in contrast to the arbitrarily good approximations that can be reached under computational constraints alone, and in contrast to the optimal social welfare that can be obtained under incentive constraints alone.


Guest post by Ariel Procaccia

Three of the papers uploaded to the arXiv on March 30 have a common theme: they deal with truth and justice (a.k.a. fairness). In scientific jargon, “paper after paper” means “two papers” and “a series of papers” means “three papers”; since there is also a bonus fourth paper, I can therefore declare a new “trend”.

Two of the arXiv papers are by Edith Cohen, Michal Feldman, Amos Fiat, Haim Kaplan, and Svetlana Olonetsky. Both papers study mechanisms with payments for the allocation of goods. The first paper asks (and answers) the following question: for which allocation rules are there payments such that the resulting mechanism is, at the same time, truthful and envy free? The latter property means that each player weakly prefers his own outcome (bundle + payments) to the outcome of any other player. Interestingly, the fact that an allocation rule admits payments leading to truthfulness, and different payments leading to envy freeness, does not imply that there is a single payment scheme that leads to both. The second paper focuses on a setting where players have a limit on the number of goods they may receive; the main result is that (under additive valuations) the VCG mechanism with Clarke pivot payments, which is of course truthful, also has the property that a higher-capacity agent never envies a lower-capacity agent. This implies envy freeness in the natural special case where all agents have the same capacity.
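To make this setting concrete, here is a minimal sketch (my own toy illustration, not code from the paper) of the VCG mechanism with Clarke pivot payments for additive valuations with per-bidder capacities: it finds a welfare-maximizing allocation by brute force and charges each bidder the externality he imposes on the others.

```python
from itertools import product

def vcg_clarke(values, capacities):
    """Brute-force VCG with Clarke pivot payments.

    values[i][j]  : additive value of bidder i for item j
    capacities[i] : maximum number of items bidder i may receive
    Returns (allocation, payments) where allocation[j] is the bidder
    receiving item j (or None if the item is left unallocated).
    """
    n, m = len(values), len(values[0])

    def best(bidders):
        # Exhaustive search over all allocations of the m items among
        # `bidders` (None = unallocated), respecting the capacities.
        best_w, best_alloc = 0.0, [None] * m
        for alloc in product(list(bidders) + [None], repeat=m):
            if any(alloc.count(i) > capacities[i] for i in bidders):
                continue
            w = sum(values[i][j] for j, i in enumerate(alloc) if i is not None)
            if w > best_w:
                best_w, best_alloc = w, list(alloc)
        return best_w, best_alloc

    _, alloc = best(range(n))
    payments = []
    for i in range(n):
        others = [k for k in range(n) if k != i]
        w_without_i, _ = best(others)                    # best welfare if i were absent
        w_others = sum(values[k][j] for j, k in enumerate(alloc)
                       if k is not None and k != i)      # others' welfare with i present
        payments.append(w_without_i - w_others)          # Clarke pivot payment
    return alloc, payments

# Toy example: two bidders, three items, both bidders capped at two items.
alloc, pay = vcg_clarke(values=[[6, 5, 1], [4, 4, 4]], capacities=[2, 2])
print(alloc, pay)   # items 0,1 go to bidder 0, item 2 to bidder 1; payments [4, 0]
```

In this toy run the capacities are equal, and one can check that neither bidder strictly prefers the other’s bundle-plus-payment, consistent with the equal-capacity special case mentioned above.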

The third paper is by Elchanan Mossel and Omer Tamuz. The first part of the paper deals with cake cutting, more formally known as “allocation of divisible goods”. Curiously, most of the literature on cake cutting is concerned with fairness, while truthfulness has so far been largely overlooked. Mossel and Tamuz are interested in mechanisms (without payments this time) that are, again, truthful and fair, but this time the notion of fairness is proportionality: each of the n players is allocated a piece of cake worth at least 1/n according to his valuation. They present a simple randomized mechanism that is truthful (in expectation) and proportional. To be precise, they show that there exists such a mechanism, since the construction relies on a well-known nonconstructive existence result. The mechanism is then extended to yield (the existence of) a randomized mechanism that is truthful and guarantees each player a piece worth more than 1/n of the entire cake in expectation (assuming some of the players have different valuation functions). The second part of the paper applies these insights to the allocation of indivisible goods.

Incidentally, I recently wrote a paper with Yiling Chen, John Lai, and David Parkes where we independently asked questions that are very similar to the ones raised by Mossel and Tamuz, but our answers are very different (modulo a small overlap). In our paper we want it all: truthfulness, envy freeness, and proportionality (which in our setting is not implied by envy freeness). Envy-free cake cutting is notoriously difficult, and in fact there is no known protocol that achieves an envy-free division in a bounded number of steps. In order to get constructive results, we therefore focus on restricted valuation functions where achieving fairness is trivial and the richness of the problem comes from the additional truthfulness requirement. Our main result is a deterministic mechanism that is truthful, envy free, proportional, and polynomial-time, assuming the valuations are what we call piecewise constant. The paper will appear in AAAI’10; in observance of the time-honored AAAI tradition, the current version is super dense and most proofs are omitted. This is another similarity to the Mossel-Tamuz paper: they have proofs of existence and we have nonexistent proofs! (In fact the proofs do exist, and the full version is coming soon.)
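For readers less familiar with these fairness notions, here is a tiny sketch (my own illustration, not any of the mechanisms above) that checks proportionality and envy freeness of a given division of [0,1] when valuations are piecewise constant over an equal-length grid; the grid size and example numbers are made up.

```python
def piece_value(density, piece):
    """Value of a union of intervals `piece` under a piecewise-constant
    valuation given as a step `density` over an equal-length grid of [0,1]."""
    k = len(density)
    total = 0.0
    for (a, b) in piece:
        for j in range(k):
            lo, hi = j / k, (j + 1) / k
            overlap = max(0.0, min(b, hi) - max(a, lo))
            total += density[j] * overlap
    return total

def is_proportional(densities, pieces):
    # Each player's piece is worth at least 1/n of his value for the whole cake.
    n = len(densities)
    return all(piece_value(d, pieces[i]) >= piece_value(d, [(0.0, 1.0)]) / n - 1e-9
               for i, d in enumerate(densities))

def is_envy_free(densities, pieces):
    # Each player values his own piece at least as much as anyone else's piece.
    return all(piece_value(d, pieces[i]) >= piece_value(d, pieces[j]) - 1e-9
               for i, d in enumerate(densities) for j in range(len(pieces)))

# Two players with piecewise-constant densities on a 4-cell grid of [0,1].
densities = [[2.0, 2.0, 0.0, 0.0],   # player 0 only values the left half
             [1.0, 1.0, 1.0, 1.0]]   # player 1 values the cake uniformly
pieces = [[(0.0, 0.5)],              # player 0 receives the left half
          [(0.5, 1.0)]]              # player 1 receives the right half
print(is_proportional(densities, pieces), is_envy_free(densities, pieces))
```

Proportionality compares each player’s piece to 1/n of his value for the entire cake, while envy freeness compares it to his value for every other player’s piece; in the toy example both checks pass.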

I’ll finish with a word of caution from my AI conscience. The problems that we study in AGT are often justified by assuming that our players are computational agents (e.g., in the scheduling domain). It is clear why lack of truthfulness may lead to socially undesirable outcomes in a multiagent system. In contrast, envy freeness seems to be desirable for philosophical or psychological reasons; it is a priori unclear why we would look for envy freeness in a multiagent system. A nice challenge is to come up with a model that relates envy freeness to the stability of the system, for example in terms of the robustness to random failures.


I spent most of last week at the Bertinoro workshop on Frontiers in Mechanism Design, organized by Jason Hartline and Tim Roughgarden.  Such a small, focused event is really a winning format (when done right — of course): almost every talk was very close to my interests and was really good (since the speakers presented their preferred recent work, which usually had also been accepted to some other top conference).

One of my take-homes from this event was a stronger realization that computer scientists are doing more and more Bayesian analysis in Algorithmic Mechanism Design, in this respect getting closer to the economists’ way of thinking.  It is not that computer scientists have lost their basic dislike of “average-case analysis”, distributional priors, or, especially, common priors; it is just that, in more and more places, we have started reaching the limits of “worst-case” analysis.  It seems that computer scientists are being very careful about “how much” they rely on Bayesian analysis, obtaining various “hybrid” results that are more robust, in various senses, than straight Bayesian ones.

An extremely good example is the classic result of the economists Jeremy Bulow and Paul Klemperer, who show that the revenue of the straight 2nd-price auction (for a single item) with n+1 bidders always dominates the revenue of Myerson’s optimal auction with n bidders.  Only the analysis is Bayesian: the resulting (n+1)-bidder 2nd-price auction is completely “worst-case” — neither the auctioneer nor the bidders need any knowledge of, or agreement on, the underlying distribution.  In spirit, such a result is similar to Valiant’s prior-free learning, where the analysis is with respect to an underlying distribution even though the learning algorithms cannot depend on it.  A recent paper, Revenue Maximization with a Single Sample by Dhangwatnotai, Roughgarden and Yan (to appear in EC 2010), gets more general results in this prior-independent vein, although in an approximation sense.
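Here is a quick Monte Carlo sanity check of the Bulow-Klemperer statement (my own toy illustration, not from their paper), using i.i.d. uniform [0,1] values, for which Myerson’s optimal auction happens to be a 2nd-price auction with reserve 1/2; the number of trials and the choices of n are arbitrary.

```python
import random

def second_price_revenue(bids, reserve=0.0):
    """Revenue of a single-item 2nd-price auction with a reserve price."""
    above = sorted((b for b in bids if b >= reserve), reverse=True)
    if not above:
        return 0.0              # no bid meets the reserve: no sale
    if len(above) == 1:
        return reserve          # a lone qualifying bidder pays the reserve
    return above[1]             # otherwise: the 2nd-highest qualifying bid

def average_revenues(n, trials=200_000, seed=0):
    rng = random.Random(seed)
    opt_n, sp_n_plus_1 = 0.0, 0.0
    for _ in range(trials):
        vals = [rng.random() for _ in range(n + 1)]
        # Myerson's optimal auction for i.i.d. uniform [0,1]: 2nd price, reserve 1/2.
        opt_n += second_price_revenue(vals[:n], reserve=0.5)
        # Plain 2nd-price auction with one extra bidder and no reserve.
        sp_n_plus_1 += second_price_revenue(vals, reserve=0.0)
    return opt_n / trials, sp_n_plus_1 / trials

for n in (1, 2, 5):
    opt, sp = average_revenues(n)
    print(f"n={n}: optimal with n bidders ~ {opt:.3f}, "
          f"2nd price with n+1 bidders ~ {sp:.3f}")
```

For n = 1 the two sides come out near 1/4 versus 1/3, and the gap in favor of the extra bidder persists for larger n, consistent with the theorem.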

A weaker type of result, but still better than a full-blown Bayesian one, is the 2nd-price version of Myerson’s auction.  In this version, the auctioneer must “know” the underlying distribution in order to set the optimal reserve price, but once that is done, the bidders see a “worst-case” situation in front of them (2nd price with reserve) and should bid truthfully in the dominant-strategy sense, without needing to know or agree about the underlying prior distribution.  (This is opposed to the revenue-equivalent-in-principle 1st-price version, in which bidders must know and agree on a common prior to have any chance of reaching the desired equilibrium.)  A recent paper, Multi-parameter mechanism design and sequential posted pricing by Chawla, Hartline, Malec, and Sivan (to appear in STOC 2010), gets similar types of results in a unit-demand heterogeneous auction setting where the auctioneer needs to know the distribution in order to set prices (in this case, posted prices), but the resulting mechanism is very simple and truthful in the dominant-strategy sense (again, the optimality guarantee is in an approximation sense).
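To make concrete the sense in which only the auctioneer needs the prior here: for i.i.d. regular distributions, Myerson’s reserve price is the value r at which the virtual value \phi(r) = r - (1-F(r))/f(r) vanishes, and the bidders then simply face a 2nd-price auction with that reserve.  Below is a small sketch of my own (with made-up example distributions) that finds r by bisection.

```python
def monopoly_reserve(F, f, lo=0.0, hi=1.0, tol=1e-9):
    """Reserve price r solving phi(r) = r - (1 - F(r)) / f(r) = 0, by bisection.
    Assumes the virtual value phi is increasing on [lo, hi] (i.e. the
    distribution is regular), so the root is unique."""
    phi = lambda v: v - (1.0 - F(v)) / f(v)
    a, b = lo, hi
    while b - a > tol:
        mid = (a + b) / 2.0
        if phi(mid) < 0.0:
            a = mid          # root lies above mid
        else:
            b = mid          # root lies at or below mid
    return (a + b) / 2.0

# Uniform on [0,1]: F(v) = v, f(v) = 1, so phi(v) = 2v - 1 and the reserve is 1/2.
print(monopoly_reserve(lambda v: v, lambda v: 1.0))
# F(v) = v^2 on [0,1]: phi(v) = (3/2)v - 1/(2v), so the reserve is 1/sqrt(3) ~ 0.577.
print(monopoly_reserve(lambda v: v * v, lambda v: 2.0 * v, lo=1e-6))
```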

A yet weaker version of a similar type of result appears in the paper Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier (to appear in STOC 2010).  In this paper the auctioneer again needs to know the underlying distribution, and he then creates a mechanism that is incentive compatible, but here only in the Bayesian sense.  That is, the bidders need not know the underlying distribution (as they should just act truthfully), but they must still trust that the auctioneer knows the prior distribution.  This type of result is more fragile than the previously mentioned ones, since the truthfulness of the auction depends on the auctioneer correctly knowing the underlying distribution, rather than just its optimality.  On the plus side, the paper shows that the auctioneer’s derivation of the auction can be done effectively just by using a “black box” for sampling the underlying distribution (as is the case for the derivation of Myerson’s optimal reserve price).

A somewhat dual situation is presented in the paper Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (SODA 2010).  In that paper, auctions are presented in which the auctioneer need not know the underlying distribution and acts in a “detail-free” way, i.e. the auction is independent of the underlying distribution.  However, the approximate optimality holds when the bidders are in a Bayesian equilibrium, i.e. the bidders must know and agree on a common prior for the analysis to hold.

The last example of “somewhat-Bayesian” results that comes to mind has nothing to do with incentives but is purely algorithmic.  The paper The Adwords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations by Nikhil Devanur and Thomas Hayes (in EC 2009) considers an online model of repeated auctions in which no distributional assumptions are made on the bidder values, which are treated as “worst case”, but a distributional assumption is made on the order of arrival, which is assumed to be uniformly random.  This allows them to get arbitrarily good approximations, circumventing the known lower bounds for the completely worst case.

While economists too have looked at weakenings of the fully Bayesian assumptions, as computer scientists are doing now, I see a difference in the two approaches.  Harsanyi’s influence on economic thinking seems to be so great that the Bayesian point of view is taken as the correct model by economists, and even its criticisms (cf. the Wilson critique) and results that weaken the assumptions are simply taken as “2nd-order” improvements.  Computer scientists, on the other hand, seem to have their basic intellectual foundation in a non-Bayesian setting, and as they move toward Bayesian models they do not seem to have a single model in mind but are rather quite happy to explore the “Pareto frontier” between the strength of the model and the strength of the results obtained.

Finally, as a side note, let me observe that while the gap between computer scientists and economists is narrowing on the Bayesian question, the other great gap — the issue of approximation — seems to be as wide as ever.  E.g., all the results by computer scientists mentioned in this post provide only approximate guarantees, while all those by economists are exact.


A new paper Budget Feasible Mechanisms by Christos Papadimitriou and Yaron Singer was recently uploaded to the arXiv:

We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.


The classic Gibbard-Satterthwaite theorem states that every nontrivial voting method can be manipulated.  In the late 1980s, an approach to circumvent this impossibility was suggested by Bartholdi, Tovey, and Trick: maybe a voting method can be found for which manipulation is computationally hard — and thus effective manipulation would be practically impossible?  Indeed, voting methods whose manipulation problem is NP-complete were found by them, as well as in later works.  However, this result is not really satisfying: NP-completeness only ensures that manipulation is hard in the worst case; it is quite possible that for most instances manipulation is easy, and thus the voting method can still be effectively manipulated.  What would be desired is a voting method where manipulation is computationally hard everywhere, except for a negligible fraction of inputs.  In the last few years several researchers have indeed attempted to “amplify” the NP-hardness of manipulation in a way that may get closer to this goal.

The new paper by Marcus Isaksson, Guy Kindler, and Elchanan Mossel, titled The Geometry of Manipulation – a Quantitative Proof of the Gibbard-Satterthwaite Theorem, shatters these hopes.  Solving the main open problem in a paper by Ehud Friedgut, Gil Kalai, and myself, they show that every nontrivial neutral voting method can be manipulated on a non-negligible fraction of inputs.  Moreover, a random flip of the order of 4 (four) alternatives will be such a manipulation, again with non-negligible probability.  This provides a quantitative version of the Gibbard-Satterthwaite theorem, just as Gil Kalai previously obtained a quantitative version of Arrow’s theorem.
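To illustrate the kind of event whose probability is being bounded — a random voter reshuffling the order of 4 alternatives in his reported ranking and thereby obtaining an outcome he truly prefers — here is a toy experiment of my own against the Borda rule (just an illustration of the statement, not the paper’s construction; the numbers of voters, alternatives, and trials are arbitrary).

```python
import random

def borda_winner(profile, m):
    """Borda count over rankings (lists of alternatives, best first);
    ties are broken in favor of the lower-indexed alternative."""
    score = [0] * m
    for ranking in profile:
        for pos, alt in enumerate(ranking):
            score[alt] += m - 1 - pos
    return max(range(m), key=lambda a: (score[a], -a))

def random_4_flip_manipulates(profile, m, rng):
    """Pick a random voter, randomly reshuffle 4 random alternatives within
    his reported ranking, and check whether this strictly helps him."""
    i = rng.randrange(len(profile))
    truth = profile[i]
    old_winner = borda_winner(profile, m)

    alts = rng.sample(range(m), 4)
    positions = sorted(truth.index(a) for a in alts)
    reshuffled = [truth[p] for p in positions]
    rng.shuffle(reshuffled)
    report = list(truth)
    for p, a in zip(positions, reshuffled):
        report[p] = a

    new_winner = borda_winner(profile[:i] + [report] + profile[i + 1:], m)
    # The manipulation succeeds if voter i truly prefers the new winner.
    return truth.index(new_winner) < truth.index(old_winner)

rng = random.Random(1)
n_voters, m_alts, trials = 5, 6, 20_000
hits = sum(
    random_4_flip_manipulates(
        [rng.sample(range(m_alts), m_alts) for _ in range(n_voters)], m_alts, rng)
    for _ in range(trials)
)
print(f"fraction of random (profile, 4-flip) pairs that manipulate: {hits / trials:.3f}")
```

The printed fraction is a quantity of the same flavor as the one the theorem lower-bounds for any nontrivial neutral voting method.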


In the last month, Brendan Lucier has uploaded three conceptually related papers to the arXiv (two of them with co-authors):

  1. Beyond Equilibria: Mechanisms for Repeated Combinatorial Auctions by Brendan Lucier
  2. Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier
  3. Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (to appear in SODA 2010)

All three papers aim to improve the approximation ratios obtained by computationally efficient mechanisms.  Each of them gives pretty strong and general results, but at a price: relaxing the notion of implementation from the usual one (in CS) of dominant strategies.  Specifically, the notions used in these three papers are, respectively:

  1. “We consider models of agent behaviour in which they either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies.”
  2. “We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics).”
  3. “We study the price of anarchy for such mechanisms, which is a bound on the approximation ratio obtained at any mixed Nash equilibrium.”

In general, I like the strategy of relaxing the required notion of implementation or equilibrium in order to get strong and general results.  This is especially true in the context of algorithmic mechanism design, for a few reasons: (1) the concept used in CS seems to be so strict that simply not enough is achievable; (2) the CS point of view is in any case much stricter than the Bayesian-Nash one commonly used by economists; (3) much less seems to be needed in practice in order to ensure cooperative behavior (e.g., people just use TCP truthfully despite the clear opportunities for beneficial manipulation).  The difficulty with the relaxation strategy is that there are many possible ways of relaxing the game-theoretic requirements, and it takes some time to really be convinced which notions are most useful, and in which situations.  I would love to see more community discussion of these, as well as of other notions of relaxed implementation and equilibrium.


An intriguing paper titled “A Theory of Goal-Oriented Communication” by Oded Goldreich, Brendan Juba, and Madhu Sudan has recently been uploaded to the ECCC, expanding a line of work started by the last two authors here and here.  The basic issue studied is how it is possible to communicate effectively without agreeing on a language in advance.  The basic result obtained is that, as long as the parties can “sense” whether some progress is being made toward their goals, prior agreement about a language is not necessary and a “universal” protocol exists.  My nerdier side cannot help but think about the application to communicating with an alien species (which I bet the authors deliberately did not mention).

It is obviously too early to tell whether this is the beginning of a great new intellectual journey or just a blind alley, but in any case I suppose that this is the kind of work sought by the ICS conference.  Despite the game-theoretic-sounding notion of the goals of the communicating parties that underlies the model, notions of rationality are conspicuously absent.


An intriguing new paper, A computational view of Market Efficiency by Jasmina Hasanhodzic, Andrew W. Lo, and Emanuele Viola, has just been uploaded to the arXiv.

Abstract

We propose to study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be efficient with respect to resources S (e.g., time, memory) if no strategy using resources S can make a profit. As a first step, we consider memory-m strategies whose action at time t depends only on the m previous observations at times t-m,...,t-1. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory m can lead to “market conditions” that were not present initially, such as (1) market bubbles and (2) the possibility for a strategy using memory m' > m to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms.
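Just to unpack the notion of a memory-m strategy (this is my own toy illustration, not the paper’s market model, and in particular it ignores the price impact that is central to the paper): such a strategy is simply a lookup table from the last m observations to a trading action.  The sketch below enumerates all memory-1 strategies over up/down observations and picks the most profitable one on a made-up price path.

```python
from itertools import product

# A memory-1 strategy maps the previous observation ("up" or "down")
# to a position in {+1 buy, 0 stay out, -1 sell short}.
ACTIONS = (+1, 0, -1)

def profit(strategy, prices):
    """Mark-to-market profit of a memory-1 strategy on a given price path."""
    total = 0.0
    for t in range(1, len(prices) - 1):
        obs = "up" if prices[t] > prices[t - 1] else "down"
        total += strategy[obs] * (prices[t + 1] - prices[t])
    return total

prices = [10.0, 11.0, 10.5, 11.5, 12.0, 11.0, 11.8, 12.5, 12.2]

# Enumerate all 9 memory-1 strategies and keep the most profitable one.
best = max(
    ({"up": a, "down": b} for a, b in product(ACTIONS, repeat=2)),
    key=lambda s: profit(s, prices),
)
print(best, profit(best, prices))
```

A memory-m strategy is the same object with a table indexed by the last m observations; the paper’s twist is that once such strategies trade, their buy/sell decisions themselves move the price path.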


Shahar Dobzinski asks:

Suppose you have an interesting result that has an easy, almost trivial proof. What is the best way to publish it? Writing a full, formal paper takes too much energy. Besides, traveling to a conference just to give a 5-minute presentation is overkill, and journals are just too slow (who reads them anyway?).

The result in question concerns the basic issue in algorithmic mechanism design of the extent to which incentive compatibility penalizes computationally efficient approximation algorithms.  Shahar observed that known techniques imply that, at least for artificial problems, incentive compatibility may result in an unbounded degradation.

I talked Shahar (who is my just-graduating student) into writing it up and uploading it to the arXiv, here.

I think that both the question Shahar raises (how to “publish” easy stuff) and the answer he gives (an unbounded price of incentive compatibility) are interesting (though not really related to each other) — so here they are.

