
Archive for July, 2009

Shahar Dobzinski asks:

Suppose you have an interesting result that has an easy, almost trivial proof. What is the best way to publish it? Writing a full, formal paper takes too much energy. Besides, traveling to a conference just to give a 5-minute presentation is overkill, and journals are just too slow (who reads them anyway?).

The result in question concerns the basic issue in algorithmic mechanism design of the extent to which incentive compatibility penalizes computationally efficient approximation algorithms.  Shahar observed that known techniques imply that, at least for artificial problems, incentive compatibility may result in an unbounded degradation.

I talked Shahar (who is my just-graduating student) into writing it up and uploading it to the arxiv, here.

I think that the question that Shahar raises (how to “publish” easy stuff), as well as the answer he gives (unbounded price of incentive compatibility), are both interesting (though not really related) — so here they are.



The paper Strategyproof Approximation Mechanisms for Location on Networks by Noga Alon, Michal Feldman, Ariel D. Procaccia, and Moshe Tennenholtz was recently uploaded to the arxiv (this is probably Noga Alon‘s first work in AGT.)   This paper continues previous work on Approximate Mechanism Design Without Money by a subset of these authors.  The (amazingly animated) slides from Ariel’s distinguished dissertation talk in AAMAS give an overview of these results.

The Gibbard-Satterthwaite impossibility result rules out, in most settings, the possibility of designing incentive compatible mechanisms without money.   Chapter 10 of the Algorithmic Game Theory book is devoted to settings where this is possible, and these papers suggest a new approach for escaping the impossibility, an approach which should not surprise computer scientists: approximation.  While the first paper gives motivation and promise and is technically simple, the new paper is more sophisticated.   Both papers focus on simple variants of facility location, but one can certainly think of much future work in this direction in other settings, like the recent one on Sum of us: truthful self-selection by Noga Alon, Felix Fischer, Ariel D. Procaccia, and Moshe Tennenholtz.


Lance Fortnow has published in CACM his viewpoint on the role of conferences in CS, another round in the everlasting internal debate on this issue.  Among his recommendations is:

[…] holding conferences less frequently and accepting every reasonable paper for presentation without proceedings

While I am in favor of having large conferences where everyone meets and everyone talks, as is common in other disciplines, I would like to see these as an additional format alongside the top CS conferences rather than as a replacement.  Of course, adding non-competitive, non-proceedings conferences without destroying the current competitive ones will not "solve the problem" stated by the CRA tenure policy memo from '99, that "In those dimensions that count most, conferences are superior".

From my point of view the unique role of top conferences in CS  is that of influencing the agenda of the field.  Science is a human endeavor, and as such is a social one.  The raison d’etre of the whole academic system is to bring scientists together, having them influence each other.  In fact, this is the only way for a field to advance: “standing on the shoulders of giants” or at least on the shoulders of each other.  Now, the truth is that the most difficult insight at every single point in time is the understanding of what is important and how to look at things.  A field manages to progress best if many researchers reach similar views on these questions, and then work, for a while, in the implied directions, boosting each other’s research.  Of course, in many cases after a while it may become apparent that this research direction was “wrong” (i.e. not interesting, not solvable, or not bearing fruit), but this is the common risk in any research.  When a whole community works in some direction together, they will tend not only to progress on it faster, but also to branch away into more profitable directions faster.  If done right, they will also be able to abandon a wrong direction faster, and more often concentrate efforts in more promising directions.

All in all, I think that the CS conference system (at least in areas that I am familiar with) has been doing very well in concentrating the community effort, and more importantly in abandoning "old fields" and quickly moving into more promising ones.  The "trends" we see evoked by the conference system have managed to develop whole areas extremely quickly: modern cryptography, quantum computation, online algorithms, streaming algorithms, and algorithmic game theory are examples of rapid creation of new fields.  They have also managed to change directions quickly: the re-invention of CCC (formerly called Structures), the de-emphasis of new research in automata theory, the shift of focus away from the PRAM model in parallel computation, and the rise and fall of theoretical VLSI are such examples.

Now, I am sure that the reader will disagree with the “wisdom of the community” in at least some of the examples I listed above as well as in many other cases.  Indeed it is important that there is ample room for other voices, both within each conference and in competing conferences.   I do not think that one can seriously claim that the CS conference system is “shutting up” other voices.  Certainly non-trendy areas get less attention and space than trendy ones, but they are allowed to compete for the mind-share and heart-share of the community, and may then become trendy.  This competition is key.  It may be artificially triggered by the limited size of the conference, but it really reflects a competition for the limited attention that humans have.

Now, one may evaluate differently the success of the CS conference system in setting the scientific agenda, and may certainly not like the idea of having a community agenda at all.  But just consider the alternatives!  One of the saddest and most common things in science is a brilliant graduate student who is working in the defunct area of his or her adviser.  In most cases, the area chosen for your PhD will determine what you work on for the rest of your career.  What a loss!  The CS conference system has given graduate students a better chance to find a research area with a promising future (better — not perfect).  Many an adviser has suggested: "read all the abstracts of papers in the last STOC and find 2 or 3 papers that you want to read completely".  This is so much better than "read my own last 2 or 3 papers".   An enormous amount of research effort is wasted in all branches of science by researchers working in stupid and boring directions.  This waste cannot be totally eliminated, of course, but I think that the CS conference system is very effective in reducing it relative to other scientific systems I know.


After the success of the first polymath project (see also here and here as well as the project wiki), there are now two other "polymath" projects going on or proposed. Fields medalist Terry Tao suggested a question from the International Mathematical Olympiad, and his blog hosts an ongoing mini-polymath project addressing it. Gil Kalai is meanwhile probing whether there is sufficient interest for a polymath project on the Hirsch conjecture (on which Gil has blogged extensively).

The question naturally arises of whether to have a "polymath project", or some other form of collaborative research, related to Algorithmic Game Theory. The young age of the field and the disparate backgrounds involved seem to make it a pretty good candidate for such efforts.


Lunar landing and war

Yesterday marked the 40th anniversary of Apollo 11’s lunar landing.  Technophiles and sci-fi fans everywhere, including complexity theorists as well as presidents, celebrate this event, while others think it was a waste of money and effort.    One side of the debate focuses on various useful scientific discoveries that can be attributed to the space mission (I’ve seen the iPod claimed as an application), while the other side focuses on the opportunity cost (fighting hunger and disease and such).

I am not really convinced by the attributed scientific discoveries — much more could have been obtained at lower cost by saner missions such as unmanned space flight.  On the other hand, I can't see the opportunity cost as being world hunger.  The truth is that the space mission was part of the cold war — a sublimation of a war, or a game of war, if you will.  The intention was certainly to compete with the USSR, and the basic technology is as war-like as it gets. Like a war, it managed to capture the imagination, enthusiasm, and energy of a nation, and in fact of much of the world.  Compared to alternative acts of war, the Apollo mission really shines: it is not a human tragedy.  In general, the main attraction of other games of war as well (like sports competitions) is that they are not real wars.

The algorithmic game theory connection here is slim, but at least Google found a neat angle by taking Apollo’s computer programs and making them open source.   I suppose that this now justifies the auction of Buzz Aldrin’s slide rule.


Guest post from Ariel Procaccia:

IJCAI has been held biennially in odd-numbered years since 1969, so IJCAI'09 is the 21st in the series (AAAI takes place every year, with the frequent exception that there is no AAAI when IJCAI is in North America, as happened this year). Generally speaking, AAAI/IJCAI are considered the top conferences in AI; in case you were wondering how one defines AI, my best definition is "everything that might get published in AAAI/IJCAI". For a more detailed (and less circular) definition see the list of keywords in the call for papers. IJCAI'09 had 1290 submitted papers and 331 accepted papers.

The conference included plenty of Algorithmic Game Theory papers. I consider at least 7 sessions (with 4 papers per session) to be “hardcore” Econ/CS sessions: three social choice sessions (!!), coalitional games, solution concepts in games, auctions, and mechanism design. Some AGT papers appeared in indirectly related sessions about social networks, “performance and modeling in games”, etc.

A short comparison to the previous week’s EC is called for. One point is that social choice and coalitional games are very popular in IJCAI. In contrast, it seems that the vast majority of papers in EC were concerned with mechanism design, often in the context of approximation. Furthermore, the emphasis in EC seemed to be on technical papers, whereas the main emphasis in IJCAI (and AI conferences in general) is on the conceptual aspects rather than the technical, which usually makes the conference fun to attend. That said, I personally think that the technical threshold is satisfactory, and one often encounters mathematically involved papers (such as a paper by Lirong Xia and Jérôme Lang with a 15 page proof). Finally, EC had only 7 accepted papers labeled as “AI”, and in my opinion only one or two of them were “real” AI (for example, my own EC papers were labeled as “AI” as I consider myself to be an “AI guy”, but were not “real” AI). For more see Shahar’s EC conference report of July 11, which correctly observed that “AI guys complain that there are too many theory papers”, although I don’t see what made the theory guys complain.

Turning back to IJCAI, the keynote address was given by Hal Varian, who spoke about computer mediated transactions. Varian did an interesting job of putting the recent Internet-related developments in perspective by providing many historical anecdotes. I especially liked his discussion of how better monitoring is related to better contracts. For instance, as early as 3000BC clay tokens were matched to jars in order to guarantee honest delivery between Mediterranean countries, thereby significantly improving trade. In his introduction Tuomas Sandholm noted that HAL is an appropriate name for a speaker in an AI conference; Varian replied that in a recent statistics conference someone said that he was invited since his name is Hal Variance.

Let me briefly mention one game theory talk that I particularly enjoyed. A paper called “Iterated regret minimization: a new solution concept”, by Joe Halpern and Rafael Pass (Cornell), was nicely presented by Rafael. The paper defines iterated regret minimization in the natural way (given that you know what regret minimization is). Most interestingly, it turns out that in several prominent games where standard solution concepts (Nash, iterated elimination of dominated strategies, etc.) give absurd outcomes, iterated regret minimization provides intuitive outcomes that agree with empirical experiments in a very satisfying way. Joe and Rafael also provide existence theorems and an epistemic characterization.

I will conclude with some assorted highlights. First, the important-sounding "AAAI Presidential Panel on Long-Term AI Futures" was charged with dealing with the seemingly sci-fi issues that might arise when we reach the "singularity", that is, the point where machines are as intelligent as humans. The presentation was somewhat—how should I put it?—bizarre, or let's say unexpected, but nevertheless thought provoking. They actually discussed (among many other issues) Asimov's laws of robotics. Here is some food for thought: how do you specify and implement Asimov's first law (a robot cannot harm a human)? If a robot makes a child cry by trying to help the parents, does it violate this law? It turns out that there are some AAAI/IJCAI papers on this issue! Second, the titles of the outstanding papers are "Learning conditional preference networks with queries" and "Consequence-Driven Reasoning for Horn SHIQ Ontologies"; the readers of this blog will surely excuse me for not elaborating further. Third, it seems the zeitgeist is AI and the environment, not surprisingly given Obama's funding policies; environmental game theory is probably the next big thing. Fourth, check out this movie of how Andrew Ng of Stanford University (co-winner of this year's IJCAI Computers and Thought award) used machine learning to build an awesome autonomous helicopter.

The next IJCAI will take place in Barcelona in 2011. It was also announced that IJCAI’13 will be held in Beijing. By then China will take over the universe, so perhaps we will not need a visa.


Undergraduate algorithms courses typically discuss the maximum matching problem in bipartite graphs and present algorithms that are based on the alternating paths (Hungarian) method.  This is true in the standard CLR book as well as in the newer KT book (and implicitly in the new DPV book that just gives the reduction to max-flow.)  There is an alternative auction-like algorithm originally due to Demange, Gale, and Sotomayor that is not well known in the CS community despite being even simpler.  The algorithm naturally applies also to the weighted version, sometimes termed the assignment problem, and this is how we will present it.

Input: A weighted bipartite graph, with non-negative integer weights.  We will denote the vertices on one side of the graph by B (bidders) and on the other side by G (goods).  The weight between a bidder i and a good j is denoted by w_{ij}.  We interpret w_{ij} as quantifying the amount that bidder i values good j.

Output: A matching M with maximum total weight \sum_{(i,j) \in M} w_{ij}.  A matching is a subset of B \times G such that no bidder and no good appear more than once in it.

The special case where w_{ij} \in \{0,1\} is the usual maximum matching problem.

Algorithm:

Initialization:

  1. For each good j, set p_j \leftarrow 0 and owner_j \leftarrow null.
  2. Initialize a queue Q to contain all bidders i.
  3. Fix \delta = 1/(n_g+1), where n_g is the number of goods.

While Q is not empty do:

  1. i \leftarrow Q.dequeue().
  2. Find j that maximizes w_{ij} - p_j.
  3. If w_{ij} - p_j \ge 0 then
    1. If owner_j \ne null, enqueue owner_j into Q.
    2. owner_j \leftarrow i.
    3. p_j \leftarrow p_j + \delta.

Output: the set of pairs (owner_j, j) for all goods j with owner_j \ne null.
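
To make the steps above concrete, here is a minimal runnable Python sketch of the algorithm as listed (a sketch, not code from the original write-up; the function and variable names are mine). It implements the basic version whose running time is analyzed below, not the faster bucket-queue variant, and it uses exact fractions for the prices so that the n\delta < 1 argument in the correctness proof is not affected by rounding.

```python
from collections import deque
from fractions import Fraction

def auction_matching(w):
    """Maximum-weight bipartite matching via the ascending-auction method above.
    w[i][j] is the non-negative integer value of bidder i for good j (w non-empty).
    Returns a dict mapping each assigned good j to its owner (bidder) i."""
    n_bidders, n_goods = len(w), len(w[0])
    delta = Fraction(1, n_goods + 1)        # delta = 1/(n_g + 1), kept exact
    price = [Fraction(0)] * n_goods         # p_j
    owner = [None] * n_goods                # owner_j
    Q = deque(range(n_bidders))             # Q initially holds all bidders

    while Q:
        i = Q.popleft()
        # step 2: find a good j maximizing w_ij - p_j
        j = max(range(n_goods), key=lambda g: w[i][g] - price[g])
        if w[i][j] - price[j] >= 0:         # step 3: bid only if the utility is non-negative
            if owner[j] is not None:
                Q.append(owner[j])          # the outbid previous owner re-enters the queue
            owner[j] = i
            price[j] += delta
    return {j: i for j, i in enumerate(owner) if i is not None}

# With 0/1 weights this is ordinary maximum matching: by the lemma below the weight
# is within n*delta < 1 of optimal, hence (integer weights) exactly optimal.
print(auction_matching([[1, 1, 0],
                        [0, 1, 0],
                        [0, 1, 1]]))        # -> {0: 0, 1: 1, 2: 2}
```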

Correctness: The proof of correctness is based on showing that the algorithm gets into an “equilibrium”, a situation where all bidders “are happy”.

Definition: We say that bidder i is \delta-happy if one of the following is true:

  1. For some good j, owner_j=i and for all goods j’ we have that \delta + w_{ij}-p_j \ge w_{ij'}-p_{j'}.
  2. For no good j does it hold that owner_j=i, and for all goods j we have that w_{ij} \le p_{j}.

The key loop invariant is that all bidders, except those that are in Q, are \delta-happy.  This is true at the beginning since Q is initialized to all bidders.  For the bidder i dequeued in an iteration, the loop exactly chooses the j that makes him happy, if such j exists, and the \delta-error is due to the final increase in p_j.  The main point is that this iteration cannot hurt the invariant for any other i’: any increase in p_j for j that is not owned by i’ does not hurt the inequality while an increase for the j that was owned by i’ immediately enqueues i’.

The running time analysis below implies that the algorithm terminates, at which point Q is empty and thus all bidders must be \delta-happy.

Lemma: if all bidders are \delta-happy then for every matching M’ we have that n\delta + \sum_{i=owner_j} w_{ij} \ge \sum_{(i,j) \in M'} w_{ij}.

Before proving this lemma, we notice that this implies the correctness of the algorithm since by our choice of \delta, we have that n\delta < 1, and as all weights are integers, this implies that our matching does in fact have maximum weight.

We now prove the lemma.  Fix a bidder i, let j denote the good that he got from the algorithm and let j' be the good he gets in M' (possibly j=null or j'=null).  Since i is happy we have that \delta + w_{ij}-p_j \ge w_{ij'}-p_{j'} (with the notational convention that w_{i,null}=0 and p_{null}=0, which also takes care of case 2 in the definition of happy).  Summing up over all i we get \sum_{i=owner_j} (\delta + w_{ij}-p_j) \ge \sum_{(i,j') \in M'} (w_{ij'}-p_{j'}).  Now notice that since both the algorithm and M' give matchings, each j appears at most once on the left hand side and at most once on the right hand side.  Moreover, if some j does not appear on the left hand side then it was never picked by the algorithm and thus p_j=0.  Thus, when we add \sum_j p_j to both sides of this inequality, the left hand side becomes at most the LHS of the inequality in the lemma (the -p_j terms cancel exactly, since goods never picked by the algorithm have p_j=0), while the right hand side becomes at least the RHS of the inequality in the lemma (since each good appears at most once in M' and prices are non-negative), which establishes the lemma.  QED.
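
For the record, here is the whole chain written out (a restatement of the argument above, using the lemma's notation and the convention p_j = 0 for goods never picked by the algorithm):

n\delta + \sum_{i=owner_j} w_{ij} \ge \sum_j p_j + \sum_{i=owner_j} (\delta + w_{ij} - p_j) \ge \sum_j p_j + \sum_{(i,j') \in M'} (w_{ij'} - p_{j'}) \ge \sum_{(i,j') \in M'} w_{ij'}.

The first inequality holds because at most n bidders own a good and \sum_{i=owner_j} p_j = \sum_j p_j; the second is the sum of the happiness inequalities; the third holds because each good appears at most once in M' and prices are non-negative.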

Running Time Analysis:

Each time the main loop is repeated, some p_j is increased by \delta or some bidder is removed from Q forever.  No p_j can ever increase once its value is above C = max_{i,j} w_{ij}.  It follows that the total number of iterations of the main loop is at most Cn/\delta = O(Cn^2) where n is the total number of vertices (goods+bidders).  Each loop can be trivially implemented in O(n) time, giving total running time of O(Cn^3), which for the unweighted case, C=1, matches the running time of the basic alternating paths algorithm on dense graphs.

For non-dense graphs, with only m=o(n^2) edges (where an edge is a non-zero w_{ij}), we can improve the running time by using a better data structure.  Each bidder maintains a priority queue of goods ordered according to the value of w_{ij} - p_j.  Whenever some p_j is increased, all bidders that have an edge to this j need to update the value in their priority queue.  Thus an increase in p_j requires d_j priority queue operations, where d_j is the degree of j. Since each p_j is increased at most C/\delta = O(Cn) times, and since \sum_j d_j = m, we get a total of O(Cmn) priority queue operations.  Using a heap to implement the priority queue takes O(\log n) per operation.  However, for our usage, an implementation using an array of linked lists gives O(1) amortized time per operation: entry t of the array contains all j such that w_{ij} - p_j = t\delta, updating the value of j requires moving it down one place in the array, and finding the maximum w_{ij} - p_j is done by marching down the array to find the next non-empty entry (this is the only amortized part).  All in all, the running time for the unweighted case is O(mn).
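
Since the array-of-linked-lists structure is described only in words, here is a small Python sketch of one bidder's bucket queue (again only a sketch: the class and method names are mine, and Python sets stand in for the linked lists). A good sits in bucket t exactly when w_{ij} - p_j = t\delta, so at initialization t = w_{ij}(n_g+1).

```python
class BucketQueue:
    """One bidder's 'priority queue' over adjacent goods, keyed by the integer t
    with w_ij - p_j = t * delta.  Prices only rise, so a good only moves down,
    and the search pointer `top` only ever decreases (the amortized part)."""

    def __init__(self, max_level):
        self.buckets = [set() for _ in range(max_level + 1)]
        self.level = {}                      # current bucket of each good
        self.top = 0                         # highest possibly non-empty bucket

    def insert(self, good, t):
        """Add a good at level t = (w_ij - p_j) / delta (done once, at the start)."""
        self.buckets[t].add(good)
        self.level[good] = t
        self.top = max(self.top, t)

    def decrease(self, good):
        """Called whenever p_j grows by delta: move the good down one bucket,
        dropping it once w_ij - p_j would become negative."""
        if good not in self.level:
            return
        t = self.level.pop(good)
        self.buckets[t].discard(good)
        if t >= 1:
            self.buckets[t - 1].add(good)
            self.level[good] = t - 1

    def best(self):
        """Return some good maximizing w_ij - p_j, or None if all are priced out."""
        while self.top >= 0 and not self.buckets[self.top]:
            self.top -= 1                    # march down to the next non-empty bucket
        return next(iter(self.buckets[self.top])) if self.top >= 0 else None
```

In the main loop, step 2 then becomes best() on the dequeued bidder's queue, and every increase of p_j is followed by a decrease(j) call in the queue of each bidder with an edge to j, matching the operation counts above.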

Additional comments:

  • As shown by DGS, a similar procedure terminates with close to VCG prices, which are also the point-wise minimum equilibrium prices.
  • The algorithm was presented for the assignment problem where bidders never desire more than a single item.  It does work more generally as long as bidders are “gross substitutes”.
  • The algorithm, like many auctions, can be viewed as a primal-dual algorithm for the associated linear program.
  • Choosing a small fixed value of, say, \delta=0.01 gives a linear time 1.01-approximation for the maximum matching.
  • Choosing the value \delta = 1/\sqrt{n} gives a matching that misses at most \sqrt{n} edges, which can then be added using \sqrt{n} alternating path computations, for a total running time of O(m \sqrt{n}).
  • Many algorithmic variants were studied by Dimitri Bertsekas.
  • A wider economic context appears in this book.


New paper on cake-cutting

A paper "On the complexity of envy-free cake cutting" by Xiaotie Deng, Qi Qi, and Amin Saberi was recently uploaded to the arXiv.  This paper seems to finalize the adoption of the "cake-cutting" literature (see the presentation by Ulle Endriss or the survey by Brams and Taylor) into mainstream algorithmic game theory.

Abstract: We study the envy-free cake-cutting problem for d+1 players with d cuts, for both the oracle function model and the polynomial time function model. For the former, we derive a \Theta((1/\epsilon)^{d-1}) matching bound for the query complexity of d+1 player cake cutting with Lipschitz utilities for any d > 1. When the utility functions are given by a polynomial time algorithm, we prove the problem to be PPAD-complete.  For measurable utility functions, we find a fully polynomial-time algorithm for finding an approximate envy-free allocation of a cake among three people using two cuts.


Guest post by Shahar Dobzinski, reporting from EC:

EC took place last week at Stanford. For me EC is not necessarily the place to hear new results; I had already read, or heard talks about, most of the papers in the program that interest me. The best thing about EC is that it is a meeting place for the algorithmic game theory community (with quite a few attendees who do not coauthor a paper in the proceedings).

The EC audience is quite diverse and consists mainly of Theory and AI people, but there are also some economists and attendees from other areas. Though it seems that everybody is quite happy with this diversity, I couldn't avoid noticing that theory people tend to complain that there are too many AI papers, AI guys complain that there are too many theory papers, and both complain that there are too many computer scientists who are doing pure economics. Economists seem to be too polite to complain (at least when I am around). Optimists would interpret this as a sign that the mix of papers is correct.

One disappointing feature of this conference is the horrible meeting room. The room is huge and full of round tables. It is hard to even hear the questions asked by the audience, so the poor session chair has to run with a microphone to the other side of the room whenever there is a question. I have never been to a wedding in the US, but in Israel this room could be a great wedding hall.

So what are the trends in EC? Adwords seems like a hot trend, although my feeling is that it was hotter last year. Many papers model different issues in selling ad auctions, and even more have adwords as their claimed motivation and inspiration. Adwords are an important application of algorithmic game theory, but in some sessions it looked as if this is the only application. On the other hand, relatively few equilibrium computation papers were presented.  Price-of-anarchy type results were not too popular either. Still, approximately 10 percent of the papers in the conference had the phrase "price of" in their title, and a few more have it in the abstract. Coined in 2001 by Papadimitriou, the "price of anarchy" is surely a great, catchy name. Unfortunately, by now it might be overused: a hypothetical list of the different "prices" defined by now would probably be about the same size as Garey and Johnson's list of NP-complete problems. We should consider a definition freeze for at least a year or two.

Two papers won the outstanding paper award. Nicolas Lambert gave a very good talk on his paper with Yoav Shoham. The paper shows how to design payment schemes that motivate experts to truthfully answer multiple-choice questions. The expert is asked, for example, "what is the mean of a random variable X?" and is paid according to the realization of X.  They provide a complete characterization of the possible mechanisms. The paper also got the best student paper award.

The second winning paper was my joint paper with Itai Ashlagi and Ron Lavi. We considered the problem of truthful scheduling of jobs on unrelated machines to minimize the makespan, which was introduced by Nisan and Ronen in their "Algorithmic Mechanism Design" paper. We showed that no anonymous mechanism can provide a more-than-trivial approximation ratio (a mechanism is anonymous if, roughly speaking, the names of the machines do not matter).

EC’10 will take place at Harvard, with  Moshe Tennenholtz and Chris Dellarocas as chairs. Hopefully it will be at least as good as EC’09.


SAGT (Symposium on Algorithmic Game Theory) 2009 will take place in Paphos (Cyprus) on October 18-20.   Here is the list of accepted papers.

