The deadline for submissions to STOC has been extended to Monday, Nov 5 2012 5:00 p.m. EST. The change has been announced on the STOC submission website.
Archive for October, 2012
Yesterday I taught the first of five algorithmic economics lectures in my undergraduate AI course. This lecture just introduced the basic concepts of game theory, focusing on Nash equilibria. I was contemplating various ways of making the lecture more lively, and it occurred to me that I could stand on the shoulders of giants. Indeed, didn’t Russell Crowe already explain Nash’s ideas in A Beautiful Mind, complete with a 1940’s-style male chauvinistic example?
The first and last time I watched the movie was when it was released in 2001. Back then I was an undergrad freshman, working for 20+ hours a week on the programming exercises of Hebrew U’s Intro to CS course, which was taught by some guy called Noam Nisan. I didn’t know anything about game theory, and Crowe’s explanation made a lot of sense at the time.
I easily found the relevant scene on YouTube. In the scene, Nash’s friends are trying to figure out how to seduce a beautiful blonde and her less beautiful friends. Then Nash/Crowe has an epiphany. The hubbub of the seedy Princeton bar is drowned out by inspirational music, as Nash announces:
If we all go for the blonde, we block each other. Not a single one of us is gonna get her. So then we go for her friends, but then they give us the cold shoulder because nobody likes to be second choice. Well, what if no one goes for the blonde? We don’t get in each other’s way, and we don’t insult the other girls. That’s the only way we win. … Adam Smith said the best result comes from everyone in the group doing what’s best for himself, right? … The best result would come from everyone in the group doing what’s best for himself and the group!
But if no one goes for the blonde, wouldn’t a player gain from deviating to the blonde, without making others worse off? This game has Pareto efficient Nash equilibria (one player goes for the blonde, the others go for her friends), but the strategy profile advocated by Crowe is not Pareto efficient, nor an equilibrium.
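The deviation argument is easy to verify by brute force. Here is a minimal sketch in Python, with assumed payoffs that are not in the movie: going for a friend always pays 1, while going for the blonde pays 2 if you are the only one doing so and 0 if you collide with another player. Under these assumptions, "nobody goes for the blonde" fails both tests, while "exactly one player goes for the blonde" is a Pareto efficient equilibrium.

```python
from itertools import product

def payoff(profile, i):
    """Assumed payoffs: 'F' (friend) always pays 1; 'B' (blonde) pays 2
    if player i is alone in choosing it, 0 otherwise."""
    if profile[i] == "F":
        return 1
    return 2 if profile.count("B") == 1 else 0

def is_nash(profile):
    # Nash equilibrium: no player gains by unilaterally switching strategies.
    for i in range(len(profile)):
        for s in ("B", "F"):
            deviation = profile[:i] + (s,) + profile[i + 1:]
            if payoff(deviation, i) > payoff(profile, i):
                return False
    return True

def is_pareto_efficient(profile, profiles):
    # Pareto efficient: no other profile makes someone strictly better off
    # without making anyone worse off.
    u = [payoff(profile, i) for i in range(len(profile))]
    for q in profiles:
        v = [payoff(q, i) for i in range(len(profile))]
        if all(a >= b for a, b in zip(v, u)) and any(a > b for a, b in zip(v, u)):
            return False
    return True

n = 3
profiles = list(product("BF", repeat=n))

all_friends = ("F",) * n            # Crowe's proposal: no one goes for the blonde
print(is_nash(all_friends))         # False: deviating to the blonde pays 2 > 1
print(is_pareto_efficient(all_friends, profiles))  # False: ("B","F","F") dominates it
print(is_nash(("B", "F", "F")))     # True: one player gets the blonde, the rest settle
```

Any payoffs with the same ordering (lone blonde > friend > blocked blonde) give the same verdict, so the conclusion doesn't hinge on the particular numbers chosen here.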
Nash concludes by proclaiming “Adam Smith was wrong!” Maybe, but so was Russell Crowe.
Joan Feigenbaum, the PC chair for STOC 2013, would like to urge all those submitting (deadline: November 2nd) to pay attention to the new submission instructions, which are not what we are used to. In particular, “we strongly suggest that you create your user account on the submission server NOW…” and “Length and formatting requirements will be enforced strictly and literally”.
The academic hiring season is starting again, and as a service to the AGTE/econ-cs community we’d like to provide a place for advertising new research positions in the field. This includes academic positions as well as industrial research ones, permanent and postdoc positions, in all areas related to the border between CS, game theory, and economics (interpreted widely). Please post any such open position announcements as comments to this thread.
A Workshop on The Economics of Web Search and Social Networks will be held in Rome on February 4th 2013 in conjunction with the Sixth ACM WSDM Conference. Submission deadline: Nov 15th.
Here on Turing’s Invisible Hand we make it our mission to address the most profound questions of algorithmic economics. This post fearlessly tackles one of those timeless questions: “Which are the 10 most cited algorithmic economics papers?”
I searched on Google Scholar for the names of researchers who I thought were likely to be highly cited, and checked out their highly cited algorithmic economics papers (see discussion). To be eligible, a paper has to be:
- A research paper. For example, Papadimitriou’s Algorithms, Games, and the Internet is not eligible.
- Published in a CS conference or journal. For example, the Edelman et al. GSP paper is not eligible.
I tabulated eligible papers with more than 500 citations to obtain a ranking. The information was recorded roughly a week ago.
- 1210 citations: Maximizing the spread of influence through a social network (Kempe, Kleinberg, Tardos)
- 1203 citations: Worst-case equilibria (Koutsoupias and Papadimitriou)
- 1150 citations: Algorithmic mechanism design (Nisan and Ronen)
- 1043 citations: How bad is selfish routing? (Roughgarden and Tardos)
- 1035 citations: Algorithm for optimal winner determination in combinatorial auctions (Sandholm)
- 605 citations: The Michigan Internet AuctionBot: A configurable auction server for human and software agents (Wurman, Wellman, Walsh)
- 600 citations: Incentives for sharing in peer-to-peer networks (Golle, Leyton-Brown, Mironov, Lillibridge)
- 599 citations: A market-oriented programming environment and its application to distributed multicommodity flow problems (Wellman)
- 560 citations: Truth revelation in approximately efficient combinatorial auctions (Lehmann, O’Callaghan, Shoham)
- 518 citations: a tie between Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches (Fujishima, Leyton-Brown, Shoham) and Robust incentive techniques for peer-to-peer networks (Feldman, Lai, Stoica, Chuang)
Apart from its general stone-age level of sophistication, the methodology has several specific shortcomings. First, in some cases the journal version of a paper does not have the same title as the conference version. I took this into account in the case of #5, but I may have missed some other instances. Interestingly, the Daskalakis et al. work on the complexity of Nash is conspicuously missing from the list. The SICOMP 2009 version has 134 citations, and the STOC 2006 version has 360 citations, so the combined paper almost makes it. To make things more confusing, there is an ECCC 2005 paper that extends the reduction from four to three players, and has 128 citations. Presumably, though, people cited the ECCC paper together with the STOC paper, until the SICOMP paper came along. In any case, no doubt this work will find its way to the top of the list in a few years.
Second, and more importantly, it is of course unclear what counts as an algorithmic economics paper. I guess the criterion I had in mind is something like: if an author gave a great presentation to the well-rested attendees of an algorithmic economics seminar (so the audience is a mix of computer scientists and microeconomists), nobody would fall asleep. This criterion arguably rules out some important AI data-mining-type papers that could be outside the scope of what would keep the economists awake. Hurray, finally we have a clear-cut definition for the field!
Costis Daskalakis and Jason Hartline are organizing a workshop on Bayesian mechanism design on Oct 20, immediately preceding FOCS 2012. The workshop consists of eight tutorial/research talks by Shuchi Chawla, Nicole Immorlica, Tim Roughgarden, and Eva Tardos, as well as by Costis and Jason themselves.
You can find the schedule and abstracts here.
Guest post by Yair Zick.
COMSOC 2012 in Numbers:
54 papers were submitted, of which 38 were accepted; the review process was handled by 21 committee members, who produced a total of 162 reviews. The workshop featured 4 keynote talks and one tutorial. Germany had the greatest number of publications, with the United States in second place and Israel at a respectable third. Attendance and submissions were slightly lower this year; this may be attributed to the lack of tutorial days and the clash with the start of the semester in the U.S. and Canada. Papers were presented in a single-session format, spanning three days.
The Workshop at a Glance:
Piotr Faliszewski and Felix Brandt organized a great workshop. The schedule was busy, but the long breaks (with plenty of food and coffee provided) allowed for continued focus throughout the day. Another culinary point to note is the provision of meals for those with special dietary needs. The lunches and the social dinner (held in the old Jewish quarter) had vegetarian and Kosher food options, both reportedly palatable. COMSOC really stands out in this respect, as many big conferences have a poor track record of providing for special dietary restrictions. The workshop was held at the AGH University of Science and Technology; the new facilities allowed for a smooth experience (with the exception of a few adversarial laptops refusing to obey their owners).
The Business Meeting:
During the business meeting, it was suggested that future workshops include a poster session as well. A poster session would allow for an increase in the number of accepted papers while largely maintaining the workshop format: avoiding parallel sessions, and avoiding an overly long or packed schedule. It was suggested that papers be assigned to poster sessions according to both quality and publication history, the logic being that a paper that has already appeared in another venue should go to a poster session ahead of a new line of research. If such a poster session does take place in future COMSOCs, the organizers will face the following preference elicitation problem: which should be the poster paper, a lower-merit new paper or a higher-quality published one? Another point argued for was allowing papers from less prominent disciplines to take precedence over those dealing with well-established problems. This would arguably provide more focus on interdisciplinary and innovative work.
Another topic raised at the business meeting was the timing of the workshop. Currently COMSOC is held in even years around September-October; this allows ample time to organize the workshop and avoids competition with other conferences, which are usually held during the summer. However, this poses a problem for researchers based in America, as was indeed evident in the number of accepted papers. This is understandable, as the workshop fell exactly on the first week of the semester in the U.S. and Canada; however, an agreement as to the best time has yet to be reached.
Some suggested holding the workshop in conjunction with events in the non-computational social choice community, which would improve the interaction between the two communities, but would also increase the number of constraints on the workshop's timing and venue. The venue for the next workshop is as yet undecided, though several attendees called for a location outside Europe.
The topics presented this year did not radically differ from those presented in COMSOC 2010; though new models were presented, most papers focused on analysis and extension of existing models. Here are some highlights.
There was a significant focus on judgment and preference aggregation. In his keynote talk, Clemens Puppe discussed paradoxes in various preference aggregation problems. He showed a characterization of such scenarios where paradoxes do not exist, and some ways of avoiding paradoxes while maintaining a rich problem structure. Some of the work on the topic was quite high-level: Umberto Grandi presented a general model for studying paradoxes in voting and judgment aggregation; Dvir Falik presented impossibility (and possibility) results for a general model of binary evaluation aggregation. Some discussed aspects or variations of known problems: Dorothea Baumeister discussed the complexity of bribery and manipulation in judgment aggregation; Ulle Endriss explained what happens when agents need to agree on a common graph structure; Jerome Mengin presented aggregation problems in the multi-issue domain, and Amirali Salehi-Abari talked about the impact of an underlying social network on preference aggregation.
Uncertainty and randomness in elections were also addressed in several papers. Uncertainty was studied with respect to voters’ preferences (Aziz et al.), candidate deletion (Boutilier et al.), the scores assigned by voters (Baumeister et al.), the voting rule used (Erdelyi & Elkind), and the way ties are broken (Obraztsova et al.).
Parameterized complexity was also discussed to an extent. We started with an excellent tutorial by Rolf Niedermeier, who introduced some basic concepts of the theory, such as fixed-parameter tractability (FPT), and gave a high-level overview of the higher complexity classes. He then discussed methodologies for proving membership in FPT or in the higher classes used in the study of parameterized complexity. Although some papers applied parameterized complexity analysis, I believe that there is still plenty of room for applying these techniques in computational social choice.
Some speakers employed interesting and powerful tools from abstract algebra and geometry. The keynote talk by Craig Tovey discussed the geometry of spatial voting: a higher-dimensional analogue of Hotelling’s voting model. Geometric analysis of the problem allows one to identify sets of points with some highly desirable properties (one with a particularly cute name is the yolk). The keynote talk by Michel Le Breton showed some interesting uses of Ehrhart polynomials in the combinatorial analysis of the probability that a player is pivotal in a simple game. Apart from that, he discussed how different assumptions on the underlying probability space lead to different power indices (with a natural focus on the Shapley-Shubik, Banzhaf and Penrose measures). He showed some results on the social welfare achieved by these power indices, and on the combinatorial structure of various voting models. Other papers also employed geometric analysis. Lirong Xia talked about the number of swaps required to manipulate an election, essentially measuring the maximal Hamming distance between an election and a successful manipulation. Dimitrii Shiryaev talked about upper and lower bounds on the volume of the space of elections in which a given candidate is a winner. Schurmann showed how to use Ehrhart polynomials to compute the probability of a Condorcet paradox in an election, and how to mitigate the exponential blowup of the problem using known properties of the underlying polyhedron.
The last keynote talk was by Gerhard Woeginger, who talked about the complexity of core stability in cooperative hedonic games. He discussed at length the complexity of some classes of hedonic games, and presented a host of open problems in the field. One interesting class of games he discussed was that induced by friendship relations. Briefly, these are games in which an agent values a coalition based on an underlying preference order over the other agents.
COMSOC 2012 was held in the beautiful city of Krakow, Poland. The city provides plenty of scenery and cultural experiences. I didn’t get much time to see the historical sites, but the old Jewish quarter (Kazimierz) and the city castle (Wawel) both made for very picturesque walks. The old city contains the impressive market square (Rynek Glowny), boasting a huge antiques market where you could get wartime artifacts as well as old silverware and kitchen items. The non-antiques market had really fun stalls. I was particularly impressed with the fur-selling stalls, where one could get rabbit-fur hats, lambskin coats and whole reindeer pelts. Those seemed less attractive to me for practical reasons (wearing a fur hat in Singapore is a very bad idea) as well as personal taste. However, the stalls selling woodwork, displaying hand-carved toy trains, jewelry boxes and chess sets, offered more tropical-weather-friendly items. Under the generous guidance of two of the locals, Piotr and Piotr, we were introduced to East-European restaurants serving simple and delicious food that certainly exceeded its dismal reputation (gefilte fish, anyone?).
The fifth annual New York Computer Science & Economics day (NYCE 2012) will take place on December 3rd. This year it is focusing on “The Economics of Big Data, Information and Privacy”. Advance registration (free) is now open. Student presentations of posters and short talks are invited.