Mark Wilson writes a guest post from COMSOC:
This was my first COMSOC meeting, and the third in the series so far. The next is planned for 2012, but there are no details yet about the location. The organization was impressive (92 official attendees to look after), and the conference dinner at the Rheinturm, in the revolving restaurant 172m above the ground, was excellent. Congratulations to Joerg Rothe and his team.
There was a conscious effort to get a broad range of invited speakers: political science, game theory, AI, economics, and mathematics were all represented. There were 57 submissions to this non-archival conference, of which 39 were accepted. Much information, including slides of the talks, is on the conference website.
It was noted at the business meeting that the schedule was rather crowded and more free time would be good in future. The best solution that I can see is some reduction in the number of talks (maybe accept only about 50%). Also, given that in some sessions much background material is common to more than one talk, perhaps the speakers could collaborate to save time. Some of the talks described mechanisms that could be used for aspects of conference organization, especially the decision of which papers to accept. It would be nice to see some self-reference of this sort for future meetings.
I will give a very biased summary of highlights in the interests of space. Some of the key themes that I noticed: parameterized complexity; allocation (including fair division, matching/stable marriage, and cake cutting); experimental results and explicit computation (such as integer and linear programming, random generation, local search); social networks; and manipulation/bribery/control/cloning/partial winners, etc.
The LOGICC tutorial day was on Monday. A summary of the invited tutorials:
Vince Conitzer (Duke): a clear overview of (some of) the computational social choice research area. Useful for students new to the area.
Agnieszka Rusinowska (Paris-Sorbonne): a survey of the idea of influence in social networks. Too long and detailed for my taste, but the slides, with a huge number of references, will be useful.
Nicholas Tideman (Virginia Tech): after long experience, claims that spatial models are the best for modelling real voter behaviour. Discussed difficulties of obtaining real data and had plenty of computational problems to offer the audience, often involving Gaussian quadrature. Challenged us to generate fake elections that he could not distinguish from real ones.
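For concreteness, here is roughly what a generator of “fake” elections under a spatial model might look like — my own toy sketch, not anything from the tutorial, with all distributional choices (standard Gaussians in two dimensions, Euclidean distance) picked arbitrarily:

```python
import math
import random

def spatial_election(n_voters, n_candidates, dim=2, seed=None):
    """Generate a 'fake' election from a simple spatial model: voters and
    candidates are points drawn from a standard Gaussian in `dim` dimensions,
    and each voter ranks the candidates by increasing Euclidean distance
    from her own ideal point."""
    rng = random.Random(seed)

    def point():
        return [rng.gauss(0.0, 1.0) for _ in range(dim)]

    voters = [point() for _ in range(n_voters)]
    candidates = [point() for _ in range(n_candidates)]

    def dist(v, c):
        return math.sqrt(sum((vi - ci) ** 2 for vi, ci in zip(v, c)))

    # Each ballot is a list of candidate indices, most preferred first.
    return [sorted(range(n_candidates), key=lambda j: dist(v, candidates[j]))
            for v in voters]

if __name__ == "__main__":
    for ballot in spatial_election(n_voters=5, n_candidates=4, seed=42):
        print(ballot)
```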
Toby Walsh (UNSW, NICTA): presented experimental results about manipulation of voting rules using methodology similar to studying SAT. It seems that many NP-hard manipulation problems exhibit smooth phase transitions as the size of the manipulating coalition increases. This calls for an analytic proof.
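To make the methodology concrete, here is a toy version of this kind of experiment — my own sketch, not Walsh's setup: profiles drawn under the impartial culture assumption, Borda as the rule, and brute force over the coalition's ballots to decide whether a fixed candidate can be made a (co-)winner. The instances here are far too small to say anything about the hard regime; the point is only the shape of the experiment (fraction of manipulable instances as a function of coalition size).

```python
import itertools
import random

def borda_scores(profile, m):
    """profile: list of ballots, each a tuple of candidate indices, best first."""
    scores = [0] * m
    for ballot in profile:
        for pos, c in enumerate(ballot):
            scores[c] += m - 1 - pos
    return scores

def coalition_can_make_win(honest, k, p, m):
    """Brute force over all joint ballots of k manipulators: can they make
    candidate p a Borda co-winner given the honest part of the profile?"""
    for ballots in itertools.product(itertools.permutations(range(m)), repeat=k):
        scores = borda_scores(honest + list(ballots), m)
        if scores[p] == max(scores):   # ties broken in p's favour
            return True
    return False

def experiment(n_honest=8, m=3, max_k=4, trials=200, seed=0):
    """Fraction of random (impartial-culture) profiles that a coalition of
    size k can manipulate in favour of candidate 0, for k = 1..max_k."""
    rng = random.Random(seed)
    for k in range(1, max_k + 1):
        manipulable = 0
        for _ in range(trials):
            honest = [tuple(rng.sample(range(m), m)) for _ in range(n_honest)]
            if coalition_can_make_win(honest, k, p=0, m=m):
                manipulable += 1
        print(f"coalition size {k}: {manipulable / trials:.2f} manipulable")

if __name__ == "__main__":
    experiment()
```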
The conference proper started on Tuesday. The invited talks:
Gabrielle Demange (Paris School of Economics): has a new ranking method (“generalized handicap”) which she has characterized axiomatically. Ranking methods focus the attention of the rankers, and there is a dynamic process which converges in some cases. This reminded me of Noam’s very early post on the attention economy. She explicitly did not treat strategic behaviour.
Matthew Jackson (Stanford): in experiments, if you offer people $100 now or $200 in 2 years, most take the $100. But if you offer $100 in 6 years or $200 in 8 years, they take the $200. This is inconsistent with standard rational models. The main idea is that perhaps each person has several “personalities”, some more patient than others. He showed that in that case this kind of time inconsistency is logically inevitable under very weak assumptions.
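The reversal is easy to see numerically. Exponential discounting cannot produce it, since the comparison between $100 at time t and $200 at time t+2 then depends only on the two-year delay and not on t; a present-biased (quasi-hyperbolic, “beta-delta”) discounter does flip. This is just the standard textbook illustration, not Jackson's multiple-selves model, and the parameter values are arbitrary:

```python
# Quasi-hyperbolic ("beta-delta") discounting: the present value of $x received
# t years from now is x if t == 0, and beta * delta**t * x otherwise.
beta, delta = 0.5, 0.95   # arbitrary illustrative parameters

def value(x, t):
    return x if t == 0 else beta * delta**t * x

print(value(100, 0), value(200, 2))   # 100 vs ~90.2 -> take the $100 now
print(value(100, 6), value(200, 8))   # ~36.8 vs ~66.3 -> take the $200 later
```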
Bettina Klaus (Lausanne): discussed the allocation mechanisms based on deferred acceptance (scarce heterogeneous indivisible goods with no money transfers) used for matching medical students to hospital residencies, etc., originating in the stable marriage theorem of Gale and Shapley. She and a coauthor have axiomatized these methods. She very clearly showed the link between theory and practice and made the area seem very attractive to study.
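For readers new to the area, here is a minimal sketch of the textbook proposer-proposing deferred acceptance (Gale-Shapley) procedure in its one-to-one “marriage” form, assuming complete preference lists; the residency-matching mechanisms from the talk are many-to-one and considerably more elaborate:

```python
def deferred_acceptance(proposer_prefs, responder_prefs):
    """Proposer-proposing deferred acceptance (Gale-Shapley). Preferences are
    dicts mapping each agent to a complete list of the other side, most
    preferred first. Returns a stable matching as {proposer: responder}."""
    # rank[r][p] = position of proposer p in responder r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    free = list(proposer_prefs)                    # proposers without a partner
    next_choice = {p: 0 for p in proposer_prefs}   # next responder to propose to
    engaged = {}                                   # responder -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                         # r tentatively accepts p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                # r trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                         # r rejects p; p proposes again

    return {p: r for r, p in engaged.items()}

if __name__ == "__main__":
    students = {"s1": ["h1", "h2"], "s2": ["h1", "h2"]}
    hospitals = {"h1": ["s2", "s1"], "h2": ["s1", "s2"]}
    print(deferred_acceptance(students, hospitals))   # {'s2': 'h1', 's1': 'h2'}
```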
Herve Moulin (Rice): the problem of impartial peer evaluation – design a mechanism that allows, for example, academic researchers to rank each other (or simply to decide who wins a prize) in a way that is strategyproof. This requires some restrictions (for example, not allowing a voter to rank herself) so as not to fall foul of the Gibbard-Satterthwaite impossibility result. He has solved this problem when there are at least 4 agents. Plenty of open problems.
Hannu Nurmi (Turku): despite many academic results pointing out flaws in existing voting rules, very little reform has occurred in practice. Newly proposed rules may require a different model from the standard ordinal preferences used in most of social choice; voting rules are used for many purposes, and we may need to specialize the analysis to different cases. Long discussion of various paradoxes. He asked for computational help in studying the structure and probability of profiles leading to various paradoxes.
Among the contributed talks, almost all of which were interesting and well presented, my favourites were:
Tyler Lu (Toronto): a model that interpolates between the extremes of social choice and fully personalized recommendations. Apparently this is isomorphic to the theory of fully proportional representation. Hardness results, greedy algorithms, approximation algorithms.
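If I understood correctly, the objective is of the Chamberlin-Courant flavour: choose a small slate of alternatives so that every voter is well represented by her favourite member of the slate. Here is my own illustrative sketch of the greedy idea (not their algorithm); since the objective below is monotone and submodular, greedy selection gives the usual (1 − 1/e) approximation guarantee.

```python
def greedy_slate(profile, k):
    """Greedily choose a k-element slate under a Chamberlin-Courant-style
    objective: each voter's satisfaction is the Borda score, in her own
    ranking, of her favourite alternative on the slate, and we repeatedly add
    the alternative that increases total satisfaction the most."""
    m = len(profile[0])
    borda = [{c: m - 1 - pos for pos, c in enumerate(r)} for r in profile]

    def total(slate):
        return sum(max(b[c] for c in slate) for b in borda)

    slate, candidates = [], set(profile[0])
    for _ in range(k):
        best = max(candidates - set(slate), key=lambda c: total(slate + [c]))
        slate.append(best)
    return slate

if __name__ == "__main__":
    profile = [["a", "b", "c", "d"], ["a", "c", "b", "d"], ["a", "d", "b", "c"],
               ["d", "c", "b", "a"], ["d", "b", "c", "a"]]
    print(greedy_slate(profile, 2))   # ['a', 'd']: one "representative" per camp
```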
Ariel Procaccia (Harvard): cake cutting is awesome! Computer scientists have a lot to contribute, so please join in! Presented polynomial time deterministic and randomized algorithms for cake cutting that give envy-free and proportional allocations, and induce truthful behaviour.
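As a small taste of the area — my own toy sketch, not one of the algorithms from the talk — here is the classical two-agent cut-and-choose protocol, with the cake as [0,1] and valuations given as piecewise-constant densities. Agent 1 cuts at a point she considers a 50/50 split; agent 2 takes whichever piece he values more. The outcome is proportional and envy-free by construction (though, unlike the mechanisms in the talk, not truthful).

```python
def value(weights, a, b):
    """Value of the interval [a, b] of a [0, 1] cake under a piecewise-constant
    density given by equal-width segment weights (normalised to sum to 1)."""
    n, total = len(weights), sum(weights)
    v = 0.0
    for i, w in enumerate(weights):
        lo, hi = i / n, (i + 1) / n
        overlap = max(0.0, min(b, hi) - max(a, lo))
        v += (w / total) * overlap * n
    return v

def halving_point(weights):
    """A point x where the agent values [0, x] at 1/2 (by binary search)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if value(weights, 0.0, mid) < 0.5 else (lo, mid)
    return (lo + hi) / 2

def cut_and_choose(w1, w2):
    """Agent 1 cuts at her halving point; agent 2 picks the piece he prefers.
    Each agent values her own piece at >= 1/2, so the division is proportional
    and envy-free."""
    x = halving_point(w1)
    if value(w2, 0.0, x) >= 0.5:
        return {"agent2": (0.0, x), "agent1": (x, 1.0)}
    return {"agent1": (0.0, x), "agent2": (x, 1.0)}

if __name__ == "__main__":
    w1 = [1, 1, 1, 1]   # agent 1 values the cake uniformly
    w2 = [0, 1, 3, 4]   # agent 2 mostly wants the right-hand side
    print(cut_and_choose(w1, w2))   # agent 1 cuts at ~0.5; agent 2 takes the right piece
```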
Piotr Faliszewski (AGH Krakow): “distance rationalizability” provides a framework for voting rules that allows for more general results, based on the notion of distance to a consensus. Since the distance and the notion of consensus can be chosen almost arbitrarily, the basic idea on its own is too general: every rule is distance rationalizable. Putting natural conditions on the distances allows for more interesting results, including some on the complexity of winner determination.
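To make the “distance to a consensus” idea concrete, the standard example is the Kemeny rule: choose the ranking minimizing the total swap (Kendall tau) distance to the voters' ballots and elect its top candidate. A brute-force sketch, fine for a handful of candidates (winner determination is hard in general):

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs on which two rankings disagree (swap distance)."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny_winner(profile):
    """Brute force over all rankings: the Kemeny consensus is the ranking
    closest in total swap distance to the profile; its top candidate wins.
    Exponential in the number of candidates."""
    best = min(permutations(profile[0]),
               key=lambda r: sum(kendall_tau(r, ballot) for ballot in profile))
    return best[0], best

if __name__ == "__main__":
    profile = [("a", "b", "c"), ("a", "c", "b"), ("b", "c", "a")]
    print(kemeny_winner(profile))   # ('a', ('a', 'b', 'c'))
```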
Felix Fischer (Harvard): the problem of finding a strategyproof mechanism for approval voting (choosing a set of winners) when the voters and candidates coincide. The deterministic results are negative, but a randomized 4-approximation algorithm exists. Possible practical uses in deciding which papers a conference accepts, or in social networks?
Toby Walsh called for a standard library of benchmark voting data for testing algorithms, along the lines of what is available in many other fields (call it VoteLib for now). I second this enthusiastically.