PS: Since I accepted the intimidating invitation below, I’ve been thinking of what my first post should be about. Now I have to admit I took the easy way out. Thanks for letting me participate. I’ll try to think of something more original next time.
Looking in my re
Guest post by Reshef Meir
Once upon a time (or so I’m told), the important publication venues were journals. Sure, there were conferences, but their main purpose was to present new findings to the community and to trigger discussions. A conference paper did not really “count”, and results were only considered valid after being published in a respectable journal, verified by its reviewers. Indeed, this is still the situation in some fields.
I have no intention of reviving the discussion on the pros and cons of journals, but conference proceedings in computer science, and in AGT in particular, are nowadays treated as publications for every purpose. They are considered reliable, are highly cited, and their theorems are used as building blocks for newer results. We also want institutions and promotion committees to consider conference papers when evaluating candidates.
The next sentence should be “…this progress was made possible by great improvements in the review process of conferences.” But was it?
Almost half of the conference submissions I have personally reviewed contained severe technical errors, many of them submissions to EC. All of the EC submissions, I should say, were otherwise worthy, and would have made at least a reasonable case for acceptance if not for the technical errors.
Somewhat surprisingly, I discovered that there is no consensus about rejecting papers once a technical error is spotted. Often authors reply with a corrected version (sometimes weakening the propositions), or a proof sketch, or a promise to fix the proof. For some committee members this is a satisfactory answer; others assert that the paper should be refereed based on the originally submitted version, and that technical correctness is a threshold condition for acceptance.
I am not arguing that technical correctness should be the only or even the primary criterion for acceptance, but it is one criterion with which I think there is currently a problem. To initiate a discussion, here are the main arguments as I perceive them: first for acceptance despite errors (1-5), then for rejection (a-f).
1) It is not the reviewers’ job but the authors’ responsibility to verify the correctness of their results (an old debate, see e.g. here, p.3).
2) Proofs are sometimes replaced with sketches or even omitted, so it is unreasonable (and perhaps impossible) to verify correctness anyway.
3) Even in journals, errors are no big deal: if a result is important, the error will eventually surface.
4) We should trust authors, as no one wants an embarrassing error under their name. 
5) There is an opportunity cost incurred on the community for delaying the publication of interesting results.
a) If errors are found, a corrected version can be submitted either to a different conference or to the next meeting of the same one. Authors should not be given credit for a correction made after acceptance, since we know the corrected version cannot be properly reviewed.
b) Accepting revised versions from some authors may be unfair toward others. Also, why not submit a corrected version with better motivation, references, or added results?
c) If an error is found, this is an indication that there might be other hidden errors, and that authors should better prepare their paper for publication.
d) There are many non/lightly refereed venues (Arxiv, workshops) for propagating results quickly. It is hard to claim that results in CS do not propagate fast enough.
e) While authors doubtless prefer to publish error-free papers, other tasks and priorities may come before verification. Papers usually do not undergo major changes between acceptance and the camera-ready version; in my experience as an author, though, papers often improve significantly between submissions.
f) Low tolerance for errors will incentivize authors to invest effort in finding their own mistakes prior to submission.
All in all, I agree that the best verification is indeed done by the authors themselves, and that it is the authors’ responsibility to publish error-free papers. However, I do think there is a problem. I will fully admit that my own papers are not free of errors, and unfortunately some of those errors were found only after publication.
One possible solution is a revision of the reviewing process that puts more emphasis on verifying correctness, taking another step in the hybridization of journals and conferences: for example, allotting more time for reviews and allowing authors to submit revisions in particular cases. Since such changes are costly, a simpler solution is to make it clear that non-trivial errors will result in rejection unless there are unusual circumstances (such as a ground-breaking result that can easily be fixed). The point of this strict line is not to be an adversarial reviewer, but to ensure that authors have not just the capability and the responsibility, but also the incentive, to properly verify their own work.
So, should the review process change? Should an article with errors be accepted or rejected?
Aggregating 20 submissions over the period 2009-2013, from AAMAS, SAGT, WINE, AAAI, IJCAI, and ACM-EC. Clearly this is not a statistically significant sample, but it may still indicate a problem. Of course, errors are common even in published journal papers; I have seen estimates that between 10% and 30% of published papers contain non-trivial errors. Unfortunately I could not find a trusted source, but see e.g. here and here.
By “severe technical error” I mean either that a proposition is wrong, or that large chunks of the proof should be rewritten.
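As a rough sanity check on how little a sample of 20 can tell us, here is a minimal sketch in Python; it assumes, hypothetically, 9 erroneous submissions out of the 20, since the post says only “almost half”:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical count: 9 erroneous submissions out of 20 reviewed.
low, high = wilson_interval(9, 20)
print(f"observed rate 9/20 = 45%, 95% CI roughly [{low:.0%}, {high:.0%}]")
```

With these assumed numbers the interval spans roughly 26% to 66%: far too wide to pin down the true error rate, which is consistent with the caveat above that the sample indicates a problem without measuring it.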
The deadline for ICS 2011 is near: August 2nd. The conference itself will take place in Beijing on January 7-9, 2011. From the website:
Innovations in Computer Science (ICS) is a new conference in theoretical computer science (TCS), broadly construed. ICS seeks to promote research that carries a strong conceptual message (e.g., introducing a new concept or model, opening a new line of inquiry within traditional or cross-disciplinary areas, or introducing new techniques or new applications of known techniques). ICS welcomes all submissions, whether aligned with current TCS research directions or deviating from them.
The inaugural year of the conference has drawn much discussion (here, here, here, here, here, and my own here), has had many cs/econ-related papers, and seems to have been quite interesting. This year too, the conference offers financial support:
ITCS will provide full support for one author of each accepted paper, including air ticket (economy airfare at the minimum of the amount), hotel lodging (up to 4 nights), and registration fee.
The Workshop on Internet & Network Economics (WINE) is an interdisciplinary forum for the exchange of ideas and results arising in the algorithmic and economic analysis of the Internet and the WWW. WINE 2010 is co-located with the 7th Workshop on Algorithms and Models for the Web Graph (WAW 2010), from December 13 to 17 at Stanford University. The submission deadline for regular and short papers is July 30th, 2010. For more information, see http://wine2010.stanford.edu .
I am particularly impressed with the graphics of the program committee web page. (Well, the program committee composition is impressive as well).
- The Geometry of Manipulation – a Quantitative Proof of the Gibbard-Satterthwaite Theorem [arXiv]
Authors: Marcus Isaksson, Guy Kindler and Elchanan Mossel
- Frugal Mechanism Design via Spectral Techniques [arXiv]
Authors: Ning Chen, Edith Elkind, Nick Gravin and Fedor Petrov
- Sequential Rationality in Cryptographic Protocols
Authors: Ronen Gradwohl, Noam Livne and Alon Rosen
- Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts [arXiv]
Authors: David Kempe, Mahyar Salek and Cristopher Moore
- Black-Box Randomized Reductions in Algorithmic Mechanism Design
Authors: Shaddin Dughmi and Tim Roughgarden
- Pure and Bayes-Nash Price of Anarchy for Generalized Second Price Auction [pdf]
Authors: Renato Paes Leme and Eva Tardos
- Budget Feasible Mechanisms [arXiv]
Author: Yaron Singer
The theoretical computer science community is off to another round of discussions regarding its conference system. Lance Fortnow, in his capacity as SIGACT chair, put forward a proposal to reform STOC, a drastic change that may alter the whole conferences-as-top-publications culture of the field (as Lance has previously suggested doing). He set up a blog with the proposal itself as well as many responses — mostly negative ones. Other recent related posts are here, here, here, and here (added 21/6: and here and here).
While I like the CS conference system, I still think that a change is needed in the pan-TCS conferences as the field is getting bigger and deeper. Like most others who commented on the proposal, I doubt that the suggested new format has enough value to attract attendees. As many suggested, I would start by co-locating with other related conferences, as was done this year (with CCC and EC).
Guest post from Michael Schapira
I just got back from the 11th Conference on Electronic Commerce (EC 10), which is the premier conference in algorithmic game theory. EC 10 was held in Boston from June 9th to 11th (co-located with STOC and CCC) and was chaired by David Parkes (general chair) and by Chris Dellarocas and Moshe Tennenholtz (program co-chairs). The conference was great. The highlights included, alongside interesting technical talks, several social events and, most notably, a New England lobster dinner boat cruise (!). In addition, Ron Kohavi, who manages the experimentation platform team at Microsoft, gave a very entertaining talk, full of sociological observations and historical anecdotes, about the importance of controlled experiments for evaluating industrial innovations.
45 papers were accepted this year out of 136 submissions, an all-time-high acceptance rate. This has generated some discussion (see here). I, for one, think that the PC has definitely made the right choice in raising the acceptance rate. My impression is that the higher acceptance rate did not come at the expense of lower quality, and has greatly contributed to the broadening of the scope of EC (a long-time agenda). Indeed, this year’s conference featured interesting papers about topics that were, at best, peripheral at previous ECs (see below). EC is still primarily a CS/econ theory conference, as reflected by the high number of pure theory papers (35/45), but the number of empirical and experimental papers seems to be increasing from year to year (5 accepted papers in each category).
So, what can we learn from this EC about the current trends in algorithmic game theory?
More and more applications. While adword auctions seem to still be the undisputed queen of killer applications of algorithmic game theory, other applications like prediction markets, recommender systems and social networks are slowly but surely taking their place at her side.
Bayesian algorithmic mechanism design. Work in computer science on algorithmic mechanism design, ever since this research agenda was articulated by Nisan and Ronen a decade ago, has mainly focused on worst-case, or “prior-free”, environments, in which the strategic agents have no knowledge whatsoever about each other’s private information. This is in sharp contrast to the standard assumption in economics that agents have (common) probabilistic “beliefs” about each other’s private information. The attempt to reconcile these two conflicting worldviews motivates much recent work by computer scientists (see recent and not-so-recent work by Jason Hartline and others). Examples from this EC include work on mechanisms for single-parameter domains that do not know the underlying distribution but perform (almost) as well nonetheless, a study of risk-neutral vs. risk-averse bidders, and a comparison between the power of deterministic and randomized mechanisms.
Computational social choice. Economic voting theory dates back to Arrow’s fundamental impossibility result from 1951. In recent years, voting theory has also been examined through the computational lens. However, for largely historical and sociological reasons, this research has taken place mainly within the Artificial Intelligence community. EC 10 seems like the beginning of the end of this separation between the theoretical CS and AI communities (in this respect, of course…). Vince Conitzer and Ariel Procaccia, two active researchers in this field, gave a tutorial on computational voting theory, and an entire EC session was devoted to the subject. In addition, some of the new work on approximate mechanism design without money can be regarded as an interface between algorithmic mechanism design and computational social choice.
The following are some of the papers I especially liked:
- Truthful Mechanisms With Implicit Payment Computation, Moshe Babaioff, Bobby Kleinberg and Alex Slivkins. This paper, which is the recipient of the best paper award, addresses the following interesting question. Truthful mechanisms output an outcome (allocation of items, etc.) and payments. What is the additional computational/communication cost of computing truth-inducing payments? Recently, it was shown here that for deterministic truthful mechanisms this overhead can be relatively big. This paper shows that if one is willing to settle for the weaker notion of truthfulness in expectation (and the possibility of rebate), then this “overhead of truthfulness” goes down to zero.
- Market Design for a P2P Backup System, Sven Seuken, Denis Charles, Max Chickering and Sidd Puri. Motivated by the deficiencies of existing information storage methods (CDs, data centers), this work presents a market-based design for a P2P backup system. What I like most about this paper is that, aside from analyzing the theoretical aspects of the proposed design (equilibrium analysis, etc.), much thought has been given to practical implementation details (user interface, deployability), and the proposal is also being evaluated empirically to establish its merits. Such work is still quite rare at EC.
- Matching In Networks with Bilateral Contracts, John W. Hatfield and Scott D. Kominers. This paper presents a clean and general model that encompasses several (all?) classical models in matching theory and establishes results that generalize existing results in a variety of well-studied contexts.