There are two main reasons why researchers publish papers in conferences and journals: a “real” reason and a “strategic” reason. The strategic reason is an artifact of the current academic situation, where researchers are judged according to their publication list when considered for positions, promotions, grants, etc. Given this state of affairs, we have a strong personal incentive to “publish” whether or not our research is worthwhile and whether or not anyone will ever read it. The “real” reason for publication is dissemination: letting other researchers learn about our work, so they can use it, continue it, be influenced by it, etc. This is what the whole “scientific/academic establishment” should aim to promote. Inherent here is the competition for the attention of other researchers, who have to decide to spend time and effort reading your paper rather than the countless others vying for their attention.
In my view the main real service that journals and conferences provide in this day and age is exactly the arbitration of this competition for attention: the editors and reviewers of journals and the program committee of conferences look at lots of papers and suggest a few of them for me to look at. When chosen right, this is indispensable: there is no way that I could spot by myself the new important paper of a young graduate student among the hundreds of non-important ones out there. The main reason why I prefer conferences to journals in CS is that the former seem to be doing a much better job (although still far from perfect) of this identification of new important stuff.
The Internet has revolutionized the mechanics of dissemination of scientific work. I can still remember when scientific dissemination worked by putting reprints of our papers in envelopes and sending them in (real) mail. This was replaced by photocopying from conference proceedings, then by sending email attachments, and today we just “put it on the web”. The standard that seems to be emerging now is to put it on the arXiv. In comparison, the “social-scientific” structure surrounding “publication” has hardly changed at all, and putting your paper on the arXiv is not “counted as a publication”, provides no signal of your paper’s quality or correctness, and usually does not suffice for getting much attention for your work. I think that the main ingredient missing from having “putting your paper on the web” be the main form of publication is a surrounding mechanism that can provide a signal of quality and that will help readers focus their attention on the more important work. How exactly this can work still remains to be seen, but I would like to run an experiment in this direction on this blog.
Experiment: Recommend interesting AGT/E papers on the Web
I am asking readers of this blog to put forward — as a comment to this blog post — recommendations for interesting papers in the field of Algorithmic Game Theory/Economics. Here are the rules of this experiment:
- Eligible recommenders: Anyone from the “AGT/E research community”. I will take this in a wide sense: anyone who has published a paper related to AGT in a recognized scientific forum (conference or journal in CS, AI, GT, economics, …)
- Eligible papers: Papers must be (1) Available openly on the web. (2) Not already published in a journal or conference with proceedings. It is OK if they were submitted to or accepted by a conference or journal, as long as they have not yet appeared. (3) Related to Algorithmic Game Theory/Economics, taken in a wide sense.
- What to include: (1) The name of the recommender and a link to their academic home page; no anonymous recommendations. (2) A link to the paper. (3) A short explanation of what the paper is about and why you think it is interesting. There is no implicit assumption that the recommender has refereed the paper in any sense.
- Conflicts: The recommender should follow the usual conflict-of-interest rules of program committees: do not recommend (1) your own papers, (2) papers by someone in your own department, (3) papers by a frequent collaborator, (4) papers by a family member/lover, etc.
[Update: following a suggestion, to start things off, I entered my own recommendation in the format that I was thinking of — as a comment.]
Noam, you should go first.
You are right. Done.
One paper that I like that has not been published yet is Strategy-proof Allocation of Multiple Items between Two Agents without Payments or Priors by Mingyu Guo and Vincent Conitzer to appear in AAMAS-10.
The paper considers the problem of allocating divisible goods between two competing agents in a setting with “no money”. The setting has each bidder give a “relative value” to each good (where the value obtained is linear in the fraction of the good allocated), and the goal is to approximate, in an incentive compatible way, the optimal allocation. The paper does not completely settle the question, but it does give both lower bounds and upper bounds, and studies in detail a sub-family of incentive-compatible mechanisms.
The paper is quite elegant, I think (ignoring the slightly messy calculations), but what I really like about it is how it makes non-trivial progress on several significant but somewhat fuzzy high-level agendas: approximate mechanism design without money, as well as an agenda trying to take a mechanism-design view of markets (the model is just a Fisher market). I also see ample room for further interesting work: not just fully solving their original problem (including making their key “computer-assisted” lower bound explicit), but also handling the cases of more players and goods (perhaps placing some conditions on the underlying market structure as needed), and handling fairness (since the model seems not very far from “piecewise-linear” cake-cutting).
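For concreteness, here is a small sketch (my own illustration, not taken from the paper) of the first-best benchmark in this model: with linear utilities, the welfare-maximizing allocation simply gives each divisible good entirely to the agent who reports the higher relative value for it. This benchmark is of course not incentive compatible, which is exactly what makes the paper’s question interesting.

```python
def optimal_allocation(v1, v2):
    """First-best (non-strategyproof) benchmark for the two-agent model.

    v1, v2: each agent's reported relative values for the goods.
    Each good goes entirely to the agent reporting the higher value;
    ties are split evenly. Returns agent 1's fraction of each good
    and the total reported welfare of the allocation.
    """
    assert len(v1) == len(v2)
    fractions = []
    welfare = 0.0
    for a, b in zip(v1, v2):
        if a > b:
            x = 1.0      # agent 1 gets the whole good
        elif a < b:
            x = 0.0      # agent 2 gets the whole good
        else:
            x = 0.5      # tie: split evenly
        fractions.append(x)
        welfare += x * a + (1.0 - x) * b
    return fractions, welfare

# Example: two goods; agent 1 reports (0.8, 0.2), agent 2 reports (0.3, 0.7).
fractions, welfare = optimal_allocation([0.8, 0.2], [0.3, 0.7])
# fractions == [1.0, 0.0]; welfare == 0.8 + 0.7 == 1.5
```

The difficulty the paper addresses is that an agent can inflate their reported values to grab goods under this rule, so truthful mechanisms must settle for approximating this benchmark.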
Noam Nisan
This is the most interesting, insightful and useful blog entry I have read in years. I couldn’t agree more.
Blog comments are obviously a poor excuse for a real recommendation system, but I can see why you suggested this as a way to get the ball rolling.
In an ideal world we’d have a recommendation system as nice as Netflix or Amazon.com, but the trouble is that designing and implementing it would cost time and money. Professors don’t have time for this sort of thing and the money would have to come from a funding agency, and they’re not used to funding this sort of infrastructure. (Even though in the long run they might end up saving money by decreasing the amount of conference travel that researchers are currently forced to do.)
I just wanted to point out the website
http://www.scirate.com
which allows one to vote for papers on the arXiv. Right now, most users are physicists, but other communities are most welcome.
This website has been running for more than two years now. I don’t know how many people are using it, but the “best” preprints each day can get a dozen or so votes. This is rather effective when you haven’t been able to check the arXiv for a while and want to find out what nice papers you missed.
and usually does not suffice for getting much attention for your work
I’m not so convinced of this. Usually at most 3 manuscripts appear on cs.CC/cs.DS each evening. Enough people glance at the titles and abstracts frequently enough that if something interesting pops up, the news will get around by word of mouth.
I am also unconvinced by the reasoning that the value of conferences is mainly to help with the focus-of-attention problem. We have a proliferation of conferences these days. I find it easier to learn about relevant papers through arXiv feeds rather than figuring out the accepted papers of various conferences; of course it is useful to look at the major theory conferences’ accepted papers, but one can miss interesting papers at smaller venues which may be quite relevant to your own area. The number of new papers per day/week on the arXiv in each relevant area is not too large so far, and it is easy enough to glance at the abstract to know whether it is interesting/relevant.
I think there are two factors that make the results of your experiment different from (not necessarily better or worse than) the signal you get from conferences. One is the anonymity of the reviewer/recommender. The other is that being given a paper and asked to review it is different from being asked to name a few good papers. Many papers would make it through the reviewing-based process but not the recommendation-based one, since they have good but forgettable results.
One problem is the period during which a paper in, say, AGT is on the web but not yet published. In econ, papers can be on the web for years before they are published, and word of mouth spreads. In CS, many people still don’t even put their papers online before they are accepted.
One thing I would like to see is people talking about their papers before they are submitted, as in some other social-science fields. This would improve the papers (by getting comments) and spread knowledge. We are not competing to cure cancer, after all….
I would actually be glad to get recommendations for papers that do not satisfy the requirement “(2) Not already published in a journal or conference with proceedings. It is OK if they were submitted to or accepted by a conference or journal, as long as they have not yet appeared.”, which is very restrictive. Finding the right papers to read out of the recently published ones is also challenging.
Noam
The general idea of your system is very good, but I see substantial difficulties with the current proposal.
The comment got too long, so I put some of it in a blog post https://www.cs.auckland.ac.nz/~mcw/blog/2010/04/18/recommending-research-articles/. Other issues not mentioned there:
I think you overstate the “attention” argument. We don’t have time to read a large number of papers in detail, but arXiv mailings allow quick checking of abstracts. I don’t find it too onerous to keep up with cs.GT, math.CO, and cs.DS. Of course, if more people used the arXiv (many leaders of the field do not, which doesn’t help), it may become more onerous.
In my experience, plenty of researchers don’t cite some relevant work, perhaps because they participate mostly in a fairly insular conference system where most people are insiders and the pace of submission is so rapid that it almost suffices to cite papers that have appeared in these conferences.
As a “self-proclaimed outsider” I am quite sensitive to the idea that a self-appointed small group can decide on the “direction of a research field” in the way I think you are suggesting.
It is easy to spend a few hours looking at Scopus, Google Scholar, etc. to see whether you are overlooking previous and related work. I try to be as complete as I can when writing a bibliography, but I have a feeling that not everyone is as thorough.
Another problem is that if double blind conferences are used, submissions to them tend not to be available on the web before acceptance. This further reduces the pool of papers.
I am not sure why you are excluding published papers from the eligible ones.
Comments (usually neutral in tone) on published papers occur in Math Reviews for example (although in some areas they seem to have a chronic lack of reviewers, perhaps because of the lack of incentives to be one). A real “noteworthiness” or attention-allocation recommender system would be more informative, but at least MR-type info is a start. I don’t know of a similar thing in CS/AGT(/E).
The best judge of what is good and what is not is the market. After all, good ideas and results are those that people would pay for. For instance, we know that Google was a good idea by looking at its stock-market value. In the worst case, if anyone finds a lost good idea disseminated out there, he might keep it for his own benefit… So why not use the market to highlight the best disseminated ideas? Maybe because current intellectual-property rights are not suitable for this?
Strangely enough, despite the wide interest in this post in terms of comments, no one has actually made a recommendation (except Noam). When I tried to think of papers to recommend I realized many of them were already featured in previous blog posts (“New paper: yada yada”), so it seemed like it would be weird to recommend them “again”.
Yes, it’s a shame that despite a significant number of comments, no one actually made a recommendation. I wonder why this is….
Why is it shameful? Maybe the analysis in the post is not fully right, or maybe the conditions required of recommendations are too hard, or maybe the experiment needs some changes.
I would strongly encourage such experiments, since I feel that the internet has the potential to revolutionize how we think and collaborate on scientific matters. So far we have revolutionized only the speed of distribution of scientific knowledge, but the mechanisms are about the same. Instead of submitting papers by post, we submit them over the internet: faster, yes, but otherwise the same. The process of evaluating submitted papers, which gates eventual distribution, is still the same. As this blog suggests, there might be a way to have community involvement in these processes besides the three-referee system we currently have.
I hope you can re-run the experiment, maybe with fewer conditions.
Of course it’s not “shameful” that the experiment is not working, just “too bad”. I agree that various variants should be tried (not just by me).
Maybe this place http://www.researchgate.net/ can have the features needed to solve the problems Noam discussed. It’s like facebook, but for scientists 🙂
Paul Goldberg’s post provides a recommendation for “On the rate of convergence of fictitious play” by Brandt, Fischer, and Harrenstein.