Archive for November, 2011

The Society for Social Choice and Welfare will hold its 11th biennial meeting in New Delhi from August 17th to August 20th, 2012.  “We are specially interested in papers in Algorithmic Game Theory and Mechanism Design”.  The submission deadline is February 28, 2012.


NetEcon 2012

NetEcon 2012 will be held in conjunction with INFOCOM’12 on March 30, 2012 in Orlando, Florida.  The paper submission deadline is December 16, 2011.


Muthu has been running an experiment for the last month or so in which he uses a Google+ hangout for open discussions with “the world” on some research topic.  This coming Wednesday, November 9th, from 8:00 to 9:00 AM PST, he will be hosting me (Noam) for an open discussion on Algorithmic Mechanism Design and computational auctions.  There is no set plan; basically, I will talk about whatever I am asked to (even if I know nothing about it).

Instructions on how to join the hangout:
Add muthubyshance@gmail.com to one of your Google+ circles, or send an email to that address, and Muthu will add you to his video-hangout circle.  Then, at the appointed time, go to his Google+ page; you will see Muthu hanging out, and you can join the hangout.  (Alternatively, go to the hangout section on Google+ and you will see him there.)  If there are any problems, email the address above.


In my previous post I tried to spell out the problems with the current academic journal publishing system, and pointed to Tim Gowers’ post suggesting how it could be replaced by a combination of the arXiv and a math-overflow-like online commenting-and-reputation system.  Many comments on Tim’s post (as well as on mine, my G+, Lance’s, or Gil’s) raised objections to such a system.  In many cases, I feel that the objections were to using a web-based system as things stand today, without incorporating the critical positive features of journals.  Clearly any alternative system would have to incorporate these positive features in some way, preferably better than journals do.  In this post I would like to explicitly point out the positive features that must be duplicated by any system that seeks to replace the journal system.

That’s where the people are

The most important feature of the journal system is that leading researchers actually publish there, referee papers there, and hire and promote others according to their journal publications.  Any alternative system would be a non-starter without a strategy for moving researchers into it.  I think that this is doable.  To start, you need two elements: an organizer that can set up the proposed system, run it, and advance it, and a group of high-profile researchers willing to lend their names to the effort by serving as its “board” of sorts.  It is important to give value immediately, alongside the existing journal system.  At first only a few researchers will actively use the new system, whether for the added value, out of interest in a new thing, or as a deliberate contribution to the effort.  As more people join, the value goes up, more people start using it, and “reputation” earned on the site can become a minor added value in hiring decisions (comparable to the “bonus points” one may get for being a good expositor), perhaps via the letters people write.  The hope is that a positive feedback loop emerges, gradually pulling the whole community in.  This can happen: see how CS switched from its previous journal culture to a conference culture.

Peers must review peers’ work

The main service that journals still provide is to have someone read your paper, check it, point out possible mistakes, suggest improvements, and even catch typos.  This is indispensable: someone qualified needs to spend the time doing so.  However, the idea that these someones are arbitrarily chosen by an editor and then provide anonymous feedback bundled with a reject/accept decision (and, in case of rejection, everything is repeated with another journal) is not ordained from heaven.  Any web-based system must provide mechanisms by which peers read each other’s work, check it, and suggest improvements.  While setting the incentive structure right for this is not trivial, I believe that it is entirely solvable, especially compared to the non-existent incentives for refereeing well for journals.  There are just so many ways to provide more useful information to authors, avoid duplication of review effort, reward refereeing, and reduce arbitrariness.

A Hierarchy

Journals get their authority from their editorial boards, which should be composed of well-respected researchers (although legal journals offer an intriguing alternative, where the best law journals are student-edited).  When one thinks of web-based reputation systems, one shudders at the thought of papers being evaluated by popular vote rather than by trusted experts.  This is another critical element that every web-based system replacing journals must include: any “decisions”, rankings, or scores must take into account the identity of the recommender/voter/referee/commenter, giving more weight to those with higher reputation.  This includes implicit rankings such as the order and prominence with which papers or comments are presented to the user.  Good systems will do this in a flexible way, where each user may choose his preferred reputation metrics and be presented with information ranked according to his taste.  This choice may be implicit, e.g. choosing the ranking system suggested by virtual-journal X.  Math-overflow-type systems have taken a step in this direction, but we will likely need more sophisticated systems, perhaps along the lines of PageRank or personalized PageRank.  The bar set by the journal system is not very high.
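To make the personalized-PageRank idea concrete, here is a minimal sketch of how reputation could be computed relative to each reader’s taste: run a random walk over an “endorsement” graph of researchers, restarting at the researchers that this particular reader trusts.  Everything here is illustrative — the graph, the names, and the parameters are made up for the example, not part of any actual proposal.

```python
# Toy sketch: personalized PageRank over a hypothetical "endorsement" graph.
# Nodes are researchers; an edge u -> v means u vouches for v's judgment.
# The restart ("personalization") distribution is concentrated on the
# researchers the reader trusts, so scores are relative to that reader.

def personalized_pagerank(edges, trusted, damping=0.85, iters=100):
    nodes = sorted({n for e in edges for n in e})
    out = {n: [v for (u, v) in edges if u == n] for n in nodes}
    # Restart distribution: uniform over the reader's trusted seeds.
    seed = {n: (1.0 / len(trusted) if n in trusted else 0.0) for n in nodes}
    rank = dict(seed)
    for _ in range(iters):
        new = {n: (1 - damping) * seed[n] for n in nodes}
        for u in nodes:
            if out[u]:
                # Spread u's current rank evenly over those u endorses.
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    new[v] += share
            else:
                # Dangling node: send its rank back to the restart seeds.
                for v in nodes:
                    new[v] += damping * rank[u] * seed[v]
        rank = new
    return rank

# Hypothetical endorsement graph among four researchers.
edges = [("alice", "bob"), ("bob", "carol"), ("alice", "carol"),
         ("carol", "alice"), ("dave", "carol")]
# Scores as seen by a reader who trusts only "alice".
scores = personalized_pagerank(edges, trusted={"alice"})
```

Because the walk restarts only at the reader’s seeds, someone like “dave”, whom nobody in the trusted neighborhood endorses, ends up with essentially zero weight — which is exactly the behavior one wants from a taste-dependent ranking, as opposed to a global popular vote.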


The problem with journals

Tim Gowers recently suggested an answer to “How might we get to a new model of mathematical publishing?” which I highly recommend.  While there has been much talk for years now on how to replace the journal system, I think that his proposal is explicit enough and simple enough to actually be implementable if seriously attempted.

One of the many comments to his post questioned the basic premise:

[...] I have no idea why people constantly claim that the journal system is broken. It seems to work just fine to me. The only real issues I’ve heard people bring up are 1. the open access issue, and 2. the cost issue.  [Here comes a discussion of how these issues are on their way to being solved -- a point with which I mostly agree]  Aside from the above two issues, what exactly is this suggestion supposed to accomplish?

I’d like to answer this explicitly and talk about the problems with the journal publication system, even assuming that the open-access and cost issues are completely solved: journals are simply not fulfilling their three main functions of dissemination, verification, and allocation of attention.

  1. Dissemination: While the original point of a print journal was to let Prof. A see the results of Prof. B relatively quickly, it is clear that, in the age of the Internet, journals only slow dissemination compared to, say, posting papers on the arXiv.
  2. Verification: Despite pretenses, refereeing is not really trustworthy. Results of some importance become believed not when they are refereed but only after the community has studied them for a while.
  3. Allocation of attention: An important goal of leading journals is to filter the “important” papers out of all the submitted ones, so that readers need not read everything but only the important stuff. I am afraid that today so much is published that most of what one reads in most journals should have been filtered out. This is partly a problem of the publish-or-perish culture and partly due to the coarseness of the refereeing model as a filtering tool.

All three of these main goals can be improved upon considerably using the right tools (which still need to be figured out) on the Internet. At the same time that the journal system has lost its usefulness, it has created many harmful side effects: the writing of countless worthless papers; lack of recognition for surveys, books, or other non-“paper” contributions; and the blind, silly use of metrics like impact factors in hiring, grants, and promotion, which leads to wasteful optimization of these metrics rather than of real research. All these harmful side effects could be tolerated had the system served its main purpose — but now we are paying the price without getting the goods.
