After having looked at some of the debate raised by Moshe Vardi’s questioning of CS conferences (on Lance’s blog post as well as on my own post), I was struck by the fact that everyone was concerned with publishing papers; hardly anyone worried about reading these papers. This seems to be the general attitude in the theoretical CS community and there is a risk of it becoming even more so in the AGT community. People write papers just so that they get published, not really making much effort to have them be read; conferences and journals are being created to cater to the publishing needs of authors rather than to satisfy any desire of having more information to read.
CS journals lost their prestige and appeal simply because we stopped reading them. This is not a complaint about the reading habits in CS but rather about the publishing habits: what readers found in CS journals was usually not very helpful: lots of boring, mediocre, and badly written papers (and this in addition to the non-timeliness and expense of journals). The presumed added value of peer-reviewed verification of correctness was only meaningful for papers that were not previously seriously read by the community — usually those that were not interesting anyway. Since the authors of the most important papers do want people to read them, the best publications moved elsewhere — in the case of CS, to conferences. For a long time we did read proceedings of CS conferences. The danger I see for many conferences now is that people read and listen less and less to results presented there too. Many conferences have very few attendees who are not presenting their own papers — it did not use to be that way, and this is a sign of rot.
I have two general recommendations for catering to possible readers of our scientific results:
Publish Fewer Papers
We publish too much (as recently noted by Mark). I am not against writing and sharing small steps towards solving a problem, but the premature packaging as a “publication” is usually bad. Algorithms and theorems are often not really fully understood by the authors, and are thus not properly crystallized, simplified, or generalized. Models are not fully polished, justified, discussed, or compared to other related models. Proofs are rarely made crisp. We also feel compelled to “market” our results, often sacrificing scientific honesty, not to mention the amount of time wasted on the whole packaging effort. No wonder few people bother reading such papers. There are other ways to get preliminary results into the open (while maintaining credit), such as technical reports, the arXiv, or workshop talks. I really like the tradition in the field of economic theory of circulating a working paper that keeps being improved until it is deemed ready for real publication in a journal (where the top ones are actually widely read, I understand).
The funny thing is that the pressure for publishing more papers rather than better papers is not really external — it is a psychological trick we play on ourselves. In most places it is easier to get a job or tenure with ten first-rate papers than with twenty second-rate ones. It does not make any sense for hiring, tenure, grant, or promotion committees to count papers — if you insist on “counting”, then at least make sure you count impact: citations, the h-index, or just publication in the absolute top venues. Conferences and journals should insist on evaluating papers from the point of view of the reader: simplicity, full context, clear writing, crisp proofs, polished models — all these are critical to a paper being useful to its readers. They can help reduce the sheer number of papers by simply accepting fewer of them: top venues should be more selective, and less-than-top conferences should consider becoming publication-less workshops.
Help Identify the Important Ones
All these papers are “out there”: in conference proceedings, journals, technical reports, on authors’ websites, the arXiv, and other places. No one can read even a fraction of the papers in his field. Wouldn’t it be nice to have someone point out those we would be best advised to read? Someone to summarize a topic? This “meta-research” layer would be the critical element in helping readers allocate their attention. There are various formats for drawing attention to the key results: awards, invited talks, or tutorials at conferences. In the long term, a book that summarizes an area; in the middle term, a “handbook” written by many authors; in the shorter term, lecture notes. Wouldn’t it be nice if more people wrote surveys or expositions? I particularly liked the new physics review site (as reported by Suresh). Blogs can point papers out too (Oded Goldreich does, and I try to do so too).