Archive for May, 2009

(Thanks to Eyal Winter for the link.)

Read Full Post »

Google Wave unveiled

Google just unveiled its new (open-source) attempt to change email by combining it with essentially all other electronic collaboration modes (IM, blogs, collaborative editing…).  Google Wave may completely change how we collaborate electronically.  Or, it may not.

Read Full Post »

Wired magazine published a (rather enthusiastic) popular article on Hal Varian’s role as Google’s chief economist.  It briefly mentions that Microsoft now has Susan Athey in a similar role.

Read Full Post »

The National Bureau of Economic Research recently held a “working group meeting” on market design, as mentioned on Al Roth’s blog as well as on David Pennock’s blog.  The meeting web page contains links to the presented papers, many of which seem quite interesting.  Dave Pennock’s post also offers reflections on the cultural difference between economics and CS: “In some ways it felt like visiting a foreign country”.

Read Full Post »

Paul Goldberg’s blog maintains a list of postdoc positions in Algorithmic Game Theory.  Here are the ones listed there now:

  1. In Liverpool and in Warwick with Paul Goldberg, Artur Czumaj, Leslie Ann Goldberg, and Piotr Krysta.
  2. Another one in Liverpool with Piotr Krysta.
  3. In Singapore with Edith Elkind and Ning Chen.

Read Full Post »

A few interesting pieces that I recently came across:

  1. A new economic paper on open source by Michael Schwarz and Yuri Takhteyev (via Al Roth’s blog).
  2. A compendium of PPAD-complete problems from Shiva Kintali.
  3. Some interesting thoughts on graduate school from Noah Snyder.
  4. The WAA09 workshop on ad auctions (via Muthu’s blog) — deadline is May 22nd.
  5. A new paper on “feedback loops of attention in peer production” by Fang Wu, Dennis M. Wilkinson, and Bernardo Huberman, which joins the many social-network papers by Huberman on the arXiv.
  6. An electronic voting workshop in Israel.

Read Full Post »

After looking at some of the debate raised by Moshe Vardi’s questioning of CS conferences (on Lance’s blog post as well as on my own post), I was struck by the fact that everyone was concerned with publishing papers; hardly anyone worried about reading them.  This seems to be the general attitude in the theoretical CS community, and there is a risk of it becoming even more pronounced in the AGT community.  People write papers just so that they get published, without really making much effort to have them be read; conferences and journals are created to cater to the publishing needs of authors rather than to satisfy any desire for more material to read.

CS journals lost their prestige and appeal simply because we stopped reading them.  This is not a complaint about the reading habits in CS but rather about the publishing habits: what readers found in CS journals was usually not very helpful, namely lots of boring, mediocre, and badly written papers (on top of the non-timeliness and expense of journals).  The presumed added value of peer-reviewed verification of correctness was only meaningful for papers that were not previously read seriously by the community, usually those that were not interesting anyway.  Since the authors of the most important papers do want people to read them, the best publications moved elsewhere, in the case of CS to conferences.  For a long time we did read the proceedings of CS conferences.  The danger I see for many conferences now is that people read and listen less and less to the results presented there too.  Many conferences have very few attendees who are not presenting their own papers; it did not use to be that way, and this is a sign of rot.

I have two general recommendations for catering to possible readers of our scientific results:

Publish Fewer Papers

We publish too much (as recently noted by Mark).  I am not against writing and sharing small steps towards solving a problem, but the premature packaging as a “publication” is usually bad.  Algorithms and theorems are often not really fully understood by the authors, and are thus not properly crystallized, simplified, or generalized.  Models are not fully polished, justified, discussed, or compared to other related models.  Proofs are rarely made crisp.  We also feel compelled to “market” our results, often sacrificing scientific honesty, not to mention the amount of time wasted on the whole packaging effort.  No wonder few people bother reading such papers.  There are other ways to get preliminary results into the open (while maintaining credit), such as technical reports, the arXiv, or workshop talks.  I really like the tradition in economic theory of circulating a working paper that keeps being improved until it is deemed ready for real publication in a journal (where the top ones are actually widely read, I understand).

The funny thing is that the pressure to publish more papers rather than better papers is not really external: it is a psychological trick we play on ourselves.  In most places it is easier to get a job or tenure with ten first-rate papers than with twenty second-rate ones.  It does not make any sense for hiring, tenure, grant, or promotion committees to count papers; if you insist on “counting”, then at least make sure you count impact: citations, the h-index, or just publication in the absolute top venues.  Conferences and journals should insist on evaluating papers from the point of view of the reader: simplicity, full context, clear writing, crisp proofs, and polished models are all critical to a paper being useful to its readers.  They can help reduce the sheer number of papers simply by accepting fewer of them: the top venues should be more selective, and less-than-top conferences should consider becoming publication-less workshops.

Help Identify the Important Ones

All these papers are “out there”: in conference proceedings, journals, technical reports, on authors’ websites, the arXiv, and other places.  No one can read even a fraction of the papers in their field.  Wouldn’t it be nice to have someone point us to those we would be best advised to read?  Someone to summarize a topic?  This “meta-research” layer would be the critical element in helping readers allocate their attention.  There are various formats for drawing attention to the key results: awards, invited talks, or tutorials at conferences.  In the long term, a book that summarizes an area; in the medium term, a “handbook” written by many authors; in the shorter term, lecture notes.  Wouldn’t it be nice if more people wrote surveys or expositions?  I particularly liked the new physics review site (as reported by Suresh).  Blogs can point out papers too (Oded Goldreich does, and I try to do so as well).

Read Full Post »
