Archive for April, 2013

Ariel Rubinstein is a famous game theorist who has been arguing for a while that game theory is not useful. Most recently he published an op-ed in the Israeli Ha’aretz newspaper, mischievously entitled “How game theory will solve the Euro zone problems and stop the Iranian nukes?” (answer: it won’t). The op-ed rehashes his well-known views, but there was one paragraph that I particularly liked. (I’m translating from Hebrew because the English version of the op-ed, which appeared a few days ago under a slightly different title, is behind an insurmountable paywall.)

Some of the claimed applications of game theory are nothing but labels for real-life situations. For example, it has been claimed that the Euro zone crisis is like the games known as the prisoner’s dilemma, chicken, and the diner’s dilemma. The crisis bears a resemblance to all of these situations. But such statements are as hollow as saying that the Euro zone crisis is like a Greek tragedy. While the comparison to a Greek tragedy is perceived as an emotional statement by ivory tower intellectuals, giving a label from the game theory lexicon is for some reason perceived as a scientific truth.

A few days later the New York Times published an article that seems to have been deliberately designed to piss Rubinstein off. Michael Chwe, a UCLA political science professor, has just published a book called “Jane Austen, Game Theorist”. I haven’t read the book itself, but what I can say is that the NYT story makes a weak case for why Jane Austen “isn’t merely fodder for game-theoretical analysis, but an unacknowledged founder of the discipline itself”. For example:

Take the scene in “Pride and Prejudice” where Lady Catherine de Bourgh demands that Elizabeth Bennet promise not to marry Mr. Darcy. Elizabeth refuses to promise, and Lady Catherine repeats this to Mr. Darcy as an example of her insolence — not realizing that she is helping Elizabeth indirectly signal to Mr. Darcy that she is still interested. It’s a classic case of cluelessness, which is distinct from garden-variety stupidity, Mr. Chwe argues. “Lady Catherine doesn’t even think that Elizabeth” — her social inferior — “could be manipulating her,” he said.

But if, as Rubinstein “suggests”, Greek tragedies capture strategic interactions, and if we’re anyway revising our view of who founded game theory, shouldn’t the honor go to Sophocles?



A call for nominations has just gone out for a prestigious award recognizing the best paper at the interface of game theory and computer science published over the last decade. This is clearly a topic about which this blog’s readers should have some strong (and well-informed) opinions! Note that the nomination period is less than a month, so get your suggestions in quickly.

Call for nominations for the Prize in Game Theory and Computer Science of the Game Theory Society. The due date is 15 May 2013.

The Prize in Game Theory and Computer Science of the Game Theory Society was established in 2008 in recognition of Ehud Kalai’s role in promoting the connection between the two research areas. The Prize is awarded about every four years, normally at the World Congress of the Society. The last award took place in 2008 in Evanston. This time, the Prize will be awarded in 2013 at the Workshop on Computational Game Theory in Stony Brook (July 16-18, 2013).

The Prize will be awarded to the person (or persons) who have published the best paper at the interface of game theory and computer science in the last decade. Preference will be given to candidates who are 45 or younger at the time of the award, but this is not an absolute constraint. The amount of the Prize will be USD 2,500, plus travel expenses of up to USD 2,500 to attend the scientific event where the Prize is awarded.

The Game Theory Society invites nominations for the Prize. Each nomination should include a full copy of the paper (in pdf format) plus an extended abstract, not exceeding two pages, that explains the nature and importance of the contribution.

Nominations should be emailed to the Society’s Secretary-Treasurer, Federico Valenciano (federico.valenciano@ehu.es), by 15 May 2013. The selection will be made by a committee appointed by the President, and the result will be announced in June 2013.


Looking in my rearview mirror 

Guest post by Reshef Meir

Once upon a time (or so I’m told), the important publication venues were journals. Sure, there were conferences, but their main purpose was to present new findings to the community and to trigger discussions. A conference paper did not really “count”, and results were only considered valid after being published in a respectable journal, verified by its reviewers. Indeed, this is still the situation in some fields.

I have no intention of reviving the discussion on the pros and cons of journals, but conference proceedings in computer science, and in AGT in particular, are nowadays treated as publications for every purpose. They are considered reliable, are highly cited, and their theorems are used as building blocks for newer results. We also want institutions and promotion committees to consider conference papers when evaluating candidates.

The next sentence should be “…and this progress was made possible by great improvements in the review process of conferences.” But was it?

Almost half of the conference submissions I have personally reviewed [1] contained severe technical errors, and many of the erroneous submissions came from EC. All the EC submissions, I should say, were worthy, and would have made at least a reasonable case for acceptance were it not for the technical errors.

Somewhat surprisingly, I discovered that there is no consensus about rejecting papers once a technical error is spotted [2]. Often authors reply with a corrected version (sometimes with weakened propositions), a proof sketch, or a promise to fix the proof. For some committee members, this is a satisfactory answer; others assert that the paper should be refereed based on the originally submitted version, and that technical correctness is a threshold condition for acceptance.

I am not arguing that technical correctness should be the only or even the primary criterion for acceptance, but it is one criterion with which I think there is currently a problem. To initiate a discussion, here are the main arguments for acceptance and for rejection as I perceive them.

Toward acceptance:

1) It is not the reviewers’ job but the authors’ responsibility to verify the correctness of their results (an old debate; see e.g. here, p. 3).

2) Proofs are sometimes replaced with sketches or even omitted, so it is unreasonable (and perhaps impossible) to verify correctness anyway.

3) Even in journals, errors are no big deal, since if the results are important the error will eventually “float” to the surface.

4) We should trust authors, as no one wants an embarrassing error under their name. [3]

5) There is an opportunity cost to the community in delaying the publication of interesting results.

Toward rejection:

a) If errors are found, a corrected version can be submitted to a different conference or to the next meeting of the same one. Authors should not be given credit for correcting the paper and resubmitting it, since we know the corrected version cannot be properly reviewed.

b) Accepting revised versions from some authors may be unfair to others. Also, why not submit a corrected version with better motivation, references, or added results?

c) If one error is found, it is an indication that there may be other hidden errors, and that the authors should have prepared their paper more carefully for publication.

d) There are many non- or lightly-refereed venues (arXiv, workshops) for propagating results quickly, so it is hard to claim that results in CS do not propagate fast enough.

e) While authors doubtless prefer to publish error-free papers, other tasks and priorities may come before verification. Papers usually do not undergo major changes between acceptance and the camera-ready version; from my experience as an author, though, papers often improve significantly between submissions.

f) Low tolerance for errors will give authors an incentive to invest effort in finding their own mistakes prior to submission.

All in all, I agree that the best verification is indeed by the authors themselves, and that it is the authors’ responsibility to publish error-free papers. However, I do think there is a problem. I will freely admit that my own papers are not free of errors, and unfortunately some of those errors were found only after publication.

One possible solution is a revision of the reviewing process that puts more emphasis on verifying correctness, taking another step in the hybridization of journals and conferences; for example, allowing more time for review and letting authors submit revisions in particular cases. Since such changes are costly, a simpler solution is to make it clear that non-trivial errors will result in rejection unless there are unusual circumstances (such as a ground-breaking result that can be easily fixed). The point of this strict line is not to be an adversarial reviewer, but rather to ensure that authors have not just the capability and the responsibility, but also the incentive, to properly verify their own work.

So, should the review process change? Should an article with errors be accepted or rejected?

[1] Aggregating 20 submissions over the period 2009-2013, from AAMAS, SAGT, WINE, AAAI, IJCAI, and ACM-EC. Clearly this sample is not statistically significant, but it may still indicate a problem. Of course, errors are common even in published journal papers; I have seen estimates that between 10% and 30% of published papers contain non-trivial errors. Unfortunately I could not find a trusted source, but see e.g. here and here.

[2]  By “severe technical error” I mean either that a proposition is wrong, or that large chunks of the proof should be rewritten.

[3] Indeed, some people are quite embarrassed if an error is discovered even before publication. In contrast, Lamport (in Sec. 4.4, and in the essay in general) seems skeptical about the attitude of computer scientists toward their published errors.


Congratulations to Yoav Shoham and to Moshe Tennenholtz for winning the 2012 ACM/AAAI Allen Newell Award “for contributions to multiagent systems spanning computer science, game theory, and economics.”  Congratulations to the winners of the other prestigious awards announced at the same time (even though they are not related to Algorithmic Game Theory and Economics…)


The list of EC’13 accepted papers is now available. It has become a tradition of sorts on our blog to visualize this list as a word cloud. Comparing the current word cloud to the EC’12 and EC’11 word clouds, it seems that the main themes haven’t changed much (and Moshe is still a very popular name!). A closer examination of the list itself, though, does reveal a very encouraging trend: compared to previous years, there is a significant number of papers by prominent economists, operations researchers, and social scientists. A great program and proximity to Pittsburgh: what more can one ask for?
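For readers curious how such a cloud is built: a word cloud is essentially a visualization of word frequencies over the accepted-paper titles. Here is a minimal sketch of the underlying counting step; the titles below are hypothetical stand-ins, not taken from the actual EC’13 list, and the stopword set is an arbitrary illustrative choice.

```python
from collections import Counter
import re

# Hypothetical paper titles standing in for the real accepted-papers list.
titles = [
    "Truthful Mechanisms for Combinatorial Auctions",
    "The Price of Anarchy in Simultaneous Auctions",
    "Learning in Repeated Games",
]

# Common words that would otherwise dominate the cloud.
STOPWORDS = {"the", "of", "in", "for", "a", "an", "and", "on", "with"}

def word_frequencies(titles):
    """Count lowercase words across all titles, skipping stopwords."""
    counts = Counter()
    for title in titles:
        for word in re.findall(r"[a-z]+", title.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

freqs = word_frequencies(titles)
print(freqs.most_common(3))  # "auctions" tops this sample, appearing twice
```

Feeding the resulting frequencies to any word-cloud renderer (for instance the `wordcloud` Python package, or the online tools typically used for posts like this one) then scales each word by its count.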



The Simons Institute in Berkeley will hold a conference on “visions of theory of computing” during May 29-31, 2013 (just before STOC):

This three-day symposium will bring together distinguished speakers and participants from the Bay Area and all over the world to celebrate both the excitement of fundamental research on the Theory of Computing, and the accomplishments and promise of computational research in effecting progress in other sciences — the two pillars of the Institute’s research agenda.

Lots of interesting speakers.

