Archive for October, 2009

Guest Guest Post from FOCS

[Noam: Since I didn't go to FOCS, I asked Shahar to write a guest blog post from there.]

[Shahar: Noam asked me to write a guest post on FOCS. However, Shaddin wanted to write a post so badly that I had to let him guest post in my own guest post.]

Guest Post by Shahar Dobzinski
Guest Guest Post by
Shaddin Dughmi

When Shahar asked me on Monday night to write this guest guest post, my first reaction was to wish I had known ahead of time so that I could have taken notes. Since much of this is from memory, I apologize for any unintentional omissions or inaccuracies. Since this is an AGT blog, I will focus mostly on AGT-related papers.

This year FOCS is being held in downtown Atlanta, and the conference hotel is only a few blocks from Georgia Tech. The conference talks are all in the hotel conference center as usual, though the celebration on day 0 was located at the LeCraw Auditorium at Georgia Tech.

Saturday featured the celebration of the 50th anniversary of FOCS and the 20th anniversary of the ACO program at GATech. There were talks by four distinguished speakers: Richard Karp, Mihalis Yannakakis, Noga Alon, and Manuel Blum. All four talks were inspiring. Manuel Blum’s talk on “Can TCS get a grip on Consciousness” was particularly entertaining and thought provoking. In the talk, Manuel chronicled his own personal journey toward understanding consciousness, culminating with a description of “Global Workspace Theory” as a model of consciousness. He drew concrete connections between this model and mathematical proofs, suggesting that TCS can benefit from models of consciousness as guides for high-level planning.

The conference talks were held in two adjacent conference rooms on the second floor of the conference hotel. Luckily, there was an adjacent conference being held on the same floor called “The Secrets of the Millionaire Mind”. The colorful characters from this conference occasionally ventured into the same lobbies as the FOCS audience, and doubtless managed to recruit many TCS graduate students who are fed up with meager stipends and left-over pizza.

There were two AGT sessions. Due to incurable jetlag, I unfortunately missed the first one on Monday morning. Luckily, however, all talks will be available online soon. Nevertheless, word around the conference lobby was that Amin Saberi’s talk on convergence in local interaction games was fantastic. The second AGT session was Monday afternoon, featuring three talks. The first was on Rationalizing Network Formation, where Kalyanaraman and Umans prove hardness results for inferring the valuations of players in network formation games; I interpreted this as a hardness result for effective market research. The second talk in the session was on pricing strategies for revenue maximization. The authors obtain improved lower and upper bounds on pricing strategies for revenue maximization when a limited number of items are sold to bidders with unknown subadditive valuations. Their lower bounds apply to the previously studied static single-price setting, which they then relax to a model that allows prices to vary over time. An interesting question brought up after the talk was whether their results could be improved by using a benchmark other than welfare. The third talk was on my paper with Shahar Dobzinski (the guest blogger one level up in the blogging tree), in which we introduce a new class of truthful-in-expectation mechanisms and use it to obtain a truthful-in-expectation FPTAS for multi-unit auctions. We also show the first separation between truthfulness-in-expectation and universal truthfulness.

All in all, this FOCS was the usual mix of talks, good conversation, and blossoming collaborations. The southern hospitality was warm, and the southern food was an experience. As revealed at the business meeting, the next FOCS will be held in Las Vegas! In light of this choice of location, let’s hope they tape the talks next year as well, for those of us one roll of the dice away from the big one!

Read Full Post »

Not a Journal

A recent blog post by Jeff Ely brought to my attention “NAJ Economics“, an overlay journal devoted to Economics that has been in operation for about eight years, although at pretty low volume.  “NAJ” stands for “Not a Journal” (or the geek-wannabe, GNU-like “NAJ Ain’t a Journal”), and the idea is that a set of (rather distinguished) editors choose papers that they like on the web, peer-review them, and publish links to the “reviewed” papers.  The way it works is that the editors pick what they want to review: you cannot submit your paper to NAJ, nor can you ask them not to review your paper once you’ve put it openly on the web.  The result is a peer-reviewed publication: the author takes care of publication on the web, and NAJ provides the peer review.  (I started thinking about an “NAJ AGT”, which could handle publication more elegantly by relying on the arXiv.  But then, it doesn’t seem that NAJ Economics is a success story, so maybe not.)

At the same time, Daniel Lemire points to a 1987 “EWD” by Dijkstra arguing against the whole notion of trusting peer review too much.  Indeed, Dijkstra rarely published his work in the usual sense but rather mailed out photocopies of his handwritten notes, termed EWDs, to colleagues.  While my sympathies (like those of Lemire) are with this mode of publication, I’m afraid that few non-Turing-Award winners would get their work noticed this way, so some mechanism for allocating attention is still needed.

Read Full Post »

Eran Shmaya, in a beautiful recent blog post, adds some sociological angles to the economists’ never-ending introspection about whether their field is a science, where the starting point of such discussions is Popper‘s falsifiability.  The discussion about whether economic theory has any significant predictive power has certainly picked up lately due to the financial crisis, which convinced many people that the answer is “no”.  (A variant, taken by many micro-economists, claims that micro is fine and only macro-economics is a superstition.)

Whenever I hear these types of discussions about the predictive power of economic theory, I always think about the irony that, as an algorithmic game theorist, I am mostly interested in using economic theory as a branch of mathematics…  Arrow’s theorem is simply true, not falsifiable.  So is Myerson’s expression for optimal auctions, as well as many other mathematical tidbits that we computer scientists pick up from economic theory.  We don’t really care how well these theorems relate to human situations, since we use them in different circumstances, those related to computer systems.  I often wish that theoretical economists would allow themselves to stray even farther away from the realities of their human subjects.
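For readers who haven’t seen it, the Myerson expression in question (from his 1981 optimal-auction paper) is the virtual valuation: for a bidder whose value is drawn from a distribution with cdf $F$ and density $f$,

```latex
\varphi(v) \;=\; v \;-\; \frac{1 - F(v)}{f(v)} .
```

For i.i.d. bidders with a regular distribution (i.e., $\varphi$ increasing), the revenue-maximizing auction is simply a second-price auction with a reserve price $r$ solving $\varphi(r)=0$.  For example, with values uniform on $[0,1]$ we get $\varphi(v)=2v-1$, so the optimal reserve is $1/2$.  True as a theorem, whatever one thinks of its predictive power.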

This is not to say that we in algorithmic game theory need not worry about how well our theories correspond with reality.  We should.  We have not really started doing so, and it’s high time we did.  But our concerns may be more like those of engineers than those of scientists.  From economics, I’m happy to just take the theory…

Read Full Post »

SAGT 2009

Yesterday I returned from the Symposium on Algorithmic Game Theory (SAGT) that was held in Cyprus.   This is the second year of SAGT, the newest addition to the set of conferences targeted at the interface between computer science, game theory, and economics: EC and WINE (as well as the related and more specific COMSOC).  SAGT differentiates itself from the others in two respects.  First, there is the geographic dimension: SAGT is European, while EC is North American and WINE rotates between America, Europe, and Asia.  Second, SAGT is unashamedly theoretical: even its name has the theoretical “algorithmic” and “game theory”, while the others have applied leanings, “Internet” for WINE and “electronic commerce” for EC, despite their dominant theoretical flavor.  SAGT’s dates are strategically chosen with respect to this eco-system of conferences; in particular, its submission deadline is slightly after EC acceptance notifications.  Given that EC has turned into quite a strong conference with very low acceptance rates, this opens the way for SAGT to develop into a strong conference as well.  I understand from people who attended both the first and second SAGT that the scientific level this year indeed seems significantly higher than last year.

I have to admit that at first I feared I would find weak papers that answered uninteresting questions in unilluminating ways.  While I can’t say that none of the talks I heard fell into this category, I was quite positively surprised.  It seems that many of the papers were specialized rather than weak, targeted quite properly at algorithmic game theorists.

As an example, take the session on “solution concepts” in which several alternatives to Nash equilibrium were explored:

  • The Computational Complexity of Weak Saddles (by Felix Brandt, Markus Brill, Felix Fischer and Jan Hoffmann), which explores Shapley’s alternative to mixed-Nash equilibrium, a solution concept that assigns each player a set of strategies rather than a single one.
  • Partition Equilibrium (by Michal Feldman and Moshe Tennenholtz), which explores deviations by coalitions where the coalition structure is exogenously given.
  • Learning and Approximating the Optimal Strategy to Commit To (by Joshua Letchford, Vincent Conitzer and Kamesh Munagala), which considers a Stackelberg version of a game, where one player may commit in advance to a mixed strategy.
  • On the Complexity of Iterated Weak Dominance in Constant-Sum Games (by Felix Brandt, Markus Brill, Felix Fischer and Paul Harrenstein), which studies iterated removal of dominated strategies, using the tricky notion of weak dominance rather than the more robust notion of strict dominance.

These variants may all be too specialized for someone who just wants to plainly use game theory, but for people working inside the field they are illuminating.  Personally, as a fan of alternative notions of equilibrium, all of these papers seemed well motivated to me, and while I can’t say that I found any of the results to be earth-shattering, I did find them interesting.

Invited talks were given by Elias Koutsoupias (Approximate price of anarchy and stability), Mihalis Yannakakis (Computational aspects of equilibria), and myself (Google’s auction for TV ads).  Unfortunately, due to my travel constraints, I missed these talks (well, except mine).  The social program included an excursion to several tourist attractions, braving an unusual heat wave (35 degrees Celsius on the bus).  The banquet compensated for the heat with an excellent seafood meze.  The whole conference was very well organized by PC chair Marios Mavronicolas and local arrangements chair Vicky Papadopoulou.

Next year, SAGT is scheduled to be in Athens around the same time of year (mid October), with its submission deadline shortly after EC acceptance notifications.

Read Full Post »

Outsourcing and CS education

The academic year started today at the Hebrew University, and I couldn’t help wondering what all our CS graduates will be doing ten or twenty years after graduation.  With the amazing rise of technology and science in much of the developing world, their worldwide competition will be tough.  Will their software engineering jobs be outsourced too?   The current situation does not seem to be at equilibrium:

Read Full Post »

My nephew just started his undergraduate degree in Math at the Technion.  When his grandfather, a high-powered engineer (whom I sometimes turn to for solving a differential equation that I need solved), asked him what they learned in their very first calculus lesson, the answer was “supremum“.  The grandfather was mystified, claiming that “supremum” was not part of the math curriculum at all when he got his engineering degree half a century ago (also at the Technion, it turns out).  While this can hardly be literally true, it does highlight the difference between the “abstract” approach to math and the “applied” approach that he got as an engineer.  From an abstract point of view, it makes much sense to start with the notion of supremum, the ubiquitous existence of which essentially defines the real numbers, while I suppose a more applied approach would hardly emphasize such abstract technical hurdles.  When I studied math at the Hebrew University, we spent an enormous amount of time on such abstract notions, to the point of ending the first year with hardly any practical ability to figure out a simple integral, but with a pretty good grasp of the properties of zero-measure sets (yes, first year).
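For non-mathematical readers, the least-upper-bound (completeness) property that this first lesson is driving at fits in one line:

```latex
\text{Every nonempty } S \subseteq \mathbb{R} \text{ that is bounded above has a least upper bound } \sup S \in \mathbb{R}.
```

This is exactly what distinguishes the reals from the rationals: the set $\{x \in \mathbb{Q} : x^2 < 2\}$ is bounded above but has no supremum within $\mathbb{Q}$ (its supremum in $\mathbb{R}$ is $\sqrt{2}$), which is why completeness is a natural, if abstract, place to start calculus.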

In my department, we have an ongoing debate about the type of math that our CS students should get.  Traditionally, we have followed the extremely abstract tradition of our math department, which emphasizes the basics rather than applications (even in the “differential equations” course, students rarely solve equations, but more often prove existence and such).   In the last decade or so there has been a push in my department to shift some focus to “useful” math too (like being able to actually use the singular value decomposition or the Fourier transform, as needed in computer vision, learning theory, and other engineering-leaning disciplines).  I’m on the losing side of this battle and am against this shift away from the “non-useful” fundamentals.   The truth is that most computer scientists will rarely need to use any piece of “useful” math.  What they will constantly need is “mathematical maturity”: being comfortable with formal definitions and statements, being able to tell when a proof is correct, knowing when and how to apply a theorem, and so on.

Read Full Post »

Games and Economic Behavior (GEB) is the leading academic journal in Game Theory, and as such is a natural place for submission of your Algorithmic Game Theory papers.  (Other natural alternatives include TCS journals, AI journals, Economics journals, and OR journals.)  GEB has been very “CS-friendly” for quite some time, to the point of having added several CS researchers to its editorial board.

Judging papers on an interdisciplinary border is always a tricky business, and in the case of CS there is the additional complication of the conference publication culture in CS.  GEB has recently undergone some internal discussions, reaching general guidelines for the evaluation of CS papers for publication in GEB.  These were written by Editor David Parkes and recently sent to the editorial board.  With his permission, I am publishing them verbatim.

The evaluation of CS papers for GEB: Guidelines

* On the Role of GEB

To be published in GEB, a paper should be of interest to game theorists; for CS-themed papers it should meet the standards set by the relevant leading field journal. Papers should be original, make a substantial contribution, and provide a broadly accessible exposition.

For CS papers in particular, a unique role that GEB plays is in certification that a piece of work is correct and is of interest to the game-theory community. This is important in meeting GEB’s mission of communicating game theoretic ideas across disciplines.

For the moment, many submitted papers will have previously appeared in “quasi-archival” CS conferences, such as FOCS/STOC, AAAI/IJCAI, EC and WINE. These are heavily reviewed and high quality venues, but publication does not certify interest to the GT community, nor necessarily the correctness of proofs, and the final version may also differ from a reviewed version.

* Originality of Contribution

In considering the role of GEB in certification of interest and correctness and its mission of communicating ideas, and the role of conferences within CS, we can expect the incremental contribution of the GEB paper over a CS conference paper to be much less than that required in other fields.

But the stated policy will remain that there should still be some incremental contribution, judged to be of some technical significance and game-theoretic interest. Authors should be expected to discuss differences from a conference version in a cover letter.

There has been an extensive discussion on whether or not to require an incremental contribution over a conference paper and there is not unanimity here. But this policy seems to best balance considerations about this role of GEB, with other considerations (e.g., that the GEB version be the reference version, and in making efficient use of reviewer resources.)

* On Scope

GEB should be a welcoming place for appropriate papers at the CS/GT interface, and we need to be as clear as possible in communicating with authors about what is considered in scope. It is this issue about which we have received most questions in the past couple of years.

The journal aims to communicate game-theoretic ideas across fields and applications, and CS constitutes a significant group of contributors to both GT and its applications. But what about a paper that doesn’t contribute to GT or its applications (i.e., it does not develop new game-theory per se, no new game-theoretic approach, does not test existing GT, etc.), but rather is computer-science centric in its contributions, be they theoretical, computational, or algorithmic?

This is a difficult area, but a broad guideline is that papers should be in scope if they are computational, but on a topic “of interest to game theory.” For example, new results related to core solution concepts, equilibrium analysis, or pertaining to issues of bounded rationality, would be in scope. On the other hand, results on more applied problems, such as faster algorithms for winner determination in auctions, or for clearing prediction markets, may be out of scope and may be better submitted to a CS or OR journal.  And as already stated, to be acceptable to GEB a paper must also be of a quality acceptable by the relevant leading field journal.

* Most importantly, be Flexible

Again, our overall mission is to communicate GT ideas across disciplines, and the guidelines above were designed towards this goal. But we suggest that every paper be judged with the overall mission in mind, even if it does not meet the guidelines above.  A reviewer should weigh all relevant variables: the importance of the results, the nature of the earlier publication (the refereeing process, visibility, the level of exposition in terms of technical details and applicability, etc.), the marginal contribution to knowledge in the GEB publication beyond the earlier publications, and so forth.

Read Full Post »
