## Noam and I

Late last Friday night, after arriving at SF airport to take a red-eye to Atlanta, I peeked at my email and found a one-liner from Noam, asking if I wanted to join as a co-blogger. In no time I found myself sending the reply “Happy to join”; then I boarded my flight and immediately dozed off. After a couple of hours, I woke up sweating and full of anguish: How had I taken this relatively big step with so little thought? Do I have the time, do I have ideas, do I have ways of talking about them, how will I be judged, and do I need to put myself through this? Then I remembered my (very positive) past ventures with Noam, my mind calmed down, and I slipped back into deep slumber.

The year is 1998 or ’99, the venue is STOC or FOCS; here is my 30-second conversation with Noam:

Me: I hear you are working in economics these days. How does a top-notch complexity theorist make such a big change?

Noam: That is where the future lies!

Me: Hmmmm?!

Fast forward to March 2001. My book on Approximation Algorithms is almost ready to go to the publishers. I do not want to work in this area anymore — the big problems I wanted to solve in recent years, Steiner tree and asymmetric TSP, have not yielded despite a huge investment — and I can see the BIG VOID approaching! I have already written a paper in game theory, but I know so little about the area. Even worse, I can’t seem to understand what game theory papers are really talking about. If only I could gather 5 top experts in the area and have them tell me about their work …

I soon realize that many other people are in the same boat! So I approach Fred Roberts, Director of DIMACS, with a proposal for a workshop. He is excited and immediately commits generous funding. I know a few potential speakers, but to make it a good event I will need to find a suitable co-organizer. The choice is obvious — Noam — and he agrees! And the workshop ends up being a big success and plays a key role in getting AGT started. (One small gripe: DIMACS still hasn’t listed the co-organizers in alphabetical order.)

Fast forward to December 2005. A publishing rep visits and asks which book I would like to write next. I tell her it would be on AGT, but I don’t know enough to write a complete book, and what is more, no one else does either! The rest of the conversation goes something like this (each “….” marks a switch of speaker):

Rep: You should produce an encyclopedia on AGT …. You will be the Editor-in-Chief and you will have (innumerable) editors, subeditors, reviewers …. The encyclopedia will have extensive bibliographies on all aspects. …. We will make sure every library gets a copy. …. You will get x% of the total sales.

Me: We need an edited book, with chapters written by many top experts working on different aspects of AGT …. I will need to find 2 or 3 well-chosen co-editors …. The chapters should expound the main ideas in the clearest possible manner using simple settings and relegate more complex developments to exercises. They should provide only the absolutely essential references and not be structured as surveys but as pedagogical tools …. If we do a really good job, just about every CS Dept. will be using it for a semester-long course. …. No one should get royalties, not even the co-editors. Instead the price of the book should be set as low as possible and the book should be made available online.

As you can see, we were on completely different wavelengths! But the conversation had already set me in motion, and an obvious choice of co-editor was Noam!

My stance and my creed: My work sits at the extreme “A” end of AGT&E. Fortunately, a lot has happened at that end over the last ten to twelve years and that is what I will mostly talk about.

I consider our theory of algorithms and computational complexity sacrosanct, and I believe we need to carefully maintain its integrity. My recent guest posts reflect this, e.g., on Galactic P and Galactic PTAS and on the class FIXP, and so does my recent paper on non-bipartite matching, in which I have made a special point of not sparing myself. Of course, I can’t claim that my viewpoint is the right one, and I will always welcome extensive discussion with you all. I hope you will participate.

## Final post and AGT course update

There’s a changing of the guard happening at Turing’s Invisible Hand — I am following Ariel into retirement and some new additions to the team will be announced soon. I’d like to thank Noam for the opportunity to participate and my co-bloggers for the good times.

Before I go, a quick update on the algorithmic game theory courses that I’ve been teaching at Stanford this year. Last fall quarter I taught an intro to AGT course (mechanism design, price of anarchy, learning algorithms, complexity of equilibria, applications, etc.). I wrote detailed notes for all 20 lectures, which were also videotaped; see the intro AGT course page for details. I hope that some of you find the materials useful for teaching or self-study.

This quarter, I’m teaching a sequel course on topics in mechanism design. The topics range from the classic (matching markets, gross substitutes, etc.) to the hot-off-the-press (price of anarchy of simple auctions, multi-parameter revenue maximization, etc.). I am again making videos and accompanying lecture notes available; see the advanced AGT course page. As always, I’d be grateful for any suggestions and corrections that you have.

## Ariel Procaccia, blogger emeritus

When I was 21, I used to go to the gym three times a week, rain or shine. After a few years I shaved off a weekly session, and a few years later I was down to one gym session a week. By the time we moved to Pittsburgh, playing Dance Central with my wife seemed like a demanding workout.

Why am I telling you this? Because blogging is a bit like going to the gym, as the following graph suggests (the y-axis shows the number of blog posts written by yours truly, excluding announcements and guest posts):

But blogging is definitely more fun than going to the gym, and I did have a lot of fun blogging these past 2+ years; I want to thank Noam for this opportunity. Unfortunately, it’s no longer as fun as it used to be, and I decided to quit while I’m ahead. Since my blogging career was defined by ranking and listing stuff, I would like to finish with a vainglorious list of some of my own posts that I especially like:

Now that I think about it, blogging is more like singing in the shower: it’s a creative form of self-expression, and you have no idea who’s listening. If you have been listening for the last two years, I’m grateful for your attention, especially since I sing like a crow!

## The Choices Are: Slick and Exponential, or Brute Force and Polynomial

### Guest Post by Vijay Vazirani

A new, difficult problem, taken up within TCS twelve years ago, namely finding good algorithms for computing market equilibria, has stretched the holy grail of TCS, polynomial time solvability, to its very limits — so much so that today the two options listed in the title seem to be the only ones available! Let me quickly provide some background, say a bit about these options, and invite you to a discussion that — hopefully — goes right to the core tenets of our field.

The notion of market equilibrium, which asks for prices under which supply and demand are in parity, is central to economics. The Arrow-Debreu Theorem (1955), showing the existence of such prices for a realistic, complex model of the economy, is perhaps the most central result in economics. Yet this result has an unsatisfying aspect: the notion is inherently algorithmic (indeed Walras, while defining it in 1874, also gave a mechanism for arriving at an equilibrium), whereas the Theorem is highly non-constructive. Several top mathematicians, including Scarf and Smale, have given algorithms, but they are slow.
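To fix ideas, here is a minimal formalization of the equilibrium condition for an exchange economy; the notation ($w_i$, $d_i$, $p$) is mine and not from the original sources:

```latex
% Sketch: exchange economy with m goods and n agents.  Agent i has an
% endowment w_i \in \mathbb{R}_{\ge 0}^m and, at prices p, demands an
% optimal bundle d_i(p) costing at most p^T w_i.  Prices p > 0 clear
% the market (i.e., form an equilibrium) when
\[
  \sum_{i=1}^{n} d_i(p) \;=\; \sum_{i=1}^{n} w_i ,
\]
% that is, when aggregate demand equals aggregate supply.  Arrow-Debreu
% asserts that such p exist under suitable convexity assumptions.
```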

Hence, bringing to bear powerful tools from our theory of algorithms was a fine goal. Work started with the easiest market models and gradually moved on to more realistic, complex ones. At first, the primal-dual paradigm and interior point methods for convex programs did admirably well. However, when aspects of intractability showed up, attention shifted to the two options listed in the title.

“Slick” algorithms are in the style of the (pivoting-based) simplex algorithm, though suitably enhanced to the setting of the linear complementarity problem — these are called complementary pivot algorithms, the classic Lemke-Howson algorithm (1964) for 2-player Nash equilibrium being one such. Such algorithms run very fast in practice — indeed simplex (1947) is still the method of choice for most LP applications — but one can make them take exponential time by intricately doctoring up the instance.
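For readers who have not seen it, the underlying problem these pivoting algorithms solve can be stated in two lines; this is the textbook formulation of the linear complementarity problem, not anything specific to market equilibria:

```latex
% LCP(M, q): given M \in \mathbb{R}^{n \times n} and q \in \mathbb{R}^n,
% find vectors z and w satisfying
\[
  w = Mz + q, \qquad z \ge 0, \qquad w \ge 0, \qquad z^{\top} w = 0 .
\]
% The complementarity condition z^T w = 0 forces z_i = 0 or w_i = 0
% for each coordinate i; complementary pivot algorithms move between
% bases that preserve this complementarity, much as simplex moves
% between feasible bases of an LP.
```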

Let us say that an algorithm is “brute force” if it accomplishes the job of finding one solution by finding them all — it has no idea how to do any better. However, it does achieve polynomial running time if certain parameters are assumed constant: e.g., for CLIQUE restricted to graphs having cliques of size at most $k$ (a constant), there is a trivial $O(n^k)$ algorithm, as the sketch below illustrates.
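As a concrete illustration of this “find them all” pattern, here is a minimal Python sketch (function and variable names are mine, purely for illustration) that finds every $k$-clique by testing all $\binom{n}{k} = O(n^k)$ candidate subsets; this is polynomial once $k$ is a constant, and the algorithm indeed has no idea how to do any better:

```python
# Brute-force k-clique enumeration: O(n^k) candidate subsets for constant k.
from itertools import combinations

def k_cliques(adj: dict[int, set[int]], k: int) -> list[tuple[int, ...]]:
    """Return every k-clique of the graph given as an adjacency dict."""
    cliques = []
    for subset in combinations(adj, k):        # C(n, k) = O(n^k) candidates
        # A subset is a clique iff every pair of its vertices is adjacent.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            cliques.append(subset)
    return cliques

# Tiny example: a triangle {0, 1, 2} with a pendant vertex 3.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert k_cliques(g, 3) == [(0, 1, 2)]          # the unique 3-clique
```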

From the viewpoint of practicality, the former win hands down — even for moderate instance sizes, the latter will take prohibitively long. However, there is another important criterion: the study of polynomial time algorithms has historically led to deep insights into the structure of the problems studied. Surprisingly enough, even on this count these exponential time algorithms do well: e.g., simplex leads to a proof of strong duality, and Shapley has obtained deep structural properties of 2-Nash from the Lemke-Howson algorithm. So far, I am not aware of any such insights from brute force algorithms.
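For reference, the duality insight alluded to is LP strong duality; the simplex method's termination argument yields a proof of the following standard statement (included here for completeness, not taken from the original post):

```latex
% LP strong duality: if either program below is feasible and bounded,
% then both optima are attained and are equal.
\[
  \max \{\, c^{\top} x \;:\; Ax \le b,\ x \ge 0 \,\}
  \;=\;
  \min \{\, b^{\top} y \;:\; A^{\top} y \ge c,\ y \ge 0 \,\} .
\]
```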

Smoothed analysis provides a fairly convincing theoretical reason for the success of simplex. Surely there should be a more general setting that encompasses complementary pivot algorithms.