One of the main culture clashes between computer scientists and economists on the CS-econ frontier is whether the “complexity of equilibria” matters. The CS-y view of the matter is captured in Kamal Jain’s quote: “If your laptop can’t find it then neither can the market”. Economists mostly don’t care, since they see equilibrium reached every day, contrary to what CS says. As Jeff Ely quips: “Solving the n-body problem is beyond the capabilities of the world’s smartest mathematicians. How do those rocks-for-brains planets manage to pull it off?” TCS folks, who see complexity as the map of the world, can’t really understand this indifference. As Lance Fortnow tweets: “I’m an economist so I can ignore computational constraints / I’m a computer scientist, so I can ignore gravity.”
The beautiful thing about studying systems at equilibrium is precisely that it abstracts away the gory details of the process (aka dynamics, aka algorithm) of reaching this equilibrium. In principle, many different difficult-to-understand dynamics may all reach the same easy-to-understand equilibrium point. This is all very nice as long as the equilibrium is indeed magically reached somehow by the “real” dynamics. The obvious crucial question is whether this is the case in practice. The general tendency among economists seems to be to claim “yes”, to which the natural next question is “why?” As computer scientists have shown, this is not a general characteristic of large games or markets. Understanding the properties of interesting games and markets that make them actually reach equilibrium should be enlightening. Maybe economists choose to consider only those that do turn out to converge quickly, ignoring another large and interesting class of strategic scenarios? Maybe economists are thinking about “smallish” games, and so their insights will not carry over to more complex realistic scenarios? Maybe there is some deeper interesting structure that guarantees fast convergence? Distinguishing between these possibilities is especially crucial as we aim to analyze the new artificial games and markets that are to be found — and designed — on the Internet as well as elsewhere. Which economic and game-theoretic sensibilities will still hold in these complex unnatural circumstances?
Complexity is all about understanding such processes. While the foremost question dealt with by computational complexity is that of “time” — how long a computational process needs in order to find the solution, in our case to reach (close to) equilibrium — this is not the only type of question or insight that complexity theory provides. As one can see in the “map above”, there is a stunning variety of complexity classes, each attempting to capture a different facet of the challenge of finding the solution: How much space (memory) is needed? Can we even tell when we reach a solution? Does randomization help? Does parallelism help? Are approximations easier? Does the solution have this or that particular structure? In the case of finding equilibria, the classes PPAD and PLS give a very nuanced explanation of what is involved. There are also “concrete” models that explicitly study specific parameters such as communication or queries. One may dislike the fact that this complexity analysis does not restrict attention to natural dynamics but allows arbitrary and unnatural algorithms. The kind of natural dynamics that economists usually have in mind is some kind of best-replying in the case of games and some kind of Walrasian auctioneer in markets. The problem is that there are many variants of these that make sense: fictitious play, various forms of population dynamics, more sophisticated types of learning such as regret-minimization — and all of these can be enhanced with various orderings, smoothing attempts, tie-breaking rules, strategic look-ahead, re-starts, actions of the central planner, not to mention other more or less complex optimization and learning attempts. The strength of complexity analysis is that it applies to all of these. Any “lower bounds” are definitive: any practical system can be simulated by a computer, and thus no dynamics can succeed in general.
(Emphasis on “in general” — as mentioned above, the problems that you are interested in may have special structure — so what is it?) A statement of an “upper bound” may be less interesting as stated, but it immediately raises the challenge of either finding a natural algorithm=process=dynamics or pinpointing the complexity reason explaining why this is impossible.
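To make “best-replying” concrete, here is a toy sketch of my own (the function and game setup are illustrative assumptions, not from any particular source) of alternating best-reply dynamics on 2x2 games. In a coordination game the dynamics settle immediately; in matching pennies (which has no pure equilibrium) they cycle forever:

```python
def best_reply_dynamics(u1, u2, start=(0, 0), max_steps=20):
    """Alternating best-reply dynamics on a 2x2 bimatrix game.

    u1, u2: payoff matrices (row player, column player) as nested lists.
    Returns the profile reached if it is a pure Nash equilibrium, or None
    if the dynamics cycled without finding one.
    """
    r, c = start
    for _ in range(max_steps):
        r = max((0, 1), key=lambda a: u1[a][c])   # row player best-replies
        c = max((0, 1), key=lambda a: u2[r][a])   # column player best-replies
        # stop if (r, c) is a mutual best reply, i.e. a pure Nash equilibrium
        if u1[r][c] >= u1[1 - r][c] and u2[r][c] >= u2[r][1 - c]:
            return (r, c)
    return None

# Coordination game: best-reply dynamics converges immediately.
coord = [[1, 0], [0, 1]]
print(best_reply_dynamics(coord, coord))   # reaches a pure equilibrium

# Matching pennies: no pure equilibrium, so best-replying cycles.
mp1 = [[1, -1], [-1, 1]]
mp2 = [[-1, 1], [1, -1]]
print(best_reply_dynamics(mp1, mp2))       # None: the dynamics cycle
```

Note that this is exactly the kind of “natural” process whose many variants (orderings, tie-breaking, smoothing) a lower bound rules out all at once.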
This is a good point to refute several irrelevant objections to the applicability of computational complexity for analyzing the dynamics of games and markets. The first is the notion that humans or markets can undergo processes that cannot be computed. Frankly, there is no evidence of this; there is certainly much about the world that we do not understand well enough to simulate, but there is no indication of any natural process that is inherently more powerful than computers. This is the modern version of the Church-Turing thesis. (It seems that some quantum phenomena cannot be simulated by usual computers, but that doesn’t change the principle — it would just replace classical computers with quantum ones in the few places where it matters.) Even if you do subscribe to metaphysical dualism, do you want to base economics on it? The second type of objection concerns the standard technical notions used in complexity, which obviously leave much wiggle-room: “polynomial time”, with its hidden constants, is not synonymous with efficient; worst-case analysis is not always the right notion; etc. The point is that these are simply concise notions that usually seem to capture the issues well. There are always cases where more refined notions are needed, and in such cases complexity theory can provide more precise answers: for example, in the analysis of basic problems like sorting or matrix multiplication, very exact results are obtained (with no hidden constants); similarly, cryptography is not based on worst-case analysis; etc. There is no indication so far that the usual somewhat coarse notions of computational complexity miss something significant when applied to games or markets — quite the contrary, in fact. If such evidence emerges then complexity will not become irrelevant; simply, more refined analysis will be used.
Take for example the stunning difference between reaching a Nash equilibrium and reaching a correlated equilibrium. While the latter is reached by various natural regret-minimization dynamics, there are no “non-trivial” dynamics that, in general, reach the former. Let me say a word about this “non-triviality” by giving an example of what I would consider a trivial process: suppose that each player chooses a strategy at random every “turn”, unless his last strategy was already a best reply to those of the others (up to some indifference). At some point, the players will happen to “hit” an (ε-)equilibrium. This type of dynamics, which simply searches over the whole space of strategy profiles, provides no insight and is not useful in most practical situations. The point of the triviality is not the random choices but rather the essentially exhaustive trying of every possibility. Many other proposed “dynamics” for Nash equilibrium or for market equilibrium are similarly trivial — in some cases they resort to simply trying all possibilities (in some approximate sense). The dynamics for correlated equilibria are not like this at all — they only look at a tiny “small-dimensional” fraction of the space of possibilities. Why is that? Complexity theory explains this phenomenon clearly: correlated equilibria can be found in polynomial time, but finding a Nash equilibrium (or many types of market equilibrium) is PPAD-complete.
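To illustrate how lightweight such regret-based dynamics are compared to exhaustive search, here is a minimal sketch of my own. It uses the simpler “external-regret” variant (whose time averages converge to the no-regret, coarse-correlated set; the conditional-regret version of Hart and Mas-Colell is what converges to correlated equilibria). Each player keeps only a per-action regret counter and randomizes in proportion to the positive regrets; in the zero-sum matching-pennies game the time-averaged play approaches the 50/50 equilibrium:

```python
import random

def regret_matching(u1, u2, rounds=5000, seed=0):
    """External-regret matching in a 2x2 bimatrix game.

    Each player tracks, per action, the cumulative gain it would have made
    by always playing that action instead, and randomizes in proportion to
    the positive part of these regrets.  Returns each player's empirical
    frequency of playing action 0.
    """
    rng = random.Random(seed)
    regrets = [[0.0, 0.0], [0.0, 0.0]]   # regrets[player][action]
    counts = [0, 0]                      # times each player chose action 0

    def pick(reg):
        pos = [max(r, 0.0) for r in reg]
        total = sum(pos)
        if total <= 0:
            return rng.randrange(2)      # no positive regret: play uniformly
        return 0 if rng.random() < pos[0] / total else 1

    for _ in range(rounds):
        a = pick(regrets[0])
        b = pick(regrets[1])
        for alt in (0, 1):               # update foregone-payoff regrets
            regrets[0][alt] += u1[alt][b] - u1[a][b]
            regrets[1][alt] += u2[a][alt] - u2[a][b]
        counts[0] += (a == 0)
        counts[1] += (b == 0)
    return counts[0] / rounds, counts[1] / rounds

# Matching pennies: time-averaged marginals approach the 50/50 equilibrium.
mp1 = [[1, -1], [-1, 1]]
mp2 = [[-1, 1], [1, -1]]
print(regret_matching(mp1, mp2))
```

The state here is just a few counters per player, however long the dynamics run: nothing remotely like enumerating the space of strategy profiles.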