One of the main culture clashes between computer scientists and economists on the CS-econ frontier is whether “complexity of equilibria” matters. The CS-y view of the matter is captured in Kamal Jain’s quote: “*If your laptop can’t find it then neither can the market*“. Economists mostly don’t care, since they see equilibrium reached every day, contrary to what CS says. As Jeff Ely quips: “*Solving the n-body problem is beyond the capabilities of the world’s smartest mathematicians. How do those rocks-for-brains planets manage to pull it off?*” TCS folks, who see complexity as the map of the world, can’t really understand this indifference, as Lance Fortnow tweets: “*I’m an economist so I can ignore computational constraints / I’m a computer scientist, so I can ignore gravity.*”

The beautiful thing about studying systems at equilibrium is precisely the fact that this abstracts away the gory details of the process (aka dynamics, aka algorithm) of reaching this equilibrium. In principle, there are many different difficult-to-understand dynamics that all reach the same easy-to-understand equilibrium point. This is all very nice as long as the equilibrium is indeed magically reached somehow by the “real” dynamics. The obvious crucial question is whether this is the case in practice. It seems that the general tendency among economists is to claim “yes”, to which the natural next question is “why?”. As computer scientists have shown, this is not a general characteristic of large games or markets. Understanding the properties of interesting games and markets that make them actually reach equilibrium should be enlightening. Maybe it is because economists choose to consider only those that do turn out to converge quickly, ignoring another large and interesting class of strategic scenarios? Maybe it is because economists are thinking about “smallish” games, and so their insights will not carry over to more complex realistic scenarios? Maybe there is some deeper interesting structure that guarantees fast convergence? Distinguishing between these possibilities is especially crucial as we aim to analyze the new artificial games and markets that are to be found — and designed — on the Internet as well as elsewhere. Which economic and game-theoretic sensibilities will still hold in these complex unnatural circumstances?

Complexity is all about understanding such processes. While the foremost question dealt with by computational complexity is that of “time” — how long a computational process needs in order to find the solution, in our case to reach (close to) equilibrium — this is not the only type of question or insight provided by complexity theory. As one can see in the “map above”, there is a stunning variety of complexity classes, each attempting to capture a different facet of the challenge of finding the solution: how much space (memory) is needed? Can we even tell when we reach a solution? Does randomization help? Does parallelism help? Are approximations easier? Does the solution have this or that particular structure? In the case of finding equilibria, the classes PPAD and PLS give a very nuanced explanation of what is involved. There are also “concrete” models that explicitly study specific parameters such as communication or queries. One may dislike the fact that this complexity analysis does not restrict attention to natural dynamics but allows arbitrary and unnatural algorithms. The kind of natural dynamics that economists usually have in mind is some kind of best-replying in the case of games and some kind of Walrasian auctioneer in markets. The problem is that there are many variants of these that make sense: fictitious play, various forms of population dynamics, more sophisticated types of learning such as regret-minimization, and all of these can be enhanced with various orderings, smoothing attempts, tie-breaking rules, strategic look-ahead, re-starts, actions of the central planner, not to mention other more or less complex optimization and learning attempts. The strength of complexity analysis is that it applies to *all* of these. Any “lower bounds” are definitive: any practical system can be simulated by a computer, and thus *no* dynamics can succeed in general.
(Emphasis on “in general” — as mentioned above, the problems that you are interested in may have special structure — so what is it?) A statement of an “upper bound” may be less interesting as stated, but it immediately raises the challenge of either finding a natural algorithm=process=dynamic or pinpointing the complexity reason explaining why this is impossible.
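To make the contrast concrete, here is a minimal Python sketch (all names are my own) of the simplest natural dynamic mentioned above, simultaneous best-replying, played on Matching Pennies; even in this tiny 2x2 game it cycles forever and never reaches the unique (mixed) Nash equilibrium:

```python
# Simultaneous best-reply dynamics on Matching Pennies (zero-sum, 2x2).
# Illustrative toy: best-replying cycles and never settles at a fixed point.

PAYOFF = [[1, -1],
          [-1, 1]]  # row player's payoff; the column player gets the negation

def best_reply_row(col_action):
    # Row player picks the row maximizing her payoff against the column's action.
    return max((0, 1), key=lambda r: PAYOFF[r][col_action])

def best_reply_col(row_action):
    # Column player minimizes the row player's payoff (zero-sum game).
    return min((0, 1), key=lambda c: PAYOFF[row_action][c])

def run(steps=12, start=(0, 0)):
    profile, history = start, [start]
    for _ in range(steps):
        r, c = profile
        profile = (best_reply_row(c), best_reply_col(r))
        history.append(profile)
    return history

history = run()
print(history)  # cycles with period 4; no profile is ever a fixed point
```

The run never halts at a profile where both players are simultaneously best-replying, which is the textbook illustration that even the most natural dynamic need not converge, let alone quickly.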

This is a good point to refute several irrelevant objections to the applicability of computational complexity for analyzing the dynamics of games and markets. The first is the notion that humans or markets can undergo processes that cannot be computed. Frankly, there is no evidence of this; there is certainly much about the world that we do not understand well enough to simulate, but there is no indication of any natural process that is inherently more powerful than computers. This is the modern version of the Church-Turing thesis. (It seems that some quantum phenomena cannot be simulated by usual computers, but that doesn’t change the principle — it would just replace classical computers with quantum ones in the few places where it matters.) Even if you do subscribe to metaphysical dualism, do you want to base economics on it? The second type of objection concerns the standard technical notions used in complexity, which obviously leave much wiggle room: “polynomial time”, with its hidden constants, is not synonymous with efficient; worst-case analysis is not always the right notion; etc. The point is that these are simply concise notions that usually seem to capture the issues well. There are always cases where more refined notions are needed, and in such cases complexity theory can provide more precise answers: for example, in the analysis of basic problems like sorting or matrix multiplication, very exact results are obtained (with no hidden constants); similarly, cryptography is not based on worst-case analysis; etc. There is no indication so far that the usual, somewhat coarse notions of computational complexity miss something significant when applied to games or markets — quite the contrary, in fact. If such evidence emerges then complexity will not become irrelevant; simply, more refined analysis will be used.

Take for example the stunning difference between reaching a Nash equilibrium and reaching a correlated equilibrium. While the latter is reached by various natural regret-minimization dynamics, there are no “non-trivial” dynamics that, in general, reach the former. Let me say a word about this “non-triviality” by giving an example of what I would consider a trivial process: suppose that each player chooses a strategy at random every “turn”, unless his last strategy was already a best reply to those of the others (up to some indifference). At some time, the players will happen to “hit” an (ε-)equilibrium. This type of dynamics, which simply searches over the whole space of strategy profiles, provides no insight and is not useful in most practical situations. The point of the triviality is not the random choices but rather the fact that essentially every possibility is tried. Many other proposed “dynamics” for Nash equilibrium or for market equilibrium are similarly trivial — in some cases they resort to simply trying all possibilities (in some approximate sense). The dynamics for correlated equilibria are not like this at all — they only look at a tiny, “small-dimensional” fraction of the space of possibilities. Why is that? Complexity theory explains this phenomenon clearly: correlated equilibria can be found in polynomial time, but finding a Nash equilibrium (or many types of market-equilibrium) is PPAD-complete.
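As a concrete illustration, the “trivial” process just described can be written down directly. Below is a toy pure-strategy sketch in Python, with all names my own: every player redraws a random strategy each turn unless already (ε-)best-replying, so the process halts only by blindly stumbling onto an equilibrium profile.

```python
import random

def trivial_dynamics(payoff, n_players, n_strategies, eps=0.0,
                     max_turns=100_000, seed=0):
    """Blind search: each player redraws a random strategy each turn unless
    already eps-best-replying. Halts only by hitting a pure eps-equilibrium.
    payoff(i, profile) returns player i's payoff at the given pure profile."""
    rng = random.Random(seed)
    profile = [rng.randrange(n_strategies) for _ in range(n_players)]
    for turn in range(max_turns):
        new_profile, moved = list(profile), False
        for i in range(n_players):
            # Best payoff player i could get by deviating unilaterally.
            best = max(payoff(i, tuple(profile[:i] + [s] + profile[i + 1:]))
                       for s in range(n_strategies))
            if payoff(i, tuple(profile)) < best - eps:
                new_profile[i] = rng.randrange(n_strategies)  # blind redraw
                moved = True
        if not moved:
            return tuple(profile), turn  # happened to "hit" an equilibrium
        profile = new_profile
    return None  # gave up

# A 2-player coordination game: payoff 1 if the players match, else 0.
def coordination(i, profile):
    return 1.0 if profile[0] == profile[1] else 0.0

print(trivial_dynamics(coordination, n_players=2, n_strategies=2))
```

Even when this halts quickly, it does so by sampling the whole profile space rather than exploiting any structure of the game, which is exactly the sense in which such dynamics provide no insight.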

on November 28, 2009 at 12:07 am | Joe: The claim that worst-case complexity is a concise notion that seems to capture the issues well is surprising.

Many natural problems (e.g. protein folding) have NP-complete formulations and yet are “solved” by nature regularly.

Furthermore, many NP-complete problems are routinely solved by computer programs: integer linear programming, SAT solving, scheduling problems, etc.

(Not to mention the simplex algorithm, whose versions that actually run in practice are provably exponential in the worst case.)

Worst-case complexity analysis is usually the best we can do. For that reason, it is important to do it.

Nevertheless, claiming that it captures the world well is far-fetched and requires strong arguments.

It seems entirely possible that what we see in the markets today comes from some distribution on which finding equilibrium is not computationally hard.

While it is true that more refined analysis is needed, we do not know how to do it.

Furthermore, an economist should not necessarily wait for us to catch up with finding a refined analysis.

Economists should describe the world the best way they can. If it turns out that the world “computes” solutions to hard problems, it is up to us to explain the discrepancy.

on November 28, 2009 at 6:07 am | noamnisan: You seem to have missed my point. It is not that if a problem is NP-complete then it can’t be solved in reality — it is just that there must be an explanation for this, which usually means understanding the special property of the realistic problems that makes them easy. Protein folding by nature would certainly be a good example, where the mystery is strengthened by the basic complexity results and where we should eventually be able to understand the special physical/chemical properties that make the dynamics converge. LP is actually a good example of the rare case where there was always ample evidence of a gap between the basic complexity results and reality; the recent, more refined smoothed analysis seems to close the gap.

No one is suggesting not to study equilibria in cases where we don’t understand the dynamics leading to them. The point is that exactly in such cases studying the dynamics is doubly interesting and would usually mean understanding something new about the problem itself, not about complexity theory.

on November 28, 2009 at 1:03 am |Tweets that mention Economists and Complexity « Algorithmic Game Theory -- Topsy.com[…] This post was mentioned on Twitter by Lance Fortnow, Maryland Thermoform. Maryland Thermoform said: "Economists and Complexity « Algorithmic Game Theory" http://tinyurl.com/yb5kbhr MDThermo […]

on November 28, 2009 at 1:34 am | D. Eppstein: From my naive point of view, it seems likely that worst-case analysis may make more sense in economics than in scientific applications, because if there’s a worst case to exploit then one can expect the arbitrageurs to eventually find it and exploit it.

on November 28, 2009 at 3:31 am | alex: I thought economists typically modeled things using very simple games, for which computing the equilibrium is typically possible.

on November 28, 2009 at 6:11 am | noamnisan: Indeed so. The question is what happens with the larger games of reality. If convergence depends crucially on smallness, then the small models don’t say much. This does not seem to be the common case: good “small” economic models also explain larger realistic games.

on November 28, 2009 at 5:52 am | Vijay Vazirani: “but finding a Nash equilibrium (or many types of market-equilibrium) is PPAD-complete”

Only 2-player Nash is PPAD-complete. For 3 or more players, it is FIXP-complete — a class due to Etessami and Yannakakis (FOCS 2007).

Problems in PPAD always have rational solutions, but a 3-player Nash instance may have only irrational solutions (Nash himself gave such an example).

on November 28, 2009 at 6:18 am | noamnisan: As usual with continuous quantities, some notion of approximation is implied. I avoided getting into this issue since I don’t believe that it is crucial here, but see a previous post on approximation: https://agtb.wordpress.com/2009/06/07/approximate-nash/

on November 28, 2009 at 7:42 am | Vijay Vazirani: Thanks Noam, and agreed.

Perhaps I should have stated my main point differently — that FIXP is an important class for understanding the complexity of equilibria, but it needs to become better known and more widely explored.

on November 28, 2009 at 11:18 pm | Kamal Jain: If some problem is hard complexity-wise, but somewhere some system solves its instances efficiently, then it means that we should either give up the Church-Turing thesis, or question the complexity class, or try to understand what kind of instances that system is getting.

So if scientists believe that economic systems solve instances of a computationally hard problem, that may mean our scientific understanding is lacking about what kinds of situations arise in practice to yield solvable instances of a hard problem. Could scientists in general, and economists in particular, say: “hey, in practice we do not get general instances of the Nash problem, but usually get instances which have these additional attributes that make them easy to solve”?

One goal of any theoretical discipline is to build a conceptual understanding of the real world. If something is computationally hard but the real world solves it, what does that mean? It means that we are still lacking a conceptual understanding of an important dimension of the real world.

on November 29, 2009 at 12:17 am | y: I think Alex’s point is valid to some extent. He is saying that it is important for a subset of economists and computer scientists to think hard about the special cases of equilibrium problems that seem to arise in practice and enable them to be solved fast. However, the practicing economist could/should continue to use the notion of equilibria without constantly worrying about computational complexity issues.

on November 29, 2009 at 3:07 am | JK: Excuse the question (I know very little about economics), but what is an example of a real-world, large-scale system that one can indisputably claim has come to equilibrium? Global and national markets (stock markets, commodities markets, currency markets, …) fluctuate daily, sometimes by significant amounts. Occasionally this can be attributed to a change in information, but usually not. Do (experimental) economists really believe the stock market is in equilibrium? (I know about the efficient market hypothesis, which makes for nice theory. What I’m wondering about is how anyone justifies that hypothesis in practice.)

on December 29, 2009 at 10:42 pm | Stephen: To say something has come to an equilibrium definitively, we would need to know what each of the participants is maximizing. Indisputability is too high a bar for large real-world systems. Because of this, most of the work has been done in experimental settings. For markets for physical goods, Vernon Smith did good experimental work on convergence to equilibrium with participants having only limited information.

The real-world example that I think would be easiest to go after would be equilibrium in routing games. Looking at traffic flows after a major road or bridge closure for instance could be informative.

on November 29, 2009 at 3:08 am | Paul Beame: In standard models for Nash equilibria (even approximate ones), one assumes that the players have perfect information about the game. Without perfect information, the difference between PPAD-hard (finding strategies approximately in equilibrium) and FIXP-hard (approximating an actual equilibrium) seems to go away.

Suppose that one does not have perfect information and the payoffs given are only approximate. It is certainly no harder to compute strategies that are an approximate equilibrium for some possible payoffs consistent with the input. Can it be significantly easier?

on December 24, 2009 at 10:56 pm | Gil Kalai: This is an interesting discussion and I am sorry I just saw it. Here are my 2 cents anyway.

1. I certainly agree that economic systems (as well as other systems in nature) cannot “solve” computationally infeasible problems. It is an interesting question whether computational complexity considerations of this type should play a role in modeling economic or, say, biological/physical phenomena.

2. For modeling in science, as well as in economics, computational complexity is a sort of second-order matter. If you want to predict the value of a stock in a week based on some data you have today, the first-order question is the quality of the solution you can give based on the data, even with unlimited computational power. The (related) question of computational complexity comes next.

3. It is possible that if you have the full data (and a complete understanding of the mechanism) the process is computationally feasible, but when you need to optimize over some hidden variables the process becomes computationally infeasible.

4. When it comes to game theory, there are much deeper issues that may lead to skepticism concerning the relevance of computational complexity considerations. Game theory (unlike closely related fields like statistics and optimization) has hardly led to computations that are relevant to predictions or precise normative suggestions. You may argue that the value of zero-sum games has descriptive and normative value, but when you go beyond that (say, to Nash equilibrium or correlated equilibrium) these are hardly ever used for any computations at all.

Game theory is mainly used to conceptually understand (or discuss, at least) various issues in strategic behaviors.

5. Of course, I think that the quantitative (mathematical) study of various aspects of game theory and economics, including the computational complexity aspect, and also various other mathematical aspects like dynamics, learnability, stability, and more, is interesting (which is why I am doing it). Such a study can contribute to game theory/economics and can also bear fruit in math/TCS.

Also, the computer/Internet revolution gives wonderful new opportunities to experiment and gather empirical data on strategic behavior. This was not possible in the early days of game theory.

The main bottleneck remains: Putting complexity aside (and even in small cases where we can compute everything), we do not have and are not close to having models that give quantitative computational tools for either predicting strategic behavior, or suggesting such a behavior.

on July 27, 2010 at 5:52 pm |Rowers, Funding, Metropolis, and Equilibria - The Quantum Pontiff[…] discussion of computational complexity and economics. See especially the comment by […]

on October 24, 2010 at 10:38 pm |Revealed Preferences and Complexity « Algorithmic Game-Theory/Economics[…] of such a difference and more over argue that this difference is at the root of economists’ refusal to view excessive computational complexity as a critique of an equilibrium…. The “usual” argument against taking worst-case complexity results seriously is that […]

on January 1, 2011 at 8:49 am |AGT/E 2010 year in review « Algorithmic Game-Theory/Economics[…] the core algorithmic/complexity questions of the computational difficulty of finding equilibria in games and markets we have seen only little progress, mostly for some market variants. There […]


on May 6, 2013 at 8:47 pm |Reasons to care: In honor of Scott Aaronson | Windows On Theory[…] mathematical deduction systems, or computation by natural systems (see workshop above) or economic markets, has been a surprising and central phenomena, validating the formal study of Turing machines or […]

on May 7, 2013 at 10:09 am | Artem Kaznatcheev: Awesome post!

When trying to convince economists of the relevance of complexity theory in print, is there a standard reference to cite that makes these sorts of arguments? I am familiar with Roughgarden’s survey, but it still seems a bit too computer-science-y to convince economists. Is there something that is specifically targeted at economists, uses their language, and (preferably) is published in places they read? In a similar vein, do you know of any surveys or historical overviews of how computer scientists introduced intractability results into economics and how economists responded? These would be great sources to have!

I ask this because I am currently struggling with how to convince evolutionary biologists to care about computational complexity, and a lot of the same arguments that economists give come up there as well.

on May 24, 2013 at 7:00 am |Game Theory Links 5-24-13 - Mind Your Decisions[…] Economists and Complexity by Noam Nisan […]


on November 20, 2013 at 7:00 pm |Why economists should care about computational complexity | Question and Answer[…] relevance of complexity theory in print, is there a standard reference to cite? I am familiar with Noam Nisan’s blog post, Tim Roughgarden’s survey, and chapter 11 of Scott Aaronson’s essay. These posts are […]


on June 18, 2016 at 1:00 pm | Mike Morgan: Regarding nature’s efficient ‘computation’ of the outcome of n-body problems, it would not surprise me if the most efficient way of ‘simulating’ some phenomena were to sit back and watch them take place. There is also an issue of space complexity. Nature always has enough space to solve the instances that it itself computes and represents. This is not the case with more abstract problems.
