The Choices Are: Slick and Exponential, or Brute Force and Polynomial
Guest Post by Vijay Vazirani
A new, difficult problem taken up within TCS twelve years ago, that of finding good algorithms for computing market equilibria, has stretched the holy grail of TCS, namely polynomial time solvability, to its very limits: so much so that today, the two options listed in the title seem to be the only ones available! Let me quickly provide some background, say a bit about these options, and invite you to a discussion that, hopefully, goes right to the core tenets of our field.
The notion of market equilibrium, which asks for prices under which supply and demand are in parity, is central to economics. The Arrow-Debreu Theorem (1954), showing the existence of such prices for a realistic, complex model of the economy, is perhaps the most central result in economics. Yet this result has an unsatisfying aspect to it: whereas the notion is inherently algorithmic (Walras (1874), while defining it, also gave a mechanism, tâtonnement, for arriving at an equilibrium), the Theorem itself is highly non-constructive. Several top mathematicians, including Scarf and Smale, have given algorithms, but they are slow.
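To make the algorithmic flavor concrete, here is a minimal sketch of Walras-style tâtonnement on a toy exchange economy with Cobb-Douglas consumers; the model, the step size, and the function names are illustrative choices of this sketch, not anything from the Arrow-Debreu setting itself.

```python
import numpy as np

def tatonnement(alpha, endowments, steps=5000, eta=0.1):
    """Toy Walras-style price adjustment for an exchange economy with
    Cobb-Douglas consumers: consumer i spends a fraction alpha[i, j] of
    her budget p.w_i on good j.  Prices of over-demanded goods are raised,
    prices of under-demanded goods are lowered.  Purely illustrative."""
    total_supply = endowments.sum(axis=0)
    p = np.ones(alpha.shape[1])                    # start from uniform prices
    for _ in range(steps):
        budgets = endowments @ p                   # each consumer's income p . w_i
        demand = (alpha * budgets[:, None] / p[None, :]).sum(axis=0)
        excess = demand - total_supply
        p *= 1.0 + eta * excess / total_supply     # multiplicative update, keeps p > 0
        p /= p.sum()                               # prices are only defined up to scale
    budgets = endowments @ p
    demand = (alpha * budgets[:, None] / p[None, :]).sum(axis=0)
    return p, demand - total_supply

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alpha = rng.random((3, 4))
    alpha /= alpha.sum(axis=1, keepdims=True)      # spending fractions sum to 1
    endowments = rng.random((3, 4)) + 0.1
    prices, excess = tatonnement(alpha, endowments)
    print("approximate equilibrium prices:", np.round(prices, 4))
    print("residual excess demand:", np.round(excess, 6))
```

Cobb-Douglas markets satisfy gross substitutes, which is why this simple price-adjustment loop settles down; for general Arrow-Debreu markets no such easy procedure is known.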
Hence, bringing to bear powerful tools from our theory of algorithms was a fine goal. Work started with the easiest market models and gradually moved on to more realistic, complex models. At first, the primal-dual paradigm and interior point methods for convex programs did admirably well. However, when aspects of intractability showed up, attention shifted to the two options listed in the title.
“Slick” algorithms are in the style of the (pivoting-based) simplex algorithm, though suitably enhanced to the setting of the linear complementarity problem — these are called complementary pivot algorithms, e.g., the classic Lemke-Howson algorithm (1964) for 2-player Nash is one such. Such algorithms run very fast in practice — indeed simplex (1947) is still the method of choice for most LP applications — but one can make them take exponential time by intricately doctoring up the instance.
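For readers who want to experiment, here is a minimal sketch of running Lemke-Howson on a tiny bimatrix game, assuming the open-source nashpy Python package is available; the package and the toy payoff matrices are illustrative choices, not anything referenced above.

```python
import numpy as np
import nashpy as nash   # third-party package, assumed installed: pip install nashpy

# A small bimatrix game (payoffs chosen only for illustration).
A = np.array([[3, 1],
              [0, 2]])   # row player's payoffs
B = np.array([[2, 1],
              [0, 3]])   # column player's payoffs

game = nash.Game(A, B)

# Lemke-Howson follows a complementary pivoting path; different choices
# of the initially dropped label may reach different equilibria.
row_strategy, col_strategy = game.lemke_howson(initial_dropped_label=0)
print("row player strategy:   ", row_strategy)
print("column player strategy:", col_strategy)
```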
Let us say that an algorithm is “brute force” if it accomplishes the job of finding one solution by finding them all; it has no idea how to do any better. However, it does achieve polynomial running time if certain parameters are assumed constant, e.g., for CLIQUE, when restricted to graphs having cliques of size at most k (a constant), there is a trivial O(n^k) algorithm.
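Spelled out as code, the CLIQUE example looks like this (a minimal sketch; the graph is a toy instance):

```python
from itertools import combinations

def has_clique(adj, k):
    """Brute-force test for a clique of size k: try every k-subset of
    vertices and check all pairs.  Roughly n^k subsets, each checked in
    O(k^2) time: polynomial for any fixed k, hopeless in practice."""
    vertices = range(len(adj))
    return any(
        all(adj[u][v] for u, v in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

# Tiny example: a 4-cycle has cliques of size 2 (its edges) but none of size 3.
cycle4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
print(has_clique(cycle4, 2), has_clique(cycle4, 3))   # True False
```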
From the viewpoint of practicality, the slick algorithms win hands down: even for moderate instance sizes, the brute force ones take prohibitively long. However, there is another important criterion: the study of polynomial time algorithms has historically led to deep insights into the structure of the problems studied. Surprisingly enough, even on this count the slick, worst-case exponential algorithms do well, e.g., simplex leads to a proof of strong duality, and Shapley obtained deep structural properties of 2-Nash from the Lemke-Howson algorithm. So far, I am not aware of any such insights from brute force algorithms.
Smoothed analysis provides a fairly convincing theoretical reason for the success of simplex. Surely there should be a more general setting that encompasses complementary pivot algorithms.
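To make the smoothed-analysis viewpoint concrete, here is a hedged sketch of the experimental model: perturb an adversarially chosen LP instance with small Gaussian noise and measure the solver's cost on the perturbed instance. The helper solve_lp below is a hypothetical stand-in for whatever simplex implementation one prefers; it is not a real API.

```python
import numpy as np

def smoothed_trial(A, b, c, sigma, solve_lp, rng):
    """One trial in the smoothed-analysis model of Spielman and Teng:
    perturb every entry of the LP data (max c.x s.t. A x <= b, x >= 0)
    by independent Gaussian noise of standard deviation sigma, then run
    the solver on the perturbed instance.  `solve_lp` is a hypothetical
    stand-in for a simplex implementation returning some cost measure
    (e.g., its pivot count); it is not a real library call."""
    A_pert = A + sigma * rng.standard_normal(A.shape)
    b_pert = b + sigma * rng.standard_normal(b.shape)
    c_pert = c + sigma * rng.standard_normal(c.shape)
    return solve_lp(A_pert, b_pert, c_pert)

def smoothed_cost(A, b, c, sigma, solve_lp, trials=100, seed=0):
    """Empirical estimate of the smoothed cost of one adversarially
    chosen instance: average solver cost over random perturbations."""
    rng = np.random.default_rng(seed)
    return float(np.mean([smoothed_trial(A, b, c, sigma, solve_lp, rng)
                          for _ in range(trials)]))
```

Smoothed complexity is then the worst case, over instances, of this averaged cost; in practice one would wrap an actual simplex code and have it return, say, its pivot count.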
So we came full circle: we adopted poly time to avoid brute force and now we managed to find a way of doing brute force in poly time!
Surely this is against the spirit of poly time.
These results are meaningless.
They are easy to find, get into FOCS/STOC/SODA, get referenced and appreciated, so why are they meaningless?
Unfortunately you are right. They do get accepted and appreciated, even though there is nothing to appreciate. People are taking polynomial time too literally. We need a new notion that leaves out brute force and takes in slick. What is that? A good question for improving the “core tenets” of our field.
Are you saying exponential is better than polynomial time?
It's about worst-case exponential time, where such bad instances have to be artificially engineered.
I did spend some time trying to analyse a smoothed version of the Lemke-Howson algorithm, but didn’t make progress. If I ever get to review a paper on that topic, I promise to show maximal enthusiasm…
Thank you Paul and the rest for fine insights from many viewpoints. Now that I think about it, I had raised a similar issue in my Approximation Algorithms book, Section 8.3.1, about PTASs that do exhaustive search and yet are regarded as the “best” algorithms. Once again, these are not at all practical.
I tend to agree with Anon that we need to find a way to ensure that people don’t exploit a literal reading of polynomial time to sell these substandard results, which have no practical or theoretical merit.
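To make concrete the kind of PTAS being referred to, here is a sketch of a Sahni-style exhaustive-search PTAS for 0/1 knapsack; the specific algorithm and the toy instance are my illustrative choices, not the contents of Section 8.3.1. Guessing up to k items and completing greedily gives at least (1 - 1/(k+1)) of the optimal value in roughly O(n^(k+1)) time: polynomial for each fixed k, but hopeless for any accuracy one would actually want.

```python
from itertools import combinations

def ptas_knapsack(values, weights, capacity, k):
    """Sahni-style exhaustive-search PTAS for 0/1 knapsack (weights assumed
    positive): guess every subset of at most k items, then complete it
    greedily by value/weight ratio among items no more valuable than the
    cheapest guessed item.  The classical analysis gives a value of at
    least (1 - 1/(k+1)) * OPT, in time about O(n^(k+1)): polynomial for
    each fixed k, yet astronomically slow for any accuracy one wants."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best_value, best_set = 0, set()
    for size in range(k + 1):
        for guess in combinations(range(n), size):
            weight = sum(weights[i] for i in guess)
            if weight > capacity:
                continue
            chosen = set(guess)
            value = sum(values[i] for i in guess)
            cap_left = capacity - weight
            value_cap = min((values[i] for i in guess), default=float("inf"))
            for i in order:                     # greedy completion
                if i in chosen or values[i] > value_cap or weights[i] > cap_left:
                    continue
                chosen.add(i)
                value += values[i]
                cap_left -= weights[i]
            if value > best_value:
                best_value, best_set = value, chosen
    return best_value, best_set

# Tiny demo with k = 2, i.e., a guaranteed factor of 1 - 1/3 = 2/3.
values, weights = [60, 100, 120, 30], [10, 20, 30, 5]
print(ptas_knapsack(values, weights, capacity=50, k=2))   # (220, {1, 2})
```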
I was just informed by Dick Lipton that I have been scooped by him and Ken Regan: they have already christened “brute force” algorithms “galactic” and explored in depth the properties (or lack thereof) of such algorithms in their blog. I think my comment in my book still stands as a pioneering observation; as you can see, there is still credit at stake here 🙂
Hi, Vijay!
Actually your idea has separate conceptual import, as you say, and really spreading the credit around might require going to online stacks of Dokladys and page-searching for перебор (perebor, the Russian term for brute-force search). (Or ask Leonid Levin.) 🙂
Economics moved beyond general equilibrium about 30 years ago. Arrow-Debreu is something one studies in the first few weeks of grad school micro, but its relevance has diminished. To sell these results as something economists will be interested in is completely misleading.
What is mathematical economics studying today? Is that field dead too?
Every policy maker, all the way to the Treasury Secretary, talks and thinks in terms of supply & demand and equilibria. To say that these basics are irrelevant is completely misleading. And since the economists didn’t finish the job 40 years ago, what is wrong if computer scientists do it today?
Smoothed analysis, along with earlier work of Adler and Megiddo, Borgwardt, Megiddo, and Smale on probabilistic analysis of variants of the Simplex algorithm, gave very nice insight into why Simplex does so well in general. In contrast, it is striking and intriguing that Lemke-Howson is very unlikely to have polynomial smoothed complexity (Chen, Deng, and Teng, JACM 2009), but it's worth noting that this is true for any algorithm that solves bimatrix games, so that result is not just about a particular type of algorithm, but about the problem itself.
I absolutely agree with Vijay that it would be great to find an analogue of smoothed analysis that does address the performance of algorithms for complementarity problems. At the same time, I would also like to see more formal evidence to back up the statement that is often made: “Lemke-Howson is great in practice”. I have not seen too much evidence, just a couple of experimental studies, not all published. One problem is that we don’t really have a good library of interesting games.
The story for Simplex is completely different: there we have broad and deep experience using the algorithm on practically relevant instances. It would be great if more people were actually solving games and market equilibrium problems (in practice and/or research), which could then inform experimental studies, and potentially the definition of classes of input games which might be used to prove upper bounds on the performance of these algorithms (average or worst case).
Before the seminal work of Daskalakis, Goldberg, and Papadimitriou and of Chen and Deng, many people believed it might be possible to find an equilibrium of a bimatrix game in polynomial time, and so even artificially engineered hard examples for algorithms, especially for one as important as Lemke-Howson, were informative. I am still excited about the following two relatively new results, which I agree may not be of practical relevance, but which I believe are of theoretical importance:
P. W. Goldberg, C. H. Papadimitriou, and R. Savani (2013)
The Complexity of the Homotopy Method, Equilibrium Selection, and Lemke-Howson Solutions
ACM Transactions on Economics and Computation 1, article 9.
Y. Disser and M. Skutella (2013)
In defense of the Simplex Algorithm’s worst-case behavior
http://arxiv.org/abs/1311.5935
These papers show that the Lemke-Howson algorithm can actually solve PSPACE-complete problems (!), and that the Simplex algorithm can solve NP-hard problems, respectively (with the solution, in the case of the Simplex algorithm, encoded in the computation trace, obviously not in the LP solution, which can be found in polynomial time). These results highlight just how powerful these algorithms are, and they reinforce the importance of understanding what types of, or restrictions on, inputs are important.
Now a(nother) shameless plug. With Ted Turocy and Bernhard von Stengel, we have been working on game solving software. We are very keen to help researchers solve game instances (often with pivoting algorithms). Below are some relevant links. If the games are too big for the web version, let us know and we may be able to help. It would be great to know what games/equilibrium problems researchers in other fields would like to solve.
http://gambit.sourceforge.net/
http://gte.csc.liv.ac.uk/index/
http://gte.csc.liv.ac.uk/gte/builder/
http://banach.lse.ac.uk/
In trying to figure out an approach to understanding “slick exponential”, I have the following question. When a problem is “hard” but the reduction is long, complicated, and hard to discover, that may be taken as a hint that the problem is not as hard as one for which it is very easy to construct a reduction. I would like to know whether there has been any work that might formalize this intuition. (Note that, perhaps unfortunately, this question misses out on the smoothed analysis aspect, since it's not specific to problems whose instances are described using numerical values.)
Hi Paul, interesting thought, but I have not seen this for any problem, i.e., a case where the length or difficulty of a reduction implies that the problem is easier. Of course, if the reduction takes more time, e.g., is super-polynomial, this can happen, and I seem to remember it happening, but I can't recall for which problem.
Hi Ken, “credit” was just a joke, and I don't need any. I will be happy if the algorithms community recognizes Galactic P and Galactic PTAS for what they really are: valuable in certain senses (e.g., a Galactic PTAS implies that you can't use the PCP Theorem to show hardness of approximation), but not something to which any practicality should be ascribed. Also, one should not start claiming to have settled a fundamental issue in the theory of algorithms with a Galactic P algorithm; once you resort to exhaustive search, you can achieve a lot for free! (I was bothered by one such claim, which is what made me write this blog post.)
Isn't it a shame that CS has to remind Econ of its fundamentals? In fact, take away the Nobel Prize, and what is left? People with five conflicting opinions about every problem and no solutions!
It goes deeper, if you look deeper. The new generation, which has been raised on an extremely poorly planned publication structure, with no conferences and cliquish journals, is devoid of imagination and cannot appreciate a new idea, no matter how great. I wouldn't trust my economic well-being to them. Sorry.
Thanks for such an interesting post. It is nice to see economics, one of the oldest branches of science, being revisited by computer science.