Posts Tagged ‘Theory vs. Practice’

Eran Shmaya, in a beautiful recent blog post, adds some sociological angles to the economists' never-ending introspection about whether their field is a science, a discussion whose usual starting point is Popper's falsifiability.  The debate about whether economic theory has any significant predictive power has certainly picked up lately due to the financial crisis, which convinced many people that the answer is "No".  (A variant, taken by many micro-economists, holds that micro is fine and only macro-economics is a superstition.)

Whenever I hear these discussions about the predictive power of economic theory, I always think about the irony that, as an algorithmic game-theorist, I am mostly interested in using economic theory as a branch of mathematics…  Arrow's theorem is simply true, not falsifiable.  So is Myerson's expression for optimal auctions, as well as many other mathematical tidbits that we, computer scientists, pick up from economic theory.  We don't really care how well these theorems relate to human situations, since we use them in different circumstances, those related to computer systems.  I often wish that theoretical economists would allow themselves to stray even farther away from the realities of their human subjects.
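
As a concrete example of the kind of "mathematical tidbit" meant here, this is the standard single-item form of Myerson's characterization (my paraphrase of a textbook statement, assuming independent private values v_i drawn from distributions F_i with densities f_i):

    % Virtual value of bidder i (Myerson, 1981):
    \varphi_i(v_i) \;=\; v_i - \frac{1 - F_i(v_i)}{f_i(v_i)}
    % Under the regularity assumption that each \varphi_i is nondecreasing, the
    % revenue-maximizing auction gives the item to the bidder with the highest
    % nonnegative virtual value (charging the threshold bid), and its expected
    % revenue is  E[ max(0, max_i \varphi_i(v_i)) ].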

This is not to say that we, in algorithmic game theory, need not worry about how well our theories correspond with reality.  We should.  We have not really started doing so, and it is high time we did.  But our concerns may be more like those of engineers than those of scientists.  From economics, I'm happy to just take the theory…

Read Full Post »

My nephew just started his undergraduate degree in Math at the Technion.  When his grandfather, a high-powered engineer (whom I sometimes turn to for solving a differential equation that I need solved), asked him what they learned in their very first calculus lesson, the answer was "supremum".  The grandfather was mystified, claiming that "supremum" was not part of the math curriculum at all when he got his engineering degree half a century ago (also at the Technion, it turns out).  While this can hardly be literally true, it does highlight the difference between the "abstract" approach to math and the "applied" approach that he got as an engineer.  From an abstract point of view, it makes much sense to start with the notion of supremum, whose ubiquitous existence essentially defines the real numbers, while I suppose that a more applied approach will hardly emphasize such abstract technical hurdles.  When I studied math at the Hebrew University, we spent an enormous amount of time on such abstract notions, to the point of ending the first year with hardly any practical ability to compute a simple integral but with a pretty good grasp of the properties of measure-zero sets (yes, in the first year).
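
For readers wondering what the fuss is about, the point is the least-upper-bound (completeness) property, which I would state roughly as follows (my wording, of course, not the Technion syllabus):

    % Completeness: every nonempty subset of the reals that is bounded above
    % has a least upper bound (supremum) in the reals:
    \emptyset \neq S \subseteq \mathbb{R},\ S \text{ bounded above}
        \;\Longrightarrow\; \sup S \text{ exists in } \mathbb{R}
    % This is precisely what separates \mathbb{R} from \mathbb{Q}: the set
    % \{ q \in \mathbb{Q} : q^2 < 2 \} is bounded above but has no supremum in \mathbb{Q}.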

In my department, we have an ongoing debate about the type of math that our CS students should get.  Traditionally, we have followed the extremely abstract tradition of our math department, which emphasizes the basics rather than applications (even in the "differential equations" course, students rarely solve any, but more often prove existence and such).  In the last decade or so there has been a push in my department to shift some focus to "useful" math too (like being able to actually use the singular value decomposition or the Fourier transform, as needed in computer vision, learning theory, and other engineering-leaning disciplines).  I'm on the losing side of this battle and am against this shift away from the "non-useful" fundamentals.  The truth is that most computer scientists will rarely need to use any particular piece of "useful" math.  What they will constantly need is "mathematical maturity": being comfortable with formal definitions and statements, being able to tell when a proof is correct, knowing when and how to apply a theorem, and so on.
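
To make the "useful math" side of the debate concrete, here is a minimal sketch (in Python with NumPy; my illustration, not part of any actual syllabus) of the kind of computation such a course would emphasize: using the singular value decomposition to build a low-rank approximation of a matrix.

    import numpy as np

    def low_rank_approx(A, k):
        """Best rank-k approximation of A (by the Eckart-Young theorem)."""
        # Thin SVD: A = U @ diag(s) @ Vt, with singular values s in decreasing order.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        # Keep only the k largest singular values and their singular vectors.
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    A = np.random.rand(100, 50)
    A5 = low_rank_approx(A, 5)
    print(np.linalg.matrix_rank(A5), np.linalg.norm(A - A5))  # rank 5, approximation error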

Read Full Post »

In the last month, Brendan Lucier has uploaded three conceptually related papers to the arXiv (two of them with co-authors):

  1. Beyond Equilibria: Mechanisms for Repeated Combinatorial Auctions by Brendan Lucier
  2. Bayesian Algorithmic Mechanism Design by Jason D. Hartline and Brendan Lucier
  3. Price of Anarchy for Greedy Auctions by Brendan Lucier and Allan Borodin (to appear in SODA 2010)

All three papers aim to improve the approximation ratios obtained by computationally efficient mechanisms.  Each of them gives pretty strong and general results, but at a price: relaxing the notion of implementation from the usual one in CS, namely dominant strategies.  Specifically, the notions used in these three papers are, respectively (the third notion is written out formally right after the list):

  1. “We consider models of agent behaviour in which they either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies.”
  2. “We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics).”
  3. “We study the price of anarchy for such mechanisms, which is a bound on the approximation ratio obtained at any mixed Nash equilibrium.”
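
For readers less familiar with the third notion, the price of anarchy in a welfare-maximization setting is usually written as follows (my summary of the standard definition, not a quote from the paper):

    % Let OPT be the optimal social welfare and Eq the set of (mixed) Nash
    % equilibria of the game induced by the mechanism. Then
    \mathrm{PoA} \;=\; \frac{\mathrm{OPT}}{\min_{\sigma \in \mathrm{Eq}} \; \mathbb{E}_{s \sim \sigma}\!\left[ \mathrm{SW}(s) \right]}
    % so a price of anarchy of c guarantees that every equilibrium attains at
    % least a 1/c fraction of the optimal welfare, i.e., a c-approximation.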

In general, I like the strategy of relaxing the required notion of implementation or equilibrium in order to get strong and general results.  This is especially true in the context of algorithmic mechanism design, for a few reasons: (1) the concept used in CS seems to be so strict that simply not enough is achievable; (2) the CS point of view is in any case much stricter than the Bayesian-Nash notion commonly used by economists; (3) much less seems to be needed in practice in order to ensure cooperative behavior (e.g., people just use TCP truthfully despite the clear opportunities for beneficial manipulation).  The difficulty with the relaxation strategy is that there are many possible ways of relaxing the game-theoretic requirements, and it takes some time to really be convinced which notions are most useful, and in which situations.  I would love to see more community discussion of these as well as other notions of relaxed implementation and equilibrium.

Read Full Post »

An NSF-targeted workshop on Research Issues at the Interface of Computer Science and Economics has just taken place at Cornell, attended by many central people in the area (but not by me), including several bloggers who reported on it (Lance; Muthu, in three posts: a, b, and c; and Rakesh). This is another one in a sequence of workshops where economists and computer scientists meet to exchange ideas, results, and points of view.

A fruitful issue at this interface is taking computational complexity into account in economic and game-theoretic settings.  This has received much attention (e.g. in Rakesh's post), and while it certainly is a key issue, I disagree that it is the core issue of the interface.  I believe that even more interesting is the combination and interplay of the different research attitudes of the two communities, a combination that is needed for studying the future economic issues brought about by technological advances.

I believe that much of the difference in attitudes comes from economists' emphasis on "what is" vs. the CS emphasis on "what can be".  Even theoretical economists try to talk about the real world; their examples and motivations are real, existing industries and situations, and their philosophical point of view is mostly scientific: understand the world.  Computer scientists prepare for the future, knowing that most of what exists today will soon become obsolete.  Our motivations and examples often assume a few more years of the steady march of Moore's law.  These attitudes are the outcome of the different situations of the two disciplines: economic work that did not correspond with reality was eventually ignored, while CS work that was not forward-looking became obsolete.  This distinction is obviously somewhat simplistic and exaggerated, but it does seem like the general spirit of things.  Indeed, the sub-area of economics that was most natural for computer scientists to "blend with" was mechanism design, which, unusually for economics, has an engineering spirit of "what can be".  I feel that many of the technical and cultural differences between the communities have their roots in this difference.

Economists and game theorists take their models much more seriously and literally than computer scientists do.  While an economist's model should correspond to some real phenomenon, a CS model should correspond to a host of unknown future situations.  A CS reader is more willing to take the leap of imagination between the model as stated and various unknown future scenarios.  In compensation, the CS reader expects a take-away that transcends the model, often found in the form of mathematical depth.  An economics reader is much more skeptical of a model and demands specific justification, but on the other hand, mathematical depth is not seen as a feature but rather as a nuisance to be buried in the appendix.

The Bayesian point of view of economists vs. the worst-case point of view of computer scientists has a similar root, I think.  In a situation that already exists, there is obviously some empirical distribution, and it is hard to argue against taking it into account.  In a situation that doesn't yet exist, there is no empirical distribution, so in order to prepare for it we need to either theoretically guess the future distribution or prepare for the worst case.  CS has learned that the first option does not work well.
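
A toy illustration of the contrast (my own example): selling a single item to a single buyer at a posted price.

    % Bayesian view: if the buyer's value is drawn from a known distribution F,
    % pick the price maximizing expected revenue,
    p^{*} \;=\; \arg\max_{p} \; p\,\bigl(1 - F(p)\bigr),
    % e.g., for F uniform on [0,1] this gives p^{*} = 1/2 and expected revenue 1/4.
    % Worst-case view: no F is assumed; the mechanism must come with a guarantee
    % that holds for every possible value (e.g., a competitive ratio against some
    % benchmark), since there is no distribution to plug in.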

There are other high-level differences I can think of, but let me give two specific ones.  How does one interpret a 4/3-approximation result in a network setting?  While an economist will certainly consider a loss of 33% to be a negative result (as it would be in almost any existing scenario), a computer scientist will view it as giving up a few months of technological advance, a negligible cost compared to the other difficulties facing the adoption of anything new.
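
Spelling out the back-of-the-envelope arithmetic behind "a few months" (my calculation, under the rough assumption of a doubling every 18 months):

    % A multiplicative loss of 4/3 corresponds to
    \log_2\!\left(\tfrac{4}{3}\right) \times 18 \;\approx\; 0.415 \times 18 \;\approx\; 7.5 \text{ months}
    % of Moore's-law growth, i.e., roughly the gain of half a technology generation.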

In mechanism design, computer scientists focus on the simplest model (dominant-strategy implementation; independent private values; risk-neutrality) and spend most of their efforts exploring the limits of the model, inventing new mechanisms or proving impossibility for various scenarios that are obviously unrealistic today.  Economic theory, on the other hand, has moved on to more sophisticated models (risk aversion, dependent values, etc.), and is not as concerned with the limits of the model until a real application, like spectrum auctions, emerges.

I think that the future exploration of the interface of economics and CS should not only combine notions and results from both fields, but also combine attitudes and points of view.  The latter is even more difficult to do well.

Read Full Post »

Wired magazine published a (rather enthusiastic) popular article on Hal Varian's role as Google's chief economist.  It briefly mentions that Microsoft now has Susan Athey in a similar role.

Read Full Post »

I recently looked again at Ausubel's multi-unit auction paper, and really liked the way its first paragraph crystallizes the main lessons of auction theory:

The auctions literature has provided us with two fundamental prescriptions guiding effective auction design. First, an auction should be structured so that the price paid by a player—conditional on winning—is as independent as possible of her own bids (William Vickrey, 1961). Ideally, the winner’s price should depend solely on opposing participants’ bids—as in the sealed-bid, second-price auction—so that each participant has full incentive to reveal truthfully her value for the good. Second, an auction should be structured in an open fashion that maximizes the information made available to each participant at the time she places her bids (Paul R. Milgrom and Robert J. Weber, 1982a).

I would say that this is useful practical insight gained from studying theory.
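
As a minimal sketch of the first prescription (my illustration, not taken from Ausubel's paper): in a sealed-bid second-price auction the winner's payment is determined entirely by the opposing bids, which is exactly why truthful bidding is a dominant strategy.

    # A minimal sketch of a sealed-bid second-price (Vickrey) auction for one item.
    def second_price_auction(bids):
        """bids: dict mapping bidder name -> bid.  Returns (winner, payment)."""
        assert len(bids) >= 2, "need at least two bidders"
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        payment = ranked[1][1]  # highest opposing bid; independent of the winner's own bid
        return winner, payment

    # Whatever Alice bids (as long as she wins), she pays the highest opposing bid,
    # so she cannot gain by misreporting her value.
    print(second_price_auction({"Alice": 10, "Bob": 7, "Carol": 4}))  # ('Alice', 7)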

Read Full Post »

Another round in the everlasting soul-searching of the TCS community about its connection with the real world has lately been going on in the "usual" theory blogs (by Mihai, Bill, and Michael, spurred by a workshop on theory and multi-cores) and also by Mark, a self-professed outsider. Among other issues that he points out, Mark complains about "people who devise complicated (never implemented) algorithms, with basic big-O analysis only, and obsession over the worst case" and asks: "how many researchers who devise new algorithms actually have them implemented?"  Mihai basically argues (in the multi-core case) against being too practical: "there is really no point wasting our time on current technology which is clearly bound to be replaced by some other technology in the next 60-1000 years", but cannot refrain from mocking "too theoretical" theory (with a stab at algorithmic game theory too): "Which theory community? … The one that thinks Amazon should be happy with an O(lg^3 n) approximation for its profit?"

I tend to think that much of this criticism comes from taking the models and results of theoretical computer science too literally. The contribution that theory provides is usually not that of someone directly implementing a new algorithm. Usually it is the insights behind the algorithm (or the lower bound, or the characterization) that turn out to be useful. Such insights may well come from "an O(lg^3 n) approximation for profit" (which may be near-optimal on real distributions, or may suggest a good heuristic, or may combine well with other algorithms and optimization goals, all of these best dealt with practically rather than analyzed theoretically). On the other hand, even, say, a linear-time algorithm for the exact optimal profit may not be directly applicable, because it solves the wrong problem, because of additional new constraints, because the input is not available as assumed, or for a variety of other reasons.  Thus the "technology transfer" from theory to practice is not really composed of "algorithms" and "results" but rather of "insights", "techniques", and "ways of thinking".  Indeed, when I look at how theoreticians at Google contribute to the practice of the company, I can hardly think of a situation where the contribution was simply a great new algorithm that was then implemented as is. Usually, what a theoretical researcher contributes is a point of view that leads to a host of suggestions about algorithmic techniques, heuristics, metrics, notions, interfaces, trade-offs and more, all influenced by theoretical understanding, that together yield improved performance.

Thus the contribution of an "algorithm for X with performance Y in model Z" does not require that model Z be "realistic" or that performance Y be "good": the real contribution is that obtaining performance Y in model Z provides new insights, insights that can then be used in various possible realistic applications to achieve performance improvements. It seems that many in our field (including program committees) sometimes forget this and simply view improving Y as a goal in itself (this is related to the old TCS conceptual debate). A well-chosen model Z will indeed tend to have the property that improving Y in it correlates well with new insights, but the two are never quite equivalent.  A major difference between theoretical CS (as well as theoretical economics) and pure math lies exactly in the central place of the art of choosing the right model.  This is especially true for algorithmic game theory, which sits between theoretical CS and theoretical economics, two fields with different sensibilities about good models, but that requires another post.

While judging a single contribution for "insight" is certainly difficult and subjective, in the long run we can tell.  The real test of a community, in the long run, is whether the insights it has developed over a significant amount of time contribute to other fields. I would claim that theoretical computer science passes this test with flying colors. Looking a generation back, TCS has pretty completely reshaped computer science and optimization.  The more recent products of theoretical CS, like approximation algorithms, online algorithms, or streaming/sketching algorithms, have already had a major influence on how the fields of OS, DB, or AI treat their own problems, as well as on how the industry looks at many of its problems. I would even claim that the very young field of algorithmic game theory can already point to some of its insights affecting other fields, and certainly much real Internet activity (yes, yes, ad auctions and such). Of course it is way too early to judge the contribution of algorithmic game theory or other last-decade fruits of theory.

Read Full Post »
