An NSF-targeted workshop on Research Issues at the Interface of Computer Science and Economics has just taken place at Cornell, attended by many central people in the area (but not by me), including several bloggers who reported on it (Lance, Muthu a b c, and Rakesh). This is another in a sequence of workshops where economists and computer scientists meet to exchange ideas, results, and points of view.
A fruitful issue at this interface is taking computational complexity into account in economic and game-theoretic settings. This has received much attention (e.g., in Rakesh’s post), and while it certainly is a key issue, I disagree that it is the core issue of the interface. I believe that even more interesting is the combination and interplay of the two communities’ different research attitudes, a combination that is needed for studying the future economic issues brought on by technological advances.
I believe that much of the difference in attitudes comes from economists’ emphasis on “what is” vs. CS’s emphasis on “what can be”. Even theoretical economists try to talk about the real world: their examples and motivations are real, existing industries and situations, and their philosophical point of view is mostly scientific, aiming to understand the world. Computer scientists prepare for the future, knowing that most of what exists today will soon become obsolete. Our motivations and examples often assume a few more years of the steady march of Moore’s law. These attitudes are the outcome of the different circumstances of the two disciplines: economic work that did not correspond with reality was eventually ignored, while CS work that was not forward-looking became obsolete. This distinction is obviously somewhat simplistic and exaggerated, but it does seem like the general spirit of things. Indeed, the sub-area of economics that was most natural for computer scientists to “blend with” was Mechanism Design, which, unusually for economics, has an engineering spirit of “what can be”. I feel that many of the technical and cultural differences between the communities have their roots in this difference.
Economists and game theorists take their models much more seriously and literally than computer scientists do. While an economist’s model should correspond to some real phenomenon, a CS model should correspond to a host of unknown future situations. A CS reader is more willing to take the leap of imagination between the model as stated and various unknown future scenarios. In compensation, the CS reader expects a take-away that transcends the model, often found in the form of mathematical depth. An economics reader is much more skeptical of a model and demands specific justification; on the other hand, mathematical depth is not seen as a feature but rather as a nuisance to be buried in the appendix.
The Bayesian point of view of economists and the worst-case point of view of computer scientists have, I think, a similar root. In a situation that already exists, there is obviously some empirical distribution, and it is hard to argue against taking it into account. In a situation that does not yet exist, there is no empirical distribution, so in order to prepare for it we need either to theoretically guess the future distribution or to prepare for the worst case. CS has learned that the first option does not work well.
There are other high-level differences I can think of, but let me give two specific ones. How does one interpret a 4/3-approximation result in a network setting? While an economist will certainly consider a loss of 33% to be a negative result (as it would be in almost any existing scenario), a computer scientist will view it as giving up a few months of technological advance — a negligible cost compared to the other difficulties facing the adoption of anything new.
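For concreteness, here is the back-of-the-envelope arithmetic behind that CS reading, as a sketch; the 18-month doubling period attributed to Moore’s law is an assumed round figure, and the equivalence itself is only heuristic.

```python
import math

# A 4/3-approximation yields a solution whose cost is at most 4/3 times
# optimal, i.e. roughly a 33% loss relative to the optimum.
ratio = 4 / 3
loss = ratio - 1  # ~= 0.333

# Heuristic CS reading: if capacity doubles every ~18 months (assumed
# Moore's-law rate), losing a constant factor of 4/3 is like giving up
# 18 * log2(4/3) months of technological progress.
doubling_period_months = 18
months_of_progress = doubling_period_months * math.log2(ratio)

print(f"loss vs. optimum: {loss:.0%}")                               # 33%
print(f"equivalent technological delay: {months_of_progress:.1f} months")  # ~7.5
```

Under these assumptions the 33% loss corresponds to roughly seven and a half months of progress, which is the “few months of technological advance” in the paragraph above.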
In Mechanism Design, computer scientists focus on the simplest model (dominant-strategy implementation, independent private values, risk neutrality) and spend most of their efforts exploring the limits of the model, inventing new mechanisms or proving impossibility for various scenarios that are obviously unrealistic today. Economic theory, on the other hand, has moved on to more sophisticated models (risk aversion, interdependent values, etc.) and is not as concerned with the limits of the model — until a real application emerges, like spectrum auctions.
I think that the future exploration of the interface of economics and CS should not only combine notions and results from both fields, but also combine attitudes and points of view. The latter is even more difficult to do well.
Thanks for the interesting post. It would be great if there were greater bandwidth between game theorists in computer science and in economics. I’m of course a big fan of the CS worldview. My impression from talking to economists, however, is that the CS community has done a poor job of convincing game theorists in economics departments that questions of computational complexity are interesting, and that this is a big obstacle to getting them to read the CS literature at all.
When talking to people who do game theory in both the economics and AI communities, I’ve encountered a great deal of resistance to the worst-case point of view. You make a good argument for worst-case bounds in the context of prior-free mechanism design. What I’ve found more problematic, though, is the objection to worst-case hardness results. I’ve been told that the experience of many people is that Nash equilibria are easy to compute (fictitious play often converges), and NP-hard allocation problems are easy to solve (CPLEX returns solutions in seconds). They concede that there exist instances of these problems that they won’t be able to solve, but claim that worst-case results like these don’t say anything about the “real world” difficulty of these problems.
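As a minimal illustration of the “fictitious play often converges” phenomenon the comment mentions: in the 2x2 matching-pennies game (my choice of example, as are the tie-breaking rule and iteration count), each player repeatedly best-responds to the opponent’s empirical mixture, and by Robinson’s theorem for zero-sum games the empirical frequencies converge to the mixed equilibrium (1/2, 1/2).

```python
# Matching pennies payoffs for the row player (a zero-sum game):
# row earns +1 when the two choices match and -1 when they differ.
A = [[1, -1], [-1, 1]]

counts_row = [0, 0]  # empirical counts of row's past actions
counts_col = [0, 0]  # empirical counts of column's past actions

def best_response_row(col_counts):
    # Best-respond to the opponent's empirical mixture; ties go to action 0.
    payoffs = [sum(A[i][j] * col_counts[j] for j in range(2)) for i in range(2)]
    return max(range(2), key=lambda i: payoffs[i])

def best_response_col(row_counts):
    # The column player's payoffs are the negation of the row player's.
    payoffs = [sum(-A[i][j] * row_counts[i] for i in range(2)) for j in range(2)]
    return max(range(2), key=lambda j: payoffs[j])

# Seed with arbitrary first moves, then iterate fictitious play.
counts_row[0] += 1
counts_col[0] += 1
for _ in range(10000):
    r = best_response_row(counts_col)
    c = best_response_col(counts_row)
    counts_row[r] += 1
    counts_col[c] += 1

freq = counts_row[0] / sum(counts_row)
print(f"row player's empirical frequency of 'heads': {freq:.3f}")  # near 0.5
```

The actual play cycles rather than settling down; it is only the long-run frequencies that converge, which is exactly the gap between practical experience and worst-case analysis that the comment is pointing at.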
Of course, towards the goal of an elegant and internally consistent theory, it’s great to have tractable solution concepts. But I think many economists still view themselves as social scientists first. I’d love to see someone write down an argument, aimed at economists, for why (worst-case) computational tractability should be a central feature of a mechanism or solution concept.
I think that collaboration between economics and CS is already taking place, though in specific areas, which seems natural to me. There is too much deliberate background effort to make the fields collaborate, when collaboration is happening anyway.
Some issues, such as worst-case analysis, are tough to accept even for some CS researchers (for example, those who are trying to solve hard problems quickly).
I’ll give two examples of areas where I think collaboration will grow in the near future.
Game theorists are probably one bridge between economic theorists and CS theorists. Some of them in fact study “future” questions, while others put the emphasis on realistic modeling (perhaps it also depends on the department they work in).
Bounded rationality is an important topic that I see connecting the two disciplines. Computation and worst-case scenarios can be captured well through this notion. Some progress has been made, but I think there is still a lot to understand.
But collaboration can take place not only through game theorists. A relatively new “type” of economist is the market designer. These economists are happy if the average case can be solved efficiently, and CS researchers can contribute to such research. One example of such a contribution is the kidney-exchange problem, where algorithms that solve this hard problem on large instances have been developed by CS researchers.
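As a toy sketch of the flavor of the kidney-exchange clearing problem (the compatibility graph below is made up, and I restrict attention to pairwise swaps for brevity): each incompatible patient-donor pair is a node, a directed edge (i, j) means pair i’s donor can give to pair j’s patient, and a pairwise swap requires edges in both directions.

```python
from itertools import combinations

# Toy instance: four patient-donor pairs; (i, j) means pair i's donor
# is compatible with pair j's patient (made-up data).
compatible = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (0, 3), (3, 0)}
pairs = range(4)

# A pairwise swap between i and j is feasible iff both directions work.
swaps = [(i, j) for i, j in combinations(pairs, 2)
         if (i, j) in compatible and (j, i) in compatible]

def best_swap_set(chosen, remaining):
    """Brute-force the largest set of disjoint swaps (fine at toy sizes)."""
    best = chosen
    for k, (i, j) in enumerate(remaining):
        if all(i not in s and j not in s for s in chosen):
            cand = best_swap_set(chosen + [(i, j)], remaining[k + 1:])
            if len(cand) > len(best):
                best = cand
    return best

matching = best_swap_set([], swaps)
print("feasible swaps:", swaps)          # [(0, 1), (0, 3), (1, 2)]
print("max disjoint swaps:", matching)   # [(0, 3), (1, 2)]
```

With pairwise swaps only, this is just maximum matching and hence polynomial-time solvable; it is allowing longer cycles (length three and up) that makes the clearing problem NP-hard, which is where the CS algorithmic work the comment refers to comes in.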
My point is that specific researchers from both fields have a lot to work on together. This has been happening in recent years and will continue.
PS. Panos Ipeirotis, “A computer scientist in business school” (with the url http://behind-the-enemy-lines.blogspot.com ), has a different take on these differences of attitude.