In a recent Bloomberg View story, Duncan Watts discusses the revelation that struggling author Robert Galbraith is actually J. K. Rowling (of Harry Potter fame and fortune). The positive reviews of Galbraith’s book are widely interpreted as proof of Rowling’s writing talent, but Watts argues that the book’s lack of commercial success before the reveal suggests that the runaway success of the Harry Potter books was, to a large extent, a fluke. (Some of the comments claim that even before Rowling was exposed the book was more successful than Watts says.)
Watts uses one of his famous experiments, conducted with colleagues at Columbia, to support his thesis. They recruited almost 30,000 participants to download and rate songs. Some groups of participants were able to see how many times each song had been downloaded by other members of the group. This created a snowball effect that made a song’s success largely unpredictable: one song was rated first of 48 by one group and 40th by another. In Watts’s words:
… market success is driven less by intrinsic talent than by “cumulative advantage,” a rich-get-richer process in which early, possibly even random events are amplified by social feedback and produce large differences in future outcomes.
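The rich-get-richer dynamic Watts describes is easy to see in a toy simulation. The sketch below is not the actual Music Lab setup; the quality model, listener count, and weighting rule are all invented for illustration. Each simulated listener picks one of 48 songs with probability proportional to its (fixed) quality plus, when social influence is switched on, its current download count. Rerunning the same market with different random early adopters produces wildly different charts:

```python
import random

def simulate_market(num_songs=48, num_listeners=5000, social=True, seed=0):
    """Toy cumulative-advantage market: each listener downloads one song,
    chosen with probability proportional to quality (plus the song's
    current download count when social influence is on)."""
    rng = random.Random(seed)
    # Mild, fixed quality differences between songs (best is 2x the worst).
    quality = [1.0 + i / num_songs for i in range(num_songs)]
    downloads = [0] * num_songs
    for _ in range(num_listeners):
        weights = [q + (d if social else 0) for q, d in zip(quality, downloads)]
        song = rng.choices(range(num_songs), weights=weights)[0]
        downloads[song] += 1
    return downloads

# Two "worlds" with identical songs but different random early downloads:
world_a = simulate_market(seed=1)
world_b = simulate_market(seed=2)
rank_a = sorted(range(48), key=lambda s: -world_a[s])
rank_b = sorted(range(48), key=lambda s: -world_b[s])
print("top song in world A:", rank_a[0],
      "— its rank in world B:", rank_b.index(rank_a[0]) + 1)
```

With social feedback off, market shares track quality and barely vary between runs; with it on, early random downloads get amplified and the same song can top one world's chart while languishing in another's, which is exactly the unpredictability Watts reports.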
I’m wondering to what degree this statement is true with respect to the scientific success of papers. In math and “hardcore” theoretical computer science (e.g., complexity theory), many of the most celebrated papers are those that solve a long-standing open problem. But when it comes to papers that suggest new models and/or make a conceptual contribution, it is quite hard to predict which papers are destined to become hits (especially in economics), and in many cases chance plays a pivotal role. (Just to be clear, I am not talking about serendipity, but rather about cases where the same paper can have very different outcomes in different “worlds”.)
As a case in point, take the elegant 1989 paper “The computational difficulty of manipulating an election”, by three prominent operations researchers: Bartholdi (Georgia Tech), Tovey (Georgia Tech), and Trick (CMU). The paper makes the case that computational complexity can serve as a barrier against manipulation in elections. With the exception of a couple of related papers by the authors in the early 1990s, the paper was all but forgotten for more than a decade, until Tuomas Sandholm happened to mention a similar idea to Rakesh Vohra and Mark Satterthwaite (of Gibbard-Satterthwaite fame, albeit not fortune) over dinner and was referred to the 1989 paper. The AAAI’02 paper on the complexity of manipulation by Conitzer and Sandholm found the right audience and fueled a continued interest in this topic. Today Bartholdi, Tovey, and Trick are seen as the founders of the field of computational social choice, which has grown dramatically (complexity of manipulation is just one of 19 chapters in an upcoming book); and, despite many other important contributions, the 1989 paper is their most highly cited according to Google Scholar (well, right now Trick has one paper that is cited slightly more, but the small gap will probably close in the next few months).
Of course, a successful paper needs to be outstanding in some way; but many outstanding papers — even those that do get published — just don’t get lucky. So I’m afraid the conclusion is not very exciting: Scientific success is a mix of luck and talent, just like everything else in life. In particular, in a slightly different world computational social choice would not exist and elections would still be manipulable… oh wait.