
## Impartial division of scientific credit

This morning a CMU colleague posted on Google+ a gentlemanly apology about a magazine article that featured his work; in the interview he did not give sufficient credit to a student. I mentioned this to him in the elevator, and we chatted briefly about the difficulty of assigning credit in academia.

This encounter reminded me of Valencia, and I wasn’t thinking about paella. The AAMAS best paper and one of the EC best papers have six authors each, listed in alphabetical order (incidentally, the intersection of the two author lists is nonempty). Now, I am all for alphabetical author ordering, and use it myself whenever possible (in particular, I can’t complain about EC’12 papers with six authors in alphabetical order). But sometimes it is important to know how to assign credit, especially when dealing with award papers or other influential papers that can have an impact on their authors’ careers; doubly so when the influential paper has more than 400 authors (see the last four pages). Of course this issue is typically dealt with through recommendation letters, but it’s not a perfect solution and the information doesn’t always come through.

In some CS communities, such as systems and AI, authors are typically ordered by contribution. In other disciplines, such as medicine, papers often have fifteen authors or more, and the ninth author's contribution is typically on the level of catching a lab mouse that was trying to escape. Although ordering by contribution is a more informative way of assigning credit, I've heard many stories of huge fights breaking out over the ordering.

So is there a better way of assigning scientific credit? Yes! In theory… A beautiful JET 2008 paper by de Clippel, Moulin, and Tideman studies the allocation of a (homogeneous) divisible good. The setting fits the assignment of scientific credit perfectly, although as far as I can tell this potential application is not mentioned in the paper.

The main property one would ask for is impartiality: your share of the credit should only depend on your coauthors’ reports (I think this property is equivalent to strategyproofness when your utility is strictly increasing with your share of the credit).  Another basic property that de Clippel et al. ask for, which connects the reports with the credit division, is consensuality: if there is a division that agrees with all individual reports then it must be the outcome.

In their model, each player $i$ reports an evaluation $r^i_{jk}$ for every two players $j,k\in N\setminus \{i\}$, where $N$ is the set of players. A report $r^i_{jk}=x$ means that $i$ thinks $j$ deserves $x$ times the credit that $k$ gets. Perhaps a more intuitive way of thinking about the model is that each player reports normalized values that specify how the other players’ share of the credit should be divided among them; the ratios can be obtained by dividing pairs of such reported values.
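
The normalized-values view can be made concrete with a small Python sketch (the function name and the 60/40 example split are mine, not from the paper):

```python
def ratios_from_values(values):
    """Derive player i's pairwise ratio reports from normalized values.

    `values` maps each player j != i to v_j, player i's view of how the
    credit excluding i should be divided among the others (sums to 1).
    Returns r_{jk} = v_j / v_k for every ordered pair j != k, i.e. how
    many times j's credit exceeds k's according to player i.
    """
    return {(j, k): values[j] / values[k]
            for j in values for k in values if j != k}

# Player 1 thinks players 2 and 3 should split the remaining credit 60/40:
r1 = ratios_from_values({2: 0.6, 3: 0.4})
# r1[(2, 3)] == 1.5 and r1[(3, 2)] == 1/1.5
```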

The case of three players turns out to be rather straightforward. The unique mechanism that is impartial and consensual assigns to player $i$ the share $1/(1+r^j_{ki}+r^k_{ji})$, where $j$ and $k$ are the two other players. The bad news is that this mechanism allocates the entire credit if and only if the reports are consensual (which happens if and only if $r^1_{23}r^2_{31}r^3_{12}=1$). In other words, since this is the unique such mechanism, for three players there is no impartial and consensual mechanism that always allocates the entire credit. (Of course I am omitting some details; you can find them in the paper.)
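
To make the three-player mechanism concrete, here is a minimal Python sketch (the report encoding and the hypothetical 50/30/20 split are my own, not from the paper):

```python
def three_player_shares(r):
    """The unique impartial and consensual mechanism for three players.

    r[(i, j, k)] encodes player i's report that j deserves r[(i, j, k)]
    times the credit that k gets, for distinct i, j, k in {1, 2, 3}.
    Player i's share uses only the other two players' reports, which is
    exactly what impartiality demands.
    """
    shares = {}
    for i in (1, 2, 3):
        j, k = [p for p in (1, 2, 3) if p != i]
        # share of i = 1 / (1 + r^j_{ki} + r^k_{ji})
        shares[i] = 1.0 / (1.0 + r[(j, k, i)] + r[(k, j, i)])
    return shares

# Consensual reports derived from a hypothetical "true" split 50/30/20:
s = {1: 0.5, 2: 0.3, 3: 0.2}
reports = {(i, j, k): s[j] / s[k]
           for i in (1, 2, 3) for j in (1, 2, 3) for k in (1, 2, 3)
           if len({i, j, k}) == 3}
shares = three_player_shares(reports)  # recovers 0.5, 0.3, 0.2; sums to 1

# Perturb one report so that r^1_{23} * r^2_{31} * r^3_{12} != 1:
bad = dict(reports)
bad[(1, 2, 3)] *= 2
bad_shares = three_player_shares(bad)  # the shares no longer sum to 1
```

Note that inflating player 1's report about players 2 and 3 changes player 3's share but, by impartiality, leaves player 1's own share untouched.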

Fortunately, it can be argued that we don't really need credit division unless there are many authors, and for the case of four or more players de Clippel et al. give a family of (rather complicated) mechanisms that output an exact allocation, and satisfy impartiality and consensuality, as well as two other desiderata: anonymity (in the sense that the mechanism is indifferent to the identity of the players) and continuity.

I toyed for a while with the idea of demonstrating how serious I am by choosing a paper where my share of the credit is small and convincing my coauthors to report evaluations. Ultimately I realized that, on top of embarrassing my coauthors, I would have to figure out the gory details of the 4+ player mechanisms. For now it is amusingly self-referential to imagine de Clippel et al. dividing the credit for their paper using the methods therein; with only three authors, I sure hope their evaluations would be consensual!

### 6 Responses

1. Even if the authors agreed that Author A had 73% of the credit, Author B 18%, etc., I suspect that for many purposes (e.g. tenure) people would round heavily toward the extremes.

This approach also is not terribly informative when the question becomes who bears responsibility in the case of specific flaws in the paper: error, scientific fraud, etc. An approach I’ve found intriguing is by analogy with the credits in a movie. Authors are listed by area of responsibility: Martinez ran experiments A and B, Jones proved the theorems and ran experiment C, Zhang and Jones wrote the paper, Hoffman schmoozed with DARPA and got the funding, etc.

2. Yeah, explicitly listing each other's contributions seems to make sense and looks much more realistic than having each author anonymously evaluate every other author.

3. Interesting post, Ariel! For what it’s worth, another interesting take on the issue of allocating scientific credit is Jon Kleinberg and Sigal Oren’s paper “Mechanisms for (Mis)Allocating Scientific Credit” (https://sites.google.com/site/sigal3/) which looks at a model in which scientists are strategically choosing which projects to work on, and society must decide in advance how to allocate credit for discoveries so that, in equilibrium, people’s effort will be distributed in the socially-optimal way. The notion of authors self-reporting the extent of their co-authors’ contributions doesn’t enter into their model, though. Instead, unequal allocation of credit among co-authors emerges as a symmetry-breaking mechanism to eliminate undesirable equilibria.

4. In some areas (within finance and economics, for example) some professors pay their students to prove their conjectures, and the students only get acknowledged.

I guess it is like asking an RA to do some work on data…

5. Interestingly, there is a strategic analysis of the problem (http://www.jstor.org/stable/10.1086/250082) that shows how alphabetical order is an equilibrium while ordering by “contribution” is not.