This morning a CMU colleague posted on Google+ a gentlemanly apology about a magazine article that featured his work; in the interview he did not give sufficient credit to a student. I mentioned this to him in the elevator, and we chatted briefly about the difficulty of assigning credit in academia.
This encounter reminded me of Valencia, and I wasn’t thinking about paella. The AAMAS best paper and one of the EC best papers have six authors each, listed in alphabetical order (incidentally, the intersection of the two author lists is nonempty). Now, I am all for alphabetical author ordering, and use it myself whenever possible (in particular, I can’t complain about EC’12 papers with six authors in alphabetical order). But sometimes it is important to know how to assign credit, especially when dealing with award papers or other influential papers that can have an impact on their authors’ careers; doubly so when the influential paper has more than 400 authors (see the last four pages). Of course this issue is typically dealt with through recommendation letters, but it’s not a perfect solution and the information doesn’t always come through.
In some CS communities, such as systems and AI, authors are typically ordered by contribution. In other disciplines, such as medicine, papers often have fifteen authors or more, but the ninth author’s contribution is typically on the level of catching a lab mouse that was trying to escape. Although this is a more informative way of assigning credit, I’ve heard many stories of huge fights breaking out over the ordering.
So is there a better way of assigning scientific credit? Yes! In theory… A beautiful JET 2008 paper by de Clippel, Moulin, and Tideman studies the allocation of a (homogeneous) divisible good. The setting fits the assignment of scientific credit perfectly, although as far as I can tell this potential application is not mentioned in the paper.
The main property one would ask for is impartiality: your share of the credit should only depend on your coauthors’ reports (I think this property is equivalent to strategyproofness when your utility is strictly increasing with your share of the credit). Another basic property that de Clippel et al. ask for, which connects the reports with the credit division, is consensuality: if there is a division that agrees with all individual reports then it must be the outcome.
In their model, each player $i$ reports an evaluation $r^i_{jk}$ for every two players $j, k \in N \setminus \{i\}$, where $N$ is the set of players. A report $r^i_{jk} = \lambda$ means that $i$ thinks $j$ deserves $\lambda$ times the credit that $k$ gets. Perhaps a more intuitive way of thinking about the model is that each player reports normalized values that specify how the other players’ share of the credit should be divided among them; the ratios $r^i_{jk}$ can be obtained by dividing pairs of such reported values.
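To make the reporting model concrete, here is a minimal Python sketch; the function name and the dictionary representation are my own, not from the paper. It derives a player’s pairwise ratio reports from that player’s normalized values over the coauthors:

```python
from itertools import permutations

def ratios_from_values(values):
    """Turn one player's normalized values over their coauthors into
    pairwise ratio reports.

    `values` maps each *other* player to a nonnegative weight (the
    reporter's view of how the remaining credit should be split among
    the coauthors).  The ratio report for (j, k) is values[j] / values[k],
    read as: j deserves that many times the credit that k gets.
    """
    return {(j, k): values[j] / values[k]
            for j, k in permutations(values, 2)}

# Player 1's view of players 2 and 3:
reports = ratios_from_values({2: 0.75, 3: 0.25})
print(reports[(2, 3)])  # 3.0: player 2 deserves 3x the credit of player 3
```

Dividing pairs of normalized values like this automatically produces internally consistent reports; a player reporting raw ratios directly could of course submit inconsistent ones.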
The case of three players turns out to be rather straightforward. With three players, player $i$’s report boils down to a single ratio $r_i$ comparing the other two; taking the players in cyclic order, $r_i$ is the credit the player after $i$ deserves divided by the credit of the player before $i$. The unique mechanism that is impartial and consensual assigns to player $i$ the share $1/(1 + r_j + 1/r_k)$, where $j$ and $k$ are the two other players ($j$ following and $k$ preceding $i$ in the cyclic order). The bad news is that this mechanism allocates the entire credit if and only if the reports are consensual (which happens if and only if $r_1 r_2 r_3 = 1$). In other words, for three players there is no impartial and consensual mechanism that always allocates the entire credit. (Of course I am omitting some details; you can find them in the paper.)
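For the curious, here is how the three-player arithmetic plays out in a short Python sketch. The cyclic-ratio convention and the share formula below are my reconstruction of the mechanism’s form, not code from the paper, so treat it as an illustration rather than a faithful implementation:

```python
def three_player_shares(r1, r2, r3):
    """Each player's share under the (reconstructed) impartial mechanism.

    r_i is player i's reported ratio between the other two players,
    taken in cyclic order 1 -> 2 -> 3: r1 = x2/x3, r2 = x3/x1, r3 = x1/x2.
    Impartiality: player i's share ignores i's own report r_i.
    """
    s1 = 1 / (1 + r2 + 1 / r3)
    s2 = 1 / (1 + r3 + 1 / r1)
    s3 = 1 / (1 + r1 + 1 / r2)
    return s1, s2, s3

# Consensual reports derived from the division (0.5, 0.3, 0.2);
# note r1 * r2 * r3 == 1 here.
print(three_player_shares(0.3 / 0.2, 0.2 / 0.5, 0.5 / 0.3))
# approximately (0.5, 0.3, 0.2): the consensual division is recovered.

# Inconsistent reports (r1 * r2 * r3 != 1): the shares fail to sum to 1.
print(sum(three_player_shares(1.0, 1.0, 2.0)))  # strictly less than 1
```

Running both calls shows the dichotomy in the text: consensual reports are reproduced exactly, while inconsistent reports leave some credit unallocated.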
Fortunately, it can be argued that we don’t really need credit division unless there are many authors, and for the case of four or more players de Clippel et al. give a family of (rather complicated) mechanisms that output an exact allocation and satisfy impartiality and consensuality, as well as two other desiderata: anonymity (in the sense that the mechanism is indifferent to the identity of the players) and continuity.
I toyed for a while with the idea of demonstrating how serious I am by choosing a paper where my share of the credit is small and convincing my coauthors to report evaluations. Ultimately I decided against it: on top of embarrassing my coauthors, I would have to figure out the gory details of the 4+ player mechanisms. For now, it is amusingly self-referential to imagine de Clippel et al. dividing the credit for their paper using the methods therein; with only three authors, I sure hope their evaluations would be consensual!