- The 5th International Workshop on Computational Social Choice (COMSOC’14), which I am co-chairing with Toby Walsh. Submission deadline: March 15, 2014.
- The AI & Security special session (track) at the 27th IEEE Computer Security Foundations Symposium (CSF’14), which I am co-chairing with Benjamin Rubinstein. Submission deadline: February 10, 2014. Blurb:
In recent years, a number of communities overlapping with AI—notably algorithmic economics and machine learning—have made significant forays into security & privacy. This session aims to collect theoretical viewpoints on security & privacy, particularly from researchers across diverse communities such as those identifying with AAAI/IJCAI, AAMAS, EC, WEIS, ICML, NIPS, COLT, STOC/FOCS, S&P, and CCS (including the AISEC workshop). Authors of papers in the following areas intersecting with information security are highly encouraged to submit to this special session: Economics: Game theory, mechanism design, market design, social choice; Learning: Online learning, robust statistics, adversarial machine learning, privacy-preserving technologies such as differential privacy.
Today some 46 million turkeys will be devoured in the US. Any turkey is fair game, except for Popcorn (the duly elected National Thanksgiving Turkey) and Caramel, which were solemnly given a “full reprieve from cranberry sauce” by President Obama. Tomorrow, when they’re done with the turkeys, Americans (who are typically extremely nice on every other day of the year) will turn on each other, trampling anyone who stands in their way to get the best Black Friday deals.
For me, Thanksgiving is a yearly reminder of how peculiar the American way can seem, even after 4+ years in the US. Therefore, as an alternative Thanksgiving celebration, I am happy to present the top five things that never cease to amaze me about the US (in no particular order):
- Credit history: Any long-term contract (renting an apartment, electricity or Internet connection, cellphone, etc.) requires a credit check, which determines whether you qualify as a good person (good credit history), a bad person (bad history), or a turkey (no history). The underlying logic is that you are a reliable person if you get into debt and then repay it. In contrast, if you never got into debt in your life and have lots of money in your savings account then you’re obviously a drug dealer. But here’s the Catch-22-esque irony: To build your credit history you need a credit card, but to get a (usable) credit card you need credit history.
- Taxes: A crazy tax system is one where nobody has any idea what the rules are. A crazier tax system is one where the rules are nondeterministic. One of my favorite examples is this line from US tax form W-4: “Enter “1” for your spouse. But, you may choose to enter “-0-” if you are married and have either a working spouse or more than one job.”
- American Football: An average professional football game lasts 3 hours and 12 minutes. Average minutes of play: 11. Average time spent on replays: 17 minutes. Average number of commercials: more than 100. I’m not making this stuff up.
- US Congress: Enough said.
- Healthcare: The American healthcare system has turned inefficiency into an art form. Here’s what a typical visit to the hospital looks like. Upon arrival, you are quickly ushered into a private room. Now, let there be n nurses denoted 1,…,n such that for all i, nurse i+1 is more senior than nurse i (they have different titles like unlicensed assistive personnel, licensed vocational nurse, registered nurse, physician assistant, etc.); usually n is 4 or 5, but I’ve also witnessed n=6 at Children’s Hospital Boston. For i=1,…,n, nurse i enters the room and asks you the same questions asked by nurses 1,…,i-1 (nurse 1 usually also measures blood pressure or something). Nurse n is a nurse practitioner, and is essentially equivalent to a doctor; after asking the same questions asked by nurses 1,…,n-1, she writes a prescription, say for 10 pills. Incidentally, the pharmaceutical company distributes the pills in packs of 10. A naive person would think he can go to the pharmacy and procure a pack of 10 pills. But that would be too efficient; instead, a pharmacist must take the pills out of the box and carefully place 10 pills into a bag. It is widely understood that this medically indispensable process takes around an hour.
To be eligible (for the test of time award), a paper or series of papers must be on a topic related to the fields of electronic commerce or economics and computation and must have been published, in preliminary or final form, in a journal or conference proceedings at least ten years before the year of the award presentation. The inaugural SIGecom Test of Time Award will be given in 2014.
To be eligible (for the doctoral dissertation award), a dissertation must be on a topic related to the fields of electronic commerce or economics and computation and must have been defended successfully during the calendar year preceding the year of the award presentation. The inaugural SIGecom Doctoral Dissertation Award will be given for dissertations defended in 2013.
The deadline for nominations for both awards is the end of February 2014.
Now, some researchers dislike the idea of awards in science. I have to admit that I like them. On my cynical days, I view them as an extremely cost-effective way of motivating us. On my less cynical days, I adopt Nati’s view of awards as “a pat on the back” that says “well done, keep up the good work.” One should give (deserved) pats on the back, and SIGecom is doing so now.
Lecture videos from weeks 5-7 of my AGT course have been posted. Week 5 covers mechanism design with budgets and without money, with a case study on kidney exchange. Week 6 is all about selfish routing and the price of anarchy, with a case study on network overprovisioning. Week 7 goes beyond Nash equilibria and covers some aspects of the theory of smooth games. I fell behind on lecture notes, which are currently available through week 5, but am now catching up.
Looking ahead, in weeks 8-10 we’ll conclude this quarter’s course by talking about learning dynamics (best-response and no-regret) and the complexity of equilibria.
Guest post by Felix Brandt
We recently had an internal discussion about author ordering within our group and agreed to retain alphabetical ordering (sort of renewing a decision we made some years earlier when the group was composed differently). Author ordering is a surprisingly tricky issue and I think it’s particularly difficult in algorithmic economics where the cultures of many different disciplines clash.
The problem usually starts with the fact that nobody makes a conscious decision at the beginning of one’s career about which author ordering policy to use. In most cases, PhD students just publish their first papers using whatever convention is used in their group. Once they have published papers, they are reluctant to change the convention for the sake of consistency and, perhaps more importantly, because changing the convention can have negative effects on previous coauthors (by making their contributions appear smaller). Whenever authors with different authorship conventions write a joint paper, they need to decide which convention to use (and if it’s not the alphabetical convention, they also need to agree on a particular ordering for the paper).
The two predominant conventions are alphabetical ordering and ordering by contribution. (Another interesting convention I heard about, but that I have rarely seen, is to list authors by age from youngest to oldest.) Alphabetical ordering is the standard in mathematics, economics, and theoretical computer science. In most other disciplines, including AI, it is not. Sometimes, the head of a group is listed as an author even though he did not contribute anything and, in some areas, there is a special meaning to being the first, the last, or even the second-to-last author (like here). Let’s assume for simplicity that in theoretical areas such as algorithmic economics, only people who significantly contributed to a paper are actually listed as authors (even though that might not always be the case).
At first glance, ordering by contribution seems to be the fairest solution. Measuring relative contribution, however, is a source of great dispute that can hamper productivity. Studies have shown that authors almost always disagree with respect to their relative contributions. Another problem arises when using a default ordering in case the contribution of all or some authors is roughly equivalent (which is mostly the case in our group). Even if one uses non-alphabetical ordering only if the contribution of one author was significantly larger than those of the others, the asymmetry remains: If the authors are listed alphabetically, the first author may have contributed more. If the authors are listed non-alphabetically, the first must have contributed more.
About two years ago, I had the privilege to write a joint paper with Paul Seymour and asked him to move my name further back because I had contributed less. His reply was that “it’s quite standard to be alphabetical and no-one reads anything else into it (and that is a treasure worth preserving)”. I think that’s a very good point because once non-alphabetical ordering is used, it indicates that some information can be deduced from the author orderings in general. The fact that authors have to be ordered in some way can be seen as an unwanted by-product of the sequential nature of language. Whenever I read papers myself, I never put any meaning into the ordering of authors. This is, however, not the case in general and sometimes researchers are denied awards, fellowships, or tenure because the committee in charge expects them to be “first authors”. In order to avoid this problem, some authors put a disclaimer on their webpage or on their papers, explaining which ordering convention is used and sometimes even listing the contribution of individual authors. An alternative, perhaps more elegant, solution to neutralize the author ordering in CVs or on personal homepages (places where it may actually matter) is to list publications as “<paper title> (with coauthor x and coauthor y)”. Despite all these efforts, author ordering remains a controversial issue simply because the cultures of fields, the principles of authors, and the expectations of readers differ so much.
Other articles on author ordering:
- A statement by the AMS about author ordering. In mathematics, alphabetical author ordering is also known as the Hardy-Littlewood rule.
- A paper in the Journal of Political Economy by Engers, Gans, Grant, and King (First-Author Conditions) finds that “it is an equilibrium for papers to use alphabetical ordering whereas it is never an equilibrium for authors always to be listed in order of relative contribution”.
- A recent arXiv paper by Ackerman and Brânzei (Research Quality, Fairness, and Authorship Order) analyzes the “phenomenon that alphabetical ordering is correlated with higher research quality”. They cite several studies providing empirical evidence for this claim.
- Blog posts by Michael Trick and Michael Mitzenmacher.
Lecture videos and notes from the first four weeks of my AGT course at Stanford have been posted. Week 2 characterizes dominant-strategy mechanisms in single-parameter environments (Myerson’s Lemma, the Revelation Principle) and gives an introduction to algorithmic mechanism design (e.g., Knapsack auctions). Week 3 covers the basic theory of Bayesian optimal mechanisms, and more recent results about simple near-optimal and prior-independent auctions. Week 4 is about multi-parameter mechanism design, including the VCG mechanism and combinatorial auctions, and also includes case studies on reserve prices in Yahoo! keyword auctions and on the past, present, and future of wireless spectrum auctions.
Looking ahead, Week 5 is about mechanism design with payment constraints (budgets or no money at all), with a case study on kidney exchange, and Week 6 covers the price of anarchy of selfish routing.
More than a year ago a badminton scandal during the London 2012 Olympics rocked the really-care-about-mechanism-design corner of the blogosphere. Bobby Kleinberg reported:
The competition is structured in two stages: a round-robin stage in which the teams compete to earn placement in a second-stage single-elimination tournament. The second-seeded team in the competition, Tian Qing and Zhao Yunlei of China, suffered an upset loss to a Danish team in the first stage. As a result, they were placed as one of the lowest-seeded teams in the second-stage tournament. At that point it became advantageous for the remaining teams that had already secured a second-stage berth to lose their final first-stage match in order to avoid being paired against Qing and Yunlei in the first round of the second stage. Spectators watched in frustration and disbelief as the athletes deliberately served into the net and hit shots out of bounds in an effort to lose the match. Olympic officials afterward decided to disqualify the offending teams from the competition.
Bobby asked whether it’s time for the International Olympic Committee to start reading research papers on mechanism design. A year ago the answer was “probably not”, but a recent paper may soon have members of the IOC scrambling to get their math degrees. There are two interesting things about this paper, written by Marc Pauly and published in Social Choice and Welfare.
The first interesting thing is, well, the results. Pauly considers competition formats in which the competing teams (or players) are partitioned into two subsets. In the first stage, the teams in each subset compete in a round-robin tournament, with the top teams (in terms of the number of wins) from each subset proceeding to the next stage. In the second stage, the winner is chosen by a function that aggregates the results of competitions between the second-stage teams. His main result is that, once a few other basic axioms are imposed and there are at least six teams, no second-stage function can make this process monotonic, where monotonicity can be thought of as strategyproofness: a team should never be able to become a winner by throwing a match. My conclusion from this result is that the two-stage, round-robin-first format used in badminton is inherently flawed, though as Pauly notes in his conclusions, there are many other competition formats one can consider.
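To get a feel for the kind of monotonicity failure at issue, here is a toy calculation (my own illustration, not from Pauly’s paper): four teams reach a knockout bracket that pairs each group winner with the other group’s runner-up, match outcomes follow a hypothetical Bradley–Terry model, and a mis-seeded favorite sits as a runner-up. A middling team’s chance of winning the whole event then turns out to be higher if it throws its last group match.

```python
def p_beat(a, b):
    # Bradley-Terry win probability: a team wins in proportion to its strength
    return a / (a + b)

def champ_prob(t, bracket, s):
    """Probability that team t wins a 4-team knockout.
    bracket = ((x, y), (u, v)): two semifinals; the winners meet in the final."""
    (x, y), (u, v) = bracket
    if t in (u, v):                       # put t's semifinal first
        (x, y), (u, v) = (u, v), (x, y)
    opp = y if t == x else x
    p_semi = p_beat(s[t], s[opp])
    # Final: condition on which team emerges from the other semifinal
    p_final = (p_beat(s[u], s[v]) * p_beat(s[t], s[u]) +
               p_beat(s[v], s[u]) * p_beat(s[t], s[v]))
    return p_semi * p_final

# Hypothetical strengths: 'strong' is the mis-seeded favorite stuck as a runner-up.
s = {'us': 2, 'strong': 5, 'o1': 1, 'o2': 1}

# Win the group: meet the favorite in the semifinal.
p_win_group = champ_prob('us', (('us', 'strong'), ('o1', 'o2')), s)
# Throw the last group match: the favorite lands in the other semifinal.
p_lose_group = champ_prob('us', (('o1', 'strong'), ('us', 'o2')), s)

print(p_win_group, p_lose_group)  # losing the group match is strictly better
```

With these (made-up) strengths, winning the group yields a title probability of 4/21 ≈ 0.19, while losing it yields 44/189 ≈ 0.23, which is exactly the incentive the badminton teams faced.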
Back in 2012, people floated different ideas for other competition formats, especially in response to a Cheap Talk blog post on the same topic. But I kinda like an idea suggested in a comment on Bobby’s post by… me (of course I’m being completely impartial here). Following up on a comment by Suresh Venkatasubramanian, I suggested determining the bracket in the second round uniformly at random, and incentivizing teams to do as well as they can in the first round by letting their probability of advancing to the second round increase monotonically with their number of wins in the first round. I thought this was nice because it reminded me of something Vince Conitzer once told me: admittance into medical school in the Netherlands depends on your school grades and a lottery — the better your grades, the more likely you are to get in. As an aside, when I checked just now that I wasn’t making this up, I found an article claiming that the chance is only 70% even for applicants with the highest grades. More interestingly, some universities that at some point started selecting half of their admitted students “deterministically” found that there is no difference in performance between this half and the students in the other half who just got lucky.
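A minimal sketch of that advancement lottery (the function names and the weight rule wins + 1 are my own choices; any strictly increasing weight function preserves the incentive to win):

```python
import random

def advance_lottery(wins, k, rng=random):
    """Randomly pick k teams to advance to the second stage; a team's
    chance grows with its first-round win count. Weight w_i = wins[i] + 1
    is one hypothetical choice of a strictly increasing weight function."""
    pool = {i: wins[i] + 1 for i in range(len(wins))}
    chosen = []
    for _ in range(k):                    # sample without replacement
        total = sum(pool.values())
        r = rng.random() * total
        acc = 0.0
        for team, w in pool.items():
            acc += w
            if r < acc:
                chosen.append(team)
                del pool[team]
                break
    return chosen
```

The second-stage bracket would then be drawn uniformly at random among the chosen teams, so no team gains anything by steering toward a particular opponent.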
Going back to Pauly’s paper, another interesting thing about it is that it cites Bobby’s post and the Cheap Talk blog post. I haven’t seen many scientific papers citing blog posts, and it may well be the first published paper citing our blog. Ironically, the Cheap Talk post is called “Let’s write this paper”; apparently Pauly took the title literally!