Berthold was a phenomenal problem-solver, and he made numerous
contributions across many subfields of algorithmic game theory. To
name just a few, his celebrated paper with Czumaj resolved the price
of anarchy in the Koutsoupias-Papadimitriou scheduling model, the
model in which the POA was originally defined. His early work in
algorithmic mechanism design (e.g., with Briest and Krysta), which I
regularly teach in my classes, demonstrated the richness of the design
space for single-parameter problems. His work with Skopalik and
others characterized the computational complexity of computing
equilibria in congestion games. He was an exceptionally strong
scientist.
Recent progress in multi-dimensional mechanism design
Organizers: Yang Cai (UC Berkeley), Costis Daskalakis (MIT), and Matt Weinberg (MIT)
Abstract: Mechanism design in the presence of Bayesian priors has received much attention in the Economics literature, focusing among other problems on generalizing Myerson’s celebrated auction to multi-item settings. Nevertheless, only special cases have been solved, with a general solution remaining elusive. More recently, there has been an explosion of algorithmic work on the problem, focusing on computation of optimal auctions, and understanding their structure. The goal of this tutorial is to overview this work, with a focus on our own work. The tutorial will be self-contained and aims to develop a usable framework for mechanism design in multi-dimensional settings.
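As a warm-up for the tutorial’s starting point, here is a sketch of Myerson’s auction in the simplest single-item case with i.i.d. Uniform[0,1] values, where the virtual value φ(v) = 2v − 1 yields a second-price auction with reserve 1/2. The code and function names are our own illustration, not material from the tutorial.

```python
# Sketch: Myerson's optimal auction for one item and i.i.d. Uniform[0,1]
# values. The virtual value is phi(v) = v - (1 - F(v))/f(v) = 2v - 1,
# so the optimal auction is second-price with reserve phi^{-1}(0) = 1/2.
# Names and parameters are illustrative.

RESERVE = 0.5  # phi(v) = 2v - 1 crosses zero at v = 1/2

def myerson_uniform(bids):
    """Return (winner_index, payment); winner is None if no bid meets the reserve."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= RESERVE]
    if not eligible:
        return None, 0.0
    eligible.sort(reverse=True)
    winner = eligible[0][1]
    # Winner pays the larger of the reserve and the second-highest eligible bid.
    runner_up = eligible[1][0] if len(eligible) > 1 else RESERVE
    return winner, max(RESERVE, runner_up)

winner, price = myerson_uniform([0.8, 0.6, 0.3])
# winner == 0 (bid 0.8), price == 0.6 (second-highest eligible bid)
```

The multi-item generalization of exactly this object is what the tutorial is about.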
Axiomatic social choice theory: from Arrow’s impossibility to Fishburn’s maximal lotteries
Organizer: Felix Brandt (TU München)
Abstract: This tutorial will provide an overview of central results in social choice theory with a special focus on axiomatic characterizations as well as computational aspects. Topics to be covered include rational choice theory, choice consistency conditions, Arrovian impossibility theorems, tournament solutions, social decision schemes (i.e., randomized social choice functions), preferences over lotteries (including von Neumann-Morgenstern utility functions, stochastic dominance, and skew-symmetric bilinear utility functions), and the strategyproofness of social decision schemes. The overarching theme will be four escape routes from negative results such as the impossibilities due to Arrow and Gibbard-Satterthwaite: (i) restricting the domain of preferences, (ii) replacing choice consistency with variable-electorate consistency, (iii) only requiring expansion consistency, and (iv) randomization.
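To make one of the negative results concrete (this toy example is ours, not from the tutorial), here is the Condorcet paradox: pairwise majority rule can cycle, so no alternative beats every other.

```python
# Sketch: the Condorcet paradox, the simplest of the negative results the
# tutorial builds on; pairwise majority rule can be cyclic.
# Voters and alternatives are illustrative.
rankings = [["a", "b", "c"],   # voter 1: a > b > c
            ["b", "c", "a"],   # voter 2: b > c > a
            ["c", "a", "b"]]   # voter 3: c > a > b

def majority_beats(x, y, rankings):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

# Majority preferences form a cycle: a beats b, b beats c, c beats a.
cycle = [majority_beats(x, y, rankings)
         for x, y in [("a", "b"), ("b", "c"), ("c", "a")]]
```

Tournament solutions, one of the tutorial topics, are precisely ways of choosing from such cyclic majority relations.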
Bitcoin: the first decentralized digital currency
Organizer: Aviv Zohar (Hebrew Univ.)
Abstract: Bitcoin is a disruptive new protocol for a digital currency that has been growing in popularity. The most novel aspect of the protocol is its decentralized nature: it has no central entity in charge of the currency or backing it up, and no central issuer. Instead, Bitcoin is managed by a peer-to-peer network of nodes that process all its transactions securely. The protocol itself combines ideas from many areas of computer science, ranging from the use of cryptographic primitives to secure transactions, through economic mechanisms that deter denial-of-service attacks and incentivize participation, to its solution to the Byzantine consensus problem and the robust construction of its P2P network. The goal of the tutorial is to provide a basic understanding of the Bitcoin protocol, to discuss the main problems and challenges that it faces, and to provide a starting point for research on the protocol and its surrounding ecosystem.
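For readers who want something concrete before the tutorial, here is a toy sketch of the proof-of-work idea underlying Bitcoin mining. Real Bitcoin uses double SHA-256 against a far harder target; the data string and difficulty below are illustrative.

```python
import hashlib

# Sketch: hash-based proof of work, scaled down to a toy difficulty.
# A miner searches for a nonce such that the SHA-256 hash of
# (block data + nonce) falls below a target, here expressed as a
# required number of leading zero hex digits. Illustrative only.
def mine(data: str, difficulty: int = 2) -> int:
    """Return the first nonce whose hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine("block-header-bytes", difficulty=2)
```

Verifying a solution takes one hash, while finding one takes exponentially many in the difficulty, which is what makes the scheme useful for decentralized consensus.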
Privacy, information economics, and mechanism design
Organizers: Cynthia Dwork (Microsoft Research), Mallesh Pai (UPenn), and Aaron Roth (UPenn)
Abstract: Internet-scale interactions have implications for game theory and mechanism design in at least two ways. First, the ability of many entities to aggregate large amounts of sensitive data for purposes of financial gain has brought issues of privacy to the fore. As mechanism designers, it is therefore crucial that we understand both how to *model* agent costs for loss of privacy, as well as how to control them. Second, it has made “large markets and large games” the common case instead of the exception. Can mechanism designers leverage these large market conditions to design mechanisms with desirable properties that would not be possible to obtain in small games? What techniques can we use to enforce that players are “informationally small” with minimal assumptions on the economy? In this tutorial, we will discuss results and techniques from “differential privacy”, an approach developed over the last decade in the theoretical computer science literature, which remarkably can address both issues. We will motivate the definition, and then show both how it can provably control agent costs for “privacy” under worst-case assumptions, and how it can be used to develop exactly and asymptotically truthful mechanisms with remarkable properties in various large market settings. We will survey recent results at the intersection of privacy and mechanism design, with the goal of getting participants up to the frontier of the research literature on the topic.
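As a minimal, self-contained illustration of the machinery the tutorial surveys (not taken from the tutorial itself), here is the Laplace mechanism for a counting query; the parameter choices are ours.

```python
import math
import random

# Sketch: the Laplace mechanism, the basic differentially private primitive.
# For a counting query (sensitivity 1), adding Laplace(1/epsilon) noise
# yields epsilon-differential privacy. Parameters are illustrative.
def laplace_noise(scale: float) -> float:
    """Draw from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy (sensitivity 1)."""
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means stronger privacy and noisier answers; the mechanism-design results in the tutorial exploit exactly this trade-off.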
The game exhibits some peculiar phenomena despite the existence of a unique Nash equilibrium: Alice raises her hand, Bob does not raise his hand, and Charlie randomizes with equal probability. Charlie couldn’t be happier with the equilibrium as he will never have to take out the garbage (and could even decide who has to do the job by playing a pure strategy instead).
The security level of all players is 0.5 and the expected payoff in the Nash equilibrium is (0.5, 0.5, 1). However, the minimax strategies of Alice and Bob differ from their equilibrium strategies, i.e., they can guarantee their equilibrium payoff without playing their respective equilibrium strategies (a phenomenon that was also observed by Aumann)! The solution in which all players play their minimax strategies obviously suffers from the fact that this strategy profile fails to be an equilibrium: both Alice and Bob would want to deviate. On top of that, the unique equilibrium is particularly weak in the sense that it fails to be quasi-strict, i.e., all players could just as well play any other strategy without jeopardizing their payoff.
Quasi-strict equilibrium is an equilibrium refinement proposed by Harsanyi in 1973; it requires that all pure best responses are played with positive probability. Harsanyi showed that in almost all games, all equilibria are quasi-strict. Indeed, the three-player game above (taken from this paper) is one of the very few exceptions. Quasi-strict equilibrium is rather attractive from an axiomatic perspective. For example, it has been shown that the existence of quasi-strict equilibrium is sufficient to justify the assumption of common knowledge of rationality when players are ‘cautious’ (for more details see here and here).
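Harsanyi’s condition is easy to check mechanically. Below is a minimal sketch, for two-player games given by payoff matrices, of testing whether a given equilibrium is quasi-strict; the example game (Matching Pennies) and all function names are our own illustration.

```python
# Sketch: checking Harsanyi's quasi-strictness condition in a bimatrix game,
# i.e., that a pure strategy is a best response if and only if it is played
# with positive probability. Illustrative code, not from the cited papers.

def expected_payoffs(M, opponent_mix):
    """Payoff of each row of M against the opponent's mixed strategy."""
    return [sum(m * q for m, q in zip(row, opponent_mix)) for row in M]

def is_quasi_strict(A, B, x, y, tol=1e-9):
    """True if the equilibrium (x, y) of the bimatrix game (A, B) is quasi-strict."""
    row_payoffs = expected_payoffs(A, y)
    col_payoffs = expected_payoffs([list(c) for c in zip(*B)], x)  # B transposed
    best_row, best_col = max(row_payoffs), max(col_payoffs)
    rows_ok = all((abs(p - best_row) < tol) == (px > tol)
                  for p, px in zip(row_payoffs, x))
    cols_ok = all((abs(p - best_col) < tol) == (py > tol)
                  for p, py in zip(col_payoffs, y))
    return rows_ok and cols_ok

# Matching Pennies: the unique equilibrium ((1/2,1/2), (1/2,1/2)) is quasi-strict.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
assert is_quasi_strict(A, B, [0.5, 0.5], [0.5, 0.5])
```

An equilibrium fails the check exactly when some player leaves a pure best response unplayed, which is the weakness of the garbage game above.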
In 1999, Henk Norde proved that every two-player game contains a quasi-strict equilibrium (via a rather elaborate proof using Brouwer’s fixed-point theorem), strengthening earlier results which showed existence in zero-sum games, bimatrix games with a finite number of equilibria, 2×n games, etc. Norde’s existence result implies that computing a quasi-strict equilibrium is PPAD-hard (while this problem was shown NP-hard for games with at least three players). Curiously, however, membership in PPAD for two-player games remains open due to the intricate existence proof by Norde (see also this review of Norde’s paper by Bernhard von Stengel).
Coming back to the example, it seems as if Charlie has to live with the deficiencies of Nash equilibrium and prepare to take out the garbage with positive probability.
Here’s an article that has been trending on the New York Times site. It’s about Sperner’s Lemma, and, amazingly, they get the technical details right! The article describes a pretty practical scheme for pairing n indivisible goods with n agents; it’s motivated by the example of matching roommates with rooms, each of which has different pros and cons that the roommates each value differently. There’s a nice discussion of the idea of fair division, a pretty thorough description of a paper by Francis Su, a shout-out to Turing’s Invisible Hand Blogger Emeritus Ariel Procaccia and his web site spliddit, a quote from Stephen Brams, an online rent division calculator, and a very nice interactive graphic of how Sperner’s Lemma works.
So, what do you all think? Do you buy it? And, have you ever used a formal fair-division algorithm to make a real-life decision?
Registration for the temporally co-located Meeting of the Society for Social Choice and Welfare (SCW) is open as well.
The Tenth Ad Auction Workshop (here is the call for papers)
PS: Since I accepted the intimidating invitation below, I’ve been thinking of what my first post should be about. Now I have to admit I took the easy way out. Thanks for letting me participate. I’ll try to think of something more original next time.
Spectacular work, much of it done over the last decade, has revealed a new chapter on equilibrium computation. The following striking dichotomies, based on these insights, speak for themselves. For the readers’ convenience, all the relevant references have been hyperlinked.
| | 2-Nash | k-Nash, k ≥ 3 |
|---|---|---|
| Nature of solution | Rational [1] | Algebraic; irrational example [2] |
| Complexity | PPAD-complete [3] [4] [5] | FIXP-complete [6] |
| Practical algorithms | Lemke-Howson [1] | —?— |
| Decision version | NP-complete [7][8] | ETR-complete [9][10] |
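The first row records that two-player equilibria are rational [1]. The reason, in brief: once the supports are guessed, an equilibrium solves a linear system with rational coefficients. Here is a minimal sketch over exact rationals for a fully mixed 2×2 game; the example game (Battle of the Sexes) and the function are our own illustration, not the Lemke-Howson algorithm.

```python
from fractions import Fraction

# Sketch of why 2-Nash solutions are rational: on a fixed support, an
# equilibrium solves a linear system with rational coefficients, so exact
# rational arithmetic suffices. Below, the fully mixed equilibrium of a
# 2x2 game, computed over the rationals. Illustrative only.
def fully_mixed_2x2(A, B):
    """Mixed equilibrium making each opponent indifferent (assumes one exists)."""
    A = [[Fraction(v) for v in row] for row in A]
    B = [[Fraction(v) for v in row] for row in B]
    # Row mixes (p, 1-p) so both columns give the column player equal payoff.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Column mixes (q, 1-q) so both rows give the row player equal payoff.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return (p, 1 - p), (q, 1 - q)

# Battle of the Sexes:
x, y = fully_mixed_2x2([[2, 0], [0, 1]], [[1, 0], [0, 2]])
# x == (2/3, 1/3), y == (1/3, 2/3): exact rationals, as the table predicts
```

For three or more players the analogous systems become polynomial rather than linear, which is the source of the irrational examples [2] in the right-hand column.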
| | SPLC utilities | PLC utilities |
|---|---|---|
| Nature of solution | Rational [11][12] | Algebraic [12]; irrational example [17] |
| Complexity | PPAD-complete [11] [13] | FIXP-complete [14] [15] |
| Practical algorithms | Lemke-based [16] | —?— |
| Decision version | NP-complete [11] | ETR-complete [14] |
| | SPLC production | PLC production |
|---|---|---|
| Nature of solution | Rational [18] | Algebraic [15]; irrational example [18] |
| Complexity | PPAD-complete [18] | FIXP-complete [15] [14] |
| Practical algorithms | Lemke-based [18] | —?— |
| Decision version | NP-complete [18] | ETR-complete [14] |
Note: PLC = piecewise-linear concave; SPLC = separable, piecewise-linear concave.
In the third table, which varies the production functions, the agents’ utilities are as follows: all negative results assume the most restricted utilities, i.e., linear, while the positive results assume SPLC utilities.
These dichotomies were first identified in [15]. This paper also extends the second and third dichotomies from SPLC to the new class of Leontief-free functions, which properly contains SPLC.
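For a concrete taste of market equilibrium computation in the simplest setting underlying these tables, here is a sketch of proportional response dynamics, a well-known iterative process that converges to equilibrium prices in linear Fisher markets; the example market is our own and the sketch is not taken from the papers cited above.

```python
# Sketch: proportional response dynamics for a *linear* Fisher market, the
# simplest utility class relative to the tables above. Each buyer splits its
# budget over goods in proportion to the utility each good contributed last
# round; bids converge to market-clearing prices. Example market is ours.
def proportional_response(utilities, budgets, rounds=200):
    n, m = len(utilities), len(utilities[0])
    # Initial bids: split each budget in proportion to the utility coefficients.
    bids = [[budgets[i] * utilities[i][j] / sum(utilities[i]) for j in range(m)]
            for i in range(n)]
    for _ in range(rounds):
        prices = [sum(bids[i][j] for i in range(n)) for j in range(m)]
        alloc = [[bids[i][j] / prices[j] for j in range(m)] for i in range(n)]
        for i in range(n):
            gains = [utilities[i][j] * alloc[i][j] for j in range(m)]
            total = sum(gains)
            bids[i] = [budgets[i] * g / total for g in gains]
    return [sum(bids[i][j] for i in range(n)) for j in range(m)]

# Two buyers with unit budgets and mirrored linear utilities over two goods;
# by symmetry the equilibrium prices are (1, 1).
prices = proportional_response([[2, 1], [1, 2]], [1.0, 1.0])
```

It is exactly when one moves from linear to SPLC and PLC utilities (and adds production) that such simple dynamics give way to the Lemke-based methods and hardness results in the tables.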