## WINE 2014 poster session

This year, the Conference on Web and Internet Economics (WINE) will take place in Beijing, and will feature a poster session for the presentation both of original results that have not yet been published elsewhere and of recently published results relevant to the WINE community. While the deadline for regular paper submission has already passed, you still have time to submit your work to the poster session (deadline is September 15). This is a great opportunity to get your work exposed to the WINE community. More information can be found here: http://wine2014.amss.ac.cn/dct/page/70011

## Prior-Free Analysis of Digital-Good Auctions and Beyond

An important conjecture in prior-free mechanism design was affirmatively resolved this year. The goal of this post is to explain what the conjecture was, why its resolution is fundamental to the theoretical study of algorithms (and mechanisms), and encourage the study of important open issues that still remain.

The conjecture, originally from Goldberg et al. (2004), is that the lower bound of 2.42 on the approximation factor of a prior-free digital-good auction is tight. In other words, the conjecture stated that there exists a digital-good auction whose revenue, on any input, is at least a 1/2.42 fraction of the best revenue from a posted price that at least two bidders accept (henceforth: the benchmark). The number 2.42 arises as the limit as the number of agents n approaches infinity (for finite n the lower bound improves and is given by a precise formula). The conjecture was resolved in the affirmative by Ning Chen, Nick Gravin, and Pinyan Lu in their STOC 2014 paper Optimal Competitive Auctions. The resolution of this conjecture suggests that a natural method of proving lower bounds on approximation is generally tight, but we still do not really understand why.

Summary.
To explain the statement of the theorem, let’s consider the n = 2 special case. For n = 2 agents, the benchmark is twice the lower agent’s value. (The optimal posted price that both bidders will accept is a price equal to the lower bidder’s value; the revenue from this posted price is twice the lower bidder’s value.) The goal of prior-free auction design is to find an auction that approximates this benchmark. For the n = 2 special case there is a natural candidate: the second-price auction. The second-price auction’s revenue for two bidders is equal to the lower bidder’s value. Consequently, the second-price auction is a two approximation: the ratio of the benchmark to the second-price auction’s revenue is two in the worst case over all inputs (in fact, it is exactly two on every input).
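As a quick sanity check, here is a small sketch (my own illustration, not from the post) that computes the benchmark and the second-price revenue for a few two-bidder inputs:

```python
# Two-bidder digital-good setting: the benchmark is the best revenue from a
# posted price that both bidders accept, i.e., twice the lower value.

def benchmark(bids):
    # Post the lower of the two values; both bidders accept, so sell to both.
    return 2 * min(bids)

def second_price_revenue(bids):
    # The winner pays the second-highest bid; with two bidders, the lower one.
    return sorted(bids)[-2]

for bids in [(1.0, 3.0), (5.0, 5.0), (0.2, 100.0)]:
    ratio = benchmark(bids) / second_price_revenue(bids)
    print(ratio)  # 2.0 on every input
```

The ratio is the same on every input, which is exactly the sense in which the second-price auction is a two approximation here.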

The Goldberg et al. (2004) lower bound for the n = 2 agent special case shows that this two approximation is optimal. The proof of the lower bound employs the probabilistic method. A distribution over bidder values is considered, the expected benchmark is analyzed, the expected revenue of the optimal auction (for the distribution) is analyzed, and the ratio of their expectations gives the lower bound. The last step follows because any auction obtains at most the optimal auction’s revenue, and if the ratio of the expectations has a certain value, there must be an input in the support of the distribution that attains at least this ratio. A free parameter in this analysis is the distribution over bidder values. The approach of Goldberg et al. was to use the distribution for which all auctions obtain the same revenue, i.e., the so-called equal-revenue or Pareto distribution. This distribution is defined so that an agent with a random value accepts a price p > 1 with probability exactly 1/p, and the expected revenue generated is exactly one.
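To see the n = 2 numbers concretely, here is a sketch (my own numerical check, not from the chapter). Under the equal-revenue distribution any posted price p earns p · (1/p) = 1 in expectation per agent, so the expected revenue of any auction is at most 2, while the expected benchmark works out to E[2 · min(v1, v2)] = 4; the ratio recovers the lower bound of two:

```python
# Equal-revenue distribution: Pr[v > p] = 1/p for p >= 1, so every posted
# price earns expected revenue 1 per agent, and any auction earns at most
# 2 in expectation from two agents.

def expected_min_of_two(steps=10**6):
    """E[min(v1, v2)] for i.i.d. equal-revenue values, via the quantile
    function of the minimum: Pr[min > t] = 1/t^2 for t >= 1, so the
    minimum at quantile q is (1 - q)**-0.5.  Midpoint-rule integration
    over q in [0, 1); the exact answer is 2."""
    total = 0.0
    for i in range(steps):
        q = (i + 0.5) / steps
        total += (1.0 - q) ** -0.5
    return total / steps

expected_benchmark = 2 * expected_min_of_two()  # E[2 * min(v1, v2)] ~ 4
best_auction_revenue = 2 * 1.0                  # at most 1 per agent in expectation
print(expected_benchmark / best_auction_revenue)  # ~ 2: the n = 2 lower bound
```

By the probabilistic-method step in the text, some input in the support must then realize a ratio of at least two, matching the second-price auction's guarantee.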

For more details see the newly updated Chapter 6 of Mechanism Design and Approximation.

Discussion.
Prior-free mechanism design falls into a genre of algorithm design where there is no pointwise optimal algorithm. For this reason the worst-case analysis of a mechanism is given relative to a benchmark. (The same is true for the field of online algorithms where this is referred to as competitive analysis.) In the abstract the optimal algorithm design problem is the following:

$\min_{\text{ALG}} \max_{\text{INPUT}} \frac{\text{BENCHMARK}(\text{INPUT})}{\text{ALG}(\text{INPUT})}$

where ALG is a possibly randomized algorithm. Let ALG* be the optimal algorithm. Yao’s minimax principle states that this is the same as:

$\max_{\text{DIST}} \min_{\text{ALG}} \frac{{\mathbf E}_{\text{INPUT} \sim \text{DIST}}[\text{BENCHMARK}(\text{INPUT})]}{{\mathbf E}_{\text{INPUT} \sim \text{DIST}}[\text{ALG}(\text{INPUT})]}$

where DIST is a distribution over inputs and ALG may as well be deterministic. Of course, if instead of maximizing over DIST we consider some particular distribution DIST we get a lower bound on the worst-case approximation of any algorithm. Let DIST* be the worst distribution for BENCHMARK. In general DIST* should depend on BENCHMARK.

Let EQDIST denote the product distribution for which the expected value of ALG(INPUT), for INPUT drawn from EQDIST, is a constant for all auctions ALG. For a number of auction problems (not just digital goods), it was conjectured that DIST* = EQDIST. Two things are important in this statement:

• EQDIST is a product distribution, whereas Yao’s theorem may generally require correlated distributions. (Why is a product distribution sufficient?!)
• EQDIST is not specific to BENCHMARK. (Why not?!)

Prior to the Chen-Gravin-Lu paper the equality of EQDIST and DIST* was known to hold for specific benchmarks and the following problems:

1. Single agent monopoly pricing (for revenue). See Chapter 6 of MDnA.
2. Two-agent digital-good auctions (for revenue). See Chapter 6 of MDnA.
3. Three-agent digital-good auctions (for revenue). See Hartline and McGrew (2005).
4. One-item two-agent auctions (for residual surplus, i.e., value minus payment).

Of these, (1) and (2) are very simple, while (3) and (4) are non-obvious. All of these results come from explicitly exhibiting the optimal auction ALG*.

Chen, Gravin, and Lu give the first highly non-trivial proof that DIST* = EQDIST without explicitly constructing ALG*. Moreover, they do it not just for the standard benchmark (given above) but for any benchmark with certain properties. It’s clear from their proof which properties they use (monotonicity, symmetry, scale invariance, constant in the highest value). It is not so clear which are necessary for the theorem. For example, the benchmarks in (1) and (4) above are not constant in the highest bid.

Conclusion.
Prior-free mechanism design problems are exemplary of a genre of algorithm design problems where there is no pointwise optimal algorithm. (The competitive analysis of online algorithms gives another example.) These problems stress the classical worst-case algorithm design and analysis paradigm. We really do not understand how to search the space of mechanisms for prior-free optimal ones. (Computational hardness results are not known either.) We also do not generally know when and why the lower-bounding approach above gives a tight answer. The Chen-Gravin-Lu result is the most serious recent progress we have seen on these questions. Let’s hope it is just the beginning.

## SAGT 2014, 9/30-10/2, Patras, Greece

The 7th International Symposium on Algorithmic Game Theory (SAGT) will take place in Patras, Greece, from September 30 to October 2. Notice the change in location (from the original Haifa, Israel).

## The EC Academic Job Market

Midway through EC’14, an open discussion about the academic job market for Econ/CS Ph.D.s was scheduled for 7:30am. Even at 7:30am the room was packed! David Parkes led the discussion, which focused on how to improve the academic job market for EC people. While Econ/CS Ph.D.s benefit from fantastic opportunities at industrial research laboratories, this demand has not translated into a steady availability of positions in the academic job market. If attendance and participation in this discussion are any indication of the importance of this issue, then it is one we should devote significant effort and resources to resolving. This blog post serves to summarize the context and suggestions from the EC discussion and suggested actions by the SIGecom executive committee, as well as to solicit additional feedback from the community.

Identity. When academic positions come tied to areas, identity is an important issue. While many in the Econ/CS community come from an AI/ML or Algorithms/Theory background, many consider Econ/CS, henceforth EC, their primary research community. Nonetheless, EC faculty applicants will typically be competing for AI or Theory positions. Only a few schools have chosen to specifically list EC as a target area for hiring (independent of AI or Theory persuasion). In comparison, Computational Biology, in the last decade, and Data Science, contemporaneously, have become first-order subfields with respect to hiring. One recommendation for hiring discussions is to separate EC from AI and Theory hiring. It is not to EC’s advantage to be in a zero-sum game with either AI or Theory.

Public Relations. EC does not presently have a clearly articulated value proposition that engenders broad investment from within CS, broad support for hiring from the Economics academic community, or broad visibility by the general public. One concrete action to take is to be more public about successes of our field in terms of impact on practice and impact on science (in particular Economics) and about the big challenges our field is hoping to address in the medium and long term. A second concrete action to take is to prepare development pitches, both at the department level for including EC in the vision for the department, and at the donor level to provide a basis for deans to raise money for faculty lines in EC. A third action item is to encourage more outreach articles in general computer science venues and in popular science venues.

Web Resources. The SIGecom advisors have discussed the idea of creating a web resource that would facilitate the initiative described above. In particular:

1. To aggregate survey articles, general CS articles, popular science articles, and teaching materials (cf. Interactions.org).
2. To collect and disseminate development resources, e.g., for faculty to pitch their department for EC hiring, for deans to pitch their donors for EC hiring, for researchers to pitch funding agencies (cf. Theory Matters).
3. To collect advice for EC applicants to faculty positions outside of EC, e.g., operations research, business schools, information science, etc. These academic markets have different timings, structure, and focus.
4. To collect job posts and publicize job market outcomes (cf. the computational complexity blog).
5. To survey research themes and success, impact on practice, and impact on science.

The SIG would contribute resources to make sure that the web resource is well designed and hosted, and we plan to do all of this with a view to making sure that whatever we do is maintainable going forward.

Coordination. Turing’s Invisible Hand coblogger Jason Hartline has agreed to serve as the SIGecom 2014-2015 Special Initiatives Chair for the Job Market and will be coordinating the effort to assemble this web resource, recruiting volunteers, and facilitating the initiatives proposed above. Please agree to help if asked, write Jason to volunteer, or provide discussion in the comments below.

Joint post with SIGecom Chair David Parkes.

## Berthold Vöcking: 1967-2014

Berthold Vöcking, a leading researcher in algorithmic game theory and chair of the 2013 Symposium on Algorithmic Game Theory, sadly passed away on June 11th. Tim Roughgarden gave the following apt remarks before a moment of silence was held in commemoration of Berthold at ACM-EC two weeks ago:

Berthold was a phenomenal problem-solver, and he made numerous
contributions across many subfields of algorithmic game theory. To
name just a few, his celebrated paper with Czumaj resolved the price
of anarchy in the Koutsoupias-Papadimitriou scheduling model, the
model in which the POA was originally defined. His early work in
algorithmic mechanism design (e.g., with Briest and Krysta), which I
regularly teach in my classes, demonstrated the richness of the design
space for single-parameter problems. His work with Skopalik and
others characterized the computational complexity of computing
equilibria in congestion games. He was an exceptionally strong
scientist.

## EC early registration and tutorials

The early registration period for ACM EC 2014 ends today.  Also, I’d like to draw your attention to this year’s EC tutorials (not without self-interest):

Recent progress in multi-dimensional mechanism design

Organizers: Yang Cai (UC Berkeley), Costis Daskalakis (MIT), and Matt Weinberg (MIT)

Abstract: Mechanism design in the presence of Bayesian priors has received much attention in the Economics literature, focusing among other problems on generalizing Myerson’s celebrated auction to multi-item settings. Nevertheless, only special cases have been solved, with a general solution remaining elusive. More recently, there has been an explosion of algorithmic work on the problem, focusing on computation of optimal auctions, and understanding their structure. The goal of this tutorial is to overview this work, with a focus on our own work. The tutorial will be self-contained and aims to develop a usable framework for mechanism design in multi-dimensional settings.

Axiomatic social choice theory: from Arrow’s impossibility to Fishburn’s maximal lotteries

Organizer: Felix Brandt (TU München)

Abstract: This tutorial will provide an overview of central results in social choice theory with a special focus on axiomatic characterizations as well as computational aspects. Topics to be covered include rational choice theory, choice consistency conditions, Arrovian impossibility theorems, tournament solutions, social decision schemes (i.e., randomized social choice functions), preferences over lotteries (including von Neumann-Morgenstern utility functions, stochastic dominance, and skew-symmetric bilinear utility functions), and the strategyproofness of social decision schemes. The overarching theme will be four escape routes from negative results such as the impossibilities due to Arrow and Gibbard-Satterthwaite: (i) restricting the domain of preferences, (ii) replacing choice consistency with variable-electorate consistency, (iii) only requiring expansion consistency, and (iv) randomization.

Bitcoin: the first decentralized digital currency

Organizer: Aviv Zohar (Hebrew Univ.)

Abstract: Bitcoin is a disruptive new protocol for a digital currency that has been growing in popularity. The most novel aspect of the protocol is its decentralized nature: it has no central entity in charge of the currency or backing it up, and no central issuer. Instead, Bitcoin is managed by a peer-to-peer network of nodes that process all its transactions securely. The protocol itself combines ideas from many areas of computer science, ranging from its use of cryptographic primitives to secure transactions and of economic mechanisms to avoid denial-of-service attacks and incentivize participation, through its solution to the Byzantine consensus problem, to the robust construction of its P2P network. The goal of the tutorial is to provide a basic understanding of the Bitcoin protocol, to discuss the main problems and challenges that it faces, and to provide a starting point for research on the protocol and its surrounding ecosystem.

Privacy, information economics, and mechanism design

Organizers: Cynthia Dwork (Microsoft Research), Mallesh Pai (UPenn), and Aaron Roth (UPenn)

Abstract: Internet scale interactions have implications for game theory and mechanism design in at least two ways. First, the ability of many entities to aggregate large amounts of sensitive data for purposes of financial gain has brought issues of privacy to the fore. As mechanism designers, it is therefore crucial that we understand both how to model agent costs for loss of privacy, as well as how to control them. Second, it has made “large markets and large games” the common case instead of the exception. Can mechanism designers leverage these large market conditions to design mechanisms with desirable properties that would not be possible to obtain in small games? What techniques can we use to enforce that players are “informationally small” with minimal assumptions on the economy? In this tutorial, we will discuss results and techniques from “differential privacy”, an approach developed over the last decade in the theoretical computer science literature, which remarkably can address both of these two issues. We will motivate the definition, and then show both how it can provably control agent costs for “privacy” under worst-case assumptions, and how it can be used to develop exactly and asymptotically truthful mechanisms with remarkable properties in various large market settings. We will survey recent results at the intersection of privacy and mechanism design, with the goal of getting participants up to the frontier of the research literature on the topic.

## Taking out the garbage, quasi-strict equilibrium, and PPAD

Suppose Alice, Bob, and Charlie want to decide who has to take out the garbage by playing the following game. Each of the players independently and simultaneously raises his hand or not. Alice loses if exactly one player raises his hand, whereas Bob loses if exactly two players raise their hands, and Charlie loses if either all or no players raise their hand. A normal-form representation of the situation looks as follows (Alice chooses rows, Bob columns, and Charlie matrices).

The game exhibits some peculiar phenomena despite the existence of a unique Nash equilibrium: Alice raises her hand, Bob does not raise his hand, and Charlie randomizes with equal probability. Charlie couldn’t be happier with the equilibrium as he will never have to take out the garbage (and could even decide who has to do the job by playing a pure strategy instead).

The security level of all players is 0.5 and the expected payoff in the Nash equilibrium is (0.5, 0.5, 1). However, the minimax strategies of Alice and Bob are different from their equilibrium strategies, i.e., they can guarantee their equilibrium payoff without playing their respective equilibrium strategies (a phenomenon that was also observed by Aumann)! The solution in which all players play their minimax strategies obviously suffers from the fact that this strategy profile fails to be an equilibrium: both Alice and Bob would want to deviate. On top of that, the unique equilibrium is particularly weak in the sense that it fails to be quasi-strict, i.e., all players could just as well play any other strategy without jeopardizing their payoff.
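These claims are easiest to check mechanically. Here is a sketch (my own encoding, with the natural normalization that the loser gets payoff 0 and the other two players get payoff 1) that recomputes the equilibrium payoffs and confirms that every unilateral deviation leaves the deviator's payoff unchanged:

```python
from itertools import product

def loser(profile):
    """profile = (alice, bob, charlie); 1 = raise hand, 0 = don't."""
    raised = sum(profile)
    if raised == 1:
        return 0      # Alice loses with exactly one hand raised
    if raised == 2:
        return 1      # Bob loses with exactly two hands raised
    return 2          # Charlie loses with all or no hands raised

def expected_payoffs(strategies):
    """strategies[i] = probability that player i raises a hand.
    Payoff 1 for not taking out the garbage, 0 for the loser
    (my normalization; the original post's matrices are not shown here)."""
    payoffs = [0.0, 0.0, 0.0]
    for profile in product([0, 1], repeat=3):
        prob = 1.0
        for p, action in zip(strategies, profile):
            prob *= p if action else 1.0 - p
        out = loser(profile)
        for i in range(3):
            if i != out:
                payoffs[i] += prob
    return payoffs

eq = [1.0, 0.0, 0.5]   # Alice raises, Bob doesn't, Charlie mixes 50/50
print(expected_payoffs(eq))  # [0.5, 0.5, 1.0], as claimed

# Every unilateral pure deviation leaves the deviator's payoff unchanged;
# this is exactly the failure of quasi-strictness noted above.
for i in range(3):
    for pure in (0.0, 1.0):
        dev = list(eq)
        dev[i] = pure
        assert expected_payoffs(dev)[i] == expected_payoffs(eq)[i]
```

The equalities in the final loop show both that the profile is a Nash equilibrium (no deviation is profitable) and that every player is indifferent across all strategies, so no pure best response is forced to receive positive probability.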

Quasi-strict equilibrium is an equilibrium refinement proposed in 1973 by Harsanyi and requires that all pure best responses are played with positive probability. Harsanyi showed that in almost all games all equilibria are quasi-strict. Indeed the three-player game above (taken from this paper) is one of the very few exceptions. Quasi-strict equilibrium is rather attractive from an axiomatic perspective. For example, it has been shown that the existence of quasi-strict equilibrium is sufficient to justify the assumption of common knowledge of rationality when players are ‘cautious’ (for more details see here and here).

In 1999, Henk Norde proved that every two-player game contains a quasi-strict equilibrium (via a rather elaborate proof using Brouwer’s fixed-point theorem), strengthening earlier results which showed existence in zero-sum games, bimatrix games with a finite number of equilibria, 2×n games, etc. Norde’s existence result implies that computing a quasi-strict equilibrium is PPAD-hard (while this problem was shown NP-hard for games with at least three players). Curiously, however, membership in PPAD for two-player games remains open due to the intricate existence proof by Norde (see also this review of Norde’s paper by Bernhard von Stengel).

Coming back to the example, it seems as if Charlie has to live with the deficiencies of Nash equilibrium and prepare to take out the garbage with positive probability.