STOC 2011 Accepted Papers
February 9, 2011 by Noam Nisan
The list of accepted papers to STOC 2011 is now online. The following seem to be related to Algorithmic Game Theory/Economics:
On Optimal Single-Item Auctions |
Christos Papadimitriou and George Pierrakos
We revisit the problem of designing the profit-maximizing single-item auction, solved by Myerson in his seminal paper for the case in which bidder valuations are independently distributed. We focus on general joint distributions, seeking the optimal deterministic incentive compatible auction. We give a geometric characterization of the optimal auction through a duality theorem, resulting in an efficient algorithm for finding the optimal deterministic auction in the two-bidder case and an inapproximability result for three or more bidders.
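For context, Myerson's recipe in the independent case (a standard fact, recalled here for orientation) is to allocate the item to the bidder with the highest nonnegative ironed virtual value
$$\varphi_i(v_i) \;=\; v_i - \frac{1 - F_i(v_i)}{f_i(v_i)},$$
where $F_i$ and $f_i$ are the CDF and density of bidder $i$'s valuation; it is exactly this bidder-by-bidder recipe that breaks down once valuations are correlated.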
Mechanism design with uncertain inputs (to err is human, to forgive divine) |
Uriel Feige and Moshe Tennenholtz
We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model of uncertainty our main results are as follows.
1) Even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the welfare.
2) To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a {\em fair share} of a player, and design mechanisms that are $\Omega(1)$-fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest.
3) In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up $\Omega(1)$-fairness).
We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties.
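To make the full-information baseline concrete, here is a minimal sketch (mine, not from the paper) of the allocation rule behind the truthful mechanism mentioned above: with known job lengths, running the shortest jobs first maximizes the number finished by a common deadline, by a simple exchange argument. The function name and interface are assumptions for illustration.

def max_jobs_by_deadline(lengths, deadline):
    """Greedy baseline: schedule shortest jobs first.  With a single
    machine and a common deadline this completes the maximum possible
    number of jobs (exchange argument)."""
    scheduled, elapsed = [], 0.0
    for job, length in sorted(enumerate(lengths), key=lambda x: x[1]):
        if elapsed + length <= deadline:
            scheduled.append(job)
            elapsed += length
    return scheduled

# Example: lengths 2, 5, 3, 4 and deadline 9 -> jobs 0, 2, 3 fit.
print(max_jobs_by_deadline([2, 5, 3, 4], 9))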
Online Bipartite Matching with Unknown Distributions |
Chinmay Karande and Aranyak Mehta and Pushkar Tripathi
We consider the online bipartite matching problem in the unknown distribution input model. We show that the RANKING algorithm of \cite{KVV90} achieves a competitive ratio of at least 0.653. This is the first analysis to show an algorithm which breaks the natural $1-1/e$ ‘barrier’ in the unknown distribution model (our analysis in fact works in the stronger, random order model) and answers an open question in \cite{GM08}. We also describe a family of graphs on which RANKING does no better than 0.727. Finally, we show that for graphs which have $k > 1$ disjoint perfect matchings, RANKING achieves a competitive ratio of at least $1 - \sqrt{\frac{1}{k} - \frac{1}{k^2} + \frac{1}{n}}$; in particular, RANKING achieves a factor of $1-o(1)$ for graphs with $\omega(1)$ disjoint perfect matchings.
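For readers new to RANKING, here is a minimal sketch of the algorithm itself (the graph interface and names are mine): fix one uniformly random priority order over the offline side up front, then greedily match each arriving vertex to its highest-priority free neighbor.

import random

def ranking(offline, arrivals):
    """KVV RANKING: draw one random permutation (ranking) of the offline
    vertices, then match each arriving online vertex to its
    highest-ranked still-unmatched neighbor."""
    rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
    matched, matching = set(), {}
    for u, neighbors in arrivals:  # (online vertex, list of offline neighbors)
        free = [v for v in neighbors if v not in matched]
        if free:
            v = min(free, key=rank.__getitem__)
            matched.add(v)
            matching[u] = v
    return matching

# Example: two online vertices competing for overlapping neighbors.
print(ranking(['a', 'b'], [('x', ['a', 'b']), ('y', ['a'])]))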
Mechanisms for (Mis)allocating Scientific Credit |
Jon Kleinberg and Sigal Oren
Scientific communities confer many forms of {\em credit} — both implicit and explicit — on their successful members, and it has long been argued that the motivation provided by these forms of credit helps to shape a community’s collective attention toward different lines of research. The allocation of scientific credit, however, has also been the focus of long-documented pathologies: certain research questions are said to command too much credit, at the expense of other equally important questions; and certain researchers (in a version of Robert Merton’s {\em Matthew Effect}) seem to receive a disproportionate share of the credit, even when the contributions of others are similar.

Here we show that the presence of each of these pathologies can in fact increase the collective productivity of a community. We consider a model for the allocation of credit, in which individuals can choose among {\em projects} of varying levels of importance and difficulty, and they compete to receive credit with others who choose the same project. Under the most natural mechanism for allocating credit, in which it is divided among those who succeed at a project in proportion to the project’s importance, the resulting selection of projects by self-interested, credit-maximizing individuals will in general be socially sub-optimal. However, we show that there exist ways of allocating credit out of proportion to the true importance of the projects, as well as mechanisms that assign credit out of proportion to the relative contributions of the individuals, that lead credit-maximizing individuals to collectively achieve social optimality. These results therefore suggest how well-known forms of misallocation of scientific credit can in fact serve to channel self-interested behavior into socially optimal outcomes.
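As a toy illustration of the credit rule described above (one plausible reading of the abstract; the success model and interface here are my assumptions, not the paper's), a project's importance is split among the players who succeed at it:

from collections import Counter

def credit_shares(importance, choices, succeeded):
    """Split each project's importance equally among the players who
    succeed at it; players who fail receive no credit."""
    winners = Counter(p for i, p in enumerate(choices) if succeeded[i])
    return [importance[choices[i]] / winners[choices[i]] if succeeded[i] else 0.0
            for i in range(len(choices))]

# Players 0 and 1 both succeed at project 0 (importance 10) and split it;
# player 2 alone succeeds at project 1 (importance 4).
print(credit_shares({0: 10.0, 1: 4.0}, [0, 0, 1], [True, True, True]))  # [5.0, 5.0, 4.0]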
An Impossibility Result for Truthful Combinatorial Auctions with Submodular Valuations |
Shahar Dobzinski
We show that every universally truthful randomized mechanism for combinatorial auctions with submodular valuations that provides an $m^{\frac 1 2 -\epsilon}$ approximation to the social welfare and uses value queries only must use exponentially many value queries, where $m$ is the number of items. In contrast, if incentives are ignored, constant-ratio approximation algorithms for this problem exist. Our approach is based on a novel \emph{direct hardness} technique and completely skips the notoriously hard characterization step, which has so far been the main obstacle to proving impossibility results in algorithmic mechanism design.

We demonstrate two additional applications of our new technique: (1) an impossibility result for universally-truthful polynomial time flexible combinatorial public projects and (2) an impossibility result for truthful-in-expectation mechanisms for exact combinatorial public projects. The latter is the first result that bounds the power of polynomial-time truthful-in-expectation mechanisms in any setting.
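The constant-ratio benchmark alluded to above can be made concrete: the simple greedy algorithm of Lehmann, Lehmann and Nisan gives a 1/2-approximation for submodular welfare using only value queries. A sketch under an assumed value-oracle interface; it is, of course, not truthful, which is the point of the contrast:

def greedy_welfare(items, bidders, value):
    """Greedy 1/2-approximation for submodular welfare: assign each item,
    in arbitrary order, to the bidder with the largest marginal value.
    value(i, S) is an assumed value-query oracle for bidder i on bundle S."""
    bundles = {i: frozenset() for i in bidders}
    for item in items:
        best = max(bidders,
                   key=lambda i: value(i, bundles[i] | {item}) - value(i, bundles[i]))
        bundles[best] = bundles[best] | {item}
    return bundles

# Example with additive (hence submodular) valuations:
vals = {0: {'a': 3, 'b': 1}, 1: {'a': 2, 'b': 2}}
print(greedy_welfare(['a', 'b'], [0, 1], lambda i, S: sum(vals[i][x] for x in S)))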
Rank-1 Bimatrix Games: A Homeomorphism and a Polynomial Time Algorithm |
Bharat Adsul and Jugal Garg and Ruta Mehta and Milind Sohoni
Given a rank-1 bimatrix game (A,B), i.e., where rank(A+B)=1, we construct a suitable linear subspace of the rank-1 game space and show that this subspace is homeomorphic to its Nash equilibrium correspondence. Using this homeomorphism, we give the first polynomial time algorithm for computing an exact Nash equilibrium of a rank-1 bimatrix game. This settles an open question posed in Kannan and Theobald (SODA 2007) and Theobald (2007). In addition, we give a novel algorithm to enumerate all the Nash equilibria of a rank-1 game and show that a similar technique may also be applied for finding a Nash equilibrium of any bimatrix game. This technique also proves the existence, oddness and the index theorem of Nash equilibria in a bimatrix game. Further, we extend the rank-1 homeomorphism result to a fixed rank game space, and give a fixed point formulation on $[0,1]^k$ for solving a rank-k game. The homeomorphism and the fixed point formulation are piece-wise linear and considerably simpler than the classical constructions.
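A toy illustration of the game class (not the authors' algorithm): zero-sum games are exactly the rank-0 case $A+B=0$, and a rank-1 game is one where $A+B$ is an outer product.

import numpy as np

# Build a rank-1 bimatrix game: pick any A and force A + B = u v^T.
u, v = np.array([[1.0], [2.0]]), np.array([[3.0, 1.0]])
A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = u @ v - A
assert np.linalg.matrix_rank(A + B) == 1  # rank-1 by construction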
Dueling Algorithms |
Nicole Immorlica and Adam Tauman Kalai and Brendan Lucier and Ankur Moitra and Andrew Postlewaite and Moshe Tennenholtz
We revisit classic algorithmic search and optimization problems from the perspective of competition. Rather than a single optimizer minimizing expected cost, we consider a zero-sum game in which a search problem is presented to two players, whose only goal is to {\em outperform the opponent}. Such games are typically exponentially large zero-sum games, but they often have a rich structure. We provide general techniques by which such structure can be leveraged to find minmax-optimal and approximate minmax-optimal strategies. We give examples of ranking, hiring, compression, and binary search duels, among others. We give bounds on how often one can beat the classic optimization algorithms in such duels.
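The duels in the paper are exponentially large, but for a small explicit payoff matrix the minmax-optimal strategy is just a linear program; a minimal sketch (names and interface mine):

import numpy as np
from scipy.optimize import linprog

def minmax_strategy(A):
    """Row player's minmax-optimal mixed strategy for the zero-sum game A:
    maximize v subject to (x^T A)_j >= v for every column j, sum(x) = 1."""
    m, n = A.shape
    c = np.r_[np.zeros(m), -1.0]                  # linprog minimizes, so use -v
    A_ub = np.c_[-A.T, np.ones(n)]                # v - (A^T x)_j <= 0
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)  # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)])
    return res.x[:m], -res.fun

# Matching pennies: value 0, uniform strategy for the row player.
x, value = minmax_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]]))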
From Convex Optimization to Randomized Mechanisms: Toward Optimal Combinatorial Auctions for Submodular Bidders |
Shaddin Dughmi and Tim Roughgarden and Qiqi Yan
We design a polynomial-time, truthful-in-expectation, $(1-1/e)$-approximation mechanism for welfare maximization for a fundamental class of combinatorial auctions. Our results apply to bidders with valuations that are {\em matroid rank sums (MRS)}, which encompass most concrete examples of submodular functions studied in this context, including coverage functions and matroid weighted-rank functions. Our approximation factor is the best possible, even for known and explicitly given coverage valuations, assuming $P \neq NP$.
Our mechanism is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for an $NP$-hard welfare maximization problem in combinatorial auctions with heterogeneous goods and restricted valuations.

Our mechanism is an instantiation of a new framework for designing approximation mechanisms based on randomized rounding algorithms. A typical such algorithm first optimizes over a fractional relaxation of the original problem, and then randomly rounds the fractional solution to an integral one. With rare exceptions, such algorithms cannot be converted into truthful mechanisms. The high-level idea of our mechanism design framework is to optimize {\em directly over the (random) output of the rounding algorithm}, rather than over the {\em input} to the rounding algorithm. This approach leads to truthful-in-expectation mechanisms, and these mechanisms can be implemented efficiently when the corresponding objective function is concave. For bidders with MRS valuations, we give a novel randomized rounding algorithm that leads to both a concave objective function and a $(1-1/e)$-approximation of the optimal welfare.
Our approximation mechanism also provides an interesting separation between the power of maximal-in-distributional-range mechanisms and that of universally truthful (or deterministic) VCG-based mechanisms, which cannot achieve a constant-factor approximation for welfare maximization with MRS valuations.
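As background on why $1-1/e$ is the natural benchmark here (a standard calculation, not taken from the paper): under independent randomized rounding, an element of a coverage function whose total fractional coverage of 1 is spread over $m$ sets is covered with probability
$$1-\left(1-\tfrac{1}{m}\right)^{m} \;\ge\; 1-\tfrac{1}{e},$$
and, as the abstract notes, this factor is the best achievable for coverage valuations assuming $P \neq NP$.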
Optimal Auctions with Correlated Bidders are Easy |
Shahar Dobzinski and Hu Fu and Robert Kleinberg
We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm, running in time polynomial in the size of the support, that finds the optimal randomized mechanism. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS’02], there exists a polynomial time truthful-in-expectation mechanism that provides a $(1.5+\epsilon)$-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees a $\frac{5}{3}$-approximation to the revenue achievable by an optimal deterministic truthful mechanism.

We show that the $\frac{5}{3}$-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC’04].
Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.
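The polynomial-time result for explicit supports is plausibly just linear programming; here is a sketch of the natural LP (my formulation, not necessarily the paper's). With variables $x_i(v)$ (allocation probabilities) and $p_i(v)$ (payments) for each profile $v$ in the support,
$$\max \sum_{v} \Pr[v] \sum_i p_i(v) \quad \text{s.t.} \quad v_i\,x_i(v) - p_i(v) \,\ge\, v_i\,x_i(v_i', v_{-i}) - p_i(v_i', v_{-i}), \qquad v_i\,x_i(v) - p_i(v) \,\ge\, 0, \qquad \sum_i x_i(v) \le 1,$$
with the incentive constraints ranging over all misreports $v_i'$ in bidder $i$'s support; the number of variables and constraints is polynomial in the support size.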
Exact Algorithms for Solving Stochastic Games |
Kristoffer Arnsfelt Hansen and Michal Koucky and Niels Lauritzen and Peter Bro Miltersen and Elias P. Tsigaridas
Shapley’s stochastic games and Everett’s recursive games are classical models of game theory describing two-player zero-sum games of potentially infinite duration. In this paper we describe algorithms for exactly solving these games. As the values of both kinds of games may be irrational numbers, our algorithms output the value of the game given as input in isolating interval representation. When the number of positions of the game is constant, our algorithms run in polynomial time and are the first algorithms with this property. As a byproduct, we give new improved upper and lower bounds on the algebraic degree of the values of stochastic and recursive games as a function of the combinatorial parameters of the game. As another byproduct, we give a new upper bound on the complexity of the strategy iteration algorithm for concurrent reachability games, a well-studied special case of recursive games. This upper bound is exponential even when the number of positions is constant, but prior to this paper only a doubly exponential upper bound on the complexity of strategy iteration was known, even for this case.
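For contrast with the exact algorithms of the paper, the classical way to approximate the value of a Shapley stochastic game is value iteration, sketched below with a uniform stop probability (a simplification of Shapley's model; the code is mine). The paper's point is precisely that true values may be irrational, so a numerical loop like this can only approximate them.

import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M via the standard LP."""
    m, n = M.shape
    c = np.r_[np.zeros(m), -1.0]
    A_ub = np.c_[-M.T, np.ones(n)]
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)])
    return -res.fun

def shapley_value_iteration(r, q, stop, iters=100):
    """Shapley iteration: v(s) <- val( r[s] + (1 - stop) * E[v(next)] ).
    r[s] is the stage payoff matrix at state s and q[s][a][b] a
    distribution over next states; converges geometrically since stop > 0."""
    v = np.zeros(len(r))
    for _ in range(iters):
        v = np.array([matrix_game_value(
                r[s] + (1 - stop) * np.tensordot(q[s], v, axes=([2], [0])))
                for s in range(len(r))])
    return v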
Online Bipartite Matching with Random Arrivals: A Strongly Factor Revealing LP Approach |
Mohammad Mahdian and Qiqi Yan
In a seminal result, Karp, Vazirani, and Vazirani show that a simple ranking algorithm achieves an optimal competitive ratio of $1-1/e$ for the online bipartite matching problem. They also prove that when nodes arrive in a random order, a simple greedy algorithm achieves a $1-1/e$ factor. In this paper, we prove that in the random arrival model, the ranking algorithm of Karp-Vazirani-Vazirani has competitive ratio at least 0.696, beating the $1-1/e\approx0.632$ barrier of the adversarial model. Our result also extends to the i.i.d.\ distribution model of Feldman et al., strengthening their result by removing the assumption that the distribution is known.

Our analysis has two main steps. First, we exploit certain dominance and monotonicity properties of the algorithm to derive a family of factor-revealing linear programs (LPs). In particular, by symmetry of the ranking algorithm in the random-arrivals model, we have the monotonicity property on both sides of the graph, which gives good “strength” to the LPs. Second, to obtain a good lower bound on the optimal values of these LPs, and hence on the competitive ratio of the algorithm, we introduce the technique of strongly factor-revealing LPs. In particular, we derive a family of modified LPs with similar strength such that the optimal value of {\em any one} of these new LPs is a lower bound on the competitive ratio of the algorithm. This enables us to leverage the power of computer LP solvers to solve large instances of the new LPs, establishing bounds that would otherwise be difficult to attain by human analysis.
Inner Product Spaces for MinSum Coordination Mechanisms |
Richard Cole and Jose R. Correa and Vasilis Gkatzelis and Vahab Mirrokni and Neil Olver
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain the approximation bounds, we introduce a new technique that while conceptually simple, seems to be quite powerful. The method entails mapping strategy vectors into a carefully chosen inner product space; costs are shown to correspond to the norm in this space, and the Nash condition also has a simple description. With this structure in place, we are able to prove a number of results, as follows.
First, we consider Smith’s rule, which orders the jobs on a machine by ascending ratio of processing time to weight, and show that it achieves an approximation ratio of $4$. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith’s rule is always optimal for a given fixed assignment, this may seem unsurprising. But we then show that better approximation ratios can be obtained if either preemption or randomization is allowed.
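A minimal sketch of Smith's rule on a single machine (the interface is mine): jobs are (processing time, weight) pairs, run in nondecreasing $p/w$ order, and the cost is the weighted sum of completion times.

def smiths_rule_cost(jobs):
    """Weighted sum of completion times under Smith's rule: run jobs in
    nondecreasing processing-time-to-weight ratio; jobs are (p, w) pairs."""
    t, cost = 0.0, 0.0
    for p, w in sorted(jobs, key=lambda pw: pw[0] / pw[1]):
        t += p                 # completion time of this job
        cost += w * t
    return cost

# (p=1, w=2) has ratio 0.5 and runs first: cost = 2*1 + 1*3 = 5.
print(smiths_rule_cost([(2.0, 1.0), (1.0, 2.0)]))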
We prove that ProportionalSharing, a preemptive strongly local policy, achieves an approximation ratio of $2.618$ for the weighted sum of completion times, and an approximation ratio of $2.5$ in the unweighted case. Again, we observe that these bounds are tight. Next, we consider Rand, a natural non-preemptive but \emph{randomized} policy. We show that it achieves an approximation ratio of at most $2.13$; moreover, if the sum of the weighted completion times is negligible compared to the cost of the optimal solution, this improves to $\pi/2$.
Finally, we show that both ProportionalSharing and Rand induce potential games, and thus always have a pure Nash equilibrium (unlike Smith’s rule). This also allows us to design the first \emph{combinatorial} constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling. It achieves a factor of $2+\epsilon$ for any $\epsilon > 0$, and involves imitating best response dynamics using a variant of ProportionalSharing as the policy.
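A generic best-response loop of the kind invoked above (a sketch; the cost interface is an assumption of mine, not the paper's): in a potential game the potential strictly drops at every switch, so letting players deviate to cheaper machines one at a time must terminate at a pure Nash equilibrium.

def best_response_dynamics(n_players, n_machines, cost, max_rounds=1000):
    """Round-robin best-response dynamics.  cost(i, m, assignment) is an
    assumed interface: player i's cost for moving to machine m while the
    others keep the machines recorded in assignment."""
    assignment = [0] * n_players
    for _ in range(max_rounds):
        stable = True
        for i in range(n_players):
            best = min(range(n_machines), key=lambda m: cost(i, m, assignment))
            if best != assignment[i]:
                assignment[i], stable = best, False
        if stable:
            return assignment  # no player wants to deviate: pure Nash
    return assignment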
Subexponential Lower Bounds for Randomized Pivoting Rules for the Simplex Algorithm |
Oliver Friedmann and Thomas Dueholm Hansen and Uri Zwick
The \emph{simplex} algorithm is among the most widely used algorithms for solving \emph{linear programs} in practice. Most \emph{deterministic} pivoting rules are known, however, to require an exponential number of steps to solve some linear programs. No non-polynomial lower bounds were known, prior to this work, for \emph{randomized} pivoting rules. We provide the first \emph{subexponential} (i.e., of the form~$2^{\Omega(n^\alpha)}$, for some $\alpha>0$) lower bounds for the two most natural, and most studied, randomized pivoting rules suggested to date.
The first randomized pivoting rule we consider is {\sc Random-Edge}, which among all improving pivoting steps (or \emph{edges}) from the current basic feasible solution (or \emph{vertex}) chooses one uniformly at random. The second randomized pivoting rule we consider is {\sc Random-Facet}, a more complicated randomized pivoting rule suggested by Kalai (1992) and by Matoušek, Sharir and Welzl (1996). Our lower bound for the {\sc Random-Facet} pivoting rule essentially matches the subexponential upper bounds of Kalai and of Matoušek et al. Lower bounds for {\sc Random-Edge} and {\sc Random-Facet} were previously known only in \emph{abstract} settings, and not for concrete linear programs.
Our lower bounds are obtained by utilizing connections between pivoting steps performed by simplex-based algorithms and \emph{improving switches} performed by \emph{policy iteration} algorithms for 1-player and 2-player games. We start by building 2-player \emph{parity games} (PGs) on which suitable randomized policy iteration algorithms perform a subexponential number of iterations. We then transform these 2-player games into 1-player \emph{Markov Decision Processes} (MDPs) which correspond almost immediately to concrete linear programs.
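For readers unfamiliar with the 1-player side of this connection, here is textbook policy iteration for a discounted MDP (a standard sketch, not the paper's construction; the paper's lower bounds concern randomized improving-switch rules rather than the greedy updates used here):

import numpy as np

def policy_iteration(P, r, gamma=0.9):
    """Policy iteration for a discounted MDP.  P[a] is the S x S transition
    matrix of action a and r[a] its reward vector.  Each improvement step
    performs the 'improving switches' to greedily better actions."""
    A, S = len(P), P[0].shape[0]
    pi = np.zeros(S, dtype=int)
    while True:
        # Evaluate the current policy exactly: (I - gamma * P_pi) v = r_pi.
        P_pi = np.array([P[pi[s]][s] for s in range(S)])
        r_pi = np.array([r[pi[s]][s] for s in range(S)])
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Switch every state to its best action against v.
        Q = np.array([r[a] + gamma * P[a] @ v for a in range(A)])
        new_pi = Q.argmax(axis=0)
        if np.array_equal(new_pi, pi):
            return pi, v
        pi = new_pi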
The paper “Subexponential lower bounds for randomized pivoting rules for the simplex algorithm” is also somewhat an AGT paper, as it has policy iteration of 1- and 2-player games as a core element of the construction.
The paper “Inner Product Spaces for MinSum Coordination Mechanisms” (by Cole, Correa, Gkatzelis, Mirrokni, Olver) also fits in AGT. It gives improved price of anarchy bounds for coordination mechanisms. The paper is available on arXiv: http://arxiv.org/abs/1010.1886
Thanks, Troels and Vahab, these are now added.