Archive for February, 2011

Peter Cramton started a Market Design Blog about two months ago.  Welcome.

His latest post publicizes the New Trends in Mechanism Design workshop and summer school that will be held by the Center for Research in the Foundations of Electronic Markets in Copenhagen during 5-9 September 2011.  An announcement of the workshop has been circulated recently, but I can’t find any official web page yet.

Read Full Post »

Evaluation of scientific research is notoriously hard, almost by definition: success means doing something not done before, but if it was not done before, then how can we evaluate it? We are now seeing a huge shift in how this is done. It used to be that we designated some person as the authority for such judgement and then asked him (only rarely was it “her”): the chair of the anthropology department (say) decided which anthropologist to hire, hopefully after getting authoritative “letters” from other anthropologists. Similarly, some committee of Full Professors used their authoritative judgment to decide which departments in the university should grow, which research projects should be funded, or which universities in a country to pour money into. Instead, we are increasingly seeing everything evaluated by numbers: counting publications, journal rankings, impact factors, citations, h-indices, grant amounts, numeric assessment exercises, and so on, attempting to base more and more scientific evaluations on such measures.
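
To make one of these measures concrete: the h-index of a researcher is the largest h such that h of his or her papers have at least h citations each. A minimal sketch of the computation (generic code of mine, not tied to any particular citation database):

    def h_index(citation_counts):
        # Largest h such that at least h papers have at least h citations each.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each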

Many academics are viciously opposed to this numeric evaluation trend: they point, correctly, to various errors and shortcomings of the numeric measures; they point out that in order to reasonably evaluate work in anthropology (say) you must understand this type of work; they object to popularity contests determining scientific progress; and they fear that the mechanical-economic-political control that these bibliometric approaches bring into the process of scientific discovery will bias it away from the “true” path.

I agree with many of these criticisms of bibliometric evaluation, but I have to admit that I see the authority-based system as even more problematic. The local authorities are people, and as such they have huge biases: obviously they share various common human ethnic/religious/gender biases — biases absent from non-viciously-chosen bibliometric measures. Even more troublesome are personal biases: we all tend to think that our friends and students are smarter than others whom we don’t yet personally know. Worst of all are scientific biases: the authorities tend to defend their scientific turf against new approaches, new fields, competing ideas, and so on. The combination of these natural biases encourages inbreeding and conformity, and discourages change and innovation — the very things academia should encourage. This bites especially hard in the two cases where it counts most: when we happen to start with weak authorities (as the quadratic law of hiring says: 1st-rate people hire 1st-rate people, but 2nd-rate people hire 4th-rate people), and in areas that are in rapid flux. It is not that numeric systems are completely free of these problems, since ultimately they too are based on human authorities (those who accept papers for publication, cite them, or award grants), but as they are more global, more transparent, and involve a larger number of people, they are less prone to biases and may support faster adoption of change. Additionally, there is the issue of minimizing self-interest: professors cannot be trusted to evaluate themselves any more (or less) than any other segment of the population can. I cannot see a single academic department whose self-evaluation will be “we stink — don’t give us any money”, even though I can see many that would get this evaluation from any non-self-interested evaluation method.

Additionally, measuring scientific success in a way that can be understood and believed from the outside is simply unavoidable. Taxpayers are being asked to fund scientific research in ever-increasing amounts, and they want to know why. Frankly, society will not continue funding it (i.e., us academics) if we do not make a convincing argument for why supporting research is preferable to improving elementary schools, building roads, increasing the minimum wage, or reducing taxes. This question will be asked in one way or another by every politician who needs to allocate the money, in every political system. Luckily, there are excellent answers demonstrating the human value of academic research, innovation, and critical thinking, with examples ranging from the Socratic method to the Internet. These answers seem to be quite effective, and in many places the political system has indeed been convinced that pouring more money into academic research is a good idea. But very few administrators will be convinced that a carte blanche is called for. In fact, the more academic research is deemed to be “useful” (whether practically or just in the sense of advancing humanity’s quest for truth and beauty), the more emphasis will be put on its measurement and “optimization”. The close ties between academic research and education, and the ever-increasing numbers of college and university students, only strengthen this further. The charm of bibliometric/numeric measurements in comparison with authority-based ones is that it is harder to corrupt or politically bias them, as the latter very often are.

So, given that we are moving toward more mechanical forms of evaluating academic excellence, how can we avoid most of the pitfalls? This question applies at all levels, from the evaluation of single candidates for appointment or tenure, to funding departments within an organization, to national policy. It is a critical question in our day and age, when entire countries are overhauling the way they evaluate their academic research. Famously and transparently this is the case for Great Britain, but it is also true, to a large extent, for China, as well as for my own country, Israel, and many other nations (at various levels of explicitness). These changes may turn out to have profound implications for human progress; whether they will be good or bad we surely do not yet know, but we should definitely aim for the good.

So here are some suggestions on how to sanely use bibliometric/mechanical evaluations:

  1. Don’t be silly: Don’t use indicators that obviously do not make sense. If in some field of knowledge publication of books or presentations at conferences are de facto a stronger indication of excellence than journal publications, then counting only the latter is simply silly.
  2. Measure quality, not quantity: Do not count papers, count citations. Do not count the number of faculty members, count the number of award-winning faculty members. Do not count Ph.D.s produced, count Ph.D.s that got good positions.
  3. Measure strategically: Count what is harder to artificially fake; scarcity and competition are some signal of quality; ease of measurement matters. Self-citations are not an indication of influence; self-published books are not an indication of excellence; the excellence of a conference or journal is negatively correlated with its acceptance rate and positively with the number of attendees or readers.
  4. Measure globally: There is no reason why the excellence of a computer scientist, philosopher, or almost any other researcher should be judged differently in different institutes or different countries. If you are from a small country and publish within your own country — that is usually not a sign of excellence. (It is usually not a good sign if French researchers publish in French or Chinese ones in Chinese.) Pick two random universities A and B with their different evaluation systems — in most cases, using B’s system to evaluate A gives more honest indications than using A’s system.
  5. Measure widely: There are many different ways to count anything, such as university rankings or citation counts. (E.g., web-based sites that offer easy-to-access citation counts in CS include at least Google Scholar, ArnetMiner, CiteSeer, and Microsoft Academic Search, as well as doubtless others.) They differ from each other in many parameters of what they count and even of what they aim to capture. The average of several (reasonable) citation counts is usually preferable to any single one (see the sketch after this list). As non-numeric committees always do, it is best to take into account as many indicators of excellence as possible (e.g., citation counts, publication-venue ranking, prizes, grants, esteemed academic responsibilities, …)
  6. Vary your measurements: Occasionally varying the technical details of what you measure makes it harder for those being evaluated to strategically game the system, and allows the gradual optimization of the evaluation process.
  7. Judgement and responsibility remain with people: Any numerical measurement is only a proxy for excellence and not the thing itself. If you recognize excellence that is not captured by the numerical system that you are using, then it is still your responsibility to use your judgement. This, of course, should be rare, should come with a strong explanation, and may use up some of your professional credit, but the responsibility always remains with the people making the decision.
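
As a toy illustration of item 5’s suggestion to average several citation counts (the sources and numbers below are made up, and a plain average is just one of many reasonable ways to combine them):

    # Hypothetical citation counts reported by three different sources
    # for two hypothetical researchers.
    counts = {
        "researcher_1": {"source_a": 1200, "source_b": 950, "source_c": 1430},
        "researcher_2": {"source_a": 800, "source_b": 1100, "source_c": 900},
    }

    def average_count(per_source):
        # Combine the sources by taking the plain average of their counts.
        return sum(per_source.values()) / len(per_source)

    for name, per_source in counts.items():
        print(name, round(average_count(per_source)))
    # researcher_1 1193
    # researcher_2 933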

Read Full Post »

The Department of Computer Science at the University of Liverpool, with its strong Computation and Economics group, is now offering an MSc degree programme in Computation and Game Theory.

I wonder whether you can do a double major with Beatles Studies.

Read Full Post »

STOC 2011 Accepted Papers

The list of accepted papers to STOC 2011 is now online.  The following seem to be related to Algorithmic Game Theory/Economics:

On Optimal Single-Item Auctions
Christos Papadimitriou and George Pierrakos
We revisit the problem of designing the profit-maximizing single-item auction, solved by Myerson in his seminal paper for the case in which bidder valuations are independently distributed. We focus on general joint distributions, seeking the optimal deterministic incentive compatible auction. We give a geometric characterization of the optimal auction through a duality theorem, resulting in an efficient algorithm for finding the optimal deterministic auction in the two-bidder case and an inapproximability result for three or more bidders.
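
A reminder, as a gloss of mine rather than part of the abstract: in Myerson’s independent-values setting with regular distributions, the optimal auction simply awards the item to the bidder with the highest nonnegative virtual value $\varphi_i(v_i) = v_i - \frac{1-F_i(v_i)}{f_i(v_i)}$ at the corresponding threshold price (keeping the item if all virtual values are negative); the paper above asks what the optimal deterministic truthful auction looks like once the valuations are jointly distributed.
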
Mechanism design with uncertain inputs (to err is human, to forgive divine)
Uriel Feige and Moshe Tennenholtz
We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model for uncertainty our main results are as follows.

1) Even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the welfare.

2) To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a {\em fair share} of a player, and design mechanisms that are $\Omega(1)$-fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest.

3) In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up $\Omega(1)$-fairness).

We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties.

Online Bipartite Matching with Unknown Distributions
Chinmay Karande and Aranyak Mehta and Pushkar Tripathi
We consider the online bipartite matching problem in the unknown distribution input model. We show that the RANKING algorithm of \cite{KVV90} achieves a competitive ratio of at least 0.653. This is the first analysis to show an algorithm which breaks the natural 1-1/e “barrier” in the unknown distribution model (our analysis in fact works in the stronger, random order model) and answers an open question in \cite{GM08}. We also describe a family of graphs on which RANKING does no better than 0.727. Finally, we show that for graphs which have $k > 1$ disjoint perfect matchings, RANKING achieves a competitive ratio of at least $1 - \sqrt{\frac{1}{k} - \frac{1}{k^2} + \frac{1}{n}}$ — in particular RANKING achieves a factor of 1-o(1) for graphs with $\omega(1)$ disjoint perfect matchings.
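
For readers who have not seen it, the RANKING algorithm analyzed above is easy to state: fix a uniformly random permutation (ranking) of the offline vertices, and match each arriving online vertex to its highest-ranked available neighbor. A minimal sketch (code and variable names are mine):

    import random

    def ranking(offline_vertices, online_neighbor_sets):
        # Fix one uniformly random ranking of the offline side.
        perm = random.sample(offline_vertices, len(offline_vertices))
        rank = {v: r for r, v in enumerate(perm)}
        matched, matching = set(), {}
        # Each arriving online vertex is matched to its highest-ranked free neighbor.
        for i, neighbors in enumerate(online_neighbor_sets):
            available = [v for v in neighbors if v not in matched]
            if available:
                choice = min(available, key=lambda v: rank[v])
                matching[i] = choice
                matched.add(choice)
        return matching

    # Toy instance: two online vertices, each adjacent to offline vertices "a" and "b".
    print(ranking(["a", "b"], [{"a", "b"}, {"a", "b"}]))
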
Mechanisms for (Mis)allocating Scientific Credit
Jon Kleinberg and Sigal Oren
Scientific communities confer many forms of {\em credit} — both implicit and explicit — on their successful members, and it has long been argued that the motivation provided by these forms of credit helps to shape a community’s collective attention toward different lines of research. The allocation of scientific credit, however, has also been the focus of long-documented pathologies: certain research questions are said to command too much credit, at the expense of other equally important questions; and certain researchers (in a version of Robert Merton’s {\em Matthew Effect}) seem to receive a disproportionate share of the credit, even when the contributions of others are similar.

Here we show that the presence of each of these pathologies can in fact increase the collective productivity of a community. We consider a model for the allocation of credit, in which individuals can choose among {\em projects} of varying levels of importance and difficulty, and they compete to receive credit with others who choose the same project. Under the most natural mechanism for allocating credit, in which it is divided among those who succeed at a project in proportion to the project’s importance, the resulting selection of projects by self-interested, credit-maximizing individuals will in general be socially sub-optimal. However, we show that there exist ways of allocating credit out of proportion to the true importance of the projects, as well as mechanisms that assign credit out of proportion to the relative contributions of the individuals, that lead credit-maximizing individuals to collectively achieve social optimality. These results therefore suggest how well-known forms of misallocation of scientific credit can in fact serve to channel self-interested behavior into socially optimal outcomes.

An Impossibility Result for Truthful Combinatorial Auctions with Submodular Valuations
Shahar Dobzinski
We show that every universally truthful randomized mechanism for combinatorial auctions with submodular valuations that provides an $m^{\frac 1 2 -\epsilon}$ approximation to the social welfare and uses value queries only must use exponentially many value queries, where $m$ is the number of items. In contrast, ignoring incentives, there exist constant-ratio approximation algorithms for this problem. Our approach is based on a novel \emph{direct hardness} approach and completely skips the notoriously hard characterization step. The characterization step was the main obstacle for proving impossibility results in algorithmic mechanism design so far.

We demonstrate two additional applications of our new technique: (1) an impossibility result for universally-truthful polynomial time flexible combinatorial public projects and (2) an impossibility result for truthful-in-expectation mechanisms for exact combinatorial public projects. The latter is the first result that bounds the power of polynomial-time truthful in expectation mechanisms in any setting.

Rank-1 Bimatrix Games: A Homeomorphism and a Polynomial Time Algorithm
Bharat Adsul and Jugal Garg and Ruta Mehta and Milind Sohoni
Given a rank-1 bimatrix game (A,B), i.e., where rank(A+B)=1, we construct a suitable linear subspace of the rank-1 game space and show that this subspace is homeomorphic to its Nash equilibrium correspondence. Using this homeomorphism, we give the first polynomial time algorithm for computing an exact Nash equilibrium of a rank-1 bimatrix game. This settles an open question posed in Kannan and Theobald (SODA 2007) and Theobald (2007). In addition, we give a novel algorithm to enumerate all the Nash equilibria of a rank-1 game and show that a similar technique may also be applied for finding a Nash equilibrium of any bimatrix game. This technique also proves the existence, oddness and the index theorem of Nash equilibria in a bimatrix game. Further, we extend the rank-1 homeomorphism result to a fixed rank game space, and give a fixed point formulation on $[0,1]^k$ for solving a rank-k game. The homeomorphism and the fixed point formulation are piece-wise linear and considerably simpler than the classical constructions.
Dueling algorithms
Nicole Immorlica and Adam Tauman Kalai and Brendan Lucier and Ankur Moitra and Andrew Postlewaite and Moshe Tennenholtz
We revisit classic algorithmic search and optimization problems from the perspective of competition. Rather than a single optimizer minimizing expected cost, we consider a zero-sum game in which a search problem is presented to two players, whose only goal is to {\em outperform the opponent}. Such games are typically exponentially large zero-sum games, but they often have a rich structure. We provide general techniques by which such structure can be leveraged to find minmax-optimal and approximate minmax-optimal strategies. We give examples of ranking, hiring, compression, and binary search duels, among others. We give bounds on how often one can beat the classic optimization algorithms in such duels.
From Convex Optimization to Randomized Mechanisms: Toward Optimal Combinatorial Auctions for Submodular Bidders
Shaddin Dughmi and Tim Roughgarden and Qiqi Yan
We design a polynomial-time, truthful-in-expectation, $(1-1/e)$-approximation mechanism for welfare maximization for a fundamental class of combinatorial auctions. Our results apply to bidders with valuations that are {\em matroid rank sums (MRS)}, which encompass most concrete examples of submodular functions studied in this context, including coverage functions and matroid weighted-rank functions. Our approximation factor is the best possible, even for known and explicitly given coverage valuations, assuming $P \neq NP$.
Our mechanism is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for an $NP$-hard welfare maximization problem in combinatorial auctions with heterogeneous goods and restricted valuations.

Our mechanism is an instantiation of a new framework for designing approximation mechanisms based on randomized rounding algorithms. A typical such algorithm first optimizes over a fractional relaxation of the original problem, and then randomly rounds the fractional solution to an integral one. With rare exceptions, such algorithms cannot be converted into truthful mechanisms. The high-level idea of our mechanism design framework is to optimize {\em directly over the (random) output of the rounding algorithm}, rather than over the {\em input} to the rounding algorithm. This approach leads to truthful-in-expectation mechanisms, and these mechanisms can be implemented efficiently when the corresponding objective function is concave. For bidders with MRS valuations, we give a novel randomized rounding algorithm that leads to both a concave objective function and a $(1-1/e)$-approximation of the optimal welfare.

Our approximation mechanism also provides an interesting separation between the power of maximal-in-distributional-range mechanisms and that of universally truthful (or deterministic) VCG-based mechanisms, which cannot achieve a constant-factor approximation for welfare maximization with MRS valuations.

Optimal Auctions with Correlated Bidders are Easy
Shahar Dobzinski and Hu Fu and Robert Kleinberg
We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS’02], there exists a polynomial time truthful in expectation mechanism that provides a $(1.5+\epsilon)$-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees $\frac 5 3$ approximation to the revenue achievable by an optimal deterministic truthful mechanism.

We show that the $\frac 5 3$-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC’04].

Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.

Exact Algorithms for Solving Stochastic Games
Kristoffer Arnsfelt Hansen and Michal Koucky and Niels Lauritzen and Peter Bro Miltersen and Elias P. Tsigaridas
Shapley’s stochastic games and Everett’s recursive games are classical models of game theory describing two-player zero-sum games of potentially infinite duration. In this paper we describe algorithms for exactly solving these games. As the values of both kinds of games may be irrational numbers, our algorithms output the value of the game given as input in isolating interval representation. When the number of positions of the game is constant, our algorithms run in polynomial time and are the first algorithms with this property. As a byproduct, we give new improved upper and lower bounds on the algebraic degree of the values of stochastic and recursive games as a function of the combinatorial parameters of the game. As another byproduct, we give a new upper bound on the complexity of the strategy iteration algorithm for concurrent reachability games, a well-studied special case of recursive games. This upper bound is exponential even when the number of positions is constant, but prior to this paper only a doubly exponential upper bound on the complexity of strategy iteration was known, even for this case.
Online Bipartite Matching with Random Arrivals: A Strongly Factor Revealing LP Approach
Mohammad Mahdian and Qiqi Yan
In a seminal result, Karp, Vazirani, and Vazirani show that a simple ranking algorithm achieves an optimal competitive ratio of $1-1/e$ for the online bipartite matching problem. They also prove that when nodes arrive in a random order, a simple greedy algorithm achieves a $1-1/e$ factor. In this paper, we prove that in the random arrival model, the ranking algorithm of Karp-Vazirani-Vazirani has competitive ratio at least 0.696, beating the $1-1/e\approx0.632$ barrier in the adversarial model. Our result also extends to the i.i.d.\ distribution model of Feldman et al., strengthening their result by removing the assumption that the distribution is known.

Our analysis has two main steps. First, we exploit certain dominance and monotonicity properties of the algorithm to derive a family of factor-revealing linear programs (LPs). In particular, by symmetry of the ranking algorithm in the random-arrivals model, we have the monotonicity property on both sides of the graph, which gives good “strength” to the LPs. Second, to obtain a good lower bound on the optimal values of these LPs and hence on the competitive ratio of the algorithm, we introduce the technique of strongly factor-revealing LPs. In particular, we derive a family of modified LPs with similar strength such that the optimal value of {\em any one} of these new LPs is a lower bound on the competitive ratio of the algorithm. This enables us to leverage the power of computer LP-solvers to solve large instances of the new LPs to establish bounds that would otherwise be difficult to attain by human analysis.

Inner Product Spaces for MinSum Coordination Mechanisms
Richard Cole and Jose R. Correa and Vasilis Gkatzelis and Vahab Mirrokni and Neil Olver
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain the approximation bounds, we introduce a new technique that while conceptually simple, seems to be quite powerful. The method entails mapping strategy vectors into a carefully chosen inner product space; costs are shown to correspond to the norm in this space, and the Nash condition also has a simple description. With this structure in place, we are able to prove a number of results, as follows.

First, we consider Smith’s rule, which orders the jobs on a machine in ascending processing time to weight ratio, and show that it achieves an approximation ratio of $4$. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith’s rule is always optimal for a given fixed assignment, this may seem unsurprising. But we then show that better approximation ratios can be obtained, if either preemption or randomization is allowed.

We prove that ProportionalSharing, a preemptive strongly local policy, achieves an approximation ratio of $2.618$ for the weighed sum of completion times, and an approximation ratio of $2.5$ in the unweighted case. Again, we observe that these bounds are tight. Next, we consider Rand, a natural non-preemptive but \emph{randomized} policy. We show that it achieves an approximation ratio of at most $2.13$; moreover, if the sum of the weighted completion times is negligible compared to the cost of the optimal solution, this improves to $\pi/2$.

Finally, we show that both ProportionalSharing and Rand induce potential games, and thus always have a pure Nash equilibrium (unlike Smith’s rule). This also allows us to design the first \emph{combinatorial} constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling. It achieves a factor of $2+\epsilon$ for any $\epsilon > 0$, and involves imitating best response dynamics using a variant of ProportionalSharing as the policy.
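
As a small illustration of Smith’s rule from the abstract above (a toy example of mine, not from the paper): on a single machine, ordering jobs by nondecreasing processing-time-to-weight ratio minimizes the weighted sum of completion times.

    def smith_order(jobs):
        # jobs: list of (processing_time, weight) pairs.
        # Smith's rule: schedule in ascending order of processing_time / weight.
        return sorted(jobs, key=lambda pw: pw[0] / pw[1])

    def weighted_completion_time(jobs):
        # Sum of weight * completion_time when jobs run in the given order.
        total, t = 0, 0
        for p, w in jobs:
            t += p
            total += w * t
        return total

    jobs = [(4, 1), (1, 2), (3, 3)]  # made-up (processing_time, weight) pairs
    print(weighted_completion_time(smith_order(jobs)))  # 22, the optimal value
    print(weighted_completion_time(jobs))               # 38, the given order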

Subexponential lower bounds for randomized pivoting rules for the simplex algorithm
Oliver Friedmann and Thomas Dueholm Hansen and Uri Zwick
The \emph{simplex} algorithm is among the most widely used algorithms for solving \emph{linear programs} in practice. Most \emph{deterministic} pivoting rules are known, however, to require an exponential number of steps to solve some linear programs. No non-polynomial lower bounds were known, prior to this work, for \emph{randomized} pivoting rules. We provide the first \emph{subexponential} (i.e., of the form~$2^{\Omega(n^\alpha)}$, for some $\alpha>0$) lower bounds for the two most natural, and most studied, randomized pivoting rules suggested to date.

The first randomized pivoting rule we consider is {\sc Random-Edge}, which among all improving pivoting steps (or \emph{edges}) from the current basic feasible solution (or \emph{vertex}) chooses one uniformly at random. The second randomized pivoting rule we consider is {\sc Random-Facet}, a more complicated randomized pivoting rule suggested by Kalai (1992) and by Matou{\v{s}}ek, Sharir and Welzl (1996). Our lower bound for the {\sc Random-Facet} pivoting rule essentially matches the subexponential upper bounds of Kalai and Matou{\v{s}}ek et al. Lower bounds for {\sc Random-Edge} and {\sc Random-Facet} were known before only in \emph{abstract} settings, and not for concrete linear programs.

Our lower bounds are obtained by utilizing connections between pivoting steps performed by simplex-based algorithms and \emph{improving switches} performed by \emph{policy iteration} algorithms for 1-player and 2-player games. We start by building 2-player \emph{parity games} (PGs) on which suitable randomized policy iteration algorithms perform a subexponential number of iterations. We then transform these 2-player games into 1-player \emph{Markov Decision Processes} (MDPs) which correspond almost immediately to concrete linear programs.

 

Read Full Post »

The January 2011 Feature Column from the American Mathematical Society gives a nice discussion of the Price of Anarchy.
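
For readers new to the term: the price of anarchy of a game is the ratio between the cost of its worst Nash equilibrium and the optimal cost. In the standard Pigou example (which may or may not be the one the column uses), a unit of traffic chooses between a link of constant latency 1 and a link whose latency equals its load $x$; selfish traffic all takes the second link, for a total cost of 1, while splitting the traffic evenly costs $\frac{1}{2}\cdot\frac{1}{2} + \frac{1}{2}\cdot 1 = \frac{3}{4}$, so the price of anarchy is $4/3$.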

Read Full Post »

Congratulations to Vincent Conitzer for winning the 2011 Computers and Thought Award for “his seminal work at the boundary of microeconomic theory and artificial intelligence, in particular for groundbreaking work on computational aspects of game theory, social choice, and mechanism design.”

Congratulations also to co-recipient Malte Helmert.

 

Read Full Post »
