This is the fourth post in a series recapping the talks at the CMU summer school on algorithmic economics (the first three: #1, #2, #3). See the summer school site for abstracts, slides, and (eventually) videos.

Rakesh Vohra

Rakesh Vohra, from Northwestern, lectured on applications of techniques from combinatorial optimization and linear programming to economics. He emphasized that the goal is not merely to solve the underlying optimization problem, but to identify qualitative properties of the optimal solution and how it varies with changes in the parameters of the problem.

I’ll focus primarily on his discussion of allocating a single good without payments (inspired by Ben-Porath and Lipman). As in the previous post, agents have private valuations drawn from known distributions. With payments, the welfare maximization problem is solved by the Vickrey auction. Without payments, it seems hopeless — if bidding higher can increase the probability of winning, then every agent will just bid the highest number they can think of. But what if the seller has available a modest deterrent: *inspection*? Formally, the design space consists of an allocation rule (for each bid profile, the probability that each agent wins) and an inspection rule (the probability of inspecting the winner, as a function of its bid). We assume that inspection reveals whether or not the winner overbid and, if so, the good is not allocated to anyone (otherwise the winning bidder gets it). To rule out the trivial “always inspect” mechanism, we assume that inspection costs the seller K. The seller’s objective is to maximize the expected valuation of the winner (over the randomness in the valuations and the mechanism), less the expected inspection cost. For example, if K is 10 and all of the bids come in below 10, then surely you don’t want to inspect, and presumably all you can do is pick a winner at random. Intuitively, as the winning bid grows larger, inspection makes more sense. What is the optimal mechanism?
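To build intuition for the allocation/inspection trade-off, here is a toy Monte Carlo estimate of the seller's objective under a naive cutoff mechanism (the instance, the cutoff, and the always-inspect-above-cutoff rule are all made-up illustrations, not the optimal mechanism). Truthful bidding is a reasonable assumption here: below the cutoff the allocation ignores bids, and above it overbidding is caught for sure.

```python
import random

def seller_objective(mech, prior_values, prior_probs, n, K, trials=200_000):
    """Monte Carlo estimate of the seller's objective -- E[winner's value]
    minus E[inspection cost] -- assuming truthful bids.
    `mech` maps a bid profile to (winner index, inspection probability)."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        vals = rng.choices(prior_values, prior_probs, k=n)
        winner, p_inspect = mech(vals)
        total += vals[winner] - K * p_inspect
    return total / trials

def cutoff_mech(L):
    """Award to the highest bidder if its bid exceeds the cutoff L (inspecting
    with probability 1); otherwise pick a winner uniformly and never inspect."""
    def mech(bids):
        best = max(range(len(bids)), key=lambda i: bids[i])
        if bids[best] > L:
            return best, 1.0
        return random.randrange(len(bids)), 0.0
    return mech

# Hypothetical instance: 3 bidders, values 5 or 50, inspection cost K = 10.
print(seller_objective(cutoff_mech(20), [5, 50], [0.7, 0.3], n=3, K=10))
```

With these numbers the estimate lands near 28: with probability 1 − 0.7³ some bidder is worth 50 (net 40 after inspection), and otherwise the random winner is worth 5.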

As in Costis’s talk, covered in the previous post, the plan is to study a linear program in which the decision variables represent the interim allocation rules. Assume that the space of possible bidder valuations is finite. Recall that the interim allocation rule of a bidder specifies its probability of winning given its valuation (assuming a truthful report), over the randomness in others’ valuations and in the mechanism. There is also a variable for the probability of inspection for each possible winner and its bid. Exercise: the objective function and the incentive-compatibility constraints (truthful reporting always maximizes expected utility) can be expressed directly in terms of these decision variables.

Recall that the catch with working with interim allocation rules is feasibility: without additional constraints, it might be impossible to reverse engineer a bona fide mechanism that induces the computed interim allocation rules. In the single-item setting studied here, however, a result of Border gives an explicit set of linear constraints that are necessary and sufficient for feasibility. To describe them, assume for simplicity that bidders are i.i.d. and consider a subset S of possible bidder valuations. Define f(S) as the probability that at least one bidder has its valuation in S. The probability that a mechanism allocates the good to a bidder with a valuation in S is obviously at most f(S); also, this probability can be written as a linear function of the interim allocation rules (as you should check). Cool fact #1: these inequalities (one per subset S of possible valuations) are not only necessary for a set of interim allocation rules to be feasible, they are also sufficient. (Harder exercise: derive this result from the max-flow/min-cut theorem.) Cool fact #2: the set function f(S) is nondecreasing and submodular, the latter being a set-theoretic analog of concavity.
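For n i.i.d. bidders, f(S) = 1 − (1 − q(S))^n, where q(S) is the probability that a single bidder's valuation lands in S. Cool fact #2 can be checked by brute force on a small example (the valuation prior below is a made-up illustration):

```python
from itertools import chain, combinations

def border_f(prior, n):
    """f(S) = probability that at least one of n i.i.d. bidders
    (valuation distribution `prior`) has its valuation in S."""
    def f(S):
        q = sum(prior[v] for v in S)      # prob. a single bidder lands in S
        return 1.0 - (1.0 - q) ** n
    return f

def is_submodular(f, ground):
    """Diminishing returns: f(S+x) - f(S) >= f(T+x) - f(T) for all S ⊆ T, x ∉ T."""
    subsets = list(chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1)))
    for S in subsets:
        for T in subsets:
            if set(S) <= set(T):
                for x in ground:
                    if x not in T:
                        lhs = f(set(S) | {x}) - f(set(S))
                        rhs = f(set(T) | {x}) - f(set(T))
                        if lhs < rhs - 1e-12:
                            return False
    return True

prior = {1: 0.5, 5: 0.3, 10: 0.2}   # hypothetical valuation distribution
f = border_f(prior, n=3)
print(is_submodular(f, list(prior)))   # True
```

Submodularity here is no accident: f is a concave increasing function (1 − (1 − q)^n) of the modular function q(S), and such compositions are always submodular.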

We’ve reduced the seller’s problem to maximizing a function of the interim allocation and inspection rules, subject to feasibility and incentive-compatibility (IC) constraints. Amazingly, it can be solved fairly explicitly. The first step is to make a couple of judicious guesses about the solution (to be checked at the end): that allocation and inspection rules are nondecreasing, that both rules are anonymous (i.e., are the same for all bidders), and that the IC constraints hold with equality. This last assumption permits the elimination of the inspection rule variables and the IC constraints. After a little rewriting, we’re now essentially maximizing a linear function over the interim allocation rules subject to feasibility constraints. Cool fact #2 implies that this is an instance of “polymatroid optimization”, the point here being that a natural greedy algorithm (considering variables in decreasing order of objective function coefficient, increasing the current one as much as possible) solves the problem optimally. This implies that the optimal mechanism always has the following format. There is a cutoff L > K such that: if there is at least one agent with bid above L, award the good to the highest such agent and inspect with a probability that is increasing in the winning bid; otherwise, award the good to a random agent and don’t inspect. For a given instance, this greedy algorithm can be used to compute the precise cutoff and inspection probabilities.

Rakesh also discussed several other economic applications. First he considered the single-item problem with arbitrary transfers and showed how to prove Myerson’s characterization of Bayes-Nash equilibria (see Jason’s talk) using a shortest-path problem — the monotonicity requirement on interim allocation rules corresponds to a “no negative cycle” condition and, when this condition is satisfied, the payments correspond to shortest-path distances. Again using the “no negative cycle” condition for the existence of shortest paths, he gave a short proof of Afriat’s theorem, which gives a necessary and sufficient condition on a consumer’s purchasing decisions for the existence of a utility function that explains the decisions. Finally, he gave a neat application of relatively recent techniques in approximation algorithm design — an iterative rounding algorithm of Kiraly, Lau, and Singh — to the computation of approximate Walrasian equilibria. These are prices that approximately clear a market of indivisible goods: if every agent wants at most k goods (e.g., if agents and goods are students and courses, respectively, k would be 4 or 5), then the excess demand for every good is at most k-1.
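The shortest-path perspective can be made concrete for revealed-preference data. The sketch below checks the no-negative-cycle condition for the difference constraints U_t − U_s ≤ p_s·(x_t − x_s) via Bellman–Ford; note this is a simplified variant with the multipliers in Afriat's theorem fixed to 1 (so it tests cyclical monotonicity rather than the full condition), but the negative-cycle mechanic is the same:

```python
def rationalizable(prices, bundles):
    """Build a graph with a node per observation and edge weight
    w(s, t) = p_s . (x_t - x_s); the difference constraints
    U_t - U_s <= w(s, t) are feasible iff there is no negative cycle,
    which Bellman-Ford detects (a pass with no relaxation => feasible)."""
    n = len(prices)
    w = [[sum(p * (xb - xa)
              for p, xa, xb in zip(prices[s], bundles[s], bundles[t]))
          for t in range(n)] for s in range(n)]
    dist = [0.0] * n                   # distances from a virtual source
    for _ in range(n):
        changed = False
        for s in range(n):
            for t in range(n):
                if dist[s] + w[s][t] < dist[t] - 1e-12:
                    dist[t] = dist[s] + w[s][t]
                    changed = True
        if not changed:                # stabilized: no negative cycle
            return True
    return False                       # still relaxing after n passes

# A violation: each bundle was strictly cheaper at the prices where the other was bought.
print(rationalizable([(1, 2), (2, 1)], [(2, 2), (4, 0)]))   # False
print(rationalizable([(1, 2), (2, 1)], [(3, 1), (1, 3)]))   # True
```

When the check passes, the shortest-path distances themselves serve as the utility levels U_t, mirroring how payments fall out of shortest paths in the Myerson argument.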

Herve Moulin

Herve Moulin from Rice spoke about fair division from several different angles. He began by discussing foundations of welfarism — what are reasonable approaches to collective utility maximization? A natural class of objectives to maximize consists of functions of the form ∑_i u_i^p/p, where u_i is agent i’s utility and p is a real number. When p=1 this is the utilitarian (sum) objective. When p=0 this function is defined as ∑_i log u_i — as in Nash bargaining, for example. When p tends to -∞ the intention is to select the leximin utility profile, meaning that inequality is tolerated only if it makes everyone better off. Formally, this profile is computed by first maximizing the minimum utility over feasible utility profiles; subject to this, maximizing the second-smallest utility; and so on. The objective functions in this class are in fact the unique ones that satisfy a reasonable list of properties (symmetry, monotonicity, scale invariance, and separability). Different choices of the parameter p yield different objective functions and, generally, different recommended utility profiles. One happy case where the optimal utility profile coincides for all of the concave objectives in this family (i.e., p at most 1) is when the feasible region is defined by polymatroid constraints — as in Rakesh’s talk above, this means there are packing constraints (one per subset of agents) where the right-hand side is specified by a nondecreasing, submodular set function.
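On a finite feasible set, the different members of this family (and the leximin limit) are easy to compare directly. A small sketch with a made-up feasible set of utility profiles:

```python
import math

def welfare(u, p):
    """Collective utility from the one-parameter family: sum of u_i^p / p for
    p != 0, and sum of log u_i at p = 0 (the Nash objective)."""
    if p == 0:
        return sum(math.log(x) for x in u)
    return sum(x ** p / p for x in u)

def leximin_best(profiles):
    """The leximin rule compares profiles by their sorted utility vectors,
    lexicographically: first the minimum, then the second-smallest, etc."""
    return max(profiles, key=lambda u: sorted(u))

profiles = [(4.0, 4.0), (1.0, 9.0), (2.0, 7.0)]      # a toy feasible set
print(max(profiles, key=lambda u: welfare(u, 1)))    # utilitarian: (1.0, 9.0)
print(max(profiles, key=lambda u: welfare(u, 0)))    # Nash: (4.0, 4.0)
print(leximin_best(profiles))                        # leximin: (4.0, 4.0)
```

The utilitarian objective happily sacrifices one agent for a bigger sum, while the Nash and leximin objectives both pick the equal profile here; more egalitarian choices of p put more weight on the worst-off agent.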

Next, Herve discussed the “division of manna”, with an emphasis on impossibility results. There are several divisible goods to be split amongst a number of agents, each with a private concave utility function. We’d like a division rule (mapping utility functions to allocations) that is “good” in some sense, or at least “non-trivial”. “Trivial” rules include those that are independent of agents’ preferences, like dictator rules (which are trivially Pareto optimal) and the “always split everything equally” rule (which trivially satisfies all natural fairness notions). Constant functions like these are trivially strategyproof, meaning truthful reporting is a weakly dominant strategy. It can be proved that, sadly, no strategyproof rule is both Pareto optimal and fair (in any reasonable sense of the word). This impossibility result continues to hold if the strategyproof condition is changed to a natural monotonicity requirement — for example, that increasing the amount of a good cannot decrease the amount of it that anyone receives.

Herve’s second lecture focused on cost-sharing mechanisms [1]. Each agent receives a quantity of some good. Each agent has quasi-linear utility (value minus payments), with value linear (with a possibly private coefficient) in the quantity received. A publicly known cost function describes the joint cost as a function of the quantities allocated. Simple examples include the all-or-nothing cost function (0 if all quantities are 0, 1 otherwise) and, more generally, functions that depend only on the sum of the allocated quantities. A cost-sharing method specifies, for every quantity profile, how the resulting joint cost gets split amongst the agents. Every cost-sharing method defines a full-information game (where strategies = quantities) for each valuation profile, called a “demand game”; and a direct-revelation mechanism for the private value case (where players report valuations, and the mechanism returns equilibrium quantities).

Consider first a supermodular (increasing returns) cost function and integral quantities. Incremental cost-sharing methods are defined as follows: agents are repeatedly approached in some order; the current agent is asked if it wants an additional unit, at a price equal to the increase in joint cost it would cause; and the first time an agent declines, it is kicked out of the game (keeping the units it already agreed to). These methods have a number of nice incentive properties: the demand game is dominance solvable and the corresponding equilibrium is a strong Nash equilibrium (exercise: prove these facts). The corresponding direct-revelation mechanism is “group strategyproof” — robust to coalitional deviations from truthtelling (assuming no side payments); in fact, these are the only deterministic cost-sharing methods (satisfying mild restrictions) that yield GSP mechanisms. Using the round robin order and letting the common size of quantity increments tend to 0 recovers the serial cost-sharing method, which is closely related to fair queuing. Herve recently made an interesting comparison of these different methods using the price of anarchy. Submodular cost functions have also been studied, and there are some interesting differences: positive results exist mostly for 0-1 quantities, rather than general ones; and the Shapley value plays a fundamental role — for example, it defines a cost-sharing method that minimizes the worst-case efficiency loss among all methods in a big class.
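For a cost function of the aggregate quantity, the serial method admits a short closed form, which the sketch below implements (the quadratic cost function is a stand-in example of increasing marginal cost):

```python
def serial_shares(demands, C):
    """Serial cost sharing (Moulin-Shenker) for a cost function C of the
    total quantity.  With demands sorted increasingly (q_1 <= ... <= q_n),
    agent k pays
        x_k = x_{k-1} + (C(s_k) - C(s_{k-1})) / (n - k + 1),
    where s_k = q_1 + ... + q_{k-1} + (n - k + 1) * q_k, so nobody's share
    depends on demands larger than their own."""
    n = len(demands)
    order = sorted(range(n), key=lambda i: demands[i])
    shares = [0.0] * n
    prev_cost, prev_share, prefix = 0.0, 0.0, 0.0
    for rank, i in enumerate(order):               # rank runs 0..n-1
        s = prefix + (n - rank) * demands[i]
        share = prev_share + (C(s) - prev_cost) / (n - rank)
        shares[i] = share
        prev_cost, prev_share = C(s), share
        prefix += demands[i]
    return shares

C = lambda q: q * q      # made-up convex cost of the aggregate quantity
print(serial_shares([1.0, 2.0, 3.0], C))   # [3.0, 11.0, 22.0]
```

The shares budget-balance by construction (here 3 + 11 + 22 = 36 = C(6)), and each agent's share depends only on the demands of agents requesting no more than they do — the property behind the method's robustness to exaggerated demands.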

1. This work made a big impression on me back around 2004, so it was very cool to finally get to see Herve talk about it.
