Earlier this month, my co-blogger Ariel and I ran a summer school on algorithmic economics [1]. In my highly biased opinion, it was a lot of fun and a big success. For those of you who couldn’t make it, videos will be posted on the summer school site in the near future. The abstracts there should give you a quick idea of the topics covered by the nine speakers. In a series of posts, I’ll discuss the talk content in a bit more detail. I hope that some of the students who participated in the school discuss their own experiences in the comments. Having now met all of them, I can confidently say that the future of algorithmic game theory/economics is in good hands!
Leeat Yariv
First up was Leeat Yariv from Caltech [2]. She talked about several aspects of modeling and reasoning about social networks. As motivation, she pointed out how interesting phenomena can be explained using non-obvious properties of networks. For example, how did the Medici family grow so powerful, when other families had greater wealth and more seats in the state legislature? Looking at the network of 15th century Florentine marriages provides an explanation — the Medici family was the most central (in a precise sense), enjoying many strategic connections through marriages, and capitalized on this to create a dynasty.
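For the curious, the centrality claim is easy to check for yourself (this quick sketch is mine, not from the lecture): the networkx library ships Padgett's 15th century Florentine marriage network, and betweenness centrality is one precise sense in which the Medici come out on top.

```python
# Quick check of the Medici story: networkx ships Padgett's Florentine
# marriage network, and betweenness centrality is one precise sense of
# "most central".
import networkx as nx

G = nx.florentine_families_graph()           # marriage ties among Florentine families
centrality = nx.betweenness_centrality(G)    # fraction of shortest paths passing through each family

for family, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{family:12s} {score:.3f}")       # the Medici top the list
```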
Two major themes in Leeat’s lectures were diffusion and network formation. I’m more familiar with research in computer science on these two topics, and it was cool to hear an economist’s take on them. We first considered diffusion (like the spread of viruses, the adoption of a new technology, the transmission of information, etc.) without an explicit network. This is an old topic — for example, in 1962 Everett Rogers compiled 508 diffusion studies in his book Diffusion of Innovations. Two notable patterns in these studies were “S-shaped adoption” (meaning things are adopted slowly, then very rapidly, and then slowly again) and a correlation between the likelihood of adoption by an individual and the “connectedness” of the individual. The S-curve is nicely explained by Bass’s diffusion model.
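To make the S-curve concrete, here is a minimal simulation of the Bass model (my sketch; the parameter values below are made up). The adoption rate is (p + q*F)*(1 - F), where F is the cumulative fraction of adopters, p captures external influence ("innovation") and q captures word of mouth ("imitation").

```python
# A minimal sketch of the Bass diffusion model; parameter values are
# illustrative assumptions, not from the lectures.
p, q = 0.01, 0.4                        # "innovation" and "imitation" coefficients
F, dt = 0.0, 0.1                        # cumulative adoption and time step
history = [F]
for _ in range(200):                    # simulate t in [0, 20]
    F += dt * (p + q * F) * (1 - F)     # Euler step for dF/dt = (p + qF)(1 - F)
    history.append(F)

for t in (2, 6, 10, 14, 20):            # slow start, rapid middle, slow finish
    print(f"t = {t:2d}: fraction adopted = {history[t * 10]:.2f}")   # index is t/dt
```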
Leeat then moved on to her work with Jackson on diffusion in networks. The model is that each node of a network has two strategies, to engage or not. The cost of engaging is some constant. The benefit of engaging is given by a function of a node’s degree and the fraction of its neighbors that are engaged — for example, it could be proportional to the number of engaged neighbors. Rather than keeping track of strategy profile details, a mean-field model is used — if x is the overall fraction of engaged nodes, then for the analysis we’ll assume that x is the fraction of engaged neighbors in every node’s neighborhood. Generically, symmetric equilibria correspond to a discrete set of values of x, which alternate between “tipping points” (where after a slight perturbation in x, myopic dynamics lead elsewhere) and “stable equilibria” (where myopic dynamics return to the equilibrium). An interesting problem is to understand ways of modifying preferences or the network structure to increase the amount of engagement. Increasing the degree distribution (in the sense of first-order stochastic dominance) or its variance (in the sense of mean-preserving spreads) accomplishes this, in that the values of tipping points and stable equilibria decrease and increase, respectively. Increasing a node’s value for its neighbors’ engagement (via advertising, rebates, etc.) also increases engagement. Simple properties of the degree distribution suggest how to optimally use such increases.
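Here is a stylized version of the mean-field calculation (my simplification with made-up numbers; the functional forms differ from the Jackson-Yariv model): a node of degree d engages when its idiosyncratic cost falls below its benefit b*d*x, and the symmetric equilibria are the fixed points of the resulting best-response map.

```python
# A stylized mean-field sketch: a node of degree d engages iff its cost is
# below its benefit b*d*x, where x is the economy-wide fraction of engaged
# nodes and costs have CDF F(c) = c^2 on [0, 1].  Degree distribution, b,
# and the cost distribution are assumptions for illustration only.
degree_dist = {1: 0.1, 3: 0.9}   # P(degree), assumed
b = 0.556                        # benefit per unit of (degree * fraction engaged), assumed

def engage_prob(d, x):
    """Probability that a degree-d node's cost falls below its benefit b*d*x."""
    return min(1.0, (b * d * x) ** 2)    # cost CDF F(c) = c^2

def phi(x):
    """Fraction of nodes engaging when they expect a fraction x to engage."""
    return sum(p * engage_prob(d, x) for d, p in degree_dist.items())

grid = [i / 1000 for i in range(1001)]
for lo, hi in zip(grid, grid[1:]):
    g_lo, g_hi = phi(lo) - lo, phi(hi) - hi
    if g_lo * g_hi <= 0:                 # phi crosses the diagonal in [lo, hi]
        kind = "stable equilibrium" if g_hi < g_lo else "tipping point"
        print(f"symmetric equilibrium near x = {lo:.3f}  ({kind})")
```

With these numbers the map has a stable equilibrium with no engagement, a tipping point around x = 0.4, and a stable equilibrium around x = 0.93; shifting weight in the degree distribution toward higher degrees raises phi pointwise, which lowers the tipping point and raises the high stable equilibrium, in line with the comparative statics above.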
Moving on to network formation, Leeat reviewed some of the best-known generative models, both static (Erdos-Renyi and scale-free networks) and dynamic (like preferential attachment). For recent work, she focused on her model of group formation and its connections to homophily (the widely observed phenomenon that people connected to each other in a social network tend to be similar). The story is as follows. Tonight you plan to go out to dinner and then see some music. You only have enough time to read the food or the music section of the newspaper (for recommendations), not both. Which one you decide to read depends on whether you care more about food or music, as well as on the quality of information you get from your friends — if they’ve already figured out the best concert to go to, your effort is better spent learning about restaurants. The formal model is that agents partition themselves into groups, with each group sharing all of the information its members have gleaned, and with diminishing returns for additional information on a topic. If you care more about music, you want a majority of your friends to read about music, but you also want a few to read about food. Call a group “stable” if all of its members agree on how many people should read about each topic. Stable groups must have fairly homogeneous preferences, and more so at the extremes (where essentially only one topic matters). Leeat suggested the following interpretation of these results: as connection costs decrease (say, with the Internet), homogeneity within groups increases, especially for extremists. Obviously, it’s tempting to draw an analogy with political discourse in the 21st century…
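Here is a toy version of the stability condition (my stylization, with an assumed square-root benefit for extra readers, so don't mistake it for the exact model from the lectures): a group of size n splits its members between reading about music and food, and it is stable exactly when every member prefers the same split.

```python
# Toy stable-group check: a member with music-weight w gets utility
# w*u(k) + (1-w)*u(n-k) when k of the n members read about music,
# with a concave u (diminishing returns); u = sqrt is an assumption.
import math

def preferred_split(w, n=10, u=math.sqrt):
    """Number of music readers (out of n) that a member with music-weight w wants."""
    return max(range(n + 1), key=lambda k: w * u(k) + (1 - w) * u(n - k))

def is_stable(weights, n=10):
    """A group is stable if every member wants the same split."""
    return len({preferred_split(w, n) for w in weights}) == 1

print(is_stable([0.65, 0.66, 0.68]))   # similar tastes: all want 8 music readers -> True
print(is_stable([0.30, 0.70]))         # mixed tastes: they want 2 vs. 8 -> False
```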
Eva Tardos
The next speaker was Eva Tardos, from Cornell. The overarching theme of Eva’s lectures was to understand when simple auction formats perform well. That is, if we take design simplicity as a constraint, how close to an optimal solution can we get? For example, if we sell a bunch of different goods using only single-item auctions (second- or first-price, simultaneously or sequentially), can we approach the (optimal) welfare achieved by the (complex) VCG mechanism?
Eva spent most of her first lecture thoroughly analyzing a concrete example, which introduced several analysis motifs. The example is a special case of a very neat paper by Christodoulou, Kovacs, and Schapira. There are several goods (think potential PhD advisors). Bidders can have different valuations for different goods but only want one good (i.e., the valuation for a bundle is the maximum of the valuations for the goods in the bundle). Suppose the goods are sold using simultaneous second-price auctions — each bidder submits one bid per good, and each good is allocated separately using a Vickrey auction. How good are the Nash equilibria in the corresponding game?
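To fix ideas, here is a small hypothetical instance of this setup (mine, not from the lectures) in which a pure Nash equilibrium with no overbidding achieves only half of the optimal welfare: bidder 0 wants only item A, bidder 1 is equally happy with A or B, bidder 1 grabs A, and bidder 0 bids nothing. One can check that neither bidder has a profitable deviation.

```python
# Simultaneous second-price auctions with unit-demand bidders, plus an
# illustrative bid profile whose welfare is half of the optimum.
def simultaneous_second_price(values, bids):
    """values[i][j] and bids[i][j]; each item goes to its highest bidder
    (ties broken by lowest index) at the second-highest bid."""
    n, items = len(bids), bids[0].keys()
    won = [set() for _ in range(n)]
    payments = [0.0] * n
    for j in items:
        order = sorted(range(n), key=lambda i: (-bids[i][j], i))
        winner, runner_up = order[0], order[1]
        won[winner].add(j)
        payments[winner] += bids[runner_up][j]
    # unit-demand: a bidder's value for its bundle is its best single item
    utilities = [max((values[i][j] for j in won[i]), default=0.0) - payments[i]
                 for i in range(n)]
    welfare = sum(max((values[i][j] for j in won[i]), default=0.0) for i in range(n))
    return welfare, utilities

values = [{'A': 1.0, 'B': 0.0},    # bidder 0 wants only A
          {'A': 1.0, 'B': 1.0}]    # bidder 1 is happy with either item
bids   = [{'A': 0.0, 'B': 0.0},    # bidder 0 bids nothing...
          {'A': 1.0, 'B': 0.0}]    # ...and bidder 1 grabs A

welfare, utilities = simultaneous_second_price(values, bids)
print(welfare, utilities)          # 1.0 [0.0, 1.0]
```

The optimal allocation gives A to bidder 0 and B to bidder 1, for welfare 2; the equilibrium above has welfare 1, matching the 50% guarantee discussed next.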
As a warm-up, consider pure Nash equilibria of the full-information version of the game. Here, one starts with a pure Nash equilibrium (that also satisfies a necessary “no overbidding” property), and considers a hypothetical deviation by a player for the item it receives in a welfare-maximizing allocation. Summing over all players and charging lost welfare appropriately back to the equilibrium welfare shows that the welfare of every pure Nash equilibrium is at least 50% of the maximum possible. Next, Eva showed how the structure of this analysis, which only uses the fact that each player has no regret with respect to a single and simple alternative strategy, implies that the 50% bound holds more generally for all “no regret” outcomes. (This encompasses all mixed-strategy Nash and correlated equilibria, plus more; see also Avrim’s lectures for more details.) Still more generally, Eva considered the incomplete information case, which is usually the natural one for auction settings. Here, we assumed that each bidder has a common valuation for all goods that it is interested in (and valuation 0 for the rest). There is a different analysis trick in this case, which is to use the fact that each bidder in a Bayes-Nash equilibrium has no regret for bidding half its value on all the items that it is interested in. The result is that the expected welfare of every Bayes-Nash equilibrium is at least 25% of the expected optimal welfare. Interestingly, this holds even if bidders’ valuations are statistically correlated. Eva wrapped up her first lecture by showing how similar ideas can be used to prove bounds on the quality of Bayes-Nash equilibria of the generalized second-price (GSP) sponsored search auction — another example of a simple (second-price) solution being used in lieu of a “complex” (VCG) one — even when advertiser quality factors are unknown (see here for details).
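To spell out the warm-up charging argument a bit, here is my reconstruction for unit-demand bidders (the details may differ from how Eva presented it). Let b be a pure Nash equilibrium with no overbidding, let o_i be bidder i's item in a welfare-maximizing allocation, and let B_j be the highest equilibrium bid on item j; the hypothetical deviation for bidder i is to bid its value v_i(o_i) on o_i and nothing else.

```latex
% If the deviating bidder wins, it pays at most B_{o_i}; if it loses, then
% B_{o_i} >= v_i(o_i) and the right-hand side is nonpositive.  Either way:
\[
  u_i(b) \;\ge\; u_i(\text{deviate: bid } v_i(o_i) \text{ on } o_i \text{ only})
         \;\ge\; v_i(o_i) - B_{o_i}.
\]
% Summing over bidders (each item is the optimal item of at most one bidder),
% and using no overbidding to charge each winning bid to its winner's value:
\[
  \mathrm{SW}(b) \;\ge\; \sum_i u_i(b)
                 \;\ge\; \mathrm{OPT} - \sum_j B_j
                 \;\ge\; \mathrm{OPT} - \mathrm{SW}(b),
\]
% so SW(b) >= OPT/2.
```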
For her second lecture, Eva left behind simultaneous single-item auctions and moved on to the sequential case (see here and here for details). She made a compelling case that first-price auctions need to be used instead of second-price ones — the problem being that the second-price rule permits signalling and threats via high overbids, and these lead to subgame perfect equilibria with very poor welfare. Happily, constant-factor guarantees are possible for equilibria of first-price sequential auctions. The key idea in the analysis is to consider deviations of the form: follow the equilibrium strategy until you reach the auction for the good that you would be assigned in an optimal allocation, and then bid half your value for it.
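Roughly, and glossing over how this deviation interacts with the other auctions (this is my paraphrase for unit-demand bidders, not Eva's exact statement): write p_j for the equilibrium price of item j and o_i for bidder i's item in an optimal allocation. When the deviator bids half its value in the auction for o_i, the history so far matches the equilibrium path, so either it wins (a first-price auction, so it pays only half its value) or the eventual winner bids at least that much.

```latex
% Rough shape of the per-bidder bound; the actual accounting of how prices and
% earlier payments are charged back to the equilibrium welfare is more careful.
\[
  u_i(\text{eq}) \;\ge\; u_i(\text{deviate at } o_i)
                 \;\ge\; \tfrac{1}{2}\, v_i(o_i) \;-\; (\text{prices charged against bidder } i),
\]
% and summing over bidders, then using the fact that total revenue is at most
% the equilibrium welfare, yields a constant-factor guarantee.
```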
[1] I can’t remember why it used to bother me that junior organizers are expected to do all the work. These days, I think it’s a great tradition!
[2] Caveat: I missed Leeat’s first lecture due to a thunderstorm conspiracy, and am basing my summary of it on her slides.