On behalf of the organizers of the workshop on the subject, this is a test of the social and information network. If you think you are influential, be sure to share this with all of your colleagues. If you are submitting a paper, be sure to include with your submission that you heard about the workshop here on Turing’s Invisible Hand and who first shared it with you. (Or don’t.)

The Workshop on Social and Information Networks
(with the Conference on Economics and Computation)
June 15, 2015 in Portland, Oregon, USA.

Social and economic networks have recently attracted great interest within computer science, operations research, and economics, among other fields. How do these networks mediate important processes, such as the spread of information or the functioning of markets? This research program emerges from and complements a vast literature in sociology. Much of the recent excitement is due to the availability of social data. Massive digital records of human interactions offer a unique system-wide perspective on collective human behavior as well as opportunities for new experimental and empirical methods.

This workshop seeks to feature some of the most exciting recent research across the spectrum of this interdisciplinary area. On the computational end, we invite work on applications of machine learning, algorithms, and complex network theory to social and economic networks. On the social science end, we welcome new theoretical perspectives, empirical studies, and experiments that expand the economic understanding of network phenomena. As an organizing theme, we emphasize the flow of information in networks, but the workshop is not limited to this.

Submissions are due on April 30th, 2015; see the workshop webpage for more details on the workshop and submission process.

Ido Erev, Eyal Ert, and Ori Plonsky are organizing an interesting competition under the title From Anomalies to Forecasts: Choice Prediction Competition for Decisions under Risk and Ambiguity (CPC2015). The idea is to quantitatively predict the magnitude of multiple known human “biases” and “non-rational” behaviors. The first prize is an invitation to co-author the organizers’ paper about the competition.

Experimental studies of human choice behavior have documented clear violations of rational economic theory and triggered the development of behavioral economics. Yet the impact of these careful studies on applied economic analyses and policy decisions is not large. One justification for the tendency to ignore the experimental evidence is the assertion that the behavioral literature highlights contradictory deviations from maximization, and it is not easy to predict which deviation will matter most in a specific situation.

To address this problem, Kahneman and Tversky (1979) proposed a model (prospect theory) that captures the joint effect of four of the most important deviations from maximization: the certainty effect (the Allais paradox; Allais, 1953), the reflection effect, overweighting of low-probability extreme events, and loss aversion (see the top four rows of Table 1). The current paper extends this and similar efforts (see, e.g., Thaler & Johnson, 1990; Brandstätter, Gigerenzer, & Hertwig, 2006; Birnbaum, 2008; Wakker, 2010; Erev et al., 2010) by facilitating the derivation and comparison of models that capture the joint impact of the four “prospect theory effects” and ten additional phenomena (see Table 1).
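For readers less familiar with the model, the two ingredients at the heart of prospect theory, an S-shaped value function and an inverse-S probability-weighting function, can be sketched in a few lines. The parameter values below are the commonly cited estimates from Tversky and Kahneman's 1992 cumulative version and are used here purely for illustration, not as part of the competition.

```python
# Illustrative sketch of prospect theory's core ingredients.
# Parameter values (alpha, lam, gamma) are the common Tversky &
# Kahneman (1992) estimates, used here only as an example.

def value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities,
    underweights moderate-to-large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(outcomes):
    """Subjective value of a prospect given as (outcome, probability) pairs."""
    return sum(weight(p) * value(x) for x, p in outcomes)

# Certainty effect (Allais-style): a sure 3000 can beat a gamble with a
# higher expected value, because weight(0.8) falls well below 0.8.
sure_thing = prospect_value([(3000, 1.0)])
gamble = prospect_value([(4000, 0.8)])
```

With these illustrative parameters the sure 3000 is preferred to the 0.8 chance of 4000 even though the gamble has the higher expected value, which is exactly the certainty effect the competition asks models to reproduce.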

These choice phenomena were replicated under one “standard” setting (Hertwig & Ortmann, 2001): choice with real stakes in a space of experimental tasks wide enough to replicate all the phenomena illustrated in Table 1. The results suggest that all 14 phenomena emerge in our setting, though their magnitudes tend to be smaller than in the original demonstrations.

[[Table 1 omitted here; it appears on the source page.]]

The current choice prediction competition focuses on developing models that capture all of these phenomena but also predict behavior in other choice problems. To calibrate the models, we ran an “estimation set” study that included 60 randomly selected choice problems.

The participants in each competition will be allowed to study the results of the estimation set. Their goal will be to develop a model that predicts the results of the competition set. In order to qualify for the competition, the model must capture all 14 choice phenomena of Table 1. The model should be implemented as a computer program that reads the parameters of the problems as input and predicts the proportion of choices of Option B as output. Thus, we use the generalization criterion methodology (see Busemeyer & Wang, 2000).
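The required input/output contract can be made concrete with a toy baseline. The two-outcome-gamble problem format and the `sensitivity` parameter below are assumptions made purely for illustration, not the competition's actual specification; see the competition site for the real input format.

```python
import math

# Toy baseline illustrating the required contract: read a problem's
# parameters, output the predicted proportion choosing Option B.
# The (outcome, probability) gamble format here is an assumption for
# illustration only.

def expected_value(outcomes):
    """Expected value of a gamble given as (outcome, probability) pairs."""
    return sum(x * p for x, p in outcomes)

def predict_b_rate(option_a, option_b, sensitivity=0.001):
    """Logistic choice rule: the larger B's expected-value advantage,
    the larger the predicted proportion choosing B."""
    advantage = expected_value(option_b) - expected_value(option_a)
    return 1 / (1 + math.exp(-sensitivity * advantage))

# Example problem: A = 3000 for sure; B = 4000 with probability 0.8, else 0.
p_b = predict_b_rate([(3000, 1.0)], [(4000, 0.8), (0, 0.2)])
```

A serious entry would fit its parameters to the estimation set and then be scored on the held-out competition set. Note that a pure expected-value baseline like this one would fail the qualification requirement of reproducing all 14 phenomena, which is precisely why the competition demands richer models.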

The deadline for registration is April 20th.  Submission deadline is May 17th.

On behalf of the organizers…

The Workshop on Algorithmic Game Theory and Data Science
(with the Conference on Economics and Computation)
June 15, 2015 in Portland, Oregon, USA.

Computer systems have become the primary mediator of social and economic interactions, enabling transactions at ever-increasing scale.  Mechanism design done at large scale needs to be a data-driven enterprise.  It seeks to optimize some objective with respect to a huge underlying population to which the mechanism designer does not have direct access.  Instead, the mechanism designer typically has access to sampled behavior from that population (e.g., bid histories or purchase decisions).  This means that, on the one hand, mechanism designers will need to bring to bear data-driven methodology from statistical learning theory, econometrics, and revealed preference theory.  On the other hand, strategic settings pose new challenges in data science, and approaches for learning and inference need to be adapted to account for strategic behavior.

The goal of this workshop is to frame the agenda for research at the interface of algorithms, game theory, and data science.  Papers from a rich set of experimental, empirical, and theoretical perspectives are invited. Topics of interest include but are not limited to:

  • Can good mechanisms be learned by observing agent behavior in response to other mechanisms? How hard is it to “learn” a revenue maximizing auction given a sampled bid history?  How hard is it to learn a predictive model of customer purchase decisions, or better yet, a set of prices that will accurately maximize profit under these behavioral decisions?
  • What is the sample complexity of mechanism design?  How much data is necessary to enable good mechanism design?
  • How does mechanism design affect inference?  Are outcomes of some mechanisms more informative than those of others from the viewpoint of inference?
  • How does inference affect mechanism design?  If participants know that their data is to be used for inference, how does this knowledge affect their behavior in a mechanism?
  • Can tools from computer science and game theory be used to contribute rigorous guarantees to interactive data analysis?  Strategic interactions between a mechanism and a user base are often interactive (e.g., an ascending price auction, or repeated interaction between a customer and an online retailer), a setting in which traditional methods for preventing overfitting are weak.
  • Is data an economic model? Can data be used to evaluate or replace existing economic models?  What are the consequences for game theory and economics of replacing models with data?

Submissions are due April 27, 2015. See the workshop website for further details and submission instructions.

Prize Announcements

John Nash will receive the 2015 Abel Prize (the most prestigious prize in mathematics besides the Fields Medal). The Norwegian Academy of Sciences and Letters is awarding the prize not for Nash’s work on game theory, but for his (and Louis Nirenberg’s) “striking and seminal contributions to the theory of nonlinear partial differential equations and its applications to geometric analysis”. Nash is the first person to win both a Nobel Prize and the Abel Prize.

Coincidentally, it has been announced that former Turing’s Invisible Hand blogger Ariel Procaccia will receive the IJCAI-2015 Computers and Thought Award. Ariel is “recognized for his contributions to the fields of computational social choice and computational economics, and for efforts to make advanced fair division techniques more widely accessible”. Congratulations!

Last fall I posted an announcement about the Reverse AGT Workshop series and its first event on the topic of Optimal Taxation. Three economists, Benjamin Lockwood (Harvard Business School), Stefanie Stantcheva (Harvard Economics), and Glen Weyl (Microsoft Research), spoke about their work on optimal taxation in presentations targeted to an AGT audience. Each talk was thirty minutes, followed by fifteen minutes of discussion. It was fantastic! Since the workshop, a reading group has been organized to dig deeper into key papers in the optimal taxation literature. For example, the reading group spent three weeks on an excellent survey by Thomas Piketty and Emmanuel Saez, Optimal Labor Income Taxation, from the Handbook of Public Economics.

Next week the second Reverse AGT workshop on the topic of competition in selection markets will be held at Harvard U. This workshop series is now cosponsored by Harvard’s Center for Research on Computation and Society and Microsoft Research. The event will be videotaped and talks will be made available on the series website. A summary is below with full details on the workshop website.

Reverse AGT Workshop on Competition in Selection Markets
Maxwell Dworkin 119, Harvard U.
2-5pm, Friday, February 27, 2015.

In many markets, especially for insurance and financial products, the value of a sale depends on the identity of the customer, since that identity determines the chance that they will, for example, get sick or default on a loan. In such markets, competition between firms (e.g., insurance companies, banks) for customers may lead to inefficiencies, even complete market collapse, because firms are not sensitive to the costs their actions impose on other firms. For example, a firm may design products that selectively attract (“cream-skim”) the safe customers, which worsens the pool of remaining customers. This can cause the market to unravel.

Economists have taken two contrasting, but complementary, approaches to studying these issues. The first, dominant from the 1970s through the mid-2000s, assumed customers differed only along a single dimension, like health risk, and studied equilibrium when firms compete perfectly and offer arbitrarily complicated mechanisms. Nathan Hendren will present some classic results from this “game theoretic” approach. In the last decade, economists have been interested in a “price theoretic” approach, where firms use very simple strategies, like offering a single, standard insurance contract, but compete in a more realistic environment in which they have market power or customers differ along multiple dimensions, such as their income as well as their health. Glen Weyl will present this approach. Eduardo Azevedo will present his joint work with Daniel Gottlieb, which tries to bridge these approaches with a new perfectly competitive solution concept that exists generally and applies in settings with multiple dimensions of consumer heterogeneity while allowing a rich range of contracts to be traded.

AGT@IJCAI Workshop

[The following workshop announcement comes to us by way of Carmine Ventre.]

AGT@IJCAI 2015 Preliminary Call for Papers

1st Workshop on Algorithmic Game Theory at IJCAI
Buenos Aires, Argentina
July 24-31, 2015



April 27, 2015 – Submission of contributions to workshops;
May 20, 2015 – Workshop paper acceptance notification;
May 30, 2015 – Deadline for final camera ready copy to workshop organizers.



Over the past fifteen years, research in theoretical computer science, artificial intelligence, and microeconomics has joined forces to tackle problems involving incentives and computation. This research field, commonly called Algorithmic Game Theory, is becoming increasingly relevant.

The main aim of this one-day workshop is to bring together the rich variety of scientists that IJCAI attracts in order to create a multidisciplinary forum within which to discuss and analyze the current and novel challenges facing research in Algorithmic Game Theory.


All paper submissions will be peer-reviewed and evaluated on the basis of the quality of their contribution, originality, soundness, and significance. Industrial applications and position papers presenting novel ideas, issues, challenges and directions are also welcome. Submissions are invited in, but not limited to, the following topics:

– Algorithmic mechanism design
– Auction algorithms and analysis
– Behavioral Game Theory
– Bounded rationality
– Computational advertising
– Computational aspects of equilibria
– Computational social choice
– Convergence and learning in games
– Coalitions, coordination and collective action
– Economic aspects of security and privacy
– Economic aspects of distributed and network computing
– Information and attention economics
– Network games
– Price differentiation and price dynamics
– Social networks

Papers are to be submitted electronically via EasyChair. The submission link and submission format will be available on the workshop website in due course.


To widen participation and encourage discussion, there will be no formal publication of workshop proceedings. Therefore, submissions of preliminary work and papers to be submitted or in preparation for submission to other major venues in the field are encouraged.


Program Committee:

Ioannis Caragiannis (University of Patras)
Constantinos Daskalakis (MIT)
Edith Elkind (University of Oxford)
Diodato Ferraioli (Università di Salerno)
Martin Gairing (University of Liverpool)
Enrico H. Gerding (University of Southampton)
Vasilis Gkatzelis (Stanford)
Umberto Grandi (IRIT)
Gianluigi Greco (Università della Calabria)
Nicole Immorlica (Microsoft Research)
Jerome Lang (Université Paris-Dauphine)
Kate Larson (University of Waterloo)
Katrina Ligett (California Institute of Technology)
Emiliano Lorini (IRIT)
Brendan Lucier (MSR New England)
Vangelis Markakis (Athens University of Economics and Business)
Noam Nisan (Hebrew University of Jerusalem)
David Parkes (Harvard University)
Maria Polukarov (University of Southampton)
Valentin Robu (Heriot-Watt University)
Francesca Rossi (Università di Padova)
Eva Tardos (Cornell)
Orestis Telelis (University of Piraeus)
Vijay V. Vazirani (Georgia Tech)
Angelina Vidali (UPMC Sorbonne Universities)
Toby Walsh (NICTA and UNSW)


Organizing Committee:

Georgios Chalkiadakis (Technical University of Crete)
Nicola Gatti  (Politecnico di Milano)
Reshef Meir (Harvard University)
Carmine Ventre (Teesside University)


Methods from CS have enabled new understanding of topics in game theory and economics, but they have been explored for only a small collection of subareas of game theory and economics. There may be opportunities more broadly, especially in areas that computer scientists would not naturally explore on their own. The following workshop is a coordinated effort of AGT researchers and economists in the Cambridge area to explore possible interactions more broadly. Feel free to attend if you are in the Cambridge area; if not, you may still find the format interesting. The official announcement follows.

Reverse AGT Workshop on Optimal Taxation
Harvard U, 20 University Road, Room 646
1-4pm, Monday, November 24, 2014

At the Reverse AGT Workshop local economists will present an area of economic study for an algorithmic game theory (AGT) audience. The presentations will include a brief introduction to the area and several current research topics. The schedule includes ample time for discussion to make connections to related research in AGT and to highlight research questions that methods from AGT might help to answer. The topic of the first workshop is Optimal Taxation and it is organized by Glen Weyl, Brendan Lucier, and Jason Hartline.


1:00: Glen Weyl: Introduction to Optimal Redistributive Taxation
1:30: Q/A and discussion

1:45: Stefanie Stantcheva: Approximating Optimal Tax Systems
2:15: Q/A and discussion

2:30: Benjamin Lockwood: Optimal Income Taxation with Misoptimizing Consumers
3:00: Q/A and discussion

3:15: Coffee and cookies
3:45: Summary discussion and closing comments


Introduction to Optimal Redistributive Taxation
Glen Weyl (Microsoft Research and U. of Chicago)

I will give a brief introduction to the theory of utilitarian optimal redistributive taxation proposed by Vickrey (1945) based on insurance behind the veil of ignorance. I will mostly focus on the types of models studied and results obtained, rather than on techniques used. I will discuss the veil of ignorance argument for utilitarianism, the optimal linear income tax, the nonlinear income tax problem, the optimal top tax rate, the Atkinson Stiglitz theorem, tagging, the taxation of leisure complements and, briefly, a few more recent results that I find particularly interesting.

Approximating Optimal Tax Systems
Stefanie Stantcheva (Harvard Economics)

In this talk, I will highlight how complex dynamic optimal tax systems are in realistic settings. I will show how economists have tried to simplify the optimal systems numerically. I will then argue that it is crucial to have a theory of approximation of optimal tax systems that can be applied in a consistent manner to different environments and tax problems. I will propose directions along which to think about this and present the beginning of the work I am doing on this.

Optimal Income Taxation with Misoptimizing Consumers
Benjamin Lockwood (Harvard Business School)

This paper studies optimal redistributive income taxation in the presence of psychological frictions. We augment familiar formulas for optimal taxes using “sufficient statistics” for misoptimization, which abstract from the underlying behavioral model generating misoptimization. We show that corrections are likely to be strongest at the bottom of the income distribution, and we clarify conditions under which the planner should work to correct or exacerbate misoptimization. Finally, we show how this approach can be implemented empirically, using reduced-form evidence about responses to the Earned Income Tax Credit to estimate the degree of misoptimization. Simulations suggest that this type of misoptimization generates substantial optimal work subsidies for low income individuals. Joint with Dmitry Taubinsky.