Economic theory has produced a number of cute mechanisms that have been proven to satisfy various desirable properties. However, most of these mechanisms, in some cases surprisingly, have never been used in practice.

There have been some recent attempts by computer scientists to make these mechanisms accessible to a broader audience. One of them, Spliddit, developed by former TIH blogger Ariel Procaccia, provides implementations of five fair division mechanisms. Spliddit was revamped a few days ago and now also offers the first(?) real-world application of the Shapley value.

Another such attempt, called Pnyx, which I developed with Christian Geist and Guillaume Chabin, will be demonstrated at AAMAS in Istanbul tomorrow. Pnyx is a web-based tool for preference aggregation. We were inspired by the observation that most people use inferior mechanisms (such as plurality rule) and/or unsuitable tools (such as Doodle) when aggregating preferences in everyday situations. Depending on the desired output, Pnyx computes its outcome using the Borda count, Kemeny’s rule, or Fishburn’s maximal lotteries. We tried to keep Pnyx as simple as possible. A similar recently developed tool that offers many more voting rules is Whale3.
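For a flavor of the kind of aggregation Pnyx performs, here is a minimal sketch of the Borda count in Python. The ballot format is my own illustration, not Pnyx’s actual interface.

```python
from collections import defaultdict

def borda(ballots):
    """Borda count: a ballot ranking m alternatives gives m-1 points to
    its top choice, m-2 to the next, and so on down to 0; alternatives
    are then ordered by total score."""
    scores = defaultdict(int)
    for ranking in ballots:
        m = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position
    # Highest total score first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Three voters ranking alternatives a, b, c:
print(borda([["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]))
# [('b', 5), ('a', 3), ('c', 1)]
```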

I am sure there are other websites or apps that implement well-studied economic mechanisms. Please let us know about these in the comments.

SCUGC 2015: The 5th Workshop on Social Computing and User-Generated Content

https://sites.google.com/site/scugc2015/

June 16, 2015, Portland, Oregon.

in conjunction with

ACM Conference on Economics and Computation (ACM-EC 2015).

SUBMISSIONS DUE: April 25, 2015 midnight EDT.

The workshop will bring together researchers and practitioners from a variety of relevant fields, including economics, computer science, and social psychology, in both academia and industry, who are interested in social computing and user-generated content. We solicit research contributions (both new and recently published). The workshop will also feature a discussion panel on prediction markets.

Social Computing and User Generated Content

Social computing systems are now ubiquitous on the web. Wikipedia is perhaps the best-known peer production system, and there are many platforms for crowdsourcing tasks to online users, including Games with a Purpose, Amazon’s Mechanical Turk, the TopCoder competitions for software development, and many online Q&A forums such as Yahoo! Answers. Meanwhile, user-created product reviews on Amazon generate value for other users looking to choose among products, Yelp’s value comes from user reviews of listed services, and a significant fraction of the content consumed online consists of user-generated, publicly viewable social media such as blogs and YouTube, along with the comments and discussion threads on these blogs and forums.

Workshop Topics

The workshop aims to bring together participants with diverse perspectives to address the important research questions surrounding social computing and user-generated content: Why do users participate? What factors affect participation levels, and what factors affect the quality of participants’ contributions? How can participation be improved, both in the number of participants and in the quality of their contributions? What design levers can be used to build better social computing systems? Finally, what are novel ways in which social computing can be used to generate value? The answers to these questions will inform the future of social computing, both by improving the design of existing sites and by contributing to the design of new social computing applications. Papers from a rich set of experimental, empirical, and theoretical perspectives are invited. Topics of interest for the workshop include, but are not limited to:

  • Incentives in peer production systems
  • Experimental studies on social computing systems
  • Empirical studies on social computing systems
  • Models for user behavior
  • Crowdsourcing and Wisdom of the Crowds
  • Games with a purpose
  • Online question-and-answer systems
  • Game-theoretic approaches to social computing
  • Algorithms and mechanisms for social computing, crowdsourcing, and UGC
  • Quality and spam control in user-generated content
  • Rating and ranking user-generated content
  • Manipulation-resistant ranking schemes
  • User behavior and incentives on social media
  • Trust and privacy in social computing systems
  • Social-psychological approaches to incentives for contribution
  • Algorithms and systems for large-scale decision making and consensus
  • Usability and user experience

Organizing Committee

Boi Faltings, École Polytechnique Fédérale de Lausanne (EPFL)

John Horton, New York University

Alex Slivkins, Microsoft Research NYC

On behalf of the organizers of the workshop below, this is a test of the social and information network. If you think you are influential, be sure to share this with all of your colleagues. If you are submitting a paper, be sure to mention in your submission that you heard about the workshop here on Turing’s Invisible Hand and who first shared it with you. (Or don’t.)


The Workshop on Social and Information Networks
http://networks.seas.harvard.edu/
(with the Conference on Economics and Computation)
June 15, 2015 in Portland, Oregon, USA.

Social and economic networks have recently attracted great interest within computer science, operations research, and economics, among other fields. How do these networks mediate important processes, such as the spread of information or the functioning of markets? This research program emerges from and complements a vast literature in sociology. Much of the recent excitement is due to the availability of social data. Massive digital records of human interactions offer a unique system-wide perspective on collective human behavior as well as opportunities for new experimental and empirical methods.

This workshop seeks to feature some of the most exciting recent research across the spectrum of this interdisciplinary area. On the computational end, we invite work on applications of machine learning, algorithms, and complex network theory to social and economic networks. On the social science end, we welcome new theoretical perspectives, empirical studies, and experiments that expand the economic understanding of network phenomena. As an organizing theme, we emphasize the flow of information in networks, but the workshop is not limited to this.

Submissions are due on April 30th, 2015; see the workshop webpage for more details on the workshop and submission process.

Ido Erev, Eyal Ert, and Ori Plonsky are organizing an interesting competition under the title From Anomalies to Forecasts: Choice Prediction Competition for Decisions under Risk and Ambiguity (CPC2015). The idea is to quantitatively predict the magnitude of multiple known human “biases” and “non-rational” behaviors. The first prize is an invitation to co-author a paper about the competition, written by the organizers.

Experimental studies of human choice behavior have documented clear violations of rational economic theory and triggered the development of behavioral economics. Yet the impact of these careful studies on applied economic analyses and policy decisions is not large. One justification for the tendency to ignore the experimental evidence is the assertion that the behavioral literature highlights contradictory deviations from maximization, making it hard to predict which deviation is likely to matter more in a specific situation.

To address this problem Kahneman and Tversky (1979) proposed a model (Prospect theory) that captures the joint effect of four of the most important deviations from maximization: the certainty effect (Allais paradox, Allais, 1953), the reflection effect, overweighting of low probability extreme events, and loss aversion (see top four rows in Table 1). The current paper extends this and similar efforts (see e.g., Thaler & Johnson, 1990; Brandstätter, Gigerenzer, & Hertwig, 2006; Birnbaum, 2008; Wakker, 2010; Erev et al., 2010) by facilitating the derivation and comparison of models that capture the joint impact of the four “prospect theory effects” and ten additional phenomena (see Table 1).
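To make these deviations concrete, here is a minimal sketch in Python of a prospect-theory valuation, using the simple (separable) form with the functional shapes and median parameter estimates reported by Tversky and Kahneman (1992). This particular implementation is my illustration, not part of the competition materials.

```python
def pt_value(outcomes, alpha=0.88, lam=2.25, gamma=0.61):
    """Simple (separable) prospect-theory value of a gamble given as
    (outcome, probability) pairs.  v(x) = x^alpha for gains and
    -lam * (-x)^alpha for losses captures diminishing sensitivity and
    loss aversion; the inverse-S weighting w(p) overweights small
    probabilities.  Parameters are the Tversky-Kahneman (1992) medians."""
    def v(x):
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha

    def w(p):
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    return sum(w(p) * v(x) for x, p in outcomes)

# The certainty effect (Allais): a sure 3000 is valued above an 0.8
# chance of 4000, even though the gamble has the higher expected value.
print(pt_value([(3000, 1.0)]))            # ~1148
print(pt_value([(4000, 0.8), (0, 0.2)]))  # ~898
```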

These choice phenomena were replicated under one “standard” setting (Hertwig & Ortmann, 2001): choice with real stakes in a space of experimental tasks wide enough to replicate all the phenomena illustrated in Table 1. The results suggest that all 14 phenomena emerge in our setting, although their magnitudes tend to be smaller than in the original demonstrations.

[Table 1 is omitted here; it appears on the source page.]

The current choice prediction competition focuses on developing models that can capture all of these phenomena but also predict behavior in other choice problems. To calibrate the models, we ran an “estimation set” study that included 60 randomly selected choice problems.

The participants in each competition will be allowed to study the results of the estimation set. Their goal will be to develop a model that predicts the results of the competition set. To qualify for the competition, the model must capture all 14 choice phenomena of Table 1. The model should be implemented in a computer program that reads the parameters of the problems as input and outputs the predicted proportion of choices of Option B; a toy example of this shape is sketched below. Thus, we use the generalization criterion methodology (see Busemeyer & Wang, 2000).
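Here is a minimal stub in Python with the required shape. The input format and the rule used (a logit choice over expected values) are my own assumptions for illustration; a real entry would need a model rich enough to reproduce all 14 phenomena.

```python
import math

def predict_b_rate(option_a, option_b, sensitivity=100.0):
    """Hypothetical competition interface: take the parameters of one
    choice problem (two options, each a list of (outcome, probability)
    pairs) and return the predicted proportion of Option B choices.
    The rule here, a logit choice over expected values, is a crude
    baseline that cannot capture the 14 phenomena a real entry must."""
    ev = lambda option: sum(p * x for x, p in option)
    return 1.0 / (1.0 + math.exp(-(ev(option_b) - ev(option_a)) / sensitivity))

# Option A: 3000 for sure.  Option B: 4000 with probability 0.8.
print(predict_b_rate([(3000, 1.0)], [(4000, 0.8), (0, 0.2)]))  # ~0.88
```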

The deadline for registration is April 20th; the submission deadline is May 17th.

On behalf of the organizers…


The Workshop on Algorithmic Game Theory and Data Science
https://sites.google.com/site/agtanddatascienceworkshop2015
(with the Conference on Economics and Computation)
June 15, 2015 in Portland, Oregon, USA.


Computer systems have become the primary mediator of social and economic interactions, enabling transactions at ever-increasing scale. Mechanism design at this scale needs to be a data-driven enterprise: it seeks to optimize some objective with respect to a huge underlying population to which the mechanism designer does not have direct access. Instead, the designer typically has access to sampled behavior from that population (e.g., bid histories or purchase decisions). This means that, on the one hand, mechanism designers will need to bring to bear data-driven methodology from statistical learning theory, econometrics, and revealed preference theory. On the other hand, strategic settings pose new challenges in data science, and approaches for learning and inference need to be adapted to account for strategization.


The goal of this workshop is to frame the agenda for research at the interface of algorithms, game theory, and data science.  Papers from a rich set of experimental, empirical, and theoretical perspectives are invited. Topics of interest include but are not limited to:


  • Can good mechanisms be learned by observing agent behavior in response to other mechanisms? How hard is it to “learn” a revenue-maximizing auction given a sampled bid history (see the sketch after this list)? How hard is it to learn a predictive model of customer purchase decisions, or better yet, a set of prices that will maximize profit under those behavioral decisions?
  • What is the sample complexity of mechanism design?  How much data is necessary to enable good mechanism design?
  • How does mechanism design affect inference?  Are outcomes of some mechanisms more informative than those of others from the viewpoint of inference?
  • How does inference affect mechanism design?  If participants know that their data is to be used for inference, how does this knowledge affect their behavior in a mechanism?
  • Can tools from computer science and game theory be used to contribute rigorous guarantees to interactive data analysis? Strategic interactions between a mechanism and a user base are often interactive (e.g., an ascending-price auction, or repeated interaction between a customer and an online retailer), a setting in which traditional methods for preventing data over-fitting are weak.
  • Is data an economic model? Can data be used to evaluate or replace existing economic models? What are the consequences for game theory and economics of replacing models with data?
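As a concrete instance of the first bullet above, here is a minimal sketch of empirical revenue maximization from a sampled bid history: choose the posted price that maximizes revenue on the sample. This is a generic textbook baseline of my own choosing, not a method proposed by the workshop, and the bid history is made up.

```python
def best_reserve(bids):
    """Empirical revenue maximization: for each candidate posted price r
    (only prices equal to some observed bid can be optimal), estimate
    revenue as r times the fraction of sampled values that are >= r,
    and return the price maximizing this estimate."""
    n = len(bids)
    revenue = lambda r: r * sum(1 for b in bids if b >= r) / n
    return max(bids, key=revenue)

# A hypothetical bid history sampled from past auctions.
history = [11, 3, 7, 12, 5, 9, 2, 8, 10, 4]
r = best_reserve(history)
print(r, r * sum(1 for b in history if b >= r) / len(history))  # 7 4.2
```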


Submissions are due April 27, 2015. See the workshop website for further details and submission instructions.

Prize Announcements

John Nash will receive the 2015 Abel Prize (the most prestigious prize in mathematics besides the Fields Medal). The Norwegian Academy of Sciences and Letters is awarding the prize not for Nash’s work on game theory, but for his (and Louis Nirenberg’s) “striking and seminal contributions to the theory of nonlinear partial differential equations and its applications to geometric analysis”. Nash is the first person to win both the Nobel Prize and the Abel Prize.

Coincidentally, it has been announced that former Turing’s Invisible Hand blogger Ariel Procaccia will receive the IJCAI-2015 Computers and Thought Award. Ariel is “recognized for his contributions to the fields of computational social choice and computational economics, and for efforts to make advanced fair division techniques more widely accessible”. Congratulations!

Last fall I posted an announcement about the Reverse AGT Workshop series and its first event, on the topic of optimal taxation. Three economists, Benjamin Lockwood (Harvard Business School), Stefanie Stantcheva (Harvard Economics), and Glen Weyl (Microsoft Research), spoke about their work on optimal taxation in presentations targeted to an AGT audience. Each talk was thirty minutes, followed by fifteen minutes of discussion. It was fantastic! Since the workshop, a reading group has been organized to dig deeper into key papers in the optimal taxation literature. For example, the reading group spent three weeks on an excellent survey by Thomas Piketty and Emmanuel Saez, Optimal Labor Income Taxation, from the Handbook of Public Economics.

Next week the second Reverse AGT workshop on the topic of competition in selection markets will be held at Harvard U. This workshop series is now cosponsored by Harvard’s Center for Research on Computation and Society and Microsoft Research. The event will be videotaped and talks will be made available on the series website. A summary is below with full details on the workshop website.

Reverse AGT Workshop on Competition in Selection Markets
http://crcs.seas.harvard.edu/reverse-agt-workshops
Maxwell Dworkin 119, Harvard U.
2-5pm, Friday, February 27, 2015.

Summary:
In many markets, especially for insurance and financial products, the value of a sale depends on the identity of the customer, since identity determines the chance of, e.g., getting sick or defaulting on a loan. Competition between firms (e.g., insurance companies, banks) for customers may lead to inefficiencies, even complete market collapse, because firms are not sensitive to the costs their actions impose on other firms. For example, a firm may design products to selectively attract (“cream-skim”) the safe customers, which worsens the pool of remaining customers. This can cause the market to unravel, as the sketch below illustrates.
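A stylized simulation of that unraveling dynamic, under assumptions of my own choosing: risk types are uniform on [0, 100], customers value coverage at a fixed markup over their expected cost, and a competitive insurer must price at the average cost of whoever actually buys.

```python
def unravel(markup=1.2, rounds=10):
    """Stylized adverse-selection spiral.  Risk types (expected claim
    costs) are uniform on [0, 100]; a customer with cost c values
    coverage at markup * c and buys iff that exceeds the premium.  A
    competitive insurer must price at the average cost of those who
    actually buy, so each time safe customers drop out, the premium
    is pushed up further."""
    premium = 50.0  # start at the average cost of the full population
    for t in range(rounds):
        cutoff = min(premium / markup, 100.0)  # types below this drop out
        share = 1.0 - cutoff / 100.0           # fraction still buying
        print(f"round {t}: premium {premium:6.2f}, insured share {share:.2f}")
        if share <= 0.0:
            print("the market has fully unraveled")
            return
        premium = (cutoff + 100.0) / 2.0  # reprice at buyers' average cost

unravel()
```

With a markup of 1.2 the spiral settles at a premium near 86 with less than a third of the market insured; as the markup approaches 1, coverage shrinks toward zero, the classic Akerlof-style collapse.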

Economists have taken two contrasting, but complementary, approaches to studying these issues. The first, dominant from the 1970s through the mid-2000s, assumed customers differed only along a single dimension, like health risk, and studied equilibrium when firms compete perfectly and offer arbitrarily complicated mechanisms. Nathan Hendren will present some classic results from this “game theoretic” approach. In the last decade, economists have become interested in a “price theoretic” approach in which firms use very simple strategies, like offering a single, standard insurance contract, but compete in a more realistic environment where they have market power or customers differ along multiple dimensions, such as their income as well as their health. Glen Weyl will present this approach. Eduardo Azevedo will present his joint work with Daniel Gottlieb, which tries to bridge these approaches with a new perfectly competitive solution concept that exists generally, applies in settings with multiple dimensions of consumer heterogeneity, and allows for a rich range of contracts to be traded.