
Archive for April, 2015

SCUGC 2015: The 5th Workshop on Social Computing and User-Generated Content

https://sites.google.com/site/scugc2015/

June 16, 2015, Portland, Oregon.

in conjunction with

ACM Conference on Economics and Computation (ACM-EC 2015).

SUBMISSIONS DUE: April 25, 2015 midnight EDT.

The workshop will bring together researchers and practitioners from a variety of relevant fields, including economics, computer science, and social psychology, in both academia and industry, who are interested in social computing and user-generated content. We solicit research contributions (both new and recently published). The workshop will also feature a discussion panel on prediction markets.

Social Computing and User-Generated Content

Social computing systems are now ubiquitous on the web: Wikipedia is perhaps the best-known peer production system, and there are many platforms for crowdsourcing tasks to online users, including Games with a Purpose, Amazon’s Mechanical Turk, the TopCoder competitions for software development, and many online Q&A forums such as Yahoo! Answers. Meanwhile, the user-created product reviews on Amazon generate value for other users looking to buy or choose amongst products, while Yelp’s value comes from user reviews of listed services; and a significant fraction of the content consumed online consists of user-generated, publicly viewable social media such as blogs or YouTube, as well as comments and discussion threads on these blogs and forums.

Workshop Topics

The workshop aims to bring together participants with diverse perspectives to address the important research questions surrounding social computing and user-generated content: Why do users participate? What factors affect participation levels, and what factors affect the quality of participants’ contributions? How can participation be improved, both in terms of the number of participants and the quality of user contributions? What design levers can be used to build better social computing systems? Finally, what are novel ways in which social computing can be used to generate value? The answers to these questions will inform the future of social computing, both by improving the design of existing sites and by contributing to the design of new social computing applications. Papers from a rich set of experimental, empirical, and theoretical perspectives are invited. The topics of interest for the workshop include, but are not limited to:

  • Incentives in peer production systems

  • Experimental studies on social computing systems

  • Empirical studies on social computing systems

  • Models of user behavior

  • Crowdsourcing and the wisdom of crowds

  • Games with a purpose

  • Online question-and-answer systems

  • Game-theoretic approaches to social computing

  • Algorithms and mechanisms for social computing, crowdsourcing, and UGC

  • Quality and spam control in user-generated content

  • Rating and ranking user-generated content

  • Manipulation-resistant ranking schemes

  • User behavior and incentives on social media

  • Trust and privacy in social computing systems

  • Social-psychological approaches to incentives for contribution

  • Algorithms and systems for large-scale decision making and consensus

  • Usability and user experience

Organizing Committee

Boi Faltings, École Polytechnique Fédérale de Lausanne (EPFL)

John Horton, New York University

Alex Slivkins, Microsoft Research NYC



On behalf of the organizers of the workshop on the subject, this is a test of the social and information network. If you think you are influential, be sure to share this with all of your colleagues. If you are submitting a paper, be sure to include with your submission that you heard about the workshop here on Turing’s Invisible Hand and who first shared it with you. (Or don’t.)


The Workshop on Social and Information Networks
http://networks.seas.harvard.edu/
(with the Conference on Economics and Computation)
June 15, 2015 in Portland, Oregon, USA.

Social and economic networks have recently attracted great interest within computer science, operations research, and economics, among other fields. How do these networks mediate important processes, such as the spread of information or the functioning of markets? This research program emerges from and complements a vast literature in sociology. Much of the recent excitement is due to the availability of social data. Massive digital records of human interactions offer a unique system-wide perspective on collective human behavior as well as opportunities for new experimental and empirical methods.

This workshop seeks to feature some of the most exciting recent research across the spectrum of this interdisciplinary area. On the computational end, we invite work on applications of machine learning, algorithms, and complex network theory to social and economic networks. On the social science end, we welcome new theoretical perspectives, empirical studies, and experiments that expand the economic understanding of network phenomena. As an organizing theme, we emphasize the flow of information in networks, but the workshop is not limited to this.

Submissions are due on April 30th, 2015; see the workshop webpage for more details on the workshop and submission process.


Ido Erev, Eyal Ert, and Ori Plonsky are organizing an interesting competition under the title From Anomalies to Forecasts: Choice Prediction Competition for Decisions under Risk and Ambiguity (CPC2015). The idea is to quantitatively predict the magnitudes of multiple known human “biases” and “non-rational” behaviors. The first prize is an invitation to co-author the organizers’ paper about the competition.

Experimental studies of human choice behavior have documented clear violations of rational economic theory and triggered the development of behavioral economics. Yet, the impact of these careful studies on applied economic analyses, and policy decisions, is not large. One justification for the tendency to ignore the experimental evidence involves the assertion that the behavioral literature highlights contradicting deviations from maximization, and it is not easy to predict which deviation is likely to be more important in specific situations.

To address this problem, Kahneman and Tversky (1979) proposed a model (Prospect theory) that captures the joint effect of four of the most important deviations from maximization: the certainty effect (Allais paradox, Allais, 1953), the reflection effect, overweighting of low-probability extreme events, and loss aversion (see top four rows in Table 1). The current paper extends this and similar efforts (see e.g., Thaler & Johnson, 1990; Brandstätter, Gigerenzer, & Hertwig, 2006; Birnbaum, 2008; Wakker, 2010; Erev et al., 2010) by facilitating the derivation and comparison of models that capture the joint impact of the four “prospect theory effects” and ten additional phenomena (see Table 1).
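To make the four “prospect theory effects” concrete, here is a toy sketch in Python, assuming the Tversky and Kahneman (1992) parametric forms and their commonly cited parameter estimates; this is an illustration of mine, not a model from the competition.

```python
# A toy sketch (illustration only, not the competition's model): the
# Tversky & Kahneman (1992) parametric form of prospect theory for a
# gamble "win x with probability p, else 0", with their commonly
# cited parameter estimates.

ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss aversion: losses loom about 2.25x larger than gains
GAMMA = 0.61   # probability-weighting curvature for gains

def value(x: float) -> float:
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def weight(p: float) -> float:
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def pt_value(x: float, p: float) -> float:
    """Prospect-theory value of the gamble 'x with probability p, else 0'."""
    return weight(p) * value(x)

# Certainty effect (Allais): a sure 3000 is preferred to an 80% chance
# of 4000, despite the latter's higher expected value.
assert pt_value(3000, 1.0) > pt_value(4000, 0.8)

# Overweighting of low-probability events: w(0.01) is about 0.055 >> 0.01.
print(round(weight(0.01), 3))
```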

These choice phenomena were replicated under one “standard” setting (Hertwig & Ortmann, 2001): choice with real stakes in a space of experimental tasks wide enough to replicate all the phenomena illustrated in Table 1. The results suggest that all 14 phenomena emerge in our setting. Yet, their magnitude tends to be smaller than their magnitude in the original demonstrations.

[[Table 1 omitted here; it appears on the source page.]]

The current choice prediction competition focuses on developing models that capture all of these phenomena and also predict behavior in other choice problems. To calibrate the models, we ran an “estimation set” study that included 60 randomly selected choice problems.

The participants in each competition will be allowed to study the results of the estimation set. Their goal will be to develop a model that predicts the results of the competition set. To qualify for the competition, the model must capture all 14 choice phenomena of Table 1. The model should be implemented in a computer program that reads the parameters of the problems as input and outputs the predicted proportion of choices of Option B. Thus, we use the generalization criterion methodology (see Busemeyer & Wang, 2000).
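To make the required input/output contract concrete, here is a minimal sketch of what such a program could look like. The problem encoding (each option as a list of outcome/probability pairs) and the baseline rule (a logistic choice on the expected-value difference, which by itself would not reproduce the 14 phenomena) are illustrative assumptions, not the organizers’ specification.

```python
import math

# Illustrative encoding only: the organizers define the real input format.
# Here each option is a list of (outcome, probability) pairs.
Option = list[tuple[float, float]]

def expected_value(option: Option) -> float:
    return sum(x * p for x, p in option)

def predict_choice_B(option_a: Option, option_b: Option,
                     sensitivity: float = 0.05) -> float:
    """Predicted proportion of participants choosing Option B.

    Baseline logistic-choice rule on the expected-value difference; a
    competitive entry would replace this with a model that also
    reproduces all 14 phenomena of Table 1 on the estimation set.
    """
    diff = expected_value(option_b) - expected_value(option_a)
    return 1.0 / (1.0 + math.exp(-sensitivity * diff))

# Hypothetical problem: A = 3 for sure; B = 4 with probability 0.8, else 0.
print(predict_choice_B([(3.0, 1.0)], [(4.0, 0.8), (0.0, 0.2)]))  # ~0.50
```

A real entry would keep this interface but swap the expected-value baseline for a model that passes the Table 1 qualification tests on the estimation set.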

The deadline for registration is April 20th; the submission deadline is May 17th.


On behalf of the organizers…


The Workshop on Algorithmic Game Theory and Data Science
https://sites.google.com/site/agtanddatascienceworkshop2015
(with the Conference on Economics and Computation)
June 15, 2015 in Portland, Oregon, USA.


Computer systems have become the primary mediator of social and economic interactions, enabling transactions at ever-increasing scale.  Mechanism design done at this scale needs to be a data-driven enterprise: it seeks to optimize some objective with respect to a huge underlying population to which the mechanism designer has no direct access.  Instead, the designer typically has access to sampled behavior from that population (e.g., bid histories or purchase decisions).  This means that, on the one hand, mechanism designers will need to bring to bear data-driven methodology from statistical learning theory, econometrics, and revealed preference theory.  On the other hand, strategic settings pose new challenges in data science, and approaches for learning and inference need to be adapted to account for strategic behavior.


The goal of this workshop is to frame the agenda for research at the interface of algorithms, game theory, and data science.  Papers from a rich set of experimental, empirical, and theoretical perspectives are invited. Topics of interest include but are not limited to:


  • Can good mechanisms be learned by observing agent behavior in response to other mechanisms? How hard is it to “learn” a revenue-maximizing auction given a sampled bid history (a toy sketch of the posted-price version of this question appears after this list)? How hard is it to learn a predictive model of customer purchase decisions, or, better yet, a set of prices that will approximately maximize profit under these behavioral decisions?
  • What is the sample complexity of mechanism design?  How much data is necessary to enable good mechanism design?
  • How does mechanism design affect inference?  Are outcomes of some mechanisms more informative than those of others from the viewpoint of inference?
  • How does inference affect mechanism design?  If participants know that their data is to be used for inference, how does this knowledge affect their behavior in a mechanism?
  • Can tools from computer science and game theory be used to contribute rigorous guarantees to interactive data analysis?  Strategic interactions between a mechanism and a user base are often interactive (e.g., an ascending-price auction, or repeated interaction between a customer and an online retailer), a setting in which traditional methods for preventing data over-fitting are weak.
  • Is data a substitute for an economic model? Can data be used to evaluate or replace existing economic models?  What are the consequences for game theory and economics of replacing models with data?
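As a toy illustration of the first two bullets above, consider the simplest possible setting: learning a posted price for a single buyer from a sampled bid history. Empirical revenue maximization picks the price that earns the most on the sample; asking how many samples are needed before that price is near-optimal on the true distribution is precisely a sample-complexity question. The setting, data, and function names below are illustrative assumptions, not results from the workshop.

```python
import random

def best_empirical_price(bids: list[float]) -> float:
    """Price maximizing empirical revenue p * Pr[bid >= p] on the sample.

    It suffices to check observed bids: empirical revenue rises linearly
    between sample points, so the maximum is attained at one of them.
    """
    n = len(bids)
    return max(bids, key=lambda p: p * sum(b >= p for b in bids) / n)

random.seed(0)
sample = [random.uniform(0, 1) for _ in range(2000)]  # hypothetical bid history
print(best_empirical_price(sample))  # near 0.5, the optimum for Uniform[0,1]
```

For bounded valuations, uniform-convergence arguments bound the gap between the learned price’s empirical and true revenue as a function of the sample size, which is one concrete way the sample-complexity question gets formalized.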


Submissions are due April 27, 2015. See the workshop website for further details and submission instructions.
