Archive for January, 2012

EC = MC^3

Now seems like a good time to take a break from writing for EC, and write a bit about EC.

In the past few years (since I started submitting, at least) EC has been competitive, not necessarily in terms of the acceptance rate, but rather in terms of the quality of accepted papers. On the theory side, many people now prefer EC over SODA as an outlet for papers on algorithmic economics (or whatever you want to call it). I know several theory people who even like EC better than STOC/FOCS, although these are admittedly outliers (for the time being). Some of the top AI people in the algorithmic economics community publish some of their best work in EC. Enough said; EC rocks.

Strangely enough though, there is a large gap between the general high regard for EC within the EC community and its almost complete anonymity outside the community. I even had discussions with people who actively work on algorithmic economics but were unaware that EC is considered a great conference, or that it is not strictly about electronic commerce. In many ways, EC is the flagship conference of the algorithmic economics community, but it is often not recognized as such outside the community.

Which brings me to my favorite topic of late: renaming stuff. EC is a young conference and its fame will grow, but I do think the conference’s name plays an important role in this misperception. Although electronic commerce is still an important part of EC, the conference has evolved since 1999 and now encompasses (but is not limited to) all of the work at the intersection of economics and computation. Hmm, what would one call a conference on Economics and Computation? EC! There are actually precedents for changing the name of a conference while keeping the same acronym. For example, AAAI used to stand for the American Association for AI, and now stands for the Association for the Advancement of AI.

Rumor has it that the alternative interpretation of EC was discussed recently by the powers that be, but (obviously) was not adopted. Reportedly one concern was that “economics and computation” is not general enough. Be that as it may, the fact that economics and computation has the same acronym as electronic commerce is too cool an opportunity to pass up, and, seriously, such a change could do a great deal to correct some of the misperceptions of EC.



Guest post by Ashwinkumar Badanidiyuru.

There has already been some coverage of SODA on the blogs, for example here. The conference was well organized and well attended. I really liked the cleanliness of Kyoto and the punctuality of the trains.

On the negative side, the conference did not organize breakfast, and coffee was not available at all of the breaks. A couple of friends and I also suffered terrible jet lag. Another problem we faced while roaming around was finding vegetarian food.

There were two main sessions related to game theory and mechanism design. I will blog here about one of the sessions and my colleague Hu Fu has already blogged about the other.

  1. The Notion of a Rational Convex Program, and An Algorithm for the Arrow-Debreu Nash Bargaining Game (Vijay Vazirani): A nice blog post about this line of work has already been written here. This paper introduces the notion of a rational convex program (RCP): a convex program that, for any setting of its parameters to rational numbers under which the optimal solution is bounded, is guaranteed to have a rational optimal solution with polynomially many bits. The paper defines a new Nash bargaining game, shows that it is a logarithmic RCP (its objective function is a sum of logarithms of linear functions), and gives a combinatorial algorithm to compute an equilibrium of the game.
  2. Beyond Myopic Best Response in Cournot Competition (Svetlana Olonetsky, Amos Fiat, Elias Koutsoupias, Katrina Ligett, Yishay Mansour): This paper studies best-response strategies that are non-myopic, i.e., that take into account how the other player could choose its strategy in the next round. Consider a Cournot competition in which two firms produce oil, Firm 1 at a cost of 0.3 dollars/barrel and Firm 2 at a cost of 0.5 dollars/barrel, and the market price is 1 − (number of barrels produced). In the Nash equilibrium, Firm 1 produces 0.3 barrels and Firm 2 produces 0.1 barrels. But if Firm 1 produced 0.5 + ε barrels, then Firm 2 would drop out and Firm 1 would earn a higher profit; this hints that Nash equilibrium might not be the right solution concept here (a small numerical check of this example appears right after this list). The solution concept proposed and studied in the paper is a delegation game: in the first round it is decided whether each player is to maximize its profit or its revenue, and in the second round the game is played. The part that confuses me is that this looks like something in between a one-shot and a multi-round game.
  3. Metastability of Logit Dynamics for Coordination Games (Vincenzo Auletta, Diodato Ferraioli, Francesco Pasquale, Giuseppe Persiano): Usually in learning in games we study the stationary distribution of some noisy-response dynamic. This becomes less interesting if the mixing time is very large. This paper studies the transient phase of logit dynamics for coordination games: it studies so-called metastable distributions, which describe the behavior of the dynamics on time-scales shorter than the mixing time. As an example it gives the Ising game (similar to the Ising model), where mixing takes exponential time (under suitable parameters) yet metastable distributions exist. An interesting question would be computing a metastable distribution when the pseudo-mixing time is not polynomial.
  4. Sketching Valuation Functions (Ashwinkumar Badanidiyuru, Shahar Dobzinski, Hu Fu, Robert Kleinberg, Noam Nisan, Tim Roughgarden): We study the problem of storing an approximate version of different classes of valuation functions in polynomial space. The problem is motivated by sealed-bid combinatorial auctions in which the bidders first write down some version of their valuation and send it to the auctioneer, who then uses it to allocate the items. This research is along the lines of a paper by Goemans, Harvey, Iwata and Mirrokni, which gives an algorithm to find a {\sqrt{n}}-sketch for submodular functions using polynomially many value queries. There is also a lower bound of {n^{1/3-\epsilon}} for arbitrary communication, stemming from the work of Balcan and Harvey on learning submodular functions in the PMAC model. Some of the interesting results in this paper are:
      • A tight {\sqrt{n}}-sketch for subadditive functions using arbitrary communication, and an {\Omega(n)} lower bound when the algorithm is restricted to use only value queries.
      • A surprising “PTAS” (i.e., (1+ε)-sketch) for coverage functions.
      • A generic reduction from sketching to learning in the PMAC model proposed by Balcan, Harvey.

    A very recent arXiv paper on learning valuation functions in the PMAC model, by Balcan, Constantin, Iwata, and Wang, contains a set of closely related (and in a few cases, overlapping) results that were independently discovered at the same time.

  5. Voting with Limited Information and Many Alternatives (Jon M. Kleinberg, Flavio Chierichetti): In this work an adversary first chooses an urn from a set of urns, each containing red and blue balls. Each voter knows the set of urns, but not the urn chosen by the adversary. Each voter then draws a random ball from the chosen urn and votes for his/her guess of the chosen urn. The voters win if the chosen urn gets the maximum number of votes. The paper proves that {O(n^5 \log(1/\delta))} voters are enough for this, and, surprisingly, if each person gets to divide his vote among multiple urns then {O(n^2 \log(1/\delta))} voters are enough to win with probability 1-δ. The second result is surprising since even a single person drawing random balls from the urn would need about that many draws. The main motivation for this result is the setting where a set of jurors votes in a criminal trial, and it is related to the Condorcet Jury Theorem.
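
To make the Cournot example in item 2 concrete, here is a minimal Python sketch. The inverse demand 1 − (q1 + q2) and the costs 0.3 and 0.5 are taken from the example above; the function names and the best-response iteration are just my own illustration, not anything from the paper.

```python
# Minimal check of the Cournot example above: inverse demand p = 1 - (q1 + q2),
# costs c1 = 0.3 and c2 = 0.5 dollars/barrel.

def best_response(q_other, cost):
    # Maximize (1 - q - q_other - cost) * q over q >= 0  =>  q = (1 - q_other - cost) / 2
    return max(0.0, (1 - q_other - cost) / 2)

def profit(q_own, q_other, cost):
    price = 1 - (q_own + q_other)
    return (price - cost) * q_own

c1, c2 = 0.3, 0.5
q1 = q2 = 0.0
for _ in range(100):                          # myopic best-response dynamics converge here
    q1, q2 = best_response(q2, c1), best_response(q1, c2)

print(round(q1, 3), round(q2, 3))             # 0.3 0.1 -- the Nash equilibrium
print(round(profit(q1, q2, c1), 3))           # 0.09 -- Firm 1's equilibrium profit

eps = 1e-6                                    # non-myopic deviation: commit to 0.5 + eps barrels
print(best_response(0.5 + eps, c2))           # 0.0 -- Firm 2 drops out
print(round(profit(0.5 + eps, 0.0, c1), 3))   # 0.1 > 0.09, so Firm 1 prefers the deviation
```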

There were several other interesting papers, some of which are briefly described below.

  • Popularity Vs Maximum Cardinality in the Stable Marriage Setting (Telikepalli Kavitha). Extends the Gale-Shapley algorithm to compute a matching with small unpopularity. The algorithm is roughly a multi-round GS algorithm in which an unmatched boy re-proposes in the next round, and any boy in round i gets preference over any boy in round j < i. The interesting feature of the algorithm is that at round i all augmenting paths of length i are avoided.
  • Gathering Despite Mischief (Yoann Dieudonne, Andrzej Pelc and David Peleg). Consider a group of individuals, each moving synchronously in a network, whose goal is to meet at a single node. Some individuals may be Byzantine, making incorrect moves or sharing wrong information with the others. This paper gives upper and lower bounds (mostly not matching) on the number of individuals needed for a gathering protocol to exist. An interesting result is a tight bound of f+2 in one of the models; most bounds in this area do not seem to be of this form.
  • Private Data Release Via Learning Thresholds (Moritz Hardt, Guy Rothblum and Rocco A. Servedio). This paper gives a computationally efficient reduction from differentially private data release to learning thresholded sums of predicates.
  • Matroidal Degree-Bounded Minimum Spanning Trees (Rico Zenklusen). This paper generalizes the classic degree-bounded spanning tree problem to the case where, for every node, the edges chosen at that node must satisfy a matroid constraint. It gives an algorithm that returns a spanning tree of cost at most OPT and violates each matroid constraint by at most eight elements. It seems to use techniques similar to Lau-Singh, with non-trivial generalizations.
  • Submodular Functions are Noise Stable (Mahdi Cheraghchi, Adam Klivans, Pravesh Kothari, Homin Lee). This paper is a simple observation that submodular functions are noise stable and hence can be agnostically learned.
  • Random Walks, Electric Networks and The Transience Class Problem of Sandpiles (Ayush Choure, Sundar Vishwanathan). In the sandpile model you have a grid, and sand particles are added one by one to some square of the grid. When the number of particles in a square reaches a threshold, the square topples and spills particles to the adjacent squares. This paper studies and bounds the number of particles that must be added at one corner of a square grid before the opposite corner topples (a toy simulation appears right after this list). I am curious whether this model is also applicable to certain settings in social networks.
  • Information Dissemination Via Random Walks in D-Dimensional Space (Henry Lam, Zhenming Liu, Michael Mitzenmacher, Xiaorui Sun, Yajun Wang). This paper considers m agents placed uniformly at random in a d-dimensional (d > 2) hypercube, with one agent initially holding a piece of information, and gives nearly tight bounds on the time required for the information to reach everyone. The interesting phenomenon for d > 2 is a phase transition not seen when d = 2: the asymptotic time required follows different functions in two different ranges of m.
  • Rumor Spreading and Vertex Expansion (George Giakkoupis, Thomas Sauerwald). Gives nearly tight bounds on the round complexity of rumor spreading in terms of vertex expansion. The lower bound in terms of a parameter such as vertex expansion is attained only on a specific class of graphs; it would be interesting to find a graph parameter that tightly characterizes rumor spreading on every graph.
  • Simultaneous Approximations for Adversarial and Stochastic Online Budgeted Allocation (Shayan Oveis Gharan, Vahab Mirrokni, Morteza Zadimoghaddam). This paper analyzes an existing algorithm for online budgeted allocation and proves that it simultaneously achieves good approximation guarantees for the adversarial and the stochastic versions of the problem.
  • Improved Competitive Ratio for the Matroid Secretary Problem (Oded Lachish, Sourav Chakraborty). This paper improves the competitive ratio for the matroid secretary problem to O(√log(rank)). The surprising fact is that it does so using a variant of binning.
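
As a rough illustration of the sandpile dynamics in the Choure-Vishwanathan item above, here is a toy Python simulation. The grid size, the threshold of 4, and the corner-to-corner question are my own illustrative choices; this is only a sketch of the model, not the paper's exact setup.

```python
# Toy sandpile on an n x n grid (threshold 4; grains that fall off the boundary
# are lost). Grains are added at corner (0, 0); we count how many additions it
# takes before the opposite corner (n-1, n-1) topples for the first time.

def additions_until_far_corner_topples(n=8, threshold=4):
    grid = [[0] * n for _ in range(n)]
    additions = 0
    while True:
        additions += 1
        grid[0][0] += 1
        unstable = [(0, 0)] if grid[0][0] >= threshold else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < threshold:
                continue                      # already stabilized by an earlier topple
            if (x, y) == (n - 1, n - 1):
                return additions              # the far corner is about to topple
            grid[x][y] -= threshold           # topple: send one grain to each neighbor
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= threshold:
                        unstable.append((nx, ny))
            if grid[x][y] >= threshold:       # may still be unstable after one topple
                unstable.append((x, y))

print(additions_until_far_corner_topples(8))
```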


Guest post by Hu Fu.

SODA took place in the charming city of Kyoto. The conference was well attended and was meticulously organized, as so many things in Japan are. The hotel was located right in Higashiyama (east hill), with a few major attractions within walking distance. A walk around the area quickly brings one into a serene other world of temples and gardens.

The afternoon of the second day was packed with two game theory sessions. Here I’m going to report from the first one on mechanism design, and my colleague Ashwinkumar Badanidiyuru will cover the second session.

Berthold Vöcking started the first session with a superbly clear presentation of a universally truthful mechanism for multi-unit auctions, which gives a PTAS for social welfare maximization. This answers a question left open by Dobzinski and Dughmi, who showed a by-now well-known truthful-in-expectation FPTAS for the problem and a separation between universal truthfulness and truthfulness-in-expectation in approximation power for a restricted version of multi-unit auctions. Vöcking’s work shows that there is essentially no such separation if one considers general multi-unit auctions. A starting point of the work is the observation from smoothed analysis that the computational problem of social welfare maximization is solvable in expected polynomial time if the valuations are perturbed by some random noise. However, the magnitude of the noise needs to depend on the maximum value, while truthfulness disallows “naive” use of such private data. Two ideas combined to solve the problem. One is a subjective variant of VCG mechanisms, in which different affine maximizers are used for different bidders; the other is the use of consensus estimate functions, which was first introduced in mechanism design by Goldberg and Hartline.
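
Since consensus estimates come up again in the next talk, here is a minimal sketch of the idea behind consensus estimate functions, in the spirit of the Goldberg-Hartline construction. The parameter names and the specific rounding rule below are illustrative; they are not taken from Vöcking's paper.

```python
import math
import random

# Sketch of a consensus-estimate-style rounding (in the spirit of Goldberg-Hartline).
# With a single shared random offset u, every value v is rounded down to the largest
# number of the form b**(i + u) that is at most v. If all inputs lie within a factor
# c <= b of each other, they all map to the same estimate with probability >= 1 - log_b(c).

def make_consensus_estimator(b=10.0, u=None):
    if u is None:
        u = random.random()          # one shared random offset, drawn once
    def estimate(v):
        return b ** (math.floor(math.log(v, b) - u) + u)
    return estimate

est = make_consensus_estimator(b=10.0)
values = [3.2, 3.9, 4.4]             # values within a small constant factor of each other
print([est(v) for v in values])      # usually all three estimates coincide
```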

Another talk in the same session, by Bach Ha, also involved consensus estimates. This work with Jason Hartline gives a 30.5-approximation for revenue maximization in prior-free downward-closed environments. Following the approach of Hartline and Yan (EC 11), the benchmark is the optimal envy-free revenue. When applying consensus estimates, this work and the previous one both face feasibility problems when there is no consensus; Vöcking solved the problem by designing new consensus functions, while Ha and Hartline introduced a cross-checking process.

The second talk in the session was by Balu Sivan, on his work with Shuchi Chawla and Jason Hartline on optimal crowdsourcing contests. As pointed out in previous work, crowdsourcing contests are quite similar to all-pay auctions, except that the auctioneer benefits only from the highest payment. Sivan first showed that the optimal static contest awards the entire bonus to the winner. For contests that are allowed to set concrete rules after seeing the bids, the optimal contest is shown to be a “virtual valuation” maximizer.
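
For readers who have not seen the term, “virtual valuation” here refers to Myerson's notion φ(v) = v − (1 − F(v))/f(v). Below is a tiny generic sketch; the uniform distribution is just an example and is not the setting of the Chawla-Hartline-Sivan paper.

```python
# Generic sketch of Myerson's virtual valuation phi(v) = v - (1 - F(v)) / f(v).
# The uniform-[0, 1] example is purely illustrative.

def virtual_value(v, cdf, pdf):
    return v - (1 - cdf(v)) / pdf(v)

uniform_cdf = lambda v: v      # F(v) = v on [0, 1]
uniform_pdf = lambda v: 1.0    # f(v) = 1 on [0, 1]

print(virtual_value(0.8, uniform_cdf, uniform_pdf))   # 2*0.8 - 1 = 0.6
# A "virtual valuation maximizer" awards to the bidder with the highest non-negative
# virtual value; phi(v) = 0 at v = 0.5, the familiar reserve price for uniform values.
print(virtual_value(0.5, uniform_cdf, uniform_pdf))   # 0.0
```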

Our Cornell colleague Vasilis Syrgkanis talked about his work with Renato Paes Leme and Eva Tardos on sequential auctions. In combinatorial auctions, much work has been done on running simultaneous single-item auctions; this work shows that if those auctions are instead run sequentially, and the bidders foresee what is going to happen later, then the auctions behave quite differently and may have very good properties. Two examples from the talk: a pure subgame perfect Nash equilibrium always exists for sequential first-price auctions, and in a matroid setting such an auction implements exactly the VCG outcome at equilibrium.

Chaitanya Swamy’s talk concluded the session on mechanism design. In his work with Konstantinos Georgiou, Swamy gives black-box reductions that essentially reduce strategyproof cost-sharing mechanism design problems to the algorithmic problem of social welfare minimization, with a logarithmic loss in the approximation ratio. The reduction comprises two reductions, using as a bridge the problem of truthful mechanism design with a slight technical constraint. As is true for black-box reductions in general, this reduction finds applications in a variety of problems.

Besides the main game theory sessions, there were other talks on topics with economic applications, two of which took place towards the very end of the conference, in an online algorithms session. Vahab Mirrokni talked about his work on online matching with Shayan Oveis Gharan and Morteza Zadimoghaddam. They aim at algorithms that perform well in both the adversarial model and the random-arrival model. For unweighted matching, they showed that the existing algorithm Balance achieves optimal competitive ratios in both models; for weighted matching this is not possible, though the algorithm in the classic Mehta-Saberi-Vazirani-Vazirani paper was shown to be 0.76-competitive in the random-arrival model. In the last talk of the conference, Sourav Chakraborty presented his result with Oded Lachish, the first improvement in the competitive ratio for the general matroid secretary problem since the problem was posed in 2007 by Babaioff, Immorlica and Kleinberg. Chakraborty and Lachish improved the ratio from O(log ρ) to O(√log ρ), where ρ is the rank of the matroid. The appealing question of whether there is an O(1)-competitive algorithm is left open.


Report from ITCS 2012

I was at MIT this past week for the 2012 Innovations in Theoretical Computer Science conference (ITCS). The conference, formerly known as ICS, was founded to “promote research that carries a strong conceptual message,” partly reflecting the perception among some people in our community that STOC and FOCS are not sufficiently receptive to this kind of research. Indeed, the fraction of papers introducing new models was noticeably higher than is typical of other theory conferences, but there were just as many papers (including some of the most interesting ones) that improved the state of the art on widely-studied theory topics such as homomorphic encryption and coding theory. Taken together, I thought that the program constituted a balanced and high-quality collection of papers. (Full disclosure: I was on the ITCS12 program committee, so this endorsement may be a bit biased.) To me, the median ITCS paper was roughly as appealing as the median paper at STOC/FOCS, though as one might expect, the best papers at ITCS didn’t reach the level of the very best papers from STOC and FOCS.

1. General thoughts on the conference

One of the most appealing aspects of ITCS this year, for me, was the small size: roughly 100 people attended. Michael Mitzenmacher, noting the low attendance on his blog, asserted, “I think it’s a big question of where ITCS goes from here, and I think it’s a question the larger community (not just the people attending this year) need to be involved in. The quality of the work reinforces in my mind that the papers should go somewhere, preferably in my mind to a larger FOCS, STOC, or SODA; or, possibly, to a larger and better attended ITCS.”

As a counterpoint to this statement, I want to write about some of the benefits afforded by ITCS’s unique status as a small, broad, and top-notch theory conference. The talks are better attended and of higher quality, on average. I think this is due to two effects: the speakers know that their talks have to be geared to a broader audience, and the people in the audience are less likely to be tempted into skipping the talk to have a hallway conversation with a colleague. (Needless to say, this last observation cuts both ways. The hallway conversations are usually one of the best aspects of going to a conference!) With 90% of the conference participants in the audience for most of the talks, the audience begins to feel more familiar. For example, you get to know the personalities of the most frequent questioners, and you get the kind of friendly give-and-take between speakers and audience that you see often at workshops but rarely in large venues.

The combination of small size and wide breadth also influences one’s experience of the meals and other chit-chat opportunities. At a larger conference, I always find myself spending some of the meals (maybe more than I should) interacting with my closest colleagues, the same ones I see at other small conferences and workshops year-round. At ITCS, I had lunch with complexity theorists and cryptographers and dinner with people who work on data privacy, social networks, and combinatorial optimization. I see these same people at larger conferences, but our interactions generally remain more superficial. The opportunity to have protracted conversations with people from so many subfields of TCS reinforced my feeling that the conference is contributing to cohesion in the TCS community.

Over dinner, I asked a few of the people at my table if they felt there were significant differences between ITCS and STOC/FOCS. One of the frequent responses, that I happen to disagree with, is that STOC/FOCS talks place too much emphasis on overwhelming the audience with the technical difficulty of the work. Actually, at STOC/FOCS talks I attended in recent years, I’ve been impressed with the speakers’ ability to communicate the new ideas in their work, often despite its technical difficulty. To the extent that there were talks that fell short of this standard, I attributed it to the inevitable variations in quality among the talks at any conference, and not to an institutionalized preference at those conferences for talks that get lost in a fog of technical details.

On the other hand, people pointed out two differences between ITCS and STOC/FOCS that resonated with me. First, there was a greater share of papers leaving open-ended questions; few of them, if any, seemed to present the “final word” on a subject. In fact, the openness to such papers is spelled out in the ITCS call for papers. Second (and perhaps this is just a restatement of the first point) there is a perception that the STOC/FOCS review process is more geared to finding fault with papers (yielding a bias toward bulletproof papers for which the PC could find no good reasons for rejection) while the ITCS review process is more geared to identifying the appealing aspects of a submission, yielding papers that were judged to have at least one very good reason for acceptance. Having recently served on the program committees of all these conferences, I believe this perception is accurate.

2. AGT/E papers at ITCS

Two sessions at the conference were devoted to algorithmic game theory/economics; no other TCS topic had so many papers at ITCS. (For the record, the other sessions were learning theory, coding theory, quantum computing, cryptography, approximation algorithms, complexity theory, and three miscellaneous sessions.) Here is a brief summary of the AGT/E papers that were presented.

  • Mechanism Design with Approximate Valuations (Alessandro Chiesa, Silvio Micali, and Zeyuan Allen Zhu): Looks at welfare maximization in single-item auctions whose bidders have uncertainty about their own valuation: they can pin it down to a (possibly narrow) interval but not specify it exactly. The paper shows that dominant-strategy mechanisms cannot significantly improve upon the approximation ratio of the trivial mechanism that awards the item to a random bidder, whereas one can achieve a very good approximation factor (tending to 1 as the bidders’ uncertainty tends to zero) using implementation in undominated strategies. In fact, the second-price auction achieves this, and is optimal among deterministic auctions. Interestingly, there is a randomized auction that does somewhat better, in terms of worst-case approximation in undominated strategies.
  • Quantum Strategic Game Theory (Shengyu Zhang): The paper investigates some very fundamental questions about games in which randomization over strategies is accomplished by quantum superposition rather than classical randomization. What does equilibrium even mean in this context? The paper proposes the following definition: for each player there is a Hilbert space whose dimensionality equals the size of the player’s strategy set. A referee proposes a quantum state in the tensor product of these Hilbert spaces. Each player is allowed to apply an operator (completely positive, trace-preserving) to their tensor factor, and the referee then performs a measurement to extract a classical joint strategy profile. Players then receive payoffs according to the payoff matrix of the game. A state is a correlated equilibrium if no player can improve its payoff by applying an operator other than the identity. A Nash equilibrium is a correlated equilibrium that is also a product state. The paper then shows that there exist games having classical correlated equilibria whose quantum counterpart is very far from being an equilibrium (players can improve their payoff by a factor of at least {n^{0.585}}), and there exist games having correlated equilibria whose correlation is much easier to generate in the quantum setting (requiring only one entangled pair of qubits) than in the classical setting (requiring an unbounded number of correlated random bits).
  • The Curse of Simultaneity (Renato Paes Leme, Vasilis Syrgkanis, and Eva Tardos): The paper looks at games for which it is meaningful to investigate either a simultaneous-move or a sequential-move version of the game. How does the price of anarchy (PoA) of the simultaneous-move game (with respect to Nash equilibria) compare to that of the sequential-move game (with respect to subgame perfect equilibria)? The paper gives several examples of games that are known to have a terrible PoA in the simultaneous-move setting, but the PoA improves dramatically for the sequential-move version. Moreover, these are not contrived examples at all; they are games that have been widely studied in the PoA literature (machine cost-sharing, unrelated machine scheduling, consensus and cut games). In presenting the paper, Renato stressed how the authors’ thought process was guided by negative examples: when studying games with very contrived equilibria leading to meaningless negative results, it’s necessary to reconsider the underlying assumptions to extract more meaningful guarantees on system performance.
  • No Justified Complaints: On Fair Sharing of Multiple Resources (Danny Dolev, Dror G. Feitelson, Joseph Y. Halpern, Raz Kupferman, and Nathan Linial): What is the fair way to partition a divisible resource among claimants with varying levels of entitlement to the resource? This question is probably almost as old as civilization itself (how should a parent’s property be divided up among the descendants?) and there are many axiomatic approaches to defining fairness, but very few of them are applicable in multi-dimensional contexts such as cloud computing, when there are multiple capacity constraints on the resource (CPU, disk, bandwidth, memory) and different claimants require the resources in different proportions. The paper proposes a very natural fairness definition in this setting, called “bottleneck based fairness” (BBF), and proves that an allocation satisfying BBF always exists. Interestingly, the proof of existence uses differential equations and is not fully constructive, though Joe Halpern in his presentation noted that Avital Gutman and Noam Nisan, in subsequent work (to appear in AAMAS 2012), gave a reduction from finding BBF allocations to computing a Fisher market equilibrium, thus yielding a more constructive existence proof accompanied by a polynomial time algorithm. (Thanks, Ariel, for supplying the Gutman-Nisan reference!)
  • Approximately Optimal Mechanism Design via Differential Privacy (Kobbi Nissim, Rann Smorodinsky, and Moshe Tennenholtz): The paper looks at implementation of social objectives in a very general mechanism design setting. The setting doesn’t allow for monetary transfers, but it incorporates an interesting notion of reactions, meaning that after the mechanism designer chooses an outcome, the players perform reactions in response to this outcome and their payoff depends on both the outcome and the chosen reaction. (For example, in facility location an outcome may be a set of facilities, and a reaction may be a customer choosing which facility to connect to.) They show that if the social objective is insensitive (in the sense of differential privacy) then there is a general paradigm for approximately optimizing the objective in ex-post Nash equilibrium via a randomized mechanism. The mechanism combines the exponential mechanism of McSherry-Talwar (a small sketch of which appears after this list) with a “blatantly monotone mechanism” (to borrow a term from Hartline and Lucier) that is executed with small probability and corrects for the fact that the McSherry-Talwar mechanism is only epsilon-truthful. One thing I particularly liked about the paper is that their results are general enough to apply in settings with interdependent values. There’s been far too little research on interdependent-value domains in algorithmic mechanism design, and maybe this paper will encourage further research along those lines.
  • Fairness Through Awareness (Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel): The paper takes a TCS approach to combating the evils of discrimination. It’s hard to legislate against all forms of discrimination in classification, since some of them can be quite subtle, e.g. rather than classifying individuals based on race one could exploit the fact that seemingly innocuous attributes (such as ZIP code) are often strongly correlated with race. The paper advocates a two-pronged attack, consisting of first defining a task-specific similarity metric reflecting the notion of similarity that is most relevant to the classification task at hand, then requiring classification rules to be implemented by Lipschitz mappings with respect to this metric. The paper doesn’t specify how the first step should be done, but it gives some recipes for doing the second step as well as investigating conditions under which this fairness definition implies “statistical parity” (meaning that the classifier assigns nearly the same distribution of labels to protected groups of individuals as to the general population). Moritz, in his talk, adeptly confronted the main critique of this approach, which is that it reduces the hard problem of designing non-discriminatory classifiers to the (seemingly) equally hard problem of designing the “right” similarity metric. The difference, he observed, is that classification rules are typically proprietary information and therefore not open to public discussion; the authors envision that similarity metrics would be developed by a process of standard-setting and revision conducted as a public discourse.
  • Prisoner’s Dilemma on Graphs with Large Girth (Vahideh H. Manshadi and Amin Saberi): For years, game theorists have been interested in the problem of how cooperation evolves in communities of individuals in settings where cooperation doesn’t benefit the individual unless its neighbor is also cooperating. A prototypical example would be a graph whose nodes play the repeated prisoner’s dilemma. This paper looks at the question of how the graph structure influences the probability of cooperation spreading throughout the network. They start with a random imitation dynamic (the voter model) in which players adjust their strategies over time by imitating a random neighbor, irrespective of payoffs. This dynamic induces a Markov chain on the set of strategy profiles that eventually gets sucked into one of two absorbing states: everyone cooperating, or everyone defecting. If one initializes the graph with two adjacent players cooperating and the rest defecting, the probability that eventually everyone cooperates is exactly 2/n. What if one perturbs the dynamic by giving players a slight bias toward imitating successful neighbors? The paper shows that in graphs of girth at least 7, the net effect is to increase the probability that cooperation spreads throughout the graph. (As long as the perturbation is small enough.) On the other hand, the effect is reversed in certain low-girth graphs such as cliques and complete bipartite graphs.

  • Crowdsourced Bayesian Auctions (Pablo Azar, Jing Chen, and Silvio Micali): How should one design mechanisms for settings in which players have Bayesian priors about the distribution of valuations, but the mechanism designer does not? Is there a way to elicit the players’ distributional information without destroying the mechanism’s incentive properties? The paper shows that the answer is yes. First it presents a quite general mechanism for downward-closed environments. The mechanism selects one player at random (the “advisor”), asks for the distribution of other players’ types, and runs the optimal mechanism on the remaining players given this distribution, allocating nothing to the advisor but rewarding her for the distributional information by using a proper scoring rule. Second, the paper presents an improved mechanism for single-item auctions that is like Ronen’s lookahead auction: it raises the price until all but one of the agents have dropped out, then it uses their knowledge of the remaining agent’s bid distribution to set the optimal reserve for the remaining agent.
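
The Nissim-Smorodinsky-Tennenholtz paper above builds on the exponential mechanism of McSherry and Talwar. Here is a minimal self-contained sketch of that mechanism; the outcomes, quality scores, and parameters below are made up for illustration and have nothing to do with the paper's actual setting.

```python
import math
import random

# Sketch of the McSherry-Talwar exponential mechanism: each outcome is chosen with
# probability proportional to exp(epsilon * quality / (2 * sensitivity)), which gives
# epsilon-differential privacy when the quality score has the stated sensitivity in
# any single player's report. Scores and parameters here are purely illustrative.

def exponential_mechanism(outcomes, quality, epsilon, sensitivity=1.0):
    weights = [math.exp(epsilon * quality[o] / (2 * sensitivity)) for o in outcomes]
    total = sum(weights)
    r = random.random() * total
    for o, w in zip(outcomes, weights):
        r -= w
        if r <= 0:
            return o
    return outcomes[-1]

# Illustrative use: three candidate outcomes with made-up social-objective scores.
outcomes = ["A", "B", "C"]
quality = {"A": 10.0, "B": 7.0, "C": 1.0}
print(exponential_mechanism(outcomes, quality, epsilon=0.5))
```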

3. Innovations in the conference format

There were two notable innovations in the conference format this year, and I think both of them enhanced the experience. First of all, each session began with a “chair rant” in which the session chair presented a ten-minute talk introducing the papers and giving some insight into why the PC liked them. At their best — as when Madhu gave a five-minute synopsis of the history of coding theory in TCS — these functioned as a powerful advertisement for the entire subfield, in addition to priming people to pay close attention to the upcoming talks.

A second innovation in the format was the “graduating bits” session at the end of the second day. Students and postdocs who are on the job market this year were invited to give five-minute presentations about their work. The community’s response to the call for presentations was overwhelming: more than thirty people gave talks, greatly in excess of the organizers’ expectations. Beyond the obvious value for people who are thinking of recruiting theoreticians this year (see Claire’s blog post here), a side benefit is that the conference was flooded with grad students and postdocs, who were there to present their five-minute pitch but stayed for the entire conference, thereby enhancing everyone else’s experience. However, there were simply too many talks and not enough breaks. (The session lasted three hours, and there were no breaks.) If graduating bits is to become a staple of the ITCS conference, future organizers will have to think about limiting the number of presentations, splitting the talks into multiple sessions, or using posters instead of talks.

It was instructive to watch all of these mini-job-talks and reflect on what made some of them more effective than others. Here are some rules of thumb that I took away from the experience.

  • Organize the talk around one theme, even at the expense of leaving out some of your work. Any job talk, regardless of length, should have a consistent theme that encompasses the work being presented and communicates the presenter’s vision and future goals. This rules out talks without a theme, and it also rules out talks whose theme is so broad that it becomes almost vacuous. (“My research focuses on modifying classical optimization problems to reflect real-world constraints.”) The reality is that many of us, by the time we’re on the job market, have worked on enough different topics that it’s hard to delineate a single theme that encompasses them all. In a one-hour talk there are options for coping with this difficulty: you can spend two minutes listing other things you’ve worked on apart from the main theme, or you can present a theme that sounds excessively broad at first but then convince the audience that it translates into a compelling research vision. In a five-minute talk there’s not enough time for either of these strategies.
  • Too much humor can spoil the talk. I enjoy it when speakers use a joke to lighten the mood of their talk. It’s especially welcome during a three-hour session with uninterrupted five-minute talks. But some speakers tried to insert a punch line into nearly every slide; at that point, the humor gets in the way of the main message and detracts from the presentation.
  • Don’t just tell me what you solved, tell me what you’d like to solve. I found that many of the most effective talks focused on an open question (or set of related questions) that inspired the speaker’s work. It vividly conveys the sort of work you envision doing in the future. A good job talk should be both impressive and interesting; the audience should not only respect your accomplishment but should feel that they learned something memorable that sparked their curiosity. A great open question can do a superb job accomplishing the latter objective.


The fourth World Congress of the Game Theory Society will be held in July 2012 at Istanbul Bilgi University in Istanbul, Turkey. The paper submission deadline is February 1st.


And the winner is…

Thanks a lot for the slew of ideas and suggestions for the new blog name. After carefully considering several great options, we are happy to announce that we have a winner: “Turing’s Invisible Hand”. An honorable mention goes to “The Best Response”, which came in a close second. Both suggestions were contributed anonymously; I call on the contributors to de-anonymize themselves and take their place in history alongside Shakespeare and Dante.

By the way, originally I had in mind letting readers vote over the suggested names, and was fantasizing about devious voting rules and impregnable false-name-proof schemes. Ultimately though we decided that semi-dictatorship is the way to go. Nevertheless, I found it both amusing and helpful that people took matters into their own hands by “+1”-ing ideas. The moral: democracy is unstoppable, dictators beware!


When Noam invited me to join the newly-formed bloggin’ committee I happily accepted. However, still giddy with the 2011 spirit of revolution, I had one condition: that we change the name of the blog. “Algorithmic Game Theory/Economics” is certainly an informative name, but it is not, well, catchy. I would also like to decouple the name of the blog from the name of the field: while the name of the field is serious business (though it is still pretty much up in the air), the name of the blog can be playful.

Inspired by “Mad Men”, I decided to exercise my dormant copywriting skills and came up with the name “Computer/Games” (cf. Face/Off). Some of my co-bloggers were not enthusiastic about this name, though, in particular because they felt that economics more broadly should be represented.

So, in line with the collaborative Zeitgeist, I would like to hold a crowdsourcing competition for the new blog name (via comments to this post). The winner will receive, eh, co-authorship on an EC’12 paper*, as well as eternal recognition in the blog description. In any case, the winner will be decided based on the whims of our all-powerful blogging oligarchy; we also reserve the right to keep the old name.

Happy new year!

* Subject to the impartial acceptance of the winner’s own EC’12 submission.

