I was at MIT this past week for the 2012 Innovations in Theoretical Computer Science conference (ITCS). The conference, formerly known as ICS, was founded to “promote research that carries a strong conceptual message,” partly reflecting the perception among some people in our community that STOC and FOCS are not sufficiently receptive to this kind of research. Indeed, the fraction of papers introducing new models was noticeably higher than is typical of other theory conferences, but there were just as many papers (including some of the most interesting ones) that improved the state of the art on widely studied theory topics such as homomorphic encryption and coding theory. Taken together, I thought that the program constituted a balanced and high-quality collection of papers. (Full disclosure: I was on the ITCS12 program committee, so this endorsement may be a bit biased.) To me, the median ITCS paper was roughly as appealing as the median paper at STOC/FOCS, though, as one might expect, the best papers at ITCS didn’t reach the level of the very best papers from STOC and FOCS.
1. General thoughts on the conference
One of the most appealing aspects of ITCS this year, for me, was the small size: roughly 100 people attended. Michael Mitzenmacher, noting the low attendance on his blog, asserted, “I think it’s a big question of where ITCS goes from here, and I think it’s a question the larger community (not just the people attending this year) need to be involved in. The quality of the work reinforces in my mind that the papers should go somewhere, preferably in my mind to a larger FOCS, STOC, or SODA; or, possibly, to a larger and better attended ITCS.”
As a counterpoint to this statement, I want to write about some of the benefits afforded by ITCS’s unique status as a small, broad, and top-notch theory conference. The talks are better attended and of higher quality, on average. I think this is due to two effects: the speakers know that their talks have to be geared to a broader audience, and the people in the audience are less likely to be tempted into skipping the talk to have a hallway conversation with a colleague. (Needless to say, this last observation cuts both ways. The hallway conversations are usually one of the best aspects of going to a conference!) With 90% of the conference participants in the audience for most of the talks, the audience begins to feel more familiar. For example, you get to know the personalities of the most frequent questioners, and you get the kind of friendly give-and-take between speakers and audience that you see often at workshops but rarely in large venues.
The combination of small size and wide breadth also influences one’s experience of the meals and other chit-chat opportunities. At a larger conference, I always find myself spending some of the meals (maybe more than I should) interacting with my closest colleagues, the same ones I see at other small conferences and workshops year-round. At ITCS, I had lunch with complexity theorists and cryptographers and dinner with people who work on data privacy, social networks, and combinatorial optimization. I see these same people at larger conferences, but our interactions generally remain more superficial. The opportunity to have protracted conversations with people from so many subfields of TCS reinforced my feeling that the conference is contributing to cohesion in the TCS community.
Over dinner, I asked a few of the people at my table if they felt there were significant differences between ITCS and STOC/FOCS. One frequent response, which I happen to disagree with, is that STOC/FOCS talks place too much emphasis on overwhelming the audience with the technical difficulty of the work. Actually, at the STOC/FOCS talks I’ve attended in recent years, I’ve been impressed with the speakers’ ability to communicate the new ideas in their work, often despite its technical difficulty. To the extent that there were talks that fell short of this standard, I attributed it to the inevitable variation in quality among the talks at any conference, not to an institutionalized preference at those conferences for talks that get lost in a fog of technical details.
On the other hand, people pointed out two differences between ITCS and STOC/FOCS that resonated with me. First, there was a greater share of papers that left questions open; few of them, if any, seemed to present the “final word” on a subject. In fact, the openness to such papers is spelled out in the ITCS call for papers. Second (and perhaps this is just a restatement of the first point), there is a perception that the STOC/FOCS review process is more geared toward finding fault with papers (yielding a bias toward bulletproof papers for which the PC could find no good reasons for rejection), while the ITCS review process is more geared toward identifying the appealing aspects of a submission, yielding papers that were judged to have at least one very good reason for acceptance. Having recently served on the program committees of all of these conferences, I believe this perception is accurate.
2. AGT/E papers at ITCS
Two sessions at the conference were devoted to algorithmic game theory/economics; no other TCS topic had so many papers at ITCS. (For the record, the other sessions were learning theory, coding theory, quantum computing, cryptography, approximation algorithms, complexity theory, and three miscellaneous sessions.) Here is a brief summary of the AGT/E papers that were presented.
- Mechanism Design with Approximate Valuations (Alessandro Chiesa, Silvio Micali, and Zeyuan Allen Zhu): Looks at welfare maximization in single-item auctions whose bidders have uncertainty about their own valuation: they can pin it down to a (possibly narrow) interval but not specify it exactly. The paper shows that dominant-strategy mechanisms cannot significantly improve upon the approximation ratio of the trivial mechanism that awards the item to a random bidder, whereas one can achieve a very good approximation factor (tending to 1 as the bidders’ uncertainty tends to zero) using implementation in undominated strategies. In fact, the second-price auction achieves this, and is optimal among deterministic auctions. Interestingly, there is a randomized auction that does somewhat better, in terms of worst-case approximation in undominated strategies.
- Quantum Strategic Game Theory (Shengyu Zhang): The paper investigates some very fundamental questions about games in which randomization over strategies is accomplished by quantum superposition rather than classical randomization. What does equilibrium even mean in this context? The paper proposes the following definition: for each player there is a Hilbert space whose dimension equals the size of the player’s strategy set. A referee proposes a quantum state in the tensor product of these Hilbert spaces. Each player is allowed to apply an operator (completely positive, trace-preserving) to their tensor factor, and the referee then performs a measurement to extract a classical joint strategy profile. Players then receive payoffs according to the payoff matrix of the game. A state is a correlated equilibrium if no player can improve its payoff by applying an operator other than the identity, and a Nash equilibrium is a correlated equilibrium that is also a product state. (I write out this equilibrium condition a bit more formally in a sketch after this list.) The paper then shows that there exist games having classical correlated equilibria whose quantum counterpart is very far from being an equilibrium (players can improve their payoff by a large multiplicative factor), and there exist games having correlated equilibria whose correlation is much easier to generate in the quantum setting (requiring only one entangled pair of qubits) than in the classical setting (requiring an unbounded number of correlated random bits).
- The Curse of Simultaneity (Renato Paes Leme, Vasilis Syrgkanis, and Eva Tardos): The paper looks at games for which it is meaningful to investigate either a simultaneous-move or a sequential-move version of the game. How does the price of anarchy (PoA) of the simultaneous-move game (with respect to Nash equilibria) compare to that of the sequential-move game (with respect to subgame perfect equilibria)? The paper gives several examples of games that are known to have a terrible PoA in the simultaneous-move setting but whose PoA improves dramatically in the sequential-move version. Moreover, these are not contrived examples at all; they are games that have been widely studied in the PoA literature (machine cost-sharing, unrelated machine scheduling, consensus and cut games). In presenting the paper, Renato stressed how the authors’ thought process was guided by negative examples: when the worst equilibria of a game are so contrived that they lead to meaningless negative results, it’s worth reconsidering the underlying assumptions in order to extract more meaningful guarantees on system performance.
- No Justified Complaints: On Fair Sharing of Multiple Resources (Danny Dolev, Dror G. Feitelson, Joseph Y. Halpern, Raz Kupferman, and Nathan Linial): What is the fair way to partition a divisible resource among claimants with varying levels of entitlement to the resource? This question is probably almost as old as civilization itself (how should a parent’s property be divided up among the descendants?), and there are many axiomatic approaches to defining fairness, but very few of them are applicable in multi-dimensional contexts such as cloud computing, where there are multiple capacity constraints on the resource (CPU, disk, bandwidth, memory) and different claimants require the resources in different proportions. The paper proposes a very natural fairness definition in this setting, called “bottleneck based fairness” (BBF), and it shows that an allocation satisfying BBF always exists. Interestingly, their existence proof uses differential equations and is not fully constructive, though Joe Halpern in his presentation noted that Avital Gutman and Noam Nisan, in subsequent work (to appear in AAMAS 2012), gave a reduction from finding BBF allocations to computing a Fisher market equilibrium, thus yielding a more constructive existence proof accompanied by a polynomial-time algorithm. (Thanks, Ariel, for supplying the Gutman-Nisan reference!)
- Approximately Optimal Mechanism Design via Differential Privacy (Kobbi Nissim, Rann Smorodinsky, and Moshe Tennenholtz): The paper looks at implementation of social objectives in a very general mechanism design setting. The setting doesn’t allow for monetary transfers, but it incorporates an interesting notion of reactions, meaning that after the mechanism designer chooses an outcome, the players perform reactions in response to this outcome and their payoffs depend on both the outcome and the chosen reaction. (For example, in facility location an outcome may be a set of facilities, and a reaction may be a customer choosing which facility to connect to.) They show that if the social objective is insensitive (in the sense of differential privacy), then there is a general paradigm for approximately optimizing the objective in ex-post Nash equilibrium via a randomized mechanism. The mechanism combines the exponential mechanism of McSherry-Talwar (sketched after this list) with a “blatantly monotone mechanism” (to borrow a term from Hartline and Lucier) that is executed with small probability and corrects for the fact that the McSherry-Talwar mechanism is only epsilon-truthful. One thing I particularly liked about the paper is that the results are general enough to apply in settings with interdependent values. There’s been far too little research on interdependent-value domains in algorithmic mechanism design, and maybe this paper will encourage further research along those lines.
- Fairness Through Awareness (Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel): The paper takes a TCS approach to combating the evils of discrimination. It’s hard to legislate against all forms of discrimination in classification, since some of them can be quite subtle: rather than classifying individuals based on race, for example, one could exploit the fact that seemingly innocuous attributes (such as ZIP code) are often strongly correlated with race. The paper advocates a two-pronged attack: first define a task-specific similarity metric reflecting the notion of similarity that is most relevant to the classification task at hand, then require classification rules to be implemented by Lipschitz mappings with respect to this metric (the condition is written out in a sketch after this list). The paper doesn’t specify how the first step should be done, but it gives some recipes for carrying out the second step and investigates conditions under which this fairness definition implies “statistical parity” (meaning that the classifier assigns nearly the same distribution of labels to protected groups of individuals as to the general population). Moritz, in his talk, adeptly confronted the main critique of this approach, which is that it reduces the hard problem of designing non-discriminatory classifiers to the (seemingly) equally hard problem of designing the “right” similarity metric. The difference, he observed, is that classification rules are typically proprietary information and therefore not open to public discussion, whereas the authors envision that similarity metrics would be developed through a process of standard-setting and revision conducted as a public discourse.
- Prisoner’s Dilemma on Graphs with Large Girth (Vahideh H. Manshadi and Amin Saberi): For years, game theorists have been interested in how cooperation evolves in communities of individuals when cooperation doesn’t benefit an individual unless its neighbor is also cooperating. A prototypical example is a graph whose nodes play the repeated prisoner’s dilemma. This paper looks at how the graph structure influences the probability of cooperation spreading throughout the network. The authors start with a random imitation dynamic (the voter model) in which players adjust their strategies over time by imitating a random neighbor, irrespective of payoffs; a toy simulation of this baseline dynamic appears after this list. This dynamic induces a Markov chain on the set of strategy profiles that eventually gets sucked into one of two absorbing states: everyone cooperating, or everyone defecting. If one initializes the graph with two adjacent players cooperating and the rest defecting, the probability that eventually everyone cooperates is exactly 2/n. What if one perturbs the dynamic by giving players a slight bias toward imitating successful neighbors? The paper shows that in graphs of girth at least 7, the net effect is to increase the probability that cooperation spreads throughout the graph, as long as the perturbation is small enough. On the other hand, the effect is reversed in certain low-girth graphs such as cliques and complete bipartite graphs.
- Crowdsourced Bayesian Auctions (Pablo Azar, Jing Chen, and Silvio Micali): How should one design mechanisms for settings in which players have Bayesian priors about the distribution of valuations, but the mechanism designer does not? Is there a way to elicit the players’ distributional information without destroying the mechanism’s incentive properties? The paper shows that the answer is yes. First it presents a quite general mechanism for downward-closed environments. The mechanism selects one player at random (the “advisor”), asks her for the distribution of the other players’ types, and runs the optimal mechanism on the remaining players given this distribution, allocating nothing to the advisor but rewarding her for the distributional information using a proper scoring rule (a toy illustration of this ingredient appears after this list). Second, the paper presents an improved mechanism for single-item auctions in the spirit of Ronen’s lookahead auction: it raises the price until all but one of the agents have dropped out, then uses the dropped-out agents’ knowledge of the remaining agent’s bid distribution to set the optimal reserve price for that agent.
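Since several of these papers revolve around definitions or ingredients that are easy to write down, here are a few sketches; all of the notation and code below is my own paraphrase or toy illustration, not taken from the papers. First, the equilibrium notion in Zhang’s paper, as I described it above. I’m assuming the referee measures in the computational basis indexed by classical strategy profiles, which is how I understood the talk:

```latex
% Players 1..n, strategy sets S_1,...,S_n, payoff functions u_i.
% The referee proposes a state \rho on H_1 \otimes \cdots \otimes H_n with \dim H_i = |S_i|.
% If the players apply CPTP maps \Phi_1,\dots,\Phi_n and the referee measures in the
% basis \{ |s\rangle : s \in S_1 \times \cdots \times S_n \}, player i's expected payoff is
\[
  u_i(\Phi;\rho) \;=\; \sum_{s \in S_1 \times \cdots \times S_n}
    \langle s |\, \Phi(\rho) \,| s \rangle \; u_i(s),
  \qquad \Phi = \Phi_1 \otimes \cdots \otimes \Phi_n .
\]
% \rho is a quantum correlated equilibrium if no unilateral deviation helps, i.e., for
% every player i and every CPTP map \Phi_i acting on H_i (with the identity elsewhere),
\[
  u_i\big(\Phi_i \otimes \mathrm{id}_{-i};\, \rho\big) \;\le\; u_i\big(\mathrm{id};\, \rho\big).
\]
% A quantum Nash equilibrium is a correlated equilibrium that is also a product state
% \rho = \rho_1 \otimes \cdots \otimes \rho_n.
```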
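Next, the Nissim-Smorodinsky-Tennenholtz construction uses the McSherry-Talwar exponential mechanism as one of its ingredients. As a reminder of what that ingredient does (this is a generic sketch of the exponential mechanism, not the paper’s full construction, and the function and variable names are mine), it samples an outcome with probability proportional to exp(epsilon * q / (2 * Delta)), where q is a quality score with sensitivity Delta:

```python
import math
import random

def exponential_mechanism(outcomes, quality, data, epsilon, sensitivity):
    """Generic McSherry-Talwar exponential mechanism (sketch).

    Samples an outcome with probability proportional to
    exp(epsilon * quality(data, outcome) / (2 * sensitivity)).
    `outcomes`, `quality`, and `sensitivity` are supplied by the caller.
    """
    scores = [quality(data, o) for o in outcomes]
    best = max(scores)  # shift scores for numerical stability; doesn't change the distribution
    weights = [math.exp(epsilon * (s - best) / (2.0 * sensitivity)) for s in scores]
    return random.choices(outcomes, weights=weights, k=1)[0]

# Toy usage (my own example): choose a facility location whose quality is minus the
# total distance to the customers; the score has sensitivity 1 if adding or removing
# one customer changes it by at most 1.
customers = [1.0, 2.0, 8.0]
candidates = [0, 2, 5, 9]
quality = lambda data, o: -sum(abs(o - c) for c in data)
print(exponential_mechanism(candidates, quality, customers, epsilon=0.5, sensitivity=10.0))
```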
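The fairness condition in the Dwork-Hardt-Pitassi-Reingold-Zemel paper can also be stated compactly. This is my paraphrase of the condition as described above: a randomized classifier maps each individual to a distribution over outcomes, and similar individuals (under the task-specific metric d) must receive similar output distributions (under some distance D on distributions; the paper considers more than one choice, and statistical distance is one natural instantiation):

```latex
% M maps individuals to distributions over outcomes O; d is the task-specific metric.
% The Lipschitz (fairness) condition:
\[
  D\big(M(x),\, M(y)\big) \;\le\; d(x, y) \qquad \text{for all individuals } x, y,
\]
% where D is a distance on distributions over O, e.g. statistical distance
% D(\mu,\nu) = \tfrac{1}{2} \sum_{o \in O} |\mu(o) - \nu(o)|.
% "Statistical parity" for a protected group S, relative to the whole population U,
% asks that the induced output distributions be close:
\[
  D\Big( \underset{x \sim S}{\mathbb{E}}\, M(x),\; \underset{x \sim U}{\mathbb{E}}\, M(x) \Big)
  \;\le\; \varepsilon .
\]
```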
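The baseline dynamic in the Manshadi-Saberi paper (the unperturbed voter model) is easy to simulate, and doing so makes the 2/n benchmark concrete. The following is my own toy simulation, not the authors’ code: at each step a uniformly random node copies the strategy of a uniformly random neighbor, and the chain runs until it is absorbed at all-cooperate or all-defect. On the cycle, starting from two adjacent cooperators, the empirical frequency of all-cooperate absorption should come out near 2/n.

```python
import random

def voter_model_absorption(adjacency, initial_cooperators):
    """Run the (unperturbed) voter model until absorption.

    adjacency: dict mapping node -> list of neighbors.
    initial_cooperators: set of nodes starting as cooperators.
    Returns True if the chain is absorbed at all-cooperate.
    """
    state = {v: (v in initial_cooperators) for v in adjacency}
    while 0 < sum(state.values()) < len(state):
        v = random.choice(list(state))    # pick a uniformly random node
        u = random.choice(adjacency[v])   # pick a uniformly random neighbor
        state[v] = state[u]               # imitate the neighbor, ignoring payoffs
    return all(state.values())

# Example: cycle on n nodes, two adjacent cooperators; the empirical frequency of
# all-cooperate absorption should be roughly 2/n = 0.2 here.
n = 10
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
trials = 2000
hits = sum(voter_model_absorption(cycle, {0, 1}) for _ in range(trials))
print(hits / trials)
```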
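Finally, the “advisor” in the Azar-Chen-Micali mechanism is paid with a proper scoring rule, so that reporting her true beliefs about the other players’ type distribution maximizes her expected reward. The paper works in a more general setting; as a minimal illustration of the ingredient, here is the quadratic (Brier) scoring rule over a discrete set of possible values, which is strictly proper (the toy code and names are mine, not the paper’s):

```python
def brier_score(reported_probs, realized_value):
    """Quadratic (Brier) scoring rule over a discrete value space.

    reported_probs: dict mapping each possible value to its reported probability.
    realized_value: the value that was actually realized.
    A risk-neutral advisor maximizes expected score by reporting her true beliefs.
    """
    score = 2 * reported_probs.get(realized_value, 0.0)
    score -= sum(p * p for p in reported_probs.values())
    return score

# Example: the advisor reports a belief about another bidder's value; once that value
# is observed, she is paid (a scaled, shifted version of) the score, and receives no item.
report = {10: 0.5, 20: 0.3, 30: 0.2}
print(brier_score(report, 20))  # 2*0.3 - (0.25 + 0.09 + 0.04) = 0.22
```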
3. Innovations in the conference format
There were two notable innovations in the conference format this year, and I think both of them enhanced the experience. First of all, each session began with a “chair rant” in which the session chair presented a ten-minute talk introducing the papers and giving some insight into why the PC liked them. At their best — as when Madhu gave a five-minute synopsis of the history of coding theory in TCS — these functioned as a powerful advertisement for the entire subfield, in addition to priming people to pay close attention to the upcoming talks.
A second innovation in the format was the “graduating bits” session at the end of the second day. Students and postdocs who are on the job market this year were invited to give five-minute presentations about their work. The community’s response to the call for presentations was overwhelming: more than thirty people gave talks, greatly in excess of the organizers’ expectations. In addition to the obvious value for people who are thinking of recruiting theoreticians this year (see Claire’s blog post here), a side benefit is that the conference was flooded with grad students and postdocs who came to present their five-minute pitch but stayed for the entire conference, thereby enhancing everyone else’s experience. However, there were simply too many talks: the session lasted three hours with no breaks. If graduating bits is to become a staple of the ITCS conference, future organizers will have to think about limiting the number of presentations, splitting the talks into multiple sessions, or using posters instead of talks.
It was instructive to watch all of these mini-job-talks and reflect on what made some of them more effective than others. Here are some rules of thumb that I took away from the experience.
- Organize the talk around one theme, even at the expense of leaving out some of your work. Any job talk, regardless of length, should have a consistent theme that encompasses the work being presented and communicates the presenter’s vision and future goals. This rules out talks without a theme, and it also rules out talks whose theme is so broad that it becomes almost vacuous. (“My research focuses on modifying classical optimization problems to reflect real-world constraints.”) The reality is that many of us, by the time we’re on the job market, have worked on enough different topics that it’s hard to delineate a single theme that encompasses them all. In a one-hour talk there are options for coping with this difficulty: you can spend two minutes listing other things you’ve worked on apart from the main theme, or you can present a theme that sounds excessively broad at first but then convince the audience that it translates into a compelling research vision. In a five-minute talk there’s not enough time for either of these strategies.
- Too much humor can spoil the talk. I enjoy it when speakers use a joke to lighten the mood of their talk. It’s especially welcome during a three-hour session with uninterrupted five-minute talks. But some speakers tried to insert a punch line into nearly every slide; at that point, the humor gets in the way of the main message and detracts from the presentation.
- Don’t just tell me what you solved, tell me what you’d like to solve. I found that many of the most effective talks focused on an open question (or a set of related questions) that inspired the speaker’s work. Organizing the talk this way vividly conveys the sort of work you envision doing in the future. A good job talk should be both impressive and interesting; the audience should not only respect your accomplishments but should also feel that they learned something memorable that sparked their curiosity. A great open question can do a superb job of accomplishing the latter objective.