Business schools have a variety of groups; different schools are organized differently, and groups differ in their names. The most relevant groups are in the areas of Operations (such as Operations Management and Operations Research) and Decisions (such as Decision Sciences and Information Science), and depending on your research you might find connections to Marketing or Strategy groups. It is worthwhile to spend some time looking online and engaging your contacts to learn more about the different places that might be interested in you. This post will focus mostly on the Operations job market, although some of the information applies to all of these areas.

Job market candidates who have further questions that are not answered below are encouraged to email either of the authors of this post. Answers to any general-interest questions will be included in future updates.

**When does the job market take place?**

The typical recruiting process for Operations-like groups starts by asking applicants to submit a partial application due in October or November. The early applications are used for conducting short interviews/meetings with candidates at INFORMS (a big annual conference in OR and Management Science, which will take place this year on November 9-12). In addition to short interviews, departments will send representatives to attend the talks of potential candidates who are presenting at INFORMS (information regarding the talk should be included in the early application).

Deadlines for “full” applications are usually after INFORMS (between Thanksgiving and January, and they vary quite a bit), and flyouts (campus interviews) take place between January and March. Applying early for short interviews is not a requirement.

Some Marketing groups have faculty that do quantitative research and may be interested in CS-Econ work. The Marketing job market begins very early (application deadlines are around August) and campus interviews are around October-November. We hope to collect more information about this market in the future.

**How to apply?**

Information about OR-like job postings can be found at Operations Academia. Also check the INFORMS website under “Build your career”. Groups such as Decision Sciences or Information Sciences may also post their jobs on economics job listings such as the AEA’s Job Openings for Economists.

**What can I read to be better prepared?**

General advice about the job market in economics, and some for OR, is given in the links below. Although the economics job market is a bit different, much of the advice is very relevant to candidates applying for OR-like jobs and is recommended reading:

- Operations Academia’s advice for the Operations job market.

- The Stanford Economics Department’s advice for the job market.

- John Cawley’s A Guide and Advice for Economists on the U.S. Junior Academic Job Market.

- David Laibson’s Job Market Tips.

**What should be my job talk/job market paper?**

This is a tough question, as the OR-like market sits somewhere between economics and CS in several ways. CS candidates typically have many papers, while economics students typically have only a few polished papers. The job talk/job market paper should showcase that you have an agenda, even if your results are dispersed across different papers. You want to convey that you have a well-formed research direction and to make your work relevant and interesting to your audience. To make your work relevant, remember that the audience of your talk does not necessarily have a CS background, so you should not assume they are familiar with CS methods or justifications. You must explain why your result is a good result.

It is important to have a job market paper (see the advice about job market papers for economics students).

**When should I start preparing for this job market?**

As early as possible. We would suggest starting to think about the job market (and the job market paper) at least two years before graduating. When you are conducting research, think about how it will fit into a cohesive research portfolio. Going to conferences and communicating with colleagues from other fields will help you understand how to interest them in your research when you are on the market. Preparing more general materials (applications, statements, etc.) usually happens after the summer of the year of graduation (for Marketing, this begins before the summer). It is recommended not to wait until the fall to begin.

**How do business schools “count” conference publications?**

Different fields have different publication outlets. Business schools/OR care less about CS conference publications, although it is understood that candidates may come from different backgrounds. Journal publications are important, and there are several top journals in the OR-like area; common ones are Management Science and Operations Research. Top economics journals are also good. Other journals are often considered good in different groups (such as Mathematics of Operations Research, Mathematical Programming, and top economics field journals like GEB). It is in general good for students who are interested in such jobs to submit papers to good journals.

**Are there venues to present work prior to the job market in order to increase visibility?**

Yes. INFORMS is in the fall, and MSOM (the Manufacturing and Service Operations Management conference) is in the early summer.

- INFORMS’s call for papers is in the late spring, and it is not very competitive. There are also organized (invited) sessions; typically there is a cluster on CS-Econ topics that is organized by someone in CS-Econ.

- MSOM’s call for papers happens early in the spring, and it is also not competitive. Submissions are just a summary of a few pages. They have a mailing list one can register for to get information not only about the conference; many job postings are also sent through this mailing list. MSOM has special sessions for job market candidates to give talks, and these talks are recorded.

**What are the main things that business schools are looking for in interviews that are distinct from CS interviews?**

As in every interview, your interviewers are interested in learning about your work and all your strengths. In contrast to CS interviews, they may not be familiar with some or many technical aspects of your work, so explaining your results may be more challenging. Business schools need people who can teach MBAs and value the ability to communicate ideas in a non-technical way. For example, your ability to clearly motivate your CS methods to the (non-CS) audience of your job talk helps demonstrate this ability. One-on-one interviewers will be looking for this ability as well. Also, candidates typically wear a suit to their interview.


*We are pleased to post the following announcement on behalf of the organizers of NYCE-2014.*

The 7th annual **New York Computer Science and Economics Day (NYCE-2014)** will take place on Friday, December 5th at Microsoft’s Times Square location (11 Times Square). We tentatively plan a panel, a mix of invited and contributed talks, and a poster session. More details are coming soon; the webpage (https://sites.google.com/site/nycsecon2014/) will be up in a few days.

**Organizers:** Arash Asadpour (NYU Stern), Mohammad Hossein Bateni (Google Research), Alex Slivkins (MSR-NYC).


Microsoft Research’s Silicon Valley Center (MSR/SVC) was the home of a truly amazing research team. The team began at Digital Equipment Corporation’s Systems Research Center (DEC/SRC) in the mid-eighties and, over the last thirty years, has been a pioneer of distributed systems research and an example of industrial research at its best. Its thirteen-year run at MSR/SVC continued this tradition:

- as a collaboration between two pillars of computer science, systems and theory;
- as an industrial laboratory with unfettered academic freedom;
- as a lab where research prototypes transferred to deployed systems, like Dryad, which shipped with Microsoft Server;
- as a lab where fundamental theory of computation was envisioned and brought to maturity, like differential privacy;
- as a lab that embraced and supported the greater academic community;
- as a lab that, since its contemporaneous founding with the then-nascent field of *economics and computation* (EC), was pivotal in its development.

Perhaps most importantly, MSR/SVC has had a profound impact on several generations of researchers who visited as Ph.D. students in the internship program, as postdocs, or as academic visitors. Indeed, many of these researchers have shared their experiences in the comments on Omer Reingold’s presumably final Windows on Theory blog post.

My own story is similar to that of many others: My advisor Anna Karlin was a member of the team during its DEC/SRC years. (Thank you Anna!) My Ph.D. thesis originated from an auction question posed by the team’s Andrew Goldberg and is based on what became a nine-paper collaboration. (Thank you Andrew!) I clearly remember my 2003 job interview with Roy Levin, before the first academic paper on the subject, where he told me about the problem of sponsored search and the research challenges it posed for auction theory. (Thank you Roy!) On graduation I joined the team as a researcher and spent an amazing four years in which I could not have found a more supportive, stable, and stimulating environment to work on the theory of mechanism design. Collaborations with Andrew Goldberg developed the competitive analysis of auctions. Collaborations with Moshe Babaioff and Alex Slivkins and lab visitors Avrim Blum, Nina Balcan, and Bobby Kleinberg brought connections to machine learning theory. Collaborations with visitors Shuchi Chawla and Bobby Kleinberg initiated the study of approximation in Bayesian mechanism design. Collaborations with lab visitor Madhu Sudan brought connections to coding theory. Discussions with Cynthia Dwork, Frank McSherry, and Kunal Talwar made connections to differential privacy. It was an incredible time to be with such an amazing group. MSR/SVC, thank you!

Last week MSR/SVC closed, and with it closed a chapter in the story of an elite team of researchers, a culture of collaboration, and an institution of research excellence. I hope, just as the team survived acquisition by Compaq and Hewlett-Packard in the late nineties, that this chapter is not its last. For myself, my cobloggers at Turing’s Invisible Hand, and on behalf of the EC research community, I would like to thank the MSR/SVC team for everything they have done for computer scientists, computer science, and the field of economics and computation.


An important conjecture in prior-free mechanism design was affirmatively resolved this year. The goal of this post is to explain what the conjecture was, why its resolution is fundamental to the theoretical study of algorithms (and mechanisms), and to encourage the study of important open issues that still remain.

The conjecture, originally from Goldberg et al. (2004), is that the lower bound of 2.42 on the approximation factor of a prior-free digital-good auction is tight. In other words, the conjecture stated that there exists a digital-good auction that, on any input, obtains at least a 1/2.42 fraction of the best revenue from a posted price that at least two bidders accept (henceforth: the benchmark). The number 2.42 arises as the limit as the number of agents *n* approaches infinity (for finite *n* the lower bound is smaller and is given by a precise formula). The conjecture was resolved in the affirmative by Ning Chen, Nick Gravin, and Pinyan Lu in their STOC 2014 paper Optimal Competitive Auctions. The resolution of this conjecture suggests that a natural method of proving a lower bound for approximation is generally tight, but we still do not really understand why.

**Summary.**

To explain the statement of the theorem, let’s consider the *n = 2* special case. For *n = 2* agents, the benchmark is twice the lower agent’s value. (The optimal posted price that both bidders will accept is a price equal to the lower bidder’s value; the revenue from this posted price is twice the lower bidder’s value.) The goal of prior-free auction design is to find an auction that approximates this benchmark. For the *n = 2* special case there is a natural candidate: the second-price auction. The second-price auction’s revenue for two bidders is equal to the lower bidder’s value. Consequently, the second-price auction is a two-approximation: the ratio of the second-price auction’s revenue to the benchmark is two in the worst case over all inputs (in fact, it is exactly two on every input).
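For intuition, here is a minimal numeric check (the Python helpers below are ours, not from the paper) that for *n = 2* the second-price auction is exactly a two-approximation to the benchmark on every input:

```python
import random

def second_price_revenue(values):
    # The winner pays the second-highest value; for n = 2 bidders this
    # is simply the lower bidder's value.
    return sorted(values)[-2]

def benchmark(values):
    # Best revenue from a posted price accepted by at least two bidders:
    # for n = 2, post the lower value and both bidders accept it.
    return 2 * min(values)

random.seed(0)
for _ in range(10_000):
    vals = [random.uniform(1, 100), random.uniform(1, 100)]
    # The ratio benchmark / revenue is exactly 2 on every input.
    assert benchmark(vals) == 2 * second_price_revenue(vals)
```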

The Goldberg et al. (2004) lower bound for the *n = 2* agent special case shows that this two-approximation is optimal. The proof of the lower bound employs the probabilistic method. A distribution over bidder values is considered, the expected benchmark is analyzed, the expected revenue of the optimal auction (for the distribution) is analyzed, and the ratio of their expectations gives the lower bound. The last step follows because any auction obtains at most the optimal auction’s revenue, and if the ratio of the expectations has a certain value, there must be an input in the support of the distribution that exhibits at least this ratio. A free parameter in this analysis is the distribution over bidder values. The approach of Goldberg et al. was to use the distribution for which all auctions obtain the same revenue, i.e., the so-called equal-revenue or Pareto distribution. This distribution is defined so that an agent with a random value accepts a price *p > 1* with probability exactly *1/p*, and the expected revenue generated is exactly one.
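The equal-revenue property is easy to verify by simulation. The following sketch (ours, not from the paper) samples from the equal-revenue distribution by inverse-CDF sampling and checks that every posted price *p ≥ 1* earns expected revenue close to one:

```python
import random

random.seed(1)
N = 200_000
# Equal-revenue (Pareto) distribution on [1, infinity):
# Pr[value > p] = 1/p for p >= 1. Inverse-CDF sampling: v = 1/U
# for U uniform on (0, 1].
samples = [1.0 / (1.0 - random.random()) for _ in range(N)]

for p in [1.5, 2.0, 5.0, 10.0]:
    accept_rate = sum(v >= p for v in samples) / N  # estimates 1/p
    posted_price_revenue = p * accept_rate
    # Expected revenue of any posted price is (approximately) one.
    assert abs(posted_price_revenue - 1.0) < 0.05
```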

For more details see the newly updated Chapter 6 of Mechanism Design and Approximation.

**Discussion.**

Prior-free mechanism design falls into a genre of algorithm design where there is no pointwise optimal algorithm. For this reason the worst-case analysis of a mechanism is given relative to a benchmark. (The same is true for the field of online algorithms, where this is referred to as competitive analysis.) In the abstract, the optimal algorithm design problem is the following:

*ALG\** = argmin over auctions *ALG* of max over inputs *INPUT* of *BENCHMARK(INPUT)* / *ALG(INPUT)*,

where *ALG(INPUT)* denotes the (expected) performance of auction *ALG* on input *INPUT* and *BENCHMARK(INPUT)* is the benchmark on that input. The probabilistic-method lower bound corresponds to the worst-case distribution

*DIST\** = argmax over distributions *DIST* of min over auctions *ALG* of **E**[*BENCHMARK(INPUT)*] / **E**[*ALG(INPUT)*],

where both expectations are over *INPUT* drawn from *DIST*.

Let *EQDIST* denote the product distribution for which the expected value of *ALG(INPUT)*, for *INPUT* drawn from *EQDIST*, is a constant for all auctions *ALG*. For a number of auction problems (not just digital goods), it was conjectured that *DIST\** = *EQDIST*. Two things are important in this statement:

- *EQDIST* is a product distribution, whereas Yao’s theorem may in general require correlated distributions. (Why is a product distribution sufficient?!)

- *EQDIST* is not specific to *BENCHMARK*. (Why not?!)

Prior to the Chen-Gravin-Lu paper, the equality of *EQDIST* and *DIST\** was known to hold for specific benchmarks and the following problems:

1. Single-agent monopoly pricing (for revenue). See Chapter 6 of MDnA.

2. Two-agent digital-good auctions (for revenue). See Chapter 6 of MDnA.

3. Three-agent digital-good auctions (for revenue). See Hartline and McGrew (2005).

4. One-item two-agent auctions (for residual surplus, i.e., value minus payment).

Of these, (1) and (2) are very simple, while (3) and (4) are non-obvious. All of these results come from explicitly exhibiting the optimal auction *ALG\**.

Chen, Gravin, and Lu give the first highly non-trivial proof that *DIST\** = *EQDIST* without explicitly constructing *ALG\**. Moreover, they do it not just for the standard benchmark (given above) but for any benchmark with certain properties. It is clear from their proof which properties they use (monotonicity, symmetry, scale invariance, constancy in the highest value). It is not so clear which are necessary for the theorem. For example, the benchmarks in (1) and (4) above are not constant in the highest bid.

**Conclusion.**

Prior-free mechanism design problems are exemplary of a genre of algorithm design problems where there is no pointwise optimal algorithm. (The competitive analysis of online algorithms gives another example.) These problems stress the classical worst-case algorithm design and analysis paradigm. We really do not understand how to search the space of mechanisms for prior-free optimal ones. (Computational hardness results are not known either.) We also do not generally know when and why the lower-bounding approach above gives a tight answer. The Chen-Gravin-Lu result is the most serious recent progress we have seen on these questions. Let’s hope it is just the beginning.


**Identity.** When academic positions come tied to areas, identity is an important issue. While many in the Econ/CS community come from an AI/ML or Algorithms/Theory background, many consider Econ/CS, henceforth EC, their primary research community. Nonetheless, EC faculty applicants will typically be competing for AI or Theory positions. Only a few schools have chosen to specifically list EC as a target area for hiring (independent of AI or Theory persuasion). In comparison, Computational Biology in the last decade, and Data Science contemporaneously, have become first-order subfields with respect to hiring. One recommendation for hiring discussions is to separate EC from AI and Theory hiring; it is not to EC’s advantage to be in a zero-sum game with either AI or Theory.

**Public Relations.** EC does not presently have a clearly articulated value proposition that engenders broad investment from within CS, broad support for hiring from the Economics academic community, or broad visibility by the general public. One concrete action to take is to be more public about successes of our field in terms of impact on practice and impact on science (in particular Economics) and about the big challenges our field is hoping to address in the medium and long term. A second concrete action to take is to prepare development pitches, both at the department level for including EC in the vision for the department, and at the donor level to provide a basis for deans to raise money for faculty lines in EC. A third action item is to encourage more outreach articles in general computer science venues and in popular science venues.

**Web Resources.** The SIGecom advisors have discussed the idea of creating a web resource that would facilitate the initiative described above. In particular:

- To aggregate survey articles, general CS articles, popular science articles, and teaching materials (cf. Interactions.org).
- To collect and disseminate development resources, e.g., for faculty to pitch their department for EC hiring, for deans to pitch their donors for EC hiring, for researchers to pitch funding agencies (cf. Theory Matters).
- To collect advice for EC applicants to faculty positions outside of EC, e.g., operations research, business schools, information science, etc. These academic markets have different timings, structure, and focus.
- To collect job posts and publicize job market outcomes (cf. the computational complexity blog).
- To survey research themes and success, impact on practice, and impact on science.

The SIG would contribute resources to make sure that the web resource is well designed and hosted, and we plan to do all of this with a view to making sure that whatever we do is maintainable going forward.

**Coordination.** Turing’s Invisible Hand coblogger Jason Hartline has agreed to serve as the SIGecom 2014-2015 Special Initiatives Chair for the Job Market and will be coordinating the effort to assemble this web resource, recruiting volunteers, and facilitating the initiatives proposed above. Please agree to help if asked, write Jason to volunteer, or provide discussion in the comments below.

*Joint post with SIGecom Chair David Parkes.*


Berthold was a phenomenal problem-solver, and he made numerous contributions across many subfields of algorithmic game theory. To name just a few, his celebrated paper with Czumaj resolved the price of anarchy in the Koutsoupias-Papadimitriou scheduling model, the model in which the POA was originally defined. His early work in algorithmic mechanism design (e.g., with Briest and Krysta), which I regularly teach in my classes, demonstrated the richness of the design space for single-parameter problems. His work with Skopalik and others characterized the computational complexity of computing equilibria in congestion games. He was an exceptionally strong scientist.


**Recent progress in multi-dimensional mechanism design**

Organizers: Yang Cai (UC Berkeley), Costis Daskalakis (MIT), and Matt Weinberg (MIT)

Abstract: Mechanism design in the presence of Bayesian priors has received much attention in the Economics literature, focusing among other problems on generalizing Myerson’s celebrated auction to multi-item settings. Nevertheless, only special cases have been solved, with a general solution remaining elusive. More recently, there has been an explosion of algorithmic work on the problem, focusing on computation of optimal auctions, and understanding their structure. The goal of this tutorial is to overview this work, with a focus on our own work. The tutorial will be self-contained and aims to develop a usable framework for mechanism design in multi-dimensional settings.

**Axiomatic social choice theory: from Arrow’s impossibility to Fishburn’s maximal lotteries**

Organizer: Felix Brandt (TU München)

Abstract: This tutorial will provide an overview of central results in social choice theory with a special focus on axiomatic characterizations as well as computational aspects. Topics to be covered include rational choice theory, choice consistency conditions, Arrovian impossibility theorems, tournament solutions, social decision schemes (i.e., randomized social choice functions), preferences over lotteries (including von Neumann-Morgenstern utility functions, stochastic dominance, and skew-symmetric bilinear utility functions), and the strategyproofness of social decision schemes. The overarching theme will be four escape routes from negative results such as the impossibilities due to Arrow and Gibbard-Satterthwaite: (i) restricting the domain of preferences, (ii) replacing choice consistency with variable-electorate consistency, (iii) only requiring expansion consistency, and (iv) randomization.

**Bitcoin: the first decentralized digital currency**

Organizer: Aviv Zohar (Hebrew Univ.)

Abstract: Bitcoin is a disruptive new protocol for a digital currency that has been growing in popularity. The most novel aspect of the protocol is its decentralized nature: it has no central entity in charge of the currency or backing it up, and no central issuer. Instead, Bitcoin is managed by a peer-to-peer network of nodes that process all its transactions securely. The protocol itself combines ideas from many areas of computer science, ranging from its use of cryptographic primitives to secure transactions and its use of economic mechanisms to avoid denial-of-service attacks and incentivize participation, to its solution to the Byzantine consensus problem and the robust construction of its P2P network. The goal of the tutorial is to provide a basic understanding of the Bitcoin protocol, to discuss the main problems and challenges that it faces, and to provide a starting point for research on the protocol and its surrounding ecosystem.

**Privacy, information economics, and mechanism design**

Organizers: Cynthia Dwork (Microsoft Research), Mallesh Pai (UPenn), and Aaron Roth (UPenn)

Abstract: Internet-scale interactions have implications for game theory and mechanism design in at least two ways. First, the ability of many entities to aggregate large amounts of sensitive data for purposes of financial gain has brought issues of privacy to the fore. As mechanism designers, it is therefore crucial that we understand both how to *model* agent costs for loss of privacy, as well as how to control them. Second, it has made “large markets and large games” the common case instead of the exception. Can mechanism designers leverage these large market conditions to design mechanisms with desirable properties that would not be possible to obtain in small games? What techniques can we use to enforce that players are “informationally small” with minimal assumptions on the economy? In this tutorial, we will discuss results and techniques from “differential privacy”, an approach developed over the last decade in the theoretical computer science literature, which remarkably can address both of these two issues. We will motivate the definition, and then show both how it can provably control agent costs for “privacy” under worst-case assumptions, and how it can be used to develop exactly and asymptotically truthful mechanisms with remarkable properties in various large market settings. We will survey recent results at the intersection of privacy and mechanism design, with the goal of getting participants up to the frontier of the research literature on the topic.


The game exhibits some peculiar phenomena despite the existence of a *unique* Nash equilibrium: Alice raises her hand, Bob does not raise his hand, and Charlie randomizes with equal probability. Charlie couldn’t be happier with the equilibrium as he will never have to take out the garbage (and could even decide who has to do the job by playing a pure strategy instead).

The security level of all players is 0.5 and the expected payoff in the Nash equilibrium is (0.5, 0.5, 1). However, the minimax strategies of Alice and Bob are different from their equilibrium strategies, i.e., they can *guarantee* their equilibrium payoff by *not* playing their respective equilibrium strategies (a phenomenon that was also observed by Aumann)! The solution in which all players play their minimax strategies obviously suffers from the fact that this strategy profile fails to be an equilibrium: both Alice and Bob would want to deviate. On top of that, the unique equilibrium is particularly weak in the sense that it fails to be quasi-strict, i.e., all players could just as well play *any* other strategy without jeopardizing their payoff.

Quasi-strict equilibrium is an equilibrium refinement proposed in 1973 by Harsanyi and requires that all pure best responses are played with positive probability. Harsanyi showed that in almost all games all equilibria are quasi-strict. Indeed the three-player game above (taken from this paper) is one of the very few exceptions. Quasi-strict equilibrium is rather attractive from an axiomatic perspective. For example, it has been shown that the existence of quasi-strict equilibrium is sufficient to justify the assumption of common knowledge of rationality when players are ‘cautious’ (for more details see here and here).
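To make the definition concrete, here is a small sketch (the Python helper and the toy game are our illustrations; the garbage game’s full payoff matrix is not reproduced here) that checks whether a mixed-strategy profile is quasi-strict, i.e., whether every pure best response is played with positive probability:

```python
import itertools

def is_quasi_strict(payoffs, strategies, tol=1e-9):
    # payoffs[i] maps a pure-strategy profile (a tuple) to player i's payoff;
    # strategies[i] is player i's mixed strategy (a list of probabilities).
    n = len(strategies)
    for i in range(n):
        # Expected payoff of each of player i's pure strategies against
        # the other players' mixed strategies.
        utils = []
        for a in range(len(strategies[i])):
            u = 0.0
            for profile in itertools.product(*(range(len(s)) for s in strategies)):
                if profile[i] != a:
                    continue
                prob = 1.0
                for j in range(n):
                    if j != i:
                        prob *= strategies[j][profile[j]]
                u += prob * payoffs[i][profile]
            utils.append(u)
        best = max(utils)
        for a, u in enumerate(utils):
            if best - u < tol and strategies[i][a] <= tol:
                return False  # a pure best response gets zero probability
    return True

# Matching pennies: its unique equilibrium is quasi-strict, since both
# pure strategies are best responses and both are in the support.
p0 = {(0, 0): 1, (0, 1): -1, (1, 0): -1, (1, 1): 1}
p1 = {k: -v for k, v in p0.items()}
print(is_quasi_strict([p0, p1], [[0.5, 0.5], [0.5, 0.5]]))  # True
```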

In 1999, Henk Norde proved that every two-player game contains a quasi-strict equilibrium (via a rather elaborate proof using Brouwer’s fixed-point theorem), strengthening earlier results which showed existence in zero-sum games, bimatrix games with a finite number of equilibria, 2×n games, etc. Norde’s existence result implies that computing a quasi-strict equilibrium is PPAD-hard (while this problem was shown NP-hard for games with at least three players). Curiously, however, membership in PPAD for two-player games remains open due to the intricate existence proof by Norde (see also this review of Norde’s paper by Bernhard von Stengel).

Coming back to the example, it seems as if Charlie has to live with the deficiencies of Nash equilibrium and prepare to take out the garbage with positive probability.
