When I was 21, I used to go to the gym three times a week, rain or shine. After a few years I shaved off a weekly session, and a few years later I was down to one gym session a week. By the time we moved to Pittsburgh, playing Dance Central with my wife seemed like a demanding workout.

Why am I telling you this? Because blogging is a bit like going to the gym, as the following graph suggests (the y-axis shows the number of blog posts published by yours truly, excluding announcements and guest posts):


But blogging is definitely more fun than going to the gym, and I did have a lot of fun blogging these past 2+ years; I want to thank Noam for this opportunity. Unfortunately, it’s no longer as fun as it used to be, and I decided to quit while I’m ahead. Since my blogging career was defined by ranking and listing stuff, I would like to finish with a vainglorious list of some of my own posts that I especially like:

Now that I think about it, blogging is more like singing in the shower: it’s a creative form of self-expression, and you have no idea who’s listening. If you have been listening for the last two years, I’m grateful for your attention; after all, I sing like a crow!

The Choices Are: Slick and Exponential, or Brute Force and Polynomial

Guest Post by Vijay Vazirani

A new, difficult problem, which TCS took up twelve years ago, namely finding good algorithms for computing market equilibria, has stretched the holy grail of TCS, namely polynomial time solvability, to its very limits — so much so that today, the two options listed in the title seem to be the only ones available! Let me quickly provide some background, say a bit about these options, and invite you to a discussion that — hopefully — goes right to the core tenets of our field.

The notion of market equilibrium, which asks for prices under which supply equals demand, is central to economics. The Arrow-Debreu Theorem (1954), showing the existence of such prices for a realistic, complex model of the economy, is perhaps the most central result in economics. Yet, this result has an unsatisfying aspect to it: whereas the notion is inherently algorithmic (Walras (1874), while defining it, also gave a price-adjustment mechanism for arriving at an equilibrium), the Theorem is highly non-constructive. Several top mathematicians, including Scarf and Smale, have given algorithms, but they are slow.

Hence, bringing to bear powerful tools from our theory of algorithms was a fine goal. Work started with the easiest market models  and gradually moved on to more realistic, complex models. At first, the primal-dual paradigm and interior point methods for convex programs did admirably well. However, when aspects of intractability showed up, attention shifted to the two options listed in the title.

“Slick” algorithms are in the style of the (pivoting-based) simplex algorithm, though suitably enhanced to the setting of the linear complementarity problem — these are called complementary pivot algorithms, e.g., the classic Lemke-Howson algorithm (1964) for 2-player Nash is one such. Such algorithms run very fast in practice — indeed simplex (1947) is still the method of choice for most LP applications — but one can make them take exponential time by intricately doctoring up the instance.

Let us say that an algorithm is “brute force” if it accomplishes the job of finding one solution by finding them all — it has no idea how to do any better. However, it does achieve polynomial running time if certain parameters are assumed constant, e.g., for CLIQUE restricted to graphs having cliques of size at most k (a constant), there is a trivial O(n^k) algorithm.
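To make the brute-force option concrete, here is the trivial CLIQUE algorithm sketched in Python (a toy illustration of my own; the graph and vertex names are made up). It enumerates all O(n^k) vertex subsets of size k, so it runs in polynomial time for constant k and is hopeless otherwise:

```python
from itertools import combinations

def find_clique(adj, k):
    """Brute-force search for a clique of size k.

    adj: dict mapping each vertex to its set of neighbors.
    Tries every k-subset of vertices (O(n^k) of them), checking
    each in O(k^2) time -- polynomial once k is a constant.
    """
    for subset in combinations(list(adj), k):
        # A subset is a clique iff every pair of its vertices is adjacent.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return subset
    return None

# Toy graph: a triangle {a, b, c} plus a pendant vertex d.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(find_clique(g, 3))  # finds the triangle: ('a', 'b', 'c')
print(find_clique(g, 4))  # no 4-clique exists: None
```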

From the viewpoint of practicality, the former win hands down — even for moderate instance sizes, the latter will take prohibitively long.  However, there is another important criterion: the study of polynomial time algorithms has historically led to deep insights into the structure of the problems studied. Surprisingly enough, even on this aspect, these exponential time algorithms do well, e.g., simplex leads to a proof of strong duality and Shapley has obtained deep structural properties of 2-Nash from the Lemke-Howson algorithm. For now, I am not aware of any insights from brute force algorithms.

Smoothed analysis provides a fairly convincing theoretical reason for the success of simplex. Surely there should be a more general setting that encompasses complementary pivot algorithms.

A workshop called “Towards Better and more Affordable Healthcare: Incentives, Game Theory, and Artificial Intelligence” (HCAGT) will be held on May 5, 2014, in conjunction with AAMAS’14 in Paris. The submission deadline is Feb 16. More details are available on the workshop’s website.

Guest post by Vincent Conitzer

Duke just started a Master of Science in Economics and Computation. Because the program was announced only very recently, the deadline this year is February 15, 2014. The requirements are here.

The program is intended to be very small, at least initially, and piggybacks on some of the existing infrastructure of Duke’s smoothly running Master’s program in Economics.

Still, I believe that this is an exciting development for our area that reflects well on its significance and maturity (of course, much remains to be done!). At the same time, this new MS program is much broader than our area narrowly construed. Students could do very well in it without ever taking a course with me or any of our other faculty who publish in EC/TEAC (Kamesh, Sasa, Rachel, Ozan, Santiago, …). For example, they might choose to focus more on macroeconomics, finance, econometrics, etc. — and there are people at Duke interested in computational aspects of those.

I hope to see this type of program replicated at other universities, and I think there is plenty of room for that.  I personally think the broad approach is a good idea, but I am curious to hear what others think about it.  It does require us to engage a little with people working on “computational economics” in other senses. (See also earlier discussions here and here.)  At the same time, having such MS programs certainly doesn’t require that we all become experts on all those topics ourselves.  Most universities should have other people interested in those aspects, and obviously it would be bad for any Master’s program to rely too much on one or two individual faculty.  What do people think?

Bitcoin has been popping up in conversations lately, mostly in the context of people urging me to invest (real money) in it. But being the sad academic that I am, this mainly makes me wonder whether there is some more interesting research to be done on bitcoin. Actually, “bitcoin” almost sounds like something that could have been the name of a blog on computation and economics, so where better to discuss it than right here?

I found out a bit about bitcoin (no pun intended) through a great, detailed blog post by Michael Nielsen. Here are some of the points I found most interesting, very briefly summarized:

  1. Let’s say that Alice has a bitcoin. One worries that she would try to “double spend” it by giving it to Bob and Charlie. The way bitcoin deals with this is by letting the recipient broadcast the transaction, and getting other users to verify that it is legitimate.
  2. But now one may worry that Alice could manipulate the verification process via a sybil attack, i.e., she could create a zillion fake identities that would certify her fraudulent transactions.
  3. To prevent sybil attacks, bitcoin uses a proof-of-work system. The idea is that if I want to validate a transaction, I need to solve a computationally hard puzzle, essentially inverting a cryptographic hash function. In more detail, the puzzle is to find a number (called the nonce) such that when the number is appended to the string representation of the transaction, and the resultant string is hashed using a specific hash function (SHA-256), the output is smaller than a given target value. The target value can be adjusted to make the puzzle easier or harder.
  4. Now that we’re validating transactions by making unrelated users solve useless puzzles, we need to incentivize users to participate. Currently this is done by creating bitcoins out of thin air and using them to reward successful validations. The rate at which new bitcoins are created decreases over time, and at some point users will have to offer a reward for validating their transactions.
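The puzzle in point 3 is easy to sketch in code. The following toy version (my own illustration; the real protocol hashes entire block headers and applies SHA-256 twice, and the transaction string and target here are made up) shows the crucial asymmetry: finding a nonce takes many hash evaluations in expectation, while verifying a claimed nonce takes just one:

```python
import hashlib

def solve_puzzle(transaction: str, target: int) -> int:
    """Find a nonce such that SHA-256(transaction + nonce), read as a
    256-bit integer, falls below the target. Lowering the target leaves
    fewer acceptable hashes, which is how difficulty is tuned."""
    nonce = 0
    while True:
        digest = hashlib.sha256((transaction + str(nonce)).encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

def verify(transaction: str, nonce: int, target: int) -> bool:
    """Checking a claimed solution costs a single hash evaluation."""
    digest = hashlib.sha256((transaction + str(nonce)).encode()).hexdigest()
    return int(digest, 16) < target

# An easy target: roughly 1 in 256 nonces works (top 8 bits must be zero).
target = 2 ** 248
tx = "Alice pays Bob 1 bitcoin"
nonce = solve_puzzle(tx, target)
assert verify(tx, nonce, target)
```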

Admittedly, I am almost completely ignorant of research papers on bitcoin (especially by the security community). However, I do know of a cool EC’12 paper, “On Bitcoin and Red Balloons”, by Moshe Babaioff, Shahar Dobzinski, Sigal Oren, and Aviv Zohar. In a nutshell, requests to validate transactions are propagated through the bitcoin network, but the authors argue that nodes have an incentive to keep knowledge of a transaction to themselves, thereby increasing their own chance of successfully validating it and collecting the reward. Babaioff et al.’s clever solution is based on the mechanism that helped the MIT team win the DARPA Network Challenge: consider the chain of nodes along which the transaction validation request was propagated, and reward each node in the chain; the challenge is to set the rewards in a way that incentivizes propagation and achieves other desirable properties.

The source of the incentive problem is that a node is competing with other nodes to solve a computational puzzle. A node is indeed more likely to be rewarded if there are fewer nodes that know of a transaction. But perhaps it is possible to redesign the computational puzzle to avoid this?

  • Question 1: Can the proof-of-work system be redesigned to be “Markovian”, in the (even stronger) sense that your chance of solving the puzzle at time t is independent of t? If so, assuming that (i) a node knows about at least one transaction that needs to be validated at any given time, and (ii) the probability that two nodes simultaneously validate a transaction is negligible, nodes would be indifferent to how many other nodes know about the transaction they are trying to validate.
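As a statistical sanity check on what such a puzzle would look like: if each attempt succeeds independently with a fixed probability p, the waiting time is geometrically distributed, and the geometric distribution is memoryless, which is the flavor of property Question 1 asks for. A quick simulation (my own toy illustration, with a made-up p):

```python
import random

def attempts_until_success(p, rng):
    """Simulate repeated puzzle attempts, each an independent coin flip
    with success probability p; return the index of the first success."""
    t = 1
    while rng.random() >= p:
        t += 1
    return t

rng = random.Random(0)
p = 0.01  # made-up per-attempt success probability
samples = [attempts_until_success(p, rng) for _ in range(50_000)]

# Memorylessness: among runs still unsolved after 50 attempts, the
# *remaining* waiting time is distributed like a fresh run from scratch.
mean_all = sum(samples) / len(samples)
survivors = [s - 50 for s in samples if s > 50]
mean_remaining = sum(survivors) / len(survivors)
print(round(mean_all), round(mean_remaining))  # both should be near 1/p = 100
```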

More importantly, there must be a better way to prevent sybil attacks than turning millions of nodes into the computational equivalents of Sisyphus.

  • Question 2: Is it possible to replace the useless work nodes are doing with useful work? This reminds me of the transition from CAPTCHA to reCAPTCHA: Under the former, people were solving useless visual puzzles, whereas reCAPTCHA uses some of this previously wasted brainpower to digitize books. The question is whether something similar can be done for puzzles that are meant to be solved by computers.
  • Question 3: Even more ambitiously, can we do away with this whole proof-of-work thing? Or is it possible to prove that, in some sense, it is necessary? An impossibility result might build on existing impossibilities for sybil-proofness; see, e.g., chapter 27 of the AGT book.

My research for this blog post did not extend to newer cryptocurrencies (such as Peercoin, Namecoin, and Primecoin), and it may well be the case that the preceding questions are already solved or just plain silly. But this space of problems, in my opinion, is quite compelling; it seems to me that the design of electronic cash systems could emerge as a new frontier for algorithmic economics.

CFP digest

  1. The 5th International Workshop on Computational Social Choice (COMSOC’14), which I am co-chairing with Toby Walsh. Submission deadline: March 15, 2014.
  2. The AI & Security special session (track) at the 27th IEEE Computer Security Foundations Symposium (CSF’14), which I am co-chairing with Benjamin Rubinstein. Submission deadline: February 10, 2014. Blurb:

    In recent years, a number of communities overlapping with AI—notably algorithmic economics and machine learning—have made significant forays into security & privacy. This session aims to collect theoretical viewpoints on security & privacy, particularly from researchers across diverse communities such as those identifying with AAAI/IJCAI, AAMAS, EC, WEIS, ICML, NIPS, COLT, STOC/FOCS, S&P, and CCS (including the AISEC workshop). Submissions in the following areas intersecting with information security are highly encouraged: Economics: game theory, mechanism design, market design, social choice; Learning: online learning, robust statistics, adversarial machine learning, privacy-preserving technologies such as differential privacy.

Today some 46 million turkeys will be devoured in the US. Any turkey is fair game, except for Popcorn (the duly elected National Thanksgiving Turkey) and Caramel, which were solemnly given a “full reprieve from cranberry sauce” by President Obama. Tomorrow, when they’re done with the turkeys, Americans (who are typically extremely nice on every other day of the year) will turn on each other, trampling anyone who stands in their way to get the best Black Friday deals.

For me, Thanksgiving is a yearly reminder of how peculiar the American way can seem, even after 4+ years in the US. Therefore, as an alternative Thanksgiving celebration, I am happy to present the top five things that never cease to amaze me about the US (in no particular order):

  1. Credit history: Any long-term contract (renting an apartment, electricity or Internet connection, cellphone, etc.) requires a credit check, which determines whether you qualify as a good person (good credit history), a bad person (bad history), or a turkey (no history). The underlying logic is that you are a reliable person if you get into debt and then repay it. In contrast, if you never got into debt in your life and have lots of money in your savings account, then you’re obviously a drug dealer. But here’s the Catch-22-esque irony: to build your credit history you need a credit card, but to get a (usable) credit card you need credit history.
  2. Taxes: A crazy tax system is one where nobody has any idea what the rules are. A crazier tax system is one where the rules are nondeterministic. One of my favorite examples is this line from US tax form W-4: “Enter ‘1’ for your spouse. But, you may choose to enter ‘-0-’ if you are married and have either a working spouse or more than one job.”
  3. American Football: An average professional football game lasts 3 hours and 12 minutes. Average minutes of play: 11. Average time spent on replays: 17 minutes. Average number of commercials: more than 100. I’m not making this stuff up.
  4. US Congress: Enough said.
  5. Healthcare: The American healthcare system has turned inefficiency into an art form. Here’s what a typical visit to the hospital looks like. Upon arrival, you are quickly ushered into a private room. Now, let there be n nurses denoted 1,…,n such that for all i, nurse i+1 is more senior than nurse i (they have different titles like unlicensed assistive personnel, licensed vocational nurse, registered nurse, physician’s assistant, etc.); usually n is 4 or 5, but I’ve also witnessed n=6 at Children’s Hospital Boston. For i=1,…,n, nurse i enters the room and asks you the same questions asked by nurses 1,…,i-1 (nurse 1 usually also measures blood pressure or something). Nurse n is a nurse practitioner, and is essentially equivalent to a doctor; after asking the same questions asked by nurses 1,…,n-1, she writes a prescription, say for 10 pills. Incidentally, the pharmaceutical company distributes the pills in packs of 10. A naive person would think he can go to the pharmacy and procure a pack of 10 pills. But that would be too efficient; instead, a pharmacist must take the pills out of the box and carefully place 10 pills into a bag. It is widely understood that this medically indispensable process takes around an hour.

Ils sont fous ces Américains! (“They’re crazy, these Americans!”)