Archive for February, 2012

The National Security Agency (NSA) has recently declassified an amazing letter that John Nash sent to it in 1955.  It seems that around 1950 Nash tried to interest some US security organs (the NSA itself was only formally formed in 1952) in an encryption machine of his design, but they did not seem to be interested.  It is not clear whether some of his material was lost, whether they ignored him as an overly theoretical professor, or — who knows — used some of his ideas but did not tell him.  In the hand-written letter, Nash tries to give a higher-level point of view supporting his design:

In this letter I make some remarks on a general principle relevant to enciphering in general and to my machine in particular.

He tries to make sure that he will be taken seriously:

I hope my handwriting, etc. do not give the impression I am just a crank or circle-squarer.  My position here is Assist. Prof. of Math.  My best known work is in game theory (reprint sent separately).

He then puts forward an amazingly prescient analysis anticipating computational complexity theory as well as modern cryptography.  In the letter, Nash takes a step beyond Shannon’s information-theoretic formalization of cryptography (without mentioning it) and proposes that the security of encryption be based on computational hardness — exactly the transformation to modern cryptography made two decades later by the rest of the world (at least publicly…).  He then explicitly focuses on the distinction between polynomial-time and exponential-time computation, the crucial distinction at the basis of computational complexity theory, which the rest of the world drew only about a decade later:

So a logical way to classify enciphering processes is by the way in which the computation length for the computation of the key increases with increasing length of the key. This is at best exponential and at worst probably at most a relatively small power of r, ar^2 or ar^3, as in substitution ciphers.
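
To make the gap concrete (the numbers below are my own illustration, not taken from the letter), consider a key of length r = 128: the polynomial cost Nash attributes to breaking substitution-style ciphers is minuscule, while an exponential cost is astronomically out of reach.

```latex
% Illustration only (not from Nash's letter): comparing his two growth rates
% for a key of length r = 128, taking the constant a = 1.
\[
  a r^{2}\Big|_{a=1,\; r=128} = 128^{2} \approx 1.6 \times 10^{4}
  \qquad \text{versus} \qquad
  2^{r}\Big|_{r=128} = 2^{128} \approx 3.4 \times 10^{38}.
\]
```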

He conjectures the security of a family of encryption schemes.  While he is not entirely specific here, in today’s words he is probably conjecturing that almost all cipher functions (from some class that is not completely clear) are one-way:

Now my general conjecture is as follows: for almost all sufficiently complex types of enciphering, especially where the instructions given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering, the mean key computation length increases exponentially with the length of the key, or in other words, the information content of the key.
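
Read with modern eyes (the formalization below is my gloss, not anything stated in the letter), this is close to the standard definition of a one-way function: easy to compute in the forward direction, but infeasible to invert on average.

```latex
% Modern gloss, not Nash's wording: a function f is one-way if it is computable
% in polynomial time, yet every probabilistic polynomial-time adversary A
% inverts it only with negligible probability:
\[
  \Pr_{x \leftarrow \{0,1\}^{n}}
    \Big[ f\big(A(1^{n}, f(x))\big) = f(x) \Big] \;\le\; \mathrm{negl}(n).
\]
% Nash's "mean key computation length increases exponentially with the length
% of the key" corresponds to demanding super-polynomial work from any inverter.
```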

He is very well aware of the importance of this “conjecture” and that it implies an end to the game played between code-designers and code-breakers throughout history.  Indeed, this is exactly the point of view of modern cryptography.

The significance of this general conjecture, assuming its truth, is easy to see.  It means that it is quite feasible to design ciphers that are effectively unbreakable.  As ciphers become more sophisticated the game of cipher breaking by skilled teams, etc., should become a thing of the past.

He is very well aware that this is a conjecture and that he cannot prove it.  Surprisingly for a mathematician, he does not even expect it to be proven.  Even more surprisingly, he seems quite comfortable designing his encryption system based on this unproven conjecture.  This is, quite eerily, exactly what modern cryptography does to this day: conjecture that some problem is computationally hard, not expect anyone to prove it, and yet base its constructions on this unproven assumption.

The nature of this conjecture is such that I cannot prove it, even for a special type of ciphers.  Nor do I expect it to be proven.

All in all, the letter anticipates computational complexity theory by a decade and modern cryptography by two decades.  Not bad for someone whose “best known work is in game theory”.  It is hard not to compare this letter to Goedel’s famous 1956 letter to von Neumann also anticipating complexity theory (but not cryptography).  That both Nash and Goedel passed through Princeton may imply that these ideas were somehow “in the air” there.

Hat tip: this declassified letter seems to have been picked up by Ron Rivest, who posted it on his course’s web-site, and it was then blogged about (and G+ed) by Aaron Roth.

Edit: Ron Rivest has implemented Nash’s cryptosystem in Python.  I wonder whether modern cryptanalysis would be able to break it.

STOC 2012 accepted papers

The list of papers accepted to STOC 2012 has been posted here.  Many are related to Game Theory or Economics:

  1. A quantitative Gibbard-Satterthwaite theorem without neutrality by Elchanan Mossel and Miklos Z. Racz
  2. Finding red balloons with split contracts: robustness to individuals’ selfishness by Manuel Cebrian, Lorenzo Coviello, Andrea Vattani, and Panagiotis Voulgaris
  3. Quantum Money from Hidden Subspaces by Scott Aaronson and Paul Christiano (slides)
  4. A Complementary Pivot Algorithm for Market Equilibrium under Separable, Piecewise-Linear Concave Utilities by Jugal Garg, Ruta Mehta, Milind Sohoni, and Vijay V. Vazirani (abstract)
  5. Polyhedral Clinching Auctions and the Adwords Polytope by Gagan Goel, Vahab Mirrokni, and Renato Paes Leme
  6. Prior-Free Auctions with Ordered Bidders by Stefano Leonardi and Tim Roughgarden
  7. Competitive Contagion in Networks by Sanjeev Goyal and Michael Kearns
  8. On the limits of black-box reductions in mechanism design by Shuchi Chawla, Nicole Immorlica, and Brendan Lucier
  9. From Query Complexity to Computational Complexity by Shahar Dobzinski and Jan Vondrak
  10. Budget Feasible Mechanism Design: From Prior-Free to Bayesian by Xiaohui Bei, Ning Chen, Nick Gravin, and Pinyan Lu
  11. Rational Proofs by Pablo Azar and Silvio Micali (abstract)
  12. Minimax Option Pricing Meets Black-Scholes in the Limit by Jacob Abernethy, Rafael Frongillo, and Andre Wibisono (abstract)
  13. An Algorithmic Characterization of Multi-Dimensional Mechanisms by Yang Cai, Constantinos Daskalakis, and S. Matthew Weinberg
  14. Online Matching with Concave Returns by Nikhil R. Devanur and Kamal Jain
  15. A Rigorous Analysis of One-Dimensional Schelling Segregation by Christina Brandt, Nicole Immorlica, Gautam Kamath, and Robert Kleinberg
  16. Matroid Prophet Inequalities by Robert Kleinberg and S. Matthew Weinberg

EC’12 Innovations

I’ve enjoyed reading the lively discussion sparked by Ariel’s post endorsing a name change for EC.  This seems like a good time to talk about some intriguing reforms that are actually taking place with this year’s conference.  There are at least three noteworthy innovations in the EC’12 call for papers:

  1. The CFP designates three focus areas: theory and foundations; artificial intelligence; and experimental and applications.  The authors of each paper must designate it as belonging to at least one of the areas, and multiple focus areas are allowed.  The SPC and PC have members in each focus area, and the labeling of papers by focus area will guide the assignment of reviewers.
  2. There is the possibility of holding parallel sessions, if the number of submissions of sufficiently high quality exceeds the number of papers that could be presented in a single-session conference.
  3. The submissions are in single-column format with a limit of 18 pages.  (Hooray!  No more squinting at 9-point font when reviewing EC submissions.)

Concerning the first two of these points, I thought it could be beneficial to have an open discussion on this blog about their rationale and their implications for the process of reviewing, selecting, and presenting papers.  A few days ago I had an Internet chat with this year’s PC chairs, Panos Ipeirotis and Kevin Leyton-Brown, as well as Tim Roughgarden who was one of last year’s PC chairs.  In the process, I also got the scoop on another innovation that hasn’t yet been publicly announced: authors of work accepted for publication during the past year in other venues (journals or conferences) may submit proposals for poster presentations of their work.  The submission deadline for such posters will be later than the Feb. 6 deadline for full paper submissions; expect more details to be forthcoming.

The remainder of this post is a summary of my chat with Panos, Kevin, and Tim.

The parallel session option

EC has always been a single-session conference.  This clearly has some advantages, but of course it places a constraint on the number of papers that can appear in the conference.  As the number of submissions has grown sharply in recent years, it has become increasingly difficult to obey this constraint while accepting all of the papers that the program committee deemed worthy of presentation.  Kevin wrote, “We wanted to ensure that we accepted all the papers that we thought were worth featuring at EC.  I’m not sure how many that will be.  Maybe we’ll be able to do a single track after all, but we wanted to allow the quality of papers to drive the schedule, and to go parallel if that was what it takes.  We imagine not being parallel in all sessions.”  Tim added, “Last year, due to a number of constraints, we had to reject papers that had strong support from the PC. This was the hardest part, for me, of being co-PC chair last year.  Last year, there was no option of parallel sessions.  Since EC was part of FCRC, the hours available to present EC papers were unusually limited.  I also insisted that every accepted paper should receive a talk of at least 20 minutes.  (I think 25-30 minutes would be even better.)  Taking the intersection of these constraints, the maximum number of papers we could accept was 48.  (It wound up being 49, due to two similar accepted papers sharing a common talk slot.)  Meanwhile, there were at least 55-60 submissions that had very strong support from the PC.”

Focus Areas

Q:  Are there informal targets for the distribution of papers by focus area?

A:  (Kevin)  Absolutely no targets.  Panos and I were very explicit that we’d be OK with having only a handful of papers get in from one of the areas.  They’re a device for assigning reviewers.  The main purpose, though, is communication and commitment.  That is, we wanted to credibly signal to people outside the “core theory” area of EC that they ought to submit high-quality work to the conference, and that if they do, they’ll be reviewed by people who value the kind of work that they do.  Making separate SPCs for the different areas was a way of communicating how we defined those communities, and of making a visible promise that these people would look after the papers.  Allowing multiple tags per paper tries to reflect the fact that these “areas” aren’t partitions of the topics studied by the EC community, or indeed of the members of this community.

My sense is that in past years, reviewing of different areas has been done fairly and well.  However, that’s not always how it’s been perceived by researchers from non-theory disciplines.  In some cases, they think they’ve been rejected by theorists for not taking the right approach, when they’ve actually been rejected by members of their own community, for the right reasons.  The new process will make clear what’s going on.  Rather than serving as a device for imposing quotas on submitted papers, I hope the focus areas change perceptions about the conference enough before submission that we end up with a wider range of strong submissions.

(Tim)  I think such signalling is a good idea. Last year, Yan and I did all the reviewing assignments by hand, and put in a lot of work to ensure that submissions were reviewed by the PC members who were most likely to appreciate them. But the process is not transparent to the authors. With different tracks, it should be obvious to authors that their submission will be assigned to knowledgeable reviewers.

Q:  The CFP says that a paper submitted to two focus areas will be handled by the most qualified reviewers in the union of the two areas. Does that mean it’s possible that all reviewers of a paper will come from a single focus area, although the paper was submitted to two areas?

A:  (Kevin)  Yes, that’s possible.  It may be that we’ll do assignments by hand, but maybe not.  What we promise is the union.

Q:  Anything else you want to tell us about the focus areas?

A:  (Kevin)  We don’t intend to identify focus areas in the program.  For example, we envision a session on ad auctions that might include papers that are empirical, theoretical, or AI-based.  There’s no reason even to indicate which papers these are.  The goal is explicitly not to fragment EC into subconferences.

(Panos)  Plus a paper may belong to multiple areas.  The SPC is partitioned by focus area, but the PC is not.  We have PC members that have been nominated by SPC members across multiple focus areas.

Poster session(s)

(Panos)  We have a new idea for something separate from the reviewed paper track.  The idea is to let people submit accepted work (e.g., accepted in journals, or maybe another conference) and present in poster sessions.

(Kevin)  Not rigorously peer reviewed.  With a later deadline, too.

(Panos)  We will do lightweight checking that it is relevant, and that it is accepted for publication since the last EC.  (Full acceptance in the case of journal articles, not conditional.)

Concluding Thoughts

I’d like to thank Panos, Kevin, and Tim for volunteering their time to discuss these issues.  Blog readers, what do you think about these reforms?  How do you feel about EC going to parallel sessions?  Has there been a pro-theory bias in the reviewing of EC submissions in past years?  Will you be submitting your work to the new poster session?
