
Archive for June, 2011

The list of papers accepted to SAGT 2011 (to be held in Salerno, Italy, on October 17-19) is now available.


A postdoc in Economics of Privacy is being offered at UPenn, with PIs Sham Kakade, Michael Kearns, Mallesh Pai, and Aaron Roth.  The types of questions this postdoc aims to address include:

  • How should a market for private data be structured?
  • How should we structure other markets to properly account for participants’ concerns about privacy?
  • How can market research be conducted both usefully and privately?


USC has an interdisciplinary project on Game Theory and Human Behavior, which includes AGT researchers David Kempe, Shang-Hua Teng, and the just-hired Shaddin Dughmi, as well as the Teamcore research group, headed by Milind Tambe, which works on “applying research in multiagent systems (MAS) to [...] security, safety and sustainability.”  The Teamcore group is offering postdoc positions.


Back from FCRC

Just returned from FCRC 2011, which included, among other conferences, EC (with its workshops and tutorials), STOC, and CCC; I attended some talks at each, in addition to the plenary talks.  So much was going on, and there were so many competing things to do, that it is hard to give any flavor of the large event (maybe Lance’s post succeeds in describing the hectic atmosphere), so let me just briefly mention three talks that left an impression on me.

  1. Valiant’s Turing lecture was a valiant attempt to point out the computational facet of evolution and to call for a computational study of it.  This is not a completely clear agenda, but I do think that Valiant took a step toward defining and motivating it.  I believe that this type of high-risk, high-yield, not-fully-clear agenda is exactly what Turing awardees should be focusing on, and what Turing award lectures should be used for.
  2. The STOC best student paper award went to Analyzing Network Coding Gossip Made Easy by Bernhard Haeupler.  His talk was a model of talks that give you a take-away: he explained the problem (k messages need to be broadcast in an n-node network using the following simple algorithm: at each stage each processor chooses a random neighbor and sends it a message), the logic of network coding (to avoid losing a logarithmic factor to the coupon collector, the message sent is not one of the original ones received by the processor in question, but rather a random linear (mod 2) combination of them), and a 1-slide, understandable-on-the-spot proof that it works quickly (replacing, and significantly generalizing, previous 30-page proofs).  Wow.  (A toy simulation of the protocol appears right after this list.)
  3. In EC, the paper Reserve Prices in Internet Advertising Auctions: A Field Experiment by Michael Ostrovsky (Stanford) and Michael Schwarz (Yahoo! Research) described an application of auction theory at Yahoo.  They analyzed the distribution of advertiser values for advertising on about half a million keywords, and for each keyword calculated the optimal reserve price à la Myerson’s optimal auction.  These new keyword-specific minimum prices replaced Yahoo’s previous fixed minimum price (10 cents, I think) and indeed yielded a significant increase in Yahoo’s revenue for the quarter.  This is quite an impressive feat, although frankly I question Yahoo’s business decision to increase revenue this way: it transfers value away from advertisers and, in my opinion, is likely to drive them away from Yahoo in the longer run.  (A toy reserve-price computation appears below.)
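
To make the protocol concrete, here is a minimal toy simulation of random linear network-coding gossip (my own sketch, not Haeupler’s code; the topology and the parameters K and N are made up for illustration).  Each node keeps the GF(2) span of the coefficient vectors it has received and in every round pushes a random combination to a random neighbor; tracking coefficient vectors alone suffices, since a node can decode exactly when its span has full rank:

    import random

    # Hypothetical parameters: K messages, N nodes on a ring with chords
    # (any connected topology works).
    K, N = 8, 16
    random.seed(0)
    neighbors = {i: [(i - 1) % N, (i + 1) % N, (i + N // 2) % N]
                 for i in range(N)}

    class Node:
        """Tracks the GF(2) span of received coefficient vectors.
        Payloads are omitted: once the coefficient matrix has rank K,
        recovering the actual messages is routine Gaussian elimination."""
        def __init__(self):
            self.basis = {}  # leading-bit position -> coefficient vector

        def receive(self, vec):
            while vec:  # reduce against the basis; keep it if independent
                lead = vec.bit_length() - 1
                if lead not in self.basis:
                    self.basis[lead] = vec
                    return
                vec ^= self.basis[lead]

        def packet(self):
            # a uniformly random GF(2) combination of everything known
            out = 0
            for v in self.basis.values():
                if random.getrandbits(1):
                    out ^= v
            return out

    nodes = [Node() for _ in range(N)]
    for i in range(K):  # node i initially holds original message i
        nodes[i].receive(1 << i)

    rounds = 0
    while not all(len(nd.basis) == K for nd in nodes):
        rounds += 1
        sends = [(random.choice(neighbors[i]), nodes[i].packet())
                 for i in range(N)]
        for dest, vec in sends:  # all sends happen simultaneously
            if vec:
                nodes[dest].receive(vec)

    print(f"all {N} nodes can decode all {K} messages after {rounds} rounds")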

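For flavor, here is a sketch of the kind of per-keyword computation involved, under assumptions of my own: a hypothetical lognormal fit of advertiser values (the distribution family and its parameters are mine, not the paper’s), for which the optimal reserve is the zero of Myerson’s virtual value:

    import math

    # Hypothetical lognormal fit of one keyword's advertiser values.
    MU, SIGMA = 0.0, 0.5

    def pdf(x):
        return (math.exp(-(math.log(x) - MU) ** 2 / (2 * SIGMA ** 2))
                / (x * SIGMA * math.sqrt(2 * math.pi)))

    def cdf(x):
        return 0.5 * (1 + math.erf((math.log(x) - MU) / (SIGMA * math.sqrt(2))))

    def virtual_value(r):
        # Myerson: phi(r) = r - (1 - F(r)) / f(r); the optimal reserve
        # solves phi(r) = 0, assuming phi crosses zero once ("regularity").
        return r - (1 - cdf(r)) / pdf(r)

    def myerson_reserve(lo=1e-6, hi=100.0):
        for _ in range(100):  # bisection on the sign of the virtual value
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if virtual_value(mid) < 0 else (lo, mid)
        return (lo + hi) / 2

    print(f"optimal keyword-specific reserve: {myerson_reserve():.3f}")
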

During iAGT, Vince Conitzer announced a new ACM journal titled “Transactions on Economics and Computation,” which he is co-editing-in-chief (or is that co-editor-in-chiefing?) with Preston McAfee.  The topics selected are quite similar to those of EC, including Algorithmic Game Theory, Mechanism Design, Recommendation/Reputation systems, electronic commerce, computational social choice, and such (see the first frame in http://www.youtube.com/watch?v=tnDVx6il8fg ).  This is the first journal aimed squarely at AGT and the border between computation and economics, and may be considered a sign that our field has reached scientific adulthood.

Readers of this blog may recall that I am not a great fan of the current scientific journal system.  I prefer the CS conference-based one, and would like to see a more flexible system evolve in the future, where dissemination is handled by arXiv-like web repositories and selection by a flexible, non-exclusive system that ranges from the informal recommendations of readers or bloggers to various more formal overlapping workshops, conferences, and “virtual journal” committees.

In the past decade there have been several discussions about whether to start an AGT/comp-and-econ journal, and my personal opinion was always against it.  I felt that it was beneficial for an AGT/E paper that appeared in a CS conference to also reach other audiences that use journals, such as game theorists (e.g. GEB), economists (e.g. Econometrica), or operations researchers (e.g. MOR).  Additionally, I felt that the field was too young and lacked sufficient standards for ensuring high quality, so some “adult supervision” from a more established area (whether a GT/econ-leaning one or a CS-leaning one) was beneficial.

Now, however, I think that the time has come for our own journal.  While many AGT/comp-econ papers should still aim to reach a more diverse audience and attempt publication in other forums, at this point we have enough papers that follow a clearly AGT agenda and so can justifiably go to this journal.  Of the maybe 200 AGT/comp-econ papers per year (appearing in EC, SAGT, WINE, and associated workshops, as well as in TCS forums like STOC/FOCS/SODA and AI forums like AAAI/AAMAS/IJCAI), many would find a proper resting home in this new journal.  I certainly sometimes question the value of submitting a final version of a conference paper to any journal, but at this point this is how academic promotion still works, and that cannot be ignored given the large number of young scientists in our field.

So, congratulations are in order.

PS. At the same time, GEB is devoting a special issue, edited by Jason Hartline, to papers that appeared in recent FOCS/STOC/SODA conferences.


The June 2011 issue of SIGecom Exchanges, the newsletter of the ACM Special Interest Group on Electronic Commerce, is now online.


Last week I spent all of my time at the iAGT workshop in Jerusalem, which I organized together with Michal Feldman and my graduate student Tali Gutman (who really did everything).  This workshop is part of the activities of the Semester on Algorithmic Game Theory at the Institute for Advanced Studies at the Hebrew University, with additional funding from the Google grant on electronic markets and auctions, as well as from Microsoft.  The workshop was packed, with over 100 registered participants, a great list of 35 speakers, a poster session, a panel on “future directions in AGT,” trips to Masada and the Jerusalem old city, and it ended with a party at Amos Fiat’s new 300-year-old house.  As an organizer I think that the workshop went great, with wide coverage of AGT, excellent talks, high energy levels, and real “community building.”  We are posting videos of all the talks on the web, currently in three formats: .mov files, .flv files, and, slowly, uploads to YouTube.

Let me mention two talks that left an especially large impression on me (both from the first day, since as the workshop progressed my brain capacity kept decreasing).  Paul Goldberg talked about his paper “The Complexity of the Homotopy Method, Equilibrium Selection, and Lemke-Howson Solutions” with Papadimitriou and Savani.  While I had seen this paper before, I needed Paul’s crystal-clear talk to really understand the point: while the problem of finding an arbitrary Nash equilibrium is PPAD-complete, the problem of finding the “natural Nash equilibrium” — the one found, e.g., by the Lemke-Howson algorithm — is harder, namely PSPACE-complete.  This distinction is similar to the well-known one for the canonical PPAD-complete problem, which asks to find an “end of line” in an exponential-size graph defined by a given Boolean circuit: finding an arbitrary end of line, in addition to the given one, is PPAD-complete, while finding the end of line at the other end of the given one is PSPACE-complete.
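
To spell out the end-of-line setup, here is a toy sketch of mine (not from the paper) in which the exponential-size graph is given only implicitly, with a Python function standing in for the Boolean circuit; the “natural” line-following algorithm is trivial to state but can take exponentially many steps in the circuit size:

    N_BITS = 20  # the implicit graph has 2**N_BITS nodes

    def successor(v):
        # Stand-in for the circuit defining the graph; here, one long path
        # 0 -> 1 -> ... -> 2**N_BITS - 1 (a hypothetical worst case).
        return v + 1 if v < 2 ** N_BITS - 1 else None

    def follow_line(source):
        """Walk from the given end of the line to its other end."""
        v, steps = source, 0
        while successor(v) is not None:
            v = successor(v)
            steps += 1
        return v, steps

    # Time is proportional to the line's length, which can be exponential
    # in the circuit size -- the intuition for why finding this specific
    # end is PSPACE-complete while finding *some* other end is "only" PPAD.
    end, steps = follow_line(0)
    print(f"reached node {end} after {steps} steps")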

Nina Balcan talked about her paper “Learning submodular functions” with Harvey (re-titled “learning valuation functions” so as to fit the workshop theme more directly).  The main technical result is a new construction of a family of matroids over m items.  This family has the following property: there is a fixed collection of super-polynomially (in m) many sets such that for any assignment of low/high to each of these sets, some matroid in the family has polylogarithmic rank on all the “low” sets and m^{1/3} rank on all the “high” sets.  This technical feat is useful for proving lower bounds on various tasks you may want to perform on submodular functions (such as learning them in an approximate sense), since the matroid rank function is submodular, so we get a rich new family of submodular functions to make our life hard.  Even more surprising to me was a remark made by Nina after the talk that matroid rank functions actually belong to the even more restricted family of “(gross) substitutes,” so such lower bounds apply to this smaller class as well.
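
The lower bounds lean on the fact that matroid rank functions are submodular; here is a quick brute-force sanity check of that fact on a toy partition matroid of my own choosing (unrelated to the Balcan-Harvey construction):

    from itertools import combinations

    ITEMS = range(6)
    BLOCKS = [({0, 1, 2}, 2), ({3, 4}, 1), ({5}, 1)]  # (block, capacity)

    def rank(S):
        """Partition-matroid rank: count within each block, capped."""
        return sum(min(len(S & block), cap) for block, cap in BLOCKS)

    subsets = [frozenset(c) for r in range(7) for c in combinations(ITEMS, r)]
    assert all(rank(S) + rank(T) >= rank(S | T) + rank(S & T)
               for S in subsets for T in subsets)
    print("rank is submodular on all", len(subsets) ** 2, "pairs of subsets")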

An interesting theme I could see in several talks was that we continue to look at new notions for modeling the private information in mechanism design, and we are starting to address more sophisticated information structures than the simple private-value setting.  Early work by computer scientists embraced the notion of dominant strategies — a worst-case notion that we like in CS — and shunned the standard economic modeling of Bayesian priors.  Many of the new papers try to find some middle ground: starting with the Bayesian models and attempting to take a step back so as to make them “distribution free” à la the “Wilson doctrine,” or to rely on less information.  See the Wednesday morning session and the Thursday afternoon session for examples of such talks.

The panel discussion was quite lively, much of it focusing on Kevin Leyton-Brown’s comment that the elephant in the room is that we are all working on theoretical models that we don’t believe reflect reality.  (My brother, who is a philosopher of science, told me that this “modeling question” is much discussed in his field.)  The natural continuation was the question of whether and how we should be in touch with practitioners, so as to make sure that we don’t lead ourselves astray with worthless theoretical questions.  Silvio Micali insisted that it is our duty to lead the practitioners and provide them with new notions and ideas that they cannot even imagine today, rather than to follow them and solve their current concerns, while many others talked about the reality check that interaction with practitioners brings.

Much more was going on, of course, but I’ll stop now and start getting ready for FCRC next week.
