
Archive for January, 2011

Speed-Price Equilibrium

For years, everyone entering Tel-Aviv from the east (e.g. from Jerusalem or from the airport) during the morning rush hour got stuck in the usual traffic jam. Every day. An hour or so. About a week ago a new toll “fast lane” was opened, bypassing the traffic jam. Part of the intention is to reduce traffic into Tel-Aviv: public transportation and carpools use the lane for free, and a huge parking lot has been set up at the entrance, with free public transportation to Tel-Aviv provided. (There are also various criticisms of the lane, such as the fact that a lane that was once “free for all” is now “for the rich”.)

What is most interesting and novel, certainly for this blog, is the pricing mechanism: the price is set dynamically and may vary from 6 NIS to 75 NIS according to traffic (that’s from less than $2 to over $20), and is displayed prominently at the entrance to the toll way. Automatic systems monitor the number of vehicles on the lane and their speed, and adjust the price so as to ensure a driving speed of 70 km/h. It seems that the designers take it for granted that an equilibrium will be reached: as prices go up, fewer drivers will elect to use the fast lane, reducing congestion and increasing the speed; when the speed rises above 70 km/h, prices will decrease again, presumably boosting demand, etc. This makes sense, but a skeptical engineer may question each of these logical steps. Will drivers even be sensitive to the prices? Maybe they will interpret a high price as a signal that traffic outside the lane is really awful, encouraging more drivers to take the fast lane? I would tend to view the consistent emergence of an equilibrium as a non-trivial validation of economic theory. Let’s wait and see. The last time I checked, there weren’t enough people using the fast lane even at the lowest price. I certainly enjoyed that when I traveled to Tel-Aviv yesterday, reaching my destination almost an hour earlier than I’m used to.
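(Out of curiosity, the basic feedback rule behind such dynamic pricing takes only a few lines to sketch. The lane’s actual controller, its step size, and its update frequency are not public, so everything below — names and constants alike — is purely illustrative.)

```python
# Purely illustrative sketch of a speed-targeting toll controller; the real
# lane's update rule is not public, so these names and constants are guesses.

MIN_PRICE, MAX_PRICE = 6, 75   # NIS, the posted price range
TARGET_SPEED = 70              # km/h, the speed the operator aims to ensure

def adjust_price(price, measured_speed, step=1):
    """One feedback step: raise the toll when the lane slows below target
    (to thin out demand), lower it when the lane runs faster than needed
    (to attract drivers), clamped to the allowed price range."""
    if measured_speed < TARGET_SPEED:
        price += step
    elif measured_speed > TARGET_SPEED:
        price -= step
    return max(MIN_PRICE, min(MAX_PRICE, price))
```

Whether repeatedly applying such a rule actually settles at an equilibrium is, of course, exactly the empirical question raised above.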


Guest post by Ariel Procaccia:

The recent AI magazine special issue on AGT is a good excuse to discuss an interesting question: Can AGT/E enable AI in some fundamental way? Or, are we (AI researchers working in AGT/E) betraying the legacy of our founding fathers – Alan Turing, John McCarthy, Marvin Minsky, and, well, Isaac Asimov – by not focusing on our true purpose – building intelligent robots, bringing about the singularity, or at least making better vacuum cleaners? These questions are all the more challenging because for now I want to avoid a related thorny issue, that of defining AI. I argue below that the answer to the first question is yes.

Here is the argument in a nutshell (it seems that a similar argument was independently proposed by the wise Aviv Zohar). One of the classic goals of AI is to create a software agent that seems intelligent to a human observing it (e.g., can pass the Turing test). However, in the last two decades a significant portion of AI research has moved from focusing on single agents to studying multiagent systems. Now, game theory attempts to distill the principles of rational interaction. But rationality is just another word for artificial intelligence: it is not how humans actually behave, but rather how we perceive intelligent behavior. Therefore, a multiagent system in which interactions are governed by game theory (or in which decision making is informed by social choice theory, for that matter) would be perceived as intelligent by a human observing it. In other words, AGT/E enables artificial intelligence on the system-wide level rather than on the individual level.

Now that we are convinced that AGT/E plays a fundamental role in AI (but Turing must be turning in his grave, or is Turning turing?), it remains to determine how AGT/E research in AI is distinct from AGT/E research in, say, theory of CS. Looking at the special issue’s table of contents, some major theory-oriented AGT/E topics are conspicuously missing, e.g., price of anarchy (which is admittedly on the decline even in theory) and algorithmic mechanism design in the Nisan-Ronen sense – truthful approximations for computationally intractable problems. The last question was eloquently addressed by Elkind and Leyton-Brown in their editorial for the special issue. They pointed out two (related) distinctions. First, AI researchers are interested in reasoning about practical multiagent systems, and thus tend to consider more realistic models, employ an empirical approach where analysis fails, and test their methods through competitions. Second, many AI researchers do not in general view computational hardness as an insurmountable obstacle, and thus employ heuristics where appropriate. I would like to raise a third point. Modern AI encompasses a world of ingenious ideas that, we are discovering, have a considerable conceptual interface with AGT/E. Therefore, some AGT/E work on the AI side emphasizes the connections with machine learning (the fascinating sociology of machine learning and AI is beyond the scope of this post), knowledge representation and reasoning, decision making under uncertainty, planning, and other well-studied areas of AI.

Wait, but what is AI? Unfortunately, no one can be told what AI is. You have to see it for yourself.


The schedule (with abstracts) of last week’s IPAM workshop on AGT is now posted.


AGT and AI

AI Magazine has published a special issue devoted to “Algorithmic Game Theory and Artificial Intelligence”. It contains the following list of articles:

The articles are restricted to subscribers, and the links above point to openly accessible copies, where available.

[ht: thanks to Ariel Procaccia for the pointer as well as for the links.]


Report from ICS 2011

A few days ago I returned from Beijing, where I attended the Innovations in Computer Science (ICS 2011) conference. The most remarkable thing about this conference for me was its venue: ITCS at Tsinghua University in Beijing. I had never been to China before, and my expectation was to see an awakening giant; very soon it was clear that the giant is already completely awake: Beijing basically has the same look and feel as a huge Western metropolis: skyscrapers, large roads, traffic jams, people dressing and behaving similarly to what we see in the West, cars looking more or less the same (though the driving is rougher, say southern-European style), Western-style cafes, and fast-food places abound (OK, there are somewhat more Chinese restaurants than usual). Tsinghua University also felt very much like the old and grand universities that I’m used to seeing in Europe or the USA, in terms of architecture, vegetation, and students. Frankly, I was at first disappointed by the lack of any “foreign” or “developing” feel to the place. However, there were compensations: I definitely felt the awe of a country boy coming to the big city, an awe I used to feel when arriving in cities like NYC, Paris, Rome, or London, but have since lost. (I wonder whether this is due to my own de-sensitization or whether their grandeur has really dimmed.) The people seem open and friendly, and getting along as a tourist is very easy. Westerners seem to draw no special attention from anyone, and even Uri Zwick speaking Chinese (which he has been studying for years) did not seem to draw any particular attention (except from the other conference attendees). Personally, I was especially delighted by the beautiful parks in which, despite the biting cold weather, there were everywhere groups of people dancing, singing, playing Chinese hacky sack, and engaging in other social activities.
Obviously I got to see only a small part of Beijing (the Tsinghua area and a few of the usual tourist attractions), not to mention the rest of China, but I got a distinct sense that people look happier than I’m used to seeing elsewhere.

With that said, let me say a few words about the (extremely well run) conference itself (which paid the airfare and accommodations of all speakers). Certainly after the many discussions of this new conference last year (e.g. here, here, here, here, and here), it is natural to ask about the quality of papers as well as about their conceptual character. My judgment of both is mixed: in terms of quality I would compare ICS to the main European theory conferences (ICALP, ESA): all papers do have something new and significant, but only a few seem very strong (as many papers in FOCS/STOC, or even in SODA/EC, are). Not bad, I’d say, for the 2nd year of existence and for a non-North-American conference. In terms of the “innovative” character expected of papers, one does get a clear feeling that most papers (but still far from all of them) do try to advance some conceptual message, and make it explicit in the writeup as well as in the presentation, but in most cases this message is still tied to a usual type of “result”. (An example of a truly conceptual talk that greatly appealed to me was Ueli Maurer’s (joint work with Renato Renner), advocating an algebraic abstraction for the foundations of cryptography.) Perhaps one can see a theme common to many papers in the conference: having a very interesting question, but only a mildly interesting answer.

ICS had a large number of AGT-related papers, two invited talks related to AGT, as well as a panel on AGT.

Gil Kalai’s invited (highly conceptual) talk on “influence and noise” started with the classic KKL work on influences and reached some of his recent results on approximation in computational social choice. Gil also writes about ICS on his blog.

Silvio Micali’s invited talk described his paper with Jing Chen, Conservative Rationalizability and The Second-Knowledge Mechanism. This work considers a simple auction of a single item in a setting with private values where players have some information about each others’ values. The authors carefully choose a non-Bayesian modeling, and suggest and motivate a very robust (“conservative”) notion of implementation which is worst case and takes into account only a single level of iteration in the removal of dominated strategies. Significantly, this notion does not attempt to pinpoint an equilibrium but rather reaches the implementation goal under any reasonable profile of strategies. I really liked the notions used and their argumentation and motivation, but the mechanism itself was rather unilluminating, basically asking players to tell all about each other, rewarding them for information, and severely penalizing anyone caught cheating. But as Silvio argued: “all is fair in Love and Mechanism Design”.

An important algorithmic result regarding game theory was the paper “Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor” by Thomas Dueholm Hansen, Peter Bro Miltersen, and Uri Zwick. They consider the long-standing open problem of computing the value of stochastic zero-sum games played on graphs, and provide the first strongly polynomial algorithm for the “constant-discount” case. The problem without discounts (or with the discount factor being part of the input) is still open, and is one of the few problems known to lie in NP ∩ coNP but not known to have a polynomial time algorithm.
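To make the setting concrete for readers unfamiliar with it, here is a toy sketch of the *model* — turn-based stochastic games with a constant discount factor — solved by plain value iteration. This is NOT their algorithm (they analyze strategy iteration, which is more involved); the representation and names below are my own and purely illustrative.

```python
# Toy illustration of turn-based discounted stochastic games: each state
# belongs to a 'max' or 'min' player, actions yield a reward and a
# probability distribution over next states, and future rewards are
# discounted by a constant factor gamma.  Plain value iteration converges
# because the update is a gamma-contraction.

def value_iteration(states, gamma=0.9, iters=2000):
    """states maps s -> (player, actions); player is 'max' or 'min', and
    each action is (reward, [(prob, next_state), ...])."""
    v = {s: 0.0 for s in states}
    for _ in range(iters):
        v = {s: (max if player == 'max' else min)(
                 r + gamma * sum(p * v[t] for p, t in trans)
                 for r, trans in actions)
             for s, (player, actions) in states.items()}
    return v
```

With a constant discount factor the horizon is effectively bounded, which is exactly what makes the strongly polynomial analysis possible; without a discount, no polynomial algorithm is known.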

One of my favorite papers in the conference was “Is Submodularity Testable?” by C. Seshadhri and Jan Vondrak. Their starting point is that there are many algorithms — for various tasks on sub-modular functions (including combinatorial auction allocation) — that run in time sublinear in the representation of the function, often polynomial in n, the description length of items in the domain. The fact that these algorithms work for sub-modular functions (but not for all functions) suggests that sub-modularity “is noticed” in sub-linear time; so they ask the basic question of whether sub-modularity can be tested in sub-linear time, in the usual property-testing sense. Their results are clever and insightful but much weaker than one would desire — which exactly suggests that there is some interesting and important structure there yet to be found. As I tend to view sub-modularity as one level in a hierarchy of “complement-freeness”, I find similar questions to be of interest at other levels of the hierarchy as well.
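For concreteness, here is the exact (brute-force) version of the question: a naive check of the diminishing-returns condition, exponential in the size of the ground set. The function and helper names are mine; the paper asks whether this can instead be *tested* with far fewer queries to f.

```python
from itertools import chain, combinations

def is_submodular(f, ground):
    """Brute-force check of diminishing returns: for every S subset of T and
    every i outside T, f(S + i) - f(S) >= f(T + i) - f(T).  This takes time
    exponential in |ground|; the property-testing question is whether a
    sublinear number of queries to f can suffice (with some error allowed)."""
    ground = frozenset(ground)
    subsets = [frozenset(c) for c in chain.from_iterable(
        combinations(ground, k) for k in range(len(ground) + 1))]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for i in ground - T:
                if f(S | {i}) - f(S) < f(T | {i}) - f(T) - 1e-9:
                    return False
    return True
```

For example, coverage-style functions pass this check, while a convex function of the set size, such as |S| squared, fails it.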

The panel on Algorithmic Game Theory/Economics was pretty interesting (at least to me — the organizer). Despite a few pointed questions from the audience (“Why do most economists continue to ignore AGT?”), the general atmosphere was enthusiastic for the field (not surprising, again given the organizer). From different points of view, people thought that the field is doing well, with the level of agreement getting to the point that several people commented on how computer scientists should learn to disagree with each other more often (like economists viciously do). Yet, I could see two different approaches among the panelists: the “moderates”, Shang-Hua Teng and Peter Bro Miltersen, emphasized the algorithmic side of AGT, focusing on the interesting algorithmic questions and the beautiful structure underlying them. The “extremists”, Silvio Micali and myself, de-emphasized the algorithmic aspects per se, and were interested more in new points of view and types of challenges coming from the meeting of economic/game-theoretic considerations and computational ones. Several people made a point of disparaging the concept of Nash equilibrium, and not just due to its problematic computational status (as Gil Kalai said: the computational complexity is the least of the problems of the Nash equilibrium concept as a model of behavior).

Next year, ICS 2012 will be held at MIT.


AGT/E 2010 year in review

2010 has seen the field of Algorithmic Game Theory and Economics continue its process of maturing. I cannot point out any breakthroughs this year, conceptual or technical, but we have seen a significant widening in scope, in points of view, in modes of analysis, and in application areas, as well as an increase in technical depth.

Interest in Computational Social Choice is rising, and the dividing line between it and Algorithmic Mechanism Design is blurring.  Among the topics of interest are various notions of approximation, notions of fairness, and positive results without quasi-linearity.

In Algorithmic Mechanism Design, computer scientists are slowly graduating from the basic “private-value dominant-strategy quasilinear” settings and paying more attention to more sophisticated, problematic, and realistic Bayesian models, correlated values, and other complications. On the basic open problem of AMD, that of reconciling computational complexity with incentives, there has been some progress, but the basic challenge still seems beyond our powers.

The more applied world of research into ad auctions is also widening, with most interest now going beyond the basic multi-slot “adwords auction”, to the more sophisticated types of challenges raised by display auctions, ad-exchanges, privacy issues, social networks, and such.

On the core algorithmic/complexity questions of the computational difficulty of finding equilibria in games and markets we have seen only a little progress, mostly for some market variants. There was some commotion regarding the complexity of correlated equilibria in concise games, but that was soon settled.

The extremely influential “price of anarchy” line of work seems to be getting closer to being exhausted. Its point of view — analyzing the loss due to rationality constraints using a competitive-ratio framework — still seems to rule, but now this would be just one ingredient in a research effort rather than the effort itself.

Beyond the growing set of AGT conferences, and the growing place of AGT within other fields, we have seen a multitude of workshops and schools on AGT throughout the year, from Bertinoro to Lisbon to Saarbrücken to Aarhus, and from Tehran to Shanghai to Auckland, with more coming next year, from LA to Jerusalem.

As far as I can see, as “Algorithmic Game Theory/Economics” gets larger and wider, it is also getting more fragmented, and I have an acute feeling that my point of view is restricted. Comments from readers adding their points of view are highly encouraged.

Happy New Year!
