
Archive for November, 2009

One of the main culture clashes between computer scientists and economists on the CS-econ frontier is whether “complexity of equilibria” matters.  The CS-y view of the matter is captured in Kamal Jain’s quote: “If your laptop can’t find it then neither can the market”.  Economists mostly don’t care since they see equilibrium reached every day, contrary to what CS says.  As Jeff Ely quips: “Solving the n-body problem is beyond the capabilities of the world’s smartest mathematicians.  How do those rocks-for-brains planets manage to pull it off?”  TCS folks who see complexity as the map of the world can’t really understand this indifference, as Lance Fortnow tweets: “I’m an economist so I can ignore computational constraints / I’m a computer scientist, so I can ignore gravity.”

Computational Complexity map of the world

The beautiful thing about studying systems at equilibrium is precisely the fact that this abstracts away the gory details of the process (aka dynamics aka algorithm) of reaching this equilibrium.  In principle, there are many different difficult-to-understand dynamics that all reach the same easy-to-understand equilibrium point.  This is all very nice as long as the equilibrium is indeed magically reached somehow by the “real” dynamics.  The obvious crucial question is whether this is the case in practice.  It seems that the general tendency among economists is to claim “yes”, to which the natural next question is “why”?  As computer scientists show, this is not a general characteristic of large games or markets.  Understanding the properties of interesting games and markets that make them actually reach equilibrium should be enlightening.  Maybe it is because economists choose to consider only those that do turn out to converge quickly, ignoring another large and interesting class of strategic scenarios?  Maybe it is because economists are thinking about “smallish” games, so their insights will not carry over to more complex realistic scenarios?  Maybe there is some deeper interesting structure that guarantees fast convergence?  Distinguishing between these possibilities is especially crucial as we aim to analyze the new artificial games and markets that are to be found — and designed — on the Internet as well as elsewhere.  Which economic and game-theoretic sensibilities will still hold in these complex unnatural circumstances?

Complexity is all about understanding such processes.  While the foremost question dealt with by computational complexity is that of “time” — how long a computational process needs in order to find the solution, in our case to reach (close to) equilibrium — this is not the only type of question and insight provided by complexity theory.  As one can see in the “map” above, there is a stunning variety of complexity classes, each attempting to capture a different facet of the challenge of finding the solution: how much space (memory) is needed?  Can we even tell when we reach a solution?  Does randomization help?  Does parallelism help?  Are approximations easier?  Does the solution have this or that particular structure?  In the case of finding equilibria, the classes PPAD and PLS give a very nuanced explanation of what is involved.  There are also “concrete” models that explicitly study specific parameters such as communication or queries.  One may dislike the fact that this complexity analysis does not restrict attention to natural dynamics but allows arbitrary and unnatural algorithms.  The kind of natural dynamics that economists usually have in mind is some kind of best-replying in the case of games and some kind of Walrasian auctioneer in markets.  The problem is that there are many variants of these that make sense: fictitious play, various forms of population dynamics, more sophisticated types of learning such as regret-minimization — and all of these can be enhanced with various orderings, smoothing attempts, tie-breaking rules, strategic look-ahead, re-starts, actions of the central planner, not to mention other more or less complex optimization and learning attempts.  The strength of complexity analysis is that it applies to all of these.  Any “lower bounds” are definitive: any practical system can be simulated by a computer, and thus no dynamics can succeed in general.  (Emphasis on “in general” — as mentioned above, the problems that you may be interested in may have special structure — so what is it?)  A statement of an “upper bound” may be less interesting as stated, but it immediately raises the challenge of either finding a natural algorithm=process=dynamic or pinpointing the complexity reason explaining why this is impossible.

This is a good point to refute several irrelevant objections to the applicability of computational complexity for analyzing dynamics of games and markets.  The first is the notion that humans or markets can undergo processes that cannot be computed.  Frankly, there is no evidence of this; there is certainly much about the world that we do not understand well enough to simulate, but there is no indication of any natural process that is inherently more powerful than computers.  This is the modern version of the Church-Turing thesis.  (It seems that some quantum phenomena cannot be simulated by usual computers, but that doesn’t change the principle — it would just replace classical computers with quantum ones in the few places that it matters.)  Even if you do subscribe to metaphysical dualism, do you want to base economics on it?  The second type of objection concerns the standard technical notions used in complexity, which obviously leave much wiggle-room: “polynomial time”, with its hidden constants, is not synonymous with efficient; worst-case analysis is not always the right notion, etc.  The point is that these are simply concise notions that usually seem to capture the issues well.  There are always cases where more refined notions are needed, and in such cases complexity theory can provide more precise answers: for example, in the analysis of basic problems like sorting or matrix multiplication, very exact results are obtained (with no hidden constants); similarly, cryptography is not based on worst-case analysis, etc.  There is no indication so far that the usual somewhat coarse notions of computational complexity miss something significant when applied to games or markets — quite the contrary in fact.  If such evidence emerges then complexity will not become irrelevant; simply, more refined analysis will be used.

Take for example the stunning difference between reaching a Nash equilibrium and reaching a correlated equilibrium.  While the latter is reached by various natural regret-minimization dynamics, there are no “non-trivial” dynamics that, in general, reach the former.  Let me say a word about this “non-triviality” by giving an example of what I would consider a trivial process: suppose that each player chooses a strategy at random every “turn”, unless his last strategy was already a best reply to those of the others (up to some \epsilon indifference).  At some point, the players will happen to “hit” an (\epsilon-)equilibrium.  This type of dynamics, which simply searches over the whole space of strategy profiles, provides no insight and is not useful in most practical situations.  The point of the triviality is not the random choices but rather the essentially exhaustive search over every possibility.  Many other proposed “dynamics” for Nash equilibrium or for market equilibrium are similarly trivial — in some cases they resort to simply trying all possibilities (in some approximate sense).  The dynamics for correlated equilibrium are not like this at all — they only look at a tiny “small-dimensional” fraction of the space of possibilities.  Why is that?  Complexity theory explains this phenomenon clearly: correlated equilibria can be found in polynomial time, but finding a Nash equilibrium (or many types of market equilibrium) is PPAD-complete.
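
To make the triviality concrete, here is a minimal Python sketch of such a process, restricted to pure strategy profiles for simplicity; the name trivial_dynamics, the cap max_turns, and the utility black boxes u[i] are illustrative assumptions, not anything from the post:

  import random

  def trivial_dynamics(n, m, u, eps=0.01, max_turns=10**6):
      # Every "turn", each player re-randomizes his strategy unless his
      # current one is already an eps-best-reply to the others.
      s = [random.randrange(m) for _ in range(n)]
      for _ in range(max_turns):
          happy = True
          for i in range(n):
              best = max(u[i](tuple(s[:i] + [a] + s[i+1:])) for a in range(m))
              if u[i](tuple(s)) < best - eps:
                  s[i] = random.randrange(m)   # not best-replying: re-randomize
                  happy = False
          if happy:
              return tuple(s)   # happened to "hit" an eps-equilibrium
      return None

If a pure \epsilon-equilibrium exists, the process will eventually stumble onto it, but only by blindly wandering over the whole profile space, which is exactly the non-insight being criticized.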


Read Full Post »

A few days ago I re-heard the story of how the game-theory course at the Technion was graded.  I’ve heard versions where the professor was Ran Smorodinsky and other versions with Dov Monderer, but I haven’t checked what “really” happened.  (Anyone with the real facts is welcome to send in a comment…)  Here is the story.

On the last day of class, the professor gathered all the students and made the following offer: he wants the course to have an average grade of X (say 80) with a standard deviation of Y (say 15).  If all students agree upon everybody’s grades in a way that conforms to these constraints, then this is how they will be graded, and the exam will be canceled.  Otherwise, the scheduled exam will take place as usual and will determine the grades.  The professor then left the class and let the students try to reach a joint decision.

Some versions of the story continue thus: the best student in the class immediately got up, said that she would not accept anything less than 100, and immediately left the room too.

Read Full Post »

Perhaps the first complexity question regarding equilibria that a Computer Scientist can ask is how difficult — computationally — it is to find a pure Nash equilibrium of a given game, or to report that none exists.

In the simple setting, the n-player game is given by its n utility functions, each of them given as a table of m^n numbers (where we assume for simplicity that each player has a strategy space of size m).  By going over all m^n possible strategy profiles and checking for each of them whether some player can gain by deviating, we get an efficient algorithm.  (Easy exercise: the running time of this algorithm is O(n^2 m^n) — improve it to linear in the input size, i.e. to O(nm^n).)
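
For concreteness, here is a short Python sketch of this brute-force scan; representing each u_i as a dictionary keyed by strategy profiles is an encoding assumed for the example:

  import itertools

  def pure_nash_brute_force(n, m, u):
      # Scan all m^n profiles; u[i][s] is player i's utility at profile s.
      for s in itertools.product(range(m), repeat=n):
          if all(u[i][s] >= u[i][s[:i] + (d,) + s[i+1:]]
                 for i in range(n) for d in range(m)):
              return s          # no player gains by deviating
      return None               # no pure Nash equilibrium exists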

This, however, is not the end of the story.  In many cases, we really have a “very large” game, and we would like to find an equilibrium in time which is significantly sub-linear in the size of the game’s utility tables.  How can we even talk about getting sub-linear time algorithms?  We need to read the input — don’t we?  There are two standard ways in Computer Science to talk about this question.  The concrete complexity approach assumes that the utility tables are given by some sort of “black box” which can answer queries of a certain type (e.g. given a strategy profile s, it can return the value of u_i(s) as a single atomic operation).  The set of queries that the black box supports will of course determine what can be done efficiently in this model.  The Turing-Machine complexity approach will consider concise representations of the utility functions, whose length is significantly less than the full size of the tables, and expect the algorithm to find an answer in time that is polynomial in the size of the concise input.  This of course gives different algorithmic problems for different concise representations.

In the rest of this post I will present the basic study of the complexity of finding a pure Nash equilibrium in these models (for general games as well as some special cases).  These basics are often overlooked as they are shadowed by the meaty PLS-completeness and PPAD-completeness results, but I prefer talking about the basics even though they contain no “significant” proofs.

Black Box Complexity

In the variant here we assume that we get n “black boxes”, one for each player, that can answer utility queries: given a strategy profile s, return u_i(s).  Another variant we could consider would also allow asking best-reply queries: given a strategy profile of the others s_{-i}, return the best reply s_i of player i (where we would need to specify how ties are handled).  We will concentrate on cases where m is not too large and we allow the running time to be polynomial in m, even though in some cases we could think about running in time which is logarithmic in m — the length needed to represent a strategy of a player.  In these situations, as a best-reply query can easily be simulated by m utility queries over all possible values of s_i, adding a best-reply query doesn’t significantly change the power of the model.  (One can imagine even more queries to ask of the black box, and the ultimate in this direction would be to allow asking any question about u_i, but only about a single u_i per query.  This model turns out to be equivalent to the communication complexity model.)

We will consider not only general games, but also two sub-classes of games that are known to always have a pure Nash equilibrium: Dominance-solvable games (those where iterated elimination of strictly dominated strategies leaves a single strategy profile), and Potential games.  For motivation it is best to start with a “good” algorithm that finds a pure Nash equilibrium in all dominance-solvable games:

  Let s be an arbitrary initial strategy profile
  Repeat mn times:
      For all players i = 1 ... n:
          s_i = best reply of i to s_{-i}

The total running time of this algorithm is O(m^2 n^2) operations (which include utility queries to the black box, where each best-reply calculation is implemented via m utility queries).  The interesting fact is that this algorithm always ends with a pure Nash equilibrium — as long as the game is dominance-solvable.  (Idea of easy proof: a strategy that was eliminated in the t‘th step of the strategy-elimination process can never be used after t iterations of the outer loop of this algorithm.)  What is remarkable about this running time is that it is exponentially smaller than the “input size”, as each black box contains m^n numbers in it.
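
A runnable Python version of the same loop might look as follows (a sketch assuming, as above, utility black boxes u[i] that can be called on a full strategy profile):

  def best_reply_dynamics(n, m, u):
      # mn rounds of round-robin best-replying; each best reply is
      # simulated by m utility queries, for O(m^2 n^2) queries in total.
      s = [0] * n                        # arbitrary initial profile
      for _ in range(m * n):
          for i in range(n):
              s[i] = max(range(m),
                         key=lambda a: u[i](tuple(s[:i] + [a] + s[i+1:])))
      return tuple(s)   # a pure Nash equilibrium if the game is dominance-solvable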

Can such an efficient algorithm — whose running time is polynomial in n and m — be found for other games?  For potential games?  Maybe even this algorithm — just repeated best-replying — will do the trick?  It is not difficult to see that for general games repeated best-replying may loop infinitely even if some pure Nash equilibrium exists.  For potential games, almost by definition, every sequence of best-replies will indeed terminate with a pure Nash equilibrium.  (In fact, ordinal potential games are exactly those games where every sequence of better-replies terminates.)  But how long can this take?  Unfortunately, exponentially long.  (Exercise: find an exact potential game with n players, each having two strategies, where starting from some strategy profile s, every sequence of best-replies requires exponentially many steps to reach a Nash equilibrium.)

So, can there be another efficient algorithm for general games or for potential games?  A simple observation is that in a potential game where the potential function has a polynomial-sized range, any best-reply sequence will terminate within this polynomial number of steps.  Can this be generalized even further?  One can formally prove a negative answer — a lower bound.  One of the simplest formalisms for this is an “adversary argument”: instead of first choosing a game and then letting the algorithm access the utility functions via the black boxes, the adversary makes up the game as he goes along.  Each time the hypothetical algorithm makes a query to the black box, the adversary answers it in the way that he thinks will be least helpful to the algorithm — making sure that for as long as possible the algorithm does not have sufficient information to provide an answer.  The idea behind this technique is that if the adversary can pull this off — i.e. keep answering in a consistent way such that a single answer is still not determined — then there must be some fixed game for which the hypothetical algorithm was not able to produce a correct answer in time — exactly the game that is still consistent with the adversary’s answers.

For the case of general games it is easy to provide such an adversary.  Let the adversary fix in his mind some game without any pure Nash equilibrium.  (For concreteness, say, 2 players playing matching pennies, and another n-2 players not caring at all, but still having two strategies each.  This gives a game with 2^n strategy profiles, whose structure is just many copies of the 2×2 game of matching pennies.)  Whenever the algorithm makes a query, the adversary will just answer with the value of the fixed game.  Clearly the algorithm can never find a pure Nash equilibrium since none exists.  The point is that the algorithm cannot decide that no Nash equilibrium exists before making at least 2^n queries.  If even a single strategy profile was not queried, then it could be that at that strategy profile all players get high utility (where “high” means higher than any value in the original game) and thus it is a Nash equilibrium.

For the case of potential games the adversary must be somewhat more clever.  We will consider an n-player game where each player has two strategies, and thus the space of strategy profiles is the n-dimensional Boolean hypercube.  Our adversary will produce an exact potential game, as will be evident from the fact that all players will have exactly the same utility at each strategy profile.  In such cases a pure Nash equilibrium is simply a local maximum of this common utility function.  Let us start with the basic idea, which does not quite do the job: at the t‘th query of the algorithm, s^t, the adversary will answer u_1(s^t) = ... = u_n(s^t) = u(s^t) = t.  The idea is that the algorithm cannot find any Nash equilibrium this way, since a strategy profile can be declared to be one only if all of its n neighbors (in the Boolean hypercube — corresponding to the n possible deviations by a single player) were already queried and found to have lower utility than the vertex itself.  Since the adversary keeps answering with increasing values, the only way this can happen is if the algorithm first “closes up” a strategy profile (vertex in the hypercube) by asking all of its neighbors and only then the enclosed vertex.  So to fix this gap, we’ll improve the adversary: when asked a query s^t, the adversary will look at the set of un-accessed vertices of the hypercube and see whether any vertices are left disconnected from the “rest of the hypercube”.  Specifically, let us call a vertex “compromised” if its connected component (after removing all queries s^1 ... s^t from the hypercube) has size strictly less than 2^{n-1} (half the hypercube).  What our adversary will do is go over all newly compromised vertices, fixing their values in an order that increases from the farthest away from s^t to s^t itself.  This way a vertex is never the last in its neighborhood to be assigned a value, and thus no vertex can be guaranteed to be a Nash equilibrium until all vertices of the hypercube are either queried or compromised.  The only question is how many queries this takes.  Let Q be the set of queries and C be the set of all vertices compromised by Q; how large does Q have to be for the union to be at least half of the whole hypercube?  Combinatorially, Q must contain all of C‘s neighbors in the hypercube.  This leads us to ask for the smallest possible size of the set of neighbors of a set of vertices in the hypercube — known as the isoperimetric problem for the Boolean hypercube.  This question is completely solved (see chapter 16 of Bollobas), and in particular it is known that for any subset of at most half of the hypercube, the size of the set of its neighbors is at least an \Omega(1/\sqrt{n}) fraction of the original set’s size.  This means that |Q| \ge \Omega(|C|/\sqrt{n}), which implies that |Q| \ge \Omega(2^n/\sqrt{n}) — an exponential lower bound on the worst-case running time of any algorithm that finds a pure Nash equilibrium in all potential games.  An immediate corollary is that there may be best-reply paths of this exponential length.

Turing-Machine Complexity

We now move to the alternative way of accessing a “large” game, that of representing it concisely.  Clearly not every game has such a concise representation, but can we at least find an equilibrium efficiently for those that do?  Also, different representations may entail different complexities — which concise representation should we consider?  The first one we will look at is the general representation of a black box by the code of a procedure, equivalently by the description of a Turing machine.  However, we only want to consider TMs that run efficiently, e.g. in polynomial time, or alternatively we count the running time of the TM as part of the input size.  The usual formalism for giving such a TM as input is the equivalent requirement of a Boolean circuit that computes the black-box queries, and this will be our first concise representation:

Input: n Boolean circuits, each computing a function u_i : (\{0,1\}^{\log m})^n \rightarrow \{0,1\}^t, where the input of each circuit is the index of each player’s strategy and the output of each circuit is interpreted as a t-bit long integer specifying the utility.

Output (Decision version): does this game have a pure Nash equilibrium?

We would expect the running time to be polynomial in the input size: n, t, and the total size of the circuits.  As before, let us concentrate on cases where m is not too large (i.e. also polynomial in these parameters).

So what is the complexity of this problem?  Generally speaking, the state of complexity theory is such that we get essentially no algorithmically-useful information from the circuit representation.  Thus, we would expect a polynomial-time algorithm to exist for this generic representation only if one existed in the black-box model — which it doesn’t.  Indeed, in this world we can easily prove an NP-completeness result.

Theorem: Existence of a pure Nash equilibrium in a game given by Boolean circuits (as above) is NP-complete.

Proof: Containment in NP is by guessing the equilibrium point and then verifying it by checking all m possible deviations for every player.  (This is the only place where we use m being small; otherwise the problem seems to be \Sigma_2-complete.)  To prove NP-hardness we provide a reduction from 3-SAT (I lost the reference for this reduction).  All players have two strategies: 0 and 1.  For each variable we have a player whose utility function is a constant 0.  For each clause we have an additional player whose utility is 1 if his strategy equals the OR of the clause’s literals (as determined by the variable players’ strategies), and 0 otherwise.  We then have two additional “top” players whose utilities are as follows: if all clause players play 1, then both of these players get utility 1 whatever they play; otherwise, these players play “matching pennies” between themselves.  Note that in a pure Nash equilibrium, the variable players can play anything, the clause players must faithfully compute the clause values, and then the two “top” players can be in equilibrium if and only if all clause values are 1 — i.e., if and only if the variable players’ strategies form a satisfying assignment.
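
As a concrete illustration, here is a hedged Python sketch of this construction; the DIMACS-style encoding of literals (nonzero integers, negative for negations) and the player ordering are choices made for the example:

  def reduction_game(clauses, num_vars):
      # Players 0..V-1: variable players; V..V+C-1: clause players;
      # V+C and V+C+1: the two "top" players.  All strategies are 0/1.
      V, C = num_vars, len(clauses)

      def clause_value(s, j):   # OR of clause j's literals under profile s
          return int(any(s[abs(l) - 1] == (1 if l > 0 else 0)
                         for l in clauses[j]))

      def utility(i):
          def u_i(s):
              if i < V:                     # variable player: constant 0
                  return 0
              if i < V + C:                 # clause player: 1 iff faithful
                  return int(s[i] == clause_value(s, i - V))
              if all(s[V + j] == 1 for j in range(C)):
                  return 1                  # all clauses report 1: both top win
              match = int(s[V + C] == s[V + C + 1])
              return match if i == V + C else 1 - match   # matching pennies
          return u_i
      return [utility(i) for i in range(V + C + 2)]

Under this sketch, a pure Nash equilibrium of reduction_game(clauses, num_vars) exists exactly when the formula is satisfiable, matching the argument above.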

The next succinct representation we consider is one that aims to take advantage of the fact that in many games, while there are many players, each player’s utility depends only on the strategies of a few others and not on everyone’s strategies.  In such cases, we can represent the utility of each player as a full table of only the relevant players’ strategies.  In such games an important structure is obtained by looking at the graph (or digraph) specifying, for every two players, whether the utility of one depends on the strategy played by the other.  So here is the equilibrium problem using this “graphical games” succinct representation:

Input: A graph G and, for each vertex i in it, a table u_i : \{1...m\}^{d_i+1} \rightarrow \mathbb{Q}, where d_i is the degree of vertex i, and the table specifies the utility of player i as a function of his strategy as well as those of his neighbors.

Output (Decision version): does this game have a pure Nash equilibrium?

We would expect the running time to be polynomial in the total size of the tables, i.e. polynomial in n and in m^d, where d = \max_i d_i.

However, just a tiny variation of the NP-completeness proof above shows that this problem is NP-complete too.  The only players in the reduction above who had degree more than 3 are the two top players, who were “computing” the “AND” of all the clauses.  To get a reduction where all players have degree at most 3, we add an additional player for each clause who computes the “and” of this clause with all preceding ones.  This is done by giving him utility 1 if his strategy is indeed the “and” of the original clause-player of his clause and the new-clause-player of the preceding clause (and 0 otherwise).  Now the two “top” players just need to depend on the new-clause-player of the last clause as they did before (either always winning or playing matching pennies).  Thus we get:

Theorem: Existence of a pure Nash equilibrium in graphical games (as above) is NP-complete.

A nice exercise is to exhibit a polynomial-time algorithm for the special case where the graph is in fact a tree.

Now let us move to the special case of exact potential games.  The problem specification (input and output) is exactly as defined above, where the utility functions are given as Boolean circuits, except that we are guaranteed that the utility functions are those of an exact potential game.  (This condition may not be easy to verify in general, but our algorithm may take it as an assumption.)  We know that a pure equilibrium is exactly a local maximum of the potential function.  This potential function is uniquely defined, up to an additive constant, by the given utility functions: once the value of the potential function is known for some profile, P(s^0), we can change, in turn, each s^0_i to an arbitrary s_i, and each such difference is completely specified by u_i.  Thus the potential function is defined by P(s) - P(s^0) = \sum_{i=1}^n [P(s^0_1 ... s^0_{i-1}, s_i, s_{i+1} ... s_n) - P(s^0_1 ... s^0_{i-1}, s^0_i, s_{i+1} ... s_n)] = \sum_{i=1}^n [u_i(s^0_1 ... s^0_{i-1}, s_i, s_{i+1} ... s_n) - u_i(s^0_1 ... s^0_{i-1}, s^0_i, s_{i+1} ... s_n)], where the two profiles in each term differ only in the i‘th coordinate, so each difference of P‘s equals the corresponding difference of u_i‘s.  We see that the problem of finding a pure Nash equilibrium for potential games is equivalent to that of finding a local maximum of a given Boolean circuit, where “locality” is given by switching the values of one of the n sets of \log m bits in our problem definition.  (Proof: the expression above is efficiently computable, and thus our problem is no harder; on the other hand, a special case of our problem is where all the u_i‘s are identical, in which case our problem is exactly that of finding a local maximum.)
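
In code, this telescoping reconstruction is a few lines (again a sketch, assuming utility black boxes u[i] on full profiles):

  def potential_difference(u, s, s0):
      # Walk from s to s0, switching one coordinate at a time; each
      # switch changes P by exactly the corresponding change in u_i.
      diff, cur = 0, list(s)
      for i in range(len(s)):
          before = u[i](tuple(cur))
          cur[i] = s0[i]
          diff += before - u[i](tuple(cur))
      return diff               # equals P(s) - P(s0)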

Now let us say a few words about the complexity of this local-maximum problem.  First, syntactically speaking, we have an “NP search problem”: on input x (the utility functions) the task is to find some solution y (a strategy profile) that satisfies R(x, y) (y is a local maximum of x), where R is computable in polynomial time.  The general class of such search problems is termed FNP, and in our case we are talking about problems with a guarantee that such a y always exists, a subclass termed TFNP.  Now, semantically, there are various other problems that have a similar flavor of finding a local maximum (or minimum), and this class of problems, a subset of TFNP, was termed PLS (polynomial local search).  Some examples of such problems come from attempting to formalize what various local-improvement search heuristics obtain: suppose that you attempt to find a short traveling-salesman route by gradually trying to switch the locations of pairs of cities along the route.  What do you get?  How long will this take?  The general class of problems in PLS is defined as those having polynomial-time computable procedures specifying a neighborhood relation and a quality score over solutions, where the problem is to find a solution which is a local minimum (or maximum).  By these definitions, our problem is in this class PLS, and in fact the generic nature of our problem makes it PLS-complete.  So what does this PLS-completeness mean?  It means that the complexity of our problem is similar to that of the hardest problems in this class, which seem to be of intermediate difficulty between P and NP.  We do not know a polynomial-time algorithm for them (and such an algorithm would require significantly new techniques, since it wouldn’t “relativize”), nor are they likely to be NP-complete (since that would imply NP=coNP).
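
The traveling-salesman heuristic mentioned above, as a small Python sketch; the pair-swap neighborhood and the matrix encoding of distances are choices made for the example:

  import itertools

  def local_search_tsp(dist, tour):
      # Keep swapping pairs of cities while that shortens the tour; the
      # output is a local minimum for the pair-swap neighborhood.  How
      # many improving steps this may need is exactly the PLS question.
      def length(t):
          return sum(dist[t[k]][t[(k + 1) % len(t)]] for k in range(len(t)))
      improved = True
      while improved:
          improved = False
          for a, b in itertools.combinations(range(len(tour)), 2):
              cand = list(tour)
              cand[a], cand[b] = cand[b], cand[a]
              if length(cand) < length(tour):
                  tour, improved = cand, True
                  break
      return tour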

A special case of potential games is that of network congestion games.  In these we are given a (directed) graph with n pairs of vertices (a_i,b_i) in it and, for each edge e, a sequence of n+1 costs: c_e(0), ..., c_e(n).  This is a succinct representation of a congestion game: player i‘s set of strategies is the set of paths in the graph from a_i to b_i, and the cost (negative utility) of a player is the sum of c_e(k_e) over all edges e in his path, where k_e is the number of players whose strategy (path) contains the edge e.  Network congestion games are exact potential games, and thus always have a pure Nash equilibrium.  Finding such an equilibrium turns out to be PLS-complete too, although this requires a complicated reduction.
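
For concreteness, here is a sketch of the exact potential function that witnesses this (Rosenthal’s potential, a standard fact about congestion games; the encoding of strategies as edge sets and of the costs as a dictionary are assumptions of the example):

  def rosenthal_potential(paths, costs):
      # paths[i] = set of edges on player i's chosen a_i -> b_i path;
      # costs[e][k] = c_e(k).  Phi = sum over edges e of
      # c_e(1) + ... + c_e(k_e).  A best-reply step lowers Phi by exactly
      # the moving player's cost decrease, so local minima of Phi are
      # exactly the pure Nash equilibria.
      load = {}
      for path in paths:
          for e in path:
              load[e] = load.get(e, 0) + 1
      return sum(costs[e][k] for e, k_e in load.items()
                 for k in range(1, k_e + 1))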

Read Full Post »

A few days ago, Northwestern hosted a conference in honor of Paul Milgrom winning the Nemmers Prize.  (As reported here and here.)  The speakers seem to be the A-list of economists in mechanism design, and the slides of their talks, as well as a video of Milgrom’s talk, are on the web page.  Computer Scientists were not invited, but it seems that Lance Fortnow was still able to tweet gems from there, like:

Preston McAfee: Left to their own devices computer scientists would recreate the Soviet Union

Susan Athey: CS still needs to catch up in auction theory (on why Microsoft hires economists)

Read Full Post »

Circulating Announcements

The following announcements have been circulating lately, with requests to circulate further, so I comply:

Read Full Post »

The fifth Israeli Seminar on Computational Game Theory will be held on December 10th at Microsoft Israel R&D Center in Herzliya.  The speakers are: Dov Samet, Seffi Naor, Noga Alon, Svetlana Olenetsky, and Joe Halpern.

Read Full Post »

A large number of AGT researchers work in just three large companies (Google, Microsoft, and Yahoo), mostly on ad auctions.  From two or three data points, it seems that Facebook is now attempting to hire such researchers too (although maybe not in “research” roles).  Looking at Facebook’s “careers” page, they are indeed looking for an “advertising auction expert” whose requirements include “Expert knowledge of algorithmic game theory, auction theory, mechanism design and their applications to online advertising” as well as a Ph.D. in CS or OR.

I can well see the potential for Algorithmic Mechanism Design research at Facebook.  Facebook’s information about its users is enormous, detailed, and lives within the context of a complex social network.  Making good use of this complex information seems to require another level of sophistication relative to today’s ad auctions.  Privacy issues are always there, and while Facebook has already had problems in that department, it seems that its users do not consider their “identity” to be too private.  I wonder if within a year or two we’ll start seeing research papers coming from Facebook too.

Read Full Post »

DARPA Network Challenge

Despite my better judgement, I am fascinated by DARPA’s Network Challenge.  The basic idea is simple: this Saturday, December 5th, at 10AM (EST), ten large red weather balloons will be placed in various locations in the USA.  The first person to report the locations of all balloons wins $40K.  This seems to be an exercise in crowdsourcing, and it is not difficult to see various reasons why the US military may be interested in such an exercise.

If you want to win, the question seems to be how to get many potential balloon-sighters to report their findings to you rather than to someone else.  Offers of $1K for the first sighting of a balloon (conditioned on winning) came fast, soon to be trumped by $3K offers (with ten balloons and a $40K prize, at most $4K per balloon can be offered conditioned on winning, so $3K is within a factor of 4/3 of optimizing this approach).  Indeed the prize amount seems to be intentionally low, so as to provide barriers to this natural monetary technique.  At this point, at the edge of rationality, non-monetary rewards started to be offered, like world peace or, my favorite, “use the winnings to create a giant flying cupcake built entirely out of balloons!“.  My own contribution (probably discovered independently by countless others) is the observation that the monetary technique is not really limited by the prize money: winning may have significant advertising value for the right kind of company, which may invest much larger amounts of money in attracting collaborators to its effort.

An analysis of possible approaches can be found here, including some thoughts on satellite or aerial surveys and on negative tactics (mis-reporting balloons to your competitors), and a link to a wiki.

Read Full Post »

ICS accepted papers

The accepted papers for ICS 2010 have just been announced.  Judging by the list of titles and abstracts, this is the most interesting list of accepted papers that I’ve seen in years.  So hooray for the conceptualists as well as for the organizers.  Now we have to see if the actual papers match the promise of their abstracts and titles.

Many accepted papers are AGT-related:

Read Full Post »

In a recent (guest) blog post, Yehuda Lindell makes fun of (or maybe just complains about) “applying game-theoretic principles to our work as academics”.  He lists several such strategic acts, starting with the well-known principle of the “least publishable unit” (LPU):

The first observation is that the “best strategy” is to write papers that are just good enough to get into the conference that you want, but no better. This way you minimize your work while maximizing your publications.

Even taking the totally cynical “rational” point of view, I doubt that this LPU strategy is “best” or even good.  The LPU strategy may indeed be optimal if you want to maximize the number of your publications, but not at all if you want to get hired, get promoted, get grants, or optimize other selfish utility measures.  In most reasonable places, “paper counting” — especially of conference papers — is rarely significant.  Reputation (of author and of venue), letters, even citation counts (and impact factors) are used instead — and none of these fares well under the LPU strategy.

So why do so many researchers seem to follow this strategy to some extent?  Why do so many of us produce many mediocre papers rather than fewer good papers?  I think that the reason is not game-theoretic: most researchers who trade off quality for quantity are not trying to cynically optimize their careers.  They know that their many mediocre papers will fool no one.  Well, almost no one: they do manage to fool themselves.  Indeed, producing a paper gives us a very concrete sense of achievement.  We can add it to our CV, count it, tell our spouse that we wrote it, and in general have something specific to show for our work.  It gives us the required psychological boost of success.  A fake one, unfortunately, like some drugs give.  The antidote: don’t fool yourself.  Easier said than done, of course.

Read Full Post »
