Today’s online edition of the Israeli newspaper Haaretz features an op-ed by the journalist Ari Shavit (I actually know him personally, but of course all Israelis know each other). Its title is “Game theory and the bomb”. Which bomb? The hypothetical Iranian nuclear bomb. Shavit summarizes a discussion with the retired general Yitzhak Ben Israel, who observed that an Israeli military strike against Iran may speed up, rather than slow down, the development of the Iranian nuclear arsenal. His argument is that after a strike, the fear of provoking a strike will no longer hold the Iranians back. Of course, a strike may also succeed in delaying the bomb; its outcome is uncertain.
The crucial paragraph, which gives the article its name, is puzzling. It loosely translates as follows:
“No one … can predict the outcome of action or inaction. But when there is uncertainty, the guiding principle should be the one defined by the father of game theory, John von Neumann. The Jewish mathematician, who was one of the leaders of the Manhattan project, argued that in critical situations where you do not know the outcome, you should not maximize benefit but rather minimize loss in case of failure. If a strike hastens the bomb then Israel would pay the maximum price. Therefore, via mathematical analysis, Israel must choose to avoid a strike.”
QED. I found it amusing that Shavit points out that von Neumann was Jewish, as if that makes him uniquely qualified to save the Jewish state via his mathematical insights. More importantly, this has got to be the most informal “mathematical analysis” I have ever seen. It also seems fundamentally flawed. Von Neumann’s maxmin strategies deal, in a sense, with uncertainty about the other player’s intentions, and therefore work under the assumption that the other player plays adversarially (whether or not the game is zero-sum). It seems that Shavit, and presumably Ben Israel, are confusing this type of uncertainty with the uncertainty that comes from a move by nature (a throw of dice that determines the outcome of a strike).
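To see the difference concretely, here is a minimal sketch, with all payoffs and probabilities made up purely for illustration, contrasting the maxmin rule (which treats the uncertainty as an adversary) with plain expected-utility maximization over a move by nature:

```python
# Toy one-shot decision with made-up payoffs (illustrative only).
# "strike" has an uncertain outcome; "no strike" is deterministic.
payoffs = {
    "strike":    {"delays bomb": 5, "hastens bomb": -10},
    "no strike": {"status quo": -2},
}

# Maxmin: treat the uncertainty as an adversary and pick the action
# whose worst case is best.
maxmin_choice = max(payoffs, key=lambda a: min(payoffs[a].values()))

# Expected utility: treat the uncertainty as a dice throw with an
# (assumed) known probability of success.
p_delay = 0.7  # hypothetical probability that a strike delays the bomb
expected = {
    "strike": p_delay * 5 + (1 - p_delay) * (-10),  # = 0.5
    "no strike": -2,
}
eu_choice = max(expected, key=expected.get)

print(maxmin_choice)  # "no strike": a strike's worst case (-10) < -2
print(eu_choice)      # "strike", since 0.5 > -2 at p_delay = 0.7
```

Note that maxmin rejects the strike no matter what the probabilities are, while the expected-utility answer flips with p_delay. Shavit’s argument implicitly applies the first rule while describing the second kind of uncertainty.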
Any ideas on how to formalize Shavit’s argument, or whether it makes any sense at all, for that matter? To give you a bit of incentive, the first successful solution will receive the Nobel Peace Prize.
Somewhat related:
Thomas Schelling, The Reciprocal Fear of Surprise Attack, RAND Corporation, 1958.
He seems to be looking at a 1-player game against nature rather than a 2-player game. So we are in the realm of optimization.
One possible interpretation is that the outcome depends on a decision together with a random variable whose distribution is determined by that decision, where the distributions come from a family with a parameterized level of variability.
He seems to be arguing that with an implicit objective like “value at risk”, when the “variability parameter” is high, a solution that is likely to keep one safe is better than one that pushes the risk-reward envelope.
(Though I’m sure there is a counter-example somewhere, I think by and large his assertion holds.)
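To make that reading concrete, here is a minimal sketch; the distributions and the 5% quantile are invented purely for illustration. Each decision induces an outcome distribution, and decisions are ranked by a low quantile, value-at-risk style, rather than by the mean:

```python
import random

# Hypothetical outcome distributions, one per decision; higher is better.
# "strike" has the higher mean but far higher variability.
def outcome(decision: str) -> float:
    if decision == "strike":
        return random.gauss(1.0, 10.0)   # risky: mean 1, std 10
    return random.gauss(0.0, 1.0)        # safer: mean 0, std 1

def value_at_risk(decision: str, alpha: float = 0.05, n: int = 100_000) -> float:
    """Empirical alpha-quantile of the outcome distribution."""
    samples = sorted(outcome(decision) for _ in range(n))
    return samples[int(alpha * n)]

for d in ("strike", "no strike"):
    print(d, round(value_at_risk(d), 2))
# With these made-up parameters the risky option's 5% quantile is about
# -15.4 versus about -1.6 for the safe one, even though its mean is
# higher: the "variability parameter" drives the ranking.
```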
This interpretation makes sense, but if this is what Ben Israel had in mind then the connection to von Neumann (or to game theory in general, for that matter) is somewhat tenuous/misleading.
I guess I am imagining an extensive form game with Israel moving first, Iran moving second, and a move by nature in between in case of a strike.
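Something like the following minimal sketch, with placeholder payoffs that model nothing real: Iran best-responds at the leaves, nature resolves the strike with probability p, and Israel’s root choice is found by backward induction.

```python
# Backward induction on a tiny extensive-form tree with a chance node.
# Payoffs are (Israel, Iran) pairs and entirely made up.

def iran_best(options):
    """Iran best-responds at a leaf by maximizing its own payoff."""
    return max(options, key=lambda o: o[1])

# Iran's choices after each possible history:
after_delayed   = iran_best([(-1, -3), (-4, -2)])  # rebuild vs. retaliate
after_hastened  = iran_best([(-10, 2), (-8, -1)])  # race ahead vs. negotiate
after_no_strike = iran_best([(-2, 1), (0, 0)])     # continue vs. freeze

# Nature's move: the strike succeeds (delays the bomb) with probability p.
p = 0.6
strike_value    = p * after_delayed[0] + (1 - p) * after_hastened[0]
no_strike_value = after_no_strike[0]

print("strike:", strike_value, "no strike:", no_strike_value)
# Israel's root decision compares an *expectation* over nature's move
# against a deterministic branch; nothing in this tree calls for maxmin.
```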
My conclusion is that I have no idea why game theory comes into it.
But in getting there, I see a few possibilities: (i) sometimes optimization and the “you know that I know that you…”-fest get conflated; or (ii) sometimes optimization under uncertainty is a reasonable “approximation”, or at least a practical one, depending on how big and branching the game is and whether there is a random element.
On the latter point, Bayesian games do have a rather “optimization-y” feel (a distribution over Stackelberg games), as opposed to the very combinatorial feel of the pursuit of a solution in pure strategies. What do you think?
Well, when you tell computer scientists that game theory is optimization, you’re preaching to the choir 🙂
Haha. Well, I’d say there are 3 levels.
There’s the feasibility problem (equations and inequalities): 0 objective functions
… regular optimization (equations and inequalities): 1 objective function
… and coupled optimization/game theory: n > 1 objective functions
And they are ordered by difficulty, too.
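A toy illustration of the three levels; the instances are arbitrary, and the game-theory level uses a zero-sum matrix game, which the minimax theorem collapses back into a single LP:

```python
import numpy as np
from scipy.optimize import linprog

# Level 1 (feasibility): find x >= 0 with A x <= b, zero objectives
# (encoded here as an LP with a zero objective vector).
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
feas = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Level 2 (optimization): same constraints, one objective function.
opt = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Level 3 (game theory): a two-player zero-sum matrix game, one
# objective per player, solvable as an LP only thanks to minimax.
M = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies
# Variables (v, p1, p2): maximize v subject to (M^T p)_j >= v, sum p = 1.
game = linprog(c=[-1.0, 0.0, 0.0],
               A_ub=np.hstack([np.ones((2, 1)), -M.T]),
               b_ub=np.zeros(2),
               A_eq=[[0.0, 1.0, 1.0]], b_eq=[1.0],
               bounds=[(None, None), (0, None), (0, None)])

print(feas.x, opt.x, game.x)  # game value 0, mixed strategy (1/2, 1/2)
```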
Personally, I distinguish between optimization and game theory though one could say that GT is a generalization of optimization. But that is an artifact of my own education.
I think that instead of von Neumann a better reference would be John Harsanyi, who received his Nobel Prize for “developing the highly innovative analysis of games of incomplete information, so-called Bayesian games”; see
http://hu.wikipedia.org/wiki/Hars%C3%A1nyi_J%C3%A1nos
For the record, like von Neumann, Harsanyi was born into a Hungarian Jewish family, and they actually went to the same secondary school in Budapest (as did another Nobel Prize winner, Eugene Wigner, who also participated in the Manhattan Project with von Neumann). You can read some interesting stories about them in a nice book by Kati Marton: The Great Escape: Nine Jews Who Fled Hitler and Changed the World.
The remark about him being Jewish is meant to fend off the unfortunate dialectic that has developed in some communities, equating dovishness on Iran with opposition to, or even hatred of, Israel. But the emphasis is not going to convince anyone and will likely backfire.
I don’t think the idea here is so much about games of incomplete information (though that could be relevant as well) but simply about games with stochastic outcomes, i.e., with a puppet third player whose strategy (presumably known to the real players) dictates a probability distribution over possible moves at each of its turns. For instance, in each round there might be some probability of a nuclear or conventional war, with that probability depending on prior actions.
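A minimal sketch of such a model, in which every probability, payoff, and action name is invented: each round both players act, and the puppet player then ends the game in war with a probability that depends on the history so far.

```python
import random

def war_probability(history):
    """Made-up hazard: base 5%, plus 15% per past escalation, capped."""
    escalations = sum(1 for a, b in history if "escalate" in (a, b))
    return min(0.05 + 0.15 * escalations, 0.95)

def simulate(policy_a, policy_b, rounds=10):
    history = []
    for _ in range(rounds):
        history.append((policy_a(history), policy_b(history)))
        # The puppet player "nature" moves with a known, history-dependent
        # probability of ending the game in war.
        if random.random() < war_probability(history):
            return "war"
    return "peace"

hawk = lambda h: "escalate"   # always escalates
dove = lambda h: "hold"       # never does
runs = [simulate(hawk, dove) for _ in range(10_000)]
print(runs.count("war") / len(runs))  # rough war frequency under (hawk, dove)
```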
But the people who research this just aren’t as famous.
—
Ohh, and the idea that you minimize the harm is just ridiculous. By von Neumann and Morgenstern’s definition of utility, the optimal choice is always to maximize your expected utility.
This makes no sense even if it were a correct portrayal of the methodology, since minimizing harm implies nothing in this case: the worst-case scenario is nuclear annihilation either way.