Archive for December, 2010

On Getting Slashdotted

It seems that a recent blog post of mine got slashdotted.  Looking at my blog’s stats, I see that the number of visitors indeed jumped by one or two orders of magnitude.  I suppose this is as much fame as an academic blogger can hope for.  It would have been nicer if my sudden (micro-)fame were due to my own clever thoughts rather than to quoting someone else’s, but I suppose I can salvage some (pico-)credit for finding that quote (via yet someone else) and repeating it (at least that much I did all by myself).

So I now find myself with a few tens of thousands of visitors, supposedly intelligent and techie ones, whom I have a few hours to engage.  I could just bask in the (pico-)glory of being (micro-)famous, but this seems a bit like my daughter’s desire, at the time, to be “famous” as a goal unto itself (she’s pushing 11 now and is over it; no need to worry).  So how can I actually make some use of my slashdot visitors?  Even if I were set up to put ads on my blog, I estimate that I could make at most a few dozen dollars (assuming a $1 CPM); not worth the effort.

So, perhaps there is something I want to say to my visitors; after all, that’s why one publishes a blog to begin with.  The problem is that my usual audience is quite different from the slashdot crowd.  My blog is targeted quite narrowly at academic researchers in the field termed “Algorithmic Game Theory and Economics” which, obviously, is my own field of research.  This field attempts to address the following challenge raised by the Internet: the computers that need to interact are owned and operated by different people and organizations, who will most likely optimize their own computing systems to do what’s best for themselves rather than follow any agreed protocol.  How can a computerized protocol still function when even the “trusted” computers cannot be trusted to follow it?  To answer this challenge, the field combines game-theoretic insights with computational ones.  My blog’s usual readers are either working in or learning about this field, or related fields such as theoretical computer science, game theory, or theoretical economics (well, at least that’s what I imagine).  So, slashdot visitors who want to learn more about this field may start with the following recent CACM article.  If you really want to dive in, you can read the introduction to the handbook that I co-edited; here’s a link to a free online copy.

I have one thing directly targeted at my new slashdot visitors: the book “The Elements of Computing Systems: Building a Modern Computer from First Principles” that I wrote with Shimon Schocken.  The idea is to lead the reader through a sequence of projects that build a complete computer system from the ground up, including hardware and software.  We provide detailed specifications, simulation tools, and test programs, so the whole endeavor, including building the CPU, a virtual machine, a compiler, and a computer game that runs on top of this platform, fits nicely within a single academic semester.  You can see the nice things readers say about us on Amazon’s web page, or view a 10-minute video about the book.  (Recently, this book got its own share of instant fame when a reader implemented its ALU within Minecraft, describing it in a YouTube video that got over 1M views and a mention in Wired.)



Panel on AGT in ICS

During the coming ICS, I will be chairing a panel on “Algorithmic Game Theory and Economics: Area, Technique, or Fad?” with co-panelists Peter Bro Miltersen, Silvio Micali, and Shang-Hua Teng.  I would like to use this blog for two tasks related to this panel:

  • I think that it would be nice to have an additional “devil’s advocate” panelist who can argue something like “this whole AGT thing is a fad destined to be soon forgotten”.  Suggestions for such a panelist who will be attending ICS are welcome (as comments or by email to me).
  • Questions to the panelists can be left as comments on this post.


Every so often Lance tweets about some science-policy report.  My natural tendency, like that of any good techie, is to keep my distance from such reports.  I do recognize that they serve an important social function (like dentists or lawyers), but personally I would want to have as little as possible to do with them.  However, I do sometimes skim them, trying to glean some fuzzy insight or point of view.  Here is an observation I picked up from the Report to the President and Congress, Designing a Digital Future: Federally Funded R&D in Networking and IT:

Everyone knows Moore’s Law – a prediction made in 1965 by Intel co-founder Gordon Moore that the density of transistors in integrated circuits would continue to double every 1 to 2 years. (…)  Even more remarkable – and even less widely understood – is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.

The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade.  It’s difficult to quantify the improvement, though, because it is as much in the realm of quality as of execution time.

In the field of numerical algorithms, however, the improvement can be quantified.  Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin.  Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day.  Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million.  Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms!  Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
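The report’s factors are easy to sanity-check: 82 years expressed in minutes is indeed “roughly 43 million”, and the two quoted sub-factors multiply out to the same order of magnitude.  A quick back-of-the-envelope sketch (the 82-year, 1-minute, 1,000× and 43,000× figures are the report’s; the unit conversion is mine):

```python
# Back-of-the-envelope check of the Grötschel figures quoted above.
minutes_1988 = 82 * 365 * 24 * 60          # 82 years of solving time, in minutes
hardware_factor = 1_000                     # faster processors, 1988 -> 2003
algorithm_factor = 43_000                   # better linear-programming algorithms

print(f"{minutes_1988:,}")                          # 43,099,200 -- "roughly 43 million"
print(f"{hardware_factor * algorithm_factor:,}")    # 43,000,000
```

So solving in about one minute what once took 82 years is a speedup of about 43 million, of which hardware accounts for only three of the seven-plus orders of magnitude.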



A friend pointed me to a 1945 paper (an English translation is here) by the psychoanalyst and philosopher Jacques Lacan that attempts to reason about knowledge in a sense similar to that used in computer science, economics, and game theory, long before these fields started to formally address notions of knowledge.  The paper analyzes the following puzzle:

A prison warden has three select prisoners summoned and announces to them the following:  “For reasons I need not make known now, gentlemen, I must set one of you free. In order to decide whom, I will entrust the outcome to a test which you will kindly undergo.

“There are three of you present. I have here five discs differing only in color: three white and two black. Without letting you know which I have chosen, I shall fasten one of them to each of you between his shoulders; outside, that is, your direct visual field – any indirect ways of getting a look at the disc being excluded by the absence here of any means of mirroring.

“At that point, you will be left at your leisure to consider your companions and their respective discs, without being allowed, of course, to communicate amongst yourselves the results of your inspection. Your own interest would, in any case, proscribe such communication, for the first to be able to deduce his own color will be the one to benefit from the dispensatory measure at our disposal.

“His conclusion, moreover, must be founded upon logical and not simply probabilistic reasons. Keeping this in mind, it is to be understood that as soon as one of you is ready to formulate such a conclusion, he should pass through this door so that he may be judged individually on the basis of his response.”

This having been made clear, each of the three subjects is adorned with a white disc, no use being made of the black ones, of which there were, let us recall, but two.  How can the subjects solve the problem?

While the paper (or actually the English translation — which, in the case of Lacan, I understand is usually much clearer than the French original) starts off very clearly, I have to admit that I can’t comprehend much once he starts his analysis, though he seems to consider the asynchrony involved (as we would call it today in distributed computing).  As the paper progresses, the section names become quite delicious: “The Modulation of Time in the Sophism’s Movement: the Instant of the Glance, the Time for Comprehending and the Moment of Concluding”, “Temporal Tension in the Subjective Assertion and its Value Manifested in the Demonstration of the Sophism”, and “The Truth of the Sophism as Temporalized Reference of Oneself to Another: Anticipating Subjective Assertion as the Fundamental Form of a Collective Logic”.
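For readers who like to see the standard solution mechanized: the puzzle yields to the usual common-knowledge treatment (as in the muddy-children puzzle), iterating “nobody has walked out yet” to prune possible worlds.  A minimal sketch follows; the synchronized rounds are my own idealization — which is exactly the asynchrony that seems to worry Lacan — so treat it as a toy model, not a rendering of his argument:

```python
from itertools import product

# Possible worlds: color assignments to the 3 prisoners, drawn from a pool
# of three white ('W') and two black ('B') discs, so at most two blacks.
WORLDS = [w for w in product("WB", repeat=3) if w.count("B") <= 2]

def knows_own_color(world, agent, possible):
    """An agent can deduce his color when it is the same in every world he
    still considers possible that matches what he sees on the other two."""
    view = lambda w: tuple(c for j, c in enumerate(w) if j != agent)
    candidates = {w[agent] for w in possible if view(w) == view(world)}
    return len(candidates) == 1

def round_of_deduction(actual):
    """Synchronous idealization: in each round, anyone who can deduce his
    color walks out; 'nobody walked out' then becomes common knowledge and
    prunes the remaining worlds.  Returns the round of the first deduction."""
    possible = list(WORLDS)
    for rnd in range(1, len(WORLDS) + 1):
        if any(knows_own_color(actual, i, possible) for i in range(3)):
            return rnd
        # No one moved: discard every world in which someone would have known.
        possible = [w for w in possible
                    if not any(knows_own_color(w, i, possible) for i in range(3))]

print(round_of_deduction(("W", "B", "B")))  # 1: seeing two blacks settles it at once
print(round_of_deduction(("W", "W", "W")))  # 3: the warden's all-white case
```

In the all-white world each prisoner needs two rounds of observed inaction before he can conclude he is white, which is the three-stage structure (“the glance, the time for comprehending, the moment of concluding”) that Lacan’s section titles gesture at.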

