Hi x (whoever you are)

I agree that changing an upper bound from O(n^3) to O(n^2) is a worthy thing to do, and that lower bounds are hard. I am not suggesting that such upper bound refinements not be published.

I am suggesting that their importance is not as great as some people seem to suppose. I am also suggesting that they should be complemented by some other evidence (possibly in the same paper, by the same author, and possibly by others).

My main concern is not really about individual researchers but how the whole system fits together. A paper that does what is described above immediately leads to interesting questions about actual performance of this algorithm compared with, say, other algorithms with a known Theta(n^2) bound. But publication outlets for results on this question seem to be fewer and less highly regarded by much of the “TCS community” (if such a thing exists), so incentives to do this work are relatively weak. Furthermore the people who might be interested in doing this work are not finding out about such algorithms systematically. I will also add that many published algorithms have substantial errors, but correcting these and trying to publish that is a thankless task.

"An algorithm with an O(n^2) bound is not necessarily an improvement on another algorithm with an O(n^3) bound." I think this type of statement is a very common criticism of theory, but it is not a good argument. It may be very difficult to prove a lower bound on the O(n^3)-time algorithm to show that it really performs as poorly as the stated bound, or it may in fact be much better, but that is also difficult to prove. Should one therefore not prove that some other algorithm gives a provably better asymptotic bound? Why not? We have to prove what we can, as long as there is a decent enough metric. Interpreting the significance of the asymptotic improvement in practice is a different issue and need not always burden theoretical discoveries.

I agree with everything you say above. The contributions of TCS are mainly conceptual, a way of thinking rather than results. We don’t need less “pure” theory, just more “applied” theory and more communication between groups and recognition that everyone should work together using their own strengths for the advancement of science.

Two comments:

1. I would also be very interested in what you have to say about algorithmic game theory, and choosing models in general. What sort of model validation should be done as far as TCS goes (it is different from theoretical physics, since to some extent we can design the universe, not just interpret it)?

2. “improving Y” (performance) is not the same as “asymptotically improving an upper bound when no lower bound is known” – this is one of the things I was complaining about in my post. I see a big gulf between the followers of P-vs-NP and the followers of Knuth here, to say nothing of the experimental algorithm analysts. Bob Sedgewick once told me about some upper bounds he ran into when writing his Algorithms books, and they were ludicrously non-tight when he implemented the algorithms. An algorithm with an O(n^2) bound is not necessarily an improvement on another algorithm with an O(n^3) bound. Of course, good papers improve the upper bound for the SAME algorithm, or give lower bounds, or give some empirical evidence for why the second algorithm is better.
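The gap between a proved bound and observed behaviour is easy to see directly. Here is a minimal Python sketch (the function name and the nearly-sorted input are illustrative choices, not from the thread): it counts the comparisons insertion sort actually performs on a nearly-sorted array and sets that against the algorithm's O(n^2) worst-case bound.

```python
import random

def insertion_sort_ops(a):
    """Insertion sort; returns the number of key comparisons,
    the quantity the O(n^2) worst-case bound is counting."""
    a = list(a)
    ops = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            ops += 1          # one comparison per loop iteration
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    assert a == sorted(a)     # sanity check: output really is sorted
    return ops

random.seed(0)
n = 2000
nearly_sorted = list(range(n))
for _ in range(20):           # perturb a sorted array with a few swaps
    i, j = random.randrange(n), random.randrange(n)
    nearly_sorted[i], nearly_sorted[j] = nearly_sorted[j], nearly_sorted[i]

ops = insertion_sort_ops(nearly_sorted)
print("observed comparisons:", ops, "  worst-case bound n^2:", n * n)
```

On this kind of input the observed count is orders of magnitude below n^2: the bound is correct, but it says very little about behaviour on realistic data, which is exactly why empirical evidence belongs alongside the asymptotic claim.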

Sorry for having mis-quoted you.

*there is really no point wasting our time on current technology which is clearly bound to be replaced by some other technology in the next 60-1000 years*

Actually, I didn’t write that. It was an anonymous comment on my blog (that later got dissected in follow-up comments).

I’m glad you feel good about theory 🙂 I am also not keen on this TCS soul searching, not because I am ok with the current state of affairs, but because I find it rather useless (nobody ever convinced anybody of anything, as far as I can tell).
