My nephew just started his undergraduate degree in Math at the Technion. When his grandfather, a high-powered engineer (whom I sometimes turn to when I need a differential equation solved), asked him what they learned in their very first calculus lesson, the answer was “supremum”. The grandfather was mystified, claiming that “supremum” was not part of the math curriculum at all when he got his engineering degree half a century ago (also at the Technion, it turns out). While this can hardly be literally true, it does highlight the difference between the “abstract” approach to math and the “applied” approach he got as an engineer. From an abstract point of view, it makes good sense to start with the notion of supremum, whose ubiquitous existence essentially defines the real numbers, while I suppose a more applied approach would hardly dwell on such abstract technicalities. When I studied math at the Hebrew University, we spent an enormous amount of time on such abstract notions, to the point of ending the first year with hardly any practical ability to work out a simple integral, but with a pretty good grasp of the properties of zero-measure sets (yes, in the first year).
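(As an aside, for readers who, like the grandfather, never met the term: the property in question is the least upper bound, or completeness, property. Roughly,

\[
\text{every nonempty } S \subseteq \mathbb{R} \text{ that is bounded above has a least upper bound } \sup S \in \mathbb{R},
\]

and this is precisely what fails in the rationals: the set \(\{q \in \mathbb{Q} : q^2 < 2\}\) is bounded above in \(\mathbb{Q}\) but has no supremum there, since \(\sqrt{2} \notin \mathbb{Q}\).)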
In my department, we have an ongoing debate about the type of math our CS students should get. Traditionally, we have followed the extremely abstract tradition of our math department, which emphasizes the basics rather than applications (even in the “differential equations” course, students rarely solve equations; more often they prove existence and the like). In the last decade or so there has been a push in my department to shift some focus to “useful” math too (like being able to actually use the singular value decomposition or the Fourier transform, as needed in computer vision, learning theory, and other engineering-leaning disciplines). I’m on the losing side of this battle and am against this shift away from the “non-useful” fundamentals. The truth is that most computer scientists will rarely need to use any piece of “useful” math. What they will constantly need to use is “mathematical maturity”: being comfortable with formal definitions and statements, being able to tell when a proof is correct, knowing when and how to apply a theorem, and so on.
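To make the distinction concrete, here is a minimal sketch (my own illustration, with synthetic data, not anything from the actual curriculum) of what “actually using” the singular value decomposition looks like in practice: computing the best rank-k approximation of a matrix with numpy, the kind of routine computation that shows up in computer vision and learning applications.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic 100x80 matrix that is approximately rank 5, plus a little noise.
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
A += 0.01 * rng.standard_normal(A.shape)

# SVD: A = U diag(s) Vt, with singular values s sorted in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5
# Truncated reconstruction: keep only the k largest singular values.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, A_k is the best rank-k approximation of A
# in the Frobenius (and spectral) norm.
rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"relative Frobenius error of rank-{k} approximation: {rel_err:.2e}")

Knowing which numpy call to make is the “useful” part; knowing what the theorem guarantees about the truncated reconstruction is the “mathematical maturity” part.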
“most computer scientists will rarely need to use any piece of “useful” math.”
That’s quite a remarkable statement. But it really depends on your definition of computer scientist, doesn’t it? The statement is meaningless unless you establish that.
I also found that statement stunning, even when applied to theoretical computer scientists. Quite apart from the value of “useful” math, there is the question of diversity in student aptitudes – some students absorb abstract math better when also doing applied math.
The truth is that most computer scientists will rarely need to use any piece of “useful” math. What they will constantly need to use is “mathematical maturity”
I was as surprised by this comment as the others were. As far as I can tell, “most” computer scientists don’t need to use any math at all; if they are using math, it is likely to be the “useful” kind. I wonder if you’re using a different definition of the average computer scientist.
Most CS graduates work in a software engineering job (or one of its derivatives) and thus hardly use math per se. My point was that the basic mathematical capabilities they acquired during their studies are implicitly used all the time in their high-level programming work (e.g., in modeling the problems they are working on).