As the New York Times says, “It’s been a banner year or so for artificial intelligence.” The article in question discusses Valiant’s Turing award, and also refers to IBM’s impressive recent display of Jeopardy playing by “Watson”. This labeling of Valiant as an AI researcher, as well as the connection to IBM’s Watson, seems to be shared by most newspapers, but it does not really fit the award citation: “For transformative contributions to the theory of computation, including the theory of probably approximately correct (PAC) learning, the complexity of enumeration and of algebraic computation, and the theory of parallel and distributed computing.” My own tendency is to view Valiant as a theoretical computer scientist, one whom I most admire for his work on basic complexity theory (algebraic, counting, graph-theoretic, and beautiful). So I wonder whether these newspapers are correct in their labeling, a question that raises again the old question of what AI is.
It seems that AI guys are often soul-searching about the definition of AI, while non-AI CS-ers seem mostly oblivious to it. Just recently (in an aside to some CS department discussions), several faculty members from my department sincerely asked me to explain what (the hell) these AI researchers are doing. (I suppose that asking me rather than an AI researcher is more or less like asking an anthropologist who lived among the “natives” rather than the natives themselves.)
Part of the question is the discrepancy between the vision of AI, which concerns the ability to mimic aspects of human behavior that we consider intelligent, and the practice of AI by the academic community. It seems that once an aspect of AI is understood well enough, it ceases to be part of the academic AI community and develops its own internal community. Examples of this phenomenon are the computer vision community, the machine learning community, and the information retrieval community: all central aspects of the concept of AI that now mostly lead their separate academic lives in conferences like ICCV, NIPS, and SIGIR, respectively, rather than as part of AAAI or IJCAI. Thus, while Valiant’s foundational work on learning is clearly at the heart of the concept of AI, sociologically speaking it was part of the theoretical computer science and machine learning communities rather than the academic AI community.
For a long time I wondered why derivatives of game theory are accepted as part of the academic AI community under labels such as multi-agent systems (MAS). I now tend to adopt the answer I got from Aviv Zohar, which takes the view that strategic interaction with others is one of the central traits of intelligence and thus falls straight into the concept of AI. It seems that while this MAS community clearly has its own venues like AAMAS and is also part of the AGT/EC one, it still considers itself part of the general AI community, and indeed publishes much in the general AAAI/IJCAI conferences. Maybe this is an indication that the field still hasn’t reached the level of “being understood well enough” commonly needed for breaking away from the general AI community.
I was especially struck by Abe Othman using as a working definition of AI, “AI is whatever gets published at AAAI/IJCAI”, similarly to Ariel Procaccia stating “in case you were wondering how one defines AI, my best definition is everything that might get published in AAAI/IJCAI”. These two bright young stars in the field of AI made a point of using this definition, and even though they obviously used it in a somewhat tongue-in-cheek manner, I think that they were trying to make a point. (This is especially so since, by virtue of their own research, both could have opted to call themselves Algorithmic Game-Theorists and distanced themselves from the AI community.) Discussing it with Jeff Rosenschein (who was also Ariel’s PhD advisor), he tended to think that they were acknowledging the community aspects of the scientific endeavor. My own view is that they are expressing a confidence in the value and direction of the AI community — a confidence that the AI community did not always have.
My own point of view on the AI academic community comes mostly from looking at differences within the AGT/E community, where AI and TCS researchers frolic together and seem to actually speak with each other. In many cases they do the same type of work. This is especially true for the stronger research, which has both a compelling conceptual message and significant analysis. (But weak papers from the two sides are often weak in their own unique ways.) Still, I can see different types of challenges driving the two communities. The AI guys are drawn to ill-defined questions and try to define them (or some aspects of them) better, while the TCS guys prefer clearer challenges and try to solve them better. Even when working on similar issues, the AI guys will emphasize the question while the TCS guys emphasize the answer. The TCS guys often dislike the AI papers since they do not have any “meat”, and the AI guys often dislike the TCS papers since they lack compelling questions and only give answers.
If I had to give my own definition of AI — the academic field rather than the basic concept — I would point to exactly this characteristic of having ill-defined questions: the academic AI community handles the sub-areas of conceptual AI (i.e., of mimicking aspects of human intelligence) that are not yet understood well enough to have coherent and well-defined paradigms and questions.
By far, one of the most successful and influential parts of AI is Machine Learning.
All this definition game is an interesting intellectual exercise, but the truth (in my opinion) is that at least the AI+GT community fails to deliver. The level of the papers accepted to this IJCAI is totally embarrassing. And no, the “we do new models” propaganda cannot be the excuse, as the papers seem to be simple variants of previous papers, certainly nothing that you couldn’t find even in weak theoretical conferences like SAGT or WINE.
Noam, I agree with you that Valiant is primarily a theory researcher who happened to make important contributions to the theory of machine learning at a time that the ML community really needed it (and Vapnik was making similar contributions at a similar time).
I’m an AI guy, but I don’t worry much about the definition of AI. I think of AI as “doing the right thing” — the theory and practice of computing a rational action to take. From that point of view Game Theory is obviously an important part of AI — unless you want to say that AI is part of Game Theory. You could say the same of Control Theory (one stresses action in the face of adversaries, the other in the face of an uncertain future). AI could have developed as part of an adjacent field, but historical accidents and the power of community — of like-minded groups of people and the tools they use — forged a separate discipline. The drawback of separate disciplines is that sometimes it takes a while to rediscover something that is well-known in the other community; the advantage is that sometimes it is easier to tackle problems using new ideas.
Interaction between fields is common: traditionally we had biology and chemistry; now we have chemical biology and biochemistry as well. How can you tell them apart? Somewhat by subject matter, but more by the conferences they congregate at and the look and smell of their lab equipment.
That definition came from a conversation with Tuomas Sandholm (my advisor) about whether Luis von Ahn is an AI researcher. Tuomas said he wasn’t, because he doesn’t publish in AAAI/IJCAI. It was a clear-cut stroke through a poorly-defined morass and it stuck with me.
I’m more comfortable with the definition “AI is the creation of X bots”, where X is anything (chess, driving, poker, moon-exploration, billiards, …). I believe this is close to the definition that you advance. It’s very practice-oriented, but I think AI should be a practical field; leave theory to the theorists. The practical successes of ML are why it has taken over.
Also, Peter Norvig makes an amusing point about Biology and Chemistry in his discussion of overlapping fields. I recall that at Harvard it was possible to major in either “Biology”, “Chemistry”, “Physics”, “Biophysics”, “Biochemical Sciences”, or “Chemistry and Physics”.
Your quote about AI, “It seems that once an aspect of AI is understood well enough, it ceases being part of the academic AI community and develops its own internal community”, is almost exactly the same as this quote by Bertrand Russell: “as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science”. Has AI taken on the role of philosophy in the 21st century?
The previous comment was by me.
In my experience, AI people keep telling us that there is no AI: there are CL, ML, KR, Vision, …, but no AI.
Well, I know several researchers (including myself) who think that AAAI and IJCAI are not the best AI conferences. I completely agree with Anon’s post: “The level of the accepted papers to this IJCAI is totally embarrassing”. Actually, as Abe asked in his blog: “Should There Be a AAAI?” My answer is NO.
I like Noam’s post as well as the non-anonymous comments (including the de-anonymized one) after it; some really good points here.
As for Anonymous completely agreeing with anon that “The level of the accepted papers to this IJCAI is totally embarrassing”, I’m not quite sure what they’re talking about, because the decisions for this IJCAI are not yet public (or even final), and I doubt that anyone but the program chair (who, I assume, is not making these anonymous posts) has a good view at this point of the overall quality of the papers, even within our subarea.
Then there’s this bit about “the AI+GT community fails to deliver”… Just off the top of my head, the AI+GT/E community has delivered: real-world combinatorial (procurement) auctions used for tens of billions of dollars worth of trade; algorithms deployed to clear real-world kidney exchanges; poker-playing programs that, on some variants of poker, are at least competitive with the best human players; and systems based on computing game-theoretic solutions that are used for the allocation of security resources at airports, the assignment of federal air marshals, and seemingly many things to come. (Note: I don’t think at all that real-world applications are the only measure of the importance of the area, but they’re certainly useful for a paragraph like this.)
AGT/E is an area where people in AI and theoretical computer science really talk to each other, and I think both sides have benefited tremendously from the important insights on the other side, probably more than each is willing to acknowledge. Let’s continue the fruitful interaction, and talk about the best insights and the most interesting directions, rather than wasting time picking on how bad the worst published papers on the other side are (which both sides do).