As the New York Times says, “It’s been a banner year or so for artificial intelligence”. This particular article talks about Valiant’s Turing award, and also refers to IBM’s impressive recent display of Jeopardy playing by “Watson”. This labeling of Valiant as an AI researcher, as well as the connection to IBM’s Watson, seems to be shared by most newspapers, but it does not really fit the award citation: “For transformative contributions to the theory of computation, including the theory of probably approximately correct (PAC) learning, the complexity of enumeration and of algebraic computation, and the theory of parallel and distributed computing.” My own tendency is to view Valiant as a theoretical computer scientist, whom I admire most for his work on basic complexity theory (algebraic, counting, graph-theoretic, and beautiful). So I wonder again whether these newspapers are correct in their labeling, a question that raises again the old question: what is AI?
It seems that AI guys are often soul-searching about the definition of AI, but non-AI CS-ers seem mostly oblivious to it. Just recently (in an aside to some CS department discussions), several faculty members from my department sincerely asked me to explain what (the hell) these AI researchers are doing. (I suppose that asking me rather than an AI researcher is more or less like asking an anthropologist who lived among the “natives” rather than the natives themselves.)
Part of the question is the discrepancy between the vision of AI, which talks about the ability to mimic aspects of human behavior that we consider intelligent, and the practice of AI by the academic community. It seems that once an aspect of AI is understood well enough, it ceases being part of the academic AI community and develops its own internal community. Examples of this phenomenon are the computer vision community, the machine learning community, and the information retrieval community — all central aspects of the concept of AI that now mostly lead their separate academic lives in conferences like ICCV, NIPS, and SIGIR, respectively, rather than as part of AAAI or IJCAI. Thus, while Valiant’s foundational work on learning is clearly at the heart of the concept of AI, sociologically speaking it was part of the theoretical computer science and machine learning communities rather than the academic AI community.
For a long time I wondered why derivatives of Game Theory are accepted as part of the academic AI community under labels such as multi-agent systems (MAS). I now tend to adopt the answer I got from Aviv Zohar, taking the view that strategic interaction with others is one of the central traits of intelligence and thus falls straight into the concept of AI. It seems that while this MAS community clearly has its own venues like AAMAS, and is also part of the AGT/EC one, it still considers itself part of the general AI community, and indeed publishes much in the general AAAI/IJCAI conferences. Maybe this is an indication that the field still hasn’t reached the level of “being understood well enough” commonly needed for breaking away from the general AI community.
I was especially struck by Abe Othman using as a working definition of AI “AI is whatever gets published at AAAI/IJCAI”, similarly to Ariel Procaccia stating “in case you were wondering how one defines AI, my best definition is everything that might get published in AAAI/IJCAI”. These two bright young stars in the field of AI made a point of using this definition, and even though they obviously used it in a somewhat tongue-in-cheek manner, I think that they were trying to make a point. (This is especially so since, by virtue of their own research, both could have opted to call themselves Algorithmic Game Theorists and distanced themselves from the AI community.) Discussing it with Jeff Rosenschein (who was also Ariel’s PhD advisor), he tended to think that they were acknowledging the community aspects of the scientific endeavor. My own view is that they are expressing a confidence in the value and direction of the AI community — a confidence that the AI community did not always have.
My own point of view on the AI academic community comes mostly from looking at differences within the AGT/EC community, where AI and TCS researchers frolic together and seem to actually speak with each other. In many cases they do the same type of work. This is especially true for the stronger research, which has both a compelling conceptual message and significant analysis. (But weak papers from the two sides are often weak in their own unique ways.) Still, I can see different types of challenges driving the two communities. The AI guys are drawn to ill-defined questions and try to define them (or some aspects of them) better, while the TCS guys prefer clearer challenges and try to solve them better. Even when working on similar issues, the AI guys will emphasize the question while the TCS guys emphasize the answer. The TCS guys often dislike the AI papers since they do not have any “meat”, and the AI guys often dislike the TCS papers since they only give answers without compelling questions.
If I had to give my own definition of AI — the academic field rather than the basic concept — I would point to exactly this characteristic of having ill-defined questions: the academic AI community handles the sub-areas of conceptual AI (i.e., of mimicking aspects of human intelligence) that are not yet understood well enough to have coherent and well-defined paradigms and questions.