This semester I have been co-teaching (with the awesome Martial Hebert) CMU’s graduate artificial intelligence (grad AI) course. It’s been a lot of fun teaching AI to a class where a significant fraction of the students build robots for a living (possibly some of the students *are* robots, especially the ones who got a perfect score on the midterm exam). Although the last class is on May 2, I already gave my last lecture, so this seems like a good time to share some thoughts.

My general impression is that many AI courses try to cover all of AI, broadly defined. Granted, you get Renaissance students who can write “hello world” in Prolog while reciting the advantages and disadvantages of iterative deepening depth-first search. On the downside, breadth comes at the expense of depth, and the students inevitably get the very wrong impression that AI is a shallow field. Another issue is that AI is so broad that some of its subdisciplines are only loosely related, and in particular someone specializing in, say, algorithmic economics, may not be passionate about teaching, say, logic (to give a completely hypothetical example).

Our version of the course is far from perfect, but we concentrated on a few favorite topics and covered them in depth. We taught a few of the usual topics, but with a twist. Some examples include: upper and lower bounds for randomized alpha-beta pruning, existence of solutions to CSPs using the Lovász Local Lemma, optimality of A* (in terms of the number of explored nodes) among all admissible algorithms, and the computational hardness of classical planning. In addition, we taught some unusual topics: motion planning (three lectures), perception (three lectures), and, of course, algorithmic economics (five lectures!).

So what should every aspiring AI researcher know about algorithmic economics? Here is my shamelessly biased view.

- Computational social choice (first lecture, second lecture): common voting rules, manipulation, the Gibbard-Satterthwaite Theorem, computational complexity of manipulation, applications to multiagent systems and human computation.
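  To make “common voting rules” concrete, here is a minimal sketch (not course material, just an illustration) of two of the rules typically covered, plurality and Borda count, operating on ranked ballots:

  ```python
  from collections import Counter

  def plurality(ballots):
      """Plurality: each voter's top-ranked candidate gets one point;
      the candidate with the most points wins."""
      return Counter(b[0] for b in ballots).most_common(1)[0][0]

  def borda(ballots):
      """Borda count: with m candidates, a candidate ranked in
      position i (0-indexed) on a ballot receives m - 1 - i points."""
      m = len(ballots[0])
      scores = Counter()
      for b in ballots:
          for i, cand in enumerate(b):
              scores[cand] += m - 1 - i
      return scores.most_common(1)[0][0]

  # Three voters, each ranking candidates a, b, c:
  ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
  print(plurality(ballots))  # a (two first-place votes)
  print(borda(ballots))      # a (scores: a=4, b=3, c=2)
  ```

  The Gibbard-Satterthwaite Theorem says that any such rule (onto, non-dictatorial, three or more candidates) is manipulable: some voter can sometimes benefit from misreporting her ranking.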
- Computational game theory (first lecture, second lecture): prisoner’s dilemma, existence and complexity of Nash equilibrium, applications of Nash to interdomain routing (OK, this is a STOC paper, but it’s one of my favorites and I couldn’t resist) and the smart grid, mediated equilibrium, correlated equilibrium and its complexity, Stackelberg strategies, security games, auctions, combinatorial auctions. Covering auctions in 30 minutes is ridiculous, but the Vickrey auction is a must.
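  Since the Vickrey auction is “a must,” here is the whole mechanism in a few lines (a toy sketch with made-up bidder names): the highest bidder wins but pays the second-highest bid, which is what makes truthful bidding a weakly dominant strategy.

  ```python
  def vickrey_auction(bids):
      """Second-price (Vickrey) sealed-bid auction: the highest bidder
      wins the item and pays the second-highest bid."""
      ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
      winner = ranked[0][0]
      price = ranked[1][1]  # second-highest bid sets the price
      return winner, price

  winner, price = vickrey_auction({"alice": 10, "bob": 7, "carol": 4})
  print(winner, price)  # alice wins and pays 7
  ```

  Note that the winner’s payment does not depend on her own bid, so shading her bid can only risk losing an item she values above the price.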
- Computational fair division (lecture): cake cutting algorithms, strategyproof cake cutting, complexity of fair cake cutting, applications of fair division to resource allocation in data centers (yep, this is a systems paper by systems people, but who’s to say it’s not AI?). Fair division is admittedly a niche area within AI, but it’s been getting increasing visibility in the last AAAIs/IJCAIs and it’s just too much fun to skip.
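The classic entry point to cake cutting is the two-player cut-and-choose protocol; here is a sketch (my own discretized toy version, not an algorithm from the lectures) on a cake chopped into equal-width slices with heterogeneous valuations:

```python
def cut_and_choose(cutter_vals, chooser_vals):
    """Two-player cut-and-choose on a cake discretized into slices.
    cutter_vals[i] and chooser_vals[i] give each player's value for
    slice i. The cutter picks the cut point that splits the cake into
    two pieces she values as equally as possible; the chooser takes
    whichever piece he prefers, and the cutter keeps the other. For
    two players the protocol is proportional and envy-free."""
    n = len(cutter_vals)
    total = sum(cutter_vals)
    # Cutter minimizes the imbalance between the two pieces in her eyes.
    best_cut = min(range(n + 1),
                   key=lambda k: abs(2 * sum(cutter_vals[:k]) - total))
    left, right = list(range(best_cut)), list(range(best_cut, n))
    # Chooser takes the piece he values more.
    if sum(chooser_vals[i] for i in left) >= sum(chooser_vals[i] for i in right):
        return {"chooser": left, "cutter": right}
    return {"chooser": right, "cutter": left}

# Cutter values the cake uniformly; chooser only cares about the right half.
print(cut_and_choose([1, 1, 1, 1], [0, 0, 3, 3]))
# -> chooser takes slices [2, 3], cutter keeps [0, 1]
```

Each player walks away with a piece worth at least half the cake by her own valuation, which is exactly the proportionality guarantee that the more elaborate n-player protocols generalize.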

Sometimes the line between theory and AI (and systems) is quite blurred, but some fun and important topics such as truthful approximation algorithms and the price of anarchy are arguably *not* AI. Fortunately I’m scheduled to co-teach “great theoretical ideas in computer science” in Fall 2013; one great meta idea seems to be that you can teach whatever you want as long as you can prove it, and I expect that this stuff will make the cut.