Here’s Alex’s announcement of his new book, which I am very excited about and which many in our community will no doubt find extremely useful (there’s even an open version on arXiv!):
I am pleased to announce Introduction to Multi-Armed Bandits, a broad and accessible introduction to the area which emphasizes connections to operations research, game theory, and mechanism design. These connections have generated a considerable amount of interest (and publications) in the Economics and Computation community.
The book is teachable by design: each chapter corresponds to one week of my class. Each chapter handles one big direction in the literature on bandits, covers the first-order concepts and results on a technical level, and provides a detailed literature review for further exploration. There are no prerequisites other than a certain level of mathematical maturity.
The chapters are as follows: stochastic bandits; lower bounds; Bayesian bandits and Thompson Sampling; Lipschitz bandits; full feedback and adversarial costs; adversarial bandits; linear costs and semi-bandits; contextual bandits; bandits and games; bandits with knapsacks; bandits and incentives.
The book is also available on arXiv (in a plain-format version).
Aleksandrs Slivkins
Microsoft Research NYC
Thanks for making this available! Sounds like a good text for a long weekend.
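For anyone who wants a quick taste of the material before diving in, here is a minimal sketch of Thompson Sampling for Bernoulli bandits, one of the algorithms covered in the Bayesian bandits chapter. This is just an illustration under simple assumptions (Bernoulli rewards, Beta(1, 1) priors), not code from the book:

```python
import numpy as np

def thompson_sampling(true_means, horizon, seed=0):
    """Minimal Thompson Sampling for Bernoulli bandits with Beta(1, 1) priors.

    true_means: per-arm success probabilities (unknown to the algorithm).
    Returns the total reward collected over the horizon.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.ones(k)  # Beta posterior alpha parameters, one per arm
    failures = np.ones(k)   # Beta posterior beta parameters, one per arm
    total_reward = 0
    for _ in range(horizon):
        # Sample a mean estimate for each arm from its posterior, play the argmax.
        samples = rng.beta(successes, failures)
        arm = int(np.argmax(samples))
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = int(rng.random() < true_means[arm])
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    # Three arms; the algorithm should concentrate its plays on the 0.7 arm.
    print(thompson_sampling(true_means=[0.3, 0.5, 0.7], horizon=10_000))
```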