TTIC 31250 - An Introduction to the Theory of Machine Learning (Spring 2018)

Avrim Blum

Mon/Wed 3:00-4:20pm, TTIC Room 526

(Office hours: Mon/Wed 4:30-5:00)


Course description: This course will cover some of the basic theory underlying machine learning and the process of generalizing from data. We will talk about both the PAC model for batch learning (learning over one set of data with the intent of producing a predictor that performs well on new data) and models for learning from feedback over time (online learning). We will discuss important fundamental concepts including overfitting, uniform convergence, formal notions of Occam's razor, VC-dimension, and regularization, as well as several classic algorithms including the Perceptron algorithm, SVMs, algorithms for combining expert advice, and boosting. We will also discuss limited-feedback (bandit) algorithms, reinforcement learning, connections between learning and game theory, and formal guarantees on privacy. This will be a proof-oriented course: our focus will be on proving performance guarantees for algorithms that aim to generalize from data, as well as on understanding what kinds of performance guarantees we can hope to prove.

Prerequisites: The main prerequisites are comfort with a proof-oriented course, having taken an algorithms class, and comfort with basic probabilistic modeling and reasoning. For example: suppose 1000 programs are submitted to a stock-market prediction challenge, and we find that one of them has correctly predicted, 10 weeks in a row, whether the market would go up or down the following week; should we feel confident that this program is a good one? Comfort with thinking about points and vectors in high-dimensional space is a plus.
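As a back-of-the-envelope check on that example, here is a minimal Python sketch, assuming each submitted program simply guesses the weekly up/down direction uniformly at random (the 1000 programs and 10 weeks come from the example above; the variable names are just illustrative):

    # If all 1000 programs guessed up/down uniformly at random each week,
    # how surprising is one program with a perfect 10-week streak?

    n_programs = 1000
    n_weeks = 10

    # Probability a single random guesser is right all 10 weeks.
    p_perfect = 0.5 ** n_weeks                # 1/1024, about 0.001

    # Expected number of perfect-streak programs, by linearity of expectation.
    expected_lucky = n_programs * p_perfect   # about 0.98

    # Probability that at least one of the 1000 guessers has a perfect streak.
    p_at_least_one = 1 - (1 - p_perfect) ** n_programs   # about 0.62

    print(f"P(one fixed program perfect) = {p_perfect:.4f}")
    print(f"Expected lucky programs      = {expected_lucky:.2f}")
    print(f"P(at least one lucky)        = {p_at_least_one:.2f}")

Under this null model a perfect streak among 1000 random guessers happens more often than not, so the streak alone is weak evidence that the program is good.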

Coursework: We will have 5 homework assignments, a small course project (a 4-5 page writeup based on exploring a theoretical idea, trying some experiments, or reading and explaining a recent research paper), and a take-home final worth 1-2 homeworks. In addition, students will be asked to help grade one of the assignments.


Homeworks


Lecture Notes and Tentative Plan

  1. 03/26: Introduction. The PAC model, Overfitting, and Occam's razor.
  2. 03/28: The Online Mistake-Bound model, Combining Expert Advice / Multiplicative Weights, Regret Minimization, sleeping experts.
  3. 04/02: The Perceptron Algorithm, Margins, and intro to Kernels. (Plus finishing up material from last time.)
  4. 04/04: [No class today: Board of Trustees meeting]
  5. 04/09: Perceptron (see notes above) and SVMs.
  6. 04/11: Uniform Convergence and VC-Dimension I.
  7. 04/16: Uniform Convergence and VC-Dimension II.
  8. 04/18: Rademacher Bounds.
  9. 04/23: Boosting.
  10. 04/25: Statistical Query Model I.
  11. 04/30: Statistical Query Model II.
  12. 05/02: Computational Hardness Results for Learning.
  13. 05/07: Learning and Game Theory I.
  14. 05/09: Learning and Game Theory II: Book Chapter on Learning, Regret Minimization, and Equilibria.
  15. 05/14: Bandit algorithms.
  16. 05/16: Learning Finite-State Environments.
  17. 05/21: MDPs and Reinforcement Learning.
  18. 05/23: Differential Privacy and Learning.
  19. 05/30: Semi-Supervised Learning.

Recommended texts/readings: