Matthew Walter

Assistant Professor
Toyota Technological Institute at Chicago
6045 S. Kenwood Avenue
Chicago, IL 60637

Tel: +1 (773) 834-3637
Fax: +1 (773) 357-6970
E-mail: mwalter@ttic.edu

TTIC 31240: Self-driving Vehicles: Models and Algorithms for Autonomy

Colloquially known as Duckietown, the course considers problems in perception, navigation, and control, and their systems-level integration in the context of self-driving vehicles, through an open-source curriculum for autonomy education that emphasizes hands-on learning. Students collaborate to implement concepts covered in lecture on a low-cost autonomous vehicle with the goal of navigating a model town complete with roads, signage, traffic lights, obstacles, and citizens. The vehicle is equipped with a monocular camera, performs all processing onboard with a Raspberry Pi 3, and must: follow lanes while avoiding obstacles, pedestrians, and other robots; localize within a global map; navigate a city; and coordinate with other robots to avoid collisions. The course is taught concurrently and in conjunction with classes at the University of Montreal and ETH Zürich, which provides opportunities for interaction and collaboration across institutions.
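
As a rough illustration of the in-lane driving task, the sketch below closes a simple steering loop on the two quantities a lane-following vehicle typically estimates from its camera: the lateral offset d from the lane center and the heading error phi. The dynamics, gains, and interfaces here are illustrative assumptions for exposition, not the actual Duckietown software stack.

import math

def steering(d, phi, k_d=-4.0, k_phi=-2.0):
    # Angular-velocity command that drives both errors toward zero.
    return k_d * d + k_phi * phi

def simulate(steps=200, dt=0.05, v=0.2):
    # Start 10 cm off-center with a 0.3 rad heading error.
    d, phi = 0.10, 0.3
    for _ in range(steps):
        omega = steering(d, phi)
        # Unicycle kinematics relative to a straight lane: the offset
        # grows with the heading error, the heading with omega.
        d += v * math.sin(phi) * dt
        phi += omega * dt
    return d, phi

if __name__ == "__main__":
    d, phi = simulate()
    print("final offset %.4f m, heading error %.4f rad" % (d, phi))

With negative gains on both errors, the linearized closed-loop dynamics are stable, so the offset and heading error decay toward zero as the simulation runs.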


TTIC 31170: Planning, Learning, and Estimation for Robotics and Artificial Intelligence

Spring 2015, Spring 2017

A graduate-level course concerned with fundamental techniques in robotics and artificial intelligence (AI), with an emphasis on probabilistic inference, learning, and planning under uncertainty. The course will investigate the theoretical foundations of these topics, treating them as rigorous mathematical tools that enable solutions to real-world problems drawn broadly from robotics and AI. The course will cover topics including: Bayesian filtering (Kalman filtering, particle filtering, and dynamic Bayesian networks), simultaneous localization and mapping (SLAM), planning, Markov decision processes, reinforcement learning, and graphical models.
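
To give a flavor of the first topic on that list, the following is a minimal sketch of Bayesian filtering in its simplest setting: a one-dimensional Kalman filter that alternates a prediction step through linear-Gaussian dynamics x' = a x + w with a measurement update against observations z = x + v. The model and noise parameters are illustrative assumptions, not drawn from course materials.

import random

def kalman_step(mu, var, z, a=1.0, q=0.01, r=0.25):
    # Predict: propagate the Gaussian belief N(mu, var) through x' = a x + w.
    mu_pred = a * mu
    var_pred = a * a * var + q
    # Update: fold in the measurement z = x + v using the Kalman gain.
    k = var_pred / (var_pred + r)
    return mu_pred + k * (z - mu_pred), (1.0 - k) * var_pred

if __name__ == "__main__":
    random.seed(0)
    x, mu, var = 0.0, 0.0, 1.0          # true state and initial belief
    for _ in range(50):
        x += random.gauss(0.0, 0.1)     # true state follows a random walk
        z = x + random.gauss(0.0, 0.5)  # noisy observation of the state
        mu, var = kalman_step(mu, var, z)
    print("true %.3f, estimate %.3f +/- %.3f" % (x, mu, var ** 0.5))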


TTIC 31180: Probabilistic Graphical Models

Spring 2016

Many problems in machine learning, computer vision, natural language processing, robotics, computational biology, and beyond require modeling complex interactions between large, heterogeneous collections of random variables. Graphical models combine probability theory and graph theory to provide a unifying framework for representing these relationships in a compact, structured form. Probabilistic graphical models decompose multivariate joint distributions into a set of local relationships among small subsets of random variables via a graph. These local interactions result in conditional independencies that afford efficient learning and inference algorithms. Moreover, their modular structure provides an intuitive language for expressing domain-specific knowledge, and facilitates the transfer of modeling advances to new applications.
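
To make the efficiency claim concrete, the sketch below performs exact marginal inference on a chain-structured model, the simplest case of the decomposition described above. Because each variable interacts only with its neighbor, the joint factors as p(x_1, ..., x_n) = p(x_1) prod_i p(x_i | x_{i-1}), and the marginal of the last variable can be computed by eliminating variables one at a time. The binary chain and its parameters are illustrative.

# Chain-structured model over n binary variables: p(x1) and p(x_i | x_{i-1}).
n, k = 10, 2
prior = [0.6, 0.4]
trans = [[0.7, 0.3],   # row i: distribution of x_t given x_{t-1} = i
         [0.2, 0.8]]

# Variable elimination: sum out x_1, x_2, ... in order, carrying the
# message m(x_t) = p(x_t). Each step costs O(k^2), so the whole pass is
# O(n k^2) rather than the O(k^n) cost of summing over the full joint.
message = prior
for _ in range(n - 1):
    message = [sum(message[i] * trans[i][j] for i in range(k))
               for j in range(k)]

print("p(x_n) =", message)  # a valid distribution: entries sum to 1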

This graduate-level course will provide a strong foundation for learning and inference with probabilistic graphical models. The course will first introduce the representational power of graphical models, including Bayesian networks, Markov networks, and dynamic Bayesian networks. Next, the course will investigate contemporary approaches to statistical inference, both exact and approximate. The course will then survey state-of-the-art methods for learning the structure and parameters of graphical models.