Readings
Readings complement the lecture content; links will be posted below periodically.
Lectures 1, 8 - Overcoming Catastrophic Forgetting in Neural Networks
Lectures 1, 5 - Continual Backprop: Stochastic Gradient Descent with Persistent Randomness
Lecture 2 - Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta
Lecture 2 - Gain Adaptation Beats Least Squares?
Lectures 3, 4 - Simple Agent, Complex Environment: Efficient Reinforcement Learning with Agent States
Lecture 6 - Toward a Formal Framework for Continual Learning
Lecture 6 - Mark Ring's Slides: My Take on Continual Learning
Lecture 10 - A Tutorial on Thompson Sampling
Lecture 11 - Efficient Continual Learning with Modular Networks and Task-Driven Priors
Lecture 11 - NEVIS’22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research
Lectures 12, 13 - Non-Stationary Bandit Learning via Predictive Sampling
Lectures 13, 15 - An Information-Theoretic Framework for Supervised Learning
Lecture 17 - Deciding What to Learn: A Rate-Distortion Approach
Supplementary background materials
Supplementary Math Background
Reinforcement Learning: An Introduction
General surveys and papers we may cover later
Embracing Change: Continual Learning in Deep Neural Networks
Towards Continual Reinforcement Learning: A Review and Perspectives
Efficient Continual Learning with Modular Networks and Task-Driven Priors
An Information-Theoretic Analysis of Compute-Optimal Neural Scaling Laws