CS 330: Deep Multi-Task and Meta Learning

Fall 2019, Class: Mon, Wed 1:30-2:50pm, Bishop Auditorium


Description:

While deep learning has achieved remarkable success in supervised and reinforcement learning problems, such as image classification, speech recognition, and game playing, these models are, to a large degree, specialized for the single task they are trained for. This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes:

  • goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
  • meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly
  • curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer
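As a flavor of the meta-learning methods listed above, the inner/outer loop of a first-order MAML-style meta-update can be sketched in a few lines. The scalar linear model, hand-picked task slopes, and learning rates below are illustrative assumptions for this sketch, not material from the course:

```python
import numpy as np

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.01):
    """One first-order MAML meta-update on a scalar linear model y = w * x.

    Each task is a target slope `a`; the per-task loss is the mean of
    (w*x - a*x)^2 over a small fixed set of inputs.
    """
    xs = np.array([1.0, 2.0, 3.0])
    meta_grad = 0.0
    for a in tasks:
        # Inner loop: one gradient step adapting w to this task.
        grad = np.mean(2 * (w * xs - a * xs) * xs)
        w_adapted = w - inner_lr * grad
        # Outer loop (first-order approximation): accumulate the gradient of
        # the post-adaptation loss, evaluated at the adapted parameters.
        meta_grad += np.mean(2 * (w_adapted * xs - a * xs) * xs)
    return w - outer_lr * meta_grad / len(tasks)

w = 0.0
for _ in range(500):
    w = maml_step(w, tasks=[-1.0, 1.0, 3.0])
# w drifts toward the mean task slope (1.0) -- the initialization from which
# a single gradient step adapts well to every task in the family.
```

The meta-objective is the loss *after* adaptation, so the meta-parameters are trained to be a good starting point for fast per-task fine-tuning rather than a good solution to any single task.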

This is a graduate-level course. By the end of the course, students will be able to understand and implement state-of-the-art multi-task learning and meta-learning algorithms and be ready to conduct research on these topics.

Format:

The course is a combination of lecture and reading sessions. The lectures will discuss the fundamentals of topics required for understanding and designing multi-task and meta-learning algorithms. During the reading sessions, students will present and discuss recent contributions and applications in this area. There will be three assignments. Throughout the semester, each student will also work on a related research project that they present at the end of the semester.

Prerequisites:

CS 229 or an equivalent introductory machine learning course is required. CS 221 or an equivalent introductory artificial intelligence course is recommended but not required.

Enrollment:

Please fill out this enrollment form if you are interested in this course. See the form for more information on enrollment.


Staff

Prof. Chelsea Finn

Instructor
OH: Weds 3-4 pm
Location: Gates 219
Webpage
Suraj Nair

Teaching Assistant
OH: Thurs 7-8 pm
Location: Gates 167
Webpage
Tianhe (Kevin) Yu

Teaching Assistant
OH: Mon 10:30-11:30 am
Location: Gates B21
Webpage
Abhishek Sinha

Teaching Assistant
OH: Tue 4:30-5:30 pm
Location: Gates 259
Webpage
Tim Liu

Teaching Assistant
OH: Fri 4:30-5:30 pm
Location: Gates 358
Webpage


Tentative Timeline

Date Lecture Handouts / Deadlines Notes
Week 1
Mon, Sep 23
Lecture Course introduction, problem definitions, applications
Week 1
Wed, Sep 25
Lecture Supervised multi-task learning, black-box meta-learning Homework 1 out HW1 [pdf][zip]
Week 1
Thu, Sep 26
TA Session TensorFlow tutorial TF notebook
Week 2
Mon, Sep 30
Lecture Optimization-based meta-learning
Week 2
Wed, Oct 02
Reading Applications in imitation learning, vision, language, generative models Presentation slides [P1][P2][P3][P4]
Week 3
Mon, Oct 07
Lecture Few-shot learning via metric learning Final Project Guidelines
Week 3
Wed, Oct 09
Reading Hybrid meta-learning approaches Due Homework 1
Homework 2 out
HW2 [pdf][zip]
Presentation slides [P1][P2][P3][P4]
Week 4
Mon, Oct 14
Lecture Bayesian meta-learning
Week 4
Wed, Oct 16
Reading Meta-learning for active learning, weakly-supervised learning, unsupervised learning Presentation slides [P1][P2][P3][P4]
Week 5
Mon, Oct 21
Lecture Reinforcement learning primer, multi-task RL, goal-conditioned RL
Week 5
Wed, Oct 23
Reading Auxiliary objectives, state representation learning Due Homework 2
Homework 3 out
HW3 [pdf][zip]
Presentation slides [P1][P2][P3][P4]
Week 6
Mon, Oct 28
Reading Hierarchical RL, curriculum generation Presentation slides [P1][P2][P3][P4]
Week 6
Wed, Oct 30
Guest Lecture Meta-RL, learning to explore Due Project proposal
Kate Rakelly, UC Berkeley
Week 7
Mon, Nov 04
Reading Meta-RL and emergent phenomena Presentation slides [P1][P2][P3][P4]
Week 7
Wed, Nov 06
Lecture Model-based RL for multi-task learning, meta model-based RL Due Homework 3
Week 8
Mon, Nov 11
Lecture Lifelong learning: problem statement, forward & backward transfer
Week 8
Wed, Nov 13
Reading Miscellaneous multi-task/meta-RL topics Due Project milestone
Week 9
Mon, Nov 18
Guest Lecture TBD Jeff Clune, University of Wyoming / Uber
Week 9
Wed, Nov 20
Guest Lecture Information theoretic exploration Sergey Levine, UC Berkeley
Week 10
Mon, Nov 25
Thanksgiving Break
Week 10
Wed, Nov 27
Thanksgiving Break
Week 11
Mon, Dec 02
Lecture Frontiers: Memorization, unsupervised meta-learning, open problems
Week 11
Tue, Dec 03
Presentation Poster Presentation 1:30 - 3:30 pm @ Packard Atrium
Week 13
Mon, Dec 16
No Class Due Final Project Report (Deadline at 11:59 pm PT)



Note on Financial Aid


All students should retain receipts for books and other course-related expenses, as these may be qualified educational expenses for tax purposes. If you are an undergraduate receiving financial aid, you may be eligible for additional financial aid for required books and course materials if these expenses exceed the aid amount in your award letter. For more information, review your award letter or visit the Student Budget website.

All Projects

  • Weighted Gradient MAML.
    Andrew Zhou Wang, Daniel Kang, Rohan Badlani
  • Estimating the Degree of Shared Representation in Multi-Task Learning.
    Benjamin Petit, Will Deaderick
  • Learning to Adapt to Various Surfaces in Autonomous Driving.
    Nathan Spielberg
  • Activation Patterns of Policy Networks trained with Multi-Task and Meta-RL.
    Ruilin Li
  • Few-Shot Video Classification with Linear Base Learners.
    Kevin Shin Tan
  • Meta-Learning Symbolic, String-Based Concepts.
    Allen Nie, Andrew Nam, Katherine L. Hermann
  • Meta-Learning Hierarchical Policies.
    Rafael Mitkov Rafailov, Riley DeHaan
  • Hyperspherical Prototype Networks for Semi-Supervised Learning.
    Daniel Tan, Ryan Arthur Tolsma
  • Using curiosity as an Intrinsic Reward for Policy Gradient Methods.
    Adam Harrison Williams
  • Emergence of Modular Functional Groups through Attention.
    Dian Ang Yap, Josh Payne, Vineet Sai Kosaraju
  • Multi-Task Learning for Weakly Supervised Named Entity Recognition.
    Saelig Ashank Khattar, Jason Fries
  • Quantifying and Evaluating Positive Transfer in Multi-Task and Meta-Learning in NLP Tasks.
    Daniel Alexander Salz, Hanoz Bhathena, Siamak Shakeri
  • Meta-Learning for Compensating for Sensor Drift and Noise in Aptamer E-chem Kinetic Data.
    Louis Blankemeier
  • Meta-Learning for Low Resource Question Answering.
    Anirudh Rajiv Joshi, Ayush Agarwal, Raul Puri
  • Few-Shot Object Detection with Prototypical RetinaNet.
    John Weston Hughes, Kamil Ali
  • Distributionally Robust Meta-Learning.
    Yining Chen, Yue Hui
  • Single Molecule Localization Microscopy with Meta-Learning.
    Hsiang-Yu Yang
  • Learning to be Safe.
    Krishnan Srinivasan, Samuel Clarke
  • Multi-Task Learning for Mathematical Problem Solving.
    Justin Dieter
  • Automatic Web Navigation via Unsupervised and Few-Shot Learning.
    Nikhil Cheerla, Rohan Suri
  • Meta-GAN for Few-Shot Image Classification with Data Augmentation.
    Jc Charles Peruzzi, Mason Riley Swofford, Nikita-Girey Nechvet Demir
  • Meta-Learning of Visual Tasks with AutoBAHNNs: Autoencoder-like Biological-Artificial Hybrid Neural Networks.
    George Sivulka, Josh Brendan Melander
  • Meta-Learning with PCGrad and Hessian Regularization.
    Alex Mckeehan
  • Learning Higher-Order Representations of Networks via Pretraining on the Subgraph Matching Problem.
    Sabri Eyuboglu
  • Explainable Bayesian Multi-modal Meta-Learning: Quantifying Uncertainty of Subspace Structures.
    Lijing Wang
  • Representation Learning for Classification.
    Geoffrey Lim Angus
  • Meta-Kernels for MAML.
    Mansheej Paul, Saarthak Sarup
  • Applying Meta-Learning to Predict Business Success.
    Chenchen Pan
  • Meta-Learning Semi-parametric Image Classification.
    Henrik Marklund
  • Hierarchical and Meta-Learning Methods for MineRL.
    Jeffrey Gu
  • FLEO: Flow-Based Latent Embedding Optimization for Few-Shot Meta-Learning.
    Karen Yang, Todd Francis Macdonald
  • k-AML: k-Attractor Meta Learning.
    Advay Pal, Behzad Haghgoo, Megumi Sano
  • Meta-Hierarchical RL.
    Tian Tan, Ye Ye, Zhihan Xiong
  • Meta-Batch RL.
    Albert Jia-Xiang Tung, Jonathan Austin Booher
  • Meta-Learning for Global Satellite-Based Land Cover Classification.
    Sherrie Wang
  • Meta-Ensembles for Epistemic Uncertainty in Few-Task Meta Learning.
    Apoorva Sharma
  • Meta-Learning for Autonomous Vehicle Safety Validation.
    Anthony Louis Corso
  • Large Scale Authorship Attribution with Meta-Learning with Latent Embedding Space Optimization.
    Juanita Ordonez
  • Bridging Weakly/Semi-Supervised Learning via Meta Learning.
    Hao Sheng, Huanzhong Xu, Xiao Chen
  • Review/Comparison on Gradient-Based Meta Learning Algorithms.
    Jongho Kim
  • Multi-Task Manipulation of Deformable Objects.
    Neel Sesh Ramachandran, Varun Nambiar
  • You Just Have to Believe: Meta-Learning for Bayesian Reinforcement Learning.
    Preston Davis Culbertson
  • Meta-Learning with PCGrad.
    Jo C H Chuang, Tom Knowles



    © Chelsea Finn 2019