ICML 2021 Tutorial: Unsupervised Learning for Reinforcement Learning

Overview:

In this tutorial we cover advances and challenges at the intersection of Unsupervised Learning and Reinforcement Learning. Unsupervised Learning (UL) has taken off in the past few years with the advent of language-model-based pre-training in natural language processing and contrastive learning in computer vision. A main advantage of unsupervised pre-training in these domains is the resulting data efficiency on downstream supervised learning tasks. There is considerable interest in how these techniques can be applied to reinforcement learning (RL) and robotics. The transfer is not straightforward: the sequential decision-making nature of RL and robotics poses challenges beyond those of passively learning from images and text on the internet. This tutorial covers the foundational building blocks of applying unsupervised learning to reinforcement learning, with the goal that attendees come away with knowledge of the latest state-of-the-art techniques and practices, as well as the wide array of open research directions at this challenging and interesting intersection. Part I covers Representation Learning for RL. Part II covers Unsupervised Pre-Training for RL.

Slides:

https://www.dropbox.com/s/kkj8tozq2m7zr5e/ICML2021-tutorial--Unsupervised%20RL--Srinivas-Abbeel.pdf?dl=0


ICML Tutorial Video:

https://icml.cc/virtual/2021/tutorial/10843 (requires ICML registration at this time; we are told this requirement may be lifted in the future)