A Roadmap to Never-Ending Reinforcement Learning

7 May 2021, ICLR Virtual

Motivation

Humans have a remarkable ability to continually learn and adapt to new scenarios over the duration of their lifetime (Smith & Gasser, 2005). This ability is referred to as never-ending learning, also known as continual learning or lifelong learning. Never-ending learning is the constant development of increasingly complex behaviors and the process of building complicated skills on top of those already developed (Ring, 1997), while reapplying, adapting, and generalizing those abilities to new situations.


A never-ending learner should satisfy the following desiderata (a code sketch of the non-episodic setting follows the list):

  • It learns behaviors and skills while solving its tasks.

  • It invents new subtasks that may later serve as stepping stones.

  • It learns hierarchically, i.e. skills learned now can be built upon later.

  • It learns without ergodic or resetting assumptions on the underlying (PO)MDP.

  • It learns without episode boundaries.

  • It learns in a single life without leveraging multiple episodes of experience.
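
As a concrete illustration of the last three desiderata, here is a minimal sketch of learning in the continuing (non-episodic) setting: a single unbroken stream of experience with no resets and no episode boundaries. It uses differential Q-learning for the average-reward formulation, in the spirit of the continuing-setting contributions below (e.g., "Towards Reinforcement Learning in the Continuing Setting" and "Learning and Planning in Average-Reward Markov Decision Processes"); the ContinuingEnv toy environment and all hyperparameters are illustrative assumptions, not code from any accepted paper.

    import numpy as np

    class ContinuingEnv:
        """A made-up toy continuing MDP: one stream of experience, no resets,
        no terminal states."""

        def __init__(self, n_states=5, n_actions=2, seed=0):
            rng = np.random.default_rng(seed)
            # Deterministic transitions and fixed rewards, sampled once.
            self.P = rng.integers(0, n_states, size=(n_states, n_actions))
            self.R = rng.normal(size=(n_states, n_actions))
            self.state = 0

        def step(self, action):
            reward = self.R[self.state, action]
            self.state = int(self.P[self.state, action])
            return self.state, reward  # no `done` flag: the agent's life never ends

    env = ContinuingEnv()
    n_states, n_actions = env.R.shape
    Q = np.zeros((n_states, n_actions))  # differential (average-adjusted) action values
    avg_reward = 0.0                     # running estimate of the long-run reward rate
    alpha, eta, epsilon = 0.1, 0.01, 0.1  # illustrative step sizes and exploration rate
    rng = np.random.default_rng(1)

    state = env.state
    for t in range(100_000):  # one long life: no outer episode loop, no env.reset()
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(Q[state].argmax())
        next_state, reward = env.step(action)
        # Differential TD error: rewards are measured relative to the average
        # reward rate, so neither discounting nor terminal states are needed.
        td_error = reward - avg_reward + Q[next_state].max() - Q[state, action]
        Q[state, action] += alpha * td_error
        avg_reward += eta * td_error
        state = next_state

Note what is absent: there is no env.reset(), no done flag, and no discount factor; the incrementally estimated reward rate plays the role that episodic returns and discounting play in the episodic setting.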


There are several facets to building AI agents with never-ending learning abilities. In this workshop, we identify key themes including open-ended learning, cognitive science, developmental robotics, perceptual and temporal abstractions, and world modeling. We propose a unique format for our workshop in which we invite speakers to prepare a short talk presenting their views on the topic. Each talk will be followed by a discussion session chaired by an invited panelist. We hope that this format can enable a dialogue between a diverse set of views on never-ending learning and inspire future research directions.

Invited Speakers

Danijar Hafner

University of Toronto

Anna Harutyunyan

DeepMind

Joel Lehman

OpenAI

Natalia Díaz-Rodríguez

ENSTA Paris, Institut Polytechnique de Paris

Hyo Gweon

Stanford University

Invited Session Panelists

Martha White

University of Alberta

Eric Eaton

University of Pennsylvania

Aleksandra Faust

Google Brain

Pierre-Yves Oudeyer

Inria

Matt Botvinick

DeepMind

Invited Roundtable Panelists

Satinder Singh

University of Michigan, DeepMind

Melanie Mitchell

Santa Fe Institute

Celeste Kidd

UC Berkeley

SCHEDULE (TIME IN UTC)

13:00 - 14:00 Poster Session #1

14:00 - 14:15 Opening Remarks

14:15 - 14:30 Invited Talk #1: Danijar Hafner

14:30 - 15:00 Panel #1: Danijar Hafner & Eric Eaton

15:00 - 15:15 Contributed Talk #1: Continuous Coordination as a Realistic Scenario for Lifelong Learning

15:15 - 15:30 Invited Talk #2: Anna Harutyunyan

15:30 - 16:00 Panel #2: Anna Harutyunyan & Martha White

16:00 - 16:15 Contributed Talk #2: Reward and Optimality Empowerments: Information-Theoretic Measures for Task Complexity in Deep Reinforcement Learning

16:15 - 16:20 Break

16:20 - 17:05 Roundtable Panel

17:05 - 17:20 Invited Talk #3: Joel Lehman

17:20 - 17:50 Panel #3: Joel Lehman & Pierre-Yves Oudeyer

17:50 - 18:05 Contributed Talk #3: RECON: Rapid Exploration for Open-World Navigation with Latent Goal Models

18:05 - 18:10 Break

18:10 - 18:25 Invited Talk #4: Natalia Díaz-Rodríguez

18:25 - 18:55 Panel #4: Natalia Díaz-Rodríguez & Aleksandra Faust

18:55 - 19:10 Invited Talk #5: Hyo Gweon

19:10 - 19:40 Panel #5: Hyo Gweon & Matt Botvinick

19:40 - 19:55 Closing Remarks

19:55 - 20:55 Poster Session #2

Accepted papers

  1. PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse TD Learning. Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar. PAPER LINK, POSTER A2

  2. Persistent Reinforcement Learning via Subgoal Curricula. Archit Sharma, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn. PAPER LINK, POSTER C6

  3. Fast Inference and Transfer of Compositional Task Structure for Few-shot Task Generalization. Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Izzeddin Gur, Aleksandra Faust, Honglak Lee. PAPER LINK, POSTER B0

  4. Multi-Task Reinforcement Learning with Context-based Representations. Shagun Sodhani, Amy Zhang, Joelle Pineau. PAPER LINK, POSTER A3

  5. Continuous Coordination As a Realistic Scenario For Lifelong Learning. Akilesh Badrinaaraayanan, Hadi Nekoei, Aaron Courville, Sarath Chandar. PAPER LINK

  6. On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning. Marc Vischer, Henning Sprekeler, Robert Lange. PAPER LINK, POSTER C2

  7. Reward and Optimality Empowerments: Information-Theoretic Measures for Task Complexity in Deep Reinforcement Learning. Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Gu. PAPER LINK, POSTER C3

  8. CoMPS: Continual Meta Policy Search. Glen Berseth, Zhiwei Zhang, Chelsea Finn, Sergey Levine. PAPER LINK, POSTER A5

  9. RL for Autonomous Mobile Manipulation with Applications to Room Cleaning. Glen Berseth, Charles Sun, Sergey Levine. PAPER LINK, POSTER B5

  10. OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation. Jongmin Lee, Wonseok Jeon, Byung-Jun Lee, Joelle Pineau, Kee-Eung Kim. PAPER LINK, POSTER C5

  11. COMBO: Conservative Offline Model-Based Policy Optimization. Tianhe (Kevin) Yu, Aviral Kumar, Aravind Rajeswaran, Rafael Rafailov, Sergey Levine, Chelsea Finn. PAPER LINK, POSTER A1

  12. Towards Reinforcement Learning in the Continuing Setting. Abhishek Naik, Zaheer Abbas, Adam White, Rich Sutton. PAPER LINK, POSTER A0

  13. Self-Constructing Neural Networks through Random Mutation. Samuel Schmidgall. PAPER LINK, POSTER D0

  14. Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention. Abhishek Gupta, Justin Yu, Vikash Kumar, Zihao Zhao, Kelvin Xu, Aaron Rovinsky, Thomas Devlin, Sergey Levine. PAPER LINK, POSTER B4

  15. RECON: Rapid Exploration for Open-World Navigation with Latent Goal Models. Dhruv Shah, Ben Eysenbach, Nicholas Rhinehart, Sergey Levine. PAPER LINK, POSTER C1

  16. What is Going on Inside Recurrent Meta Reinforcement Learning Agents? Safa Alver, Doina Precup. PAPER LINK, POSTER D3

Invited Papers

  1. Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges. Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat, Natalia Díaz-Rodríguez. PAPER LINK, POSTER C4

  2. Continual Model-Based Reinforcement Learning with Hypernetworks. Yizhou Huang, Kevin Xie, Homanga Bharadhwaj, Florian Shkurti. PAPER LINK, POSTER B3

  3. Continual Reinforcement Learning in 3D Non-stationary Environments. Vincenzo Lomonaco, Karan Desai, Eugenio Culurciello, Davide Maltoni. PAPER LINK, POSTER C0

  4. Lifelong Learning with a Changing Action Set. Yash Chandak, Georgios Theocharous, Chris Nota, Philip S. Thomas. PAPER LINK, POSTER D1

  5. Reset-Free Lifelong Learning with Skill-Space Planning. Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch. PAPER LINK, POSTER B6

  6. Enhanced POET: Open-ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions. Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley. PAPER LINK, POSTER A6

  7. What's a Good Prediction? Issues in Evaluating General Value Functions Through Error. Alex Kearney, Anna Koop, Patrick M Pilarski. PAPER LINK, POSTER B1

  8. Ecological Reinforcement Learning. John D. Co-Reyes, Suvansh Sanjeev, Glen Berseth, Abhishek Gupta, Sergey Levine. PAPER LINK, POSTER A4

  9. Adapting to Reward Progressivity via Spectral Reinforcement Learning. Michael Dann, John Thangarajah. PAPER LINK, POSTER D2

  10. Learning and Planning in Average-Reward Markov Decision Processes. Yi Wan, Abhishek Naik, Richard S. Sutton. PAPER LINK, POSTER B2

DATES

Tentative dates are as follows:

  • Submission Deadline: February 28, 2021 (AOE), extended from February 26, 2021

  • Notification (Accept/Reject): March 26, 2021

  • Camera-ready (final) paper deadline: May 1, 2021

  • Workshop: May 7, 2021


Call for papers

We invite you to submit papers (up to 6 pages, excluding references and appendix) in the ICLR 2021 format. The focus of the work should relate to the list of NERL topics specified below. The review process will be double-blind, and accepted submissions will be presented as virtual talks or posters. There will be no proceedings for this workshop; however, authors can opt to have their abstracts/papers posted on the workshop website.


We encourage submissions on the following topics from a never-ending RL perspective:


  • Abstractions, modularity and compositional learning

  • Model-based reasoning (e.g., Planning, Predictive models)

  • Hierarchical Reinforcement Learning (e.g., Skill learning, Behavior priors)

  • Open-ended learning and exploration

  • Evolutionary Algorithms

  • Non-episodic learning

  • Curriculum Learning

  • Task-agnostic RL, Unsupervised RL

  • Developmental Learning, Social Learning


Accepted work will be presented as posters during the workshop, and selected contributions will be presented as spotlight talks. Each accepted work entering the poster sessions will have an accompanying pre-recorded 5-minute video. Please note that at least one coauthor of each accepted paper is expected to have an ICLR conference registration and to participate in one of the virtual poster sessions.


Please submit your papers via the following link:

https://cmt3.research.microsoft.com/NERL2021


PROGRAM COMMITTEE

Anurag Ajay, Coline Devin, Vitchyr Pong, Yuke Zhu, Matthew Riemer, Vivek Veeriah, Parminder Bhatia, Karol Hausman, Sebastian Flennerhag, Tin Ho, Kyle Hsu, Andrei Rusu, Krsto Proroković, Kelvin Xu, Alexandre Galashov, Jakob Foerster, Ben Eysenbach, Ashley Edwards, Dave Abel, Ofir Nachum, Luisa Zintgraf, Jonathan Schwarz, Maximilian Igl, Ryan Julian, Josiah Hanna, Tianhe Yu, Jan Humplik, Nantas Nardelli, Dumitru Erhan, Marc Pickett, Jakub Sygnowski, Shixiang Gu

ORGANIZERS

Khimya Khetarpal

McGill University, Mila

Rose E. Wang

Stanford University

Annie Xie

Stanford University

Adam White

DeepMind, University of Alberta

Doina Precup

McGill University, DeepMind, Mila