Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning

Takayuki Osa and Tatsuya Harada

1The University of Tokyo 2RIKEN Center for Advanced Intelligence Project (AIP)

Abstract

Recent studies on online reinforcement learning (RL) have demonstrated the advantages of learning multiple behaviors from a single task, such as enabling few-shot adaptation to a new environment. Although this approach is expected to yield similar benefits in offline RL, appropriate methods for learning multiple solutions have not been fully investigated in previous studies. In this study, we therefore address the problem of learning multiple solutions from a single task in offline RL. We propose algorithms that can learn multiple solutions in offline RL and empirically investigate their performance. Our experimental results show that the proposed algorithm learns multiple qualitatively and quantitatively distinct solutions in offline RL.
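
As a rough, generic sketch of the underlying idea only (not the paper's algorithm), one common way to represent several solutions to a single task is a latent-conditioned policy, where each latent code indexes one learned behavior and few-shot adaptation reduces to evaluating a handful of codes in the new environment. All names below (LatentConditionedPolicy, select_latent_few_shot, the placeholder dimensions) are illustrative assumptions, not part of the released code.

import torch
import torch.nn as nn

class LatentConditionedPolicy(nn.Module):
    """Deterministic policy pi(a | s, z); each discrete latent z indexes one behavior."""
    def __init__(self, obs_dim, act_dim, num_latents, hidden=256):
        super().__init__()
        self.latent_emb = nn.Embedding(num_latents, hidden)
        self.net = nn.Sequential(
            nn.Linear(obs_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs, z):
        # obs: [batch, obs_dim]; z: [batch] integer latent indices
        return self.net(torch.cat([obs, self.latent_emb(z)], dim=-1))

@torch.no_grad()
def select_latent_few_shot(policy, rollout_return, num_latents):
    """Few-shot adaptation: roll out each latent in the new environment, keep the best."""
    returns = [rollout_return(policy, z) for z in range(num_latents)]
    return int(torch.tensor(returns).argmax())

# Example: act with the behavior indexed by latent z = 2 (dimensions are placeholders).
policy = LatentConditionedPolicy(obs_dim=11, act_dim=3, num_latents=8)
action = policy(torch.randn(1, 11), torch.tensor([2]))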

Locomotion behaviors in the diverse datasets

[Videos: HopperVel, Walker2dVel, HalfCheetahVel, AntVel]


Visualization of the learned behaviors

[Videos: HopperVel, Walker2dVel, HalfCheetahVel, AntVel]


Locomotion in few-shot adaptation tasks



BibTeX

@InProceedings{osa2024diveoff,
  author    = {Takayuki Osa and Tatsuya Harada},
  booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
  title     = {Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning},
  year      = {2024},
}