Behavior Prior Representation learning for Offline Reinforcement Learning
Published in ICLR 2023
Recommended citation: Hongyu Zang, Xin Li, Jie Yu, Chen Liu, Riashat Islam, Remi Tachet des Combes, Romain Laroche: Behavior Prior Representation learning for Offline Reinforcement Learning. ICLR 2023
Paper link: https://openreview.net/forum?id=hQ4K9Bf4G2B

Abstract: Offline reinforcement learning (RL) struggles in environments with rich and noisy inputs, where the agent only has access to a fixed dataset without environment interactions. Past works have proposed common workarounds based on the pre-training of state representations, followed by policy training. In this work, we introduce a simple yet effective approach for learning state representations. Our method, Behavior Prior Representation (BPR), learns state representations with an easy-to-integrate objective based on behavior cloning of the dataset: we first learn a state representation by mimicking actions from the dataset, and then train a policy on top of the fixed representation, using any off-the-shelf Offline RL algorithm. Theoretically, we prove that BPR retains performance guarantees when integrated into algorithms that have either policy improvement guarantees (conservative algorithms) or produce lower bounds on the policy value (pessimistic algorithms). Empirically, we show that BPR combined with existing state-of-the-art Offline RL algorithms leads to significant improvements across several offline control benchmarks.
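To make the two-stage recipe concrete, here is a minimal sketch of the pipeline the abstract describes, assuming a PyTorch-style setup. The names `Encoder` and `bc_pretrain`, the network sizes, and the MSE behavior-cloning loss are illustrative assumptions, not taken from the paper or its released code.

```python
# Minimal sketch of the two-stage BPR recipe (illustrative, not the authors' code).
# Stage 1: pre-train a state encoder phi by behavior cloning on the offline dataset.
# Stage 2: freeze phi and run any off-the-shelf offline RL algorithm on phi(s).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps raw states to a compact representation phi(s)."""

    def __init__(self, state_dim, repr_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim),
        )

    def forward(self, s):
        return self.net(s)


def bc_pretrain(encoder, dataset, action_dim, epochs=100, lr=3e-4):
    """Stage 1: fit phi(s) by predicting the dataset actions (behavior cloning)."""
    head = nn.Linear(encoder.net[-1].out_features, action_dim)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=lr
    )
    for _ in range(epochs):
        # `dataset` is assumed to yield batches of (state, action) tensors
        # drawn from the fixed offline dataset.
        for states, actions in dataset:
            loss = nn.functional.mse_loss(head(encoder(states)), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Freeze the representation; stage 2 trains a policy on top of phi(s).
    for p in encoder.parameters():
        p.requires_grad_(False)
    return encoder
```

Stage 2 would then run an off-the-shelf offline RL algorithm (e.g., TD3+BC or CQL) on the frozen features `phi(s)` in place of the raw states.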
@inproceedings{
zang2023bpr,
title={Behavior Prior Representation learning for Offline Reinforcement Learning},
author={Hongyu Zang and Xin Li and Jie Yu and Chen Liu and Riashat Islam and Remi Tachet des Combes and Romain Laroche},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=hQ4K9Bf4G2B}
}