TOP-ERL: Transformer-based Off-Policy Episodic Reinforcement Learning

Karlsruhe Institute of Technology
ICLR 2025 Spotlight 🔥

Episodic RL often uses movement primitives (MPs) as a parameterized trajectory generator. A simple illustration of using MPs is shown in the video; a minimal code sketch follows below.
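For intuition, here is a minimal, hypothetical sketch of an MP-style generator: a trajectory formed as a weighted sum of normalized Gaussian basis functions of time. The function `mp_trajectory`, the basis choice, and all dimensions are illustrative assumptions, not the specific MP formulation used in the paper.

```python
import numpy as np

def mp_trajectory(weights, duration=1.0, n_steps=100):
    """Toy MP-style generator: a trajectory as a weighted sum of
    normalized Gaussian basis functions of time.
    weights: (n_basis, n_dof), one weight vector per degree of freedom."""
    n_basis = weights.shape[0]
    t = np.linspace(0.0, duration, n_steps)          # time grid
    centers = np.linspace(0.0, duration, n_basis)    # basis centers
    width = (duration / n_basis) ** 2                # basis bandwidth
    # (n_steps, n_basis) matrix of Gaussian basis activations
    phi = np.exp(-0.5 * (t[:, None] - centers[None, :]) ** 2 / width)
    phi /= phi.sum(axis=1, keepdims=True)            # normalize per time step
    return phi @ weights                             # (n_steps, n_dof) trajectory

# A policy that outputs one weight matrix thus commits to a whole smooth
# trajectory, e.g. for a 7-DoF arm with 10 basis functions:
traj = mp_trajectory(np.random.randn(10, 7))
```

The key point is that the policy's action space becomes the (low-dimensional) weight space, while the generated trajectory stays smooth over the full horizon.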

Abstract

This work introduces Transformer-based Off-Policy Episodic Reinforcement Learning (TOP-ERL), a novel algorithm that enables off-policy updates in the ERL framework. In ERL, policies predict entire action trajectories over multiple time steps instead of single actions at every time step. These trajectories are typically parameterized by trajectory generators such as Movement Primitives (MPs), allowing for smooth and efficient exploration over long horizons while capturing high-level temporal correlations. However, ERL methods are often constrained to on-policy frameworks due to the difficulty of evaluating state-action values for entire action sequences, limiting their sample efficiency and preventing the use of more efficient off-policy architectures. TOP-ERL addresses this shortcoming by segmenting long action sequences and estimating the state-action values for each segment using a transformer-based critic architecture alongside an n-step return estimation. These contributions yield efficient and stable training, as reflected in empirical results on sophisticated robot learning environments, where TOP-ERL significantly outperforms state-of-the-art RL methods. Thorough ablation studies additionally show the impact of key design choices on model performance.
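To illustrate the segmentation idea from the abstract, the sketch below slices a long action trajectory into fixed-length, non-overlapping segments, each kept with its start index so it can be paired with the corresponding start state. The helper `split_into_segments` and the fixed segment length are illustrative assumptions; the paper's exact segmentation scheme may differ.

```python
import numpy as np

def split_into_segments(actions, segment_len):
    """Slice a (T, action_dim) action trajectory into consecutive,
    non-overlapping segments of length `segment_len`. Each segment keeps
    its start index so it can be matched with the start state s_t."""
    starts = range(0, len(actions) - segment_len + 1, segment_len)
    return [(s, actions[s:s + segment_len]) for s in starts]

# Four segments from a 100-step trajectory of a 7-dim action space; the
# critic can then estimate Q(s_t, a_t, ..., a_{t+24}) for each start t.
segments = split_into_segments(np.zeros((100, 7)), segment_len=25)
```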


Transformer Critic for Action Sequence Value Estimation

In TOP-ERL, we use a Transformer as an action-sequence value estimator to evaluate the value of executing a sequence of actions from an intermediate state in the episode. The critic is trained with the N-step future returns shown below.
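The sketch below shows one plausible shape for such a critic, assuming a simple tokenization (one state token followed by the action tokens of a segment), a causal attention mask, and one value prediction per action token, so that every prefix of the segment can be scored against an N-step target of matching length. The class name `SegmentValueCritic`, all layer sizes, and the omission of positional encodings are our illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SegmentValueCritic(nn.Module):
    """Illustrative transformer critic for action-sequence values: given a
    start state and a segment of N actions, it emits one estimate per
    prefix, i.e. Q(s_t, a_t, ..., a_{t+k}) for k = 0, ..., N-1."""

    def __init__(self, state_dim, action_dim, d_model=128, n_heads=4, n_layers=3):
        super().__init__()
        self.state_embed = nn.Linear(state_dim, d_model)
        self.action_embed = nn.Linear(action_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, state, action_seq):
        # state: (B, state_dim); action_seq: (B, N, action_dim)
        # Positional encodings are omitted here for brevity.
        tokens = torch.cat(
            [self.state_embed(state).unsqueeze(1),   # one state token, first
             self.action_embed(action_seq)], dim=1)  # N action tokens after it
        L = tokens.size(1)
        # Causal mask (True = blocked): each action token attends only to the
        # state token and the actions that precede it.
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
        h = self.encoder(tokens, mask=mask)
        return self.value_head(h[:, 1:]).squeeze(-1)  # (B, N) prefix values

critic = SegmentValueCritic(state_dim=10, action_dim=7)
values = critic(torch.randn(4, 10), torch.randn(4, 25, 7))  # -> shape (4, 25)
```

Predicting a value per prefix is what lets a single forward pass supply targets for returns of varying horizon N within one segment.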

[Equation figure: N-step return]
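For reference, a generic rendering of the bootstrapped N-step return (the paper's exact target, e.g. its treatment of segment boundaries, may differ in details):

$$ G_t^{(N)} \;=\; \sum_{k=0}^{N-1} \gamma^{k}\, r_{t+k} \;+\; \gamma^{N}\, V\!\left(s_{t+N}\right) $$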


Empirical Results

[Figure: Empirical results]

BibTeX

@inproceedings{li2025toperl,
  title={{TOP}-{ERL}: Transformer-based Off-Policy Episodic Reinforcement Learning},
  author={Ge Li and Dong Tian and Hongyi Zhou and Xinkai Jiang and Rudolf Lioutikov and Gerhard Neumann},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=N4NhVN30ph}
}