GRAM: Generalization in Deep RL with a Robust Adaptation Module
Author: AerospaceControlsLab
Uploaded: 2025-06-11
Views: 484
arXiv: https://arxiv.org/abs/2412.04323
Code: https://github.com/merlresearch/gram
Abstract: The reliable deployment of deep reinforcement learning in real-world settings requires the ability to generalize across a variety of conditions, including both in-distribution scenarios seen during training and novel out-of-distribution scenarios. In this work, we present a framework for dynamics generalization in deep reinforcement learning that unifies these two distinct types of generalization within a single architecture. We introduce a robust adaptation module that provides a mechanism for identifying and reacting to both in-distribution and out-of-distribution environment dynamics, along with a joint training pipeline that combines the goals of in-distribution adaptation and out-of-distribution robustness. Our algorithm GRAM achieves strong generalization performance across in-distribution and out-of-distribution scenarios upon deployment, which we demonstrate through extensive simulation and hardware locomotion experiments on a quadruped robot.
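To make the idea in the abstract concrete, below is a minimal sketch of what a robust adaptation module of this kind could look like: an encoder infers a context latent from recent state-action history (in-distribution adaptation), a scalar gate estimates how out-of-distribution the dynamics appear, and the output blends toward a learned "robust" latent as that gate grows. All class and parameter names, dimensions, and the gating mechanism here are illustrative assumptions, not the paper's exact architecture; the authors' implementation is in the linked repository.

```python
import torch
import torch.nn as nn


class RobustAdaptationModule(nn.Module):
    """Illustrative GRAM-style adaptation module (hypothetical sketch,
    not the released implementation)."""

    def __init__(self, history_dim: int, latent_dim: int = 8, hidden_dim: int = 128):
        super().__init__()
        # Shared trunk over a flattened window of recent states and actions.
        self.trunk = nn.Sequential(
            nn.Linear(history_dim, hidden_dim), nn.ELU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ELU(),
        )
        # In-distribution branch: infer a context latent describing the dynamics.
        self.context_head = nn.Linear(hidden_dim, latent_dim)
        # Out-of-distribution branch: scalar gate in [0, 1]; near 1 means
        # "the observed dynamics look unfamiliar" (assumed design choice).
        self.ood_head = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
        # Fixed latent under which the policy is trained to behave robustly.
        self.robust_latent = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        features = self.trunk(history)
        context = self.context_head(features)
        ood_gate = self.ood_head(features)  # shape (batch, 1)
        # Adapt to the inferred context in distribution; fall back toward the
        # robust latent as the dynamics drift out of distribution.
        return (1.0 - ood_gate) * context + ood_gate * self.robust_latent


if __name__ == "__main__":
    # Example: 20 timesteps of a 48-dim observation and 12-dim action (made-up sizes).
    module = RobustAdaptationModule(history_dim=20 * (48 + 12))
    history = torch.randn(4, 20 * (48 + 12))
    latent = module(history)
    print(latent.shape)  # torch.Size([4, 8])
```

In this sketch the policy would consume the returned latent alongside the current observation; the joint training pipeline described in the abstract would then train the context branch on in-distribution environments and the robust latent under domain randomization or adversarial dynamics, though the specific training details are those of the paper, not this example.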