Decoupling Representation Learning From Reinforcement Learning | Paper Explained
Author: Bits Of Deep Learning
Uploaded: 2020-09-20
Views: 2187
Can we improve Reinforcement Learning by decoupling Representation Learning from the RL part?
In this video you'll find out.
Decoupling Representation Learning From Reinforcement Learning Paper: https://arxiv.org/abs/2009.08319
Abstract:
In an effort to overcome limitations of reward-driven feature learning in deep reinforcement learning (RL) from images, we propose decoupling representation learning from policy learning. To this end, we introduce a new unsupervised learning (UL) task, called Augmented Temporal Contrast (ATC), which trains a convolutional encoder to associate pairs of observations separated by a short time difference, under image augmentations and using a contrastive loss. In online RL experiments, we show that training the encoder exclusively using ATC matches or outperforms end-to-end RL in most environments. Additionally, we benchmark several leading UL algorithms by pre-training encoders on expert demonstrations and using them, with weights frozen, in RL agents; we find that agents using ATC-trained encoders outperform all others. We also train multi-task encoders on data from multiple environments and show generalization to different downstream RL tasks. Finally, we ablate components of ATC, and introduce a new data augmentation to enable replay of (compressed) latent images from pre-trained encoders when RL requires augmentation. Our experiments span visually diverse RL benchmarks in DeepMind Control, DeepMind Lab, and Atari.
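The core of ATC, as the abstract describes it, is a contrastive loss that pulls together the encodings of an augmented observation and an observation a few time steps later, while pushing apart unrelated pairs. A minimal NumPy sketch of that idea is below; the function name `info_nce_loss`, the temperature value, and the toy data are illustrative assumptions, not the paper's actual implementation (which uses a learned convolutional encoder, momentum targets, and more):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch (illustrative sketch).

    anchors[i] stands in for the encoding of an augmented observation o_t;
    positives[i] for the encoding of o_{t+k} a few steps later. All other
    rows of `positives` serve as negatives for anchors[i].
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The true temporal pair for row i sits on the diagonal
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
# Positives close to their anchors should give a lower loss than random pairs
loss_matched = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
loss_random = info_nce_loss(z, rng.normal(size=(8, 32)))
assert loss_matched < loss_random
```

Minimizing this loss drives the encoder to produce features that are predictive of the near future, which is why they remain useful for the downstream RL policy even when the encoder's weights are frozen.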
#reinforcementlearning #contrastivelearning #unsupervisedlearning #AugmentedTemporalContrast #ATC