Abstract:

Visual Model-Based Reinforcement Learning (MBRL) promises to encapsulate an agent's knowledge about the underlying dynamics of the environment, enabling a learned world model to serve as a useful planner. However, leading MBRL agents such as Dreamer often struggle with pixel-based inputs in the presence of exogenous or task-irrelevant noise in the observation space, because they fail to capture task-specific features while filtering out irrelevant spatio-temporal details. To tackle this problem, we apply a spatio-temporal masking strategy and a bisimulation principle, combined with latent reconstruction, to capture the endogenous, task-specific aspects of the environment in the world model, effectively eliminating non-essential information. However, as in most prior work, jointly training the representation, the dynamics model, and the policy often leads to instabilities. To address this, we develop a hybrid Recurrent State-Space Model (RSSM) structure that enhances the robustness of state representations for effective policy learning. Our empirical evaluation demonstrates significant performance improvements over existing methods on a range of visually complex control tasks, such as ManiSkill with exogenous distractors drawn from the Matterport environment.
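To make the two representation-learning ingredients named above concrete, here is a minimal PyTorch sketch of spatio-temporal patch masking with latent reconstruction plus a bisimulation-style metric loss. It is an illustration under stated assumptions, not the paper's implementation: every module name, architecture choice, and hyperparameter below (`MaskedLatentEncoder`, the patch size, `mask_ratio`, the L1 bisimulation surrogate) is hypothetical.

```python
# Hedged sketch of masked latent reconstruction + a bisimulation-style loss.
# Not the paper's code; all names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLatentEncoder(nn.Module):
    """Encode pixel observations after zeroing random patches.

    A full spatio-temporal scheme would also drop patches across time;
    this sketch masks each frame independently for brevity.
    """

    def __init__(self, latent_dim=64, patch=8, mask_ratio=0.5):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def mask(self, obs):
        # Zero a random subset of patch x patch blocks per frame.
        B, C, H, W = obs.shape
        grid = (H // self.patch, W // self.patch)
        keep = (torch.rand(B, 1, *grid, device=obs.device) > self.mask_ratio).float()
        keep = F.interpolate(keep, size=(H, W), mode="nearest")
        return obs * keep

    def forward(self, obs, apply_mask=True):
        return self.conv(self.mask(obs) if apply_mask else obs)


def bisimulation_loss(z_i, z_j, r_i, r_j, z_next_i, z_next_j, gamma=0.99):
    """Match latent distances to reward differences plus the discounted
    next-latent distance (an L1 on-policy bisimulation surrogate)."""
    dist = torch.norm(z_i - z_j, p=1, dim=-1)
    target = (r_i - r_j).abs() + gamma * torch.norm(
        z_next_i - z_next_j, p=1, dim=-1
    ).detach()
    return F.mse_loss(dist, target)


def latent_reconstruction_loss(z_masked, z_target):
    """Predict the unmasked latent from the masked view: reconstruction
    happens in latent space, so pixel-level noise is never a target."""
    return F.mse_loss(z_masked, z_target.detach())


if __name__ == "__main__":
    enc = MaskedLatentEncoder()
    obs, obs_next = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
    reward = torch.rand(8)
    z = enc(obs)                          # masked view
    z_full = enc(obs, apply_mask=False)   # unmasked target
    z_next = enc(obs_next, apply_mask=False)
    perm = torch.randperm(8)              # shuffled partners for pairwise term
    loss = bisimulation_loss(z, z[perm], reward, reward[perm],
                             z_next, z_next[perm]) \
         + latent_reconstruction_loss(z, z_full)
    loss.backward()
    print(f"combined representation loss: {loss.item():.4f}")
```

The key design point the sketch tries to convey is that both losses operate purely in latent space, so exogenous pixel noise contributes no learning signal; how these terms are weighted and scheduled alongside the RSSM and policy objectives is where the stability issues discussed above arise.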

Download paper here

Under Review