
State Representation Learning with Robotic Priors for Partially Observable Environments Data

Morik, Marco; Rastogi, Divyam; Brock, Oliver

We introduce Recurrent State Representation Learning (RSRL) to tackle the problem of state representation learning in robotics for partially observable environments. To learn low-dimensional state representations, we combine a Long Short-Term Memory (LSTM) network with robotic priors. RSRL introduces new landmark-based priors and combines them with existing robotic priors from the literature to train the representations. To evaluate the quality of the learned state representations, we introduce validation networks that help us visualize and quantitatively analyze them. We show that the learned representations are low-dimensional, locally consistent, and can approximate the underlying true state for robot localization in simulated 3D maze environments. We use the learned representations for reinforcement learning and show that we achieve performance similar to training with the true state. The learned representations are also robust to landmark misclassification errors.
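The abstract does not reproduce the loss formulations, but robotic priors are typically expressed as loss terms over sequences of learned states. The sketch below, a hypothetical illustration in NumPy rather than the paper's implementation, shows two standard robotic priors from the prior literature (temporal coherence and proportionality); the function names and the pairwise-averaging scheme are assumptions for clarity.

```python
import numpy as np

def temporal_coherence_loss(states):
    """Temporal coherence prior: consecutive learned states should
    change only gradually. states has shape (T, d)."""
    deltas = np.diff(states, axis=0)          # state changes (T-1, d)
    return float(np.mean(np.sum(deltas ** 2, axis=1)))

def proportionality_loss(states, actions):
    """Proportionality prior: when the same action is taken at two
    timesteps, the magnitudes of the induced state changes should be
    similar. actions has shape (T,); we average over matching pairs."""
    deltas = np.diff(states, axis=0)
    mags = np.linalg.norm(deltas, axis=1)     # ||delta s_t||
    acts = np.asarray(actions)[:-1]           # action causing each delta
    loss, pairs = 0.0, 0
    for i in range(len(mags)):
        for j in range(i + 1, len(mags)):
            if acts[i] == acts[j]:
                loss += (mags[i] - mags[j]) ** 2
                pairs += 1
    return loss / pairs if pairs else 0.0

# Tiny example: a 3-step trajectory of 2-D learned states.
states = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
actions = [0, 0, 1]
print(temporal_coherence_loss(states))        # 1.0: each step moves by unit norm
print(proportionality_loss(states, actions))  # 0.0: equal-magnitude steps for action 0
```

In a full training setup these terms would be differentiable (e.g. in an autodiff framework) and summed with the other priors as the LSTM's training objective; the NumPy version here only illustrates what each prior measures.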