Vision-Based Reinforcement Learning

Principal Investigator(s): 
Stella Yu

Vision-based reinforcement learning (RL) has achieved notable success, but generalizing a learned policy to unseen test environments remains challenging. The agent must not only process high-dimensional visual inputs but also cope with significant variations in new test scenarios, e.g., color/texture changes or moving distractors. Existing methods focus on training an RL policy that is universal across changing visual domains; we instead focus on extracting the visual foreground, which is universal across domains, and feeding this clean, invariant input to the RL policy learner. Our method is completely unsupervised, requiring no manual annotations or access to environment internals.
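The decoupling described above can be sketched as a simple observation pipeline: an unsupervised foreground extractor produces a per-pixel mask, and the policy only ever sees the masked frame. The extractor below is a hypothetical median-background heuristic used purely to illustrate the interface, not the learned module of this project; the function names and threshold are assumptions.

```python
import numpy as np

def estimate_foreground_mask(frames, threshold=0.1):
    """Hypothetical stand-in for an unsupervised foreground extractor:
    pixels that deviate from the per-pixel median background across a
    short clip are treated as foreground. The project's actual extractor
    is learned; this heuristic only illustrates the interface."""
    background = np.median(frames, axis=0)                  # (H, W, C) background estimate
    deviation = np.abs(frames - background).mean(axis=-1)   # (T, H, W) per-pixel deviation
    return (deviation > threshold).astype(np.float32)       # (T, H, W) binary masks

def mask_observation(frame, mask):
    """Zero out the background so the policy sees only the foreground."""
    return frame * mask[..., None]

# Toy clip: a static gray background with a small moving bright square
# playing the role of the foreground (e.g., the agent).
T, H, W = 8, 16, 16
frames = np.full((T, H, W, 3), 0.2, dtype=np.float32)
for t in range(T):
    frames[t, 4:8, t:t + 2, :] = 1.0  # moving foreground object

masks = estimate_foreground_mask(frames)
clean = mask_observation(frames[0], masks[0])  # what the RL policy would receive
```

Because the policy is trained on `clean` rather than `frames[0]`, background changes at test time (new colors, textures, or distractors) never reach the policy's input distribution.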