Deep spatial autoencoders for visuomotor learning

Title: Deep spatial autoencoders for visuomotor learning
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Finn, C., Tan, X. Y., Duan, Y., Darrell, T., Levine, S., & Abbeel, P.
Published in: IEEE International Conference on Robotics and Automation (ICRA)
Page(s): 512-519
Date Published: 05/2016
Publisher: IEEE
ISBN Number: 978-1-4673-8026-3
Accession Number: 16055493
Keywords: cameras, learning (artificial intelligence), robot kinematics, robot sensing systems, unsupervised learning, visualization
Abstract

Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.
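The core of the representation described above is a deep autoencoder whose encoder outputs spatial feature points: the expected 2-D image location of each learned feature map, computed with a spatial softmax. The sketch below illustrates that expectation step only, in plain NumPy; it is a minimal illustration of the idea, not the authors' implementation (the function name, array shapes, and the [-1, 1] coordinate normalization are assumptions for this example).

```python
import numpy as np

def spatial_softmax(feature_maps):
    """Map conv feature maps (C, H, W) to C expected 2-D feature points.

    For each channel, a softmax over all spatial locations yields a
    probability distribution; the feature point is the expectation of
    the (x, y) pixel coordinates under that distribution, with
    coordinates normalized to [-1, 1].
    """
    c, h, w = feature_maps.shape
    flat = feature_maps.reshape(c, -1)
    # subtract the per-channel max for numerical stability
    flat = flat - flat.max(axis=1, keepdims=True)
    exp = np.exp(flat)
    probs = (exp / exp.sum(axis=1, keepdims=True)).reshape(c, h, w)
    # coordinate grids: ys varies along rows, xs along columns
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, h),
                         np.linspace(-1.0, 1.0, w), indexing="ij")
    expected_x = (probs * xs).sum(axis=(1, 2))
    expected_y = (probs * ys).sum(axis=(1, 2))
    return np.stack([expected_x, expected_y], axis=1)  # shape (C, 2)
```

A sharp activation peak in a channel produces a feature point at that peak's normalized location, which is what lets the learned points track object positions as a low-dimensional state for the controller.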

URL: http://www.icsi.berkeley.edu/pubs/vision/deepspatialautoencoders16.pdf
DOI: 10.1109/ICRA.2016.7487173
ICSI Research Group: Vision