Learning to Detect Visual Grasp Affordance

Title: Learning to Detect Visual Grasp Affordance
Publication Type: Journal Article
Year of Publication: 2016
Authors: Song, H. O., Fritz, M., Goehring, D., & Darrell, T.
Published in: IEEE Transactions on Automation Science and Engineering
Volume: 13
Issue: 2
Page(s): 798-809
Date Published: 04/2016
ISSN: 1545-5955
Keywords: Affordance, autonomous agent, autonomous object detection, category-level continuous pose regression, continuous pose estimates, Estimation, grasp point locations, Grasping, grasping system, image texture, local texture-like measures, machine learning, max-margin optimization, Object Detection, object-category measures, Pipelines, pose estimation, regression analysis, robot vision, Robots, Training, Training data, visual grasp affordance estimation, Willow Garage PR2 robot
Abstract

Appearance-based estimation of grasp affordances is desirable when 3-D scans become unreliable due to clutter or material properties. We develop a general framework for estimating grasp affordances from 2-D sources, including local texture-like measures as well as object-category measures that capture previously learned grasp strategies. Local approaches to estimating grasp positions have been shown to be effective in real-world scenarios, but are unable to impart object-level biases and can be prone to false positives. We describe how global cues can be used to compute continuous pose estimates and corresponding grasp point locations, using a max-margin optimization for category-level continuous pose regression. We provide a novel dataset to evaluate visual grasp affordance estimation; on this dataset we show that a fused method outperforms either local or global methods alone, and that continuous pose estimation improves over discrete output models. Finally, we demonstrate our autonomous object detection and grasping system on the Willow Garage PR2 robot.
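
The fusion the abstract describes can be pictured as combining a local, texture-based grasp-point score map with a global prior over grasp locations derived from the category-level pose estimate. The sketch below is a minimal illustration of that idea, not the paper's implementation: the function name fuse_grasp_scores, the weighting parameter alpha, and the Gaussian form of the pose-derived prior are all assumptions introduced here for clarity.

    import numpy as np

    def fuse_grasp_scores(local_scores, prior_mean, prior_cov, alpha=0.5):
        """Combine a local grasp-point score map with a global,
        pose-derived prior over grasp locations (illustrative sketch).

        local_scores : (H, W) array of per-pixel grasp scores from a
                       texture-like local detector.
        prior_mean   : (2,) expected grasp location (row, col) implied
                       by the category-level continuous pose estimate.
        prior_cov    : (2, 2) covariance expressing uncertainty in
                       that prediction.
        alpha        : weight trading off local evidence vs. global prior.
        """
        H, W = local_scores.shape
        rows, cols = np.mgrid[0:H, 0:W]
        coords = np.stack([rows, cols], axis=-1).reshape(-1, 2)
        diff = coords - prior_mean
        inv_cov = np.linalg.inv(prior_cov)
        # Unnormalized Gaussian prior centered on the pose-predicted location.
        mahal = np.einsum('ni,ij,nj->n', diff, inv_cov, diff)
        prior = np.exp(-0.5 * mahal).reshape(H, W)
        # Convex combination of normalized local evidence and the global prior;
        # the object-level prior suppresses isolated local false positives.
        local = local_scores / (local_scores.max() + 1e-8)
        fused = alpha * local + (1.0 - alpha) * prior
        return np.unravel_index(np.argmax(fused), fused.shape)

    # Hypothetical usage: pick the best grasp point in a 100x120 score map.
    scores = np.random.rand(100, 120)
    best = fuse_grasp_scores(scores,
                             prior_mean=np.array([50.0, 60.0]),
                             prior_cov=np.diag([40.0, 40.0]))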

URL: http://www.icsi.berkeley.edu/pubs/vision/visualgraspaffordance.pdf
DOI: 10.1109/TASE.2015.2396014
ICSI Research Group: Vision