Recognizing the grasp intention from human demonstration

In human grasping, choices about the use of hand parts are made even before a grasp is realized. The human associates these choices with end-functionality and is confident that the resulting grasp will meet task requirements. We refer to these choices on the use of hand parts underlying grasp formation as the grasp intention. Modeling the grasp intention offers a paradigm in which decisions underlying grasp formation can be related to the functional properties of the realized grasp, in terms of quantities that can be sensed, recognized, or controlled. In this paper we model grasp intention as a mix of oppositions between hand parts: sub-parts of the hand acting in opposition to each other are viewed as a basis from which grasps are formed. We compute the set of possible oppositions and determine the most likely combination from the raw information present in a demonstrated grasp. An intermediate representation of raw sensor data exposes the interactions between elementary grasping surfaces; from this, the most likely combination of oppositions is inferred. Grasping experiments with human subjects show that the proposed approach is robust enough to correctly capture the intention in demonstrated grasps across a wide range of hand functionality. © 2015 Elsevier B.V. All rights reserved.
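The inference step outlined above can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: the surface names, the minimum-contact scoring rule, and the threshold are all assumptions introduced here to show the general idea of pairing elementary grasping surfaces into candidate oppositions and selecting the best-supported ones from demonstrated contact data.

```python
from itertools import combinations

# Hypothetical set of elementary grasping surfaces (assumption, for
# illustration only; the paper's actual surface decomposition may differ).
SURFACES = ["thumb", "index", "middle", "palm"]

def opposition_score(a, b, contacts):
    """Score a candidate opposition between surfaces a and b.

    Assumed rule: an opposition is only as strong as its weaker side,
    so take the minimum of the two per-surface contact strengths.
    """
    return min(contacts.get(a, 0.0), contacts.get(b, 0.0))

def infer_oppositions(contacts, threshold=0.2):
    """Return candidate oppositions supported by the demonstrated
    contact data, ranked from most to least likely."""
    candidates = combinations(SURFACES, 2)
    scored = [((a, b), opposition_score(a, b, contacts))
              for a, b in candidates]
    return sorted([(pair, s) for pair, s in scored if s >= threshold],
                  key=lambda item: -item[1])

# Example: a demonstrated pinch grasp with strong thumb/index contact.
demo = {"thumb": 0.9, "index": 0.8, "middle": 0.1, "palm": 0.0}
best_opposition = infer_oppositions(demo)[0][0]  # ("thumb", "index")
```

In this toy example only the thumb-index pair survives the threshold, so the pinch intention is recovered; the paper's actual intermediate representation and likelihood model operate on richer sensor data.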

Published in:
Robotics and Autonomous Systems, 74, 108-121
Amsterdam, Elsevier

