Abstract

Background: Active upper-limb prostheses are used to restore important hand functionalities, such as grasping. In conventional approaches, a pattern recognition system is trained on a number of static grasping gestures. However, training a classifier in a static position results in lower classification accuracy when performing dynamic motions, such as reach-to-grasp. We propose an electromyography-based learning approach that decodes the grasping intention during the reaching motion, leading to a faster and more natural response of the prosthesis.

Methods and Results: Eight able-bodied subjects and four individuals with transradial amputation gave informed consent and participated in our study. All subjects performed reach-to-grasp motions for five grasp types while the electromyographic (EMG) activity and the extension of the arm were recorded. We separated the reach-to-grasp motion into three phases with respect to the extension of the arm. A multivariate analysis of variance (MANOVA) on the muscular activity revealed significant differences among the motion phases. Additionally, we examined the classification performance on these phases. We compared the performance of three different pattern recognition methods: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) with linear and non-linear kernels, and an Echo State Network (ESN) approach. Our off-line analysis shows that high classification performance (above 80%) can be reached before the end of the motion with three grasp types. An on-line evaluation with an upper-limb prosthesis shows that including the reaching motion in the training of the classifier considerably improves classification accuracy and enables the detection of grasp intention early in the reaching motion.

Conclusions: This method offers more natural and intuitive control of prosthetic devices, as it enables controlling grasp closure in synergy with the reaching motion. This work contributes to reducing the delay between the user's intention and the device's response and improves the coordination of the device with the motion of the arm.
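
The abstract describes a phase-wise comparison of classifiers (LDA and SVM with linear and non-linear kernels) on EMG features, with the reach-to-grasp motion segmented into three phases by arm extension. The sketch below is a minimal illustration of that idea, not the authors' implementation: the EMG features, channel and trial counts, and phase thresholds are synthetic stand-ins, and the ESN is omitted for brevity.

```python
# Minimal sketch (assumed setup, not the authors' code): phase-wise grasp
# classification from EMG features, with the reach split into three phases
# by normalized arm extension as described in the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

N_GRASPS = 5      # five grasp types (from the study)
N_CHANNELS = 8    # hypothetical number of EMG channels
N_TRIALS = 60     # hypothetical trials per grasp type


def segment_phase(arm_extension):
    """Assign one of three motion phases from normalized arm extension (0..1)."""
    if arm_extension < 1 / 3:
        return 0  # early reach
    if arm_extension < 2 / 3:
        return 1  # mid reach
    return 2      # late reach / pre-grasp


# Synthetic dataset: one EMG feature vector per trial plus an arm-extension sample.
X, y_grasp, phases = [], [], []
for grasp in range(N_GRASPS):
    for _ in range(N_TRIALS):
        ext = rng.uniform(0.0, 1.0)
        phase = segment_phase(ext)
        # Toy assumption: class separability grows in later phases of the reach.
        features = rng.normal(loc=grasp * (0.3 + 0.3 * phase), scale=1.0, size=N_CHANNELS)
        X.append(features)
        y_grasp.append(grasp)
        phases.append(phase)
X, y_grasp, phases = np.array(X), np.array(y_grasp), np.array(phases)

# Compare LDA and linear/RBF SVMs on each motion phase separately.
classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (RBF)": SVC(kernel="rbf"),
}
for phase in range(3):
    mask = phases == phase
    print(f"Phase {phase}:")
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X[mask], y_grasp[mask], cv=5).mean()
        print(f"  {name}: {acc:.2f} cross-validated accuracy")
```

Evaluating each phase separately mirrors the abstract's question of how early in the reach the grasp type becomes decodable; with real EMG data the features would be windowed time-domain descriptors rather than the random draws used here.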