Abstract

The control of robotic prosthetic hands (RPHs) for upper-limb amputees is far from optimal, and simultaneous, proportional finger control of an RPH based on EMG signals remains challenging. Using EMG and kinematics recordings of subjects performing a predefined sequence of single- and multi-finger movements, we aimed to predict finger flexion and thumb opposition angles. We compared two deep learning (DL) approaches, one taking the raw EMG signals as input and the other the spectrogram of the signal, against the standard state-of-the-art decoding technique (STD) for finger angle regression. Using a genetic algorithm for hyperparameter optimization, we obtained an optimized model architecture (and, in the case of STD, an optimized feature set) for each condition on one recording session. We then evaluated the best model of each condition on the eleven EMG and finger kinematics recordings available from four subjects. The two DL approaches, both based on convolutional neural networks, predicted finger angles with a similar mean squared error loss, and both outperformed the standard approach for the regression of simultaneous single-finger angles. The proposed decoding strategy and hyperparameter optimization framework provide a basis for further improving proportional single-finger control of RPHs.
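To make the raw-EMG condition concrete, here is a minimal PyTorch sketch of a 1-D convolutional regressor mapping a window of multi-channel EMG to joint angles, trained against the mean squared error loss the abstract reports. This is an illustration under stated assumptions, not the authors' architecture: the channel count, window length, layer sizes, and output dimension are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- the abstract does not specify these.
N_EMG_CHANNELS = 8      # assumed number of EMG electrodes
WINDOW_SAMPLES = 400    # assumed samples per input window
N_ANGLES = 5            # e.g. four finger flexions + thumb opposition

class RawEMGRegressor(nn.Module):
    """1-D CNN mapping a raw EMG window to finger angles (sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_EMG_CHANNELS, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (WINDOW_SAMPLES // 16), 128),
            nn.ReLU(),
            nn.Linear(128, N_ANGLES),   # regression output: joint angles
        )

    def forward(self, x):               # x: (batch, channels, samples)
        return self.head(self.features(x))

model = RawEMGRegressor()
loss_fn = nn.MSELoss()                  # MSE loss, as in the abstract
x = torch.randn(16, N_EMG_CHANNELS, WINDOW_SAMPLES)  # dummy EMG batch
y = torch.randn(16, N_ANGLES)                        # dummy angle targets
loss = loss_fn(model(x), y)
```

The spectrogram condition would replace the 1-D input with a time-frequency representation (e.g. a 2-D input per channel) and 2-D convolutions; in the paper, the specific architectures and, for STD, the feature set were selected by a genetic algorithm rather than fixed by hand as above.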

Details