Abstract

This work addresses the question of human imitation through convergent evidence from neuroscience, using tools from machine learning. In particular, we consider a deficit in the imitation of meaningless gestures (i.e., hand postures relative to the head) following a callosal brain lesion (i.e., disconnected hemispheres). We base our work on the rationale that examining how imitation is impaired in apraxic patients can unveil its underlying neural principles. We ground the functional architecture and information flow of our model in brain imaging studies. Finally, findings from monkey neurophysiological studies guide the implementation of our processing modules. Our neurocomputational model of visuo-motor imitation is based on self-organizing maps receiving sensory input (i.e., visual, tactile or proprioceptive) with associated activities [1]. We train the connections between the maps with anti-Hebbian learning to account for the transformations required to translate the observed visual stimulus to be imitated into the corresponding tactile and proprioceptive information that will guide the imitative gesture. Patterns of impairment in the model, produced by adding uncertainty to the transfer of information between the networks, reproduce the deficits found in a clinical examination of visuo-motor imitation of meaningless gestures [2]. The model generates hypotheses about the type of representation and the neural mechanisms underlying human visuo-motor imitation. It also helps to better understand the occurrence and nature of imitation errors in patients with brain lesions.
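
To make the architecture described above more concrete, the following is a minimal sketch, not the authors' implementation: two small self-organizing maps (one for the visual stimulus, one for the proprioceptive state), an associative mapping between them, and a "callosal lesion" modelled as Gaussian noise added to the activity transferred from the visual map to the proprioceptive map. The abstract trains the inter-map connections with anti-Hebbian learning; since the exact rule is not given there, this sketch substitutes a plain Hebbian associative update. All map sizes, learning rates, and the toy visuo-motor transformation are assumptions for illustration.

```python
# Minimal illustrative sketch (assumed parameters throughout, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
MAP = 8          # 8x8 units per map (assumed)
DIM = 3          # toy input dimensionality (assumed)

def train_som(data, epochs=30, lr0=0.5, sigma0=2.5):
    """Standard Kohonen SOM on a 2-D grid."""
    n = MAP * MAP
    w = rng.random((n, DIM))
    grid = np.array([(i, j) for i in range(MAP) for j in range(MAP)], float)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = sigma0 * (1.0 - t / epochs) + 0.3
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))          # best-matching unit
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)                       # neighbourhood update
    return w

def activity(w, x, beta=20.0):
    """Normalised population activity of a map for input x (Gaussian tuning, assumed)."""
    a = np.exp(-beta * ((w - x) ** 2).sum(axis=1))
    return a / a.sum()

# Toy paired data: the proprioceptive state is a fixed linear transform of the
# observed visual stimulus (a stand-in for the visuo-motor transformation).
T = rng.standard_normal((DIM, DIM)) * 0.5
visual = rng.random((300, DIM))
proprio = visual @ T

w_vis = train_som(visual)
w_pro = train_som(proprio)

# Associative inter-map connections (Hebbian stand-in; the abstract reports
# anti-Hebbian learning, whose exact formulation is not specified there).
W = np.zeros((MAP * MAP, MAP * MAP))
for x, y in zip(visual, proprio):
    W += np.outer(activity(w_pro, y), activity(w_vis, x))

def imitate(x, transfer_noise=0.0):
    """Map a visual stimulus to a proprioceptive estimate.
    transfer_noise degrades the inter-map transfer, mimicking the lesion."""
    a_vis = activity(w_vis, x)
    a_vis = np.clip(a_vis + transfer_noise * rng.standard_normal(a_vis.size), 0, None)
    a_pro = W @ a_vis
    return w_pro[np.argmax(a_pro)]           # decode: preferred value of winning unit

# Imitation error (distance between target and produced posture) grows as more
# uncertainty is injected into the inter-map transfer.
for noise in (0.0, 0.05, 0.2):
    err = np.mean([np.linalg.norm(imitate(x, noise) - y)
                   for x, y in zip(visual[:50], proprio[:50])])
    print(f"transfer noise {noise:>4}: mean imitation error {err:.3f}")
```

Running the sketch shows the intended qualitative effect: with no transfer noise the produced posture stays close to the target (up to map quantization), while increasing noise yields larger and more variable imitation errors, loosely analogous to the lesion-induced deficits described above.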
