Enactive Robot Vision

Enactivism claims that sensory-motor activity and embodiment are crucial to perceiving the environment, and that machine vision could be a much simpler business if considered in this context. However, computational models of enactive vision are rare and often rely on handcrafted control systems. In this paper, we describe results from experiments in which evolved robots are free to choose whether to exploit sensory-motor coordination in a set of vision-based tasks. We show that complex visual tasks can be tackled with remarkably simple neural architectures generated by a co-evolutionary process of active vision and feature selection. We describe the application of this methodology in four sets of experiments: shape discrimination, car driving, and wheeled and bipedal robot navigation. A further set of experiments, in which the visual system develops its receptive fields by means of unsupervised Hebbian learning, demonstrates that the receptive fields are significantly affected by the behavior of the system and differ from those predicted by most computational models of visual cortex. Finally, we show that our robots can also replicate the performance deficiencies observed in sensory-deprivation experiments with kittens.
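The unsupervised development of receptive fields mentioned above can be illustrated with a minimal sketch of stabilized Hebbian learning (Oja's rule), under which a visual unit's weight vector converges toward the dominant direction of variation in its input. This is only an illustrative toy model on synthetic input patches, not the architecture or learning rule used in the paper's experiments; all names and parameters here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_update(w, x, lr=0.01):
    """One step of Oja's rule: Hebbian growth (lr * y * x)
    with an implicit weight-normalizing decay term (-lr * y^2 * w)."""
    y = w @ x                         # postsynaptic activation
    return w + lr * y * (x - y * w)   # Hebbian term plus decay

# Toy "visual" input: 16-pixel patches dominated by one structured direction,
# standing in for the statistics of what the robot's camera sees.
n_pixels, n_steps = 16, 5000
direction = rng.standard_normal(n_pixels)
direction /= np.linalg.norm(direction)

w = rng.standard_normal(n_pixels) * 0.1
for _ in range(n_steps):
    x = direction * rng.standard_normal() + 0.1 * rng.standard_normal(n_pixels)
    w = oja_update(w, x)

# The learned weight vector (the unit's "receptive field") aligns with the
# dominant input direction, up to sign.
alignment = abs(w @ direction) / np.linalg.norm(w)
```

The point relevant to the abstract is that the receptive field is shaped by the input statistics the unit actually experiences: if the robot's behavior changes what the camera sees, the same rule produces different receptive fields.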

Published in: Adaptive Behavior, 16(2-3), 122-128


