Abstract

The complexity of today's autonomous robots poses a major challenge for Artificial Intelligence. These robots are equipped with sophisticated sensors and mechanical abilities that allow them to enter our homes and interact with humans. For example, today's robots are almost all equipped with vision, and several of them can move over rough terrain on wheels or legs. The methods developed so far in Artificial Intelligence, however, are not yet ready to cope with the complexity of the information gathered through robot sensors and with the need for rapid action in partially unknown and dynamic environments. In this thesis, I will argue that the apparent complexity of the environment and of the robot brain can be significantly simplified if perception, behavior, and learning are allowed to co-develop on the same time scale. In doing so, robots become sensitive to, and actively exploit, characteristics of the environment that they can tackle within their own computational and physical constraints.

This line of work is grounded in philosophical and psychological research showing that perception is an active process mediated by behavior. Computational models of active vision, however, are rare and often rely on architectures that are preprogrammed to detect certain characteristics of the environment. Previous work has shown that complex visual tasks, such as position- and size-invariant shape recognition as well as navigation, can be tackled with remarkably simple neural architectures generated by a coevolutionary process of active vision and feature selection: behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons were evolved while freely interacting with their environments.

I proceed further along this line of investigation and describe the application of this methodology in three situations, namely car driving with an omnidirectional camera, goal-oriented navigation of a humanoid robot, and cooperative tasks carried out by two agents. I will show that these systems develop sensitivity to a number of retinotopic visual features – oriented edges, corners, height – and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, which allows them to accomplish their goals.

In a second set of experiments, I will show that active vision can be exploited by the robot to perform anticipatory exploration of the environment in a task that requires landmark-based navigation. Evolved robots exploit an internal expectation system that uses active exploration to check for expected events in the environment.

I will describe a third set of experiments in which, in addition to the evolutionary process, the visual system of the robot develops receptive fields by means of unsupervised Hebbian learning, and I will show that these receptive fields are significantly shaped by the behavior of the system and differ from those predicted by most computational models of the visual cortex. Finally, I will show that these robots replicate the performance deficiencies observed in motor-deprivation experiments with kittens when they are exposed to the same type of motor deprivation. Furthermore, the analyses of our robot brains suggest an explanation for the deficiencies observed in kittens that have not yet been fully understood.
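
To make the evolutionary setup concrete, the sketch below gives one plausible reading of it: a genome encodes the weights of a minimal feedforward controller that maps a coarse retina directly onto motor commands, and a generational genetic algorithm selects and mutates genomes according to task fitness. This is a hedged illustration, not the thesis' actual code: the retina size, the network shape, the GA parameters, and the evaluate callback (which would run one robot lifetime in the environment and return a task score) are all assumptions introduced here.

    import numpy as np

    RETINA = 25                        # assumed 5x5 retina sampled from the camera
    MOTORS = 2                         # assumed motor/gaze command channels
    GENOME = (RETINA + 1) * MOTORS     # weights plus one bias per motor neuron

    def controller(weights, retina):
        # Direct retina-to-motor mapping, mirroring the simple evolved
        # architectures described above (no hidden layer).
        w = weights.reshape(MOTORS, RETINA + 1)
        x = np.append(retina, 1.0)     # append constant bias input
        return np.tanh(w @ x)          # motor activations in [-1, 1]

    def evolve(evaluate, pop_size=100, generations=50, elite=20, sigma=0.1):
        # Generational GA: rank-based truncation selection, Gaussian
        # mutation, no crossover. 'evaluate' is a hypothetical callback
        # that runs one robot lifetime and returns its fitness.
        pop = np.random.randn(pop_size, GENOME)
        best = pop[0]
        for _ in range(generations):
            fitness = np.array([evaluate(ind) for ind in pop])
            ranked = pop[np.argsort(fitness)[::-1]]
            best = ranked[0]
            parents = ranked[:elite]
            picks = np.random.randint(elite, size=pop_size - elite)
            children = parents[picks] + sigma * np.random.randn(pop_size - elite, GENOME)
            pop = np.vstack([parents, children])
        return best

Because the same controller moves both the body and the camera, fitness implicitly rewards behaviors that place informative features in sensitive regions of the retina, which is the active-vision effect described above.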
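The unsupervised development of receptive fields can likewise be illustrated with a short sketch. The abstract specifies only "Hebbian learning", so the concrete rule below, Sanger's generalized Hebbian rule, is an assumption chosen because it is a standard normalized Hebbian scheme whose units converge to the leading principal components of their input; the retina size and learning rate are likewise made up. The point relevant to the thesis is that the resulting receptive fields depend entirely on the input stream, which a behaving robot generates through its own actions.

    import numpy as np

    def sanger_update(W, x, eta=0.01):
        # One step of Sanger's generalized Hebbian rule: Hebbian growth
        # (outer product of post- and presynaptic activity) minus a
        # decay term that keeps the units decorrelated and bounded.
        y = W @ x                                    # postsynaptic responses
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # Receptive fields emerge from whatever statistics the robot's own
    # behavior exposes it to; random patches stand in for that stream here.
    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((4, 25))           # 4 units on a 5x5 retina
    for _ in range(10000):
        x = rng.standard_normal(25)                  # replace with behavior-driven retinal input
        W = sanger_update(W, x)

Feeding such a rule passively sampled images versus images sampled under closed-loop behavior would yield different receptive fields, which is the comparison the abstract draws with standard computational models of the visual cortex.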
