000112723 001__ 112723
000112723 005__ 20190509132133.0
000112723 0247_ $$2doi$$a10.5075/epfl-thesis-3974
000112723 02470 $$2urn$$aurn:nbn:ch:bel-epfl-thesis3974-3
000112723 02471 $$2nebis$$a5449482
000112723 037__ $$aTHESIS
000112723 041__ $$aeng
000112723 088__ $$a3974
000112723 245__ $$aEnactive robot vision
000112723 269__ $$a2007
000112723 260__ $$bEPFL$$c2007$$aLausanne
000112723 300__ $$a123
000112723 336__ $$aTheses
000112723 502__ $$aEzequiel Di Paolo, Frédéric Kaplan, Tom Ziemke
000112723 520__ $$aThe complexity of today's autonomous robots poses a major challenge for Artificial Intelligence. These robots are equipped with sophisticated sensors and mechanical abilities that allow them to enter our homes and interact with humans. For example, almost all of today's robots are equipped with vision, and several of them can move over rough terrain on wheels or legs. The methods developed so far in Artificial Intelligence, however, are not yet ready to cope with the complexity of the information gathered through the robot's sensors and with the need for rapid action in partially unknown and dynamic environments. In this thesis, I will argue that the apparent complexity of the environment and of the robot brain can be significantly reduced if perception, behavior, and learning are allowed to co-develop on the same time scale. In doing so, robots become sensitive to, and actively exploit, characteristics of the environment that they can tackle within their own computational and physical constraints. This line of work is grounded in philosophical and psychological research showing that perception is an active process mediated by behavior. However, computational models of active vision are rare and often rely on architectures that are preprogrammed to detect certain characteristics of the environment. Previous work has shown that complex visual tasks, such as position- and size-invariant shape recognition as well as navigation, can be tackled with remarkably simple neural architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons were evolved while freely interacting with their environments. I proceed further along this line of investigation and describe the application of this methodology in three settings: car driving with an omnidirectional camera, goal-oriented navigation of a humanoid robot, and cooperative tasks performed by two agents. I will show that these systems develop sensitivity to a number of retinotopic visual features (oriented edges, corners, height) and a behavioral repertoire for locating these features and for bringing and keeping them within sensitive regions of the vision system, which allows them to accomplish their goals. In a second set of experiments, I will show that active vision can be exploited by the robot to perform anticipatory exploration of the environment in a task that requires landmark-based navigation. Evolved robots exploit an internal expectation system that uses active exploration to check for expected events in the environment. I will describe a third set of experiments in which, in addition to the evolutionary process, the visual system of the robot can develop receptive fields by means of unsupervised Hebbian learning, and show that these receptive fields are significantly affected by the behavior of the system and differ from those predicted by most computational models of visual cortex. Finally, I will show that these robots replicate the performance deficiencies observed in motor-deprivation experiments with kittens when they are exposed to the same type of motor deprivation. Furthermore, the analyses of our robot brains suggest an explanation for the deficiencies observed in kittens, which have not yet been fully understood.
000112723 6531_ $$aactive vision
000112723 6531_ $$aenaction
000112723 6531_ $$amobile robots
000112723 6531_ $$aneural networks
000112723 6531_ $$acomputer vision
000112723 6531_ $$avision active
000112723 6531_ $$aénaction
000112723 6531_ $$arobots mobiles
000112723 6531_ $$aréseaux de neurones
000112723 6531_ $$avision assistée par ordinateur
000112723 6531_ $$aevolutionary robotics
000112723 700__ $$0241094$$g157788$$aSuzuki, Mototaka
000112723 720_2 $$aFloreano, Dario$$edir.$$g111729$$0240742
000112723 8564_ $$uhttps://infoscience.epfl.ch/record/112723/files/EPFL_TH3974.pdf$$zTexte intégral / Full text$$s13139354$$yTexte intégral / Full text
000112723 909C0 $$xU10370$$0252161$$pLIS
000112723 909CO $$pthesis$$pthesis-bn2018$$pDOI$$ooai:infoscience.tind.io:112723$$qDOI2$$qGLOBAL_SET$$pSTI
000112723 917Z8 $$x108898
000112723 918__ $$dEDPR$$bSTI-SMT$$cI2S$$aSTI
000112723 919__ $$aLIS
000112723 920__ $$b2007$$a2007-12-05
000112723 970__ $$a3974/THESES
000112723 973__ $$sPUBLISHED$$aEPFL
000112723 980__ $$aTHESIS