000212558 001__ 212558
000212558 005__ 20180913063350.0
000212558 0247_ $$2doi$$a10.1109/TAMD.2015.2417353
000212558 022__ $$a1943-0604
000212558 02470 $$2ISI$$a000356164500002
000212558 037__ $$aARTICLE
000212558 245__ $$aMotor-Primed Visual Attention for Humanoid Robots
000212558 260__ $$aPiscataway$$bIEEE-Inst Electrical Electronics Engineers Inc$$c2015
000212558 269__ $$a2015
000212558 300__ $$a16
000212558 336__ $$aJournal Articles
000212558 520__ $$aWe present a novel, biologically inspired approach to the efficient allocation of visual resources for humanoid robots, in the form of a motor-primed visual attentional landscape. The attentional landscape is a more general, dynamic, and complex arrangement of spatial attention than the popular "attentional spotlight" or "zoom-lens" models of attention. Motor-priming of attention is a mechanism for prioritizing visual processing of motor-relevant parts of the visual field over motor-irrelevant ones. In particular, we present two techniques for constructing a visual attentional landscape. The first, more general, technique devotes visual attention to the reachable space of the robot (peripersonal-space-primed attention). The second, more specialized, technique allocates visual attention with respect to the robot's motor plans (motor-plan-primed attention). Hence, in our model, visual attention is not defined exclusively in terms of visual saliency in color, texture, or intensity cues; rather, it is modulated by motor information. This computational model is inspired by recent findings in visual neuroscience and psychology. In addition to the two approaches for constructing the attentional landscape, we present two methods for using it to drive visual processing. We show that motor-priming of visual attention can very efficiently distribute the limited computational resources devoted to visual processing. The proposed model is validated in a series of experiments conducted with the iCub robot, using both the simulator and the real robot.
000212558 6531_ $$aCognitive robotics
000212558 6531_ $$acomputer vision
000212558 6531_ $$ahumanoid robots
000212558 6531_ $$amachine learning
000212558 700__ $$0244106$$aLukic, Luka$$g207253$$uInst Super Tecn, VISLAB ISR, Lisbon, Portugal
000212558 700__ $$0240594$$aBillard, Aude$$g115671
000212558 700__ $$aSantos-Victor, Jose$$uInst Super Tecn, VISLAB ISR, Lisbon, Portugal
000212558 773__ $$j7$$k2$$q76-91$$tIEEE Transactions on Autonomous Mental Development
000212558 909C0 $$0252119$$pLASA$$xU10660
000212558 909CO $$ooai:infoscience.tind.io:212558$$pSTI$$particle
000212558 917Z8 $$x202511
000212558 937__ $$aEPFL-ARTICLE-212558
000212558 973__ $$aEPFL$$rREVIEWED$$sPUBLISHED
000212558 980__ $$aARTICLE