Ambrosano, Alessandro; Vannucci, Lorenzo; Albanese, Ugo; Kirtay, Murat; Falotico, Egidio; Martinez-Canada, Pablo; Hinkel, Georg; Kaiser, Jacques; Ulbrich, Stefan; Levi, Paul; Morillas, Christian; Knoll, Alois; Gewaltig, Marc-Oliver; Laschi, Cecilia

Retina Color-Opponency Based Pursuit Implemented Through Spiking Neural Networks in the Neurorobotics Platform

Conference paper, 2016
DOI: 10.1007/978-3-319-42417-0_2
https://infoscience.epfl.ch/handle/20.500.14299/133295
WOS:000389727000002

Abstract: The 'red-green' pathway of the retina is classically recognized as one of the retinal mechanisms that allow humans to extract color information from light by combining signals from L-cones and M-cones in an opponent fashion. The precise retinal circuitry underlying this opponency is still uncertain, but it is known that signals from L-cones and M-cones, whose spectral responses overlap widely, contribute with opposite signs. In this paper, we simulate the red-green opponency process using a retina model based on linear-nonlinear analysis to characterize context adaptation, exploiting an image-processing approach to simulate the neural responses needed to track a moving target. Moreover, we integrate this model with a visual pursuit controller implemented as a spiking neural network to guide eye movements in a humanoid robot. Tests conducted in the Neurorobotics Platform confirm the effectiveness of the whole model. This work is the first step towards a bio-inspired smooth pursuit model embedding a retina model using spiking neural networks.
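
As a rough illustration of the opponency and linear-nonlinear processing described in the abstract, the sketch below computes a red-green opponent signal as the difference of L-cone and M-cone activations, passes it through a linear temporal filter, and applies a static nonlinearity to obtain a firing rate. This is not the authors' retina model; the function name, cone weighting, filter shape, and nonlinearity are illustrative assumptions.

# Minimal sketch (assumed, not from the paper): a linear-nonlinear (LN)
# red-green opponent stage.
import numpy as np

def ln_red_green_response(l_cone, m_cone, dt=0.001, tau=0.02, gain=50.0):
    """Opponent signal = L - M, smoothed by an exponential temporal kernel,
    then passed through a half-wave rectifying nonlinearity (rate in Hz)."""
    opponent = l_cone - m_cone                       # L and M contribute with opposite signs
    t = np.arange(0.0, 5.0 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()                           # normalized linear temporal filter
    filtered = np.convolve(opponent, kernel)[:len(opponent)]
    return gain * np.maximum(filtered, 0.0)          # static nonlinearity -> firing rate

if __name__ == "__main__":
    dt = 0.001
    t = np.arange(0.0, 1.0, dt)
    l_cone = 0.5 + 0.3 * np.sin(2 * np.pi * 2 * t)   # toy L-cone activation
    m_cone = 0.5 + 0.1 * np.sin(2 * np.pi * 2 * t)   # overlapping, weaker M-cone activation
    rate = ln_red_green_response(l_cone, m_cone, dt=dt)
    print(f"peak opponent firing rate: {rate.max():.1f} Hz")

In the paper, a response of this kind would drive the spiking pursuit controller; here the output is simply printed to show the shape of the opponent signal.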