Title: Adaptive mechanisms of visual motion discrimination, integration, and segregation
Authors: Penaloza, Boris; Herzog, Michael H.; Ogmen, Haluk
Dates: 2021-11-06; 2022-11-01; 2021-11-06; 2021-11-01
DOI: 10.1016/j.visres.2021.07.002
Handle: https://infoscience.epfl.ch/handle/20.500.14299/182815
Web of Science: WOS:000709009800010
Type: text::journal::journal article::research article

Abstract: Under ecological conditions, the luminance impinging on the retina varies over a dynamic range of 220 dB. Stimulus contrast can also vary drastically within a scene, and eye movements leave little time for sampling luminance. Given these fundamental problems, the human brain allocates significant resources and deploys both structural and functional solutions that work in tandem to compress this range. Here we propose a new dynamic neural model built upon well-established canonical neural mechanisms. The model consists of two feed-forward stages. The first stage encodes the stimulus spatially and normalizes its activity by extracting contrast and discounting the background luminance. These normalized activities allow a second stage to implement a contrast-dependent spatial-integration strategy. We show how the properties of this model can account for adaptive properties of motion discrimination, integration, and segregation.

Keywords: Neurosciences; Ophthalmology; Psychology; Neurosciences & Neurology; shunting equations; dynamic neural model; sensory adaptation; dynamic-range problem; spatial integration; motion discrimination; motion segmentation; adaptive center-surround; center-surround antagonism; contour enhancement; spatial summation; receptive-field; perception; suppression; model; responses; neurons
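The abstract's first stage (contrast extraction with discounting of background luminance) rests on shunting equations, one of the canonical mechanisms listed in the keywords. The following is a minimal sketch of a single shunting neuron at steady state, not the paper's actual two-stage model; the function name, parameter values (A, B, D), and the center/surround drives are illustrative assumptions.

```python
# Minimal sketch of a shunting (membrane) equation at steady state.
# Dynamics:      dx/dt = -A*x + (B - x)*E - (x + D)*I
# Steady state:      x = (B*E - D*I) / (A + E + I)
# The divisive term (A + E + I) normalizes activity, compressing an
# unbounded input range into the bounded interval (-D, B).

def shunting_steady_state(E, I, A=1.0, B=1.0, D=1.0):
    """Steady-state activity of a shunting neuron.

    E: excitatory drive (e.g., receptive-field center)
    I: inhibitory drive (e.g., receptive-field surround)
    A: passive decay rate; B, D: excitatory/inhibitory saturation bounds.
    """
    return (B * E - D * I) / (A + E + I)

# Uniform background luminance: center and surround drives match,
# so the response is zero -- the background is "discounted".
L0 = 100.0
print(shunting_steady_state(L0, L0))  # prints 0.0

# A 20% luminance increment on the center yields a contrast-like
# response that is nearly invariant to absolute luminance.
for L0 in (10.0, 100.0, 1000.0):
    print(round(shunting_steady_state(1.2 * L0, L0), 4))
```

Running the loop shows the response converging toward 0.2/2.2 regardless of the background level, i.e., a Weber-law-like contrast code rather than a code for absolute luminance, which is the kind of compression of the 220 dB range the abstract describes.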