Adaptive mechanisms of visual motion discrimination, integration, and segregation
Under ecological conditions, the luminance impinging on the retina varies over a dynamic range of 220 dB. Stimulus contrast can also vary drastically within a scene, and eye movements leave little time for sampling luminance. Given these fundamental constraints, the human brain allocates significant resources and deploys structural and functional solutions that work in tandem to compress this range. Here we propose a new dynamic neural model built upon well-established canonical neural mechanisms. The model consists of two feed-forward stages. The first stage encodes the stimulus spatially and normalizes its activity by extracting contrast and discounting the background luminance. These normalized activities allow a second stage to implement a contrast-dependent spatial-integration strategy. We show how the properties of this model can account for adaptive properties of motion discrimination, integration, and segregation.
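The two-stage architecture described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's published model: the function names, the divisive-normalization form, and the mapping from contrast to integration-window size are all assumptions chosen only to show the general idea (normalize away background luminance, then pool over a window that shrinks as contrast grows).

```python
import numpy as np

def stage1_normalize(stimulus, eps=1e-6):
    """Stage 1 (sketch): discount background luminance and extract contrast.

    Local contrast is approximated as the deviation from the mean luminance,
    divisively normalized by that mean (a canonical normalization motif).
    The exact form is an illustrative assumption.
    """
    background = stimulus.mean()
    return (stimulus - background) / (background + eps)

def stage2_integrate(contrast, base_radius=1.0, gain=4.0):
    """Stage 2 (sketch): contrast-dependent spatial integration.

    The integration window shrinks as contrast grows: low-contrast inputs
    are pooled over a large neighborhood (favoring integration), while
    high-contrast inputs are pooled over a small one (favoring segregation).
    Returns the pooled signal and the window radius actually used.
    """
    rms = np.sqrt(np.mean(contrast ** 2))
    radius = base_radius / (1.0 + gain * rms)  # smaller window at high contrast
    width = max(1, int(round(radius * len(contrast))))
    kernel = np.ones(width) / width            # simple box-filter pooling
    return np.convolve(contrast, kernel, mode="same"), radius
```

For example, a faint luminance increment on a uniform background yields a larger integration radius than a strong increment on the same background, reproducing the qualitative contrast dependence the abstract describes.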
Files:
Penaloza et al. - 2021 - (Postprint) Adaptive Mechanisms [...].pdf (Postprint, under embargo until 2022-11-01, CC BY-NC-ND, 1.85 MB, Adobe PDF, MD5: f81b1a16c9995d3e275eeef62b89e5fa)
Penaloza et al. - 2021 - (Suppl Mat) Adaptive Mechanisms [...].pdf (Postprint, under embargo until 2022-11-01, CC BY-NC-ND, 437.97 KB, Adobe PDF, MD5: 0b9e01dd7de0de74363edcc6087165f2)