How good Gestalt determines low level vision

In classical models of vision, low-level visual tasks are explained by low-level neural mechanisms. For example, in crowding, perception of a target is impeded by nearby elements because, it is argued, the responses of neurons coding for the nearby elements are pooled with those coding for the target. Indeed, performance deteriorated when a vernier stimulus was flanked by two lines, one on each side. However, performance improved strongly when the lines were embedded in squares. Low-level interactions cannot explain this uncrowding effect because the neighboring lines are still present as parts of the squares. It seems that good Gestalts determine crowding, contrary to classical models, which predict that low-level crowding should occur even before the squares, i.e., higher-level features, are computed. Crowding and other types of contextual modulation are just one example. Very similar results were found for visual backward and forward masking, feature integration along motion trajectories, and many more paradigms. I will discuss how good Gestalts determine low-level processing by recurrent, dynamic computations, thus mapping physical space into perceptual space.

Published in:
Perception, 41, 1
Presented at:
35th European Conference on Visual Perception, Alghero, Italy, September 2-6, 2012
London, Pion Ltd.

Record created 2012-10-03, last modified 2018-03-17
