How good Gestalt determines low level vision
In classical models of vision, low-level visual tasks are explained by low-level neural mechanisms. For example, in crowding, perception of a target is impeded by nearby elements because, as it is argued, the responses of neurons coding for the target and the nearby elements are pooled. Indeed, performance deteriorated when a vernier stimulus was flanked by two lines, one on each side. However, performance improved strongly when the lines were embedded in squares. Low-level interactions cannot explain this uncrowding effect because the flanking lines are still physically present; they have merely become parts of the squares, yet crowding vanishes. It seems that good Gestalts determine crowding, contrary to classical models, which predict that low-level crowding should occur even before the squares, i.e., higher-level features, are computed. Crowding is only one example of such contextual modulation: very similar results were found for visual backward and forward masking, feature integration along motion trajectories, and many more paradigms. I will discuss how good Gestalts determine low-level processing through recurrent, dynamic computations, thus mapping physical space into perceptual space.
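To make the pooling argument concrete, below is a minimal, hypothetical sketch (in Python) of a classical pooling account of crowding: each element contributes an offset signal that is averaged with the target's, weighted by a Gaussian integration field. The element layout, weights, and Gaussian falloff are illustrative assumptions, not parameters of any published model. The sketch shows why pure pooling cannot produce uncrowding: it only sees element positions, so completing the flanking lines into squares adds more zero-signal elements and can only dilute the target signal further.

```python
# Illustrative sketch of a classical pooling account of crowding.
# All numbers (distances, sigma, layout) are assumptions for this example.
import math

def pooled_offset(target_offset, flankers, sigma=1.0):
    """Pool the target's vernier-offset signal with flanker signals.

    Each flanker is a (distance, offset_signal) pair. Its contribution
    is weighted by a Gaussian falloff over distance from the target
    (the 'integration field'). Flanking lines carry no vernier offset
    (signal = 0), so pooling them in dilutes the target signal.
    """
    num = target_offset  # target at distance 0 gets weight 1
    den = 1.0
    for distance, offset_signal in flankers:
        w = math.exp(-(distance ** 2) / (2 * sigma ** 2))
        num += w * offset_signal
        den += w
    return num / den

target = 1.0  # normalized vernier-offset signal of the target

print(pooled_offset(target, []))                        # unflanked: 1.0
print(pooled_offset(target, [(1.0, 0.0), (1.0, 0.0)]))  # two lines: diluted

# Pooling is blind to grouping: adding the remaining sides of each
# square just adds more zero-offset elements, so the pooled signal
# drops even further -- the opposite of the observed uncrowding.
square_flankers = [(1.0, 0.0)] * 2 + [(1.4, 0.0)] * 6   # crude square layout
print(pooled_offset(target, square_flankers))
```

Running the sketch gives roughly 1.0 (unflanked), 0.45 (two lines), and 0.22 (squares), so under pooling the squares should crowd more, not less; this is the tension with the uncrowding result that the abstract highlights.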
Year | Place | Volume | Issue | Status |
2012 | London | 41 | 1 | Non-reviewed |
Event name | Event place | Event date |
| Alghero, Italy | September 2-6, 2012 |