Abstract

How features are attributed to objects is one of the most puzzling issues in the neurosciences. According to a deeply entrenched view, the analysis of features is spatially localized and, consequently, features are perceived at the locations where they physically occur. Previously, only rare cases were found in which features are perceived at a different location. These cases were usually interpreted as errors of visual processing, arising either because the observer's attention was disturbed or because the spatio-temporal limits of the visual system were exceeded. Here, I show that features in motion displays can be systematically attributed from one location to another even though the elements that carry the features are invisible. Further, the experiments presented in this thesis show that features can be integrated across locations, precisely following rules of grouping. This indicates that grouping operations can access and process individual features prior to an integration stage. Hence, contrary to what is usually assumed, these cases of non-retinotopic feature integration may point not to an error but to a fundamental computational strategy by which the visual system maintains perceptual objects across space and time.
