Abstract

Perceptual learning is the ability to modify perception through practice. As a form of brain plasticity, perceptual learning has been studied for more than thirty years in fields including psychology, neurophysiology, and computational neuroscience. Thanks to its simple nature, perceptual learning is often considered a basic form of learning and an ideal starting point for understanding more complex forms of plasticity. From a computational perspective, perceptual learning is usually described with simple neural network architectures. Here I demonstrate, through empirical results and theoretical considerations, why current computational models are inadequate to describe perceptual learning. Paradoxically, simple computational models often greatly outperform human observers: while very simple feed-forward neural networks can learn any stimulus-response mapping, the human brain cannot. Common neural network models treat learning as a mere computational problem, focusing primarily on stimulus-response mappings. However, I will show that perceptual learning is a more complex phenomenon. For example, while such networks learn only one task per network, in real learning situations human observers often have to learn multiple tasks using the very same neurons. Hence, more plausible computational approaches need to be developed that account for the complex learning situations faced by human observers.
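
To illustrate the claim that even very simple feed-forward networks can fit an arbitrary stimulus-response mapping, the following minimal sketch trains a small two-layer network on randomly labelled stimuli. This is not the model used in the paper; the toy task, network size, and all parameter choices are illustrative assumptions.

```python
# Minimal sketch (assumed toy example, not the paper's model): a small
# feed-forward network fitting an arbitrary stimulus-response mapping.
import numpy as np

rng = np.random.default_rng(0)

# Toy "perceptual" task: 8-dimensional stimuli with arbitrary binary labels.
n_stimuli, n_features, n_hidden = 32, 8, 64
X = rng.normal(size=(n_stimuli, n_features))
y = rng.integers(0, 2, size=(n_stimuli, 1)).astype(float)

# Two-layer network: input -> hidden (tanh) -> output (sigmoid).
W1 = rng.normal(scale=0.5, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

# Full-batch gradient descent on the cross-entropy loss.
lr = 0.5
for step in range(2000):
    h, p = forward(X)
    d_out = (p - y) / n_stimuli          # gradient w.r.t. pre-sigmoid output
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_hidden = (d_out @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T @ d_hidden
    db1 = d_hidden.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, p = forward(X)
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy on the arbitrary mapping: {accuracy:.2f}")  # typically ~1.00
```

With enough hidden units, such a network memorizes even random stimulus-response assignments, whereas human observers cannot learn arbitrary mappings of this kind; this is the mismatch between model capacity and human learning that the abstract highlights.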
