Abstract

Temporal integration of information and prediction of future sensory inputs are assumed to be important computational tasks of generic cortical microcircuits. It has remained open how cortical microcircuits could achieve this, especially since they consist, in contrast to most neural network models, of neurons and synapses with heterogeneous dynamic responses. It turns out, however, that this diversity of computational units increases the capability of microcircuit models for temporal integration. Furthermore, the prediction of future input may be rather easy for such circuits, since it suffices to train the readouts from such microcircuits. In this article we show that very simple readouts from a generic recurrently connected circuit of integrate-and-fire neurons with diverse dynamic synapses can be trained in an unsupervised manner to predict the movements of different objects that move over a simulated sensory field with an unlimited number of combinations of speed, angle, and offset. The autonomously trained microcircuit model is also able to compute the direction of motion, which is a computationally difficult problem (the 'aperture problem'), since it requires disambiguating local sensory readings through the context of other sensory readings at the current and preceding moments. Furthermore, the same circuit can be trained simultaneously in a supervised manner to also report the shape and velocity of the moving object. Finally, it is shown that the trained neural circuit supports novelty detection and the generation of 'imagined movements'. Altogether, the results of this article suggest that it is not necessary to construct specific and biologically unrealistic neural circuit models for specific sensory processing tasks, since 'found' generic cortical microcircuit models, in combination with very simple perceptron-like readouts, can easily be trained to solve such computational tasks.
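
The approach summarized above rests on training only the readouts from a fixed, generic recurrent circuit. The following is a minimal rate-based sketch of that idea, not the article's spiking integrate-and-fire circuit with dynamic synapses: a fixed random recurrent network is driven by a moving bar on a small simulated sensory field, and a simple linear readout is fit by least squares to predict the next sensory frame. All sizes, the leak rate, the bar stimulus, and the training and test conditions are illustrative assumptions rather than values taken from the article.

```python
# Minimal sketch: fixed random "reservoir" + linear readout trained to predict
# the next frame of a moving-bar stimulus. Illustrative only; not the paper's
# spiking circuit model.
import numpy as np

rng = np.random.default_rng(0)

GRID = 8                 # 8x8 simulated sensory field
N_IN = GRID * GRID       # one input channel per sensor
N_RES = 300              # number of recurrent units
LEAK = 0.3               # leak rate of the recurrent units

# Fixed random recurrent and input weights (the generic circuit itself is not trained).
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep spectral radius below 1
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))

def moving_bar(speed, offset, n_steps=40):
    """Frames of a vertical bar sweeping horizontally across the grid."""
    frames = []
    for t in range(n_steps):
        img = np.zeros((GRID, GRID))
        img[:, int(offset + speed * t) % GRID] = 1.0
        frames.append(img.ravel())
    return np.array(frames)

def run_reservoir(frames):
    """Collect reservoir states while the circuit is driven by the input frames."""
    x = np.zeros(N_RES)
    states = []
    for u in frames:
        x = (1 - LEAK) * x + LEAK * np.tanh(W_res @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Build a training set from many movement conditions (different speeds and offsets).
X, Y = [], []
for speed in (1, 2):
    for offset in range(GRID):
        frames = moving_bar(speed, offset)
        states = run_reservoir(frames)
        X.append(states[:-1])     # reservoir state at time t
        Y.append(frames[1:])      # sensory input at time t+1 (prediction target)
X, Y = np.vstack(X), np.vstack(Y)

# Train the linear readout only; the recurrent circuit stays untouched.
W_out = np.linalg.lstsq(X, Y, rcond=None)[0]

# Predict the next frame for a movement speed not seen during training.
test_frames = moving_bar(speed=3, offset=0)
test_states = run_reservoir(test_frames)
pred = test_states[:-1] @ W_out
err = np.mean((pred - test_frames[1:]) ** 2)
print(f"mean squared prediction error on next frame: {err:.4f}")
```

The design choice this illustrates is the one emphasized in the abstract: the recurrent circuit is treated as a fixed, general-purpose temporal integrator, and only a simple (here linear) readout is adapted to the specific prediction task.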
