Infoscience
research article

Perspectives of the high-dimensional dynamics of neural microcircuits from the point of view of low-dimensional readouts

Häusler, S.
•
Markram, H.  
•
Maass, W.
2003
Complexity

We investigate generic models for cortical microcircuits, i.e., recurrent circuits of integrate-and-fire neurons with dynamic synapses. These complex dynamical systems subserve the remarkable information processing capabilities of the cortex, but are at present poorly understood. We analyze the transient dynamics of models for neural microcircuits from the point of view of one or two readout neurons that collapse the high-dimensional transient dynamics of a neural circuit into a one- or two-dimensional output stream. This stream may, for example, represent the information that is projected from such a circuit to some other brain area or to actuators. It is shown that simple local learning rules enable a readout neuron to extract quite different low-dimensional projections from the high-dimensional transient dynamics of a recurrent neural circuit; these projections may even contain virtual attractors that are not apparent in the high-dimensional dynamics of the circuit itself. Furthermore, it is demonstrated that the information extraction capabilities of linear readout neurons are boosted by the computational operations of a sufficiently large preceding neural microcircuit. Hence, a generic neural microcircuit may play a role in information processing similar to that of a kernel in a support vector machine. We demonstrate that the projection of time-varying inputs into a large recurrent neural circuit enables a linear readout neuron to classify the time-varying circuit inputs with the same power as complex nonlinear classifiers, such as a pool of perceptrons trained by the p-delta rule or a feedforward sigmoidal neural net trained by backpropagation, provided that the size of the recurrent circuit is sufficiently large. At the same time, such readout neurons can exploit the stability and speed of learning rules for linear classifiers, thereby avoiding the problems caused by local minima in the error function of nonlinear classifiers.
In addition, it is demonstrated that pairs of readout neurons can transform the complex trajectory of transient states of a large neural circuit into a simple and clearly structured two-dimensional trajectory. This two-dimensional projection of the high-dimensional trajectory can even exhibit convergence to virtual attractors that are not apparent in the high-dimensional trajectory itself.

Type
research article
DOI
10.1002/cplx.10089
Author(s)
Häusler, S.
Markram, H.
Maass, W.
Date Issued
2003
Published in
Complexity
Volume
8
Issue
4
Start page
39
End page
50
Note
Institute for Theoretical Computer Science, Technische Universität Graz, Inffeldgasse 16b, A-8010 Graz, Austria
Special Issue: Complex Adaptive Systems: Part II (issue edited by Heinz Georg Schuster and Klaus Pawelzik)
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
LNMC
Available on Infoscience
February 27, 2008
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/19344
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.