Infoscience

research article

Biologically plausible deep learning – but how far can we go with shallow networks?

Illing, Bernd • Gerstner, Wulfram • Brea, Johanni
October 1, 2019
Neural Networks

Training deep neural networks with the error backpropagation algorithm is considered implausible from a biological perspective. Numerous recent publications suggest elaborate models for biologically plausible variants of deep learning, typically defining success as reaching around 98% test accuracy on the MNIST data set. Here, we investigate how far we can go on digit (MNIST) and object (CIFAR10) classification with biologically plausible, local learning rules in a network with one hidden layer and a single readout layer. The hidden layer weights are either fixed (random or random Gabor filters) or trained with unsupervised methods (Principal/Independent Component Analysis or Sparse Coding) that can be implemented by local learning rules. The readout layer is trained with a supervised, local learning rule. We first implement these models with rate neurons. This comparison reveals, first, that unsupervised learning does not lead to better performance than fixed random projections or Gabor filters for large hidden layers. Second, networks with localized receptive fields perform significantly better than networks with all-to-all connectivity and can reach backpropagation performance on MNIST. We then implement two of the networks - fixed, localized, random & random Gabor filters in the hidden layer - with spiking leaky integrate-and-fire neurons and spike timing dependent plasticity to train the readout layer. These spiking models achieve > 98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation. The performance of our shallow network models is comparable to most current biologically plausible models of deep learning. Furthermore, our results with a shallow spiking network provide an important reference and suggest the use of datasets other than MNIST for testing the performance of future models of biologically plausible deep learning.
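The simplest model family in the abstract (a fixed random hidden layer followed by a readout trained with a supervised, local learning rule) can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: it substitutes a small synthetic two-class dataset for MNIST, uses rate neurons, and trains the readout with a local delta rule (each weight update depends only on the presynaptic activity and the postsynaptic error). All variable names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST: 500 samples, 64 input dims, 2 classes.
n, d, n_hidden, n_classes = 500, 64, 200, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # linearly decodable labels
T = np.eye(n_classes)[y]                  # one-hot targets

# Fixed random projection for the hidden layer (never trained),
# as in the "fixed random" variant described in the abstract.
W_hid = rng.normal(scale=1.0 / np.sqrt(d), size=(d, n_hidden))
H = np.maximum(0.0, X @ W_hid)            # rectified hidden activity

# Readout trained with a local delta rule:
# delta_w = lr * (presynaptic activity) * (target - output).
W_out = np.zeros((n_hidden, n_classes))
lr = 0.01
for epoch in range(20):
    for h, t in zip(H, T):
        out = h @ W_out
        W_out += lr * np.outer(h, t - out)

acc = float((np.argmax(H @ W_out, axis=1) == y).mean())
```

Since both the hidden weights and the readout update use only locally available quantities, no error signal is backpropagated through the hidden layer, which is the sense in which such models are biologically plausible.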

Type
research article
DOI
10.1016/j.neunet.2019.06.001
Web of Science ID
WOS:000483920500008
Author(s)
Illing, Bernd
Gerstner, Wulfram
Brea, Johanni
Date Issued
2019-10-01
Published in
Neural Networks
Volume
118
Start page
90
End page
101
Subjects
Deep learning • Local learning rules • Random Projections • Unsupervised Feature Learning • Spiking Networks • MNIST • CIFAR10

Note
CC-BY license
Editorial or Peer reviewed
REVIEWED
Written at
EPFL

EPFL units
LCN
Available on Infoscience
March 1, 2019
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/154958

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.