Infoscience

doctoral thesis

Supervised learning and inference of spiking neural networks with temporal coding

Stanojevic, Ana  
2023

The way biological brains carry out advanced yet extremely energy-efficient signal processing remains both fascinating and poorly understood. It is known, however, that at least some areas of the brain perform fast and low-cost processing relying only on a small number of temporally encoded spikes. This thesis investigates supervised learning and inference in spiking neural networks (SNNs) with sparse, temporally encoded communication. We explore different setups and compare the performance of our SNNs with that of standard artificial neural networks (ANNs) on data classification tasks.

In the first setup, we consider a family of exact mappings between a single-spike network and a ReLU network. Setting training aside for a moment, we analyse deep SNNs with time-to-first-spike (TTFS) encoding. There exist neural dynamics and a set of parameter constraints that guarantee an approximation-free mapping (conversion) from a ReLU network to an SNN with TTFS encoding. We find that a pretrained deep ReLU network can be replaced with our deep SNN without any performance loss on large-scale image classification tasks (CIFAR100 and PLACES365). However, we hypothesise that in many cases a deep spiking neural network still needs to be trained or fine-tuned for the specific problem at hand.
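The conversion idea can be illustrated with a toy, self-contained sketch (a deliberate simplification, not the exact parameterisation developed in the thesis): inputs are encoded as spike times t_i = t_max − x_i; after each input spike, input i drives the membrane potential up with slope w_i; a neuron fires when the potential crosses a threshold θ, and a neuron that has not fired by the layer deadline is forced to fire at the deadline, which is what realises the ReLU clipping. The weights, threshold, and encoding window below are illustrative values chosen so the assumptions hold (positive slope sums, θ large enough that no neuron fires before all inputs have arrived).

```python
import numpy as np

def ttfs_layer(t_in, w, t_max, theta):
    """Toy TTFS neuron layer: after an input spike at t_i, input i raises the
    membrane potential with slope w_i, so for t >= t_max (all inputs arrived)
    V(t) = A*(t - t_max) + w.x, with A = sum(w) and x = t_max - t_in.
    A neuron fires when V = theta; a neuron that has not fired by the layer
    deadline is forced to fire at the deadline (this yields the ReLU clip)."""
    A = w.sum(axis=1)               # total slope per output neuron (assumed > 0)
    z = w @ (t_max - t_in)          # = w.x, since inputs encode x as t = t_max - x
    t_out = t_max + (theta - z) / A # threshold-crossing time after all inputs
    deadline = t_max + theta / A    # latest allowed spike time for this layer
    return np.minimum(t_out, deadline), deadline

# Illustrative values: row sums of w are positive, and theta is large enough
# that no neuron crosses threshold before t_max.
x = np.array([0.2, 0.8, 0.5, 0.1])
w = np.array([[ 0.5, -0.3,  0.4,  0.2],
              [-0.6, -0.2, -0.1,  1.2],   # w @ x < 0 here, so ReLU clips to 0
              [ 0.9,  0.1, -0.2,  0.4]])
theta, t_max = 2.0, 1.0

t_in = t_max - x                    # TTFS encoding: larger value -> earlier spike
t_out, deadline = ttfs_layer(t_in, w, t_max, theta)
a_snn = w.sum(axis=1) * (deadline - t_out)   # decode spike times back to values
a_relu = np.maximum(0.0, w @ x)              # reference ReLU layer

print(np.allclose(a_snn, a_relu))   # True: the toy conversion is exact
```

In this sketch the decoded spike-time activations coincide with the ReLU activations exactly rather than approximately, which is the sense in which the mapping is approximation-free.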

In the second setup, we consider training a deep single-spike network using a family of exact mappings from a ReLU network. We thoroughly investigate why deep SNNs with TTFS encoding are hard to train and uncover an instance of the vanishing-and-exploding gradient problem. We find that a particular exact mapping solves this problem and yields an SNN with learning trajectories equivalent to those of a ReLU network on large image classification tasks (CIFAR100 and PLACES365). Training is crucial for fine-tuning SNNs for specific device properties such as latency, noise level, or quantization. We hope that this study will eventually lead to an SNN hardware implementation offering low-power inference with ANN performance on data classification tasks.
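Training such a network means backpropagating through spike times. In a toy model (an illustrative sketch with assumed dynamics, not the thesis's derivation) where a neuron's potential grows with slope w_i after each input spike at t_i, the spike time is t_out = t_max + (θ − z)/A with z = w·(t_max − t_in) and A = Σ w_i, so the Jacobian used in backpropagation is ∂t_out/∂t_in,j = w_j/A. Chaining such Jacobians across many layers scales gradients by per-layer factors, one intuition for vanishing-and-exploding gradients in spike-time networks. The finite-difference check below verifies the Jacobian for illustrative parameter values.

```python
import numpy as np

# Illustrative weights, input spike times, and threshold (assumed values).
w = np.array([0.5, -0.3, 0.4, 0.2])
t_in = np.array([0.8, 0.2, 0.5, 0.9])
t_max, theta = 1.0, 2.0
A = w.sum()                         # total slope of the potential (assumed > 0)

def spike_time(t_in):
    """Threshold-crossing time of the toy neuron once all inputs arrived."""
    z = w @ (t_max - t_in)
    return t_max + (theta - z) / A

# Central finite differences against the analytic Jacobian w / A.
eps = 1e-6
grad_fd = np.array([
    (spike_time(t_in + eps * np.eye(4)[j]) - spike_time(t_in - eps * np.eye(4)[j]))
    / (2 * eps)
    for j in range(4)
])
assert np.allclose(grad_fd, w / A, atol=1e-6)
print(grad_fd)                      # matches the analytic Jacobian w / A
```

Because each layer contributes a factor of its weights over its slope sum, a mapping that controls these per-layer factors keeps the spike-time gradients well-scaled, which is the role the particular exact mapping plays here.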

Type
doctoral thesis
DOI
10.5075/epfl-thesis-10637
Author(s)
Stanojevic, Ana  
Advisors
Gerstner, Wulfram • Wozniak, Stanislaw Andrzej
Jury
Prof. Patrick Thiran (chair); Prof. Wulfram Gerstner, Dr Stanislaw Andrzej Wozniak (thesis directors); Prof. Nicolas Flammarion, Prof. Shih-Chii Liu, Prof. Friedemann Zenke (examiners)
Date Issued
2023
Publisher
EPFL
Publisher place
Lausanne
Public defense date
2023-12-13
Thesis number
10637
Number of pages
138

Subjects
spiking neural network • temporal encoding • sparse communication • efficient data classification • multiplication-free inference • backpropagation training • time-to-first-spike encoding • deep ReLU network conversion • SNN-ReLU network equivalence • deep SNN training

EPFL units
LCN1  
Faculty
IC  
School
IINFCOM  
Doctoral School
EDIC  
Available on Infoscience
December 4, 2023
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/202507
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved