Supervised learning and inference of spiking neural networks with temporal coding
The way biological brains carry out advanced yet extremely energy-efficient signal processing remains both fascinating and poorly understood. It is known, however, that at least some areas of the brain perform fast and low-cost processing relying only on a small number of temporally encoded spikes. This thesis investigates supervised learning and inference in spiking neural networks (SNNs) with sparse, temporally encoded communication. We explore different setups and compare the performance of our SNNs with that of standard artificial neural networks (ANNs) on data classification tasks.
In the first setup, we consider a family of exact mappings between a single-spike network and a ReLU network. We set training aside for the moment and analyse deep SNNs with time-to-first-spike (TTFS) encoding. There exist a neural dynamics and a set of parameter constraints that guarantee an approximation-free mapping (conversion) from a ReLU network to an SNN with TTFS encoding. We find that a pretrained deep ReLU network can be replaced with our deep SNN without any performance loss on large-scale image classification tasks (CIFAR100 and PLACES365). However, we hypothesise that in many cases a deep spiking neural network still needs to be trained or fine-tuned for the specific problem at hand.
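As a rough illustration of the kind of identity such a mapping rests on, the sketch below checks numerically that one layer of non-leaky integrate-and-fire neurons with linearly rising potentials reproduces a ReLU layer. The specific constants (encoding window [0, 1], maximal spike time t_max = 2, the auxiliary ramp input, and the threshold 1 - b_i) are illustrative choices, not the exact parameter constraints derived in the thesis; we also assume each output spike occurs only after all input spikes have arrived, which is the role such constraints play in the full construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 5, 3
W = rng.normal(size=(n_out, n_in))
b = rng.normal(size=n_out)
x = rng.uniform(size=n_in)           # input values in [0, 1]

# Reference ReLU layer.
a_relu = np.maximum(0.0, W @ x + b)

# TTFS encoding: value x_j becomes a spike at time t_j = 1 - x_j,
# so all input spikes arrive within [0, 1].
t_in = 1.0 - x

# With an auxiliary ramp input arriving at t = 1 with weight 1 - sum_j W_ij,
# the potential of neuron i for t >= 1 rises as V_i(t) = (t - 1) + (W x)_i.
# A threshold of 1 - b_i then gives the closed-form spike time below;
# forcing a spike at t_max = 2 when the threshold is never reached
# implements the rectification.
t_out = np.minimum(2.0 - (W @ (1.0 - t_in) + b), 2.0)
a_snn = 2.0 - t_out                  # decode spike times back to values

assert np.allclose(a_relu, a_snn)
```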
In the second setup, we consider training a deep single-spike network using a family of exact mappings from a ReLU network. We thoroughly investigate why training deep SNNs with TTFS encoding fails and uncover an instance of the vanishing-and-exploding-gradient problem. We find that a particular exact mapping solves this problem and yields an SNN with learning trajectories equivalent to those of the ReLU network on large image classification tasks (CIFAR100 and PLACES365). Training is crucial for adapting SNNs to specific device properties such as low latency, noise level, or quantization. We hope that this study will eventually lead to an SNN hardware implementation offering low-power inference with ANN performance on data classification tasks.
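To give a flavour of what gradient equivalence means here, the sketch below backpropagates through the closed-form spike times of the toy layer above and checks that the resulting weight gradient matches the ReLU gradient exactly. This is only a single-layer check under the illustrative mapping from the previous sketch, not the thesis's analysis of deep networks, where the choice of mapping determines whether such equivalence holds across layers.

```python
import torch

torch.manual_seed(0)
W = torch.randn(3, 5, requires_grad=True)
b = torch.randn(3)
x = torch.rand(5)

# Gradient through the standard ReLU layer.
torch.relu(W @ x + b).pow(2).sum().backward()
g_relu = W.grad.clone()
W.grad = None

# Gradient through the spike-time formulation of the same layer:
# t_i = min(2 - (W x + b)_i, 2), decoded as a_i = 2 - t_i.
# The clamp at t_max = 2 takes over the role of the rectification,
# so backpropagating through spike times yields the same weight gradient.
t_out = torch.clamp(2.0 - (W @ x + b), max=2.0)
(2.0 - t_out).pow(2).sum().backward()

assert torch.allclose(g_relu, W.grad)
```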