Abstract

Spiking Neuron Networks (SNNs) are often referred to as the third generation of neural networks. They derive their strength and interest from an accurate modelling of synaptic interactions between neurons, taking into account the times of spike emission. SNNs surpass the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity for memorizing and a strong ability for fast adaptation. Today, the main challenge is to discover efficient learning rules that take advantage of the specific features of SNNs while keeping the nice properties (general-purpose, easy-to-use, available simulators, etc.) of current connectionist models (such as MLP, RBF or SVM). The present survey relates the history of the "spiking neuron" and summarizes the models of neurons and networks most in use today, in Section 1. The computational power of SNNs is addressed in Section 2, and the problem of learning in networks of spiking neurons is tackled in Section 3, with insights into the approaches currently explored for solving it. Section 4 reviews implementation tricks and discusses several simulation frameworks. Examples of application domains are proposed in Section 5, mainly in speech processing and computer vision, emphasizing the temporal aspect of pattern recognition by SNNs.

Keywords: Spiking neurons, Spiking neuron networks, Pulsed neural networks, Synaptic plasticity, STDP.
