Synaptic plasticity across different time scales and its functional implications

Humans and animals learn by modifying the synaptic strength between neurons, a phenomenon known as synaptic plasticity. These changes can be induced by rather short stimuli (lasting, for instance, only a few seconds), yet, to be useful for long-term memory, they must remain stable for months or years. Experimentalists study synaptic plasticity with a vast variety of protocols. In this thesis we focus on protocols that fall into two main categories: (i) those that induce synaptic modifications lasting only a few hours (the "early phase" of plasticity), and (ii) those that allow synapses to undergo a sequence of steps transforming the rapid changes of the "early phase" into a stable memory trace (the "late phase" of plasticity). The goal of this thesis is to better understand synaptic plasticity across these different phases, early and late, by creating compact mathematical models of the underlying plasticity mechanisms. This approach allows for a synthetic view of the field as well as the exploration of the functional consequences of learning.

To this end, we propose a model for the induction of synaptic plasticity that depends on the presynaptic spike timing and nonlinearly on the postsynaptic voltage. The model reproduces a broad range of experimental protocols, such as voltage-clamp and spike-timing experiments. Since voltage is a key element of the model, we describe neuronal activity with a compact neuron model that faithfully reproduces the voltage time course of pyramidal neurons. This induction model is then combined with a trigger process for protein synthesis and a final stabilization mechanism in order to describe the "late phase". In this combined form, it explains the experimental phenomena known as synaptic tagging and makes testable predictions.
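The flavor of such a voltage-based induction rule can be sketched as a minimal, illustrative update in which depression is gated by a presynaptic spike together with a low voltage threshold, while potentiation requires a presynaptic trace and a voltage above a higher threshold. All function names, thresholds, and amplitudes below are hypothetical placeholders for exposition, not the actual model of the thesis.

```python
def plasticity_update(w, pre_spike, u, u_bar, x_bar,
                      theta_minus=-60.0, theta_plus=-45.0,
                      a_ltd=1e-4, a_ltp=1e-3, dt=1.0):
    """One Euler step of a simplified voltage-based plasticity rule.

    w         : current synaptic weight
    pre_spike : 1 if the presynaptic neuron fired in this time step, else 0
    u         : instantaneous postsynaptic voltage (mV)
    u_bar     : low-pass-filtered postsynaptic voltage (mV)
    x_bar     : low-pass-filtered presynaptic spike train (trace)
    All parameter values are illustrative placeholders.
    """
    # Depression: triggered by a presynaptic spike when the filtered
    # voltage exceeds a low threshold.
    ltd = a_ltd * pre_spike * max(u_bar - theta_minus, 0.0)
    # Potentiation: requires the presynaptic trace together with a
    # voltage above a higher threshold, giving the nonlinear
    # dependence on the postsynaptic voltage.
    ltp = a_ltp * x_bar * max(u - theta_plus, 0.0) * max(u_bar - theta_minus, 0.0)
    return w + dt * (ltp - ltd)
```

With these placeholder thresholds, a strongly depolarized postsynaptic neuron paired with recent presynaptic activity potentiates the synapse, a presynaptic spike at moderate depolarization depresses it, and nothing changes when the voltage stays below both thresholds.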
A study of the functional consequences of the induction model reveals input selectivity, the computation of independent components, and a tight relation between connectivity and neural coding. In parallel, a top-down approach for finding independent components is used to derive a rate-based learning rule that shows structural similarities to the induction model. Such a unified model across different time scales, allowing the stabilization of synapses, is crucial for understanding learning and memory in animals and humans, and is a necessary ingredient for any large-scale model of the brain.
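A rate-based rule that extracts independent components can be sketched, under strong simplifying assumptions, as a nonlinear Hebbian update followed by weight normalization; the tanh nonlinearity and the learning rate here are illustrative choices, not the rule derived in the thesis.

```python
import numpy as np

def ica_hebbian_step(w, x, eta=0.01):
    """One presentation of a nonlinear Hebbian rule with normalization.

    w : weight vector of a single rate-based unit
    x : input rate vector for one pattern presentation
    The tanh nonlinearity and learning rate are illustrative placeholders.
    """
    y = np.tanh(w @ x)            # nonlinear postsynaptic activity
    w = w + eta * y * x           # Hebbian update: presynaptic * postsynaptic
    return w / np.linalg.norm(w)  # normalization keeps the weights bounded
```

Repeatedly presenting whitened input patterns drives the weight vector toward a direction of maximally non-Gaussian output, which is the defining property of an independent component in ICA-style learning.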
