Abstract

The brain is a complex biological system composed of a multitude of microscopic processes, which together give rise to the computational abilities observed in everyday behavior. Neuronal modeling, consisting of models of single neurons and neuronal networks at varying levels of biological detail, can fill in the gaps that are currently hard to constrain experimentally, and provide mechanistic explanations of how these computations might arise. In this thesis, I present two parallel lines of research on neuronal modeling, situated at different levels of biological detail. First, I assess the provenance of voltage-gated ion channel models in an integrative meta-analysis that investigates a backlog of nearly 50 years of published research. To cope with the ever-increasing volume of research produced in the field of neuroscience, we need to develop methods for the systematic assessment and comparison of published work. As we demonstrate, neuronal models offer the intriguing possibility of performing automated quantitative analyses across studies, via standardized simulated experiments. We developed protocols for the quantitative comparison of voltage-gated ion channels, and applied them to a large body of published models, allowing us to assess the variety and temporal development of different models for the same ion channels over years of research. Beyond a systematic classification of the existing body of research, made available in an online platform, we show that our approach extends to large-scale comparisons of ion channel models to experimental data, thereby facilitating field-wide standardization of experimentally constrained modeling. Second, I investigate neuronal models of working memory (WM). How can cortical networks bridge the short time scales of their microscopic components, which operate on the order of milliseconds, to the behaviorally relevant time scales of seconds observed in WM experiments?
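As a toy illustration of the idea of standardized simulated experiments (not the actual protocols developed in the thesis), two ion channel models can be compared by evaluating their steady-state activation curves over a shared voltage grid and computing a distance between them. The Boltzmann parameterization, the parameter values, and the RMS metric below are all illustrative assumptions.

```python
import numpy as np

def boltzmann_activation(v, v_half, k):
    """Steady-state activation m_inf(V) = 1 / (1 + exp(-(V - v_half)/k)).

    A common phenomenological form; v_half (mV) is the half-activation
    voltage, k (mV) the slope factor. Values here are hypothetical.
    """
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def channel_distance(params_a, params_b, v_grid):
    """RMS difference between two models' activation curves,
    evaluated over the same standardized voltage protocol."""
    a = boltzmann_activation(v_grid, *params_a)
    b = boltzmann_activation(v_grid, *params_b)
    return np.sqrt(np.mean((a - b) ** 2))

# Shared voltage grid playing the role of a standardized voltage-step protocol.
v = np.linspace(-100.0, 50.0, 151)  # mV

# Two hypothetical models of "the same" channel, differing in fitted parameters.
d = channel_distance((-35.0, 6.0), (-30.0, 8.0), v)
```

Because every model is probed with the identical protocol, the resulting distances are directly comparable across publications, which is what makes large-scale automated meta-analysis possible.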
I consider here a candidate model: continuous attractor networks. These can implement WM for a continuum of possible spatial locations over several seconds and have been proposed for the organization of prefrontal cortical networks. I first present a novel method for the efficient prediction of the network-wide steady states from the underlying microscopic network properties. The method can be applied to predict and tune the "bump" shapes of continuous attractors implemented in networks of spiking neuron models connected by nonlinear synapses, which we demonstrate for saturating synapses involving NMDA receptors. In a second part, I investigate the computational role of short-term synaptic plasticity as a synaptic nonlinearity. Continuous attractor models are sensitive to the inevitable variability of biological neurons: variable neuronal firing and heterogeneous networks decrease the time that memories are accurately retained, eventually leading to a loss of memory functionality on behaviorally relevant time scales. In theory and simulations, I show that short-term plasticity can control the time scale of memory retention, with facilitation and depression playing antagonistic roles in controlling the drift and diffusion of locations in memory. Finally, we place quantitative constraints on the combination of synaptic and network parameters under which continuous attractor networks can implement reliable WM in cortical settings.
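The mechanism can be sketched with a minimal rate-based ring attractor: neurons on a ring with local excitation and broad inhibition sustain a localized "bump" of activity whose position encodes the remembered location after the cue is removed. This is a deliberately simplified sketch, not the spiking NMDA model of the thesis; all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_ring_attractor(n=128, tau=0.01, dt=0.001, steps=3000,
                            j_exc=12.0, j_inh=2.0, sigma=0.5, cue_pos=0.0):
    """Rate-based ring attractor: returns preferred angles and final rates.

    A transient cue at cue_pos seeds the bump; recurrent dynamics must
    then hold it during the delay. Parameters are hypothetical.
    """
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Translation-invariant connectivity: local Gaussian excitation
    # minus uniform inhibition (distances wrapped onto the ring).
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
    w = (j_exc * np.exp(-d**2 / (2 * sigma**2)) - j_inh) / n
    # Cue input centered on the to-be-remembered location.
    dc = np.angle(np.exp(1j * (theta - cue_pos)))
    cue = 2.0 * np.exp(-dc**2 / (2 * sigma**2))
    r = np.zeros(n)
    for t in range(steps):
        inp = w @ r + (cue if t < 200 else 0.0)  # cue removed after 0.2 s
        # Saturating, rectified rate dynamics (Euler integration).
        r += dt / tau * (-r + np.tanh(np.maximum(inp, 0.0)))
    return theta, r

theta, r = simulate_ring_attractor()
# Decode the remembered location as the population-vector angle.
decoded = np.angle(np.sum(r * np.exp(1j * theta)))
```

Because the connectivity is translation-invariant, bumps at all locations are equally stable in the idealized network; neuronal variability and heterogeneity break this symmetry, producing the drift and diffusion of the stored location that the second part of the thesis analyzes.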
