Thesis

Neural assemblies as core elements for modeling neural networks in the brain

How does the brain process and memorize information? We all know that the neuron (also known as the nerve cell) is the processing unit of the brain. But how do neurons work together in networks? The connectivity structure of neural networks plays an important role in information processing, so it is worthwhile to investigate how such networks can be modeled. Experiments yield different kinds of datasets (ranging from pair-wise connectivity to the membrane potentials of individual neurons) and provide insight into neuronal activity. However, due to the technical limitations of experiments and the complexity and variety of neural architectures, experimental datasets do not yield a network model on their own; roughly speaking, the data alone are not sufficient for modeling neural networks. Therefore, in addition to these datasets, we have to rely on assumptions, hand-tuned features, parameter tuning and heuristic methods. In this thesis, we present several models of neural networks that reproduce behaviors observed in the mammalian brain and in cell cultures, e.g., up-state/down-state oscillations, the distinct stimulus-evoked responses of cortical layers, activity propagation with tunable speed, and several activity patterns of the mouse barrel cortex. An element embedded in all of these models is a network feature called a neural assembly. A neural assembly is a group (also called a population) of neurons with dense recurrent connectivity and strong internal synaptic weights. We study the dynamics of neural assemblies using analytical approaches and computer simulations, and we show that network models containing assemblies exhibit dynamics similar to the activity observed in the brain.
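To make the notion of a neural assembly concrete, the following is a minimal sketch (not code from the thesis; all parameter values are illustrative assumptions) of a random weight matrix in which neurons belonging to the same assembly are connected more densely and with stronger synaptic weights than the sparse, weak background connectivity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the thesis):
n_neurons, n_assemblies = 200, 4
assembly = rng.integers(0, n_assemblies, size=n_neurons)  # assembly label per neuron

p_bg, w_bg = 0.05, 0.1   # background: sparse connectivity, weak weights
p_in, w_in = 0.50, 1.0   # within an assembly: dense recurrent connectivity, strong weights

same = assembly[:, None] == assembly[None, :]      # True where pre/post share an assembly
p = np.where(same, p_in, p_bg)                     # connection probability per neuron pair
w = np.where(same, w_in, w_bg)                     # synaptic weight if a connection exists
W = (rng.random((n_neurons, n_neurons)) < p) * w   # weight matrix (rows: post, cols: pre)
np.fill_diagonal(W, 0.0)                           # no self-connections

intra = W[same & (W > 0)].mean()    # mean weight of within-assembly connections
inter = W[~same & (W > 0)].mean()   # mean weight of between-assembly connections
```

In such a matrix, the within-assembly blocks along the (permuted) diagonal are visibly denser and heavier than the rest, which is the structural feature the models in the thesis exploit.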
