Infoscience
research article

A dynamic attractor network model of memory formation, reinforcement and forgetting

Boscaglia, Marta • Gastaldi, Chiara • Gerstner, Wulfram • Quiroga, Rodrigo Quian
December 1, 2023
PLOS Computational Biology

Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, here we develop a modelling approach to provide a mechanistic hypothesis of how hippocampal neural assemblies evolve differently depending on the frequency of presentation of the stimuli. For this, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, thus creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently of each other (i.e. creating orthogonal representations that do not share neurons, thus avoiding interference). Importantly, connections between neurons of assemblies that are not further stimulated become labile, so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.

Author summary: Experimental evidence suggests that familiar items are represented by larger hippocampal neuronal assemblies than less familiar ones. In line with this finding, our computational model shows that the size of memory assemblies depends on the frequency of their recall (i.e. the higher the frequency, the larger the assembly), which can be explained by the interplay of online learning and background firing activity. Furthermore, we find that assemblies representing uncorrelated memories increase their sizes while remaining orthogonal, in line with findings from single-cell recordings. To capture these empirical findings, we propose going beyond standard attractor network memory models and instead using a dynamic model to study memory coding.
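To make the mechanism concrete, the sketch below (Python with NumPy) shows a rate attractor network combining an online Hebbian rule, background firing noise, neural adaptation and a heterosynaptic decay term, so that a frequently re-presented assembly strengthens while a rarely presented one fades and remains orthogonal. All parameter values, the nonlinearity, the specific form of the decay term and the stimulation schedule are illustrative assumptions for exposition, not the equations published in the article.

import numpy as np

rng = np.random.default_rng(0)

N = 200        # number of rate units
dt = 1.0       # integration step (arbitrary units)
tau_r = 10.0   # firing-rate time constant
tau_a = 100.0  # adaptation time constant
eta = 0.005    # Hebbian learning rate
lam = 1e-3     # heterosynaptic decay rate
bg = 0.05      # amplitude of background firing noise

W = np.zeros((N, N))   # recurrent weights, learned online
r = np.zeros(N)        # firing rates
a = np.zeros(N)        # adaptation variables

def phi(x):
    # saturating, non-negative rate nonlinearity (illustrative choice)
    return np.tanh(np.maximum(x, 0.0))

# two non-overlapping stimulus patterns (candidate orthogonal assemblies)
p1 = np.zeros(N); p1[:20] = 1.0
p2 = np.zeros(N); p2[20:40] = 1.0

def step(stim):
    global r, a, W
    noise = bg * rng.random(N)                    # background firing activity
    r += dt / tau_r * (-r + phi(W @ r + stim + noise - a))
    a += dt / tau_a * (-a + r)                    # slow neural adaptation
    # online Hebbian growth plus activity-dependent heterosynaptic decay:
    # synapses touching currently active neurons shrink unless co-activation
    # keeps reinforcing them
    W += dt * (eta * np.outer(r, r)
               - lam * W * np.maximum(r[:, None], r[None, :]))
    np.fill_diagonal(W, 0.0)

for t in range(3000):
    if t % 300 < 20:
        step(p1)               # pattern 1 is reinforced frequently
    elif 600 <= t < 620:
        step(p2)               # pattern 2 is presented only once
    else:
        step(np.zeros(N))      # free dynamics with background noise only

print("within-assembly weight, frequent pattern:", W[:20, :20].mean())
print("within-assembly weight, rare pattern:    ", W[20:40, 20:40].mean())
print("cross-assembly weight (orthogonality):   ", W[:20, 20:40].mean())

Run as-is, the frequently stimulated assembly accumulates stronger within-assembly weights than the rarely stimulated one, while cross-assembly weights stay near zero; the decay term makes unreinforced connections labile, loosely mirroring the forgetting mechanism described above.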

Details
Type
research article
DOI
10.1371/journal.pcbi.1011727
Web of Science ID
WOS:001129806900005
Author(s)
Boscaglia, Marta
Gastaldi, Chiara
Gerstner, Wulfram
Quiroga, Rodrigo Quian
Date Issued
2023-12-01
Publisher
Public Library Science
Published in
PLOS Computational Biology
Volume
19
Issue
12
Article Number
e1011727
Subjects
Life Sciences & Biomedicine • Neural-Networks • Synaptic Plasticity • Self-Reference • Neurons • Representation • Hippocampus • Mechanisms • Storage • Sleep

Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
LCN
Funder and Grant Number
  • Biotechnology and Biological Sciences Research Council: BB/T001291/1
  • Swiss National Science Foundation: 200020_184615
  • European Union Horizon 2020 Framework Program: 785907
Available on Infoscience
February 20, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/204813