doctoral thesis

From event-based surprise to lifelong learning. A journey in the timescales of adaptation

Barry, Martin Louis Lucien Rémy  
2023

Humans and animals constantly adapt to their environment over the course of their lives. This thesis seeks to integrate the various timescales of adaptation, from the adaptation of synaptic connections between spiking neurons (milliseconds), through rapid behavioral adjustments such as reacting to unexpectedly encountering someone on the street (seconds), to lifelong learning such as acquiring a language (years).


Our work comprises two primary narratives: the development of a bio-inspired Spiking Neural Network (SpikeSuM) for surprise-based learning, and the creation of an artificial continual learning model (GateON).


To examine the role of surprise in rapid adaptation to unforeseen events, we developed SpikeSuM, a biologically plausible spiking network model that uses three-factor Hebbian rules to adapt quickly to unexpected events. Our findings indicate that a brain-like surprise signal can be extracted from the neuronal activity of a prediction-error-minimizing network, and that SpikeSuM achieves Bayesian performance in online adaptation tasks. However, rapid adaptation may come at the cost of unwanted memory loss, or "catastrophic forgetting". To address this issue, we developed a context-driven prediction circuit that enables SpikeSuM not only to learn quickly in new contexts, but also to retain memories of previous ones.
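The following is a minimal, hypothetical sketch (in Python) of how a surprise signal could act as the third factor in a Hebbian weight update. The function names, the tanh-based surprise measure, and the learning rate are illustrative assumptions, not the actual SpikeSuM equations.

# Illustrative sketch only: a surprise-modulated three-factor Hebbian update.
# All names and constants here are hypothetical, not taken from SpikeSuM.
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 20, 10
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))    # synaptic weights
eta = 0.05                                         # base learning rate

def surprise(prediction_error, scale=1.0):
    """Map a prediction-error magnitude to a bounded modulation factor."""
    return np.tanh(scale * np.abs(prediction_error))

def three_factor_update(w, pre, post, prediction_error):
    """Three-factor rule: pre activity x post activity x global surprise signal."""
    m = surprise(prediction_error)                 # third factor (neuromodulator-like)
    return w + eta * m * np.outer(post, pre)       # Hebbian term gated by surprise

pre = rng.binomial(1, 0.3, size=n_pre).astype(float)     # presynaptic spikes
post = rng.binomial(1, 0.2, size=n_post).astype(float)   # postsynaptic spikes
w = three_factor_update(w, pre, post, prediction_error=0.8)

In this toy version, weights change strongly when the surprise factor is large and barely at all when predictions are accurate, which is the intuition behind surprise-gated fast adaptation.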


Our research resulted in the formulation of key hypotheses for context detection in the brain: surprise-driven context-boundary detection using dis-inhibitory networks, and locally inhibited neuromodulation that prevents unwanted learning. The context-driven approach of SpikeSuM inspired the development of our GateON method.
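As a toy illustration of the first hypothesis, a context boundary can be declared whenever the current surprise greatly exceeds a running baseline. The smoothing factor and threshold below are assumed values chosen for the example, not parameters from the thesis.

# Hedged sketch: flag a context boundary when surprise jumps above a
# running baseline. Threshold and smoothing are hypothetical choices.
def detect_context_boundary(surprise_trace, alpha=0.1, threshold=2.0):
    """Return time steps where surprise exceeds `threshold` times its baseline."""
    baseline, boundaries = None, []
    for t, s in enumerate(surprise_trace):
        if baseline is None:
            baseline = s
            continue
        if s > threshold * max(baseline, 1e-8):
            boundaries.append(t)
        baseline = (1 - alpha) * baseline + alpha * s   # exponential moving average
    return boundaries

# Example: a step increase in surprise at t = 50 marks a possible new context.
trace = [0.1] * 50 + [1.0] + [0.3] * 49
print(detect_context_boundary(trace))   # -> [50]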


GateON is an artificial neural network method that relies on context-based prediction and on the ability of individual neurons to retain information in order to solve continual learning problems. The method is designed to overcome the main limitation of SpikeSuM, namely a modular approach that prevents generalization across contexts. GateON achieves state-of-the-art results on large-scale continual learning problems, including continual learning across 100 MNIST tasks and in large language models. Furthermore, GateON's potential for refinement through local learning rules suggests a path toward a more biologically plausible implementation, making it a promising direction for future research.
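A minimal sketch of the underlying idea, assuming a context-gated network in which frequently used units lose plasticity so that new tasks do not overwrite old knowledge. All variable names, shapes, and the relevance heuristic are illustrative assumptions rather than the published GateON algorithm.

# Hypothetical sketch of context gating with protected, low-plasticity units.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_ctx = 8, 16, 4

W = rng.normal(0, 0.1, size=(n_hidden, n_in))      # input-to-hidden weights
G = rng.uniform(0.5, 1.0, size=(n_ctx, n_hidden))  # per-context gates
relevance = np.zeros(n_hidden)                     # how "used" each unit is

def forward(x, ctx):
    h = np.maximum(W @ x, 0.0)       # ReLU hidden activity
    return G[ctx] * h                # context-dependent gating

def update(x, ctx, grad_h, lr=0.01):
    """Gradient-like update where plasticity shrinks for high-relevance units."""
    global W, relevance
    plasticity = 1.0 / (1.0 + relevance)            # protect well-used units
    W += lr * (plasticity * G[ctx] * grad_h)[:, None] * x[None, :]
    relevance += np.abs(G[ctx] * grad_h)            # accumulate usage

x = rng.normal(size=n_in)
y = forward(x, ctx=0)
update(x, ctx=0, grad_h=rng.normal(size=n_hidden))

The gates select which units participate in a given context, while the accumulated relevance gradually freezes units that earlier contexts depended on, which is one simple way to trade off fast learning against forgetting.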


By synthesizing these approaches and insights, our work aims to advance the fields of neuroscience and machine learning and to contribute to a deeper understanding of the brain mechanisms underlying surprise, context-based learning, and continual learning.

Files
  • Name: EPFL_TH9923.pdf
  • Type: N/a
  • Access type: openaccess
  • License Condition: copyright
  • Size: 7.82 MB
  • Format: Adobe PDF
  • Checksum (MD5): f177aa9998c59f0b499883020d44d0e7
