review article

Combining hypothesis- and data-driven neuroscience modeling in FAIR workflows

Eriksson, Olivia • Bhalla, Upinder Singh • Blackwell, Kim T. • et al. (full author list below)
July 6, 2022
eLife

Modeling in neuroscience occurs at the intersection of different points of view and approaches. Typically, hypothesis-driven modeling brings a question into focus so that a model is constructed to investigate a specific hypothesis about how the system works or why certain phenomena are observed. Data-driven modeling, on the other hand, follows a more unbiased approach, with model construction informed by the computationally intensive use of data. At the same time, researchers employ models at different biological scales and at different levels of abstraction. Combining these models while validating them against experimental data increases understanding of the multiscale brain. However, a lack of interoperability, transparency, and reusability of both models and the workflows used to construct them creates barriers for the integration of models representing different biological scales and built using different modeling philosophies. We argue that the same imperatives that drive resources and policy for data - such as the FAIR (Findable, Accessible, Interoperable, Reusable) principles - also support the integration of different modeling approaches. The FAIR principles require that data be shared in formats that are Findable, Accessible, Interoperable, and Reusable. Applying these principles to models and modeling workflows, as well as the data used to constrain and validate them, would allow researchers to find, reuse, question, validate, and extend published models, regardless of whether they are implemented phenomenologically or mechanistically, as a few equations or as a multiscale, hierarchical system. To illustrate these ideas, we use a classical synaptic plasticity model, the Bienenstock-Cooper-Munro rule, as an example due to its long history, different levels of abstraction, and implementation at many scales.
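The abstract names the Bienenstock-Cooper-Munro (BCM) rule as its running example but gives no formulas, so the following is a minimal sketch of the rule in its textbook form: dw/dt = eta * x * y * (y - theta), where the modification threshold theta tracks a running average of y^2. The linear-neuron readout, the Euler time step, and all parameter names below are illustrative assumptions, not details taken from the article.

```python
import numpy as np

def bcm_step(w, x, theta, eta=1e-3, tau_theta=100.0, dt=1.0):
    """One Euler step of the classical BCM plasticity rule (illustrative sketch).

    w     : synaptic weight vector
    x     : presynaptic activity vector
    theta : sliding modification threshold (scalar)
    """
    y = float(np.dot(w, x))                   # postsynaptic activity, assuming a linear neuron
    phi = y * (y - theta)                     # BCM nonlinearity: depression below theta, potentiation above
    w = w + dt * eta * phi * x                # weight change scales with presynaptic activity
    theta = theta + dt * (y**2 - theta) / tau_theta  # threshold tracks a running average of y^2
    return w, theta

# Toy usage: drive one neuron with random input patterns
rng = np.random.default_rng(0)
w, theta = 0.1 * rng.random(10), 0.1
for _ in range(5000):
    w, theta = bcm_step(w, rng.random(10), theta)
```

The sliding threshold is what keeps the abstract rule stable; more mechanistic, multiscale implementations of the kind discussed in the review replace these phenomenological variables with biochemically detailed quantities.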

Details
Type
review article
DOI
10.7554/eLife.69013
Web of Science ID
WOS:000822556000001
Author(s)
Eriksson, Olivia
Bhalla, Upinder Singh
Blackwell, Kim T.
Crook, Sharon M.
Keller, Daniel
Kramer, Andrei
Linne, Marja-Leena
Saudargiene, Ausra
Wade, Rebecca C.
Kotaleski, Jeanette Hellgren
Date Issued
2022-07-06
Published in
eLife
Volume
11
Article Number
e69013
Subjects
Biology • Life Sciences & Biomedicine - Other Topics • fair • modeling workflows • parameter estimation • mathematical modeling • uncertainty quantification • synaptic plasticity • large-scale brain • systems biology • parameter-estimation • sabio-rk • network • simulation • resource • receptor • neurons

Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
BBP-CORE
Available on Infoscience
July 18, 2022
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/189411