Infoscience (EPFL, École polytechnique fédérale de Lausanne)

research article

Driving and suppressing the human language network using large language models

Tuckute, Greta • Sathe, Aalok • Srikant, Shashank • et al.
January 3, 2024
Nature Human Behaviour

Transformer models such as GPT generate human-like language and are predictive of human brain responses to language. Here, using functional-MRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. We then use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also non-invasively control neural activity in higher-level cortical areas, such as the language network.

Tuckute et al. use a machine learning approach to identify sentences that either maximally or minimally activate the human language processing network.
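The record contains no code, but as an illustration of the kind of sentence-level encoding model the abstract describes, here is a minimal sketch in Python. It is not the authors' pipeline: the ridge readout, the random placeholder embeddings and responses, and the top-10 selection cutoff are all assumptions standing in for the GPT-derived features and fMRI measurements used in the study.

    # Minimal sketch (not the authors' code) of a sentence-level encoding model:
    # fit a regularized linear readout from sentence embeddings to a scalar brain
    # response, then rank new candidate sentences by predicted response to pick
    # "drive" (maximally activating) and "suppress" (minimally activating) sets.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)

    # Placeholder data: in the study, X would hold GPT-derived embeddings for the
    # ~1,000 recorded sentences and y the measured language-network fMRI response.
    n_train, n_candidates, dim = 1000, 500, 256
    X_train = rng.standard_normal((n_train, dim))
    y_train = X_train @ rng.standard_normal(dim) / np.sqrt(dim) + rng.standard_normal(n_train)

    # Regularized linear encoding model with the ridge penalty chosen by CV.
    alphas = np.logspace(-2, 4, 13)
    encoder = RidgeCV(alphas=alphas).fit(X_train, y_train)

    # Cross-validated predictions estimate how well the model generalizes.
    cv_pred = cross_val_predict(RidgeCV(alphas=alphas), X_train, y_train, cv=5)
    print("held-out correlation:", np.corrcoef(cv_pred, y_train)[0, 1])

    # Score unseen candidate sentences and keep the extremes as candidate stimuli.
    X_cand = rng.standard_normal((n_candidates, dim))
    scores = encoder.predict(X_cand)
    drive_idx = np.argsort(scores)[-10:]     # predicted to drive the network
    suppress_idx = np.argsort(scores)[:10]   # predicted to suppress the network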

Details
Type
research article
DOI
10.1038/s41562-023-01783-7
Web of Science ID
WOS:001135860200001
Author(s)
Tuckute, Greta
Sathe, Aalok
Srikant, Shashank
Taliaferro, Maya
Wang, Mingye
Schrimpf, Martin  
Kay, Kendrick
Fedorenko, Evelina
Date Issued
2024-01-03
Publisher
Nature Portfolio
Published in
Nature Human Behaviour
Subjects
Life Sciences & Biomedicine • Neural Responses • Reveals • Brain • fMRI • Systems • Comprehension • Principles • Time
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
UPSCHRIMPF1
Funder / Grant Number
American Association of University Women (American Association of University Women, Inc.)
Amazon Fellowship from the Science Hub
American Association of University Women
R01-DC016607
Available on Infoscience
February 20, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/204881
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.