Infoscience
research article

Scaling laws and representation learning in simple hierarchical languages: Transformers versus convolutional architectures

Cagnetta, Francesco • Favero, Alessandro • Sclocchi, Antonio • Wyart, Matthieu
December 1, 2025
Physical Review E

How do neural language models acquire a language's structure when trained for next-token prediction? We address this question by deriving theoretical scaling laws for neural network performance on synthetic datasets generated by the random hierarchy model (RHM)—an ensemble of probabilistic context-free grammars designed to capture the hierarchical structure of natural language while remaining analytically tractable. Previously, we developed a theory of representation learning based on data correlations that explains how deep learning models capture the hierarchical structure of the data sequentially, one layer at a time. Here, we extend our theoretical framework to account for architectural differences. In particular, we predict and empirically validate that convolutional networks, whose structure aligns with that of the generative process through locality and weight sharing, enjoy a faster scaling of performance compared to transformer models, which rely on global self-attention mechanisms. This finding clarifies the architectural biases underlying neural scaling laws and highlights how representation learning is shaped by the interaction between model architecture and the statistical properties of data.
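The record does not reproduce the model's definition, but the abstract describes the random hierarchy model as an ensemble of probabilistic context-free grammars with hierarchical structure. As a rough illustration only, the following is a minimal Python sketch of an RHM-style generator: random production rules applied level by level down a tree. The parameters (vocabulary size v, m rules per symbol, branching factor s, tree depth) are illustrative assumptions, not necessarily the paper's exact notation.

    import random

    # Illustrative sketch of a random-hierarchy-model-style generator.
    # Parameter names (v, m, s, depth) are assumptions for this example.

    def build_rules(v, m, s, depth, rng):
        """For each level and each of the v symbols, draw m distinct
        production rules, each expanding the symbol into s lower-level
        symbols."""
        rules = []
        for _ in range(depth):
            level = {}
            for sym in range(v):
                chosen = set()
                while len(chosen) < m:
                    chosen.add(tuple(rng.randrange(v) for _ in range(s)))
                level[sym] = sorted(chosen)
            rules.append(level)
        return rules

    def sample_string(rules, v, rng):
        """Sample one leaf string: start from a random root symbol and
        apply a uniformly random production rule to every symbol at
        every level."""
        seq = [rng.randrange(v)]
        for level in rules:
            seq = [child for sym in seq for child in rng.choice(level[sym])]
        return seq

    rng = random.Random(0)
    rules = build_rules(v=8, m=2, s=2, depth=3, rng=rng)
    print(sample_string(rules, v=8, rng=rng))  # leaf string of length s**depth = 8

Because each leaf string is produced by composing local rules, correlations between tokens reflect the tree: this is the hierarchical, analytically tractable structure that the paper's scaling-law analysis exploits.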

Type
research article
DOI
10.1103/qtd6-nl8p
Scopus ID
2-s2.0-105026662020

Author(s)
Cagnetta, Francesco (Scuola Internazionale Superiore di Studi Avanzati)
Favero, Alessandro (École Polytechnique Fédérale de Lausanne)
Sclocchi, Antonio (Gatsby Computational Neuroscience Unit)
Wyart, Matthieu (Johns Hopkins University)

Date Issued
2025-12-01
Published in
Physical Review E
Volume
112
Issue
6
Article Number
065312
Editorial or Peer reviewed
REVIEWED

Written at
EPFL
EPFL units
PCSL
Funder
European Union (EU)
Grant Number
101154584

Available on Infoscience
January 13, 2026
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/257881