Fundamental limits of learning in sequence multi-index models and deep attention networks: high-dimensional asymptotics and sharp thresholds
In this manuscript, we study the learning of deep attention neural networks, defined as the composition of multiple self-attention layers with tied and low-rank weights. We first establish a mapping of such models to sequence multi-index models, a generalization of the widely studied multi-index model to sequential covariates, for which we derive a number of general results. In the context of Bayes-optimal learning, in the limit of large dimension D and proportionally large number of samples N, we derive a sharp asymptotic characterization of the optimal performance, as well as of the performance of the best-known polynomial-time algorithm for this setting, namely approximate message-passing, and we characterize sharp thresholds on the minimal sample complexity required for better-than-random prediction performance. Our analysis uncovers, in particular, how the different layers are learned sequentially. Finally, we discuss how this sequential learning can also be observed in a realistic setup.
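The abstract does not spell out the exact attention parametrization analyzed in the paper; as a rough illustration only, the sketch below (NumPy, with hypothetical function names) composes single-head self-attention layers in which queries and keys are tied through one shared low-rank matrix W of rank r much smaller than the token dimension D, and the values are taken to be the raw tokens. This is a minimal sketch under those assumptions, not the paper's definition.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def tied_lowrank_attention(X, W):
    """One self-attention layer with tied, low-rank weights (illustrative).

    X : (L, D) sequence of L tokens of dimension D
    W : (D, r) low-rank matrix shared by queries and keys, r << D
    Returns the attended sequence, shape (L, D).
    """
    Q = X @ W                                # queries, (L, r)
    K = X @ W                                # keys: tied to queries via the same W
    scores = Q @ K.T / np.sqrt(W.shape[1])   # (L, L) attention scores
    A = softmax(scores, axis=-1)             # row-wise attention weights
    return A @ X                             # values taken as the raw tokens

def deep_attention(X, weights):
    """Compose several tied low-rank attention layers."""
    for W in weights:
        X = tied_lowrank_attention(X, W)
    return X

# Toy usage: L tokens of dimension D, two layers of rank r.
rng = np.random.default_rng(0)
L, D, r = 8, 64, 2
X = rng.standard_normal((L, D))
weights = [rng.standard_normal((D, r)) / np.sqrt(D) for _ in range(2)]
Y = deep_attention(X, weights)
print(Y.shape)  # (8, 64)
```

In this hypothetical form, the network's trainable content is only the low-rank matrices W, which is what makes a mapping to a (sequence) multi-index structure plausible: the output depends on the data only through the low-dimensional projections X @ W.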
Files:
6949_Fundamental_limits_of_lea.pdf: Main Document, Published version, openaccess, N/A, 890.84 KB, Adobe PDF, checksum bcfcfec874f68c503b45de0493f03282
2502.00901v1.pdf: Main Document, Submitted version (Preprint), openaccess, N/A, 1.21 MB, Adobe PDF, checksum ef3c01f45fe83ebc5365aa8e7942f94d