Chizat, Lenaic; Colombo, Maria; Fernandez-Real, Xavier; Figalli, Alessio
Infinite-width limit of deep linear neural networks
Published: 2024-05-06 (record deposited 2024-05-16)
DOI: 10.1002/cpa.22200
Handle: https://infoscience.epfl.ch/handle/20.500.14299/207996
Web of Science: WOS:001214248200001
Subject: Physical Sciences
Type: text::journal::journal article::research article

Abstract: This paper studies the infinite-width limit of deep linear neural networks (NNs) initialized with random parameters. We show that, as the number of parameters diverges, the training dynamics converge (in a precise sense) to the dynamics of gradient descent on an infinitely wide deterministic linear NN. Moreover, even though the weights remain random, we obtain their precise law along the training dynamics and prove a quantitative convergence result for the linear predictor in terms of the number of parameters. Finally, we study the continuous-time limit obtained for infinitely wide linear NNs and show that the linear predictors of the NN converge at an exponential rate to the minimal $\ell_2$-norm minimizer of the risk.
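
To illustrate the statement in the abstract, the following is a minimal numerical sketch written for this record, not the authors' code: it trains a finite-width, randomly initialized deep linear network by gradient descent on the empirical squared risk and compares its end-to-end linear predictor with the minimal $\ell_2$-norm minimizer. The width, depth, initialization scale, learning rate, and step count are illustrative assumptions and do not reproduce the paper's parameterization or scaling of the infinite-width limit.

```python
# Illustrative sketch only (not the authors' code, scaling, or initialization):
# train a deep *linear* network by gradient descent and compare its end-to-end
# linear predictor with the minimal l2-norm minimizer of the empirical risk.
import numpy as np

rng = np.random.default_rng(0)
d, n, width, depth = 5, 3, 128, 3        # underdetermined problem: n < d
X = rng.normal(size=(n, d))
y = rng.normal(size=(n, 1))

# Minimal l2-norm minimizer of the empirical risk (pseudo-inverse solution).
w_min = np.linalg.pinv(X) @ y

# Small random initialization (the 0.3 scale is an arbitrary illustrative choice).
dims = [d] + [width] * (depth - 1) + [1]
W = [0.3 * rng.normal(size=(dims[k], dims[k + 1])) / np.sqrt(dims[k])
     for k in range(depth)]

def chain(mats, dim):
    """Ordered matrix product, starting from the identity of size `dim`."""
    P = np.eye(dim)
    for M in mats:
        P = P @ M
    return P

lr, steps = 0.02, 20000
for _ in range(steps):
    P = chain(W, d)                      # end-to-end linear predictor, shape (d, 1)
    grad_P = X.T @ (X @ P - y) / n       # gradient of the squared risk w.r.t. P
    # Chain rule: the gradient for layer k uses the products to its left and right.
    W = [W[k] - lr * chain(W[:k], d).T @ grad_P @ chain(W[k + 1:], dims[k + 1]).T
         for k in range(depth)]

# For wide networks and small initialization, this distance is expected to be
# small, qualitatively matching the convergence toward the min-l2-norm minimizer.
print("||trained predictor - min-norm solution|| =",
      np.linalg.norm(chain(W, d) - w_min))
```

This finite-width run can only approximate the infinite-width statement proved in the paper; the reported distance shrinks as the width grows and the initialization scale decreases, but it is not expected to vanish at any fixed width.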