Rigorous Dynamical Mean-Field Theory for Stochastic Gradient Descent Methods
We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods that learn an estimator (e.g., an M-estimator or a shallow neural network) via empirical risk minimization from observations on Gaussian data. This family includes widely used algorithms such as stochastic gradient descent (SGD) and Nesterov acceleration. The obtained equations match those resulting from the discretization of dynamical mean-field theory equations from statistical physics when applied to the corresponding gradient flow. Our proof method allows us to give an explicit description of how memory kernels build up in the effective dynamics and to include nonseparable update functions, allowing datasets with nonidentity covariance matrices. Finally, we provide numerical implementations of the equations for SGD with generic extensive batch size and constant learning rates.
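To make the setting concrete, the following is a minimal illustrative sketch (not the paper's implementation or its DMFT equations) of the kind of dynamics the abstract refers to: constant-learning-rate SGD with an extensive batch size, i.e., a batch that is a fixed fraction of the sample size, applied to ridge-regularized empirical risk minimization on Gaussian data. All dimensions, step sizes, and variable names below are illustrative choices.

```python
import numpy as np

# Illustrative setup, not the paper's code: y = X w* + noise with Gaussian X.
rng = np.random.default_rng(0)
n, d = 1000, 200                                  # samples and dimension, both large
X = rng.standard_normal((n, d))                   # Gaussian data, identity covariance
w_star = rng.standard_normal(d) / np.sqrt(d)      # ground-truth estimator, O(1) signal
y = X @ w_star + 0.1 * rng.standard_normal(n)

lam = 0.1        # ridge penalty
eta = 0.2        # constant learning rate
b = n // 4       # "extensive" batch size: a fixed fraction of n

w = np.zeros(d)
for t in range(500):
    idx = rng.choice(n, size=b, replace=False)    # fresh random batch each step
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / b + lam * w
    w -= eta * grad

# Regularized empirical risk after training (squared loss + ridge penalty).
risk = np.mean((X @ w - y) ** 2) / 2 + lam * np.sum(w ** 2) / 2
```

In the high-dimensional regime studied in the paper, scalar summary statistics of such trajectories (e.g., the empirical risk above, or overlaps with `w_star`) concentrate, and their evolution is captured by the closed-form DMFT equations, including the memory kernels that the random batch selection generates.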