Title: Stochasticity helps to navigate rough landscapes: comparing gradient-descent-based algorithms in the phase retrieval problem
Authors: Mignacco, Francesca; Urbani, Pierfrancesco; Zdeborova, Lenka
Dates: 2021-07-31; 2021-09-01
DOI: 10.1088/2632-2153/ac0615
Handle: https://infoscience.epfl.ch/handle/20.500.14299/180309
Web of Science: WOS:000674928400001
Type: text::journal::journal article::research article

Abstract: In this paper we investigate how gradient-based algorithms such as gradient descent (GD), (multi-pass) stochastic GD, its persistent variant, and the Langevin algorithm navigate non-convex loss landscapes, and which of them reaches the best generalization error at limited sample complexity. We consider the loss landscape of the high-dimensional phase retrieval problem as a prototypical, highly non-convex example. We observe that for phase retrieval the stochastic variants of GD are able to reach perfect generalization in regions of the control parameters where the GD algorithm is not. We apply dynamical mean-field theory from statistical physics to characterize analytically the full trajectories of these algorithms in their continuous-time limit, with a warm start, and for large system sizes. We further unveil several intriguing properties of the landscape and the algorithms, for instance that GD can obtain better generalization from less-informed initializations.

Subject categories: Computer Science, Artificial Intelligence; Computer Science, Interdisciplinary Applications; Multidisciplinary Sciences; Computer Science; Science & Technology - Other Topics
Keywords: disordered systems; neural networks; non-convex optimization; initialization; reconstruction; dynamics; systems
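The setting compared in the abstract can be illustrated with a minimal sketch, not taken from the paper's code: full-batch GD versus a mini-batch stochastic variant on the real-valued phase retrieval loss L(w) = (1/4n) Σ_μ ((x_μ·w)² − y_μ)² with phaseless measurements y_μ = (x_μ·w*)². The dimensions, learning rate, batch size, and warm-start overlap below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical illustration: GD vs. mini-batch SGD on phase retrieval.
rng = np.random.default_rng(0)
d, n = 50, 200                      # dimension and samples (alpha = n/d = 4, assumed)
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)     # teacher vector
y = (X @ w_star) ** 2               # phaseless (sign-free) measurements

def loss(w):
    # L(w) = (1/4n) sum_mu ((x_mu . w)^2 - y_mu)^2
    return np.mean(((X @ w) ** 2 - y) ** 2) / 4

def grad(w, idx):
    # gradient of the loss restricted to the mini-batch `idx`
    Xb = X[idx]
    r = (Xb @ w) ** 2 - y[idx]
    return Xb.T @ (r * (Xb @ w)) / len(idx)

def run(w0, lr=0.05, steps=5000, batch=None):
    # batch=None -> full-batch GD; otherwise mini-batch SGD
    w = w0.copy()
    for _ in range(steps):
        idx = np.arange(n) if batch is None else rng.choice(n, batch, replace=False)
        w -= lr * grad(w, idx)
    return w

# warm start: an initialization with some overlap with the teacher
w0 = 0.3 * w_star + rng.standard_normal(d)
w_gd = run(w0)                      # full-batch gradient descent
w_sgd = run(w0, batch=20)           # stochastic (mini-batch) variant
print(loss(w_gd), loss(w_sgd))
```

Both runs start from the same warm start, so the comparison isolates the effect of mini-batch noise, which is the axis of comparison in the abstract (the paper's analysis additionally covers persistent SGD and Langevin dynamics, omitted here).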