Infoscience

Research article

Landscape and training regimes in deep learning

Geiger, Mario • Petrini, Leonardo • Wyart, Matthieu
April 16, 2021
Physics Reports

Deep learning algorithms are responsible for a technological revolution in a variety of tasks, including image recognition and Go playing. Yet, why they work is not understood. Ultimately, they manage to classify data lying in high dimension, a feat generically impossible due to the geometry of high-dimensional space and the associated curse of dimensionality. Understanding what kind of structure, symmetry or invariance makes data such as images learnable is a fundamental challenge. Other puzzles include that (i) learning corresponds to minimizing a loss in high dimension, which is in general not convex and could well get stuck in bad minima; and (ii) the predictive power of deep learning increases with the number of fitting parameters, even in a regime where the data are perfectly fitted. In this manuscript, we review recent results elucidating (i, ii) and the perspective they offer on the still unexplained paradox of the curse of dimensionality. We base our theoretical discussion on the (h, α) plane, where h controls the number of parameters and α the scale of the output of the network at initialization, and provide new systematic measures of performance in that plane for two common image classification datasets. We argue that different learning regimes can be organized into a phase diagram. A line of critical points sharply delimits an under-parametrized phase from an over-parametrized one. In over-parametrized nets, learning can operate in two regimes separated by a smooth cross-over. At large initialization, learning corresponds to a kernel method, whereas for small initializations features can be learnt, together with invariants in the data. We review the properties of these different phases, of the transition separating them, and some open questions. Our treatment emphasizes analogies with physical systems, scaling arguments and the development of numerical observables to test these results quantitatively. Practical implications are also discussed, including the benefit of averaging nets with distinct initial weights and the choice of parameters (h, α) that optimize performance.
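To make the role of α concrete, the following is a minimal NumPy sketch, not the paper's exact setup: a one-hidden-layer ReLU network of width h whose output is multiplied by α, trained by gradient descent on a quadratic loss over toy data. The toy data, the frozen output weights, and the lr/α² learning-rate convention (the standard lazy-training rescaling) are all illustrative assumptions. The sketch shows only the qualitative signature of the two regimes described above: the weights barely move from their initialization when α is large (kernel-like regime) and move substantially when α is small (feature learning).

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data (illustrative): n points in dimension d with random +/-1 labels.
    d, h, n = 10, 200, 20
    X = rng.standard_normal((n, d))
    y = rng.choice([-1.0, 1.0], size=n)

    def train(alpha, lr=0.01, steps=2000):
        """Gradient descent on f(x) = alpha * a . relu(W x), width h."""
        W = rng.standard_normal((h, d)) / np.sqrt(d)  # input weights
        a = rng.standard_normal(h) / np.sqrt(h)       # output weights (frozen here)
        W0 = W.copy()
        for _ in range(steps):
            pre = X @ W.T                             # (n, h) pre-activations
            f = alpha * np.maximum(pre, 0.0) @ a      # network output, scaled by alpha
            err = (f - y) / n                         # residual of the quadratic loss
            # gradient of 0.5 * mean((f - y)^2) with respect to W
            grad_W = alpha * ((err[:, None] * (pre > 0) * a).T @ X)
            # lr / alpha^2 keeps the output-space dynamics comparable across alpha
            W -= (lr / alpha**2) * grad_W
        # relative weight movement: small => lazy/kernel-like, large => feature learning
        return np.linalg.norm(W - W0) / np.linalg.norm(W0)

    for alpha in (100.0, 0.01):
        print(f"alpha = {alpha:g}: relative change in W = {train(alpha):.2e}")

With the 1/α² rescaling, the printed relative displacement of W shrinks roughly as 1/α, which is what separates the kernel regime from the feature-learning regime in this toy setting.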

Files

Name: 1-s2.0-S0370157321001290-main.pdf
Type: Publisher's Version
Version: Published version
Access type: Open access
License Condition: CC BY
Size: 1.85 MB
Format: Adobe PDF
Checksum (MD5): 7be3f0deb3761d55a9442822dedd4d5b
