conference paper

Neural Tangent Kernel: Convergence and Generalization in Neural Networks

Jacot-Guillarmod, Arthur Ulysse • Gabriel, Franck Raymond • Hongler, Clément
2018
NIPS'18 Proceedings of the 32nd International Conference on Neural Information Processing Systems
32nd International Conference on Neural Information Processing Systems

At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit [12, 9], thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function fθ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describing the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function fθ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally, we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.
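
As a reading aid (not part of the repository record), here is a hedged LaTeX sketch of the objects the abstract names: the empirical NTK, the kernel gradient flow of the functional cost, and the linear differential equation obtained for least-squares regression. The 1/N normalization and the form of the cost are assumptions that depend on the convention chosen for the empirical loss.

```latex
% Empirical NTK of a network f_theta with parameters theta = (theta_1, ..., theta_P):
\[
  \Theta(x, x') \;=\; \sum_{p=1}^{P} \partial_{\theta_p} f_\theta(x)\, \partial_{\theta_p} f_\theta(x').
\]
% Gradient descent on the parameters makes the network function follow the kernel
% gradient of the functional cost C with respect to the NTK:
\[
  \partial_t f_{\theta(t)} \;=\; -\,\nabla_{\Theta} C \big|_{f_{\theta(t)}}.
\]
% For least-squares regression on data (x_i, y_i), i = 1..N, this becomes a linear ODE,
\[
  \partial_t f_t(x) \;=\; -\frac{1}{N} \sum_{i=1}^{N} \Theta(x, x_i)\,\bigl(f_t(x_i) - y_i\bigr),
\]
% so each eigendirection of the Gram matrix (\Theta(x_i, x_j))_{ij} decays at a rate set by
% its eigenvalue: the largest kernel principal components converge fastest.
```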

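The following is a minimal, self-contained sketch (not the authors' code) of how one might compute the empirical NTK of a one-hidden-layer ReLU network at initialization; the architecture, widths, and function names are illustrative. For large widths the resulting Gram matrix should vary little across random initializations, consistent with the convergence to a deterministic limiting kernel described in the abstract.

```python
# Sketch: empirical NTK Theta(x, x') = <d f_theta(x)/d theta, d f_theta(x')/d theta>
# for a one-hidden-layer ReLU network in the NTK parameterization.
import numpy as np

rng = np.random.default_rng(0)

def init_params(d_in, width):
    # Weights drawn N(0, 1); the 1/sqrt(fan-in) scaling is applied in the forward pass,
    # as in the infinite-width analysis.
    return rng.standard_normal((width, d_in)), rng.standard_normal(width)

def forward_and_jacobian(x, W1, w2):
    width, d_in = W1.shape
    pre = W1 @ x / np.sqrt(d_in)           # hidden pre-activations
    act = np.maximum(pre, 0.0)             # ReLU
    f = w2 @ act / np.sqrt(width)          # scalar output f_theta(x)
    dact = (pre > 0).astype(x.dtype)       # ReLU derivative
    # Analytic Jacobian of f w.r.t. (W1, w2), flattened into one parameter vector.
    dW1 = np.outer(w2 * dact, x) / (np.sqrt(width) * np.sqrt(d_in))
    dw2 = act / np.sqrt(width)
    return f, np.concatenate([dW1.ravel(), dw2])

def empirical_ntk(X, W1, w2):
    # Gram matrix Theta_ij = <grad_theta f(x_i), grad_theta f(x_j)>.
    J = np.stack([forward_and_jacobian(x, W1, w2)[1] for x in X])
    return J @ J.T

d_in, width = 3, 10_000
W1, w2 = init_params(d_in, width)
X = rng.standard_normal((5, d_in))
print(empirical_ntk(X, W1, w2))  # nearly identical across wide-network initializations
```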

Files
Name: Neural Tangent Kernel: Convergence and Generalization in Neural Networks.pdf
Access type: openaccess
Size: 1.81 MB
Format: Adobe PDF
Checksum (MD5): dec2fcc93f578ec63b264cb7be979d6b
