Infoscience

conference paper

Optimal Distributed Learning with Multi-pass Stochastic Gradient Methods

Lin, Junhong • Cevher, Volkan
June 8, 2018
Proceedings of the 35th International Conference on Machine Learning
35th International Conference on Machine Learning

We study the generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS). We investigate distributed stochastic gradient methods (SGM) with mini-batches and multiple passes over the data. We show that optimal generalization error bounds can be retained for distributed SGM provided that the partition level is not too large. Our results improve on the state-of-the-art theory and cover the case where the regression function may not lie in the hypothesis space. In particular, they show that distributed SGM has a smaller theoretical computational complexity than both distributed kernel ridge regression (KRR) and classic SGM.
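
The abstract describes a divide-and-conquer scheme: the training data are partitioned across machines, each machine runs multi-pass mini-batch SGM in the RKHS, and the local estimators are combined. Below is a minimal Python sketch of that idea, assuming a Gaussian kernel, a constant step size, and plain averaging of the local predictors; these choices are illustrative and not the paper's exact algorithm or parameter schedule.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # K[i, j] = exp(-||A_i - B_j||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def local_sgm(X, y, passes=5, batch=8, eta=0.5, sigma=1.0, seed=0):
    # Multi-pass mini-batch SGM for least squares in the RKHS.
    # The estimator is kept in dual form f(x) = sum_j alpha_j K(x_j, x),
    # so each step only updates the coefficients of the sampled batch.
    rng = np.random.default_rng(seed)
    n = len(y)
    K = gaussian_kernel(X, X, sigma)                 # local Gram matrix
    alpha = np.zeros(n)
    for _ in range(passes):                          # multiple passes over the data
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            residual = K[idx] @ alpha - y[idx]       # f_t(x_i) - y_i on the batch
            alpha[idx] -= eta * residual / len(idx)  # functional gradient step
    return X, alpha

def distributed_sgm(X, y, n_workers=4, sigma=1.0, **kw):
    # Split the data, run SGM independently on each block, average the predictors.
    blocks = [local_sgm(Xb, yb, sigma=sigma, **kw)
              for Xb, yb in zip(np.array_split(X, n_workers),
                                np.array_split(y, n_workers))]
    def predict(X_test):
        preds = [gaussian_kernel(X_test, Xb, sigma) @ a for Xb, a in blocks]
        return np.mean(preds, axis=0)                # uniform average over workers
    return predict

# Toy usage: noisy sine regression split over four workers.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)
predict = distributed_sgm(X, y, n_workers=4)
print(predict(np.array([[0.0], [1.5]])))

In this sketch each worker only forms its own (n/m) x (n/m) Gram matrix and never sees the other blocks, which is where the computational savings over a single-machine method operating on the full data come from.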

Files

Name: dsgm_camera.pdf
Access type: openaccess
Size: 567.59 KB
Format: Adobe PDF
Checksum (MD5): 43e12a84bf3b73f4c35b3f1182fe3b96

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.