000255609 001__ 255609
000255609 005__ 20190317000958.0
000255609 037__ $$aCONF
000255609 245__ $$aOptimal Distributed Learning with Multi-pass Stochastic Gradient Methods
000255609 260__ $$c2018-06-08
000255609 269__ $$a2018-06-08
000255609 300__ $$a27
000255609 336__ $$aConference Papers
000255609 520__ $$aWe study the generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS). We investigate distributed stochastic gradient methods (SGM) with mini-batches and multiple passes over the data. We show that optimal generalization error bounds can be retained for distributed SGM provided that the partition level is not too large. Our results improve on the state-of-the-art theory, covering the case where the regression function may not lie in the hypothesis space. In particular, our results show that distributed SGM has a smaller theoretical computational complexity than distributed kernel ridge regression (KRR) and classic SGM.
000255609 6531_ $$aDistributed Learning, Stochastic Gradient Methods, Kernel Methods, RKHS, Regularization
000255609 700__ $$0250957$$aLin, Junhong
000255609 700__ $$0243957$$aCevher, Volkan
000255609 7112_ $$dJuly 10-15, 2018$$cStockholm, Sweden$$a35th International Conference on Machine Learning
000255609 773__ $$tProceedings of the 35th International Conference on Machine Learning
000255609 8560_ $$fjunhong.lin@epfl.ch
000255609 8564_ $$uhttps://infoscience.epfl.ch/record/255609/files/dsgm_camera.pdf$$s581214
000255609 909C0 $$xU12179$$pLIONS$$mvolkan.cevher@epfl.ch$$0252306
000255609 909CO $$qGLOBAL_SET$$pconf$$pSTI$$ooai:infoscience.epfl.ch:255609
000255609 960__ $$ajunhong.lin@epfl.ch
000255609 961__ $$aalain.borel@epfl.ch
000255609 973__ $$rREVIEWED$$aEPFL
000255609 980__ $$aCONF
000255609 981__ $$aoverwrite