Infoscience
research article

Variance-Reduced Stochastic Learning Under Random Reshuffling

Ying, Bicheng • Yuan, Kun • Sayed, Ali H.
January 1, 2020
IEEE Transactions on Signal Processing

Several useful variance-reduced stochastic gradient algorithms, such as SVRG, SAGA, Finito, and SAG, have been proposed to minimize empirical risks with linear convergence to the exact minimizer. The existing convergence results assume uniform data sampling with replacement. However, it has been observed in related works that random reshuffling can deliver superior performance over uniform sampling, and yet no formal proofs or guarantees of exact convergence exist for variance-reduced algorithms under random reshuffling. This paper makes two contributions. First, it provides a theoretical guarantee of linear convergence under random reshuffling for SAGA in the mean-square sense; the argument is also adaptable to other variance-reduced algorithms. Second, under random reshuffling, it proposes a new amortized variance-reduced gradient (AVRG) algorithm whose storage requirements stay constant in the sample size, unlike SAGA, and whose gradient computations are balanced across the epoch, unlike SVRG. AVRG is also shown analytically to converge linearly.
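To make the AVRG idea in the abstract concrete, the following is a minimal Python sketch of an AVRG-style recursion as described above: each epoch draws one fresh random permutation (sampling without replacement), each step evaluates two per-sample gradients (at the current iterate and at the epoch-start snapshot), and the full-gradient correction is amortized by averaging gradients along the previous epoch's iterates, so storage stays constant in the sample size. The exact recursion details, the helper names, the least-squares example, and the step size are illustrative assumptions, not taken verbatim from the paper.

import numpy as np

def avrg(grad_i, w0, n_samples, step, n_epochs, seed=None):
    """Sketch of an AVRG-style recursion under random reshuffling.

    grad_i(w, i) returns the gradient of the i-th loss term at w.
    Unlike SAGA (O(N) gradient table) or SVRG (a full-gradient pass
    per epoch), this keeps only a snapshot and two running averages.
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    w_snap = w0.copy()            # iterate at the start of the current epoch
    g_prev = np.zeros_like(w0)    # amortized average gradient from the previous epoch
    for _ in range(n_epochs):
        perm = rng.permutation(n_samples)   # random reshuffling, not i.i.d. sampling
        g_next = np.zeros_like(w0)
        for i in perm:
            grad_cur = grad_i(w, i)
            # Variance-reduced estimate: current gradient, minus the stale
            # gradient at the snapshot, plus last epoch's running average.
            g_hat = grad_cur - grad_i(w_snap, i) + g_prev
            g_next += grad_cur / n_samples  # amortize the "full gradient" pass
            w -= step * g_hat
        w_snap, g_prev = w.copy(), g_next   # roll over to the next epoch
    return w

# Illustrative use on a small least-squares problem (not from the paper):
A = np.random.default_rng(0).standard_normal((200, 10))
b = A @ np.ones(10)
w_est = avrg(lambda w, i: (A[i] @ w - b[i]) * A[i],
             np.zeros(10), n_samples=200, step=0.02, n_epochs=100)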

Type
research article
DOI
10.1109/TSP.2020.2968280
Web of Science ID
WOS:000519834600002
Author(s)
Ying, Bicheng  
Yuan, Kun  
Sayed, Ali H.  
Date Issued
2020-01-01
Publisher
IEEE - Institute of Electrical and Electronics Engineers Inc
Published in
IEEE Transactions on Signal Processing
Volume
68
Start page
1390
End page
1408
Subjects
Engineering, Electrical & Electronic • Engineering • random reshuffling • variance-reduction • stochastic gradient descent • linear convergence • empirical risk minimization • convergence rate • gradient methods
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
ASL  
Available on Infoscience
April 1, 2020
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/167747