Infoscience
EPFL, École polytechnique fédérale de Lausanne
research article

Distributed Learning in Non-Convex Environments-Part II: Polynomial Escape From Saddle-Points

Vlaski, Stefan • Sayed, Ali H.
January 1, 2021
IEEE Transactions on Signal Processing

The diffusion strategy for distributed learning from streaming data employs local stochastic gradient updates along with exchange of iterates over neighborhoods. In Part I [3] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point. We established expected descent in non-convex environments in the large-gradient regime and introduced a short-term model to examine the dynamics over finite-time horizons. Using this model, we establish in this work that the diffusion strategy is able to escape from strict saddle-points in O(1/mu) iterations, where mu denotes the step-size; it is also able to return approximately second-order stationary points in a polynomial number of iterations. Relative to prior works on the polynomial escape from saddle-points, most of which focus on centralized perturbed or stochastic gradient descent, our approach requires less restrictive conditions on the gradient noise process.
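The adapt-then-combine diffusion recursion summarized in the abstract (each agent takes a local stochastic-gradient step, then averages the intermediate iterates over its neighborhood) can be sketched in a few lines. This is a minimal illustration, not the paper's setup: the ring network, the toy cost J(w) = (||w||^2 - 1)^2 / 4 (whose origin is a strict saddle point, since its Hessian there, -I, has negative eigenvalues), and the Gaussian gradient-noise model are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network: 4 agents on a ring, doubly stochastic combination matrix A.
N, d, mu = 4, 2, 0.01
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def grad(W):
    # Row-wise gradient of the toy cost J(w) = (||w||^2 - 1)^2 / 4;
    # it vanishes at the strict saddle w = 0 and on the minimizer circle ||w|| = 1.
    return (np.sum(W * W, axis=1, keepdims=True) - 1.0) * W

W = 0.1 * rng.standard_normal((N, d))   # all agents start near the saddle at 0
for _ in range(2000):
    # Adapt: local stochastic-gradient step (additive Gaussian gradient noise).
    psi = W - mu * (grad(W) + 0.01 * rng.standard_normal((N, d)))
    # Combine: average the intermediate iterates over each neighborhood.
    W = A @ psi

centroid = W.mean(axis=0)                            # network centroid
spread = np.linalg.norm(W - centroid, axis=1).max()  # how tightly agents cluster
```

In this toy run the agents drift away from the saddle within on the order of 1/mu iterations and settle near the minimizer circle ||w|| = 1, while remaining tightly clustered around the network centroid, mirroring the clustering result of Part I and the escape behavior analyzed here.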

Type
research article
DOI
10.1109/TSP.2021.3050840
Web of Science ID
WOS:000622094600009
Author(s)
Vlaski, Stefan • Sayed, Ali H.
Date Issued
2021-01-01
Publisher
IEEE (Institute of Electrical and Electronics Engineers Inc.)
Published in
IEEE Transactions on Signal Processing
Volume
69
Start page
1257
End page
1270

Subjects
Engineering, Electrical & Electronic • Engineering • stochastic optimization • adaptation • non-convex costs • saddle point • escape time • gradient noise • stationary points • distributed optimization • diffusion learning
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
ASL
Available on Infoscience
March 26, 2021
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/176173