conference paper

Polynomial Escape-Time From Saddle Points In Distributed Non-Convex Optimization

Vlaski, Stefan • Sayed, Ali H.
January 1, 2019
2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2019)
8th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)

The diffusion strategy for distributed learning from streaming data employs local stochastic gradient updates along with exchange of iterates over neighborhoods. In this work we establish that agents cluster around a network centroid in the mean-fourth sense and proceed to study the dynamics of this point. We establish expected descent in non-convex environments in the large-gradient regime and introduce a short-term model to examine the dynamics over finite-time horizons. Using this model, we establish that the diffusion strategy is able to escape from strict saddle points in O(1/μ) iterations, where μ denotes the step-size; it is also able to return approximately second-order stationary points in a polynomial number of iterations. Relative to prior works on polynomial escape from saddle points, most of which focus on centralized perturbed or stochastic gradient descent, our approach requires less restrictive conditions on the gradient noise process.
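
For a concrete picture of the strategy described above, the following is a minimal sketch of adapt-then-combine diffusion on a toy non-convex cost with a strict saddle point at the origin. The ring network, uniform combination weights, cost function, noise level, and step-size below are illustrative assumptions, not the setup used in the paper.

# Minimal sketch of the adapt-then-combine (ATC) diffusion strategy.
# The combination matrix, cost, and gradient-noise model are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

N, d, mu, iters = 10, 2, 0.01, 5000    # agents, dimension, step-size, iterations
noise_std = 0.1                        # gradient-noise level (assumed)

# Ring network: each agent averages with itself and its two neighbours.
# Uniform 1/3 weights give a doubly-stochastic combination matrix A,
# where A[l, k] is the weight agent k assigns to agent l.
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l, k] = 1.0 / 3.0

def stochastic_grad(w):
    """Gradient of a toy non-convex cost with a strict saddle at the origin,
    J(w) = (w[0]^2 - w[1]^2)/2 + ||w||^4/4, plus additive gradient noise."""
    grad = np.array([w[0], -w[1]]) + (w @ w) * w
    return grad + noise_std * rng.standard_normal(w.shape)

W = np.zeros((N, d))                   # all agents start at the saddle point
for _ in range(iters):
    # Adapt: local stochastic-gradient update at every agent
    Psi = np.array([W[k] - mu * stochastic_grad(W[k]) for k in range(N)])
    # Combine: exchange and average intermediate iterates over neighbourhoods
    W = A.T @ Psi

centroid = W.mean(axis=0)
print("network centroid:", centroid)   # typically near a minimizer at (0, ±1)

Each agent first takes a local stochastic-gradient ("adapt") step and then averages the intermediate iterates over its neighborhood ("combine"); for small step-sizes the agents cluster around the network centroid, which leaves the saddle in roughly O(1/μ) iterations and settles near a second-order stationary point, in line with the result stated in the abstract.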

Type
conference paper
DOI
10.1109/CAMSAP45676.2019.9022458
Web of Science ID
WOS:000556233000033
Author(s)
Vlaski, Stefan • Sayed, Ali H.
Date Issued
2019-01-01
Publisher
IEEE
Publisher place
New York
Published in
2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2019)
ISBN of the book
978-1-7281-5549-4
Start page
171
End page
175

Subjects
stochastic optimization • adaptation • non-convex costs • saddle point • escape time • gradient noise • stationary points • distributed optimization • diffusion learning • diffusion • networks

Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
ASL
Event name
8th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
Event place
Guadeloupe, France
Event date
Dec 15-18, 2019

Available on Infoscience
August 21, 2020
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/171009