Infoscience
EPFL, École polytechnique fédérale de Lausanne
research article

Small Errors in Random Zeroth-Order Optimization are Imaginary

Jongeneel, Wouter • Yue, Man-Chung • Kuhn, Daniel
2024
SIAM Journal on Optimization

Most zeroth-order optimization algorithms mimic a first-order algorithm but replace the gradient of the objective function with some noisy gradient estimator that can be computed from a small number of function evaluations. This estimator is constructed randomly, and its expectation matches the gradient of a smooth approximation of the objective function whose quality improves as the underlying smoothing parameter δ is reduced. Gradient estimators requiring a smaller number of function evaluations are preferable from a computational point of view. While estimators based on a single function evaluation can be obtained by a clever use of the divergence theorem from vector calculus, their variance explodes as δ tends to 0. Estimators based on multiple function evaluations, on the other hand, suffer from numerical cancellation when δ tends to 0. To combat both effects simultaneously, we extend the objective function to the complex domain and construct a gradient estimator that evaluates the objective at a complex point whose coordinates have small imaginary parts of the order δ. As this estimator requires only one function evaluation, it is immune to cancellation. In addition, its variance remains bounded as δ tends to 0. We prove that zeroth-order algorithms that use our estimator offer the same theoretical convergence guarantees as the state-of-the-art methods. Numerical experiments suggest, however, that they often converge faster in practice.
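The cancellation and variance effects described in the abstract can be illustrated with a small numerical sketch. This is a hedged illustration, not the paper's estimator verbatim: the test function (`np.exp`), the normalized-Gaussian sampling of the direction `u`, and the `d/δ` scaling of the randomized estimator `complex_zo_grad` are illustrative assumptions.

```python
import numpy as np

def central_diff(f, x, delta):
    # Two-evaluation finite-difference estimate. Suffers catastrophic
    # cancellation once delta is small relative to machine precision.
    return (f(x + delta) - f(x - delta)) / (2 * delta)

def complex_step(f, x, delta):
    # Single-evaluation complex-step estimate: Im f(x + i*delta) / delta.
    # No subtraction of nearly equal numbers, hence no cancellation.
    return np.imag(f(x + 1j * delta)) / delta

delta = 1e-20
print(central_diff(np.exp, 1.0, delta))  # 0.0: cancellation wipes out the signal
print(complex_step(np.exp, 1.0, delta))  # close to e, accurate despite tiny delta

def complex_zo_grad(f, x, delta, rng):
    # Sketch of a randomized single-evaluation zeroth-order gradient
    # estimator in the spirit of the abstract: evaluate f at a complex
    # point whose coordinates have imaginary parts of order delta.
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    return (d / delta) * np.imag(f(x + 1j * delta * u)) * u
```

Because `Im f(x + iδu)` itself scales like δ, dividing by δ leaves a quantity of order one, so the estimator's variance stays bounded as δ tends to 0, in contrast with divergence-theorem-based single-evaluation estimators.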

  • Details
  • Metrics
Type
research article
DOI
10.1137/22M1510261
ArXiv ID

2103.05478

Author(s)
Jongeneel, Wouter  
Yue, Man-Chung
Kuhn, Daniel  
Date Issued
2024

Publisher
Society for Industrial & Applied Mathematics (SIAM)

Published in
SIAM Journal on Optimization
Volume
34
Issue
3
Start page
2638
End page
2670

Subjects
Zeroth-order optimization • Derivative-free optimization • Complex-step derivative
URL
https://arxiv.org/pdf/2103.05478.pdf
Editorial or Peer reviewed
REVIEWED

Written at
EPFL
EPFL units
RAO  
Funder and Grant Number
Hong Kong Research Grants Council: 25302420
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung: 51NF40 180545

Available on Infoscience
July 18, 2022
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/189437
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.