Infoscience — EPFL, École polytechnique fédérale de Lausanne
research article

Gradient-based optimisation of the conditional-value-at-risk using the multi-level Monte Carlo method

Ganesh, Sundar Subramaniam • Nobile, Fabio
December 15, 2023
Journal of Computational Physics

In this work, we tackle the problem of minimising the Conditional-Value-at-Risk (CVaR) of output quantities of complex differential models with random input data, using gradient-based approaches in combination with the Multi-Level Monte Carlo (MLMC) method. In particular, we consider the framework of multi-level Monte Carlo for parametric expectations and propose modifications of the MLMC estimator, error estimation procedure, and adaptive MLMC parameter selection to ensure the estimation of the CVaR and sensitivities for a given design with a prescribed accuracy. We then propose combining the MLMC framework with an alternating inexact minimisation-gradient descent algorithm, for which we prove exponential convergence in the optimisation iterations under the assumptions of strong convexity and Lipschitz continuity of the gradient of the objective function. We demonstrate the performance of our approach on two numerical examples of practical relevance, which evidence the same optimal asymptotic cost-tolerance behaviour as standard MLMC methods for fixed design computations of output expectations.
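The Rockafellar–Uryasev formulation that underlies gradient-based CVaR minimisation can be illustrated with a plain single-level Monte Carlo estimator. This is a minimal sketch, not the paper's MLMC method; the helper name `cvar_estimate` and the standard-normal stand-in for the model output are illustrative assumptions:

```python
import numpy as np

def cvar_estimate(samples, alpha):
    """Plain Monte Carlo estimate of CVaR_alpha(X) via the
    Rockafellar-Uryasev formula
        CVaR_alpha(X) = min_t { t + E[(X - t)^+] / (1 - alpha) },
    whose minimiser t* is the alpha-quantile VaR_alpha(X)."""
    x = np.asarray(samples, dtype=float)
    t = np.quantile(x, alpha)  # sample VaR_alpha, the inner minimiser
    return t + np.mean(np.maximum(x - t, 0.0)) / (1.0 - alpha)

# Stand-in for model outputs Q(omega); for a standard normal,
# the exact CVaR at alpha = 0.95 is about 2.06.
q = np.random.default_rng(0).standard_normal(100_000)
cvar = cvar_estimate(q, 0.95)
```

The MLMC method of the paper replaces this single-level sample average with a telescoping sum of level-wise corrections, and couples the estimator with an alternating inexact minimisation–gradient descent loop over the design variables.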

Type
research article
DOI
10.1016/j.jcp.2023.112523
ArXiv ID
arXiv:2210.03485
Author(s)
Ganesh, Sundar Subramaniam
Nobile, Fabio
Date Issued
2023-12-15
Published in
Journal of Computational Physics
Volume
495
Issue
112523

Subjects
Multilevel Monte Carlo Methods • Value-at-Risk • Conditional-Value-at-Risk • Uncertainty Quantification • Optimisation Under Uncertainty • Gradient Descent

Note
Data available at https://zenodo.org/record/7193448
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
CSQI
Funder / Grant Number
H2020 / 80089

Available on Infoscience
October 10, 2022
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/191411
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.