Infoscience
EPFL, École polytechnique fédérale de Lausanne
 
research article

A Pareto Dominance Principle for Data-Driven Optimization

Sutter, Tobias • Van Parys, Bart • Kuhn, Daniel
January 19, 2024
Operations Research

We propose a statistically optimal approach to construct data-driven decisions for stochastic optimization problems. Fundamentally, a data-driven decision is simply a function that maps the available training data to a feasible action. It can always be expressed as the minimizer of a surrogate optimization model constructed from the data. The quality of a data-driven decision is measured by its out-of-sample risk. An additional quality measure is its out-of-sample disappointment, which we define as the probability that the out-of-sample risk exceeds the optimal value of the surrogate optimization model. The crux of data-driven optimization is that the data-generating probability measure is unknown. An ideal data-driven decision should therefore minimize the out-of-sample risk simultaneously with respect to every conceivable probability measure (and thus in particular with respect to the unknown true measure). Unfortunately, such ideal data-driven decisions are generally unavailable. This prompts us to seek data-driven decisions that minimize the in-sample risk subject to an upper bound on the out-of-sample disappointment, again simultaneously with respect to every conceivable probability measure. We prove that such Pareto-dominant data-driven decisions exist under conditions that allow for interesting applications: the unknown data-generating probability measure must belong to a parametric ambiguity set, and the corresponding parameters must admit a sufficient statistic that satisfies a large deviation principle. If these conditions hold, we can further prove that the surrogate optimization model generating the optimal data-driven decision must be a distributionally robust optimization problem constructed from the sufficient statistic and the rate function of its large deviation principle. This shows that the optimal method for mapping data to decisions is, in a rigorous statistical sense, to solve a distributionally robust optimization model.
Perhaps surprisingly, this result holds irrespective of whether the original stochastic optimization problem is convex, and it holds even when the training data are not i.i.d. As a byproduct, our analysis reveals how the structural properties of the data-generating stochastic process impact the shape of the ambiguity set underlying the optimal distributionally robust optimization model.
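To make the surrogate model concrete, here is a minimal numerical sketch, not the authors' algorithm: for i.i.d. data on a finite alphabet, Sanov's theorem gives the relative entropy KL(q || p) as the large deviation rate function of the empirical distribution q, which motivates a distributionally robust surrogate over the ambiguity set {p : KL(q || p) ≤ r}. The newsvendor loss, the scenario data, and the radius r below are illustrative assumptions, and the inner worst-case problem is solved by a generic nonlinear solver rather than the paper's construction.

```python
import numpy as np
from scipy.optimize import minimize

def kl(q, p):
    """Relative entropy KL(q || p) on a finite alphabet (0 log 0 = 0)."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

def worst_case_risk(losses, q, r):
    """sup { E_p[loss] : KL(q || p) <= r }, solved numerically on the simplex."""
    n = len(losses)
    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},      # p is a distribution
        {"type": "ineq", "fun": lambda p: r - kl(q, p)},      # KL(q||p) <= r
    ]
    res = minimize(lambda p: -(losses @ p), q, bounds=[(1e-9, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return -res.fun

# Hypothetical demand scenarios and an empirical distribution from training data.
demand = np.array([10.0, 20.0, 30.0])
q = np.array([0.2, 0.5, 0.3])
r = 0.05  # illustrative radius; in the paper it is tied to the disappointment level

def newsvendor_loss(x, d, cost=1.0, price=2.0):
    # Order x units at unit cost, sell min(x, d) at unit price (loss = -profit).
    return cost * x - price * np.minimum(x, d)

# The DRO surrogate: pick the order quantity minimizing worst-case expected loss.
candidates = np.linspace(10.0, 30.0, 21)
dro_x = min(candidates,
            key=lambda x: worst_case_risk(newsvendor_loss(x, demand), q, r))
```

Because the empirical distribution q itself is feasible (KL(q || q) = 0 ≤ r), the worst-case risk always dominates the in-sample risk, which is the mechanism by which the radius r controls out-of-sample disappointment.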

Type
research article
DOI
10.1287/opre.2021.0609
arXiv ID
2010.06606

Author(s)
Sutter, Tobias (University of Konstanz)
Van Parys, Bart (Massachusetts Institute of Technology)
Kuhn, Daniel (EPFL)

Date Issued
2024-01-19
Published in
Operations Research
Volume
72
Issue
5
Start page
1751
End page
2261

Subjects
Data-driven decision-making • Stochastic optimization • Robust optimization • Large deviations
Note
Earlier versions of this paper had the title "A General Framework for Optimal Data-Driven Optimization".
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
RAO
Funder
Swiss National Science Foundation
Grant Number
51NF40_180545

Available on Infoscience
November 4, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/175398.2
Contact: infoscience@epfl.ch
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.