Infoscience
EPFL, École polytechnique fédérale de Lausanne
research article

Policy Gradient Algorithms for Robust MDPs with Non-Rectangular Uncertainty Sets

Li, Mengmeng • Sutter, Tobias • Kuhn, Daniel
2026
SIAM Journal on Optimization

We propose policy gradient algorithms for robust infinite-horizon Markov decision processes (MDPs) with non-rectangular uncertainty sets, thereby addressing an open challenge in the robust MDP literature. Indeed, uncertainty sets that display statistical optimality properties and make optimal use of limited data often fail to be rectangular. Unfortunately, the corresponding robust MDPs cannot be solved with dynamic programming techniques and are in fact provably intractable. We first present a randomized projected Langevin dynamics algorithm that solves the robust policy evaluation problem to global optimality but is inefficient. We also propose a deterministic policy gradient method that is efficient but solves the robust policy evaluation problem only approximately, and we prove that the approximation error scales with a new measure of non-rectangularity of the uncertainty set. Finally, we describe an actor-critic algorithm that finds an ϵ-optimal solution for the robust policy improvement problem in O(1/ϵ^4) iterations. We thus present the first complete solution scheme for robust MDPs with non-rectangular uncertainty sets offering global optimality guarantees. Numerical experiments show that our algorithms compare favorably against state-of-the-art methods.
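The randomized evaluation step described above — projected Langevin dynamics over a (possibly non-convex) robust policy evaluation objective — can be sketched generically as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, step size, and inverse-temperature parameter `beta` are assumptions, and the projection and gradient oracles are left abstract.

```python
import numpy as np

def projected_langevin(grad, project, x0, step=1e-3, beta=1e4, n_iters=5000, rng=None):
    """Projected Langevin dynamics sketch: noisy gradient ascent on a smooth
    objective, with each iterate projected back onto the feasible set.

    grad    : gradient oracle for the objective at x
    project : Euclidean projection onto the feasible (uncertainty) set
    beta    : inverse temperature; larger beta means less exploration noise
    """
    rng = np.random.default_rng(rng)
    x = project(np.asarray(x0, dtype=float))
    for _ in range(n_iters):
        noise = np.sqrt(2.0 * step / beta) * rng.standard_normal(x.shape)
        x = project(x + step * grad(x) + noise)
    return x
```

For intuition: on a toy concave objective such as maximizing -(x - 0.3)^2 over the box [0, 1]^2, the iterates concentrate near the global maximizer at 0.3; the injected Gaussian noise is what lets the method escape local optima in the genuinely non-convex robust evaluation problem.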

Type: research article
DOI: 10.1137/24M1631250
Author(s): Li, Mengmeng; Sutter, Tobias; Kuhn, Daniel
Date Issued: 2026
Publisher: SIAM Publications
Published in: SIAM Journal on Optimization
Volume: 36
Issue: 1
Start page: 120
End page: 151

Subjects

Markov decision processes

•

Robust optimization

•

Policy gradient algorithms

•

Langevin dynamics

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
RAO  
FunderFunding(s)Grant NumberGrant URL

Swiss National Science Foundation

51NF40 225155

Relation: IsNewVersionOf [Preprint], https://dx.doi.org/10.48550/arXiv.2305.19004
Available on Infoscience: February 9, 2026
Identifier: https://infoscience.epfl.ch/handle/20.500.14299/198182.4
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.