A learning-based approach to stochastic optimal control under reach-avoid constraint
We develop a model-free approach to optimally control stochastic, Markovian systems subject to a reach-avoid constraint: the state trajectory must remain within a safe set while reaching a target set within a finite time horizon. Because these constraints are time-dependent, we show that the optimal policy for this constrained stochastic control problem is, in general, non-Markovian, which increases the computational complexity. To address this challenge, we apply the state-augmentation technique from [23], reformulating the problem as a constrained Markov decision process (CMDP) on an extended state space. This reformulation allows us to search over Markovian policies, avoiding the complexity of non-Markovian ones. To learn the optimal policy from trajectory data alone, without a system model, we develop a log-barrier policy gradient approach. We prove that, under suitable assumptions, the policy parameters converge to the optimal parameters while ensuring that the system trajectories satisfy the stochastic reach-avoid constraint with high probability.
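For concreteness, below is a minimal, runnable sketch of the log-barrier policy gradient idea on a toy chain MDP; it is not the paper's algorithm. The agent must reach the right end of a chain within a finite horizon while never visiting the unsafe leftmost state, and the reach-avoid probability is estimated by Monte Carlo from trajectory data only. All specifics here are illustrative assumptions: the noisy chain dynamics, the tabular softmax policy, the constraint threshold `THRESHOLD`, the barrier coefficient `MU`, and the reward shaping.

```python
# Hedged sketch of a log-barrier policy gradient for a reach-avoid constraint.
# All constants and the environment are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

N, HORIZON = 6, 20      # chain states 0..5; state 0 is unsafe, state 5 is the target
N_ACTIONS = 2           # action 0: step left, action 1: step right
THRESHOLD = 0.9         # assumed requirement: P(reach-avoid) >= 0.9
MU = 0.05               # assumed log-barrier coefficient

def step(s, a):
    """Noisy chain dynamics: the intended move succeeds with probability 0.9."""
    move = 1 if a == 1 else -1
    if rng.random() > 0.9:
        move = -move
    return int(np.clip(s + move, 0, N - 1))

def policy(theta, s):
    """Tabular softmax policy."""
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def rollout(theta):
    """One episode; returns (score-function sum, reward, reach-avoid indicator).
    The episode ends on reaching the target, so the binary 'reached' flag of the
    augmented state stays implicit in this toy example."""
    s, steps, safe, reached = 2, 0, True, False
    score = np.zeros_like(theta)            # sum_t grad_theta log pi(a_t | s_t)
    for steps in range(1, HORIZON + 1):
        p = policy(theta, s)
        a = rng.choice(N_ACTIONS, p=p)
        score[s] += np.eye(N_ACTIONS)[a] - p  # softmax score function
        s = step(s, a)
        if s == 0:
            safe = False
        if s == N - 1:
            reached = True
            break
    ok = float(safe and reached)
    reward = ok - steps / HORIZON           # reach the target, and do so quickly
    return score, reward, ok

theta = np.zeros((N, N_ACTIONS))
for it in range(300):
    batch = [rollout(theta) for _ in range(64)]
    G = np.mean([ok for _, _, ok in batch])  # Monte Carlo estimate of P(reach-avoid)
    grad_J = np.mean([sc * r for sc, r, _ in batch], axis=0)    # REINFORCE, objective
    grad_G = np.mean([sc * ok for sc, _, ok in batch], axis=0)  # REINFORCE, constraint
    slack = max(G - THRESHOLD, 1e-3)         # clamp to keep the barrier finite
    theta += 0.5 * (grad_J + MU * grad_G / slack)  # ascent on J + MU * log(G - THRESHOLD)
print(f"final Monte Carlo reach-avoid probability: {G:.2f}")
```

The clamp on `slack` is a crude safeguard for iterates whose constraint estimate falls below the threshold; the paper's convergence and high-probability safety guarantees rely on assumptions that this toy setup does not attempt to reproduce.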
Affiliation: EPFL
Publication date: 2025-05-06
Place of publication: New York, NY, USA
ISBN: 979-8-4007-1504-4
Pages: 1-8
Peer review: REVIEWED
Event name | Event acronym | Event place | Event date
Hybrid Systems: Computation and Control | HSCC 2025 | Irvine, CA, USA | 2025-05-06 - 2025-05-09