Abstract

We consider an optimal control problem (OCP) for a partial differential equation (PDE) with random coefficients. The optimal control function is a deterministic, distributed forcing term that minimizes an expected quadratic regularized loss functional. For the numerical approximation of this PDE-constrained OCP, we replace the expectation in the objective functional by a suitable quadrature formula and, finally, discretize the PDE by a Galerkin method. To practically solve such an approximate OCP, we propose an importance sampling version of the SAGA algorithm [A. Defazio, F. Bach, and S. Lacoste-Julien, Advances in Neural Information Processing Systems 27, Curran Associates, Red Hook, NY, 2014, pp. 1646--1654], a type of stochastic gradient algorithm with a fixed-length memory term, which computes at each iteration the gradient of the loss functional at only one quadrature point, randomly chosen from a possibly nonuniform distribution. We provide a full error and complexity analysis of the proposed numerical scheme. In particular, we compare the complexity of the generalized SAGA algorithm with importance sampling with that of the stochastic gradient and conjugate gradient (CG) algorithms applied to the same discretized OCP. We show that SAGA converges exponentially in the number of iterations, as does the CG algorithm, and has a similar asymptotic complexity in terms of computational cost versus accuracy. Moreover, it features good preasymptotic properties, as shown by our numerical experiments, which makes it appealing in a limited-budget context.
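To make the sampling scheme concrete, the following is a minimal Python sketch of SAGA with nonuniform (importance) sampling on a generic finite sum F(x) = (1/n) sum_i f_i(x), where the n terms stand in for the quadrature points of the discretized objective. The function names, the step size, and the Lipschitz-proportional probabilities in the toy usage are illustrative assumptions, not the paper's implementation; the reweighting of the fresh gradient by 1/(n * p_j) is what keeps the update direction an unbiased estimate of the full gradient under nonuniform sampling.

```python
import numpy as np

def saga_importance_sampling(grad_i, n, x0, probs, step, n_iters, rng=None):
    """SAGA with nonuniform sampling for F(x) = (1/n) * sum_i f_i(x).

    grad_i(i, x): gradient of the i-th term f_i at x (hypothetical callback).
    probs[i]:     probability of drawing term i at each iteration (sums to 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    # Fixed-length memory: the most recent gradient seen for each term.
    memory = np.array([grad_i(i, x) for i in range(n)])
    avg = memory.mean(axis=0)
    for _ in range(n_iters):
        j = rng.choice(n, p=probs)      # one quadrature point per iteration
        g_new = grad_i(j, x)            # single gradient evaluation
        # Unbiased direction: E[(g_new - memory[j]) / (n p_j)] + avg = grad F(x).
        direction = (g_new - memory[j]) / (n * probs[j]) + avg
        x -= step * direction
        avg += (g_new - memory[j]) / n  # keep the running mean in sync
        memory[j] = g_new
    return x

# Toy usage on least squares, f_i(x) = (a_i^T x - b_i)^2 / 2, with sampling
# probabilities proportional to per-term Lipschitz constants ||a_i||^2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
grad = lambda i, x: (A[i] @ x - b[i]) * A[i]
p = np.linalg.norm(A, axis=1) ** 2
p /= p.sum()
x = saga_importance_sampling(grad, 50, np.zeros(5), p, step=0.05,
                             n_iters=2000, rng=rng)
```

Unlike plain stochastic gradient, the memory term lets SAGA use a constant step size, which is the mechanism behind the exponential (linear) convergence rate compared above with CG.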
