Abstract

This thesis contributes to the design and analysis of fast and distributed optimization algorithms based on splitting techniques, such as proximal gradient methods and alternating minimization algorithms, with application to solving model predictive control (MPC) problems. The first part of the thesis focuses on developing an efficient algorithm, based on the fast alternating minimization algorithm, for solving MPC problems with polytopic and second-order cone constraints. Since real-time MPC requires bounding the online computation time, complexity bounds on the number of iterations needed to achieve a given accuracy are derived, and the computation of these bounds is discussed. To further improve the convergence speed of the proposed algorithm, an off-line pre-conditioning method is presented for MPC problems with polyhedral and ellipsoidal constraints.

The second part of the thesis proposes the inexact alternating minimization algorithm, as well as its accelerated variant. In contrast to standard algorithms, inexact methods allow for errors in the update at each iteration. Complexity upper-bounds on the number of iterations in the presence of errors are derived. Based on these bounds, sufficient conditions on the errors that guarantee convergence of the algorithms are presented. The proposed algorithms are applied to distributed optimization problems subject to local computation and communication errors, with an emphasis on distributed MPC applications, and their convergence properties for this special case are analysed. Motivated by the complexity upper-bounds of the inexact proximal gradient method, two distributed optimization algorithms with an iteratively refining quantization design are proposed for solving distributed optimization problems under a limited communication data-rate.
We show that if the parameters of the quantizers satisfy certain conditions, then the quantization error decreases linearly while only a fixed number of bits is transmitted at each iteration, and convergence of the distributed algorithms is guaranteed. The proposed methods are further extended to distributed optimization problems with time-varying parameters.
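To make the splitting idea concrete, the basic proximal gradient iteration underlying these methods can be sketched as follows. This is a minimal NumPy illustration on a lasso-type problem, min 0.5||Ax − b||² + λ||x||₁; the problem data, step size, and regularizer here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by alternating a
    gradient step on the smooth part with a proximal step on the
    non-smooth part."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth term
        x = prox_l1(x - step * grad, step * lam)
    return x

# Illustrative data: a diagonal A makes the solution easy to check by hand.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
L = np.linalg.norm(A.T @ A, 2)              # Lipschitz constant of the gradient
x = proximal_gradient(A, b, lam=0.1, step=1.0 / L)
```

With this diagonal A the problem separates per coordinate, so the soft-thresholded solution (0.9, 0.975) can be verified analytically; convergence-rate bounds of the kind derived in the thesis control how many such iterations are needed to reach a given accuracy.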
