Abstract

The research community has made significant progress in hardware implementation, numerical computing, and algorithm development for optimization-based control. However, two key challenges must still be overcome for optimization-based control to become a viable option in advanced industrial applications. The first is the large gap between algorithm development and deployment on the platforms used by practitioners in industry. The second, from a more theoretical viewpoint, is the lack of robustness of certain approaches, which rely on the unreasonable assumption that the model at hand perfectly represents the object under investigation. This thesis addresses these challenges by establishing software toolboxes for automatic code generation and by proposing a data-driven methodology to enhance the performance of real-time optimization strategies during operation.

The first part of this thesis focuses on the efficient implementation of Model Predictive Control (MPC) based on first-order operator splitting methods. Because of the cheap numerical operations they involve, splitting methods are favorable candidates for applications with limited computing power. We first identify the computational bottlenecks and then discuss their efficient deployment on processors, Field Programmable Gate Arrays (FPGAs), and heterogeneous platforms. For rapid prototyping and deployment, two code generation toolboxes are developed: SPLIT and LAFF. They provide a high-level parsing interface for MATLAB and yield optimized C code that can be used directly on a variety of FPGA platforms. Features such as pipelining, memory partitioning, and parallelization are incorporated automatically, so users do not need in-depth knowledge of computer architecture or low-level programming. We then propose a framework to solve, a priori, the co-design problem arising in splitting-method-based MPC, providing trade-offs between resources and latency. We derive analytical expressions that avoid the daunting and time-consuming task of exploring the design space manually, thus reducing overall application development time.

The second part of the thesis deals with learning plant-model mismatch using Gaussian processes (GPs) in Real-Time Optimization (RTO) schemes. Inaccurate models, the presence of disturbances, and time-varying conditions typically lead to suboptimal operation of many plants. We use data-driven global surrogate models in the form of GPs to cope with such problems and show better numerical convergence and more effective handling of noise than standard RTO techniques. We moreover prove that GPs can be certified as probabilistic and deterministic fully linear models, a key property for guaranteeing global convergence of derivative-free trust-region (DFT) methods. We then propose a novel DFT methodology that incorporates noise and requires fewer plant evaluations than other alternatives. Finally, we conclude this work with experiments on a Solid-Oxide Fuel Cell system.
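To give a flavor of why first-order splitting methods suit hardware with limited computing power, the following is a minimal sketch, not output of the SPLIT or LAFF toolboxes, of a projected-gradient iteration for a box-constrained MPC quadratic program. All names and problem data are illustrative assumptions; the point is that each iteration needs only one matrix-vector product and one clipping step, operations that map naturally onto pipelined and parallelized FPGA circuits.

```python
import numpy as np

# Illustrative sketch (not the thesis's generated code): projected
# gradient for a condensed, box-constrained MPC QP
#   min 0.5*z'Hz + f'z   s.t.   lb <= z <= ub
# Per iteration: one matrix-vector product and one projection (clip).

def projected_gradient_qp(H, f, lb, ub, iters=200):
    L = np.linalg.norm(H, 2)                 # Lipschitz constant of the gradient
    step = 1.0 / L
    z = np.clip(np.zeros_like(f), lb, ub)    # feasible starting point
    for _ in range(iters):
        grad = H @ z + f                     # single matrix-vector product
        z = np.clip(z - step * grad, lb, ub) # projection onto the box
    return z

# Toy problem data (hypothetical, for illustration only)
H = np.array([[4.0, 1.0], [1.0, 2.0]])
f = np.array([-1.0, -1.0])
lb, ub = np.array([-0.5, -0.5]), np.array([0.5, 0.5])
print(projected_gradient_qp(H, f, lb, ub))
```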
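Similarly, the following is a minimal sketch of the idea behind GP-based learning of plant-model mismatch in RTO: a Gaussian-process surrogate is fitted to the mismatch observed at past operating points and then queried, with uncertainty, at candidate inputs. The squared-exponential kernel, its hyperparameters, and the data below are assumptions made for illustration; this is not the thesis's actual RTO scheme or its DFT methodology.

```python
import numpy as np

# Illustrative sketch: GP regression of plant-model mismatch
# observed at past operating points, queried at new inputs.

def se_kernel(A, B, length=1.0, var=1.0):
    # Squared-exponential kernel between two sets of inputs
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    K = se_kernel(X, X) + noise * np.eye(len(X))
    Ks = se_kernel(X, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha                                  # predicted mismatch
    cov = se_kernel(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Past inputs and the observed mismatch = plant output - model output
U = np.array([[0.1], [0.4], [0.7], [1.0]])
mismatch = np.array([0.02, -0.05, 0.01, 0.08])
u_new = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mu, sd = gp_posterior(U, mismatch, u_new)
print(mu, sd)
```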
