On the Role of Constraints in Optimization under Uncertainty

This thesis addresses the problem of industrial real-time process optimization in the presence of uncertainty. Since a process model is typically used to compute the optimal operating conditions, both plant-model mismatch and process disturbances can result in suboptimal or, worse, infeasible operation. Hence, for practical applications, methodologies that help avoid re-optimization during process operation, at the cost of an acceptable optimality loss, become important. The design and analysis of such approximate solution strategies in real-time optimization (RTO) demand a careful examination of the components of the necessary conditions of optimality. This thesis analyzes the role of constraints in process optimality in the presence of uncertainty.

The analysis proceeds in two steps. First, a general analysis is developed to quantify the effect of input adaptation on process performance for static RTO problems. Second, the general features of input adaptation for dynamic RTO problems are analyzed, with a focus on the constraints. Accordingly, the thesis is organized in two parts:

- for static RTO, a joint analysis of the model optimal inputs, the plant optimal inputs and a class of adapted inputs;
- for dynamic RTO, an analytical study of the effect of local adaptation of the model optimal inputs.

The first part (Chapters 2 and 3) addresses the problem of adapting the inputs to optimize the plant. The investigation takes a constructive viewpoint, but it is limited to static RTO problems modeled as parametric nonlinear programming (pNLP) problems. In this approach, the inputs are not limited to being local adaptations of the model optimal inputs; instead, they can change significantly to optimize the plant. Hence, one must account for the fact that the sets of active constraints for the model and for the plant can differ.
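As a toy illustration of such an active-set change, consider the following hypothetical scalar pNLP (purely illustrative, not taken from the thesis): minimize (u − 1 − η)² subject to u ≤ 1. At the nominal parameter value η = 0 the bound is weakly active (zero multiplier), so the strict nominal active set is empty; for η > 0 the bound is strongly active, whereas for η < 0 it is inactive. The sketch below also checks that simply keeping the feasible nominal input u = 1 incurs only a quadratic loss in η:

```python
# Toy scalar pNLP (illustrative, not from the thesis):
#   min_u (u - 1 - eta)^2   s.t.  u <= 1
# At eta = 0 the bound u <= 1 is weakly active (zero multiplier),
# so the strict nominal active set is empty.

def plant_optimum(eta):
    """True parametric optimum: unconstrained minimizer clipped at the bound."""
    return min(1.0 + eta, 1.0)

def cost(u, eta):
    return (u - 1.0 - eta) ** 2

def active_set(u):
    """Active constraints at u (here only the single bound u <= 1)."""
    return {"u<=1"} if abs(u - 1.0) < 1e-12 else set()

# The active set changes with the sign of eta ...
assert active_set(plant_optimum(0.1)) == {"u<=1"}
assert active_set(plant_optimum(-0.1)) == set()

# ... yet keeping the feasible nominal input u = 1 loses only O(eta^2):
for eta in (-0.1, -0.01, -0.001):
    loss = cost(1.0, eta) - cost(plant_optimum(eta), eta)
    print(f"eta = {eta:7.3f}   optimality loss = {loss:.2e}")
```

Here the loss shrinks quadratically as η is reduced, even though the active set of the perturbed problem differs from the nominal one.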
It is proven that, for a wide class of systems, detecting a change in the active set contributes only negligibly to optimality, as long as the adapted solution remains feasible. More precisely, if η denotes the magnitude of the parametric variations, and if the linear independence constraint qualification (LICQ) and the strong second-order sufficient condition (SSOSC) hold for the underlying pNLP, then the optimality loss due to any feasible input that conserves only the strict nominal active set is of magnitude O(η²), irrespective of whether or not the set of active constraints changes. The implication of this result for a static RTO algorithm is that it should prioritize the satisfaction of only a core set of constraints, as long as the feasibility requirements can be met.

The second part (Chapters 4 and 5) of the thesis deals with a way of adapting the model optimal inputs in dynamic RTO problems. This adaptation is made along two sets of directions, such that one type of adaptation does not affect the nominally active constraints, while the other does. These directions are termed the sensitivity-seeking (SS) and constraint-seeking (CS) directions, respectively. The SS and CS directions are defined as elements of a fairly general function space of input variations. A mathematical criterion is derived to define the SS directions for a general class of optimal control problems involving both path and terminal constraints. According to this criterion, the SS directions turn out to be solutions of linear integral equations that are completely defined by the model optimal solution. The CS directions are then chosen orthogonal to the subspace of SS directions, where orthogonality is defined with respect to a chosen inner product on the space of input variations. It follows that the corresponding subspaces are infinite-dimensional subspaces of the function space of input variations.
It is proven that, when uncertainty is modeled in terms of small parametric variations, the aforementioned classification of input adaptation leads to clearly distinguishable cost variations. More precisely, if η denotes the magnitude of the parametric variations, adaptation of the model optimal inputs along SS directions causes a cost variation of magnitude O(η²). In contrast, the cost variation due to input adaptation along CS directions is of magnitude O(η). Furthermore, a numerical procedure is proposed for computing the SS and CS components of a given input variation. These components are the projections of the input variation onto the infinite-dimensional subspaces of SS and CS directions. The numerical procedure consists of three steps: (i) approximation of the optimal control problem by a pNLP problem; (ii) projection of the given direction onto the finite-dimensional SS and CS subspaces of the pNLP; and (iii) reconstruction of the SS and CS components of the original problem from those of the pNLP.
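The projection step of this procedure can be sketched in finite dimensions. In the minimal sketch below, the SS subspace of the discretized problem is assumed to be spanned by the columns of an arbitrary matrix S (purely illustrative, not computed from an actual pNLP), and the CS component is obtained as the residual of the orthogonal projection, here with respect to the Euclidean inner product:

```python
import numpy as np

# Finite-dimensional sketch of the projection step: after discretizing
# the input variation on n grid points, suppose the SS subspace of the
# resulting pNLP is spanned by the columns of S (illustrative basis).
n = 8
rng = np.random.default_rng(0)
S = rng.standard_normal((n, 3))      # assumed SS basis of the pNLP
delta_u = rng.standard_normal(n)     # a given (discretized) input variation

# Orthogonal projector onto span(S) w.r.t. the Euclidean inner product
P_ss = S @ np.linalg.solve(S.T @ S, S.T)
du_ss = P_ss @ delta_u               # sensitivity-seeking component
du_cs = delta_u - du_ss              # constraint-seeking component

# By construction, the CS component is orthogonal to every SS direction,
# and the two components sum back to the original variation.
assert np.allclose(S.T @ du_cs, 0.0)
assert np.allclose(du_ss + du_cs, delta_u)
```

For a non-Euclidean inner product <x, y> = xᵀ W y, the same decomposition would use the weighted projector S(SᵀWS)⁻¹SᵀW instead.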
