Article

Responsive Economic Model Predictive Control for Next-Generation Manufacturing

Wayne State University, Detroit, MI 48202, USA
Current address: 5050 Anthony Wayne Drive, Detroit, MI 48202, USA.
Mathematics 2020, 8(2), 259; https://doi.org/10.3390/math8020259
Submission received: 31 December 2019 / Revised: 10 February 2020 / Accepted: 11 February 2020 / Published: 16 February 2020
(This article belongs to the Special Issue Mathematics and Engineering)

Abstract
There is an increasing push to make automated systems capable of carrying out tasks that humans perform, such as driving, speech recognition, and anomaly detection. Automated systems are therefore increasingly required to respond to unexpected conditions. Two types of unexpected conditions relevant to the chemical process industries are anomalous process conditions and the responses of operators and engineers to controller behavior. Economic model predictive control (EMPC) is an advanced control design that uses predictions of future process behavior to determine an economically optimal manner in which to operate a process. Enhancing the responsiveness of EMPC to unexpected conditions of these two types would push the artificial intelligence properties of this controller beyond those it has today and would provide new perspectives on interpretability and verification for the controller. This work provides theoretical studies that relate nonlinear systems considerations for EMPC to these higher-level concepts using two ideas for EMPC formulations, motivated by two specific situations: self-modification of a control design after human perceptions of the process response are received, and controller handling of anomalies.

1. Introduction

The buzz around artificial intelligence (AI), machine learning, and data in recent years has sparked both excitement and skepticism from the process systems engineering community [1,2]. Some of the most prevalent uses of data in the process systems field have included its use in developing models of various processes (e.g., Reference [3]) with potential applications in model-based control [4], in learning control laws [5,6], and in process monitoring [7,8]. Control engineers have debated about whether control itself should be considered to be artificial intelligence, particularly as control laws become more advanced. For example, a particularly intelligent form of control (known as economic model predictive control (EMPC) [9,10,11,12]) is an optimization-based control strategy that determines the optimal manner in which to operate a chemical process in the sense that the control actions optimize a profit metric for the process over a prediction horizon, subject to process constraints. The significant potential benefits of this control law for next-generation manufacturing have prompted a wide range of investigations in the context of EMPC, including how it may be used for building temperature regulation [13], wastewater treatment [14], microgrid dispatch [15], and gas pipeline networks [16]. Though chemical processes have traditionally been operated at steady-state, EMPC does not necessarily enforce steady-state operation in its efforts to optimize process economic performance. This has raised key questions for this control design regarding important properties of intelligent systems such as interpretability of its operating strategy and verification that it will work correctly for the real environment that it will need to control and interact with.
Interpretability is a desirable property for artificially intelligent systems and has been considered in a variety of contexts. For example, building interpretable data-driven models has been considered to be aided by sparse regression, in which a model is derived using only a small number of the available candidate terms (with the underlying assumption that simpler models are more physically realistic and therefore should be more interpretable) [17]. Models identified via sparse regression techniques have been utilized in model predictive control for hydraulic fracturing [18]. Interpretability has also been a consideration for other model-building strategies. For neural networks, interpretability may be considered to be multidimensional but generally concerns whether a human can trace how the network obtained its conclusions from the manner in which the input information was processed [19]; for example, recurrent neural networks with long short-term memory have been analyzed to determine how their cells process different aspects of character-level language models [20].
It is recognized that interpretability of the control actions computed by an EMPC will be a major determining factor in the adoption of EMPC in the process industries: if operators and engineers cannot tell whether the process is in an upset condition, they will likely disable controller features that are difficult to understand, because they must be sure that safety is maintained at all times. Interpretability for EMPC has not yet received significant focus in the literature. The subset of EMPC formulations which track a steady-state [21] possess a form of interpretability in that the reference behavior is understood by engineers and operators. Reference [22] developed an EMPC formulation in which the desired closed-loop process response specified or restricted by an operator or engineer is tracked by the controller. However, the best means for ensuring interpretability for EMPC, appropriately trading off end-user understanding against economic optimality, remains a largely open question. This work provides new perspectives on this important issue, suggesting that a controller formulation that bridges the human–machine interface by allowing constraints to be adjusted in response to human opinions about the process behavior under the EMPC may provide new avenues for both democratizing advanced control and allowing end users to adjust the response to their liking from an interpretability standpoint.
Another important topic for intelligent control systems is enabling their verification (i.e., certifying that they will perform in practice as intended). Verification can take a significant amount of engineering time and expense, and methods for reducing the time required to validate the controller’s performance could reduce the cost of advanced control, could promote operational safety, and could make the controller more straightforward to implement (a lack of ability to verify can prevent an intelligent system from being placed online at all). In the control community, a traditional approach to verification is to design controllers with guaranteed robustness to bounded uncertainty and to use this as a certificate that the controller will be able to maintain closed-loop stability in practice (e.g., References [23,24,25]). This requires some knowledge of the disturbance characteristics (e.g., upper bounds), which may be difficult to fully determine a priori but is important for EMPC, as the controller could drive the closed-loop state to operate at boundaries of safe operating regions to optimize profits, where the uncertainty in the disturbance characteristics could lead to unsafe conditions. Additional conservatism to account for the uncertainty could lead to over-conservatism that could decrease profits. Other methods for handling disturbances in EMPC have been developed, including methods that account for disturbances probabilistically (making assumptions on their distribution) [26] or adapting models used by the predictive controller online (e.g., References [27,28,29]). Results on the use of adapting models in EMPC have even included closed-loop stability guarantees when a recurrent neural network that is updated via error triggering is used as the process model [30]. An example of an adaptive control strategy which handles uncertain dynamics in batch processing is that in Reference [31], which uses model predictive control equipped with a probabilistic recursive least squares model parameter update algorithm with a forgetting factor to capture batch process dynamics. In addition, Reference [32] analyzed a learning-based MPC strategy with a terminal constraint for systems with unmodeled dynamics, where performance is enhanced by using a learned model in the MPC but safety goals are met by ensuring that control actions computed via the MPC are stabilizing.
Another direction that has received attention for handling uncertainty is fault tolerance in the sense of controller reconfiguration upon detection of an actuator fault/anomaly (e.g., Reference [33]) or anomaly response cast in a framework of fault-tolerant control, handled via fault/anomaly detection followed by updating the model used by a model-based controller [34]. In Reference [35], fault-tolerant control for nonlinear switched systems was analyzed in the context of safe parking for model predictive control with a steady-state tracking objective function for actuator faults. For EMPC, Reference [36] handled faults through error-triggered data-driven model updates in the controller, and the uniting of EMPC with driving the state into safety-based regions in state-space (e.g., References [37,38]) also constitutes a form of fault handling. Despite these advances in handling anomalies and uncertainty, which are critical steps toward a verification paradigm for EMPC, verifying the controller today would still be expected to be time-consuming; additional work is needed to explore further ways of considering and establishing verification for the control design.
Another approach to verification of controllers has been online verification via data-driven models complemented by detection algorithms for problematic controller behavior, leading to bounds on the time that would elapse before such behavior is detected [39]. A feature of this direction in verification, therefore, is the combination of data-driven modeling for control (to address model uncertainty) with guarantees that problematic behavior due to model inaccuracies can be flagged within a given time period. In the present work, we take a conceptually similar approach to verification for EMPC using online anomaly handling with a conservative Lyapunov-based EMPC (LEMPC) [24] design. Under sufficient conditions, this design facilitates guaranteed detection of significant plant/model mismatch and allows upper bounds to be placed on the time available until the mismatch must be compensated via model updates if closed-loop stability is not to be compromised, along with a characterization of the resulting control law after the model reidentification required to obtain these theoretical results. The development of theoretical guarantees on closed-loop stability with data-driven models that can be updated online in LEMPC has some similarities to References [30,40] but is pursued from a different angle that allows the underlying process dynamics to suddenly change and also allows more general nonlinear data-driven models to be considered (i.e., we do not restrict the modeling methodology to neural networks as in References [30,40]). It also has similarities to the framework for accounting for faults in LEMPC via model updates in Reference [41] but considers a theoretical treatment of anomaly conditions with data-driven LEMPC, which was not explored in that work.
Motivated by the above considerations, this work focuses on advancing both interpretability and verification for EMPC. These are important considerations for human–machine interaction and can be viewed as different aspects of a “responsive” control design, in the sense that the controller is made responsive to changing or unexpected conditions as a human would be. We first address the interpretability concept suggested above in an LEMPC framework, elucidating conditions under which an LEMPC could be made responsive to potentially inaccurate metrics reflecting the reactions of end users to the LEMPC’s behavior without loss of closed-loop stability. We subsequently move toward addressing verification considerations for LEMPC by developing theoretical guarantees which can be made for the controller in the presence of process dynamics anomalies/changes when potentially adapting data-driven models are used in the controller. We evaluate the conditions under which closed-loop stability may be lost in such circumstances and explore bounds on the time within which the anomaly must be detected and accommodated to avoid potential plant shutdown. Numerical examples utilizing continuous stirred tank reactors (CSTRs) are presented to illustrate major concepts. Throughout, we highlight cases where the proposed methods could interface with other artificial intelligence techniques (e.g., sentiment analysis or image-based sensing) without compromising closed-loop stability, highlighting the range of intelligent techniques which can be used to enhance next-generation control within an appropriate theoretical framework.
This work is organized as follows: in Section 2, preliminaries are presented. These are followed by the main results in Section 3, which consist of controller formulations and implementation strategies, with demonstration via numerical examples, where (1) the controller constraints can be adjusted online in response to potentially inaccurate stimuli without closed-loop stability being lost (Section 3.1) and (2) the control strategy has characterizable properties in the presence of process anomalies resulting in unanticipated changes in the underlying process dynamics (Section 3.2). Section 4 concludes and provides an outlook on the presented results. Proofs for theoretical results associated with the second control strategy noted above are provided in the Appendix. This manuscript is an extended version of Reference [42].

2. Preliminaries

2.1. Notation

The operator $|\cdot|$ denotes the Euclidean norm of a vector. A function $\alpha : [0, a) \to [0, \infty)$ is in class $\mathcal{K}$ if it is continuous, strictly increasing, and satisfies $\alpha(0) = 0$. The notation $\Omega_\rho$ denotes a level set of a scalar-valued function $V$ (i.e., $\Omega_\rho := \{x \in \mathbb{R}^n : V(x) \le \rho\}$). The operator “/” signifies set subtraction (i.e., $A/B := \{x \in \mathbb{R}^n : x \in A, x \notin B\}$). $x^T$ represents the transpose of the vector $x$. Sampling times are denoted by $t_k := k\Delta$, $k = 0, 1, \ldots$.

2.2. Class of Systems

This work considers switched nonlinear systems of the following form:
$$\dot{x}_{a,i} = f_i(x_{a,i}(t), u(t), w_i(t))$$
where $x_{a,i} \in X \subset \mathbb{R}^n$ denotes the state vector, $u \in U \subset \mathbb{R}^m$ denotes the input vector ($u = [u_1 \; \cdots \; u_m]^T$), and $w_i \in W_i \subset \mathbb{R}^z$ denotes the disturbance vector, where $W_i := \{w_i \in \mathbb{R}^z : |w_i| \le \theta_i, \; \theta_i > 0\}$, for $i = 1, 2, \ldots$. In this notation, the $i$th model is used for $t \in [t_{s,i}, t_{s,i+1})$, where $x_{a,i}(t_{s,i+1}) = x_{a,i+1}(t_{s,i+1})$ and $t_{s,1} = t_0$. The vector function $f_i$ is assumed to be a locally Lipschitz function of its arguments with $f_1(0,0,0) = 0$ and $f_i(x_{a,i,s}, u_{i,s}, 0) = 0$ for $i > 1$ (i.e., the steady-state of the updated models when $w_i \equiv 0$ is at $x_{a,i} = x_{a,i,s}$, $u = u_{i,s}$). The system of Equation (1) with $w_i \equiv 0$ is known as the nominal system. Synchronous measurement sampling is assumed, with measurements available at every $t_k = k\Delta$, $k = 0, 1, \ldots$. It is noted that $t_{s,i}$, $i = 1, 2, \ldots$, is not required to be an integer multiple of $\Delta$ (i.e., switches need not occur at sampling times). We define $\bar{x}_{a,i} = x_{a,i} - x_{a,i,s}$ and $\bar{u}_i = u - u_{i,s}$ and define $\bar{f}_i$ as $f_i$ rewritten to have its origin at $\bar{x}_{a,i} = 0$, $\bar{u}_i = 0$, $w_i = 0$. Similarly, we define $U_i$ to be the set $U$ in deviation variable form from $u_{i,s}$ and $X_i$ to be the set $X$ in deviation variable form from $x_{a,i,s}$.
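To make the switching structure of Equation (1) concrete, the following minimal Python sketch simulates a scalar example in which the active right-hand side changes at a switching time. The dynamics $f_1$ and $f_2$, the switching time, the disturbance signal, and the constant input below are illustrative assumptions and are not taken from this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative right-hand sides f_1 and f_2 for a scalar switched system
# (hypothetical dynamics; the f_i of Equation (1) are general nonlinear vector fields).
def f1(x, u, w):
    return -x + u + w                   # pre-switch dynamics, steady-state at the origin

def f2(x, u, w):
    return -2.0 * (x - 0.5) + u + w     # post-switch dynamics with a shifted steady-state x_{a,2,s}

t_s2 = 2.0        # assumed switching time t_{s,2}
theta = 0.05      # assumed disturbance bound, |w_i| <= theta
u_const = 0.0     # constant input for illustration

def rhs(t, x):
    w = theta * np.sin(5.0 * t)         # bounded disturbance w_i(t)
    f = f1 if t < t_s2 else f2          # the i-th model is active on [t_{s,i}, t_{s,i+1})
    return [f(x[0], u_const, w)]

# State continuity at the switch, x_{a,i}(t_{s,i+1}) = x_{a,i+1}(t_{s,i+1}), is automatic
# here because a single trajectory is integrated across t_s2.
sol = solve_ivp(rhs, (0.0, 5.0), [1.0], max_step=0.01)
print(sol.y[0, -1])
```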
We assume that there exists an explicit stabilizing (Lyapunov-based) control law $h_i(\bar{x}_{a,i}) = [h_{i,1}(\bar{x}_{a,i}) \; \cdots \; h_{i,m}(\bar{x}_{a,i})]^T$ that renders the origin of the nominal system of Equation (1) asymptotically stable in the sense that the following inequalities hold:
$$\alpha_{1,i}(|\bar{x}_{a,i}|) \le V_i(\bar{x}_{a,i}) \le \alpha_{2,i}(|\bar{x}_{a,i}|)$$
$$\frac{\partial V_i(\bar{x}_{a,i})}{\partial \bar{x}_{a,i}} \bar{f}_i(\bar{x}_{a,i}, h_i(\bar{x}_{a,i}), 0) \le -\alpha_{3,i}(|\bar{x}_{a,i}|)$$
$$\left| \frac{\partial V_i(\bar{x}_{a,i})}{\partial \bar{x}_{a,i}} \right| \le \alpha_{4,i}(|\bar{x}_{a,i}|)$$
$$h_i(\bar{x}_{a,i}) \in U_i$$
for all $\bar{x}_{a,i} \in D_i \subset \mathbb{R}^n$ and $i = 1, 2, \ldots$, where $D_i$ is an open neighborhood of the origin of $\bar{f}_i$, and for a positive definite, sufficiently smooth Lyapunov function $V_i$. The functions $\alpha_{1,i}$, $\alpha_{2,i}$, $\alpha_{3,i}$, and $\alpha_{4,i}$ are of class $\mathcal{K}$. A level set of $V_i$ denoted by $\Omega_{\rho_i} \subset D_i$ is referred to as the stability region of the system of Equation (1) under the controller $h_i(\bar{x}_{a,i})$. We consider that $\Omega_{\rho_i}$ is selected to be contained within $X$. The Lyapunov-based controller is assumed to be Lipschitz continuous such that the following inequalities hold:
$$|h_{i,j}(x) - h_{i,j}(x')| \le L_{h,i} |x - x'|$$
for a positive constant $L_{h,i}$, for all $x, x' \in \Omega_{\rho_i}$ and $i = 1, 2, \ldots$, with $j = 1, \ldots, m$.
Lipschitz continuity of $f_i$ and sufficient smoothness of $V_i$ provide the following inequalities, for positive constants $M_i$, $L_{x,i}$, $L_{w,i}$, $L'_{x,i}$, and $L'_{w,i}$:
$$|\bar{f}_i(x, u, w_i)| \le M_i$$
$$|\bar{f}_i(x, u, w_i) - \bar{f}_i(x', u, 0)| \le L_{x,i} |x - x'| + L_{w,i} |w_i|$$
$$\left| \frac{\partial V_i(x)}{\partial x} \bar{f}_i(x, u, w_i) - \frac{\partial V_i(x')}{\partial x} \bar{f}_i(x', u, 0) \right| \le L'_{x,i} |x - x'| + L'_{w,i} |w_i|$$
for all $x, x' \in \Omega_{\rho_i}$, $u \in U_i$, and $w_i \in W_i$.
As this work considers responses to unexpected conditions, we consider that there may be cases in which the nonlinear model of Equation (1) may not be available, though an empirical model with the following form may be available:
$$\dot{x}_{b,q}(t) = f_{NL,q}(x_{b,q}(t), u(t))$$
where $f_{NL,q}$ is a locally Lipschitz nonlinear vector function of $x_{b,q} \in \mathbb{R}^n$ and of the input $u \in \mathbb{R}^m$ with $f_{NL,1}(0,0) = 0$ and $f_{NL,q}(x_{b,q,s}, u_{q,s}) = 0$ for $q > 1$ (i.e., the steady-state of the updated models is at $x_{b,q} = x_{b,q,s}$, $u = u_{q,s}$). Here, $q = 1, 2, \ldots$, to allow for the possibility that, as the underlying process dynamics change (i.e., the value of $i$ increases in Equation (1)), it may be desirable to switch the empirical model used to describe the system. However, we utilize the index $q$ instead of $i$ for the empirical model to signify that we do not assume that the empirical model must switch with the same frequency as the process dynamics. When the model of Equation (10) does switch, we assume that the switch occurs at a time $t_{s,NL,q+1}$ in a manner where $x_{b,q}(t_{s,NL,q+1}) = x_{b,q+1}(t_{s,NL,q+1})$. We define $\bar{x}_{b,q} = x_{b,q} - x_{b,q,s}$ and $\bar{u}_q = u - u_{q,s}$ and define $\bar{f}_{NL,q}$ as $f_{NL,q}$ rewritten to have its origin at $\bar{x}_{b,q} = 0$, $\bar{u}_q = 0$, as follows:
$$\dot{\bar{x}}_{b,q}(t) = \bar{f}_{NL,q}(\bar{x}_{b,q}(t), \bar{u}_q(t))$$
Similarly, we define $U_q$ to be the set $U$ in deviation variable form from $u_{q,s}$ and $X_q$ to be the set $X$ in deviation variable form from $x_{b,q,s}$.
We consider that, for the empirical models in Equation (10), there exists a locally Lipschitz explicit stabilizing controller $h_{NL,q}(\bar{x}_{b,q})$ that can render the origin asymptotically stable in the sense that:
$$\hat{\alpha}_{1,q}(|\bar{x}_{b,q}|) \le \hat{V}_q(\bar{x}_{b,q}) \le \hat{\alpha}_{2,q}(|\bar{x}_{b,q}|)$$
$$\frac{\partial \hat{V}_q(\bar{x}_{b,q})}{\partial \bar{x}_{b,q}} \bar{f}_{NL,q}(\bar{x}_{b,q}, h_{NL,q}(\bar{x}_{b,q})) \le -\hat{\alpha}_{3,q}(|\bar{x}_{b,q}|)$$
$$\left| \frac{\partial \hat{V}_q(\bar{x}_{b,q})}{\partial \bar{x}_{b,q}} \right| \le \hat{\alpha}_{4,q}(|\bar{x}_{b,q}|)$$
$$h_{NL,q}(\bar{x}_{b,q}) \in U_q$$
for all $\bar{x}_{b,q} \in D_{NL,q}$ (where $D_{NL,q}$ is a neighborhood of the origin of $\bar{f}_{NL,q}$ contained in $X$), where $\hat{V}_q : \mathbb{R}^n \to \mathbb{R}_+$ is a sufficiently smooth Lyapunov function, $\hat{\alpha}_{i,q}$, $i = 1, 2, 3, 4$, are class $\mathcal{K}$ functions, and $q = 1, 2, \ldots$. We define $\Omega_{\hat{\rho}_q} \subset D_{NL,q}$ as the stability region of the system of Equation (10) under $h_{NL,q}$ and $\Omega_{\hat{\rho}_{safe,q}}$ as a superset of $\Omega_{\hat{\rho}_q}$ contained in $D_{NL,q}$ and $X$. Lipschitz continuity of $f_{NL,q}$ and sufficient smoothness of $\hat{V}_q$ imply that there exist $M_{L,q} > 0$ and $L_{L,q} > 0$ such that
$$|\bar{f}_{NL,q}(x, u)| \le M_{L,q}$$
$$\left| \frac{\partial \hat{V}_q(x_1)}{\partial x} \bar{f}_{NL,q}(x_1, u) - \frac{\partial \hat{V}_q(x_2)}{\partial x} \bar{f}_{NL,q}(x_2, u) \right| \le L_{L,q} |x_1 - x_2|$$
for all $x, x_1, x_2 \in \Omega_{\hat{\rho}_q}$, $u \in U_q$, and $q = 1, 2, \ldots$.
Furthermore, we define $\bar{x}_{a,i,q} = x_{a,i} - x_{b,q,s}$ as the variable representing the deviation of each $x_{a,i}$ from the steady-state of the $q$th empirical model of Equation (10) and $\bar{f}_{i,q}$ as the right-hand side of Equation (1) when the model is rewritten in terms of the deviation variables $\bar{x}_{a,i,q}$ and $\bar{u}_q$, as follows:
$$\dot{\bar{x}}_{a,i,q} = \bar{f}_{i,q}(\bar{x}_{a,i,q}(t), \bar{u}_q(t), w_i(t))$$
We assume that the following holds:
$$|\bar{f}_{i,q}(x, u, w) - \bar{f}_{i,q}(x', u, 0)| \le L_{x,i,q} |x - x'| + L_{w,i,q} |w|$$
$$\left| \frac{\partial \hat{V}_q(x)}{\partial x} \bar{f}_{i,q}(x, u, w) - \frac{\partial \hat{V}_q(x')}{\partial x} \bar{f}_{i,q}(x', u, 0) \right| \le L'_{x,i,q} |x - x'| + L'_{w,i,q} |w|$$
for all $x$, $x'$, $u$, and $w$ such that $x + x_{b,q,s} - x_{a,i,s} \in \Omega_{\rho_i}$, $x' + x_{b,q,s} - x_{a,i,s} \in \Omega_{\rho_i}$, $u + u_{q,s} \in U$, and $w \in W_i$, where $L_{x,i,q}$, $L_{w,i,q}$, $L'_{x,i,q}$, $L'_{w,i,q} > 0$. We define $\Omega_{\hat{\rho}_{q,i}}$ to be a level set of $\hat{V}_q$ contained in $\Omega_{\hat{\rho}_{safe,q}}$ that is also contained in $\Omega_{\rho_i}$.

2.3. Economic Model Predictive Control

Economic model predictive control (EMPC) [12] is an optimization-based control design formulated as follows:
$$\min_{\bar{u}_i \in S(\Delta)} \int_{t_k}^{t_{k+N}} L_e(\tilde{\bar{x}}_{a,i}(\tau), \bar{u}_i(\tau))\, d\tau$$
$$\text{s.t.} \quad \dot{\tilde{\bar{x}}}_{a,i}(t) = \bar{f}_i(\tilde{\bar{x}}_{a,i}(t), \bar{u}_i(t), 0)$$
$$\tilde{\bar{x}}_{a,i}(t_k) = x(t_k)$$
$$\bar{u}_i(t) \in U_i, \quad \forall\, t \in [t_k, t_{k+N})$$
$$\tilde{\bar{x}}_{a,i}(t) \in X_i, \quad \forall\, t \in [t_k, t_{k+N})$$
where $L_e(\cdot,\cdot)$ represents the stage cost of the EMPC, which can be a general scalar-valued function that is optimized in Equation (17). The notation $u \in S(\Delta)$ signifies that $u$ is a piecewise-constant input trajectory with period $\Delta$. The prediction horizon is denoted by $N$. Equation (18) represents the nominal process model, with predicted state $\tilde{\bar{x}}_{a,i}$ for the $i$th model. Equations (20) and (21) represent the input and state constraints, respectively. We denote the optimal solution of an EMPC at $t_k$ by $u_p^*(t_j | t_k)$, $p = 1, \ldots, m$, $j = k, \ldots, k+N-1$, where each $u_p^*(t_j | t_k)$ holds for $t \in [t_j, t_{j+1})$ within the prediction horizon. $x(t_k)$ in Equation (19) signifies the state measurement of the actual system at $t_k$ placed in deviation variable form with respect to $x_{a,i,s}$. Due to the potential switching of the underlying process dynamics before the model in Equation (18) is updated, the measurement may come from a dynamic system different from the $i$th model used in Equation (18).
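To illustrate how an EMPC of the form of Equations (17)–(21) can be posed numerically, the sketch below parameterizes the input as a piecewise-constant sequence over the horizon, integrates a nominal model to predict the state, and minimizes the time integral of an economic stage cost subject to input bounds. The scalar model, stage cost, bounds, horizon, and the use of a general-purpose scipy solver are all illustrative assumptions; state constraints (and the Lyapunov-based constraints introduced below) are omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

Delta, N = 0.1, 5                 # sampling period and prediction horizon (assumed values)
u_lb, u_ub = 0.0, 2.0             # input bounds defining the set U (assumed values)

def f_nominal(x, u):
    # Hypothetical nominal model standing in for Equation (18)
    return -x + u

def stage_cost(x, u):
    # Hypothetical economic stage cost L_e (negative of an instantaneous profit)
    return -(u * x - 0.1 * u ** 2)

def predict(x0, u_seq):
    # Integrate the nominal model under the piecewise-constant inputs (u in S(Delta))
    xs, x = [x0], x0
    for u in u_seq:
        sol = solve_ivp(lambda t, z: [f_nominal(z[0], u)], (0.0, Delta), [x])
        x = sol.y[0, -1]
        xs.append(x)
    return np.array(xs)

def empc_objective(u_seq, x_meas):
    xs = predict(x_meas, u_seq)
    # Rectangle-rule approximation of the integral economic cost over the horizon
    return sum(stage_cost(xs[j], u_seq[j]) * Delta for j in range(N))

x_meas = 0.5                                                   # state measurement x(t_k)
res = minimize(empc_objective, np.full(N, 1.0), args=(x_meas,),
               bounds=[(u_lb, u_ub)] * N)
u_apply = res.x[0]   # only the first piece of the optimal trajectory is applied (receding horizon)
print(u_apply)
```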

2.4. Lyapunov-Based Economic Model Predictive Control

A variety of variations on the general EMPC formulation in Equations (17)–(21) have been developed. One such variation which will receive focus in this paper is Lyapunov-based EMPC (LEMPC) [24], which is formulated as in Equations (17)–(21) but with the following Lyapunov-based constraints added as well:
$$V_i(\tilde{\bar{x}}_{a,i}(t)) \le \rho_{e,i}, \quad \forall\, t \in [t_k, t_{k+N}), \quad \text{if } t_k \le t' \text{ and } V_i(x(t_k)) \le \rho_{e,i}$$
$$\frac{\partial V_i(x(t_k))}{\partial x} \bar{f}_i(x(t_k), u(t_k), 0) \le \frac{\partial V_i(x(t_k))}{\partial x} \bar{f}_i(x(t_k), h_i(x(t_k)), 0), \quad \text{if } t_k > t' \text{ or } V_i(x(t_k)) > \rho_{e,i}$$
where $\Omega_{\rho_{e,i}} \subset \Omega_{\rho_i}$ is selected such that the closed-loop state is maintained within $\Omega_{\rho_i}$ over time when the process of Equation (1) is operated under the LEMPC of Equations (17)–(23). $t'$ is a time after which the constraint of Equation (23) is always applied, regardless of the value of $V_i(x(t_k))$. The activation conditions of the LEMPC constraints with respect to $V_i(x(t_k))$ ensure that the LEMPC can maintain closed-loop stability within $\Omega_{\rho_i}$ as well as recursive feasibility.
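The two-mode activation logic of Equations (22) and (23) can be summarized compactly; the sketch below is a schematic of the decision logic only (V_xk, rho_e, and t_prime are placeholders for the quantities defined above), not an implementation of the full optimization problem.

```python
# Schematic of the activation logic for the Lyapunov-based constraints of
# Equations (22) and (23); V_xk, rho_e, and t_prime stand in for V_i(x(t_k)),
# rho_{e,i}, and t' as defined above.
def active_lempc_constraint(V_xk, t_k, t_prime, rho_e):
    """Return which Lyapunov-based constraint the LEMPC enforces at t_k."""
    if t_k <= t_prime and V_xk <= rho_e:
        # Mode 1 (Equation (22)): keep the predicted state within the level set
        # Omega_{rho_{e,i}}, leaving the economic objective otherwise unrestricted.
        return "keep predicted state in Omega_rho_e"
    # Mode 2 (Equation (23)): require V_i to decrease at least as quickly as under
    # the explicit controller h_i, driving the state back toward Omega_{rho_{e,i}}.
    return "enforce contractive constraint based on h_i"

# Example: before t' with the measured state inside Omega_{rho_{e,i}}
print(active_lempc_constraint(V_xk=0.4, t_k=0.0, t_prime=10.0, rho_e=1.0))
```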

2.5. Lyapunov-Based Economic Model Predictive Control with an Empirical Model

Several prior works have developed LEMPC formulations including empirical models [43,44] when the model of Equation (1) is either unknown or undesirable for use (e.g., more computationally intensive than an empirical model). They have the following form:
$$\min_{\bar{u}_q \in S(\Delta)} \int_{t_k}^{t_{k+N}} L_e(\bar{x}_{b,q}(\tau), \bar{u}_q(\tau))\, d\tau$$
$$\text{s.t.} \quad \dot{\bar{x}}_{b,q}(t) = \bar{f}_{NL,q}(\bar{x}_{b,q}(t), \bar{u}_q(t))$$
$$\bar{x}_{b,q}(t_k) = x(t_k)$$
$$\bar{x}_{b,q}(t) \in X_q, \quad \forall\, t \in [t_k, t_{k+N})$$
$$\bar{u}_q(t) \in U_q, \quad \forall\, t \in [t_k, t_{k+N})$$
$$\hat{V}_q(\bar{x}_{b,q}(t)) \le \hat{\rho}_{e,q}, \quad \forall\, t \in [t_k, t_{k+N}), \quad \text{if } x(t_k) \in \Omega_{\hat{\rho}_{e,q}}$$
$$\frac{\partial \hat{V}_q(x(t_k))}{\partial x} \bar{f}_{NL,q}(x(t_k), u(t_k)) \le \frac{\partial \hat{V}_q(x(t_k))}{\partial x} \bar{f}_{NL,q}(x(t_k), h_{NL,q}(x(t_k))), \quad \text{if } x(t_k) \notin \Omega_{\hat{\rho}_{e,q}} \text{ or } t_k \ge t'$$
where the notation follows that found in Equations (17)–(23) except that the predictions from the nonlinear empirical model are denoted by $\bar{x}_{b,q}$ (Equation (24b)) and are initialized from a measurement of the state of the $i$th system of Equation (1) (i.e., from the state measurement of whichever model describes the process dynamics at $t_k$). Regardless of which dynamic model describes the underlying process dynamics, the $q$th empirical model, along with the state (Equation (24d)) and Lyapunov-based stability constraints corresponding to that model, is used.

3. Responsive Economic Model Predictive Control Design

The next sections present two concepts for moving toward interpretability and verifiability goals for EMPC, cast within a framework of making EMPC more responsive to “unexpected” behavior.

3.1. Automated Control Law Redesign

In this section, we focus on a case in which the process model used does not change over time (i.e., the $i = 1$ process model in Equation (1) is used for all time). We consider the problem that, despite the push toward next-generation manufacturing, many companies that may benefit from automation can have difficulty implementing the appropriate advances if they do not have a knowledgeable control engineer on site, due both to a lack of knowledge of advanced control and to a lack of interpretability of the controller’s actions. We present one idea for making an LEMPC easier to work with by giving it a “self-design” capability that allows the controller to update its formulation in a manner that satisfies end-user requirements without requiring understanding of the control law on the part of the end users. Critically, closed-loop stability and recursive feasibility guarantees are retained. This can be considered to be a case in which the human response to the operating strategy is “unexpected” (in the sense that it is not easily predictable by the control designer), but the controller must have the ability to adjust its control law in response to the human reaction.
The first step toward designing an appropriate controller for this scenario is to recognize that the human response to the process behavior is some function of the pattern observed in the state and input data and that the pattern is dictated by the control formulation. For EMPC, for example, it is dictated by the constraints and objective function (though the process model of Equation (18) also plays a role in determining the response, we consider that the model must represent the process at hand and that therefore it cannot be tuned to impact the state/input behavior). Conceptually, then, the solution to handling the “unexpected” response of the end user of the controller is to learn the mapping between the end user’s satisfaction with the response and the constraint/objective function formulation and then to use that mapping to find the constraint/objective function formulation that provides “optimal” satisfaction to the end user.
An open question is how to do this and, in particular, how to do it in a manner that provides theoretical guarantees on feasibility/closed-loop stability. To demonstrate this challenge, consider the LEMPC of Equations (17)–(23). The theoretical results for LEMPC which guarantee closed-loop stability and recursive feasibility under sufficient conditions when no changes occur in the underlying process dynamics rely on the constraints of Equations (22) and (23) being present in the control design [24]. Therefore, ad hoc constraint development in an attempt to optimize end-user “satisfaction” with the process response would not be a means for providing closed-loop stability and recursive feasibility guarantees. Instead, any modification of constraints must take place in a more rigorously defined manner.
One approach would be to develop constraints for EMPC which allow “tuning” of the process response but impact neither closed-loop stability nor feasibility as the tuning parameter in these constraints is adjusted. They thus offer some flexibility to the end user in modifying the response but also ensure that the end user’s power to adjust the control law is appropriately restricted for feasibility/stability purposes. An example of constraints which meet this requirement is the input rate of change constraints added to LEMPC in Reference [45]. In the following section, we will discuss in detail how these constraints may be incorporated within the proposed framework for providing an end user with a restricted flexibility in adjusting the process response without losing theoretical properties of LEMPC.
Remark 1.
The question of how the human response may be accurately sensed is outside the scope of the present manuscript. A process example will be provided below in which the end user is assumed to take time to rank his or her “satisfaction” with the process behavior under a number of different controllers to develop a mapping between satisfaction and the tuning parameter of the control law. However, human responses could also be considered to be obtained through other machine learning/artificial intelligence methods, such as sentiment analysis [46].
Remark 2.
Potential benefits of an approach that adjusts the controller’s behavior based on the end user’s response (rather than assuming that some type of standard metric for evaluating control performance (e.g., settling time, rise time, or overshoot of the steady-state) is able to capture the desired response) are that (1) EMPC may operate processes in a potentially time-varying fashion, meaning that the closed-loop state may not be driven to a steady-state and that the behavior of the process under the EMPC may not be easily predictable a priori (e.g., without running closed-loop simulations). Therefore, determining what metrics to use to state whether performance under EMPC is acceptable or not may not be intuitive or easily generalizable, unlike in the case where steady-state operation is desired. (2) Again, unlike the steady-state case, not all end users of a given EMPC formulation may have the same definition of “good” behavior. Ideally, the “best” behavior is the one computed by the EMPC when it optimizes the process economics over the prediction horizon in whatever manner is necessary to ensure that the constraints are met but profit is maximized. However, an end user may not find this to constitute the “best” behavior due to other considerations that are perhaps difficult or costly to include in the control law (for example, the most profitable input trajectories from the perspective of the profit metric being used in Equation (17) may be expected to lead to more actuator wear than is desirable, which will be the subject of the example below). Therefore, it may be difficult to set a general metric on “good” behavior under EMPC, as the additional considerations defining “goodness” that are not directly included in the control law may vary between processes. (3) The concept of designing a controller that is responsive to unexpected evaluations of its behavior could have broader implications, if appropriately developed, than the initial goal of achieving desired process behavior for a given control law. Ideally, developments in this direction would serve as a springboard for reducing a priori control design efforts while increasing flexibility for next-generation manufacturing such that end users are able to achieve many goals during production that they may conceive over time as being important to their operation but without needing to interface extensively with vendors or even needing to update their software to achieve these updated process responses. The vision is one where modifications for manufacturing could become as flexible and safe through new responsive and intelligent controller formulations as modifications to codes are for computer scientists who do not work with physical processes and therefore can readily test and evaluate new protocols to advance the field quickly.

3.1.1. LEMPC with Self-Designing Input Rate of Change Constraints

In Reference [45], an LEMPC formulation with input rate of change constraints was designed with the form in Equations (17)–(23) but with the following rate of change constraints added on the inputs:
$$|u_p(t_k) - h_{1,p}(x(t_k))| \le \epsilon_r, \quad p = 1, \ldots, m$$
$$|u_p(t_j) - h_{1,p}(\tilde{\bar{x}}_{a,i}(t_j))| \le \epsilon_r, \quad p = 1, \ldots, m, \;\; j = k+1, \ldots, k+N-1$$
where $\epsilon_r \ge 0$. This formulation is demonstrated in Reference [45] to maintain closed-loop stability and recursive feasibility under sufficient conditions and to cause the following constraints to be met:
$$|u_p^*(t_k | t_k) - u_p^*(t_{k-1} | t_{k-1})| \le \epsilon_{\mathrm{desired}}, \quad p = 1, \ldots, m$$
$$|u_p^*(t_j | t_k) - u_p^*(t_{j-1} | t_k)| \le \epsilon_{\mathrm{desired}}, \quad p = 1, \ldots, m, \;\; j = k+1, \ldots, k+N-1$$
where $\epsilon_{\mathrm{desired}} > 0$. The goal of this formulation of LEMPC is to utilize input rate of change constraints to attempt to reduce variations in the inputs between sampling periods that have the potential to cause actuator wear.
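Continuing the earlier hypothetical scipy-based sketch of an EMPC optimization problem, the rate of change constraints of Equations (25) and (26) can be appended as inequality constraints centered on the Lyapunov-based control law. The controller h1, the predicted state sequence, and eps_r below are placeholders for the quantities defined above, and the single-input case is shown for simplicity.

```python
import numpy as np

# Sketch of the constraints of Equations (25) and (26) expressed as nonnegative
# residuals suitable for scipy.optimize.minimize inequality constraints:
# eps_r - |u(t_j) - h_1(x(t_j))| >= 0 at each step j of the horizon.
def rate_constraint_residuals(u_seq, x_pred, h1, eps_r):
    return np.array([eps_r - abs(u_j - h1(x_j))
                     for u_j, x_j in zip(u_seq, x_pred)])

# Example with a trivial proportional stand-in for h_1 and a candidate input sequence
h1 = lambda x: -0.5 * x
u_candidate = np.array([0.2, 0.1, 0.0])
x_predicted = np.array([0.5, 0.3, 0.2])
print(rate_constraint_residuals(u_candidate, x_predicted, h1, eps_r=0.4))
# In scipy, these residuals would be supplied as {"type": "ineq", "fun": ...} constraints,
# so that any eps_r >= 0 simply tightens or relaxes how far the inputs may move from h_1.
```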
However, as noted in Reference [47], despite the intent of the method to prevent actuator wear, there is no explicit relationship between $\epsilon_{\mathrm{desired}}$ or $\epsilon_r$ and the amount of actuator wear. Therefore, a control engineer seeking to prevent actuator wear for a given process under the LEMPC of Equations (17)–(23), (25), and (26) might design the value of $\epsilon_r$ by performing closed-loop simulations of the process under various values of $\epsilon_r$ and then selecting the one that gives the response that the engineer judges to present a sufficient tradeoff between optimizing economic performance and reducing actuator wear. A company with little control expertise on hand, however, may have difficulties with tuning $\epsilon_r$ without vendor assistance. The fact that controllers today cannot readily “fix” their response when engineers without control expertise would like the response to have different characteristics presents a hurdle to the adoption of even simple control laws, let alone the more complex designs which we would like to move into widespread use as part of the next-generation manufacturing paradigm.
These potential negative consequences of a lack of on-site control expertise might be prevented by allowing the controller itself to be responsive to end-user preferences. For example, the value of $\epsilon_r$ might be designed by allowing a short period of operation under the control law of Equations (17)–(23), (25), and (26) with different values of $\epsilon_r$. The engineers at the plant could then look at time periods in the plant data during which each of the values of $\epsilon_r$ was used and could evaluate the performance of the plant through some metric that can be recorded. Then, the value of $\epsilon_r$ that is predicted to provide the highest rate of satisfaction (based on some relationship between the value of $\epsilon_r$ and the evaluation metrics, which can be derived through techniques for fitting appropriate models to the kind of data generated, such as regression or other machine learning techniques) could be selected for use (and further updated over time through a similar mechanism as necessary).
Remark 3.
One could argue that the algorithm by which a control engineer judges whether a given value of ϵ r is preferable could be represented mathematically (e.g., as an optimization problem with an objective function representing a tradeoff between penalties on input variation and loss of profit). However, for the reasons noted in Remark 2 above and also with the goal of developing an algorithm which may facilitate interpretability of LEMPC by allowing its control law to be self-adjusted based on how end users feel about the response of the process under the controller, we handle this within the general case of “unexpected” scenarios to which we would like to make EMPC responsive.

LEMPC with Self-Designing Input Rate of Change Constraints: Theoretical Guarantees

The methodology proposed above incorporates human judgments on the process response for different values of $\epsilon_r$ when setting $\epsilon_r$ in Equations (17)–(23), (25), and (26). Despite the fact that human judgment is imprecise, the LEMPC formulation of Equations (17)–(23), (25), and (26), by design, maintains closed-loop stability and recursive feasibility under sufficient conditions (proven in Reference [45]) that are unrelated to the value of $\epsilon_r$, demonstrating that control theory may be combined with data-driven models of “unexpected” behavior or human intuition while retaining theoretical guarantees.
When the proposed strategy for evaluating $\epsilon_r$ online via human responses to different values of the parameter is used, closed-loop stability and feasibility still hold; however, it may not be guaranteed that Equations (27) and (28) hold. Since $\epsilon_{\mathrm{desired}}$ is arbitrary in many respects, being only indirectly tied to actuator wear (primarily through human evaluation), the satisfaction of Equations (27) and (28) may not be significant during the time period in which an operator or engineer is evaluating $\epsilon_r$.
There is no guarantee that the proposed method will produce a value of $\epsilon_r$ that gives “optimal satisfaction” to the end user. However, this is not considered a limitation of the method, as the end user’s satisfaction is subjective, and various methods for modeling the relationship between $\epsilon_r$ and the end user’s satisfaction could be examined if one is found to produce an inadequate result. The value of $\epsilon_r$ can also be adjusted further over time if the response after an initial value of $\epsilon_r$ is chosen is determined not to be preferable. Reference [45] does guarantee, however, that throughout all of the time of operation (both when various values of $\epsilon_r$ are tested and when a single value of $\epsilon_r$ is selected), closed-loop stability and recursive feasibility can be guaranteed. This is because the value of $\epsilon_r$ only impacts whether Equations (27) and (28) are satisfied under the LEMPC of Equations (17)–(23), (25), and (26), and Equations (27) and (28) are only of potential concern for actuator wear and not for closed-loop stability or feasibility. Furthermore, because Reference [45] demonstrates that $h_i(\tilde{\bar{x}}_{a,i}(t_q))$, $t \in [t_q, t_{q+1})$, $q = k, \ldots, k+N-1$, is a feasible solution to Equations (17)–(23), (25), and (26) at every sampling time regardless of the value of $\epsilon_r$ (because Equations (25) and (26) can be satisfied by this input trajectory for any $\epsilon_r \ge 0$), the value of $\epsilon_r$ can change between two sampling periods as $\epsilon_r$ is being evaluated, and recursive feasibility (and therefore closed-loop stability, since closed-loop stability depends on Equations (22) and (23) and not on Equations (25) and (26)) will be maintained. Finally, though the process profit or actuator wear level while $\epsilon_r$ is being evaluated may not be the same as after the value of $\epsilon_r$ is selected, this is not expected to pose significant problems for many processes if the evaluation is performed over a short period of time. Furthermore, if there are hard process constraints defined by $X_i$ that must be met in order to ensure that the product produced during the time when $\epsilon_r$ is evaluated can be sold, these can be met even as various values of $\epsilon_r$ are tried, because $\bar{x}_{a,i}(t) \in \Omega_{\rho_i} \subset X_i$ according to Reference [45] for any value of $\epsilon_r$. Reference [45] also guarantees that, even as the values of $\epsilon_r$ are adjusted, the closed-loop state can be driven to a neighborhood of a steady-state if necessary to avoid production volume losses while $\epsilon_r$ is adjusted.
Remark 4.
The fact that the above stability analysis holds regardless of the value of ϵ r indicates that the accuracy of the method used in obtaining ϵ r does not impact closed-loop stability. This is particularly important if the method used in obtaining ϵ r involves, for example, performing sentiment analysis of human speech data to determine how well humans like a given value of that parameter. We overcome the limitation of interfacing humans with machines by ensuring that the only parameter of the control law design which is modified in response to the algorithm that carries uncertainty is one which, deterministically, does not impact closed-loop stability.
Remark 5.
Though this section on automated control law redesign has explored only input rate of change constraints, other online redesigns may also be possible in control. For example, in the LEMPC formulation of Equations (17)–(23), the value $\rho_{e,i}$ could be modified over time if an appropriate implementation strategy was developed. Specifically, there exist bounds on $\rho_{e,i}$ given in Reference [24] which are required for closed-loop stability to be maintained for the process of Equation (1) operated under the LEMPC of Equations (17)–(23). Given this, a similar strategy to that presented for the selection of $\epsilon_r$ could be utilized to adjust the value of $\rho_{e,i}$ within its bounds online without impacting closed-loop stability. This holds because a value of $\rho_{e,i}$ between the minimum and maximum at a given time would always be utilized. According to Reference [24], the consequence of this is that, at the next sampling time, $\bar{x}_{a,i}(t_k) \in \Omega_{\rho_i}$. If $\bar{x}_{a,i}(t) \in \Omega_{\rho_i}$ at the end of every sampling period for any $\rho_{e,i}$ between its minimum and maximum, then $\bar{x}_{a,i}(t) \in \Omega_{\rho_i}$ at all times. If both $\epsilon_r$ and $\rho_{e,i}$ were to be simultaneously varied, for example, closed-loop stability would again hold, as the value of $\epsilon_r$ does not impact closed-loop stability for the reasons noted above and the value of $\rho_{e,i}$ can vary between its minimum and maximum value as just described without impacting closed-loop stability. Recursive feasibility would also not be impacted. This suggests that it may be possible to design more complex control laws with multiple self-tuning parameters that are simultaneously optimized based on human response to develop control laws that behave in a desirable manner online without posing a safety concern due to loss of closed-loop stability.

EMPC with Self-Designing Input Rate of Change Constraints: Application to a Chemical Process Example

In this section, we employ a process example that demonstrates the concept of self-designing input rate of change constraints. For simplicity, in this example, we do not employ the Lyapunov-based stability constraints of Equations (22) and (23); therefore, no theoretical stability guarantees can be made for this example. However, this does not present problems for illustrating the core concepts of the method of integrating human responses to operating conditions with EMPC.
The process under consideration is an ethylene oxidation process in a continuous stirred tank reactor (CSTR) from Reference [48] with reaction rates from Reference [49]. The following three reactions are considered to occur in the CSTR:
$$\mathrm{C_2H_4} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{C_2H_4O}$$
$$\mathrm{C_2H_4} + 3\,\mathrm{O_2} \rightarrow 2\,\mathrm{CO_2} + 2\,\mathrm{H_2O}$$
$$\mathrm{C_2H_4O} + \tfrac{5}{2}\,\mathrm{O_2} \rightarrow 2\,\mathrm{CO_2} + 2\,\mathrm{H_2O}$$
Mass and energy balances for the reactor, in dimensionless form, are as follows:
$$\dot{\bar{x}}_1 = \bar{u}_1(1 - \bar{x}_1\bar{x}_4)$$
$$\dot{\bar{x}}_2 = \bar{u}_1(\bar{u}_2 - \bar{x}_2\bar{x}_4) - A_1 e^{\gamma_1/\bar{x}_4}(\bar{x}_2\bar{x}_4)^{0.5} - A_2 e^{\gamma_2/\bar{x}_4}(\bar{x}_2\bar{x}_4)^{0.25}$$
$$\dot{\bar{x}}_3 = -\bar{u}_1\bar{x}_3\bar{x}_4 + A_1 e^{\gamma_1/\bar{x}_4}(\bar{x}_2\bar{x}_4)^{0.5} - A_3 e^{\gamma_3/\bar{x}_4}(\bar{x}_3\bar{x}_4)^{0.5}$$
$$\dot{\bar{x}}_4 = \frac{\bar{u}_1}{\bar{x}_1}(1 - \bar{x}_4) + \frac{B_1}{\bar{x}_1} e^{\gamma_1/\bar{x}_4}(\bar{x}_2\bar{x}_4)^{0.5} + \frac{B_2}{\bar{x}_1} e^{\gamma_2/\bar{x}_4}(\bar{x}_2\bar{x}_4)^{0.25} + \frac{B_3}{\bar{x}_1} e^{\gamma_3/\bar{x}_4}(\bar{x}_3\bar{x}_4)^{0.5} - \frac{B_4}{\bar{x}_1}(\bar{x}_4 - T_c)$$
where the process model parameters are listed in Table 1; the state vector components $\bar{x}_1$, $\bar{x}_2$, $\bar{x}_3$, and $\bar{x}_4$ (i.e., $\bar{x} = [\bar{x}_1\;\bar{x}_2\;\bar{x}_3\;\bar{x}_4]^T$) are dimensionless quantities corresponding to the gas density, ethylene concentration, ethylene oxide concentration, and temperature in the CSTR, respectively; and the input vector components $\bar{u}_1$ and $\bar{u}_2$ are dimensionless quantities corresponding to the feed volumetric flow rate and the feed ethylene concentration. The process of Equations (32)–(35) has a steady-state at $\bar{x}_1 = 0.998$, $\bar{x}_2 = 0.424$, $\bar{x}_3 = 0.032$, $\bar{x}_4 = 1.002$, $\bar{u}_1 = 0.35$, and $\bar{u}_2 = 0.5$.
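For readers who wish to simulate the model of Equations (32)–(35) directly, the sketch below integrates the dimensionless balances with scipy at the steady-state inputs. The values of $A_1$–$A_3$, $B_1$–$B_4$, $\gamma_1$–$\gamma_3$, and $T_c$ are those listed in Table 1 and are not reproduced here, so the numbers in the sketch are placeholders that must be replaced with the tabulated values before the results are meaningful.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters: substitute the values of A_1-A_3, B_1-B_4, gamma_1-gamma_3,
# and T_c from Table 1 before using this model quantitatively.
A1, A2, A3 = 1.0, 1.0, 1.0
B1, B2, B3, B4 = 1.0, 1.0, 1.0, 1.0
g1, g2, g3 = -5.0, -5.0, -5.0
Tc = 1.0

def cstr_rhs(t, x, u1, u2):
    """Dimensionless ethylene oxidation CSTR model of Equations (32)-(35)."""
    x1, x2, x3, x4 = x
    r1 = A1 * np.exp(g1 / x4) * (x2 * x4) ** 0.5
    r2 = A2 * np.exp(g2 / x4) * (x2 * x4) ** 0.25
    r3 = A3 * np.exp(g3 / x4) * (x3 * x4) ** 0.5
    dx1 = u1 * (1.0 - x1 * x4)
    dx2 = u1 * (u2 - x2 * x4) - r1 - r2
    dx3 = -u1 * x3 * x4 + r1 - r3
    dx4 = (u1 * (1.0 - x4)
           + B1 * np.exp(g1 / x4) * (x2 * x4) ** 0.5
           + B2 * np.exp(g2 / x4) * (x2 * x4) ** 0.25
           + B3 * np.exp(g3 / x4) * (x3 * x4) ** 0.5
           - B4 * (x4 - Tc)) / x1
    return [dx1, dx2, dx3, dx4]

# Simulate one operating period (t_v = 46.8) at the steady-state inputs u_1 = 0.35, u_2 = 0.5
# from the initial condition used later in the example.
x0 = [0.997, 1.264, 0.209, 1.004]
sol = solve_ivp(cstr_rhs, (0.0, 46.8), x0, args=(0.35, 0.5), max_step=0.1)
print(sol.y[:, -1])
```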
An EMPC is designed to control this process by maximizing the yield of ethylene oxide, which is defined over a time interval from the initial time ($t_0 = 0$) to the final time of operation $t_f$ as follows:
$$Y(t_f) = \frac{\int_0^{t_f} \bar{u}_1(\tau)\bar{x}_3(\tau)\bar{x}_4(\tau)\,d\tau}{\int_0^{t_f} \bar{u}_1(\tau)\bar{u}_2(\tau)\,d\tau}$$
However, it is assumed that, in addition to the following bounds on the inputs,
$$0.0704 \le \bar{u}_1 \le 0.7042$$
$$0.2465 \le \bar{u}_2 \le 2.4648$$
there is also a constraint on the total amount of material which can be fed to the CSTR over time:
$$\int_0^{t_f} \bar{u}_1(\tau)\bar{u}_2(\tau)\,d\tau = 0.175\,t_f$$
As Equation (39) fixes the denominator of Equation (36), the stage cost to be minimized using the EMPC is as follows:
$$L_e(x, u) = -\bar{u}_1(t)\bar{x}_3(t)\bar{x}_4(t)$$
To attempt to avoid actuator wear, input rate of change constraints will also be considered. The general form of the EMPC for this example is therefore as follows:
$$\min_{\bar{u}_1, \bar{u}_2 \in S(\Delta)} -\int_{t_k}^{t_{k+N_k}} \bar{u}_1(\tau)\tilde{\bar{x}}_3(\tau)\tilde{\bar{x}}_4(\tau)\,d\tau$$
$$\text{s.t.} \quad \text{Equations (32)–(35)}$$
$$\tilde{\bar{x}}(t_k) = \bar{x}(t_k)$$
$$0.0704 \le \bar{u}_1(t) \le 0.7042, \quad \forall\, t \in [t_k, t_{k+N_k})$$
$$0.2465 \le \bar{u}_2(t) \le 2.4648, \quad \forall\, t \in [t_k, t_{k+N_k})$$
$$\frac{1}{t_v} \int_{r t_v}^{t_k} \bar{u}_1^*(\tau)\bar{u}_2^*(\tau)\,d\tau + \frac{1}{t_v} \int_{t_k}^{t_{k+N_k}} \bar{u}_1(\tau)\bar{u}_2(\tau)\,d\tau = 0.175$$
$$|\bar{u}_p(t_j) - \bar{u}_p(t_{j-1})| \le \epsilon, \quad j = k, \ldots, k+N_k-1, \;\; p = 1, 2$$
In this formulation, no Lyapunov-based stability constraints are employed, and no closed-loop stability issues arose in the simulations (i.e., the closed-loop state always remained within a bounded region of state-space). Furthermore, due to the lack of Lyapunov-based stability constraints, the input rate of change constraints are enforced directly on input differences (i.e., they have the form of Equations (27) and (28) rather than the form of Equations (25) and (26)). $\tilde{\bar{x}}$ represents the predicted value of the process state according to the model of Equation (42). $\bar{u}_1^*$ and $\bar{u}_2^*$ represent the optimal values of $\bar{u}_1$ and $\bar{u}_2$ that have been applied in past sampling periods (i.e., $\bar{u}_1^* = \bar{u}_1(t_{k-1})$ and $\bar{u}_2^* = \bar{u}_2(t_{k-1})$). The values of $\bar{u}_1(t_{k-1})$ and $\bar{u}_2(t_{k-1})$ for $k = 0$ are assumed to be the steady-state values of these inputs. $N_k$ is a shrinking prediction horizon in the sense that, at the beginning of every operating period of length $t_v = 46.8$, the value of $N_k$ is reset to 5 and is then reduced by 1 at each subsequent sampling time of the operating period. This shrinking horizon allows the constraint of Equation (39) to be enforced within every operating period to ensure that, by the end of the time of operation, Equation (39) is met. In Equation (46), $r$ signifies the number of operating periods completed since the beginning of the time of operation (e.g., in the first $t_v$ time units, $r = 0$ because no operating periods have been completed yet).
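The shrinking-horizon and operating-period bookkeeping just described reduces to simple integer arithmetic; the helper below (a hypothetical function, not part of the controller formulation) shows how $N_k$ and the completed-period counter $r$ evolve with the sampling index $k$ for $\Delta = 9.36$ and $t_v = 46.8$.

```python
Delta, t_v, N_max = 9.36, 46.8, 5            # sampling period, operating period length, horizon reset value
steps_per_period = int(round(t_v / Delta))   # = 5 sampling periods per operating period

def horizon_and_period(k):
    """Shrinking horizon N_k and completed-period counter r at sampling time t_k = k*Delta."""
    r = k // steps_per_period                # operating periods completed so far
    N_k = N_max - (k % steps_per_period)     # reset to 5 at the period start, then shrink by 1 each step
    return N_k, r

# First two operating periods: N_k = 5, 4, 3, 2, 1, then resets to 5 as r increments
for k in range(10):
    print(k, horizon_and_period(k))
```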
We assume that the engineers and operators do not know the value of $\epsilon$ that they would like to impose in the EMPC of Equations (41)–(46) but plan to determine an appropriate value by assessing the process behavior from the same initial condition under EMPCs with different values of $\epsilon$ and by selecting a value that they expect will give the best tradeoff between economic performance and actuator wear reduction. To represent the process behavior as $\epsilon$ is varied in these experiments, we performed eight closed-loop simulations of the process of Equations (32)–(35) under the EMPC of Equations (41)–(46) from the same initial condition $\bar{x}_I = [\bar{x}_{1I}\;\bar{x}_{2I}\;\bar{x}_{3I}\;\bar{x}_{4I}]^T = [0.997\;1.264\;0.209\;1.004]^T$ using eight different input rate of change constraint formulations (the simulations were performed both with no input rate of change constraints and with $\epsilon$ values of 0.01, 0.05, 0.1, 0.3, 0.5, 1, and 3). The simulations lasted for 10 operating periods and used a sampling period of $\Delta = 9.36$, an integration step of $10^{-4}$ for the model of Equation (42) (i.e., the model used by the controller), and an integration step of $10^{-5}$ for the model of Equations (32)–(35) (i.e., the model of the plant). The open-source interior point solver Ipopt [50] was used to solve all optimization problems. Figure 1 and Figure 2 show the state and input trajectories for each of the values of $\epsilon$ chosen. Table 2 shows how the yield varies with the choice of $\epsilon$. To express the engineer’s or operator’s judgment of the relative “goodness” of the response when both profit and input variations are considered, the engineers and operators are considered to have ranked the response for a given $\epsilon$ on a scale of 1 to 10, as shown in Table 2, with 1 being the worst and 10 being the best.
Figure 3 shows the rankings as a function of ϵ as solid blue circles. From this figure, we postulate that a model that may fit this data has the following form:
$$\mathrm{Ranking} = c_1 e^{-c_2 \epsilon}\, \epsilon^{c_3} + c_4$$
Using the MATLAB function lsqcurvefit, the data from Table 2 for the various values of $\epsilon$ reported were fit to the function in Equation (48), resulting in $c_1 = 68.8901$, $c_2 = 3.8356$, $c_3 = 0.8480$, and $c_4 = 0.7933$. The function fit to the data is plotted as the red curve in Figure 3. A more rigorous method could have been utilized to fit the model to the data (involving, for example, more samples and an evaluation of the deviation of the model from the data), but the present method is sufficient for demonstrating the concepts developed in this work.
The utility of the function in Equation (48) is that it provides a mathematical representation of the model that an engineer or operator is implicitly using to determine the best value of $\epsilon$, even when the engineer or operator is not aware of such a model. This makes the advanced control design more tractable for the operator or engineer to utilize without advanced control knowledge by fitting the “mind of the human” to a function that can then be utilized in optimizing the control design automatically. To demonstrate this, we determine the “optimal” value of $\epsilon$ based on the model of Equation (48) by differentiating the equation with respect to $\epsilon$ and setting the derivative to 0. This gives an “optimal” value of $\epsilon$ of $c_3/c_2$, or approximately 0.22. Simulations were performed for 10 operating periods of the process of Equations (32)–(35) under the EMPC of Equations (41)–(46) with this value of $\epsilon$ and initialized from $\bar{x}_I$; the resulting state and input trajectories are shown in Figure 4 and Figure 5. The yield is 8.33%.
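The fitting and maximization steps of this example can be reproduced with standard tools; the sketch below uses scipy's curve_fit in place of MATLAB's lsqcurvefit and, because Table 2 is not reproduced here, illustrative stand-in rankings. The closed-form maximizer $\epsilon^* = c_3/c_2$ follows from setting the derivative of Equation (48) with respect to $\epsilon$ to zero.

```python
import numpy as np
from scipy.optimize import curve_fit

# Ranking model of Equation (48): c1 * exp(-c2*eps) * eps**c3 + c4
def ranking_model(eps, c1, c2, c3, c4):
    return c1 * np.exp(-c2 * eps) * eps ** c3 + c4

# Illustrative stand-in data; the actual (epsilon, ranking) pairs are those of Table 2.
eps_data = np.array([0.01, 0.05, 0.1, 0.3, 0.5, 1.0, 3.0])
rank_data = np.array([2.0, 5.0, 7.0, 9.0, 8.0, 5.0, 2.0])

popt, _ = curve_fit(ranking_model, eps_data, rank_data, p0=[50.0, 3.0, 1.0, 1.0], maxfev=10000)
c1, c2, c3, c4 = popt

# Setting d(Ranking)/d(eps) = 0 gives the interior maximizer eps* = c3/c2.
eps_opt = c3 / c2
print(popt, eps_opt)
```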
Remark 6.
The rankings in Table 2 are fabricated to demonstrate the concept that a human judgment could be translated into a modification of an EMPC formulation parameter. They were contrived to display a form to which a reasonable model could be readily fit using lsqcurvefit and, furthermore, are highly simplified (e.g., only a single ranking is provided for each value of $\epsilon$ rather than an average ranking with additional information, such as the standard deviation, that might be expected if more than one individual were to rank the response). For an actual process, the transformation of human opinion on the response into a function of $\epsilon$ would therefore be expected to be more complex and to potentially involve statistics-based techniques or other methods for obtaining models from process data; however, an investigation of such methods is outside the scope of this paper. A simplified ranking model was therefore used to demonstrate the concept that a control law parameter might be decided upon by evaluating characteristics of a response involving a tradeoff between competing operating objectives, at least one of which (in this case, actuator wear) is difficult to quantify with a simple model. In such cases, the incorporation of human judgment can make the control law design potentially simpler than if, for example, a detailed actuator wear model were developed to allow the controller to predict the wear itself and then prevent it through a constraint on wear rather than on the input rate of change.

3.2. EMPC Response to Unexpected Scenarios via Model Updates

A second case for which we will explore EMPC designs which are responsive to unexpected events considers these “unexpected” events to be defined by a change in the underlying process dynamics (i.e., the value of i increases in Equation (1)). This class of problems covers anomaly responses for EMPC, for which we will adopt the common anomaly-handling strategy (as described in the Introduction section) of updating the process model. Mathematically, we assume that the process model was known with reasonable accuracy before the anomaly (i.e., there is an upper bound on the error between the model used in the LEMPC and the model of Equation (1) with i = 1 ).
We make several points with respect to model updates in this section. First, if the underlying dynamics change, it is possible that the structure of the underlying dynamic model has fundamentally changed. When identifying a new model, it may therefore be preferable to identify the parameters of one with a revised structure; this is a case of seeking to identify a more physics-based model from process data [51]. In keeping with the prior section, where the potential was shown for integrating with control machine learning algorithms that are not guaranteed to provide accurate data, we here highlight that, if machine learning-based sensors (e.g., image-based sensors) are utilized with the process, they may aid in suggesting how to update a process model’s structure over time in an attempt to keep the structure physically relevant. Because such sensing techniques may not provide correct suggestions, however, a model with a structure suggested by such an algorithm does not need to be automatically implemented in model-based control; instead, engineers could consider multiple models after a machine learning-based algorithm suggests that an anomaly/change in the underlying process model has occurred, where one model to be evaluated is that used until this point and the second is a model that includes any updates implied by the sensing techniques. Subsequently, the prediction accuracy of the two models could be compared, and whichever is most accurate can be considered for use in the LEMPC [52]. Like the methodology in Section 3.1.1, this method prevents attempts to integrate machine learning (in the sensors) with control from impacting closed-loop stability by using the machine learning to complement a rigorous control design approach rather than to dictate it.
Second, at a chemical plant, anomalies may be considered to be either those which pose an immediate hazard to humans and the environment and are considered to require plant shutdown upon detection, or those which do not. When the anomaly detected requires plant shutdown, the safety system is generally used to take extreme actions, such as cutting feeds, to shut down the plant as quickly as possible; these actions generally have a prespecified nature (e.g., closing the feed valve). Anomalies that do not present immediate hazards to humans may either result in plant/model mismatch that is small enough for the controller to be robust against it, or the mismatch may cause subsequent control actions to drive the closed-loop state out of the expected region of process operation (at which point the anomaly may become a hazard). We consider that characterizing conditions under which closed-loop stability is not lost in the second case may constitute a step toward verification of EMPC with adaptive model updates for the process industries in the presence of changing process dynamics.

3.2.1. Automated Response to Anomalies: Formulation and Implementation Strategy

In the next section, we will present theoretical results regarding conditions under which an LEMPC could be conservatively designed to handle anomalies of different types, in the sense that closed-loop stability would not be lost upon the occurrence of an anomaly, or that impending loss of closed-loop stability could be detected by defining a region $\Omega_{\hat{\rho}_{samp,q}}$ (a superset of $\Omega_{\hat{\rho}_q}$) which the closed-loop state should not leave unless the anomaly has been significant, in which case an attempt should be made to reidentify the model used by the LEMPC to try to maintain closed-loop stability. If the closed-loop state leaves $\Omega_{\hat{\rho}_{samp,q}}$, however, it has also left $\Omega_{\hat{\rho}_q}$, so that the LEMPC of Equation (24) may not be feasible. For this reason, the implementation strategy below suggests that, if the closed-loop state leaves $\Omega_{\hat{\rho}_{samp,q}}$, $h_{NL,q}$ should be applied to the process so that a control law with no feasibility issues is used.
The implementation strategy proposed below relies on the existence of two controllers h N L , q and h N L , q + 1 , where h N L , q can stabilize the origin of the nominal closed-loop system of Equation (10) and h N L , q + 1 can stabilize the origin of the nominal closed-loop system of Equation (10) with respect to the q + 1 th model. Specifically, before the change in the underlying process dynamics that occurs at t s , i + 1 is detected at t d , q , the process is operated under the LEMPC with the qth empirical model. After the change is detected (in a worst case via the closed-loop state leaving Ω ρ ^ q ), a worst-case bound t h , q is placed on the time available until the model must be updated at time t I D , q to the q + 1 th empirical model to prevent the closed-loop state from leaving a characterizable operating region.
We consider the following implementation strategy for carrying out the above methodology:
  1. At $t_0$, the $i = 1$ first-principles model (Equation (1)) describes the dynamics of the process. The $q = 1$ empirical model (Equation (10)) is used to design the LEMPC of Equation (24). An index $i_{hx}$ is set to 0. An index $\zeta$ is set to 0. Go to step 2.
  2. At $t_{s,i+1}$, the underlying dynamic model of Equation (1) changes to the $i+1$th model. The LEMPC is not yet alerted that the anomaly has occurred; the model used in the LEMPC is not changed despite the change in the underlying process dynamics. Go to step 3.
  3. While $t_{s,i+1} < t_k < t_{s,i+2}$, apply a detection method to determine if an anomaly has occurred. If an anomaly is detected, set $\zeta = 1$ and $t_{d,q} = t_k$. Else, $\zeta = 0$. If $x(t_k) \notin \Omega_{\hat{\rho}_q}$ but $\zeta = 0$, set $\zeta = 1$ and $t_{d,q} = t_k$. Go to step 4.
  4. If $i_{hx} = 1$, go to step 4a. Else, if $\zeta = 1$, go to step 4b, or if $\zeta = 0$, go to step 4c. If $t_k > t_{s,i+2}$, go to step 5.
    (a) If $x(t_k) \in \Omega_{\hat{\rho}_{q+1}}$, operate the process under the LEMPC of Equation (24) with $q \leftarrow q+1$ and set $i_{hx} = 0$. Else, apply $h_{NL,q+1}(x(t_k))$ to the process. Set $t_k \leftarrow t_{k+1}$ and return to step 3.
    (b) If $(t_{k+1} - t_{d,q}) < t_{h,q}$, gather online data to develop an improved process model as well as updated functions $\hat{V}_{q+1}$ and $h_{NL,q+1}(x)$ and an updated stability region $\Omega_{\hat{\rho}_{q+1}}$ around the steady-state of the new empirical model, but do not yet update the LEMPC; control the process using the prior LEMPC. Else, if $(t_{k+1} - t_{d,q}) \geq t_{h,q}$, set $i_{hx} = 1$ and apply $h_{NL,q+1}(x(t_k))$. Set $t_k \leftarrow t_{k+1}$ and return to step 3.
    (c) Operate the process under the LEMPC of Equation (24) that was used at the prior sampling time. Set $t_k \leftarrow t_{k+1}$ and return to step 3.
  5. If $t_k > t_{s,i+2}$, a process dynamics change occurred at $t_{s,i+2}$. Set $t_{s,i+1} \leftarrow t_{s,i+2}$ and $t_k \leftarrow t_{k+1}$. Return to step 2 with $\zeta = 0$ and $i_{hx} = 0$. Else, if $t_k < t_{s,i+2}$, set $t_k \leftarrow t_{k+1}$ and return to step 3.
We note that we do not specify the detection method to be used in step 3, but the use of a sufficiently conservative $\Omega_{\hat{\rho}_q}$ (in a sense to be clarified in the following section) allows a worst-case detection mechanism to be the closed-loop state exiting $\Omega_{\hat{\rho}_q}$ in step 3. We assume that each $t_{s,i+1}$ and $t_{s,i+2}$ are separated by a sufficient period of time that no second change in the underlying process dynamics occurs before the first change has resulted in an update of the dynamic model and the closed-loop state is within $\Omega_{\hat{\rho}_{q+1}}$.
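The following Python sketch summarizes the control-selection logic of the implementation strategy above for a single change in the underlying dynamics (step 5, which resets the procedure for a subsequent change, is omitted). The detector, reidentification routine, LEMPC, Lyapunov-based controller, and plant model are passed in as placeholder callables, since none of those components are specified by the strategy itself; the sketch is illustrative, not the implementation used in this work.

```python
def run_strategy(n_steps, Delta, detect, in_region, reidentify, lempc, h_NL, plant_step, x0, t_h):
    """Sketch of steps 3-4 of the implementation strategy for one dynamics change."""
    x, q = x0, 1
    i_hx, zeta, t_d = 0, 0, None
    for k in range(n_steps):
        t_k = k * Delta
        # Step 3: anomaly detection (worst case: the state has left the stability region of model q).
        if zeta == 0 and (detect(x, q) or not in_region(x, q)):
            zeta, t_d = 1, t_k
        # Step 4: choose the control action for this sampling period.
        if i_hx == 1:                          # step 4a: waiting to reenter the new stability region
            if in_region(x, q + 1):
                q, i_hx, zeta = q + 1, 0, 0
                u = lempc(x, q)
            else:
                u = h_NL(x, q + 1)
        elif zeta == 1:                        # step 4b: anomaly flagged, reidentification clock running
            if (t_k + Delta - t_d) < t_h:
                reidentify(x)                  # gather data toward V_hat_{q+1}, h_NL_{q+1}, Omega_{q+1}
                u = lempc(x, q)                # keep using the prior LEMPC for now
            else:
                i_hx = 1
                u = h_NL(x, q + 1)
        else:                                  # step 4c: nominal operation
            u = lempc(x, q)
        x = plant_step(x, u, Delta)            # apply the input in sample-and-hold
    return x, q
```

In practice, the `lempc` call would be the optimization problem of Equation (24) and `reidentify` would be whichever system identification routine is trusted for the process.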
Remark 7.
A significant difference between the proposed procedure and that in References [53,54], which also involves switched systems under LEMPC, is that Reference [53] assumes that the time at which the model is to be switched is known a priori. In handling of anomalies, this cannot be known; therefore, the proposed approach corresponds to LEMPC for switched systems with unknown switching times. We place bounds in the next section on a number of properties of the LEMPC of Equation (24) for this case to demonstrate the manner in which closed-loop stability guarantees depend on, for example, how large the possible changes in the process model could be when they occur. The goal is to provide a perspective on the timeframes available for detecting various anomalies without loss of closed-loop stability, which could aid in verification and self-design studies for EMPC.

3.2.2. Automated Response to Anomalies: Stability and Feasibility Analysis

According to the implementation strategy above, when an anomaly occurs that changes the underlying process dynamics, one of two things will happen: (1) the model used in Equation (24b) remains the same or (2) the change in the underlying process dynamics is detected and the model used in Equation (24b) is changed within a required timeframe to a new model (i.e., q is incremented by one in Equation (10)). In this section, we present the conditions under which closed-loop stability can be maintained in either case. For readability, proofs of the theorems presented in this section are provided in the Appendices.
We first present several propositions. The first defines the maximum difference between the process model of Equation (1) and that of Equation (10) over time when the two models are initialized from the same state, as long as the states of both systems are kept within a level set of V ^ q which is also contained within the stability region around the steady-state for the model of Equation (1) and as long as there is no change in the underlying dynamics. The second sets an upper bound on the difference between the value of V ^ q at any two points in Ω ρ ^ q . The third provides the closed-loop stability properties of the closed-loop system of Equation (10) under the controller h N L , q .
Proposition 1
([51]). Consider the systems
$\dot{\bar{x}}_{a,i,q} = \bar{f}_{i,q}(\bar{x}_{a,i,q}(t),\bar{u}_q(t),w_i(t))$
$\dot{\bar{x}}_{b,q} = \bar{f}_{NL,q}(\bar{x}_{b,q}(t),\bar{u}_q(t))$
with initial states $\bar{x}_{a,i,q}(t_0) = \bar{x}_{b,q}(t_0) = \bar{x}(t_0)$ contained within $\Omega_{\hat{\rho}_{q,i}}$, with $t_0 = 0$, $\bar{u}_q \in U_q$, and $w_i \in W_i$. If $\bar{x}_{a,i,q}(t)$ and $\bar{x}_{b,q}(t)$ remain within $\Omega_{\hat{\rho}_{q,i}}$ for $t \in [0,T]$, then there exists a function $f_{W,i,q}(\cdot)$ such that:
$|\bar{x}_{a,i,q}(t) - \bar{x}_{b,q}(t)| \leq f_{W,i,q}(t)$
with:
$f_{W,i,q}(t) := \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}t} - 1\right)$
where $M_{err,i,q} > 0$ is defined by:
$|\bar{f}_{i,q}(x,u,0) - \bar{f}_{NL,q}(x,u)| \leq M_{err,i,q}$
for all $x$ contained in $\Omega_{\hat{\rho}_{q,i}}$ and $u \in U_q$.
Proposition 2
([24,55]). Consider the Lyapunov function $\hat{V}_q(\cdot)$ of the nominal system of Equation (10) under the controller $h_{NL,q}(\cdot)$ that meets Equation (12). There exists a quadratic function $f_{V,q}(\cdot)$ such that:
$\hat{V}_q(x) \leq \hat{V}_q(\bar{x}) + f_{V,q}(|x - \bar{x}|)$
for all $x, \bar{x} \in \Omega_{\hat{\rho}_{safe,q}}$ with
$f_{V,q}(s) := \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))s + M_{v,q}s^2$
where $M_{v,q}$ is a positive constant.
Proposition 3
([51]). Consider the closed-loop system of Equation (10) under $h_{NL,q}(\bar{x}_{b,q})$ that satisfies the inequalities of Equation (12) in sample-and-hold. Let $\Delta > 0$, $\hat{\epsilon}_{W,q} > 0$, and $\hat{\rho}_{safe,q} > \hat{\rho}_q > \hat{\rho}_{e,q} > \hat{\rho}_{\min q} > \hat{\rho}_{s,q} > 0$ satisfy the following:
$-\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{s,q})) + L_{L,q}M_{L,q}\Delta \leq -\hat{\epsilon}_{W,q}/\Delta$
$\hat{\rho}_{\min q} := \max\{\hat{V}_q(\bar{x}_{b,q}(t+\Delta)) : \hat{V}_q(\bar{x}_{b,q}(t)) \leq \hat{\rho}_{s,q}\}$
If $\bar{x}_{b,q}(0) \in \Omega_{\hat{\rho}_{safe,q}}$, then,
$\hat{V}_q(\bar{x}_{b,q}(t_{k+1})) \leq \hat{V}_q(\bar{x}_{b,q}(t_k)) - \hat{\epsilon}_{W,q}$
for $\bar{x}_{b,q}(t_k) \in \Omega_{\hat{\rho}_{safe,q}} \setminus \Omega_{\hat{\rho}_{s,q}}$, and the state trajectory $\bar{x}_{b,q}(t)$ of the closed-loop system is always bounded in $\Omega_{\hat{\rho}_{safe,q}}$ for $t \geq 0$ and is ultimately bounded in $\Omega_{\hat{\rho}_{\min q}}$.
The next proposition bounds the error between the actual process state and a prediction of the process state using an empirical model initialized from the same value of the process state over a period of time in which the underlying process dynamics change, but the empirical model is not updated. This requires overlap in stability regions for the ith and i + 1 th models of Equation (1) and for the qth model of Equation (10) within Ω ρ ^ q , i while the qth model is used. The proof of this proposition is available in Appendix A.
Proposition 4.
Consider the following systems:
$\dot{\bar{x}}_{a,i,q} = \bar{f}_{i,q}(\bar{x}_{a,i,q}(t),\bar{u}_q(t),w_i(t))$
$\dot{\bar{x}}_{b,q} = \bar{f}_{NL,q}(\bar{x}_{b,q}(t),\bar{u}_q(t))$
$\dot{\bar{x}}_{a,i+1,q} = \bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t),\bar{u}_q(t),w_{i+1}(t))$
with initial states $\bar{x}_{a,i,q}(t_0) = \bar{x}_{b,q}(t_0) \in \Omega_{\hat{\rho}_{q,i}}$ with $t_0 = 0$, $\bar{u}_q \in U_q$, $w_i \in W_i$, and $w_{i+1} \in W_{i+1}$. Also, $\bar{x}_{a,i,q}(t_{s,i+1}) = \bar{x}_{a,i+1,q}(t_{s,i+1})$. If $\bar{x}_{a,i,q}(t), \bar{x}_{b,q}(t), \bar{x}_{a,i+1,q}(t) \in \Omega_{\hat{\rho}_{q,i}}$ for $t \in [0,t']$ and
$|\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(s),\bar{u}_q(s),w_{i+1}(s)) - \bar{f}_{i,q}(\bar{x}_{a,i,q}(s),\bar{u}_q(s),w_i(s))| \leq M_{change,i,q}$
for all $\bar{x}_{a,i,q}, \bar{x}_{a,i+1,q} \in \Omega_{\hat{\rho}_{q,i}}$, $\bar{u}_q \in U_q$, $w_i \in W_i$, and $w_{i+1} \in W_{i+1}$, then
$|\bar{x}_{a,i,q}(t) - \bar{x}_{b,q}(t)| \leq f_{W,i,q}(t)$
where $f_{W,i,q}(t)$ is defined in Equation (51) for $t \in [0,t_{s,i+1}]$ and
$|\bar{x}_{a,i+1,q}(t) - \bar{x}_{b,q}(t)| \leq f_{W,i,q}(t_{s,i+1}-t_0) + M_{change,i,q}(t-t_{s,i+1}) + \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}t} - e^{L_{x,i,q}t_{s,i+1}}\right)$
for $t \in [t_{s,i+1},t']$.
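As a numerical illustration of how the error bounds in Propositions 1 and 4 grow once the underlying dynamics change, the following sketch evaluates the pre-change bound $f_{W,i,q}(t)$ (Equation (51)) and the post-change bound of Equation (63) for placeholder values of the Lipschitz constants, disturbance bound, and model-change bound; all constants are assumptions chosen only for illustration.

```python
import numpy as np

def f_W(t, L_x, L_w, theta, M_err):
    """Pre-change model/plant error bound of Proposition 1 (Equation (51))."""
    return (L_w * theta + M_err) / L_x * (np.exp(L_x * t) - 1.0)

def post_change_bound(t, t_s, L_x, L_w, theta, M_err, M_change):
    """Worst-case error bound of Proposition 4 (Equation (63)) for t >= t_s,
    when the dynamics change at t_s but the empirical model is not updated."""
    return (f_W(t_s, L_x, L_w, theta, M_err)
            + M_change * (t - t_s)
            + (L_w * theta + M_err) / L_x * (np.exp(L_x * t) - np.exp(L_x * t_s)))

# Placeholder constants (in practice these come from Lipschitz properties of the
# process and empirical models and from the disturbance bound).
L_x, L_w, theta, M_err, M_change = 2.0, 1.0, 0.05, 0.1, 0.5
Delta, t_s = 0.01, 0.004   # sampling period and in-period switching time (h)

for t in (t_s, 0.5 * (t_s + Delta), Delta):
    print(f"t = {t:.4f} h, bound = {post_change_bound(t, t_s, L_x, L_w, theta, M_err, M_change):.6f}")
```

The larger $M_{change,i,q}$ is, the faster the bound grows over a sampling period, which is the mechanism by which the extent of the dynamics change enters the stability conditions below.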
The following theorem provides the conditions under which, when no change in the underlying dynamic model occurs throughout the time of operation and $x(t_k) \in \Omega_{\hat{\rho}_q}$, the LEMPC of Equation (24) designed based on $h_{NL,q}$ and the $q$th empirical model of Equation (10) guarantees that the closed-loop state is maintained within $\Omega_{\hat{\rho}_q}$ over time and is ultimately bounded in a neighborhood of the origin of the model of Equation (10).
Theorem 1
([51]). Consider the closed-loop system of Equation (1) under the LEMPC of Equation (24) based on the controller $h_{NL,q}(x)$ that satisfies the inequalities in Equation (12). Let $\epsilon_{W,i,q} > 0$, $\Delta > 0$, $N \geq 1$, and $\hat{\rho}_q > \hat{\rho}_{e,q} > \hat{\rho}_{\min,i,q} > \hat{\rho}_{s,q} > 0$ satisfy the following:
$-\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i \leq -\epsilon_{W,i,q}/\Delta$
$\hat{\rho}_{e,q} \leq \hat{\rho}_q - f_{V,q}(f_{W,i,q}(\Delta))$
If $x(0) \in \Omega_{\hat{\rho}_q}$ and Proposition 3 is satisfied, then the state trajectory $\bar{x}_{a,i,q}(t)$ of the closed-loop system is always bounded in $\Omega_{\hat{\rho}_q}$ for $t \geq 0$. Furthermore, if $t > t'$ and
$-\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{s,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i \leq -\epsilon_{W,i,q}/\Delta$
then the state trajectory $x_{a,i}(t)$ of the closed-loop system is ultimately bounded in $\Omega_{\hat{\rho}_{\min,i,q}}$, where $\hat{\rho}_{\min,i,q}$ is defined as follows:
$\hat{\rho}_{\min,i,q} := \max\{\hat{V}_q(\bar{x}_{a,i,q}(t+\Delta))\ |\ \hat{V}_q(\bar{x}_{a,i,q}(t)) \leq \hat{\rho}_{s,q}\}$
The prior theorem provided conditions under which the closed-loop state is maintained within Ω ρ ^ q in the absence of changes in the dynamic model. In the following theorem, we provide sufficient conditions under which the closed-loop state is maintained in Ω ρ ^ q after t s , i . The proof of this result is presented in Appendix B.
Theorem 2.
Consider the closed-loop system of Equation (1) under the LEMPC of Equation (24) with $h_{NL,q}$ meeting Equation (12), where the conditions of Propositions 3 and 4 hold and where $\Omega_{\hat{\rho}_{safe,q}}$ is contained in both $\Omega_{\rho_i}$ and $\Omega_{\rho_{i+1}}$. If the underlying dynamics change at $t_{s,i+1} \in [t_k,t_{k+1})$ such that, after $t_{s,i+1}$, the system of Equation (1) is controlled by the LEMPC of Equation (24), where $x_{a,i}(t_{s,i+1}) = x_{a,i+1}(t_{s,i+1}) \in \Omega_{\hat{\rho}_q}$, and if the following hold true,
$-\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,p,q} + L_{x,p,q}M_p\Delta + L_{w,p,q}\theta_p \leq -\epsilon_{W,p,q}/\Delta$
$\hat{\rho}_{e,q} \leq \hat{\rho}_q - f_{V,q}(f_{W,p,q}(\Delta))$
for both $p = i$ and $p = i+1$, and
$\hat{\rho}_{e,q} + f_{V,q}\left(f_{W,i,q}(\Delta) + M_{change,i,q}\Delta + \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}\Delta} - e^{L_{x,i,q}t_{s,i+1}}\right)\right) \leq \hat{\rho}_q$
$-\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{change,i,q} + L_{x,i+1,q}M_{i+1}\Delta + L_{w,i+1,q}\theta_{i+1} \leq -\epsilon_{W,i,q}/\Delta$
then the closed-loop state is bounded in $\Omega_{\hat{\rho}_q}$ for all $t \geq 0$.
We highlight that these conditions are conservative and not intended to form the least conservative bounds possible. However, they do help to elucidate some of the factors which impact whether a model used in an LEMPC will need to be reidentified to continue to maintain closed-loop stability when the underlying dynamics change, such as the extent to which the dynamics change. The above theorem indicates that, if Ω ρ ^ q is initially chosen in a sufficiently conservative fashion and the empirical model is sufficiently close to the underlying process dynamics before the model change, closed-loop stability may be maintained even after the underlying dynamics change if the model changes are such that the empirical model remains sufficiently close to the new dynamic model after the change. In general, anomalies may occur that could violate the conditions of Theorem 2. The result of this could be that the closed-loop state may leave Ω ρ ^ q . In this case, it is helpful to characterize conditions under which changes in the underlying dynamics that could be destabilizing could be detected, triggering a model update and controller redesign for the new dynamic model to stabilize the closed-loop system. Therefore, the following theorem characterizes the length of time that the closed-loop state can remain in Ω ρ ^ s a f e , q after a change in the underlying process dynamics occurs if the conditions of Theorem 2 are not met. This can be used in determining how quickly a model reidentification algorithm would need to successfully provide a new model for the LEMPC of Equation (24) for closed-loop stability to be maintained as a function of factors such as the extent that the new model deviates from the empirical model used in the LEMPC when the underlying dynamics change, the sampling period, and the conservatism in the selection of ρ ^ q . The proof of this theorem is presented in Appendix C.
Theorem 3.
Consider the closed-loop system of Equation (1) under the LEMPC of Equation (24) with $h_{NL,q}$ meeting Equation (12) and Proposition 3, where $\Omega_{\hat{\rho}_{safe,q}}$ is contained in both $\Omega_{\rho_i}$ and $\Omega_{\rho_{i+1}}$. Assume that the underlying dynamics change at $t = t_{s,i+1}$, where $t_{s,i+1} \in [t_k,t_{k+1})$, such that, after $t_{s,i+1}$, the system of Equation (1) is controlled by the LEMPC of Equation (24), where $x_{a,i}(t_{s,i+1}) = x_{a,i+1}(t_{s,i+1}) \in \Omega_{\hat{\rho}_{safe,q}}$. Then, if the following hold true with $\hat{\rho}_{safe,q} > \hat{\rho}_{samp,q} > \hat{\rho}_q > \hat{\rho}_{e,q}$, $\hat{\rho}_{e,q} > \hat{\rho}_{\min,i,q} > \hat{\rho}_{s,q} > 0$, and $\hat{\rho}_{e,q} > \hat{\rho}_{\min,i+1,q} > \hat{\rho}_{s,q} > 0$:
$-\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{s,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i+1,q} + L_{x,i+1,q}M_{i+1}\Delta + L_{w,i+1,q}\theta_{i+1} \leq \epsilon_{W,i+1,q}/\Delta$
$\hat{\rho}_{e,q} + f_{V,q}\left(f_{W,i,q}(\Delta) + M_{change,i,q}\Delta + \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}\Delta} - e^{L_{x,i,q}t_{s,i+1}}\right)\right) \leq \hat{\rho}_{samp,q}$
$\hat{\rho}_q + f_{V,q}\left(f_{W,i,q}(\Delta) + M_{change,i,q}\Delta + \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}\Delta} - e^{L_{x,i,q}t_{s,i+1}}\right)\right) \leq \hat{\rho}_{samp,q}$
$\hat{\rho}_{e,q} + f_{V,q}(f_{W,i+1,q}(\Delta)) \leq \hat{\rho}_{samp,q}$
$\hat{\rho}_q + \epsilon_{W,i+1,q} \leq \hat{\rho}_{samp,q}$
as well as Equations (65)–(67), then if $x(t_{s,i+1}) \in \Omega_{\hat{\rho}_q}$ and $\Omega_{\hat{\rho}_{\min,i+1,q}} \subseteq \Omega_{\hat{\rho}_{samp,q}}$ and the change to the model is not detected until a sampling time $t_{d,q}$ with $\bar{x}(t_{d,q}) \in \Omega_{\hat{\rho}_{safe,q}} \setminus \Omega_{\hat{\rho}_q}$ ($\bar{x}(t_{d,q}) \in \Omega_{\hat{\rho}_{samp,q}} \subset \Omega_{\hat{\rho}_{safe,q}}$), after which $h_{NL,q}$ is used to control the system in sample-and-hold, then the number of sampling periods between $t_{ID,q}$ and $t_{d,q}$ within which the model in the LEMPC can be updated to a new model meeting Equation (65) with $i$ replaced by $i+1$ and $q$ replaced by $q+1$, without the closed-loop state exiting $\Omega_{\hat{\rho}_{safe,q}}$, is given by $t_{h,q} = \mathrm{floor}((\hat{\rho}_{safe,q} - \hat{\rho}_{samp,q})/\epsilon_{W,i+1,q})$, where floor represents the "floor" function that returns the largest integer less than or equal to its argument. $\bar{x}(t)$ refers either to $\bar{x}_{a,i+1,q}(t)$ or $\bar{x}_{a,i,q}(t)$, depending on whether $t_{s,i+1}$ is within the sampling period preceding the closed-loop state exiting $\Omega_{\hat{\rho}_q}$.
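The number of sampling periods available for reidentification in Theorem 3 can be computed directly from the level-set values. The sketch below evaluates the floor expression for two placeholder choices of $\hat{\rho}_{samp,q}$ to illustrate how the margin between $\Omega_{\hat{\rho}_{samp,q}}$ and $\Omega_{\hat{\rho}_{safe,q}}$ translates into time available between detection and model identification; the numbers are illustrative only.

```python
import math

def sampling_periods_available(rho_safe, rho_samp, eps_W):
    """Sampling periods available after detection before the model must be updated,
    per the floor expression in Theorem 3 (all arguments are placeholder values)."""
    return math.floor((rho_safe - rho_samp) / eps_W)

# A larger margin between rho_samp and rho_safe buys more detection-to-identification time.
print(sampling_periods_available(rho_safe=1800.0, rho_samp=1400.0, eps_W=50.0))  # -> 8
print(sampling_periods_available(rho_safe=1800.0, rho_samp=1700.0, eps_W=50.0))  # -> 2
```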
The following theorem provides the conditions under which the closed-loop state is maintained within Ω ρ ^ s a f e , q + 1 for all times after t I D , q and is driven into Ω ρ ^ q + 1 after the model reidentification. The proof of the result is presented in Appendix D.
Theorem 4.
If $\Omega_{\hat{\rho}_{safe,q}} \subseteq \Omega_{\hat{\rho}_{safe,q+1}}$ and if both $\Omega_{\hat{\rho}_{safe,q}}$ and $\Omega_{\hat{\rho}_{safe,q+1}}$ are contained in $\Omega_{\rho_i}$ and $\Omega_{\rho_{i+1}}$, then if $h_{NL,q+1}$ is used to control the system after $t_{ID,q}$ while $x(t_k) \in \Omega_{\hat{\rho}_{safe,q+1}} \setminus \Omega_{\hat{\rho}_{q+1}}$, with the conditions of Equations (65) and (66) met for the $q+1$th empirical model for the $i+1$th dynamic system, and the LEMPC of Equation (24) using the $q+1$th empirical model of Equation (10) is used to control the system for all times after $x(t_k) \in \Omega_{\hat{\rho}_{q+1}}$, then the closed-loop state is maintained within $\Omega_{\hat{\rho}_{safe,q+1}}$ until it enters $\Omega_{\hat{\rho}_{q+1}}$ and is subsequently maintained in $\Omega_{\hat{\rho}_{q+1}}$ for all subsequent sampling times.
Remark 8.
From a verification standpoint, the proofs above move toward addressing the question of what may happen if a controller is designed and even tested for certain conditions, but the process dynamics change. They provide a theoretical characterization of conditions under which action would subsequently need to be taken, as well as indications of the time available to take that action. The results above may, however, be difficult to utilize directly in developing an online monitoring scheme, as many of the theoretical conditions rely on knowing properties of the current and updated models that would likely not be characterizable or would not be known until after the anomaly occurred. They may nevertheless aid in gaining an understanding of different possibilities. For example, a conservative stability region $\Omega_{\hat{\rho}_q}$ suggests that larger anomalies could still be detected and mitigated by a combined detection and reidentification procedure without loss of closed-loop stability. Earlier detection may provide more time for reidentification.
Remark 9.
If detection methods that are not based on the closed-loop state leaving the stability region indicate that the underlying dynamics may have changed, but the closed-loop state has not yet left $\Omega_{\hat{\rho}_q}$, then, until the closed-loop state leaves $\Omega_{\hat{\rho}_q}$, online experiments (e.g., modifying the objective function as in Reference [51]) could be performed, provided they do not impact the constraint set, to probe whether the dynamics are more consistent with the prior process model or with the potential model postulated after the anomaly is suggested. This may be a method for attempting to detect the changes before the closed-loop state leaves $\Omega_{\hat{\rho}_q}$, which could allow larger changes in the process model to be handled in practice than could be guaranteed to be handled by the theorems above, as the magnitude of the deviations in the dynamic model allowed above without loss of closed-loop stability depends on the distance between $\Omega_{\hat{\rho}_{safe,q}}$ and $\Omega_{\hat{\rho}_{samp,q}}$. It is also highlighted, however, that the above is a conservative result, meaning that, in general, larger changes may be able to be handled without loss of closed-loop stability.
Remark 10.
The above results can be used to comment on why giving the process greater flexibility to handle an anomaly after it occurs could introduce additional complexity. Specifically, consider the possibility that some actuators may not typically be used for control but could be considered for use after an anomaly (similar to how safety systems activate for chemical processes, except that here they would not act according to a prespecified logic but might be manipulated in either an on-off or continuous manner to give the process additional capabilities for handling the anomaly). This would constitute dynamics not previously considered. According to the proofs above, one way to guarantee closed-loop stability in the presence of sufficiently small disturbances is to ensure that the dynamics after the change do not differ too radically from those assumed before the change and used in the prior dynamic model in the EMPC. If additional flexibility is given to the system, the resulting dynamics constitute yet another model that would need to remain sufficiently close to the model used by the controller.
Remark 11.
The results above suggest that, if a model identification algorithm could be guaranteed to provide an accurate model with a small amount of data that could be gathered between the time that the closed-loop state leaves $\Omega_{\hat{\rho}_q}$ and the time that it leaves $\Omega_{\hat{\rho}_{safe,q}}$ (where the amount of data available in that timeframe could be known a priori from the number of measurements available in a given sampling period), then the model could be reidentified and placed within the LEMPC in a manner that is stabilizing.
Remark 12.
Instead of changes to the underlying dynamic model, anomalies may present changes in the constraint set (e.g., anomalies may change equipment material limitations (e.g., maximum shear stresses, which can change with temperature) used to place constraints on the state in an LEMPC). Because the above results assume that the stability region is fully contained within the state constraint set, the detection and response procedure above would need to ensure that there is no time at which the stability region is no longer fully included within the state constraint set under the new dynamic model. This may be handled by making Ω ρ ^ s a f e , q sufficiently conservative such that the closed-loop state never exits a region where the state constraints can be met under different dynamic models.

3.2.3. Automated Response to Unexpected Hazards: Application to a Chemical Process Example

In this section, we demonstrate concepts described above through a process example. This example considers a nonisothermal reactor in which an A → B reaction takes place, but the reactant inlet concentration $C_{A0}$ and the heat rate $Q$ supplied by a jacket are adjusted by an LEMPC. The process model is as follows:
$\dot{C}_A = \frac{F}{V}(C_{A0} - C_A) - k_0 e^{-E/(R_g T)} C_A^2$
$\dot{T} = \frac{F}{V}(T_0 - T) - \frac{\Delta H k_0}{\rho_L C_p} e^{-E/(R_g T)} C_A^2 + \frac{Q}{\rho_L C_p V}$
where the parameters are listed in Table 3 and include the reactor volume $V$, inlet reactant temperature $T_0$, pre-exponential constant $k_0$, solution heat capacity $C_p$, solution density $\rho_L$, feed/outlet volumetric flow rate $F$, gas constant $R_g$, activation energy $E$, and heat of reaction $\Delta H$. The state variables are the reactant concentration $C_A$ and temperature $T$ in the reactor, which can be written in deviation form from the operating steady-state vector $C_{As} = 1.22$ kmol/m³, $T_s = 438.2$ K, $C_{A0s} = 4$ kmol/m³, and $Q_s = 0$ kJ/h as $x = [x_1\ x_2]^T = [C_A - C_{As}\ \ T - T_s]^T$ and $u = [u_1\ u_2]^T = [C_{A0} - C_{A0s}\ \ Q - Q_s]^T$. The model of Equations (77) and (78) has the following form:
$\dot{x} = \tilde{f}(x) + g(x)u$
where $\tilde{f}$ represents a vector function derived from Equations (77) and (78) that is not multiplied by $u$ and where $g(x) = [g_1\ g_2]^T = \begin{bmatrix} F/V & 0 \\ 0 & \frac{1}{\rho_L C_p V} \end{bmatrix}^T$ represents the vector function which multiplies $u$ in these equations.
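A minimal Python sketch of the right-hand side of Equations (77) and (78) in the control-affine form $\dot{x} = \tilde{f}(x) + g(x)u$ is given below. The parameter values are placeholders rather than the values listed in Table 3 and are included only so the sketch runs; they should be replaced with the tabulated values in any actual use.

```python
import numpy as np

# Placeholder parameter values (illustrative only; replace with the Table 3 values).
F, V = 5.0, 1.0            # m^3/h, m^3
T0, k0 = 300.0, 8.46e6     # K, m^3/(kmol h)
Cp, rhoL = 0.231, 1000.0   # kJ/(kg K), kg/m^3
E, Rg = 5.0e4, 8.314       # kJ/kmol, kJ/(kmol K)
dH = -1.15e4               # kJ/kmol
CAs, Ts, CA0s, Qs = 1.22, 438.2, 4.0, 0.0   # steady-state values from the text

def cstr_rhs(x, u):
    """Right-hand side in deviation form, x = [CA - CAs, T - Ts],
    u = [CA0 - CA0s, Q - Qs], written as f_tilde(x) + g(x) u."""
    CA, T = x[0] + CAs, x[1] + Ts
    rate = k0 * np.exp(-E / (Rg * T)) * CA ** 2
    f_tilde = np.array([F / V * (CA0s - CA) - rate,
                        F / V * (T0 - T) - dH * rate / (rhoL * Cp) + Qs / (rhoL * Cp * V)])
    g = np.array([[F / V, 0.0],
                  [0.0, 1.0 / (rhoL * Cp * V)]])
    return f_tilde + g @ np.asarray(u)

# One explicit Euler step from the initial condition used in the first simulation.
x = np.array([0.4, 8.0])
print(x + 1e-5 * cstr_rhs(x, [0.0, 0.0]))
```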
The EMPC utilized to adjust the manipulated inputs C A 0 and Q utilizes the following stage cost (to maximize the production rate of the desired product) and physical bounds on the inputs:
$L_e = k_0 e^{-E/(R_g T(\tau))} C_A(\tau)^2$
$0.5\ \text{kmol/m}^3 \leq C_{A0} \leq 7.5\ \text{kmol/m}^3$
$-5 \times 10^5\ \text{kJ/h} \leq Q \leq 5 \times 10^5\ \text{kJ/h}$
Lyapunov-based stability constraints are also enforced (where a constraint of the form of Equation (22) is enforced at the end of every sampling time if $x(t_k) \in \Omega_{\hat{\rho}_e}$, and the constraint of the form of Equation (23) is enforced at $t_k$ when $x(t_k) \in \Omega_{\hat{\rho}} \setminus \Omega_{\hat{\rho}_e}$ but is then followed by a constraint of the form of Equation (22) at the end of all sampling periods after the first).
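The mode-dependent activation of the two Lyapunov-based constraints can be expressed as a simple selection rule. The sketch below is illustrative only, since Equations (22) and (23) themselves are not restated here; the returned strings merely name which constraint form would be imposed, and the quadratic Lyapunov function in the example call is a placeholder.

```python
def active_stability_constraint(V_hat, x_k, rho_hat, rho_hat_e):
    """Select which Lyapunov-based constraint mode the LEMPC enforces at t_k
    (a sketch; Equations (22) and (23) themselves are not reproduced here)."""
    v = V_hat(x_k)
    if v <= rho_hat_e:
        return "Eq. (22)-type: keep the predicted state in Omega_rho_hat_e at each sampling time"
    elif v <= rho_hat:
        return "Eq. (23)-type: contractive constraint at t_k, then Eq. (22)-type afterwards"
    else:
        return "outside Omega_rho_hat: LEMPC may be infeasible; fall back to h_NL"

# Example with a placeholder quadratic Lyapunov function and the level sets used below.
print(active_stability_constraint(lambda x: 1200.0 * x[0] ** 2, (0.3,), rho_hat=300.0, rho_hat_e=225.0))
```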
We will consider several simulations to demonstrate the developments above. In the first, we explore several aspects of the case in which a change in the underlying dynamics occurs while the process is operated under LEMPC that is minor enough that the closed-loop state does not leave $\Omega_{\hat{\rho}}$ after the change. For this case, the Lyapunov function selected was $\hat{V}_q = x^T P x$, with $P$ given as follows:
$P = \begin{bmatrix} 1200 & 5 \\ 5 & 0.1 \end{bmatrix}$
The Lyapunov-based controller $h_{NL,1}(x)$ was designed such that its first component $h_{NL,1,1}(x) = 0$ kmol/m³ and its second component $h_{NL,1,2}(x)$ is computed as follows (Sontag's formula [56]):
$h_{NL,1,2}(x) = \begin{cases} -\dfrac{L_{\tilde{f}}\hat{V}_q + \sqrt{(L_{\tilde{f}}\hat{V}_q)^2 + (L_{\tilde{g}_2}\hat{V}_q)^4}}{L_{\tilde{g}_2}\hat{V}_q}, & \text{if } L_{\tilde{g}_2}\hat{V}_q \neq 0 \\ 0, & \text{if } L_{\tilde{g}_2}\hat{V}_q = 0 \end{cases}$
It is then saturated at the input bounds of Equation (82) if those bounds would otherwise be exceeded. $L_{\tilde{f}}\hat{V}_q$ and $L_{\tilde{g}_2}\hat{V}_q$ are Lie derivatives of $\hat{V}_q$ with respect to the vector functions $\tilde{f}$ and $\tilde{g}_2$, respectively. $\hat{\rho}$ and $\hat{\rho}_e$ were taken from Reference [57] to be 300 and 225, respectively. The process state was initialized at $x_{init} = [0.4\ \text{kmol/m}^3\ \ 8\ \text{K}]^T$, with controller parameters $N = 10$ and $\Delta = 0.01$ h. The process model of Equations (77) and (78) was integrated with the explicit Euler numerical integration method using an integration step size of $10^{-4}$ h within the LEMPC and of $10^{-5}$ h to simulate the process.
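The Sontag-formula construction of $h_{NL,1,2}(x)$ with saturation can be sketched as follows for a quadratic $\hat{V}_q = x^T P x$. The Lie derivatives are computed from user-supplied callables, and the example call at the end uses placeholder dynamics rather than the reactor model of Equations (77) and (78); the sketch is a hedged illustration, not the exact controller code used in this work.

```python
import numpy as np

def sontag_component(LfV, LgV, u_min, u_max):
    """Sontag's formula for one input channel, saturated at the input bounds."""
    if abs(LgV) < 1e-12:
        u = 0.0
    else:
        u = -(LfV + np.sqrt(LfV ** 2 + LgV ** 4)) / LgV
    return float(np.clip(u, u_min, u_max))

def h_NL1(x, P, f_tilde, g2, u2_min=-5e5, u2_max=5e5):
    """First input component held at 0 kmol/m^3; second computed from Sontag's formula
    using V_hat = x^T P x, so dV/dx = 2 x^T P (f_tilde and g2 are supplied callables)."""
    dVdx = 2.0 * np.asarray(x) @ P
    LfV = float(dVdx @ f_tilde(x))
    LgV = float(dVdx @ g2(x))
    return np.array([0.0, sontag_component(LfV, LgV, u2_min, u2_max)])

# Illustrative call with placeholder dynamics (not the CSTR of Equations (77)-(78)).
P = np.array([[1200.0, 5.0], [5.0, 0.1]])
f_tilde = lambda x: np.array([-x[0], -0.5 * x[1]])
g2 = lambda x: np.array([0.0, 1.0])
print(h_NL1([0.4, 8.0], P, f_tilde, g2))
```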
For this first simulation, we assume that a change in the underlying process dynamics occurs at 0.5 h that does not compromise closed-loop stability. Specifically, at 0.5 h, it is assumed that an additional source of heat arises outside the reactor such that the right-hand side of Equation (78) is modified by the addition of another term Q e x t r a = 300 K/h. Figure 6 and Figure 7 show the process responses when the LEMPC is not aware of the change in the process dynamic model when it occurs and when it is aware of the change in the process dynamic model after it occurs such that it is fully compensated (i.e., an accurate process model is used in the LEMPC at all times, even after the dynamics change). In both cases, the closed-loop state was maintained within the stability region at all times. These simulations were carried out in MATLAB R2016b using fmincon with the default settings except for the increased iterations/function evaluations allowed, scaling u 2 down by 10 5 and providing the steady-state input values as the initial guess for the optimization problem solution at each sampling time. No attempt was made to check whether the LEMPCs in the simulations located globally optimal solutions to the LEMPC optimization problems. However, the profit was higher than that at the steady-state around which the LEMPC was designed.
The oscillatory behavior of the states before 0.5 h is caused by the fact that the profit is maximized for this process at the boundary of $\Omega_{\hat{\rho}_e}$. Without plant-model mismatch, the LEMPC is able to maintain the closed-loop state exactly on the boundary of $\Omega_{\hat{\rho}_e}$ and therefore always operates the process using the constraint of Equation (22); however, when plant-model mismatch occurs (induced by the use of different integration steps to simulate the process dynamic model within the LEMPC and to simulate the process under the computed control actions), the closed-loop state exits $\Omega_{\hat{\rho}_e}$ even though the LEMPC predicts it will stay inside of it under the control actions computed by the controller. The result is that the constraint of Equation (23) is then activated until the closed-loop state reenters $\Omega_{\hat{\rho}_e}$. This process of entering $\Omega_{\hat{\rho}_e}$, attempting to operate at its boundary, and then being kicked out only to be driven back in is the cause of the oscillatory response of the states and inputs in Figure 6 and Figure 7. It is noted, however, that though this behavior may be undesirable from, for example, an actuator wear perspective, it does not reflect a loss of closed-loop stability or a malfunction of the controller. The controller is in fact maintaining the closed-loop state within $\Omega_{\hat{\rho}}$ as it was designed to do; the fact that it does so in a perhaps visually unfamiliar fashion simply reflects that the control law was never told to avoid such behavior, so the controller has no notion that an end user would find it strange. If the oscillatory behavior is deemed undesirable, one could consider, for example, input rate of change constraints and potentially the benefits of the human response-based input rate of change strategy in the prior section for handling unexpected events.
In the case that the LEMPC is not aware of the change in the process dynamics, the profit is 32.7103, whereas when the LEMPC is aware of the change in the dynamics, the profit is 32.5833. Though these values are very close, an interesting note is that the profit when the LEMPC is not aware of the change in the underlying dynamics is slightly higher than when it is aware. Intuitively, one would expect an LEMPC with a more accurate process model to be able to locate a more economically optimal trajectory for the closed-loop state to follow than an LEMPC that cannot provide as accurate predictions. Part of the reason for the enhanced optimality in the case without knowledge of the change in the underlying dynamics, however, comes from the two-mode nature of LEMPC. In the case that the LEMPC is aware of the change in the underlying dynamics, it drives the closed-loop state to an operating condition that remains closer to the boundary of Ω ρ ^ e after 0.5 h than when it is not aware of the change in the underlying dynamics due to the plant/model mismatch being different in the different cases. The result is that the process accesses regions of state-space that lead to higher profits when the LEMPC does not know about the change in the dynamics than if the LEMPC knows more about the process dynamics.
The remainder of this example focuses on elucidating the conservativeness of the proposed approach. Specifically, we now consider the Lyapunov function selected as V ^ q = x T P x , with P given as follows:
$P = \begin{bmatrix} 2000 & 10 \\ 10 & 3 \end{bmatrix}$
Again, $h_{NL,1}(x)$ is designed such that $h_{NL,1,1}(x) = 0$ kmol/m³, and $h_{NL,1,2}(x)$ is computed via Sontag's formula but saturated at the input bounds of Equation (82) if those bounds would otherwise be exceeded. $\hat{\rho}$ and $\hat{\rho}_e$ were taken to be 1300 and 975, respectively, and $\hat{\rho}_{safe}$ was set to 1800. The process state was initialized at $x_{init} = [0\ \text{kmol/m}^3\ \ 0\ \text{K}]^T$, with controller parameters $N = 10$ and $\Delta = 0.01$ h. The process model of Equations (77) and (78) was integrated with the explicit Euler numerical integration method using an integration step size of $10^{-4}$ h within the EMPC and with an integration step size of $10^{-5}$ h to simulate the process. The constraint of the form of Equation (23) is enforced at $t_k$ when $x(t_k) \in \Omega_{\hat{\rho}} \setminus \Omega_{\hat{\rho}_e}$ but is then followed by a constraint of the form of Equation (22) at the end of all sampling periods.
At 0.5 h, it is assumed that an additional source of heat arises outside the reactor such that the right-hand side of Equation (78) is modified by the addition of another heat term $Q_{extra} = 500$ K/h. In this case, with no change in the process model used by the EMPC or even in the control law (i.e., in contrast to the implementation strategy in Section 3.2.1, $h_{NL,1}$ is not employed when the closed-loop state exits $\Omega_{\hat{\rho}}$), the behavior in Figure 8 results. Notably, the closed-loop state does not leave $\Omega_{\hat{\rho}_{safe}}$, and no infeasibility issues occurred. In contrast, if we begin to utilize $h_{NL,1}$ when the closed-loop state leaves $\Omega_{\hat{\rho}}$, the closed-loop state will eventually leave $\Omega_{\hat{\rho}_{safe}}$ (Figure 9). We can obtain a new empirical model (in this case, we assume that the dynamics become fully known at 0.54 h and are accounted for completely to demonstrate the result) and can use that model to update $h_{NL,1}$ to $h_{NL,2}$ (i.e., $h_{NL,1}$ but with modified saturation bounds to reflect design around the new steady-state of the system with $Q_{extra} = 500$ K/h) before the closed-loop state leaves $\Omega_{\hat{\rho}_{safe}}$, as suggested in the implementation strategy in Section 3.2.1 (creating the profile shown in Figure 10, corresponding to 2 h of operation in which the closed-loop state is driven back to the origin under $h_{NL,2}$). However, the fact that the closed-loop state would not have left the stability region if the controller had not been adjusted illustrates the conservativeness of the approach. We note that Figure 10 does not complete the implementation strategy in Section 3.2.1 (which would involve the use of a new LEMPC after the closed-loop state reenters $\Omega_{\hat{\rho}}$ for this example) because that part of the implementation strategy will be demonstrated in the discussion for a slightly different LEMPC presented below.
Finally, we provide a result where the LEMPC computes a time-varying input policy due to the desire to enforce a constraint on the amount of reactant available in the feed over an hour (i.e., a material/feedstock constraint) as follows:
$\frac{1}{1\ \text{h}}\int_{t=0\ \text{h}}^{t=1\ \text{h}} u_1(\tau)\, d\tau = 0\ \text{kmol/m}^3$
This constraint is enforced via a soft constraint formulation by introducing slack variables s 1 and s 2 that are penalized in a modified objective function as follows:
$-\int_{t_k}^{t_{k+N}} k_0 e^{-E/(R_g T(\tau))} C_A(\tau)^2\, d\tau + 100(s_1^2 + s_2^2)$
They are used in the following constraints:
$\sum_{i=0}^{k-1} u_1^*(t_i|t_i) + \sum_{i=k}^{k+N_k} u_1(t_i|t_k) \leq 3.5\delta\left(100 - \frac{t_k}{\Delta} - N\right) + s_1$
$-\sum_{i=0}^{k-1} u_1^*(t_i|t_i) - \sum_{i=k}^{k+N_k} u_1(t_i|t_k) \leq 3.5\delta\left(100 - \frac{t_k}{\Delta} - N\right) + s_2$
where $N_k = N$ and $\delta = 1$ when $t_k < 0.9$ h and where $\delta = 0$ and $N_k$ is the number of sampling periods left in a 1 h operating period when $t_k \geq 0.9$ h. These constraints are developed based on Reference [12]. $u_1^*(t_i|t_i)$ signifies the value of $u_1$ applied to the process at a prior sampling time, and $u_1(t_i|t_k)$ reflects the value of $u_1$ predicted at the current sampling time $t_k$ to be applied for $t \in [t_i,t_{i+1})$, $i = k,\ldots,k+N_k$. The upper and lower bounds on $s_1$ and $s_2$ were set to $2\times10^{19}$ and $-2\times10^{19}$, respectively, to allow them to be effectively unbounded. The initial guesses of the slack variables were set to 0 at each sampling time.
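A sketch of the bookkeeping behind the soft material constraint is given below. It assumes the reading of the constraints above in which 3.5 kmol/m³ is the magnitude of the deviation input bound and $100 - t_k/\Delta - N$ counts the sampling periods remaining outside the prediction horizon, so the helper and its outputs are illustrative rather than the exact implementation used in this work.

```python
import numpy as np

def material_constraint_data(u1_past, t_k, Delta, N, u1_max_dev=3.5, horizon_total=100):
    """Return the accumulated past u1 usage, the slack-relaxed bound, and the horizon
    length N_k, following the delta/N_k switching logic described above (illustrative)."""
    if t_k < 0.9:
        delta, N_k = 1.0, N
    else:
        delta, N_k = 0.0, int(round((1.0 - t_k) / Delta))   # periods left in the 1 h window
    past_sum = float(np.sum(u1_past))
    bound = u1_max_dev * delta * (horizon_total - t_k / Delta - N)
    return past_sum, bound, N_k

past_sum, bound, N_k = material_constraint_data(u1_past=np.full(50, 0.2), t_k=0.5, Delta=0.01, N=10)
# The EMPC would then require: -bound - s2 <= past_sum + sum(predicted u1) <= bound + s1,
# with s1 and s2 penalized as 100*(s1**2 + s2**2) in the objective.
print(past_sum, bound, N_k)
```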
When the LEMPC with the above modifications is applied to the process with $Q_{extra} = 500$ K/h starting at 0.5 h, the closed-loop state again exits $\Omega_{\hat{\rho}}$ for some time after 0.5 h but reenters it and also does not exit $\Omega_{\hat{\rho}_{safe}}$, once again reflecting the conservatism, from a closed-loop stability standpoint, of a strategy that updates the process model whenever the closed-loop state leaves $\Omega_{\hat{\rho}}$. Furthermore, if $h_{NL,1}$ is utilized after it is detected that the closed-loop state has left $\Omega_{\hat{\rho}}$ (the first sampling time at which this occurs is 0.51 h), then the state exits $\Omega_{\hat{\rho}_{safe}}$ by 0.52 h, showing that the length of the sampling period or the size of $\Omega_{\hat{\rho}}$ with respect to $\Omega_{\hat{\rho}_{safe}}$ is not sufficiently small to allow model updates to be imposed before closed-loop stability is jeopardized, because measurements are only available every sampling period. If instead, however, $\hat{\rho}$ is updated to be 1200 and $\hat{\rho}_e$ is set to 900, then the closed-loop state remains in $\Omega_{\hat{\rho}}$ between 0.51 and 0.52 h. If at 0.52 h we assume that the new dynamics (i.e., with $Q_{extra} = 500$ K/h) become available and are used in designing $h_{NL,2}$ (used from 0.52 h until the first sampling time at which $x(t_k) \in \Omega_{\hat{\rho}}$ again) and that a second LEMPC designed based on the updated model is used after the closed-loop state has reentered $\Omega_{\hat{\rho}}$, the state-space trajectory in Figure 11 results.

4. Conclusions

This work developed a Lyapunov-based EMPC framework for handling unexpected considerations of two types. The first was end-user response to how a control law operates a process: input rate of change constraints provide a controller self-update capability that allows even uncertain or imprecise information about the end-user response to be used in optimizing the controller formulation without loss of closed-loop stability or feasibility. The second was the occurrence of anomalies: conditions were presented which guarantee that the closed-loop state can be stabilized in the presence of an anomaly that changes the underlying process dynamics, provided that a new process model is identified sufficiently quickly, with the LEMPC stability properties used to develop an anomaly detection mechanism. Chemical process examples were presented for both cases to demonstrate the proposed approach.
The work above provides insights into interpretability and verification considerations for EMPC from a theoretical perspective. However, these remain significant challenges for this control design. For example, there is no guarantee that adjusting a given constraint (e.g., adjusting the upper bound in an input rate of change constraint) will cause process behavior to appear interpretable to an end user before it approaches steady-state behavior, which may reduce the benefits of using EMPC. Furthermore, the results related to anomaly handling were demonstrated via the process examples to be highly conservative. In addition, no methods were presented for practically ascertaining, online, the time until a detected anomaly would result in the closed-loop state leaving a known region of state-space, which would facilitate taking appropriate actions. Further work on these issues is needed to develop practical EMPC designs with appropriate safety and interpretability properties and with a low time required to verify the designs before putting them into the field for different processes.

Funding

Financial support from the National Science Foundation CBET-1839675, from the Air Force Office of Scientific Research award number FA9550-19-1-0059, and from Wayne State University is gratefully acknowledged.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proof of Proposition 4

Proof. 
The result in Equation (62) is stated in Proposition 1; therefore, it remains to prove that Equation (63) holds. To derive the result of Equation (63), Equations (59) and (60) are integrated as follows:
$\bar{x}_{a,i+1,q}(t) = \bar{x}_{a,i,q}(t_{s,i+1}) + \int_{t_{s,i+1}}^{t} \bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(s),\bar{u}_q(s),w_{i+1}(s))\, ds$
$\bar{x}_{b,q}(t) = \bar{x}_{b,q}(t_{s,i+1}) + \int_{t_{s,i+1}}^{t} \bar{f}_{NL,q}(\bar{x}_{b,q}(s),\bar{u}_q(s))\, ds$
for $t \in [t_{s,i+1},t']$. Subtracting Equation (A2) from Equation (A1) and taking norms of both sides of the resulting equation gives the following:
$|\bar{x}_{a,i+1,q}(t) - \bar{x}_{b,q}(t)| = \left|\bar{x}_{a,i,q}(t_{s,i+1}) - \bar{x}_{b,q}(t_{s,i+1}) + \int_{t_{s,i+1}}^{t}\left[\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(s),\bar{u}_q(s),w_{i+1}(s)) - \bar{f}_{NL,q}(\bar{x}_{b,q}(s),\bar{u}_q(s))\right] ds\right|$
$\leq |\bar{x}_{a,i,q}(t_{s,i+1}) - \bar{x}_{b,q}(t_{s,i+1})| + \int_{t_{s,i+1}}^{t}\left|\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(s),\bar{u}_q(s),w_{i+1}(s)) - \bar{f}_{NL,q}(\bar{x}_{b,q}(s),\bar{u}_q(s))\right| ds$
$\leq f_{W,i,q}(t_{s,i+1}-t_0) + \int_{t_{s,i+1}}^{t}\left|\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(s),\bar{u}_q(s),w_{i+1}(s)) - \bar{f}_{i,q}(\bar{x}_{a,i,q}(s),\bar{u}_q(s),w_i(s)) + \bar{f}_{i,q}(\bar{x}_{a,i,q}(s),\bar{u}_q(s),w_i(s)) - \bar{f}_{NL,q}(\bar{x}_{b,q}(s),\bar{u}_q(s))\right| ds$
$\leq f_{W,i,q}(t_{s,i+1}-t_0) + \int_{t_{s,i+1}}^{t}\left|\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(s),\bar{u}_q(s),w_{i+1}(s)) - \bar{f}_{i,q}(\bar{x}_{a,i,q}(s),\bar{u}_q(s),w_i(s))\right| ds + \int_{t_{s,i+1}}^{t}\left|\bar{f}_{i,q}(\bar{x}_{a,i,q}(s),\bar{u}_q(s),w_i(s)) - \bar{f}_{NL,q}(\bar{x}_{b,q}(s),\bar{u}_q(s))\right| ds$
From Equations (15), (52), and (61), we have the following:
$|\bar{x}_{a,i+1,q}(t) - \bar{x}_{b,q}(t)| \leq f_{W,i,q}(t_{s,i+1}-t_0) + \int_{t_{s,i+1}}^{t} M_{change,i,q}\, ds + \int_{t_{s,i+1}}^{t}\left|\bar{f}_{i,q}(\bar{x}_{a,i,q}(s),\bar{u}_q(s),w_i(s)) - \bar{f}_{i,q}(\bar{x}_{b,q}(s),\bar{u}_q(s),0) + \bar{f}_{i,q}(\bar{x}_{b,q}(s),\bar{u}_q(s),0) - \bar{f}_{NL,q}(\bar{x}_{b,q}(s),\bar{u}_q(s))\right| ds$
$\leq f_{W,i,q}(t_{s,i+1}-t_0) + M_{change,i,q}(t-t_{s,i+1}) + \int_{t_{s,i+1}}^{t}\left|\bar{f}_{i,q}(\bar{x}_{a,i,q}(s),\bar{u}_q(s),w_i(s)) - \bar{f}_{i,q}(\bar{x}_{b,q}(s),\bar{u}_q(s),0)\right| ds + \int_{t_{s,i+1}}^{t}\left|\bar{f}_{i,q}(\bar{x}_{b,q}(s),\bar{u}_q(s),0) - \bar{f}_{NL,q}(\bar{x}_{b,q}(s),\bar{u}_q(s))\right| ds$
$\leq f_{W,i,q}(t_{s,i+1}-t_0) + M_{change,i,q}(t-t_{s,i+1}) + \int_{t_{s,i+1}}^{t}\left(L_{x,i,q}|\bar{x}_{a,i,q}(s) - \bar{x}_{b,q}(s)| + L_{w,i,q}|w_i(s)|\right) ds + \int_{t_{s,i+1}}^{t} M_{err,i,q}\, ds$
Using Equation (50) we get the following,
$|\bar{x}_{a,i+1,q}(t) - \bar{x}_{b,q}(t)| \leq f_{W,i,q}(t_{s,i+1}-t_0) + M_{change,i,q}(t-t_{s,i+1}) + (L_{w,i,q}\theta_i + M_{err,i,q})\int_{t_{s,i+1}}^{t}\left(e^{L_{x,i,q}s} - 1\right) ds + \int_{t_{s,i+1}}^{t}(L_{w,i,q}\theta_i + M_{err,i,q})\, ds$
$\leq f_{W,i,q}(t_{s,i+1}-t_0) + M_{change,i,q}(t-t_{s,i+1}) + (L_{w,i,q}\theta_i + M_{err,i,q})\int_{t_{s,i+1}}^{t}\left(e^{L_{x,i,q}s} - 1\right) ds + (L_{w,i,q}\theta_i + M_{err,i,q})(t-t_{s,i+1})$
$\leq f_{W,i,q}(t_{s,i+1}-t_0) + M_{change,i,q}(t-t_{s,i+1}) + \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}t} - e^{L_{x,i,q}t_{s,i+1}}\right)$
 □

Appendix B. Proof of Theorem 2

Proof. 
To guarantee the results, recursive feasibility of the LEMPC must hold. Feasibility of the LEMPC of Equation (24) follows from Theorem 1 when $x(t_k) \in \Omega_{\hat{\rho}_q}$. Subsequently, closed-loop stability must be proven both when $t_{s,i+1} = t_k$ and when $t_{s,i+1} \in (t_k,t_{k+1})$.
Consider first the case that $t_{s,i+1} = t_k$. In this case, if Equation (68) holds with $p = i+1$ and $x(t_k) \in \Omega_{\hat{\rho}_q}$, then $x(t) \in \Omega_{\hat{\rho}_q}$ from Theorem 1 for $t \geq 0$. Consider second the case that $t_{s,i+1} \in (t_k,t_{k+1})$. In this case, until $t_{s,i+1}$, if Equations (68) and (69) hold for $p = i$, the closed-loop state is maintained within $\Omega_{\hat{\rho}_q}$ from Theorem 1. To guarantee that the closed-loop state is maintained in $\Omega_{\hat{\rho}_q}$ after $t_{s,i+1}$ until $t_{k+1}$, it is first noted that, if $x(t_k) \in \Omega_{\hat{\rho}_{e,q}}$ and $t_{s,i+1} \in (t_k,t_{k+1})$, then from Proposition 2, we have the following:
$\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{V}_q(\bar{x}_{b,q}(t_{k+1})) + f_{V,q}(|\bar{x}_{a,i+1,q}(t) - \bar{x}_{b,q}(t_{k+1})|)$
if $\bar{x}_{a,i+1,q}(t), \bar{x}_{b,q}(t) \in \Omega_{\hat{\rho}_q}$ for $t \in [t_k,t_{k+1}]$. If Proposition 4 holds, then from Equation (24f), we have the following:
$\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{\rho}_{e,q} + f_{V,q}\left(f_{W,i,q}(t_{s,i+1}-t_k) + M_{change,i,q}(t-t_{s,i+1}) + \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}t} - e^{L_{x,i,q}t_{s,i+1}}\right)\right)$
If Equation (70) holds, then $\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{\rho}_q$ for $t \in [t_{s,i+1},t_{k+1}]$.
If instead $x(t_k) \in \Omega_{\hat{\rho}_q} \setminus \Omega_{\hat{\rho}_{e,q}}$ and if Equations (68) and (69) hold, the closed-loop state is maintained within $\Omega_{\hat{\rho}_q}$ from Theorem 1 until $t_{s,i+1}$. To guarantee that the closed-loop state is maintained in $\Omega_{\hat{\rho}_q}$ after $t_{s,i+1}$ until $t_{k+1}$, it is first noted that the following is true:
$\frac{\partial \hat{V}_q(x(t_k))}{\partial x}\bar{f}_{NL,q}(x(t_k),\bar{u}_q(t_k)) \leq \frac{\partial \hat{V}_q(x(t_k))}{\partial x}\bar{f}_{NL,q}(x(t_k),h_{NL,q}(x(t_k))) \leq -\hat{\alpha}_{3,q}(|x(t_k)|)$
from Equation (12b) and Equation (24g). When $t_k \leq t < t_{s,i+1}$, then from Reference [51], if Equation (68) and the conditions of Theorem 2 hold with $p = i$, the following is true:
$\frac{\partial \hat{V}_q(\bar{x}_{a,i,q}(\tau))}{\partial x}\bar{f}_{i,q}(\bar{x}_{a,i,q}(\tau),\bar{u}_q(t_k),w_i(\tau)) \leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i$
for $\tau \in [t_k,t_{s,i+1})$, and
$\hat{V}_q(\bar{x}_{a,i,q}(t_{s,i+1})) \leq \hat{V}_q(x(t_k))$
Given that x ¯ a , i , q ( t s , i + 1 ) = x ¯ a , i + 1 , q ( t s , i + 1 ) , the following holds:
$\frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t_{s,i+1}),\bar{u}_q(t_k),0) = \frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t_{s,i+1}),\bar{u}_q(t_k),0) + \frac{\partial \hat{V}_q(\bar{x}_{a,i,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i,q}(\bar{x}_{a,i,q}(t_{s,i+1}),\bar{u}_q(t_k),0) - \frac{\partial \hat{V}_q(\bar{x}_{a,i,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i,q}(\bar{x}_{a,i,q}(t_{s,i+1}),\bar{u}_q(t_k),0)$
$\leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t_{s,i+1}),\bar{u}_q(t_k),0) - \frac{\partial \hat{V}_q(\bar{x}_{a,i,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i,q}(\bar{x}_{a,i,q}(t_{s,i+1}),\bar{u}_q(t_k),0)$
$\leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \frac{\partial \hat{V}_q(\bar{x}_{a,i,q}(t_{s,i+1}))}{\partial x}\left[\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t_{s,i+1}),\bar{u}_q(t_k),0) - \bar{f}_{i,q}(\bar{x}_{a,i,q}(t_{s,i+1}),\bar{u}_q(t_k),0)\right]$
$\leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \hat{\alpha}_{4,q}(|\bar{x}_{a,i,q}(t_{s,i+1})|)M_{change,i,q}$
$\leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{change,i,q}$
where the last inequality follows from the fact that $\bar{x}_{a,i,q}(t_{s,i+1}) \in \Omega_{\hat{\rho}_q}$ if $x(t_k) \in \Omega_{\hat{\rho}_q}$ when Equations (68) and (69) hold according to Theorem 1.
Finally, for $\tau \in [t_{s,i+1},t_{k+1})$,
$\frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(\tau))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(\tau),\bar{u}_q(t_k),w_{i+1}(\tau)) = \frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(\tau))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(\tau),\bar{u}_q(t_k),w_{i+1}(\tau)) + \frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t_{s,i+1}),\bar{u}_q(t_k),0) - \frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t_{s,i+1}),\bar{u}_q(t_k),0)$
$\leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{change,i,q} + \frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(\tau))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(\tau),\bar{u}_q(t_k),w_{i+1}(\tau)) - \frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(t_{s,i+1}))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(t_{s,i+1}),\bar{u}_q(t_k),0)$
$\leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{change,i,q} + L_{x,i+1,q}|\bar{x}_{a,i+1,q}(\tau) - \bar{x}_{a,i+1,q}(t_{s,i+1})| + L_{w,i+1,q}\theta_{i+1}$
$\leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{e,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i,q} + L_{x,i,q}M_i\Delta + L_{w,i,q}\theta_i + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{change,i,q} + L_{x,i+1,q}M_{i+1}\Delta + L_{w,i+1,q}\theta_{i+1}$
If Equation (71) holds, then integrating Equation (A12) gives that $\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{V}_q(\bar{x}_{a,i,q}(t_{s,i+1}))$ for all $t \in [t_{s,i+1},t_{k+1}]$. Since $\bar{x}_{a,i+1,q}(t_{s,i+1}) \in \Omega_{\hat{\rho}_q}$, this guarantees that the closed-loop state remains in $\Omega_{\hat{\rho}_q}$ even after the switch in the process model occurs, regardless of whether it occurs at a sampling time or throughout a sampling period, when the conditions of the theorem hold. □

Appendix C. Proof of Theorem 3

Proof. 
This proof consists of several parts. First, recursive feasibility of the LEMPC of Equation (24) until t d , q is presented. Second, it is demonstrated that, after t s , i + 1 and before t d , q , the closed-loop state is maintained in Ω ρ ^ s a m p , q under the conditions of the theorem. Third, it is demonstrated that, after t d , q , the closed-loop state will be maintained in Ω ρ ^ q for a number of sampling periods given by t h , q .
Part 1. Until t d , q , each state measurement provided to the LEMPC of Equation (24) is within Ω ρ ^ q . From Reference [51], under the conditions of Equations (65) and (66), this guarantees feasibility of the LEMPC of Equation (24). After t d , q , when the closed-loop state exits Ω ρ ^ q , feasibility is no longer guaranteed for the LEMPC of Equation (24) but h N L , q is then used instead according to the statement of the theorem so that a characterizable control law is always used.
Part 2. Until $t_{s,i+1}$, closed-loop stability within $\Omega_{\hat{\rho}_q}$ is guaranteed under the LEMPC of Equation (24) under the conditions in Equations (65) and (66) from Reference [51]. Subsequently, until $t_{d,q}$, it must be demonstrated that, if the state measurement is contained within $\Omega_{\hat{\rho}_q}$ at $t_k$, then $x(t) \in \Omega_{\hat{\rho}_{samp,q}} \subset \Omega_{\hat{\rho}_{safe,q}}$ for $t \in [t_k,t_{k+1}]$. Here, one of two cases holds: either $x(t_k) \in \Omega_{\hat{\rho}_{e,q}}$ or $x(t_k) \in \Omega_{\hat{\rho}_q} \setminus \Omega_{\hat{\rho}_{e,q}}$. The state of the underlying model before $t_{s,i+1}$ is denoted by $\bar{x}_{a,i,q}$ and, after, by $\bar{x}_{a,i+1,q}$.
If $x(t_k) \in \Omega_{\hat{\rho}_{e,q}}$ and if $t_{s,i+1} \in [t_k,t_{k+1})$, from Propositions 1 and 2 and Equation (24f), we have the following:
$\hat{V}_q(\bar{x}_{a,i,q}(t)) \leq \hat{V}_q(\bar{x}_{b,q}(t)) + f_{V,q}(|\bar{x}_{a,i,q}(t) - \bar{x}_{b,q}(t)|) \leq \hat{\rho}_{e,q} + f_{V,q}(f_{W,i,q}(\Delta)) \leq \hat{\rho}_q$
for $t \in [t_k,t_{s,i+1})$ when Equation (65) holds, and
$\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{V}_q(\bar{x}_{b,q}(t)) + f_{V,q}(|\bar{x}_{a,i+1,q}(t) - \bar{x}_{b,q}(t)|) \leq \hat{\rho}_{e,q} + f_{V,q}\left(f_{W,i,q}(t_{s,i+1}-t_k) + M_{change,i,q}(t-t_{s,i+1}) + \frac{L_{w,i,q}\theta_i + M_{err,i,q}}{L_{x,i,q}}\left(e^{L_{x,i,q}t} - e^{L_{x,i,q}t_{s,i+1}}\right)\right)$
for $t \in [t_{s,i+1},t_{k+1})$ from Proposition 4. From the condition in Equation (73), this gives that $\hat{V}_q(x(t)) \leq \hat{\rho}_{samp,q}$, so that the closed-loop state is maintained within $\Omega_{\hat{\rho}_{samp,q}}$ for all $t \in [t_k,t_{k+1})$.
If instead $t_{s,i+1}$ occurs before or at $t_k$, then $\bar{x}_{b,q}(t_k) = \bar{x}_{a,i+1,q}(t_k)$ and Propositions 1 and 2 and Equation (24f) give the following:
$\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{V}_q(\bar{x}_{b,q}(t)) + f_{V,q}(f_{W,i+1,q}(\Delta)) \leq \hat{\rho}_{e,q} + f_{V,q}(f_{W,i+1,q}(\Delta))$
for all $t \in [t_k,t_{k+1})$. From the condition in Equation (75), this gives that $\hat{V}_q(x(t)) \leq \hat{\rho}_{samp,q}$, so that the closed-loop state is maintained within $\Omega_{\hat{\rho}_{samp,q}}$ for all $t \in [t_k,t_{k+1})$.
If $x(t_k) \in \Omega_{\hat{\rho}_q} \setminus \Omega_{\hat{\rho}_{e,q}}$, then the constraint of Equation (24g) is used. In this case, we consider the case where $t_{s,i+1} \in [t_k,t_{k+1})$ and the case where $t_{s,i+1}$ occurs before $t_k$ separately.
When $t_{s,i+1} \in [t_k,t_{k+1})$, then before $t_{s,i+1}$, Equation (24g) holds. From Reference [51], Equation (66) with Equation (67) causes $\bar{x}_{a,i,q}(t) \in \Omega_{\hat{\rho}_q}$ for $t \in [t_k,t_{s,i+1})$. Subsequently, this result no longer holds because the underlying dynamic model changed, so that Equation (24g) no longer provides an indication of the conditions which the closed-loop state meets, and a worst-case scenario in which the closed-loop state could subsequently move out of $\Omega_{\hat{\rho}_q}$ is considered. Specifically, the first inequality in Equation (A14) continues to hold. Equation (24f) does not necessarily hold, but it is instead guaranteed [51] that $\bar{x}_{b,q}(t) \in \Omega_{\hat{\rho}_q}$ under Equations (66) and (67), so that $\hat{V}_q(\bar{x}_{b,q}) \leq \hat{\rho}_q$. Then, if Equation (74) holds, extending the first inequality in Equation (A14) guarantees that $\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{\rho}_{samp,q}$ for $t \in [t_{s,i+1},t_{k+1})$. Therefore, throughout a sampling period containing $t_{s,i+1}$, the closed-loop state does not leave $\Omega_{\hat{\rho}_{samp,q}}$. If instead $t_{s,i+1}$ is before $t_k$, then Equation (24g) is activated at $t_k$, and when $\bar{x}_{a,i+1,q}(t_k) \in \Omega_{\hat{\rho}_q} \setminus \Omega_{\hat{\rho}_{s,q}}$ [51]:
$\frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(\tau))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(\tau),\bar{u}_q(t_k),w_{i+1}(\tau)) \leq -\hat{\alpha}_{3,q}(\hat{\alpha}_{2,q}^{-1}(\hat{\rho}_{s,q})) + \hat{\alpha}_{4,q}(\hat{\alpha}_{1,q}^{-1}(\hat{\rho}_q))M_{err,i+1,q} + L_{x,i+1,q}M_{i+1}\Delta + L_{w,i+1,q}\theta_{i+1}$
When Equation (72) is satisfied,
$\frac{\partial \hat{V}_q(\bar{x}_{a,i+1,q}(\tau))}{\partial x}\bar{f}_{i+1,q}(\bar{x}_{a,i+1,q}(\tau),\bar{u}_q(t_k),w_{i+1}(\tau)) \leq \epsilon_{W,i+1,q}/\Delta$
or
$\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{V}_q(\bar{x}_{a,i+1,q}(t_k)) + \frac{\epsilon_{W,i+1,q}}{\Delta}(t-t_k)$
This indicates that $\hat{V}_q$ is guaranteed to increase at no more than a worst-case rate along the closed-loop state trajectories under the control actions determined by the LEMPC of Equation (24) if the condition of Equation (72) is satisfied after an anomaly occurs. To ensure that, at the end of the sampling period, $\hat{V}_q(\bar{x}_{a,i+1,q}(t)) \leq \hat{\rho}_{samp,q}$, given that $\hat{V}_q(\bar{x}_{a,i+1,q}(t_k)) \leq \hat{\rho}_q$, Equation (76) must hold. If $t_{s,i+1}$ is before $t_k$ but $\bar{x}_{a,i+1,q}(t_k) \in \Omega_{\hat{\rho}_{s,q}}$, then if $\hat{\rho}_{\min,i+1,q} \leq \hat{\rho}_{samp,q}$, then $\bar{x}_{a,i+1,q}(t) \in \Omega_{\hat{\rho}_{samp,q}}$ from Equation (67).
Thus, whether $x(t_k) \in \Omega_{\hat{\rho}_{e,q}}$ or $x(t_k) \in \Omega_{\hat{\rho}_q} \setminus \Omega_{\hat{\rho}_{e,q}}$, $x(t_{k+1}) \in \Omega_{\hat{\rho}_{samp,q}}$. Applying this recursively indicates that, from $t_{s,i+1}$ until $t_{d,q}$, the closed-loop state is maintained within $\Omega_{\hat{\rho}_{samp,q}}$. This also indicates that $\hat{V}_q(\bar{x}_{a,i+1,q}(t_{d,q})) \leq \hat{\rho}_{samp,q}$. Because $\Omega_{\hat{\rho}_{samp,q}} \subset \Omega_{\hat{\rho}_{safe,q}}$, $\bar{x}_{a,i+1,q}(t_{d,q}) \in \Omega_{\hat{\rho}_{safe,q}}$ as well.
Part 3. At t d , q , h N L , q in sample-and-hold begins to be used to control the process. Again, Equations (A16)–(A18) hold.
The time $t_{out,q}$ at which the closed-loop state reaches the boundary of $\Omega_{\hat{\rho}_{safe,q}}$ (i.e., when $\hat{V}_q(\bar{x}_{a,i+1,q}(t_{out,q})) = \hat{\rho}_{safe,q}$) when initialized from $\hat{V}_q(\bar{x}_{a,i+1,q}(t_k)) = \hat{\rho}_{samp,q}$, where $\hat{\rho}_{samp,q} \leq \hat{\rho}_{safe,q}$, is at least $(\hat{\rho}_{safe,q} - \hat{\rho}_{samp,q})\Delta/\epsilon_{W,i+1,q} + t_k$. Because the time between $t_k$ and $t_{out,q}$ is therefore no less than $(\hat{\rho}_{safe,q} - \hat{\rho}_{samp,q})\Delta/\epsilon_{W,i+1,q}$, the number of sampling periods available after $t_{d,q}$ until the model needs to be updated with one which meets the conditions in Equation (66) with $i$ set to $i+1$ and $q$ set to $q+1$ is $\mathrm{floor}((\hat{\rho}_{safe,q} - \hat{\rho}_{samp,q})/\epsilon_{W,i+1,q})$. □

Appendix D. Proof of Theorem 4

Proof. 
If $h_{NL,q+1}$ is used to control the system after $t_{ID,q}$ and the conditions of Theorem 4 are met, then $x_{a,i+1,q}(t_{ID,q}) = x_{a,i+1,q+1}(t_{ID,q})$, which lies in both $\Omega_{\hat{\rho}_{safe,q}}$ and $\Omega_{\hat{\rho}_{safe,q+1}}$, so that the closed-loop state has not left either region. From Reference [51], if Equation (66) is met for the $q+1$/$i+1$ model combination, then $h_{NL,q+1}$ causes $\hat{V}_{q+1}$ to decrease, so that the closed-loop state will not leave $\Omega_{\hat{\rho}_{safe,q+1}}$ before it enters $\Omega_{\hat{\rho}_{q+1}}$. Once the closed-loop state enters $\Omega_{\hat{\rho}_{q+1}}$, the LEMPC of Equation (24) is used with the $q+1$th model, and if Equations (65) and (66) are met for the $q+1$/$i+1$ model combination, the closed-loop state is maintained in $\Omega_{\hat{\rho}_{q+1}}$ from Reference [51]. □

References

  1. Lee, J.H.; Shin, J.; Realff, M.J. Machine learning: Overview of the recent progresses and implications for the process systems engineering field. Comput. Chem. Eng. 2018, 114, 111–121.
  2. Venkatasubramanian, V. The promise of artificial intelligence in chemical engineering: Is it here, finally? AIChE J. 2019, 65, 466–478.
  3. Bangi, M.S.F.; Kwon, J.S.I. Deep hybrid modeling of chemical process: Application to hydraulic fracturing. Comput. Chem. Eng. 2020, 134, 106696.
  4. Wu, Z.; Christofides, P.D. Economic Machine-Learning-Based Predictive Control of Nonlinear Systems. Mathematics 2019, 7, 494.
  5. Lovelett, R.J.; Dietrich, F.; Lee, S.; Kevrekidis, I.G. Some manifold learning considerations towards explicit model predictive control. arXiv 2018, arXiv:1812.01173.
  6. Lucia, S.; Karg, B. A deep learning-based approach to robust nonlinear model predictive control. IFAC-PapersOnLine 2018, 51, 511–516.
  7. Tong, C.; Palazoglu, A.; Yan, X. Improved ICA for process monitoring based on ensemble learning and Bayesian inference. Chemom. Intell. Lab. Syst. 2014, 135, 141–149.
  8. Chiang, L.H.; Kotanchek, M.E.; Kordon, A.K. Fault diagnosis based on Fisher discriminant analysis and support vector machines. Comput. Chem. Eng. 2004, 28, 1389–1401.
  9. Rawlings, J.B.; Angeli, D.; Bates, C.N. Fundamentals of economic model predictive control. In Proceedings of the IEEE Conference on Decision and Control, Maui, HI, USA, 10–13 December 2012; pp. 3851–3861.
  10. Grüne, L. Economic receding horizon control without terminal constraints. Automatica 2013, 49, 725–734.
  11. Huang, R.; Harinath, E.; Biegler, L.T. Lyapunov stability of economically oriented NMPC for cyclic processes. J. Process Control 2011, 21, 501–509.
  12. Ellis, M.; Durand, H.; Christofides, P.D. A tutorial review of economic model predictive control methods. J. Process Control 2014, 24, 1156–1178.
  13. Patel, N.R.; Risbeck, M.J.; Rawlings, J.B.; Wenzel, M.J.; Turney, R.D. Distributed economic model predictive control for large-scale building temperature regulation. In Proceedings of the American Control Conference, Boston, MA, USA, 6–8 July 2016; pp. 895–900.
  14. Zhang, A.; Yin, X.; Liu, S.; Zeng, J.; Liu, J. Distributed economic model predictive control of wastewater treatment plants. Chem. Eng. Res. Des. 2019, 141, 144–155.
  15. Zachar, M.; Daoutidis, P. Nonlinear Economic Model Predictive Control for Microgrid Dispatch. IFAC-PapersOnLine 2016, 49, 778–783.
  16. Gopalakrishnan, A.; Biegler, L.T. Economic nonlinear model predictive control for periodic optimal operation of gas pipeline networks. Comput. Chem. Eng. 2013, 52, 90–99.
  17. Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. USA 2016, 113, 3932–3937.
  18. Narasingam, A.; Kwon, J.S.I. Data-driven identification of interpretable reduced-order models using sparse regression. Comput. Chem. Eng. 2018, 119, 101–111.
  19. Chakraborty, S.; Tomsett, R.; Raghavendra, R.; Harborne, D.; Alzantot, M.; Cerutti, F.; Srivastava, M.; Preece, A.; Julier, S.; Rao, R.M.; et al. Interpretability of deep learning models: A survey of results. In Proceedings of the IEEE Smart World Congress, San Francisco, CA, USA, 4–8 August 2017.
  20. Karpathy, A.; Johnson, J.; Li, F.-F. Visualizing and understanding recurrent networks. arXiv 2015, arXiv:1506.02078.
  21. Qin, S.J.; Badgwell, T.A. A survey of industrial model predictive control technology. Control Eng. Pract. 2003, 11, 733–764.
  22. Kheradmandi, M.; Mhaskar, P. Prescribing Closed-Loop Behavior Using Nonlinear Model Predictive Control. Ind. Eng. Chem. Res. 2017, 56, 15083–15093.
  23. Bayer, F.A.; Müller, M.A.; Allgöwer, F. Tube-based robust economic model predictive control. J. Process Control 2014, 24, 1237–1246.
  24. Heidarinejad, M.; Liu, J.; Christofides, P.D. Economic model predictive control of nonlinear process systems using Lyapunov techniques. AIChE J. 2012, 58, 855–870.
  25. Diehl, M.; Bjornberg, J. Robust dynamic programming for min-max model predictive control of constrained uncertain systems. IEEE Trans. Autom. Control 2004, 49, 2253–2257.
  26. Mesbah, A. Stochastic model predictive control: An overview and perspectives for future research. IEEE Control Syst. Mag. 2016, 36, 30–44.
  27. Das, B.; Mhaskar, P. Lyapunov-based offset-free model predictive control of nonlinear process systems. Can. J. Chem. Eng. 2015, 93, 471–478.
  28. Vaccari, M.; Pannocchia, G. A modifier-adaptation strategy towards offset-free economic MPC. Processes 2017, 5, 2.
  29. Adetola, V.; DeHaan, D.; Guay, M. Adaptive model predictive control for constrained nonlinear systems. Syst. Control Lett. 2009, 58, 320–326.
  30. Wu, Z.; Rincon, D.; Christofides, P.D. Real-Time Adaptive Machine-Learning-Based Predictive Control of Nonlinear Processes. Ind. Eng. Chem. Res. 2019, in press.
  31. Aumi, S.; Mhaskar, P. Adaptive data-based model predictive control of batch systems. In Proceedings of the American Control Conference, Montreal, QC, Canada, 27–29 June 2012.
  32. Aswani, A.; Gonzalez, H.; Sastry, S.S.; Tomlin, C. Provably safe and robust learning-based model predictive control. Automatica 2013, 49, 1216–1226.
  33. El-Farra, N.H.; Gani, A.; Christofides, P.D. Fault-tolerant control of process systems using communication networks. AIChE J. 2005, 51, 1665–1682.
  34. Perk, S.; Shao, Q.M.; Teymour, F.; Cinar, A. An adaptive fault-tolerant control framework with agent-based systems. Int. J. Robust Nonlinear Control 2012, 22, 43–67.
  35. Du, M.; Mhaskar, P. Uniting safe-parking and reconfiguration-based approaches for fault-tolerant control of switched nonlinear systems. In Proceedings of the 2010 American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 2829–2834.
  36. Alanqar, A.; Durand, H.; Christofides, P.D. Fault-Tolerant Economic Model Predictive Control Using Error-Triggered Online Model Identification. Ind. Eng. Chem. Res. 2017, 56, 5652–5667.
  37. Bø, T.I.; Johansen, T.A. Dynamic safety constraints by scenario based economic model predictive control. IFAC Proc. Vol. 2014, 47, 9412–9418.
  38. Albalawi, F.; Alanqar, A.; Durand, H.; Christofides, P.D. A feedback control framework for safe and economically-optimal operation of nonlinear processes. AIChE J. 2016, 62, 2391–2409.
  39. Zhang, X.; Clark, M.; Rattan, K.; Muse, J. Controller verification in adaptive learning systems towards trusted autonomy. In Proceedings of the ACM/IEEE Sixth International Conference on Cyber-Physical Systems, Seattle, WA, USA, 14–16 April 2015; pp. 31–40.
  40. Wu, Z.; Rincon, D.; Christofides, P.D. Real-time machine learning for operational safety of nonlinear processes via barrier-function based predictive control. Chem. Eng. Res. Des. 2020, 155, 88–97.
  41. Alanqar, A.; Durand, H.; Christofides, P.D. Error-triggered on-line model identification for model-based feedback control. AIChE J. 2017, 63, 949–966.
  42. Durand, H.; Messina, D. Enhancing practical tractability of Lyapunov-based economic model predictive control. In Proceedings of the American Control Conference, Denver, CO, USA, 1–3 July 2020.
  43. Alanqar, A.; Durand, H.; Christofides, P.D. On identification of well-conditioned nonlinear systems: Application to economic model predictive control of nonlinear processes. AIChE J. 2015, 61, 3353–3373.
  44. Alanqar, A.; Ellis, M.; Christofides, P.D. Economic model predictive control of nonlinear process systems using empirical models. AIChE J. 2015, 61, 816–830.
  45. Durand, H.; Ellis, M.; Christofides, P.D. Economic model predictive control designs for input rate-of-change constraint handling and guaranteed economic performance. Comput. Chem. Eng. 2016, 92, 18–36.
  46. Nasukawa, T.; Yi, J. Sentiment analysis: Capturing favorability using natural language processing. In Proceedings of the Second International Conference on Knowledge Capture, Sanibel Island, FL, USA, 23–25 October 2003; pp. 70–77.
  47. Durand, H.; Christofides, P.D. Economic model predictive control: Handling valve actuator dynamics and process equipment considerations. Found. Trends Syst. Control 2018, 5, 293–350.
  48. Özgülşen, F.; Adomaitis, R.A.; Çinar, A. A numerical method for determining optimal parameter values in forced periodic operation. Chem. Eng. Sci. 1992, 47, 605–613.
  49. Alfani, F.; Carberry, J.J. An exploratory kinetic study of ethylene oxidation over an unmoderated supported silver catalyst. Chim. Ind. 1970, 52, 1192–1196.
  50. Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 2006, 106, 25–57.
  51. Giuliani, L.; Durand, H. Data-Based Nonlinear Model Identification in Economic Model Predictive Control. Smart Sustain. Manuf. Syst. 2018, 2, 61–109.
  52. Kheradmandi, M.; Mhaskar, P. Model predictive control with closed-loop re-identification. Comput. Chem. Eng. 2018, 109, 249–260.
  53. Heidarinejad, M.; Liu, J.; Christofides, P.D. Economic model predictive control of switched nonlinear systems. Syst. Control Lett. 2013, 62, 77–84.
  54. Heidarinejad, M.; Liu, J.; Christofides, P.D. Distributed model predictive control of switched nonlinear systems with scheduled mode transitions. AIChE J. 2013, 59, 860–871.
  55. Mhaskar, P.; Liu, J.; Christofides, P.D. Fault-Tolerant Process Control: Methods and Applications; Springer: London, UK, 2013.
  56. Lin, Y.; Sontag, E.D. A universal formula for stabilization with bounded controls. Syst. Control Lett. 1991, 16, 393–397.
  57. Durand, H. On accounting for equipment-control interactions in economic model predictive control via process state constraints. Chem. Eng. Res. Des. 2019, 144, 63–78.
Figure 1. $\bar{x}_1$, $\bar{x}_2$, $\bar{x}_3$, and $\bar{x}_4$ trajectories under economic model predictive controllers (EMPCs) with different values of $\epsilon$ specified in the legend (the gray trajectory labeled "None" corresponds to no input rate of change constraint applied).
Figure 2. $\bar{u}_1$ and $\bar{u}_2$ trajectories under EMPCs with different values of $\epsilon$ specified in the legend (the gray trajectory labeled "None" corresponds to no input rate of change constraint applied).
Figure 3. Scatter plot reflecting rankings in Table 2 (solid blue circles) and the curve fit using lsqcurvefit (solid red line).
Figure 4. State trajectories under EMPC with $\epsilon = 0.22$.
Figure 5. Input trajectories under EMPC with $\epsilon = 0.22$.
Figure 6. State trajectories under Lyapunov-based EMPC (LEMPC) with $Q_{extra} = 300$ K/h starting at 0.5 h, where the LEMPC has not been made aware ("Unaware") and has been made aware ("Aware") of the change in the energy balance.
Figure 7. Input trajectories under LEMPC with $Q_{extra} = 300$ K/h starting at 0.5 h, where the LEMPC has not been made aware ("Unaware") and has been made aware ("Aware") of the change in the energy balance.
Figure 8. State-space plot under LEMPC with $Q_{extra} = 500$ K/h starting at 0.5 h and no change in the control law or model in response.
Figure 9. State-space plot under LEMPC with $Q_{extra} = 500$ K/h starting at 0.5 h and the control law switched to $h_{NL,1}$ in response to the closed-loop state leaving $\Omega_{\hat{\rho}}$.
Figure 10. State-space plot under LEMPC with $Q_{extra} = 500$ K/h starting at 0.5 h and the control law switched to $h_{NL,1}$ in response to the closed-loop state leaving $\Omega_{\hat{\rho}}$ and then switched to $h_{NL,2}$ at 0.54 h.
Figure 11. State-space plot under LEMPC with $Q_{extra} = 500$ K/h starting at 0.5 h and the control law switched to $h_{NL,1}$ in response to the closed-loop state leaving $\Omega_{\hat{\rho}}$, then switched to $h_{NL,2}$ at 0.52 h, and finally switched back to an LEMPC incorporating an updated process model after the closed-loop state reenters $\Omega_{\hat{\rho}}$.
Table 1. Parameters for the continuous stirred tank reactor (CSTR) of Equations (32)–(35).

Parameter   Value
γ_1         −8.13
γ_2         −7.12
γ_3         −11.07
A_1         92.80
A_2         12.66
A_3         2412.71
B_1         7.32
B_2         10.39
B_3         2170.57
B_4         7.02
T_C         1.0
Table 2. Yield variation with ϵ.

ϵ                                      Yield (%)   Ranking
0.01                                   7.17        2
0.05                                   7.93        5
0.1                                    8.23        8
0.3                                    8.37        8
0.5                                    8.44        7
1                                      9.03        2
3                                      9.61        1
No input rate of change constraint     9.61        Not ranked
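Figure 3 plots the rankings in Table 2 together with a curve fit obtained using lsqcurvefit. As a rough Python analogue, the sketch below fits a curve to the tabulated (ϵ, ranking) pairs with scipy.optimize.curve_fit, assuming the rankings were fit as a function of ϵ; the log-quadratic functional form is chosen here only for illustration, since the form used in the paper is not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import curve_fit

# (eps, ranking) pairs from Table 2; the row without an input rate of change
# constraint is excluded because it is not ranked.
eps = np.array([0.01, 0.05, 0.1, 0.3, 0.5, 1.0, 3.0])
ranking = np.array([2.0, 5.0, 8.0, 8.0, 7.0, 2.0, 1.0])

# Hypothetical model form (quadratic in log(eps)); the functional form used
# with lsqcurvefit in the paper may differ.
def model(x, a, b, c):
    z = np.log(x)
    return a * z**2 + b * z + c

params, _ = curve_fit(model, eps, ranking)
print("Fitted parameters (a, b, c):", params)
```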
Table 3. Parameters for the CSTR model of Equations (77) and (78).

Parameter   Value          Unit
V           1              m³
T_0         300            K
k_0         8.46 × 10⁶     m³/(h·kmol)
C_p         0.231          kJ/(kg·K)
ρ_L         1000           kg/m³
F           5              m³/h
R_g         8.314          kJ/(kmol·K)
E           5 × 10⁴        kJ/kmol
ΔH          1.15 × 10⁴     kJ/kmol
