
The Pennsylvania State University
The Graduate School
College of Engineering

CARLEMAN LINEARIZATION-BASED NONLINEAR MODEL PREDICTIVE CONTROL

A Thesis in
Chemical Engineering
by
Yizhou Fang

© 2015 Yizhou Fang

Submitted in Partial Fulfillment
of the Requirements
for the Degree of
Master of Science

December 2015

The thesis of Yizhou Fang was reviewed and approved* by the following:

Antonios Armaou
Associate Professor of Chemical Engineering
Thesis Advisor

Themis Matsoukas
Professor of Chemical Engineering

Robert Rioux
Friedrich G. Helfferich Associate Professor of Chemical Engineering

Hosam Fathy
Associate Professor of Mechanical Engineering

Janna Maranas
Professor of Chemical Engineering
Graduate Program Chair

*Signatures are on file in the Graduate School.


Abstract

The need for tight operating conditions in the chemical, pharmaceutical, and petroleum industries has given rise to the development of advanced process control. Model Predictive Control (MPC) started gaining attention three decades ago for optimal transitions between operating modes. Nonlinear MPC converts a constrained control problem of a nonlinear system into an optimization problem. This basic architecture makes Nonlinear MPC capable of handling large state-space multi-variable systems with constraints, and of readily dealing with model mismatches and disturbances.

For online operation, the control policy must be computed in less than one sampling time. However, this requirement is often impossible to meet when the system is highly nonlinear, which is one of the most significant reasons holding back the application of Nonlinear MPC. As a result, there is strong motivation to develop an advanced formulation of Nonlinear MPC that demands less computational effort and thus decides the control actions faster.

The primary focus of this thesis is to develop an advanced formulation of Nonlinear MPC that decreases the amount of computational effort in order to circumvent feedback delay, improve controller performance, and maintain stability of the system. Multiple mathematical tools combined with optimization techniques are implemented to accelerate the searching algorithms. The optimal control problem is formulated as a receding horizon one: an optimization problem is solved each time the finite horizon moves forward. Based on Carleman Linearization, the states of the system are extended to higher orders following the Kronecker product rule. The nonlinear dynamic process can thus be modeled with a bilinear representation while keeping nonlinear dynamic information. This enables analytical anticipation of the system states and provides the searching algorithm with an analytically computed sensitivity of the cost function to the control signals. The proposed method resembles both collocation and shooting methods for the following reasons. First, the states of the system are discretized explicitly in time while the sensitivity to the control signals is computed analytically. Second, the states are nonlinear functions of the control signals, which frees the optimization problem from equality constraints and reduces the number of design variables.

This thesis presents an introduction to MPC and Carleman Linearization, together with detailed derivations of the proposed method, in Chapters 1 and 2. Chapter 3 provides a detailed description of resetting extended states to compensate for the simulation errors caused by Carleman Linearization. Chapter 4 presents case-study examples to illustrate the applications of the proposed method. Chapter 5 concludes the work and outlines future plans.

Part of the work presented in this thesis was published at the American Control Conference, Chicago, IL, on July 1st, 2015.


Table of Contents

List of Figures
List of Tables
Acknowledgments

Chapter 1  Introduction
  1.1  Model Predictive Control
  1.2  Carleman Linearization

Chapter 2  Bilinear Carleman Linearization-Based MPC
  2.1  General MPC Controller Design Methodology
  2.2  Bilinear Carleman Linearization-based MPC (BCMPC) Formulation
  2.3  Detailed Formulation

Chapter 3  Resetting Extended States
  3.1  Reason for Resetting Extended States
  3.2  Example Application

Chapter 4  Application and Discussion
  4.1  Comparison Between the Proposed MPC and Nonlinear MPC
    4.1.1  Formulation and Tuning Parameters
    4.1.2  Simulation Results with the Proposed Formulation
  4.2  Advantage of Optimizing Action Horizons
  4.3  Advantage of Resetting Extended States within each Action Horizon

Chapter 5  Conclusion and Future Plan
  5.1  Conclusion
  5.2  Future Plan

Bibliography


List of Figures

3.1  Van de Vusse CSTR

4.1  Response of Open-loop Unstable CSTR to −10% Perturbation in CAf
4.2  Performance of Carleman Linearization-based MPC
4.3  Advantage of Optimizing Action Horizons
4.4  Resetting Extended States in Carleman Linearization-based MPC
4.5  Quantified Improvement in the Values of Cost Functions


List of Tables

3.1  Van de Vusse CSTR Parameters

4.1  Open-loop Unstable CSTR Parameters


Acknowledgments

I would like to thank Dr. Armaou for offering this research project, Carleman Linearization-based Nonlinear Model Predictive Control. Thank you for this brilliant idea and for advising my research.

I would like to thank Dr. Matsoukas, Dr. Rioux and Dr. Fathy for serving on my committee. Thank you for your time and your insights.

Many thanks to Dr. Belegundu and Dr. Fathy, who helped me acquire knowledge in Optimization and Optimal Control. Thank you for your wonderful lectures, your suggestions on my research, and your encouragement.

Many thanks to the National Science Foundation for the financial support of this research project.

I would love to give special thanks to my family in China for your love, trust and support.


Chapter 1  Introduction

1.1 Model Predictive Control

Model Predictive Control (MPC) has attracted increasingly wide attention in the chemical, pharmaceutical and petroleum refinery industries over the last three decades. The basic strategy of MPC is to use dynamic models to predict the future behavior of a system and design inputs that manipulate the system into tracking reference trajectories [1]. The fundamental architecture of MPC is to determine the current control action by solving an open-loop optimal control problem (OCP) within a finite horizon at each sampling time and implementing only the first control action in the sequence [2]. This architecture equips MPC with advantages over other control strategies, such as coping with constraints (state, input, output and process constraints), which is highly applicable in real industrial processes [3][4]. MPC is practical for multiple-input-multiple-output systems because, by definition, it converts the optimal control problem into an optimization one. The control policies adapt as the dynamic processes evolve, since control actions are computed by repeatedly solving receding-horizon optimization problems. This property enables MPC to reject external disturbances and tolerate model mismatches [1][7]. Reviews on MPC formulations, stability analysis and performance can be found in [8][9].

Since MPC focuses on optimality rather than stability by nature, the stability of closed-loop process operation is an important open problem in MPC. To address it, a large amount of research has focused on Lyapunov-based formulations, which address stability issues and the effect of the initial condition on feasibility in optimization [12][13][14].

Linear MPC is a relatively mature technology. It is based on a solid foundation of linear control theory and quadratic programming techniques. Over two thousand applications of Linear MPC had been reported by the end of the last century [2].


More challenges and opportunities lie in Nonlinear MPC. Nonlinear MPC applies to nonlinear reactors and plants that vary over large regions of the state space, including changeovers in continuous processes and tracking problems in startup and batch processes [2]. The development of large-scale nonlinear programming (NLP) algorithms and dynamic optimization strategies further assures a promising future for Nonlinear MPC in industrial applications [11]. More reviews of Nonlinear MPC can be found in [15]. One of the most significant challenges in Nonlinear MPC is the issue of computational effort. MPC controllers require more computational effort than classical controllers. Due to the nonlinearity, the optimization is non-convex in most cases, which increases the computational effort even further. Complex chemical processes with a large number of states or high nonlinearity usually require a significant amount of computation. The resulting feedback delays, the consequent loss of performance and the potential stability issues become significant barriers to the industrial implementation of Nonlinear MPC [28]. In this thesis, we propose an approach to address this problem: Carleman Linearization-based Nonlinear MPC.

1.2 Carleman Linearization

In 1932, Torsten Carleman showed that a finite-dimensional system of nonlinear differential equations can be embedded into an infinite system of linear differential equations. This method is known as Carleman Linearization [18][19][20][21].

To facilitate the introduction, we present a description of the Kronecker product rule, on which Carleman Linearization is based. The Kronecker product of a matrix $X \in \mathbb{C}^{N\times M}$ and a matrix $Y \in \mathbb{C}^{L\times K}$ is defined as the matrix $Z \in \mathbb{C}^{(NL)\times(MK)}$:

$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,M} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,M} \\ \cdots & \cdots & \cdots & \cdots \\ x_{N,1} & x_{N,2} & \cdots & x_{N,M} \end{bmatrix}, \qquad Y = \begin{bmatrix} y_{1,1} & y_{1,2} & \cdots & y_{1,K} \\ y_{2,1} & y_{2,2} & \cdots & y_{2,K} \\ \cdots & \cdots & \cdots & \cdots \\ y_{L,1} & y_{L,2} & \cdots & y_{L,K} \end{bmatrix}$$

$$Z = X \otimes Y \equiv \begin{bmatrix} x_{1,1}Y & x_{1,2}Y & \cdots & x_{1,M}Y \\ x_{2,1}Y & x_{2,2}Y & \cdots & x_{2,M}Y \\ \cdots & \cdots & \cdots & \cdots \\ x_{N,1}Y & x_{N,2}Y & \cdots & x_{N,M}Y \end{bmatrix}$$
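As a quick numerical illustration (a sketch of ours, not part of the thesis), the Kronecker product and the Kronecker power of a state vector can be computed with `numpy.kron`:

```python
import numpy as np

# Kronecker product Z = X ⊗ Y: each entry x_ij of X is replaced by the block x_ij * Y.
X = np.array([[1, 2],
              [3, 4]])              # N = M = 2
Y = np.array([[0, 5],
              [6, 7]])              # L = K = 2
Z = np.kron(X, Y)                   # shape (N*L, M*K) = (4, 4)

# The top-left block of Z is x_11 * Y, the next block is x_12 * Y, etc.
print(Z[:2, :2])                    # 1 * Y
print(Z[:2, 2:])                    # 2 * Y

# Kronecker power of a state vector: x^[2] = x ⊗ x
x = np.array([1.0, 2.0])
print(np.kron(x, x))                # [1. 2. 2. 4.]
```

Note how the entries of $x^{[2]}$ contain duplicates ($x_1x_2$ appears twice), a point that becomes important for the dimensionality reduction discussed later.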


We represent nonlinear dynamic systems in the following form:

$$\dot{x} = f(x) + \sum_{j=1}^{m} g_j(x)\,u_j \qquad (1.1)$$
$$x(t_0) = x_0$$

where $x \in \mathbb{R}^n$ is the state vector and $u_j$, $\forall\, 1 \le j \le m$, are the control inputs. $f(x)$ and $g_j(x)$ are nonlinear vector functions.

In chemical processes, exponential terms are commonly seen; these can be approximated via the definition of the matrix exponential:

$$\exp(A) = \sum_{l=0}^{\infty} \frac{1}{l!} A^l \qquad (1.2)$$
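For illustration (our own sketch, not from the thesis), truncating series (1.2) at a modest order already reproduces `scipy.linalg.expm` to machine precision for a small, well-scaled matrix:

```python
import math
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])     # generator of a planar rotation

# Truncated series: sum_{l=0}^{19} A^l / l!
S = np.zeros((2, 2))
P = np.eye(2)                   # running power A^l
for l in range(20):
    S += P / math.factorial(l)
    P = P @ A

err = np.max(np.abs(S - expm(A)))
print(err)
```

In practice `expm` (scaling and squaring) is preferred over summing the raw series; the loop above is shown only to mirror definition (1.2).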

For simplicity of derivation, we assume the nominal operating point of the system is at the origin $x = 0$. The nonlinear vector functions $f(x)$ and $g_j(x)$ are expanded by Maclaurin series in the following form:

$$f(x) = f(0) + \sum_{k=1}^{\infty} \frac{1}{k!}\,\partial f_{[k]}\big|_{x=0}\; x^{[k]}$$
$$g_j(x) = g_j(0) + \sum_{k=1}^{\infty} \frac{1}{k!}\,\partial g_{j[k]}\big|_{x=0}\; x^{[k]}$$

So nonlinear dynamic systems can be approximated by a polynomial form:

$$\dot{x} \cong \sum_{k=0}^{p} A_k x^{[k]} + \sum_{j=1}^{m}\sum_{k=0}^{p} B_{jk}\, x^{[k]} u_j$$

$A_k$ denotes $\frac{1}{k!}\partial f_{[k]}|_{x=0}$ and $B_{jk}$ denotes $\frac{1}{k!}\partial g_{j[k]}|_{x=0}$, $\forall k$; $A_0$ denotes $f(0)$ and $B_{j0}$ denotes $g_j(0)$. The polynomial order $p$ should be high enough to reduce truncation errors [20].

To implement Carleman Linearization, the states of the system $x$ are extended to $x_\otimes = [x^T\; x^{[2]T} \cdots x^{[p]T}]^T$, where $x^{[p]}$ denotes the $p$-th order Kronecker product of $x$. The bilinear formulation

$$\dot{x}_\otimes = \mathbb{A}\, x_\otimes + \sum_{j=1}^{m}(\mathbb{B}_j x_\otimes + \mathbb{B}_{j0})u_j + \mathbb{C}$$

carries the information of the nonlinear dynamic constraints. $\mathbb{A}$, $\mathbb{B}_j$, $\mathbb{B}_{j0}$, and $\mathbb{C}$ are matrices of the form


$$\mathbb{A} = \begin{bmatrix} A_{1,1} & A_{1,2} & \cdots & A_{1,p} \\ A_{2,0} & A_{2,1} & \cdots & A_{2,p-1} \\ 0 & A_{3,0} & \cdots & A_{3,p-2} \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & A_{p,1} \end{bmatrix}, \quad \mathbb{C} = \begin{bmatrix} A_{1,0} \\ 0 \\ 0 \\ \cdots \\ 0 \end{bmatrix}, \quad \mathbb{B}_j = \begin{bmatrix} B_{j1,1} & B_{j1,2} & \cdots & B_{j1,p} \\ B_{j2,0} & B_{j2,1} & \cdots & B_{j2,p-1} \\ 0 & B_{j3,0} & \cdots & B_{j3,p-2} \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & B_{jp,1} \end{bmatrix}, \quad \mathbb{B}_{j0} = \begin{bmatrix} B_{j1,0} \\ 0 \\ 0 \\ \cdots \\ 0 \end{bmatrix},$$

where $A_{k,i} = \sum_{l=0}^{k-1} I_n^{[l]} \otimes A_i \otimes I_n^{[k-1-l]}$ and $B_{jk,i} = \sum_{l=0}^{k-1} I_n^{[l]} \otimes B_{ji} \otimes I_n^{[k-1-l]}$.
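A small sketch (our own, with made-up matrices) of how a block $A_{k,i}$ can be assembled from Kronecker products, checked for $k = 2$ against the product rule $\frac{d}{dt}(x\otimes x) = \dot{x}\otimes x + x\otimes\dot{x}$:

```python
import numpy as np

def eye_kron_power(n, l):
    # I_n^[l]: the l-th Kronecker power of I_n, i.e. the identity of size n**l
    return np.eye(n ** l)

def block(Ai, n, k):
    # A_{k,i} = sum_{l=0}^{k-1} I_n^[l] ⊗ A_i ⊗ I_n^[k-1-l]
    return sum(np.kron(np.kron(eye_kron_power(n, l), Ai),
                       eye_kron_power(n, k - 1 - l))
               for l in range(k))

n = 2
A1 = np.array([[-1.0, 0.5],
               [0.0, -2.0]])
A21 = block(A1, n, k=2)         # = A1 ⊗ I + I ⊗ A1

# For xdot = A1 x, d(x⊗x)/dt = (A1 x)⊗x + x⊗(A1 x) must equal A21 (x⊗x)
x = np.array([1.0, 3.0])
lhs = np.kron(A1 @ x, x) + np.kron(x, A1 @ x)
rhs = A21 @ np.kron(x, x)
print(np.max(np.abs(lhs - rhs)))    # zero up to rounding
```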

One important assumption for the analysis in the following sections is that the control signals are all piecewise constant, which is generally the case in industrial process MPC. Thus, the formulation $\dot{x}_\otimes = \mathbb{A}x_\otimes + \sum_{j=1}^{m}(\mathbb{B}_j x_\otimes + \mathbb{B}_{j0})u_j + \mathbb{C}$ allows for analytical integration of the nonlinear model. Providing the sensitivity of the cost function $\int_{t_0}^{t_f} J(x,U)\,dt$ to $u_{k,K}$ (the $K$-th control action in the sequence of the $k$-th design variable) also accelerates the computation of the optimal control policy [4][5][6]. A more detailed introduction to Carleman Linearization can be found in [18][19][20][21].


Chapter 2  Bilinear Carleman Linearization-Based MPC

2.1 General MPC Controller Design Methodology

In the general MPC formulation, the optimal control problem is recast as a recursion of receding finite-horizon optimization problems at every time point $t_0$, which have the general form:

$$U^* = \arg\min_U \int_{t_0}^{t_f} J(x, U)\,dt$$
$$\text{s.t.}$$
$$u_j(t) = \sum_{i=1}^{N} U_{j,i}\,\mathcal{B}(t; T_{i-1}; T_i), \quad \forall j = 1,\cdots,m$$
$$T_0 = t_0, \quad T_N \le t_f$$
$$\dot{x} - f(x) - \sum_{j=1}^{m} g_j(x)\,u_j(t) = 0$$
$$x(t_0) = x_0$$
$$f^c(x, U) \le 0$$

$J$ is the cost function and $x$ is the vector of state variables. $U$ denotes the matrix of control inputs, which consists of $U_{j,i}$, $\forall j = 1,\cdots,m$, $\forall i = 1,\cdots,N$. $U_{j,i}$ is the $i$-th decision for the $j$-th manipulated variable; that is, it is the signal of the $j$-th control input during its corresponding sampling time $(T_{i-1}, T_i]$.

The sampling time $(T_{i-1}, T_i]$ is also defined as the $i$-th action horizon, which has a length of $\Delta T_i$.


$f(x)$ and $g_j(x)$ are nonlinear vector functions accounting for the impacts of the states and the $j$-th control input, respectively. The summation of the action horizons is the control horizon. $N$ is the number of action horizons. $T_0$ is the beginning of the control horizon and $T_N$ is its end. $t_0$, the same as $T_0$, is the beginning of the prediction horizon and $t_f$ is its end.

We define $\mathcal{B}(t; T_{i-1}; T_i) = H(t - T_{i-1}) - H(t - T_i)$ as a rectangular pulse function, where $H$ is the standard Heaviside function. $T_{i-1}$ and $T_i$ denote the initiation time and the termination time, respectively.

$x_0$ is the initial condition of the system states. $f^c$ denotes the vector function of equality and inequality constraints.
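The piecewise-constant input built from these rectangular pulses can be sketched directly (the knots and action values below are our own illustration, not thesis data):

```python
import numpy as np

def pulse(t, t_start, t_end):
    # Rectangular pulse B(t; T_{i-1}; T_i) = H(t - T_{i-1}) - H(t - T_i).
    # H is taken as H(s) = 1 for s > 0, so each pulse covers (T_{i-1}, T_i].
    H = lambda s: np.where(s > 0, 1.0, 0.0)
    return H(t - t_start) - H(t - t_end)

def u_j(t, T, U_row):
    # u_j(t) = sum_i U_{j,i} * B(t; T_{i-1}; T_i)
    # T: horizon knots [T_0, ..., T_N]; U_row: actions [U_{j,1}, ..., U_{j,N}]
    return sum(U_i * pulse(t, T[i], T[i + 1]) for i, U_i in enumerate(U_row))

T = [0.0, 1.0, 2.5, 4.0]
U_row = [0.2, -0.1, 0.4]
print(u_j(1.0, T, U_row))   # 0.2: t = 1.0 still belongs to the first interval
print(u_j(1.1, T, U_row))   # -0.1
```

The choice of where $H$ takes the value 1 at zero decides whether the intervals are open on the left or on the right; the convention above matches the half-open intervals $(T_{i-1}, T_i]$ used in the text.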

2.2 Bilinear Carleman Linearization-based MPC (BCMPC) Formulation

With the extended states $x_\otimes$ and the extended coefficient matrices $\mathbb{A}$, $\mathbb{B}_j$, $\mathbb{B}_{j0}$, and $\mathbb{C}$ obtained through Carleman Linearization, we represent the system dynamic constraint in a bilinear representation:

$$\dot{x}_\otimes = \mathbb{A}x_\otimes + \sum_{j=1}^{m}(\mathbb{B}_j x_\otimes + \mathbb{B}_{j0})U_{j,i} + \mathbb{C}, \qquad t \in (T_{i-1}, T_i]$$

During each sampling time $t \in (T_{i-1}, T_i]$, each $U_{j,i}$ is a piecewise constant control action. The future state is predicted with the analytical solution of the equation above:

$$x_\otimes(t) = \exp\!\Big[\Big(\mathbb{A} + \sum_{j=1}^{m}\mathbb{B}_j U_{j,i}\Big)(t - T_{i-1})\Big]\,x_\otimes(T_{i-1}) + \int_{T_{i-1}}^{t} \exp\!\Big[\Big(\mathbb{A} + \sum_{j=1}^{m}\mathbb{B}_j U_{j,i}\Big)(t - \tau)\Big]d\tau \cdot \Big(\sum_{j=1}^{m}\mathbb{B}_{j0}U_{j,i} + \mathbb{C}\Big)$$
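The computational point is that between sampling instants the extended system is linear time-invariant with a constant forcing term, so each step has a closed form. A generic sketch with made-up matrices, where $M$ stands for $\mathbb{A} + \sum_j \mathbb{B}_jU_{j,i}$ and $b$ for $\sum_j\mathbb{B}_{j0}U_{j,i} + \mathbb{C}$, checked against a numerical integrator:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Affine LTI step: zdot = M z + b with z(0) = z0, M and b constant over the interval.
M = np.array([[-1.0, 0.3],
              [0.2, -2.0]])
b = np.array([0.5, 1.0])
z0 = np.array([1.0, -1.0])
dT = 0.4

# Closed form (M invertible): z(dT) = e^{M dT} z0 + M^{-1}(e^{M dT} - I) b
E = expm(M * dT)
z_closed = E @ z0 + np.linalg.solve(M, (E - np.eye(2)) @ b)

# High-accuracy numerical reference
sol = solve_ivp(lambda t, z: M @ z + b, (0.0, dT), z0, rtol=1e-11, atol=1e-12)
print(np.max(np.abs(z_closed - sol.y[:, -1])))   # tiny
```

This is why the prediction requires only matrix exponentials rather than repeated numerical integration of the nonlinear model.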

Remark 1: Deviation around a nominal point

Ideally, Carleman Linearization is performed around a nominal point. This means every state variable and every control input is expressed as a deviation from a nominal point. Since the optimization involves algebra of large matrices in the proposed formulation, we need to express both the states and the inputs in deviations to reduce numerical errors.


In Chapter 3, we will discuss simulation errors caused by Carleman Linearization and thealgorithms to minimize those errors by resetting extended states.

Remark 2: Dimensionality issue

As the order of Carleman Linearization grows, the dimensions of $x_\otimes$, $\mathbb{A}$, $\mathbb{B}_j$, $\mathbb{B}_{j0}$, and $\mathbb{C}$ all grow at a geometric pace. Consider a system with $n$ state variables approximated by $p$-th order Carleman Linearization. The dimension of the extended states $x_\otimes = [x^T\, x^{[2]T} \cdots x^{[p]T}]^T$ is $\sum_{i=1}^{p} n^i$. The dimensions of $\mathbb{A}$ and $\mathbb{B}_j$ are both $\big(\sum_{i=1}^{p} n^i\big)\times\big(\sum_{i=1}^{p} n^i\big)$.

This large expansion in dimensionality costs extra computational effort. One solution is to merge the identical terms in the state vector $x_\otimes$ to yield $x_{\otimes,\mathrm{reduced}}$ [20][21]. For example, to approximate a 2-state nonlinear system with 3rd-order Carleman Linearization, the original state vector $x = [x_1, x_2]^T$ with a dimension of 2 is extended to a dimension of 14:

$$x_\otimes = [x_1,\, x_2,\, x_1^2,\, x_1x_2,\, x_1x_2,\, x_2^2,\, x_1^3,\, x_1^2x_2,\, x_1^2x_2,\, x_1x_2^2,\, x_1^2x_2,\, x_1x_2^2,\, x_1x_2^2,\, x_2^3]^T$$

This extended state vector $x_\otimes$ can be reduced to the following state vector with a dimension of 9:

$$x_{\otimes,\mathrm{reduced}} = [x_1,\, x_2,\, x_1^2,\, x_1x_2,\, x_2^2,\, x_1^3,\, x_1^2x_2,\, x_1x_2^2,\, x_2^3]^T$$

The dimensions of the constant matrices $\mathbb{A}$, $\mathbb{B}_j$, $\mathbb{B}_{j0}$, and $\mathbb{C}$ can be reduced similarly.
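The bookkeeping behind this reduction can be sketched by enumerating index strings: a Kronecker power contains one entry per ordered index tuple, while the reduced vector keeps one representative per monomial (a sketch of ours, not the thesis's code):

```python
from itertools import product, combinations_with_replacement

n, p = 2, 3   # two states, 3rd-order Carleman Linearization

# Full extended state: one entry per ordered index tuple of each order k
full = [idx for k in range(1, p + 1) for idx in product(range(n), repeat=k)]

# Reduced state: one entry per monomial, i.e. per sorted index tuple
reduced = [idx for k in range(1, p + 1)
           for idx in combinations_with_replacement(range(n), k)]

print(len(full), len(reduced))   # 14 9, matching the dimensions in the example
```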

The dimensions of constant matrices A, Bj , Bj0, and C can also be reduced similarly.

Remark 3: Reformulating to Optimize ∆U and ∆T

The control move $\Delta U$, which is the change of the control action with respect to the previous step, and the length of each action horizon $\Delta T_i$ can both be design variables. That is, the bilinear Carleman Linearization-based MPC algorithm (BCMPC) can be reformulated as optimizing the control move $\Delta U$ and optimizing the action horizon $\Delta T_i$ [23][24].

Optimizing the control move $\Delta U$ is computationally more favorable than optimizing $U$ in cases where the set-points of the control actions are unknown. Unknown set-points of control actions are common in economics-oriented MPC (EMPC), so optimizing the control move $\Delta U$ has more applications in EMPC cases.

The action horizon, or sampling time, is the time interval over which a piecewise constant control action $U_{j,i}$ lasts. $\Delta T_i$ is the length of the corresponding action horizon. In traditional MPC formulations, the length of the control horizon and the length of each action horizon are both settled, and the number of control actions throughout the operating process is fixed. We propose an idea to optimize the length of each action horizon. In this way we make MPC a more


powerful tool for achieving optimal control. While the lengths of the action horizons change, the length of each resetting interval can either change or remain the same throughout the simulation, depending on the specific operating process.

Remark 4: Different Control Horizon and Prediction Horizon

The approaches to stabilizing systems controlled by MPC feedback laws can be divided into three major categories: (i) a penalty on the deviation of the terminal state from the set-point, (ii) applying local control Lyapunov functions in the terminal cost, and (iii) using a long enough optimization horizon [25]. [25][26] proved that for an optimization horizon of length $N$, the value of the cost function is bounded by some positive real number $L$, and $N \ge [1 + L\ln(\gamma(L-1))]$ ensures closed-loop stability, where $\gamma$ is a real number that can be chosen as $L - 1$ in the worst case.

In our proposed formulation of bilinear Carleman Linearization-based MPC, we follow the third approach and choose a prediction horizon longer than the control horizon, making sure the prediction horizon is long enough to stabilize the open-loop unstable system.

2.3 Detailed Formulation

Define the following notation to simplify the derivations:

$$A_i = \mathbb{A} + \sum_{j=1}^{m}\mathbb{B}_j U_{j,i}, \qquad F_i = \sum_{j=1}^{m}\mathbb{B}_{j0}U_{j,i} + \mathbb{C}$$
$$G_x(U_i) = \exp(A_i\,\Delta T_i), \qquad G_u(U_i) = A_i^{-1}\big[G_x(U_i) - I\big]$$
$$DG_x(U_i) = \exp\!\Big(\frac{A_i\,\Delta T_i}{r}\Big), \qquad DG_u(U_i) = A_i^{-1}\big[DG_x(U_i) - I\big]$$

We discretize each action horizon $(T_{i-1}, T_i]$ evenly into $r$ smaller intervals $(T_{i-1}, T_{i-1+\frac{1}{r}}], \ldots, (T_{i-\frac{1}{r}}, T_i]$. We then integrate the states over each small interval of length $\frac{\Delta T_i}{r}$ to obtain the analytical prediction of the extended states $x_{\otimes,i-1+\frac{1}{r}}, x_{\otimes,i-1+\frac{2}{r}}, \cdots, x_{\otimes,i-\frac{1}{r}}, x_{\otimes,i}$, and then reset the extended states using their original states $x_{i-1+\frac{1}{r}}, x_{i-1+\frac{2}{r}}, \cdots, x_{i-\frac{1}{r}}, x_i$:

$$x_{\otimes,i-1+\frac{1}{r}} = DG_x(U_i)\,x_{\otimes,i-1} + DG_u(U_i)\,F_i$$
$$x_{\otimes,i-1+\frac{2}{r}} = DG_x(U_i)\,x_{\otimes,i-1+\frac{1}{r}} + DG_u(U_i)\,F_i$$
$$\cdots$$
$$x_{\otimes,i-\frac{1}{r}} = DG_x(U_i)\,x_{\otimes,i-\frac{2}{r}} + DG_u(U_i)\,F_i$$
$$x_{\otimes,i} = DG_x(U_i)\,x_{\otimes,i-\frac{1}{r}} + DG_u(U_i)\,F_i$$

The detailed reason for resetting extended states is presented in Chapter 3. After resetting $x_{\otimes,i-1+\frac{1}{r}}, x_{\otimes,i-1+\frac{2}{r}}, \cdots, x_{\otimes,i-\frac{1}{r}}, x_{\otimes,i}$ following the Kronecker product rule, the integral of the extended states over $\Delta T_i$ becomes:

$$\int_{T_{i-1}}^{T_i} x_\otimes\,dt = DG_u(U_i)\big[x_{\otimes,i-1}(\mathrm{reset}) + x_{\otimes,i-1+\frac{1}{r}}(\mathrm{reset}) + \cdots + x_{\otimes,i-\frac{1}{r}}(\mathrm{reset})\big] + A_i^{-1}\big[r\,DG_u(U_i) - \Delta T_i\, I\big]F_i \qquad (2.1)$$

which is used to construct the cost function:

$$\int_{T_0}^{T_N} J\,dt \cong J_0(T_N - T_0) + \sum_{i=1}^{N}\Big(J_A + \sum_{j=1}^{m} J_{Nj}U_{\otimes,j,i\otimes}\Big)\int_{T_{i-1}}^{T_i} x_\otimes\,dt + \sum_{i=1}^{N}\sum_{j=1}^{m} J_{Bj}U_{\otimes,j,i}\,\Delta T_i \qquad (2.2)$$

The sensitivity of the cost function to $U_{k,K}$ is:

$$\frac{\partial}{\partial \Delta U_{k,K}}\int_{T_0}^{T_N} J\,dt = \Big(J_A + \sum_{j=1}^{m} J_{Nj}U_{\otimes,j,i\otimes}\Big)\sum_{i=K}^{N}\int_{T_{i-1}}^{T_i}\frac{\partial x_\otimes}{\partial \Delta U_{k,K}}\,dt + J_{Nk}\,(\partial U_{\otimes,k,K})\otimes\int_{T_{K-1}}^{T_K} x_\otimes\,dt + J_{Bk}\,(\partial U_{\otimes,k,K})\,\Delta T_K \qquad (2.3)$$

Define

$$\sum_{i=K}^{N}\int_{T_{i-1}}^{T_i}\frac{\partial x_\otimes}{\partial U_{k,K}}\,dt = G_k(i,K) \qquad (2.4)$$


$$G_k(i,K) = \begin{cases} \displaystyle\int_{T_{K-1}}^{T_K}\frac{\partial x_\otimes}{\partial U_{k,K}}\,dt, & i = K \\[8pt] \displaystyle\int_{T_K}^{T_{K+1}}\frac{\partial x_\otimes}{\partial x_{\otimes,K}}\,dt\;\frac{\partial x_{\otimes,K}}{\partial U_{k,K}}, & i = K+1 \\[8pt] \displaystyle\int_{T_{i-1}}^{T_i}\frac{\partial x_\otimes}{\partial x_{\otimes,i-1}}\,dt\;\Big(\prod_{l=K+1}^{i-1}\frac{\partial x_{\otimes,l}}{\partial x_{\otimes,l-1}}\Big)\frac{\partial x_{\otimes,K}}{\partial U_{k,K}}, & K+2 \le i \le N \end{cases}$$

and

$$\partial U_{\otimes,k,K} = \big[\,1 \;\; 2U_{k,K} \;\cdots\; p\,U_{k,K}^{\,p-1}\,\big] \qquad (2.5)$$
$$\frac{\partial x_{\otimes,i}}{\partial x_{\otimes,i-1}} = G_x(U_i) \qquad (2.6)$$
$$\int_{T_{i-1}}^{T_i}\frac{\partial x_\otimes}{\partial x_{\otimes,i-1}}\,dt = G_u(U_i) \qquad (2.7)$$

Define $E_K = \exp(A_K\Delta T_K)$ and $DE_K = \exp\!\big(\frac{A_K\Delta T_K}{r}\big)$. Then

$$\frac{\partial x_{\otimes,K}}{\partial U_{k,K}} = \frac{\partial E_K}{\partial U_{k,K}}\,x_{\otimes,K-1} + A_K^{-1}\frac{\partial E_K}{\partial U_{k,K}}\,F_K + G_u(U_K)\,\mathbb{B}_{k0} - A_K^{-1}\mathbb{B}_k\,G_u(U_K)\,F_K \qquad (2.8)$$

The sensitivity of the time integral of the extended states, in which resetting the extended states can be performed explicitly, is:

$$\int_{T_{K-1}}^{T_K}\frac{\partial x_\otimes}{\partial U_{k,K}}\,dt = \int_{T_{K-1}}^{T_K}\frac{\partial DE_K}{\partial U_{k,K}}\,dt\cdot\big[x_{\otimes,K-1}(\mathrm{reset}) + \cdots + x_{\otimes,K-\frac{1}{r}}(\mathrm{reset})\big] + r\cdot\Big\{A_K^{-1}\int_{T_{K-1}}^{T_K}\frac{\partial DE_K}{\partial U_{k,K}}\,dt\cdot F_K + A_K^{-1}\Big[DG_u(U_K) - \frac{\Delta T_K}{r}I\Big]\mathbb{B}_{k0} - A_K^{-1}\mathbb{B}_k A_K^{-1}\Big[DG_u(U_K) - \frac{\Delta T_K}{r}I\Big]F_K\Big\} \qquad (2.9)$$

Based on the definition of the matrix exponential, $\frac{\partial E_K}{\partial U_{k,K}}$ and $\int_{T_{K-1}}^{T_K}\frac{\partial DE_K}{\partial U_{k,K}}\,dt$ can both be computed analytically:

$$\frac{\partial E_K}{\partial U_{k,K}} = \sum_{l=1}^{\infty}\frac{(\Delta T_K)^l}{l!}\sum_{\lambda=1}^{l} A_K^{\lambda-1}\,\mathbb{B}_k\,A_K^{l-\lambda} \qquad (2.10)$$
$$\int_{T_{K-1}}^{T_K}\frac{\partial DE_K}{\partial U_{k,K}}\,dt = \sum_{l=1}^{\infty}\frac{\big(\frac{\Delta T_K}{r}\big)^{l+1}}{(l+1)!}\sum_{\lambda=1}^{l} A_K^{\lambda-1}\,\mathbb{B}_k\,A_K^{l-\lambda} \qquad (2.11)$$
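A series of the form (2.10) can be checked numerically against a finite-difference derivative of the matrix exponential. This is a standalone sketch with random matrices standing in for $A_K$ and $\mathbb{B}_k$, and the series truncated at a finite order:

```python
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) * 0.5
B = rng.standard_normal((n, n)) * 0.5
u, dT = 0.7, 0.1
M = A + u * B                     # plays the role of A_K = A + B_k U_{k,K}

# Truncated series (2.10): dE/dU = sum_l dT^l/l! sum_{lam=1}^{l} M^{lam-1} B M^{l-lam}
Mp = [np.linalg.matrix_power(M, k) for k in range(25)]
dE = np.zeros((n, n))
for l in range(1, 25):
    inner = sum(Mp[lam - 1] @ B @ Mp[l - lam] for lam in range(1, l + 1))
    dE += dT**l / math.factorial(l) * inner

# Central finite-difference check of d/dU exp((A + U B) dT)
eps = 1e-6
fd = (expm((A + (u + eps) * B) * dT) - expm((A + (u - eps) * B) * dT)) / (2 * eps)
print(np.max(np.abs(dE - fd)))    # should be tiny
```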

The sensitivity of the cost function to the $K$-th action horizon $\Delta T_K$ is:

$$\frac{\partial}{\partial \Delta T_K}\int_{T_0}^{T_N} J\,dt \cong J_0 + \sum_{i=K}^{N}\Big(J_A + \sum_{j=1}^{m}J_{Nj}U_{\otimes,j,i\otimes}\Big)\int_{T_{i-1}}^{T_i}\frac{\partial x_\otimes}{\partial \Delta T_K}\,dt + \sum_{j=1}^{m}J_{Bj}U_{\otimes,j,K} \qquad (2.12)$$

Denote the sensitivity of the time integral of the extended states as:

$$\sum_{i=K}^{N}\int_{T_{i-1}}^{T_i}\frac{\partial x_\otimes}{\partial \Delta T_K}\,dt = H(i,K)$$

$$H(i,K) = \begin{cases}\displaystyle\int_{T_{K-1}}^{T_K}\frac{\partial x_\otimes}{\partial \Delta T_K}\,dt, & i = K \\[8pt] \displaystyle\int_{T_K}^{T_{K+1}}\frac{\partial x_\otimes}{\partial x_{\otimes,K}}\,dt\;\frac{\partial x_{\otimes,K}}{\partial \Delta T_K}, & i = K+1 \\[8pt] \displaystyle\int_{T_{i-1}}^{T_i}\frac{\partial x_\otimes}{\partial x_{\otimes,i-1}}\,dt\;\Big(\prod_{l=K+1}^{i-1}\frac{\partial x_{\otimes,l}}{\partial x_{\otimes,l-1}}\Big)\frac{\partial x_{\otimes,K}}{\partial \Delta T_K}, & K+2 \le i \le N\end{cases}$$

and

$$\frac{\partial x_{\otimes,K}}{\partial \Delta T_K} = A_K\,G_x(U_K)\,x_{\otimes,K-1} + G_x(U_K)\,F_K \qquad (2.13)$$
$$\int_{T_{K-1}}^{T_K}\frac{\partial x_\otimes}{\partial \Delta T_K}\,dt = x_{\otimes,K} \qquad (2.14)$$

It is necessary to consider the effect of resetting extended states in order to calculate the sensitivity accurately. Equation (2.6) gives the sensitivity of the extended states at the next sampling time with respect to the accurate extended states at the current sampling time; that is, $\frac{\partial x_{\otimes,l}}{\partial x_{\otimes,l-1}(\mathrm{reset})} = G_x(U_l)$. The chain rule is applied in order to obtain a more accurate sensitivity:

$$\frac{\partial x_{\otimes,l}(\mathrm{reset})}{\partial x_{\otimes,l-1}(\mathrm{reset})} = \frac{\partial x_{\otimes,l}(\mathrm{reset})}{\partial x_l}\cdot\frac{\partial x_l}{\partial x_{\otimes,l}}\cdot\frac{\partial x_{\otimes,l}}{\partial x_{\otimes,l-1}(\mathrm{reset})} \qquad (2.15)$$

$\frac{\partial x_{\otimes,l}(\mathrm{reset})}{\partial x_l}$ and $\frac{\partial x_l}{\partial x_{\otimes,l}}$ can be readily calculated based on the number of state variables $n$ and the dimension of the extended states $P$:

$$\frac{\partial x_{\otimes,l}(\mathrm{reset})}{\partial x_l} = \Big[\frac{\partial x_l(\mathrm{reset})}{\partial x_l}^T,\; \frac{\partial x_l^{[2]}(\mathrm{reset})}{\partial x_l}^T,\; \cdots,\; \frac{\partial x_l^{[p]}(\mathrm{reset})}{\partial x_l}^T\Big]^T$$

For example, in a system of $n = 2$ state variables with 3rd-order Carleman Linearization, the dimension $P$ of the extended states $x_\otimes$ is 14:

$$\frac{\partial x_l(\mathrm{reset})}{\partial x_l} = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}^T$$

$$\frac{\partial x_l^{[2]}(\mathrm{reset})}{\partial x_l} = \begin{bmatrix}2x_1 & x_2 & x_2 & 0\\ 0 & x_1 & x_1 & 2x_2\end{bmatrix}^T$$

$$\frac{\partial x_l^{[3]}(\mathrm{reset})}{\partial x_l} = \begin{bmatrix}3x_1^2 & 2x_1x_2 & 2x_1x_2 & x_2^2 & 2x_1x_2 & x_2^2 & x_2^2 & 0\\ 0 & x_1^2 & x_1^2 & 2x_1x_2 & x_1^2 & 2x_1x_2 & 2x_1x_2 & 3x_2^2\end{bmatrix}^T$$

$\frac{\partial x_l}{\partial x_{\otimes,l}}$ has a dimension of $n \times P$. It consists of an identity matrix of dimension $n \times n$, with the rest of the elements being zeros:

$$\frac{\partial x_l}{\partial x_{\otimes,l}} = \big[\; I_n \;\; 0_{\,n\times(P-n)} \;\big]$$

Similarly, the chain rule is applied in the calculation of $\frac{\partial x_{\otimes,K}}{\partial \Delta U_{k,K}}$ and $\frac{\partial x_{\otimes,K}}{\partial \Delta T_K}$ to reset extended states.


Chapter 3  Resetting Extended States

3.1 Reason for Resetting Extended States

In addition to the truncation errors caused by the polynomial approximation to the original nonlinear system, Carleman Linearization introduces simulation errors because of inconsistency between the original states and the extended states. This directly leads to integration errors when the bilinear representation is integrated over a period of time to predict future states.

For simplicity, we use a nonlinear system without control actions as an example: $\dot{x} = f(x)$. Through Taylor expansion, the nonlinear system is approximated with a polynomial form:

$$\dot{x} \cong A_0 + A_1x + A_2x^{[2]} + \cdots + A_px^{[p]}$$

After extending the original states $x$ to the extended states $x_\otimes = [x^T\ x^{[2]T} \cdots x^{[p]T}]^T$, the next approximation is taken when the dynamics of the system are represented with a linear expression: $\dot{x}_\otimes = \mathbb{A}x_\otimes$.

The dynamics of the $k$-th order extended states $x^{[k]}$, $\forall\, 1 \le k \le p$, have the expression:

$$\dot{x}^{[k]} = \sum_{l=0}^{k-1}\big(I_n^{[l]} \otimes \dot{x} \otimes I_n^{[k-1-l]}\big)\,x^{[k-1]} \cong \sum_{l=0}^{k-1}\Big(I_n^{[l]} \otimes \Big(\sum_{\nu=0}^{p} A_\nu x^{[\nu]}\Big) \otimes I_n^{[k-1-l]}\Big)\,x^{[k-1]}$$

which is represented with the orders $x^{[k-1]}, x^{[k]}, \cdots, x^{[p+k-1]}$. But in $\dot{x}_\otimes = \mathbb{A}x_\otimes$, the highest


order is capped at $x^{[p]}$. So the expression of $\dot{x}^{[k]}$ is truncated to:

$$\dot{x}^{[k]} \cong \sum_{l=0}^{k-1}\Big(I_n^{[l]} \otimes \Big(\sum_{\nu=0}^{p-k+1} A_\nu x^{[\nu]}\Big) \otimes I_n^{[k-1-l]}\Big)\,x^{[k-1]}$$

The dynamic information carried by $x^{[p+1]}, \cdots, x^{[p+k-1]}$ is lost in this expression. As the order $k$ grows, more terms in the expression of $\dot{x}^{[k]}$ are truncated, leaving larger simulation errors. As the simulation proceeds, the higher the order of $x^{[k]}$, the faster the simulation errors accumulate, and this causes inconsistency between the different orders of states within the full extended states $x_\otimes$.

We discard the terms $x^{[2]T} \cdots x^{[p]T}$ in $x_\otimes$ and use only the first-order states $x$ to re-extend the states to higher orders following the Kronecker product rule, obtaining new extended states denoted as $x_\otimes(\mathrm{reset})$. This process is repeated frequently during the simulation and is defined as "resetting the extended states".

In the design of the MPC formulation, each action horizon is discretized into smaller "resetting intervals". We reset the extended states at the end of each resetting interval following the Kronecker product rule to minimize the integration errors caused by Carleman Linearization. We discretize the $i$-th action horizon $\Delta T_i$ into $r$ smaller resetting intervals, so $[x_{\otimes,i-1}, x_{\otimes,i}] \Rightarrow [x_{\otimes,i-1}, \cdots, x_{\otimes,i-\frac{2}{r}}, x_{\otimes,i-\frac{1}{r}}, x_{\otimes,i}]$. The number of resetting intervals $r$ is a tuning parameter depending on the specific case.
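A minimal sketch of the effect (our own toy example, not the thesis's case study): for the scalar system $\dot{x} = -x + x^2$, whose exact solution is known in closed form, propagating the truncated Carleman system with frequent resetting tracks the true trajectory far better than propagating it without resetting:

```python
import numpy as np
from scipy.linalg import expm

def carleman_A(p):
    # Truncated Carleman matrix for xdot = -x + x^2:
    # d/dt x^k = k x^{k-1} xdot = -k x^k + k x^{k+1}, with x^{p+1} dropped.
    A = np.zeros((p, p))
    for k in range(1, p + 1):
        A[k - 1, k - 1] = -k
        if k < p:
            A[k - 1, k] = k
    return A

def propagate(x0, p, dt, n_steps, reset):
    Phi = expm(carleman_A(p) * dt)     # exact propagator of the truncated system
    z = np.array([x0 ** k for k in range(1, p + 1)])
    for _ in range(n_steps):
        z = Phi @ z
        if reset:
            # resetting: rebuild the higher orders from the first-order state
            z = np.array([z[0] ** k for k in range(1, p + 1)])
    return z[0]

x0, T, p = 0.5, 2.0, 4
exact = 1.0 / (1.0 + (1.0 / x0 - 1.0) * np.exp(T))   # closed-form solution
err_reset = abs(propagate(x0, p, dt=0.05, n_steps=40, reset=True) - exact)
err_none = abs(propagate(x0, p, dt=0.05, n_steps=40, reset=False) - exact)
print(err_reset < err_none)    # True: resetting reduces the accumulated error
```

Without resetting, the truncation error in the top-order component feeds down into the first-order state over the whole horizon; with resetting, only the small local truncation error of each interval is incurred.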

3.2 Example Application

We use a classic open-loop stable example, the Van de Vusse reactor, to discuss the effect of resetting extended states. In the worst-case scenario, we do not know the nominal operating condition and perform Carleman Linearization around the trivial steady state, i.e. the wash-out condition. In this isothermal CSTR, controlling the feed flow rate is an approach to controlling the product concentration, since it changes the residence time in a constant-volume reactor. The reactions are A → B → C in series, together with the parallel reaction 2A → D.


Open-loop Stable CSTR Parameters

Parameter   Description                      Value
k1          Reaction Rate Constant           5/6 min⁻¹
k2          Reaction Rate Constant           5/3 min⁻¹
k3          Reaction Rate Constant           1/6 liter/(gmol·min)
CAf         Feed Concentration of A          10 gmol/liter
CA0         Initial Concentration of A       3 gmol/liter
CB0         Initial Concentration of B       1.117 gmol/liter

Table 3.1: Van de Vusse CSTR Parameters

A (cyclopentadiene) is the reactant. B (cyclopentenol) is the intermediate component and the desired product. C (cyclopentanediol) and D (dicyclopentadiene) are side products. Derived from the conservation equations, the dynamic constraints are expressed by the following two ODEs:

$$\dot{C}_A = \frac{F}{V}(C_{Af} - C_A) - k_1C_A - k_3C_A^2$$
$$\dot{C}_B = -\frac{F}{V}C_B + k_1C_A - k_2C_B$$

where $\frac{F}{V}$ is the feed flow rate divided by the reactor volume, known as the dilution rate; this is the control input. $C_{Af}$ is the concentration of the feeding reactant A, a fixed parameter. The other parameters of the system are listed in Table 3.1 [27].
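The two ODEs and the Table 3.1 parameters can be checked directly (a sketch; we read the rate constants as the fractions 5/6, 5/3 and 1/6, under which the initial condition is a steady state for F/V = 4/7 ≈ 0.5714 min⁻¹):

```python
from scipy.integrate import solve_ivp

k1, k2, k3, CAf = 5/6, 5/3, 1/6, 10.0      # Table 3.1 (rate constants read as fractions)

def van_de_vusse(t, c, FV):
    CA, CB = c
    dCA = FV * (CAf - CA) - k1 * CA - k3 * CA**2
    dCB = -FV * CB + k1 * CA - k2 * CB
    return [dCA, dCB]

c0 = [3.0, 1.117]                          # initial (steady-state) concentrations
FV0 = 4.0 / 7.0                            # nominal dilution rate, ~0.5714 1/min

# At the nominal input the derivatives vanish (steady state)
res = van_de_vusse(0.0, c0, FV0)
print(res)                                 # both entries ~0

# Step the dilution rate down by 0.025 1/min and integrate for 2 min
sol = solve_ivp(van_de_vusse, (0.0, 2.0), c0, args=(FV0 - 0.025,), rtol=1e-9)
print(sol.y[:, -1])                        # CA drifts downward at the lower dilution rate
```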

The CSTR system initiates at a steady state of $C_{A0} = 3$ gmol/L, $C_{B0} = 1.117$ gmol/L. The input $\frac{F}{V}$ changes as a step function: it starts at 0.5714 min⁻¹, decreases by 0.025 min⁻¹ at $t = 2$ min, and decreases by another 0.025 min⁻¹ at $t = 4$ min.

The solid black lines denote the numerical results simulated with Matlab ode45. They show the reference values that the prediction by the Carleman Linearization-based model is supposed to track. Ideally, the simulation results modeled by Carleman Linearization should be accurate enough to overlap with the solid black lines.


Figure 3.1: Isothermal CSTR with three parallel reactions: comparison of the outputs CA and CB with different frequencies of resetting extended states during simulation

The dashed red lines are the results predicted by Carleman Linearization without resetting extended states. The integration errors accumulate as time proceeds, as shown in Figure 3.1. Resetting the extended states every 2 min yields the magenta lines, which track the reference prediction better than the dashed red lines, but the simulation errors are still large.

The dashed dark blue lines are the prediction by the Carleman Linearization-based model with a resetting interval of 0.2 min; that is, during the simulation the extended states are reset 10 times evenly within each control interval of 2 min. The profiles of CA and CB track the numerical simulation results, expressed by the solid black lines, with minor differences. We then reset the extended states more frequently, with a resetting interval of 0.1 min, meaning the extended states are reset 20 times evenly per control interval. The results, presented by the dashed light blue lines, track the numerical simulation exactly. One notable detail is that the forward-difference time step of ode45, selected automatically by Matlab, is 0.05 min in this case. This shows that the Carleman Linearization-based model is less demanding in computation while achieving the same accuracy. This idea has also been reported in [20].

This example illustrates the worst-case scenario, in which the nominal operating condition is unknown and large integration errors can accumulate as we anticipate future states. In the design of MPC controllers, such simulation errors would unavoidably influence the optimization and degrade the controller performance. Resetting extended states compensates for this loss and reduces the integration errors.


Chapter 4  Application and Discussion

4.1 Comparison Between the Proposed MPC and Nonlinear MPC

To illustrate the applicability and computational efficiency of the proposed Carleman Linearization-based MPC formulation, a nonlinear jacketed CSTR is used as a case-study example.

In the CSTR, which is jacketed by coolant, an exothermic first-order reaction takes place. The dynamic process can be described with two ODEs:

$$\dot{C}_A = \frac{q}{V}(C_{Af} - C_A) - k_0\exp\!\Big(-\frac{E}{R\,T_R}\Big)C_A$$
$$\dot{T}_R = \frac{q}{V}(T_f - T_R) - \frac{\Delta H}{\rho\,C_p}\,k_0\exp\!\Big(-\frac{E}{R\,T_R}\Big)C_A + \frac{UA}{V\rho\,C_p}(T_c - T_R)$$

The two state variables are the concentration of the reactor contents $C_A$ and the reactor temperature $T_R$. The control input is the coolant temperature $T_c$ in the jacket. Table 4.1 lists the nominal operating conditions. The system is nonlinear around the nominal operating condition, and any perturbation in the parameters may cause large and potentially unstable oscillations. Figure 4.1 shows the open-loop response of the system to a −10% perturbation in the feed concentration $C_{Af}$: the unstable oscillations grow larger as the operating process proceeds.
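The open-loop behavior can be reproduced from the two ODEs and Table 4.1 (a sketch of ours; it checks that the tabulated point is close to a steady state and that a −10% step in CAf drives the state away from it):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Table 4.1 parameters
q, V, Tf = 100.0, 100.0, 350.0
UA, k0, EoR = 5e4, 7.2e10, 8750.0
dH, rho, Cp = -5e4, 1000.0, 0.239
Tc = 311.1
CA_nom, TR_nom = 9.3413e-2, 385.0

def cstr(t, x, CAf):
    CA, TR = x
    r = k0 * np.exp(-EoR / TR) * CA                    # reaction rate
    dCA = q / V * (CAf - CA) - r
    dTR = q / V * (Tf - TR) - dH / (rho * Cp) * r + UA / (V * rho * Cp) * (Tc - TR)
    return [dCA, dTR]

# The nominal point is (approximately) a steady state for CAf = 1
res = cstr(0.0, [CA_nom, TR_nom], 1.0)
print(res)                                 # both residuals near zero

# Open-loop response to a -10% perturbation in CAf
sol = solve_ivp(cstr, (0.0, 2.0), [CA_nom, TR_nom], args=(0.9,), rtol=1e-9)
print(np.max(np.abs(sol.y[1] - TR_nom)))   # the temperature moves well away from 385 K
```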


Figure 4.1: The red dashed line shows the response of the open-loop system to a −10 % perturbation in the feed concentration CAf.


Parameter   Description                                  Value
q           Feed Flow Rate                               100 liter/min
CAf         Feed Concentration of A                      1 gmol/liter
Tf          Feed Temperature                             350 K
V           Reactor Volume                               100 liter
UA          Heat Transfer Coefficient                    5 × 10^4 J/(min·K)
k0          Pre-exponential Factor                       7.2 × 10^10 min^-1
E/R         Reduced Activation Energy                    8750 K
ΔH          Heat of Reaction                             −5 × 10^4 J/mol
ρ           Density of Reactor Contents                  1000 g/liter
Cp          Heat Capacity of Reactor Contents            0.239 J/(g·K)
Tc          Nominal Coolant Temperature                  311.1 K
CA          Nominal Concentration of Reactor Contents    9.3413 × 10^-2 gmol/liter
TR          Nominal Reactor Temperature                  385 K

Table 4.1: Open-loop Unstable CSTR Parameters

4.1.1 Formulation and Tuning Parameters

The proposed method reformulates the optimal control problem as optimizing the piece-wise constant control inputs over a finite prediction horizon. The sensitivities of the cost function to the control inputs are supplied analytically to facilitate the search algorithm.

A controller is designed to regulate the system at a reactor temperature of TR = 385 K under a −10 % perturbation in the feed concentration CAf. We performed 4th-order Carleman Linearization to represent the nonlinear system in a bilinear form. The action horizon is set at t = 0.03 min, and we reset the extended states at the end of each action horizon to ensure accurate simulation. The prediction horizon is chosen as N = 8 and the control horizon as Nc = 4. In this way, we ensure stability of the system without a constraint on the terminal state.


Figure 4.2: The solid blue lines show the optimal policy results of Nonlinear MPC under a −10 % change in the feed concentration CAf. The dashed red lines are the optimal policy results of the proposed MPC formulation.

The original cost function we minimize takes the quadratic form

J = Σ_{i=1}^{N} ∫_{T_{i−1}}^{T_i} (x^T Q x + u^T R u) dt

where Q and R are weighting matrices with Q = Q^T > 0 and R = R^T ≥ 0, and x and u are the deviations of the states and the control signal from their nominal conditions, respectively. We then reformulate the cost function in terms of the extended states x⊗ and the extended control signal u⊗ based on this quadratic form, and determine the coefficient vectors J0, JA and JB. The dimensions of these vectors are consistent with the orders of the extended states x⊗ and the extended control signal u⊗.
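For a candidate trajectory sampled on a time grid, the quadratic cost above can be evaluated numerically. The sketch below uses hypothetical Q, R, and deviation trajectories (for illustration only) and approximates each integral with the trapezoidal rule:

```python
import numpy as np

# Hypothetical weighting matrices (Q = Q^T > 0, R = R^T >= 0).
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

def quadratic_cost(ts, xs, us):
    """Trapezoidal approximation of  integral of (x^T Q x + u^T R u) dt."""
    g = (np.einsum('ni,ij,nj->n', xs, Q, xs)
         + np.einsum('ni,ij,nj->n', us, R, us))
    return 0.5 * np.sum((g[1:] + g[:-1]) * np.diff(ts))

# Illustrative decaying deviation trajectories on [0, 2] min.
ts = np.linspace(0.0, 2.0, 201)
xs = np.stack([0.5 * np.exp(-ts), -3.0 * np.exp(-2.0 * ts)], axis=1)
us = (-0.2 * np.exp(-ts)).reshape(-1, 1)
J = quadratic_cost(ts, xs, us)
print(f"J = {J:.4f}")
```

In the proposed formulation this integral is instead expressed through the coefficient vectors J0, JA and JB acting on the extended states, which is what makes the cost and its gradient available in closed form.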

4.1.2 Simulation Results with the Proposed Formulation

Figure 4.2 compares the performance of the proposed MPC formulation (dashed red lines) with that of Nonlinear MPC (solid blue lines). Both stabilize the open-loop unstable system well and regulate the reactor temperature to TR = 385 K without offset. As shown in Figure 4.2, the differences between the optimal control policies and the resulting system responses are


negligible. Keeping all other conditions identical and using the same Matlab search function (fmincon) with the same interior-point algorithm, the proposed method takes 16.1538 s of CPU time to calculate the optimal control policy on an Intel Core i7-3770 CPU at 3.40 GHz, which is 35 % of the 45.5661 s that Nonlinear MPC takes. The simulation results in Figure 4.2 demonstrate that the proposed MPC formulation is more computationally efficient than Nonlinear MPC while achieving the same control goals.

4.2 Advantage of Optimizing Action Horizons

We keep the prediction horizon fixed and set the length of each action horizon as a design variable in addition to the vector of control signals. Previously, over the operating window of 1.5 min with a fixed action horizon of 0.03 min, the total number of control actions was fixed at 49. With a −10 % disturbance in the feed concentration CAf, the proposed formulation optimizing both U and ΔTi reduces the cost function value to 0.0314, compared with 0.0502 for the previous BCMPC formulation optimizing only the control actions U, and 0.0703 for the Nonlinear MPC case. Optimizing U and ΔTi together improves the performance of the MPC controller in regulating the system to track the desired trajectory.
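The combined decision vector can be sketched on a toy plant. In the hedged example below (Python/SciPy instead of fmincon; the scalar plant, weights, and bounds are all hypothetical, not the thesis's Carleman CSTR model), the optimizer searches over both the input levels u_i and the interval lengths ΔT_i, with an equality constraint keeping the intervals summing to the fixed horizon:

```python
import numpy as np
from scipy.optimize import minimize

# Toy unstable scalar plant dx/dt = a*x + u with Nc piecewise-constant inputs.
a, x0, T, Nc = 0.5, 1.0, 2.0, 4
qw, rw = 1.0, 0.1                      # hypothetical stage-cost weights

def simulate_cost(theta):
    """Cost of inputs theta[:Nc] held over variable intervals theta[Nc:]."""
    u, dT = theta[:Nc], theta[Nc:]
    x, J = x0, 0.0
    for ui, dti in zip(u, dT):
        h = dti / 20                   # fine grid inside each interval
        for _ in range(20):            # forward Euler + rectangle rule
            J += (qw * x ** 2 + rw * ui ** 2) * h
            x += (a * x + ui) * h
    return J

# Decision vector packs control levels first, interval lengths second.
theta0 = np.concatenate([np.zeros(Nc), np.full(Nc, T / Nc)])
cons = {"type": "eq", "fun": lambda th: th[Nc:].sum() - T}
bnds = [(-5.0, 5.0)] * Nc + [(0.05, T)] * Nc
res = minimize(simulate_cost, theta0, method="SLSQP",
               bounds=bnds, constraints=cons)
print(f"cost {res.fun:.4f} with intervals {np.round(res.x[Nc:], 3)}")
```

Letting the optimizer shrink intervals where the state moves fast and stretch them where it settles is exactly the extra freedom that reduced the cost from 0.0502 to 0.0314 in the CSTR case.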

4.3 Advantage of Resetting Extended States within each Action Horizon

We present another circumstance in which the control actions cannot be switched as frequently as in Figure 4.2. The action horizon, or sampling time for each piece-wise constant control action, is t = 0.3 min. We compare resetting the extended states once at the end of each sampling time, shown by the dashed red lines in Figure 4.4, with resetting the extended states 10 times evenly within each action horizon, shown by the dashed light blue lines in Figure 4.4. The optimal policy results show that resetting the extended states within each action horizon improves the ability of the MPC controller to track the desired trajectory. To further quantify the performance of the MPC controllers, Figure 4.5 compares the values of the cost function at each action horizon; the improvement is quantified by the reduced cost function values.


Figure 4.3: The dashed green lines are the optimal policy results of the proposed MPC formulation optimizing both the control actions U and the action horizons ΔTi, compared with the dashed red lines, which optimize only the control actions U.

Figure 4.4: The dashed red lines show the optimal policy results of the proposed MPC when the extended states are reset every 0.3 min. The dashed light blue lines are the results when the extended states are reset every 0.03 min.


Figure 4.5: The red bars show the value of the cost function at each action horizon when the extended states are reset every 0.3 min. The light blue bars are the results when the extended states are reset every 0.03 min.


Chapter 5: Conclusion and Future Plan

5.1 Conclusion

This thesis proposed a computationally efficient reformulation of the general Nonlinear MPC controller design methodology into a Carleman Linearization-based MPC formulation. As in traditional MPC, it converts the optimal control problem into a receding horizon control problem that satisfies the dynamic constraints and performance criteria. The nonlinear dynamic process is modeled with a bilinear representation, which enables anticipating future states analytically and providing the sensitivity of the cost function to the control signals analytically as the search gradient. Consequently, the computation of the optimal control policy is accelerated and the computational delay is removed. Resetting the extended states is performed in both the analytical anticipation of future states and the sensitivity calculation, to compensate for the simulation errors caused by Carleman Linearization. The action horizons, or sampling times, are treated as manipulated variables in addition to the control signals to further improve the performance of controllers designed by the proposed method.

The proposed method resembles both the collocation method and the shooting method, because it discretizes the states of the system explicitly in time and calculates the sensitivities to the control signals analytically. In addition, the proposed method formulates the states as nonlinear functions of the control signals, which frees the optimization problem from equality constraints and reduces the number of design variables.


5.2 Future Plan

In the next step of our research, we will propose an advanced reformulation of Carleman Linearization-based Nonlinear MPC that combines it with the Advanced-step Nonlinear MPC methodology published in [28], [29], [30]. Using Nonlinear Programming (NLP) sensitivity analysis to find approximations and update solutions on-line gives this reformulation of Nonlinear MPC an increased capability of dealing with measurement noise and model mismatch. The Advanced-step Nonlinear MPC methodology suits the method proposed in this thesis because of its bilinear formulation of the control problem and its analytical sensitivity calculation.

The proposed formulation of Advanced-step Carleman Linearization-based Nonlinear MPC will be applied to a complex chemical system with a large state-space. Our goal is to show that the proposed method removes computational delay while achieving the same stability properties as traditional Nonlinear MPC controllers.

More recently, economic-oriented MPC (EMPC) has started to gain popularity. The primary difference of EMPC from traditional MPC is that its formulation is oriented towards minimizing economic costs, which naturally puts more emphasis on the process paths by directly affecting economic performance [31], [32], [33]. Economic cost functions are generally non-quadratic, in contrast to traditional MPC, but there are formulations that guarantee stability and improve numerical performance, including adding quadratic regularization terms to the economic cost function and using Lyapunov functions. Carleman Linearization is expected to offer significant advantages in such formulations.


Bibliography

[1] M. Morari and J. H. Lee, "Model predictive control: past, present and future," Comput. Chem. Eng., vol. 23, no. 4-5, pp. 667-682, May 1999.

[2] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, "Constrained model predictive control: Stability and optimality," Automatica, vol. 36, no. 6, pp. 789-814, Jun. 2000.

[3] S. J. Qin and T. A. Badgwell, "A survey of industrial model predictive control technology," Control Eng. Pract., vol. 11, pp. 733-764, 2003.

[4] A. Armaou and A. Ataei, "Piece-wise constant predictive feedback control of nonlinear systems," J. Process Control, vol. 24, no. 4, pp. 326-335, Apr. 2014.

[5] Y. Fang and A. Armaou, "Nonlinear Model Predictive Control Using a Bilinear Carleman Linearization-based Formulation for Chemical Processes," American Control Conference, pp. 5629-5634, 2015.

[6] N. Hashemian and A. Armaou, "Fast Moving Horizon Estimation of Nonlinear Processes via Carleman Linearization," American Control Conference, pp. 3379-3385, 2015.

[7] R. Huang, L. T. Biegler, and S. C. Patwardhan, "Fast Offset-Free Nonlinear Model Predictive Control Based on Moving Horizon Estimation," Ind. Eng. Chem. Res., vol. 49, no. 17, pp. 7882-7890, Sep. 2010.

[8] C. E. García, D. M. Prett, and M. Morari, "Model Predictive Control: Theory and Practice - a Survey," Automatica, vol. 25, no. 3, pp. 335-348, 1989.

[9] J. B. Rawlings, "Tutorial overview of model predictive control," IEEE Control Syst. Mag., vol. 20, no. 3, pp. 38-52, 2000.

[10] B. Friedland, Control System Design - An Introduction to State-Space Methods, McGraw-Hill, Quebec, Canada, 2009.


[11] R. Lopez-Negrete, F. J. D'Amato, L. T. Biegler, and A. Kumar, "Fast nonlinear model predictive control: Formulation and industrial process applications," Comput. Chem. Eng., vol. 51, pp. 55-64, Apr. 2013.

[12] P. Kokotović and M. Arcak, "Constructive nonlinear control: a historical perspective," Automatica, vol. 37, no. 5, pp. 637-662, 2001.

[13] P. Mhaskar, N. H. El-Farra, and P. D. Christofides, "Predictive Control of Switched Nonlinear Systems With Scheduled Mode Transitions," IEEE Trans. Autom. Contr., vol. 50, no. 11, pp. 1670-1680, 2005.

[14] P. Mhaskar, N. H. El-Farra, and P. D. Christofides, "Stabilization of nonlinear systems with state and control constraints using Lyapunov-based predictive control," Syst. Control Lett., vol. 55, no. 8, pp. 650-659, Aug. 2006.

[15] F. Allgöwer and A. Zheng, Nonlinear Model Predictive Control, vol. 26. Basel; Boston: Birkhäuser Verlag, 2000.

[16] B. W. Bequette, "Model Predictive Control," in Process Control: Modeling, Design, and Simulation, Upper Saddle River, NJ: Prentice Hall PTR, 2003, ch. 16, pp. 487-520.

[17] L. T. Biegler, "Advances in nonlinear programming concepts for process control," J. Proc. Cont., vol. 8, no. 5-6, pp. 301-311, 1998.

[18] W. H. Steeb and F. Wilhelm, "Non-Linear Autonomous Systems of Differential Equations and Carleman Linearization Procedure," J. Math. Anal. Appl., vol. 77, no. 2, pp. 601-611, 1980.

[19] W. H. Steeb, "A Note on Carleman Linearization," Phys. Lett. A, vol. 140, no. 6, pp. 336-338, Oct. 1989.

[20] S. A. Svoronos, D. Papageorgiou, and C. Tsiligiannis, "Discretization of Nonlinear Control Systems via the Carleman Linearization," J. Process Control, pp. 678-685, 1994.

[21] V. Hatzimanikatis, G. Lyberatos, S. Pavlou, and S. A. Svoronos, "A method for pulsed periodic optimization of chemical reaction systems," Chem. Eng. Sci., vol. 48, no. 4, pp. 789-797, 1993.

[22] M. Rodriguez and D. Perez, "First Principles Model Based Control," in European Symposium on Computer Aided Process Engineering-15, 2005, © Elsevier Science B.V.


[23] V. S. Vassiliadis, R. W. H. Sargent, and C. C. Pantelides, "Solution of a Class of Multistage Dynamic Optimization Problems. 1. Problems without Path Constraints," Ind. Eng. Chem. Res., vol. 33, no. 9, pp. 2111-2122, Sep. 1994.

[24] V. S. Vassiliadis, R. W. H. Sargent, and C. C. Pantelides, "Solution of a Class of Multistage Dynamic Optimization Problems. 2. Problems with Path Constraints," Ind. Eng. Chem. Res., vol. 33, no. 9, pp. 2123-2133, Sep. 1994.

[25] S. E. Tuna, M. J. Messina, and A. R. Teel, "Shorter horizons for model predictive control," Am. Control Conf., vol. 19, no. 4, pp. 678-685, Apr. 2006.

[26] A. R. Teel, “Control Is the Stability Robust Part I Asymptotic Stability Theory,” pp. 3-27,2004.

[27] B. W. Bequette, "Isothermal Chemical Reactor," in Process Control: Modeling, Design, and Simulation, Upper Saddle River, NJ: Prentice Hall PTR, 2003, pp. 605-617.

[28] V. M. Zavala and L. T. Biegler, “The advanced-step NMPC controller: Optimality, stabilityand robustness,” Automatica, vol. 45, no. 1, pp. 86-93, Jan. 2009.

[29] X. Yang and L. T. Biegler, "Advanced-multi-step nonlinear model predictive control," J. Process Control, vol. 23, no. 8, pp. 1116-1128, Sep. 2013.

[30] R. Huang, V. M. Zavala, and L. T. Biegler, "Advanced step nonlinear model predictive control for air separation units," J. Process Control, vol. 19, no. 4, pp. 678-685, Apr. 2009.

[31] R. Huang and L. T. Biegler, "Economic NMPC for energy intensive applications with electricity price prediction," in 11th International Symposium on Process Systems Engineering, Singapore, 2012, pp. 1612-1616.

[32] M. Heidarinejad, J. Liu, and P. D. Christofides, “Economic model predictive control ofswitched nonlinear systems,” Syst. Control Lett., vol. 62, no. 1, pp. 77-84, Jan. 2013.

[33] R. Amrit, J. B. Rawlings, and L. T. Biegler, “Optimizing process economics online usingmodel predictive control,” Comput. Chem. Eng., vol. 58, pp. 334-343, Nov. 2013.
