A Second Order Cone Programming Algorithm for Model Predictive Control

Magnus Åkerblad
Division of Automatic Control
Department of Electrical Engineering
Linköpings universitet, SE-581 83 Linköping, Sweden
WWW: http://www.control.isy.liu.se
E-mail: [email protected]

13th February 2004

Report no.: LiTH-ISY-R-2591
Also published as Licentiate Thesis, Royal Institute of Technology

Technical reports from the Control & Communication group in Linköping are available at http://www.control.isy.liu.se/publications.




Abstract

In Model Predictive Control (MPC) an optimal control problem has to be solved at each sampling instant. The objective of this thesis is to derive efficient methods to solve the MPC optimization problem. The approach is based on ideas from Interior Point (IP) optimization methods and Riccati recursions.

The MPC problem considered here has a quadratic objective and constraints which can be both linear and quadratic. The key to an efficient implementation is to rewrite the optimization problem as a Second Order Cone Program (SOCP). To solve the SOCP a feasible primal-dual IP method is employed. By using a feasible IP method it is possible to determine whether the problem is feasible or not by formalizing the search for strictly feasible initial points as a primal-dual IP problem.

There are several different ways to rewrite the optimization problem as an SOCP. However, done carefully, it is possible to use very efficient scalings as well as Riccati recursions for computing the search directions. The use of Riccati recursions makes the computational complexity grow at most quadratically with the time horizon, compared to cubically for more standard implementations.

Keywords: Optimization, Second Order Cone Programming, Model Predictive Control


A Second Order Cone Programming Algorithm for Model Predictive Control

Magnus Åkerblad

Licentiate Thesis
Department of Signals, Sensors and Systems
Royal Institute of Technology
Stockholm, Sweden

Submitted to the School of Electrical Engineering, Royal Institute of Technology, in partial fulfillment of the requirements for the degree of Technical Licentiate.


TRITA–S3–REG–0205
ISSN 1404–2150
ISBN 91–7283–382–3

Copyright © 2002 Magnus Åkerblad



Acknowledgments

First of all I would like to thank my supervisor Dr. Anders Hansson. He always had time for my questions and to help me find the answers. This work could not have been done without him.

I would also like to thank Professor Bo Wahlberg and Professor Lennart Ljung for letting me join the Automatic Control groups at KTH and Linköping University, respectively. It has been a privilege to work at two different departments.

Several people have helped me to improve this thesis. Jonas Gillberg, Johan Löfberg and Ragnar Wallin read early versions of the manuscript and gave me valuable comments. Moreover, special thanks to Ulla Salaneck, who proofread the final version of this manuscript.


Notation

Symbols

R        The set of real numbers
R^n      The set of real-valued column vectors of dimension n
R^{m×n}  The set of real-valued matrices of dimension m × n

Vectors and Matrices

I                     The identity matrix
1 = (1, ..., 1)^T     A column vector with ones
e0 = (1, 0, ..., 0)^T The first unit vector
diag(x)               Diagonal matrix with the elements of the vector x on the diagonal
arrow(x)              Arrow matrix of the vector x,

arrow(x) = [ x0    x1^T
             x1    x0 I ]

where x = (x0, x1), with x0 ∈ R
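As a small illustration, the arrow operator defined above can be sketched in a few lines of NumPy (the function name and the test vector below are ours, not part of the thesis; note that diag(x) is simply `np.diag(x)`):

```python
import numpy as np

def arrow(x):
    """arrow(x) = [[x0, x1^T], [x1, x0*I]] for x = (x0, x1) with x0 scalar."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[0], x[1:]
    n = x1.size
    top = np.concatenate(([x0], x1))                    # first row: [x0, x1^T]
    bottom = np.hstack((x1[:, None], x0 * np.eye(n)))   # rows: [x1, x0*I]
    return np.vstack((top, bottom))
```

For example, arrow([2, 3, 4]) is the symmetric matrix [[2, 3, 4], [3, 2, 0], [4, 0, 2]].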

Page 10: A Second Order Cone Programming Algorithm for Model ... · approach is based on ideas from Interior Point (IP) optimization methods and Riccati recursions. The MPC problem considered


Operations and Functions

∇        Gradient
∇²       Hessian
‖x‖      Euclidean norm of a vector
X ⊂ Y    X is a subset of Y
≥K       Inequality with respect to a proper cone K
K1 × K2  The Cartesian product of K1 and K2
A1 ⊕ A2  Block diagonal matrix with blocks A1 and A2

Abbreviations

IP    Interior Point
KKT   Karush-Kuhn-Tucker
LP    Linear Programming
NT    Nesterov-Todd
MPC   Model Predictive Control
QP    Quadratic Programming
QCQP  Quadratically Constrained Quadratic Programming
SOCP  Second Order Cone Programming


Contents

1 Introduction
  1.1 MPC
  1.2 IP
  1.3 Outline
  1.4 Contributions

2 Model Predictive Control
  2.1 History of MPC
  2.2 The MPC Setup
    2.2.1 The Predictive Model
    2.2.2 The Objective Function
    2.2.3 The Constraints
    2.2.4 The Optimization Problem and the MPC Algorithm
  2.3 Stability of MPC
  2.4 MPC and IP

3 Conic Convex Programming
  3.1 The Nonnegative Orthant and the Second Order Cone
  3.2 Second Order Cone Programming
  3.3 Karush-Kuhn-Tucker Conditions

4 Interior-Point Methods
  4.1 Newton's Method
  4.2 Central Path
  4.3 Barrier Function
  4.4 Potential-Reduction Methods
  4.5 Nesterov-Todd Search Direction

5 Efficient Computation of Search Direction
  5.1 Control Problem
  5.2 Search Direction
  5.3 Efficient Solution of Equations for Search Direction

6 Strictly Feasible Initial Points
  6.1 Primal Initial Point
  6.2 Dual Initial Point

7 Computational Results
  7.1 Complexity Analysis
    7.1.1 Flop Count
    7.1.2 Complexity of the Algorithm
  7.2 Example
    7.2.1 The Double-Tank Process
    7.2.2 Problem Formulation
    7.2.3 Computational Results

8 Conclusions and Future Work
  8.1 Conclusions
  8.2 Future Work

A NT Search Direction

B Riccati Recursion


Chapter 1

Introduction

In this thesis we will combine ideas from Interior Point (IP) optimization methods and linear quadratic control in order to efficiently solve a Model Predictive Control (MPC) problem. First, we will give a background to MPC and IP methods. Then the outline of the thesis is given, followed by a short summary of the main contributions of this work.

1.1 MPC

The core idea of Model Predictive Control (MPC) is to use a dynamical model of the system to predict its future behavior as a function of the control inputs. The optimal future input sequence is then calculated, i.e., the input which achieves the control objectives in an optimal way. A new optimal input sequence is determined at each sampling instant given the current state estimate. The basic idea of MPC can be seen in the following analogy:

You are trying to walk across a street. First you look right and left to estimate whether you can safely make it across. In other words, you are trying to predict if you can walk fast enough to make it across the street without getting hit by a car. You come to the conclusion that it is safe and start walking. Then something unforeseen happens: a car comes towards you at great speed. You then have to make a new decision, to either walk back to the sidewalk, or to increase your speed to make it across the street.

MPC explicitly takes care of constraints. In this example it is quite easy to see what kind of constraints there could be. One example is that your walking speed is limited.

1.2 IP

Interior Point (IP) methods are a class of algorithms for solving certain optimization problems. A simple interpretation of an IP method is given by the following analogy:

You are standing on an island and are looking for the highest point on the island. To find this place you look at the surroundings to find out in which direction the ascent is steepest. You then start walking in that direction. After walking a bit you start looking for a new steepest ascent. This is repeated until you find yourself in a place where all directions lead down, which means you are at the highest point of the island. Interior reflects the fact that you have to stay on the island!

There are of course many technical details that are not dealt with in this simple example. For example, what happens if the island has a local hilltop? This is a question of whether the island is concave or not. In this thesis we will only deal with concave maximization problems, or equivalently, convex minimization problems.

1.3 Outline

A short introduction to MPC is presented in Chapter 2. There the history and background of MPC will be discussed, followed by a presentation of the standard formulation of MPC. The chapter concludes with a discussion of the stability of MPC and of how IP methods can be used to efficiently solve the resulting optimization problem.

In Chapter 3 two kinds of convex cones will be studied: the nonnegative orthant and the second order cone. We will formalize optimization problems for these cones and present optimality conditions.

How to solve the optimization problem using an IP method will be discussed in Chapter 4. First we will study Newton's method and how to modify it. Then the potential reduction method will be introduced. The chapter concludes by investigating how to obtain the so-called Nesterov-Todd search direction.

In Chapter 5 an efficient method for solving the optimization problem, using a Riccati recursion to calculate the search directions, will be presented.


To be able to start the optimization algorithm, a strictly feasible initial point is needed. How such a point can be obtained, and how one can determine whether the optimization problem is feasible at all, is discussed in Chapter 6.

A complexity analysis of the proposed method, together with computational results, is presented in Chapter 7.

Finally, in Chapter 8, a short summary of the results is given together with some ideas for future research.

1.4 Contributions

The main contributions of this thesis are:

• To show how the Riccati recursion approach can be used to find the search direction when the MPC problem is formulated as a Second Order Cone Program (SOCP).

• The application of the matrix inversion lemma, which makes it possible to calculate the search direction using a number of floating point operations that grows linearly with the time horizon.

• An efficient method for determining feasibility and, in case the problem is feasible, computing strictly feasible initial points.

These results have previously been reported in (Åkerblad and Hansson, 2002). Other work by the author, not presented in this thesis, has been published in (Åkerblad et al., 2000a; Åkerblad et al., 2000b).


Chapter 2

Model Predictive Control

Model Predictive Control is a control strategy which takes care of constraints in a straightforward way. MPC is based on predictions of the future of the measured output when a certain control sequence is applied to a process.

We will start our review of MPC in Section 2.1 by looking at the history of MPC. In Section 2.2 we will define the MPC setup. In Section 2.3 we will discuss stability of MPC. Finally, in Section 2.4, we will look at how Interior Point methods can be applied to efficiently solve the optimization problem of MPC.

2.1 History of MPC

The ideas of MPC can be traced back to the 1960s, when research on open-loop optimal control was a topic of significant interest. The idea of a moving horizon, which is the core of all MPC algorithms, was proposed by (Propoi, 1963). Another early work that relates to MPC can be found in (Lee and Markus, 1967, p. 423), where the following statement can be found:

One technique for obtaining a feedback controller synthesis from knowledge of open-loop controllers is to measure the current control process state and then compute very rapidly for the open-loop control function. The first portion of this function is then used during a short time interval, after which a new measurement of the process state is made and a new open-loop control function is computed for this new measurement. The procedure is then repeated.


This statement captures the essence of MPC, i.e., obtaining an optimal control sequence and then applying only the first part of it.

The true birth of MPC was in industry, with the publications of Richalet et al. (Richalet et al., 1976; Richalet et al., 1978), in which Model Predictive Heuristic Control (MPHC) was presented, and the publication of Cutler and Ramaker (Cutler and Ramaker, 1979), which introduced Dynamic Matrix Control (DMC). Both these algorithms used an explicit dynamical model of the plant to predict the effect of future control actions. The future control actions were determined by minimizing the predicted error subject to some operating constraints. The difference between the two algorithms was that MPHC used an impulse response model whereas DMC used a step response model.

In the eighties MPC became popular within the chemical process industry. This was mainly due to the simplicity of the algorithm and the simple models it required. A good report on this can be found in (Garcia et al., 1989), and a good survey of how MPC is used in industry can be found in (Qin and Badgwell, 1996). In this period a multitude of algorithms were created, under a multitude of names. The main differences between these algorithms were the process model they used, and how they dealt with noise. More about these algorithms can be found in e.g. (Camacho and Bordons, 1998).

One thing to remember is that despite the great success of MPC in industry,no stability theory or robustness results were available. That came later.

In the late eighties and early nineties the use of state-space models in MPC became popular, and it soon became the most used MPC formulation in the research literature (Morari and Lee, 1999). The use of state-space models led to some advances towards a stability theory for MPC. Stability will be discussed in Section 2.3.

2.2 The MPC Setup

In this section the different key elements of the MPC algorithm will be presented. The key elements of an MPC algorithm are

• The predictive model

• The objective function

• The constraints

We will now study these elements more closely.


2.2.1 The Predictive Model

In this thesis a state-space model is the only model discussed. The reason for this is, as mentioned earlier, that most of the recent MPC formulations are in state-space form:

xk+1 = Akxk + Bkuk (2.1)

zk = Ckxk + Dkuk (2.2)

where xk ∈ R^n is the state, uk ∈ R^m is the control signal and zk ∈ R^p is a performance-related auxiliary variable used in the objective function, which is defined next.

2.2.2 The Objective Function

The purpose of the objective function is to measure how far the future outputs are from the desired output, weighted against the amount of control effort. These two objectives often contradict each other, so in choosing the weights Ck and Dk one has to decide which is more important: obtaining the desired output or using little control power. Define the objective function as

∑_{k=0}^{N−1} zk^T zk + Ψ(xN)    (2.3)

where N is the time horizon and Ψ(xN) is the terminal cost. The performance criterion can easily be extended to handle piecewise quadratic end-point penalties by replacing xN^T P xN with

max_i (xN^T Pi xN − ρi)

This can be used to show stability for larger sets of initial values of x0, see (Löfberg, 2001a). This issue will not be discussed further in this thesis.

It is also possible to write (2.3) such that the output is made to follow a desired trajectory, a reference signal. The ability to use information about future reference signals is one of the greatest advantages of MPC. It means that the process can react before a change occurs and thus avoid the effects of delays in the system. This can lead to great improvements in performance, especially in industrial settings where the evolution of the reference signal is known beforehand (robotics, servos or batch processes) (Camacho and Bordons, 1998). However, in this thesis we will only discuss the case when the reference signal is zero.


2.2.3 The Constraints

All processes are limited in some way. It may be that the process actuators are limited in how large signals they can put out, or that the actuators have slew rate constraints, i.e., they cannot change arbitrarily fast. The process may also have constraints which are binary; one example is a valve that can be either open or closed. There are also reasons for constraints other than process limitations. Safety is one such reason: some process variables may not violate certain bounds, as a violation could mean overflow in a tank or other situations hazardous to personnel and equipment. Another reason is environmental: for example, a gas should not contain too high a concentration of a certain compound when let out into the environment.

A general way of writing the constraints is

u ∈ U , x ∈ X (2.4)

where U and X are non-empty convex sets. A common example is the saturation constraint

umin ≤ uk ≤ umax
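For the saturation constraint above, the set U is a box and membership is a one-line check. A minimal sketch (the bounds and the helper name are our own illustrative choices, not from the thesis):

```python
import numpy as np

u_min, u_max = np.array([-1.0, -2.0]), np.array([1.0, 2.0])  # hypothetical bounds

def in_U(u):
    """Check u ∈ U for the box U = {u : u_min <= u <= u_max}."""
    return bool(np.all(u_min <= u) and np.all(u <= u_max))
```

For example, in_U([0.5, -1.5]) holds, while in_U([1.5, 0.0]) violates the first bound.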

2.2.4 The Optimization Problem and the MPC Algorithm

We are now ready to state the optimization problem which is used to compute the control signal:

min_{u,x}  ∑_{j=k}^{k+N−1} z_{j|k}^T z_{j|k} + Ψ(x_{k+N|k})    (2.5)

subject to  x_{j+1|k} = Aj x_{j|k} + Bj u_{j|k},  x_{k|k} = xk
            z_{j|k} = Cj x_{j|k} + Dj u_{j|k}
            u_{j|k} ∈ U,  x_{j|k} ∈ X

where the notation x_{j|k} means the state at time j given the state at time k. In Chapter 5 this notation will be dropped, assuming that the starting time is always zero. Notice that if U and X are described by linear constraints, then (2.5) is a quadratic program. If X is an ellipsoid we have a second order cone problem, which will be discussed in Chapter 3. Now the MPC algorithm can be stated as

1 Measure xk|k

2 Obtain uj|k by solving the kth optimization problem


3 Apply uk = uk|k

4 Update time k := k + 1

5 Repeat from step 1
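To make steps 1-5 concrete, the sketch below runs the receding-horizon loop for the unconstrained case, where step 2 reduces to a finite-horizon LQ problem solvable by a backward Riccati recursion (foreshadowing Chapter 5). The plant matrices, horizon, and terminal weight C^T C are our own illustrative assumptions; a real MPC step would solve the constrained problem (2.5) instead:

```python
import numpy as np

def lq_first_input(A, B, C, D, N, x):
    """Solve the unconstrained version of (2.5) over horizon N by a backward
    Riccati recursion and return the first optimal input u_{k|k}."""
    Q, R, S = C.T @ C, D.T @ D, C.T @ D    # cost z^T z expanded in x and u
    P = Q.copy()                           # terminal weight (our assumption)
    for _ in range(N):
        G = R + B.T @ P @ B
        K = np.linalg.solve(G, S.T + B.T @ P @ A)
        P = Q + A.T @ P @ A - (S + A.T @ P @ B) @ K
    return -K @ x                          # first element of optimal sequence

# Steps 1-5 of the MPC algorithm on an illustrative two-state plant.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.vstack((np.eye(2), np.zeros((1, 2))))  # z penalizes the states ...
D = np.vstack((np.zeros((2, 1)), np.eye(1)))  # ... and the control effort
x = np.array([1.0, -1.0])
for k in range(30):                            # step 5: repeat
    u = lq_first_input(A, B, C, D, N=10, x=x)  # steps 1-2: measure x, optimize
    x = A @ x + B @ u                          # steps 3-4: apply u_{k|k}, advance
```

With these (assumed) matrices the closed loop contracts toward the origin, illustrating the receding-horizon principle of re-solving at every sample.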

2.3 Stability of MPC

Stability of MPC was an unsolved problem for a long time. The first solution to the stability issue was for the unconstrained case, where stability is achieved by choosing Dj^T Dj large enough, or by having a sufficiently long time horizon (Garcia et al., 1989). The unconstrained problem is not very interesting since it can be solved using LQ techniques (Bitmead et al., 1990).

The stability theory for the constrained case was mainly developed in the nineties. A good survey of the stability theory can be found in (Mayne et al., 2000). Assume that all states are measured, that the model is time-invariant, and that (A, B) is controllable. The MPC algorithm will guarantee asymptotic stability of the closed loop system if the following assumptions are satisfied, provided that the optimization problem (2.5) is feasible at time k = 0:

• 0 ∈ X

• Ax + BL(x) ∈ X , ∀x ∈ X

• Ψ(0) = 0, Ψ(x) > 0, ∀x ≠ 0

• Ψ(Ax + BL(x)) − Ψ(x) ≤ −[Cx + DL(x)]T [Cx + DL(x)], ∀x ∈ X

• L(x) ∈ U , ∀x ∈ X

• [C D]T [C D] > 0

where Ψ(x) is a terminal state weight and L(x) is a nominal controller which maps x onto u. The proof of stability can be found in e.g. (Löfberg, 2001a; Lee, 2000).

There are many choices of X, L(x) and Ψ(x) which satisfy these assumptions, and there are also many methods to produce them. In (Keerthi and Gilbert, 1988) the parameters are set to X = {0}, L(x) = 0 and Ψ(x) = 0. This simple choice leads to feasibility problems unless N is large. Another choice is Ψ(x) = x^T P x, where P > 0 and A^T P A − P ≤ −C^T C. This choice of terminal cost together with X = R^n, L(x) = 0 is presented in (Rawlings and Muske, 1993) and is applicable to stable systems. A third choice is to use e.g. an ellipsoidal terminal state constraint in order to establish stability, which is less conservative, i.e., stability can be proven for a larger set of initial values. This leads to a Quadratically Constrained Quadratic Program (QCQP) that has to be solved at each sampling instant. This method was investigated by (Lee, 2000; Lee and Kouvaritakis, 1999; Scokaert and Rawlings, 1998).
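For the (Rawlings and Muske, 1993) choice, a terminal weight satisfying A^T P A − P ≤ −C^T C can be obtained by solving the discrete Lyapunov equation A^T P A − P = −C^T C, which for stable A has the series solution P = ∑_k (A^T)^k C^T C A^k. A minimal sketch, with example matrices of our own choosing:

```python
import numpy as np

def terminal_weight(A, C, tol=1e-12, max_terms=10000):
    """P = sum_k (A^T)^k C^T C A^k solves A^T P A - P = -C^T C for stable A."""
    Q = C.T @ C
    P = Q.copy()          # k = 0 term
    T = A.copy()
    for _ in range(max_terms):
        term = T.T @ Q @ T
        P += term
        if np.linalg.norm(term) < tol:   # series has converged
            break
        T = T @ A
    return P

A = np.array([[0.8, 0.1], [0.0, 0.5]])   # stable example (eigenvalues 0.8, 0.5)
C = np.eye(2)
P = terminal_weight(A, C)
residual = A.T @ P @ A - P + C.T @ C     # should be (numerically) zero
```

The series converges because the terms decay geometrically with the spectral radius of A; the resulting P is positive definite, as required for Ψ(x) = x^T P x.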

2.4 MPC and IP

The ability to solve (2.5) fast is of great importance in MPC, since faster solvers mean that MPC can be used on processes with faster sampling, or that larger optimization problems can be solved.

As we saw in the previous section, stability can be achieved by different choices of terminal costs and terminal state constraints. The first two choices lead to a Quadratic Program (QP), whereas the third choice leads to a QCQP. Recently, specially tailored IP methods applicable to MPC have appeared (Gopal and Biegler, 1998). These algorithms solve the resulting QP by utilizing the special structure of the control problem. By ordering the equations and variables in a certain way, the linear system of equations that has to be solved for the search direction becomes block-diagonal (Wright, 1993; Wright, 1996). By further examining this structure it is possible to solve the equations using a Riccati recursion. This makes the computational burden grow only linearly with the time horizon (Rao et al., 1997; Hansson, 2000; Vandenberghe et al., 2002). A similar approach is used in (Steinbach, 1994; Blomvall, 2001). Riccati recursions have also been used together with active set methods for solving the optimal control problem (Arnold and Puta, 1994; Glad and Jonson, 1984). Comparisons between active set methods and IP methods have been made by several authors (Albuquerque et al., 1997; Biegler, 1997; Wright, 1996).

The idea of using Riccati recursions works also for QCQPs. However, it is then not possible to use feasible IP methods (Wright, 1997), and because of this no proof of polynomial complexity is available. A way to overcome this is to reformulate the QCQPs as SOCPs (Lobo et al., 1998). The objective of this thesis is to show how Riccati recursions can be used also in this context. SOCPs in the context of MPC are also used in (Wills and Heath, 2002).


Chapter 3

Conic Convex Programming

Conic convex programming is a class of optimization problems where the objective is linear and the constraint set is the intersection of an affine space with a proper cone. In this chapter two kinds of proper cones will be studied: the nonnegative orthant and the second order cone. We will also formalize optimization problems for these cones and present optimality conditions.

3.1 The Nonnegative Orthant and the Second Order Cone

Linear programming is by far the most well known problem in optimization. A common way to write a Linear Program (LP) is

min_{y,s}  f^T y    (3.1)

subject to  s = c − Ay
            s ≥K 0

where s, c ∈ R^m, f, y ∈ R^n and A ∈ R^{m×n}. Here the relation s ≥K 0 denotes component-wise inequality, i.e., the set of s satisfying s ≥K 0 is the nonnegative orthant. The nonnegative orthant can be seen graphically in Figure 3.1. With the notation ≥K we generally mean inequality with respect to a proper cone K ⊆ R^n. A set K ⊆ R^n is called a cone if for any x ∈ K and θ ≥ 0 we have θx ∈ K. A cone K ⊆ R^n is called a proper cone if

1. K is convex

Figure 3.1: The nonnegative orthant in R^2.

2. K is closed

3. K is solid (nonempty interior)

4. K is pointed (x ∈ K and −x ∈ K ⇒ x = 0)

Let K be a proper cone. Then the cone inequality is defined as

x ≤K y ⟺ y − x ∈ K

The nonnegative orthant trivially satisfies the conditions for a proper cone.

An extension to LP is what is called an SOCP. In an SOCP a linear function is minimized over a convex set described as the intersection of one or several second order cones with an affine space. A common way to write an SOCP is as (3.1) with a second order cone inequality instead of the nonnegative orthant inequality. In order to define the second order cone, introduce the partitioning of a vector s ∈ R^n given by

s = [ s0
      s1 ]

where s0 ∈ R. Then the second order cone is defined as the set of s ∈ R^n such that

‖s1‖ ≤ s0

This cone also satisfies the conditions for a proper cone. A graphical interpretation of a second order cone can be seen in Figure 3.2. Many problems can

Figure 3.2: The second order cone in R^3.

be formulated as an SOCP, see (Lobo et al., 1998). For instance, consider the following QCQP

min_x  x^T P x
subject to  x^T W x ≤ 1

which is equivalent to

min_{t,x}  t    (3.2)

subject to  x^T P x ≤ t
            x^T W x ≤ 1

Assume that P and W are positive definite matrices. In order to formulate (3.2) as an SOCP, let

A1 =

−1 01 0

0 −2P12

A2 =

[0 0

0 −W12

]

c1 =

110

c2 =

[10

]

and let yᵀ = [t xᵀ]. Now (3.2) can be rewritten as

min_{y,s}  [1 0]y
subject to  s_1 = c_1 − A_1y
            s_2 = c_2 − A_2y
            s_k ≥_{K_k} 0,  k = 1, 2

where K_1 and K_2 are second order cones. Notice that this problem can be written as (3.1) by letting Aᵀ = [A_1ᵀ A_2ᵀ], cᵀ = [c_1ᵀ c_2ᵀ] and defining the cone K as the Cartesian product of K_1 and K_2, i.e., K = K_1 × K_2. Through the use of the Cartesian product it is easy to get a unified treatment of different conic convex optimization problems, see e.g. (Alizadeh and Schmieta, 1997).
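The equivalence can be sketched numerically. The fragment below (illustrative 2×2 matrices P and W; numpy is used only for checking and is not part of the thesis) builds A_1, c_1, A_2, c_2 as above and verifies that s_k = c_k − A_ky lies in the second order cone exactly when the corresponding quadratic constraint holds:

```python
import numpy as np

# Illustrative positive definite 2x2 matrices (hypothetical data).
P = np.array([[2.0, 0.0], [0.0, 1.0]])
W = np.array([[1.0, 0.0], [0.0, 4.0]])

def in_soc(s):
    """Second order cone membership test: ||s_1|| <= s_0."""
    return np.linalg.norm(s[1:]) <= s[0] + 1e-12

def soc_data(P, W):
    """Build (A_1, c_1, A_2, c_2) of the reformulation above."""
    n = P.shape[0]
    P_half = np.linalg.cholesky(P).T   # any matrix square root works here
    W_half = np.linalg.cholesky(W).T
    A1 = np.vstack([np.r_[-1.0, np.zeros(n)],
                    np.r_[ 1.0, np.zeros(n)],
                    np.c_[np.zeros((n, 1)), -2.0 * P_half]])
    c1 = np.r_[1.0, 1.0, np.zeros(n)]
    A2 = np.vstack([np.r_[0.0, np.zeros(n)],
                    np.c_[np.zeros((n, 1)), -W_half]])
    c2 = np.r_[1.0, np.zeros(n)]
    return A1, c1, A2, c2

A1, c1, A2, c2 = soc_data(P, W)
x = np.array([0.3, -0.2])
for t in (x @ P @ x + 0.1, x @ P @ x - 0.1):   # one feasible t, one not
    y = np.r_[t, x]
    s1, s2 = c1 - A1 @ y, c2 - A2 @ y
    assert in_soc(s1) == (x @ P @ x <= t)
    assert in_soc(s2) == (x @ W @ x <= 1.0)
```

The first cone constraint encodes (1 − t)² + 4xᵀPx ≤ (1 + t)², which simplifies to xᵀPx ≤ t, so the slack vector s_1 is in the cone exactly when the epigraph constraint holds.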

3.2 Second Order Cone Programming

As briefly mentioned in the previous section, one can combine different cones in a single optimization problem. From now on we will call the mix between an LP and an SOCP just an SOCP, since an LP can be written as an SOCP. To be more precise, we will from now on consider the following SOCP:

min_{y,s}  fᵀy    (3.3)
subject to  s = c − Ay
            A_0y = c_0
            s ≥_K 0

Here the vectors c, s and y and the matrix A are partitioned as

cᵀ = [c_1ᵀ … c_Nᵀ]
sᵀ = [s_1ᵀ … s_Nᵀ]
yᵀ = [y_1ᵀ … y_Nᵀ]
A = ⊕_{k=1}^N A_k

where c_k, s_k ∈ R^{m_k}, y_k ∈ R^{n_k}, f ∈ R^n, c_0 ∈ R^{m_0}, A_0 ∈ R^{m_0×n} and A_k ∈ R^{m_k×n_k}. The cone K is the Cartesian product of N cones K_k of dimension m_k, i.e., K = K_1 × … × K_N. Notice that m = Σ_{k=1}^N m_k and n = Σ_{k=1}^N n_k.

There exists an SOCP associated to (3.3), called the dual SOCP. In order to derive the dual SOCP define the dual cone as

K* = {x | yᵀx ≥ 0 for all y ∈ K}


It can be shown that K = K* for the cones considered here, i.e., K is self-dual. The dual problem can be derived from the Lagrangian

L(s, y, x_0, x) = fᵀy + xᵀ(s + Ay − c) + x_0ᵀ(A_0y − c_0)

where x is constrained to be in the dual cone K*. The dual function g is defined as the infimum of the Lagrangian over s ∈ K and y:

g(x_0, x) = inf_{s∈K, y} L(s, y, x_0, x)
          = inf_{s∈K, y} yᵀ(f + Aᵀx + A_0ᵀx_0) − c_0ᵀx_0 − cᵀx + xᵀs
          = { −c_0ᵀx_0 − cᵀx   if A_0ᵀx_0 + Aᵀx = −f
            { −∞               otherwise

A property of the dual function is that it is always less than or equal to the optimal value of (3.3), i.e., it provides a lower bound. The largest lower bound is obtained by maximizing g(x_0, x) with respect to x_0 and x ∈ K*. To get a nontrivial lower bound we need to add the constraint A_0ᵀx_0 + Aᵀx = −f. This results in the following SOCP, called the dual SOCP:

max_{x_0,x}  −c_0ᵀx_0 − cᵀx    (3.4)
subject to   A_0ᵀx_0 + Aᵀx = −f
             x ≥_{K*} 0

Problem (3.3) is often called the primal SOCP, and problems (3.3) and (3.4) together are called the primal-dual pair. The following relations hold between the primal and the dual:

1 Weak duality: The dual objective is less than or equal to the primal objective at optimum.

2 Strong duality: If either problem (3.3) or (3.4) has a strictly feasible point, i.e., either there exist feasible (y, s) such that s >_K 0 or there exist feasible (x_0, x) such that x >_{K*} 0, then the objectives of the primal and dual are equal at optimum.

The proof of the first statement is obvious from what has been said above. The proof of the second statement is given in (Luo et al., 1996; Nesterov and Nemirovsky, 1994, pp. 105–109). There are several reasons for considering the dual problem. One reason is that so called primal-dual algorithms for solving


SOCPs, which jointly solve both the primal and the dual problem, are very efficient. Another reason is that the dual problem provides non-heuristic stopping criteria for algorithms. To this end let us introduce the duality gap, which is defined as the difference between the primal and dual objectives for feasible s, y, x_0 and x, i.e., s, y, x_0 and x which satisfy the constraints of the primal and dual programs, respectively. The duality gap will be denoted by µ and is given by

µ = fᵀy − (−c_0ᵀx_0 − cᵀx) = −(x_0ᵀA_0 + xᵀA)y + x_0ᵀc_0 + xᵀc
  = x_0ᵀ(−A_0y + c_0) + xᵀ(−Ay + c) = xᵀs    (3.5)

Notice that the duality gap is always nonnegative, which follows from weak duality. Moreover, for any feasible s, y, x_0 and x it provides an upper bound on the distance from the primal objective to its optimal value, and hence is useful as a stopping criterion.
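As a concrete check of (3.5), the sketch below sets up a tiny LP in the form (3.3)/(3.4) with hand-picked feasible primal and dual points (all data invented for illustration) and confirms that the difference of the objectives equals xᵀs:

```python
import numpy as np

# Tiny LP in primal form (3.3) and dual form (3.4); all data invented.
A0 = np.array([[1.0, 1.0]]); c0 = np.array([1.0])
A  = -np.eye(2);             c  = np.array([1.0, 1.0])
f  = np.array([1.0, 2.0])

y  = np.array([0.5, 0.5]);  s = c - A @ y             # primal feasible, s > 0
x0 = np.array([0.0]);       x = np.array([1.0, 2.0])  # dual candidate, x > 0
assert np.allclose(A0 @ y, c0) and np.all(s > 0)
assert np.allclose(A0.T @ x0 + A.T @ x, -f)           # dual feasibility

gap = f @ y - (-(c0 @ x0) - c @ x)   # primal objective minus dual objective
assert np.isclose(gap, x @ s)        # equals x^T s, as in (3.5)
```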

3.3 Karush-Kuhn-Tucker Conditions

The Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient optimality conditions for a general convex optimization problem, assuming that strong duality holds. The KKT conditions for the SOCP are as follows

A_0y = c_0    (3.6)

s + Ay = c    (3.7)

A_0ᵀx_0 + Aᵀx = −f    (3.8)

SXe = 0    (3.9)

s ≥_K 0    (3.10)

x ≥_{K*} 0    (3.11)

Condition (3.9) is called the complementary slackness condition. The matrices S and X are block diagonal, where each block corresponds to one of the cone constraints, i.e.,

S = ⊕_{k=1}^N S_k,    X = ⊕_{k=1}^N X_k

If K_k is a nonnegative orthant then S_k = diag(s_k) and X_k = diag(x_k). If K_k is a second order cone then S_k = arrow(s_k) and X_k = arrow(x_k). The definitions of diag(·) and arrow(·) are given in the Notation section. Moreover eᵀ = [e_1ᵀ … e_Nᵀ], where each e_k corresponds to one of the cone constraints. If K_k is a nonnegative orthant, then e_k = 1, where 1ᵀ = [1 … 1]. If K_k is a second order cone, then e_k is given by the first unit vector, i.e., [1 0 … 0]ᵀ.
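The thesis defers the definition of arrow(·) to its Notation section; the sketch below assumes the standard arrow matrix arrow(w) = [w_0, w_1ᵀ; w_1, w_0·I] and checks the complementarity relation S_kX_ke_0 = τe_0 for a point constructed from the barrier condition of Section 4.3 (with τ → 0 this is exactly (3.9)):

```python
import numpy as np

def arrow(w):
    """Arrow matrix [w0, w1^T; w1, w0*I] of a second order cone vector
    (assumed definition; the thesis defers to its Notation section)."""
    w = np.asarray(w, dtype=float)
    A = w[0] * np.eye(len(w))
    A[0, :] = w
    A[:, 0] = w
    return A

s   = np.array([2.0, 1.0, 0.5])          # strictly interior: ||s_1|| < s_0
tau = 0.7
d   = s[0]**2 - s[1:] @ s[1:]            # s_0^2 - ||s_1||^2 > 0
x   = (tau / d) * np.r_[s[0], -s[1:]]    # x = -tau * grad F_S(s), cf. Sec. 4.3
e0  = np.array([1.0, 0.0, 0.0])

assert np.allclose(arrow(s) @ arrow(x) @ e0, tau * e0)
```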


Chapter 4

Interior-Point Methods

Interior-Point methods for solving optimization problems were first introduced in 1984 by Karmarkar in his famous paper (Karmarkar, 1984). Karmarkar's algorithm has polynomial complexity, which means that the problem can be solved in polynomial time. IP methods were first developed for LPs. LPs have played an important role in optimization since their formulation in the 1930s and 1940s (von Neumann, 1937; Kantorovich, 1939; Dantzig, 1963). One of the advantages of IP methods is that they can easily be extended from the LP case to other optimization problems such as second order cone programming and semidefinite programming. In this chapter the IP framework will be introduced by applying a modification of Newton's method to the KKT conditions.

4.1 Newton’s Method

Let us consider the problem of solving the KKT conditions (3.6–3.11). One direct approach is to apply Newton's method to equations (3.6–3.9) to obtain search directions, and then choose a step length so that the inequalities (3.10–3.11) are satisfied. With z = (x_0, x, y, s) write (3.6–3.9) as

F(z) = [A_0y − c_0
        s + Ay − c
        A_0ᵀx_0 + Aᵀx + f
        SXe              ] = 0    (4.1)

Important to note is that the last equality condition is not linear. Newton's method linearizes (4.1) around the current point to obtain the search direction


by solving

J(z)[∆y; ∆s; ∆x_0; ∆x] = −F(z)    (4.2)

where J(z) is the Jacobian of F(z). If the current iterate is feasible then (4.2) becomes

[A_0  0  0    0
 A    I  0    0
 0    0  A_0ᵀ Aᵀ
 0    X  0    S ] [∆y; ∆s; ∆x_0; ∆x] = [0; 0; 0; −SXe]    (4.3)

To avoid violating the inequality constraints, a line search is performed to calculate the maximum step length that is allowed. Unfortunately, the search direction generated by Newton's method is aggressive in the sense that it wants to decrease the cost function without any consideration of the inequality constraints, and hence only a small step can be used if feasibility is to be maintained. Therefore a large number of iterations is needed to reach the optimum, if it is reached at all. A less aggressive search direction is obtained if one modifies Newton's method in the following way, see (Wright, 1997, p. 6):

1 Alter the search direction towards the interior of the feasible region.

2 Keep the variables from moving too close to the boundary of the feasible region.

These modifications are discussed in the following sections.

4.2 Central Path

The central path describes an arc in the interior of the feasible set and is obtained by relaxing the complementary slackness condition (3.9):

SXe = τe (4.4)

where τ > 0. It can be shown that the arc is uniquely defined for each τ > 0 if and only if the feasible set is nonempty, see (Wright, 1997; Wolkowicz et al., 2000, Theorem 9.2.1). As τ approaches zero, the solution of the relaxed KKT conditions approaches the optimal solution. By applying Newton's method to the relaxed system the search direction is going to be biased toward the interior, and hence a longer step can be applied before violating the inequality constraints. The idea is then to gradually decrease τ towards zero. Notice

Figure 4.1: The central path

that for each fixed value of τ, the solution of the KKT conditions will result in a duality gap µ which is equal to (N_1 + N_2)τ, where N_1 is the number of nonnegative orthant cones of dimension one, and where N_2 is the number of second order cones. Let N = N_1 + N_2. Therefore, choosing τ = µ/N, where µ is the duality gap for a given strictly feasible point, and applying Newton's method will result in a solution on the central path with the same duality gap µ. Then reducing τ and applying Newton's method again will result in a new point on the central path with lower duality gap. By taking τ = σµ/N, where σ ∈ [0, 1] and where µ is equal to the duality gap of the current iterate, steps can be obtained which are directed towards the central path if σ = 1, and towards decreasing the duality gap to zero if σ = 0. Intermediate choices of σ can be seen as a trade-off between reducing µ and improving centrality. The equations for the search direction become:

[A_0  0  0    0
 A    I  0    0
 0    0  A_0ᵀ Aᵀ
 0    X  0    S ] [∆y; ∆s; ∆x_0; ∆x] = [0; 0; 0; −SXe + σ(µ/N)e]    (4.5)

Many methods for how to follow the central path exist. The method that will be used in this thesis is a so called potential reduction method, which will be described in Section 4.4. Other methods are so called path following algorithms, which explicitly restrict the iterates to a neighborhood of the central path, see (Wolkowicz et al., 2000, Chapter 10). Figure 4.1 shows the central path parameterized by τ and how the iterates follow the central path.
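For a small instance, (4.5) can be assembled and solved directly with dense linear algebra. The data below is invented (a two-variable LP with one equality constraint and σ = 0.5); the assertions check that the computed direction preserves primal and dual feasibility:

```python
import numpy as np

# Tiny strictly feasible LP in the form (3.3): n = 2, m0 = 1, m = 2.
A0 = np.array([[1.0, 1.0]]); c0 = np.array([1.0])
A  = -np.eye(2);             c  = np.array([1.0, 1.0])
f  = np.array([1.0, 2.0])

y  = np.array([0.5, 0.5]);  s = c - A @ y              # primal feasible, s > 0
x0 = np.array([0.0]);       x = np.array([1.0, 2.0])   # dual feasible

mu, N, sigma = x @ s, 2, 0.5
S, X = np.diag(s), np.diag(x)

# Assemble the block system (4.5) for the unknowns (dy, ds, dx0, dx).
Z = np.zeros
J = np.block([
    [A0,        Z((1, 2)), Z((1, 1)), Z((1, 2))],
    [A,         np.eye(2), Z((2, 1)), Z((2, 2))],
    [Z((2, 2)), Z((2, 2)), A0.T,      A.T      ],
    [Z((2, 2)), X,         Z((2, 1)), S        ],
])
rhs = np.r_[np.zeros(5), -s * x + sigma * mu / N]
sol = np.linalg.solve(J, rhs)
dy, ds, dx0, dx = sol[:2], sol[2:4], sol[4:5], sol[5:]

assert np.allclose(A0 @ dy, 0)                  # stays on A0 y = c0
assert np.allclose(A @ dy + ds, 0)              # keeps s = c - A y
assert np.allclose(A0.T @ dx0 + A.T @ dx, 0)    # keeps dual feasibility
```

In a practical solver this dense solve is replaced by structure-exploiting factorizations, which is the topic of Chapter 5.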


4.3 Barrier Function

Another way to derive IP methods is to remove the inequality constraints by adding a barrier term to the primal objective function. The main idea behind this barrier is to keep the variables from becoming infeasible by having the barrier go to infinity as the variables approach the boundary of the feasible region, as is seen in Figure 4.2. The barriers that will be used in this

Figure 4.2: A contour plot of a barrier function for the nonnegative orthant.

thesis are logarithmic barrier functions. For the nonnegative orthant in R^m it is defined as

F_L(w) = −Σ_{i=1}^m ln(w_i)    (4.6)

and for the second order cone in R^m it is defined as

F_S(w) = −(1/2) ln(w_0² − ‖w_1‖²)    (4.7)

Both barriers are smooth and convex in the interior. The first derivatives of the barrier functions are given by

∇F_L(w) = −[1/w_1 … 1/w_m]ᵀ

∇F_S(w) = −1/(w_0² − ‖w_1‖²) [w_0; −w_1]
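The second order cone gradient can be sanity-checked against finite differences; the following sketch does so at an arbitrary strictly interior point:

```python
import numpy as np

def F_S(w):
    """Second order cone barrier (4.7)."""
    return -0.5 * np.log(w[0]**2 - w[1:] @ w[1:])

def grad_F_S(w):
    """Closed-form gradient stated above."""
    d = w[0]**2 - w[1:] @ w[1:]
    return (-1.0 / d) * np.r_[w[0], -w[1:]]

w = np.array([2.0, 0.5, -1.0])       # strictly interior point
g = grad_F_S(w)

h, num = 1e-6, np.zeros_like(w)      # central finite differences
for i in range(len(w)):
    e = np.zeros_like(w); e[i] = h
    num[i] = (F_S(w + e) - F_S(w - e)) / (2.0 * h)

assert np.allclose(g, num, atol=1e-5)
```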

Introduce the convention that for a cone K = K_1 × … × K_N the cones K_k, k = 1, …, N_1 are nonnegative orthants and the cones K_k, k = N_1 + 1, …, N are second order cones. Then

F(s) = Σ_{k=1}^{N_1} F_L(s_k) + Σ_{k=N_1+1}^{N} F_S(s_k)    (4.8)

is a barrier for the cone K. Now add τF(s) to the objective function in (3.3) and keep the cone constraint only implicitly. The primal optimization problem will then be:

min_{y,s}  fᵀy + τF(s)    (4.9)
subject to  s = c − Ay
            A_0y = c_0

Form the corresponding Lagrangian

L(z) = fᵀy + τF(s) + xᵀ(s + Ay − c) + x_0ᵀ(A_0y − c_0)

Necessary and sufficient optimality conditions for (4.9) are given by

∇L(z) = [A_0y − c_0
         s + Ay − c
         A_0ᵀx_0 + Aᵀx + f
         τ∇_sF(s) + x    ] = [0; 0; 0; 0]    (4.10)

Notice that the last equation multiplied by S is equal to the relaxed complementary slackness condition (4.4). To see this, look at each cone separately. For a nonnegative orthant the last equation in (4.10) reads, for each row i,

−τ/s_{k,i} + x_{k,i} = 0

Multiply by s_{k,i} to obtain s_{k,i}x_{k,i} = τ. For a second order cone the equation becomes

−τ/(s_{k,0}² − ‖s_{k,1}‖²) [s_{k,0}; −s_{k,1}] + x_k = 0

Multiply by S_k from the left to obtain S_kX_ke_0 = τe_0, where X_k and S_k are arrow matrices and e_0 is the first unit vector. To conclude, we notice that the optimality conditions in (4.10) are the same as the relaxed conditions, except for the cone constraints which are kept implicit.


4.4 Potential-Reduction Methods

In Section 4.2 the central path was introduced, but the strategy for how to keep the iterates close to the central path was not presented in detail. To this end introduce the proximity measure

Ψ(x, s) = N ln(µ/N) + F(s) + F(x) + N    (4.11)

where µ = xᵀs is the duality gap. The proximity measure is nonnegative and it is zero if and only if (x, s) lies on the central path, see (Wolkowicz et al., 2000, pp. 241–242). Now define the primal-dual potential function as

Φ(x, s) = ν ln(µ) + Ψ(x, s)    (4.12)

where ν determines how much weight should be put on centrality and how much weight should be put on decreasing the duality gap. The potential function will be used to obtain equations for the search direction. This is done by applying the steepest descent algorithm to the potential function in the s-variable. Steepest descent can be obtained from a first order Taylor series approximation of Φ:

Φ(x, s + ∆s) ≈ Φ(x, s) + ∇_sΦᵀ∆s

where ∆s is called a descent direction if ∇_sΦᵀ∆s is negative. Just minimizing the second term in the approximation with respect to ∆s makes no sense, since the term is unbounded from below. This is overcome in the steepest descent algorithm by either introducing a bound on the norm of ∆s or adding a term proportional to the squared norm to the objective function. The two approaches are closely related, see e.g. (Boyd and Vandenberghe, 2001). Here we will use the latter approach:

min_{∆s}  ∇_sΦᵀ∆s + (1/2)‖∆s‖_P²

where ‖∆s‖_P² = ∆sᵀP∆s = ‖P^{1/2}∆s‖_2² and where P is a positive definite matrix. The equality constraints of (3.3) must also be satisfied by the search direction. This is needed to maintain feasibility of (3.3). From this the descent direction can be found as the solution of the following optimization problem

min_{∆y,∆s}  ∇_sΦᵀ∆s + (1/2)∆sᵀP∆s    (4.13)
subject to   A∆y + ∆s = 0
             A_0∆y = 0


This problem is convex and since P is positive definite it has a unique solution given by

A_0∆y = 0    (4.14)
A∆y + ∆s = 0    (4.15)
A_0ᵀ∆x_0 + Aᵀ∆x = 0    (4.16)
P∆s + ∆x = −∇_sΦ    (4.17)

where ∆x_0 and ∆x are Lagrange multipliers for the equality constraints. Now let P = S⁻¹X and multiply (4.17) with S. Then (4.14–4.17) can be written as

[A_0  0  0    0
 A    I  0    0
 0    0  A_0ᵀ Aᵀ
 0    X  0    S ] [∆y; ∆s; ∆x_0; ∆x] = [0; 0; 0; −S(((ν + N)/(sᵀx))x + ∇_sF(s))]    (4.18)

Notice that the last expression on the right hand side can be written as

α(−SXe + (σµ/N)e)

where σ = N/(ν + N) and α = (ν + N)/µ. Hence (4.18) only differs from (4.5) by a multiplication of the right hand side with the scalar α, so the resulting search direction will just be scaled differently. Since the search direction is multiplied with a step length that optimizes a criterion, and since the solutions of (4.5) and (4.18) only differ by a constant multiplication, it makes no difference whatsoever if (4.5) or (4.18) is used for computing the search direction, as long as σ = N/(ν + N).

Most primal-dual algorithms can be summarized as

1 find a search direction ∆z_k by solving (4.5)

2 set z_{k+1} = z_k + α_k∆z_k

3 choose α_k such that the new iterate does not violate the inequality constraints.

The potential function measures how good a given point is, i.e., it weights the distance to the central path and the value of the objective function.

After obtaining the search direction, the step length can be computed as the minimizer of Φ along the search direction. It is possible to use different step lengths for the primal and dual variables. This leads to a two dimensional optimization problem, one for a step p in the primal direction and one for a step q in the dual direction. The potential function can be expanded as

Φ(s + p∆s, x + q∆x) = N + (ν + N) ln(c_0 + c_1p + c_2q)
    + Σ_{k=1}^{N_1} ln(s_k + p∆s_k) + Σ_{k=1}^{N_1} ln(x_k + q∆x_k)
    + Σ_{k=N_1+1}^{N} (1/2) ln(d_{k,0} + d_{k,1}p + d_{k,2}p²)
    + Σ_{k=N_1+1}^{N} (1/2) ln(g_{k,0} + g_{k,1}q + g_{k,2}q²)    (4.19)

where

c_0 = Σ_{k=1}^N x_kᵀs_k,    c_1 = −Σ_{k=1}^N x_kᵀ∆s_k,    c_2 = −Σ_{k=1}^N ∆x_kᵀs_k

and where, for k = N_1 + 1, …, N,

d_{k,0} = s_{k,0}² − ‖s_{k,1}‖²
d_{k,1} = −2s_{k,0}∆s_{k,0} − 2s_{k,1}ᵀ∆s_{k,1}
d_{k,2} = ∆s_{k,0}² − ‖∆s_{k,1}‖²

The constants g_{k,i} are calculated in the same way as the constants d_{k,i}, by exchanging the primal variable s with the dual variable x. Now the minimizer of (4.19) can be obtained using standard methods, e.g., damped Newton. This plane search algorithm is similar to the one in (Vandenberghe and Boyd, 1995).
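As an illustration of the kind of routine involved, here is a generic damped Newton minimizer applied to a one-dimensional barrier-like function. This is not the thesis' plane search; the objective φ(p) = cp − ln(1 − p²) and all constants are made up for the example:

```python
import numpy as np

def damped_newton_1d(phi, dphi, d2phi, t0, tol=1e-10, max_iter=50):
    """Minimize a smooth convex 1-D function by Newton steps with
    step halving (damping) whenever the full step does not decrease phi."""
    t = t0
    for _ in range(max_iter):
        step = -dphi(t) / d2phi(t)
        alpha = 1.0
        while phi(t + alpha * step) > phi(t) and alpha > 1e-12:
            alpha *= 0.5
        t += alpha * step
        if abs(step) < tol:
            break
    return t

c = 0.5
phi   = lambda p: c * p - np.log(1.0 - p**2)        # barrier-like objective
dphi  = lambda p: c + 2.0 * p / (1.0 - p**2)
d2phi = lambda p: 2.0 * (1.0 + p**2) / (1.0 - p**2)**2

p_star = damped_newton_1d(phi, dphi, d2phi, 0.0)
assert abs(dphi(p_star)) < 1e-8                     # stationary point found
```

The actual plane search minimizes (4.19) jointly over (p, q), but the damping idea, which backtracks whenever a full Newton step increases the objective or leaves the domain of the logarithms, is the same.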

Convergence of the potential reduction method can be shown as in (Wolkowicz et al., 2000, Theorem 9.3.1; Vandenberghe and Boyd, 1996). Since (4.11) is nonnegative, the following inequality is true

Φ ≥ ν ln(µ)

which is equivalent to

µ ≤ exp(Φ/ν)

Assume that (x_0, s_0) is strictly feasible and that

Φ(x_{k+1}, s_{k+1}) ≤ Φ(x_k, s_k) − δ    (4.20)


This means that the following inequality holds

Φ(x_k, s_k) − Φ(x_0, s_0) = ν ln(µ_k/µ_0) + Ψ(x_k, s_k) − Ψ(x_0, s_0) ≤ −kδ

Since Ψ ≥ 0 the inequality can be written as

ln(µ_k/µ_0) ≤ (−kδ + Ψ(x_0, s_0))/ν

If k ≥ (ν ln(1/ε) + Ψ(x_0, s_0))/δ then ln(µ_k/µ_0) ≤ ln(ε). Hence

µ_k ≤ εµ_0

To summarize, if the potential function can be reduced by a constant δ > 0 in each iteration, then the duality gap can be reduced by a factor ε in k iterations.
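In code, the iteration bound reads as follows, with a worked arithmetic example using made-up values ν = 10, Ψ(x_0, s_0) = 5, δ = 0.25 and ε = 10⁻⁶:

```python
import math

def iterations_needed(nu, psi0, delta, eps):
    """Smallest integer k with k >= (nu*ln(1/eps) + psi0) / delta, which by
    the argument above guarantees mu_k <= eps * mu_0."""
    return math.ceil((nu * math.log(1.0 / eps) + psi0) / delta)

k = iterations_needed(10.0, 5.0, 0.25, 1e-6)
assert k == 573   # (10*ln(1e6) + 5)/0.25 ~= 572.62, rounded up
```

Note the logarithmic dependence on 1/ε: tightening the tolerance by a factor of ten only adds ν ln(10)/δ iterations.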

4.5 Nesterov-Todd Search Direction

In this section the potential function will be used to derive the Nesterov-Todd (NT) search direction for the SOCP. It has been shown that primal-dual algorithms using the NT search direction have polynomial complexity, see (Tsuchiya, 1998). Specifically, it has been shown that the potential function can be reduced by a positive constant in each iteration using the NT direction.

It was shown in Section 4.4 how a search direction could be found from the potential function by applying the steepest descent algorithm. From this we are now going to derive the NT direction following the motivation given in (Todd, 1999; Wolkowicz et al., 2000, Section 9.5). In Section 4.4 the derivation of the equation for the search direction was done using a Taylor series expansion with respect to the primal variable s. It is also possible to derive equations for search directions from an expansion with respect to the dual variable x. This results in

min_{∆x}  ∇_xΦᵀ∆x + (1/2)∆xᵀQ∆x    (4.21)
subject to  A_0ᵀ∆x_0 + Aᵀ∆x = 0

where Q is a positive definite matrix. This problem is convex and since Q is


positive definite it has a unique solution given by

A_0∆y = 0    (4.22)
A∆y + ∆s = 0    (4.23)
A_0ᵀ∆x_0 + Aᵀ∆x = 0    (4.24)
Q∆x + ∆s = −∇_xΦ    (4.25)

It is possible to choose P and Q such that (4.14–4.17) and (4.22–4.25) result in the same search direction. First note that equations (4.14–4.16) from the primal are the same as equations (4.22–4.24) from the dual. That leaves equations (4.17) and (4.25) to determine the conditions on P and Q for ∆z to be an NT search direction, i.e., it should be possible to transform

P∆s + ∆x = −((ν + N)/µ)x − ∇F(s)    (4.26)

into

∆s + Q∆x = −((ν + N)/µ)s − ∇F(x)    (4.27)

If (4.26) is multiplied with Q and if the following conditions hold

• P = Q⁻¹

• Qx = s

• Q∇F(s) = ∇F(x)

then equations (4.26) and (4.27) are the same. In order to find matrices P and Q which satisfy these conditions, a matrix G is defined in Appendix A. There we show that by letting P = G⁻² and Q = G², the three conditions for the NT search direction are satisfied.
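For the nonnegative orthant the scaling can be written down explicitly: Q = G² = diag(s/x) satisfies all three conditions, as the following sketch verifies. (The diagonal formula is the standard NT scaling for the orthant; the general matrix G of Appendix A is not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0.5, 2.0, size=5)   # arbitrary interior primal slack
x = rng.uniform(0.5, 2.0, size=5)   # arbitrary interior dual point

Q = np.diag(s / x)                  # NT scaling Q = G^2 for the orthant
P = np.linalg.inv(Q)

grad_F = lambda w: -1.0 / w         # gradient of the orthant barrier (4.6)
assert np.allclose(P @ Q, np.eye(5))            # P = Q^{-1}
assert np.allclose(Q @ x, s)                    # Q x = s
assert np.allclose(Q @ grad_F(s), grad_F(x))    # Q grad F(s) = grad F(x)
```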

The matrix G can also be seen as a scaling matrix, or equivalently as a change of variables. Let the scaling matrix operate on the vectors and matrices of the optimization problem as

Ā = G⁻¹A,    x̄ = Gx,    s̄ = G⁻¹s

In the scaled variables (4.14–4.17) can be written as

A_0∆y = 0    (4.28)
A∆y + G∆s̄ = 0    (4.29)
A_0ᵀ∆x_0 + Āᵀ∆x̄ = 0    (4.30)
G⁻¹∆x̄ + G⁻¹∆s̄ = −∇_sΦ    (4.31)

By multiplying (4.29) with G⁻¹ and (4.31) with G, and by realizing that −G∇_sΦ = −((ν + N)/µ)Ve + V⁻¹e, which is shown in Appendix A, the equations for the NT search direction can be written as

[A_0  0  0    0
 Ā    I  0    0
 0    0  A_0ᵀ Āᵀ
 0    I  0    I ] [∆y; ∆s̄; ∆x_0; ∆x̄] = [0; 0; 0; −((ν + N)/µ)Ve + V⁻¹e]    (4.32)

where V is a block diagonal matrix defined in Appendix A.


Chapter 5

Efficient Computation of Search Direction

In this chapter the optimization problem presented in Chapter 2, which has a quadratic objective function and linear and quadratic constraints, will be solved using the IP method discussed in Chapter 4. The key to an efficient implementation of a solver for this problem is to rewrite it as an SOCP. This can be done in many different ways. However, done carefully, it is possible to use Riccati recursions for computing the search directions, which greatly improves speed.

5.1 Control Problem

In this section the control problem is described. First the model is presented. Then the performance measure is introduced. The optimization problem is then reformulated as an SOCP. Consider the following model for k = 0, …, N − 1:

x_{k+1} = A_kx_k + B_ku_k
z_k = C_kx_k + D_ku_k
b_k ≥ E_kx_k + F_ku_k    (5.1)
1 ≥ x_NᵀWx_N

where x_k ∈ R^n is the state, u_k ∈ R^m is the control signal, z_k ∈ R^p is the output, and where A_k ∈ R^{n×n}, B_k ∈ R^{n×m}, C_k ∈ R^{p×n}, D_k ∈ R^{p×m}, E_k ∈ R^{q×n}, F_k ∈ R^{q×m}, b_k ∈ R^q and 0 < W = Wᵀ ∈ R^{n×n}. The


inequality (5.1) should be interpreted as a component-wise inequality. With abuse of notation we will denote both the given initial value and the state variable at time zero with x_0. The performance criterion to minimize is defined as

φ = Σ_{k=0}^{N−1} z_kᵀz_k + x_NᵀPx_N

where P is positive semidefinite.

The optimization problem can be reformulated as

min_t  Σ_{k=0}^N t_k    (5.2)
subject to  Hy = d
            My ≤ b
            z_kᵀz_k ≤ t_k,  k = 0, …, N − 1
            x_NᵀPx_N ≤ t_N
            x_NᵀWx_N ≤ 1

where yᵀ = [x_0ᵀ u_0ᵀ … x_{N−1}ᵀ u_{N−1}ᵀ x_Nᵀ], dᵀ = [x_0ᵀ 0 … 0] and

H = [ I       0       0        …        0
     −A_0    −B_0     I        …        0
      ⋮                ⋱        ⋱        ⋮
      0       …      −A_{N−1} −B_{N−1}  I ]    (5.3)

M = [⊕_{k=0}^{N−1} [E_k F_k]    0]
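The structure of H is easy to build and check in code. The sketch below (a hypothetical one-state, one-input system; the helper name build_H is ours) assembles (5.3) and verifies that Hy = d holds exactly for a simulated trajectory:

```python
import numpy as np

def build_H(A_list, B_list):
    """Assemble the block matrix H of (5.3) for the stacked variable
    y = (x_0, u_0, ..., x_{N-1}, u_{N-1}, x_N)."""
    N = len(A_list)
    n, m = A_list[0].shape[0], B_list[0].shape[1]
    H = np.zeros(((N + 1) * n, N * (n + m) + n))
    H[:n, :n] = np.eye(n)                        # first block row fixes x_0
    for k in range(N):
        r, c = (k + 1) * n, k * (n + m)
        H[r:r+n, c:c+n]         = -A_list[k]
        H[r:r+n, c+n:c+n+m]     = -B_list[k]
        H[r:r+n, c+n+m:c+2*n+m] = np.eye(n)      # coefficient of x_{k+1}
    return H

# Illustrative 1-state/1-input system over N = 2 steps.
A = [np.array([[0.9]])] * 2
B = [np.array([[0.1]])] * 2
H = build_H(A, B)

x0, u = np.array([1.0]), [np.array([0.5]), np.array([-0.3])]
x1 = A[0] @ x0 + B[0] @ u[0]
x2 = A[1] @ x1 + B[1] @ u[1]
y = np.concatenate([x0, u[0], x1, u[1], x2])
d = np.concatenate([x0, np.zeros(2)])
assert np.allclose(H @ y, d)
```

The block bidiagonal pattern of H is exactly what the Riccati recursion of Section 5.3 exploits.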

The reason for having one quadratic constraint for each k, instead of one for all k, is explained in Section 5.3. To be able to solve this optimization problem efficiently we will formulate it as an SOCP. To this end define

L_k = [−1    0      0
        1    0      0
        0   −2C_k  −2D_k]

c_kᵀ = [1 1 0],  k = 0, …, N,    c_{N+1}ᵀ = [1 0]

P̄ = [−1    0
       1    0
       0   −2P^{1/2}],    W̄ = [0    0
                               0   −W^{1/2}],    fᵀ = [e_0ᵀ … e_0ᵀ]

(the bars distinguish these data matrices from the cost matrix P and the terminal constraint matrix W)


where e_0 ∈ R^{n+m+1} is the first unit vector. Then the optimization problem can be written as the SOCP

min_{y,s}  fᵀy    (5.4)
subject to  s_k = c_k − L_ky_k,  k = 0, …, N − 1
            s_N = c_N − P̄y_N
            s_{N+1} = c_{N+1} − W̄y_N
            r = b − My
            0 = d − Hy
            0 ≤_{K_k} s_k,  k = 0, …, N + 1
            0 ≤ r

where y_kᵀ = [t_k x_kᵀ u_kᵀ], y_Nᵀ = [t_N x_Nᵀ], and where M and H now denote modified versions of the earlier M and H with zero columns inserted where the t_k enter. Here the K_k are second order cones, and 0 ≤ r should be interpreted component-wise.

5.2 Search Direction

In this section the equations for the NT search direction will be stated. Following the method described in Chapter 4, the NT search direction can be found by altering the notation in (4.32). In order to do that, define the matrix L and vector c as

L = [⊕_{k=0}^{N−1} L_k ⊕ P̄
     [0  W̄]              ],    cᵀ = [c_0ᵀ … c_{N+1}ᵀ]

where the last block row [0 W̄] acts on y_N, sharing its columns with P̄.

Partition the inverse of the scaling matrix for the second order cones as

G_k⁻¹(λ_k, s_k) = (1/ω_k) [ α_k        −β_kᵀ
                            −β_k   I + β_kβ_kᵀ/(1 + α_k) ]
                = [ α_k       −β_{k,0}   −β_{k,1}ᵀ
                   −β_{k,0}    Σ_{k,11}   Σ_{k,12}
                   −β_{k,1}    Σ_{k,21}   Σ_{k,22} ]

where α_k, β_k and ω_k are defined in Appendix A. Notice that this equation also defines (α_k, β_k). For the nonnegative orthant the scaling matrix is given by

Σ_2 = diag(r_k/γ_k)^{1/2}


Let G_k and Σ_2 operate on the unscaled problem as follows (with a slight abuse of notation, the scaled quantities are denoted by the same symbols):

L_k ← G_k⁻¹L_k,  k = 0, …, N − 1
P̄ ← G_N⁻¹P̄,    W̄ ← G_{N+1}⁻¹W̄
M ← Σ_2⁻¹M
λ_k ← G_kλ_k,  s_k ← G_k⁻¹s_k,  k = 0, …, N + 1
γ ← Σ_2γ,  r ← Σ_2⁻¹r

L = [⊕_{k=0}^{N−1} L_k ⊕ P̄
     [0  W̄]              ]

The NT scaling matrix maps the primal and dual variables to the same vector v, which is given by

v_k = G_kλ_k = G_k⁻¹s_k,  k = 0, …, N + 1
v_{N+2} = Σ_2γ = Σ_2⁻¹r

With this scaling the following equations for the search direction are obtained

[0    0    0    H   0   0
 0    0    0    L   I   0
 0    0    0    M   0   I
 Hᵀ   Lᵀ   Mᵀ   0   0   0
 0    I    0    0   I   0
 0    0    I    0   0   I ] [∆ρ; ∆λ; ∆γ; ∆y; ∆s; ∆r] = [0; 0; 0; 0; w_1; w_2]    (5.5)

where w_1 = ((ϑ + ν)/µ)v − V⁻¹e_0, w_2 = ((ϑ + ν)/µ)v_{N+2} − V_{N+2}⁻¹1, ϑ = N + 2 + Nq and where

V = ⊕_{k=0}^{N+1} V_k

with V_k = arrow(v_k) and V_{N+2} = diag(v_{N+2}). Now that the equations for the search direction have been presented, the question arises how to solve this linear system of equations. A simple way is to just apply an LU factorization to (5.5) and then use forward and backward substitutions. A more efficient way, using a Riccati recursion, is described in the next section.

5.3 Efficient Solution of Equations for Search Direction

In this section it is shown how a Riccati recursion can be used to efficiently solve the equations for the search direction obtained in Section 5.2. From the two last


block-rows of (5.5) it follows that ∆s = −∆λ + w_1 and ∆r = −∆γ + w_2. By substituting this back into (5.5) the following equation is obtained

[0    0    0   H
 0   −I    0   L
 0    0   −I   M
 Hᵀ   Lᵀ   Mᵀ  0 ] [∆ρ; ∆λ; ∆γ; ∆y] = [0; −w_1; −w_2; 0]    (5.6)

Reorder the variables in ∆y as

[∆t_0 … ∆t_N  ∆x_0ᵀ ∆u_0ᵀ … ∆x_{N−1}ᵀ ∆u_{N−1}ᵀ ∆x_Nᵀ]

and partition the dual variable as ∆λ_kᵀ = [∆λ_{kt}ᵀ ∆λ_{ky}ᵀ], where ∆λ_{kt} contains the first two elements of ∆λ_k. Now define

∆λ_t = [∆λ_{0,t}; …; ∆λ_{N+1,t}];    ∆λ = [∆λ_{0,y}; …; ∆λ_{N+1,y}]

Then with some reordering of rows, (5.6) can be reformulated as

[α     −I     0    0    0       ΩL
 0     αᵀ    0    βᵀ   0       0
 0     0      0    0    0       H
 β     0      0   −I    0       ΣL
 0     0      0    0   −I       Σ_2M
 0     LᵀΩᵀ  Hᵀ   LᵀΣᵀ MᵀΣ_2ᵀ  0  ] [∆t; ∆λ_t; ∆ρ; ∆λ; ∆γ; ∆y] = [−w_t; 0; 0; −w_q; −w_2; 0]    (5.7)

where w_1ᵀ = [w_tᵀ w_qᵀ], β = ⊕_{k=0}^{N+1}[β_{k,1} + Σ_{k,21}], Σ = ⊕_{k=0}^{N+1} Σ_{k,22}, and where

α = ⊕_{k=0}^{N+1} [−α_k − β_{k,0}
                    β_{k,0} + Σ_{k,11}]

Ω = ⊕_{k=0}^{N+1} [−β_{k,1}ᵀ
                    Σ_{k,12}]

L = [−2C_0  −2D_0    0         …          0
      ⋮        ⋱      ⋱                    ⋮
      0       …     −2C_{N−1} −2D_{N−1}   0
      0       …      0         0         −2P^{1/2}
      0       …      0         0         −W^{1/2}]


From block-rows 4 and 5 of (5.7) it follows that

∆λ = ΣL∆y + β∆t + w_q
∆γ = Σ_2M∆y + w_2

By substituting this back into (5.7), the following equation is obtained

[βᵀβ     αᵀ    βᵀΣL   0
 α       −I    ΩL     0
 LᵀΣᵀβ   LᵀΩᵀ  Q      Hᵀ
 0       0     H      0  ] [∆t; ∆λ_t; ∆y; ∆ρ] = [−βᵀw_q; −w_t; −w; 0]    (5.8)

where

Q = MᵀΣ_2ᵀΣ_2M + LᵀΣᵀΣL    (5.9)
w = LᵀΣᵀw_q + MᵀΣ_2ᵀw_2    (5.10)

Now partition (5.8) as indicated by the lines:

[Φ_1    Φ_12
 Φ_12ᵀ  Φ_2 ] [∆_1; ∆_2] = [U_1; U_2]    (5.11)

The matrix Φ_1 is a (3(N + 1) + 1) × (3(N + 1) + 1) block matrix which can easily be inverted because of the block diagonal structure of α and β. Also notice that (5.11) can be solved via

∆_1 = (Φ_1 − Φ_12Φ_2⁻¹Φ_12ᵀ)⁻¹(U_1 − Φ_12Φ_2⁻¹U_2)    (5.12)
∆_2 = Φ_2⁻¹(U_2 − Φ_12ᵀ∆_1)

Since Q has a block-diagonal structure and H is built from the dynamic system with a very specific structure, Φ_2⁻¹U_2 can be computed efficiently using a Riccati recursion approach as in (Wright, 1993). Had we replaced the N quadratic constraints z_kᵀz_k ≤ t_k with one constraint Σ_{k=0}^{N−1} z_kᵀz_k ≤ t in (5.2), then Q would not have had the desired structure and the Riccati recursion approach could not have been applied. By using the matrix inversion lemma it holds that

(Φ_1 − Φ_12Φ_2⁻¹Φ_12ᵀ)⁻¹ = Φ_1⁻¹ − Φ_1⁻¹Φ_12(Φ_12ᵀΦ_1⁻¹Φ_12 − Φ_2)⁻¹Φ_12ᵀΦ_1⁻¹    (5.13)

It can be shown that (Φ_12ᵀΦ_1⁻¹Φ_12 − Φ_2) has the same structure as Φ_2, see Appendix B, and thus can be factorized by a Riccati recursion as well. Substituting (5.13) into (5.12) we realize that the algorithm for efficiently computing the search direction can be divided into the following steps


1 calculate η_1 = Φ_2⁻¹U_2 using a Riccati recursion

2 let a = U_1 − Φ_12η_1

3 let ξ = Φ_12ᵀΦ_1⁻¹a

4 solve (Φ_12ᵀΦ_1⁻¹Φ_12 − Φ_2)η_2 = ξ using a Riccati recursion

5 now ∆_1 = Φ_1⁻¹a − Φ_1⁻¹Φ_12η_2

6 then ∆_2 = η_1 − Φ_2⁻¹Φ_12ᵀ∆_1, where the second term can also be computed using a Riccati recursion

Notice that in Step 6 parts of the Riccati recursion used in Step 1 can be reused. Detailed information on the Riccati factorization approach is given in Appendix B; see also (Rao et al., 1997). The computational complexity grows linearly with the time horizon N, as will be discussed further in Chapter 7.
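The six steps can be traced on a small dense system, with ordinary dense solves standing in for the two structured Riccati-recursion solves. This is only a sketch with made-up sizes, not the structured implementation:

```python
import numpy as np

# Build a symmetric KKT-style matrix [[Phi1, Phi12], [Phi12^T, Phi2]].
# Phi1 is taken diagonal to mimic "easy to invert"; all data are made up.
rng = np.random.default_rng(0)
n1, n2 = 4, 6
Phi1 = np.diag(rng.uniform(1.0, 2.0, n1))
Phi12 = rng.standard_normal((n1, n2))
Phi2 = rng.standard_normal((n2, n2))
Phi2 = Phi2 + Phi2.T + 10 * np.eye(n2)          # symmetric and well conditioned
U1, U2 = rng.standard_normal(n1), rng.standard_normal(n2)

eta1 = np.linalg.solve(Phi2, U2)                # step 1 (a Riccati recursion in the text)
a = U1 - Phi12 @ eta1                           # step 2
Phi1inv = np.diag(1.0 / np.diag(Phi1))          # Phi1 diagonal -> trivial inverse
xi = Phi12.T @ (Phi1inv @ a)                    # step 3
S = Phi12.T @ Phi1inv @ Phi12 - Phi2            # step 4: same structure as Phi2
eta2 = np.linalg.solve(S, xi)
d1 = Phi1inv @ a - Phi1inv @ (Phi12 @ eta2)     # step 5
d2 = eta1 - np.linalg.solve(Phi2, Phi12.T @ d1) # step 6 (reuses the Phi2 solve)

# Check against a direct solve of (5.11).
K = np.block([[Phi1, Phi12], [Phi12.T, Phi2]])
ref = np.linalg.solve(K, np.concatenate([U1, U2]))
assert np.allclose(np.concatenate([d1, d2]), ref)
```

Steps 1 and 6 involve the same matrix $\Phi_2$, which is why the factorization from the first Riccati recursion can be reused in the last step.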


Chapter 6

Strictly Feasible Initial Points

So far we have assumed that the starting point is strictly feasible. It is, however, not a trivial task to find a strictly feasible initial point. In this chapter it will be shown how to obtain strictly feasible initial primal and dual points for the optimization problem (5.4). Existence of solutions will also be discussed.

6.1 Primal Initial Point

When looking for a strictly feasible primal point for problem (5.4), one realizes that the inequalities involving $t_k$ are easily satisfied by choosing $t_k$ large enough. The remaining inequalities are more difficult, however. Therefore, consider the following relaxed problem:

$$\begin{aligned}\min_{y,\tau}\quad & R\tau\\ \text{subject to}\quad & Hy = d\\ & My \le b + \tau\\ & x_N^TWx_N \le 1 + \tau\end{aligned} \tag{6.1}$$

where $R > 0$. When (6.1) has a solution $(y, \tau)$ for which $\tau < 0$, it holds that $y$ is a strictly feasible primal point for (5.4). The corresponding dual problem can


be stated as follows:

$$\begin{aligned}\max_{\rho,\gamma,\lambda}\quad & -d^T\rho - b^T\gamma - c^T\lambda\\ \text{subject to}\quad & W^T\lambda + M^T\gamma + H^T\rho = -f\\ & \|\lambda_1\| \le \lambda_0\\ & 0 \le \gamma\end{aligned} \tag{6.2}$$

where $H = \begin{bmatrix}0 & H\end{bmatrix}$ and where

$$M = \begin{bmatrix}-1 & \oplus_{k=0}^{N-1}\begin{bmatrix}E_k & F_k\end{bmatrix}\\ 0 & 0\end{bmatrix},\qquad W = \begin{bmatrix}-1 & 0 & 0\\ -1 & 0 & 0\\ 0 & 0 & -2W^{\frac12}\end{bmatrix}$$

$$c^T = \begin{bmatrix}2 & 0 & 0\end{bmatrix},\qquad f^T = \begin{bmatrix}R & 0 & \dots & 0\end{bmatrix}$$

A primal-dual algorithm can be used to determine whether the original problem is feasible or not. More precisely, three different cases of solutions to (6.1) and (6.2) exist:

• If $-d^T\rho - b^T\gamma - c^T\lambda > 0$, then no strictly feasible solution to the original problem exists.

• If $\tau < 0$, then a strictly feasible solution to the original problem exists.

• If $\tau = -d^T\rho - b^T\gamma - c^T\lambda = 0$, then a feasible, but not strictly feasible, solution to the original problem exists.

These three cases are illustrated in Figures 6.1–6.3.

Notice that this so-called Phase 1 problem can also be reformulated as an SOCP as in Section 5.1, and thus it can be solved using a similar approach. However, we then also have to find a strictly feasible point for the Phase 1 problem itself. A strictly feasible primal point is trivial to find: just let $u = 0$ and calculate $x$ using the dynamic equation, i.e., recursively compute $x_k$ from $x_{k+1} = A_kx_k$. Then choose $\tau$ large enough to satisfy the inequalities. A strictly feasible dual point has to satisfy

$$W^T\lambda + M^T\gamma + H^T\rho = -f \tag{6.3}$$

$$\|\lambda_1\| < \lambda_0 \tag{6.4}$$

$$0 < \gamma \tag{6.5}$$


(Figures 6.1–6.3 each plot the primal and dual objectives versus iteration number.)

Figure 6.1: The dual objective is greater than zero. No strictly feasible solution to the original problem exists.

Figure 6.2: The primal objective is less than zero. A strictly feasible solution to the original problem exists.

Figure 6.3: The primal and dual objectives are equal to zero. A feasible, but not strictly feasible, solution to the original problem exists.


If we let $\lambda_0 = 1$ and $\lambda_1 = 0$, the following conditions remain to be satisfied:

$$\sum_{k=1}^{N_q}\gamma_k = R - 1 \tag{6.6}$$

$$E^T\gamma + G^T\rho = 0 \tag{6.7}$$

$$F^T\gamma + B^T\rho = 0 \tag{6.8}$$

$$\gamma > 0 \tag{6.9}$$

where

$$B^T = \begin{bmatrix}0 & -B_0^T & & \\ \vdots & & \ddots & \\ 0 & & & -B_{N-1}^T\end{bmatrix},\qquad E^T = \begin{bmatrix}E_0^T & & & \\ & \ddots & & \\ & & E_{N-1}^T & \\ 0 & \cdots & 0 & 0\end{bmatrix}$$

$$G^T = \begin{bmatrix}I & -A_0^T & & \\ & \ddots & \ddots & \\ & & I & -A_{N-1}^T\\ 0 & \cdots & 0 & I\end{bmatrix},\qquad F^T = \oplus_{k=0}^{N-1}F_k^T$$

Equations (6.7)–(6.8) can be viewed as a linear system evolving backwards in time. Therefore we can solve (6.7)–(6.9) recursively for $k$:

$$\rho_{k-1} = A_{k-1}^T\rho_k - E_{k-1}^T\gamma_{k-1},\qquad \rho_N = 0 \tag{6.10}$$

$$0 = B_{k-1}^T\rho_k - F_{k-1}^T\gamma_{k-1} \tag{6.11}$$

$$0 < \gamma_{k-1} \tag{6.12}$$

After solving (6.7)–(6.9), constraint (6.6) can be satisfied by taking $R = 1 + \sum_{k=1}^{N_q}\gamma_k$. Conditions (6.11) and (6.12) can be formulated as a small linear feasibility problem for each $k$; more precisely, as finding a strictly feasible $\gamma_{k-1}$ satisfying

$$F_{k-1}^T\gamma_{k-1} = B_{k-1}^T\rho_k$$

$$\gamma_{k-1} \ge 0$$


Then $\rho_{k-1}$ is found from (6.10). The optimization problem has a solution $\gamma_{k-1} > 0$ if the set of solutions to the dual problem

$$\begin{aligned}\max_u\quad & \rho_k^TB_{k-1}u\\ \text{subject to}\quad & F_{k-1}u \ge 0\end{aligned}$$

is bounded and nonempty, see (Wright, 1997). A sufficient condition for this is that $F_{k-1}u \ge 0$ defines a nonempty polytope. This is true for the important special case

$$\varpi_k \le E_kx_k + F_ku_k \le \varsigma_k$$

where $F_k$ has full column rank. This can be written as (5.1) with

$$E_k = \begin{bmatrix}E_k\\ -E_k\end{bmatrix},\qquad F_k = \begin{bmatrix}F_k\\ -F_k\end{bmatrix},\qquad b_k = \begin{bmatrix}\varsigma_k\\ -\varpi_k\end{bmatrix}$$

Because $F_{k-1}u \ge 0$ is then equivalent to $F_{k-1}u = 0$, and $F_{k-1}$ has full column rank, it follows that $u = 0$, which means that a strictly feasible $\gamma_{k-1}$ exists.
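For this two-sided-bound special case, a strictly feasible $\gamma_{k-1}$ can even be written down constructively: solve $F^Tv = B^T\rho$ once and split $v$ over the stacked blocks $[F;\,-F]$ with a positive shift. A sketch with made-up dimensions and right-hand side:

```python
import numpy as np

# The multiplier gamma = (g_plus, g_minus) for the stacked constraints
# [F; -F] must satisfy F^T g_plus - F^T g_minus = rhs with gamma > 0.
rng = np.random.default_rng(1)
m, nu = 5, 2
F = rng.standard_normal((m, nu))      # full column rank with probability 1
rhs = rng.standard_normal(nu)         # stands in for B_{k-1}^T rho_k

# Solve F^T v = rhs (minimum-norm solution; it is exact because F^T has
# full row rank), then shift both halves of the split strictly positive.
v = np.linalg.lstsq(F.T, rhs, rcond=None)[0]
g_plus = np.maximum(v, 0) + 1.0
g_minus = np.maximum(-v, 0) + 1.0
gamma = np.concatenate([g_plus, g_minus])

assert np.all(gamma > 0)                          # strictly feasible
assert np.allclose(F.T @ (g_plus - g_minus), rhs) # equality constraint holds
```

The shift by 1 keeps both halves strictly positive while their difference still equals $v$, so the equality constraint is unaffected.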

6.2 Dual Initial Point

In the previous section we showed how to find a strictly feasible primal point by solving a so-called Phase 1 problem. It now remains to show how to find a strictly feasible dual point $(\rho, \lambda, \gamma)$ satisfying the dual constraints of (5.4), i.e.

$$H^T\rho + L^T\lambda + M^T\gamma = -f \tag{6.13}$$

$$\|\lambda_1\| \le \lambda_0 \tag{6.14}$$

$$0 \le \gamma \tag{6.15}$$

where the matrices and variables are defined in Sections 5.1 and 5.2. By choosing $\lambda_{k,1} = 0$ and $\lambda_{k,0} = 1$, $N+1$ rows of (6.13) will be satisfied. The remaining problem can then be stated as

$$E^T\gamma + G^T\rho = 0 \tag{6.16}$$

$$F^T\gamma + B^T\rho = 0 \tag{6.17}$$

$$\gamma > 0 \tag{6.18}$$

which is the same as (6.7)–(6.9). This means that $(\rho, \gamma)$ solving the Phase 1 problem, together with $\lambda_{k,1} = 0$ and $\lambda_{k,0} = 1$, is strictly feasible for (6.13)–(6.15).
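To tie the chapter together, the strictly feasible primal initialization for the Phase 1 problem (set $u = 0$, roll the dynamics forward, then pick $\tau$ with a strict margin) can be sketched as follows; all problem data below are made up:

```python
import numpy as np

# u = 0, so the state rollout is simply x_{k+1} = A x_k (hypothetical A, x0).
rng = np.random.default_rng(2)
N, n = 5, 2
A = np.array([[0.96, 0.0], [0.03, 0.96]])
x0 = np.array([0.5, -0.5])

xs = [x0]
for _ in range(N):
    xs.append(A @ xs[-1])

# Hypothetical inequality data for My <= b + tau and x_N^T W x_N <= 1 + tau.
M = rng.standard_normal((8, (N + 1) * n))
b = rng.standard_normal(8)
W = 10 * np.eye(n)
y = np.concatenate(xs)          # control part omitted here since u = 0

# Choose tau strictly larger than the worst constraint violation.
viol = max(np.max(M @ y - b), float(xs[-1] @ W @ xs[-1] - 1.0))
tau = viol + 1.0

assert np.all(M @ y < b + tau)              # strictly feasible in (6.1)
assert xs[-1] @ W @ xs[-1] < 1.0 + tau      # terminal constraint, strictly
```

Any margin larger than zero works; the point only has to lie strictly inside the relaxed constraints.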


Chapter 7

Computational Results

This chapter is divided into two parts. First the complexity of the algorithm will be investigated, and then an example will be presented to show some computational results.

7.1 Complexity Analysis

7.1.1 Flop Count

The complexity analysis will be done by computing how many flops are required to solve the equations for the search direction. A flop is a floating-point operation, which is here defined as one addition, subtraction, multiplication or division of two floating-point numbers. When floating-point operations were relatively slow, the flop count gave a good estimate of the total computational time. Nowadays issues such as cache boundaries and locality of reference can dramatically affect the computation time of a numerical algorithm (Boyd and Vandenberghe, 2001, p. 498). The flop count can still give a rough estimate of the computational time of a numerical algorithm. Since the flop count only gives an approximation of the computational time, it is enough to check the order (or orders), i.e., the largest exponents.

First we will describe some of the standard costs for the operations performed in the optimization algorithm.

• Inner product $x^Ty$, where $x, y \in \mathbb{R}^n$, costs $2n$ flops.

• Matrix-vector multiplication $z = Ax$, where $A \in \mathbb{R}^{m\times n}$, costs $2mn$ flops.

• Matrix-matrix multiplication $C = AB$, where $A \in \mathbb{R}^{m\times r}$ and $B \in \mathbb{R}^{r\times n}$, costs $2mnr$ flops.

• Solving linear equations $Ax = b$, where $A \in \mathbb{R}^{n\times n}$, by LU factorization costs $\frac{2}{3}n^3$ flops.

The cost involved in forming the product of several matrices can vary, since the matrix-matrix multiplications can be carried out in different orders. Consider $D = ABC$, where $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{n\times p}$ and $C \in \mathbb{R}^{p\times q}$. The matrix $D$ can be computed in two ways. One way is to form the product $AB$ and then form $D = (AB)C$, which gives a total cost of $2mp(n+q)$ flops. The second way is to form the product $BC$ and then form $D = A(BC)$, which gives a total cost of $2nq(m+p)$ flops. This leads to the conclusion to use the first approach when $2mp(n+q) < 2nq(m+p)$ and otherwise use the second approach. So far no structure has been exploited, but if a matrix has structure, then the number of flops can be reduced considerably. For example, a triangular matrix multiplication requires $n^3/3$ flops compared to $2n^3$ for a full matrix multiplication. More about flop counts and numerical issues can be found in (Golub and van Loan, 1989).
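The ordering rule can be captured in a small helper; the sizes below are arbitrary:

```python
import numpy as np

# Flop counts from the text for D = ABC:
# (AB)C costs 2mp(n+q) flops, A(BC) costs 2nq(m+p) flops.
def chain_flops(m, n, p, q):
    return 2 * m * p * (n + q), 2 * n * q * (m + p)

m, n, p, q = 100, 5, 100, 5                  # "thin" inner dimensions
left_first, right_first = chain_flops(m, n, p, q)
assert left_first == 200000 and right_first == 10000
order = "A(BC)" if right_first < left_first else "(AB)C"
assert order == "A(BC)"                      # 20x cheaper for these sizes

# Both orders give the same matrix; only the amount of work differs.
A, B, C = np.ones((m, n)), np.ones((n, p)), np.ones((p, q))
assert np.allclose((A @ B) @ C, A @ (B @ C))
```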

7.1.2 Complexity of the Algorithm

The main computational cost in the optimization algorithm is to solve Equation 5.5, which has to be solved in each iteration to obtain the search directions. Following the method described in Section 5.3, the most computationally intensive step is solving Equation 5.8, which is done using a Riccati recursion, as described in Appendix B.

The complexity analysis of the Riccati recursion will be divided into three steps. The three steps and all the variables are described in Appendix B. First is the initialization, which is the cost of calculating $Q$. The calculation of $Q$ is divided into $N$ matrix multiplications. The cost of each matrix multiplication is $2(n+m)^2q + 2(n+m)p^2 + 2(n+m)^2p$, so the total cost of the initialization is $O(Nn^2m^2qp^2)$. The next step is the backward recursion, which is the calculation of $P_k$ and $\Psi_k$ by computing some intermediate matrices. The cost of each backward step for $P_k$ (forming $G_k$, $H_k$ and $L_k$ along the way) is $6n^2m + 2nm^2 + 3m^3 + 2m^2n + 4n^3 + 2n^2m$, and the cost of forming each $\Psi_k$ is $4mn + 2m^2 + 2n^2$. Thus the total cost of the backward recursion is $O(Nn^3m^3)$. The forward recursion, which is the calculation of the search directions $\Delta u_k$, $\Delta x_k$ and $\Delta\rho_k$, consists only of matrix-vector computations and has a total cost of $O(Nn^2m^2)$. A summary of the flop count for the Riccati recursion can be found in Table 7.1.


Step    Flops
Init    $O(Nn^2m^2qp^2)$
B1      $O(Nn^3m^3)$
B2      $O(Nn^2m^2)$
F       $O(Nn^2m)$
Total   $O(Nn^3m^3qp^2)$

Table 7.1: The different steps in the Riccati recursion and their flop count.

To solve Equation 5.8, three Riccati recursions have to be performed. The first and third recursions differ only in the right hand side, which means that parts of the Riccati recursion used in the first recursion can be reused in the third. In the second recursion, parts of the multiplications from the initialization of the first recursion can be reused. Thus the total cost of solving (5.8) is $O(Nn^3m^3qp^2)$.

Another way of solving Equation 5.8 is to use an LU factorization. This would involve three steps. First, the initialization, which is the same as for the Riccati approach. Then comes the LU factorization itself, which requires $N^2$ times more flops than the Riccati approach. The last step is the forward and backward substitution, which costs $O(N^2n^2m^2)$. Thus the total flop count for the LU factorization approach is $O(N^3n^3m^3p^2q)$, which is $N^2$ times more than with the Riccati recursion approach.

To test the algorithm, it was implemented in Matlab. The use of for-loops in Matlab is not very efficient, so the matrix multiplications in the algorithm were carried out as sparse matrix multiplications. If the algorithm were implemented in a more efficient programming language, such as C, the matrix multiplications should probably be performed so that the structure of the matrices is exploited.

7.2 Example

In this section the algorithm will be evaluated on a double-tank process. In the evaluation we will see that the algorithm has a linear growth in the flop count with respect to the time horizon, just as expected.


7.2.1 The Double-Tank Process

The double-tank process, introduced in (Åström and Österberg, 1986), is a laboratory process. It is illustrated in Figure 7.1.

Figure 7.1: The tank system. (The sketch shows the pump P delivering flow q1 to the upper tank T1 with level h1, the flow q2 from T1 to the lower tank T2 with level h2, the cross-sectional areas A1, A2 and the outlet areas a1, a2.)

The process dynamics can be described as

$$\dot h_1 = -\frac{a_1}{A_1}\sqrt{2gh_1} + \frac{ku}{A_1} \tag{7.1}$$

$$\dot h_2 = \frac{a_1}{A_2}\sqrt{2gh_1} - \frac{a_2}{A_2}\sqrt{2gh_2} \tag{7.2}$$

where $h_1$ and $h_2$ are the levels in the upper and lower tank, respectively. The pump P generates a water flow $q_1$ to tank T1. The hole in the bottom of T1 makes the water flow $q_2$ to T2 dependent on the water level $h_1$ in T1. The geometrical data for the tanks are given in Table 7.2; the tank data are taken from (Hansson, 2000). The levels in the tanks are measured with sensors that give an output voltage proportional to the level. The proportionality constant is 50 V/m, which means that the range of the sensor outputs is [0, 10] V. The flow $q_1$ [m³/s] generated by the pump P is proportional to the supply voltage $u$ [V] as $q_1 = ku$, where $k = 2.7 \times 10^{-6}$ m³/Vs. By linearizing around the steady


      A [m²]          a [m²]      hmax [m]
T1    2.734 × 10⁻³    7 × 10⁻⁶    0.2
T2    2.734 × 10⁻³    7 × 10⁻⁶    0.2

Table 7.2: The geometrical data: the cross-sectional tank area A and the cross-sectional area of the drilled hole a.

state solution $x_0 = 5$ V and $u_0 = 1.82$ V, and by sampling the system with zero-order hold (sample interval 2 s), the following discrete-time state equation is obtained:

$$x(k+1) = \begin{bmatrix}0.9648 & 0\\ 0.0345 & 0.9648\end{bmatrix}x(k) + \begin{bmatrix}0.0971\\ 0.0017\end{bmatrix}u(k) \tag{7.3}$$
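The entries of (7.3) can be reproduced from the table data. The sketch below assumes $g = 9.81$ m/s² and an operating level $h_0 = x_0/50 = 0.1$ m (both assumptions, not stated explicitly above) and uses SciPy's matrix exponential for the zero-order-hold discretization:

```python
import numpy as np
from scipy.linalg import expm

# Tank data from Table 7.2 and the text; g and h0 are assumptions.
A1 = 2.734e-3; a1 = 7e-6; g = 9.81; k = 2.7e-6
h0 = 5.0 / 50.0                        # steady-state level [m] for x0 = 5 V
p = (a1 / A1) * np.sqrt(g / (2 * h0))  # linearized outflow coefficient [1/s]

Ac = np.array([[-p, 0.0], [p, -p]])    # states in sensor volts (the 50 V/m
Bc = np.array([[50 * k / A1], [0.0]])  # factor cancels in A, scales B)

# Zero-order-hold discretization with T = 2 s via the augmented-matrix trick.
T = 2.0
M = expm(np.block([[Ac, Bc], [np.zeros((1, 3))]]) * T)
Ad, Bd = M[:2, :2], M[:2, 2:]

assert np.allclose(Ad, [[0.9648, 0], [0.0345, 0.9648]], atol=5e-4)
assert np.allclose(Bd, [[0.0971], [0.0017]], atol=5e-4)
```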

7.2.2 Problem Formulation

The MPC algorithm was discussed in Chapter 2. In that discussion we saw that the choice of cost function determines how aggressive the output becomes. The matrices in the cost function are here chosen to be

$$C_k = \begin{bmatrix}1 & 0\\ 0 & 10\\ 0 & 0\end{bmatrix},\quad k = 1, \dots, N \tag{7.4}$$

$$D_k = \begin{bmatrix}0\\ 0\\ 0.01\end{bmatrix},\quad k = 1, \dots, N \tag{7.5}$$

$$P = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} \tag{7.6}$$

This means that the cost of the control signal is weighted 10000 times less than the first state variable, and that the second state variable is weighted 100 times more than the first state variable. The terminal states are weighted equal to the first state.

Page 60: A Second Order Cone Programming Algorithm for Model ... · approach is based on ideas from Interior Point (IP) optimization methods and Riccati recursions. The MPC problem considered

48 7 Computational Results

The constraint matrices were chosen to be

$$E_k = \begin{bmatrix}1 & 0\\ -1 & 0\\ 0 & 1\\ 0 & -1\\ 0 & 0\\ 0 & 0\end{bmatrix},\qquad F_k = \begin{bmatrix}0\\ 0\\ 0\\ 0\\ 1\\ -1\end{bmatrix},\qquad b_k = \begin{bmatrix}1\\ 1\\ 1\\ 1\\ 3\\ 3\end{bmatrix},\quad k = 1, \dots, N \tag{7.7–7.9}$$

$$W = \begin{bmatrix}10 & 0\\ 0 & 10\end{bmatrix} \tag{7.10}$$

Hence the constraints are equivalent to

$$-1 \le x_{k,1} \le 1,\qquad -1 \le x_{k,2} \le 1,\qquad -3 \le u_k \le 3,\qquad 10x_{N,1}^2 + 10x_{N,2}^2 \le 1$$
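The equivalence between the constraint matrices and the listed box constraints can be spot-checked numerically:

```python
import numpy as np

# Constraint data from (7.7)-(7.9).
E = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [0, 0], [0, 0]], dtype=float)
F = np.array([[0], [0], [0], [0], [1], [-1]], dtype=float)
b = np.array([1, 1, 1, 1, 3, 3], dtype=float)

# For random (x, u), E x + F u <= b must agree exactly with the box bounds
# |x1| <= 1, |x2| <= 1, |u| <= 3.
rng = np.random.default_rng(3)
for _ in range(1000):
    x = rng.uniform(-2, 2, 2)
    u = rng.uniform(-4, 4, 1)
    in_Efb = bool(np.all(E @ x + F @ u <= b))
    in_box = bool(np.all(np.abs(x) <= 1) and abs(u[0]) <= 3)
    assert in_Efb == in_box
```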

In the computation the initial value was set to $x_0 = [0.5\;\; {-0.5}]^T$. The optimal trajectories for the levels of the two tanks and the optimal trajectory of the pump voltage for $N = 23$ are shown in Figure 7.2. Notice how the different constraints become active. The control signal starts at its maximum level, 3 V, and then starts to decrease. When the level in the upper tank reaches its maximum value after 4 s, the control signal balances this level with a constant control signal of 0.36 V for about 10 s. Then the control signal decreases so that the level of the upper tank can decrease and approach the same level as the lower tank at time 40 s.


Figure 7.2: Optimal levels of the tanks (upper plot) and optimal control signal (lower plot) as functions of time.

7.2.3 Computational Results

To verify the flop count, computations were performed in MATLAB 5.3.1, where the flop count is still available. The result of these computations can be seen in Figure 7.3. Notice how the number of flops per iteration grows linearly with the time horizon, just as the complexity analysis suggested.

Figures 7.4 and 7.5 show how the duality gap decreases exponentially. The rate of decrease seems to depend on the size of the problem, with a lower rate of decrease per iteration for larger problems. The number of iterations grows with the problem size, as can be expected since the rate of decrease in the duality gap becomes lower for larger problems. Section 4.4 showed that the number of iterations needed to reduce the duality gap by a fixed factor can be approximated by $O(\nu)$, where $\nu$ is a tuning parameter in the potential function which has to obey $\nu > \sqrt{\#\text{cones}}$ for convergence of the algorithm to hold. Notice that the number of cones grows linearly with the time horizon. Thus the number of iterations grows as $O(\sqrt{N})$, which is shown in Figure 7.6.

Computations were also performed to test how the overall computational time grew as the time horizon was increased. The result is presented as the solid line in Figure 7.7. The Riccati recursion based algorithm was also compared


Figure 7.3: Flop count per iteration vs. time horizon.

against a standard optimization software, SeDuMi (SeDuMi, 1.3). SeDuMi is developed to solve semidefinite problems but also has a solver which can handle SOCPs. To be able to use MATLAB, an LMI parser called YALMIP (Löfberg, 2001b) was used. The result is presented as the dashed line in Figure 7.7. The dash-dotted line presents the computational time when SeDuMi was used to solve the problem with the state constraints eliminated using (2.1) recursively. The Riccati recursion based algorithm has quadratic complexity, whereas the SeDuMi based implementations have cubic complexity.

According to the preceding analysis we would expect the computational time for the Riccati recursion based approach to be $O(N^{3/2})$. The difference is probably explained by the fact that flop count analysis is not exact when computational time is considered.


Figure 7.4: Decrease in duality gap for time horizon N = 10.

Figure 7.5: Decrease in duality gap for time horizon N = 1000.


Figure 7.6: Number of iterations vs. time horizon.


Figure 7.7: Elapsed computational time vs. time horizon. Solid line: Riccati recursion based algorithm; dashed line: SeDuMi without elimination of states; dash-dotted line: SeDuMi with elimination of states.


Chapter 8

Conclusions and Future Work

8.1 Conclusions

The objective of this thesis has been to give insight into how the MPC problem with a quadratic cost function and an ellipsoidal terminal constraint can be solved using a feasible IP method.

In Chapter 5 it was shown how to efficiently solve an optimal control problem with applications to model predictive control. Special attention was given to formulating the problem as an SOCP while still preserving certain structure under scalings. Preserving this structure makes it possible to solve the equations for the search directions efficiently using Riccati recursions.

By using the matrix inversion lemma it was possible to simplify the computations: the equations for the search directions can be solved using three Riccati recursions instead of one Riccati recursion with a right hand side of dimension N.

In Chapter 6 it was shown that the algorithm can easily determine whether a problem is feasible or not. This was done by formalizing the search for strictly feasible initial points as a primal-dual IP problem.

The use of Riccati recursions makes the computational complexity grow at most quadratically with the time horizon. This is discussed and shown in an example in Chapter 7.


8.2 Future Work

There are many topics not covered in this thesis. Here are some suggestions for future research:

• How to incorporate a so-called hot start in the algorithm, i.e., how to utilize the fact that the optimization problem at the next sampling time is likely to have a solution close to the one at the current sampling time.

• Determine how premature termination of the algorithm affects the control performance, i.e., what happens if the algorithm is stopped before the optimum is reached?

• Investigate whether there exists a scaling matrix which does not mix up the time indices. Finding such a scaling matrix would make it possible to use only one second order cone instead of N.


Appendix A

NT Search Direction

Here we will show how to compute a matrix $G$ that satisfies the three conditions on $P$ and $Q$ when $Q = G^2$ and $P = G^{-2}$. The conditions are stated in Section 4.5. Notice that the first condition is trivially met. Partition the scaling matrix as

$$G = \oplus_{k=1}^{N_1+N_2} G_k$$

where $N_1$ is the number of nonnegative orthant cones of dimension one, and $N_2$ is the number of second order cones. In view of the second condition in Section 4.5 we may define

$$v = Gx = G^{-1}s$$

We can now define $V$ as

$$V = \oplus_{k=1}^{N_1+N_2} V_k$$

where $V_k$ is a scalar if the corresponding cone is the nonnegative orthant of dimension one, and $V_k = \mathrm{arrow}(v_k)$ if the corresponding cone is the second order cone. From now on we will consider the different cones separately and drop the index $k$ for simplicity.

First we will look at the case when the cone is a nonnegative orthant of dimension one. Let the scaling matrix be defined as

$$G = \left(\frac{s}{x}\right)^{\frac12}$$

Now we will look at the two remaining conditions in Section 4.5.

• The second condition is $G^2x = s$. This is trivially satisfied, since $\frac{s}{x}x = s$.

• The third condition is $G^2\nabla F(s) = \nabla F(x)$. By just substituting $G$, this becomes $\frac{s}{x}\cdot\frac{1}{s} = \frac{1}{x}$. Hence the third condition is also satisfied.

Finally, we will show that the following relation holds:

$$G\nabla_sF(s) = v^{-1}$$

This holds since, by the definitions of $G$ and $v$, we have $G\nabla_sF(s) = (xs)^{-\frac12} = v^{-1}$.

We will now look at the case when the cone is the second order cone. Let the scaling matrix be defined as

$$G = \omega\begin{bmatrix}\alpha & \beta^T\\ \beta & I + \frac{\beta\beta^T}{1+\alpha}\end{bmatrix}$$

where

$$\alpha = \frac{\zeta_0}{\sqrt{\zeta_0^2 - \zeta_1^T\zeta_1}},\qquad \beta = \frac{\zeta_1}{\sqrt{\zeta_0^2 - \zeta_1^T\zeta_1}} \tag{A.1}$$

$$\zeta = (\zeta_0, \zeta_1) = (\bar s_0 + \bar x_0,\ \bar s_1 - \bar x_1)$$

$$\bar x = (\bar x_0, \bar x_1) = \omega(x_0, x_1) \tag{A.2}$$

$$\bar s = (\bar s_0, \bar s_1) = \omega^{-1}(s_0, s_1) \tag{A.3}$$

$$\omega = \left[\frac{s_0^2 - \|s_1\|^2}{x_0^2 - \|x_1\|^2}\right]^{1/4}$$

First we will state some relations which will be used later on.

1. $\alpha^2 - \beta^T\beta = 1$

2. $\bar x_0^2 - \|\bar x_1\|^2 = \bar s_0^2 - \|\bar s_1\|^2$

3. $G^{-1} = \dfrac{1}{\omega}\begin{bmatrix}\alpha & -\beta^T\\ -\beta & I + \dfrac{\beta\beta^T}{1+\alpha}\end{bmatrix}$

4. $G^2 = \omega^4G^{-2}$
The second relation holds because of the following equivalent expressions

ω4 =

[s20 − ‖s1‖2

x20 − ‖x1‖2

]

ω2(x20 − ‖x1‖2) = ω−2(s2

0 − ‖s1‖2)

x20 − ‖x1‖2 = s2

0 − ‖s1‖2

The other relations can be trivially verified.Now we are ready to look at the two remaining conditions in Section 4.5 .
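The defining relations of this scaling can be checked numerically. The sketch below builds $G$ from two arbitrary interior points of the cone (all numbers made up):

```python
import numpy as np

# Second order cone scaling as defined above, for interior points
# x = (x0, x1) with x0 > ||x1|| and s = (s0, s1) with s0 > ||s1||.
def nt_scaling(x, s):
    x0, x1 = x[0], x[1:]
    s0, s1 = s[0], s[1:]
    w = ((s0**2 - s1 @ s1) / (x0**2 - x1 @ x1)) ** 0.25
    xb, sb = w * x, s / w                 # scaled points \bar x, \bar s
    z0, z1 = sb[0] + xb[0], sb[1:] - xb[1:]
    d = np.sqrt(z0**2 - z1 @ z1)
    alpha, beta = z0 / d, z1 / d
    k = len(x1)
    G = w * np.block([[np.array([[alpha]]), beta[None, :]],
                      [beta[:, None], np.eye(k) + np.outer(beta, beta) / (1 + alpha)]])
    return G, alpha, beta

x = np.array([2.0, 0.3, -0.5])
s = np.array([1.5, 0.2, 0.4])
G, alpha, beta = nt_scaling(x, s)

assert np.isclose(alpha**2 - beta @ beta, 1.0)    # relation 1
assert np.allclose(G @ (G @ x), s)                # second condition: G^2 x = s
assert np.allclose(G @ x, np.linalg.solve(G, s))  # v = G x = G^{-1} s
```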


• The second condition is $G^2x = s$. By using the definitions of $\bar x$ and $\bar s$, (A.2) and (A.3) respectively, and by using relation 1 when forming $G^2$, the second condition can be written as

$$\begin{bmatrix}1 + 2\beta^T\beta & 2\alpha\beta^T\\ 2\alpha\beta & I + 2\beta\beta^T\end{bmatrix}\begin{bmatrix}\bar x_0\\ \bar x_1\end{bmatrix} = \begin{bmatrix}\bar s_0\\ \bar s_1\end{bmatrix} \tag{A.4}$$

The first row of (A.4) is equivalent to

$$(1 + 2\beta^T\beta)\bar x_0 + 2\alpha\beta^T\bar x_1 = \bar s_0 \tag{A.5}$$

Substituting $\alpha$ and $\beta$ given by (A.1) and performing the multiplication, the left hand side of (A.5) can be expressed as

$$\frac{\left((\bar s_0 + \bar x_0)^2 + (\bar s_1 - \bar x_1)^T(\bar s_1 - \bar x_1)\right)\bar x_0 + 2(\bar s_0 + \bar x_0)(\bar s_1 - \bar x_1)^T\bar x_1}{(\bar s_0 + \bar x_0)^2 - (\bar s_1 - \bar x_1)^T(\bar s_1 - \bar x_1)} \tag{A.6}$$

By using relation 2, (A.6) can be written as

$$\frac{\bar s_0\left((\bar s_0 + \bar x_0)^2 - (\bar s_1 - \bar x_1)^T(\bar s_1 - \bar x_1)\right)}{(\bar s_0 + \bar x_0)^2 - (\bar s_1 - \bar x_1)^T(\bar s_1 - \bar x_1)}$$

which of course simplifies to $\bar s_0$. Now let us check the second block row of (A.4), which is equivalent to

$$2\alpha\beta\bar x_0 + \bar x_1 + 2\beta\beta^T\bar x_1 = \bar s_1 \tag{A.7}$$

By substituting $\alpha$ and $\beta$ given by (A.1) and moving $\bar x_1$ to the right hand side, (A.7) can be written as

$$\frac{2\bar x_0\zeta_0 + 2\zeta_1^T\bar x_1}{\zeta_0^2 - \zeta_1^T\zeta_1}(\bar s_1 - \bar x_1) = \bar s_1 - \bar x_1 \tag{A.8}$$

Equation (A.8) holds if

$$\frac{2\bar x_0\zeta_0 + 2\zeta_1^T\bar x_1}{\zeta_0^2 - \zeta_1^T\zeta_1} = 1$$

By using the definitions of $\zeta_0$ and $\zeta_1$, the left hand side reads

$$\frac{2\bar x_0(\bar s_0 + \bar x_0) + 2(\bar s_1 - \bar x_1)^T\bar x_1}{(\bar s_0 + \bar x_0)^2 - (\bar s_1 - \bar x_1)^T(\bar s_1 - \bar x_1)}$$

Using relation 2, this can be written as

$$\frac{\bar x_0^2 + \bar s_0^2 + 2\bar s_0\bar x_0 - \bar s_1^T\bar s_1 + 2\bar s_1^T\bar x_1 - \bar x_1^T\bar x_1}{(\bar s_0 + \bar x_0)^2 - (\bar s_1 - \bar x_1)^T(\bar s_1 - \bar x_1)}$$

where the numerator is identical to the denominator, so the ratio equals one. Hence the second condition is satisfied.


• The third condition is

$$G^2\,\frac{1}{s_0^2 - s_1^Ts_1}\begin{bmatrix}-s_0\\ s_1\end{bmatrix} = \frac{1}{x_0^2 - x_1^Tx_1}\begin{bmatrix}-x_0\\ x_1\end{bmatrix}$$

Multiply with $G^{-2}$ from the left and use the definition of $\bar x$. Then the right hand side of condition three can be written as

$$\omega^{-2}\begin{bmatrix}1 + 2\beta^T\beta & 2\alpha\beta^T\\ 2\alpha\beta & I + 2\beta\beta^T\end{bmatrix}\omega^{-1}\begin{bmatrix}-\bar x_0\\ \bar x_1\end{bmatrix}\frac{1}{\omega^{-2}(\bar x_0^2 - \bar x_1^T\bar x_1)}$$

By using relation 2 this becomes

$$\begin{bmatrix}1 + 2\beta^T\beta & 2\alpha\beta^T\\ 2\alpha\beta & I + 2\beta\beta^T\end{bmatrix}\omega^{-1}\begin{bmatrix}-\bar x_0\\ \bar x_1\end{bmatrix}\frac{1}{\bar s_0^2 - \bar s_1^T\bar s_1}$$

It follows from (A.4) that this is equivalent to

$$\omega^{-1}\begin{bmatrix}-\bar s_0\\ \bar s_1\end{bmatrix}\frac{1}{\bar s_0^2 - \bar s_1^T\bar s_1}$$

Then, by the definition of $\bar s$, this becomes

$$\begin{bmatrix}-s_0\\ s_1\end{bmatrix}\frac{1}{s_0^2 - s_1^Ts_1}$$

This means that the third condition is satisfied.

Next we will show that $-G\nabla_sF(s) = V^{-1}e_0$, where $V = \mathrm{arrow}(v)$. Notice that

$$V^{-1}e_0 = \begin{bmatrix}v_0\\ -v_1\end{bmatrix}\frac{1}{v_0^2 - v_1^Tv_1}$$

and

$$-G\nabla_sF(s) = G\begin{bmatrix}s_0\\ -s_1\end{bmatrix}\frac{1}{s_0^2 - s_1^Ts_1} \tag{A.9}$$

First we have, from the definition of $v$, that

$$(v_0, v_1) = G^{-1}s = \frac{1}{\omega}\left(\alpha s_0 - \beta^Ts_1,\; -s_0\beta + \left(I + \frac{\beta\beta^T}{1+\alpha}\right)s_1\right) \tag{A.10}$$

Now, by the definition of $G$,

$$G\begin{bmatrix}s_0\\ -s_1\end{bmatrix} = \omega\left(\alpha s_0 - \beta^Ts_1,\; s_0\beta - \left(I + \frac{\beta\beta^T}{1+\alpha}\right)s_1\right) = \omega^2\begin{bmatrix}v_0\\ -v_1\end{bmatrix}$$


where the second equality follows from (A.10). This means that the right hand side of (A.9) can be written as

$$\begin{bmatrix}v_0\\ -v_1\end{bmatrix}\frac{\omega^2}{s_0^2 - s_1^Ts_1}$$

Now it remains to prove that $(s_0^2 - s_1^Ts_1)/\omega^2 = v_0^2 - v_1^Tv_1$, which will be done using (A.10):

$$\begin{aligned}
v_0^2 - v_1^Tv_1 &= \frac{1}{\omega^2}\left(s_0^2(\alpha^2 - \beta^T\beta) - s_1^Ts_1 + 2s_0\beta^Ts_1\left(-\alpha + 1 + \frac{\beta^T\beta}{1+\alpha}\right)\right)\\
&\quad + \frac{1}{\omega^2}\left(1 - \frac{2}{1+\alpha} - \frac{\beta^T\beta}{(1+\alpha)^2}\right)s_1^T\beta\beta^Ts_1\\
&= \frac{s_0^2 - s_1^Ts_1}{\omega^2}
\end{aligned}$$

where the last equality follows from relation 1. This, together with the definition of $v$, proves that $-G\nabla_s\Phi = -\frac{\nu+N}{\mu}Ve_0 + V^{-1}e_0$. For more information on this scaling matrix for the second order cone, see (Tsuchiya, 1998).


Appendix B

Riccati Recursion

In this appendix the Riccati recursion approach for solving (5.8) will be derived. The first step is to calculate $\eta_1 = \Phi_2^{-1}U_2$. This is equivalent to solving

$$\begin{bmatrix}Q & H^T\\ H & 0\end{bmatrix}\begin{bmatrix}\Delta y\\ \Delta\rho\end{bmatrix} = \begin{bmatrix}-w\\ 0\end{bmatrix} \tag{B.1}$$

where $Q$ and $w$ are defined in (5.9) and (5.10), respectively, and where $H$ is defined in (5.3). First we partition $Q$ as $Q = \oplus_{k=0}^{N}Q_k$, where

$$Q_k = \begin{bmatrix}Q_k & S_k\\ S_k^T & R_k\end{bmatrix} = \begin{bmatrix}E_k^T\\ F_k^T\end{bmatrix}\Sigma_{k,2}^T\Sigma_{k,2}\begin{bmatrix}E_k & F_k\end{bmatrix} + \begin{bmatrix}-2C_k^T\\ -2D_k^T\end{bmatrix}\Sigma_k^T\Sigma_k\begin{bmatrix}-2C_k & -2D_k\end{bmatrix}$$

$$Q_N = \begin{bmatrix}-2P^{\frac12}\\ -W^{\frac12}\end{bmatrix}^T\Sigma_N^T\Sigma_N\begin{bmatrix}-2P^{\frac12}\\ -W^{\frac12}\end{bmatrix}$$


Then partition $w$ as $w^T = \begin{bmatrix}r_{x,0}^T & r_{u,0}^T & \dots & r_{u,N-1}^T & r_{x,N}^T\end{bmatrix}$. After reordering of the variables and equations in (B.1) we get

$$\begin{bmatrix}
0 & I & 0 & 0 & 0 & \cdots & & & 0\\
I & Q_0 & S_0 & -A_0^T & 0 & \cdots & & & 0\\
0 & S_0^T & R_0 & -B_0^T & 0 & \cdots & & & 0\\
0 & -A_0 & -B_0 & 0 & I & & & & \vdots\\
\vdots & & & & & \ddots & & & \vdots\\
0 & 0 & 0 & \cdots & 0 & -A_{N-1} & -B_{N-1} & 0 & I\\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & I & Q_N
\end{bmatrix}
\begin{bmatrix}\Delta\rho_0\\ \Delta x_0\\ \Delta u_0\\ \vdots\\ \Delta\rho_N\\ \Delta x_N\end{bmatrix} =
\begin{bmatrix}0\\ r_{x,0}\\ r_{u,0}\\ \vdots\\ 0\\ r_{x,N}\end{bmatrix} \tag{B.2}$$

There exist symmetric matrices $P_k$ and vectors $\Psi_k$ such that

$$\Delta\rho_k + P_k\Delta x_k = \Psi_k \tag{B.3}$$

This is true for $k = N$ with $P_N = Q_N$ and $\Psi_N = r_{x,N}$. Notice that for $0 \le k \le N-1$ it holds that

$$\begin{bmatrix}I & Q_k & S_k\\ 0 & S_k^T & R_k\\ 0 & -A_k & -B_k\end{bmatrix}\begin{bmatrix}\Delta\rho_k\\ \Delta x_k\\ \Delta u_k\end{bmatrix} + \begin{bmatrix}-A_k^T & 0\\ -B_k^T & 0\\ 0 & I\end{bmatrix}\begin{bmatrix}\Delta\rho_{k+1}\\ \Delta x_{k+1}\end{bmatrix} = \begin{bmatrix}r_{x,k}\\ r_{u,k}\\ 0\end{bmatrix}$$
Use the last row and assume that ∆ρk+1 + Pk+1∆xk+1 = Ψk+1 to obtain

[I Qk Sk

0 STk Rk

]

∆ρk

∆xk

∆uk

+

[−AT

k

−BTk

][Ψk+1 − Pk+1(Akxk + Bkuk)

]=

[rx,k

ru,k

]

or equivalently

$$\begin{bmatrix}I & F_{k+1} & H_{k+1}\\ 0 & H_{k+1}^T & G_{k+1}\end{bmatrix}\begin{bmatrix}\Delta\rho_k\\ \Delta x_k\\ \Delta u_k\end{bmatrix} = \begin{bmatrix}r_{x,k} + A_k^T\Psi_{k+1}\\ r_{u,k} + B_k^T\Psi_{k+1}\end{bmatrix}$$

where $F_{k+1} = Q_k + A_k^TP_{k+1}A_k$, $H_{k+1} = S_k + A_k^TP_{k+1}B_k$ and $G_{k+1} = R_k + B_k^TP_{k+1}B_k$,

which has the solution

\[
\begin{aligned}
\Delta u_k &= G_{k+1}^{-1}\bigl[ r_{u,k} + B_k^T \Psi_{k+1} - H_{k+1}^T \Delta x_k \bigr] \\
\Delta\rho_k &= -\bigl( F_{k+1} - H_{k+1} G_{k+1}^{-1} H_{k+1}^T \bigr) \Delta x_k
+ r_{x,k} + A_k^T \Psi_{k+1} - H_{k+1} G_{k+1}^{-1}\bigl[ r_{u,k} + B_k^T \Psi_{k+1} \bigr]
\end{aligned}
\]



If $P_k$ obeys the Riccati recursion

\[
P_k = F_{k+1} - H_{k+1} G_{k+1}^{-1} H_{k+1}^T
\]

and if $\Psi_k$ is defined via

\[
\Psi_k = r_{x,k} + A_k^T \Psi_{k+1} - H_{k+1} G_{k+1}^{-1}\bigl[ r_{u,k} + B_k^T \Psi_{k+1} \bigr]
\]

then (B.3) holds. The solution can thus be obtained by first computing $P_k$ and $\Psi_k$ recursively backwards in time from the two recursions above, with final values $P_N = \bar{Q}_N$ and $\Psi_N = r_{x,N}$. Then, with starting value $\Delta x_0 = 0$, the following forward recursion is used to compute $\Delta u_k$ and $\Delta x_k$:

\[
\begin{aligned}
\Delta u_k &= G_{k+1}^{-1}\bigl[ r_{u,k} + B_k^T \Psi_{k+1} - H_{k+1}^T \Delta x_k \bigr] \\
\Delta x_{k+1} &= A_k \Delta x_k + B_k \Delta u_k
\end{aligned}
\]

Finally $\Delta\rho_k$ is obtained from (B.3). To summarize, the algorithm for the recursion can be written as follows (note that the indices in the summary are shifted by one relative to the derivation above, so that $G_k$ here corresponds to $G_{k+1}$ above):

1. Initialization: Compute $Q_k$, $R_k$ and $S_k$.

2. Backward Recursion 1:

       Equation                                              Flops
       $G_k = R_k + B_k^T P_{k+1} B_k$                       $2n^2m + 2m^2n$
       $H_k = S_k + A_k^T P_{k+1} B_k$                       $4n^2m$
       $L_k = G_k^{-1} H_k^T$                                $3m^3 + 2m^2n$
       $P_k = Q_k + A_k^T P_{k+1} A_k - H_k L_k$             $4n^3 + 2n^2m$

3. Backward Recursion 2:

       Equation                                              Flops
       $u_k^0 = G_k^{-1}(r_{u,k} + B_k^T \Psi_{k+1})$        $2mn + 2m^2$
       $\Psi_k = r_{x,k} - H_k u_k^0 + A_k^T \Psi_{k+1}$     $2mn + 2n^2$

4. Forward Recursion:

       Equation                                              Flops
       $\Delta u_k = u_k^0 - L_k \Delta x_k$                 $2mn$
       $\Delta x_{k+1} = A_k \Delta x_k + B_k \Delta u_k$    $2n^2 + 2mn$
       $\Delta\rho_k = -P_k \Delta x_k + \Psi_k$             $2n^2$
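As a concrete illustration, the four steps above can be sketched in code. This is a minimal sketch assuming dense NumPy arrays; the stage data $A_k$, $B_k$, $Q_k$, $S_k$, $R_k$, $r_{x,k}$, $r_{u,k}$ and the terminal pair $(\bar{Q}_N, r_{x,N})$ are taken as given (in the thesis they come from (5.9) and (5.10)), and the function name is hypothetical:

```python
import numpy as np

def riccati_solve(A, B, Q, S, R, rx, ru, QN, rxN):
    """Solve the reordered KKT system (B.2) by the Riccati recursion.

    A, B, Q, S, R, rx, ru are length-N lists of stage data; QN and rxN
    give the final values P_N = Q_N and Psi_N = r_{x,N}.  Returns the
    sequences (Delta x_k, Delta u_k, Delta rho_k).
    """
    N = len(A)
    P = [None] * (N + 1)
    Psi = [None] * (N + 1)
    L = [None] * N
    u0 = [None] * N
    P[N], Psi[N] = QN, rxN
    # Backward recursions 1 and 2, merged into a single backward sweep
    for k in range(N - 1, -1, -1):
        Gk = R[k] + B[k].T @ P[k + 1] @ B[k]
        Hk = S[k] + A[k].T @ P[k + 1] @ B[k]
        L[k] = np.linalg.solve(Gk, Hk.T)               # L_k = G_k^{-1} H_k^T
        P[k] = Q[k] + A[k].T @ P[k + 1] @ A[k] - Hk @ L[k]
        u0[k] = np.linalg.solve(Gk, ru[k] + B[k].T @ Psi[k + 1])
        Psi[k] = rx[k] - Hk @ u0[k] + A[k].T @ Psi[k + 1]
    # Forward recursion, starting from Delta x_0 = 0
    dx = [np.zeros(A[0].shape[0])]
    du, drho = [], []
    for k in range(N):
        du.append(u0[k] - L[k] @ dx[k])                # Delta u_k
        dx.append(A[k] @ dx[k] + B[k] @ du[k])         # Delta x_{k+1}
        drho.append(-P[k] @ dx[k] + Psi[k])            # Delta rho_k from (B.3)
    drho.append(-P[N] @ dx[N] + Psi[N])
    return dx, du, drho
```

By the derivation above, the returned sequences satisfy the stage equations of (B.2), which can be checked directly by substituting them back.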



Notice that the Riccati recursion is well defined if $R_k$ is invertible, see e.g. (Åström and Wittenmark, 1984, p. 261). A sufficient condition for this is that $\begin{bmatrix} F_k \\ D_k \end{bmatrix}$ has full column rank. To see this, notice that

\[
R_k = \begin{bmatrix} F_k^T & D_k^T \end{bmatrix}
\begin{bmatrix} \Sigma_{k,2}^2 & 0 \\ 0 & 4\Sigma_k^2 \end{bmatrix}
\begin{bmatrix} F_k \\ D_k \end{bmatrix} > 0
\]

since $\Sigma_{k,2}^2 > 0$ and $\Sigma_k^2 > 0$ for any interior point. This condition means that the control signal is visible either in the objective function or in the inequality constraints.
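The positivity argument can be illustrated numerically. In the sketch below, $F_k$ and $D_k$ are random stand-ins of hypothetical sizes, and positive diagonal matrices play the role of $\Sigma_{k,2}^2$ and $\Sigma_k^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, q = 2, 3, 3                 # hypothetical sizes: F_k is p x m, D_k is q x m
Fk = rng.standard_normal((p, m))
Dk = rng.standard_normal((q, m))
Sigma2_k2 = np.diag(rng.uniform(0.5, 2.0, p))   # stand-in for Sigma_{k,2}^2 > 0
Sigma2_k = np.diag(rng.uniform(0.5, 2.0, q))    # stand-in for Sigma_k^2 > 0

# R_k = [F_k^T D_k^T] blkdiag(Sigma_{k,2}^2, 4 Sigma_k^2) [F_k; D_k]
Rk = Fk.T @ Sigma2_k2 @ Fk + 4 * Dk.T @ Sigma2_k @ Dk

# With [F_k; D_k] of full column rank, R_k is symmetric positive definite
assert np.all(np.linalg.eigvalsh(Rk) > 0)
```

Random Gaussian blocks of these sizes have full column rank with probability one, so the assertion holds; if $F_k$ and $D_k$ shared a common null vector, $R_k$ would only be positive semidefinite.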

The only difference between the first and the second Riccati recursion is the initialization step. Next we show that $(\Phi_{12}^T \Phi_1^{-1} \Phi_{12} - \Phi_2)$ has the same structure as $\Phi_2$. First look at

\[
\Phi_1 = \begin{bmatrix} \beta^T\beta & \alpha^T \\ \alpha & -I \end{bmatrix}
\]

The structure of $\Phi_1$ makes it easy to invert. The inverse is given by

\[
\Phi_1^{-1} = \begin{bmatrix} \Gamma & \Gamma\alpha^T \\ \alpha\Gamma & \alpha\Gamma\alpha^T - I \end{bmatrix}
\]

where $\Gamma = (\beta^T\beta + \alpha^T\alpha)^{-1}$, which is a diagonal matrix, i.e. $\Gamma = \operatorname{diag}(\gamma_0, \ldots, \gamma_N)$. Recall that $\Phi_{12}$ is given by

\[
\Phi_{12} = \begin{bmatrix} \beta^T\Sigma L & 0 \\ \Omega L & 0 \end{bmatrix}
\]

This gives

\[
\Phi_{12}^T \Phi_1^{-1} \Phi_{12} = \begin{bmatrix} \chi & 0 \\ 0 & 0 \end{bmatrix}
\]

where

\[
\chi = L^T\Sigma^T\beta\bigl(\Gamma\beta^T\Sigma L + \Gamma\alpha^T\Omega L\bigr)
+ L^T\Omega^T\bigl(\alpha\Gamma\beta^T\Sigma L + (\alpha\Gamma\alpha^T - I)\Omega L\bigr)
\]

The structure of the matrices in this expression can be seen in Figure B.1, and the structure of $\chi$ is shown in Figure B.2. The structure is the same as that of (5.9). This leads to the conclusion that the Riccati recursion approach can be applied. The only difference is the initialization, where $\chi - Q$ has to be computed instead of $Q$. The additional flop count is small, since many of the matrix multiplications are already done in the initialization in step 1.
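The block inverse of $\Phi_1$ is easy to verify numerically. The sketch below uses random $\alpha$ and $\beta$ with assumed compatible sizes; for such generic data $\Gamma$ is of course not diagonal (the diagonality in the text comes from the special structure of $\alpha$ and $\beta$), but the inverse formula itself holds for any $\alpha$, $\beta$ with $\beta^T\beta + \alpha^T\alpha$ invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 6                        # assumed block sizes
beta = rng.standard_normal((p, p))
alpha = rng.standard_normal((q, p))

Phi1 = np.block([[beta.T @ beta, alpha.T],
                 [alpha,         -np.eye(q)]])
Gamma = np.linalg.inv(beta.T @ beta + alpha.T @ alpha)
Phi1_inv = np.block([[Gamma,         Gamma @ alpha.T],
                     [alpha @ Gamma, alpha @ Gamma @ alpha.T - np.eye(q)]])

# The closed-form expression is indeed the inverse of Phi1
assert np.allclose(Phi1 @ Phi1_inv, np.eye(p + q))
```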



Figure B.1: The structure of the matrices $\Sigma L$, $\alpha$, $\beta$ and $\Omega L$ when $N = 3$.

Figure B.2: The structure of the matrix $\chi$ when $N = 3$. The structure is the same as for $Q$.


Bibliography

Åkerblad, M., A. Hansson and B. Wahlberg (2000a). Automatic tuning for classical step-response specification using iterative feedback tuning. In: Proc. 39th IEEE Conference on Decision and Control.

Åkerblad, M., A. Horch and A. Hansson (2000b). A controller implementation with graphical user interface in Matlab. In: Reglermöte 2000.

Åkerblad, M. and A. Hansson (2002). Efficient solution of second order cone program for model predictive control. In: Proc. of IFAC 2002 World Congress. Barcelona, Spain.

Albuquerque, J. S., V. Gopal, G. H. Staus, L. T. Biegler and B. E. Ydstie (1997). Interior point SQP strategies for structured process optimization problems. Computers Chem. Engng. 21, Suppl., S853–S859.

Alizadeh, F. and S. Schmieta (1997). Optimization with semidefinite, quadratic and linear constraints. Technical report RRR 23-97, RUTCOR Research Report. Rutgers University, NJ, USA.

Arnold, E. and H. Puta (1994). An SQP-type solution method for constrained discrete-time optimal control problems. In: Computational Optimal Control (R. Bulirsch and D. Kraft, Eds.). Vol. 115 of International Series of Numerical Mathematics. pp. 127–136. Birkhäuser Verlag. Basel.

Åström, K. J. and A.-B. Österberg (1986). A modern teaching laboratory for process control. IEEE Contr. Syst. Mag. (5), 37–42.

Åström, K. J. and B. Wittenmark (1984). Computer Controlled Systems: Theory and Design. Prentice Hall.

Biegler, L. T. (1997). Advances in nonlinear programming concepts for process control. Journal of Process Control.



Bitmead, R. R., M. Gevers and V. Wertz (1990). Adaptive Optimal Control: The Thinking Man's GPC. Prentice Hall.

Blomvall, J. (2001). Optimization of Financial Decisions using a new Stochastic Programming Method. PhD thesis. Linköpings universitet.

Boyd, S. and L. Vandenberghe (2001). Convex Optimization. Manuscript, to be published in 2003.

Camacho, E. F. and C. Bordons (1998). Model Predictive Control. Springer.

Cutler, C. R. and B. L. Ramaker (1979). Dynamic matrix control—a computer control algorithm. In: Proceedings of the AIChE National Meeting. Houston, Texas.

Dantzig, G. B. (1963). Linear Programming and Extensions. Princeton University Press.

Garcia, C. E., D. M. Prett and M. Morari (1989). Model predictive control: Theory and practice—a survey. Automatica 25(3), 335–348.

Glad, T. and H. Jonson (1984). A method for state and control constrained linear quadratic control problems. In: Proceedings of the 9th IFAC World Congress. Budapest, Hungary.

Golub, G. H. and C. F. van Loan (1989). Matrix Computations. Second edition. Johns Hopkins.

Gopal, V. and L. T. Biegler (1998). Large scale inequality constrained optimization and control. IEEE Control Systems Magazine 18(6), 59–68.

Hansson, A. (2000). A primal-dual interior-point method for robust optimal control of linear discrete-time systems. IEEE Transactions on Automatic Control 45(9), 1639–1655.

Kantorovich, L. V. (1939). Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad State University. In Russian; translated in Management Science 6, pp. 366–422 (1960).

Karmarkar, N. K. (1984). A new polynomial-time algorithm for linear programming. Combinatorica.

Keerthi, S. S. and E. G. Gilbert (1988). Optimal, infinite horizon feedback laws for a general class of constrained discrete time systems: Stability and moving-horizon approximation. Journal of Optimization Theory and Applications 57, 265–293.



Lee, E. B. and L. Markus (1967). Foundations of Optimal Control Theory. New York: Wiley.

Lee, J.-W. (2000). Exponential stability of constrained receding horizon control with terminal ellipsoid constraints. IEEE Transactions on Automatic Control 45(1), 83–88.

Lee, Y. I. and B. Kouvaritakis (1999). Stabilizable regions of receding horizon predictive control with input constraints. Systems and Control Letters 38, 13–20.

Lobo, M., L. Vandenberghe, S. Boyd and H. Lebret (1998). Applications of second-order cone programming. Linear Algebra and its Applications 284, 193–228.

Löfberg, J. (2001a). Linear Model Predictive Control: Stability and Robustness. Licentiate Thesis No. 866, Linköpings universitet.

Löfberg, J. (2001b). A Matlab interface to SP, MAXDET and SOCP. Technical Report LiTH-ISY-R-2328. Department of Electrical Engineering, Linköping University. SE-581 83 Linköping, Sweden.

Luo, Z., J. F. Sturm and S. Zhang (1996). Duality and self-duality for conic convex programming. Technical report. Econometric Institute, Erasmus University Rotterdam.

Mayne, D. Q., J. B. Rawlings, C. V. Rao and P. O. M. Scokaert (2000). Constrained model predictive control: Stability and optimality. Automatica 36, 789–814.

Morari, M. and J. H. Lee (1999). Model predictive control: Past, present and future. Computers and Chemical Engineering 23, 667–682.

Nesterov, Y. and A. Nemirovsky (1994). Interior Point Polynomial Methods in Convex Programming. SIAM.

Propoi, A. I. (1963). Use of LP methods for synthesizing sampled-data automatic systems. Automation and Remote Control.

Qin, S. J. and T. A. Badgwell (1996). An overview of industrial predictive control technology. In: Chemical Process Control–V, Assessment and New Directions for Research. Tahoe City, CA.



Rao, C. V., S. J. Wright and J. B. Rawlings (1997). Application of interior-point methods to model predictive control. Preprint ANL/MCS-P664-0597. Mathematics and Computer Science Division, Argonne National Laboratory.

Rawlings, J. B. and K. R. Muske (1993). Stability of constrained receding horizon control. IEEE Transactions on Automatic Control 38(10), 1512–1516.

Richalet, J. A., A. Rault, J. L. Testud and J. Papon (1976). Algorithmic control of industrial processes. In: 4th IFAC Symposium on Identification and System Parameter Estimation. Tbilisi, USSR.

Richalet, J. A., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: applications to an industrial process. Automatica 14, 413–428.

Scokaert, P. O. M. and J. B. Rawlings (1998). Constrained linear quadratic control. IEEE Transactions on Automatic Control 43(8), 1163–1169.

SeDuMi (1.3). http://fewcal.kub.nl/sturm/software/sedumi.html.

Steinbach, M. C. (1994). A structured interior point SQP method for nonlinear optimal control problems. In: Computational Optimal Control (R. Bulirsch and D. Kraft, Eds.). Vol. 115 of International Series of Numerical Mathematics. pp. 213–222. Birkhäuser Verlag. Basel.

Todd, M. J. (1999). On search directions in interior-point methods for semidefinite programming. Optim. Methods Softw.

Tsuchiya, T. (1998). A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming. Technical report. The Institute of Statistical Mathematics, Tokyo, Japan.

Vandenberghe, L. and S. Boyd (1995). A primal-dual potential reduction method for problems involving matrix inequalities. Mathematical Programming 69, 205–236.

Vandenberghe, L. and S. Boyd (1996). Semidefinite programming. SIAM Review 38, 49–95.

Vandenberghe, L., S. Boyd and M. Nouralishahi (2002). Robust linear programming and optimal control. In: Proc. of IFAC 2002 World Congress. Barcelona, Spain.



von Neumann, J. (1937). Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes. Ergebnisse eines mathematischen Kolloquiums. Translated in The Review of Economic Studies, 13, pp. 1–9 (1945–1946).

Wills, A. G. and W. P. Heath (2002). Using a modified predictor-corrector algorithm for model predictive control. In: Proc. of IFAC 2002 World Congress. Barcelona, Spain.

Wolkowicz, H., R. Saigal and L. Vandenberghe (Eds.) (2000). Handbook of Semidefinite Programming: Theory, Algorithms and Applications. Kluwer Academic Publishers.

Wright, S. J. (1993). Interior-point methods for optimal control of discrete-time systems. J. Optim. Theory Appls. 77, 161–187.

Wright, S. J. (1996). Applying new optimization algorithms to model predictive control. Chemical Process Control–V.

Wright, S. J. (1997). Primal-Dual Interior-Point Methods. SIAM.
