
4 Multivariable Control System Design

Overview – Design of controllers for multivariable systems requires an assessment of structural properties of transfer matrices. The zeros and gains in multivariable systems have directions.

With norms of multivariable signals and systems it is possible to obtain bounds for gains, bandwidth and other system properties.

A possible approach to multivariable controller design is to reduce the problem to a series of single loop controller design problems. Examples are decentralized control and decoupling control.

The internal model principle applies to multivariable systems and, for example, may be used to design for multivariable integral action.

4.1 Introduction

Many complex engineering systems are equipped with several actuators that may influence their static and dynamic behavior. Commonly, in cases where some form of automatic control is required over the system, several sensors are also available to provide measurement information about important system variables that may be used for feedback control purposes. Systems with more than one actuating control input and more than one sensor output may be considered as multivariable or multi-input-multi-output (MIMO) systems. The control objective for multivariable systems is to obtain a desirable behavior of several output variables by simultaneously manipulating several input channels.

4.1.1 Examples of multivariable feedback systems

The following two examples discuss various phenomena that specifically occur in MIMO feedback systems and not in SISO systems, such as interaction between loops and multivariable non-minimum phase behavior.


Figure 4.1: Two-tank liquid flow process with recycle (flows φ1, φ2, φ3, φ4; levels h1, h2)

Figure 4.2: Two-loop feedback control of the two-tank liquid flow process (gains k1 and k2 close loops from h1,ref and h2,ref to the plant inputs φ1 and φ4)


Example 4.1.1 (Two-tank liquid flow process). Consider the flow process of Fig. 4.1. The incoming flow φ1 and recycle flow φ4 act as manipulable input variables to the system, and the outgoing flow φ3 acts as a disturbance. The control objective is to keep the liquid levels h1 and h2 between acceptable limits by applying feedback from measurements of these liquid levels, while accommodating variations in the output flow φ3. As derived in Appendix 4.6, a dynamic model, linearized about a given steady state, is

$$
\begin{bmatrix} \dot h_1(t) \\ \dot h_2(t) \end{bmatrix}
=
\begin{bmatrix} -1 & 0 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} h_1(t) \\ h_2(t) \end{bmatrix}
+
\begin{bmatrix} 1 & 1 \\ 0 & -1 \end{bmatrix}
\begin{bmatrix} \phi_1(t) \\ \phi_4(t) \end{bmatrix}
+
\begin{bmatrix} 0 \\ -1 \end{bmatrix} \phi_3(t).
\qquad (4.1)
$$

Here the $h_i$ and $\phi_i$ denote the deviations from the steady-state levels and flows. Laplace transformation yields

$$
\begin{bmatrix} H_1(s) \\ H_2(s) \end{bmatrix}
=
\begin{bmatrix} s+1 & 0 \\ -1 & s \end{bmatrix}^{-1}
\left(
\begin{bmatrix} 1 & 1 \\ 0 & -1 \end{bmatrix}
\begin{bmatrix} \Phi_1(s) \\ \Phi_4(s) \end{bmatrix}
+
\begin{bmatrix} 0 \\ -1 \end{bmatrix} \Phi_3(s)
\right),
$$

which results in the transfer matrix relationship

$$
\begin{bmatrix} H_1(s) \\ H_2(s) \end{bmatrix}
=
\begin{bmatrix} \frac{1}{s+1} & \frac{1}{s+1} \\[1mm] \frac{1}{s(s+1)} & -\frac{1}{s+1} \end{bmatrix}
\begin{bmatrix} \Phi_1(s) \\ \Phi_4(s) \end{bmatrix}
+
\begin{bmatrix} 0 \\ -\frac{1}{s} \end{bmatrix} \Phi_3(s).
\qquad (4.2)
$$

The example demonstrates the following typical MIMO system phenomena:

- Each of the manipulable inputs φ1 and φ4 affects each of the outputs to be controlled, h1 and h2. As a consequence the transfer matrix

$$
P(s) = \begin{bmatrix} \frac{1}{s+1} & \frac{1}{s+1} \\[1mm] \frac{1}{s(s+1)} & -\frac{1}{s+1} \end{bmatrix}
\qquad (4.3)
$$

has nonzero entries both at the diagonal and at the off-diagonal positions.

- Consequently, if we control the two output variables h1 and h2 using two separate control loops, it is not immediately clear which input to use to control h1, and which one to control h2. This issue is called the input/output pairing problem.

- A possible feedback scheme using two separate loops is shown in Fig. 4.2. Note that in this control scheme, there exists a coupling between both loops due to the non-diagonal terms in the transfer matrix (4.3). As a result, the plant transfer function in the open upper loop from φ1 to h1, with the lower loop closed with proportional gain k2, depends on k2,

$$
P_{11}(s)\big|_{\mathrm{llc}} = \frac{1}{s+1}\left(1 - \frac{k_2}{s(s+1-k_2)}\right).
\qquad (4.4)
$$

Here "llc" indicates that the lower loop is closed. Owing to the negative steady-state gain in the lower loop, negative values of k2 stabilize the lower loop. The dynamics of $P_{11}(s)\big|_{\mathrm{llc}}$ change with k2:

$$
k_2 = 0:\; P_{11}(s)\big|_{\mathrm{llc}} = \frac{1}{s+1},
\qquad
k_2 = -1:\; P_{11}(s)\big|_{\mathrm{llc}} = \frac{s+1}{s(s+2)},
\qquad
k_2 = -\infty:\; P_{11}(s)\big|_{\mathrm{llc}} = \frac{1}{s}.
$$


The phenomenon that the loop gain in one loop also depends on the loop gain in another loop is called interaction. The multiple loop control structure used here does not explicitly acknowledge interaction phenomena. A control structure using individual loops, such as multiple loop control, is an example of a decentralized control structure, as opposed to a centralized control structure where all measured information is available for feedback in all feedback channels.
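The interaction can be made concrete with a short computer-algebra calculation. The following sympy sketch (added for illustration, not part of the original text) closes the lower loop around the transfer matrix (4.3) with the convention φ4 = k2(h2,ref − h2) of Fig. 4.2 and recovers (4.4) for a few values of k2.

```python
import sympy as sp

s, k2 = sp.symbols('s k2')
P = sp.Matrix([[1/(s + 1), 1/(s + 1)],
               [1/(s*(s + 1)), -1/(s + 1)]])

# Lower loop closed: u2 = -k2*y2, so u2 = -k2*P21*u1/(1 + k2*P22) and the
# upper loop sees P11 - k2*P12*P21/(1 + k2*P22), which is (4.4).
P11_llc = P[0, 0] - k2*P[0, 1]*P[1, 0]/(1 + k2*P[1, 1])

print(sp.simplify(P11_llc))   # equivalent to (s - k2)/(s*(s + 1 - k2)), i.e. eq. (4.4)
assert sp.simplify(P11_llc.subs(k2, 0) - 1/(s + 1)) == 0
assert sp.simplify(P11_llc.subs(k2, -1) - (s + 1)/(s*(s + 2))) == 0
```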

Figure 4.3: Decoupling feedback control of the two-tank liquid flow process (loop gains k1, k2 and precompensator Kpre; together these form the controller K)

- A different approach to multivariable control is to try to compensate for the plant interaction. In Example 4.1.1 a precompensator Kpre in series with the plant can be found which removes the interaction completely. The precompensator Kpre is to be designed such that the transfer matrix P Kpre is diagonal. Figure 4.3 shows the feedback structure. Suppose that the precompensator Kpre is given the structure

$$
K^{\mathrm{pre}}(s) = \begin{bmatrix} 1 & K^{\mathrm{pre}}_{12}(s) \\ K^{\mathrm{pre}}_{21}(s) & 1 \end{bmatrix},
\qquad (4.5)
$$

then $P K^{\mathrm{pre}}$ is diagonal by selecting

$$
K^{\mathrm{pre}}_{12}(s) = -1, \qquad K^{\mathrm{pre}}_{21}(s) = \frac{1}{s}.
\qquad (4.6)
$$

The compensated plant $P K^{\mathrm{pre}}$, for which a multi-loop feedback is to be designed as a next step, reads

$$
P(s) K^{\mathrm{pre}}(s) = \begin{bmatrix} \frac{1}{s} & 0 \\[1mm] 0 & -\frac{1}{s} \end{bmatrix}.
\qquad (4.7)
$$

Closing both loops with proportional gains k1 > 0 and k2 < 0 leads to a stable system. In doing so, a multivariable controller K has been designed having transfer matrix

$$
K(s) = K^{\mathrm{pre}}(s) \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix}
= \begin{bmatrix} k_1 & -k_2 \\ k_1/s & k_2 \end{bmatrix}.
\qquad (4.8)
$$

This approach of removing interaction before actually closing multiple individual loops is called decoupling control.
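As a quick check of (4.5)–(4.7), the decoupling property can be verified symbolically (an illustrative sketch added here, not part of the original text):

```python
import sympy as sp

s = sp.symbols('s')
P = sp.Matrix([[1/(s + 1), 1/(s + 1)],
               [1/(s*(s + 1)), -1/(s + 1)]])
Kpre = sp.Matrix([[1, -1],
                  [1/s, 1]])

# The product should be diagonal, equal to diag(1/s, -1/s) as in (4.7).
print((P*Kpre).applyfunc(sp.simplify))   # Matrix([[1/s, 0], [0, -1/s]])
```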

The next example shows that a multivariable system in which the transfer matrix contains first-order entries may show non-minimum phase behavior.


Example 4.1.2 (Multivariable non-minimum phase behavior). Consider the two-input, two-output linear system

$$
\begin{bmatrix} y_1(s) \\ y_2(s) \end{bmatrix}
=
\begin{bmatrix} \frac{1}{s+1} & \frac{2}{s+3} \\[1mm] \frac{1}{s+1} & \frac{1}{s+1} \end{bmatrix}
\begin{bmatrix} u_1(s) \\ u_2(s) \end{bmatrix}.
\qquad (4.9)
$$

If the lower loop is closed with constant gain, $u_2(s) = -k_2 y_2(s)$, then for high gain values ($k_2 \to \infty$) the lower feedback loop is stable but has a zero in the right-half plane,

$$
y_1(s) = \left(\frac{1}{s+1} - \frac{2}{s+3}\right) u_1(s) = -\frac{s-1}{(s+1)(s+3)}\, u_1(s).
\qquad (4.10)
$$

Thus under high gain feedback of the lower loop, the upper part of the system exhibits non-minimum phase behavior. Conversely, if the upper loop is closed under high gain feedback, $u_1 = -k_1 y_1$ with $k_1 \to \infty$, then

$$
y_2(s) = -\frac{s-1}{(s+1)(s+3)}\, u_2(s).
\qquad (4.11)
$$

Apparently, the non-minimum phase behavior of the system is not connected to one particular input-output relation, but shows up in both relations. One loop can be closed with high gain while preserving stability of that loop, and the other loop is then restricted to have limited gain due to the non-minimum phase behavior. Analysis of the transfer matrix

$$
P(s) =
\begin{bmatrix} \frac{1}{s+1} & \frac{2}{s+3} \\[1mm] \frac{1}{s+1} & \frac{1}{s+1} \end{bmatrix}
= \frac{1}{(s+1)(s+3)}
\begin{bmatrix} s+3 & 2s+2 \\ s+3 & s+3 \end{bmatrix}
\qquad (4.12)
$$

shows that it loses rank at $s = 1$. In the next section it will be shown that $s = 1$ is an unstable transmission zero of the multivariable system, and this limits the closed-loop behavior irrespective of the controller design method used. Consider the method of decoupling precompensation. Express plant and precompensator as

$$
P(s) = \begin{bmatrix} P_{11}(s) & P_{12}(s) \\ P_{21}(s) & P_{22}(s) \end{bmatrix},
\qquad
K(s) = \begin{bmatrix} K_{11}(s) & K_{12}(s) \\ K_{21}(s) & K_{22}(s) \end{bmatrix}.
\qquad (4.13)
$$

Decoupling means that $PK$ is diagonal. Inserting the matrix (4.12) into $PK$, a solution for a decoupling $K$ is

$$
K_{11}(s) = 1, \qquad K_{22}(s) = 1, \qquad K_{12}(s) = -\frac{2(s+1)}{s+3}, \qquad K_{21}(s) = -1.
\qquad (4.14)
$$

Then the compensated system is

$$
P(s)K(s) =
\begin{bmatrix} \frac{1}{s+1} & \frac{2}{s+3} \\[1mm] \frac{1}{s+1} & \frac{1}{s+1} \end{bmatrix}
\begin{bmatrix} 1 & -\frac{2(s+1)}{s+3} \\ -1 & 1 \end{bmatrix}
=
\begin{bmatrix} -\frac{s-1}{(s+1)(s+3)} & 0 \\[1mm] 0 & -\frac{s-1}{(s+1)(s+3)} \end{bmatrix}.
\qquad (4.15)
$$

The requirement of decoupling has introduced a right-half plane zero in each of the loops, so that either loop now can only have a restricted gain in the feedback loop. ∎

The example suggests that the occurrence of non-minimum phase phenomena is a property of the transfer matrix and exhibits itself in a matrix sense, not connected to a particular input-output pair. It shows that a zero of a multivariable system has no relationship with the possible zeros of individual entries of the transfer matrix.
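A quick numerical illustration (a sketch added here, not in the original text): evaluating (4.12) at the transmission zero s = 1 shows the rank drop, even though every individual entry of P is perfectly well behaved there.

```python
# Rank of P(s) from (4.12) at the transmission zero s = 1 and at a nearby point.
import numpy as np

def P(s):
    return np.array([[1/(s + 1), 2/(s + 3)],
                     [1/(s + 1), 1/(s + 1)]])

print(np.linalg.matrix_rank(P(1.0)))   # 1: P loses rank at s = 1
print(np.linalg.matrix_rank(P(2.0)))   # 2: full rank away from the zero
```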


In the example the input-output pairing has been the natural one: output i is connected by a feedback loop to input i. This is however quite an arbitrary choice, as it is the result of our model formulation that determines which inputs and which outputs are ordered as one, two and so on. Thus the selection of the most useful input-output pairs is a non-trivial issue in multiloop control or decentralized control, i.e. in control configurations where one has individual loops as in the example. A classical approach towards dealing with multivariable systems is to bring a multivariable system to a structure that is a collection of one input, one output control problems. This approach of decoupling control may have some advantages in certain practical situations, and was thought to lead to a simpler design approach. However, as the above example showed, decoupling may introduce additional restrictions regarding the feedback properties of the system.

4.1.2 Survey of developments in multivariable control

The first papers on multivariable control systems appeared in the fifties and considered aspects of noninteracting control. In the sixties, the work of Rosenbrock (1970) considered matrix techniques to study questions of rational and polynomial representation of multivariable systems. The polynomial representation was also studied by Wolovich (1974). The books by Kailath (1980) and Vardulakis (1991) provide a broad overview. The use of Nyquist techniques for multivariable control design was developed by Rosenbrock (1974a). The generalization of the Nyquist criterion and of root locus techniques to the multivariable case can be found in the work of Postlethwaite and MacFarlane (1979). The geometric approach to multivariable state-space control design is contained in the classical book by Wonham (1979) and in the book by Basile and Marro (1992). A survey of classical design methods for multivariable control systems can be found in (Korn and Wilfert 1982), (Lunze 1988) and in the two books by Tolle (1983), (1985). Modern approaches to frequency domain methods can be found in (Raisch 1993), (Maciejowski 1989), and (Skogestad and Postlethwaite 1995). Interaction phenomena in multivariable process control systems are discussed in terms of a process control formulation in (McAvoy 1983). A modern, process-control oriented approach to multivariable control is presented in (Morari and Zafiriou 1989). The numerical properties of several computational algorithms relevant to the area of multivariable control design are discussed in (Svaricek 1995).

4.2 Poles and zeros of multivariable systems

In this section, a number of structural properties of multivariable systems are discussed. These properties are important for the understanding of the behavior of the system under feedback. The systems are analyzed both in the time domain and in the frequency domain.

4.2.1 Polynomial and rational matrices

Let P be a proper real-rational transfer matrix. Real-rational means that every entry $P_{ij}$ of P is a ratio of two polynomials having real coefficients. Any rational matrix P may be written as a fraction

$$
P = \frac{1}{d}\, N
$$

where N is a polynomial matrix¹ and d is the monic least common multiple of all denominator polynomials of the entries of P. Polynomial matrices will be studied first. Many results about polynomial matrices may be found in, e.g., the books by MacDuffee (1956), Gohberg, Lancaster and Rodman (1982), and Kailath (1980).

¹ That is, a matrix whose entries are polynomials.

In the scalar case a polynomial has no zeros if and only if it is a nonzero constant, or, to put it differently, if and only if its inverse is a polynomial as well. The matrix generalization is as follows.

Definition 4.2.1 (Unimodular polynomial matrix). A polynomial matrix is unimodular if it is square and its inverse exists and is a polynomial matrix as well. ∎

It may be shown that a square polynomial matrix U is unimodular if and only if det U is a nonzero constant. Unimodular matrices are considered to have no zeros and, hence, multiplication by unimodular matrices does not affect the zeros.

Summary 4.2.2 (Smith form of a polynomial matrix). For every polynomial matrix N there exist unimodular U and V such that

$$
U N V =
\begin{bmatrix}
\varepsilon'_1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & \varepsilon'_2 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & & \ddots & & & & \vdots \\
0 & 0 & \cdots & \varepsilon'_r & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0
\end{bmatrix}
=: S
\qquad (4.16)
$$

where the $\varepsilon'_i$ ($i = 1, \ldots, r$) are monic polynomials with the property that $\varepsilon'_i/\varepsilon'_{i-1}$ is polynomial. The matrix S is known as the Smith form of N and the polynomials $\varepsilon'_i$ the invariant polynomials of N. In this case r is the normal rank of the polynomial matrix N, and the zeros of N are defined as the zeros of one or more of its invariant polynomials. ∎

The Smith form S of N may be obtained by applying to N a sequence of elementary row and column operations, which are:

- multiply a row/column by a nonzero constant;
- interchange two rows or two columns;
- add a row/column multiplied by a polynomial to another row/column.

Example 4.2.3 (Elementary row/column operations). The following sequence of elementary operations brings the polynomial matrix

$$
N(s) = \begin{bmatrix} s+1 & s-1 \\ s+2 & s-2 \end{bmatrix}
$$

to diagonal Smith form:

$$
\begin{bmatrix} s+1 & s-1 \\ s+2 & s-2 \end{bmatrix}
\;\overset{(1)}{\Rightarrow}\;
\begin{bmatrix} s+1 & s-1 \\ 1 & -1 \end{bmatrix}
\;\overset{(2)}{\Rightarrow}\;
\begin{bmatrix} s+1 & 2s \\ 1 & 0 \end{bmatrix}
\;\overset{(3)}{\Rightarrow}\;
\begin{bmatrix} 0 & 2s \\ 1 & 0 \end{bmatrix}
\;\overset{(4)}{\Rightarrow}\;
\begin{bmatrix} 1 & 0 \\ 0 & s \end{bmatrix}.
$$

Here, in step (1) we subtracted row 1 from row 2; in step (2) we added the first column to the second column; in step (3), s + 1 times the second row was subtracted from row 1. Finally, in step (4), we interchanged the rows and then divided the (new) second row by a factor 2.

Elementary row operations correspond to premultiplication by unimodular matrices, and elementary column operations correspond to postmultiplication by unimodular matrices. For the above four elementary operations these are


(1) premultiply by $U_{L1}(s) := \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}$;

(2) postmultiply by $V_{R2}(s) := \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$;

(3) premultiply by $U_{L3}(s) := \begin{bmatrix} 1 & -(s+1) \\ 0 & 1 \end{bmatrix}$;

(4) premultiply by $U_{L4}(s) := \begin{bmatrix} 0 & 1 \\ 1/2 & 0 \end{bmatrix}$.

Instead of applying the multiplications on N sequentially, we may also first combine the sequence of unimodular matrices into two unimodular matrices $U = U_{L4} U_{L3} U_{L1}$ and $V = V_{R2}$, and then apply U and V,

$$
\underbrace{\begin{bmatrix} -1 & 1 \\ 1 + s/2 & -1/2 - s/2 \end{bmatrix}}_{U(s) = U_{L4}(s) U_{L3}(s) U_{L1}(s)}
\;
\underbrace{\begin{bmatrix} s+1 & s-1 \\ s+2 & s-2 \end{bmatrix}}_{N(s)}
\;
\underbrace{\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}}_{V(s) = V_{R2}(s)}
=
\underbrace{\begin{bmatrix} 1 & 0 \\ 0 & s \end{bmatrix}}_{S(s)}.
$$

This clearly shows that S is the Smith form of N. The polynomial matrix N has rank 2 and has one zero at s = 0. ∎
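The combined transformation can be checked with a few lines of computer algebra (an illustrative sketch added here, not part of the original text):

```python
# Check that U(s) N(s) V(s) equals the Smith form S(s) of Example 4.2.3.
import sympy as sp

s = sp.symbols('s')
N = sp.Matrix([[s + 1, s - 1],
               [s + 2, s - 2]])
U = sp.Matrix([[-1, 1],
               [1 + s/2, -sp.Rational(1, 2) - s/2]])
V = sp.Matrix([[1, 1],
               [0, 1]])

print(sp.expand(U*N*V))              # Matrix([[1, 0], [0, s]])
print(sp.factor(U.det()), V.det())   # nonzero constants, so U and V are unimodular
```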

Now consider a rational matrix P. We write P as

$$
P = \frac{1}{d}\, N
\qquad (4.17)
$$

where N is a polynomial matrix and d a scalar polynomial. We immediately obtain a generalization of the Smith form.

Summary 4.2.4 (Smith-McMillan form of a rational matrix). For every rational matrix P there exist unimodular polynomial matrices U and V such that

$$
U P V =
\begin{bmatrix}
\frac{\varepsilon_1}{\psi_1} & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & \frac{\varepsilon_2}{\psi_2} & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & & \ddots & & & & \vdots \\
0 & 0 & \cdots & \frac{\varepsilon_r}{\psi_r} & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0
\end{bmatrix}
=: M
\qquad (4.18)
$$

where the $\varepsilon_i$ and $\psi_i$ are coprime such that

$$
\frac{\varepsilon_i}{\psi_i} = \frac{\varepsilon'_i}{d}
\qquad (4.19)
$$

and the $\varepsilon'_i$ ($i = 1, \ldots, r$) are the invariant polynomials of $N = dP$. The matrix M is known as the Smith-McMillan form of P. In this case r is the normal rank of the rational matrix P. The transmission zeros of P are defined as the zeros of $\varepsilon_i$ ($i = 1, \ldots, r$) and the poles are the zeros of $\psi_i$ ($i = 1, \ldots, r$). ∎


Definitions for other notions of zeros may be found in (Rosenbrock 1970), (Rosenbrock 1973), (Rosenbrock 1974b), and in (Schrader and Sain 1989).

Example 4.2.5 (Smith-McMillan form). Consider the transfer matrix

$$
P(s) =
\begin{bmatrix} \frac{1}{s} & \frac{1}{s(s+1)} \\[1mm] \frac{1}{s+1} & \frac{1}{s+1} \end{bmatrix}
= \frac{1}{s(s+1)} \begin{bmatrix} s+1 & 1 \\ s & s \end{bmatrix}
= \frac{1}{d(s)}\, N(s).
$$

The Smith form of N is obtained by the following sequence of elementary operations:

$$
\begin{bmatrix} s+1 & 1 \\ s & s \end{bmatrix}
\Rightarrow
\begin{bmatrix} s+1 & 1 \\ -s^2 & 0 \end{bmatrix}
\Rightarrow
\begin{bmatrix} 1 & s+1 \\ 0 & s^2 \end{bmatrix}
\Rightarrow
\begin{bmatrix} 1 & 0 \\ 0 & s^2 \end{bmatrix}.
$$

Division of each diagonal entry by d yields the Smith-McMillan form

$$
\begin{bmatrix} \frac{1}{s(s+1)} & 0 \\[1mm] 0 & \frac{s}{s+1} \end{bmatrix}.
$$

The set of transmission zeros is {0}, the set of poles is {−1, −1, 0}. This shows that transmission zeros of a multivariable transfer matrix may coincide with poles without being canceled if they occur in different diagonal entries of the Smith-McMillan form. In the determinant they do cancel. Therefore from the determinant

$$
\det P(s) = \frac{1}{(s+1)^2}
$$

we may not always uncover all poles and transmission zeros of P. (Generally P need not be square, so its determinant may not even exist.) ∎

An $s_0 \in \mathbb{C}$ is a pole of P if and only if it is a pole of one or more entries $P_{ij}$ of P. An $s_0 \in \mathbb{C}$ that is not a pole of P is a transmission zero of P if and only if the rank of $P(s_0)$ is strictly less than the normal rank r as defined by the Smith-McMillan form of P. For square invertible matrices P there further holds that $s_0$ is a transmission zero of P if and only if it is a pole of $P^{-1}$. For example

$$
P(s) = \begin{bmatrix} 1 & 1/s \\ 0 & 1 \end{bmatrix}
$$

has a transmission zero at s = 0 — even though det P(s) = 1 — because $P^{-1}(s) = \begin{bmatrix} 1 & -1/s \\ 0 & 1 \end{bmatrix}$ has a pole at s = 0.

If the normal rank of an $n_y \times n_u$ plant P is r, then at most r entries of the output $y = Pu$ can be given independent values by manipulating the input u. This generally means that r entries of the output are enough for feedback control. Let K be any $n_u \times n_y$ controller transfer matrix; then the sensitivity matrix S and complementary sensitivity matrix T as previously defined are for the multivariable case (see Exercise 1.5.4)

$$
S = (I + PK)^{-1}, \qquad T = (I + PK)^{-1} PK.
\qquad (4.20)
$$

If $r < n_y$, then the $n_y \times n_y$ loop gain PK is singular, so that S has one or more eigenvalues $\lambda = 1$ for any s in the complex plane, in particular everywhere on the imaginary axis. In such cases $\|S(j\omega)\|$ in whatever norm does not converge to zero as $\omega \to 0$. This indicates an undesired situation which is to be prevented by ensuring that the rank of P equals the number of outputs to be controlled. This must be realized by proper selection and application of actuators and sensors in the feedback control system.

Feedback around a nonsquare P can only occur in conjunction with a compensator K which makes the series connection PK square, as required by unity feedback. Such controllers K are said to square down the plant P. We investigate squaring down.

Example 4.2.6 (Squaring down). Consider

$$
P(s) = \begin{bmatrix} \frac{1}{s} & \frac{1}{s^2} \end{bmatrix}.
$$

Its Smith-McMillan form is $M(s) = \begin{bmatrix} 1/s^2 & 0 \end{bmatrix}$. The plant has no transmission zeros and has a double pole at s = 0. As pre-compensator we propose

$$
K_0(s) = \begin{bmatrix} 1 \\ a \end{bmatrix}.
$$

This results in the squared-down system

$$
P(s) K_0(s) = \frac{s + a}{s^2}.
$$

Apparently squaring down may introduce transmission zeros. In this example the choice a > 0 creates a zero in the left-half plane, so that subsequent high gain feedback can be applied, allowing a stable closed-loop system with high loop gain. ∎

In the example, P does not have transmission zeros. A general 1 × 2 plant $P(s) = \begin{bmatrix} a(s) & b(s) \end{bmatrix}$ has transmission zeros if and only if a and b have common zeros. If a and b are in some sense randomly chosen, then it is very unlikely that a and b have common zeros. Generally it holds that nonsquare plants have no transmission zeros, assuming that the entries $P_{ij}$ are in some sense uncorrelated with the other entries. However, many physical systems bear structure in the entries of their transfer matrix, so that these cannot be considered as belonging to the class of generic systems. Many physically existing nonsquare systems actually turn out to possess transmission zeros.

The theory of transmission zero placement by squaring down is presently not complete, although a number of results are available in the literature. Sain and Schrader (1990) describe the general problem area, and results regarding the squaring problem are described in (Horowitz and Gera 1979), (Karcanias and Giannakopoulos 1989), (Le and Safonov 1992), (Sebakhy, ElSingaby and ElArabawy 1986), (Stoorvogel and Ludlage 1994), and (Shaked 1976).

Example 4.2.7 (Squaring down). Consider the transfer matrix P and its Smith-McMillan form

$$
P(s) =
\begin{bmatrix}
\frac{1}{s} & \frac{1}{s(s^2-1)} \\[1mm]
1 & \frac{1}{s^2-1} \\[1mm]
0 & \frac{s}{s^2-1}
\end{bmatrix}
= \frac{1}{s(s^2-1)}
\begin{bmatrix}
s^2-1 & 1 \\
s(s^2-1) & s \\
0 & s^2
\end{bmatrix}
\qquad (4.21)
$$

$$
=
\underbrace{\begin{bmatrix} 1 & 0 & 0 \\ s & 0 & 1 \\ s^2 & -1 & 0 \end{bmatrix}}_{U^{-1}(s)}
\;
\underbrace{\begin{bmatrix} \frac{1}{s(s^2-1)} & 0 \\ 0 & s \\ 0 & 0 \end{bmatrix}}_{M(s)}
\;
\underbrace{\begin{bmatrix} s^2-1 & 1 \\ 1 & 0 \end{bmatrix}}_{V^{-1}(s)}.
\qquad (4.22)
$$

There is one transmission zero {0} and the set of poles is {−1, 1, 0}. The postcompensator $K_0$ is now 2 × 3, which we parameterize as

$$
K_0 = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix}.
$$


We obtain the newly formed set of transmission zeros as the set of zeros of

$$
\det\left(
\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ s & 0 \\ s^2 & -1 \end{bmatrix}
\right)
= (cd - af) + (ce - bf)\,s,
$$

where the second matrix in this expression consists of the first two columns of $U^{-1}$. Thus one single transmission zero can be assigned to a desired location. Choose this zero at s = −1. This leads for example to the choice a = 0, b = 0, c = 1, d = 1, e = 1, f = 0,

$$
K_0(s) := \begin{bmatrix} 0 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}
$$

and the squared-down system

$$
K_0(s) P(s) =
\begin{bmatrix} 0 & \frac{s}{s^2-1} \\[1mm] \frac{s+1}{s} & \frac{1}{s(s-1)} \end{bmatrix}
= \frac{1}{s(s^2-1)}
\begin{bmatrix} 0 & s^2 \\ (s+1)(s^2-1) & s+1 \end{bmatrix}.
$$

This squared-down matrix may be shown to have Smith-McMillan form

$$
\begin{bmatrix} \frac{1}{s(s^2-1)} & 0 \\[1mm] 0 & s(s+1) \end{bmatrix}
$$

and consequently the set of transmission zeros is {0, −1}; the set of poles remains unchanged as {−1, 1, 0}. These values are as expected, i.e., the zero at s = 0 has been retained, and the new zero at −1 has been formed. Note that s = 0 is both a zero and a pole. ∎

4.2.2 Transmission zeros of state-space realizations

Any proper $n_y \times n_u$ rational matrix P has a state-space realization

$$
P(s) = C(sI_n - A)^{-1}B + D,
\qquad A \in \mathbb{R}^{n \times n},\; B \in \mathbb{R}^{n \times n_u},\; C \in \mathbb{R}^{n_y \times n},\; D \in \mathbb{R}^{n_y \times n_u}.
$$

Realizations are commonly denoted as a quadruple (A, B, C, D). There are many realizations (A, B, C, D) that define the same transfer matrix P. For one, the order n is not fixed. A realization of order n of P is minimal if no realization of P exists that has a lower order. Realizations are minimal if and only if the order n equals the degree of the polynomial $\psi_1 \psi_2 \cdots \psi_r$ formed from the denominator polynomials in (4.18), see (Rosenbrock 1970). The poles of P then equal the eigenvalues of A, which, incidentally, shows that computation of poles is a standard eigenvalue problem. Numerical computation of the transmission zeros may be more difficult. In some special cases computation can be done by standard operations, in other cases specialized numerical algorithms have to be used.

Lemma 4.2.8 (Transmission zeros of a minimal state-space system). Let (A, B, C, D) be a minimal state-space realization of order n of a proper real-rational $n_y \times n_u$ transfer matrix P of rank r. Then the transmission zeros $s_0 \in \mathbb{C}$ of P as defined in Summary 4.2.4 are the zeros $s_0$ of the polynomial matrix

$$
\begin{bmatrix} A - s_0 I & B \\ C & D \end{bmatrix}.
\qquad (4.23)
$$

That is, $s_0$ is a transmission zero if and only if $\operatorname{rank} \begin{bmatrix} A - s_0 I & B \\ C & D \end{bmatrix} < n + r$. ∎


The proof of this result may be found in Appendix 4.6. Several further properties may be derived from this result.

Summary 4.2.9 (Invariance of transmission zeros). The zeros $s_0$ of (4.23) are invariant under the following operations:

- nonsingular input space transformations ($T_2$) and output space transformations ($T_3$):

$$
\begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}
\Rightarrow
\begin{bmatrix} A - sI & BT_2 \\ T_3 C & T_3 D T_2 \end{bmatrix}
$$

- nonsingular state space transformations ($T_1$):

$$
\begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}
\Rightarrow
\begin{bmatrix} T_1 A T_1^{-1} - sI & T_1 B \\ C T_1^{-1} & D \end{bmatrix}
$$

- static output feedback operations, i.e. for the system $\dot x = Ax + Bu$, $y = Cx + Du$, we apply the feedback law $u = -Ky + v$ where v acts as new input. Assuming that $\det(I + DK) \neq 0$, this involves the transformation:

$$
\begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}
\Rightarrow
\begin{bmatrix} A - BK(I + DK)^{-1}C - sI & B(I + KD)^{-1} \\ (I + DK)^{-1}C & (I + DK)^{-1}D \end{bmatrix}
$$

- state feedback (F) and output injection (L) operations,

$$
\begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}
\Rightarrow
\begin{bmatrix} A - BF - sI & B \\ C - DF & D \end{bmatrix}
\Rightarrow
\begin{bmatrix} A - BF - LC + LDF - sI & B - LD \\ C - DF & D \end{bmatrix}.
\qquad ∎
$$

The transmission zeros have a clear interpretation as blocking certain exponential inputs, (Desoer and Schulman 1974), (MacFarlane and Karcanias 1976).

Example 4.2.10 (Blocking property of transmission zeros). Suppose that the system realization $P(s) = C(sI - A)^{-1}B + D$ is minimal and that P is left-invertible, that is, rank $P = n_u$. Then $s_0$ is a transmission zero if and only if $\begin{bmatrix} A - s_0 I & B \\ C & D \end{bmatrix}$ has less than full column rank, in other words if and only if there exist vectors $x_{s_0}$, $u_{s_0}$ — not both² zero — that satisfy

$$
\begin{bmatrix} A - s_0 I & B \\ C & D \end{bmatrix}
\begin{bmatrix} x_{s_0} \\ u_{s_0} \end{bmatrix} = 0.
\qquad (4.24)
$$

Then the state and input defined as

$$
x(t) = x_{s_0}\, \mathrm{e}^{s_0 t}, \qquad u(t) = u_{s_0}\, \mathrm{e}^{s_0 t}
$$

satisfy the system equations

$$
\dot x(t) = A x(t) + B u(t), \qquad C x(t) + D u(t) = 0.
\qquad (4.25)
$$

We see that $y = Cx + Du$ is identically zero. Transmission zeros $s_0$ hence correspond to certain exponential inputs whose output is zero for all time. ∎

² In fact here neither $x_{s_0}$ nor $u_{s_0}$ is zero.


Under constant high gain output feedback $u = Ky$ the finite closed-loop poles generally approach the transmission zeros of the transfer matrix P. In this respect the transmission zeros of a MIMO system play a similar role in determining performance limitations as do the zeros in the SISO case. See e.g. (Francis and Wonham 1975). If P is square and invertible, then the transmission zeros are the zeros of the polynomial

$$
\det \begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}.
\qquad (4.26)
$$

If in addition D is invertible, then

$$
\det\left(
\begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}
\begin{bmatrix} I & 0 \\ -D^{-1}C & I \end{bmatrix}
\right)
= \det \begin{bmatrix} (A - BD^{-1}C) - sI & B \\ 0 & D \end{bmatrix}.
\qquad (4.27)
$$

So then the transmission zeros are the eigenvalues of $A - BD^{-1}C$. If D is not invertible then computation of transmission zeros is less straightforward. We may have to resort to computation of the Smith-McMillan form, but preferably we use methods based on state-space realizations.
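A minimal numerical sketch of (4.27) follows (added for illustration; the matrices are an arbitrary example with invertible direct feedthrough, not taken from the text):

```python
# Transmission zeros of a square system with invertible D, via eig(A - B D^{-1} C).
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.eye(2)
C = np.eye(2)
D = np.array([[1.0, 0.0],
              [1.0, 2.0]])          # invertible direct feedthrough

zeros = np.linalg.eigvals(A - B @ np.linalg.solve(D, C))
print(zeros)                        # transmission zeros of P(s) = C(sI - A)^{-1}B + D
```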

Example 4.2.11 (Transmission zeros of the liquid flow system). Consider Example 4.1.1 with plant transfer matrix

$$
P(s) =
\begin{bmatrix} \frac{1}{s+1} & \frac{1}{s+1} \\[1mm] \frac{1}{s(s+1)} & -\frac{1}{s+1} \end{bmatrix}
= \frac{1}{s(s+1)} \begin{bmatrix} s & s \\ 1 & -s \end{bmatrix}.
$$

Elementary row and column operations successively applied to the polynomial part lead to the Smith form

$$
\begin{bmatrix} s & s \\ 1 & -s \end{bmatrix}
\;\overset{(1)}{\Rightarrow}\;
\begin{bmatrix} s+1 & 0 \\ 1 & -s \end{bmatrix}
\;\overset{(2)}{\Rightarrow}\;
\begin{bmatrix} s+1 & s(s+1) \\ 1 & 0 \end{bmatrix}
\;\overset{(3)}{\Rightarrow}\;
\begin{bmatrix} 1 & 0 \\ 0 & s(s+1) \end{bmatrix}.
$$

The Smith-McMillan form hence is

$$
M(s) = \frac{1}{s(s+1)} \begin{bmatrix} 1 & 0 \\ 0 & s(s+1) \end{bmatrix}
= \begin{bmatrix} \frac{1}{s(s+1)} & 0 \\ 0 & 1 \end{bmatrix}.
$$

The liquid flow system has no zeros but has two poles, {0, −1}.

We may also compute the poles and zeros from the state-space representation of P, $P(s) = C(sI - A)^{-1}B + D$,

$$
\begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}
=
\begin{bmatrix}
-1-s & 0 & 1 & 1 \\
1 & -s & 0 & -1 \\
-1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{bmatrix}.
$$

The realization is minimal because both B and C are square and nonsingular. The poles are therefore the eigenvalues of A, which indeed are {0, −1} as before. There are no transmission zeros because $\det \begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}$ is a nonzero constant (verify this). ∎

Example 4.2.12 (Transmission zeros via state-space and transfer matrix representation). Consider the system with state-space realization (A, B, C, D), with

$$
A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},
\quad
B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{bmatrix},
\quad
C = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},
\quad
D = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.
$$

The system's transfer matrix P equals

$$
P(s) = C(sI - A)^{-1}B + D =
\begin{bmatrix} 1/s & 1/s^2 \\ 1/s^2 & 1/s^3 \end{bmatrix}
= \frac{1}{s^3} \begin{bmatrix} s^2 & s \\ s & 1 \end{bmatrix}.
$$

Elementary operations on the numerator polynomial matrix result in

$$
\begin{bmatrix} s^2 & s \\ s & 1 \end{bmatrix}
\Rightarrow
\begin{bmatrix} 0 & s \\ 0 & 1 \end{bmatrix}
\Rightarrow
\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},
$$

so the Smith-McMillan form of P is

$$
M(s) = \frac{1}{s^3} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 1/s^3 & 0 \\ 0 & 0 \end{bmatrix}.
$$

The system therefore has three poles, all at zero: {0, 0, 0}. As the matrix $A \in \mathbb{R}^{3\times 3}$ has an equal number of eigenvalues, it must be that the realization is minimal and that its eigenvalues coincide with the poles of the system. From the Smith-McMillan form we see that there are no transmission zeros. This may also be verified from the matrix pencil

$$
\begin{bmatrix} A - sI & B \\ C & D \end{bmatrix}
=
\begin{bmatrix}
-s & 0 & 0 & 0 & 1 \\
1 & -s & 0 & 1 & 0 \\
0 & 1 & -s & 0 & 0 \\
0 & -1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0
\end{bmatrix}.
$$

Elementary row and column operations do not affect the rank of this matrix pencil. Therefore the rank equals that of

$$
\begin{bmatrix}
-s & 0 & 0 & 0 & 1 \\
1 & -s & 0 & 1 & 0 \\
0 & 1 & -s & 0 & 0 \\
0 & -1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0
\end{bmatrix}
\Rightarrow
\begin{bmatrix}
-s & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0
\end{bmatrix}
\Rightarrow
\begin{bmatrix}
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0
\end{bmatrix}.
$$

As s has disappeared from the matrix pencil, it is immediate that the rank of the pencil does not depend on s. The system hence has no transmission zeros. ∎

4.2.3 Numerical computation of transmission zeros

If a transfer matrix P is given, the numerical computation of transmission zeros is in general based on a minimal state-space realization of P, because numerical algorithms for rational and polynomial matrix operations are not yet as reliable as state-space methods. We briefly discuss two approaches.

Algorithm 4.2.13 (Transmission zeros via high-gain output feedback). The approach has been proposed by Davison and Wang (1974), (1978). Let (A, B, C, D) be a minimal realization of P and assume that P is either left-invertible or right-invertible. Then:

1. Determine an arbitrary full-rank output feedback matrix $K \in \mathbb{R}^{n_u \times n_y}$, e.g. using a random number generator.

2. Determine the eigenvalues of the matrix

$$
Z_\rho := A + BK\left(\tfrac{1}{\rho} I - DK\right)^{-1} C
$$

for various large real values of $\rho$, e.g. $\rho = 10^{10}, \ldots, 10^{20}$ at double precision computation.

3. For square systems, the transmission zeros equal the finite eigenvalues of $Z_\rho$. For nonsquare systems, the transmission zeros equal the finite eigenvalues of $Z_\rho$ for almost all choices of the output feedback matrix K.

4. In cases of doubt, vary the values of $\rho$ and K and repeat the calculations. ∎

Algorithm 4.2.14 (Transmission zeros via generalized eigenvalues). The approach has been proposed in (Laub and Moore 1978) and makes use of the QZ algorithm for solving the generalized eigenvalue problem. Let (A, B, C, D) be a minimal realization of P of order n having the additional property that it is left-invertible (if the system is right-invertible, then use the dual system $(A^{\mathrm T}, C^{\mathrm T}, B^{\mathrm T}, D^{\mathrm T})$). Then:

1. Define the matrices M and L as

$$
M = \begin{bmatrix} I_n & 0 \\ 0 & 0 \end{bmatrix},
\qquad
L = \begin{bmatrix} A & B \\ -C & D \end{bmatrix}.
$$

2. Compute a solution to the generalized eigenvalue problem, i.e. determine all values $s \in \mathbb{C}$ and $r \in \mathbb{C}^{n + n_u}$ satisfying

$$
[sM - L]\, r = 0.
$$

3. The set of finite values for s are the transmission zeros of the system.

Although reliable numerical algorithms exist for the solution of generalized eigenvalue problems, a major problem is to decide which values of s belong to the finite transmission zeros and which values are to be considered as infinite. However, this problem is inherent in all numerical approaches to transmission zero computation. ∎
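A minimal numerical sketch of Algorithm 4.2.14 (added for illustration, with an arbitrary example system; scipy's QZ-based generalized eigenvalue solver is used):

```python
# Transmission zeros via the generalized eigenvalue problem  s*M*r = L*r.
import numpy as np
from scipy.linalg import eig

# Arbitrary example: P(s) = (s + 3)/((s + 1)(s + 2)), one transmission zero at s = -3.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[3.0, 1.0]])
D = np.array([[0.0]])

n, nu = A.shape[0], B.shape[1]
M = np.block([[np.eye(n), np.zeros((n, nu))],
              [np.zeros((C.shape[0], n + nu))]])
L = np.block([[A, B],
              [-C, D]])

vals = eig(L, M, right=False)        # generalized eigenvalues; infinite ones are discarded
print(vals[np.isfinite(vals)])       # approximately [-3.]
```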

4.3 Norms of signals and systems

For a SISO system with transfer function H the interpretation of |H(jω)| as a "gain" from input to output is clear, but how may we define "gain" for a MIMO system with a transfer matrix H? One way to do this is to use norms ‖H(jω)‖ instead of absolute values. There are many different norms and in this section we review the most common norms for both signals and systems. The theory of norms leads to a re-assessment of stability.

4.3.1 Norms of vector-valued signals

We consider continuous-time signals z defined on the time axis ℝ. These signals may be scalar-valued (with values in ℝ or ℂ) or vector-valued (with values in ℝⁿ or ℂⁿ). They may be added and multiplied by real or complex numbers and, hence, are elements of what is known as a vector space.

Given such a signal z, its norm is a mathematically well-defined notion that is a measure for the "size" of the signal.


Definition 4.3.1 (Norms). Let X be a vector space over the real or complex numbers. Then a function

$$
\| \cdot \| : X \to \mathbb{R}
\qquad (4.28)
$$

that maps X into the real numbers ℝ is a norm if it satisfies the following properties:

1. ‖x‖ ≥ 0 for all x ∈ X (nonnegativity),
2. ‖x‖ = 0 if and only if x = 0 (positive-definiteness),
3. ‖λx‖ = |λ| · ‖x‖ for every scalar λ and all x ∈ X (homogeneity with respect to scaling),
4. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x ∈ X and y ∈ X (triangle inequality).

The pair (X, ‖ · ‖) is called a normed vector space. If it is clear which norm is used, X by itself is often called a normed vector space. ∎

A well-known norm is the p-norm of vectors in ℂⁿ.

Example 4.3.2 (Norms of vectors in ℂⁿ). Suppose that $x = (x_1, x_2, \cdots, x_n)$ is an n-dimensional complex-valued vector, that is, x is an element of ℂⁿ. Then for 1 ≤ p ≤ ∞ the p-norm of x is defined as

$$
\|x\|_p =
\begin{cases}
\left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p} & \text{for } 1 \le p < \infty, \\
\max_{i=1,2,\cdots,n} |x_i| & \text{for } p = \infty.
\end{cases}
\qquad (4.29)
$$

Well-known special cases are the norms

$$
\|x\|_1 = \sum_{i=1}^{n} |x_i|, \qquad
\|x\|_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}, \qquad
\|x\|_\infty = \max_{i=1,2,\cdots,n} |x_i|.
\qquad (4.30)
$$

$\|x\|_2$ is the familiar Euclidean norm. ∎

The p-norm for constant vectors may easily be generalized to vector-valued signals.

Definition 4.3.3 ($\mathcal{L}_p$-norm of a signal). For any 1 ≤ p ≤ ∞ the p-norm or $\mathcal{L}_p$-norm $\|z\|_{\mathcal{L}_p}$ of a continuous-time scalar-valued signal z is defined by

$$
\|z\|_{\mathcal{L}_p} =
\begin{cases}
\left( \int_{-\infty}^{\infty} |z(t)|^p \, \mathrm{d}t \right)^{1/p} & \text{for } 1 \le p < \infty, \\
\sup_{t \in \mathbb{R}} |z(t)| & \text{for } p = \infty.
\end{cases}
\qquad (4.31)
$$

If z is vector-valued with values in ℝⁿ or ℂⁿ, this definition is generalized to

$$
\|z\|_{\mathcal{L}_p} =
\begin{cases}
\left( \int_{-\infty}^{\infty} \|z(t)\|^p \, \mathrm{d}t \right)^{1/p} & \text{for } 1 \le p < \infty, \\
\sup_{t \in \mathbb{R}} \|z(t)\| & \text{for } p = \infty,
\end{cases}
\qquad (4.32)
$$

where ‖ · ‖ is any norm on the n-dimensional space ℝⁿ or ℂⁿ. ∎

The signal norms that we mostly need are the $\mathcal{L}_2$-norm,

$$
\|z\|_{\mathcal{L}_2} = \left( \int_{-\infty}^{\infty} \|z(t)\|_2^2 \, \mathrm{d}t \right)^{1/2}
\qquad (4.33)
$$

and the $\mathcal{L}_\infty$-norm,

$$
\|z\|_{\mathcal{L}_\infty} = \sup_{t \in \mathbb{R}} \|z(t)\|_\infty.
\qquad (4.34)
$$

The square $\|z\|_{\mathcal{L}_2}^2$ of the $\mathcal{L}_2$-norm is often called the energy of the signal z, and the $\mathcal{L}_\infty$-norm $\|z\|_{\mathcal{L}_\infty}$ its amplitude or peak value.

Example 4.3.4 ($\mathcal{L}_2$- and $\mathcal{L}_\infty$-norm). Consider the signal z with two entries

$$
z(t) = \begin{bmatrix} \mathrm{e}^{-t}\, \mathbb{1}(t) \\ 2\,\mathrm{e}^{-3t}\, \mathbb{1}(t) \end{bmatrix}.
$$

Here $\mathbb{1}(t)$ denotes the unit step, so z(t) is zero for negative time. From t = 0 onwards both entries of z(t) decay to zero as t increases. Therefore

$$
\|z\|_{\mathcal{L}_\infty} = \left\| \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right\|_\infty = 2.
$$

The square of the $\mathcal{L}_2$-norm follows as

$$
\|z\|_{\mathcal{L}_2}^2 = \int_0^\infty \mathrm{e}^{-2t} + 4\,\mathrm{e}^{-6t} \, \mathrm{d}t
= \left. \frac{\mathrm{e}^{-2t}}{-2} + 4\, \frac{\mathrm{e}^{-6t}}{-6} \right|_{t=0}^{t=\infty}
= \frac{1}{2} + \frac{4}{6} = \frac{7}{6}.
$$

The energy of z equals 7/6, its $\mathcal{L}_2$-norm is $\sqrt{7/6}$. ∎
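A numerical spot check of this example (an illustrative sketch added here, using scipy for the integration):

```python
# L-infinity norm and energy of the signal in Example 4.3.4, computed numerically.
import numpy as np
from scipy.integrate import quad

z = lambda t: np.array([np.exp(-t), 2*np.exp(-3*t)])   # for t >= 0, zero for t < 0

# Peak value: sup over t of the vector infinity-norm, attained at t = 0.
print(max(np.max(np.abs(z(t))) for t in np.linspace(0, 10, 2001)))    # ~2.0

# Energy: integral of ||z(t)||_2^2 over [0, infinity).
energy, _ = quad(lambda t: np.exp(-2*t) + 4*np.exp(-6*t), 0, np.inf)
print(energy, 7/6)                                                    # both ~1.1667
```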

4.3.2 Singular values of vectors and matrices

We review the notion of singular values of a matrix³. If A is an n × m complex-valued matrix then the matrices $A^{\mathrm H} A$ and $A A^{\mathrm H}$ are both nonnegative-definite Hermitian. As a result, the eigenvalues of both $A^{\mathrm H} A$ and $A A^{\mathrm H}$ are all real and nonnegative. The min(n, m) largest eigenvalues

$$
\lambda_i(A^{\mathrm H} A), \quad \lambda_i(A A^{\mathrm H}), \qquad i = 1, 2, \ldots, \min(n, m),
\qquad (4.35)
$$

of $A^{\mathrm H} A$ and $A A^{\mathrm H}$, respectively, ordered in decreasing magnitude, are equal. The remaining eigenvalues, if any, are zero. The square roots of these min(n, m) eigenvalues are called the singular values of the matrix A, and are denoted

$$
\sigma_i(A) = \lambda_i^{1/2}(A^{\mathrm H} A) = \lambda_i^{1/2}(A A^{\mathrm H}), \qquad i = 1, 2, \ldots, \min(n, m).
\qquad (4.36)
$$

Obviously, they are nonnegative real numbers. The number of nonzero singular values equals the rank of A.

Summary 4.3.5 (Singular value decomposition). Given the n × m matrix A, let Σ be the diagonal n × m matrix whose diagonal elements are $\sigma_i(A)$, i = 1, 2, …, min(n, m). Then there exist square unitary matrices U and V such that

$$
A = U \Sigma V^{\mathrm H}.
\qquad (4.37)
$$

A (complex-valued) matrix U is unitary if $U^{\mathrm H} U = U U^{\mathrm H} = I$, with I a unit matrix of correct dimensions. The representation (4.37) is known as the singular value decomposition (SVD) of the matrix A.

³ See for instance Section 10.8 of Noble (1969).


The largest and smallest singular values, $\sigma_1(A)$ and $\sigma_{\min(n,m)}(A)$ respectively, are commonly denoted by an overbar and an underbar,

$$
\overline{\sigma}(A) = \sigma_1(A), \qquad \underline{\sigma}(A) = \sigma_{\min(n,m)}(A).
$$

Moreover, the largest singular value $\overline{\sigma}(A)$ is a norm of A. It is known as the spectral norm. ∎

There exist numerically reliable methods to compute the matrices U and V and the singular values. These methods are numerically more stable than any available for the computation of eigenvalues.

Example 4.3.6 (Singular value decomposition). The singular value decomposition of the 3 × 1 matrix A given by

$$
A = \begin{bmatrix} 0 \\ 3 \\ 4 \end{bmatrix}
\qquad (4.38)
$$

is $A = U \Sigma V^{\mathrm H}$, where

$$
U = \begin{bmatrix} 0 & -0.6 & -0.8 \\ 0.6 & 0.64 & -0.48 \\ 0.8 & -0.48 & 0.36 \end{bmatrix},
\qquad
\Sigma = \begin{bmatrix} 5 \\ 0 \\ 0 \end{bmatrix},
\qquad
V = 1.
\qquad (4.39)
$$

The matrix A has a single nonzero singular value (because it has rank 1), equal to 5. Hence, the spectral norm of A is $\overline{\sigma}(A) = \sigma_1(A) = 5$. ∎
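The decomposition is easy to reproduce numerically (an illustrative sketch added here):

```python
# Singular value decomposition of the 3x1 matrix of Example 4.3.6 with numpy.
import numpy as np

A = np.array([[0.0], [3.0], [4.0]])
U, s, Vh = np.linalg.svd(A)     # full SVD: U is 3x3, s holds min(n,m) = 1 value, Vh is 1x1

print(s)                                    # [5.]  -> spectral norm of A
print(np.allclose(U[:, :1] * s @ Vh, A))    # True: A = U Sigma V^H
```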

Exercise 4.3.7 (Singular value decomposition). Let $A = U \Sigma V^{\mathrm H}$ be the singular value decomposition of the n × m matrix A, with singular values $\sigma_i$, i = 1, 2, …, min(n, m). Denote the columns of the n × n unitary matrix U as $u_i$, i = 1, 2, …, n, and those of the m × m unitary matrix V as $v_i$, i = 1, 2, …, m. Prove the following statements:

1. For i = 1, 2, …, min(n, m) the column vector $u_i$ is an eigenvector of $A A^{\mathrm H}$ corresponding to the eigenvalue $\sigma_i^2$. Any remaining columns are eigenvectors corresponding to the eigenvalue 0.

2. Similarly, for i = 1, 2, …, min(n, m) the column vector $v_i$ is an eigenvector of $A^{\mathrm H} A$ corresponding to the eigenvalue $\sigma_i^2$. Any remaining columns are eigenvectors corresponding to the eigenvalue 0.

3. For i = 1, 2, …, min(n, m) the vectors $u_i$ and $v_i$ satisfy

$$
A v_i = \sigma_i u_i, \qquad A^{\mathrm H} u_i = \sigma_i v_i.
\qquad (4.40)
$$
∎

Exercise 4.3.8 (Singular values). Given is a square n × n matrix A. Prove the following properties (compare p. R-5 of (Chiang and Safonov 1988)):

1. $\overline{\sigma}(A) = \max_{x \in \mathbb{C}^n,\, x \neq 0} \dfrac{\|Ax\|_2}{\|x\|_2}$.

2. $\underline{\sigma}(A) = \min_{x \in \mathbb{C}^n,\, x \neq 0} \dfrac{\|Ax\|_2}{\|x\|_2}$.

3. $\underline{\sigma}(A) \le |\lambda_i(A)| \le \overline{\sigma}(A)$, with $\lambda_i(A)$ the i-th eigenvalue of A.

4. If $A^{-1}$ exists, then $\underline{\sigma}(A) = 1/\overline{\sigma}(A^{-1})$ and $\overline{\sigma}(A) = 1/\underline{\sigma}(A^{-1})$.

5. $\overline{\sigma}(\alpha A) = |\alpha|\,\overline{\sigma}(A)$, with α any complex number.

6. $\overline{\sigma}(A + B) \le \overline{\sigma}(A) + \overline{\sigma}(B)$.

7. $\overline{\sigma}(AB) \le \overline{\sigma}(A)\,\overline{\sigma}(B)$.

8. $\overline{\sigma}(A) - \overline{\sigma}(B) \le \overline{\sigma}(A + B) \le \overline{\sigma}(A) + \overline{\sigma}(B)$.

9. $\max(\overline{\sigma}(A), \overline{\sigma}(B)) \le \overline{\sigma}\big([A \;\; B]\big) \le \sqrt{2}\,\max(\overline{\sigma}(A), \overline{\sigma}(B))$.

10. $\max_{i,j} |A_{ij}| \le \overline{\sigma}(A) \le n \max_{i,j} |A_{ij}|$, with $A_{ij}$ the (i, j) element of A.

11. $\sum_{i=1}^{n} \sigma_i^2(A) = \operatorname{tr}(A^{\mathrm H} A)$.

4.3.3 Norms of linear operators and systems

On the basis of normed signal spaces we may define norms of operators that act on these signals.

Definition 4.3.9 (Induced norm of a linear operator). Suppose that φ is a linear map φ : U → Y from a normed space U with norm ‖ · ‖_U to a normed space Y with norm ‖ · ‖_Y. Then the norm of the operator φ induced by the norms ‖ · ‖_U and ‖ · ‖_Y is defined by

$$
\|\phi\| = \sup_{\|u\|_{\mathcal U} \neq 0} \frac{\|\phi u\|_{\mathcal Y}}{\|u\|_{\mathcal U}}.
\qquad (4.41)
$$

Exercise 4.3.10 (Induced norms of linear operators). The space of linear operators from a vector space U to a vector space Y is itself a vector space.

1. Show that the induced norm ‖ · ‖ as defined by (4.41) is indeed a norm on this space, satisfying the properties of Definition 4.3.1.

Prove that this norm has the following additional properties:

2. If y = φu, then ‖y‖_Y ≤ ‖φ‖ · ‖u‖_U for any u ∈ U.

3. Submultiplicative property: Let φ₁ : U → V and φ₂ : V → Y be two linear operators, with U, V and Y normed spaces. Then ‖φ₂φ₁‖ ≤ ‖φ₂‖ · ‖φ₁‖, with all the norms induced.

Constant matrices represent linear maps, so that (4.41) may be used to define norms of matrices. The spectral norm $\overline{\sigma}(M)$ is the norm induced by the 2-norm (see Exercise 4.3.8(1)).

Exercise 4.3.11 (Matrix norms induced by vector norms). Consider the m × k complex-valued matrix M as a linear map ℂᵏ → ℂᵐ defined by y = Mu. Then depending on the norms defined on ℂᵏ and ℂᵐ we obtain different matrix norms.

1. Matrix norm induced by the 1-norm. Prove that the norm of the matrix M induced by the 1-norm (both for U and Y) is the maximum absolute column sum

$$
\|M\| = \max_j \sum_i |M_{ij}|,
\qquad (4.42)
$$

with $M_{ij}$ the (i, j) entry of M.

2. Matrix norm induced by the ∞-norm. Prove that the norm of the matrix M induced by the ∞-norm (both for U and Y) is the maximum absolute row sum

$$
\|M\| = \max_i \sum_j |M_{ij}|,
\qquad (4.43)
$$

with $M_{ij}$ the (i, j) entry of M.

Prove these statements. ∎
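A quick numerical sanity check of the two formulas (an illustrative sketch added here; numpy's ord=1 and ord=inf matrix norms implement exactly these induced norms):

```python
# Induced 1-norm (max column sum) and infinity-norm (max row sum) of a matrix.
import numpy as np

M = np.array([[1.0, -2.0, 3.0],
              [-4.0, 5.0, -6.0]])

print(np.linalg.norm(M, 1), np.abs(M).sum(axis=0).max())       # 9.0  9.0 (column sums 5, 7, 9)
print(np.linalg.norm(M, np.inf), np.abs(M).sum(axis=1).max())  # 15.0 15.0 (row sums 6, 15)
```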

4.3.4 Norms of linear systems

We next turn to a discussion of the norm of a system. Consider a system as in Fig. 4.4, which maps the input signal u into an output signal y. Given an input u, we denote the output of the system as y = φu. If φ is a linear operator the system is said to be linear. The norm of the system is now defined as the norm of this operator.

Figure 4.4: Input-output mapping system

We establish formulas for the norms of a linear time-invariant system induced by the $\mathcal{L}_2$- and $\mathcal{L}_\infty$-norms of the input and output signals.

Summary 4.3.12 (Norms of linear time-invariant systems). Consider a MIMO convolution system with impulse response matrix h, so that its IO map y = φu is determined as

$$
y(t) = \int_{-\infty}^{\infty} h(\tau)\, u(t - \tau)\, \mathrm{d}\tau, \qquad t \in \mathbb{R}.
\qquad (4.44)
$$

Moreover let H denote the transfer matrix, i.e., H is the Laplace transform of h.

1. $\mathcal{L}_\infty$-induced norm. The norm of the system induced by the $\mathcal{L}_\infty$-norm (4.34), if it exists, is given by

$$
\max_{i=1,2,\cdots,m} \int_{-\infty}^{\infty} \sum_{j=1}^{k} |h_{ij}(t)| \, \mathrm{d}t,
\qquad (4.45)
$$

where $h_{ij}$ is the (i, j) entry of the m × k impulse response matrix h.

2. $\mathcal{L}_2$-induced norm. Suppose H is a rational matrix. The norm of the system induced by the $\mathcal{L}_2$-norm (4.33) exists if and only if H is proper and has no poles in the closed right-half plane. In that case the $\mathcal{L}_2$-induced norm equals the $\mathcal{H}_\infty$-norm of the transfer matrix, defined as

$$
\|H\|_{\mathcal{H}_\infty} = \sup_{\omega \in \mathbb{R}} \overline{\sigma}\big(H(j\omega)\big).
\qquad (4.46)
$$

A sketch of the proof is found in § 4.6. For SISO systems the expressions for the two norms obtained in Summary 4.3.12 simplify considerably.


Summary 4.3.13 (Norms for SISO systems). The norms of a SISO system with (scalar) impulse response h and transfer function H induced by the $\mathcal{L}_\infty$- and $\mathcal{L}_2$-norms are successively given by the action of the impulse response,

$$
\|h\|_{\mathcal{L}_1} = \int_{-\infty}^{\infty} |h(t)|\, \mathrm{d}t,
\qquad (4.47)
$$

and the peak value on the Bode plot if H has no unstable poles,

$$
\|H\|_{\mathcal{H}_\infty} = \sup_{\omega \in \mathbb{R}} |H(j\omega)|.
\qquad (4.48)
$$

If H has unstable poles then the induced norms do not exist. ∎

Example 4.3.14 (Norms of a simple system). As a simple example, consider a SISO first-order system with transfer function

$$
H(s) = \frac{1}{1 + s\theta},
\qquad (4.49)
$$

with θ a positive constant. The corresponding impulse response is

$$
h(t) =
\begin{cases}
\frac{1}{\theta}\, \mathrm{e}^{-t/\theta} & \text{for } t \ge 0, \\
0 & \text{for } t < 0.
\end{cases}
\qquad (4.50)
$$

It is easily found that the norm of the system induced by the $\mathcal{L}_\infty$-norm is

$$
\|h\|_1 = \int_0^{\infty} \frac{1}{\theta}\, \mathrm{e}^{-t/\theta}\, \mathrm{d}t = 1.
\qquad (4.51)
$$

The norm induced by the $\mathcal{L}_2$-norm follows as

$$
\|H\|_{\mathcal{H}_\infty} = \sup_{\omega \in \mathbb{R}} \frac{1}{|1 + j\omega\theta|}
= \sup_{\omega \in \mathbb{R}} \frac{1}{\sqrt{1 + \omega^2\theta^2}} = 1.
\qquad (4.52)
$$

For this example the two system norms are equal. Usually they are not. ∎
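For a MIMO transfer matrix the $\mathcal{H}_\infty$-norm (4.46) is the peak over frequency of the largest singular value. A crude but instructive way to estimate it is to grid the frequency axis; the sketch below (added for illustration, not part of the original text) does this for the stable plant (4.12) of Example 4.1.2.

```python
# Crude H-infinity norm estimate: peak over a frequency grid of the largest singular
# value of P(j*omega), for the stable plant (4.12).
import numpy as np

def P(s):
    return np.array([[1/(s + 1), 2/(s + 3)],
                     [1/(s + 1), 1/(s + 1)]])

omega = np.logspace(-3, 3, 2000)
peak = max(np.linalg.svd(P(1j*w), compute_uv=False)[0] for w in omega)
print(peak)   # approximately 1.85; for this low-pass example the peak is near omega = 0
```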

Exercise 4.3.15 ($\mathcal{H}_2$-norm and Hankel norm of MIMO systems). Two further norms of linear time-invariant systems are commonly encountered. Consider a stable MIMO system with transfer matrix H and impulse response matrix h.

1. $\mathcal{H}_2$-norm. Show that the $\mathcal{H}_2$-norm defined as

$$
\|H\|_{\mathcal{H}_2} =
\sqrt{\operatorname{tr}\left( \int_{-\infty}^{\infty} H^{\mathrm T}(-j2\pi f)\, H(j2\pi f)\, \mathrm{d}f \right)}
=
\sqrt{\operatorname{tr}\left( \int_{-\infty}^{\infty} h^{\mathrm T}(t)\, h(t)\, \mathrm{d}t \right)}
\qquad (4.53)
$$

is a norm. The notation tr indicates the trace of a matrix.

2. Hankel norm. The impulse response matrix h defines an operator

$$
y(t) = \int_{-\infty}^{0} h(t - \tau)\, u(\tau)\, \mathrm{d}\tau, \qquad t \ge 0,
\qquad (4.54)
$$

which maps continuous-time signals u defined on the time axis (−∞, 0] to continuous-time signals y defined on the time axis [0, ∞). The Hankel norm $\|H\|_{\mathrm H}$ of the system with impulse response matrix h is defined as the norm of the map given by (4.54) induced by the $\mathcal{L}_2$-norms of $u : (-\infty, 0] \to \mathbb{R}^{n_u}$ and $y : [0, \infty) \to \mathbb{R}^{n_y}$. Prove that this is indeed a norm.

3. Compute the $\mathcal{H}_2$-norm and the Hankel norm of the SISO system of Example 4.3.14. Hint: To compute the Hankel norm, first show that if y satisfies (4.54) and h is given by (4.50), then $\|y\|_2^2 = \frac{1}{2\theta}\left( \int_{-\infty}^{0} \mathrm{e}^{\tau/\theta}\, u(\tau)\, \mathrm{d}\tau \right)^2$. From this, prove that $\|H\|_{\mathrm H} = \frac{1}{2}$. ∎

Summary 4.3.16 (State-space computation of common system norms). Suppose a system has proper transfer matrix H and let $H(s) = C(sI_n - A)^{-1}B + D$ be a minimal realization of H. Then

1. $\|H\|_{\mathcal{H}_2}$ is finite if and only if D = 0 and A is a stability matrix⁴. In that case

$$
\|H\|_{\mathcal{H}_2} = \sqrt{\operatorname{tr} B^{\mathrm T} Y B} = \sqrt{\operatorname{tr} C X C^{\mathrm T}}
$$

where X and Y are the solutions of the linear equations

$$
AX + XA^{\mathrm T} = -BB^{\mathrm T}, \qquad A^{\mathrm T}Y + YA = -C^{\mathrm T}C.
$$

The solutions X and Y are unique and are respectively known as the controllability gramian and observability gramian.

2. The Hankel norm $\|H\|_{\mathrm H}$ is finite only if A is a stability matrix. In that case

$$
\|H\|_{\mathrm H} = \sqrt{\lambda_{\max}(XY)}
$$

where X and Y are the controllability gramian and observability gramian. The matrix XY has real nonnegative eigenvalues only, and $\lambda_{\max}(XY)$ denotes the largest of them.

3. The $\mathcal{H}_\infty$-norm $\|H\|_{\mathcal{H}_\infty}$ is finite if and only if A is a stability matrix. Then $\gamma \in \mathbb{R}$ is an upper bound,

$$
\|H\|_{\mathcal{H}_\infty} < \gamma,
$$

if and only if $\overline{\sigma}(D) < \gamma$ and the 2n × 2n matrix

$$
\begin{bmatrix} A & 0 \\ -C^{\mathrm T}C & -A^{\mathrm T} \end{bmatrix}
-
\begin{bmatrix} -B \\ C^{\mathrm T}D \end{bmatrix}
(\gamma^2 I - D^{\mathrm T}D)^{-1}
\begin{bmatrix} D^{\mathrm T}C & B^{\mathrm T} \end{bmatrix}
\qquad (4.55)
$$

has no imaginary eigenvalues. Computation of the $\mathcal{H}_\infty$-norm involves iteration on γ.

Proofs are listed in Appendix 4.6. ∎

⁴ A constant matrix A is a stability matrix if it is square and all its eigenvalues have negative real part.
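Items 1 and 2 translate directly into a few lines of scipy (an illustrative sketch added here, with an arbitrary stable example system; solve_continuous_lyapunov solves AX + XAᵀ = Q):

```python
# H2 and Hankel norms of a stable system via the controllability/observability gramians.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvals

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # stable: eigenvalues -1, -2
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])       # H(s) = 1/((s + 1)(s + 2)), D = 0

X = solve_continuous_lyapunov(A, -B @ B.T)      # controllability gramian
Y = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability gramian

print(np.sqrt(np.trace(C @ X @ C.T)))           # H2 norm
print(np.sqrt(np.max(eigvals(X @ Y).real)))     # Hankel norm
```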

4.4 BIBO and internal stability

Norms shed a new light on the notions of stability and internal stability. Consider the MIMO input-output mapping system of Fig. 4.4 with linear input-output map φ. In § 1.3.2 we defined the system to be BIBO stable if any bounded input u results in a bounded output y = φu. Now bounded means having finite norm, and so different norms may yield different versions of BIBO stability. Normally the $\mathcal{L}_\infty$- or $\mathcal{L}_2$-norms of u and y are used.

We review a few fairly obvious facts.

Summary 4.4.1 (BIBO stability).


1. If ‖φ‖ is finite, with ‖ · ‖ the norm induced by the norms of the input and output signals, then the system is BIBO stable.

2. Suppose that the system is linear time-invariant with rational transfer matrix H. Then the system is BIBO stable (both when the $\mathcal{L}_\infty$- and the $\mathcal{L}_2$-norms of u and y are used) if and only if H is proper and has all its poles in the open left-half complex plane.

Exercise 4.4.2 (Proof).

1. Prove statements 1 and 2 of Summary 4.4.1.

2. Show by a counterexample that the converse of 1 is not true, that is, for certain systems and norms the system may be BIBO stable in the sense as defined while ‖φ‖ is not necessarily finite.

3. In the literature the following better though more complicated definition of BIBO stability is found: the system is BIBO stable if for every positive real constant N there exists a positive real constant M such that ‖u‖ ≤ N implies ‖y‖ ≤ M. With this definition the system is BIBO stable if and only if ‖φ‖ < ∞. Prove this.

Given the above it will be no surprise that we say that a transfer matrix is stable if it is proper and has all its poles in the open left-half complex plane. In § 1.3.2 we already defined the notion of internal stability for MIMO interconnected systems: an interconnected system is internally stable if the system obtained by adding external input and output signals in every exposed interconnection is BIBO stable. Here we specialize this for the MIMO system of Fig. 4.5.

Figure 4.5: Multivariable feedback structure

Figure 4.6: Inputs r, w and outputs e, u defining internal stability

Assume that the feedback system shown in Fig. 4.5 is multivariable, i.e., u or y has more than one entry. The closed loop of Fig. 4.5 is by definition internally stable if in the extended closed loop of Fig. 4.6 the maps from (w, r) to (e, u) are BIBO stable. In terms of the transfer matrices, these maps are

$$
\begin{bmatrix} e \\ u \end{bmatrix}
=
\begin{bmatrix} I & P \\ -K & I \end{bmatrix}^{-1}
\begin{bmatrix} r \\ w \end{bmatrix}
\qquad (4.56)
$$

$$
=
\begin{bmatrix}
I - P(I + KP)^{-1}K & -P(I + KP)^{-1} \\
(I + KP)^{-1}K & (I + KP)^{-1}
\end{bmatrix}
\begin{bmatrix} r \\ w \end{bmatrix}.
\qquad (4.57)
$$

Necessary for internal stability is that

$$
\det\big(I + P(\infty)K(\infty)\big) \neq 0.
$$

Feedback systems that satisfy this nonsingularity condition are said to be well-posed (Hsu and Chen 1968). The following is a variation of Eqn. (1.44).

Lemma 4.4.3 (Internal stability). Suppose P and K are proper, having minimal realizations $(A_P, B_P, C_P, D_P)$ and $(A_K, B_K, C_K, D_K)$. Then the feedback system of Fig. 4.5 is internally stable if and only if $\det(I + D_P D_K) \neq 0$ and the polynomial

$$
\det\big(I + P(s)K(s)\big)\, \det(sI - A_P)\, \det(sI - A_K)
$$

has all its zeros in the open left-half plane. ∎

If the $n_y \times n_u$ system P is not square, then the dimension $n_y \times n_y$ of $I + PK$ and the dimension $n_u \times n_u$ of $I + KP$ differ. The fact that $\det(I + PK) = \det(I + KP)$ allows us to evaluate the stability of the system at the loop location having the lowest dimension.

Example 4.4.4 (Internal stability of closed-loop system). Consider the loop gain

$$
L(s) = \begin{bmatrix} \frac{1}{s+1} & \frac{1}{s-1} \\[1mm] 0 & \frac{1}{s+1} \end{bmatrix}.
$$

Unity feedback around L gives as closed-loop characteristic polynomial

$$
\det\big(I + L(s)\big)\, \det(sI - A_L)
= \left( \frac{s+2}{s+1} \right)^{2} (s+1)^2 (s-1) = (s+2)^2 (s-1).
\qquad (4.58)
$$

The closed-loop system hence is not internally stable. Evaluation of the sensitivity matrix

$$
\big(I + L(s)\big)^{-1} =
\begin{bmatrix} \frac{s+1}{s+2} & -\frac{s+1}{(s+2)^2} \\[1mm] 0 & \frac{s+1}{s+2} \end{bmatrix}
$$

shows that this transfer matrix in the closed-loop system is stable, but the complementary sensitivity matrix

$$
\big(I + L(s)\big)^{-1} L(s) =
\begin{bmatrix} \frac{1}{s+2} & \frac{s^2+2s+3}{(s-1)(s+2)^2} \\[1mm] 0 & \frac{1}{s+2} \end{bmatrix}
$$

is unstable, confirming that the closed-loop system indeed is not internally stable. ∎
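The characteristic polynomial (4.58) is easily reproduced symbolically (an illustrative sketch added here; det(sI − A_L) = (s+1)²(s−1) because a minimal realization of L has McMillan degree 3):

```python
# Closed-loop characteristic polynomial of Example 4.4.4.
import sympy as sp

s = sp.symbols('s')
L = sp.Matrix([[1/(s + 1), 1/(s - 1)],
               [0, 1/(s + 1)]])

char_poly = sp.simplify((sp.eye(2) + L).det() * (s + 1)**2 * (s - 1))
print(sp.factor(char_poly))      # (s - 1)*(s + 2)**2  -> a closed-loop root at s = 1
```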


Example 4.4.5 (Internal stability and pole-zero cancellation). Consider

$$
P(s) = \begin{bmatrix} \frac{1}{s} & \frac{1}{s} \\[1mm] \frac{2}{s+1} & \frac{1}{s} \end{bmatrix},
\qquad
K(s) = \begin{bmatrix} 1 & \frac{2s}{s-1} \\[1mm] 0 & -\frac{2s}{s-1} \end{bmatrix}.
$$

The unstable pole s = 1 of the controller does not appear in the product PK,

$$
P(s)K(s) = \begin{bmatrix} \frac{1}{s} & 0 \\[1mm] \frac{2}{s+1} & \frac{2}{s+1} \end{bmatrix}.
$$

This may suggest, as in the SISO case, that the map from r to u will be unstable. Indeed the lower-left entry of this map is unstable,

$$
K(I + PK)^{-1} =
\begin{bmatrix} * & * \\[1mm] \frac{4s^2}{(s-1)(s+1)(s+3)} & * \end{bmatrix}.
$$

Instability may also be verified using Lemma 4.4.3; it is readily verified that

$$
\det\big(I + P(s)K(s)\big)\, \det(sI - A_P)\, \det(sI - A_K)
= \det \begin{bmatrix} \frac{s+1}{s} & 0 \\[1mm] \frac{2}{s+1} & \frac{s+3}{s+1} \end{bmatrix}
\cdot s^2 (s+1)(s-1)
= s(s-1)(s+1)(s+3)
$$

and this has a zero at s = 1. ∎

Exercise 4.4.6 (Non-minimum phase loop gain). In Example 4.1.2 one decoupling precompensator is found that diagonalizes the loop gain PK with the property that both diagonal elements of PK have a right half-plane zero at s = 1. Show that every internally stabilizing decoupling controller K has this property. ∎

Figure 4.7: Disturbance rejection for MIMO systems

Exercise 4.4.7 (Internal model control & MIMO disturbance rejection). Consider the MIMO system shown in Fig. 4.7 and suppose that the plant P is stable.

1. Show that a controller with transfer matrix K stabilizes if and only if $Q := K(I + PK)^{-1}$ is a stable transfer matrix.

2. Express K and $S := (I + PK)^{-1}$ in terms of P and Q. Why is it useful to express K and S in terms of P and Q?

3. Let P be the 2 × 2 system $P(s) = \frac{1}{s+1}\begin{bmatrix} 1 & s \\ -s & 1 \end{bmatrix}$ and suppose that v is a persistent harmonic disturbance $v(t) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \mathrm{e}^{j\omega_0 t}$ for some known $\omega_0$.

(a) Assume $\omega_0 \neq \pm 1$. Find a stabilizing K (using the parameterization by stable Q) such that in closed loop the effect of v(t) on y(t) is asymptotically rejected (i.e., e(t) → 0 as t → ∞ for r(t) = 0).

(b) Does there exist such a stabilizing K if $\omega_0 = \pm 1$? (Explain.) ∎

4.5 MIMO structural requirements and design methods

During both the construction and the design of a control system it is mandatory that structural requirements of plant and controller and of the control configuration are taken into account. For example, the choice of placement of the actuators and sensors may affect the number of right-half plane zeros of the plant and thereby may limit the closed-loop performance, irrespective of which controller is used. In this section a number of such requirements will be identified. These requirements show up in the MIMO design methods that are discussed in this section.

4.5.1 Output controllability and functional reproducibility

Various concepts of controllability are relevant for the design of control systems. A minimal requirement for almost any control system, to be included in any control objective, is the requirement that the output of the system can be steered to any desired position in output space by manipulating the input variables. This property is formalized in the concept of output controllability.

Summary 4.5.1 (Output controllability). A time-invariant linear system with input u(t) and output y(t) is said to be output controllable if for any $y_1, y_2 \in \mathbb{R}^p$ there exists an input u(t), $t \in [t_1, t_2]$ with $t_1 < t_2$, that brings the output from $y(t_1) = y_1$ to $y(t_2) = y_2$. In case the system has a strictly proper transfer matrix and is described by a state-space realization (A, B, C), the system is output controllable if and only if the constant matrix

$$
\begin{bmatrix} CB & CAB & CA^2B & \cdots & CA^{n-1}B \end{bmatrix}
\qquad (4.59)
$$

has full row rank. ∎

If in the above definition C = Iₙ then we obtain the definition of state controllability. The concept of output controllability is generally weaker: a system (A, B, C) may be output controllable and yet not be (state) controllable⁵. The property of output controllability is an input-output property of the system while (state) controllability is not.

Note that the concept of output controllability only requires that the output can be given a desired value at each instant of time. A stronger requirement is to demand that the output be able to follow any preassigned trajectory in time over a given time interval. A system capable of satisfying this requirement is said to be output functional reproducible or functional controllable. Functional controllability is a necessary requirement for output regulator and servo/tracking problems. Brockett and Mesarovic (1965) introduced the notion of reproducibility, termed output controllability by Rosenbrock (1970).

Summary 4.5.2 (Output functional reproducibility). A system having proper real-rational $n_y \times n_u$ transfer matrix P is said to be functionally reproducible if rank P = $n_y$. In particular, for a system to be functionally reproducible it is necessary that $n_y \le n_u$. ∎

⁵ Here we assume that C has full row rank, which is a natural assumption because otherwise the entries of y are linearly dependent.


Example 4.5.3 (Output controllability and functional reproducibility). Consider the linear time-invariant state-space system of Example 4.2.12,

$$
A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},
\quad
B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{bmatrix},
\quad
C = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
\qquad (4.60)
$$

This forms a minimal realization of the transfer matrix

$$
P(s) = C(sI - A)^{-1}B = \begin{bmatrix} \frac{1}{s} & \frac{1}{s^2} \\[1mm] \frac{1}{s^2} & \frac{1}{s^3} \end{bmatrix}.
\qquad (4.61)
$$

The system (A, B, C) is controllable and observable, and C has full row rank. Thus the system also is output controllable. However, the rank of P is 1. Thus the system is not functionally reproducible. ∎

4.5.2 Decoupling control

A classical approach to multivariable control design consists of the design of a precompensator that brings the system transfer matrix to diagonal form, with subsequent design of the actual feedback loops for the various single-input, single-output channels separately. This allows the tuning of individual controllers in separate feedback loops, and it is thought to provide an acceptable control structure offering ease of survey for process operators and maintenance personnel. The subject of noninteracting or decoupling control as discussed in this section is based on the works of Silverman (1970) and Williams and Antsaklis (1986). The presentation follows that of Williams and Antsaklis (1996).

In this section we investigate whether and how a squareP can be brought to diagonal form byapplying feedback and/or static precompensation. Suppose thatP has state-space representation

x(t) = Ax(t)+ Bu(t),

y(t) = Cx(t)+ Du(t)

with u andy having equally many entries,ny = nu = m, i.e., P(s)= C(sI − A)−1B+ D square.Assume in what follows thatP is invertible. The inverse transfer matrixP−1—necessarily arational matrix—may be viewed as a precompensator ofP that diagonalizes the series connectionPP−1. If the direct feedthrough termD = P(∞) of P is invertible, then the inverseP−1 is properand has realization

P−1(s) = −D−1C(sI − A+ BD−1C)−1BD−1 + D−1.

Exercise 4.5.4.Verify this. �

If D is singular then the inverseP−1—assuming it exists—is not proper. In this case weproceed as follows. Define the indicesfi ≥ 0 (i = 1, . . .m) such thatD f defined as

D f (s) = diag(sf1, sf2, . . . , sfm) (4.62)

is such that

D := lim|s|→∞

D f (s)P(s) (4.63)

is defined and every row ofD has at least one nonzero entry. This identifies the indicesfi

uniquely. The fi equal the maximum of the relative degrees of the entries in theith row of P.


Indeed, then by construction the smallest relative degree in each row of D_f P is zero, and as a consequence lim_{|s|→∞} D_f(s)P(s) has a nonzero value at precisely the entries of relative degree zero. The indices f_i may also be determined using the realization of P: if the ith row of D is not identical to zero then f_i = 0, otherwise

    f_i = min{ k > 0 | row i of C A^{k−1} B is not identical to zero }.

It may be shown that f_i ≤ n, where n is the state dimension. The indices f_i so defined are known as the decoupling indices.

Summary 4.5.5 (Decoupling state feedback). If D̃ as defined in (4.63) is nonsingular, then the regular state feedback

    u(t) = D̃^{-1}(−C̃ x(t) + v(t))

where

    C̃ = [ C_{1∗}A^{f1} ;  C_{2∗}A^{f2} ;  ... ;  C_{m∗}A^{fm} ]

renders the system with output y and new input v decoupled, with diagonal transfer matrix D_f^{-1}(s) = diag(s^{−f1}, s^{−f2}, ..., s^{−fm}). The compensated plant is shown in Fig. 4.8(b). �

Here C_{i∗} denotes the ith row of the matrix C. A proof is given in Appendix 4.6. The decoupled plant D_f^{-1} has all its poles at s = 0 and it has no transmission zeros. We also know that regular state feedback does not change the zeros of the matrix

    [ A − sI   B ;  C   D ].

This implies that after state feedback all non-zero transmission zeros of the system are canceled by poles. In other words, after state feedback the realization has unobservable modes at the open-loop transmission zeros. This is undesirable if P has unstable transmission zeros as it precludes closed-loop stability using output feedback. If P has no unstable transmission zeros, then we may proceed with the decoupled plant D_f^{-1} and use a (diagonal) K to further the design, see Fig. 4.8. A decoupled plant

Figure 4.8: (a) closed loop with original plant, (b) with decoupled plant

that is stable may offer better perspectives for further performance enhancement by multiloop feedback in individual channels. To obtain a stable decoupled plant we take instead of (4.62) a D_f of the form

    D_f = diag(p1, ..., pm)


where the p_i are strictly Hurwitz^6 monic^7 polynomials of degree f_i. The formulae are now messier, but the main result holds true also for this choice of D_f:

Summary 4.5.6 (Decoupling, stabilizing feedback). Suppose the transfer matrix P of the plant is proper, square m × m and invertible, and suppose it has no unstable transmission zeros. Let (A, B, C, D) be a minimal realization of P, let f_i be the decoupling indices, and suppose that p_i are Hurwitz polynomials of degree f_i. Then there is a regular state feedback u(t) = Fx(t) + Wv(t) for which the loop from v(t) to y(t) is decoupled. Its realization is controllable and detectable and has a stable transfer matrix diag(1/p1, ..., 1/pm). �

Example 4.5.7 (Diagonal decoupling by state feedback precompensation). Consider a plant with transfer matrix

    P(s) = [ 1/s    1/s² ;  1/s²   −1/s² ].

A minimal realization of P is

    [ A  B ;  C  D ] =
      [ 0 0 0 0   0  1
        1 0 0 0   1  0
        0 0 0 0   1 −1
        0 0 1 0   0  0
        0 1 0 0   0  0
        0 0 0 1   0  0 ].

Its transmission zeros follow after a sequence of elementary row and column operations:

    P(s) = (1/s²)[ s  1 ;  1  −1 ]  ⇒  (1/s²)[ s+1  1 ;  0  1 ]  ⇒  (1/s²)[ 1  0 ;  0  s+1 ]  =  [ 1/s²  0 ;  0  (s+1)/s² ].

Thus P has one transmission zero at s = −1. It is a stable zero, hence we may proceed. The direct feedthrough term D is the zero matrix, so the decoupling indices f_i are all greater than zero. We need to compute C_{i∗}A^{k−1}B.

    C_{1∗}B = [ 0 1 0 0 ] [ 0 1 ;  1 0 ;  1 −1 ;  0 0 ] = [ 1  0 ];

it is not identically zero so that f_1 = 1. Likewise

    C_{2∗}B = [ 0 0 0 1 ] [ 0 1 ;  1 0 ;  1 −1 ;  0 0 ] = [ 0  0 ].

It is zero hence we must compute C_{2∗}AB,

    C_{2∗}AB = [ 1  −1 ].

^6 A strictly Hurwitz polynomial is a polynomial whose zeros have strictly negative real part.
^7 A monic polynomial is a polynomial whose highest degree coefficient equals 1.


As it is nonzero we have f_2 = 2. This gives us

    D̃ = [ C_{1∗}B ;  C_{2∗}AB ] = [ 1  0 ;  1  −1 ].

The matrix D̃ is nonsingular so a decoupling state feedback exists. Take

    D_f(s) = [ s+1   0 ;  0   (s+1)² ].

Its diagonal entries have the degrees f_1 = 1 and f_2 = 2 as required. A realization of

    D_f(s)P(s) = [ (s+1)/s     (s+1)/s² ;  (s+1)²/s²   −(s+1)²/s² ]

is (A, B, C̃, D̃) with

    C̃ = [ 1 1 0 0 ;  0 0 2 1 ],    D̃ = [ 1  0 ;  1  −1 ].

Now the regular state feedback

    u(t) = [ 1  0 ;  1  −1 ]^{-1} ( −[ 1 1 0 0 ;  0 0 2 1 ] x(t) + v(t) )

renders the system with input v and output y decoupled. After applying this state feedback we arrive at the state-space system

    ẋ(t) = Ax(t) + BD̃^{-1}(−C̃x(t) + v(t)) = [ −1 −1  2  1 ;  0 −1  0  0 ;  0  0 −2 −1 ;  0  0  1  0 ] x(t) + [ 1 −1 ;  1  0 ;  0  1 ;  0  0 ] v(t),

    y(t) = Cx(t) + Dv(t) = [ 0 1 0 0 ;  0 0 0 1 ] x(t)     (4.64)

and its transfer matrix by construction is

    D_f^{-1}(s) = [ 1/(s+1)   0 ;  0   1/(s+1)² ].

Note that a minimal realization of D_f^{-1} has order 3 whereas the realization (4.64) has order 4. Therefore (4.64) is not minimal. This is typical for decoupling procedures. �
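The algebra of this example is readily verified numerically. The following Python sketch (NumPy assumed) applies the decoupling feedback built from the matrices A, B, C, C̃ and D̃ of this example and checks the resulting transfer matrix at an arbitrary test frequency.

import numpy as np

A = np.array([[0., 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0]])
B = np.array([[0., 1], [1, 0], [1, -1], [0, 0]])
C = np.array([[0., 1, 0, 0], [0, 0, 0, 1]])
Ct = np.array([[1., 1, 0, 0], [0, 0, 2, 1]])   # C-tilde
Dt = np.array([[1., 0], [1, -1]])              # D-tilde

Acl = A - B @ np.linalg.inv(Dt) @ Ct           # closed-loop A of (4.64)
Bcl = B @ np.linalg.inv(Dt)

s = 0.3 + 0.9j                                  # arbitrary test point
T = C @ np.linalg.inv(s * np.eye(4) - Acl) @ Bcl
expected = np.diag([1 / (s + 1), 1 / (s + 1) ** 2])
print(np.allclose(T, expected))                 # True: the loop is decoupled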

Exercise 4.5.8 (An invertible system that is not decouplable by state feedback). Consider the plant with transfer matrix

    P(s) = [ 1/(s+1)   1/s ;  1/(s−1)   1/s ].

1. Find the transmission zeros.

2. Show that the methods of Summary 4.5.5 and Summary 4.5.6 are not applicable here.

3. Find a stable precompensator C0 that renders the product PC0 diagonal. �


4.5.3 Directional properties of gains

In contrast to a SISO system, a MIMO system generally does not have a unique gain. A trivial example is the 2 × 2 system with constant transfer matrix

    P(s) = [ 1  0 ;  0  2 ].

The gain is in between 1 and 2 depending on the direction of the input. There are various ways to define gains for MIMO systems. A natural generalization of the SISO gain |y(jω)|/|u(jω)| from input u(jω) to output y(jω) is to use norms instead of merely absolute values. We take the 2-norm.

Summary 4.5.9 (Bounds on gains). For any given input u and fixed frequency ω there holds

    σ̲(P(jω)) ≤ ‖P(jω)u(jω)‖₂ / ‖u(jω)‖₂ ≤ σ̄(P(jω))     (4.65)

and the lower bound σ̲(P(jω)) and upper bound σ̄(P(jω)) are achieved for certain inputs u. �

Figure 4.9: Lower and upper bounds on the gain of T(jω)

Example 4.5.10 (Upper and lower bounds on gains). Consider the plant with transfer matrix

    P(s) = [ 1/s            1/(s² + √2 s + 1) ;
             −1/(2(s+1))    1/s               ].

In unity feedback the complementary sensitivity matrix T = P(I + P)^{-1} has dimension 2 × 2. So at each frequency ω the matrix T(jω) has two singular values. Figure 4.9 shows the two singular values σ̄(jω) and σ̲(jω) as a function of frequency. Near the crossover frequency the singular values differ considerably. �
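A diagram such as Fig. 4.9 is easily generated numerically. The following Python sketch (NumPy assumed) evaluates P(jω) of this example on a logarithmic frequency grid, forms T = P(I + P)^{-1}, and computes its two singular values.

import numpy as np

def P_of(s):
    return np.array([[1/s,             1/(s**2 + np.sqrt(2)*s + 1)],
                     [-1/(2*(s + 1)),  1/s]])

omega = np.logspace(-2, 2, 400)
sv = np.empty((len(omega), 2))
for k, w in enumerate(omega):
    P = P_of(1j * w)
    T = P @ np.linalg.inv(np.eye(2) + P)
    sv[k] = np.linalg.svd(T, compute_uv=False)   # [sigma_max, sigma_min] at this frequency

print("largest ratio sigma_max/sigma_min over the grid:", (sv[:, 0] / sv[:, 1]).max())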

Let

    P(jω) = Y(jω)Σ(jω)U*(jω)     (4.66)

be an SVD (at each frequency) of P(jω), that is, Y(jω) and U(jω) are unitary and

    Σ(jω) = diag(σ1(jω), σ2(jω), ..., σ_{min(ny,nu)}(jω))


with

    σ1(jω) ≥ σ2(jω) ≥ ... ≥ σ_{min(ny,nu)}(jω) ≥ 0.

The columns u_k(jω) of U(jω) are the input principal directions (Postlethwaite, Edmunds, and MacFarlane 1981) and they have the special property that their gains are precisely the corresponding singular values,

    ‖P(jω)u_k(jω)‖₂ / ‖u_k(jω)‖₂ = σ_k(jω),

and the response y_k(jω) = P(jω)u_k(jω) in fact equals σ_k(jω) times the kth column of Y(jω). The columns of Y(jω) are the output principal directions and the σ_k(jω) the principal gains.

The condition number κ defined as

    κ(P(jω)) = σ̄(P(jω)) / σ̲(P(jω))     (4.67)

is a possible measure of the difficulty of controlling the system. If for a plant all principal gains are the same, such as when P(jω) is unitary, then κ(jω) = 1. If κ(jω) ≫ 1 then the loop gain L = PK around the crossover frequency may be difficult to shape. Note that the condition number is not invariant under scaling. Describing the same system in terms of a different set of physical units leads to a different condition number. A high condition number indicates that the system is "close" to losing its full rank, i.e. close to not satisfying the property of functional controllability.
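By way of illustration, the principal gains, principal directions and condition number follow directly from a numerical singular value decomposition, as in the Python sketch below (NumPy assumed; the 2 × 2 constant plant used as input is simply the trivial example above).

import numpy as np

def principal_analysis(Pjw):
    """SVD-based gain analysis of a complex frequency-response matrix P(jw)."""
    Y, sig, Uh = np.linalg.svd(Pjw)     # P = Y diag(sig) U*, cf. (4.66)
    U = Uh.conj().T                     # columns: input principal directions
    kappa = sig[0] / sig[-1]            # condition number (4.67)
    return sig, U, Y, kappa

sig, U, Y, kappa = principal_analysis(np.array([[1.0, 0.0], [0.0, 2.0]]))
print(sig, kappa)                       # [2. 1.]  2.0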

Example 4.5.11 (Multivariable control system—design goals). The performance of a multivariable feedback system can be assessed using the notion of principal gains as follows. The basic idea follows the ideas previously discussed for single-input, single-output systems.

The disturbance rejecting properties of the loop transfer require the sensitivity matrix to be small:

    σ̄(S(jω)) ≪ 1.

At the same time the restriction of the propagation of measurement noise requires

    σ̄(T(jω)) ≪ 1.

As in the SISO case these two requirements are conflicting. Indeed, using the triangle inequality we have the bounds

    |1 − σ̄(S(jω))| ≤ σ̄(T(jω)) ≤ 1 + σ̄(S(jω))     (4.68)

and

    |1 − σ̄(T(jω))| ≤ σ̄(S(jω)) ≤ 1 + σ̄(T(jω)).     (4.69)

It is useful to try to make the difference between σ̄(T) and σ̲(T), and between σ̄(S) and σ̲(S), not too large in the crossover region. Making them equal would imply that the whole plant behaves identically in all directions and this is in general not possible.


We must also be concerned to make the control inputs not too large. Thus the transfer matrix (I + KP)^{-1}K from r to u must not be too large:

    ‖u(jω)‖₂ / ‖r(jω)‖₂ = ‖(I + K(jω)P(jω))^{-1}K(jω)r(jω)‖₂ / ‖r(jω)‖₂
        ≤ σ̄((I + K(jω)P(jω))^{-1}K(jω))
        ≤ σ̄((I + K(jω)P(jω))^{-1}) σ̄(K(jω))
        ≤ σ̄(K(jω)) / σ̲(I + K(jω)P(jω))
        ≤ σ̄(K(jω)) / (1 − σ̄(K(jω)P(jω)))
        ≤ σ̄(K(jω)) / (1 − σ̄(K(jω)) σ̄(P(jω))).     (4.70)

Here several properties of Exercise 4.3.8 are used, and in the last two inequalities it is assumed that σ̄(K(jω))σ̄(P(jω)) < 1. The upper bound shows that where σ̄(P(jω)) is not too large, σ̄(K(jω)) ≪ 1 guarantees a small gain from r(jω) to u(jω). The upper bound (4.70) of this gain is easily determined numerically.

The direction of an important disturbance can be taken into consideration to advantage. Let v(jω) be a disturbance acting additively on the output of the system. Complete rejection of the disturbance would require an input

    u(jω) = −P^{-1}(jω)v(jω).     (4.71)

Thus

    ‖u(jω)‖₂ / ‖v(jω)‖₂ = ‖P^{-1}(jω)v(jω)‖₂ / ‖v(jω)‖₂     (4.72)

measures the magnitude of u needed to reject a unit magnitude disturbance acting in the direction v. The most and least favorable disturbance directions are those for which v is in the direction of the principal output directions corresponding to σ̄(P(jω)) and σ̲(P(jω)), respectively. This leads to the disturbance condition number of the plant,

    κ_v(P(jω)) = ( ‖P^{-1}(jω)v(jω)‖₂ / ‖v(jω)‖₂ ) σ̄(P(jω)),     (4.73)

which measures the input needed to reject the disturbance v relative to the input needed to reject a disturbance acting in the most favorable direction. It follows that

    1 ≤ κ_v(P) ≤ κ(P)     (4.74)

and κ_v(P) generalizes the notion of condition number of the plant defined in (4.67). �

4.5.4 Decentralized control structures

Dual to decoupling control, where the aim is to make the plant P diagonal, is decentralized control, where it is the controller K that is restricted to be diagonal or block diagonal,

    K = diag{K_i} = diag(K1, K2, ..., Km).     (4.75)


This control structure is called multiloop control and it assumes that there are as many control inputs as control outputs. In a multiloop control structure it is in general desirable to make an ordering of the input and output variables such that the interaction between the various control loops is as small as possible. This is the input-output pairing problem.

Example 4.5.12 (Relative gain of a two-input-two-output system). Consider a two-input-two-output system

    [ y1 ; y2 ] = P [ u1 ; u2 ]

and suppose we decide to close the loop by a diagonal controller u_j = K_j(r_j − y_j), that is, the first component of u is only controlled by the first component of y, and similarly u2 is controlled only by y2. Suppose we leave the second control loop open for the moment and that we vary the first control loop. If the open-loop gain from u2 to y2 does not change a lot as we vary the first loop u1 = K1(r1 − y1), it is safe to say that we can design the second control loop independently of the first, or to put it differently, that the second control loop is insensitive to tuning of the first. This is desirable. In summary, we want the ratio

    λ := (gain from u2 to y2 if first loop is open) / (gain from u2 to y2 if first loop is closed)     (4.76)

preferably close to 1. To make this λ more explicit we assume now that the reference signal r is constant and that all signals have settled to their constant steady-state values. Now if the first loop is open then the first input entry u1 is zero (or constant). However if the first loop is closed then—assuming perfect control—it is the output y1 that is constant (equal to r1). This allows us to express (4.76) as

    λ = ( dy2/du2 |_{u1} ) / ( dy2/du2 |_{y1} )

where |_{u1} expresses that u1 is considered constant in the differentiation. This expression for λ exists and may be computed if P(0) is invertible:

    dy2/du2 |_{u1} = d( P21(0)u1 + P22(0)u2 )/du2 |_{u1} = P22(0)

and because u = P^{-1}(0)y we also have

    dy2/du2 |_{y1} = 1 / ( du2/dy2 |_{y1} ) = 1 / ( d( P^{-1}_{21}(0)y1 + P^{-1}_{22}(0)y2 )/dy2 |_{y1} ) = 1 / P^{-1}_{22}(0).

The relative gain λ hence equals P22(0) P^{-1}_{22}(0). �

For general square MIMO systems P we want to consider the relative gain array (RGA), which is the (rational) matrix Λ defined element-wise as

    Λ_ij = P_ij (P^{-1})_ji

or, equivalently as a matrix, as

    Λ = P ∘ (P^{-1})^T     (4.77)

where ∘ denotes the Hadamard product, which is the entry-wise product of two matrices of the same dimensions.
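Equation (4.77) translates directly into code. The following Python sketch (NumPy assumed; the steady-state gain matrix used for illustration is made up) computes the RGA with the Hadamard product.

import numpy as np

def rga(P):
    """Relative gain array Lambda = P o (P^{-1})^T, see (4.77)."""
    return P * np.linalg.inv(P).T      # elementwise (Hadamard) product

P0 = np.array([[1.0, 1.0],
               [1.0, 1.05]])           # a nearly singular steady-state gain
print(rga(P0))                         # large entries warn of difficult pairing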


Summary 4.5.13 (Properties of the RGA).

1. In constant steady state there holds that

    Λ_ij(0) = ( dy_i/du_j |_{all loops (u_k, y_k), k ≠ j open} ) / ( dy_i/du_j |_{all loops (u_k, y_k), k ≠ j closed} )

2. The sum of the elements of each row or each column of Λ is 1.

3. Any permutation of rows and columns in P results in the same permutations of rows and columns in Λ.

4. Λ is invariant under diagonal input and output scaling.

5. If P is diagonal or upper or lower triangular then Λ = I_m.

6. The norm σ̄(Λ(jω)) of the RGA is believed to be closely related to the minimized condition number of P(jω) under diagonal scaling, and thus serves as an indicator for badly conditioned systems. See (Nett and Manousiouthakis 1987), (Grosdidier, Morari, and Holt 1985), and (Skogestad and Morari 1987).

7. The RGA measures the sensitivity of P to relative element-by-element uncertainty. It indicates that P becomes singular by an element perturbation from P_ij to P_ij[1 − λ_ij^{-1}].

If Λ deviates a lot from I_m then this is an indication that interaction is present in the system. �

Example 4.5.14 (Relative Gain Array for 2 × 2 systems). For the two-input, two-output case

    P = [ P11  P12 ;  P21  P22 ],    (P^{-1})^T = 1/(P11P22 − P21P12) [ P22  −P21 ;  −P12  P11 ],

the relative gain array Λ = P ∘ P^{-T} equals

    Λ = [ λ   1−λ ;  1−λ   λ ],    λ := P11P22 / (P11P22 − P21P12).

We immediately see that rows and columns add up to one, as claimed. Some interesting special cases are

    P = [ P11  0 ;  0  P22 ]    ⇒   Λ = [ 1  0 ;  0  1 ]

    P = [ P11  0 ;  P21  P22 ]  ⇒   Λ = [ 1  0 ;  0  1 ]

    P = [ 0  P12 ;  P21  0 ]    ⇒   Λ = [ 0  1 ;  1  0 ]

    P = [ p  p ;  −p  p ]       ⇒   Λ = [ 0.5  0.5 ;  0.5  0.5 ]

In the first and second example, the relative gain suggests to pair u1 with y1, and u2 with y2. In the first example this pairing is also direct from the fact that P is diagonal. In the third example the RGA is antidiagonal, so we want to pair u1 with y2 and u2 with y1. In the fourth example all entries of the RGA are the same, hence no pairing can be deduced from the RGA.


If P is close to singular then several entries of Λ are large. The corresponding pairings are to be avoided, and possibly no sensible pairing exists. For example

    P = [ a   a ;  a   a(1+δ) ]   ⇒   Λ = [ (1+δ)/δ   −1/δ ;  −1/δ   (1+δ)/δ ]

with δ ≈ 0. �

Although the RGA has been derived originally by Bristol (1966) for the evaluation of P(s) at steady state s = 0 assuming stable steady-state behavior, the RGA may be useful over the complete frequency range. The following i/o pairing rules have been in use on the basis of a heuristic understanding of the properties of the RGA:

1. Prefer those pairing selections where Λ(jω) is close to unity around the crossover frequency region. This prevents undesirable stability interaction with other loops.

2. Avoid pairings with negative steady-state or low-frequency values of λ_ii(jω) on the diagonal.

The following result by Hovd and Skogestad (1992) makes more precise the statement in (Bristol 1966) relating negative entries in the RGA and nonminimum-phase behavior. See also the counterexample against Bristol's claim in (Grosdidier and Morari 1987).

Summary 4.5.15 (Relative Gain Array and RHP transmission zeros). Suppose that P has stable elements having no poles nor zeros at s = 0. Assume that the entries of Λ are nonzero and finite for s → ∞. If Λ_ij shows different signs when evaluated at s = 0 and s = ∞, then at least one of the following statements holds:

1. The entry P_ij has a zero in the right half plane;

2. P has a transmission zero in the right half plane;

3. The subsystem of P with input j and output i removed has a transmission zero in the right half plane.

As decentralized or multiloop control structures are widely applied in industrial practice, a requirement arises regarding the ability to take one loop out of operation without destabilizing the other loops. In a problem setting employing integral feedback, this leads to the notion of decentralized integral controllability (Campo and Morari 1994).

Definition 4.5.16 (Decentralized integral controllability (DIC)). The system P is DIC if there exists a decentralized (multiloop) controller having integral action in each loop such that the feedback system is stable and remains stable when each loop i is individually detuned by a factor ε_i for 0 ≤ ε_i ≤ 1. �

The definition of DIC implies that the system P must be open-loop stable. It is also assumed that the integrator in the control loop is put out of order when ε_i = 0 in loop i. The steady-state RGA provides a tool for testing on DIC. The following result has been shown by Grosdidier, Morari, and Holt (1985).

Summary 4.5.17 (Steady-state RGA and stability). Let the plant P be stable and square, and consider a multiloop controller K with diagonal transfer matrix having integral action in each diagonal element. Assume further that the loop gain PK is strictly proper. If the RGA Λ(s) of the plant contains a negative diagonal value for s = 0 then the closed-loop system satisfies at least one of the following properties:


1. The overall closed-loop system is unstable

2. The loop with the negative steady-state relative gain is unstable by itself

3. The closed-loop system is unstable if the loop corresponding to the negative steady-state relative gain is opened

A further result regarding stability under multivariable integral feedback has been provided by Lunze (1985); see also (Grosdidier, Morari, and Holt 1985) and (Morari 1985).

Result 4.5.18 (Steady-state gain and DIC). Consider the closed-loop system consisting of the open-loop stable plant P and the controller K having transfer matrix

    K(s) := (α/s) K_ss.     (4.78)

Assume unity feedback around the loop and assume that the summing junction in the loop introduces a minus sign in the loop. Then a necessary condition for the closed-loop system to be stable for all α in an interval 0 ≤ α ≤ ᾱ with ᾱ > 0 is that the matrix P(0)K_ss must have all its eigenvalues in the open right half plane. �

Further results on decentralized i/o pairing can be found in (Skogestad and Morari 1992) and in (Hovd and Skogestad 1994). The inherent control limitations of decentralized control have been studied by Skogestad, Hovd, and Lundström (1991). Further results on decentralized integral controllability can be found in (Le, Nwokah, and Frazho 1991) and in (Nwokah, Frazho, and Le 1993).

4.5.5 Internal model principle and the servomechanism problem

The servomechanism problem is to design a controller – called servocompensator – that renders the plant output as insensitive as possible to disturbances while at the same time asymptotically tracking certain reference inputs r. We discuss a state space solution to this problem; a solution that works for SISO as well as MIMO systems. Consider a plant with state space realization

    ẋ(t) = Ax(t) + Bu(t),    u(t) ∈ R^{nu},  y(t) ∈ R^{ny},  x(0) = x0 ∈ R^n,
    y(t) = Cx(t) + v(t)     (4.79)

and assume that v is a disturbance that is generated by a dynamic system having all of its poles on the imaginary axis,

    ẋ_v(t) = A_v x_v(t),    x_v(0) = x_{v0},
    v(t) = C_v x_v(t).     (4.80)

The disturbance is persistent because the eigenvalues of A_v are assumed to lie on the imaginary axis. Without loss of generality we further assume that (A_v, C_v) is an observable pair. Then a possible choice of the servocompensator is the compensator (controller) with realization

    ẋ_s(t) = A_s x_s(t) + B_s e(t),    x_s(0) = x_{s0},
    e(t) = r(t) − y(t)     (4.81)

where A_s is block-diagonal with ny blocks,

    A_s = diag(A_v, A_v, ..., A_v)     (4.82)


and where B_s is of compatible dimensions and such that (A_s, B_s) is controllable, but otherwise arbitrary. The composite system, consisting of the plant (4.79) and the servocompensator (4.81), yields

    [ ẋ(t) ;  ẋ_s(t) ] = [ A   0 ;  −B_sC   A_s ] [ x(t) ;  x_s(t) ] + [ B ;  0 ] u(t) + [ 0 ;  B_s ] (r(t) − v(t)),

    y(t) = [ C   0 ] [ x(t) ;  x_s(t) ] + v(t).

As it stands the closed loop is not stable, but it may be stabilized by an appropriate state feedback

    u(t) = Fx(t) + F_s x_s(t)     (4.83)

where F and F_s are chosen by any method for designing stabilizing state feedback, such as LQ theory. The resulting closed loop is shown in Fig. 4.10.

Figure 4.10: Servomechanism configuration

Summary 4.5.19 (Servocompensator). Suppose that the following conditions are satisfied.

1. (A, B) is stabilizable and (A, C) is detectable,

2. nu ≥ ny, i.e. there are at least as many inputs as there are outputs,

3. rank [ A − λ_i I   B ;  C   0 ] = n + ny for every eigenvalue λ_i of A_v.

Then F, F_s can be selected such that the closed-loop system matrix

    [ A + BF   BF_s ;  −B_sC   A_s ]     (4.84)

has all its eigenvalues in the open left-half plane. In that case the controller (4.81),(4.83) is a servocompensator in that e(t) → 0 as t → ∞ for any of the persistent v that satisfy (4.80). �

The system matrix A_s of the servocompensator contains several copies of the system matrix A_v that defines v. This means that the servocompensator contains the mechanism that generates v; it is an example of the more general internal model principle. The conditions as mentioned imply the following, see (Davison 1996) and (Desoer and Wang 1980).

• No transmission zero of the system should coincide with one of the imaginary axis poles of the disturbance model.

• The system should be functionally controllable.


• The variables r and v occur in the equations in a completely interchangeable role, so that the disturbance model used for v can also be utilized for r. That is, for any r that satisfies

    ẋ_v(t) = A_v x_v(t),    r(t) = C_r x_v(t)

the error signal e(t) converges to zero. Hence the output y(t) approaches r(t) as t → ∞.

• Asymptotically, when e(t) is zero, the disturbance v(t) is still present and the servocompensator acts in an open-loop fashion as the autonomous generator of a compensating disturbance at the output y(t), of equal form as v(t) but of opposite sign.

• If the state of the system is not available for feedback, then the use of a state estimator is possible without altering the essentials of the preceding results.

Example 4.5.20 (Integral action). If the servocompensator is taken to be an integrator ẋ_s = e then constant disturbances v are asymptotically rejected and constant reference inputs r asymptotically tracked. We shall verify the conditions of Summary 4.5.19 for this case. As before the plant P is assumed strictly proper, with state space realization, corrupted with noise v,

    ẋ(t) = Ax(t) + Bu(t),    y(t) = Cx(t) + v(t).

Combined with the integrating action of the servocompensator ẋ_s(t) = e(t), with e(t) = r(t) − y(t), we obtain

    [ ẋ(t) ;  ẋ_s(t) ] = [ A  0 ;  0  0 ] [ x(t) ;  x_s(t) ] + [ B ;  0 ] u(t) + [ 0 ;  I ] e(t)
                       = [ A  0 ;  −C  0 ] [ x(t) ;  x_s(t) ] + [ B ;  0 ] u(t) + [ 0 ;  I ] (r(t) − v(t)).

Closing the loop with state feedback u(t) = Fx(t) + F_s x_s(t) renders the closed-loop state space representation

    [ ẋ(t) ;  ẋ_s(t) ] = [ A + BF   BF_s ;  −C   0 ] [ x(t) ;  x_s(t) ] + [ 0 ;  I ] (r(t) − v(t)),

    y(t) = [ C   0 ] [ x(t) ;  x_s(t) ].

All closed-loop poles can be arbitrarily assigned by state feedback u(t) = Fx(t) + F_s x_s(t) if and only if the pair

    ( [ A  0 ;  C  0 ] ,  [ B ;  0 ] )

is controllable. This is the case if and only if

    [ B   AB   A²B   ··· ;  0   CB   CAB   ··· ]

has full row rank. The above matrix may be decomposed as

    [ A  B ;  C  0 ] [ 0   B   AB   ··· ;  I_m   0   0   ··· ]

and therefore has full row rank if and only if

1. (A, B) is controllable,

2. nu ≥ ny,


3. rank [ A  B ;  C  0 ] = n + ny.

(In fact the third condition implies the second.) The three conditions are exactly what Summary 4.5.19 states. If the full state x is not available for feedback then we may replace the feedback law u(t) = Fx(t) + F_s x_s(t) by an approximating observer,

    d/dt x̂(t) = (A − KC)x̂(t) + Bu(t) + Ky(t),
    u(t) = Fx̂(t) + F_s x_s(t).
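As a numerical illustration of this construction, the following Python sketch (NumPy and SciPy assumed; the plant data are fictitious) augments a plant with integrators and computes a stabilizing feedback [F F_s] by LQ design.

import numpy as np
from scipy.linalg import solve_continuous_are

# a hypothetical 2-state, 1-input, 1-output plant
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
n, nu, ny = A.shape[0], B.shape[1], C.shape[0]

# augmented system: state [x; x_s], with xdot_s = e = r - y (integral action)
Aa = np.block([[A, np.zeros((n, ny))], [-C, np.zeros((ny, ny))]])
Ba = np.vstack([B, np.zeros((ny, nu))])

# LQ state feedback for the augmented system
Q, R = np.eye(n + ny), np.eye(nu)
X = solve_continuous_are(Aa, Ba, Q, R)
K = np.linalg.solve(R, Ba.T @ X)          # u = -K [x; x_s], i.e. [F, F_s] = -K
print(np.linalg.eigvals(Aa - Ba @ K))     # all eigenvalues in the open left-half plane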

4.6 Appendix: Proofs and Derivations

Derivation of process dynamic model of the two-tank liquid flow process. A dynamic process model for the liquid flow process of figure 4.1 is derived under the following assumptions:

• The liquid is incompressible;

• The pressure in both vessels is atmospheric;

• The flow φ2 depends instantaneously on the static pressure defined by the liquid level H1;

• The flow φ4 is the instantaneous result of a flow-controlled recycle pump;

• φ3 is determined by a valve or pump outside the system boundary of the process under consideration, independent of the variables under consideration in the model.

Let ρ denote the density of the liquid expressed in kg/m³, and let A1 and A2 denote the cross-sectional areas (in m²) of each vessel; then the mass balances over each vessel and a relation for the outflow are:

    A1 ρ dh1(t)/dt = ρ[φ1(t) + φ4(t) − φ2(t)]     (4.85)
    A2 ρ dh2(t)/dt = ρ[φ2(t) − φ4(t) − φ3(t)]     (4.86)

The flow rate φ2(t) of the first tank is assumed to be a function of the level h1 of this tank. The common model is that

    φ2(t) = k √h1(t),     (4.87)

where k is a valve dependent constant. Next we linearize these equations around an assumed equilibrium state (h_{k,0}, φ_{i,0}). Define the deviations from the equilibrium values with tildes, that is,

    φ_i(t) = φ_{i,0} + φ̃_i(t),   i = 1, ..., 4,
    h_k(t) = h_{k,0} + h̃_k(t),   k = 1, 2.     (4.88)

Linearization of equation (4.87) around (h_{k,0}, φ_{i,0}) gives

    φ̃2(t) = ( k / (2√h_{1,0}) ) h̃1(t) = ( φ_{2,0} / (2h_{1,0}) ) h̃1(t).     (4.89)


Inserting this relation into equations (4.85), (4.86) yields the linearized equations

    d/dt [ h̃1(t) ;  h̃2(t) ] = [ −φ_{2,0}/(2A1h_{1,0})   0 ;  φ_{2,0}/(2A2h_{1,0})   0 ] [ h̃1(t) ;  h̃2(t) ]
                               + [ 1/A1   1/A1 ;  0   −1/A2 ] [ φ̃1(t) ;  φ̃4(t) ] + [ 0 ;  −1/A2 ] φ̃3(t).     (4.90)

Assuming that φ_{4,0} = φ_{1,0}, and thus φ_{2,0} = 2φ_{1,0}, and taking numerical values φ_{1,0} = 1, A1 = A2 = 1, and h_{1,0} = 1 leads to Equation (4.1).
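For reference, the following Python sketch (NumPy assumed) evaluates the matrices of the linearized model (4.90) at these numerical values; the symbol E below is simply our name for the disturbance input column.

import numpy as np

A1 = A2 = 1.0        # cross-sectional areas
h10 = 1.0            # equilibrium level of tank 1
phi10 = 1.0          # equilibrium flow; phi40 = phi10, so phi20 = 2*phi10
phi20 = 2 * phi10

A = np.array([[-phi20 / (2 * A1 * h10), 0.0],
              [ phi20 / (2 * A2 * h10), 0.0]])
B = np.array([[1 / A1, 1 / A1],
              [0.0,   -1 / A2]])       # inputs: deviations of phi1 and phi4
E = np.array([[0.0], [-1 / A2]])       # disturbance input: deviation of phi3

print(A)   # [[-1. 0.] [ 1. 0.]]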

Proof of Summary 4.3.16.

1. If D is not zero then the impulse response matrix h(t) = C e^{At} B 1(t) + δ(t)D contains Dirac delta functions and as a result the H2-norm cannot be finite. So D = 0 and

    h(t) = C e^{At} B 1(t).

If A is not a stability matrix then h(t) is unbounded, hence the H2-norm is infinite. So D = 0 and A is a stability matrix. Then

    ‖H‖²_{H2} = tr ∫_{−∞}^{∞} h(t)^T h(t) dt = tr ∫_0^∞ B^T e^{A^T t} C^T C e^{At} B dt = tr( B^T Y B ),    Y := ∫_0^∞ e^{A^T t} C^T C e^{At} dt.

The so defined matrix Y satisfies

    A^T Y + Y A = ∫_0^∞ ( A^T e^{A^T t} C^T C e^{At} + e^{A^T t} C^T C e^{At} A ) dt = e^{A^T t} C^T C e^{At} |_{t=0}^{t=∞} = −C^T C.     (4.91)

That is, Y satisfies the Lyapunov equation A^T Y + Y A = −C^T C, and as A is a stability matrix the solution Y of (4.91) is well known to be unique.
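This is also how the H2-norm is computed in practice: solve the Lyapunov equation (4.91) for Y and take the square root of tr(B^T Y B). A minimal Python sketch (NumPy and SciPy assumed; the second-order system is a made-up example):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# A^T Y + Y A = -C^T C, equation (4.91)
Y = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_norm = np.sqrt(np.trace(B.T @ Y @ B))
print(h2_norm)    # about 0.2887 (= 1/sqrt(12)) for this example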

2. Let X and Y be the controllability and observability gramians. By the property of state, the output y for positive time is a function of the state x0 at t = 0. Then

    ∫_0^∞ y(t)^T y(t) dt = ∫_0^∞ x0^T e^{A^T t} C^T C e^{At} x0 dt = x0^T Y x0.

If X satisfies AX + XA^T = −BB^T then its inverse Z := X^{-1} satisfies ZA + A^T Z = −ZBB^T Z. By completion of the square we may write

    d/dt (x^T Zx) = 2x^T Zẋ = 2x^T(ZAx + ZBu) = x^T(ZA + A^T Z)x + 2x^T ZBu
                 = x^T(−ZBB^T Z)x + 2x^T ZBu
                 = u^T u − (u − B^T Zx)^T (u − B^T Zx).

Therefore, assuming x(−∞) = 0,

    ∫_{−∞}^0 ( ‖u(t)‖₂² − ‖u(t) − B^T Zx(t)‖₂² ) dt = x^T(t)Zx(t) |_{t=−∞}^{t=0} = x0^T Zx0 = x0^T X^{-1} x0.


From this expression it follows that the smallest u (in norm) that steers x from x(−∞) = 0 to x(0) = x0 is u(t) = B^T Zx(t) (verify this). This then shows that

    sup_u ( ∫_0^∞ y^T(t)y(t) dt / ∫_{−∞}^0 u^T(t)u(t) dt ) = x0^T Y x0 / x0^T X^{-1} x0.

Now this supremum is less than some γ² for every x0 if and only if

    x0^T Y x0 < γ² · x0^T X^{-1} x0    for all x0 ≠ 0.

This holds iff Y < γ²X^{-1}, which in turn is equivalent to λ_i(YX) = λ_i(X^{1/2}YX^{1/2}) < γ². This proves the result.

3. ‖H‖_{H∞} < γ holds if and only if H is stable and γ²I − H^∼H is positive definite on jR ∪ ∞. This is the case iff it is positive definite at infinity (i.e., σ̄(D) < γ, i.e. γ²I − D^T D > 0) and nowhere on the imaginary axis γ²I − H^∼H is singular. We will show that γ²I − H^∼H has imaginary zeros iff (4.55) has imaginary eigenvalues. It is readily verified that a realization of γ²I − H^∼H is

    [ A − sI        0            −B         ;
      −C^T C     −A^T − sI      C^T D       ;
      D^T C        B^T          γ²I − D^T D ].

By the Schur complement results applied to this matrix we see that

    det [ A − sI   0 ;  −C^T C   −A^T − sI ] · det(γ²I − H^∼H) = det(γ²I − D^T D) · det(A_ham − sI),

where A_ham is the Hamiltonian matrix (4.55). As A has no imaginary eigenvalues we have that γ²I − H^∼H has no zeros on the imaginary axis iff A_ham has no imaginary eigenvalues.
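This equivalence underlies the usual bisection algorithm for the H∞-norm: γ exceeds the norm precisely when the Hamiltonian matrix has no imaginary eigenvalues. A Python sketch for the strictly proper case D = 0 (NumPy assumed, A a stability matrix; the example system is made up):

import numpy as np

def hinf_norm_bisect(A, B, C, tol=1e-6):
    """H-infinity norm of C(sI-A)^{-1}B (D = 0) by bisection on the Hamiltonian test."""
    def is_upper_bound(gamma):
        Ham = np.block([[A, (B @ B.T) / gamma**2],
                        [-C.T @ C, -A.T]])
        return not np.any(np.abs(np.linalg.eigvals(Ham).real) < 1e-8)
    lo, hi = 1e-8, 1.0
    while not is_upper_bound(hi):       # grow hi until it is an upper bound
        hi *= 2
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if is_upper_bound(mid) else (mid, hi)
    return hi

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(hinf_norm_bisect(A, B, C))   # peak gain of 1/((s+1)(s+2)), i.e. 0.5 at omega = 0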

Proof of Summary 4.5.5. We make use of the fact that if

    ẋ(t) = Ax(t) + Bu(t),    x(0) = 0,
    y(t) = Cx(t)

then the derivative ẏ(t) satisfies

    ẏ(t) = Cẋ(t) = CAx(t) + CBu(t).

The matrix D_f(s) = diag(s^{f1}, ..., s^{fm}) is a diagonal matrix of differentiators. Since the plant y = Pu satisfies

    ẋ(t) = Ax(t) + Bu(t),    x(0) = 0,
    y(t) = Cx(t) + Du(t)     (4.92)

it follows that the component-wise differentiated v defined as v := D_f y satisfies

    ẋ(t) = Ax(t) + Bu(t),    x(0) = 0,

    v(t) = [ d^{f1}y1(t)/dt^{f1} ;  d^{f2}y2(t)/dt^{f2} ;  ... ;  d^{fm}ym(t)/dt^{fm} ]
         = [ C_{1∗}A^{f1} ;  C_{2∗}A^{f2} ;  ... ;  C_{m∗}A^{fm} ] x(t) + D̃ u(t) = C̃ x(t) + D̃ u(t).


By assumption D̃ is nonsingular, so u(t) = D̃^{-1}(−C̃x(t) + v(t)). Inserting this in the state realization (4.92) yields

    ẋ(t) = (A − BD̃^{-1}C̃)x(t) + BD̃^{-1}v(t),    x(0) = 0,
    y(t) = (C − DD̃^{-1}C̃)x(t) + DD̃^{-1}v(t).

Since v = D_f y, it must be that the above is a realization of D_f^{-1}.

Proof of Lemma 4.2.8 (sketch). We only prove it for the case that none of the transmission zeros s0 are poles. Then (A − s0 I_n)^{-1} exists for any transmission zero, and s0 is a transmission zero if and only if P(s0) drops rank. Since

    [ A − s0 I_n   B ;  C   D ] [ I_n   (s0 I_n − A)^{-1}B ;  0   I ] = [ A − s0 I_n   0 ;  C   P(s0) ]

we see that the rank of [ A − s0 I_n  B ;  C  D ] equals n plus the rank of P(s0). Hence s0 is a transmission zero of P if and only if [ A − s0 I_n  B ;  C  D ] drops rank.

Proof of Norms of linear time-invariant systems. We indicate how the formulas for the system norms as given in Summary 4.3.12 are obtained.

1. L∞-induced norm. In terms of explicit sums, the ith component y_i of the output y = h ∗ u of the convolution system may be bounded as

    |y_i(t)| = | ∫_{−∞}^{∞} Σ_j h_ij(τ) u_j(t − τ) dτ | ≤ ∫_{−∞}^{∞} Σ_j |h_ij(τ)| · |u_j(t − τ)| dτ
             ≤ ( ∫_{−∞}^{∞} Σ_j |h_ij(τ)| dτ ) · ‖u‖_{L∞},    t ∈ R,     (4.93)

so that

    ‖y‖_{L∞} ≤ max_i ( ∫_{−∞}^{∞} Σ_j |h_ij(τ)| dτ ) · ‖u‖_{L∞}.     (4.94)

This shows that

    ‖φ‖ ≤ max_i ∫_{−∞}^{∞} Σ_j |h_ij(τ)| dτ.     (4.95)

The proof that the inequality may be replaced by equality follows by showing that there exists an input u such that (4.94) is achieved with equality.

2. L2-induced norm. By Parseval's theorem (see e.g. (Kwakernaak and Sivan 1991)) we have that

    ‖y‖²_{L2} / ‖u‖²_{L2} = ∫_{−∞}^{∞} y^H(t)y(t) dt / ∫_{−∞}^{∞} u^H(t)u(t) dt
                          = ( (1/2π) ∫_{−∞}^{∞} ŷ^H(jω)ŷ(jω) dω ) / ( (1/2π) ∫_{−∞}^{∞} û^H(jω)û(jω) dω )     (4.96)
                          = ∫_{−∞}^{∞} û^H(jω)H^H(jω)H(jω)û(jω) dω / ∫_{−∞}^{∞} û^H(jω)û(jω) dω     (4.97)


where ŷ is the Laplace transform of y and û that of u. For any fixed frequency ω we have that

    sup_{û(jω)} û^H(jω)H^H(jω)H(jω)û(jω) / ( û^H(jω)û(jω) ) = σ̄²(H(jω)).

Therefore

    sup_u ‖y‖²_{L2} / ‖u‖²_{L2} ≤ sup_ω σ̄²(H(jω)).     (4.98)

The right-hand side of this inequality is by definition ‖H‖²_{H∞}. The proof that the inequality may be replaced by equality follows by showing that there exists an input u for which (4.98) is achieved with equality within an arbitrarily small positive margin ε.


5

Uncertainty Models and Robustness

Overview – Various techniques exist to examine the stability robustness of control systems subject to parametric uncertainty.

Parametric and nonparametric uncertainty with varying degree of structure may be captured by the basic perturbation model. The size of the perturbation is characterized by bounds on the norm of the perturbation. The small gain theorem provides the tool to analyze the stability robustness of this model.

These methods allow the various stability robustness results for SISO systems of Chapter 2 to be generalized in several ways.

5.1 Introduction

In this chapter we discuss various paradigms for representing uncertainty about the dynamic properties of a plant. Moreover, we present methods to analyze the effect of uncertainty on closed-loop stability and performance.

Section 5.2 is devoted to parametric uncertainty models. The idea is to assume that the equations that describe the dynamics of the plant and compensator (in particular, their transfer functions or matrices) are known, but that there is uncertainty about the precise values of various parameters in these equations. The uncertainty is characterized by an interval of possible values. We discuss some methods to analyze closed-loop stability under this type of uncertainty. The most famous recent result is Kharitonov's theorem.

In § 5.3 we introduce the so-called basic perturbation model for linear systems, which admits a much wider class of perturbations, including unstructured perturbations. Unstructured perturbations may involve changes in the order of the dynamics and are characterized by norm bounds. Norm bounds are bounds on the norms of operators corresponding to systems.

In § 5.4 we review the fixed point theorem of functional analysis and the small gain theorem of system theory. With the help of these we establish in § 5.5 sufficient and necessary conditions for the robust stability of the basic perturbation model. Next, in § 5.6 the basic stability robustness


result is applied to proportional, proportional inverse perturbations and fractional perturbations of feedback loops.

The perturbation models of § 5.6 are relatively crude. Doyle's structured singular value allows much finer structuring. It is introduced in § 5.7. An appealing feature of structured singular value analysis is that it allows studying combined stability and performance robustness in a unified framework. This is explained in § 5.8.

In § 5.9 a number of proofs for this chapter are presented.

5.2 Parametric robustness analysis

5.2.1 Introduction

In this section we review the parametric approach to robustness analysis. In this view of uncertainty the plant and compensator transfer functions are assumed to be given, but contain several parameters whose values are not precisely known. We illustrate this by an example.

Figure 5.1: Feedback system (reference input r, prefilter F, error signal e, compensator C, plant input u, plant P, plant output z)

Example 5.2.1 (Third-order uncertain plant). Consider a feedback configuration as in Fig. 5.1 where the plant transfer function is given by

    P(s) = g / ( s²(1 + θs) ).     (5.1)

The gain g is not precisely known but is nominally equal to g0 = 1 [s⁻²]. The number θ is a parasitic time constant and nominally 0 [s]¹. A system of this type can successfully be controlled by a PD controller with transfer function C(s) = k + T_d s. The latter transfer function is not proper so we modify it to

    C(s) = (k + T_d s) / (1 + T_0 s),     (5.2)

where the time constant T_0 is small so that the PD action is not affected at low frequencies. With these plant and compensator transfer functions, the return difference J = 1 + L = 1 + PC is given by

    J(s) = 1 + [ g / ( s²(1 + θs) ) ] · [ (k + T_d s) / (1 + T_0 s) ]
         = ( θT_0 s⁴ + (θ + T_0)s³ + s² + gT_d s + gk ) / ( s²(1 + θs)(1 + T_0 s) ).     (5.3)

Hence, the stability of the feedback system is determined by the locations of the roots of the closed-loop characteristic polynomial

    χ(s) = θT_0 s⁴ + (θ + T_0)s³ + s² + gT_d s + gk.     (5.4)

¹ All physical units are SI. From this point on they are usually omitted.


Presumably, the compensator can be constructed with sufficient precision so that the values of the parameters k, T_d, and T_0 are accurately known, and such that the closed-loop system is stable at the nominal plant parameter values g = g0 = 1 and θ = 0. The question is whether the system remains stable under variations of g and θ, that is, whether the roots of the closed-loop characteristic polynomial remain in the open left-half complex plane. �

Because we pursue this example at some length we use the opportunity to demonstrate how the compensator may be designed by using the classical root locus design tool. We assume that the ground rules for the construction of root loci are known.

Figure 5.2: Root locus for compensator consisting of a simple gain

Example 5.2.2 (Compensator design by root locus method). Given the nominal plant with transfer function

    P0(s) = g0 / s²,     (5.5)

the simplest choice for a compensator would be a simple gain C(s) = k. Varying k from 0 to ∞ or from 0 to −∞ results in the root locus of Fig. 5.2. For no value of k stability is achieved.

Modification to a PD controller with transfer function C(s) = k + sT_d amounts to addition of a zero at −k/T_d. Accordingly, keeping −k/T_d fixed while varying k the root locus of Fig. 5.2 is altered to that of Fig. 5.3 (only the part for k ≥ 0 is shown). We assume that a closed-loop bandwidth of 1 is required. This may be achieved by placing the two closed-loop poles at ½√2(−1 ± j). The distance of this pole pair from the origin is 1, resulting in the desired closed-loop bandwidth. Setting the ratio of the imaginary to the real part of this pole pair equal to 1 ensures an adequate time response with a good compromise between rise time and overshoot.

Figure 5.3: Root locus for a PD compensator

The closed-loop characteristic polynomial corresponding to the PD compensator transfer function C(s) = k + sT_d is easily found to be given by

    s² + g0 T_d s + g0 k.     (5.6)


Choosing g0 T_d = √2 and g0 k = 1 (i.e., T_d = √2 and k = 1) places the closed-loop poles at the desired locations. The zero in Fig. 5.3 may now be found at −k/T_d = −½√2 = −0.7071.

Figure 5.4: Root locus for the modified PD compensator

The final step in the design is to make the compensator transfer function proper by changing it to

    C(s) = (k + T_d s) / (1 + T_0 s).     (5.7)

This amounts to adding a pole at the location −1/T_0. Assuming that T_0 is small the root locus now takes the appearance shown in Fig. 5.4. The corresponding closed-loop characteristic polynomial is

    T_0 s³ + s² + g0 T_d s + g0 k.     (5.8)

Keeping the values T_d = √2 and k = 1 and choosing T_0 somewhat arbitrarily equal to 1/10 (which places the additional pole in Fig. 5.4 at −10) results in the closed-loop poles

    −0.7652 ± j0.7715,   −8.4697.     (5.9)

The dominant pole pair at ½√2(−1 ± j) has been slightly shifted and an additional non-dominant pole at −8.4697 appears. �

5.2.2 Routh-Hurwitz Criterion

In the parametric approach, stability robustness analysis comes down to investigating the roots of a characteristic polynomial of the form

    χ(s) = χ_n(p)s^n + χ_{n−1}(p)s^{n−1} + ··· + χ_0(p),     (5.10)

whose coefficients χ_n(p), χ_{n−1}(p), ···, χ_0(p) depend on the parameter vector p. Usually it is not possible to determine the dependence of the roots on p explicitly.

Sometimes, if the problem is not very complicated, the Routh-Hurwitz criterion may be invoked for testing stability. For completeness we summarize this celebrated result (see for instance Chen (1970)). Recall that a polynomial is Hurwitz if all its roots have zero or negative real part. It is strictly Hurwitz if all its roots have strictly negative real part.

Summary 5.2.3 (Routh-Hurwitz criterion). Consider the polynomial

    χ(s) = a0 s^n + a1 s^{n−1} + ··· + a_{n−1} s + a_n     (5.11)

with real coefficients a0, a1, ···, a_n. From these coefficients we form the Hurwitz tableau

    a0   a2   a4   a6   ···
    a1   a3   a5   a7   ···
    b0   b2   b4   ···
    b1   b3   b5   ···
    ···                                   (5.12)

The first two rows of the tableau are directly taken from the "even" and "odd" coefficients of the polynomial χ, respectively. The third row is constructed from the two preceding rows by letting

    [ b0  b2  b4  ··· ] = [ a2  a4  a6  ··· ] − (a0/a1) [ a3  a5  a7  ··· ].     (5.13)

Note that the numerator of b0 is obtained as minus the determinant of the 2 × 2 matrix formed by the first column of the two rows above and the column to the right of and above b0. Similarly, the numerator of b2 is obtained as minus the determinant of the 2 × 2 matrix formed by the first column of the two rows above and the column to the right of and above b2. All further entries of the third row are formed this way. Missing entries at the end of the rows are replaced with zeros. We stop adding entries to the third row when they all become zero.

The fourth row b1, b3, ··· is formed from the two rows above it in the same way the third row is formed from the first and second rows. All further rows are constructed in this manner. We stop after n + 1 rows.

Given this tableau, a necessary and sufficient condition for the polynomial χ to be strictly Hurwitz is that all the entries a0, a1, b0, b1, ··· of the first column of the tableau are nonzero and have the same sign.

A useful necessary condition for stability (known as Descartes' rule of signs) is that all the coefficients a0, a1, ···, a_n of the polynomial χ are nonzero and all have the same sign. �
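The tableau construction is mechanical and easily automated. The following Python sketch (NumPy assumed) builds the rows from the coefficients a0, …, an and applies the first-column test; the special cases with premature zero rows are not treated.

import numpy as np

def routh_hurwitz_strict(coeffs):
    """coeffs = [a0, a1, ..., an] of a0 s^n + a1 s^(n-1) + ... + an.
    Returns True if the polynomial is strictly Hurwitz (first-column test)."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    row0 = np.array(coeffs[0::2], float)              # "even" coefficients
    row1 = np.array(coeffs[1::2], float)              # "odd" coefficients
    rows = [np.pad(row0, (0, width - len(row0))),
            np.pad(row1, (0, width - len(row1)))]
    for _ in range(n - 1):
        prev2, prev1 = rows[-2], rows[-1]
        if prev1[0] == 0:
            return False                               # zero in the first column
        new = prev2[1:] - (prev2[0] / prev1[0]) * prev1[1:]
        rows.append(np.pad(new, (0, width - len(new))))
    first_col = np.array([r[0] for r in rows[:n + 1]])
    return bool(np.all(first_col > 0) or np.all(first_col < 0))

# nominal closed-loop polynomial (5.8) with T0 = 0.1, g0 = 1, Td = sqrt(2), k = 1
print(routh_hurwitz_strict([0.1, 1.0, np.sqrt(2), 1.0]))   # True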

In principle, the Routh-Hurwitz criterion allows establishing the stability region of the parameter dependent polynomial χ as given by (5.10), that is, the set of all parameter values p for which the roots of χ all have strictly negative real part. In practice, this is often not simple.

Example 5.2.4 (Stability region for third-order plant). By way of example we consider the third-order plant of Examples 5.2.1 and 5.2.2. Using the numerical values established in Example 5.2.2 we have from Example 5.2.1 that the closed-loop characteristic polynomial is given by

    χ(s) = (θ/10)s⁴ + (θ + 1/10)s³ + s² + g√2 s + g.     (5.14)

The parameter vector p has components g and θ. The Hurwitz tableau may easily be found to be given by

    θ/10         1      g
    θ + 1/10     g√2
    b0           g
    b1
    g                                     (5.15)


where

    b0 = ( θ + 1/10 − (√2/10)gθ ) / ( θ + 1/10 ),     (5.16)

    b1 = [ ( θ + 1/10 − (√2/10)gθ )√2 − (θ + 1/10)² ] / ( θ + 1/10 − (√2/10)gθ ) · g.     (5.17)

Inspection of the coefficients of the closed-loop characteristic polynomial (5.14) shows that a necessary condition for closed-loop stability is that both g and θ be positive. This condition ensures the first, second and fifth entries of the first column of the tableau to be positive. The third entry b0 is positive if g < g3(θ), where g3 is the function

    g3(θ) = 5√2 (θ + 1/10) / θ.     (5.18)

The fourth entry b1 is positive if g < g4(θ), with g4 the function

    g4(θ) = 5(θ + 1/10)(√2 − 1/10 − θ) / θ.     (5.19)

Figure 5.5 shows the graphs of g3 and g4 and the resulting stability region. The case θ = 0 needs to be considered separately. Inspection of the root locus of Fig. 5.4 (which applies if θ = 0) shows that for θ = 0 closed-loop stability is obtained for all g > 0. �

Figure 5.5: Stability region

Exercise 5.2.5 (Stability margins). The stability of the feedback system of Example 5.2.4 is quite robust with respect to variations in the parameters θ and g. Inspect the Nyquist plot of the nominal loop gain to determine the various stability margins of § 1.4.2 and Exercise 1.4.9(b) of the closed-loop system. �

5.2.3 Gridding

If the number of parameters is larger than two or three then it seldom is possible to establish the stability region analytically as in Example 5.2.4. A simple but laborious alternative method is


known as gridding. It consists of covering the relevant part of the parameter space with a grid, and testing for stability in each grid point. The grid does not necessarily need to be rectangular.

Clearly this is a task that easily can be coded for a computer. The number of grid points increases exponentially with the dimension of the parameter space, however, and the computational load for a reasonably fine grid may well be enormous.

Example 5.2.6 (Gridding the stability region for the third-order plant). Figure 5.6 shows the results of gridding the parameter space in the region defined by 0.5 ≤ g ≤ 4 and 0 ≤ θ ≤ 1. The small circles indicate points where the closed-loop system is stable. Plus signs correspond to unstable points. Each point may be obtained by applying the Routh-Hurwitz criterion or any other stability test. �

Figure 5.6: Stability region obtained by gridding
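A grid test such as that of Fig. 5.6 takes only a few lines of code. The following Python sketch (NumPy assumed) grids the (θ, g) region and classifies each point by the roots of the closed-loop characteristic polynomial (5.14).

import numpy as np

Td, k, T0 = np.sqrt(2.0), 1.0, 0.1

def is_stable(g, theta):
    # chi(s) = theta*T0 s^4 + (theta + T0) s^3 + s^2 + g*Td s + g*k, see (5.14)
    coeffs = [theta * T0, theta + T0, 1.0, g * Td, g * k]
    if coeffs[0] == 0:                       # theta = 0: the polynomial is third order
        coeffs = coeffs[1:]
    return bool(np.all(np.roots(coeffs).real < 0))

for g in np.linspace(0.5, 4.0, 8):
    line = "".join("o" if is_stable(g, th) else "+" for th in np.linspace(0.0, 1.0, 11))
    print(f"g = {g:4.1f}: {line}")           # 'o' stable, '+' unstable, as in Fig. 5.6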

5.2.4 Kharitonov’s theorem

Gridding is straightforward but may involve a tremendous amount of repetitious computation. In 1978, Kharitonov (1978a) (see also Kharitonov (1978b)) published a result that subsequently attracted much attention in the control literature² because it may save a great deal of work. Kharitonov's theorem deals with the stability of a system with closed-loop characteristic polynomial

    χ(s) = χ0 + χ1 s + ··· + χ_n s^n.     (5.20)

Each of the coefficients χ_i is known to be bounded in the form

    χ̲_i ≤ χ_i ≤ χ̄_i,     (5.21)

with χ̲_i and χ̄_i given numbers for i = 0, 1, ···, n. Kharitonov's theorem allows verifying whether all these characteristic polynomials are strictly Hurwitz by checking only four special polynomials out of the infinite family.

Summary 5.2.7 (Kharitonov's theorem). Each member of the infinite family of polynomials

    χ(s) = χ0 + χ1 s + ··· + χ_n s^n,     (5.22)

with

    χ̲_i ≤ χ_i ≤ χ̄_i,   i = 0, 1, 2, ···, n,     (5.23)

² For a survey see Barmish and Kang (1993).


is strictly Hurwitz if and only if each of the four Kharitonov polynomials

    k1(s) = χ̲0 + χ̲1 s + χ̄2 s² + χ̄3 s³ + χ̲4 s⁴ + χ̲5 s⁵ + χ̄6 s⁶ + ···,     (5.24)
    k2(s) = χ̄0 + χ̄1 s + χ̲2 s² + χ̲3 s³ + χ̄4 s⁴ + χ̄5 s⁵ + χ̲6 s⁶ + ···,     (5.25)
    k3(s) = χ̄0 + χ̲1 s + χ̲2 s² + χ̄3 s³ + χ̄4 s⁴ + χ̲5 s⁵ + χ̲6 s⁶ + ···,     (5.26)
    k4(s) = χ̲0 + χ̄1 s + χ̄2 s² + χ̲3 s³ + χ̲4 s⁴ + χ̄5 s⁵ + χ̄6 s⁶ + ···     (5.27)

is strictly Hurwitz. �

Note the repeated patterns of under- and overbars. A simple proof of Kharitonov's theorem is given by Minnichelli, Anagnost, and Desoer (1989). We see what we can do with this result for our example.

Example 5.2.8 (Application of Kharitonov's theorem). From Example 5.2.1 we know that the stability of the third-order uncertain feedback system is determined by the characteristic polynomial

    χ(s) = θT_0 s⁴ + (θ + T_0)s³ + s² + gT_d s + gk,     (5.28)

where in Example 5.2.2 we took k = 1, T_d = √2 and T_0 = 1/10. Suppose that the variations of the plant parameters g and θ are known to be bounded by

    g̲ ≤ g ≤ ḡ,   0 ≤ θ ≤ θ̄.     (5.29)

Inspection of (5.28) shows that the coefficients χ0, χ1, χ2, χ3, and χ4 are correspondingly bounded by

    g̲ ≤ χ0 ≤ ḡ,
    g̲√2 ≤ χ1 ≤ ḡ√2,
    1 ≤ χ2 ≤ 1,
    T_0 ≤ χ3 ≤ T_0 + θ̄,
    0 ≤ χ4 ≤ T_0 θ̄.     (5.30)

We assume that

    0 ≤ θ ≤ 0.2,     (5.31)

so that θ̄ = 0.2, but consider two cases for the gain g:

1. 0.5 ≤ g ≤ 5, so that g̲ = 0.5 and ḡ = 5. This corresponds to a variation in the gain by a factor of ten. Inspection of Fig. 5.5 shows that this region of variation is well within the stability region. It is easily found that the four Kharitonov polynomials are

    k1(s) = 0.5 + 0.5√2 s + s² + 0.3 s³,     (5.32)
    k2(s) = 5 + 5√2 s + s² + 0.1 s³ + 0.02 s⁴,     (5.33)
    k3(s) = 5 + 0.5√2 s + s² + 0.3 s³ + 0.02 s⁴,     (5.34)
    k4(s) = 0.5 + 5√2 s + s² + 0.1 s³.     (5.35)


By the Routh-Hurwitz test or by numerical computation of the roots using MATLAB or another computer tool it may be found that only k1 and k4 are strictly Hurwitz, while k2 and k3 are not Hurwitz. This shows that the polynomial χ0 + χ1 s + χ2 s² + χ3 s³ + χ4 s⁴ is not strictly Hurwitz for all variations of the coefficients within the bounds (5.30). This does not prove that the closed-loop system is not stable for all variations of g and θ within the bounds (5.29), however, because the coefficients χ_i of the polynomial do not vary independently within the bounds (5.30).

2. 0.5 ≤ g ≤ 2, so that g̲ = 0.5 and ḡ = 2. The gain now only varies by a factor of four. Repeating the calculations we find that each of the four Kharitonov polynomials is strictly Hurwitz, so that the closed-loop system is stable for all parameter variations.
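The calculations of this example are easily automated. The following Python sketch (NumPy assumed) forms the four Kharitonov polynomials from the coefficient bounds—using the repeating pattern of lower and upper bounds of Summary 5.2.7—and tests each by computing its roots.

import numpy as np

def kharitonov_polys(lower, upper):
    """lower[i], upper[i] bound chi_i of chi(s) = chi_0 + chi_1 s + ... + chi_n s^n.
    Returns the four polynomials (ascending coefficients), numbered as in Summary 5.2.7."""
    selectors = {"k1": "LLUU", "k2": "UULL", "k3": "ULLU", "k4": "LUUL"}
    return {name: [lower[i] if pat[i % 4] == "L" else upper[i] for i in range(len(lower))]
            for name, pat in selectors.items()}

def strictly_hurwitz(asc_coeffs):
    c = np.trim_zeros(np.array(asc_coeffs, float), "b")   # drop zero leading coefficients
    return bool(np.all(np.roots(c[::-1]).real < 0))

# bounds (5.30) for case 1: g in [0.5, 5], theta_bar = 0.2, T0 = 0.1
lower = [0.5, 0.5 * np.sqrt(2), 1.0, 0.1, 0.0]
upper = [5.0, 5.0 * np.sqrt(2), 1.0, 0.3, 0.02]
for name, p in kharitonov_polys(lower, upper).items():
    print(name, strictly_hurwitz(p))   # not all four are strictly Hurwitz for these bounds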

5.2.5 The edge theorem

Example 5.2.8 shows that Kharitonov's theorem usually only yields sufficient but not necessary conditions for robust stability in problems where the coefficients of the characteristic polynomial do not vary independently. We therefore consider characteristic polynomials of the form

    χ(s) = χ0(p)s^n + χ1(p)s^{n−1} + ··· + χ_n(p),     (5.36)

where the parameter vector p of uncertain coefficients p_i, i = 1, 2, ···, N, enters linearly into the coefficients χ_i(p), i = 0, 1, ···, n. Such problems are quite common; indeed, the example we are pursuing is of this type. We may rearrange the characteristic polynomial in the form

    χ(s) = φ0(s) + Σ_{i=1}^{N} p_i φ_i(s),     (5.37)

where the polynomials φ_i, i = 0, 1, ···, N are fixed and given. Assuming that each of the parameters p_i lies in a bounded interval of the form p̲_i ≤ p_i ≤ p̄_i, i = 1, 2, ···, N, the family of polynomials (5.37) forms a polytope of polynomials. Then the edge theorem (Bartlett, Hollot, and Lin 1988) states that to check whether each polynomial in the polytope is Hurwitz it is sufficient to verify whether the polynomials on each of the exposed edges of the polytope are Hurwitz. The exposed edges are obtained by fixing N − 1 of the parameters p_i at their minimal or maximal value, and varying the remaining parameter over its interval.

The edge theorem actually is stronger than this.

Summary 5.2.9 (Edge theorem). Let D be a simply connected domain in the complex plane. Then all the roots of each polynomial (5.37) are contained in D if and only if the roots of each polynomial on the exposed edges of the polytope are contained in D. �

A simply connected domain is a domain such that every simple closed contour (i.e., a contour that does not intersect itself) inside the domain encloses only points of the domain. Although obviously application of the edge theorem involves much more work than that of Kharitonov's theorem it produces more results.

Example 5.2.10 (Application of the edge theorem). We apply the edge theorem to the third-order uncertain plant. From Example 5.2.1 we know that the closed-loop characteristic polynomial is given by

    χ(s) = θT_0 s⁴ + (θ + T_0)s³ + s² + gT_d s + gk     (5.38)
         = (T_0 s³ + s²) + θ(T_0 s⁴ + s³) + g(T_d s + k).     (5.39)


Assuming that 0 ≤ θ ≤ θ̄ and g̲ ≤ g ≤ ḡ this forms a polytope. By the edge theorem, we need to check the locations of the roots of the four "exposed edges"

    θ = 0 :   T_0 s³ + s² + gT_d s + gk,   g̲ ≤ g ≤ ḡ,
    θ = θ̄ :   θ̄T_0 s⁴ + (θ̄ + T_0)s³ + s² + gT_d s + gk,   g̲ ≤ g ≤ ḡ,
    g = g̲ :   θT_0 s⁴ + (θ + T_0)s³ + s² + g̲T_d s + g̲k,   0 ≤ θ ≤ θ̄,
    g = ḡ :   θT_0 s⁴ + (θ + T_0)s³ + s² + ḡT_d s + ḡk,   0 ≤ θ ≤ θ̄.     (5.40)

Figure 5.7 shows the patterns traced by the various root loci so defined with T_0, T_d and k as determined in Example 5.2.2, and θ̄ = 0.2, g̲ = 0.5, ḡ = 5. This is the case where in Example 5.2.8 application of Kharitonov's theorem was not successful in demonstrating robust stability. Figure 5.7 shows that the four root loci are all contained within the left-half complex plane. By the edge theorem, the closed-loop system is stable for all parameter perturbations that are considered. �

Figure 5.7: Loci of the roots of the four exposed edges

5.2.6 Testing the edges

Actually, to apply the edge theorem it is not necessary to "grid" the edges, as we did in Example 5.2.10. By a result of Białas (1985) the stability of convex combinations of stable polynomials³ p1 and p2 of the form

    p = λp1 + (1 − λ)p2,   λ ∈ [0, 1],     (5.41)

may be established by a single test. Given a polynomial

    q(s) = q0 s^n + q1 s^{n−1} + q2 s^{n−2} + ··· + q_n,     (5.42)

³ Stable polynomial is the same as strictly Hurwitz polynomial.


define its Hurwitz matrix Q (Gantmacher 1964) as the n × n matrix

    Q = [ q1   q3   q5   ···   ···   ··· ;
          q0   q2   q4   ···   ···   ··· ;
          0    q1   q3   q5    ···   ··· ;
          0    q0   q2   q4    ···   ··· ;
          0    0    q1   q3    q5    ··· ;
          ···                            ].     (5.43)

Białas' result may be rendered as follows.

Summary 5.2.11 (Białas' test). Suppose that the polynomials p1 and p2 are strictly Hurwitz with their leading coefficient nonnegative and the remaining coefficients positive. Let P1 and P2 be their Hurwitz matrices, and define the matrix

    W = −P1 P2^{-1}.     (5.44)

Then each of the polynomials

    p = λp1 + (1 − λ)p2,   λ ∈ [0, 1],     (5.45)

is strictly Hurwitz iff the real eigenvalues of W all are strictly negative. �

Note that no restrictions are imposed on the non-real eigenvalues of W.

Example 5.2.12 (Application of Białas' test). We apply Białas' test to the third-order plant. In Example 5.2.10 we found that by the edge theorem we need to check the locations of the roots of the four "exposed edges"

    T_0 s³ + s² + gT_d s + gk,   g̲ ≤ g ≤ ḡ,
    θ̄T_0 s⁴ + (θ̄ + T_0)s³ + s² + gT_d s + gk,   g̲ ≤ g ≤ ḡ,
    θT_0 s⁴ + (θ + T_0)s³ + s² + g̲T_d s + g̲k,   0 ≤ θ ≤ θ̄,
    θT_0 s⁴ + (θ + T_0)s³ + s² + ḡT_d s + ḡk,   0 ≤ θ ≤ θ̄.     (5.46)

The first of these families of polynomials is the convex combination of the two polynomials

    p1(s) = T_0 s³ + s² + g̲T_d s + g̲k,     (5.47)
    p2(s) = T_0 s³ + s² + ḡT_d s + ḡk,     (5.48)

which both are strictly Hurwitz for the given numerical values. The Hurwitz matrices of these two polynomials are

    P1 = [ 1    g̲k    0 ;  T_0   g̲T_d   0 ;  0   1   g̲k ],    P2 = [ 1    ḡk    0 ;  T_0   ḡT_d   0 ;  0   1   ḡk ],     (5.49)

respectively. Numerical evaluation, with T_0 = 1/10, T_d = √2, k = 1, g̲ = 0.5, and ḡ = 5, yields

    W = −P1 P2^{-1} = [ −1.068     0.6848    0     ;
                        −0.09685   −0.03152  0     ;
                        0.01370    −0.1370   −0.1  ].     (5.50)


The eigenvalues of W are −1, −0.1, and −0.1. They are all real and negative. Hence, by Białas' test, the system is stable on the edge under investigation.

Similarly, it may be checked that the system is stable on the other edges. By the edge theorem, the system is stable for the parameter perturbations studied. ∎
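The Białas test is easy to mechanize. The sketch below (ours, not from the notes; numpy assumed) builds the Hurwitz matrices of p_1 and p_2 per (5.43), forms W = −P_1 P_2^{-1}, and checks the real eigenvalues, reproducing (5.50) for the first edge. The helper name hurwitz is hypothetical.

```python
import numpy as np

def hurwitz(q):
    """n x n Hurwitz matrix of q(s) = q0 s^n + q1 s^(n-1) + ... + qn, as in (5.43)."""
    q = np.asarray(q, dtype=float)
    n = len(q) - 1
    Q = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            idx = 2 * j - i + 1          # entry Q[i, j] = q_(2j - i + 1)
            if 0 <= idx <= n:
                Q[i, j] = q[idx]
    return Q

k, Td, T0 = 1.0, np.sqrt(2.0), 0.1
g_lo, g_hi = 0.5, 5.0
p1 = [T0, 1.0, g_lo * Td, g_lo * k]      # coefficients of p1(s) in (5.47)
p2 = [T0, 1.0, g_hi * Td, g_hi * k]      # coefficients of p2(s) in (5.48)

P1, P2 = hurwitz(p1), hurwitz(p2)
W = -P1 @ np.linalg.inv(P2)
eig = np.linalg.eigvals(W)
real_eigs = eig[np.abs(eig.imag) < 1e-9].real
print(np.round(W, 4))                    # compare with (5.50)
print(real_eigs)                         # all strictly negative => this edge is stable
```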

Exercise 5.2.13 (Application of Białas' test). Test the stability of the system on the remaining three edges as given by (5.46). ∎

5.3 The basic perturbation model

5.3.1 Introduction

This section presents a quite general uncertainty model. It deals with both parametric (or structured) and unstructured perturbations, although it is more suited to the latter. Unstructured perturbations are perturbations that may involve essential changes in the dynamics of the system, characterized by norm bounds. We first consider the fundamental perturbation paradigm, and then apply it to a number of special cases.

5.3.2 The basic perturbation model

Figure 5.8 shows the block diagram of the basic perturbation model. The block marked H is the system whose stability robustness is under investigation. Its transfer matrix H is sometimes called the interconnection matrix. The block ΔH represents a perturbation of the dynamics of the system. This perturbation model is simple but at the same time powerful⁴. We illustrate it by some applications.

Figure 5.8: Basic perturbation model

5.3.3 Application: Perturbations of feedback systems

The stability robustness of a unit feedback loop is of course of fundamental interest.

Example 5.3.1 (The additive perturbation model). Figure 5.9(a) shows a feedback loop with loop gain L. After perturbing the loop gain from L to L + ΔL, the feedback system may be represented as in Fig. 5.9(b). The block within the dashed lines is the unperturbed system, denoted H, and ΔL is the perturbation ΔH in the basic model.

To compute H, denote the input to the perturbation block ΔL as q and its output as p, as in Fig. 5.8. Inspection of the diagram of Fig. 5.9(b) shows that with the perturbation block ΔL taken away, the system satisfies the signal balance equation q = −p − Lq, so that

q = -(I + L)^{-1} p.   (5.51)

⁴It may be attributed to Doyle (1984).

Figure 5.9: Perturbation models of a unit feedback loop. (a) Nominal system. (b) Additive perturbation. (c) Multiplicative perturbation

It follows that the interconnection matrix H is

H = -(I + L)^{-1} = -S,   (5.52)

with S the sensitivity matrix of the feedback system. The model of Fig. 5.9(b) is known as the additive perturbation model. ∎

Example 5.3.2 (The multiplicative perturbation model). An alternative perturbation model, called the multiplicative or proportional perturbation model, is shown in Fig. 5.9(c). The transfer matrix of the perturbed loop gain is

(I + \Delta_L) L,   (5.53)

which is equivalent to an additive perturbation ΔL L. The quantity ΔL may be viewed as the relative size of the perturbation. From the signal balance q = L(−p − q) we obtain q = −(I + L)^{-1} L p, so that the interconnection matrix is

H = -(I + L)^{-1} L = -T.   (5.54)

T is the complementary sensitivity matrix of the feedback system. ∎

5.3.4 Application: Parameter perturbations

The basic model may also be used for perturbations that are much more structured than those in the previous application.

Example 5.3.3 (Parameter perturbation). Consider the third-order plant of Example 5.2.1 with transfer function

P(s) = \frac{g}{s^2 (1 + s\theta)}.   (5.55)

The parameters g and θ are uncertain. It is not difficult to represent this transfer function by a block diagram where the parameters g and θ are found in distinct blocks. Figure 5.10 shows one way of doing this. It does not matter that one of the blocks is a pure differentiator. Note the way the factor 1 + sθ in the denominator is handled by incorporating a feedback loop.

After perturbing g to g + Δg and θ to θ + Δθ and including a feedback compensator with transfer function C, the block diagram may be arranged as in Fig. 5.11. The large block inside the dashed lines is the interconnection matrix H of the basic perturbation model.

Figure 5.10: Block diagram for the third-order plant

Figure 5.11: Application of the basic perturbation model to the third-order plant

To compute the interconnection matrix H we consider the block diagram of Fig. 5.11. Inspection shows that with the perturbation blocks Δg and Δθ omitted, the system satisfies the signal balance equation

q_2 = -s (p_2 + \theta q_2) + \frac{1}{s^2} \left( p_1 - g C(s) q_2 \right).   (5.56)

Solution for q_2 yields

q_2 = \frac{1/s^2}{1 + s\theta + C(s) g / s^2}\, p_1 - \frac{s}{1 + s\theta + C(s) g / s^2}\, p_2.   (5.57)

Further inspection reveals that q_1 = -C(s) q_2, so that

q_1 = -\frac{C(s)/s^2}{1 + s\theta + C(s) g / s^2}\, p_1 + \frac{s C(s)}{1 + s\theta + C(s) g / s^2}\, p_2.   (5.58)

Consequently, the interconnection matrix is

H(s) = \frac{1}{1 + s\theta + \frac{C(s) g}{s^2}} \begin{bmatrix} -\frac{C(s)}{s^2} & s C(s) \\ \frac{1}{s^2} & -s \end{bmatrix}   (5.59)
     = \frac{\frac{1}{1 + s\theta}}{1 + L(s)} \begin{bmatrix} C(s) \\ -1 \end{bmatrix} \begin{bmatrix} -\frac{1}{s^2} & s \end{bmatrix},   (5.60)

where L = PC is the (unperturbed) loop gain. ∎


5.4 The small gain theorem

5.4.1 Introduction

In § 5.5 we analyze the stability of the fundamental perturbation model using what is known as the small gain theorem. The small gain theorem is an application of the fixed point theorem, one of the most important results of functional analysis⁵.

Summary 5.4.1 (Fixed point theorem). Suppose that U is a complete normed space⁶. Let L: U → U be a map such that

\| L(u) - L(v) \| \le k \| u - v \| \quad \text{for all } u, v \in U,   (5.61)

for some real constant 0 ≤ k < 1. Such a map L is called a contraction. Then:

1. There exists a unique solution u* ∈ U of the equation

   u = L(u).   (5.62)

   The point u* is called a fixed point of the equation.

2. The sequence u_n, n = 0, 1, 2, ..., defined by

   u_{n+1} = L(u_n), \qquad n = 0, 1, 2, \ldots,   (5.63)

   converges to u* for every initial point u_0 ∈ U. This method of approaching the fixed point is called successive approximation.
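As a concrete illustration of successive approximation (not part of the original notes), the following few lines iterate a scalar contraction; the particular map L(u) = 0.5·cos(u) and the tolerance are chosen purely for illustration.

```python
import math

# Successive approximation for the scalar contraction L(u) = 0.5*cos(u).
# |L'(u)| <= 0.5 < 1, so the fixed point theorem guarantees a unique fixed point.
def L(u):
    return 0.5 * math.cos(u)

u = 0.0                       # arbitrary initial point u0
for _ in range(100):
    u_next = L(u)
    if abs(u_next - u) < 1e-12:
        break
    u = u_next

print(u, abs(u - L(u)))       # u is (numerically) the unique fixed point
```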

Figure 5.12: (a) Feedback loop. (b) Feedback loop with internal input and output

5.4.2 The small gain theorem

With the fixed point theorem at hand, we are now ready to establish the small gain theorem (see for instance Desoer and Vidyasagar (1975)).

⁵For an easy introduction to this material see for instance the well-known though no longer new book by Luenberger (1969).

⁶The normed space U is complete if every sequence u_i ∈ U, i = 1, 2, ..., such that lim_{n,m→∞} ‖u_n − u_m‖ = 0 has a limit in U.


Summary 5.4.2 (Small gain theorem). In the feedback loop of Fig. 5.12(a), suppose that L: U → U is a BIBO stable system for some complete normed space U. Then a sufficient condition for the feedback loop of Fig. 5.12(a) to be internally stable is that the input-output map L is a contraction. If L is linear then it is a contraction if and only if

\| L \| < 1,   (5.64)

where ‖·‖ is the norm induced by the signal norm,

\| L \| := \sup_{u \in U,\, u \ne 0} \frac{\| L u \|_U}{\| u \|_U}.

The proof may be found in § 5.9. The small gain theorem gives a sufficient condition for internal stability that is very simple but often also very conservative. The power of the small gain theorem is that it applies not only to linear time-invariant systems but also to nonlinear time-varying systems. The space of signals of finite energy (with the L₂-norm ‖u‖_{L₂}) and the space of signals of finite amplitude (with the L∞-norm ‖u‖_{L∞}) are two examples of complete normed spaces; hence, for these norms the small gain theorem applies.

Example 5.4.3 (Small gain theorem). Consider a feedback system as in Fig. 5.12(a), where L is a linear time-invariant system with transfer function

L(s) = -\frac{k}{1 + s\theta}.   (5.65)

θ is a positive time constant and k a positive or negative gain. We investigate the stability of the feedback system with the help of the small gain theorem.

1. Norm induced by the L∞-norm. First consider BIBO stability in the sense of the L∞-norm on the input and output signals. By inverse Laplace transformation it follows that the impulse response corresponding to L is given by

   l(t) = \begin{cases} -\frac{k}{\theta}\, e^{-t/\theta} & \text{for } t \ge 0, \\ 0 & \text{otherwise.} \end{cases}   (5.66)

   The norm ‖L‖ of the system induced by the L∞-norm is (see Summary 4.3.13)

   \| L \| = \| l \|_{L_1} = \int_{-\infty}^{\infty} | l(t) | \, dt = |k|.   (5.67)

   Hence, by the small gain theorem a sufficient condition for internal stability of the feedback system is that

   -1 < k < 1.   (5.68)

2. Norm induced by the L₂-norm. For positive θ the system L is stable, so according to Summary 4.3.13 the L₂-induced norm exists and equals

   \| L \|_{H_\infty} = \sup_{\omega \in \mathbb{R}} | L(j\omega) | = \sup_{\omega \in \mathbb{R}} \left| \frac{k}{1 + j\omega\theta} \right| = |k|.   (5.69)

   Again we find from the small gain theorem that (5.68) is a sufficient condition for closed-loop stability.


In conclusion, we determine the exact stability region. The feedback equation of the system of Fig. 5.12(b) is w = v + Lw. Interpreting this in terms of Laplace transforms we may solve for w to obtain

w = \frac{1}{1 - L}\, v,   (5.70)

so that the closed-loop transfer function is

\frac{1}{1 - L(s)} = \frac{1}{1 + \frac{k}{1 + s\theta}} = \frac{1 + s\theta}{(1 + k) + s\theta}.   (5.71)

Inspection shows that the closed-loop transfer function has a single pole at −(1 + k)/θ. The corresponding system is BIBO stable if and only if 1 + k > 0. The closed-loop system is internally stable if and only if

k > -1.   (5.72)

This stability region is much larger than that indicated by (5.68). ∎
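The two induced norms used in Example 5.4.3 are easy to check numerically. The sketch below (ours; numpy assumed, with k and θ chosen arbitrarily) approximates the L₁-norm of the impulse response (5.66)–(5.67) and the ∞-norm (5.69) on grids; both should come out close to |k|.

```python
import numpy as np

# Numerical check of the two induced norms in Example 5.4.3 for k = 0.8, theta = 2
# (values chosen here only for illustration).
k, theta = 0.8, 2.0

# L1 norm of the impulse response l(t) = -(k/theta)*exp(-t/theta), t >= 0.
t = np.linspace(0.0, 50.0 * theta, 200001)
l = -(k / theta) * np.exp(-t / theta)
norm_L1 = np.trapz(np.abs(l), t)                 # should approach |k|

# H-infinity norm: peak of |L(jw)| over a frequency grid.
w = np.logspace(-4, 4, 100001)
L_jw = -k / (1.0 + 1j * w * theta)
norm_Hinf = np.abs(L_jw).max()                   # should approach |k|

print(norm_L1, norm_Hinf)                        # both close to 0.8
```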

Remark. In the robust control literature the H∞-norm of a transfer matrix H is commonly denoted by ‖H‖∞ and not by ‖H‖_{H∞} as we have done so far. In the remainder of the lecture notes we adopt the notation predominantly used and hence write ‖H‖∞ to mean the H∞-norm, and we refer to this norm simply as the ∞-norm. This will not lead to confusion, as the p-norm for p = ∞ defined for vectors in Chapter 4 will not return in the remainder of the lecture notes.

5.5 Stability robustness of the basic perturbation model

5.5.1 Introduction

We study the internal stability of the basic perturbation model of Fig. 5.8, which is repeated in Fig. 5.13(a).

Figure 5.13: (a) Basic perturbation model. (b) Arrangement for internal stability


5.5.2 Sufficient conditions

We assume that the model is nominally stable, that is, internally stable if ΔH = 0. This is equivalent to the assumption that the interconnection system H by itself is BIBO stable. Similarly, we assume that also ΔH by itself is BIBO stable.

Introducing internal inputs and outputs as in Fig. 5.13(b), we obtain the signal balance equations

w_1 = v_1 + \Delta_H (v_2 + H w_1), \qquad w_2 = v_2 + H (v_1 + \Delta_H w_2).   (5.73)

Rearrangement yields

w_1 = \Delta_H H w_1 + v_1 + \Delta_H v_2,   (5.74)
w_2 = H \Delta_H w_2 + H v_1 + v_2.   (5.75)

Inspection shows that by the fixed point theorem the equations have bounded solutions w_1 and w_2 for any bounded v_1 and v_2 if the inequalities

\| \Delta_H H \| < 1 \quad \text{and} \quad \| H \Delta_H \| < 1   (5.76)

both hold. Following the work of Doyle we use the system norm induced by the L₂-norm⁷. This is the ∞-norm of the system's transfer matrix as introduced in Summary 4.3.12.

We reformulate the conditions ‖ΔH H‖∞ < 1 and ‖H ΔH‖∞ < 1 in two ways. First, since ‖ΔH H‖∞ = sup_ω σ(ΔH(jω) H(jω)) and, by property 7 of Summary 4.3.8, σ(ΔH(jω) H(jω)) ≤ σ(H(jω)) · σ(ΔH(jω)), it follows that ‖ΔH H‖∞ < 1 is implied by

\sigma(H(j\omega)) \cdot \sigma(\Delta_H(j\omega)) < 1 \quad \text{for all } \omega \in \mathbb{R}.   (5.77)

The same condition implies ‖H ΔH‖∞ < 1. Hence, (5.77) is a sufficient condition for stability of the perturbed system. It provides a frequency-dependent bound on the perturbations that are guaranteed not to destabilize the system.

Another way of reformulating the conditions ‖ΔH H‖∞ < 1 and ‖H ΔH‖∞ < 1 is to replace them with the sufficient condition

\| \Delta_H \|_\infty \cdot \| H \|_\infty < 1.   (5.78)

This follows from the submultiplicative property of the induced norm of Exercise 4.3.10.

5.5.3 Necessity

The inequality (5.77) is a sufficient condition for internal stability. It is easy to find examples that admit perturbations which violate (5.77) but at the same time do not destabilize the system. It may be proved, though⁸, that if robust stability is desired for all perturbations satisfying (5.77) then the condition is also necessary. This means that it is always possible to find a perturbation that violates (5.77) within an arbitrarily small margin but destabilizes the system.

The condition (5.78) is equivalent to

\| \Delta_H \|_\infty < \frac{1}{\| H \|_\infty}.   (5.79)

⁷Use of the norm induced by the L∞-norm has also been considered in the literature (see e.g. Dahleh and Ohta (1988)).

⁸See Vidyasagar (1985) for a proof for the rational case.


Also this condition is necessary for robust stability if stability is required for all perturbations satisfying the bound.

The latter condition is particularly elegant if the perturbations are scaled such that they satisfy ‖ΔH‖∞ ≤ 1. Then a necessary and sufficient condition for robust stability is that ‖H‖∞ < 1.

Summary 5.5.1 (Stability of the basic perturbation model). Suppose that in the basic perturbation model of Fig. 5.13(a) both H and ΔH are BIBO stable.

1. Sufficient for internal stability is that

   \sigma(\Delta_H(j\omega)) < \frac{1}{\sigma(H(j\omega))} \quad \text{for all } \omega \in \mathbb{R},   (5.80)

   with σ denoting the largest singular value. If stability is required for all perturbations satisfying this bound then the condition is also necessary.

2. Another sufficient condition for internal stability is that

   \| \Delta_H \|_\infty < \frac{1}{\| H \|_\infty}.   (5.81)

   If stability is required for all perturbations satisfying this bound then the condition is also necessary.

3. In particular, a necessary and sufficient condition for robust stability under all perturbations satisfying ‖ΔH‖∞ ≤ 1 is that ‖H‖∞ < 1.

5.5.4 Examples

We consider two applications of the basic stability robustness result.

Example 5.5.2 (Stability under perturbation). By way of example, we study which perturbations of the real parameter θ preserve the stability of the system with transfer function

P(s) = \frac{1}{1 + s\theta}.   (5.82)

Arranging the system as in Fig. 5.14(a) we obtain the perturbation model of Fig. 5.14(b), with θ_0 the nominal value of θ and Δθ its perturbation. Because the system should be nominally stable we need θ_0 to be positive.

Figure 5.14: (a) Block diagram. (b) Perturbation model


The interconnection matrix H of the standard perturbation model is obtained from the signal balance equation q = −s(p + θ_0 q) (omitting u and y). Solution for q yields

q = \frac{-s}{1 + s\theta_0}\, p.   (5.83)

It follows that

H(s) = \frac{-s}{1 + s\theta_0} = \frac{-s\theta_0}{1 + s\theta_0} \cdot \frac{1}{\theta_0},   (5.84)

so that

\sigma(H(j\omega)) = | H(j\omega) | = \frac{1}{\theta_0} \sqrt{\frac{\omega^2 \theta_0^2}{1 + \omega^2 \theta_0^2}}   (5.85)

and

\| H \|_\infty = \sup_{\omega} \sigma(H(j\omega)) = \frac{1}{\theta_0}.   (5.86)

Since σ(ΔH(jω)) = |Δθ| = ‖ΔH‖∞, both (1) and (2) of Summary 5.5.1 imply that internal stability is guaranteed for all perturbations such that |Δθ| < θ_0, or

-\theta_0 < \Delta\theta < \theta_0.   (5.87)

Obviously, though, the system is stable for all θ > 0, that is, for all perturbations Δθ such that

-\theta_0 < \Delta\theta < \infty.   (5.88)

Hence, the estimate for the stability region is quite conservative. On the other hand, it is easy to find a perturbation that violates (5.87) and destabilizes the system. The perturbation Δθ = −θ_0(1 + ε), for instance, with ε positive but arbitrarily small, violates (5.87) with an arbitrarily small margin, and destabilizes the system (because it makes θ negative).

Note that the class of perturbations such that ‖Δθ‖∞ ≤ θ_0 is much larger than just the real perturbations satisfying (5.87). For instance, Δθ could be a transfer function, such as

\Delta\theta(s) = \frac{\theta_0 \alpha}{1 + s\tau},   (5.89)

with τ any positive time constant, and α a real number with magnitude less than 1. This "parasitic" perturbation leaves the system stable. ∎
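A quick numerical check of (5.86) (not part of the original notes; numpy assumed, θ_0 chosen arbitrarily):

```python
import numpy as np

# Check that the infinity-norm of H(s) = -s/(1 + s*theta0) equals 1/theta0.
theta0 = 0.5
w = np.logspace(-3, 4, 100001)
H = -1j * w / (1.0 + 1j * w * theta0)
print(np.abs(H).max(), 1.0 / theta0)   # both approximately 2.0
```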

Example 5.5.3 (Two-parameter perturbation). A more complicated example is the feedback system discussed in Examples 5.2.1, 5.2.2 and 5.3.3. It consists of a plant and a compensator with transfer functions

P(s) = \frac{g}{s^2 (1 + s\theta)}, \qquad C(s) = \frac{k + T_d s}{1 + T_0 s},   (5.90)

respectively. In Example 5.3.3 we found that the interconnection matrix H with respect to perturbations in the parameters g and θ is

H(s) = \frac{\frac{1}{1 + s\theta_0}}{1 + L_0(s)} \begin{bmatrix} C(s) \\ -1 \end{bmatrix} \begin{bmatrix} -\frac{1}{s^2} & s \end{bmatrix},   (5.91)

Figure 5.15: Scaled perturbation model for the third-order plant

with g_0 and θ_0 denoting the nominal values of the two uncertain parameters g and θ. Before continuing the analysis we modify the perturbation model of Fig. 5.11 to that of Fig. 5.15. This model includes scaling factors α_1 and β_1 for the perturbation Δg and scaling factors α_2 and β_2 for the perturbation Δθ. The scaled perturbations are denoted δ_g and δ_θ, respectively. The product α_1β_1 = ε_1 is the largest possible perturbation in g, while α_2β_2 = ε_2 is the largest possible perturbation in θ. It is easily verified that the interconnection matrix corresponding to Fig. 5.15 is

H(s) = \frac{\frac{1}{1 + s\theta_0}}{1 + L_0(s)} \begin{bmatrix} \beta_1 C(s) \\ -\beta_2 \end{bmatrix} \begin{bmatrix} -\frac{\alpha_1}{s^2} & \alpha_2 s \end{bmatrix}.   (5.92)

It may also easily be worked out that the largest eigenvalue of H^T(-j\omega) H(j\omega) is

\sigma^2(\omega) = \frac{1}{(1 + \omega^2\theta_0^2)\, | 1 + L_0(j\omega) |^2} \left( \beta_1^2 | C(j\omega) |^2 + \beta_2^2 \right) \left( \frac{\alpha_1^2}{\omega^4} + \alpha_2^2 \omega^2 \right), \qquad \omega \in \mathbb{R}.   (5.93)

The other eigenvalue is 0. The nonnegative quantity σ(ω) is the largest singular value of H(jω). By substitution of L_0 and C into (5.93) it follows that

\sigma^2(\omega) = \frac{\left( \beta_1^2 (k^2 + \omega^2 T_d^2) + \beta_2^2 (1 + \omega^2 T_0^2) \right) \left( \alpha_1^2 + \alpha_2^2 \omega^6 \right)}{| \chi(j\omega) |^2},   (5.94)

with χ the closed-loop characteristic polynomial

\chi(s) = \theta_0 T_0 s^4 + (\theta_0 + T_0) s^3 + s^2 + g_0 T_d s + g_0 k.   (5.95)

First we note that if we choose the nominal value θ_0 of the parasitic time constant θ equal to zero (which is a natural choice) and ε_2 = α_2β_2 ≠ 0, then σ(∞) = ∞, so that stability is not guaranteed. Indeed, this situation admits negative values of θ, which destabilize the closed-loop system. Hence, we need to choose θ_0 positive but such that the nominal closed-loop system is stable, of course.

Second, inspection of (5.94) shows that for fixed uncertainties ε_1 = α_1β_1 and ε_2 = α_2β_2 the function σ depends on the way the individual values of the scaling constants α_1, β_1, α_2, and β_2 are chosen. At first sight this seems surprising. Reflection reveals that this phenomenon is caused by the fact that the stability robustness test is based on full perturbations ΔH. Full perturbations are perturbations such that all entries of the perturbation transfer matrix ΔH are filled with dynamical perturbations.

We choose the numerical values k = 1, T_d = √2, and T_0 = 1/10 as in Example 5.2.2. In Example 5.2.8 we consider variations in the time constant θ between 0 and 0.2. Correspondingly, we let θ_0 = 0.1 and ε_2 = 0.1. For the variations in the gain g we study two cases as in Example 5.2.8:

1. 0.5 ≤ g ≤ 5. Correspondingly, we let g_0 = 2.75 and ε_1 = 2.25.

2. 0.5 ≤ g ≤ 2. Correspondingly, we take g_0 = 1.25 and ε_1 = 0.75.

For lack of a better choice we select for the time being

\alpha_1 = \beta_1 = \sqrt{\varepsilon_1}, \qquad \alpha_2 = \beta_2 = \sqrt{\varepsilon_2}.   (5.96)

ε2. (5.96)

Figure 5.16 shows the resulting plots of σ(ω) for the cases (1) and (2). In case (1) the peak value is about 67.1, while in case (2) it is approximately 38.7. Both cases fail the stability robustness test by a very wide margin, although from Example 5.2.4 we know that in both situations stability is ensured. The reasons that the test fails are that (i) the test is based on full perturbations rather than the structured perturbations implied by the variations in the two parameters, and (ii) the test also allows dynamical perturbations rather than the real perturbations implied by the parameter variations.
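The curves of Figure 5.16 can be reproduced with a short computation. The following sketch (ours, numpy only) evaluates the largest singular value of the scaled interconnection matrix (5.92) on a frequency grid, with the scaling (5.96); peak_sigma is a hypothetical helper and the quoted peak values are those reported in the text.

```python
import numpy as np

# Largest singular value of the scaled interconnection matrix H(jw) of (5.92)
# with the "naive" scaling (5.96), for the two cases of Example 5.5.3.
k, Td, T0 = 1.0, np.sqrt(2.0), 0.1
theta0, eps2 = 0.1, 0.1

def peak_sigma(g0, eps1):
    a1 = b1 = np.sqrt(eps1)
    a2 = b2 = np.sqrt(eps2)
    w = np.logspace(-2, 3, 2000)
    s = 1j * w
    C = (k + Td * s) / (1.0 + T0 * s)
    L0 = g0 * C / (s**2 * (1.0 + theta0 * s))
    f = (1.0 / (1.0 + theta0 * s)) / (1.0 + L0)          # scalar factor in (5.92)
    peak = 0.0
    for i in range(len(w)):
        col = np.array([[b1 * C[i]], [-b2]])             # [beta1*C; -beta2]
        row = np.array([[-a1 / s[i]**2, a2 * s[i]]])     # [-alpha1/s^2, alpha2*s]
        H = f[i] * (col @ row)
        peak = max(peak, np.linalg.svd(H, compute_uv=False)[0])
    return peak

print(peak_sigma(2.75, 2.25))   # case (1): peak of roughly 67, as quoted in the text
print(peak_sigma(1.25, 0.75))   # case (2): peak of roughly 39
```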

Figure 5.16: Plots of σ(H(jω)) for the two-parameter plant

Less conservative results may be obtained by allowing the scaling factors α_1, β_1, α_2, and β_2 to be frequency-dependent, and by choosing them so as to minimize σ(ω) for each ω. Substituting β_1 = ε_1/α_1 and β_2 = ε_2/α_2 we obtain

\sigma^2(\omega) = \frac{\varepsilon_1^2 (k^2 + \omega^2 T_d^2) + \frac{\varepsilon_1^2 (k^2 + \omega^2 T_d^2)\,\omega^6}{\rho} + \varepsilon_2^2 (1 + \omega^2 T_0^2)\,\rho + \varepsilon_2^2 (1 + \omega^2 T_0^2)\,\omega^6}{| \chi(j\omega) |^2},   (5.97)

where ρ = α_1^2/α_2^2. It is easy to establish that for fixed ω the quantity σ^2(ω) is minimized for

\rho = \omega^3 \, \frac{\varepsilon_1}{\varepsilon_2} \sqrt{\frac{k^2 + \omega^2 T_d^2}{1 + \omega^2 T_0^2}},   (5.98)


and that for this value of ρ

\sigma(\omega) = \frac{\varepsilon_1 \sqrt{k^2 + \omega^2 T_d^2} + \varepsilon_2 \, \omega^3 \sqrt{1 + \omega^2 T_0^2}}{| \chi(j\omega) |}, \qquad \omega \ge 0.   (5.99)

Figure 5.17 shows plots of σ for the same two cases (a) and (b) as before. In case (a) the peak value of σ is about 1.83, while in case (b) it is 1. Hence, only in case (b) is robust stability established. The reason that robust stability is not proved in case (a) is that the perturbation model allows dynamic perturbations. Only if the size of the perturbations is downsized by a factor of almost 2 is robust stability guaranteed.

Figure 5.17: Plots of σ(H(jω)) with optimal scaling for the two-parameter plant

In Example 5.7.11 in the section on the structured singular value this example is further discussed. ∎

Figure 5.18: Perturbation of the state space system

Exercise 5.5.4 (Stability radius). Hinrichsen and Pritchard (1986) investigate what perturbations of the matrix A make the system described by the state differential equation

\dot{x}(t) = A x(t), \qquad t \in \mathbb{R},   (5.100)

unstable. Assume that the nominal system \dot{x}(t) = A x(t) is stable. The number

r(A) = \inf_{\{\Delta \,:\, A + \Delta \text{ has at least one eigenvalue in the closed right-half plane}\}} \| \Delta \|   (5.101)


is called the stability radius of the matrix A. The matrix norm used is the spectral norm. If only real-valued perturbations Δ are considered then r(A) is the real stability radius, denoted r_R(A). If the elements of Δ may also assume complex values then r(A) is the complex stability radius, denoted r_C(A).

1. Prove that

   r_R(A) \ge r_C(A) \ge \frac{1}{\| F \|_\infty},   (5.102)

   with F the rational matrix given by F(s) = (sI − A)^{-1}. Hint: represent the system by the block diagram of Fig. 5.18.

2. A more structured perturbation model results by assuming that A is perturbed as

   A \longrightarrow A + B \Delta C,   (5.103)

   with B and C matrices of suitable dimensions, and Δ the perturbation. Adapt the block diagram of Fig. 5.18 and prove that the associated complex stability radius r_C(A, B, C) satisfies r_C(A, B, C) ≥ 1/‖H‖∞, with H(s) = C(sI − A)^{-1}B.

Hinrichsen and Pritchard (1986) prove that actually r_C(A) = 1/‖F‖∞ and r_C(A, B, C) = 1/‖H‖∞. Thus, explicit formulas are available for the complex stability radius. The real stability radius is more difficult to determine. Only very recently results have become available (Qiu, Bernhardsson, Rantzer, Davison, Young, and Doyle 1995). ∎
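A crude numerical estimate of the complex stability radius follows directly from the formula r_C(A) = 1/‖F‖∞. The sketch below (not part of the notes; numpy assumed, matrix A chosen arbitrarily) grids the frequency axis; it is meant only to illustrate the formula, not as a reliable computational method.

```python
import numpy as np

# Estimate r_C(A) = 1 / ||F||_inf with F(s) = (sI - A)^(-1) on a frequency grid.
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])          # eigenvalues in the open left-half plane

w = np.concatenate(([0.0], np.logspace(-3, 3, 20001)))
I = np.eye(A.shape[0])
sup_sigma = 0.0
for wi in w:
    F = np.linalg.inv(1j * wi * I - A)
    sup_sigma = max(sup_sigma, np.linalg.svd(F, compute_uv=False)[0])

r_C = 1.0 / sup_sigma
print(r_C)   # smallest spectral norm of a complex perturbation that destabilizes A
```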

5.5.5 Nonlinear perturbations

The basic stability robustness result also applies for nonlinear perturbations. Suppose that the perturbation ΔH in the block diagram of Fig. 5.13(a) is a nonlinear operator that maps the signal e(t), t ∈ R, into the signal (ΔH e)(t), t ∈ R. Assume that there exists a positive constant k such that

\| \Delta_H e_1 - \Delta_H e_2 \| \le k \| e_1 - e_2 \|   (5.104)

for every two inputs e_1, e_2 to the nonlinearity. We use the L₂-norm for signals. By application of the fixed point theorem it follows that the perturbed system of Fig. 5.13(a) is stable if

k \| H \|_\infty < 1.   (5.105)

Suppose for instance that the signals are scalar, and that ΔH is a static nonlinearity described by

(\Delta_H e)(t) = f(e(t)),   (5.106)

with f: R → R. Let f satisfy the inequality

\left| \frac{f(e)}{e} \right| \le c, \qquad e \ne 0, \; e \in \mathbb{R},   (5.107)

with c a nonnegative constant. Figure 5.19 illustrates the way the function f is bounded. We call f a sector bounded function. It is not difficult to see that (5.104) holds with k = c. It follows that the perturbed system is stable for any static nonlinear perturbation satisfying (5.107) as long as c‖H‖∞ < 1.

Similarly, the basic stability robustness result holds for time-varying perturbations and perturbations that are both nonlinear and time-varying, provided that these perturbations are suitably bounded.

Figure 5.19: Sector bounded function

5.6 Stability robustness of feedback systems

5.6.1 Introduction

In this section we apply the results of § 5.5 to various perturbation models for single-degree-of-freedom feedback systems. We successively discuss proportional, proportional inverse and fractional perturbations for MIMO systems. The results of § 1.4 emerge as special cases.

5.6.2 Proportional perturbations

In § 5.3 we consider additive and multiplicative (or proportional) perturbation models for single-degree-of-freedom feedback systems. The proportional model is preferred because it represents the relative size of the perturbation. The corresponding block diagram is repeated in Fig. 5.20(a).

Figure 5.20: Proportional and scaled perturbations of a unit feedback loop

It represents a perturbation

L \longrightarrow (I + \Delta_L) L.   (5.108)

Since q = −L(p + q), so that q = −(I + L)^{-1} L p, the interconnection matrix is

H = -(I + L)^{-1} L = -T,   (5.109)


with T = (I + L)^{-1} L = L(I + L)^{-1} the complementary sensitivity matrix of the closed-loop system.

To scale the proportional perturbation we modify the block diagram of Fig. 5.20(a) to that of Fig. 5.20(b). This diagram represents a perturbation

L \longrightarrow (I + V \delta_L W) L,   (5.110)

where V and W are available for scaling such that ‖δ_L‖∞ ≤ 1. For this configuration the interconnection matrix is H = −WTV.

Application of the results of Summary 5.5.1 leads to the following conclusions.

Summary 5.6.1 (Robust stability of feedback systems for proportional perturbations). Assume that the feedback system of Fig. 5.20(a) is nominally stable.

1. A sufficient condition for the closed-loop system to be stable under proportional perturbations of the form

   L \longrightarrow (I + \Delta_L) L   (5.111)

   is that

   \sigma(\Delta_L(j\omega)) < \frac{1}{\sigma(T(j\omega))} \quad \text{for all } \omega \in \mathbb{R},   (5.112)

   with T = (I + L)^{-1} L = L(I + L)^{-1} the complementary sensitivity matrix of the feedback loop. If stability is required for all perturbations satisfying the bound then the condition is also necessary.

2. Under scaled perturbations

   L \longrightarrow (I + V \delta_L W) L   (5.113)

   the system is stable for all ‖δ_L‖∞ ≤ 1 if and only if

   \| W T V \|_\infty < 1.   (5.114)   ∎

For SISO systems part (1) of this result reduces to Doyle's robustness criterion of § 1.4. In fact, Summary 5.6.1 is the original MIMO version of Doyle's robustness criterion (Doyle 1979).

5.6.3 Proportional inverse perturbations

The result of the preceding subsection confirms the importance of the complementary sensitivity matrix (or function) T for robustness. We complement it with a "dual" result involving the sensitivity function S. To this end, consider the perturbation model of Fig. 5.21(a), where the perturbation Δ_{L^{-1}} is included in a feedback loop. The model represents a perturbation

L \longrightarrow (I + \Delta_{L^{-1}})^{-1} L.   (5.115)

Assuming that L has an inverse, this may be rewritten as the perturbation

L^{-1} \longrightarrow L^{-1} (I + \Delta_{L^{-1}}).   (5.116)

Figure 5.21: Proportional inverse perturbation of a unit feedback loop

Hence, the perturbation represents a proportional perturbation of the inverse loop gain. The interconnection matrix H follows from the signal balance q = −p − Lq, so that q = −(I + L)^{-1} p. Hence, the interconnection matrix is

H = -(I + L)^{-1} = -S.   (5.117)

S = (I + L)^{-1} is the sensitivity matrix of the feedback loop. We allow for scaling by modifying the model to that of Fig. 5.21(b). This model represents perturbations of the form

L^{-1} \longrightarrow L^{-1} (I + V \delta_{L^{-1}} W),   (5.118)

where V and W provide freedom to scale such that ‖δ_{L^{-1}}‖∞ ≤ 1. The interconnection matrix now is H = −VSW.

Application of the results of Summary 5.5.1 yields the following conclusions.

Summary 5.6.2 (Robust stability of feedback systems for proportional inverse perturbations). Assume that the feedback system of Fig. 5.21(a) and (b) is nominally stable.

1. Under proportional inverse perturbations of the form

   L^{-1} \longrightarrow L^{-1} (I + \Delta_{L^{-1}})   (5.119)

   a sufficient condition for stability is that

   \sigma(\Delta_{L^{-1}}(j\omega)) < \frac{1}{\sigma(S(j\omega))} \quad \text{for all } \omega \in \mathbb{R},   (5.120)

   with S = (I + L)^{-1} the sensitivity matrix of the feedback loop. If stability is required for all perturbations satisfying the bound then the condition is also necessary.

2. Under scaled inverse perturbations of the form

   L^{-1} \longrightarrow L^{-1} (I + V \delta_{L^{-1}} W)   (5.121)

   the closed-loop system is stable for all ‖δ_{L^{-1}}‖∞ ≤ 1 if and only if

   \| W S V \|_\infty < 1.   (5.122)   ∎


5.6.4 Example

We illustrate the results of Summaries 5.6.1 and 5.6.2 by application to an example.

Example 5.6.3 (Robustness of a SISO closed-loop system). In Example 5.2.1 we considered a SISO single-degree-of-freedom feedback system with plant and compensator transfer functions

P(s) = \frac{g}{s^2 (1 + s\theta)}, \qquad C(s) = \frac{k + T_d s}{1 + T_0 s},   (5.123)

respectively. Nominally the gain g equals g_0 = 1, while the parasitic time constant θ is nominally 0. In Example 5.2.2 we chose the compensator parameters as k = 1, T_d = √2 and T_0 = 1/10. We use the results of Summaries 5.6.1 and 5.6.2 to study what perturbations of the parameters g and θ leave the closed-loop system stable.

1. Loop gain perturbation model. Starting with the expression

   L(s) = P(s) C(s) = \frac{g}{s^2 (1 + s\theta)} \cdot \frac{k + T_d s}{1 + T_0 s}   (5.124)

   it is easy to find that the proportional loop gain perturbation is

   \Delta_L(s) = \frac{L(s) - L_0(s)}{L_0(s)} = \frac{\frac{g - g_0}{g_0} - s\theta}{1 + s\theta}.   (5.125)

   Figures 5.22(a) and (b) show the magnitude plot of the inverse 1/T_0 of the nominal complementary sensitivity function. Inspection shows that 1/T_0 assumes relatively small values in the low-frequency region. This is where the proportional perturbations of the loop gain need to be the smallest. For θ = 0 the proportional perturbation (5.125) of the loop gain reduces to

   \Delta_L(s) = \frac{g - g_0}{g_0},   (5.126)

   and, hence, is constant. The minimal value of the function 1/|T_0| is about 0.75. Therefore, for θ = 0 the robustness criterion (1) of Summary 5.6.1 allows relative perturbations of g up to 0.75, so that 0.25 < g < 1.75.

   For g = g_0, on the other hand, the proportional perturbation (5.125) of the loop gain reduces to

   \Delta_L(s) = \frac{-s\theta}{1 + s\theta}.   (5.127)

   Figure 5.22(a) shows magnitude plots of this perturbation for several values of θ. The robustness criterion (1) of Summary 5.6.1 permits values of θ up to about 1.15. For this value of θ the gain g can be neither increased nor decreased without violating the criterion.

   For smaller values of the parasitic time constant θ a wider range of gains is permitted. The plots of |Δ_L| in Fig. 5.22(b) demonstrate that for θ = 0.2 the gain g may vary between about 0.255 and 1.745.

   The stability bounds on g and θ are conservative. The reason is of course that the perturbation model allows a much wider class of perturbations than just those caused by changes in the parameters g and θ.
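The SISO form of this test amounts to comparing |Δ_L(jω)| with 1/|T_0(jω)| frequency by frequency. The sketch below (ours; numpy assumed) performs this comparison for a few (g, θ) pairs; passes_loop_gain_test is a hypothetical helper and the expected outcomes follow the bounds quoted above.

```python
import numpy as np

# SISO robustness test of Summary 5.6.1(1) for Example 5.6.3:
# check |Delta_L(jw)| < 1/|T0(jw)| on a frequency grid for a given (g, theta).
k, Td, T0 = 1.0, np.sqrt(2.0), 0.1
g0 = 1.0                                                 # nominal gain, theta0 = 0

def passes_loop_gain_test(g, theta):
    w = np.logspace(-3, 3, 5000)
    s = 1j * w
    C = (k + Td * s) / (1.0 + T0 * s)
    L0 = g0 * C / s**2                                   # nominal loop gain
    T_nom = L0 / (1.0 + L0)                              # nominal complementary sensitivity
    dL = ((g - g0) / g0 - s * theta) / (1.0 + s * theta) # proportional perturbation (5.125)
    return np.all(np.abs(dL) < 1.0 / np.abs(T_nom))

print(passes_loop_gain_test(1.6, 0.0))    # inside the bound 0.25 < g < 1.75: True
print(passes_loop_gain_test(1.0, 0.2))    # theta = 0.2 at g = g0: True
print(passes_loop_gain_test(5.0, 0.2))    # well outside the bound: False
```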

Figure 5.22: Robust stability test for the loop gain perturbation model: (a) 1/|T_0| and the relative perturbations for g = 1 (curves for θ = 0.2, 0.6, 1.15). (b) 1/|T_0| and the relative perturbations for θ = 0.2 (curves for g = 0.2, 1.2, 1.745)

2. Inverse loop gain perturbation model. The proportional inverse loop gain perturbation is given by

   \Delta_{L^{-1}}(s) = \frac{L^{-1}(s) - L_0^{-1}(s)}{L_0^{-1}(s)} = \frac{g_0 - g}{g} + s\theta\, \frac{g_0}{g}.   (5.128)

   We apply the results of Summary 5.6.2. Figure 5.23 gives the magnitude plot of the inverse 1/S_0 of the nominal sensitivity function. Inspection shows that for the inverse loop gain perturbation model the high-frequency region is the most critical. By inspection of (5.128) we see that if θ ≠ 0 then |Δ_{L^{-1}}(∞)| = ∞, so that robust stability is not ensured. Apparently, this model cannot handle high-frequency perturbations caused by parasitic dynamics.

   For θ = 0 the proportional inverse loop gain perturbation reduces to

   \Delta_{L^{-1}}(s) = \frac{g_0 - g}{g},   (5.129)

   and, hence, is constant. The magnitude of 1/S_0 has a minimum of about 0.867, so that for θ = 0 stability is ensured for |(g_0 − g)/g| < 0.867, or 0.536 < g < 7.523. This range of variation is larger than that found by application of the proportional loop gain perturbation model.

   Again, the result is conservative. It is disturbing that the model does not handle parasitic perturbations.

Figure 5.23: Magnitude plot of 1/S_0

The derivation of the unstructured robust stability tests of Summary 5.6.1 is based on the small gain theorem, and presumes the perturbations Δ_L to be BIBO stable. This is highly restrictive if the loop gain L by itself represents an unstable system, which may easily occur (in particular, when the plant by itself is unstable).

It is well known (Vidyasagar 1985) that the stability assumption on the perturbation may be relaxed to the assumption that both the nominal loop gain and the perturbed loop gain have the same number of right-half plane poles.

Likewise, the proofs of the inverse perturbation tests of Summary 5.6.2 require the perturbation of the inverse of the loop gain to be stable. This is highly restrictive if the loop gain by itself has right-half plane zeros. This occurs, in particular, when the plant has right-half plane zeros. The requirement, however, may be relaxed to the assumption that the nominal loop gain and the perturbed loop gain have the same number of right-half plane zeros.

5.6.5 Fractional representation

The stability robustness analysis of feedback systems based on perturbations of the loop gain or its inverse is simple, but often overly conservative.

Another model that is encountered in the literature⁹ relies on what we term here fractional perturbations. It combines, in a way, loop gain perturbations and inverse loop gain perturbations. In this analysis, the loop gain L is represented as

L = N D^{-1},   (5.130)

where the denominator D is a square nonsingular rational or polynomial matrix, and N a rational or polynomial matrix. Any rational transfer matrix L may be represented like this in many ways. If D and N are polynomial then the representation is known as a (right)¹⁰ polynomial matrix fraction representation. If D and N are rational and proper with all their poles in the open left-half complex plane then the representation is known as a (right) rational matrix fraction representation.

Example 5.6.4 (Fractional representations). For a SISO system the fractional representation is obvious. Suppose that

L(s) = \frac{g}{s^2 (1 + s\theta)}.   (5.131)

Clearly we have the polynomial fractional representation L = N D^{-1} with N(s) = g and D(s) = s^2(1 + s\theta). The fractional representation may be made rational by letting

D(s) = \frac{s^2 (1 + s\theta)}{d(s)}, \qquad N(s) = \frac{g}{d(s)},   (5.132)

with d any strictly Hurwitz polynomial. ∎

⁹The idea originates from Vidyasagar (Vidyasagar, Schneider, and Francis 1982; Vidyasagar 1985). It is elaborated in McFarlane and Glover (1990).

¹⁰Because the denominator is on the right.


Right fractional representations may also be obtained for MIMO systems (see for instance Vidyasagar (1985)).

Figure 5.24: Fractional perturbation model

5.6.6 Fractional perturbations

Figure 5.24 shows the fractional perturbation model. Inspection shows that the perturbation is given by

L \longrightarrow (I + \Delta_N) L (I + \Delta_D)^{-1},   (5.133)

or

N D^{-1} \longrightarrow (I + \Delta_N) N D^{-1} (I + \Delta_D)^{-1}.   (5.134)

Hence, the numerator and denominator are perturbed as

N \longrightarrow (I + \Delta_N) N, \qquad D \longrightarrow (I + \Delta_D) D.   (5.135)

Thus, Δ_D and Δ_N represent proportional perturbations of the denominator and of the numerator, respectively.

By block diagram substitution it is easily seen that the configuration of Fig. 5.24 is equivalent to that of Fig. 5.25(a). The latter, in turn, may be rearranged as in Fig. 5.25(b). Here

p = -\Delta_D q_1 + \Delta_N q_2 = \Delta_L \begin{bmatrix} q_1 \\ q_2 \end{bmatrix},   (5.136)

with

\Delta_L = \begin{bmatrix} -\Delta_D & \Delta_N \end{bmatrix}.   (5.137)

To establish the interconnection matrix H of the configuration of Fig. 5.25(b) we consider the signal balance equation q_1 = p - L q_1. It follows that

q_1 = (I + L)^{-1} p = S p,   (5.138)

with S = (I + L)^{-1} the sensitivity matrix. Since q_2 = -L q_1 we have

q_2 = -L (I + L)^{-1} p = -T p,   (5.139)

Figure 5.25: Equivalent configurations

with T = L(I + L)^{-1} the complementary sensitivity matrix. Inspection of (5.138)–(5.139) shows that the interconnection matrix is

H = \begin{bmatrix} S \\ -T \end{bmatrix}.   (5.140)

Investigation of the frequency dependence of the greatest singular value of H(jω) yields information about the largest possible perturbations Δ_L that leave the loop stable.

It is useful to allow for scaling by modifying the configuration of Fig. 5.25(b) to that of Fig. 5.26. This modification corresponds to representing the perturbations as

\Delta_D = V \delta_D W_1, \qquad \Delta_N = V \delta_N W_2,   (5.141)

where V, W_1, and W_2 are suitably chosen (stable) rational matrices such that

\| \delta_L \|_\infty \le 1, \quad \text{with } \delta_L = \begin{bmatrix} -\delta_D & \delta_N \end{bmatrix}.   (5.142)

Accordingly, the interconnection matrix changes to

H = \begin{bmatrix} W_1 S V \\ -W_2 T V \end{bmatrix}.   (5.143)

Application of the results of Summary 5.5.1 yields the following conclusions.

Summary 5.6.5 (Numerator-denominator perturbations). In the stable feedback configuration of Fig. 5.27, suppose that the loop gain is perturbed as

L = N D^{-1} \longrightarrow \left[ (I + V \delta_N W_2) N \right] \left[ (I + V \delta_D W_1) D \right]^{-1},   (5.144)

Figure 5.26: Perturbation model with scaling

with V, W_1 and W_2 stable transfer matrices. Then the closed-loop system is stable for all stable perturbations δ_P = [−δ_D  δ_N] such that ‖δ_P‖∞ ≤ 1 if and only if

\| H \|_\infty < 1.   (5.145)

Here we have

H = \begin{bmatrix} W_1 S V \\ -W_2 T V \end{bmatrix},   (5.146)

with S = (I + L)^{-1} the sensitivity matrix and T = L(I + L)^{-1} the complementary sensitivity matrix. ∎

Figure 5.27: Feedback loop

The largest singular value of H(jω), with H given by (5.146), equals the square root of the largest eigenvalue of

H^T(-j\omega) H(j\omega) = V^T(-j\omega) \left[ S^T(-j\omega) W_1^T(-j\omega) W_1(j\omega) S(j\omega) + T^T(-j\omega) W_2^T(-j\omega) W_2(j\omega) T(j\omega) \right] V(j\omega).   (5.147)

For SISO systems this is the scalar function

H^T(-j\omega) H(j\omega) = | V(j\omega) |^2 \left[ | S(j\omega) |^2 | W_1(j\omega) |^2 + | T(j\omega) |^2 | W_2(j\omega) |^2 \right].   (5.148)

5.6.7 Discussion

We consider how to arrange the fractional perturbation model. In the SISO case, without loss of generality we may take the scaling function V equal to 1. Then W_1 represents the scaling factor for the denominator perturbations and W_2 that for the numerator perturbations. We accordingly have

\| H \|_\infty^2 = \sup_{\omega \in \mathbb{R}} \left( | S(j\omega) |^2 | W_1(j\omega) |^2 + | T(j\omega) |^2 | W_2(j\omega) |^2 \right).   (5.149)

For well-designed control systems the sensitivity function S is small at low frequencies while the complementary sensitivity function T is small at high frequencies. Figure 5.28 illustrates this. Hence, W_1 may be large at low frequencies and W_2 large at high frequencies without violating the robustness condition ‖H‖∞ < 1. This means that at low frequencies we may allow large denominator perturbations, and at high frequencies large numerator perturbations.

Figure 5.28: Sensitivity function and complementary sensitivity function for a well-designed feedback system

The most extreme point of view is to structure the perturbation model such that all low-frequency perturbations are denominator perturbations, and all high-frequency perturbations are numerator perturbations. Since we may trivially write

L = \frac{1}{1/L},   (5.150)

modeling low-frequency perturbations as pure denominator perturbations implies modeling low-frequency perturbations as inverse loop gain perturbations. Likewise, modeling high-frequency perturbations as pure numerator perturbations implies modeling high-frequency perturbations as loop gain perturbations. This amounts to taking

| W_1(j\omega) | = \begin{cases} W_{L^{-1}}(\omega) & \text{at low frequencies}, \\ 0 & \text{at high frequencies}, \end{cases}   (5.151)

| W_2(j\omega) | = \begin{cases} 0 & \text{at low frequencies}, \\ W_L(\omega) & \text{at high frequencies}, \end{cases}   (5.152)

with W_{L^{-1}} a bound on the size of the inverse loop perturbations, and W_L a bound on the size of the loop perturbations.

Obviously, the boundary between "low" and "high" frequencies lies in the "crossover" region, that is, near the frequency where the loop gain crosses over the zero dB line. In this frequency region neither S nor T is small.

Another way of dealing with this perturbation model is to modify the stability robustness test to checking whether for each ω ∈ R

| \Delta_{L^{-1}}(j\omega) | < \frac{1}{| S(j\omega) |} \quad \text{or} \quad | \Delta_L(j\omega) | < \frac{1}{| T(j\omega) |}.   (5.153)

This test amounts to verifying whether either the proportional loop gain perturbation test or the proportional inverse loop gain perturbation test succeeds. Obviously, its results are less conservative than those of the individual tests.

Example 5.6.6 (Numerator-denominator perturbations). Again consider the feedback system that we studied before, most recently in Example 5.6.3, with nominal plant P_0(s) = g_0/s^2, perturbed plant

P(s) = \frac{g}{s^2 (1 + s\theta)},   (5.154)

and compensator C(s) = (k + sT_d)/(1 + sT_0). We use the design values of Example 5.2.2. Plots of the magnitudes of the nominal sensitivity and complementary sensitivity functions S and T are given in Fig. 5.28. S is small up to a frequency of almost 1, while T is small starting from a slightly higher frequency. The crossover region is located about the frequency 1.

Figure 5.29 illustrates that the perturbation from the nominal parameter values g_0 = 1 and θ_0 = 0 to g = 5, θ = 0.2, for instance, fails the two separate tests, but passes the test (5.153). Hence, the perturbation leaves the closed-loop system stable. ∎
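The combined test (5.153) for this example can also be verified numerically as follows (a sketch, not part of the original notes; numpy assumed). It checks, per frequency, the inverse loop gain test against 1/|S| and the loop gain test against 1/|T|, and then their disjunction.

```python
import numpy as np

# Combined test (5.153) for Example 5.6.6: accept each frequency if EITHER the
# inverse loop gain test OR the loop gain test passes there.
k, Td, T0 = 1.0, np.sqrt(2.0), 0.1
g0, g, theta = 1.0, 5.0, 0.2

w = np.logspace(-2, 2, 4000)
s = 1j * w
C = (k + Td * s) / (1.0 + T0 * s)
L0 = g0 * C / s**2                       # nominal loop gain (theta0 = 0)
S = 1.0 / (1.0 + L0)                     # nominal sensitivity
T = L0 / (1.0 + L0)                      # nominal complementary sensitivity

dL = ((g - g0) / g0 - s * theta) / (1.0 + s * theta)     # (5.125)
dLinv = (g0 - g) / g + s * theta * g0 / g                # (5.128)

loop_test = np.abs(dL) < 1.0 / np.abs(T)
inverse_test = np.abs(dLinv) < 1.0 / np.abs(S)

print(loop_test.all(), inverse_test.all())     # both separate tests fail: False False
print((loop_test | inverse_test).all())        # combined test (5.153) passes: True
```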

Figure 5.29: (a), (b) Separate stability tests. (c) Combined test

Feedback systems are robustly stable for perturbations in the frequency regions where either the sensitivity is small (at low frequencies) or the complementary sensitivity is small (at high frequencies). In the crossover region neither sensitivity is small. Hence, the feedback system is not robust for perturbations that strongly affect the crossover region.

In the crossover region the uncertainty therefore should be limited. On the one hand this limitation restricts the structured uncertainty (caused by load variations and environmental changes) that the system can handle. On the other hand, unstructured uncertainty (deriving from neglected dynamics and parasitic effects) should be kept within bounds by adequate modeling.

5.6.8 Plant perturbation models

In the previous subsections we modeled the uncertainty as an uncertainty in the loop gain L, which results in interconnection matrices in terms of this loop gain, such as S and T. It is important to realize that we have a choice in how to model the uncertainty and that we need not necessarily do that in terms of the loop gain. In particular, as the uncertainty is usually present in the plant P only, and not in the controller C, it makes sense to model the uncertainty as such. Table 5.1 lists several ways to "pull out" the uncertainty from the plant.

Table 5.1: A list of perturbation models with plant P and controller K

    plant       perturbed plant                               interconnection matrix H                    perturbation matrix Δ
    P           P + V Δ_P W                                   −W K S V                                    Δ_P
    P           (I + V Δ_P W) P                               −W T V                                      Δ_P
    P           P (I + V Δ_P W)                               −W K S P V                                  Δ_P
    P           (I + V Δ_P W)^{-1} P                          −W S V                                      Δ_P
    P           P (I + V Δ_P W)^{-1}                          −W (I + K P)^{-1} V                         Δ_P
    D^{-1} N    (D + Z Δ_D W_1)^{-1} (N + Z Δ_N W_2)          −[W_1 S; W_2 K S] D^{-1} Z                  [Δ_D  Δ_N]
    N D^{-1}    (N + W_1 Δ_N Z)(D + W_2 Δ_D Z)^{-1}           −Z D^{-1} [K S W_1  (I + K P)^{-1} W_2]     [Δ_N; Δ_D]

5.7 Structured singular value robustness analysis

5.7.1 Introduction

We return to the basic perturbation model of Fig. 5.8, which we repeat in Fig. 5.30(a). As demonstrated in § 5.3, the model is very flexible in representing both structured perturbations (i.e., variations of well-defined parameters) and unstructured perturbations.

Figure 5.30: (a) Basic perturbation model. (b) Structured perturbation model

The stability robustness theorem of Summary 5.5.1 guarantees robustness under all perturbations Δ such that σ(Δ(jω)) · σ(H(jω)) < 1 for all ω ∈ R. The perturbations Δ(jω) whose norm σ(Δ(jω)) does not exceed the number 1/σ(H(jω)), however, are allowed to be completely unstructured. As a result, often quite conservative estimates are obtained of the stability region for structured perturbations. Several examples that we considered in the preceding sections confirm this.

Doyle (1982) proposed another measure for stability robustness based on the notion of what he calls the structured singular value. In this approach, the perturbation structure is detailed as in Fig. 5.30(b) (Safonov and Athans 1981). The overall perturbation Δ has the block diagonal form

\Delta = \begin{bmatrix} \Delta_1 & 0 & \cdots & 0 \\ 0 & \Delta_2 & 0 & \cdots \\ \cdots & \cdots & \cdots & \cdots \\ 0 & \cdots & 0 & \Delta_K \end{bmatrix}.   (5.155)

Each diagonal block entry Δ_i has fixed dimensions, and has one of the following forms:

• Δ_i = δI, with δ a real number. If the unit matrix I has dimension 1 this represents a real parameter variation. Otherwise, this is a repeated real scalar perturbation.

• Δ_i = δI, with δ a stable¹¹ scalar transfer function. This represents a scalar or repeated scalar dynamic perturbation.

• Δ_i is a (not necessarily square) stable transfer matrix. This represents a multivariable dynamic perturbation.

A wide variety of perturbations may be modeled this way.

Figure 5.31: Nyquist plot of det(I − HΔ) for a destabilizing perturbation

We study which are the largest perturbations Δ with the given structure that do not destabilize the system of Fig. 5.30(a). Suppose that a given perturbation Δ destabilizes the system. Then by the generalized Nyquist criterion of Summary 1.3.13 the Nyquist plot of det(I − HΔ) encircles the origin at least once, as illustrated in Fig. 5.31. Consider the Nyquist plot of det(I − εHΔ), with ε a real number that varies between 0 and 1. For ε = 1 the modified Nyquist plot coincides with that of Fig. 5.31, while for ε = 0 the plot reduces to the point 1. Since obviously the plot depends continuously on ε, there must exist a value of ε in the interval (0, 1] such that the Nyquist plot of det(I − εHΔ) passes through the origin. Hence, if Δ destabilizes the perturbed system, there exist ε ∈ (0, 1] and ω ∈ R such that det(I − εH(jω)Δ(jω)) = 0. Therefore, Δ does not destabilize the perturbed system if and only if there do not exist ε ∈ (0, 1] and ω ∈ R such that det(I − εH(jω)Δ(jω)) = 0.

Fix ω, and let κ(H(jω)) be the largest real number such that det(I − H(jω)Δ(jω)) ≠ 0 for all Δ(jω) (with the prescribed structure) such that σ(Δ(jω)) < κ(H(jω)). Then, obviously, if for a given perturbation Δ(jω), ω ∈ R,

\sigma(\Delta(j\omega)) < \kappa(H(j\omega)) \quad \text{for all } \omega \in \mathbb{R},   (5.156)

then det(I − εH(jω)Δ(jω)) ≠ 0 for all ε ∈ (0, 1] and ω ∈ R. Therefore, the Nyquist plot of det(I − HΔ) does not encircle the origin, and, hence, Δ does not destabilize the system. Conversely, it is always possible to find a perturbation Δ that violates (5.156) within an arbitrarily small margin but destabilizes the perturbed system. The number κ(H(jω)) is known as the multivariable robustness margin of the system H at the frequency ω (Safonov and Athans 1981).

¹¹A transfer function or matrix is "stable" if all its poles are in the open left-half plane.

We note that for fixed ω

\kappa(H(j\omega)) = \sup \{ \gamma \mid \sigma(\Delta(j\omega)) < \gamma \Rightarrow \det(I - H(j\omega)\Delta(j\omega)) \ne 0 \}
                   = \inf \{ \sigma(\Delta(j\omega)) \mid \det(I - H(j\omega)\Delta(j\omega)) = 0 \}.   (5.157)

If no structure is imposed on the perturbation Δ then κ(H(jω)) = 1/σ(H(jω)). This led Doyle to term the number µ(H(jω)) = 1/κ(H(jω)) the structured singular value of the complex-valued matrix H(jω).

Besides being determined by H(jω), the structured singular value is of course also dependent on the perturbation structure. Given the perturbation structure of the perturbations Δ of the structured perturbation model, define by D the class of constant complex-valued matrices of the form

\Delta = \begin{bmatrix} \Delta_1 & 0 & \cdots & 0 \\ 0 & \Delta_2 & 0 & \cdots \\ \cdots & \cdots & \cdots & \cdots \\ 0 & \cdots & 0 & \Delta_K \end{bmatrix}.   (5.158)

The diagonal block Δ_i has the same dimensions as the corresponding block of the dynamic perturbation, and has the following form:

• Δ_i = δI, with δ a real number, if the dynamic perturbation is a scalar or repeated real scalar perturbation.

• Δ_i = δI, with δ a complex number, if the dynamic perturbation is a scalar dynamic or a repeated scalar dynamic perturbation.

• Δ_i is a complex-valued matrix if the dynamic perturbation is a multivariable dynamic perturbation.

5.7.2 The structured singular value

We are now ready to define the structured singular value of a complex-valued matrix M:

Definition 5.7.1 (Structured singular value). Let M be an n × m complex-valued matrix, and D a set of m × n structured perturbation matrices. Then the structured singular value of M given the set D is the number µ(M) defined by

\frac{1}{\mu(M)} = \inf_{\Delta \in \mathcal{D},\; \det(I - M\Delta) = 0} \sigma(\Delta).   (5.159)

If det(I − MΔ) ≠ 0 for all Δ ∈ D then µ(M) = 0. ∎

The structured singular value µ(M) is the inverse of the norm of the smallest perturbation Δ (within the given class D) that makes I − MΔ singular. Thus, the larger µ(M), the smaller the perturbation Δ that is needed to make I − MΔ singular.

Example 5.7.2 (Structured singular value). By way of example we consider the computation of the structured singular value of a 2 × 2 complex-valued matrix

M = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix},   (5.160)

under the real perturbation structure

\Delta = \begin{bmatrix} \Delta_1 & 0 \\ 0 & \Delta_2 \end{bmatrix},   (5.161)

with Δ_1 ∈ R and Δ_2 ∈ R. It is easily found that

\sigma(\Delta) = \max(|\Delta_1|, |\Delta_2|), \qquad \det(I + M\Delta) = 1 + m_{11}\Delta_1 + m_{22}\Delta_2 + m\,\Delta_1\Delta_2,   (5.162)

with m = det(M) = m_{11}m_{22} − m_{12}m_{21}. Hence, the structured singular value of M is the inverse of

\kappa(M) = \inf_{\{\Delta_1 \in \mathbb{R},\, \Delta_2 \in \mathbb{R}\,:\, 1 + m_{11}\Delta_1 + m_{22}\Delta_2 + m\Delta_1\Delta_2 = 0\}} \max(|\Delta_1|, |\Delta_2|).   (5.163)

Elimination of, say, Δ_2 results in the equivalent expression

\kappa(M) = \inf_{\Delta_1 \in \mathbb{R}} \max\left( |\Delta_1|, \left| \frac{1 + m_{11}\Delta_1}{m_{22} + m\Delta_1} \right| \right).   (5.164)

Suppose that M is numerically given by

M = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.   (5.165)

Then it may be found (the details are left to the reader) that

\kappa(M) = \frac{-5 + \sqrt{33}}{4},   (5.166)

so that

\mu(M) = \frac{4}{-5 + \sqrt{33}} = \frac{5 + \sqrt{33}}{2} = 5.3723.   (5.167)
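The one-dimensional expression (5.164) also makes a brute-force numerical evaluation of κ(M) straightforward. The following sketch (ours, numpy only) grids Δ_1 and compares the result with (5.166)–(5.167).

```python
import numpy as np

# Numerical evaluation of kappa(M) and mu(M) of Example 5.7.2 via (5.164).
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
m11, m22 = M[0, 0], M[1, 1]
m = np.linalg.det(M)

d1 = np.linspace(-1.0, 1.0, 2_000_001)               # grid for the real perturbation Delta_1
d2 = np.abs(1.0 + m11 * d1) / np.abs(m22 + m * d1)   # corresponding |Delta_2|
kappa = np.min(np.maximum(np.abs(d1), d2))

print(kappa, 1.0 / kappa)                             # numerical kappa(M) and mu(M)
print((-5 + np.sqrt(33)) / 4, (5 + np.sqrt(33)) / 2)  # exact values from (5.166)-(5.167)
```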

Exercise 5.7.3 (Structured singular value). Fill in the details of the calculation. ∎

Exercise 5.7.4 (Alternative characterization of the structured singular value). Prove (Doyle 1982) that if all perturbations are complex then

\mu(M) = \max_{\Delta \in \mathcal{D}\,:\, \sigma(\Delta) \le 1} \rho(M\Delta),   (5.168)

with ρ denoting the spectral radius, that is, the largest of the magnitudes of the eigenvalues. Show that if on the other hand some of the perturbations are real then

\mu(M) = \max_{\Delta \in \mathcal{D}\,:\, \sigma(\Delta) \le 1} \rho_R(M\Delta).   (5.169)

Here ρ_R(A) is the largest of the magnitudes of the real eigenvalues of the complex matrix A. If A has no real eigenvalues then ρ_R(A) = 0. ∎


5.7.3 Structured singular value and robustness

We discuss the structured singular value in more detail in § 5.7.4, but first summarize its application to robustness analysis.

Summary 5.7.5 (Structured robust stability). Given the stable unperturbed system H, the perturbed system of Fig. 5.30(b) is stable for all perturbations Δ such that Δ(jω) ∈ D for all ω ∈ R if

\sigma(\Delta(j\omega)) < \frac{1}{\mu(H(j\omega))} \quad \text{for all } \omega \in \mathbb{R},   (5.170)

with µ the structured singular value with respect to the perturbation structure D. If robust stability is required with respect to all perturbations within the perturbation class that satisfy the bound (5.170) then the condition is, besides sufficient, also necessary. ∎

Given a structured perturbation Δ such that Δ(jω) ∈ D for every ω ∈ R, we have

\| \Delta \|_\infty = \sup_{\omega \in \mathbb{R}} \sigma(\Delta(j\omega)).   (5.171)

Suppose that the perturbations are scaled, so that ‖Δ‖∞ ≤ 1. Then Summary 5.7.5 implies that the perturbed system is stable if µ_H < 1, where

\mu_H = \sup_{\omega \in \mathbb{R}} \mu(H(j\omega)).   (5.172)

With some abuse of terminology, we call µ_H the structured singular value of H. Even more can be said:

Summary 5.7.6 (Structured robust stability for scaled perturbations). The perturbed system of Fig. 5.30(b) is stable for all stable perturbations Δ such that Δ(jω) ∈ D for all ω ∈ R and ‖Δ‖∞ ≤ 1 if and only if µ_H < 1. ∎

Clearly, if the perturbations are scaled to a maximum norm of 1 then a necessary and sufficient condition for robust stability is that µ_H < 1.

5.7.4 Properties of the structured singular value

Before discussing the numerical computation of the structured singular value we list some of the principal properties of the structured singular value (Doyle 1982; Packard and Doyle 1993).

Summary 5.7.7 (Principal properties of the structured singular value). The structured singular value µ(M) of a matrix M under a structured perturbation set D has the following properties:

1. Scaling property: If all perturbations are complex, then

   \mu(\alpha M) = |\alpha|\, \mu(M)   (5.173)

   for every α ∈ R. If none of the perturbations is real then this holds as well for every complex α.

2. Upper and lower bounds: Suppose that M is square. Then

   \rho_R(M) \le \mu(M) \le \sigma(M),   (5.174)

Page 85: Multi Variable Control System Design Dmcs456

5.7. Structured singular value robustness analysis 237

   with ρ_R defined as in Exercise 5.7.4.

   If none of the perturbations is real then

   \rho(M) \le \mu(M) \le \sigma(M),   (5.175)

   with ρ denoting the spectral radius. The upper bounds also apply if M is not square.

3. Preservation of the structured singular value under diagonal scaling: Suppose that the ith diagonal block of the perturbation structure has dimensions m_i × n_i. Form two block diagonal matrices D and D̄ whose ith diagonal blocks are given by

   D_i = d_i I_{n_i}, \qquad \bar{D}_i = d_i I_{m_i},   (5.176)

   respectively, with d_i a positive real number. I_k denotes a k × k unit matrix. Then

   \mu(M) = \mu(D M \bar{D}^{-1}).   (5.177)

4. Preservation of the structured singular value under unitary transformation: Suppose that the ith diagonal block of the perturbation structure has dimensions m_i × n_i. Form the block diagonal matrices Q and Q̄ whose ith diagonal blocks are given by the m_i × m_i unitary matrix Q_i and the n_i × n_i unitary matrix Q̄_i, respectively. Then

   \mu(M) = \mu(\bar{Q} M) = \mu(M Q).   (5.178)   ∎

The scaling property is obvious. The upper bound in property 2 follows by considering unrestricted perturbations of the form Δ ∈ C^{m×n} (that is, Δ is a full m × n complex matrix). The lower bound in property 2 is obtained by considering restricted perturbations of the form Δ = δI, with δ a real or complex number. Properties 3 and 4 are easily checked.

Exercise 5.7.8 (Proof of the principal properties of the structured singular value). Fill in the details of the proof of Summary 5.7.7. ∎

5.7.5 A useful formula

The following formula for the structured singular value of a 2 × 2 dyadic matrix has useful applications.

Summary 5.7.9 (Structured singular value of a dyadic matrix). The structured singular value of the 2 × 2 dyadic matrix

M = \begin{bmatrix} a_1 b_1 & a_1 b_2 \\ a_2 b_1 & a_2 b_2 \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \begin{bmatrix} b_1 & b_2 \end{bmatrix},   (5.179)

with a_1, a_2, b_1, and b_2 complex numbers, with respect to the perturbation structure

\Delta = \begin{bmatrix} \Delta_1 & 0 \\ 0 & \Delta_2 \end{bmatrix}, \qquad \Delta_1 \in \mathbb{C}, \; \Delta_2 \in \mathbb{C},   (5.180)

is

\mu(M) = |a_1 b_1| + |a_2 b_2|.   (5.181)   ∎

The proof is given in § 5.9.


5.7.6 Numerical approximation of the structured singular value

Exact calculation of the structured singular value is often not possible and in any case computationally intensive. The numerical methods that are presently available for approximating the structured singular value are based on calculating upper and lower bounds for it.

Summary 5.7.10 (Upper and lower bounds for the structured singular value).

1. D-scaling upper bound. Let the diagonal matrices D and D̄ be chosen as in property 3 of Summary 5.7.7. Then with property 2 we have

   \mu(M) = \mu(D M \bar{D}^{-1}) \le \sigma(D M \bar{D}^{-1}).   (5.182)

   The rightmost side may be numerically minimized with respect to the free positive numbers d_i that determine D and D̄.

   Suppose that the perturbation structure consists of m repeated scalar dynamic perturbation blocks and M full multivariable perturbation blocks. Then if 2m + M ≤ 3 the minimized upper bound actually equals the structured singular value¹².

2. Lower bound. Let Q be a unitary matrix as constructed in property 4 of Summary 5.7.7. With property 2 we have (for complex perturbations only)

   \mu(M) = \mu(M Q) \ge \rho(M Q).   (5.183)

   Actually (Doyle 1982),

   \mu(M) = \max_{Q} \rho(M Q),   (5.184)

   with Q varying over the set of matrices as constructed in property 4 of Summary 5.7.7. ∎

¹²If there are real parametric perturbations then this result remains valid, provided that we use a generalization of D-scaling called (D, G)-scaling, which allows one to exploit real perturbations.

with Q varying over the set of matrices as constructed in (d) of Summary 5.7.7.�
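The following is a minimal numerical sketch of these bounds for the 2 × 2 dyadic matrix of Summary 5.7.9 with two scalar complex perturbation blocks. The data a, b are arbitrary illustrative values, not taken from the text; for this structure the exact value |a1 b1| + |a2 b2| is available for comparison, and the D-scaling upper bound is tight.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative dyadic example: M = a b^T with two scalar complex perturbation blocks.
a = np.array([1.0 + 0.5j, -0.3 + 2.0j])
b = np.array([0.8 - 1.0j, 1.5 + 0.2j])
M = np.outer(a, b)

mu_exact = abs(a[0] * b[0]) + abs(a[1] * b[1])      # Summary 5.7.9

# D-scaling upper bound (5.182): minimize the largest singular value of D M D^-1
# over D = diag(d, 1); for two scalar blocks only the ratio of the scalings matters.
def upper(log_d):
    d = np.exp(log_d)
    D, Dinv = np.diag([d, 1.0]), np.diag([1.0 / d, 1.0])
    return np.linalg.svd(D @ M @ Dinv, compute_uv=False)[0]

mu_upper = minimize_scalar(upper).fun

# Lower bound (5.183)-(5.184): maximize rho(M Q) over diagonal unitary Q (coarse grid).
phis = np.linspace(0.0, 2 * np.pi, 200)
mu_lower = 0.0
for p1 in phis:
    for p2 in phis:
        Q = np.diag([np.exp(1j * p1), np.exp(1j * p2)])
        mu_lower = max(mu_lower, max(abs(np.linalg.eigvals(M @ Q))))

print(mu_lower, mu_exact, mu_upper)    # the three numbers should (nearly) coincide here
```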

Practical algorithms for computing lower and upper bounds on the structured singular value for complex perturbations have been implemented in the MATLAB µ-Analysis and Synthesis Toolbox (Balas, Doyle, Glover, Packard, and Smith 1991). The closeness of the bounds is a measure of the reliability of the calculation. Real perturbations are not handled by this version of the µ-toolbox. Algorithms for real and mixed complex-real perturbations are the subject of current research (see for instance Young, Newlin, and Doyle (1995b) and Young, Newlin, and Doyle (1995a)).

The MATLAB Robust Control Toolbox (Chiang and Safonov 1992) provides routines for calculating upper bounds on the structured singular value for both complex and real perturbations.

5.7.7 Example

We apply the structured singular value method for the analysis of stability robustness to an example.

Example 5.7.11 (SISO system with two parameters). By way of example, consider the SISO feedback system that was studied in Example 5.2.1 and several other places. A plant with transfer function

   P(s) = g / (s²(1 + sθ))        (5.185)


is connected in feedback with a compensator with transfer function

   C(s) = (k + sTd)/(1 + sT0).        (5.186)

We use the design values of the parameters established in Example 5.2.2. In Example 5.3.3 it was found that the interconnection matrix for scaled perturbations in the gain g and time constant θ is given by

   H(s) = 1/((1 + sθ0)(1 + L0(s))) · [ β1 C(s) ; −β2 ] [ −α1/s²   α2 s ],        (5.187)

where the subscript 0 indicates nominal values, and L0 = P0C is the nominal loop gain. The numbers α1, α2, β1, and β2 are scaling factors such that |g − g0| ≤ ε1 with ε1 = α1β1, and |θ − θ0| ≤ ε2 with ε2 = α2β2. The interconnection matrix H has a dyadic structure.

Figure 5.32: Structured singular values µ(H(jω)) for the two-parameter plant for two cases

Application of the result of Summary 5.7.9 shows that its structured singular value with respect to complex perturbations in the parameters is

   µ(H(jω)) = √(1/(1 + ω²θ0²)) · (1/|1 + L0(jω)|) · ( α1β1 |C(jω)|/ω² + α2β2 ω )

            = ( ε1 √(k² + ω²Td²) + ε2 ω³ √(1 + ω²T0²) ) / |χ0(jω)|,        (5.188)

with χ0(s) = θ0T0 s⁴ + (θ0 + T0)s³ + s² + g0Td s + g0k the nominal closed-loop characteristic polynomial.

Inspection shows that the right-hand side of this expression is identical to that of (5.99) in Example 5.5.3, which was obtained by a singular value analysis based on optimal scaling. In view of the statement in Summary 5.7.10(1) this is no coincidence.

Figure 5.32 repeats the plots of Fig. 5.17 of the structured singular value for the two cases of Example 5.5.3. For case (a) the peak value of the structured singular value is greater than 1, so that robust stability is not guaranteed. Only in case (b) is robust stability certain. Since the perturbation analysis is based on dynamic rather than parameter perturbations the results are conservative. □

Exercise 5.7.12 (Computation of structured singular values with MATLAB). Reproduce the plots of Fig. 5.32 using the appropriate numerical routines from the µ-Toolbox or the Robust Control Toolbox for the computation of structured singular values. □
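Since (5.188) gives µ(H(jω)) in closed form, the plot can also be generated directly, which may serve as a cross-check for Exercise 5.7.12. The sketch below uses placeholder design values; the actual values are those of Example 5.2.2, which are not repeated here.

```python
import numpy as np

# Evaluate the closed-form structured singular value (5.188) on a frequency grid.
k, Td, T0 = 1.0, 1.414, 0.1       # compensator parameters (assumed placeholders)
g0, theta0 = 1.0, 0.0             # nominal plant parameters (assumed placeholders)
eps1, eps2 = 0.2, 0.1             # perturbation ranges |g - g0| <= eps1, |theta - theta0| <= eps2 (assumed)

omega = np.logspace(-2, 3, 1000)
chi0 = np.polyval([theta0 * T0, theta0 + T0, 1.0, g0 * Td, g0 * k], 1j * omega)
mu = (eps1 * np.sqrt(k**2 + (omega * Td)**2)
      + eps2 * omega**3 * np.sqrt(1.0 + (omega * T0)**2)) / np.abs(chi0)

print("peak of mu(H(jw)):", mu.max())   # robust stability requires this peak to be less than 1
```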


5.8 Combined performance and stability robustness

5.8.1 Introduction

In the preceding section we introduced the structured singular value to study the stability robustness of the basic perturbation model. We continue in this section by considering performance and its robustness. Figure 5.33 represents a control system with external input w, such as disturbances, measurement noise, and reference signals, and external output z. The output signal z represents an error signal, and ideally should be zero. The transfer matrix H is the transfer matrix from the external input w to the error signal z.

Figure 5.33: Control system.

Example 5.8.1 (Two-degree-of-freedom feedback system). To illustrate this model, consider the two-degree-of-freedom feedback configuration of Fig. 5.34. P is the plant, C the compensator, and F a precompensator. The signal r is the reference signal, v the disturbance, m the measurement noise, u the plant input, and y the control system output. It is easy to find that

   y = (I + PC)⁻¹PCFr + (I + PC)⁻¹v − (I + PC)⁻¹PCm
     = TFr + Sv − Tm,        (5.189)

where S is the sensitivity matrix of the feedback loop and T the complementary sensitivity matrix. The tracking error z = y − r is given by

   z = y − r = (TF − I)r + Sv − Tm.        (5.190)

Considering the combined signal

   w = [ r ; v ; m ]        (5.191)

as the external input, it follows that the transfer matrix of the control system is

   H = [ TF − I   S   −T ].        (5.192)

The performance of the control system of Fig. 5.33 is ideal if H = 0, or, equivalently, ‖H‖∞ = 0. Ideals cannot always be obtained, so we settle for the norm ‖H‖∞ to be “small,” rather than zero. By introducing suitable frequency dependent scaling functions (that is, by modifying H to WHV) we may arrange that “satisfactory” performance is obtained if and only if

   ‖H‖∞ < 1.        (5.193)

Example 5.8.2 (Performance specification). To demonstrate how performance may be specified this way, again consider the two-degree-of-freedom feedback system of Fig. 5.34. For simplicity we assume that it is a SISO system, and that the reference signal and measurement noise are absent.


Figure 5.34: Two-degree-of-freedom feedback control system.

It follows from Example 5.8.1 that the control system transfer function reduces to the sensitivity function S of the feedback loop:

   H = 1/(1 + PC) = S.        (5.194)

Performance is deemed to be adequate if the sensitivity function satisfies the requirement


   |S(jω)| < |Sspec(jω)|,   ω ∈ R,        (5.195)

with Sspec a specified rational function, such as

   Sspec(s) = s² / (s² + 2ζω0 s + ω0²).        (5.196)

The parameter ω0 determines the minimal bandwidth, while the relative damping coefficient ζ specifies the allowable amount of peaking. A doubly logarithmic magnitude plot of Sspec is shown in Fig. 5.35 for ω0 = 1 and ζ = 1/2.

Figure 5.35: Magnitude plot of Sspec.

The inequality (5.195) is equivalent to |S(jω)V(jω)| < 1 for ω ∈ R, with V = 1/Sspec. Redefining H as H = SV this reduces to

   ‖H‖∞ < 1.        (5.197)
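The scaled performance test (5.195)–(5.197) of Example 5.8.2 is easy to evaluate numerically on a frequency grid. The sketch below uses an illustrative plant/compensator pair that is not the design of the text; only the form of the test is taken from the example.

```python
import numpy as np

# Evaluate the performance test |S(jw) V(jw)| < 1 with V = 1/Sspec and Sspec as in (5.196).
w0, zeta = 1.0, 0.5
omega = np.logspace(-2, 2, 500)
s = 1j * omega

Sspec = s**2 / (s**2 + 2 * zeta * w0 * s + w0**2)    # specified sensitivity bound (5.196)

P = 1.0 / s**2                                       # example double-integrator plant (assumed)
C = (1.0 + 1.414 * s) / (1.0 + 0.1 * s)              # example lead compensator (assumed)
S = 1.0 / (1.0 + P * C)                              # achieved sensitivity

H = S / Sspec                                        # H = S V with V = 1/Sspec
print("||H||_inf over the grid:", np.abs(H).max())   # the specification is met only if this is < 1
```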


By suitable scaling we thus may arrange that the performance of the system of Fig. 5.33 is considered satisfactory if

   ‖H‖∞ < 1.        (5.198)

In what follows we exploit the observation that by (1) of Summary 5.5.1 this specification is equivalent to the requirement that the artificially perturbed system of Fig. 5.36 remains stable under all stable perturbations Δ0 such that ‖Δ0‖∞ ≤ 1.

Figure 5.36: Artificially perturbed system

5.8.2 Performance robustness

Performance is said to be robust if ‖H‖∞ remains less than 1 under perturbations. To make this statement more specific we consider the configuration of Fig. 5.37(a). The perturbation Δ may be structured, and is scaled such that ‖Δ‖∞ ≤ 1. We describe the system by the interconnection equations

   [ z ; q ] = H [ w ; p ] = [ H11  H12 ; H21  H22 ] [ w ; p ].        (5.199)

H11 is the nominal control system transfer matrix. We define performance to be robust if

1. the perturbed system remains stable under all perturbations, and

2. the ∞-norm of the transfer matrix of the perturbed system remains less than 1 under all perturbations.

Figure 5.37: (a) Perturbed control system. (b) Doubly perturbed system.


Necessary and sufficient for robust performance is that the norm of the perturbed transfer matrix from w to z in the perturbation model of Fig. 5.37(a) is less than 1 for every perturbation Δ with norm less than or equal to 1. This, in turn, is equivalent to the condition that the augmented perturbation model of Fig. 5.37(b) is stable for every “full” perturbation Δ0 and every structured perturbation Δ, both with norm less than or equal to 1. Necessary and sufficient for this is that

   µ_H < 1,        (5.200)

with µ_H the structured singular value of H with respect to the perturbation structure defined by

   [ Δ0  0 ; 0  Δ ].        (5.201)

Summary 5.8.3 (Robust performance and stability). Robust performance of the system of Fig. 5.37(a) is achieved if and only if

   µ_H < 1,        (5.202)

where µ_H is the structured singular value of H under perturbations of the form

   [ Δ0  0 ; 0  Δ ].        (5.203)

Δ0 is a “full” perturbation, and Δ is structured as specified. □

Figure 5.38: SISO feedback system. (a) Nominal. (b) Perturbed.

5.8.3 SISO design for robust stability and performance

We describe an elementary application of robust performance analysis using the structured singular value (compare Doyle, Francis, and Tannenbaum (1992)). Consider the feedback control system configuration of Fig. 5.38(a). A SISO plant P is connected in feedback with a compensator C.


– Performance is measured by the closed-loop transfer function from the disturbance w to the control system output z, that is, by the sensitivity function S = 1/(1 + PC). Performance is considered satisfactory if

      |S(jω)W1(jω)| < 1,   ω ∈ R,        (5.204)

   with W1 a suitable weighting function.

– Plant uncertainty is modeled by the scaled uncertainty model

      P → P + δP W2,        (5.205)

   with W2 a stable function representing the maximal uncertainty, and δP a scaled stable perturbation such that ‖δP‖∞ ≤ 1.

The block diagram of Fig. 5.38(b) includes the weighting filter W1 and the plant perturbation model. By inspection we obtain the signal balance equation y = w + p − PCy, so that

   y = (1/(1 + PC)) (w + p).        (5.206)

By further inspection of the block diagram it follows that

   z = W1 y = W1 S (w + p),        (5.207)
   q = −W2 PC y = −W2 T (w + p).        (5.208)

Here

   S = 1/(1 + PC),   T = PC/(1 + PC)        (5.209)

are the sensitivity function and the complementary sensitivity function of the feedback system, respectively. Thus, the transfer matrix H in the configuration of Fig. 5.37(b) follows from

   [ q ; z ] = [ −W2T  −W2T ; W1S  W1S ] [ p ; w ],        (5.210)

where the 2 × 2 matrix on the right-hand side is the interconnection matrix H. H has the dyadic structure

   H = [ −W2T ; W1S ] [ 1  1 ].        (5.211)

With the result of Summary 5.7.9 we obtain the structured singular value of the interconnection matrix H as

   µ_H = sup_{ω∈R} µ(H(jω)) = sup_{ω∈R} ( |W1(jω)S(jω)| + |W2(jω)T(jω)| )        (5.212)
       = ‖ |W1S| + |W2T| ‖∞.        (5.213)

By Summary 5.8.3, robust performance and stability are achieved if and only if µ_H < 1.
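The robust performance test (5.212)–(5.213) is a simple frequency-by-frequency computation. The sketch below evaluates it on a grid; the plant, compensator, and weighting functions are illustrative stand-ins, not the design discussed in the text.

```python
import numpy as np

# Evaluate mu_H = sup_w (|W1 S| + |W2 T|) of (5.212)-(5.213) and compare with 1.
omega = np.logspace(-2, 2, 1000)
s = 1j * omega

P = 1.0 / s**2                                     # example plant (assumed)
C = (1.0 + 1.414 * s) / (1.0 + 0.1 * s)            # example compensator (assumed)
L = P * C
S = 1.0 / (1.0 + L)                                # sensitivity
T = L / (1.0 + L)                                  # complementary sensitivity

W1 = (s**2 + 1.414 * s + 1.0) / (1.25 * s**2)      # example performance weight (assumed)
W2 = 0.2 * (1.0 + 0.5 * s)                         # example uncertainty weight (assumed)

mu_H = np.max(np.abs(W1 * S) + np.abs(W2 * T))
print("mu_H ≈", mu_H, "-> robust performance" if mu_H < 1 else "-> not guaranteed")
```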


Figure 5.39: Nominal sensitivity and complementary sensitivity functions

Example 5.8.4 (Robust performance of a SISO feedback system). We consider the SISO feedback system we studied in Example 5.2.1 (p. 198) and on several other occasions, with nominal plant and compensator transfer functions

   P0(s) = g0/s²,   C(s) = (k + sTd)/(1 + sT0),        (5.214)

respectively. We use the design values of Example 5.2.2 (p. 199). In Fig. 5.39 magnitude plots are given of the nominal sensitivity and complementary sensitivity functions of this feedback system. The nominal sensitivity function is completely acceptable, so we impose as design specification that under perturbation

   |S(jω)/S0(jω)| ≤ 1 + ε,   ω ∈ R,        (5.215)

with S0 the nominal sensitivity function and the positive number ε a tolerance. This comes down to choosing the weighting function W1 in (5.204) as

   W1 = 1/((1 + ε)S0).        (5.216)

We consider how to select the weighting function W2, which specifies the allowable perturbations. With W1 chosen as in (5.216) the robust performance criterion µ_H < 1 according to (5.212) reduces to

   1/(1 + ε) + |W2(jω)T0(jω)| < 1,   ω ∈ R,        (5.217)

with T0 the nominal complementary sensitivity function. This is equivalent to

   |W2(jω)| < (ε/(1 + ε)) · 1/|T0(jω)|,   ω ∈ R.        (5.218)

Figure 5.40(a) shows a plot of the right-hand side of (5.218) for ε = 0.25. This right-hand side is a frequency dependent bound for the maximally allowable scaled perturbation δP. The plot shows that for low frequencies the admissible proportional uncertainty is limited to ε/(1 + ε) = 0.2, but that the system is much more robust with respect to high-frequency perturbations. In the crossover frequency region the allowable perturbation is even less than 0.2.


Figure 5.40: The bound and its effect on S. (a) Uncertainty bound and admissible perturbation. (b) Effect of the perturbation on S.

Suppose, as we did before, that the plant is perturbed to

   P(s) = g/(s²(1 + sθ)).        (5.219)

The corresponding proportional plant perturbation is

   Δ_P(s) = (P(s) − P0(s))/P0(s) = ((g − g0)/g0 − sθ)/(1 + sθ).        (5.220)

At low frequencies the perturbation has approximate magnitude |g − g0|/g0. For performance robustness this number definitely needs to be less than the minimum of the bound on the right-hand side of (5.218), which is about (ε/(1 + ε)) · 0.75.

If ε = 0.25 then the latter number is about 0.15. Figure 5.40(a) includes a magnitude plot of Δ_P for |g − g0|/g0 = 0.1 (that is, g = 0.9 or g = 1.1) and θ = 0.1. For this perturbation the performance robustness test is just satisfied. If g is made smaller than 0.9 or larger than 1.1, or θ is increased beyond 0.1, then the perturbation fails the test.

Figure 5.40(b) shows the magnitude plot of the sensitivity function S for g = 1.1 and θ = 0.1. Comparison with the bound |(1 + ε)S0| shows that performance is robust, as expected. □
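The check behind Fig. 5.40(a) amounts to comparing the proportional perturbation (5.220) with the bound (5.218) at every frequency. The sketch below does this on a grid; the nominal design values are assumed placeholders (the true values are those of Example 5.2.2 and are not repeated here).

```python
import numpy as np

# Compare |Delta_P(jw)| of (5.220) with the bound (eps/(1+eps))/|T0(jw)| of (5.218).
omega = np.logspace(-2, 2, 1000)
s = 1j * omega
eps = 0.25

g0 = 1.0
k, Td, tau0 = 1.0, 1.414, 0.1                       # compensator parameters (assumed placeholders)
P0 = g0 / s**2
C = (k + Td * s) / (1.0 + tau0 * s)
T0 = P0 * C / (1.0 + P0 * C)                        # nominal complementary sensitivity

bound = (eps / (1.0 + eps)) / np.abs(T0)            # right-hand side of (5.218)

g, theta = 1.1, 0.1                                 # perturbed parameter values from the example
deltaP = ((g - g0) / g0 - s * theta) / (1.0 + s * theta)   # proportional perturbation (5.220)

print("perturbation within the bound at all grid frequencies:",
      bool(np.all(np.abs(deltaP) < bound)))
```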

Exercise 5.8.5 (Bound on performance robustness). In Example 5.8.4 we used the performance specification (in the form of W1) to find bounds on the maximally allowable scaled perturbations δP. Conversely, we may use the uncertainty specification (in the form of W2) to estimate the largest possible performance variations.

1. Suppose that g varies between 0.5 and 1.5 and θ between 0 and 0.2. Determine a function W2 that tightly bounds the uncertainty Δ_P of (5.220).

2. Use (5.218) to determine the smallest value of ε for which robust performance is guaranteed.


3. Compute the sensitivity function S for a number of values of the parameters g and θ within the uncertainty region. Check how tight the bound on performance is that follows from the value of ε obtained in 2.

5.9 Appendix: Proofs

In this Appendix to Chapter 5 the proofs of certain results are sketched.

5.9.1 Small gain theorem

We prove Summary 5.4.2.

Proof of the small gain theorem. The proof of the small gain theorem follows by applying the fixed point theorem to the feedback equation w = v + Lw that follows from Fig. 5.12(b). Denote the normed signal space that L operates on as U. For any fixed input v ∈ U, if L is a contraction, so is the map defined by w ↦ v + Lw. Hence, if L is a contraction, for every v ∈ U the feedback equation has a unique solution w ∈ U. It follows that the system with input v and output w is BIBO stable.

If L is a contraction with contraction constant k then it follows from ‖Lu‖_U ≤ k‖u‖_U that ‖L‖ ≤ k. Hence, ‖L‖ < 1. Conversely, if ‖L‖ < 1, for any u and v in U we have ‖Lu − Lv‖_U = ‖L(u − v)‖_U ≤ ‖L‖ · ‖u − v‖_U, so that L is a contraction. This proves that L is a contraction if and only if its norm is strictly less than 1.

5.9.2 Structured singular value of a dyadic matrix

We prove Summary 5.7.9. In fact we prove the generalization that for rank 1 matrices

   M = [ a1 ; a2 ; ⋮ ] [ b1  b2  ⋯ ],   a_i, b_j ∈ C,        (5.221)

and perturbation structure

   Δ = [ Δ1  0  ⋯ ; 0  Δ2  ⋯ ; ⋯  ⋯  ⋯ ],   Δ_i ∈ C,        (5.222)

there holds that µ(M) = Σ_i |a_i b_i|.

Proof of the structured singular value of a dyadic matrix. Given M as the product of a column vector and a row vector, M = ab, we have

   det(I − MΔ) = det(I − abΔ) = det(1 − bΔa) = 1 − Σ_i a_i b_i Δ_i.        (5.223)

This gives a lower bound for the real part of det(I − MΔ),

   det(I − MΔ) = 1 − Σ_i a_i b_i Δ_i ≥ 1 − Σ_i |a_i b_i| · max_i |Δ_i| = 1 − ( Σ_i |a_i b_i| ) σ(Δ).        (5.224)

For det(I − MΔ) to be zero we need at least that the lower bound is not positive, that is, that σ(Δ) ≥ 1/Σ_i |a_i b_i|. Now take Δ equal to

   Δ = (1/Σ_i |a_i b_i|) · diag( sgn(a1 b1), sgn(a2 b2), … ).        (5.225)


Then σ(Δ) = 1/Σ_i |a_i b_i| and it is such that

   det(I − MΔ) = 1 − Σ_j a_j b_j Δ_j = 1 − Σ_j a_j b_j sgn(a_j b_j) / Σ_i |a_i b_i| = 0.        (5.226)

Hence a smallest Δ (in norm) that makes I − MΔ singular has norm 1/Σ_i |a_i b_i|. Therefore µ(M) = Σ_i |a_i b_i|.

It may be shown that the D-scaling upper bound for rank 1 matrices equals µ(M) in this case.


6

H∞-Optimization and µ-Synthesis

Overview – Design by H∞-optimization involves the minimization of the peak magnitude of a suitable closed-loop system function. It is very well suited to frequency response shaping. Moreover, robustness against plant uncertainty may be handled more directly than with H2 optimization.

Design by µ-synthesis aims at reducing the peak value of the structured singular value. It accomplishes joint robustness and performance optimization.

6.1 Introduction

In this chapter we introduce what is known as H∞-optimization as a design tool for linear multivariable control systems. H∞-optimization amounts to the minimization of the ∞-norm of a relevant frequency response function. The name derives from the fact that mathematically the problem may be set in the space H∞ (named after the British mathematician G. H. Hardy), which consists of all bounded functions that are analytic in the right-half complex plane. We do not go to this length, however.

H∞-optimization resembles H2-optimization, where the criterion is the 2-norm. Because the 2- and ∞-norms have different properties the results naturally are not quite the same. An important aspect of H∞ optimization is that it allows robustness constraints to be included explicitly in the criterion.

In § 6.2 (p. 250) we discuss the mixed sensitivity problem. This special H∞ problem is an important design tool. We show how this problem may be used to achieve the frequency response shaping targets enumerated in § 1.5 (p. 27).

In § 6.3 (p. 258) we introduce the standard problem of H∞-optimization, which is the most general version. The mixed sensitivity problem is a special case. Several other special cases of the standard problem are exhibited. Closed-loop stability is defined and a number of assumptions and conditions needed for the solution of the H∞-optimization problem are reviewed.

The next few sections are devoted to the solution of the standard H∞-optimization problem.


In § 6.4 (p. 264) we derive a formula for so-called suboptimal solutions and establish a lower bound for the ∞-norm. We also establish how all stabilizing suboptimal solutions, if any exist, may be obtained.

The key tool in solving H∞-optimization problems is J-spectral factorization. In § 6.5 (p. 272) we show that by using state space techniques the J-spectral factorization that is required for the solution of the standard problem may be reduced to the solution of two algebraic Riccati equations.

In § 6.6 (p. 275) we discuss optimal (as opposed to suboptimal) solutions and highlight some of their peculiarities. Section 6.7 (p. 279) explains how integral control and high-frequency roll-off may be handled.

Section 6.8 (p. 288) is devoted to an introductory exposition of µ-synthesis. This approximate technique for joint robustness and performance optimization uses H∞ optimization to reduce the peak value of the structured singular value. Section 6.9 (p. 292) is given over to a rather elaborate description of an application of µ-synthesis.

Section 6.10 (p. 304), finally, contains a number of proofs.

6.2 The mixed sensitivity problem

6.2.1 Introduction

Figure 6.1: Feedback loop

In § 5.6.6 (p. 227) we studied the stability robustness of the feedback configuration of Fig. 6.1. We considered fractional perturbations of the type

   L = ND⁻¹ → (I + VδNW2) ND⁻¹ (I + VδDW1)⁻¹.        (6.1)

The frequency dependent matrices V, W1, and W2 are so chosen that the scaled perturbation δP = [−δD  δN] satisfies ‖δP‖∞ ≤ 1. Stability robustness is guaranteed if

   ‖H‖∞ < 1,        (6.2)

where

   H = [ W1SV ; −W2TV ].        (6.3)

S = (I + L)⁻¹ and T = L(I + L)⁻¹ are the sensitivity matrix and the complementary sensitivity matrix of the closed-loop system, respectively, with L the loop gain L = PC.

Given a feedback system with compensator C and corresponding loop gain L = PC that does not satisfy the inequality (6.2), one may look for a different compensator that does achieve the inequality. An effective way of doing this is to consider the problem of minimizing ‖H‖∞ with respect to all compensators C that stabilize the system. If the minimal value γ of ‖H‖∞ is greater than 1 then no compensator exists that stabilizes the system for all perturbations such that ‖δP‖∞ ≤ 1. In this case, stability robustness is only obtained for perturbations satisfying ‖δP‖∞ ≤ 1/γ.


The problem of minimizing

   ‖ [ W1SV ; −W2TV ] ‖∞        (6.4)

(Kwakernaak 1983; Kwakernaak 1985) is a version of what is known as the mixed sensitivity problem (Verma and Jonckheere 1984). The name derives from the fact that the optimization involves both the sensitivity and the complementary sensitivity function.

In what follows we explain that the mixed sensitivity problem can not only be used to verify stability robustness for a class of perturbations, but also to achieve a number of other important design targets for the one-degree-of-freedom feedback configuration of Fig. 6.1.

Before starting on this, however, we introduce a useful modification of the problem. We may write W2TV = W2L(I + L)⁻¹V = W2PC(I + L)⁻¹V = W2PUV, where

   U = C(I + PC)⁻¹        (6.5)

is the input sensitivity matrix introduced in § 1.5 (p. 27). For a fixed plant we may absorb the plant transfer matrix P (and the minus sign) into the weighting matrix W2, and consider the modified problem of minimizing

   ‖ [ W1SV ; W2UV ] ‖∞        (6.6)

with respect to all stabilizing compensators. We redefine the problem of minimizing (6.6) as the mixed sensitivity problem.

We mainly consider the SISO mixed sensitivity problem. The criterion (6.6) then reduces to the square root of the scalar quantity

   sup_{ω∈R} ( |W1(jω)S(jω)V(jω)|² + |W2(jω)U(jω)V(jω)|² ).        (6.7)

Many of the conclusions also hold for the MIMO case, although their application may be more involved.
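For a given candidate compensator the SISO criterion (6.7) is straightforward to evaluate numerically. The sketch below does this on a frequency grid with illustrative plant, compensator, and weighting choices; these are assumptions for demonstration only, not the design values used later in this chapter.

```python
import numpy as np

# Evaluate the SISO mixed sensitivity criterion (6.7) for a candidate design.
omega = np.logspace(-2, 3, 2000)
s = 1j * omega

P = 1.0 / s**2                                  # plant (assumed)
C = (1.0 + 1.414 * s) / (1.0 + 0.1 * s)         # candidate compensator (assumed)
V = (s**2 + np.sqrt(2) * s + 1.0) / s**2        # shaping filter of the form V = M/D (assumed)
W1 = 1.0                                        # weight on S (assumed)
W2 = 0.1                                        # weight on U (assumed)

S = 1.0 / (1.0 + P * C)                         # sensitivity
U = C / (1.0 + P * C)                           # input sensitivity

crit = np.abs(W1 * S * V)**2 + np.abs(W2 * U * V)**2
print("sup of (6.7) over the grid:", crit.max(), " -> criterion value:", np.sqrt(crit.max()))
```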

6.2.2 Frequency response shaping

The mixed sensitivity problem may be used for simultaneously shaping the sensitivity and input sensitivity functions. The reason is that the solution of the mixed sensitivity problem often has the equalizing property (see § 6.6.1, p. 275). This property implies that the frequency dependent function

   |W1(jω)S(jω)V(jω)|² + |W2(jω)U(jω)V(jω)|²,        (6.8)

whose peak value is minimized, actually is a constant (Kwakernaak 1985). If we denote the constant as γ², with γ nonnegative, then it immediately follows from

   |W1(jω)S(jω)V(jω)|² + |W2(jω)U(jω)V(jω)|² = γ²        (6.9)

that for the optimal solution

   |W1(jω)S(jω)V(jω)|² ≤ γ²,   ω ∈ R,
   |W2(jω)U(jω)V(jω)|² ≤ γ²,   ω ∈ R.        (6.10)


Hence,

   |S(jω)| ≤ γ / |W1(jω)V(jω)|,   ω ∈ R,        (6.11)
   |U(jω)| ≤ γ / |W2(jω)V(jω)|,   ω ∈ R.        (6.12)

By choosing the functions W1, W2, and V correctly the functions S and U may be made small in appropriate frequency regions. This is also true if the optimal solution does not have the equalizing property.

If the weighting functions are suitably chosen (in particular, with W1V large at low frequencies and W2V large at high frequencies), then often the solution of the mixed sensitivity problem has the property that the first term of the criterion dominates at low frequencies and the second at high frequencies:

   |W1(jω)S(jω)V(jω)|² (dominant at low frequencies) + |W2(jω)U(jω)V(jω)|² (dominant at high frequencies) = γ².        (6.13)

As a result,

   |S(jω)| ≈ γ / |W1(jω)V(jω)|   for ω small,        (6.14)
   |U(jω)| ≈ γ / |W2(jω)V(jω)|   for ω large.        (6.15)

This result allows quite effective control over the shape of the sensitivity and input sensitivity functions, and, hence, over the performance of the feedback system.

Because the ∞-norm involves the supremum, frequency response shaping based on minimization of the ∞-norm is more direct than for the H2 optimization methods of § 3.5 (p. 127).

Note, however, that the limits of performance discussed in § 1.6 (p. 40) can never be violated. Hence, the weighting functions must be chosen with the respect due to these limits.

6.2.3 Type k control and high-frequency roll-off

In (6.14)–(6.15), equality may often be achieved asymptotically.

Type k control. Suppose that |W1(jω)V(jω)| behaves as 1/ω^k as ω → 0, with k a nonnegative integer. This is the case if W1(s)V(s) includes a factor s^k in the denominator. Then |S(jω)| behaves as ω^k as ω → 0, which implies a type k control system, with excellent low-frequency disturbance attenuation if k ≥ 1. If k = 1 then the system has integrating action.

High-frequency roll-off. Likewise, suppose that |W2(jω)V(jω)| behaves as ω^m as ω → ∞. This is the case if W2V is nonproper, that is, if the degree of the numerator of W2V exceeds that of the denominator (by m). Then |U(jω)| behaves as ω^−m as ω → ∞. From U = C/(1 + PC) it follows that C = U/(1 − UP). Hence, if P is strictly proper and m ≥ 0 then also C behaves as ω^−m, and T = PC/(1 + PC) behaves as ω^−(m+e), with e the pole excess¹ of P.

Hence, by choosing m we pre-assign the high-frequency roll-off of the compensator transfer function, and the roll-offs of the complementary and input sensitivity functions. This is important for robustness against high-frequency unstructured plant perturbations.

¹The pole excess of a rational transfer function P is the difference between the number of poles and the number of zeros. This number is also known as the relative degree of P.


Similar techniques to obtain type k control and high-frequency roll-off are used in § 3.5.4 (p. 130) and § 3.5.5 (p. 132), respectively, for H2 optimization.

6.2.4 Partial pole placement

There is a further important property of the solution of the mixed sensitivity problem that needs to be discussed before considering an example. This involves a pole cancellation phenomenon that is sometimes misunderstood. The equalizing property of § 6.2.2 (p. 251) implies that

   W1(s)W1(−s)S(s)S(−s)V(s)V(−s) + W2(s)W2(−s)U(s)U(−s)V(s)V(−s) = γ²        (6.16)

for all s in the complex plane. We write the transfer function P and the weighting functions W1, W2, and V in rational form as

   P = N/D,   W1 = A1/B1,   W2 = A2/B2,   V = M/E,        (6.17)

with all numerators and denominators polynomials. If also the compensator transfer function is represented in rational form as

   C = Y/X        (6.18)

then it easily follows that

   S = DX/(DX + NY),   U = DY/(DX + NY).        (6.19)

The denominator

   Dcl = DX + NY        (6.20)

is the closed-loop characteristic polynomial of the feedback system. Substituting S and U as given by (6.19) into (6.16) we easily obtain

   [ D∼D · M∼M · (A1∼A1 B2∼B2 X∼X + A2∼A2 B1∼B1 Y∼Y) ] / [ E∼E · B1∼B1 · B2∼B2 · Dcl∼Dcl ] = γ².        (6.21)

If A is any rational or polynomial function then A∼ is defined by A∼(s) = A(−s).

Since the right-hand side of (6.21) is a constant, all polynomial factors in the numerator of the rational function on the left cancel against corresponding factors in the denominator. In particular, the factor D∼D cancels. If there are no cancellations between D∼D and E∼E B1∼B1 B2∼B2 then the closed-loop characteristic polynomial Dcl (which by stability has left-half plane roots only) necessarily has among its roots those roots of D that lie in the left-half plane, and the mirror images with respect to the imaginary axis of those roots of D that lie in the right-half plane. This means that the open-loop poles (the roots of D), possibly after having been mirrored into the left-half plane, reappear as closed-loop poles.

This phenomenon, which is not propitious for good feedback system design, may be prevented by choosing the denominator polynomial E of V equal to the plant denominator polynomial D, so that V = M/D. With this special choice of the denominator of V, the polynomial E cancels against D in the left-hand side of (6.21), so that the open-loop poles do not reappear as closed-loop poles.


Further inspection of (6.21) shows that if there are no cancellations between M∼M and E∼E B1∼B1 B2∼B2, and we assume without loss of generality that M has left-half plane roots only, then the polynomial M cancels against a corresponding factor in Dcl. If we take V proper (which ensures V(jω) to be finite at high frequencies) then the polynomial M has the same degree as D, and, hence, has the same number of roots as D.

All this means that by letting

   V = M/D,        (6.22)

where the polynomial M has the same degree as the denominator polynomial D of the plant, the open-loop poles (the roots of D) are reassigned to the locations of the roots of M. By suitably choosing the remaining weighting functions W1 and W2 these roots may often be arranged to be the dominant poles.

This technique, known as partial pole placement (Kwakernaak 1986; Postlethwaite, Tsai, and Gu 1990), allows further control over the design. It is very useful in designing for a specified bandwidth and good time response properties.

In the design examples in this chapter it is illustrated how the ideas of partial pole placement and frequency shaping are combined.

A discussion of the root loci properties of the mixed sensitivity problem may be found in Choi and Johnson (1996).

6.2.5 Example: Double integrator

We apply the mixed sensitivity problem to the same example as in § 3.6.2 (p. 133), where H2 optimization is illustrated.² Consider a SISO plant with nominal transfer function

   P0(s) = 1/s².        (6.23)

The actual, perturbed plant has the transfer function

   P(s) = g/(s²(1 + sθ)),        (6.24)

where g is nominally 1 and the nonnegative parasitic time constant θ is nominally 0.

Perturbation analysis. We start with a preliminary robustness analysis. The variations in the parasitic time constant θ mainly cause high-frequency perturbations, while the low-frequency perturbations are primarily the effect of the variations in the gain g. Accordingly, we model the effect of the parasitic time constant as a numerator perturbation, and the gain variations as denominator perturbations, and write

   P(s) = N(s)/D(s) = (1/(1 + sθ)) / (s²/g).        (6.25)

Correspondingly, the relative perturbations of the denominator and the numerator are

   (D(s) − D0(s))/D0(s) = 1/g − 1,   (N(s) − N0(s))/N0(s) = −sθ/(1 + sθ).        (6.26)

²Much of the text of this subsection has been taken from Kwakernaak (1993).


The relative perturbation of the denominator is constant over all frequencies, also in the crossover region. Because the plant is minimum-phase, trouble-free crossover may be achieved (that is, without undue peaking of the sensitivity and complementary sensitivity functions). Hence, we expect that, in the absence of other perturbations, values of |1/g − 1| up to almost 1 may be tolerated.

The size of the relative perturbation of the numerator is less than 1 for frequencies below 1/θ, and equal to 1 for high frequencies. To prevent destabilization it is advisable to make the complementary sensitivity small for frequencies greater than 1/θ. As the complementary sensitivity starts to decrease at the closed-loop bandwidth, the largest possible value of θ dictates the bandwidth. Assuming that performance requirements specify the system to have a closed-loop bandwidth of 1, we expect that, in the absence of other perturbations, values of the parasitic time constant θ up to 1 do not destabilize the system.

Thus, both for robustness and for performance, we aim at a closed-loop bandwidth of 1 with small sensitivity at low frequencies and a sufficiently fast decrease of the complementary sensitivity at high frequencies, with a smooth transition in the crossover region.

Choice of the weighting functions. To accomplish this with a mixed sensitivity design, we successively consider the choice of the functions V = M/D (that is, of the polynomial M), W1, and W2.

To obtain a good time response corresponding to the bandwidth 1, which does not suffer from sluggishness or excessive overshoot, we assign two dominant poles to the locations ½√2(−1 ± j). This is achieved by choosing the polynomial M as

   M(s) = [s − ½√2(−1 + j)][s − ½√2(−1 − j)] = s² + s√2 + 1,        (6.27)

so that

   V(s) = (s² + s√2 + 1)/s².        (6.28)

We tentatively choose the weighting function W1 equal to 1. Then if the first of the two terms of the mixed sensitivity criterion dominates at low frequencies we have from (6.14) that

   |S(jω)| ≈ γ/|V(jω)| = γ | (jω)² / ((jω)² + jω√2 + 1) |   at low frequencies.        (6.29)

Figure 6.2 shows the magnitude plot of the factor 1/V. The plot implies a very good low-frequency behavior of the sensitivity function. Owing to the presence of the double open-loop pole at the origin the feedback system is of type 2. There is no need to correct this low-frequency behavior by choosing W1 different from 1.

Next contemplate the high-frequency behavior. For high frequencies V is constant and equal to 1. Consider choosing W2 as

   W2(s) = c(1 + rs),        (6.30)

with c and r nonnegative constants such that c ≠ 0. Then for high frequencies the magnitude of W2(jω) asymptotically behaves as c if r = 0, and as crω if r ≠ 0.

Hence, if r = 0 then the high-frequency roll-off of the input sensitivity function U and the compensator transfer function C is 0 and that of the complementary sensitivity T is 2 decades/decade (40 dB/decade).

If r ≠ 0 then U and C roll off at 1 decade/decade (20 dB/decade) and T rolls off at 3 decades/decade (60 dB/decade).


Figure 6.2: Bode magnitude plot of 1/V

Figure 6.3: Bode magnitude plots of S and T for r = 0


Solution of the mixed sensitivity problem. We first study the case r = 0, which results in a proper but not strictly proper compensator transfer function C, and a high-frequency roll-off of T of 2 decades/decade. Figure 6.3 shows³ the optimal sensitivity function S and the corresponding complementary sensitivity function T for c = 1/100, c = 1/10, c = 1, and c = 10. Inspection shows that as c increases, |T| decreases and |S| increases, which conforms to expectation. The smaller c is, the closer the shape of |S| is to that of the plot of Fig. 6.2.

We choose c = 1/10. This makes the sensitivity small with little peaking at the cut-off frequency. The corresponding optimal compensator has the transfer function

   C(s) = (1.2586s + 0.61967)/(1 + 0.15563s),        (6.31)

and results in the closed-loop poles ½√2(−1 ± j) and −5.0114. The two former poles dominate the latter pole, as planned. The minimal ∞-norm is ‖H‖∞ = 1.2861.

Robustness against high-frequency perturbations may be improved by making the complementary sensitivity function T decrease faster at high frequencies. This is accomplished by taking the constant r nonzero. Inspection of W2 as given by (6.30) shows that by choosing r = 1 the resulting extra roll-off of U, C, and T sets in at the frequency 1. For r = 1/10 the break point is shifted to the frequency 10. Figure 6.4 shows the resulting magnitude plots.

mentary sensitivity functionT decrease faster at high frequencies. This is accomplished by takingthe constantr nonzero. Inspection ofW2 as given by (6.30) shows that by choosingr = 1 theresulting extra roll-off ofU, C, andT sets in at the frequency 1. Forr = 1/10 the break pointis shifted to the frequency 10. Figure 6.4 shows the resulting magnitude plots. Forr = 1/10

r = 1

r = 1/10

r = 0

r = 1

r = 1/10

r = 0

.01 .1 1 10 100 .01 .1 1 10 100

10

1

.1

.01

.001

10

1

.1

.01

.001

sensitivity| |Scomplementarysensitivity | |T

ω ω

Figure 6.4: Bode magnitude plots ofSandT for c = 1/10

For r = 1/10 the sensitivity function has little extra peaking, while starting at the frequency 10 the complementary sensitivity function rolls off at a rate of 3 decades/decade. The corresponding optimal compensator transfer function is

   C(s) = (1.2107s + 0.5987)/(1 + 0.20355s + 0.01267s²),        (6.32)

which results in the closed-loop poles ½√2(−1 ± j) and −7.3281 ± j1.8765. Again the former two poles dominate the latter. The minimal ∞-norm is ‖H‖∞ = 1.3833.

Inspection of the two compensators (6.31) and (6.32) shows that both basically are lead compensators with high-frequency roll-off.

³The actual computation of the compensator is discussed in Example 6.6.1 (p. 276).


Robustness analysis. We conclude this example with a brief analysis to check whether our expectations about robustness have come true. Given the compensator C = Y/X the closed-loop characteristic polynomial of the perturbed plant is Dcl(s) = D(s)X(s) + N(s)Y(s) = (1 + sθ)s²X(s) + gY(s). By straightforward computation, which involves fixing one of the two parameters g and θ and varying the other, the stability region of Fig. 6.5 may be established for the compensator (6.31). That for the other compensator is similar.

Figure 6.5: Stability region

The diagram shows that for θ = 0 the closed-loop system is stable for all g > 0, that is, for all −1 < 1/g − 1 < ∞. This stability interval is larger than predicted. For g = 1 the system is stable for 0 ≤ θ < 1.179, which also is a somewhat larger interval than expected.

The compensator (6.31) is similar to the modified PD compensator that is obtained in Example 5.2.2 (p. 199) by root locus design. Likewise, the stability region of Fig. 6.5 is similar to that of Fig. 5.5.
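The "straightforward computation" mentioned above is easy to automate: form the perturbed closed-loop characteristic polynomial and test its roots on a grid of parameter values. The sketch below does this for the compensator (6.31); since the printed coefficients are rounded, the boundary obtained this way is only an approximation of Fig. 6.5 and of the numbers quoted above.

```python
import numpy as np

# Brute-force sketch of the stability-region computation for Dcl(s) = (1 + s*theta) s^2 X(s) + g Y(s).
X = [0.15563, 1.0]        # X(s) = 0.15563 s + 1   (coefficients in descending powers, from (6.31))
Y = [1.2586, 0.61967]     # Y(s) = 1.2586 s + 0.61967

def stable(g, theta):
    D = np.polymul([theta, 1.0], [1.0, 0.0, 0.0])            # (1 + s*theta) * s^2
    Dcl = np.polyadd(np.polymul(D, X), np.polymul([g], Y))   # closed-loop characteristic polynomial
    return bool(np.all(np.roots(Dcl).real < 0))              # Hurwitz test via the roots

# Scan theta for g = 1 to estimate the largest stabilizing parasitic time constant.
thetas = np.linspace(0.0, 3.0, 301)
flags = [stable(1.0, th) for th in thetas]
theta_max = thetas[np.argmin(flags)] if not all(flags) else thetas[-1]
print("g = 1: stable on the grid up to theta ≈", theta_max)
```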

6.3 The standard H∞ optimal regulation problem

6.3.1 Introduction

The mixed sensitivity problem is a special case of the so-called standard H∞ optimal regulation problem (Doyle 1984). This standard problem is defined by the configuration of Fig. 6.6.

Figure 6.6: The standard H∞ optimal regulation problem.

The “plant” is a given system with two sets of inputs and two sets of outputs. It is often referred to as the generalized plant. The signal w is an external input, and represents driving signals that generate disturbances, measurement noise, and reference inputs. The signal u is the control input.


The output z has the meaning of control error, and ideally should be zero. The output y, finally, is the observed output, and is available for feedback. The plant has an open-loop transfer matrix G such that

   [ z ; y ] = G [ w ; u ] = [ G11  G12 ; G21  G22 ] [ w ; u ].        (6.33)

By connecting the feedback compensator

   u = Ky        (6.34)

we obtain from y = G21w + G22u the closed-loop signal balance equation y = G21w + G22Ky, so that y = (I − G22K)⁻¹G21w. From z = G11w + G12u = G11w + G12Ky we then have

   z = [G11 + G12K(I − G22K)⁻¹G21] w = Hw.        (6.35)

Hence, the closed-loop transfer matrix H is

   H = G11 + G12K(I − G22K)⁻¹G21.        (6.36)

The standard H∞-optimal regulation problem is the problem of determining a compensator with transfer matrix K that

1. stabilizes the closed-loop system, and

2. minimizes the ∞-norm ‖H‖∞ of the closed-loop transfer matrix H from the external input w to the control error z.
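For constant (single-frequency) matrices the closed-loop map (6.36) is a one-line computation, sketched below. The partitioning sizes and the numerical values are arbitrary assumptions chosen only to show how the formula is evaluated.

```python
import numpy as np

# Evaluate H = G11 + G12 K (I - G22 K)^-1 G21 of (6.36) for constant example matrices.
rng = np.random.default_rng(0)
G11 = rng.standard_normal((2, 2))   # assumed: z has 2 components, w has 2
G12 = rng.standard_normal((2, 1))   # assumed: u has 1 component
G21 = rng.standard_normal((1, 2))   # assumed: y has 1 component
G22 = rng.standard_normal((1, 1))
K = np.array([[0.5]])               # example compensator gain (assumed)

I = np.eye(1)
H = G11 + G12 @ K @ np.linalg.solve(I - G22 @ K, G21)   # lower fractional transformation
print(H)
```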

Figure 6.7: The mixed sensitivity problem.

To explain why the mixed sensitivity problem is a special case of the standard problem we consider the block diagram of Fig. 6.7, where the compensator transfer function now is denoted K rather than C. In this diagram, the external signal w generates the disturbance v after passing through a shaping filter with transfer matrix V. The “control error” z has two components, z1 and z2. The first component z1 is the control system output after passing through a weighting filter with transfer matrix W1. The second component z2 is the plant input u after passing through a weighting filter with transfer matrix W2.

It is easy to verify that for the closed-loop system

   z = [ z1 ; z2 ] = [ W1SV ; −W2UV ] w = Hw,        (6.37)


so that minimization of the ∞-norm of the closed-loop transfer matrix H amounts to minimization of

   ‖ [ W1SV ; W2UV ] ‖∞.        (6.38)

In the block diagram of Fig. 6.7 we denote the input to the compensator as y, as in the standard problem of Fig. 6.6. We read off from the block diagram that

   z1 = W1Vw + W1Pu,
   z2 = W2u,
   y = −Vw − Pu.        (6.39)

Hence, the open-loop transfer matrix G for the standard problem is

   G = [ W1V  W1P ; 0  W2 ; −V  −P ].        (6.40)

Other well-known special cases of the standard H∞ problem are the model matching problem and the minimum sensitivity problem (see Exercises 6.3.2 and 6.3.3, respectively).

The H∞-optimal regulation problem is treated in detail by Green and Limebeer (1995) and Zhou, Doyle, and Glover (1996).

Exercise 6.3.1 (Closed-loop transfer matrix). Verify (6.37). □

Exercise 6.3.2 (The model matching problem). The model matching problem consists of finding a stable transfer matrix K that minimizes

   ‖P − K‖∞,        (6.41)

with P a given (unstable) transfer matrix. Find a block diagram that represents this problem and show that it is a standard problem with generalized plant

   G = [ P  −I ; I  0 ].        (6.42)

The problem is also known as the (generalized) Nehari problem. It is more of theoretical than of practical interest. □

Exercise 6.3.3 (Minimum sensitivity problem). A special case of the mixed sensitivity problem is the minimization of the infinity norm ‖WSV‖∞ of the weighted sensitivity matrix S for the closed-loop system of Fig. 6.1. Show that this minimum sensitivity problem is a standard problem with open-loop plant

   G = [ WV  WP ; −V  −P ].        (6.43)

The SISO version of this problem historically is the first H∞ optimization problem that was studied. It was solved in a famous paper by Zames (1981). □


Figure 6.8: Two-degree-of-freedom feedback configuration

Exercise 6.3.4 (Two-degree-of-freedom feedback system). As a further application consider the two-degree-of-freedom configuration of Fig. 6.8. In Fig. 6.9 the diagram is expanded with a shaping filter V1 that generates the disturbance v from the driving signal w1, a shaping filter V2 that generates the measurement noise m = V2w2 from the driving signal w2, a shaping filter V0 that generates the reference signal r from the driving signal w3, a weighting filter W1 that produces the weighted tracking error z1 = W1(z0 − r), and a weighting filter W2 that produces the weighted plant input z2 = W2u. Moreover, the compensator C and the prefilter F are combined into a single block K.

Figure 6.9: Block diagram for a standard problem

Define the “control error” z with components z1 and z2, the observed output y with components y1 and y2 as indicated in the block diagram, the external input w with components w1, w2, and w3, and the control input u. Determine the open-loop generalized plant transfer matrix G and the closed-loop transfer matrix H. □

6.3.2 Closed-loop stability

We need to define what we mean by the requirement that the compensator K “stabilizes the closed-loop system” of Fig. 6.6. First, we take stability in the sense of internal stability, as defined in § 1.3.2 (p. 12).


Second, inspection of the block diagram of Fig. 6.10 shows that we certainly may require the compensator to stabilize the loop around the block G22, but that we cannot expect that it is always possible to stabilize the entire closed-loop system. “The loop around G22 is stable” means that the configuration of Fig. 6.11 is internally stable.

Figure 6.10: Feedback arrangement for the standard problem

Figure 6.11: The loop around G22

Exercise 6.3.5 (Closed-loop stability). Prove that the configuration of Fig. 6.11 is internally stable if and only if the following four transfer matrices have all their poles in the open left-half complex plane:

   (I − KG22)⁻¹,   (I − KG22)⁻¹K,   (I − G22K)⁻¹,   (I − G22K)⁻¹G22.        (6.44)

Prove furthermore that if the loop is SISO with scalar rational transfer functions K = Y/X, with X and Y coprime polynomials,⁴ and G22 = N0/D0, with D0 and N0 coprime, then the loop is internally stable if and only if the closed-loop characteristic polynomial Dcl = D0X − N0Y has all its roots in the open left-half complex plane. □

Since we cannot expect that it is always possible or, indeed, necessary to stabilize the entire closed-loop system, we interpret the requirement that the compensator stabilizes the closed-loop system as the more limited requirement that it stabilizes the loop around G22.

⁴That is, X and Y have no common polynomial factors.


Example 6.3.6 (Stability for the model matching problem). To illustrate the point that it is not always possible or necessary to stabilize the entire system, consider the model matching problem of Exercise 6.3.2 (p. 260). Since for this problem G22 = 0, internal stability of the loop around G22 is achieved iff K has all its poles in the open left-half plane, that is, iff K is stable. This is consistent with the problem formulation. Note that if P is unstable then the closed-loop transfer function H = P − K can never be stable if K is stable. Hence, for the model matching problem stability of the entire closed-loop system can never be achieved (except in the trivial case that P by itself is stable). □

Exercise 6.3.7 (Mixed sensitivity problem). Show that for the mixed sensitivity problem “stability of the loop around G22” is equivalent to stability of the feedback loop. Discuss what additional requirements are needed to ensure that the entire closed-loop system be stable. □

6.3.3 Assumptions on the generalized plant and well-posedness

Normally, various conditions are imposed on the generalized plant G to ensure the solvability of the problem. This is what we assume at this point:

(a) G is rational but not necessarily proper, where G11 has dimensions m1 × k1, G12 has dimensions m1 × k2, G21 has dimensions m2 × k1, and G22 has dimensions m2 × k2.

(b) G12 has full normal column rank⁵ and G21 has full normal row rank. This implies that k1 ≥ m2 and m1 ≥ k2.

These assumptions are less restrictive than those usually found in the H∞ literature (Doyle, Glover, Khargonekar, and Francis 1989; Stoorvogel 1992; Trentelman and Stoorvogel 1993). Indeed the assumptions need to be tightened when we present the state space solution of the H∞ problem in § 6.5 (p. 272). The present assumptions allow nonproper weighting matrices in the mixed sensitivity problem, which is important for obtaining high-frequency roll-off.

Assumption (b) causes little loss of generality. If the rank conditions are not satisfied then often the plant input u or the observed output y or both may be transformed so that the rank conditions hold.

We conclude with a discussion of well-posedness of feedback systems. Let K = YX⁻¹, with X and Y stable rational matrices (possibly polynomial matrices). Then K(I − G22K)⁻¹ = Y(X − G22Y)⁻¹, so that the closed-loop transfer matrix is

   H = G11 + G12Y(X − G22Y)⁻¹G21.        (6.45)

The feedback system is said to be well-posed as long as X − G22Y is nonsingular (except at a finite number of points in the complex plane). A feedback system may be well-posed even if D0 or X is singular, so that G and K are not well-defined. This formalism allows considering infinite-gain feedback systems, even if such systems cannot be realized.

The compensator K is stabilizing iff X − G22Y is strictly Hurwitz, that is, has all its zeros⁶ in the open left-half plane. We accordingly reformulate the H∞ problem as the problem of minimizing the ∞-norm of H = G11 + G12Y(X − G22Y)⁻¹G21 with respect to all stable rational X and Y such that X − G22Y is strictly Hurwitz.

⁵A rational matrix function has full normal column rank if it has full column rank everywhere in the complex plane except at a finite number of points.

⁶The zeros of a polynomial matrix or a stable rational matrix are those points in the complex plane where it loses rank.


Example 6.3.8 (An infinite-gain feedback system may be well-posed). Consider the SISO mixed sensitivity problem defined by Fig. 6.7. From (6.40) we have G22 = −P. The system is well-posed if X + PY is not identical to zero. Hence, the system is well-posed even if X = 0 (as long as Y ≠ 0). For X = 0 the system has infinite gain. □

6.4 Frequency domain solution of the standard problem

6.4.1 Introduction

In this section we outline the main steps that lead to an explicit formula for so-called suboptimal solutions to the standard H∞ optimal regulation problem. These are solutions such that

   ‖H‖∞ ≤ γ,        (6.46)

with γ a given nonnegative number. The reason for determining suboptimal solutions is that strictly optimal solutions cannot be found directly. Once suboptimal solutions can be obtained, optimal solutions may be approximated by searching on the number γ.

Suboptimal solutions may be determined by spectral factorization in the frequency domain. This solution method applies to the wide class of problems defined in § 6.3 (p. 258). In § 6.5 (p. 272) we specialize the results to the well-known state space solution. Some of the proofs of the results of this section may be found in the appendix to this chapter (§ 6.10, p. 304). The details are described in Kwakernaak (1996). In Subsection 6.4.7 (p. 269) an example is presented.

In what follows the ∞-norm notation is to be read as

   ‖H‖∞ = sup_{ω∈R} σ(H(jω)),        (6.47)

with σ the largest singular value. This notation is also used if H has right-half plane poles, so that the corresponding system is not stable and, hence, has no ∞-norm. As long as H has no poles on the imaginary axis the right-hand side of (6.47) is well-defined.

We first need the following result.

Summary 6.4.1 (Inequality for the ∞-norm). Let γ be a nonnegative number and H a rational transfer matrix. Then ‖H‖∞ ≤ γ is equivalent to either of the following statements:

(a) H∼H ≤ γ²I on the imaginary axis;

(b) HH∼ ≤ γ²I on the imaginary axis.

H∼ is defined by H∼(s) = Hᵀ(−s). H∼ is called the adjoint of H. If A and B are two complex-valued matrices then A ≤ B means that B − A is a nonnegative-definite matrix. “On the imaginary axis” means for all s = jω, ω ∈ R. The proof of Summary 6.4.1 may be found in § 6.10 (p. 304).
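Summary 6.4.1 can be illustrated numerically: on the imaginary axis H∼(jω) is the conjugate transpose of H(jω), so the condition in (a) is that γ²I − H(jω)*H(jω) is nonnegative definite at every frequency. The sketch below checks this on a grid for an arbitrary stable example matrix, which is an assumption for demonstration only.

```python
import numpy as np

# Check ||H||_inf <= gamma against gamma^2 I - H(jw)* H(jw) >= 0 on a frequency grid.
omega = np.logspace(-2, 2, 400)
gamma = 2.0

def H(s):
    # arbitrary stable 2x2 example (assumed), not taken from the text
    return np.array([[1.0 / (s + 1.0), 1.0 / (s + 2.0)],
                     [0.5 / (s + 1.0), (s - 1.0) / (s + 3.0)]])

norm_est, psd_ok = 0.0, True
for w in omega:
    Hw = H(1j * w)
    norm_est = max(norm_est, np.linalg.svd(Hw, compute_uv=False)[0])
    eigs = np.linalg.eigvalsh(gamma**2 * np.eye(2) - Hw.conj().T @ Hw)
    psd_ok = psd_ok and bool(np.all(eigs >= -1e-12))

print("sup_w sigma_max(H(jw)) ≈", norm_est, "   gamma^2 I - H~H >= 0 on the grid:", psd_ok)
```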

To motivate the next result we consider the model matching problem of Exercise 6.3.2 (p. 260). For this problem we have H = P − K, with P a given transfer matrix, and K a stable transfer matrix to be determined. Substituting H = P − K into H∼H ≤ γ²I we obtain from Summary 6.4.1(a) that ‖H‖∞ ≤ γ is equivalent to

   P∼P − P∼K − K∼P + K∼K ≤ γ²I   on the imaginary axis.        (6.48)


This in turn we may rewrite as

   [ I  K∼ ] [ γ²I − P∼P   P∼ ; P   −I ] [ I ; K ] ≥ 0   on the imaginary axis.        (6.49)

The middle factor in this expression defines the rational matrix Πγ. Write K = YX⁻¹, with X and Y rational matrices with all their poles in the left-half complex plane. Then by multiplying (6.49) on the right by X and on the left by X∼ we obtain

   [ X∼  Y∼ ] [ γ²I − P∼P   P∼ ; P   −I ] [ X ; Y ] ≥ 0   on the imaginary axis.        (6.50)

The inequality (6.50) characterizes all suboptimal compensators K. A similar inequality, less easily obtained than for the model matching problem, holds for the general standard problem:

The inequality (6.50) characterizes all suboptimal compensatorsK. A similar inequality, lesseasily obtained than for the model matching problem, holds for the general standard problem:

Summary 6.4.2 (Basic inequality).The compensatorK = Y X−1 achieves‖H‖∞ ≤ γ for thestandard problem, withγ a given nonnegative number, if and only if

[ X∼ Y∼]5γ

[XY

]≥ 0 on the imaginary axis, (6.51)

where5γ is the rational matrix

5γ =[

0 I

−G∼12 −G∼

22

][G11G∼

11 − γ2 I G11G∼21

G21G∼11 G21G∼

21

]−1[0 −G12

I −G22

]. (6.52)

An outline of the proof is found inx 6.10 (p. 304).

6.4.2 Spectral factorization

A much more explicit characterization of all suboptimal compensators (stabilizing or not) than that of Summary 6.4.2 follows by finding a spectral factorization of the rational matrix Πγ, if one exists.

Note that Πγ is para-Hermitian, that is, Πγ∼ = Πγ. A para-Hermitian rational matrix Π has a spectral factorization if there exists a nonsingular square rational matrix⁷ Z such that both Z and its inverse Z⁻¹ have all their poles in the closed left-half complex plane, and

   Π = Z∼JZ,        (6.53)

with J a signature matrix. Z is called a spectral factor. J is a signature matrix if it is a constant matrix of the form

   J = [ I_n  0 ; 0  −I_m ],        (6.54)

⁷That is, Z is nonsingular except in a finite number of points in the complex plane.


with I_n and I_m unit matrices of dimensions n × n and m × m, respectively. On the imaginary axis Π is Hermitian. If Π has no poles or zeros⁸ on the imaginary axis then n is the number of positive eigenvalues of Π on the imaginary axis and m the number of negative eigenvalues.

The factorization Π = Z∼JZ, if it exists, is not at all unique. If Z is a spectral factor, then so is UZ, where U is any rational or constant matrix such that U∼JU = J. U is said to be J-unitary. There are many such matrices. If

   J = [ 1  0 ; 0  −1 ],        (6.55)

for instance, then all constant matrices U, with

   U = [ c1 cosh α   c2 sinh α ; c1 sinh α   c2 cosh α ],        (6.56)

α ∈ R, c1 = ±1, c2 = ±1, satisfy UᵀJU = J.
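The J-unitarity of the family (6.56) follows from cosh²α − sinh²α = 1 and is easy to confirm numerically, as in the short sketch below.

```python
import numpy as np

# Verify that the constant matrices (6.56) satisfy U^T J U = J.
J = np.diag([1.0, -1.0])

for alpha in (0.0, 0.7, -1.3):
    for c1 in (1.0, -1.0):
        for c2 in (1.0, -1.0):
            U = np.array([[c1 * np.cosh(alpha), c2 * np.sinh(alpha)],
                          [c1 * np.sinh(alpha), c2 * np.cosh(alpha)]])
            assert np.allclose(U.T @ J @ U, J)   # J-unitarity as defined above

print("all tested U satisfy U^T J U = J")
```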

6.4.3 All suboptimal solutions

Suppose that Πγ has a spectral factorization

   Πγ = Zγ∼ J Zγ,        (6.57)

with

   J = [ I_{m2}  0 ; 0  −I_{k2} ].        (6.58)

It may be proved that such a spectral factorization generally exists provided γ is sufficiently large. Write

   [ X ; Y ] = Zγ⁻¹ [ A ; B ],        (6.59)

with A an m2 × m2 rational matrix and B a k2 × m2 rational matrix, both with all their poles in the closed left-half plane. Then (6.51) reduces to

   A∼A ≥ B∼B   on the imaginary axis.        (6.60)

Hence, (6.59), with A and B chosen so that they satisfy (6.60), characterizes all suboptimal compensators. It may be proved that the feedback system defined this way is well-posed iff A is nonsingular.

This characterization of all suboptimal solutions has been obtained under the assumption that Πγ has a spectral factorization. Conversely, it may be shown that if suboptimal solutions exist then Πγ has a spectral factorization (Kwakernaak 1996).

Summary 6.4.3 (Suboptimal solutions). Given γ, a suboptimal solution of the standard problem exists iff Πγ has a spectral factorization Πγ = Zγ∼ J Zγ, with

   J = [ I_{m2}  0 ; 0  −I_{k2} ].        (6.61)

⁸The zeros of Π are the poles of Π⁻¹.


If this factorization exists then all suboptimal compensators are given byK = Y X−1, with[XY

]= Z−1

γ

[AB

], (6.62)

with A nonsingular square rational andB rational, both with all their poles in the closed left-halfplane, and such thatA∼ A ≥ B∼ B on the imaginary axis. �

There are many matrices of stable rational functionsA and B that satisfyA∼ A ≥ B∼ B onthe imaginary axis. An obvious choice isA = I , B = 0. We call this thecentral solution Thecentral solution as defined here is not at all unique, because as seen inx 6.4.2 (p. 265) the spectralfactorization (6.57) is not unique.

6.4.4 Lower bound

Suboptimal solutions exist for those values ofγ for which5γ is spectrally factorizable. Thereexists a lower boundγ0 so that5γ is factorizable for allγ > γ0. It follows thatγ0 is also a lowerbound for the∞-norm, that is,

‖H‖∞ ≥ γ0 (6.63)

for every compensator.

Summary 6.4.4 (Lower bound). Defineγ1 as the first value ofγ asγ decreases from∞ forwhich the determinant of[

G11G∼11 − γ2 I G11G∼

21

G21G∼11 G21G∼

21

](6.64)

has aγ-dependent9 zero on the imaginary axis. Similarly, letγ2 be the first value ofγ asγdecreases from∞ for which the determinant of[

G∼11G11 − γ2 I G∼

11G12

G∼12G11 G∼

12G12

](6.65)

has aγ-dependent zero on the imaginary axis. Then5γ is spectrally factorizable forγ > γ0,with

γ0 = max(γ1, γ2). (6.66)

Also,

‖H‖∞ ≥ γ0. (6.67)

The lower boundγ0 may be achieved. �

The proof may be found in Kwakernaak (1996). Note that the zeros of the determinant (6.64)are the poles of5γ , while the zeros of the determinant of (6.65) are the poles of5−1

γ .

9That is, a zero that varies withγ.

Page 116: Multi Variable Control System Design Dmcs456

268 Chapter 6.H∞-Optimization andµ-Synthesis

6.4.5 Stabilizing suboptimal compensators

In x 6.4.4 a representation is established of all suboptimal solutions of the standard problem fora given performance levelγ. Actually, we are not really interested in all solutions, but in allstabilizingsuboptimal solutions, that is, all suboptimal compensators that make the closed-loopsystem stable (as defined inx 6.3.2, p. 261).

Before discussing how to find such compensators we introduce the notion of acanonicalspectral factorization. First, a square rational matrixM is said to bebiproper if both M and itsinverseM−1 are proper.M is biproper iff lim|s|→∞ M(s) is finite and nonsingular.

The spectral factorization5= Z∼ J Z of a biproper rational matrix5 is said to be canonical ifthe spectral factorZ is biproper. Under the general assumptions ofx 6.4 (p. 264) the matrix5γ isnot necessarily biproper. If it is not biproper then the definition of what a canonical factorization isneeds to be adjusted (Kwakernaak 1996). The assumptions on the state space version of standardproblem that are introduced inx 6.5 (p. 272) ensure that5γ is biproper.

We consider the stability of suboptimal solutions. According tox 6.3.3 (p. 263) closed-loopstability depends on whether the zeros ofX − G22Y all lie in the open left-half plane. Denote

Z−1γ = Mγ =

[Mγ,11 Mγ,12

Mγ,21 Mγ,22

](6.68)

Then by (6.62) all suboptimal compensators are characterized by

X = Mγ,11A+ Mγ,12B, Y = Mγ,21A+ Mγ,22B. (6.69)

Correspondingly,

X − G22Y = (Mγ,11 − G22Mγ,21)A+ (Mγ,12A− G22Mγ,22)B

= (Mγ,11 − G22Mγ,21)( I + TU)A. (6.70)

HereT = (Mγ,11 − G22Mγ,21)−1(Mγ,12 − G22Mγ,22) andU = BA−1.

Suppose that the factorization of5γ is canonical.Then it may be proved (Kwakernaak 1996)that as the real numberε varies from 0 to 1, none of the zeros of

(Mγ,11 − G22Mγ,21)( I + εTU)A (6.71)

crosses the imaginary axis. This means that all the zeros ofX − G22Y are in the open left-halfcomplex plane iff all the zeros of(Mγ,11 − G22Mγ,21)A are in the open left-half complex plane.It follows that a suboptimal compensator is stabilizing iff bothA andMγ,11 − G22Mγ,21 have alltheir roots in the open left-half plane. The latter condition corresponds to the condition that thecentralcompensator, that is, the compensator obtained forA = I andB = 0, be stabilizing.

Thus, we are led to the following conclusion:

Summary 6.4.5 (All stabilizing suboptimal compensators).Suppose that the factorization5γ = Z∼

γ J Zγ is canonical.

1. A stabilizing suboptimal compensator exists if and only if the central compensator is sta-bilizing.

2. If a stabilizing suboptimal compensator exists thenall stabilizing suboptimal compensatorsK = Y X−1 are given by[

XY

]= Z−1

γ

[AB

], (6.72)

Page 117: Multi Variable Control System Design Dmcs456

6.4. Frequency domain solution of the standard problem 269

with A nonsingular square rational andB rational, both with all their poles in the openleft-half complex plane, such thatA has all its zeros in the open left-half complex planeandA∼ A ≥ B∼ B on the imaginary axis.

Exercise 6.4.6 (All stabilizing suboptimal solutions).Prove that if the factorization is canonicalthen all stabilizing suboptimal compensatorsK = Y X−1, if any exist, may be characterized by[

XY

]= Z−1

γ

[IU

], (6.73)

with U rational stable such that‖U‖∞ ≤ 1. �

6.4.6 Search procedure for near-optimal solutions

In the next section we discuss how to perform the spectral factorization of5γ . Assuming that weknow how to do this numerically — analytically it can be done in rare cases only — the searchprocedure of Fig. 6.12 serves to obtain near-optimal solutions.Optimalsolutions are discussedin x 6.6 (p. 275).

No

ΠγDoes a canonical spectral factorization of exist?

Compute the lower bound , or guess a lower bound

Compute the central compensator

Is the compensator close enough to optimal?

Is the compensator stabilizing?

No

Yes

Yes

No

Yes

γ γ> o

γ o

Choose a valueof

Increaseγ

Exit

Decreaseγ

Figure 6.12: Search procedure

6.4.7 Example: Model matching problem

By way of example, we consider the solution of the model matching problem of Exercise 6.3.2(p. 260) with

P(s) = 11− s

. (6.74)

Page 118: Multi Variable Control System Design Dmcs456

270 Chapter 6.H∞-Optimization andµ-Synthesis

Lower bound. By (6.42) we have

G(s) =[

1s−1 −1

1 0

]. (6.75)

It may easily be found that the determinants of the matrices (6.64) and (6.65) are both equal to−γ2. It follows thatγ1 = γ2 = 0 so that alsoγ0 = 0. Indeed, the “compensator”K(s) = P(s) =1/(1− s) achieves‖H‖∞ = ‖P− K‖∞ = 0. Note that this compensator is not stable.

Spectral factorization. It is straightforward to find from (6.52) and (6.75) that for this problem5γ is given by

5γ(s) =

1−

1γ2

1−s2

1γ2

1+s1γ2

1−s − 1γ2

. (6.76)

We needγ > 0. It may be verified that forγ 6= 12 the matrix5γ has the spectral factor and

signature matrix

Zγ (s) =

γ2− 34

γ2− 14

+s

1+s

1γ2− 1

41+s

−14

γ(γ2− 14 )

1+s

(γ2+ 1

4γ2− 1

4+s

)1+s

, J =

[1 00 −1

]. (6.77)

The inverse ofZγ is

Z−1γ (s) =

γ2+ 14

γ2− 14

+s

1+s −γ

γ2− 14

1+s14

γ2− 14

1+s

γ(γ2− 3

4γ2− 1

4+s)

1+s

. (6.78)

5γ is biproper. Forγ 6= 12 both Zγ andZ−1

γ are proper, so that the factorization is canonical. Forγ = 1

2 the spectral factorZγ is not defined. It may be proved that forγ = 12 actually no canonical

factorization exists. Noncanonical factorizationdoexist for this value ofγ, though.

Suboptimal compensators. It follows by application of Summary 6.4.3 (p. 266) after somealgebra that forγ 6= 1

2 all suboptimal “compensators” have the transfer function

K(s) =14

γ2− 14

+ γ(γ2− 3

4

γ2− 14

+ s)

U(s)(γ2+ 1

4

γ2− 14

+ s)

− γ

γ2− 14U(s)

, (6.79)

with U = B/A such thatA∼ A ≥ B∼ B on the imaginary axis. We consider two special cases.

“Central” compensator. Let A = 1, B = 0, so thatU = 0. We then have the “central” com-pensator

K(s) =14

γ2− 14

γ2+ 14

γ2− 14

+ s. (6.80)

Page 119: Multi Variable Control System Design Dmcs456

6.4. Frequency domain solution of the standard problem 271

Forγ > 12 the compensator is stable and for 0≤ γ < 1

2 it is unstable. The resulting “closed-looptransfer function” may be found to be

H(s) = P(s)− K(s) =γ2

γ2− 14(1+ s)

(γ2+ 1

4

γ2− 14

+ s)(1− s). (6.81)

The magnitude of the corresponding frequency response function takes its peak value at thefrequency 0, and we find

‖H‖∞ = |H(0)| = γ2

γ2 + 14

. (6.82)

It may be checked that‖H‖∞ ≤ γ, with equality only forγ = 0 andγ = 12.

Equalizing compensator. ChooseA = 1, B = 1, so thatU = 1. After simplification we obtainfrom (6.79) the compensator

K(s) =γ

(s+ γ+ 1

2 − 12γ

γ+ 12

)s+ γ− 1

2

γ+ 12

. (6.83)

For this compensator we have the closed-loop transfer function

H(s) = γ(1+ s)(s− γ− 1

2

γ+ 12)

(1− s)(s+ γ− 12

γ+ 12). (6.84)

The magnitude of the corresponding frequency response functionH( jω) is constant and equal toγ. Hence, the compensator (6.83) is equalizing with‖H‖∞ = γ.

Stabilizing compensators. The spectral factorization of5γ exists forγ > 0, and is canonicalfor γ 6= 1

2. The central compensator (6.80) is stabilizing iff the compensator itself is stable,which is the case forγ > 1

2. Hence, according to Summary 6.4.5 (p. 268) forγ > 12 stabilizing

suboptimal compensators exist. They are given by

K(s) =14

γ2− 14

+ γ(γ2− 3

4

γ2− 14

+ s)

U(s)(γ2+ 1

4

γ2− 14

+ s)

− γ

γ2− 14U(s)

, (6.85)

with U scalar stable rational such that‖U‖∞ ≤ 1. In particular, the equalizing compensator(6.83), obtained by lettingU(s) = 1, is stabilizing.

Since the central compensator is not stabilizing forγ < 12, we conclude that no stabilizing

compensators exist such that‖H‖∞ < 12.

The optimal compensator. Investigation of (6.79) shows that forγ → 12 the transfer function

of all suboptimal compensators approachesK0(s)= 12. This compensator is (trivially) stable, and

the closed-loop transfer function is

H0(s) = P(s)− K0(s) = 12

1+ s1− s

, (6.86)

Page 120: Multi Variable Control System Design Dmcs456

272 Chapter 6.H∞-Optimization andµ-Synthesis

so that‖H0‖∞ = 12. Since forγ < 1

2 no stabilizing compensators exist we conclude that thecompensatorK0 = 1

2 is optimal,that is, minimizes‖H‖∞ with respect to all stable compensators.

6.5 State space solution

6.5.1 Introduction

In this section we describe how to perform the spectral factorization needed for the solution ofthe standardH∞ optimal regulator problem. To this end, we first introduce a special version ofthe problem instate space form.In x 6.5.5 (p. 274) a more general version is considered. Thestate space approach leads to a factorization algorithm that relies on the solution of two algebraicRiccati equations.

6.5.2 State space formulation

Consider a standard plant that is described by the state differential equation

x = Ax+ Bu+ Fw1, (6.87)

with A, B, and F constant matrices, and the vector-valued signalw1 one component of theexternal inputw. The observed outputy is

y = Cx+w2, (6.88)

with w2 the second component ofw, andC another constant matrix. The error signalz, finally,is assumed to be given by

z=[

Hxu

], (6.89)

with alsoH a constant matrix. It is easy to see that the transfer matrix of the standard plant is

G(s) =

H(sI − A)−1F 0 H(sI − A)−1B

0 0 IC(sI − A)−1F I C(sI − A)−1B

. (6.90)

This problem is a stylized version of the standard problem that is reminiscent of the LQG regu-lator problem.

In this and other state space versions of the standardH∞ problem it is usual to require thatthe compensator stabilize the entire closed-loop system, and not just the loop aroundG22. Thisnecessitates the assumption that the systemx = Ax+ Bu, y = Hx be stabilizable and detectable.Otherwise, closed-loop stabilization is not possible.

6.5.3 Factorization of5γ

In x 6.10 (p. 304) it is proved that for the state space standard problem described inx 6.5.2 therational matrix5γ as defined in Summary 6.4.2 (p. 265) may be spectrally factored as5γ =Z∼γ J Zγ, where the inverseMγ = Z−1

γ of the spectral factor is given by

Mγ (s) =[

I 00 γ I

]+[

C

−BT X2

](sI − A2)

−1( I − 1γ2

X1X2)−1[ X1CT γB]. (6.91)

Page 121: Multi Variable Control System Design Dmcs456

6.5. State space solution 273

The constant matrixA2 is defined as

A2 = A+ (1γ2

FFT − BBT)X2. (6.92)

The matrixX1 is a symmetric solution — if any exists — of the algebraic Riccati equation

0 = AX1 + X1AT + FFT − X1(CTC − 1

γ2HT H)X1 (6.93)

such that

A1 = A+ X1(1γ2

HT H − CTC) (6.94)

is a stability matrix10. The matrixX2 is a symmetric solution — again, if any exists — of thealgebraic Riccati equation

0 = AT X2 + X2A+ HT H − X2(BBT − 1γ2

FFT)X2 (6.95)

such thatA2 is a stability matrix.The numerical solution of algebraic Riccati equations is discussed inx 3.2.9 (p. 114).

6.5.4 The central suboptimal compensator

Once the spectral factorization has been established, we may obtain the transfer matrix of thecentral compensator asK = Y X−1, with[

XY

]= Z−1

γ

[I0

]= Mγ

[I0

]=[

Mγ,11

Mγ,21

]. (6.96)

After some algebra we obtain from (6.91) the transfer matrixK = Mγ,21M−1γ,11 of the central

compensator in the form

K(s) = −BT X2

(sI − A− 1

γ2FFTX2 + BBT X2

+ ( I − 1γ2

X1X2)−1X1CTC

)−1

( I − 1γ2

X1X2)−1X1CT.

(6.97)

In state space form the compensator may be represented as

˙x = (A+ 1γ2

FFTX2)x + ( I − 1γ2

X1X2)−1X1CT(y− Cx)+ Bu, (6.98)

u = −BT X2x. (6.99)

The compensator consists of an observer interconnected with a state feedback law. The order ofthe compensator is the same as that of the standard plant.

10A matrix is a stability matrix if all its eigenvalues have strictly negative real part.

Page 122: Multi Variable Control System Design Dmcs456

274 Chapter 6.H∞-Optimization andµ-Synthesis

6.5.5 A more general state space standard problem

A more general state space representation of the standard plant is

x = Ax+ B1w+ B2u, (6.100)

z = C1x+ D11w+ D12u, (6.101)

y = C2x+ D21w+ D22u. (6.102)

In the µ-Tools MATLAB toolbox (Balas, Doyle, Glover, Packard, and Smith 1991) and theRobust Control Toolbox (Chiang and Safonov 1992) a solution of the correspondingH∞ problembased on Riccati equations is implemented that requires the following conditions to be satisfied:

1. (A, B2) is stabilizable and(C2, A) is detectable.

2. D12 andD21 have full rank.

3.

[A− jω I B2

C1 D12

]has full column rank for allω ∈ R (hence,D12 is tall11).

4.

[A− jω I B1

C2 D21

]has full row rank for allω ∈ R (hence,D21 is wide).

The solution of this more general problem is similar to that for the simplified problem. It consistsof two algebraic Riccati equations whose solutions define an observercumstate feedback law.The expressions, though, are considerably more complex than those inx 6.5.3 andx 6.5.4.

The solution is documented in a paper by Glover and Doyle (1988). More extensive treat-ments may be found in a celebrated paper by Doyle, Glover, Khargonekar, and Francis (1989)and in Glover and Doyle (1989). Stoorvogel (1992) discusses a number of further refinements ofthe problem.

6.5.6 Characteristics of the state space solution

We list a few properties of the state space solution.

1. The assumption of stabilizability and detectability ensures that thewholefeedback systemis stabilized, not just the loop aroundG22. There are interestingH∞ problems that do notsatisfy the stabilizability or detectability assumption, and, hence, cannot be solved withthis algorithm.

2. The eigenvalues of the Hamiltonian corresponding to the Riccati equation (6.93) are thepoles of5γ . Hence, the numberγ1 defined in the discussion on the lower bound inx 6.4may be determined by finding the first value ofγ asγ decreases from∞ such that any ofthese eigenvalues reaches the imaginary axis.

Likewise, the eigenvalues of the Hamiltonian corresponding to the Riccati equation (6.95)are the poles of the inverse of5γ . Hence, the numberγ2 defined in the discussion on thelower bound inx 6.4 may be determined by finding the first value ofγ asγ decreases from∞ such that any of these eigenvalues reaches the imaginary axis.

3. The required solutionsX1 andX2 of the Riccati equations (6.93) and (6.95) are nonnega-tive-definite.

11A matrix is tall if it has at least as many rows as columns. It is wide if it has at least as many columns as rows.

Page 123: Multi Variable Control System Design Dmcs456

6.6. Optimal solutions to theH∞ problem 275

4. The central compensator is stabilizing iff

λmax(X2X1) < γ2, (6.103)

whereλmax denotes the largest eigenvalue. This is a convenient way to test whether stabi-lizing compensators exist.

5. The central compensator is of the same order as the generalized plant (6.100–6.102).

6. The transfer matrixK of the central compensator is strictly proper.

7. The factorization algorithm only applies if the factorization is canonical. If the factoriza-tion is noncanonical then the solution breaks down. This is described in the next section.

6.6 Optimal solutions to theH∞ problem

6.6.1 Optimal solutions

In x 6.4.6 (p. 269) a procedure is outlined for finding optimal solutions that involves a searchon the parameterγ. As the search advances the optimal solution is approached more and moreclosely. There are two possibilities for the optimal solution:

Type A solution.The central suboptimal compensator is stabilizing for allγ ≥ γ0, with γ0 thelower bound discussed in 6.4.4 (p. 267). In this case, the optimal solution is obtained forγ = γ0.

Type B solution. As γ varies, the central compensator becomes destabilizing asγ decreasesbelow some numberγopt with γopt ≥ γ0.

In type B solutions a somewhat disconcerting phenomenon occurs. In the example ofx 6.4.7(p. 269) several of the coefficients of the spectral factor and of the compensator transfer matrixgrow without bound asγ approachesγopt. This is generally true.

In the state space solution the phenomenon is manifested by the fact that asγ approachesγopt

either of the two following eventualities occurs (Glover and Doyle 1989):

(B1.) The matrixI − 1γ2 X2X1 becomes singular.

(B2.) In solving the Riccati equation (6.93) or (6.95) or both the “top coefficient matrix”E1

of x 3.2.9 (p. 114) becomes singular, so that the solution of the Riccati equation ceases toexist.

In both cases large coefficients occur in the equations that define the central compensator.The reason for these phenomena for type B solutions is that at the optimum the factorization

of5γ is non-canonical.As illustrated in the example ofx 6.4.7, however, the compensator transfer matrix approaches

a well-defined limit asγ → γopt, corresponding to theoptimalcompensator. The type B optimalcompensator is of lower order than the suboptimal central compensator. Also this is generallytrue.

It is possible to characterize and computeall optimalsolutions of type B (Glover, Limebeer,Doyle, Kasenally, and Safonov 1991; Kwakernaak 1991).

An important characteristic of optimal solutions of type B is that the largest singular valueσ(H( jω)) of the optimal closed-loop frequency response matrixH is constantas a function ofthe frequencyω ∈ R. This is known as theequalizing propertyof optimal solutions.

Page 124: Multi Variable Control System Design Dmcs456

276 Chapter 6.H∞-Optimization andµ-Synthesis

Straightforward implementation of the two-Riccati equation algorithm leads to numericaldifficulties for type B problems. As the solution approaches the optimum several or all of thecoefficients of the compensator become very large. Because eventually the numbers become toolarge the optimum cannot be approached too closely.

One way to avoid numerical degeneracy is to represent the compensator in what is calleddescriptorform (Glover, Limebeer, Doyle, Kasenally, and Safonov 1991). This is a modificationof the familiar state representation and has the appearance

Ex = Fx+ Gu, (6.104)

y = Hx+ Lu. (6.105)

If E is square nonsingular then by multiplying (6.104) on the left byE−1 the equations (6.104–6.105) may be reduced to a conventional state representation. The central compensator for theH∞ problem may be represented in descriptor form in such a way thatE becomes singular as theoptimum is attained but all coefficients in (6.104–6.105) remain bounded. The singularity ofEmay be used to reduce the dimension of the optimal compensator (normally by 1).

General descriptor systems (withE singular) are discussed in detail in Aplevich (1991).

6.6.2 Numerical examples

We present the numerical solution of two examples of the standard problem.

Example 6.6.1 (Mixed sensitivity problem for the double integrator).We consider the mixedsensitivity problem discussed inx 6.2.5 (p. 254). WithP, V, W1 andW2 as in 6.3, by (6.39) thestandard plant transfer matrix is

G(s) =

M(s)s2

1s2

0 c(1+ rs)− M(s)

s2 − 1s2

, (6.106)

whereM(s) = s2 + s√

2 + 1. We consider the caser = 0 andc = 0.1. It is easy to check thatfor r = 0 a state space representation of the plant is

x =[0 01 0

]︸ ︷︷ ︸

A

x+[

1√2

]︸ ︷︷ ︸

B1

w+[10

]︸︷︷︸B2

u, (6.107)

z=[0 10 0

]︸ ︷︷ ︸

C1

x+[10

]︸︷︷︸D11

w+[0c

]︸︷︷︸D12

u, (6.108)

y = [0 −1

]︸ ︷︷ ︸C2

x+ [−1]︸ ︷︷ ︸

D21

w. (6.109)

Note that we constructed a joint minimal realization of the blocksP andV in the diagram ofFig. 6.7. This is necessary to satisfy the stabilizability condition ofx 6.5.5 (p. 274).

A numerical solution may be obtained with the help of theµ-Tools MATLAB toolbox (Balas,Doyle, Glover, Packard, and Smith 1991). The search procedure forH∞ state space problems

Page 125: Multi Variable Control System Design Dmcs456

6.6. Optimal solutions to theH∞ problem 277

implemented inµ-Tools terminates atγ = 1.2861 (for a specified tolerance in theγ-search of10−8). The state space representation of the corresponding compensator is

˙x =[−0.3422× 109 −1.7147× 109

1 −1.4142

]x +

[−0.3015−0.4264

]y, (6.110)

u = [−1.1348× 109 − 5.6871× 109] x. (6.111)

Numerical computation of the transfer function of the compensator results in

K(s) = 2.7671× 109s+ 1.7147× 109

s2 + 0.3422× 109s+ 2.1986× 109. (6.112)

The solution is of type B as indicated by the large coefficients. By discarding the terms2 in thedenominator we may reduce the compensator to

K(s) = 1.2586s+ 0.61971+ 0.1556s

, (6.113)

which is the result stated in (6.31)(p. 257). It may be checked that the optimal compensatorpossesses the equalizing property: the mixed sensitivity criterion does not depend on frequency.

The caser 6= 0 is dealt with in Exercise 6.7.2 (p. 288). �

Example 6.6.2 (Disturbance feedforward).Hagander and Bernhardsson (1992) discuss theexample of Fig. 6.13. A disturbancew acts on a plant with transfer function 1/(s+ 1). The

w

yK s( )

u

z2

z1x1 x2s+1s+111 1

ρ

actuator plant

++

Figure 6.13: Feedforward configuration

plant is controlled by an inputu through an actuator that also has the transfer function 1/(s+ 1).The disturbancew may be measured directly, and it is desired to control the plant output byfeedforward control from the disturbance. Thus, the observed outputy isw. The “control error”has as components the weighted control system outputz1 = x2/ρ, with ρ a positive coefficient,and the control inputz2 = u. The latter is included to prevent overly large inputs.

In state space form the standard plant is given by

x =[−1 0

1 −1

]︸ ︷︷ ︸

A

x+[01

]︸︷︷︸B1

w+[10

]︸︷︷︸B2

u, (6.114)

Page 126: Multi Variable Control System Design Dmcs456

278 Chapter 6.H∞-Optimization andµ-Synthesis

z=[

0 1ρ

0 0

]︸ ︷︷ ︸

C1

x+[00

]︸︷︷︸D11

w+[01

]︸︷︷︸D12

u, (6.115)

y = [0 0

]︸ ︷︷ ︸C2

x + [1]︸︷︷︸

D21

w. (6.116)

The lower bound ofx 6.4.4 (p. 267) may be found to be given by

γ0 = 1√1+ ρ2

. (6.117)

Define the number

ρc =√√

5+ 12

= 1.270. (6.118)

Hagander and Bernhardsson prove this (see Exercise 6.6.3):

1. Forρ > ρc the optimal solution is of type A.

2. Forρ < ρc the optimal solution is of type B.

Forρ = 2, for instance, numerical computation usingµ-Tools with a tolerance 10−8 results in

γopt = γ0 = 1√5

= 0.4472, (6.119)

with the central compensator

˙x =[−1.6229 −0.4056

1 −1

]x+

[0

0.7071

]y, (6.120)

u = [−0.8809 − 0.5736] x. (6.121)

The coefficients of the state representation of the compensator are of the order of unity, and thecompensator transfer function is

K(s) = −0.4056s− 0.4056ss2 + 2.6229s+ 2.0285

. (6.122)

There is no question of order reduction. The optimal solution is of type A.Forρ = 1 numerical computation leads to

γopt = 0.7167> γ0 = 0.7071, (6.123)

with the central compensator

˙x =[−6.9574× 107 −4.9862× 107

1 −1

]x +

[01

]y, (6.124)

u = [−6.9574× 107 − 4.9862× 107] x. (6.125)

Page 127: Multi Variable Control System Design Dmcs456

6.7. Integral control and high-frequency roll-off 279

The compensator transfer function is

K(s) = −4.9862× 107s− 4.9862× 107

s2 + 0.6957× 108s+ 1.1944× 108. (6.126)

By discarding the terms2 in the denominator this reduces to the first-order compensator

K(s) = −0.7167s+ 1

s+ 1.7167. (6.127)

The optimal solution now is of type B. �

Exercise 6.6.3 (Disturbance feedforward).

1. Verify that the lower boundγ0 is given by (6.117).

2. Prove thatρc as given by (6.118) separates the solutions of type A and type B.

3. Prove that ifρ < ρc then the minimal∞-normγopt is the positive solution of the equationγ4 + 2γ3 = 1/ρ2 (Hagander and Bernhardsson 1992).

4. Check the frequency dependence of the largest singular value of the closed-loop frequencyresponse matrix for the two types of solutions.

5. What do you have to say about the optimal solution forρ = ρc?�

6.7 Integral control and high-frequency roll-off

6.7.1 Introduction

In this section we discuss the application ofH∞ optimization, in particular the mixed sensitivityproblem, to achieve two specific design targets: integral control and high-frequency roll-off.Integral control is dealt with inx 6.7.2. Inx 6.7.3 we explain how to design for high-frequencyroll-off. Subsection 6.7.4 is devoted to an example.

The methods to obtain integral control and high frequency roll-off discussed in this sectionfor H∞ design may also be used withH2 optimization. This is illustrated for SISO systems inx 3.5.4 (p. 130) andx 3.5.5 (p. 132).

6.7.2 Integral control

In x 2.3 (p. 64) it is explained that integral control is a powerful and important technique. Bymaking sure that the control loop contains “integrating action” robust rejection of constant dis-turbances may be obtained, as well as excellent low-frequency disturbance attenuation and com-mand signal tracking. Inx 6.2.3 (p. 252) it is claimed that the mixed sensitivity problem allows to

C P−

Figure 6.14: One-degree-of-freedom feedback system

Page 128: Multi Variable Control System Design Dmcs456

280 Chapter 6.H∞-Optimization andµ-Synthesis

design for integrating action. Consider the SISO mixed sensitivity problem for the one-degree-of-freedom configuration of Fig. 6.14. The mixed sensitivity problem amounts to the minimizationof the peak value of the function

|V( jω)W1( jω)S( jω)|2 + |V( jω)W2( jω)U( jω)|2, ω ∈ R. (6.128)

SandU are the sensitivity function and input sensitivity function

S= 11+ PC

, U = C1+ PC

, (6.129)

respectively, andV, W1 andW2 suitable frequency dependent weighting functions.If the plant P has “natural” integrating action, that is,P has a pole at 0, then no special

provisions are needed. In the absence of natural integrating action we may introduce integratingaction by letting the productVW1 have a pole at 0. This forces the sensitivity functionS to havea zero at 0, because otherwise (6.128) is unbounded atω = 0.

There are two ways to introduce such a pole intoVW1:

1. Let V have a pole at 0, that is, take

V(s) = V0(s)s

, (6.130)

with V0(0) 6= 0. We call this theconstant disturbance modelmethod.

2. Let W1 have a pole at 0, that is, take

W1(s) = W1o(s)s

, (6.131)

with W1o(0) 6= 0. This is theconstant error suppressionmethod.

We discuss these two possibilities.

Constant disturbance model. We first consider lettingV have a pole at 0. AlthoughVW1

needs to have a pole at 0, the weighting functionVW2 cannot have such a pole. IfVW2 has a poleat 0 thenU would be forced to be zero at 0.SandU cannot vanish simultaneously at 0. Hence,if V has a pole at 0 thenW2 should have a zero at 0.

This makes sense. Including a pole at 0 inV means thatV contains a model for constantdisturbances. Constant disturbances can only be rejected by constant inputs to the plant. Hence,zero frequency inputs should not be penalized byW2. Therefore, ifV is of the form (6.130) thenwe need

W2(s) = sW2o(s), (6.132)

with W2o(0) 6= ∞.Figure 6.15(a) defines the interconnection matrixG for the resulting standard problem.

Inspection shows that owing to the pole at 0 in the blockV0(s)/s outside the feedback loopthis standard problem does not satisfy the stabilizability condition ofx 6.5.5 (p. 274).

The first step towards resolving this difficulty is to modify the diagram to that of Fig. 6.15(b),where the plant transfer matrix has been changed tosP(s)/s. The idea is to construct a simulta-neous minimal state realization of the blocksV0(s)/sandsP(s)/s. This brings the unstable poleat 0 “inside the loop.”

Page 129: Multi Variable Control System Design Dmcs456

6.7. Integral control and high-frequency roll-off 281

W s2o( ) V so( )

W s1( )sK s( )

K so( )

P s( )

z w

y uz

2

1

v

−+

+

s

s

(c)

sW s2o( )

sW s2o( )

W s1( )

V so( )

V so( )

K s( ) P s( )

z w

y uz

2

1

v

−+

+

(a)

s

s

ssP s( )

W s1( )K s( )

z w

y uz

2

1

v

−+

+

(b)

Figure 6.15: Constant disturbance model

Page 130: Multi Variable Control System Design Dmcs456

282 Chapter 6.H∞-Optimization andµ-Synthesis

The difficulty now is, of course, that owing to the cancellation insP(s)/s there is no minimalrealization that is controllable from the plant input. Hence, we remove the factors from the nu-merator ofsP(s)/sby modifying the diagram to that of Fig. 6.15(c). By a minimal joint realiza-tion of the blocksV0(s)/sandP(s)/s the interconnection system may now be made stabilizable.This is illustrated in the example ofx 6.7.4.

The block diagram of Fig. 6.15(c) defines a modified mixed sensitivity problem. Supposethat the compensatorK0 solves this modified problem. Then the original problem is solved bycompensator

K(s) = K0(s)s

. (6.133)

Constant error suppression. We next consider choosingW1(s) = W1o(s)/s. This correspondsto penalizing constant errors in the output with infinite weight. Figure 6.16(a) shows the corre-sponding block diagram. Again straightforward realization results in violation of the stabilizabil-ity condition of x 6.5.5 (p. 274). The offending factor 1/s may be pulled inside the loop as inFig. 6.16(b). Further block diagram substitution yields the arrangement of Fig. 6.16(c).

Integrator in the loop method. Contemplation of the block diagrams of Figs. 6.15(c) and6.16(c) shows that both the constant disturbance model method and the constant error rejectionmethod may be reduced to theintegrator in the loopmethod. This method amounts to connectingan integrator 1/s in series with the plantP as in Fig. 6.17(a), and doing a mixed sensitivity designfor this augmented system.

After a compensatorK0 has been obtained for the augmented plant the actual compensatorthat is implemented consists of the series connectionK(s)= K0(s)/sof the compensatorK0 andthe integrator as in Fig. 6.17(b). To prevent the pole at 0 of the integrator from reappearing in theclosed-loop system it is necessary to letV have a pole at 0. ReplacingV(s) with V0(s)/s in thediagram of Fig. 6.17(a) produces the block diagrams of Figs. 6.15(c) and 6.16(c).

6.7.3 High-frequency roll-off

As explained inx 6.2.3 (p. 252), high-frequency roll-off in the compensator transfer functionKand the input sensitivity functionU may be pre-assigned by suitably choosing the high-frequencybehavior of the weighting functionW2. If W2(s)V(s) is of ordersm ass approaches∞ thenKandU have a high-frequency roll-off ofmdecades/decade. NormallyV is chosen biproper12 suchthatV(∞)= 1. Hence, to achieve a roll-off ofmdecades/decade it is necessary thatW2 behavesassm for larges.

If the desired roll-offm is greater than 0 then the weighting functionW2 needs to be nonproper.The resulting interconnection matrix

G = W1V W1P

0 W2

−V −P

(6.134)

is also nonproper and, hence, cannot be realized as a state system of the form needed for the statespace solution of theH∞-problem.

The makeshift solution usually seen in the literature is to cut off the roll-on at high frequency.Suppose by way of example that the desired form ofW2 is W2(s) = c(1 + rs), with c and r

12That is, bothV and 1/V are proper. This means thatV(∞) is finite and nonzero.

Page 131: Multi Variable Control System Design Dmcs456

6.7. Integral control and high-frequency roll-off 283

W s2( ) V s( )

W s1o( )

W s1o( )

W s1o( )sK s( )

K so( )

P s( )

z w

y uz

2

1

v

−+

+

s

s

(c)

W s2( )

W s1o( )

W s1o( )

W s1o( )

V s( )

K s( ) P s( )

z w

y uz

2

1

v

−+

+

(a)

s

s

s

W s2( ) V s( )

P s( )K s( )

z w

y u z

2

1

v

−+

+

(b)

Figure 6.16: Constant error suppression

Page 132: Multi Variable Control System Design Dmcs456

284 Chapter 6.H∞-Optimization andµ-Synthesis

W s2( )

W s1( )

V s( )

K so( )

K so( )

K s( )

P s( )

P s( )

P so ( )

z w

y uz

2

1

v

−+

+

(a)

(b)

1

1

s

s

Figure 6.17: Integrator in the loop method

positive constants. ThenW2 may be made proper by modifying it to

W2(s) = c(1+ rs)1+ sτ

, (6.135)

with τ � r a small positive time constant.This contrivance may be avoided by the block diagram substitution of Fig. 6.18 (Krause

1992). If W2 is nonproper then, in the SISO case,W−12 is necessarily proper, and there is no

difficulty in finding a state realization of the equivalent mixed sensitivity problem defined byFig. 6.18 with the modified plantP0 = W−1

2 P.Suppose that the modified problem is solved by the compensatorK0. Then the original

problem is solved by

K = K0W−12 . (6.136)

Inspection suggests that the optimal compensatorK has the zeros ofW2 as its poles. Becausethis is in fact not the case the extra poles introduced intoK cancel against corresponding zeros.This increase in complexity is the price of using the state space solution.

The example ofx 6.7.4 illustrates the procedure.

6.7.4 Example

We present a simple example to illustrate the mechanics of designing for integrating action andhigh-frequency roll-off. Consider the first-order plant

P(s) = ps+ p

, (6.137)

with p = 0.1. These are the design specifications:

1. Constant disturbance rejection.

Page 133: Multi Variable Control System Design Dmcs456

6.7. Integral control and high-frequency roll-off 285

WW

V

K

Ko Po

P W

w

y u

z2

z122 1

v

−+

+−1

Figure 6.18: Block diagram substitution for nonproperW2

2. A closed-loop bandwidth of 1.

3. A suitable time response to step disturbances without undue sluggishness or overshoot.

4. High-frequency roll-off of the compensator and input sensitivity at 1 decade/decade.

We use the integrator in the loop method ofx 6.7.2 to design for integrating action. Accordingly,the plant is modified to

P0(s) = ps(s+ p)

. (6.138)

Partial pole placement (seex 6.2.4, p. 253) is achieved by letting

V(s) = s2 + as+ bs(s+ p)

. (6.139)

The choicea = √2 andb = 1 places two closed-loop poles at the roots

12

√2(−1± j) (6.140)

of the numerator ofV. We plan these to be the dominant closed-loop poles in order to achieve asatisfactory time response and the required bandwidth.

To satisfy the high-frequency roll-off specification we let

W2(s) = c(1+ rs), (6.141)

with the positive constantsc andr to be selected13. It does not look as if anything needs to beaccomplished withW1 so we simply letW1(s) = 1.

Figure 6.19 shows the block diagram of the mixed sensitivity problem thus defined. It isobtained from Fig. 6.17(a) with the substitution of Fig. 6.18.

We can now see why in the eventual design the compensatorK has no pole at the zero−1/rof the weighting factorW2. The diagram of Fig. 6.19 defines a standard mixed sensitivity problemwith

P(s) =prc

s(s+ 1r )(s+ p)

, (6.142)

V(s) = s2 + as+ bs(s+ p)

= (s+ 1r )(s

2 + as+ b)

s(s+ 1r )(s+ p)

, (6.143)

13See also Exercise 6.7.1 (p. 287).

Page 134: Multi Variable Control System Design Dmcs456

286 Chapter 6.H∞-Optimization andµ-Synthesis

c +rs(1 ) s s+p( )

s s+p( )s +as+2 1

1

− y uuo

z2

z1

w

++

pK so( )

P so( )

Figure 6.19: Modified block diagram for the example

andW1(s) = W2(s) = 1.

If V is in the form of the rightmost side of (6.143) then its numerator has a zero at−1/r .By the partial pole assignment argument ofx 6.2.4 (p. 253) this zero reappears as a closed-looppole. Because this zero is also an open-loop pole it is necessarily a zero of the compensatorK0

that solves the mixed sensitivity problem of Fig. 6.19. This zero, in turn, cancels in the transferfunctionK of (6.136).

The system within the larger shaded block of Fig. 6.19 has the transfer matrix representation

z1 =[

s2 + as+ bs(s+ p)

ps(s+ p)

][w

u

]. (6.144)

This system has the minimal state realization

[x1

x2

]=

[0 01 −p

][x1

x2

]+[

ba− p

]w+

[p0

]u, (6.145)

z1 = [0 1

] [x1

x2

]+w. (6.146)

The block

u = 1c(1+ rs)

u0 (6.147)

may be represented in state form as

x3 = −1r

x3 + 1c

u0, (6.148)

u = 1r

x3. (6.149)

Combining the state differential equations (6.145) and (6.148) and arranging the output equationswe obtain the equations for the interconnection system that defines the standard problem as

Page 135: Multi Variable Control System Design Dmcs456

6.7. Integral control and high-frequency roll-off 287

follows:x1

x2

x3

=

0 0 p

r1 −p 00 0 − 1

r

︸ ︷︷ ︸A

x1

x2

x3

+

b

a− p0

︸ ︷︷ ︸B1

w+0

01c

︸︷︷︸B2

u0, (6.150)

z =[0 1 00 0 0

]︸ ︷︷ ︸

C1

x1

x2

x3

+

[10

]︸︷︷︸D11

w+[01

]︸︷︷︸D12

u0 (6.151)

y = [0 −1 0

]︸ ︷︷ ︸C2

x1

x2

x3

+ (−1)︸ ︷︷ ︸

D21

w. (6.152)

Let c = 1 andr = 0.1. Numerical computation of the optimal compensator yields the type Bsolution

K0(s) = 1.377× 1010s2 + 14.055× 1010s+ 2.820× 1010

s3 + 0.1142× 1010s2 + 1.3207× 1010s+ 1.9195× 1010(6.153)

Neglecting the terms3 in the denominator results in

K0(s) = 1.377s2 + 14.055s+ 2.8200.1142s2 + 1.3207s+ 1.91195

(6.154)

= 12.056(s+ 10)(s+ 0.2047)

(s+ 9.8552)(s+ 1.7049). (6.155)

Hence, the optimal compensator for the original problem is

K(s) = K0(s)sW2(s)

= 12.056(s+ 10)(s+ 0.2047)

(s+ 9.8552)(s+ 1.7049)· 10

s(s+ 10)(6.156)

= 120.56(s+ 0.2047)s(s+ 9.8552)(s+ 1.7049)

. (6.157)

The pole at−10 cancels, as expected, and the compensator has integrating action, as planned.The compensator has a high-frequency roll-off of 2 decades/decade.

Exercise 6.7.1 (Completion of the design).Complete the design of this example.

(a) For the compensator (6.157), compute and plot the sensitivity functionS, the input sensi-tivity function U, and the complementary sensitivity functionT.

(b) Check whether these functions behave satisfactorily, in particular, ifSandT do not peaktoo much. See if these functions may be improved by changing the values ofc andr bytrial and error.

(c) The roll-off of 2 decades/decade is more than called for by the design specifications. Thedesired roll-off of 1 decade/decade may be obtained with the constant weighting functionW2(s) = c. Recompute the optimal compensator for this weighting function withc = 1.

(d) Repeat (a) and (b) for the design of (c).

Page 136: Multi Variable Control System Design Dmcs456

288 Chapter 6.H∞-Optimization andµ-Synthesis

(e) Compute and plot the time responses of the outputz and the plant inputu to a unit stepdisturbancev in the configuration of Fig. 6.20 for the designs selected in (b) and (d).Discuss the results.

C Pu

v

z+

+

Figure 6.20: One-degree-of- freedom system with disturbancev

Exercise 6.7.2 (Roll-off in the double integrator example).In Example 6.6.1 (p. 276) thenumerical solution of the double integrator example ofx 6.2.5 (p. 254) is presented forr = 0.Extend this to the caser 6= 0, and verify (6.32). �

6.8 µ-Synthesis

6.8.1 Introduction

In x 5.8 (p. 240) the use of the structured singular value for the analysis of stability and perfor-mance robustness is explained. We briefly review the conclusions.

In Fig. 6.21(a), the interconnection blockH represents a control system, with structuredperturbations represented by the block1. The signalw is the external input to the control systemandz the control error. The input and output signals of the perturbation block are denotedpandq, respectively. The perturbations are assumed to have been scaled such that all possiblestructured perturbations are bounded by‖1‖∞ ≤ 1.

Let H1 denote the transfer function fromw to z (under perturbation1). Again, the transferfunction is assumed to have been scaled in such a way that the control system performance isdeemed satisfactory if‖H1‖∞ ≤ 1.

The central result established inx 5.8 is that the control system

1. is robustly stable, that is, is stable under all structured perturbations1 such that‖1‖∞ ≤ 1,and

2. has robust performance, that is,‖H1‖∞ ≤ 1 under all structured perturbations1 such that‖1‖∞ ≤ 1,

if and only if

µH < 1. (6.158)

The quantityµH , defined in (5.172), is the structured singular value ofH with respect to pertur-bations as in Fig. 6.21(b), with1 structured and10 unstructured.

Suppose that the control system performance and stability robustness may be modified byfeedback through a compensator with transfer matrixK, as in Fig. 6.22(a). The compensatorfeeds the measured outputy back to the control inputu. Denote byµHK the structured singular

Page 137: Multi Variable Control System Design Dmcs456

6.8. µ-Synthesis 289

value of the closed-loop transfer matrixHK of the system of Fig. 6.22(b) with respect to struc-tured perturbations1 and unstructured perturbations10. Thenµ-synthesisis any procedure toconstruct a compensatorK (if any exists) that stabilizes the nominal feedback system and makes

µHK < 1. (6.159)

(a) (b)

q pw z w z

q p

∆ ∆

∆0

HH

Figure 6.21:µ-Analysis for robust stability and performance.

∆ ∆

∆0(a) (b)

q p q p

w z w z

u y u yG

K

G

K

Figure 6.22:µ-Synthesis for robust stability and performance.

6.8.2 Approximateµ-synthesis for SISO robust performance

In x 5.8 the SISO single-degree-of-freedom configuration of Fig. 6.23 is considered. We analyzeits robustness with respect to proportional plant perturbations of the form

P −→ P(1+ δpW2) (6.160)

with ‖δP‖∞ ≤ 1. The system is deemed to have satisfactory performance robustness if

‖W1S‖∞ ≤ 1 (6.161)

for all perturbations, withS the sensitivity function of the closed-loop system.W1 andW2 aresuitable frequency dependent functions.

Page 138: Multi Variable Control System Design Dmcs456

290 Chapter 6.H∞-Optimization andµ-Synthesis

Ku

P

w

z−

++

Figure 6.23: Single-degree- of-freedom configuration

By determining the structured singular value it follows inx 5.8 that the closed-loop system isrobustly stable and has robust performance if and only if

µHK = supω∈R

(|W1( jω)S( jω)| + |W2( jω)T( jω)|) < 1. (6.162)

T is the complementary sensitivity function of the closed-loop system. Hence, the robust perfor-mance and stability problem consists of determining a feedback compensatorK (if any exists)that stabilizes the nominal system and satisfiesµHK < 1.

One way of doing this is tominimizeµHK with respect to all stabilizing compensatorsK.Unfortunately, this is not a standardH∞-optimal control problem. The problem cannot be solvedby the techniques available for the standardH∞ problem. In fact, no solution is known to thisproblem.

By the well-known inequality(a + b)2 ≤ 2(a2 + b2) for any two real numbersa andb itfollows that

(|W1S| + |W2T|)2 ≤ 2(|W1S|2 + |W2T|2). (6.163)

Hence,

supω∈R

(|W1( jω)S( jω)|2 + |W2( jω)T( jω)|2) < 12 (6.164)

impliesµHK < 1. Therefore, if we can find a compensatorK such that

supω∈R

(|W1( jω)S( jω)|2 + |W2( jω)T( jω)|2)1/2 =∥∥∥∥[

W1SW2T

]∥∥∥∥∞< 1

2

√2 (6.165)

then this compensator achieves robust performance and stability. Such a compensator, if anyexists, may be obtained by minimizing∥∥∥∥

[W1SW2T

]∥∥∥∥∞, (6.166)

which is nothing but a mixed sensitivity problem.Thus, the robust design problem has been reduced to a mixed sensitivity problem. This

reduction is not necessarily successful in solving the robust design problem — see Exercise 6.9.1(p. 294).

Exercise 6.8.1 (Proof of inequality).Prove the inequality(a + b)2 ≤ 2(a2 + b2) for any tworeal numbersa andb, for instance by application of the Cauchy-Schwarz inequality to the vectors(a, b) and (1, 1). �

Exercise 6.8.2 (Minimization of upper bound). Show that the upper bound on the right-handside of (6.163) may also be obtained from Summary 5.7.7(b) (p. 236). �

Page 139: Multi Variable Control System Design Dmcs456

6.8. µ-Synthesis 291

6.8.3 Approximate solution of theµ-synthesis problem

More complicated robust design problems than the simple SISO problem we discussed cannotbe reduced to a tractable problem so easily. Various approximate solution methods to theµ-synthesis problem have been suggested. The best known of these (Doyle 1984) relies on theproperty (see Summary 5.7.72–3)

µ(M) ≤ σ[ DMD−1], (6.167)

with D andD suitable diagonal matrices. The problem of minimizing

µHK = supω∈R

µ(HK( jω)) (6.168)

is now replaced with the problem of minimizing the bound

supω∈R

σ[ D( jω)HK( jω)D−1( jω)] = ‖DHK D−1‖∞, (6.169)

where for each frequencyω the diagonal matricesD( jω) andD( jω) are chosen so that the boundis the tightest possible. Minimizing (6.169) with respect toK is a standardH∞ problem, providedD andD are rational stable matrix functions.

Doyle’s method is based onD-K iteration:

Summary 6.8.3 (D-K iteration).

1. Choose an initial compensatorK that stabilizes the closed-loop system, and compute thecorresponding nominal closed-loop transfer matrixHK.

One way of finding an initial compensator is to minimize‖H0K‖∞ with respect to all sta-

bilizing K, whereH0K is the closed-loop transfer matrix of the configuration of Fig. 6.22

fromw to z with 1 = 10 = 0.

2. Evaluate the upper bound

minD( jω), D( jω)

σ[ D( jω)HK( jω)D−1( jω)], (6.170)

with D andD diagonal matrices as in Summary 5.7.7(c), for a number of values ofω on asuitable frequency grid. The maximum of this upper bound over the frequency grid is anestimate ofµHK .

If µHK is small enough then stop. Otherwise, continue.

3. On the frequency grid, fit stable minimum-phase rational functions to the diagonal entriesof D and D. Because of the scaling property it is sufficient to fit theirmagnitudesonly.The resulting extra freedom (in the phase) is used to improve the fit. Replace the originalmatrix functionsD andD with their rational approximations.

4. Given the rational approximationsD and D, minimize‖DHK D−1‖∞ with respect to allstabilizing compensatorsK. Denote the minimizing compensator asK and the correspond-ing closed-loop transfer matrix asHK . Return to 2.

Page 140: Multi Variable Control System Design Dmcs456

292 Chapter 6.H∞-Optimization andµ-Synthesis

Any algorithm that solves the standardH∞-problem exactly or approximately may be usedin step (d). The procedure is continued until a satisfactory solution is obtained. Convergence isnot guaranteed. The method may be implemented with routines provided in theµ-Tools toolbox(Balas, Doyle, Glover, Packard, and Smith 1991). A lengthy example is discussed inx 6.9.

The method is essentially restricted to “complex” perturbations, that is, perturbations corre-sponding to dynamic uncertainties. “Real” perturbations, caused by uncertain parameters, needto be overbounded by dynamic uncertainties.

6.9 An application ofµ-synthesis

6.9.1 Introduction

To illustrate the application ofµ-synthesis we consider the by now familiar SISO plant of Exam-ple 5.2.1 (p. 198) with transfer function

P(s) = gs2(1+ sθ)

. (6.171)

Nominally,g = g0 = 1 andθ = 0, so that the nominal plant transfer function is

P0(s) = g0

s2. (6.172)

In Example 5.2.2 (p. 199) root locus techniques are used to design a modified PD compensatorwith transfer function

C(s) = k + sTd

1+ sT0, k = 1, Td =

√2 = 1.414, T0 = 0.1. (6.173)

The corresponding closed-loop poles are−0.7652± j 0.7715 and−8.4679.In this section we explore howD–K iteration may be used to obtain an improved design that

achieves robust performance.

6.9.2 Performance specification

In Example 5.8.4 (p. 245) the robustness of the feedback system is studied with respect to theperformance specification

|S( jω)| ≤ 1|V( jω)| , ω ∈ R. (6.174)

The functionV is chosen as

1V

= (1+ ε)S0. (6.175)

S0 is the sensitivity function of the nominal closed-loop system and is given by

S0(s) = s2(1+ sT0)

χ0(s), (6.176)

with χ0(s) = T0s3 + s2 + g0Tds+ g0k the nominal closed-loop characteristic polynomial. Thenumberε is a tolerance.

Page 141: Multi Variable Control System Design Dmcs456

6.9. An application ofµ-synthesis 293

In Example 5.8.4 it is found that robust performance is guaranteed with a toleranceε = 0.25for parameter perturbations that satisfy

0.9 ≤ g ≤ 1.1, 0 ≤ θ < 0.1. (6.177)

The robustness test used in this example is based on a proportional loop perturbation model.In the present example we wish to redesign the system such that performance robustness is

achieved for a modified range of parameter perturbations, namely for

0.5 ≤ g ≤ 1.5, 0 ≤ θ < 0.05. (6.178)

For the purposes of this example we moreover change the performance specification model.Instead of relating performance to the nominal performance of a more or less arbitrary design wechoose the weighting functionV such that

1V(s)

= (1+ ε)s2

s2 + 2ζω0s+ ω20

. (6.179)

Again,ε is a tolerance. The constantω0 specifies the minimum bandwidth andζ determines themaximally allowable peaking. Numerically we chooseε = 0.25,ω0 = 1, andζ = 1

2.The design (6.173) doesnot meet the specification (6.174) withV given by (6.179) for the

range of perturbations (6.178). Figure 6.24 shows plots of the bound 1/|V| together with plotsof the perturbed sensitivity functionsS for the four extreme combinations of parameter values.In particular, the bound is violated when the value of the gaing drops to too small values.

10-1

100

101

102

-60

-40

-20

20

0

angular frequency [rad/s]

bound| |[dB]

S

Figure 6.24: Magnitude plot ofSfor different parameter value combinationsand the bound 1/V

6.9.3 Setting up the problem

To set up theµ-synthesis problem we first define the perturbation model. As seen in Example5.8.4 the loop gain has proportional perturbations

1P(s) = P(s)− P0(s)P0(s)

=g−g0

g0− sθ

1+ sθ. (6.180)

We bound the perturbations as|1P( jω)| ≤ |W0( jω)| for all ω ∈ R, with

W0(s) = α+ sθ0

1+ sθ0. (6.181)

Page 142: Multi Variable Control System Design Dmcs456

294 Chapter 6.H∞-Optimization andµ-Synthesis

The numberα, with α < 1, is a bound|g − g0|/g0 ≤ α for the relative perturbations ing, andθ0 is the largest possible value for the parasitic time constantθ. Numerically we letα = 0.5and θ0 = 0.05. Figure 6.25 shows plots of the perturbation|δP| for various combinations ofvalues of the uncertain parameters together with a magnitude plot ofW0. Figure 6.26 shows

10-1

100

101

102

-60

-40

-20

0

20

angular frequency [rad/s]

| |[dB]δP

bound

Figure 6.25: Magnitude plots of the perturbationsδP for different parametervalues and of the boundW0

the corresponding scaled perturbation model. It also includes the scaling functionV used tonormalize performance.

K

Wo δP

Po

V

y

p

uoz

+ +

++

q

w

u

Figure 6.26: Perturbation and performance model

Exercise 6.9.1 (Reduction to mixed sensitivity problem).In x 6.8.2 (p. 289) it is argued thatrobust performance such that‖SV‖∞ < 1 under proportional plant perturbations bounded by‖1P‖∞ ≤ ‖W0‖∞ is achieved iff

supω∈R

(|S( jω)V( jω)| + |T( jω)W0( jω)|) < 1. (6.182)

A sufficient condition for this is that

supω∈R

(|S( jω)V( jω)|2 + |T( jω)W0( jω)|2) < 12

√2. (6.183)

To see whether the latter condition may be satisfied we consider the mixed sensitivity problemconsisting of the minimization of‖H‖∞, with

H =[

SV−TW0

]. (6.184)

Page 143: Multi Variable Control System Design Dmcs456

6.9. An application ofµ-synthesis 295

(a) Show that the mixed sensitivity problem at hand may be solved as a standardH∞ problemwith generalized plant

G =

V P0

0 W0V−1P0

−V −P0

. (6.185)

(b) Find a (minimal) state representation ofG. (Hint: Compare (6.190) and (6.191).) Checkthat condition (b) ofx 6.5.5 (p. 274) needed for the state space solution of theH∞ problemis not satisfied. To remedy this, modify the problem by adding a third componentz1 = ρuto the error signalz, with ρ small.

(c) Solve theH∞-optimization problem for a small value ofρ, say,ρ = 10−6 or even smaller.Use the numerical values introduced earlier:g0 = 1, ε = 0.25,ω0 = 1, ζ = 0.5, α = 0.5,andθ0 = 0.05. Check that‖H‖∞ cannot be made less than about 0.8, and, hence, thecondition (6.183) cannot be met.

(d) Verify that the solution of the mixed sensitivity problem does not satisfy (6.182). Alsocheck that the solution does not achieve robust performance for the real parameter varia-tions (6.178).

The fact that a robust design cannot be obtained this way does not mean that a robust design doesnot exist. �

The next step in preparing forD–K iteration is to compute the interconnection matrixG asin Fig. 6.22 from the interconnection equations

pzy

= G

qw

u

. (6.186)

Inspection of the block diagram of Fig. 6.26 shows that

p = W0u,

z = Vw+ P0(q+ u), (6.187)

y = −Vw− P0(q+ u),

so thatp

zy

=

0 0 W0

P0 W1 P0

−P0 −W1 −P0

︸ ︷︷ ︸G

qw

u

. (6.188)

To apply the state space algorithm for the solution of theH∞ problem we need a state spacerepresentation ofG. For the outputz we have from the block diagram of Fig. 6.26

z= Vw+ P0u0 = 11+ ε

s2 + 2ζω0s+ ω20

s2w+ g0

s2u0. (6.189)

Page 144: Multi Variable Control System Design Dmcs456

296 Chapter 6.H∞-Optimization andµ-Synthesis

This transfer function representation may be realized by the two-dimensional state space system

[x1

x2

]=[0 10 0

][x1

x2

]+[ 2ζω0

1+ε 0ω2

01+ε g0

][w

u0

],

z= [1 0

] [x1

x2

]+ 1

1+ εw.

(6.190)

The weighting filterW0 may be realized as

x3 = − 1θ0

x3 + α− 1θ0

u,

p = u+ x3. (6.191)

Exercise 6.9.2 (State space representation).Derive or prove the state space representations(6.190) and (6.191). �

Using the interconnection equationu0 = q+ u we obtain from (6.190–6.191) the overall statedifferential and output equations

x =

0 1 0

0 0 0

0 0 − 1θ0

x+

0 2ζω0

1+ε 0

g0ω2

01+ε g0

0 0 α−1θ0

qw

u

,

p

zy

=

0 0 1

1 0 0−1 0 0

x+

0 0 1

0 11+ε 0

0 − 11+ε 0

qw

u

.

(6.192)

To initialize theD–K iteration we select the compensator that was obtained in Example 5.2.2(p. 199) with transfer function

K0(s) = k + sTd

1+ sT0. (6.193)

This compensator has the state space representation

x = − 1T0

x+ 1T0

y,

u = (k − Td

T0)x + Td

T0y. (6.194)

6.9.4 D–K iteration

TheD–K iteration procedure is initialized by defining a frequency axis, calculating the frequencyresponse of the initial closed-loop systemH0 on this axis, and computing and plotting the lowerand upper bounds of the structured singular value ofH0 on this frequency axis14. The calculationof the upper bound also yields the “D-scales.”

14The calculations were done with the MATLAB µ-tools toolbox. The manual (Balas, Doyle, Glover, Packard, andSmith 1991) offers a step-by-step explanation of the iterative process.

Page 145: Multi Variable Control System Design Dmcs456

6.9. An application ofµ-synthesis 297

10-2

10-1

100

101

102

103

0.8

1

1.2

1.4

angular frequency [rad/s]

µ

Figure 6.27: The structured singular value of the initial design

The plot of Fig. 6.27 shows that the structured singular value peaks to a value of about 1.3.Note that the computed lower and upper bounds coincide to the extent that they are indistin-guishable in the plot. Since the peak value exceeds 1, the closed-loop system does not haverobust performance, as we already know.

The next step is to fit rational functions to theD-scales. A quite good-looking fit may beobtained on the first diagonal entry ofD with a transfer function of order 2. The second diagonalentry is normalized to be equal to 1, and, hence, does not need to be fitted. Figure 6.28 showsthe calculated and fitted scales for the first diagonal entry. Because we have 1× 1 perturbation

10-2

10-1

100

101

102

103

10-4

10-2

100

102

angular frequency [rad/s]

| |D

D-scale

fit

Figure 6.28: Calculated and fittedD-scales

blocks only, the left and right scales are equal.The following step in theµ-synthesis procedure is to perform anH∞ optimization on the

scaled systemG1 = DGD−1. The result is a sub-optimal solution corresponding to the boundγ1 = 1.01.

In the search procedure for theH∞-optimal solution as described inx 6.4.6 (p. 269) thesearch is terminated when the optimal value of the search parameterγ has been reached withina prespecified tolerance. This tolerance should be chosen with care. Taking it too small not onlyprolongs the computation but — more importantly — results in a solution that is close to but notequal to the actualH∞ optimal solution. As seen inx 6.6 (p. 221), solutions that are close tooptimal often have large coefficients. These large coefficients make the subsequent calculations,in particular that of the frequency response of the optimal closed-loop system, ill-conditioned,and easily lead to erroneous results. This difficulty is a weak point of algorithms for the solutionof of theH∞ problem that do not provide the actual optimal solution.

Figure 6.29 shows the structured singular value of the closed-loop system that corresponds tothe compensator that has been obtained after the first iteration. The peak value of the structured

Page 146: Multi Variable Control System Design Dmcs456

298 Chapter 6.H∞-Optimization andµ-Synthesis

10-2

10-1

100

101

102

103

0.8

1

1.2

1.4

angular frequency [rad/s]

µ initial solution

first iteration

second iteration

Figure 6.29: Structured singular values for the three successive designs

singular value is about 0.97. This means that the design achieves robust stability.To increase the robustness margin anotherD–K iteration may be performed. Again theD-

scale may be fitted with a second-order rational function. TheH∞ optimization results in asuboptimal solution with levelγ2 = 0.91.

Figure 6.29 shows the structured singular value plots of the three designs we now have. Forthe third design the structured value has a peak value of about 0.9, well below the critical value1.

The plot for the structured singular value for the final design is quite flat. This appears to betypical for minimum-µ designs (Lin, Postlethwaite, and Gu 1993).

6.9.5 Assessment of the solution

The compensatorK2 achieves a peak structured singular value of 0.9. It therefore has robustperformance with a good margin. The margin may be improved by furtherD–K iterations. Wepause to analyze the solution that has been obtained.

Figure 6.30 shows plots of the sensitivity function of the closed-loop system with the com-pensatorK2 for the four extreme combinations of the values of the uncertain parametersg andθ.The plots confirm that the system has robust performance.

10-2

10-1

100

101

102

103

-100

-50

0

50

angular frequency [rad/s]

bound| |[dB]

S

Figure 6.30: Perturbed sensitivity functions forK2 and the bound 1/V

Figure 6.31 gives plots of the nominal system functionsS and U. The input sensitivityfunctionU increases to quite a large value, and has no high-frequency roll-off, at least not inthe frequency region shown. The plot shows that robust performance is obtained at the cost of alarge controller gain-bandwidth product. The reason for this is that the design procedure has noexplicit or implicit provision that restrains the bandwidth.

Page 147: Multi Variable Control System Design Dmcs456

6.9. An application ofµ-synthesis 299

10-2

10-1

100

101

102

103

-100

-50

0

50

100

angular frequency [rad/s]m

ag

nitu

de

[dB

]

U

S

Figure 6.31: Nominal sensitivityS and input sensitivityU for the compen-satorK2

10-2

10-1

100

101

102

103

-50

0

50

100

angular frequency [rad/s]

ma

gn

itu

de

[dB

]

UV

1/W2

Figure 6.32: Magnitude plots ofUV and the bound 1/W2

6.9.6 Bandwidth limitation

We revise the problem formulation to limit the bandwidth explicitly. One way of doing this isto impose bounds on the high-frequency behavior of the input sensitivity functionU. This, inturn, may be done by bounding the high-frequency behavior of the weighted input sensitivityUV. Figure 6.32 shows a magnitude plot ofUV for the compensatorK2. We wish to modify thedesign so thatUV is bounded as

|U( jω)V( jω)| ≤ 1|W2( jω)| , ω ∈ R, (6.195)

where

1W2(s)

= β

s+ω1. (6.196)

Numerically, we tentatively chooseβ = 100 andω1 = 1. Figure 6.32 includes the correspondingbound 1/|W2| on |U|.

To include this extra performance specification we modify the block diagram of Fig. 6.26to that of Fig. 6.33. The diagram includes an additional outputz2 while z has been renamedto z1. The closed-loop transfer function fromw to z2 is −W2UV. To handle the performancespecifications‖SV‖∞ ≤ 1 and‖W2UV‖∞ ≤ 1 jointly we impose the performance specification∥∥∥∥

[W1SVW2UV

]∥∥∥∥∞

≤ 1, (6.197)

whereW1 = 1. This is the familiar performance specification of the mixed sensitivity problem.

Page 148: Multi Variable Control System Design Dmcs456

300 Chapter 6.H∞-Optimization andµ-Synthesis

yK

u

z2

W2 W0p δP q

+

+

Po

V

+

+uo z1

w

Figure 6.33: Modified block diagram for extra performance specification

By inspection of Fig. 6.33 we see that the structured perturbation model used to design forrobust performance now is

[qw

]=[δP 0 00 δ1 δ2

]︸ ︷︷ ︸

1

p

z1

z2

, (6.198)

with δP, δ1 andδ2 independent scalar complex perturbations such that‖1‖∞ ≤ 1.

BecauseW2 is a nonproper rational function we modify the diagram to the equivalent diagramof Fig. 6.34 (comparex 6.7, p. 279). The compensatorK and the filterW2 are combined to a

yK W2

K~

z2

~uW2

−1

x4

Wo δPp

q

uoPo

w

V

z1

−+

+

+

+

Figure 6.34: Revised modified block diagram

new compensatorK = KW2, and any nonproper transfer functions now have been eliminated.It is easy to check that the open-loop interconnection system according to Fig. 6.34 may be

Page 149: Multi Variable Control System Design Dmcs456

6.9. An application ofµ-synthesis 301

represented as

x =

0 1 0 0

0 0 0 g0

0 0 − 1θ0

α−1θ0

0 0 0 −ω1

x+

0 2ζω01+ε 0

g0ω2

01+ε 0

0 0 0

0 0 β

qw

u

,

pz1

z2

y

=

0 0 1 11 0 0 00 0 0 0

−1 0 0 0

x+

0 0 0

0 11+ε 0

0 0 1

0 − 11+ε 0

qw

u

.

(6.199)

To let the structured perturbation1 have square diagonal blocks we rename the external inputw

tow1 and expand the interconnection system with a void additional external inputw2:

x =

0 1 0 0

0 0 0 g0

0 0 − 1θ0

α−1θ0

0 0 0 −ω1

x+

0 2ζω01+ε 0 0

g0ω2

01+ε 0 0

0 0 0 0

0 0 0 β

qw1

w2

u

,

pz1

z2

y

=

0 0 1 11 0 0 00 0 0 0

−1 0 0 0

x+

0 0 0 0

0 11+ε 0 0

0 0 0 1

0 − 11+ε 0 0

qw1

w2

u

.

(6.200)

The perturbation model now is[qw

]=[δP 00 10

][pz

]. (6.201)

with δp a scalar block and10 a full 2× 2 block.To obtain an initial design we do anH∞-optimal design on the system withq andp removed,

that is, on the nominal, unperturbed system. This amounts to the solution of a mixed sensitivityproblem. Nine iterations are needed to find a suboptimal solution according to the levelγ0 = 0.97.The latter number is less than 1, which means that nominal performance is achieved. The peakvalue of the structured singular value turns out to be 1.78, however, so that we do not have robustperformance.

One D–K iteration leads to a design with a reduced peak value of the structured singularvalue of 1.32 (see Fig. 6.35). Further reduction does not seem possible. This means that thebandwidth constraint is too severe.

To relax the bandwidth constraint we change the value ofβ in the bound 1/W2 as given by(6.196) from 100 to 1000. Starting withH∞-optimal initial design oneD-K iteration leads to adesign with a peak structured singular value of 1.1. Again, robust performance is not feasible.

Finally, after choosingβ = 10000 the same procedure results after oneD-K iteration in acompensator with a peak structured singular value of 1.02. This compensator very nearly hasrobust performance with the required bandwidth.

Figure 6.35 shows the structured singular value plots for the three compensators that aresuccessively obtained forβ = 100,β = 1000 andβ = 10000.

Page 150: Multi Variable Control System Design Dmcs456

302 Chapter 6.H∞-Optimization andµ-Synthesis

10-2

10-1

100

101

102

103

0.6

0.8

1

1.2

1.4

angular frequency [rad/s]

µβ = 100

β = 1000

β = 10000

Figure 6.35: Structured singular value plots for three successive designs

6.9.7 Order reduction of the final design

The limited bandwidth robust compensator has order 9. This number may be explained as fol-lows. The generalized plant (6.200) has order 4. In the (single)D–K iteration that yields thecompensator the “D-scale” is fitted by a rational function of order 2. Premultiplying the plant byD and postmultiplying by its inverse increases the plant order to 4+ 2+ 2 = 8. Central subop-timal compensatorsK for this plant hence also have order 8. This means that the correspondingcompensatorsK = KW−1

2 for the configuration of Fig. 6.33 have order 9.It is typical for the D-K algorithm that compensators of high order are obtained. This is

caused by the process of fitting rational scales. Often the order of the compensator may consid-erably be decreased without affecting performance or robustness.

Figure 6.36(a) shows the Bode magnitude and phase plots of the limited bandwidth robust

10

10

-2

-2

10

10

-1

-1

10

10

0

0

10

10

1

1

10

10

2

2

10

10

3

3

10

10

4

4

10

10

5

5-200

-100

0

100

ph

ase

[de

g]

angular frequency [rad/s]

10-4

10-2

100

102

104

ma

gn

itu

de

(a)

(a)

(b),(c)

(b),(c)

Figure 6.36: Exact (a) and approximate (b, c) frequency responses of thecompensator

compensator. The compensator is minimum-phase and has pole excess 2. Its Bode plot has breakpoints near the frequencies 1, 60 and 1000 rad/s. We construct a simplified compensator in twosteps:

Page 151: Multi Variable Control System Design Dmcs456

6.9. An application ofµ-synthesis 303

1. The break point at 1000 rad/s is caused by a large pole. We remove this pole by omittingthe leading terms9 from the denominator of the compensator. This large pole correspondsto a pole at∞ of the H∞-optimal compensator. Figure 6.36(b) is the Bode plot of theresulting simplified compensator.

2. The Bode plot that is now found shows that the compensator has pole excess 1. It hasbreak points at 1 rad/s (corresponding to a single zero) and at 60 rad/s (corresponding toa double pole). We therefore attempt to approximate the compensator transfer function bya rational function with a single zero and two poles. A suitable numerical routine fromMATLAB yields the approximation

K(s) = 6420s+ 0.6234

(s+ 22.43)2 + 45.312. (6.202)

Figure 6.36(c) shows that the approximation is quite good.Figure 6.37 gives the magnitudes of the perturbed sensitivity functions corresponding to the

four extreme combinations of parameter values that are obtained for this reduced-order compen-sator. The plots confirm that performance is very nearly robust.

Figure 6.38 displays the nominal performance. Comparison of the plot of the input sensitivityU with that of Fig. 6.31 confirms that the bandwidth has been reduced.

10-2

10-1

100

101

102

103

-100

-50

0

50

angular frequency [rad/s]

ma

gn

itu

de

[dB

]

bound

Figure 6.37: Perturbed sensitivity functions of the reduced-order limited-bandwidth design

10-2

10-1

100

101

102

103

-100

-50

0

50

angular frequency [rad/s]

ma

gn

itu

de

[dB

]

S

U

Figure 6.38: Nominal sensitivityS and input sensitivityU of the reduced-order limited- bandwidth design

Exercise 6.9.3 (Peak in the structured singular value plot).Figure 6.35 shows that the sin-gular value plot for the limited-bandwidth design has a small peak of magnitude slightly greater

Page 152: Multi Variable Control System Design Dmcs456

304 Chapter 6.H∞-Optimization andµ-Synthesis

than 1 near the frequency 1 rad/s. Inspection of Fig. 6.37, however, reveals no violation of theperformance specification onSnear this frequency. What is the explanation for this peak?�

6.9.8 Conclusion

The example illustrates thatµ-synthesis makes it possible to design for robust performance whileexplicitly and quantitatively accounting for plant uncertainty. The method cannot be used na¨ıvely,though, and considerable insight is needed.

A severe shortcoming is that the design method inherently leads to compensators of highorder, often unnecessarily high. At this time onlyad hocmethods are available to decrease theorder. The usual approach is to apply an order reduction algorithm to the compensator.

Exercise 6.9.4 (Comparison with other designs).The compensator (6.202) is not very dif-ferent from the compensators found in Example 5.2.2 (p. 199) by the root locus method, byH2

optimization inx 3.6.2 (p. 133). and by solution of a mixed sensitivity problem inx 6.2.5 (p. 254).

1. Compare the four designs by computing their closed-loop poles and plotting the sensitivityand complementary sensitivity functions in one graph.

2. Compute the stability regions for the four designs and plot them in one graph as in Fig. 6.5.

3. Discuss the comparative robustness and performance of the four designs.�

6.10 Appendix: Proofs

6.10.1 IntroductionIn this Appendix to Chapter 6 we present several proofs forx 6.4 (p. 264) andx 6.5 (p. 272).

6.10.2 Proofs forx 6.4We first prove the result of Summary 6.4.1 (p. 264).

Inequality for the∞-norm. By the definition of the∞-norm the inequality‖H‖∞ ≤ γ is equivalent to thecondition thatσ(H∼( jω)H( jω))≤ γ for ω ∈ R, with σ the largest singular value. This in turn is the sameas the condition thatλi (H∼ H) ≤ γ2 on the imaginary axis for alli , with λi the i th largest eigenvalue. Thisfinally amounts to the condition thatH∼ H ≤ γ2 I on the imaginary axis, which is part (a) of Summary 6.4.1.The proof of (b) is similar.

Preliminary to the proof of the basic inequality forH∞ optimal control of Summary 6.4.2 (p. 250)we establish the following lemma. The lemma presents, in polished form, a result originally obtained byMeinsma (Kwakernaak 1991).

Summary 6.10.1 (Meinsma’s lemma).Let 5 be a nonsingular para-Hermitian rational matrix functionthat admits a factorization5 = Z∼ J Z with

J =[

In 00 −Im

](6.203)

and Z square rational. Suppose thatV is an(n+ m)× n rational matrix of full normal column rank15 andW anm× (n+ m) rational matrix of full normal row rank such thatW∼V = 0. Then

15A rational matrix has full normal column rank if it has full column rank everywhere in the complex plane except ata finite number of points. A similar definition holds for full normal row rank.

Page 153: Multi Variable Control System Design Dmcs456

6.10. Appendix: Proofs 305

(a) V∼5V ≥ 0 on the imaginary axis if and only ifW∼5−1W ≤ 0 on the imaginary axis.

(b) V∼5V > 0 on the imaginary axis if and only ifW∼5−1W< 0 on the imaginary axis.

Proof of Meinsma’s lemma.(a) V∼5V ≥ 0 on the imaginary axis is equivalent to

V = Z−1

[AB

], (6.204)

with A ann × n nonsingular rational matrix andB ann × m rational matrix such thatA∼ A ≥ B∼ B on theimaginary axis. Define anm× m nonsingular rational matrixA and anm× n rational matrixB such that

[−B A]

[AB

]= 0. (6.205)

It follows that BA = AB, and, hence,A−1B = BA−1. From A∼ A ≥ B∼ B on the imaginary axis it followsthat‖BA−1‖∞ ≤ 1, so that also‖ A−1B‖∞ ≤ 1, and, hence,AA∼ ≥ BB∼ on the imaginary axis. Since

W∼V = W∼ Z−1

[AB

]= 0, (6.206)

it follows from (6.205) that we have

W∼ Z−1 = U[−B A], (6.207)

with U some square nonsingular rational matrix. It follows thatW∼ = U[−B A] Z, and, hence,

W∼5−1W = U( BB∼ − AA∼)U∼. (6.208)

This proves thatW∼5−1W ≤ 0 on the imaginary axis. The proof of (b) is similar.

We next consider the proof of the basic inequality of Summary 6.4.2 (p. 265) for the standardH∞optimal control problem.

Proof of Basic inequality.By Summary 6.4.1(b) (p. 264) the inequality‖H‖∞ ≤ γ is equivalent toH H∼ ≤γ2 I on the imaginary axis. Substitution of

H = G11 + G12K( I − G22K)−1︸ ︷︷ ︸L

G21, (6.209)

leads to

G11G∼11 + G11G∼

21L∼ + LG21G∼11 + LG21G∼

21L∼ ≤ γ2 I (6.210)

on the imaginary axis. This, in turn, is equivalent to

[ I L ]

[G11G∼

11 − γ2 I G11G∼21

G21G∼11 G21G∼

21

]︸ ︷︷ ︸

8

[I

L∼

]≤ 0 (6.211)

on the imaginary axis. Consider

8−1 =[811 812

821 822

]−1

(6.212)

=[

I 8128−122

0 I

][811 −8128

−122821 0

0 822

][I 0

8−122821 I

]. (6.213)

Page 154: Multi Variable Control System Design Dmcs456

306 Chapter 6.H∞-Optimization andµ-Synthesis

For the suboptimal solution to exist we need that

8128−122821 −811 = G11G

∼21(G21G

∼21)

−1G21G∼11 − G11G

∼11 + γ2 I ≥ 0 (6.214)

on the imaginary axis. This is certainly the case forγ sufficiently large. Also, obviously822 = G21G∼21 ≥ 0

on the imaginary axis. Then by a well-known theorem (see for instance Wilson (1978)) there exist squarerational matricesV1 andV2 such that

8128−122821 −811 = V∼

1 V1, 822 = V∼2 V2. (6.215)

Hence, we may write

8 =[

I 8128−122

0 I

][V∼

1 00 V∼

2

]︸ ︷︷ ︸

Z∼

[−I 00 I

][V1 00 V2

][I 0

8−122821 I

],︸ ︷︷ ︸

Z

(6.216)

which shows that8 admits a factorization. It may be proved that the matrix8 is nonsingular. Taking

V =[

IL

], W =

[−LI

], (6.217)

application of Summary 6.10.1(a) (p. 304) to (6.211) (replacing5 with 8) yields

[ −L∼ I ] 8−1

[IL

]≥ 0 (6.218)

on the imaginary axis. Since[−LI

]=[−G12K( I − G22K)−1

I

]=[0 −G12

I −G22

][IK

]( I − G22K)−1 (6.219)

we have

[K∼ I

] [ 0 I−G∼

12 −G∼22

][G11G∼

11 − γ2 I G11G∼21

G21G∼11 G21G∼

21

]−1 [0 −G12

I −G22

]︸ ︷︷ ︸

[IK

]≥ 0. (6.220)

This is what we set out to prove.

6.10.3 Proofs forx 6.5Our derivation of the factorization of5γ of x 6.5.3 (p. 272) for the state space version of the standardproblem is based on the Kalman-Jakuboviˇc-Popov equality of Summary 3.7.1 (p. 143) inx 3.7.2. Thisequality establishes the connection between factorizations and algebraic Riccati equations.

Proof (Factorization of5γ ). We are looking for a factorization of

5γ =[

0 I

−G∼12 −G∼

22

][G11G∼

11 − γ2 I G11G∼21

G21G∼11 G21G∼

21

]−1[0 −G12

I −G22

]. (6.221)

First factorization. As a first step, we determine a factorization[G11G∼

11 − γ2 I G11G∼21

G21G∼11 G21G∼

21

]= Z1J1Z∼

1 , (6.222)

such thatZ1 and Z−11 have all their poles in the closed left-half complex plane. Such a factorization is

sometimes called a spectralcofactorization. A spectral cofactor of a rational matrix5 may be obtained by

Page 155: Multi Variable Control System Design Dmcs456

6.10. Appendix: Proofs 307

first finding a spectral factorization of5T and then transposing the resulting spectral factor. For the problemat hand, it is easily found that[

G11(s)G∼11(s)− γ2 I G11(s)G∼

21(s)

G21(s)G∼11(s) G21(s)G∼

21(s)

]

=

H(sI − A)−1FFT(−sI − AT)−1HT − γ2 I 0 H(sI − A)−1FFT(−sI − AT)−1CT

0 −γ2 I 0

C(sI − A)−1FFT(−sI − AT)−1HT 0 C(sI − A)−1FFT(−sI − AT)−1CT + I

.

(6.223)

The transpose of the right-hand side may be written asR1 + H∼1 (s)W1H1(s), where

R1 =−γ2 I 0 0

0 −γ2 I 00 0 I

, H1(s) = FT(sI − AT)−1[ HT 0 CT], W1 = I . (6.224)

Appending the distinguishing subscript 1 to all matrices we apply the Kalman-Jakuboviˇc-Popov equality ofSummary 3.7.1 withA1 = AT, B1 = [ HT 0 CT], C1 = FT, D1 = 0, andR1 andW1 as displayed above.This leads to the algebraic Riccati equation

0 = AX1 + X1AT + FFT − X1(CTC − 1

γ2HT H)X1. (6.225)

The corresponding gainF1 and the factorV1 are

F1 =− 1

γ2 H0C

X1, V1(s) = I +

− 1

γ2 H0C

X1(sI − AT)−1[ HT 0 CT]. (6.226)

We need a symmetric solutionX1 of the Riccati equation (6.225) such thatAT + ( 1γ2 HT H − CTC)X1, or,

equivalently,

A1 = A+ X1(1γ2

HT H − CTC) (6.227)

is a stability matrix. By transposing the factorV1 we obtain the cofactorization (6.222), with

Z1(s)= VT1 (s) = I +

H

0C

(sI − A)−1X1

[− 1γ2 F 0 C

], (6.228)

J1 = R1 =−γ2 I 0 0

0 −γ2 I 00 0 I

. (6.229)

Second factorization.The next step in completing the factorization of5γ is to show, by invoking somealgebra, that

Z−11

[0 −G12

I −G22

]=

−H(sI − A1)−1CT −H(sI − A1)

−1B

0 −I

I − C(sI − A1)−1X1CT −C(sI − A1)

−1B

=0 0

0 −II 0

+

−H

0−C

(sI − A1)

−1[X1CT B

]. (6.230)

Page 156: Multi Variable Control System Design Dmcs456

308 Chapter 6.H∞-Optimization andµ-Synthesis

To obtain the desired factorization of5γ we again resort to the KJP equality, with (appending the distin-guishing subscript 2)

A2 = A1, B2 = [ X1CT B], R2 = 0, (6.231)

C2 =−H

0−C

, D2 =

0 0

0 −II 0

, W2 = J−1

1 =− 1

γ2 I 0 00 − 1

γ2 I 00 0 I

. (6.232)

This leads to the algebraic Riccati equation (inX2)

0 = AT1 X2 + X2 A1 − 1

γ2HT H + CTC

− ( X2X1 − I )CTC(X1X2 − I )+ γ2X2BBT X2. (6.233)

Substitution ofA1 from (6.227) and rearrangement yields

0 = AT X2 + X2A− ( X2X1 − I )1γ2

HT H(X1X2 − I )

+ X2X1(1γ2

HT H − CTC)X1X2 + γ2X2BBT X2. (6.234)

Substitution ofX1(1γ2 HT H − CTC)X1 from the Riccati equation (6.225) and further rearrangement result

in

0 = ( X2X1 − I )AT X2 + X2A(X1X2 − I )

+ ( X2X1 − I )1γ2

HT H(X1X2 − I )− X2(γ2BBT − FFT)X2. (6.235)

Postmultiplication of this expression byγ(X1X2 − I )−1, assuming that the inverse exists, and premultipli-cation by the transpose yields the algebraic Riccati equation

0 = AT X2 + X2A+ HT H − X2(BBT − 1γ2

FFT)X2, (6.236)

whereX2 = γ2X2(X1X2 − I )−1. After solving (6.236) forX2 we obtainX2 as X2 = X2(X1X2 − γ2 I )−1.The gainF2 and factorV2 for the spectral factorization of5γ are

F2 = L−12 (B

T2 X2 + DT

2 W2C2) =[

C(X1X2 − I )

−γ2BT X2

], (6.237)

V2(s)= I + F2(sI − A2)−1B2. (6.238)

We need to select a solutionX2 of the Riccati equation (6.233) such thatA2 − B2F2 is a stability matrix. Itis proved in the next subitem thatA2 − B2F2 is a stability matrix iff

A2 = A− BF2 = A+ (1γ2

FFT − BBT)X2 (6.239)

is a stability matrix, whereF2 = (BBT − 1γ2 FFT)X2 is the gain corresponding to the algebraic Riccati

equation (6.236).Since

L2 =[

I 00 − 1

γ2 I

], (6.240)

the desired spectral factorZγ of 5γ is

Zγ (s)=[

I 00 1

γI

]V2(s). (6.241)

Page 157: Multi Variable Control System Design Dmcs456

6.10. Appendix: Proofs 309

Invoking an amount of algebra it may be found that the inverseMγ of the spectral factorZγ may be ex-pressed as

Mγ (s)=[

I 00 γ I

]+[

C

−BT X2

](sI − A2)

−1( I − 1γ2

X1X2)−1[ X1C

T γB]. (6.242)

Choice of the solution of the algebraic Riccati equation.We require a solution to the algebraic Riccatiequation (6.233) such thatA2 − B2F2 is a stability matrix. Consider

A2 − B2F2 = A1 − X1CTC(X1X2 − I )+ γ2BBT X2. (6.243)

Substitution ofA1 = A+ 1γ2 X1HT H − X1CTC and X = X2(X1X2 − γ2 I )−1 results in

A2 − B2F2 = (AX1X2 − γ2A+ 1γ2

X1HT H X1X2 − X1HT H

−X1CTCX1X2 + γ2BBTX2)(X1X2 − γ2 I )−1. (6.244)

Substitution ofX1(1γ2 HT H − CTC)X1 from (6.225) results in

A2 − B2F2 = (−γ2A− X1HT H + γ2BBT X2 − X1AT X2 − FFTX2)(X1X2 − γ2 I )−1. (6.245)

Further substitution ofHT H + AT X2 from (6.236) and simplification yields

A2 − B2F2 = (X1X2 − γ2 I )(A− BBTX2 + 1γ2

FFT X2)(X1X2 − γ2 I )−1. (6.246)

Inspection shows thatA2 − B2F2 has the same eigenvalues asA2 = A − BBT X2 + 1γ2 FFT X2. Hence,

A2 − B2F2 is a stability matrix iff A2 is a stability matrix.

Page 158: Multi Variable Control System Design Dmcs456

310 Chapter 6.H∞-Optimization andµ-Synthesis

Page 159: Multi Variable Control System Design Dmcs456

A

Matrices

This appendix lists several matrix definitions, formulae and results that are used in the lecturenotes.

In what follows capital letters denote matrices, lower case letters denote column or row vec-tors or scalars. The element in theith row and jth column of a matrixA is denoted byAij .Whenever sumsA+ B and productsABetcetera are used then it is assumed that the dimensionsof the matrices are compatible.

A.1 Basic matrix results

Eigenvalues and eigenvectors

A column vectorv ∈ C n is aneigenvectorof a square matrixA ∈ C n×n if v 6= 0 andAv = λv forsomeλ ∈ C . In that caseλ is referred to as aneigenvalueof A. Oftenλi (A) is used to denotethe ith eigenvalue ofA (which assumes an ordering of the eigenvalues, an ordering that shouldbe clear from the context in which it is used). The eigenvalues are the zeros of thecharacteristicpolynomial

χA(λ) = det(λ In − A), (λ ∈ C , )

whereIn denotes then× n identitymatrix orunit matrix.An eigenvalue decompositionof a square matrixA is a decomposition ofA of the form

A = V DV−1, whereV andD are square andD is diagonal.

In this case the diagonal entries ofD are the eigenvalues ofA and the columns ofV are thecorresponding eigenvectors. Not every square matrixA has an eigenvalue decomposition.

The eigenvalues of a squareA and ofT AT−1 are the same for any nonsingularT. In particularχA = χT AT−1.

Rank, trace, determinant, singular and nonsingular matrices

The trace, tr(A) of a square matrixA ∈ C n×n is defined as tr(A) = ∑ni=1 Aii . It may be shown

that

tr(A) =n∑

i=1

λi (A).

311

Page 160: Multi Variable Control System Design Dmcs456

312 Appendix A. Matrices

Therank of a (possibly nonsquare) matrixA is the maximal number of linearly independentrows (or, equivalently, columns) inA. It also equals the rank of the square matrixAT A which inturn equals the number of nonzero eigenvalues ofAT A.

Thedeterminantof a square matrixA ∈ C n×n is usually defined (but not calculated) recur-sively by

det(A) ={ ∑n

j=1(−1) j+1A1 j det(Aminor1 j ) if n> 1

A if n = 1.

Here Aminori j is the (n − 1)× (n − 1)-matrix obtained fromA by removing itsith row and jth

column. The determinant of a matrix equals the product of its eigenvalues, det(A)=∏ni=1λi (A).

A square matrix issingular if det(A) = 0 and isregular or nonsingularif det(A) 6= 0. Forsquare matricesA andB of the same dimension we have

det(AB) = det(A)det(B).

Symmetric, Hermitian and positive definite matrices, the transpose and unitary matrices

A matrix A ∈ Rn×n is (real) symmetricif AT = A. Here AT is the transposeof A is defined

elementwise as(AT)i j = Aji , (i, j = 1, . . . ,n).A matrix A ∈ C n×n is Hermitian if AH = A. Here AH is thecomplex conjugate transpose

of A defined as(AH)i j = Aji (i, j = 1, . . . ,n). Overbarsx + jy of a complex numberx + jydenote the complex conjugate:x+ jy = x− jy.

Every real-symmetric and Hermitian matrixA has an eigenvalue decompositionA = V DV−1

and they have the special property that the matrixV may be chosenunitary which is that thecolumns ofV have unit length and are mutually orthogonal:VHV = I .

A symmetric or Hermitian matrixA is said to benonnegative definiteor positive semi-definiteif xH Ax ≥ 0 for all column vectorsx. We denote this by

A ≥ 0.

A symmetric or Hermitian matrixA is said to bepositive definiteif xH Ax> 0 for all nonzerocolumn vectorsx. We denote this by

A> 0.

For Hermitian matricesA andB the inequalityA ≥ B is defined to mean thatA− B ≥ 0.

Lemma A.1.1 (Nonnegative definite matrices).Let A∈ Cn×n be a Hermitian matrix. Then

1. All eigenvalues of A are real valued,

2. A≥ 0 ⇐⇒ λi (A) ≥ 0 (∀ i = 1, . . . ,n),

3. A> 0 ⇐⇒ λi (A) > 0 (∀ i = 1, . . . ,n),

4. If T is nonsingular then A≥ 0 if and only TH AT ≥ 0.

Page 161: Multi Variable Control System Design Dmcs456

A.2. Three matrix lemmas 313

A.2 Three matrix lemmas

Lemma A.2.1 (Eigenvalues of matrix products).Suppose A and BH are matrices of the samedimension n× m. Then for anyλ ∈ C there holds

det(λ In − AB) = λn−mdet(λ Im − BA). (A.1)

Proof. One the one hand we have[λ Im BA In

][Im − 1

λB

0 In

]=[λ Im 0A In − 1

λAB

]

and on the other hand[λ Im BA In

][Im 0

−A In

]=[λ Im − BA B

0 In

].

Taking determinants of both of these equations shows that

λmdet( In − 1λ

AB) = det

[λ Im BA In

]= det(λ Im − BA).

So thenonzeroeigenvalues ofAB and BA are the same. This gives the two very usefulidentities:

1. det( In − AB) = det( Im − BA),

2. tr(AB) =∑i λi (AB) =∑

j λ j(BA) = tr(BA).

Lemma A.2.2 (Sherman-Morrison-Woodburry & rank-one update).

(A+ UVH)−1 = A−1 − A−1U( I + VH A−1VH)A−1.

This formula is used mostly if U= u and V= v are column vectors. Then UVH = uvH has rankone, and it shows that a rank-one update of A corresponds to a rank-one update of its inverse,

(A+ uvH)−1 = A−1 − 11+ vH A−1u

(A−1u)(vH A−1)︸ ︷︷ ︸rank-one

.

Lemma A.2.3 (Schur complement).Suppose a Hermitian matrix A is partitioned as

A =[

P QQH R

]

with P and R square. Then

A> 0 ⇐⇒ P is invertible, P> 0 and R− QH P−1Q> 0.

The matrix R− QH P−1Q is referred to as theSchur complementof P (in A). �

Page 162: Multi Variable Control System Design Dmcs456

314 Appendix A. Matrices

Page 163: Multi Variable Control System Design Dmcs456

References

Abramowitz, M. and I. A. Stegun (1965).Handbook of Mathematical Functions with Formu-las, Graphs, and Mathematical Tables. Dover Publications, Inc., New York.

Anderson, B. D. O. and E. I. Jury (1976). A note on the Youla-Bongiorno-Lu condition.Automatica 12, 387–388.

Anderson, B. D. O. and J. B. Moore (1971).Linear Optimal Control. Prentice-Hall, Engle-wood Cliffs, NJ.

Anderson, B. D. O. and J. B. Moore (1990).Optimal Control: Linear Quadratic Methods.Prentice Hall, Englewood Cliffs, NJ.

Aplevich, J. D. (1991).Implicit Linear Systems, Volume 152 ofLecture Notes in Control andInformation Sciences. Springer-Verlag, Berlin. ISBN 0-387-53537-3.

Bagchi, A. (1993).Optimal control of stochastic systems. International Series in Systems andControl Engineering. Prentice Hall International, Hempstead, UK.

Balas, G. J., J. C. Doyle, K. Glover, A. Packard, and R. Smith (1991).User’s Guide,µ-Analysis and Synthesis Toolbox. The MathWorks, Natick, Mass.

Barmish, B. R. and H. I. Kang (1993). A survey of extreme point results for robustness ofcontrol systems.Automatica 29, 13–35.

Bartlett, A. C., C. V. Hollot, and H. Lin (1988). Root locations of an entire polytope of poly-nomials: It suffices to check the edges.Math. Control Signals Systems 1, 61–71.

Basile, G. and G. Marro (1992).Controlled and Conditioned Invariance in Linear SystemTheory. Prentice Hall, Inc., Englewood Cliffs, NJ.

Berman, G. and R. G. Stanton (1963). The asymptotes of the root locus.SIAM Review 5,209–218.

Białas, S. (1985). A necessary and sufficient condition for the stability of convex combinationsof polynomials or matrices.Bulletin Polish Acad. of Sciences 33, 473–480.

Black, H. S. (1934). Stabilized feedback amplifiers.Bell System Technical Journal 13, 1–18.

Blondel, V. (1994).Simultaneous Stabilization of Linear Systems, Volume 191 ofLectureNotes in Control and Information Sciences. Springer-Verlag, Berlin etc.

Bode, H. W. (1940). Relations between attenuation and phase in feedback amplifier design.Bell System Technical Journal 19, 421–454.

Bode, H. W. (1945).Network Analysis and Feedback Amplifier Design. Van Nostrand, NewYork.

Boyd, S. P. and C. H. Barratt (1991).Linear Controller Design: Limits of Performance. Pren-tice Hall, Englewood Cliffs, NJ.

315

Page 164: Multi Variable Control System Design Dmcs456

316 References

Bristol, E. H. (1966). On a new measure of interaction for multivariable process control.IEEETransactions on Automatic Control 11, 133–134.

Brockett, R. W. and M. D. Mesarovic (1965). The reproducibility of multivariable systems.Journal of Mathematical Analysis and Applications 11, 548–563.

Campo, P. J. and M. Morari (1994). Achievable closed-loop properties of systems under de-centralized control: conditions involving the steady-state gain.IEEE Transactions on Au-tomatic Control 39, 932–943.

Chen, C.-T. (1970).Introduction to Linear Systems Theory. Holt, Rinehart and Winston, NewYork.

Chiang, R. Y. and M. G. Safonov (1988).User’s Guide, Robust-Control Toolbox. The Math-Works, South Natick, Mass.

Chiang, R. Y. and M. G. Safonov (1992).User’s Guide, Robust Control Toolbox. The Math-Works, Natick, Mass., USA.

Choi, S.-G. and M. A. Johnson (1996). The root loci ofH∞-optimal control: A polynomialapproach.Optimal Control Applications and Methods 17, 79–105.

Control Toolbox (1990).User’s Guide. The MathWorks, Natick, Mass., USA.

Dahleh, M. A. and Y. Ohda (1988). A necessary and sufficient condition for robust BIBOstability.Systems & Control Letters 11, 271–275.

Davison, E. J. (1996). Robust servomechanism problem. InThe Control Handbook, pp. 731–747. CRC press.

Davison, E. J. and S. H. Wang (1974). Properties and calculation of transmission zeros oflinear multivariable systems.Automatica 10, 643–658.

Davison, E. J. and S. H. Wang (1978). An algorithm for the calculation of transmission ze-ros of a the system (C,A,B,D) using high gain output feedback.IEEE Transactions onAutomatic Control 23, 738–741.

D’Azzo, J. J. and C. H. Houpis (1988).Linear Control System Analysis and Design: Conven-tional and Modern. McGraw-Hill Book Co., New York, NY. Third Edition.

Desoer, C. A. and J. D. Schulman (1974). Zeros and poles of matrix transfer functions andtheir dynamical interpretations.IEEE Transactions on Circuits and Systems 21, 3–8.

Desoer, C. A. and M. Vidyasagar (1975).Feedback Systems: Input-Output Properties. Aca-demic Press, New York.

Desoer, C. A. and Y. T. Wang (1980). Linear time-invariant robust servomechanism prob-lem: a self-contained exposition. In C. T. Leondes (Ed.),Control and Dynamic Systems,Volume 16, pp. 81–129. Academic Press, New York, NY.

Dorf, R. C. (1992).Modern Control Systems. Addison-Wesley Publishing Co., Reading, MA.Sixth Edition.

Doyle, J. C. (1979). Robustness of multiloop linear feedback systems. InProc. 17th IEEEConf. Decision & Control, pp. 12–18.

Doyle, J. C. (1982). Analysis of feedback systems with structured uncertainties.IEE Proc.,Part D 129, 242–250.

Doyle, J. C. (1984). Lecture Notes, ONR/Honeywell Workshop on Advances in MultivariableControl, Minneapolis, Minn.

Page 165: Multi Variable Control System Design Dmcs456

References 317

Doyle, J. C., B. A. Francis, and A. R. Tannenbaum (1992).Feedback Control Theory. Macmil-lan, New York.

Doyle, J. C., K. Glover, P. P. Khargonekar, and B. A. Francis (1989). State-space solutions tostandardH2 andH∞ control problems.IEEE Trans. Aut. Control 34, 831–847.

Doyle, J. C. and G. Stein (1981). Multivariable feedback design: concepts for a classi-cal/modern synthesis.IEEE Transactions on Automatic Control 26, 4–16.

Engell, S. (1988).Optimale lineare Regelung: Grenzen der erreichbaren Regelgute in li-nearen zeitinvarianten Regelkreisen, Volume 18 ofFachberichte Messen–Steuern–Regeln.Springer-Verlag, Berlin, etc.

Evans, W. R. (1948). Graphical analysis of control systems.Trans. Amer. Institute of ElectricalEngineers 67, 547–551.

Evans, W. R. (1950). Control system synthesis by root locus method.Trans. Amer. Institute ofElectrical Engineers 69, 66–69.

Evans, W. R. (1954).Control System Dynamics. McGraw-Hill Book Co., New York, NY.

Follinger, O. (1958).Uber die Bestimmung der Wurzelortskurve.Regelungstechnik 6, 442–446.

Francis, B. A. and W. M. Wonham (1975). The role of transmission zeros in linear multivari-able regulators.International Journal of Control 22, 657–681.

Franklin, G. F., J. D. Powell, and A. Emami-Naeini (1986).Feedback Control of DynamicSystems. Addison-Wesley Publ. Co., Reading, MA.

Franklin, G. F., J. D. Powell, and A. Emami-Naeini (1991).Feedback Control of DynamicSystems. Addison-Wesley Publ. Co., Reading, MA. Second Edition.

Freudenberg, J. S. and D. P. Looze (1985). Right half plane poles and zeros and design trade-offs in feedback systems.IEEE Trans. Automatic Control 30, 555–565.

Freudenberg, J. S. and D. P. Looze (1988).Frequency Domain Properties of Scalar and Mul-tivariable Feedback Systems, Volume 104 ofLecture Notes in Control and InformationSciences. Springer-Verlag, Berlin, etc.

Gantmacher, F. R. (1964).The Theory of Matrices. Chelsea Publishing Company, New York,NY. Reprinted.

Glover, K. and J. C. Doyle (1988). State-space formulae for all stabilizing controllers thatsatisfy anH∞-norm bound and relations to risk sensitivity.Systems & Control Letters 11,167–172.

Glover, K. and J. C. Doyle (1989). A state space approach toH∞ optimal control. In H. Ni-jmeijer and J. M. Schumacher (Eds.),Three Decades of Mathematical System Theory,Volume 135 ofLecture Notes in Control and Information Sciences. Springer-Verlag, Hei-delberg, etc.

Glover, K., D. J. N. Limebeer, J. C. Doyle, E. M. Kasenally, and M. G. Safonov (1991). Acharacterization of all solutions to the four block general distance problem.SIAM J. Con-trol and Optimization 29, 283–324.

Gohberg, I., P. Lancaster, and L. Rodman (1982).Matrix polynomials. Academic Press, NewYork, NY.

Golub, G. M. and C. Van Loan (1983).Matrix Computations. The Johns Hopkins UniversityPress, Baltimore, Maryland.

Page 166: Multi Variable Control System Design Dmcs456

318 References

Graham, D. and R. C. Lathrop (1953). The synthesis of ‘optimum’ transient response: Criteriaand standard forms.Trans. AIEE Part II (Applications and Industry) 72, 273–288.

Green, M. and D. J. N. Limebeer (1995).Linear Robust Control. Prentice Hall, EnglewoodCliffs.

Grimble, M. J. (1994). Two and a half degrees of freedom LQG controller and application towind turbines.IEEE Trans. Aut. Control 39(1), 122–127.

Grosdidier, P. and M. Morari (1987). A computer aided methodology for the design of decen-tralized controllers.Computers and Chemical Engineering 11, 423–433.

Grosdidier, P., M. Morari, and B. R. Holt (1985). Closed-loop properties from steady-stategain information.Industrial and Engineering Chemistry. Fundamentals 24, 221–235.

Hagander, P. and B. Bernhardsson (1992). A simple test example forH∞ optimal control. InPreprints, 1992 American Control Conference, Chicago, Illinois.

Henrici, P. (1974).Applied and Computational Complex Analysis, Vol. 1. Wiley-Interscience,New York.

Hinrichsen, D. and A. J. Pritchard (1986). Stability radii of linear systems.Systems & ControlLetters 7, 1–10.

Horowitz, I. (1982, November). Quantitative feedback theory.IEE Proc. Pt. D 129, 215–226.

Horowitz, I. and A. Gera (1979). Blending of uncertain nonminimum-phase plants for elimi-nation or reduction of nonminimum-phase property.International Journal of Systems Sci-ence 10, 1007–1024.

Horowitz, I. and M. Sidi (1972). Synthesis of feedback systems with large plant ignorance forprescribed time domain tolerances.Int. J. Control 126, 287–309.

Horowitz, I. M. (1963).Synthesis of of Feedback Systems. Academic Press, New York.

Hovd, M. and S. Skogestad (1992). Simple frequency-dependent tools for control systemanalysis, structure selection and design.Automatica 28, 989–996.

Hovd, M. and S. Skogestad (1994). Pairing criteria for decentralized control of unstable plants.Industrial and Engineering Chemistry Research 33, 2134–2139.

Hsu, C. H. and C. T. Chen (1968). A proof of the stability of multivariable feedback systems.Proceedings of the IEEE 56, 2061–2062.

James, H. M., N. B. Nichols, and R. S. Philips (1947).Theory of Servomechanisms. McGraw-Hill, New York.

Kailath, T. (1980).Linear Systems. Prentice Hall, Englewood Cliffs, N. J.

Kalman, R. E. and R. S. Bucy (1961). New results in linear filtering and prediction theory.J.Basic Engineering, Trans. ASME Series D 83, 95–108.

Karcanias, N. and C. Giannakopoulos (1989). Necessary and sufficient conditions for zeroassignment by constant squaring down.Linear Algebra and its Applications 122/123/124,415–446.

Kharitonov, V. L. (1978a). Asymptotic stability of an equilibrium position of a family ofsystems of linear differential equations.Differential’nye Uraveniya 14, 1483–1485.

Kharitonov, V. L. (1978b). On a generalization of a stability criterion.Akademii Nauk KahskoiSSR, Fiziko-matematicheskaia 1, 53–57.

Korn, U. and H. H. Wilfert (1982).Mehrgrossenregelungen. Moderne Entwurfsprinzipien imZeit- und Frequenzbereich. Verlag Technik, Berlin.

Page 167: Multi Variable Control System Design Dmcs456

References 319

Krall, A. M. (1961). An extension and proof of the root-locus method.Journal Soc. In-dustr. Applied Mathematatics 9, 644–653.

Krall, A. M. (1963). A closed expression for the root locus method.Journal Soc. Industr.Applied Mathematics 11, 700–704.

Krall, A. M. (1970). The root-locus method: A survey.SIAM Review 12, 64–72.

Krall, A. M. and R. Fornaro (1967). An algorithm for generating root locus diagrams.Com-munications of the ACM 10, 186–188.

Krause, J. M. (1992). Comments on Grimble’s comments on Stein’s comments on rolloff ofH∞ optimal controllers.IEEE Trans. Auto. Control 37, 702.

Kwakernaak, H. (1976). Asymptotic root loci of optimal linear regulators.IEEE Trans. Aut.Control 21(3), 378–382.

Kwakernaak, H. (1983). Robustness optimization of linear feedback systems. InPreprints,22nd IEEE Conference of Decision and Control, San Antonio, Texas, USA.

Kwakernaak, H. (1985). Minimax frequency domain performance and robustness optimiza-tion of linear feedback systems.IEEE Trans. Auto. Control 30, 994–1004.

Kwakernaak, H. (1986). A polynomial approach to minimax frequency domain optimizationof multivariable systems.Int. J. Control 44, 117–156.

Kwakernaak, H. (1991). The polynomial approach toH∞-optimal regulation. In E. Mosca andL. Pandolfi (Eds.),H∞-Control Theory, Volume 1496 ofLecture Notes in Mathematics,pp. 141–221. Springer-Verlag, Heidelberg, etc.

Kwakernaak, H. (1993). Robust control andH∞-optimization.Automatica 29, 255–273.

Kwakernaak, H. (1995). Symmetries in control system design. In A. Isidori (Ed.),Trends inControl — A European Perspective, pp. 17–51. Springer, Heidelberg, etc.

Kwakernaak, H. (1996). Frequency domain solution of the standardH∞ problem. In M. J.Grimble and V. Kucera (Eds.),Polynomial Methods for Control Systems Design. Springer-Verlag.

Kwakernaak, H. and R. Sivan (1972).Linear Optimal Control Systems. Wiley, New York.

Kwakernaak, H. and R. Sivan (1991).Modern Signals and Systems. Prentice Hall, EnglewoodCliffs, NJ.

Kwakernaak, H. and J. H. Westdijk (1985). Regulability of a multiple inverted pendulumsystem.Control – Theory and Advanced Technology 1, 1–9.

Landau, I. D., F. Rolland, C. Cyrot, and A. Voda (1993). R´egulation num´erique robuste: Leplacement de pˆoles avec calibrage de la fonction de sensibilit´e. In A. Oustaloup (Ed.),Summer School on Automatic Control of Grenoble. Hermes. To be published.

Laub, A. J. and B. C. Moore (1978). Calculation of transmission zeros using QZ techniques.Automatica 14, 557–566.

Le, D. K., O. D. I. Nwokah, and A. E. Frazho (1991). Multivariable decentralized integralcontrollability.International Journal of Control 54, 481–496.

Le, V. X. and M. G. Safonov (1992). Rational matrix GCD’s and the design of squaring-down compensators- a state-space theory.IEEE Transactions on Automatic Control 37,384–392.

Lin, J.-L., I. Postlethwaite, and D.-W. Gu (1993).µ–K Iteration: A new algorithm forµ-synthesis.Automatica 29, 219–224.

Page 168: Multi Variable Control System Design Dmcs456

320 References

Luenberger, D. G. (1969).Optimization by Vector Space Methods. John Wiley, New York.

Lunze, J. (1985). Determination of robust multivariable I-controllers by means of experimentsand simulation.Systems Analysis Modelling Simulation 2, 227–249.

Lunze, J. (1988).Robust Multivariable Feedback Control. Prentice Hall International Ltd,London, UK.

Lunze, J. (1989).Robust Multivariable Feedback Control. International Series in Systems andControl Engineering. Prentice Hall International, Hempstead, UK.

MacDuffee, C. C. (1956).The Theory of Matrices. Chelsea Publishing Company, New York,N.Y., USA.

MacFarlane, A. G. J. and N. Karcanias (1976). Poles and zeros of linear multivariable systems:a survey of the algebraic, geometric and complex-variable theory.International Journalof Control 24, 33–74.

Maciejowski, J. M. (1989).Multivariable Feedback Design. Addison-Wesley PublishingCompany, Wokingham, UK.

McAvoy, T. J. (1983).Interaction Analysis. Principles and Applications. Instrument Soc.Amer., Research Triangle Park, NC.

McFarlane, D. C. and K. Glover (1990).Robust Controller Design Using Normalized CoprimeFactor Plant Descriptions, Volume 146. Springer-Verlag, Berlin, etc.

Middleton, R. H. (1991). Trade-offs in linear control system design.Automatica 27, 281–292.

Minnichelli, R. J., J. J. Anagnost, and C. A. Desoer (1989). An elementary proof of Kha-ritonov’s stability theorem with extensions.IEEE Trans. Aut. Control 34, 995–1001.

Morari, M. (1985). Robust stability of systems with integral control.IEEE Transactions onAutomatic Control 30, 574–577.

Morari, M. and E. Zafiriou (1989).Robust Process Control. Prentice Hall International Ltd,London, UK.

Nett, C. N. and V. Manousiouthakis (1987). Euclidean condition and block relative gain:connections, conjectures, and clarifications.IEEE Transactions on Automatic Control 32,405–407.

Noble, B. (1969).Applied Linear Algebra. Prentice Hall, Englewood Cliffs, NJ.

Nwokah, O. D. I., A. E. Frazho, and D. K. Le (1993). A note on decentralized integral con-trollability. International Journal of Control 57, 485–494.

Nyquist, H. (1932). Regeneration theory.Bell System Technical Journal 11, 126–147.

Ogata, K. (1970).Modern Control Engineering. Prentice Hall, Englewood Cliffs, NJ.

Packard, A. and J. C. Doyle (1993). The complex structured singular value.Automatica 29,71–109.

Postlethwaite, I., J. M. Edmunds, and A. G. J. MacFarlane (1981). Principal gains and princi-pal phases in the analysis of linear multivariable feedback systems.IEEE Transactions onAutomatic Control 26, 32–46.

Postlethwaite, I. and A. G. J. MacFarlane (1979).A Complex Variable Approach to the Analy-sis of Linear Multivariable Feedback Systems, Volume 12 ofLecture Notes in Control andInformation Sciences. Springer-Verlag, Berlin, etc.

Postlethwaite, I., M. C. Tsai, and D.-W. Gu (1990). Weighting function selection inH∞ de-sign. InProceedings, 11th IFAC World Congress, Tallinn. Pergamon Press, Oxford, UK.

Page 169: Multi Variable Control System Design Dmcs456

References 321

Qiu, L., B. Bernhardsson, A. Rantzer, E. J. Davison, P. M. Young, and J. C. Doyle (1995). Aformula for computation of the real stability radius.Automatica 31, 879–890.

Raisch, J. (1993).Mehrgrossenregelung im Frequenzbereich. R.Oldenbourg Verlag, M¨unchen,BRD.

Rosenbrock, H. H. (1970).State-space and Multivariable Theory. Nelson, London.

Rosenbrock, H. H. (1973). The zeros of a system.International Journal of Control 18, 297–299.

Rosenbrock, H. H. (1974a).Computer-aided Control System Design. Academic Press, Lon-don.

Rosenbrock, H. H. (1974b). Correction on ”The zeros of a system”.International Journal ofControl 20, 525–527.

Saberi, A., B. M. Chen, and P. Sannuti (1993).Loop transfer recovery: Analysis and design.Prentice Hall, Englewood Cliffs, N. J.

Saberi, A., P. Sannuti, and B. M. Chen (1995).H2 optimal control. Prentice Hall, EnglewoodCliffs, N. J. ISBN 0-13-489782-X.

Safonov, M. G. and M. Athans (1981). A multiloop generalization of the circle criterion forstability margin analysis.IEEE Trams. Automatic Control 26, 415–422.

Sain, M. K. and C. B. Schrader (1990). The role of zeros in the performance of multiin-put,multioutput feedback systems.IEEE Transactions on Education 33, 244–257.

Savant, C. J. (1958).Basic Feedback Control System Design. McGraw-Hill Book Co., NewYork, NY.

Schrader, C. B. and M. K. Sain (1989). Research on system zeros: A survey.InternationalJournal of Control 50, 1407–1433.

Sebakhy, O. A., M. ElSingaby, and I. F. ElArabawy (1986). Zero placement and squaringproblem: a state space approach.International Journal of Systems Science 17, 1741–1750.

Shaked, U. (1976). Design techniques for high feedback gain stability.International Journalof Control 24, 137–144.

Silverman, L. M. (1970). Decoupling with state feedback and precompensation.IEEE Trans.Automatic Control AC-15, 487–489.

Sima, V. (1996).Algorithms for linear-quadratic optimization. Marcel Dekker, New York-Basel. ISBN 0-8247-9612-8.

Skogestad, S., M. Hovd, and P. Lundstr¨om (1991). Simple frequency-dependent tools foranalysis of inherent control limitations.Modeling Identification and Control 12, 159–177.

Skogestad, S. and M. Morari (1987). Implications of large RGA elements on control perfor-mance.Industrial and Engineering Chemistry Research 26, 2323–2330.

Skogestad, S. and M. Morari (1992). Variable selection for decentralized control.ModelingIdentification and Control 13, 113–125.

Skogestad, S. and I. Postlethwaite (1995).Multivariable Feedback Control. Analysis and De-sign. John Wiley and Sons Ltd., Chichester, Sussex, UK.

Stoorvogel, A. A. (1992).The H-Infinity Control Problem: A State Space Approach. PrenticeHall, Englewood Cliffs, NJ.

Stoorvogel, A. A. and J. H. A. Ludlage (1994). Squaring down and the problems of almostzeros for continuous time systems.Systems and Control Letters 23, 381–388.

Page 170: Multi Variable Control System Design Dmcs456

322 References

Svaricek, F. (1995). Zuverlassige numerische Analyse linearer Regelungssysteme.B.G.Teubner Verlagsgesellschaft, Stuttgart, BRD.

Thaler, G. J. (1973).Design of Feedback Systems. Dowden, Hutchinson, and Ross, Strouds-burg, PA.

Tolle, H. (1983).Mehrgrossen-Regelkreissynthese. Band I. Grundlagen und Frequenzbere-ichsverfahren. R.Oldenbourg Verlag, M¨unchen, BRD.

Tolle, H. (1985). Mehrgrossen-Regelkreissynthese. Band II: Entwurf im Zustandsraum.R.Oldenbourg Verlag, M¨unchen, BRD.

Trentelman, H. L. and A. A. Stoorvogel (1993, November). Control theory for linear systems.Vakgroep Wiskunde, RUG. Course for the Dutch Graduate Network on Systems and Con-trol, Winter Term 1993–1994.

Truxal, J. G. (1955).Automatic Feedback Control System Synthesis. McGraw-Hill Book Co.,New York, NY.

Van de Vegte, J. (1990).Feedback control systems. Prentice-Hall, Englewood Cliffs, N. J.Second Edition.

Vardulakis, A. I. G. (1991).Linear Multivariable Control: Algebraic Analysis and SynthesisMethods. John Wiley and Sons Ltd., Chichester, Sussex, UK.

Verma, M. and E. Jonckheere (1984).L∞-compensation with mixed sensitivity as a broad-band matching problem.Systems & Control Letters 14, 295–306.

Vidyasagar, M., H. Schneider, and B. A. Francis (1982). Algebraic and topological aspects offeedback stabilization.IEEE Trans. Aut. Control 27, 880–894.

Vidysagar, M. (1985).Control Systems Synthesis—A Factorization Approach. MIT Press,Cambridge, MA.

Williams, T. and P. J. Antsaklis (1996). Decoupling. InThe Control Handbook, pp. 795–804.CRC press.

Williams, T. W. C. and P. J. Antsaklis (1986). A unifying approach to the decoupling of linearmultivariable systems.Internat. J. Control 44(1), 181–201.

Wilson, G. T. (1978). A convergence theorem for spectral factorization.J. MultivariateAnal. 8, 222–232.

Wolovich, W. A. (1974).Linear Multivariable Systems. Springer Verlag, New York, NY.

Wong, E. (1983).Introduction to random processes. Springer-Verlag, New York.

Wonham, W. M. (1979).Linear Multivariable Control: A Geometric Approach. Springer-Verlag, New York, etc.

Youla, D. C., J. J. Bongiorno, and C. N. Lu (1974). Single-loop feedback-stabilization oflinear multivariable dynamical plants.Automatica 10, 159–173.

Young, P. M., M. P. Newlin, and J. C. Doyle (1995a). Computing bounds for the mixedµ

problem.Int. J. of Robust and Nonlinear Control 5, 573–590.

Young, P. M., M. P. Newlin, and J. C. Doyle (1995b). Let’s get real. In B. A. Francis and P. P.Khargonekar (Eds.),Robust Control Theory, Volume 66 ofIMA Volumes in Mathematicsand its Applications, pp. 143–173. Springer-Verlag.

Zames, G. (1981). Feedback and optimal sensitivity: Model reference transformations, multi-plicative seminorms, and approximate inverses.IEEE Trans. Aut. Control 26, 301–320.

Page 171: Multi Variable Control System Design Dmcs456

References 323

Zhou, K., J. C. Doyle, and K. Glover (1996).Robust and Optimal Control. Prentice Hall,Englewood Cliffs. ISBN 0-13-456567-3.

Ziegler, J. G. and N. B. Nichols (1942). Optimum settings for automatic controllers.Trans.ASME, J. Dyn. Meas. and Control 64, 759–768.

Page 172: Multi Variable Control System Design Dmcs456

324 References

Page 173: Multi Variable Control System Design Dmcs456

Index

∼, 12411

2-degrees-of-freedom, 512-degrees-of-freedom, 11

acceleration constant, 63action, 173additive perturbation model, 209adjoint

of rational matrix, 264algebraic Riccati equation, 105amplitude, 169ARE, 105

solution, 114asymptotic stability, 12

Bezout equation, 16bandwidth, 28, 80

improvement, 8basic perturbation model, 208BIBO stability, 13biproper, 268Blaschke product, 40Bode

diagram, 70gain-phase relationship, 35magnitude plot, 70phase plot, 70sensitivity integral, 37

Butterworth polynomial, 90

canonical factorization, 268central solution (ofH∞ problem), 267certainty equivalence, 121characteristic polynomial, 14, 311classical control theory, 59closed-loop

characteristic polynomial, 15system matrix, 107transfer matrix, 33

complementarysensitivity function, 23

sensitivity matrix, 33complex conjugate transpose, 312constant disturbance model, 280constant disturbance rejection, 65constant error suppression, 280contraction, 211control

decentralized-, 185controllability gramian, 174crossover region, 32

decentralizedcontrol, 156, 185integral action, 188

decouplingcontrol, 156indices, 180

delaymargin, 21time, 81

derivative time, 66Descartes’ rule of sign, 201descriptor representation, 276determinant, 311DIC, 188disturbance condition number, 185D-K iteration, 291D-scaling upper bound, 238

edge theorem, 205eigenvalue, 311

decomposition, 311eigenvector, 311elementary column operation, 159elementary row operation, 159energy, 169equalizing property, 275error covariance matrix, 118error signal, 5Euclidean norm, 168

325

Page 174: Multi Variable Control System Design Dmcs456

326 Index

feedback equation, 6fixed point theorem, 211forward compensator, 5Freudenberg-looze equality, 40full perturbation, 218functional

controllability, 178reproducibility, 178

FVE, 81

gain, 88at a frequency, 69crossover frequency, 80margin, 21, 80

generalized Nyquist criterion, 19generalized plant, 126, 258gridding, 203Guillemin-Truxal, 89

H2-norm, 124, 173H2-optimization, 125Hadamard product, 186Hankel norm, 173Hermitian, 312high-gain, 6H∞-norm, 172, 213H∞-problem, 258

central solution, 267equalizing property, 275

Hurwitzmatrix, 207polynomial, 200strictly-, 181tableau, 201

induced norm, 171inferential control, 128∞-norm

of transfer matrix, 213of vector, 168

input-output pairing, 186input sensitivity

function, 30matrix, 33

integral control, 66integrating action, 64integrator in the loop, 132intensity matrix, 116interaction, 156

interconnection matrix, 208internal model principle, 65internal stability, 13, 175ITAE, 91

J-unitary, 266

Kalman-Jakuboviˇc-Popov equality, 108Kalman filter, 118Kharitonov’s theorem, 204KJP

equality, 108inequality, 108

L2-norm, 168lag compensation, 84leading coefficient, 16lead compensation, 82linearity improvement, 8linear regulator problem

stochastic, 116L∞-norm, 169loop

gain, 18gain matrix, 18IO map, 6, 10shaping, 34, 82transfer matrix, 15transfer recovery, 123

LQ, 104LQG, 115LTR, 123Lyapunov equation, 118Lyapunov matrix differential equation, 142

margindelay-, 21gain-, 21, 80modulus-, 21multivariable robustness-, 234phase-, 21, 80

matrix, 311closed loop system-, 107complementary sensitivity-, 33determinant, 311error covariance-, 118Hamiltonian, 114Hermitian, 312input sensitivity-, 33intensity-, 116

Page 175: Multi Variable Control System Design Dmcs456

Index 327

loop transfer-, 15nonnegative definite-, 312nonsingular, 311observer gain-, 117positive definite-, 312positive semi-definite-, 312proper rational-, 18rank, 311sensitivity-, 33singular, 311state feedback gain-, 105symmetric, 312system-, 14trace, 311unitary, 312

M-circle, 76measurement noise, 31MIMO, 11, 33, 153minimal realization, 163minimum phase, 37mixed sensitivity problem, 251

typek control, 252modulus margin, 21monic, 109µ, 234µ-synthesis, 289µH , 236multiloop control, 186multiplicative perturbation model, 209multivariable dynamic perturbation, 233multivariable robustness margin, 234

N-circle, 76Nehari problem, 260Nichols

chart, 77plot, 77

nominal, 32stability, 214

nonnegative definite, 312nonsingular matrix, 311norm, 168

Euclidean, 168Hankel, 173induced, 171∞-, 213Lp-, 168matrix, 171onC n , 168

p-, 168spectral-, 170submultiplicative property, 171

normal rankpolynomial matrix, 159rational matrix, 160

notch filter, 84Nyquist

plot, 18, 74stability criterion, 18

observability gramian, 174observer, 117

gain matrix, 117poles, 122

open-loopcharacteristic polynomial, 15pole, 88stability, 20zero, 88

optimal linear regulator problem, 104order

of realization, 163output

controllability, 178functional reproducible, 178principal directions, 184

overshoot, 81

para-Hermitean, 265parametric perturbation, 198parasitic effect, 32parity interlacing property, 20partial pole placement, 254peak value, 169perturbation

full, 218multivariable dynamic, 233parametric, 198repeated real scalar, 233repeated scalar dynamic, 233unstructured, 208

perturbation modeladditive, 209basic, 208multiplicative, 209proportional, 209

phasecrossover frequency, 80

Page 176: Multi Variable Control System Design Dmcs456

328 Index

margin, 21, 80minimum-, 37shift, 69

PI control, 66PID control, 66plant capacity, 30p-norm, 168

of signal, 168of vector, 168

poleassignment, 16excess, 252of MIMO system, 161placement, 16

pole-zero excess, 74polynomial

characteristic-, 14invariant-, 159matrix, Smith form, 159matrix fraction representation, 226monic, 109unimodular matrix, 159

position error constant, 62positive

definite, 312semi definite, 312

principal directioninput, 184output, 184

principal gain, 184proper, 18

rational function, 15strictly-, 16

proportionalcontrol, 66feedback, 7perturbation model, 209

pure integral control, 66

QFT, 93quantitative feedback theory, 93

rank, 311rational matrix

fractional representation, 226Smith McMillan form, 160

realizationminimal, 163order of, 163

regulatorpoles, 122system, 2

relative degree, 74, 252relative gain array, 186repeated real scalar perturbation, 233repeated scalar dynamic perturbation, 233reset time, 66resonance

frequency, 80peak, 80

return compensator, 5return difference, 108

equality, 108, 144inequality, 108

RGA, 186rise time, 81robustness, 7robust performance, 242roll-off, 32, 87

LQG, 132mixed sensitivity, 252

root loci, 88

Schur complement, 114, 313sector bounded function, 220sensitivity

complementary-, 23function, 23input-, 30matrix, 33matrix, complementary-, 33matrix, input-, 33

separation principle, 121servocompensator, 189servomechanism, 189servo system, 2settling time, 81shaping

loop-, 82Sherman-Morrison-Woodbury, 313σ, σ, 170signal

amplitude, 169energy, 169L2-norm, 168L∞-norm, 169peak value, 169

signature matrix, 265

Page 177: Multi Variable Control System Design Dmcs456

Index 329

single-degree-of-freedom, 12singular matrix, 311singular value, 169

decomposition, 169SISO, 11small gain theorem, 212Smith-McMillan form, 160Smith form, 159spectral

cofactorizaton, 306factor, 265

spectral norm, 170square down, 162stability, 11

asymptotic-, 12BIBO-, 13internal, 175internal-, 13matrix, 174nominal, 214Nyquist (generalized), 19Nyquist criterion, 18of polynomial, 206of transfer matrix, 175open-loop, 20radius, 220radius, complex, 220radius, real, 220region, 201

standardH∞ problem, 258standardH2 problem, 126state

estimation error, 117feedback gain matrix, 105

state-space realization, 163static output feedback, 164steady-state, 60

error, 60step function

unit, 60strictly proper, 16structured singular value, 234

D-scaling upper bound, 238lower bound, 236, 238of transfer matrix, 236scaling property, 236upper bound, 236

submultiplicative property, 171suboptimal solution (ofH∞ problem), 264

SVD, 169Sylvester equation, 16symmetric, 312system matrix, 14

trace, 119, 311tracking system, 2transmission zero

rational matrix, 160triangle inequality, 168two-degrees-of-freedom, 11typek control

mixed sensitivity, 252typek system, 61

unimodular, 159unit

feedback, 5step function, 60

unitary, 312unstructured perturbations, 208

vector spacenormed, 168

velocity constant, 63

well-posedness, 176, 263

zeroopen loop, 88polynomial matrix, 159