7/30/2019 Lecture Robust Zhou
Lecture Note on Robust Control
Jyh-Ching Juang
Department of Electrical Engineering
National Cheng Kung University, Tainan, TAIWAN
Motivation of Robust Control (J.C. Juang)
For a physical process, uncertainties may appear as
Parameter variation
Unmodeled or neglected dynamics
Change of operating condition
Environmental factors
External disturbances and noises
Others
Robust control, in short, ensures the performance and stability of feedback systems in the presence of uncertainties.
Objectives of the course
Recognize and respect the effects of uncertainties in dynamic systems
Understand how to analyze the performance and robustness of the dynamicsystems in the presence of uncertainties
Learn to use computer aided tools for control system analysis and design
Design of robust controllers to accommodate uncertainties
Apply robust control techniques to engineering problems
Course Requirements
Prerequisite:
Control engineering
Linear system theory
Multivariable systems
Working knowledge of Matlab
References:
Kemin Zhou with John Doyle, Essentials of Robust Control, Prentice Hall, 1998.
G. E. Dullerud and F. Paganini, A Course in Robust Control Theory, Springer-Verlag, 2000.
Doyle, Francis, and Tannenbaum, Feedback Control Theory, Maxwell Macmillan, 1992.
Boyd, El Ghaoui, Feron, and Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, 1994.
Zhou, Doyle, and Glover, Robust and Optimal Control, Prentice Hall, 1996.
Maciejowski, Multivariable Feedback Design, Addison-Wesley, 1989.
Skogestad and Postlethwaite, Multivariable Feedback Control, John Wiley & Sons, 1996.
Dahleh and Diaz-Bobillo, Control of Uncertain Systems, Prentice Hall, 1995.
Green and Limebeer, Linear Robust Control, Prentice Hall, 1995.
Many others
Grading:
50% : Homework
50% : Term project: paper study and design report
Course Outline
1. Introduction
2. Preliminaries
(a) Linear algebra and matrix theory
(b) Function space and signal
(c) Linear system theory
(d) Measure of systems (different norms)
(e) Equations for analysis and design
3. Internal Stability
(a) Feedback structure and well-posedness
(b) Coprime factorization
(c) Linear fractional map
(d) Stability and stabilizing controllers
4. Performance and Robustness
(a) Feedback design tradeoffs
(b) Bode's relation
(c) Uncertainty model
(d) Small gain theorem
(e) Robust stability
(f) Robust performance
(g) Structured singular values
5. Controller Design
(a) H2 state feedback
(b) H∞ state feedback
(c) Observer-based controller
(d) Output injection and separation principle
(e) Output feedback H2 controller design
(f) Loop transfer recovery
(g) Output feedback H∞ controller design
(h) μ synthesis
6. Miscellaneous Topics
Review of Linear Algebra and Matrix Theory
Linear Subspaces
Let R and C denote the real and complex scalar fields, respectively, and let F stand for either.
The set of all linear combinations of x1, x2, ..., xk ∈ V is a subspace called the span of x1, x2, ..., xk, denoted by
  span{x1, x2, ..., xk} = {x | x = α1 x1 + α2 x2 + ... + αk xk, αi ∈ F}
A set of vectors x1, x2, ..., xk ∈ W is said to be linearly dependent over F if there exist α1, α2, ..., αk ∈ F, not all zero, such that α1 x1 + α2 x2 + ... + αk xk = 0; otherwise the vectors are linearly independent.
Let W be a subspace of a vector space V. A set of vectors {x1, x2, ..., xk} ⊂ W is said to be a basis for W if x1, x2, ..., xk are linearly independent and W = span{x1, x2, ..., xk}.
The dimension of a vector subspace W equals the number of basis vectors.
A set of vectors {x1, x2, ..., xk} in W is mutually orthogonal if xi* xj = 0 for all i ≠ j, where xi* is the complex conjugate transpose of xi. The vectors are orthonormal if xi* xj = δij, the Kronecker delta (1 when i = j, 0 otherwise).
A collection of subspaces W1, W2, ..., Wk of V is mutually orthogonal if x* y = 0 whenever x ∈ Wi and y ∈ Wj for i ≠ j.
Let W be a subspace of V. The set of all vectors in V that are orthogonal to every vector in W is the orthogonal complement of W, denoted by
  W⊥ = {y ∈ V : y* x = 0 for all x ∈ W}
Each vector x in V can be expressed uniquely in the form x = xW + xW⊥ with xW ∈ W and xW⊥ ∈ W⊥.
A set of vectors {u1, u2, ..., uk} is said to be an orthonormal basis for a k-dimensional subspace W if the vectors form a basis and are orthonormal. If the dimension of V is n, it is then possible to find a complementary set of orthonormal vectors {u(k+1), ..., un} such that
  W⊥ = span{u(k+1), ..., un}
Let M : Rn → Rm be a linear transformation; M can be viewed as an m × n matrix, M ∈ Rm×n. The kernel or null space of M is defined as
  ker M = N(M) = {x ∈ Rn : Mx = 0}
The image or range of M is
  img M = {y ∈ Rm : y = Mx for some x ∈ Rn}
The rank of a matrix M is defined as the dimension of img M.
An m × n matrix M is said to have full row rank if m ≤ n and rank(M) = m. It has full column rank if m ≥ n and rank(M) = n.
Let M be an m × n real, full-rank matrix with m > n. The orthogonal complement of M is a matrix M⊥ of dimension m × (m − n) such that [M M⊥] is a square, nonsingular matrix with the property M⊥ᵀ M = 0.
The following properties hold: (ker M)⊥ = img Mᵀ and (img M)⊥ = ker Mᵀ.
Homework: Given M = [1 4 2; 0 1 1]. Find ker M, img M, ker Mᵀ, and img Mᵀ.
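The homework above can be checked numerically. The notes assume Matlab; the sketch below uses Python with numpy/scipy instead (my substitution, not part of the notes), computing bases for the fundamental subspaces of the given M via the SVD and `scipy.linalg.null_space`.

```python
import numpy as np
from scipy.linalg import null_space

# Homework matrix from the notes
M = np.array([[1., 4., 2.],
              [0., 1., 1.]])

ker_M = null_space(M)        # columns form an orthonormal basis of ker M
ker_MT = null_space(M.T)     # basis of ker M^T (empty here: M has full row rank)

# img M is spanned by the left singular vectors for the nonzero singular values
U, s, Vt = np.linalg.svd(M)
r = int(np.sum(s > 1e-10))   # rank of M
img_M = U[:, :r]             # basis of img M
img_MT = Vt[:r, :].T         # basis of img M^T = (ker M)^perp
```

Here ker M is one-dimensional (spanned by a multiple of (2, −1, 1)) while ker Mᵀ is trivial, consistent with the properties (ker M)⊥ = img Mᵀ and (img M)⊥ = ker Mᵀ.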
Eigenvalues and Eigenvectors
The scalar λ is an eigenvalue of the square matrix M ∈ Cn×n if det(λI − M) = 0.
There are n eigenvalues, denoted λi(M), i = 1, 2, ..., n.
The spectrum of M, denoted spec(M), is the collection of all eigenvalues of M, i.e., spec(M) = {λ1, λ2, ..., λn}.
The spectral radius is defined as
  ρ(M) = max_i |λi(M)|
where λi is an eigenvalue of M.
If M is a Hermitian matrix, i.e., M = M*, then all eigenvalues of M are real. In this case, λmax(M) is used to represent the maximal eigenvalue of M and λmin(M) the minimal eigenvalue of M.
If λi ∈ spec(M), then any nonzero vector xi ∈ Cn that satisfies
  M xi = λi xi
is a (right) eigenvector of M. Likewise, any nonzero vector wi ∈ Cn that satisfies
  wi* M = λi wi*
is a left eigenvector of M.
If M is Hermitian, then there exist a unitary matrix U (i.e., U U* = I) and a real diagonal Λ such that
  M = U Λ U*
In this case, the columns of U are the (right) eigenvectors of M.
Homework: Given M = [1 4 2; 4 0 1; 2 1 1]. What is ρ(M)? Find U and Λ.
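A numpy sketch of the eigen-decomposition homework (Python used in place of the course's Matlab). Since the given M is real symmetric (hence Hermitian), `numpy.linalg.eigh` returns real eigenvalues and a unitary U with M = U Λ Uᵀ:

```python
import numpy as np

# Homework matrix from the notes (real symmetric, so Hermitian)
M = np.array([[1., 4., 2.],
              [4., 0., 1.],
              [2., 1., 1.]])

lam, U = np.linalg.eigh(M)       # M = U diag(lam) U^T, lam real
rho = np.max(np.abs(lam))        # spectral radius rho(M) = max_i |lambda_i|
```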
Matrix Inversion and Pseudo-Inverse
For a square matrix M, its inverse, denoted M⁻¹, is the matrix such that M M⁻¹ = M⁻¹ M = I.
Let M be a square matrix partitioned as
  M = [M11 M12; M21 M22]
Suppose that M11 is nonsingular; then M can be decomposed as
  M = [I 0; M21 M11⁻¹ I] [M11 0; 0 S11] [I M11⁻¹M12; 0 I]
where S11 = M22 − M21 M11⁻¹ M12 is the Schur complement of M11 in M.
If M (as partitioned above) is nonsingular, then
  M⁻¹ = [M11⁻¹ + M11⁻¹M12 S11⁻¹ M21M11⁻¹,  −M11⁻¹M12 S11⁻¹;  −S11⁻¹M21M11⁻¹,  S11⁻¹]
If both M11 and M22 are invertible (matrix inversion lemma),
  (M11 − M12 M22⁻¹ M21)⁻¹ = M11⁻¹ + M11⁻¹ M12 (M22 − M21 M11⁻¹ M12)⁻¹ M21 M11⁻¹
The pseudo-inverse (also called the Moore-Penrose inverse) of a matrix M is the matrix M⁺ that satisfies the following conditions:
1. M M⁺ M = M
2. M⁺ M M⁺ = M⁺
3. (M M⁺)* = M M⁺
4. (M⁺ M)* = M⁺ M
Consider the linear matrix equation with unknown X,
  A X B = C
The equation is solvable if and only if
  A A⁺ C B⁺ B = C
All solutions can be characterized as
  X = A⁺ C B⁺ − A⁺ A Y B B⁺ + Y   for some Y.
In the case of no solution, the best approximation is
  Xappr = A⁺ C B⁺
Homework: Given M = [M11 M12; M21 M22] = [1 4 2; 4 0 1; 3 1 1]. Use the formulas above to determine M⁻¹.
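The block-inversion formula can be exercised directly on the homework matrix. The partition choice below (M11 as the leading 1×1 block) is my own; any nonsingular leading block works. Python/numpy is used in place of Matlab:

```python
import numpy as np

M = np.array([[1., 4., 2.],
              [4., 0., 1.],
              [3., 1., 1.]])
# Partition with M11 the leading 1x1 block (an assumed choice)
M11, M12 = M[:1, :1], M[:1, 1:]
M21, M22 = M[1:, :1], M[1:, 1:]

M11i = np.linalg.inv(M11)
S11 = M22 - M21 @ M11i @ M12          # Schur complement of M11 in M
S11i = np.linalg.inv(S11)

# Block inversion formula from the notes
top = np.hstack([M11i + M11i @ M12 @ S11i @ M21 @ M11i, -M11i @ M12 @ S11i])
bot = np.hstack([-S11i @ M21 @ M11i, S11i])
Minv = np.vstack([top, bot])
```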
Invariant Subspaces
Let M be a square matrix. A subspace S ⊂ Cn is invariant for the transformation M, or M-invariant, if Mx ∈ S for every x ∈ S.
S is invariant for M means that the image of S under M is contained in S: MS ⊂ S.
Examples of M-invariant subspaces: any eigenspace of M, ker M, and img M.
If S is a nontrivial M-invariant subspace, then there exist x ∈ S and λ such that Mx = λx.
Let λ1, ..., λk be eigenvalues of M (not necessarily distinct), and let xi be the corresponding (generalized) eigenvectors. Then S = span{x1, ..., xk} is an M-invariant subspace provided that all the lower-rank generalized eigenvectors are included.
More specifically, let λ1 = λ2 = ... = λl be eigenvalues of M, and let x1, x2, ..., xl be the corresponding eigenvector and generalized eigenvectors obtained through the following equations:
  (M − λ1 I) x1 = 0
  (M − λ1 I) x2 = x1
  ...
  (M − λ1 I) xl = x(l−1)
Then a subspace S with xj ∈ S for some j ≤ l is an M-invariant subspace if and only if all lower-rank eigenvectors and generalized eigenvectors of xj are in S, i.e., xi ∈ S for all i ≤ j.
An M-invariant subspace S ⊂ Cn is called a stable invariant subspace if all the eigenvalues of M constrained to S have negative real parts.
Example:
  M [x1 x2 x3 x4] = [x1 x2 x3 x4] [λ1 1 0 0; 0 λ1 0 0; 0 0 λ3 0; 0 0 0 λ4]
with Re{λ1} < 0 (the real part of λ1 is less than zero), Re{λ3} < 0, and Re{λ4} > 0. Which of the following are M-invariant subspaces? Which are stable invariant subspaces?
  span{x1}, span{x2}, span{x3}, span{x4},
  span{x1, x2}, span{x1, x2, x3}, span{x1, x2, x4}, span{x2, x3}
Vector Norm and Matrix Norm
Consider a linear space V over the field F. A real-valued function defined on all elements x from V is called a norm, written ‖·‖, if it satisfies the following axioms:
1. (Nonnegativity) ‖x‖ ≥ 0 for all x ∈ V, and ‖x‖ = 0 if and only if x = 0.
2. (Homogeneity) ‖αx‖ = |α| ‖x‖ for all x ∈ V and α ∈ F.
3. (Triangle inequality) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x and y in V.
A linear space together with a norm defined on it becomes a normed linear space.
Vector norms: let x = [x1 x2 ... xn]ᵀ be a vector in Cn. The following are norms on Cn:
1. Vector p-norm (for 1 ≤ p < ∞): ‖x‖p = (Σ_{i=1}^n |xi|^p)^{1/p}
2. Vector 1-norm: ‖x‖1 = Σ_{i=1}^n |xi|
3. Vector 2-norm or Euclidean norm: ‖x‖2 = √(x*x) = √(Σ_{i=1}^n |xi|²)
4. Vector ∞-norm: ‖x‖∞ = max_{1≤i≤n} |xi|
If ‖·‖p is a vector norm on Cn and M ∈ Cn×n, the matrix norm induced by the vector p-norm is
  ‖M‖p = sup_{x∈Cn, x≠0} ‖Mx‖p / ‖x‖p
Matrix norms: let M = [mij] be a matrix in Cm×n.
1. Matrix 1-norm (column sum): ‖M‖1 = max_j Σ_{i=1}^m |mij|
2. Matrix 2-norm: ‖M‖ = ‖M‖2 = √(λmax(M*M))
3. Matrix ∞-norm (row sum): ‖M‖∞ = max_i Σ_{j=1}^n |mij|
4. Frobenius norm: ‖M‖F = (Σ_{i=1}^m Σ_{j=1}^n |mij|²)^{1/2} = √(trace(M*M))
Homework: Let x = [1 3 2]ᵀ and M = [4 0 2; 4 0 1; 3 1 1]. Determine ‖x‖1, ‖x‖2, ‖x‖∞, ‖M‖1, ‖M‖2, ‖M‖∞, and ‖M‖F.
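The norm homework maps directly onto `numpy.linalg.norm`, whose `ord` argument selects exactly the norms listed above (1, 2, ∞, Frobenius). A Python sketch using the homework data as printed:

```python
import numpy as np

x = np.array([1., 3., 2.])
M = np.array([[4., 0., 2.],
              [4., 0., 1.],
              [3., 1., 1.]])

x1 = np.linalg.norm(x, 1)          # sum of |x_i| = 6
x2 = np.linalg.norm(x, 2)          # sqrt(1 + 9 + 4) = sqrt(14)
xinf = np.linalg.norm(x, np.inf)   # max |x_i| = 3

M1 = np.linalg.norm(M, 1)          # max column sum = 11
M2 = np.linalg.norm(M, 2)          # max singular value
Minf = np.linalg.norm(M, np.inf)   # max row sum = 6
MF = np.linalg.norm(M, 'fro')      # sqrt(sum of squares) = sqrt(48)
```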
Singular Value Decomposition
The singular values of a matrix M are defined as
  σi(M) = √(λi(M*M))
Here, the σi(M) are real and nonnegative.
The maximal singular value, σmax, can be shown to be
  σmax(M) = ‖M‖ = max_{x≠0} ‖Mx‖2 / ‖x‖2
When M is invertible, the maximal singular value of M⁻¹ is related to the minimal singular value of M, σmin(M), by
  σmax(M⁻¹) = 1 / σmin(M)
The rank of M is the same as the number of nonzero singular values of M.
The matrix M and its complex conjugate transpose M* have the same singular values, i.e., σi(M) = σi(M*).
Let M ∈ Cm×n. There exist unitary matrices U = [u1 u2 ... um] ∈ Cm×m and V = [v1 v2 ... vn] ∈ Cn×n such that
  M = U Σ V*
where
  Σ = [Σ1 0; 0 0]  with  Σ1 = diag(σ1, σ2, ..., σr)
and σ1 ≥ σ2 ≥ ... ≥ σr ≥ 0, where r = min{m, n}.
Singular value decomposition: the matrix admits the dyadic expansion
  M = Σ_{i=1}^r σi ui vi* = [u1 u2 ... ur] diag(σ1, σ2, ..., σr) [v1 v2 ... vr]*
ker M = span{v(r+1), ..., vn} and (ker M)⊥ = span{v1, ..., vr}.
img M = span{u1, ..., ur} and (img M)⊥ = span{u(r+1), ..., um}.
The Frobenius norm of M equals ‖M‖F = √(σ1² + σ2² + ... + σr²).
The condition number of a matrix M is defined as κ(M) = σmax(M) σmax(M⁻¹), provided that M⁻¹ exists.
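The dyadic expansion and the Frobenius-norm identity above can be verified numerically. A numpy sketch (the random test matrix is my own illustration, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))   # an arbitrary full-rank example

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Reconstruct M from the dyadic expansion sum_i sigma_i u_i v_i^*
M_rec = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))

sigma_max, sigma_min = s[0], s[-1]
cond = sigma_max / sigma_min           # condition number kappa(M)
frob = np.sqrt(np.sum(s**2))           # ||M||_F = sqrt(sum sigma_i^2)
```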
Semidefinite Matrices
A square Hermitian matrix M = M* is positive definite (semidefinite), denoted M > 0 (M ≥ 0), if x*Mx > 0 (≥ 0) for all x ≠ 0.
All eigenvalues of a positive definite matrix are positive.
Let M = [M11 M12; M21 M22] be a Hermitian matrix. Then M > 0 if and only if M11 > 0 and S11 = M22 − M21 M11⁻¹ M12 > 0.
Let M ≥ 0. There exist a unitary matrix U and a nonnegative diagonal Λ such that M = U Λ U*.
The square root of a positive semidefinite matrix M, denoted M^{1/2}, satisfies M^{1/2} M^{1/2} = M and M^{1/2} ≥ 0. It can be shown that M^{1/2} = U Λ^{1/2} U*.
For a positive definite matrix M, its inverse M⁻¹ exists and is positive definite.
For two positive definite matrices M1 and M2, we have αM1 + βM2 > 0 when α > 0 and β ≥ 0.
Homework: Let M = [5 3; 3 2] and assume the 2 × 1 vector x has a 2-norm less than or equal to 1. In the 2-dimensional plane, draw the possible values of Mx.
Homework: Let M = [5 3 1; 3 2 2; 1 2 x]. Find the minimal value of x such that M is positive definite.
Matrix Calculus
Let x = [xi] be an n-vector and f(x) a scalar function of x. The first-derivative gradient vector is defined as
  ∂f(x)/∂x = [gi] = [∂f(x)/∂x1; ∂f(x)/∂x2; ...; ∂f(x)/∂xn]
and the second-derivative Hessian matrix is
  ∂²f(x)/∂x² = [Hij],  Hij = ∂²f(x)/(∂xi ∂xj)
Let X = [Xij] be an m × n matrix and f(X) a scalar function of X. The derivative of f(X) with respect to X is defined as
  (∂/∂X) f(X) = [∂f(X)/∂Xij]
Formulas for derivatives (assuming x and y are vectors and A, B, and X are real matrices):
  (∂/∂x)(yᵀAx) = Aᵀy
  (∂/∂x)(xᵀAx) = (A + Aᵀ)x
  ∂(trace X)/∂X = I
  ∂(trace AXB)/∂X = AᵀBᵀ
  ∂(trace AXᵀB)/∂X = BA
  ∂(trace XᵀAX)/∂X = (A + Aᵀ)X
  ∂(trace AXBX)/∂X = AᵀXᵀBᵀ + BᵀXᵀAᵀ
  ∂(log det X)/∂X = (Xᵀ)⁻¹
  ∂(det X)/∂X = (det X)(Xᵀ)⁻¹
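Derivative formulas like these are easy to confirm with a finite-difference check. The sketch below (Python/numpy, my own illustration) verifies ∂(trace AXB)/∂X = AᵀBᵀ entrywise; since trace(AXB) is linear in X, central differences recover the gradient essentially exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 3))
X = rng.standard_normal((4, 5))

def f(X):
    return np.trace(A @ X @ B)

# Analytic derivative from the table: d trace(AXB)/dX = A^T B^T
grad_analytic = A.T @ B.T

# Central finite differences, one entry of X at a time
eps = 1e-6
grad_fd = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        grad_fd[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
```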
Functional Space and Signal
Function Spaces
Let V be a vector space over C and ‖·‖ a norm defined on V. Then V is a normed space.
The space Lp(−∞, ∞) (for 1 ≤ p < ∞) consists of all Lebesgue measurable functions w(t) defined on the interval (−∞, ∞) such that
  ‖w‖p := (∫_{−∞}^{∞} |w(t)|^p dt)^{1/p} < ∞
The space L∞(−∞, ∞) consists of all Lebesgue measurable functions w(t) with a bounded
  ‖w‖∞ := ess sup_{t∈R} |w(t)|
Let V be a vector space over C. An inner product on V is a complex-valued function
  ⟨·,·⟩ : V × V → C
such that for any u, v, w ∈ V and α, β ∈ C:
1. ⟨u, αv + βw⟩ = α⟨u, v⟩ + β⟨u, w⟩
2. ⟨u, v⟩ = conjugate of ⟨v, u⟩
3. ⟨u, u⟩ > 0 if u ≠ 0
A vector space V with an inner product is an inner product space.
A Hilbert space is a complete inner product space with the norm induced by its inner product.
The space L2 = L2(−∞, ∞) is a Hilbert space of matrix-valued functions on R, with inner product
  ⟨u, v⟩ := ∫_{−∞}^{∞} trace(u*(t) v(t)) dt
The space L2+ = L2[0, ∞) is the subspace of L2(−∞, ∞) of functions that are zero for t < 0. The space L2− = L2(−∞, 0] is the subspace of L2(−∞, ∞) of functions that are zero for t > 0.
Let V be a Hilbert space and U ⊂ V a subset. The orthogonal complement of U, denoted U⊥, is defined as
  U⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ U}
The orthogonal complement is itself a Hilbert space.
Let W = L2+ ⊂ L2; then W⊥ = L2−.
Let U and W be subspaces of a vector space V. V is said to be the direct sum of U and W, written V = U ⊕ W, if U ∩ W = {0} and every element v ∈ V can be expressed as v = u + w with u ∈ U and w ∈ W. If V is an inner product space and U and W are orthogonal, then V is said to be the orthogonal direct sum of U and W.
The space L2 is the orthogonal direct sum of L2+ and L2−.
Power and Spectral Signals
Let w(t) be a function of time. The autocorrelation matrix is
  Rww(τ) := lim_{T→∞} (1/2T) ∫_{−T}^{T} w(t + τ) w*(t) dt
if the limit exists and is finite for all τ.
The Fourier transform of the autocorrelation matrix is the spectral density, denoted Sww(jω):
  Sww(jω) := ∫_{−∞}^{∞} Rww(τ) e^{−jωτ} dτ
The autocorrelation can be obtained from the spectral density by performing an inverse Fourier transform:
  Rww(τ) = (1/2π) ∫_{−∞}^{∞} Sww(jω) e^{jωτ} dω
A signal is a power signal if the autocorrelation matrix Rww(τ) exists for all τ and the power spectral density function Sww(jω) exists.
The power of w(t) is defined as
  ‖w‖rms := (lim_{T→∞} (1/2T) ∫_{−T}^{T} ‖w(t)‖² dt)^{1/2} = √(trace[Rww(0)])
In terms of the spectral density function,
  ‖w‖rms² = (1/2π) ∫_{−∞}^{∞} trace[Sww(jω)] dω
Note that ‖·‖rms is not a norm, since a finite-duration signal has a zero rms value.
Signal Quantization
For a signal w(t) mapping from [0, ∞) to R (or Rm), its measures can be defined as:
1. ∞-norm (peak value)
  ‖w‖∞ = sup_{t≥0} |w(t)|   or   ‖w‖∞ = sup_{t≥0} max_{1≤i≤m} |wi(t)| = max_{1≤i≤m} ‖wi‖∞
2. 2-norm (total energy)
  ‖w‖2 = (∫_0^∞ w²(t) dt)^{1/2}   or   ‖w‖2 = (∫_0^∞ wᵀ(t)w(t) dt)^{1/2} = (Σ_{i=1}^m ‖wi‖2²)^{1/2}
3. 1-norm (resource consumption)
  ‖w‖1 = ∫_0^∞ |w(t)| dt   or   ‖w‖1 = ∫_0^∞ Σ_{i=1}^m |wi(t)| dt = Σ_{i=1}^m ‖wi‖1
4. p-norm
  ‖w‖p = (∫_0^∞ |w(t)|^p dt)^{1/p}
5. rms (root mean square) value (average power)
  ‖w‖rms = (lim_{T→∞} (1/T) ∫_0^T w²(t) dt)^{1/2}   or   ‖w‖rms = (lim_{T→∞} (1/T) ∫_0^T wᵀ(t)w(t) dt)^{1/2}
The pair (A, B) is controllable if and only if
1. For any initial state x(0), any time tf > 0, and any final state xf, there exists a piecewise continuous input w(·) such that x(tf) = xf.
2. The controllability matrix [B AB A²B ... A^{n−1}B] has full row rank, i.e., ⟨A | img B⟩ := Σ_{i=1}^n img(A^{i−1}B) = Rn.
3. The matrix
  Wc(t) = ∫_0^t e^{Aτ} B Bᵀ e^{Aᵀτ} dτ
is positive definite for any t > 0.
4. PBH test: the matrix [A − λI, B] has full row rank for all λ in C.
5. For every eigenvalue λ and corresponding left eigenvector x of A, i.e., x*A = λx*, we have x*B ≠ 0.
6. The eigenvalues of A + BF can be freely assigned by a suitable choice of F.
The pair (A, B) is stabilizable if and only if
1. The matrix [A − λI, B] has full row rank for all λ in C with Re{λ} ≥ 0.
2. For all λ and x such that x*A = λx* and Re{λ} ≥ 0, we have x*B ≠ 0.
3. There exists a matrix F such that A + BF is Hurwitz.
The pair (C, A) is observable if and only if
1. For any t > 0, the initial state x(0) can be determined from the time history of the input w(t) and the output z(t) on the interval [0, t].
2. The observability matrix
  [C; CA; CA²; ...; CA^{n−1}]
has full column rank, i.e., ∩_{i=1}^n ker(CA^{i−1}) = 0.
3. The matrix
  Wo(t) = ∫_0^t e^{Aᵀτ} Cᵀ C e^{Aτ} dτ
is positive definite for any t > 0.
4. The matrix [A − λI; C] has full column rank for all λ in C.
5. The eigenvalues of A + HC can be freely assigned by a suitable choice of H.
6. For all λ and x ≠ 0 such that Ax = λx, we have Cx ≠ 0.
7. The pair (Aᵀ, Cᵀ) is controllable.
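The rank and PBH tests above are straightforward to run numerically. A Python/numpy sketch on an assumed example system (not from the notes):

```python
import numpy as np

# Assumed example: a controllable and observable second-order system
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])

n = A.shape[0]
# Controllability matrix [B AB ... A^{n-1}B] and observability matrix [C; CA; ...]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

controllable = np.linalg.matrix_rank(ctrb) == n
observable = np.linalg.matrix_rank(obsv) == n

# PBH test: [A - lambda I, B] must have full row rank at every eigenvalue of A
pbh_ctrl = all(
    np.linalg.matrix_rank(np.hstack([A - lam * np.eye(n), B])) == n
    for lam in np.linalg.eigvals(A)
)
```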
The pair (C, A) is detectable if and only if
1. The matrix [A − λI; C] has full column rank for all λ with Re{λ} ≥ 0.
2. There exists a matrix H such that A + HC is Hurwitz.
3. For all λ and x ≠ 0 such that Ax = λx and Re{λ} ≥ 0, we have Cx ≠ 0.
Kalman canonical decomposition: through a nonsingular coordinate transformation T, the realization admits
  [T A T⁻¹, T B; C T⁻¹, D] =
    [ Aco  0    A13  0    Bco
      A21  Acō  A23  A24  Bcō
      0    0    Ac̄o  0    0
      0    0    A43  Ac̄ō  0
      Cco  0    Cc̄o  0    D  ]
where the subscripts co, cō, c̄o, c̄ō denote the controllable/observable, controllable/unobservable, uncontrollable/observable, and uncontrollable/unobservable parts, respectively.
In this case, the transfer function is
  M(s) = D + C(sI − A)⁻¹B = D + Cco(sI − Aco)⁻¹Bco
The state-space realization (A, B, C, D) is minimal if and only if (A, B) is controllable and (C, A) is observable.
State Space Algebra
Let
  M1(s):  ẋ1 = A1 x1 + B1 w1,  z1 = C1 x1 + D1 w1
and
  M2(s):  ẋ2 = A2 x2 + B2 w2,  z2 = C2 x2 + D2 w2
In the compact representation, M1(s) = [A1 B1; C1 D1] and M2(s) = [A2 B2; C2 D2].
Parallel connection of M1(s) and M2(s) (block diagram: the input w feeds both M1 and M2, and their outputs are summed to give z). The new state is x = [x1; x2], the input is w = w1 = w2, and the output is z = z1 + z2. Thus
  ẋ = [A1 0; 0 A2] x + [B1; B2] w,   z = [C1 C2] x + (D1 + D2) w
or
  M1(s) + M2(s) = [A1 0 B1; 0 A2 B2; C1 C2 D1+D2]
Series connection of the two systems (w2 → M2 → z2 = w1 → M1 → z1). The connection gives
  M1(s) M2(s) = [A1 B1; C1 D1] [A2 B2; C2 D2]
              = [A1 B1C2 B1D2; 0 A2 B2; C1 D1C2 D1D2]
              = [A2 0 B2; B1C2 A1 B1D2; D1C2 C1 D1D2]
Inverse system:
  M1⁻¹(s) = [A1 − B1D1⁻¹C1, B1D1⁻¹; −D1⁻¹C1, D1⁻¹]
provided that D1 is invertible.
It can be verified that M1(s) M1⁻¹(s) = I:
  w1 = D1⁻¹ z1 − D1⁻¹ C1 x1
  ẋ1 = A1 x1 + B1 (D1⁻¹ z1 − D1⁻¹ C1 x1) = (A1 − B1D1⁻¹C1) x1 + B1D1⁻¹ z1
Transpose or dual system: M1ᵀ(s) = [A1ᵀ, C1ᵀ; B1ᵀ, D1ᵀ].
Conjugate system: M1~(s) = M1ᵀ(−s) = [−A1ᵀ, −C1ᵀ; B1ᵀ, D1ᵀ]. Thus M1~(jω) = M1*(jω).
Feedback connection of M1(s) and M2(s) (block diagram: w1 = w + z2 drives M1; its output z = z1 = w2 feeds back through M2 to produce z2).
The connection gives z = z1 = w2 and w1 = w + z2. Thus
  z = C1 x1 + D1 (w + z2)
  z2 = C2 x2 + D2 z
  z = (I − D1D2)⁻¹ (C1 x1 + D1 C2 x2 + D1 w)
  z2 = (I − D2D1)⁻¹ (D2 C1 x1 + C2 x2 + D2 D1 w)
  ẋ1 = A1 x1 + B1 w + B1 (I − D2D1)⁻¹ (D2 C1 x1 + C2 x2 + D2 D1 w)
  ẋ2 = A2 x2 + B2 (I − D1D2)⁻¹ (C1 x1 + D1 C2 x2 + D1 w)
Hence
  (I − M1(s)M2(s))⁻¹ M1(s) =
    [ A1 + B1(I−D2D1)⁻¹D2C1,  B1(I−D2D1)⁻¹C2,         B1 + B1(I−D2D1)⁻¹D2D1;
      B2(I−D1D2)⁻¹C1,         A2 + B2(I−D1D2)⁻¹D1C2,  B2(I−D1D2)⁻¹D1;
      (I−D1D2)⁻¹C1,           (I−D1D2)⁻¹D1C2,         (I−D1D2)⁻¹D1 ]
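The interconnection formulas can be spot-checked by evaluating both sides at a test frequency. The sketch below (Python/numpy, with assumed example data) verifies the series-connection realization against the product M1(s)M2(s):

```python
import numpy as np

def tf(A, B, C, D, s):
    """Evaluate D + C (sI - A)^{-1} B at a complex frequency s."""
    n = A.shape[0]
    return D + C @ np.linalg.solve(s * np.eye(n) - A, B)

# Two assumed first-order SISO systems (illustration only)
A1, B1, C1, D1 = np.array([[-1.]]), np.array([[1.]]), np.array([[1.]]), np.array([[0.5]])
A2, B2, C2, D2 = np.array([[-3.]]), np.array([[2.]]), np.array([[1.]]), np.array([[1.]])

# Series connection M1(s) M2(s) realization from the notes:
# [A1 B1C2 B1D2; 0 A2 B2; C1 D1C2 D1D2]
As = np.block([[A1, B1 @ C2], [np.zeros((1, 1)), A2]])
Bs = np.vstack([B1 @ D2, B2])
Cs = np.hstack([C1, D1 @ C2])
Ds = D1 @ D2

s = 1.0 + 2.0j
lhs = tf(As, Bs, Cs, Ds, s)
rhs = tf(A1, B1, C1, D1, s) @ tf(A2, B2, C2, D2, s)
```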
System Poles and Zeros
Let M(s) = D + C(sI − A)⁻¹B = [A B; C D]. The eigenvalues of A are the poles of M(s).
The system matrix of M(s) is defined as
  Q(s) = [A − sI, B; C, D]
which is a polynomial matrix.
The normal rank of Q(s), denoted normal rank Q(s), is the maximally possible rank of Q(s) for at least one s ∈ C.
A complex number λo ∈ C is called an invariant zero of the system realization if it satisfies
  rank [A − λoI, B; C, D] < normal rank [A − sI, B; C, D]
The invariant zeros are not changed by constant state feedback, constant output injection, or similarity transformation.
Suppose that [A − sI, B; C, D] has full column normal rank. Then λo ∈ C is an invariant zero if and only if there exist 0 ≠ x ∈ Cn and w ∈ Cm such that
  [A − λoI, B; C, D] [x; w] = 0
Moreover, if w = 0, then λo is also a nonobservable mode.
When the system is square, i.e., it has an equal number of inputs and outputs, the invariant zeros can be computed by solving a generalized eigenvalue problem:
  [A B; C D] [x; w] = λ [I 0; 0 0] [x; w]
for some generalized eigenvalue λ and generalized eigenvector [x; w].
Suppose that [A − sI, B; C, D] has full row normal rank. Then λo ∈ C is an invariant zero if and only if there exist 0 ≠ y ∈ Cn and v ∈ Cp such that
  [y* v*] [A − λoI, B; C, D] = 0
Moreover, if v = 0, then λo is also a noncontrollable mode.
The system M(s) has full column normal rank if and only if [A − sI, B; C, D] has full column normal rank. Note that
  [A − sI, B; C, D] = [I, 0; C(A − sI)⁻¹, I] [A − sI, B; 0, M(s)]
and
  normal rank [A − sI, B; C, D] = n + normal rank M(s)
Let M(s) be a p × m transfer matrix and let (A, B, C, D) be a minimal realization. If λo is a zero of M(s) that is distinct from the poles, then there exist an input and an initial state such that the output of the system z(t) is zero for all t.
Let xo and wo be such that
  [A − λoI, B; C, D] [xo; wo] = 0
i.e., A xo + B wo = λo xo and C xo + D wo = 0.
Consider the input w(t) = wo e^{λo t} and initial state x(0) = xo. The output is
  z(t) = C e^{At} xo + ∫_0^t C e^{A(t−τ)} B w(τ) dτ + D w(t)
       = C e^{At} xo + C e^{At} ∫_0^t e^{(λoI − A)τ} (λoI − A) xo dτ + D wo e^{λo t}
       = C e^{At} xo + C e^{At} (e^{(λoI − A)t} − I) xo + D wo e^{λo t}
       = (C xo + D wo) e^{λo t} = 0
Measure of System and Fundamental Equations
H2 Space
Let S be an open set in C and let f(s) be a complex-valued function defined on S. Then f(s) is said to be analytic at a point zo in S if it is differentiable at zo and also at each point in some neighborhood of zo.
If f(s) is analytic at zo, then f has continuous derivatives of all orders at zo.
A function f(s) is said to be analytic in S if it has a derivative, i.e., is analytic, at each point of S.
A matrix-valued function is analytic in S if every element of the matrix is analytic in S.
All real rational stable transfer function matrices are analytic in the right-half plane.
Maximum modulus theorem: if f(s) is defined and continuous on a closed bounded set S and analytic in the interior of S, then |f(s)| cannot attain its maximum in the interior of S unless f(s) is constant. That is, |f(s)| can only achieve its maximum on the boundary of S:
  max_{s∈S} |f(s)| = max_{s∈∂S} |f(s)|
where ∂S denotes the boundary of S.
The space L2 = L2(jR) is a Hilbert space of matrix-valued functions on jR and consists of all complex matrix functions M such that
  ∫_{−∞}^{∞} trace[M*(jω) M(jω)] dω < ∞
The space H2 is the subspace of L2(jR) of matrix functions analytic in Re{s} > 0. The corresponding norm is defined as
  ‖M‖2² := sup_{σ>0} { (1/2π) ∫_{−∞}^{∞} trace[M*(σ + jω) M(σ + jω)] dω }
It can be shown that
  ‖M‖2² = (1/2π) ∫_{−∞}^{∞} trace[M*(jω) M(jω)] dω
The real rational subspace of H2, which consists of all strictly proper and real rational stable transfer function matrices, is denoted by RH2.
The space H2⊥ is the orthogonal complement of H2 in L2.
If M is a strictly proper, stable, real rational transfer function matrix, then M ∈ H2 and M~ ∈ H2⊥.
Parseval's relation: there is an isometric isomorphism between the L2 spaces in the time domain and the L2 spaces in the frequency domain:
  L2(−∞, ∞) ≅ L2(jR)
  L2[0, ∞) ≅ H2
  L2(−∞, 0] ≅ H2⊥
If m(t) ∈ L2(−∞, ∞) and its bilateral Laplace transform is M(s) ∈ L2(jR), then
  ‖M‖2 = ‖m‖2
Define an orthogonal projection P+ : L2(−∞, ∞) → L2[0, ∞) such that for any function m(t) ∈ L2(−∞, ∞),
  P+ m(t) = m(t) for t ≥ 0, and 0 otherwise.
On the other hand, the operator P− from L2(−∞, ∞) to L2(−∞, 0] is defined as
  P− m(t) = 0 for t > 0, and m(t) for t ≤ 0.
(Diagram of relationships among the function spaces: the Laplace transform and its inverse map L2(−∞, 0] to H2⊥, L2(−∞, ∞) to L2(jR), and L2[0, ∞) to H2, while the projections P+ and P− act correspondingly in both domains.)
H∞ Space
The space L∞(jR) is a Banach space of matrix-valued functions that are essentially bounded on jR, with norm
  ‖M‖∞ := ess sup_{ω∈R} σmax[M(jω)]
The rational subspace of L∞, denoted RL∞, consists of all proper and real rational transfer function matrices with no poles on the imaginary axis.
The space H∞ is a subspace of L∞ with functions that are analytic and bounded in the open right-half plane. The H∞ norm is defined as
  ‖M‖∞ := sup_{Re{s}>0} σmax[M(s)] = sup_{ω∈R} σmax[M(jω)]
The real rational subspace of H∞ is denoted by RH∞, which consists of all proper and real rational stable transfer function matrices.
The space H∞⁻ is a subspace of L∞ with functions that are analytic and bounded in the open left-half plane. The corresponding norm is defined as
  ‖M‖∞⁻ := sup_{Re{s}<0} σmax[M(s)]
Lyapunov Equation
Given A ∈ Rn×n and Q = Qᵀ ∈ Rn×n, the equation in X ∈ Rn×n
  AᵀX + XA + Q = 0
is called a Lyapunov equation. Define the map
  Φ : Rn×n → Rn×n,  Φ(X) = AᵀX + XA
Then the Lyapunov equation has a solution X if and only if Q ∈ img Φ. The solution is unique if and only if Φ is injective.
The Lyapunov equation has a unique solution if and only if no two eigenvalues of A sum to zero.
Assume that A is stable. Then
1. X = ∫_0^∞ e^{Aᵀt} Q e^{At} dt.
2. X > 0 if Q > 0, and X ≥ 0 if Q ≥ 0.
3. If Q ≥ 0, then (Q, A) is observable if and only if X > 0.
Suppose A, Q, and X satisfy the Lyapunov equation. Then
1. Re{λi(A)} ≤ 0 if X > 0 and Q ≥ 0.
2. A is stable if X > 0 and Q > 0.
3. A is stable if (Q, A) is detectable, Q ≥ 0, and X ≥ 0. Proof sketch:
  (a) Suppose not; let x ≠ 0 be such that Ax = λx with Re{λ} ≥ 0.
  (b) Form x*(AᵀX + XA + Q)x = 0, which reduces to 2 Re{λ} x*Xx + x*Qx = 0.
  (c) Since both terms are nonnegative, x*Qx = 0,
  (d) which leads to Qx = 0 (because Q ≥ 0).
  (e) This implies [A − λI; Q] x = 0, contradicting the detectability assumption.
Homework: Let A = [2 1 1; 0 2 5; 1 4 4] and Q = [1 0 0; 0 1 0; 0 0 1]. Determine X such that AᵀX + XA + Q = 0.
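The homework can be solved in one call with scipy (used here in place of the course's Matlab). The matrix A is taken as printed; any sign information possibly lost in extraction is unknown, but the Bartels-Stewart solver applies whenever no two eigenvalues of A sum to zero, stable or not:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Homework data as printed in the notes
A = np.array([[2., 1., 1.],
              [0., 2., 5.],
              [1., 4., 4.]])
Q = np.eye(3)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so pass a = A^T and q = -Q to get A^T X + X A + Q = 0
X = solve_continuous_lyapunov(A.T, -Q)
```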
Controllability and Observability Gramians
(Block diagram: input w drives M(s), with initial state xo, producing output z.)
Consider the stable transfer function M(s) with z = M(s)w, and assume the minimal realization
  M(s) = C(sI − A)⁻¹B
that is, ẋ = Ax + Bw, z = Cx.
Suppose that there is no input from 0 to ∞. The output energy generated by the initial state xo can be computed as follows:
  (∫_0^∞ zᵀ(t) z(t) dt)^{1/2} = (xoᵀ [∫_0^∞ e^{Aᵀt} Cᵀ C e^{At} dt] xo)^{1/2} = (xoᵀ Wo xo)^{1/2}
where Wo = ∫_0^∞ e^{Aᵀt} Cᵀ C e^{At} dt is the observability gramian of the system M(s).
Indeed, Wo is the positive definite solution of the Lyapunov equation
  AᵀWo + WoA + CᵀC = 0
When A is stable, the system is observable if and only if Wo > 0.
On the other hand, an input on (−∞, 0] that drives the state to xo at t = 0 satisfies
  xo = ∫_{−∞}^0 e^{−At} B w(t) dt
If we minimize the input energy subject to this reachability condition, the optimal input can be found to be
  w(t) = Bᵀ e^{−Aᵀt} Wc⁻¹ xo,  t ≤ 0
where
  Wc = ∫_{−∞}^0 e^{−At} B Bᵀ e^{−Aᵀt} dt = ∫_0^∞ e^{At} B Bᵀ e^{Aᵀt} dt
The matrix Wc is the controllability gramian of the system M(s). It satisfies the Lyapunov equation
  AWc + WcAᵀ + BBᵀ = 0
When A is stable, the system is controllable if and only if Wc is positive definite.
Note that the minimal input energy needed is
  (∫_{−∞}^0 wᵀ(t) w(t) dt)^{1/2} = (xoᵀ Wc⁻¹ xo)^{1/2}
In summary, the observability gramian Wo determines the total energy in the system output starting from a given initial state (with no input), and the controllability gramian Wc determines which points in the state space can be reached using an input with total energy one.
Both Wo and Wc depend on the realization. Under a similarity transformation, the realization (TAT⁻¹, TB, CT⁻¹) has gramians TWcTᵀ and T⁻ᵀWoT⁻¹, respectively.
The eigenvalues of WcWo are invariant under similarity transformation and are thus a system property.
Balanced Realization and Hankel Singular Values
Let Wc and Wo be the controllability gramian and observability gramian of the system (A, B, C), respectively, i.e.,
  AWc + WcAᵀ + BBᵀ = 0
and
  AᵀWo + WoA + CᵀC = 0
The Hankel singular values are defined as the square roots of the eigenvalues of WcWo, which are independent of the particular realization.
Let T be a matrix such that
  T Wc Wo T⁻¹ = Σ² = diag(σ1², σ2², ..., σn²)
The Hankel singular values of the system are σ1, σ2, ..., σn (in descending order).
The matrix T diagonalizes the controllability and observability gramians. Indeed, the new realization (TAT⁻¹, TB, CT⁻¹) admits
  T Wc Tᵀ = T⁻ᵀ Wo T⁻¹ = Σ = diag(σ1, σ2, ..., σn)
A realization (A, B, C) is balanced if its controllability and observability gramians are the same.
The maximal gain from the past input to the future output is defined as the Hankel norm:
  ‖M(s)‖H = sup_{u∈L2(−∞,0]} ‖z‖_{L2[0,∞)} / ‖u‖_{L2(−∞,0]}
          = sup_{xo} (xoᵀ Wo xo)^{1/2} / (xoᵀ Wc⁻¹ xo)^{1/2}
          = λmax^{1/2}(Wc Wo) = σ1
The Hankel norm is thus the maximal Hankel singular value.
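Gramians and Hankel singular values follow directly from two Lyapunov solves. A Python/scipy sketch on an assumed stable example system (not the homework data):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed stable, minimal second-order example
A = np.array([[-1., 1.], [0., -2.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])

# A Wc + Wc A^T + B B^T = 0  and  A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of Wc Wo,
# sorted in descending order
hsv = np.sqrt(np.sort(np.linalg.eigvals(Wc @ Wo).real)[::-1])
hankel_norm = hsv[0]
```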
Quantification of Systems
The norm of a system is typically defined as the induced norm between its output and input.
(Block diagram: w → M(s) → z.)
Two classes of approaches:
1. Size of the output due to a particular signal or a class of signals.
2. Relative size of the output and input.
System 2-norm (SISO case). Let m(t) be the impulse response of the system M(s). Its 2-norm is defined as
  ‖M‖2 = (∫_{−∞}^{∞} |m(t)|² dt)^{1/2}
The system 2-norm can be interpreted as the 2-norm of the output due to an impulse input. By Parseval's theorem,
  ‖M‖2 = ((1/2π) ∫_{−∞}^{∞} |M(jω)|² dω)^{1/2}
Suppose the input signal w and output signal z are stochastic in nature, and let Szz(ω) and Sww(ω) be the spectral densities of the output and input, respectively. Then
  Szz(ω) = |M(jω)|² Sww(ω)
Note that
  ‖z‖rms = ((1/2π) ∫_{−∞}^{∞} Szz(ω) dω)^{1/2} = ((1/2π) ∫_{−∞}^{∞} |M(jω)|² Sww(ω) dω)^{1/2}
The system 2-norm can then be interpreted as the rms value of the output subject to unit-spectral-density white noise.
System 2-norm (MIMO case). The system 2-norm is defined as
  ‖M‖2 = ((1/2π) ∫_{−∞}^{∞} trace[M*(jω)M(jω)] dω)^{1/2}
       = (trace ∫_0^∞ m(t) mᵀ(t) dt)^{1/2}
       = ((1/2π) ∫_{−∞}^{∞} Σ_i σi(M(jω))² dω)^{1/2}
where m(t) is the impulse response.
Let ei be the i-th standard basis vector of Rm. Apply the impulsive input δ(t)ei to the system to obtain the impulse response zi(t) = m(t)ei. Then
  ‖M‖2² = Σ_{i=1}^m ‖zi‖2²
The system 2-norm can be interpreted as
1. the rms response due to white noise: ‖M‖2 = ‖z‖rms with w = white noise;
2. the 2-norm response due to an impulse input: ‖M‖2 = ‖z‖2 with w = impulse.
The system ∞-norm can be regarded as the peak value in the Bode magnitude (singular value) plot:
  ‖M‖∞ = sup_{Re{s}>0} σmax(M(s))
        = sup_ω σmax(M(jω))   (when M(s) is stable)
        = sup_{‖w‖rms≠0} ‖Mw‖rms / ‖w‖rms
        = sup_{‖w‖2≠0} ‖Mw‖2 / ‖w‖2
        = sup_{‖w‖2=1} ‖Mw‖2
System 1-norm (peak-to-peak ratio):
  ‖M‖1 = sup_w ‖Mw‖∞ / ‖w‖∞
State-Space Computation of the 2-Norm
Consider the state-space realization of a stable transfer function M(s):
  ẋ = Ax + Bw
  z = Cx + Dw
In the 2-norm computation, A is assumed stable and D is zero.
Recall that the impulse response of M(s) is
  m(t) = C e^{At} B,  t ≥ 0
The 2-norm, according to the definition, satisfies
  ‖M‖2² = trace ∫_0^∞ mᵀ(t) m(t) dt
        = trace [Bᵀ (∫_0^∞ e^{Aᵀt} Cᵀ C e^{At} dt) B]
        = trace BᵀWoB
Thus, the 2-norm of M(s) can be computed by solving a Lyapunov equation for Wo,
  AᵀWo + WoA + CᵀC = 0
and taking the square root of the trace of BᵀWoB:
  ‖M‖2 = √(trace BᵀWoB)
Similarly, let Wc be the controllability gramian,
  AWc + WcAᵀ + BBᵀ = 0
Then
  ‖M‖2 = √(trace CWcCᵀ)
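Both gramian-based formulas can be checked on a system whose H2 norm is known in closed form. For the assumed example M(s) = 1/(s+1) (my illustration, not from the notes), ‖M‖2 = 1/√2:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed example: M(s) = 1/(s+1), i.e. A = -1, B = C = 1, D = 0
A = np.array([[-1.]])
B = np.array([[1.]])
C = np.array([[1.]])

# Observability-gramian route: ||M||_2 = sqrt(trace B^T Wo B)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_obs = np.sqrt(np.trace(B.T @ Wo @ B))

# Controllability-gramian route: ||M||_2 = sqrt(trace C Wc C^T)
Wc = solve_continuous_lyapunov(A, -B @ B.T)
h2_ctr = np.sqrt(np.trace(C @ Wc @ C.T))
```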
H∞ Norm Computation
The H∞ norm of M(s) can be computed through a search of the maximal singular value of M(jω) over all ω:
  ‖M(s)‖∞ = ess sup_ω σmax(M(jω))
Let M(s) = [A B; C D]. Then ‖M(s)‖∞ < γ if and only if σmax(D) < γ and H has no eigenvalues on the imaginary axis, where
  H = [A, 0; −CᵀC, −Aᵀ] + [B; −CᵀD] (γ²I − DᵀD)⁻¹ [DᵀC, Bᵀ]
Computation of the H∞ norm requires iteration on γ.
Indeed, ‖M(s)‖∞ < γ if and only if
  Φ(s) = γ²I − M~(s)M(s) > 0
if and only if Φ(jω) is nonsingular for all ω, if and only if Φ⁻¹(s) has no imaginary-axis poles. But
  Φ⁻¹(s) = [H, [B(γ²I − DᵀD)⁻¹; −CᵀD(γ²I − DᵀD)⁻¹]; [(γ²I − DᵀD)⁻¹DᵀC, (γ²I − DᵀD)⁻¹Bᵀ], (γ²I − DᵀD)⁻¹]
Homework: For the system
  M(s) = [A B; C D] = [2 1 1 1; 1 1 1 2; 4 2 1 0; 1 0 1 0]
determine:
1. The controllability and observability gramians of the system.
2. The Hankel singular values.
3. The balanced realization of the system.
4. The H2 and H∞ norms of M(s).
Linear Matrix Inequality

Linear matrix inequality (LMI):

  F(x) = F_0 + Σ_{i=1}^m x_i F_i < 0

where x ∈ R^m is the variable and the symmetric matrices F_i = F_i^T ∈ R^{n×n}, i = 0, ..., m, are given.

The inequality < 0 must be interpreted as negative definiteness.

The LMI is a convex constraint on x, i.e., the set {x | F(x) < 0} is convex. That is, if x_1 and x_2 satisfy F(x_1) < 0 and F(x_2) < 0, then F(αx_1 + βx_2) < 0 for positive scalars α and β such that α + β = 1.
Multiple LMIs can be rewritten as a single LMI by concatenation:

  F_1(x) < 0 and F_2(x) < 0  ⇔  [F_1(x)  0; 0  F_2(x)] < 0

Schur complement technique: Assume that Q(x) = Q^T(x), R(x) = R^T(x), and S(x) depend affinely on x. Then

  [Q(x)  S(x); S^T(x)  R(x)] < 0  ⇔  R(x) < 0 and Q(x) − S(x)R^{-1}(x)S^T(x) < 0

That is, the nonlinear inequality Q(x) − S(x)R^{-1}(x)S^T(x) < 0 can be represented as an LMI.
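The Schur complement equivalence is easy to sanity-check numerically (illustrative random data, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = -3.0 * np.eye(2)                    # stands in for Q(x)
R = -2.0 * np.eye(2)                    # stands in for R(x)
S = 0.5 * rng.standard_normal((2, 2))   # stands in for S(x)

block = np.block([[Q, S], [S.T, R]])

def negative_definite(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

schur = Q - S @ np.linalg.inv(R) @ S.T
assert negative_definite(block) == (negative_definite(R) and negative_definite(schur))
```

The assertion encodes the equivalence itself, so it holds for any symmetric Q, R and any S (with R nonsingular).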
Let Z(x) be a matrix depending affinely on x. The constraint ||Z(x)|| < 1 is equivalent to I − Z(x)Z^T(x) > 0 and can be represented as the LMI

  [I       Z(x)
   Z^T(x)  I   ] > 0

Let c(x) be a vector and P(x) a symmetric matrix, both depending affinely on x. The constraints P(x) > 0 and c^T(x)P^{-1}(x)c(x) < 1 can be represented as the LMI

  [P(x)    c(x)
   c^T(x)  1   ] > 0
The constraint

  trace S^T(x)P^{-1}(x)S(x) < 1,  P(x) > 0

where P(x) = P^T(x) and S(x) depend affinely on x, can be restated as

  trace Q < 1,  S^T(x)P^{-1}(x)S(x) < Q,  P(x) > 0

and hence

  trace Q < 1,  [Q      S^T(x)
                 S(x)   P(x)  ] > 0
Orthogonal complement of a matrix. Let P ∈ R^{n×m} be of rank m < n. The orthogonal complement of P is a matrix P⊥ ∈ R^{n×(n−m)} such that

  P⊥^T P = 0

and [P⊥  P] is invertible.

Finsler's lemma. Let P ∈ R^{n×m} and R = R^T ∈ R^{n×n} where rank(P) = m < n. Suppose P⊥ is the orthogonal complement of P. Then

  σ P P^T + R < 0

for some real σ if and only if

  P⊥^T R P⊥ < 0
To see the above, note that

  σ P P^T + R < 0
  ⇔ [P⊥  P]^T (σ P P^T + R) [P⊥  P] < 0
  ⇔ [P⊥^T R P⊥    P⊥^T R P
     P^T R P⊥     σ(P^T P)(P^T P) + P^T R P] < 0
  ⇔ P⊥^T R P⊥ < 0 and σ(P^T P)(P^T P) + P^T R P − P^T R P⊥ (P⊥^T R P⊥)^{-1} P⊥^T R P < 0
  ⇔ P⊥^T R P⊥ < 0

since the second condition can always be met by choosing σ sufficiently negative.
Generalized projection lemma. Given P ∈ R^{n×m} (rank m < n), Q ∈ R^{n×l} (rank l < n), and R = R^T ∈ R^{n×n}, there exists K ∈ R^{m×l} such that

  R + P K Q^T + Q K^T P^T < 0

if and only if

  P⊥^T R P⊥ < 0 and Q⊥^T R Q⊥ < 0

where P⊥ and Q⊥ are the orthogonal complements of P and Q, respectively.
Homework: Let

  P = [2 1; 1 1; 3 0],  Q = [1 4; 2 3; 3 2],  R = [4 1 2; 1 8 3; 2 3 2]

Find a matrix K such that R + P K Q^T + Q K^T P^T < 0.
Stability and Norm Computation: LMI Approach

Lyapunov stability. The system ẋ = Ax is stable if and only if all the eigenvalues of A are in the open left half plane, if and only if there exists an X > 0 such that

  A X + X A^T < 0
H∞ norm computation. The H∞ norm of the system [A B; C D] is less than γ if and only if γ²I > D^T D and there exists an X > 0 such that

  A^T X + X A + C^T C + (X B + C^T D)(γ²I − D^T D)^{-1}(B^T X + D^T C) = 0

if and only if γ²I > D^T D and there exists an X > 0 such that

  A^T X + X A + C^T C + (X B + C^T D)(γ²I − D^T D)^{-1}(B^T X + D^T C) < 0

if and only if

  [A^T X + X A + C^T C    X B + C^T D
   B^T X + D^T C          D^T D − γ²I ] < 0

if and only if there exists an X > 0 such that

  [A^T X + X A    X B    C^T
   B^T X          −γI    D^T
   C              D      −γI ] < 0
H2 norm computation. The H2 norm of the system M(s) = [A B; C 0] can be determined from the gramian. Indeed, ||M(s)||_2^2 = trace B^T Wo B, where Wo satisfies

  A^T Wo + Wo A + C^T C = 0

Thus, to bound the 2-norm, one can solve for positive definite X and Y such that

  A^T X + X A + C^T C < 0  and  [Y     B^T X
                                 X B   X    ] > 0

The 2-norm bound is determined from trace Y, i.e., ||M||_2^2 < trace Y.
Stability
Well-Posedness

Well-posedness means:
– Given any initial state and input signals, the state and output are uniquely determined.
– All transfer functions between different nodes exist and are proper.

This well-posedness condition is needed in order to investigate the normal behavior of the system and to avoid possible pathologies.
Consider the feedback system in which both Pyu and K are linear, time-invariant, real rational, and proper.

[Figure: feedback loop of K(s) and Pyu(s), with external input v1 injected at the plant input u and v2 injected at the measured output y.]
The transfer function from the external input [v1; v2] to [u; y] satisfies

  [I     −K
   −Pyu  I ] [u; y] = [v1; v2]

or

  (I − K Pyu) u = v1 + K v2
  (I − Pyu K) y = Pyu v1 + v2

Thus,

  [u; y] = [I  −K; −Pyu  I]^{-1} [v1; v2]
         = [(I − K Pyu)^{-1}  0; 0  (I − Pyu K)^{-1}] [I  K; Pyu  I] [v1; v2]

The feedback system is well-posed if and only if the inverse of (I − Pyu(s)K(s)) exists and is proper.
Note that

  (I − K(s)Pyu(s))^{-1} = I + K(s)(I − Pyu(s)K(s))^{-1}Pyu(s)

The feedback system is well-posed if and only if (I − Pyu(∞)K(∞)) is invertible, or equivalently,

  [I        −K(∞)
   −Pyu(∞)  I    ]

is invertible.

In particular, the feedback system is well-posed if either K(∞) or Pyu(∞) is zero.
Internal Stability

Concepts of stability:

(i) Equilibrium state: An equilibrium state is asymptotically stable (in the sense of Lyapunov) if the state trajectory returns to the original equilibrium point under perturbations.
(ii) Input-output behavior: A system is stable if every bounded input results in a bounded output.
[Figure: the feedback loop of K(s) and Pyu(s) with injected signals v1 and v2.]
The system is internally stable if the output [u; y] is bounded under bounded excitation [v1; v2]. Since

  [u; y] = [I  −K; −Pyu  I]^{-1} [v1; v2]
         = [(I − K Pyu)^{-1}         (I − K Pyu)^{-1} K
            (I − Pyu K)^{-1} Pyu     (I − Pyu K)^{-1}   ] [v1; v2]

the system is internally stable if all four transfer function matrices are stable rational transfer function matrices, i.e.,

  [(I − K Pyu)^{-1}         (I − K Pyu)^{-1} K
   (I − Pyu K)^{-1} Pyu     (I − Pyu K)^{-1}   ] ∈ RH∞
Internal stability guarantees that all signals in a system are bounded provided that the injected signals are bounded.

Internal stability cannot be concluded even if three of the four transfer function matrices are stable. For example,

  Pyu = (s − 1)/(s + 1) and K = −1/(s − 1)

for which three of the four closed-loop transfer matrices are stable, but (I − K Pyu)^{-1} K retains an unstable pole at s = 1.
Concepts of Coprime Factorization

Two elements are coprime if their greatest common divisor (g.c.d.) is a unit.

For example, 8 and 11 are coprime in the set of integers because their g.c.d. is 1.

If a and b are coprime, then there exist (coprime) integers x and y such that ax + by = 1. In the previous example, 8 · 7 + 11 · (−5) = 1.

A rational number is the ratio of two integers, and indeed any rational number can be canonically written as a ratio of two coprime integers.

Two polynomials are coprime if they do not share common zeros. Two polynomials n(s) and d(s) are coprime if and only if there exist polynomials u(s) and v(s) such that

  n(s)u(s) + d(s)v(s) = 1

For example, for the coprime polynomials n(s) = 2(s − 1) and d(s) = s² − s + 1, there exist u(s) = −(1/2)s and v(s) = 1 such that n(s)u(s) + d(s)v(s) = 1.
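The polynomial Bezout identity above can be verified numerically (coefficients in descending powers of s):

```python
import numpy as np

n = np.array([2.0, -2.0])        # n(s) = 2(s - 1) = 2s - 2
d = np.array([1.0, -1.0, 1.0])   # d(s) = s^2 - s + 1
u = np.array([-0.5, 0.0])        # u(s) = -(1/2)s
v = np.array([1.0])              # v(s) = 1

lhs = np.polyadd(np.polymul(n, u), np.polymul(d, v))
assert np.allclose(lhs, [0.0, 0.0, 1.0])   # n(s)u(s) + d(s)v(s) = 1
```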
A rational transfer function can be represented as a fraction of two coprime polynomials. For example, the transfer function

  2(s − 1) / (s² − s + 1)

is a fraction of the polynomials 2(s − 1) and s² − s + 1. Euclid's algorithm can be used to find the greatest common divisor and hence to check coprimeness.

A rational transfer function can also be written as a ratio of two stable transfer functions:

  2(s − 1)/(s² − s + 1) = [2(s − 1)/(s² + s + 1)] · [(s² − s + 1)/(s² + s + 1)]^{-1}

Benefits:
– Stability can be treated in an algebraic manner
– State-space computations can be carried out
Coprime Factorization

Let N(s) and D(s) be in RH∞. N(s) and D(s) are right coprime over RH∞ if there exist U(s) and V(s), both in RH∞, such that

  U(s)N(s) + V(s)D(s) = I

They are left coprime if there exist U(s) and V(s) in RH∞ such that

  N(s)U(s) + D(s)V(s) = I

The above equations are called Bezout identities.

When N(s) and D(s) are right coprime, the stacked matrix [N(s); D(s)] has a left inverse, namely [U(s)  V(s)], in RH∞. Likewise, N(s) and D(s) are left coprime if and only if [N(s)  D(s)] has a right inverse.

G(s) is said to admit a right coprime factorization if G(s) = Nr(s)Dr^{-1}(s) for some right coprime Nr(s) and Dr(s). G(s) = Dl^{-1}(s)Nl(s) is a left coprime factorization if Nl(s) and Dl(s) are left coprime.

Coprime factorization theorem. For every real rational transfer function matrix G(s), there exist right coprime factors Nr(s), Dr(s) and left coprime factors Nl(s), Dl(s) such that

  G(s) = Nr(s)Dr^{-1}(s) = Dl^{-1}(s)Nl(s)
Proof (by construction):

Assume that G(s) admits the (minimal) state-space realization

  G(s) = D + C(sI − A)^{-1}B = [A  B; C  D]

Let F and H be a stabilizing state feedback gain and output injection gain, respectively; that is, both A + BF and A + HC are stable. Then the coprime factors admit the following state-space realizations:

  [Nr(s); Dr(s)] = [A + BF | B
                    C + DF | D
                    F      | I ]

  [Nl(s)  Dl(s)] = [A + HC | B + HD   H
                    C      | D        I ]

Checks:
– Nr(s), Nl(s), Dr(s), and Dl(s) are in RH∞. This is true because A + BF and A + HC are stable matrices.

– G(s) = Nr(s)Dr^{-1}(s) = Dl^{-1}(s)Nl(s). Indeed,

  Nr(s)Dr^{-1}(s) = [A + BF  B; C + DF  D] · [A + BF  B; F  I]^{-1}

                  = [A + BF   −BF  |  B
                     0        A    |  B
                     C + DF   −DF  |  D ]   (series connection, using Dr^{-1} = [A  B; −F  I])

                  = [A + BF   0  |  0
                     0        A  |  B
                     C + DF   C  |  D ]   (similarity transformation)

                  = [A  B; C  D] = G(s)   (removal of the uncontrollable modes)
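The identity Nr(s)Dr(s)^{-1} = G(s) can also be checked pointwise in s for a small assumed example (any F making A + BF stable will do):

```python
import numpy as np

# Assumed example: an unstable plant and a stabilizing state feedback gain.
A = np.array([[0.0, 1.0], [3.0, 2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
F = np.array([[-7.0, -6.0]])   # A + BF = [[0,1],[-4,-4]], eigenvalues -2 (double)

def tf(Ax, Bx, Cx, Dx, s):
    # Evaluate D + C (sI - A)^{-1} B at a complex point s.
    return Dx + Cx @ np.linalg.inv(s * np.eye(Ax.shape[0]) - Ax) @ Bx

s = 1.7 + 0.3j                  # arbitrary evaluation point (not a pole)
AF = A + B @ F
Nr = tf(AF, B, C + D @ F, D, s)
Dr = tf(AF, B, F, np.eye(1), s)
G  = tf(A, B, C, D, s)
assert np.allclose(Nr @ np.linalg.inv(Dr), G)
```

Since the identity holds as transfer functions, it holds at every regular point s, which is what the check exploits.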
– Coprimeness: [Nr(s); Dr(s)] admits a left inverse in RH∞. Let

  [Ur(s)  Vr(s)] = [A + HC | H   −(B + HD)
                    F      | 0   I         ]

Then

  [Ur(s)  Vr(s)] [Nr(s); Dr(s)]
    = [A + HC | H  −(B + HD); F | 0  I] · [A + BF | B; C + DF | D; F | I]

    = [A + HC   HC − BF  |  −B
       0        A + BF   |  B
       F        F        |  I ]

    = [A + HC   0       |  0
       0        A + BF  |  B
       F        0       |  I ]   (similarity transformation)

    = I
Let G = Nr Dr^{-1} where Nr and Dr are right coprime. Then G is stable if and only if Dr^{-1} is stable.

If G is stable, then we can take Nr = Nl = G, Dr = Dl = I, Ur = Ul = 0, and Vr = Vl = I.
Double Coprime Factorization

A transfer function matrix Pyu(s) is said to have a double coprime factorization if there exist a right coprime factorization Pyu(s) = Nr(s)Dr^{-1}(s) and a left coprime factorization Pyu(s) = Dl^{-1}(s)Nl(s) such that

  [Vr(s)    Ur(s)
   −Nl(s)   Dl(s)] [Dr(s)   −Ul(s)
                    Nr(s)   Vl(s) ] = [I  0
                                       0  I ]

Here, Nr, Dr, Nl, Dl, Ur, Ul, Vr, and Vl are all in RH∞.

The double coprime equations:

  Vr Dr + Ur Nr = I
  Nl Ul + Dl Vl = I
  Dl Nr − Nl Dr = 0
  Ur Vl − Vr Ul = 0

Suppose that Pyu(s) is a proper real rational transfer function matrix with the stabilizable and detectable realization

  Pyu(s) = [A  B0; C0  D00]

and let F and H be such that A + B0 F and A + H C0 are both stable. Then a particular state-space realization of the double coprime factors is

  [Dr(s)  −Ul(s)
   Nr(s)  Vl(s) ] = [A + B0 F   | B0    −H
                     F          | I     0
                     C0 + D00 F | D00   I  ]

  [Vr(s)   Ur(s)
   −Nl(s)  Dl(s)] = [A + H C0  | −(B0 + H D00)   H
                     F         | I               0
                     C0        | −D00            I ]
Stability and Coprime Factorization

[Figure: feedback loop of K(s) and Pyu(s) with injected signals v1 and v2.]
Internal stability requires that

  [I  −K; −Pyu  I]^{-1} = [(I − K Pyu)^{-1}         (I − K Pyu)^{-1} K
                           (I − Pyu K)^{-1} Pyu     (I − Pyu K)^{-1}   ] ∈ RH∞
A matrix transfer function E(s) ∈ RH∞ is called unimodular if its inverse exists and E^{-1}(s) ∈ RH∞.
Assume that Pyu(s) admits the following right and left coprime factorizations

  Pyu(s) = Nr(s)Dr^{-1}(s) = Dl^{-1}(s)Nl(s)

and assume that K(s) admits the following coprime factorizations

  K(s) = −Vr^{-1}(s)Ur(s) = −Ul(s)Vl^{-1}(s)

Then the following statements are equivalent:

1. K(s) stabilizes Pyu(s).
2. Ur(s)Nr(s) + Vr(s)Dr(s) is unimodular.
3. Dl(s)Vl(s) + Nl(s)Ul(s) is unimodular.
4. [Dr(s)  −Ul(s); Nr(s)  Vl(s)] is unimodular.
5. [Vr(s)  Ur(s); −Nl(s)  Dl(s)] is unimodular.

Proof of 1 ⇔ 2:
Assume Pyu = Nr Dr^{-1} and K = −Vr^{-1} Ur. Let E = Ur(s)Nr(s) + Vr(s)Dr(s). Then

  (I − K Pyu)^{-1} = (I + Vr^{-1} Ur Nr Dr^{-1})^{-1} = Dr (Ur Nr + Vr Dr)^{-1} Vr = Dr E^{-1} Vr
  (I − K Pyu)^{-1} K = −Dr E^{-1} Ur
  (I − Pyu K)^{-1} Pyu = Nr E^{-1} Vr
  (I − Pyu K)^{-1} = I + Pyu (I − K Pyu)^{-1} K = I − Nr E^{-1} Ur

Clearly, when E is unimodular, the system is internally stable.
On the other hand, internal stability implies that

  [Nr; Dr] E^{-1} [Ur  Vr] ∈ RH∞

Since Nr and Dr are right coprime, there exist Ũr and Ṽr in RH∞ such that [Ũr  Ṽr][Nr; Dr] = I. Likewise, since Ur and Vr are left coprime, there exist Ñr and D̃r in RH∞ such that [Ur  Vr][Ñr; D̃r] = I. Hence,

  [Ũr  Ṽr] [Nr; Dr] E^{-1} [Ur  Vr] [Ñr; D̃r] = E^{-1} ∈ RH∞

That is, E is invertible in RH∞.

Proof of 1 ⇔ 4:
Note that

  [Dr(s)  −Ul(s)
   Nr(s)  Vl(s) ] = [I           −Ul Vl^{-1}
                     Nr Dr^{-1}  I           ] [Dr  0
                                               0   Vl] = [I    K
                                                          Pyu  I ] [Dr  0
                                                                    0   Vl]

The term [I  K; Pyu  I] is invertible as a proper transfer matrix provided that the system is well-posed (and its stability is equivalent to that of [I  −K; −Pyu  I]^{-1}). Thus,

  [I  K; Pyu  I]^{-1} = [Dr  0; 0  Vl] · [Dr(s)  −Ul(s); Nr(s)  Vl(s)]^{-1} = X Y^{-1}

with X = [Dr  0; 0  Vl] and Y = [Dr  −Ul; Nr  Vl]. However, X and Y are right coprime in the sense that there exist

  W = [Vr  −Ur; Nl  Dl]  and  Z = [0  Ur; −Nl  0]

both in RH∞ such that

  W X + Z Y = [Vr  −Ur
               Nl  Dl  ][Dr  0
                         0   Vl] + [0    Ur
                                    −Nl  0 ][Dr  −Ul
                                             Nr  Vl ] = [I  0
                                                         0  I]

Thus, [I  K(s); Pyu(s)  I]^{-1} is stable if and only if [Dr(s)  −Ul(s); Nr(s)  Vl(s)]^{-1} is stable, if and only if [Dr(s)  −Ul(s); Nr(s)  Vl(s)] is unimodular in RH∞.
Standard Form for Feedback Design

Canonical form for feedback system design and analysis problems:

[Figure: generalized plant P(s) with exogenous input w, controlled output z, control input u, and measured output y; the controller K(s) closes the loop from y to u.]
In the above figure, w stands for the exogenous input signal, which could be the command signal, the disturbance, or the sensor noise, depending on the application; u is the control signal generated by the controller to satisfy certain design requirements; y is the measured output, the signal used for controller synthesis. Finally, z is the output signal to be controlled, which represents the response of the system, the tracking error, or the actuation signal.
  [z; y] = [Pzw(s)  Pzu(s)
            Pyw(s)  Pyu(s)] [w; u]

The matrix transfer function from [w; u] to [z; y],

  P(s) = [Pzw(s)  Pzu(s); Pyw(s)  Pyu(s)]

is derived from the plant dynamics, the design requirements, the interconnection, and the weightings.
  u = K(s)y

The objective of the regulator design problem is to synthesize the controller K(s) such that the design requirements are satisfied. Design requirements may include internal stability, achievement of certain input-output relationships, and robustness, among others.

The previous figure can be redrawn in an equivalent form (figure omitted).
Closed-Loop Map

The system is governed by

  z = Pzw w + Pzu u
  y = Pyw w + Pyu u

and u = K y.

Linear fractional map. When the fictitious signals are zero, the transfer function from the input w to the output z is a linear fractional map. Let P = [Pzw  Pzu; Pyw  Pyu]; the (lower) linear fractional transformation is

  Fl(P, K) = Pzw + Pzu K (I − Pyu K)^{-1} Pyw = Pzw + Pzu (I − K Pyu)^{-1} K Pyw

Scattering matrix representation. Let S be the matrix that governs the map from [y; u] to [w; z], that is,

  [w; z] = S [y; u] = [Swy  Swu
                       Szy  Szu] [y; u]

Then

  w = Pyw^{-1} y − Pyw^{-1} Pyu u

and

  z = Pzw Pyw^{-1} y + (Pzu − Pzw Pyw^{-1} Pyu) u

Thus,

  S = [Swy  Swu
       Szy  Szu] = [Pyw^{-1}        −Pyw^{-1} Pyu
                    Pzw Pyw^{-1}    Pzu − Pzw Pyw^{-1} Pyu]

It can be shown that

  w = Swy y + Swu u = (Swy + Swu K) y
  z = Szy y + Szu u = (Szy + Szu K) y

Thus,

  z = (Szy + Szu K)(Swy + Swu K)^{-1} w

This is a scattering matrix representation of the closed-loop map.

State-space representation. Assume that the transfer function matrix P(s) admits the state-space realization:

  ẋ = A x + B1 w + B0 u
  z = C1 x + D11 w + D10 u
  y = C0 x + D01 w + D00 u
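The equality of the two expressions for Fl(P, K) above (a push-through identity) can be checked numerically with random constant matrices standing in for the transfer function blocks (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
Pzw, Pzu = rng.standard_normal((2, 3)), rng.standard_normal((2, 2))
Pyw, Pyu = rng.standard_normal((2, 3)), rng.standard_normal((2, 2))
K = 0.3 * rng.standard_normal((2, 2))   # small gain keeps I - Pyu K invertible

I = np.eye(2)
f1 = Pzw + Pzu @ K @ np.linalg.inv(I - Pyu @ K) @ Pyw
f2 = Pzw + Pzu @ np.linalg.inv(I - K @ Pyu) @ K @ Pyw
assert np.allclose(f1, f2)
```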
All Stabilizing Controllers

All stabilizing controllers can be represented in terms of a linear fractional map:

  K(s) = −[Vr(s) − Q(s)Nl(s)]^{-1} [Ur(s) + Q(s)Dl(s)]
       = −[Ul(s) + Dr(s)Q(s)] [Vl(s) − Nr(s)Q(s)]^{-1}

and thus

  K(s) = Fl(J(s), Q(s))

for Q(s) in RH∞ and

  J(s) = [−Ul Vl^{-1}    −Vr^{-1}
           Vl^{-1}        Vl^{-1} Nr]   or   [−Vr^{-1} Ur    −Vr^{-1}
                                               Vl^{-1}        Nl Vr^{-1}]

To see this,

  K(s) = −(Ul + Dr Q)(Vl − Nr Q)^{-1}
       = −Ul (Vl − Nr Q)^{-1} − Dr Q (Vl − Nr Q)^{-1}
       = −Ul Vl^{-1} − Ul Vl^{-1} Nr Q (Vl − Nr Q)^{-1} − Dr Q (Vl − Nr Q)^{-1}
       = −Ul Vl^{-1} − (Dr + Ul Vl^{-1} Nr) Q (Vl − Nr Q)^{-1}
       = −Ul Vl^{-1} − Vr^{-1} Q (I − Vl^{-1} Nr Q)^{-1} Vl^{-1}
       = Fl(J(s), Q(s))

The matrix J(s) in the linear fractional controller parametrization is

  J(s) = [A + B0 F + H C0 + H D00 F | −H    B0 + H D00
          F                         |  0    I
          −(C0 + D00 F)             |  I    −D00       ]
Block Diagram Interpretation

Controller design problem:

[Figure: K(s) in feedback with the generalized plant P(s); w → z, u → y.]

After controller parametrization:

[Figure: the free parameter Q(s) in feedback with J(s), which is in feedback with P(s).]

The closed-loop map after stabilization:

[Figure: Q(s) in feedback with T(s); equivalently, z = (T11 + T01 Q T10) w, so the closed-loop map is affine in the parameter Q.]
Feedback Control and Performance Limitations
The sensitivity function also governs the response from the command r to the tracking error e. Thus, for good tracking performance, i.e., small tracking error, S must be small as well.
Let Lo(s) be the nominal loop transfer function; then the map from the command r to the output y is characterized by the transfer function To(s), where

  To(s) = Lo(s) / (1 + Lo(s))

Suppose that, due to neglected dynamics, perturbations, and so forth, the actual loop transfer function becomes L(s); the resulting closed-loop transfer function becomes

  T(s) = L(s) / (1 + L(s))

The sensitivity function S(s) of the closed-loop map T(s) with respect to the variation of the loop transfer function L is

  S(s) = lim_{ΔL→0} (ΔT/To) / (ΔL/Lo) = 1 / (1 + Lo(s))

Thus, S(s) must be maintained small in order to reduce the effect of plant uncertainty on the closed-loop map.
But how small is small? Recall that in the open-loop situation the transfer function from d to y is 1; thus, it is generally required that |S(jω)| be smaller than 1 at all frequencies ω.
Let D ⊂ C be an open set and let f(s) be a complex-valued function defined on D. Then f(s) is said to be analytic at a point zo in D if it is differentiable at zo and also at each point in some neighborhood of zo. In other words, the function has a power series representation around zo. A function f(s) is said to be analytic in D if it is analytic at each point of D.

The maximum modulus theorem states that if a complex-valued function f of a complex variable is analytic inside and on the boundary of some domain D, then the maximum modulus (magnitude) of the function f occurs on the boundary of the domain D.

In other words, since S(s) is required to be stable, i.e., analytic in the closed right half plane (including at infinity), the maximum of |S(s)| over the right half plane is achieved on the imaginary axis. It is thus equivalent to evaluate |S(jω)|.
[Figure: unity-feedback loop r → e → K(s) → u → G(s) → y with disturbance d entering at the plant output, compared with the open-loop response yo of G(s) subject to the same disturbance.]
[Figure: Nyquist plot of L(jω) in the complex plane relative to the critical point −1.]
Note that the number of poles of 1 + L in the RHP is the same as the number of unstable open-loop poles.

Assume that both G(s) and K(s) are stable; then closed-loop stability means that the Nyquist plot is not allowed to encircle −1.

The critical issue for the encirclement is the phase −180° with unity gain.

Let ΔL be a perturbation that creates marginal instability, i.e.,

  Lo(jω) + ΔL = −1

Then, to have robust stability, ΔL must be large. This is to require that

  |ΔL / Lo| = |(1 + Lo) / Lo|

be large. In other words,

  T(s) = Lo(s) / (1 + Lo(s))

must be small in magnitude at the critical frequencies. The function T(s) is the complementary sensitivity function.

Another limitation: the sensitivity and complementary sensitivity functions sum to one:

  S(s) + T(s) = 1, ∀s

S(s) and T(s) cannot be made small at the same time. There is a conflict between achieving good sensitivity and a good stability margin.

In summary, feedback can be used to
– Shape the system response
– Attenuate disturbances
– Accommodate uncertainty
– Adapt toward variation
and yet it is subject to fundamental limitations on the achievable performance of a feedback system. In particular, the tradeoffs between the sensitivity function and the complementary sensitivity function must be accounted for. A small sensitivity function implies
– Small sensitivity toward variations of GK
– Good disturbance rejection
– Small tracking error

On the other hand, a small complementary sensitivity function means
– Large stability margin
– Good noise rejection
– Robustness against variation of the sensor dynamics
Multivariable Feedback System

Consider the feedback design problem in the figure:

[Figure: command r drives K(s); the control u plus input disturbance d1 forms the plant input up; G(s) produces yp, to which the output disturbance d2 is added to give yo; sensor noise n corrupts the feedback measurement.]
The input loop transfer function (matrix), i.e., the loop transfer function at the input node, is defined as

  Lin(s) = K(s)G(s)

Likewise, the output loop transfer function is

  Lout(s) = G(s)K(s)

The input sensitivity function is defined as the transfer function matrix from the input disturbance d1 to the plant input signal up:

  Sin(s) = [I + Lin(s)]^{-1} = [I + K(s)G(s)]^{-1},  up = Sin d1

The output sensitivity function is defined as the transfer function matrix from the output disturbance d2 to the output signal yo:

  Sout(s) = [I + Lout(s)]^{-1} = [I + G(s)K(s)]^{-1},  yo = Sout d2

The matrix I + Lin(s) is called the input return difference matrix and I + Lout(s) the output return difference matrix.

The input and output complementary sensitivity functions are defined, respectively, as

  Tin(s) = I − Sin(s) = Lin(s)[I + Lin(s)]^{-1} = K(s)G(s)[I + K(s)G(s)]^{-1}
  Tout(s) = I − Sout(s) = Lout(s)[I + Lout(s)]^{-1} = G(s)K(s)[I + G(s)K(s)]^{-1}
For a nonminimum-phase system, the constraint becomes more stringent. Indeed, consider

  L(s) = [(−s + z)/(s + z)] Lmp(s)

where Lmp(s) is stable and minimum phase and z > 0. Then

  L(jωo) = Lmp(jωo) · (−jωo + z)/(jωo + z)

Since ∠[(−jωo + z)/(jωo + z)] < 0, an extra phase lag is introduced. In particular, when ωo = z, ∠[(−jωo + z)/(jωo + z)] = −90°, and when ωo = z/2, ∠[(−jωo + z)/(jωo + z)] = −53.13°.

A rule of thumb is to have the crossover frequency

  ωc < z/2

Let L(s) be the open-loop transfer function with at least two more poles than zeros and let the pi's be the open right-half-plane poles of L(s). Then Bode's sensitivity integral states that

  ∫_0^∞ ln |S(jω)| dω = π Σ_{i=1}^m Re{pi} ≥ 0

If the sensitivity is kept low in a certain frequency range, there will exist some frequency range in which |S(jω)| is large. This is the so-called waterbed effect.

Suppose that L(s) has a single real right-half-plane zero z. Then

  ∫_0^∞ ln |S(jω)| · [2z/(z² + ω²)] dω = π ln Π_{i=1}^m |(pi + z)/(pi − z)|

The function 2z/(z² + ω²) acts as a weighting on the sensitivity function and further limits the design tradeoffs.
Robustness Analysis Motivation

Let Go(s) = (s − 1)/(s(s − 2)) and let K(s) be a stabilizing controller.

Suppose that the open-loop plant is perturbed to G(s) = (s − 1 + δ2)/(s(s − 2 − δ1)) for some δ1 and δ2.

[Figure: block-diagram realization of G(s) built from two integrators 1/s and the nominal gains, with the perturbation feedback paths δ1 and δ2 and the perturbation channels labeled w1, z1, z2, w2.]

The problem of robust stability is to assess how much tolerance on the perturbations δ1 and δ2 the system can endure before going unstable.

The robustness analysis problem can be formulated in the following standard form, in which the perturbations are cast into an uncertainty block diag(δ1, δ2) and the known dynamics are represented in another block:

[Figure: the uncertainty block diag(δ1, δ2) in feedback with a known 2×2 transfer matrix whose entries are built from 1/(s − 2), 1/(s(s − 2)), K, and 1/(1 + K Go), mapping (w1, w2) to (z1, z2).]
Plant Uncertainty and Robustness

Consider the system in the figure, where Lo(s) is the nominal loop transfer function and Δ(s) is a multiplicative perturbation. The actual loop transfer function becomes L(s) = (I + Δ(s))Lo(s).

[Figure: unity-feedback loop with Lo(s) followed by the multiplicative perturbation block I + Δ(s).]

The system can be redrawn in the standard form:

[Figure: Δ in feedback with Lo(I + Lo)^{-1}, with perturbation input w and output z.]

If the nominal system is closed-loop stable, Δ(s) is open-loop stable, and

  σ_max(Δ(jω)) · σ_max[Lo(jω)(I + Lo(jω))^{-1}] < 1

for all ω, then the perturbed system is closed-loop stable.

To see this, recall that the characteristic equation is det[I + (I + Δ(s))Lo(s)] = 0. Since (I + Lo) is invertible,

  det[I + (I + Δ(s))Lo(s)] = det[(I + Lo(s)) + Δ(s)Lo(s)] = det[I + Lo(s)] · det[I + Δ(s)Lo(s)(I + Lo(s))^{-1}]

In order for det[I + (I + Δ(s))Lo(s)] ≠ 0, it is necessary and sufficient that I + Δ(s)Lo(s)(I + Lo(s))^{-1} be nonsingular. In other words,

  σ_max[Lo(s)(I + Lo(s))^{-1}] · σ_max(Δ(s)) < 1

will ensure the nonsingularity of I + (I + Δ(s))Lo(s) and hence the stability of the closed-loop system.
Singular Value Plots and Uncertainties

Complementary sensitivity function at the output node.

[Figure: Δ inserted multiplicatively at the plant output.]

Robustness test: σ_max(GK(I + GK)^{-1}) · σ_max(Δ) < 1

Uncertainty:
– Output (sensor) error
– Neglected HF dynamics
– Changing number of RHP zeros
– Multiplicative uncertainty: G̃ = (I + Δ)G

Performance:
– Sensor noise attenuation
– Output response to output command

Control function at the input node.

[Figure: Δ inserted additively around the plant.]

Robustness test: σ_max(K(I + GK)^{-1}) · σ_max(Δ) < 1

Uncertainty:
– Additive plant error
– Uncertain RHP zeros
– Additive uncertainty: G̃ = G + Δ

Performance:
– Input response to output command

Complementary sensitivity function at the input node.

[Figure: Δ inserted multiplicatively at the plant input.]

Robustness test: σ_max(KG(I + KG)^{-1}) · σ_max(Δ) < 1

Uncertainty:
– Input (actuator) error
– Neglected HF dynamics
– Changing number of RHP zeros
– Multiplicative uncertainty: G̃ = G(I + Δ)

Performance:
– Input response to input command

Sensitivity function at the output node.

[Figure: Δ inserted in inverse-multiplicative form at the plant output.]

Robustness test: σ_max((I + GK)^{-1}) · σ_max(Δ) < 1

Uncertainty:
– LF plant parameter errors
– Changing number of RHP poles
– Inverse multiplicative uncertainty: G̃ = (I + Δ)^{-1}G

Performance:
– Output sensitivity
– Output error to output commands and disturbances
Robustness Margin

Gain margin problem:
– Real perturbation kg
– Test: |1 + kg L(jω)| = 0
– At the phase crossover frequency ωp, the imaginary part of L(jωp) is zero, i.e., L(jωp) = Lr; thus kg = −L(jωp)^{-1} = −1/Lr, and the gain margin is 1/|Lr|.

Phase margin problem:
– Unity-gain complex perturbation e^{jφ}
– Test: |1 + e^{jφ} L(jω)| = 0
– At the gain crossover frequency ωg, |L(jωg)| = 1, i.e., e^{jφ}L(jωg) = −1; thus the phase margin is φ = π + ∠L(jωg).

Robustness margin problem:
– Complex perturbation kr
– Test: |1 + kr L(jω)| = 0
– At the worst-case frequency ωo, L(jωo) = Lr + jLi; thus |kr| = (√(Lr² + Li²))^{-1}.

[Figure: three Nyquist plots of L(jω) illustrating the gain, phase, and robustness margins as distances to the critical point −1.]
Small Gain Theorem

[Figure: Δ(s) in feedback with M(s).]

Suppose that M(s) ∈ RH∞. Then the system is well-posed and internally stable for all Δ(s) ∈ RH∞ if and only if

  ||M(s)||_∞ ||Δ(s)||_∞ < 1

Critical stability occurs when

  det(I − M(jω)Δ(jω)) = 0

for some frequency ω and some uncertainty Δ.

Since M(s) is stable, the condition for robust stability can be stated as

  det(I − M(jω)Δ(jω)) ≠ 0

for all ω and Δ. This is guaranteed if

  σ_max(M(jω)Δ(jω)) < 1

Note that

  sup_ω σ_max(MΔ) ≤ sup_ω [σ_max(M) σ_max(Δ)] ≤ sup_ω σ_max(M) · sup_ω σ_max(Δ) = ||M||_∞ ||Δ||_∞

On the other hand, we can construct the worst-case perturbation as follows. Assume that σ̄M = ||M||_∞ is achieved at the frequency ωo, and let M(jωo) = U Σ_M V* (singular value decomposition). Then, by assigning Δ = σ̄M^{-1} V U*, one has

  ||M(s)||_∞ ||Δ(s)||_∞ = σ̄M · σ̄M^{-1} = 1

and

  det(I − M(jωo)Δ(jωo)) = det(I − σ̄M^{-1} U Σ_M V* V U*) = 0
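The worst-case construction can be illustrated with a constant matrix standing in for M(jωo) (illustrative data only):

```python
import numpy as np

M = np.array([[2.0, 1.0], [0.5, 1.0]])           # stands in for M(j w_o)
U, svals, Vh = np.linalg.svd(M)
smax = svals[0]

Delta = (1.0 / smax) * Vh.conj().T @ U.conj().T  # Delta = (1/sigma_max) V U*
assert np.isclose(np.linalg.norm(Delta, 2), 1.0 / smax)
assert np.isclose(np.linalg.det(np.eye(2) - M @ Delta), 0.0)   # singularity
```

Since M·Delta = (1/σ_max) U Σ_M U*, the matrix I − M·Delta has a zero eigenvalue, which is what the determinant check confirms.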
The small gain theorem is conservative.

However, when the perturbation is unstructured, complex, and independent, the small gain theorem gives a necessary and sufficient condition for robust stability.

Assume that A is stable and the perturbed system has a state matrix A + BEC for some known B and C (the structure) and unknown E.

The perturbed system can be arranged as a feedback system:

[Figure: E in feedback with the transfer matrix C(sI − A)^{-1}B.]

Thus, a sufficient condition for robust stability is that, according to the small gain theorem,

  σ_max(E) < [sup_ω σ_max(C(jωI − A)^{-1}B)]^{-1}

… > 0, the measure is

  max_{λi(M) real, λi(M) > 0} λi(M)
8. For all Q ∈ Q and D ∈ D,

  μ(M) = μ(MQ) = μ(QM) = μ(D^{1/2} M D^{−1/2})

9.

  max_{Q∈Q} ρ(QM) ≤ max_{Δ∈BΔ} ρ(ΔM) = μ(M) ≤ inf_{D∈D} σ_max(D^{1/2} M D^{−1/2})

10. σ_max(D^{1/2} M D^{−1/2}) is convex in ln(D^{1/2}).

11. Linear matrix inequality:

  σ_max(D^{1/2} M D^{−1/2}) < γ
  ⇔ (D^{1/2} M D^{−1/2})* (D^{1/2} M D^{−1/2}) − γ²I < 0
  ⇔ M* D M − γ²D < 0

12. Computation (lower bound):

  max_{Q∈Q} ρ(QM) = μ(M)

[Figure: Q in feedback with M.]
– Nonconvex; local maxima
– Power method
– Projection method

13. Computation (upper bound):

  μ(M) ≤ inf_{D∈D} σ_max(D^{1/2} M D^{−1/2})

[Figure: D^{1/2}, M, and D^{−1/2} in cascade.]
– Convex optimization
– Frobenius norm
– Perron eigenvalue
– Linear matrix inequality

The bound is tight if 2s + f ≤ 3.
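A crude sketch of the D-scaling upper bound for two scalar uncertainty blocks (assumed example M; for this M, det(I − M diag(δ1, δ2)) = 1 − δ1 − δ2, so μ = 2, and the balanced scaling attains it):

```python
import numpy as np

M = np.array([[1.0, 4.0], [0.25, 1.0]])

def scaled_norm(d):
    Dh = np.diag([np.sqrt(d), 1.0])          # D^{1/2} with D = diag(d, 1)
    return np.linalg.norm(Dh @ M @ np.linalg.inv(Dh), 2)

upper = min(scaled_norm(d) for d in np.logspace(-3.0, 3.0, 2001))
assert upper <= np.linalg.norm(M, 2) + 1e-12   # scaling never hurts
assert abs(upper - 2.0) < 1e-2                 # the bound reaches mu = 2 here
```

With two full 1×1 blocks (f = 2, s = 0, so 2s + f ≤ 3), the upper bound is tight, which is why the search lands exactly on μ.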
Main Loop Theorem

For all Δ1 ∈ Δ1 with σ_max(Δ1) ≤ 1, the perturbed closed-loop system is well-posed and stable if and only if

  sup_{Re(s)≥0} μ_{Δ1}(M22) < 1

[Figure: Δ1 in feedback around the (2,2) channel of M = [M11 M12; M21 M22].]

For all Δ1 ∈ Δ1 with σ_max(Δ1) ≤ 1, the perturbed closed-loop system is well-posed, stable, and satisfies

  sup_{Re(s)≥0} μ_{Δ2}(Fl(M, Δ1)) < 1

if and only if

  sup_{Re(s)≥0} μ_Δ(M) < 1,  where Δ = [Δ1  0
                                        0   Δ2]

[Figure: the structured uncertainty Δ = diag(Δ1, Δ2) in feedback with M = [M11 M12; M21 M22].]
Kharitonov Approach

Stability can be checked via the roots of the characteristic equation det(I − MΔ) = 0.

Let the characteristic polynomial be

  p(s, q) = Σ_{i=0}^n qi s^i = qn s^n + q_{n−1} s^{n−1} + · · · + q1 s + q0

where the qi's lie in intervals:

  q ∈ Q, i.e., qi^− ≤ qi ≤ qi^+ for all i

The polynomial p(s, q) = Σ_{i=0}^n [qi^−, qi^+] s^i is called an interval polynomial.

Kharitonov theorem: The interval polynomial is robustly stable, i.e., all the roots are in the LHP for all qi's in the intervals, if and only if its four invariant-degree Kharitonov polynomials

  k1(s) = q0^− + q1^− s + q2^+ s² + q3^+ s³ + q4^− s⁴ + q5^− s⁵ + q6^+ s⁶ + · · ·
  k2(s) = q0^+ + q1^+ s + q2^− s² + q3^− s³ + q4^+ s⁴ + q5^+ s⁵ + q6^− s⁶ + · · ·
  k3(s) = q0^+ + q1^− s + q2^− s² + q3^+ s³ + q4^+ s⁴ + q5^− s⁵ + q6^− s⁶ + · · ·
  k4(s) = q0^− + q1^+ s + q2^+ s² + q3^− s³ + q4^− s⁴ + q5^+ s⁵ + q6^+ s⁶ + · · ·

are stable.
Proof: Fix a frequency ωo and consider the value set p(jωo, Q) = {p(jωo, q) : q ∈ Q}.

  p(jωo, q) = q0 + q1(jωo) + q2(jωo)² + q3(jωo)³ + q4(jωo)⁴ + · · ·
            = (q0 − q2 ωo² + q4 ωo⁴ − q6 ωo⁶ + · · ·) + j(q1 ωo − q3 ωo³ + q5 ωo⁵ − · · ·)

Note that (Re and Im stand for the real and imaginary parts, respectively)

  min_{q∈Q} Re p(jωo, q) = Re k1(jωo)
  max_{q∈Q} Re p(jωo, q) = Re k2(jωo)
  min_{q∈Q} Im p(jωo, q) = Im k3(jωo) for ωo ≥ 0, Im k4(jωo) for ωo < 0
  max_{q∈Q} Im p(jωo, q) = Im k4(jωo) for ωo ≥ 0, Im k3(jωo) for ωo < 0

[Figure: for ωo ≥ 0 the value set p(jωo, Q) is an axis-aligned rectangle in the complex plane with vertices k1, k2, k3, and k4.]

Thus, the range of p(jωo, q) in the complex plane is bounded by the four vertices k1(jωo), k2(jωo), k3(jωo), and k4(jωo). When this rectangle excludes the origin at every frequency, the interval polynomial is stable.
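A small numerical illustration (assumed coefficient intervals) that forms the four Kharitonov polynomials and checks their roots:

```python
import numpy as np

# Interval polynomial s^3 + [2,3] s^2 + [3,4] s + [1,2]  (assumed intervals).
lo = {0: 1.0, 1: 3.0, 2: 2.0, 3: 1.0}   # q_i^-
hi = {0: 2.0, 1: 4.0, 2: 3.0, 3: 1.0}   # q_i^+

def kharitonov(pattern):
    # pattern gives the -/+ choice for powers 0, 1, 2, 3 (repeating mod 4)
    coeffs = []
    for i in range(4):
        src = lo if pattern[i % 4] == '-' else hi
        coeffs.append(src[i])
    return coeffs[::-1]                  # descending powers for np.roots

patterns = ['--++', '++--', '+--+', '-++-']   # k1, k2, k3, k4
stable = all(np.all(np.roots(kharitonov(p)).real < 0) for p in patterns)
assert stable   # hence the whole interval family is robustly stable
```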
Controller Design
State Feedback Design

Assume that the state is accessible for feedback. In terms of the standard design formulation:

  ẋ = A x + B0 u + B1 w
  y = x
  z = C1 x + D10 u + D11 w

Typically, the feedback control is assumed to be a state feedback

  u = F x

for some feedback gain F. The closed-loop map becomes

  ẋ = (A + B0 F) x + B1 w
  z = (C1 + D10 F) x + D11 w

Issues in the feedback design:

1. Stability
2. Robustness
3. Performance
4. Boundedness of z
5. Boundedness of the map from w to z

Stability: The closed-loop matrix A + B0 F is stable if there exists a positive definite matrix X such that

  X(A + B0 F) + (A + B0 F)^T X < 0

Performance: The H2 norm of the map from w to z (with D11 = 0) is the square root of trace B1^T X B1, where X is the observability gramian:

  X(A + B0 F) + (A + B0 F)^T X + (C1 + D10 F)^T (C1 + D10 F) = 0

Robustness: The H∞ norm of the map from w to z is bounded above by γ if there exists a positive definite matrix X such that

  X(A + B0 F) + (A + B0 F)^T X + (C1 + D10 F)^T (C1 + D10 F)
    + [X B1 + (C1 + D10 F)^T D11](γ²I − D11^T D11)^{-1}[B1^T X + D11^T (C1 + D10 F)] = 0
Quadratic Stabilization

Key concept: construct a quadratic Lyapunov function to govern the stability (and robustness) of the closed-loop system.

Assume that the Lyapunov function is

  V(x) = x^T P x

for some positive definite matrix P. The time derivative of V(x) is

  V̇(x) = x^T P(A + B0 F)x + x^T (A + B0 F)^T P x
        = x^T [P A + A^T P + P B0 F + F^T B0^T P] x

A necessary and sufficient condition for stability is that

  P A + A^T P + P B0 F + F^T B0^T P < 0

It suffices to have

  P A + A^T P + P B0 F + F^T B0^T P + F^T R F < 0

for some positive definite R. The latter can be manipulated to become

  (F + R^{-1} B0^T P)^T R (F + R^{-1} B0^T P) + P A + A^T P − P B0 R^{-1} B0^T P < 0

By assigning

  F = −R^{-1} B0^T P

with P satisfying

  P > 0 and P A + A^T P − P B0 R^{-1} B0^T P < 0

a stabilizing state feedback gain is constructed.

Equivalently (by treating X = P^{-1}), a stabilizing state feedback gain can be designed by solving for an X such that

  X > 0 and A X + X A^T − B0 R^{-1} B0^T < 0

This is the linear matrix inequality (LMI) approach.
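In practice, a P satisfying the strict inequality can be obtained from the corresponding Riccati equation; a sketch with SciPy on assumed example data:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop unstable (pole at +1)
B0 = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Solves A^T P + P A - P B0 R^{-1} B0^T P + Q = 0, so P also satisfies the
# strict inequality P A + A^T P - P B0 R^{-1} B0^T P < 0 used in the notes.
P = solve_continuous_are(A, B0, Q, R)
F = -np.linalg.inv(R) @ B0.T @ P          # F = -R^{-1} B0^T P

assert np.all(np.linalg.eigvals(A + B0 @ F).real < 0)   # closed loop stable
```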
Matched and Unmatched UncertaintiesJ.C. Juang
Consider the systemx = (A + A)x + (B0 + B)u
for some uncertainties A and B with
A = B0EA and B = B0EB
for some unknown EA and EB.
Uncertainties of the above form, i.e., the uncertainties are in the range space ofB0, are said to satisfy the matching condition.
Assume that
1. The matrix A is asymptotically stable
2. The uncertainty Ea is bounded: Ea eA3. The uncertainty Eb is bounded:
Eb
eB < 1
Again, consider the Lyapunov function
V(x) = xTP x
and the Lyapunov equationATP + P A + Q = 0
for some positive definite Q and positive definite P.
The state feedback gain F is assumed to be
u = F x = r1BT0 P xHere, r is a positive scalar to be determined.
The derivative of $V(x)$ is
$$\dot V(x) = \dot x^TPx + x^TP\dot x = -x^TQx - r^{-1}x^TPB_0(2I + E_B + E_B^T)B_0^TPx + x^T(PB_0E_A + E_A^TB_0^TP)x$$
Note that
$$Q \ge \lambda_{\min}(Q)I$$
$$PB_0(2I + E_B + E_B^T)B_0^TP \ge 2(1 - e_B)PB_0B_0^TP$$
and
$$PB_0E_A + E_A^TB_0^TP \le PB_0B_0^TP + E_A^TE_A \le PB_0B_0^TP + e_A^2I$$
Thus, the derivative of the Lyapunov function satisfies
$$\dot V(x) \le -\lambda_{\min}(Q)x^Tx - 2(1 - e_B)r^{-1}x^TPB_0B_0^TPx + x^TPB_0B_0^TPx + e_A^2x^Tx = x^T[e_A^2 - \lambda_{\min}(Q)]x + x^TPB_0[1 - 2r^{-1}(1 - e_B)]B_0^TPx$$
By selecting $Q$ and $r$ such that
$$\lambda_{\min}(Q) > e_A^2 \quad\text{and}\quad r \le 2(1 - e_B)$$
the system is stable for all permissible $\Delta A$ and $\Delta B$.
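The condition above can be exercised with a quick Monte Carlo check. The sketch below uses made-up plant data and bounds: with $\lambda_{\min}(Q) > e_A^2$ and $r \le 2(1 - e_B)$, every sampled admissible pair $(E_A, E_B)$ should give a stable closed loop.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)

# Stable nominal plant with matched uncertainty (made-up data).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
B0 = np.array([[1.0],
               [1.0]])
eA, eB = 1.5, 0.5                  # ||E_A|| <= eA, ||E_B|| <= eB < 1

Q = 3.0 * np.eye(2)                # lambda_min(Q) = 3 > eA^2 = 2.25
r = 2.0 * (1.0 - eB)               # r <= 2(1 - eB)

# Solve A^T P + P A + Q = 0 (P > 0 since A is stable and Q > 0).
P = solve_continuous_lyapunov(A.T, -Q)
F = -(1.0 / r) * B0.T @ P          # u = -r^{-1} B0^T P x

# Every admissible uncertainty should leave the closed loop stable.
for _ in range(200):
    d = rng.standard_normal((1, 2))
    EA = rng.uniform(0, eA) * d / np.linalg.norm(d)   # ||E_A|| <= eA
    EB = rng.uniform(-eB, eB) * np.eye(1)             # ||E_B|| <= eB
    Acl = A + B0 @ EA + (B0 + B0 @ EB) @ F
    assert np.max(np.linalg.eigvals(Acl).real) < 0
print("all sampled closed loops stable")
```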
It is always possible to stabilize bounded, matched uncertainties provided that the state is accessible.
Consider the system
$$\dot x = (A + \Delta A)x + B_0u$$
where
$$\Delta A = \sum_i \delta_i E_iF_i$$
with both $E_i$ and $F_i$ known (the structure of the uncertainty) and $\delta_i$ unknown. A bound on $\delta_i$, however, is available: $|\delta_i| \le \varepsilon_i$.
The uncertainty is unmatched in the sense that it is not necessarily in the range space of $B_0$.
A robustly stabilizing state feedback gain $F$ can be constructed as
$$F = -R^{-1}B_0^TP$$
where $P$ is the positive definite solution to
$$A^TP + PA + Q - 2PB_0R^{-1}B_0^TP + P\Big(\sum_i \varepsilon_i E_iE_i^T\Big)P + \sum_i \varepsilon_i F_i^TF_i = 0$$
for some positive definite $Q$ and $R$.
To verify the above claim, check the Lyapunov function
$$V(x) = x^TPx$$
and its time derivative, which by completing the square satisfies
$$\dot V(x) \le -x^TQx - \sum_i \varepsilon_i\, x^T\Big(E_i^TP - \frac{\delta_i}{\varepsilon_i}F_i\Big)^T\Big(E_i^TP - \frac{\delta_i}{\varepsilon_i}F_i\Big)x \le -x^TQx < 0$$
Algebraic Riccati Equation (J.C. Juang)

Let $A$, $Q$, and $R$ be real $n \times n$ matrices with $Q$ and $R$ symmetric. Define the $2n \times 2n$ matrix
$$H := \begin{bmatrix} A & R \\ -Q & -A^T \end{bmatrix}$$
A matrix of this form is called a Hamiltonian matrix.
The Hamiltonian matrix is related to the following algebraic Riccati equation:
$$A^TX + XA + XRX + Q = 0$$
The spectrum of $H$, $\mathrm{spec}(H)$, is symmetric with respect to the imaginary axis. To prove this, introduce the $2n \times 2n$ matrix
$$J := \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}$$
having the property $J^2 = -I$. Then
$$J^{-1}HJ = -JHJ = -H^T$$
so $H$ and $-H^T$ are similar. Thus $\mathrm{spec}(H) = \mathrm{spec}(-H^T) = -\mathrm{spec}(H)$.
Assume $H$ has no eigenvalues on the imaginary axis. Then it must have $n$ eigenvalues in $\mathrm{Re}(s) < 0$ and $n$ in $\mathrm{Re}(s) > 0$. Thus the two spectral subspaces, $\mathcal{X}_-(H)$ and $\mathcal{X}_+(H)$, each have dimension $n$. We will be interested in the stable subspace $\mathcal{X}_-(H)$ and its associated properties. Partitioning a basis of the subspace,
$$\mathcal{X}_-(H) = \mathrm{img}\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$
where $X_1$ and $X_2$ are in $\mathbb{R}^{n \times n}$. If $X_1$ is nonsingular, i.e., if the two subspaces
$$\mathcal{X}_-(H), \qquad \mathrm{img}\begin{bmatrix} 0 \\ I \end{bmatrix}$$
are complementary, we can define
$$X = X_2X_1^{-1}$$
to get
$$\mathcal{X}_-(H) = \mathrm{img}\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \mathrm{img}\left(\begin{bmatrix} I \\ X \end{bmatrix}X_1\right) = \mathrm{img}\begin{bmatrix} I \\ X \end{bmatrix}$$
Notice that $X$ is uniquely determined by $H$; we can thus write
$$X = \mathrm{Ric}(H)$$
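The map $\mathrm{Ric}$ can be computed directly from this definition by ordering a real Schur decomposition so that the stable eigenvalues come first; the leading $n$ columns of the orthogonal factor then supply $[X_1; X_2]$. A sketch follows, with illustrative (made-up) $A$, $R$, $Q$:

```python
import numpy as np
from scipy.linalg import schur

def ric(H):
    """Sketch of Ric(H): X = X2 @ inv(X1), where img[X1; X2] = X_-(H)."""
    n = H.shape[0] // 2
    # Real Schur form with left-half-plane eigenvalues ordered first:
    # the first n columns of Z span the stable invariant subspace.
    T, Z, sdim = schur(H, output='real', sort='lhp')
    assert sdim == n                 # H must have exactly n stable eigenvalues
    X1, X2 = Z[:n, :n], Z[n:, :n]
    return X2 @ np.linalg.inv(X1)    # fails if complementarity does not hold

# Illustrative Hamiltonian H = [[A, R], [-Q, -A^T]].
A = np.array([[1.0, 0.0],
              [0.0, -2.0]])
R = -np.eye(2)                       # R = -B B^T with B = I
Q = np.eye(2)
H = np.block([[A, R], [-Q, -A.T]])

X = ric(H)
# X is symmetric, solves A^T X + X A + X R X + Q = 0, and A + R X is stable.
residual = A.T @ X + X @ A + X @ R @ X + Q
print(np.allclose(X, X.T), np.allclose(residual, 0),
      np.max(np.linalg.eigvals(A + R @ X).real) < 0)
```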
$\mathrm{Ric}$ is thus a function $\mathbb{R}^{2n \times 2n} \to \mathbb{R}^{n \times n}$ which maps $H$ to $X$, where
$$\mathcal{X}_-(H) = \mathrm{img}\begin{bmatrix} I \\ X \end{bmatrix}$$
The domain of $\mathrm{Ric}$, denoted by $\mathrm{dom\,Ric}$, consists of Hamiltonian matrices $H$ with two properties:
1. $H$ has no eigenvalues on the imaginary axis,
2. $\mathcal{X}_-(H)$ and $\mathrm{img}\begin{bmatrix} 0 \\ I \end{bmatrix}$ are complementary.
Suppose $H \in \mathrm{dom\,Ric}$ and $X = \mathrm{Ric}(H)$. Then:
1. $X$ is symmetric;
2. $X$ satisfies the algebraic Riccati equation
$$A^TX + XA + XRX + Q = 0$$
3. $A + RX$ is stable.
Proof of 1.
Let $X_1$ and $X_2$ be as above, i.e., $\mathcal{X}_-(H) = \mathrm{img}\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$. It is claimed that
$$X_1^TX_2 \text{ is symmetric.}$$
To prove this, note that there exists a stable matrix $H_-$ in $\mathbb{R}^{n \times n}$ (the restriction of $H$ to its stable invariant subspace) such that
$$H\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}H_-$$
Premultiply this equation by $\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}^TJ$ to get
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}^TJH\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}^TJ\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}H_-$$
Now, $JH$ is symmetric; hence so is the left-hand side of the above equation, and hence so is the right-hand side:
$$(X_1^TX_2 - X_2^TX_1)H_- = H_-^T(X_1^TX_2 - X_2^TX_1)^T = -H_-^T(X_1^TX_2 - X_2^TX_1)$$
But this is a Lyapunov equation in $X_1^TX_2 - X_2^TX_1$. Since $H_-$ is stable, the unique solution is
$$X_1^TX_2 - X_2^TX_1 = 0$$
i.e., $X_1^TX_2$ is symmetric. We also have
$$XX_1 = X_2$$
Premultiplying this by $X_1^T$ shows that $X_1^TXX_1 = X_1^TX_2$ is symmetric. Hence $X$ is symmetric, since $X_1$ is nonsingular.
Bounded Real Lemma (J.C. Juang)

Suppose that $M(s) = D + C(sI - A)^{-1}B$ with $A$ asymptotically stable. Then $\|M(s)\|_\infty < \gamma$ if and only if
1. $\|D\| < \gamma$;
2. There exists $X = X^T$ satisfying the Riccati equation
$$A^TX + XA + C^TC + (XB + C^TD)(\gamma^2I - D^TD)^{-1}(B^TX + D^TC) = 0$$
such that $A + BR^{-1}(D^TC + B^TX)$, with $R := \gamma^2I - D^TD$, is asymptotically stable.
Furthermore, when such an $X$ exists, $X \ge 0$.

Proof:
1. The result amounts to performing a spectral factorization, i.e., finding a stable $N(s)$ such that
$$\gamma^2I - M^\sim(s)M(s) = N^\sim(s)N(s)$$
2. Recall that
$$\gamma^2I - M^\sim(s)M(s) \stackrel{s}{=} \begin{bmatrix} A & 0 & B \\ -C^TC & -A^T & -C^TD \\ -D^TC & -B^T & \gamma^2I - D^TD \end{bmatrix}$$
Through a similarity transformation with
$$\begin{bmatrix} I & 0 \\ -X & I \end{bmatrix}$$
this becomes
$$\stackrel{s}{=} \begin{bmatrix} A & 0 & B \\ -(C^TC + XA + A^TX) & -A^T & -(XB + C^TD) \\ -(D^TC + B^TX) & -B^T & \gamma^2I - D^TD \end{bmatrix}$$
3. Let $N(s) \stackrel{s}{=} \begin{bmatrix} A & B \\ C_x & D_x \end{bmatrix}$, so that $N^\sim(s) \stackrel{s}{=} \begin{bmatrix} -A^T & C_x^T \\ -B^T & D_x^T \end{bmatrix}$. Then
$$N^\sim(s)N(s) \stackrel{s}{=} \begin{bmatrix} A & 0 & B \\ C_x^TC_x & -A^T & C_x^TD_x \\ D_x^TC_x & -B^T & D_x^TD_x \end{bmatrix}$$
4. Comparing the coefficients of the two realizations,
$$D_x^TD_x = \gamma^2I - D^TD$$
$$D_x^TC_x = -(B^TX + D^TC)$$
$$C_x^TC_x = -(C^TC + XA + A^TX)$$
5. Eliminating $C_x$ and $D_x$ from these relations leads to the Riccati equation.
A dual version is to find a positive semidefinite matrix $Y$ such that
$$AY + YA^T + BB^T + (YC^T + BD^T)(\gamma^2I - DD^T)^{-1}(CY + DB^T) = 0$$
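The lemma also yields a computational test. For a strictly proper $M(s) = C(sI - A)^{-1}B$ (so $D = 0$), $\|M\|_\infty < \gamma$ exactly when the Hamiltonian $\begin{bmatrix} A & \gamma^{-2}BB^T \\ -C^TC & -A^T \end{bmatrix}$ has no imaginary-axis eigenvalues. The sketch below, with hypothetical data, bisects on $\gamma$ and cross-checks the result against a frequency sweep:

```python
import numpy as np

# Hypothetical stable SISO system, D = 0 (so ||D|| < gamma trivially).
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def norm_less_than(gamma, tol=1e-6):
    """||C(sI-A)^{-1}B||_inf < gamma iff this Hamiltonian has no
    eigenvalues on the imaginary axis (A stable, D = 0)."""
    H = np.block([[A, (1.0 / gamma**2) * (B @ B.T)],
                  [-C.T @ C, -A.T]])
    return np.min(np.abs(np.linalg.eigvals(H).real)) > tol

# Bisection on gamma converges to the H-infinity norm from above.
lo, hi = 1e-6, 100.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if norm_less_than(mid):
        hi = mid
    else:
        lo = mid

# Cross-check against a coarse frequency sweep of the largest gain.
w = np.logspace(-3, 3, 4000)
peak = max(np.linalg.norm(C @ np.linalg.solve(1j * wk * np.eye(2) - A, B), 2)
           for wk in w)
print(abs(hi - peak) < 1e-3)
```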
Riccati Equation in Control Design (J.C. Juang)

Theorem: Suppose
$$H = \begin{bmatrix} A & -BB^T \\ -C^TC & -A^T \end{bmatrix}$$
with $(A, B)$ stabilizable and $(C, A)$ detectable. Then $H \in \mathrm{dom\,Ric}$ and $\mathrm{Ric}(H) \ge 0$. If $(C, A)$ is observable, $\mathrm{Ric}(H) > 0$. The corresponding algebraic Riccati equation is
$$A^TX + XA - XBB^TX + C^TC = 0$$
Proof:

1. $H$ is in the domain of $\mathrm{Ric}$.

(a) $H$ has no eigenvalues on the imaginary axis. Let $j\omega$ and $\begin{bmatrix} x \\ z \end{bmatrix}$ be an eigenpair. Then
$$(A - j\omega I)x = BB^Tz \quad\text{and}\quad (A - j\omega I)^*z = -C^TCx$$
so that $\langle z, (A - j\omega I)x \rangle = \|B^Tz\|^2$ and $\langle x, (A - j\omega I)^*z \rangle = -\|Cx\|^2$. These two inner products are complex conjugates of each other, so $\|B^Tz\|^2 = -\|Cx\|^2$. This implies that $B^Tz = 0$ and $Cx = 0$. The assumptions on stabilizability and detectability are then violated, since
$$z^*\begin{bmatrix} A - j\omega I & B \end{bmatrix} = 0 \quad\text{and}\quad \begin{bmatrix} A - j\omega I \\ C \end{bmatrix}x = 0$$
with $\begin{bmatrix} x \\ z \end{bmatrix} \ne 0$. Hence, there are no eigenvalues on the $j\omega$-axis.
(b) The two subspaces $\mathcal{X}_-(H)$ and $\mathrm{img}\begin{bmatrix} 0 \\ I \end{bmatrix}$ are complementary. Recall that $\mathcal{X}_-(H) = \mathrm{img}\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ and $H\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}H_-$ for a stable $H_-$. Complementarity of the two subspaces is equivalent to $X_1$ being nonsingular, i.e., $\ker X_1 = 0$.

The subspace $\ker X_1$ is $H_-$-invariant. To see this, note that the first block row gives
$$AX_1 - BB^TX_2 = X_1H_-$$
Let $x \in \ker X_1$. Premultiplying by $x^*X_2^*$ and postmultiplying by $x$ gives
$$x^*X_2^*AX_1x - x^*X_2^*BB^TX_2x = x^*X_2^*X_1H_-x$$
The first term vanishes since $X_1x = 0$, and (recalling that $X_1^TX_2$ is symmetric) the right-hand side equals $(X_1x)^*X_2H_-x = 0$. Hence $x^*X_2^*BB^TX_2x = 0$, i.e., $B^TX_2x = 0$. It follows that
$$X_1H_-x = AX_1x - BB^TX_2x = 0$$
That is, $H_-x$ is in the kernel of $X_1$.

The fact that $X_1$ is nonsingular is proved by contradiction. Suppose $\ker X_1 \ne 0$; then $H_-|_{\ker X_1}$ has an eigenvalue $\lambda$ and eigenvector $x$ such that
$$H_-x = \lambda x, \quad \mathrm{Re}(\lambda) < 0, \quad 0 \ne x \in \ker X_1$$
Note that the second block row gives $-C^TCX_1 - A^TX_2 = X_2H_-$. Postmultiplying by $x$ gives
$$(A^T + \lambda I)X_2x = 0$$
Since also $B^TX_2x = 0$, we have
$$(X_2x)^*\begin{bmatrix} A + \bar\lambda I & B \end{bmatrix} = 0$$
with $\mathrm{Re}(-\bar\lambda) > 0$, so stabilizability implies that $X_2x = 0$. But if $X_1x = 0$ and $X_2x = 0$, then $x = 0$ (the columns of $\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ are independent), a contradiction.
2. To prove that $X \ge 0$: set $X = \mathrm{Ric}(H)$. Then
$$A^TX + XA - XBB^TX + C^TC = 0$$
or
$$(A - BB^TX)^TX + X(A - BB^TX) + (XBB^TX + C^TC) = 0$$
Noting that $A - BB^TX$ is stable, we have
$$X = \int_0^\infty e^{(A - BB^TX)^Tt}(XBB^TX + C^TC)e^{(A - BB^TX)t}\,dt$$
Since $XBB^TX + C^TC$ is positive semidefinite, so is $X$.
3. The statement that $X > 0$ when $(C, A)$ is observable is proved by contradiction.

Suppose there exists a stable unobservable mode $\lambda$ with $\mathrm{Re}(\lambda) < 0$, i.e.,
$$\begin{bmatrix} A - \lambda I \\ C \end{bmatrix}x = 0$$
for a nonzero vector $x$. From the Riccati equation,
$$2\,\mathrm{Re}(\lambda)\,x^*Xx - x^*XBB^TXx = 0$$
Since $\mathrm{Re}(\lambda) < 0$ while $x^*XBB^TXx \ge 0$ and $X \ge 0$, it follows that $x^*Xx = 0$; that is, $X$ is singular.

Conversely, if there exists a nonzero vector $x \in \ker X$, then the Riccati equation implies that $Cx = 0$ and $XAx = 0$. The space $\ker X$ is therefore $A$-invariant, so there exists a $\lambda$ with $Ax = \lambda x = (A - BB^TX)x$ and $Cx = 0$; stability of $A - BB^TX$ gives $\mathrm{Re}(\lambda) < 0$. That is, there exists a stable, unobservable mode, contradicting observability.
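The theorem is easy to exercise numerically. In the sketch below (the matrices are made-up example data), SciPy's `solve_continuous_are` plays the role of $\mathrm{Ric}$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# (A, B) controllable and (C, A) observable -- made-up example data.
A = np.array([[0.0, 1.0],
              [-1.0, 0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# H = [[A, -B B^T], [-C^T C, -A^T]] has no imaginary-axis eigenvalues.
H = np.block([[A, -B @ B.T], [-C.T @ C, -A.T]])
assert np.min(np.abs(np.linalg.eigvals(H).real)) > 1e-9

# X = Ric(H) solves A^T X + X A - X B B^T X + C^T C = 0.
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))

# Conclusions of the theorem: X > 0 (observability), A - B B^T X stable.
print(np.all(np.linalg.eigvalsh(X) > 0),
      np.max(np.linalg.eigvals(A - B @ B.T @ X).real) < 0)
```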
Linear Quadratic Regulator (J.C. Juang)

Consider the linear system
$$\dot x = Ax + B_0u, \quad x(0) = x_0$$
and the quadratic cost functional to be minimized
$$\int_0^\infty (x^TQx + u^TRu)\,dt$$
Theorem: Assume that
1. The pair (A, B0) is stabilizable.
2. The matrix Q is symmetric and positive semidefinite.
3. The pair $(C, A)$ is detectable, where $C$ satisfies $Q = C^TC$, i.e., $C$ is a Cholesky factor of $Q$.
4. The matrix R is symmetric and positive definite.
Then the optimal, stabilizing control exists and is of the state feedback form
$$u = Fx$$
where the state feedback gain $F$ is
$$F = -R^{-1}B_0^TX$$
with $X$ satisfying the algebraic Riccati equation
$$X = \mathrm{Ric}\begin{bmatrix} A & -B_0R^{-1}B_0^T \\ -C^TC & -A^T \end{bmatrix}$$
or
$$A^TX + XA + Q - XB_0R^{-1}B_0^TX = 0$$
$Q$ governs the rate of convergence of the state, and $R$ controls the extent of control activity.
Stabilizability ⇒ the cost integral can be made finite; detectability ⇒ guarantees that $x(t)$ approaches zero as $t$ goes to infinity; $R$ positive definite ⇒ no impulsive controls.
The assumptions imply that
$$\begin{bmatrix} A & -B_0R^{-1}B_0^T \\ -C^TC & -A^T \end{bmatrix} \in \mathrm{dom\,Ric} \quad\text{and}\quad X \ge 0$$
Define
$$z = \begin{bmatrix} C \\ 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ R^{1/2} \end{bmatrix}u = C_1x + D_{10}u$$
Then the cost function becomes
$$\int_0^\infty (x^TQx + u^TRu)\,dt = \int_0^\infty z^Tz\,dt = \|z\|_2^2$$
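The theorem can be exercised with SciPy's ARE solver. The double-integrator data below are illustrative; the final check uses the fact that the optimal cost is $x_0^TXx_0$, with $X$ equal to the observability gramian of the optimal closed loop.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative double integrator; C is a Cholesky factor of Q.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B0 = np.array([[0.0],
               [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = C.T @ C, np.eye(1)

# A^T X + X A + Q - X B0 R^{-1} B0^T X = 0 and F = -R^{-1} B0^T X.
X = solve_continuous_are(A, B0, Q, R)
F = -np.linalg.solve(R, B0.T @ X)

# The optimal closed loop is stable ...
Acl = A + B0 @ F
assert np.max(np.linalg.eigvals(Acl).real) < 0

# ... and X is its observability gramian for the output z = C1 x + D10 u:
# Acl^T Xg + Xg Acl + (Q + F^T R F) = 0 recovers Xg = X, so the
# optimal cost is x0^T X x0.
Xg = solve_continuous_lyapunov(Acl.T, -(Q + F.T @ R @ F))
print(np.allclose(X, Xg))
```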
Approach 1
Define the Hamiltonian
$$H(x, u, p) = \tfrac{1}{2}(x^TQx + u^TRu) + p^T(Ax + B_0u)$$
where p is the costate.
Optimality implies that
$$\frac{\partial H}{\partial u} = 0 = Ru + B_0^Tp$$
$$\frac{\partial H}{\partial p} = \dot x = Ax + B_0u$$
$$\frac{\partial H}{\partial x} = -\dot p = Qx + A^Tp$$
Thus, the optimal control is
$$u = -R^{-1}B_0^Tp$$
and
$$\begin{bmatrix} \dot x \\ \dot p \end{bmatrix} = \begin{bmatrix} A & -B_0R^{-1}B_0^T \\ -Q & -A^T \end{bmatrix}\begin{bmatrix} x \\ p \end{bmatrix}$$
Assume that the state and costate are related by
$$p = Xx$$
for some $X$. Then
$$u = -R^{-1}B_0^TXx, \qquad \dot x = (A - B_0R^{-1}B_0^TX)x, \qquad X\dot x = (-Q - A^TX)x$$
The last equation then leads to the algebraic Riccati equation
$$A^TX + XA + Q - XB_0R^{-1}B_0^TX = 0$$

Approach 2
The closed-loop system is
$$\dot x = (A + B_0F)x, \quad x(0) = x_0$$
$$z = (C_1 + D_{10}F)x$$
which can also be rewritten as
$$\dot x = (A + B_0F)x + x_0\delta(t), \quad x(0^-) = 0$$
$$z = (C_1 + D_{10}F)x$$
Let $X$ be the observability gramian of the closed-loop system, that is,
$$(A + B_0F)^TX + X(A + B_0F) + (C_1 + D_{10}F)^T(C_1 + D_{10}F) = 0$$
Consider the Lyapunov function $V(x) = x^TXx$ and its derivative
$$\dot V(x) = \dot x^TXx + x^TX\dot x$$
$$= (Ax + B_0u)^TXx + x^TX(Ax + B_0u)$$
$$= [(A + B_0F)x + B_0(u - Fx)]^TXx + x^TX[(A + B_0F)x + B_0(u - Fx)]$$
$$= x^T[(A + B_0F)^TX + X(A + B_0F)]x + 2(u - Fx)^TB_0^TXx