8/12/2019 Kalman Decomposition
http://slidepdf.com/reader/full/kalman-decomposition 1/13
Lecture 4 and 5
Controllability and Observability: Kalman decompositions
Spring 2013 - EE 194, Advanced Control (Prof. Khan)
January 30 (Wed.) and Feb. 04 (Mon.), 2013
I. OBSERVABILITY OF DT LTI SYSTEMS
Consider the DT LTI dynamics again, now with the observation model. We are interested in deriving the necessary conditions for optimal estimation, and thus the additional noise terms can be removed. The system to be considered is the following:
\[
x_{k+1} = A x_k + B u_k, \qquad (1)
\]
\[
y_k = C x_k, \qquad (2)
\]
as before, we consider the worst case of one observation per time-step k, i.e., p = 1.
Definition 1 (Observability). A DT LTI system is said to be observable in n time-steps when the initial condition, x_0, can be recovered from a sequence of observations, y_0, \ldots, y_{n-1}, and inputs, u_0, \ldots, u_{n-1}.
The lecture material is largely based on: Fundamentals of Linear State Space Systems, John S. Bay, McGraw-Hill Series in Electrical Engineering, 1998.

February 6, 2013 DRAFT

The observability problem is to find x_0 from a sequence of observations, y_0, \ldots, y_{k-1}, and inputs, u_0, \ldots, u_{k-1}. To this end, note that
\begin{align*}
y_0 &= C x_0, \\
y_1 &= C x_1 = C A x_0 + C B u_0, \\
y_2 &= C x_2 = C A^2 x_0 + C A B u_0 + C B u_1, \\
&\;\;\vdots \\
y_{k-1} &= C A^{k-1} x_0 + C A^{k-2} B u_0 + \ldots + C B u_{k-2}.
\end{align*}
In matrix form, the above is given by
\[
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{k-1} \end{bmatrix}
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{k-1} \end{bmatrix} x_0
+
\Upsilon
\begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \\ u_{k-1} \end{bmatrix}.
\]
The above is a linear system of equations, compactly written as
\[
\Psi_{0:k-1} = \underbrace{\mathcal{O}_{0:k-1}}_{\in\,\mathbb{R}^{k \times n}} x_0, \qquad (3)
\]
where \Psi_{0:k-1} stacks the observations with the known input contribution, \Upsilon u, subtracted,
and hence, from the known inputs and known observations, the initial condition, x_0, can be recovered if and only if
\[
\operatorname{rank}(\mathcal{O}_{0:k-1}) = n. \qquad (4)
\]
Using the same arguments as in controllability, we note that rank(\mathcal{O}_{0:k-1}) < n for k \leq n-1, as the observability matrix, \mathcal{O}_{0:k-1}, then has at most n-1 rows. Similarly, adding another observation after the nth observation does not help, by Cayley–Hamilton arguments, as
\[
\operatorname{rank}(\mathcal{O}_{0:n-1}) = \operatorname{rank}(\mathcal{O}_{0:n}) = \operatorname{rank}(\mathcal{O}_{0:n+1}) = \ldots. \qquad (5)
\]
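The saturation in Eq. (5) is easy to check numerically. A minimal sketch in Python (the 3-state system below is hypothetical; its entries are illustrative only) stacks the rows C, CA, \ldots, CA^{k-1} and watches the rank stop growing once k reaches n:

```python
import numpy as np

# Hypothetical 3-state DT LTI system with a single output (p = 1).
n = 3
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.2]])
C = np.array([[1.0, 0.0, 0.0]])

def obsv(A, C, k):
    """Stack C, CA, ..., CA^{k-1} to form O_{0:k-1}."""
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(k)])

# Rank grows with each new observation, then saturates by Cayley-Hamilton.
ranks = [int(np.linalg.matrix_rank(obsv(A, C, k))) for k in range(1, 2 * n + 1)]
print(ranks)
```

For this particular A and C the ranks are 1, 2, 3 and then stay at 3: observations beyond the nth add no new information about x_0.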
Hence, the observability matrix is defined as
\[
\mathcal{O} =
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}, \qquad (6)
\]
without subscripts.
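When rank(\mathcal{O}) = n and the inputs are zero (so the \Upsilon-term drops out of Eq. (3)), x_0 can be recovered by solving the stacked linear system. A minimal sketch, with hypothetical numbers:

```python
import numpy as np

# Hypothetical observable 2-state system; with u_k = 0 the input (Upsilon)
# terms vanish, so the stacked observations satisfy y_{0:n-1} = O x_0.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
assert np.linalg.matrix_rank(O) == n  # the system is observable

x0_true = np.array([2.0, -1.0])
ys = np.concatenate([C @ np.linalg.matrix_power(A, i) @ x0_true for i in range(n)])

# Least-squares recovery of the initial condition (exact here, since rank(O) = n).
x0_hat, *_ = np.linalg.lstsq(O, ys, rcond=None)
print(x0_hat)  # recovers x0_true
```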
Remark 1. In the literature, rank(\mathcal{C}) = n is often referred to as the system being (A, B)-controllable. Similarly, rank(\mathcal{O}) = n is often referred to as the system being (A, C)-observable.
Remark 2. Show that
\[
\operatorname{rank}(\mathcal{C}) = n \iff \mathcal{C}\mathcal{C}^T \succ 0 \iff (\mathcal{C}\mathcal{C}^T)^{-1} \text{ exists}, \qquad (7)
\]
\[
\operatorname{rank}(\mathcal{O}) = n \iff \mathcal{O}^T\mathcal{O} \succ 0 \iff (\mathcal{O}^T\mathcal{O})^{-1} \text{ exists}. \qquad (8)
\]
Remark 3. Show that
\[
(A, C)\text{-observable} \iff (A^T, C^T)\text{-controllable}, \qquad (9)
\]
\[
(A, B)\text{-controllable} \iff (A^T, B^T)\text{-observable}. \qquad (10)
\]
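The dualities in Remark 3 amount to the observation that the observability matrix of (A, C) is exactly the transpose of the controllability matrix of (A^T, C^T). A quick numerical check, with hypothetical matrices:

```python
import numpy as np

# Hypothetical SISO system used only to illustrate the duality.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

ctrb = lambda A, B: np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
obsv = lambda A, C: np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# obsv(A, C) is ctrb(A^T, C^T) transposed, so the ranks agree.
assert np.allclose(obsv(A, C), ctrb(A.T, C.T).T)
# (A, C)-observable <=> (A^T, C^T)-controllable
assert np.linalg.matrix_rank(obsv(A, C)) == np.linalg.matrix_rank(ctrb(A.T, C.T))
# (A, B)-controllable <=> (A^T, B^T)-observable
assert np.linalg.matrix_rank(ctrb(A, B)) == np.linalg.matrix_rank(obsv(A.T, B.T))
```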
II. KALMAN DECOMPOSITIONS
Suppose it is known that a SISO (single-input single-output) system has rank(\mathcal{C}) = n_c < n and rank(\mathcal{O}) = n_o < n. In other words, the SISO system in question is neither controllable nor observable. We are interested in a transformation that rearranges the system modes (eigenvalues) into:
• modes that are both controllable and observable;
• modes that are neither controllable nor observable;
• modes that are controllable but not observable; and
• modes that are observable but not controllable.
Such a transformation is called the Kalman decomposition.
A. Decomposing controllable modes
Firstly, consider separating only the controllable subspace from the rest of the state-space. To this end, consider a state transformation matrix, V, whose first n_c columns are a maximal set of linearly independent columns from the controllability matrix, \mathcal{C}. The remaining n - n_c columns are chosen to be linearly independent from the first n_c columns (and linearly independent among themselves). The transformation operator (matrix) is n \times n, defined as
\[
V = \big[\, v_1 \;\ldots\; v_{n_c} \;\big|\; v_{n_c+1} \;\ldots\; v_n \,\big] \triangleq \begin{bmatrix} V_1 & V_2 \end{bmatrix}. \qquad (11)
\]
It is straightforward to show that the transformed system (x_k = V \bar{x}_k) is
\[
\bar{x}_{k+1} = V^{-1} A V \bar{x}_k + V^{-1} B u_k, \qquad (12)
\]
\[
y_k = C V \bar{x}_k. \qquad (13)
\]
Partition V^{-1} as
\[
V^{-1} = \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix}, \qquad (14)
\]
where \bar{V}_1 is n_c \times n and \bar{V}_2 is (n - n_c) \times n. Now note that
\[
I_n = V^{-1} V
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 V_1 & \bar{V}_1 V_2 \\ \bar{V}_2 V_1 & \bar{V}_2 V_2 \end{bmatrix}
= \begin{bmatrix} I_{n_c} & 0 \\ 0 & I_{n-n_c} \end{bmatrix}. \qquad (15)
\]
The above leads to the following¹:
\[
\bar{V}_2 V_1 = 0 \;\Rightarrow\; \text{the columns of } \mathcal{C} \text{ lie in the null space of } \bar{V}_2, \qquad (16)
\]
further leading to
\[
\bar{V}_2 B = 0, \quad \text{and} \quad \bar{V}_2 A V_1 = 0. \qquad (17)
\]
The transformed system in Eq. (12) is thus given by
\[
\bar{A} \triangleq V^{-1} A V
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 A V_1 & \bar{V}_1 A V_2 \\ \bar{V}_2 A V_1 & \bar{V}_2 A V_2 \end{bmatrix}
\triangleq \begin{bmatrix} A_c & A_{c\bar{c}} \\ 0 & A_{\bar{c}} \end{bmatrix}, \qquad (18)
\]
\[
\bar{B} \triangleq V^{-1} B
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} B
= \begin{bmatrix} \bar{V}_1 B \\ \bar{V}_2 B \end{bmatrix}
\triangleq \begin{bmatrix} B_c \\ 0 \end{bmatrix}. \qquad (19)
\]
Now recall that a similarity transformation does not change the rank of the controllability matrix; let \bar{\mathcal{C}} denote the controllability matrix of the transformed system. Mathematically,
\begin{align*}
\operatorname{rank}(\bar{\mathcal{C}})
&= \operatorname{rank}\begin{bmatrix} V^{-1}B & (V^{-1}AV)V^{-1}B & \ldots & (V^{-1}AV)^{n-1}V^{-1}B \end{bmatrix}, \\
&= \operatorname{rank}\begin{bmatrix} V^{-1}B & V^{-1}AB & \ldots & V^{-1}A^{n-1}B \end{bmatrix}, \\
&= \operatorname{rank}\left( V^{-1}\begin{bmatrix} B & AB & \ldots & A^{n-1}B \end{bmatrix} \right), \\
&= \operatorname{rank}\begin{bmatrix} B & AB & \ldots & A^{n-1}B \end{bmatrix}, \\
&= \operatorname{rank}(\mathcal{C}), \\
&= n_c.
\end{align*}
Furthermore,
\begin{align*}
\operatorname{rank}(\bar{\mathcal{C}})
&= \operatorname{rank}\begin{bmatrix} \bar{B} & \bar{A}\bar{B} & \ldots & \bar{A}^{n_c-1}\bar{B} & \ldots & \bar{A}^{n-1}\bar{B} \end{bmatrix}, \\
&= \operatorname{rank}\begin{bmatrix} B_c & A_c B_c & \ldots & A_c^{n_c-1} B_c & \ldots & A_c^{n-1} B_c \\ 0 & 0 & \ldots & 0 & \ldots & 0 \end{bmatrix}, \\
&= \operatorname{rank}\begin{bmatrix} B_c & A_c B_c & \ldots & A_c^{n_c-1} B_c \\ 0 & 0 & \ldots & 0 \end{bmatrix}, \\
&= n_c,
\end{align*}

¹This is because all of the freedom in the linear independence of \mathcal{C} is already captured in V_1.
where, to go from the second equation to the third, we use the Cayley–Hamilton theorem. All of this shows that the subsystem (A_c, B_c) is controllable.
Conclusions: Using the transformation matrix, V, defined in Eq. (11), we have transformed the original system, x_{k+1} = A x_k + B u_k, into the following system:
\[
\bar{x}_{k+1} = \bar{A} \bar{x}_k + \bar{B} u_k, \qquad (20)
\]
\[
\begin{bmatrix} x_c(k+1) \\ x_{\bar{c}}(k+1) \end{bmatrix}
= \begin{bmatrix} A_c & A_{c\bar{c}} \\ 0 & A_{\bar{c}} \end{bmatrix}
\begin{bmatrix} x_c(k) \\ x_{\bar{c}}(k) \end{bmatrix}
+ \begin{bmatrix} B_c \\ 0 \end{bmatrix} u_k, \qquad (21)
\]
such that the transformed state-space is separated into a controllable subspace and an uncontrollable subspace.
Consider the transformed system in Eq. (20) and partition the (transformed) state-vector, \bar{x}(k), into an n_c \times 1 component, x_c(k), and an (n - n_c) \times 1 component, x_{\bar{c}}(k). We get
\[
x_c(k+1) = A_c x_c(k) + A_{c\bar{c}} x_{\bar{c}}(k) + B_c u_k, \qquad (22)
\]
\[
x_{\bar{c}}(k+1) = A_{\bar{c}} x_{\bar{c}}(k). \qquad (23)
\]
Clearly, the lower subsystem, x_{\bar{c}}(k), is not controllable. In the transformed coordinate space, \bar{x}_k, we can only steer a portion, x_c(k), of the transformed states. The subspace of \mathbb{R}^n that can be steered is denoted as \mathcal{R}(\mathcal{C}), defined as the range (column space) of \mathcal{C}, where \mathcal{R}(\mathcal{C}) is called the controllable subspace.
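The construction above can be carried out numerically. The sketch below (with hypothetical numbers) builds an uncontrollable system by hiding the block form of Eqs. (18)–(19) behind a random similarity, then recovers the structure from the controllability matrix; for convenience it uses an orthonormal V (a basis of \mathcal{R}(\mathcal{C}) plus its orthogonal complement), so that V^{-1} = V^T:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncontrollable-by-construction system (hypothetical numbers): start from
# the block-triangular form of Eqs. (18)-(19) and hide it with a similarity.
Ac  = np.array([[0.0, 1.0],
                [-1.0, -0.5]])        # controllable 2x2 block
Acc = np.array([[1.0], [2.0]])
Au  = np.array([[0.3]])               # uncontrollable 1x1 block
Abar = np.block([[Ac, Acc], [np.zeros((1, 2)), Au]])
Bbar = np.array([[1.0], [0.0], [0.0]])

T = rng.standard_normal((3, 3))       # almost surely invertible
A = T @ Abar @ np.linalg.inv(T)
B = T @ Bbar
n = 3

Ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
nc = int(np.linalg.matrix_rank(Ctrb))  # here nc = 2

# Orthonormal basis of range(Ctrb), completed by its orthogonal complement,
# via SVD; with an orthonormal V, the inverse is just the transpose.
V = np.linalg.svd(Ctrb)[0]             # first nc columns span range(Ctrb)
Abar2 = V.T @ A @ V
Bbar2 = V.T @ B

# Lower-left block of the transformed A and lower block of B vanish.
assert np.allclose(Abar2[nc:, :nc], 0, atol=1e-8)
assert np.allclose(Bbar2[nc:], 0, atol=1e-8)
```

Any completion of the n_c independent columns works; the orthonormal choice just makes the inverse trivial to compute.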
Finally, recall that the eigenvalues of similar matrices, A and V^{-1}AV, are the same. Recall the structure of V^{-1}AV from Eq. (18); since V^{-1}AV is block upper-triangular, the eigenvalues of A are given by the eigenvalues of A_c and A_{\bar{c}}, called the controllable eigenvalues (modes) and uncontrollable eigenvalues (modes), respectively.
Example 1. Consider the CT LTI system:
\[
\dot{x} = \begin{bmatrix} 2 & 1 & 1 \\ 5 & 3 & 6 \\ -5 & -1 & -4 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u,
\qquad
\mathcal{C} = \begin{bmatrix} 1 & 2 & 4 \\ 0 & 5 & -5 \\ 0 & -5 & 5 \end{bmatrix},
\]
with rank(\mathcal{C}) = 2 < 3. Define V as
\[
V = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 5 & 0 \\ 0 & -5 & 1 \end{bmatrix},
\]
and note that the requirements for choosing V are satisfied. Finally, verify that the transformed system matrices are
\[
V^{-1}AV = \begin{bmatrix} 0 & 6 & -\tfrac{7}{5} \\ 1 & -1 & \tfrac{6}{5} \\ 0 & 0 & 2 \end{bmatrix},
\qquad
V^{-1}B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.
\]
Verify that the top-left 2 \times 2 subsystem is controllable. From this structure, we see that the two-dimensional controllable subsystem is grouped into the first two state variables, and that the third state variable has neither an input signal nor a coupling from the controllable subsystem. It is therefore not controllable.
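The rank computation in Example 1 can be checked directly from A and B:

```python
import numpy as np

# A and B from Example 1; the controllability matrix is [B, AB, A^2 B].
A = np.array([[ 2.0,  1.0,  1.0],
              [ 5.0,  3.0,  6.0],
              [-5.0, -1.0, -4.0]])
B = np.array([[1.0], [0.0], [0.0]])

Ctrb = np.hstack([B, A @ B, A @ A @ B])
print(Ctrb)
# nc = 2 < 3: the system is not controllable.
assert np.linalg.matrix_rank(Ctrb) == 2
```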
B. Decomposing observable modes
Now we can easily follow the same drill and come up with another transformation, x_k = W \bar{x}_k, where the first n_o rows of W^{-1} are a maximal set of linearly independent rows of \mathcal{O}, and the remaining rows are chosen to be linearly independent of the first n_o rows (and linearly independent among themselves). Finally, the transformed system,
\begin{align*}
\bar{x}_{k+1} &= \bar{A}\bar{x}_k + \bar{B} u_k, \\
&= \begin{bmatrix} A_o & 0 \\ A_{o\bar{o}} & A_{\bar{o}} \end{bmatrix} \bar{x}_k
+ \begin{bmatrix} B_o \\ B_{\bar{o}} \end{bmatrix} u_k, \qquad (24)
\end{align*}
\[
y_k = \begin{bmatrix} C_o & 0 \end{bmatrix} \bar{x}_k, \qquad (25)
\]
is separated into an observable subspace and an unobservable subspace. In fact, we can now
write this result as a theorem.
Theorem 1. Consider the DT LTI system,
\[
x_{k+1} = A x_k, \qquad y_k = C x_k,
\]
where A \in \mathbb{R}^{n \times n} and C \in \mathbb{R}^{p \times n}. Let \mathcal{O} be the observability matrix as defined in Eq. (6). If rank(\mathcal{O}) = n_o < n, then there exists a non-singular matrix, W \in \mathbb{R}^{n \times n}, such that
\[
W^{-1} A W = \begin{bmatrix} A_o & 0 \\ A_{o\bar{o}} & A_{\bar{o}} \end{bmatrix},
\qquad
C W = \begin{bmatrix} C_o & 0 \end{bmatrix},
\]
where A_o \in \mathbb{R}^{n_o \times n_o}, C_o \in \mathbb{R}^{p \times n_o}, and the pair (A_o, C_o) is observable.
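Theorem 1 can likewise be checked numerically. The sketch below (hypothetical numbers) hides the block form behind a random similarity and rebuilds W from the observability matrix; it uses an orthonormal W, so the leading rows of W^{-1} = W^T span the row space of \mathcal{O} and the unobservable directions end up last:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unobservable-by-construction system (hypothetical numbers): start from the
# block form of Theorem 1 and hide it with a random similarity.
Ao  = np.array([[0.0, 1.0],
                [-0.25, 0.0]])        # observable 2x2 block
Aoo = np.array([[1.0, 0.0]])
Au  = np.array([[0.5]])               # unobservable 1x1 block
Abar = np.block([[Ao, np.zeros((2, 1))], [Aoo, Au]])
Cbar = np.array([[1.0, 0.0, 0.0]])

T = rng.standard_normal((3, 3))
Ti = np.linalg.inv(T)
A, C = T @ Abar @ Ti, Cbar @ Ti
n = 3

Obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
no = int(np.linalg.matrix_rank(Obsv))  # here no = 2

# Right singular vectors of O: row-space basis first, null-space basis last.
W = np.linalg.svd(Obsv)[2].T           # orthonormal, so W^{-1} = W^T
Abar2 = W.T @ A @ W
Cbar2 = C @ W

# Upper-right block of the transformed A is zero, and C sees only x_o.
assert np.allclose(Abar2[:no, no:], 0, atol=1e-8)
assert np.allclose(Cbar2[:, no:], 0, atol=1e-8)
```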
By carefully applying a sequence of such controllability and observability transformations, the complete Kalman decomposition results in a system of the following form:
\[
\begin{bmatrix} x_{co}(k+1) \\ x_{c\bar{o}}(k+1) \\ x_{\bar{c}o}(k+1) \\ x_{\bar{c}\bar{o}}(k+1) \end{bmatrix}
=
\begin{bmatrix}
A_{co} & 0 & A_{13} & 0 \\
A_{21} & A_{c\bar{o}} & A_{23} & A_{24} \\
0 & 0 & A_{\bar{c}o} & 0 \\
0 & 0 & A_{43} & A_{\bar{c}\bar{o}}
\end{bmatrix}
\begin{bmatrix} x_{co}(k) \\ x_{c\bar{o}}(k) \\ x_{\bar{c}o}(k) \\ x_{\bar{c}\bar{o}}(k) \end{bmatrix}
+
\begin{bmatrix} B_{co} \\ B_{c\bar{o}} \\ 0 \\ 0 \end{bmatrix} u(k), \qquad (26)
\]
\[
y(k) = \begin{bmatrix} C_{co} & 0 & C_{\bar{c}o} & 0 \end{bmatrix}
\begin{bmatrix} x_{co}(k) \\ x_{c\bar{o}}(k) \\ x_{\bar{c}o}(k) \\ x_{\bar{c}\bar{o}}(k) \end{bmatrix}. \qquad (27)
\]
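The dimensions of the four blocks in Eqs. (26)–(27) can be computed from ranks alone: n_c = rank(\mathcal{C}), n - n_o = \dim \mathcal{N}(\mathcal{O}), and \dim(\mathcal{R}(\mathcal{C}) \cap \mathcal{N}(\mathcal{O})) gives the controllable-but-unobservable part. A sketch with a hypothetical 4-state system, built so that each block is one-dimensional:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Hypothetical 4-state system: distinct eigenvalues, the input reaches
# modes 1 and 2, the output sees modes 1 and 3 (then hide it with T).
Abar = np.diag([0.1, 0.2, 0.3, 0.4])
Bbar = np.array([[1.0], [1.0], [0.0], [0.0]])
Cbar = np.array([[1.0, 0.0, 1.0, 0.0]])

T = rng.standard_normal((n, n))
Ti = np.linalg.inv(T)
A, B, C = T @ Abar @ Ti, T @ Bbar, Cbar @ Ti

Ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
Obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

nc = int(np.linalg.matrix_rank(Ctrb))   # dim of controllable subspace
no = int(np.linalg.matrix_rank(Obsv))   # n - no = dim of unobservable subspace

# Orthonormal bases of R(Ctrb) and of N(Obsv) via SVD.
Uc  = np.linalg.svd(Ctrb)[0][:, :nc]
Nun = np.linalg.svd(Obsv)[2][no:, :].T

# dim(U ∩ W) = dim U + dim W - dim(U + W)
n_c_unobs   = nc + (n - no) - int(np.linalg.matrix_rank(np.hstack([Uc, Nun])))
n_co        = nc - n_c_unobs             # controllable and observable
n_unc_unobs = (n - no) - n_c_unobs       # neither
n_unc_obs   = n - nc - n_unc_unobs       # observable but not controllable
print(n_co, n_c_unobs, n_unc_obs, n_unc_unobs)  # each block is 1-dim here
```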
C. Kalman decomposition theorem
Theorem 2. Given a DT (or CT) LTI system that is neither controllable nor observable, there exists a non-singular transformation, x = T\bar{x}, such that the system dynamics can be transformed into the structure given by Eqs. (26)–(27).
Proof: The proof is left as an exercise and constitutes Homework 1.
APPENDIX A
SIMILARITY TRANSFORMS
Consider any invertible matrix, V \in \mathbb{R}^{n \times n}, and a state transformation, x_k = V z_k. The transformed DT-LTI system is given by
\begin{align*}
x_{k+1} &= A x_k + B u_k, \\
\Rightarrow\; V z_{k+1} &= A V z_k + B u_k, \\
\Rightarrow\; z_{k+1} &= \underbrace{V^{-1} A V}_{\bar{A}} z_k + \underbrace{V^{-1} B}_{\bar{B}} u_k, \\
y_k &= \underbrace{C V}_{\bar{C}} z_k.
\end{align*}
The new (transformed) system is now defined by the system matrices, \bar{A}, \bar{B}, and \bar{C}. The two systems, x_k and z_k, are called similar systems.
Remark 4. Each of the following can be proved.
• Given any DT-LTI system, there are infinitely many possible transformations.
• Similar systems have the same eigenvalues; the eigenvectors of the corresponding system matrices are different.
• Similar systems have the same transfer function; in other words, the transfer function of the DT-LTI dynamics is unique, with infinitely many possible state-space realizations.
• The ranks of the controllability and observability matrices are invariant under a similarity transformation.
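Each bullet is easy to confirm numerically. The sketch below (hypothetical matrices, a random V) checks the eigenvalues, one sample point of the transfer function C(zI - A)^{-1}B, and the controllability/observability ranks:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-state DT LTI system and a random (almost surely invertible) V.
A = np.array([[0.0, 1.0],
              [-0.5, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

V = rng.standard_normal((2, 2))
Vi = np.linalg.inv(V)
Ab, Bb, Cb = Vi @ A @ V, Vi @ B, C @ V   # the similar system

# Same eigenvalues (the eigenvectors differ in general).
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(Ab)))

# Same transfer function H(z) = C (zI - A)^{-1} B, checked at one sample z.
z = 1.7
H  = (C  @ np.linalg.solve(z * np.eye(2) - A,  B ))[0, 0]
Hb = (Cb @ np.linalg.solve(z * np.eye(2) - Ab, Bb))[0, 0]
assert np.isclose(H, Hb)

# Same controllability and observability ranks.
ctrb = lambda A, B: np.hstack([B, A @ B])
obsv = lambda A, C: np.vstack([C, C @ A])
assert np.linalg.matrix_rank(ctrb(A, B)) == np.linalg.matrix_rank(ctrb(Ab, Bb))
assert np.linalg.matrix_rank(obsv(A, C)) == np.linalg.matrix_rank(obsv(Ab, Cb))
```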
APPENDIX B
TRIVIA: NULL SPACES
Consider a set of m vectors, x_i \in \mathbb{R}^n, i = 1, \ldots, m.

Definition 2 (Span). The span of m vectors, x_i \in \mathbb{R}^n, i = 1, \ldots, m, is defined as the set of all vectors that can be written as a linear combination of the x_i's, i.e.,
\[
\operatorname{span}(x_1, \ldots, x_m) = \left\{ y \;\middle|\; y = \sum_{i=1}^{m} \alpha_i x_i \right\}, \qquad (28)
\]
for \alpha_i \in \mathbb{R}.
Definition 3. The null space, \mathcal{N}(A) \subseteq \mathbb{R}^n, of an n \times n matrix, A, is defined as the set of all vectors such that
\[
A y = 0, \qquad (29)
\]
for y \in \mathcal{N}(A).

Consider a real-valued matrix, A \in \mathbb{R}^{n \times n}, and let x_1 \neq 0 and x_2 \neq 0 be a maximal set of linearly independent vectors² such that
\[
A x_1 = 0, \qquad A x_2 = 0, \qquad (30)
\]
which leads to
\[
A \sum_{i=1}^{2} \alpha_i x_i = \sum_{i=1}^{2} \alpha_i A x_i = 0. \qquad (31)
\]
Following the definitions of span and null space, we can say that span(x_1, x_2) lies in the null space of A, i.e.,
\[
\mathcal{N}(A) = \operatorname{span}(x_1, x_2). \qquad (32)
\]
In this particular case, the dimension of \mathcal{N}(A) is 2, and³ rank(A) = n - 2.
²In other words, there does not exist any other vector, x_3, that is linearly independent of x_1 and x_2 such that A x_3 = 0.
³This further leads to an alternate definition of rank as n - \nu, where \nu \triangleq \dim(\mathcal{N}(A)).
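The rank–nullity relation in the footnote can be checked with an SVD-based null-space computation. The matrix below is a hypothetical 4 \times 4 example, built so that two of its columns are combinations of the other two:

```python
import numpy as np

# Hypothetical 4x4 matrix with a 2-dimensional null space:
# columns 3 and 4 are linear combinations of columns 1 and 2.
c1 = np.array([1.0, 0.0, 1.0, 0.0])
c2 = np.array([0.0, 1.0, 0.0, 1.0])
A = np.column_stack([c1, c2, c1 + c2, c1 - c2])
n = 4

U, s, Vh = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N = Vh[rank:].T                      # orthonormal basis of N(A)

assert rank == 2
assert N.shape[1] == n - rank        # rank-nullity: dim N(A) = n - rank(A)
assert np.allclose(A @ N, 0)         # every basis vector is annihilated by A
```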
Let us recall the uncontrollable case with rank(\mathcal{C}) = n_c. Clearly, \mathcal{C} has n_c linearly independent (l.i.) columns (vectors in \mathbb{R}^n); let the matrix V_1 \in \mathbb{R}^{n \times n_c} consist of these n_c l.i. columns. On the other hand, let the matrix \tilde{V}_1 \in \mathbb{R}^{n \times (n - n_c)} consist of the remaining columns of \mathcal{C}, and note that
\[
\text{every column of } \tilde{V}_1 \in \operatorname{span}(\text{columns of } V_1),
\]
or, span(columns of V_1) contains every column of \tilde{V}_1.
The above leads to the following:

Remark 5. If there exists a matrix, \bar{V}_2, such that \bar{V}_2 V_1 = 0, then
\[
\text{every column of } V_1 \in \mathcal{N}(\bar{V}_2),
\]
\[
\Rightarrow \operatorname{span}(\text{columns of } V_1) \subseteq \mathcal{N}(\bar{V}_2),
\]
\[
\Rightarrow \text{every column of } \tilde{V}_1 \in \mathcal{N}(\bar{V}_2).
\]
A. Kalman decomposition, revisited
Construct a transformation matrix, V \in \mathbb{R}^{n \times n}, whose first n \times n_c block is V_1. Choose the remaining n - n_c columns to be linearly independent from the first n_c columns (and linearly independent among themselves); call the second n \times (n - n_c) block V_2:
\[
V \triangleq \begin{bmatrix} V_1 & V_2 \end{bmatrix},
\qquad
V^{-1} \triangleq \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix},
\]
\[
I_n = V^{-1} V
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 V_1 & \bar{V}_1 V_2 \\ \bar{V}_2 V_1 & \bar{V}_2 V_2 \end{bmatrix}
= \begin{bmatrix} I_{n_c} & 0 \\ 0 & I_{n - n_c} \end{bmatrix}.
\]
From the above, note that \bar{V}_2 V_1 = 0. Now consider the transformed system matrices, \bar{A} and \bar{B}:
\[
\bar{A} \triangleq V^{-1} A V
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 A V_1 & \bar{V}_1 A V_2 \\ \bar{V}_2 A V_1 & \bar{V}_2 A V_2 \end{bmatrix},
\]
\[
\bar{B} \triangleq V^{-1} B
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} B
= \begin{bmatrix} \bar{V}_1 B \\ \bar{V}_2 B \end{bmatrix}.
\]
Note that
\[
\text{every column of } A V_1 \in \operatorname{span}(\text{columns of } V_1) \subseteq \mathcal{N}(\bar{V}_2)
\;\Rightarrow\; \bar{V}_2 A V_1 = 0,
\]
\[
\text{every column of } B \in \operatorname{span}(\text{columns of } V_1) \subseteq \mathcal{N}(\bar{V}_2)
\;\Rightarrow\; \bar{V}_2 B = 0.
\]