7/26/2019 Course Lecture8
http://slidepdf.com/reader/full/course-lecture8 1/14
EML5311 Lyapunov Stability & Robust Control Design
1 Lyapunov Stability criterion
In robust control design for nonlinear uncertain systems, stability theory plays an important
role. For any given control system, it is crucial to ensure stability, since an unstable control
system is useless. The Lyapunov stability criterion¹ is a general and useful procedure for
studying the stability of nonlinear systems. Lyapunov stability theory includes two methods:
Lyapunov's first method and Lyapunov's direct method. Lyapunov's first method is a
technique that simply uses system linearization (lowest-order approximation) around a given
point, and it can only establish local stability results with small stability regions. Lyapunov's
direct method is the most important tool for the design and analysis of nonlinear systems:
it applies directly to nonlinear systems without the need for linearization and can therefore
establish global stability. The basic concept behind Lyapunov's direct method is that if the
total energy of a system (electrical or mechanical, linear or nonlinear) is continuously
dissipated, then the system will eventually reach an equilibrium point and remain at that
point. Hence, Lyapunov's direct method involves two steps: first, find an appropriate scalar
function, referred to as a Lyapunov function; second, evaluate its first-order time derivative
along the trajectories of the system. If the derivative of the Lyapunov function decreases
along the system trajectories as time increases, then the system energy is dissipating and
the system will eventually settle down. The definitions below give a more formal statement
of admissible choices of Lyapunov function candidates.
Autonomous systems: the nonlinear system
ẋ = f (x, u, t)
is said to be autonomous if f does not depend explicitly on time, i.e., if the system can be
written as
ẋ = f (x).
Otherwise, the system is called non-autonomous.
Equilibrium point: A state xe is an equilibrium point (state) of the system if, whenever x(t) = xe,
¹The theory was introduced in the late 19th century by the Russian mathematician Alexandr Mikhailovich Lyapunov.
it remains equal to xe for all future time. Mathematically, this means that xe satisfies
0 = f (xe).
In this paper, we are mainly interested in stability of equilibrium points.
Stability and instability: The equilibrium point xe = 0 is said to be stable if, for any
Γ > 0, there exists γ > 0 such that if ‖x(0)‖ < γ, then ‖x(t)‖ < Γ for all t ≥ 0. Otherwise,
the equilibrium point is unstable.
Asymptotic stability: An equilibrium point 0 is asymptotically stable if it is stable and if,
in addition, there exists some γ > 0 such that ‖x(0)‖ < γ implies that x(t) → 0 as t → ∞.
Exponential stability: An equilibrium point 0 is exponentially stable if there exist two
strictly positive numbers α and β such that
‖x(t)‖ ≤ α‖x(0)‖e^{−βt}, ∀ t > 0,
for all x(0) in some ball Bγ in the neighborhood of the origin.
Lyapunov’s first method:
1. The equilibrium point of the nonlinear system is asymptotically stable if the linearized
system is strictly stable.
2. The equilibrium point of the nonlinear system is unstable if the linearized system is
strictly unstable.
3. If the linearized system is marginally stable, one cannot conclude anything from the
linear approximation (the equilibrium point may be stable, asymptotically stable, or unstable
for the nonlinear system).
Lyapunov function: If a function V (x) is positive definite and has continuous partial deriva-
tives in a ball Bγ, and if its time derivative along any state trajectory of the system ẋ = f (x) is
negative semi-definite, i.e., V̇ (x) ≤ 0, then V (x) is said to be a Lyapunov function.
Global stability: Assume that there exists a scalar function V of the state x, with contin-
uous first-order derivatives, such that
• V (x) is positive definite
• V̇ (x) is negative definite
• V (x) → ∞ as ‖x‖ → ∞
then the equilibrium at the origin is globally asymptotically stable.
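As a quick numerical illustration of the direct method (a minimal sketch; the system ẋ = −x − x³ and the candidate V (x) = x², both positive definite and radially unbounded in the required senses, are chosen here for illustration and are not taken from the text), one can check that V decreases monotonically along simulated trajectories:

```python
# Forward-Euler check that V(x) = x^2 decreases along trajectories of
# x' = -x - x^3 (illustrative system; V positive definite and radially
# unbounded, V' negative definite, so the origin is globally stable).
def simulate_V(x0, dt=1e-3, steps=5000):
    x, history = x0, []
    for _ in range(steps):
        history.append(x * x)          # record V(x) = x^2
        x += dt * (-x - x**3)          # one Euler step of x' = f(x)
    return x, history

x_final, V = simulate_V(2.0)
assert all(V[i + 1] < V[i] for i in range(len(V) - 1))  # V strictly decreasing
assert abs(x_final) < 0.05                              # state settles near 0
```

The monotone decay of V along the trajectory is exactly the energy-dissipation picture described above.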
Stability of uniform ultimate boundedness: A solution x(t), with x(t0) = x0, is said to be
uniformly ultimately bounded (UUB) in a hyperball B(0, ε) centered at the origin and of
radius ε, if there exists a non-negative constant Ψ(x0, ε) < ∞, independent of t0, such that
‖x0‖ < δ implies x(t) ∈ B(0, ε) for all t ≥ t0 + Ψ(x0, ε).
Example: Lyapunov function for LTI systems. Consider the linear system
ẋ = Ax,
where x ∈ ℝⁿ is the state and A ∈ ℝⁿ×ⁿ is the system matrix. Propose the quadratic Lyapunov
function candidate
V (x) = xᵀP x,
where P is a positive definite matrix to be determined. Taking the time derivative yields
V̇ (x) = ẋᵀP x + xᵀP ẋ = xᵀ(AᵀP + P A)x = −xᵀQx,
where Q is defined by the algebraic Lyapunov equation −Q = AᵀP + P A. Therefore, the
system is stable if Q is positive definite or positive semi-definite.
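The Lyapunov equation AᵀP + P A = −Q is linear in P and can be solved directly. A minimal numerical sketch (the matrices A and Q below are chosen for illustration, not taken from the text) using the Kronecker/vec identity:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q for P via the Kronecker/vec identity:
    vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    vecP = np.linalg.solve(M, (-Q).reshape(-1, order="F"))
    P = vecP.reshape(n, n, order="F")
    return 0.5 * (P + P.T)                  # symmetrize against round-off

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # stable: eigenvalues -1, -2
Q = np.eye(2)                               # a chosen positive definite Q
P = solve_lyapunov(A, Q)
assert np.allclose(A.T @ P + P @ A, -Q)     # P solves the Lyapunov equation
assert np.all(np.linalg.eigvalsh(P) > 0)    # P > 0, so V(x) = x^T P x works
```

Since A here is stable and Q is positive definite, the computed P comes out positive definite, consistent with the backward procedure described next.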
A Lyapunov function suitable for stability analysis is found not by randomly choosing
P but by determining P from the Lyapunov equation for a given positive definite
Q. It has been shown that, given a positive definite Q, the system is stable if and only if
the unique solution P of the Lyapunov equation is also positive definite. That is, this backward
procedure is necessary and sufficient both for the existence of a Lyapunov function and for
analyzing stability. As will be shown later, this systematic way of generating Lyapunov
functions for linear systems also applies to many nonlinear (uncertain) systems, for example,
the class of feedback-linearizable nonlinear systems, the class of nonlinear systems with a
linear part, etc.

For control design, consider the system
ẋ = Ax + Bu,
where B ∈ ℝⁿ×ᵐ is the input matrix and u ∈ ℝᵐ is the input. If the pair (A, B) is controllable,
control design and the search for a Lyapunov function are carried out through the backward procedure
as follows: given positive definite matrices Q and R, there is a unique positive definite matrix
P satisfying the algebraic Riccati equation AᵀP + P A − P BR⁻¹BᵀP + Q = 0; then the
Lyapunov function is V (x) = xᵀP x and the stabilizing control is u(x) = −R⁻¹BᵀP x.
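This backward procedure can be sketched numerically (an illustration assuming SciPy's `solve_continuous_are` is available; the double-integrator plant and the choices Q = I, R = I are illustrative, not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator (controllable)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # chosen positive definite weights
R = np.eye(1)

# Solve A^T P + P A - P B R^-1 B^T P + Q = 0 for the unique P > 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)           # stabilizing gain: u = -K x

assert np.all(np.linalg.eigvalsh(P) > 0)               # V(x) = x^T P x valid
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)   # closed loop stable
```

The same P that solves the Riccati equation simultaneously yields the Lyapunov function and the stabilizing control, which is the "integrated" design the text refers to below.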
The example shows that, for LTI systems, control design and the search for a Lyapunov
function are integrated and can be done systematically, and that Lyapunov functions for
linear systems can always be chosen to be quadratic. We shall use the above result in chapter
three to investigate robust control design for linear and certain nonlinear uncertain systems.
Moreover, one of the main objectives of this book is to develop systematic procedures for
designing control and searching for Lyapunov functions for general nonlinear uncertain systems,
though the resulting solution is not as complete as the one above for LTI systems.
Example: Consider the scalar system given by
ẋ = u + a,
where a is an uncertain (time-varying) parameter satisfying |a| < 1. Under the standard
linear feedback control law u = −kx, the derivative of the Lyapunov function V = 0.5x² is
given by
V̇ = −kx(x − a/k).
Because of the uncertainty in a, V̇ is only negative definite outside the ball B(0, a/k) ⊂
B(0, 1/k). Hence, the system is not asymptotically stable; for constant a, the solution is
x(t) = e^{−kt}x₀ + (a/k)(1 − e^{−kt}) → a/k as t → ∞.
So, the solution is globally uniformly ultimately bounded (GUUB) with respect to 1/k for
the class of uncertainties denoted by a. Furthermore, the GUUB bound tends to
the origin as k → ∞.
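A quick simulation confirms the ultimate bound (a sketch with illustrative values k = 10, a = 0.8, chosen to satisfy |a| < 1; they are not from the text):

```python
# Euler simulation of x' = -k*x + a (the closed loop under u = -k*x)
# with a constant uncertainty a; the state converges to a/k, which lies
# inside the GUUB ball B(0, 1/k).
k, a, dt = 10.0, 0.8, 1e-4
x = 5.0
for _ in range(int(2.0 / dt)):       # simulate 2 seconds
    x += dt * (-k * x + a)
assert abs(x - a / k) < 1e-3         # settles at a/k = 0.08
assert abs(x) < 1.0 / k              # within the ball of radius 1/k
```

Increasing k shrinks the residual value a/k, matching the observation that the GUUB bound tends to the origin as k → ∞.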
The implications of the example are twofold. First, if V̇ is negative definite outside some
hyperball in the state space, a stability result of GUUB can be concluded. Second, while larger
control energy makes the GUUB bound of the state smaller, no control of finite energy achieves
asymptotic stability. Both observations can be extended to general nonlinear systems.

The next section addresses robotic manipulator systems, which are widely used in the area of
robust control. Some of the theories developed here are applied to robotic manipulator
systems. A brief general discussion of robotic systems is presented below.
2 Robotic Manipulators
A robot is a reprogrammable, multifunctional manipulator designed to move material, parts,
tools, or specialized devices through variable programmed motions for the performance of a
variety of tasks. A robot arm is classified as either rigid-link or flexible-link. A rigid link
can be either revolute (rotary) or linear (prismatic); a prismatic link allows a linear relative
motion between two links, see figure (??). In the chapters to come, all robot manipulator
systems discussed are of revolute nature.
In the case of a system as complicated as a robot, it is not practical to assume that the
parameters in the dynamic model of the robotic system are known precisely. There will
always be inexact cancellation of the nonlinearities in the system due to uncertainties. In such
cases we use robust control, simplifying the equations of motion as much as possible by ignoring
certain terms in the equations. One use of robotic systems is in environmental
waste management, in which accuracy is important, especially the accuracy in positioning the
end-effector of the manipulator. Requirements such as safety, motion compliance
control, and the operating environment can be fulfilled by using a low-level robot controller in
which the end-effector arm is moved quickly yet accurately while maintaining a high degree
of robustness.
Since we are interested in robotic manipulator systems, as we shall present in chapter 6, let
us formulate the dynamical model of a rigid-link robot manipulator. The rigid-link robot
is described by
τ = M (q)q̈ + V_m(q, q̇)q̇ + N (q, q̇) (1)
where
N (q, q̇) = G(q) + F (q̇) + ∆F,
M (q) ∈ ℝⁿ×ⁿ is the inertia matrix, V_m(q, q̇) ∈ ℝⁿ×ⁿ is a matrix containing the centripetal
and Coriolis terms, G(q) ∈ ℝⁿ is the gravity vector, F (q̇) is the friction term, ∆F (q, t) ∈ ℝⁿ
is a vector representing lumped uncertainties, q(t) ∈ ℝⁿ is the joint variable vector, and
τ ∈ ℝⁿ is the input torque vector. There are three widely used properties of the robot
dynamic equation above. These properties will be used in chapter 6, or whenever a robotic
system is under study, during the stability analysis of the robust controller.
Property 1
The inertia matrix M (q) is symmetric and positive definite. Hence,
m1 ≤ ‖M (q)‖ ≤ m2(q),
where m1 is a positive constant and m2(q) is a strictly positive function. Moreover,
m1 and m2(q) are chosen in such a way that the maximum possible parameter variation of
M (q) is taken into account.
Note: For the case that the robotic system is purely revolute, m2(q) = m2 is a positive
constant.
Property 2
The matrices M (q) and V_m(q, q̇) satisfy the following equation:
xᵀ(½Ṁ (q) − V_m(q, q̇))x = 0, ∀x ∈ ℝⁿ.
In other words, the matrix ½Ṁ (q) − V_m(q, q̇) is skew-symmetric.
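Property 2 can be checked numerically on a concrete model. A minimal sketch for a standard planar two-link arm (the lumped inertia parameters a, b, d below are hypothetical placeholders, not taken from this text):

```python
import numpy as np

# Skew-symmetry check of (1/2)*Mdot - Vm for a standard planar two-link
# arm; a, b, d are hypothetical lumped inertia parameters.
a, b, d = 3.0, 0.5, 1.0
rng = np.random.default_rng(0)
q2, dq1, dq2 = rng.uniform(-np.pi, np.pi, 3)   # random configuration/rates

s2, c2 = np.sin(q2), np.cos(q2)
M = np.array([[a + 2 * b * c2, d + b * c2],
              [d + b * c2,     d]])            # inertia matrix (symmetric)
Vm = np.array([[-b * s2 * dq2, -b * s2 * (dq1 + dq2)],
               [ b * s2 * dq1,  0.0]])         # centripetal/Coriolis matrix
Mdot = np.array([[-2 * b * s2 * dq2, -b * s2 * dq2],
                 [-b * s2 * dq2,      0.0]])   # dM/dt along (dq1, dq2)

N = 0.5 * Mdot - Vm
assert np.allclose(M, M.T)          # Property 1: M symmetric
assert np.allclose(N + N.T, 0.0)    # Property 2: N skew, so x^T N x = 0
```

The check holds at any configuration and joint velocity, which is what makes the property so useful in Lyapunov-based analysis of robot controllers.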
Property 3
The centripetal/Coriolis term V_m(q, q̇) is bounded as
‖V_m(q, q̇)‖ ≤ a1‖q̇‖,
and the friction and gravity terms are bounded as
‖G(q) + F_d q̇ + F_s(q̇)‖ ≤ a2 + a3‖q̇‖,
where the ai are known constants.
After introducing the properties used in the analysis of robotic systems, let us briefly discuss
a variety of robust control designs for robotic systems.
Position Control
This design technique is used to drive the link(s) of a robotic system to a specific position
(desired location) in applications where accuracy is important, especially industrial and medical robotic
systems. Two main types of robust control design schemes have been proposed: one utilizes
the so-called "Min-Max" control and the other uses a "saturation"-type controller. The
Min-Max controller is naturally discontinuous and yields global exponential stability, while
the saturation controller is continuous but yields global uniform ultimate boundedness.
Position control simply drives the robotic link(s) to a final desired position with a very small
error, which is referred to as set-point tracking.
Force Control
Many control design schemes have been developed for robotic systems in free space, that
is, when the robot arm is not in contact with any surface. However, most industrial robots,
used for welding, grinding, polishing, etc., require contact with objects or surfaces. Hence, the
robot arm motion is constrained depending on the direction of the arm movement. This fact
motivated researchers to investigate the constrained-motion case and develop position/force
controllers. Among these controllers are hybrid position/force control, impedance control,
and reduced-order methods. The disadvantage of hybrid position/force control is that
it requires exact knowledge of the robot manipulator and thus the analysis is limited to
"uncertainty free" systems. An adaptive control design scheme, based on the joint-space
robot model formulation, was developed for hybrid position/force control of robots with
uncertainty.
Impedance Control
Impedance control is based on the idea that the robust controller should be utilized to regu-
late the dynamic behavior between the robot arm end-effector motion and the force exerted
on the surface, rather than considering the motion and force control problems separately.
The name "impedance" emanates from the idea of using an Ohm's-law-type relationship
between motion and force. Like the previous types of controllers, the impedance controller has
been extensively studied. A robust impedance controller was developed to ensure stability in
the presence of uncertainties. An adaptive impedance controller was also developed that
handles parametric uncertainty.
Industrial Robots
Nowadays, adaptive control is widely utilized in industrial robots because of the
inexpensive computer power that has become available. Moreover, these
robots are being utilized to their full potential in terms of the speed and precision of their
movements. It is possible to use a dynamic model of the manipulator as the heart of a
sophisticated control algorithm running on a powerful control computer. This dynamic model
allows the control algorithm to determine how to drive the manipulator's actuators in order to
compensate for the complicated effects of inertia, centripetal, Coriolis, gravity, and friction
forces when the robot is in motion. The result is that the manipulator can be made to
follow a desired trajectory through space with smaller tracking errors. Adaptive control, as
other types of controllers, has its advantages and disadvantages. Adaptive control cannot
be used to estimate systems with fast time-varying uncertainties or parameters, because
one cannot predict the nature of the uncertainty and the adaptive algorithm may not be
able to adapt fast enough to the time-varying parameters. On the other hand, a robust
controller, used mostly in this dissertation, can stabilize nonlinear systems with arbitrarily
fast time-varying uncertainties or parameters. Moreover, we shall introduce robust control
design techniques for robotic systems with arbitrarily fast time-varying uncertainties, since
robust control design requires only known bounding functions of the uncertainties. This
dissertation focuses on nonlinear robust control design schemes.
3 Robust control design under Matching Conditions
Many primary results for nonlinear uncertain systems under matching conditions have been
developed in the last 15 years. Gutman introduced a discontinuous min-max control which
yields asymptotic stability for nonlinear systems under the matching condition. Because of
its discontinuous behavior, the controller is physically poorly behaved, since all physical
systems have a finite bandwidth while the discontinuous control requires infinite
bandwidth. Later, Corless and Leitmann introduced a class of continuous state feedback
controllers guaranteeing uniform ultimate boundedness under the matching conditions. The
mathematical model of nonlinear uncertain systems under matching conditions is established
through the following definition.

Definition: Consider the following nonlinear uncertain system
ẋ = f (x, t) + ∆f (x, t) + B(x, t)u + ∆B(x, t)u (2)
where ∆f (x, t) and ∆B(x, t) are the unknown parts of f (x, t) and B(x, t), respectively. The
system is said to satisfy the matching conditions (MCs) if the uncertainties can be decom-
posed as
∆f (x, t) = B(x, t)∆f̄ (x, t), ∆B(x, t) = B(x, t)∆B̄(x, t),
and if there exists a positive constant ε₀ such that
‖∆B̄(x, t)‖ ≤ 1 − ε₀. (3)
Therefore, the system can be rewritten as
ẋ = f (x, t) + B(x, t) [∆f̄ (x, t) + (1 + ∆B̄(x, t)) u(x, t)] (4)
in which the uncertainty enters the system through the same channel as the control input u. The
reason behind inequality (3) is twofold. First, the system is not stabilizable in the case
∆B̄(x, t) = −1; moreover, if ‖∆B̄(x, t)‖ > 1, then the sign of the term 1 + ∆B̄(x, t) is
uncertain and hence any control input may cause the state to grow out of bound. Second,
the inequality ensures that there is no singularity in the control design by guaranteeing that
the term 1 + ∆B̄(x, t) is invertible.
Remark: If the uncertainty were known, one could easily choose a control input to cancel
its effect and achieve stability. But, since physical dynamical systems contain some uncer-
tainties which are unknown, one replaces those uncertainties by their bounding functions,
which are chosen depending on the structure of the system, and then the robust control de-
sign scheme can be adopted. We shall investigate system stability through Lyapunov's direct
method.
3.1 Lyapunov stability in Robust Control Design
The nominal model of system (4) is given by
ẋ = f (x, t) + B(x, t)u(x, t). (5)
We shall assume that the origin (x = 0) is globally asymptotically stable for the uncontrolled
system ẋ = f (x, t). Furthermore, suppose that there exists a Lyapunov function for system
(5), i.e., there exists a continuously differentiable function V (x, t) that satisfies the following
inequalities for all (x, t) ∈ ℝⁿ × [0, ∞):
δ1(‖x‖) ≤ V (x, t) ≤ δ2(‖x‖),  ∂V/∂t + (∂V/∂x) f (x, t) ≤ −δ(‖x‖), (6)
where the δi are class K functions. To demonstrate the stability of system (4), choose the
control input u(x, t) to be of the form
u(x, t) = − μ(x, t)ρ(x, t) / (‖μ(x, t)‖ + εφ(t)), (7)
where ε > 0 and φ(t), an L1 function, are chosen freely by the designer, and
μ(x, t) = Bᵀ(x, t)(∂V/∂x)ᵀ ρ(x, t),  ‖∆f̄ (x, t)‖ ≤ ρ(x, t).
Differentiating V (x, t) along system (4) under the robust control (7) yields
V̇ = ∂V/∂t + (∂V/∂x) [f (x, t) + B∆f̄ + B (1 + ∆B̄) u]
 ≤ −δ(‖x‖) + (∂V/∂x) [B∆f̄ + B (1 + ∆B̄) u]
 ≤ −δ(‖x‖) + ‖(∂V/∂x)B‖ ρ(x, t) + (∂V/∂x)B (1 + ∆B̄) u
 ≤ −δ(‖x‖) + ‖μ‖ − ‖μ‖² (1 + ∆B̄) / (‖μ‖ + εφ(t))
 ≤ −δ(‖x‖) + ‖μ‖ − ‖μ‖² / (‖μ‖ + εφ(t))
 = −δ(‖x‖) + ε‖μ‖φ(t) / (‖μ‖ + εφ(t))
 ≤ −δ(‖x‖) + εφ(t). (8)
The following results are deduced from robust control design under matching conditions.
1. If φ(t) is constant, say φ(t) = 1, then the system is globally uniformly ultimately
bounded, with an ultimate bound given by a class K function of ε over the infinite time
horizon.
2. If φ(t) is an exponentially decaying function, say φ(t) = e^{−at} for some a > 0, then the
system is globally exponentially stable.
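The design (6)-(7) can be sketched on a scalar example (an illustration under stated assumptions: nominal dynamics ẋ = −x with V = x²/2, B = 1, ∆B̄ = 0, a disturbance 0.8 sin(5t) bounded by ρ = 1, and φ(t) = 1 with ε = 0.01; none of these values come from the text):

```python
import math

# Euler simulation of x' = -x + d(t) + u with matched uncertainty
# d(t) = 0.8*sin(5t), |d| <= rho = 1.  With V = x^2/2 and B = 1,
# control (7) becomes mu = x*rho, u = -mu*rho / (|mu| + eps*phi).
eps, rho, dt = 0.01, 1.0, 1e-3
x, t = 2.0, 0.0
tail = []                                   # |x| over the last second
for _ in range(int(10.0 / dt)):
    mu = x * rho
    u = -mu * rho / (abs(mu) + eps)         # robust control, phi(t) = 1
    x += dt * (-x + 0.8 * math.sin(5 * t) + u)
    t += dt
    if t > 9.0:
        tail.append(abs(x))
assert max(tail) < 0.1   # ultimately bounded well below the uncertainty size
```

Consistent with result 1, the state does not converge to zero but settles into a small residual ball whose size shrinks with ε.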
In summary, one can apply the above systematic design scheme to systems satisfying the
matching conditions. The mechanical dynamics of a rigid-link robotic manipulator, for in-
stance, is an example of a physical system satisfying the matching conditions. However,
there are many uncertain nonlinear systems that do not satisfy the matching conditions.
The next section introduces a robust control design scheme for systems satisfying the so-called
equivalently matched uncertainty.
3.2 Examples of Unstabilizable Uncertain Systems
Although it would be ideal if robust control could be designed to stabilize all uncertain
systems of the form (??), the following examples show that not all uncertain systems are
stabilizable.
Example: Consider the second-order system
ẋ1 = x2 + ∆(x1, x2), ẋ2 = u,
in which the uncertainty ∆(·) is bounded as |∆(x1, x2)| ≤ 2 + x1² + x2². One can easily see that
the system is not stabilizable for every admissible uncertainty, since one possible additive
uncertainty ∆(x1, x2) within the given bounding function is −x2 + x1.
The system is not stabilizable because an uncertainty within its bound can change the structure
of the system such that part of the system dynamics becomes unstable and decoupled from the
rest of the system and from the control input.
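A short simulation illustrates the point (the feedback gains below are arbitrary; any choice of u gives the same x1 trajectory):

```python
# With the admissible uncertainty Delta = -x2 + x1, the first equation
# collapses to x1' = x1, which no input u can influence: x1(t) = x1(0)*e^t.
dt = 1e-3
x1, x2 = 1.0, 0.0
for _ in range(int(5.0 / dt)):       # simulate 5 seconds
    u = -10.0 * x1 - 10.0 * x2       # arbitrary feedback; it cannot reach x1
    dx1 = x2 + (-x2 + x1)            # = x1, regardless of u
    dx2 = u
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
assert x1 > 100.0                    # grows like e^t despite the control
```

The x1 subsystem is decoupled from u by this admissible uncertainty, so no control law can prevent its exponential growth.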
Example: Consider the scalar system
ẋ = x + [1 + ∆(x)]u,
where the uncertainty is bounded as |∆(x)| ≤ C for some C ≥ 1. The system is not stabilizable
since ∆(x) could be −1, in which case the system is not affected by any control. When C > 1, the
uncertainty ∆(x) may be such that the sign of 1 + ∆(x) is uncertain, and therefore any control
introduced may have an adverse effect, since it may cause the state to grow out of bound more
quickly. In fact, whenever there is a large multiplicative uncertainty associated with the
control input, no control is the best choice, and the uncertain system becomes unstabilizable
if any control is needed. It is worth noting that the first subsystem in Example ?? becomes
this example if ∆(x1, x2) = x1 + ∆(x1)x2.
Example: Consider the scalar system
ẋ = ∆(x) + u²,
where the uncertainty is bounded as |∆(x)| ≤ 1. The system is not stabilizable since, no matter
what choice is made for u, the control contribution to ẋ is always unidirectional (non-negative). In
fact, any scalar uncertain system is not stabilizable if the designer cannot make ẋ both
positive and negative at will through the choice of u (specifically, through choosing
robust control to dominate all possible uncertainties).
Example: Consider the system
ẋ1 = ∆11x1 + x2 + ∆13x3
ẋ2 = ∆21x1 + ∆22x2 + x3
ẋ3 = u,
where the uncertain terms ∆ij are independent but bounded by constants Cij > 0. The system
is not stabilizable for many sets of constants Cij. To see this conclusion, consider the
simplest case in which the uncertainties are time-invariant and state-independent. In this case,
the transfer function between u and x1 is
X1(s)/U(s) = (∆13(s − ∆22) + 1) / (s(s − ∆11)(s − ∆22) − ∆21 s),
and the controllability matrix is
C = [ 0   ∆13   ∆11∆13 + 1
      0   1     ∆21∆13 + ∆22
      1   0     0 ].
The zero z of the transfer function and the determinant of the controllability matrix are, respec-
tively,
z = −1/∆13 + ∆22,  and  det(C) = ∆13²∆21 + ∆13∆22 − ∆11∆13 − 1.
If det(C) = 0, the system becomes uncontrollable due to a pole-zero cancellation, and the
cancellation may occur in the right half of the s-plane. Uncontrollability due to an unstable
pole-zero cancellation implies that the system cannot be stabilized. For the system under
consideration, the presence of the uncertainty ∆13, of potentially large size, implies that this
kind of instabilizability may arise unless certain size limitations, in terms of the bound of
∆13, are imposed on the maximum magnitudes of ∆11, ∆21 and ∆22. Relationships between the
bounding functions of the uncertainties can be found through robust control design to guarantee
both stabilizability and robust stability. There are many other uncertain systems in which an
unstable, uncontrollable pole-zero cancellation may occur.
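The determinant formula can be checked numerically (a sketch; the numerical values of the ∆ij below are arbitrary illustrations):

```python
import numpy as np

# Controllability matrix C = [B, AB, A^2 B] for the uncertain chain above,
# compared against det(C) = d13^2*d21 + d13*d22 - d11*d13 - 1.
def ctrb_det(d11, d13, d21, d22):
    A = np.array([[d11, 1.0, d13],
                  [d21, d22, 1.0],
                  [0.0, 0.0, 0.0]])
    B = np.array([0.0, 0.0, 1.0])
    C = np.column_stack([B, A @ B, A @ A @ B])
    return np.linalg.det(C)

# the closed-form determinant matches the numerical one
d11, d13, d21, d22 = 0.3, 0.7, -0.4, 1.2
formula = d13**2 * d21 + d13 * d22 - d11 * d13 - 1.0
assert np.isclose(ctrb_det(d11, d13, d21, d22), formula)

# choosing d22 = 1/d13 with d11 = d21 = 0 makes det(C) = 0: uncontrollable
assert np.isclose(ctrb_det(0.0, 0.5, 0.0, 2.0), 0.0)
```

In the degenerate case the zero z = −1/∆13 + ∆22 cancels a pole, so an admissible uncertainty can destroy controllability exactly as the text argues.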
Although the dynamics of the above examples are simple, they show the existence of unstabilizable
systems and, more importantly, provide intuitive explanations of what may cause systems
to be unstabilizable. Specifically, there are two categories: loss of controllability, and a
control contribution to the differential equation that is either of unknown sign or only
unidirectional (as shown in the second and third examples). In the first and last examples, the two
systems have an isolated subsystem or a pole-zero cancellation and therefore are uncontrollable.
As a result of the above examples, it is crucial to identify stabilizable uncertain systems and
to design robust control for those systems. The aim of robust control theory is to identify the
class of all stabilizable uncertain systems and to provide stabilizing controls that guarantee
the desired performance.

The ultimate objective of robust control theory for nonlinear uncertain systems is twofold.
First, if necessary, determine the least requirements, called structural conditions, on the
system (either in terms of system structure or location of uncertainty) such that it can
be stabilized or controlled. Second, find procedures under which robust control u can be
systematically designed. The key issue in the design is the search for Lyapunov functions
and their associated robust controllers (which may differ for achieving various types
of performance).
4 Back-Stepping Design Procedure
The backstepping design procedure can be seen from the following simple example.
Example: Consider the second-order system
ẋ1 = x2, ẋ2 = u.
This system is linear and consists of two cascaded integrators. A linear stabilizing control
can be designed by solving a simple Lyapunov equation, and the Riccati equation can be used
to design robust control if there are linearly bounded uncertainties. However, those procedures
do not apply to nonlinear systems, since they depend on linear matrix equations. Here, we
start with an intuitive design that can be extended later to nonlinear systems.
From the second equation, we see that u can drive x2 anywhere. For the first equation,
if x2 were a control variable, an obvious stabilizing control would be x2 = −x1. Since x2
is not a control but a state variable, the equation x2 = −x1 does not make sense as a control
law. To distinguish the state variable x2 and the control designed for x2 from the actual
control u, let us call the control designed for x2 the fictitious control and denote it by
x2^d = −x1. Although the fictitious control is not implementable, we can rewrite the first
equation as
ẋ1 = −x1 + (x2 + x1) = −x1 + (x2 − x2^d).
This simple manipulation reveals intuitively that stabilization of the first equation may be
achieved if we can make x2 − x2^d = x2 + x1 converge to zero. Hence, the fictitious control
x2^d can be viewed as the desired trajectory for the state variable x2. Recall that, in the
second equation, the control u can be designed to drive x2 anywhere. The problem of making
x2 track x2^d is equivalent to making the new, translated state variable z2 = x2 − x2^d
converge to zero (that is, a stabilization problem). The dynamics of z2 can be found as
follows:
ż2 = ẋ2 − ẋ2^d = ẋ2 + ẋ1 = u + x2.
Obviously, the control u = −x2 − z2 = −x2 − (x2 + x1) guarantees asymptotic stability
of z2. Once z2 = x2 + x1 converges to zero, x1 will approach zero by the design of x2^d in
ẋ1 = −x1 + z2 (which is stable when z2 = 0), and consequently x2 also goes to zero. Therefore,
the overall system is asymptotically stable.
This intuitive argument for stability can be verified by a simple Lyapunov proof. Choosing
the Lyapunov function V = x1² + z2², one can easily show that the control u = −x2 − (x1 + x2)
yields global asymptotic stability. In fact, the Lyapunov function is the sum of Lyapunov
functions for the subsystems of the states x1 and z2.
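The argument can be sketched numerically (a minimal Euler simulation; the initial conditions are arbitrary):

```python
# Euler simulation of the double integrator x1' = x2, x2' = u under the
# backstepping control u = -x2 - (x1 + x2); V = x1^2 + z2^2 with
# z2 = x1 + x2 should decrease to zero along the trajectory.
dt = 1e-3
x1, x2 = 3.0, -1.0
V_prev = float("inf")
for _ in range(int(15.0 / dt)):
    u = -x2 - (x1 + x2)                  # backstepping control law
    x1, x2 = x1 + dt * x2, x2 + dt * u
    V = x1**2 + (x1 + x2)**2             # composite Lyapunov function
    assert V <= V_prev + 1e-12           # V non-increasing along the flow
    V_prev = V
assert abs(x1) < 1e-3 and abs(x2) < 1e-3   # both states converge to zero
```

The simulation confirms both claims at once: V is monotonically non-increasing, and x1, x2 converge to the origin.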
The control in this example is designed by working sequentially through the two integrators.
In the process, a fictitious control is designed and a state transformation is performed in which
the fictitious control is differentiated. Such a design is called a recursive design since, by the
transformation, the design of the fictitious control is embedded into the actual control design.
The design is also called backstepping or backward recursive design because the direction in
which the sequential design proceeds is opposite to the direction of the signal flow graph of
the system, that is, the direction in which physical information flows within the system.
This approach, which obviously works systematically for multiple-integrator systems, was
realized in the sixties, but applications of its extensions to nonlinear control, adaptive
control, and robust control have been developed only in the past several years. Mathematically,
the design procedure can be generalized and applied to nonlinear systems for the
following reasons. First, by introducing a fictitious control variable for a given subsystem, its
dynamics satisfy locally the matching conditions with respect to the fictitious control and
therefore can be compensated. Second, the state transformation makes the difference between
the dynamics of the fictitious control and its corresponding state variable equivalently matched,
and therefore it can be compensated. Finally, sub-Lyapunov functions can easily be found for all
subsystems since they are of first order, and the overall Lyapunov function is simply the sum
of the sub-Lyapunov functions, by which stability of the overall system can be concluded.