IEEE TRANSACTIONS ON AUTOMATIC CONTROL
This paper studies the finite-time stability of stochastic nonlinear systems in the Lyapunov sense. First, a lemma ensuring the existence and uniqueness of the global solution of a stochastic nonlinear system is proven under the local Lipschitz condition and the local boundedness, with respect to the product measure, of the integral of the diffusion operator (Lemma 2.1). Then, the definition of finite-time stability in probability for stochastic nonlinear systems is presented (Definition 3.1). The contributions of the present paper are twofold. First, a sufficient condition (Theorem 3.1) for the finite-time stability in probability of stochastic nonlinear systems is derived and proved; a useful lemma (Lemma 3.1), which extends Bihari's inequality and plays an important role in the proof of our main theorem, is also presented. Second, a continuous finite-time control (Theorem 4.1) is designed to guarantee finite-time stability in probability for a class of stochastic nonlinear systems. Based on Lemma 2.1, a stochastic differential equation satisfying the conditions of Theorem 3.1 admits a unique solution for any finite initial condition. For the control design, since our proposed control falls in the family of ρ-functions, the stochastic closed-loop control system admits a unique solution.
The rest of the paper is organized as follows. In Section II, the class of stochastic nonlinear systems considered in this paper, the notation, and some definitions are formulated. Some preliminary results on existence and uniqueness are presented under the conditions that the coefficients of the stochastic nonlinear equation satisfy the local Lipschitz condition and that the integral of the diffusion operator of a radially unbounded C^2 function is locally bounded with respect to the product measure. In Section III, finite-time stability in probability for stochastic nonlinear systems is defined, and an extension of Bihari's inequality (see [16]) is derived. Then the main result, a stochastic Lyapunov theorem giving a sufficient condition for the finite-time stability in probability of stochastic nonlinear systems, is presented. A simulation example is given to illustrate the theoretical analysis. In Section IV, we use the stochastic Lyapunov theorem to show that a state feedback control can be designed to stabilize a class of stochastic closed-loop systems in finite time. Finally, concluding remarks are given in Section V.
II. PRELIMINARY RESULTS
In this paper, we will consider an n-dimensional stochastic nonlinear system of the form

dx(t) = f(t, x(t)) dt + g(t, x(t)) dw(t),  x(0) = x_0 ∈ R^n,  t ≥ 0,   (2.1)

where x(·) ∈ R^n is the state process, w(·) is an m-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with the augmented filtration {F_t}_{t≥0} generated by w(·), and the functions f : R_+ × R^n → R^n and g : R_+ × R^n → R^{n×m}, also called the coefficients of the equation, are Borel measurable and satisfy f(t, 0) = 0 and g(t, 0) = 0 for all t ≥ 0.
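For intuition, paths of a system of the form (2.1) can be simulated with the simplest strong scheme, Euler–Maruyama. The sketch below is our own illustration (the function name, step count, and scalar Brownian motion are arbitrary choices, not part of the paper):

```python
import numpy as np

def euler_maruyama(f, g, x0, T=1.0, n_steps=1000, rng=None):
    """One Euler-Maruyama path of dx = f(t, x) dt + g(t, x) dw on [0, T]."""
    rng = np.random.default_rng(0) if rng is None else rng
    dt = T / n_steps
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    t = 0.0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))      # Brownian increment, variance dt
        x = x + f(t, x) * dt + g(t, x) * dw
        t += dt
        path.append(x.copy())
    return np.array(path)

# Sanity check on a noise-free linear system dx = -x dt: the scheme
# should closely track the exact solution e^{-t}.
path = euler_maruyama(lambda t, x: -x, lambda t, x: 0.0 * x, [1.0])
```

The scheme converges with strong order 1/2 in general, which is sufficient for the qualitative pictures used later in the paper.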
For notational convenience, here and in the sequel, R^n denotes the n-dimensional Euclidean space, R_+ denotes the set of all nonnegative real numbers, and R^{n×m} denotes the space of n × m real matrices. For a vector x ∈ R^n, |x| denotes the Euclidean norm |x| = (∑_{i=1}^n x_i^2)^{1/2}. For a matrix A, ‖A‖ denotes the Frobenius norm ‖A‖ = (trace(A^T A))^{1/2}, where the superscript "T" denotes the transpose of a vector or a matrix. a ∧ b means the minimum of a and b, while a ∨ b means the maximum.
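As a concrete check of the notation (our own illustration, not from the paper), both norms can be computed directly:

```python
import numpy as np

x = np.array([3.0, 4.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

eucl = np.sqrt(np.sum(x ** 2))        # |x| = (sum_i x_i^2)^(1/2)
frob = np.sqrt(np.trace(A.T @ A))     # ||A|| = (trace(A^T A))^(1/2)
```

`np.linalg.norm` reproduces both: for a vector it returns the Euclidean norm, and for a matrix it defaults to the Frobenius norm.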
A function V : R^n → R is said to be C^k if it is k-times continuously differentiable. ∂V/∂x denotes the gradient of a C^1 function V, which we always write as a row vector. For a C^2 function V, ∂²V/∂x² denotes the Hessian of V, the n × n matrix of second-order partial derivatives of V. A function V : R^n → R is said to be positive definite if V(0) = 0 and V(x) > 0 for all x ∈ R^n \ {0}.
Since we have assumed that f(t, 0) = 0 and g(t, 0) = 0 for all t ≥ 0, equation (2.1) admits a trivial zero solution. It is well known that, for a stochastic differential equation to have a unique global solution for any given initial data, the coefficients of the equation are generally required to satisfy the linear growth condition and the local Lipschitz condition. We will impose the following assumption.
Assumption 1: Both f(t, x) and g(t, x) satisfy the local Lipschitz condition; that is, for any R > 0, there exists a constant C_R ≥ 0 such that

|f(t, x_1) − f(t, x_2)| ∨ |g(t, x_1) − g(t, x_2)| ≤ C_R |x_1 − x_2|

for all t ∈ R_+ and |x_1| ∨ |x_2| ≤ R.
If Assumption 1 holds, then for any initial value x_0 ∈ R^n there is a unique maximal local solution to (2.1) for all t ∈ [0, σ_e), where σ_e is the explosion time (see [15, p. 95]). To show that the solution is global, we only need to show that σ_e = ∞ a.s. For a C^2 function V, let LV denote the diffusion operator of V with respect to equation (2.1), defined by

LV(x) = (∂V(x)/∂x) f(t, x) + (1/2) trace[ g^T(t, x) (∂²V(x)/∂x²) g(t, x) ].
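For a scalar system (n = m = 1) the operator reduces to LV(x) = V′(x) f(t, x) + (1/2) g(t, x)² V″(x). A small helper (hypothetical names; the derivatives are supplied by the caller) makes this concrete:

```python
def diffusion_operator_1d(dV, d2V, f, g):
    """LV for the scalar SDE dx = f(t, x) dt + g(t, x) dw:
    LV(x) = V'(x) * f(t, x) + 0.5 * g(t, x)**2 * V''(x)."""
    def LV(t, x):
        return dV(x) * f(t, x) + 0.5 * g(t, x) ** 2 * d2V(x)
    return LV

# With V(x) = x^2, f = -x, g = x:  LV(x) = 2x * (-x) + 0.5 * x^2 * 2 = -x^2.
LV = diffusion_operator_1d(lambda x: 2 * x, lambda x: 2.0,
                           lambda t, x: -x, lambda t, x: x)
```

Note the Hessian term: it is precisely what distinguishes LV from the Lie derivative used in deterministic Lyapunov analysis.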
Regarding the existence and uniqueness of global solutions to (2.1), the following lemma gives a positive answer under appropriate conditions. Although some authors [9], [23] have proved existence and uniqueness results based on a special C^2 function V, we point out that our result refines and improves, and in some cases generalizes, the existing results.
Lemma 2.1: Let Assumption 1 hold. Suppose that there exists a C^2 function V : R^n → R_+ whose diffusion operator LV with respect to (2.1) satisfies

E ∫_0^{σ_k ∧ T} LV(x(s)) ds = ∫_0^T E[ I_{s ≤ σ_k} LV(x(s)) ] ds ≤ C_T,   (2.2)

for the stopping times σ_k = inf{t : |x(t)| ≥ k}, k ∈ N, and any T > 0, where C_T ≥ 0 is a constant depending on T only. If the function V is radially unbounded, that is,

lim_{|x|→∞} V(x) = ∞   (2.3)

holds, then there exists a unique global solution to (2.1) for any x_0 ∈ R^n.
Proof: We will show that σ_e = ∞ a.s. For each integer k ∈ N, we define a stopping time

σ_k = inf{ t : |x(t)| ≥ k }
with the traditional convention inf ∅ = ∞, where ∅ denotes the empty set. Clearly, σ_k is increasing in k. If we set σ_∞ = lim_{k→∞} σ_k, then σ_∞ ≤ σ_e a.s. Furthermore, if we can show that σ_∞ = ∞ a.s., then σ_e = ∞ a.s., which implies that (2.1) admits a unique global solution for all t ≥ 0. Let T > 0 be arbitrary and define σ_k^T = σ_k ∧ T. For any t ≥ 0, by Itô's formula, we have
V(x(t ∧ σ_k^T)) = V(x(0)) + ∫_0^{t ∧ σ_k^T} LV(x(s)) ds + ∫_0^{t ∧ σ_k^T} (∂V(x(s))/∂x) g(s, x(s)) dw(s).   (2.4)
Since ∂V/∂x and g(·, x) are bounded whenever x is restricted to a compact set, the local martingale term in (2.4) is a martingale for t ∈ [0, σ_k ∧ T]. Thus, by (2.2), we have

EV(x(σ_k^T)) = V(x(0)) + E ∫_0^{σ_k^T} LV(x(s)) ds ≤ V(x(0)) + C_T < ∞.   (2.5)
Note that for every ω ∈ {σ_k ≤ T},

|x(σ_k^T)| = |x(σ_k)| = k.
Hence it follows from (2.5) that

P(σ_k ≤ T) inf_{|x|=k} V(x) ≤ E[ I_{σ_k ≤ T} V(x(σ_k)) ] ≤ V(x(0)) + C_T < ∞.   (2.6)

For any given k, the set {x ∈ R^n : |x| = k} is compact, so the continuous function V attains a minimum on it. By (2.3), it is not hard to see that lim_{k→∞} inf_{|x|=k} V(x) = ∞. Letting k → ∞ on both sides of (2.6) gives lim_{k→∞} P(σ_k ≤ T) = 0, and hence P(σ_∞ ≤ T) = 0. Since T > 0 is arbitrary, we must have P(σ_∞ < ∞) = 0, which implies that P(σ_∞ = ∞) = 1, and the required assertion follows.
Remark 2.1: The existence of a unique solution for a stochastic nonlinear system is of fundamental theoretical importance and is also the basis for dealing with practical control problems. However, this aspect has not drawn enough attention in the design of stochastic control systems.
Remark 2.2: Obviously, for a function V ∈ C^2(R^n, R_+), condition (2.2) holds if its diffusion operator satisfies LV ≤ 0 or, more generally, if LV is bounded above or, for example, is controlled by a nonnegative continuous function η(·) such that ∫_0^∞ η(t) dt < ∞. A typical case in which (2.3) holds is that there exists a class K_∞ function µ such that V(x) ≥ µ(|x|).
Definition 2.1: A function µ : R_+ → R_+ is said to be a class K function if it is continuous, strictly increasing, and µ(0) = 0. A class K function µ is said to belong to class K_∞ if µ(r) → ∞ as r → ∞.
III. FINITE-TIME STABILITY THEOREM FOR STOCHASTIC
NONLINEAR SYSTEM
In this section, we shall propose and study a finite-time stability theorem for the stochastic nonlinear system (2.1). The technique used is based on stochastic Lyapunov theory. It is worth noting that, although some existing results on stochastic Lyapunov theory for stochastic nonlinear systems can be found in Has'minskii [9], Kushner [13], Mao [16], and Deng et al. [4], our stochastic Lyapunov theory is an extension to the finite-time stability case. In fact, the following theorem can be seen as the stochastic counterpart of the finite-time stability theory for deterministic systems in Bhat and Bernstein [2]. To the best of our knowledge, it is the first theorem on finite-time stability of stochastic nonlinear systems.
Let us first present a precise definition of finite-time stability in probability for the trivial solution of (2.1). The following definition is motivated by the definitions of finite-time stability and stochastic Lyapunov stability in [2] and [16].
Definition 3.1: The trivial solution of equation (2.1) is said to be finite-time stable in probability if equation (2.1) admits a unique solution for any initial condition x_0 ∈ R^n, denoted by x(t; x_0), and moreover the following statements hold:

(i) Finite-time asymptotic stability in probability: for any initial condition x_0 ∈ R^n \ {0}, the stochastic settling time τ = inf{t : x(t; x_0) = 0} is finite almost surely, that is, P(τ < ∞) = 1;

(ii) Lyapunov stability in probability: for every pair of ε ∈ (0, 1) and r > 0, there exists a δ = δ(ε, r) > 0 such that

P{ |x(t; x_0)| < r for all t ≥ 0 } ≥ 1 − ε,   (3.1)

whenever |x_0| < δ.
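Part (i) can be probed by Monte Carlo: simulate many paths and count how often the origin is reached by a horizon T. The sketch below uses the paper's later example (3.21) with c_1 = −2, c_2 = 1, c_3 = 2, β = 1/3; the absorption tolerance 1e-3 and all names are our own choices, and x^β is read as sign(x)|x|^β:

```python
import numpy as np

def hits_zero_by(x0, T=3.0, dt=1e-3, rng=None):
    """One Bernoulli sample of the event {tau <= T}, with the origin
    treated as reached once |x| < 1e-3."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = float(x0), 0.0
    while t < T:
        if abs(x) < 1e-3:
            return True
        drift = -2.0 * x - np.sign(x) * abs(x) ** (1.0 / 3.0)
        x += drift * dt + 2.0 * x * rng.normal(0.0, np.sqrt(dt))
        t += dt
    return False

rng = np.random.default_rng(7)
p_hat = np.mean([hits_zero_by(1.0, rng=rng) for _ in range(50)])  # estimate of P(tau <= 3)
```

As T grows, the empirical fraction p_hat should approach P(τ < ∞), which Definition 3.1(i) requires to equal one.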
Remark 3.1: Note that, for a given initial value x_0 ∈ R^n, if equation (2.1) admits a unique solution x(t; x_0), then x(·; x_0) is continuous; therefore, the finite-time asymptotic stability in probability in Definition 3.1 implies that P{x(τ; x_0) = 0} = 1. Besides, if x(τ; x_0) = 0 holds a.s., then, by uniqueness, x(t; x_0) = 0 holds a.s. for all t ≥ τ. In this case, inequality (3.1) is equivalent to

P{ sup_{0 ≤ t ≤ τ} |x(t; x_0)| < r } ≥ 1 − ε.   (3.2)
Now, we are in a position to present the main result on finite-time stability of stochastic nonlinear systems.

Theorem 3.1: Let Assumption 1 hold. If there exist a C^2 function V : R^n → R_+, class K_∞ functions µ_1 and µ_2, and real numbers C > 0 and 0 < γ < 1 such that, for all x ∈ R^n and t ≥ 0,

µ_1(|x|) ≤ V(x) ≤ µ_2(|x|),   (3.3)

LV(x) ≤ −C (V(x))^γ,   (3.4)

then the trivial solution of the stochastic system (2.1) is finite-time stable in probability.

To proceed, we need the following lemmas.
Lemma 3.1: Let 0 < γ < 1 and λ > 0. Assume that there exists a continuous function h : [0, ∞) → [0, ∞) with h(0) > 0 such that, for any 0 ≤ u ≤ t,

h(t) − h(u) ≤ −λ ∫_u^t (h(s))^γ ds.   (3.5)

Then there exists a real number T > 0 such that

h(t) ≤ [ h(0)^{1−γ} − λ(1 − γ)t ]^{1/(1−γ)},  ∀t ∈ [0, T].   (3.6)
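The bound (3.6) can be sanity-checked numerically in the extremal case h′ = −λh^γ, whose Euler discretization should track the closed-form right-hand side until it hits zero (the parameter values below are arbitrary illustrations, not from the paper):

```python
lam, gamma, h0 = 1.0, 0.5, 1.0
T0 = h0 ** (1 - gamma) / (lam * (1 - gamma))      # zero-crossing of l: here 2.0

def l(t):
    """Right-hand side of (3.6), clipped at zero past T0."""
    base = max(h0 ** (1 - gamma) - lam * (1 - gamma) * t, 0.0)
    return base ** (1.0 / (1.0 - gamma))

def euler_h(t_end, dt=1e-4):
    """Forward-Euler solution of h' = -lam * h**gamma, h(0) = h0."""
    h, t = h0, 0.0
    while t < t_end:
        h = max(h - lam * h ** gamma * dt, 0.0)   # clamp to keep h >= 0
        t += dt
    return h
```

For γ = 1/2 the closed form is l(t) = (1 − t/2)², so l(1) = 0.25 and l vanishes for all t ≥ 2; the fractional power 0 < γ < 1 is exactly what forces h to reach zero in finite time rather than decay exponentially.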
Proof: Let T_0 = inf{t ≥ 0 : h(t) = 0}. Since h(0) > 0 and h(t) ≥ 0 for all t > 0, it follows that 0 < T_0 ≤ ∞. We now define

l(t) = [ h(0)^{1−γ} − λ(1 − γ)t ]^{1/(1−γ)}.   (3.7)

It is not hard to verify that

l(t) − l(u) = −λ ∫_u^t (l(s))^γ ds   (3.8)

for any 0 ≤ u ≤ t ∈ [0, T_0 ∧ h(0)^{1−γ}/(λ(1−γ))] := [0, T̄_0]. Note that l(0) = h(0) and l(t) ≥ 0 for t ∈ [0, T̄_0]. Let S = {t ∈ [0, T̄_0] : h(t) > l(t)}. If S = ∅, then the required assertion
follows by taking T = T̄_0. Suppose now that there exists a t̄ ∈ S. This means that 0 < t̄ ≤ T̄_0 and h(t̄) > l(t̄) ≥ 0. Let t_* = inf{t < t̄ : h(s) > l(s) for all s ∈ (t, t̄]}. By the continuity of h(·) and l(·), we have h(t_*) = l(t_*), and thus h(t) > l(t) > 0 for any t ∈ (t_*, t̄). On the other hand, by (3.5), it is clear that h(t) ≤ p(t) for all t ∈ [t_*, T̄_0], where

p(t) = h(t_*) − λ ∫_{t_*}^t (h(s))^γ ds.   (3.9)

For any t ∈ [t_*, t̄), we have

dp(t)/dt = −λ(h(t))^γ,  p(t) > 0,

and

dl(t)/dt = −λ(l(t))^γ,  l(t) > 0.

Note that

(p(t)/l(t))′ = [p′(t) l(t) − l′(t) p(t)] / l(t)²
             = [−λ(h(t))^γ l(t) + λ(l(t))^γ p(t)] / l(t)²
             ≥ λ[(l(t))^γ h(t) − (h(t))^γ l(t)] / l(t)²
             = λ(l(t))^γ (h(t))^γ [(h(t))^{1−γ} − (l(t))^{1−γ}] / l(t)²
             ≥ 0,

where the first inequality uses p(t) ≥ h(t). Thus p(t)/l(t) is increasing on [t_*, t̄). Since p(t_*)/l(t_*) = h(t_*)/l(t_*) = 1, we immediately obtain p(t) ≥ l(t) for t ∈ [t_*, t̄). By this, (3.8), and (3.9), we have

∫_{t_*}^t (h(s))^γ ds ≤ ∫_{t_*}^t (l(s))^γ ds,  ∀t ∈ [t_*, t̄),

which implies that there exists at least one t ∈ (t_*, t̄) such that h(t) ≤ l(t); this is a contradiction. The proof is complete.
Remark 3.2: Note that Lemma 3.1 is an extension of Bihari's inequality. In this lemma, T can be chosen as

T = T_0 ≤ h(0)^{1−γ} / (λ(1 − γ)).   (3.10)

Indeed, it is obvious that l(t) = 0 at t = h(0)^{1−γ}/(λ(1−γ)). If T_0 > h(0)^{1−γ}/(λ(1−γ)), then h(h(0)^{1−γ}/(λ(1−γ))) > 0 by the definition of T_0, which contradicts (3.6) because l(h(0)^{1−γ}/(λ(1−γ))) = 0.
Lemma 3.2: Under the assumptions of Theorem 3.1, equation (2.1) admits a unique solution for any initial value x_0 ∈ R^n.

Proof: The conclusion is a direct consequence of Lemma 2.1 and Remark 2.2.
Proof of Theorem 3.1: By Lemma 3.2, equation (2.1) admits a unique solution, denoted by x(t; x_0), for any initial value x_0 ∈ R^n. Let us first prove Lyapunov stability in probability for the stochastic system (2.1). Let 0 < ε < 1 and r > 0 be arbitrary. Define σ_r = inf{t : |x(t; x_0)| > r}. Applying Itô's formula gives

EV(x(t ∧ σ_r)) = V(x_0) + E ∫_0^{t ∧ σ_r} LV(x(s)) ds ≤ V(x_0),   (3.11)

where we have used (3.4) and the fact that ∫_0^{t ∧ σ_r} (∂V/∂x) g(s, x(s)) dw(s) is a martingale. By (3.3), we have
P(σ_r ≤ t) µ_1(r) ≤ E[ I_{σ_r ≤ t} V(x(σ_r)) ] ≤ EV(x(t ∧ σ_r)) ≤ V(x_0) ≤ µ_2(|x_0|).   (3.12)

Taking δ = µ_2^{−1}(µ_1(r)ε), we obtain from (3.12) that P(σ_r ≤ t) ≤ ε whenever |x_0| ≤ δ. Letting t → ∞, we get P(σ_r < ∞) ≤ ε, which implies that

P( sup_{t ≥ 0} |x(t; x_0)| ≤ r ) ≥ 1 − ε,

as required.
We now turn our attention to proving finite-time asymptotic stability in probability. Obviously, the assertion holds for x_0 = 0, since x(t; x_0) ≡ 0. We therefore only need to show the result for x_0 ∈ R^n \ {0}. Define

τ_k = inf{ t ≥ 0 : |x(t; x_0)| ∉ (1/k, k) },

where k ∈ N satisfies 1/k < |x_0| < k. It is clear that {τ_k} is an increasing sequence of stopping times. We now set τ_∞ = lim_{k→∞} τ_k. By Itô's formula, for arbitrary 0 ≤ u ≤ t, we
have

EV(x(t ∧ τ_k)) = EV(x(u ∧ τ_k)) + E ∫_{u ∧ τ_k}^{t ∧ τ_k} LV(x(s)) ds
              = EV(x(u ∧ τ_k)) + ∫_u^t E[ I_{s ≤ τ_k} LV(x(s)) ] ds.   (3.13)
On one hand, it is easily seen that

EV(x(t ∧ τ_k)) − EV(x(u ∧ τ_k))
  = E[ I_{t ≤ τ_k} V(x(t)) ] − E[ I_{u ≤ τ_k} V(x(u)) ] + E[ (I_{t > τ_k} − I_{u > τ_k}) V(x(τ_k)) ]
  ≥ E[ I_{t ≤ τ_k} V(x(t)) ] − E[ I_{u ≤ τ_k} V(x(u)) ]   (3.14)
since V is positive definite. On the other hand, for any s ∈ [0, ∞), by (3.3) and the definition of τ_k, we can derive that

I_{s ≤ τ_k} µ_1(1/k) ≤ I_{s ≤ τ_k} µ_1(|x(s)|) ≤ I_{s ≤ τ_k} V(x(s)) ≤ I_{s ≤ τ_k} µ_2(|x(s)|) ≤ I_{s ≤ τ_k} µ_2(k),   (3.15)
which, together with (3.4), gives

E[ I_{s ≤ τ_k} LV(x(s)) ] ≤ −C E[ I_{s ≤ τ_k} (V(x(s)))^γ ]
  = −C E[ ( I_{s ≤ τ_k} V(x(s)) )^γ ]
  ≤ −C (µ_1(1/k))^γ / (µ_2(k))^γ ( E[ I_{s ≤ τ_k} V(x(s)) ] )^γ.   (3.16)

Let C_k = C (µ_1(1/k))^γ / (µ_2(k))^γ. It is obvious that C_k is strictly decreasing and converges to zero as k → ∞. By (3.13), (3.14), and (3.16),
and converges to zero as k → ∞. By (3.13), (3.14) and (3.16),
we can deduce that
E
I t≤τ kV (x(t))
− E
I u≤τ kV (x(u))
≤ −C k tu
EI s≤τ kV (x(s))γds. (3.17)
Now define h(t) = E[ I_{t ≤ τ_k} V(x(t)) ]. It then follows from Lemma 3.1 and Remark 3.2 that there exists a sequence

0 < T_k ≤ (V(x_0))^{1−γ} / (C_k(1 − γ))

such that

h(T_k) = E[ I_{T_k ≤ τ_k} V(x(T_k)) ] = 0,
which implies that P(T_k ≤ τ_k) = 0, by the definition of τ_k and the fact that V is positive definite. Note that T_k ↑ ∞ as k → ∞. By the dominated convergence theorem, we have P(τ_∞ = ∞) = 0, which also implies that P(τ < ∞) = 1, where

τ = inf{ t : x(t; x_0) = 0 },

and the required conclusion follows. The proof is thus completed.
Remark 3.3: We note that, in the proof of Theorem 3.1, assumptions (3.3) and (3.4) guarantee finite-time stability in probability for the trivial solution of (2.1), provided that system (2.1) has a unique solution.
When the coefficients of equation (2.1) are continuous and satisfy certain ρ-conditions, we can establish the following existence theorem for a unique solution, which follows from Theorem 170 in Situ [21]. We state it as a lemma and will use it in the later analysis. Indeed, this lemma is a special case of Theorem 170 in [21].
Lemma 3.3: Assume that f(t, x) and g(t, x) are continuous in x. Assume also that there exist two nonnegative functions c_1(t), c_2(t) and a family of ρ-functions such that, P-a.s., for each 0 < T < ∞,

|f(t, x)| ≤ c_1(t)(1 + |x|),   (3.18)

|g(t, x)|² ≤ c_1(t)(1 + |x|²),   (3.19)

2⟨x_1 − x_2, f(t, x_1) − f(t, x_2)⟩ + |g(t, x_1) − g(t, x_2)|² ≤ c_2(t) ρ_T(|x_1 − x_2|²),  t ∈ [0, T],   (3.20)
[Figure: state x plotted against time t over [0, 2]; both trajectories converge to zero.]
Fig. 1. Simulation results for (3.21) with initial values x_0 = 1 and x_0 = −1.
where ∫_0^T c_i(t) dt < ∞, i = 1, 2, and ρ_T(u) ≥ 0 for u ≥ 0 is strictly increasing, continuous, and concave with ∫_{0+} du/ρ_T(u) = ∞. Then, for any given initial value x_0 ∈ R^n, (2.1) has a unique strong solution.
Example 3.1: Consider a one-dimensional stochastic autonomous system of the form

dx(t) = f(x(t)) dt + g(x(t)) dw(t),  x_0 ≠ 0,   (3.21)

where

f(x) = c_1 x − c_2 x^β,  c_2 > 0,  0 < β < 1,
g(x) = c_3 x.
It is easy to check that f and g satisfy all the assumptions of Lemma 3.3 with c_1(t) = [|c_1| + c_2] ∨ c_3², c_2(t) = 2|c_1| + c_3², and ρ_T(u) = ρ(u) = u. In fact, it is obvious that c_2(x_1 − x_2)(x_2^β − x_1^β) ≤ 0. So there is a unique strong solution to (3.21) by Lemma 3.3. Consider the Lyapunov function V(x) = |x|^α, α ≥ 2. It is not hard to compute

LV(x) = αc_1|x|^α − αc_2|x|^{α−2} x^{1+β} + (1/2) c_3² α(α − 1)|x|^α.
Let ℵ denote the set of all odd numbers in N. Now set

β = p/q,  p, q ∈ ℵ,  p < q,
c_1 = −(1/2) c_3² (α − 1),
γ = (α − 1 + β)/α.

We thus get

LV(x) = −αc_2 |x|^{αγ} = −αc_2 (V(x))^γ,

and conclude that the trivial solution of (3.21) is finite-time stable in probability.
Simulations have been carried out for system (3.21) with c_1 = −2, c_2 = 1, c_3 = 2, α = 2, β = 1/3, in order to verify Theorem 3.1. Figure 1 shows that the state of (3.21) converges to zero in finite time regardless of whether the initial state is positive or negative.
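The Fig. 1 experiment can be reproduced with an Euler–Maruyama discretization of (3.21). This is our own sketch (the step size, seeds, and the 1e-3 absorption tolerance are arbitrary choices), with x^β read as sign(x)|x|^β so that negative states are handled:

```python
import numpy as np

def simulate_321(x0, c1=-2.0, c2=1.0, c3=2.0, beta=1.0 / 3.0,
                 T=2.0, dt=1e-3, seed=0):
    """Euler-Maruyama run of dx = (c1*x - c2*x^beta) dt + c3*x dw.
    Returns (final state, settling time); the origin is treated as
    absorbing once |x| < 1e-3, and the settling time is T if 0 is
    not reached on [0, T]."""
    rng = np.random.default_rng(seed)
    x, t = float(x0), 0.0
    while t < T:
        if abs(x) < 1e-3:
            return 0.0, t
        drift = c1 * x - c2 * np.sign(x) * abs(x) ** beta
        x += drift * dt + c3 * x * rng.normal(0.0, np.sqrt(dt))
        t += dt
    return x, T

x_plus, tau_plus = simulate_321(1.0, seed=1)     # Fig. 1, upper trajectory
x_minus, tau_minus = simulate_321(-1.0, seed=2)  # Fig. 1, lower trajectory
x_det, tau_det = simulate_321(1.0, c3=0.0)       # noise-free comparison
```

In the noise-free run the drift −2x − x^{1/3} dominates near the origin, so the state is driven to zero well before t = 2, matching the finite-time behavior seen in Fig. 1.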
IV. FINITE-TIME STABILIZATION OF A CLASS OF
STOCHASTIC NONLINEAR SYSTEMS
In this section, we will apply Theorem 3.1 and show that a state feedback control can be found to guarantee finite-time stability in probability for a class of stochastic closed-loop control systems. To this end, we consider a stochastic system of the following form:

dx(t) = [f(t, x(t)) + u] dt + g(t, x(t)) dw(t),  x(0) = x_0 ∈ R^n \ {0},   (4.1)

where x ∈ R^n is the system state, u ∈ R^n is the state feedback control of the form u = u(x), and w(·) is a one-dimensional Brownian motion. Assume that the coefficients f(t, x) = (f_1(t, x), ..., f_n(t, x))^T and g(t, x) = (g_1(t, x), ..., g_n(t, x))^T are continuous in x for all t ∈ R_+ and satisfy f(t, 0) = 0 and g(t, 0) = 0. The problem is to find a state feedback control u such that (4.1) is finite-time stable in probability for any nonzero initial value.
We note that, when g(t, x) ≡ 0, system (4.1) reduces to a deterministic closed-loop system. This class includes the strict-feedback system and the MIMO system. Many stabilization control designs for stochastic nonlinear systems have been proposed based on Lyapunov-type functions, including backstepping design [4], [20], [7] and adaptive backstepping design [22]. However, it remains an open problem whether these existing design methods can render system (4.1) finite-time stable in probability. For the full state feedback control case, we present a stabilization design that stabilizes the system in one step rather than in a step-by-step fashion.
Here, we first give an elementary inequality, which can be found in [11].

Lemma 4.1: For any x_i ∈ R, i = 1, ..., n, and 0 ≤ b ≤ 1, the following inequality holds:

( |x_1| + |x_2| + ··· + |x_n| )^b ≤ |x_1|^b + |x_2|^b + ··· + |x_n|^b.   (4.2)
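Inequality (4.2) is easy to spot-check numerically (a brute-force illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
worst_gap = 0.0
for _ in range(1000):
    x = rng.normal(size=5)                 # random vector in R^5
    b = rng.uniform(0.0, 1.0)              # exponent with 0 <= b <= 1
    lhs = np.sum(np.abs(x)) ** b           # (|x_1| + ... + |x_n|)^b
    rhs = np.sum(np.abs(x) ** b)           # |x_1|^b + ... + |x_n|^b
    worst_gap = max(worst_gap, lhs - rhs)  # should never be positive
```

The inequality is simply the subadditivity of t ↦ t^b on [0, ∞) for 0 ≤ b ≤ 1, which is used below with b = γ.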
We now state our main result in this section.
Theorem 4.1: Assume that f(t, x) and g(t, x) are continuous in x and satisfy f(t, 0) = 0 and g(t, 0) = 0 for each t ≥ 0. If there exist constants C_i ≥ 0, C̄_i ≥ 0, i = 1, ..., n, and a ρ-function such that, P-a.s.,

|f_i(t, x)| ≤ C_i |x|,   (4.3)

|g_i(t, x)|² ≤ C̄_i |x|²,   (4.4)

2⟨x_1 − x_2, f(t, x_1) − f(t, x_2)⟩ + |g(t, x_1) − g(t, x_2)|² ≤ ρ(|x_1 − x_2|²),  ∀x_1, x_2 ∈ R^n,   (4.5)

then system (4.1) can be finite-time stabilized in probability by a continuous state feedback control u.
Proof: Choose the Lyapunov function V(x) = |x|², x ∈ R^n. We now compute

LV(x) = (∂V(x)/∂x)[f(t, x) + u] + (1/2) trace[ g^T(t, x) (∂²V(x)/∂x²) g(t, x) ]
      = 2x^T [f(t, x) + u] + |g(t, x)|²
      = 2 ∑_{i=1}^n x_i f_i(t, x) + 2 ∑_{i=1}^n x_i u_i + ∑_{i=1}^n |g_i(t, x)|²
      ≤ ∑_{i=1}^n |x_i|² + ∑_{i=1}^n |f_i(t, x)|² + 2 ∑_{i=1}^n x_i u_i + ∑_{i=1}^n |g_i(t, x)|².   (4.6)
By (4.3) and (4.4), we have

∑_{i=1}^n |f_i(t, x)|² ≤ ( ∑_{i=1}^n C_i² ) |x|²   (4.7)

and

∑_{i=1}^n |g_i(t, x)|² ≤ ( ∑_{i=1}^n C̄_i ) |x|².   (4.8)
We now set

u_i = u_i(x_1, ..., x_n) = −(1/2) x_i [ 1 + ∑_{j=1}^n (C_j² + C̄_j) ] − x_i^{2γ−1},  γ = p/q,  p, q ∈ ℵ,  1/2 < γ < 1.   (4.9)
Substituting (4.7)–(4.9) into (4.6) and using Lemma 4.1 yields

LV(x) ≤ −2 ∑_{i=1}^n x_i^{2γ} ≤ −2 ( x_1² + x_2² + ··· + x_n² )^γ = −2 (V(x))^γ.   (4.10)

It is obvious that u is continuous. It now suffices to show that system (4.1) admits a unique solution.
For x̄ = (x̄_1, ..., x̄_n)^T, let

ū = (ū_1, ..., ū_n)^T,  ū_i = −(1/2) x̄_i [ 1 + ∑_{j=1}^n (C_j² + C̄_j) ] − x̄_i^{2γ−1}.

In fact, the following inequality holds:

⟨x − x̄, u − ū⟩ = ∑_{i=1}^n (x_i − x̄_i)(u_i − ū_i)
  = −(1/2) ∑_{i=1}^n |x_i − x̄_i|² [ 1 + ∑_{j=1}^n (C_j² + C̄_j) ] − ∑_{i=1}^n (x_i − x̄_i)(x_i^{2γ−1} − x̄_i^{2γ−1})
  ≤ (1/2) ∑_{i=1}^n |x_i − x̄_i|² [ 1 + ∑_{j=1}^n (C_j² + C̄_j) ],   (4.11)
which, together with (4.3)–(4.5), gives the required assertion from Lemma 3.3 if we choose the new ρ-function ρ̄(u) = ρ(u) + [1 + ∑_{j=1}^n (C_j² + C̄_j)] u. By Theorem 3.1, we therefore conclude that the trivial solution of (4.1) is finite-time stable in probability. The proof is complete.
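To see a controller of the form (4.9) in action, the scalar case n = 1 can be simulated. The drift f(t, x) = x sin t and diffusion g(t, x) = x below are our own illustrative choices satisfying (4.3) and (4.4) with C_1 = C̄_1 = 1, and γ = 3/5 (p = 3, q = 5 odd) gives the fractional-power term x^{1/5}:

```python
import numpy as np

C1, C1bar, gamma = 1.0, 1.0, 3.0 / 5.0    # gamma = p/q with p = 3, q = 5 odd

def f(t, x):
    return x * np.sin(t)                  # |f(t, x)| <= C1 * |x|

def g(t, x):
    return x                              # |g(t, x)|^2 <= C1bar * |x|^2

def u(x):
    """Feedback (4.9) for n = 1: u = -x(1 + C1^2 + C1bar)/2 - x^(2*gamma - 1),
    with the odd fractional power written as sign(x)|x|^(2*gamma - 1)."""
    return -0.5 * x * (1.0 + C1 ** 2 + C1bar) - np.sign(x) * abs(x) ** (2 * gamma - 1)

def closed_loop(x0, noisy=True, T=3.0, dt=1e-3, seed=0):
    """Euler-Maruyama run of dx = [f + u] dt + g dw; the origin is treated
    as absorbing once |x| < 1e-3.  Returns (final state, settling time)."""
    rng = np.random.default_rng(seed)
    x, t = float(x0), 0.0
    while t < T:
        if abs(x) < 1e-3:
            return 0.0, t
        dw = rng.normal(0.0, np.sqrt(dt)) if noisy else 0.0
        x += (f(t, x) + u(x)) * dt + g(t, x) * dw
        t += dt
    return x, T

x_det, tau_det = closed_loop(1.0, noisy=False)   # noise-free run settles early
```

In the noise-free run the term −x^{1/5} dominates near the origin, so the state reaches zero well before the horizon, illustrating the finite-time settling guaranteed by the theorem.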
Remark 4.1: Let us give two common examples of ρ-functions:

ρ_1(u) = K_1 u,  u ≥ 0,  K_1 > 0;   (4.12)

ρ_2(u) = K_2 arctan(u),  u ≥ 0,  K_2 > 0.   (4.13)

It can be shown that if ρ_1 and ρ_2 are ρ-functions, then ρ_3 = ρ_1 + ρ_2 is also a ρ-function (see [21]).
V. CONCLUSION

The main objective of this paper is to prove a stochastic Lyapunov theorem on finite-time stability in probability for stochastic nonlinear systems. Although many theoretical criteria for judging the finite-time stability of deterministic nonlinear systems have been proposed over the last decade, these criteria generally cannot be directly applied to stochastic nonlinear systems. The first obstacle is how to define finite-time stability for a stochastic nonlinear system; in other words, what is the meaning of finite-time stability in the stochastic setting? For a stochastic nonlinear system, it is impossible to find a fixed time at which the state of the system is zero. Therefore, there is a key difference in finite-time stability between deterministic and stochastic systems. The other obstacle is that Itô differentiation introduces not only the gradient but also the Hessian of the Lyapunov function into the Lyapunov analysis.
As we stated in Remark 2.1, the existence of a unique solution for a stochastic nonlinear system is a precondition of various stochastic nonlinear control designs. However, this has not drawn enough attention in stochastic control designs. Every control design, in principle, should guarantee the existence of a unique solution for the stochastic nonlinear system, but this brings some technical difficulties. Let us consider stochastic system (4.1) again. Suppose there exist nonnegative C^1 functions ϕ_i(x_1, ..., x_n), ψ_i(x_1, ..., x_n), i = 1, ..., n, satisfying

|f_i(t, x)| ≤ (|x_1| + ··· + |x_n|) ϕ_i(x_1, ..., x_n),   (5.1)

|g_i(t, x)| ≤ (|x_1| + ··· + |x_n|) ψ_i(x_1, ..., x_n).   (5.2)
Similar to Theorem 4.1, we can choose a continuous state feedback control u ∈ R^n of the form

u_i = u_i(x_1, ..., x_n) = −(1/2) x_i [ 1 + n ∑_{j=1}^n ϕ_j²(x_1, ..., x_n) + n ∑_{j=1}^n ψ_j²(x_1, ..., x_n) ] − x_i^{2γ−1},  γ = p/q,  p, q ∈ ℵ,  1/2 < γ < 1,   (5.3)
such that

LV(x) ≤ −2 ∑_{i=1}^n x_i^{2γ} ≤ −2 ( x_1² + x_2² + ··· + x_n² )^γ = −2 (V(x))^γ.   (5.4)
But it is clear that such a u is not locally Lipschitz continuous, and such a control design cannot guarantee the existence of a unique solution for (4.1). This is why we emphasize that a discussion on the existence and uniqueness of solutions for stochastic nonlinear systems is both necessary and important.
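The loss of local Lipschitz continuity at the origin is easy to see numerically: for the fractional-power term u_0(x) = −x^{2γ−1} with 2γ − 1 < 1, the difference quotient at 0 behaves like h^{2γ−2}, which blows up as h → 0⁺ (γ = 3/5 below is an arbitrary illustrative choice):

```python
gamma = 3.0 / 5.0                      # 2*gamma - 1 = 1/5

def u0(x):
    """Odd fractional power -x^(2*gamma - 1), written for all real x."""
    s = 1.0 if x >= 0 else -1.0
    return -s * abs(x) ** (2 * gamma - 1)

# |u0(h) - u0(0)| / h = h^(2*gamma - 2) grows without bound as h -> 0+,
# so no Lipschitz constant works on any neighbourhood of the origin.
q_coarse = abs(u0(1e-2) - u0(0.0)) / 1e-2
q_fine = abs(u0(1e-6) - u0(0.0)) / 1e-6
```

The quotient grows by orders of magnitude as the increment shrinks, which is exactly why uniqueness must be argued through the ρ-function condition of Lemma 3.3 rather than through a Lipschitz condition.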
REFERENCES

[1] A. Astolfi, D. Karagiannis, and R. Ortega, Nonlinear and Adaptive Control. London: Springer-Verlag, 2008.
[2] S. P. Bhat and D. S. Bernstein, "Finite-time stability of continuous autonomous systems," SIAM J. Control Optim., vol. 38, pp. 751-766, 2000.
[3] H. Deng and M. Krstic, "Stochastic nonlinear stabilization-I: A backstepping design," Systems & Control Letters, vol. 32, pp. 143-150, 1997.
[4] H. Deng, M. Krstic, and R. J. Williams, "Stabilization of stochastic nonlinear systems driven by noise of unknown covariance," IEEE Trans. Automat. Contr., vol. 46, pp. 1237-1253, 2001.
[5] S. J. Liu, J. F. Zhang, and Z. P. Jiang, "Decentralized adaptive output-feedback stabilization for large-scale stochastic nonlinear systems," Automatica, vol. 43, pp. 238-251, 2007.
[6] S. J. Liu, J. F. Zhang, and Z. P. Jiang, "Global output-feedback stabilization for a class of stochastic non-minimum-phase nonlinear systems," Automatica, vol. 44, pp. 1944-1957, 2008.
[7] P. Florchinger, "Lyapunov-like techniques for stochastic stability," SIAM J. Control Optim., vol. 33, pp. 1151-1169, 1995.
[8] G. Garcia, S. Tarbouriech, and J. Bernussou, "Finite-time stabilization of linear time-varying continuous systems," IEEE Trans. Automat. Contr., vol. 54, pp. 364-369, 2009.
[9] R. Z. Has'minskii, Stochastic Stability of Differential Equations. Alphen aan den Rijn: Sijthoff and Noordhoff, 1980.
[10] Y. Hong, Z. P. Jiang, and G. Feng, "Finite-time input-to-state stability and applications to finite-time control design," SIAM J. Control Optim., vol. 48, pp. 4395-4418, 2010.
[11] X. Huang, W. Lin, and B. Yang, "Global finite-time stabilization of a class of uncertain nonlinear systems," Automatica, vol. 41, pp. 881-888, 2005.
[12] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic, Nonlinear and Adaptive Control Design. New York: Wiley, 1995.
[13] H. J. Kushner, Stochastic Stability and Control. New York: Academic Press, 1967.
[14] Y. G. Liu and J. F. Zhang, "Practical output-feedback risk-sensitive control for stochastic nonlinear systems with stable zero-dynamics," SIAM J. Control Optim., vol. 45, pp. 885-926, 2006.
[15] X. Mao, Exponential Stability of Stochastic Differential Equations. New York: Marcel Dekker, 1994.
[16] X. Mao, Stochastic Differential Equations and Applications. Chichester: Horwood, 1997.
[17] E. Moulay and W. Perruquetti, "Finite-time stability and stabilization of a class of continuous systems," J. Math. Anal. Appl., vol. 323, pp. 1430-1443, 2006.
[18] E. Moulay, M. Dambrine, N. Yeganefar, and W. Perruquetti, "Finite-time stability and stabilization of time-delay systems," Systems & Control Letters, vol. 57, pp. 561-566, 2008.
[19] S. G. Nersesov, W. M. Haddad, and Q. Hui, "Finite-time stabilization of nonlinear dynamical systems via control vector Lyapunov functions," Journal of the Franklin Institute, vol. 345, pp. 819-837, 2008.
[20] Z. Pan and T. Basar, "Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion," SIAM J. Control Optim., vol. 37, pp. 957-995, 1999.
[21] R. Situ, Theory of Stochastic Differential Equations with Jumps and Applications: Mathematical and Analysis Techniques with Applications to Engineering. New York: Springer, 2005.
[22] Z. J. Wu, X. J. Xie, and S. Y. Zhang, "Adaptive backstepping controller design using stochastic small-gain theorem," Automatica, vol. 43, pp. 608-620, 2007.
[23] X. Yu and X. J. Xie, "Output feedback regulation of stochastic nonlinear systems with stochastic iISS inverse dynamics," IEEE Trans. Automat. Contr., vol. 55, pp. 304-320, 2010.