[Studies in Fuzziness and Soft Computing] Uncertainty Theory, Volume 154


Chapter 5

Uncertainty Theory

A classical measure is essentially a set function satisfying the nonnegativity and countable additivity axioms. However, the additivity axiom of classical measure theory has been challenged by many mathematicians. The earliest challenge was from the theory of capacities by Choquet [26], in which the monotonicity and continuity axioms were assumed, but nonnegativity was abandoned. Sugeno [205] generalized classical measure theory to fuzzy measure theory by replacing the additivity axiom with the weaker axioms of monotonicity and semicontinuity.

Unfortunately, the credibility measure and chance measure fall in neither Choquet's category nor Sugeno's category. In many cases, the author thinks that "self-duality" plus "countable subadditivity" is more essential than "continuity" and "semicontinuity". For this reason, this chapter takes a new direction to weaken the additivity axiom and produce a new uncertainty theory.

Uncertainty theory is a branch of mathematics based on the normality, monotonicity, self-duality, and countable subadditivity axioms. It captures what probability theory, credibility theory, and chance theory have in common. The emphasis in this chapter is mainly on uncertain measure, uncertainty space, uncertain variable, uncertainty distribution, expected value, variance, moments, critical values, entropy, distance, characteristic function, convergence almost surely, convergence in measure, convergence in mean, convergence in distribution, and conditional uncertainty.

5.1 Uncertain Measure

Let Γ be a nonempty set, and L a σ-algebra over Γ. Each element Λ ∈ L is called an event. In order to present an axiomatic definition of uncertain measure, it is necessary to assign to each event Λ a number M{Λ} which indicates the level that Λ will occur. In order to ensure that the number M{Λ} has certain mathematical properties, we accept the following four axioms:


Axiom 1. (Normality) M{Γ} = 1.

Axiom 2. (Monotonicity) M{Λ1} ≤ M{Λ2} whenever Λ1 ⊂ Λ2.

Axiom 3. (Self-Duality) M{Λ} + M{Λc} = 1 for any event Λ.

Axiom 4. (Countable Subadditivity) For every countable sequence of events {Λi}, we have

M{∪_{i=1}^∞ Λi} ≤ ∑_{i=1}^∞ M{Λi}. (5.1)

Remark 5.1: Pathology occurs if the self-duality axiom is not assumed. For example, we define a set function that takes value 1 for each set. Then it satisfies all axioms but self-duality. Is it not strange if such a set function serves as a measure?

Remark 5.2: Pathology occurs if subadditivity is not assumed. For example, suppose that a universal set contains 3 elements. We define a set function that takes value 0 for each singleton, and 1 for each set with at least 2 elements. Then such a set function satisfies all axioms but subadditivity. Is it not strange if such a set function serves as a measure?

Remark 5.3: Pathology occurs if the countable subadditivity axiom is replaced with a finite subadditivity axiom. For example, assume the universal set consists of all real numbers. We define a set function that takes value 0 if the set is bounded, 0.5 if both the set and its complement are unbounded, and 1 if the complement of the set is bounded. Then such a set function is finitely subadditive but not countably subadditive. Is it not strange if such a set function serves as a measure?

Definition 5.1 The set function M is called an uncertain measure if it satisfies the four axioms.

Example 5.1: Probability measure, credibility measure and chance measure are instances of uncertain measure.

Example 5.2: Let Γ = {γ1, γ2, γ3}. For this case, there are only 8 events. Define

M{γ1} = 0.6, M{γ2} = 0.3, M{γ3} = 0.2,

M{γ1, γ2} = 0.8, M{γ1, γ3} = 0.7, M{γ2, γ3} = 0.4,

M{∅} = 0, M{Γ} = 1.

It is clear that the set function M is neither a probability measure nor a credibility measure. However, M is an uncertain measure because it satisfies the four axioms.
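On a finite space the four axioms can be checked by brute force. The following sketch (the helper names `subsets` and `is_uncertain_measure` are ours, not the book's) verifies them for the set function of Example 5.2; on a finite universe, pairwise subadditivity implies subadditivity for all finite unions by induction, so checking pairs of events suffices.

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of a finite universe, as frozensets."""
    items = sorted(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(items, r)
                                for r in range(len(items) + 1))]

def is_uncertain_measure(universe, m, tol=1e-9):
    """Check normality, monotonicity, self-duality and subadditivity
    for a set function m defined on every subset of a finite universe."""
    events = subsets(universe)
    full = frozenset(universe)
    if abs(m[full] - 1.0) > tol:                      # Axiom 1 (Normality)
        return False
    for a in events:
        for b in events:
            if a <= b and m[a] > m[b] + tol:          # Axiom 2 (Monotonicity)
                return False
            if m[a | b] > m[a] + m[b] + tol:          # Axiom 4 (Subadditivity)
                return False
        if abs(m[a] + m[full - a] - 1.0) > tol:       # Axiom 3 (Self-Duality)
            return False
    return True

# The set function of Example 5.2 on Gamma = {g1, g2, g3}.
G = {"g1", "g2", "g3"}
M = {frozenset(): 0.0, frozenset(G): 1.0,
     frozenset({"g1"}): 0.6, frozenset({"g2"}): 0.3, frozenset({"g3"}): 0.2,
     frozenset({"g1", "g2"}): 0.8, frozenset({"g1", "g3"}): 0.7,
     frozenset({"g2", "g3"}): 0.4}
```

The all-ones set function of Remark 5.1 fails this check, since it violates self-duality.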


Example 5.3: Let Γ = {γ1, γ2, γ3, γ4}, and let α be a number between 0.25 and 0.5. We define a set function as follows,

M{Λ} = 0 if Λ = ∅; α if Λ contains one element; 0.5 if Λ contains two elements; 1 − α if Λ contains three elements; 1 if Λ = Γ.

It is clear that the set function M is a probability measure if α = 0.25, and a credibility measure if α = 0.5. It is also easy to verify that M is an uncertain measure for any α with 0.25 ≤ α ≤ 0.5.

Example 5.4: Let Γ = [0, 2], L the Borel algebra, and π the Lebesgue measure. Then the set function

M{Λ} = π{Λ} if π{Λ} < 0.5; 1 − π{Λc} if π{Λc} < 0.5; 0.5 otherwise

is an uncertain measure.
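Because the rule in Example 5.4 depends on Λ only through π{Λ} (and π{Λc} = 2 − π{Λ}), the measure of an event is determined by its Lebesgue measure alone. A minimal sketch (the helper name `m_of_length` is ours):

```python
def m_of_length(length):
    """Uncertain measure of an event of Lebesgue measure `length`
    inside Gamma = [0, 2], following Example 5.4; the complement
    has Lebesgue measure 2 - length."""
    if length < 0.5:
        return length                 # pi{Lambda} < 0.5
    if 2.0 - length < 0.5:
        return 1.0 - (2.0 - length)   # pi{Lambda^c} < 0.5
    return 0.5                        # otherwise
```

Self-duality is visible directly: m_of_length(L) + m_of_length(2 − L) = 1 for every L in [0, 2].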

Theorem 5.1 Suppose that M is an uncertain measure. Then we have

M{∅} = 0, (5.2)

0 ≤ M{Λ} ≤ 1 (5.3)

for any event Λ.

Proof: It follows from the normality and self-duality axioms that M{∅} = 1 − M{Γ} = 1 − 1 = 0. It follows from the monotonicity axiom that 0 ≤ M{Λ} ≤ 1 because ∅ ⊂ Λ ⊂ Γ.

Theorem 5.2 An uncertain measure is a probability measure if and only if it meets the countable additivity axiom.

Proof: If an uncertain measure is a probability measure, then it clearly meets the countable additivity axiom. Conversely, if the uncertain measure meets the countable additivity axiom, then it is a probability measure because it also meets the normality and nonnegativity axioms.

Remark 5.4: An uncertain measure on Γ is a probability measure if there are at most two elements in Γ.

Theorem 5.3 An uncertain measure is a credibility measure if and only if it meets the maximality axiom.


Proof: If an uncertain measure is a credibility measure, then it clearly meets the maximality axiom. Conversely, if the uncertain measure meets the maximality axiom, then it is a credibility measure because it also meets the normality, monotonicity and self-duality axioms.

Remark 5.5: An uncertain measure on Γ is a credibility measure if there are at most two elements in Γ.

Theorem 5.4 Let Γ = {γ1, γ2, · · · }. If M is an uncertain measure, then

M{γi} + M{γj} ≤ 1 ≤ ∑_{k=1}^∞ M{γk} (5.4)

for any i and j with i ≠ j.

Proof: Since M is increasing and self-dual, we have, for any i and j with i ≠ j,

M{γi} + M{γj} ≤ M{Γ\{γj}} + M{γj} = 1.

Since Γ = ∪k{γk} and M is countably subadditive, we have

1 = M{Γ} = M{∪_{k=1}^∞ {γk}} ≤ ∑_{k=1}^∞ M{γk}.

The theorem is proved.

Uncertainty Null-Additivity Theorem

Null-additivity is a direct deduction from subadditivity. This fact has already been shown for the credibility measure and chance measure. We first prove a more general theorem.

Theorem 5.5 Let {Λi} be a decreasing sequence of events with M{Λi} → 0 as i → ∞. Then for any event Λ, we have

lim_{i→∞} M{Λ ∪ Λi} = lim_{i→∞} M{Λ\Λi} = M{Λ}. (5.5)

Proof: It follows from the monotonicity and countable subadditivity axioms that

M{Λ} ≤ M{Λ ∪ Λi} ≤ M{Λ} + M{Λi}

for each i. Thus we get M{Λ ∪ Λi} → M{Λ} by using M{Λi} → 0. Since (Λ\Λi) ⊂ Λ ⊂ ((Λ\Λi) ∪ Λi), we have

M{Λ\Λi} ≤ M{Λ} ≤ M{Λ\Λi} + M{Λi}.

Hence M{Λ\Λi} → M{Λ} by using M{Λi} → 0.

Remark 5.6: It follows from the above theorem that the uncertain measure is null-additive, i.e., M{Λ1 ∪ Λ2} = M{Λ1} + M{Λ2} if either M{Λ1} = 0 or M{Λ2} = 0. In other words, the uncertain measure remains unchanged if the event is enlarged or reduced by an event with measure zero.


Uncertainty Asymptotic Theorem

Theorem 5.6 (Uncertainty Asymptotic Theorem) For any events Λ1, Λ2, · · · , we have

lim_{i→∞} M{Λi} > 0, if Λi ↑ Γ, (5.6)

lim_{i→∞} M{Λi} < 1, if Λi ↓ ∅. (5.7)

Proof: Assume Λi ↑ Γ. Since Γ = ∪iΛi, it follows from the countable subadditivity axiom that

1 = M{Γ} ≤ ∑_{i=1}^∞ M{Λi}.

Since M{Λi} is increasing with respect to i, we have lim_{i→∞} M{Λi} > 0. If Λi ↓ ∅, then Λi^c ↑ Γ. It follows from the first inequality and the self-duality axiom that

lim_{i→∞} M{Λi} = 1 − lim_{i→∞} M{Λi^c} < 1.

The theorem is proved.

Example 5.5: Assume Γ is the set of real numbers. Let α be a number with 0 < α ≤ 0.5. Define a set function as follows,

M{Λ} = 0 if Λ = ∅; α if Λ is upper bounded; 0.5 if both Λ and Λc are upper unbounded; 1 − α if Λc is upper bounded; 1 if Λ = Γ. (5.8)

It is easy to verify that M is an uncertain measure. Write Λi = (−∞, i] for i = 1, 2, · · · Then Λi ↑ Γ and lim_{i→∞} M{Λi} = α. Furthermore, we have Λi^c ↓ ∅ and lim_{i→∞} M{Λi^c} = 1 − α.

Uncertainty Space

Definition 5.2 Let Γ be a nonempty set, L a σ-algebra over Γ, and M an uncertain measure. Then the triplet (Γ, L, M) is called an uncertainty space.

Example 5.6: Probability space and credibility space are instances of uncertainty space.


5.2 Uncertain Variables

Definition 5.3 An uncertain variable is a measurable function from an uncertainty space (Γ, L, M) to the set of real numbers, i.e., for any Borel set B of real numbers, the set

{ξ ∈ B} = {γ ∈ Γ | ξ(γ) ∈ B} (5.9)

is an event.

Example 5.7: Random variable, fuzzy variable and hybrid variable are instances of uncertain variable.

Definition 5.4 An uncertain variable ξ on the uncertainty space (Γ, L, M) is said to be
(a) nonnegative if M{ξ < 0} = 0;
(b) positive if M{ξ ≤ 0} = 0;
(c) continuous if M{ξ = x} is a continuous function of x;
(d) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

M{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm} = 0; (5.10)

(e) discrete if there exists a countable sequence {x1, x2, · · · } such that

M{ξ ≠ x1, ξ ≠ x2, · · · } = 0. (5.11)

It is clear that 0 ≤ M{ξ = x} ≤ 1, and there is at most one point x0 such that M{ξ = x0} > 0.5. For a continuous uncertain variable, we always have 0 ≤ M{ξ = x} ≤ 0.5.

Definition 5.5 Let ξ1 and ξ2 be uncertain variables defined on the uncertainty space (Γ, L, M). We say ξ1 = ξ2 if ξ1(γ) = ξ2(γ) for almost all γ ∈ Γ.

Uncertain Vector

Definition 5.6 An n-dimensional uncertain vector is a measurable function from an uncertainty space (Γ, L, M) to the set of n-dimensional real vectors, i.e., for any Borel set B of ℝ^n, the set

{ξ ∈ B} = {γ ∈ Γ | ξ(γ) ∈ B} (5.12)

is an event.

Theorem 5.7 The vector (ξ1, ξ2, · · · , ξn) is an uncertain vector if and only if ξ1, ξ2, · · · , ξn are uncertain variables.


Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is an uncertain vector on the uncertainty space (Γ, L, M). For any Borel set B of ℝ, the set B × ℝ^{n−1} is a Borel set of ℝ^n. Thus the set

{γ ∈ Γ | ξ1(γ) ∈ B} = {γ ∈ Γ | ξ1(γ) ∈ B, ξ2(γ) ∈ ℝ, · · · , ξn(γ) ∈ ℝ} = {γ ∈ Γ | ξ(γ) ∈ B × ℝ^{n−1}}

is an event. Hence ξ1 is an uncertain variable. A similar process may prove that ξ2, ξ3, · · · , ξn are uncertain variables. Conversely, suppose that all ξ1, ξ2, · · · , ξn are uncertain variables on the uncertainty space (Γ, L, M). We define

B = {B ⊂ ℝ^n | {γ ∈ Γ | ξ(γ) ∈ B} is an event}.

The vector ξ = (ξ1, ξ2, · · · , ξn) is proved to be an uncertain vector if we can prove that B contains all Borel sets of ℝ^n. First, the class B contains all open intervals of ℝ^n because

{γ | ξ(γ) ∈ ∏_{i=1}^n (ai, bi)} = ∩_{i=1}^n {γ | ξi(γ) ∈ (ai, bi)}

is an event. Next, the class B is a σ-algebra over ℝ^n because (i) we have ℝ^n ∈ B since {γ | ξ(γ) ∈ ℝ^n} = Γ; (ii) if B ∈ B, then {γ ∈ Γ | ξ(γ) ∈ B} is an event, and

{γ ∈ Γ | ξ(γ) ∈ B^c} = {γ ∈ Γ | ξ(γ) ∈ B}^c

is an event. This means that B^c ∈ B; (iii) if Bi ∈ B for i = 1, 2, · · · , then {γ ∈ Γ | ξ(γ) ∈ Bi} are events and

{γ ∈ Γ | ξ(γ) ∈ ∪_{i=1}^∞ Bi} = ∪_{i=1}^∞ {γ ∈ Γ | ξ(γ) ∈ Bi}

is an event. This means that ∪iBi ∈ B. Since the smallest σ-algebra containing all open intervals of ℝ^n is just the Borel algebra of ℝ^n, the class B contains all Borel sets of ℝ^n. The theorem is proved.

Uncertain Arithmetic

Definition 5.7 Suppose that f : ℝ^n → ℝ is a measurable function, and ξ1, ξ2, · · · , ξn uncertain variables on the uncertainty space (Γ, L, M). Then ξ = f(ξ1, ξ2, · · · , ξn) is an uncertain variable defined as

ξ(γ) = f(ξ1(γ), ξ2(γ), · · · , ξn(γ)), ∀γ ∈ Γ. (5.13)


Example 5.8: Let ξ1 and ξ2 be two uncertain variables. Then the sum ξ = ξ1 + ξ2 is an uncertain variable defined by

ξ(γ) = ξ1(γ) + ξ2(γ), ∀γ ∈ Γ.

The product ξ = ξ1ξ2 is also an uncertain variable defined by

ξ(γ) = ξ1(γ) · ξ2(γ), ∀γ ∈ Γ.

Theorem 5.8 Let ξ be an n-dimensional uncertain vector, and f : ℝ^n → ℝ a measurable function. Then f(ξ) is an uncertain variable.

Proof: Assume that ξ is an uncertain vector on the uncertainty space (Γ, L, M). For any Borel set B of ℝ, since f is a measurable function, f^{−1}(B) is a Borel set of ℝ^n. Thus the set

{γ ∈ Γ | f(ξ(γ)) ∈ B} = {γ ∈ Γ | ξ(γ) ∈ f^{−1}(B)}

is an event for any Borel set B. Hence f(ξ) is an uncertain variable.

5.3 Uncertainty Distribution

Definition 5.8 The uncertainty distribution Φ: ℝ → [0, 1] of an uncertain variable ξ is defined by

Φ(x) = M{γ ∈ Γ | ξ(γ) ≤ x}. (5.14)

Example 5.9: Probability distribution, credibility distribution and chance distribution are instances of uncertainty distribution.

Theorem 5.9 An uncertainty distribution is an increasing function such that

0 ≤ lim_{x→−∞} Φ(x) < 1, 0 < lim_{x→+∞} Φ(x) ≤ 1. (5.15)

Proof: It is obvious that an uncertainty distribution Φ is an increasing function, and the inequalities (5.15) follow from the uncertainty asymptotic theorem immediately.

Definition 5.9 A continuous uncertain variable is said to be
(a) singular if its uncertainty distribution is a singular function;
(b) absolutely continuous if its uncertainty distribution is absolutely continuous.

Definition 5.10 The uncertainty density function φ: ℝ → [0, +∞) of an uncertain variable ξ is a function such that

Φ(x) = ∫_{−∞}^x φ(y)dy, ∀x ∈ ℝ, (5.16)

∫_{−∞}^{+∞} φ(y)dy = 1 (5.17)

where Φ is the uncertainty distribution of ξ.

Theorem 5.10 Let ξ be an uncertain variable whose uncertainty density function φ exists. Then we have

M{ξ ≤ x} = ∫_{−∞}^x φ(y)dy, M{ξ ≥ x} = ∫_x^{+∞} φ(y)dy. (5.18)

Proof: The first part follows immediately from the definition. In addition, by the self-duality of uncertain measure, we have

M{ξ ≥ x} = 1 − M{ξ < x} = ∫_{−∞}^{+∞} φ(y)dy − ∫_{−∞}^x φ(y)dy = ∫_x^{+∞} φ(y)dy.

The theorem is proved.

Joint Uncertainty Distribution

Definition 5.11 Let (ξ1, ξ2, · · · , ξn) be an uncertain vector. Then the joint uncertainty distribution Φ : ℝ^n → [0, 1] is defined by

Φ(x1, x2, · · · , xn) = M{γ ∈ Γ | ξ1(γ) ≤ x1, ξ2(γ) ≤ x2, · · · , ξn(γ) ≤ xn}.

Definition 5.12 The joint uncertainty density function φ : ℝ^n → [0, +∞) of an uncertain vector (ξ1, ξ2, · · · , ξn) is a function such that

Φ(x1, x2, · · · , xn) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xn} φ(y1, y2, · · · , yn)dy1dy2 · · · dyn

holds for all (x1, x2, · · · , xn) ∈ ℝ^n, and

∫_{−∞}^{+∞} ∫_{−∞}^{+∞} · · · ∫_{−∞}^{+∞} φ(y1, y2, · · · , yn)dy1dy2 · · · dyn = 1

where Φ is the joint uncertainty distribution of (ξ1, ξ2, · · · , ξn).

5.4 Expected Value

Definition 5.13 Let ξ be an uncertain variable. Then the expected value of ξ is defined by

E[ξ] = ∫_0^{+∞} M{ξ ≥ r}dr − ∫_{−∞}^0 M{ξ ≤ r}dr (5.19)

provided that at least one of the two integrals is finite.
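Definition (5.19) can be evaluated numerically once the two tail measures are known. A minimal sketch (the helper names are ours; the improper integrals are truncated at ±50, which assumes the tails vanish well inside that range):

```python
def integrate(f, a, b, n=20000):
    """Composite trapezoidal rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def expected_value(m_geq, m_leq, tail=50.0):
    """E[xi] = int_0^inf M{xi >= r} dr - int_{-inf}^0 M{xi <= r} dr,
    per (5.19), truncating both improper integrals at +-tail."""
    return integrate(m_geq, 0.0, tail) - integrate(m_leq, -tail, 0.0)
```

For instance, a probability measure is an uncertain measure (Example 5.1); for a uniform random variable on [0, 1] one has M{ξ ≥ r} = max(0, min(1, 1 − r)) and M{ξ ≤ r} = 0 for r ≤ 0, giving E[ξ] = 0.5.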


Theorem 5.11 Let ξ be an uncertain variable whose uncertainty density function φ exists. If the Lebesgue integral

∫_{−∞}^{+∞} xφ(x)dx

is finite, then we have

E[ξ] = ∫_{−∞}^{+∞} xφ(x)dx. (5.20)

Proof: It follows from the definition of the expected value operator and the Fubini theorem that

E[ξ] = ∫_0^{+∞} M{ξ ≥ r}dr − ∫_{−∞}^0 M{ξ ≤ r}dr

= ∫_0^{+∞} [∫_r^{+∞} φ(x)dx] dr − ∫_{−∞}^0 [∫_{−∞}^r φ(x)dx] dr

= ∫_0^{+∞} [∫_0^x φ(x)dr] dx − ∫_{−∞}^0 [∫_x^0 φ(x)dr] dx

= ∫_0^{+∞} xφ(x)dx + ∫_{−∞}^0 xφ(x)dx

= ∫_{−∞}^{+∞} xφ(x)dx.

The theorem is proved.

Theorem 5.12 Let ξ be an uncertain variable with uncertainty distribution Φ. If

lim_{x→−∞} Φ(x) = 0, lim_{x→+∞} Φ(x) = 1

and the Lebesgue-Stieltjes integral

∫_{−∞}^{+∞} x dΦ(x)

is finite, then we have

E[ξ] = ∫_{−∞}^{+∞} x dΦ(x). (5.21)

Proof: Since the Lebesgue-Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, we immediately have

lim_{y→+∞} ∫_0^y x dΦ(x) = ∫_0^{+∞} x dΦ(x), lim_{y→−∞} ∫_y^0 x dΦ(x) = ∫_{−∞}^0 x dΦ(x)

and

lim_{y→+∞} ∫_y^{+∞} x dΦ(x) = 0, lim_{y→−∞} ∫_{−∞}^y x dΦ(x) = 0.

It follows from

∫_y^{+∞} x dΦ(x) ≥ y (lim_{z→+∞} Φ(z) − Φ(y)) = y(1 − Φ(y)) ≥ 0, for y > 0,

∫_{−∞}^y x dΦ(x) ≤ y (Φ(y) − lim_{z→−∞} Φ(z)) = yΦ(y) ≤ 0, for y < 0

that

lim_{y→+∞} y(1 − Φ(y)) = 0, lim_{y→−∞} yΦ(y) = 0.

Let 0 = x0 < x1 < x2 < · · · < xn = y be a partition of [0, y]. Then we have

∑_{i=0}^{n−1} xi (Φ(x_{i+1}) − Φ(xi)) → ∫_0^y x dΦ(x)

and

∑_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − xi) → ∫_0^y M{ξ ≥ r}dr

as max{|x_{i+1} − xi| : i = 0, 1, · · · , n − 1} → 0. Since

∑_{i=0}^{n−1} xi (Φ(x_{i+1}) − Φ(xi)) − ∑_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − xi) = y(Φ(y) − 1) → 0

as y → +∞, we obtain

∫_0^{+∞} M{ξ ≥ r}dr = ∫_0^{+∞} x dΦ(x).

A similar argument proves that

−∫_{−∞}^0 M{ξ ≤ r}dr = ∫_{−∞}^0 x dΦ(x).

It follows that the equation (5.21) holds.

Theorem 5.13 Let ξ be an uncertain variable with finite expected value. Then for any real numbers a and b, we have

E[aξ + b] = aE[ξ] + b. (5.22)


Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number b. If b ≥ 0, we have

E[ξ + b] = ∫_0^{+∞} M{ξ + b ≥ r}dr − ∫_{−∞}^0 M{ξ + b ≤ r}dr

= ∫_0^{+∞} M{ξ ≥ r − b}dr − ∫_{−∞}^0 M{ξ ≤ r − b}dr

= E[ξ] + ∫_0^b (M{ξ ≥ r − b} + M{ξ < r − b})dr

= E[ξ] + b.

If b < 0, then we have

E[ξ + b] = E[ξ] − ∫_b^0 (M{ξ ≥ r − b} + M{ξ < r − b})dr = E[ξ] + b.

Step 2: We prove E[aξ] = aE[ξ]. If a = 0, then the equation E[aξ] = aE[ξ] holds trivially. If a > 0, we have

E[aξ] = ∫_0^{+∞} M{aξ ≥ r}dr − ∫_{−∞}^0 M{aξ ≤ r}dr

= ∫_0^{+∞} M{ξ ≥ r/a}dr − ∫_{−∞}^0 M{ξ ≤ r/a}dr

= a ∫_0^{+∞} M{ξ ≥ t}dt − a ∫_{−∞}^0 M{ξ ≤ t}dt

= aE[ξ].

If a < 0, we have

E[aξ] = ∫_0^{+∞} M{aξ ≥ r}dr − ∫_{−∞}^0 M{aξ ≤ r}dr

= ∫_0^{+∞} M{ξ ≤ r/a}dr − ∫_{−∞}^0 M{ξ ≥ r/a}dr

= a ∫_0^{+∞} M{ξ ≥ t}dt − a ∫_{−∞}^0 M{ξ ≤ t}dt

= aE[ξ].

Finally, for any real numbers a and b, it follows from Steps 1 and 2 that the theorem holds.

5.5 Variance

Definition 5.14 Let ξ be an uncertain variable with finite expected value e. Then the variance of ξ is defined by V[ξ] = E[(ξ − e)²].


Theorem 5.14 If ξ is an uncertain variable with finite expected value, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].

Theorem 5.15 Let ξ be an uncertain variable with expected value e. Then V[ξ] = 0 if and only if M{ξ = e} = 1.

Proof: If V[ξ] = 0, then E[(ξ − e)²] = 0. Note that

E[(ξ − e)²] = ∫_0^{+∞} M{(ξ − e)² ≥ r}dr

which implies M{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have

M{(ξ − e)² = 0} = 1.

That is, M{ξ = e} = 1.

Conversely, if M{ξ = e} = 1, then we have M{(ξ − e)² = 0} = 1 and M{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≥ r}dr = 0.

The theorem is proved.

Theorem 5.16 Let f be a convex function on [a, b], and ξ an uncertain variable that takes values in [a, b] and has expected value e. Then

E[f(ξ)] ≤ ((b − e)/(b − a)) f(a) + ((e − a)/(b − a)) f(b). (5.23)

Proof: For each γ ∈ Γ, we have a ≤ ξ(γ) ≤ b and

ξ(γ) = ((b − ξ(γ))/(b − a)) a + ((ξ(γ) − a)/(b − a)) b.

It follows from the convexity of f that

f(ξ(γ)) ≤ ((b − ξ(γ))/(b − a)) f(a) + ((ξ(γ) − a)/(b − a)) f(b).

Taking expected values on both sides, we obtain the inequality.

Theorem 5.17 (Maximum Variance Theorem) Let ξ be an uncertain variable that takes values in [a, b] and has expected value e. Then

V[ξ] ≤ (e − a)(b − e). (5.24)

Proof: It follows from Theorem 5.16 immediately by defining f(x) = (x − e)².
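Since a probability measure is an instance of an uncertain measure (Example 5.1), the bound (5.24) can be spot-checked on random discrete probability distributions supported in [a, b]; it is attained by the two-point distribution concentrated on a and b. A sketch with our own helper names:

```python
import random

def mean_var(xs, ps):
    """Mean and variance of a discrete distribution {x_i: p_i}."""
    e = sum(p * x for x, p in zip(xs, ps))
    v = sum(p * (x - e) ** 2 for x, p in zip(xs, ps))
    return e, v

rng = random.Random(0)
a, b = 0.0, 4.0
for _ in range(1000):
    xs = [rng.uniform(a, b) for _ in range(6)]
    ws = [rng.random() + 1e-9 for _ in range(6)]
    total = sum(ws)
    ps = [w / total for w in ws]
    e, v = mean_var(xs, ps)
    assert v <= (e - a) * (b - e) + 1e-9   # maximum variance bound (5.24)
```

The two-point distribution on {a, b} with weights (b − e)/(b − a) and (e − a)/(b − a) has mean e and variance exactly (e − a)(b − e), so the bound is tight.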


5.6 Moments

Definition 5.15 Let ξ be an uncertain variable. Then for any positive integer k,
(a) the expected value E[ξ^k] is called the kth moment;
(b) the expected value E[|ξ|^k] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])^k] is called the kth central moment;
(d) the expected value E[|ξ − E[ξ]|^k] is called the kth absolute central moment.

Note that the first central moment is always 0, the first moment is just the expected value, and the second central moment is just the variance.

Theorem 5.18 Let ξ be a nonnegative uncertain variable, and k a positive number. Then the kth moment

E[ξ^k] = k ∫_0^{+∞} r^{k−1} M{ξ ≥ r}dr. (5.25)

Proof: It follows from the nonnegativity of ξ and the substitution x = r^k that

E[ξ^k] = ∫_0^∞ M{ξ^k ≥ x}dx = ∫_0^∞ M{ξ ≥ r}d(r^k) = k ∫_0^∞ r^{k−1} M{ξ ≥ r}dr.

The theorem is proved.
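As a sanity check, take the probability instance ξ uniform on [0, 1], where M{ξ ≥ r} = 1 − r for 0 ≤ r ≤ 1 and E[ξ^k] = 1/(k + 1); formula (5.25) reproduces this value numerically (a sketch with our own helper names):

```python
def integrate(f, a, b, n=100000):
    """Composite trapezoidal rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def kth_moment(m_geq, k, upper=1.0):
    """E[xi^k] = k * int_0^upper r^(k-1) M{xi >= r} dr, per (5.25),
    assuming M{xi >= r} = 0 for r > upper."""
    return k * integrate(lambda r: r ** (k - 1) * m_geq(r), 0.0, upper)
```

For k = 1, 2, 3, 4 the computed values agree with 1/(k + 1) to numerical precision.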

Theorem 5.19 Let ξ be an uncertain variable, and t a positive number. If E[|ξ|^t] < ∞, then

lim_{x→∞} x^t M{|ξ| ≥ x} = 0. (5.26)

Conversely, if (5.26) holds for some positive number t, then E[|ξ|^s] < ∞ for any 0 ≤ s < t.

Proof: It follows from the definition of the expected value operator that

E[|ξ|^t] = ∫_0^{+∞} M{|ξ|^t ≥ r}dr < ∞.

Thus we have

lim_{x→∞} ∫_{x^t/2}^{+∞} M{|ξ|^t ≥ r}dr = 0.

The equation (5.26) is proved by the following relation,

∫_{x^t/2}^{+∞} M{|ξ|^t ≥ r}dr ≥ ∫_{x^t/2}^{x^t} M{|ξ|^t ≥ r}dr ≥ (1/2) x^t M{|ξ| ≥ x}.

Conversely, if (5.26) holds, then there exists a number a such that

x^t M{|ξ| ≥ x} ≤ 1, ∀x ≥ a.


Thus we have

E[|ξ|^s] = ∫_0^a M{|ξ|^s ≥ r}dr + ∫_a^{+∞} M{|ξ|^s ≥ r}dr

= ∫_0^a M{|ξ|^s ≥ r}dr + ∫_a^{+∞} s r^{s−1} M{|ξ| ≥ r}dr

≤ ∫_0^a M{|ξ|^s ≥ r}dr + s ∫_a^{+∞} r^{s−t−1}dr

< +∞ (by ∫_a^{+∞} r^p dr < ∞ for any p < −1 and a > 0).

The theorem is proved.

Theorem 5.20 Let ξ be an uncertain variable that takes values in [a, b] and has expected value e. Then for any positive integer k, the kth absolute moment and kth absolute central moment satisfy the following inequalities,

E[|ξ|^k] ≤ ((b − e)/(b − a)) |a|^k + ((e − a)/(b − a)) |b|^k, (5.27)

E[|ξ − e|^k] ≤ ((b − e)/(b − a)) (e − a)^k + ((e − a)/(b − a)) (b − e)^k. (5.28)

Proof: It follows from Theorem 5.16 immediately by defining f(x) = |x|^k and f(x) = |x − e|^k.

5.7 Independence

Definition 5.16 The uncertain variables ξ1, ξ2, · · · , ξn are said to be independent if

E[∑_{i=1}^n fi(ξi)] = ∑_{i=1}^n E[fi(ξi)] (5.29)

for any measurable functions f1, f2, · · · , fn provided that the expected values exist and are finite.

Theorem 5.21 If ξ and η are independent uncertain variables with finite expected values, then we have

E[aξ + bη] = aE[ξ] + bE[η] (5.30)

for any real numbers a and b.

Proof: The theorem follows from the definition by defining f1(x) = ax and f2(x) = bx.


Theorem 5.22 Suppose that ξ1, ξ2, · · · , ξn are independent uncertain variables, and f1, f2, · · · , fn are measurable functions. Then the uncertain variables f1(ξ1), f2(ξ2), · · · , fn(ξn) are independent.

Proof: The theorem follows from the definition because the composition of measurable functions is also measurable.

5.8 Identical Distribution

This section introduces the concept of identical distribution of uncertain variables.

Definition 5.17 The uncertain variables ξ and η are identically distributed if

M{ξ ∈ B} = M{η ∈ B} (5.31)

for any Borel set B of real numbers.

Theorem 5.23 Let ξ and η be identically distributed uncertain variables, and f : ℝ → ℝ a measurable function. Then f(ξ) and f(η) are identically distributed uncertain variables.

Proof: For any Borel set B of real numbers, we have

M{f(ξ) ∈ B} = M{ξ ∈ f^{−1}(B)} = M{η ∈ f^{−1}(B)} = M{f(η) ∈ B}.

Hence f(ξ) and f(η) are identically distributed uncertain variables.

Theorem 5.24 If ξ and η are identically distributed uncertain variables, then they have the same uncertainty distribution.

Proof: Since ξ and η are identically distributed uncertain variables, we have M{ξ ∈ (−∞, x]} = M{η ∈ (−∞, x]} for any x. Thus ξ and η have the same uncertainty distribution.

Theorem 5.25 If ξ and η are identically distributed uncertain variables whose uncertainty density functions exist, then they have the same uncertainty density function.

Proof: It follows from Theorem 5.24 immediately.

5.9 Critical Values

Definition 5.18 Let ξ be an uncertain variable, and α ∈ (0, 1]. Then

ξsup(α) = sup{r | M{ξ ≥ r} ≥ α} (5.32)

is called the α-optimistic value to ξ, and

ξinf(α) = inf{r | M{ξ ≤ r} ≥ α} (5.33)

is called the α-pessimistic value to ξ.
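For a continuous uncertain variable with distribution Φ and M{ξ ≥ r} = 1 − Φ(r) (which follows from self-duality when M{ξ = r} = 0), the critical values can be approximated by a grid search over r. A sketch with our own helper names; the grid bounds are assumptions:

```python
def critical_values(phi, alpha, lo=-10.0, hi=10.0, n=100001):
    """Grid-search approximations of the pessimistic value
    xi_inf(alpha) = inf{r | Phi(r) >= alpha} and the optimistic value
    xi_sup(alpha) = sup{r | 1 - Phi(r) >= alpha}."""
    step = (hi - lo) / (n - 1)
    inf_val, sup_val = hi, lo
    found_inf = False
    for i in range(n):
        r = lo + i * step
        if not found_inf and phi(r) >= alpha:
            inf_val, found_inf = r, True
        if 1.0 - phi(r) >= alpha:
            sup_val = r
    return sup_val, inf_val
```

With the uniform-on-[0, 1] distribution Φ(r) = min(1, max(0, r)) and α = 0.3 this returns ξsup(0.3) ≈ 0.7 and ξinf(0.3) ≈ 0.3, consistent with ξinf(α) ≤ ξsup(α) for α ≤ 0.5 (Theorem 5.27 below).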


Theorem 5.26 Let ξ be an uncertain variable, c a real number, and α a number between 0 and 1. We have
(a) if c ≥ 0, then (cξ)sup(α) = cξsup(α) and (cξ)inf(α) = cξinf(α);
(b) if c < 0, then (cξ)sup(α) = cξinf(α) and (cξ)inf(α) = cξsup(α).

Proof: (a) If c = 0, then part (a) is obvious. In the case of c > 0, we have

(cξ)sup(α) = sup{r | M{cξ ≥ r} ≥ α} = c sup{r/c | M{ξ ≥ r/c} ≥ α} = cξsup(α).

A similar way may prove (cξ)inf(α) = cξinf(α). In order to prove part (b), it suffices to prove that (−ξ)sup(α) = −ξinf(α) and (−ξ)inf(α) = −ξsup(α). In fact, we have

(−ξ)sup(α) = sup{r | M{−ξ ≥ r} ≥ α} = −inf{−r | M{ξ ≤ −r} ≥ α} = −ξinf(α).

Similarly, we may prove that (−ξ)inf(α) = −ξsup(α). The theorem is proved.

Theorem 5.27 Let ξ be an uncertain variable. Then we have
(a) if α > 0.5, then ξinf(α) ≥ ξsup(α);
(b) if α ≤ 0.5, then ξinf(α) ≤ ξsup(α).

Proof: Part (a): Write ξ̄(α) = (ξinf(α) + ξsup(α))/2. If ξinf(α) < ξsup(α), then we have

1 ≥ M{ξ < ξ̄(α)} + M{ξ > ξ̄(α)} ≥ α + α > 1.

A contradiction proves ξinf(α) ≥ ξsup(α).

Part (b): Assume that ξinf(α) > ξsup(α). It follows from the definition of ξinf(α) that M{ξ ≤ ξ̄(α)} < α. Similarly, it follows from the definition of ξsup(α) that M{ξ ≥ ξ̄(α)} < α. Thus

1 ≤ M{ξ ≤ ξ̄(α)} + M{ξ ≥ ξ̄(α)} < α + α ≤ 1.

A contradiction proves ξinf(α) ≤ ξsup(α). The theorem is verified.

Theorem 5.28 Let ξ be an uncertain variable. Then ξsup(α) is a decreasing function of α, and ξinf(α) is an increasing function of α.

Proof: It follows from the definition immediately.


5.10 Entropy

This section provides a definition of entropy to characterize the uncertainty of uncertain variables resulting from information deficiency.

Definition 5.19 Suppose that ξ is a discrete uncertain variable taking values in {x1, x2, · · · }. Then its entropy is defined by

H[ξ] = ∑_{i=1}^∞ S(M{ξ = xi}) (5.34)

where S(t) = −t ln t − (1 − t) ln(1 − t).

Example 5.10: Suppose that ξ is a discrete uncertain variable taking values in {x1, x2, · · · }. If M{ξ = xk} = 1 for some index k and M{ξ = xi} = 0 for all i ≠ k, then its entropy H[ξ] = 0.

Example 5.11: Suppose that ξ is a simple uncertain variable taking values in {x1, x2, · · · , xn}. If M{ξ = xi} = 0.5 for all i = 1, 2, · · · , n, then its entropy H[ξ] = n ln 2.
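The two examples can be reproduced directly from (5.34); note S(0) = S(1) = 0 by the convention 0 ln 0 = 0. A minimal sketch (the helper names `S` and `entropy` are ours):

```python
import math

def S(t):
    """S(t) = -t ln t - (1 - t) ln(1 - t), with S(0) = S(1) = 0."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1.0 - t) * math.log(1.0 - t)

def entropy(measures):
    """H[xi] = sum_i S(M{xi = x_i}) per (5.34), given the sequence
    of values M{xi = x_i} of a discrete uncertain variable."""
    return sum(S(t) for t in measures)
```

Here entropy([1, 0, 0]) gives 0 (Example 5.10), and entropy([0.5] * n) gives n ln 2 (Example 5.11).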

Theorem 5.29 Suppose that ξ is a discrete uncertain variable taking values in {x1, x2, · · · }. Then

H[ξ] ≥ 0 (5.35)

and equality holds if and only if ξ is essentially a deterministic/crisp number.

Proof: The nonnegativity is clear. In addition, H[ξ] = 0 if and only if M{ξ = xi} = 0 or 1 for each i. That is, there exists one and only one index k such that M{ξ = xk} = 1, i.e., ξ is essentially a deterministic/crisp number.

This theorem states that the entropy of an uncertain variable reaches its minimum 0 when the uncertain variable degenerates to a deterministic/crisp number. In this case, there is no uncertainty.

Theorem 5.30 Suppose that ξ is a simple uncertain variable taking values in {x1, x2, · · · , xn}. Then

H[ξ] ≤ n ln 2 (5.36)

and equality holds if and only if M{ξ = xi} = 0.5 for all i = 1, 2, · · · , n.

Proof: Since the function S(t) reaches its maximum ln 2 at t = 0.5, we have

H[ξ] = ∑_{i=1}^n S(M{ξ = xi}) ≤ n ln 2

and equality holds if and only if M{ξ = xi} = 0.5 for all i = 1, 2, · · · , n.

This theorem states that the entropy of an uncertain variable reaches its maximum when the uncertain variable is an equipossible one. In this case, there is no preference among all the values that the uncertain variable will take.


5.11 Distance

Definition 5.20 The distance between uncertain variables ξ and η is defined as

d(ξ, η) = E[|ξ − η|]. (5.37)

Theorem 5.31 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ) + 2d(η, τ).

Proof: The parts (a), (b) and (c) follow immediately from the definition. Now we prove the part (d). It follows from the countable subadditivity axiom that

d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ r} dr
        ≤ ∫_0^{+∞} M{|ξ − τ| + |τ − η| ≥ r} dr
        ≤ ∫_0^{+∞} M{{|ξ − τ| ≥ r/2} ∪ {|τ − η| ≥ r/2}} dr
        ≤ ∫_0^{+∞} (M{|ξ − τ| ≥ r/2} + M{|τ − η| ≥ r/2}) dr
        = ∫_0^{+∞} M{|ξ − τ| ≥ r/2} dr + ∫_0^{+∞} M{|τ − η| ≥ r/2} dr
        = 2E[|ξ − τ|] + 2E[|τ − η|] = 2d(ξ, τ) + 2d(τ, η).
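
A finite probability space is one concrete instance of an uncertainty space, so Theorem 5.31 can be checked there numerically. This is an illustrative sketch: the variables, weights, and function name below are not from the text.

```python
def distance(xi, eta, weights):
    """d(xi, eta) = E[|xi - eta|] over a finite space with measure `weights`."""
    return sum(w * abs(x - y) for x, y, w in zip(xi, eta, weights))

# Illustrative variables on a three-point space with weights summing to 1.
w = [0.2, 0.3, 0.5]
xi, eta, tau = [1.0, 2.0, 3.0], [0.5, 2.5, 2.0], [1.5, 1.5, 3.5]

assert distance(xi, eta, w) >= 0                     # (a) nonnegativity
assert distance(xi, xi, w) == 0                      # (b) identification
assert distance(xi, eta, w) == distance(eta, xi, w)  # (c) symmetry
# (d) the triangle inequality holds with the constant 2 of Theorem 5.31.
assert distance(xi, eta, w) <= 2 * distance(xi, tau, w) + 2 * distance(tau, eta, w)
```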

5.12 Inequalities

Theorem 5.32 Let ξ be an uncertain variable, and f a nonnegative function. If f is even and increasing on [0, ∞), then for any given number t > 0, we have

M{|ξ| ≥ t} ≤ E[f(ξ)] / f(t). (5.38)


Proof: It is clear that M{|ξ| ≥ f^{-1}(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f(ξ) that

E[f(ξ)] = ∫_0^{+∞} M{f(ξ) ≥ r} dr
        = ∫_0^{+∞} M{|ξ| ≥ f^{-1}(r)} dr
        ≥ ∫_0^{f(t)} M{|ξ| ≥ f^{-1}(r)} dr
        ≥ ∫_0^{f(t)} dr · M{|ξ| ≥ f^{-1}(f(t))}
        = f(t) · M{|ξ| ≥ t}

which proves the inequality.

Theorem 5.33 (Markov Inequality) Let ξ be an uncertain variable. Then for any given numbers t > 0 and p > 0, we have

M{|ξ| ≥ t} ≤ E[|ξ|^p] / t^p. (5.39)

Proof: It is a special case of Theorem 5.32 when f(x) = |x|^p.

Theorem 5.34 (Chebyshev Inequality) Let ξ be an uncertain variable whose variance V[ξ] exists. Then for any given number t > 0, we have

M{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t^2. (5.40)

Proof: It is a special case of Theorem 5.32 when the uncertain variable ξ is replaced with ξ − E[ξ], and f(x) = x^2.
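
The Markov and Chebyshev inequalities can be checked numerically in the probability special case, where the measure of a simple variable is a weight vector summing to 1. The values and weights below are illustrative, not from the text.

```python
# Illustrative simple variable: values xs with weights ms (a probability measure).
xs = [-2.0, -0.5, 1.0, 3.0]
ms = [0.1, 0.4, 0.3, 0.2]

E = lambda f: sum(m * f(x) for x, m in zip(xs, ms))
measure = lambda pred: sum(m for x, m in zip(xs, ms) if pred(x))

t, p = 1.5, 2
# Markov (5.39): M{|xi| >= t} <= E[|xi|^p] / t^p.
assert measure(lambda x: abs(x) >= t) <= E(lambda x: abs(x) ** p) / t ** p

# Chebyshev (5.40): M{|xi - E[xi]| >= t} <= V[xi] / t^2.
mu = E(lambda x: x)
var = E(lambda x: (x - mu) ** 2)
assert measure(lambda x: abs(x - mu) >= t) <= var / t ** 2
```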

Theorem 5.35 (Hölder's Inequality) Let p and q be positive real numbers with 1/p + 1/q = 1, and let ξ and η be independent uncertain variables with E[|ξ|^p] < ∞ and E[|η|^q] < ∞. Then we have

E[|ξη|] ≤ (E[|ξ|^p])^{1/p} · (E[|η|^q])^{1/q}. (5.41)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function f(x, y) = x^{1/p} y^{1/q} is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers a and b such that

f(x, y) − f(x0, y0) ≤ a(x − x0) + b(y − y0), ∀(x, y) ∈ D.


Letting x0 = E[|ξ|^p], y0 = E[|η|^q], x = |ξ|^p and y = |η|^q, we have

f(|ξ|^p, |η|^q) − f(E[|ξ|^p], E[|η|^q]) ≤ a(|ξ|^p − E[|ξ|^p]) + b(|η|^q − E[|η|^q]).

Taking the expected values on both sides, we obtain

E[f(|ξ|^p, |η|^q)] ≤ f(E[|ξ|^p], E[|η|^q]).

Since f(|ξ|^p, |η|^q) = |ξη|, this is exactly the inequality (5.41).
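
A numeric sketch of (5.41) in the independent-probability special case, where E[|ξη|] factors over a product space. The exponents satisfy 1/p + 1/q = 1/3 + 2/3 = 1; the spaces, values, and weights are illustrative, not from the text.

```python
p, q = 3.0, 1.5   # conjugate exponents: 1/3 + 2/3 = 1
xs, mx = [0.5, 2.0], [0.6, 0.4]   # illustrative variable xi
ys, my = [1.0, 3.0], [0.3, 0.7]   # illustrative variable eta

E_x = lambda f: sum(m * f(x) for x, m in zip(xs, mx))
E_y = lambda f: sum(m * f(y) for y, m in zip(ys, my))

# Independence: E[|xi * eta|] is computed on the product space.
E_xy = sum(mi * mj * abs(x * y) for x, mi in zip(xs, mx) for y, mj in zip(ys, my))

# Hoelder (5.41): E[|xi*eta|] <= E[|xi|^p]^(1/p) * E[|eta|^q]^(1/q).
assert E_xy <= E_x(lambda x: abs(x) ** p) ** (1 / p) * E_y(lambda y: abs(y) ** q) ** (1 / q)
```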

Theorem 5.36 (Minkowski Inequality) Let p be a real number with p ≥ 1, and let ξ and η be independent uncertain variables with E[|ξ|^p] < ∞ and E[|η|^p] < ∞. Then we have

(E[|ξ + η|^p])^{1/p} ≤ (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p}. (5.42)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function f(x, y) = (x^{1/p} + y^{1/p})^p is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers a and b such that

f(x, y) − f(x0, y0) ≤ a(x − x0) + b(y − y0), ∀(x, y) ∈ D.

Letting x0 = E[|ξ|^p], y0 = E[|η|^p], x = |ξ|^p and y = |η|^p, we have

f(|ξ|^p, |η|^p) − f(E[|ξ|^p], E[|η|^p]) ≤ a(|ξ|^p − E[|ξ|^p]) + b(|η|^p − E[|η|^p]).

Taking the expected values on both sides, we obtain

E[f(|ξ|^p, |η|^p)] ≤ f(E[|ξ|^p], E[|η|^p]).

Since |ξ + η|^p ≤ (|ξ| + |η|)^p = f(|ξ|^p, |η|^p), the inequality (5.42) holds.
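
A numeric sketch of (5.42) with p = 2 in the independent-probability special case; the values and weights are illustrative, not from the text.

```python
p = 2.0
xs, mx = [1.0, -2.0], [0.5, 0.5]   # illustrative variable xi
ys, my = [0.5, 3.0], [0.4, 0.6]    # illustrative variable eta

norm_x = sum(m * abs(x) ** p for x, m in zip(xs, mx)) ** (1 / p)
norm_y = sum(m * abs(y) ** p for y, m in zip(ys, my)) ** (1 / p)

# Under independence, xi + eta lives on the product space.
norm_sum = sum(mi * mj * abs(x + y) ** p
               for x, mi in zip(xs, mx) for y, mj in zip(ys, my)) ** (1 / p)

# Minkowski (5.42): ||xi + eta||_p <= ||xi||_p + ||eta||_p.
assert norm_sum <= norm_x + norm_y
```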

Theorem 5.37 (Jensen's Inequality) Let ξ be an uncertain variable, and f : ℝ → ℝ a convex function. If E[ξ] and E[f(ξ)] are finite, then

f(E[ξ]) ≤ E[f(ξ)]. (5.43)

Especially, when f(x) = |x|^p and p ≥ 1, we have |E[ξ]|^p ≤ E[|ξ|^p].

Proof: Since f is a convex function, for each y, there exists a number k such that f(x) − f(y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain

f(ξ) − f(E[ξ]) ≥ k · (ξ − E[ξ]).

Taking the expected values on both sides, we have

E[f(ξ)] − f(E[ξ]) ≥ k · (E[ξ] − E[ξ]) = 0

which proves the inequality.
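
The special case f(x) = |x|^p of Jensen's inequality can be checked numerically on a simple variable in the probability special case; the values and weights are illustrative.

```python
# Illustrative simple variable with weights summing to 1.
xs, ms = [-1.0, 0.5, 2.0], [0.3, 0.3, 0.4]
E = lambda f: sum(m * f(x) for x, m in zip(xs, ms))

p = 3
mu = E(lambda x: x)
# Jensen (5.43) with f(x) = |x|^p: |E[xi]|^p <= E[|xi|^p].
assert abs(mu) ** p <= E(lambda x: abs(x) ** p)
```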


5.13 Convergence Concepts

We have the following four convergence concepts of uncertain sequence: convergence almost surely (a.s.), convergence in measure, convergence in mean, and convergence in distribution. The relations among them are given in Table 5.1.

Table 5.1: Relationship among Convergence Concepts

Convergence in Mean ⇒ Convergence in Measure ⇒ Convergence in Distribution

Definition 5.21 Suppose that ξ, ξ1, ξ2, · · · are uncertain variables defined on the uncertainty space (Γ, L, M). The sequence {ξi} is said to be convergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that

lim_{i→∞} |ξi(γ) − ξ(γ)| = 0 (5.44)

for every γ ∈ Λ. In that case we write ξi → ξ, a.s.

Definition 5.22 Suppose that ξ, ξ1, ξ2, · · · are uncertain variables. We say that the sequence {ξi} converges in measure to ξ if

lim_{i→∞} M{|ξi − ξ| ≥ ε} = 0 (5.45)

for every ε > 0.

Definition 5.23 Suppose that ξ, ξ1, ξ2, · · · are uncertain variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0. (5.46)

Definition 5.24 Suppose that Φ, Φ1, Φ2, · · · are the uncertainty distributions of uncertain variables ξ, ξ1, ξ2, · · · , respectively. We say that {ξi} converges in distribution to ξ if Φi → Φ at any continuity point of Φ.

Theorem 5.25 Suppose that ξ, ξ1, ξ2, · · · are uncertain variables. If {ξi} converges in mean to ξ, then {ξi} converges in measure to ξ.

Proof: It follows from the Markov inequality that for any given number ε > 0, we have

M{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|] / ε → 0

as i → ∞. Thus {ξi} converges in measure to ξ. The theorem is proved.
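
The mechanism of Theorem 5.25 can be illustrated with a concrete sequence. This is a sketch under an assumed example: take ξi = ξ + 1/i, so E[|ξi − ξ|] = 1/i → 0, and the Markov bound M{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|]/ε forces convergence in measure.

```python
# Markov bounds E[|xi_i - xi|]/eps = (1/i)/eps for the illustrative sequence
# xi_i = xi + 1/i and a fixed tolerance eps.
eps = 0.01
bounds = [(1 / i) / eps for i in range(1, 10001)]

assert all(b2 <= b1 for b1, b2 in zip(bounds, bounds[1:]))  # bound decreases to 0
assert bounds[-1] < 0.02   # at i = 10000 the bound on M{|xi_i - xi| >= eps} is tiny
```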


Theorem 5.26 Suppose ξ, ξ1, ξ2, · · · are uncertain variables. If {ξi} converges in measure to ξ, then {ξi} converges in distribution to ξ.

Proof: Let x be a given continuity point of the uncertainty distribution Φ. On the one hand, for any y > x, we have

{ξi ≤ x} = {ξi ≤ x, ξ ≤ y} ∪ {ξi ≤ x, ξ > y} ⊂ {ξ ≤ y} ∪ {|ξi − ξ| ≥ y − x}.

It follows from the countable subadditivity axiom that

Φi(x) ≤ Φ(y) + M{|ξi − ξ| ≥ y − x}.

Since {ξi} converges in measure to ξ, we have M{|ξi − ξ| ≥ y − x} → 0 as i → ∞. Thus we obtain lim sup_{i→∞} Φi(x) ≤ Φ(y) for any y > x. Letting y → x, we get

lim sup_{i→∞} Φi(x) ≤ Φ(x). (5.47)

On the other hand, for any z < x, we have

{ξ ≤ z} = {ξi ≤ x, ξ ≤ z} ∪ {ξi > x, ξ ≤ z} ⊂ {ξi ≤ x} ∪ {|ξi − ξ| ≥ x − z}

which implies that

Φ(z) ≤ Φi(x) + M{|ξi − ξ| ≥ x − z}.

Since M{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf_{i→∞} Φi(x) for any z < x. Letting z → x, we get

Φ(x) ≤ lim inf_{i→∞} Φi(x). (5.48)

It follows from (5.47) and (5.48) that Φi(x) → Φ(x). The theorem is proved.

5.14 Characteristic Function

This section introduces the concept of the characteristic function of an uncertain variable, and provides the inversion formula, uniqueness theorem, and continuity theorem.

Definition 5.27 Let ξ be an uncertain variable with uncertainty distribution Φ. Then the characteristic function of ξ is defined by

ϕ(t) = ∫_{−∞}^{+∞} e^{itx} dΦ(x), ∀t ∈ ℝ (5.49)

provided that the Lebesgue-Stieltjes integral exists, where e^{itx} = cos tx + i sin tx and i = √−1.
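
As a numeric sketch of Definition 5.27 under an assumed example (not from the text), take an uncertainty distribution rising linearly from 0 at a to 1 at b, so dΦ(x) = dx/(b − a) and the Lebesgue-Stieltjes integral becomes an ordinary one with the standard closed form (e^{itb} − e^{ita})/(it(b − a)).

```python
import cmath

a, b, t, n = 0.0, 2.0, 1.3, 20000
h = (b - a) / n

# phi(t) = integral of e^{itx} dPhi(x), approximated by a midpoint Riemann sum.
phi_t = sum(cmath.exp(1j * t * (a + (k + 0.5) * h)) * h / (b - a) for k in range(n))

# Closed form of the same integral for this linear distribution.
closed = (cmath.exp(1j * t * b) - cmath.exp(1j * t * a)) / (1j * t * (b - a))

assert abs(phi_t - closed) < 1e-6
# Consistent with Theorem 5.38(b) here: |phi(t)| <= phi(0) = 1.
assert abs(phi_t) <= 1.0 + 1e-9
```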


Theorem 5.38 Let ξ be an uncertain variable with uncertainty distribution Φ and characteristic function ϕ. Then we have
(a) ϕ(0) = lim_{x→+∞} Φ(x) − lim_{x→−∞} Φ(x);
(b) |ϕ(t)| ≤ ϕ(0);
(c) ϕ(−t) equals the complex conjugate of ϕ(t);
(d) ϕ(t) is a uniformly continuous function on ℝ.

Proof: Like Theorem 3.66.

Theorem 5.39 (Inversion Formula) Let ξ be an uncertain variable with uncertainty distribution Φ and characteristic function ϕ. Then

Φ(b) − Φ(a) = lim_{T→+∞} (1/2π) ∫_{−T}^{T} [(e^{−iat} − e^{−ibt}) / (it)] ϕ(t) dt (5.50)

holds for all points a, b (a < b) at which Φ is continuous.

Proof: Like Theorem 3.67.

Theorem 5.40 (Uniqueness Theorem) Let Φ1 and Φ2 be two uncertainty distributions with characteristic functions ϕ1 and ϕ2, respectively. Then ϕ1 = ϕ2 if and only if there is a constant c such that Φ1 = Φ2 + c.

Proof: Like Theorem 3.68.

Theorem 5.41 (Continuity Theorem) Let Φ, Φ1, Φ2, · · · be a sequence of uncertainty distributions satisfying

cn = lim_{x→+∞} Φ(x) − lim_{x→+∞} Φn(x) = lim_{x→−∞} Φ(x) − lim_{x→−∞} Φn(x), ∀n

and let ϕ, ϕ1, ϕ2, · · · be the corresponding characteristic functions. Then {Φn + cn} converges to Φ at any continuity point of Φ if and only if {ϕn} converges uniformly to ϕ in an arbitrary finite interval [c, d].

Proof: Like Theorem 3.69.

5.15 Conditional Uncertainty

We consider the uncertain measure of an event A after it has been learned that some other event B has occurred. This new uncertain measure of A is called the conditional uncertain measure of A given B.

For any events A and B, since (A ∩ B) ∪ (Ac ∩ B) = B, we have M{B} ≤ M{A ∩ B} + M{Ac ∩ B} by using the countable subadditivity axiom. Thus

0 ≤ 1 − M{Ac ∩ B} / M{B} ≤ M{A ∩ B} / M{B} ≤ 1 (5.51)


provided that M{B} > 0. Any number between 1 − M{Ac ∩ B}/M{B} and M{A ∩ B}/M{B} is a reasonable value that the conditional uncertain measure may take. Based on the maximum uncertainty principle, we have the following conditional uncertain measure.

Definition 5.28 Let (Γ, L, M) be an uncertainty space, and A, B ∈ L. Then the conditional uncertain measure of A given B is defined by

M{A|B} = M{A ∩ B} / M{B},       if M{A ∩ B} / M{B} < 0.5;
         1 − M{Ac ∩ B} / M{B},  if M{Ac ∩ B} / M{B} < 0.5;      (5.52)
         0.5,                   otherwise

provided that M{B} > 0.

It follows immediately from the definition of conditional uncertain measure that

1 − M{Ac ∩ B} / M{B} ≤ M{A|B} ≤ M{A ∩ B} / M{B}. (5.53)

Furthermore, the conditional uncertain measure obeys the maximum uncertainty principle, and takes values as close to 0.5 as possible.
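
The three-branch definition (5.52) translates directly into code. This is a minimal sketch; the function name and the numeric inputs are illustrative, not from the text.

```python
def conditional(m_ab, m_acb, m_b):
    """M{A|B} per (5.52), from M{A and B}, M{Ac and B}, and M{B} > 0."""
    if m_b <= 0:
        raise ValueError("requires M{B} > 0")
    if m_ab / m_b < 0.5:
        return m_ab / m_b
    if m_acb / m_b < 0.5:
        return 1 - m_acb / m_b
    return 0.5

# Illustrative numbers satisfying M{A and B} + M{Ac and B} >= M{B}.
m_b, m_ab, m_acb = 0.8, 0.3, 0.6
v = conditional(m_ab, m_acb, m_b)

# Bounds (5.53): 1 - M{Ac and B}/M{B} <= M{A|B} <= M{A and B}/M{B}.
assert 1 - m_acb / m_b - 1e-12 <= v <= m_ab / m_b + 1e-12
# Self-duality: M{A|B} + M{Ac|B} = 1 (swap the roles of A and Ac).
assert abs(v + conditional(m_acb, m_ab, m_b) - 1.0) < 1e-12
```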

Theorem 5.42 Let (Γ, L, M) be an uncertainty space, and B an event with M{B} > 0. Then M{·|B} defined by (5.52) is an uncertain measure, and (Γ, L, M{·|B}) is an uncertainty space.

Proof: It is sufficient to prove that M{·|B} satisfies the normality, monotonicity, self-duality and countable subadditivity axioms. At first, it satisfies the normality axiom, i.e.,

M{Γ|B} = 1 − M{Γc ∩ B} / M{B} = 1 − M{∅} / M{B} = 1.

For any events A1 and A2 with A1 ⊂ A2, if

M{A1 ∩ B} / M{B} ≤ M{A2 ∩ B} / M{B} < 0.5,

then

M{A1|B} = M{A1 ∩ B} / M{B} ≤ M{A2 ∩ B} / M{B} = M{A2|B}.

If

M{A1 ∩ B} / M{B} ≤ 0.5 ≤ M{A2 ∩ B} / M{B},


then M{A1|B} ≤ 0.5 ≤ M{A2|B}. If

0.5 < M{A1 ∩ B} / M{B} ≤ M{A2 ∩ B} / M{B},

then we have

M{A1|B} = (1 − M{Ac1 ∩ B} / M{B}) ∨ 0.5 ≤ (1 − M{Ac2 ∩ B} / M{B}) ∨ 0.5 = M{A2|B}.

This means that M{·|B} satisfies the monotonicity axiom. For any event A, if

M{A ∩ B} / M{B} ≥ 0.5 and M{Ac ∩ B} / M{B} ≥ 0.5,

then we have M{A|B} + M{Ac|B} = 0.5 + 0.5 = 1 immediately. Otherwise, without loss of generality, suppose

M{A ∩ B} / M{B} < 0.5 < M{Ac ∩ B} / M{B},

then we have

M{A|B} + M{Ac|B} = M{A ∩ B} / M{B} + (1 − M{A ∩ B} / M{B}) = 1.

That is, M{·|B} satisfies the self-duality axiom. Finally, for any countable sequence {Ai} of events, if M{Ai|B} < 0.5 for all i, it follows from (5.53) and the countable subadditivity axiom that

M{⋃_{i=1}^∞ Ai | B} ≤ M{⋃_{i=1}^∞ (Ai ∩ B)} / M{B} ≤ (∑_{i=1}^∞ M{Ai ∩ B}) / M{B} = ∑_{i=1}^∞ M{Ai|B}.

Suppose there is one term greater than 0.5, say M{A1|B} ≥ 0.5 and M{Ai|B} < 0.5 for i = 2, 3, · · · If M{∪iAi|B} = 0.5, then we immediately have

M{⋃_{i=1}^∞ Ai | B} ≤ ∑_{i=1}^∞ M{Ai|B}.

If M{∪iAi|B} > 0.5, we may prove the above inequality by the following facts:

Ac1 ∩ B ⊂ ⋃_{i=2}^∞ (Ai ∩ B) ∪ (⋂_{i=1}^∞ Aci ∩ B),

M{Ac1 ∩ B} ≤ ∑_{i=2}^∞ M{Ai ∩ B} + M{⋂_{i=1}^∞ Aci ∩ B},


M{⋃_{i=1}^∞ Ai | B} = 1 − M{⋂_{i=1}^∞ Aci ∩ B} / M{B},

∑_{i=1}^∞ M{Ai|B} ≥ 1 − M{Ac1 ∩ B} / M{B} + ∑_{i=2}^∞ M{Ai ∩ B} / M{B}.

If there are at least two terms greater than 0.5, then the countable subadditivity is clearly true. Thus M{·|B} satisfies the countable subadditivity axiom. Hence M{·|B} is an uncertain measure. Furthermore, (Γ, L, M{·|B}) is an uncertainty space.

Remark 5.7: If M is a probability measure, then M{A|B} is just the conditional probability of A given B.

Remark 5.8: If M is a credibility measure, then M{A|B} is just the conditional credibility of A given B.

Remark 5.9: We may define the conditional uncertain measure with respect to a σ-algebra rather than a single event. For this case, the conditional uncertain measure is not a constant but an uncertain variable. In addition, if L′ is the σ-algebra generated by the uncertain variable η, then the conditional uncertain measure given η is

M{A|η} = M{A|L′} (5.54)

for each event A.

Example 5.12: Let ξ and η be two uncertain variables. Then we have

M{ξ = x|η = y} = M{ξ = x, η = y} / M{η = y},      if M{ξ = x, η = y} / M{η = y} < 0.5;
                 1 − M{ξ ≠ x, η = y} / M{η = y},  if M{ξ ≠ x, η = y} / M{η = y} < 0.5;
                 0.5,                             otherwise

provided that M{η = y} > 0.

Definition 5.29 The conditional uncertainty distribution Φ: ℝ → [0, 1] of an uncertain variable ξ given B is defined by

Φ(x|B) = M{ξ ≤ x|B} (5.55)

provided that M{B} > 0.


Example 5.13: Let ξ and η be uncertain variables. Then the conditional uncertainty distribution of ξ given η = y is

Φ(x|η = y) = M{ξ ≤ x, η = y} / M{η = y},      if M{ξ ≤ x, η = y} / M{η = y} < 0.5;
             1 − M{ξ > x, η = y} / M{η = y},  if M{ξ > x, η = y} / M{η = y} < 0.5;
             0.5,                             otherwise

provided that M{η = y} > 0.

Definition 5.30 The conditional uncertainty density function φ of an uncertain variable ξ given B is a nonnegative function such that

Φ(x|B) = ∫_{−∞}^x φ(y|B) dy, ∀x ∈ ℝ, (5.56)

∫_{−∞}^{+∞} φ(y|B) dy = 1 (5.57)

where Φ(x|B) is the conditional uncertainty distribution of ξ given B.

Definition 5.31 Let ξ be an uncertain variable. Then the conditional expected value of ξ given B is defined by

E[ξ|B] = ∫_0^{+∞} M{ξ ≥ r|B} dr − ∫_{−∞}^0 M{ξ ≤ r|B} dr (5.58)

provided that at least one of the two integrals is finite.

Theorem 5.43 Let ξ be an uncertain variable whose conditional uncertainty density function φ(x|B) exists. If the Lebesgue integral

∫_{−∞}^{+∞} x φ(x|B) dx

is finite, then the conditional expected value of ξ given B is

E[ξ|B] = ∫_{−∞}^{+∞} x φ(x|B) dx. (5.59)

Proof: Like Theorem 5.11.
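
As a numeric sketch, (5.58) and (5.59) can be compared in the special case where M acts like a probability measure, so that M{ξ ≥ r|B} = 1 − Φ(r|B). The conditional density below is taken uniform on [1, 3] purely for illustration; both formulas should then give the midpoint 2.

```python
c, d, n = 1.0, 3.0, 20000
phi_density = 1.0 / (d - c)        # phi(x|B) on [c, d], zero elsewhere

# (5.59): E[xi|B] = integral of x * phi(x|B) dx over [c, d].
h = (d - c) / n
e_density = sum((c + (k + 0.5) * h) * phi_density * h for k in range(n))

# (5.58): E[xi|B] = integral over r >= 0 of M{xi >= r|B} dr (xi >= 0 here,
# so the second integral vanishes). M{xi >= r|B} = 1 for r < c and
# (d - r)/(d - c) for c <= r <= d, zero beyond d.
def tail(r):
    return 1.0 if r < c else max(0.0, (d - r) / (d - c))

h2 = d / n
e_tail = sum(tail((k + 0.5) * h2) * h2 for k in range(n))

assert abs(e_density - (c + d) / 2) < 1e-6
assert abs(e_tail - (c + d) / 2) < 1e-6
```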

Theorem 5.44 Let ξ be an uncertain variable with conditional uncertainty distribution Φ(x|B). If

lim_{x→−∞} Φ(x|B) = 0,   lim_{x→+∞} Φ(x|B) = 1


and the Lebesgue-Stieltjes integral

∫_{−∞}^{+∞} x dΦ(x|B)

is finite, then the conditional expected value of ξ given B is

E[ξ|B] = ∫_{−∞}^{+∞} x dΦ(x|B). (5.60)

Proof: Like Theorem 5.12.