
B. Liu

Uncertainty Theory

Springer-Verlag Berlin Heidelberg GmbH


Studies in Fuzziness and Soft Computing, Volume 154

Editor-in-chief
Prof. Janusz Kacprzyk
Systems Research Institute
Polish Academy of Sciences
ul. Newelska 6
01-447 Warsaw
Poland
E-mail: [email protected]

Further volumes of this series can be found on our homepage: springeronline.com

Vol. 136. L. Wang (Ed.), Soft Computing in Communications, 2004, ISBN 3-540-40575-5

Vol. 137. V. Loia, M. Nikravesh, L.A. Zadeh (Eds.), Fuzzy Logic and the Internet, 2004, ISBN 3-540-20180-7

Vol. 138. S. Sirmakessis (Ed.), Text Mining and its Applications, 2004, ISBN 3-540-20238-2

Vol. 139. M. Nikravesh, B. Azvine, I. Yager, L.A. Zadeh (Eds.), Enhancing the Power of the Internet, 2004, ISBN 3-540-20237-4

Vol. 140. A. Abraham, L.C. Jain, B.J. van der Zwaag (Eds.), Innovations in Intelligent Systems, 2004, ISBN 3-540-20265-X

Vol. 141. G.C. Onwubolu, B.V. Babu, New Optimization Techniques in Engineering, 2004, ISBN 3-540-20167-X

Vol. 142. M. Nikravesh, L.A. Zadeh, V. Korotkikh (Eds.), Fuzzy Partial Differential Equations and Relational Equations, 2004, ISBN 3-540-20322-2

Vol. 143. L. Rutkowski, New Soft Computing Techniques for System Modelling, Pattern Classification and Image Processing, 2004, ISBN 3-540-20584-5

Vol. 144. Z. Sun, G.R. Finnie, Intelligent Techniques in E-Commerce, 2004, ISBN 3-540-20518-7

Vol. 145. J. Gil-Aluja, Fuzzy Sets in the Management of Uncertainty, 2004, ISBN 3-540-20341-9

Vol. 146. J.A. Gámez, S. Moral, A. Salmerón (Eds.), Advances in Bayesian Networks, 2004, ISBN 3-540-20876-3

Vol. 147. K. Watanabe, M.M.A. Hashem, New Algorithms and their Applications to Evolutionary Robots, 2004, ISBN 3-540-20901-8

Vol. 148. C. Martin-Vide, V. Mitrana, G. Păun (Eds.), Formal Languages and Applications, 2004, ISBN 3-540-20907-7

Vol. 149. J.J. Buckley, Fuzzy Statistics, 2004, ISBN 3-540-21084-9

Vol. 150. L. Bull (Ed.), Applications of Learning Classifier Systems, 2004, ISBN 3-540-21109-8

Vol. 151. T. Kowalczyk, E. Pleszczyńska, F. Ruland (Eds.), Grade Models and Methods for Data Analysis, 2004, ISBN 3-540-21120-9

Vol. 152. J. Rajapakse, L. Wang (Eds.), Neural Information Processing: Research and Development, 2004, ISBN 3-540-21123-3

Vol. 153. J. Fulcher, L.C. Jain (Eds.), Applied Intelligent Systems, 2004, ISBN 3-540-21153-5


Baoding Liu

Uncertainty Theory
An Introduction to its Axiomatic Foundations



Prof. Baoding Liu
Uncertainty Theory Laboratory
Dept. of Mathematical Science
Tsinghua University
Beijing 100084
China
E-mail: [email protected]

ISSN 1434-9922

Library of Congress Control Number: 2004103354

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitations, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

springeronline.com

© Springer-Verlag Berlin Heidelberg 2004

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: data delivered by author
Cover design: E. Kirchner, Springer-Verlag, Heidelberg
Printed on acid-free paper 62/3020/M - 5 4 3 2 1 0

ISBN 978-3-662-13262-3 ISBN 978-3-540-39987-2 (eBook) DOI 10.1007/978-3-540-39987-2

Springer-Verlag Berlin Heidelberg GmbH.

Originally published by Springer-Verlag Berlin Heidelberg New York in 2004


Contents

Preface

1 Measure and Integral
1.1 Measure
1.2 Borel Set
1.3 Lebesgue Measure
1.4 Measurable Function
1.5 Lebesgue Integral
1.6 Lebesgue-Stieltjes Integral

2 Probability Theory
2.1 Three Axioms
2.2 Random Variables
2.3 Probability Distribution
2.4 Independent and Identical Distribution
2.5 Expected Value Operator
2.6 Variance, Covariance and Moments
2.7 Optimistic and Pessimistic Values
2.8 Some Inequalities
2.9 Characteristic Function
2.10 Convergence Concepts
2.11 Laws of Large Numbers
2.12 Conditional Probability
2.13 Stochastic Simulations

3 Credibility Theory
3.1 Four Axioms
3.2 Fuzzy Variables
3.3 Credibility Distribution
3.4 Independent and Identical Distribution
3.5 Optimistic and Pessimistic Values
3.6 Expected Value Operator
3.7 Variance, Covariance and Moments
3.8 Some Inequalities
3.9 Characteristic Function
3.10 Convergence Concepts
3.11 Fuzzy Simulations

4 Trust Theory
4.1 Rough Set
4.2 Four Axioms
4.3 Rough Variable
4.4 Trust Distribution
4.5 Independent and Identical Distribution
4.6 Expected Value Operator
4.7 Variance, Covariance and Moments
4.8 Optimistic and Pessimistic Values
4.9 Some Inequalities
4.10 Characteristic Function
4.11 Convergence Concepts
4.12 Laws of Large Numbers
4.13 Conditional Trust
4.14 Rough Simulations

5 Fuzzy Random Theory
5.1 Fuzzy Random Variables
5.2 Chance Measure
5.3 Chance Distribution
5.4 Independent and Identical Distribution
5.5 Expected Value Operator
5.6 Variance, Covariance and Moments
5.7 Optimistic and Pessimistic Values
5.8 Convergence Concepts
5.9 Laws of Large Numbers
5.10 Fuzzy Random Simulations

6 Random Fuzzy Theory
6.1 Random Fuzzy Variables
6.2 Chance Measure
6.3 Chance Distribution
6.4 Independent and Identical Distribution
6.5 Expected Value Operator
6.6 Variance, Covariance and Moments
6.7 Optimistic and Pessimistic Values
6.8 Convergence Concepts
6.9 Random Fuzzy Simulations

7 Bifuzzy Theory
7.1 Bifuzzy Variables
7.2 Chance Measure
7.3 Chance Distribution
7.4 Independent and Identical Distribution
7.5 Expected Value Operator
7.6 Variance, Covariance and Moments
7.7 Optimistic and Pessimistic Values
7.8 Convergence Concepts
7.9 Bifuzzy Simulations

8 Birandom Theory
8.1 Birandom Variables
8.2 Chance Measure
8.3 Chance Distribution
8.4 Independent and Identical Distribution
8.5 Expected Value Operator
8.6 Variance, Covariance and Moments
8.7 Optimistic and Pessimistic Values
8.8 Convergence Concepts
8.9 Laws of Large Numbers
8.10 Birandom Simulations

9 Rough Random Theory
9.1 Rough Random Variables
9.2 Chance Measure
9.3 Chance Distribution
9.4 Independent and Identical Distribution
9.5 Expected Value Operator
9.6 Variance, Covariance and Moments
9.7 Optimistic and Pessimistic Values
9.8 Convergence Concepts
9.9 Laws of Large Numbers
9.10 Rough Random Simulations

10 Rough Fuzzy Theory
10.1 Rough Fuzzy Variables
10.2 Chance Measure
10.3 Chance Distribution
10.4 Independent and Identical Distribution
10.5 Expected Value Operator
10.6 Variance, Covariance and Moments
10.7 Optimistic and Pessimistic Values
10.8 Convergence Concepts
10.9 Rough Fuzzy Simulations

11 Random Rough Theory
11.1 Random Rough Variables
11.2 Chance Measure
11.3 Chance Distribution
11.4 Independent and Identical Distribution
11.5 Expected Value Operator
11.6 Variance, Covariance and Moments
11.7 Optimistic and Pessimistic Values
11.8 Convergence Concepts
11.9 Laws of Large Numbers
11.10 Random Rough Simulations

12 Fuzzy Rough Theory
12.1 Fuzzy Rough Variables
12.2 Chance Measure
12.3 Chance Distribution
12.4 Independent and Identical Distribution
12.5 Expected Value Operator
12.6 Variance, Covariance and Moments
12.7 Optimistic and Pessimistic Values
12.8 Convergence Concepts
12.9 Laws of Large Numbers
12.10 Fuzzy Rough Simulations

13 Birough Theory
13.1 Birough Variables
13.2 Chance Measure
13.3 Chance Distribution
13.4 Independent and Identical Distribution
13.5 Expected Value Operator
13.6 Variance, Covariance and Moments
13.7 Optimistic and Pessimistic Values
13.8 Convergence Concepts
13.9 Laws of Large Numbers
13.10 Birough Simulations

14 Some Remarks
14.1 Uncertainty Theory Tree
14.2 Multifold Uncertainty
14.3 Ranking Uncertain Variables
14.4 Nonclassical Credibility Theory
14.5 Generalized Trust Theory

Bibliography

List of Frequently Used Symbols

Index


Preface

As a branch of mathematics that studies the behavior of random, fuzzy and rough events, uncertainty theory is the generic name for probability theory, credibility theory, and trust theory. The main purpose of this book is to provide axiomatic foundations of uncertainty theory.

It is generally believed that the study of probability theory was started by Pascal and Fermat in 1654, when they succeeded in deriving the exact probabilities for a certain gambling problem. Great progress was achieved when von Mises introduced the concept of sample space and filled the gap between probability theory and measure theory in 1931. A complete axiomatic foundation of probability theory was given by Kolmogoroff in 1933. Since then, probability theory has developed steadily and has been widely applied in science and engineering. The axiomatic foundation of probability theory will be introduced in Chapter 2.

The fuzzy set was introduced by Zadeh via membership function in 1965, and has since been well developed and applied to a wide variety of real problems. As a fuzzy set of real numbers, the term fuzzy variable was first introduced by Kaufmann in 1975. In order to provide a mathematical foundation, Nahmias gave three axioms to define possibility measure in 1978, and Liu gave a fourth axiom to define product possibility measure in 2002. There are three types of measure in the fuzzy world: possibility, necessity, and credibility. Traditionally, possibility measure is regarded as the parallel concept of probability measure. However, it is in fact the credibility measure that plays the role of probability measure! This fact provides the motivation to develop an axiomatic approach based on credibility measure, called credibility theory. Generally speaking, credibility theory is the branch of mathematics that studies the behavior of fuzzy events. Chapter 3 will provide a complete axiomatic foundation of credibility theory.

The rough set was introduced by Pawlak in 1982 and has proved to be an excellent mathematical tool for dealing with vague descriptions of objects. A fundamental assumption is that any object from a universe is perceived through available information, and such information may not be sufficient to characterize the object exactly. A rough set is then defined by a pair of crisp sets, called the lower and the upper approximations. In order to give an axiomatic foundation, the concept of rough space was presented by Liu in 2002, and a rough variable was defined as a measurable function from a rough space to the set of real numbers, thus offering a trust theory. Chapter 4 will introduce the axiomatic foundation of trust theory.


The random variable has been extended in many ways. For example, a fuzzy random variable is a measurable function from a probability space to the set of fuzzy variables; a rough random variable is a measurable function from a probability space to the set of rough variables; and a birandom variable is a measurable function from a probability space to the set of random variables.

As extensions of the fuzzy variable, a random fuzzy variable is a function from a possibility space to the set of random variables; a bifuzzy variable is a function from a possibility space to the set of fuzzy variables; and a rough fuzzy variable is a function from a possibility space to the set of rough variables.

The rough variable has been extended to the random rough variable, the fuzzy rough variable, and the birough variable, defined as measurable functions from a rough space to the set of random variables, fuzzy variables, and rough variables, respectively.

The book is suitable for mathematicians, researchers, engineers, designers, and students in the fields of applied mathematics, operations research, probability and statistics, industrial engineering, information science, and management science. Readers will learn the axiomatic approach of uncertainty theory, and will find this work a stimulating and useful reference.

I would like to thank H.T. Nguyen, K. Iwamura, M. Gen, A.O. Esogbue, R. Zhao, Y. Liu, J. Zhou, J. Peng, M. Lu, J. Gao, G. Wang, H. Ke, Y. Zhu, L. Yang, L. Liu, and Y. Zheng for their valuable comments. Special thanks are due to Pingke Li for his assistance in proofreading. I am also indebted to a series of grants from the National Natural Science Foundation, the Ministry of Education, and the Ministry of Science and Technology of China. Finally, I express my deep gratitude to Professor Janusz Kacprzyk for the invitation to publish this book in his series, and to the editorial staff of Springer for their wonderful cooperation and helpful comments.

Baoding Liu
Tsinghua University
http://orsc.edu.cn/∼liu
January 2004


To My Wife Jinlan


Chapter 1

Measure and Integral

Measure theory is a branch of mathematics. Length, area, volume and weight are instances of the measure concept. The emphasis in this chapter is mainly on the concepts of measure, Borel set, measurable function, Lebesgue integral, Lebesgue-Stieltjes integral, the monotone class theorem, the Caratheodory extension theorem, the measure continuity theorem, the product measure theorem, the monotone convergence theorem, Fatou's lemma, the Lebesgue dominated convergence theorem, and the Fubini theorem. The main results in this chapter are well known; for this reason credit references are not given. This chapter can be omitted by readers who are familiar with the basic concepts and theorems of measure and integral.

1.1 Measure

Definition 1.1 Let Ω be a nonempty set. A collection A is called an algebra of subsets of Ω if the following conditions hold:
(a) Ω ∈ A;
(b) if A ∈ A, then A^c ∈ A;
(c) if Ai ∈ A for i = 1, 2, · · · , n, then ⋃_{i=1}^n Ai ∈ A.
If condition (c) is replaced with closure under countable union, then A is called a σ-algebra.

Example 1.1: Assume that Ω is a nonempty set. Then {∅, Ω} is the smallest σ-algebra over Ω, and the power set P(Ω) (all subsets of Ω) is the largest σ-algebra over Ω.

Example 1.2: Let A be a proper nonempty subset of Ω. Then {∅, Ω, A, A^c} is the smallest σ-algebra containing A.
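On a finite universe the closure conditions of Definition 1.1 can be checked by brute force. The following Python sketch (our illustration; Ω, A and the variable names are not from the book) verifies that {∅, Ω, A, A^c} is closed under complement and finite union:

from itertools import combinations

# Small finite universe and a proper nonempty subset A (illustrative choices).
omega = frozenset({1, 2, 3, 4})
A = frozenset({1, 2})
family = {frozenset(), omega, A, omega - A}   # the candidate sigma-algebra

# Closure under complement and under pairwise (hence finite) union.
closed_complement = all(omega - S in family for S in family)
closed_union = all(S | T in family for S, T in combinations(family, 2))
print(closed_complement, closed_union)        # True True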

Example 1.3: Let A be the set of all finite disjoint unions of intervals of the form (−∞, a], (a, b], (b, ∞) and ℜ. Then A is an algebra, but not a σ-algebra, because Ai = (0, (i − 1)/i] ∈ A for all i but

⋃_{i=1}^∞ Ai = (0, 1) ∉ A.

Theorem 1.1 The intersection of any collection of σ-algebras is a σ-algebra. Furthermore, for any nonempty class C, there is a unique minimal σ-algebra containing C.

Proof: The first assertion is easily proved. Let A be the intersection of all σ-algebras containing C. It follows from the first assertion that A is a σ-algebra. It is easy to verify that A is the minimal σ-algebra containing C.

Theorem 1.2 A σ-algebra A is closed under difference, countable union, countable intersection, limit, upper limit, and lower limit. That is,

A2 \ A1 ∈ A;  ⋃_{i=1}^∞ Ai ∈ A;  ⋂_{i=1}^∞ Ai ∈ A;  lim_{i→∞} Ai ∈ A;   (1.1)

lim sup_{i→∞} Ai = ⋂_{k=1}^∞ ⋃_{i=k}^∞ Ai ∈ A;  lim inf_{i→∞} Ai = ⋃_{k=1}^∞ ⋂_{i=k}^∞ Ai ∈ A.   (1.2)

Proof: It follows immediately from the definition.

Definition 1.2 Let Ω be a nonempty set, and A a σ-algebra of subsets of Ω. Then (Ω, A) is called a measurable space, and the sets in A are called measurable sets.

Definition 1.3 Let (Ω, A) be a measurable space. A measure π is an extended real-valued function on A such that
(a) π{A} ≥ 0 for any A ∈ A;
(b) for every countable sequence of mutually disjoint measurable sets {Ai}_{i=1}^∞, we have

π{⋃_{i=1}^∞ Ai} = Σ_{i=1}^∞ π{Ai}.   (1.3)

Definition 1.4 Let (Ω, A) be a measurable space. A measure π is said to be finite if and only if π{A} is finite for any A ∈ A. A measure π is said to be σ-finite if and only if Ω can be written as ⋃_{i=1}^∞ Ai, where Ai ∈ A and π{Ai} < ∞ for all i.

Definition 1.5 Let Ω be a nonempty set, A a σ-algebra of subsets of Ω, and π a measure on A. Then the triplet (Ω, A, π) is called a measure space.

The monotone class theorem, the Caratheodory extension theorem, and the approximation theorem are listed here without proof. The interested reader may consult books on measure theory.


Theorem 1.3 (Monotone Class Theorem) Assume that A0 is an algebra of subsets of Ω, and C is a monotone class of subsets of Ω (i.e., if Ai ∈ C and Ai ↑ A or Ai ↓ A, then A ∈ C). If C contains A0, then C contains the smallest σ-algebra containing A0.

Theorem 1.4 (Caratheodory Extension Theorem) A σ-finite measure π on the algebra A0 has a unique extension to a measure on the smallest σ-algebra A containing A0.

Theorem 1.5 (Approximation Theorem) Let (Ω, A, π) be a measure space, and let A0 be an algebra of subsets of Ω such that A is the smallest σ-algebra containing A0. If π is σ-finite and A ∈ A has finite measure, then for any given ε > 0 there exists a set A0 ∈ A0 such that π{A \ A0} < ε.

Measure Continuity Theorems

Theorem 1.6 Let (Ω, A, π) be a measure space, and A1, A2, · · · ∈ A.
(a) If {Ai} is an increasing sequence, then

lim_{i→∞} π{Ai} = π{lim_{i→∞} Ai}.   (1.4)

(b) If {Ai} is a decreasing sequence and π{A1} is finite, then

lim_{i→∞} π{Ai} = π{lim_{i→∞} Ai}.   (1.5)

Proof: (a) Write Ai → A and A0 = ∅, the empty set. Then {Ai − Ai−1} is a sequence of disjoint sets with

⋃_{i=1}^∞ (Ai − Ai−1) = A,   ⋃_{i=1}^k (Ai − Ai−1) = Ak

for k = 1, 2, · · · Thus we have

π{A} = π{⋃_{i=1}^∞ (Ai − Ai−1)} = Σ_{i=1}^∞ π{Ai − Ai−1}
     = lim_{k→∞} Σ_{i=1}^k π{Ai − Ai−1} = lim_{k→∞} π{⋃_{i=1}^k (Ai − Ai−1)} = lim_{k→∞} π{Ak}.

Part (a) is proved.
(b) The sequence {A1 − Ai} is clearly increasing. It follows from π{A1} < ∞ and part (a) that

π{A1} − π{A} = π{lim_{i→∞} (A1 − Ai)} = lim_{i→∞} π{A1 − Ai} = π{A1} − lim_{i→∞} π{Ai}


which implies that π{Ai} → π{A}. The theorem is proved.

Example 1.4: If π{A1} is not finite, then part (b) of Theorem 1.6 does not hold. For example, let Ai = [i, +∞) for i = 1, 2, · · · and let π be the length of intervals. Then Ai ↓ ∅ as i → ∞. However, π{Ai} ≡ +∞, which does not converge to 0 = π{∅}.

Theorem 1.7 Let (Ω, A, π) be a measure space, and A1, A2, · · · ∈ A. Then we have

π{lim inf_{i→∞} Ai} ≤ lim inf_{i→∞} π{Ai}.   (1.6)

If π{⋃_{i=1}^∞ Ai} < ∞, then

lim sup_{i→∞} π{Ai} ≤ π{lim sup_{i→∞} Ai}.   (1.7)

Proof: (a) Since {⋂_{i=k}^∞ Ai} is an increasing sequence (in k) and ⋂_{i=k}^∞ Ai ⊂ Ak, we get

π{lim inf_{i→∞} Ai} = π{lim_{k→∞} ⋂_{i=k}^∞ Ai} = lim_{k→∞} π{⋂_{i=k}^∞ Ai} ≤ lim inf_{i→∞} π{Ai}.

(b) Similarly, {⋃_{i=k}^∞ Ai} is a decreasing sequence (in k) and ⋃_{i=k}^∞ Ai ⊃ Ak. Since π{⋃_{i=1}^∞ Ai} < ∞, part (b) of Theorem 1.6 yields

π{lim sup_{i→∞} Ai} = π{lim_{k→∞} ⋃_{i=k}^∞ Ai} = lim_{k→∞} π{⋃_{i=k}^∞ Ai} ≥ lim sup_{i→∞} π{Ai}.

The theorem is proved.

Example 1.5: The inequalities in Theorem 1.7 may be strict. For example, let

Ai = (0, 1] if i is odd,  Ai = (1, 2] if i is even,

for i = 1, 2, · · ·, and let π be the length of intervals. Then

π{lim inf_{i→∞} Ai} = π{∅} = 0 < 1 = lim inf_{i→∞} π{Ai},
lim sup_{i→∞} π{Ai} = 1 < 2 = π{(0, 2]} = π{lim sup_{i→∞} Ai}.

Theorem 1.8 Let (Ω, A, π) be a measure space, and A1, A2, · · · ∈ A. If π{⋃_{i=1}^∞ Ai} < ∞ and lim_{i→∞} Ai exists, then

lim_{i→∞} π{Ai} = π{lim_{i→∞} Ai}.   (1.8)

Proof: It follows from Theorem 1.7 that

π{lim inf_{i→∞} Ai} ≤ lim inf_{i→∞} π{Ai} ≤ lim sup_{i→∞} π{Ai} ≤ π{lim sup_{i→∞} Ai}.

Since lim_{i→∞} Ai exists, the lower and upper limits coincide, and the equation follows.


Product Measure Theorem

Let Ω1, Ω2, · · · , Ωn be any sets (not necessarily subsets of the same space). The Cartesian product Ω = Ω1 × Ω2 × · · · × Ωn is the set of all ordered n-tuples of the form (x1, x2, · · · , xn), where xi ∈ Ωi for i = 1, 2, · · · , n.

Definition 1.6 Let Ai be σ-algebras of subsets of Ωi, i = 1, 2, · · · , n, respectively. Write Ω = Ω1 × Ω2 × · · · × Ωn. A measurable rectangle in Ω is a set A = A1 × A2 × · · · × An, where Ai ∈ Ai for i = 1, 2, · · · , n. The smallest σ-algebra containing all measurable rectangles of Ω is called the product σ-algebra, denoted by A = A1 × A2 × · · · × An.

Note that the product σ-algebra A is the smallest σ-algebra containing the measurable rectangles, rather than the Cartesian product of A1, A2, · · · , An.

Theorem 1.9 (Product Measure Theorem) Let (Ωi, Ai, πi), i = 1, 2, · · · , n be measure spaces. Assume that π1, π2, · · · , πn are σ-finite, and write Ω = Ω1 × Ω2 × · · · × Ωn and A = A1 × A2 × · · · × An. Then there is a unique measure π on A such that

π{A1 × A2 × · · · × An} = π1{A1} × π2{A2} × · · · × πn{An}   (1.9)

for every measurable rectangle A1 × A2 × · · · × An. The measure π is called the product of π1, π2, · · · , πn, denoted by π = π1 × π2 × · · · × πn. The triplet (Ω, A, π) is called the product measure space.

Infinite Product Measure Theorem

Let (Ωi, Ai, πi), i = 1, 2, · · · be an infinite sequence of measure spaces such that πi{Ωi} = 1 for i = 1, 2, · · · The Cartesian product Ω = Ω1 × Ω2 × · · · is defined as the set of all ordered tuples of the form (x1, x2, · · ·), where xi ∈ Ωi for i = 1, 2, · · · In this case, we define a measurable rectangle as a set of the form A = A1 × A2 × · · ·, where Ai ∈ Ai for all i and Ai = Ωi for all but finitely many i. The smallest σ-algebra containing all measurable rectangles of Ω is called the product σ-algebra, denoted by A = A1 × A2 × · · ·

Theorem 1.10 (Infinite Product Measure Theorem) Assume that (Ωi, Ai, πi) are measure spaces such that πi{Ωi} = 1 for i = 1, 2, · · · Let Ω = Ω1 × Ω2 × · · · and A = A1 × A2 × · · · Then there is a unique measure π on A such that

π{A1 × · · · × An × Ωn+1 × Ωn+2 × · · ·} = π1{A1} × π2{A2} × · · · × πn{An}   (1.10)

for any measurable rectangle A1 × · · · × An × Ωn+1 × Ωn+2 × · · · and all n = 1, 2, · · · The measure π is called the infinite product, denoted by π = π1 × π2 × · · · The triplet (Ω, A, π) is called the infinite product measure space.


1.2 Borel Set

Let ℜ be the set of all real numbers, and ℜn the n-dimensional Euclidean space. We first introduce open, closed, Fσ, and Gδ sets.

A set O ⊂ ℜn is said to be open if for any x ∈ O, there exists a small positive number δ such that {y ∈ ℜn : ‖y − x‖ < δ} ⊂ O. The empty set ∅ and ℜn are open sets. If {Oi} is a sequence of open sets, then the union O1 ∪ O2 ∪ · · · is an open set, and the finite intersection O1 ∩ O2 ∩ · · · ∩ Om is also an open set. However, an infinite intersection of open sets need not be open. For example, let

Oi = (−1/i, (i + 1)/i), i = 1, 2, · · ·

Then the intersection O1 ∩ O2 ∩ · · · = [0, 1] is not an open set. A countable intersection of open sets is said to be a Gδ set.

The complement of an open set is called a closed set. Let {Ci} be a sequence of closed sets. Then the intersection C1 ∩ C2 ∩ · · · is a closed set, and the finite union C1 ∪ C2 ∪ · · · ∪ Cm is also a closed set. However, an infinite union of closed sets need not be closed. For example, let

Ci = [1/(i + 1), i/(i + 1)], i = 1, 2, · · ·

Then the union C1 ∪ C2 ∪ · · · = (0, 1) is not a closed set. A countable union of closed sets is said to be an Fσ set.

All open sets are Gδ sets, and all closed sets are Fσ sets. A set is a Gδ set if and only if its complement is an Fσ set.

Example 1.6: The set of rational numbers is an Fσ set because it is the union ⋃_i {ri}, where r1, r2, · · · are all the rational numbers. The set of irrational numbers is a Gδ set because it is the complement of the set of rational numbers.

Suppose that a = (a1, a2, · · · , an) and b = (b1, b2, · · · , bn) are points in ℜn with ai < bi for i = 1, 2, · · · , n. The open interval of ℜn is defined as

(a, b) = {(x1, x2, · · · , xn) | ai < xi < bi, i = 1, 2, · · · , n}.

The closed interval, left-semiclosed interval and right-semiclosed interval are defined as

[a, b] = {(x1, x2, · · · , xn) | ai ≤ xi ≤ bi, i = 1, 2, · · · , n},
[a, b) = {(x1, x2, · · · , xn) | ai ≤ xi < bi, i = 1, 2, · · · , n},
(a, b] = {(x1, x2, · · · , xn) | ai < xi ≤ bi, i = 1, 2, · · · , n}.


Definition 1.7 The smallest σ-algebra B containing all open intervals of ℜn is called the Borel algebra, any element of B is called a Borel set, and (ℜn, B) is called a Borel measurable space.

We may replace the open intervals in Definition 1.7 with other classes of intervals, for example, closed intervals, left-semiclosed intervals, right-semiclosed intervals, or all intervals.

Example 1.7: Open sets, closed sets, Fσ sets, Gδ sets, the set of rational numbers, the set of irrational numbers, and countable sets of real numbers are all Borel sets.

Example 1.8: We introduce a non-Borel subset of ℜ. Two real numbers a and b are called equivalent if and only if a − b is a rational number. Let [a] represent the equivalence class of all numbers that are equivalent to a. Note that if a1 and a2 are not equivalent, then [a1] ∩ [a2] = ∅. Let A be a set containing precisely one element from each of the equivalence classes [a], a ∈ ℜ. We also assume that the representatives are chosen so that A ⊂ [0, 1]. It has been proved that A is not a Borel set.

1.3 Lebesgue Measure

Theorem 1.11 There is a unique measure π on the Borel algebra of ℜ such that π{(a, b]} = b − a for any interval (a, b] of ℜ. Such a measure is called the Lebesgue measure.

Proof: It is a special case of Theorem 1.21 to be proved later.

Remark 1.1: In fact, Theorem 1.11 can be extended to the n-dimensional case. There is a unique measure π on the Borel algebra of ℜn such that

π{∏_{i=1}^n (ai, bi]} = ∏_{i=1}^n (bi − ai)   (1.11)

for any interval (a1, b1] × (a2, b2] × · · · × (an, bn] of ℜn.

Example 1.9: Let A be the set of all rational numbers. Since A is countable, we denote it by A = {a1, a2, · · ·}. For any given ε > 0, the open intervals

Ii = (ai − ε/2^{i+1}, ai + ε/2^{i+1}), i = 1, 2, · · ·

are a countable cover of A, and

π{A} ≤ π{⋃_{i=1}^∞ Ii} ≤ Σ_{i=1}^∞ π{Ii} = ε.

Letting ε → 0, we conclude that the Lebesgue measure of A is π{A} = 0.
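The covering argument can be made concrete numerically. The sketch below (ours, using exact rational arithmetic) sums the lengths of the first n covering intervals, measured in units of ε, and confirms the total never reaches ε:

from fractions import Fraction

# The i-th rational is covered by an open interval of length eps/2**(i+1).
# In units of eps, the first n intervals have total length 1 - 2**(-n) < 1,
# so the whole countable cover has total length at most eps.
n = 200
total_over_eps = sum(Fraction(1, 2 ** (i + 1)) for i in range(n))
print(total_over_eps < 1)          # True for every n
print(float(1 - total_over_eps))   # 2**(-n), the unused part of eps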

Example 1.10: Assume that a set has Lebesgue measure zero. Is it countable? The answer is negative. We divide the interval [0, 1] into three equal open intervals and choose the middle one, i.e., (1/3, 2/3). Then we divide each of the two remaining intervals into three equal open intervals, and choose the middle one in each case, i.e., (1/9, 2/9) and (7/9, 8/9). Continuing this process, we obtain intervals Dij for j = 1, 2, · · · , 2^{i−1} and i = 1, 2, · · · Note that {Dij} is a sequence of mutually disjoint open intervals. Without loss of generality, suppose Di1 < Di2 < · · · < Di,2^{i−1} for i = 1, 2, · · · Define the set

D = ⋃_{i=1}^∞ ⋃_{j=1}^{2^{i−1}} Dij.   (1.12)

Then C = [0, 1] \ D is called the Cantor set. In other words, x ∈ C if and only if x can be expressed in ternary form using only the digits 0 and 2, i.e.,

x = Σ_{i=1}^∞ ai/3^i   (1.13)

where ai = 0 or 2 for i = 1, 2, · · · The Cantor set is closed, perfect (every point of the set is a limit point of the set), nowhere dense, uncountable, and has Lebesgue measure zero.
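A short numerical sketch (ours) of why the Cantor set has measure zero: after step i of the construction, what remains of [0, 1] is 2^i closed intervals of total length (2/3)^i, which tends to 0:

from fractions import Fraction

# After step i, the removed middle thirds leave 2**i closed intervals,
# each of length 3**(-i); their total length is (2/3)**i.
def remaining_measure(i):
    return Fraction(2, 3) ** i

for i in (1, 2, 5, 10, 20):
    print(i, float(remaining_measure(i)))
# The values tend to 0, so the Cantor set has Lebesgue measure zero,
# although it is uncountable (one point per infinite 0/2 ternary sequence).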

1.4 Measurable Function

Definition 1.8 Let (Ω1, A1) and (Ω2, A2) be measurable spaces. A function f from (Ω1, A1) to (Ω2, A2) is said to be measurable if and only if

f^{-1}(A) ∈ A1, ∀A ∈ A2.   (1.14)

If Ω1 and Ω2 are Borel sets, then A1 and A2 are always assumed to be the Borel algebras on Ω1 and Ω2, respectively. In this case, a measurable function is also called a Borel measurable function or a Baire function.

Theorem 1.12 A function f from (Ω, A) to ℜm is measurable if and only if f^{-1}(I) ∈ A for any open interval I of ℜm.

Proof: If the function f is measurable, then f^{-1}(I) ∈ A since each open interval I is a Borel set. Conversely, if f^{-1}(I) ∈ A for each open interval I, then the class

C = {C | f^{-1}(C) ∈ A}

contains all open intervals of ℜm. It is also easy to verify that C is a σ-algebra. Thus C contains all Borel sets of ℜm. Hence f is measurable.


Remark 1.2: Theorem 1.12 remains true if we replace the open intervals with closed intervals or semiclosed intervals.

Example 1.11: A function f : ℜn → ℜm is said to be continuous if for any given x ∈ ℜn and ε > 0, there exists δ > 0 such that ‖f(y) − f(x)‖ < ε whenever ‖y − x‖ < δ. The Dirichlet function

f(x) = 1 if x is rational, and f(x) = 0 if x is irrational

is discontinuous at every point of ℜ. The Riemann function

f(x) = 1/p if x = q/p with (p, q) = 1, and f(x) = 0 if x is irrational

is discontinuous at all rational points but continuous at all irrational points. However, there does not exist a function which is continuous at all rational points but discontinuous at all irrational points. Any continuous function f from ℜn to ℜm is measurable, because f^{-1}(I) is an open set (not necessarily an interval) of ℜn for any open interval I of ℜm.

Example 1.12: A monotone function f from ℜ to ℜ is measurable because {x | f(x) ∈ I} is an interval for any interval I.

Example 1.13: A function is said to be simple if it takes only finitely many values. A function is said to be a step function if it takes countably many values. Generally speaking, a step function from ℜn to ℜ is not necessarily measurable, unless it can be written as f(x) = ai if x ∈ Ai, where the Ai are Borel sets for i = 1, 2, · · ·

Example 1.14: Let f be a measurable function from (Ω, A) to ℜ. Then its positive part and negative part

f+(ω) = f(ω) if f(ω) ≥ 0 and 0 otherwise,   f−(ω) = −f(ω) if f(ω) ≤ 0 and 0 otherwise

are measurable functions, because

{ω | f+(ω) > t} = {ω | f(ω) > t} ∪ {ω | f(ω) ≤ 0 if t < 0},
{ω | f−(ω) > t} = {ω | f(ω) < −t} ∪ {ω | f(ω) ≥ 0 if t < 0}.

Example 1.15: Let f1 and f2 be measurable functions from (Ω, A) to ℜ. Then f1 ∨ f2 and f1 ∧ f2 are measurable functions, because

{ω | f1(ω) ∨ f2(ω) > t} = {ω | f1(ω) > t} ∪ {ω | f2(ω) > t},
{ω | f1(ω) ∧ f2(ω) > t} = {ω | f1(ω) > t} ∩ {ω | f2(ω) > t}.

Example 1.16: Let f1 and f2 be measurable functions. Then f1 + f2 is a measurable function, because

{ω | f1(ω) + f2(ω) > t} = ⋃_r ({ω | f1(ω) > r} ∩ {ω | f2(ω) > t − r}),

where the union is taken over all rational r. We may also prove that f1 − f2, f1 f2, f1/f2 and |f1| are measurable.

Example 1.17: Let (Ω, A) be a measurable space, and A ⊂ Ω. Then the characteristic function of A,

f(ω) = 1 if ω ∈ A and 0 otherwise,

is measurable if A is a measurable set, and is not measurable if A is not.

Theorem 1.13 Let {fi} be a sequence of measurable functions from (Ω, A) to ℜ. Then the following functions are measurable:

sup_{1≤i<∞} fi(ω);  inf_{1≤i<∞} fi(ω);  lim sup_{i→∞} fi(ω);  lim inf_{i→∞} fi(ω).   (1.15)

In particular, if lim_{i→∞} fi(ω) exists, then it is also a measurable function.

Proof: The theorem can be proved by the following facts:

{ω | sup_{1≤i<∞} fi(ω) > r} = ⋃_{i=1}^∞ {ω | fi(ω) > r};
inf_{1≤i<∞} fi(ω) = − sup_{1≤i<∞} (−fi(ω));
lim sup_{i→∞} fi(ω) = inf_{1≤i<∞} (sup_{k≥i} fk(ω));
lim inf_{i→∞} fi(ω) = sup_{1≤i<∞} (inf_{k≥i} fk(ω)).

Theorem 1.14 (a) Let f be a nonnegative measurable function from (Ω, A) to ℜ. Then there exists an increasing sequence {hi} of nonnegative simple measurable functions such that

lim_{i→∞} hi(ω) = f(ω), ∀ω ∈ Ω.   (1.16)

(b) If f is an arbitrary measurable function from (Ω, A) to ℜ, then there exists a sequence {hi} of simple measurable functions such that (1.16) holds.


Proof: We define a sequence of nonnegative simple measurable functions as follows:

hi(ω) = (k − 1)/2^i if (k − 1)/2^i ≤ f(ω) < k/2^i for some k = 1, 2, · · · , i2^i, and hi(ω) = i if f(ω) ≥ i,

for i = 1, 2, · · · It is clear that the sequence is increasing and that (1.16) holds. Part (a) is proved. Now we define

f+(ω) = f(ω) if f(ω) > 0 and 0 otherwise,   f−(ω) = −f(ω) if f(ω) < 0 and 0 otherwise.

Then f+ and f− are nonnegative measurable functions such that f = f+ − f−. It follows from part (a) that there exist two sequences {h+_i} and {h−_i} of simple measurable functions such that

lim_{i→∞} h+_i(ω) = f+(ω),   lim_{i→∞} h−_i(ω) = f−(ω).

It is easy to verify that hi = h+_i − h−_i are also simple measurable functions for all i, and

lim_{i→∞} hi(ω) = lim_{i→∞} h+_i(ω) − lim_{i→∞} h−_i(ω) = f+(ω) − f−(ω) = f(ω).

The theorem is proved.
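The dyadic construction in the proof of part (a) is easy to evaluate numerically. The sketch below (our code; the sample function f and the point x are arbitrary choices) computes hi(ω) and shows the values increasing to f(ω):

import math

def h(i, y):
    # The i-th dyadic simple-function approximation of a nonnegative value
    # y = f(omega), following the construction in the proof of Theorem 1.14.
    if y >= i:
        return i
    # (k - 1)/2**i <= y < k/2**i  means  h_i = floor(y * 2**i) / 2**i
    return math.floor(y * 2 ** i) / 2 ** i

f = lambda x: math.exp(x)      # a sample nonnegative measurable function (ours)
x = 1.3
for i in (1, 2, 4, 8, 16):
    print(i, h(i, f(x)))       # increases to f(x) = e**1.3, about 3.6693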

Absolutely Continuous and Singular Functions

Definition 1.9 A function f : ℜ → ℜ is said to be absolutely continuous if for any given ε > 0, there exists a small number δ > 0 such that

Σ_{i=1}^m |f(yi) − f(xi)| < ε   (1.17)

for every finite disjoint class {(xi, yi), i = 1, 2, · · · , m} of bounded open intervals for which

Σ_{i=1}^m |yi − xi| < δ.   (1.18)

Definition 1.10 A continuous and increasing function f : ℜ → ℜ is said to be singular if f is not constant and its derivative f′ = 0 almost everywhere.

Example 1.18: The Cantor function is a singular function. Let C be the Cantor set. Then x ∈ C if and only if x can be expressed in ternary form using only the digits 0 and 2, i.e.,

x = Σ_{i=1}^∞ ai/3^i   (1.19)

where ai = 0 or 2 for i = 1, 2, · · · We define a function g on C as follows:

g(x) = Σ_{i=1}^∞ (ai/2)/2^i.   (1.20)

Then g is an increasing function and g(C) = [0, 1]. The Cantor function is defined on [0, 1] by

f(x) = sup{g(y) | y ∈ C, y ≤ x}.   (1.21)

It is clear that the Cantor function f is increasing and that

f(0) = 0, f(1) = 1, f(x) = g(x), ∀x ∈ C.

Moreover, f is a continuous function and f′(x) = 0 almost everywhere. Thus the Cantor function f is a singular function.
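The defining formula can be evaluated digit by digit. The following Python sketch (ours; a finite-precision approximation, not the book's construction verbatim) computes the Cantor function from the ternary expansion of x:

def cantor_function(x, digits=40):
    # Approximate the Cantor (singular) function on [0,1]: read the ternary
    # digits of x, stop at the first digit 1, replace each 2 by 1, and
    # interpret the result as a binary expansion.
    if x >= 1.0:
        return 1.0
    total, scale = 0.0, 0.5
    for _ in range(digits):
        x *= 3
        d = min(int(x), 2)        # next ternary digit (clamped against round-off)
        x -= d
        if d == 1:
            return total + scale  # constant on the removed middle-third interval
        total += (d // 2) * scale # ternary 0 -> binary 0, ternary 2 -> binary 1
        scale /= 2
    return total

for x in (0.0, 0.25, 1/3, 0.5, 2/3, 0.75):
    print(x, cantor_function(x))  # approximately 0, 1/3, 1/2, 1/2, 1/2, 2/3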

Supremum Continuity Theorems

Let {fi} be a sequence of functions (measurable or not). Generally speaking,

lim_{i→∞} sup_x fi(x) ≠ sup_x lim_{i→∞} fi(x)

even when the limit exists. The following theorems give conditions under which both sides are equal.

Theorem 1.15 Let {fi} be an increasing sequence of functions. Then we have

lim_{i→∞} sup_x fi(x) = sup_x lim_{i→∞} fi(x).   (1.22)

Proof: Suppose that fi(x) ↑ f(x) for all x ∈ ℜ. Then we have

sup_x f(x) ≥ lim sup_{i→∞} sup_x fi(x).   (1.23)

On the other hand, let ε > 0 be given. Then there exists x0 ∈ ℜ such that

f(x0) ≥ sup_x f(x) − ε/2.

Furthermore, since fi(x0) ↑ f(x0), there exists i0 such that, for i > i0,

fi(x0) ≥ f(x0) − ε/2 ≥ sup_x f(x) − ε,

which implies that

lim inf_{i→∞} sup_x fi(x) ≥ lim inf_{i→∞} fi(x0) ≥ sup_x f(x) − ε.

Letting ε → 0, we get

lim inf_{i→∞} sup_x fi(x) ≥ sup_x f(x).   (1.24)

It follows from (1.23) and (1.24) that lim_{i→∞} sup_x fi(x) exists and (1.22) holds.

Example 1.19: If {fi} is not an increasing sequence, then the theorem does not hold. For example, let

fi(x) = 0 if x ∈ (−∞, i) and fi(x) = 1 if x ∈ [i, +∞)

for i = 1, 2, · · · It is clear that lim_{i→∞} fi(x) ≡ 0. Thus we have

lim_{i→∞} sup_x fi(x) = 1 ≠ 0 = sup_x lim_{i→∞} fi(x).
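A finite check of Example 1.19 (our sketch, on a bounded grid standing in for ℜ): the supremum of each fi is 1, while the pointwise limit is identically 0:

def f(i, x):
    # f_i is the indicator of [i, +infinity)
    return 1.0 if x >= i else 0.0

grid = [k * 0.5 for k in range(200)]                       # x in [0, 100)
sup_of_f = [max(f(i, x) for x in grid) for i in (1, 10, 40)]
limit_at_x = [min(f(i, x) for i in range(1, 1000)) for x in grid]  # eventual value at each x
print(sup_of_f, max(limit_at_x))                           # [1.0, 1.0, 1.0] 0.0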

In addition, the following theorem may be proved in a similar way.

Theorem 1.16 Let {fi} be a decreasing sequence of functions. Then we have

lim_{i→∞} inf_x fi(x) = inf_x lim_{i→∞} fi(x).   (1.25)

1.5 Lebesgue Integral

Definition 1.11 Let h(x) be a nonnegative simple measurable function defined by

h(x) = ci if x ∈ Ai, i = 1, 2, · · · , m,

where A1, A2, · · · , Am are Borel sets. Then the Lebesgue integral of h on a Borel set A is

∫_A h(x)dx = Σ_{i=1}^m ci π{A ∩ Ai}.   (1.26)

Definition 1.12 Let f(x) be a nonnegative measurable function on the Borel set A, and {hi(x)} a sequence of nonnegative simple measurable functions such that hi(x) ↑ f(x) as i → ∞. Then the Lebesgue integral of f on A is

∫_A f(x)dx = lim_{i→∞} ∫_A hi(x)dx.   (1.27)
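Definition 1.12 suggests a "horizontal" way to compute integrals: integrate the simple functions hi of Theorem 1.14 and pass to the limit. The sketch below (ours; f(x) = x**2 on A = [0, 1] is an arbitrary choice whose level sets have an explicit length) shows the simple-function integrals increasing to the Lebesgue integral 1/3:

import math

# Lebesgue integral of f(x) = x**2 over A = [0, 1]: the simple function h_i
# takes the value (k-1)/2**i on the level set {x in A : (k-1)/2**i <= f(x) < k/2**i},
# whose Lebesgue measure is sqrt(k/2**i) - sqrt((k-1)/2**i) for this f.
def simple_integral(i):
    total = 0.0
    for k in range(1, 2 ** i + 1):        # f <= 1 on A, so higher levels are empty
        lo, hi = (k - 1) / 2 ** i, k / 2 ** i
        measure = math.sqrt(min(hi, 1.0)) - math.sqrt(lo)
        total += lo * measure
    return total

for i in (2, 4, 8, 12):
    print(i, simple_integral(i))          # increases to 1/3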


Definition 1.13 Let f(x) be a measurable function on the Borel set A, and define

f+(x) = f(x) if f(x) > 0 and 0 otherwise,   f−(x) = −f(x) if f(x) < 0 and 0 otherwise.

Then the Lebesgue integral of f on A is

∫_A f(x)dx = ∫_A f+(x)dx − ∫_A f−(x)dx   (1.28)

provided that at least one of ∫_A f+(x)dx and ∫_A f−(x)dx is finite.

Definition 1.14 Let f(x) be a measurable function on the Borel set A. If both ∫_A f+(x)dx and ∫_A f−(x)dx are finite, then the function f is said to be integrable on A.

Integral Continuity Theorems

Theorem 1.17 (Monotone Convergence Theorem) (a) Let {fi} be an increasing sequence of measurable functions on A. If there is an integrable function g such that fi(x) ≥ g(x) for all i, then we have

∫_A lim_{i→∞} fi(x)dx = lim_{i→∞} ∫_A fi(x)dx.   (1.29)

(b) Let {fi} be a decreasing sequence of measurable functions on A. If there is an integrable function g such that fi(x) ≤ g(x) for all i, then (1.29) remains true.

Proof: Without loss of generality, we may assume that the fi are all nonnegative, i.e., g(x) ≡ 0. We also write f(x) = lim_{i→∞} fi(x). Since {fi} is an increasing sequence, we immediately have

∫_A lim_{i→∞} fi(x)dx ≥ lim_{i→∞} ∫_A fi(x)dx.

In order to prove the opposite inequality, for each i we choose an increasing sequence {hi,j} of nonnegative simple measurable functions such that hi,j(x) ↑ fi(x) as j → ∞. We set

hk(x) = max_{1≤i≤k} hi,k(x), k = 1, 2, · · ·

Then {hk} is an increasing sequence of nonnegative simple measurable functions such that

fk(x) ≥ hk(x), k = 1, 2, · · ·,   hk(x) ↑ f(x) as k → ∞.

Therefore, we have

lim_{k→∞} ∫_A fk(x)dx ≥ lim_{k→∞} ∫_A hk(x)dx = ∫_A f(x)dx.

Part (b) may be proved by applying part (a) to the increasing sequence {g − fi}.

Example 1.20: The condition fi ≥ g cannot be removed from the monotone convergence theorem. For example, let fi(x) = 0 if x ≤ i and fi(x) = −1 otherwise. Then fi(x) ↑ 0 everywhere on ℜ. However,

∫_ℜ lim_{i→∞} fi(x)dx = 0 ≠ −∞ = lim_{i→∞} ∫_ℜ fi(x)dx.

Theorem 1.18 (Fatou's Lemma) Assume that {fi} is a sequence of measurable functions on A.
(a) If there is an integrable function g such that fi ≥ g for all i, then

∫_A lim inf_{i→∞} fi(x)dx ≤ lim inf_{i→∞} ∫_A fi(x)dx.   (1.30)

(b) If there is an integrable function g such that fi ≤ g for all i, then

∫_A lim sup_{i→∞} fi(x)dx ≥ lim sup_{i→∞} ∫_A fi(x)dx.   (1.31)

Proof: (a) We set gk(x) = inf_{i≥k} fi(x). Then {gk} is an increasing sequence of measurable functions such that gk ≥ g for all k, and

gk(x) ↑ lim inf_{i→∞} fi(x), ∀x ∈ A.

It follows from the monotone convergence theorem that

∫_A lim inf_{i→∞} fi(x)dx = ∫_A lim_{k→∞} gk(x)dx = lim_{k→∞} ∫_A gk(x)dx ≤ lim inf_{i→∞} ∫_A fi(x)dx.

(b) Next we write f̃i = g − fi. Then the f̃i are nonnegative measurable functions and

lim inf_{i→∞} f̃i = g − lim sup_{i→∞} fi,   lim inf_{i→∞} ∫_A f̃i(x)dx = ∫_A g(x)dx − lim sup_{i→∞} ∫_A fi(x)dx.

Applying part (a) to {f̃i}, we get the required inequality. The theorem is proved.

Example 1.21: The condition fi ≥ g cannot be removed from Fatou's lemma. For example, let A = (0, 1), fi(x) = −i if x ∈ (0, 1/i) and 0 otherwise. Then fi(x) → 0 everywhere on A. However,

∫_A lim inf_{i→∞} fi(x)dx = 0 > −1 = lim inf_{i→∞} ∫_A fi(x)dx.


Theorem 1.19 (Lebesgue Dominated Convergence Theorem) Let {fi} be a sequence of measurable functions on A whose limit lim_{i→∞} fi(x) exists a.s. If there is an integrable function g such that |fi(x)| ≤ g(x) for all i, then we have

∫_A lim_{i→∞} fi(x)dx = lim_{i→∞} ∫_A fi(x)dx.   (1.32)

Proof: Write f(x) = lim_{i→∞} fi(x). It is clear that {|fi(x) − f(x)|} is a sequence of measurable functions such that |fi(x) − f(x)| ≤ 2g(x) for i = 1, 2, · · · It follows from Fatou's lemma that

lim sup_{i→∞} |∫_A fi(x)dx − ∫_A f(x)dx| ≤ lim sup_{i→∞} ∫_A |fi(x) − f(x)|dx ≤ ∫_A lim sup_{i→∞} |fi(x) − f(x)|dx = 0,

which implies (1.32). The theorem is proved.

Example 1.22: The condition |fi| ≤ g in the Lebesgue dominated convergence theorem cannot be removed. Let A = (0, 1), fi(x) = i if x ∈ (0, 1/i) and 0 otherwise. Then fi(x) → 0 everywhere on A. However,

∫_A lim_{i→∞} fi(x)dx = 0 ≠ 1 = lim_{i→∞} ∫_A fi(x)dx.
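A numerical view of Example 1.22 (our sketch; the integrals are approximated by a midpoint rule rather than computed exactly): every fi integrates to 1 over (0, 1), yet fi(x) → 0 at each fixed x, which is why no single integrable dominating function can exist:

def f(i, x):
    # f_i = i on (0, 1/i) and 0 elsewhere
    return i if 0 < x < 1.0 / i else 0.0

def integral(i, n=10 ** 5):
    h = 1.0 / n
    return sum(f(i, (j + 0.5) * h) for j in range(n)) * h   # midpoint rule on (0, 1)

for i in (1, 10, 100):
    print(i, round(integral(i), 3))   # always 1.0
print(f(1000, 0.37))                  # 0.0: the pointwise limit at x = 0.37 is 0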

Fubini Theorems

Theorem 1.20 (Fubini Theorem) Let f(x, y) be an integrable function on ℜ². Then we have
(a) f(x, y) is an integrable function of x for almost all y;
(b) f(x, y) is an integrable function of y for almost all x;
(c) ∫_{ℜ²} f(x, y)dxdy = ∫_ℜ (∫_ℜ f(x, y)dy) dx = ∫_ℜ (∫_ℜ f(x, y)dx) dy.

Proof: Step 1: Suppose that (a1, a2] and (b1, b2] are right-semiclosed intervals of ℜ, and that f(x, y) is the characteristic function of (a1, a2] × (b1, b2]. Then

∫_{ℜ²} f(x, y)dxdy = π{(a1, a2]} · π{(b1, b2]} = (a2 − a1)(b2 − b1) < ∞;
∫_ℜ f(x, y)dx = a2 − a1 if y ∈ (b1, b2] and 0 otherwise;
∫_ℜ f(x, y)dy = b2 − b1 if x ∈ (a1, a2] and 0 otherwise,

which imply that (a), (b) and (c) are all true.

Step 2: Let I1, I2, · · · , In be disjoint right-semiclosed intervals of ℜ², and let f(x, y) be the characteristic function of ⋃_{i=1}^n Ii. It is easy to prove via Step 1 that (a), (b) and (c) are all true.

Step 3: Let B be a Borel set of ℜ² with finite measure, and let f(x, y) be its characteristic function. For any given ε > 0, it follows from the approximation theorem that there exist disjoint right-semiclosed intervals I1, I2, · · · , In of ℜ² such that π{B \ (⋃_{i=1}^n Ii)} < ε. If g(x, y) is the characteristic function of ⋃_{i=1}^n Ii, then

∫_{ℜ²} g(x, y)dxdy ≤ ∫_{ℜ²} f(x, y)dxdy < ∞.

Thus g(x, y) is integrable and satisfies (a), (b) and (c). Note that f = g except on B \ (⋃_{i=1}^n Ii). It is easy to verify that f(x, y) satisfies (a), (b) and (c) by letting ε → 0.

Step 4: Let f(x, y) be a nonnegative simple measurable function on ℜ². Then there exist nonnegative numbers c1, c2, · · · , cn and disjoint Borel sets B1, B2, · · · , Bn of ℜ² such that

f(x, y) = c1 g1(x, y) + c2 g2(x, y) + · · · + cn gn(x, y),

where gi is the characteristic function of Bi, i = 1, 2, · · · , n. If f(x, y) is integrable, then g1, g2, · · · , gn are all integrable and satisfy (a), (b) and (c) via Step 3. It follows that f(x, y) satisfies (a), (b) and (c).

Step 5: Let f(x, y) be a nonnegative measurable function. Then there exists a sequence of nonnegative simple measurable functions {gi} such that gi ↑ f as i → ∞. Since f is integrable, the functions g1, g2, · · · are integrable and satisfy (a), (b) and (c). The monotone convergence theorem implies that f satisfies (a), (b) and (c).

Step 6: Let f(x, y) be an arbitrary integrable function. By using f = f+ − f−, we may prove that (a), (b) and (c) hold. The theorem is proved.
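In the spirit of the Fubini theorem, the two iterated integrals and the double integral can be compared numerically for a concrete integrable function. The sketch below (ours; f(x, y) = xy on the unit square with a midpoint discretization is an arbitrary choice) shows all three agreeing at about 1/4:

# Midpoint-rule check that the double integral equals both iterated integrals.
n = 400
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]
f = lambda x, y: x * y

double_int = sum(f(x, y) for x in xs for y in xs) * h * h
iter_xy = sum(sum(f(x, y) for y in xs) * h for x in xs) * h   # integrate in y, then x
iter_yx = sum(sum(f(x, y) for x in xs) * h for y in xs) * h   # integrate in x, then y
print(double_int, iter_xy, iter_yx)   # all approximately 0.25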

1.6 Lebesgue-Stieltjes Integral

Theorem 1.21 Let Φ(x) be a nondecreasing and right-continuous function on ℜ. Then there exists a unique measure π on the Borel algebra of ℜ such that

π{(a, b]} = Φ(b) − Φ(a)   (1.33)

for all a and b with a < b. Such a measure is called the Lebesgue-Stieltjes measure corresponding to Φ.

Proof: Let A0 be the set of all finite disjoint unions of intervals of the form (−∞, a], (a, b], (b, ∞) and ℜ. For simplicity, we denote all of them by right-semiclosed intervals (a, b]. Then A0 is an algebra which generates the Borel algebra. By the Caratheodory extension theorem, the theorem is proved if we can verify that there exists a unique measure on A0 such that (1.33) holds. The proof proceeds in the following steps.

Step 1: Let (ai, bi] be disjoint right-semiclosed intervals for i = 0, 1, · · · , n such that (ai, bi] ⊂ (a0, b0] for each i ≥ 1. Without loss of generality, we assume that a1 < a2 < · · · < an. Then we have

a0 ≤ a1 < b1 ≤ a2 < b2 ≤ · · · ≤ an < bn ≤ b0

and

Σ_{i=1}^n π{(ai, bi]} = Σ_{i=1}^n (Φ(bi) − Φ(ai)) ≤ Φ(b0) − Φ(a0) = π{(a0, b0]}.

If (ai, bi], i = 1, 2, · · · is a countably infinite sequence, then by letting n → ∞ we obtain

Σ_{i=1}^∞ π{(ai, bi]} ≤ π{(a0, b0]}.   (1.34)

Step 2: Let (ai, bi] be disjoint right-semiclosed intervals for i = 0, 1, · · · , n such that (a0, b0] ⊂ ⋃_{i=1}^n (ai, bi]. Without loss of generality, we assume that a1 < a2 < · · · < an. Then we have

a1 < b1 ≤ a2 < b2 ≤ · · · ≤ an < bn,   a1 ≤ a0 < b0 ≤ bn

and

Σ_{i=1}^n π{(ai, bi]} = Σ_{i=1}^n (Φ(bi) − Φ(ai)) ≥ Φ(bn) − Φ(a1) ≥ π{(a0, b0]}.

If (ai, bi], i = 1, 2, · · · is a countably infinite sequence, then by letting n → ∞ we obtain

Σ_{i=1}^∞ π{(ai, bi]} ≥ π{(a0, b0]}.   (1.35)

Step 3: Let (ai, bi] be disjoint right-semiclosed intervals for i = 0, 1, · · · such that ⋃_{i=1}^∞ (ai, bi] = (a0, b0]. It follows from (1.34) and (1.35) that

Σ_{i=1}^∞ π{(ai, bi]} = π{(a0, b0]}.   (1.36)

Step 4: For any A ∈ A0, there exist disjoint right-semiclosed intervals (ai, bi], i = 1, 2, · · · , n such that ⋃_{i=1}^n (ai, bi] = A. We define a set function π̂ on A0 by

π̂{A} = Σ_{i=1}^n π{(ai, bi]}.   (1.37)


First note that π̂ is uniquely determined by (1.37). In fact, let (a′j, b′j], j = 1, 2, · · · , k be another collection of disjoint right-semiclosed intervals with ⋃_{j=1}^k (a′j, b′j] = A. It is clear that

(ai, bi] = ⋃_{j=1}^k ((ai, bi] ∩ (a′j, b′j]), i = 1, 2, · · · , n;
(a′j, b′j] = ⋃_{i=1}^n ((ai, bi] ∩ (a′j, b′j]), j = 1, 2, · · · , k.

Note that the sets (ai, bi] ∩ (a′j, b′j] are disjoint right-semiclosed intervals for i = 1, 2, · · · , n and j = 1, 2, · · · , k. Thus

Σ_{i=1}^n π{(ai, bi]} = Σ_{i=1}^n Σ_{j=1}^k π{(ai, bi] ∩ (a′j, b′j]} = Σ_{j=1}^k π{(a′j, b′j]}.

Hence π̂ is uniquely determined by (1.37), and π̂ coincides with π on every right-semiclosed interval. Furthermore, π̂ is finitely additive. We next prove that π̂ is countably additive. Let {Aj} be a sequence of disjoint sets in A0. Then we may write

Aj = ⋃_{i=1}^{nj} (aij, bij],   π̂{Aj} = Σ_{i=1}^{nj} π{(aij, bij]},   j = 1, 2, · · ·

It follows that

π̂{⋃_{j=1}^∞ Aj} = π̂{⋃_{j=1}^∞ ⋃_{i=1}^{nj} (aij, bij]} = Σ_{j=1}^∞ Σ_{i=1}^{nj} π{(aij, bij]} = Σ_{j=1}^∞ π̂{Aj}.

Thus π̂ is countably additive and is a measure on A0.

Step 5: Finally, we prove that π̂ is the unique measure on A0 satisfying (1.33). Let π1 and π2 be two such measures, and A ∈ A0. Then there exist disjoint right-semiclosed intervals (ai, bi], i = 1, 2, · · · , n such that ⋃_{i=1}^n (ai, bi] = A. Thus

π1{A} = Σ_{i=1}^n π1{(ai, bi]} = Σ_{i=1}^n π{(ai, bi]} = Σ_{i=1}^n π2{(ai, bi]} = π2{A},

which shows that the extension of π to A0 is unique.


Definition 1.15 Let Φ(x) be a nondecreasing, right-continuous function on ℜ, and let h(x) be a nonnegative simple measurable function, i.e.,

h(x) = ci if x ∈ Ai, i = 1, 2, · · · , m.

Then the Lebesgue-Stieltjes integral of h on the Borel set A is

∫_A h(x)dΦ(x) = Σ_{i=1}^m ci π{A ∩ Ai}   (1.38)

where π is the Lebesgue-Stieltjes measure corresponding to Φ.

Definition 1.16 Let f(x) be a nonnegative measurable function on the Borel set A, and let {hi(x)} be a sequence of nonnegative simple measurable functions such that hi(x) ↑ f(x) as i → ∞. Then the Lebesgue-Stieltjes integral of f on A is

∫_A f(x)dΦ(x) = lim_{i→∞} ∫_A hi(x)dΦ(x).   (1.39)

Definition 1.17 Let f(x) be a measurable function on the Borel set A, and define

f+(x) = f(x) if f(x) > 0 and 0 otherwise,   f−(x) = −f(x) if f(x) < 0 and 0 otherwise.

Then the Lebesgue-Stieltjes integral of f on A is

∫_A f(x)dΦ(x) = ∫_A f+(x)dΦ(x) − ∫_A f−(x)dΦ(x)   (1.40)

provided that at least one of ∫_A f+(x)dΦ(x) and ∫_A f−(x)dΦ(x) is finite.
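When Φ is a right-continuous step function, the Lebesgue-Stieltjes measure places mass equal to the jump size at each jump point, so the integrals in (1.38)-(1.40) reduce to weighted sums. A minimal Python sketch (ours; the jump points, sizes and integrands are arbitrary choices):

# Piecewise-constant, right-continuous Phi: list of (jump point, jump size).
jumps = [(0.0, 0.2), (1.0, 0.5), (2.5, 0.3)]   # sizes sum to 1

def ls_integral(f):
    # Lebesgue-Stieltjes integral of f with respect to this step function Phi.
    return sum(size * f(x) for x, size in jumps)

print(ls_integral(lambda x: x))        # 0.2*0 + 0.5*1 + 0.3*2.5 = 1.25
print(ls_integral(lambda x: x ** 2))   # 0.5*1 + 0.3*6.25 = 2.375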


Chapter 2

Probability Theory

Probability theory is the branch of mathematics that studies the behavior of random events. The emphasis in this chapter is mainly on probability space, random variable, probability distribution, independent and identical distribution, expected value operator, critical values, inequalities, characteristic function, convergence concepts, laws of large numbers, conditional probability, and stochastic simulation.

2.1 Three Axioms

In this section, we give the definitions of probability space and product probability space as well as some basic results.

Definition 2.1 Let Ω be a nonempty set, and A a σ-algebra of subsets (called events) of Ω. The set function Pr is called a probability measure if
(Axiom 1) Pr{Ω} = 1;
(Axiom 2) Pr{A} ≥ 0 for any A ∈ A;
(Axiom 3) for every countable sequence of mutually disjoint events {Ai}_{i=1}^∞, we have

Pr{⋃_{i=1}^∞ Ai} = Σ_{i=1}^∞ Pr{Ai}.   (2.1)

Definition 2.2 Let Ω be a nonempty set, A a σ-algebra of subsets of Ω, and Pr a probability measure. Then the triplet (Ω, A, Pr) is called a probability space.

Example 2.1: Let Ω = {ω1, ω2, · · ·}, and let A be the σ-algebra of all subsets of Ω. Assume that p1, p2, · · · are nonnegative numbers such that p1 + p2 + · · · = 1. Define a set function on A by

Pr{A} = Σ_{ωi ∈ A} pi,  A ∈ A.

Then Pr is a probability measure and (Ω, A, Pr) is a probability space.
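Example 2.1 is easy to realize in code. The sketch below (ours; the weights are an arbitrary choice) builds a discrete probability measure on a three-point Ω and checks Axiom 1 and finite additivity:

from fractions import Fraction

# Weights p_i on Omega = {w1, w2, w3}; Pr{A} is the sum of weights over A.
p = {"w1": Fraction(1, 2), "w2": Fraction(1, 3), "w3": Fraction(1, 6)}

def prob(A):
    return sum(p[w] for w in A)

print(prob(p.keys()))                 # 1, i.e. Pr{Omega} = 1 (Axiom 1)
print(prob({"w1", "w3"}))             # 2/3
print(prob({"w1"}) + prob({"w2", "w3"}) == prob(p.keys()))   # additivity: True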

Example 2.2: Let Ω = [0, 1] and let A be the Borel algebra on Ω. If Pr is the Lebesgue measure, then Pr is a probability measure and (Ω, A, Pr) is a probability space.

Theorem 2.1 Let (Ω, A, Pr) be a probability space. Then we have
(a) Pr{∅} = 0;
(b) Pr{A} + Pr{A^c} = 1 for any A ∈ A;
(c) 0 ≤ Pr{A} ≤ 1 for any A ∈ A;
(d) Pr{A} ≤ Pr{B} whenever A ⊂ B;
(e) Pr{A ∪ B} + Pr{A ∩ B} = Pr{A} + Pr{B} for any A, B ∈ A.

Proof: (a) Since ∅ and Ω are disjoint events and ∅ ∪ Ω = Ω, we have Pr{∅} + Pr{Ω} = Pr{Ω}, which gives Pr{∅} = 0.

(b) Since A and A^c are disjoint events and A ∪ A^c = Ω, we have Pr{A} + Pr{A^c} = Pr{Ω} = 1.

(c) The inequality 0 ≤ Pr{A} is obvious. If Pr{A} > 1, then we would have Pr{A^c} = 1 − Pr{A} < 0, a contradiction. Hence Pr{A} ≤ 1.

(d) Since A ⊂ B, we have B = A ∪ (B ∩ A^c), where A and B ∩ A^c are disjoint events. Therefore Pr{B} = Pr{A} + Pr{B ∩ A^c} ≥ Pr{A}.

(e) Since A ∩ B^c, A ∩ B and A^c ∩ B are disjoint events and A ∪ B = (A ∩ B^c) ∪ (A ∩ B) ∪ (A^c ∩ B), we have

Pr{A ∪ B} = Pr{A ∩ B^c} + Pr{A ∩ B} + Pr{A^c ∩ B}.

Furthermore, we may prove that

Pr{A} = Pr{A ∩ B^c} + Pr{A ∩ B},   Pr{B} = Pr{A^c ∩ B} + Pr{A ∩ B}.

It follows from these relations that (e) holds.

Independent Events

Definition 2.3 The events Ai, i ∈ I are said to be independent if and onlyif for any collections {i1, i2, · · · , ik} of distinct indices in I, we have

Pr{Ai1 ∩Ai2 ∩ · · · ∩Aik} = Pr{Ai1}Pr{Ai2} · · ·Pr{Aik}. (2.2)

Theorem 2.2 If the events Ai, i ∈ I are independent, and Bi are either Ai

or Aci for i ∈ I, then the events Bi, i ∈ I are independent.


Proof: In order to prove the theorem, it suffices to prove that Pr{A1^c ∩ A2} = Pr{A1^c} Pr{A2}. It follows from A1^c ∩ A2 = A2 \ (A1 ∩ A2) that

Pr{A1^c ∩ A2} = Pr{A2 \ (A1 ∩ A2)}
= Pr{A2} − Pr{A1 ∩ A2}   (since A1 ∩ A2 ⊂ A2)
= Pr{A2} − Pr{A1} Pr{A2}   (by the independence)
= (1 − Pr{A1}) Pr{A2}
= Pr{A1^c} Pr{A2}.

Borel-Cantelli Lemma

Theorem 2.3 (Borel-Cantelli Lemma) Let (Ω,A,Pr) be a probability space, and let A1, A2, · · · ∈ A. Then we have
(a) if Σ_{i=1}^∞ Pr{Ai} < ∞, then

Pr{lim sup_{i→∞} Ai} = 0;   (2.3)

(b) if A1, A2, · · · are independent and Σ_{i=1}^∞ Pr{Ai} = ∞, then

Pr{lim sup_{i→∞} Ai} = 1.   (2.4)

Proof: (a) It follows from the probability continuity theorem that

Pr{lim sup_{i→∞} Ai} = Pr{∩_{k=1}^∞ ∪_{i=k}^∞ Ai} = lim_{k→∞} Pr{∪_{i=k}^∞ Ai} ≤ lim_{k→∞} Σ_{i=k}^∞ Pr{Ai} = 0

where the last equality holds because Σ_{i=1}^∞ Pr{Ai} < ∞. Thus part (a) is proved. In order to prove part (b), we only need to prove

lim_{k→∞} Pr{∪_{i=k}^∞ Ai} = 1.

In other words, we should prove

lim_{k→∞} Pr{∩_{i=k}^∞ Ai^c} = 0.


For any k, we have

Pr{∩_{i=k}^∞ Ai^c} = Π_{i=k}^∞ (1 − Pr{Ai})   (by independence)
≤ exp(−Σ_{i=k}^∞ Pr{Ai})   (by 1 − x ≤ exp(−x))
= 0   (by Σ_{i=1}^∞ Pr{Ai} = ∞).

Hence part (b) is proved.

Probability Continuity Theorem

Theorem 2.4 (Probability Continuity Theorem) Let (Ω,A,Pr) be a probability space, and A1, A2, · · · ∈ A. If lim_{i→∞} Ai exists, then

lim_{i→∞} Pr{Ai} = Pr{lim_{i→∞} Ai}.   (2.5)

Proof: It is a special case of Theorem 1.8.

Theorem 2.5 Let (Ω,A,Pr) be a probability space, and A1, A2, · · · ∈ A. Then we have

Pr{lim inf_{i→∞} Ai} ≤ lim inf_{i→∞} Pr{Ai} ≤ lim sup_{i→∞} Pr{Ai} ≤ Pr{lim sup_{i→∞} Ai}.

Proof: It is a special case of Theorem 1.7.

Product Probability Space

Let (Ωi,Ai,Pri), i = 1, 2, · · · , n be probability spaces, and Ω = Ω1 × Ω2 × · · · × Ωn, A = A1 × A2 × · · · × An. Note that the probability measures Pri, i = 1, 2, · · · , n are finite. It follows from the product measure theorem that there is a unique measure Pr on A such that

Pr{A1 × A2 × · · · × An} = Pr1{A1} × Pr2{A2} × · · · × Prn{An}

for any Ai ∈ Ai, i = 1, 2, · · · , n. The measure Pr is also a probability measure since

Pr{Ω} = Pr1{Ω1} × Pr2{Ω2} × · · · × Prn{Ωn} = 1.

Such a probability measure is called the product probability measure, denoted by Pr = Pr1 × Pr2 × · · · × Prn. Thus a product probability space may be defined as follows.

Definition 2.4 Let (Ωi,Ai,Pri), i = 1, 2, · · · , n be probability spaces, and Ω = Ω1 × Ω2 × · · · × Ωn, A = A1 × A2 × · · · × An, Pr = Pr1 × Pr2 × · · · × Prn. Then the triplet (Ω,A,Pr) is called the product probability space.


Infinite Product Probability Space

Very often we are interested in the limiting properties of a random sequence, for example

lim_{n→∞} (ξ1 + ξ2 + · · · + ξn)/n.

Such a limiting event cannot be defined in any product probability space of finite dimension. This fact provides a motivation to define an infinite product probability space.

Let (Ωi,Ai,Pri), i = 1, 2, · · · be an arbitrary sequence of probability spaces, and

Ω = Ω1 × Ω2 × · · · ,  A = A1 × A2 × · · ·   (2.6)

It follows from the infinite product measure theorem that there is a unique probability measure Pr on A such that

Pr{A1 × · · · × An × Ωn+1 × Ωn+2 × · · ·} = Pr1{A1} × · · · × Prn{An}

for any measurable rectangle A1 × · · · × An × Ωn+1 × Ωn+2 × · · · and all n = 1, 2, · · · The probability measure Pr is called the infinite product of Pri, i = 1, 2, · · · and is denoted by

Pr = Pr1 × Pr2 × · · ·   (2.7)

Definition 2.5 Let (Ωi,Ai,Pri), i = 1, 2, · · · be probability spaces, and Ω = Ω1 × Ω2 × · · ·, A = A1 × A2 × · · ·, Pr = Pr1 × Pr2 × · · · Then the triplet (Ω,A,Pr) is called the infinite product probability space.

2.2 Random Variables

Definition 2.6 A random variable is a measurable function from a probability space (Ω,A,Pr) to the set of real numbers.

Example 2.3: Let Ω = {ω1, ω2} and Pr{ω1} = Pr{ω2} = 0.5. Then the function

ξ(ω) = 0 if ω = ω1, and 1 if ω = ω2

is a random variable.

Example 2.4: Let Ω = [0, 1], and let A be the Borel algebra on Ω. If Pr is the Lebesgue measure, then (Ω,A,Pr) is a probability space. Now we define ξ as the identity function from Ω to [0, 1]. Since ξ is a measurable function, it is a random variable.


Definition 2.7 A random variable ξ is said to be
(a) nonnegative if Pr{ξ < 0} = 0;
(b) positive if Pr{ξ ≤ 0} = 0;
(c) continuous if Pr{ξ = x} = 0 for each x ∈ ℝ;
(d) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Pr{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm} = 0;   (2.8)

(e) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Pr{ξ ≠ x1, ξ ≠ x2, · · ·} = 0.   (2.9)

Definition 2.8 Let ξ and η be random variables defined on the probability space (Ω,A,Pr). We say ξ = η if and only if ξ(ω) = η(ω) for all ω ∈ Ω.

Random Vector

Definition 2.9 An n-dimensional random vector is a measurable function from a probability space (Ω,A,Pr) to the set of n-dimensional real vectors.

Since a random vector ξ is a function from Ω to ℝ^n, we can write ξ(ω) = (ξ1(ω), ξ2(ω), · · · , ξn(ω)) for every ω ∈ Ω, where ξ1, ξ2, · · · , ξn are functions from Ω to ℝ. Are ξ1, ξ2, · · · , ξn random variables in the sense of Definition 2.6? Conversely, assume that ξ1, ξ2, · · · , ξn are random variables. Is (ξ1, ξ2, · · · , ξn) a random vector in the sense of Definition 2.9? The answer is in the affirmative. In fact, we have the following theorem.

Theorem 2.6 The vector (ξ1, ξ2, · · · , ξn) is a random vector if and only if ξ1, ξ2, · · · , ξn are random variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a random vector on the probability space (Ω,A,Pr). For any Borel set B of ℝ, the set B × ℝ^{n−1} is also a Borel set of ℝ^n. Thus we have

{ω ∈ Ω | ξ1(ω) ∈ B} = {ω ∈ Ω | ξ1(ω) ∈ B, ξ2(ω) ∈ ℝ, · · · , ξn(ω) ∈ ℝ} = {ω ∈ Ω | ξ(ω) ∈ B × ℝ^{n−1}} ∈ A

which implies that ξ1 is a random variable. A similar argument proves that ξ2, ξ3, · · · , ξn are random variables.

Conversely, suppose that all ξ1, ξ2, · · · , ξn are random variables on the probability space (Ω,A,Pr). We define

B = {B ⊂ ℝ^n | {ω ∈ Ω | ξ(ω) ∈ B} ∈ A}.

The vector ξ = (ξ1, ξ2, · · · , ξn) is proved to be a random vector if we can prove that B contains all Borel sets of ℝ^n. First, the class B contains all open intervals of ℝ^n because

{ω ∈ Ω | ξ(ω) ∈ Π_{i=1}^n (ai, bi)} = ∩_{i=1}^n {ω ∈ Ω | ξi(ω) ∈ (ai, bi)} ∈ A.


Next, the class B is a σ-algebra of ℝ^n because (i) we have ℝ^n ∈ B since {ω ∈ Ω | ξ(ω) ∈ ℝ^n} = Ω ∈ A; (ii) if B ∈ B, then {ω ∈ Ω | ξ(ω) ∈ B} ∈ A, and

{ω ∈ Ω | ξ(ω) ∈ B^c} = {ω ∈ Ω | ξ(ω) ∈ B}^c ∈ A

which implies that B^c ∈ B; (iii) if Bi ∈ B for i = 1, 2, · · ·, then {ω ∈ Ω | ξ(ω) ∈ Bi} ∈ A and

{ω ∈ Ω | ξ(ω) ∈ ∪_{i=1}^∞ Bi} = ∪_{i=1}^∞ {ω ∈ Ω | ξ(ω) ∈ Bi} ∈ A

which implies that ∪iBi ∈ B. Since the smallest σ-algebra containing all open intervals of ℝ^n is just the Borel algebra of ℝ^n, the class B contains all Borel sets of ℝ^n. The theorem is proved.

Random Arithmetic

Definition 2.10 (Random Arithmetic on a Single Probability Space) Let f : ℝ^n → ℝ be a measurable function, and ξ1, ξ2, · · · , ξn random variables defined on the probability space (Ω,A,Pr). Then ξ = f(ξ1, ξ2, · · · , ξn) is a random variable defined by

ξ(ω) = f(ξ1(ω), ξ2(ω), · · · , ξn(ω)),  ∀ω ∈ Ω.   (2.10)

Example 2.5: Let ξ1 and ξ2 be random variables on the probability space (Ω,A,Pr). Then their sum and product are defined by

(ξ1 + ξ2)(ω) = ξ1(ω) + ξ2(ω),  (ξ1 × ξ2)(ω) = ξ1(ω) × ξ2(ω),  ∀ω ∈ Ω.

Definition 2.11 (Random Arithmetic on Different Probability Spaces) Let f : ℝ^n → ℝ be a measurable function, and ξi random variables defined on probability spaces (Ωi,Ai,Pri), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a random variable on the product probability space (Ω,A,Pr), defined by

ξ(ω1, ω2, · · · , ωn) = f(ξ1(ω1), ξ2(ω2), · · · , ξn(ωn))   (2.11)

for all (ω1, ω2, · · · , ωn) ∈ Ω.

Example 2.6: Let ξ1 and ξ2 be random variables on the probability spaces (Ω1,A1,Pr1) and (Ω2,A2,Pr2), respectively. Then their sum and product are defined by

(ξ1 + ξ2)(ω1, ω2) = ξ1(ω1) + ξ2(ω2),  (ξ1 × ξ2)(ω1, ω2) = ξ1(ω1) × ξ2(ω2)

for any (ω1, ω2) ∈ Ω1 × Ω2.

The reader may wonder whether ξ(ω1, ω2, · · · , ωn) defined by (2.11) is a random variable. The following theorem answers this question.


Theorem 2.7 Let ξ be an n-dimensional random vector, and f : ℝ^n → ℝ a measurable function. Then f(ξ) is a random variable.

Proof: Assume that ξ is a random vector on the probability space (Ω,A,Pr). For any Borel set B of ℝ, since f is a measurable function, f^{−1}(B) is a Borel set of ℝ^n. Thus we have

{ω ∈ Ω | f(ξ(ω)) ∈ B} = {ω ∈ Ω | ξ(ω) ∈ f^{−1}(B)} ∈ A

which implies that f(ξ) is a random variable.

Continuity Theorems

Theorem 2.8 (a) Let {ξi} be an increasing sequence of random variables such that lim_{i→∞} ξi is a random variable. Then for any real number r, we have

lim_{i→∞} Pr{ξi > r} = Pr{lim_{i→∞} ξi > r}.   (2.12)

(b) Let {ξi} be a decreasing sequence of random variables such that lim_{i→∞} ξi is a random variable. Then for any real number r, we have

lim_{i→∞} Pr{ξi ≥ r} = Pr{lim_{i→∞} ξi ≥ r}.   (2.13)

(c) The equations (2.12) and (2.13) remain true if “>” and “≥” are replaced with “≤” and “<”, respectively.

Proof: Since {ξi} is an increasing sequence of random variables, we may prove that

{ξi > r} ↑ {lim_{i→∞} ξi > r}.

It follows from the probability continuity theorem that (2.12) holds. Similarly, if {ξi} is a decreasing sequence of random variables, then we have

{ξi ≥ r} ↓ {lim_{i→∞} ξi ≥ r}

which implies (2.13) by using the probability continuity theorem.

Example 2.7: The symbol “>” cannot be replaced with “≥” in (2.12). Let (Ω,A,Pr) be a probability space on which we define

ξ(ω) = 1,  ξi(ω) = 1 − 1/i,  i = 1, 2, · · ·

for all ω ∈ Ω. Then ξi ↑ ξ as i → ∞. However,

lim_{i→∞} Pr{ξi ≥ 1} = 0 ≠ 1 = Pr{ξ ≥ 1}.


Theorem 2.9 Let {ξi} be a sequence of random variables such that lim inf_{i→∞} ξi and lim sup_{i→∞} ξi are random variables. Then we have

Pr{lim inf_{i→∞} ξi > r} ≤ lim inf_{i→∞} Pr{ξi > r},   (2.14)

Pr{lim sup_{i→∞} ξi ≥ r} ≥ lim sup_{i→∞} Pr{ξi ≥ r},   (2.15)

Pr{lim inf_{i→∞} ξi ≤ r} ≥ lim sup_{i→∞} Pr{ξi ≤ r} ≥ lim inf_{i→∞} Pr{ξi ≤ r},   (2.16)

Pr{lim sup_{i→∞} ξi < r} ≤ lim inf_{i→∞} Pr{ξi < r} ≤ lim sup_{i→∞} Pr{ξi < r}.   (2.17)

Proof: It is clear that inf_{i≥k} ξi is increasing in k and inf_{i≥k} ξi ≤ ξk for each k. It follows from Theorem 2.8 that

Pr{lim inf_{i→∞} ξi > r} = Pr{lim_{k→∞} inf_{i≥k} ξi > r} = lim_{k→∞} Pr{inf_{i≥k} ξi > r} ≤ lim inf_{k→∞} Pr{ξk > r}.

The inequality (2.14) is proved. Similarly, sup_{i≥k} ξi is decreasing in k and sup_{i≥k} ξi ≥ ξk for each k. It follows from Theorem 2.8 that

Pr{lim sup_{i→∞} ξi ≥ r} = Pr{lim_{k→∞} sup_{i≥k} ξi ≥ r} = lim_{k→∞} Pr{sup_{i≥k} ξi ≥ r} ≥ lim sup_{k→∞} Pr{ξk ≥ r}.

The inequality (2.15) is proved. Furthermore, we have

Pr{lim inf_{i→∞} ξi ≤ r} = Pr{lim_{k→∞} inf_{i≥k} ξi ≤ r} = lim_{k→∞} Pr{inf_{i≥k} ξi ≤ r} ≥ lim sup_{k→∞} Pr{ξk ≤ r}.

The inequality (2.16) is proved. Similarly,

Pr{lim sup_{i→∞} ξi < r} = Pr{lim_{k→∞} sup_{i≥k} ξi < r} = lim_{k→∞} Pr{sup_{i≥k} ξi < r} ≤ lim inf_{k→∞} Pr{ξk < r}.


The inequality (2.17) is proved.

Theorem 2.10 Let {ξi} be a sequence of random variables such that the limit lim_{i→∞} ξi exists and is a random variable. Then for almost all r ∈ ℝ, we have

lim_{i→∞} Pr{ξi ≥ r} = Pr{lim_{i→∞} ξi ≥ r}.   (2.18)

The equation (2.18) remains true if “≥” is replaced with “≤”, “>” or “<”.

Proof: Write ξi → ξ. Note that Pr{ξ ≥ r} is a decreasing function of r and continuous almost everywhere. The theorem is proved if we can verify that (2.18) holds at any continuity point r0 of Pr{ξ ≥ r}. For any given ε > 0, there exists δ > 0 such that

|Pr{ξ ≥ r0 ± δ} − Pr{ξ ≥ r0}| ≤ ε/2.   (2.19)

Now we define

Ωn = ∩_{i=n}^∞ {|ξi − ξ| < δ},  n = 1, 2, · · ·

Then {Ωn} is an increasing sequence such that Ωn → Ω. Thus there exists an integer m such that Pr{Ωm} > 1 − ε/2 and Pr{Ωm^c} < ε/2. For any i > m, we have

{ξi ≥ r0} = ({ξi ≥ r0} ∩ Ωm) ∪ ({ξi ≥ r0} ∩ Ωm^c) ⊂ {ξ ≥ r0 − δ} ∪ Ωm^c.

By using (2.19), we get

Pr{ξi ≥ r0} ≤ Pr{ξ ≥ r0 − δ} + Pr{Ωm^c} ≤ Pr{ξ ≥ r0} + ε.   (2.20)

Similarly, for i > m, we have

{ξ ≥ r0 + δ} = ({ξ ≥ r0 + δ} ∩ Ωm) ∪ ({ξ ≥ r0 + δ} ∩ Ωm^c) ⊂ {ξi ≥ r0} ∪ Ωm^c.

By using (2.19), we get

Pr{ξ ≥ r0} − ε/2 ≤ Pr{ξ ≥ r0 + δ} ≤ Pr{ξi ≥ r0} + ε/2.   (2.21)

It follows from (2.20) and (2.21) that

Pr{ξ ≥ r0} − ε ≤ Pr{ξi ≥ r0} ≤ Pr{ξ ≥ r0} + ε.

Letting ε → 0, we obtain Pr{ξi ≥ r0} → Pr{ξ ≥ r0}. The theorem is proved.


2.3 Probability Distribution

Definition 2.12 The probability distribution Φ : [−∞,+∞] → [0, 1] of a random variable ξ is defined by

Φ(x) = Pr{ω ∈ Ω | ξ(ω) ≤ x}.   (2.22)

That is, Φ(x) is the probability that the random variable ξ takes a value less than or equal to x.

Theorem 2.11 The probability distribution Φ : [−∞,+∞] → [0, 1] of a random variable ξ is a nondecreasing and right-continuous function with

lim_{x→−∞} Φ(x) = Φ(−∞) = 0,  lim_{x→+∞} Φ(x) = Φ(+∞) = 1.   (2.23)

Conversely, if Φ : [−∞,+∞] → [0, 1] is a nondecreasing and right-continuous function satisfying (2.23), then there is a unique probability measure Pr on the Borel algebra of ℝ such that Pr{(−∞, x]} = Φ(x) for all x ∈ [−∞,+∞]. Furthermore, the random variable defined as the identity function

ξ(x) = x,  ∀x ∈ ℝ   (2.24)

from the probability space (ℝ,A,Pr) to ℝ has the probability distribution Φ.

Proof: For any x, y ∈ ℝ with x < y, we have

Φ(y) − Φ(x) = Pr{x < ξ ≤ y} ≥ 0.

Thus the probability distribution Φ is nondecreasing. Next, let {εi} be a sequence of positive numbers such that εi → 0 as i → ∞. Then, for every i ≥ 1, we have

Φ(x + εi) − Φ(x) = Pr{x < ξ ≤ x + εi}.

It follows from the probability continuity theorem that

lim_{i→∞} (Φ(x + εi) − Φ(x)) = Pr{∅} = 0.

Hence Φ is a right-continuous function. Finally,

lim_{x→−∞} Φ(x) = lim_{x→−∞} Pr{ξ ≤ x} = Pr{∅} = 0,  lim_{x→+∞} Φ(x) = lim_{x→+∞} Pr{ξ ≤ x} = Pr{Ω} = 1.

Conversely, it follows from Theorem 1.21 that there is a unique probability measure Pr on the Borel algebra of ℝ such that Pr{(−∞, x]} = Φ(x) for all


x ∈ [−∞,+∞]. Furthermore, it is easy to verify that the random variable defined by (2.24) from the probability space (ℝ,A,Pr) to ℝ has the probability distribution Φ.

Theorem 2.11 states that the identity function is a universal function for any probability distribution, provided an appropriate probability space is defined. In fact, there is also a universal probability space for any probability distribution, provided an appropriate function is defined. This is shown by the following theorem.

Theorem 2.12 Assume that Ω = (0, 1), A is the Borel algebra on Ω, and Pr is the Lebesgue measure. Then (Ω,A,Pr) is a probability space. If Φ is a probability distribution, then the function

ξ(ω) = sup{x | Φ(x) ≤ ω}   (2.25)

from Ω to ℝ is a random variable whose probability distribution is just Φ.

Proof: Since ξ(ω) is an increasing function of ω, it is a measurable function. Thus ξ is a random variable. For any y ∈ ℝ, we have

Pr{ξ ≤ y} = Pr{ω | ω ≤ Φ(y)} = Φ(y).

The theorem is proved.
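Theorem 2.12 is the basis of the inverse transform method of stochastic simulation: feeding uniform points ω ∈ (0, 1) into ξ(ω) = sup{x | Φ(x) ≤ ω} produces samples with distribution Φ. The sketch below is ours, not the book's; it assumes NumPy and uses the exponential distribution Φ(x) = 1 − e^{−x} purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=100_000)   # sample points of Omega = (0, 1)

# For Phi(x) = 1 - exp(-x), x >= 0, we have sup{x : Phi(x) <= omega} = -log(1 - omega)
xi = -np.log(1.0 - omega)

for x in (0.5, 1.0, 2.0):
    empirical = np.mean(xi <= x)              # empirical Pr{xi <= x}
    print(x, empirical, 1.0 - np.exp(-x))     # should agree with Phi(x) up to sampling error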

Example 2.8: Assume that the random variables ξ and η have the same probability distribution. One question is whether ξ = η or not. Generally speaking, it is not true. Let Ω = {ω1, ω2} and

Pr{ω1} = Pr{ω2} = 1/2.

Then (Ω,A,Pr) is a probability space. We now define two random variables as follows,

ξ(ω) = −1 if ω = ω1, and 1 if ω = ω2;  η(ω) = 1 if ω = ω1, and −1 if ω = ω2.

Then ξ and η have the same probability distribution,

Φ(x) = 0 if x < −1;  0.5 if −1 ≤ x < 1;  1 if x ≥ 1.

However, it is clear that ξ ≠ η in the sense of Definition 2.8.

Since a probability distribution is a monotone function, the set of discontinuity points of the probability distribution is countable. In other words, the set of continuity points is dense in ℝ.


Theorem 2.13 Let Φ1 and Φ2 be two probability distributions such that Φ1(x) = Φ2(x) for all x ∈ D, a dense subset of ℝ. Then Φ1 ≡ Φ2.

Proof: Since D is dense, for any point x there exists a sequence {xi} in D such that xi ↓ x as i → ∞, and Φ1(xi) = Φ2(xi) for all i. It follows from the right-continuity of probability distributions that Φ1(x) = Φ2(x). The theorem is proved.

Theorem 2.14 A random variable ξ with probability distribution Φ is
(a) nonnegative if and only if Φ(x) = 0 for all x < 0;
(b) positive if and only if Φ(x) = 0 for all x ≤ 0;
(c) simple if and only if Φ is a simple function;
(d) discrete if and only if Φ is a step function;
(e) continuous if and only if Φ is a continuous function.

Proof: The parts (a), (b), (c) and (d) follow immediately from the definition. Next we prove part (e). If ξ is a continuous random variable, then Pr{ξ = x} = 0. It follows from the probability continuity theorem that

lim_{y↑x} (Φ(x) − Φ(y)) = lim_{y↑x} Pr{y < ξ ≤ x} = Pr{ξ = x} = 0

which proves the left-continuity of Φ. Since a probability distribution is always right-continuous, Φ is continuous. Conversely, if Φ is continuous, then we immediately have Pr{ξ = x} = 0 for each x ∈ ℝ.

Definition 2.13 A continuous random variable is said to be
(a) singular if its probability distribution is a singular function;
(b) absolutely continuous if its probability distribution is an absolutely continuous function.

Theorem 2.15 Let Φ be the probability distribution of a random variable. Then

Φ(x) = r1Φ1(x) + r2Φ2(x) + r3Φ3(x),  x ∈ ℝ   (2.26)

where Φ1, Φ2, Φ3 are probability distributions of discrete, singular and absolutely continuous random variables, respectively, and r1, r2, r3 are nonnegative numbers such that r1 + r2 + r3 = 1. Furthermore, the decomposition (2.26) is unique.

Proof: Let {xi} be the countable set of all discontinuity points of Φ. We define a function

f1(x) = Σ_{xi≤x} (Φ(xi) − lim_{y↑xi} Φ(y)),  x ∈ ℝ.

Then f1(x) is a step function which is increasing and right-continuous in x. Now we set

f2(x) = Φ(x) − f1(x),  x ∈ ℝ.


Then we have

lim_{z↓x} f2(z) − f2(x) = lim_{z↓x} (Φ(z) − Φ(x)) − lim_{z↓x} (f1(z) − f1(x)) = 0,
lim_{z↑x} f2(z) − f2(x) = lim_{z↑x} (Φ(z) − Φ(x)) − lim_{z↑x} (f1(z) − f1(x)) = 0.

That is, the function f2(x) is continuous. Next we prove that f2(x) is increasing. Let x′ < x be given. Then we may verify that

Σ_{x′<xi≤x} (Φ(xi) − lim_{y↑xi} Φ(y)) ≤ Φ(x) − Φ(x′).

Thus we have

f2(x) − f2(x′) = Φ(x) − Φ(x′) − Σ_{x′<xi≤x} (Φ(xi) − lim_{y↑xi} Φ(y)) ≥ 0

which implies that f2(x) is an increasing function of x. It has been proved that the increasing continuous function f2 has a unique decomposition f2 = g2 + g3, where g2 is an increasing singular function and g3 is an increasing absolutely continuous function. Thus

Φ(x) = f1(x) + g2(x) + g3(x),  ∀x ∈ ℝ.

We denote

lim_{x→∞} f1(x) = r1,  lim_{x→∞} g2(x) = r2,  lim_{x→∞} g3(x) = r3

where r1, r2, r3 are nonnegative numbers such that r1 + r2 + r3 = 1. For nonzero r1, r2, r3, we set

Φ1(x) = f1(x)/r1,  Φ2(x) = g2(x)/r2,  Φ3(x) = g3(x)/r3,  x ∈ ℝ.

It is easy to verify that Φ1, Φ2, Φ3 are probability distributions of discrete, singular and absolutely continuous random variables, respectively, and (2.26) is met. Since the step function part is unique, the decomposition is unique, too.

Theorem 2.16 Let ξ be a random variable. Then the function Pr{ξ ≥ x} is decreasing and left-continuous.

Proof: The function Pr{ξ ≥ x} is clearly decreasing. Next, let {εi} be a sequence of positive numbers such that εi → 0 as i → ∞. Then, for every i ≥ 1, we have

Pr{ξ ≥ x − εi} − Pr{ξ ≥ x} = Pr{x − εi ≤ ξ < x}.

It follows from the probability continuity theorem that

lim_{i→∞} (Pr{ξ ≥ x − εi} − Pr{ξ ≥ x}) = Pr{∅} = 0.

Hence Pr{ξ ≥ x} is a left-continuous function.


Probability Density Function

Definition 2.14 The probability density function φ : ℝ → [0,+∞) of a random variable ξ is a function such that

Φ(x) = ∫_{−∞}^x φ(y)dy   (2.27)

holds for all x ∈ [−∞,+∞], where Φ is the probability distribution of the random variable ξ.

Example 2.9: The probability density function may not exist even if the probability distribution is continuous and differentiable a.e. Recall the Cantor function f defined on Page 12. We set

Φ(x) = 0 if x ∈ [−∞, 0);  f(x) if x ∈ [0, 1];  1 if x ∈ (1,+∞].   (2.28)

Then Φ is a nondecreasing and continuous function with Φ(x) → 0 as x → −∞ and Φ(x) → 1 as x → ∞. Hence it is a probability distribution. Note that Φ′(x) = 0 almost everywhere, and

∫_{−∞}^{+∞} Φ′(x)dx = 0 ≠ 1.

Thus the probability density function does not exist.

Remark 2.1: Let φ : ℝ → [0,+∞) be a measurable function such that ∫_{−∞}^{+∞} φ(x)dx = 1. Then φ is the probability density function of some random variable because

Φ(x) = ∫_{−∞}^x φ(y)dy

is a nondecreasing and continuous function satisfying (2.23).

Theorem 2.17 Let ξ be a random variable whose probability density function φ exists. Then for any Borel set B of ℝ, we have

Pr{ξ ∈ B} = ∫_B φ(y)dy.   (2.29)

Proof: Let C be the class of all subsets C of ℝ for which the relation

Pr{ξ ∈ C} = ∫_C φ(y)dy   (2.30)

holds. We will show that C contains all Borel sets of ℝ. It follows from the probability continuity theorem and relation (2.30) that C is a monotone class.


It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and ℝ since

Pr{ξ ∈ (−∞, a]} = Φ(a) = ∫_{−∞}^a φ(y)dy,
Pr{ξ ∈ (b,+∞)} = Φ(+∞) − Φ(b) = ∫_b^{+∞} φ(y)dy,
Pr{ξ ∈ (a, b]} = Φ(b) − Φ(a) = ∫_a^b φ(y)dy,
Pr{ξ ∈ ℝ} = Φ(+∞) = ∫_{−∞}^{+∞} φ(y)dy

where Φ is the probability distribution of ξ. Let F be the class of all finite unions of disjoint sets of the form (−∞, a], (a, b], (b,∞) and ℝ. Note that for any disjoint sets C1, C2, · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm, we have

Pr{ξ ∈ C} = Σ_{j=1}^m Pr{ξ ∈ Cj} = Σ_{j=1}^m ∫_{Cj} φ(y)dy = ∫_C φ(y)dy.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the class F is an algebra. Since the smallest σ-algebra containing F is just the Borel algebra of ℝ, the monotone class theorem implies that C contains all Borel sets of ℝ.

Definition 2.15 The joint probability distribution Φ : [−∞,+∞]^n → [0, 1] of a random vector (ξ1, ξ2, · · · , ξn) is defined by

Φ(x1, x2, · · · , xn) = Pr{ω ∈ Ω | ξ1(ω) ≤ x1, ξ2(ω) ≤ x2, · · · , ξn(ω) ≤ xn}.

Definition 2.16 The joint probability density function φ : ℝ^n → [0,+∞) of a random vector (ξ1, ξ2, · · · , ξn) is a function such that

Φ(x1, x2, · · · , xn) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xn} φ(y1, y2, · · · , yn)dy1dy2 · · · dyn   (2.31)

holds for all (x1, x2, · · · , xn) ∈ [−∞,+∞]^n, where Φ is the probability distribution of the random vector (ξ1, ξ2, · · · , ξn).

2.4 Independent and Identical Distribution

Definition 2.17 The random variables ξ1, ξ2, · · · , ξm are said to be independent if and only if

Pr{ξi ∈ Bi, i = 1, 2, · · · ,m} = Π_{i=1}^m Pr{ξi ∈ Bi}   (2.32)

for any Borel sets B1, B2, · · · , Bm of ℝ.


Definition 2.18 The random variables ξi, i ∈ I are said to be independent if and only if for all finite collections {i1, i2, · · · , ik} of distinct indices in I, we have

Pr{ξij ∈ Bij , j = 1, 2, · · · , k} = Π_{j=1}^k Pr{ξij ∈ Bij}   (2.33)

for any Borel sets Bi1 , Bi2 , · · · , Bik of ℝ.

Theorem 2.18 Let ξ1, ξ2, · · · , ξm be independent random variables, and fi : ℝ → ℝ measurable functions for i = 1, 2, · · · ,m. Then the random variables f1(ξ1), f2(ξ2), · · · , fm(ξm) are independent.

Proof: For any Borel sets B1, B2, · · · , Bm of ℝ, we have

Pr{f1(ξ1) ∈ B1, f2(ξ2) ∈ B2, · · · , fm(ξm) ∈ Bm}
= Pr{ξ1 ∈ f1^{−1}(B1), ξ2 ∈ f2^{−1}(B2), · · · , ξm ∈ fm^{−1}(Bm)}
= Pr{ξ1 ∈ f1^{−1}(B1)} Pr{ξ2 ∈ f2^{−1}(B2)} · · · Pr{ξm ∈ fm^{−1}(Bm)}
= Pr{f1(ξ1) ∈ B1} Pr{f2(ξ2) ∈ B2} · · · Pr{fm(ξm) ∈ Bm}.

Thus f1(ξ1), f2(ξ2), · · · , fm(ξm) are independent random variables.

Theorem 2.19 Let ξi be random variables with probability distributions Φi, i = 1, 2, · · · ,m, respectively, and Φ the probability distribution of the vector (ξ1, ξ2, · · · , ξm). Then ξ1, ξ2, · · · , ξm are independent if and only if

Φ(x1, x2, · · · , xm) = Φ1(x1)Φ2(x2) · · · Φm(xm)   (2.34)

for all (x1, x2, · · · , xm) ∈ ℝ^m.

Proof: If ξ1, ξ2, · · · , ξm are independent random variables, then we have

Φ(x1, x2, · · · , xm) = Pr{ξ1 ≤ x1, ξ2 ≤ x2, · · · , ξm ≤ xm} = Pr{ξ1 ≤ x1} Pr{ξ2 ≤ x2} · · · Pr{ξm ≤ xm} = Φ1(x1)Φ2(x2) · · · Φm(xm)

for all (x1, x2, · · · , xm) ∈ ℝ^m.

Conversely, assume that (2.34) holds. Let x2, x3, · · · , xm be fixed real numbers, and C the class of all subsets C of ℝ for which the relation

Pr{ξ1 ∈ C, ξ2 ≤ x2, · · · , ξm ≤ xm} = Pr{ξ1 ∈ C} Π_{i=2}^m Pr{ξi ≤ xi}   (2.35)

holds. We will show that C contains all Borel sets of ℝ. It follows from the probability continuity theorem and relation (2.35) that C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞)


and ℝ. Let F be the class of all finite unions of disjoint sets of the form (−∞, a], (a, b], (b,∞) and ℝ. Note that for any disjoint sets C1, C2, · · · , Ck of F and C = C1 ∪ C2 ∪ · · · ∪ Ck, we have

Pr{ξ1 ∈ C, ξ2 ≤ x2, · · · , ξm ≤ xm} = Σ_{j=1}^k Pr{ξ1 ∈ Cj , ξ2 ≤ x2, · · · , ξm ≤ xm} = Pr{ξ1 ∈ C} Pr{ξ2 ≤ x2} · · · Pr{ξm ≤ xm}.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the class F is an algebra. Since the smallest σ-algebra containing F is just the Borel algebra of ℝ, the monotone class theorem implies that C contains all Borel sets of ℝ.

Applying the same reasoning to each ξi in turn, we obtain the independence of the random variables.

Theorem 2.20 Let ξi be random variables with probability density functions φi, i = 1, 2, · · · ,m, respectively, and φ the probability density function of the vector (ξ1, ξ2, · · · , ξm). Then ξ1, ξ2, · · · , ξm are independent if and only if

φ(x1, x2, · · · , xm) = φ1(x1)φ2(x2) · · · φm(xm)   (2.36)

for almost all (x1, x2, · · · , xm) ∈ ℝ^m.

Proof: If φ(x1, x2, · · · , xm) = φ1(x1)φ2(x2) · · · φm(xm) a.e., then we have

Φ(x1, x2, · · · , xm) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xm} φ(t1, t2, · · · , tm)dt1dt2 · · · dtm
= ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xm} φ1(t1)φ2(t2) · · · φm(tm)dt1dt2 · · · dtm
= ∫_{−∞}^{x1} φ1(t1)dt1 ∫_{−∞}^{x2} φ2(t2)dt2 · · · ∫_{−∞}^{xm} φm(tm)dtm
= Φ1(x1)Φ2(x2) · · · Φm(xm)

for all (x1, x2, · · · , xm) ∈ ℝ^m. Thus ξ1, ξ2, · · · , ξm are independent. Conversely, if ξ1, ξ2, · · · , ξm are independent, then for any (x1, x2, · · · , xm) ∈ ℝ^m, we have

Φ(x1, x2, · · · , xm) = Φ1(x1)Φ2(x2) · · · Φm(xm) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xm} φ1(t1)φ2(t2) · · · φm(tm)dt1dt2 · · · dtm

which implies that φ(x1, x2, · · · , xm) = φ1(x1)φ2(x2) · · · φm(xm) a.e.


Identical Distribution

Definition 2.19 The random variables ξ1, ξ2, · · · , ξm are said to be identically distributed if and only if

Pr{ξi ∈ B} = Pr{ξj ∈ B},  i, j = 1, 2, · · · ,m   (2.37)

for any Borel set B of ℝ.

Theorem 2.21 The random variables ξ and η are identically distributed if and only if they have the same probability distribution.

Proof: Let Φ and Ψ be the probability distributions of ξ and η, respectively. If ξ and η are identically distributed random variables, then, for any x ∈ ℝ, we have

Φ(x) = Pr{ξ ∈ (−∞, x]} = Pr{η ∈ (−∞, x]} = Ψ(x).

Thus ξ and η have the same probability distribution.

Conversely, assume that ξ and η have the same probability distribution. Let C be the class of all subsets C of ℝ for which the relation

Pr{ξ ∈ C} = Pr{η ∈ C}   (2.38)

holds. We will show that C contains all Borel sets of ℝ. It follows from the probability continuity theorem and relation (2.38) that C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and ℝ since ξ and η have the same probability distribution. Let F be the class of all finite unions of disjoint sets of the form (−∞, a], (a, b], (b,∞) and ℝ. Note that for any disjoint sets C1, C2, · · · , Ck of F and C = C1 ∪ C2 ∪ · · · ∪ Ck, we have

Pr{ξ ∈ C} = Σ_{j=1}^k Pr{ξ ∈ Cj} = Σ_{j=1}^k Pr{η ∈ Cj} = Pr{η ∈ C}.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the class F is an algebra. Since the smallest σ-algebra containing F is just the Borel algebra of ℝ, the monotone class theorem implies that C contains all Borel sets of ℝ.

Theorem 2.22 Let φ and ψ be the probability density functions of random variables ξ and η, respectively. Then ξ and η are identically distributed if and only if φ = ψ a.e.

Proof: It follows from Theorem 2.21 that the random variables ξ and η are identically distributed if and only if they have the same probability distribution, if and only if φ = ψ a.e.


Example 2.10: Let ξ be a random variable, and a a positive number. Then

ξ* = ξ if |ξ| < a, and 0 otherwise

is a bounded random variable known as ξ truncated at a. Let ξ1, ξ2, · · · , ξn be independent and identically distributed (iid) random variables. Then for any given a > 0, the random variables ξ1*, ξ2*, · · · , ξn* are iid.
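The truncation in Example 2.10 is a fixed measurable transformation applied to each ξi, which is why the iid property is preserved (Theorem 2.18). A tiny illustrative sketch, our own rather than the book's, assuming NumPy, a standard normal sample and the level a = 2:

import numpy as np

rng = np.random.default_rng(1)
a = 2.0
xi = rng.normal(size=8)                       # an iid sample
xi_star = np.where(np.abs(xi) < a, xi, 0.0)   # xi truncated at a: keep xi if |xi| < a, else 0
print(xi)
print(xi_star)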

Definition 2.20 The n-dimensional random vectors ξ1, ξ2, · · · , ξm are said to be independent if and only if

Pr{ξi ∈ Bi, i = 1, 2, · · · ,m} = Π_{i=1}^m Pr{ξi ∈ Bi}   (2.39)

for any Borel sets B1, B2, · · · , Bm of ℝ^n.

Definition 2.21 The n-dimensional random vectors ξ1, ξ2, · · · , ξm are said to be identically distributed if and only if

Pr{ξi ∈ B} = Pr{ξj ∈ B},  i, j = 1, 2, · · · ,m   (2.40)

for any Borel set B of ℝ^n.

2.5 Expected Value Operator

Definition 2.22 Let ξ be a random variable. Then the expected value of ξ is defined by

E[ξ] = ∫_0^{+∞} Pr{ξ ≥ r}dr − ∫_{−∞}^0 Pr{ξ ≤ r}dr   (2.41)

provided that at least one of the two integrals is finite.

Example 2.11: Assume that ξ is a discrete random variable taking the value ai with probability pi for i = 1, 2, · · · ,m. Note that p1 + p2 + · · · + pm = 1. It follows from the definition of the expected value operator that

E[ξ] = Σ_{i=1}^m ai pi.
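For a nonnegative discrete random variable, the tail integral in Definition 2.22 can be evaluated numerically and compared with the sum Σ ai pi of Example 2.11. The sketch below is ours; the values ai and probabilities pi are invented for illustration, and NumPy is assumed. It approximates ∫_0^∞ Pr{ξ ≥ r}dr by a Riemann sum.

import numpy as np

a = np.array([1.0, 2.0, 5.0])    # values a_i
p = np.array([0.2, 0.5, 0.3])    # probabilities p_i, summing to 1

def tail(r):
    # Pr{xi >= r}
    return float(p[a >= r].sum())

dr = 0.001
grid = np.arange(0.0, a.max() + dr, dr)
tail_integral = sum(tail(r) for r in grid) * dr   # approximates the integral in (2.41)

print(tail_integral)             # approximately 2.7
print(float((a * p).sum()))      # exactly 2.7, as in Example 2.11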


Theorem 2.23 Let ξ be a nonnegative random variable. Then

Σ_{i=1}^∞ Pr{ξ ≥ i} ≤ E[ξ] ≤ 1 + Σ_{i=1}^∞ Pr{ξ ≥ i},   (2.42)

Σ_{i=1}^∞ i Pr{i + 1 > ξ ≥ i} ≤ E[ξ] ≤ Σ_{i=0}^∞ (i + 1) Pr{i + 1 > ξ ≥ i}.   (2.43)

Proof: Since Pr{ξ ≥ r} is a decreasing function of r, we have

E[ξ] = Σ_{i=1}^∞ ∫_{i−1}^i Pr{ξ ≥ r}dr ≥ Σ_{i=1}^∞ ∫_{i−1}^i Pr{ξ ≥ i}dr = Σ_{i=1}^∞ Pr{ξ ≥ i},

E[ξ] = Σ_{i=1}^∞ ∫_{i−1}^i Pr{ξ ≥ r}dr ≤ Σ_{i=1}^∞ ∫_{i−1}^i Pr{ξ ≥ i − 1}dr = 1 + Σ_{i=1}^∞ Pr{ξ ≥ i}.

Thus (2.42) is proved. The inequality (2.43) follows from the following two equations,

Σ_{i=1}^∞ Pr{ξ ≥ i} = Σ_{i=1}^∞ Σ_{j=i}^∞ Pr{j + 1 > ξ ≥ j} = Σ_{j=1}^∞ Σ_{i=1}^j Pr{j + 1 > ξ ≥ j} = Σ_{j=1}^∞ j Pr{j + 1 > ξ ≥ j},

1 + Σ_{i=1}^∞ Pr{ξ ≥ i} = Σ_{i=0}^∞ Pr{i + 1 > ξ ≥ i} + Σ_{i=1}^∞ i Pr{i + 1 > ξ ≥ i} = Σ_{i=0}^∞ (i + 1) Pr{i + 1 > ξ ≥ i}.

Theorem 2.24 (Moments Lemma) Let ξ be a random variable, and t a positive number. Then E[|ξ|^t] < ∞ if and only if

Σ_{i=1}^∞ Pr{|ξ| ≥ i^{1/t}} < ∞.   (2.44)

Proof: The theorem follows immediately from Pr{|ξ|^t ≥ i} = Pr{|ξ| ≥ i^{1/t}} and Theorem 2.23.

Theorem 2.25 Let ξ be a random variable, and t a positive number. If E[|ξ|^t] < ∞, then

lim_{x→∞} x^t Pr{|ξ| ≥ x} = 0.   (2.45)

Conversely, let ξ be a random variable satisfying (2.45) for some t > 0. Then E[|ξ|^s] < ∞ for any 0 ≤ s < t.


Proof: It follows from the definition of expected value that

E[|ξ|^t] = ∫_0^∞ Pr{|ξ|^t ≥ r}dr < ∞.

Thus we have

lim_{x→∞} ∫_{x^t/2}^∞ Pr{|ξ|^t ≥ r}dr = 0.

The equation (2.45) is proved by the following relation,

∫_{x^t/2}^∞ Pr{|ξ|^t ≥ r}dr ≥ ∫_{x^t/2}^{x^t} Pr{|ξ|^t ≥ r}dr ≥ (1/2) x^t Pr{|ξ| ≥ x}.

Conversely, if (2.45) holds, then there exists a number a > 0 such that

x^t Pr{|ξ| ≥ x} ≤ 1,  ∀x ≥ a.

Hence Pr{|ξ|^s ≥ r} = Pr{|ξ| ≥ r^{1/s}} ≤ r^{−t/s} for all r ≥ a^s, and we have

E[|ξ|^s] = ∫_0^{a^s} Pr{|ξ|^s ≥ r}dr + ∫_{a^s}^{+∞} Pr{|ξ|^s ≥ r}dr ≤ a^s + ∫_{a^s}^{+∞} r^{−t/s}dr < +∞

since −t/s < −1 whenever 0 < s < t (the case s = 0 is trivial). The theorem is proved.

Example 2.12: The condition (2.45) does not ensure that E[|ξ|^t] < ∞. Consider the positive random variable ξ taking the value (2^i/i)^{1/t} with probability 1/2^i, i = 1, 2, · · · It is clear that

lim_{x→∞} x^t Pr{ξ ≥ x} = lim_{n→∞} ((2^n/n)^{1/t})^t Σ_{i=n}^∞ 1/2^i = lim_{n→∞} 2/n = 0.

However, the expected value of ξ^t is

E[ξ^t] = Σ_{i=1}^∞ (2^i/i) · (1/2^i) = Σ_{i=1}^∞ 1/i = ∞.


Theorem 2.26 Let ξ be a random variable whose probability density function φ exists. If the Lebesgue integral

∫_{−∞}^{+∞} xφ(x)dx

is finite, then we have

E[ξ] = ∫_{−∞}^{+∞} xφ(x)dx.   (2.46)

Proof: It follows from Definition 2.22 and the Fubini theorem that

E[ξ] = ∫_0^{+∞} Pr{ξ ≥ r}dr − ∫_{−∞}^0 Pr{ξ ≤ r}dr
= ∫_0^{+∞} [∫_r^{+∞} φ(x)dx] dr − ∫_{−∞}^0 [∫_{−∞}^r φ(x)dx] dr
= ∫_0^{+∞} [∫_0^x φ(x)dr] dx − ∫_{−∞}^0 [∫_x^0 φ(x)dr] dx
= ∫_0^{+∞} xφ(x)dx + ∫_{−∞}^0 xφ(x)dx
= ∫_{−∞}^{+∞} xφ(x)dx.

The theorem is proved.

Theorem 2.27 Let ξ be a random variable with probability distribution Φ. If the Lebesgue-Stieltjes integral

∫_{−∞}^{+∞} x dΦ(x)

is finite, then we have

E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).   (2.47)

Proof: Since the Lebesgue-Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, we immediately have

lim_{y→+∞} ∫_0^y x dΦ(x) = ∫_0^{+∞} x dΦ(x),  lim_{y→−∞} ∫_y^0 x dΦ(x) = ∫_{−∞}^0 x dΦ(x)

and

lim_{y→+∞} ∫_y^{+∞} x dΦ(x) = 0,  lim_{y→−∞} ∫_{−∞}^y x dΦ(x) = 0.


It follows from

∫_y^{+∞} x dΦ(x) ≥ y (lim_{z→+∞} Φ(z) − Φ(y)) = y(1 − Φ(y)) ≥ 0 if y > 0,
∫_{−∞}^y x dΦ(x) ≤ y (Φ(y) − lim_{z→−∞} Φ(z)) = yΦ(y) ≤ 0 if y < 0

that

lim_{y→+∞} y(1 − Φ(y)) = 0,  lim_{y→−∞} yΦ(y) = 0.

Let 0 = x0 < x1 < x2 < · · · < xn = y be a partition of [0, y]. Then we have

Σ_{i=0}^{n−1} xi (Φ(xi+1) − Φ(xi)) → ∫_0^y x dΦ(x)

and

Σ_{i=0}^{n−1} (1 − Φ(xi+1))(xi+1 − xi) → ∫_0^y Pr{ξ ≥ r}dr

as max{|xi+1 − xi| : i = 0, 1, · · · , n − 1} → 0. Since

Σ_{i=0}^{n−1} xi (Φ(xi+1) − Φ(xi)) − Σ_{i=0}^{n−1} (1 − Φ(xi+1))(xi+1 − xi) = y(Φ(y) − 1) → 0

as y → +∞, this fact implies that

∫_0^{+∞} Pr{ξ ≥ r}dr = ∫_0^{+∞} x dΦ(x).

A similar argument proves that

−∫_{−∞}^0 Pr{ξ ≤ r}dr = ∫_{−∞}^0 x dΦ(x).

Thus (2.47) is verified by the above two equations.

Linearity of Expected Value Operator

Theorem 2.28 Let ξ be a random variable whose expected value exists. Then for any numbers a and b, we have

E[aξ + b] = aE[ξ] + b.   (2.48)


Proof: In order to prove the theorem, it suffices to verify that E[ξ + b] = E[ξ] + b and E[aξ] = aE[ξ]. It follows from the definition of the expected value operator that, if b ≥ 0,

E[ξ + b] = ∫_0^∞ Pr{ξ + b ≥ r}dr − ∫_{−∞}^0 Pr{ξ + b ≤ r}dr
= ∫_0^∞ Pr{ξ ≥ r − b}dr − ∫_{−∞}^0 Pr{ξ ≤ r − b}dr
= E[ξ] + ∫_0^b (Pr{ξ ≥ r − b} + Pr{ξ < r − b}) dr
= E[ξ] + b.

If b < 0, then we have

E[ξ + b] = E[ξ] − ∫_b^0 (Pr{ξ ≥ r − b} + Pr{ξ < r − b}) dr = E[ξ] + b.

On the other hand, if a = 0, then the equation E[aξ] = aE[ξ] holds trivially. If a > 0, we have

E[aξ] = ∫_0^∞ Pr{aξ ≥ r}dr − ∫_{−∞}^0 Pr{aξ ≤ r}dr
= ∫_0^∞ Pr{ξ ≥ r/a}dr − ∫_{−∞}^0 Pr{ξ ≤ r/a}dr
= a ∫_0^∞ Pr{ξ ≥ r/a}d(r/a) − a ∫_{−∞}^0 Pr{ξ ≤ r/a}d(r/a)
= aE[ξ].

The equation E[aξ] = aE[ξ] is proved if we verify that E[−ξ] = −E[ξ]. In fact,

E[−ξ] = ∫_0^∞ Pr{−ξ ≥ r}dr − ∫_{−∞}^0 Pr{−ξ ≤ r}dr
= ∫_0^∞ Pr{ξ ≤ −r}dr − ∫_{−∞}^0 Pr{ξ ≥ −r}dr
= ∫_{−∞}^0 Pr{ξ ≤ r}dr − ∫_0^∞ Pr{ξ ≥ r}dr
= −E[ξ].

The proof is finished.

Theorem 2.29 Let ξ and η be random variables with finite expected values. Then we have

E[ξ + η] = E[ξ] + E[η].   (2.49)


Proof: Step 1: We first prove the case where both ξ and η are nonnegative simple random variables taking values a1, a2, · · · , am and b1, b2, · · · , bn, respectively. Then ξ + η is also a nonnegative simple random variable taking values ai + bj , i = 1, 2, · · · ,m, j = 1, 2, · · · , n. Thus we have

E[ξ + η] = Σ_{i=1}^m Σ_{j=1}^n (ai + bj) Pr{ξ = ai, η = bj}
= Σ_{i=1}^m Σ_{j=1}^n ai Pr{ξ = ai, η = bj} + Σ_{i=1}^m Σ_{j=1}^n bj Pr{ξ = ai, η = bj}
= Σ_{i=1}^m ai Pr{ξ = ai} + Σ_{j=1}^n bj Pr{η = bj}
= E[ξ] + E[η].

Step 2: Next we prove the case where ξ and η are nonnegative random variables. For every i ≥ 1 and every ω ∈ Ω, we define

ξi(ω) = (k − 1)/2^i if (k − 1)/2^i ≤ ξ(ω) < k/2^i, k = 1, 2, · · · , i2^i, and ξi(ω) = i if ξ(ω) ≥ i;
ηi(ω) = (k − 1)/2^i if (k − 1)/2^i ≤ η(ω) < k/2^i, k = 1, 2, · · · , i2^i, and ηi(ω) = i if η(ω) ≥ i.

Then {ξi}, {ηi} and {ξi + ηi} are three sequences of nonnegative simple random variables such that ξi ↑ ξ, ηi ↑ η and ξi + ηi ↑ ξ + η as i → ∞. Note that the functions Pr{ξi > r}, Pr{ηi > r}, Pr{ξi + ηi > r}, i = 1, 2, · · · are also simple. It follows from Theorem 2.8 that

Pr{ξi > r} ↑ Pr{ξ > r},  ∀r ≥ 0

as i → ∞. Since the expected value E[ξ] exists, we have

E[ξi] = ∫_0^{+∞} Pr{ξi > r}dr → ∫_0^{+∞} Pr{ξ > r}dr = E[ξ]

as i → ∞. Similarly, we may prove that E[ηi] → E[η] and E[ξi + ηi] → E[ξ + η] as i → ∞. Therefore E[ξ + η] = E[ξ] + E[η] since we have proved that E[ξi + ηi] = E[ξi] + E[ηi] for i = 1, 2, · · ·

Step 3: Finally, if ξ and η are arbitrary random variables, then we define

ξi(ω) = ξ(ω) if ξ(ω) ≥ −i, and −i otherwise;  ηi(ω) = η(ω) if η(ω) ≥ −i, and −i otherwise.

Since the expected values E[ξ] and E[η] are finite, we have

lim_{i→∞} E[ξi] = E[ξ],  lim_{i→∞} E[ηi] = E[η],  lim_{i→∞} E[ξi + ηi] = E[ξ + η].


Note that (ξi + i) and (ηi + i) are nonnegative random variables. It follows from Theorem 2.28 that

E[ξ + η] = lim_{i→∞} E[ξi + ηi]
= lim_{i→∞} (E[(ξi + i) + (ηi + i)] − 2i)
= lim_{i→∞} (E[ξi + i] + E[ηi + i] − 2i)
= lim_{i→∞} (E[ξi] + i + E[ηi] + i − 2i)
= lim_{i→∞} E[ξi] + lim_{i→∞} E[ηi]
= E[ξ] + E[η]

which proves the theorem.

Theorem 2.30 Let ξ and η be random variables with finite expected values. Then for any numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η].   (2.50)

Proof: The theorem follows immediately from Theorems 2.28 and 2.29.

Product of Independent Random Variables

Theorem 2.31 Let ξ and η be independent random variables with finite expected values. Then the expected value of ξη exists and

E[ξη] = E[ξ]E[η].   (2.51)

Proof: Step 1: We first prove the case where both ξ and η are nonnegative simple random variables taking values a1, a2, · · · , am and b1, b2, · · · , bn, respectively. Then ξη is also a nonnegative simple random variable taking values aibj , i = 1, 2, · · · ,m, j = 1, 2, · · · , n. It follows from the independence of ξ and η that

E[ξη] = Σ_{i=1}^m Σ_{j=1}^n aibj Pr{ξ = ai, η = bj}
= Σ_{i=1}^m Σ_{j=1}^n aibj Pr{ξ = ai} Pr{η = bj}
= (Σ_{i=1}^m ai Pr{ξ = ai}) (Σ_{j=1}^n bj Pr{η = bj})
= E[ξ]E[η].

Step 2: Next we prove the case where ξ and η are nonnegative random variables. For every i ≥ 1 and every ω ∈ Ω, we define

ξi(ω) = (k − 1)/2^i if (k − 1)/2^i ≤ ξ(ω) < k/2^i, k = 1, 2, · · · , i2^i, and ξi(ω) = i if ξ(ω) ≥ i,


ηi(ω) = (k − 1)/2^i if (k − 1)/2^i ≤ η(ω) < k/2^i, k = 1, 2, · · · , i2^i, and ηi(ω) = i if η(ω) ≥ i.

Then {ξi}, {ηi} and {ξiηi} are three sequences of nonnegative simple random variables such that ξi ↑ ξ, ηi ↑ η and ξiηi ↑ ξη as i → ∞. It follows from the independence of ξ and η that ξi and ηi are independent. Hence we have E[ξiηi] = E[ξi]E[ηi] for i = 1, 2, · · · It follows from Theorem 2.8 that Pr{ξi > r}, i = 1, 2, · · · are simple functions such that

Pr{ξi > r} ↑ Pr{ξ > r}  for all r ≥ 0

as i → ∞. Since the expected value E[ξ] exists, we have

E[ξi] = ∫_0^{+∞} Pr{ξi > r}dr → ∫_0^{+∞} Pr{ξ > r}dr = E[ξ]

as i → ∞. Similarly, we may prove that E[ηi] → E[η] and E[ξiηi] → E[ξη] as i → ∞. Therefore E[ξη] = E[ξ]E[η].

Step 3: Finally, if ξ and η are arbitrary independent random variables, then the nonnegative random variables ξ^+ and η^+ are independent, and so are ξ^+ and η^−, ξ^− and η^+, ξ^− and η^−. Thus we have

E[ξ^+η^+] = E[ξ^+]E[η^+],  E[ξ^+η^−] = E[ξ^+]E[η^−],  E[ξ^−η^+] = E[ξ^−]E[η^+],  E[ξ^−η^−] = E[ξ^−]E[η^−].

It follows that

E[ξη] = E[(ξ^+ − ξ^−)(η^+ − η^−)]
= E[ξ^+η^+] − E[ξ^+η^−] − E[ξ^−η^+] + E[ξ^−η^−]
= E[ξ^+]E[η^+] − E[ξ^+]E[η^−] − E[ξ^−]E[η^+] + E[ξ^−]E[η^−]
= (E[ξ^+] − E[ξ^−])(E[η^+] − E[η^−])
= E[ξ^+ − ξ^−]E[η^+ − η^−]
= E[ξ]E[η]

which proves the theorem.
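A quick Monte Carlo check of Theorem 2.31 (a sketch of our own; the exponential and uniform distributions are chosen only for illustration and NumPy is assumed):

import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
xi = rng.exponential(scale=2.0, size=n)    # drawn independently of eta, E[xi] = 2
eta = rng.uniform(-1.0, 3.0, size=n)       # E[eta] = 1

print(np.mean(xi * eta))                   # close to E[xi]E[eta] = 2
print(np.mean(xi) * np.mean(eta))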

Expected Value of Function of Random Variable

Theorem 2.32 Let ξ be a random variable with probability distribution Φ, and f : ℝ → ℝ a measurable function. If the Lebesgue-Stieltjes integral

∫_{−∞}^{+∞} f(x)dΦ(x)


is finite, then we have

E[f(ξ)] = ∫_{−∞}^{+∞} f(x)dΦ(x).   (2.52)

Proof: It follows from the definition of the expected value operator that

E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) ≥ r}dr − ∫_{−∞}^0 Pr{f(ξ) ≤ r}dr.   (2.53)

If f is a nonnegative simple measurable function, i.e.,

f(x) = ai if x ∈ Bi, i = 1, 2, · · · ,m,

where B1, B2, · · · , Bm are mutually disjoint Borel sets, then we have

E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) ≥ r}dr = Σ_{i=1}^m ai Pr{ξ ∈ Bi} = Σ_{i=1}^m ai ∫_{Bi} dΦ(x) = ∫_{−∞}^{+∞} f(x)dΦ(x).

We next prove the case where f is a nonnegative measurable function. Let f1, f2, · · · be a sequence of nonnegative simple functions such that fi ↑ f as i → ∞. We have proved that

E[fi(ξ)] = ∫_0^{+∞} Pr{fi(ξ) ≥ r}dr = ∫_{−∞}^{+∞} fi(x)dΦ(x).

In addition, Theorem 2.8 states that Pr{fi(ξ) > r} ↑ Pr{f(ξ) > r} as i → ∞ for r ≥ 0. It follows from the monotone convergence theorem that

E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) > r}dr = lim_{i→∞} ∫_0^{+∞} Pr{fi(ξ) > r}dr = lim_{i→∞} ∫_{−∞}^{+∞} fi(x)dΦ(x) = ∫_{−∞}^{+∞} f(x)dΦ(x).


Finally, if f is an arbitrary measurable function, then we have f = f^+ − f^− and

E[f(ξ)] = E[f^+(ξ) − f^−(ξ)] = E[f^+(ξ)] − E[f^−(ξ)] = ∫_{−∞}^{+∞} f^+(x)dΦ(x) − ∫_{−∞}^{+∞} f^−(x)dΦ(x) = ∫_{−∞}^{+∞} f(x)dΦ(x).

The theorem is proved.

Sum of a Random Number of Random Variables

Theorem 2.33 Assume that {ξi} is a sequence of iid random variables, and η is a positive random integer (i.e., a random variable taking positive integer values) that is independent of the sequence {ξi}. Then we have

E[Σ_{i=1}^η ξi] = E[η]E[ξ1].   (2.54)

Proof: Since η is independent of the sequence {ξi}, we have

Pr{Σ_{i=1}^η ξi ≥ r} = Σ_{k=1}^∞ Pr{η = k} Pr{ξ1 + ξ2 + · · · + ξk ≥ r}.

If the ξi are nonnegative random variables, then we have

E[Σ_{i=1}^η ξi] = ∫_0^{+∞} Pr{Σ_{i=1}^η ξi ≥ r}dr
= ∫_0^{+∞} Σ_{k=1}^∞ Pr{η = k} Pr{ξ1 + ξ2 + · · · + ξk ≥ r}dr
= Σ_{k=1}^∞ Pr{η = k} ∫_0^{+∞} Pr{ξ1 + ξ2 + · · · + ξk ≥ r}dr
= Σ_{k=1}^∞ Pr{η = k} (E[ξ1] + E[ξ2] + · · · + E[ξk])
= Σ_{k=1}^∞ Pr{η = k} kE[ξ1]   (by the iid hypothesis)
= E[η]E[ξ1].


If the ξi are arbitrary random variables, then ξi = ξi^+ − ξi^−, and

E[Σ_{i=1}^η ξi] = E[Σ_{i=1}^η (ξi^+ − ξi^−)] = E[Σ_{i=1}^η ξi^+ − Σ_{i=1}^η ξi^−]
= E[Σ_{i=1}^η ξi^+] − E[Σ_{i=1}^η ξi^−]
= E[η]E[ξ1^+] − E[η]E[ξ1^−]
= E[η](E[ξ1^+] − E[ξ1^−]) = E[η]E[ξ1^+ − ξ1^−] = E[η]E[ξ1].

The theorem is thus proved.
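Equation (2.54) can also be checked by simulation. In the sketch below (our own, with distributions chosen arbitrarily; NumPy is assumed), η is a geometric random integer drawn independently of an iid exponential sequence {ξi}, so the sample mean of Σ_{i=1}^η ξi should be close to E[η]E[ξ1] = 4 × 1.

import numpy as np

rng = np.random.default_rng(3)
n_rep = 100_000
eta = rng.geometric(p=0.25, size=n_rep)    # positive random integer, E[eta] = 4

# for each replication, sum eta_k fresh iid Exp(1) variables (E[xi_1] = 1)
sums = np.array([rng.exponential(1.0, size=k).sum() for k in eta])

print(sums.mean())                         # close to E[eta] * E[xi_1] = 4
print(eta.mean())                          # close to 4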

Continuity Theorems

Theorem 2.34 (a) Let {ξi} be an increasing sequence of random variables such that lim_{i→∞} ξi is a random variable. If there exists a random variable η with finite expected value such that ξi ≥ η for all i, then we have

lim_{i→∞} E[ξi] = E[lim_{i→∞} ξi].   (2.55)

(b) Let {ξi} be a decreasing sequence of random variables such that lim_{i→∞} ξi is a random variable. If there exists a random variable η with finite expected value such that ξi ≤ η for all i, then (2.55) remains true.

Proof: Without loss of generality, we assume η ≡ 0. Then we have

lim_{i→∞} E[ξi] = lim_{i→∞} ∫_0^{+∞} Pr{ξi > r}dr
= ∫_0^{+∞} lim_{i→∞} Pr{ξi > r}dr   (by Theorem 1.17)
= ∫_0^{+∞} Pr{lim_{i→∞} ξi > r}dr   (by Theorem 2.8)
= E[lim_{i→∞} ξi].

The decreasing case may be proved by applying the above argument to η − ξi ≥ 0.

Example 2.13: If the condition ξi ≥ η is dropped, Theorem 2.34 does not hold. For example, let Ω = {ω1, ω2, · · ·}, Pr{ωj} = 1/2^j for j = 1, 2, · · ·, and define the random variables by

ξi(ωj) = 0 if j ≤ i, and −2^j if j > i


for i, j = 1, 2, · · · Then

lim_{i→∞} E[ξi] = −∞ ≠ 0 = E[lim_{i→∞} ξi].

Theorem 2.35 Let {ξi} be a sequence of random variables such that lim inf_{i→∞} ξi and lim sup_{i→∞} ξi are random variables.
(a) If there exists a random variable η with finite expected value such that ξi ≥ η for all i, then

E[lim inf_{i→∞} ξi] ≤ lim inf_{i→∞} E[ξi].   (2.56)

(b) If there exists a random variable η with finite expected value such that ξi ≤ η for all i, then

E[lim sup_{i→∞} ξi] ≥ lim sup_{i→∞} E[ξi].   (2.57)

Proof: Without loss of generality, we assume η ≡ 0. Then we have

E[lim inf_{i→∞} ξi] = ∫_0^{+∞} Pr{lim inf_{i→∞} ξi > r}dr
≤ ∫_0^{+∞} lim inf_{i→∞} Pr{ξi > r}dr   (by Theorem 2.9)
≤ lim inf_{i→∞} ∫_0^{+∞} Pr{ξi > r}dr   (by Fatou's Lemma)
= lim inf_{i→∞} E[ξi].

The inequality (2.56) is proved. The other inequality may be proved by applying this argument to η − ξi ≥ 0.

Theorem 2.36 Let {ξi} be a sequence of random variables such that the limit lim_{i→∞} ξi exists and is a random variable. If there exists a random variable η with finite expected value such that |ξi| ≤ η for all i, then

lim_{i→∞} E[ξi] = E[lim_{i→∞} ξi].   (2.58)

Proof: It follows from Theorem 2.35 that

E[lim inf_{i→∞} ξi] ≤ lim inf_{i→∞} E[ξi] ≤ lim sup_{i→∞} E[ξi] ≤ E[lim sup_{i→∞} ξi].

Since lim_{i→∞} ξi exists, we have lim inf_{i→∞} ξi = lim sup_{i→∞} ξi = lim_{i→∞} ξi. Thus (2.58) holds.


Distance of Random Variables

Definition 2.23 The distance of random variables ξ and η is defined as

d(ξ, η) = E[|ξ − η|].   (2.59)

Theorem 2.37 Let ξ, η, τ be random variables, and let d(·, ·) be the distance measure. Then we have
(a) d(ξ, η) = 0 if ξ = η;
(b) d(ξ, η) > 0 if ξ ≠ η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ d(ξ, τ) + d(η, τ).

Proof: The parts (a), (b) and (c) follow immediately from the definition. The part (d) is proved by the following relation,

E[|ξ − η|] ≤ E[|ξ − τ| + |η − τ|] = E[|ξ − τ|] + E[|η − τ|].

2.6 Variance, Covariance and Moments

Definition 2.24 The variance of a random variable ξ is defined by

V[ξ] = E[(ξ − E[ξ])^2].   (2.60)

Theorem 2.38 If ξ is a random variable whose variance exists, and a and b are real numbers, then V[aξ + b] = a^2 V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)^2] = a^2 E[(ξ − E[ξ])^2] = a^2 V[ξ].

Theorem 2.39 Let ξ be a random variable with expected value e. Then V[ξ] = 0 if and only if Pr{ξ = e} = 1.

Proof: If V[ξ] = 0, then E[(ξ − e)^2] = 0. Note that

E[(ξ − e)^2] = ∫_0^{+∞} Pr{(ξ − e)^2 ≥ r}dr

which implies Pr{(ξ − e)^2 ≥ r} = 0 for any r > 0. Hence we have Pr{(ξ − e)^2 = 0} = 1, i.e., Pr{ξ = e} = 1.

Conversely, if Pr{ξ = e} = 1, then we have Pr{(ξ − e)^2 = 0} = 1 and Pr{(ξ − e)^2 ≥ r} = 0 for any r > 0. Thus

V[ξ] = ∫_0^{+∞} Pr{(ξ − e)^2 ≥ r}dr = 0.


Definition 2.25 The standard deviation of a random variable is defined as the nonnegative square root of its variance.

Definition 2.26 Let ξ and η be random variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])].   (2.61)

Example 2.14: If ξ and η are independent random variables, then Cov[ξ, η] = 0. However, the converse is not true. For example, let ξ = sin τ and η = cos τ, where τ is a uniformly distributed random variable on [0, 2π]. It is easy to verify that Cov[ξ, η] = 0. However, ξ and η are not independent.
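Example 2.14 can be illustrated numerically. The following sketch (ours; NumPy assumed) shows that ξ = sin τ and η = cos τ have covariance close to 0, yet the events {|ξ| > 0.9} and {|η| > 0.9} never occur together, which would be impossible if ξ and η were independent.

import numpy as np

rng = np.random.default_rng(4)
tau = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
xi, eta = np.sin(tau), np.cos(tau)

print(np.cov(xi, eta)[0, 1])                                   # approximately 0
print(np.mean((np.abs(xi) > 0.9) & (np.abs(eta) > 0.9)))       # exactly 0, since xi^2 + eta^2 = 1
print(np.mean(np.abs(xi) > 0.9) * np.mean(np.abs(eta) > 0.9))  # strictly positive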

Theorem 2.40 If ξ1, ξ2, · · · , ξn are random variables with finite expected values, then

V[ξ1 + ξ2 + · · · + ξn] = Σ_{i=1}^n V[ξi] + 2 Σ_{i=1}^{n−1} Σ_{j=i+1}^n Cov[ξi, ξj].   (2.62)

In particular, if ξ1, ξ2, · · · , ξn are independent, then

V[ξ1 + ξ2 + · · · + ξn] = V[ξ1] + V[ξ2] + · · · + V[ξn].   (2.63)

Proof: It follows from the definition of variance that

V[Σ_{i=1}^n ξi] = E[(ξ1 + ξ2 + · · · + ξn − E[ξ1] − E[ξ2] − · · · − E[ξn])^2]
= Σ_{i=1}^n E[(ξi − E[ξi])^2] + 2 Σ_{i=1}^{n−1} Σ_{j=i+1}^n E[(ξi − E[ξi])(ξj − E[ξj])]

which implies (2.62). If ξ1, ξ2, · · · , ξn are independent, then Cov[ξi, ξj] = 0 for all i, j with i ≠ j. Thus (2.63) holds.

Definition 2.27 For any positive integer k, the expected value E[ξ^k] is called the kth moment of the random variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the random variable ξ.

Note that the expected value is just the first moment, the first central moment of ξ is 0, and the second central moment is just the variance.

2.7 Optimistic and Pessimistic Values

Let ξ be a random variable. In order to measure it, we may use its expected value. Alternatively, we may employ the α-optimistic value and α-pessimistic value as a ranking measure.


Definition 2.28 Let ξ be a random variable, and α ∈ (0, 1]. Then

ξsup(α) = sup{r | Pr{ξ ≥ r} ≥ α}   (2.64)

is called the α-optimistic value of ξ.

This means that the random variable ξ will reach upwards of the α-optimistic value ξsup(α) at least α of the time. The optimistic value is also called the percentile.

Definition 2.29 Let ξ be a random variable, and α ∈ (0, 1]. Then

ξinf(α) = inf{r | Pr{ξ ≤ r} ≥ α}   (2.65)

is called the α-pessimistic value of ξ.

This means that the random variable ξ will be below the α-pessimistic value ξinf(α) at least α of the time.

Theorem 2.41 Let ξ be a random variable. Then we have

Pr{ξ ≥ ξsup(α)} ≥ α,  Pr{ξ ≤ ξinf(α)} ≥ α   (2.66)

where ξinf(α) and ξsup(α) are the α-pessimistic and α-optimistic values of the random variable ξ, respectively.

Proof: It follows from the definition of the optimistic value that there exists an increasing sequence {ri} such that Pr{ξ ≥ ri} ≥ α and ri ↑ ξsup(α) as i → ∞. Since {ω | ξ(ω) ≥ ri} ↓ {ω | ξ(ω) ≥ ξsup(α)}, it follows from the probability continuity theorem that

Pr{ξ ≥ ξsup(α)} = lim_{i→∞} Pr{ξ ≥ ri} ≥ α.

The inequality Pr{ξ ≤ ξinf(α)} ≥ α may be proved similarly.

Example 2.15: Note that Pr{ξ ≥ ξsup(α)} > α and Pr{ξ ≤ ξinf(α)} > α may hold. For example, let

ξ = 0 with probability 0.4, and 1 with probability 0.6.

If α = 0.8, then ξsup(0.8) = 0, which makes Pr{ξ ≥ ξsup(0.8)} = 1 > 0.8. In addition, ξinf(0.8) = 1 and Pr{ξ ≤ ξinf(0.8)} = 1 > 0.8.
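For a discrete random variable with finitely many values, the optimistic and pessimistic values are attained at the atoms, so Definitions 2.28 and 2.29 can be evaluated by scanning the values themselves. The sketch below is our own (NumPy assumed) and reproduces Example 2.15.

import numpy as np

values = np.array([0.0, 1.0])
probs  = np.array([0.4, 0.6])
alpha  = 0.8

def pr_ge(r):                       # Pr{xi >= r}
    return float(probs[values >= r].sum())

def pr_le(r):                       # Pr{xi <= r}
    return float(probs[values <= r].sum())

xi_sup = max(v for v in values if pr_ge(v) >= alpha)   # sup{r : Pr{xi >= r} >= alpha}
xi_inf = min(v for v in values if pr_le(v) >= alpha)   # inf{r : Pr{xi <= r} >= alpha}

print(xi_sup, pr_ge(xi_sup))        # 0.0 and 1.0 > 0.8, as in Example 2.15
print(xi_inf, pr_le(xi_inf))        # 1.0 and 1.0 > 0.8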

Theorem 2.42 Let ξinf(α) and ξsup(α) be the α-pessimistic and α-optimistic values of the random variable ξ, respectively. Then we have
(a) ξinf(α) is an increasing function of α;
(b) ξsup(α) is a decreasing function of α;
(c) if α > 0.5, then ξinf(α) ≥ ξsup(α);
(d) if α ≤ 0.5, then ξinf(α) ≤ ξsup(α).


Proof: The parts (a) and (b) are obvious. Part (c): Write ξ(α) = (ξinf(α) + ξsup(α))/2. If ξinf(α) < ξsup(α), then we have

1 ≥ Pr{ξ < ξ(α)} + Pr{ξ > ξ(α)} ≥ α + α > 1.

This contradiction proves ξinf(α) ≥ ξsup(α). Part (d): Assume that ξinf(α) > ξsup(α). It follows from the definition of ξinf(α) that Pr{ξ ≤ ξ(α)} < α. Similarly, it follows from the definition of ξsup(α) that Pr{ξ ≥ ξ(α)} < α. Thus

1 ≤ Pr{ξ ≤ ξ(α)} + Pr{ξ ≥ ξ(α)} < α + α ≤ 1.

This contradiction proves ξinf(α) ≤ ξsup(α). The theorem is proved.

Theorem 2.43 Assume that ξ and η are random variables. Then, for any α ∈ (0, 1], we have
(a) if λ ≥ 0, then (λξ)sup(α) = λξsup(α) and (λξ)inf(α) = λξinf(α);
(b) if λ < 0, then (λξ)sup(α) = λξinf(α) and (λξ)inf(α) = λξsup(α).

Proof: (a) If λ = 0, then the assertion is obviously valid. When λ > 0, we have

(λξ)sup(α) = sup{r | Pr{λξ ≥ r} ≥ α} = λ sup{r/λ | Pr{ξ ≥ r/λ} ≥ α} = λξsup(α).

A similar argument proves that (λξ)inf(α) = λξinf(α).

(b) In order to prove this part, it suffices to verify that (−ξ)sup(α) = −ξinf(α) and (−ξ)inf(α) = −ξsup(α). In fact, for any α ∈ (0, 1], we have

(−ξ)sup(α) = sup{r | Pr{−ξ ≥ r} ≥ α} = −inf{−r | Pr{ξ ≤ −r} ≥ α} = −ξinf(α).

Similarly, we may prove that (−ξ)inf(α) = −ξsup(α). The theorem is proved.

2.8 Some Inequalities

Theorem 2.44 Let ξ be a random variable, and f a nonnegative measurable function. If f is even (i.e., f(x) = f(−x) for any x ∈ ℝ) and increasing on [0,∞), then for any given number t > 0, we have

Pr{|ξ| ≥ t} ≤ E[f(ξ)]/f(t).   (2.67)


Proof: It is clear that Pr{|ξ| ≥ f^{−1}(r)} is a monotone decreasing function of r on [0,∞). It follows from the nonnegativity of f(ξ) that

E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) ≥ r}dr
= ∫_0^{+∞} Pr{|ξ| ≥ f^{−1}(r)}dr
≥ ∫_0^{f(t)} Pr{|ξ| ≥ f^{−1}(r)}dr
≥ ∫_0^{f(t)} dr · Pr{|ξ| ≥ f^{−1}(f(t))}
= f(t) · Pr{|ξ| ≥ t}

which proves the inequality.

Theorem 2.45 (Markov Inequality) Let ξ be a random variable. Then for any given numbers t > 0 and p > 0, we have

Pr{|ξ| ≥ t} ≤ E[|ξ|^p]/t^p.   (2.68)

Proof: It is a special case of Theorem 2.44 when f(x) = |x|^p.

Theorem 2.46 (Chebyshev Inequality) Let ξ be a random variable whose variance V[ξ] exists. Then for any given number t > 0, we have

Pr{|ξ − E[ξ]| ≥ t} ≤ V[ξ]/t^2.   (2.69)

Proof: It is a special case of Theorem 2.44 in which the random variable ξ is replaced with ξ − E[ξ] and f(x) = x^2.

Example 2.16: Let ξ be a random variable with finite expected value e and variance σ^2. It follows from the Chebyshev inequality that

Pr{|ξ − e| ≥ kσ} ≤ V[ξ − e]/(kσ)^2 = 1/k^2.
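A Monte Carlo sanity check of the Chebyshev bound (a sketch of our own; the exponential distribution and sample size are arbitrary choices, and NumPy is assumed): the empirical tail Pr{|ξ − e| ≥ kσ} should never exceed 1/k^2.

import numpy as np

rng = np.random.default_rng(5)
xi = rng.exponential(scale=1.0, size=1_000_000)   # e = 1, sigma = 1 for this distribution
e, sigma = xi.mean(), xi.std()

for k in (2, 3, 4):
    tail = np.mean(np.abs(xi - e) >= k * sigma)   # empirical Pr{|xi - e| >= k*sigma}
    print(k, tail, 1.0 / k**2)                    # the bound 1/k^2 dominates the tail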

Theorem 2.47 (Hölder's Inequality) Let p and q be two positive real numbers with 1/p + 1/q = 1, and let ξ and η be random variables with E[|ξ|^p] < ∞ and E[|η|^q] < ∞. Then we have

E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.   (2.70)

It reduces to the Cauchy-Schwarz inequality when p = q = 2, i.e.,

E[|ξη|] ≤ √(E[ξ^2]E[η^2]).   (2.71)


Proof: The Holder’s Inequality holds trivially if at least one of ξ and η iszero a.s. Now we assume E[|ξ|p] > 0 and E[|η|q] > 0, and set

a =|ξ|

p√

E[|ξ|p], b =

|η|q√

E[|η|q].

It follows from ab ≤ ap/p + bq/q that

|ξη| ≤ p√

E[|ξ|p] q√

E[|η|q](|ξ|p

pE[|ξ|p] +|η|q

qE[|η|q]

).

Taking the expected values on both sides, we obtain the inequality.

Theorem 2.48 (Minkowski Inequality) Let p be a real number with 1 ≤ p < ∞, and let ξ and η be random variables with E[|ξ|^p] < ∞ and E[|η|^p] < ∞. Then we have

(E[|ξ + η|^p])^{1/p} ≤ (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p}.    (2.72)

Proof: The inequality holds trivially when p = 1. It thus suffices to prove the theorem when p > 1. It is clear that there is a number q with q > 1 such that 1/p + 1/q = 1. It follows from Theorem 2.47 that

E[|ξ||ξ + η|^{p−1}] ≤ (E[|ξ|^p])^{1/p} (E[|ξ + η|^{(p−1)q}])^{1/q} = (E[|ξ|^p])^{1/p} (E[|ξ + η|^p])^{1/q},

E[|η||ξ + η|^{p−1}] ≤ (E[|η|^p])^{1/p} (E[|ξ + η|^{(p−1)q}])^{1/q} = (E[|η|^p])^{1/p} (E[|ξ + η|^p])^{1/q}.

We thus have

E[|ξ + η|^p] ≤ E[|ξ||ξ + η|^{p−1}] + E[|η||ξ + η|^{p−1}] ≤ ( (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p} ) (E[|ξ + η|^p])^{1/q}

which implies that the inequality (2.72) holds.

Theorem 2.49 (Jensen's Inequality) Let ξ be a random variable, and f a convex function. If E[ξ] and E[f(ξ)] exist and are finite, then

f(E[ξ]) ≤ E[f(ξ)].    (2.73)

Especially, when f(x) = |x|^p and p > 1, we have |E[ξ]|^p ≤ E[|ξ|^p].

Proof: Since f is a convex function, for each y, there exists a number k such that f(x) − f(y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain

f(ξ) − f(E[ξ]) ≥ k · (ξ − E[ξ]).

Taking the expected values on both sides, we have

E[f(ξ)] − f(E[ξ]) ≥ k · (E[ξ] − E[ξ]) = 0

which proves the inequality.
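As a concrete instance of Jensen's inequality, the short C sketch below compares f(E[ξ]) with E[f(ξ)] for the convex function f(x) = x² on a uniform sample; both the sample and the choice of f are illustrative assumptions.

/* Numerical illustration of Jensen's inequality f(E[xi]) <= E[f(xi)] for f(x) = x^2.
   The U(0,1) sample is an illustrative assumption. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int N = 1000000;
    double sum = 0.0, sum_sq = 0.0;
    srand(2004);
    for (int i = 0; i < N; i++) {
        double xi = (rand() + 0.5) / (RAND_MAX + 1.0);  /* U(0,1) sample */
        sum += xi;
        sum_sq += xi * xi;
    }
    double mean = sum / N, mean_sq = sum_sq / N;
    /* Expect mean^2 near 0.25 and mean_sq near 1/3, so the inequality holds. */
    printf("f(E[xi]) = %.4f <= E[f(xi)] = %.4f\n", mean * mean, mean_sq);
    return 0;
}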


2.9 Characteristic Function

The characteristic function is an important concept and plays a powerful role in probability theory. This section introduces the concept of characteristic function, the inversion formula, and the uniqueness theorem.

Definition 2.30 Let ξ be a random variable with probability distribution Φ. Then the function

ϕ(t) = ∫_{−∞}^{+∞} e^{itx} dΦ(x),   t ∈ ℜ    (2.74)

is called the characteristic function of ξ, where e^{itx} = cos tx + i sin tx and i = √−1, the imaginary unit.

Example 2.17: Let ξ be a random variable whose probability distribution is

Φ(x) = 0 if x < 0,  and Φ(x) = 1 otherwise.

Then its characteristic function is ϕ(t) ≡ 1.

Example 2.18: Let ξ be a uniformly distributed variable on (a, b). Then its characteristic function is

ϕ(t) = (e^{itb} − e^{ita}) / (i(b − a)t),   t ≠ 0.
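The closed form above can be checked numerically. The C sketch below evaluates ϕ(t) for U(a,b) both from the formula and as a sample average of e^{itξ}; the Monte Carlo evaluation and the particular values of a, b, t are illustrative assumptions, not part of the original text.

/* Evaluating the characteristic function of U(a,b) two ways: the closed form above
   and a sample average of exp(i*t*xi). The Monte Carlo evaluation is an illustrative
   assumption. Compile with: cc charfun.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <complex.h>
#include <math.h>

int main(void) {
    const double a = 0.0, b = 3.0, t = 1.7;
    const int N = 200000;
    double complex closed = (cexp(I * t * b) - cexp(I * t * a)) / (I * (b - a) * t);
    double complex mc = 0.0;
    srand(7);
    for (int k = 0; k < N; k++) {
        double u = (rand() + 0.5) / (RAND_MAX + 1.0);
        double xi = a + u * (b - a);          /* U(a,b) sample */
        mc += cexp(I * t * xi);
    }
    mc /= N;
    printf("closed form: %.4f%+.4fi\n", creal(closed), cimag(closed));
    printf("sample mean: %.4f%+.4fi\n", creal(mc), cimag(mc));
    return 0;
}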

Theorem 2.50 Let ξ be a random variable, and ϕ its characteristic function. Then we have
(a) ϕ(0) = 1;
(b) |ϕ(t)| ≤ ϕ(0);
(c) ϕ(−t) is the complex conjugate of ϕ(t);
(d) ϕ(t) is a uniformly continuous function on ℜ.

Proof: The part (a) is obvious. The parts (b) and (c) are proved as follows:

|ϕ(t)| ≤ ∫_{−∞}^{+∞} |e^{itx}| dΦ(x) = ∫_{−∞}^{+∞} dΦ(x) = 1 = ϕ(0),

and the complex conjugate of ϕ(t) is

∫_{−∞}^{+∞} cos tx dΦ(x) − i ∫_{−∞}^{+∞} sin tx dΦ(x) = ∫_{−∞}^{+∞} cos(−t)x dΦ(x) + i ∫_{−∞}^{+∞} sin(−t)x dΦ(x) = ϕ(−t).


(d) We next show that ϕ is uniformly continuous. Since

e^{i(t+h)x} − e^{itx} = 2i e^{i(t+h/2)x} sin(hx/2),

we have

|ϕ(t + h) − ϕ(t)| ≤ ∫_{−∞}^{+∞} |2i e^{i(t+h/2)x} sin(hx/2)| dΦ(x) ≤ 2 ∫_{−∞}^{+∞} |sin(hx/2)| dΦ(x)

where the right-hand side is independent of t. Since sin(hx/2) → 0 as h → 0, the Lebesgue dominated convergence theorem shows that

∫_{−∞}^{+∞} |sin(hx/2)| dΦ(x) → 0

as h → 0. Hence ϕ is uniformly continuous on ℜ.

Theorem 2.51 (Inversion Formula) Let ξ be a random variable with probability distribution Φ and characteristic function ϕ. Then

Φ(b) − Φ(a) = lim_{T→+∞} (1/2π) ∫_{−T}^{T} ((e^{−iat} − e^{−ibt}) / (it)) ϕ(t) dt    (2.75)

holds for all points a, b (a < b) at which Φ is continuous.

Proof: Since (e^{−iat} − e^{−ibt}) / (it) = ∫_a^b e^{−iut} du, we have

f(T) = (1/2π) ∫_{−T}^{T} ((e^{−iat} − e^{−ibt}) / (it)) ϕ(t) dt = (1/2π) ∫_{−T}^{T} ϕ(t) dt ∫_a^b e^{−iut} du
     = (1/2π) ∫_a^b du ∫_{−T}^{T} e^{−iut} ϕ(t) dt = (1/2π) ∫_{−∞}^{+∞} dΦ(x) ∫_a^b du ∫_{−T}^{T} e^{i(x−u)t} dt
     = ∫_{−∞}^{+∞} g(T, x) dΦ(x)

where

g(T, x) = (1/π) ∫_{T(x−b)}^{T(x−a)} (sin v / v) dv.

The classical Dirichlet formula

(1/π) ∫_α^β (sin v / v) dv → 1   as α → −∞, β → +∞

implies that g(T, x) is bounded uniformly. Furthermore,

lim_{T→+∞} g(T, x) = (1/π) lim_{T→+∞} ∫_{T(x−b)}^{T(x−a)} (sin v / v) dv = 1 if a < x < b;  0.5 if x = a or x = b;  0 if x < a or x > b.


It follows from the Lebesgue dominated convergence theorem that

lim_{T→+∞} f(T) = ∫_{−∞}^{+∞} lim_{T→+∞} g(T, x) dΦ(x) = Φ(b) − Φ(a).

The proof is completed.

Theorem 2.52 (Uniqueness Theorem) Let Φ1 and Φ2 be two probability distributions with characteristic functions ϕ1 and ϕ2, respectively. Then ϕ1 = ϕ2 if and only if Φ1 = Φ2.

Proof: If Φ1 = Φ2, then we get ϕ1 = ϕ2 immediately from the definition. Conversely, let a, b (a < b) be continuity points of both Φ1 and Φ2. Then the inversion formula yields

Φ1(b) − Φ1(a) = Φ2(b) − Φ2(a).

Letting a → −∞, we obtain Φ1(b) = Φ2(b) via Φ1(a) → 0 and Φ2(a) → 0. Since the set of continuity points of a probability distribution is dense everywhere in ℜ, we have Φ1 = Φ2 by Theorem 2.13.

2.10 Convergence Concepts

There are four main types of convergence concepts for random sequences: convergence almost surely (a.s.), convergence in probability, convergence in mean, and convergence in distribution.

Table 2.1: Relations among Convergence Concepts

Convergence Almost Surely  ↘
                             Convergence in Probability  →  Convergence in Distribution
Convergence in Mean        ↗

Definition 2.31 Suppose that ξ, ξ1, ξ2, · · · are random variables defined on the probability space (Ω, A, Pr). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ A with Pr{A} = 1 such that

lim_{i→∞} |ξi(ω) − ξ(ω)| = 0    (2.76)

for every ω ∈ A. In that case we write ξi → ξ, a.s.


Definition 2.32 Suppose that ξ, ξ1, ξ2, · · · are random variables defined on the probability space (Ω, A, Pr). We say that the sequence {ξi} converges in probability to ξ if

lim_{i→∞} Pr{|ξi − ξ| ≥ ε} = 0    (2.77)

for every ε > 0.

Definition 2.33 Suppose that ξ, ξ1, ξ2, · · · are random variables with finite expected values on the probability space (Ω, A, Pr). We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0.    (2.78)

Definition 2.34 Suppose that Φ, Φ1, Φ2, · · · are the probability distributions of random variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x) → Φ(x) for all continuity points x of Φ.

Convergence Almost Surely vs. Convergence in Probability

Theorem 2.53 Suppose that ξ, ξ1, ξ2, · · · are random variables defined on the probability space (Ω, A, Pr). Then {ξi} converges a.s. to ξ if and only if, for every ε > 0, we have

lim_{n→∞} Pr{ ∪_{i=n}^{∞} {|ξi − ξ| ≥ ε} } = 0.    (2.79)

Proof: For every i ≥ 1 and ε > 0, we define

X = {ω ∈ Ω | lim_{i→∞} ξi(ω) ≠ ξ(ω)},
Xi(ε) = {ω ∈ Ω | |ξi(ω) − ξ(ω)| ≥ ε}.

It is clear that

X = ∪_{ε>0} ( ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε) ).

Note that ξi → ξ, a.s. if and only if Pr{X} = 0. That is, ξi → ξ, a.s. if and only if

Pr{ ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε) } = 0

for every ε > 0. Since

∪_{i=n}^{∞} Xi(ε) ↓ ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε),


it follows from the probability continuity theorem that

lim_{n→∞} Pr{ ∪_{i=n}^{∞} Xi(ε) } = Pr{ ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε) } = 0.

The theorem is proved.

Theorem 2.54 Suppose that ξ, ξ1, ξ2, · · · are random variables defined on the probability space (Ω, A, Pr). If {ξi} converges a.s. to ξ, then {ξi} converges in probability to ξ.

Proof: It follows from the convergence a.s. and Theorem 2.53 that

lim_{n→∞} Pr{ ∪_{i=n}^{∞} {|ξi − ξ| ≥ ε} } = 0

for each ε > 0. For every n ≥ 1, since

{|ξn − ξ| ≥ ε} ⊂ ∪_{i=n}^{∞} {|ξi − ξ| ≥ ε},

we have Pr{|ξn − ξ| ≥ ε} → 0 as n → ∞. Hence the theorem holds.

Example 2.19: Convergence in probability does not imply convergence a.s. For example, let Ω = [0, 1]. Assume that A is the Borel algebra on Ω, and Pr is the Lebesgue measure. Then (Ω, A, Pr) is a probability space. For any positive integer i, there is an integer j such that i = 2^j + k, where k is an integer between 0 and 2^j − 1. We define a random variable on Ω by

ξi(ω) = 1 if k/2^j ≤ ω ≤ (k + 1)/2^j,  and ξi(ω) = 0 otherwise    (2.80)

for i = 1, 2, · · · and ξ = 0. For any small number ε > 0, we have

Pr{|ξi − ξ| ≥ ε} = 1/2^j → 0

as i → ∞. That is, the sequence {ξi} converges in probability to ξ. However, for any ω ∈ [0, 1], there are infinitely many intervals of the form [k/2^j, (k + 1)/2^j] containing ω. Thus ξi(ω) does not converge to 0 as i → ∞. In other words, the sequence {ξi} does not converge a.s. to ξ.
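The following C sketch traces this "moving interval" sequence at one fixed sample point; the particular choice ω = 0.3 and the cutoff on i are illustrative assumptions. It prints the indices i at which ξi(ω) = 1, which keep occurring no matter how large i gets, even though Pr{ξi = 1} = 1/2^j shrinks to zero.

/* Tracing the sequence of Example 2.19 at a fixed point omega.
   The choice omega = 0.3 and the cutoff i <= 1023 are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    const double omega = 0.3;
    for (int i = 1; i <= 1023; i++) {
        /* write i = 2^j + k with 0 <= k <= 2^j - 1 */
        int j = 0, p = 1;
        while (p * 2 <= i) { p *= 2; j++; }
        int k = i - p;
        double lo = (double)k / p, hi = (double)(k + 1) / p;
        if (lo <= omega && omega <= hi)
            printf("xi_%d(omega) = 1   (interval [%.4f, %.4f], length 1/2^%d)\n",
                   i, lo, hi, j);
    }
    /* The hits never stop: one index in each block 2^j,...,2^{j+1}-1 gives the value 1,
       so xi_i(omega) does not converge to 0 although Pr{xi_i = 1} = 1/2^j -> 0. */
    return 0;
}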

Convergence in Probability vs. Convergence in Mean

Theorem 2.55 Suppose that ξ, ξ1, ξ2, · · · are random variables defined on the probability space (Ω, A, Pr). If the sequence {ξi} converges in mean to ξ, then {ξi} converges in probability to ξ.


Proof: It follows from the Markov inequality that, for any given number ε > 0,

Pr{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|] / ε → 0

as i → ∞. Thus {ξi} converges in probability to ξ.

Example 2.20: Convergence in probability does not imply convergence in mean. For example, assume that Ω = {ω1, ω2, · · ·}, Pr{ωj} = 1/2^j for j = 1, 2, · · ·, and the random variables are defined by

ξi(ωj) = 2^i if j = i,  and ξi(ωj) = 0 otherwise    (2.81)

for i = 1, 2, · · · and ξ = 0. For any small number ε > 0, we have

Pr{|ξi − ξ| ≥ ε} = 1/2^i → 0.

That is, the sequence {ξi} converges in probability to ξ. However, we have

E[|ξi − ξ|] = 2^i · (1/2^i) = 1.

That is, the sequence {ξi} does not converge in mean to ξ.

Convergence Almost Surely vs. Convergence in Mean

Example 2.21: Convergence a.s. does not imply convergence in mean. Consider the random variables defined by (2.81), in which {ξi} converges a.s. to ξ. However, it does not converge in mean to ξ.

Example 2.22: Convergence in mean does not imply convergence a.s., either. Consider the random variables defined by (2.80). We have

E[|ξi − ξ|] = 1/2^j → 0

where j is the maximal integer such that 2^j ≤ i. That is, the sequence {ξi} converges in mean to ξ. However, {ξi} does not converge a.s. to ξ.

Convergence in Probability vs. Convergence in Distribution

Theorem 2.56 Suppose that ξ, ξ1, ξ2, · · · are random variables defined on the same probability space (Ω, A, Pr). If the sequence {ξi} converges in probability to the random variable ξ, then {ξi} converges in distribution to ξ.


Proof: Let x be any given continuity point of the distribution Φ. On the one hand, for any y > x, we have

{ξi ≤ x} = {ξi ≤ x, ξ ≤ y} ∪ {ξi ≤ x, ξ > y} ⊂ {ξ ≤ y} ∪ {|ξi − ξ| ≥ y − x}

which implies that

Φi(x) ≤ Φ(y) + Pr{|ξi − ξ| ≥ y − x}.

Since {ξi} converges in probability to ξ, we have Pr{|ξi − ξ| ≥ y − x} → 0. Thus we obtain lim sup_{i→∞} Φi(x) ≤ Φ(y) for any y > x. Letting y → x, we get

lim sup_{i→∞} Φi(x) ≤ Φ(x).    (2.82)

On the other hand, for any z < x, we have

{ξ ≤ z} = {ξ ≤ z, ξi ≤ x} ∪ {ξ ≤ z, ξi > x} ⊂ {ξi ≤ x} ∪ {|ξi − ξ| ≥ x − z}

which implies that

Φ(z) ≤ Φi(x) + Pr{|ξi − ξ| ≥ x − z}.

Since Pr{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf_{i→∞} Φi(x) for any z < x. Letting z → x, we get

Φ(x) ≤ lim inf_{i→∞} Φi(x).    (2.83)

It follows from (2.82) and (2.83) that Φi(x) → Φ(x). The theorem is proved.

Example 2.23: However, the converse of Theorem 2.56 is not true. For example, let Ω = {ω1, ω2}, and

Pr{ω1} = Pr{ω2} = 1/2,   ξ(ω) = −1 if ω = ω1, and ξ(ω) = 1 if ω = ω2.

We also define ξi = −ξ for all i. Then ξi and ξ are identically distributed. Thus {ξi} converges in distribution to ξ. But, for any small number ε > 0, we have Pr{|ξi − ξ| > ε} = Pr{Ω} = 1. That is, the sequence {ξi} does not converge in probability to ξ.

2.11 Laws of Large Numbers

The laws of large numbers include two types: (a) the weak laws of large numbers, dealing with convergence in probability; (b) the strong laws of large numbers, dealing with convergence a.s. In order to introduce them, we will denote

Sn = ξ1 + ξ2 + · · · + ξn    (2.84)

for each n throughout this section.


Weak Laws of Large Numbers

Theorem 2.57 (Chebyshev's Weak Law of Large Numbers) Let {ξi} be a sequence of independent but not necessarily identically distributed random variables with finite expected values. If there exists a number a > 0 such that V[ξi] < a for all i, then (Sn − E[Sn])/n converges in probability to 0. That is, for any given ε > 0, we have

lim_{n→∞} Pr{ |(Sn − E[Sn])/n| ≥ ε } = 0.    (2.85)

Proof: For any given ε > 0, it follows from the Chebyshev inequality that

Pr{ |(Sn − E[Sn])/n| ≥ ε } ≤ (1/ε²) V[Sn/n] = V[Sn] / (ε²n²) ≤ a / (ε²n) → 0

as n → ∞. The theorem is proved. Especially, if those random variables have a common expected value e, then Sn/n converges in probability to e.

Theorem 2.58 Let {ξi} be a sequence of iid random variables with finite expected value e. Then Sn/n converges in probability to e as n → ∞.

Proof: Since the expected value of ξi is finite, there exists β > 0 such that E[|ξi|] < β < ∞. Let α be an arbitrary positive number, and let n be an arbitrary positive integer. We define

ξ*_i = ξi if |ξi| < nα,  and ξ*_i = 0 otherwise

for i = 1, 2, · · · It is clear that {ξ*_i} is a sequence of iid random variables. Let e*_n be the common expected value of ξ*_i, and S*_n = ξ*_1 + ξ*_2 + · · · + ξ*_n. Then we have

V[ξ*_i] ≤ E[ξ*_i²] ≤ nα E[|ξ*_i|] ≤ nαβ,

E[S*_n / n] = (E[ξ*_1] + E[ξ*_2] + · · · + E[ξ*_n]) / n = e*_n,

V[S*_n / n] = (V[ξ*_1] + V[ξ*_2] + · · · + V[ξ*_n]) / n² ≤ αβ.

It follows from the Chebyshev inequality that

Pr{ |S*_n/n − e*_n| ≥ ε } ≤ (1/ε²) V[S*_n/n] ≤ αβ/ε²    (2.86)

for every ε > 0. It is also clear that e*_n → e as n → ∞ by the Lebesgue dominated convergence theorem. Thus there exists an integer N* such that |e*_n − e| < ε whenever n ≥ N*. Applying (2.86), we get

Pr{ |S*_n/n − e| ≥ 2ε } ≤ Pr{ |S*_n/n − e*_n| ≥ ε } ≤ αβ/ε²    (2.87)


for any n ≥ N*. It follows from the iid hypothesis and Theorem 2.25 that

Pr{S*_n ≠ Sn} ≤ Σ_{i=1}^{n} Pr{|ξi| ≥ nα} ≤ n Pr{|ξ1| ≥ nα} → 0

as n → ∞. Thus there exists N** such that

Pr{S*_n ≠ Sn} ≤ α,   ∀n ≥ N**.

Applying (2.87), for all n ≥ N* ∨ N**, we have

Pr{ |Sn/n − e| ≥ 2ε } ≤ αβ/ε² + α → 0

as α → 0. It follows that Sn/n converges in probability to e.

Strong Laws of Large Numbers

Lemma 2.1 (Toeplitz Lemma) Let {ai} be a sequence of real numbers such that ai → a as i → ∞. Then

lim_{n→∞} (a1 + a2 + · · · + an)/n = a.    (2.88)

Proof: Let ε > 0 be given. Since ai → a, there exists N such that

|ai − a| < ε/2,   ∀i ≥ N.

We may also choose an integer N* > N such that

(1/N*) Σ_{i=1}^{N} |ai − a| < ε/2.

Thus for any n > N*, we have

| (1/n) Σ_{i=1}^{n} ai − a | ≤ (1/N*) Σ_{i=1}^{N} |ai − a| + (1/n) Σ_{i=N+1}^{n} |ai − a| < ε.

It follows from the arbitrariness of ε that the Toeplitz Lemma holds.

Lemma 2.2 (Kronecker Lemma) Let {ai} be a sequence of real numbers such that Σ_{i=1}^{∞} ai converges. Then

lim_{n→∞} (a1 + 2a2 + · · · + nan)/n = 0.    (2.89)


Proof: We set s0 = 0 and si = a1 + a2 + · · · + ai for i = 1, 2, · · · Then we have

(1/n) Σ_{i=1}^{n} i·ai = (1/n) Σ_{i=1}^{n} i(si − s_{i−1}) = sn − (1/n) Σ_{i=1}^{n−1} si.

The sequence {si} converges to a finite limit, say s. It follows from the Toeplitz Lemma that (Σ_{i=1}^{n−1} si)/n → s as n → ∞. Thus the Kronecker Lemma is proved.

Theorem 2.59 (Kolmogorov Inequality) Let ξ1, ξ2, · · · , ξn be independent random variables with finite expected values. Then for any given ε > 0, we have

Pr{ max_{1≤i≤n} |Si − E[Si]| ≥ ε } ≤ V[Sn] / ε².    (2.90)

Proof: Without loss of generality, assume that E[ξi] = 0 for each i. We set

A1 = {|S1| ≥ ε},   Ai = {|Sj| < ε, j = 1, 2, · · · , i − 1, and |Si| ≥ ε}

for i = 2, 3, · · · , n. It is clear that

A = { max_{1≤i≤n} |Si| ≥ ε }

is the disjoint union of A1, A2, · · · , An. Since E[Sn] = 0, we have

V[Sn] = ∫_0^{+∞} Pr{Sn² ≥ r} dr ≥ Σ_{k=1}^{n} ∫_0^{+∞} Pr{ {Sn² ≥ r} ∩ Ak } dr.    (2.91)

Now for any k with 1 ≤ k ≤ n, it follows from the independence that

∫_0^{+∞} Pr{ {Sn² ≥ r} ∩ Ak } dr
  = ∫_0^{+∞} Pr{ {(Sk + ξ_{k+1} + · · · + ξn)² ≥ r} ∩ Ak } dr
  = ∫_0^{+∞} Pr{ {Sk² + ξ_{k+1}² + · · · + ξn² ≥ r} ∩ Ak } dr
    + 2 Σ_{j=k+1}^{n} E[I_{Ak} Sk] E[ξj] + Σ_{j≠l; j,l=k+1}^{n} Pr{Ak} E[ξj] E[ξl]
  ≥ ∫_0^{+∞} Pr{ {Sk² ≥ r} ∩ Ak } dr
  ≥ ε² Pr{Ak}.

Using (2.91), we get

V[Sn] ≥ ε² Σ_{i=1}^{n} Pr{Ai} = ε² Pr{A}

which implies that the Kolmogorov inequality holds.


Theorem 2.60 Let {ξi} be a sequence of independent random variables. If Σ_{i=1}^{∞} V[ξi] < ∞, then Σ_{i=1}^{∞} (ξi − E[ξi]) converges a.s.

Proof: The series Σ_{i=1}^{∞} (ξi − E[ξi]) converges a.s. if and only if Σ_{i=n}^{∞} (ξi − E[ξi]) → 0 a.s. as n → ∞, if and only if

lim_{n→∞} Pr{ ∪_{j=0}^{∞} { |Σ_{i=n}^{n+j} (ξi − E[ξi])| ≥ ε } } = 0

for every given ε > 0. In fact,

Pr{ ∪_{j=0}^{∞} { |Σ_{i=n}^{n+j} (ξi − E[ξi])| ≥ ε } }
  = lim_{m→∞} Pr{ ∪_{j=0}^{m} { |Σ_{i=n}^{n+j} (ξi − E[ξi])| ≥ ε } }
  = lim_{m→∞} Pr{ max_{0≤j≤m} |Σ_{i=n}^{n+j} (ξi − E[ξi])| ≥ ε }
  ≤ lim_{m→∞} (1/ε²) Σ_{i=n}^{n+m} V[ξi]   (by the Kolmogorov inequality)
  = (1/ε²) Σ_{i=n}^{∞} V[ξi] → 0 as n → ∞, by Σ_{i=1}^{∞} V[ξi] < ∞.

The theorem is proved.

Theorem 2.61 (Kolmogorov Strong Law of Large Numbers) Let {ξi} be independent random variables with finite expected values. If

Σ_{i=1}^{∞} V[ξi]/i² < ∞,    (2.92)

then

(Sn − E[Sn])/n → 0, a.s.    (2.93)

Proof: It follows from (2.92) that

Σ_{i=1}^{∞} V[(ξi − E[ξi])/i] = Σ_{i=1}^{∞} V[ξi]/i² < ∞.

By Theorem 2.60, we know that Σ_{i=1}^{∞} (ξi − E[ξi])/i converges a.s. Applying the Kronecker Lemma, we obtain

(Sn − E[Sn])/n = (1/n) Σ_{i=1}^{n} i · ((ξi − E[ξi])/i) → 0, a.s.


The theorem is proved.

Theorem 2.62 (The Strong Law of Large Numbers) Let {ξi} be a sequence of iid random variables with finite expected value e. Then Sn/n → e a.s.

Proof: For each i ≥ 1, let ξ*_i be ξi truncated at i, i.e.,

ξ*_i = ξi if |ξi| < i,  and ξ*_i = 0 otherwise,

and write S*_n = ξ*_1 + ξ*_2 + · · · + ξ*_n. Then we have

V[ξ*_i] ≤ E[ξ*_i²] ≤ Σ_{j=1}^{i} j² Pr{j − 1 ≤ |ξ1| < j}

for all i. Thus

Σ_{i=1}^{∞} V[ξ*_i]/i² ≤ Σ_{i=1}^{∞} Σ_{j=1}^{i} (j²/i²) Pr{j − 1 ≤ |ξ1| < j}
  = Σ_{j=1}^{∞} j² Pr{j − 1 ≤ |ξ1| < j} Σ_{i=j}^{∞} 1/i²
  ≤ 2 Σ_{j=1}^{∞} j Pr{j − 1 ≤ |ξ1| < j}   (by Σ_{i=j}^{∞} 1/i² ≤ 2/j)
  = 2 + 2 Σ_{j=1}^{∞} (j − 1) Pr{j − 1 ≤ |ξ1| < j}
  ≤ 2 + 2e < ∞.

It follows from Theorem 2.61 that

(S*_n − E[S*_n])/n → 0, a.s.    (2.94)

Note that ξ*_i ↑ ξi as i → ∞. Using the Lebesgue dominated convergence theorem, we conclude that E[ξ*_i] → e. It follows from the Toeplitz Lemma that

E[S*_n]/n = (E[ξ*_1] + E[ξ*_2] + · · · + E[ξ*_n])/n → e, a.s.    (2.95)

Since (ξi − ξ*_i) → 0, a.s., the Toeplitz Lemma states that

(Sn − S*_n)/n = (1/n) Σ_{i=1}^{n} (ξi − ξ*_i) → 0, a.s.    (2.96)

It follows from (2.94), (2.95) and (2.96) that Sn/n → e a.s.
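As a numerical companion to the strong law, the C sketch below tracks Sn/n along a single simulated trajectory of iid EXP(1) variables (expected value e = 1); the specific distribution, seed, and checkpoints are illustrative assumptions.

/* Tracking S_n/n along one trajectory of iid EXP(1) samples (e = 1).
   The distribution, seed, and checkpoints are illustrative assumptions.
   Compile with: cc slln.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    double sum = 0.0;
    srand(154);
    for (long n = 1; n <= 1000000; n++) {
        double u = (rand() + 1.0) / (RAND_MAX + 2.0);   /* u in (0,1) */
        sum += -log(u);                                  /* EXP(1) sample */
        if (n == 10 || n == 1000 || n == 100000 || n == 1000000)
            printf("n = %7ld   S_n/n = %.4f\n", n, sum / n);
    }
    /* The printed averages approach e = 1, as Theorem 2.62 guarantees a.s. */
    return 0;
}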


2.12 Conditional Probability

We consider the probability of an event A after it has been learned that some other event B has occurred. This new probability of A is called the conditional probability of the event A given that the event B has occurred.

Definition 2.35 Let (Ω, A, Pr) be a probability space, and A, B ∈ A. Then the conditional probability of A given B is defined by

Pr{A|B} = Pr{A ∩ B} / Pr{B}    (2.97)

provided that Pr{B} > 0.

Theorem 2.63 Let (Ω, A, Pr) be a probability space, and B ∈ A. If Pr{B} > 0, then Pr{·|B} defined by (2.97) is a probability measure on (Ω, A), and (Ω, A, Pr{·|B}) is a probability space.

Proof: At first, we have

Pr{Ω|B} = Pr{Ω ∩ B} / Pr{B} = Pr{B} / Pr{B} = 1.

Secondly, for any A ∈ A, the set function Pr{A|B} is nonnegative. Finally, for any sequence {Ai}_{i=1}^{∞} of mutually disjoint events, we have

Pr{ ∪_{i=1}^{∞} Ai | B } = Pr{ (∪_{i=1}^{∞} Ai) ∩ B } / Pr{B} = Σ_{i=1}^{∞} Pr{Ai ∩ B} / Pr{B} = Σ_{i=1}^{∞} Pr{Ai|B}.

Thus Pr{·|B} is a probability measure on (Ω, A). Furthermore, (Ω, A, Pr{·|B}) is a probability space.

Theorem 2.64 (Bayes' Rule) Let the events A1, A2, · · · , An form a partition of the space Ω such that Pr{Ai} > 0 for i = 1, 2, · · · , n, and let B be an event with Pr{B} > 0. Then we have

Pr{Ak|B} = Pr{Ak} Pr{B|Ak} / Σ_{i=1}^{n} Pr{Ai} Pr{B|Ai}    (2.98)

for k = 1, 2, · · · , n.

Proof: Since A1, A2, · · · , An form a partition of the space Ω, we have

Pr{B} = Σ_{i=1}^{n} Pr{Ai ∩ B} = Σ_{i=1}^{n} Pr{Ai} Pr{B|Ai}.

Thus, for any k, if Pr{B} > 0, then

Pr{Ak|B} = Pr{Ak ∩ B} / Pr{B} = Pr{Ak} Pr{B|Ak} / Σ_{i=1}^{n} Pr{Ai} Pr{B|Ai}.

The theorem is proved.
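The C sketch below evaluates (2.98) for a hypothetical two-event partition (a rare condition A1 with prior 0.01, its complement A2, and an imperfect test B); all the numbers are invented purely for illustration.

/* Evaluating Bayes' rule (2.98) for a hypothetical two-event partition:
   A1 = "condition present" (prior 0.01), A2 = its complement, B = "test positive".
   All numbers are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    double prior[2]      = {0.01, 0.99};   /* Pr{A1}, Pr{A2} */
    double likelihood[2] = {0.95, 0.05};   /* Pr{B|A1}, Pr{B|A2} */
    double prB = 0.0;
    for (int i = 0; i < 2; i++)
        prB += prior[i] * likelihood[i];   /* total probability of B */
    for (int k = 0; k < 2; k++)
        printf("Pr{A%d|B} = %.4f\n", k + 1, prior[k] * likelihood[k] / prB);
    return 0;
}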

Definition 2.36 Let (Ω, A, Pr) be a probability space. Then the conditional probability distribution Φ : [−∞, +∞] × A → [0, 1] of a random variable ξ given B is defined by

Φ(x|B) = Pr{ξ ≤ x | B}    (2.99)

provided that Pr{B} > 0.

Definition 2.37 Let (Ω, A, Pr) be a probability space. Then the conditional probability density function φ : ℜ × A → [0, +∞) of a random variable ξ given B is a function such that

Φ(x|B) = ∫_{−∞}^{x} φ(y|B) dy    (2.100)

holds for all x ∈ [−∞, +∞], where Φ is the conditional probability distribution of the random variable ξ given B, provided that Pr{B} > 0.

Example 2.24: Let ξ and η be random variables, where η takes on only countably many values y1, y2, · · · Then, for each i, the conditional probability distribution of ξ given η = yi is

Φ(x|η = yi) = Pr{ξ ≤ x | η = yi} = Pr{ξ ≤ x, η = yi} / Pr{η = yi}.

Example 2.25: Let (ξ, η) be a random vector with joint probability density function ψ. Then the marginal probability density functions of ξ and η are

f(x) = ∫_{−∞}^{+∞} ψ(x, y) dy,   g(y) = ∫_{−∞}^{+∞} ψ(x, y) dx,

respectively. Furthermore, we have

Pr{ξ ≤ x, η ≤ y} = ∫_{−∞}^{x} ∫_{−∞}^{y} ψ(r, t) dt dr = ∫_{−∞}^{y} [ ∫_{−∞}^{x} (ψ(r, t)/g(t)) dr ] g(t) dt

which implies that the conditional probability distribution of ξ given η = y is

Φ(x|η = y) = ∫_{−∞}^{x} (ψ(r, y)/g(y)) dr, a.s.    (2.101)


and the conditional probability density function of ξ given η = y is

φ(x|η = y) = ψ(x, y)/g(y), a.s.    (2.102)

Note that (2.101) and (2.102) are defined only for g(y) ≠ 0. In fact, the set {y | g(y) = 0} has probability 0.

Definition 2.38 Let ξ be a random variable on the probability space (Ω, A, Pr). Then the conditional expected value of ξ given B is defined by

E[ξ|B] = ∫_0^{+∞} Pr{ξ ≥ r|B} dr − ∫_{−∞}^{0} Pr{ξ ≤ r|B} dr    (2.103)

provided that at least one of the two integrals is finite.

Theorem 2.65 Let ξ and η be random variables with finite expected values. Then for any set B and any numbers a and b, we have

E[aξ + bη|B] = aE[ξ|B] + bE[η|B].    (2.104)

Proof: Like Theorem 2.30.

2.13 Stochastic Simulations

Stochastic simulation (also referred to as Monte Carlo simulation) has been applied to numerous areas, and is defined as a technique of performing sampling experiments on models of stochastic systems. Although simulation is an imprecise technique that provides only statistical estimates rather than exact results, and is often a slow and costly way to study problems, it is indeed a powerful tool for dealing with complex problems that lack analytic solutions.

The basis of stochastic simulation is random number generation. Generally, let x be a random variable with a probability distribution Φ(·). Since Φ(·) is a nondecreasing function, the inverse function Φ⁻¹(·) is defined on [0, 1]. Assume that u is a uniformly distributed variable on the interval [0, 1]. Then we have

Pr{Φ⁻¹(u) ≤ y} = Pr{u ≤ Φ(y)} = Φ(y)    (2.105)

which proves that the variable x = Φ⁻¹(u) has the probability distribution Φ(·). In order to get a random variable x with probability distribution Φ(·), we can produce a uniformly distributed variable u from the interval [0, 1] and assign x = Φ⁻¹(u). This process is called the inverse transform method. For the main known distributions, however, direct generating processes are usually used instead of the inverse transform method. For detailed expositions, interested readers may consult Fishman [29], Law and Kelton


[56], Bratley et al. [11], Rubinstein [124], and Liu [75]. Here we give some generating methods for probability distributions frequently used in this book.
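Before turning to the specific generators, the following C sketch illustrates the inverse transform method itself. The distribution used here, EXP(β) with Φ⁻¹(u) = −β ln(1 − u), is an illustrative assumption.

/* A minimal sketch of the inverse transform method: x = Phi^{-1}(u) with u ~ U(0,1).
   The EXP(beta) distribution with Phi^{-1}(u) = -beta*ln(1-u) is an illustrative
   assumption. Compile with: cc invtransform.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* inverse of the EXP(beta) distribution Phi(x) = 1 - exp(-x/beta) */
static double exp_inv(double u, double beta) {
    return -beta * log(1.0 - u);
}

int main(void) {
    const double beta = 2.0;
    const int N = 100000;
    double sum = 0.0;
    srand(1);
    for (int i = 0; i < N; i++) {
        double u = rand() / (RAND_MAX + 1.0);   /* u in [0,1) */
        sum += exp_inv(u, beta);
    }
    /* The sample mean should be close to beta, the expected value of EXP(beta). */
    printf("sample mean = %.4f (expected value beta = %.1f)\n", sum / N, beta);
    return 0;
}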

Uniform Distribution: A random variable ξ has a uniform distribution if its probability density function is defined by

φ(x) = 1/(b − a) if a ≤ x ≤ b,  and φ(x) = 0 otherwise    (2.106)

denoted by U(a, b), where a and b are given real numbers with a < b. A function for generating pseudorandom numbers is provided by the C standard library on any type of computer, declared as

#include <stdlib.h>
int rand(void)

which produces a pseudorandom integer between 0 and RAND_MAX, where RAND_MAX is defined in stdlib.h (at least 2^15 − 1). Thus a uniformly distributed variable on an interval [a, b] can be produced as follows:

Algorithm 2.1 (Uniform Distribution)
Step 1. u = rand( ).
Step 2. u ← u/RAND_MAX.
Step 3. Return a + u(b − a).

Exponential Distribution: A random variable ξ has an exponential distribution with expected value β (β > 0) if its probability density function is defined by

φ(x) = (1/β) e^{−x/β} if 0 ≤ x < ∞,  and φ(x) = 0 otherwise    (2.107)

denoted by EXP(β). An exponentially distributed variable can be generated in the following way:

Algorithm 2.2 (Exponential Distribution)
Step 1. Generate u from U(0, 1).
Step 2. Return −β ln(u).

Normal Distribution: A random variable ξ has a normal distribution if its probability density function is defined as

φ(x) = (1/(σ√(2π))) exp[ −(x − μ)²/(2σ²) ],   −∞ < x < +∞    (2.108)

denoted by N(μ, σ²), where μ is the expected value and σ² is the variance.


Algorithm 2.3 (Normal Distribution)
Step 1. Generate u1 and u2 from U(0, 1).
Step 2. y = [−2 ln(u1)]^{1/2} sin(2πu2).
Step 3. Return μ + σy.

Triangular Distribution: A random variable ξ has a triangular distribution if its probability density function is defined as

f(x) = 2(x − a)/((b − a)(m − a)) if a < x ≤ m;  2(b − x)/((b − a)(b − m)) if m < x ≤ b;  0 otherwise    (2.109)

denoted by T(a, b, m), where a < m < b.

Algorithm 2.4 (Triangular Distribution)
Step 1. c = (m − a)/(b − a).
Step 2. Generate u from U(0, 1).
Step 3. If u < c, then y = √(cu); otherwise y = 1 − √((1 − c)(1 − u)).
Step 4. Return a + (b − a)y.
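The four generators above can be collected into a small C module. The following is a direct transcription of Algorithms 2.1–2.4; using rand() as the uniform source is an assumption made for illustration, and a production simulation would substitute a better generator.

/* C transcriptions of Algorithms 2.1-2.4. Using rand() as the uniform source is an
   illustrative assumption. Compile with: cc generators.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Algorithm 2.1: U(a,b); the offsets keep u strictly inside (0,1) */
double uniform(double a, double b) {
    double u = (rand() + 0.5) / ((double)RAND_MAX + 1.0);
    return a + u * (b - a);
}

/* Algorithm 2.2: EXP(beta) */
double exponential(double beta) {
    return -beta * log(uniform(0.0, 1.0));
}

/* Algorithm 2.3: N(mu, sigma^2) via the Box-Muller transform */
double normal(double mu, double sigma) {
    double u1 = uniform(0.0, 1.0), u2 = uniform(0.0, 1.0);
    double y = sqrt(-2.0 * log(u1)) * sin(2.0 * PI * u2);
    return mu + sigma * y;
}

/* Algorithm 2.4: T(a,b,m) */
double triangular(double a, double b, double m) {
    double c = (m - a) / (b - a);
    double u = uniform(0.0, 1.0);
    double y = (u < c) ? sqrt(c * u) : 1.0 - sqrt((1.0 - c) * (1.0 - u));
    return a + (b - a) * y;
}

int main(void) {
    srand(75);
    printf("U(0,3): %.3f  EXP(1): %.3f  N(2,1): %.3f  T(0,3,1): %.3f\n",
           uniform(0.0, 3.0), exponential(1.0), normal(2.0, 1.0),
           triangular(0.0, 3.0, 1.0));
    return 0;
}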

We next show, through some numerical examples, why and how stochastic simulation works for stochastic systems.

Example 2.26: Let ξ be an n-dimensional random vector defined on the probability space (Ω, A, Pr) (equivalently, it is characterized by a probability distribution Φ), and f : ℜⁿ → ℜ a measurable function. Then f(ξ) is a random variable. In order to calculate the expected value E[f(ξ)], we generate ω_k from Ω according to the probability measure Pr, and write ξ_k = ξ(ω_k) for k = 1, 2, · · · , N. Equivalently, we generate random vectors ξ_k, k = 1, 2, · · · , N according to the probability distribution Φ. It follows from the strong law of large numbers that

(1/N) Σ_{k=1}^{N} f(ξ_k) → E[f(ξ)], a.s.    (2.110)

as N → ∞. Therefore, the value E[f(ξ)] can be estimated by (1/N) Σ_{k=1}^{N} f(ξ_k) provided that N is sufficiently large.


Algorithm 2.5 (Stochastic Simulation)
Step 1. Set L = 0.
Step 2. Generate ω from Ω according to the probability measure Pr.
Step 3. L ← L + f(ξ(ω)).
Step 4. Repeat the second and third steps N times.
Step 5. E[f(ξ)] = L/N.

Let ξ1 be an exponentially distributed variable EXP(1), ξ2 a normally distributed variable N(2, 1), and ξ3 a uniformly distributed variable U(0, 3). A run of stochastic simulation with 3000 cycles shows that E[ξ1 + ξ2² + ξ3³] = 12.94.
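A C sketch of Algorithm 2.5 for this particular example follows. The inline generators repeat Algorithms 2.1–2.3, and the estimate is itself random, so a run will differ slightly from the 12.94 reported above.

/* Algorithm 2.5 applied to E[xi1 + xi2^2 + xi3^3] with xi1 ~ EXP(1), xi2 ~ N(2,1),
   xi3 ~ U(0,3). The estimate varies from run to run around the reported 12.94.
   Compile with: cc sim_expect.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double unif(double a, double b) {
    return a + (b - a) * (rand() + 0.5) / ((double)RAND_MAX + 1.0);
}
static double expo(double beta) { return -beta * log(unif(0.0, 1.0)); }
static double normd(double mu, double sigma) {
    return mu + sigma * sqrt(-2.0 * log(unif(0.0, 1.0)))
                      * sin(2.0 * 3.14159265358979323846 * unif(0.0, 1.0));
}

int main(void) {
    const int N = 3000;
    double L = 0.0;
    srand(75);
    for (int k = 0; k < N; k++) {
        double x1 = expo(1.0), x2 = normd(2.0, 1.0), x3 = unif(0.0, 3.0);
        L += x1 + x2 * x2 + x3 * x3 * x3;   /* f(xi) */
    }
    printf("E[f(xi)] is approximately %.2f\n", L / N);
    return 0;
}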

Example 2.27: Let ξ be an n-dimensional random vector defined on the probability space (Ω, A, Pr), and f : ℜⁿ → ℜᵐ a measurable function. In order to obtain the probability

L = Pr{f(ξ) ≤ 0},    (2.111)

we generate ω_k from Ω according to the probability measure Pr, and write ξ_k = ξ(ω_k) for k = 1, 2, · · · , N. Let N′ denote the number of occasions on which f(ξ_k) ≤ 0 for k = 1, 2, · · · , N (i.e., the number of random vectors satisfying the system of inequalities). Let us define

h(ξ_k) = 1 if f(ξ_k) ≤ 0,  and h(ξ_k) = 0 otherwise.

Then we have E[h(ξ_k)] = L for all k, and N′ = Σ_{k=1}^{N} h(ξ_k). It follows from the strong law of large numbers that

N′/N = (1/N) Σ_{k=1}^{N} h(ξ_k)

converges a.s. to L. Thus the probability L can be estimated by N′/N provided that N is sufficiently large.

Algorithm 2.6 (Stochastic Simulation)
Step 1. Set N′ = 0.
Step 2. Generate ω from Ω according to the probability measure Pr.
Step 3. If f(ξ(ω)) ≤ 0, then N′ ← N′ + 1.
Step 4. Repeat the second and third steps N times.
Step 5. L = N′/N.


Let ξ1 be an exponentially distributed variable EXP(1), ξ2 a normally distributed variable N(2, 1), and ξ3 a uniformly distributed variable U(0, 3). A run of stochastic simulation with 3000 cycles shows that

Pr{ξ1 + ξ2² + ξ3³ ≤ 30} = 0.95.
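A C sketch of Algorithm 2.6 for the same three variables follows; the inline generators repeat Algorithms 2.1–2.3, and the estimate is random, so a run will vary around the 0.95 reported above.

/* Algorithm 2.6 applied to Pr{xi1 + xi2^2 + xi3^3 <= 30}. The estimate varies from
   run to run around the reported 0.95. Compile with: cc sim_prob.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double unif(double a, double b) {
    return a + (b - a) * (rand() + 0.5) / ((double)RAND_MAX + 1.0);
}
static double expo(double beta) { return -beta * log(unif(0.0, 1.0)); }
static double normd(double mu, double sigma) {
    return mu + sigma * sqrt(-2.0 * log(unif(0.0, 1.0)))
                      * sin(2.0 * 3.14159265358979323846 * unif(0.0, 1.0));
}

int main(void) {
    const int N = 3000;
    int count = 0;                        /* N' in Algorithm 2.6 */
    srand(75);
    for (int k = 0; k < N; k++) {
        double x1 = expo(1.0), x2 = normd(2.0, 1.0), x3 = unif(0.0, 3.0);
        if (x1 + x2 * x2 + x3 * x3 * x3 <= 30.0)
            count++;
    }
    printf("Pr{f(xi) <= 30} is approximately %.3f\n", (double)count / N);
    return 0;
}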

Example 2.28: Suppose that ξ is an n-dimensional random vector defined on the probability space (Ω, A, Pr), and f : ℜⁿ → ℜ is a measurable function. The problem is to determine the maximal value f̄ such that

Pr{f(ξ) ≥ f̄} ≥ α    (2.112)

where α is a predetermined confidence level with 0 < α < 1. We generate ω_k from Ω according to the probability measure Pr, and write ξ_k = ξ(ω_k) for k = 1, 2, · · · , N. Now we define

h(ξ_k) = 1 if f(ξ_k) ≥ f̄,  and h(ξ_k) = 0 otherwise

for k = 1, 2, · · · , N, which form a sequence of random variables with E[h(ξ_k)] = α for all k. By the strong law of large numbers, we obtain

(1/N) Σ_{k=1}^{N} h(ξ_k) → α, a.s.

as N → ∞. Note that the sum Σ_{k=1}^{N} h(ξ_k) is just the number of ξ_k satisfying f(ξ_k) ≥ f̄ for k = 1, 2, · · · , N. Thus the value f̄ can be taken as the N′-th largest element in the sequence {f(ξ_1), f(ξ_2), · · · , f(ξ_N)}, where N′ is the integer part of αN.

Algorithm 2.7 (Stochastic Simulation)
Step 1. Set N′ as the integer part of αN.
Step 2. Generate ω1, ω2, · · · , ωN from Ω according to the probability measure Pr.
Step 3. Return the N′-th largest element in {f(ξ(ω1)), f(ξ(ω2)), · · · , f(ξ(ωN))}.

Let us employ stochastic simulation to search for the maximal f̄ such that

Pr{ξ1 + ξ2² + ξ3³ ≥ f̄} ≥ 0.8

where ξ1 is an exponentially distributed variable EXP(1), ξ2 a normally distributed variable N(2, 1), and ξ3 a uniformly distributed variable U(0, 3). A run of stochastic simulation with 3000 cycles shows that f̄ = 4.93.
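A C sketch of Algorithm 2.7 for this search follows; it sorts the sampled values of f and returns the N′-th largest. The inline generators repeat Algorithms 2.1–2.3, and the answer is random, so a run will vary around the 4.93 reported above.

/* Algorithm 2.7 applied to the critical value with Pr{xi1 + xi2^2 + xi3^3 >= f} >= 0.8.
   The answer varies from run to run around the reported 4.93.
   Compile with: cc sim_critical.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double unif(double a, double b) {
    return a + (b - a) * (rand() + 0.5) / ((double)RAND_MAX + 1.0);
}
static double expo(double beta) { return -beta * log(unif(0.0, 1.0)); }
static double normd(double mu, double sigma) {
    return mu + sigma * sqrt(-2.0 * log(unif(0.0, 1.0)))
                      * sin(2.0 * 3.14159265358979323846 * unif(0.0, 1.0));
}
static int cmp_desc(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);                 /* sort in decreasing order */
}

int main(void) {
    enum { N = 3000 };
    const double alpha = 0.8;
    static double fval[N];
    srand(75);
    for (int k = 0; k < N; k++) {
        double x1 = expo(1.0), x2 = normd(2.0, 1.0), x3 = unif(0.0, 3.0);
        fval[k] = x1 + x2 * x2 + x3 * x3 * x3;
    }
    qsort(fval, N, sizeof(double), cmp_desc);
    int Nprime = (int)(alpha * N);            /* integer part of alpha*N */
    printf("critical value is approximately %.2f\n", fval[Nprime - 1]);
    return 0;
}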


Chapter 3

Credibility Theory

An ordinary set is normally defined as a collection of elements. Each single element can either belong or not belong to the set. Such a set can be described in different ways: one can list the elements that belong to the set; describe the set analytically by some equalities and inequalities; or define the member elements by using the characteristic function, in which 1 indicates membership and 0 nonmembership. However, in many cases, membership is not clear. Examples include "young man", "many people", "high mountain", "great river", "large number", and "about 100 meters". Such concepts are not tractable by classical set theory or probability theory. In order to deal with them, let us first introduce the concept of fuzzy set initialized by Zadeh [153] in 1965: A fuzzy subset A of a universal set U is defined by its membership function μ which assigns to each element x ∈ U a real number μ(x) in the interval [0, 1], where the value μ(x) represents the grade of membership of x in A. Thus, the nearer the value of μ(x) is to unity, the higher the grade of membership of x in A.

Fuzzy set theory has been well developed and applied in a wide variety of real problems. As a fuzzy set of real numbers, the term fuzzy variable was first introduced by Kaufmann [44]; it then appeared in Zadeh [155][156] and Nahmias [98]. Possibility theory was proposed by Zadeh [156], and developed by many researchers such as Dubois and Prade [25][26]. In order to provide a mathematical theory to describe fuzziness, several types of theoretical framework have been suggested.

There are three types of measures in the fuzzy world: possibility, necessity, and credibility. Note that none of them is a measure in the sense of Definition 1.3 on Page 2. Traditionally, possibility measure is regarded as the parallel concept of probability measure and is widely used. However, it is, in fact, the credibility measure that plays the role of probability measure. This fact provides a motivation to develop an axiomatic approach based on credibility measure, called credibility theory. Generally speaking, credibility theory is the branch of mathematics that studies the behavior of fuzzy events.


The emphasis in this chapter is mainly on fuzzy set, fuzzy variable, fuzzy arithmetic, possibility space, possibility measure, necessity measure, credibility measure, credibility distribution, independent and identical distribution, expected value operator, variance, critical values, characteristic function, inequalities, convergence concepts, and fuzzy simulation.

3.1 Four Axioms

In order to present the axiomatic definition of possibility, it is necessary to assign to each event A a number Pos{A} which indicates the possibility that A will occur. In order to ensure that the number Pos{A} has certain mathematical properties which we intuitively expect a possibility to have, four axioms must be satisfied. Let Θ be a nonempty set representing the sample space, and P(Θ) the power set of Θ. The four axioms are listed as follows:

Axiom 1. Pos{Θ} = 1.

Axiom 2. Pos{∅} = 0.

Axiom 3. Pos{∪_i Ai} = sup_i Pos{Ai} for any collection {Ai} in P(Θ).

Axiom 4. Let Θi be nonempty sets on which Posi{·} satisfy the first three axioms, i = 1, 2, · · · , n, respectively, and Θ = Θ1 × Θ2 × · · · × Θn. Then

Pos{A} = sup_{(θ1,θ2,···,θn)∈A} Pos1{θ1} ∧ Pos2{θ2} ∧ · · · ∧ Posn{θn}    (3.1)

for each A ∈ P(Θ). In that case we write Pos = Pos1 ∧ Pos2 ∧ · · · ∧ Posn.

The first three axioms were given by Nahmias [98] to define a possibility measure, and the fourth one was given by Liu [76] to define the product possibility measure. Note that Pos = Pos1 ∧ Pos2 ∧ · · · ∧ Posn satisfies the first three axioms. The whole credibility theory can be developed based on the four axioms.

Definition 3.1 Let Θ be a nonempty set, and P(Θ) the power set of Θ. Then Pos is called a possibility measure if it satisfies the first three axioms.

Definition 3.2 Let Θ be a nonempty set, P(Θ) the power set of Θ, and Pos a possibility measure. Then the triplet (Θ, P(Θ), Pos) is called a possibility space.

Theorem 3.1 Let (Θ, P(Θ), Pos) be a possibility space. Then we have
(a) 0 ≤ Pos{A} ≤ 1 for any A ∈ P(Θ);
(b) Pos{A} ≤ Pos{B} whenever A ⊂ B;
(c) Pos{A ∪ B} ≤ Pos{A} + Pos{B} for any A, B ∈ P(Θ). That is, the possibility measure is subadditive.


Proof: (a) Since Θ = A ∪ Aᶜ, we have Pos{A} ∨ Pos{Aᶜ} = Pos{Θ} = 1, which implies that Pos{A} ≤ 1. On the other hand, since A = A ∪ ∅, we have Pos{A} ∨ 0 = Pos{A}, which implies that Pos{A} ≥ 0. It follows that 0 ≤ Pos{A} ≤ 1 for any A ∈ P(Θ).

(b) Let A ⊂ B. Then there exists a set C such that B = A ∪ C. Thus we have Pos{A} ∨ Pos{C} = Pos{B}, which gives Pos{A} ≤ Pos{B}.

(c) The part (c) holds obviously because Pos{A ∪ B} = Pos{A} ∨ Pos{B} ≤ Pos{A} + Pos{B}.

Theorem 3.2 (Possibility Lower Semicontinuity Theorem) Let (Θ, P(Θ), Pos) be a possibility space. If A1, A2, · · · ∈ P(Θ) and Ai ↑ A, then

lim_{i→∞} Pos{Ai} = Pos{A}.    (3.2)

Proof: Since {Ai} is an increasing sequence, we have A = A1 ∪ A2 ∪ · · · and Pos{A1} ≤ Pos{A2} ≤ · · · Thus

Pos{A} = Pos{ ∪_{i=1}^{∞} Ai } = sup_{1≤i<∞} Pos{Ai} = lim_{i→∞} Pos{Ai}.

Definition 3.3 (Zhou and Liu [162]) Let (Θ, P(Θ), Pos) be a possibility space. Then the set

Θ⁺ = {θ ∈ Θ | Pos{θ} > 0}    (3.3)

is called the kernel of the possibility space (Θ, P(Θ), Pos).

Product Possibility Space

Theorem 3.3 (Liu [76]) Suppose that (Θi, P(Θi), Posi), i = 1, 2, · · · , n are possibility spaces. Let Θ = Θ1 × Θ2 × · · · × Θn and Pos = Pos1 ∧ Pos2 ∧ · · · ∧ Posn. Then the set function Pos is a possibility measure on P(Θ), and (Θ, P(Θ), Pos) is a possibility space.

Proof: We must prove that Pos satisfies the first three axioms. It is obvious that Pos{∅} = 0 and Pos{Θ} = 1. In addition, for any arbitrary collection {Ai} in P(Θ), we have

Pos{∪_i Ai} = sup_{(θ1,θ2,···,θn)∈∪_i Ai} Pos1{θ1} ∧ Pos2{θ2} ∧ · · · ∧ Posn{θn}
           = sup_i sup_{(θ1,θ2,···,θn)∈Ai} Pos1{θ1} ∧ Pos2{θ2} ∧ · · · ∧ Posn{θn}
           = sup_i Pos{Ai}.

Thus the set function Pos defined by (3.1) is a possibility measure and (Θ, P(Θ), Pos) is a possibility space.


Definition 3.4 (Liu [76]) Let (Θi, P(Θi), Posi), i = 1, 2, · · · , n be possibility spaces, Θ = Θ1 × Θ2 × · · · × Θn and Pos = Pos1 ∧ Pos2 ∧ · · · ∧ Posn. Then (Θ, P(Θ), Pos) is called the product possibility space of (Θi, P(Θi), Posi), i = 1, 2, · · · , n.

Infinite Product Possibility Space

Theorem 3.4 Let (Θi, P(Θi), Posi), i = 1, 2, · · · be possibility spaces. If

Θ = Θ1 × Θ2 × · · ·    (3.4)

Pos{A} = sup_{(θ1,θ2,···)∈A} Pos1{θ1} ∧ Pos2{θ2} ∧ · · ·    (3.5)

then the set function Pos is a possibility measure on P(Θ), and (Θ, P(Θ), Pos) is a possibility space.

Proof: We must prove that Pos satisfies the first three axioms. It is obvious that Pos{∅} = 0 and Pos{Θ} = 1. In addition, for any arbitrary collection {Ai} in P(Θ), we have

Pos{∪_i Ai} = sup_{(θ1,θ2,···)∈∪_i Ai} Pos1{θ1} ∧ Pos2{θ2} ∧ · · ·
           = sup_i sup_{(θ1,θ2,···)∈Ai} Pos1{θ1} ∧ Pos2{θ2} ∧ · · ·
           = sup_i Pos{Ai}.

Thus the set function Pos defined by (3.5) is a possibility measure and (Θ, P(Θ), Pos) is a possibility space.

Definition 3.5 Let (Θi, P(Θi), Posi), i = 1, 2, · · · be possibility spaces. Define Θ = Θ1 × Θ2 × · · · and Pos = Pos1 ∧ Pos2 ∧ · · · Then (Θ, P(Θ), Pos) is called the infinite product possibility space of (Θi, P(Θi), Posi), i = 1, 2, · · ·

Necessity Measure

The necessity measure of a set A is defined as the impossibility of the opposite set Aᶜ.

Definition 3.6 Let (Θ, P(Θ), Pos) be a possibility space, and A a set in P(Θ). Then the necessity measure of A is defined by

Nec{A} = 1 − Pos{Aᶜ}.    (3.6)

Theorem 3.5 Let (Θ, P(Θ), Pos) be a possibility space. Then we have
(a) Nec{Θ} = 1;
(b) Nec{∅} = 0;
(c) Nec{A} = 0 whenever Pos{A} < 1;
(d) Nec{A} ≤ Nec{B} whenever A ⊂ B;
(e) Nec{A} + Pos{Aᶜ} = 1 for any A ∈ P(Θ).


Proof: The theorem may be proved easily from the definition.

Example 3.1: However, the necessity measure Nec is not subadditive. Let Θ = {θ1, θ2}, Pos{θ1} = 1 and Pos{θ2} = 0.8. Then

Nec{θ1} + Nec{θ2} = (1 − 0.8) + 0 < 1 = Nec{θ1, θ2}.

Theorem 3.6 (Necessity Upper Semicontinuity Theorem) Let (Θ, P(Θ), Pos) be a possibility space. If A1, A2, · · · ∈ P(Θ) and Ai ↓ A, then

lim_{i→∞} Nec{Ai} = Nec{A}.    (3.7)

Proof: If Ai ↓ A, then Aᶜi ↑ Aᶜ. It follows from the possibility lower semicontinuity theorem that

Nec{Ai} = 1 − Pos{Aᶜi} → 1 − Pos{Aᶜ} = Nec{A}.

The theorem is proved.

Credibility Measure

The credibility of a fuzzy event is defined as the average of its possibility and necessity. It will play the role of probability measure.

Definition 3.7 (Liu and Liu [77]) Let (Θ, P(Θ), Pos) be a possibility space, and A a set in P(Θ). Then the credibility measure of A is defined by

Cr{A} = (1/2)(Pos{A} + Nec{A}).    (3.8)

Remark 3.1: A fuzzy event may fail even though its possibility achieves 1, and may hold even though its necessity is 0. However, the fuzzy event must hold if its credibility is 1, and fail if its credibility is 0.

Theorem 3.7 Let (Θ, P(Θ), Pos) be a possibility space, and A a set in P(Θ). Then we have

Pos{A} ≥ Cr{A} ≥ Nec{A}.    (3.9)

Proof: We first prove Pos{A} ≥ Nec{A}. If Pos{A} = 1, then it is obvious that Pos{A} ≥ Nec{A}. Otherwise, we must have Pos{Aᶜ} = 1, which implies that Nec{A} = 1 − Pos{Aᶜ} = 0. Thus Pos{A} ≥ Nec{A} always holds. It follows from the definition of credibility that the value of credibility is between possibility and necessity. Hence (3.9) holds. The theorem is proved.


Theorem 3.8 Let (Θ, P(Θ), Pos) be a possibility space. Then we have
(a) Cr{Θ} = 1;
(b) Cr{∅} = 0;
(c) Cr{A} ≤ Cr{B} whenever A ⊂ B;
(d) Cr is self-dual, i.e., Cr{A} + Cr{Aᶜ} = 1 for any A ∈ P(Θ);
(e) Cr is subadditive, i.e., Cr{A ∪ B} ≤ Cr{A} + Cr{B} for any A, B ∈ P(Θ).

Proof: Clearly (a), (b), (c) and (d) follow from the definition. We now prove the part (e). The argument breaks down into four cases.

Case 1: Pos{A} = 1 and Pos{B} = 1. For this case, we have

Cr{A} + Cr{B} ≥ 1/2 + 1/2 = 1 ≥ Cr{A ∪ B}.

Case 2: Pos{A} < 1 and Pos{B} < 1. For this case, Pos{A ∪ B} = Pos{A} ∨ Pos{B} < 1. Thus we have

Cr{A} + Cr{B} = (1/2)Pos{A} + (1/2)Pos{B} ≥ (1/2)(Pos{A} ∨ Pos{B}) = (1/2)Pos{A ∪ B} = Cr{A ∪ B}.

Case 3: Pos{A} = 1 and Pos{B} < 1. For this case, Pos{A ∪ B} = Pos{A} ∨ Pos{B} = 1. Then we have

Pos{Aᶜ} = Pos{Aᶜ ∩ B} ∨ Pos{Aᶜ ∩ Bᶜ} ≤ Pos{Aᶜ ∩ B} + Pos{Aᶜ ∩ Bᶜ} ≤ Pos{B} + Pos{Aᶜ ∩ Bᶜ}.

Applying this inequality, we obtain

Cr{A} + Cr{B} = 1 − (1/2)Pos{Aᶜ} + (1/2)Pos{B}
             ≥ 1 − (1/2)(Pos{B} + Pos{Aᶜ ∩ Bᶜ}) + (1/2)Pos{B}
             = 1 − (1/2)Pos{Aᶜ ∩ Bᶜ}
             = Cr{A ∪ B}.

Case 4: Pos{A} < 1 and Pos{B} = 1. This case may be proved by an argument similar to Case 3. The proof is complete.


Theorem 3.9 Let (Θ, P(Θ), Pos) be a possibility space, and A1, A2, · · · ∈ P(Θ). If Σ_{i=1}^{∞} Pos{Ai} < ∞ or Σ_{i=1}^{∞} Cr{Ai} < ∞, then

Pos{ lim sup_{i→∞} Ai } = Cr{ lim sup_{i→∞} Ai } = 0.    (3.10)

Proof: At first, Σ_{i=1}^{∞} Pos{Ai} < ∞ and Σ_{i=1}^{∞} Cr{Ai} < ∞ are equivalent. Since the possibility measure is increasing and subadditive, we have

Pos{ lim sup_{i→∞} Ai } = Pos{ ∩_{k=1}^{∞} ∪_{i=k}^{∞} Ai } ≤ lim_{k→∞} Pos{ ∪_{i=k}^{∞} Ai } ≤ lim_{k→∞} Σ_{i=k}^{∞} Pos{Ai} = 0   (by Σ_{i=1}^{∞} Pos{Ai} < ∞).

Together with Cr{A} ≤ Pos{A}, the theorem is proved.

Example 3.2: However, the condition Σ_{i=1}^{∞} Nec{Ai} < ∞ tells us nothing. Let Θ = {θ1, θ2, · · ·} and Pos{θi} = i/(i + 1) for i = 1, 2, · · · We define Ai = {θ1, θ2, · · · , θi}. Then we have Nec{Ai} = 0 and Σ_{i=1}^{∞} Nec{Ai} = 0 < ∞. But

lim sup_{i→∞} Ai = Θ,

Pos{ lim sup_{i→∞} Ai } = Nec{ lim sup_{i→∞} Ai } = Cr{ lim sup_{i→∞} Ai } = 1.

Credibility Semicontinuity Laws

Generally speaking, the credibility measure is neither lower semicontinuous nor upper semicontinuous. However, we have the following credibility semicontinuity laws.

Theorem 3.10 (Credibility Semicontinuity Law) Let (Θ, P(Θ), Pos) be a possibility space, and A1, A2, · · · ∈ P(Θ). Then we have

lim_{i→∞} Cr{Ai} = Cr{ lim_{i→∞} Ai }    (3.11)

if one of the following conditions is satisfied:
(a) Cr{ lim_{i→∞} Ai } ≤ 0.5 and Ai ↑ A;   (b) lim_{i→∞} Cr{Ai} < 0.5 and Ai ↑ A;
(c) Cr{ lim_{i→∞} Ai } ≥ 0.5 and Ai ↓ A;   (d) lim_{i→∞} Cr{Ai} > 0.5 and Ai ↓ A.

Proof: (a) If Cr{A} ≤ 0.5, then Cr{Ai} ≤ 0.5 for all i. Thus Cr{A} = Pos{A}/2 and Cr{Ai} = Pos{Ai}/2 for all i. It follows from the possibility lower semicontinuity theorem that Cr{Ai} → Cr{A}.


(b) Since lim_{i→∞} Cr{Ai} < 0.5, we have Cr{Ai} < 0.5 for all i. Thus Pos{Ai} = 2Cr{Ai} for all i. Therefore

Pos{A} = Pos{ ∪_{i=1}^{∞} Ai } = lim_{i→∞} Pos{Ai} = lim_{i→∞} 2Cr{Ai} < 1

which implies that Cr{A} < 0.5. It follows from part (a) that Cr{Ai} → Cr{A} as i → ∞.

(c) Since Cr{A} ≥ 0.5 and Ai ↓ A, it follows from the self-duality of the credibility measure that Cr{Aᶜ} ≤ 0.5 and Aᶜi ↑ Aᶜ. Thus Cr{Ai} = 1 − Cr{Aᶜi} → 1 − Cr{Aᶜ} = Cr{A} as i → ∞.

(d) Since lim_{i→∞} Cr{Ai} > 0.5 and Ai ↓ A, it follows from the self-duality of the credibility measure that

lim_{i→∞} Cr{Aᶜi} = lim_{i→∞} (1 − Cr{Ai}) < 0.5

and Aᶜi ↑ Aᶜ. Thus Cr{Ai} = 1 − Cr{Aᶜi} → 1 − Cr{Aᶜ} = Cr{A} as i → ∞. The theorem is proved.

Example 3.3: When Cr{Ai} < 0.5 for all i and Ai ↑ A, we cannot deduce that Cr{Ai} → Cr{A}. For example, let Θ = {θ1, θ2, · · ·} and Pos{θj} = (j − 1)/j for j = 1, 2, · · · Suppose that Ai = {θ1, θ2, · · · , θi} for i = 1, 2, · · · Then we have Ai ↑ Θ. However,

Cr{Ai} = (i − 1)/(2i) → 1/2 ≠ 1 = Cr{Θ}.

Example 3.4: When Cr{Ai} > 0.5 for all i and Ai ↓ A, we cannot deduce that Cr{Ai} → Cr{A}. For example, let Θ = {θ1, θ2, · · ·} and Pos{θj} = (j − 1)/j for j = 1, 2, · · · Suppose that Ai = {θi+1, θi+2, · · ·} for i = 1, 2, · · · Then we have Ai ↓ ∅. However,

Cr{Ai} = (i + 1)/(2i) → 1/2 ≠ 0 = Cr{∅}.

Theorem 3.11 Let (Θ, P(Θ), Pos) be a possibility space, and A1, A2, · · · ∈ P(Θ).
(a) If Cr{ lim inf_{i→∞} Ai } ≤ 0.5 or lim_{k→∞} Cr{ ∩_{i=k}^{∞} Ai } < 0.5, then

Cr{ lim inf_{i→∞} Ai } ≤ lim inf_{i→∞} Cr{Ai}.    (3.12)

(b) If Cr{ lim sup_{i→∞} Ai } ≥ 0.5 or lim_{k→∞} Cr{ ∪_{i=k}^{∞} Ai } > 0.5, then

lim sup_{i→∞} Cr{Ai} ≤ Cr{ lim sup_{i→∞} Ai }.    (3.13)


Proof: (a) Since ∩_{i=k}^{∞} Ai is an increasing sequence and ∩_{i=k}^{∞} Ai ⊂ Ak, we get

Cr{ lim inf_{i→∞} Ai } = Cr{ lim_{k→∞} ∩_{i=k}^{∞} Ai } = lim_{k→∞} Cr{ ∩_{i=k}^{∞} Ai } ≤ lim inf_{i→∞} Cr{Ai}.

Similarly, ∪_{i=k}^{∞} Ai is a decreasing sequence and ∪_{i=k}^{∞} Ai ⊃ Ak. Thus we have

Cr{ lim sup_{i→∞} Ai } = Cr{ lim_{k→∞} ∪_{i=k}^{∞} Ai } = lim_{k→∞} Cr{ ∪_{i=k}^{∞} Ai } ≥ lim sup_{i→∞} Cr{Ai}.

The theorem is proved.

Example 3.5: The strict inequalities in Theorem 3.11 may hold. For example, let

Ai = (0, 0.5] if i is odd,  and Ai = (0.5, 1] if i is even

for i = 1, 2, · · ·, and let Pos{x} = x for x ∈ (0, 1]. Then

Cr{ lim inf_{i→∞} Ai } = Cr{∅} = 0 < 0.25 = lim inf_{i→∞} Cr{Ai},

lim sup_{i→∞} Cr{Ai} = 0.75 < 1 = Cr{(0, 1]} = Cr{ lim sup_{i→∞} Ai }.

3.2 Fuzzy Variables

Definition 3.8 A fuzzy variable is defined as a function from a possibility space (Θ, P(Θ), Pos) to the set of real numbers.

Definition 3.9 Let ξ be a fuzzy variable defined on the possibility space (Θ, P(Θ), Pos). Then the set

ξ_α = {ξ(θ) | θ ∈ Θ, Pos{θ} ≥ α}    (3.14)

is called the α-level set of ξ. Especially, the set

{ξ(θ) | θ ∈ Θ, Pos{θ} > 0} = {ξ(θ) | θ ∈ Θ⁺}    (3.15)

is called the support of ξ, where Θ⁺ is the kernel of the possibility space (Θ, P(Θ), Pos).

Definition 3.10 A fuzzy variable ξ is said to be
(a) nonnegative if Pos{ξ < 0} = 0;
(b) positive if Pos{ξ ≤ 0} = 0;


(c) continuous if Pos{ξ = x} is a continuous function of x;
(d) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Pos{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm} = 0;    (3.16)

(e) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Pos{ξ ≠ x1, ξ ≠ x2, · · ·} = 0.    (3.17)

Theorem 3.12 If the fuzzy variable ξ is continuous, then both Pos{ξ ≥ x} and Pos{ξ ≤ x} are continuous functions of x. Furthermore, Pos{x ≤ ξ ≤ y} is a continuous function on {(x, y) | x < y}.

Proof: Since Pos{ξ = x} is a continuous function of x, the function

Pos{ξ ≥ x} = sup_{y≥x} Pos{ξ = y}

is obviously continuous. Similarly, we may prove that Pos{ξ ≤ x} and Pos{x ≤ ξ ≤ y} are continuous functions.

Example 3.6: Generally speaking, the fuzzy variable ξ is not continuous even when both Pos{ξ ≤ x} and Pos{ξ ≥ x} are continuous functions. For example, let Θ = ℜ, and

Pos{θ} = 0 if θ < −1;  1 + θ if −1 ≤ θ < 0;  0 if θ = 0;  1 − θ if 0 < θ ≤ 1;  0 if 1 < θ.

If ξ is the identity function from ℜ to ℜ, then Pos{ξ ≤ x} and Pos{ξ ≥ x} are continuous functions. However, the fuzzy variable ξ is not continuous because Pos{ξ = x} is not continuous at x = 0.

Membership Function

Definition 3.11 Let ξ be a fuzzy variable defined on the possibility space (Θ, P(Θ), Pos). Then its membership function is derived from the possibility measure by

μ(x) = Pos{θ ∈ Θ | ξ(θ) = x},   x ∈ ℜ.    (3.18)

Theorem 3.13 Let μ : ℜ → [0, 1] be a function with sup μ(x) = 1. Then there is a fuzzy variable whose membership function is μ.

Proof: For any A ∈ P(ℜ), we define a set function as

Pos{A} = sup_{x∈A} μ(x).


We will prove that Pos is a possibility measure. First, we have Pos{∅} = 0 and Pos{ℜ} = sup μ(x) = 1. In addition, for any sequence {Ai} in P(ℜ),

Pos{ ∪_i Ai } = sup{μ(x) | x ∈ ∪_i Ai} = sup_i sup_{x∈Ai} μ(x) = sup_i Pos{Ai}.

Hence Pos is a possibility measure and (ℜ, P(ℜ), Pos) is a possibility space. Now we define a fuzzy variable ξ as the identity function from (ℜ, P(ℜ), Pos) to ℜ. It is easy to verify the relation

Pos{ξ = x} = μ(x),   ∀x ∈ ℜ.

Thus the fuzzy variable ξ has the membership function μ(x).

Theorem 3.14 A fuzzy variable ξ with membership function μ is
(a) nonnegative if and only if μ(x) = 0 for all x < 0;
(b) positive if and only if μ(x) = 0 for all x ≤ 0;
(c) simple if and only if μ takes nonzero values at a finite number of points;
(d) discrete if and only if μ takes nonzero values at a countable set of points;
(e) continuous if and only if μ is a continuous function.

Proof: The theorem is obvious since the membership function μ(x) = Pos{ξ = x} for all x ∈ ℜ.

Definition 3.12 Let ξ and η be fuzzy variables defined on the possibility space (Θ, P(Θ), Pos). Then ξ = η if and only if ξ(θ) = η(θ) for all θ ∈ Θ.

Example 3.7: Assume that the fuzzy variables ξ and η have the same membership function. One question is whether ξ = η or not. Generally speaking, it is not true. Let Θ = {θ1, θ2, θ3} and

Pos{θ} = 0.5 if θ = θ1;  1 if θ = θ2;  0.5 if θ = θ3.

Then (Θ, P(Θ), Pos) is a possibility space. We now define two fuzzy variables as follows:

ξ(θ) = −1 if θ = θ1;  0 if θ = θ2;  1 if θ = θ3,
η(θ) = 1 if θ = θ1;  0 if θ = θ2;  −1 if θ = θ3.

Then ξ and η have the same membership function,

μ(x) = 0.5 if x = −1;  1 if x = 0;  0.5 if x = 1.

However, ξ ≠ η in the sense of Definition 3.12.


Fuzzy Vector

Definition 3.13 An n-dimensional fuzzy vector is defined as a function from a possibility space (Θ, P(Θ), Pos) to the set of n-dimensional real vectors.

Theorem 3.15 The vector (ξ1, ξ2, · · · , ξn) is a fuzzy vector if and only if ξ1, ξ2, · · · , ξn are fuzzy variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a fuzzy vector. Then ξ1, ξ2, · · · , ξn are functions from Θ to ℜ. Thus ξ1, ξ2, · · · , ξn are fuzzy variables. Conversely, suppose that the ξi are fuzzy variables defined on the possibility spaces (Θi, P(Θi), Posi), i = 1, 2, · · · , n, respectively. It is clear that (ξ1, ξ2, · · · , ξn) is a function from the product possibility space (Θ, P(Θ), Pos) to ℜⁿ, i.e.,

ξ(θ1, θ2, · · · , θn) = (ξ1(θ1), ξ2(θ2), · · · , ξn(θn))

for all (θ1, θ2, · · · , θn) ∈ Θ. Hence ξ = (ξ1, ξ2, · · · , ξn) is a fuzzy vector.

Definition 3.14 If ξ = (ξ1, ξ2, · · · , ξn) is a fuzzy vector on the possibility space (Θ, P(Θ), Pos), then its joint membership function is derived from the possibility measure by

μ(x) = Pos{θ ∈ Θ | ξ(θ) = x},   ∀x ∈ ℜⁿ.    (3.19)

Fuzzy Arithmetic

Definition 3.15 (Fuzzy Arithmetic on Single Possibility Space) Let f : ℜⁿ → ℜ be a function, and ξ1, ξ2, · · · , ξn fuzzy variables on the possibility space (Θ, P(Θ), Pos). Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy variable defined as

ξ(θ) = f(ξ1(θ), ξ2(θ), · · · , ξn(θ))    (3.20)

for any θ ∈ Θ.

Example 3.8: Let ξ1 and ξ2 be fuzzy variables on the possibility space (Θ, P(Θ), Pos). Then their sum and product are

(ξ1 + ξ2)(θ) = ξ1(θ) + ξ2(θ),   (ξ1 × ξ2)(θ) = ξ1(θ) × ξ2(θ),   ∀θ ∈ Θ.

Definition 3.16 (Fuzzy Arithmetic on Different Possibility Spaces) Let f : ℜⁿ → ℜ be a function, and ξi fuzzy variables defined on possibility spaces (Θi, P(Θi), Posi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy variable defined on the product possibility space (Θ, P(Θ), Pos) as

ξ(θ1, θ2, · · · , θn) = f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn))    (3.21)

for any (θ1, θ2, · · · , θn) ∈ Θ.


Example 3.9: Let ξ1 and ξ2 be fuzzy variables on the possibility spaces (Θ1, P(Θ1), Pos1) and (Θ2, P(Θ2), Pos2), respectively. Then their sum and product are

(ξ1 + ξ2)(θ1, θ2) = ξ1(θ1) + ξ2(θ2),   (ξ1 × ξ2)(θ1, θ2) = ξ1(θ1) × ξ2(θ2)

for any (θ1, θ2) ∈ Θ1 × Θ2.

Continuity Theorems

Theorem 3.16 (a) Let {ξi} be an increasing sequence of fuzzy variables such that lim_{i→∞} ξi is a fuzzy variable. If Cr{ lim_{i→∞} ξi ≤ r } ≥ 0.5 or lim_{i→∞} Cr{ξi ≤ r} > 0.5, then

lim_{i→∞} Cr{ξi ≤ r} = Cr{ lim_{i→∞} ξi ≤ r }.    (3.22)

If Cr{ lim_{i→∞} ξi > r } ≤ 0.5 or lim_{i→∞} Cr{ξi > r} < 0.5, then

lim_{i→∞} Cr{ξi > r} = Cr{ lim_{i→∞} ξi > r }.    (3.23)

(b) Let {ξi} be a decreasing sequence of fuzzy variables such that lim_{i→∞} ξi is a fuzzy variable. If Cr{ lim_{i→∞} ξi < r } ≤ 0.5 or lim_{i→∞} Cr{ξi < r} < 0.5, then

lim_{i→∞} Cr{ξi < r} = Cr{ lim_{i→∞} ξi < r }.    (3.24)

If Cr{ lim_{i→∞} ξi ≥ r } ≥ 0.5 or lim_{i→∞} Cr{ξi ≥ r} > 0.5, then

lim_{i→∞} Cr{ξi ≥ r} = Cr{ lim_{i→∞} ξi ≥ r }.    (3.25)

Proof: (a) Since ξi ↑ ξ, we have {ξi ≤ r} ↓ {ξ ≤ r} and {ξi > r} ↑ {ξ > r}. It follows from the credibility semicontinuity law that (3.22) and (3.23) hold.

(b) Since ξi ↓ ξ, we have {ξi < r} ↑ {ξ < r} and {ξi ≥ r} ↓ {ξ ≥ r}. It follows from the credibility semicontinuity law that (3.24) and (3.25) hold.

Theorem 3.17 Let {ξi} be a sequence of fuzzy variables such that

lim inf_{i→∞} ξi  and  lim sup_{i→∞} ξi

are fuzzy variables.
(a) If Cr{ lim inf_{i→∞} ξi ≤ r } ≥ 0.5 or lim_{k→∞} Cr{ inf_{i≥k} ξi ≤ r } > 0.5, then

Cr{ lim inf_{i→∞} ξi ≤ r } ≥ lim sup_{i→∞} Cr{ξi ≤ r}.    (3.26)


(b) If Cr{ lim inf_{i→∞} ξi > r } ≤ 0.5 or lim_{k→∞} Cr{ inf_{i≥k} ξi > r } < 0.5, then

Cr{ lim inf_{i→∞} ξi > r } ≤ lim inf_{i→∞} Cr{ξi > r}.    (3.27)

(c) If Cr{ lim sup_{i→∞} ξi < r } ≤ 0.5 or lim_{k→∞} Cr{ sup_{i≥k} ξi < r } < 0.5, then

Cr{ lim sup_{i→∞} ξi < r } ≤ lim inf_{i→∞} Cr{ξi < r}.    (3.28)

(d) If Cr{ lim sup_{i→∞} ξi ≥ r } ≥ 0.5 or lim_{k→∞} Cr{ sup_{i≥k} ξi ≥ r } > 0.5, then

Cr{ lim sup_{i→∞} ξi ≥ r } ≥ lim sup_{i→∞} Cr{ξi ≥ r}.    (3.29)

Proof: It is clear that inf_{i≥k} ξi is an increasing sequence and inf_{i≥k} ξi ≤ ξk for each k. It follows from Theorem 3.16 that

Cr{ lim inf_{i→∞} ξi ≤ r } = Cr{ lim_{k→∞} inf_{i≥k} ξi ≤ r } = lim_{k→∞} Cr{ inf_{i≥k} ξi ≤ r } ≥ lim sup_{k→∞} Cr{ξk ≤ r}.

The inequality (3.26) is proved. Similarly,

Cr{ lim inf_{i→∞} ξi > r } = Cr{ lim_{k→∞} inf_{i≥k} ξi > r } = lim_{k→∞} Cr{ inf_{i≥k} ξi > r } ≤ lim inf_{k→∞} Cr{ξk > r}.

The inequality (3.27) is proved. Furthermore, it is clear that sup_{i≥k} ξi is a decreasing sequence and sup_{i≥k} ξi ≥ ξk for each k. It follows from Theorem 3.16 that

Cr{ lim sup_{i→∞} ξi < r } = Cr{ lim_{k→∞} sup_{i≥k} ξi < r } = lim_{k→∞} Cr{ sup_{i≥k} ξi < r } ≤ lim inf_{k→∞} Cr{ξk < r}.

The inequality (3.28) is proved. Similarly,

Cr{ lim sup_{i→∞} ξi ≥ r } = Cr{ lim_{k→∞} sup_{i≥k} ξi ≥ r } = lim_{k→∞} Cr{ sup_{i≥k} ξi ≥ r } ≥ lim sup_{k→∞} Cr{ξk ≥ r}.


The inequality (3.29) is proved.

Theorem 3.18 Let {ξi} be a sequence of fuzzy variables such that ξi → ξ uniformly. Then for almost all r ∈ ℜ, we have

lim_{i→∞} Cr{ξi ≥ r} = Cr{ lim_{i→∞} ξi ≥ r }.    (3.30)

The equation (3.30) remains true if "≥" is replaced with "≤", ">" or "<".

Proof: Note that Cr{ξ ≥ r} is a decreasing function of r and continuous almost everywhere. The theorem is proved if we can verify that (3.30) holds for any continuity point r0 of Cr{ξ ≥ r}. For any given ε > 0, there exists δ > 0 such that

|Cr{ξ ≥ r0 ± δ} − Cr{ξ ≥ r0}| ≤ ε.    (3.31)

Since ξi → ξ uniformly, there exists m such that

|ξi − ξ| < δ,   ∀i > m

which implies that

{ξ ≥ r0 + δ} ⊂ {ξi ≥ r0} ⊂ {ξ ≥ r0 − δ}.

By using (3.31), we get

Cr{ξ ≥ r0} − ε ≤ Cr{ξi ≥ r0} ≤ Cr{ξ ≥ r0} + ε.    (3.32)

Letting ε → 0, we obtain (3.30). The theorem is proved.

Example 3.10: If the condition that ξi → ξ uniformly is dropped, Theorem 3.18 does not hold. For example, let Θ = {θ1, θ2, · · ·} and Pos{θj} = 1 for all j. We define a sequence of fuzzy variables as follows:

ξi(θj) = 0 if j < i,  and ξi(θj) = 1 otherwise

for i = 1, 2, · · · Then ξi ↓ 0. For any 0 < r < 1, we have

lim_{i→∞} Cr{ξi ≥ r} = 0.5 ≠ 0 = Cr{ lim_{i→∞} ξi ≥ r }.

Trapezoidal Fuzzy Variable and Triangular Fuzzy Variable

Example 3.11: By a trapezoidal fuzzy variable we mean the fuzzy variable fully determined by a quadruplet (r1, r2, r3, r4) of crisp numbers with r1 < r2 < r3 < r4, whose membership function is given by

  μ(x) = (x − r1)/(r2 − r1),  if r1 ≤ x ≤ r2,
         1,                   if r2 ≤ x ≤ r3,
         (x − r4)/(r3 − r4),  if r3 ≤ x ≤ r4,
         0,                   otherwise.

By a triangular fuzzy variable we mean the fuzzy variable fully determined by the triplet (r1, r2, r3) of crisp numbers with r1 < r2 < r3, whose membership function is given by

  μ(x) = (x − r1)/(r2 − r1),  if r1 ≤ x ≤ r2,
         (x − r3)/(r2 − r3),  if r2 ≤ x ≤ r3,
         0,                   otherwise.

Let us consider a trapezoidal fuzzy variable ξ = (r1, r2, r3, r4). From the definitions of possibility, necessity and credibility, it is easy to obtain

  Pos{ξ ≤ 0} = 1,                if r2 ≤ 0,
               r1/(r1 − r2),     if r1 ≤ 0 ≤ r2,
               0,                otherwise,                                        (3.33)

  Nec{ξ ≤ 0} = 1,                if r4 ≤ 0,
               r3/(r3 − r4),     if r3 ≤ 0 ≤ r4,
               0,                otherwise,                                        (3.34)

  Cr{ξ ≤ 0} = 1,                          if r4 ≤ 0,
              (2r3 − r4)/(2(r3 − r4)),    if r3 ≤ 0 ≤ r4,
              1/2,                        if r2 ≤ 0 ≤ r3,
              r1/(2(r1 − r2)),            if r1 ≤ 0 ≤ r2,
              0,                          otherwise.                               (3.35)

Theorem 3.19 (Lu [87]) Let ξ = (r1, r2, r3, r4) be a trapezoidal fuzzy variable. Then for any given confidence level α with 0 < α ≤ 1, we have
(a) when α ≤ 0.5, Cr{ξ ≤ 0} ≥ α if and only if (1 − 2α)r1 + 2αr2 ≤ 0;
(b) when α > 0.5, Cr{ξ ≤ 0} ≥ α if and only if (2 − 2α)r3 + (2α − 1)r4 ≤ 0.

Proof: (a) When α ≤ 0.5, it follows from (3.35) that Cr{ξ ≤ 0} ≥ α if and only if r1/(2(r1 − r2)) ≥ α, if and only if (1 − 2α)r1 + 2αr2 ≤ 0.

(b) When α > 0.5, it follows from (3.35) that Cr{ξ ≤ 0} ≥ α if and only if (2r3 − r4)/(2(r3 − r4)) ≥ α, if and only if (2 − 2α)r3 + (2α − 1)r4 ≤ 0.

3.3 Credibility Distribution

Definition 3.17 (Liu [75]) The credibility distribution Φ : [−∞,+∞] → [0, 1] of a fuzzy variable ξ is defined by

  Φ(x) = Cr{θ ∈ Θ | ξ(θ) ≤ x}.                                                    (3.36)

That is, Φ(x) is the credibility that the fuzzy variable ξ takes a value less than or equal to x. Generally speaking, the credibility distribution Φ is neither left-continuous nor right-continuous. A necessary and sufficient condition for a credibility distribution is given by the following theorem.

Theorem 3.20 The credibility distribution Φ of a fuzzy variable is a nondecreasing function on [−∞,+∞] with

  Φ(−∞) = 0,  Φ(+∞) = 1,
  lim_{x→−∞} Φ(x) ≤ 0.5 ≤ lim_{x→+∞} Φ(x),
  lim_{y↓x} Φ(y) = Φ(x)  if  lim_{y↓x} Φ(y) > 0.5 or Φ(x) ≥ 0.5.                  (3.37)

Conversely, if Φ : [−∞,+∞] → [0, 1] is a nondecreasing function satisfying (3.37), then Φ is the credibility distribution of the fuzzy variable defined by the membership function

  μ(x) = 2Φ(x),       if Φ(x) < 0.5,
         1,           if lim_{y↑x} Φ(y) < 0.5 ≤ Φ(x),
         2 − 2Φ(x),   if 0.5 ≤ lim_{y↑x} Φ(y).                                    (3.38)

Proof: It is obvious that Φ is a nondecreasing function, and

  Φ(−∞) = Cr{∅} = 0,  Φ(+∞) = Cr{Θ} = 1.

It is also clear that lim_{x→+∞} Pos{ξ ≤ x} = 1 and lim_{x→−∞} Nec{ξ ≤ x} = 0. Thus we have

  lim_{x→−∞} Φ(x) = lim_{x→−∞} (1/2)(Pos{ξ ≤ x} + Nec{ξ ≤ x})
                  = (1/2)( lim_{x→−∞} Pos{ξ ≤ x} + lim_{x→−∞} Nec{ξ ≤ x} ) ≤ (1/2)(1 + 0) = 0.5

and

  lim_{x→+∞} Φ(x) = lim_{x→+∞} (1/2)(Pos{ξ ≤ x} + Nec{ξ ≤ x})
                  = (1/2)( lim_{x→+∞} Pos{ξ ≤ x} + lim_{x→+∞} Nec{ξ ≤ x} ) ≥ (1/2)(1 + 0) = 0.5.

In addition, assume that x is a point at which lim_{y↓x} Φ(y) > 0.5. That is,

  lim_{y↓x} Cr{ξ ≤ y} > 0.5.

Since {ξ ≤ y} ↓ {ξ ≤ x} as y ↓ x, it follows from the credibility semicontinuity law that

  Φ(y) = Cr{ξ ≤ y} ↓ Cr{ξ ≤ x} = Φ(x)

as y ↓ x. When x is a point at which Φ(x) ≥ 0.5, if lim_{y↓x} Φ(y) ≠ Φ(x), then we have

  lim_{y↓x} Φ(y) > Φ(x) ≥ 0.5.

For this case, we have proved that lim_{y↓x} Φ(y) = Φ(x). Thus (3.37) is proved.

Conversely, if Φ : [−∞,+∞] → [0, 1] is a nondecreasing function satisfying (3.37), then μ defined by (3.38) is a function taking values in [0, 1] and sup μ(x) = 1. It follows from Theorem 3.13 that there is a fuzzy variable ξ whose membership function is just μ. Let us verify that Φ is the credibility distribution of ξ, i.e., Cr{ξ ≤ x} = Φ(x) for each x. The argument breaks down into two cases. (i) If Φ(x) < 0.5, then we have sup_{y>x} μ(y) = 1, and μ(y) = 2Φ(y) for each y with y ≤ x. Thus

  Cr{ξ ≤ x} = (1/2)(Pos{ξ ≤ x} + 1 − Pos{ξ > x})
            = (1/2)( sup_{y≤x} μ(y) + 1 − sup_{y>x} μ(y) )
            = sup_{y≤x} Φ(y) = Φ(x).

(ii) If Φ(x) ≥ 0.5, then we have sup_{y≤x} μ(y) = 1 and Φ(y) ≥ Φ(x) ≥ 0.5 for each y with y > x. Thus μ(y) = 2 − 2Φ(y) and

  Cr{ξ ≤ x} = (1/2)(Pos{ξ ≤ x} + 1 − Pos{ξ > x})
            = (1/2)( sup_{y≤x} μ(y) + 1 − sup_{y>x} μ(y) )
            = (1/2)( 1 + 1 − sup_{y>x} (2 − 2Φ(y)) )
            = inf_{y>x} Φ(y) = lim_{y↓x} Φ(y) = Φ(x).


The theorem is proved.

Example 3.12: Let a and b be two numbers with 0 ≤ a ≤ 0.5 ≤ b ≤ 1. We define a fuzzy variable by the following membership function,

  μ(x) = 2a,      if x < 0,
         1,       if x = 0,
         2 − 2b,  if x > 0.

Then its credibility distribution is

  Φ(x) = 0,  if x = −∞,
         a,  if −∞ < x < 0,
         b,  if 0 ≤ x < +∞,
         1,  if x = +∞.

Thus we have

  lim_{x→−∞} Φ(x) = a,  lim_{x→+∞} Φ(x) = b.

Example 3.13: Let ξ be a fuzzy variable whose membership function is μ(x) ≡ 1. Then its credibility distribution is

  Φ(x) = 0,    if x = −∞,
         0.5,  if −∞ < x < +∞,
         1,    if x = +∞.

It is the unique credibility distribution taking a constant value on (−∞,+∞).

Example 3.14: The conditions lim_{y↓x} Φ(y) > 0.5 and Φ(x) ≥ 0.5 in Theorem 3.20 are not equivalent to each other. This fact may be shown by the following increasing function,

  Φ(x) = 0 if x ≤ 0, and 1 if x > 0.

Note that it is not a credibility distribution because it is not right-continuous at x = 0.

Theorem 3.21 A fuzzy variable with credibility distribution Φ is
(a) nonnegative if and only if Φ(x) = 0 for all x < 0;
(b) positive if and only if Φ(x) = 0 for all x ≤ 0.

Proof: It follows immediately from the definition.


Theorem 3.22 Let ξ be a fuzzy variable. Then we have
(a) if ξ is simple, then its credibility distribution is a simple function;
(b) if ξ is discrete, then its credibility distribution is a step function;
(c) if ξ is continuous, then its credibility distribution is a continuous function except at −∞ and +∞.

Proof: The parts (a) and (b) follow immediately from the definition. We next prove part (c). Since ξ is continuous, Pos{ξ = x} is a continuous function of x. Thus its credibility distribution

  Φ(x) = (1/2)( sup_{y≤x} Pos{ξ = y} + 1 − sup_{y>x} Pos{ξ = y} )

is a continuous function of x.

Example 3.15: However, the inverse of Theorem 3.22 is not true. For example, let ξ be a fuzzy variable whose membership function is

  μ(x) = x if 0 ≤ x ≤ 1, and 1 otherwise.

Then its credibility distribution is

  Φ(x) = 0,    if x = −∞,
         0.5,  if −∞ < x < +∞,
         1,    if x = +∞.

It is clear that Φ(x) is simple and continuous except at x = −∞ and x = +∞. But the fuzzy variable ξ is neither simple nor continuous.

Definition 3.18 A continuous fuzzy variable is said to be
(a) singular if its credibility distribution is a singular function;
(b) absolutely continuous if its credibility distribution is absolutely continuous.

Theorem 3.23 The credibility distribution Φ of a fuzzy variable has a decomposition

  Φ(x) = r1 Φ1(x) + r2 Φ2(x) + r3 Φ3(x),  x ∈ ℝ                                   (3.39)

where Φ1, Φ2, Φ3 are credibility distributions of discrete, singular and absolutely continuous fuzzy variables, respectively, and r1, r2, r3 are nonnegative numbers such that

  r1 + r2 + r3 = 2 lim_{x→+∞} Φ(x).                                               (3.40)

Furthermore, the decomposition (3.39) is unique in the sense of

  lim_{x→+∞} Φ1(x) = 0.5,  lim_{x→+∞} Φ2(x) = 0.5,  lim_{x→+∞} Φ3(x) = 0.5.       (3.41)


Proof: Let {xi} be the countable set of all discontinuity points of Φ, and

  f1(x) = Σ_{xi≤x} ( lim_{y↓xi} Φ(y) ∧ Φ(x) − lim_{y↑xi} Φ(y) ),  x ∈ ℝ.

Then f1(x) is an increasing step function of x, and

  lim_{z↓x} (Φ(z) − Φ(x)) = lim_{z↓x} (f1(z) − f1(x)),
  lim_{z↑x} (Φ(z) − Φ(x)) = lim_{z↑x} (f1(z) − f1(x)).

Now we set f2(x) = Φ(x) − f1(x), x ∈ ℝ. Then we have

  lim_{z↓x} f2(z) − f2(x) = lim_{z↓x} (Φ(z) − Φ(x)) − lim_{z↓x} (f1(z) − f1(x)) = 0,
  lim_{z↑x} f2(z) − f2(x) = lim_{z↑x} (Φ(z) − Φ(x)) − lim_{z↑x} (f1(z) − f1(x)) = 0.

That is, the function f2(x) is continuous. Next we prove that f2(x) is increasing. Let x′ < x be given. Then we may verify that

  Σ_{x′<xi≤x} ( lim_{y↓xi} Φ(y) ∧ Φ(x) − lim_{y↑xi} Φ(y) ) ≤ Φ(x) − Φ(x′).

Thus we have

  f2(x) − f2(x′) = Φ(x) − Φ(x′) − Σ_{x′<xi≤x} ( lim_{y↓xi} Φ(y) ∧ Φ(x) − lim_{y↑xi} Φ(y) ) ≥ 0

which implies that f2(x) is an increasing function of x. It has been proved that the increasing continuous function f2 has a unique decomposition f2 = g2 + g3, where g2 is an increasing singular function, and g3 is an increasing absolutely continuous function. This fact implies that

  Φ = f1 + g2 + g3.

We denote

  lim_{x→+∞} f1(x) = r1/2,  lim_{x→+∞} g2(x) = r2/2,  lim_{x→+∞} g3(x) = r3/2.

It is clear that r1, r2, r3 are all nonnegative numbers satisfying r1 + r2 + r3 = 2 lim_{x→+∞} Φ(x). For nonzero r1, r2, r3, we set

  Φ1(x) = f1(x)/r1,  Φ2(x) = g2(x)/r2,  Φ3(x) = g3(x)/r3,  x ∈ ℝ.

Then we have Φ1(x) ↑ 0.5, Φ2(x) ↑ 0.5 and Φ3(x) ↑ 0.5 as x → ∞. Thus Φ1, Φ2 and Φ3 are credibility distributions of discrete, singular and absolutely continuous fuzzy variables, respectively, and (3.39) is met. Furthermore, in the sense of (3.41), the step function is unique. Thus the decomposition (3.39) is unique, too.

Theorem 3.24 Let ξ be a fuzzy variable. Then Cr{ξ ≥ x} is a decreasing function of x. Furthermore, if

  Cr{ξ ≥ x} ≥ 0.5  or  lim_{y↑x} Cr{ξ ≥ y} > 0.5,

then we have

  lim_{y↑x} Cr{ξ ≥ y} = Cr{ξ ≥ x}.                                                (3.42)

Proof: It is obvious that Cr{ξ ≥ x} is a decreasing function of x. Since {ξ ≥ y} ↓ {ξ ≥ x} as y ↑ x, it follows from the credibility semicontinuity law and Cr{ξ ≥ x} ≥ 0.5 that Cr{ξ ≥ y} ↓ Cr{ξ ≥ x} as y ↑ x. Similarly, it follows from the credibility semicontinuity law and lim_{y↑x} Cr{ξ ≥ y} > 0.5 that Cr{ξ ≥ y} ↓ Cr{ξ ≥ x} as y ↑ x. The theorem is proved.

Definition 3.19 (Liu [75]) The credibility density function φ : ℝ → [0,+∞) of a fuzzy variable ξ is a function such that

  Φ(x) = ∫_{−∞}^{x} φ(y) dy                                                       (3.43)

holds for all x ∈ [−∞,+∞], where Φ is the credibility distribution of the fuzzy variable ξ.

Example 3.16: The credibility distribution of a triangular fuzzy variable (r1, r2, r3) is

  Φ(x) = 0,                          if x ≤ r1,
         (x − r1)/(2(r2 − r1)),      if r1 ≤ x ≤ r2,
         (x + r3 − 2r2)/(2(r3 − r2)), if r2 ≤ x ≤ r3,
         1,                          if r3 ≤ x,

and its credibility density function is

  φ(x) = 1/(2(r2 − r1)),  if r1 ≤ x ≤ r2,
         1/(2(r3 − r2)),  if r2 ≤ x ≤ r3,
         0,               otherwise.


Example 3.17: The credibility distribution of a trapezoidal fuzzy variable (r1, r2, r3, r4) is

  Φ(x) = 0,                          if x ≤ r1,
         (x − r1)/(2(r2 − r1)),      if r1 ≤ x ≤ r2,
         1/2,                        if r2 ≤ x ≤ r3,
         (x + r4 − 2r3)/(2(r4 − r3)), if r3 ≤ x ≤ r4,
         1,                          if r4 ≤ x,

and its credibility density function is

  φ(x) = 1/(2(r2 − r1)),  if r1 ≤ x ≤ r2,
         1/(2(r4 − r3)),  if r3 ≤ x ≤ r4,
         0,               otherwise.

Example 3.18: Let ξ be a fuzzy variable whose membership function is defined by

  μ(x) = 1 if x = 0, and 0 otherwise.

In fact, the fuzzy variable ξ is the constant 0. It is clear that the credibility distribution of ξ is

  Φ(x) = 0 if x < 0, and 1 if x ≥ 0.

It is also clear that the credibility density function does not exist.

Example 3.19: The credibility density function does not necessarily exist even if the membership function is continuous and unimodal with a finite support. Recall the Cantor function f defined by (1.21) on Page 12. Now we set

  μ(x) = f(x),      if x ∈ [0, 1],
         f(2 − x),  if x ∈ (1, 2],
         0,         otherwise.                                                    (3.44)

Then μ is a continuous and unimodal function with μ(1) = 1. Hence μ is a membership function. However, its credibility distribution is a singular function. Thus the credibility density function does not exist.


Theorem 3.25 Let ξ be a fuzzy variable whose credibility density function φ exists. Then we have

  ∫_{−∞}^{+∞} φ(y) dy = 1,                                                        (3.45)

  Cr{ξ ≤ x} = ∫_{−∞}^{x} φ(y) dy,                                                 (3.46)

  Cr{ξ ≥ x} = ∫_{x}^{+∞} φ(y) dy.                                                 (3.47)

Proof: The equations (3.45) and (3.46) follow immediately from the definition. In addition, by the self-duality of credibility measure, we have

  Cr{ξ ≥ x} = 1 − Cr{ξ < x} = ∫_{−∞}^{+∞} φ(y) dy − ∫_{−∞}^{x} φ(y) dy = ∫_{x}^{+∞} φ(y) dy.

The theorem is proved.

Example 3.20: Different from the random case, generally speaking,

  Cr{a ≤ ξ ≤ b} ≠ ∫_{a}^{b} φ(y) dy.

Consider the trapezoidal fuzzy variable ξ = (1, 2, 3, 4). Then Cr{2 ≤ ξ ≤ 3} = 0.5. However, it is obvious that φ(x) = 0 when 2 ≤ x ≤ 3 and

  ∫_{2}^{3} φ(y) dy = 0 ≠ 0.5 = Cr{2 ≤ ξ ≤ 3}.

Definition 3.20 (Liu [75]) Let (ξ1, ξ2, ⋯, ξn) be a fuzzy vector. Then the joint credibility distribution Φ : [−∞,+∞]^n → [0, 1] is defined by

  Φ(x1, x2, ⋯, xn) = Cr{θ ∈ Θ | ξ1(θ) ≤ x1, ξ2(θ) ≤ x2, ⋯, ξn(θ) ≤ xn}.

Definition 3.21 (Liu [75]) The joint credibility density function φ : ℝ^n → [0,+∞) of a fuzzy vector (ξ1, ξ2, ⋯, ξn) is a function such that

  Φ(x1, x2, ⋯, xn) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} ⋯ ∫_{−∞}^{xn} φ(y1, y2, ⋯, yn) dy1 dy2 ⋯ dyn   (3.48)

holds for all (x1, x2, ⋯, xn) ∈ [−∞,+∞]^n, where Φ is the joint credibility distribution of the fuzzy vector (ξ1, ξ2, ⋯, ξn).


3.4 Independent and Identical Distribution

Definition 3.22 The fuzzy variables ξ1, ξ2, ⋯, ξm are said to be independent if and only if

  Pos{ξi ∈ Bi, i = 1, 2, ⋯, m} = min_{1≤i≤m} Pos{ξi ∈ Bi}                         (3.49)

for any sets B1, B2, ⋯, Bm of ℝ.

Remark 3.2: In probability theory, two events A and B are said to be independent if Pr{A ∩ B} = Pr{A}Pr{B}. However, in credibility theory, the identity Pos{A ∩ B} = Pos{A} ∧ Pos{B} tells us nothing. For example, Pos{A ∩ A} ≡ Pos{A} ∧ Pos{A}, but we do not think that A is independent of itself.

Example 3.21: Let Θ = {(θ′1, θ′′1), (θ′1, θ′′2), (θ′2, θ′′1), (θ′2, θ′′2)}, Pos{∅} = 0, Pos{(θ′1, θ′′1)} = 1, Pos{(θ′1, θ′′2)} = 0.8, Pos{(θ′2, θ′′1)} = 0.5, Pos{(θ′2, θ′′2)} = 0.5 and Pos{Θ} = 1. Two fuzzy variables are defined as

  ξ1(θ′, θ′′) = 0 if θ′ = θ′1, and 1 if θ′ = θ′2;
  ξ2(θ′, θ′′) = 1 if θ′′ = θ′′1, and 0 if θ′′ = θ′′2.

Then we have

  Pos{ξ1 = 1, ξ2 = 1} = 0.5 = Pos{ξ1 = 1} ∧ Pos{ξ2 = 1},
  Pos{ξ1 = 1, ξ2 = 0} = 0.5 = Pos{ξ1 = 1} ∧ Pos{ξ2 = 0},
  Pos{ξ1 = 0, ξ2 = 1} = 1.0 = Pos{ξ1 = 0} ∧ Pos{ξ2 = 1},
  Pos{ξ1 = 0, ξ2 = 0} = 0.8 = Pos{ξ1 = 0} ∧ Pos{ξ2 = 0}.

Thus ξ1 and ξ2 are independent fuzzy variables.

Example 3.22: Consider Θ = {θ1, θ2}, Pos{θ1} = 1, Pos{θ2} = 0.8 and the fuzzy variables defined by

  ξ1(θ) = 0 if θ = θ1, and 1 if θ = θ2;
  ξ2(θ) = 1 if θ = θ1, and 0 if θ = θ2.

Then we have

  Pos{ξ1 = 1, ξ2 = 1} = Pos{∅} = 0 ≠ 0.8 ∧ 1 = Pos{ξ1 = 1} ∧ Pos{ξ2 = 1}.

Thus ξ1 and ξ2 are not independent fuzzy variables.

Definition 3.23 The fuzzy variables ξi, i ∈ I, are said to be independent if and only if for every finite collection {i1, i2, ⋯, ik} of distinct indices in I, we have

  Pos{ξij ∈ Bij, j = 1, 2, ⋯, k} = min_{1≤j≤k} Pos{ξij ∈ Bij}                     (3.50)

for any sets Bi1, Bi2, ⋯, Bik of ℝ.


Theorem 3.26 Let ξi be independent fuzzy variables, and fi : ℝ → ℝ functions, i = 1, 2, ⋯, m. Then f1(ξ1), f2(ξ2), ⋯, fm(ξm) are independent fuzzy variables.

Proof: For any sets B1, B2, ⋯, Bm of ℝ, we have

  Pos{f1(ξ1) ∈ B1, f2(ξ2) ∈ B2, ⋯, fm(ξm) ∈ Bm}
  = Pos{ξ1 ∈ f1^{−1}(B1), ξ2 ∈ f2^{−1}(B2), ⋯, ξm ∈ fm^{−1}(Bm)}
  = Pos{ξ1 ∈ f1^{−1}(B1)} ∧ Pos{ξ2 ∈ f2^{−1}(B2)} ∧ ⋯ ∧ Pos{ξm ∈ fm^{−1}(Bm)}
  = Pos{f1(ξ1) ∈ B1} ∧ Pos{f2(ξ2) ∈ B2} ∧ ⋯ ∧ Pos{fm(ξm) ∈ Bm}.

Thus f1(ξ1), f2(ξ2), ⋯, fm(ξm) are independent fuzzy variables.

Theorem 3.27 (Extension Principle of Zadeh) Let ξ1, ξ2, ⋯, ξn be independent fuzzy variables with membership functions μ1, μ2, ⋯, μn, respectively, and f : ℝ^n → ℝ a function. Then the membership function μ of ξ = f(ξ1, ξ2, ⋯, ξn) is derived from the membership functions μ1, μ2, ⋯, μn by

  μ(x) = sup_{x1,x2,⋯,xn ∈ ℝ} { min_{1≤i≤n} μi(xi) | x = f(x1, x2, ⋯, xn) }.       (3.51)

Proof: It follows from Definitions 3.11 and 3.16 that the membership function of ξ = f(ξ1, ξ2, ⋯, ξn) is

  μ(x) = Pos{(θ1, θ2, ⋯, θn) ∈ Θ | x = f(ξ1(θ1), ξ2(θ2), ⋯, ξn(θn))}
       = sup_{θi ∈ Θi, i=1,2,⋯,n} { min_{1≤i≤n} Posi{θi} | x = f(ξ1(θ1), ξ2(θ2), ⋯, ξn(θn)) }
       = sup_{x1,x2,⋯,xn ∈ ℝ} { min_{1≤i≤n} μi(xi) | x = f(x1, x2, ⋯, xn) }.

The theorem is proved.

Remark 3.3: The extension principle of Zadeh is only applicable to operations on independent fuzzy variables. In the past literature, the extension principle is used as a postulate. However, it is treated as a theorem in credibility theory.

Example 3.23: By using Theorem 3.27, we can obtain the sum of the trapezoidal fuzzy variables ξ = (a1, a2, a3, a4) and η = (b1, b2, b3, b4) as

  μ(z) = (z − (a1 + b1))/((a2 + b2) − (a1 + b1)),  if a1 + b1 ≤ z ≤ a2 + b2,
         1,                                        if a2 + b2 ≤ z ≤ a3 + b3,
         (z − (a4 + b4))/((a3 + b3) − (a4 + b4)),  if a3 + b3 ≤ z ≤ a4 + b4,
         0,                                        otherwise.

That is, the sum of two trapezoidal fuzzy variables is also a trapezoidal fuzzy variable, and ξ + η = (a1 + b1, a2 + b2, a3 + b3, a4 + b4). The product of a trapezoidal fuzzy variable ξ = (a1, a2, a3, a4) and a scalar number λ is

  λ · ξ = (λa1, λa2, λa3, λa4),  if λ ≥ 0,
          (λa4, λa3, λa2, λa1),  if λ < 0.

That is, the product of a trapezoidal fuzzy variable and a scalar number is also a trapezoidal fuzzy variable.

Theorem 3.28 Let ξ1, ξ2, ⋯, ξn be independent fuzzy variables, and f : ℝ^n → ℝ^m a function. Then the fuzzy event f(ξ1, ξ2, ⋯, ξn) ≤ 0 has possibility

  Pos{f(ξ1, ξ2, ⋯, ξn) ≤ 0} = sup_{x1,x2,⋯,xn} { min_{1≤i≤n} μi(xi) | f(x1, x2, ⋯, xn) ≤ 0 }.

Proof: Assume that the ξi are defined on the possibility spaces (Θi, P(Θi), Posi), i = 1, 2, ⋯, n, respectively. Then the fuzzy event f(ξ1, ξ2, ⋯, ξn) ≤ 0 is defined on the product possibility space (Θ, P(Θ), Pos), whose possibility is

  Pos{f(ξ1, ξ2, ⋯, ξn) ≤ 0}
  = Pos{(θ1, θ2, ⋯, θn) ∈ Θ | f(ξ1(θ1), ξ2(θ2), ⋯, ξn(θn)) ≤ 0}
  = sup_{θi ∈ Θi, 1≤i≤n} { min_{1≤i≤n} Pos{θi} | f(ξ1(θ1), ξ2(θ2), ⋯, ξn(θn)) ≤ 0 }
  = sup_{x1,x2,⋯,xn ∈ ℝ} { min_{1≤i≤n} μi(xi) | f(x1, x2, ⋯, xn) ≤ 0 }.

The theorem is proved.

Definition 3.24 The fuzzy variables ξ1, ξ2, ⋯, ξm are said to be identically distributed if and only if

  Pos{ξi ∈ B} = Pos{ξj ∈ B},  i, j = 1, 2, ⋯, m                                   (3.52)

for any set B of ℝ.

Theorem 3.29 The fuzzy variables ξ and η are identically distributed if and only if

  Nec{ξ ∈ B} = Nec{η ∈ B}                                                         (3.53)

for any set B of ℝ.

Proof: The fuzzy variables ξ and η are identically distributed if and only if, for any set B of ℝ, Pos{ξ ∈ B^c} = Pos{η ∈ B^c}, if and only if Nec{ξ ∈ B} = Nec{η ∈ B}.

Page 117: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

106 Chapter 3 - Credibility Theory

Theorem 3.30 The fuzzy variables ξ and η are identically distributed if and only if

  Cr{ξ ∈ B} = Cr{η ∈ B}                                                           (3.54)

for any set B of ℝ.

Proof: If the fuzzy variables ξ and η are identically distributed, then, for any set B of ℝ, we have Pos{ξ ∈ B} = Pos{η ∈ B} and Nec{ξ ∈ B} = Nec{η ∈ B}. Thus Cr{ξ ∈ B} = Cr{η ∈ B}.

Conversely, if Cr{ξ ∈ B} = Cr{η ∈ B} ≥ 0.5, then Pos{ξ ∈ B} = Pos{η ∈ B} ≡ 1. If Cr{ξ ∈ B} = Cr{η ∈ B} < 0.5, then Pos{ξ ∈ B} = 2Cr{ξ ∈ B} = 2Cr{η ∈ B} = Pos{η ∈ B}. Both cases imply that ξ and η are identically distributed fuzzy variables.

Theorem 3.31 The fuzzy variables ξ and η are identically distributed if and only if ξ and η have the same membership function.

Proof: Let μ and ν be the membership functions of ξ and η, respectively. If ξ and η are identically distributed fuzzy variables, then, for any x ∈ ℝ, we have μ(x) = Pos{ξ = x} = Pos{η = x} = ν(x). Thus ξ and η have the same membership function.

Conversely, if ξ and η have the same membership function, i.e., μ(x) ≡ ν(x), then, for any set B of ℝ, we have

  Pos{ξ ∈ B} = sup{μ(x) | x ∈ B} = sup{ν(x) | x ∈ B} = Pos{η ∈ B}.

Thus ξ and η are identically distributed fuzzy variables.

Theorem 3.32 If ξ and η are identically distributed fuzzy variables, then ξ and η have the same credibility distribution.

Proof: If ξ and η are identically distributed fuzzy variables, then, for any x ∈ ℝ, we have Cr{ξ ∈ (−∞, x]} = Cr{η ∈ (−∞, x]}. Thus ξ and η have the same credibility distribution.

Example 3.24: The inverse of Theorem 3.32 is not true. We consider the following two fuzzy variables,

  ξ = 0 with possibility 1.0, 1 with possibility 0.6, 2 with possibility 0.8;
  η = 0 with possibility 1.0, 1 with possibility 0.7, 2 with possibility 0.8.

It is easy to verify that ξ and η have the same credibility distribution,

  Φ(x) = 0,    if x < 0,
         0.6,  if 0 ≤ x < 2,
         1,    if 2 ≤ x.

Page 118: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 3.5 - Optimistic and Pessimistic Values 107

However, they are not identically distributed fuzzy variables.

Example 3.25: Let ξ be a fuzzy variable, and a a positive number. Then

  ξ* = ξ if |ξ| < a, and ξ* = 0 otherwise

is a bounded fuzzy variable, called ξ truncated at a. Let ξ1, ξ2, ⋯, ξn be independent and identically distributed (iid) fuzzy variables. Then for any given a > 0, the fuzzy variables ξ1*, ξ2*, ⋯, ξn* are iid.

Definition 3.25 The n-dimensional fuzzy vectors ξ1, ξ2, ⋯, ξm are said to be independent if and only if

  Pos{ξi ∈ Bi, i = 1, 2, ⋯, m} = min_{1≤i≤m} Pos{ξi ∈ Bi}                         (3.55)

for any sets B1, B2, ⋯, Bm of ℝ^n.

Definition 3.26 The n-dimensional fuzzy vectors ξ1, ξ2, ⋯, ξm are said to be identically distributed if and only if

  Pos{ξi ∈ B} = Pos{ξj ∈ B},  i, j = 1, 2, ⋯, m                                   (3.56)

for any set B of ℝ^n.

3.5 Optimistic and Pessimistic Values

In order to rank fuzzy variables, we may use two critical values: the optimistic value and the pessimistic value.

Definition 3.27 (Liu [75]) Let ξ be a fuzzy variable, and α ∈ (0, 1]. Then

  ξsup(α) = sup{ r | Cr{ξ ≥ r} ≥ α }                                              (3.57)

is called the α-optimistic value to ξ.

This means that the fuzzy variable ξ will reach upwards of the α-optimistic value ξsup(α) with credibility α. In other words, the α-optimistic value ξsup(α) is the supremum value that ξ achieves with credibility α.

Definition 3.28 (Liu [75]) Let ξ be a fuzzy variable, and α ∈ (0, 1]. Then

  ξinf(α) = inf{ r | Cr{ξ ≤ r} ≥ α }                                              (3.58)

is called the α-pessimistic value to ξ.

This means that the fuzzy variable ξ will be below the α-pessimistic value ξinf(α) with credibility α. In other words, the α-pessimistic value ξinf(α) is the infimum value that ξ achieves with credibility α.
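The next Python sketch (not part of the book) approximates the α-optimistic and α-pessimistic values of Definitions 3.27 and 3.28 for the trapezoidal fuzzy variable (1, 2, 3, 4) by scanning a grid of r values; the grid resolution and all names are arbitrary, illustrative choices.

    # Sketch: alpha-optimistic and alpha-pessimistic values via a grid search,
    # using Cr{xi <= r} = Phi(r) of Example 3.17 (Phi is continuous here).

    def Phi(r):                              # credibility distribution of (1, 2, 3, 4)
        if r <= 1: return 0.0
        if r <= 2: return (r - 1) / 2
        if r <= 3: return 0.5
        if r <= 4: return (r - 2) / 2
        return 1.0

    def cr_geq(r):                           # Cr{xi >= r} = 1 - Cr{xi < r}
        return 1.0 - Phi(r)

    def xi_sup(alpha, grid):                 # sup{r | Cr{xi >= r} >= alpha}
        return max(r for r in grid if cr_geq(r) >= alpha)

    def xi_inf(alpha, grid):                 # inf{r | Cr{xi <= r} >= alpha}
        return min(r for r in grid if Phi(r) >= alpha)

    if __name__ == "__main__":
        grid = [k / 1000 for k in range(0, 5001)]        # r in [0, 5]
        for alpha in (0.25, 0.5, 0.75):
            print(alpha, xi_inf(alpha, grid), xi_sup(alpha, grid))
            # 0.25 -> (1.5, 3.5), 0.5 -> (2.0, 3.0), 0.75 -> (3.5, 1.5),
            # consistent with parts (c) and (d) of Theorem 3.34 below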

Page 119: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

108 Chapter 3 - Credibility Theory

Theorem 3.33 Let ξ be a fuzzy variable. Assume that ξsup(α) is the α-optimistic value and ξinf(α) is the α-pessimistic value of ξ. If α > 0.5, then we have

  Cr{ξ ≤ ξinf(α)} ≥ α,  Cr{ξ ≥ ξsup(α)} ≥ α.                                      (3.59)

Proof: It follows from the definition of the α-pessimistic value that there exists a decreasing sequence {xi} such that Cr{ξ ≤ xi} ≥ α and xi ↓ ξinf(α) as i → ∞. Since {ξ ≤ xi} ↓ {ξ ≤ ξinf(α)} and lim_{i→∞} Cr{ξ ≤ xi} ≥ α > 0.5, it follows from the credibility semicontinuity law that

  Cr{ξ ≤ ξinf(α)} = lim_{i→∞} Cr{ξ ≤ xi} ≥ α.

Similarly, there exists an increasing sequence {xi} such that Cr{ξ ≥ xi} ≥ α and xi ↑ ξsup(α) as i → ∞. Since {ξ ≥ xi} ↓ {ξ ≥ ξsup(α)} and lim_{i→∞} Cr{ξ ≥ xi} ≥ α > 0.5, it follows from the credibility semicontinuity law that

  Cr{ξ ≥ ξsup(α)} = lim_{i→∞} Cr{ξ ≥ xi} ≥ α.

The theorem is proved.

Example 3.26: When α ≤ 0.5, it is possible that the inequalities

  Cr{ξ ≥ ξsup(α)} < α,  Cr{ξ ≤ ξinf(α)} < α

hold. Let us consider a fuzzy variable ξ whose membership function is

  μ(x) = 1 if x ∈ (−1, 1), and 0 otherwise.

It is clear that ξsup(0.5) = 1. However, Cr{ξ ≥ ξsup(0.5)} = 0 < 0.5. In addition, ξinf(0.5) = −1 and Cr{ξ ≤ ξinf(0.5)} = 0 < 0.5.

Theorem 3.34 Let ξinf(α) and ξsup(α) be the α-pessimistic and α-optimistic values of the fuzzy variable ξ, respectively. Then we have
(a) ξinf(α) is an increasing function of α;
(b) ξsup(α) is a decreasing function of α;
(c) if α > 0.5, then ξinf(α) ≥ ξsup(α);
(d) if α ≤ 0.5, then ξinf(α) ≤ ξsup(α).

Proof: The parts (a) and (b) follow immediately from the definition. Part (c): Write ξ(α) = (ξinf(α) + ξsup(α))/2. If ξinf(α) < ξsup(α), then we have

  1 ≥ Cr{ξ < ξ(α)} + Cr{ξ > ξ(α)} ≥ α + α > 1.

A contradiction proves ξinf(α) ≥ ξsup(α). Part (d): Assume that ξinf(α) > ξsup(α). It follows from the definition of ξinf(α) that Cr{ξ ≤ ξ(α)} < α. Similarly, it follows from the definition of ξsup(α) that Cr{ξ ≥ ξ(α)} < α. Thus

  1 ≤ Cr{ξ ≤ ξ(α)} + Cr{ξ ≥ ξ(α)} < α + α ≤ 1.

A contradiction proves ξinf(α) ≤ ξsup(α). The theorem is proved.


Theorem 3.35 Assume that ξ is a fuzzy variable. Then we have
(a) if λ ≥ 0, then (λξ)sup(α) = λξsup(α) and (λξ)inf(α) = λξinf(α);
(b) if λ < 0, then (λξ)sup(α) = λξinf(α) and (λξ)inf(α) = λξsup(α).

Proof: If λ = 0, then part (a) is obviously valid. When λ > 0, we have

  (λξ)sup(α) = sup{ r | Cr{λξ ≥ r} ≥ α } = λ sup{ r/λ | Cr{ξ ≥ r/λ} ≥ α } = λξsup(α).

A similar way may prove that (λξ)inf(α) = λξinf(α). In order to prove part (b), it suffices to verify that (−ξ)sup(α) = −ξinf(α) and (−ξ)inf(α) = −ξsup(α). In fact, for any α ∈ (0, 1], we have

  (−ξ)sup(α) = sup{ r | Cr{−ξ ≥ r} ≥ α } = −inf{ −r | Cr{ξ ≤ −r} ≥ α } = −ξinf(α).

Similarly, we may prove that (−ξ)inf(α) = −ξsup(α). The theorem is proved.

3.6 Expected Value Operator

The expected value operator of a random variable plays an extremely important role in probability theory. For fuzzy variables, there are many ways to define an expected value operator, for example, Dubois and Prade [23], Heilpern [34], Campos and Gonzalez [16], Gonzalez [31] and Yager [142][149]. The most general definition of the expected value operator of a fuzzy variable was given by Liu and Liu [77]. This definition is applicable not only to continuous fuzzy variables but also to discrete ones.

Definition 3.29 (Liu and Liu [77]) Let ξ be a fuzzy variable. Then the expected value of ξ is defined by

  E[ξ] = ∫_0^{+∞} Cr{ξ ≥ r} dr − ∫_{−∞}^{0} Cr{ξ ≤ r} dr                          (3.60)

provided that at least one of the two integrals is finite.

Example 3.27: Let ξ be a fuzzy variable with membership function

  μ(x) = 1 if x ∈ [a, b], and 0 otherwise.

The expected value is E[ξ] = (a + b)/2.


Example 3.28: The triangular fuzzy variable ξ = (r1, r2, r3) has the expected value

  E[ξ] = (1/4)(r1 + 2r2 + r3).

Example 3.29: The expected value of a trapezoidal fuzzy variable ξ = (r1, r2, r3, r4) is

  E[ξ] = (1/4)(r1 + r2 + r3 + r4).
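The following Python sketch (not from the book) evaluates (3.60) by numerical integration for a trapezoidal fuzzy variable and compares it with the closed form of Example 3.29; the integration bounds, step count and names are rough, illustrative choices.

    # Sketch: expected value (3.60) by midpoint integration of Cr{xi >= r} and Cr{xi <= r}.

    def Phi(r, r1, r2, r3, r4):              # credibility distribution, Example 3.17
        if r <= r1: return 0.0
        if r <= r2: return (r - r1) / (2 * (r2 - r1))
        if r <= r3: return 0.5
        if r <= r4: return (r + r4 - 2 * r3) / (2 * (r4 - r3))
        return 1.0

    def expected_value(r1, r2, r3, r4, lo=-10.0, hi=10.0, n=40000):
        h = (hi - lo) / n
        e = 0.0
        for k in range(n):
            r = lo + (k + 0.5) * h
            if r >= 0:
                e += (1.0 - Phi(r, r1, r2, r3, r4)) * h    # Cr{xi >= r}, Phi continuous
            else:
                e -= Phi(r, r1, r2, r3, r4) * h            # Cr{xi <= r}
        return e

    if __name__ == "__main__":
        print(expected_value(-2, 0, 1, 5), (-2 + 0 + 1 + 5) / 4)   # both approximately 1.0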

Example 3.30: The definition of the expected value operator is also applicable to the discrete case. Assume that ξ is a discrete fuzzy variable whose membership function is given by

  μ(x) = μ1, if x = a1,
         μ2, if x = a2,
         ⋯
         μm, if x = am.

Without loss of generality, we also assume that a1 ≤ a2 ≤ ⋯ ≤ am. Definition 3.29 implies that the expected value of ξ is

  E[ξ] = Σ_{i=1}^{m} wi ai                                                        (3.61)

where the weights wi, i = 1, 2, ⋯, m, are given by

  w1 = (1/2)( μ1 + max_{1≤j≤m} μj − max_{1<j≤m} μj ),

  wi = (1/2)( max_{1≤j≤i} μj − max_{1≤j<i} μj + max_{i≤j≤m} μj − max_{i<j≤m} μj ),  2 ≤ i ≤ m − 1,

  wm = (1/2)( max_{1≤j≤m} μj − max_{1≤j<m} μj + μm ).

It is easy to verify that all wi ≥ 0 and the sum of all weights is just 1. Dropping the condition a1 ≤ a2 ≤ ⋯ ≤ am, if a1, a2, ⋯, am are distinct, then the weights are given by

  wi = (1/2)( max_{1≤k≤m} {μk | ak ≤ ai} − max_{1≤k≤m} {μk | ak < ai}
            + max_{1≤k≤m} {μk | ak ≥ ai} − max_{1≤k≤m} {μk | ak > ai} )

for i = 1, 2, ⋯, m.
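A direct transcription of the weight formula above into Python (a sketch, not the book's code; the names are illustrative) gives the expected value of a discrete fuzzy variable with distinct values:

    # Sketch: expected value (3.61) of a discrete fuzzy variable via the weights of Example 3.30.

    def discrete_expected_value(a, mu):
        m = len(a)
        def mx(pred):                    # max of mu_k over indices whose value satisfies pred, 0 if none
            vals = [mu[k] for k in range(m) if pred(a[k])]
            return max(vals) if vals else 0.0
        e = 0.0
        for i in range(m):
            ai = a[i]
            w = 0.5 * (mx(lambda x: x <= ai) - mx(lambda x: x < ai)
                       + mx(lambda x: x >= ai) - mx(lambda x: x > ai))
            e += w * ai
        return e

    if __name__ == "__main__":
        # a fuzzy variable taking values 0, 1, 2 with possibilities 0.6, 1, 0.4
        print(discrete_expected_value([0, 1, 2], [0.6, 1.0, 0.4]))   # 0.9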


Example 3.31: Let ξ be a fuzzy variable with membership function

  μ(x) = 0, if x < 0;  x, if 0 ≤ x ≤ 1;  1, otherwise.

Then its expected value is +∞. If ξ is a fuzzy variable with membership function

  μ(x) = 1, if x < 0;  1 − x, if 0 ≤ x ≤ 1;  0, otherwise,

then its expected value is −∞.

Example 3.32: The expected value may not exist for some fuzzy variables. For example, the fuzzy variable ξ defined by the membership function

  μ(x) = 1/(1 + x), if x ≥ 0;  1/(1 − x), if x < 0

does not have an expected value because both of the integrals

  ∫_0^{+∞} Cr{ξ ≥ r} dr  and  ∫_{−∞}^{0} Cr{ξ ≤ r} dr

are infinite.

Theorem 3.36 Let ξ be a nonnegative fuzzy variable. Then

  Σ_{i=1}^{∞} Cr{ξ ≥ i} ≤ E[ξ] ≤ 1 + Σ_{i=1}^{∞} Cr{ξ ≥ i}.                        (3.62)

Proof: Since Cr{ξ ≥ r} is a decreasing function of r, we have

  E[ξ] = Σ_{i=1}^{∞} ∫_{i−1}^{i} Cr{ξ ≥ r} dr ≥ Σ_{i=1}^{∞} ∫_{i−1}^{i} Cr{ξ ≥ i} dr = Σ_{i=1}^{∞} Cr{ξ ≥ i},

  E[ξ] = Σ_{i=1}^{∞} ∫_{i−1}^{i} Cr{ξ ≥ r} dr ≤ Σ_{i=1}^{∞} ∫_{i−1}^{i} Cr{ξ ≥ i−1} dr = 1 + Σ_{i=1}^{∞} Cr{ξ ≥ i}.

The theorem is proved.

Theorem 3.37 Let ξ be a fuzzy variable, and t a positive number. Then E[|ξ|^t] < ∞ if and only if

  Σ_{i=1}^{∞} Cr{|ξ| ≥ i^{1/t}} < ∞.                                              (3.63)


Proof: The theorem follows immediately from Cr{|ξ|^t ≥ i} = Cr{|ξ| ≥ i^{1/t}} and Theorem 3.36.

Theorem 3.38 Let ξ be a fuzzy variable, and t a positive number. If E[|ξ|^t] < ∞, then

  lim_{x→∞} x^t Cr{|ξ| ≥ x} = 0.                                                  (3.64)

Conversely, if (3.64) holds for some positive number t, then E[|ξ|^s] < ∞ for any 0 ≤ s < t.

Proof: It follows from the definition of the expected value operator that

  E[|ξ|^t] = ∫_0^{∞} Cr{|ξ|^t ≥ r} dr < ∞.

Thus we have

  lim_{x→∞} ∫_{x^t/2}^{∞} Cr{|ξ|^t ≥ r} dr = 0.

The equation (3.64) is proved by the following relation,

  ∫_{x^t/2}^{∞} Cr{|ξ|^t ≥ r} dr ≥ ∫_{x^t/2}^{x^t} Cr{|ξ|^t ≥ r} dr ≥ (1/2) x^t Cr{|ξ| ≥ x}.

Conversely, if (3.64) holds, then there exists a number a such that

  x^t Cr{|ξ| ≥ x} ≤ 1,  ∀x ≥ a.

Thus we have

  E[|ξ|^s] = ∫_0^{a} Cr{|ξ|^s ≥ r} dr + ∫_{a}^{+∞} Cr{|ξ|^s ≥ r} dr
           ≤ ∫_0^{a} Cr{|ξ|^s ≥ r} dr + ∫_0^{+∞} s r^{s−1} Cr{|ξ| ≥ r} dr
           ≤ ∫_0^{a} Cr{|ξ|^s ≥ r} dr + s ∫_0^{+∞} r^{s−t−1} dr
           < +∞
           (by ∫ r^p dr < ∞ over the tail for any p < −1).

The theorem is proved.

Example 3.33: The condition (3.64) does not ensure that E[|ξ|^t] < ∞. We consider the positive fuzzy variable

  ξ = 0 with possibility 1, and i^{1/t} with possibility 1/(i log i), i = 2, 3, ⋯.

It is clear that

  lim_{x→∞} x^t Cr{ξ ≥ x} = lim_{n→∞} (n^{1/t})^t sup_{i≥n} 1/(2 i log i) = lim_{n→∞} 1/(2 log n) = 0.

However, the expected value of ξ^t is

  E[ξ^t] = (1/2) Σ_{i=2}^{∞} 1/(i log i) = ∞.

Theorem 3.39 (Liu [75]) Let ξ be a fuzzy variable whose credibility density function φ exists. If the Lebesgue integral

  ∫_{−∞}^{+∞} x φ(x) dx

is finite, then we have

  E[ξ] = ∫_{−∞}^{+∞} x φ(x) dx.                                                   (3.65)

Proof: It follows from the definition of the expected value operator and the Fubini theorem that

  E[ξ] = ∫_0^{+∞} Cr{ξ ≥ r} dr − ∫_{−∞}^{0} Cr{ξ ≤ r} dr
       = ∫_0^{+∞} [ ∫_{r}^{+∞} φ(x) dx ] dr − ∫_{−∞}^{0} [ ∫_{−∞}^{r} φ(x) dx ] dr
       = ∫_0^{+∞} [ ∫_0^{x} φ(x) dr ] dx − ∫_{−∞}^{0} [ ∫_{x}^{0} φ(x) dr ] dx
       = ∫_0^{+∞} x φ(x) dx + ∫_{−∞}^{0} x φ(x) dx
       = ∫_{−∞}^{+∞} x φ(x) dx.

The theorem is proved.
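A quick numerical sketch (not part of the book) of Theorem 3.39 for a triangular fuzzy variable, checked against the closed form of Example 3.28; the grid size is an arbitrary choice.

    # Sketch: E[xi] = integral of x * phi(x) for a triangular fuzzy variable (r1, r2, r3).

    def phi(x, r1, r2, r3):                  # credibility density, Example 3.16
        if r1 <= x <= r2: return 1 / (2 * (r2 - r1))
        if r2 <= x <= r3: return 1 / (2 * (r3 - r2))
        return 0.0

    def mean_from_density(r1, r2, r3, n=20000):
        h = (r3 - r1) / n
        return sum((r1 + (k + 0.5) * h) * phi(r1 + (k + 0.5) * h, r1, r2, r3)
                   for k in range(n)) * h

    if __name__ == "__main__":
        print(mean_from_density(0, 1, 5), (0 + 2 * 1 + 5) / 4)    # both approximately 1.75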

Example 3.34: Let ξ be a fuzzy variable with credibility distribution Φ. Generally speaking,

  E[ξ] ≠ ∫_{−∞}^{+∞} x dΦ(x).

For example, let ξ be a fuzzy variable with membership function

  μ(x) = 0, if x < 0;  x, if 0 ≤ x ≤ 1;  1, otherwise.

Then E[ξ] = +∞. However,

  ∫_{−∞}^{+∞} x dΦ(x) = 1/4 ≠ +∞.

Example 3.35: Let ξ be a fuzzy variable with membership function

  μ(x) = (x + 2)/3,  if −2 ≤ x ≤ −1,
         (x + 3)/3,  if −1 < x ≤ 0,
         (3 − x)/3,  if 0 ≤ x ≤ 1,
         (2 − x)/3,  if 1 < x ≤ 2,
         0,          otherwise.

Then its expected value is E[ξ] = 0, and its credibility distribution is

  Φ(x) = 0,          if x < −2,
         (x + 2)/6,  if −2 ≤ x ≤ −1,
         (x + 3)/6,  if −1 < x < 1,
         (x + 4)/6,  if 1 ≤ x ≤ 2,
         1,          if 2 < x.

[Figure 3.1: Membership Function μ and Credibility Distribution Φ]

Note that the credibility distribution Φ is neither left-continuous nor right-continuous. Thus there is no Lebesgue-Stieltjes measure π such that

  π{(−∞, x]} = Φ(x)

for all x ∈ ℝ. It follows that the Lebesgue-Stieltjes integral

  ∫_{−∞}^{+∞} x dΦ(x)

does not exist.

Theorem 3.40 Let ξ be a fuzzy variable with credibility distribution Φ. If

  lim_{x→−∞} Φ(x) = 0,  lim_{x→∞} Φ(x) = 1

and the Lebesgue-Stieltjes integral

  ∫_{−∞}^{+∞} x dΦ(x)

is finite, then we have

  E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).                                                     (3.66)

Proof: Since the Lebesgue-Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, we immediately have

  lim_{y→+∞} ∫_0^{y} x dΦ(x) = ∫_0^{+∞} x dΦ(x),  lim_{y→−∞} ∫_{y}^{0} x dΦ(x) = ∫_{−∞}^{0} x dΦ(x)

and

  lim_{y→+∞} ∫_{y}^{+∞} x dΦ(x) = 0,  lim_{y→−∞} ∫_{−∞}^{y} x dΦ(x) = 0.

It follows from

  ∫_{y}^{+∞} x dΦ(x) ≥ y ( lim_{z→+∞} Φ(z) − Φ(y) ) = y(1 − Φ(y)) ≥ 0,  for y > 0,
  ∫_{−∞}^{y} x dΦ(x) ≤ y ( Φ(y) − lim_{z→−∞} Φ(z) ) = yΦ(y) ≤ 0,        for y < 0

that

  lim_{y→+∞} y(1 − Φ(y)) = 0,  lim_{y→−∞} yΦ(y) = 0.

Let 0 = x0 < x1 < x2 < ⋯ < xn = y be a partition of [0, y]. Then we have

  Σ_{i=0}^{n−1} xi (Φ(xi+1) − Φ(xi)) → ∫_0^{y} x dΦ(x)

and

  Σ_{i=0}^{n−1} (1 − Φ(xi+1))(xi+1 − xi) → ∫_0^{y} Cr{ξ ≥ r} dr

as max{|xi+1 − xi| : i = 0, 1, ⋯, n−1} → 0. Since

  Σ_{i=0}^{n−1} xi (Φ(xi+1) − Φ(xi)) − Σ_{i=0}^{n−1} (1 − Φ(xi+1))(xi+1 − xi) = y(Φ(y) − 1) → 0

as y → +∞, this fact implies that

  ∫_0^{+∞} Cr{ξ ≥ r} dr = ∫_0^{+∞} x dΦ(x).

A similar way may prove that

  −∫_{−∞}^{0} Cr{ξ ≤ r} dr = ∫_{−∞}^{0} x dΦ(x).

It follows that the equation (3.66) holds.

Continuity Theorem

Theorem 3.41 Let {ξi} be a sequence of fuzzy variables such that ξi → ξ uniformly. Then

  lim_{i→∞} E[ξi] = E[ lim_{i→∞} ξi ].                                            (3.67)

Proof: Without loss of generality, we assume ξi ≥ 0 for each i. Then we have

  lim_{i→∞} E[ξi] = lim_{i→∞} ∫_0^{+∞} Cr{ξi ≥ r} dr
                  = ∫_0^{+∞} lim_{i→∞} Cr{ξi ≥ r} dr       (by Theorem 1.19)
                  = ∫_0^{+∞} Cr{ lim_{i→∞} ξi ≥ r } dr     (by Theorem 3.18)
                  = E[ lim_{i→∞} ξi ].

The theorem is proved.

Example 3.36: Dropping the condition of ξi → ξ uniformly, Theorem 3.41 does not hold. For example, let Θ = {θ1, θ2, ⋯} and Pos{θj} = 1 for all j. We define a sequence of fuzzy variables as follows,

  ξi(θj) = 0 if j < i, and 1 otherwise

for i = 1, 2, ⋯. Then ξi → 0. However,

  lim_{i→∞} E[ξi] = 1/2 ≠ 0 = E[ lim_{i→∞} ξi ].


Linearity of Expected Value Operator

Theorem 3.42 (Liu and Liu [83]) Let ξ be a fuzzy variable whose expected value exists. Then for any numbers a and b, we have

  E[aξ + b] = aE[ξ] + b.                                                          (3.68)

Proof: In order to prove the theorem, it suffices to verify that E[ξ + b] = E[ξ] + b and E[aξ] = aE[ξ]. It follows from the definition of the expected value operator that, if b ≥ 0,

  E[ξ + b] = ∫_0^{∞} Cr{ξ + b ≥ r} dr − ∫_{−∞}^{0} Cr{ξ + b ≤ r} dr
           = ∫_0^{∞} Cr{ξ ≥ r − b} dr − ∫_{−∞}^{0} Cr{ξ ≤ r − b} dr
           = E[ξ] + ∫_0^{b} ( Cr{ξ ≥ r − b} + Cr{ξ < r − b} ) dr
           = E[ξ] + b.

If b < 0, then we have

  E[ξ + b] = E[ξ] − ∫_{b}^{0} ( Cr{ξ ≥ r − b} + Cr{ξ < r − b} ) dr = E[ξ] + b.

On the other hand, if a = 0, then the equation E[aξ] = aE[ξ] holds trivially. If a > 0, we have

  E[aξ] = ∫_0^{∞} Cr{aξ ≥ r} dr − ∫_{−∞}^{0} Cr{aξ ≤ r} dr
        = ∫_0^{∞} Cr{ξ ≥ r/a} dr − ∫_{−∞}^{0} Cr{ξ ≤ r/a} dr
        = a ∫_0^{∞} Cr{ξ ≥ r/a} d(r/a) − a ∫_{−∞}^{0} Cr{ξ ≤ r/a} d(r/a)
        = aE[ξ].

The equation E[aξ] = aE[ξ] will be proved if we verify that E[−ξ] = −E[ξ]. In fact,

  E[−ξ] = ∫_0^{∞} Cr{−ξ ≥ r} dr − ∫_{−∞}^{0} Cr{−ξ ≤ r} dr
        = ∫_0^{∞} Cr{ξ ≤ −r} dr − ∫_{−∞}^{0} Cr{ξ ≥ −r} dr
        = ∫_{−∞}^{0} Cr{ξ ≤ r} dr − ∫_0^{∞} Cr{ξ ≥ r} dr
        = −E[ξ].


The proof is finished.

Theorem 3.43 (Liu and Liu [83]) Let ξ and η be independent fuzzy variables with finite expected values. Then we have

  E[ξ + η] = E[ξ] + E[η].                                                         (3.69)

Proof: We first prove the case where both ξ and η are simple fuzzy variables, i.e.,

  ξ = a1 with possibility μ1, a2 with possibility μ2, ⋯, am with possibility μm,
  η = b1 with possibility ν1, b2 with possibility ν2, ⋯, bn with possibility νn.

Then ξ + η is also a simple fuzzy variable taking values ai + bj with possibilities μi ∧ νj, i = 1, 2, ⋯, m, j = 1, 2, ⋯, n, respectively. Now we define

  w′i = (1/2)( max_{1≤k≤m} {μk | ak ≤ ai} − max_{1≤k≤m} {μk | ak < ai}
             + max_{1≤k≤m} {μk | ak ≥ ai} − max_{1≤k≤m} {μk | ak > ai} ),

  w′′j = (1/2)( max_{1≤l≤n} {νl | bl ≤ bj} − max_{1≤l≤n} {νl | bl < bj}
              + max_{1≤l≤n} {νl | bl ≥ bj} − max_{1≤l≤n} {νl | bl > bj} ),

  wij = (1/2)( max_{1≤k≤m,1≤l≤n} {μk ∧ νl | ak + bl ≤ ai + bj}
             − max_{1≤k≤m,1≤l≤n} {μk ∧ νl | ak + bl < ai + bj}
             + max_{1≤k≤m,1≤l≤n} {μk ∧ νl | ak + bl ≥ ai + bj}
             − max_{1≤k≤m,1≤l≤n} {μk ∧ νl | ak + bl > ai + bj} )

for i = 1, 2, ⋯, m and j = 1, 2, ⋯, n. It is also easy to verify that

  w′i = Σ_{j=1}^{n} wij,  w′′j = Σ_{i=1}^{m} wij

for i = 1, 2, ⋯, m and j = 1, 2, ⋯, n. If {ai}, {bj} and {ai + bj} are sequences consisting of distinct elements, then

  E[ξ] = Σ_{i=1}^{m} ai w′i,  E[η] = Σ_{j=1}^{n} bj w′′j,  E[ξ + η] = Σ_{i=1}^{m} Σ_{j=1}^{n} (ai + bj) wij.

Thus E[ξ + η] = E[ξ] + E[η]. If not, we may give them a small perturbation such that they are distinct, and prove the linearity by letting the perturbation tend to zero.

Next we prove the case where ξ and η are fuzzy variables such that

  lim_{y↑0} Cr{ξ ≤ y} ≤ 1/2 ≤ Cr{ξ ≤ 0},
  lim_{y↑0} Cr{η ≤ y} ≤ 1/2 ≤ Cr{η ≤ 0}.                                          (3.70)

This condition is equivalent to Pos{ξ = 0} = Pos{η = 0} = 1. We define simple fuzzy variables ξi via credibility distributions as follows,

  Φi(x) = (k − 1)/2^i,  if (k − 1)/2^i ≤ Cr{ξ ≤ x} < k/2^i,  k = 1, 2, ⋯, 2^{i−1},
          k/2^i,        if (k − 1)/2^i ≤ Cr{ξ ≤ x} < k/2^i,  k = 2^{i−1} + 1, ⋯, 2^i,
          1,            if Cr{ξ ≤ x} = 1

for i = 1, 2, ⋯. Thus {ξi} is a sequence of simple fuzzy variables satisfying

  Cr{ξi ≤ r} ↑ Cr{ξ ≤ r}, if r ≤ 0;  Cr{ξi ≥ r} ↑ Cr{ξ ≥ r}, if r ≥ 0

as i → ∞. Similarly, we define simple fuzzy variables ηi via credibility distributions as follows,

  Ψi(x) = (k − 1)/2^i,  if (k − 1)/2^i ≤ Cr{η ≤ x} < k/2^i,  k = 1, 2, ⋯, 2^{i−1},
          k/2^i,        if (k − 1)/2^i ≤ Cr{η ≤ x} < k/2^i,  k = 2^{i−1} + 1, ⋯, 2^i,
          1,            if Cr{η ≤ x} = 1

for i = 1, 2, ⋯. Thus {ηi} is a sequence of simple fuzzy variables satisfying

  Cr{ηi ≤ r} ↑ Cr{η ≤ r}, if r ≤ 0;  Cr{ηi ≥ r} ↑ Cr{η ≥ r}, if r ≥ 0

as i → ∞. It is also clear that {ξi + ηi} is a sequence of simple fuzzy variables. Furthermore, when r ≤ 0, it follows from (3.70) that

  lim_{i→∞} Cr{ξi + ηi ≤ r} = lim_{i→∞} sup_{x≤0, y≤0, x+y≤r} Cr{ξi ≤ x} ∧ Cr{ηi ≤ y}
                            = sup_{x≤0, y≤0, x+y≤r} lim_{i→∞} Cr{ξi ≤ x} ∧ Cr{ηi ≤ y}
                            = sup_{x≤0, y≤0, x+y≤r} Cr{ξ ≤ x} ∧ Cr{η ≤ y}
                            = Cr{ξ + η ≤ r}.

That is, Cr{ξi + ηi ≤ r} ↑ Cr{ξ + η ≤ r} if r ≤ 0. A similar way may prove that Cr{ξi + ηi ≥ r} ↑ Cr{ξ + η ≥ r} if r ≥ 0. Since the expected values E[ξ] and E[η] exist, we have

  E[ξi] = ∫_0^{+∞} Cr{ξi ≥ r} dr − ∫_{−∞}^{0} Cr{ξi ≤ r} dr → ∫_0^{+∞} Cr{ξ ≥ r} dr − ∫_{−∞}^{0} Cr{ξ ≤ r} dr = E[ξ],

  E[ηi] = ∫_0^{+∞} Cr{ηi ≥ r} dr − ∫_{−∞}^{0} Cr{ηi ≤ r} dr → ∫_0^{+∞} Cr{η ≥ r} dr − ∫_{−∞}^{0} Cr{η ≤ r} dr = E[η],

  E[ξi + ηi] = ∫_0^{+∞} Cr{ξi + ηi ≥ r} dr − ∫_{−∞}^{0} Cr{ξi + ηi ≤ r} dr
             → ∫_0^{+∞} Cr{ξ + η ≥ r} dr − ∫_{−∞}^{0} Cr{ξ + η ≤ r} dr = E[ξ + η]

as i → ∞. Therefore E[ξ + η] = E[ξ] + E[η] since we have proved that E[ξi + ηi] = E[ξi] + E[ηi] for i = 1, 2, ⋯.

Finally, if ξ and η are arbitrary fuzzy variables with finite expected values, then there exist two numbers a and b such that

  lim_{y↑0} Cr{ξ + a ≤ y} ≤ 1/2 ≤ Cr{ξ + a ≤ 0},
  lim_{y↑0} Cr{η + b ≤ y} ≤ 1/2 ≤ Cr{η + b ≤ 0}.

It follows from Theorem 3.42 that

  E[ξ + η] = E[(ξ + a) + (η + b) − a − b]
           = E[(ξ + a) + (η + b)] − a − b
           = E[ξ + a] + E[η + b] − a − b
           = E[ξ] + a + E[η] + b − a − b
           = E[ξ] + E[η]

which proves the theorem.


Theorem 3.44 (Liu and Liu [83]) Let ξ and η be independent fuzzy variables with finite expected values. Then for any numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (3.71)

Proof: The theorem follows immediately from Theorems 3.42 and 3.43.

Example 3.37: Theorem 3.44 does not hold if ξ and η are not independent. For example, take Θ = {θ1, θ2, θ3}, Pos{θ1} = 1, Pos{θ2} = 0.6, Pos{θ3} = 0.4 and the fuzzy variables defined by

  ξ1(θ) = 1 if θ = θ1, 0 if θ = θ2, 2 if θ = θ3;
  ξ2(θ) = 0 if θ = θ1, 2 if θ = θ2, 3 if θ = θ3.

Then we have

  (ξ1 + ξ2)(θ) = 1 if θ = θ1, 2 if θ = θ2, 5 if θ = θ3.

Thus E[ξ1] = 0.9, E[ξ2] = 0.8, and E[ξ1 + ξ2] = 1.9. This fact implies that

  E[ξ1 + ξ2] > E[ξ1] + E[ξ2].

If the fuzzy variables are defined by

  η1(θ) = 0 if θ = θ1, 1 if θ = θ2, 2 if θ = θ3;
  η2(θ) = 0 if θ = θ1, 3 if θ = θ2, 1 if θ = θ3,

then we have

  (η1 + η2)(θ) = 0 if θ = θ1, 4 if θ = θ2, 3 if θ = θ3.

Thus E[η1] = 0.5, E[η2] = 0.9, and E[η1 + η2] = 1.2. This fact implies that

  E[η1 + η2] < E[η1] + E[η2].

Distance of Fuzzy Variables

Definition 3.30 The distance of fuzzy variables ξ and η is defined as

d(ξ, η) = E[|ξ − η|]. (3.72)

The distance of fuzzy variables satisfies that (a) d(ξ, η) = 0 if ξ = η; (b) d(ξ, η) > 0 if ξ ≠ η; (c) (symmetry) d(ξ, η) = d(η, ξ). However, it does not satisfy the triangle inequality.


Expected Value of Function of Fuzzy Variable

Let ξ be a fuzzy variable, and f : ℝ → ℝ a function. Then the expected value of f(ξ) is

  E[f(ξ)] = ∫_0^{+∞} Cr{f(ξ) ≥ r} dr − ∫_{−∞}^{0} Cr{f(ξ) ≤ r} dr.

For the random case, it has been proved that the expected value E[f(ξ)] is the Lebesgue-Stieltjes integral of f(x) with respect to the probability distribution Φ of ξ if the integral exists. However, in the fuzzy case,

  E[f(ξ)] ≠ ∫_{−∞}^{+∞} f(x) dΦ(x)

where Φ is the credibility distribution of ξ. In fact, it follows from the definition of the expected value operator that

  E[f(ξ)] = ∫_{−∞}^{+∞} x dΨ(x)

where Ψ is the credibility distribution of the fuzzy variable f(ξ) satisfying

  lim_{x→−∞} Ψ(x) = 0,  lim_{x→+∞} Ψ(x) = 1,  −∞ < ∫_{−∞}^{+∞} x dΨ(x) < +∞.

Example 3.38: We consider a fuzzy variable ξ whose membership function is given by

  μ(x) = 0.6, if −1 ≤ x < 0;  1, if 0 ≤ x ≤ 1;  0, otherwise.

Then the expected value E[ξ²] = 0.5. However, the credibility distribution of ξ is

  Φ(x) = 0,    if x < −1,
         0.3,  if −1 ≤ x < 0,
         0.5,  if 0 ≤ x < 1,
         1,    if 1 ≤ x,

and the Lebesgue-Stieltjes integral

  ∫_{−∞}^{+∞} x² dΦ(x) = (−1)² × 0.3 + 0² × 0.2 + 1² × 0.5 = 0.8 ≠ E[ξ²].


Sum of a Fuzzy Number of Fuzzy Variables

Theorem 3.45 (Zhao and Liu [158]) Assume that {ξi} is a sequence of iid fuzzy variables, and n is a positive fuzzy integer (i.e., a fuzzy variable taking "positive integer" values) that is independent of the sequence {ξi}. Then for any number r, we have

  Pos{ Σ_{i=1}^{n} ξi ≥ r } = Pos{n ξ1 ≥ r},                                       (3.73)

  Nec{ Σ_{i=1}^{n} ξi ≥ r } = Nec{n ξ1 ≥ r},                                       (3.74)

  Cr{ Σ_{i=1}^{n} ξi ≥ r } = Cr{n ξ1 ≥ r}.                                         (3.75)

The equations (3.73), (3.74) and (3.75) remain true if the symbol "≥" is replaced with "≤", ">" or "<". Furthermore, we have

  E[ Σ_{i=1}^{n} ξi ] = E[n ξ1].                                                   (3.76)

Proof: Since {ξi} is a sequence of iid fuzzy variables and n is independent of {ξi}, we have

  Pos{ Σ_{i=1}^{n} ξi ≥ r } = sup_{n, x1+x2+⋯+xn≥r} Pos{n = n} ∧ ( min_{1≤i≤n} Pos{ξi = xi} )
                            ≥ sup_{nx≥r} Pos{n = n} ∧ ( min_{1≤i≤n} Pos{ξi = x} )
                            = sup_{nx≥r} Pos{n = n} ∧ Pos{ξ1 = x}
                            = Pos{n ξ1 ≥ r}.

On the other hand, for any given ε > 0, there exist an integer n and real numbers x1, x2, ⋯, xn with x1 + x2 + ⋯ + xn ≥ r such that

  Pos{ Σ_{i=1}^{n} ξi ≥ r } − ε ≤ Pos{n = n} ∧ Pos{ξi = xi}

for each i with 1 ≤ i ≤ n. Without loss of generality, we assume that nx1 ≥ r. Then we have

  Pos{ Σ_{i=1}^{n} ξi ≥ r } − ε ≤ Pos{n = n} ∧ Pos{ξ1 = x1} ≤ Pos{n ξ1 ≥ r}.

Letting ε → 0, we get

  Pos{ Σ_{i=1}^{n} ξi ≥ r } ≤ Pos{n ξ1 ≥ r}.

It follows that (3.73) holds. Similarly, we may prove that (3.73) still holds if the symbol "≥" is replaced with "≤", ">" or "<". Furthermore, we have

  Nec{ Σ_{i=1}^{n} ξi ≥ r } = 1 − Pos{ Σ_{i=1}^{n} ξi < r } = 1 − Pos{n ξ1 < r} = Nec{n ξ1 ≥ r},

  Cr{ Σ_{i=1}^{n} ξi ≥ r } = (1/2)( Pos{ Σ_{i=1}^{n} ξi ≥ r } + Nec{ Σ_{i=1}^{n} ξi ≥ r } )
                           = (1/2)( Pos{n ξ1 ≥ r} + Nec{n ξ1 ≥ r} )
                           = Cr{n ξ1 ≥ r}.

Finally, by the definition of the expected value operator, we have

  E[ Σ_{i=1}^{n} ξi ] = ∫_0^{+∞} Cr{ Σ_{i=1}^{n} ξi ≥ r } dr − ∫_{−∞}^{0} Cr{ Σ_{i=1}^{n} ξi ≤ r } dr
                      = ∫_0^{+∞} Cr{n ξ1 ≥ r} dr − ∫_{−∞}^{0} Cr{n ξ1 ≤ r} dr
                      = E[n ξ1].

The theorem is proved.

3.7 Variance, Covariance and Moments

Definition 3.31 (Liu and Liu [77]) Let ξ be a fuzzy variable with finite expected value e. The variance of ξ is defined as

  V[ξ] = E[(ξ − e)²].                                                             (3.77)

Theorem 3.46 If ξ is a fuzzy variable whose variance exists, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

  V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].

Theorem 3.47 Let ξ be a fuzzy variable with expected value e. Then V[ξ] = 0 if and only if Cr{ξ = e} = 1.


Proof: If V[ξ] = 0, then E[(ξ − e)²] = 0. Note that

  E[(ξ − e)²] = ∫_0^{+∞} Cr{(ξ − e)² ≥ r} dr

which implies Cr{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have Cr{(ξ − e)² = 0} = 1, i.e., Cr{ξ = e} = 1. Conversely, if Cr{ξ = e} = 1, then we have Cr{(ξ − e)² = 0} = 1 and Cr{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

  V[ξ] = ∫_0^{+∞} Cr{(ξ − e)² ≥ r} dr = 0.

Definition 3.32 The standard deviation of a fuzzy variable is defined as the nonnegative square root of its variance.

Definition 3.33 (Hua [36]) Let ξ and η be fuzzy variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

  Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])].                                            (3.78)

Definition 3.34 (Liu [79]) For any positive integer k, the expected value E[ξ^k] is called the kth moment of the fuzzy variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the fuzzy variable ξ.
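The following Python sketch (not from the book) approximates the variance (3.77) of the symmetric triangular fuzzy variable (−1, 0, 1) by evaluating Cr{(ξ − e)² ≥ r} from the membership function on a grid and integrating over r; the exact value is 1/6, and all names and grid sizes are illustrative.

    # Sketch: variance of a triangular fuzzy variable via Cr{(xi - e)^2 >= r} on a grid.

    def mu(x):                                 # membership of the triangular variable (-1, 0, 1)
        return max(0.0, 1.0 - abs(x))

    XS = [k / 1000 - 1.0 for k in range(2001)] # grid on the support [-1, 1]

    def cr(event):
        """Cr{xi in B} = (sup_B mu + 1 - sup_{B^c} mu)/2, approximated on the grid."""
        inside = max((mu(x) for x in XS if event(x)), default=0.0)
        outside = max((mu(x) for x in XS if not event(x)), default=0.0)
        return 0.5 * (inside + 1.0 - outside)

    def variance(e=0.0, r_max=1.5, n=1000):
        h = r_max / n
        return sum(cr(lambda x, r=(k + 0.5) * h: (x - e) ** 2 >= r) for k in range(n)) * h

    if __name__ == "__main__":
        print(variance())                      # approximately 0.1667 = 1/6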

3.8 Some Inequalities

Theorem 3.48 (Liu [79]) Let ξ be a fuzzy variable, and f a nonnegative function. If f is even and increasing on [0,∞), then for any given number t > 0, we have

  Cr{|ξ| ≥ t} ≤ E[f(ξ)]/f(t).                                                     (3.79)

Proof: It is clear that Cr{|ξ| ≥ f⁻¹(r)} is a monotone decreasing function of r on [0,∞). It follows from the nonnegativity of f(ξ) that

  E[f(ξ)] = ∫_0^{+∞} Cr{f(ξ) ≥ r} dr
          = ∫_0^{+∞} Cr{|ξ| ≥ f⁻¹(r)} dr
          ≥ ∫_0^{f(t)} Cr{|ξ| ≥ f⁻¹(r)} dr
          ≥ ∫_0^{f(t)} dr · Cr{|ξ| ≥ f⁻¹(f(t))}
          = f(t) · Cr{|ξ| ≥ t}

which proves the inequality.


Theorem 3.49 (Liu [79]) Let ξ be a fuzzy variable. Then for any given numbers t > 0 and p > 0, we have

  Cr{|ξ| ≥ t} ≤ E[|ξ|^p]/t^p.                                                     (3.80)

Proof: It is a special case of Theorem 3.48 when f(x) = |x|^p.

Theorem 3.50 (Liu [79]) Let ξ be a fuzzy variable whose variance V[ξ] exists. Then for any given number t > 0, we have

  Cr{|ξ − E[ξ]| ≥ t} ≤ V[ξ]/t².                                                   (3.81)

Proof: It is a special case of Theorem 3.48 when the fuzzy variable ξ is replaced with ξ − E[ξ], and f(x) = x².

Example 3.39: Let ξ be a fuzzy variable with finite expected value e and variance σ². It follows from Theorem 3.50 that

  Cr{|ξ − e| ≥ kσ} ≤ V[ξ − e]/(kσ)² = 1/k².

Theorem 3.51 (Liu [79]) Let p and q be two positive real numbers with 1/p + 1/q = 1, and let ξ and η be independent fuzzy variables with E[|ξ|^p] < ∞ and E[|η|^q] < ∞. Then we have

  E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.                                     (3.82)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^q] > 0, and set

  a = |ξ|/(E[|ξ|^p])^{1/p},  b = |η|/(E[|η|^q])^{1/q}.

It follows from ab ≤ a^p/p + b^q/q that

  |ξη| ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q} ( |ξ|^p/(p E[|ξ|^p]) + |η|^q/(q E[|η|^q]) ).

Taking the expected values on both sides, we obtain the inequality.

Theorem 3.52 (Liu [79]) Let ξ and η be independent fuzzy variables with E[|ξ|] < ∞ and E[|η|] < ∞. Then we have

  E[|ξ + η|] ≤ E[|ξ|] + E[|η|].                                                   (3.83)

Proof: Since ξ and η are independent, it follows from the linearity of the expected value operator that E[|ξ + η|] ≤ E[|ξ| + |η|] = E[|ξ|] + E[|η|].


Theorem 3.53 Let ξ be a fuzzy variable, and f a convex function. If E[ξ] and E[f(ξ)] exist and are finite, then

  f(E[ξ]) ≤ E[f(ξ)].                                                              (3.84)

Especially, when f(x) = |x|^p and p > 1, we have |E[ξ]|^p ≤ E[|ξ|^p].

Proof: Since f is a convex function, for each y, there exists a number k such that f(x) − f(y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain

  f(ξ) − f(E[ξ]) ≥ k · (ξ − E[ξ]).

Taking the expected values on both sides, we have

  E[f(ξ)] − f(E[ξ]) ≥ k · (E[ξ] − E[ξ]) = 0

which proves the inequality.

3.9 Characteristic Function

There is a concept of characteristic function in probability theory. This section introduces the concept of characteristic function of a fuzzy variable, and provides an inversion formula and a uniqueness theorem.

Definition 3.35 (Zhu and Liu [164]) Let ξ be a fuzzy variable with credibility distribution Φ. Then the characteristic function of ξ is defined by

  ϕ(t) = ∫_{−∞}^{+∞} e^{itx} dΦ(x),  ∀t ∈ ℝ                                       (3.85)

provided that the Lebesgue-Stieltjes integral exists, where e^{itx} = cos tx + i sin tx and i = √−1.

Example 3.40: Let ξ be a fuzzy variable with the membership function

  μ(x) = 1 if x ∈ [a, b], and 0 otherwise.

Then the characteristic function of ξ is

  ϕ(t) = (1/2)(e^{iat} + e^{ibt}),  ∀t ∈ ℝ.

Example 3.41: Let ξ be a triangular fuzzy variable (a, b, c). Then its characteristic function is

  ϕ(t) = i/(2(b − a)t) (e^{iat} − e^{ibt}) + i/(2(c − b)t) (e^{ibt} − e^{ict}),  ∀t ∈ ℝ.


Example 3.42: Let ξ be a trapezoidal fuzzy variable (a, b, c, d). Then its characteristic function is

  ϕ(t) = i/(2(b − a)t) (e^{iat} − e^{ibt}) + i/(2(d − c)t) (e^{ict} − e^{idt}),  ∀t ∈ ℝ.

Example 3.43: The characteristic function of a fuzzy variable may take any value c between 0 and 1. For example, a fuzzy variable ξ is defined by the membership function

  μ(x) = 1 if x = 0, and 1 − c otherwise.

Then its credibility distribution is

  Φ(x) = 0,          if x = −∞,
         (1 − c)/2,  if −∞ < x < 0,
         (1 + c)/2,  if 0 ≤ x < +∞,
         1,          if x = +∞,

and its characteristic function is ϕ(t) ≡ c.
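A short numerical sketch (not part of the book): the characteristic function of a trapezoidal fuzzy variable evaluated both by the closed form of Example 3.42 and by integrating e^{itx} against the credibility density (the two agree because Φ is absolutely continuous in this case); function names are illustrative.

    # Sketch: characteristic function of a trapezoidal fuzzy variable (a, b, c, d).

    import cmath

    def phi_closed(t, a, b, c, d):
        """Closed form of Example 3.42; at t = 0 the limit is Phi(+inf) - Phi(-inf) = 1."""
        if t == 0:
            return 1.0 + 0j
        term1 = 1j * (cmath.exp(1j * a * t) - cmath.exp(1j * b * t)) / (2 * (b - a) * t)
        term2 = 1j * (cmath.exp(1j * c * t) - cmath.exp(1j * d * t)) / (2 * (d - c) * t)
        return term1 + term2

    def phi_numeric(t, a, b, c, d, n=20000):
        def density(x):
            if a <= x <= b: return 1 / (2 * (b - a))
            if c <= x <= d: return 1 / (2 * (d - c))
            return 0.0
        h = (d - a) / n
        return sum(cmath.exp(1j * t * (a + (k + 0.5) * h)) * density(a + (k + 0.5) * h)
                   for k in range(n)) * h

    if __name__ == "__main__":
        print(phi_closed(1.3, 1, 2, 3, 4))
        print(phi_numeric(1.3, 1, 2, 3, 4))    # agrees with the closed form to several decimals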

Theorem 3.54 (Zhu and Liu [164]) Let ξ be a fuzzy variable with credibility distribution Φ and characteristic function ϕ. Then we have
(a) ϕ(0) = lim_{x→+∞} Φ(x) − lim_{x→−∞} Φ(x);
(b) |ϕ(t)| ≤ ϕ(0);
(c) ϕ(−t) is the complex conjugate of ϕ(t);
(d) ϕ(t) is a uniformly continuous function on ℝ.

Proof: The proof is similar to that of Theorem 2.50 except that

  ϕ(0) = ∫_{−∞}^{+∞} dΦ(x) = lim_{x→+∞} Φ(x) − lim_{x→−∞} Φ(x).

Theorem 3.55 (Zhu and Liu [164], Inversion Formula) Let ξ be a fuzzy variable with credibility distribution Φ and characteristic function ϕ. Then

  Φ(b) − Φ(a) = lim_{T→+∞} (1/2π) ∫_{−T}^{T} ((e^{−iat} − e^{−ibt})/(it)) ϕ(t) dt  (3.86)

holds for all points a, b (a < b) at which Φ is continuous.

Proof: Like Theorem 2.51.

Theorem 3.56 (Zhu and Liu [164], Uniqueness Theorem) Let Φ1 and Φ2 be two credibility distributions with characteristic functions ϕ1 and ϕ2, respectively. Then ϕ1 = ϕ2 if and only if there is a constant c such that Φ1 = Φ2 + c.


Proof: It follows from the definition of characteristic function that

  ϕ1(t) = ∫_{−∞}^{+∞} e^{itx} dΦ1(x),  ϕ2(t) = ∫_{−∞}^{+∞} e^{itx} dΦ2(x).

This implies that Φ1 and Φ2 may produce Lebesgue-Stieltjes measures. In other words, Φ1 and Φ2 must be right-continuous functions. If ϕ1 = ϕ2, it follows from the inversion formula that Φ1(x) = Φ2(x) + c for all x at which both Φ1 and Φ2 are continuous, where

  c = lim_{a→−∞} (Φ1(a) − Φ2(a)).

Since the set of continuity points of Φ1 and Φ2 is dense everywhere in ℝ, we have Φ1 = Φ2 + c.

Conversely, if there is a constant c such that Φ1 = Φ2 + c, then the credibility distributions Φ1 and Φ2 produce the same Lebesgue-Stieltjes measure. Thus ϕ1 = ϕ2.

3.10 Convergence Concepts

This section discusses some convergence concepts for sequences of fuzzy variables: convergence almost surely (a.s.), convergence in credibility, convergence in mean, and convergence in distribution.

Table 3.1: Relations among Convergence Concepts

  Convergence in Mean ⇒ Convergence in Credibility ⇒ Convergence Almost Surely
                                                   ⇒ Convergence in Distribution

Definition 3.36 (Liu [79]) Suppose that ξ, ξ1, ξ2, ⋯ are fuzzy variables defined on the possibility space (Θ, P(Θ), Pos). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ P(Θ) with Cr{A} = 1 such that

  lim_{i→∞} |ξi(θ) − ξ(θ)| = 0                                                    (3.87)

for every θ ∈ A. In that case we write ξi → ξ, a.s.

Definition 3.37 (Liu [79]) Suppose that ξ, ξ1, ξ2, ⋯ are fuzzy variables defined on the possibility space (Θ, P(Θ), Pos). We say that the sequence {ξi} converges in credibility to ξ if

  lim_{i→∞} Cr{|ξi − ξ| ≥ ε} = 0                                                  (3.88)

for every ε > 0.

Definition 3.38 (Liu [79]) Suppose that ξ, ξ1, ξ2, ⋯ are fuzzy variables with finite expected values defined on the possibility space (Θ, P(Θ), Pos). We say that the sequence {ξi} converges in mean to ξ if

  lim_{i→∞} E[|ξi − ξ|] = 0.                                                      (3.89)

Definition 3.39 (Liu [79]) Suppose that Φ, Φ1, Φ2, ⋯ are the credibility distributions of the fuzzy variables ξ, ξ1, ξ2, ⋯, respectively. We say that {ξi} converges in distribution to ξ if Φi(x) → Φ(x) for all continuity points x of Φ.

Convergence in Mean vs. Convergence in Credibility

Theorem 3.57 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are fuzzy variables defined on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in mean to ξ, then {ξi} converges in credibility to ξ.

Proof: It follows from Theorem 3.49 that, for any given number ε > 0,

    Cr{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|] / ε → 0

as i → ∞. Thus {ξi} converges in credibility to ξ.

Example 3.44: Convergence in credibility does not imply convergence in mean. For example, take Θ = {θ1, θ2, · · ·} with Pos{θj} = 1/j for j = 1, 2, · · ·, and define the fuzzy variables by

    ξi(θj) = i,  if j = i
             0,  otherwise     (3.90)

for i = 1, 2, · · · and ξ = 0. For any small number ε > 0, we have

    Cr{|ξi − ξ| ≥ ε} = 1/(2i) → 0.

That is, the sequence {ξi} converges in credibility to ξ. However,

    E[|ξi − ξ|] ≡ 1/2 ↛ 0.

That is, the sequence {ξi} does not converge in mean to ξ.
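This counterexample is easy to check numerically. The Python sketch below is our illustration (not part of the original text); it truncates Θ to finitely many points and uses the identities Cr{A} = (Pos{A} + 1 − Pos{Aᶜ})/2 and, for a nonnegative fuzzy variable, E[ξ] = ∫₀^∞ Cr{ξ ≥ r} dr. The helper names and grid step are our own choices.

    # Numerical check of Example 3.44 on a truncated possibility space.
    # Assumes Cr{A} = (Pos{A} + 1 - Pos{A^c}) / 2 and E[xi] = integral of Cr{xi >= r} dr.

    def cr(pos, event):
        """Credibility of an event given as a set of indices j of Theta."""
        pos_a = max((pos[j] for j in event), default=0.0)
        pos_ac = max((p for j, p in pos.items() if j not in event), default=0.0)
        return 0.5 * (pos_a + 1.0 - pos_ac)

    def expected_nonneg(pos, xi, upper, step=0.5):
        """E[xi] for a nonnegative fuzzy variable, by discretizing the integral."""
        total, r = 0.0, step
        while r <= upper:
            total += cr(pos, {j for j in pos if xi(j) >= r}) * step
            r += step
        return total

    n = 1000
    pos = {j: 1.0 / j for j in range(1, n + 1)}        # Pos{theta_j} = 1/j

    for i in (4, 40, 400):
        xi = lambda j, i=i: float(i) if j == i else 0.0
        print(i,
              round(cr(pos, {j for j in pos if xi(j) >= 0.5}), 4),   # -> 1/(2i), tends to 0
              round(expected_nonneg(pos, xi, upper=i), 4))            # stays close to 1/2

A run prints a credibility column shrinking like 1/(2i) while the expected value column stays at 0.5, matching the argument above.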


Convergence Almost Surely vs. Convergence in Credibility

Example 3.45: Convergence a.s. does not imply convergence in credibility. For example, take Θ = {θ1, θ2, · · ·} with Pos{θ1} = 1 and Pos{θj} = (j − 1)/j for j = 2, 3, · · ·, and define the fuzzy variables by

    ξi(θj) = i,  if j = i
             0,  otherwise     (3.91)

for i = 1, 2, · · · and ξ = 0. Then the sequence {ξi} converges a.s. to ξ. However, for any small number ε > 0, we have

    Cr{|ξi − ξ| ≥ ε} = (i − 1)/(2i) ↛ 0.

That is, the sequence {ξi} does not converge in credibility to ξ.

Theorem 3.58 (Wang and Liu [141]) Suppose that ξ, ξ1, ξ2, · · · are fuzzy variables defined on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in credibility to ξ, then {ξi} converges a.s. to ξ.

Proof: If {ξi} does not converge a.s. to ξ, then there exists an element θ∗ ∈ Θ with Pos{θ∗} > 0 such that ξi(θ∗) ↛ ξ(θ∗) as i → ∞. In other words, there exist a small number ε > 0 and a subsequence {ξik(θ∗)} such that |ξik(θ∗) − ξ(θ∗)| ≥ ε for any k. Since the credibility measure is an increasing set function, we have

    Cr{|ξik − ξ| ≥ ε} ≥ Cr{θ∗} ≥ Pos{θ∗}/2 > 0

for any k. It follows that {ξi} does not converge in credibility to ξ. A contradiction proves the theorem.

Convergence in Credibility vs. Convergence in Distribution

Theorem 3.59 (Wang and Liu [141]) Suppose that ξ, ξ1, ξ2, · · · are fuzzy variables. If the sequence {ξi} converges in credibility to ξ, then {ξi} converges in distribution to ξ.

Proof: Let x be any given continuity point of the distribution Φ. On the one hand, for any y > x, we have

    {ξi ≤ x} = {ξi ≤ x, ξ ≤ y} ∪ {ξi ≤ x, ξ > y} ⊂ {ξ ≤ y} ∪ {|ξi − ξ| ≥ y − x}.

It follows from the subadditivity of the credibility measure that

    Φi(x) ≤ Φ(y) + Cr{|ξi − ξ| ≥ y − x}.


Since {ξi} converges in credibility to ξ, we have Cr{|ξi − ξ| ≥ y − x} → 0. Thus we obtain lim sup_{i→∞} Φi(x) ≤ Φ(y) for any y > x. Letting y → x, we get

    lim sup_{i→∞} Φi(x) ≤ Φ(x).     (3.92)

On the other hand, for any z < x, we have

    {ξ ≤ z} = {ξ ≤ z, ξi ≤ x} ∪ {ξ ≤ z, ξi > x} ⊂ {ξi ≤ x} ∪ {|ξi − ξ| ≥ x − z}

which implies that

    Φ(z) ≤ Φi(x) + Cr{|ξi − ξ| ≥ x − z}.

Since Cr{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf_{i→∞} Φi(x) for any z < x. Letting z → x, we get

    Φ(x) ≤ lim inf_{i→∞} Φi(x).     (3.93)

It follows from (3.92) and (3.93) that Φi(x) → Φ(x). The theorem is proved.

Example 3.46: However, the converse of Theorem 3.59 is not true. For example, let Θ = {θ1, θ2, θ3}, and

    Pos{θ} = 1/2, if θ = θ1;   1, if θ = θ2;   1/2, if θ = θ3,
    ξ(θ)   = −1,  if θ = θ1;   0, if θ = θ2;   1,   if θ = θ3.

We also define

    ξi = −ξ,   i = 1, 2, · · ·     (3.94)

Then ξi and ξ are identically distributed. Thus {ξi} converges in distribution to ξ. But, for any small number ε > 0, we have Cr{|ξi − ξ| > ε} = Cr{θ1, θ3} = 1/4. That is, the sequence {ξi} does not converge in credibility to ξ.

Convergence Almost Surely vs. Convergence in Distribution

Example 3.47: Consider the example defined by (3.94) in which the sequence {ξi} converges in distribution to ξ. However, {ξi} does not converge a.s. to ξ.

Example 3.48: Consider the example defined by (3.91) in which the sequence {ξi} converges a.s. to ξ. However, the credibility distributions of ξi are

    Φi(x) = 0,              if x < 0
            (i + 1)/(2i),   if 0 ≤ x < i
            1,              if i ≤ x,


i = 1, 2, · · ·, respectively. The credibility distribution of ξ is

    Φ(x) = 0,  if x < 0
           1,  if x ≥ 0.

It is clear that Φi(x) ↛ Φ(x) at each x > 0. That is, the sequence {ξi} does not converge in distribution to ξ.

3.11 Fuzzy Simulations

Fuzzy simulation was developed by Liu and Iwamura [64][65] and Liu and Liu [77], and was defined as a technique of performing sampling experiments on the models of fuzzy systems. Numerous numerical experiments have shown that fuzzy simulation indeed works very well for handling fuzzy systems. In this section, we will introduce the technique of fuzzy simulation for computing credibility, finding critical values, and calculating expected values.

Example 3.49: Suppose that f : ℜn → ℜm is a function, and ξ = (ξ1, ξ2, · · · , ξn) is a fuzzy vector on the possibility space (Θ,P(Θ),Pos). We design a fuzzy simulation to compute the credibility

    L = Cr{f(ξ) ≤ 0}.     (3.95)

We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. Equivalently, we randomly generate u1k, u2k, · · · , unk from the ε-level sets of ξ1, ξ2, · · · , ξn, and write νk = μ1(u1k) ∧ μ2(u2k) ∧ · · · ∧ μn(unk) for k = 1, 2, · · · , N, where μi are the membership functions of ξi, i = 1, 2, · · · , n, respectively. Then the credibility Cr{f(ξ) ≤ 0} can be estimated by the formula

    L = (1/2) ( max_{1≤k≤N} { νk | f(ξ(θk)) ≤ 0 } + min_{1≤k≤N} { 1 − νk | f(ξ(θk)) > 0 } ).

Algorithm 3.1 (Fuzzy Simulation)
Step 1. Randomly generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 2. Set νk = Pos{θk} for k = 1, 2, · · · , N.
Step 3. Return L via the estimation formula.

Let ξ1 and ξ2 be two fuzzy variables with membership functions

    μ1(x) = exp[−(x − 1)²],   μ2(x) = exp[−(x − 2)²],

respectively. A run of the fuzzy simulation with 3000 cycles shows that the credibility Cr{ξ1 ≤ ξ2} = 0.61.
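The following Python sketch is our illustration of Algorithm 3.1 for this example (it is not from the original text). Sampling each ui uniformly from the interval on which μi ≥ ε and the function names are our own implementation choices.

    import math, random

    # Sketch of Algorithm 3.1 estimating L = Cr{xi1 <= xi2} for the membership
    # functions mu1(x) = exp(-(x-1)^2) and mu2(x) = exp(-(x-2)^2).

    def mu1(x): return math.exp(-(x - 1.0) ** 2)
    def mu2(x): return math.exp(-(x - 2.0) ** 2)

    def credibility(N=3000, eps=1e-3):
        half = math.sqrt(-math.log(eps))      # mu(x) >= eps on [center - half, center + half]
        vmax, vmin = 0.0, 1.0
        for _ in range(N):
            u1 = random.uniform(1.0 - half, 1.0 + half)   # Step 1: points of the eps-level sets
            u2 = random.uniform(2.0 - half, 2.0 + half)
            nu = min(mu1(u1), mu2(u2))                    # Step 2: nu_k = mu1(u1k) ^ mu2(u2k)
            if u1 - u2 <= 0:                              # event f(xi) <= 0 with f = xi1 - xi2
                vmax = max(vmax, nu)
            else:
                vmin = min(vmin, 1.0 - nu)
        return 0.5 * (vmax + vmin)                        # Step 3: the estimation formula

    print(round(credibility(), 2))   # a typical run gives a value near 0.6

The two running extremes play the roles of the max and min terms of the estimation formula, so no sample needs to be stored.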


Example 3.50: Let f : ℜn → ℜ be a function, and ξ a fuzzy vector defined on the possibility space (Θ,P(Θ),Pos). We design a fuzzy simulation to find the maximal f̄ such that the inequality

    Cr{f(ξ) ≥ f̄} ≥ α     (3.96)

holds. We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. For any number r, we have

    L(r) = (1/2) ( max_{1≤k≤N} { νk | f(ξ(θk)) ≥ r } + min_{1≤k≤N} { 1 − νk | f(ξ(θk)) < r } ).

It follows from monotonicity that we may employ bisection search to find the maximal value r such that L(r) ≥ α. This value is an estimation of f̄. We summarize this process as follows.

Algorithm 3.2 (Fuzzy Simulation)
Step 1. Generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 2. Find the maximal value r such that L(r) ≥ α holds.
Step 3. Return r.

We assume that ξ1, ξ2, ξ3 are triangular fuzzy variables (1, 2, 3), (2, 3, 4), (3, 4, 5), respectively. A run of fuzzy simulation with 1000 cycles shows that the maximal f̄ satisfying Cr{ξ1² + ξ2² + ξ3² ≥ f̄} ≥ 0.8 is 19.2.
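A Python sketch of Algorithm 3.2 for this example is given below; it is our illustration, with the triangular-membership sampling and the bisection loop as our own implementation choices.

    import random

    # Sketch of Algorithm 3.2: the maximal f with Cr{xi1^2 + xi2^2 + xi3^2 >= f} >= 0.8
    # for triangular fuzzy variables (1,2,3), (2,3,4), (3,4,5).

    def tri_mu(x, a, b, c):                    # triangular membership function (a, b, c)
        if a <= x <= b: return (x - a) / (b - a)
        if b <= x <= c: return (c - x) / (c - b)
        return 0.0

    TRIS = [(1, 2, 3), (2, 3, 4), (3, 4, 5)]

    def draw_samples(N=1000):
        samples = []
        for _ in range(N):                     # Step 1: sample the level sets, record nu_k
            us = [random.uniform(a, c) for (a, b, c) in TRIS]
            nu = min(tri_mu(u, *t) for u, t in zip(us, TRIS))
            samples.append((sum(u * u for u in us), nu))
        return samples

    def L(r, samples):                         # estimated Cr{f(xi) >= r}
        vmax = max((nu for v, nu in samples if v >= r), default=0.0)
        vmin = min((1.0 - nu for v, nu in samples if v < r), default=1.0)
        return 0.5 * (vmax + vmin)

    samples = draw_samples()
    lo, hi = 0.0, max(v for v, _ in samples)
    for _ in range(50):                        # Step 2: bisection for the maximal r with L(r) >= 0.8
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if L(mid, samples) >= 0.8 else (lo, mid)
    print(round(lo, 1))                        # Step 3: a typical run gives a value near 19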

Example 3.51: Let f : ℜn → ℜ be a function, and ξ a fuzzy vector defined on the possibility space (Θ,P(Θ),Pos). Then f(ξ) is also a fuzzy variable whose expected value is

    E[f(ξ)] = ∫_0^{+∞} Cr{f(ξ) ≥ r} dr − ∫_{−∞}^0 Cr{f(ξ) ≤ r} dr.     (3.97)

A fuzzy simulation will be designed to estimate E[f(ξ)]. We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. Then for any number r ≥ 0, the credibility Cr{f(ξ) ≥ r} can be estimated by

    (1/2) ( max_{1≤k≤N} { νk | f(ξ(θk)) ≥ r } + min_{1≤k≤N} { 1 − νk | f(ξ(θk)) < r } )

and for any number r < 0, the credibility Cr{f(ξ) ≤ r} can be estimated by

    (1/2) ( max_{1≤k≤N} { νk | f(ξ(θk)) ≤ r } + min_{1≤k≤N} { 1 − νk | f(ξ(θk)) > r } )

provided that N is sufficiently large. Thus E[f(ξ)] may be estimated by the following procedure.

Algorithm 3.3 (Fuzzy Simulation)
Step 1. Set e = 0.
Step 2. Randomly generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 3. Set a = f(ξ(θ1)) ∧ · · · ∧ f(ξ(θN)), b = f(ξ(θ1)) ∨ · · · ∨ f(ξ(θN)).
Step 4. Randomly generate r from [a, b].
Step 5. If r ≥ 0, then e ← e + Cr{f(ξ) ≥ r}.
Step 6. If r < 0, then e ← e − Cr{f(ξ) ≤ r}.
Step 7. Repeat the fourth to sixth steps N times.
Step 8. E[f(ξ)] = a ∨ 0 + b ∧ 0 + e · (b − a)/N.

Now let ξi = (i, i+1, i+6) be triangular fuzzy variables for i = 1, 2, · · · , 100. Then we have E[ξ1 + ξ2 + · · · + ξ100] = E[ξ1] + E[ξ2] + · · · + E[ξ100] = 5250. A run of fuzzy simulation with 10000 cycles shows that E[ξ1 + ξ2 + · · · + ξ100] = 5352. The relative error is less than 2%.

Let ξ1 = (1, 2, 3), ξ2 = (2, 3, 4), ξ3 = (3, 4, 5) and ξ4 = (4, 5, 6) be triangular fuzzy variables. A run of fuzzy simulation with 5000 cycles shows that the expected value E[√(ξ1² + ξ2² + ξ3² + ξ4²)] = 7.35.
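The sketch below is our Python rendering of Algorithm 3.3 for the second example; the sample size, level-set sampling and helper names are our own choices rather than the book's prescription, and a ∨ 0 + b ∧ 0 is written as max(a, 0) + min(b, 0).

    import math, random

    # Sketch of Algorithm 3.3 estimating E[sqrt(xi1^2 + xi2^2 + xi3^2 + xi4^2)] for
    # triangular fuzzy variables (1,2,3), (2,3,4), (3,4,5), (4,5,6).

    def tri_mu(x, a, b, c):
        if a <= x <= b: return (x - a) / (b - a)
        if b <= x <= c: return (c - x) / (c - b)
        return 0.0

    TRIS = [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6)]
    f = lambda us: math.sqrt(sum(u * u for u in us))

    def expected_value(N=2000):
        samples = []
        for _ in range(N):                           # Step 2: sample the level sets, record nu_k
            us = [random.uniform(a, c) for (a, b, c) in TRIS]
            samples.append((f(us), min(tri_mu(u, *t) for u, t in zip(us, TRIS))))
        def cr_ge(r):                                # estimated Cr{f(xi) >= r}
            vmax = max((nu for v, nu in samples if v >= r), default=0.0)
            vmin = min((1.0 - nu for v, nu in samples if v < r), default=1.0)
            return 0.5 * (vmax + vmin)
        a = min(v for v, _ in samples)               # Step 3
        b = max(v for v, _ in samples)
        e = sum(cr_ge(random.uniform(a, b)) for _ in range(N))   # Steps 4-7 (f >= 0, only Step 5 fires)
        return max(a, 0.0) + min(b, 0.0) + e * (b - a) / N        # Step 8

    print(round(expected_value(), 2))                # a typical run gives a value near 7.3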


Chapter 4

Trust Theory

Rough set theory, initialized by Pawlak [109], has been proved to be an excellent mathematical tool for dealing with vague descriptions of objects. A fundamental assumption is that any object from a universe is perceived through available information, and such information may not be sufficient to characterize the object exactly. One way is the approximation of a set by other sets. Thus a rough set may be defined by a pair of crisp sets, called the lower and the upper approximations, that are originally produced by an equivalence relation.

Trust theory is the branch of mathematics that studies the behavior of rough events. The emphasis in this chapter is mainly on rough sets, rough spaces, rough variables, rough arithmetic, trust measure, trust distribution, independent and identical distribution, the expected value operator, critical values, convergence concepts, laws of large numbers, and rough simulation.

4.1 Rough Set

Let U be a universe. Slowinski and Vanderpooten [132] extended the equivalence relation to a more general case and proposed a binary similarity relation that is reflexive but not necessarily symmetric or transitive. Different from the equivalence relation, the similarity relation does not generate a partition of U; consider, for example, the similarity relation defined on ℜ as "x is similar to y if and only if |x − y| ≤ 1".

The similarity class of x, denoted by R(x), is the set of objects which are similar to x,

    R(x) = {y ∈ U | y ∼ x}.     (4.1)

Let R⁻¹(x) be the class of objects to which x is similar,

    R⁻¹(x) = {y ∈ U | x ∼ y}.     (4.2)

Then the lower and the upper approximations of a set are given by the following definition.


Definition 4.1 (Slowinski and Vanderpooten [132]) Let U be a universe, and X a set representing a concept. Then its lower approximation is defined by

    X̲ = {x ∈ U | R⁻¹(x) ⊂ X};     (4.3)

while the upper approximation is defined by

    X̄ = ⋃_{x∈X} R(x).     (4.4)

That is, the lower approximation is a subset containing the objects surely belonging to the set, whereas the upper approximation is a superset containing the objects possibly belonging to the set. It is easy to prove that X̲ ⊂ X ⊂ X̄.

Example 4.1: Let ℜ be a universe. We define a similarity relation ∼ such that y ∼ x if and only if [y] = [x], where [x] represents the largest integer less than or equal to x. For the set [0, 1], the lower approximation is [0, 1) and the upper approximation is [0, 2). All sets [0, r) with 0 < r ≤ 1 have the same upper approximation [0, 1).

Example 4.2: Let ℜ be a universe. We define a similarity relation ∼ such that y ∼ x if and only if |y − x| ≤ 1. For the set [0, 3], the lower approximation is [1, 2] and the upper approximation is [−1, 4]. For the set [0, 1], the lower approximation is ∅ and the upper approximation is [−1, 2].
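These approximations can be computed mechanically. The Python sketch below is our illustration of Definition 4.1 on a discretized universe with the similarity relation of Example 4.2; the grid resolution and helper names are our own choices.

    # Lower and upper approximations of Definition 4.1 on a grid, with the
    # similarity relation "y ~ x iff |y - x| <= 1" of Example 4.2.

    def approximations(universe, target, similar):
        """Return (lower, upper) approximations of `target` as sets of grid points."""
        lower = {x for x in universe
                 if all(y in target for y in universe if similar(x, y))}   # R^{-1}(x) subset of X
        upper = {y for x in target for y in universe if similar(y, x)}     # union of R(x), x in X
        return lower, upper

    step = 0.25
    universe = {k * step for k in range(-20, 21)}              # grid on [-5, 5]
    target = {x for x in universe if 0 <= x <= 3}              # the set [0, 3]
    similar = lambda y, x: abs(y - x) <= 1.0

    lower, upper = approximations(universe, target, similar)
    print(min(lower), max(lower))    #  1.0 2.0  -- lower approximation is [1, 2]
    print(min(upper), max(upper))    # -1.0 4.0  -- upper approximation is [-1, 4]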

Definition 4.2 (Pawlak [109]) The collection of all sets having the samelower and upper approximations is called a rough set, denoted by (X,X).

4.2 Four Axioms

In order to provide an axiomatic theory to describe rough variables, we first give four axioms. Let Λ be a nonempty set, A a σ-algebra of subsets of Λ, Δ an element in A, and π a real-valued set function on A. The four axioms are listed as follows:

Axiom 1. π{Λ} < +∞.

Axiom 2. π{Δ} > 0.

Axiom 3. π{A} ≥ 0 for any A ∈ A.

Axiom 4. For every countable sequence of mutually disjoint events {Ai}_{i=1}^∞, we have

    π{ ⋃_{i=1}^∞ Ai } = Σ_{i=1}^∞ π{Ai}.     (4.5)

In fact, the set function π satisfying the four axioms is clearly a measure.Furthermore, the triplet (Λ,A, π) is a measure space.


Definition 4.3 (Liu [75]) Let Λ be a nonempty set, A a σ-algebra of subsets of Λ, Δ an element in A, and π a set function satisfying the four axioms. Then (Λ,Δ,A,π) is called a rough space.

Definition 4.4 (Liu [75]) Let (Λ,Δ,A,π) be a rough space. Then the upper trust of an event A is defined by

    Tr̄{A} = π{A} / π{Λ};     (4.6)

the lower trust of the event A is defined by

    Tr̲{A} = π{A ∩ Δ} / π{Δ};     (4.7)

and the trust of the event A is defined by

    Tr{A} = (1/2) ( Tr̄{A} + Tr̲{A} ).     (4.8)

Theorem 4.1 Let (Λ,Δ,A,π) be a rough space. Then the trust measure is a measure on A, and satisfies
(a) Tr{Λ} = 1;
(b) Tr{∅} = 0;
(c) Tr is increasing, i.e., Tr{A} ≤ Tr{B} whenever A ⊂ B;
(d) Tr is self-dual, i.e., Tr{A} + Tr{Ac} = 1 for any A ∈ A.

Proof: It is clear that Tr{A} ≥ 0 for any A ∈ A. Now let {Ai}_{i=1}^∞ be a countable sequence of mutually disjoint sets in A. Then we have

    Tr{ ⋃_{i=1}^∞ Ai } = (1/2) ( Tr̄{ ⋃_{i=1}^∞ Ai } + Tr̲{ ⋃_{i=1}^∞ Ai } )
                       = π{ ⋃_{i=1}^∞ Ai } / (2π{Λ}) + π{ (⋃_{i=1}^∞ Ai) ∩ Δ } / (2π{Δ})
                       = Σ_{i=1}^∞ π{Ai} / (2π{Λ}) + Σ_{i=1}^∞ π{Ai ∩ Δ} / (2π{Δ})
                       = (1/2) ( Σ_{i=1}^∞ Tr̄{Ai} + Σ_{i=1}^∞ Tr̲{Ai} )
                       = Σ_{i=1}^∞ Tr{Ai}.

Thus Tr is a measure on A. The other parts follow immediately from the definition.

Remark 4.1: A rough event must hold if its trust is 1, and fail if its trustis 0. That is, the trust measure plays the role of probability measure andcredibility measure.


Trust Continuity Theorem

Theorem 4.2 (Trust Continuity Theorem) Let (Λ,Δ,A,π) be a rough space, and A1, A2, · · · ∈ A. If lim_{i→∞} Ai exists, then

    lim_{i→∞} Tr{Ai} = Tr{ lim_{i→∞} Ai }.     (4.9)

Proof: It is a special case of Theorem 1.8.

Theorem 4.3 Let (Λ,Δ,A,π) be a rough space. If A1, A2, · · · ∈ A, then we have

    Tr{ lim inf_{i→∞} Ai } ≤ lim inf_{i→∞} Tr{Ai} ≤ lim sup_{i→∞} Tr{Ai} ≤ Tr{ lim sup_{i→∞} Ai }.

Proof: It is a special case of Theorem 1.7.

Independent Events

Definition 4.5 The events Ai, i ∈ I are said to be independent if and only if for any collection {i1, i2, · · · , ik} of distinct indices in I, we have

    Tr{Ai1 ∩ Ai2 ∩ · · · ∩ Aik} = Tr{Ai1}Tr{Ai2} · · · Tr{Aik}.     (4.10)

Theorem 4.4 If the events Ai, i ∈ I are independent, and Bi are either Ai or Aiᶜ for i ∈ I, then the events Bi, i ∈ I are independent.

Proof: In order to prove the theorem, it suffices to prove that Tr{A1ᶜ ∩ A2} = Tr{A1ᶜ}Tr{A2}. It follows from A1ᶜ ∩ A2 = A2 \ (A1 ∩ A2) that

    Tr{A1ᶜ ∩ A2} = Tr{A2 \ (A1 ∩ A2)}
                 = Tr{A2} − Tr{A1 ∩ A2}        (since A1 ∩ A2 ⊂ A2)
                 = Tr{A2} − Tr{A1}Tr{A2}       (by the independence)
                 = (1 − Tr{A1})Tr{A2}
                 = Tr{A1ᶜ}Tr{A2}.

Theorem 4.5 Let (Λ,Δ,A,π) be a rough space, and let A1, A2, · · · ∈ A. Then we have
(a) if Σ_{i=1}^∞ Tr{Ai} < ∞, then

    Tr{ lim sup_{i→∞} Ai } = 0;     (4.11)

(b) if A1, A2, · · · are independent and Σ_{i=1}^∞ Tr{Ai} = ∞, then

    Tr{ lim sup_{i→∞} Ai } = 1.     (4.12)


Proof: (a) It follows from the trust continuity theorem that

    Tr{ lim sup_{i→∞} Ai } = Tr{ ⋂_{k=1}^∞ ⋃_{i=k}^∞ Ai } = lim_{k→∞} Tr{ ⋃_{i=k}^∞ Ai }
                           ≤ lim_{k→∞} Σ_{i=k}^∞ Tr{Ai} = 0.     (by Σ_{i=1}^∞ Tr{Ai} < ∞)

Thus part (a) is proved. In order to prove part (b), we only need to prove

    lim_{k→∞} Tr{ ⋃_{i=k}^∞ Ai } = 1.

In other words, we should prove

    lim_{k→∞} Tr{ ⋂_{i=k}^∞ Aiᶜ } = 0.

For any k, we have

    Tr{ ⋂_{i=k}^∞ Aiᶜ } = ∏_{i=k}^∞ (1 − Tr{Ai})          (by independence)
                        ≤ exp( − Σ_{i=k}^∞ Tr{Ai} )        (by 1 − x ≤ e^{−x})
                        = 0.                                (by Σ_{i=1}^∞ Tr{Ai} = ∞)

Hence part (b) is proved.

Product Rough Space

Theorem 4.6 Suppose that (Λi,Δi,Ai,πi) are rough spaces, i = 1, 2, · · · , n. Let

    Λ = Λ1 × Λ2 × · · · × Λn,   Δ = Δ1 × Δ2 × · · · × Δn,
    A = A1 × A2 × · · · × An,   π = π1 × π2 × · · · × πn.     (4.13)

Then (Λ,Δ,A,π) is also a rough space.

Proof: It follows from the product measure theorem that π is a measure on the σ-algebra A. Thus (Λ,Δ,A,π) is also a rough space.

Definition 4.6 (Liu [75]) Let (Λi,Δi,Ai,πi), i = 1, 2, · · · , n be rough spaces. Then (Λ,Δ,A,π) is called the product rough space, where Λ, Δ, A and π are determined by (4.13).


Infinite Product Rough Space

Theorem 4.7 Suppose that (Λi,Δi,Ai,πi) are rough spaces, i = 1, 2, · · · Let

    Λ = Λ1 × Λ2 × · · ·,   Δ = Δ1 × Δ2 × · · ·,
    A = A1 × A2 × · · ·,   π = π1 × π2 × · · ·     (4.14)

Then (Λ,Δ,A,π) is also a rough space.

Proof: It follows from the infinite product measure theorem that π is a measure on the σ-algebra A. Thus (Λ,Δ,A,π) is also a rough space.

Definition 4.7 Let (Λi,Δi,Ai,πi), i = 1, 2, · · · be rough spaces. We say that (Λ,Δ,A,π) is the infinite product rough space, where Λ, Δ, A and π are determined by (4.14).

Laplace Criterion

When we do not have enough information to determine the measure π for a real-life problem, we use the Laplace criterion, which assumes that all elements in Λ are equally likely to occur. In this case, the measure π may be taken as the Lebesgue measure. This criterion will be used in all examples in this book for simplicity.

4.3 Rough Variable

Definition 4.8 (Liu [75]) A rough variable ξ is a measurable function from the rough space (Λ,Δ,A,π) to the set of real numbers. That is, for every Borel set B of ℜ, we have

    {λ ∈ Λ | ξ(λ) ∈ B} ∈ A.     (4.15)

The lower and the upper approximations of the rough variable ξ are then defined as follows,

    ξ̲ = {ξ(λ) | λ ∈ Δ},   ξ̄ = {ξ(λ) | λ ∈ Λ}.     (4.16)

Remark 4.2: Since Δ ⊂ Λ, it is obvious that ξ̲ ⊂ ξ̄.

Example 4.3: Let Λ = {λ | 0 ≤ λ ≤ 10}, Δ = {λ | 2 ≤ λ ≤ 6}, let A be the Borel algebra on Λ, and let π be the Lebesgue measure. Then the function ξ(λ) = λ² defined on (Λ,Δ,A,π) is a rough variable.

Example 4.4: A rough variable ([a, b], [c, d]) with c ≤ a < b ≤ d represents the identity function ξ(λ) = λ from the rough space (Λ,Δ,A,π) to the set of real numbers, where Λ = {λ | c ≤ λ ≤ d}, Δ = {λ | a ≤ λ ≤ b}, A is the Borel algebra on Λ, and π is the Lebesgue measure.


Example 4.5: Let ξ = ([a, b], [c, d]) be a rough variable with c ≤ a < b ≤ d. We then have

    Tr{ξ ≤ 0} = 0,                                    if c ≥ 0
                c / (2(c − d)),                       if a ≥ 0 ≥ c
                (2ac − ad − bc) / (2(b − a)(d − c)),  if b ≥ 0 ≥ a
                (d − 2c) / (2(d − c)),                if d ≥ 0 ≥ b
                1,                                    if 0 ≥ d.

When [a, b] = [c, d], we have

    Tr{ξ ≤ 0} = 0,            if a ≥ 0
                a / (a − b),  if b ≥ 0 ≥ a
                1,            if 0 ≥ b.
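These formulas follow directly from Definition 4.4 when π is the Lebesgue measure (the Laplace criterion used in the book's examples). The following Python sketch is our illustration of that computation for the case b ≥ 0 ≥ a; the helper names and the sample numbers are ours.

    # Tr{xi <= 0} for xi = ([a,b],[c,d]) computed from Definition 4.4 with pi = Lebesgue measure.

    def neg_len(lo, hi):
        """Length of [lo, hi] intersected with (-inf, 0]."""
        return max(0.0, min(hi, 0.0) - lo)

    def trust_le_zero(a, b, c, d):
        upper = neg_len(c, d) / (d - c)      # upper trust: pi{A} / pi{Lambda}
        lower = neg_len(a, b) / (b - a)      # lower trust: pi{A ∩ Delta} / pi{Delta}
        return 0.5 * (upper + lower)         # trust = average of upper and lower trust

    a, b, c, d = -1.0, 2.0, -3.0, 4.0        # a case with b >= 0 >= a
    print(round(trust_le_zero(a, b, c, d), 6))
    print(round((2*a*c - a*d - b*c) / (2 * (b - a) * (d - c)), 6))   # closed form above: same value

Both printed values equal 8/21 for these parameters, confirming the piecewise formula.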

Definition 4.9 A rough variable ξ is said to be
(a) nonnegative if Tr{ξ < 0} = 0;
(b) positive if Tr{ξ ≤ 0} = 0;
(c) continuous if Tr{ξ = x} = 0 for each x ∈ ℜ;
(d) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

    Tr{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm} = 0;     (4.17)

(e) discrete if there exists a countable sequence {x1, x2, · · ·} such that

    Tr{ξ ≠ x1, ξ ≠ x2, · · ·} = 0.     (4.18)

Rough Vector

Definition 4.10 An n-dimensional rough vector ξ is a measurable function from the rough space (Λ,Δ,A,π) to the set of n-dimensional vectors. That is, for every Borel set B of ℜn, we have

    {λ ∈ Λ | ξ(λ) ∈ B} ∈ A.     (4.19)

The lower and the upper approximations of the rough vector ξ are then defined as follows,

    ξ̲ = {ξ(λ) | λ ∈ Δ},   ξ̄ = {ξ(λ) | λ ∈ Λ}.     (4.20)

Theorem 4.8 The vector (ξ1, ξ2, · · · , ξn) is a rough vector if and only if ξ1, ξ2, · · · , ξn are rough variables.


Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a rough vector on the rough space (Λ,Δ,A,π). For any Borel set B of ℜ, the set B × ℜ^{n−1} is a Borel set of ℜn. Thus we have

    {λ ∈ Λ | ξ1(λ) ∈ B} = {λ ∈ Λ | ξ1(λ) ∈ B, ξ2(λ) ∈ ℜ, · · · , ξn(λ) ∈ ℜ}
                        = {λ ∈ Λ | ξ(λ) ∈ B × ℜ^{n−1}} ∈ A

which implies that ξ1 is a rough variable. A similar process may prove that ξ2, ξ3, · · · , ξn are rough variables.

Conversely, suppose that all ξ1, ξ2, · · · , ξn are rough variables on the rough space (Λ,Δ,A,π). We define

    ℬ = {B ⊂ ℜn | {λ ∈ Λ | ξ(λ) ∈ B} ∈ A}.

The vector ξ is proved to be a rough vector if ℬ contains all Borel sets of ℜn. In fact, for any open interval ∏_{i=1}^n (ai, bi) of ℜn, we have

    {λ ∈ Λ | ξ(λ) ∈ ∏_{i=1}^n (ai, bi)} = ⋂_{i=1}^n {λ ∈ Λ | ξi(λ) ∈ (ai, bi)} ∈ A.

Thus ∏_{i=1}^n (ai, bi) ∈ ℬ. That is, the class ℬ contains all open intervals of ℜn. We next prove that ℬ is a σ-algebra of ℜn: (i) it is clear that ℜn ∈ ℬ since {λ ∈ Λ | ξ(λ) ∈ ℜn} = Λ ∈ A; (ii) if B ∈ ℬ, then {λ ∈ Λ | ξ(λ) ∈ B} ∈ A, and

    {λ ∈ Λ | ξ(λ) ∈ Bc} = {λ ∈ Λ | ξ(λ) ∈ B}c ∈ A

which implies that Bc ∈ ℬ; (iii) if Bi ∈ ℬ for i = 1, 2, · · ·, then {λ ∈ Λ | ξ(λ) ∈ Bi} ∈ A and

    {λ ∈ Λ | ξ(λ) ∈ ⋃_{i=1}^∞ Bi} = ⋃_{i=1}^∞ {λ ∈ Λ | ξ(λ) ∈ Bi} ∈ A

which implies that ⋃_i Bi ∈ ℬ. Since the smallest σ-algebra containing all open intervals of ℜn is just the Borel algebra of ℜn, the class ℬ contains all Borel sets of ℜn. The theorem is proved.

Rough Arithmetic

Definition 4.11 (Liu [75], Rough Arithmetic on Single Rough Space) Letf : �n → � be a measurable function, and ξ1, ξ2, · · · , ξn rough variables onthe rough space (Λ,Δ,A, π). Then ξ = f(ξ1, ξ2, · · · , ξn) is a rough variabledefined on the rough space (Λ,Δ,A, π) as

ξ(λ) = f(ξ1(λ), ξ2(λ), · · · , ξn(λ)), ∀λ ∈ Λ. (4.21)

Example 4.6: Let ξ1 and ξ2 be rough variables defined on the rough space(Λ,Δ,A, π). Then their sum and product are

(ξ1 + ξ2)(λ) = ξ1(λ) + ξ2(λ), (ξ1 × ξ2)(λ) = ξ1(λ)× ξ2(λ), ∀λ ∈ Λ.


Definition 4.12 (Liu [75], Rough Arithmetic on Different Rough Spaces) Let f : ℜn → ℜ be a measurable function, and ξi rough variables on rough spaces (Λi,Δi,Ai,πi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a rough variable on the product rough space (Λ,Δ,A,π), defined as

    ξ(λ1, λ2, · · · , λn) = f(ξ1(λ1), ξ2(λ2), · · · , ξn(λn))     (4.22)

for any (λ1, λ2, · · · , λn) ∈ Λ.

Example 4.7: Let ξ1 and ξ2 be rough variables defined on the rough spaces (Λ1,Δ1,A1,π1) and (Λ2,Δ2,A2,π2), respectively. Then the sum ξ = ξ1 + ξ2 is a rough variable defined on the product rough space (Λ,Δ,A,π) as

    ξ(λ1, λ2) = ξ1(λ1) + ξ2(λ2),   ∀(λ1, λ2) ∈ Λ.

The product ξ = ξ1 · ξ2 is a rough variable defined on the product rough space (Λ,Δ,A,π) as

    ξ(λ1, λ2) = ξ1(λ1) · ξ2(λ2),   ∀(λ1, λ2) ∈ Λ.

Example 4.8: Let ξ = ([a1, a2], [a3, a4]) and η = ([b1, b2], [b3, b4]) be two rough variables. Note that a3 ≤ a1 < a2 ≤ a4 and b3 ≤ b1 < b2 ≤ b4. It follows from the rough arithmetic that

    ξ + η = ([a1 + b1, a2 + b2], [a3 + b3, a4 + b4]),     (4.23)

    kξ = ([ka1, ka2], [ka3, ka4]),  if k ≥ 0
         ([ka2, ka1], [ka4, ka3]),  if k < 0.     (4.24)

Remark 4.3: Recall the concept of interval number defined by Alefeld and Herzberger [1] as an ordered pair of real numbers. In fact, an interval number [a, b] can be regarded as a rough variable ([a, b], [a, b]). We will find that the rough arithmetic coincides with the interval arithmetic defined by Alefeld and Herzberger [1] and Hansen [33]. That is,

    [a1, a2] + [b1, b2] = [a1 + b1, a2 + b2],     (4.25)

    k[a1, a2] = [ka1, ka2],  if k ≥ 0
                [ka2, ka1],  if k < 0.     (4.26)
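A small Python sketch of (4.23)-(4.24) is given below; it is our illustration, and the class name and method names are hypothetical.

    from dataclasses import dataclass

    # Rough arithmetic (4.23)-(4.24) for variables of the form ([a1, a2], [a3, a4]).

    @dataclass
    class RoughInterval:
        a1: float
        a2: float   # lower approximation [a1, a2]
        a3: float
        a4: float   # upper approximation [a3, a4]

        def __add__(self, other):
            return RoughInterval(self.a1 + other.a1, self.a2 + other.a2,
                                 self.a3 + other.a3, self.a4 + other.a4)

        def scale(self, k):
            if k >= 0:
                return RoughInterval(k*self.a1, k*self.a2, k*self.a3, k*self.a4)
            return RoughInterval(k*self.a2, k*self.a1, k*self.a4, k*self.a3)

    xi  = RoughInterval(2, 3, 0, 4)     # ([2,3],[0,4])
    eta = RoughInterval(1, 2, 1, 3)     # ([1,2],[1,3])
    print(xi + eta)                     # ([3,5],[1,7]) by (4.23)
    print(xi.scale(-2))                 # ([-6,-4],[-8,0]) by (4.24)

With a1 = a3 and a2 = a4 the same code reproduces the interval arithmetic (4.25)-(4.26).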

Theorem 4.9 Let ξ be an n-dimensional rough vector, and f : ℜn → ℜ a measurable function. Then f(ξ) is a rough variable.

Proof: Let ξ be defined on the rough space (Λ,Δ,A,π). For any Borel set B of ℜ, since f is a measurable function, f⁻¹(B) is also a Borel set of ℜn. Hence we have

    {λ ∈ Λ | f(ξ(λ)) ∈ B} = {λ ∈ Λ | ξ(λ) ∈ f⁻¹(B)} ∈ A

which implies that f(ξ) is a rough variable.


Continuity Theorems

Theorem 4.10 (a) Let {ξi} be an increasing sequence of rough variables such that lim_{i→∞} ξi is a rough variable. Then for any real number r, we have

    lim_{i→∞} Tr{ξi > r} = Tr{ lim_{i→∞} ξi > r }.     (4.27)

(b) Let {ξi} be a decreasing sequence of rough variables such that lim_{i→∞} ξi is a rough variable. Then for any real number r, we have

    lim_{i→∞} Tr{ξi ≥ r} = Tr{ lim_{i→∞} ξi ≥ r }.     (4.28)

(c) The equations (4.27) and (4.28) remain true if ">" and "≥" are replaced with "≤" and "<", respectively.

Proof: Since {ξi} is an increasing sequence of rough variables, we may prove that

    {ξi ≤ r} ↓ { lim_{i→∞} ξi ≤ r },   {ξi > r} ↑ { lim_{i→∞} ξi > r }.

It follows from the trust continuity theorem that (4.27) holds. If {ξi} is a decreasing sequence, then

    {ξi ≥ r} ↓ { lim_{i→∞} ξi ≥ r },   {ξi < r} ↑ { lim_{i→∞} ξi < r }.

It follows from the trust continuity theorem that (4.28) holds.

Theorem 4.11 Let {ξi} be a sequence of rough variables such that

    lim inf_{i→∞} ξi   and   lim sup_{i→∞} ξi

are rough variables. Then we have

    Tr{ lim inf_{i→∞} ξi > r } ≤ lim inf_{i→∞} Tr{ξi > r},     (4.29)

    Tr{ lim sup_{i→∞} ξi ≥ r } ≥ lim sup_{i→∞} Tr{ξi ≥ r},     (4.30)

    Tr{ lim inf_{i→∞} ξi ≤ r } ≥ lim sup_{i→∞} Tr{ξi ≤ r},     (4.31)

    Tr{ lim sup_{i→∞} ξi < r } ≤ lim inf_{i→∞} Tr{ξi < r}.     (4.32)


Proof: It is clear that infi≥k

ξi is an increasing sequence and infi≥k

ξi ≤ ξk for

each k. It follows from Theorem 4.10 that

Tr{

lim infi→∞

ξi > r}

= Tr{

limk→∞

infi≥k

ξi > r

}= lim

k→∞Tr{

infi≥k

ξi > r

}≤ lim inf

k→∞Tr {ξk > r} .

The inequality (4.29) is proved. Similarly, supi≥k

ξi is a decreasing sequence and

supi≥k

ξi ≥ ξk for each k. It follows from Theorem 4.10 that

Tr{

lim supi→∞

ξi ≥ r

}= Tr

{limk→∞

supi≥k

ξi ≥ r

}= lim

k→∞Tr{

supi≥k

ξi ≥ r

}≥ lim sup

k→∞Tr {ξk ≥ r} .

The inequality (4.30) is proved. Furthermore, we have

Tr{

lim infi→∞

ξi ≤ r}

= Tr{

limk→∞

infi≥k

ξi ≤ r

}= lim

k→∞Tr{

infi≥k

ξi ≤ r

}≥ lim sup

k→∞Tr {ξk ≤ r} .

The inequality (4.31) is proved. Similarly,

Tr{

lim supi→∞

ξi < r

}= Tr

{limk→∞

supi≥k

ξi < r

}= lim

k→∞Tr{

supi≥k

ξi < r

}≤ lim inf

k→∞Tr {ξk < r} .

The inequality (4.32) is proved.

Theorem 4.12 Let {ξi} be a sequence of rough variables such that the limit lim_{i→∞} ξi exists and is a rough variable. Then for almost all r ∈ ℜ, we have

    lim_{i→∞} Tr{ξi ≥ r} = Tr{ lim_{i→∞} ξi ≥ r }.     (4.33)

The equation (4.33) remains true if "≥" is replaced with "≤", ">" or "<".

Proof: Write ξi → ξ. Note that Tr{ξ ≥ r} is a decreasing function of r andcontinuous almost everywhere. The theorem is proved if we can verify that(4.33) holds for any continuity point r0 of Tr{ξ ≥ r}. For any given ε > 0,there exists δ > 0 such that

|Tr{ξ ≥ r0 ± δ} − Tr{ξ ≥ r0}| ≤ε

2. (4.34)


Now we define

Λn =∞⋂i=n

{|ξi − ξ| < δ}, n = 1, 2, · · ·

Then {Λn} is an increasing sequence such that Λn → Λ. Thus there exists aninteger m such that Tr{Λm} > 1 − ε/2 and Tr{Λc

m} < ε/2. For any i > m,we have

{ξi ≥ r0} = ({ξi ≥ r0} ∩ Λm) ∪ ({ξi ≥ r0} ∩ Λcm) ⊂ {ξ ≥ r0 − δ} ∪ Λc

m.

By using (4.34), we get

Tr{ξi ≥ r0} ≤ Tr{ξ ≥ r0 − δ}+ Tr{Λcm} ≤ Tr{ξ ≥ r0}+ ε. (4.35)

Similarly, for i > m, we have

{ξ ≥ r0 + δ} = ({ξ ≥ r0 + δ} ∩ Λm)∪ ({ξ ≥ r0 + δ} ∩ Λcm) ⊂ {ξi ≥ r0}∪Λc

m.

By using (4.34), we get

Tr{ξ ≥ r0} −ε

2≤ Tr{ξ ≥ r0 + δ} ≤ Tr{ξi ≥ r0}+

ε

2. (4.36)

It follows from (4.35) and (4.36) that

Tr{ξ ≥ r0} − ε ≤ Tr{ξi ≥ r0} ≤ Tr{ξ ≥ r0}+ ε.

Letting ε→ 0, we obtain (4.33). The theorem is proved.

4.4 Trust Distribution

Definition 4.13 (Liu [75]) The trust distribution Φ : [−∞,+∞] → [0, 1] of a rough variable ξ is defined by

    Φ(x) = Tr{λ ∈ Λ | ξ(λ) ≤ x}.     (4.37)

That is, Φ(x) is the trust that the rough variable ξ takes a value less than or equal to x.

Theorem 4.13 The trust distribution Φ of a rough variable ξ is a nondecreasing and right-continuous function satisfying

    Φ(−∞) = lim_{x→−∞} Φ(x) = 0,   Φ(+∞) = lim_{x→+∞} Φ(x) = 1.     (4.38)

Conversely, if Φ is a nondecreasing and right-continuous function satisfying (4.38), then there is a unique measure π on the Borel algebra of ℜ such that π{(−∞, x]} = Φ(x) for all x ∈ [−∞,+∞]. Furthermore, the rough variable defined as the identity function

    ξ(x) = x,   ∀x ∈ ℜ     (4.39)

from the rough space (ℜ,ℜ,A,π) to ℜ has the trust distribution Φ.


Proof: It is clear that the trust distribution Φ is nondecreasing. Next, let {εi} be a sequence of positive numbers such that εi → 0 as i → ∞. Then, for every i ≥ 1, we have

    Φ(x + εi) − Φ(x) = Tr{x < ξ ≤ x + εi}.

It follows from the trust continuity theorem that

    lim_{i→∞} (Φ(x + εi) − Φ(x)) = Tr{∅} = 0.

Hence Φ is right-continuous. Finally,

    lim_{x→−∞} Φ(x) = lim_{x→−∞} Tr{ξ ≤ x} = Tr{∅} = 0,
    lim_{x→+∞} Φ(x) = lim_{x→+∞} Tr{ξ ≤ x} = Tr{Λ} = 1.

Conversely, it follows from Theorem 1.21 that there is a unique measure π on the Borel algebra of ℜ such that π{(−∞, x]} = Φ(x) for all x ∈ [−∞,+∞]. Furthermore, it is easy to verify that the rough variable defined by (4.39) from the rough space (ℜ,ℜ,A,π) to ℜ has the trust distribution Φ. The theorem is proved.

Theorem 4.13 states that the identity function is a universal function for any trust distribution by defining an appropriate rough space. In fact, there is a universal rough space for any trust distribution by defining an appropriate function. It is given by the following theorem.

Theorem 4.14 Let (Λ,Δ,A,π) be a rough space, where Λ = Δ = (0, 1), A is the Borel algebra, and π is the Lebesgue measure. If Φ is a trust distribution, then the function

    ξ(λ) = sup{x | Φ(x) ≤ λ}     (4.40)

from Λ to ℜ is a rough variable whose trust distribution is just Φ.

Proof: Since ξ(λ) is an increasing function, it is a rough variable. For any y ∈ ℜ, we have

    Tr{ξ ≤ y} = Tr{λ | λ ≤ Φ(y)} = Φ(y).

The theorem is proved.
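Theorem 4.14 can be illustrated numerically. The Python sketch below (ours, not from the book) inverts the trust distribution of the interval number [2, 5] on a grid of λ values in (0, 1) and checks that the empirical fraction of λ with ξ(λ) ≤ x reproduces Φ(x); the grid sizes are our own choices.

    # Sketch of Theorem 4.14: on Lambda = Delta = (0,1) with Lebesgue measure,
    # xi(lambda) = sup{x | Phi(x) <= lambda} has trust distribution Phi.

    a, b = 2.0, 5.0

    def Phi(x):                           # trust distribution of the interval number [2, 5]
        if x <= a: return 0.0
        if x >= b: return 1.0
        return (x - a) / (b - a)

    M = 1000
    xs = [1.0 + 5.0 * k / (2 * M) for k in range(2 * M + 1)]        # grid on [1, 6] around [a, b]
    lams = [(k + 0.5) / M for k in range(M)]                        # uniform lambdas in (0, 1)
    xi = [max(x for x in xs if Phi(x) <= lam) for lam in lams]      # xi(lambda) on the grid

    for x in (2.5, 3.5, 4.5):
        frac = sum(1 for v in xi if v <= x) / M                     # empirical Tr{xi <= x}
        print(x, round(frac, 3), round(Phi(x), 3))                  # the two columns should agree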

Theorem 4.15 Let Φ1 and Φ2 be two trust distributions such that Φ1(x) = Φ2(x) for all x ∈ D, a dense set of ℜ. Then Φ1 ≡ Φ2.

Proof: Since D is dense everywhere, for any point x there exists a sequence {xi} in D such that xi ↓ x as i → ∞, and Φ1(xi) = Φ2(xi) for all i. It follows from the right-continuity of the trust distribution that Φ1(x) = Φ2(x). The theorem is proved.


Theorem 4.16 A rough variable ξ with trust distribution Φ is
(a) nonnegative if and only if Φ(x) = 0 for all x < 0;
(b) positive if and only if Φ(x) = 0 for all x ≤ 0;
(c) simple if and only if Φ is a simple function;
(d) discrete if and only if Φ is a step function;
(e) continuous if and only if Φ is a continuous function.

Proof: The parts (a), (b), (c) and (d) follow immediately from the definition. Next we prove part (e). If ξ is a continuous rough variable, then Tr{ξ = x} = 0. It follows from the trust continuity theorem that

    lim_{y↑x} (Φ(x) − Φ(y)) = lim_{y↑x} Tr{y < ξ ≤ x} = Tr{ξ = x} = 0

which proves the left-continuity of Φ. Since a trust distribution is always right-continuous, Φ is continuous. Conversely, if Φ is continuous, then we immediately have Tr{ξ = x} = 0 for each x ∈ ℜ.

Definition 4.14 A continuous rough variable is said to be
(a) singular if its trust distribution is a singular function;
(b) absolutely continuous if its trust distribution is absolutely continuous.

Theorem 4.17 Let Φ be the trust distribution of a rough variable. Then

    Φ(x) = r1Φ1(x) + r2Φ2(x) + r3Φ3(x),   x ∈ ℜ     (4.41)

where Φ1, Φ2, Φ3 are trust distributions of discrete, singular and absolutely continuous rough variables, respectively, and r1, r2, r3 are nonnegative numbers such that r1 + r2 + r3 = 1. Furthermore, the decomposition (4.41) is unique.

Proof: Let {xi} be the countable set of all discontinuity points of Φ. Wedefine a function as

f1(x) =∑xi≤x

(Φ(xi)− lim

y↑xi

Φ(y))

, x ∈ �.

Then f1(x) is a step function which is increasing and right-continuous withrespect to x. Now we set

f2(x) = Φ(x)− f1(x), x ∈ �.

Then we have

limz↓x

f2(z)− f2(x) = limz↓x

(Φ(z)− Φ(x))− limz↓x

(f1(z)− f1(x)) = 0,

limz↑x

f2(z)− f2(x) = limz↑x

(Φ(z)− Φ(x))− limz↑x

(f1(y)− f1(x)) = 0.


That is, the function f2(x) is continuous. Next we prove that f2(x) is in-creasing. Let x′ < x be given. Then we may verify that∑

x′<xi≤x

(Φ(xi)− lim

y↑xi

Φ(y))≤ Φ(x)− Φ(x′).

Thus we have

f2(x)− f2(x′) = Φ(x)− Φ(x′)−∑

x′<xi≤x

(Φ(xi)− lim

y↑xi

Φ(y))≥ 0

which implies that f2(x) is an increasing function of x. It has been provedthat the increasing continuous function f2 has a unique decomposition f2 =g2 + g3, where g2 is an increasing singular function, and g3 is an increasingabsolutely continuous function. Thus

Φ(x) = f1(x) + g2(x) + g3(x), ∀x ∈ �.

We denote

limx→+∞ f1(x) = r1, lim

x→+∞ g2(x) = r2, limx→+∞ g3(x) = r3

where r1, r2, r3 are nonnegative numbers such that r1 + r2 + r3 = 1. Fornonzero r1, r2, r3, we set

Φ1(x) =f1(x)

r1, Φ2(x) =

g2(x)r2

, Φ3(x) =g3(x)

r3, x ∈ �.

It is easy to verify that Φ1,Φ2,Φ3 are trust distributions of discrete, singular,absolutely continuous rough variables, respectively, and (4.41) is met. Sincethe step function is unique, the decomposition is unique, too.

Theorem 4.18 Let ξ be a rough variable. Then the function Tr{ξ ≥ x} is decreasing and left-continuous.

Proof: The function Tr{ξ ≥ x} is clearly decreasing. Next, let {εi} be a sequence of positive numbers such that εi → 0 as i → ∞. Then, for every i ≥ 1, we have

    Tr{ξ ≥ x − εi} − Tr{ξ ≥ x} = Tr{x − εi ≤ ξ < x}.

It follows from the trust continuity theorem that

    lim_{i→∞} (Tr{ξ ≥ x − εi} − Tr{ξ ≥ x}) = Tr{∅} = 0.

Hence Tr{ξ ≥ x} is a left-continuous function.


Definition 4.15 (Liu [75]) The trust density function φ : ℜ → [0,+∞) of a rough variable ξ is a function such that

    Φ(x) = ∫_{−∞}^x φ(y) dy     (4.42)

holds for all x ∈ [−∞,+∞], where Φ is the trust distribution of ξ.

Example 4.9: Let ξ = ([a, b], [c, d]) be a rough variable with c ≤ a < b ≤ d. Then its trust distribution is

    Φ(x) = 0,                                                           if x ≤ c
           (x − c) / (2(d − c)),                                        if c ≤ x ≤ a
           ([(b − a) + (d − c)]x + 2ac − ad − bc) / (2(b − a)(d − c)),  if a ≤ x ≤ b
           (x + d − 2c) / (2(d − c)),                                   if b ≤ x ≤ d
           1,                                                           if d ≤ x

and the trust density function is

    φ(x) = 1 / (2(d − c)),                   if c ≤ x ≤ a or b ≤ x ≤ d
           1 / (2(b − a)) + 1 / (2(d − c)),  if a ≤ x ≤ b
           0,                                otherwise.

When the rough variable ξ becomes an interval number [a, b], the trust distribution is

    Φ(x) = 0,                  if x ≤ a
           (x − a) / (b − a),  if a ≤ x ≤ b
           1,                  if b ≤ x

and the trust density function is

    φ(x) = 1 / (b − a),  if a ≤ x ≤ b
           0,            otherwise.
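As a small consistency check (ours, not part of the text), the density of Example 4.9 can be integrated numerically and compared against the closed-form distribution; the parameter values and quadrature step below are our own choices.

    # Numerical check that the trust density of Example 4.9 integrates to the
    # trust distribution of xi = ([a,b],[c,d]).

    a, b, c, d = 1.0, 2.0, 0.0, 4.0

    def phi(x):                                   # trust density function
        if a <= x <= b:
            return 1.0 / (2 * (b - a)) + 1.0 / (2 * (d - c))
        if c <= x <= a or b <= x <= d:
            return 1.0 / (2 * (d - c))
        return 0.0

    def Phi(x):                                   # trust distribution (closed form)
        if x <= c: return 0.0
        if x <= a: return (x - c) / (2 * (d - c))
        if x <= b: return (((b - a) + (d - c)) * x + 2*a*c - a*d - b*c) / (2 * (b - a) * (d - c))
        if x <= d: return (x + d - 2*c) / (2 * (d - c))
        return 1.0

    def integral_phi(x, n=20000):                 # crude left-endpoint quadrature of phi on [c-1, x]
        lo = c - 1.0
        h = (x - lo) / n
        return sum(phi(lo + k * h) for k in range(n)) * h

    for x in (0.5, 1.5, 2.5, 3.5, 4.5):
        print(x, round(integral_phi(x), 4), round(Phi(x), 4))   # columns should agree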

Theorem 4.19 Let ξ be a rough variable whose trust density function φ exists. Then for any Borel set B of ℜ, we have

    Tr{ξ ∈ B} = ∫_B φ(y) dy.     (4.43)


Proof: Let C be the class of all subsets C of � for which the relation

Tr{ξ ∈ C} =∫C

φ(y)dy (4.44)

holds. We will show that C contains all Borel sets of �. It follows from thetrust continuity theorem and relation (4.44) that C is a monotone class. It isalso clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and� since

Tr{ξ ∈ (−∞, a]} = Φ(a) =∫ a

−∞φ(y)dy,

Tr{ξ ∈ (b,+∞)} = Φ(+∞)− Φ(b) =∫ +∞

b

φ(y)dy,

Tr{ξ ∈ (a, b]} = Φ(b)− Φ(a) =∫ b

a

φ(y)dy,

Tr{ξ ∈ �} = Φ(+∞) =∫ +∞

−∞φ(y)dy

where Φ is the trust distribution of ξ. Let F be the class of all finite unionsof disjoint sets of the form (−∞, a], (a, b], (b,∞) and �. Note that for anydisjoint sets C1, C2, · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm, we have

Tr{ξ ∈ C} =m∑j=1

Tr{ξ ∈ Cj} =m∑j=1

∫Cj

φ(y)dy =∫C

φ(y)dy.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the classF is an algebra. Since the smallest σ-algebra containing F is just the Borelalgebra of �, the monotone class theorem implies that C contains all Borelsets of �.

Definition 4.16 (Liu [75]) The joint trust distribution Φ : [−∞,+∞]n → [0, 1] of a rough vector (ξ1, ξ2, · · · , ξn) is defined by

    Φ(x1, x2, · · · , xn) = Tr{λ ∈ Λ | ξ1(λ) ≤ x1, ξ2(λ) ≤ x2, · · · , ξn(λ) ≤ xn}.

Definition 4.17 (Liu [75]) The joint trust density function φ : ℜn → [0,+∞) of a rough vector (ξ1, ξ2, · · · , ξn) is a function such that

    Φ(x1, x2, · · · , xn) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xn} φ(y1, y2, · · · , yn) dy1 dy2 · · · dyn     (4.45)

holds for all (x1, x2, · · · , xn) ∈ [−∞,+∞]n, where Φ is the joint trust distribution of (ξ1, ξ2, · · · , ξn).


4.5 Independent and Identical Distribution

Definition 4.18 The rough variables ξ1, ξ2, · · · , ξm are said to be independent if and only if

    Tr{ξi ∈ Bi, i = 1, 2, · · · ,m} = ∏_{i=1}^m Tr{ξi ∈ Bi}     (4.46)

for any Borel sets B1, B2, · · · , Bm of ℜ.

Definition 4.19 The rough variables ξi, i ∈ I are said to be independent if and only if for all finite collections {i1, i2, · · · , ik} of distinct indices in I, we have

    Tr{ξij ∈ Bij, j = 1, 2, · · · , k} = ∏_{j=1}^k Tr{ξij ∈ Bij}     (4.47)

for any Borel sets Bi1, Bi2, · · · , Bik of ℜ.

Theorem 4.20 Let ξi be independent rough variables, and fi : ℜ → ℜ measurable functions, i = 1, 2, · · · ,m. Then f1(ξ1), f2(ξ2), · · · , fm(ξm) are independent rough variables.

Proof: For any Borel sets B1, B2, · · · , Bm of ℜ, we have

    Tr{f1(ξ1) ∈ B1, f2(ξ2) ∈ B2, · · · , fm(ξm) ∈ Bm}
        = Tr{ξ1 ∈ f1⁻¹(B1), ξ2 ∈ f2⁻¹(B2), · · · , ξm ∈ fm⁻¹(Bm)}
        = Tr{ξ1 ∈ f1⁻¹(B1)}Tr{ξ2 ∈ f2⁻¹(B2)} · · · Tr{ξm ∈ fm⁻¹(Bm)}
        = Tr{f1(ξ1) ∈ B1}Tr{f2(ξ2) ∈ B2} · · · Tr{fm(ξm) ∈ Bm}.

Thus f1(ξ1), f2(ξ2), · · · , fm(ξm) are independent rough variables.

Theorem 4.21 Let ξi be rough variables with trust distributions Φi, i = 1, 2, · · · ,m, respectively, and Φ the trust distribution of (ξ1, ξ2, · · · , ξm). Then ξ1, ξ2, · · · , ξm are independent if and only if

    Φ(x1, x2, · · · , xm) = Φ1(x1)Φ2(x2) · · · Φm(xm)     (4.48)

for all (x1, x2, · · · , xm) ∈ ℜm.

Proof: If ξ1, ξ2, · · · , ξm are independent rough variables, then we have

    Φ(x1, x2, · · · , xm) = Tr{ξ1 ≤ x1, ξ2 ≤ x2, · · · , ξm ≤ xm}
                         = Tr{ξ1 ≤ x1}Tr{ξ2 ≤ x2} · · · Tr{ξm ≤ xm}
                         = Φ1(x1)Φ2(x2) · · · Φm(xm).


Conversely, assume that (4.48) holds. Let x2, x3, · · · , xm be fixed realnumbers, and let C be the class of all subsets C of � for which the relation

Tr{ξ1 ∈ C, ξ2 ≤ x2, · · · , ξm ≤ xm} = Tr{ξ1 ∈ C}m∏i=2

Tr{ξi ≤ xi} (4.49)

holds. We will show that C contains all Borel sets of �. It follows from thetrust continuity theorem and relation (4.49) that C is a monotone class. It isalso clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and�. Let F be the class of all finite unions of disjoint sets of the form (−∞, a],(a, b], (b,∞) and �. Note that for any disjoint sets C1, C2, · · · , Ck of F andC = C1 ∪ C2 ∪ · · · ∪ Ck, we have

Tr{ξ1 ∈ C, ξ2 ≤ x2, · · · , ξm ≤ xm}

=∑k

j=1 Tr{ξ1 ∈ Cj , ξ2 ≤ x2, · · · , ξm ≤ xm}= Tr{ξ1 ∈ C}Tr{ξ2 ≤ x2} · · ·Tr{ξm ≤ xm}.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the classF is an algebra. Since the smallest σ-algebra containing F is just the Borelalgebra of �, the monotone class theorem implies that C contains all Borelsets of �.

Applying the same reasoning to each ξi in turn, we obtain the indepen-dence of the rough variables.

Theorem 4.22 Let ξi be rough variables with trust density functions φi, i = 1, 2, · · · ,m, respectively, and φ the trust density function of (ξ1, ξ2, · · · , ξm). Then ξ1, ξ2, · · · , ξm are independent if and only if

    φ(x1, x2, · · · , xm) = φ1(x1)φ2(x2) · · · φm(xm)     (4.50)

for almost all (x1, x2, · · · , xm) ∈ ℜm.

Proof: If φ(x1, x2, · · · , xm) = φ1(x1)φ2(x2) · · ·φm(xm) a.e., then we have

Φ(x1, x2, · · · , xm) =∫ x1

−∞

∫ x2

−∞· · ·∫ xm

−∞φ(t1, t2, · · · , tm)dt1dt2 · · ·dtm

=∫ x1

−∞

∫ x2

−∞· · ·∫ xm

−∞φ1(t1)φ2(t2) · · ·φm(tm)dt1dt2 · · ·dtm

= Φ1(x1)Φ2(x2) · · ·Φm(xm)

for all (x1, x2, · · · , xm) ∈ �m. Thus ξ1, ξ2, · · · , ξm are independent. Con-versely, if ξ1, ξ2, · · · , ξm are independent, then

Φ(x1, x2, · · · , xm) = Φ1(x1)Φ2(x2) · · ·Φm(xm)

=∫ x1

−∞

∫ x2

−∞· · ·∫ xm

−∞φ1(t1)φ2(t2) · · ·φm(tm)dt1dt2 · · ·dtm

which implies that φ(x1, x2, · · · , xm) = φ1(x1)φ2(x2) · · ·φm(xm) a.e.


Definition 4.20 The rough variables ξ1, ξ2, · · · , ξm are said to be identically distributed if and only if

    Tr{ξi ∈ B} = Tr{ξj ∈ B},   i, j = 1, 2, · · · ,m     (4.51)

for any Borel set B of ℜ.

Theorem 4.23 The rough variables ξ and η are identically distributed if and only if they have the same trust distribution.

Proof: Let Φ and Ψ be the trust distributions of ξ and η, respectively. If ξ and η are identically distributed rough variables, then, for any x ∈ ℜ, we have

    Φ(x) = Tr{ξ ∈ (−∞, x]} = Tr{η ∈ (−∞, x]} = Ψ(x).

Thus ξ and η have the same trust distribution.

Conversely, assume that ξ and η have the same trust distribution. Let 𝒞 be the class of all subsets C of ℜ for which the relation

    Tr{ξ ∈ C} = Tr{η ∈ C}     (4.52)

holds. We will show that 𝒞 contains all Borel sets of ℜ. It follows from the trust continuity theorem and relation (4.52) that 𝒞 is a monotone class. It is also clear that 𝒞 contains all intervals of the form (−∞, a], (a, b], (b,∞) and ℜ since ξ and η have the same trust distribution. Let ℱ be the class of all finite unions of disjoint sets of the form (−∞, a], (a, b], (b,∞) and ℜ. Note that for any disjoint sets C1, C2, · · · , Ck of ℱ and C = C1 ∪ C2 ∪ · · · ∪ Ck, we have

    Tr{ξ ∈ C} = Σ_{j=1}^k Tr{ξ ∈ Cj} = Σ_{j=1}^k Tr{η ∈ Cj} = Tr{η ∈ C}.

That is, C ∈ 𝒞. Hence we have ℱ ⊂ 𝒞. It may also be verified that the class ℱ is an algebra. Since the smallest σ-algebra containing ℱ is just the Borel algebra of ℜ, the monotone class theorem implies that 𝒞 contains all Borel sets of ℜ.

Theorem 4.24 Let φ and ψ be the trust density functions of rough variables ξ and η, respectively. Then ξ and η are identically distributed if and only if φ = ψ, a.e.

Proof: It follows from Theorem 4.23 that the rough variables ξ and η are identically distributed if and only if they have the same trust distribution, if and only if φ = ψ, a.e.

Example 4.10: Let ξ be a rough variable, and a a positive number. Then

    ξ∗ = ξ,  if |ξ| < a
         0,  otherwise

is a bounded rough variable known as ξ truncated at a. Let ξ1, ξ2, · · · , ξn be iid rough variables. Then for any given number a > 0, the rough variables ξ1∗, ξ2∗, · · · , ξn∗ are iid.

Definition 4.21 The n-dimensional rough vectors ξ1, ξ2, · · · , ξm are said to be independent if and only if

    Tr{ξi ∈ Bi, i = 1, 2, · · · ,m} = ∏_{i=1}^m Tr{ξi ∈ Bi}     (4.53)

for any Borel sets B1, B2, · · · , Bm of ℜn.

Definition 4.22 The n-dimensional rough vectors ξ1, ξ2, · · · , ξm are said to be identically distributed if and only if

    Tr{ξi ∈ B} = Tr{ξj ∈ B},   i, j = 1, 2, · · · ,m     (4.54)

for any Borel set B of ℜn.

4.6 Expected Value Operator

In order to measure a rough variable, we define an expected value operator.

Definition 4.23 (Liu [75]) Let ξ be a rough variable. Then the expected value of ξ is defined by

    E[ξ] = ∫_0^{+∞} Tr{ξ ≥ r} dr − ∫_{−∞}^0 Tr{ξ ≤ r} dr     (4.55)

provided that at least one of the two integrals is finite.

Example 4.11: Let ξ = ([a, b], [c, d]) be a rough variable with c ≤ a < b ≤ d. We then have

    E[ξ] = (1/4)(a + b + c + d).

Especially, if the rough variable ξ degenerates to an interval number [a, b], then we have

    E[ξ] = (1/2)(a + b).
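The value (a + b + c + d)/4 can be recovered directly from Definition 4.23 when π is taken as the Lebesgue measure (the Laplace criterion). The Python sketch below is our own numerical check; the quadrature grid and helper names are assumptions of the illustration.

    # Check E[xi] = (a+b+c+d)/4 for xi = ([a,b],[c,d]) using Definition 4.23 with
    # E[xi] = int_0^inf Tr{xi >= r} dr - int_{-inf}^0 Tr{xi <= r} dr.

    a, b, c, d = -1.0, 2.0, -3.0, 4.0

    def length(lo, hi):                 # length of [lo, hi], zero if empty
        return max(0.0, hi - lo)

    def tr_ge(r):                       # Tr{xi >= r} = (pi{[r,inf)}/pi{Lambda} + pi{[r,inf) ∩ Delta}/pi{Delta}) / 2
        return 0.5 * (length(max(c, r), d) / (d - c) + length(max(a, r), b) / (b - a))

    def tr_le(r):                       # Tr{xi <= r}
        return 0.5 * (length(c, min(d, r)) / (d - c) + length(a, min(b, r)) / (b - a))

    n, lo, hi = 100000, c - 1.0, d + 1.0
    h = (hi - lo) / n
    e = sum((tr_ge(r) if r >= 0 else -tr_le(r)) * h for r in (lo + k * h for k in range(n)))
    print(round(e, 3), (a + b + c + d) / 4)    # both values are 0.5 for these parameters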

Theorem 4.25 Let ξ be a nonnegative rough variable. Then

    Σ_{i=1}^∞ Tr{ξ ≥ i} ≤ E[ξ] ≤ 1 + Σ_{i=1}^∞ Tr{ξ ≥ i},     (4.56)

    Σ_{i=1}^∞ i Tr{i + 1 > ξ ≥ i} ≤ E[ξ] ≤ Σ_{i=0}^∞ (i + 1) Tr{i + 1 > ξ ≥ i}.     (4.57)


Proof: Since Tr{ξ ≥ r} is a decreasing function of r, we have

E[ξ] =∞∑i=1

∫ i

i−1

Tr{ξ ≥ r}dr ≥∞∑i=1

∫ i

i−1

Tr{ξ ≥ i}dr =∞∑i=1

Tr{ξ ≥ i},

E[ξ] =∞∑i=1

∫ i

i−1

Tr{ξ ≥ r}dr ≤∞∑i=1

∫ i

i−1

Tr{ξ ≥ i− 1}dr = 1 +∞∑i=1

Tr{ξ ≥ i}.

Thus (4.56) is proved. The inequality (4.57) is from the following two equa-tions,

∞∑i=1

Tr{ξ ≥ i} =∞∑i=1

∞∑j=i

Tr{j + 1 > ξ ≥ j}

=∞∑j=1

j∑i=1

Tr{j + 1 > ξ ≥ j}

=∞∑j=1

jTr{j + 1 > ξ ≥ j},

1 +∞∑i=1

Tr{ξ ≥ i} =∞∑i=0

Tr{i + 1 > ξ ≥ i}+∞∑i=1

iTr{i + 1 > ξ ≥ i}

=∞∑i=0

(i + 1)Tr{i + 1 > ξ ≥ i}.

Theorem 4.26 Let ξ be a rough variable, and t a positive number. Then E[|ξ|^t] < ∞ if and only if

    Σ_{i=1}^∞ Tr{|ξ| ≥ i^{1/t}} < ∞.     (4.58)

Proof: The theorem follows immediately from Tr{|ξ|^t ≥ i} = Tr{|ξ| ≥ i^{1/t}} and Theorem 4.25.

Theorem 4.27 Let ξ be a rough variable, and t a positive number. If E[|ξ|^t] < ∞, then

    lim_{x→∞} x^t Tr{|ξ| ≥ x} = 0.     (4.59)

Conversely, let ξ be a rough variable satisfying (4.59) for some t > 0. Then E[|ξ|^s] < ∞ for any 0 ≤ s < t.

Proof: It follows from the definition of expected value that

E[|ξ|t] =∫ ∞

0

Tr{|ξ|t ≥ r}dr <∞.


Thus we have

limx→∞

∫ ∞

xt/2

Tr{|ξ|t ≥ r}dr = 0.

The equation (4.59) is proved by the following relation,

∫ ∞

xt/2

Tr{|ξ|t ≥ r}dr ≥∫ xt

xt/2

Tr{|ξ|t ≥ r}dr ≥ 12xtTr{|ξ| ≥ x}.

Conversely, if (4.59) holds, then there exists a number a such that

xtTr{|ξ| ≥ x} ≤ 1, ∀x ≥ a.

Thus we have

E[|ξ|s] =∫ a

0

Tr {|ξ|s ≥ r} dr +∫ +∞

a

Tr {|ξ|s ≥ r} dr

≤∫ a

0

Tr {|ξ|s ≥ r} dr +∫ +∞

0

srs−1Tr {|ξ| ≥ r} dr

≤∫ a

0

Tr {|ξ|s ≥ r} dr + s

∫ +∞

0

rs−t−1dr

≤ +∞.

(by∫ ∞

0

rpdr <∞ for any p < −1)

The theorem is proved.

Theorem 4.28 (Liu [75]) Let ξ be a rough variable whose trust density function φ exists. If the Lebesgue integral

    ∫_{−∞}^{+∞} x φ(x) dx

is finite, then we have

    E[ξ] = ∫_{−∞}^{+∞} x φ(x) dx.     (4.60)


Proof: It follows from the definition of the expected value operator and the Fubini theorem that

    E[ξ] = ∫_0^{+∞} Tr{ξ ≥ r} dr − ∫_{−∞}^0 Tr{ξ ≤ r} dr
         = ∫_0^{+∞} [ ∫_r^{+∞} φ(x) dx ] dr − ∫_{−∞}^0 [ ∫_{−∞}^r φ(x) dx ] dr
         = ∫_0^{+∞} [ ∫_0^x φ(x) dr ] dx − ∫_{−∞}^0 [ ∫_x^0 φ(x) dr ] dx
         = ∫_0^{+∞} x φ(x) dx + ∫_{−∞}^0 x φ(x) dx
         = ∫_{−∞}^{+∞} x φ(x) dx.

The theorem is proved.

Theorem 4.29 Let ξ be a rough variable with trust distribution Φ. If the Lebesgue-Stieltjes integral

    ∫_{−∞}^{+∞} x dΦ(x)

is finite, then we have

    E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).     (4.61)

Proof: Since the Lebesgue-Stieltjes integral∫ +∞−∞ xdΦ(x) is finite, we imme-

diately have

limy→+∞

∫ y

0

xdΦ(x) =∫ +∞

0

xdΦ(x), limy→−∞

∫ 0

y

xdΦ(x) =∫ 0

−∞xdΦ(x)

and

limy→+∞

∫ +∞

y

xdΦ(x) = 0, limy→−∞

∫ y

−∞xdΦ(x) = 0.

It follows from∫ +∞

y

xdΦ(x) ≥ y

(lim

z→+∞Φ(z)− Φ(y))

= y(1− Φ(y)) ≥ 0, if y > 0,

∫ y

−∞xdΦ(x) ≤ y

(Φ(y)− lim

z→−∞Φ(z))

= yΦ(y) ≤ 0, if y < 0

thatlim

y→+∞ y (1− Φ(y)) = 0, limy→−∞ yΦ(y) = 0.


Let 0 = x0 < x1 < x2 < · · · < xn = y be a partition of [0, y]. Then we have

n−1∑i=0

xi (Φ(xi+1)− Φ(xi))→∫ y

0

xdΦ(x)

andn−1∑i=0

(1− Φ(xi+1))(xi+1 − xi)→∫ y

0

Tr{ξ ≥ r}dr

as max{|xi+1 − xi| : i = 0, 1, · · · , n− 1} → 0. Since

n−1∑i=0

xi (Φ(xi+1)− Φ(xi))−n−1∑i=0

(1− Φ(xi+1))(xi+1 − xi) = y(Φ(y)− 1)→ 0

as y → +∞. This fact implies that∫ +∞

0

Tr{ξ ≥ r}dr =∫ +∞

0

xdΦ(x). (4.62)

A similar way may prove that

−∫ 0

−∞Tr{ξ ≤ r}dr =

∫ 0

−∞xdΦ(x). (4.63)

It follows from (4.62) and (4.63) that (4.61) holds.

Linearity of Expected Value Operator

Theorem 4.30 Let ξ be a rough variable whose expected value exists. Then for any numbers a and b, we have

    E[aξ + b] = aE[ξ] + b.     (4.64)

Proof: In order to prove the theorem, it suffices to verify that E[ξ + b] =E[ξ]+ b and E[aξ] = aE[ξ]. It follows from the expected value operator that,if b ≥ 0,

E[ξ + b] =∫ ∞

0

Tr{ξ + b ≥ r}dr −∫ 0

−∞Tr{ξ + b ≤ r}dr

=∫ ∞

0

Tr{ξ ≥ r − b}dr −∫ 0

−∞Tr{ξ ≤ r − b}dr

= E[ξ] +∫ b

0

(Tr{ξ ≥ r − b}+ Tr{ξ < r − b}) dr

= E[ξ] + b.


If b < 0, then we have

E[ξ + b] = E[ξ]−∫ 0

b

(Tr{ξ ≥ r − b}+ Tr{ξ < r − b}) dr = E[ξ] + b.

On the other hand, if a = 0, then the equation E[aξ] = aE[ξ] holds trivially.If a > 0, we have

E[aξ] =∫ ∞

0

Tr{aξ ≥ r}dr −∫ 0

−∞Tr{aξ ≤ r}dr

=∫ ∞

0

Tr{ξ ≥ r

a

}dr −

∫ 0

−∞Tr{ξ ≤ r

a

}dr

= a

∫ ∞

0

Tr{ξ ≥ r

a

}d( r

a

)− a

∫ 0

−∞Tr{ξ ≤ r

a

}d( r

a

)= aE[ξ].

The equation E[aξ] = aE[ξ] is proved if we verify that E[−ξ] = −E[ξ]. Infact,

E[−ξ] =∫ ∞

0

Tr{−ξ ≥ r}dr −∫ 0

−∞Tr{−ξ ≤ r}dr

=∫ ∞

0

Tr {ξ ≤ −r} dr −∫ 0

−∞Tr {ξ ≥ −r} dr

=∫ 0

−∞Tr {ξ ≤ r} dr −

∫ ∞

0

Tr {ξ ≥ r} dr

= −E[ξ].

The proof is finished.

Theorem 4.31 Let ξ and η be rough variables with finite expected values. Then we have

    E[ξ + η] = E[ξ] + E[η].     (4.65)

Proof: We first prove the case where both ξ and η are nonnegative simplerough variables taking values a1, a2, · · · , am and b1, b2, · · · , bn, respectively.Then ξ + η is also a nonnegative simple rough variable taking values ai + bj ,i = 1, 2, · · · ,m, j = 1, 2, · · · , n. Thus we have

E[ξ + η] =m∑i=1

n∑j=1

(ai + bj)Tr{ξ = ai, η = bj}

=m∑i=1

n∑j=1

aiTr{ξ = ai, η = bj}+m∑i=1

n∑j=1

bjTr{ξ = ai, η = bj}

=m∑i=1

aiTr{ξ = ai}+n∑

j=1

bjTr{η = bj}

= E[ξ] + E[η].


Next we prove the case where ξ and η are nonnegative rough variables.For every i ≥ 1 and every λ ∈ Λ, we define

ξi(λ) =

⎧⎪⎨⎪⎩k − 1

2i, if

k − 12i≤ ξ(λ) <

k

2i, k = 1, 2, · · · , i2i

i, if i ≤ ξ(λ),

ηi(λ) =

⎧⎪⎨⎪⎩k − 1

2i, if

k − 12i≤ η(λ) <

k

2i, k = 1, 2, · · · , i2i

i, if i ≤ η(λ).

Then {ξi}, {ηi} and {ξi+ηi} are three sequences of nonnegative simple roughvariables such that ξi ↑ ξ, ηi ↑ η and ξi + ηi ↑ ξ + η as i→∞. The functionsTr{ξi > r}, Tr{ηi > r} and Tr{ξi + ηi > r} are also simple for i = 1, 2, · · ·Furthermore, it follows from Theorem 4.10 that

Tr{ξi > r} ↑ Tr{ξ > r}, ∀r ≥ 0

as i→∞. Since the expected value E[ξ] exists, we have

E[ξi] =∫ +∞

0

Tr{ξi > r}dr →∫ +∞

0

Tr{ξ > r}dr = E[ξ]

as i→∞. Similarly, we may prove that E[ηi]→ E[η] and E[ξi+ηi]→ E[ξ+η]as i → ∞. Therefore E[ξ + η] = E[ξ] + E[η] since we have proved thatE[ξi + ηi] = E[ξi] + E[ηi] for i = 1, 2, · · ·

Finally, if ξ and η are arbitrary rough variables, then we define

ξi(λ) =

{ξ(λ), if ξ(λ) ≥ −i

−i, otherwise,ηi(λ) =

{η(λ), if η(λ) ≥ −i

−i, otherwise.

Since the expected values E[ξ] and E[η] are finite, we have

limi→∞

E[ξi] = E[ξ], limi→∞

E[ηi] = E[η], limi→∞

E[ξi + ηi] = E[ξ + η].

Note that (ξi + i) and (ηi + i) are nonnegative rough variables. It followsfrom Theorem 4.30 that

E[ξ + η] = limi→∞

E[ξi + ηi]

= limi→∞

(E[(ξi + i) + (ηi + i)]− 2i)

= limi→∞

(E[ξi + i] + E[ηi + i]− 2i)

= limi→∞

(E[ξi] + i + E[ηi] + i− 2i)

= limi→∞

E[ξi] + limi→∞

E[ηi]

= E[ξ] + E[η]

which proves the theorem.


Theorem 4.32 Let ξ and η be rough variables with finite expected values. Then for any numbers a and b, we have

    E[aξ + bη] = aE[ξ] + bE[η].     (4.66)

Proof: The theorem follows immediately from Theorems 4.30 and 4.31.

Product of Independent Rough Variables

Theorem 4.33 Let ξ and η be independent rough variables with finite expected values. Then the expected value of ξη exists and

    E[ξη] = E[ξ]E[η].     (4.67)

Proof: We first prove the case where both ξ and η are nonnegative simplerough variables taking values a1, a2, · · · , am and b1, b2, · · · , bn, respectively.Then ξη is also a nonnegative simple rough variable taking values aibj , i =1, 2, · · · ,m, j = 1, 2, · · · , n. It follows from the independence of ξ and η that

E[ξη] =m∑i=1

n∑j=1

aibjTr{ξ = ai, η = bj}

=m∑i=1

n∑j=1

aibjTr{ξ = ai}Tr{η = bj}

=(

m∑i=1

aiTr{ξ = ai})(

n∑j=1

bjTr{η = bj})

= E[ξ]E[η].

Next we prove the case where ξ and η are nonnegative rough variables.For every i ≥ 1 and every λ ∈ Λ, we define

ξi(λ) =

⎧⎪⎨⎪⎩k − 1

2i, if

k − 12i≤ ξ(λ) <

k

2i, k = 1, 2, · · · , i2i

i, if i ≤ ξ(λ),

ηi(λ) =

⎧⎪⎨⎪⎩k − 1

2i, if

k − 12i≤ η(λ) <

k

2i, k = 1, 2, · · · , i2i

i, if i ≤ η(λ).

Then {ξi}, {ηi} and {ξiηi} are three sequences of nonnegative simple roughvariables such that ξi ↑ ξ, ηi ↑ η and ξiηi ↑ ξη as i → ∞. It follows fromthe independence of ξ and η that ξi and ηi are independent. Hence wehave E[ξiηi] = E[ξi]E[ηi] for i = 1, 2, · · · It follows from Theorem 4.10 thatTr{ξi > r}, i = 1, 2, · · · are simple functions such that

Tr{ξi > r} ↑ Tr{ξ > r}, for all r ≥ 0


as i→∞. Since the expected value E[ξ] exists, we have

E[ξi] =∫ +∞

0

Tr{ξi > r}dr →∫ +∞

0

Tr{ξ > r}dr = E[ξ]

as i → ∞. Similarly, we may prove that E[ηi] → E[η] and E[ξiηi] → E[ξη]as i→∞. Therefore E[ξη] = E[ξ]E[η].

Finally, if ξ and η are arbitrary independent rough variables, then thenonnegative rough variables ξ+ and η+ are independent and so are ξ+ andη−, ξ− and η+, ξ− and η−. Thus we have

E[ξ+η+] = E[ξ+]E[η+], E[ξ+η−] = E[ξ+]E[η−],

E[ξ−η+] = E[ξ−]E[η+], E[ξ−η−] = E[ξ−]E[η−].

It follows that

E[ξη] = E[(ξ+ − ξ−)(η+ − η−)]

= E[ξ+η+]− E[ξ+η−]− E[ξ−η+] + E[ξ−η−]

= E[ξ+]E[η+]− E[ξ+]E[η−]− E[ξ−]E[η+] + E[ξ−]E[η−]

= (E[ξ+]− E[ξ−]) (E[η+]− E[η−])

= E[ξ+ − ξ−]E[η+ − η−]

= E[ξ]E[η]

which proves the theorem.

Expected Value of Function of Rough Variable

Theorem 4.34 Let ξ be a rough variable with trust distribution Φ, and f : ℜ → ℜ a measurable function. If the Lebesgue-Stieltjes integral

    ∫_{−∞}^{+∞} f(x) dΦ(x)

is finite, then we have

    E[f(ξ)] = ∫_{−∞}^{+∞} f(x) dΦ(x).     (4.68)

Proof: It follows from the definition of expected value operator that

E[f(ξ)] =∫ +∞

0

Tr{f(ξ) ≥ r}dr −∫ 0

−∞Tr{f(ξ) ≤ r}dr. (4.69)

If f is a nonnegative simple measurable function, i.e.,

f(x) =

⎧⎪⎪⎨⎪⎪⎩a1, if x ∈ B1

a2, if x ∈ B2

· · ·am, if x ∈ Bm


where B1, B2, · · · , Bm are mutually disjoint Borel sets, then we have

E[f(ξ)] =∫ +∞

0

Tr{f(ξ) ≥ r}dr =m∑i=1

aiTr{ξ ∈ Bi}

=m∑i=1

ai

∫Bi

dΦ(x) =∫ +∞

−∞f(x)dΦ(x).

We next prove the case where f is a nonnegative measurable function. Letf1, f2, · · · be a sequence of nonnegative simple functions such that fi ↑ f asi→∞. We have proved that

E[fi(ξ)] =∫ +∞

0

Tr{fi(ξ) ≥ r}dr =∫ +∞

−∞fi(x)dΦ(x).

In addition, Theorem 4.10 states that Tr{fi(ξ) > r} ↑ Tr{f(ξ) > r} as i→∞for r ≥ 0. It follows from the monotone convergence theorem that

E[f(ξ)] =∫ +∞

0

Tr{f(ξ) > r}dr

= limi→∞

∫ +∞

0

Tr{fi(ξ) > r}dr

= limi→∞

∫ +∞

−∞fi(x)dΦ(x)

=∫ +∞

−∞f(x)dΦ(x).

Finally, if f is an arbitrary measurable function, then we have f = f+ − f−

andE[f(ξ)] = E[f+(ξ)− f−(ξ)]

= E[f+(ξ)]− E[f−(ξ)]

=∫ +∞

−∞f+(x)dΦ(x)−

∫ +∞

−∞f−(x)dΦ(x)

=∫ +∞

−∞f(x)dΦ(x).

The theorem is proved.

Sum of a Rough Number of Rough Variables

Theorem 4.35 Assume that {ξi} is a sequence of iid rough variables, and η is a positive rough integer (i.e., a rough variable taking "positive integer" values) that is independent of the sequence {ξi}. Then we have

    E[ Σ_{i=1}^η ξi ] = E[η]E[ξ1].     (4.70)

Proof: Since η is independent of the sequence {ξi}, we have

Tr{ ∑_{i=1}^{η} ξi ≥ r } = ∑_{k=1}^{∞} Tr{η = k} Tr{ξ1 + ξ2 + · · · + ξk ≥ r}.

If the ξi are nonnegative rough variables, then we have

E[ ∑_{i=1}^{η} ξi ] = ∫_0^{+∞} Tr{ ∑_{i=1}^{η} ξi ≥ r } dr
 = ∫_0^{+∞} ∑_{k=1}^{∞} Tr{η = k} Tr{ξ1 + ξ2 + · · · + ξk ≥ r} dr
 = ∑_{k=1}^{∞} Tr{η = k} ∫_0^{+∞} Tr{ξ1 + ξ2 + · · · + ξk ≥ r} dr
 = ∑_{k=1}^{∞} Tr{η = k} (E[ξ1] + E[ξ2] + · · · + E[ξk])
 = ∑_{k=1}^{∞} Tr{η = k} k E[ξ1]   (by the iid hypothesis)
 = E[η]E[ξ1].

If the ξi are arbitrary rough variables, then ξi = ξi+ − ξi−, and

E[ ∑_{i=1}^{η} ξi ] = E[ ∑_{i=1}^{η} (ξi+ − ξi−) ] = E[ ∑_{i=1}^{η} ξi+ − ∑_{i=1}^{η} ξi− ]
 = E[ ∑_{i=1}^{η} ξi+ ] − E[ ∑_{i=1}^{η} ξi− ] = E[η]E[ξ1+] − E[η]E[ξ1−]
 = E[η](E[ξ1+] − E[ξ1−]) = E[η]E[ξ1+ − ξ1−] = E[η]E[ξ1].

The theorem is thus proved.

Continuity Theorems

Theorem 4.36 (a) Let {ξi} be an increasing sequence of rough variables such that lim_{i→∞} ξi is a rough variable. If there exists a rough variable η with finite expected value such that ξi ≥ η for all i, then we have

lim_{i→∞} E[ξi] = E[ lim_{i→∞} ξi ].  (4.71)

(b) Let {ξi} be a decreasing sequence of rough variables such that lim_{i→∞} ξi is a rough variable. If there exists a rough variable η with finite expected value such that ξi ≤ η for all i, then (4.71) remains true.

Proof: Without loss of generality, we assume η ≡ 0. Then we have

lim_{i→∞} E[ξi] = lim_{i→∞} ∫_0^{+∞} Tr{ξi > r} dr
 = ∫_0^{+∞} lim_{i→∞} Tr{ξi > r} dr   (by Theorem 1.17)
 = ∫_0^{+∞} Tr{ lim_{i→∞} ξi > r } dr   (by Theorem 4.10)
 = E[ lim_{i→∞} ξi ].

The decreasing case may be proved by setting ξi = η − ξi ≥ 0.

Theorem 4.37 Let {ξi} be a sequence of rough variables such that

lim inf_{i→∞} ξi and lim sup_{i→∞} ξi

are rough variables. (a) If there exists a rough variable η with finite expected value such that ξi ≥ η for all i, then

E[ lim inf_{i→∞} ξi ] ≤ lim inf_{i→∞} E[ξi].  (4.72)

(b) If there exists a rough variable η with finite expected value such that ξi ≤ η for all i, then

E[ lim sup_{i→∞} ξi ] ≥ lim sup_{i→∞} E[ξi].  (4.73)

Proof: Without loss of generality, we assume η ≡ 0. Then we have

E[ lim inf_{i→∞} ξi ] = ∫_0^{+∞} Tr{ lim inf_{i→∞} ξi > r } dr
 ≤ ∫_0^{+∞} lim inf_{i→∞} Tr{ξi > r} dr   (by Theorem 4.11)
 ≤ lim inf_{i→∞} ∫_0^{+∞} Tr{ξi > r} dr   (by Fatou’s Lemma)
 = lim inf_{i→∞} E[ξi].

The inequality (4.72) is proved. The other inequality may be proved via setting ξi = η − ξi ≥ 0.

Theorem 4.38 Let {ξi} be a sequence of rough variables such that the limit lim_{i→∞} ξi exists and is a rough variable. If there exists a rough variable η with finite expected value such that |ξi| ≤ η for all i, then

lim_{i→∞} E[ξi] = E[ lim_{i→∞} ξi ].  (4.74)

Proof: It follows from Theorem 4.37 that

E[ lim inf_{i→∞} ξi ] ≤ lim inf_{i→∞} E[ξi] ≤ lim sup_{i→∞} E[ξi] ≤ E[ lim sup_{i→∞} ξi ].

Since lim_{i→∞} ξi exists, we have lim inf_{i→∞} ξi = lim sup_{i→∞} ξi = lim_{i→∞} ξi. Thus (4.74) holds.

Distance of Rough Variables

Definition 4.24 The distance of rough variables ξ and η is defined as

d(ξ, η) = E[|ξ − η|]. (4.75)

Theorem 4.39 Let ξ, η, τ be rough variables, and let d(·, ·) be the distance measure. Then we have
(a) d(ξ, η) = 0 if ξ = η;
(b) d(ξ, η) > 0 if ξ ≠ η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ d(ξ, τ) + d(η, τ).

Proof: The parts (a), (b) and (c) follow immediately from the definition. The part (d) is proved by the following relation,

E[|ξ − η|] ≤ E[|ξ − τ| + |η − τ|] = E[|ξ − τ|] + E[|η − τ|].

4.7 Variance, Covariance and Moments

Definition 4.25 (Liu [75]) Let ξ be a rough variable with finite expected value E[ξ]. The variance of ξ is defined as

V[ξ] = E[(ξ − E[ξ])²].  (4.76)

Theorem 4.40 If ξ is a rough variable whose variance exists, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].

Theorem 4.41 Let ξ be a rough variable with expected value e. Then V[ξ] = 0 if and only if Tr{ξ = e} = 1.

Proof: If V[ξ] = 0, then E[(ξ − e)²] = 0. Note that

E[(ξ − e)²] = ∫_0^{+∞} Tr{(ξ − e)² ≥ r} dr

which implies Tr{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have Tr{(ξ − e)² = 0} = 1, i.e., Tr{ξ = e} = 1.

Conversely, if Tr{ξ = e} = 1, then we have Tr{(ξ − e)² = 0} = 1 and Tr{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

V[ξ] = ∫_0^{+∞} Tr{(ξ − e)² ≥ r} dr = 0.

Definition 4.26 The standard deviation of a rough variable is defined as the nonnegative square root of its variance.

Definition 4.27 (Liu [79]) Let ξ and η be rough variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])].  (4.77)

In fact, we also have Cov[ξ, η] = E[ξη] − E[ξ]E[η]. In addition, if ξ and η are independent rough variables, then Cov[ξ, η] = 0. However, the converse is not true.

Theorem 4.42 If ξ1, ξ2, · · · , ξn are rough variables with finite expected values, then

V[ξ1 + ξ2 + · · · + ξn] = ∑_{i=1}^{n} V[ξi] + 2 ∑_{i=1}^{n−1} ∑_{j=i+1}^{n} Cov[ξi, ξj].  (4.78)

In particular, if ξ1, ξ2, · · · , ξn are independent, then

V[ξ1 + ξ2 + · · · + ξn] = V[ξ1] + V[ξ2] + · · · + V[ξn].  (4.79)

Proof: It follows from the definition of variance that

V[ ∑_{i=1}^{n} ξi ] = E[(ξ1 + ξ2 + · · · + ξn − E[ξ1] − E[ξ2] − · · · − E[ξn])²]
 = ∑_{i=1}^{n} E[(ξi − E[ξi])²] + 2 ∑_{i=1}^{n−1} ∑_{j=i+1}^{n} E[(ξi − E[ξi])(ξj − E[ξj])]

which implies (4.78). If ξ1, ξ2, · · · , ξn are independent, then Cov[ξi, ξj] = 0 for all i, j with i ≠ j. Thus (4.79) holds.

Definition 4.28 (Liu [79]) For any positive integer k, the expected value E[ξ^k] is called the kth moment of the rough variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the rough variable ξ.


4.8 Optimistic and Pessimistic Values

In this section, let us define two critical values—optimistic value and pessimistic value—to rank the rough variables.

Definition 4.29 (Liu [75]) Let ξ be a rough variable, and α ∈ (0, 1]. Then

ξsup(α) = sup{ r | Tr{ξ ≥ r} ≥ α }  (4.80)

is called the α-optimistic value to ξ, and

ξinf(α) = inf{ r | Tr{ξ ≤ r} ≥ α }  (4.81)

is called the α-pessimistic value to ξ.

Example 4.12: Let ξ = ([a, b], [c, d]) be a rough variable with c ≤ a < b ≤ d. Then the α-optimistic value of ξ is

ξsup(α) = (1 − 2α)d + 2αc,   if α ≤ (d − b)/(2(d − c)),
ξsup(α) = 2(1 − α)d + (2α − 1)c,   if α ≥ (2d − a − c)/(2(d − c)),
ξsup(α) = [d(b − a) + b(d − c) − 2α(b − a)(d − c)] / [(b − a) + (d − c)],   otherwise,

and the α-pessimistic value of ξ is

ξinf(α) = (1 − 2α)c + 2αd,   if α ≤ (a − c)/(2(d − c)),
ξinf(α) = 2(1 − α)c + (2α − 1)d,   if α ≥ (b + d − 2c)/(2(d − c)),
ξinf(α) = [c(b − a) + a(d − c) + 2α(b − a)(d − c)] / [(b − a) + (d − c)],   otherwise.

If the rough variable ξ degenerates to an interval number [a, b], then its α-optimistic value is

ξsup(α) = αa + (1 − α)b,

and its α-pessimistic value is

ξinf(α) = (1 − α)a + αb.
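The piecewise expressions above are easy to evaluate mechanically. The following Python sketch (function names and test values are illustrative, not from the book) computes the α-optimistic and α-pessimistic values of ξ = ([a, b], [c, d]) directly from Example 4.12.

def rough_sup(alpha, a, b, c, d):
    # alpha-optimistic value of ([a, b], [c, d]) with c <= a < b <= d (Example 4.12)
    if alpha <= (d - b) / (2 * (d - c)):
        return (1 - 2 * alpha) * d + 2 * alpha * c
    if alpha >= (2 * d - a - c) / (2 * (d - c)):
        return 2 * (1 - alpha) * d + (2 * alpha - 1) * c
    return (d * (b - a) + b * (d - c) - 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

def rough_inf(alpha, a, b, c, d):
    # alpha-pessimistic value of ([a, b], [c, d]) with c <= a < b <= d (Example 4.12)
    if alpha <= (a - c) / (2 * (d - c)):
        return (1 - 2 * alpha) * c + 2 * alpha * d
    if alpha >= (b + d - 2 * c) / (2 * (d - c)):
        return 2 * (1 - alpha) * c + (2 * alpha - 1) * d
    return (c * (b - a) + a * (d - c) + 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

# For alpha = 0.5 the two critical values coincide, in line with Theorem 4.44(c)-(d).
print(rough_sup(0.5, 1, 2, 0, 5), rough_inf(0.5, 1, 2, 0, 5))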

Theorem 4.43 Let ξ be a rough variable. Then we have

Tr{ξ ≥ ξsup(α)} ≥ α, Tr{ξ ≤ ξinf(α)} ≥ α (4.82)

where ξinf(α) and ξsup(α) are the α-pessimistic and α-optimistic values of the rough variable ξ, respectively.

Proof: It follows from the definition of the optimistic value that there exists an increasing sequence {ri} such that Tr{ξ ≥ ri} ≥ α and ri ↑ ξsup(α) as i → ∞. Since {λ | ξ(λ) ≥ ri} ↓ {λ | ξ(λ) ≥ ξsup(α)}, it follows from the trust continuity theorem that

Tr{ξ ≥ ξsup(α)} = lim_{i→∞} Tr{ξ ≥ ri} ≥ α.

The inequality Tr{ξ ≤ ξinf(α)} ≥ α may be proved similarly.

Theorem 4.44 Let ξinf(α) and ξsup(α) be the α-pessimistic and α-optimistic values of the rough variable ξ, respectively. Then we have
(a) ξinf(α) is an increasing function of α;
(b) ξsup(α) is a decreasing function of α;
(c) if α > 0.5, then ξinf(α) ≥ ξsup(α);
(d) if α ≤ 0.5, then ξinf(α) ≤ ξsup(α).

Proof: The cases (a) and (b) are obvious. Case (c): Write ξ̄(α) = (ξinf(α) + ξsup(α))/2. If ξinf(α) < ξsup(α), then we have

1 ≥ Tr{ξ < ξ̄(α)} + Tr{ξ > ξ̄(α)} ≥ α + α > 1.

A contradiction proves ξinf(α) ≥ ξsup(α). Case (d): Assume that ξinf(α) > ξsup(α). It follows from the definition of ξinf(α) that Tr{ξ ≤ ξ̄(α)} < α. Similarly, it follows from the definition of ξsup(α) that Tr{ξ ≥ ξ̄(α)} < α. Thus

1 ≤ Tr{ξ ≤ ξ̄(α)} + Tr{ξ ≥ ξ̄(α)} < α + α ≤ 1.

A contradiction proves ξinf(α) ≤ ξsup(α). The theorem is proved.

Theorem 4.45 Assume that ξ is a rough variable. Then, for any α ∈ (0, 1], we have
(a) if λ ≥ 0, then (λξ)sup(α) = λξsup(α) and (λξ)inf(α) = λξinf(α);
(b) if λ < 0, then (λξ)sup(α) = λξinf(α) and (λξ)inf(α) = λξsup(α).

Proof: (a) If λ = 0, then the part is obviously valid. When λ > 0, we have

(λξ)sup(α) = sup{ r | Tr{λξ ≥ r} ≥ α } = λ sup{ r/λ | Tr{ξ ≥ r/λ} ≥ α } = λξsup(α).

A similar way may prove that (λξ)inf(α) = λξinf(α).

(b) It suffices to verify that (−ξ)sup(α) = −ξinf(α) and (−ξ)inf(α) = −ξsup(α). In fact, for any α ∈ (0, 1], we have

(−ξ)sup(α) = sup{ r | Tr{−ξ ≥ r} ≥ α } = −inf{ −r | Tr{ξ ≤ −r} ≥ α } = −ξinf(α).

Similarly, we may prove that (−ξ)inf(α) = −ξsup(α). The theorem is proved.


4.9 Some Inequalities

Theorem 4.46 (Liu [79]) Let ξ be a rough variable, and f a nonnegative measurable function. If f is even and increasing on [0, ∞), then for any given number t > 0, we have

Tr{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).  (4.83)

Proof: It is clear that Tr{|ξ| ≥ f^{−1}(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f(ξ) that

E[f(ξ)] = ∫_0^{+∞} Tr{f(ξ) ≥ r} dr
        = ∫_0^{+∞} Tr{|ξ| ≥ f^{−1}(r)} dr
        ≥ ∫_0^{f(t)} Tr{|ξ| ≥ f^{−1}(r)} dr
        ≥ ∫_0^{f(t)} dr · Tr{|ξ| ≥ f^{−1}(f(t))}
        = f(t) · Tr{|ξ| ≥ t}

which proves the inequality.

Theorem 4.47 (Liu [79]) Let ξ be a rough variable. Then for any given numbers t > 0 and p > 0, we have

Tr{|ξ| ≥ t} ≤ E[|ξ|^p] / t^p.  (4.84)

Proof: It is a special case of Theorem 4.46 when f(x) = |x|^p.

Theorem 4.48 (Liu [79]) Let ξ be a rough variable whose variance V[ξ] exists. Then for any given number t > 0, we have

Tr{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t².  (4.85)

Proof: It is a special case of Theorem 4.46 when the rough variable ξ is replaced with ξ − E[ξ] and f(x) = x².

Example 4.13: Let ξ be a rough variable with finite expected value e and variance σ². It follows from Theorem 4.48 that

Tr{|ξ − e| ≥ kσ} ≤ V[ξ − e] / (kσ)² = 1/k².
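As a numerical illustration (not taken from the book), the sketch below checks the bound of Theorem 4.48 for ξ = ([−1, 1], [−2, 2]). It assumes, consistently with Example 4.12, that the trust distribution of ([a, b], [c, d]) is the average of the uniform distribution functions on [a, b] and [c, d], so the expected value, the variance and the trusts can be computed in closed form.

a, b, c, d = -1.0, 1.0, -2.0, 2.0          # the rough variable ([-1, 1], [-2, 2])

def clip01(x):
    return min(1.0, max(0.0, x))

def trust_cdf(x):
    # assumed trust distribution: average of the uniform CDFs on [a, b] and [c, d]
    return 0.5 * (clip01((x - a) / (b - a)) + clip01((x - c) / (d - c)))

# Mixture moments: the trust density is half the uniform density on [a, b]
# plus half the uniform density on [c, d].
e = 0.5 * ((a + b) / 2 + (c + d) / 2)
var = 0.5 * ((b - a) ** 2 / 12 + ((a + b) / 2 - e) ** 2) \
    + 0.5 * ((d - c) ** 2 / 12 + ((c + d) / 2 - e) ** 2)
sigma = var ** 0.5

for k in (1.5, 2.0, 3.0):
    t = k * sigma
    tr = (1.0 - trust_cdf(e + t)) + trust_cdf(e - t)   # Tr{|xi - e| >= t}, continuous case
    print(f"k = {k}: Tr = {tr:.4f} <= 1/k^2 = {1.0 / k ** 2:.4f}")

Every printed trust stays below the Chebyshev-type bound 1/k², as Theorem 4.48 requires.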


Theorem 4.49 (Liu [79]) Let p and q be two positive real numbers with 1/p + 1/q = 1, and let ξ and η be rough variables with E[|ξ|^p] < ∞ and E[|η|^q] < ∞. Then we have

E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.  (4.86)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^q] > 0, and set

a = |ξ| / (E[|ξ|^p])^{1/p},   b = |η| / (E[|η|^q])^{1/q}.

It follows from ab ≤ a^p/p + b^q/q that

|ξη| ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q} ( |ξ|^p / (pE[|ξ|^p]) + |η|^q / (qE[|η|^q]) ).

Taking the expected values on both sides, we obtain the inequality.

Theorem 4.50 (Liu [79]) Let p be a real number with 1 ≤ p < ∞, and let ξ and η be rough variables with E[|ξ|^p] < ∞ and E[|η|^p] < ∞. Then we have

(E[|ξ + η|^p])^{1/p} ≤ (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p}.  (4.87)

Proof: The inequality holds trivially when p = 1. It thus suffices to prove the theorem when p > 1. It is clear that there is a number q with q > 1 such that 1/p + 1/q = 1. It follows from Theorem 4.49 that

E[|ξ||ξ + η|^{p−1}] ≤ (E[|ξ|^p])^{1/p} (E[|ξ + η|^{(p−1)q}])^{1/q} = (E[|ξ|^p])^{1/p} (E[|ξ + η|^p])^{1/q},
E[|η||ξ + η|^{p−1}] ≤ (E[|η|^p])^{1/p} (E[|ξ + η|^{(p−1)q}])^{1/q} = (E[|η|^p])^{1/p} (E[|ξ + η|^p])^{1/q}.

We thus have

E[|ξ + η|^p] ≤ E[|ξ||ξ + η|^{p−1}] + E[|η||ξ + η|^{p−1}]
            ≤ ( (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p} ) (E[|ξ + η|^p])^{1/q}

which implies that the inequality (4.87) holds.

Theorem 4.51 Let ξ be a rough variable, and f a convex function. If E[ξ] and E[f(ξ)] exist and are finite, then

f(E[ξ]) ≤ E[f(ξ)].  (4.88)

Especially, when f(x) = |x|^p and p > 1, we have |E[ξ]|^p ≤ E[|ξ|^p].

Proof: Since f is a convex function, for each y, there exists a number k such that f(x) − f(y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain

f(ξ) − f(E[ξ]) ≥ k · (ξ − E[ξ]).

Taking the expected values on both sides, we have

E[f(ξ)] − f(E[ξ]) ≥ k · (E[ξ] − E[ξ]) = 0

which proves the inequality.


4.10 Characteristic Function

This section introduces the concept of the characteristic function of a rough variable, and discusses the inversion formula and the uniqueness theorem.

Definition 4.30 Let ξ be a rough variable with trust distribution Φ. Then the function

ϕ(t) = ∫_{−∞}^{+∞} e^{itx} dΦ(x),   t ∈ ℜ  (4.89)

is called the characteristic function of ξ, where e^{itx} = cos tx + i sin tx and i = √−1.
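As a small numerical aside (not part of the book), one may approximate ϕ(t) for a concrete rough variable and check the properties collected in Theorem 4.52 below. The sketch assumes, consistently with Example 4.12, that the trust distribution of ([a, b], [c, d]) is the average of the uniform distributions on [a, b] and [c, d]; names and test values are illustrative.

import numpy as np

a, b, c, d = 1.0, 2.0, 0.0, 5.0            # the rough variable ([1, 2], [0, 5])

def phi(t, n=20001):
    # approximate phi(t) = integral of exp(itx) dPhi(x) using the assumed trust density
    x = np.linspace(c, d, n)
    dens = 0.5 / (d - c) + np.where((x >= a) & (x <= b), 0.5 / (b - a), 0.0)
    return np.trapz(np.exp(1j * t * x) * dens, x)

print(abs(phi(0.0)))                               # approximately 1, property (a)
print(abs(phi(2.0)) <= abs(phi(0.0)))              # |phi(t)| <= phi(0), property (b)
print(np.isclose(phi(-2.0), np.conj(phi(2.0))))    # phi(-t) equals the conjugate of phi(t), property (c)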

Theorem 4.52 Let ξ be a rough variable, and ϕ its characteristic function. Then we have
(a) ϕ(0) = 1;
(b) |ϕ(t)| ≤ ϕ(0);
(c) ϕ(−t) = ϕ̄(t), the complex conjugate of ϕ(t);
(d) ϕ(t) is a uniformly continuous function on ℜ.

Proof: The part (a) is obvious. The parts (b) and (c) are proved as follows,

|ϕ(t)| ≤ ∫_{−∞}^{+∞} |e^{itx}| dΦ(x) = ∫_{−∞}^{+∞} dΦ(x) = 1 = ϕ(0),

ϕ̄(t) = ∫_{−∞}^{+∞} cos tx dΦ(x) − i ∫_{−∞}^{+∞} sin tx dΦ(x)
      = ∫_{−∞}^{+∞} cos(−tx) dΦ(x) + i ∫_{−∞}^{+∞} sin(−tx) dΦ(x) = ϕ(−t).

(d) We next show that ϕ is uniformly continuous. Since

e^{i(t+h)x} − e^{itx} = 2i e^{i(t+h/2)x} sin(hx/2),

we have

|ϕ(t + h) − ϕ(t)| ≤ ∫_{−∞}^{+∞} |2i e^{i(t+h/2)x} sin(hx/2)| dΦ(x) ≤ 2 ∫_{−∞}^{+∞} |sin(hx/2)| dΦ(x)

where the right-hand side is independent of t. Since sin(hx/2) → 0 as h → 0, the Lebesgue dominated convergence theorem shows that

∫_{−∞}^{+∞} |sin(hx/2)| dΦ(x) → 0

as h → 0. Hence ϕ is uniformly continuous on ℜ.

Theorem 4.53 (Inversion Formula) Let ξ be a rough variable with trust distribution Φ and characteristic function ϕ. Then

Φ(b) − Φ(a) = lim_{T→+∞} (1/2π) ∫_{−T}^{T} [(e^{−iat} − e^{−ibt}) / (it)] ϕ(t) dt  (4.90)

holds for all points a, b (a < b) at which Φ is continuous.

Proof: Since (e^{−iat} − e^{−ibt}) / (it) = ∫_a^b e^{−iut} du, we have

f(T) = (1/2π) ∫_{−T}^{T} [(e^{−iat} − e^{−ibt}) / (it)] ϕ(t) dt = (1/2π) ∫_{−T}^{T} ϕ(t) dt ∫_a^b e^{−iut} du
     = (1/2π) ∫_a^b du ∫_{−T}^{T} e^{−iut} ϕ(t) dt = (1/2π) ∫_{−∞}^{+∞} dΦ(x) ∫_a^b du ∫_{−T}^{T} e^{i(x−u)t} dt
     = ∫_{−∞}^{+∞} g(T, x) dΦ(x)

where

g(T, x) = (1/π) ∫_{T(x−b)}^{T(x−a)} (sin v / v) dv.

The classical Dirichlet formula

∫_α^β (sin v / v) dv → π as α → −∞, β → +∞

implies that g(T, x) is bounded uniformly. Furthermore,

lim_{T→+∞} g(T, x) = (1/π) lim_{T→+∞} ∫_{T(x−b)}^{T(x−a)} (sin v / v) dv
                   = 1, if a < x < b;  0.5, if x = a or x = b;  0, if x < a or x > b.

It follows from the Lebesgue dominated convergence theorem that

lim_{T→+∞} f(T) = ∫_{−∞}^{+∞} lim_{T→+∞} g(T, x) dΦ(x) = Φ(b) − Φ(a).

The proof is completed.

Theorem 4.54 (Uniqueness Theorem) Let Φ1 and Φ2 be two trust distributions with characteristic functions ϕ1 and ϕ2, respectively. Then ϕ1 = ϕ2 if and only if Φ1 = Φ2.

Proof: If Φ1 = Φ2, then we get ϕ1 = ϕ2 immediately from the definition. Conversely, let a, b (a < b) be continuity points of both Φ1 and Φ2. Then the inversion formula yields

Φ1(b) − Φ1(a) = Φ2(b) − Φ2(a).

Letting a → −∞, we obtain Φ1(b) = Φ2(b) via Φ1(a) → 0 and Φ2(a) → 0. Since the set of continuity points of a trust distribution is dense in ℜ, we have Φ1 = Φ2 by Theorem 4.15.

4.11 Convergence Concepts

This section discusses some convergence concepts for sequences of rough variables: convergence almost surely (a.s.), convergence in trust, convergence in mean, and convergence in distribution.

Table 4.1: Relations among Convergence Concepts

  Convergence Almost Surely  ↘
                                Convergence in Trust  →  Convergence in Distribution
  Convergence in Mean        ↗

Definition 4.31 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are rough variables defined on the rough space (Λ, Δ, A, π). The sequence {ξi} is said to be convergent a.s. to the rough variable ξ if and only if there exists a set A ∈ A with Tr{A} = 1 such that

lim_{i→∞} |ξi(λ) − ξ(λ)| = 0  (4.91)

for every λ ∈ A. In that case we write ξi → ξ, a.s.

Definition 4.32 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are rough variables defined on the rough space (Λ, Δ, A, π). We say that the sequence {ξi} converges in trust to the rough variable ξ if

lim_{i→∞} Tr{|ξi − ξ| ≥ ε} = 0  (4.92)

for every ε > 0.

Definition 4.33 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are rough variables with finite expected values defined on the rough space (Λ, Δ, A, π). We say that the sequence {ξi} converges in mean to the rough variable ξ if

lim_{i→∞} E[|ξi − ξ|] = 0.  (4.93)

Definition 4.34 (Liu [79]) Suppose that Φ, Φ1, Φ2, · · · are the trust distributions of the rough variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x) → Φ(x) for all continuity points x of Φ.


Convergence Almost Surely vs. Convergence in Trust

Theorem 4.55 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are rough variables defined on the rough space (Λ, Δ, A, π). Then {ξi} converges a.s. to the rough variable ξ if and only if for every ε > 0, we have

lim_{n→∞} Tr{ ∪_{i=n}^{∞} {|ξi − ξ| ≥ ε} } = 0.  (4.94)

Proof: For every i ≥ 1 and ε > 0, we define

X = { λ ∈ Λ | lim_{i→∞} ξi(λ) ≠ ξ(λ) },
Xi(ε) = { λ ∈ Λ | |ξi(λ) − ξ(λ)| ≥ ε }.

It is clear that

X = ∪_{ε>0} ( ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε) ).

Note that ξi → ξ, a.s. if and only if Tr{X} = 0. That is, ξi → ξ, a.s. if and only if

Tr{ ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε) } = 0

for every ε > 0. Since

∪_{i=n}^{∞} Xi(ε) ↓ ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε),

it follows from the trust continuity theorem that

lim_{n→∞} Tr{ ∪_{i=n}^{∞} Xi(ε) } = Tr{ ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε) } = 0.

The theorem is proved.

Theorem 4.56 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are rough variables defined on the rough space (Λ, Δ, A, π). If {ξi} converges a.s. to the rough variable ξ, then {ξi} converges in trust to ξ.

Proof: It follows from convergence a.s. and Theorem 4.55 that

lim_{n→∞} Tr{ ∪_{i=n}^{∞} {|ξi − ξ| ≥ ε} } = 0

for each ε > 0. For every n ≥ 1, since

{|ξn − ξ| ≥ ε} ⊂ ∪_{i=n}^{∞} {|ξi − ξ| ≥ ε},

we have Tr{|ξn − ξ| ≥ ε} → 0 as n → ∞. That is, the sequence {ξi} converges in trust to ξ. The theorem holds.

Example 4.14: Convergence in trust does not imply convergence a.s. For example, let Λ = Δ = [0, 1]. Assume that A is the class of all Borel sets on Λ, and π is the Lebesgue measure. Then (Λ, Δ, A, π) is a rough space. For any positive integer i, there is an integer j such that i = 2^j + k, where k is an integer between 0 and 2^j − 1. We define rough variables on Λ by

ξi(λ) = 1, if k/2^j ≤ λ ≤ (k + 1)/2^j;  ξi(λ) = 0, otherwise  (4.95)

for i = 1, 2, · · ·, and ξ = 0. For any small number ε > 0, we have

Tr{|ξi − ξ| ≥ ε} = 1/2^j → 0

as i → ∞. That is, the sequence {ξi} converges in trust to ξ. However, for any λ ∈ [0, 1], there is an infinite number of intervals of the form [k/2^j, (k + 1)/2^j] containing λ. Thus ξi(λ) does not converge to 0 as i → ∞. In other words, the sequence {ξi} does not converge a.s. to ξ.

Convergence in Trust vs. Convergence in Mean

Theorem 4.57 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are rough variables defined on the rough space (Λ, Δ, A, π). If the sequence {ξi} converges in mean to the rough variable ξ, then {ξi} converges in trust to ξ.

Proof: It follows from Theorem 4.47 that, for any given number ε > 0,

Tr{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|] / ε → 0

as i → ∞. Thus {ξi} converges in trust to ξ.

Example 4.15: Convergence in trust does not imply convergence in mean. For example, let Λ = Δ = {λ1, λ2, · · ·} and π{λj} = 1/2^j for j = 1, 2, · · · We define the rough variables as

ξi(λj) = 2^i, if j = i;  ξi(λj) = 0, otherwise  (4.96)

for i = 1, 2, · · ·, and ξ = 0. For any small number ε > 0, we have

Tr{|ξi − ξ| ≥ ε} = 1/2^i → 0.

That is, the sequence {ξi} converges in trust to ξ. However, we have

E[|ξi − ξ|] = 2^i · (1/2^i) = 1.

That is, the sequence {ξi} does not converge in mean to ξ.


Convergence Almost Surely vs. Convergence in Mean

Example 4.16: Convergence a.s. does not imply convergence in mean. Consider the rough variables defined by (4.96), in which {ξi} converges a.s. to ξ. However, it does not converge in mean to ξ.

Example 4.17: Convergence in mean does not imply convergence a.s., either. Consider the rough variables defined by (4.95). We have

E[|ξi − ξ|] = 1/2^j → 0

where j is the maximal integer such that 2^j ≤ i. That is, the sequence {ξi} converges in mean to ξ. However, {ξi} does not converge a.s. to ξ.

Convergence in Trust vs. Convergence in Distribution

Theorem 4.58 (Liu [79]) Suppose that ξ, ξ1, ξ2, · · · are rough variables. If the sequence {ξi} converges in trust to ξ, then {ξi} converges in distribution to ξ.

Proof: Let x be any given continuity point of the distribution Φ. On the one hand, for any y > x, we have

{ξi ≤ x} = {ξi ≤ x, ξ ≤ y} ∪ {ξi ≤ x, ξ > y} ⊂ {ξ ≤ y} ∪ {|ξi − ξ| ≥ y − x}

which implies that

Φi(x) ≤ Φ(y) + Tr{|ξi − ξ| ≥ y − x}.

Since {ξi} converges in trust to ξ, we have Tr{|ξi − ξ| ≥ y − x} → 0. Thus we obtain lim sup_{i→∞} Φi(x) ≤ Φ(y) for any y > x. Letting y → x, we get

lim sup_{i→∞} Φi(x) ≤ Φ(x).  (4.97)

On the other hand, for any z < x, we have

{ξ ≤ z} = {ξ ≤ z, ξi ≤ x} ∪ {ξ ≤ z, ξi > x} ⊂ {ξi ≤ x} ∪ {|ξi − ξ| ≥ x − z}

which implies that

Φ(z) ≤ Φi(x) + Tr{|ξi − ξ| ≥ x − z}.

Since Tr{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf_{i→∞} Φi(x) for any z < x. Letting z → x, we get

Φ(x) ≤ lim inf_{i→∞} Φi(x).  (4.98)

It follows from (4.97) and (4.98) that Φi(x) → Φ(x). The theorem is proved.

Example 4.18: Convergence in distribution does not imply convergence in trust. For example, let Λ = Δ = {λ1, λ2}, and

π{λ1} = π{λ2} = 1/2,   ξ(λ) = −1 if λ = λ1;  1 if λ = λ2.

We also define ξi = −ξ for all i. Then ξi and ξ are identically distributed. Thus {ξi} converges in distribution to ξ. But, for any small number ε > 0, we have Tr{|ξi − ξ| ≥ ε} = 1. That is, the sequence {ξi} does not converge in trust to ξ.

4.12 Laws of Large Numbers

In order to introduce the laws of large numbers for rough variables, we will denote Sn = ξ1 + ξ2 + · · · + ξn for each n throughout this section.

Weak Laws of Large Numbers

Theorem 4.59 Let {ξi} be a sequence of independent but not necessarily identically distributed rough variables with finite expected values. If there exists a number a > 0 such that V[ξi] < a for all i, then (Sn − E[Sn])/n converges in trust to 0. That is, for any given ε > 0, we have

lim_{n→∞} Tr{ |(Sn − E[Sn])/n| ≥ ε } = 0.  (4.99)

Proof: For any given ε > 0, it follows from Theorem 4.48 that

Tr{ |(Sn − E[Sn])/n| ≥ ε } ≤ (1/ε²) V[Sn/n] = V[Sn]/(ε²n²) ≤ a/(ε²n) → 0

as n → ∞. The theorem is proved. Especially, if those rough variables have a common expected value e, then Sn/n converges in trust to e.

Theorem 4.60 Let {ξi} be a sequence of iid rough variables with finite expected value e. Then Sn/n converges in trust to e as n → ∞.

Proof: Since the expected value of ξi is finite, there exists β > 0 such that E[|ξi|] < β < ∞. Let α be an arbitrary positive number, and let n be an arbitrary positive integer. We define

ξi* = ξi, if |ξi| < nα;  ξi* = 0, otherwise

for i = 1, 2, · · · It is clear that {ξi*} is a sequence of iid rough variables. Let en* be the common expected value of the ξi*, and Sn* = ξ1* + ξ2* + · · · + ξn*. Then we have

V[ξi*] ≤ E[ξi*²] ≤ nα E[|ξi*|] ≤ nαβ,

E[Sn*/n] = (E[ξ1*] + E[ξ2*] + · · · + E[ξn*])/n = en*,

V[Sn*/n] = (V[ξ1*] + V[ξ2*] + · · · + V[ξn*])/n² ≤ αβ.

It follows from Theorem 4.48 that

Tr{ |Sn*/n − en*| ≥ ε } ≤ (1/ε²) V[Sn*/n] ≤ αβ/ε²  (4.100)

for every ε > 0. It is also clear that en* → e as n → ∞ by the Lebesgue dominated convergence theorem. Thus there exists an integer N* such that |en* − e| < ε whenever n ≥ N*. Applying (4.100), we get

Tr{ |Sn*/n − e| ≥ 2ε } ≤ Tr{ |Sn*/n − en*| ≥ ε } ≤ αβ/ε²  (4.101)

for any n ≥ N*. It follows from the iid hypothesis and Theorem 4.27 that

Tr{Sn* ≠ Sn} ≤ ∑_{i=1}^{n} Tr{|ξi| ≥ nα} ≤ n Tr{|ξ1| ≥ nα} → 0

as n → ∞. Thus there exists N** such that

Tr{Sn* ≠ Sn} ≤ α, ∀n ≥ N**.

Applying (4.101), for all n ≥ N* ∨ N**, we have

Tr{ |Sn/n − e| ≥ 2ε } ≤ αβ/ε² + α → 0

as α → 0. It follows that Sn/n converges in trust to e.

Strong Laws of Large Numbers

Theorem 4.61 Let ξ1, ξ2, · · · , ξn be independent rough variables with finite expected values. Then for any given ε > 0, we have

Tr{ max_{1≤i≤n} |Si − E[Si]| ≥ ε } ≤ V[Sn]/ε².  (4.102)

Proof: Without loss of generality, assume that E[ξi] = 0 for each i. We set

A1 = {|S1| ≥ ε},   Ai = {|Sj| < ε, j = 1, 2, · · · , i − 1, and |Si| ≥ ε}

for i = 2, 3, · · · , n. It is clear that

A = { max_{1≤i≤n} |Si| ≥ ε }

is the disjoint union of A1, A2, · · · , An. Since E[Sn] = 0, we have

V[Sn] = ∫_0^{+∞} Tr{Sn² ≥ r} dr ≥ ∑_{k=1}^{n} ∫_0^{+∞} Tr{ {Sn² ≥ r} ∩ Ak } dr.  (4.103)

Now for any k with 1 ≤ k ≤ n, it follows from the independence that

∫_0^{+∞} Tr{ {Sn² ≥ r} ∩ Ak } dr
 = ∫_0^{+∞} Tr{ {(Sk + ξ_{k+1} + · · · + ξn)² ≥ r} ∩ Ak } dr
 = ∫_0^{+∞} Tr{ {Sk² + ξ_{k+1}² + · · · + ξn² ≥ r} ∩ Ak } dr
   + 2 ∑_{j=k+1}^{n} E[I_{Ak} Sk] E[ξj] + ∑_{j≠l; j,l=k+1}^{n} Tr{Ak} E[ξj] E[ξl]
 ≥ ∫_0^{+∞} Tr{ {Sk² ≥ r} ∩ Ak } dr
 ≥ ε² Tr{Ak}.

Using (4.103), we get

V[Sn] ≥ ε² ∑_{i=1}^{n} Tr{Ai} = ε² Tr{A}

which implies that (4.102) holds.

Theorem 4.62 Let {ξi} be a sequence of independent rough variables. If ∑_{i=1}^{∞} V[ξi] < ∞, then ∑_{i=1}^{∞} (ξi − E[ξi]) converges a.s.

Proof: The series ∑_{i=1}^{∞} (ξi − E[ξi]) converges a.s. if and only if ∑_{i=n}^{∞} (ξi − E[ξi]) → 0 a.s. as n → ∞, if and only if

lim_{n→∞} Tr{ ∪_{j=0}^{∞} { | ∑_{i=n}^{n+j} (ξi − E[ξi]) | ≥ ε } } = 0

for every given ε > 0. In fact,

Tr{ ∪_{j=0}^{∞} { | ∑_{i=n}^{n+j} (ξi − E[ξi]) | ≥ ε } }
 = lim_{m→∞} Tr{ ∪_{j=0}^{m} { | ∑_{i=n}^{n+j} (ξi − E[ξi]) | ≥ ε } }
 = lim_{m→∞} Tr{ max_{0≤j≤m} | ∑_{i=n}^{n+j} (ξi − E[ξi]) | ≥ ε }
 ≤ lim_{m→∞} (1/ε²) ∑_{i=n}^{n+m} V[ξi]   (by (4.102))
 = (1/ε²) ∑_{i=n}^{∞} V[ξi] → 0 as n → ∞, by ∑_{i=1}^{∞} V[ξi] < ∞.

The theorem is proved.

Theorem 4.63 Let {ξi} be a sequence of independent rough variables with finite expected values. If

∑_{i=1}^{∞} V[ξi]/i² < ∞,  (4.104)

then

(Sn − E[Sn])/n → 0, a.s.  (4.105)

Proof: It follows from (4.104) that

∑_{i=1}^{∞} V[(ξi − E[ξi])/i] = ∑_{i=1}^{∞} V[ξi]/i² < ∞.

By Theorem 4.62, we know that ∑_{i=1}^{∞} (ξi − E[ξi])/i converges a.s. Applying the Kronecker Lemma, we obtain

(Sn − E[Sn])/n = (1/n) ∑_{i=1}^{n} i · ((ξi − E[ξi])/i) → 0, a.s.

The theorem is proved.

Theorem 4.64 Let {ξi} be a sequence of iid rough variables with finite expected value e. Then Sn/n → e a.s.

Proof: For each i ≥ 1, let ξi* be ξi truncated at i, i.e.,

ξi* = ξi, if |ξi| < i;  ξi* = 0, otherwise,

and write Sn* = ξ1* + ξ2* + · · · + ξn*. It follows that

V[ξi*] ≤ E[ξi*²] ≤ ∑_{j=1}^{i} j² Tr{j − 1 ≤ |ξ1| < j}

for all i. Thus

∑_{i=1}^{∞} V[ξi*]/i² ≤ ∑_{i=1}^{∞} ∑_{j=1}^{i} (j²/i²) Tr{j − 1 ≤ |ξ1| < j}
 = ∑_{j=1}^{∞} j² Tr{j − 1 ≤ |ξ1| < j} ∑_{i=j}^{∞} 1/i²
 ≤ 2 ∑_{j=1}^{∞} j Tr{j − 1 ≤ |ξ1| < j}   (by ∑_{i=j}^{∞} 1/i² ≤ 2/j)
 = 2 + 2 ∑_{j=1}^{∞} (j − 1) Tr{j − 1 ≤ |ξ1| < j}
 ≤ 2 + 2e < ∞.

It follows from Theorem 4.63 that

(Sn* − E[Sn*])/n → 0, a.s.  (4.106)

Note that ξi* ↑ ξi as i → ∞. Using the Lebesgue dominated convergence theorem, we conclude that E[ξi*] → e. It follows from the Toeplitz Lemma that

E[Sn*]/n = (E[ξ1*] + E[ξ2*] + · · · + E[ξn*])/n → e, a.s.  (4.107)

Since (ξi − ξi*) → 0, a.s., it follows from the Toeplitz Lemma that

(Sn − Sn*)/n = (1/n) ∑_{i=1}^{n} (ξi − ξi*) → 0, a.s.  (4.108)

It follows from (4.106), (4.107) and (4.108) that Sn/n → e a.s.

4.13 Conditional Trust

We consider the trust of an event A after it has been learned that some other event B has occurred. This new trust of A is called the conditional trust of the event A given that the event B has occurred.

Definition 4.35 Let (Λ, Δ, A, π) be a rough space, and A, B ∈ A. Then the conditional trust of A given B is defined by

Tr{A|B} = Tr{A ∩ B} / Tr{B}  (4.109)

provided that Tr{B} > 0.

Theorem 4.65 Let (Λ, Δ, A, π) be a rough space, and B ∈ A. If Tr{B} > 0, then Tr{·|B} defined by (4.109) is a measure.

Proof: At first, we have

Tr{Λ|B} = Tr{Λ ∩ B} / Tr{B} = Tr{B} / Tr{B} = 1.

Second, for any A ∈ A, the set function Tr{A|B} is nonnegative. Finally, for any sequence {Ai}_{i=1}^{∞} of mutually disjoint events, we have

Tr{ ∪_{i=1}^{∞} Ai | B } = Tr{ (∪_{i=1}^{∞} Ai) ∩ B } / Tr{B} = ∑_{i=1}^{∞} Tr{Ai ∩ B} / Tr{B} = ∑_{i=1}^{∞} Tr{Ai|B}.

Thus Tr{·|B} is a measure.

Theorem 4.66 Let the events A1, A2, · · · , An form a partition of the space Λ such that Tr{Ai} > 0 for i = 1, 2, · · · , n, and let B be an event with Tr{B} > 0. Then we have

Tr{Ak|B} = Tr{Ak} Tr{B|Ak} / ∑_{i=1}^{n} Tr{Ai} Tr{B|Ai}  (4.110)

for k = 1, 2, · · · , n.

Proof: Since A1, A2, · · · , An form a partition of Λ, we have

Tr{B} = ∑_{i=1}^{n} Tr{Ai ∩ B} = ∑_{i=1}^{n} Tr{Ai} Tr{B|Ai}.

Thus, for any k, if Tr{B} > 0, then

Tr{Ak|B} = Tr{Ak ∩ B} / Tr{B} = Tr{Ak} Tr{B|Ak} / ∑_{i=1}^{n} Tr{Ai} Tr{B|Ai}.

The theorem is proved.
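As a quick illustration of formula (4.110) with hypothetical numbers (not from the book), suppose a partition A1, A2, A3 of Λ with given trusts Tr{Ai} and given conditional trusts Tr{B|Ai}. The Python sketch below computes Tr{B} and the conditional trusts Tr{Ak|B}.

# Hypothetical data: Tr{A1}, Tr{A2}, Tr{A3} and Tr{B|A1}, Tr{B|A2}, Tr{B|A3}
tr_A = [0.5, 0.3, 0.2]
tr_B_given_A = [0.1, 0.4, 0.8]

tr_B = sum(t * c for t, c in zip(tr_A, tr_B_given_A))            # Tr{B} by the partition formula
posterior = [t * c / tr_B for t, c in zip(tr_A, tr_B_given_A)]   # Tr{Ak|B} by (4.110)

print(round(tr_B, 3), [round(p, 3) for p in posterior])
# Tr{B} = 0.33, and Tr{A1|B}, Tr{A2|B}, Tr{A3|B} are about 0.152, 0.364, 0.485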

Definition 4.36 The conditional trust distribution Φ: [−∞, +∞] × A → [0, 1] of a rough variable ξ given B is defined by

Φ(x|B) = Tr{ξ ≤ x | B}  (4.111)

provided that Tr{B} > 0.

Definition 4.37 The conditional trust density function φ: ℜ × A → [0, +∞) of a rough variable ξ given B is a function such that

Φ(x|B) = ∫_{−∞}^{x} φ(y|B) dy  (4.112)

holds for all x ∈ [−∞, +∞], where Φ is the conditional trust distribution of the rough variable ξ given B, provided that Tr{B} > 0.

Example 4.19: Let ξ and η be rough variables, where η takes on only countably many values y1, y2, · · · Then, for each i, the conditional trust distribution of ξ given η = yi is

Φ(x|η = yi) = Tr{ξ ≤ x | η = yi} = Tr{ξ ≤ x, η = yi} / Tr{η = yi}.

Example 4.20: Let (ξ, η) be a rough vector with joint trust density function ψ. Then the marginal trust density functions of ξ and η are

f(x) = ∫_{−∞}^{+∞} ψ(x, y) dy,   g(y) = ∫_{−∞}^{+∞} ψ(x, y) dx,

respectively. Furthermore, we have

Tr{ξ ≤ x, η ≤ y} = ∫_{−∞}^{x} ∫_{−∞}^{y} ψ(r, t) dr dt = ∫_{−∞}^{y} [ ∫_{−∞}^{x} (ψ(r, t)/g(t)) dr ] g(t) dt

which implies that the conditional trust distribution of ξ given η = y is

Φ(x|η = y) = ∫_{−∞}^{x} (ψ(r, y)/g(y)) dr, a.s.  (4.113)

and the conditional trust density function of ξ given η = y is

φ(x|η = y) = ψ(x, y)/g(y), a.s.  (4.114)

Note that (4.113) and (4.114) are defined only for g(y) ≠ 0.

Definition 4.38 Let ξ be a rough variable. Then the conditional expected value of ξ given B is defined by

E[ξ|B] = ∫_0^{+∞} Tr{ξ ≥ r|B} dr − ∫_{−∞}^{0} Tr{ξ ≤ r|B} dr  (4.115)

provided that at least one of the two integrals is finite.

Theorem 4.67 Let ξ and η be rough variables with finite expected values. Then for any Borel set B and any numbers a and b, we have

E[aξ + bη|B] = aE[ξ|B] + bE[η|B].  (4.116)

Proof: Like Theorem 4.32.

4.14 Rough Simulations

Rough simulation was proposed by Liu [75] for estimating the value of trust, finding critical values, and calculating expected values. Here we show it through some numerical examples.

Example 4.21: Let ξ be an n-dimensional rough vector on the rough space (Λ, Δ, A, π), and f: ℜⁿ → ℜᵐ a measurable function. Then f(ξ) is also a rough vector. In order to obtain the trust

L = Tr{f(ξ) ≤ 0},  (4.117)

we produce samples λ̄k, k = 1, 2, · · · , N from Λ according to the measure π. Let N̄ denote the number of occasions on which f(ξ(λ̄k)) ≤ 0 for k = 1, 2, · · · , N. Then the upper trust is L̄ = N̄/N provided that N is sufficiently large. A similar procedure with samples from Δ produces the lower trust L̲. Thus the trust is L = (L̄ + L̲)/2.

Algorithm 4.1 (Rough Simulation)
Step 1. Set N̲ = 0 and N̄ = 0.
Step 2. Generate λ from Δ and λ̄ from Λ according to the measure π.
Step 3. If f(ξ(λ)) ≤ 0, then N̲ ← N̲ + 1.
Step 4. If f(ξ(λ̄)) ≤ 0, then N̄ ← N̄ + 1.
Step 5. Repeat the second to fourth steps N times.
Step 6. L = (N̲ + N̄)/(2N).

Assume that the rough variables are ξ1 = ([1, 2], [0, 5]) and ξ2 = ([2, 3], [1, 4]). In order to calculate the trust L = Tr{ξ1² + ξ2² ≤ 18}, we perform the rough simulation with 2000 cycles and obtain L = 0.82.
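A minimal Python sketch of Algorithm 4.1 follows. It assumes that each coordinate ξj = ([aj, bj], [cj, dj]) is the identity function on its own rough space with Lebesgue measure, so sampling from Δ and Λ amounts to drawing uniformly from the lower and upper intervals; the function name and the sample size are illustrative.

import random

def rough_trust(f, intervals, n=2000):
    # Estimate Tr{f(xi) <= 0} by Algorithm 4.1.
    # intervals: list of ((a, b), (c, d)) pairs, one per coordinate, where
    # [a, b] is the lower and [c, d] the upper approximation interval.
    n_lower = n_upper = 0
    for _ in range(n):
        lam = [random.uniform(a, b) for (a, b), _ in intervals]      # sample from Delta
        lam_bar = [random.uniform(c, d) for _, (c, d) in intervals]  # sample from Lambda
        if f(lam) <= 0:
            n_lower += 1
        if f(lam_bar) <= 0:
            n_upper += 1
    return (n_lower + n_upper) / (2 * n)

# Tr{xi1^2 + xi2^2 <= 18} with xi1 = ([1, 2], [0, 5]) and xi2 = ([2, 3], [1, 4])
g = lambda x: x[0] ** 2 + x[1] ** 2 - 18
print(rough_trust(g, [((1, 2), (0, 5)), ((2, 3), (1, 4))], n=20000))   # roughly 0.82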

Example 4.22: Suppose that ξ is an n-dimensional rough vector defined on the rough space (Λ, Δ, A, π), and f: ℜⁿ → ℜ is a measurable function. Here we employ rough simulation to estimate the maximal value f̄ such that

Tr{f(ξ) ≥ f̄} ≥ α  (4.118)

holds, where α is a predetermined confidence level with 0 < α ≤ 1. We sample λ1, λ2, · · · , λN from Δ and λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π. For any number v, let N̲(v) denote the number of λk satisfying f(ξ(λk)) ≥ v for k = 1, 2, · · · , N, and let N̄(v) denote the number of λ̄k satisfying f(ξ(λ̄k)) ≥ v for k = 1, 2, · · · , N. It follows from monotonicity that we may employ bisection search to find the maximal value v such that

(N̲(v) + N̄(v)) / (2N) ≥ α.  (4.119)

This value is an estimate of f̄.

Algorithm 4.2 (Rough Simulation)
Step 1. Generate λ1, λ2, · · · , λN from Δ according to the measure π.
Step 2. Generate λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π.
Step 3. Find the maximal value v such that (4.119) holds.
Step 4. Return v.

Assume that the rough variables are ξ1 = ([0, 1], [−1, 3]), ξ2 = ([1, 2], [0, 3]), and ξ3 = ([2, 3], [1, 5]). Now we compute the maximal value f̄ such that Tr{ξ1 + ξ2² + ξ3³ ≥ f̄} ≥ 0.8. A run of the rough simulation with 2000 cycles shows that f̄ = 12.7.
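The sketch below implements Algorithm 4.2 under the same sampling assumption as before; the bracketing bounds and tolerance of the bisection search are illustrative choices.

import random

def rough_critical(f, intervals, alpha, n=2000, lo=-1e3, hi=1e3, tol=1e-4):
    # Estimate the maximal v with Tr{f(xi) >= v} >= alpha (Algorithm 4.2).
    lower_vals = [f([random.uniform(a, b) for (a, b), _ in intervals]) for _ in range(n)]
    upper_vals = [f([random.uniform(c, d) for _, (c, d) in intervals]) for _ in range(n)]

    def trust_at(v):
        count = sum(1 for y in lower_vals if y >= v) + sum(1 for y in upper_vals if y >= v)
        return count / (2 * n)

    # Bisection search: trust_at(v) is decreasing in v.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if trust_at(mid) >= alpha:
            lo = mid
        else:
            hi = mid
    return lo

h = lambda x: x[0] + x[1] ** 2 + x[2] ** 3
ivs = [((0, 1), (-1, 3)), ((1, 2), (0, 3)), ((2, 3), (1, 5))]
print(rough_critical(h, ivs, 0.8, n=20000))   # should come out near the 12.7 reported above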

Example 4.23: Let f: ℜⁿ → ℜ be a measurable function, and ξ an n-dimensional rough vector defined on the rough space (Λ, Δ, A, π). In order to calculate the expected value E[f(ξ)], we sample λ1, λ2, · · · , λN from Δ and λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π. Then the expected value E[f(ξ)] is estimated by

∑_{i=1}^{N} ( f(ξ(λi)) + f(ξ(λ̄i)) ) / (2N)

provided that N is sufficiently large.

Algorithm 4.3 (Rough Simulation)
Step 1. Set e = 0.
Step 2. Generate λ from Δ according to the measure π.
Step 3. Generate λ̄ from Λ according to the measure π.
Step 4. e ← e + f(ξ(λ)) + f(ξ(λ̄)).
Step 5. Repeat the second to fourth steps N times.
Step 6. Return e/(2N).

Assume that the rough variable is ξ = ([−1, 1], [−2, 2]). We employ the rough simulation to compute the expected value of (1 + ξ)/(1 + ξ²). A run of the rough simulation with 2000 cycles obtains E[(1 + ξ)/(1 + ξ²)] = 0.67.
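A corresponding sketch of Algorithm 4.3, under the same sampling assumption; for ξ = ([−1, 1], [−2, 2]) and f(x) = (1 + x)/(1 + x²) it should return a value close to the 0.67 reported above.

import random

def rough_expectation(f, intervals, n=2000):
    # Estimate E[f(xi)] by Algorithm 4.3: average f over samples from Delta and Lambda.
    total = 0.0
    for _ in range(n):
        lam = [random.uniform(a, b) for (a, b), _ in intervals]      # sample from Delta
        lam_bar = [random.uniform(c, d) for _, (c, d) in intervals]  # sample from Lambda
        total += f(lam) + f(lam_bar)
    return total / (2 * n)

print(rough_expectation(lambda x: (1 + x[0]) / (1 + x[0] ** 2), [((-1, 1), (-2, 2))], n=20000))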


Chapter 5

Fuzzy Random Theory

Fuzzy random variables are mathematical descriptions of fuzzy stochastic phenomena, and have been defined in several ways. Kwakernaak [51][52] first introduced the notion of fuzzy random variable. This concept was then developed by several researchers, such as Puri and Ralescu [118], Kruse and Meyer [50], and Liu and Liu [82], according to different requirements of measurability.

The concept of chance measure of a fuzzy random event was first given by Liu [73][74]. In order to rank fuzzy random variables, Liu and Liu [82] presented a scalar expected value operator, and Liu [73] presented the concept of optimistic and pessimistic values. In order to describe a fuzzy random variable, Yang and Liu [150] presented the concept of chance distribution.

The emphasis in this chapter is mainly on fuzzy random variables, fuzzy random arithmetic, chance measure, chance distribution, independent and identical distribution, the expected value operator, variance, convergence concepts, laws of large numbers, and fuzzy random simulations.

5.1 Fuzzy Random Variables

Roughly speaking, a fuzzy random variable is a measurable function from a probability space to the set of fuzzy variables. In other words, a fuzzy random variable is a random variable taking fuzzy values. For our purpose, we use the following mathematical definition of fuzzy random variable.

Definition 5.1 (Liu and Liu [82]) A fuzzy random variable is a function ξ from a probability space (Ω, A, Pr) to the set of fuzzy variables such that Pos{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ.

Example 5.1: Let (Ω, A, Pr) be a probability space. If Ω = {ω1, ω2, · · · , ωm} and u1, u2, · · · , um are fuzzy variables, then the function

ξ(ω) = ui, if ω = ωi, i = 1, 2, · · · , m

is clearly a fuzzy random variable.

Example 5.2: If η is a random variable defined on the probability space (Ω, A, Pr), and u is a fuzzy variable, then the sum ξ = η + u is a fuzzy random variable defined by

ξ(ω) = η(ω) + u, ∀ω ∈ Ω

provided that Pos{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ. Similarly, the product ξ = ηu defined by

ξ(ω) = η(ω)u, ∀ω ∈ Ω

is also a fuzzy random variable provided that Pos{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ.

Theorem 5.1 Assume that ξ is a fuzzy random variable. Then for any Borel set B of ℜ, the following alternatives hold:
(a) the possibility Pos{ξ(ω) ∈ B} is a random variable;
(b) the necessity Nec{ξ(ω) ∈ B} is a random variable;
(c) the credibility Cr{ξ(ω) ∈ B} is a random variable.

Proof: If ξ is a fuzzy random variable, then Pos{ξ(ω) ∈ B} is a measurable function of ω from the probability space (Ω, A, Pr) to ℜ. Thus the possibility Pos{ξ(ω) ∈ B} is a random variable. It follows from Nec{B} = 1 − Pos{B^c} and Cr{B} = (Pos{B} + Nec{B})/2 that Nec{ξ(ω) ∈ B} and Cr{ξ(ω) ∈ B} are random variables.

Theorem 5.2 (Liu and Liu [82]) Let ξ be a fuzzy random variable. If the expected value E[ξ(ω)] is finite for each ω, then E[ξ(ω)] is a random variable.

Proof: In order to prove that the expected value E[ξ(ω)] is a random variable, we only need to show that E[ξ(ω)] is a measurable function of ω. It is obvious that

E[ξ(ω)] = ∫_0^{+∞} Cr{ξ(ω) ≥ r} dr − ∫_{−∞}^{0} Cr{ξ(ω) ≤ r} dr
        = lim_{j→∞} lim_{k→∞} ( ∑_{l=1}^{k} (j/k) Cr{ξ(ω) ≥ lj/k} − ∑_{l=1}^{k} (j/k) Cr{ξ(ω) ≤ −lj/k} ).

Since Cr{ξ(ω) ≥ lj/k} and Cr{ξ(ω) ≤ −lj/k} are all measurable functions for any integers j, k and l, the expected value E[ξ(ω)] is a measurable function of ω. The proof is complete.

Definition 5.2 An n-dimensional fuzzy random vector is a function ξ from a probability space (Ω, A, Pr) to the set of n-dimensional fuzzy vectors such that Pos{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜⁿ.

Theorem 5.3 If (ξ1, ξ2, · · · , ξn) is a fuzzy random vector, then ξ1, ξ2, · · · , ξn are fuzzy random variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a fuzzy random vector on the probability space (Ω, A, Pr). For any Borel set B of ℜ, the set B × ℜ^{n−1} is a Borel set of ℜⁿ. It follows that the function

Pos{ξ1(ω) ∈ B} = Pos{ξ1(ω) ∈ B, ξ2(ω) ∈ ℜ, · · · , ξn(ω) ∈ ℜ} = Pos{ξ(ω) ∈ B × ℜ^{n−1}}

is a measurable function of ω. Hence ξ1 is a fuzzy random variable. A similar process may prove that ξ2, ξ3, · · · , ξn are fuzzy random variables.

Theorem 5.4 Let ξ be an n-dimensional fuzzy random vector, and f: ℜⁿ → ℜ a measurable function. Then f(ξ) is a fuzzy random variable.

Proof: It is clear that f^{−1}(B) is a Borel set of ℜⁿ for any Borel set B of ℜ. Thus, for each ω ∈ Ω, the function

Pos{f(ξ(ω)) ∈ B} = Pos{ξ(ω) ∈ f^{−1}(B)}

is a measurable function of ω. That is, f(ξ) is a fuzzy random variable. The theorem is proved.

Fuzzy Random Arithmetic

Definition 5.3 (Fuzzy Random Arithmetic on a Single Space) Let f: ℜⁿ → ℜ be a measurable function, and ξ1, ξ2, · · · , ξn fuzzy random variables on the probability space (Ω, A, Pr). Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy random variable defined by

ξ(ω) = f(ξ1(ω), ξ2(ω), · · · , ξn(ω)),  ω ∈ Ω.  (5.1)

Example 5.3: Let ξ1 and ξ2 be two fuzzy random variables defined on the probability space (Ω, A, Pr). Then the sum ξ = ξ1 + ξ2 is a fuzzy random variable defined by

ξ(ω) = ξ1(ω) + ξ2(ω),  ∀ω ∈ Ω.

The product ξ = ξ1ξ2 is also a fuzzy random variable defined by

ξ(ω) = ξ1(ω) · ξ2(ω),  ∀ω ∈ Ω.

Definition 5.4 (Fuzzy Random Arithmetic on Different Spaces) Assume that f: ℜⁿ → ℜ is a measurable function, and ξi are fuzzy random variables on the probability spaces (Ωi, Ai, Pri), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy random variable on the product probability space (Ω1 × Ω2 × · · · × Ωn, A1 × A2 × · · · × An, Pr1 × Pr2 × · · · × Prn), defined by

ξ(ω1, ω2, · · · , ωn) = f(ξ1(ω1), ξ2(ω2), · · · , ξn(ωn))  (5.2)

for all (ω1, ω2, · · · , ωn) ∈ Ω1 × Ω2 × · · · × Ωn.

Example 5.4: Let ξ1 and ξ2 be two fuzzy random variables defined on the probability spaces (Ω1, A1, Pr1) and (Ω2, A2, Pr2), respectively. Then the sum ξ = ξ1 + ξ2 is a fuzzy random variable on (Ω1 × Ω2, A1 × A2, Pr1 × Pr2), defined by

ξ(ω1, ω2) = ξ1(ω1) + ξ2(ω2),  ∀(ω1, ω2) ∈ Ω1 × Ω2.

The product ξ = ξ1ξ2 is a fuzzy random variable defined on the probability space (Ω1 × Ω2, A1 × A2, Pr1 × Pr2) as

ξ(ω1, ω2) = ξ1(ω1) · ξ2(ω2),  ∀(ω1, ω2) ∈ Ω1 × Ω2.

Example 5.5: Let us consider the following two fuzzy random variables with “trapezoidal fuzzy variable” values,

ξ1 = (a1, a2, a3, a4) with probability 0.3;  (b1, b2, b3, b4) with probability 0.7,
ξ2 = (c1, c2, c3, c4) with probability 0.6;  (d1, d2, d3, d4) with probability 0.4.

Then the sum of the two fuzzy random variables is

ξ1 + ξ2 = (a1 + c1, a2 + c2, a3 + c3, a4 + c4) with probability 0.18;
          (a1 + d1, a2 + d2, a3 + d3, a4 + d4) with probability 0.12;
          (b1 + c1, b2 + c2, b3 + c3, b4 + c4) with probability 0.42;
          (b1 + d1, b2 + d2, b3 + d3, b4 + d4) with probability 0.28.
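The bookkeeping in Example 5.5 is easy to mechanize. The Python sketch below represents a discrete fuzzy random variable simply as a list of (trapezoid, probability) pairs and enumerates the value and probability pairs of ξ1 + ξ2 on the product probability space; the numerical trapezoids are hypothetical.

from itertools import product

# A discrete fuzzy random variable: list of (trapezoid, probability) pairs,
# a trapezoid being the parameter tuple (r1, r2, r3, r4).
xi1 = [((1, 2, 3, 4), 0.3), ((2, 3, 4, 5), 0.7)]
xi2 = [((0, 1, 2, 3), 0.6), ((1, 2, 3, 4), 0.4)]

def frv_sum(x, y):
    # Sum on the product probability space: add trapezoids componentwise, multiply probabilities.
    return [(tuple(u + v for u, v in zip(t1, t2)), p1 * p2)
            for (t1, p1), (t2, p2) in product(x, y)]

for trap, p in frv_sum(xi1, xi2):
    print(trap, round(p, 2))      # probabilities 0.18, 0.12, 0.42, 0.28, as in Example 5.5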

5.2 Chance Measure

Now let us consider the chance of a fuzzy random event. Recall that the probability of a random event and the possibility of a fuzzy event are defined as real numbers. However, for a fuzzy random event, the chance is defined as a function rather than a number.

Definition 5.5 (Liu [73], Gao and Liu [30]) Let ξ be a fuzzy random variable, and B a Borel set of ℜ. Then the chance of the fuzzy random event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

Ch{ξ ∈ B}(α) = sup_{Pr{A}≥α} inf_{ω∈A} Cr{ξ(ω) ∈ B}.  (5.3)
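When Ω is finite, the sup-inf in (5.3) can be computed directly: to maximize the infimum of Cr{ξ(ω) ∈ B} over events A with Pr{A} ≥ α, add states in decreasing order of credibility until their total probability reaches α; the chance is the credibility of the last state added. A small Python sketch of this computation (illustrative data, not from the book):

def chance(states, alpha):
    # Ch{xi in B}(alpha) on a finite probability space.
    # states: list of (Pr{omega}, Cr{xi(omega) in B}) pairs.
    best_inf = 0.0
    cum = 0.0
    for p, cr in sorted(states, key=lambda s: s[1], reverse=True):
        cum += p
        best_inf = cr            # infimum of Cr over the states chosen so far
        if cum >= alpha:
            return best_inf
    return best_inf              # reached only when alpha exceeds the total probability

# Hypothetical example: three states with probabilities 0.5, 0.3, 0.2
# and credibilities 0.9, 0.6, 0.2 for the event xi in B.
print(chance([(0.5, 0.9), (0.3, 0.6), (0.2, 0.2)], alpha=0.6))   # prints 0.6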

Theorem 5.5 Let ξ be a fuzzy random variable, and B a Borel set of ℜ. For any given α* ∈ (0, 1], we write β* = Ch{ξ ∈ B}(α*). Then we have

Pr{ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ β*} ≥ α*.  (5.4)

Proof: It follows from the definition of chance that β* is just the supremum of β satisfying

Pr{ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ β} ≥ α*.

Thus there exists an increasing sequence {βi} such that

Pr{ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ βi} ≥ α*

and βi ↑ β* as i → ∞. It is easy to verify that

{ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ βi} ↓ {ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ β*}

as i → ∞. It follows from the probability continuity theorem that

Pr{ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ β*} = lim_{i→∞} Pr{ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ βi} ≥ α*.

The proof is complete.

Theorem 5.6 (Yang and Liu [150]) Let ξ be a fuzzy random variable, and {Bi} a sequence of Borel sets of ℜ such that Bi ↓ B. If lim_{i→∞} Ch{ξ ∈ Bi}(α) > 0.5 or Ch{ξ ∈ B}(α) ≥ 0.5, then we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α).  (5.5)

Proof: First we suppose that lim_{i→∞} Ch{ξ ∈ Bi}(α) > 0.5. Write

β = Ch{ξ ∈ B}(α),   βi = Ch{ξ ∈ Bi}(α),   i = 1, 2, · · ·

Since Bi ↓ B, it is clear that β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξ ∈ Bi}(α) > 0.5

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 5.5 that

Pr{ω ∈ Ω | Cr{ξ(ω) ∈ Bi} ≥ ρ} ≥ Pr{ω ∈ Ω | Cr{ξ(ω) ∈ Bi} ≥ βi} ≥ α.

Since ρ > 0.5, by using the credibility semicontinuity law, it is easy to verify that

{ω ∈ Ω | Cr{ξ(ω) ∈ Bi} ≥ ρ} ↓ {ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ ρ}.

It follows from the probability continuity theorem that

Pr{ω ∈ Ω | Cr{ξ(ω) ∈ B} ≥ ρ} = lim_{i→∞} Pr{ω ∈ Ω | Cr{ξ(ω) ∈ Bi} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (5.5) holds. Under the condition Ch{ξ ∈ B}(α) ≥ 0.5, if lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ B}(α), then (5.5) holds. Otherwise, we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) > Ch{ξ ∈ B}(α) ≥ 0.5

which also implies (5.5).

Theorem 5.7 (Yang and Liu [150]) (a) Let ξ, ξ1, ξ2, · · · be fuzzy random variables such that ξi(ω) ↑ ξ(ω) for each ω ∈ Ω. If lim_{i→∞} Ch{ξi ≤ r}(α) > 0.5 or Ch{ξ ≤ r}(α) ≥ 0.5, then

lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α).  (5.6)

(b) Let ξ, ξ1, ξ2, · · · be fuzzy random variables such that ξi(ω) ↓ ξ(ω) for each ω ∈ Ω. If lim_{i→∞} Ch{ξi ≥ r}(α) > 0.5 or Ch{ξ ≥ r}(α) ≥ 0.5, then we have

lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α).  (5.7)

Proof: (a) Suppose lim_{i→∞} Ch{ξi ≤ r}(α) > 0.5 and write

β = Ch{ξ ≤ r}(α),   βi = Ch{ξi ≤ r}(α),   i = 1, 2, · · ·

Since ξi(ω) ↑ ξ(ω) for each ω ∈ Ω, it is clear that {ξi(ω) ≤ r} ↓ {ξ(ω) ≤ r} for each ω ∈ Ω and β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξi ≤ r}(α) > 0.5

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, we have

Pr{ω ∈ Ω | Cr{ξi(ω) ≤ r} ≥ ρ} ≥ Pr{ω ∈ Ω | Cr{ξi(ω) ≤ r} ≥ βi} ≥ α.

Since ρ > 0.5 and {ξi(ω) ≤ r} ↓ {ξ(ω) ≤ r} for each ω ∈ Ω, it follows from the credibility semicontinuity law that

{ω ∈ Ω | Cr{ξi(ω) ≤ r} ≥ ρ} ↓ {ω ∈ Ω | Cr{ξ(ω) ≤ r} ≥ ρ}.

By using the probability continuity theorem, we get

Pr{ω ∈ Ω | Cr{ξ(ω) ≤ r} ≥ ρ} = lim_{i→∞} Pr{ω ∈ Ω | Cr{ξi(ω) ≤ r} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (5.6) holds. Under the condition Ch{ξ ≤ r}(α) ≥ 0.5, if lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{ξ ≤ r}(α), then (5.6) holds. Otherwise, we have

lim_{i→∞} Ch{ξi ≤ r}(α) > Ch{lim_{i→∞} ξi ≤ r}(α) ≥ 0.5

which also implies (5.6). The part (b) may be proved similarly.

Variety of Chance Measure

Definition 5.6 (Liu [73]) Let ξ be a fuzzy random variable, and B a Borel set of ℜ. For any real number α ∈ (0, 1], the α-chance of the fuzzy random event ξ ∈ B is defined as the value of the chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 5.7 (Liu and Liu [86]) Let ξ be a fuzzy random variable, and B a Borel set of ℜ. Then the equilibrium chance of the fuzzy random event ξ ∈ B is defined as

Che{ξ ∈ B} = sup_{0<α≤1} { α | Ch{ξ ∈ B}(α) ≥ α }  (5.8)

where Ch denotes the chance measure.

Remark 5.1: If the chance curve is continuous, then the equilibrium chance is just the fixed point of the chance curve, i.e., the value α ∈ (0, 1] with Ch{ξ ∈ B}(α) = α.

Definition 5.8 (Liu and Liu [85]) Let ξ be a fuzzy random variable, and B a Borel set of ℜ. Then the average chance of the fuzzy random event ξ ∈ B is defined as

Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα  (5.9)

where Ch denotes the chance measure.

Remark 5.2: The average chance (also called mean chance by Liu and Liu [85]) is just the area under the chance curve.

Definition 5.9 A fuzzy random variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm}(α) ≡ 0;  (5.10)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · ·}(α) ≡ 0.  (5.11)

5.3 Chance Distribution

Definition 5.10 (Yang and Liu [150]) Let ξ be a fuzzy random variable. Then its chance distribution Φ: [−∞, +∞] × (0, 1] → [0, 1] is defined by

Φ(x; α) = Ch{ξ ≤ x}(α).  (5.12)

Theorem 5.8 (Yang and Liu [150]) The chance distribution Φ(x; α) of a fuzzy random variable is a decreasing and left-continuous function of α for each fixed x.

Proof: Denote the fuzzy random variable by ξ. For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

Φ(x; α1) = sup_{Pr{A}≥α1} inf_{ω∈A} Cr{ξ(ω) ≤ x} ≥ sup_{Pr{A}≥α2} inf_{ω∈A} Cr{ξ(ω) ≤ x} = Φ(x; α2).

Thus Φ(x; α) is a decreasing function of α for each fixed x.

Next we prove the left-continuity of Φ(x; α) with respect to α. Let α ∈ (0, 1] be given, and let {αi} be a sequence of numbers with αi ↑ α. Since Φ(x; α) is a decreasing function of α, the limit lim_{i→∞} Φ(x; αi) exists and is not less than Φ(x; α). If the limit is equal to Φ(x; α), then the left-continuity is proved. Otherwise, we have

lim_{i→∞} Φ(x; αi) > Φ(x; α).

Let z* = (lim_{i→∞} Φ(x; αi) + Φ(x; α))/2. It is clear that

Φ(x; αi) > z* > Φ(x; α)

for all i. It follows from Φ(x; αi) > z* that there exists Ai with Pr{Ai} ≥ αi such that

inf_{ω∈Ai} Cr{ξ(ω) ≤ x} > z*

for each i. Now we define

A* = ∪_{i=1}^{∞} Ai.

It is clear that Pr{A*} ≥ Pr{Ai} ≥ αi. Letting i → ∞, we get Pr{A*} ≥ α. Thus

Φ(x; α) ≥ inf_{ω∈A*} Cr{ξ(ω) ≤ x} ≥ z*.

A contradiction proves the theorem.

Theorem 5.9 (Yang and Liu [150]) The chance distribution Φ(x; α) of a fuzzy random variable is an increasing function of x for each fixed α, and

Φ(−∞; α) = 0,  Φ(+∞; α) = 1,  ∀α;  (5.13)

lim_{x→−∞} Φ(x; α) ≤ 0.5,  ∀α;  (5.14)

lim_{x→+∞} Φ(x; α) ≥ 0.5,  if α < 1.  (5.15)

Furthermore, if lim_{y↓x} Φ(y; α) > 0.5 or Φ(x; α) ≥ 0.5, then we have

lim_{y↓x} Φ(y; α) = Φ(x; α).  (5.16)

Proof: Let Φ(x; α) be the chance distribution of the fuzzy random variable ξ defined on the probability space (Ω, A, Pr). For any x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞, it is clear that

Φ(x1; α) = sup_{Pr{A}≥α} inf_{ω∈A} Cr{ξ(ω) ≤ x1} ≤ sup_{Pr{A}≥α} inf_{ω∈A} Cr{ξ(ω) ≤ x2} = Φ(x2; α).

Therefore, Φ(x; α) is an increasing function of x for each fixed α.

Since ξ(ω) is a fuzzy variable for any ω ∈ Ω, we have Cr{ξ(ω) ≤ −∞} = 0. It follows that

Φ(−∞; α) = sup_{Pr{A}≥α} inf_{ω∈A} Cr{ξ(ω) ≤ −∞} = 0.

Similarly, we have Cr{ξ(ω) ≤ +∞} = 1 for any ω ∈ Ω. Thus

Φ(+∞; α) = sup_{Pr{A}≥α} inf_{ω∈A} Cr{ξ(ω) ≤ +∞} = 1.

Thus (5.13) is proved.

If (5.14) is not true, then there exists a number z* > 0.5 and a sequence {xi} with xi ↓ −∞ such that Φ(xi; α) > z* for all i. Writing

Ai = {ω ∈ Ω | Cr{ξ(ω) ≤ xi} > z*}

for i = 1, 2, · · ·, we have Pr{Ai} ≥ α, and A1 ⊃ A2 ⊃ · · · It follows from the probability continuity theorem that

Pr{ ∩_{i=1}^{∞} Ai } = lim_{i→∞} Pr{Ai} ≥ α.

Thus there exists ω* such that ω* ∈ Ai for all i. Therefore

0.5 ≥ lim_{i→∞} Cr{ξ(ω*) ≤ xi} ≥ z* > 0.5.

A contradiction proves (5.14).

If (5.15) is not true, then there exists a number z* < 0.5 and a sequence {xi} with xi ↑ +∞ such that Φ(xi; α) < z* for all i. Writing

Ai = {ω ∈ Ω | Cr{ξ(ω) ≤ xi} < z*}

for i = 1, 2, · · ·, we have

Pr{Ai} = 1 − Pr{ω ∈ Ω | Cr{ξ(ω) ≤ xi} ≥ z*} > 1 − α

and A1 ⊃ A2 ⊃ · · · It follows from the probability continuity theorem that

Pr{ ∩_{i=1}^{∞} Ai } = lim_{i→∞} Pr{Ai} ≥ 1 − α > 0.

Thus there exists ω* such that ω* ∈ Ai for all i. Therefore

0.5 ≤ lim_{i→∞} Cr{ξ(ω*) ≤ xi} ≤ z* < 0.5.

A contradiction proves (5.15).

Finally, we prove (5.16). Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 5.6 that

lim_{y↓x} Φ(y; α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x; α).

The theorem is proved.

Example 5.6: The limit lim_{x→−∞} Φ(x; α) may take any value a between 0 and 0.5, and lim_{x→+∞} Φ(x; α) may take any value b between 0.5 and 1. Let ξ be a fuzzy random variable taking a single fuzzy variable value defined by the following membership function,

μ(x) = 2a, if x < 0;  1, if x = 0;  2 − 2b, if x > 0.

Then for any α, we have

Φ(x; α) = 0, if x = −∞;  a, if −∞ < x < 0;  b, if 0 ≤ x < +∞;  1, if x = +∞.

It follows that lim_{x→−∞} Φ(x; α) = a and lim_{x→+∞} Φ(x; α) = b.

Example 5.7: When α = 1, the limit lim_{x→+∞} Φ(x; 1) may take any value c between 0 and 1. Let Ω = {ω1, ω2, · · ·}, and Pr{ωi} = 1/2^i for i = 1, 2, · · · The fuzzy random variable ξ is defined on the probability space (Ω, A, Pr) as

ξ(ωi) = 0 with possibility (2c) ∧ 1;  i with possibility (2 − 2c) ∧ 1.

Then we have

Φ(x; 1) = 0, if −∞ ≤ x < 0;  c, if 0 ≤ x < +∞;  1, if x = +∞.

It follows that lim_{x→+∞} Φ(x; 1) = c.

Example 5.8: When lim_{y↓x} Φ(y; α) ≤ 0.5 or Φ(x; α) < 0.5, it is possible that lim_{y↓x} Φ(y; α) ≠ Φ(x; α). For example, let ξ be a fuzzy random variable taking a single fuzzy variable value defined by the membership function

μ(x) = 0, if x ≤ 0;  1, if x > 0.

Then for any α, we have

Φ(x; α) = 0, if −∞ ≤ x ≤ 0;  0.5, if 0 < x < +∞;  1, if x = +∞.

It is clear that lim_{y↓0} Φ(y; α) = 0.5 and Φ(0; α) = 0. That is, they are not equal to each other.

Theorem 5.10 Let ξ be a fuzzy random variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) a decreasing function of x for any fixed α. Furthermore, if

Ch{ξ ≥ x}(α) ≥ 0.5 or lim_{y↑x} Ch{ξ ≥ y}(α) > 0.5,

then we have lim_{y↑x} Ch{ξ ≥ y}(α) = Ch{ξ ≥ x}(α).


Proof: Like Theorems 5.8 and 5.9.

Definition 5.11 (Yang and Liu [150]) The chance density function φ: ℜ × (0, 1] → [0, +∞) of a fuzzy random variable ξ is a function such that

Φ(x; α) = ∫_{−∞}^{x} φ(y; α) dy  (5.17)

holds for all x ∈ [−∞, +∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.

5.4 Independent and Identical Distribution

This section introduces the concept of independent and identically distributed (iid) fuzzy random variables.

Definition 5.12 (Liu and Liu [82]) The fuzzy random variables ξ1, ξ2, · · · , ξn are said to be iid if and only if

(Pos{ξi(ω) ∈ B1}, Pos{ξi(ω) ∈ B2}, · · · , Pos{ξi(ω) ∈ Bm}),  i = 1, 2, · · · , n

are iid random vectors for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m.

Theorem 5.11 Let ξ1, ξ2, · · · , ξn be iid fuzzy random variables. Then for any Borel set B of ℜ, we have
(a) Pos{ξi(ω) ∈ B}, i = 1, 2, · · · , n are iid random variables;
(b) Nec{ξi(ω) ∈ B}, i = 1, 2, · · · , n are iid random variables;
(c) Cr{ξi(ω) ∈ B}, i = 1, 2, · · · , n are iid random variables.

Proof: The part (a) follows immediately from the definition. (b) Since ξ1, ξ2, · · · , ξn are iid fuzzy random variables, the possibilities Pos{ξi(ω) ∈ B^c}, i = 1, 2, · · · , n are iid random variables. It follows from Nec{ξi(ω) ∈ B} = 1 − Pos{ξi(ω) ∈ B^c}, i = 1, 2, · · · , n that Nec{ξi(ω) ∈ B}, i = 1, 2, · · · , n are iid random variables. (c) It follows from the definition of iid fuzzy random variables that (Pos{ξi(ω) ∈ B}, Pos{ξi(ω) ∈ B^c}), i = 1, 2, · · · , n are iid random vectors. Since, for each i,

Cr{ξi(ω) ∈ B} = (1/2)(Pos{ξi(ω) ∈ B} + 1 − Pos{ξi(ω) ∈ B^c}),

the credibilities Cr{ξi(ω) ∈ B}, i = 1, 2, · · · , n are iid random variables.

Theorem 5.12 Let f: ℜ → ℜ be a measurable function. If ξ1, ξ2, · · · , ξn are iid fuzzy random variables, then f(ξ1), f(ξ2), · · · , f(ξn) are iid fuzzy random variables.

Proof: We have proved in Theorem 5.4 that f(ξ1), f(ξ2), · · · , f(ξn) are fuzzy random variables. For any positive integer m and Borel sets B1, B2, · · · , Bm of ℜ, since f^{−1}(B1), f^{−1}(B2), · · · , f^{−1}(Bm) are Borel sets, we know that

(Pos{ξi(ω) ∈ f^{−1}(B1)}, Pos{ξi(ω) ∈ f^{−1}(B2)}, · · · , Pos{ξi(ω) ∈ f^{−1}(Bm)}),  i = 1, 2, · · · , n

are iid random vectors. Equivalently, the random vectors

(Pos{f(ξi(ω)) ∈ B1}, Pos{f(ξi(ω)) ∈ B2}, · · · , Pos{f(ξi(ω)) ∈ Bm}),  i = 1, 2, · · · , n

are iid. Hence f(ξ1), f(ξ2), · · · , f(ξn) are iid fuzzy random variables.

Theorem 5.13 (Liu and Liu [82]) If ξ1, ξ2, · · · , ξn are iid fuzzy random vari-ables such that E[ξ1(ω)], E[ξ2(ω)], · · ·, E[ξn(ω)] are all finite for each ω, thenE[ξ1(ω)], E[ξ2(ω)], · · ·, E[ξn(ω)] are iid random variables.

Proof: For any ω ∈ Ω, it follows from the expected value operator that

E[ξi(ω)] =∫ +∞

0

Cr{ξi(ω) ≥ r}dr −∫ 0

−∞Cr{ξi(ω) ≤ r}dr

= limj→∞

limk→∞

⎛⎝ 2k∑l=1

j

2kCr{

ξi(ω) ≥ lj

2k

}−

2k∑l=1

j

2kCr{

ξi(ω) ≤ − lj

2k

}⎞⎠for i = 1, 2, · · · , n. Now we write

η+i (ω) =

∫ ∞

0

Cr{ξi(ω) ≥ r}dr, η−i (ω) =

∫ 0

−∞Cr{ξi(ω) ≤ r}dr,

η+ij(ω) =

∫ j

0

Cr{ξi(ω) ≥ r}dr, η−ij(ω) =

∫ 0

−j

Cr{ξi(ω) ≤ r}dr,

η+ijk(ω) =

2k∑l=1

j

2kCr{

ξi(ω) ≥ lj

2k

}, η−

ijk(ω) =2k∑l=1

j

2kCr{

ξi(ω) ≤ − lj

2k

}for any positive integers j, k and i = 1, 2, · · · , n. It follows from the mono-tonicity of the functions Cr{ξi(ω) ≥ r} and Cr{ξi(ω) ≤ r} that the sequences{η+

ijk(ω)} and {η−ijk(ω)} satisfy (a) for each j and k,

(η+ijk(ω), η−

ijk(ω)), i =

1, 2, · · · , n are iid random vectors; and (b) for each i and j, η+ijk(ω) ↑ η+

ij(ω),and η−

ijk(ω) ↑ η−ij(ω) as k →∞.

For any real numbers x, y, xi, yi, i = 1, 2, · · · , n, it follows from the prop-erty (a) that

Pr

{η+ijk(ω) ≤ xi, η

−ijk(ω) ≤ yi

i = 1, 2, · · · , n

}=

n∏i=1

Pr{η+ijk(ω) ≤ xi, η

−ijk(ω) ≤ yi

},

Page 214: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

204 Chapter 5 - Fuzzy Random Theory

Pr{η+ijk(ω) ≤ x, η−

ijk(ω) ≤ y}

= Pr{η+i′jk(ω) ≤ x, η−

i′jk(ω) ≤ y}

, ∀i, i′.

It follows from the property (b) that{η+ijk(ω) ≤ xi, η

−ijk(ω) ≤ yi

i = 1, 2, · · · , n

}→{

η+ij(ω) ≤ xi, η

−ij(ω) ≤ yi

i = 1, 2, · · · , n

},

{η+ijk(ω) ≤ x, η−

ijk(ω) ≤ y}→{η+ij(ω) ≤ x, η−

ij(ω) ≤ y}

as k →∞. By using the probability continuity theorem, we get

Pr

{η+ij(ω) ≤ xi, η

−ij(ω) ≤ yi

i = 1, 2, · · · , n

}=

n∏i=1

Pr{η+ij(ω) ≤ xi, η

−ij(ω) ≤ yi

},

Pr{η+ij(ω) ≤ x, η−

ij(ω) ≤ y}

= Pr{η+i′j(ω) ≤ x, η−

i′j(ω) ≤ y}

, ∀i, i′.

Thus(η+ij(ω), η−

ij(ω)), i = 1, 2, · · · , n are iid random vectors, and satisfy (c)

for each j,(η+ij(ω), η−

ij(ω)), i = 1, 2, · · · , n are iid random vectors; and (d) for

each i, η+ij(ω) ↑ η+

i (ω) and η−ij(ω) ↑ η−

i (ω) as j →∞.A similar process may prove that

(η+i (ω), η−

i (ω)), i = 1, 2, · · · , n are iid

random vectors. Thus E[ξ1(ω)], E[ξ2(ω)], · · · , E[ξn(ω)] are iid random vari-ables. The Theorem is proved.

5.5 Expected Value Operator

Expected value of fuzzy random variable has been defined as a fuzzy numberin several ways, for example, Kwakernaak [51], Puri and Ralescu [118], andKruse and Meyer [50]. However, in practice, we need a scalar expected valueoperator of fuzzy random variables.

Definition 5.13 (Liu and Liu [82]) Let ξ be a fuzzy random variable. Thenits expected value is defined by

E[ξ] =∫ +∞

0

Pr{ω ∈ Ω

∣∣ E[ξ(ω)] ≥ r}

dr−∫ 0

−∞Pr{ω ∈ Ω

∣∣ E[ξ(ω)] ≤ r}

dr

provided that at least one of the two integrals is finite.

Remark 5.3: The reader may wonder why the expected value operator Eappears in both sides of the definitions of E[ξ]. In fact, the symbol E repre-sents different meanings—it is overloaded. That is, the overloading allows usto use the same symbol E for different expected value operators, because wecan deduce the meaning from the type of argument.

Page 215: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 5.5 - Expected Value Operator 205

Example 5.9: Assume that ξ is a fuzzy random variable defined as

ξ = (ρ, ρ + 1, ρ + 2), with ρ ∼ N (0, 1).

Then for each ω ∈ Ω, we have E[ξ(ω)] = 14 [ρ(ω)+2(ρ(ω)+1)+ (ρ(ω)+2)] =

ρ(ω) + 1. Thus E[ξ] = E[ρ] + 1 = 1.

Theorem 5.14 Assume that ξ and η are fuzzy random variables with finiteexpected values. If for each ω ∈ Ω, the fuzzy variables ξ(ω) and η(ω) areindependent, then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (5.18)

Proof: For any ω ∈ Ω, since ξ(ω) and η(ω) are independent fuzzy variables,we have E[aξ(ω) + bη(ω)] = aE[ξ(ω)] + bE[η(ω)] by the linearity of expectedvalue operator of independent fuzzy variable. It follows that

E[aξ + bη] = E [aE[ξ(ω)] + bE[η(ω)]]= aE [E[ξ(ω)]] + bE [E[η(ω)]]= aE[ξ] + bE[η].

The theorem is proved.

Continuity Theorems

Theorem 5.15 (Yang and Liu [150]) (a) Let ξ, ξ1, ξ2, · · · be fuzzy randomvariables such that ξi(ω) ↑ ξ(ω) uniformly for each ω ∈ Ω. If there exists afuzzy random variable η with finite expected value such that ξi ≥ η for all i,then we have

limi→∞

E[ξi] = E[ξ]. (5.19)

(b) Let ξ, ξ1, ξ2, · · · be fuzzy random variables such that ξi(ω) ↓ ξ(ω) uniformlyfor each ω ∈ Ω. If there exists a fuzzy random variable η with finite expectedvalue such that ξi ≤ η for all i, then we have

limi→∞

E[ξi] = E[ξ]. (5.20)

Proof: (a) For each ω ∈ Ω, since ξi(ω) ↑ ξ(ω) uniformly, it follows fromTheorem 3.41 that E[ξi(ω)] ↑ E[ξ(ω)]. Since ξi ≥ η, we have E[ξi(ω)] ≥E[η(ω)]. Thus {E[ξi(ω)]} is an increasing sequence of random variables suchthat E[ξi(ω)] ≥ E[η(ω)], where E[η(ω)] is a random variable with finiteexpected value. It follows from Theorem 2.34 that (5.19) holds. The part (b)may be proved similarly.

Theorem 5.16 (Yang and Liu [150]) Let ξ, ξ1, ξ2, · · · be a sequence of fuzzyrandom variables such that ξi(ω)→ ξ(ω) uniformly for each ω ∈ Ω. If thereexists a fuzzy random variable η with finite expected value such that |ξi| ≤ ηfor all i, then we have

limi→∞

E[ξi] = E[ξ]. (5.21)

Page 216: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

206 Chapter 5 - Fuzzy Random Theory

Proof: For each ω ∈ Ω, since ξi(ω)→ ξ(ω) uniformly, it follows from Theo-rem 3.41 that E[ξi(ω)]→ E[ξ(ω)]. Since |ξi| ≤ η, we have E[ξi(ω)] ≤ E[η(ω)].Thus {E[ξi(ω)]} is a sequence of random variables such that E[ξi(ω) ≤E[η(ω)], where E[η(ω)] is a random variable with finite expected value. Itfollows from Theorem 2.36 that (5.21) holds.

5.6 Variance, Covariance and Moments

Definition 5.14 (Liu and Liu [82]) Let ξ be a fuzzy random variable withfinite expected value E[ξ]. Then the variance of ξ is V [ξ] = E[(ξ − E[ξ])2].

Theorem 5.17 If ξ is a fuzzy random variable with finite expected value, aand b are real numbers, then V [aξ + b] = a2V [ξ].

Proof: It follows from the definition of variance that

V [aξ + b] = E[(aξ + b− aE[ξ]− b)2

]= a2E[(ξ − E[ξ])2] = a2V [ξ].

Theorem 5.18 Assume that ξ is a fuzzy random variable whose expectedvalue exists. Then we have

V [E[ξ(ω)]] ≤ V [ξ]. (5.22)

Proof: Denote the expected value of ξ by e. It follows from Theorem 3.53that

V [E[ξ(ω)]] = E[(E[ξ(ω)]− e)2

]≤ E

[E[(ξ(ω)− e)2

]]= V [ξ].

The theorem is proved.

Theorem 5.19 Let ξ be a fuzzy random variable with expected value e. ThenV [ξ] = 0 if and only if Ch{ξ = e}(1) = 1.

Proof: Assume V [ξ] = 0. It follows from V [ξ] = E[(ξ − e)2] that∫ +∞

0

Pr{ω ∈ Ω

∣∣ E[(ξ(ω)− e)2] ≥ r}

dr = 0

which implies that Pr{ω ∈ Ω|E[(ξ(ω) − e)2] ≥ r} = 0 for any r > 0. There-fore, Pr{ω ∈ Ω|E[(ξ(ω) − e)2] = 0} = 1. That is, there exists a set A∗ withPr{A∗} = 1 such that E[(ξ(ω) − e)2] = 0 for each ω ∈ A∗. It follows fromTheorem 3.47 that Cr{ξ(ω) = e} = 1 for all ω ∈ A∗. Hence

Ch{ξ = e}(1) = supPr{A}≥1

infω∈A

Cr{ξ(ω) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 5.5 that thereexists a set A∗ with Pr{A∗} = 1 such that

infω∈A∗

Cr{ξ(ω) = e} = 1.

Page 217: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 5.7 - Optimistic and Pessimistic Values 207

That is, Cr{(ξ(ω)− e)2 ≥ r} = 0 for each r > 0 and each ω ∈ A∗. Thus

E[(ξ(ω)− e)2] =∫ +∞

0

Cr{(ξ(ω)− e)2 ≥ r}dr = 0

for all ω ∈ A∗. It follows that Pr{ω ∈ Ω|E[(ξ(ω)− e)2] ≥ r

}= 0 for any

r > 0. Hence

V [ξ] =∫ +∞

0

Pr{ω ∈ Ω

∣∣ E[(ξ(ω)− e)2] ≥ r}

dr = 0.

The theorem is proved.

Definition 5.15 Let ξ and η be fuzzy random variables such that E[ξ] andE[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E [(ξ − E[ξ])(η − E[η])] . (5.23)

Definition 5.16 For any positive integer k, the expected value E[ξk] is calledthe kth moment of the fuzzy random variable ξ. The expected value E[(ξ −E[ξ])k] is called the kth central moment of the fuzzy random variable ξ.

5.7 Optimistic and Pessimistic Values

Let ξ be a fuzzy random variable. In order to measure it, we define twocritical values: optimistic value and pessimistic value.

Definition 5.17 (Liu [73]) Let ξ be a fuzzy random variable, and γ, δ ∈(0, 1]. Then

ξsup(γ, δ) = sup{r∣∣ Ch {ξ ≥ r} (γ) ≥ δ

}(5.24)

is called the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r∣∣ Ch {ξ ≤ r} (γ) ≥ δ

}(5.25)

is called the (γ, δ)-pessimistic value to ξ.

That is, the fuzzy random variable ξ will reach upwards of the (γ, δ)-optimisticvalue ξsup(γ, δ) with credibility δ at probability γ, and will be below the(γ, δ)-pessimistic value ξinf(γ, δ) with credibility δ at probability γ.

Remark 5.4: If the fuzzy random variable ξ becomes a random variable andδ > 0, then the (γ, δ)-optimistic value is ξsup(γ) = sup{r|Pr{ξ ≥ r} ≥ γ},and the (γ, δ)-pessimistic value is ξinf(γ) = inf{r|Pr{ξ ≤ r} ≥ γ}. Thiscoincides with the stochastic case.

Remark 5.5: If the fuzzy random variable ξ becomes a fuzzy variable andγ > 0, then the (γ, δ)-optimistic value is ξsup(δ) = sup{r|Cr{ξ ≥ r} ≥ δ}, andthe (γ, δ)-pessimistic value is ξinf(δ) = inf{r|Cr{ξ ≤ r} ≥ δ}. This coincideswith the fuzzy case.

Page 218: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

208 Chapter 5 - Fuzzy Random Theory

Theorem 5.20 Let ξ be a fuzzy random variable. Assume that ξsup(γ, δ) isthe (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. Ifδ > 0.5, then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (5.26)

Proof: It follows from the definition of (γ, δ)-pessimistic value that thereexists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ξinf(γ, δ) as i→∞. Thus we have

limi→∞

Ch{ξ ≤ xi}(γ) ≥ δ > 0.5.

It follows from Theorem 5.9 that

Ch{ξ ≤ ξinf(γ, δ)}(γ) = limi→∞

Ch{ξ ≤ xi}(γ) ≥ δ.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥δ and xi ↑ ξsup(γ, δ) as i→∞. Thus we have

limi→∞

Ch{ξ ≥ xi}(γ) ≥ δ > 0.5.

It follows from Theorem 5.10 that

Ch{ξ ≥ ξsup(γ, δ)}(γ) = limi→∞

Ch{ξ ≥ xi}(γ) ≥ δ.

The theorem is proved.

Example 5.10: However, if δ ≤ 0.5, it is possible that the inequalities

Ch{ξ ≥ ξsup(γ, δ)}(γ) < δ, Ch{ξ ≤ ξinf(γ, δ)}(γ) < δ

hold. Suppose that Ω = {ω1, ω2}, Pr{ω1} = 0.5, and Pr{ω2} = 0.5. Let ξ bea fuzzy random variable defined on (Ω,A,Pr) as

ξ(θ) =

{η, if θ = θ1

0, if θ = θ2

where η is a fuzzy variable whose membership function is defined by

μ(x) =

{1, if x ∈ (−1, 1)0, otherwise.

Then we have

ξsup(0.5, 0.5) = 1 and Ch{ξ ≥ 1}(0.5) = 0 < 0.5;

ξinf(0.5, 0.5) = −1 and Ch{ξ ≤ −1}(0.5) = 0 < 0.5.

Page 219: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 5.7 - Optimistic and Pessimistic Values 209

Theorem 5.21 (Lu [88]) Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimisticand (γ, δ)-pessimistic values of fuzzy random variable ξ, respectively. If γ ≤0.5, then we have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (5.27)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (5.28)

where δ1 and δ2 are defined by

δ1 = supω∈Ω{ξ(ω)sup(1− δ)− ξ(ω)inf(1− δ)} ,

δ2 = supω∈Ω{ξ(ω)sup(δ)− ξ(ω)inf(δ)} ,

and ξ(ω)sup(δ) and ξ(ω)inf(δ) are δ-optimistic and δ-pessimistic values offuzzy variable ξ(ω) for each ω, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Ω1 ={ω ∈ Ω

∣∣ Cr {ξ(ω) > ξsup(γ, δ) + ε} ≥ δ}

,

Ω2 ={ω ∈ Ω

∣∣ Cr {ξ(ω) < ξinf(γ, δ)− ε} ≥ δ}

.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Pr{Ω1} < γand Pr{Ω2} < γ. Thus Pr{Ω1}+ Pr{Ω2} < γ + γ ≤ 1. This fact implies thatΩ1 ∪ Ω2 �= Ω. Let ω∗ �∈ Ω1 ∪ Ω2. Then we have

Cr {ξ(ω∗) > ξsup(γ, δ) + ε} < δ,

Cr {ξ(ω∗) < ξinf(γ, δ)− ε} < δ.

Since Cr is self dual, we have

Cr {ξ(ω∗) ≤ ξsup(γ, δ) + ε} > 1− δ,

Cr {ξ(ω∗) ≥ ξinf(γ, δ)− ε} > 1− δ.

It follows from the definitions of ξ(ω∗)sup(1− δ) and ξ(ω∗)inf(1− δ) that

ξsup(γ, δ) + ε ≥ ξ(ω∗)inf(1− δ),

ξinf(γ, δ)− ε ≤ ξ(ω∗)sup(1− δ)

which implies that

ξinf(γ, δ)− ε− (ξsup(γ, δ) + ε) ≤ ξ(ω∗)sup(1− δ)− ξ(ω∗)inf(1− δ) ≤ δ1.

Letting ε→ 0, we obtain (5.27).

Page 220: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

210 Chapter 5 - Fuzzy Random Theory

Next we prove the inequality (5.28). Assume γ > 0.5. For any givenε > 0, we define

Ω1 ={ω ∈ Ω

∣∣ Cr {ξ(ω) ≥ ξsup(γ, δ)− ε} ≥ δ}

,

Ω2 ={ω ∈ Ω

∣∣ Cr {ξ(ω) ≤ ξinf(γ, δ) + ε} ≥ δ}

.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Pr{Ω1} ≥ γand Pr{Ω2} ≥ γ. Thus Pr{Ω1}+ Pr{Ω2} ≥ γ + γ > 1. This fact implies thatΩ1 ∩ Ω2 �= ∅. Let ω∗ ∈ Ω1 ∩ Ω2. Then we have

Cr {ξ(ω∗) ≥ ξsup(γ, δ)− ε} ≥ δ,

Cr {ξ(ω∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(ω∗)sup(δ) and ξ(ω∗)inf(δ) that

ξsup(γ, δ)− ε ≤ ξ(ω∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(ω∗)inf(δ)

which implies that

ξsup(γ, δ)− ε− (ξinf(γ, δ) + ε) ≤ ξ(ω∗)sup(δ)− ξ(ω∗)inf(δ) ≤ δ2.

The inequality (5.28) is proved by letting ε→ 0.

5.8 Convergence Concepts

This section introduces four types of sequence convergence concept: conver-gence a.s., convergence in chance, convergence in mean, and convergence indistribution.

Definition 5.18 Suppose that ξ, ξ1, ξ2, · · · are fuzzy random variables definedon the probability space (Ω,A,Pr). The sequence {ξi} is said to be convergenta.s. to ξ if and only if there exists a set A ∈ A with Pr{A} = 1 such that{ξi(ω)} converges a.s. to ξ(ω) for every ω ∈ A.

Definition 5.19 Suppose that ξ, ξ1, ξ2, · · · are fuzzy random variables. Wesay that the sequence {ξi} converges in chance to ξ if

limi→∞

limα↓0

Ch {|ξi − ξ| ≥ ε} (α) = 0 (5.29)

for every ε > 0.

Definition 5.20 Suppose that ξ, ξ1, ξ2, · · · are fuzzy random variables withfinite expected values. We say that the sequence {ξi} converges in mean to ξif

limi→∞

E[|ξi − ξ|] = 0. (5.30)

Definition 5.21 Suppose that Φ,Φ1,Φ2, · · · are the chance distributions offuzzy random variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} convergesin distribution to ξ if Φi(x;α)→ Φ(x;α) for all continuity points (x;α) of Φ.

Page 221: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 5.10 - Fuzzy Random Simulations 211

5.9 Laws of Large Numbers

Theorem 5.22 (Yang and Liu [150]) Let {ξi} be a sequence of independentbut not necessarily identically distributed fuzzy random variables with a com-mon expected value e. If there exists a number a > 0 such that V [ξi] < a forall i, then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probabilityto e as n→∞.

Proof: Since {ξi} is a sequence of independent fuzzy random variables, weknow that {E[ξi(ω)]} is a sequence of independent random variables. Byusing Theorem 5.18, we get V [E[ξi(ω)]] ≤ V [ξi] < a for each i. It followsfrom the weak law of large numbers of random variable that (E[ξ1(ω)] +E[ξ2(ω)] + · · ·+ E[ξn(ω)])/n converges in probability to e.

Theorem 5.23 (Yang and Liu [150]) Let {ξi} be a sequence of iid fuzzyrandom variables with a finite expected value e. Then (E[ξ1(ω)] +E[ξ2(ω)] +· · ·+ E[ξn(ω)])/n converges in probability to e as n→∞.

Proof: Since {ξi} is a sequence of iid fuzzy random variables with a finiteexpected value e, we know that {E[ξi(ω)]} is a sequence of iid random vari-ables with finite expected e. It follows from the weak law of large numbersof random variable that (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n convergesin probability to e.

Theorem 5.24 (Yang and Liu [150]) Let ξ1, ξ2, · · · , ξn be independent fuzzyrandom variables with a common expected value e. If

∞∑i=1

V [ξi]i2

<∞, (5.31)

then (E[ξ1(ω)] + E[ξ2(ω)] + · · ·+ E[ξn(ω)])/n converges a.s. to e as n→∞.

Proof: Since {ξi} is a sequence of independent fuzzy random variables, weknow that {E[ξi(ω)]} is a sequence of independent random variables. Byusing Theorem 5.18, we get V [E[ξi(ω)]] ≤ V [ξi] for each i. It follows fromthe strong law of large numbers of random variable that (E[ξ1(ω)]+E[ξ2(ω)]+· · ·+ E[ξn(ω)])/n converges a.s. to e.

Theorem 5.25 (Liu and Liu [82]) Suppose that {ξi} is a sequence of iidfuzzy random variables with a finite expected value e. Then (E[ξ1(ω)] +E[ξ2(ω)] + · · ·+ E[ξn(ω)])/n converges a.s. to e as n→∞.

Proof: Since {ξi} is a sequence of iid fuzzy random variables with a finiteexpected value e, Theorem 5.13 implies that {E[ξi(ω)]} is a sequence of iidrandom variables with an expected value e. It follows from the strong law oflarge numbers of random variable that (E[ξ1(ω)]+E[ξ2(ω)]+· · ·+E[ξn(ω)])/nconverges a.s. to e as n→∞.

Page 222: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

212 Chapter 5 - Fuzzy Random Theory

5.10 Fuzzy Random Simulations

Let us introduce fuzzy random simulations for finding critical values [73],computing chance functions [74], and calculating expected value [84].

Example 5.11: Suppose that ξ is an n-dimensional fuzzy random vectordefined on the probability space (Ω,A,Pr), and f : �n → �m is a measurablefunction. For any real number α ∈ (0, 1], we design a fuzzy random simulationto compute the α-chance Ch {f(ξ) ≤ 0} (α). That is, we should find thesupremum β such that

Pr{ω ∈ Ω

∣∣ Cr {f(ξ(ω)) ≤ 0} ≥ β}≥ α. (5.32)

First, we sample ω1, ω2, · · · , ωN from Ω according to the probability measurePr, and estimate βk = Cr{f(ξ(ωk)) ≤ 0} for k = 1, 2, · · · , N by fuzzy simu-lation. Let N ′ be the integer part of αN . Then the value β can be taken asthe N ′th largest element in the sequence {β1, β2, · · · , βN}.

Algorithm 5.1 (Fuzzy Random Simulation)Step 1. Generate ω1, ω2, · · · , ωN from Ω according to the probability mea-

sure Pr.Step 2. Compute the credibilities βk = Cr{f(ξ(ωk) ≤ 0} for k = 1, 2, · · · , N

by fuzzy simulation.Step 3. Set N ′ as the integer part of αN .Step 4. Return the N ′th largest element in {β1, β2, · · · , βN}.

Now we consider the following two fuzzy random variables,

ξ1 = (ρ1, ρ1 + 1, ρ1 + 2), with ρ1 ∼ N (0, 1),ξ2 = (ρ2, ρ2 + 1, ρ2 + 2), with ρ2 ∼ N (1, 2).

A run of fuzzy random simulation with 5000 cycles shows that

Ch{ξ1 + ξ2 ≥ 0}(0.8) = 0.88.

Example 5.12: Assume that ξ is an n-dimensional fuzzy random vector onthe probability space (Ω,A,Pr), and f : �n → � is a measurable function.For any given confidence levels α and β, the problem is to find the maximalvalue f such that

Ch{f(ξ) ≥ f

}(α) ≥ β (5.33)

holds. That is, we should compute the maximal value f such that

Pr{ω ∈ Ω

∣∣ Cr{f(ξ(ω)) ≥ f

}≥ β}≥ α (5.34)

holds. We sample ω1, ω2, · · · , ωN from Ω according to the probability measurePr, and estimate fk = sup {fk|Cr{f(ξ(ωk)) ≥ fk} ≥ β} for k = 1, 2, · · · , N

Page 223: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 5.10 - Fuzzy Random Simulations 213

by fuzzy simulation. Let N ′ be the integer part of αN . Then the value f canbe taken as the N ′th largest element in the sequence {f1, f2, · · · , fN}.

Algorithm 5.2 (Fuzzy Random Simulation)Step 1. Generate ω1, ω2, · · · , ωN from Ω according to the probability mea-

sure Pr.Step 2. Find fk = sup {fk|Cr{f(ξ(ωk)) ≥ fk} ≥ β} for k = 1, 2, · · · , N by

fuzzy simulation.Step 3. Set N ′ as the integer part of αN .Step 4. Return the N ′th largest element in {f1, f2, · · · , fN}.

We now find the maximal value f such that Ch{ξ21 + ξ2

2 ≥ f}(0.9) ≥ 0.9,where ξ1 and ξ2 are fuzzy random variables defined as

ξ1 = (ρ1, ρ1 + 1, ρ1 + 2), with ρ1 ∼ U(1, 2),ξ2 = (ρ2, ρ2 + 1, ρ2 + 2), with ρ2 ∼ U(2, 3).

A run of fuzzy random simulation with 5000 cycles shows that f = 7.89.

Example 5.13: Assume that ξ is an n-dimensional fuzzy random vectoron the probability space (Ω,A,Pr), and f : �n → � is a measurable func-tion. One problem is to calculate the expected value E[f(ξ)]. Note that, foreach ω ∈ Ω, we may calculate the expected value E[f(ξ(ω)] by fuzzy simu-lation. Since E[f(ξ)] is essentially the expected value of stochastic variableE[f(ξ(ω)], we may combine stochastic simulation and fuzzy simulation toproduce a fuzzy random simulation.

Algorithm 5.3 (Fuzzy Random Simulation)Step 1. Set e = 0.Step 2. Sample ω from Ω according to the probability measure Pr.Step 3. e ← e + E[f(ξ(ω))], where E[f(ξ(ω))] may be calculated by the

fuzzy simulation.Step 4. Repeat the second to fourth steps N times.Step 5. E[f(ξ)] = e/N .

We employ the fuzzy random simulation to calculate the expected valueof ξ1ξ2, where ξ1 and ξ2 are fuzzy random variables defined as

ξ1 = (ρ1, ρ1 + 1, ρ1 + 2), with ρ1 ∼ EXP(1),ξ2 = (ρ2, ρ2 + 1, ρ2 + 2), with ρ2 ∼ EXP(2).

A run of fuzzy random simulation with 5000 cycles shows that E[ξ1ξ2] = 6.34.

Page 224: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Chapter 6

Random Fuzzy Theory

Liu [75] initialized the concept of random fuzzy variable and defined thechance of random fuzzy event as a function from (0,1] to [0,1]. In order torank random fuzzy variables, Liu and Liu [83] presented a scalar expectedvalue operator, and Liu [75] presented the concepts of optimistic and pes-simistic values. In order to describe random fuzzy variable, Zhu and Liu[163] presented the concept of chance distribution.

The emphasis in this chapter is mainly on random fuzzy variable, ran-dom fuzzy arithmetic, chance measure, chance distribution, independent andidentical distribution, expected value operator, variance, critical values, con-vergence concepts, and random fuzzy simulation.

6.1 Random Fuzzy Variables

Roughly speaking, a random fuzzy variable is a fuzzy variable taking “randomvariable” values. Formally, we have the following definition.

Definition 6.1 (Liu [75]) A random fuzzy variable is a function from thepossibility space (Θ,P(Θ),Pos) to the set of random variables.

Example 6.1: Let η1, η2, · · · , ηm be random variables, and u1, u2, · · · , um

real numbers in [0, 1] such that u1 ∨ u2 ∨ · · · ∨ um = 1. Then

ξ =

⎧⎪⎪⎨⎪⎪⎩η1 with possibility u1

η2 with possibility u2

· · ·ηm with possibility um

is clearly a random fuzzy variable. Is it a function from a possibility space(Θ,P(Θ),Pos) to the set of random variables? Yes. For example, we defineΘ = {1, 2, · · · ,m}, Pos{i} = ui, i = 1, 2, · · · ,m, and the function is ξ(i) =ηi, i = 1, 2, · · · ,m.

Page 225: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

216 Chapter 6 - Random Fuzzy Theory

Example 6.2: If η is a random variable, and a is a fuzzy variable definedon the possibility space (Θ,P(Θ),Pos), then ξ = η + a is a fuzzy randomvariable. In fact, ξ is also a random fuzzy variable, defined by

ξ(θ) = η + a(θ), ∀θ ∈ Θ.

Example 6.3: Let ξ ∼ N (ρ, 1), where ρ is a fuzzy variable with membershipfunction μρ(x) = [1− |x− 2|] ∨ 0. Then ξ is a random fuzzy variable taking“normally distributed variable N (ρ, 1)” values.

Example 6.4: In many statistics problems, the probability distribution iscompletely known except for the values of one or more parameters. Forexample, it might be known that the lifetime ξ of a modern engine is anexponentially distributed variable with an unknown expected value θ, andhas the following form of probability density function,

φ(x) =

⎧⎨⎩1θe−x/θ, if 0 ≤ x <∞

0, otherwise.

Usually, there is some relevant information in practice. It is thus possible tospecify an interval in which the value of θ is likely to lie, or to give an ap-proximate estimate of the value of θ. It is typically not possible to determinethe value of θ exactly. If the value of θ is provided as a fuzzy variable, thenξ is a random fuzzy variable.

Theorem 6.1 Assume that ξ is a random fuzzy variable. Then the proba-bility Pr{ξ(θ) ∈ B} is a fuzzy variable for any Borel set B of �.

Proof: If ξ is a random fuzzy variable on the possibility space (Θ,P(Θ),Pos),then the probability Pr{ξ(θ) ∈ B} is obviously a fuzzy variable since it is afunction from the possibility space to the set of real numbers (in fact, theinterval [0, 1]).

Theorem 6.2 Let ξ be a random fuzzy variable. If the expected value E[ξ(θ)]is finite for each θ, then E[ξ(θ)] is a fuzzy variable.

Proof: If ξ is a random fuzzy variable on the possibility space (Θ,P(Θ),Pos),then the expected value E[ξ(θ)] is obviously a fuzzy variable since it is afunction from the possibility space to the set of real numbers.

Definition 6.2 An n-dimensional random fuzzy vector is a function from thepossibility space (Θ,P(Θ),Pos) to the set of n-dimensional random vectors.

Theorem 6.3 The vector (ξ1, ξ2, · · · , ξn) is a random fuzzy vector if and onlyif ξ1, ξ2, · · · , ξn are random fuzzy variables.

Page 226: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.1 - Random Fuzzy Variables 217

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a random fuzzy vectoron the possibility space (Θ,P(Θ),Pos). Then, for each θ ∈ Θ, the vector ξ(θ)is a random vector. It follows from Theorem 2.6 that ξ1(θ), ξ2(θ), · · · , ξn(θ)are random variables. Thus ξ1, ξ2, · · · , ξn are random fuzzy variables.

Conversely, suppose that ξ1, ξ2, · · · , ξn are random fuzzy variables onthe possibility space (Θ,P(Θ),Pos). Then, for each θ ∈ Θ, the variablesξ1(θ), ξ2(θ), · · · , ξn(θ) are random variables. It follows from Theorem 2.6that ξ(θ) = (ξ1(θ), ξ2(θ), · · · , ξn(θ)) is a random vector. Thus ξ is a randomfuzzy vector.

Theorem 6.4 Let ξ be an n-dimensional random fuzzy vector, and f : �n →� a measurable function. Then f(ξ) is a random fuzzy variable.

Proof: For each θ ∈ Θ, ξ(θ) is a random vector and f(ξ(θ)) is a randomvariable. Thus f(ξ) is a random fuzzy variable since it is a function from apossibility space to the set of random variables.

Random Fuzzy Arithmetic

Definition 6.3 (Liu [75], Random Fuzzy Arithmetic on Single Space) Let f :�n → � be a measurable function, and ξ1, ξ2, · · · , ξn random fuzzy variableson the possibility space (Θ,P(Θ),Pos). Then ξ = f(ξ1, ξ2, · · · , ξn) is a randomfuzzy variable defined as

ξ(θ) = f(ξ1(θ), ξ2(θ), · · · , ξn(θ)), ∀θ ∈ Θ. (6.1)

Example 6.5: Let ξ1 and ξ2 be two random fuzzy variables defined on thepossibility space (Θ,P(Θ),Pos). Then the sum ξ = ξ1 + ξ2 is a random fuzzyvariable defined by

ξ(θ) = ξ1(θ) + ξ2(θ), ∀θ ∈ Θ.

The product ξ = ξ1ξ2 is also a random fuzzy variable defined by

ξ(θ) = ξ1(θ) · ξ2(θ), ∀θ ∈ Θ.

Definition 6.4 (Liu [75], Random Fuzzy Arithmetic on Different Spaces)Let f : �n → � be a measurable function, and ξi random fuzzy variableson the possibility spaces (Θi,P(Θi),Posi), i = 1, 2, · · · , n, respectively. Thenξ = f(ξ1, ξ2, · · · , ξn) is a random fuzzy variable on the product possibilityspace (Θ,P(Θ),Pos), defined as

ξ(θ1, θ2, · · · , θn) = f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn)) (6.2)

for all (θ1, θ2, · · · , θn) ∈ Θ.

Page 227: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

218 Chapter 6 - Random Fuzzy Theory

Example 6.6: Let ξ1 and ξ2 be two random fuzzy variables defined onthe possibility spaces (Θ1,P(Θ1),Pos1) and (Θ2,P(Θ2),Pos2), respectively.Then the sum ξ = ξ1 + ξ2 is a random fuzzy variable on the possibility space(Θ1 ×Θ2,P(Θ1 ×Θ2),Pos1 ∧ Pos2), defined by

ξ(θ1, θ2) = ξ1(θ1) + ξ2(θ2), ∀(θ1, θ2) ∈ Θ1 ×Θ2.

The product ξ = ξ1ξ2 is a random fuzzy variable defined on the possibilityspace (Θ1 ×Θ2,P(Θ1 ×Θ2),Pos1 ∧ Pos2) as

ξ(θ1, θ2) = ξ1(θ1) · ξ2(θ2), ∀(θ1, θ2) ∈ Θ1 ×Θ2.

Example 6.7: Let ξ1 and ξ2 be two random fuzzy variables defined asfollows,

ξ1 ∼{N (u1, σ

21) with possibility 0.7

N (u2, σ22) with possibility 1.0,

ξ2 ∼{N (u3, σ

23) with possibility 1.0

N (u4, σ24) with possibility 0.8.

Then the sum of the two random fuzzy variables is also a random fuzzyvariable,

ξ ∼

⎧⎪⎪⎪⎨⎪⎪⎪⎩N (u1 + u3, σ

21 + σ2

3) with possibility 0.7N (u1 + u4, σ

21 + σ2

4) with possibility 0.7N (u2 + u3, σ

22 + σ2

3) with possibility 1.0N (u2 + u4, σ

22 + σ2

4) with possibility 0.8.

6.2 Chance Measure

The chance of fuzzy random event has been defined as a function from (0, 1]to [0, 1]. Analogously, this section introduces the chance of random fuzzyevent.

Definition 6.5 (Liu [75]) Let ξ be a random fuzzy variable, and B a Borelset of �. Then the chance of random fuzzy event ξ ∈ B is a function from(0, 1] to [0, 1], defined as

Ch {ξ ∈ B} (α) = supCr{A}≥α

infθ∈A

Pr {ξ(θ) ∈ B} . (6.3)

Theorem 6.5 Let ξ be a random fuzzy variable, and B a Borel set of �.For any given α∗ > 0.5, we write β∗ = Ch {ξ ∈ B} (α∗). Then we have

Cr{θ ∈ Θ

∣∣ Pr {ξ(θ) ∈ B} ≥ β∗} ≥ α∗. (6.4)

Page 228: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.2 - Chance Measure 219

Proof: It follows from the definition of chance that β∗ is just the supremumof β satisfying

Cr{θ ∈ Θ

∣∣ Pr {ξ(θ) ∈ B} ≥ β}≥ α∗.

Thus there exists an increasing sequence {βi} such that

Cr{θ ∈ Θ

∣∣ Pr {ξ(θ) ∈ B} ≥ βi

}≥ α∗ > 0.5 (6.5)

and βi ↑ β∗ as i→∞. It is easy to verify that{θ ∈ Θ

∣∣ Pr {ξ(θ) ∈ B} ≥ βi

}↓{θ ∈ Θ

∣∣ Pr {ξ(θ) ∈ B} ≥ β∗}as i→∞. It follows from (6.5) and the credibility semicontinuity law that

Cr{θ ∈ Θ

∣∣ Pr {ξ(θ) ∈ B} ≥ β∗}= lim

i→∞Cr{θ ∈ Θ

∣∣ Pr {ξ(θ) ∈ B} ≥ βi

}≥ α∗.

The proof is complete.

Example 6.8: However, if α∗ ≤ 0.5, it is possible that the inequality

Cr{θ ∈ Θ∣∣ Pr{ξ(θ) ∈ B} ≥ β∗} < α∗

holds. For example, let Θ = {θ1, θ2, · · ·} and Pos{θi} = 1 for i = 1, 2, · · · Arandom fuzzy variable ξ is defined on (Θ,P(Θ),Pos) as

ξ(θi) =

{1 with probability 1/(i + 1)0 with probability i/(i + 1)

for i = 1, 2, · · · Then we have

β∗ = Ch{ξ ≤ 0}(0.5) = sup1≤i<∞

i

i + 1= 1.

However,

Cr{θ ∈ Θ

∣∣ Pr{ξ(θ) ≤ 0} ≥ β∗} = Cr{∅} = 0 < 0.5.

Theorem 6.6 (Zhu and Liu [163]) Let ξ be a random fuzzy variable on thepossibility space (Θ,P(Θ),Pos), and B a Borel set of �. Then Ch{ξ ∈ B}(α)is a decreasing function of α, and

limα↓0

Ch {ξ ∈ B} (α) = supθ∈Θ+

Pr {ξ(θ) ∈ B} ; (6.6)

Ch {ξ ∈ B} (1) = infθ∈Θ+

Pr {ξ(θ) ∈ B} (6.7)

where Θ+ is the kernel of (Θ,P(Θ),Pos).

Page 229: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

220 Chapter 6 - Random Fuzzy Theory

Proof: For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

Ch {ξ ∈ B} (α1) = supCr{A}≥α1

infθ∈A

Pr {ξ(θ) ∈ B}

≥ supCr{A}≥α2

infθ∈A

Pr {ξ(θ) ∈ B} = Ch {ξ ∈ B} (α2).

Thus Ch{ξ ∈ B}(α) is a decreasing function of α.Next we prove (6.6). On the one hand, for any α ∈ (0, 1], we have

Ch{ξ ∈ B}(α) = supCr{A}≥α

infθ∈A

Pr {ξ(θ) ∈ B} ≤ supθ∈Θ+

Pr {ξ(θ) ∈ B} .

Letting α ↓ 0, we get

limα↓0

Ch {ξ ∈ B} (α) ≤ supθ∈Θ+

Pr {ξ(θ) ∈ B} . (6.8)

On the other hand, for any θ∗ ∈ Θ+, we write α∗ = Cr{θ∗} > 0. SinceCh{ξ ∈ B}(α) is a decreasing function of α, we have

limα↓0

Ch{ξ ∈ B}(α) ≥ Ch{ξ ∈ B}(α∗) ≥ Pr{ξ(θ∗) ∈ B}

which implies that

limα↓0

Ch {ξ ∈ B} (α) ≥ supθ∈Θ+

Pr {ξ(θ) ∈ B} . (6.9)

It follows from (6.8) and (6.9) that (6.6) holds.Finally, we prove (6.7). On the one hand, for any set A with Cr{A} = 1,

we have Θ+ ⊂ A. Thus

Ch{ξ ∈ B}(1) = supCr{A}≥1

infθ∈A

Pr {ξ(θ) ∈ B} ≤ infθ∈Θ+

Pr {ξ(θ) ∈ B} . (6.10)

On the other hand, since Cr{Θ+} = 1, we have

Ch {ξ ∈ B} (1) ≥ infθ∈Θ+

Pr {ξ(θ) ∈ B} . (6.11)

It follows from (6.10) and (6.11) that (6.7) holds. The theorem is proved.

Theorem 6.7 (Zhu and Liu [163]) Let ξ be a random fuzzy variable, and{Bi} a sequence of Borel sets of �. If α > 0.5 and Bi ↓ B, then we have

limi→∞

Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim

i→∞Bi

}(α). (6.12)

Proof: Since Bi ↓ B, the chance Ch{ξ ∈ Bi}(α) is decreasing with respectto i. Thus the limitation limi→∞ Ch{ξ ∈ Bi}(α) exists and is not less than

Page 230: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.2 - Chance Measure 221

Ch{ξ ∈ B}(α). If the limitation is equal to Ch{ξ ∈ B}(α), then the theoremis proved. Otherwise,

limi→∞

Ch{ξ ∈ Bi}(α) > Ch{ξ ∈ B}(α).

Thus there exists a number z such that

limi→∞

Ch{ξ ∈ Bi}(α) > z > Ch{ξ ∈ B}(α). (6.13)

Hence there exists a set Ai with Cr{Ai} ≥ α such that

infθ∈Ai

Pr{ξ(θ) ∈ Bi} > z

for every i. Since α > 0.5, we may define A = {θ ∈ Θ|Pos{θ} > 2 − 2α}. Itis clear that Cr{A} ≥ α and A ⊂ Ai for all i. Thus,

infθ∈A

Pr{ξ(θ) ∈ Bi} ≥ infθ∈Ai

Pr{ξ(θ) ∈ Bi} > z

for every i. It follows from the probability continuity theorem that

Pr{ξ(θ) ∈ Bi} ↓ Pr{ξ(θ) ∈ B}, ∀θ ∈ Θ.

Thus,Ch{ξ ∈ B}(α) ≥ inf

θ∈APr{ξ(θ) ∈ B} ≥ z

which contradicts to (6.13). The theorem is proved.

Example 6.9: If α ≤ 0.5, then (6.12) may not hold. For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = 1, j = 1, 2, · · ·

Ω = {ω1, ω2, · · ·}, Pr{ωj} = 1/2j , j = 1, 2, · · ·Define a random fuzzy variable ξ as follows

ξ(θi) = ηi, ηi(ωj) =

{1/i, if 1 ≤ j ≤ i

0, if i < j

for i, j = 1, 2, · · · Let Bi = (0, 1/i], i = 1, 2, · · · Then Bi ↓ ∅. However,

Ch{ξ ∈ Bi}(α) = 1− 1/2i → 1 �= 0 = Ch{ξ ∈ ∅}(α).

Theorem 6.8 (Zhu and Liu [163]) (a) Let ξ, ξ1, ξ2, · · · be random fuzzy vari-ables such that ξi(θ) ↑ ξ(θ) for each θ ∈ Θ. If α > 0.5, then for each realnumber r, we have

limi→∞

Ch{ξi ≤ r}(α) = Ch{

limi→∞

ξi ≤ r}

(α). (6.14)

(b) Let ξ, ξ1, ξ2, · · · be random fuzzy variables such that ξi(θ) ↓ ξ(θ) for eachθ ∈ Θ. If α > 0.5, then for each real number r, we have

limi→∞

Ch{ξi ≥ r}(α) = Ch{

limi→∞

ξi ≥ r}

(α). (6.15)

Page 231: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

222 Chapter 6 - Random Fuzzy Theory

Proof: (a) Since ξi(θ) ↑ ξ(θ) for each θ ∈ Θ, we have {ξi(θ) ≤ r} ↓ {ξ(θ) ≤r}. Thus the limitation limi→∞ Ch{ξi ≤ r}(α) exists and is not less thanCh{ξ ≤ r}(α). If the limitation is equal to Ch{ξ ≤ r}(α), the theorem isproved. Otherwise,

limi→∞

Ch{ξi ≤ r}(α) > Ch{ξ ≤ r}(α).

Thus there exists a number z ∈ (0, 1) such that

limi→∞

Ch{ξi ≤ r}(α) > z > Ch{ξ ≤ r}(α). (6.16)

Hence there exists a set Ai with Cr{Ai} ≥ α such that

infθ∈Ai

Pr{ξi(θ) ≤ r} > z

for every i. Since α > 0.5, we may define A = {θ ∈ Θ|Pos{θ} > 2 − 2α}.Then Cr{A} ≥ α and A ⊂ Ai for all i. Thus,

infθ∈A

Pr{ξi(θ) ≤ r} ≥ infθ∈Ai

Pr{ξi(θ) ≤ r} > z

for every i. On the other hand, it follows from Theorem 2.8 that

Pr{ξi(θ) ≤ r} ↓ Pr{ξ(θ) ≤ r}.

Thus,Pr{ξ(θ) ≤ r} ≥ z, ∀θ ∈ A.

Hence we have

Ch{ξ ≤ r}(α) ≥ infθ∈A

Pr{ξ(θ) ≤ r} ≥ z

which contradicts to (6.16). The part (a) is proved. We may prove the part(b) via a similar way.

Variety of Chance Measure

Definition 6.6 (Liu [75]) Let ξ be a random fuzzy variable, and B a Borelset of �. For any real number α ∈ (0, 1], the α-chance of random fuzzy eventξ ∈ B is defined as the value of chance at α, i.e., Ch {ξ ∈ B} (α) where Chdenotes the chance measure.

Definition 6.7 (Liu and Liu [80]) Let ξ be a random fuzzy variable, and Ba Borel set of �. Then the equilibrium chance of random fuzzy event ξ ∈ Bis defined as

Che {ξ ∈ B} = sup0<α≤1

{α∣∣ Ch {ξ ∈ B} (α) ≥ α

}(6.17)

where Ch denotes the chance measure.

Page 232: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.3 - Chance Distribution 223

Remark 6.1: If the chance curve is continuous, then the equilibrium chanceis just the fixed point of chance curve, i.e., the value α ∈ (0, 1] with Ch{ξ ∈B}(α) = α.

Definition 6.8 (Liu and Liu [80]) Let ξ be a random fuzzy variable, and Ba Borel set of �. Then the average chance of random fuzzy event ξ ∈ B isdefined as

Cha {ξ ∈ B} =∫ 1

0

Ch {ξ ∈ B} (α)dα (6.18)

where Ch denotes the chance measure.

Remark 6.2: The average chance (also called mean chance by Liu and Liu[80]) is just the area under the chance curve.

Definition 6.9 A random fuzzy variable ξ is said to be(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Ch {ξ �= x1, ξ �= x2, · · · , ξ �= xm} (α) ≡ 0; (6.19)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Ch {ξ �= x1, ξ �= x2, · · ·} (α) ≡ 0. (6.20)

6.3 Chance Distribution

Definition 6.10 (Zhu and Liu [163]) The chance distribution Φ: [−∞,+∞]×(0, 1]→ [0, 1] of a random fuzzy variable ξ is defined by

Φ(x;α) = Ch {ξ ≤ x} (α). (6.21)

Theorem 6.9 (Zhu and Liu [163]) The chance distribution Φ(x;α) of a ran-dom fuzzy variable is a decreasing and left-continuous function of α for eachfixed x.

Proof: Denote the random fuzzy variable by ξ. For any given α1 and α2

with 0 < α1 < α2 ≤ 1, it follows from Theorem 6.6 that

Φ(x;α1) = Ch{ξ ≤ x}(α1) ≥ Ch{ξ ≤ x}(α2) = Φ(x;α2).

Thus Φ(x;α) is a decreasing function of α for each fixed x.We next prove the left-continuity of Φ(x;α) with respect to α. Let α ∈

(0, 1] be given, and let {αi} be a sequence of numbers with αi ↑ α. SinceΦ(x;α) is a decreasing function of α, the limitation limi→∞ Φ(x;αi) exists

Page 233: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

224 Chapter 6 - Random Fuzzy Theory

and is not less than Φ(x;α). If the limitation is equal to Φ(x;α), then theleft-continuity is proved. Otherwise, we have

limi→∞

Φ(x;αi) > Φ(x;α).

Let z∗ = (limi→∞ Φ(x;αi) + Φ(x;α))/2. It is clear that

Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Cr{Ai} ≥ αi

such thatinfθ∈Ai

Pr{ξ(θ) ≤ x} > z∗

for each i. Now we define

A∗ =∞⋃i=1

Ai.

It is clear that Cr{A∗} ≥ Cr{Ai} ≥ αi. Letting i→∞, we get Cr{A∗} ≥ α.Thus

Φ(x;α) ≥ infθ∈A∗

Pr{ξ(θ) ≤ x} ≥ z∗.

A contradiction proves the theorem.

Theorem 6.10 (Zhu and Liu [163]) The chance distribution Φ(x;α) of arandom fuzzy variable is an increasing function of x for any fixed α, and

Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α; (6.22)

limx→−∞Φ(x;α) = 0 if α > 0.5; (6.23)

limx→+∞Φ(x;α) = 1 if α < 0.5. (6.24)

Furthermore, if α > 0.5, then we have

limy↓x

Φ(y;α) = Φ(x;α). (6.25)

Proof: Let Φ(x;α) be the chance distribution of random fuzzy variable ξdefined on the possibility space (Θ,P(Θ),Pos). For any given x1 and x2 with−∞ ≤ x1 < x2 ≤ +∞, it is clear that

Φ(x1;α) = supCr{A}≥α

infθ∈A

Pr {ξ(θ) ≤ x1}

≤ supCr{A}≥α

infθ∈A

Pr {ξ(θ) ≤ x2} = Φ(x2;α).

That is, the chance distribution Φ(x;α) is an increasing function of x for eachfixed α.

Page 234: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.3 - Chance Distribution 225

Since ξ(θ) is a random variable for any θ ∈ Θ, we have Pr{ξ(θ) ≤ −∞} =0 for any θ ∈ Θ. It follows that

Φ(−∞;α) = supCr{A}≥α

infθ∈A

Pr {ξ(θ) ≤ −∞} = 0.

Similarly, we have Pr{ξ(θ) ≤ +∞} = 1 for any θ ∈ Θ. Thus

Φ(+∞;α) = supCr{A}≥α

infθ∈A

Pr {ξ(θ) ≤ +∞} = 1.

Thus (6.22) is proved.Next we prove (6.23) and (6.24). If α > 0.5, then there exists an element

θ∗ ∈ Θ such that 2 − 2α < Pos{θ∗} ≤ 1. It is easy to verify that θ∗ ∈ A ifCr{A} ≥ α. Hence

limx→−∞Φ(x;α) = lim

x→−∞ supCr{A}≥α

infθ∈A

Pr {ξ(θ) ≤ x}

≤ limx→−∞Pr{ξ(θ∗) ≤ x} = 0.

Thus (6.23) holds. When α < 0.5, there exists an element θ∗ such thatCr{θ∗} ≥ α. Thus we have

limx→+∞Φ(x;α) = lim

x→+∞ supCr{A}≥α

infθ∈A

Pr {ξ(θ) ≤ x}

≥ limx→+∞Pr{ξ(θ∗) ≤ x} = 1

which implies that (6.24) holds.Finally, we prove (6.25). Let {xi} be an arbitrary sequence with xi ↓ x

as i→∞. It follows from Theorem 6.7 that

limy↓x

Φ(y;α) = limy↓x

Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

The theorem is proved.

Example 6.10: When α ≤ 0.5, the limitation limx→−∞ Φ(x;α) may takeany value c between 0 and 1. Suppose that Θ = {θ1, θ2, · · ·}, Pos{θi} = 1for i = 1, 2, · · · We define a random fuzzy variable ξ on the possibility space(Θ,P(Θ),Pos) as

ξ(θi) =

{−i with probability c

0 with possibility 1− c.

Then for any α ≤ 0.5, we have

Φ(x;α) =

⎧⎪⎨⎪⎩0, if x = −∞c, if −∞ < x < 01, if 0 ≤ x ≤ +∞.

Page 235: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

226 Chapter 6 - Random Fuzzy Theory

It follows that limx→−∞Φ(x;α) = c.

Example 6.11: When α ≥ 0.5, the limitation limx→+∞ Φ(x;α) may takeany value c between 0 and 1. Suppose that Θ = {θ1, θ2, · · ·}, Pos{θi} =i/(i+1) for i = 1, 2, · · ·We define a random fuzzy variable ξ on the possibilityspace (Θ,P(Θ),Pos) as

ξ(θi) =

{0 with probability c

i with possibility 1− c.

Then for any α ≥ 0.5, we have

Φ(x;α) =

⎧⎪⎨⎪⎩0, if −∞ ≤ x < 0c, if 0 < x < +∞1, if x = +∞.

It follows that limx→+∞Φ(x;α) = c.

Theorem 6.11 Let ξ be a random fuzzy variable. Then Ch{ξ ≥ x}(α) is(a) a decreasing and left-continuous function of α for any fixed x;(b) a decreasing function of x for any fixed α. Furthermore, when α > 0.5,we have

limy↑x

Ch{ξ ≥ y}(α) = Ch{ξ ≥ x}(α). (6.26)

Proof: Like Theorems 6.9 and 6.10.

Definition 6.11 (Zhu and Liu [163]) The chance density function φ: � ×(0, 1]→ [0,+∞) of a random fuzzy variable ξ is a function such that

Φ(x;α) =∫ x

−∞φ(y;α)dy (6.27)

holds for all x ∈ [−∞,+∞] and α ∈ (0, 1], where Φ is the chance distributionof ξ.

6.4 Independent and Identical Distribution

This section introduces the concept of independent and identically distributed(iid) random fuzzy variables.

Definition 6.12 The random fuzzy variables ξ1, ξ2, · · · , ξn are said to be iidif and only if

(Pr{ξi(θ) ∈ B1},Pr{ξi(θ) ∈ B2}, · · · ,Pr{ξi(θ) ∈ Bm}) , i = 1, 2, · · · , n

are iid fuzzy vectors for any Borel sets B1, B2, · · · , Bm of � and any positiveinteger m.

Page 236: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.5 - Expected Value Operator 227

Theorem 6.12 Let ξ1, ξ2, · · · , ξn be iid random fuzzy variables. Then forany Borel set B of �, Pr{ξi(θ) ∈ B}, i = 1, 2, · · · , n are iid fuzzy variables.

Proof: It follows immediately from the definition.

Theorem 6.13 Let f : � → � be a measurable function. If ξ1, ξ2, · · · , ξn areiid random fuzzy variables, then f(ξ1), f(ξ2), · · · , f(ξn) are iid random fuzzyvariables.

Proof: We have proved that f(ξ1), f(ξ2), · · · , f(ξn) are random fuzzy vari-ables. For any positive integer m and Borel sets B1, B2, · · · , Bm of �, sincef−1(B1), f−1(B2), · · · , f−1(Bm) are Borel sets, we know that(

Pr{ξi(θ) ∈ f−1(B1)},Pr{ξi(θ) ∈ f−1(B2)}, · · · ,Pr{ξi(θ) ∈ f−1(Bm)}),

i = 1, 2, · · · , n are iid fuzzy vectors. Equivalently, the fuzzy vectors

(Pr{f(ξi(θ)) ∈ B1},Pr{f(ξi(θ)) ∈ B2}, · · · ,Pr{f(ξi(θ)) ∈ Bm}) ,

i = 1, 2, · · · , n are iid. Hence f(ξ1), f(ξ2), · · · , f(ξn) are iid random fuzzyvariables.

6.5 Expected Value Operator

The expected value operator of random fuzzy variable is defined as follows.

Definition 6.13 (Liu and Liu [83]) Let ξ be a random fuzzy variable. Thenthe expected value of ξ is defined by

E[ξ] =∫ +∞

0

Cr{θ ∈ Θ | E[ξ(θ)] ≥ r}dr −∫ 0

−∞Cr{θ ∈ Θ | E[ξ(θ)] ≤ r}dr

provided that at least one of the two integrals is finite.

Example 6.12: Suppose that ξ is a random fuzzy variable defined as

ξ ∼ U(ρ, ρ + 2), with ρ = (0, 1, 2).

Without loss of generality, we assume that ρ is defined on the possibilityspace (Θ,P(Θ),Pos). Then for each θ ∈ Θ, ξ(θ) is a random variable andE[ξ(θ)] = ρ(θ) + 1. Thus the expected value of ξ is E[ξ] = E[ρ] + 1 = 2.

Theorem 6.14 Assume that ξ and η are random fuzzy variables with finiteexpected values. If E[ξ(θ)] and E[η(θ)] are independent fuzzy variables, thenfor any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (6.28)

Page 237: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

228 Chapter 6 - Random Fuzzy Theory

Proof: For any θ ∈ Θ, by the linearity of expected value operator of randomvariable, we have E[aξ(θ) + bη(θ)] = aE[ξ(θ)] + bE[η(θ)]. Since E[ξ(θ)] andE[η(θ)] are independent fuzzy variables, we have

E[aξ + bη] = E [aE[ξ(θ)] + bE[η(θ)]]= aE [E[ξ(θ)]] + bE [E[η(θ)]]= aE[ξ] + bE[η].

The theorem is proved.

Theorem 6.15 Assume that ξ, ξ1, ξ2, · · · are random fuzzy variables suchthat E[ξi(θ)]→ E[ξ(θ)] uniformly. Then

limi→∞

E[ξi] = E[ξ]. (6.29)

Proof: Since ξi are random fuzzy variables, E[ξi(θ)] are fuzzy variables forall i. It follows from E[ξi(θ)] → E[ξ(θ)] uniformly and Theorem 3.41 that(6.29) holds.

6.6 Variance, Covariance and Moments

Definition 6.14 (Liu and Liu [83]) Let ξ be a random fuzzy variable withfinite expected value E[ξ]. The variance of ξ is defined as

V [ξ] = E[(ξ − E[ξ])2]. (6.30)

Theorem 6.16 If ξ is a random fuzzy variable with finite expected value, aand b are real numbers, then V [aξ + b] = a2V [ξ].

Proof: It follows from the definition of variance that

V [aξ + b] = E[(aξ + b− aE[ξ]− b)2

]= a2E[(ξ − E[ξ])2] = a2V [ξ].

Theorem 6.17 Assume that ξ is a random fuzzy variable whose expectedvalue exists. Then we have

V [E[ξ(θ)]] ≤ V [ξ]. (6.31)

Proof: Denote the expected value of ξ by e. It follows from Jensen’s in-equality that

V [E[ξ(θ)]] = E[(E[ξ(θ)]− e)2

]≤ E

[E[(ξ(θ)− e)2

]]= V [ξ].

The theorem is proved.

Theorem 6.18 Let ξ be a random fuzzy variable with expected value e. ThenV [ξ] = 0 if and only if Ch{ξ = e}(1) = 1.

Page 238: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.7 - Optimistic and Pessimistic Values 229

Proof: If V [ξ] = 0, then it follows from V [ξ] = E[(ξ − e)2] that∫ +∞

0

Cr{θ ∈ Θ

∣∣ E[(ξ(θ)− e)2] ≥ r}

dr = 0

which implies that Cr{θ ∈ Θ|E[(ξ(θ)−e)2] ≥ r} = 0 for any r > 0. Therefore,Cr{θ ∈ Θ|E[(ξ(θ) − e)2] = 0} = 1. That is, there exists a set A∗ withCr{A∗} = 1 such that E[(ξ(θ) − e)2] = 0 for each θ ∈ A∗. It follows fromTheorem 2.39 that Pr{ξ(θ) = e} = 1 for each θ ∈ Θ+. Hence

Ch{ξ = e}(1) = supCr{A}≥1

infθ∈A

Pr{ξ(θ) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 6.5 that thereexists a set A∗ with Cr{A∗} = 1 such that

infθ∈A∗

Pr{ξ(θ) = e} = 1.

In other words, Pr{(ξ(θ)− e)2 ≥ r} = 0 for each r > 0 and θ ∈ A∗. Thus

E[(ξ(θ)− e)2] =∫ +∞

0

Pr{(ξ(θ)− e)2 ≥ r}dr = 0

for each θ ∈ A∗. It follows that Cr{θ ∈ Θ|E[(ξ(θ)− e)2] ≥ r

}= 0 for any

r > 0. Hence

V [ξ] =∫ +∞

0

Cr{θ ∈ Θ

∣∣ E[(ξ(θ)− e)2] ≥ r}

dr = 0.

The theorem is proved.

Definition 6.15 Let ξ and η be random fuzzy variables such that E[ξ] andE[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E [(ξ − E[ξ])(η − E[η])] . (6.32)

Definition 6.16 For any positive integer k, the expected value E[ξk] is calledthe kth moment of the random fuzzy variable ξ. The expected value E[(ξ −E[ξ])k] is called the kth central moment of the random fuzzy variable ξ.

6.7 Optimistic and Pessimistic Values

Let ξ be a random fuzzy variable. In order to measure it, we define twocritical values: optimistic value and pessimistic value.

Definition 6.17 (Liu [75]) Let ξ be a random fuzzy variable, and γ, δ ∈(0, 1]. Then

ξsup(γ, δ) = sup{r∣∣ Ch {ξ ≥ r} (γ) ≥ δ

}(6.33)

Page 239: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

230 Chapter 6 - Random Fuzzy Theory

is called the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r∣∣ Ch {ξ ≤ r} (γ) ≥ δ

}(6.34)

is called the (γ, δ)-pessimistic value to ξ.

The random fuzzy variable ξ will reach upwards of the (γ, δ)-optimisticvalue ξsup(γ, δ) with probability δ at credibility γ, and will be below the(γ, δ)-pessimistic value ξinf(γ, δ) with probability δ at credibility γ.

Remark 6.3: If the random fuzzy variable ξ becomes a random variable andγ > 0, then the (γ, δ)-optimistic value is ξsup(δ) = sup{r|Pr{ξ ≥ r} ≥ δ},and the (γ, δ)-pessimistic value is ξinf(δ) = inf{r|Pr{ξ ≤ r} ≥ δ}. Thiscoincides with the stochastic case.

Remark 6.4: If the random fuzzy variable ξ becomes a fuzzy variable andδ > 0, then the (γ, δ)-optimistic value is ξsup(γ) = sup{r|Cr{ξ ≥ r} ≥ γ},and the (γ, δ)-pessimistic value is ξinf(γ) = inf{r|Cr{ξ ≤ r} ≥ γ}. Thiscoincides with the fuzzy case.

Theorem 6.19 Let ξ be a random fuzzy variable. Assume that ξsup(γ, δ) isthe (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. Ifγ > 0.5, then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (6.35)

Proof: It follows from the definition of (γ, δ)-pessimistic value that thereexists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ξinf(γ, δ) as i→∞. Thus we have

limi→∞

Ch{ξ ≤ xi}(γ) ≥ δ.

It follows from γ > 0.5 and Theorem 6.10 that

Ch{ξ ≤ ξinf(γ, δ)}(γ) = limi→∞

Ch{ξ ≤ xi}(γ) ≥ δ.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥δ and xi ↑ ξsup(γ, δ) as i→∞. Thus we have

limi→∞

Ch{ξ ≥ xi}(γ) ≥ δ.

It follows from γ > 0.5 and Theorem 6.11 that

Ch{ξ ≥ ξsup(γ, δ)}(γ) = limi→∞

Ch{ξ ≥ xi}(γ) ≥ δ.

The theorem is proved.

Page 240: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.7 - Optimistic and Pessimistic Values 231

Example 6.13: When γ ≤ 0.5, it is possible that the inequalities

Ch{ξ ≥ ξsup(γ, δ)}(γ) < δ, Ch{ξ ≤ ξinf(γ, δ)}(γ) < δ

hold. For example, let Θ = {θ1, θ2, · · ·} and Pos{θi} = 1 for i = 1, 2, · · · Letξ be a random fuzzy variable defined on (Θ,P(Θ),Pos) as

ξ(θi) =

{1/(i + 1) with probability 0.5i/(i + 1) with probability 0.5

for i = 1, 2, · · · Then we have

ξsup(0.5, 0.5) = 1 and Ch{ξ ≥ 1}(0.5) = 0 < 0.5;

ξinf(0.5, 0.5) = 0 and Ch{ξ ≤ 0}(0.5) = 0 < 0.5.

Theorem 6.20 Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of random fuzzy variable ξ, respectively. If γ ≤ 0.5, thenwe have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (6.36)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (6.37)

where δ1 and δ2 are defined by

δ1 = supθ∈Θ{ξ(θ)sup(1− δ)− ξ(θ)inf(1− δ)} ,

δ2 = supθ∈Θ{ξ(θ)sup(δ)− ξ(θ)inf(δ)} ,

and ξ(θ)sup(δ) and ξ(θ)inf(δ) are δ-optimistic and δ-pessimistic values of ran-dom variable ξ(θ) for each θ, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Θ1 ={θ ∈ Θ

∣∣ Pr {ξ(θ) > ξsup(γ, δ) + ε} ≥ δ}

,

Θ2 ={θ ∈ Θ

∣∣ Pr {ξ(θ) < ξinf(γ, δ)− ε} ≥ δ}

.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Cr{Θ1} < γand Cr{Θ2} < γ. Thus Cr{Θ1}+Cr{Θ2} < γ +γ ≤ 1. This fact implies thatΘ1 ∪Θ2 �= Θ. Let θ∗ �∈ Θ1 ∪Θ2. Then we have

Pr {ξ(θ∗) > ξsup(γ, δ) + ε} < δ,

Pr {ξ(θ∗) < ξinf(γ, δ)− ε} < δ.

Page 241: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

232 Chapter 6 - Random Fuzzy Theory

Since Pr is self dual, we have

Pr {ξ(θ∗) ≤ ξsup(γ, δ) + ε} > 1− δ,

Pr {ξ(θ∗) ≥ ξinf(γ, δ)− ε} > 1− δ.

It follows from the definitions of ξ(θ∗)sup(1− δ) and ξ(θ∗)inf(1− δ) that

ξsup(γ, δ) + ε ≥ ξ(θ∗)inf(1− δ),

ξinf(γ, δ)− ε ≤ ξ(θ∗)sup(1− δ)

which implies that

ξinf(γ, δ)− ε− (ξsup(γ, δ) + ε) ≤ ξ(θ∗)sup(1− δ)− ξ(θ∗)inf(1− δ) ≤ δ1.

Letting ε→ 0, we obtain (6.36).Next we prove the inequality (6.37). Assume γ > 0.5. For any given

ε > 0, we define

Θ1 ={θ ∈ Θ

∣∣ Pr {ξ(θ) ≥ ξsup(γ, δ)− ε} ≥ δ}

,

Θ2 ={θ ∈ Θ

∣∣ Pr {ξ(θ) ≤ ξinf(γ, δ) + ε} ≥ δ}

.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Cr{Θ1} ≥ γand Cr{Θ2} ≥ γ. Thus Cr{Θ1}+Cr{Θ2} ≥ γ +γ > 1. This fact implies thatΘ1 ∩Θ2 �= ∅. Let θ∗ ∈ Θ1 ∩Θ2. Then we have

Pr {ξ(θ∗) ≥ ξsup(γ, δ)− ε} ≥ δ,

Pr {ξ(θ∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(θ∗)sup(δ) and ξ(θ∗)inf(δ) that

ξsup(γ, δ)− ε ≤ ξ(θ∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(θ∗)inf(δ)

which implies that

ξsup(γ, δ)− ε− (ξinf(γ, δ) + ε) ≤ ξ(θ∗)sup(δ)− ξ(θ∗)inf(δ) ≤ δ2.

The inequality (6.37) is proved by letting ε→ 0.

6.8 Convergence Concepts

This section introduces four types of sequence convergence concept: conver-gence almost surely (a.s.), convergence in chance, convergence in mean, andconvergence in distribution.

Page 242: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.8 - Convergence Concepts 233

Table 6.1: Relationship among Convergence Concepts

Convergence⇒

Convergence⇐

Convergence

in Chance in Distribution in Mean

Definition 6.18 (Zhu and Liu [165]) Suppose that ξ, ξ1, ξ2, · · · are randomfuzzy variables defined on the possibility space (Θ,P(Θ),Pos). The sequence{ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ P(Θ)with Cr{A} = 1 such that {ξi(θ)} converges a.s. to ξ(θ) for every θ ∈ A.

Definition 6.19 (Zhu and Liu [165]) Suppose that ξ, ξ1, ξ2, · · · are randomfuzzy variables. We say that the sequence {ξi} converges in chance to ξ if

limi→∞

limα↓0

Ch {|ξi − ξ| ≥ ε} (α) = 0 (6.38)

for every ε > 0.

Definition 6.20 (Zhu and Liu [165]) Suppose that ξ, ξ1, ξ2, · · · are randomfuzzy variables with finite expected values. We say that the sequence {ξi}converges in mean to ξ if

limi→∞

E[|ξi − ξ|] = 0. (6.39)

Definition 6.21 (Zhu and Liu [165]) Suppose that Φ,Φ1,Φ2, · · · are thechance distributions of random fuzzy variables ξ, ξ1, ξ2, · · ·, respectively. Wesay that {ξi} converges in distribution to ξ if Φi(x;α)→ Φ(x;α) for all con-tinuity points (x;α) of Φ.

Convergence Almost Surely vs. Convergence in Chance

Example 6.14: Convergence a.s. does not imply convergence in chance. Forexample, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = (j − 1)/j for j = 1, 2, · · ·

Ω = {ω1, ω2, · · ·}, Pr{ωj} = 1/2j for j = 1, 2, · · ·

Suppose that the random fuzzy variables ξ, ξ1, ξ2, · · · are defined on the pos-sibility space (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) =

{ηi, if j = i

0, otherwise(6.40)

Page 243: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

234 Chapter 6 - Random Fuzzy Theory

where ηi are random variables defined on the probability space (Ω,A,Pr) as

ηi(ωj) =

{2j , if j ≤ i

0, otherwise

for i, j = 1, 2, · · · For each θ ∈ Θ, it is easy to verify that the random sequence{ξi(θ)} converges a.s. to ξ(θ). Thus the random fuzzy sequence {ξi} convergesa.s. to ξ. However, for a small number ε > 0, we have

Pr{|ξi(θ)− ξ(θ)| ≥ ε} =

⎧⎨⎩ 1− 12i

, if θ = θi

0, otherwise

for i = 1, 2, · · · It follows that

limα↓0

Ch{|ξi − ξ| ≥ ε}(α) = 1− 12i→ 1 �= 0.

That is, the random fuzzy sequence {ξi} does not converge in chance to ξ.

Example 6.15: Convergence in chance does not imply convergence a.s. Forexample, suppose that

Θ = {θ1, θ2, · · ·}, Pos{θj} = (j − 1)/j for j = 1, 2, · · ·

Ω = [0, 1], A is the Borel algebra on Ω, and Pr is the Lebesgue measure.Then (Ω,A,Pr) is a probability space. For any positive integer i, there is aninteger j such that i = 2j + k, where k is an integer between 0 and 2j − 1.We define the random fuzzy variables ξ, ξ1, ξ2, · · · on the possibility space(Θ,P(Θ),Pos) as ξ = 0 and

ξi(θ) = ηi, ∀θ ∈ Θ (6.41)

where ηi are random variables defined on the probability space (Ω,A,Pr) as

ηi(ω) =

{1, if k/2j ≤ ω ≤ (k + 1)/2j

0, otherwise

for i = 1, 2, · · · For a small number ε > 0, we have

Pr{|ξi(θ)− ξ(θ)| ≥ ε} = 1/2j , ∀θ ∈ Θ

for i = 1, 2, · · · Thus,

limα↓0

Ch{|ξi − ξ| ≥ ε}(α) =12j→ 0

which implies that the random fuzzy sequence {ξi} converges in chance to ξ.However, for every ω ∈ Ω, there exists an infinite number of intervals of theform [k/2j , (k + 1)/2j ] containing ω. Hence the random fuzzy sequence {ξi}does not converge a.s. to ξ.

Page 244: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Section 6.8 - Convergence Concepts 235

Convergence Almost Surely vs. Convergence in Mean

Example 6.16: Convergence a.s. does not imply convergence in mean. Con-sider the random fuzzy sequence defined by (6.40) in which {ξi} convergesa.s. to ξ. However,

E[|ξi(θ)− ξ(θ)|] =

{i, if θ = θi

0, otherwise

for i = 1, 2, · · · Then, we have

E[|ξi − ξ|] =i− 12i· i �→ 0

which implies that the random fuzzy sequence {ξi} does not converge in meanto ξ.

Example 6.17: Convergence in mean does not imply convergence a.s. Con-sider the random fuzzy sequence defined by (6.41) in which {ξi} does notconverge a.s. to ξ. Since

E[|ξi(θ)− ξ(θ)|] = 1/2j , ∀θ ∈ Θ

for i = 1, 2, · · · and j is the integer such that i = 2j +k, where k is an integerbetween 0 and 2j − 1. Thus, we have

E[|ξi − ξ|] =12j→ 0

which implies that the random fuzzy sequence {ξi} converges in mean to ξ.

Convergence in Chance vs. Convergence in Mean

Example 6.18: Convergence in chance does not imply convergence in mean.For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = 1/j for j = 1, 2, · · ·

Ω = {ω1, ω2, · · ·}, Pr{ωj} = 1/2j for j = 1, 2, · · ·Suppose that the random fuzzy variables ξ, ξ1, ξ2, · · · are defined on the pos-sibility space (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) =

{ηi, if j = i

0, otherwise

where ηi are random variables defined on the probability space (Ω,A,Pr) as

ηi(ωj) =

{i2i, if j = i

0, otherwise

Page 245: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

236 Chapter 6 - Random Fuzzy Theory

for i, j = 1, 2, · · · For a small number ε > 0, we have

Pr{|ξi(θ)− ξ(θ)| ≥ ε} =

{1/2i, if θ = θi

0, otherwise

for i = 1, 2, · · · Then

limα↓0

Ch{|ξi − ξ| ≥ ε}(α) =12i→ 0.

That is, the random fuzzy sequence {ξi} converges in chance to ξ. However,

E[|ξi(θ)− ξ(θ)|] =

{i, if θ = θi

0, otherwise

for i = 1, 2, · · · Thus, we have

E[|ξi − ξ|] = i× 12i

=12�→ 0

which implies that the random fuzzy sequence {ξi} does not converge in meanto ξ.

Example 6.19: Convergence in mean does not imply convergence in chance. For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = 1/j for j = 1, 2, · · ·

Ω = {ω1, ω2, · · ·}, Pr{ωj} = 1/2^j for j = 1, 2, · · ·

Suppose that the random fuzzy variables ξ, ξ1, ξ2, · · · are defined on the possibility space (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) = ηi if j = i, and 0 otherwise,

where ηi are random variables defined on the probability space (Ω,A,Pr) as

ηi(ωj) = 1 if j ≤ i, and 0 otherwise,

for i, j = 1, 2, · · · Thus we have

E[|ξi(θ) − ξ(θ)|] = 1 − 1/2^i if θ = θi, and 0 otherwise,

for i = 1, 2, · · · and

E[|ξi − ξ|] = (1 − 1/2^i) × 1/(2i) → 0


which implies that the random fuzzy sequence {ξi} converges in mean to ξ. However, for a small number ε > 0, we have

Pr{|ξi(θ) − ξ(θ)| ≥ ε} = 1 − 1/2^i if θ = θi, and 0 otherwise,

for i = 1, 2, · · · Thus we have

lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 1 − 1/2^i → 1 ≠ 0.

That is, the random fuzzy sequence {ξi} does not converge in chance to ξ.

Convergence Almost Surely vs. Convergence in Distribution

Example 6.20: Convergence a.s. does not imply convergence in distribution. For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = j/(j + 1) for j = 1, 2, · · ·

Ω = {ω1, ω2}, Pr{ω1} = 0.4, Pr{ω2} = 0.6.

Suppose that the random fuzzy variables ξ, ξ1, ξ2, · · · are defined on the possibility space (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) = ηi if j = i, and 0 otherwise,

where ηi are random variables defined on the probability space (Ω,A,Pr) as

ηi(ω) = 1 if ω = ω1, and 0 if ω = ω2,

for i, j = 1, 2, · · · It is easy to verify that the random fuzzy sequence {ξi} converges a.s. to ξ. However, when 0.75 < α ≤ 1, the chance distribution of ξ is

Φ(x;α) = 0 if x < 0, and 1 otherwise,

and the chance distributions of ξi are all

Φi(x;α) = 0 if x < 0; 0.6 if 0 ≤ x < 1; 1 if 1 ≤ x


for i = 1, 2, · · · It is clear that {Φi} does not converge to Φ at all continuity points of Φ. Thus the random fuzzy sequence {ξi} does not converge in distribution to ξ.

Example 6.21: Convergence in distribution does not imply convergence a.s. For example, let

Θ = {θ1, θ2}, Pos{θ1} = Pos{θ2} = 1,

Ω = {ω1, ω2}, Pr{ω1} = Pr{ω2} = 0.5.

The random fuzzy variable ξ is defined on the possibility space (Θ,P(Θ),Pos) as

ξ(θ) = −η if θ = θ1, and η if θ = θ2,

where η is a random variable defined on the probability space (Ω,A,Pr) as

η(ω) = −1 if ω = ω1, and 1 if ω = ω2.

We define
ξi = −ξ, i = 1, 2, · · · (6.42)

Clearly, for any α ∈ (0, 1], the chance distributions of ξ, ξ1, ξ2, · · · are all

Φ(x;α) = 0 if x < −1; 0.5 if −1 ≤ x < 1; 1 if 1 ≤ x.

Thus the random fuzzy sequence {ξi} converges in distribution to ξ. However, it is clear that {ξi} does not converge a.s. to ξ.

Convergence in Chance vs. Convergence in Distribution

Theorem 6.21 (Zhu and Liu [165]) Let ξ, ξ1, ξ2, · · · be random fuzzy variables defined on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in chance to ξ, then {ξi} converges in distribution to ξ.

Proof: Let Φ, Φi be the chance distributions of ξ, ξi for i = 1, 2, · · ·, respectively. If {ξi} does not converge in distribution to ξ, then there exists a continuity point (x, α) of Φ such that Φi(x;α) ↛ Φ(x;α). In other words, there exists a number ε∗ > 0 and a subsequence {Φik} such that

Φik(x;α) − Φ(x;α) > 2ε∗, ∀k (6.43)

or
Φ(x;α) − Φik(x;α) > 2ε∗, ∀k. (6.44)


If (6.43) holds, then for the positive number ε∗, there exists δ > 0 such that

|Φ(x + δ;α) − Φ(x;α)| < ε∗

which implies that
Φik(x;α) − Φ(x + δ;α) > ε∗.

Equivalently, we have

sup_{Cr{A}≥α} inf_{θ∈A} Pr{ξik(θ) ≤ x} − sup_{Cr{A}≥α} inf_{θ∈A} Pr{ξ(θ) ≤ x + δ} > ε∗.

Thus, for each k, there exists a set Ak ∈ P(Θ) with Cr{Ak} ≥ α such that

inf_{θ∈Ak} Pr{ξik(θ) ≤ x} − sup_{Cr{A}≥α} inf_{θ∈A} Pr{ξ(θ) ≤ x + δ} > ε∗.

Moreover, since Cr{Ak} ≥ α, we have

inf_{θ∈Ak} Pr{ξik(θ) ≤ x} − inf_{θ∈Ak} Pr{ξ(θ) ≤ x + δ} > ε∗.

Thus there exists θk ∈ Ak with Cr{θk} > 0 such that

Pr{ξik(θk) ≤ x} − Pr{ξ(θk) ≤ x + δ} > ε∗. (6.45)

Note that ξik(θk) and ξ(θk) are all random variables, and

{ξik(θk) ≤ x} = {ξik(θk) ≤ x, ξ(θk) ≤ x + δ} ∪ {ξik(θk) ≤ x, ξ(θk) > x + δ}
⊂ {ξ(θk) ≤ x + δ} ∪ {|ξik(θk) − ξ(θk)| > δ}.

It follows from (6.45) that

Pr{|ξik(θk) − ξ(θk)| > δ} ≥ Pr{ξik(θk) ≤ x} − Pr{ξ(θk) ≤ x + δ} > ε∗.

Thus we get

lim_{α↓0} Ch{|ξik − ξ| > δ}(α) > ε∗

which implies that the random fuzzy sequence {ξi} does not converge in chance to ξ. A contradiction proves that {ξi} converges in distribution to ξ. The case (6.44) may be proved in a similar way.

Example 6.22: Convergence in distribution does not imply convergence in chance. Let us consider the example defined by (6.42) in which {ξi} converges in distribution to ξ. However, for a small number ε > 0, we have

Pr{|ξi(θ) − ξ(θ)| ≥ ε} = 1 for both θ = θ1 and θ = θ2,

for i = 1, 2, · · · It follows that

lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 1.

That is, the random fuzzy sequence {ξi} does not converge in chance to ξ.


Convergence in Mean vs. Convergence in Distribution

Theorem 6.22 (Zhu and Liu [165]) Suppose that ξ, ξ1, ξ2, · · · are random fuzzy variables on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in mean to ξ, then {ξi} converges in distribution to ξ.

Proof: Suppose that Φ, Φi are chance distributions of ξ, ξi for i = 1, 2, · · ·, respectively. If {ξi} does not converge in distribution to ξ, then there exists a continuity point (x, α) of Φ such that Φi(x;α) ↛ Φ(x;α). In other words, there exists a number ε∗ > 0 and a subsequence {Φik} such that

Φik(x;α) − Φ(x;α) > 2ε∗, ∀k (6.46)

or
Φ(x;α) − Φik(x;α) > 2ε∗, ∀k. (6.47)

If (6.46) holds, then for the positive number ε∗, there exists δ with 0 < δ < α ∧ 0.5 such that

|Φ(x + δ;α − δ) − Φ(x;α)| < ε∗

which implies that

Φik(x;α) − Φ(x + δ;α − δ) > ε∗.

Equivalently, we have

sup_{Cr{A}≥α} inf_{θ∈A} Pr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Pr{ξ(θ) ≤ x + δ} > ε∗.

Thus, for each k, there exists a set Ak ∈ P(Θ) with Cr{Ak} ≥ α such that

inf_{θ∈Ak} Pr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Pr{ξ(θ) ≤ x + δ} > ε∗.

Write A′k = {θ ∈ Ak | Cr{θ} < δ}. Then A′k ⊂ Ak and Cr{A′k} ≤ δ. Define A∗k = Ak \ A′k. Then

inf_{θ∈A∗k} Pr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Pr{ξ(θ) ≤ x + δ} > ε∗.

It follows from the subadditivity of credibility measure that

Cr{A∗k} ≥ Cr{Ak} − Cr{A′k} ≥ α − δ.

Thus, we have

inf_{θ∈A∗k} Pr{ξik(θ) ≤ x} − inf_{θ∈A∗k} Pr{ξ(θ) ≤ x + δ} > ε∗.

Furthermore, there exists θk ∈ A∗k with Cr{θk} ≥ δ such that

Pr{ξik(θk) ≤ x} − Pr{ξ(θk) ≤ x + δ} > ε∗. (6.48)


Note that ξik(θk) and ξ(θk) are all random variables, and

{ξik(θk) ≤ x} = {ξik(θk) ≤ x, ξ(θk) ≤ x + δ} ∪ {ξik(θk) ≤ x, ξ(θk) > x + δ}
⊂ {ξ(θk) ≤ x + δ} ∪ {|ξik(θk) − ξ(θk)| > δ}.

It follows from (6.48) that

Pr{|ξik(θk) − ξ(θk)| > δ} ≥ Pr{ξik(θk) ≤ x} − Pr{ξ(θk) ≤ x + δ} > ε∗.

Thus, for each k, we have

E[|ξik(θk) − ξ(θk)|] = ∫_0^{+∞} Pr{|ξik(θk) − ξ(θk)| > r} dr > δ × ε∗.

Therefore, for each k, we have

E[|ξik − ξ|] = ∫_0^{+∞} Cr{θ ∈ Θ | E[|ξik(θ) − ξ(θ)|] ≥ r} dr ≥ Cr{θk} × E[|ξik(θk) − ξ(θk)|] > δ^2 × ε∗

which implies that the random fuzzy sequence {ξi} does not converge in mean to ξ. A contradiction proves that {ξi} converges in distribution to ξ. The case (6.47) may be proved in a similar way.

Example 6.23: Convergence in distribution does not imply convergence in mean. Let us consider the example defined by (6.42) in which {ξi} converges in distribution to ξ. However,

E[|ξi(θ) − ξ(θ)|] = 2 for both θ = θ1 and θ = θ2,

for i = 1, 2, · · · Then we have

E[|ξi − ξ|] = 2 × (1/2) + 2 × (1/2) = 2

which implies that the random fuzzy sequence {ξi} does not converge in mean to ξ.

6.9 Random Fuzzy Simulations

It is impossible to design an analytic algorithm to deal with general random fuzzy systems. Instead, we introduce some random fuzzy simulations for finding critical values, computing chance functions, and calculating expected values.


Example 6.24: Assume that ξ is an n-dimensional random fuzzy vector defined on the possibility space (Θ,P(Θ),Pos), and f : ℜ^n → ℜ^m is a measurable function. For any confidence level α, we design a random fuzzy simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). Equivalently, we should find the supremum β such that

Cr{θ ∈ Θ | Pr{f(ξ(θ)) ≤ 0} ≥ β} ≥ α. (6.49)

We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. For each θk, by using stochastic simulation, we can estimate the probability g(θk) = Pr{f(ξ(θk)) ≤ 0}. For any number r, we set

L(r) = (1/2) ( max_{1≤k≤N} {νk | g(θk) ≥ r} + min_{1≤k≤N} {1 − νk | g(θk) < r} ).

It follows from monotonicity that we may employ bisection search to find the maximal value r such that L(r) ≥ α. This value is an estimation of β. We summarize this process as follows.

Algorithm 6.1 (Random Fuzzy Simulation)
Step 1. Generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 2. Find the maximal value r such that L(r) ≥ α holds.
Step 3. Return r.

The random fuzzy variables ξ1, ξ2, ξ3 are defined as

ξ1 ∼ N(ρ1, 1), with ρ1 = (1, 2, 3),
ξ2 ∼ N(ρ2, 1), with ρ2 = (2, 3, 4),
ξ3 ∼ N(ρ3, 1), with ρ3 = (3, 4, 5).

A run of random fuzzy simulation with 5000 cycles shows that

Ch{√(ξ1^2 + ξ2^2 + ξ3^2) ≥ 3}(0.9) = 0.91.
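To make Algorithm 6.1 concrete, the following Python sketch implements it for this example under our own auxiliary assumptions: the fuzzy parameters ρ1, ρ2, ρ3 are drawn by rejection sampling from their triangular supports, the inner probability g(θk) is estimated by plain Monte Carlo, the empty max is taken as 0 and the empty min as 1, and the sample sizes N and M are arbitrary. It is an illustrative sketch, not the author's reference implementation.

import numpy as np

rng = np.random.default_rng(0)

def tri_pos(x, a, b, c):
    # membership of the triangular fuzzy number (a, b, c) at the point x
    if a <= x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def sample_theta(rho_params, eps):
    # rejection sampling: draw crisp parameter values with possibility >= eps
    while True:
        values = [rng.uniform(a, c) for (a, b, c) in rho_params]
        nu = min(tri_pos(v, *p) for v, p in zip(values, rho_params))
        if nu >= eps:
            return values, nu

def alpha_chance(alpha, rho_params, eps=0.01, N=200, M=1000):
    # Algorithm 6.1 for Ch{ sqrt(xi1^2 + xi2^2 + xi3^2) >= 3 }(alpha),
    # where xi_j ~ N(rho_j, 1) and rho_j is a triangular fuzzy number
    nus, gs = [], []
    for _ in range(N):
        mus, nu = sample_theta(rho_params, eps)
        xi = rng.normal(loc=mus, scale=1.0, size=(M, len(mus)))   # stochastic simulation
        gs.append(np.mean(np.sqrt((xi ** 2).sum(axis=1)) >= 3))   # estimate of g(theta_k)
        nus.append(nu)
    nus, gs = np.array(nus), np.array(gs)

    def L(r):   # empty max taken as 0, empty min taken as 1
        left = nus[gs >= r].max() if np.any(gs >= r) else 0.0
        right = (1 - nus[gs < r]).min() if np.any(gs < r) else 1.0
        return 0.5 * (left + right)

    lo, hi = 0.0, 1.0
    for _ in range(30):                    # bisection for the maximal r with L(r) >= alpha
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L(mid) >= alpha else (lo, mid)
    return lo

print(alpha_chance(0.9, [(1, 2, 3), (2, 3, 4), (3, 4, 5)]))   # the book reports 0.91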

Example 6.25: Assume that f : ℜ^n → ℜ is a measurable function, and ξ is an n-dimensional random fuzzy vector defined on the possibility space (Θ,P(Θ),Pos). For any given confidence levels α and β, we need to design a random fuzzy simulation to find the maximal value f̄ such that

Ch{f(ξ) ≥ f̄}(α) ≥ β

holds. That is, we must find the maximal value f̄ such that

Cr{θ ∈ Θ | Pr{f(ξ(θ)) ≥ f̄} ≥ β} ≥ α.


We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. For each θk, we search for the maximal value f̄(θk) such that Pr{f(ξ(θk)) ≥ f̄(θk)} ≥ β by stochastic simulation. For any number r, we have

H(r) = (1/2) ( max_{1≤k≤N} {νk | f̄(θk) ≥ r} + min_{1≤k≤N} {1 − νk | f̄(θk) < r} ).

It follows from monotonicity that we may employ bisection search to find the maximal value r such that H(r) ≥ α. This value is an estimation of f̄. We summarize this process as follows.

Algorithm 6.2 (Random Fuzzy Simulation)
Step 1. Generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 2. Find the maximal value r such that H(r) ≥ α holds.
Step 3. Return r.

In order to find the maximal value f̄ such that Ch{ξ1^2 + ξ2^2 + ξ3^2 ≥ f̄}(0.9) ≥ 0.9, where ξ1, ξ2, ξ3 are random fuzzy variables defined as

ξ1 ∼ EXP(ρ1), with ρ1 = (1, 2, 3),
ξ2 ∼ EXP(ρ2), with ρ2 = (2, 3, 4),
ξ3 ∼ EXP(ρ3), with ρ3 = (3, 4, 5),

we perform the random fuzzy simulation with 5000 cycles and obtain that f̄ = 2.31.
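A sketch of Algorithm 6.2 for this example follows. The only structural difference from the previous sketch is the inner step, which now estimates f̄(θk) as the largest r with Pr{f(ξ(θk)) ≥ r} ≥ β, taken here as an empirical upper β-quantile. The sampling scheme, the interpretation of EXP(ρ) as an exponential distribution with mean ρ, and the sample sizes are our own assumptions.

import numpy as np

rng = np.random.default_rng(1)

def tri_pos(x, a, b, c):
    # membership of the triangular fuzzy number (a, b, c) at the point x
    if a <= x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def critical_value(alpha, beta, rho_params, eps=0.01, N=200, M=2000):
    # Algorithm 6.2 for the maximal f-bar with Ch{xi1^2 + xi2^2 + xi3^2 >= f-bar}(alpha) >= beta
    nus, fbars = [], []
    for _ in range(N):
        while True:                              # sample theta_k with possibility >= eps
            mus = [rng.uniform(a, c) for (a, b, c) in rho_params]
            nu = min(tri_pos(v, *p) for v, p in zip(mus, rho_params))
            if nu >= eps:
                break
        xi = rng.exponential(scale=mus, size=(M, len(mus)))   # assumes EXP(rho) has mean rho
        vals = np.sort((xi ** 2).sum(axis=1))[::-1]
        fbars.append(vals[int(np.ceil(beta * M)) - 1])         # largest r with Pr{f >= r} >= beta
        nus.append(nu)
    nus, fbars = np.array(nus), np.array(fbars)

    def H(r):   # empty max taken as 0, empty min taken as 1
        left = nus[fbars >= r].max() if np.any(fbars >= r) else 0.0
        right = (1 - nus[fbars < r]).min() if np.any(fbars < r) else 1.0
        return 0.5 * (left + right)

    lo, hi = fbars.min(), fbars.max()
    for _ in range(40):                          # bisection for the maximal r with H(r) >= alpha
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) >= alpha else (lo, mid)
    return lo

print(critical_value(0.9, 0.9, [(1, 2, 3), (2, 3, 4), (3, 4, 5)]))   # the book reports 2.31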

Example 6.26: Assume that f : ℜ^n → ℜ is a measurable function, and ξ is an n-dimensional random fuzzy vector defined on the possibility space (Θ,P(Θ),Pos). Then f(ξ) is a random fuzzy variable whose expected value E[f(ξ)] is

∫_0^{+∞} Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r} dr − ∫_{−∞}^0 Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r} dr.

A random fuzzy simulation will be introduced to compute the expected value E[f(ξ)]. We randomly sample θk from Θ such that Pos{θk} ≥ ε, and denote νk = Pos{θk} for k = 1, 2, · · · , N, where ε is a sufficiently small number. Then for any number r ≥ 0, the credibility Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r} can be estimated by

(1/2) ( max_{1≤k≤N} {νk | E[f(ξ(θk))] ≥ r} + min_{1≤k≤N} {1 − νk | E[f(ξ(θk))] < r} )


and for any number r < 0, the credibility Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r} can be estimated by

(1/2) ( max_{1≤k≤N} {νk | E[f(ξ(θk))] ≤ r} + min_{1≤k≤N} {1 − νk | E[f(ξ(θk))] > r} )

provided that N is sufficiently large, where E[f(ξ(θk))], k = 1, 2, · · · , N may be estimated by stochastic simulation.

Algorithm 6.3 (Random Fuzzy Simulation)
Step 1. Set e = 0.
Step 2. Randomly sample θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 3. Let a = min_{1≤k≤N} E[f(ξ(θk))] and b = max_{1≤k≤N} E[f(ξ(θk))].
Step 4. Randomly generate r from [a, b].
Step 5. If r ≥ 0, then e ← e + Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r}.
Step 6. If r < 0, then e ← e − Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r}.
Step 7. Repeat the fourth to sixth steps N times.
Step 8. E[f(ξ)] = a ∨ 0 + b ∧ 0 + e · (b − a)/N.

In order to compute the expected value of ξ1ξ2ξ3, where ξ1, ξ2, ξ3 are random fuzzy variables defined as

ξ1 ∼ U(ρ1, ρ1 + 1), with ρ1 = (1, 2, 3),
ξ2 ∼ U(ρ2, ρ2 + 1), with ρ2 = (2, 3, 4),
ξ3 ∼ U(ρ3, ρ3 + 1), with ρ3 = (3, 4, 5),

we perform the random fuzzy simulation with 5000 cycles and obtain that E[ξ1ξ2ξ3] = 33.6.
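The following Python sketch mirrors Algorithm 6.3 for E[ξ1ξ2ξ3]. Again, the rejection sampler for θ, the Monte Carlo estimate of each E[f(ξ(θk))], and the sample sizes N and M are our own assumptions rather than part of the algorithm.

import numpy as np

rng = np.random.default_rng(2)

def tri_pos(x, a, b, c):
    # membership of the triangular fuzzy number (a, b, c) at the point x
    if a <= x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def expected_value(rho_params, eps=0.01, N=200, M=2000):
    # Algorithm 6.3 for E[xi1*xi2*xi3], where xi_j ~ U(rho_j, rho_j + 1)
    nus, means = [], []
    for _ in range(N):                                  # Step 2
        while True:
            rhos = [rng.uniform(a, c) for (a, b, c) in rho_params]
            nu = min(tri_pos(v, *p) for v, p in zip(rhos, rho_params))
            if nu >= eps:
                break
        xi = rng.uniform(low=rhos, high=[r + 1 for r in rhos], size=(M, len(rhos)))
        means.append(xi.prod(axis=1).mean())            # stochastic estimate of E[f(xi(theta_k))]
        nus.append(nu)
    nus, means = np.array(nus), np.array(means)

    def cr(event, complement):                          # credibility estimate from the nu_k
        left = nus[event].max() if event.any() else 0.0
        right = (1 - nus[complement]).min() if complement.any() else 1.0
        return 0.5 * (left + right)

    a, b = means.min(), means.max()                     # Step 3
    e = 0.0
    for _ in range(N):                                  # Steps 4-7
        r = rng.uniform(a, b)
        if r >= 0:
            e += cr(means >= r, means < r)              # Cr{E[f(xi(theta))] >= r}
        else:
            e -= cr(means <= r, means > r)              # Cr{E[f(xi(theta))] <= r}
    return max(a, 0.0) + min(b, 0.0) + e * (b - a) / N  # Step 8

print(expected_value([(1, 2, 3), (2, 3, 4), (3, 4, 5)]))   # the book reports 33.6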


Chapter 7

Bifuzzy Theory

Some extensions of fuzzy set have been made in the literature, for example, type 2 fuzzy set, intuitionistic fuzzy set, twofold fuzzy set, and bifuzzy variable. Type 2 fuzzy set was introduced by Zadeh [155] as a fuzzy set whose membership grades are also fuzzy sets. The intuitionistic fuzzy set was proposed by Atanassov [3] as a pair of membership functions whose sum takes values between 0 and 1. Twofold fuzzy set was derived by Dubois and Prade [21] from possibility and necessity measures as a pair of fuzzy sets: the set of objects which possibly satisfy a certain property, and the set of objects which necessarily satisfy the property.

Bifuzzy variable was initialized by Liu [76] as a function from a possibility space to the set of fuzzy variables. In other words, a bifuzzy variable is a fuzzy variable defined on the universal set of fuzzy variables, or a fuzzy variable taking "fuzzy variable" values. Liu [76] also gave the concepts of chance measure, expected value operator, and the optimistic and pessimistic values of bifuzzy variable. In order to describe a bifuzzy variable, Zhou and Liu [161] presented the concept of chance distribution.

The emphasis in this chapter is mainly on bifuzzy variable, chance measure, chance distribution, independent and identical distribution, expected value operator, variance, critical values, convergence concepts, and bifuzzy simulation.

7.1 Bifuzzy Variables

Definition 7.1 (Liu [76]) A bifuzzy variable is a function from the possibility space (Θ,P(Θ),Pos) to the set of fuzzy variables.

Example 7.1: Let η1, η2, · · · , ηm be fuzzy variables and u1, u2, · · · , um be


real numbers in [0, 1] such that u1 ∨ u2 ∨ · · · ∨ um = 1. Then

ξ =
  η1 with possibility u1,
  η2 with possibility u2,
  · · ·
  ηm with possibility um

is clearly a bifuzzy variable.

Example 7.2: Let ξ = (ρ − 1, ρ, ρ + 1, ρ + 2), where ρ is a fuzzy variable with membership function μρ(x) = [1 − |x − 2|] ∨ 0. Then ξ is a bifuzzy variable.

Example 7.3: The prediction of grain yield could be a bifuzzy variable, for example,

ξ =
  "about 10000 ton" with possibility 0.6,
  "about 10500 ton" with possibility 0.8,
  "about 11200 ton" with possibility 1.0,
  "about 12000 ton" with possibility 0.7.

Example 7.4: It is assumed that most people are of middle height. Then the height of a person can be described by

ξ =
  "middle" with possibility 1.0,
  "tall" with possibility 0.8,
  "short" with possibility 0.6,

which is actually a bifuzzy variable.
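As a programming illustration (not part of the original text), a discrete bifuzzy variable such as the one in Example 7.3 can be represented as a plain mapping from outcomes θ, each carrying a possibility, to fuzzy variables. The class name, the triangular model of "about x ton", and the grid-based approximation of the possibility of an event below are all our own assumptions; the sketch only mirrors Definition 7.1 and Theorem 7.1(a).

from dataclasses import dataclass
from typing import Callable, Dict, Tuple

Membership = Callable[[float], float]

def triangular(a: float, b: float, c: float) -> Membership:
    # membership function of the triangular fuzzy number (a, b, c)
    def mu(x: float) -> float:
        if a <= x <= b:
            return 1.0 if b == a else (x - a) / (b - a)
        if b < x <= c:
            return (c - x) / (c - b)
        return 0.0
    return mu

@dataclass
class DiscreteBifuzzyVariable:
    # outcomes: theta -> (possibility of theta, fuzzy variable attached to theta)
    outcomes: Dict[str, Tuple[float, Membership]]

    def pos_in_interval(self, lo: float, hi: float, grid: int = 1000):
        # Pos{xi(theta) in [lo, hi]} as a function of theta (cf. Theorem 7.1(a)),
        # approximated by evaluating the membership function on a finite grid
        step = (hi - lo) / grid
        result = {}
        for theta, (u, mu) in self.outcomes.items():
            pos = max(mu(lo + k * step) for k in range(grid + 1))
            result[theta] = (u, pos)   # a discrete fuzzy variable: value pos with possibility u
        return result

# the grain-yield prediction of Example 7.3, with "about x ton" modelled
# (as an assumption) by the triangular fuzzy number (0.95x, x, 1.05x)
yield_prediction = DiscreteBifuzzyVariable({
    "theta1": (0.6, triangular(9500, 10000, 10500)),
    "theta2": (0.8, triangular(9975, 10500, 11025)),
    "theta3": (1.0, triangular(10640, 11200, 11760)),
    "theta4": (0.7, triangular(11400, 12000, 12600)),
})

print(yield_prediction.pos_in_interval(10000, 11000))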

Theorem 7.1 Assume that ξ is a bifuzzy variable. Then for any set B of ℜ, we have
(a) the possibility Pos{ξ(θ) ∈ B} is a fuzzy variable;
(b) the necessity Nec{ξ(θ) ∈ B} is a fuzzy variable;
(c) the credibility Cr{ξ(θ) ∈ B} is a fuzzy variable.

Proof: Since Pos{ξ(θ) ∈ B}, Nec{ξ(θ) ∈ B} and Cr{ξ(θ) ∈ B} are functions from the possibility space (Θ,P(Θ),Pos) to the set of real numbers (in fact, [0, 1]), they are fuzzy variables.

Theorem 7.2 Let ξ be a bifuzzy variable. If the expected value E[ξ(θ)] is finite for each θ, then E[ξ(θ)] is a fuzzy variable.

Proof: Since the expected value E[ξ(θ)] is a function from the possibility space (Θ,P(Θ),Pos) to the set of real numbers, it is a fuzzy variable.


Definition 7.2 (Zhou and Liu [161]) An n-dimensional bifuzzy vector is a function from the possibility space (Θ,P(Θ),Pos) to the set of n-dimensional fuzzy vectors.

Theorem 7.3 The vector (ξ1, ξ2, · · · , ξn) is a bifuzzy vector if and only if ξ1, ξ2, · · · , ξn are bifuzzy variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that the vector ξ is a bifuzzy vector on the possibility space (Θ,P(Θ),Pos). Then, for each θ ∈ Θ, the vector ξ(θ) is a fuzzy vector. It follows from Theorem 3.15 that ξ1(θ), ξ2(θ), · · · , ξn(θ) are fuzzy variables. Thus ξ1, ξ2, · · · , ξn are bifuzzy variables.

Conversely, suppose that ξ1, ξ2, · · · , ξn are bifuzzy variables on the possibility space (Θ,P(Θ),Pos). Then, for each θ ∈ Θ, the variables ξ1(θ), ξ2(θ), · · · , ξn(θ) are fuzzy variables. It follows from Theorem 3.15 that ξ(θ) = (ξ1(θ), ξ2(θ), · · · , ξn(θ)) is a fuzzy vector. Thus ξ is a bifuzzy vector.

Theorem 7.4 Let ξ be an n-dimensional bifuzzy vector, and f : ℜ^n → ℜ a function. Then f(ξ) is a bifuzzy variable.

Proof: For each θ ∈ Θ, ξ(θ) is a fuzzy vector and f(ξ(θ)) is a fuzzy variable. Thus f(ξ) is a bifuzzy variable since it is a function from a possibility space to the set of fuzzy variables.

Definition 7.3 (Liu [75], Bifuzzy Arithmetic on Single Space) Let f : ℜ^n → ℜ be a function, and ξ1, ξ2, · · · , ξn bifuzzy variables defined on the possibility space (Θ,P(Θ),Pos). Then ξ = f(ξ1, ξ2, · · · , ξn) is a bifuzzy variable defined by

ξ(θ) = f(ξ1(θ), ξ2(θ), · · · , ξn(θ)), ∀θ ∈ Θ. (7.1)

Definition 7.4 (Liu [75], Bifuzzy Arithmetic on Different Spaces) Let f : ℜ^n → ℜ be a function, and ξi bifuzzy variables defined on (Θi,P(Θi),Posi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a bifuzzy variable defined on the product possibility space (Θ,P(Θ),Pos) as

ξ(θ1, θ2, · · · , θn) = f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn)) (7.2)

for any (θ1, θ2, · · · , θn) ∈ Θ.

7.2 Chance Measure

Definition 7.5 (Liu [76]) Let ξ be a bifuzzy variable, and B a set of ℜ. Then the chance of bifuzzy event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

Ch{ξ ∈ B}(α) = sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ∈ B}. (7.3)


Theorem 7.5 Let ξ be a bifuzzy variable, and B a Borel set of ℜ. For any given α∗ > 0.5, we write β∗ = Ch{ξ ∈ B}(α∗). Then we have

Cr{θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ β∗} ≥ α∗. (7.4)

Proof: Since β∗ is the supremum of β satisfying

Cr{θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ β} ≥ α∗,

there exists an increasing sequence {βi} such that

Cr{θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ βi} ≥ α∗ > 0.5 (7.5)

and βi ↑ β∗ as i → ∞. It is also easy to verify that

{θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ βi} ↓ {θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ β∗}

as i → ∞. It follows from (7.5) and the credibility semicontinuity law that

Cr{θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ β∗} = lim_{i→∞} Cr{θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ βi} ≥ α∗.

The proof is complete.

Example 7.5: When α∗ ≤ 0.5, generally speaking, the inequality

Cr{θ ∈ Θ | Cr{ξ(θ) ∈ B} ≥ β∗} ≥ α∗

does not hold. For example, let Θ = {θ1, θ2, · · ·} and Pos{θi} = 1 for i = 1, 2, · · · A bifuzzy variable ξ is defined on (Θ,P(Θ),Pos) as

ξ(θi) = 1 with possibility 1, and 0 with possibility (i − 1)/i,

for i = 1, 2, · · · Then we have

β∗ = Ch{ξ ≤ 0}(0.5) = sup_{1≤i<∞} (i − 1)/(2i) = 1/2.

However,

Cr{θ ∈ Θ | Cr{ξ(θ) ≤ 0} ≥ β∗} = Cr{∅} = 0 < 0.5.

Theorem 7.6 Let ξ be a bifuzzy variable defined on the possibility space (Θ,P(Θ),Pos), and B a set of ℜ. Then Ch{ξ ∈ B}(α) is a decreasing function of α, and

lim_{α↓0} Ch{ξ ∈ B}(α) = sup_{θ∈Θ+} Cr{ξ(θ) ∈ B}; (7.6)

Ch{ξ ∈ B}(1) = inf_{θ∈Θ+} Cr{ξ(θ) ∈ B} (7.7)

where Θ+ is the kernel of (Θ,P(Θ),Pos).


Proof: For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

Ch{ξ ∈ B}(α1) = sup_{Cr{A}≥α1} inf_{θ∈A} Cr{ξ(θ) ∈ B} ≥ sup_{Cr{A}≥α2} inf_{θ∈A} Cr{ξ(θ) ∈ B} = Ch{ξ ∈ B}(α2).

That is, Ch{ξ ∈ B}(α) is a decreasing function of α. Next we prove (7.6). On the one hand, for any α ∈ (0, 1], we have

Ch{ξ ∈ B}(α) = sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ∈ B} ≤ sup_{θ∈Θ+} Cr{ξ(θ) ∈ B}.

Letting α ↓ 0, we get

lim_{α↓0} Ch{ξ ∈ B}(α) ≤ sup_{θ∈Θ+} Cr{ξ(θ) ∈ B}. (7.8)

On the other hand, for any θ∗ ∈ Θ+, we write α∗ = Cr{θ∗} > 0. Since Ch{ξ ∈ B}(α) is a decreasing function of α, we have

lim_{α↓0} Ch{ξ ∈ B}(α) ≥ Ch{ξ ∈ B}(α∗) ≥ Cr{ξ(θ∗) ∈ B}

which implies that

lim_{α↓0} Ch{ξ ∈ B}(α) ≥ sup_{θ∈Θ+} Cr{ξ(θ) ∈ B}. (7.9)

It follows from (7.8) and (7.9) that (7.6) holds. Finally, we prove (7.7). On the one hand, for any set A with Cr{A} = 1, it is clear that Θ+ ⊂ A. Thus

Ch{ξ ∈ B}(1) = sup_{Cr{A}≥1} inf_{θ∈A} Cr{ξ(θ) ∈ B} ≤ inf_{θ∈Θ+} Cr{ξ(θ) ∈ B}. (7.10)

On the other hand, since Cr{Θ+} = 1, we have

Ch{ξ ∈ B}(1) ≥ inf_{θ∈Θ+} Cr{ξ(θ) ∈ B}. (7.11)

It follows from (7.10) and (7.11) that (7.7) holds. The theorem is proved.

Theorem 7.7 Let ξ be a bifuzzy variable, and {Bi} a sequence of sets of ℜ such that Bi ↓ B. If α > 0.5 and lim_{i→∞} Ch{ξ ∈ Bi}(α) > 0.5 or Ch{ξ ∈ B}(α) ≥ 0.5, then we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α). (7.12)


Proof: Since Bi ↓ B, the chance Ch{ξ ∈ Bi}(α) is decreasing with respect to i. Thus the limit lim_{i→∞} Ch{ξ ∈ Bi}(α) exists and is not less than Ch{ξ ∈ B}(α). If the limit is equal to Ch{ξ ∈ B}(α), then the theorem is proved. Otherwise,

lim_{i→∞} Ch{ξ ∈ Bi}(α) > Ch{ξ ∈ B}(α).

Thus there exists a number z such that

lim_{i→∞} Ch{ξ ∈ Bi}(α) > z > Ch{ξ ∈ B}(α). (7.13)

Hence there exists a set Ai with Cr{Ai} ≥ α such that

inf_{θ∈Ai} Cr{ξ(θ) ∈ Bi} > z

for every i. Since α > 0.5, we may define A = {θ ∈ Θ | Pos{θ} > 2 − 2α}. It is clear that Cr{A} ≥ α and A ⊂ Ai for all i. Thus,

inf_{θ∈A} Cr{ξ(θ) ∈ Bi} ≥ inf_{θ∈Ai} Cr{ξ(θ) ∈ Bi} > z

for every i. It follows from the credibility semicontinuity law that

Cr{ξ(θ) ∈ Bi} ↓ Cr{ξ(θ) ∈ B}, ∀θ ∈ A.

Thus,
Ch{ξ ∈ B}(α) ≥ inf_{θ∈A} Cr{ξ(θ) ∈ B} ≥ z

which contradicts (7.13). The theorem is proved.

Theorem 7.8 (a) Let ξ, ξ1, ξ2, · · · be bifuzzy variables such that ξi(θ) ↑ ξ(θ) for each θ ∈ Θ. If α > 0.5 and lim_{i→∞} Ch{ξi ≤ r}(α) > 0.5 or Ch{ξ ≤ r}(α) ≥ 0.5, then for each real number r, we have

lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α). (7.14)

(b) Let ξ, ξ1, ξ2, · · · be bifuzzy variables such that ξi(θ) ↓ ξ(θ) for each θ ∈ Θ. If α > 0.5 and lim_{i→∞} Ch{ξi ≥ r}(α) > 0.5 or Ch{ξ ≥ r}(α) ≥ 0.5, then for each real number r, we have

lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α). (7.15)

Proof: (a) Since ξi(θ) ↑ ξ(θ) for each θ ∈ Θ, we have {ξi(θ) ≤ r} ↓ {ξ(θ) ≤ r}. Thus the limit lim_{i→∞} Ch{ξi ≤ r}(α) exists and is not less than Ch{ξ ≤ r}(α). If the limit is equal to Ch{ξ ≤ r}(α), the theorem is proved. Otherwise,

lim_{i→∞} Ch{ξi ≤ r}(α) > Ch{ξ ≤ r}(α).


Then there exists z ∈ (0, 1) such that

lim_{i→∞} Ch{ξi ≤ r}(α) > z > Ch{ξ ≤ r}(α). (7.16)

Hence there exists a set Ai with Cr{Ai} ≥ α such that

inf_{θ∈Ai} Cr{ξi(θ) ≤ r} > z

for every i. Since α > 0.5, we may define A = {θ ∈ Θ | Pos{θ} > 2 − 2α}. Then Cr{A} ≥ α and A ⊂ Ai for all i. Thus,

inf_{θ∈A} Cr{ξi(θ) ≤ r} ≥ inf_{θ∈Ai} Cr{ξi(θ) ≤ r} > z

for every i. On the other hand, it follows from Theorem 3.16 that

Cr{ξi(θ) ≤ r} ↓ Cr{ξ(θ) ≤ r}.

Thus,
Cr{ξ(θ) ≤ r} ≥ z, ∀θ ∈ A.

Hence we have

Ch{ξ ≤ r}(α) ≥ inf_{θ∈A} Cr{ξ(θ) ≤ r} ≥ z

which contradicts (7.16). Part (a) is proved. Part (b) may be proved in a similar way.

Variety of Chance Measure

Definition 7.6 (Zhou and Liu [161]) Let ξ be a bifuzzy variable, and B a set of ℜ. For any real number α ∈ (0, 1], the α-chance of bifuzzy event ξ ∈ B is defined as the value of chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 7.7 (Zhou and Liu [161]) Let ξ be a bifuzzy variable, and B a set of ℜ. Then the equilibrium chance of bifuzzy event ξ ∈ B is defined as

Che{ξ ∈ B} = sup_{0<α≤1} {α | Ch{ξ ∈ B}(α) ≥ α} (7.17)

where Ch denotes the chance measure.

Definition 7.8 (Zhou and Liu [161]) Let ξ be a bifuzzy variable, and B a set of ℜ. Then the average chance of bifuzzy event ξ ∈ B is defined as

Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα (7.18)

where Ch denotes the chance measure.
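To illustrate the difference between the two variants, suppose (purely as a hypothetical illustration, not an example from the text) that the chance of some event happens to be the left-continuous step function Ch{ξ ∈ B}(α) = 0.8 for 0 < α ≤ 0.5 and Ch{ξ ∈ B}(α) = 0.3 for 0.5 < α ≤ 1. Then the definitions above give

\[
\mathrm{Ch}_e\{\xi \in B\} = \sup_{0<\alpha\le 1}\{\alpha \mid \mathrm{Ch}\{\xi \in B\}(\alpha) \ge \alpha\} = 0.5,
\qquad
\mathrm{Ch}_a\{\xi \in B\} = \int_0^1 \mathrm{Ch}\{\xi \in B\}(\alpha)\,\mathrm{d}\alpha = 0.8 \times 0.5 + 0.3 \times 0.5 = 0.55.
\]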


Definition 7.9 A bifuzzy variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm}(α) ≡ 0; (7.19)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · ·}(α) ≡ 0. (7.20)

7.3 Chance Distribution

Definition 7.10 (Zhou and Liu [161]) Let ξ be a bifuzzy variable. The chance distribution Φ : [−∞,+∞] × (0, 1] → [0, 1] of ξ is defined by

Φ(x;α) = Ch{ξ ≤ x}(α). (7.21)

Theorem 7.9 (Zhou and Liu [161]) The chance distribution Φ(x;α) of a bifuzzy variable is a decreasing and left-continuous function of α for each x.

Proof: Denote the bifuzzy variable by ξ. For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it follows from Theorem 7.6 that

Φ(x;α1) = Ch{ξ ≤ x}(α1) ≥ Ch{ξ ≤ x}(α2) = Φ(x;α2).

Thus Φ(x;α) is a decreasing function of α. Next we prove the left-continuity of Φ(x;α) with respect to α. Let α ∈ (0, 1] be given and let {αi} be a sequence of numbers with αi ↑ α. Since Φ(x;α) is a decreasing function of α, the limit lim_{i→∞} Φ(x;αi) exists and is not less than Φ(x;α). If the limit is equal to Φ(x;α), then the left-continuity is proved. Otherwise, we have

lim_{i→∞} Φ(x;αi) > Φ(x;α).

Let z∗ = (lim_{i→∞} Φ(x;αi) + Φ(x;α))/2. It is clear that

Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Cr{Ai} ≥ αi such that

inf_{θ∈Ai} Cr{ξ(θ) ≤ x} > z∗

for each i. Now we define

A∗ = ∪_{i=1}^∞ Ai.


It is clear that Cr{A∗} ≥ Cr{Ai} ≥ αi. Letting i → ∞, we get Cr{A∗} ≥ α. Thus

Φ(x;α) ≥ inf_{θ∈A∗} Cr{ξ(θ) ≤ x} ≥ z∗.

A contradiction proves the theorem.

Theorem 7.10 (Zhou and Liu [161]) The chance distribution Φ(x;α) of a bifuzzy variable is an increasing function of x for each fixed α, and

Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α; (7.22)

lim_{x→−∞} Φ(x;α) ≤ 0.5 if α > 0.5; (7.23)

lim_{x→+∞} Φ(x;α) ≥ 0.5 if α < 0.5. (7.24)

Furthermore, if α > 0.5 and lim_{y↓x} Φ(y;α) > 0.5 or Φ(x;α) ≥ 0.5, then we have

lim_{y↓x} Φ(y;α) = Φ(x;α). (7.25)

Proof: Let Φ be the chance distribution of the bifuzzy variable ξ on the possibility space (Θ,P(Θ),Pos). For any given x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞, it is clear that

Φ(x1;α) = sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ x1} ≤ sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ x2} = Φ(x2;α).

That is, the chance distribution Φ(x;α) is an increasing function of x. Since ξ(θ) is a fuzzy variable for any θ ∈ Θ, we have Cr{ξ(θ) ≤ −∞} = 0 for any θ ∈ Θ. It follows that

Φ(−∞;α) = sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ −∞} = 0.

Similarly, we have Cr{ξ(θ) ≤ +∞} = 1 for any θ ∈ Θ. Thus

Φ(+∞;α) = sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ +∞} = 1.

Next we prove (7.23) and (7.24). If α > 0.5, then there exists an element θ∗ ∈ Θ such that 2 − 2α < Pos{θ∗} ≤ 1. It is easy to verify that θ∗ ∈ A if Cr{A} ≥ α. Hence

lim_{x→−∞} Φ(x;α) = lim_{x→−∞} sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ x} ≤ lim_{x→−∞} Cr{ξ(θ∗) ≤ x} ≤ 0.5.


When α < 0.5, there exists an element θ∗ such that Cr{θ∗} ≥ α. Thus we have

lim_{x→+∞} Φ(x;α) = lim_{x→+∞} sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ x} ≥ lim_{x→+∞} Cr{ξ(θ∗) ≤ x} ≥ 0.5.

Finally, we prove (7.25). Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 7.7 that

lim_{y↓x} Φ(y;α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

The theorem is proved.

Example 7.6: The limit lim_{x→−∞} Φ(x;α) may take any value a between 0 and 0.5, and lim_{x→+∞} Φ(x;α) may take any value b between 0.5 and 1. Let ξ be a bifuzzy variable that takes only a single value, namely the fuzzy variable defined by the following membership function,

μ(x) = 2a if x < 0; 1 if x = 0; 2 − 2b if 0 < x.

Then for any α, we have

Φ(x;α) = 0 if x = −∞; a if −∞ < x < 0; b if 0 ≤ x < +∞; 1 if x = +∞.

It follows that lim_{x→−∞} Φ(x;α) = a and lim_{x→+∞} Φ(x;α) = b.

Example 7.7: When α ≤ 0.5, the limit lim_{x→−∞} Φ(x;α) may take any value c between 0 and 1. Suppose that Θ = {θ1, θ2, · · ·}, Pos{θi} = 1 for i = 1, 2, · · · We define a bifuzzy variable ξ on the possibility space (Θ,P(Θ),Pos) as

ξ(θi) = −i with possibility (2c) ∧ 1, and 0 with possibility (2 − 2c) ∧ 1.

When α ≤ 0.5, we have

Φ(x;α) = 0 if x = −∞; c if −∞ < x < 0; 1 if 0 ≤ x ≤ +∞

and lim_{x→−∞} Φ(x;α) = c.


Example 7.8: When α ≥ 0.5, the limit lim_{x→+∞} Φ(x;α) may take any value c between 0 and 1. Suppose that Θ = {θ1, θ2, · · ·}, Pos{θi} = i/(i + 1) for i = 1, 2, · · · and ξ is a bifuzzy variable defined on the possibility space (Θ,P(Θ),Pos) as

ξ(θi) = 0 with possibility (2c) ∧ 1, and i with possibility (2 − 2c) ∧ 1.

Then, when α ≥ 0.5, we have

Φ(x;α) = 0 if −∞ ≤ x < 0; c if 0 ≤ x < +∞; 1 if x = +∞

and lim_{x→+∞} Φ(x;α) = c.

Theorem 7.11 Let ξ be a bifuzzy variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for each x;
(b) a decreasing function of x for each α. Furthermore, if α > 0.5 and

Ch{ξ ≥ x}(α) ≥ 0.5 or lim_{y↑x} Ch{ξ ≥ y}(α) > 0.5,

then we have lim_{y↑x} Ch{ξ ≥ y}(α) = Ch{ξ ≥ x}(α).

Proof: Similar to the proofs of Theorems 7.9 and 7.10.

Definition 7.11 (Zhou and Liu [161]) The chance density function φ : ℜ × (0, 1] → [0,+∞) of a bifuzzy variable ξ is a function such that

Φ(x;α) = ∫_{−∞}^x φ(y;α) dy (7.26)

holds for all x ∈ [−∞,+∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.

7.4 Independent and Identical Distribution

This section introduces the concept of independent and identically distributed (iid) bifuzzy variables.

Definition 7.12 The bifuzzy variables ξ1, ξ2, · · · , ξn are said to be iid if and only if

(Pos{ξi(θ) ∈ B1}, Pos{ξi(θ) ∈ B2}, · · · , Pos{ξi(θ) ∈ Bm}), i = 1, 2, · · · , n

are iid fuzzy vectors for any sets B1, B2, · · · , Bm of ℜ and any positive integer m.


Theorem 7.12 Let ξ1, ξ2, · · · , ξn be iid bifuzzy variables. Then for any set B of ℜ, we have
(a) Pos{ξi(θ) ∈ B}, i = 1, 2, · · · , n are iid fuzzy variables;
(b) Nec{ξi(θ) ∈ B}, i = 1, 2, · · · , n are iid fuzzy variables;
(c) Cr{ξi(θ) ∈ B}, i = 1, 2, · · · , n are iid fuzzy variables.

Proof: The part (a) follows immediately from the definition. (b) Since ξ1, ξ2, · · · , ξn are iid bifuzzy variables, the possibilities Pos{ξi ∈ B^c}, i = 1, 2, · · · , n are iid fuzzy variables. It follows from Nec{ξi ∈ B} = 1 − Pos{ξi ∈ B^c}, i = 1, 2, · · · , n that Nec{ξi(θ) ∈ B}, i = 1, 2, · · · , n are iid fuzzy variables. (c) It follows from the definition of iid bifuzzy variables that (Pos{ξi(θ) ∈ B}, Pos{ξi(θ) ∈ B^c}), i = 1, 2, · · · , n are iid fuzzy vectors. Since, for each i,

Cr{ξi(θ) ∈ B} = (1/2) (Pos{ξi(θ) ∈ B} + 1 − Pos{ξi(θ) ∈ B^c}),

the credibilities Cr{ξi(θ) ∈ B}, i = 1, 2, · · · , n are iid fuzzy variables.

Theorem 7.13 Let f : ℜ → ℜ be a function. If ξ1, ξ2, · · · , ξn are iid bifuzzy variables, then f(ξ1), f(ξ2), · · · , f(ξn) are iid bifuzzy variables.

Proof: We have proved that f(ξ1), f(ξ2), · · · , f(ξn) are bifuzzy variables. For any positive integer m and sets B1, B2, · · · , Bm of ℜ, since

f^{−1}(B1), f^{−1}(B2), · · · , f^{−1}(Bm)

are sets of ℜ, we know that

(Pos{ξi(θ) ∈ f^{−1}(B1)}, Pos{ξi(θ) ∈ f^{−1}(B2)}, · · · , Pos{ξi(θ) ∈ f^{−1}(Bm)}),

i = 1, 2, · · · , n are iid fuzzy vectors. Equivalently, the fuzzy vectors

(Pos{f(ξi(θ)) ∈ B1}, Pos{f(ξi(θ)) ∈ B2}, · · · , Pos{f(ξi(θ)) ∈ Bm}),

i = 1, 2, · · · , n are iid. Hence f(ξ1), f(ξ2), · · · , f(ξn) are iid bifuzzy variables.

7.5 Expected Value Operator

Definition 7.13 (Liu [76]) Let ξ be a bifuzzy variable. Then the expected value of ξ is defined by

E[ξ] = ∫_0^{+∞} Cr{θ ∈ Θ | E[ξ(θ)] ≥ r} dr − ∫_{−∞}^0 Cr{θ ∈ Θ | E[ξ(θ)] ≤ r} dr

provided that at least one of the two integrals is finite.


Theorem 7.14 (Zhou and Liu [161]) Assume that ξ and η are bifuzzy variables with finite expected values. If (i) for each θ ∈ Θ, the fuzzy variables ξ(θ) and η(θ) are independent, and (ii) E[ξ(θ)] and E[η(θ)] are independent fuzzy variables, then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (7.27)

Proof: For any θ ∈ Θ, since the fuzzy variables ξ(θ) and η(θ) are independent, we have E[aξ(θ) + bη(θ)] = aE[ξ(θ)] + bE[η(θ)]. In addition, since E[ξ(θ)] and E[η(θ)] are independent fuzzy variables, we have E[aξ + bη] = E[aE[ξ(θ)] + bE[η(θ)]] = aE[E[ξ(θ)]] + bE[E[η(θ)]] = aE[ξ] + bE[η]. The theorem is proved.

Theorem 7.15 Let ξ, ξ1, ξ2, · · · be bifuzzy variables such that E[ξi(θ)] → E[ξ(θ)] uniformly. Then

lim_{i→∞} E[ξi] = E[ξ]. (7.28)

Proof: Since ξi are bifuzzy variables, E[ξi(θ)] are fuzzy variables for all i. It follows from E[ξi(θ)] → E[ξ(θ)] uniformly and Theorem 3.41 that (7.28) holds.

7.6 Variance, Covariance and Moments

Definition 7.14 (Zhou and Liu [161]) Let ξ be a bifuzzy variable with finite expected value e. The variance of ξ is defined as

V[ξ] = E[(ξ − e)^2]. (7.29)

Theorem 7.16 If ξ is a bifuzzy variable with finite expected value, a and b are real numbers, then V[aξ + b] = a^2 V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)^2] = a^2 E[(ξ − E[ξ])^2] = a^2 V[ξ].

Theorem 7.17 Assume that ξ is a bifuzzy variable whose expected value exists. Then we have

V[E[ξ(θ)]] ≤ V[ξ]. (7.30)

Proof: Denote the expected value of ξ by e. It follows from Theorem 3.53 that

V[E[ξ(θ)]] = E[(E[ξ(θ)] − e)^2] ≤ E[E[(ξ(θ) − e)^2]] = V[ξ].

The theorem is proved.

Theorem 7.18 (Zhou and Liu [161]) Let ξ be a bifuzzy variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e}(1) = 1.


Proof: If V[ξ] = 0, then it follows from V[ξ] = E[(ξ − e)^2] that

∫_0^{+∞} Cr{θ ∈ Θ | E[(ξ(θ) − e)^2] ≥ r} dr = 0

which implies that Cr{θ ∈ Θ | E[(ξ(θ) − e)^2] ≥ r} = 0 for any r > 0. Therefore, Cr{θ ∈ Θ | E[(ξ(θ) − e)^2] = 0} = 1. That is, there exists a set A∗ with Cr{A∗} = 1 such that E[(ξ(θ) − e)^2] = 0 for each θ ∈ A∗. It follows from Theorem 3.47 that Cr{ξ(θ) = e} = 1 for each θ ∈ A∗. Hence

Ch{ξ = e}(1) = sup_{Cr{A}≥1} inf_{θ∈A} Cr{ξ(θ) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 7.5 that there exists a set A∗ with Cr{A∗} = 1 such that

inf_{θ∈A∗} Cr{ξ(θ) = e} = 1.

That is, Cr{(ξ(θ) − e)^2 ≥ r} = 0 for each r > 0 and each θ ∈ A∗. Thus

E[(ξ(θ) − e)^2] = ∫_0^{+∞} Cr{(ξ(θ) − e)^2 ≥ r} dr = 0

for each θ ∈ A∗. It follows that Cr{θ ∈ Θ | E[(ξ(θ) − e)^2] ≥ r} = 0 for any r > 0. Hence

V[ξ] = ∫_0^{+∞} Cr{θ ∈ Θ | E[(ξ(θ) − e)^2] ≥ r} dr = 0.

The theorem is proved.

Definition 7.15 (Zhou and Liu [161]) Let ξ and η be bifuzzy variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])]. (7.31)

Definition 7.16 (Zhou and Liu [161]) For any positive integer k, the expected value E[ξ^k] is called the kth moment of the bifuzzy variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the bifuzzy variable ξ.

7.7 Optimistic and Pessimistic Values

Definition 7.17 (Liu [75]) Let ξ be a bifuzzy variable, and γ, δ ∈ (0, 1]. Then we call

ξsup(γ, δ) = sup{r | Ch{ξ ≥ r}(γ) ≥ δ} (7.32)

the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r | Ch{ξ ≤ r}(γ) ≥ δ} (7.33)

the (γ, δ)-pessimistic value to ξ.


Theorem 7.19 (Zhou and Liu [161]) Let ξ be a bifuzzy variable. Assume that ξsup(γ, δ) is the (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. If γ > 0.5 and δ > 0.5, then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (7.34)

Proof: It follows from the definition of (γ, δ)-pessimistic value that there exists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ ξinf(γ, δ) as i → ∞. Thus we have

lim_{i→∞} Ch{ξ ≤ xi}(γ) ≥ δ > 0.5.

It follows from γ > 0.5 and Theorem 7.10 that

Ch{ξ ≤ ξinf(γ, δ)}(γ) = lim_{i→∞} Ch{ξ ≤ xi}(γ) ≥ δ.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥ δ and xi ↑ ξsup(γ, δ) as i → ∞. Thus we have

lim_{i→∞} Ch{ξ ≥ xi}(γ) ≥ δ > 0.5.

It follows from γ > 0.5 and Theorem 7.11 that

Ch{ξ ≥ ξsup(γ, δ)}(γ) = lim_{i→∞} Ch{ξ ≥ xi}(γ) ≥ δ.

The theorem is proved.

Example 7.9: When γ ≤ 0.5 or δ ≤ 0.5, it is possible that the inequalities

Ch{ξ ≥ ξsup(γ, δ)}(γ) < δ, Ch{ξ ≤ ξinf(γ, δ)}(γ) < δ

hold. Suppose that Θ = {θ1, θ2}, Pos{θ1} = 1, and Pos{θ2} = 0.8. Let ξ be a bifuzzy variable defined on (Θ,P(Θ),Pos) as

ξ(θ) = η if θ = θ1, and 0 if θ = θ2,

where η is a fuzzy variable whose membership function is defined by

μ(x) = 1 if x ∈ (−1, 1), and 0 otherwise.

Then we have

ξsup(0.5, 0.5) = 1 and Ch{ξ ≥ 1}(0.5) = 0 < 0.5;

ξinf(0.5, 0.5) = −1 and Ch{ξ ≤ −1}(0.5) = 0 < 0.5.


Theorem 7.20 (Zhou and Liu [161]) Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of bifuzzy variable ξ, respectively. If γ ≤ 0.5, then we have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (7.35)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (7.36)

where δ1 and δ2 are defined by

δ1 = sup_{θ∈Θ} {ξ(θ)sup(1 − δ) − ξ(θ)inf(1 − δ)},

δ2 = sup_{θ∈Θ} {ξ(θ)sup(δ) − ξ(θ)inf(δ)},

and ξ(θ)sup(δ) and ξ(θ)inf(δ) are δ-optimistic and δ-pessimistic values of fuzzy variable ξ(θ) for each θ, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Θ1 = {θ ∈ Θ | Cr{ξ(θ) > ξsup(γ, δ) + ε} ≥ δ},

Θ2 = {θ ∈ Θ | Cr{ξ(θ) < ξinf(γ, δ) − ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Cr{Θ1} < γ and Cr{Θ2} < γ. Thus Cr{Θ1} + Cr{Θ2} < γ + γ ≤ 1. This fact implies that Θ1 ∪ Θ2 ≠ Θ. Let θ∗ ∉ Θ1 ∪ Θ2. Then we have

Cr{ξ(θ∗) > ξsup(γ, δ) + ε} < δ,

Cr{ξ(θ∗) < ξinf(γ, δ) − ε} < δ.

Since Cr is self-dual, we have

Cr{ξ(θ∗) ≤ ξsup(γ, δ) + ε} > 1 − δ,

Cr{ξ(θ∗) ≥ ξinf(γ, δ) − ε} > 1 − δ.

It follows from the definitions of ξ(θ∗)sup(1 − δ) and ξ(θ∗)inf(1 − δ) that

ξsup(γ, δ) + ε ≥ ξ(θ∗)inf(1 − δ),

ξinf(γ, δ) − ε ≤ ξ(θ∗)sup(1 − δ)

which implies that

ξinf(γ, δ) − ε − (ξsup(γ, δ) + ε) ≤ ξ(θ∗)sup(1 − δ) − ξ(θ∗)inf(1 − δ) ≤ δ1.

Letting ε → 0, we obtain (7.35).


Next we prove the inequality (7.36). Assume γ > 0.5. For any given ε > 0, we define

Θ1 = {θ ∈ Θ | Cr{ξ(θ) ≥ ξsup(γ, δ) − ε} ≥ δ},

Θ2 = {θ ∈ Θ | Cr{ξ(θ) ≤ ξinf(γ, δ) + ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Cr{Θ1} ≥ γ and Cr{Θ2} ≥ γ. Thus Cr{Θ1} + Cr{Θ2} ≥ γ + γ > 1. This fact implies that Θ1 ∩ Θ2 ≠ ∅. Let θ∗ ∈ Θ1 ∩ Θ2. Then we have

Cr{ξ(θ∗) ≥ ξsup(γ, δ) − ε} ≥ δ,

Cr{ξ(θ∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(θ∗)sup(δ) and ξ(θ∗)inf(δ) that

ξsup(γ, δ) − ε ≤ ξ(θ∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(θ∗)inf(δ)

which implies that

ξsup(γ, δ) − ε − (ξinf(γ, δ) + ε) ≤ ξ(θ∗)sup(δ) − ξ(θ∗)inf(δ) ≤ δ2.

The inequality (7.36) is proved by letting ε → 0.

7.8 Convergence Concepts

This section introduces four convergence concepts for sequences of bifuzzy variables: convergence almost surely (a.s.), convergence in chance, convergence in mean, and convergence in distribution.

Table 7.1: Relationship among Convergence Concepts

                 Convergence Almost Surely
  Convergence         ↗         ↖        Convergence
  in Chance           ↘         ↙        in Mean
                 Convergence in Distribution

Definition 7.18 (Zhou and Liu [162]) Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables defined on the possibility space (Θ,P(Θ),Pos). The bifuzzy sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ P(Θ) with Cr{A} = 1 such that {ξi(θ)} converges a.s. to ξ(θ) for every θ ∈ A.


Definition 7.19 (Zhou and Liu [162]) Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables defined on the possibility space (Θ,P(Θ),Pos). We say that the bifuzzy sequence {ξi} converges in chance to ξ if

lim_{i→∞} lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 0 (7.37)

for every ε > 0.

Definition 7.20 (Zhou and Liu [162]) Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0. (7.38)

Definition 7.21 (Zhou and Liu [162]) Suppose that Φ, Φ1, Φ2, · · · are the chance distributions of bifuzzy variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x;α) → Φ(x;α) for all continuity points (x;α) of Φ.

Convergence Almost Surely vs. Convergence in Chance

Theorem 7.21 (Zhou and Liu [162]) Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables on the possibility space (Θ,P(Θ),Pos). If the bifuzzy sequence {ξi} converges in chance to ξ, then {ξi} converges a.s. to ξ.

Proof: Let Θ+ be the kernel of the possibility space (Θ,P(Θ),Pos). Since {ξi} converges in chance to ξ, it is easy to prove that the fuzzy sequence {ξi(θ)} converges in credibility to ξ(θ) for each θ ∈ Θ+. Furthermore, Theorem 3.58 states that the fuzzy sequence {ξi(θ)} converges a.s. to ξ(θ). Thus the bifuzzy sequence {ξi} converges a.s. to ξ. The proof is complete.

Example 7.10: Convergence a.s. does not imply convergence in chance. For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = (j − 1)/j for j = 1, 2, · · ·

Θ′ = {θ′1, θ′2, · · ·}, Pos′{θ′j} = (j − 1)/j for j = 1, 2, · · ·

Suppose that the bifuzzy variables ξ, ξ1, ξ2, · · · are defined on the possibility space (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) = ηi if j = i, and 0 otherwise, (7.39)

where ηi are fuzzy variables defined on (Θ′,P(Θ′),Pos′) as

ηi(θ′j) = i if j = i, and 0 otherwise, (7.40)


for i, j = 1, 2, · · · For every θ ∈ Θ, we can easily verify that the fuzzy sequence {ξi(θ)} converges a.s. to ξ(θ). Thus the bifuzzy sequence {ξi} converges a.s. to ξ. However, for any small number ε > 0, we have

Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = (i − 1)/(2i) if θ = θi, and 0 otherwise,

for i = 1, 2, · · · It follows from Theorem 7.6 that

lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = sup_{θ∈Θ+} Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = (i − 1)/(2i) → 1/2 ≠ 0.

That is, the bifuzzy sequence {ξi} does not converge in chance to ξ.

Convergence in Chance vs. Convergence in Distribution

Theorem 7.22 (Zhou and Liu [162]) Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables on the possibility space (Θ,P(Θ),Pos). If the bifuzzy sequence {ξi} converges in chance to ξ, then {ξi} converges in distribution to ξ.

Proof: Suppose that Φ, Φi are chance distributions of ξ, ξi for i = 1, 2, · · ·, respectively. If {ξi} does not converge in distribution to ξ, then there exists a continuity point (x, α) of Φ such that Φi(x;α) ↛ Φ(x;α). In other words, there exists a number ε∗ > 0 and a subsequence {Φik} such that

Φik(x;α) − Φ(x;α) > 2ε∗, ∀k (7.41)

or
Φ(x;α) − Φik(x;α) > 2ε∗, ∀k. (7.42)

If (7.41) holds, then for the positive number ε∗, there exists δ > 0 such that

|Φ(x + δ;α) − Φ(x;α)| < ε∗

which implies that
Φik(x;α) − Φ(x + δ;α) > ε∗.

Equivalently, we have

sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξik(θ) ≤ x} − sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ x + δ} > ε∗.

Thus, for each k, there exists a set Ak ⊂ Θ with Cr{Ak} ≥ α such that

inf_{θ∈Ak} Cr{ξik(θ) ≤ x} − sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξ(θ) ≤ x + δ} > ε∗.

Moreover, since Cr{Ak} ≥ α, we have

inf_{θ∈Ak} Cr{ξik(θ) ≤ x} − inf_{θ∈Ak} Cr{ξ(θ) ≤ x + δ} > ε∗.


Thus there exists θk ∈ Ak with Cr{θk} > 0 such that

Cr{ξik(θk) ≤ x} − Cr{ξ(θk) ≤ x + δ} > ε∗. (7.43)

Note that ξik(θk) and ξ(θk) are all fuzzy variables, and

{ξik(θk) ≤ x} = {ξik(θk) ≤ x, ξ(θk) ≤ x + δ} ∪ {ξik(θk) ≤ x, ξ(θk) > x + δ}
⊂ {ξ(θk) ≤ x + δ} ∪ {|ξik(θk) − ξ(θk)| > δ}.

It follows from the subadditivity of credibility and (7.43) that

Cr{|ξik(θk) − ξ(θk)| > δ} ≥ Cr{ξik(θk) ≤ x} − Cr{ξ(θk) ≤ x + δ} > ε∗.

Thus we get

lim_{α↓0} Ch{|ξik − ξ| > δ}(α) > ε∗

which implies that the bifuzzy sequence {ξi} does not converge in chance to ξ. A contradiction proves that {ξi} converges in distribution to ξ. The case (7.42) may be proved in a similar way.

Example 7.11: Convergence in distribution does not imply convergence in chance. For example, let Θ = {θ1, θ2, θ3}, Θ′ = {θ′1, θ′2, θ′3}, and

Pos{θ} = 1/2 if θ = θ1; 1 if θ = θ2; 1/2 if θ = θ3,

Pos′{θ′} = 1/2 if θ′ = θ′1; 1 if θ′ = θ′2; 1/2 if θ′ = θ′3.

The bifuzzy variable ξ is defined on (Θ,P(Θ),Pos) as

ξ(θ) = −η if θ = θ1; 0 if θ = θ2; η if θ = θ3 (7.44)

where η is a fuzzy variable defined on (Θ′,P(Θ′),Pos′) as

η(θ′) = −1 if θ′ = θ′1; 0 if θ′ = θ′2; 1 if θ′ = θ′3. (7.45)

We also define
ξi = −ξ, i = 1, 2, · · · (7.46)

It is clear that the bifuzzy variables ξ, ξ1, ξ2, · · · have the same chance distribution: when 0 < α ≤ 0.25,

Φ(x;α) = 0 if x < −1; 0.25 if −1 ≤ x < 0; 1 if 0 ≤ x;


when 0.25 < α ≤ 0.75,

Φ(x;α) = 0 if x < 0, and 1 if 0 ≤ x;

when 0.75 < α ≤ 1,

Φ(x;α) = 0 if x < 0; 0.75 if 0 ≤ x < 1; 1 if 1 ≤ x.

Thus {ξi} converges in distribution to ξ. But, for any small number ε > 0, we have

Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = 1/4 if θ = θ1; 0 if θ = θ2; 1/4 if θ = θ3

for i = 1, 2, · · · It follows that

lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = sup_{θ∈Θ+} Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = 1/4 ↛ 0.

That is, the bifuzzy sequence {ξi} does not converge in chance to ξ.

Convergence Almost Surely vs. Convergence in Distribution

Example 7.12: Convergence a.s. does not imply convergence in distribution. For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = (j − 1)/j for j = 1, 2, · · ·

Θ′ = {θ′1, θ′2, · · ·}, Pos′{θ′j} = (j − 1)/j for j = 1, 2, · · ·

The bifuzzy variables ξ, ξ1, ξ2, · · · are defined on (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) = ηi if j = i, and 0 otherwise, (7.47)

where ηi are fuzzy variables defined on (Θ′,P(Θ′),Pos′) as

ηi(θ′j) = 1 if j = i, and 0 otherwise, (7.48)

for i, j = 1, 2, · · · Then the bifuzzy sequence {ξi} converges a.s. to ξ. However, when 0 < α ≤ 1/4, the chance distribution of ξ is

Φ(x;α) = 0 if x < 0, and 1 if 0 ≤ x;


and the chance distributions of ξi are

Φi(x;α) = 0 if x < 0; (i + 1)/(2i) if 0 ≤ x < 1; 1 if 1 ≤ x,

i = 1, 2, · · ·, respectively. Thus the bifuzzy sequence {ξi} does not converge in distribution to ξ.

Example 7.13: Convergence in distribution does not imply convergence a.s. Recall the example defined by (7.46) in which the bifuzzy sequence {ξi} converges in distribution to ξ. However, ξ(θ1) = −η and ξi(θ1) = η for i = 1, 2, · · · This implies that ξi(θ1) ↛ ξ(θ1), a.s. Thus the bifuzzy sequence {ξi} does not converge a.s. to ξ.

Convergence Almost Surely vs. Convergence in Mean

Theorem 7.23 (Zhou and Liu [162]) Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in mean to ξ, then {ξi} converges a.s. to ξ.

Proof: Let Θ+ be the kernel of the possibility space (Θ,P(Θ),Pos). Since {ξi} converges in mean to ξ, the fuzzy sequence {ξi(θ)} converges in mean to ξ(θ) for each θ ∈ Θ+. It follows from Theorems 3.57 and 3.58 that {ξi(θ)} converges a.s. to ξ(θ). Thus the bifuzzy sequence {ξi} converges a.s. to ξ. The theorem is proved.

Example 7.14: Convergence a.s. does not imply convergence in mean. Consider the example defined by (7.47) in which the bifuzzy sequence {ξi} converges a.s. to ξ. However,

E[|ξi(θ) − ξ(θ)|] = (i − 1)/(2i) if θ = θi, and 0 otherwise,

for i = 1, 2, · · · Thus we have

E[|ξi − ξ|] = ((i − 1)/(2i)) × ((i − 1)/(2i)) → 1/4.

That is, the bifuzzy sequence {ξi} does not converge in mean to ξ.

Convergence in Mean vs. Convergence in Distribution

Theorem 7.24 (Zhou and Liu [162]) Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in mean to ξ, then {ξi} converges in distribution to ξ.


Proof: Suppose that Φ, Φi are chance distributions of ξ, ξi for i = 1, 2, · · ·, respectively. If {ξi} does not converge in distribution to ξ, then there exists a continuity point (x, α) of Φ such that Φi(x;α) ↛ Φ(x;α). In other words, there exists a number ε∗ > 0 and a subsequence {Φik} such that

Φik(x;α) − Φ(x;α) > 2ε∗, ∀k (7.49)

or
Φ(x;α) − Φik(x;α) > 2ε∗, ∀k. (7.50)

If (7.49) holds, then for the positive number ε∗, there exists δ with 0 < δ < α ∧ 0.5 such that

|Φ(x + δ;α − δ) − Φ(x;α)| < ε∗

which implies that

Φik(x;α) − Φ(x + δ;α − δ) > ε∗.

Equivalently, we have

sup_{Cr{A}≥α} inf_{θ∈A} Cr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Cr{ξ(θ) ≤ x + δ} > ε∗.

Thus, for each k, there exists a set Ak ⊂ Θ with Cr{Ak} ≥ α such that

inf_{θ∈Ak} Cr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Cr{ξ(θ) ≤ x + δ} > ε∗.

Write A′k = {θ ∈ Ak | Cr{θ} < δ}. Then A′k ⊂ Ak and Cr{A′k} ≤ δ. Define A∗k = Ak \ A′k. Then

inf_{θ∈A∗k} Cr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Cr{ξ(θ) ≤ x + δ} > ε∗.

It follows from the subadditivity of credibility measure that

Cr{A∗k} ≥ Cr{Ak} − Cr{A′k} ≥ α − δ.

Thus, we have

inf_{θ∈A∗k} Cr{ξik(θ) ≤ x} − inf_{θ∈A∗k} Cr{ξ(θ) ≤ x + δ} > ε∗.

Furthermore, there exists θk ∈ A∗k with Cr{θk} ≥ δ such that

Cr{ξik(θk) ≤ x} − Cr{ξ(θk) ≤ x + δ} > ε∗. (7.51)

Note that ξik(θk) and ξ(θk) are all fuzzy variables, and

{ξik(θk) ≤ x} = {ξik(θk) ≤ x, ξ(θk) ≤ x + δ} ∪ {ξik(θk) ≤ x, ξ(θk) > x + δ}
⊂ {ξ(θk) ≤ x + δ} ∪ {|ξik(θk) − ξ(θk)| > δ}.


It follows from the subadditivity of credibility and (7.51) that

Cr{|ξik(θk) − ξ(θk)| > δ} ≥ Cr{ξik(θk) ≤ x} − Cr{ξ(θk) ≤ x + δ} > ε∗.

Thus, for each k, we have

E[|ξik(θk) − ξ(θk)|] = ∫_0^{+∞} Cr{|ξik(θk) − ξ(θk)| ≥ r} dr > δ × ε∗.

Therefore, for each k, we have

E[|ξik − ξ|] = ∫_0^{+∞} Cr{θ ∈ Θ | E[|ξik(θ) − ξ(θ)|] ≥ r} dr ≥ Cr{θk} × E[|ξik(θk) − ξ(θk)|] > δ^2 × ε∗

which implies that the bifuzzy sequence {ξi} does not converge in mean to ξ. A contradiction proves that {ξi} converges in distribution to ξ. The case (7.50) may be proved in a similar way.

Example 7.15: Convergence in distribution does not imply convergence in mean. Let us consider the example defined by (7.46) in which the bifuzzy sequence {ξi} converges in distribution to ξ. However, we have

E[|ξi(θ) − ξ(θ)|] = 1/2 if θ = θ1; 0 if θ = θ2; 1/2 if θ = θ3

for i = 1, 2, · · · Then we have

E[|ξi − ξ|] = ∫_0^{+∞} Cr{θ ∈ Θ | E[|ξi(θ) − ξ(θ)|] ≥ r} dr = (1/4) × (1/2) ↛ 0.

That is, the bifuzzy sequence {ξi} does not converge in mean to ξ.

Convergence in Chance vs. Convergence in Mean

Example 7.16: Convergence in chance does not imply convergence in mean. For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = (j − 1)/j for j = 1, 2, · · ·

Θ′ = {θ′1, θ′2, · · ·}, Pos′{θ′j} = 1/j for j = 1, 2, · · ·

Suppose that ξ, ξ1, ξ2, · · · are bifuzzy variables defined on (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) = ηi if j = i, and 0 otherwise, (7.52)


where ηi are fuzzy variables defined on (Θ′,P(Θ′),Pos′) as

ηi(θ′j) = i if j = i, and 0 otherwise, (7.53)

for i, j = 1, 2, · · · Then for any small number ε > 0, we have

Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = 1/(2i) if θ = θi, and 0 otherwise,

for i = 1, 2, · · · Furthermore, we have

lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = sup_{θ∈Θ+} Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = 1/(2i) → 0.

That is, the bifuzzy sequence {ξi} converges in chance to ξ. However,

E[|ξi(θ) − ξ(θ)|] = 1/2 if θ = θi, and 0 otherwise,

for i = 1, 2, · · · Thus we have

E[|ξi − ξ|] = ∫_0^{+∞} Cr{θ ∈ Θ | E[|ξi(θ) − ξ(θ)|] ≥ r} dr = ((i − 1)/(2i)) × (1/2) ↛ 0.

That is, the bifuzzy sequence {ξi} does not converge in mean to ξ.

Example 7.17: Convergence in mean does not imply convergence in chance. For example, let

Θ = {θ1, θ2, · · ·}, Pos{θj} = 1/j for j = 1, 2, · · ·

Θ′ = {θ′1, θ′2, · · ·}, Pos′{θ′j} = (j − 1)/j for j = 1, 2, · · ·

The bifuzzy variables ξ, ξ1, ξ2, · · · are defined on (Θ,P(Θ),Pos) as ξ = 0 and

ξi(θj) = ηi if j = i, and 0 otherwise, (7.54)

where ηi are fuzzy variables defined on (Θ′,P(Θ′),Pos′) as

ηi(θ′j) = 1 if j = i, and 0 otherwise, (7.55)

for i, j = 1, 2, · · · Then we have

E[|ξi(θ) − ξ(θ)|] = (i − 1)/(2i) if θ = θi, and 0 otherwise,


for i = 1, 2, · · · and

E[|ξi − ξ|] = (1/(2i)) × ((i − 1)/(2i)) → 0.

Thus the bifuzzy sequence {ξi} converges in mean to ξ. However, for any small number ε > 0, we have

Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = (i − 1)/(2i) if θ = θi, and 0 otherwise,

for i = 1, 2, · · · and

lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = sup_{θ∈Θ+} Cr′{|ξi(θ) − ξ(θ)| ≥ ε} = (i − 1)/(2i) → 1/2 ≠ 0.

That is, the bifuzzy sequence {ξi} does not converge in chance to ξ.

7.9 Bifuzzy Simulations

It is impossible to design an analytic algorithm to deal with general bifuzzy systems. Instead, we introduce some bifuzzy simulations for finding critical values, computing chance functions, and calculating expected values.

Example 7.18: Assume that ξ is an n-dimensional bifuzzy vector defined on the possibility space (Θ,P(Θ),Pos), and f : ℜ^n → ℜ is a function. For any confidence level α, we design a bifuzzy simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). Equivalently, we should find the supremum β such that

Cr{θ ∈ Θ | Cr{f(ξ(θ)) ≤ 0} ≥ β} ≥ α. (7.56)

We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. For each θk, by using fuzzy simulation, we can estimate the credibility g(θk) = Cr{f(ξ(θk)) ≤ 0}. For any number r, we have

L(r) = (1/2) ( max_{1≤k≤N} {νk | g(θk) ≥ r} + min_{1≤k≤N} {1 − νk | g(θk) < r} ).

It follows from monotonicity that we may employ bisection search to find the maximal value r such that L(r) ≥ α. This value is an estimation of β. We summarize this process as follows.

Algorithm 7.1 (Bifuzzy Simulation)
Step 1. Generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.


Step 2. Find the maximal value r such that L(r) ≥ α holds.
Step 3. Return r.

Suppose that the bifuzzy variables ξ1, ξ2, ξ3 are defined as

ξ1 = (ρ1 − 1, ρ1, ρ1 + 1), with ρ1 = (0, 1, 2),
ξ2 = (ρ2 − 1, ρ2, ρ2 + 1), with ρ2 = (1, 2, 3),
ξ3 = (ρ3 − 1, ρ3, ρ3 + 1), with ρ3 = (2, 3, 4).

A run of bifuzzy simulation with 10000 cycles shows that

Ch{ξ1 + ξ2 + ξ3 ≥ 2}(0.9) = 0.61.
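A Python sketch of Algorithm 7.1 for this example is given below. The outer loop has the same shape as the random fuzzy simulations of Section 6.9; the difference is the inner step, which is a fuzzy simulation of the credibility Cr{ξ1 + ξ2 + ξ3 ≥ 2} at each fixed θk. The rejection sampler, the sample-based credibility estimator, and the sample sizes are our own assumptions, so this is only an illustrative sketch.

import numpy as np

rng = np.random.default_rng(3)

def tri_mu(x, a, b, c):
    # membership of the triangular fuzzy number (a, b, c) at the point x
    if a <= x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def fuzzy_cr_sum_ge(rhos, level, M=2000):
    # inner fuzzy simulation: estimate Cr{xi1 + xi2 + xi3 >= level} at a fixed theta,
    # where xi_j(theta) = (rho_j - 1, rho_j, rho_j + 1) is triangular
    mus, sums = [], []
    for _ in range(M):
        pt = [rng.uniform(r - 1, r + 1) for r in rhos]       # a point of the support
        mus.append(min(tri_mu(x, r - 1, r, r + 1) for x, r in zip(pt, rhos)))
        sums.append(sum(pt))
    mus, sums = np.array(mus), np.array(sums)
    hit = sums >= level
    left = mus[hit].max() if hit.any() else 0.0
    right = (1 - mus[~hit]).min() if (~hit).any() else 1.0
    return 0.5 * (left + right)

def bifuzzy_alpha_chance(alpha, rho_params, level=2, eps=0.01, N=200):
    # outer loop of Algorithm 7.1: sample theta_k, record nu_k and g(theta_k)
    nus, gs = [], []
    for _ in range(N):
        while True:
            rhos = [rng.uniform(a, c) for (a, b, c) in rho_params]
            nu = min(tri_mu(v, *p) for v, p in zip(rhos, rho_params))
            if nu >= eps:
                break
        nus.append(nu)
        gs.append(fuzzy_cr_sum_ge(rhos, level))
    nus, gs = np.array(nus), np.array(gs)

    def L(r):   # empty max taken as 0, empty min taken as 1
        left = nus[gs >= r].max() if np.any(gs >= r) else 0.0
        right = (1 - nus[gs < r]).min() if np.any(gs < r) else 1.0
        return 0.5 * (left + right)

    lo, hi = 0.0, 1.0
    for _ in range(30):                  # bisection for the maximal r with L(r) >= alpha
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L(mid) >= alpha else (lo, mid)
    return lo

print(bifuzzy_alpha_chance(0.9, [(0, 1, 2), (1, 2, 3), (2, 3, 4)]))   # the book reports 0.61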

Example 7.19: Assume that f : ℜ^n → ℜ is a function, and ξ is an n-dimensional bifuzzy vector defined on the possibility space (Θ,P(Θ),Pos). For any given confidence levels α and β, we need to design a bifuzzy simulation to find the maximal value f̄ such that

Ch{f(ξ) ≥ f̄}(α) ≥ β

holds. That is, we must find the maximal value f̄ such that

Cr{θ ∈ Θ | Cr{f(ξ(θ)) ≥ f̄} ≥ β} ≥ α.

We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. For each θk, we search for the maximal value f̄(θk) such that Cr{f(ξ(θk)) ≥ f̄(θk)} ≥ β by fuzzy simulation. For any number r, we have

L(r) = (1/2) ( max_{1≤k≤N} {νk | f̄(θk) ≥ r} + min_{1≤k≤N} {1 − νk | f̄(θk) < r} ).

It follows from monotonicity that we may employ bisection search to find the maximal value r such that L(r) ≥ α. This value is an estimation of f̄. We summarize this process as follows.

It follows from monotonicity that we may employ bisection search to find themaximal value r such that L(r) ≥ α. This value is an estimation of f . Wesummarize this process as follows.

Algorithm 7.2 (Bifuzzy Simulation)Step 1. Generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N ,

where ε is a sufficiently small number.Step 2. Find the maximal value r such that L(r) ≥ α holds.Step 3. Return r.

In order to find the maximal f̄ such that Ch{ξ1² + ξ2² + ξ3² ≥ f̄}(0.9) ≥ 0.8, where ξ1, ξ2, ξ3 are defined as

    μξ1(x) = exp[−|x − ρ1|], with μρ1(x) = [1 − (x − 1)²] ∨ 0,
    μξ2(x) = exp[−|x − ρ2|], with μρ2(x) = [1 − (x − 2)²] ∨ 0,
    μξ3(x) = exp[−|x − ρ3|], with μρ3(x) = [1 − (x − 3)²] ∨ 0,


we perform the bifuzzy simulation with 10000 cycles and obtain that f̄ = 1.89.

Example 7.20: Assume that f : ℜ^n → ℜ is a function, and ξ is an n-dimensional bifuzzy vector defined on the possibility space (Θ,P(Θ),Pos). Then f(ξ) is a bifuzzy variable whose expected value E[f(ξ)] is

    ∫_0^{+∞} Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r} dr − ∫_{−∞}^0 Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r} dr.

A bifuzzy simulation will be introduced to compute the expected value E[f(ξ)]. We randomly sample θk from Θ such that Pos{θk} ≥ ε, and denote νk = Pos{θk} for k = 1, 2, · · · , N, where ε is a sufficiently small number. Then for any number r ≥ 0, the credibility Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r} can be estimated by

    (1/2) ( max_{1≤k≤N} {νk | E[f(ξ(θk))] ≥ r} + min_{1≤k≤N} {1 − νk | E[f(ξ(θk))] < r} )

and for any number r < 0, the credibility Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r} can be estimated by

    (1/2) ( max_{1≤k≤N} {νk | E[f(ξ(θk))] ≤ r} + min_{1≤k≤N} {1 − νk | E[f(ξ(θk))] > r} )

provided that N is sufficiently large, where E[f(ξ(θk))], k = 1, 2, · · · , N may be estimated by fuzzy simulation.

Algorithm 7.3 (Bifuzzy Simulation)
Step 1. Set e = 0.
Step 2. Randomly sample θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 3. Let a = min_{1≤k≤N} E[f(ξ(θk))] and b = max_{1≤k≤N} E[f(ξ(θk))].
Step 4. Randomly generate r from [a, b].
Step 5. If r ≥ 0, then e ← e + Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r}.
Step 6. If r < 0, then e ← e − Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r}.
Step 7. Repeat the fourth to sixth steps for N times.
Step 8. E[f(ξ)] = a ∨ 0 + b ∧ 0 + e · (b − a)/N.

Suppose that the bifuzzy variables ξ1, ξ2, ξ3, ξ4 are defined as

    ξ1 = (ρ1 − 1, ρ1, ρ1 + 1), with ρ1 = (1, 2, 3),
    ξ2 = (ρ2 − 1, ρ2, ρ2 + 1), with ρ2 = (2, 3, 4),
    ξ3 = (ρ3 − 1, ρ3, ρ3 + 1), with ρ3 = (3, 4, 5),
    ξ4 = (ρ4 − 1, ρ4, ρ4 + 1), with ρ4 = (4, 5, 6).

A run of bifuzzy simulation with 10000 cycles shows that the expected value

    E[√(ξ1 + ξ2 + ξ3 + ξ4)] = 3.70.
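A minimal Python sketch of Algorithm 7.3 for this example is given below. It is illustrative only: the helper names are mine, the inner expected value of the square root is computed by numerically integrating the closed-form credibility of a triangular fuzzy number (here the sum ξ1(θ)+···+ξ4(θ) is triangular and strictly positive) instead of by a fuzzy simulation, and the sample sizes are arbitrary.

    import random

    EPS = 0.001  # possibility threshold (arbitrary choice)

    def tri_membership(x, a, b, c):
        """Membership degree of x in the triangular fuzzy number (a, b, c)."""
        if a <= x <= b and b > a:
            return (x - a) / (b - a)
        if b < x <= c and c > b:
            return (c - x) / (c - b)
        return 0.0

    def sample_point(a, b, c):
        """Sample a point of (a, b, c) whose membership is at least EPS."""
        while True:
            x = random.uniform(a, c)
            nu = tri_membership(x, a, b, c)
            if nu >= EPS:
                return x, nu

    def cr_geq_tri(a, b, c, r):
        """Credibility Cr{T >= r} for a triangular fuzzy number T = (a, b, c)."""
        if r <= a:
            return 1.0
        if r <= b:
            return 1.0 - 0.5 * (r - a) / (b - a)
        if r <= c:
            return 0.5 * (c - r) / (c - b)
        return 0.0

    def expected_sqrt_tri(a, b, c, steps=300):
        """E[sqrt(T)] for a positive triangular T, via the integral of Cr{T >= r^2} over r >= 0."""
        hi = c ** 0.5
        dr = hi / steps
        return sum(cr_geq_tri(a, b, c, ((k + 0.5) * dr) ** 2) for k in range(steps)) * dr

    def bifuzzy_expected_value(n_theta=1000, n_r=1000):
        """Estimate E[sqrt(xi1 + xi2 + xi3 + xi4)] in the spirit of Algorithm 7.3."""
        rhos = [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6)]
        samples = []                                       # pairs (nu_k, E[f(xi(theta_k))])
        for _ in range(n_theta):
            us, nus = zip(*(sample_point(*rho) for rho in rhos))
            s = sum(us)                                    # xi(theta_k) is triangular (s-4, s, s+4)
            samples.append((min(nus), expected_sqrt_tri(s - 4, s, s + 4)))

        def cr_geq(r):                                     # Cr{theta : E[f(xi(theta))] >= r}
            pos = max((nu for nu, ek in samples if ek >= r), default=0.0)
            nec = min((1.0 - nu for nu, ek in samples if ek < r), default=1.0)
            return 0.5 * (pos + nec)

        a = min(ek for _, ek in samples)
        b = max(ek for _, ek in samples)                   # all inner expectations are positive here
        e = sum(cr_geq(random.uniform(a, b)) for _ in range(n_r))
        return max(a, 0.0) + min(b, 0.0) + e * (b - a) / n_r

    if __name__ == "__main__":
        random.seed(0)
        print(bifuzzy_expected_value())   # compare with the value 3.70 reported above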


Chapter 8

Birandom Theory

Roughly speaking, a birandom variable is a random variable defined on the universal set of random variables, or a random variable taking “random variable” values.

The emphasis in this chapter is mainly on birandom variables, birandom arithmetic, chance measure, chance distribution, independent and identical distribution, expected value operator, variance, critical values, convergence concepts, laws of large numbers, and birandom simulation.

8.1 Birandom Variables

Definition 8.1 (Peng and Liu [116]) A birandom variable is a function ξ from a probability space (Ω,A,Pr) to the set of random variables such that Pr{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ.

Example 8.1: Let Ω = {ω1, ω2}, and Pr{ω1} = Pr{ω2} = 1/2. Then (Ω,A,Pr) is a probability space on which we define a function as

    ξ(ω) = ξ1 if ω = ω1, and ξ(ω) = ξ2 if ω = ω2,

where ξ1 is a uniformly distributed random variable on [0, 1], and ξ2 is a normally distributed random variable. Then the function ξ is a birandom variable.
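On a finite space such as this one, a birandom variable can be represented directly as a map from outcomes to probability distributions. The hedged Python sketch below (using scipy.stats, and taking the unspecified normal to be standard normal purely for illustration) shows that, for a fixed Borel set B, the map ω ↦ Pr{ξ(ω) ∈ B} is just an ordinary random variable on Ω.

    from scipy import stats

    # Finite probability space Omega = {w1, w2} with Pr{w1} = Pr{w2} = 1/2.
    Pr = {"w1": 0.5, "w2": 0.5}

    # The birandom variable of Example 8.1: xi(w1) is uniform on [0, 1];
    # xi(w2) is taken to be a standard normal (the example leaves its parameters open).
    xi = {"w1": stats.uniform(0, 1), "w2": stats.norm(0, 1)}

    # For the Borel set B = (-inf, 0.3], the map w -> Pr{xi(w) in B} is a random
    # variable on (Omega, A, Pr); here it simply takes two values.
    print({w: xi[w].cdf(0.3) for w in Pr})   # {'w1': 0.3, 'w2': about 0.618}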

Theorem 8.1 Assume that ξ is a birandom variable, and B is a Borel set of ℜ. Then the probability Pr{ξ(ω) ∈ B} is a random variable.

Proof: Since Pr{ξ(ω) ∈ B} is a measurable function of ω from the probability space (Ω,A,Pr) to the set of real numbers (in fact, to [0, 1]), it is a random variable.


Theorem 8.2 Let ξ be a birandom variable. If the expected value E[ξ(ω)] is finite for each ω, then E[ξ(ω)] is a random variable.

Proof: In order to prove that the expected value E[ξ(ω)] is a random variable, we only need to show that E[ξ(ω)] is a measurable function of ω. It is obvious that

    E[ξ(ω)] = ∫_0^{+∞} Pr{ξ(ω) ≥ r} dr − ∫_{−∞}^0 Pr{ξ(ω) ≤ r} dr
            = lim_{j→∞} lim_{k→∞} ( Σ_{l=1}^{k} (j/k) Pr{ξ(ω) ≥ lj/k} − Σ_{l=1}^{k} (j/k) Pr{ξ(ω) ≤ −lj/k} ).

Since Pr{ξ(ω) ≥ lj/k} and Pr{ξ(ω) ≤ −lj/k} are all measurable functions for any integers j, k and l, the expected value E[ξ(ω)] is a measurable function of ω. The proof is complete.

Definition 8.2 (Peng and Liu [116]) An n-dimensional birandom vector is a function ξ from a probability space (Ω,A,Pr) to the set of n-dimensional random vectors such that Pr{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ^n.

Theorem 8.3 (Peng and Liu [116]) If (ξ1, ξ2, · · · , ξn) is a birandom vector, then ξ1, ξ2, · · · , ξn are birandom variables. Conversely, if ξ1, ξ2, · · · , ξn are birandom variables, and for each ω ∈ Ω, the random variables ξ1(ω), ξ2(ω), · · · , ξn(ω) are independent, then (ξ1, ξ2, · · · , ξn) is a birandom vector.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a birandom vector on the probability space (Ω,A,Pr). For any Borel set B of ℜ, the set B × ℜ^{n−1} is a Borel set of ℜ^n. Thus the function

    Pr{ξ1(ω) ∈ B} = Pr{ξ1(ω) ∈ B, ξ2(ω) ∈ ℜ, · · · , ξn(ω) ∈ ℜ} = Pr{ξ(ω) ∈ B × ℜ^{n−1}}

is a measurable function of ω. Hence ξ1 is a birandom variable. A similar process may prove that ξ2, ξ3, · · · , ξn are birandom variables.

Conversely, suppose that ξ1, ξ2, · · · , ξn are birandom variables on the probability space (Ω,A,Pr). We write ξ = (ξ1, ξ2, · · · , ξn) and define the class

    C = {C ⊂ ℜ^n | Pr{ξ(ω) ∈ C} is a measurable function of ω}.

The vector ξ is a birandom vector if we can prove that C contains all Borel sets of ℜ^n. Let C1, C2, · · · ∈ C, and Ci ↑ C or Ci ↓ C. It follows from the probability continuity theorem that Pr{ξ(ω) ∈ Ci} → Pr{ξ(ω) ∈ C} as i → ∞. Thus Pr{ξ(ω) ∈ C} is a measurable function of ω, and C ∈ C. Hence C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n since

    Pr{ξ(ω) ∈ (−∞, a]} = Π_{i=1}^{n} Pr{ξi(ω) ∈ (−∞, ai]};
    Pr{ξ(ω) ∈ (a, b]} = Π_{i=1}^{n} Pr{ξi(ω) ∈ (ai, bi]};
    Pr{ξ(ω) ∈ (b,+∞)} = Π_{i=1}^{n} Pr{ξi(ω) ∈ (bi,+∞)};
    Pr{ξ(ω) ∈ ℜ^n} = 1.

Let F be the class of all finite unions of disjoint intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n. Note that for any disjoint sets C1, C2, · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm, we have

    Pr{ξ(ω) ∈ C} = Σ_{i=1}^{m} Pr{ξ(ω) ∈ Ci}.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the class F is an algebra. Since the smallest σ-algebra containing F is just the Borel algebra of ℜ^n, the monotone class theorem implies that C contains all Borel sets of ℜ^n. The theorem is proved.

Theorem 8.4 Let ξ be an n-dimensional birandom vector, and f : ℜ^n → ℜ a measurable function. Then f(ξ) is a birandom variable.

Proof: It is clear that f^{−1}(B) is a Borel set of ℜ^n for any Borel set B of ℜ. Thus, for each ω ∈ Ω, we have

    Pr{f(ξ(ω)) ∈ B} = Pr{ξ(ω) ∈ f^{−1}(B)}

which is a measurable function of ω. That is, f(ξ) is a birandom variable. The theorem is proved.

Definition 8.3 (Peng and Liu [116], Birandom Arithmetic on Single Space) Let f : ℜ^n → ℜ be a measurable function, and ξ1, ξ2, · · · , ξn birandom variables on the probability space (Ω,A,Pr). Then ξ = f(ξ1, ξ2, · · · , ξn) is a birandom variable defined by

    ξ(ω) = f(ξ1(ω), ξ2(ω), · · · , ξn(ω)), ∀ω ∈ Ω.    (8.1)

Definition 8.4 (Peng and Liu [116], Birandom Arithmetic on Different Spaces) Let f : ℜ^n → ℜ be a measurable function, and ξi birandom variables on the probability spaces (Ωi,Ai,Pri), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a birandom variable on the product probability space (Ω,A,Pr), defined by

    ξ(ω1, ω2, · · · , ωn) = f(ξ1(ω1), ξ2(ω2), · · · , ξn(ωn))    (8.2)

for all (ω1, ω2, · · · , ωn) ∈ Ω.

8.2 Chance Measure

Definition 8.5 (Peng and Liu [116]) Let ξ be a birandom variable, and B a Borel set of ℜ. Then the chance of the birandom event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

    Ch{ξ ∈ B}(α) = sup_{Pr{A}≥α} inf_{ω∈A} Pr{ξ(ω) ∈ B}.    (8.3)

Theorem 8.5 Let ξ be a birandom variable, and B a Borel set of ℜ. Write β∗ = Ch{ξ ∈ B}(α∗). Then we have

    Pr{ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ β∗} ≥ α∗.    (8.4)

Proof: It follows from the definition of chance that β∗ is just the supremum of β satisfying

    Pr{ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ β} ≥ α∗.

Thus there exists an increasing sequence {βi} such that

    Pr{ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ βi} ≥ α∗

and βi ↑ β∗ as i → ∞. It is easy to prove that

    {ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ βi} ↓ {ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ β∗}

as i → ∞. It follows from the probability continuity theorem that

    Pr{ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ β∗} = lim_{i→∞} Pr{ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ βi} ≥ α∗.

The proof is complete.

Theorem 8.6 Let ξ be a birandom variable, and {Bi} a sequence of Borel sets of ℜ. If Bi ↓ B, then

    lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α).    (8.5)


Proof: Write

    β = Ch{ξ ∈ B}(α), βi = Ch{ξ ∈ Bi}(α), i = 1, 2, · · ·

Since Bi ↓ B, it is clear that β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

    ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξ ∈ Bi}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 8.5 that

    Pr{ω ∈ Ω | Pr{ξ(ω) ∈ Bi} ≥ ρ} ≥ Pr{ω ∈ Ω | Pr{ξ(ω) ∈ Bi} ≥ βi} ≥ α.

It follows from the probability continuity theorem that

    {ω ∈ Ω | Pr{ξ(ω) ∈ Bi} ≥ ρ} ↓ {ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ ρ}.

It follows again from the probability continuity theorem that

    Pr{ω ∈ Ω | Pr{ξ(ω) ∈ B} ≥ ρ} = lim_{i→∞} Pr{ω ∈ Ω | Pr{ξ(ω) ∈ Bi} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (8.5) holds.

Theorem 8.7 (a) Let ξ, ξ1, ξ2, · · · be birandom variables such that ξi(ω) ↑ ξ(ω) for each ω ∈ Ω. Then we have

    lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α).    (8.6)

(b) Let ξ, ξ1, ξ2, · · · be birandom variables such that ξi(ω) ↓ ξ(ω) for each ω ∈ Ω. Then we have

    lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α).    (8.7)

Proof: (a) Write

    β = Ch{ξ ≤ r}(α), βi = Ch{ξi ≤ r}(α), i = 1, 2, · · ·

Since ξi(ω) ↑ ξ(ω) for each ω ∈ Ω, it is clear that {ξi(ω) ≤ r} ↓ {ξ(ω) ≤ r} for each ω ∈ Ω and β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

    ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξi ≤ r}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 8.5 that

    Pr{ω ∈ Ω | Pr{ξi(ω) ≤ r} ≥ ρ} ≥ Pr{ω ∈ Ω | Pr{ξi(ω) ≤ r} ≥ βi} ≥ α.

Since {ξi(ω) ≤ r} ↓ {ξ(ω) ≤ r} for each ω ∈ Ω, it follows from the probability continuity theorem that

    {ω ∈ Ω | Pr{ξi(ω) ≤ r} ≥ ρ} ↓ {ω ∈ Ω | Pr{ξ(ω) ≤ r} ≥ ρ}.

By using the probability continuity theorem, we get

    Pr{ω ∈ Ω | Pr{ξ(ω) ≤ r} ≥ ρ} = lim_{i→∞} Pr{ω ∈ Ω | Pr{ξi(ω) ≤ r} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (8.6) holds. The part (b) may be proved similarly.

Variety of Chance Measure

Definition 8.6 (Peng and Liu [116]) Let ξ be a birandom variable, and B a Borel set of ℜ. For any real number α ∈ (0, 1], the α-chance of the birandom event ξ ∈ B is defined as the value of the chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 8.7 (Peng and Liu [116]) Let ξ be a birandom variable, and B a Borel set of ℜ. Then the equilibrium chance of the birandom event ξ ∈ B is defined as

    Che{ξ ∈ B} = sup_{0<α≤1} {α | Ch{ξ ∈ B}(α) ≥ α}    (8.8)

where Ch denotes the chance measure.

Definition 8.8 (Peng and Liu [116]) Let ξ be a birandom variable, and B a Borel set of ℜ. Then the average chance of the birandom event ξ ∈ B is defined as

    Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα    (8.9)

where Ch denotes the chance measure.
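For a concrete reading of Definitions 8.7 and 8.8: Ch{ξ ∈ B}(α) is a decreasing function of α, so the equilibrium chance is the largest level α that the chance still dominates, and the average chance is simply its integral over α. The small Python sketch below, applied to a hypothetical step-shaped chance function (not one taken from the book), illustrates both quantities.

    def equilibrium_chance(ch, grid=10000):
        """sup{alpha in (0,1] : Ch(alpha) >= alpha}, evaluated on a grid."""
        alphas = [(k + 1) / grid for k in range(grid)]
        return max((a for a in alphas if ch(a) >= a), default=0.0)

    def average_chance(ch, grid=10000):
        """Integral of Ch(alpha) over alpha in (0, 1], by the midpoint rule."""
        return sum(ch((k + 0.5) / grid) for k in range(grid)) / grid

    # A hypothetical step chance function: Ch{xi in B}(alpha) = 0.7 for alpha <= 0.5, else 0.2.
    ch = lambda a: 0.7 if a <= 0.5 else 0.2
    print(equilibrium_chance(ch))   # 0.5
    print(average_chance(ch))       # about 0.45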

Definition 8.9 A birandom variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

    Ch{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm}(α) ≡ 0;    (8.10)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

    Ch{ξ ≠ x1, ξ ≠ x2, · · ·}(α) ≡ 0.    (8.11)


8.3 Chance Distribution

Definition 8.10 (Peng and Liu [116]) Let ξ be a birandom variable. Then the chance distribution Φ: [−∞,+∞] × (0, 1] → [0, 1] of ξ is defined by

    Φ(x;α) = Ch{ξ ≤ x}(α).    (8.12)

Theorem 8.8 (Peng and Liu [116]) The chance distribution Φ(x;α) of a birandom variable is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) an increasing and right-continuous function of x for any fixed α, and

    Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α;    (8.13)
    lim_{x→−∞} Φ(x;α) = 0, ∀α;    (8.14)
    lim_{x→+∞} Φ(x;α) = 1 if α < 1.    (8.15)

Proof: Let Φ(x;α) be the chance distribution of the birandom variable ξ defined on the probability space (Ω,A,Pr). Part (a): For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

    Φ(x;α1) = sup_{Pr{A}≥α1} inf_{ω∈A} Pr{ξ(ω) ≤ x} ≥ sup_{Pr{A}≥α2} inf_{ω∈A} Pr{ξ(ω) ≤ x} = Φ(x;α2).

Thus Φ(x;α) is a decreasing function of α for each fixed x. We next prove that Φ(x;α) is a left-continuous function of α. Let α ∈ (0, 1] be given, and let {αi} be a sequence of numbers with αi ↑ α. Since Φ(x;α) is a decreasing function of α, the limit lim_{i→∞} Φ(x;αi) exists and is not less than Φ(x;α). If the limit is equal to Φ(x;α), then the left-continuity is proved. Otherwise, we have

    lim_{i→∞} Φ(x;αi) > Φ(x;α).

Let z∗ = (lim_{i→∞} Φ(x;αi) + Φ(x;α))/2. It is clear that

    Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Pr{Ai} ≥ αi such that

    inf_{ω∈Ai} Pr{ξ(ω) ≤ x} > z∗

for each i. Now we define

    A∗ = ∪_{i=1}^{∞} Ai.

It is clear that Pr{A∗} ≥ Pr{Ai} ≥ αi. Letting i → ∞, we get Pr{A∗} ≥ α. Thus

    Φ(x;α) ≥ inf_{ω∈A∗} Pr{ξ(ω) ≤ x} ≥ z∗.

A contradiction proves part (a).

Next we prove part (b). For any x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞, it is clear that

    Φ(x1;α) = sup_{Pr{A}≥α} inf_{ω∈A} Pr{ξ(ω) ≤ x1} ≤ sup_{Pr{A}≥α} inf_{ω∈A} Pr{ξ(ω) ≤ x2} = Φ(x2;α).

Therefore, Φ(x;α) is an increasing function of x. Let us prove that Φ(x;α) is a right-continuous function of x. Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 8.6 that

    lim_{y↓x} Φ(y;α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

Thus Φ(x;α) is a right-continuous function of x for each fixed α.

Since ξ(ω) is a random variable for any ω ∈ Ω, we have Pr{ξ(ω) ≤ −∞} = 0 for any ω ∈ Ω. It follows that

    Φ(−∞;α) = sup_{Pr{A}≥α} inf_{ω∈A} Pr{ξ(ω) ≤ −∞} = 0.

Similarly, we have Pr{ξ(ω) ≤ +∞} = 1 for any ω ∈ Ω. Thus

    Φ(+∞;α) = sup_{Pr{A}≥α} inf_{ω∈A} Pr{ξ(ω) ≤ +∞} = 1.

Thus (8.13) is proved.

If (8.14) is not true, then there exists a number z∗ > 0 and a sequence {xi} with xi ↓ −∞ such that Φ(xi;α) > z∗ for all i. Writing

    Ai = {ω ∈ Ω | Pr{ξ(ω) ≤ xi} > z∗}

for i = 1, 2, · · ·, we have Pr{Ai} ≥ α, and A1 ⊃ A2 ⊃ · · · It follows from the probability continuity theorem that

    Pr{∩_{i=1}^{∞} Ai} = lim_{i→∞} Pr{Ai} ≥ α > 0.

Thus there exists ω∗ such that ω∗ ∈ Ai for all i. Therefore

    0 = lim_{i→∞} Pr{ξ(ω∗) ≤ xi} ≥ z∗ > 0.

A contradiction proves (8.14).

If (8.15) is not true, then there exists a number z∗ < 1 and a sequence {xi} with xi ↑ +∞ such that Φ(xi;α) < z∗ for all i. Writing

    Ai = {ω ∈ Ω | Pr{ξ(ω) ≤ xi} < z∗}

for i = 1, 2, · · ·, we have

    Pr{Ai} = 1 − Pr{ω ∈ Ω | Pr{ξ(ω) ≤ xi} ≥ z∗} > 1 − α

and A1 ⊃ A2 ⊃ · · · It follows from the probability continuity theorem that

    Pr{∩_{i=1}^{∞} Ai} = lim_{i→∞} Pr{Ai} ≥ 1 − α > 0.

Thus there exists ω∗ such that ω∗ ∈ Ai for all i. Therefore

    1 = lim_{i→∞} Pr{ξ(ω∗) ≤ xi} ≤ z∗ < 1.

A contradiction proves (8.15). The proof is complete.

Example 8.2: Let Ω = {ω1, ω2}, and Pr{ω1} = Pr{ω2} = 0.5. Assume that ξ is a birandom variable defined on the probability space (Ω,A,Pr) as

    ξ(ω) = η1 if ω = ω1, and ξ(ω) = η2 if ω = ω2,

where η1 and η2 are random variables defined as

    η1 = 0 with probability 0.4, and 1 with probability 0.6;
    η2 = 2 with probability 0.3, and 3 with probability 0.7.

If 0 < α ≤ 0.5, then the chance distribution of ξ is

    Φ(x;α) = 0 if x < 0;  0.4 if 0 ≤ x < 1;  1 if 1 ≤ x;

if 0.5 < α ≤ 1, then the chance distribution of ξ is

    Φ(x;α) = 0 if x < 2;  0.3 if 2 ≤ x < 3;  1 if 3 ≤ x.

It is clear that the chance distribution Φ(x;α) is neither left-continuous with respect to x nor right-continuous with respect to α.
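Because Ω is finite here, the supremum in the definition of chance can be computed by brute force over all subsets A with Pr{A} ≥ α. The short Python check below (an illustration with hypothetical variable names, not code from the book) reproduces both branches of the chance distribution above.

    from itertools import combinations

    # The finite probability space and the distribution functions of eta1 and eta2.
    Pr = {"w1": 0.5, "w2": 0.5}
    cdf = {
        "w1": lambda x: 0.0 if x < 0 else (0.4 if x < 1 else 1.0),   # eta1
        "w2": lambda x: 0.0 if x < 2 else (0.3 if x < 3 else 1.0),   # eta2
    }

    def chance_distribution(x, alpha):
        """Phi(x; alpha) = sup over A with Pr{A} >= alpha of inf over w in A of Pr{xi(w) <= x}."""
        best = 0.0
        omegas = list(Pr)
        for k in range(1, len(omegas) + 1):
            for A in combinations(omegas, k):
                if sum(Pr[w] for w in A) >= alpha:
                    best = max(best, min(cdf[w](x) for w in A))
        return best

    print(chance_distribution(0.5, 0.4))   # 0.4, first branch of Example 8.2
    print(chance_distribution(2.5, 0.8))   # 0.3, second branch of Example 8.2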


Example 8.3: When α = 1, the limit lim_{x→+∞} Φ(x; 1) may take any value c between 0 and 1. Let Ω = {ω1, ω2, · · ·}, and Pr{ωi} = 1/2^i for i = 1, 2, · · · The birandom variable ξ is defined on the probability space (Ω,A,Pr) as

    ξ(ωi) = 0 with probability c, and i with probability 1 − c.

Then we have

    Φ(x; 1) = 0 if −∞ ≤ x < 0;  c if 0 ≤ x < +∞;  1 if x = +∞.

It follows that lim_{x→+∞} Φ(x; 1) = c.

Theorem 8.9 Let ξ be a birandom variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) a decreasing and left-continuous function of x for any fixed α.

Proof: Like Theorem 8.8.

Definition 8.11 (Peng and Liu [116]) The chance density function φ: ℜ × (0, 1] → [0,+∞) of a birandom variable ξ is a function such that

    Φ(x;α) = ∫_{−∞}^{x} φ(y;α) dy    (8.16)

holds for all x ∈ [−∞,+∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.

8.4 Independent and Identical Distribution

This section introduces the concept of independent and identical distribution (iid) of birandom variables and discusses some mathematical properties.

Definition 8.12 (Peng and Liu [116]) The birandom variables ξ1, ξ2, · · · , ξn are called iid if and only if the random vectors

    (Pr{ξi(ω) ∈ B1}, Pr{ξi(ω) ∈ B2}, · · · , Pr{ξi(ω) ∈ Bm}), i = 1, 2, · · · , n

are iid for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m.

Theorem 8.10 (Peng and Liu [116]) Let ξ1, ξ2, · · · , ξn be iid birandom variables, and f : ℜ → ℜ a measurable function. Then the birandom variables f(ξ1), f(ξ2), · · · , f(ξn) are also iid.


Proof: Since ξ1, ξ2, · · · , ξn are iid birandom variables, the random vectors

    (Pr{ξi(ω) ∈ f^{−1}(B1)}, Pr{ξi(ω) ∈ f^{−1}(B2)}, · · · , Pr{ξi(ω) ∈ f^{−1}(Bm)}), i = 1, 2, · · · , n

are iid for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m. Equivalently, the random vectors

    (Pr{f(ξi)(ω) ∈ B1}, Pr{f(ξi)(ω) ∈ B2}, · · · , Pr{f(ξi)(ω) ∈ Bm}), i = 1, 2, · · · , n

are iid. Thus f(ξ1), f(ξ2), · · · , f(ξn) are iid birandom variables.

Theorem 8.11 (Peng and Liu [116]) If ξ1, ξ2, · · · , ξn are iid birandom variables such that E[ξ1(ω)], E[ξ2(ω)], · · · , E[ξn(ω)] are all finite for each ω, then E[ξ1(ω)], E[ξ2(ω)], · · · , E[ξn(ω)] are iid random variables.

Proof: For any ω ∈ Ω, it follows from the expected value operator that

    E[ξi(ω)] = ∫_0^{+∞} Pr{ξi(ω) ≥ r} dr − ∫_{−∞}^0 Pr{ξi(ω) ≤ r} dr
             = lim_{j→∞} lim_{k→∞} ( Σ_{l=1}^{2^k} (j/2^k) Pr{ξi(ω) ≥ lj/2^k} − Σ_{l=1}^{2^k} (j/2^k) Pr{ξi(ω) ≤ −lj/2^k} )

for i = 1, 2, · · · , n. Now we write

    η+_i(ω) = ∫_0^{∞} Pr{ξi(ω) ≥ r} dr,    η−_i(ω) = ∫_{−∞}^0 Pr{ξi(ω) ≤ r} dr,
    η+_{ij}(ω) = ∫_0^{j} Pr{ξi(ω) ≥ r} dr,    η−_{ij}(ω) = ∫_{−j}^0 Pr{ξi(ω) ≤ r} dr,
    η+_{ijk}(ω) = Σ_{l=1}^{2^k} (j/2^k) Pr{ξi(ω) ≥ lj/2^k},    η−_{ijk}(ω) = Σ_{l=1}^{2^k} (j/2^k) Pr{ξi(ω) ≤ −lj/2^k}

for any positive integers j, k and i = 1, 2, · · · , n. It follows from the monotonicity of the functions Pr{ξi ≥ r} and Pr{ξi ≤ r} that the sequences {η+_{ijk}(ω)} and {η−_{ijk}(ω)} satisfy (a) for each j and k, (η+_{ijk}(ω), η−_{ijk}(ω)), i = 1, 2, · · · , n are iid random vectors; and (b) for each i and j, η+_{ijk}(ω) ↑ η+_{ij}(ω) and η−_{ijk}(ω) ↑ η−_{ij}(ω) as k → ∞.

For any real numbers x, y, xi, yi, i = 1, 2, · · · , n, it follows from property (a) that

    Pr{η+_{ijk}(ω) ≤ xi, η−_{ijk}(ω) ≤ yi, i = 1, 2, · · · , n} = Π_{i=1}^{n} Pr{η+_{ijk}(ω) ≤ xi, η−_{ijk}(ω) ≤ yi},
    Pr{η+_{ijk}(ω) ≤ x, η−_{ijk}(ω) ≤ y} = Pr{η+_{i′jk}(ω) ≤ x, η−_{i′jk}(ω) ≤ y}, ∀i, i′.

It follows from property (b) that

    {η+_{ijk}(ω) ≤ xi, η−_{ijk}(ω) ≤ yi, i = 1, 2, · · · , n} → {η+_{ij}(ω) ≤ xi, η−_{ij}(ω) ≤ yi, i = 1, 2, · · · , n},
    {η+_{ijk}(ω) ≤ x, η−_{ijk}(ω) ≤ y} → {η+_{ij}(ω) ≤ x, η−_{ij}(ω) ≤ y}

as k → ∞. By using the probability continuity theorem, we get

    Pr{η+_{ij}(ω) ≤ xi, η−_{ij}(ω) ≤ yi, i = 1, 2, · · · , n} = Π_{i=1}^{n} Pr{η+_{ij}(ω) ≤ xi, η−_{ij}(ω) ≤ yi},
    Pr{η+_{ij}(ω) ≤ x, η−_{ij}(ω) ≤ y} = Pr{η+_{i′j}(ω) ≤ x, η−_{i′j}(ω) ≤ y}, ∀i, i′.

Thus the pairs (η+_{ij}(ω), η−_{ij}(ω)), i = 1, 2, · · · , n satisfy (c) for each j, (η+_{ij}(ω), η−_{ij}(ω)), i = 1, 2, · · · , n are iid random vectors; and (d) for each i, η+_{ij}(ω) ↑ η+_i(ω) and η−_{ij}(ω) ↑ η−_i(ω) as j → ∞.

A similar process may prove that (η+_i(ω), η−_i(ω)), i = 1, 2, · · · , n are iid random vectors. Thus E[ξ1(ω)], E[ξ2(ω)], · · · , E[ξn(ω)] are iid random variables. The theorem is proved.

8.5 Expected Value Operator

Definition 8.13 (Peng and Liu [116]) Let ξ be a birandom variable. Then the expected value of ξ is defined by

    E[ξ] = ∫_0^{+∞} Pr{ω ∈ Ω | E[ξ(ω)] ≥ r} dr − ∫_{−∞}^0 Pr{ω ∈ Ω | E[ξ(ω)] ≤ r} dr

provided that at least one of the two integrals is finite.
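As a quick illustration of Definition 8.13 (a worked example added here, not taken from the book), consider the birandom variable of Example 8.2. For each ω the inner expectation is an ordinary one, E[ξ(ω1)] = E[η1] = 0.6 and E[ξ(ω2)] = E[η2] = 2.7, and both are nonnegative, so

    E[ξ] = ∫_0^{+∞} Pr{ω ∈ Ω | E[ξ(ω)] ≥ r} dr = ∫_0^{0.6} 1 dr + ∫_{0.6}^{2.7} 0.5 dr = 0.6 + 1.05 = 1.65.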

Theorem 8.12 Assume that ξ and η are birandom variables with finite expected values. Then for any real numbers a and b, we have

    E[aξ + bη] = aE[ξ] + bE[η].    (8.17)

Proof: For any ω ∈ Ω, by the linearity of the expected value operator of random variables, we have E[aξ(ω) + bη(ω)] = aE[ξ(ω)] + bE[η(ω)]. It follows that

    E[aξ + bη] = E[aE[ξ(ω)] + bE[η(ω)]] = aE[E[ξ(ω)]] + bE[E[η(ω)]] = aE[ξ] + bE[η].

The theorem is proved.


8.6 Variance, Covariance and Moments

Definition 8.14 (Peng and Liu [116]) Let ξ be a birandom variable with finite expected value E[ξ]. The variance of ξ is defined as

    V[ξ] = E[(ξ − E[ξ])²].    (8.18)

Theorem 8.13 Assume that ξ is a birandom variable, and a and b are real numbers. Then we have

    V[ξ] = E[ξ²] − (E[ξ])²,    (8.19)
    V[aξ + b] = a²V[ξ].    (8.20)

Proof: By the definition of variance and the linearity of the expected value operator of birandom variables, we have

    V[ξ] = E[(ξ − E[ξ])²] = E[ξ² − 2ξE[ξ] + (E[ξ])²] = E[ξ²] − 2E[ξ]E[ξ] + (E[ξ])² = E[ξ²] − (E[ξ])².

Furthermore, we have

    V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].

The theorem is proved.

Theorem 8.14 Assume that ξ is a birandom variable whose expected value exists. Then we have

    V[E[ξ(ω)]] ≤ V[ξ].    (8.21)

Proof: Denote the expected value of ξ by e. It follows from Jensen's inequality that

    V[E[ξ(ω)]] = E[(E[ξ(ω)] − e)²] ≤ E[E[(ξ(ω) − e)²]] = V[ξ].

The theorem is proved.

Theorem 8.15 Let ξ be a birandom variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e}(1) = 1.

Proof: If V[ξ] = 0, then it follows from V[ξ] = E[(ξ − e)²] that

    ∫_0^{+∞} Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} dr = 0

which implies that Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} = 0 for any r > 0. Therefore, Pr{ω ∈ Ω | E[(ξ(ω) − e)²] = 0} = 1. That is, there exists a set A∗ with Pr{A∗} = 1 such that E[(ξ(ω) − e)²] = 0 for each ω ∈ A∗. It follows from Theorem 2.39 that Pr{ξ(ω) = e} = 1 for each ω ∈ A∗. Hence

    Ch{ξ = e}(1) = sup_{Pr{A}≥1} inf_{ω∈A} Pr{ξ(ω) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 8.5 that there exists a set A∗ with Pr{A∗} = 1 such that

    inf_{ω∈A∗} Pr{ξ(ω) = e} = 1.

That is, Pr{(ξ(ω) − e)² ≥ r} = 0 for each r > 0 and each ω ∈ A∗. Thus

    E[(ξ(ω) − e)²] = ∫_0^{+∞} Pr{(ξ(ω) − e)² ≥ r} dr = 0

for each ω ∈ A∗. It follows that Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} = 0 for any r > 0. Hence

    V[ξ] = ∫_0^{+∞} Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} dr = 0.

The theorem is proved.

Definition 8.15 Let ξ and η be birandom variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

    Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])].    (8.22)

Definition 8.16 For any positive integer k, the expected value E[ξ^k] is called the kth moment of the birandom variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the birandom variable ξ.

8.7 Optimistic and Pessimistic Values

Definition 8.17 (Peng and Liu [116]) Let ξ be a birandom variable, and γ, δ ∈ (0, 1]. Then

    ξsup(γ, δ) = sup{r | Ch{ξ ≥ r}(γ) ≥ δ}    (8.23)

is called the (γ, δ)-optimistic value to ξ, and

    ξinf(γ, δ) = inf{r | Ch{ξ ≤ r}(γ) ≥ δ}    (8.24)

is called the (γ, δ)-pessimistic value to ξ.
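Since Ch{ξ ≥ r}(γ) is decreasing in r and Ch{ξ ≤ r}(γ) is increasing in r, the critical values in Definition 8.17 can be located by bisection once any routine for the chance (for instance, a birandom simulation) is available. A hedged Python sketch follows; the function names, the assumed bracketing interval [lo, hi] and the sanity check are all illustrative.

    def optimistic_value(ch_geq, gamma, delta, lo=-100.0, hi=100.0, iters=60):
        """sup{r : Ch{xi >= r}(gamma) >= delta}, assuming the chance is decreasing in r
        and the answer lies in [lo, hi]; ch_geq(r, gamma) returns Ch{xi >= r}(gamma)."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if ch_geq(mid, gamma) >= delta else (lo, mid)
        return lo

    def pessimistic_value(ch_leq, gamma, delta, lo=-100.0, hi=100.0, iters=60):
        """inf{r : Ch{xi <= r}(gamma) >= delta}, assuming the chance is increasing in r."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if ch_leq(mid, gamma) >= delta else (mid, hi)
        return hi

    # Sanity check with a degenerate birandom variable xi = 3:
    print(optimistic_value(lambda r, g: 1.0 if r <= 3 else 0.0, 0.9, 0.9))   # about 3
    print(pessimistic_value(lambda r, g: 1.0 if r >= 3 else 0.0, 0.9, 0.9))  # about 3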

Theorem 8.16 Let ξ be a birandom variable and γ, δ ∈ (0, 1]. Assume that ξsup(γ, δ) is the (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. Then we have

    Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ,    Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ.    (8.25)


Proof: It follows from the definition of the (γ, δ)-pessimistic value that there exists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ ξinf(γ, δ) as i → ∞. Since Ch{ξ ≤ x}(γ) is a right-continuous function of x, the inequality Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ holds.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥ δ and xi ↑ ξsup(γ, δ) as i → ∞. Since Ch{ξ ≥ x}(γ) is a left-continuous function of x, the inequality Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ holds. The theorem is proved.

Theorem 8.17 Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of the birandom variable ξ, respectively. If γ ≤ 0.5, then we have

    ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1;    (8.26)

if γ > 0.5, then we have

    ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ)    (8.27)

where δ1 and δ2 are defined by

    δ1 = sup_{ω∈Ω} {ξ(ω)sup(1 − δ) − ξ(ω)inf(1 − δ)},
    δ2 = sup_{ω∈Ω} {ξ(ω)sup(δ) − ξ(ω)inf(δ)},

and ξ(ω)sup(δ) and ξ(ω)inf(δ) are the δ-optimistic and δ-pessimistic values of the random variable ξ(ω) for each ω, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

    Ω1 = {ω ∈ Ω | Pr{ξ(ω) > ξsup(γ, δ) + ε} ≥ δ},
    Ω2 = {ω ∈ Ω | Pr{ξ(ω) < ξinf(γ, δ) − ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Pr{Ω1} < γ and Pr{Ω2} < γ. Thus Pr{Ω1} + Pr{Ω2} < γ + γ ≤ 1. This fact implies that Ω1 ∪ Ω2 ≠ Ω. Let ω∗ ∉ Ω1 ∪ Ω2. Then we have

    Pr{ξ(ω∗) > ξsup(γ, δ) + ε} < δ,
    Pr{ξ(ω∗) < ξinf(γ, δ) − ε} < δ.

Since Pr is self-dual, we have

    Pr{ξ(ω∗) ≤ ξsup(γ, δ) + ε} > 1 − δ,
    Pr{ξ(ω∗) ≥ ξinf(γ, δ) − ε} > 1 − δ.

It follows from the definitions of ξ(ω∗)sup(1 − δ) and ξ(ω∗)inf(1 − δ) that

    ξsup(γ, δ) + ε ≥ ξ(ω∗)inf(1 − δ),
    ξinf(γ, δ) − ε ≤ ξ(ω∗)sup(1 − δ)

which implies that

    ξinf(γ, δ) − ε − (ξsup(γ, δ) + ε) ≤ ξ(ω∗)sup(1 − δ) − ξ(ω∗)inf(1 − δ) ≤ δ1.

Letting ε → 0, we obtain (8.26).

Next we prove the inequality (8.27). Assume γ > 0.5. For any given ε > 0, we define

    Ω1 = {ω ∈ Ω | Pr{ξ(ω) ≥ ξsup(γ, δ) − ε} ≥ δ},
    Ω2 = {ω ∈ Ω | Pr{ξ(ω) ≤ ξinf(γ, δ) + ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Pr{Ω1} ≥ γ and Pr{Ω2} ≥ γ. Thus Pr{Ω1} + Pr{Ω2} ≥ γ + γ > 1. This fact implies that Ω1 ∩ Ω2 ≠ ∅. Let ω∗ ∈ Ω1 ∩ Ω2. Then we have

    Pr{ξ(ω∗) ≥ ξsup(γ, δ) − ε} ≥ δ,
    Pr{ξ(ω∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(ω∗)sup(δ) and ξ(ω∗)inf(δ) that

    ξsup(γ, δ) − ε ≤ ξ(ω∗)sup(δ),
    ξinf(γ, δ) + ε ≥ ξ(ω∗)inf(δ)

which implies that

    ξsup(γ, δ) − ε − (ξinf(γ, δ) + ε) ≤ ξ(ω∗)sup(δ) − ξ(ω∗)inf(δ) ≤ δ2.

The inequality (8.27) is proved by letting ε → 0.

8.8 Convergence Concepts

This section introduces four types of convergence concepts for sequences: convergence a.s., convergence in chance, convergence in mean, and convergence in distribution.

Definition 8.18 Suppose that ξ, ξ1, ξ2, · · · are birandom variables defined on the probability space (Ω,A,Pr). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ A with Pr{A} = 1 such that {ξi(ω)} converges a.s. to ξ(ω) for every ω ∈ A.

Definition 8.19 Suppose that ξ, ξ1, ξ2, · · · are birandom variables. We say that the sequence {ξi} converges in chance to ξ if

    lim_{i→∞} lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 0    (8.28)

for every ε > 0.


Definition 8.20 Suppose that ξ, ξ1, ξ2, · · · are birandom variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

    lim_{i→∞} E[|ξi − ξ|] = 0.    (8.29)

Definition 8.21 Suppose that Φ, Φ1, Φ2, · · · are the chance distributions of the birandom variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x;α) → Φ(x;α) for all continuity points (x;α) of Φ.

8.9 Laws of Large Numbers

Theorem 8.18 (Peng and Liu [116]) Let {ξi} be a sequence of independent but not necessarily identically distributed birandom variables with common expected value e. If there exists a number a > 0 such that V[ξi] < a for all i, then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e as n → ∞.

Proof: Since {ξi} is a sequence of independent birandom variables, we know that {E[ξi(ω)]} is a sequence of independent random variables. By using Theorem 8.14, we get V[E[ξi(ω)]] ≤ V[ξi] < a for each i. It follows from the weak law of large numbers for random variables that (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e.

Theorem 8.19 Let {ξi} be a sequence of iid birandom variables with a finite expected value e. Then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e as n → ∞.

Proof: Since {ξi} is a sequence of iid birandom variables with a finite expected value e, we know that {E[ξi(ω)]} is a sequence of iid random variables with finite expected value e. It follows from the weak law of large numbers for random variables that (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e.

Theorem 8.20 Let {ξi} be independent birandom variables with a common expected value e. If

    Σ_{i=1}^{∞} V[ξi]/i² < ∞,    (8.30)

then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of independent birandom variables, we know that {E[ξi(ω)]} is a sequence of independent random variables. By using Theorem 8.14, we get V[E[ξi(ω)]] ≤ V[ξi] for each i. It follows from the strong law of large numbers for random variables that (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges a.s. to e.


Theorem 8.21 (Peng and Liu [116]) Suppose that {ξi} is a sequence of iid birandom variables with a finite expected value e. Then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of iid birandom variables, we know that {E[ξi(ω)]} is a sequence of iid random variables with a finite expected value e. It follows from the classical strong law of large numbers that

    (1/n) Σ_{i=1}^{n} E[ξi(ω)] → e, a.s.

as n → ∞. The proof is complete.

8.10 Birandom Simulations

In this section, we introduce birandom simulations for finding critical values, computing chance functions, and calculating expected values.

Example 8.4: Suppose that ξ is an n-dimensional birandom vector defined on the probability space (Ω,A,Pr), and f : ℜ^n → ℜ^m is a measurable function. For any real number α ∈ (0, 1], we design a birandom simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). That is, we should find the supremum β such that

    Pr{ω ∈ Ω | Pr{f(ξ(ω)) ≤ 0} ≥ β} ≥ α.    (8.31)

First, we sample ω1, ω2, · · · , ωN from Ω according to the probability measure Pr, and estimate βk = Pr{f(ξ(ωk)) ≤ 0} for k = 1, 2, · · · , N by stochastic simulation. Let N′ be the integer part of αN. Then the value β can be taken as the N′th largest element in the sequence {β1, β2, · · · , βN}.

Algorithm 8.1 (Birandom Simulation)
Step 1. Generate ω1, ω2, · · · , ωN from Ω according to the probability measure Pr.
Step 2. Compute the probability βk = Pr{f(ξ(ωk)) ≤ 0} for k = 1, 2, · · · , N by stochastic simulation.
Step 3. Set N′ as the integer part of αN.
Step 4. Return the N′th largest element in {β1, β2, · · · , βN}.

Now we consider the following two birandom variables

    ξ1 = U(ρ1, ρ1 + 1), with ρ1 ∼ N(0, 1),
    ξ2 = U(ρ2, ρ2 + 2), with ρ2 ∼ N(1, 2).

A run of birandom simulation with 5000 cycles shows that

Ch{ξ1 + ξ2 ≥ 0}(0.8) = 0.94.
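A minimal Python sketch of Algorithm 8.1 for this example follows. It is illustrative: the function names are mine, N(1, 2) is read as mean 1 and variance 2, and the inner probability Pr{ξ1(ω) + ξ2(ω) ≥ 0} is itself estimated by a small stochastic simulation.

    import math
    import random

    def sample_outcome():
        """Draw omega, i.e. the inner parameters: rho1 ~ N(0,1), rho2 ~ N(1,2) (variance 2 assumed)."""
        return random.gauss(0.0, 1.0), random.gauss(1.0, math.sqrt(2.0))

    def inner_probability(rho1, rho2, m=1000):
        """Pr{xi1(omega) + xi2(omega) >= 0} with xi1 ~ U(rho1, rho1+1) and xi2 ~ U(rho2, rho2+2)."""
        hits = sum(
            1 for _ in range(m)
            if random.uniform(rho1, rho1 + 1.0) + random.uniform(rho2, rho2 + 2.0) >= 0.0
        )
        return hits / m

    def birandom_alpha_chance(alpha, n=2000):
        """Estimate Ch{xi1 + xi2 >= 0}(alpha) following Algorithm 8.1 (assumes alpha*n >= 1)."""
        betas = sorted(inner_probability(*sample_outcome()) for _ in range(n))
        n_prime = int(alpha * n)            # N' = integer part of alpha * N
        return betas[-n_prime]              # the N'-th largest beta_k

    if __name__ == "__main__":
        random.seed(0)
        print(birandom_alpha_chance(0.8))   # compare with the value 0.94 reported above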


Example 8.5: Assume that ξ is an n-dimensional birandom vector on the probability space (Ω,A,Pr), and f : ℜ^n → ℜ is a measurable function. For any given confidence levels α and β, we find the maximal value f̄ such that

    Ch{f(ξ) ≥ f̄}(α) ≥ β    (8.32)

holds. That is, we should compute the maximal value f̄ such that

    Pr{ω ∈ Ω | Pr{f(ξ(ω)) ≥ f̄} ≥ β} ≥ α    (8.33)

holds. We sample ω1, ω2, · · · , ωN from Ω according to the probability measure Pr, and estimate f̄k = sup{r | Pr{f(ξ(ωk)) ≥ r} ≥ β} for k = 1, 2, · · · , N by stochastic simulation. Let N′ be the integer part of αN. Then f̄ can be taken as the N′th largest element in the sequence {f̄1, f̄2, · · · , f̄N}.

Algorithm 8.2 (Birandom Simulation)
Step 1. Generate ω1, ω2, · · · , ωN from Ω according to the probability measure Pr.
Step 2. Find f̄k = sup{r | Pr{f(ξ(ωk)) ≥ r} ≥ β} for k = 1, 2, · · · , N by stochastic simulation.
Step 3. Set N′ as the integer part of αN.
Step 4. Return the N′th largest element in {f̄1, f̄2, · · · , f̄N}.

We now find the maximal value f̄ such that Ch{ξ1² + ξ2² ≥ f̄}(0.9) ≥ 0.9, where ξ1 and ξ2 are birandom variables defined as

    ξ1 = U(ρ1, ρ1 + 1), with ρ1 ∼ N(0, 1),
    ξ2 = U(ρ2, ρ2 + 2), with ρ2 ∼ N(1, 2).

A run of birandom simulation with 5000 cycles shows that f̄ = 0.20.
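The following Python sketch illustrates Algorithm 8.2 for this example under the same reading of N(1, 2) as mean 1 and variance 2; the per-ω critical value is taken as an empirical β-quantile of the simulated values of ξ1² + ξ2², and all names and sample sizes are illustrative.

    import math
    import random

    def sample_outcome():
        """Draw omega: rho1 ~ N(0,1), rho2 ~ N(1,2) (second parameter read as variance)."""
        return random.gauss(0.0, 1.0), random.gauss(1.0, math.sqrt(2.0))

    def beta_optimistic_value(rho1, rho2, beta, m=1000):
        """sup{f : Pr{xi1^2 + xi2^2 >= f} >= beta} for a fixed omega, by stochastic simulation."""
        ys = sorted(
            random.uniform(rho1, rho1 + 1.0) ** 2 + random.uniform(rho2, rho2 + 2.0) ** 2
            for _ in range(m)
        )
        k = math.ceil(beta * m)              # at least k of the m samples must lie above the value
        return ys[m - k]

    def birandom_critical_value(alpha, beta, n=2000):
        """Maximal f with Ch{xi1^2 + xi2^2 >= f}(alpha) >= beta, following Algorithm 8.2."""
        fs = []
        for _ in range(n):
            rho1, rho2 = sample_outcome()
            fs.append(beta_optimistic_value(rho1, rho2, beta))
        fs.sort()
        n_prime = int(alpha * n)             # N' = integer part of alpha * N
        return fs[-n_prime]                  # the N'-th largest critical value

    if __name__ == "__main__":
        random.seed(0)
        print(birandom_critical_value(0.9, 0.9))   # compare with the value 0.20 reported above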

Example 8.6: Assume that ξ is an n-dimensional birandom vector on the probability space (Ω,A,Pr), and f : ℜ^n → ℜ is a measurable function. One problem is to calculate the expected value E[f(ξ)]. Note that, for each ω ∈ Ω, we may calculate the expected value E[f(ξ(ω))] by stochastic simulation. Since E[f(ξ)] is essentially the expected value of the random variable E[f(ξ(ω))], we may produce a birandom simulation as follows.

Algorithm 8.3 (Birandom Simulation)
Step 1. Set e = 0.
Step 2. Sample ω from Ω according to the probability measure Pr.
Step 3. e ← e + E[f(ξ(ω))], where E[f(ξ(ω))] may be calculated by stochastic simulation.
Step 4. Repeat the second and third steps N times.
Step 5. E[f(ξ)] = e/N.


We employ the birandom simulation to calculate the expected value of ξ1ξ2, where ξ1 and ξ2 are birandom variables defined as

    ξ1 = U(ρ1, ρ1 + 1), with ρ1 ∼ N(0, 1),
    ξ2 = U(ρ2, ρ2 + 2), with ρ2 ∼ N(1, 2).

A run of birandom simulation with 5000 cycles shows that E[ξ1ξ2] = 0.98.
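Finally, a short Python sketch of Algorithm 8.3 for this example (again reading N(1, 2) as mean 1 and variance 2; for these uniforms the inner expectation could even be written in closed form as (ρ1 + 0.5)(ρ2 + 1), but a stochastic simulation is used to mirror the algorithm). All names and sample sizes are illustrative.

    import math
    import random

    def sample_outcome():
        """Draw omega: rho1 ~ N(0,1), rho2 ~ N(1,2) (second parameter read as variance)."""
        return random.gauss(0.0, 1.0), random.gauss(1.0, math.sqrt(2.0))

    def inner_expectation(rho1, rho2, m=1000):
        """E[xi1(omega) * xi2(omega)] by stochastic simulation; equals (rho1+0.5)*(rho2+1) exactly."""
        total = sum(
            random.uniform(rho1, rho1 + 1.0) * random.uniform(rho2, rho2 + 2.0)
            for _ in range(m)
        )
        return total / m

    def birandom_expected_value(n=2000):
        """Estimate E[xi1 * xi2] following Algorithm 8.3: average the inner expectations over omega."""
        e = 0.0
        for _ in range(n):
            rho1, rho2 = sample_outcome()
            e += inner_expectation(rho1, rho2)
        return e / n

    if __name__ == "__main__":
        random.seed(0)
        print(birandom_expected_value())   # compare with the value 0.98 reported above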


Chapter 9

Rough Random Theory

A rough random variable was presented by Liu [75] as a random variable defined on the universal set of rough variables, or a random variable taking “rough variable” values.

The emphasis in this chapter is mainly on rough random variables, rough random arithmetic, chance measures, chance distribution, independent and identical distribution, expected value operator, variance, critical values, convergence concepts, laws of large numbers, and rough random simulation.

9.1 Rough Random Variables

Definition 9.1 (Liu [75]) A rough random variable is a function ξ from a probability space (Ω,A,Pr) to the set of rough variables such that Tr{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ.

Theorem 9.1 Assume that ξ is a rough random variable, and B is a Borel set of ℜ. Then the trust Tr{ξ(ω) ∈ B} is a random variable on (Ω,A,Pr).

Proof: Since the trust Tr{ξ(ω) ∈ B} is a measurable function of ω from the probability space (Ω,A,Pr) to ℜ, it is a random variable.

Theorem 9.2 Let ξ be a rough random variable. If the expected value E[ξ(ω)] is finite for each ω, then E[ξ(ω)] is a random variable.

Proof: In order to prove that the expected value E[ξ(ω)] is a random variable, we only need to show that E[ξ(ω)] is a measurable function of ω. It is obvious that

    E[ξ(ω)] = ∫_0^{+∞} Tr{ξ(ω) ≥ r} dr − ∫_{−∞}^0 Tr{ξ(ω) ≤ r} dr
            = lim_{j→∞} lim_{k→∞} ( Σ_{l=1}^{k} (j/k) Tr{ξ(ω) ≥ lj/k} − Σ_{l=1}^{k} (j/k) Tr{ξ(ω) ≤ −lj/k} ).

Since Tr{ξ(ω) ≥ lj/k} and Tr{ξ(ω) ≤ −lj/k} are all measurable functions for any integers j, k and l, the expected value E[ξ(ω)] is a measurable function of ω. The proof is complete.

Definition 9.2 An n-dimensional rough random vector is a function ξ from a probability space (Ω,A,Pr) to the set of n-dimensional rough vectors such that Tr{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ^n.

Theorem 9.3 If (ξ1, ξ2, · · · , ξn) is a rough random vector, then ξ1, ξ2, · · · , ξn are rough random variables. Conversely, if ξ1, ξ2, · · · , ξn are rough random variables, and for each ω ∈ Ω, the rough variables ξ1(ω), ξ2(ω), · · · , ξn(ω) are independent, then (ξ1, ξ2, · · · , ξn) is a rough random vector.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a rough random vector on the probability space (Ω,A,Pr). For any Borel set B of ℜ, the set B × ℜ^{n−1} is a Borel set of ℜ^n. Note that

    Tr{ξ1(ω) ∈ B} = Tr{ξ1(ω) ∈ B, ξ2(ω) ∈ ℜ, · · · , ξn(ω) ∈ ℜ} = Tr{ξ(ω) ∈ B × ℜ^{n−1}}

is a measurable function of ω. Hence ξ1 is a rough random variable. A similar process may prove that ξ2, ξ3, · · · , ξn are rough random variables.

Conversely, suppose that ξ1, ξ2, · · · , ξn are rough random variables on the probability space (Ω,A,Pr). We write ξ = (ξ1, ξ2, · · · , ξn) and define the class

    C = {C ⊂ ℜ^n | Tr{ξ(ω) ∈ C} is a measurable function of ω}.

The vector ξ is a rough random vector if we can prove that C contains all Borel sets of ℜ^n. Let C1, C2, · · · ∈ C, and Ci ↑ C or Ci ↓ C. It follows from the trust continuity theorem that Tr{ξ(ω) ∈ Ci} → Tr{ξ(ω) ∈ C} as i → ∞. Thus Tr{ξ(ω) ∈ C} is a measurable function of ω, and C ∈ C. Hence C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n since

    Tr{ξ(ω) ∈ (−∞, a]} = Π_{i=1}^{n} Tr{ξi(ω) ∈ (−∞, ai]};
    Tr{ξ(ω) ∈ (a, b]} = Π_{i=1}^{n} Tr{ξi(ω) ∈ (ai, bi]};
    Tr{ξ(ω) ∈ (b,+∞)} = Π_{i=1}^{n} Tr{ξi(ω) ∈ (bi,+∞)};
    Tr{ξ(ω) ∈ ℜ^n} = 1.

Let F be the class of all finite unions of disjoint intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n. Note that for any disjoint sets C1, C2, · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm, we have

    Tr{ξ(ω) ∈ C} = Σ_{i=1}^{m} Tr{ξ(ω) ∈ Ci}.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the class F is an algebra. Since the smallest σ-algebra containing F is just the Borel algebra of ℜ^n, the monotone class theorem implies that C contains all Borel sets of ℜ^n. The theorem is proved.

Theorem 9.4 Let ξ be an n-dimensional rough random vector, and f : ℜ^n → ℜ a measurable function. Then f(ξ) is a rough random variable.

Proof: It is clear that f^{−1}(B) is a Borel set of ℜ^n for any Borel set B of ℜ. Thus, for each ω ∈ Ω, we have

    Tr{f(ξ(ω)) ∈ B} = Tr{ξ(ω) ∈ f^{−1}(B)}

which is a measurable function of ω. That is, f(ξ) is a rough random variable. The theorem is proved.

Definition 9.3 (Liu [75], Rough Random Arithmetic on Single Space) Let f : ℜ^n → ℜ be a measurable function, and ξ1, ξ2, · · · , ξn rough random variables defined on the probability space (Ω,A,Pr). Then ξ = f(ξ1, ξ2, · · · , ξn) is a rough random variable defined by

    ξ(ω) = f(ξ1(ω), ξ2(ω), · · · , ξn(ω)), ∀ω ∈ Ω.    (9.1)

Definition 9.4 (Liu [75], Rough Random Arithmetic on Different Spaces) Let f : ℜ^n → ℜ be a measurable function, and ξi rough random variables defined on (Ωi,Ai,Pri), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a rough random variable on the product probability space (Ω,A,Pr), defined by

    ξ(ω1, ω2, · · · , ωn) = f(ξ1(ω1), ξ2(ω2), · · · , ξn(ωn))    (9.2)

for all (ω1, ω2, · · · , ωn) ∈ Ω.

Random Set

Roughly speaking, a random set is a measurable function from a probability space to the class of sets. Random sets have been studied for a long period, for example, by Robbins [123], Matheron [93], and Molchanov [96]. Since a set can be regarded as a special type of rough set, a random set is a special type of rough random variable.


9.2 Chance Measure

Definition 9.5 (Liu [75]) Let ξ be a rough random variable, and B a Borel set of ℜ. Then the chance of the rough random event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

    Ch{ξ ∈ B}(α) = sup_{Pr{A}≥α} inf_{ω∈A} Tr{ξ(ω) ∈ B}.    (9.3)

Theorem 9.5 Let ξ be a rough random variable, and B a Borel set of ℜ. Write β∗ = Ch{ξ ∈ B}(α∗). Then we have

    Pr{ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ β∗} ≥ α∗.    (9.4)

Proof: It follows from the definition of chance that β∗ is just the supremum of β satisfying

    Pr{ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ β} ≥ α∗.

Thus there exists an increasing sequence {βi} such that

    Pr{ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ βi} ≥ α∗

and βi ↑ β∗ as i → ∞. It is also easy to verify that

    {ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ βi} ↓ {ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ β∗}

as i → ∞. It follows from the probability continuity theorem that

    Pr{ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ β∗} = lim_{i→∞} Pr{ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ βi} ≥ α∗.

The proof is complete.

Theorem 9.6 Let ξ be a rough random variable, and {Bi} a sequence of Borel sets of ℜ. If Bi ↓ B, then

    lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α).    (9.5)

Proof: Write

    β = Ch{ξ ∈ B}(α), βi = Ch{ξ ∈ Bi}(α), i = 1, 2, · · ·

Since Bi ↓ B, it is clear that β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

    ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξ ∈ Bi}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 9.5 that

    Pr{ω ∈ Ω | Tr{ξ(ω) ∈ Bi} ≥ ρ} ≥ Pr{ω ∈ Ω | Tr{ξ(ω) ∈ Bi} ≥ βi} ≥ α.

It follows from the trust continuity theorem that

    {ω ∈ Ω | Tr{ξ(ω) ∈ Bi} ≥ ρ} ↓ {ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ ρ}.

It follows from the probability continuity theorem that

    Pr{ω ∈ Ω | Tr{ξ(ω) ∈ B} ≥ ρ} = lim_{i→∞} Pr{ω ∈ Ω | Tr{ξ(ω) ∈ Bi} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (9.5) holds.

Theorem 9.7 (a) Assume that ξ, ξ1, ξ2, · · · are rough random variables such that ξi(ω) ↑ ξ(ω) for each ω ∈ Ω. Then we have

    lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α).    (9.6)

(b) Assume that ξ, ξ1, ξ2, · · · are rough random variables such that ξi(ω) ↓ ξ(ω) for each ω ∈ Ω. Then we have

    lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α).    (9.7)

Proof: (a) Write

    β = Ch{ξ ≤ r}(α), βi = Ch{ξi ≤ r}(α), i = 1, 2, · · ·

Since ξi(ω) ↑ ξ(ω) for each ω ∈ Ω, it is clear that {ξi(ω) ≤ r} ↓ {ξ(ω) ≤ r} for each ω ∈ Ω and β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

    ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξi ≤ r}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 9.5 that

    Pr{ω ∈ Ω | Tr{ξi(ω) ≤ r} ≥ ρ} ≥ Pr{ω ∈ Ω | Tr{ξi(ω) ≤ r} ≥ βi} ≥ α.

Since {ξi(ω) ≤ r} ↓ {ξ(ω) ≤ r} for each ω ∈ Ω, it follows from the trust continuity theorem that

    {ω ∈ Ω | Tr{ξi(ω) ≤ r} ≥ ρ} ↓ {ω ∈ Ω | Tr{ξ(ω) ≤ r} ≥ ρ}.

By using the probability continuity theorem, we get

    Pr{ω ∈ Ω | Tr{ξ(ω) ≤ r} ≥ ρ} = lim_{i→∞} Pr{ω ∈ Ω | Tr{ξi(ω) ≤ r} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (9.6) holds. The part (b) may be proved similarly.


Variety of Chance Measure

Definition 9.6 Let ξ be a rough random variable, and B a Borel set of ℜ. For any real number α ∈ (0, 1], the α-chance of the rough random event ξ ∈ B is defined as the value of the chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 9.7 Let ξ be a rough random variable, and B a Borel set of ℜ. Then the equilibrium chance of the rough random event ξ ∈ B is defined as

    Che{ξ ∈ B} = sup_{0<α≤1} {α | Ch{ξ ∈ B}(α) ≥ α}    (9.8)

where Ch denotes the chance measure.

Definition 9.8 Let ξ be a rough random variable, and B a Borel set of ℜ. Then the average chance of the rough random event ξ ∈ B is defined as

    Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα    (9.9)

where Ch denotes the chance measure.

Definition 9.9 A rough random variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

    Ch{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm}(α) ≡ 0;    (9.10)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

    Ch{ξ ≠ x1, ξ ≠ x2, · · ·}(α) ≡ 0.    (9.11)

9.3 Chance Distribution

Definition 9.10 The chance distribution Φ: [−∞,+∞] × (0, 1] → [0, 1] of a rough random variable ξ is defined by

    Φ(x;α) = Ch{ξ ≤ x}(α).    (9.12)

Theorem 9.8 The chance distribution Φ(x;α) of a rough random variable is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) an increasing and right-continuous function of x for any fixed α, and

    Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α;    (9.13)
    lim_{x→−∞} Φ(x;α) = 0, ∀α;    (9.14)
    lim_{x→+∞} Φ(x;α) = 1 if α < 1.    (9.15)


Proof: Let Φ(x;α) be the chance distribution of the rough random variable ξ defined on the probability space (Ω,A,Pr). Part (a): For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

    Φ(x;α1) = sup_{Pr{A}≥α1} inf_{ω∈A} Tr{ξ(ω) ≤ x} ≥ sup_{Pr{A}≥α2} inf_{ω∈A} Tr{ξ(ω) ≤ x} = Φ(x;α2).

Thus Φ(x;α) is a decreasing function of α. We next prove that Φ(x;α) is a left-continuous function of α. Let α ∈ (0, 1] be given and {αi} a sequence of numbers with αi ↑ α. Since Φ(x;α) is a decreasing function of α, the limit lim_{i→∞} Φ(x;αi) exists and is not less than Φ(x;α). If the limit is equal to Φ(x;α), then the left-continuity is proved. Otherwise, we have

    lim_{i→∞} Φ(x;αi) > Φ(x;α).

Let z∗ = (lim_{i→∞} Φ(x;αi) + Φ(x;α))/2. It is clear that

    Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Pr{Ai} ≥ αi such that

    inf_{ω∈Ai} Tr{ξ(ω) ≤ x} > z∗

for each i. Now we define

    A∗ = ∪_{i=1}^{∞} Ai.

It is clear that Pr{A∗} ≥ Pr{Ai} ≥ αi. Letting i → ∞, we get Pr{A∗} ≥ α. Thus

    Φ(x;α) ≥ inf_{ω∈A∗} Tr{ξ(ω) ≤ x} ≥ z∗.

A contradiction proves part (a).

We next prove part (b). For any x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞, it is clear that

    Φ(x1;α) = sup_{Pr{A}≥α} inf_{ω∈A} Tr{ξ(ω) ≤ x1} ≤ sup_{Pr{A}≥α} inf_{ω∈A} Tr{ξ(ω) ≤ x2} = Φ(x2;α).

Therefore, Φ(x;α) is an increasing function of x. We next prove that Φ(x;α) is a right-continuous function of x. Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 9.6 that

    lim_{y↓x} Φ(y;α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

Thus Φ(x;α) is a right-continuous function of x.

Since ξ(ω) is a rough variable for any ω ∈ Ω, we have Tr{ξ(ω) ≤ −∞} = 0 for any ω ∈ Ω. It follows that

    Φ(−∞;α) = sup_{Pr{A}≥α} inf_{ω∈A} Tr{ξ(ω) ≤ −∞} = 0.

Similarly, we have Tr{ξ(ω) ≤ +∞} = 1 for any ω ∈ Ω. Thus

    Φ(+∞;α) = sup_{Pr{A}≥α} inf_{ω∈A} Tr{ξ(ω) ≤ +∞} = 1.

Thus (9.13) is proved.

If (9.14) is not true, then there exists a number z∗ > 0 and a sequence {xi} with xi ↓ −∞ such that Φ(xi;α) > z∗ for all i. Writing

    Ai = {ω ∈ Ω | Tr{ξ(ω) ≤ xi} > z∗}

for i = 1, 2, · · ·, we have Pr{Ai} ≥ α, and A1 ⊃ A2 ⊃ · · · It follows from the probability continuity theorem that

    Pr{∩_{i=1}^{∞} Ai} = lim_{i→∞} Pr{Ai} ≥ α > 0.

Thus there exists ω∗ such that ω∗ ∈ Ai for all i. Therefore

    0 = lim_{i→∞} Tr{ξ(ω∗) ≤ xi} ≥ z∗ > 0.

A contradiction proves (9.14).

If (9.15) is not true, then there exists a number z∗ < 1 and a sequence {xi} with xi ↑ +∞ such that Φ(xi;α) < z∗ for all i. Writing

    Ai = {ω ∈ Ω | Tr{ξ(ω) ≤ xi} < z∗}

for i = 1, 2, · · ·, we have

    Pr{Ai} = 1 − Pr{ω ∈ Ω | Tr{ξ(ω) ≤ xi} ≥ z∗} > 1 − α

and A1 ⊃ A2 ⊃ · · · It follows from the probability continuity theorem that

    Pr{∩_{i=1}^{∞} Ai} = lim_{i→∞} Pr{Ai} ≥ 1 − α > 0.

Thus there exists ω∗ such that ω∗ ∈ Ai for all i. Therefore

    1 = lim_{i→∞} Tr{ξ(ω∗) ≤ xi} ≤ z∗ < 1.

A contradiction proves (9.15). The proof is complete.


Theorem 9.9 Let ξ be a rough random variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) a decreasing and left-continuous function of x for any fixed α.

Proof: Like Theorem 9.8.

Definition 9.11 The chance density function φ: ℜ × (0, 1] → [0,+∞) of a rough random variable ξ is a function such that

    Φ(x;α) = ∫_{−∞}^{x} φ(y;α) dy    (9.16)

holds for all x ∈ [−∞,+∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.

9.4 Independent and Identical Distribution

This section introduces the concept of independent and identical distribution (iid) of rough random variables and discusses some mathematical properties.

Definition 9.12 The rough random variables ξ1, ξ2, · · · , ξn are called iid if and only if the random vectors

    (Tr{ξi(ω) ∈ B1}, Tr{ξi(ω) ∈ B2}, · · · , Tr{ξi(ω) ∈ Bm}), i = 1, 2, · · · , n

are iid for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m.

Theorem 9.10 Let ξ1, ξ2, · · · , ξn be iid rough random variables, and f : ℜ → ℜ a measurable function. Then f(ξ1), f(ξ2), · · · , f(ξn) are also iid rough random variables.

Proof: Since ξ1, ξ2, · · · , ξn are iid rough random variables, the random vectors

    (Tr{ξi(ω) ∈ f^{−1}(B1)}, Tr{ξi(ω) ∈ f^{−1}(B2)}, · · · , Tr{ξi(ω) ∈ f^{−1}(Bm)}), i = 1, 2, · · · , n

are iid for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m. Equivalently, the random vectors

    (Tr{f(ξi)(ω) ∈ B1}, Tr{f(ξi)(ω) ∈ B2}, · · · , Tr{f(ξi)(ω) ∈ Bm}), i = 1, 2, · · · , n

are iid. Thus f(ξ1), f(ξ2), · · · , f(ξn) are iid rough random variables.

Theorem 9.11 Suppose that ξ1, ξ2, · · · , ξn are iid rough random variables such that E[ξ1(ω)], E[ξ2(ω)], · · · , E[ξn(ω)] are all finite for each ω. Then E[ξ1(ω)], E[ξ2(ω)], · · · , E[ξn(ω)] are iid random variables.


Proof: For any ω ∈ Ω, it follows from the expected value operator that

    E[ξi(ω)] = ∫_0^{+∞} Tr{ξi(ω) ≥ r} dr − ∫_{−∞}^0 Tr{ξi(ω) ≤ r} dr
             = lim_{j→∞} lim_{k→∞} ( Σ_{l=1}^{2^k} (j/2^k) Tr{ξi(ω) ≥ lj/2^k} − Σ_{l=1}^{2^k} (j/2^k) Tr{ξi(ω) ≤ −lj/2^k} )

for i = 1, 2, · · · , n. Now we write

    η+_i(ω) = ∫_0^{∞} Tr{ξi(ω) ≥ r} dr,    η−_i(ω) = ∫_{−∞}^0 Tr{ξi(ω) ≤ r} dr,
    η+_{ij}(ω) = ∫_0^{j} Tr{ξi(ω) ≥ r} dr,    η−_{ij}(ω) = ∫_{−j}^0 Tr{ξi(ω) ≤ r} dr,
    η+_{ijk}(ω) = Σ_{l=1}^{2^k} (j/2^k) Tr{ξi(ω) ≥ lj/2^k},    η−_{ijk}(ω) = Σ_{l=1}^{2^k} (j/2^k) Tr{ξi(ω) ≤ −lj/2^k}

for any positive integers j, k and i = 1, 2, · · · , n. It follows from the monotonicity of the functions Tr{ξi ≥ r} and Tr{ξi ≤ r} that the sequences {η+_{ijk}(ω)} and {η−_{ijk}(ω)} satisfy (a) for each j and k, (η+_{ijk}(ω), η−_{ijk}(ω)), i = 1, 2, · · · , n are iid random vectors; and (b) for each i and j, η+_{ijk}(ω) ↑ η+_{ij}(ω) and η−_{ijk}(ω) ↑ η−_{ij}(ω) as k → ∞.

For any real numbers x, y, xi, yi, i = 1, 2, · · · , n, it follows from property (a) that

    Pr{η+_{ijk}(ω) ≤ xi, η−_{ijk}(ω) ≤ yi, i = 1, 2, · · · , n} = Π_{i=1}^{n} Pr{η+_{ijk}(ω) ≤ xi, η−_{ijk}(ω) ≤ yi},
    Pr{η+_{ijk}(ω) ≤ x, η−_{ijk}(ω) ≤ y} = Pr{η+_{i′jk}(ω) ≤ x, η−_{i′jk}(ω) ≤ y}, ∀i, i′.

It follows from property (b) that

    {η+_{ijk}(ω) ≤ xi, η−_{ijk}(ω) ≤ yi, i = 1, 2, · · · , n} → {η+_{ij}(ω) ≤ xi, η−_{ij}(ω) ≤ yi, i = 1, 2, · · · , n},
    {η+_{ijk}(ω) ≤ x, η−_{ijk}(ω) ≤ y} → {η+_{ij}(ω) ≤ x, η−_{ij}(ω) ≤ y}

as k → ∞. By using the probability continuity theorem, we get

    Pr{η+_{ij}(ω) ≤ xi, η−_{ij}(ω) ≤ yi, i = 1, 2, · · · , n} = Π_{i=1}^{n} Pr{η+_{ij}(ω) ≤ xi, η−_{ij}(ω) ≤ yi},
    Pr{η+_{ij}(ω) ≤ x, η−_{ij}(ω) ≤ y} = Pr{η+_{i′j}(ω) ≤ x, η−_{i′j}(ω) ≤ y}, ∀i, i′.

Thus the pairs (η+_{ij}(ω), η−_{ij}(ω)), i = 1, 2, · · · , n satisfy (c) for each j, (η+_{ij}(ω), η−_{ij}(ω)), i = 1, 2, · · · , n are iid random vectors; and (d) for each i, η+_{ij}(ω) ↑ η+_i(ω) and η−_{ij}(ω) ↑ η−_i(ω) as j → ∞.

A similar process may prove that (η+_i(ω), η−_i(ω)), i = 1, 2, · · · , n are iid random vectors. Thus E[ξ1(ω)], E[ξ2(ω)], · · · , E[ξn(ω)] are iid random variables. The theorem is proved.

9.5 Expected Value Operator

Definition 9.13 (Liu [75]) Let ξ be a rough random variable. Then its expected value is defined by

    E[ξ] = ∫_0^{+∞} Pr{ω ∈ Ω | E[ξ(ω)] ≥ r} dr − ∫_{−∞}^0 Pr{ω ∈ Ω | E[ξ(ω)] ≤ r} dr

provided that at least one of the two integrals is finite.

Theorem 9.12 Assume that ξ and η are rough random variables with finite expected values. Then for any real numbers a and b, we have

    E[aξ + bη] = aE[ξ] + bE[η].    (9.17)

Proof: For any ω ∈ Ω, by the linearity of the expected value operator of rough variables, we have E[aξ(ω) + bη(ω)] = aE[ξ(ω)] + bE[η(ω)]. It follows from the linearity of the expected value operator of random variables that

    E[aξ + bη] = E[aE[ξ(ω)] + bE[η(ω)]] = aE[E[ξ(ω)]] + bE[E[η(ω)]] = aE[ξ] + bE[η].

The theorem is proved.

9.6 Variance, Covariance and Moments

Definition 9.14 (Liu [75]) Let ξ be a rough random variable with finite expected value E[ξ]. The variance of ξ is defined as V[ξ] = E[(ξ − E[ξ])²].

Theorem 9.13 If ξ is a rough random variable with finite expected value, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

    V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].

Theorem 9.14 Assume that ξ is a rough random variable whose expected value exists. Then we have

    V[E[ξ(ω)]] ≤ V[ξ].    (9.18)

Proof: Denote the expected value of ξ by e. It follows from Jensen's inequality that

    V[E[ξ(ω)]] = E[(E[ξ(ω)] − e)²] ≤ E[E[(ξ(ω) − e)²]] = V[ξ].

The theorem is proved.

Theorem 9.15 Let ξ be a rough random variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e}(1) = 1.

Proof: If V[ξ] = 0, then it follows from V[ξ] = E[(ξ − e)²] that

    ∫_0^{+∞} Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} dr = 0

which implies that Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} = 0 for any r > 0. Therefore, Pr{ω ∈ Ω | E[(ξ(ω) − e)²] = 0} = 1. That is, there exists a set A∗ with Pr{A∗} = 1 such that E[(ξ(ω) − e)²] = 0 for each ω ∈ A∗. It follows from Theorem 4.41 that Tr{ξ(ω) = e} = 1 for each ω ∈ A∗. Hence

    Ch{ξ = e}(1) = sup_{Pr{A}≥1} inf_{ω∈A} Tr{ξ(ω) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 9.5 that there exists a set A∗ with Pr{A∗} = 1 such that

    inf_{ω∈A∗} Tr{ξ(ω) = e} = 1.

That is, Tr{(ξ(ω) − e)² ≥ r} = 0 for each r > 0 and each ω ∈ A∗. Thus

    E[(ξ(ω) − e)²] = ∫_0^{+∞} Tr{(ξ(ω) − e)² ≥ r} dr = 0

for each ω ∈ A∗. It follows that Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} = 0 for any r > 0. Hence

    V[ξ] = ∫_0^{+∞} Pr{ω ∈ Ω | E[(ξ(ω) − e)²] ≥ r} dr = 0.

The theorem is proved.

Definition 9.15 Let ξ and η be rough random variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

    Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])].    (9.19)

Definition 9.16 For any positive integer k, the expected value E[ξ^k] is called the kth moment of the rough random variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the rough random variable ξ.


9.7 Optimistic and Pessimistic Values

Definition 9.17 (Liu [75]) Let ξ be a rough random variable, and γ, δ ∈ (0, 1]. Then

ξsup(γ, δ) = sup{r | Ch{ξ ≥ r}(γ) ≥ δ} (9.20)

is called the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r | Ch{ξ ≤ r}(γ) ≥ δ} (9.21)

is called the (γ, δ)-pessimistic value to ξ.
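Because Ch{ξ ≥ r}(γ) is decreasing in r and Ch{ξ ≤ r}(γ) is increasing in r, both critical values can be located numerically by bisection once the chance function is available. The sketch below is illustrative only and is not code from the book; it assumes a user-supplied callable chance_geq(r, gamma) returning Ch{ξ ≥ r}(γ), for instance obtained by the rough random simulation of Section 9.10.

```python
def optimistic_value(chance_geq, gamma, delta, lo=-1e6, hi=1e6, tol=1e-8):
    """(gamma, delta)-optimistic value: sup{ r | Ch{xi >= r}(gamma) >= delta }.

    Assumes chance_geq(r, gamma) returns Ch{xi >= r}(gamma) and is
    decreasing in r, so the supremum can be bracketed by bisection.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if chance_geq(mid, gamma) >= delta:
            lo = mid            # mid still satisfies the chance constraint
        else:
            hi = mid            # mid already lies beyond the optimistic value
    return lo
```

The (γ, δ)-pessimistic value can be computed the same way with the inequality reversed.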

Theorem 9.16 Let ξ be a rough random variable and γ, δ ∈ (0, 1]. Assume that ξsup(γ, δ) is the (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. Then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (9.22)

Proof: It follows from the definition of the (γ, δ)-pessimistic value that there exists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ ξinf(γ, δ) as i → ∞. Since Ch{ξ ≤ x}(γ) is a right-continuous function of x, the inequality Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ holds.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥ δ and xi ↑ ξsup(γ, δ) as i → ∞. Since Ch{ξ ≥ x}(γ) is a left-continuous function of x, the inequality Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ holds. The theorem is proved.

Theorem 9.17 Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of the rough random variable ξ, respectively. If γ ≤ 0.5, then we have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (9.23)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (9.24)

where δ1 and δ2 are defined by

δ1 = sup_{ω∈Ω} {ξ(ω)sup(1 − δ) − ξ(ω)inf(1 − δ)},
δ2 = sup_{ω∈Ω} {ξ(ω)sup(δ) − ξ(ω)inf(δ)},

and ξ(ω)sup(δ) and ξ(ω)inf(δ) are the δ-optimistic and δ-pessimistic values of the rough variable ξ(ω) for each ω, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Ω1 = {ω ∈ Ω | Tr{ξ(ω) > ξsup(γ, δ) + ε} ≥ δ},


Ω2 = {ω ∈ Ω | Tr{ξ(ω) < ξinf(γ, δ) − ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Pr{Ω1} < γ and Pr{Ω2} < γ. Thus Pr{Ω1} + Pr{Ω2} < γ + γ ≤ 1. This fact implies that Ω1 ∪ Ω2 ≠ Ω. Let ω∗ ∉ Ω1 ∪ Ω2. Then we have

Tr {ξ(ω∗) > ξsup(γ, δ) + ε} < δ,

Tr {ξ(ω∗) < ξinf(γ, δ)− ε} < δ.

Since Tr is self-dual, we have

Tr {ξ(ω∗) ≤ ξsup(γ, δ) + ε} > 1− δ,

Tr {ξ(ω∗) ≥ ξinf(γ, δ)− ε} > 1− δ.

It follows from the definitions of ξ(ω∗)sup(1− δ) and ξ(ω∗)inf(1− δ) that

ξsup(γ, δ) + ε ≥ ξ(ω∗)inf(1− δ),

ξinf(γ, δ)− ε ≤ ξ(ω∗)sup(1− δ)

which implies that

ξinf(γ, δ)− ε− (ξsup(γ, δ) + ε) ≤ ξ(ω∗)sup(1− δ)− ξ(ω∗)inf(1− δ) ≤ δ1.

Letting ε → 0, we obtain (9.23).

Next we prove the inequality (9.24). Assume γ > 0.5. For any given ε > 0, we define

Ω1 = {ω ∈ Ω | Tr{ξ(ω) ≥ ξsup(γ, δ) − ε} ≥ δ},

Ω2 = {ω ∈ Ω | Tr{ξ(ω) ≤ ξinf(γ, δ) + ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Pr{Ω1} ≥ γ and Pr{Ω2} ≥ γ. Thus Pr{Ω1} + Pr{Ω2} ≥ γ + γ > 1. This fact implies that Ω1 ∩ Ω2 ≠ ∅. Let ω∗ ∈ Ω1 ∩ Ω2. Then we have

Tr {ξ(ω∗) ≥ ξsup(γ, δ)− ε} ≥ δ,

Tr {ξ(ω∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(ω∗)sup(δ) and ξ(ω∗)inf(δ) that

ξsup(γ, δ)− ε ≤ ξ(ω∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(ω∗)inf(δ)

which implies that

ξsup(γ, δ)− ε− (ξinf(γ, δ) + ε) ≤ ξ(ω∗)sup(δ)− ξ(ω∗)inf(δ) ≤ δ2.

The inequality (9.24) is proved by letting ε→ 0.


9.8 Convergence Concepts

This section introduces four convergence concepts for sequences of rough random variables: convergence a.s., convergence in chance, convergence in mean, and convergence in distribution.

Definition 9.18 Suppose that ξ, ξ1, ξ2, · · · are rough random variables defined on the probability space (Ω,A,Pr). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ A with Pr{A} = 1 such that {ξi(ω)} converges a.s. to ξ(ω) for every ω ∈ A.

Definition 9.19 Suppose that ξ, ξ1, ξ2, · · · are rough random variables. We say that the sequence {ξi} converges in chance to ξ if

lim_{i→∞} lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 0 (9.25)

for every ε > 0.

Definition 9.20 Suppose that ξ, ξ1, ξ2, · · · are rough random variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0. (9.26)

Definition 9.21 Suppose that Φ, Φ1, Φ2, · · · are the chance distributions of the rough random variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x;α) → Φ(x;α) for all continuity points (x;α) of Φ.

9.9 Laws of Large Numbers

This section introduces four laws of large numbers for rough random variables.

Theorem 9.18 Let {ξi} be a sequence of independent but not necessarily identically distributed rough random variables with a common expected value e. If there exists a number a > 0 such that V[ξi] < a for all i, then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e as n → ∞.

Proof: Since {ξi} is a sequence of independent rough random variables, we know that {E[ξi(ω)]} is a sequence of independent random variables. By using Theorem 9.14, we get V[E[ξi(ω)]] ≤ V[ξi] < a for each i. It follows from the weak law of large numbers of random variables that (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e.

Theorem 9.19 Let {ξi} be a sequence of iid rough random variables with a finite expected value e. Then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e as n → ∞.


Proof: Since {ξi} is a sequence of iid rough random variables with a finite expected value e, we know that {E[ξi(ω)]} is a sequence of iid random variables with finite expected value e. It follows from the weak law of large numbers of random variables that (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges in probability to e.

Theorem 9.20 Let {ξi} be a sequence of independent rough random variables with a common expected value e. If

∑_{i=1}^{∞} V[ξi]/i² < ∞, (9.27)

then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of independent rough random variables, we know that {E[ξi(ω)]} is a sequence of independent random variables. By using Theorem 9.14, we get V[E[ξi(ω)]] ≤ V[ξi] for each i. It follows from the strong law of large numbers of random variables that (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges a.s. to e.

Theorem 9.21 Suppose that {ξi} is a sequence of iid rough random variables with a finite expected value e. Then (E[ξ1(ω)] + E[ξ2(ω)] + · · · + E[ξn(ω)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of iid rough random variables, we know that {E[ξi(ω)]} is a sequence of iid random variables. It follows from the classical strong law of large numbers that

(1/n) ∑_{i=1}^{n} E[ξi(ω)] → e, a.s.

as n → ∞. The proof is complete.

9.10 Rough Random Simulations

In this section, we introduce rough random simulations for finding critical values, computing chance functions, and calculating expected values.

Example 9.1: Suppose that ξ is an n-dimensional rough random vector defined on the probability space (Ω,A,Pr), and f : ℜ^n → ℜ^m is a measurable function. For any real number α ∈ (0, 1], we design a rough random simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). That is, we should find the supremum β such that

Pr{ω ∈ Ω | Tr{f(ξ(ω)) ≤ 0} ≥ β} ≥ α. (9.28)


First, we sample ω1, ω2, · · · , ωN from Ω according to the probability measure Pr, and estimate βk = Tr{f(ξ(ωk)) ≤ 0} for k = 1, 2, · · · , N by rough simulation. Let N′ be the integer part of αN. Then the value β can be taken as the N′th largest element in the sequence {β1, β2, · · · , βN}.

Algorithm 9.1 (Rough Random Simulation)
Step 1. Generate ω1, ω2, · · · , ωN from Ω according to the probability measure Pr.
Step 2. Compute the trusts βk = Tr{f(ξ(ωk)) ≤ 0} for k = 1, 2, · · · , N by rough simulation.
Step 3. Set N′ as the integer part of αN.
Step 4. Return the N′th largest element in {β1, β2, · · · , βN}.

Now we consider the following two rough random variables

ξ1 = ([ρ1, ρ1 + 1], [ρ1 − 1, ρ1 + 2]), with ρ1 ∼ N(0, 1),
ξ2 = ([ρ2, ρ2 + 1], [ρ2 − 1, ρ2 + 2]), with ρ2 ∼ N(1, 2).

A run of rough random simulation with 5000 cycles shows that

Ch{ξ1 + ξ2 ≥ 0}(0.8) = 0.78.
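For readers who want to experiment with Algorithm 9.1, the following sketch shows one possible arrangement of the outer stochastic loop. It is not the book's code: sample_omega() and trust_leq_zero(omega) are assumed helpers, the latter standing for the rough simulation that estimates Tr{f(ξ(ω)) ≤ 0}.

```python
def alpha_chance(sample_omega, trust_leq_zero, alpha, N=5000):
    """Sketch of Algorithm 9.1: estimate Ch{f(xi) <= 0}(alpha).

    sample_omega()        -- assumed to draw omega from (Omega, A, Pr)
    trust_leq_zero(omega) -- assumed to return Tr{f(xi(omega)) <= 0}
    Returns the N'-th largest trust value, N' = integer part of alpha * N.
    """
    betas = sorted((trust_leq_zero(sample_omega()) for _ in range(N)),
                   reverse=True)            # largest trust first
    n_prime = max(1, int(alpha * N))        # integer part of alpha * N
    return betas[n_prime - 1]               # the N'-th largest element
```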

Example 9.2: Assume that ξ is an n-dimensional rough random vector on the probability space (Ω,A,Pr), and f : ℜ^n → ℜ is a measurable function. For any given confidence levels α and β, let us find the maximal value f̄ such that

Ch{f(ξ) ≥ f̄}(α) ≥ β (9.29)

holds. That is, we should compute the maximal value f̄ such that

Pr{ω ∈ Ω | Tr{f(ξ(ω)) ≥ f̄} ≥ β} ≥ α (9.30)

holds. We sample ω1, ω2, · · · , ωN from Ω according to the probability measure Pr, and estimate f̄k = sup{f̄k | Tr{f(ξ(ωk)) ≥ f̄k} ≥ β} for k = 1, 2, · · · , N by rough simulation. Let N′ be the integer part of αN. Then the value f̄ can be taken as the N′th largest element in the sequence {f̄1, f̄2, · · · , f̄N}.

Algorithm 9.2 (Rough Random Simulation)
Step 1. Generate ω1, ω2, · · · , ωN from Ω according to the probability measure Pr.
Step 2. Find f̄k = sup{f̄k | Tr{f(ξ(ωk)) ≥ f̄k} ≥ β} for k = 1, 2, · · · , N by rough simulation.
Step 3. Set N′ as the integer part of αN.
Step 4. Return the N′th largest element in {f̄1, f̄2, · · · , f̄N}.


We now find the maximal value f̄ such that Ch{ξ1² + ξ2² ≥ f̄}(0.9) ≥ 0.9, where ξ1 and ξ2 are rough random variables defined as

ξ1 = ([ρ1, ρ1 + 1], [ρ1 − 1, ρ1 + 2]), with ρ1 ∼ N(0, 1),
ξ2 = ([ρ2, ρ2 + 1], [ρ2 − 1, ρ2 + 2]), with ρ2 ∼ N(1, 2).

A run of rough random simulation with 5000 cycles shows that f̄ = 0.20.
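Algorithm 9.2 has the same outer structure as Algorithm 9.1; only the per-ω quantity changes. A minimal sketch, again with an assumed helper rough_critical(omega, beta) that returns sup{r | Tr{f(ξ(ω)) ≥ r} ≥ β} by rough simulation:

```python
def critical_value(sample_omega, rough_critical, alpha, beta, N=5000):
    """Sketch of Algorithm 9.2: maximal f-bar with Ch{f(xi) >= f-bar}(alpha) >= beta."""
    values = sorted((rough_critical(sample_omega(), beta) for _ in range(N)),
                    reverse=True)
    n_prime = max(1, int(alpha * N))
    return values[n_prime - 1]      # N'-th largest per-omega critical value
```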

Example 9.3: Assume that ξ is an n-dimensional rough random vector on the probability space (Ω,A,Pr), and f : ℜ^n → ℜ is a measurable function. One problem is to calculate the expected value E[f(ξ)]. Note that, for each ω ∈ Ω, we may calculate the expected value E[f(ξ(ω))] by rough simulation. Since E[f(ξ)] is essentially the expected value of the random variable E[f(ξ(ω))], we may combine stochastic simulation and rough simulation to produce a rough random simulation.

Algorithm 9.3 (Rough Random Simulation)
Step 1. Set e = 0.
Step 2. Sample ω from Ω according to the probability measure Pr.
Step 3. e ← e + E[f(ξ(ω))], where E[f(ξ(ω))] may be calculated by rough simulation.
Step 4. Repeat the second and third steps N times.
Step 5. E[f(ξ)] = e/N.

We employ the rough random simulation to calculate the expected value of ξ1ξ2, where ξ1 and ξ2 are rough random variables defined as

ξ1 = ([ρ1, ρ1 + 1], [ρ1 − 1, ρ1 + 2]), with ρ1 ∼ N(0, 1),
ξ2 = ([ρ2, ρ2 + 1], [ρ2 − 1, ρ2 + 2]), with ρ2 ∼ N(1, 2).

A run of rough random simulation with 5000 cycles shows that E[ξ1ξ2] = 0.79.
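Algorithm 9.3 is an ordinary Monte Carlo average over ω of the inner rough expectation. A minimal sketch under the same assumptions as above (rough_expectation(omega) stands for E[f(ξ(ω))] computed by rough simulation):

```python
def expected_value(sample_omega, rough_expectation, N=5000):
    """Sketch of Algorithm 9.3: estimate E[f(xi)] for a rough random vector xi."""
    total = 0.0
    for _ in range(N):
        total += rough_expectation(sample_omega())   # Step 3: accumulate E[f(xi(omega))]
    return total / N                                 # Step 5: e / N
```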


Chapter 10

Rough Fuzzy Theory

A rough fuzzy variable was defined by Liu [75] as a fuzzy variable on the universal set of rough variables, or a fuzzy variable taking “rough variable” values.

The emphasis in this chapter is mainly on rough fuzzy variables, rough fuzzy arithmetic, chance measure, chance distribution, independent and identical distribution, expected value operator, variance, critical values, convergence concepts, and rough fuzzy simulation.

10.1 Rough Fuzzy Variables

Definition 10.1 (Liu [75]) A rough fuzzy variable is a function from a possibility space (Θ,P(Θ),Pos) to the set of rough variables.

Remark 10.1: Note that the concept is different from the rough fuzzy set presented by Dubois and Prade [27].

Theorem 10.1 Assume that ξ is a rough fuzzy variable, and B is a Borel set of ℜ. Then the trust Tr{ξ(θ) ∈ B} is a fuzzy variable.

Proof: The trust Tr{ξ(θ) ∈ B} is obviously a fuzzy variable since it is a function from a possibility space to the set of real numbers.

Theorem 10.2 Let ξ be a rough fuzzy variable. If the expected value E[ξ(θ)] is finite for each θ, then E[ξ(θ)] is a fuzzy variable.

Proof: The expected value E[ξ(θ)] is obviously a fuzzy variable since it is a function from a possibility space to the set of real numbers.

Definition 10.2 An n-dimensional rough fuzzy vector is a function from a possibility space (Θ,P(Θ),Pos) to the set of n-dimensional rough vectors.


Theorem 10.3 The vector (ξ1, ξ2, · · · , ξn) is a rough fuzzy vector if and only if ξ1, ξ2, · · · , ξn are rough fuzzy variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a rough fuzzy vector on the possibility space (Θ,P(Θ),Pos). Then, for each θ ∈ Θ, the vector ξ(θ) is a rough vector. It follows from Theorem 4.8 that ξ1(θ), ξ2(θ), · · · , ξn(θ) are rough variables. Thus ξ1, ξ2, · · · , ξn are rough fuzzy variables.

Conversely, suppose that ξ1, ξ2, · · · , ξn are rough fuzzy variables on the possibility space (Θ,P(Θ),Pos). Then, for each θ ∈ Θ, the variables ξ1(θ), ξ2(θ), · · · , ξn(θ) are rough variables. It follows from Theorem 4.8 that ξ(θ) = (ξ1(θ), ξ2(θ), · · · , ξn(θ)) is a rough vector. Thus ξ is a rough fuzzy vector.

Theorem 10.4 Let ξ be an n-dimensional rough fuzzy vector, and f : ℜ^n → ℜ a measurable function. Then f(ξ) is a rough fuzzy variable.

Proof: For each θ ∈ Θ, ξ(θ) is a rough vector and f(ξ(θ)) is a rough variable. Thus f(ξ) is a rough fuzzy variable since it is a function from a possibility space to the set of rough variables.

Definition 10.3 (Liu [75], Rough Fuzzy Arithmetic on Single Space) Let f : ℜ^n → ℜ be a measurable function, and ξ1, ξ2, · · · , ξn rough fuzzy variables on the possibility space (Θ,P(Θ),Pos). Then ξ = f(ξ1, ξ2, · · · , ξn) is a rough fuzzy variable defined by

ξ(θ) = f(ξ1(θ), ξ2(θ), · · · , ξn(θ)), ∀θ ∈ Θ. (10.1)

Definition 10.4 (Liu [75], Rough Fuzzy Arithmetic on Different Spaces) Let f : ℜ^n → ℜ be a measurable function, and ξi rough fuzzy variables on (Θi,P(Θi),Posi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a rough fuzzy variable defined on the product possibility space (Θ,P(Θ),Pos) as

ξ(θ1, θ2, · · · , θn) = f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn)) (10.2)

for any (θ1, θ2, · · · , θn) ∈ Θ.

10.2 Chance Measure

Definition 10.5 (Liu [75]) Let ξ be a rough fuzzy variable, and B a Borel set of ℜ. Then the chance of the rough fuzzy event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

Ch{ξ ∈ B}(α) = sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ∈ B}. (10.3)
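On a finite possibility space the sup-inf in (10.3) can be evaluated by brute force, which may help to fix the definition. The toy sketch below is not from the book; cr(A) is an assumed credibility function on subsets and trust is an assumed table (a Python dict) holding Tr{ξ(θ) ∈ B} for each atom θ.

```python
from itertools import combinations

def chance(alpha, cr, trust):
    """Brute-force Ch{xi in B}(alpha): sup over {A : Cr{A} >= alpha} of min Tr."""
    thetas = list(trust)                               # atoms of the possibility space
    best = 0.0
    for k in range(1, len(thetas) + 1):
        for A in combinations(thetas, k):
            if cr(frozenset(A)) >= alpha:              # A is a feasible set
                best = max(best, min(trust[t] for t in A))
    return best
```

Enumerating all subsets is only feasible for a handful of atoms; for general problems the rough fuzzy simulation of Section 10.9 is used instead.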

Theorem 10.5 Let ξ be a rough fuzzy variable, and B a Borel set of ℜ. For any given α∗ > 0.5, we write β∗ = Ch{ξ ∈ B}(α∗). Then we have

Cr{θ ∈ Θ | Tr{ξ(θ) ∈ B} ≥ β∗} ≥ α∗. (10.4)


Proof: Since β∗ is the supremum of β satisfying

Cr{θ ∈ Θ | Tr{ξ(θ) ∈ B} ≥ β} ≥ α∗,

there exists an increasing sequence {βi} such that

Cr{θ ∈ Θ | Tr{ξ(θ) ∈ B} ≥ βi} ≥ α∗ > 0.5 (10.5)

and βi ↑ β∗ as i → ∞. Since

{θ ∈ Θ | Tr{ξ(θ) ∈ B} ≥ βi} ↓ {θ ∈ Θ | Tr{ξ(θ) ∈ B} ≥ β∗}

as i → ∞, it follows from (10.5) and the credibility semicontinuity law that

Cr{θ ∈ Θ | Tr{ξ(θ) ∈ B} ≥ β∗} = lim_{i→∞} Cr{θ ∈ Θ | Tr{ξ(θ) ∈ B} ≥ βi} ≥ α∗.

The proof is complete.

Theorem 10.6 Assume that ξ is a rough fuzzy variable on the possibility space (Θ,P(Θ),Pos), and B is a Borel set of ℜ. Then Ch{ξ ∈ B}(α) is a decreasing function of α, and

lim_{α↓0} Ch{ξ ∈ B}(α) = sup_{θ∈Θ+} Tr{ξ(θ) ∈ B}; (10.6)

Ch{ξ ∈ B}(1) = inf_{θ∈Θ+} Tr{ξ(θ) ∈ B} (10.7)

where Θ+ is the kernel of (Θ,P(Θ),Pos).

Proof: For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

Ch{ξ ∈ B}(α1) = sup_{Cr{A}≥α1} inf_{θ∈A} Tr{ξ(θ) ∈ B} ≥ sup_{Cr{A}≥α2} inf_{θ∈A} Tr{ξ(θ) ∈ B} = Ch{ξ ∈ B}(α2).

Thus Ch{ξ ∈ B}(α) is a decreasing function of α.

Next we prove (10.6). On the one hand, for any α ∈ (0, 1], we have

Ch{ξ ∈ B}(α) = sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ∈ B} ≤ sup_{θ∈Θ+} Tr{ξ(θ) ∈ B}.

Letting α ↓ 0, we get

lim_{α↓0} Ch{ξ ∈ B}(α) ≤ sup_{θ∈Θ+} Tr{ξ(θ) ∈ B}. (10.8)


On the other hand, for any θ∗ ∈ Θ+, we write α∗ = Cr{θ∗} > 0. Since Ch{ξ ∈ B}(α) is a decreasing function of α, we have

lim_{α↓0} Ch{ξ ∈ B}(α) ≥ Ch{ξ ∈ B}(α∗) ≥ Tr{ξ(θ∗) ∈ B}

which implies that

lim_{α↓0} Ch{ξ ∈ B}(α) ≥ sup_{θ∈Θ+} Tr{ξ(θ) ∈ B}. (10.9)

It follows from (10.8) and (10.9) that (10.6) holds.

Finally, let us prove (10.7). On the one hand, for any set A with Cr{A} = 1, it is clear that Θ+ ⊂ A. Thus

Ch{ξ ∈ B}(1) = sup_{Cr{A}≥1} inf_{θ∈A} Tr{ξ(θ) ∈ B} ≤ inf_{θ∈Θ+} Tr{ξ(θ) ∈ B}. (10.10)

On the other hand, since Cr{Θ+} = 1, we have

Ch{ξ ∈ B}(1) ≥ inf_{θ∈Θ+} Tr{ξ(θ) ∈ B}. (10.11)

It follows from (10.10) and (10.11) that (10.7) holds. The theorem is proved.

Theorem 10.7 Let ξ be a rough fuzzy variable, and {Bi} a sequence of Borel sets of ℜ. If α > 0.5 and Bi ↓ B, then we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α). (10.12)

Proof: Since Bi ↓ B, the chance Ch{ξ ∈ Bi}(α) is decreasing with respect to i. Thus the limit lim_{i→∞} Ch{ξ ∈ Bi}(α) exists and is not less than Ch{ξ ∈ B}(α). If the limit is equal to Ch{ξ ∈ B}(α), then the theorem is proved. Otherwise,

lim_{i→∞} Ch{ξ ∈ Bi}(α) > Ch{ξ ∈ B}(α).

Thus there exists a number z such that

lim_{i→∞} Ch{ξ ∈ Bi}(α) > z > Ch{ξ ∈ B}(α). (10.13)

Hence there exists a set Ai with Cr{Ai} ≥ α such that

inf_{θ∈Ai} Tr{ξ(θ) ∈ Bi} > z

for every i. Since α > 0.5, we may define A = {θ ∈ Θ | Pos{θ} > 2 − 2α}. It is clear that Cr{A} ≥ α and A ⊂ Ai for all i. Thus,

inf_{θ∈A} Tr{ξ(θ) ∈ Bi} ≥ inf_{θ∈Ai} Tr{ξ(θ) ∈ Bi} > z


for every i. It follows from the trust continuity theorem that

Tr{ξ(θ) ∈ Bi} ↓ Tr{ξ(θ) ∈ B}, ∀θ ∈ A.

Thus,

Ch{ξ ∈ B}(α) ≥ inf_{θ∈A} Tr{ξ(θ) ∈ B} ≥ z

which contradicts (10.13). The theorem is proved.

Theorem 10.8 (a) Let ξ, ξ1, ξ2, · · · be rough fuzzy variables such that ξi(θ) ↑ ξ(θ) for each θ ∈ Θ. If α > 0.5, then for each real number r, we have

lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α). (10.14)

(b) Let ξ, ξ1, ξ2, · · · be rough fuzzy variables such that ξi(θ) ↓ ξ(θ) for each θ ∈ Θ. If α > 0.5, then for each real number r, we have

lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α). (10.15)

Proof: (a) Since ξi(θ) ↑ ξ(θ) for each θ ∈ Θ, we have {ξi(θ) ≤ r} ↓ {ξ(θ) ≤ r}. Thus the limit lim_{i→∞} Ch{ξi ≤ r}(α) exists and is not less than Ch{ξ ≤ r}(α). If the limit is equal to Ch{ξ ≤ r}(α), the theorem is proved. Otherwise,

lim_{i→∞} Ch{ξi ≤ r}(α) > Ch{ξ ≤ r}(α).

Then there exists z ∈ (0, 1) such that

lim_{i→∞} Ch{ξi ≤ r}(α) > z > Ch{ξ ≤ r}(α). (10.16)

Hence there exists a set Ai with Cr{Ai} ≥ α such that

inf_{θ∈Ai} Tr{ξi(θ) ≤ r} > z

for every i. Since α > 0.5, we may define A = {θ ∈ Θ | Pos{θ} > 2 − 2α}. Then Cr{A} ≥ α and A ⊂ Ai for all i. Thus,

inf_{θ∈A} Tr{ξi(θ) ≤ r} ≥ inf_{θ∈Ai} Tr{ξi(θ) ≤ r} > z

for every i. On the other hand, it follows from Theorem 4.10 that

Tr{ξi(θ) ≤ r} ↓ Tr{ξ(θ) ≤ r}.

Thus,

Tr{ξ(θ) ≤ r} ≥ z, ∀θ ∈ A.

Hence we have

Ch{ξ ≤ r}(α) ≥ inf_{θ∈A} Tr{ξ(θ) ≤ r} ≥ z

which contradicts (10.16). Part (a) is proved. A similar argument proves part (b).


Variety of Chance Measure

Definition 10.6 Let ξ be a rough fuzzy variable, and B a Borel set of ℜ. For any real number α ∈ (0, 1], the α-chance of the rough fuzzy event ξ ∈ B is defined as the value of the chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 10.7 Let ξ be a rough fuzzy variable, and B a Borel set of ℜ. Then the equilibrium chance of the rough fuzzy event ξ ∈ B is defined as

Che{ξ ∈ B} = sup_{0<α≤1} {α | Ch{ξ ∈ B}(α) ≥ α} (10.17)

where Ch denotes the chance measure.

Definition 10.8 Let ξ be a rough fuzzy variable, and B a Borel set of ℜ. Then the average chance of the rough fuzzy event ξ ∈ B is defined as

Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα (10.18)

where Ch denotes the chance measure.
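Since Ch{ξ ∈ B}(α) is a decreasing function of α (Theorem 10.6), the average chance (10.18) is a one-dimensional integral that can be approximated by any quadrature rule. A minimal sketch, assuming ch(alpha) returns the α-chance:

```python
def average_chance(ch, n=1000):
    """Midpoint-rule approximation of the integral of ch(alpha) over (0, 1]."""
    return sum(ch((i + 0.5) / n) for i in range(n)) / n
```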

Definition 10.9 A rough fuzzy variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm}(α) ≡ 0; (10.19)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · ·}(α) ≡ 0. (10.20)

10.3 Chance Distribution

Definition 10.10 The chance distribution Φ: [−∞,+∞] × (0, 1] → [0, 1] of a rough fuzzy variable ξ is defined by

Φ(x;α) = Ch{ξ ≤ x}(α). (10.21)

Theorem 10.9 The chance distribution Φ(x;α) of a rough fuzzy variable is a decreasing and left-continuous function of α for each fixed x.

Proof: Denote the rough fuzzy variable by ξ. For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it follows from Theorem 10.6 that

Φ(x;α1) = Ch{ξ ≤ x}(α1) ≥ Ch{ξ ≤ x}(α2) = Φ(x;α2).


Thus Φ(x;α) is a decreasing function of α for each fixed x.

We next prove the left-continuity of Φ(x;α) with respect to α. Let α ∈ (0, 1] be given, and let {αi} be a sequence of numbers with αi ↑ α. Since Φ(x;α) is a decreasing function of α, the limit lim_{i→∞} Φ(x;αi) exists and is not less than Φ(x;α). If the limit is equal to Φ(x;α), then the left-continuity is proved. Otherwise, we have

lim_{i→∞} Φ(x;αi) > Φ(x;α).

Let z∗ = (lim_{i→∞} Φ(x;αi) + Φ(x;α))/2. It is clear that

Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Cr{Ai} ≥ αi such that

inf_{θ∈Ai} Tr{ξ(θ) ≤ x} > z∗

for each i. Now we define

A∗ = ⋃_{i=1}^{∞} Ai.

It is clear that Cr{A∗} ≥ Cr{Ai} ≥ αi. Letting i → ∞, we get Cr{A∗} ≥ α. Thus

Φ(x;α) ≥ inf_{θ∈A∗} Tr{ξ(θ) ≤ x} ≥ z∗.

A contradiction proves the theorem.

Theorem 10.10 The chance distribution Φ(x;α) of a rough fuzzy variable is an increasing function of x for any fixed α, and

Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α; (10.22)

lim_{x→−∞} Φ(x;α) = 0 if α > 0.5; (10.23)

lim_{x→+∞} Φ(x;α) = 1 if α < 0.5. (10.24)

Furthermore, if α > 0.5, then we have

lim_{y↓x} Φ(y;α) = Φ(x;α). (10.25)

Proof: Let Φ(x;α) be the chance distribution of the rough fuzzy variable ξ defined on the possibility space (Θ,P(Θ),Pos). For any given x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞, it is clear that

Φ(x1;α) = sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ x1} ≤ sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ x2} = Φ(x2;α).


That is, the chance distribution Φ(x;α) is an increasing function of x for each fixed α.

Since ξ(θ) is a rough variable for any θ ∈ Θ, we have Tr{ξ(θ) ≤ −∞} = 0 for any θ ∈ Θ. It follows that

Φ(−∞;α) = sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ −∞} = 0.

Similarly, we have Tr{ξ(θ) ≤ +∞} = 1 for any θ ∈ Θ. Thus

Φ(+∞;α) = sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ +∞} = 1.

Thus (10.22) is proved.

Next we prove (10.23) and (10.24). If α > 0.5, then there exists an element θ∗ ∈ Θ such that 2 − 2α < Pos{θ∗} ≤ 1. It is easy to verify that θ∗ ∈ A whenever Cr{A} ≥ α. Hence

lim_{x→−∞} Φ(x;α) = lim_{x→−∞} sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ x} ≤ lim_{x→−∞} Tr{ξ(θ∗) ≤ x} = 0.

Thus (10.23) holds. When α < 0.5, there exists an element θ∗ such that Cr{θ∗} ≥ α. Thus we have

lim_{x→+∞} Φ(x;α) = lim_{x→+∞} sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ x} ≥ lim_{x→+∞} Tr{ξ(θ∗) ≤ x} = 1

which implies that (10.24) holds.

Finally, we prove (10.25). Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 10.7 that

lim_{y↓x} Φ(y;α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

The theorem is proved.

Theorem 10.11 Let ξ be a rough fuzzy variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) a decreasing function of x for any fixed α. Furthermore, when α > 0.5, we have

lim_{y↑x} Ch{ξ ≥ y}(α) = Ch{ξ ≥ x}(α). (10.26)

Proof: Like Theorems 10.9 and 10.10.


Definition 10.11 The chance density function φ: ℜ × (0, 1] → [0,+∞) of a rough fuzzy variable ξ is a function such that

Φ(x;α) = ∫_{−∞}^x φ(y;α) dy (10.27)

holds for all x ∈ [−∞,+∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.

10.4 Independent and Identical Distribution

This section introduces the concept of independent and identically distributed (iid) rough fuzzy variables.

Definition 10.12 The rough fuzzy variables ξ1, ξ2, · · · , ξn are said to be iid if and only if

(Tr{ξi(θ) ∈ B1}, Tr{ξi(θ) ∈ B2}, · · · , Tr{ξi(θ) ∈ Bm}), i = 1, 2, · · · , n

are iid fuzzy vectors for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m.

Theorem 10.12 Let ξ1, ξ2, · · · , ξn be iid rough fuzzy variables. Then for any Borel set B of ℜ, Tr{ξi(θ) ∈ B}, i = 1, 2, · · · , n are iid fuzzy variables.

Proof: It follows immediately from the definition.

Theorem 10.13 Let f : ℜ → ℜ be a measurable function. If ξ1, ξ2, · · · , ξn are iid rough fuzzy variables, then f(ξ1), f(ξ2), · · · , f(ξn) are iid rough fuzzy variables.

Proof: We have proved that f(ξ1), f(ξ2), · · · , f(ξn) are rough fuzzy variables. For any positive integer m and Borel sets B1, B2, · · · , Bm of ℜ, since f⁻¹(B1), f⁻¹(B2), · · · , f⁻¹(Bm) are Borel sets, we know that

(Tr{ξi(θ) ∈ f⁻¹(B1)}, Tr{ξi(θ) ∈ f⁻¹(B2)}, · · · , Tr{ξi(θ) ∈ f⁻¹(Bm)}), i = 1, 2, · · · , n

are iid fuzzy vectors. Equivalently, the fuzzy vectors

(Tr{f(ξi(θ)) ∈ B1}, Tr{f(ξi(θ)) ∈ B2}, · · · , Tr{f(ξi(θ)) ∈ Bm}), i = 1, 2, · · · , n

are iid. Hence f(ξ1), f(ξ2), · · · , f(ξn) are iid rough fuzzy variables.


10.5 Expected Value Operator

Definition 10.13 (Liu [75]) Let ξ be a rough fuzzy variable. The expected value E[ξ] is defined by

E[ξ] = ∫_0^{+∞} Cr{θ ∈ Θ | E[ξ(θ)] ≥ r} dr − ∫_{−∞}^0 Cr{θ ∈ Θ | E[ξ(θ)] ≤ r} dr

provided that at least one of the two integrals is finite.

Theorem 10.14 Assume that ξ and η are rough fuzzy variables with finite expected values. If E[ξ(θ)] and E[η(θ)] are independent fuzzy variables, then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (10.28)

Proof: For any θ ∈ Θ, we have E[aξ(θ) + bη(θ)] = aE[ξ(θ)] + bE[η(θ)]. Since E[ξ(θ)] and E[η(θ)] are independent fuzzy variables, we get E[aξ + bη] = E[aE[ξ(θ)] + bE[η(θ)]] = aE[E[ξ(θ)]] + bE[E[η(θ)]] = aE[ξ] + bE[η]. The theorem is proved.

Theorem 10.15 Let ξ, ξ1, ξ2, · · · be rough fuzzy variables such that E[ξi(θ)] → E[ξ(θ)] uniformly. Then

lim_{i→∞} E[ξi] = E[lim_{i→∞} ξi]. (10.29)

Proof: Since the ξi are rough fuzzy variables, E[ξi(θ)] are fuzzy variables for all i. It follows from E[ξi(θ)] → E[ξ(θ)] uniformly and Theorem 3.41 that (10.29) holds.

10.6 Variance, Covariance and Moments

Definition 10.14 (Liu [75]) Let ξ be a rough fuzzy variable with finite expected value E[ξ]. The variance of ξ is defined as the expected value of the rough fuzzy variable (ξ − E[ξ])². That is, V[ξ] = E[(ξ − E[ξ])²].

Theorem 10.16 If ξ is a rough fuzzy variable with finite expected value, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].

Theorem 10.17 Assume that ξ is a rough fuzzy variable whose expected value exists. Then we have

V[E[ξ(θ)]] ≤ V[ξ]. (10.30)


Proof: Denote the expected value of ξ by e. It follows from Theorem 3.53 that

V[E[ξ(θ)]] = E[(E[ξ(θ)] − e)²] ≤ E[E[(ξ(θ) − e)²]] = V[ξ].

The theorem is proved.

Theorem 10.18 Let ξ be a rough fuzzy variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e}(1) = 1.

Proof: If V[ξ] = 0, then it follows from V[ξ] = E[(ξ − e)²] that

∫_0^{+∞} Cr{θ ∈ Θ | E[(ξ(θ) − e)²] ≥ r} dr = 0

which implies that Cr{θ ∈ Θ | E[(ξ(θ) − e)²] ≥ r} = 0 for any r > 0. Therefore, Cr{θ ∈ Θ | E[(ξ(θ) − e)²] = 0} = 1. That is, there exists a set A∗ with Cr{A∗} = 1 such that E[(ξ(θ) − e)²] = 0 for each θ ∈ A∗. It follows from Theorem 4.41 that Tr{ξ(θ) = e} = 1 for each θ ∈ A∗. Hence

Ch{ξ = e}(1) = sup_{Cr{A}≥1} inf_{θ∈A} Tr{ξ(θ) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 10.5 that there exists a set A∗ with Cr{A∗} = 1 such that

inf_{θ∈A∗} Tr{ξ(θ) = e} = 1.

That is, Tr{(ξ(θ) − e)² ≥ r} = 0 for each r > 0 and each θ ∈ A∗. Thus

E[(ξ(θ) − e)²] = ∫_0^{+∞} Tr{(ξ(θ) − e)² ≥ r} dr = 0

for each θ ∈ A∗. It follows that Cr{θ ∈ Θ | E[(ξ(θ) − e)²] ≥ r} = 0 for any r > 0. Hence

V[ξ] = ∫_0^{+∞} Cr{θ ∈ Θ | E[(ξ(θ) − e)²] ≥ r} dr = 0.

The theorem is proved.

Definition 10.15 Let ξ and η be rough fuzzy variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])]. (10.31)

Definition 10.16 For any positive integer k, the expected value E[ξ^k] is called the kth moment of the rough fuzzy variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the rough fuzzy variable ξ.


10.7 Optimistic and Pessimistic Values

Definition 10.17 (Liu [75]) Let ξ be a rough fuzzy variable, and γ, δ ∈ (0, 1]. Then

ξsup(γ, δ) = sup{r | Ch{ξ ≥ r}(γ) ≥ δ} (10.32)

is called the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r | Ch{ξ ≤ r}(γ) ≥ δ} (10.33)

is called the (γ, δ)-pessimistic value to ξ.

Theorem 10.19 Let ξ be a rough fuzzy variable. Assume that ξsup(γ, δ) is the (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. If γ > 0.5, then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (10.34)

Proof: It follows from the definition of the (γ, δ)-pessimistic value that there exists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ ξinf(γ, δ) as i → ∞. Thus we have

lim_{i→∞} Ch{ξ ≤ xi}(γ) ≥ δ.

It follows from γ > 0.5 and Theorem 10.10 that

Ch{ξ ≤ ξinf(γ, δ)}(γ) = lim_{i→∞} Ch{ξ ≤ xi}(γ) ≥ δ.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥ δ and xi ↑ ξsup(γ, δ) as i → ∞. Thus we have

lim_{i→∞} Ch{ξ ≥ xi}(γ) ≥ δ.

It follows from γ > 0.5 and Theorem 10.11 that

Ch{ξ ≥ ξsup(γ, δ)}(γ) = lim_{i→∞} Ch{ξ ≥ xi}(γ) ≥ δ.

The theorem is proved.

Theorem 10.20 Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of the rough fuzzy variable ξ, respectively. If γ ≤ 0.5, then we have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (10.35)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (10.36)


where δ1 and δ2 are defined by

δ1 = sup_{θ∈Θ} {ξ(θ)sup(1 − δ) − ξ(θ)inf(1 − δ)},
δ2 = sup_{θ∈Θ} {ξ(θ)sup(δ) − ξ(θ)inf(δ)},

and ξ(θ)sup(δ) and ξ(θ)inf(δ) are the δ-optimistic and δ-pessimistic values of the rough variable ξ(θ) for each θ, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Θ1 = {θ ∈ Θ | Tr{ξ(θ) > ξsup(γ, δ) + ε} ≥ δ},

Θ2 = {θ ∈ Θ | Tr{ξ(θ) < ξinf(γ, δ) − ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Cr{Θ1} < γ and Cr{Θ2} < γ. Thus Cr{Θ1} + Cr{Θ2} < γ + γ ≤ 1. This fact implies that Θ1 ∪ Θ2 ≠ Θ. Let θ∗ ∉ Θ1 ∪ Θ2. Then we have

Tr {ξ(θ∗) > ξsup(γ, δ) + ε} < δ,

Tr {ξ(θ∗) < ξinf(γ, δ)− ε} < δ.

Since Tr is self-dual, we have

Tr {ξ(θ∗) ≤ ξsup(γ, δ) + ε} > 1− δ,

Tr {ξ(θ∗) ≥ ξinf(γ, δ)− ε} > 1− δ.

It follows from the definitions of ξ(θ∗)sup(1− δ) and ξ(θ∗)inf(1− δ) that

ξsup(γ, δ) + ε ≥ ξ(θ∗)inf(1− δ),

ξinf(γ, δ)− ε ≤ ξ(θ∗)sup(1− δ)

which implies that

ξinf(γ, δ)− ε− (ξsup(γ, δ) + ε) ≤ ξ(θ∗)sup(1− δ)− ξ(θ∗)inf(1− δ) ≤ δ1.

Letting ε → 0, we obtain (10.35).

Next we prove the inequality (10.36). Assume γ > 0.5. For any given ε > 0, we define

Θ1 = {θ ∈ Θ | Tr{ξ(θ) ≥ ξsup(γ, δ) − ε} ≥ δ},

Θ2 = {θ ∈ Θ | Tr{ξ(θ) ≤ ξinf(γ, δ) + ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Cr{Θ1} ≥ γ and Cr{Θ2} ≥ γ. Thus Cr{Θ1} + Cr{Θ2} ≥ γ + γ > 1. This fact implies that Θ1 ∩ Θ2 ≠ ∅. Let θ∗ ∈ Θ1 ∩ Θ2. Then we have

Tr {ξ(θ∗) ≥ ξsup(γ, δ)− ε} ≥ δ,


Tr {ξ(θ∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(θ∗)sup(δ) and ξ(θ∗)inf(δ) that

ξsup(γ, δ)− ε ≤ ξ(θ∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(θ∗)inf(δ)

which implies that

ξsup(γ, δ)− ε− (ξinf(γ, δ) + ε) ≤ ξ(θ∗)sup(δ)− ξ(θ∗)inf(δ) ≤ δ2.

The inequality (10.36) is proved by letting ε→ 0.

10.8 Convergence Concepts

This section introduces four convergence concepts for sequences of rough fuzzy variables: convergence a.s., convergence in chance, convergence in mean, and convergence in distribution.

Table 10.1: Relationship among Convergence Concepts

Convergence in Chance ⇒ Convergence in Distribution ⇐ Convergence in Mean

Definition 10.18 Suppose that ξ, ξ1, ξ2, · · · are rough fuzzy variables defined on the possibility space (Θ,P(Θ),Pos). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ P(Θ) with Cr{A} = 1 such that {ξi(θ)} converges a.s. to ξ(θ) for every θ ∈ A.

Definition 10.19 Suppose that ξ, ξ1, ξ2, · · · are rough fuzzy variables. We say that the sequence {ξi} converges in chance to ξ if

lim_{i→∞} lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 0 (10.37)

for every ε > 0.

Definition 10.20 Suppose that ξ, ξ1, ξ2, · · · are rough fuzzy variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0. (10.38)

Definition 10.21 Suppose that Φ, Φ1, Φ2, · · · are the chance distributions of the rough fuzzy variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x;α) → Φ(x;α) for all continuity points (x;α) of Φ.


Theorem 10.21 Let ξ, ξ1, ξ2, · · · be rough fuzzy variables defined on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in chance to ξ, then {ξi} converges in distribution to ξ.

Proof: Let Φ, Φi be the chance distributions of ξ, ξi, i = 1, 2, · · ·, respectively. If {ξi} does not converge in distribution to ξ, then there exists a continuity point (x, α) of Φ such that Φi(x;α) does not converge to Φ(x;α). In other words, there exists a number ε∗ > 0 and a subsequence {Φik} such that

Φik(x;α) − Φ(x;α) > 2ε∗, ∀k (10.39)

or

Φ(x;α) − Φik(x;α) > 2ε∗, ∀k. (10.40)

If (10.39) holds, then for the positive number ε∗, there exists δ > 0 such that

|Φ(x + δ;α) − Φ(x;α)| < ε∗

which implies that

Φik(x;α) − Φ(x + δ;α) > ε∗.

Equivalently, we have

sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξik(θ) ≤ x} − sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ x + δ} > ε∗.

Thus, for each k, there exists a set Ak ∈ P(Θ) with Cr{Ak} ≥ α such that

inf_{θ∈Ak} Tr{ξik(θ) ≤ x} − sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξ(θ) ≤ x + δ} > ε∗.

Moreover, since Cr{Ak} ≥ α, we have

inf_{θ∈Ak} Tr{ξik(θ) ≤ x} − inf_{θ∈Ak} Tr{ξ(θ) ≤ x + δ} > ε∗.

Thus there exists θk ∈ Ak with Cr{θk} > 0 such that

Tr{ξik(θk) ≤ x} − Tr{ξ(θk) ≤ x + δ} > ε∗. (10.41)

Note that ξik(θk) and ξ(θk) are rough variables, and

{ξik(θk) ≤ x} = {ξik(θk) ≤ x, ξ(θk) ≤ x + δ} ∪ {ξik(θk) ≤ x, ξ(θk) > x + δ} ⊂ {ξ(θk) ≤ x + δ} ∪ {|ξik(θk) − ξ(θk)| > δ}.

It follows from (10.41) that

Tr{|ξik(θk) − ξ(θk)| > δ} ≥ Tr{ξik(θk) ≤ x} − Tr{ξ(θk) ≤ x + δ} > ε∗.

Thus we get

lim_{α↓0} Ch{|ξik − ξ| > δ}(α) > ε∗

which implies that the rough fuzzy sequence {ξi} does not converge in chance to ξ. A contradiction proves that {ξi} converges in distribution to ξ. A similar argument applies to the case (10.40).


Theorem 10.22 Suppose that ξ, ξ1, ξ2, · · · are rough fuzzy variables on the possibility space (Θ,P(Θ),Pos). If the sequence {ξi} converges in mean to ξ, then {ξi} converges in distribution to ξ.

Proof: Suppose that Φ, Φi are the chance distributions of ξ, ξi, i = 1, 2, · · ·, respectively. If {ξi} does not converge in distribution to ξ, then there exists a continuity point (x, α) of Φ such that Φi(x;α) does not converge to Φ(x;α). In other words, there exists a number ε∗ > 0 and a subsequence {Φik} such that

Φik(x;α) − Φ(x;α) > 2ε∗, ∀k (10.42)

or

Φ(x;α) − Φik(x;α) > 2ε∗, ∀k. (10.43)

If (10.42) holds, then for the positive number ε∗, there exists δ with 0 < δ < α ∧ 0.5 such that

|Φ(x + δ;α − δ) − Φ(x;α)| < ε∗

which implies that

Φik(x;α) − Φ(x + δ;α − δ) > ε∗.

Equivalently, we have

sup_{Cr{A}≥α} inf_{θ∈A} Tr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Tr{ξ(θ) ≤ x + δ} > ε∗.

Thus, for each k, there exists a set Ak ∈ P(Θ) with Cr{Ak} ≥ α such that

inf_{θ∈Ak} Tr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Tr{ξ(θ) ≤ x + δ} > ε∗.

Write A′k = {θ ∈ Ak | Cr{θ} < δ}. Then A′k ⊂ Ak and Cr{A′k} ≤ δ. Define A∗k = Ak\A′k. Then

inf_{θ∈A∗k} Tr{ξik(θ) ≤ x} − sup_{Cr{A}≥α−δ} inf_{θ∈A} Tr{ξ(θ) ≤ x + δ} > ε∗.

It follows from the subadditivity of the credibility measure that

Cr{A∗k} ≥ Cr{Ak} − Cr{A′k} ≥ α − δ.

Thus, we have

inf_{θ∈A∗k} Tr{ξik(θ) ≤ x} − inf_{θ∈A∗k} Tr{ξ(θ) ≤ x + δ} > ε∗.

Furthermore, there exists θk ∈ A∗k with Cr{θk} ≥ δ such that

Tr{ξik(θk) ≤ x} − Tr{ξ(θk) ≤ x + δ} > ε∗. (10.44)


Note that ξik(θk) and ξ(θk) are rough variables, and

{ξik(θk) ≤ x} = {ξik(θk) ≤ x, ξ(θk) ≤ x + δ} ∪ {ξik(θk) ≤ x, ξ(θk) > x + δ} ⊂ {ξ(θk) ≤ x + δ} ∪ {|ξik(θk) − ξ(θk)| > δ}.

It follows from (10.44) that

Tr{|ξik(θk) − ξ(θk)| > δ} ≥ Tr{ξik(θk) ≤ x} − Tr{ξ(θk) ≤ x + δ} > ε∗.

Thus, for each k, we have

E[|ξik(θk) − ξ(θk)|] = ∫_0^{+∞} Tr{|ξik(θk) − ξ(θk)| > r} dr > δ × ε∗.

Therefore, for each k, we have

E[|ξik − ξ|] = ∫_0^{+∞} Cr{θ ∈ Θ | E[|ξik(θ) − ξ(θ)|] ≥ r} dr ≥ Cr{θk} × E[|ξik(θk) − ξ(θk)|] > δ² × ε∗

which implies that the rough fuzzy sequence {ξi} does not converge in mean to ξ. A contradiction proves that {ξi} converges in distribution to ξ. A similar argument applies to the case (10.43).

10.9 Rough Fuzzy Simulations

Since it is impossible to design an analytic algorithm for general rough fuzzy systems, we introduce rough fuzzy simulations for finding critical values, computing chance functions, and calculating expected values.

Example 10.1: Assume that ξ is an n-dimensional rough fuzzy vector defined on the possibility space (Θ,P(Θ),Pos), and f : ℜ^n → ℜ^m is a measurable function. For any confidence level α, we design a rough fuzzy simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). Equivalently, we should find the supremum β such that

Cr{θ ∈ Θ | Tr{f(ξ(θ)) ≤ 0} ≥ β} ≥ α. (10.45)

We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. For each θk, by using rough simulation, we can estimate the trust g(θk) = Tr{f(ξ(θk)) ≤ 0}. For any number r, we have

L(r) = (1/2) ( max_{1≤k≤N} {νk | g(θk) ≥ r} + min_{1≤k≤N} {1 − νk | g(θk) < r} ).


It follows from monotonicity that we may employ bisection search to find the maximal value r such that L(r) ≥ α. This value is an estimate of the supremum β, i.e., of the α-chance. We summarize this process as follows.

Algorithm 10.1 (Rough Fuzzy Simulation)
Step 1. Generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 2. Find the maximal value r such that L(r) ≥ α holds.
Step 3. Return r.

The rough fuzzy variables ξ1, ξ2, ξ3 are defined as

ξ1 = ([ρ1, ρ1 + 1], [ρ1 − 1, ρ1 + 2]), with ρ1 = (1, 2, 3),
ξ2 = ([ρ2, ρ2 + 1], [ρ2 − 1, ρ2 + 2]), with ρ2 = (2, 3, 4),
ξ3 = ([ρ3, ρ3 + 1], [ρ3 − 1, ρ3 + 2]), with ρ3 = (3, 4, 5).

A run of rough fuzzy simulation with 5000 cycles shows that

Ch{√(ξ1² + ξ2² + ξ3²) ≥ 4}(0.9) = 0.94.
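The function L(r) defined above is decreasing in r, which is what justifies the bisection step of Algorithm 10.1. The following sketch illustrates one way to code it; it is not the book's implementation, and the lists pos (the possibilities ν_k) and trust (the values Tr{f(ξ(θ_k)) ≤ 0} obtained by rough simulation) are assumed inputs.

```python
def rough_fuzzy_chance(pos, trust, alpha, tol=1e-4):
    """Sketch of Algorithm 10.1: maximal r with L(r) >= alpha."""
    def L(r):
        above = [p for p, g in zip(pos, trust) if g >= r]
        below = [1.0 - p for p, g in zip(pos, trust) if g < r]
        return 0.5 * ((max(above) if above else 0.0) +
                      (min(below) if below else 1.0))

    lo, hi = 0.0, 1.0                  # trust values lie in [0, 1]
    while hi - lo > tol:               # bisection on the decreasing function L
        mid = (lo + hi) / 2.0
        if L(mid) >= alpha:
            lo = mid
        else:
            hi = mid
    return lo
```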

Example 10.2: Assume that f : ℜ^n → ℜ is a measurable function, and ξ is an n-dimensional rough fuzzy vector defined on the possibility space (Θ,P(Θ),Pos). For any given confidence levels α and β, we need to design a rough fuzzy simulation to find the maximal value f̄ such that

Ch{f(ξ) ≥ f̄}(α) ≥ β

holds. That is, we must find the maximal value f̄ such that

Cr{θ ∈ Θ | Tr{f(ξ(θ)) ≥ f̄} ≥ β} ≥ α.

We randomly generate θk from Θ such that Pos{θk} ≥ ε, and write νk = Pos{θk}, k = 1, 2, · · · , N, respectively, where ε is a sufficiently small number. For each θk, we search for the maximal value f̄(θk) such that Tr{f(ξ(θk)) ≥ f̄(θk)} ≥ β by rough simulation. For any number r, we have

L(r) = (1/2) ( max_{1≤k≤N} {νk | f̄(θk) ≥ r} + min_{1≤k≤N} {1 − νk | f̄(θk) < r} ).

It follows from monotonicity that we may employ bisection search to find the maximal value r such that L(r) ≥ α. This value is an estimate of f̄. We summarize this process as follows.

Algorithm 10.2 (Rough Fuzzy Simulation)


Step 1. Generate θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 2. Find the maximal value r such that L(r) ≥ α holds.
Step 3. Return r.

In order to find the maximal value f̄ such that Ch{ξ1² + ξ2² + ξ3² ≥ f̄}(0.9) ≥ 0.9, where ξ1, ξ2, ξ3 are rough fuzzy variables defined as

ξ1 = ([ρ1, ρ1 + 1], [ρ1 − 1, ρ1 + 2]), with ρ1 = (1, 2, 3),
ξ2 = ([ρ2, ρ2 + 1], [ρ2 − 1, ρ2 + 2]), with ρ2 = (2, 3, 4),
ξ3 = ([ρ3, ρ3 + 1], [ρ3 − 1, ρ3 + 2]), with ρ3 = (3, 4, 5),

we perform the rough fuzzy simulation with 5000 cycles and obtain f̄ = 18.02.

Example 10.3: Assume that f : ℜ^n → ℜ is a measurable function, and ξ is an n-dimensional rough fuzzy vector defined on the possibility space (Θ,P(Θ),Pos). Then f(ξ) is a rough fuzzy variable whose expected value E[f(ξ)] is

∫_0^{+∞} Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r} dr − ∫_{−∞}^0 Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r} dr.

A rough fuzzy simulation will be introduced to compute the expected value E[f(ξ)]. We randomly sample θk from Θ such that Pos{θk} ≥ ε, and denote νk = Pos{θk} for k = 1, 2, · · · , N, where ε is a sufficiently small number. Then for any number r ≥ 0, the credibility Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r} can be estimated by

(1/2) ( max_{1≤k≤N} {νk | E[f(ξ(θk))] ≥ r} + min_{1≤k≤N} {1 − νk | E[f(ξ(θk))] < r} )

and for any number r < 0, the credibility Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r} can be estimated by

(1/2) ( max_{1≤k≤N} {νk | E[f(ξ(θk))] ≤ r} + min_{1≤k≤N} {1 − νk | E[f(ξ(θk))] > r} )

provided that N is sufficiently large, where E[f(ξ(θk))], k = 1, 2, · · · , N may be estimated by rough simulation.

Algorithm 10.3 (Rough Fuzzy Simulation)
Step 1. Set e = 0.
Step 2. Randomly sample θk from Θ such that Pos{θk} ≥ ε for k = 1, 2, · · · , N, where ε is a sufficiently small number.
Step 3. Let a = min_{1≤k≤N} E[f(ξ(θk))] and b = max_{1≤k≤N} E[f(ξ(θk))].


Step 4. Randomly generate r from [a, b].
Step 5. If r ≥ 0, then e ← e + Cr{θ ∈ Θ | E[f(ξ(θ))] ≥ r}.
Step 6. If r < 0, then e ← e − Cr{θ ∈ Θ | E[f(ξ(θ))] ≤ r}.
Step 7. Repeat the fourth to sixth steps N times.
Step 8. E[f(ξ)] = a ∨ 0 + b ∧ 0 + e · (b − a)/N.

In order to compute the expected value of ξ1ξ2ξ3, where ξ1, ξ2, ξ3 are rough fuzzy variables defined as

ξ1 = ([ρ1, ρ1 + 1], [ρ1 − 1, ρ1 + 2]), with ρ1 = (1, 2, 3),
ξ2 = ([ρ2, ρ2 + 1], [ρ2 − 1, ρ2 + 2]), with ρ2 = (2, 3, 4),
ξ3 = ([ρ3, ρ3 + 1], [ρ3 − 1, ρ3 + 2]), with ρ3 = (3, 4, 5),

we perform the rough fuzzy simulation with 5000 cycles and obtain E[ξ1ξ2ξ3] = 42.55.
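Algorithm 10.3 combines the credibility estimates above with a crude Monte Carlo integration of the expected-value formula. A sketch under the same assumptions as before (pos holds the possibilities ν_k and exps holds the values E[f(ξ(θ_k))] from rough simulation); it is illustrative only, not the book's code.

```python
import random

def rough_fuzzy_expectation(pos, exps, N=5000):
    """Sketch of Algorithm 10.3: estimate E[f(xi)] for a rough fuzzy vector."""
    def cr(r, geq=True):
        # credibility estimate of {E[f(xi(theta))] >= r} (or <= r), as in the text
        hit = [p for p, v in zip(pos, exps) if (v >= r if geq else v <= r)]
        miss = [1.0 - p for p, v in zip(pos, exps) if (v < r if geq else v > r)]
        return 0.5 * ((max(hit) if hit else 0.0) + (min(miss) if miss else 1.0))

    a, b = min(exps), max(exps)
    e = 0.0
    for _ in range(N):
        r = random.uniform(a, b)                                  # Step 4
        e += cr(r, geq=True) if r >= 0 else -cr(r, geq=False)     # Steps 5-6
    return max(a, 0.0) + min(b, 0.0) + e * (b - a) / N            # Step 8
```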


Chapter 11

Random Rough Theory

A random rough variable was introduced by Liu [75] as a rough variable defined on the universal set of random variables, or a rough variable taking “random variable” values.

The emphasis in this chapter is mainly on random rough variables, random rough arithmetic, chance measure, chance distribution, independent and identical distribution, expected value operator, variance, critical values, convergence concepts, laws of large numbers, and random rough simulation.

11.1 Random Rough Variables

Definition 11.1 (Liu [75]) A random rough variable is a function ξ from a rough space (Λ,Δ,A,π) to the set of random variables such that Pr{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ.

Theorem 11.1 Assume that ξ is a random rough variable, and B is a Borel set of ℜ. Then the probability Pr{ξ(λ) ∈ B} is a rough variable.

Proof: Since the probability Pr{ξ(λ) ∈ B} is a measurable function of λ from the rough space (Λ,Δ,A,π) to the set of real numbers, it is a rough variable.

Theorem 11.2 Let ξ be a random rough variable. If the expected value E[ξ(λ)] is finite for each λ, then E[ξ(λ)] is a rough variable.

Proof: In order to prove that the expected value E[ξ(λ)] is a rough variable, we only need to show that E[ξ(λ)] is a measurable function of λ. It is obvious that

E[ξ(λ)] = ∫_0^{+∞} Pr{ξ(λ) ≥ r} dr − ∫_{−∞}^0 Pr{ξ(λ) ≤ r} dr
        = lim_{j→∞} lim_{k→∞} ( ∑_{l=1}^{k} (j/k) Pr{ξ(λ) ≥ lj/k} − ∑_{l=1}^{k} (j/k) Pr{ξ(λ) ≤ −lj/k} ).


Since Pr{ξ(λ) ≥ lj/k} and Pr{ξ(λ) ≤ −lj/k} are all measurable functions for any integers j, k and l, the expected value E[ξ(λ)] is a measurable function of λ. The proof is complete.

Definition 11.2 An n-dimensional random rough vector is a function ξ from a rough space (Λ,Δ,A,π) to the set of n-dimensional random vectors such that Pr{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ^n.

Theorem 11.3 If (ξ1, ξ2, · · · , ξn) is a random rough vector, then ξ1, ξ2, · · · , ξn are random rough variables. Conversely, if ξ1, ξ2, · · · , ξn are random rough variables, and for each λ ∈ Λ, the random variables ξ1(λ), ξ2(λ), · · · , ξn(λ) are independent, then (ξ1, ξ2, · · · , ξn) is a random rough vector.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a random rough vector on the rough space (Λ,Δ,A,π). For any Borel set B of ℜ, the set B × ℜ^{n−1} is a Borel set of ℜ^n. Thus the function

Pr{ξ1(λ) ∈ B} = Pr{ξ1(λ) ∈ B, ξ2(λ) ∈ ℜ, · · · , ξn(λ) ∈ ℜ} = Pr{ξ(λ) ∈ B × ℜ^{n−1}}

is a measurable function of λ. Hence ξ1 is a random rough variable. A similar process may prove that ξ2, ξ3, · · · , ξn are random rough variables.

Conversely, suppose that ξ1, ξ2, · · · , ξn are random rough variables on the rough space (Λ,Δ,A,π). We write ξ = (ξ1, ξ2, · · · , ξn) and define

C = {C ⊂ ℜ^n | Pr{ξ(λ) ∈ C} is a measurable function of λ}.

The vector ξ is a random rough vector if we can prove that C contains all Borel sets of ℜ^n. Let C1, C2, · · · ∈ C, and Ci ↑ C or Ci ↓ C. It follows from the probability continuity theorem that Pr{ξ(λ) ∈ Ci} → Pr{ξ(λ) ∈ C} as i → ∞. Thus Pr{ξ(λ) ∈ C} is a measurable function of λ, and C ∈ C. Hence C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n since

Pr{ξ(λ) ∈ (−∞, a]} = ∏_{i=1}^{n} Pr{ξi(λ) ∈ (−∞, ai]};

Pr{ξ(λ) ∈ (a, b]} = ∏_{i=1}^{n} Pr{ξi(λ) ∈ (ai, bi]};

Pr{ξ(λ) ∈ (b,+∞)} = ∏_{i=1}^{n} Pr{ξi(λ) ∈ (bi,+∞)};

Pr{ξ(λ) ∈ ℜ^n} = 1.


Let F be the class of all finite unions of disjoint intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n. Note that for any disjoint sets C1, C2, · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm, we have

Pr{ξ(λ) ∈ C} = ∑_{i=1}^{m} Pr{ξ(λ) ∈ Ci}.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the class F is an algebra. Since the smallest σ-algebra containing F is just the Borel algebra of ℜ^n, the monotone class theorem implies that C contains all Borel sets of ℜ^n. The theorem is proved.

Theorem 11.4 Let ξ be an n-dimensional random rough vector, and f : ℜ^n → ℜ a measurable function. Then f(ξ) is a random rough variable.

Proof: It is clear that f⁻¹(B) is a Borel set of ℜ^n for any Borel set B of ℜ. Thus, for each λ ∈ Λ, we have

Pr{f(ξ(λ)) ∈ B} = Pr{ξ(λ) ∈ f⁻¹(B)}

which is a measurable function of λ. That is, f(ξ) is a random rough variable. The theorem is proved.

Definition 11.3 (Liu [75], Random Rough Arithmetic on Single Space) Let f : ℜ^n → ℜ be a measurable function, and ξ1, ξ2, · · · , ξn random rough variables defined on the rough space (Λ,Δ,A,π). Then ξ = f(ξ1, ξ2, · · · , ξn) is a random rough variable defined by

ξ(λ) = f(ξ1(λ), ξ2(λ), · · · , ξn(λ)), ∀λ ∈ Λ. (11.1)

Definition 11.4 (Liu [75], Random Rough Arithmetic on Different Spaces) Let f : ℜ^n → ℜ be a measurable function, and ξi random rough variables defined on (Λi,Δi,Ai,πi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a random rough variable defined on the product rough space (Λ,Δ,A,π) as

ξ(λ1, λ2, · · · , λn) = f(ξ1(λ1), ξ2(λ2), · · · , ξn(λn)) (11.2)

for all (λ1, λ2, · · · , λn) ∈ Λ.

11.2 Chance Measure

Definition 11.5 (Liu [75]) Let ξ be a random rough variable, and B a Borel set of ℜ. Then the chance of the random rough event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

Ch{ξ ∈ B}(α) = sup_{Tr{A}≥α} inf_{λ∈A} Pr{ξ(λ) ∈ B}. (11.3)


Theorem 11.5 Let ξ be a random rough variable, and B a Borel set of ℜ. For any given α∗ ∈ (0, 1], write β∗ = Ch{ξ ∈ B}(α∗). Then we have

Tr{λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ β∗} ≥ α∗. (11.4)

Proof: It follows from the definition of chance that β∗ is just the supremum of β satisfying

Tr{λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ β} ≥ α∗.

Thus there exists an increasing sequence {βi} such that

Tr{λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ βi} ≥ α∗

and βi ↑ β∗ as i → ∞. It is also easy to prove that

{λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ βi} ↓ {λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ β∗}

as i → ∞. It follows from the trust continuity theorem that

Tr{λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ β∗} = lim_{i→∞} Tr{λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ βi} ≥ α∗.

The proof is complete.

Theorem 11.6 Let ξ be a random rough variable, and {Bi} a sequence of Borel sets of ℜ. If Bi ↓ B, then we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α). (11.5)

Proof: Write

β = Ch{ξ ∈ B}(α), βi = Ch{ξ ∈ Bi}(α), i = 1, 2, · · ·

Since Bi ↓ B, it is clear that β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξ ∈ Bi}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 11.5 that

Tr{λ ∈ Λ | Pr{ξ(λ) ∈ Bi} ≥ ρ} ≥ Tr{λ ∈ Λ | Pr{ξ(λ) ∈ Bi} ≥ βi} ≥ α.

It follows from the probability continuity theorem that

{λ ∈ Λ | Pr{ξ(λ) ∈ Bi} ≥ ρ} ↓ {λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ ρ}.

It follows from the trust continuity theorem that

Tr{λ ∈ Λ | Pr{ξ(λ) ∈ B} ≥ ρ} = lim_{i→∞} Tr{λ ∈ Λ | Pr{ξ(λ) ∈ Bi} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (11.5) holds.


Theorem 11.7 (a) Let ξ, ξ1, ξ2, · · · be random rough variables such that ξi(λ) ↑ ξ(λ) for each λ ∈ Λ. Then we have

lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α). (11.6)

(b) Let ξ, ξ1, ξ2, · · · be random rough variables such that ξi(λ) ↓ ξ(λ) for each λ ∈ Λ. Then we have

lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α). (11.7)

Proof: (a) Write

β = Ch{ξ ≤ r}(α), βi = Ch{ξi ≤ r}(α), i = 1, 2, · · ·

Since ξi(λ) ↑ ξ(λ) for each λ ∈ Λ, it is clear that {ξi(λ) ≤ r} ↓ {ξ(λ) ≤ r} for each λ ∈ Λ and β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξi ≤ r}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 11.5 that

Tr{λ ∈ Λ | Pr{ξi(λ) ≤ r} ≥ ρ} ≥ Tr{λ ∈ Λ | Pr{ξi(λ) ≤ r} ≥ βi} ≥ α.

Since {ξi(λ) ≤ r} ↓ {ξ(λ) ≤ r} for each λ ∈ Λ, it follows from the probability continuity theorem that

{λ ∈ Λ | Pr{ξi(λ) ≤ r} ≥ ρ} ↓ {λ ∈ Λ | Pr{ξ(λ) ≤ r} ≥ ρ}.

By using the trust continuity theorem, we get

Tr{λ ∈ Λ | Pr{ξ(λ) ≤ r} ≥ ρ} = lim_{i→∞} Tr{λ ∈ Λ | Pr{ξi(λ) ≤ r} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (11.6) holds. Part (b) may be proved similarly.

Variety of Chance Measure

Definition 11.6 Let ξ be a random rough variable, and B a Borel set of ℜ. For any real number α ∈ (0, 1], the α-chance of the random rough event ξ ∈ B is defined as the value of the chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 11.7 Let ξ be a random rough variable, and B a Borel set of ℜ. Then the equilibrium chance of the random rough event ξ ∈ B is defined as

Che{ξ ∈ B} = sup_{0<α≤1} {α | Ch{ξ ∈ B}(α) ≥ α} (11.8)

where Ch denotes the chance measure.


Definition 11.8 Let ξ be a random rough variable, and B a Borel set of ℜ. Then the average chance of the random rough event ξ ∈ B is defined as

Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα (11.9)

where Ch denotes the chance measure.

Definition 11.9 A random rough variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm}(α) ≡ 0; (11.10)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · ·}(α) ≡ 0. (11.11)

11.3 Chance Distribution

Definition 11.10 The chance distribution Φ: [−∞,+∞] × (0, 1] → [0, 1] of a random rough variable ξ is defined by

Φ(x;α) = Ch{ξ ≤ x}(α). (11.12)

Theorem 11.8 The chance distribution Φ(x;α) of a random rough variable is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) an increasing and right-continuous function of x for any fixed α, and

Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α; (11.13)

lim_{x→−∞} Φ(x;α) = 0, ∀α; (11.14)

lim_{x→+∞} Φ(x;α) = 1 if α < 1. (11.15)

Proof: Let Φ(x;α) be the chance distribution of the random rough variable ξ defined on the rough space (Λ,Δ,A,π). Part (a): For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

Φ(x;α1) = sup_{Tr{A}≥α1} inf_{λ∈A} Pr{ξ(λ) ≤ x} ≥ sup_{Tr{A}≥α2} inf_{λ∈A} Pr{ξ(λ) ≤ x} = Φ(x;α2).

Thus Φ(x;α) is a decreasing function of α. We next prove that Φ(x;α) is a left-continuous function of α. Let α ∈ (0, 1] be given and {αi} a sequence


of numbers with αi ↑ α. Since Φ(x;α) is a decreasing function of α, the limit lim_{i→∞} Φ(x;αi) exists and is not less than Φ(x;α). If the limit is equal to Φ(x;α), then Φ(x;α) is left-continuous with respect to α. Otherwise, we have

lim_{i→∞} Φ(x;αi) > Φ(x;α).

Let z∗ = (lim_{i→∞} Φ(x;αi) + Φ(x;α))/2. It is clear that

Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Tr{Ai} ≥ αi such that

inf_{λ∈Ai} Pr{ξ(λ) ≤ x} > z∗.

Now we define

A∗ = ⋃_{i=1}^{∞} Ai.

It is clear that Tr{A∗} ≥ Tr{Ai} ≥ αi. Letting i → ∞, we get Tr{A∗} ≥ α. Thus

Φ(x;α) ≥ inf_{λ∈A∗} Pr{ξ(λ) ≤ x} ≥ z∗.

A contradiction proves part (a).

We now prove part (b). For any x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞, it is clear that

Φ(x1;α) = sup_{Tr{A}≥α} inf_{λ∈A} Pr{ξ(λ) ≤ x1} ≤ sup_{Tr{A}≥α} inf_{λ∈A} Pr{ξ(λ) ≤ x2} = Φ(x2;α).

Therefore, Φ(x;α) is an increasing function of x. We next prove that Φ(x;α) is a right-continuous function of x. Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 11.6 that

lim_{y↓x} Φ(y;α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

Thus Φ(x;α) is a right-continuous function of x.

Since ξ(λ) is a random variable for any λ ∈ Λ, we have Pr{ξ(λ) ≤ −∞} = 0 for any λ ∈ Λ. It follows that

Φ(−∞;α) = sup_{Tr{A}≥α} inf_{λ∈A} Pr{ξ(λ) ≤ −∞} = 0.

Similarly, we have Pr{ξ(λ) ≤ +∞} = 1 for any λ ∈ Λ. Thus

Φ(+∞;α) = sup_{Tr{A}≥α} inf_{λ∈A} Pr{ξ(λ) ≤ +∞} = 1.


Thus (11.13) is proved.

If (11.14) is not true, then there exists a number z∗ > 0 and a sequence {xi} with xi ↓ −∞ such that Φ(xi;α) > z∗ for all i. Writing

Ai = {λ ∈ Λ | Pr{ξ(λ) ≤ xi} > z∗}

for i = 1, 2, · · ·, we have Tr{Ai} ≥ α, and A1 ⊃ A2 ⊃ · · · It follows from the trust continuity theorem that

Tr{⋂_{i=1}^{∞} Ai} = lim_{i→∞} Tr{Ai} ≥ α > 0.

Thus there exists λ∗ such that λ∗ ∈ Ai for all i. Therefore

0 = lim_{i→∞} Pr{ξ(λ∗) ≤ xi} ≥ z∗ > 0.

A contradiction proves (11.14).

If (11.15) is not true, then there exists a number z∗ < 1 and a sequence {xi} with xi ↑ +∞ such that Φ(xi;α) < z∗ for all i. Writing

Ai = {λ ∈ Λ | Pr{ξ(λ) ≤ xi} < z∗}

for i = 1, 2, · · ·, we have

Tr{Ai} = 1 − Tr{λ ∈ Λ | Pr{ξ(λ) ≤ xi} ≥ z∗} > 1 − α

and A1 ⊃ A2 ⊃ · · · It follows from the trust continuity theorem that

Tr{⋂_{i=1}^{∞} Ai} = lim_{i→∞} Tr{Ai} ≥ 1 − α > 0.

Thus there exists λ∗ such that λ∗ ∈ Ai for all i. Therefore

1 = lim_{i→∞} Pr{ξ(λ∗) ≤ xi} ≤ z∗ < 1.

A contradiction proves (11.15). The proof is complete.

Theorem 11.9 Let ξ be a random rough variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) a decreasing and left-continuous function of x for any fixed α.

Proof: Like Theorem 11.8.

Definition 11.11 The chance density function φ: ℜ × (0, 1] → [0,+∞) of a random rough variable ξ is a function such that

Φ(x;α) = ∫_{−∞}^{x} φ(y;α) dy (11.16)

holds for all x ∈ [−∞,+∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.


11.4 Independent and Identical Distribution

This section introduces the concept of independent and identically distributed (iid) random rough variables.

Definition 11.12 The random rough variables ξ1, ξ2, · · · , ξn are said to be iid if and only if

(Pr{ξi(λ) ∈ B1}, Pr{ξi(λ) ∈ B2}, · · · , Pr{ξi(λ) ∈ Bm}), i = 1, 2, · · · , n

are iid rough vectors for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m.

Theorem 11.10 Let ξ1, ξ2, · · · , ξn be iid random rough variables. Then for any Borel set B of ℜ, Pr{ξi(λ) ∈ B}, i = 1, 2, · · · , n are iid rough variables.

Proof: It follows immediately from the definition.

Theorem 11.11 Let f : ℜ → ℜ be a measurable function. If ξ1, ξ2, · · · , ξn are iid random rough variables, then f(ξ1), f(ξ2), · · · , f(ξn) are iid random rough variables.

Proof: We have proved that f(ξ1), f(ξ2), · · · , f(ξn) are random rough variables. For any positive integer m and Borel sets B1, B2, · · · , Bm of ℜ, since f^{−1}(B1), f^{−1}(B2), · · · , f^{−1}(Bm) are Borel sets, we know that

(Pr{ξi(λ) ∈ f^{−1}(B1)}, Pr{ξi(λ) ∈ f^{−1}(B2)}, · · · , Pr{ξi(λ) ∈ f^{−1}(Bm)}),

i = 1, 2, · · · , n are iid rough vectors. Equivalently, the rough vectors

(Pr{f(ξi(λ)) ∈ B1}, Pr{f(ξi(λ)) ∈ B2}, · · · , Pr{f(ξi(λ)) ∈ Bm}),

i = 1, 2, · · · , n are iid. Hence f(ξ1), f(ξ2), · · · , f(ξn) are iid random rough variables.

Theorem 11.12 Assume that ξ1, ξ2, · · · , ξn are iid random rough variables such that E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are all finite for each λ. Then E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are iid rough variables.

Proof: For any λ ∈ Λ, it follows from the expected value operator that

E[ξi(λ)] = ∫_0^{+∞} Pr{ξi(λ) ≥ r} dr − ∫_{−∞}^{0} Pr{ξi(λ) ≤ r} dr
         = lim_{j→∞} lim_{k→∞} ( ∑_{l=1}^{2^k} (j/2^k) Pr{ξi(λ) ≥ lj/2^k} − ∑_{l=1}^{2^k} (j/2^k) Pr{ξi(λ) ≤ −lj/2^k} )

for i = 1, 2, · · · , n. Now we write

η+_i(λ) = ∫_0^{∞} Pr{ξi(λ) ≥ r} dr, η−_i(λ) = ∫_{−∞}^{0} Pr{ξi(λ) ≤ r} dr,

η+_{ij}(λ) = ∫_0^{j} Pr{ξi(λ) ≥ r} dr, η−_{ij}(λ) = ∫_{−j}^{0} Pr{ξi(λ) ≤ r} dr,

η+_{ijk}(λ) = ∑_{l=1}^{2^k} (j/2^k) Pr{ξi(λ) ≥ lj/2^k}, η−_{ijk}(λ) = ∑_{l=1}^{2^k} (j/2^k) Pr{ξi(λ) ≤ −lj/2^k}

for any positive integers j, k and i = 1, 2, · · · , n. It follows from the monotonicity of the functions Pr{ξi(λ) ≥ r} and Pr{ξi(λ) ≤ r} that the sequences {η+_{ijk}(λ)} and {η−_{ijk}(λ)} satisfy (a) for each j and k, (η+_{ijk}(λ), η−_{ijk}(λ)), i = 1, 2, · · · , n are iid rough vectors; and (b) for each i and j, η+_{ijk}(λ) ↑ η+_{ij}(λ) and η−_{ijk}(λ) ↑ η−_{ij}(λ) as k → ∞.

For any real numbers x, y, xi, yi, i = 1, 2, · · · , n, it follows from property (a) that

Tr{η+_{ijk}(λ) ≤ xi, η−_{ijk}(λ) ≤ yi, i = 1, 2, · · · , n} = ∏_{i=1}^{n} Tr{η+_{ijk}(λ) ≤ xi, η−_{ijk}(λ) ≤ yi},

Tr{η+_{ijk}(λ) ≤ x, η−_{ijk}(λ) ≤ y} = Tr{η+_{i′jk}(λ) ≤ x, η−_{i′jk}(λ) ≤ y}, ∀i, i′.

It follows from property (b) that

{η+_{ijk}(λ) ≤ xi, η−_{ijk}(λ) ≤ yi, i = 1, 2, · · · , n} → {η+_{ij}(λ) ≤ xi, η−_{ij}(λ) ≤ yi, i = 1, 2, · · · , n},

{η+_{ijk}(λ) ≤ x, η−_{ijk}(λ) ≤ y} → {η+_{ij}(λ) ≤ x, η−_{ij}(λ) ≤ y}

as k → ∞. By using the trust continuity theorem, we get

Tr{η+_{ij}(λ) ≤ xi, η−_{ij}(λ) ≤ yi, i = 1, 2, · · · , n} = ∏_{i=1}^{n} Tr{η+_{ij}(λ) ≤ xi, η−_{ij}(λ) ≤ yi},

Tr{η+_{ij}(λ) ≤ x, η−_{ij}(λ) ≤ y} = Tr{η+_{i′j}(λ) ≤ x, η−_{i′j}(λ) ≤ y}, ∀i, i′.

Thus (η+_{ij}(λ), η−_{ij}(λ)), i = 1, 2, · · · , n are iid rough vectors, and satisfy (c) for each j, (η+_{ij}(λ), η−_{ij}(λ)), i = 1, 2, · · · , n are iid rough vectors; and (d) for each i, η+_{ij}(λ) ↑ η+_i(λ) and η−_{ij}(λ) ↑ η−_i(λ) as j → ∞.

A similar process may prove that (η+_i(λ), η−_i(λ)), i = 1, 2, · · · , n are iid rough vectors. Thus E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are iid rough variables. The theorem is proved.


11.5 Expected Value Operator

Definition 11.13 (Liu [75]) Let ξ be a random rough variable. Then its expected value is defined by

E[ξ] = ∫_0^{+∞} Tr{λ ∈ Λ | E[ξ(λ)] ≥ r} dr − ∫_{−∞}^{0} Tr{λ ∈ Λ | E[ξ(λ)] ≤ r} dr

provided that at least one of the two integrals is finite.
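As a small illustration (not part of the original text), take the rough space with lower approximation Λ = [1, 2], upper approximation Δ = [0, 3], and π the Lebesgue measure, and let ξ(λ) ∼ N(λ, 1). Then E[ξ(λ)] = λ for each λ, so the nested definition reduces to the expected value of the rough variable λ itself. Assuming the trust measure that averages the uniform measures on Λ and Δ, a direct computation of the two integrals gives

E[ξ] = (1/2)·(1 + 2)/2 + (1/2)·(0 + 3)/2 = 1.5.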

Theorem 11.13 Assume that ξ and η are random rough variables with finite expected values. Then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (11.17)

Proof: For any λ ∈ Λ, by the linearity of the expected value operator of random variables, we have E[aξ(λ) + bη(λ)] = aE[ξ(λ)] + bE[η(λ)]. It follows from the linearity of the expected value operator of rough variables that E[aξ + bη] = E[aE[ξ(λ)] + bE[η(λ)]] = aE[E[ξ(λ)]] + bE[E[η(λ)]] = aE[ξ] + bE[η]. The theorem is proved.

11.6 Variance, Covariance and Moments

Definition 11.14 (Liu [75]) Let ξ be a random rough variable with finite expected value E[ξ]. The variance of ξ is defined as V[ξ] = E[(ξ − E[ξ])^2].

Theorem 11.14 If ξ is a random rough variable with finite expected value, a and b are real numbers, then V[aξ + b] = a^2 V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)^2] = a^2 E[(ξ − E[ξ])^2] = a^2 V[ξ].

Theorem 11.15 Assume that ξ is a random rough variable whose expected value exists. Then we have

V[E[ξ(λ)]] ≤ V[ξ]. (11.18)

Proof: Denote the expected value of ξ by e. It follows from Theorem 4.51 that

V[E[ξ(λ)]] = E[(E[ξ(λ)] − e)^2] ≤ E[E[(ξ(λ) − e)^2]] = V[ξ].

The theorem is proved.

Theorem 11.16 Let ξ be a random rough variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e}(1) = 1.


Proof: If V[ξ] = 0, then it follows from V[ξ] = E[(ξ − e)^2] that

∫_0^{+∞} Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} dr = 0

which implies that Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} = 0 for any r > 0. Therefore, Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] = 0} = 1. That is, there exists a set A∗ with Tr{A∗} = 1 such that E[(ξ(λ) − e)^2] = 0 for each λ ∈ A∗. It follows from Theorem 2.39 that Pr{ξ(λ) = e} = 1 for each λ ∈ A∗. Hence

Ch{ξ = e}(1) = sup_{Tr{A}≥1} inf_{λ∈A} Pr{ξ(λ) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 11.5 that there exists a set A∗ with Tr{A∗} = 1 such that

inf_{λ∈A∗} Pr{ξ(λ) = e} = 1.

That is, Pr{(ξ(λ) − e)^2 ≥ r} = 0 for each r > 0 and each λ ∈ A∗. Thus

E[(ξ(λ) − e)^2] = ∫_0^{+∞} Pr{(ξ(λ) − e)^2 ≥ r} dr = 0

for each λ ∈ A∗. It follows that Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} = 0 for any r > 0. Hence

V[ξ] = ∫_0^{+∞} Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} dr = 0.

The theorem is proved.

Definition 11.15 Let ξ and η be random rough variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])]. (11.19)

Definition 11.16 For any positive integer k, the expected value E[ξ^k] is called the kth moment of the random rough variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the random rough variable ξ.

11.7 Optimistic and Pessimistic Values

Definition 11.17 (Liu [75]) Let ξ be a random rough variable, and γ, δ ∈ (0, 1]. Then

ξsup(γ, δ) = sup{r | Ch{ξ ≥ r}(γ) ≥ δ} (11.20)

is called the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r | Ch{ξ ≤ r}(γ) ≥ δ} (11.21)

is called the (γ, δ)-pessimistic value to ξ.


Theorem 11.17 Let ξ be a random rough variable and γ, δ ∈ (0, 1]. Assume that ξsup(γ, δ) is the (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. Then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (11.22)

Proof: It follows from the definition of (γ, δ)-pessimistic value that there exists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ ξinf(γ, δ) as i → ∞. Since Ch{ξ ≤ x}(γ) is a right-continuous function of x, the inequality Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ holds.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥ δ and xi ↑ ξsup(γ, δ) as i → ∞. Since Ch{ξ ≥ x}(γ) is a left-continuous function of x, the inequality Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ holds. The theorem is proved.

Theorem 11.18 Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of the random rough variable ξ, respectively. If γ ≤ 0.5, then we have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (11.23)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (11.24)

where δ1 and δ2 are defined by

δ1 = sup_{λ∈Λ} {ξ(λ)sup(1 − δ) − ξ(λ)inf(1 − δ)},

δ2 = sup_{λ∈Λ} {ξ(λ)sup(δ) − ξ(λ)inf(δ)},

and ξ(λ)sup(δ) and ξ(λ)inf(δ) are the δ-optimistic and δ-pessimistic values of the random variable ξ(λ) for each λ, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Λ1 = {λ ∈ Λ | Pr{ξ(λ) > ξsup(γ, δ) + ε} ≥ δ},

Λ2 = {λ ∈ Λ | Pr{ξ(λ) < ξinf(γ, δ) − ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Tr{Λ1} < γ and Tr{Λ2} < γ. Thus Tr{Λ1} + Tr{Λ2} < γ + γ ≤ 1. This fact implies that Λ1 ∪ Λ2 ≠ Λ. Let λ∗ ∉ Λ1 ∪ Λ2. Then we have

Pr{ξ(λ∗) > ξsup(γ, δ) + ε} < δ,

Pr{ξ(λ∗) < ξinf(γ, δ) − ε} < δ.

Since Pr is self-dual, we have

Pr{ξ(λ∗) ≤ ξsup(γ, δ) + ε} > 1 − δ,

Pr{ξ(λ∗) ≥ ξinf(γ, δ) − ε} > 1 − δ.

It follows from the definitions of ξ(λ∗)sup(1 − δ) and ξ(λ∗)inf(1 − δ) that

ξsup(γ, δ) + ε ≥ ξ(λ∗)inf(1 − δ),

ξinf(γ, δ) − ε ≤ ξ(λ∗)sup(1 − δ)

which implies that

ξinf(γ, δ) − ε − (ξsup(γ, δ) + ε) ≤ ξ(λ∗)sup(1 − δ) − ξ(λ∗)inf(1 − δ) ≤ δ1.

Letting ε → 0, we obtain (11.23).

Next we prove the inequality (11.24). Assume γ > 0.5. For any given ε > 0, we define

Λ1 = {λ ∈ Λ | Pr{ξ(λ) ≥ ξsup(γ, δ) − ε} ≥ δ},

Λ2 = {λ ∈ Λ | Pr{ξ(λ) ≤ ξinf(γ, δ) + ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Tr{Λ1} ≥ γ and Tr{Λ2} ≥ γ. Thus Tr{Λ1} + Tr{Λ2} ≥ γ + γ > 1. This fact implies that Λ1 ∩ Λ2 ≠ ∅. Let λ∗ ∈ Λ1 ∩ Λ2. Then we have

Pr{ξ(λ∗) ≥ ξsup(γ, δ) − ε} ≥ δ,

Pr{ξ(λ∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(λ∗)sup(δ) and ξ(λ∗)inf(δ) that

ξsup(γ, δ) − ε ≤ ξ(λ∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(λ∗)inf(δ)

which implies that

ξsup(γ, δ) − ε − (ξinf(γ, δ) + ε) ≤ ξ(λ∗)sup(δ) − ξ(λ∗)inf(δ) ≤ δ2.

The inequality (11.24) is proved by letting ε → 0.

11.8 Convergence Concepts

This section introduces four types of sequence convergence concepts: convergence a.s., convergence in chance, convergence in mean, and convergence in distribution.

Definition 11.18 Suppose that ξ, ξ1, ξ2, · · · are random rough variables defined on the rough space (Λ,Δ,A, π). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ A with Tr{A} = 1 such that {ξi(λ)} converges a.s. to ξ(λ) for every λ ∈ A.


Definition 11.19 Suppose that ξ, ξ1, ξ2, · · · are random rough variables. We say that the sequence {ξi} converges in chance to ξ if

lim_{i→∞} lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 0 (11.25)

for every ε > 0.

Definition 11.20 Suppose that ξ, ξ1, ξ2, · · · are random rough variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0. (11.26)

Definition 11.21 Suppose that Φ, Φ1, Φ2, · · · are the chance distributions of the random rough variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x;α) → Φ(x;α) for all continuity points (x;α) of Φ.

11.9 Laws of Large Numbers

This section introduces four laws of large numbers of random rough variable.

Theorem 11.19 Let {ξi} be a sequence of independent but not necessarily identically distributed random rough variables with a common expected value e. If there exists a number a > 0 such that V[ξi] < a for all i, then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e as n → ∞.

Proof: Since {ξi} is a sequence of independent random rough variables, we know that {E[ξi(λ)]} is a sequence of independent rough variables. By using Theorem 11.15, we get V[E[ξi(λ)]] ≤ V[ξi] < a for each i. It follows from the weak law of large numbers of rough variables that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e.

Theorem 11.20 Let {ξi} be a sequence of iid random rough variables with a finite expected value e. Then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e as n → ∞.

Proof: Since {ξi} is a sequence of iid random rough variables with a finite expected value e, we know that {E[ξi(λ)]} is a sequence of iid rough variables with finite expected value e. It follows from the weak law of large numbers of rough variables that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e.

Theorem 11.21 Let {ξi} be a sequence of independent random rough variables with a common expected value e. If

∑_{i=1}^{∞} V[ξi]/i^2 < ∞, (11.27)

then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of independent random rough variables, we know that {E[ξi(λ)]} is a sequence of independent rough variables. By using Theorem 11.15, we get V[E[ξi(λ)]] ≤ V[ξi] for each i. It follows from the strong law of large numbers of rough variables that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e.

Theorem 11.22 Suppose that {ξi} is a sequence of iid random rough variables with a finite expected value e. Then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of iid random rough variables, we know that {E[ξi(λ)]} is a sequence of iid rough variables with a finite expected value e. It follows from the strong law of large numbers of rough variables that

(1/n) ∑_{i=1}^{n} E[ξi(λ)] → e, a.s.

as n → ∞. The proof is complete.

11.10 Random Rough Simulations

In this section, we introduce random rough simulations for finding critical values, computing chance functions, and calculating expected values.

Example 11.1: Suppose that ξ is an n-dimensional random rough vector defined on the rough space (Λ,Δ,A, π), and f : ℜ^n → ℜ^m is a measurable function. For any real number α ∈ (0, 1], we design a random rough simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). That is, we should find the supremum β such that

Tr{λ ∈ Λ | Pr{f(ξ(λ)) ≤ 0} ≥ β} ≥ α. (11.28)

We sample λ1, λ2, · · · , λN from Δ and λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π. For any number v, let N(v) denote the number of λk satisfying Pr{f(ξ(λk)) ≤ 0} ≥ v for k = 1, 2, · · · , N, and N̄(v) denote the number of λ̄k satisfying Pr{f(ξ(λ̄k)) ≤ 0} ≥ v for k = 1, 2, · · · , N, where Pr{·} may be estimated by stochastic simulation. Then we may find the maximal value v such that

(N(v) + N̄(v)) / (2N) ≥ α. (11.29)

This value is an estimation of β.

Algorithm 11.1 (Random Rough Simulation)
Step 1. Generate λ1, λ2, · · · , λN from Δ according to the measure π.
Step 2. Generate λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π.
Step 3. Find the maximal value v such that (11.29) holds.
Step 4. Return v.

Now we consider the following two random rough variables

ξ1 ∼ N(ρ1, 1), with ρ1 = ([1, 2], [0, 3]),
ξ2 ∼ N(ρ2, 1), with ρ2 = ([2, 3], [1, 4]).

A run of random rough simulation with 5000 cycles shows that

Ch{ξ1 + ξ2 ≥ 2}(0.9) = 0.74.
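The following is a minimal Python sketch of Algorithm 11.1, not code from the book. It assumes that a rough parameter ρ = ([a, b], [c, d]) means uniform sampling from the lower approximation Λ = [a, b] and the upper approximation Δ = [c, d], and it estimates Pr{·} by plain Monte Carlo; all function names are illustrative.

```python
import numpy as np

def alpha_chance(sample_f, rho, alpha, N=1000, M=500, seed=0):
    """Estimate Ch{f(xi) <= 0}(alpha), i.e. the supremum beta in (11.28)."""
    rng = np.random.default_rng(seed)
    lam = np.stack([rng.uniform(c, d, N) for _, (c, d) in rho])      # samples from Delta
    lam_bar = np.stack([rng.uniform(a, b, N) for (a, b), _ in rho])  # samples from Lambda
    def prob(ls):
        # Monte Carlo estimate of Pr{f(xi(lambda)) <= 0} for each sampled lambda
        return (sample_f(ls, M, rng) <= 0).mean(axis=0)
    p, p_bar = prob(lam), prob(lam_bar)
    # maximal v with (N(v) + N_bar(v)) / (2N) >= alpha, mirroring (11.29)
    for v in np.sort(np.concatenate([p, p_bar]))[::-1]:
        if ((p >= v).sum() + (p_bar >= v).sum()) / (2 * N) >= alpha:
            return float(v)
    return 0.0

# Illustration with xi1 ~ N(rho1, 1), xi2 ~ N(rho2, 1) and f = 2 - (xi1 + xi2),
# so that {f(xi) <= 0} is the event {xi1 + xi2 >= 2} from the text.
rho = [((1, 2), (0, 3)), ((2, 3), (1, 4))]
def sample_f(ls, M, rng):
    xi1 = rng.normal(ls[0], 1, (M, ls.shape[1]))
    xi2 = rng.normal(ls[1], 1, (M, ls.shape[1]))
    return 2 - (xi1 + xi2)

print(alpha_chance(sample_f, rho, alpha=0.9))
```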

Example 11.2: Assume that ξ is an n-dimensional random rough vector on the rough space (Λ,Δ,A, π), and f : ℜ^n → ℜ is a measurable function. For any given confidence levels α and β, let us find the maximal value f̄ such that

Ch{f(ξ) ≥ f̄}(α) ≥ β (11.30)

holds. That is, we should compute the maximal value f̄ such that

Tr{λ ∈ Λ | Pr{f(ξ(λ)) ≥ f̄} ≥ β} ≥ α (11.31)

holds. We sample λ1, λ2, · · · , λN from Δ and λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π. For any number v, let N(v) denote the number of λk satisfying Pr{f(ξ(λk)) ≥ v} ≥ β for k = 1, 2, · · · , N, and N̄(v) denote the number of λ̄k satisfying Pr{f(ξ(λ̄k)) ≥ v} ≥ β for k = 1, 2, · · · , N, where Pr{·} may be estimated by stochastic simulation. Then we may find the maximal value v such that

(N(v) + N̄(v)) / (2N) ≥ α. (11.32)

This value is an estimation of f̄.

Algorithm 11.2 (Random Rough Simulation)
Step 1. Generate λ1, λ2, · · · , λN from Δ according to the measure π.
Step 2. Generate λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π.
Step 3. Find the maximal value v such that (11.32) holds.
Step 4. Return v.

We now find the maximal value f̄ such that Ch{ξ1^2 + ξ2^2 ≥ f̄}(0.9) ≥ 0.9, where ξ1 and ξ2 are random rough variables defined as

ξ1 ∼ N(ρ1, 1), with ρ1 = ([1, 2], [0, 3]),
ξ2 ∼ N(ρ2, 1), with ρ2 = ([2, 3], [1, 4]).


A run of random rough simulation with 5000 cycles shows that f̄ = 1.67.
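Below is a hypothetical Python sketch of Algorithm 11.2, not the book's code, under the same sampling assumptions as before. Instead of scanning candidate values v explicitly, it uses the shortcut that, for each sampled λ, the largest v with Pr{f(ξ(λ)) ≥ v} ≥ β is approximately the empirical (1 − β)-quantile of the Monte Carlo samples of f.

```python
import numpy as np

def critical_value(sample_f, rho, alpha, beta, N=1000, M=500, seed=0):
    """Estimate the maximal f_bar with Ch{f(xi) >= f_bar}(alpha) >= beta."""
    rng = np.random.default_rng(seed)
    lam = np.stack([rng.uniform(c, d, N) for _, (c, d) in rho])      # from Delta
    lam_bar = np.stack([rng.uniform(a, b, N) for (a, b), _ in rho])  # from Lambda
    def per_lambda_value(ls):
        vals = sample_f(ls, M, rng)                   # shape (M, N)
        # approx. the largest v with Pr{f(xi(lambda)) >= v} >= beta
        return np.quantile(vals, 1.0 - beta, axis=0)
    q = np.concatenate([per_lambda_value(lam), per_lambda_value(lam_bar)])
    # largest v supported by at least a trust-fraction alpha of the pooled samples,
    # mirroring the counting condition (11.32)
    k = int(np.ceil(alpha * 2 * N))
    return float(np.sort(q)[::-1][k - 1])

# Illustration with f(xi) = xi1^2 + xi2^2 for the same random rough variables.
rho = [((1, 2), (0, 3)), ((2, 3), (1, 4))]
def sample_f(ls, M, rng):
    xi1 = rng.normal(ls[0], 1, (M, ls.shape[1]))
    xi2 = rng.normal(ls[1], 1, (M, ls.shape[1]))
    return xi1**2 + xi2**2

print(critical_value(sample_f, rho, alpha=0.9, beta=0.9))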

Example 11.3: Assume that ξ is an n-dimensional random rough vector on the rough space (Λ,Δ,A, π), and f : ℜ^n → ℜ is a measurable function. One problem is to calculate the expected value E[f(ξ)]. Note that, for each λ ∈ Λ, we may calculate the expected value E[f(ξ(λ))] by stochastic simulation. Since E[f(ξ)] is essentially the expected value of the rough variable E[f(ξ(λ))], we may combine rough simulation and stochastic simulation to produce a random rough simulation.

Algorithm 11.3 (Random Rough Simulation)
Step 1. Set L = 0.
Step 2. Generate λ from Δ according to the measure π.
Step 3. Generate λ̄ from Λ according to the measure π.
Step 4. L ← L + E[f(ξ(λ))] + E[f(ξ(λ̄))].
Step 5. Repeat the second to fourth steps N times.
Step 6. Return L/(2N).

We employ the random rough simulation to calculate the expected value of ξ1ξ2, where ξ1 and ξ2 are random rough variables defined as

ξ1 ∼ N(ρ1, 1), with ρ1 = ([1, 2], [0, 3]),
ξ2 ∼ N(ρ2, 1), with ρ2 = ([2, 3], [1, 4]).

A run of random rough simulation with 5000 cycles shows that E[ξ1ξ2] = 3.75.
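A short hypothetical sketch of Algorithm 11.3 follows, not the book's code, under the same sampling assumptions as above: E[f(ξ(λ))] is estimated by Monte Carlo for each sampled λ, and the results from the Δ-samples and the Λ-samples are averaged.

```python
import numpy as np

def expected_value(sample_f, rho, N=1000, M=500, seed=0):
    """Estimate E[f(xi)] by combining rough and stochastic simulation."""
    rng = np.random.default_rng(seed)
    lam = np.stack([rng.uniform(c, d, N) for _, (c, d) in rho])      # from Delta
    lam_bar = np.stack([rng.uniform(a, b, N) for (a, b), _ in rho])  # from Lambda
    e = sample_f(lam, M, rng).mean(axis=0)          # E[f(xi(lambda))] for each lambda
    e_bar = sample_f(lam_bar, M, rng).mean(axis=0)  # E[f(xi(lambda_bar))] likewise
    return float((e.sum() + e_bar.sum()) / (2 * N))

# Illustration with f(xi) = xi1 * xi2 for the same random rough variables.
rho = [((1, 2), (0, 3)), ((2, 3), (1, 4))]
def sample_f(ls, M, rng):
    return rng.normal(ls[0], 1, (M, ls.shape[1])) * rng.normal(ls[1], 1, (M, ls.shape[1]))

print(expected_value(sample_f, rho))
```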


Chapter 12

Fuzzy Rough Theory

A fuzzy rough variable was defined by Liu [75] as a rough variable on the universal set of fuzzy variables, or a rough variable taking “fuzzy variable” values.

The emphasis in this chapter is mainly on fuzzy rough variable, fuzzy rough arithmetic, chance measure, chance distribution, independent and identical distribution, expected value operator, variance, critical values, convergence concepts, laws of large numbers, and fuzzy rough simulation.

12.1 Fuzzy Rough Variables

Definition 12.1 (Liu [75]) A fuzzy rough variable is a function ξ from a rough space (Λ,Δ,A, π) to the set of fuzzy variables such that Pos{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ.

Remark 12.1: Note that the concept is very different from the fuzzy rough set introduced by Dubois and Prade [27].

Theorem 12.1 Assume that ξ is a fuzzy rough variable. Then for any Borel set B of ℜ, we have
(a) the possibility Pos{ξ(λ) ∈ B} is a rough variable;
(b) the necessity Nec{ξ(λ) ∈ B} is a rough variable;
(c) the credibility Cr{ξ(λ) ∈ B} is a rough variable.

Proof: Since the possibility Pos{ξ(λ) ∈ B} is a measurable function of λ from the rough space (Λ,Δ,A, π) to ℜ, it is a rough variable. It follows from Nec{B} = 1 − Pos{B^c} and Cr{B} = (Pos{B} + Nec{B})/2 that Nec{ξ(λ) ∈ B} and Cr{ξ(λ) ∈ B} are rough variables. The theorem is proved.

Theorem 12.2 Let ξ be a fuzzy rough variable. If the expected value E[ξ(λ)] is finite for each λ, then E[ξ(λ)] is a rough variable.


Proof: In order to prove that the expected value E[ξ(λ)] is a rough variable, we only need to show that E[ξ(λ)] is a measurable function of λ. It is obvious that

E[ξ(λ)] = ∫_0^{+∞} Cr{ξ(λ) ≥ r} dr − ∫_{−∞}^{0} Cr{ξ(λ) ≤ r} dr
        = lim_{j→∞} lim_{k→∞} ( ∑_{l=1}^{k} (j/k) Cr{ξ(λ) ≥ lj/k} − ∑_{l=1}^{k} (j/k) Cr{ξ(λ) ≤ −lj/k} ).

Since Cr{ξ(λ) ≥ lj/k} and Cr{ξ(λ) ≤ −lj/k} are all measurable functions for any integers j, k and l, the expected value E[ξ(λ)] is a measurable function of λ. The proof is complete.

Definition 12.2 An n-dimensional fuzzy rough vector is a function ξ from a rough space (Λ,Δ,A, π) to the set of n-dimensional fuzzy vectors such that Pos{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ^n.

Theorem 12.3 If (ξ1, ξ2, · · · , ξn) is a fuzzy rough vector, then ξ1, ξ2, · · · , ξn are fuzzy rough variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a fuzzy rough vector on the rough space (Λ,Δ,A, π). For any Borel set B of ℜ, the set B × ℜ^{n−1} is a Borel set of ℜ^n. Note that

Pos{ξ1(λ) ∈ B} = Pos{ξ1(λ) ∈ B, ξ2(λ) ∈ ℜ, · · · , ξn(λ) ∈ ℜ} = Pos{ξ(λ) ∈ B × ℜ^{n−1}}

is a measurable function of λ. Thus ξ1 is a fuzzy rough variable. A similar process may prove that ξ2, ξ3, · · · , ξn are fuzzy rough variables.

Theorem 12.1 Let ξ be an n-dimensional fuzzy rough vector, and f : ℜ^n → ℜ a measurable function. Then f(ξ) is a fuzzy rough variable.

Proof: It is clear that f^{−1}(B) is a Borel set of ℜ^n for any Borel set B of ℜ. Thus, for each λ ∈ Λ, we have

Pos{f(ξ(λ)) ∈ B} = Pos{ξ(λ) ∈ f^{−1}(B)}

which is a measurable function of λ. That is, f(ξ) is a fuzzy rough variable. The theorem is proved.

Definition 12.3 (Liu [75], Fuzzy Rough Arithmetic on Single Space) Let f : ℜ^n → ℜ be a measurable function, and ξ1, ξ2, · · · , ξn fuzzy rough variables defined on the rough space (Λ,Δ,A, π). Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy rough variable defined by

ξ(λ) = f(ξ1(λ), ξ2(λ), · · · , ξn(λ)), ∀λ ∈ Λ. (12.1)


Definition 12.4 (Liu [75], Fuzzy Rough Arithmetic on Different Spaces) Let f : ℜ^n → ℜ be a measurable function, and ξi fuzzy rough variables defined on (Λi,Δi,Ai, πi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy rough variable defined on the product rough space (Λ,Δ,A, π) as

ξ(λ1, λ2, · · · , λn) = f(ξ1(λ1), ξ2(λ2), · · · , ξn(λn)) (12.2)

for all (λ1, λ2, · · · , λn) ∈ Λ.

12.2 Chance Measure

Definition 12.5 (Liu [75]) Let ξ be a fuzzy rough variable, and B a Borel set of ℜ. Then the chance of the fuzzy rough event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

Ch{ξ ∈ B}(α) = sup_{Tr{A}≥α} inf_{λ∈A} Cr{ξ(λ) ∈ B}. (12.3)

Theorem 12.4 Let ξ be a fuzzy rough variable, and B a Borel set of ℜ. Write β∗ = Ch{ξ ∈ B}(α∗). Then we have

Tr{λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ β∗} ≥ α∗. (12.4)

Proof: Since β∗ is the supremum of β satisfying

Tr{λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ β} ≥ α∗,

there exists an increasing sequence {βi} such that

Tr{λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ βi} ≥ α∗

and βi ↑ β∗ as i → ∞. It follows from

{λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ βi} ↓ {λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ β∗}

and the trust continuity theorem that

Tr{λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ β∗} = lim_{i→∞} Tr{λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ βi} ≥ α∗.

The proof is complete.

Theorem 12.5 Let ξ be a fuzzy rough variable, and {Bi} a sequence of Borel sets of ℜ such that Bi ↓ B. If lim_{i→∞} Ch{ξ ∈ Bi}(α) > 0.5 or Ch{ξ ∈ B}(α) ≥ 0.5, then we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α). (12.5)


Proof: First we suppose that lim_{i→∞} Ch{ξ ∈ Bi}(α) > 0.5. Write

β = Ch{ξ ∈ B}(α), βi = Ch{ξ ∈ Bi}(α), i = 1, 2, · · ·

Since Bi ↓ B, it is clear that β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξ ∈ Bi}(α) > 0.5

and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 12.4 that

Tr{λ ∈ Λ | Cr{ξ(λ) ∈ Bi} ≥ ρ} ≥ Tr{λ ∈ Λ | Cr{ξ(λ) ∈ Bi} ≥ βi} ≥ α.

Since ρ > 0.5, by using the credibility semicontinuity law, it is easy to verify that

{λ ∈ Λ | Cr{ξ(λ) ∈ Bi} ≥ ρ} ↓ {λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ ρ}.

It follows from the trust continuity theorem that

Tr{λ ∈ Λ | Cr{ξ(λ) ∈ B} ≥ ρ} = lim_{i→∞} Tr{λ ∈ Λ | Cr{ξ(λ) ∈ Bi} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (12.5) holds. Under the condition Ch{ξ ∈ B}(α) ≥ 0.5, if lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ B}(α), then (12.5) holds. Otherwise, we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) > Ch{ξ ∈ B}(α) ≥ 0.5

which also implies (12.5).

Theorem 12.6 (a) Let ξ, ξ1, ξ2, · · · be fuzzy rough variables such that ξi(λ) ↑ ξ(λ) for each λ ∈ Λ. If lim_{i→∞} Ch{ξi ≤ r}(α) > 0.5 or Ch{ξ ≤ r}(α) ≥ 0.5, then

lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α). (12.6)

(b) Let ξ, ξ1, ξ2, · · · be fuzzy rough variables such that ξi(λ) ↓ ξ(λ) for each λ ∈ Λ. If lim_{i→∞} Ch{ξi ≥ r}(α) > 0.5 or Ch{ξ ≥ r}(α) ≥ 0.5, then we have

lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α). (12.7)

Proof: (a) Suppose lim_{i→∞} Ch{ξi ≤ r}(α) > 0.5 and write

β = Ch{ξ ≤ r}(α), βi = Ch{ξi ≤ r}(α), i = 1, 2, · · ·

Since ξi(λ) ↑ ξ(λ) for each λ ∈ Λ, it is clear that {ξi(λ) ≤ r} ↓ {ξ(λ) ≤ r} for each λ ∈ Λ and β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξi ≤ r}(α) > 0.5

and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 12.4 that

Tr{λ ∈ Λ | Cr{ξi(λ) ≤ r} ≥ ρ} ≥ Tr{λ ∈ Λ | Cr{ξi(λ) ≤ r} ≥ βi} ≥ α.

Since ρ > 0.5 and {ξi(λ) ≤ r} ↓ {ξ(λ) ≤ r} for each λ ∈ Λ, it follows from the credibility semicontinuity law that

{λ ∈ Λ | Cr{ξi(λ) ≤ r} ≥ ρ} ↓ {λ ∈ Λ | Cr{ξ(λ) ≤ r} ≥ ρ}.

By using the trust continuity theorem, we get

Tr{λ ∈ Λ | Cr{ξ(λ) ≤ r} ≥ ρ} = lim_{i→∞} Tr{λ ∈ Λ | Cr{ξi(λ) ≤ r} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (12.6) holds. Under the condition Ch{ξ ≤ r}(α) ≥ 0.5, if lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{ξ ≤ r}(α), then (12.6) holds. Otherwise, we have

lim_{i→∞} Ch{ξi ≤ r}(α) > Ch{lim_{i→∞} ξi ≤ r}(α) ≥ 0.5

which also implies (12.6). The part (b) may be proved similarly.

Variety of Chance Measure

Definition 12.6 Let ξ be a fuzzy rough variable, and B a Borel set of ℜ. For any real number α ∈ (0, 1], the α-chance of the fuzzy rough event ξ ∈ B is defined as the value of the chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 12.7 Let ξ be a fuzzy rough variable, and B a Borel set of ℜ. Then the equilibrium chance of the fuzzy rough event ξ ∈ B is defined as

Che{ξ ∈ B} = sup_{0<α≤1} {α | Ch{ξ ∈ B}(α) ≥ α} (12.8)

where Ch denotes the chance measure.

Definition 12.8 Let ξ be a fuzzy rough variable, and B a Borel set of ℜ. Then the average chance of the fuzzy rough event ξ ∈ B is defined as

Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα (12.9)

where Ch denotes the chance measure.
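The following is a small illustrative Python sketch (not from the text) of Definitions 12.7 and 12.8: given the α-chance curve Ch{ξ ∈ B}(α) as a callable, the equilibrium chance is a fixed-point-type supremum and the average chance is an integral over α. The callable `ch` below is a hypothetical stand-in; in practice it would come from a simulation such as Algorithm 12.1.

```python
import numpy as np

def equilibrium_chance(ch, grid=10000):
    """sup{alpha in (0, 1] : Ch(alpha) >= alpha}."""
    alphas = np.linspace(1e-6, 1.0, grid)
    ok = np.array([ch(a) for a in alphas]) >= alphas
    return float(alphas[ok].max()) if ok.any() else 0.0

def average_chance(ch, grid=10000):
    """Trapezoidal approximation of the integral of Ch(alpha) over (0, 1]."""
    alphas = np.linspace(1e-6, 1.0, grid)
    vals = np.array([ch(a) for a in alphas])
    return float(np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(alphas)))

ch = lambda a: 0.9 - 0.4 * a   # toy decreasing alpha-chance curve, for illustration only
print(equilibrium_chance(ch), average_chance(ch))
```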

Definition 12.9 A fuzzy rough variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Ch {ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm} (α) ≡ 0; (12.10)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Ch {ξ ≠ x1, ξ ≠ x2, · · ·} (α) ≡ 0. (12.11)


12.3 Chance Distribution

Definition 12.10 The chance distribution Φ: [−∞,+∞] × (0, 1] → [0, 1] of a fuzzy rough variable ξ is defined by

Φ(x;α) = Ch {ξ ≤ x} (α). (12.12)

Theorem 12.7 The chance distribution Φ(x;α) of a fuzzy rough variable is a decreasing and left-continuous function of α for each fixed x.

Proof: Denote the fuzzy rough variable by ξ. For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

Φ(x;α1) = sup_{Tr{A}≥α1} inf_{λ∈A} Cr{ξ(λ) ≤ x} ≥ sup_{Tr{A}≥α2} inf_{λ∈A} Cr{ξ(λ) ≤ x} = Φ(x;α2).

Thus Φ(x;α) is a decreasing function of α for each fixed x.

Next we prove the left-continuity of Φ(x;α) with respect to α. Let α ∈ (0, 1] be given, and let {αi} be a sequence of numbers with αi ↑ α. Since Φ(x;α) is a decreasing function of α, the limit lim_{i→∞} Φ(x;αi) exists and is not less than Φ(x;α). If the limit is equal to Φ(x;α), then the left-continuity is proved. Otherwise, we have

lim_{i→∞} Φ(x;αi) > Φ(x;α).

Let z∗ = (lim_{i→∞} Φ(x;αi) + Φ(x;α))/2. It is clear that

Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Tr{Ai} ≥ αi such that

inf_{λ∈Ai} Cr{ξ(λ) ≤ x} > z∗

for each i. Now we define

A∗ = ∪_{i=1}^{∞} Ai.

It is clear that Tr{A∗} ≥ Tr{Ai} ≥ αi. Letting i → ∞, we get Tr{A∗} ≥ α. Thus

Φ(x;α) ≥ inf_{λ∈A∗} Cr{ξ(λ) ≤ x} ≥ z∗.

A contradiction proves the theorem.

Theorem 12.8 The chance distribution Φ(x;α) of a fuzzy rough variable is an increasing function of x for each fixed α, and

Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α; (12.13)

lim_{x→−∞} Φ(x;α) ≤ 0.5, ∀α; (12.14)

lim_{x→+∞} Φ(x;α) ≥ 0.5, if α < 1. (12.15)

Furthermore, if lim_{y↓x} Φ(y;α) > 0.5 or Φ(x;α) ≥ 0.5, then we have

lim_{y↓x} Φ(y;α) = Φ(x;α). (12.16)

Proof: Let Φ(x;α) be the chance distribution of the fuzzy rough variable ξ defined on the rough space (Λ,Δ,A, π). For any x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞, it is clear that

Φ(x1;α) = sup_{Tr{A}≥α} inf_{λ∈A} Cr{ξ(λ) ≤ x1} ≤ sup_{Tr{A}≥α} inf_{λ∈A} Cr{ξ(λ) ≤ x2} = Φ(x2;α).

Therefore, Φ(x;α) is an increasing function of x for each fixed α.

Since ξ(λ) is a fuzzy variable for any λ ∈ Λ, we have Cr{ξ(λ) ≤ −∞} = 0 for any λ ∈ Λ. It follows that

Φ(−∞;α) = sup_{Tr{A}≥α} inf_{λ∈A} Cr{ξ(λ) ≤ −∞} = 0.

Similarly, we have Cr{ξ(λ) ≤ +∞} = 1 for any λ ∈ Λ. Thus

Φ(+∞;α) = sup_{Tr{A}≥α} inf_{λ∈A} Cr{ξ(λ) ≤ +∞} = 1.

Thus (12.13) is proved.

If (12.14) is not true, then there exists a number z∗ > 0.5 and a sequence {xi} with xi ↓ −∞ such that Φ(xi;α) > z∗ for all i. Writing

Ai = {λ ∈ Λ | Cr{ξ(λ) ≤ xi} > z∗}

for i = 1, 2, · · ·, we have Tr{Ai} ≥ α, and A1 ⊃ A2 ⊃ · · · It follows from the trust continuity theorem that

Tr{∩_{i=1}^{∞} Ai} = lim_{i→∞} Tr{Ai} ≥ α.

Thus there exists λ∗ such that λ∗ ∈ Ai for all i. Therefore

0.5 ≥ lim_{i→∞} Cr{ξ(λ∗) ≤ xi} ≥ z∗ > 0.5.

A contradiction proves (12.14).


If (12.15) is not true, then there exists a number z∗ < 0.5 and a sequence {xi} with xi ↑ +∞ such that Φ(xi;α) < z∗ for all i. Writing

Ai = {λ ∈ Λ | Cr{ξ(λ) ≤ xi} < z∗}

for i = 1, 2, · · ·, we have

Tr{Ai} = 1 − Tr{λ ∈ Λ | Cr{ξ(λ) ≤ xi} ≥ z∗} > 1 − α

and A1 ⊃ A2 ⊃ · · · It follows from the trust continuity theorem that

Tr{∩_{i=1}^{∞} Ai} = lim_{i→∞} Tr{Ai} ≥ 1 − α > 0.

Thus there exists λ∗ such that λ∗ ∈ Ai for all i. Therefore

0.5 ≤ lim_{i→∞} Cr{ξ(λ∗) ≤ xi} ≤ z∗ < 0.5.

A contradiction proves (12.15).

Finally, we prove (12.16). Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 12.5 that

lim_{y↓x} Φ(y;α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

The theorem is proved.

Theorem 12.9 Let ξ be a fuzzy rough variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) a decreasing function of x for any fixed α. Furthermore, if

Ch{ξ ≥ x}(α) ≥ 0.5 or lim_{y↑x} Ch{ξ ≥ y}(α) > 0.5,

then we have lim_{y↑x} Ch{ξ ≥ y}(α) = Ch{ξ ≥ x}(α).

Proof: Like Theorems 12.7 and 12.8.

Definition 12.11 The chance density function φ: ℜ × (0, 1] → [0,+∞) of a fuzzy rough variable ξ is a function such that

Φ(x;α) = ∫_{−∞}^{x} φ(y;α) dy (12.17)

holds for all x ∈ [−∞,+∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.


12.4 Independent and Identical Distribution

This section introduces the concept of independent and identically distributed (iid) fuzzy rough variables.

Definition 12.12 The fuzzy rough variables ξ1, ξ2, · · · , ξn are said to be iid if and only if

(Pos{ξi(λ) ∈ B1}, Pos{ξi(λ) ∈ B2}, · · · , Pos{ξi(λ) ∈ Bm}), i = 1, 2, · · · , n

are iid rough vectors for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m.

Theorem 12.10 Let ξ1, ξ2, · · · , ξn be iid fuzzy rough variables. Then for any Borel set B of ℜ, we have
(a) Pos{ξi(λ) ∈ B}, i = 1, 2, · · · , n are iid rough variables;
(b) Nec{ξi(λ) ∈ B}, i = 1, 2, · · · , n are iid rough variables;
(c) Cr{ξi(λ) ∈ B}, i = 1, 2, · · · , n are iid rough variables.

Proof: The part (a) follows immediately from the definition. (b) Since ξ1, ξ2, · · · , ξn are iid fuzzy rough variables, the possibilities Pos{ξi ∈ B^c}, i = 1, 2, · · · , n are iid rough variables. It follows from Nec{ξi ∈ B} = 1 − Pos{ξi ∈ B^c}, i = 1, 2, · · · , n that Nec{ξi(λ) ∈ B}, i = 1, 2, · · · , n are iid rough variables. (c) It follows from the definition of iid fuzzy rough variables that (Pos{ξi(λ) ∈ B}, Pos{ξi(λ) ∈ B^c}), i = 1, 2, · · · , n are iid rough vectors. Since, for each i,

Cr{ξi(λ) ∈ B} = (1/2) (Pos{ξi(λ) ∈ B} + 1 − Pos{ξi(λ) ∈ B^c}),

the credibilities Cr{ξi(λ) ∈ B}, i = 1, 2, · · · , n are iid rough variables.

Theorem 12.11 Let f : ℜ → ℜ be a measurable function. If ξ1, ξ2, · · · , ξn are iid fuzzy rough variables, then f(ξ1), f(ξ2), · · · , f(ξn) are iid fuzzy rough variables.

Proof: We have proved that f(ξ1), f(ξ2), · · · , f(ξn) are fuzzy rough variables. For any positive integer m and Borel sets B1, B2, · · · , Bm of ℜ, since f^{−1}(B1), f^{−1}(B2), · · · , f^{−1}(Bm) are Borel sets, we know that

(Pos{ξi(λ) ∈ f^{−1}(B1)}, Pos{ξi(λ) ∈ f^{−1}(B2)}, · · · , Pos{ξi(λ) ∈ f^{−1}(Bm)}),

i = 1, 2, · · · , n are iid rough vectors. Equivalently, the rough vectors

(Pos{f(ξi(λ)) ∈ B1}, Pos{f(ξi(λ)) ∈ B2}, · · · , Pos{f(ξi(λ)) ∈ Bm}),

i = 1, 2, · · · , n are iid. Hence f(ξ1), f(ξ2), · · · , f(ξn) are iid fuzzy rough variables.


Theorem 12.12 Suppose that ξ1, ξ2, · · · , ξn are iid fuzzy rough variables such that E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are all finite for each λ. Then E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are iid rough variables.

Proof: For any λ ∈ Λ, it follows from the expected value operator that

E[ξi(λ)] = ∫_0^{+∞} Cr{ξi(λ) ≥ r} dr − ∫_{−∞}^{0} Cr{ξi(λ) ≤ r} dr
         = lim_{j→∞} lim_{k→∞} ( ∑_{l=1}^{2^k} (j/2^k) Cr{ξi(λ) ≥ lj/2^k} − ∑_{l=1}^{2^k} (j/2^k) Cr{ξi(λ) ≤ −lj/2^k} )

for i = 1, 2, · · · , n. Now we write

η+_i(λ) = ∫_0^{∞} Cr{ξi(λ) ≥ r} dr, η−_i(λ) = ∫_{−∞}^{0} Cr{ξi(λ) ≤ r} dr,

η+_{ij}(λ) = ∫_0^{j} Cr{ξi(λ) ≥ r} dr, η−_{ij}(λ) = ∫_{−j}^{0} Cr{ξi(λ) ≤ r} dr,

η+_{ijk}(λ) = ∑_{l=1}^{2^k} (j/2^k) Cr{ξi(λ) ≥ lj/2^k}, η−_{ijk}(λ) = ∑_{l=1}^{2^k} (j/2^k) Cr{ξi(λ) ≤ −lj/2^k}

for any positive integers j, k and i = 1, 2, · · · , n. It follows from the monotonicity of the functions Cr{ξi ≥ r} and Cr{ξi ≤ r} that the sequences {η+_{ijk}(λ)} and {η−_{ijk}(λ)} satisfy (a) for each j and k, (η+_{ijk}(λ), η−_{ijk}(λ)), i = 1, 2, · · · , n are iid rough vectors; and (b) for each i and j, η+_{ijk}(λ) ↑ η+_{ij}(λ) and η−_{ijk}(λ) ↑ η−_{ij}(λ) as k → ∞.

For any real numbers x, y, xi, yi, i = 1, 2, · · · , n, it follows from property (a) that

Tr{η+_{ijk}(λ) ≤ xi, η−_{ijk}(λ) ≤ yi, i = 1, 2, · · · , n} = ∏_{i=1}^{n} Tr{η+_{ijk}(λ) ≤ xi, η−_{ijk}(λ) ≤ yi},

Tr{η+_{ijk}(λ) ≤ x, η−_{ijk}(λ) ≤ y} = Tr{η+_{i′jk}(λ) ≤ x, η−_{i′jk}(λ) ≤ y}, ∀i, i′.

It follows from property (b) that

{η+_{ijk}(λ) ≤ xi, η−_{ijk}(λ) ≤ yi, i = 1, 2, · · · , n} → {η+_{ij}(λ) ≤ xi, η−_{ij}(λ) ≤ yi, i = 1, 2, · · · , n},

{η+_{ijk}(λ) ≤ x, η−_{ijk}(λ) ≤ y} → {η+_{ij}(λ) ≤ x, η−_{ij}(λ) ≤ y}

as k → ∞. By using the trust continuity theorem, we get

Tr{η+_{ij}(λ) ≤ xi, η−_{ij}(λ) ≤ yi, i = 1, 2, · · · , n} = ∏_{i=1}^{n} Tr{η+_{ij}(λ) ≤ xi, η−_{ij}(λ) ≤ yi},

Tr{η+_{ij}(λ) ≤ x, η−_{ij}(λ) ≤ y} = Tr{η+_{i′j}(λ) ≤ x, η−_{i′j}(λ) ≤ y}, ∀i, i′.

Thus (η+_{ij}(λ), η−_{ij}(λ)), i = 1, 2, · · · , n are iid rough vectors, and satisfy (c) for each j, (η+_{ij}(λ), η−_{ij}(λ)), i = 1, 2, · · · , n are iid rough vectors; and (d) for each i, η+_{ij}(λ) ↑ η+_i(λ) and η−_{ij}(λ) ↑ η−_i(λ) as j → ∞.

A similar process may prove that (η+_i(λ), η−_i(λ)), i = 1, 2, · · · , n are iid rough vectors. Thus E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are iid rough variables. The theorem is proved.

12.5 Expected Value Operator

Definition 12.13 (Liu [75]) Let ξ be a fuzzy rough variable. Then its expected value is defined by

E[ξ] = ∫_0^{+∞} Tr{λ ∈ Λ | E[ξ(λ)] ≥ r} dr − ∫_{−∞}^{0} Tr{λ ∈ Λ | E[ξ(λ)] ≤ r} dr

provided that at least one of the two integrals is finite.

Theorem 12.13 Assume that ξ and η are fuzzy rough variables with finite expected values. If ξ(λ) and η(λ) are independent fuzzy variables for each λ, then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (12.18)

Proof: For any λ ∈ Λ, by the linearity of the expected value operator of independent fuzzy variables, we have E[aξ(λ) + bη(λ)] = aE[ξ(λ)] + bE[η(λ)]. It follows from the linearity of the expected value operator of rough variables that E[aξ + bη] = E[aE[ξ(λ)] + bE[η(λ)]] = aE[E[ξ(λ)]] + bE[E[η(λ)]] = aE[ξ] + bE[η]. The theorem is proved.

Continuity Theorems

Theorem 12.14 (a) Let ξ, ξ1, ξ2, · · · be fuzzy rough variables such that ξi(λ) ↑ ξ(λ) uniformly for each λ ∈ Λ. If there exists a fuzzy rough variable η with finite expected value such that ξi ≥ η for all i, then we have

lim_{i→∞} E[ξi] = E[ξ]. (12.19)

(b) Let ξ, ξ1, ξ2, · · · be fuzzy rough variables such that ξi(λ) ↓ ξ(λ) uniformly for each λ ∈ Λ. If there exists a fuzzy rough variable η with finite expected value such that ξi ≤ η for all i, then we have

lim_{i→∞} E[ξi] = E[ξ]. (12.20)


Proof: (a) For each λ ∈ Λ, since ξi(λ) ↑ ξ(λ) uniformly, it follows from Theorem 3.41 that E[ξi(λ)] ↑ E[ξ(λ)]. Since ξi ≥ η, we have E[ξi(λ)] ≥ E[η(λ)]. Thus {E[ξi(λ)]} is an increasing sequence of rough variables such that E[ξi(λ)] ≥ E[η(λ)], where E[η(λ)] is a rough variable with finite expected value. It follows from Theorem 4.36 that (12.19) holds. The part (b) may be proved similarly.

Theorem 12.15 Let ξ, ξ1, ξ2, · · · be fuzzy rough variables such that ξi(λ) → ξ(λ) uniformly for each λ ∈ Λ. If there exists a fuzzy rough variable η with finite expected value such that |ξi| ≤ η for all i, then we have

lim_{i→∞} E[ξi] = E[lim_{i→∞} ξi]. (12.21)

Proof: For each λ ∈ Λ, since ξi(λ) → ξ(λ) uniformly, it follows from Theorem 3.41 that E[ξi(λ)] → E[ξ(λ)]. Since |ξi| ≤ η, we have E[ξi(λ)] ≤ E[η(λ)]. Thus {E[ξi(λ)]} is a sequence of rough variables such that E[ξi(λ)] ≤ E[η(λ)], where E[η(λ)] is a rough variable with finite expected value. It follows from Theorem 4.38 that (12.21) holds.

12.6 Variance, Covariance and Moments

Definition 12.14 (Liu [75]) Let ξ be a fuzzy rough variable with finite expected value E[ξ]. The variance of ξ is defined as V[ξ] = E[(ξ − E[ξ])^2].

Theorem 12.16 If ξ is a fuzzy rough variable with finite expected value, a and b are real numbers, then V[aξ + b] = a^2 V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)^2] = a^2 E[(ξ − E[ξ])^2] = a^2 V[ξ].

Theorem 12.17 Assume that ξ is a fuzzy rough variable whose expected value exists. Then we have

V[E[ξ(λ)]] ≤ V[ξ]. (12.22)

Proof: Denote the expected value of ξ by e. It follows from Theorem 4.51 that

V[E[ξ(λ)]] = E[(E[ξ(λ)] − e)^2] ≤ E[E[(ξ(λ) − e)^2]] = V[ξ].

The theorem is proved.

Theorem 12.18 Let ξ be a fuzzy rough variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e}(1) = 1.


Proof: If V[ξ] = 0, then it follows from V[ξ] = E[(ξ − e)^2] that

∫_0^{+∞} Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} dr = 0

which implies that Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} = 0 for any r > 0. Therefore, Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] = 0} = 1. That is, there exists a set A∗ with Tr{A∗} = 1 such that E[(ξ(λ) − e)^2] = 0 for each λ ∈ A∗. It follows from Theorem 3.47 that Cr{ξ(λ) = e} = 1 for each λ ∈ A∗. Hence

Ch{ξ = e}(1) = sup_{Tr{A}≥1} inf_{λ∈A} Cr{ξ(λ) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 12.4 that there exists a set A∗ with Tr{A∗} = 1 such that

inf_{λ∈A∗} Cr{ξ(λ) = e} = 1.

That is, Cr{(ξ(λ) − e)^2 ≥ r} = 0 for each r > 0 and each λ ∈ A∗. Thus

E[(ξ(λ) − e)^2] = ∫_0^{+∞} Cr{(ξ(λ) − e)^2 ≥ r} dr = 0

for all λ ∈ A∗. It follows that Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} = 0 for any r > 0. Hence

V[ξ] = ∫_0^{+∞} Tr{λ ∈ Λ | E[(ξ(λ) − e)^2] ≥ r} dr = 0.

The theorem is proved.

Definition 12.15 Let ξ and η be fuzzy rough variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])]. (12.23)

Definition 12.16 For any positive integer k, the expected value E[ξ^k] is called the kth moment of the fuzzy rough variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the fuzzy rough variable ξ.

12.7 Optimistic and Pessimistic Values

Definition 12.17 (Liu [75]) Let ξ be a fuzzy rough variable, and γ, δ ∈ (0, 1]. Then

ξsup(γ, δ) = sup{r | Ch{ξ ≥ r}(γ) ≥ δ} (12.24)

is called the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r | Ch{ξ ≤ r}(γ) ≥ δ} (12.25)

is called the (γ, δ)-pessimistic value to ξ.


Theorem 12.19 Let ξ be a fuzzy rough variable. Assume that ξsup(γ, δ) is the (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. If δ > 0.5, then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (12.26)

Proof: It follows from the definition of (γ, δ)-pessimistic value that there exists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ ξinf(γ, δ) as i → ∞. Thus we have

lim_{i→∞} Ch{ξ ≤ xi}(γ) ≥ δ > 0.5.

It follows from Theorem 12.8 that

Ch{ξ ≤ ξinf(γ, δ)}(γ) = lim_{i→∞} Ch{ξ ≤ xi}(γ) ≥ δ.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥ δ and xi ↑ ξsup(γ, δ) as i → ∞. Thus we have

lim_{i→∞} Ch{ξ ≥ xi}(γ) ≥ δ > 0.5.

It follows from Theorem 12.9 that

Ch{ξ ≥ ξsup(γ, δ)}(γ) = lim_{i→∞} Ch{ξ ≥ xi}(γ) ≥ δ.

The theorem is proved.

Theorem 12.20 Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of the fuzzy rough variable ξ, respectively. If γ ≤ 0.5, then we have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (12.27)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (12.28)

where δ1 and δ2 are defined by

δ1 = sup_{λ∈Λ} {ξ(λ)sup(1 − δ) − ξ(λ)inf(1 − δ)},

δ2 = sup_{λ∈Λ} {ξ(λ)sup(δ) − ξ(λ)inf(δ)},

and ξ(λ)sup(δ) and ξ(λ)inf(δ) are the δ-optimistic and δ-pessimistic values of the fuzzy variable ξ(λ) for each λ, respectively.


Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Λ1 = {λ ∈ Λ | Cr{ξ(λ) > ξsup(γ, δ) + ε} ≥ δ},

Λ2 = {λ ∈ Λ | Cr{ξ(λ) < ξinf(γ, δ) − ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Tr{Λ1} < γ and Tr{Λ2} < γ. Thus Tr{Λ1} + Tr{Λ2} < γ + γ ≤ 1. This fact implies that Λ1 ∪ Λ2 ≠ Λ. Let λ∗ ∉ Λ1 ∪ Λ2. Then we have

Cr{ξ(λ∗) > ξsup(γ, δ) + ε} < δ,

Cr{ξ(λ∗) < ξinf(γ, δ) − ε} < δ.

Since Cr is self-dual, we have

Cr{ξ(λ∗) ≤ ξsup(γ, δ) + ε} > 1 − δ,

Cr{ξ(λ∗) ≥ ξinf(γ, δ) − ε} > 1 − δ.

It follows from the definitions of ξ(λ∗)sup(1 − δ) and ξ(λ∗)inf(1 − δ) that

ξsup(γ, δ) + ε ≥ ξ(λ∗)inf(1 − δ),

ξinf(γ, δ) − ε ≤ ξ(λ∗)sup(1 − δ)

which implies that

ξinf(γ, δ) − ε − (ξsup(γ, δ) + ε) ≤ ξ(λ∗)sup(1 − δ) − ξ(λ∗)inf(1 − δ) ≤ δ1.

Letting ε → 0, we obtain (12.27).

Next we prove the inequality (12.28). Assume γ > 0.5. For any given ε > 0, we define

Λ1 = {λ ∈ Λ | Cr{ξ(λ) ≥ ξsup(γ, δ) − ε} ≥ δ},

Λ2 = {λ ∈ Λ | Cr{ξ(λ) ≤ ξinf(γ, δ) + ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Tr{Λ1} ≥ γ and Tr{Λ2} ≥ γ. Thus Tr{Λ1} + Tr{Λ2} ≥ γ + γ > 1. This fact implies that Λ1 ∩ Λ2 ≠ ∅. Let λ∗ ∈ Λ1 ∩ Λ2. Then we have

Cr{ξ(λ∗) ≥ ξsup(γ, δ) − ε} ≥ δ,

Cr{ξ(λ∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(λ∗)sup(δ) and ξ(λ∗)inf(δ) that

ξsup(γ, δ) − ε ≤ ξ(λ∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(λ∗)inf(δ)

which implies that

ξsup(γ, δ) − ε − (ξinf(γ, δ) + ε) ≤ ξ(λ∗)sup(δ) − ξ(λ∗)inf(δ) ≤ δ2.

The inequality (12.28) is proved by letting ε → 0.


12.8 Convergence Concepts

This section introduces four types of sequence convergence concepts: convergence a.s., convergence in chance, convergence in mean, and convergence in distribution.

Definition 12.18 Suppose that ξ, ξ1, ξ2, · · · are fuzzy rough variables defined on the rough space (Λ,Δ,A, π). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ A with Tr{A} = 1 such that {ξi(λ)} converges a.s. to ξ(λ) for every λ ∈ A.

Definition 12.19 Suppose that ξ, ξ1, ξ2, · · · are fuzzy rough variables. We say that the sequence {ξi} converges in chance to ξ if

lim_{i→∞} lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 0 (12.29)

for every ε > 0.

Definition 12.20 Suppose that ξ, ξ1, ξ2, · · · are fuzzy rough variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0. (12.30)

Definition 12.21 Suppose that Φ, Φ1, Φ2, · · · are the chance distributions of the fuzzy rough variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x;α) → Φ(x;α) for all continuity points (x;α) of Φ.

12.9 Laws of Large Numbers

This section introduces four laws of large numbers of fuzzy rough variable.

Theorem 12.21 Let {ξi} be a sequence of independent but not necessarily identically distributed fuzzy rough variables with a common expected value e. If there exists a number a > 0 such that V[ξi] < a for all i, then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e as n → ∞.

Proof: Since {ξi} is a sequence of independent fuzzy rough variables, we know that {E[ξi(λ)]} is a sequence of independent rough variables. By using Theorem 12.17, we get V[E[ξi(λ)]] ≤ V[ξi] < a for each i. It follows from the weak law of large numbers of rough variables that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e.

Theorem 12.22 Let {ξi} be a sequence of iid fuzzy rough variables with a finite expected value e. Then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e as n → ∞.

Proof: Since {ξi} is a sequence of iid fuzzy rough variables with a finite expected value e, we know that {E[ξi(λ)]} is a sequence of iid rough variables with finite expected value e. It follows from the weak law of large numbers of rough variables that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e.

Theorem 12.23 Let {ξi} be a sequence of independent fuzzy rough variables with a common expected value e. If

∑_{i=1}^{∞} V[ξi]/i^2 < ∞, (12.31)

then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of independent fuzzy rough variables, we know that {E[ξi(λ)]} is a sequence of independent rough variables. By using Theorem 12.17, we get V[E[ξi(λ)]] ≤ V[ξi] for each i. It follows from the strong law of large numbers of rough variables that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e.

Theorem 12.24 Suppose that {ξi} is a sequence of iid fuzzy rough variables with a finite expected value e. Then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of iid fuzzy rough variables with a finite expected value e, Theorem 12.12 implies that {E[ξi(λ)]} is a sequence of iid rough variables with a common expected value e. It follows from the strong law of large numbers of rough variables that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e as n → ∞.

12.10 Fuzzy Rough Simulations

In this section, we introduce fuzzy rough simulations for finding critical values, computing chance functions, and calculating expected values.

Example 12.1: Suppose that ξ is an n-dimensional fuzzy rough vector defined on the rough space (Λ,Δ,A, π), and f : ℜ^n → ℜ^m is a measurable function. For any real number α ∈ (0, 1], we design a fuzzy rough simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). That is, we should find the supremum β such that

Tr{λ ∈ Λ | Cr{f(ξ(λ)) ≤ 0} ≥ β} ≥ α. (12.32)

We sample λ1, λ2, · · · , λN from Δ and λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π. For any number v, let N(v) denote the number of λk satisfying Cr{f(ξ(λk)) ≤ 0} ≥ v for k = 1, 2, · · · , N, and N̄(v) denote the number of λ̄k satisfying Cr{f(ξ(λ̄k)) ≤ 0} ≥ v for k = 1, 2, · · · , N, where Cr{·} may be estimated by fuzzy simulation. Then we may find the maximal value v such that

(N(v) + N̄(v)) / (2N) ≥ α. (12.33)

This value is an estimation of β.

Algorithm 12.1 (Fuzzy Rough Simulation)
Step 1. Generate λ1, λ2, · · · , λN from Δ according to the measure π.
Step 2. Generate λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π.
Step 3. Find the maximal value v such that (12.33) holds.
Step 4. Return v.

Now we consider the following two fuzzy rough variables

ξ1 = (ρ1, ρ1 + 1, ρ1 + 2), with ρ1 = ([1, 2], [0, 3]),
ξ2 = (ρ2, ρ2 + 1, ρ2 + 2), with ρ2 = ([2, 3], [1, 4]).

A run of fuzzy rough simulation with 5000 cycles shows that

Ch{ξ1 + ξ2 ≥ 4}(0.9) = 0.72.
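The following is a hypothetical Python sketch, not the book's code, of the credibility estimate used inside Algorithm 12.1 for the example above. For a fixed λ the variables are the triangular fuzzy numbers (a, a+1, a+2); assuming min-based joint possibility for independent fuzzy variables, Cr{the event holds} = (Pos{it holds} + 1 − Pos{it fails})/2 is approximated by sampling points from the supports and taking suprema of membership values.

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Membership function of the triangular fuzzy variable (a, b, c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def credibility_sum_ge(a1, a2, level, n=50000, seed=0):
    """Estimate Cr{xi1 + xi2 >= level} with xi1 = (a1, a1+1, a1+2), xi2 = (a2, a2+1, a2+2)."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(a1, a1 + 2, n)     # points in the support of xi1
    x2 = rng.uniform(a2, a2 + 2, n)     # points in the support of xi2
    mu = np.minimum(tri_membership(x1, a1, a1 + 1, a1 + 2),
                    tri_membership(x2, a2, a2 + 1, a2 + 2))   # min-based joint possibility
    event = x1 + x2 >= level
    pos = mu[event].max() if event.any() else 0.0          # Pos{xi1 + xi2 >= level}
    pos_c = mu[~event].max() if (~event).any() else 0.0    # Pos{xi1 + xi2 < level}
    return 0.5 * (pos + 1.0 - pos_c)

# e.g. at a lambda with rho1 = 1.5 and rho2 = 2.5 (interior points of the rough parameters)
print(credibility_sum_ge(1.5, 2.5, level=4.0))
```

Algorithm 12.1 would then feed such credibility estimates, computed at each sampled λ and λ̄, into the counting condition (12.33).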

Example 12.2: Assume that ξ is an n-dimensional fuzzy rough vector on the rough space (Λ,Δ,A, π), and f : ℜ^n → ℜ is a measurable function. For any given confidence levels α and β, we find the maximal value f̄ such that

Ch{f(ξ) ≥ f̄}(α) ≥ β (12.34)

holds. That is, we should compute the maximal value f̄ such that

Tr{λ ∈ Λ | Cr{f(ξ(λ)) ≥ f̄} ≥ β} ≥ α (12.35)

holds. We sample λ1, λ2, · · · , λN from Δ and λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π. For any number v, let N(v) denote the number of λk satisfying Cr{f(ξ(λk)) ≥ v} ≥ β for k = 1, 2, · · · , N, and N̄(v) denote the number of λ̄k satisfying Cr{f(ξ(λ̄k)) ≥ v} ≥ β for k = 1, 2, · · · , N, where Cr{·} may be estimated by fuzzy simulation. Then we may find the maximal value v such that

(N(v) + N̄(v)) / (2N) ≥ α. (12.36)

This value is an estimation of f̄.

Algorithm 12.2 (Fuzzy Rough Simulation)
Step 1. Generate λ1, λ2, · · · , λN from Δ according to the measure π.
Step 2. Generate λ̄1, λ̄2, · · · , λ̄N from Λ according to the measure π.
Step 3. Find the maximal value v such that (12.36) holds.
Step 4. Return v.

We now find the maximal value f̄ such that Ch{ξ1^2 + ξ2^2 ≥ f̄}(0.9) ≥ 0.9, where ξ1 and ξ2 are fuzzy rough variables defined as

ξ1 = (ρ1, ρ1 + 1, ρ1 + 2), with ρ1 = ([1, 2], [0, 3]),
ξ2 = (ρ2, ρ2 + 1, ρ2 + 2), with ρ2 = ([2, 3], [1, 4]).

A run of fuzzy rough simulation with 5000 cycles shows that f̄ = 6.39.

Example 12.3: Assume that ξ is an n-dimensional fuzzy rough vector on the rough space (Λ,Δ,A, π), and f : ℜ^n → ℜ is a measurable function. One problem is to calculate the expected value E[f(ξ)]. Note that, for each λ ∈ Λ, we may calculate the expected value E[f(ξ(λ))] by fuzzy simulation. Since E[f(ξ)] is essentially the expected value of the rough variable E[f(ξ(λ))], we may combine rough simulation and fuzzy simulation to produce a fuzzy rough simulation.

Algorithm 12.3 (Fuzzy Rough Simulation)
Step 1. Set L = 0.
Step 2. Generate λ from Δ according to the measure π.
Step 3. Generate λ̄ from Λ according to the measure π.
Step 4. L ← L + E[f(ξ(λ))] + E[f(ξ(λ̄))].
Step 5. Repeat the second to fourth steps N times.
Step 6. Return L/(2N).

We employ the fuzzy rough simulation to calculate the expected value of ξ1ξ2, where ξ1 and ξ2 are fuzzy rough variables defined as

ξ1 = (ρ1, ρ1 + 1, ρ1 + 2), with ρ1 = ([1, 2], [0, 3]),
ξ2 = (ρ2, ρ2 + 1, ρ2 + 2), with ρ2 = ([2, 3], [1, 4]).

A run of fuzzy rough simulation with 5000 cycles shows that E[ξ1ξ2] = 8.93.


Chapter 13

Birough Theory

Roughly speaking, a birough variable is a rough variable defined on the universal set of rough variables, or a rough variable taking “rough variable” values.

The emphasis in this chapter is mainly on birough variable, birough arithmetic, chance measure, chance distribution, independent and identical distribution, expected value operator, variance, critical values, convergence concepts, laws of large numbers, and birough simulation.

13.1 Birough Variables

Definition 13.1 (Liu [75]) A birough variable is a function ξ from a rough space (Λ,Δ,A, π) to the set of rough variables such that Tr{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ.

Theorem 13.1 Assume that ξ is a birough variable, and B is a Borel set of ℜ. Then the trust Tr{ξ(λ) ∈ B} is a rough variable.

Proof: Since the trust Tr{ξ(λ) ∈ B} is a measurable function of λ from the rough space (Λ,Δ,A, π) to the set of real numbers, it is a rough variable.

Theorem 13.2 Let ξ be a birough variable. If the expected value E[ξ(λ)] is finite for each λ, then E[ξ(λ)] is a rough variable.

Proof: In order to prove that the expected value E[ξ(λ)] is a rough variable, we only need to show that E[ξ(λ)] is a measurable function of λ. It is obvious that

E[ξ(λ)] = ∫_0^{+∞} Tr{ξ(λ) ≥ r} dr − ∫_{−∞}^{0} Tr{ξ(λ) ≤ r} dr
        = lim_{j→∞} lim_{k→∞} ( ∑_{l=1}^{k} (j/k) Tr{ξ(λ) ≥ lj/k} − ∑_{l=1}^{k} (j/k) Tr{ξ(λ) ≤ −lj/k} ).


Since Tr{ξ(λ) ≥ lj/k} and Tr{ξ(λ) ≤ −lj/k} are all measurable functions for any integers j, k and l, the expected value E[ξ(λ)] is a measurable function of λ. The proof is complete.

Definition 13.2 An n-dimensional birough vector is a function ξ from a rough space (Λ,Δ,A, π) to the set of n-dimensional rough vectors such that Tr{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ^n.

Theorem 13.3 If (ξ1, ξ2, · · · , ξn) is a birough vector, then ξ1, ξ2, · · · , ξn are birough variables. Conversely, if ξ1, ξ2, · · · , ξn are birough variables, and for each λ ∈ Λ, the rough variables ξ1(λ), ξ2(λ), · · · , ξn(λ) are independent, then (ξ1, ξ2, · · · , ξn) is a birough vector.

Proof: Write ξ = (ξ1, ξ2, · · · , ξn). Suppose that ξ is a birough vector on the trust space (Λ,Δ,A, π). For any Borel set B of ℜ, the set B × ℜ^{n−1} is a Borel set of ℜ^n. Thus the function

Tr{ξ1(λ) ∈ B} = Tr{ξ1(λ) ∈ B, ξ2(λ) ∈ ℜ, · · · , ξn(λ) ∈ ℜ} = Tr{ξ(λ) ∈ B × ℜ^{n−1}}

is a measurable function of λ. It follows that ξ1 is a birough variable. A similar process may prove that ξ2, ξ3, · · · , ξn are birough variables.

Conversely, suppose that ξ1, ξ2, · · · , ξn are birough variables on the rough space (Λ,Δ,A, π). We write ξ = (ξ1, ξ2, · · · , ξn) and define

C = {C ⊂ ℜ^n | Tr{ξ(λ) ∈ C} is a measurable function of λ}.

The vector ξ is a birough vector if we can prove that C contains all Borel sets of ℜ^n. Let C1, C2, · · · ∈ C, and Ci ↑ C or Ci ↓ C. It follows from the trust continuity theorem that Tr{ξ(λ) ∈ Ci} → Tr{ξ(λ) ∈ C} as i → ∞. Thus Tr{ξ(λ) ∈ C} is a measurable function of λ, and C ∈ C. Hence C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n since

Tr{ξ(λ) ∈ (−∞, a]} = ∏_{i=1}^{n} Tr{ξi(λ) ∈ (−∞, ai]};

Tr{ξ(λ) ∈ (a, b]} = ∏_{i=1}^{n} Tr{ξi(λ) ∈ (ai, bi]};

Tr{ξ(λ) ∈ (b,+∞)} = ∏_{i=1}^{n} Tr{ξi(λ) ∈ (bi,+∞)};

Tr{ξ(λ) ∈ ℜ^n} = 1.

Let F be the class of all finite unions of disjoint intervals of the form (−∞, a], (a, b], (b,∞) and ℜ^n. Note that for any disjoint sets C1, C2, · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm, we have

Tr{ξ(λ) ∈ C} = ∑_{i=1}^{m} Tr{ξ(λ) ∈ Ci}.

That is, C ∈ C. Hence we have F ⊂ C. It may also be verified that the class F is an algebra. Since the smallest σ-algebra containing F is just the Borel algebra of ℜ^n, the monotone class theorem implies that C contains all Borel sets of ℜ^n. The theorem is proved.

Theorem 13.4 Let ξ be an n-dimensional birough vector, and f : ℜ^n → ℜ a measurable function. Then f(ξ) is a birough variable.

Proof: It is clear that f^{−1}(B) is a Borel set of ℜ^n for any Borel set B of ℜ. Thus, for each λ ∈ Λ, we have

Tr{f(ξ(λ)) ∈ B} = Tr{ξ(λ) ∈ f^{−1}(B)}

which is a measurable function of λ. That is, f(ξ) is a birough variable. The theorem is proved.

Definition 13.3 (Liu [75], Birough Arithmetic on Single Space) Let f : ℜ^n → ℜ be a measurable function, and ξ1, ξ2, · · · , ξn birough variables defined on the rough space (Λ,Δ,A, π). Then ξ = f(ξ1, ξ2, · · · , ξn) is a birough variable defined by

ξ(λ) = f(ξ1(λ), ξ2(λ), · · · , ξn(λ)), ∀λ ∈ Λ. (13.1)

Definition 13.4 (Liu [75], Birough Arithmetic on Different Spaces) Let f : ℜ^n → ℜ be a measurable function, and ξi birough variables on the rough spaces (Λi,Δi,Ai, πi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a birough variable on the product rough space (Λ,Δ,A, π), defined by

ξ(λ1, λ2, · · · , λn) = f(ξ1(λ1), ξ2(λ2), · · · , ξn(λn)) (13.2)

for all (λ1, λ2, · · · , λn) ∈ Λ.

13.2 Chance Measure

Definition 13.5 (Liu [75]) Let ξ be a birough variable, and B a Borel set of ℜ. Then the chance of the birough event ξ ∈ B is a function from (0, 1] to [0, 1], defined as

Ch{ξ ∈ B}(α) = sup_{Tr{A}≥α} inf_{λ∈A} Tr{ξ(λ) ∈ B}. (13.3)
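On a small finite rough space, the sup–inf in (13.3) can be evaluated by brute force, which makes the definition easy to experiment with. The sketch below is only an illustration under stated assumptions: the rough space is a finite set with weights for π, and the trust of a set is taken as the average of its relative π-weights in Δ and in Λ; the inner values Tr{ξ(λ) ∈ B} are supplied as an illustrative table rather than computed from a concrete ξ.

# Sketch: the chance measure (13.3) on a small finite rough space, by brute force.
# Assumptions (not from the text): Lambda = {0,...,m-1}, pi given by weights,
# Delta a subset of Lambda, Tr{A} = (pi(A & Delta)/pi(Delta) + pi(A)/pi(Lambda))/2.

from itertools import combinations

weights = [1.0, 1.0, 1.0, 1.0]      # pi on Lambda = {0, 1, 2, 3}
delta   = {0, 1}                    # lower approximation Delta
inner   = [0.9, 0.6, 0.8, 0.3]      # illustrative values of Tr{xi(lam) in B}

def trust(A):
    """Assumed trust of a subset A of Lambda."""
    pa  = sum(weights[i] for i in A)
    pad = sum(weights[i] for i in A if i in delta)
    return 0.5 * (pad / sum(weights[i] for i in delta) + pa / sum(weights))

def chance(alpha):
    """Ch{xi in B}(alpha): sup over A with Tr{A} >= alpha of min over A of inner."""
    best = 0.0
    lam = range(len(weights))
    for r in range(1, len(weights) + 1):
        for A in combinations(lam, r):
            if trust(A) >= alpha:
                best = max(best, min(inner[i] for i in A))
    return best

if __name__ == "__main__":
    for a in (0.25, 0.5, 0.75, 1.0):
        print(a, chance(a))   # the curve is decreasing in alpha, as Theorem 13.8 asserts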


Theorem 13.5 Let ξ be a birough variable, and B a Borel set of ℜ. Write β∗ = Ch{ξ ∈ B}(α∗). Then we have

Tr{λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ β∗} ≥ α∗. (13.4)

Proof: It follows from the definition of chance that β∗ is just the supremum of β satisfying

Tr{λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ β} ≥ α∗.

Thus there exists an increasing sequence {βi} such that

Tr{λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ βi} ≥ α∗

and βi ↑ β∗ as i → ∞. It is also easy to verify that

{λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ βi} ↓ {λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ β∗}

as i → ∞. It follows from the trust continuity theorem that

Tr{λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ β∗} = lim_{i→∞} Tr{λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ βi} ≥ α∗.

The proof is complete.

Theorem 13.6 Let ξ be a birough variable, and {Bi} a sequence of Borel sets of ℜ. If Bi ↓ B, then we have

lim_{i→∞} Ch{ξ ∈ Bi}(α) = Ch{ξ ∈ lim_{i→∞} Bi}(α). (13.5)

Proof: Write

β = Ch{ξ ∈ B}(α), βi = Ch{ξ ∈ Bi}(α), i = 1, 2, · · ·

Since Bi ↓ B, it is clear that β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξ ∈ Bi}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 13.5 that

Tr{λ ∈ Λ | Tr{ξ(λ) ∈ Bi} ≥ ρ} ≥ Tr{λ ∈ Λ | Tr{ξ(λ) ∈ Bi} ≥ βi} ≥ α.

It follows from the trust continuity theorem that

{λ ∈ Λ | Tr{ξ(λ) ∈ Bi} ≥ ρ} ↓ {λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ ρ}.

Using the trust continuity theorem again, we get

Tr{λ ∈ Λ | Tr{ξ(λ) ∈ B} ≥ ρ} = lim_{i→∞} Tr{λ ∈ Λ | Tr{ξ(λ) ∈ Bi} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (13.5) holds.


Theorem 13.7 (a) Let ξ, ξ1, ξ2, · · · be birough variables such that ξi(λ) ↑ ξ(λ) for each λ ∈ Λ. Then we have

lim_{i→∞} Ch{ξi ≤ r}(α) = Ch{lim_{i→∞} ξi ≤ r}(α). (13.6)

(b) Let ξ, ξ1, ξ2, · · · be birough variables such that ξi(λ) ↓ ξ(λ) for each λ ∈ Λ. Then we have

lim_{i→∞} Ch{ξi ≥ r}(α) = Ch{lim_{i→∞} ξi ≥ r}(α). (13.7)

Proof: (a) Write

β = Ch{ξ ≤ r}(α), βi = Ch{ξi ≤ r}(α), i = 1, 2, · · ·

Since ξi(λ) ↑ ξ(λ) for each λ ∈ Λ, it is clear that {ξi(λ) ≤ r} ↓ {ξ(λ) ≤ r} for each λ ∈ Λ and β1 ≥ β2 ≥ · · · ≥ β. Thus the limit

ρ = lim_{i→∞} βi = lim_{i→∞} Ch{ξi ≤ r}(α)

exists and ρ ≥ β. On the other hand, since ρ ≤ βi for each i, it follows from Theorem 13.5 that

Tr{λ ∈ Λ | Tr{ξi(λ) ≤ r} ≥ ρ} ≥ Tr{λ ∈ Λ | Tr{ξi(λ) ≤ r} ≥ βi} ≥ α.

Since {ξi(λ) ≤ r} ↓ {ξ(λ) ≤ r} for each λ ∈ Λ, it follows from the trust continuity theorem that

{λ ∈ Λ | Tr{ξi(λ) ≤ r} ≥ ρ} ↓ {λ ∈ Λ | Tr{ξ(λ) ≤ r} ≥ ρ}.

By using the trust continuity theorem, we get

Tr{λ ∈ Λ | Tr{ξ(λ) ≤ r} ≥ ρ} = lim_{i→∞} Tr{λ ∈ Λ | Tr{ξi(λ) ≤ r} ≥ ρ} ≥ α

which implies that ρ ≤ β. Hence ρ = β and (13.6) holds. The part (b) may be proved similarly.

Variety of Chance Measure

Definition 13.6 Let ξ be a birough variable, and B a Borel set of ℜ. For any real number α ∈ (0, 1], the α-chance of the birough event ξ ∈ B is the value of the chance at α, i.e., Ch{ξ ∈ B}(α), where Ch denotes the chance measure.

Definition 13.7 Let ξ be a birough variable, and B a Borel set of ℜ. Then the equilibrium chance of the birough event ξ ∈ B is defined as

Che{ξ ∈ B} = sup_{0<α≤1} {α | Ch{ξ ∈ B}(α) ≥ α} (13.8)

where Ch denotes the chance measure.


Definition 13.8 Let ξ be a birough variable, and B a Borel set of ℜ. Then the average chance of the birough event ξ ∈ B is defined as

Cha{ξ ∈ B} = ∫_0^1 Ch{ξ ∈ B}(α) dα (13.9)

where Ch denotes the chance measure.
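Once the α-chance curve α ↦ Ch{ξ ∈ B}(α) is available, the equilibrium chance (13.8) is the largest level α at which the curve still lies above α, and the average chance (13.9) is its integral over (0, 1]. The sketch below computes both quantities for a tabulated, decreasing chance curve; the example curve, the grid size, and all names are illustrative assumptions, not part of the text.

# Sketch: equilibrium chance (13.8) and average chance (13.9) from a chance curve.
# ch(alpha) below is an illustrative decreasing curve standing in for Ch{xi in B}(alpha).

def ch(alpha):
    return max(0.0, 1.0 - 0.8 * alpha)   # illustrative placeholder only

def equilibrium_chance(ch, n=100000):
    """sup{alpha in (0,1] : ch(alpha) >= alpha}, approximated on a grid."""
    best = 0.0
    for i in range(1, n + 1):
        a = i / n
        if ch(a) >= a:
            best = a
    return best

def average_chance(ch, n=100000):
    """Integral of ch(alpha) over (0,1], midpoint rule."""
    return sum(ch((i + 0.5) / n) for i in range(n)) / n

if __name__ == "__main__":
    print(equilibrium_chance(ch))   # about 0.556 for the placeholder curve
    print(average_chance(ch))       # about 0.6 for the placeholder curve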

Definition 13.9 A birough variable ξ is said to be
(a) nonnegative if Ch{ξ < 0}(α) ≡ 0;
(b) positive if Ch{ξ ≤ 0}(α) ≡ 0;
(c) simple if there exists a finite sequence {x1, x2, · · · , xm} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm}(α) ≡ 0; (13.10)

(d) discrete if there exists a countable sequence {x1, x2, · · ·} such that

Ch{ξ ≠ x1, ξ ≠ x2, · · ·}(α) ≡ 0. (13.11)

13.3 Chance Distribution

Definition 13.10 The chance distribution Φ: [−∞, +∞] × (0, 1] → [0, 1] of a birough variable ξ is defined by

Φ(x;α) = Ch{ξ ≤ x}(α). (13.12)

Theorem 13.8 The chance distribution Φ(x;α) of a birough variable is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) an increasing and right-continuous function of x for any fixed α, and

Φ(−∞;α) = 0, Φ(+∞;α) = 1, ∀α; (13.13)
lim_{x→−∞} Φ(x;α) = 0, ∀α; (13.14)
lim_{x→+∞} Φ(x;α) = 1, if α < 1. (13.15)

Proof: Let Φ(x;α) be the chance distribution of the birough variable ξ defined on the rough space (Λ, Δ, A, π). Part (a): For any given α1 and α2 with 0 < α1 < α2 ≤ 1, it is clear that

Φ(x;α1) = sup_{Tr{A}≥α1} inf_{λ∈A} Tr{ξ(λ) ≤ x} ≥ sup_{Tr{A}≥α2} inf_{λ∈A} Tr{ξ(λ) ≤ x} = Φ(x;α2).

Thus Φ(x;α) is a decreasing function of α. We next prove that Φ(x;α) is a left-continuous function of α. Let α ∈ (0, 1] be given, and let {αi} be a


sequence of numbers with αi ↑ α. Since Φ(x;α) is a decreasing function of α, the limit lim_{i→∞} Φ(x;αi) exists and is not less than Φ(x;α). If the limit is equal to Φ(x;α), then the left-continuity is proved. Otherwise, we have

lim_{i→∞} Φ(x;αi) > Φ(x;α).

Let z∗ = (lim_{i→∞} Φ(x;αi) + Φ(x;α))/2. It is clear that

Φ(x;αi) > z∗ > Φ(x;α)

for all i. It follows from Φ(x;αi) > z∗ that there exists Ai with Tr{Ai} ≥ αi such that

inf_{λ∈Ai} Tr{ξ(λ) ≤ x} > z∗

for each i. Now we define

A∗ = ⋃_{i=1}^{∞} Ai.

It is clear that Tr{A∗} ≥ Tr{Ai} ≥ αi. Letting i → ∞, we get Tr{A∗} ≥ α. Thus

Φ(x;α) ≥ inf_{λ∈A∗} Tr{ξ(λ) ≤ x} ≥ z∗.

A contradiction proves the part (a).

Next we prove the part (b). For any x1 and x2 with −∞ ≤ x1 < x2 ≤ +∞,

it is clear that

Φ(x1;α) = sup_{Tr{A}≥α} inf_{λ∈A} Tr{ξ(λ) ≤ x1} ≤ sup_{Tr{A}≥α} inf_{λ∈A} Tr{ξ(λ) ≤ x2} = Φ(x2;α).

Therefore, Φ(x;α) is an increasing function of x. Let us prove that Φ(x;α) is a right-continuous function of x. Let {xi} be an arbitrary sequence with xi ↓ x as i → ∞. It follows from Theorem 13.6 that

lim_{y↓x} Φ(y;α) = lim_{y↓x} Ch{ξ ∈ (−∞, y]}(α) = Ch{ξ ∈ (−∞, x]}(α) = Φ(x;α).

Thus Φ(x;α) is a right-continuous function of x.

Since ξ(λ) is a rough variable for any λ ∈ Λ, we have Tr{ξ(λ) ≤ −∞} = 0

for any λ ∈ Λ. It follows that

Φ(−∞;α) = sup_{Tr{A}≥α} inf_{λ∈A} Tr{ξ(λ) ≤ −∞} = 0.

Similarly, we have Tr{ξ(λ) ≤ +∞} = 1 for any λ ∈ Λ. Thus

Φ(+∞;α) = sup_{Tr{A}≥α} inf_{λ∈A} Tr{ξ(λ) ≤ +∞} = 1.


Thus (13.13) is proved.

If (13.14) is not true, then there exists a number z∗ > 0 and a sequence {xi} with xi ↓ −∞ such that Φ(xi;α) > z∗ for all i. Writing

Ai = {λ ∈ Λ | Tr{ξ(λ) ≤ xi} > z∗}

for i = 1, 2, · · ·, we have Tr{Ai} ≥ α, and A1 ⊃ A2 ⊃ · · · It follows from the trust continuity theorem that

Tr{⋂_{i=1}^{∞} Ai} = lim_{i→∞} Tr{Ai} ≥ α > 0.

Thus there exists λ∗ such that λ∗ ∈ Ai for all i. Therefore

0 = lim_{i→∞} Tr{ξ(λ∗) ≤ xi} ≥ z∗ > 0.

A contradiction proves (13.14).

If (13.15) is not true, then there exists a number z∗ < 1 and a sequence {xi} with xi ↑ +∞ such that Φ(xi;α) < z∗ for all i. Writing

Ai = {λ ∈ Λ | Tr{ξ(λ) ≤ xi} < z∗}

for i = 1, 2, · · ·, we have

Tr{Ai} = 1 − Tr{λ ∈ Λ | Tr{ξ(λ) ≤ xi} ≥ z∗} > 1 − α

and A1 ⊃ A2 ⊃ · · · It follows from the trust continuity theorem that

Tr{⋂_{i=1}^{∞} Ai} = lim_{i→∞} Tr{Ai} ≥ 1 − α > 0.

Thus there exists λ∗ such that λ∗ ∈ Ai for all i. Therefore

1 = lim_{i→∞} Tr{ξ(λ∗) ≤ xi} ≤ z∗ < 1.

A contradiction proves (13.15). The proof is complete.

Theorem 13.9 Let ξ be a birough variable. Then Ch{ξ ≥ x}(α) is
(a) a decreasing and left-continuous function of α for any fixed x;
(b) a decreasing and left-continuous function of x for any fixed α.

Definition 13.11 The chance density function φ: ℜ × (0, 1] → [0, +∞) of a birough variable ξ is a function such that

Φ(x;α) = ∫_{−∞}^{x} φ(y;α) dy (13.16)

holds for all x ∈ [−∞, +∞] and α ∈ (0, 1], where Φ is the chance distribution of ξ.


13.4 Independent and Identical Distribution

This section introduces the concept of independent and identically distributed (iid) birough variables.

Definition 13.12 The birough variables ξ1, ξ2, · · · , ξn are said to be iid if and only if

(Tr{ξi(λ) ∈ B1}, Tr{ξi(λ) ∈ B2}, · · · , Tr{ξi(λ) ∈ Bm}), i = 1, 2, · · · , n

are iid rough vectors for any Borel sets B1, B2, · · · , Bm of ℜ and any positive integer m.

Theorem 13.10 Let ξ1, ξ2, · · · , ξn be iid birough variables. Then for any Borel set B of ℜ, Tr{ξi(λ) ∈ B}, i = 1, 2, · · · , n are iid rough variables.

Proof: It follows immediately from the definition.

Theorem 13.11 Let f : ℜ → ℜ be a measurable function. If ξ1, ξ2, · · · , ξn are iid birough variables, then f(ξ1), f(ξ2), · · · , f(ξn) are iid birough variables.

Proof: We have proved that f(ξ1), f(ξ2), · · · , f(ξn) are birough variables. For any positive integer m and Borel sets B1, B2, · · · , Bm of ℜ, since

f⁻¹(B1), f⁻¹(B2), · · · , f⁻¹(Bm)

are Borel sets, we know that

(Tr{ξi(λ) ∈ f⁻¹(B1)}, Tr{ξi(λ) ∈ f⁻¹(B2)}, · · · , Tr{ξi(λ) ∈ f⁻¹(Bm)}),

i = 1, 2, · · · , n are iid rough vectors. Equivalently, the rough vectors

(Tr{f(ξi(λ)) ∈ B1},Tr{f(ξi(λ)) ∈ B2}, · · · ,Tr{f(ξi(λ)) ∈ Bm}) ,

i = 1, 2, · · · , n are iid. Hence f(ξ1), f(ξ2), · · · , f(ξn) are iid birough variables.

Theorem 13.12 If ξ1, ξ2, · · · , ξn are iid birough variables such that E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are all finite for each λ, then E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are iid rough variables.

Proof: For any λ ∈ Λ, it follows from the expected value operator that

E[ξi(λ)] = ∫_0^{+∞} Tr{ξi(λ) ≥ r} dr − ∫_{−∞}^{0} Tr{ξi(λ) ≤ r} dr
        = lim_{j→∞} lim_{k→∞} (∑_{l=1}^{2k} (j/(2k)) Tr{ξi(λ) ≥ lj/(2k)} − ∑_{l=1}^{2k} (j/(2k)) Tr{ξi(λ) ≤ −lj/(2k)})


for i = 1, 2, · · · , n. Now we write

η⁺_i(λ) = ∫_0^{∞} Tr{ξi(λ) ≥ r} dr,  η⁻_i(λ) = ∫_{−∞}^{0} Tr{ξi(λ) ≤ r} dr,

η⁺_{ij}(λ) = ∫_0^{j} Tr{ξi(λ) ≥ r} dr,  η⁻_{ij}(λ) = ∫_{−j}^{0} Tr{ξi(λ) ≤ r} dr,

η⁺_{ijk}(λ) = ∑_{l=1}^{2k} (j/(2k)) Tr{ξi(λ) ≥ lj/(2k)},  η⁻_{ijk}(λ) = ∑_{l=1}^{2k} (j/(2k)) Tr{ξi(λ) ≤ −lj/(2k)}

for any positive integers j, k and i = 1, 2, · · · , n. It follows from the monotonicity of the functions Tr{ξi ≥ r} and Tr{ξi ≤ r} that the sequences {η⁺_{ijk}(λ)} and {η⁻_{ijk}(λ)} satisfy (a) for each j and k, (η⁺_{ijk}(λ), η⁻_{ijk}(λ)), i = 1, 2, · · · , n are iid rough vectors; and (b) for each i and j, η⁺_{ijk}(λ) ↑ η⁺_{ij}(λ) and η⁻_{ijk}(λ) ↑ η⁻_{ij}(λ) as k → ∞.

For any real numbers x, y, xi, yi, i = 1, 2, · · · , n, it follows from the property (a) that

Tr{η⁺_{ijk}(λ) ≤ xi, η⁻_{ijk}(λ) ≤ yi, i = 1, 2, · · · , n} = ∏_{i=1}^{n} Tr{η⁺_{ijk}(λ) ≤ xi, η⁻_{ijk}(λ) ≤ yi},

Tr{η⁺_{ijk}(λ) ≤ x, η⁻_{ijk}(λ) ≤ y} = Tr{η⁺_{i′jk}(λ) ≤ x, η⁻_{i′jk}(λ) ≤ y}, ∀i, i′.

It follows from the property (b) that

{η⁺_{ijk}(λ) ≤ xi, η⁻_{ijk}(λ) ≤ yi, i = 1, 2, · · · , n} → {η⁺_{ij}(λ) ≤ xi, η⁻_{ij}(λ) ≤ yi, i = 1, 2, · · · , n},

{η⁺_{ijk}(λ) ≤ x, η⁻_{ijk}(λ) ≤ y} → {η⁺_{ij}(λ) ≤ x, η⁻_{ij}(λ) ≤ y}

as k →∞. By using the trust continuity theorem, we get

Tr{η⁺_{ij}(λ) ≤ xi, η⁻_{ij}(λ) ≤ yi, i = 1, 2, · · · , n} = ∏_{i=1}^{n} Tr{η⁺_{ij}(λ) ≤ xi, η⁻_{ij}(λ) ≤ yi},

Tr{η⁺_{ij}(λ) ≤ x, η⁻_{ij}(λ) ≤ y} = Tr{η⁺_{i′j}(λ) ≤ x, η⁻_{i′j}(λ) ≤ y}, ∀i, i′.

Thus the sequences {η⁺_{ij}(λ)} and {η⁻_{ij}(λ)} satisfy (c) for each j, (η⁺_{ij}(λ), η⁻_{ij}(λ)), i = 1, 2, · · · , n are iid rough vectors; and (d) for each i, η⁺_{ij}(λ) ↑ η⁺_i(λ) and η⁻_{ij}(λ) ↑ η⁻_i(λ) as j → ∞.

A similar process may prove that (η⁺_i(λ), η⁻_i(λ)), i = 1, 2, · · · , n are iid rough vectors. Thus E[ξ1(λ)], E[ξ2(λ)], · · · , E[ξn(λ)] are iid rough variables. The theorem is proved.


13.5 Expected Value Operator

Definition 13.13 (Liu [75]) Let ξ be a birough variable. Then its expected value is defined by

E[ξ] = ∫_0^{+∞} Tr{λ ∈ Λ | E[ξ(λ)] ≥ r} dr − ∫_{−∞}^{0} Tr{λ ∈ Λ | E[ξ(λ)] ≤ r} dr

provided that at least one of the two integrals is finite.

Theorem 13.13 Assume that ξ and η are birough variables with finite expected values. Then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (13.17)

Proof: For any λ ∈ Λ, by the linearity of the expected value operator of rough variables, we have E[aξ(λ) + bη(λ)] = aE[ξ(λ)] + bE[η(λ)]. It follows that E[aξ + bη] = E[aE[ξ(λ)] + bE[η(λ)]] = aE[E[ξ(λ)]] + bE[E[η(λ)]] = aE[ξ] + bE[η]. The theorem is proved.

13.6 Variance, Covariance and Moments

Definition 13.14 (Liu [75]) Let ξ be a birough variable with finite expected value E[ξ]. The variance of ξ is defined as V[ξ] = E[(ξ − E[ξ])²].

Theorem 13.14 If ξ is a birough variable with finite expected value, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].

Theorem 13.15 Assume that ξ is a birough variable whose expected value exists. Then we have

V[E[ξ(λ)]] ≤ V[ξ]. (13.18)

Proof: Denote the expected value of ξ by e. It follows from Theorem 4.51 that

V[E[ξ(λ)]] = E[(E[ξ(λ)] − e)²] ≤ E[E[(ξ(λ) − e)²]] = V[ξ].

The theorem is proved.

Theorem 13.16 Let ξ be a birough variable with expected value e. Then V[ξ] = 0 if and only if Ch{ξ = e}(1) = 1.


Proof: If V[ξ] = 0, then it follows from V[ξ] = E[(ξ − e)²] that

∫_0^{+∞} Tr{λ ∈ Λ | E[(ξ(λ) − e)²] ≥ r} dr = 0

which implies that Tr{λ ∈ Λ | E[(ξ(λ) − e)²] ≥ r} = 0 for any r > 0. Therefore, Tr{λ ∈ Λ | E[(ξ(λ) − e)²] = 0} = 1. That is, there exists a set A∗ with Tr{A∗} = 1 such that E[(ξ(λ) − e)²] = 0 for each λ ∈ A∗. It follows from Theorem 4.41 that Tr{ξ(λ) = e} = 1 for each λ ∈ A∗. Hence

Ch{ξ = e}(1) = sup_{Tr{A}≥1} inf_{λ∈A} Tr{ξ(λ) = e} = 1.

Conversely, if Ch{ξ = e}(1) = 1, it follows from Theorem 13.5 that there exists a set A∗ with Tr{A∗} = 1 such that

inf_{λ∈A∗} Tr{ξ(λ) = e} = 1.

That is, Tr{(ξ(λ) − e)² ≥ r} = 0 for each r > 0 and each λ ∈ A∗. Thus

E[(ξ(λ) − e)²] = ∫_0^{+∞} Tr{(ξ(λ) − e)² ≥ r} dr = 0

for each λ ∈ A∗. It follows that Tr{λ ∈ Λ | E[(ξ(λ) − e)²] ≥ r} = 0 for any r > 0. Hence

V[ξ] = ∫_0^{+∞} Tr{λ ∈ Λ | E[(ξ(λ) − e)²] ≥ r} dr = 0.

The theorem is proved.

Definition 13.15 Let ξ and η be birough variables such that E[ξ] and E[η] are finite. Then the covariance of ξ and η is defined by

Cov[ξ, η] = E[(ξ − E[ξ])(η − E[η])]. (13.19)

Definition 13.16 For any positive integer k, the expected value E[ξ^k] is called the kth moment of the birough variable ξ. The expected value E[(ξ − E[ξ])^k] is called the kth central moment of the birough variable ξ.

13.7 Optimistic and Pessimistic Values

Definition 13.17 (Liu [75]) Let ξ be a birough variable, and γ, δ ∈ (0, 1]. Then we call

ξsup(γ, δ) = sup{r | Ch{ξ ≥ r}(γ) ≥ δ} (13.20)

the (γ, δ)-optimistic value to ξ, and

ξinf(γ, δ) = inf{r | Ch{ξ ≤ r}(γ) ≥ δ} (13.21)

the (γ, δ)-pessimistic value to ξ.
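Since Ch{ξ ≥ r}(γ) is decreasing in r and Ch{ξ ≤ r}(γ) is increasing in r (Theorems 13.8 and 13.9), the (γ, δ)-optimistic and (γ, δ)-pessimistic values can be located by bisection once those chance curves are available, for instance from the birough simulation of Section 13.10. The sketch below is only illustrative: ch_geq and ch_leq are invented placeholder curves standing in for the true chance functions, and the search interval is an assumption.

# Sketch: (gamma, delta)-optimistic and pessimistic values (13.20)-(13.21) by bisection
# over monotone chance curves in r.  The curves below are illustrative placeholders.

def ch_geq(r, gamma):
    """Stand-in for Ch{xi >= r}(gamma): decreasing in r."""
    return max(0.0, min(1.0, (4.0 - r) / 4.0))

def ch_leq(r, gamma):
    """Stand-in for Ch{xi <= r}(gamma): increasing in r."""
    return max(0.0, min(1.0, r / 4.0))

def optimistic(gamma, delta, lo=-1e3, hi=1e3, tol=1e-6):
    """sup{r : Ch{xi >= r}(gamma) >= delta}."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ch_geq(mid, gamma) >= delta:
            lo = mid
        else:
            hi = mid
    return lo

def pessimistic(gamma, delta, lo=-1e3, hi=1e3, tol=1e-6):
    """inf{r : Ch{xi <= r}(gamma) >= delta}."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ch_leq(mid, gamma) >= delta:
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    print(optimistic(0.9, 0.8))    # about 0.8 for the placeholder curve
    print(pessimistic(0.9, 0.8))   # about 3.2 for the placeholder curve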


Theorem 13.17 Let ξ be a birough variable and γ, δ ∈ (0, 1]. Assume that ξsup(γ, δ) is the (γ, δ)-optimistic value and ξinf(γ, δ) is the (γ, δ)-pessimistic value to ξ. Then we have

Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ, Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ. (13.22)

Proof: It follows from the definition of the (γ, δ)-pessimistic value that there exists a decreasing sequence {xi} such that Ch{ξ ≤ xi}(γ) ≥ δ and xi ↓ ξinf(γ, δ) as i → ∞. Since Ch{ξ ≤ x}(γ) is a right-continuous function of x, the inequality Ch{ξ ≤ ξinf(γ, δ)}(γ) ≥ δ holds.

Similarly, there exists an increasing sequence {xi} such that Ch{ξ ≥ xi}(γ) ≥ δ and xi ↑ ξsup(γ, δ) as i → ∞. Since Ch{ξ ≥ x}(γ) is a left-continuous function of x, the inequality Ch{ξ ≥ ξsup(γ, δ)}(γ) ≥ δ holds. The theorem is proved.

Theorem 13.18 Let ξsup(γ, δ) and ξinf(γ, δ) be the (γ, δ)-optimistic and (γ, δ)-pessimistic values of birough variable ξ, respectively. If γ ≤ 0.5, then we have

ξinf(γ, δ) ≤ ξsup(γ, δ) + δ1; (13.23)

if γ > 0.5, then we have

ξinf(γ, δ) + δ2 ≥ ξsup(γ, δ) (13.24)

where δ1 and δ2 are defined by

δ1 = sup_{λ∈Λ} {ξ(λ)sup(1 − δ) − ξ(λ)inf(1 − δ)},
δ2 = sup_{λ∈Λ} {ξ(λ)sup(δ) − ξ(λ)inf(δ)},

and ξ(λ)sup(δ) and ξ(λ)inf(δ) are the δ-optimistic and δ-pessimistic values of the rough variable ξ(λ) for each λ, respectively.

Proof: Assume that γ ≤ 0.5. For any given ε > 0, we define

Λ1 = {λ ∈ Λ | Tr{ξ(λ) > ξsup(γ, δ) + ε} ≥ δ},

Λ2 = {λ ∈ Λ | Tr{ξ(λ) < ξinf(γ, δ) − ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Tr{Λ1} < γ and Tr{Λ2} < γ. Thus Tr{Λ1} + Tr{Λ2} < γ + γ ≤ 1. This fact implies that Λ1 ∪ Λ2 ≠ Λ. Let λ∗ ∉ Λ1 ∪ Λ2. Then we have

Tr {ξ(λ∗) > ξsup(γ, δ) + ε} < δ,

Tr {ξ(λ∗) < ξinf(γ, δ)− ε} < δ.


Since Tr is self dual, we have

Tr {ξ(λ∗) ≤ ξsup(γ, δ) + ε} > 1− δ,

Tr {ξ(λ∗) ≥ ξinf(γ, δ)− ε} > 1− δ.

It follows from the definitions of ξ(λ∗)sup(1− δ) and ξ(λ∗)inf(1− δ) that

ξsup(γ, δ) + ε ≥ ξ(λ∗)inf(1− δ),

ξinf(γ, δ)− ε ≤ ξ(λ∗)sup(1− δ)

which implies that

ξinf(γ, δ)− ε− (ξsup(γ, δ) + ε) ≤ ξ(λ∗)sup(1− δ)− ξ(λ∗)inf(1− δ) ≤ δ1.

Letting ε → 0, we obtain (13.23).

Next we prove the inequality (13.24). Assume γ > 0.5. For any given

ε > 0, we define

Λ1 = {λ ∈ Λ | Tr{ξ(λ) ≥ ξsup(γ, δ) − ε} ≥ δ},

Λ2 = {λ ∈ Λ | Tr{ξ(λ) ≤ ξinf(γ, δ) + ε} ≥ δ}.

It follows from the definitions of ξsup(γ, δ) and ξinf(γ, δ) that Tr{Λ1} ≥ γ and Tr{Λ2} ≥ γ. Thus Tr{Λ1} + Tr{Λ2} ≥ γ + γ > 1. This fact implies that Λ1 ∩ Λ2 ≠ ∅. Let λ∗ ∈ Λ1 ∩ Λ2. Then we have

Tr {ξ(λ∗) ≥ ξsup(γ, δ)− ε} ≥ δ,

Tr {ξ(λ∗) ≤ ξinf(γ, δ) + ε} ≥ δ.

It follows from the definitions of ξ(λ∗)sup(δ) and ξ(λ∗)inf(δ) that

ξsup(γ, δ)− ε ≤ ξ(λ∗)sup(δ),

ξinf(γ, δ) + ε ≥ ξ(λ∗)inf(δ)

which implies that

ξsup(γ, δ)− ε− (ξinf(γ, δ) + ε) ≤ ξ(λ∗)sup(δ)− ξ(λ∗)inf(δ) ≤ δ2.

The inequality (13.24) is proved by letting ε→ 0.

13.8 Convergence Concepts

This section introduces four convergence concepts for sequences of birough variables: convergence almost surely (a.s.), convergence in chance, convergence in mean, and convergence in distribution.


Definition 13.18 Suppose that ξ, ξ1, ξ2, · · · are birough variables defined on the rough space (Λ, Δ, A, π). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists a set A ∈ A with Tr{A} = 1 such that {ξi(λ)} converges a.s. to ξ(λ) for every λ ∈ A.

Definition 13.19 Suppose that ξ, ξ1, ξ2, · · · are birough variables. We say that the sequence {ξi} converges in chance to ξ if

lim_{i→∞} lim_{α↓0} Ch{|ξi − ξ| ≥ ε}(α) = 0 (13.25)

for every ε > 0.

Definition 13.20 Suppose that ξ, ξ1, ξ2, · · · are birough variables with finite expected values. We say that the sequence {ξi} converges in mean to ξ if

lim_{i→∞} E[|ξi − ξ|] = 0. (13.26)

Definition 13.21 Suppose that Φ, Φ1, Φ2, · · · are the chance distributions of birough variables ξ, ξ1, ξ2, · · ·, respectively. We say that {ξi} converges in distribution to ξ if Φi(x;α) → Φ(x;α) for all continuity points (x;α) of Φ.

13.9 Laws of Large Numbers

Theorem 13.19 Let {ξi} be a sequence of independent but not necessarily identically distributed birough variables with common expected value e. If there exists a number a > 0 such that V[ξi] < a for all i, then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e as n → ∞.

Proof: Since {ξi} is a sequence of independent birough variables, we know that {E[ξi(λ)]} is a sequence of independent rough variables. By using Theorem 13.15, we get V[E[ξi(λ)]] ≤ V[ξi] < a for each i. It follows from the weak law of large numbers of rough variable that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e.

Theorem 13.20 Let {ξi} be a sequence of iid birough variables with a finite expected value e. Then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e as n → ∞.

Proof: Since {ξi} is a sequence of iid birough variables with a finite expected value e, we know that {E[ξi(λ)]} is a sequence of iid rough variables with finite expected value e. It follows from the weak law of large numbers of rough variable that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges in trust to e.

Theorem 13.21 Let {ξi} be independent birough variables with a common expected value e. If

∑_{i=1}^{∞} V[ξi]/i² < ∞, (13.27)

then (E[ξ1(λ)] + E[ξ2(λ)] + · · ·+ E[ξn(λ)])/n converges a.s. to e as n→∞.


Proof: Since {ξi} is a sequence of independent birough variables, we know that {E[ξi(λ)]} is a sequence of independent rough variables. By using Theorem 13.15, we get V[E[ξi(λ)]] ≤ V[ξi] for each i, and hence (13.27) implies ∑_{i=1}^{∞} V[E[ξi(λ)]]/i² < ∞. It follows from the strong law of large numbers of rough variable that (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e.

Theorem 13.22 Suppose that {ξi} is a sequence of iid birough variables with a finite expected value e. Then (E[ξ1(λ)] + E[ξ2(λ)] + · · · + E[ξn(λ)])/n converges a.s. to e as n → ∞.

Proof: Since {ξi} is a sequence of iid birough variables, we know that {E[ξi(λ)]} is a sequence of iid rough variables with a finite expected value e. It follows from the strong law of large numbers of rough variable that

(1/n) ∑_{i=1}^{n} E[ξi(λ)] → e, a.s.

as n→∞. The proof is complete.

13.10 Birough Simulations

In this section, we introduce birough simulations for finding critical values, computing chance functions, and calculating expected values.

Example 13.1: Suppose that ξ is an n-dimensional birough vector defined on the rough space (Λ, Δ, A, π), and f : ℜⁿ → ℜᵐ is a measurable function. For any real number α ∈ (0, 1], we design a birough simulation to compute the α-chance Ch{f(ξ) ≤ 0}(α). That is, we should find the supremum β such that

Tr{λ ∈ Λ | Tr{f(ξ(λ)) ≤ 0} ≥ β} ≥ α. (13.28)

We sample λ̄1, λ̄2, · · · , λ̄N from Δ and λ1, λ2, · · · , λN from Λ according to the measure π. For any number v, let N̄(v) denote the number of λ̄k satisfying Tr{f(ξ(λ̄k)) ≤ 0} ≥ v for k = 1, 2, · · · , N, and N(v) denote the number of λk satisfying Tr{f(ξ(λk)) ≤ 0} ≥ v for k = 1, 2, · · · , N, where Tr{·} may be estimated by rough simulation. Then we may find the maximal value v such that

(N̄(v) + N(v))/(2N) ≥ α. (13.29)

This value is an estimation of β.

Algorithm 13.1 (Birough Simulation)
Step 1. Generate λ̄1, λ̄2, · · · , λ̄N from Δ according to the measure π.
Step 2. Generate λ1, λ2, · · · , λN from Λ according to the measure π.
Step 3. Find the maximal value v such that (13.29) holds.


Step 4. Return v.

Now we consider the following two birough variables

ξ1 = ([ρ1 − 1, ρ1 + 1], [ρ1 − 2, ρ1 + 2]), with ρ1 = ([1, 2], [0, 3]),
ξ2 = ([ρ2 − 1, ρ2 + 1], [ρ2 − 2, ρ2 + 2]), with ρ2 = ([2, 3], [1, 4]).

A run of birough simulation with 5000 cycles shows that

Ch{ξ1 + ξ2 ≥ 2}(0.9) = 0.77.
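One possible Python rendering of Algorithm 13.1 for these two birough variables is sketched below. It rests on assumptions that go beyond the text: the outer rough space is taken to be the product of the spaces of ρ1 and ρ2 with uniform sampling on the lower boxes [1,2]×[2,3] and the upper boxes [0,3]×[1,4], and the inner trust Tr{ξ1(λ) + ξ2(λ) ≥ 2} is itself estimated by rough simulation as the average of the proportions of uniform samples from the lower and the upper approximation boxes that satisfy the event. All names are illustrative, and the Monte Carlo estimate fluctuates from run to run.

# Sketch of Algorithm 13.1: estimate Ch{xi1 + xi2 >= 2}(0.9) for
#   xi1 = ([rho1-1, rho1+1], [rho1-2, rho1+2]), rho1 = ([1,2],[0,3]),
#   xi2 = ([rho2-1, rho2+1], [rho2-2, rho2+2]), rho2 = ([2,3],[1,4]).
# Assumptions (hedged): uniform sampling on approximation boxes; trust of an
# event estimated as the average of the lower-box and upper-box hit rates.

import math
import random

def inner_trust(r1, r2, n=1000):
    """Rough-simulation estimate of Tr{xi1(lam) + xi2(lam) >= 2} for lam = (r1, r2)."""
    def proportion(w1, w2):
        hits = 0
        for _ in range(n):
            u = random.uniform(r1 - w1, r1 + w1) + random.uniform(r2 - w2, r2 + w2)
            hits += (u >= 2.0)
        return hits / n
    return 0.5 * (proportion(1.0, 1.0) + proportion(2.0, 2.0))

def alpha_chance(alpha=0.9, n_outer=500):
    lower_trusts = []   # outer samples from Delta = [1,2] x [2,3]
    upper_trusts = []   # outer samples from Lambda = [0,3] x [1,4]
    for _ in range(n_outer):
        lower_trusts.append(inner_trust(random.uniform(1, 2), random.uniform(2, 3)))
        upper_trusts.append(inner_trust(random.uniform(0, 3), random.uniform(1, 4)))
    # Step 3: the maximal v with (N_bar(v) + N(v)) / (2N) >= alpha is the
    # ceil(2*N*alpha)-th largest of the 2N estimated trusts.
    pooled = sorted(lower_trusts + upper_trusts, reverse=True)
    return pooled[math.ceil(2 * n_outer * alpha) - 1]

if __name__ == "__main__":
    print(alpha_chance())   # the text reports about 0.77 with 5000 cycles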

Example 13.2: Assume that ξ is an n-dimensional birough vector on the rough space (Λ, Δ, A, π), and f : ℜⁿ → ℜ is a measurable function. For any given confidence levels α and β, let us find the maximal value f̄ such that

Ch{f(ξ) ≥ f̄}(α) ≥ β (13.30)

holds. That is, we should compute the maximal value f̄ such that

Tr{λ ∈ Λ | Tr{f(ξ(λ)) ≥ f̄} ≥ β} ≥ α (13.31)

holds. We sample λ̄1, λ̄2, · · · , λ̄N from Δ and λ1, λ2, · · · , λN from Λ according to the measure π. For any number v, let N̄(v) denote the number of λ̄k satisfying Tr{f(ξ(λ̄k)) ≥ v} ≥ β for k = 1, 2, · · · , N, and N(v) denote the number of λk satisfying Tr{f(ξ(λk)) ≥ v} ≥ β for k = 1, 2, · · · , N, where Tr{·} may be estimated by rough simulation. Then we may find the maximal value v such that

(N̄(v) + N(v))/(2N) ≥ α. (13.32)

This value is an estimation of f̄.

Algorithm 13.2 (Birough Simulation)
Step 1. Generate λ̄1, λ̄2, · · · , λ̄N from Δ according to the measure π.
Step 2. Generate λ1, λ2, · · · , λN from Λ according to the measure π.
Step 3. Find the maximal value v such that (13.32) holds.
Step 4. Return v.

We now find the maximal value f̄ such that Ch{ξ1² + ξ2² ≥ f̄}(0.9) ≥ 0.9, where ξ1 and ξ2 are birough variables defined as

ξ1 = ([ρ1 − 1, ρ1 + 1], [ρ1 − 2, ρ1 + 2]), with ρ1 = ([1, 2], [0, 3]),
ξ2 = ([ρ2 − 1, ρ2 + 1], [ρ2 − 2, ρ2 + 2]), with ρ2 = ([2, 3], [1, 4]).

A run of birough simulation with 5000 cycles shows that f̄ = 1.74.


Example 13.3: Assume that ξ is an n-dimensional birough vector on the rough space (Λ, Δ, A, π), and f : ℜⁿ → ℜ is a measurable function. One problem is to calculate the expected value E[f(ξ)]. Note that, for each λ ∈ Λ, we may calculate the expected value E[f(ξ(λ))] by rough simulation. Since E[f(ξ)] is essentially the expected value of the rough variable E[f(ξ(λ))], we have the following birough simulation.

Algorithm 13.3 (Birough Simulation)
Step 1. Set L = 0.
Step 2. Generate λ̄ from Δ according to the measure π.
Step 3. Generate λ from Λ according to the measure π.
Step 4. L ← L + E[f(ξ(λ̄))] + E[f(ξ(λ))].
Step 5. Repeat the second to fourth steps N times.
Step 6. Return L/(2N).

We employ the birough simulation to calculate the expected value of ξ1ξ2, where ξ1 and ξ2 are birough variables defined as

ξ1 = ([ρ1 − 1, ρ1 + 1], [ρ1 − 2, ρ1 + 2]), with ρ1 = ([1, 2], [0, 3]),
ξ2 = ([ρ2 − 1, ρ2 + 1], [ρ2 − 2, ρ2 + 2]), with ρ2 = ([2, 3], [1, 4]).

A run of birough simulation with 5000 cycles shows that E[ξ1ξ2] = 3.73.
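A possible rendering of Algorithm 13.3 for E[ξ1ξ2] is sketched below. It relies on the same assumptions as the previous sketch (uniform sampling on the approximation boxes, product outer rough space) and additionally estimates the inner expected value E[f(ξ(λ))] as the average of the Monte Carlo means of f over the lower and the upper approximation boxes; names and sample sizes are illustrative.

# Sketch of Algorithm 13.3: estimate E[xi1 * xi2] for the birough variables above.
# Assumptions (hedged): uniform sampling on approximation boxes; inner expected
# value estimated as the average of the lower-box and upper-box sample means.

import random

def inner_expected(r1, r2, n=1000):
    """Rough-simulation estimate of E[xi1(lam) * xi2(lam)] for lam = (r1, r2)."""
    def box_mean(w1, w2):
        return sum(random.uniform(r1 - w1, r1 + w1) * random.uniform(r2 - w2, r2 + w2)
                   for _ in range(n)) / n
    return 0.5 * (box_mean(1.0, 1.0) + box_mean(2.0, 2.0))

def expected_value(n_outer=1000):
    total = 0.0
    for _ in range(n_outer):
        # one outer sample from Delta = [1,2] x [2,3] and one from Lambda = [0,3] x [1,4]
        total += inner_expected(random.uniform(1, 2), random.uniform(2, 3))
        total += inner_expected(random.uniform(0, 3), random.uniform(1, 4))
    return total / (2 * n_outer)

if __name__ == "__main__":
    print(expected_value())   # the text reports about 3.73 with 5000 cycles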


Chapter 14

Some Remarks

As a branch of mathematics that studies the behavior of random, fuzzy and rough events, uncertainty theory is the generic name of probability theory, credibility theory, and trust theory. This chapter will provide an uncertainty theory tree and discuss multifold uncertainty. Furthermore, a nonclassical credibility theory and a generalized trust theory are introduced.

14.1 Uncertainty Theory Tree

[Figure 14.1 is a tree diagram. Its root is Uncertainty Theory, with the branches Probability Theory, Credibility Theory, and Trust Theory, and the further branches Birandom Theory, Random Fuzzy Theory, Fuzzy Random Theory, Random Rough Theory, Bifuzzy Theory, Rough Random Theory, Fuzzy Rough Theory, Rough Fuzzy Theory, and Birough Theory.]

Figure 14.1: Uncertainty Theory Tree


14.2 Multifold Uncertainty

In addition to the two-fold uncertainty, we may define three-fold uncertainty, four-fold uncertainty, and so on.

Definition 14.1 (Liu [75]) A trirandom variable is a function ξ from a probability space (Ω, A, Pr) to the set of birandom variables such that Ch{ξ(ω) ∈ B} is a measurable function of ω for any Borel set B of ℜ.

Definition 14.2 (Liu [75]) A trifuzzy variable is a function from a possibility space (Θ, P(Θ), Pos) to the set of bifuzzy variables.

Definition 14.3 (Liu [75]) A trirough variable is a function ξ from a rough space (Λ, Δ, A, π) to the set of birough variables such that Ch{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ.

We may also define other types of three-fold uncertainty. For example, a fuzzy random rough variable is a function ξ from a rough space to the set of fuzzy random variables such that Ch{ξ(λ) ∈ B} is a measurable function of λ for any Borel set B of ℜ.

14.3 Ranking Uncertain Variables

Uncertain variable is the generic name for random variables, fuzzy variables, rough variables, fuzzy random variables, random fuzzy variables, etc.

Let ξ and η be two uncertain variables. Different from the situation of real numbers, there does not exist a natural ordering in an uncertain world. Thus an important problem appearing in uncertain systems is how to rank uncertain variables. The following four ranking methods are recommended.

(a) We say ξ > η if and only if E[ξ] > E[η], where E is the expected value operator of uncertain variables. This criterion leads to expected value models.

(b) We say ξ > η if and only if, for some predetermined confidence level α ∈ (0, 1], we have ξsup(α) > ηsup(α), where ξsup(α) and ηsup(α) are the α-optimistic values of ξ and η, respectively. This criterion leads to (maximax) chance-constrained programming.

(c) We say ξ > η if and only if, for some predetermined confidence level α ∈ (0, 1], we have ξinf(α) > ηinf(α), where ξinf(α) and ηinf(α) are the α-pessimistic values of ξ and η, respectively. This criterion leads to minimax chance-constrained programming.

(d) We say ξ > η if and only if Ch{ξ ≥ r} > Ch{η ≥ r} for some predetermined level r. This criterion leads to dependent-chance programming.

For detailed expositions, the interested reader may consult the book Theory and Practice of Uncertain Programming by Liu [75].


14.4 Nonclassical Credibility Theory

We have introduced the credibility theory with four axioms on Page 80. The first three axioms are all fairly straightforward and easy to accept. The fourth one, however, causes problems. In fact, we may replace Axiom 4 with a new one, thus producing a new axiomatic foundation. The resulting fuzzy operations will not coincide with the extension principle of Zadeh. This type of credibility theory is called nonclassical credibility theory. Let us begin with the following four axioms:

Axiom 1. Pos{Θ} = 1.

Axiom 2. Pos{∅} = 0.

Axiom 3. Pos{∪iAi} = supi Pos{Ai} for any collection {Ai} in P(Θ).

Axiom 4′. Let Θi be nonempty sets on which Posi{·} satisfy the first three axioms, i = 1, 2, · · · , n, respectively, and Θ = Θ1 × Θ2 × · · · × Θn. Then

Pos{A} = sup_{(θ1,θ2,···,θn)∈A} Pos1{θ1} × Pos2{θ2} × · · · × Posn{θn} (14.1)

for each A ∈ P(Θ). In that case we write Pos = Pos1 × Pos2 × · · · × Posn.

Product Possibility Space

The first question is whether Pos = Pos1 × Pos2 × · · · × Posn satisfies the first three axioms or not.

Theorem 14.1 Suppose that (Θi, P(Θi), Posi), i = 1, 2, · · · , n are possibility spaces. Let Θ = Θ1 × Θ2 × · · · × Θn and Pos = Pos1 × Pos2 × · · · × Posn. Then the set function Pos is a possibility measure on P(Θ), and (Θ, P(Θ), Pos) is a possibility space.

Proof: We must prove that Pos satisfies the first three axioms. It is obvious that Pos{∅} = 0 and Pos{Θ} = 1. In addition, for any arbitrary collection {Ak} in P(Θ), we have

Pos{∪kAk} = sup_{(θ1,θ2,···,θn)∈∪kAk} Pos1{θ1} × Pos2{θ2} × · · · × Posn{θn}
          = sup_k sup_{(θ1,θ2,···,θn)∈Ak} Pos1{θ1} × Pos2{θ2} × · · · × Posn{θn}
          = sup_k Pos{Ak}.

Thus the set function Pos is a possibility measure and (Θ, P(Θ), Pos) is a possibility space.

Definition 14.4 Let (Θi, P(Θi), Posi), i = 1, 2, · · · , n be possibility spaces, Θ = Θ1 × Θ2 × · · · × Θn and Pos = Pos1 × Pos2 × · · · × Posn. Then (Θ, P(Θ), Pos) is called the product possibility space of (Θi, P(Θi), Posi), i = 1, 2, · · · , n.


Infinite Product Possibility Space

Theorem 14.2 Let (Θi, P(Θi), Posi), i = 1, 2, · · · be possibility spaces. If

Θ = Θ1 × Θ2 × · · · , (14.2)

Pos{A} = sup_{(θ1,θ2,···)∈A} Pos1{θ1} × Pos2{θ2} × · · · , (14.3)

then the set function Pos is a possibility measure on P(Θ), and (Θ, P(Θ), Pos) is a possibility space.

Proof: We must prove that Pos satisfies the first three axioms. It is obvious that Pos{∅} = 0 and Pos{Θ} = 1. In addition, for any arbitrary collection {Ai} in P(Θ), we have

Pos{∪iAi} = sup_{(θ1,θ2,···)∈∪iAi} Pos1{θ1} × Pos2{θ2} × · · ·
          = sup_i sup_{(θ1,θ2,···)∈Ai} Pos1{θ1} × Pos2{θ2} × · · ·
          = sup_i Pos{Ai}.

Thus the set function Pos defined by (14.3) is a possibility measure and (Θ, P(Θ), Pos) is a possibility space.

Definition 14.5 Let (Θi, P(Θi), Posi), i = 1, 2, · · · be possibility spaces, Θ = Θ1 × Θ2 × · · · and Pos = Pos1 × Pos2 × · · ·. Then (Θ, P(Θ), Pos) is called the infinite product possibility space of (Θi, P(Θi), Posi), i = 1, 2, · · ·.

New Fuzzy Arithmetic

As defined before, a fuzzy variable is a function from a possibility space (Θ, P(Θ), Pos) to the set of real numbers.

Definition 14.6 (New Fuzzy Arithmetic on Single Possibility Space) Let f : ℜⁿ → ℜ be a function, and ξ1, ξ2, · · · , ξn fuzzy variables on the possibility space (Θ, P(Θ), Pos). Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy variable defined as

ξ(θ) = f(ξ1(θ), ξ2(θ), · · · , ξn(θ)), ∀θ ∈ Θ. (14.4)

Definition 14.7 (New Fuzzy Arithmetic on Different Possibility Spaces) Let f : ℜⁿ → ℜ be a function, and ξi fuzzy variables on the possibility spaces (Θi, P(Θi), Posi), i = 1, 2, · · · , n, respectively. Then ξ = f(ξ1, ξ2, · · · , ξn) is a fuzzy variable defined on the product possibility space (Θ, P(Θ), Pos) as

ξ(θ1, θ2, · · · , θn) = f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn)) (14.5)

for any (θ1, θ2, · · · , θn) ∈ Θ.


Independent Events and Independent Fuzzy Variables

Definition 14.8 The events Ai, i ∈ I are said to be independent if and only if for any collection {i1, i2, · · · , ik} of distinct indices in I, we have

Pos{Ai1 ∩ Ai2 ∩ · · · ∩ Aik} = Pos{Ai1}Pos{Ai2} · · · Pos{Aik}. (14.6)

Definition 14.9 The fuzzy variables ξ1, ξ2, · · · , ξm are said to be independent if and only if

Pos{ξi ∈ Bi, i = 1, 2, · · · , m} = ∏_{i=1}^{m} Pos{ξi ∈ Bi} (14.7)

for any sets B1, B2, · · · , Bm of ℜ.

Example 14.1: Let Θ = {(θ′1, θ′′1), (θ′1, θ′′2), (θ′2, θ′′1), (θ′2, θ′′2)}, Pos{∅} = 0, Pos{(θ′1, θ′′1)} = 1, Pos{(θ′1, θ′′2)} = 0.5, Pos{(θ′2, θ′′1)} = 0.8, Pos{(θ′2, θ′′2)} = 0.4 and Pos{Θ} = 1. Two fuzzy variables are defined as

ξ1(θ′, θ′′) = 0 if θ′ = θ′1, and 1 if θ′ = θ′2;
ξ2(θ′, θ′′) = 1 if θ′′ = θ′′1, and 0 if θ′′ = θ′′2.

Then we have

Pos{ξ1 = 1, ξ2 = 1} = 0.8 = Pos{ξ1 = 1} × Pos{ξ2 = 1},
Pos{ξ1 = 1, ξ2 = 0} = 0.4 = Pos{ξ1 = 1} × Pos{ξ2 = 0},
Pos{ξ1 = 0, ξ2 = 1} = 1.0 = Pos{ξ1 = 0} × Pos{ξ2 = 1},
Pos{ξ1 = 0, ξ2 = 0} = 0.5 = Pos{ξ1 = 0} × Pos{ξ2 = 0}.

Thus ξ1 and ξ2 are independent fuzzy variables.
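For a finite possibility space, the independence condition (14.7) can be checked mechanically by comparing joint and marginal possibilities. The sketch below redoes the computation of Example 14.1; the labels "t1", "t2", "s1", "s2" stand for θ′1, θ′2, θ′′1, θ′′2 and the data structures are illustrative.

# Sketch: checking independence (14.7) of the two fuzzy variables of Example 14.1
# on a finite possibility space by enumerating joint and marginal possibilities.

pos = {
    ("t1", "s1"): 1.0, ("t1", "s2"): 0.5,
    ("t2", "s1"): 0.8, ("t2", "s2"): 0.4,
}
xi1 = lambda t, s: 0 if t == "t1" else 1
xi2 = lambda t, s: 1 if s == "s1" else 0

def possibility(event):
    """Pos of an event = sup of atom possibilities over the atoms in the event."""
    values = [p for (t, s), p in pos.items() if event(t, s)]
    return max(values) if values else 0.0

independent = True
for a in (0, 1):
    for b in (0, 1):
        joint = possibility(lambda t, s: xi1(t, s) == a and xi2(t, s) == b)
        product = possibility(lambda t, s: xi1(t, s) == a) * \
                  possibility(lambda t, s: xi2(t, s) == b)
        print(a, b, joint, product)
        independent = independent and abs(joint - product) < 1e-12

print("independent:", independent)   # True, as in Example 14.1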

Example 14.2: Consider Θ = {θ1, θ2}, Pos{θ1} = 1, Pos{θ2} = 0.8, and the fuzzy variables are defined by

ξ1(θ) = 0 if θ = θ1, and 1 if θ = θ2;
ξ2(θ) = 1 if θ = θ1, and 0 if θ = θ2.

Then we have

Pos{ξ1 = 1, ξ2 = 1} = Pos{∅} = 0 ≠ 0.8 × 1 = Pos{ξ1 = 1} × Pos{ξ2 = 1}.

Thus ξ1 and ξ2 are dependent fuzzy variables.

Theorem 14.3 Let ξi be independent fuzzy variables, and fi : ℜ → ℜ functions, i = 1, 2, · · · , m. Then f1(ξ1), f2(ξ2), · · · , fm(ξm) are independent fuzzy variables.


Proof: For any sets B1, B2, · · · , Bm of ℜ, we have

Pos{f1(ξ1) ∈ B1, f2(ξ2) ∈ B2, · · · , fm(ξm) ∈ Bm}
= Pos{ξ1 ∈ f1⁻¹(B1), ξ2 ∈ f2⁻¹(B2), · · · , ξm ∈ fm⁻¹(Bm)}
= Pos{ξ1 ∈ f1⁻¹(B1)} × Pos{ξ2 ∈ f2⁻¹(B2)} × · · · × Pos{ξm ∈ fm⁻¹(Bm)}
= Pos{f1(ξ1) ∈ B1} × Pos{f2(ξ2) ∈ B2} × · · · × Pos{fm(ξm) ∈ Bm}.

Thus f1(ξ1), f2(ξ2), · · · , fm(ξm) are independent fuzzy variables.

Theorem 14.4 Let ξ1, ξ2, · · · , ξn be independent fuzzy variables with membership functions μ1, μ2, · · · , μn, respectively, and f : ℜⁿ → ℜ a function. Then the membership function μ of ξ = f(ξ1, ξ2, · · · , ξn) is derived from the membership functions μ1, μ2, · · · , μn by

μ(x) = sup_{x1,x2,···,xn∈ℜ} {∏_{i=1}^{n} μi(xi) | x = f(x1, x2, · · · , xn)}. (14.8)

Proof: It follows from Definition 14.7 that the membership function of ξ = f(ξ1, ξ2, · · · , ξn) is

μ(x) = Pos{(θ1, θ2, · · · , θn) ∈ Θ | x = f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn))}
     = sup_{θi∈Θi, i=1,2,···,n} {∏_{i=1}^{n} Posi{θi} | x = f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn))}
     = sup_{x1,x2,···,xn∈ℜ} {∏_{i=1}^{n} μi(xi) | x = f(x1, x2, · · · , xn)}.

The theorem is proved.

Example 14.3: Assume that ξ and η are simple fuzzy variables, i.e.,

ξ = { a1 with possibility μ1, a2 with possibility μ2, · · · , am with possibility μm },
η = { b1 with possibility ν1, b2 with possibility ν2, · · · , bn with possibility νn }.

Then the sum ξ + η is a simple fuzzy variable taking values ai + bj with possibilities μi · νj, i = 1, 2, · · · , m, j = 1, 2, · · · , n, respectively. The product ξ · η is also a simple fuzzy variable taking values ai · bj with possibilities μi · νj, i = 1, 2, · · · , m, j = 1, 2, · · · , n, respectively.
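Under the new arithmetic, coinciding values of the sum (or product) are merged by taking the supremum of the corresponding possibilities, in line with (14.8). The small sketch below carries out this combination for simple fuzzy variables represented as value-to-possibility dictionaries; the numerical data are illustrative.

# Sketch: new fuzzy arithmetic for simple fuzzy variables (Example 14.3).
# The sum xi + eta takes value a_i + b_j with possibility mu_i * nu_j, and equal
# values are merged with sup, following (14.8).

def combine(xi, eta, op=lambda x, y: x + y):
    """xi, eta: dicts value -> possibility.  Returns the dict for op(xi, eta)."""
    out = {}
    for a, mu in xi.items():
        for b, nu in eta.items():
            v = op(a, b)
            out[v] = max(out.get(v, 0.0), mu * nu)
    return out

if __name__ == "__main__":
    xi  = {0: 1.0, 1: 0.8}            # illustrative simple fuzzy variable
    eta = {0: 1.0, 1: 0.8}
    print(combine(xi, eta))                          # sum: {0: 1.0, 1: 0.8, 2: 0.64}
    print(combine(xi, eta, op=lambda x, y: x * y))   # product: {0: 1.0, 1: 0.64}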

Example 14.4: Let ξ1 and ξ2 be fuzzy variables defined by the membership functions

μ1(x) = 1 if x ∈ [a1, b1] and 0 otherwise,  μ2(x) = 1 if x ∈ [a2, b2] and 0 otherwise,


respectively. Then the sum ξ1 + ξ2 is a fuzzy variable whose membership function is

μ(x) = 1 if x ∈ [a1 + a2, b1 + b2], and 0 otherwise.

Example 14.5: From the new fuzzy arithmetic, we can obtain the sum of trapezoidal fuzzy variables ξ = (a1, a2, a3, a4) and η = (b1, b2, b3, b4). Different from the classical credibility theory, the sum ξ + η is not a trapezoidal fuzzy variable.

Theorem 14.5 Let ξ1, ξ2, · · · , ξn be independent fuzzy variables, and f : ℜⁿ → ℜᵐ a function. Then the possibility of the fuzzy event f(ξ1, ξ2, · · · , ξn) ≤ 0 is

Pos{f(ξ1, ξ2, · · · , ξn) ≤ 0} = sup_{x1,x2,···,xn} {∏_{i=1}^{n} μi(xi) | f(x1, x2, · · · , xn) ≤ 0}.

Proof: Assume that ξi are defined on the possibility spaces (Θi, P(Θi), Posi), i = 1, 2, · · · , n, respectively. Then the fuzzy event f(ξ1, ξ2, · · · , ξn) ≤ 0 is defined on the product possibility space (Θ, P(Θ), Pos), whose possibility is

Pos{f(ξ1, ξ2, · · · , ξn) ≤ 0}
= Pos{(θ1, θ2, · · · , θn) ∈ Θ | f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn)) ≤ 0}
= sup_{θi∈Θi, 1≤i≤n} {∏_{i=1}^{n} Posi{θi} | f(ξ1(θ1), ξ2(θ2), · · · , ξn(θn)) ≤ 0}
= sup_{x1,x2,···,xn∈ℜ} {∏_{i=1}^{n} μi(xi) | f(x1, x2, · · · , xn) ≤ 0}.

The theorem is proved.

Definition 14.10 The n-dimensional fuzzy vectors ξ1, ξ2, · · · , ξm are said to be independent if and only if

Pos{ξi ∈ Bi, i = 1, 2, · · · , m} = ∏_{i=1}^{m} Pos{ξi ∈ Bi} (14.9)

for any sets B1, B2, · · · , Bm of ℜⁿ.

Expected Value Operator

Using the new fuzzy arithmetic, the expected value operator for a fuzzy variable ξ is defined by

E[ξ] = ∫_0^{+∞} Cr{ξ ≥ r} dr − ∫_{−∞}^{0} Cr{ξ ≤ r} dr (14.10)


provided that at least one of the two integrals is finite. This definition has the same form as the old one. However, it has no linearity property, even for independent fuzzy variables.

Example 14.6: Now we define two fuzzy variables as follows,

ξ1 = { 0 with possibility 1; 1 with possibility 0.8 },
ξ2 = { 0 with possibility 1; 1 with possibility 0.8 }.

Then their sum is

ξ1 + ξ2 = { 0 with possibility 1; 1 with possibility 0.8; 2 with possibility 0.64 }.

Since E[ξ1] = E[ξ2] = 0.4 and E[ξ1 + ξ2] = 0.72, we know that

E[ξ1 + ξ2] ≠ E[ξ1] + E[ξ2].
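The failure of linearity in this example can be checked numerically: with Cr = (Pos + Nec)/2, the expected value (14.10) of a simple nonnegative fuzzy variable reduces to an integral of the credibilities Cr{ξ ≥ r}, which the sketch below evaluates by a midpoint rule. The representation of a simple fuzzy variable as a value-to-possibility dictionary, and the grid parameters, are assumptions of the sketch.

# Sketch: verifying E[xi1] = E[xi2] = 0.4 and E[xi1 + xi2] = 0.72 in Example 14.6.
# A simple fuzzy variable is a dict value -> possibility; Cr = (Pos + Nec)/2.

def cr_geq(xi, r):
    pos = max((p for v, p in xi.items() if v >= r), default=0.0)
    nec = 1.0 - max((p for v, p in xi.items() if v < r), default=0.0)
    return 0.5 * (pos + nec)

def expected(xi, step=1e-3, upper=10.0):
    """E[xi] as the integral of Cr{xi >= r} dr for nonnegative xi (numerical)."""
    n = int(upper / step)
    return sum(cr_geq(xi, (i + 0.5) * step) for i in range(n)) * step

xi1 = {0: 1.0, 1: 0.8}
xi2 = {0: 1.0, 1: 0.8}
xi_sum = {0: 1.0, 1: 0.8, 2: 0.64}   # xi1 + xi2 under the new arithmetic

print(expected(xi1), expected(xi2), expected(xi_sum))   # about 0.4, 0.4, 0.72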

Conditional Possibility

We consider the possibility of an event A after it has been learned that some other event B has occurred. This new possibility of A is called the conditional possibility of the event A given that the event B has occurred.

Definition 14.11 Let (Θ, P(Θ), Pos) be a possibility space, and A, B ∈ P(Θ). Then the conditional possibility of A given B is defined by

Pos{A|B} = Pos{A ∩ B} / Pos{B} (14.11)

provided that Pos{B} > 0.

Remark 14.1: When A and B are independent events, and B has occurred, it is reasonable that the possibility of the event A remains unchanged. The following formula shows this fact:

Pos{A|B} = Pos{A ∩ B}/Pos{B} = (Pos{A} × Pos{B})/Pos{B} = Pos{A}.

Theorem 14.6 Let (Θ, P(Θ), Pos) be a possibility space, and B ∈ P(Θ). If Pos{B} > 0, then Pos{·|B} defined by (14.11) is a possibility measure on (Θ, P(Θ)), and (Θ, P(Θ), Pos{·|B}) is a possibility space.

Proof: First, we have

Pos{Θ|B} = Pos{Θ ∩ B}/Pos{B} = Pos{B}/Pos{B} = 1,


Pos{∅|B} = Pos{∅ ∩ B}/Pos{B} = Pos{∅}/Pos{B} = 0.

Second, for any sequence {Ai} of events, we have

Pos{⋃_i Ai | B} = Pos{(⋃_i Ai) ∩ B}/Pos{B} = (sup_i Pos{Ai ∩ B})/Pos{B} = sup_i Pos{Ai|B}.

Thus Pos{·|B} is a possibility measure on (Θ, P(Θ)). Furthermore, the triplet (Θ, P(Θ), Pos{·|B}) is a possibility space.

Definition 14.12 Let (Θ, P(Θ), Pos) be a possibility space, and A, B ∈ P(Θ). Then the conditional necessity of A given B is defined by

Nec{A|B} = 1 − Pos{Ac|B}, (14.12)

and the conditional credibility of A given B is defined by

Cr{A|B} = (1/2)(Pos{A|B} + Nec{A|B}) (14.13)

provided that Pos{B} > 0.
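On a finite possibility space these conditional quantities are direct to compute. The sketch below evaluates Pos{A|B}, Nec{A|B} and Cr{A|B} for illustrative events on the four-point space of Example 14.1; the events A and B chosen here are assumptions of the sketch, not taken from the text.

# Sketch: conditional possibility (14.11), necessity (14.12) and credibility (14.13)
# on the finite possibility space of Example 14.1.  Events are sets of atoms.

pos_atom = {
    ("t1", "s1"): 1.0, ("t1", "s2"): 0.5,
    ("t2", "s1"): 0.8, ("t2", "s2"): 0.4,
}
atoms = set(pos_atom)

def pos(event):
    return max((pos_atom[w] for w in event), default=0.0)

def pos_given(A, B):
    return pos(A & B) / pos(B)            # requires Pos{B} > 0

def nec_given(A, B):
    return 1.0 - pos_given(atoms - A, B)

def cr_given(A, B):
    return 0.5 * (pos_given(A, B) + nec_given(A, B))

if __name__ == "__main__":
    A = {("t2", "s1"), ("t2", "s2")}      # illustrative event {theta' = theta'_2}
    B = {("t1", "s1"), ("t2", "s1")}      # illustrative event {theta'' = theta''_1}
    print(pos_given(A, B), nec_given(A, B), cr_given(A, B))   # 0.8, 0.0, 0.4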

Definition 14.13 Let (Θ, P(Θ), Pos) be a possibility space. Then the conditional credibility distribution Φ: [−∞, +∞] × P(Θ) → [0, 1] of a fuzzy variable ξ given B is defined by

Φ(x|B) = Cr{ξ ≤ x | B} (14.14)

provided that Pos{B} > 0.

Definition 14.14 Let (Θ, P(Θ), Pos) be a possibility space and Pos{B} > 0. Then the conditional credibility density function φ: ℜ × P(Θ) → [0, +∞) of a fuzzy variable ξ given B is a function such that

Φ(x|B) = ∫_{−∞}^{x} φ(y|B) dy (14.15)

holds for all x ∈ [−∞, +∞], where Φ is the conditional credibility distribution of the fuzzy variable ξ given B.

Example 14.7: Let ξ and η be fuzzy variables. Then the conditional credibility distribution of ξ given η = y is

Φ(x|η = y) = [Pos{η = y} + Pos{ξ ≤ x, η = y} − Pos{ξ > x, η = y}] / (2 Pos{η = y})

provided that Pos{η = y} ≠ 0.


Example 14.8: Let (ξ, η) be a fuzzy vector with a joint membership function μ. If sup_r μ(r, y) ≠ 0 for some y, then the conditional membership function of ξ given η = y is

ν(x|η = y) = μ(x, y) / sup_r μ(r, y).

Definition 14.15 Let ξ be a fuzzy variable defined on the possibility space (Θ, P(Θ), Pos). Then the conditional expected value of ξ given B is defined by

E[ξ|B] = ∫_0^{+∞} Cr{ξ ≥ r|B} dr − ∫_{−∞}^{0} Cr{ξ ≤ r|B} dr (14.16)

provided that at least one of the two integrals is finite.

14.5 Generalized Trust Theory

We have given a trust theory to study the behavior of rough events. Now we replace Axioms 3 and 4 with two new axioms and produce a generalized trust theory.

Let Λ be a nonempty set representing the sample space, P(Λ) the power set of Λ, Δ a subset of Λ, and π a real-valued set function. The four axioms are listed as follows:

Axiom 1. π{Λ} < +∞.

Axiom 2. π{Δ} > 0.

Axiom 3′. π{∅} = 0.

Axiom 4′. π{∪iAi} = supi π{Ai} for any collection {Ai} in P(Λ).

We may also accept another axiomatic system. Let Λ be a nonempty set representing the sample space, A a σ-algebra of subsets of Λ, Δ an element in A, and π a real-valued set function. The four axioms are listed as follows:

Axiom 1. π{Λ} < +∞.

Axiom 2. π{Δ} > 0.

Axiom 3′. π{∅} = 0.

Axiom 4′′. π{A} ≤ π{B} whenever A,B ∈ A and A ⊂ B.

Definition 14.16 (Liu [79]) Let Λ be a nonempty set, A a σ-algebra of subsets of Λ, Δ an element in A, and π a set function satisfying the four axioms. Then (Λ, Δ, A, π) is called a generalized rough space.

Definition 14.17 (Liu [79]) A rough variable ξ is a function from the generalized rough space (Λ, Δ, A, π) to the set of real numbers such that for every


Borel set B of ℜ, we have

{λ ∈ Λ | ξ(λ) ∈ B} ∈ A. (14.17)

The lower and the upper approximations of the rough variable ξ are then defined as follows,

ξ̲ = {ξ(λ) | λ ∈ Δ},  ξ̄ = {ξ(λ) | λ ∈ Λ}. (14.18)

From the generalized rough space and the new definition of rough variable, we may produce a generalized trust theory.


Bibliography

[1] Alefeld, G., Herzberger, J., Introduction to Interval Computations, Academic Press, New York, 1983.

[2] Ash, R.B., Real Analysis and Probability, Academic Press, New York, 1972.

[3] Atanassov, K.T., Intuitionistic fuzzy sets, Fuzzy Sets and Systems, Vol. 20, No. 1, 87-96, 1986.

[4] Atanassov, K.T., More on intuitionistic fuzzy sets, Fuzzy Sets and Systems, Vol. 33, No. 1, 37-46, 1989.

[5] Atanassov, K.T., New operations defined over the intuitionistic fuzzy sets, Fuzzy Sets and Systems, Vol. 61, No. 2, 137-142, 1994.

[6] Atanassov, K.T., Intuitionistic Fuzzy Sets: Theory and Applications, Physica-Verlag, Heidelberg, 1999.

[7] Bamber, D., Goodman, I.R., Nguyen, H.T., Extension of the concept of propositional deduction from classical logic to probability: An overview of probability-selection approaches, Information Sciences, Vol. 131, 195-250, 2001.

[8] Bandemer, H., and Nather, W., Fuzzy Data Analysis, Kluwer, Dordrecht, 1992.

[9] Bellman, R.E., and Zadeh, L.A., Decision making in a fuzzy environment, Management Science, Vol. 17, 141-164, 1970.

[10] Bitran, G.R., Linear multiple objective problems with interval coefficients, Management Science, Vol. 26, 694-706, 1980.

[11] Bratley, P., Fox, B.L., and Schrage, L.E., A Guide to Simulation, Springer-Verlag, New York, 1987.

[12] Buckley, J.J., Possibility and necessity in optimization, Fuzzy Sets and Systems, Vol. 25, 1-13, 1988.

[13] Buckley, J.J., Stochastic versus possibilistic programming, Fuzzy Sets and Systems, Vol. 34, 173-177, 1990.

[14] Cadenas, J.M., and Verdegay, J.L., Using fuzzy numbers in linear programming, IEEE Transactions on Systems, Man and Cybernetics–Part B, Vol. 27, No. 6, 1016-1022, 1997.

[15] Campos, L., and Gonzalez, A., A subjective approach for ranking fuzzy numbers, Fuzzy Sets and Systems, Vol. 29, 145-153, 1989.

Page 407: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

400 Bibliography

[16] Campos, L., and Verdegay, J.L., Linear programming problems and rankingof fuzzy numbers, Fuzzy Sets and Systems, Vol. 32, 1-11, 1989.

[17] Chanas, S., and Kuchta, D., Multiobjective programming in optimizationof interval objective functions—a generalized approach, European Journal ofOperational Research, Vol. 94, 594-598, 1996.

[18] Chen, S.J., and Hwang, C.L., Fuzzy Multiple Attribute Decision Making:Methods and Applications, Springer-Verlag, Berlin, 1992.

[19] Dubois, D. and Prade, H., Operations on fuzzy numbers, International Jour-nal of System Sciences, Vol. 9, 613-626, 1978.

[20] Dubois, D., and Prade, H., Fuzzy Sets and Systems, Theory and Applications,Academic Press, New York, 1980.

[21] Dubois, D., and Prade, H., Twofold fuzzy sets: An approach to the repre-sentation of sets with fuzzy boundaries based on possibility and necessitymeasures, The Journal of Fuzzy Mathematics, Vol. 3, No. 4, 53-76, 1983.

[22] Dubois, D., and Prade, H., Fuzzy cardinality and the modeling of imprecisequantification, Fuzzy Sets and Systems, Vol. 16, 199-230, 1985.

[23] Dubois, D., and Prade, H., The mean value of a fuzzy number, Fuzzy Setsand Systems, Vol. 24, 279-300, 1987.

[24] Dubois, D., and Prade, H., Twofold fuzzy sets and rough sets — some issuesin knowledge representation, Fuzzy Sets and Systems, Vol. 23, 3-18, 1987.

[25] Dubois, D., and Prade, H., Possibility Theory: An Approach to ComputerizedProcessing of Uncertainty, Plenum, New York, 1988.

[26] Dubois, D., and Prade, H., Fuzzy numbers: An overview, in Analysis of FuzzyInformation, Vol. 2, 3-39, Bezdek, J.C. (Ed.), CRC Press, Boca Raton, 1988.

[27] Dubois, D., and Prade, H., Rough fuzzy sets and fuzzy rough sets, Interna-tional Journal of General Systems, Vol. 17, 191-200, 1990.

[28] Dunyak, J., Saad, I.W., and Wunsch, D., A theory of independent fuzzyprobability for system reliability, IEEE Transactions on Fuzzy Systems, Vol.7, No. 3, 286-294, 1999.

[29] Fishman, G.S., Monte Carlo: Concepts, Algorithms, and Applications,Springer-Verlag, New York, 1996.

[30] Gao, J., and Liu, B., New primitive chance measures of fuzzy random event,International Journal of Fuzzy Systems, Vol. 3, No. 4, 527-531, 2001.

[31] Gonzalez, A., A study of the ranking function approach through mean values,Fuzzy Sets and Systems, Vol. 35, 29-41, 1990.

[32] Guan, J., and Bell, D.A., Evidence Theory and its Applications, North-Holland, Amsterdam, 1991.

[33] Hansen, E., Global Optimization Using Interval Analysis, Marcel Dekker, NewYork, 1992.

[34] Heilpern, S., The expected value of a fuzzy number, Fuzzy Sets and Systems,Vol. 47, 81-86, 1992.

[35] Hisdal, E., Logical Structures for Representation of Knowledge and Uncer-tainty, Physica-Verlag, Heidelberg, 1998.

Page 408: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Bibliography 401

[36] Hua, N., Properties of Moment and Covariance of Fuzzy Variables, BachelorThesis, Department of Mathematical Sciences, Tsinghua University, Beijing,2003.

[37] Inuiguchi, M., and Ramık, J., Possibilistic linear programming: A brief re-view of fuzzy mathematical programming and a comparison with stochasticprogramming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111, No. 1, 3-28, 2000.

[38] Ishibuchi, H., and Tanaka, H., Multiobjective programming in optimizationof the interval objective function, European Journal of Operational Research,Vol. 48, 219-225, 1990.

[39] John, R.I., Type 2 fuzzy sets: An appraisal of theory and applications, In-ternational Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,Vol. 6, No. 6, 563-576, 1998.

[40] Kacprzyk, J., and Esogbue, A.O., Fuzzy dynamic programming: Main devel-opments and applications, Fuzzy Sets and Systems, Vol. 81, 31-45, 1996.

[41] Karnik, N.N., Mendel, J.M., and Liang, Q., Type-2 fuzzy logic systems, IEEETransactions on Fuzzy Systems, Vol. 7, No. 6, 643-658, 1999.

[42] Karnik, N.N., Mendel, J.M., Operations on type-2 fuzzy sets, Fuzzy Sets andSystems, Vol. 122, 327-248, 2001.

[43] Karnik, N.N., Mendel, J.M., and Liang, Q., Centroid of a type-2 fuzzy set,Information Sciences, Vol. 132, 195-220, 2001.

[44] Kaufmann, A., Introduction to the Theory of Fuzzy Subsets, Vol.I, AcademicPress, New York, 1975.

[45] Kaufmann, A. and Gupta, M.M., Introduction to Fuzzy Arithmetic: Theoryand Applications, Van Nostrand Reinhold, New York, 1985.

[46] Kaufmann, A. and Gupta, M.M., Fuzzy Mathematical Models in Engineeringand Management Science, 2nd ed., North-Holland, Amsterdam, 1991.

[47] Klement, E.P., Puri, M.L., and Ralescu, D.A., Limit theorems for fuzzy ran-dom variables, Proceedings of the Royal Society of London, Vol. 407, 171-182,1986.

[48] Klir, G.J. and Folger, T.A., Fuzzy Sets, Uncertainty, and Information,Prentice-Hall, Englewood Cliffs, NJ, 1980.

[49] Klir, G.J. and Yuan, B., Fuzzy Sets and Fuzzy Logic: Theory and Applica-tions, Prentice-Hall, New Jersey, 1995.

[50] Kruse, R., and Meyer, K.D., Statistics with Vague Data, D. Reidel PublishingCompany, Dordrecht, 1987.

[51] Kwakernaak, H., Fuzzy random variables–I. Definitions and theorems, Infor-mation Sciences, Vol. 15, 1-29, 1978.

[52] Kwakernaak H., Fuzzy random variables–II. Algorithms and examples for thediscrete case, Information Sciences, Vol. 17, 253-278, 1979.

[53] Lai, Y.-J. and Hwang, C.-L., A new approach to some possibilistic linearprogramming problems, Fuzzy Sets and Systems, Vol. 49, 121-133, 1992.

Page 409: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

402 Bibliography

[54] Lai, Y.-J. and Hwang, C.-L., Fuzzy Multiple Objective Decision Making:Methods and Applications, Springer-Verlag, New York, 1994.

[55] Laha, R.G., and Rohatgi, Probability Theory, Wiley, New York, 1979.

[56] Law, A.M. and Kelton, W.D., Simulation Modelling & Analysis, 2nd edition,McGraw-Hill, New York, 1991.

[57] Lee, E.S., Fuzzy multiple level programming, Applied Mathematics and Com-putation, Vol. 120, 79-90, 2001.

[58] Li, H.-L., and Yu, C.-S., A fuzzy multiobjective program with quasiconcavemembership functions and fuzzy coefficients, Fuzzy Sets and Systems, Vol.109, No. 1, 59-81, 2000.

[59] Li, S.M., Ogura, Y. and Nguyen, H.T., Gaussian processes and martingalesfor fuzzy valued random variables with continuous parameter, InformationSciences, Vol. 133, 7-21, 2001.

[60] Liu, B., Dependent-chance goal programming and its genetic algorithm basedapproach, Mathematical and Computer Modelling, Vol. 24, No. 7, 43-52, 1996.

[61] Liu, B., and Esogbue, A.O., Fuzzy criterion set and fuzzy criterion dynamicprogramming, Journal of Mathematical Analysis and Applications, Vol. 199,No. 1, 293-311, 1996.

[62] Liu, B., Dependent-chance programming: A class of stochastic programming,Computers & Mathematics with Applications, Vol. 34, No. 12, 89-104, 1997.

[63] Liu, B., and Iwamura, K., Modelling stochastic decision systems usingdependent-chance programming, European Journal of Operational Research,Vol. 101, No. 1, 193-203, 1997.

[64] Liu, B., and Iwamura, K., Chance constrained programming with fuzzy pa-rameters, Fuzzy Sets and Systems, Vol. 94, No. 2, 227-237, 1998.

[65] Liu, B., and Iwamura, K., A note on chance constrained programming withfuzzy coefficients, Fuzzy Sets and Systems, Vol. 100, Nos. 1-3, 229-233, 1998.

[66] Liu, B., Minimax chance constrained programming models for fuzzy decisionsystems, Information Sciences, Vol. 112, Nos. 1-4, 25-38, 1998.

[67] Liu, B., Dependent-chance programming with fuzzy decisions, IEEE Trans-actions on Fuzzy Systems, Vol. 7, No. 3, 354-360, 1999.

[68] Liu, B., and Esogbue, A.O., Decision Criteria and Optimal Inventory Pro-cesses, Kluwer, Boston, 1999.

[69] Liu, B., Uncertain Programming, Wiley, New York, 1999.

[70] Liu, B., Dependent-chance programming in fuzzy environments, Fuzzy Setsand Systems, Vol. 109, No. 1, 97-106, 2000.

[71] Liu, B., Uncertain programming: A unifying optimization theory in variousuncertain environments, Applied Mathematics and Computation, Vol. 120,Nos. 1-3, 227-234, 2001.

[72] Liu, B., and Iwamura, K., Fuzzy programming with fuzzy decisions and fuzzysimulation-based genetic algorithm, Fuzzy Sets and Systems, Vol. 122, No. 2,253-262, 2001.

Page 410: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Bibliography 403

[73] Liu, B., Fuzzy random chance-constrained programming, IEEE Transactionson Fuzzy Systems, Vol. 9, No. 5, 713-720, 2001.

[74] Liu, B., Fuzzy random dependent-chance programming, IEEE Transactionson Fuzzy Systems, Vol. 9, No. 5, 721-726, 2001.

[75] Liu, B., Theory and Practice of Uncertain Programming, Physica-Verlag, Hei-delberg, 2002.

[76] Liu, B., Toward fuzzy optimization without mathematical ambiguity, FuzzyOptimization and Decision Making, Vol. 1, No. 1, 43-63, 2002.

[77] Liu, B., and Liu, Y.-K., Expected value of fuzzy variable and fuzzy expectedvalue models, IEEE Transactions on Fuzzy Systems, Vol. 10, No. 4, 445-450,2002.

[78] Liu, B., Random fuzzy dependent-chance programming and its hybrid intel-ligent algorithm, Information Sciences, Vol. 141, Nos. 3-4, 259-271, 2002.

[79] Liu, B., Inequalities and convergence concepts of fuzzy and rough variables,Fuzzy Optimization and Decision Making, Vol.2, No.2, 87-100, 2003.

[80] Liu, Y.-K., and Liu, B., Random fuzzy programming with chance measuresdefined by fuzzy integrals, Mathematical and Computer Modelling, Vol. 36,No. 4-5, 509-524, 2002.

[81] Liu, Y.-K., and Liu, B., Fuzzy random programming problems with multiplecriteria, Asian Information-Science-Life, Vol.1, No.3, 2002.

[82] Liu, Y.-K., and Liu, B., Fuzzy random variables: A scalar expected valueoperator, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160,2003.

[83] Liu, Y.-K., and Liu, B., Expected value operator of random fuzzy variable andrandom fuzzy expected value models, International Journal of Uncertainty,Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.

[84] Liu, Y.-K., and Liu, B., A class of fuzzy random optimization: Expectedvalue models, Information Sciences, Vol.155, No.1-2, 89-102, 2003.

[85] Liu, Y.-K., and Liu, B., On minimum-risk problems in fuzzy random decisionsystems, Computers & Operations Research, to be published.

[86] Liu, Y.-K., and Liu, B., Fuzzy random programming with equilibrium chanceconstraints, Technical Report, 2003.

[87] Lu, M., On crisp equivalents and solutions of fuzzy programming with dif-ferent chance measures, Information: An International Journal, Vol.6, No.2,125-133, 2003.

[88] Lu, M., Some mathematical properties of fuzzy random programming, Pro-ceedings of the First Annual Conference on Uncertainty, Daqing, China, Au-gust 16-19, 2003, pp.83-96.

[89] Lucas, C., and Araabi, B.N., Generalization of the Dempster-Shafer Theory:A fuzzy-valued measure, IEEE Transactions on Fuzzy Systems, Vol. 7, No. 3,255-270, 1999.

[90] Luhandjula, M.K., Fuzziness and randomness in an optimization framework,Fuzzy Sets and Systems, Vol. 77, 291-297, 1996.

Page 411: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

404 Bibliography

[91] Luhandjula M.K. and M.M. Gupta, On fuzzy stochastic optimization, FuzzySets and Systems, Vol. 81, 47-55, 1996.

[92] Maleki, H.R., Tata, M., and Mashinchi, M., Linear programming with fuzzyvariables, Fuzzy Sets and Systems, Vol. 109, No. 1, 21-33, 2000.

[93] Matheron, G., Random Sets and Integral Geometry, Wiley, New York, 1975.

[94] Mizumoto, M., and Tanaka, K., Some properties of fuzzy sets of type 2,Information and Control, Vol. 31, 312-340, 1976.

[95] Mohammed, W., Chance constrained fuzzy goal programming with right-hand side uniform random variable coefficients, Fuzzy Sets and Systems, Vol.109, No. 1, 107-110, 2000.

[96] Molchanov, I.S., Limit Theorems for Unions of Random Closed Sets,Springer-Verlag, Berlin, 1993.

[97] Morgan, B., Elements of Simulation, Chapamn & Hall, London, 1984.

[98] Nahmias, S., Fuzzy variables, Fuzzy Sets and Systems, Vol. 1, 97-110, 1978.

[99] Negoita, C.V. and Ralescu, D., On fuzzy optimization, Kybernetes, Vol. 6,193-195, 1977.

[100] Negoita, C.V., and Ralescu D., Simulation, Knowledge-based Computing, andFuzzy Statistics, Van Nostrand Reinhold, New York, 1987.

[101] Neumaier, A., Interval Methods for Systems of Equations, Cambridge Uni-versity Press, New York, 1990.

[102] Nguyen, H.T., Fuzzy sets and probability, Fuzzy sets and Systems, Vol.90,129-132, 1997.

[103] Nguyen, H.T., Kreinovich, V., Zuo, Q., Interval-valued degrees of belief: Ap-plications of interval computations to expert systems and intelligent control,International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,Vol.5, 317-358, 1997.

[104] Nguyen, H.T., Nguyen, N.T., Wang, T.H., On capacity functionals in intervalprobabilities, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.5, 359-377, 1997.

[105] Nguyen, H.T., Kreinovich, V., Shekhter, V., On the possibility of using com-plex values in fuzzy logic for representing inconsistencies, International Jour-nal of Intelligent Systems, Vol.13, 683-714, 1998.

[106] Nguyen, H.T., Kreinovich, V., Wu, B.L., Fuzzy/probability similar to frac-tal/smooth, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol. 7, 363-370, 1999.

[107] Nguyen, H.T., Nguyen, N.T., On Chu spaces in uncertainty analysis, Inter-national Journal of Intelligent Systems, Vol. 15, 425-440, 2000.

[108] Nguyen, H.T., Some mathematical structures for computational information,Information Sciences, Vol. 128, 67-89, 2000.

[109] Pawlak, Z., Rough sets, International Journal of Information and ComputerSciences, Vol. 11, No. 5, 341-356, 1982.

[110] Pawlak, Z., Rough sets and fuzzy sets, Fuzzy sets and Systems, Vol. 17, 99-102, 1985.

Page 412: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Bibliography 405

[111] Pawlak, Z., Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer,Dordrecht, 1991.

[112] Pawlak, Z., and Slowinski, R., Rough set approach to multi-attribute decisionanalysis, European Journal of Operational Research, Vol. 72, 443-459, 1994.

[113] Pawlak, Z., Rough set approach to knowledge-based decision support, Euro-pean Journal of Operational Research, Vol. 99, 48-57, 1997.

[114] Pedrycz, W., Optimization schemes for decomposition of fuzzy relations,Fuzzy Sets and Systems, Vol. 100, 301-325, 1998.

[115] Peng, J., and Liu, B., Stochastic goal programming models for parallelmachine scheduling problems, Asian Information-Science-Life, Vol.1, No.3,2002.

[116] Peng, J., and Liu, B., Birandom variables and birandom programming, Com-puters & Industrial Engineering, to be published.

[117] Peng, J., and Liu, B., Parallel machine scheduling models with fuzzy process-ing times, Information Sciences, to be published.

[118] Puri, M.L. and Ralescu, D., Fuzzy random variables, Journal of MathematicalAnalysis and Applications, Vol. 114, 409-422, 1986.

[119] Raj, P.A., and Kumer, D.N., Ranking alternatives with fuzzy weights usingmaximizing set and minimizing set, Fuzzy Sets and Systems, Vol. 105, 365-375, 1999.

[120] Ramer, A., Conditional possibility measures, International Journal of Cyber-netics and Systems, Vol. 20, 233-247, 1989.

[121] Ramık, J., Extension principle in fuzzy optimization, Fuzzy Sets and Systems,Vol. 19, 29-35, 1986.

[122] Ramık, J., and Rommelfanger H., Fuzzy mathematical programming basedon some inequality relations, Fuzzy Sets and Systems, Vol. 81, 77-88, 1996.

[123] Robbins, H.E., On the measure of a random set, Annals of MathematicalStatistics, Vol. 15, 70-74, 1944.

[124] Rubinstein, R.Y., Simulation and the Monte Carlo Method, Wiley, New York,1981.

[125] Saade, J.J., Maximization of a function over a fuzzy domain, Fuzzy Sets andSystems, Vol. 62, 55-70, 1994.

[126] Sakawa, M., Nishizaki, I., and Uemura Y., Interactive fuzzy programming formulti-level linear programming problems with fuzzy parameters, Fuzzy Setsand Systems, Vol. 109, No. 1, 3-19, 2000.

[127] Sakawa, M., Nishizaki, I., Uemura, Y., Interactive fuzzy programming fortwo-level linear fractional programming problems with fuzzy parameters,Fuzzy Sets and Systems, Vol. 115, 93-103, 2000.

[128] Shafer, G., A Mathematical Theory of Evidence, Princeton University Press,Princeton, NJ, 1976.

[129] Shih, H.S., Lai, Y.J., Lee, E.S., Fuzzy approach for multilevel programmingproblems, Computers and Operations Research, Vol. 23, 73-91, 1996.

Page 413: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

406 Bibliography

[130] Slowinski, R. and Teghem, Jr. J., Fuzzy versus stochastic approaches to mul-ticriteria linear programming under uncertainty, Naval Research Logistics,Vol. 35, 673-695, 1988.

[131] Slowinski, R., and Stefanowski, J., Rough classification in incomplete infor-mation systems, Mathematical and Computer Modelling, Vol. 12, 1347-1357,1989.

[132] Slowinski, R., and Vanderpooten, D., A generalized definition of rough ap-proximations based on similarity, IEEE Transactions on Knowledge and DataEngineering, Vol. 12, No. 2, 331-336, 2000.

[133] Steuer, R.E., Algorithm for linear programming problems with interval ob-jective function coefficients, Mathematics of Operational Research, Vol. 6,333-348, 1981.

[134] Szmidt, E., Kacprzyk, J., Distances between intuitionistic fuzzy sets, FuzzySets and Systems, Vol. 114, 505-518, 2000.

[135] Szmidt, E., Kacprzyk, J., Entropy for intuitionistic fuzzy sets, Fuzzy Sets andSystems, Vol. 118, 467-477, 2001.

[136] Tanaka, H. and Asai, K., Fuzzy linear programming problems with fuzzynumbers, Fuzzy Sets and Systems, Vol. 13, 1-10, 1984.

[137] Tanaka, H. and Asai, K., Fuzzy solutions in fuzzy linear programming prob-lems, IEEE Transactions on Systems, Man and Cybernetics, Vol. 14, 325-328,1984.

[138] Tanaka, H., Guo, P., Possibilistic Data Analysis for Operations Research,Physica-Verlag, Heidelberg, 1999.

[139] Tanaka, H., Guo, P., and Zimmermann, H.-J., Possibility distribution of fuzzydecision variables obtained from possibilistic linear programming problems,Fuzzy Sets and Systems, Vol. 113, 323-332, 2000.

[140] Wang, G., and Qiao Z., Linear programming with fuzzy random variablecoefficients, Fuzzy Sets and Systems, Vol. 57, 295-311, 1993.

[141] Wang, G., and Liu, B., New theorems for fuzzy sequence convergence, Tech-nical Report, 2003.

[142] Yager, R.R., A procedure for ordering fuzzy subsets of the unit interval,Information Sciences, Vol. 24, 143-161, 1981.

[143] Yager, R.R., Generalized probabilities of fuzzy events from fuzzy belief struc-tures, Information Sciences, Vol. 28, 45-62, 1982.

[144] Yager, R.R., On ordered weighted averaging aggregation operators in multi-criteria decision making, IEEE Transactions on Systems, Man and Cybernet-ics, Vol. 18, 183-190, 1988.

[145] Yager, R.R., Decision making under Dempster-Shafer uncertainties, Interna-tional Journal of General Systems, Vol. 20, 233-245, 1992.

[146] Yager, R.R., On the specificity of a possibility distribution, Fuzzy Sets andSystems, Vol. 50, 279-292, 1992.

[147] Yager, R.R., Modeling uncertainty using partial information, Informationsciences, Vol. 121, 271-294, 1999.

Page 414: [Studies in Fuzziness and Soft Computing] Uncertainty Theory Volume 154 ||

Bibliography 407

[148] Yager, R.R., Decision making with fuzzy probability assessments, IEEETransactions on Fuzzy Systems, Vol. 7, 462-466, 1999.

[149] Yager, R.R., On the evaluation of uncertain courses of action, Fuzzy Opti-mization and Decision Making, Vol. 1, 13-41, 2002.

[150] Yang, L., and Liu, B., Chance distribution of fuzzy random variable and lawsof large numbers, Technical Report, 2003.

[151] Yao, Y.Y., Two views of the theory of rough sets in finite universes, Interna-tional Journal of Approximate Reasoning, Vol. 15, 291-317, 1996.

[152] Yazenin, A.V., On the problem of possibilistic optimization, Fuzzy Sets andSystems, Vol. 81, 133-140, 1996.

[153] Zadeh, L.A., Fuzzy sets, Information and Control, Vol. 8, 338-353, 1965.

[154] Zadeh, L.A., Outline of a new approach to the analysis of complex systemsand decision processes, IEEE Transactions on Systems, Man and Cybernetics,Vol. 3, 28-44, 1973.

[155] Zadeh, L.A., The concept of a linguistic variable and its application to ap-proximate reasoning, Information Sciences, Vol. 8, 199-251, 1975.

[156] Zadeh, L.A., Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets andSystems, Vol. 1, 3-28, 1978.

[157] Zhao, R. and Liu, B., Stochastic programming models for general redundancyoptimization problems, IEEE Transactions on Reliability, Vol.52, No.2, 181-191, 2003.

[158] Zhao, R., and Liu, B., Renewal process with fuzzy interarrival times andrewards, International Journal of Uncertainty, Fuzziness & Knowledge-BasedSystems, Vol.11, No.5, 573-586, 2003.

[159] Zhao, R., and Liu, B., Redundancy optimization problems with uncertaintyof combining randomness and fuzziness, European Journal of OperationalResearch, to be published.

[160] Zhou, J., and Liu, B., New stochastic models for capacitated location-allocation problem, Computers & Industrial Engineering, Vol.45, No.1, 111-125, 2003.

[161] Zhou, J., and Liu, B., Analysis and algorithms of bifuzzy systems, Interna-tional Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.12,No.3, 2004.

[162] Zhou, J., and Liu, B., Convergence concepts of bifuzzy sequence, AsianInformation-Science-Life, to be published.

[163] Zhu, Y., and Liu, B., Continuity theorems and chance distribution of randomfuzzy variable, Proceedings of the Royal Society: Mathematical, Physical andEngineering Sciences, to be published.

[164] Zhu, Y., and Liu, B., Characteristic functions for fuzzy variables, TechnicalReport, 2003.

[165] Zhu, Y., and Liu, B., Convergence concepts of random fuzzy sequence, Tech-nical Report, 2003.

[166] Zimmermann, H.-J., Fuzzy Set Theory and its Applications, Kluwer AcademicPublishers, Boston, 1985.


List of Frequently Used Symbols

ξ, η, τ             random, fuzzy, or rough variables
ξ, η, τ (boldface)  random, fuzzy, or rough vectors
μ, ν                membership functions
φ, ψ                probability, credibility, trust, or chance density functions
Φ, Ψ                probability, credibility, trust, or chance distributions
Pr                  probability measure
Pos                 possibility measure
Nec                 necessity measure
Cr                  credibility measure
Tr                  trust measure
Ch                  chance measure
E                   expected value operator
V                   variance operator
Cov                 covariance operator
(Ω, A, Pr)          probability space
(Θ, P(Θ), Pos)      possibility space
(Λ, Δ, A, π)        rough space
∅                   empty set
ℜ                   set of real numbers
ℜ^n                 set of n-dimensional real vectors
∨                   maximum operator
∧                   minimum operator
a_i ↑ a             a_1 ≤ a_2 ≤ ··· and a_i → a
a_i ↓ a             a_1 ≥ a_2 ≥ ··· and a_i → a
A_i ↑ A             A_1 ⊂ A_2 ⊂ ··· and A = A_1 ∪ A_2 ∪ ···
A_i ↓ A             A_1 ⊃ A_2 ⊃ ··· and A = A_1 ∩ A_2 ∩ ···


Index

algebra, 1
α-chance measure, 197, 222
α-level set, 87
approximation theorem, 3
average chance measure, 197, 223
Bayes' rule, 71
bifuzzy variable, 245
birandom variable, 273
birough variable, 369
Borel algebra, 7
Borel measurable space, 7
Borel set, 7
Cantor function, 12
Cantor set, 8
Caratheodory extension theorem, 3
Cauchy-Schwartz inequality, 57
chance distribution of
    bifuzzy variable, 252
    birandom variable, 279
    birough variable, 374
    fuzzy random variable, 198
    fuzzy rough variable, 354
    random fuzzy variable, 223
    random rough variable, 336
    rough fuzzy variable, 316
    rough random variable, 298
chance measure of
    bifuzzy event, 247
    birandom event, 276
    birough event, 371
    fuzzy random event, 194
    fuzzy rough event, 351
    random fuzzy event, 218
    random rough event, 333
    rough fuzzy event, 312
    rough random event, 296
characteristic function of
    fuzzy variable, 127
    random variable, 59
    rough variable, 175
Chebyshev inequality, 57
convergence almost surely, 61, 129
convergence in chance, 210, 233
convergence in credibility, 129
convergence in distribution, 62, 130
convergence in mean, 62, 130, 177
convergence in probability, 62
convergence in trust, 177
covariance of
    bifuzzy variable, 258
    birandom variable, 286
    birough variable, 380
    fuzzy random variable, 207
    fuzzy rough variable, 361
    fuzzy variable, 125
    random fuzzy variable, 229
    random rough variable, 342
    random variable, 54
    rough fuzzy variable, 321
    rough random variable, 304
    rough variable, 170
credibility measure, 83
credibility density function, 100
credibility distribution, 95
credibility semicontinuity law, 85
critical value of
    bifuzzy variable, 258
    birandom variable, 286
    birough variable, 380
    fuzzy random variable, 207
    fuzzy rough variable, 361
    fuzzy variable, 107
    random fuzzy variable, 229
    random rough variable, 342
    random variable, 54
    rough fuzzy variable, 322
    rough random variable, 305
    rough variable, 171
Dirichlet function, 9
distance of
    fuzzy variables, 121
    random variables, 53
    rough variables, 169
equilibrium chance measure, 197, 223
expected value of
    bifuzzy variable, 256
    birandom variable, 284
    birough variable, 379
    fuzzy random variable, 204
    fuzzy rough variable, 359
    fuzzy variable, 109
    random fuzzy variable, 227
    random rough variable, 341
    random variable, 40
    rough fuzzy variable, 320
    rough random variable, 303
    rough variable, 157
exponential distribution, 74
extension principle of Zadeh, 104
Fatou's Lemma, 15
Fσ set, 6
Fubini theorem, 16
fuzzy arithmetic, 90
fuzzy random variable, 191
fuzzy rough variable, 349
fuzzy set, 79
fuzzy variable, 87
    absolutely continuous, 98
    continuous, 87
    discrete, 87
    simple, 87
    singular, 98
Gδ set, 6
Hölder's inequality, 57
identically distributed
    bifuzzy variables, 255
    birandom variables, 282
    birough variables, 377
    fuzzy random variables, 202
    fuzzy rough variables, 357
    fuzzy variables, 105
    random fuzzy variables, 226
    random rough variables, 339
    random variables, 39
    rough fuzzy variable, 319
    rough random variable, 301
    rough variables, 156
independence of
    bifuzzy variables, 255
    birandom variables, 282
    birough variables, 377
    fuzzy random variables, 202
    fuzzy rough variables, 357
    fuzzy variables, 103
    random fuzzy variables, 226
    random rough variables, 339
    random variables, 36
    rough fuzzy variable, 319
    rough random variable, 301
    rough variables, 154
interval number, 145
intuitionistic fuzzy set, 245
inverse transform method, 73
Jensen's inequality, 58
kernel, 81
Kolmogorov inequality, 68
Kronecker Lemma, 67
Laplace criterion, 142
law of large numbers, 70
Lebesgue dominated convergence theorem, 16
Lebesgue integral, 13
Lebesgue measure, 7
Lebesgue-Stieltjes integral, 19
Lebesgue-Stieltjes measure, 17
lower approximation, 138
Markov inequality, 57
measurable function, 8
measurable space, 2
measure, 2
measure continuity theorem, 3
measure space, 2
membership function, 79
Minkowski inequality, 58
moment of
    bifuzzy variable, 258
    birandom variable, 286
    birough variable, 380
    fuzzy random variable, 207
    fuzzy rough variable, 361
    fuzzy variable, 125
    random fuzzy variable, 229
    random rough variable, 342
    random variable, 54
    rough fuzzy variable, 321
    rough random variable, 304
    rough variable, 170
Monte Carlo simulation, 73
monotone convergence theorem, 14
monotone class theorem, 3
necessity measure, 82
necessity upper semicontinuity theorem, 83
normal distribution, 74
optimistic value, see critical value
pessimistic value, see critical value
possibility lower semicontinuity theorem, 81
possibility measure, 80, 105
possibility space, 80
power set, 1
probability continuity theorem, 24
probability density function, 35
probability distribution, 31
probability measure, 21
probability space, 21
product possibility space, 82
product probability space, 24
product rough space, 141
random fuzzy arithmetic, 217
random fuzzy variable, 215
random rough variable, 331
random set, 295
random variable, 25
    absolutely continuous, 33
    continuous, 26
    discrete, 26
    simple, 26
    singular, 33
ranking method, 388
Riemann function, 9
rough arithmetic, 145
rough fuzzy variable, 311
rough random variable, 293
rough set, 138
rough space, 139
rough variable, 142
    absolutely continuous, 150
    continuous, 143
    discrete, 143
    simple, 143
    singular, 150
σ-algebra, 1
similarity relation, 137
simple function, 9
simulation
    bifuzzy, 270
    birandom, 290
    birough, 369
    fuzzy, 133
    fuzzy random, 212
    fuzzy rough, 365
    random fuzzy, 241
    random rough, 346
    rough, 188
    rough fuzzy, 327
    rough random, 293
    stochastic, 73
singular function, 11
step function, 9
Toeplitz Lemma, 67
trapezoidal fuzzy variable, 93
triangular distribution, 75
triangular fuzzy variable, 94
trifuzzy variable, 388
trirandom variable, 388
trirough variable, 388
trust continuity theorem, 140
trust density function, 152
trust distribution, 148
trust measure, 139
twofold fuzzy set, 245
type 2 fuzzy set, 245
uncertain variable, 388
uniform distribution, 74
upper approximation, 138
variance of
    bifuzzy variable, 257
    birandom variable, 285
    birough variable, 379
    fuzzy random variable, 206
    fuzzy rough variable, 360
    fuzzy variable, 124
    random fuzzy variable, 228
    random rough variable, 341
    random variable, 53
    rough fuzzy variable, 320
    rough random variable, 303
    rough variable, 169