Hierarchical DSmP Transformation for Decision-Making under Uncertainty
Jean Dezert, Deqiang Han, Zhun-ga Liu, Jean-Marc Tacnet
Abstract—Dempster-Shafer evidence theory is widely used for approximate reasoning under uncertainty; however, decision-making is more intuitive and easier to justify when made in a probabilistic context. Thus the transformation approximating a belief function into a probability measure is crucial for decision-making within the evidence theory framework. In this paper we present a new transformation of any general basic belief assignment (bba) into a Bayesian belief assignment (or subjective probability measure) based on a new proportional and hierarchical principle of uncertainty reduction. Some examples are provided to show the rationality and efficiency of the proposed probability transformation approach.

Keywords: belief functions, probabilistic transformation, DSmP, uncertainty, decision-making.
I. INTRODUCTION
Dempster-Shafer evidence theory (DST) [1] proposes a mathematical framework for approximate reasoning under uncertainty based on belief functions, and it is widely used in many fields of information fusion. Like any theory, DST is not exempt from drawbacks and limitations, such as its inconsistency with probability calculus, its complexity, and the lack of a clear decision-making process. Despite these weaknesses, the use of belief functions remains flexible and appealing for modeling and dealing with uncertain and imprecise information. That is why several modified models and rules of combination of belief functions have been proposed to resolve some of the drawbacks of the original DST. Among the advances in belief function theories, one can underline the transferable belief model (TBM) [2] proposed by Smets and, more recently, the DSmT [3] proposed by Dezert and Smarandache.
The ultimate goal of approximate reasoning under uncertainty is usually decision-making. Although decisions can be made directly from evidence expressed by a belief function [4], decision-making is better established in a probabilistic context: decisions can be evaluated by assessing their ability to provide a winning strategy in the long run in a game-theoretic context, or by maximizing return in a utility-theory framework. Thus, to make a decision, it is usually preferred to transform (approximate) a belief function into a probability measure, and the quality of such a probability transformation is crucial for decision-making in evidence theory. Research on probability transformations has attracted increasing attention in recent years.
The classical probability transformation in evidence theory is the pignistic probability transformation (PPT) [2] used in the TBM. The TBM has two levels: the credal level and the pignistic level. Beliefs are entertained, combined and updated at the credal level, while decision-making is done at the pignistic level. PPT maps the beliefs defined on subsets to a probability defined on singletons: the belief assignment of a compound focal element is equally divided among the singletons it includes. In fact, PPT is designed according to the principle of minimal commitment, which is somehow related to uncertainty maximization.
Other researchers have also proposed modified probability transformation approaches [5]–[13], which assign the belief masses of compound focal elements to the singletons according to some ratio constructed from the available information. Representative transformations include Sudano's probability transformations [8] and Cuzzolin's intersection probability transformation [13]. In the framework of DSmT, another probability transformation approach was proposed, called DSmP [9]. DSmP takes into account both the values of the masses and the cardinality of focal elements in the proportional redistribution process, and it can be used in both DSmT and DST. A probability transformation is usually evaluated by its probabilistic information content (PIC) [5] (PIC being the dual form of Shannon entropy), although this criterion alone is not comprehensive enough [14]. A probability transformation providing a high PIC is in fact preferred for decision-making, since it is naturally easier to make a decision when the uncertainty is reduced.
In this paper we propose a new probability transformation, which outputs a probability with a high but not exaggerated PIC. The new approach, called HDSmP (standing for Hierarchical DSmP), is implemented hierarchically and fully utilizes the information provided by a given belief function. Succinctly, for a frame of discernment (FOD) of size n, for k = n down to k = 2, the following step is repeated: the belief assignment of each focal element of size k is proportionally redistributed to the focal elements of size k − 1, with proportions defined by the ratios among the mass assignments of the focal elements of size k − 1. A parameter ε is introduced in the formulas to avoid division by zero and to warrant the numerical robustness of the result. HDSmP corresponds to the last step of the hierarchical proportional redistribution method for basic belief assignment (bba) approximation presented briefly in [16] and in more detail in [17]. Some examples are given at the end of this paper to illustrate the proposed probability transformation approach, together with comparisons of HDSmP with other well-known approaches and related analyses.

Originally published as Dezert J., Han D., Liu Z., Tacnet J.-M., Hierarchical DSmP transformation for decision-making under uncertainty, in Proc. of Fusion 2012, Singapore, July 2012, and reprinted with permission.

Advances and Applications of DSmT for Information Fusion. Collected Works. Volume 4
II. EVIDENCE THEORY AND PROBABILITY TRANSFORMATIONS
A. Brief introduction of evidence theory
In Dempster-Shafer theory [1], the elements of the frame of discernment (FOD) Θ are mutually exclusive. Let 2^Θ denote the power set of the FOD. A function m : 2^Θ → [0, 1] is called a basic belief assignment (bba), or mass function, if it satisfies

  Σ_{A ⊆ Θ} m(A) = 1,  m(∅) = 0    (1)
The belief function (Bel) and plausibility function (Pl) are respectively defined by

  Bel(A) = Σ_{B ⊆ A} m(B)    (2)

  Pl(A) = Σ_{A ∩ B ≠ ∅} m(B)    (3)
Suppose that m_1, m_2, ..., m_n are n mass functions; Dempster's rule of combination is defined in (4):

  m(A) = 0, if A = ∅
  m(A) = [ Σ_{∩A_i = A} Π_{1≤i≤n} m_i(A_i) ] / [ Σ_{∩A_i ≠ ∅} Π_{1≤i≤n} m_i(A_i) ], if A ≠ ∅    (4)
Dempster's rule of combination is used in DST to accomplish the fusion of bodies of evidence (BOEs). However, the final goal of decision-level information fusion is decision-making, so beliefs should be transformed into probabilities before probability-based decision-making. Although there is some research on making decisions directly based on a belief function or bba [4], probability-based decision methods are more intuitive and have become the current trend for deciding under uncertainty with approximate reasoning theories [15]. Some existing and well-known probability transformation approaches are briefly reviewed in the next section.
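For concreteness, Eq. (4) for two sources can be sketched in Python as follows; the dict-of-frozensets bba representation and the function name are our own illustrative choices, not part of the original formulation:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination, Eq. (4), for two bbas.

    Each bba maps frozenset focal elements to masses summing to 1
    (a representation chosen here for illustration).
    """
    raw, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:  # non-empty intersection contributes to m(inter)
                raw[inter] = raw.get(inter, 0.0) + va * vb
            else:      # empty intersection is conflicting mass
                conflict += va * vb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # normalize by 1 - K, where K is the total conflicting mass
    return {a: v / (1.0 - conflict) for a, v in raw.items()}
```

Combining more than two sources follows by associativity, e.g. `dempster_combine(m1, dempster_combine(m2, m3))`.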
B. Probability transformations used in DST framework
A probability transformation (or briefly a "probabilization") is a mapping Pf : Bel_Θ → P_Θ, where Bel_Θ denotes the belief functions defined on Θ and P_Θ represents a probability measure (in fact a probability mass function, pmf) defined on Θ. A probability transformation thus assigns a Bayesian belief function (i.e. a probability measure) to any general (i.e. non-Bayesian) belief function. This is why the transformations from belief functions to probability distributions are sometimes also called Bayesian transformations.
The major probability transformation approaches used so far are:
a) Pignistic transformation

The classical pignistic probability was proposed by Smets [2] in his TBM framework, which is a subjective and non-probabilistic interpretation of DST. It extends evidence theory to open-world propositions and provides a range of tools, including discounting and conditioning, to handle belief functions. At the credal level of the TBM, beliefs are entertained, combined and updated, while at the pignistic level, beliefs are used to make decisions by resorting to the pignistic probability transformation (PPT). The pignistic probability obtained is often called the betting commitment probability (in short, BetP). The basic idea of the pignistic transformation is to transfer the positive belief of each compound (or nonspecific) element onto the singletons involved in that compound element, split by the cardinality of the proposition, when working with normalized bba's.
Suppose that Θ = {θ_1, θ_2, ..., θ_n} is the FOD. The PPT for the singletons is defined as [2]:

  BetP(θ_i) = Σ_{θ_i ∈ B, B ∈ 2^Θ} m(B) / |B|    (5)
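The pignistic transformation of Eq. (5) reduces to a few lines of code; the following sketch assumes a normalized bba stored as a dict mapping frozenset focal elements to masses (an illustrative representation, not prescribed by the paper):

```python
def betp(m, theta):
    """Pignistic transformation BetP, Eq. (5): the mass of each focal
    element is split equally among the singletons it contains
    (a normalized bba, i.e. m(emptyset) = 0, is assumed)."""
    p = {t: 0.0 for t in theta}
    for focal, mass in m.items():
        for t in focal:
            p[t] += mass / len(focal)
    return p
```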
PPT is designed according to an idea similar to uncertainty maximization: masses are assigned indiscriminately to the singletons involved. For information fusion, however, the aim is to reduce the degree of uncertainty and to reach a more consolidated and reliable decision, and the high uncertainty of PPT might not be helpful for the decision. To overcome this, several modified probability transformation approaches were proposed, which are summarized below.
b) Sudano's probabilities

Sudano [8] proposed the probability transformation proportional to plausibilities (PrPl), proportional to beliefs (PrBel), proportional to the normalized plausibilities (PrNPl), proportional to all plausibilities (PraPl), and the hybrid probability transformation (PrHyb), respectively. As suggested by their names, different kinds of mappings are used. For a belief function defined on the FOD Θ = {θ_1, ..., θ_n}, they are respectively defined by

  PrPl(θ_i) = Pl({θ_i}) · Σ_{Y ∈ 2^Θ, θ_i ∈ Y} m(Y) / Σ_{∪θ_j = Y} Pl({θ_j})    (6)

  PrBel(θ_i) = Bel({θ_i}) · Σ_{Y ∈ 2^Θ, θ_i ∈ Y} m(Y) / Σ_{∪θ_j = Y} Bel({θ_j})    (7)

  PrNPl(θ_i) = Pl({θ_i}) / Σ_j Pl({θ_j})    (8)

  PraPl(θ_i) = Bel({θ_i}) + [1 − Σ_j Bel({θ_j})] / [Σ_j Pl({θ_j})] · Pl({θ_i})    (9)
  PrHyb(θ_i) = PraPl(θ_i) · Σ_{Y ∈ 2^Θ, θ_i ∈ Y} m(Y) / Σ_{∪θ_j = Y} PraPl(θ_j)    (10)
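As an illustration of the simplest of these mappings, PrNPl (Eq. (8)) and PraPl (Eq. (9)) can be sketched as follows, under the same illustrative frozenset-keyed bba representation (the function names are ours):

```python
def pl(m, t):
    """Plausibility of the singleton {t}: sum of the masses of all
    focal elements containing t."""
    return sum(v for focal, v in m.items() if t in focal)

def bel(m, t):
    """Belief of the singleton {t}: its own mass."""
    return m.get(frozenset([t]), 0.0)

def prnpl(m, theta):
    """Sudano's PrNPl, Eq. (8): normalized singleton plausibilities."""
    total = sum(pl(m, t) for t in theta)
    return {t: pl(m, t) / total for t in theta}

def prapl(m, theta):
    """Sudano's PraPl, Eq. (9): the mass not committed to singletons is
    shared proportionally to the singleton plausibilities."""
    rem = 1.0 - sum(bel(m, t) for t in theta)  # mass not on singletons
    s = sum(pl(m, t) for t in theta)
    return {t: bel(m, t) + rem / s * pl(m, t) for t in theta}
```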
c) Cuzzolin's intersection probability

From a geometric interpretation of Dempster's rule of combination, an intersection probability measure was proposed by Cuzzolin [13], based on the proportional repartition of the total nonspecific mass (TNSM) according to each singleton's contribution to the nonspecific masses involved:

  CuzzP(θ_i) = m({θ_i}) + [Pl({θ_i}) − m({θ_i})] / Σ_j [Pl({θ_j}) − m({θ_j})] · TNSM    (11)

where

  TNSM = 1 − Σ_j m({θ_j}) = Σ_{A ∈ 2^Θ, |A| > 1} m(A)    (12)
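Eqs. (11)-(12) can be sketched as follows (same illustrative bba representation; the guard for the Bayesian case, where TNSM = 0 and the ratio is undefined, is our own addition):

```python
def cuzzp(m, theta):
    """Cuzzolin's intersection probability, Eqs. (11)-(12): the total
    nonspecific mass (TNSM) is shared proportionally to Pl({t}) - m({t})."""
    sing = {t: m.get(frozenset([t]), 0.0) for t in theta}
    pl = {t: sum(v for focal, v in m.items() if t in focal) for t in theta}
    tnsm = 1.0 - sum(sing.values())              # Eq. (12)
    delta = sum(pl[t] - sing[t] for t in theta)  # normalizer of the ratios
    if delta == 0.0:  # bba already Bayesian: nothing to redistribute
        return sing
    return {t: sing[t] + (pl[t] - sing[t]) / delta * tnsm for t in theta}
```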
d) DSmP transformation

DSmP, proposed recently by Dezert and Smarandache, is defined as follows:

  DSmP_ε(θ_i) = m({θ_i}) + (m({θ_i}) + ε) · Σ_{X ∈ 2^Θ, θ_i ∈ X, |X| ≥ 2} m(X) / [ Σ_{Y ∈ 2^Θ, Y ⊆ X, |Y| = 1} m(Y) + ε·|X| ]    (13)
In DSmP, both the mass assignments and the cardinality of focal elements are used in the proportional redistribution process. The parameter ε is used to adjust the effect of the focal element's cardinality in the proportional redistribution, and to keep DSmP defined and computable when zero masses are encountered. DSmP improves on Sudano's, Cuzzolin's and PPT formulas in that it makes a mathematically more judicious redistribution of the ignorance masses to the singletons involved, and thus increases the PIC level of the resulting approximation. Moreover, DSmP works in both DST and DSmT.
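A sketch of Eq. (13), under the same illustrative frozenset-keyed representation (the default value of `eps` is the ε = 0.001 used later in the numerical examples):

```python
def dsmp(m, theta, eps=0.001):
    """DSmP_eps, Eq. (13): the mass of each compound focal element X is
    redistributed to the singletons of X proportionally to m({t}) + eps."""
    p = {t: m.get(frozenset([t]), 0.0) for t in theta}
    for X, mass in m.items():
        if len(X) < 2:
            continue  # singleton masses are kept as they are
        denom = sum(m.get(frozenset([t]), 0.0) for t in X) + eps * len(X)
        for t in X:
            p[t] += (m.get(frozenset([t]), 0.0) + eps) * mass / denom
    return p
```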
There are also other modified PPT definitions, such as the iterative and self-consistent approach PrScP proposed by Sudano in [5], and a modified PrScP in [11]. Although the aforementioned probability transformation approaches differ, they are all evaluated according to the degree of uncertainty. The classical evaluation criteria for a probability transformation are the following:
1) Normalized Shannon entropy

Suppose that P(θ) is a probability mass function (pmf), where θ ∈ Θ and |Θ| = N, |Θ| representing the cardinality of the FOD Θ. An evaluation criterion for the pmf obtained from a probability transformation is [12]:

  E_H = [ −Σ_{θ ∈ Θ} P(θ) log_2 P(θ) ] / log_2 N    (14)
i.e., the ratio of the Shannon entropy to the maximum Shannon entropy for {P(θ) | θ ∈ Θ}, |Θ| = N. Clearly E_H is normalized: the larger E_H, the larger the degree of uncertainty, and the smaller E_H, the smaller the degree of uncertainty. When E_H = 0, one hypothesis has probability 1 and the others have zero probability, so the agent or system can make a decision without error. When E_H = 1, all P(θ), θ ∈ Θ, are equal and it is impossible to make a correct decision.
2) Probabilistic Information Content

The Probabilistic Information Content (PIC) criterion [5] is an essential measure in any threshold-driven automated decision system. The PIC value of a pmf obtained from a probability transformation indicates the level of total knowledge one has to draw a correct decision:

  PIC(P) = 1 + (1 / log_2 N) · Σ_{θ ∈ Θ} P(θ) log_2 P(θ)    (15)
Obviously, PIC = 1 β EH. The PIC is the dual of thenormalized Shannon entropy. A PIC value of zero indicatesthat the knowledge to take a correct decision does not exist (allhypotheses have equal probabilities, i.e., one has the maximalentropy).
Less uncertainty means that the corresponding probability transformation result gives better support for making a decision. According to this simple and basic idea, a probability transformation approach should attempt to enlarge the belief differences among the propositions and thus to reach a more reliable decision result.
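Both criteria of Eqs. (14)-(15) can be computed directly from a pmf stored as a dict (a minimal sketch; the convention 0·log₂0 = 0 is handled by skipping zero entries):

```python
import math

def normalized_entropy(p):
    """E_H, Eq. (14): Shannon entropy of the pmf divided by its
    maximum value log2(N)."""
    n = len(p)
    h = -sum(v * math.log2(v) for v in p.values() if v > 0.0)
    return h / math.log2(n)

def pic(p):
    """Probabilistic Information Content, Eq. (15): PIC = 1 - E_H."""
    return 1.0 - normalized_entropy(p)
```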
III. THE HIERARCHICAL DSMP TRANSFORMATION
In this paper, we propose a novel probability transformation approach called hierarchical DSmP (HDSmP), which provides a new way to reduce, step by step, the mass committed to uncertainties until an approximate measure of subjective probability is obtained, i.e. a so-called Bayesian bba in [1]. It must be noted that the procedure can be stopped at any step of the process; it thus also allows one to reduce the number of focal elements of a given bba in a consistent manner, diminishing the size of the core of the bba and hence the complexity (if needed) when also applying some complex rules of combination. We present here the general principle of hierarchical and proportional reduction of uncertainties used to finally obtain a Bayesian bba. The redistribution of uncertainty to the more specific elements of the core at any given step of the process follows the proportional redistribution already used in the (non-hierarchical) DSmP transformation proposed recently in [3].
Let's first introduce two new notations for convenience and concision:

1) Any element of cardinality 1 ≤ k ≤ n of the power set 2^Θ is denoted, by convention, by the generic notation X(k). For example, if Θ = {A, B, C}, then X(2) denotes any of the partial uncertainties A ∪ B, A ∪ C or B ∪ C, and X(3) denotes the total uncertainty A ∪ B ∪ C.

2) The proportional redistribution factor (ratio) of width s involving elements Y and X of the power set is defined as (for X ≠ ∅ and Y ≠ ∅)

  R_s(Y, X) ≜ [ m(Y) + ε·|X| ] / Σ_{Y ⊂ X, |X| − |Y| = s} [ m(Y) + ε·|X| ]    (16)

where ε is a small positive number introduced to deal with the particular cases where Σ_{Y ⊂ X, |X| − |Y| = s} m(Y) = 0. In HDSmP, only proportional redistribution factors of width s = 1 are needed, so we simply denote R(Y, X) ≜ R_1(Y, X) by convention.
The HDSmP transformation is obtained by a step-by-step (recursive) proportional redistribution of the mass m(X(k)) of a given uncertainty X(k) (partial or total) of cardinality 2 ≤ k ≤ n to all the least specific elements of cardinality k − 1, i.e. to all possible X(k − 1), until k = 2 is reached. The proportional redistribution is done from the masses of belief committed to the X(k − 1), as done classically in the DSmP transformation. Mathematically, HDSmP is defined for any X(1) ∈ Θ, i.e. any θ_i ∈ Θ, by

  HDSmP(X(1)) = m(X(1)) + Σ_{X(2) ⊃ X(1); X(1), X(2) ∈ 2^Θ} m^h(X(2)) · R(X(1), X(2))    (17)

where the "hierarchical" masses m^h(·) are recursively (backward) computed as follows:

  m^h(X(n−1)) = m(X(n−1)) + Σ_{X(n) ⊃ X(n−1); X(n), X(n−1) ∈ 2^Θ} m(X(n)) · R(X(n−1), X(n)),
  m^h(A) = m(A), ∀ |A| < n − 1    (18)

  m^h(X(n−2)) = m(X(n−2)) + Σ_{X(n−1) ⊃ X(n−2); X(n−2), X(n−1) ∈ 2^Θ} m^h(X(n−1)) · R(X(n−2), X(n−1)),
  m^h(A) = m(A), ∀ |A| < n − 2    (19)

  ...

  m^h(X(2)) = m(X(2)) + Σ_{X(3) ⊃ X(2); X(3), X(2) ∈ 2^Θ} m^h(X(3)) · R(X(2), X(3)),
  m^h(A) = m(A), ∀ |A| < 2    (20)

Actually, it is worth noting that X(n) is unique and corresponds only to the full ignorance θ_1 ∪ θ_2 ∪ ... ∪ θ_n. Therefore the expression of m^h(X(n−1)) in Eq. (18) simplifies to

  m^h(X(n−1)) = m(X(n−1)) + m(X(n)) · R(X(n−1), X(n))

Because the masses of the uncertainties are fully proportionally redistributed to the least specific elements involved in these uncertainties, no mass of belief is lost during the step-by-step hierarchical process, and one finally gets a Bayesian bba satisfying

  Σ_{X(1) ∈ 2^Θ} HDSmP(X(1)) = 1.
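The whole hierarchical procedure of Eqs. (16)-(20) can be sketched as follows; the frozenset-keyed bba representation, the in-place working copy, and the error raised when ε = 0 meets an all-zero lower level (cf. Example 3 below) are our own implementation choices:

```python
from itertools import combinations

def hdsmp(m, theta, eps=0.0):
    """HDSmP sketch: for k = n down to 2, the mass of each element X of
    size k is redistributed to its subsets Y of size k-1, proportionally
    to m(Y) + eps*|X| as in Eq. (16). The ratios are taken on the masses
    of level k-1 as they stand before step k runs, i.e. on the original
    bba values of that level."""
    n = len(theta)
    mh = dict(m)  # working copy of the bba (frozenset -> mass)
    for k in range(n, 1, -1):
        # masses of level k-1 before this step, used in the ratios R(Y, X)
        base = {frozenset(c): mh.get(frozenset(c), 0.0)
                for c in combinations(theta, k - 1)}
        for c in combinations(theta, k):
            X = frozenset(c)
            mX = mh.pop(X, 0.0)
            if mX == 0.0:
                continue  # nothing to redistribute from X
            subs = [X - {t} for t in X]  # all subsets Y of X, |Y| = k-1
            denom = sum(base[Y] + eps * len(X) for Y in subs)
            if denom == 0.0:
                raise ZeroDivisionError("all sub-masses are zero: use eps > 0")
            for Y in subs:
                mh[Y] = mh.get(Y, 0.0) + mX * (base[Y] + eps * len(X)) / denom
    return {t: mh.get(frozenset([t]), 0.0) for t in theta}
```

Run on the bba of Example 1 below with ε = 0, this sketch reproduces HDSmP(θ1) ≈ 0.4664, HDSmP(θ2) ≈ 0.4097 and HDSmP(θ3) ≈ 0.1239.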
IV. EXAMPLES
In this section we show in detail how HDSmP can be applied to a few simple examples, all based on a simple 3D frame of discernment Θ = {θ1, θ2, θ3} satisfying Shafer's model.
A. Example 1
Let's consider the following bba:

  m(θ1) = 0.10, m(θ2) = 0.17, m(θ3) = 0.03,
  m(θ1 ∪ θ2) = 0.15, m(θ1 ∪ θ3) = 0.20,
  m(θ2 ∪ θ3) = 0.05, m(θ1 ∪ θ2 ∪ θ3) = 0.30.
We apply HDSmP with ε = 0 in this example because no mass of belief equals zero. It can be verified that the result obtained with a small positive ε remains (as expected) numerically very close to the result obtained with ε = 0; this verification is left to the reader.

The first step of HDSmP consists in redistributing m(θ1 ∪ θ2 ∪ θ3) = 0.30, committed to the full ignorance, back to the elements θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3, because these are the only elements of cardinality 2 included in θ1 ∪ θ2 ∪ θ3. Applying Eq. (18) with k = 3, one gets for X(2) = θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3 the following masses:
  m^h(θ1 ∪ θ2) = m(θ1 ∪ θ2) + m(X(3)) · R(θ1 ∪ θ2, X(3)) = 0.15 + (0.3 × 0.375) = 0.2625

because R(θ1 ∪ θ2, X(3)) = 0.15 / (0.15 + 0.20 + 0.05) = 0.375. Similarly, one gets

  m^h(θ1 ∪ θ3) = m(θ1 ∪ θ3) + m(X(3)) · R(θ1 ∪ θ3, X(3)) = 0.20 + (0.3 × 0.5) = 0.35

because R(θ1 ∪ θ3, X(3)) = 0.20 / (0.15 + 0.20 + 0.05) = 0.5, and also

  m^h(θ2 ∪ θ3) = m(θ2 ∪ θ3) + m(X(3)) · R(θ2 ∪ θ3, X(3)) = 0.05 + (0.3 × 0.125) = 0.0875

because R(θ2 ∪ θ3, X(3)) = 0.05 / (0.15 + 0.20 + 0.05) = 0.125.
Now we go to the next step of HDSmP: one needs to redistribute the masses of the partial ignorances X(2), corresponding to θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3, back to the singleton elements X(1) corresponding to θ1, θ2 and θ3. We use HDSmP in Eq. (17) directly for this, as follows:
  HDSmP(θ1) = m(θ1) + m^h(θ1 ∪ θ2) · R(θ1, θ1 ∪ θ2) + m^h(θ1 ∪ θ3) · R(θ1, θ1 ∪ θ3)
            ≈ 0.10 + (0.2625 × 0.3703) + (0.35 × 0.7692)
            = 0.10 + 0.0972 + 0.2692 = 0.4664

because

  R(θ1, θ1 ∪ θ2) = 0.10 / (0.10 + 0.17) ≈ 0.3703
  R(θ1, θ1 ∪ θ3) = 0.10 / (0.10 + 0.03) ≈ 0.7692

Similarly, one gets

  HDSmP(θ2) = m(θ2) + m^h(θ1 ∪ θ2) · R(θ2, θ1 ∪ θ2) + m^h(θ2 ∪ θ3) · R(θ2, θ2 ∪ θ3)
            ≈ 0.17 + (0.2625 × 0.6297) + (0.0875 × 0.85)
            = 0.17 + 0.1653 + 0.0744 = 0.4097

because

  R(θ2, θ1 ∪ θ2) = 0.17 / (0.10 + 0.17) ≈ 0.6297
  R(θ2, θ2 ∪ θ3) = 0.17 / (0.17 + 0.03) = 0.85

and also

  HDSmP(θ3) = m(θ3) + m^h(θ1 ∪ θ3) · R(θ3, θ1 ∪ θ3) + m^h(θ2 ∪ θ3) · R(θ3, θ2 ∪ θ3)
            ≈ 0.03 + (0.35 × 0.2307) + (0.0875 × 0.15)
            = 0.03 + 0.0808 + 0.0131 = 0.1239

because

  R(θ3, θ1 ∪ θ3) = 0.03 / (0.10 + 0.03) ≈ 0.2307
  R(θ3, θ2 ∪ θ3) = 0.03 / (0.17 + 0.03) = 0.15

Hence, the final result of the HDSmP transformation is:

  HDSmP(θ1) = 0.4664, HDSmP(θ2) = 0.4097, HDSmP(θ3) = 0.1239,

and one can easily verify that HDSmP(θ1) + HDSmP(θ2) + HDSmP(θ3) = 1.
The procedure can be illustrated in Fig. 1 below.
Figure 1. Illustration of Example 1: in Step 1 the mass of {θ1, θ2, θ3} is redistributed to the pairs {θ1, θ2}, {θ1, θ3} and {θ2, θ3}; in Step 2 the pair masses are redistributed to the singletons {θ1}, {θ2} and {θ3}.
Table I. Experimental results for Example 1.

  Approach           θ1      θ2      θ3      E_H
  BetP               0.3750  0.3700  0.2550  0.9868
  PrPl               0.4045  0.3681  0.2274  0.9747
  PrBel              0.4094  0.4769  0.1137  0.8792
  DSmP (ε = 0)       0.4094  0.4769  0.1137  0.8792
  DSmP (ε = 0.001)   0.4094  0.4769  0.1137  0.8792
  HDSmP (ε = 0)      0.4664  0.4097  0.1239  0.8921
  HDSmP (ε = 0.001)  0.4664  0.4097  0.1239  0.8921
The classical DSmP transformation [3] and the other transformations (BetP [2], PrBel and PrPl [8]) are compared with HDSmP for this example in Table I. It can be seen in Table I that the normalized entropy E_H of HDSmP is relatively small, but not the smallest, among the probability transformations used. In fact it is normal that the entropy drawn from HDSmP is a bit bigger than the entropy drawn from DSmP, because there is a "dilution" of uncertainty in the step-by-step redistribution, whereas such dilution is absent in the direct DSmP transformation.
B. Example 2
Let's consider the following bba:

  m(θ1) = 0, m(θ2) = 0.17, m(θ3) = 0.13,
  m(θ1 ∪ θ2) = 0.20, m(θ1 ∪ θ3) = 0.20,
  m(θ2 ∪ θ3) = 0, m(θ1 ∪ θ2 ∪ θ3) = 0.30.
The first step of HDSmP consists in redistributing m(θ1 ∪ θ2 ∪ θ3) = 0.30, committed to the full ignorance, back to the elements θ1 ∪ θ2 and θ1 ∪ θ3 only, because (with ε = 0) these are the only elements of cardinality 2 included in θ1 ∪ θ2 ∪ θ3 with nonzero mass. Applying Eq. (18) with k = 3, one gets for X(2) = θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3 the following masses:
  m^h(θ1 ∪ θ2) = m(θ1 ∪ θ2) + m(X(3)) · R(θ1 ∪ θ2, X(3)) = 0.20 + (0.3 × 0.5) = 0.35

because R(θ1 ∪ θ2, X(3)) = 0.20 / (0.20 + 0.20 + 0.00) = 0.5. Similarly, one gets

  m^h(θ1 ∪ θ3) = m(θ1 ∪ θ3) + m(X(3)) · R(θ1 ∪ θ3, X(3)) = 0.20 + (0.3 × 0.5) = 0.35

because R(θ1 ∪ θ3, X(3)) = 0.20 / (0.20 + 0.20 + 0.00) = 0.5, and also

  m^h(θ2 ∪ θ3) = m(θ2 ∪ θ3) + m(X(3)) · R(θ2 ∪ θ3, X(3)) = 0.00 + (0.3 × 0.0) = 0

because R(θ2 ∪ θ3, X(3)) = 0.00 / (0.20 + 0.20 + 0.00) = 0.
Now we go to the next and last step of the HDSmP principle: one needs to redistribute the masses of the partial ignorances X(2), corresponding to θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3, back to the singleton elements X(1) corresponding to θ1, θ2 and θ3.
We use HDSmP in Eq. (17) directly for this, as follows:

  HDSmP(θ1) = m(θ1) + m^h(θ1 ∪ θ2) · R(θ1, θ1 ∪ θ2) + m^h(θ1 ∪ θ3) · R(θ1, θ1 ∪ θ3)
            = 0.00 + (0.35 × 0.00) + (0.35 × 0.00) = 0

because

  R(θ1, θ1 ∪ θ2) = 0.00 / (0.00 + 0.17) = 0.00
  R(θ1, θ1 ∪ θ3) = 0.00 / (0.00 + 0.13) = 0.00

Similarly, one gets

  HDSmP(θ2) = m(θ2) + m^h(θ1 ∪ θ2) · R(θ2, θ1 ∪ θ2) + m^h(θ2 ∪ θ3) · R(θ2, θ2 ∪ θ3)
            ≈ 0.17 + (0.35 × 1) + (0.00 × 0.5667)
            = 0.17 + 0.35 + 0.00 = 0.52

because

  R(θ2, θ1 ∪ θ2) = 0.17 / (0.00 + 0.17) = 1
  R(θ2, θ2 ∪ θ3) = 0.17 / (0.17 + 0.13) ≈ 0.5667

and also

  HDSmP(θ3) = m(θ3) + m^h(θ1 ∪ θ3) · R(θ3, θ1 ∪ θ3) + m^h(θ2 ∪ θ3) · R(θ3, θ2 ∪ θ3)
            ≈ 0.13 + (0.35 × 1) + (0.00 × 0.4333)
            = 0.13 + 0.35 + 0.00 = 0.48

because

  R(θ3, θ1 ∪ θ3) = 0.13 / (0.13 + 0.00) = 1
  R(θ3, θ2 ∪ θ3) = 0.13 / (0.17 + 0.13) ≈ 0.4333
Hence, the final result of the HDSmP transformation with ε = 0 is:

  HDSmP(θ1) = 0, HDSmP(θ2) = 0.52, HDSmP(θ3) = 0.48,

and one can easily verify that HDSmP(θ1) + HDSmP(θ2) + HDSmP(θ3) = 1.
The HDSmP procedure of Example 2 with ε = 0 is illustrated in Fig. 2, while with ε > 0 it is the same as that illustrated in Fig. 1. When one takes ε > 0, some mass is redistributed to {θ2 ∪ θ3}; if one takes ε = 0, no mass is redistributed to {θ2 ∪ θ3}. That is the difference between Fig. 1 and Fig. 2.
Let's suppose that one takes ε = 0.001; then the HDSmP calculation procedure is as follows.

Figure 2. Illustration of Example 2 (ε = 0): the mass of {θ1, θ2, θ3} is redistributed to {θ1, θ2} and {θ1, θ3} only (Step 1), then the pair masses are redistributed to the singletons {θ1}, {θ2} and {θ3} (Step 2).

- Step 1: The first step of HDSmP consists in redistributing m(θ1 ∪ θ2 ∪ θ3) = 0.30, committed to the full ignorance, back to the elements θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3. Applying Eq. (18) with k = 3, one gets for X(2) = θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3 the following masses:
  m^h(θ1 ∪ θ2) = m(θ1 ∪ θ2) + m(X(3)) · R(θ1 ∪ θ2, X(3)) = 0.20 + (0.3 × 0.4963) = 0.3489

because

  R(θ1 ∪ θ2, X(3)) = (0.20 + 0.001 × 3) / [(0.20 + 0.001 × 3) × 2 + (0.00 + 0.001 × 3)] ≈ 0.4963

  m^h(θ1 ∪ θ3) = m(θ1 ∪ θ3) + m(X(3)) · R(θ1 ∪ θ3, X(3)) = 0.20 + (0.3 × 0.4963) = 0.3489

because

  R(θ1 ∪ θ3, X(3)) = (0.20 + 0.001 × 3) / [(0.20 + 0.001 × 3) × 2 + (0.00 + 0.001 × 3)] ≈ 0.4963

  m^h(θ2 ∪ θ3) = m(θ2 ∪ θ3) + m(X(3)) · R(θ2 ∪ θ3, X(3)) = 0.00 + (0.3 × 0.0073) = 0.0022

because

  R(θ2 ∪ θ3, X(3)) = (0.00 + 0.001 × 3) / [(0.20 + 0.001 × 3) × 2 + (0.00 + 0.001 × 3)] ≈ 0.0073
- Next step: one needs to redistribute the masses of the partial ignorances X(2), corresponding to θ1 ∪ θ2, θ1 ∪ θ3 and θ2 ∪ θ3, back to the singleton elements X(1) corresponding to θ1, θ2 and θ3. We use HDSmP in Eq. (17) directly for this, as follows:

  HDSmP(θ1) = m(θ1) + m^h(θ1 ∪ θ2) · R(θ1, θ1 ∪ θ2) + m^h(θ1 ∪ θ3) · R(θ1, θ1 ∪ θ3)
            ≈ 0.00 + (0.3489 × 0.0115) + (0.3489 × 0.0149)
            = 0.00 + 0.0040 + 0.0052 = 0.0092

because

  R(θ1, θ1 ∪ θ2) = (0.00 + 0.001 × 2) / [(0.00 + 0.001 × 2) + (0.17 + 0.001 × 2)] ≈ 0.0115
  R(θ1, θ1 ∪ θ3) = (0.00 + 0.001 × 2) / [(0.00 + 0.001 × 2) + (0.13 + 0.001 × 2)] ≈ 0.0149
Similarly, one gets

  HDSmP(θ2) = m(θ2) + m^h(θ1 ∪ θ2) · R(θ2, θ1 ∪ θ2) + m^h(θ2 ∪ θ3) · R(θ2, θ2 ∪ θ3)
            ≈ 0.17 + (0.3489 × 0.9885) + (0.0022 × 0.5658)
            = 0.17 + 0.3449 + 0.0012 = 0.5161

because

  R(θ2, θ1 ∪ θ2) = (0.17 + 0.001 × 2) / [(0.00 + 0.001 × 2) + (0.17 + 0.001 × 2)] ≈ 0.9885
  R(θ2, θ2 ∪ θ3) = (0.17 + 0.001 × 2) / [(0.17 + 0.001 × 2) + (0.13 + 0.001 × 2)] ≈ 0.5658

and also

  HDSmP(θ3) = m(θ3) + m^h(θ1 ∪ θ3) · R(θ3, θ1 ∪ θ3) + m^h(θ2 ∪ θ3) · R(θ3, θ2 ∪ θ3)
            ≈ 0.13 + (0.3489 × 0.9851) + (0.0022 × 0.4342)
            = 0.13 + 0.3437 + 0.0009 = 0.4746

because

  R(θ3, θ1 ∪ θ3) = (0.13 + 0.001 × 2) / [(0.13 + 0.001 × 2) + (0.00 + 0.001 × 2)] ≈ 0.9851
  R(θ3, θ2 ∪ θ3) = (0.13 + 0.001 × 2) / [(0.17 + 0.001 × 2) + (0.13 + 0.001 × 2)] ≈ 0.4342
Hence, the final result of the HDSmP transformation with ε = 0.001 is:

  HDSmP(θ1) = 0.0092, HDSmP(θ2) = 0.5161, HDSmP(θ3) = 0.4746,

and one can easily verify that HDSmP(θ1) + HDSmP(θ2) + HDSmP(θ3) = 1.
We also computed the other probability transformations; the results are listed in Table II.
Table II. Experimental results for Example 2.

  Approach           θ1      θ2      θ3      E_H
  BetP               0.3000  0.3700  0.3300  0.9966
  PrPl               0.3125  0.3683  0.3192  0.9975
  PrBel              NaN     NaN     NaN     NaN
  DSmP (ε = 0)       0.0000  0.5400  0.4600  0.6280
  DSmP (ε = 0.001)   0.0037  0.5381  0.4582  0.6479
  HDSmP (ε = 0)      0.0000  0.5200  0.4800  0.6302
  HDSmP (ε = 0.001)  0.0092  0.5161  0.4746  0.6720
It can be seen in Table II that the normalized entropy E_H of HDSmP is relatively small, but not too small, among the probability transformations used.
C. Example 3
Let's consider the following bba:

  m(θ1) = 0, m(θ2) = 0, m(θ3) = 0.70,
  m(θ1 ∪ θ2) = 0, m(θ1 ∪ θ3) = 0,
  m(θ2 ∪ θ3) = 0, m(θ1 ∪ θ2 ∪ θ3) = 0.30.
In this example, the mass assignments of all the focal elements of cardinality 2 equal zero. For HDSmP with ε > 0, m(θ1 ∪ θ2 ∪ θ3) is divided equally and redistributed to {θ1 ∪ θ2}, {θ1 ∪ θ3} and {θ2 ∪ θ3}, because the ratios are

  R(θ1 ∪ θ2, X(3)) = R(θ1 ∪ θ3, X(3)) = R(θ2 ∪ θ3, X(3)) = (0.00 + 0.001 × 3) / [(0.00 + 0.001 × 3) × 3] ≈ 0.3333
One sees that with the parameter ε = 0, HDSmP cannot be computed (division by zero), which is why it is necessary to use ε > 0 in such a particular case. The results of HDSmP and other probability transformations are listed in Table III.
Table III. Experimental results for Example 3.

  Approach           θ1      θ2      θ3      E_H
  BetP               0.1000  0.1000  0.8000  0.5871
  PrPl               0.0562  0.0562  0.8876  0.3911
  PrBel              NaN     NaN     NaN     NaN
  DSmP (ε = 0)       0.0000  0.0000  1.0000  0.0000
  DSmP (ε = 0.001)   0.0004  0.0004  0.9992  0.0065
  HDSmP (ε = 0)      NaN     NaN     NaN     NaN
  HDSmP (ε = 0.001)  0.0503  0.0503  0.8994  0.3606
It can be seen in Table III that the normalized entropy E_H of HDSmP is relatively small, but not the smallest, among the probability transformations used. Naturally, and as already pointed out, HDSmP with ε = 0 cannot be computed in this example because of the division by zero. With ε = 0.001, the mass m(θ1 ∪ θ2 ∪ θ3) becomes equally divided and redistributed to the focal elements of cardinality 2. This justifies the necessity of using a parameter ε > 0 in the particular cases where some masses equal zero.
D. Example 4 (vacuous bba)
Let's consider the following particular bba, called the vacuous bba since it represents a fully ignorant source of evidence:

  m(θ1) = 0, m(θ2) = 0, m(θ3) = 0,
  m(θ1 ∪ θ2) = 0, m(θ1 ∪ θ3) = 0,
  m(θ2 ∪ θ3) = 0, m(θ1 ∪ θ2 ∪ θ3) = 1.
In this example, the mass assignments of all the focal elements of cardinality less than 3 equal zero. For HDSmP with ε > 0, m(θ1 ∪ θ2 ∪ θ3) is divided equally and redistributed to {θ1 ∪ θ2}, {θ1 ∪ θ3} and {θ2 ∪ θ3}. Similarly, the mass assignments of the focal elements of cardinality 2 (partial ignorances) obtained at the intermediate step are divided equally and redistributed to the singletons included in them. This redistribution is made possible by the use of ε > 0 in the HDSmP formulas. HDSmP cannot be applied and computed in this example if one takes ε = 0, and that is why one needs to use ε > 0 here. The results of HDSmP and other probability transformations are listed in Table IV.
Table IV. Experimental results for Example 4.

  Approach           θ1      θ2      θ3      E_H
  BetP               0.3333  0.3333  0.3333  1.0000
  PrPl               0.3333  0.3333  0.3333  1.0000
  PrBel              NaN     NaN     NaN     NaN
  DSmP (ε = 0)       NaN     NaN     NaN     NaN
  DSmP (ε = 0.001)   0.3333  0.3333  0.3333  1.0000
  HDSmP (ε = 0)      NaN     NaN     NaN     NaN
  HDSmP (ε = 0.001)  0.3333  0.3333  0.3333  1.0000
It can be seen in Tables I-IV that the normalized entropy E_H of HDSmP is always moderate among the probability transformations it is compared with, and it is normal to get a bigger entropy value with HDSmP than with DSmP because of the dilution of uncertainty through the HDSmP procedure. We have already shown that the entropy criterion is not sufficient to evaluate the quality of a probability transformation [14]; a compromise must always be found between the entropy level and the numerical robustness of the transformation. Although the entropy should be as small as possible for decision-making, an exaggeratedly small entropy is not always preferred. Because of the way the mass of (partial) ignorances is proportionally redistributed, it is clear that if the mass assignment of a singleton equals zero in the original bba, then after applying the DSmP or HDSmP transformations this mass is unchanged and kept at zero. This behavior may appear a bit surprising at first glance, especially if some masses of partial ignorances including this singleton are not equal to zero. It is however normal in the spirit of proportional redistribution: if one has no support (belief) at all for a singleton in the original bba, one expects no support for this singleton after the transformation is applied either, which makes perfect sense. Of course, if such behavior is considered too optimistic, or not acceptable because it appears too risky in some applications, it is always possible to choose another transformation instead. The final choice is always left in the hands of the user, or of the fusion system designer.
V. CONCLUSIONS
Probability transformation is crucial for decision-making in evidence theory. In this paper a novel and useful hierarchical probability transformation approach, called HDSmP, has been proposed; HDSmP always provides a moderate value of entropy, which is necessary for easier and more reliable decision-making support. Unfortunately, the PIC (or entropy) level is not the only useful criterion to evaluate the quality of a probability transformation in general: at least the numerical robustness of the method is also important and must be considered seriously, as already shown in our previous works. Therefore, to evaluate any probability transformation more efficiently, and to outperform existing transformations (including DSmP and HDSmP), a more comprehensive evaluation criterion needs to be found. The search for such a criterion is under investigation.
ACKNOWLEDGEMENT
This work was supported by the National Natural Science Foundation of China (Grants No. 61104214 and No. 67114022), the Fundamental Research Funds for the Central Universities, the China Postdoctoral Science Foundation (No. 20100481337, No. 201104670) and the Research Fund of the Shaanxi Key Laboratory of Electronic Information System Integration (No. 201101Y17).
REFERENCES
[1] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton, 1976.
[2] P. Smets, R. Kennes, The transferable belief model, Artificial Intelligence, Vol. 66, No. 2, pp. 191-234, 1994.
[3] F. Smarandache, J. Dezert (Editors), Applications and Advances of DSmT for Information Fusion, Vol. 3, American Research Press, 2009. http://www.gallup.unm.edu/~smarandache/DSmT-book3.pdf
[4] W.H. Lv, Decision-making rules based on belief interval with D-S evidence theory, book section in Fuzzy Information and Engineering, pp. 619-627, 2007.
[5] J. Sudano, Pignistic probability transforms for mixes of low- and high-probability events, in Proc. of Int. Conf. on Information Fusion 2001, Montreal, Canada, TUB3, pp. 23-27, August 2001.
[6] J. Sudano, The system probability information content (PIC) relationship to contributing components, combining independent multisource beliefs, hybrid and pedigree pignistic probabilities, in Proc. of Int. Conf. on Information Fusion 2002, Annapolis, USA, Vol. 2, pp. 1277-1283, July 2002.
[7] J. Sudano, Equivalence between belief theories and naive Bayesian fusion for systems with independent evidential data - Part I, The theory, in Proc. of Int. Conf. on Information Fusion 2003, Cairns, Australia, Vol. 2, pp. 1239-1243, July 2003.
[8] J. Sudano, Yet another paradigm illustrating evidence fusion (YAPIEF), in Proc. of Int. Conf. on Information Fusion 2006, Florence, pp. 1-7, July 2006.
[9] J. Dezert, F. Smarandache, A new probabilistic transformation of belief mass assignment, in Proc. of Int. Conf. on Information Fusion 2008, Cologne, Germany, pp. 1-8, June 30th - July 3rd, 2008.
[10] J. Sudano, Belief fusion, pignistic probabilities, and information content in fusing tracking attributes, in Proc. of Radar Conference 2004, pp. 218-224, 2004.
[11] Y. Deng, W. Jiang, Q. Li, Probability transformation in transferable belief model based on fractal theory (in Chinese), in Proc. of Chinese Conf. on Information Fusion 2009, Yantai, China, pp. 10-13, 79, Nov. 2009.
[12] W. Pan, H.J. Yang, New methods of transforming belief functions to pignistic probability functions in evidence theory, in Proc. of Int. Workshop on Intelligent Systems and Applications, pp. 1-5, Wuhan, China, May 2009.
[13] F. Cuzzolin, On the properties of the intersection probability, submitted to the Annals of Mathematics and AI, Feb. 2007.
[14] D.Q. Han, J. Dezert, C.Z. Han, Y. Yang, Is entropy enough to evaluate the probability transformation approach of belief function?, in Proc. of 13th Int. Conf. on Information Fusion, pp. 1-7, Edinburgh, UK, July 26-29, 2010.
[15] P. Smets, Decision making in the TBM: the necessity of the pignistic transformation, Int. Journal of Approximate Reasoning, Vol. 38, pp. 133-147, 2005.
[16] J. Dezert, D.Q. Han, Z.G. Liu, J.-M. Tacnet, Hierarchical proportional redistribution for bba approximation, in Proc. of the 2nd International Conference on Belief Functions, Compiegne, France, May 9-11, 2012.
[17] J. Dezert, D.Q. Han, Z.G. Liu, J.-M. Tacnet, Hierarchical proportional redistribution principle for uncertainty reduction and bba approximation, in Proc. of the 10th World Congress on Intelligent Control and Automation (WCICA 2012), Beijing, China, July 6-8, 2012.