
Judgement and Decision Making in Information Systems

Diagnostic Modeling: Bayes' Theorem, Influence Diagrams and Belief Networks

Yuval Shahar, M.D., Ph.D.

Reasoning Under Uncertainty Example: Medical Diagnosis

• Uncertainty is inherent to medical reasoning
  – The relation of diseases to clinical and laboratory findings is probabilistic
  – Patient data itself is often uncertain with respect to value and time
  – Patient preferences regarding outcomes vary
  – Costs of interventions and therapy can change

• Principles of diagnostic modeling and computation are identical in engineering, finance, and scientific evidence

Test Characteristics

Test result    Disease present        Disease absent         Total
Positive       True positive (TP)     False positive (FP)    TP+FP
Negative       False negative (FN)    True negative (TN)     FN+TN
Total          TP+FN                  FP+TN

Test Performance Measures

• The gold standard test: the procedure that defines presence or absence of a disease (often, very costly)

• The index test: the test whose performance is examined

• True positive rate (TPR) = Sensitivity:
  – P(test is positive | patient has disease) = P(T+|D+)
  – Ratio of the number of diseased patients with positive tests to the total number of diseased patients: TP/(TP+FN)

• True negative rate (TNR) = Specificity:
  – P(test is negative | patient has no disease) = P(T-|D-)
  – Ratio of the number of nondiseased patients with negative tests to the total number of nondiseased patients: TN/(TN+FP)

Test Predictive Values

• Positive predictive value (PV+) = P(D+|T+) = TP/(TP+FP)

• Negative predictive value (PV-) = P(D-|T-) = TN/(TN+FN)
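
As an illustration, here is a minimal Python sketch that derives all four measures from the 2x2 table above; the counts are made up for a hypothetical index test:

# Test performance measures from a 2x2 table (hypothetical counts).
TP, FP, FN, TN = 90, 40, 10, 860   # made-up numbers for illustration

sensitivity = TP / (TP + FN)       # TPR = P(T+|D+)
specificity = TN / (TN + FP)       # TNR = P(T-|D-)
ppv = TP / (TP + FP)               # PV+ = P(D+|T+)
npv = TN / (TN + FN)               # PV- = P(D-|T-)

print(f"Sensitivity = {sensitivity:.3f}")   # 0.900
print(f"Specificity = {specificity:.3f}")   # 0.956
print(f"PV+         = {ppv:.3f}")           # 0.692
print(f"PV-         = {npv:.3f}")           # 0.989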

Lab Tests: What is “Abnormal”?

The Cut-off Value Trade-off

• Sensitivity and specificity depend on the cut-off value between what we define as normal and abnormal

• Assume high test values are abnormal; then, moving the cut-off value to a higher one increases FN results and decreases FP results (i.e., more specific), and vice versa

• There is always a trade-off in setting the cut-off point

Receiver Operating Characteristic (ROC) Curves: Examples

Receiver Operating Characteristic (ROC) Curves: Interpretation

• ROC curves summarize the trade-off between the TPR (sensitivity) and the false positive rate (FPR) (1-specificity) for a particular test, as we vary the cut-off threshold

• The greater the area under the ROC curve, the better (more sensitive, more specific) the index test we are considering
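
A minimal Python sketch of the idea, sweeping the cut-off over made-up test values (higher values taken as abnormal) and accumulating the area under the resulting ROC curve with the trapezoidal rule:

# Minimal ROC sketch: sweep a cut-off over hypothetical test values and
# trace (FPR, TPR) pairs; a larger area under the curve = a better index test.
diseased    = [7.1, 6.4, 8.0, 5.9, 7.5]   # made-up test values, D+
nondiseased = [4.2, 5.1, 3.8, 6.0, 4.9]   # made-up test values, D-

points = []
for cutoff in sorted(diseased + nondiseased, reverse=True):
    tpr = sum(v >= cutoff for v in diseased) / len(diseased)
    fpr = sum(v >= cutoff for v in nondiseased) / len(nondiseased)
    points.append((fpr, tpr))
points = [(0.0, 0.0)] + points   # start the curve at the origin

# Area under the ROC curve via the trapezoidal rule.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"AUC = {auc:.2f}")        # 0.96 with these toy values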

Application of Bayes’ Theorem to Diagnosis

P(A & B) = P(A) * P(B|A) = P(B) * P(A|B)

P(A|B) = P(B|A) * P(A) / P(B)

P(disease | test positive) = P(D+|T+)
  = P(T+|D+) * P(D+) / P(T+)
  = P(T+|D+) * P(D+) / [P(T+|D+) * P(D+) + P(T+|D-) * P(D-)]
  = Sensitivity * P(D+) / [Sensitivity * P(D+) + FPR * (1 - P(D+))]
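
A small Python sketch of this computation, using a hypothetical test (sensitivity 0.90, FPR 0.05) and a 10% pre-test probability:

# Post-test probability from Bayes' theorem, matching the formula above.
def post_test_probability(prevalence, sensitivity, fpr):
    """P(D+|T+) = sens*P(D+) / (sens*P(D+) + FPR*(1 - P(D+)))."""
    numerator = sensitivity * prevalence
    return numerator / (numerator + fpr * (1.0 - prevalence))

# Hypothetical test: sensitivity 0.90, specificity 0.95 (FPR = 0.05),
# pre-test probability 10%.
print(post_test_probability(0.10, 0.90, 0.05))   # ~0.667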

Odds-Likelihood (Odds Ratio) Form of Bayes’ Theorem

• Odds = P(A)/(1-P(A)); P = Odds/(1+Odds)

• Post-test odds = pretest odds * likelihood ratio

P(D+|T+) / P(D-|T+) = [P(D+) / P(D-)] * [P(T+|D+) / P(T+|D-)]

• Likelihood ratio: LR+ = P(T+|D+) / P(T+|D-) = TPR / FPR

• Negative likelihood ratio: LR- = P(T-|D+) / P(T-|D-) = FNR / TNR
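
The same update in odds-likelihood form, as a Python sketch; the test characteristics (TPR 0.90, FPR 0.05) and the 10% pre-test probability are made-up illustration values:

# Odds-likelihood form of Bayes' theorem (single test).
def to_odds(p):
    return p / (1.0 - p)

def to_probability(odds):
    return odds / (1.0 + odds)

def post_test_probability(pretest_p, lr):
    """Post-test odds = pre-test odds * likelihood ratio."""
    return to_probability(to_odds(pretest_p) * lr)

# Hypothetical test: TPR = 0.90, FPR = 0.05 -> LR+ = 18; FNR = 0.10, TNR = 0.95.
lr_pos = 0.90 / 0.05
lr_neg = 0.10 / 0.95
print(post_test_probability(0.10, lr_pos))   # positive result: ~0.667
print(post_test_probability(0.10, lr_neg))   # negative result: ~0.0116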

Application of Bayes' Theorem

• Needs reliable pre-test probabilities
• Needs reliable post-test likelihood ratios
• Assumes one disease only (mutual exclusivity of diseases)
• Can be used in sequence for several tests, but only if they are conditionally independent given the disease; then we use the post-test probability of Ti as the pre-test probability for Ti+1 (AKA Simple, or Naïve, Bayes); see the sketch after the formula below

P(D+ | T1, ..., Tn) / P(D- | T1, ..., Tn) = [P(D+) / P(D-)] * Π(i = 1..n) LRi
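
A minimal sketch of this sequential (naïve Bayes) update in Python, assuming conditionally independent tests; the prevalence and likelihood ratios are made-up values:

# Sequential ("naive Bayes") updating: multiply the pre-test odds by the
# likelihood ratio of each conditionally independent test in turn.
def to_odds(p): return p / (1.0 - p)
def to_probability(o): return o / (1.0 + o)

def naive_bayes_update(pretest_p, likelihood_ratios):
    odds = to_odds(pretest_p)
    for lr in likelihood_ratios:   # assumes tests are independent given D
        odds *= lr
    return to_probability(odds)

# Hypothetical: prevalence 5%, three positive tests with LR+ of 4, 3 and 6.
print(naive_bayes_update(0.05, [4.0, 3.0, 6.0]))   # ~0.79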

Relation of Pre-Test and Post-Test Probabilities

Example: Computing Predictive Values

• Assume P(Down syndrome):
  – (A) 0.1% (age 30)
  – (B) 2% (age 45)

• Assume for amniocentesis: sensitivity is 99%, specificity is 99%, for Down syndrome

• For both cases, A and B:
  – PV+ = P(DS|Amnio+) = ??
  – PV- = P(DS-|Amnio-) = ??

Predictive Values: Down Syndrome

(A) P(DS) = 0.1%:
  PV+ = (0.001 * 0.99) / (0.001 * 0.99 + 0.999 * 0.01) = 0.00099 / (0.00099 + 0.00999) ≈ 0.090
  PV- = P(DS- | Amnio-) ≈ 99.999%

(B) P(DS) = 2%:
  PV+ = (0.02 * 0.99) / (0.02 * 0.99 + 0.98 * 0.01) ≈ 0.669
  PV- = P(DS- | Amnio-) ≈ 0.99979
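
These figures can be checked directly with Bayes' theorem; a short Python sketch reproducing them:

# Reproducing the Down syndrome predictive values with Bayes' theorem.
def predictive_values(prevalence, sensitivity, specificity):
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    fn = prevalence * (1 - sensitivity)
    tn = (1 - prevalence) * specificity
    return tp / (tp + fp), tn / (tn + fn)   # (PV+, PV-)

for label, prev in [("A (age 30)", 0.001), ("B (age 45)", 0.02)]:
    pv_pos, pv_neg = predictive_values(prev, 0.99, 0.99)
    print(f"{label}: PV+ = {pv_pos:.3f}, PV- = {pv_neg:.5f}")
# A: PV+ = 0.090, PV- = 0.99999
# B: PV+ = 0.669, PV- = 0.99979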

Bayesian Diagnostic System Example: de Dombal’s Abdominal-Pain System (1972)

• Domain: acute abdominal pain (7 possible diagnoses)
• Input: signs and symptoms of the patient
• Output: probability distribution over the diagnoses
• Method: naïve Bayesian classification
• Evaluation: an eight-center study involving 250 physicians and 16,737 patients
• Results:
  – Diagnostic accuracy rose from 46% to 65%
  – The negative laparotomy rate fell by almost half
  – The perforation rate among patients with appendicitis fell by half
  – The mortality rate fell by 22%
• Results using survey data were consistently better than the clinicians' opinions, and even better than the results using human probability estimates!
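
The classification method itself is easy to sketch. The following Python fragment applies naïve Bayesian classification over a few mutually exclusive diagnoses; the diagnoses, findings, and probabilities are invented for illustration and are not de Dombal's actual data:

# Generic naive Bayes classification over mutually exclusive diagnoses
# (in the spirit of de Dombal's system; all numbers are made up).
priors = {"appendicitis": 0.25, "cholecystitis": 0.15, "non-specific": 0.60}

# P(finding present | diagnosis), assumed conditionally independent.
likelihoods = {
    "rlq_pain": {"appendicitis": 0.80, "cholecystitis": 0.10, "non-specific": 0.20},
    "rebound":  {"appendicitis": 0.60, "cholecystitis": 0.20, "non-specific": 0.10},
}

findings = ["rlq_pain", "rebound"]   # observed signs and symptoms

posterior = {}
for dx, prior in priors.items():
    p = prior
    for f in findings:
        p *= likelihoods[f][dx]
    posterior[dx] = p

total = sum(posterior.values())
for dx in posterior:
    posterior[dx] /= total           # normalize to a probability distribution
print(posterior)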

Influence Diagrams: An Alternative, Powerful Tool for Modeling Decisions

• A graphical notation for modeling situations involving multiple decisions, probabilities, and utilities

• Computationally: equivalent to decision trees

• Has several distinct advantages and disadvantages relative to decision trees

Influence Diagrams: Node Conventions

Chance node

Decision node

Utility node

Deterministic node

Link Semantics in Influence Diagrams

Dependence link (possible probabilistic relationship)

Information link

Influence link

“No-forgetting” link

Decision Trees: an HIV Example

(Figure: decision tree with decision and chance nodes)

Influence Diagrams: An HIV Example

The Structure of Influence Diagram Links

Belief Networks (Bayesian/Causal Probabilistic/Probabilistic Networks, etc)

• Influence diagrams without decision and utility nodes

(Figure: example network with nodes Gender, Disease, Fever, Sinusitis, Runny nose, and Headache)

Link Semantics in Belief Networks

Dependence

Independence

Conditional independence of B and C, given A

(Figure: node A with arcs to nodes B and C)
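
A small Python check of what this conditional independence means: in a network where A is the only parent of B and C, the joint distribution factors as P(A) * P(B|A) * P(C|A), so P(B,C|A) = P(B|A) * P(C|A). The probability values below are made up:

# Conditional independence of B and C given A, in a network A -> B, A -> C.
from itertools import product

p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: 0.9, False: 0.2}   # P(B=true | A)
p_c_given_a = {True: 0.6, False: 0.1}   # P(C=true | A)

def joint(a, b, c):
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_a[a] if c else 1 - p_c_given_a[a]
    return p_a[a] * pb * pc             # P(A) * P(B|A) * P(C|A)

# Check: P(B, C | A) == P(B | A) * P(C | A) for every value of A, B, C.
for a, b, c in product([True, False], repeat=3):
    p_bc_given_a = joint(a, b, c) / p_a[a]
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_a[a] if c else 1 - p_c_given_a[a]
    assert abs(p_bc_given_a - pb * pc) < 1e-12
print("B and C are conditionally independent given A")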

Assessment Versus Observation Orders

• Usually it is most convenient to consider the relationship between diagnoses and tests in assessment order: P(T+|D+)
  – This is also the easier order in which experts assess the probabilities or in which we can learn them from data

• However, in real life we need to compute the probability of diseases given observed findings, that is, in observation order: P(D+|T+)
  – That is the fashion in which diagnostic problems are typically presented, although it is less easy to compute

• Thus, we often need to reverse the dependence arc to solve the influence diagram or belief network (or to draw the corresponding decision tree)

Assessment Order Representation

P(D+) = 0.4, P(D-) = 0.6
P(T+|D+) = 0.8, P(T-|D+) = 0.2
P(T+|D-) = 0.2, P(T-|D-) = 0.8

D     T     P(D & T)
D+    T+    0.32
D+    T-    0.08
D-    T+    0.12
D-    T-    0.48

Observation Order Representation: Reversing the Arcs

P(T+) = 0.44, P(T-) = 0.56
P(D+|T+) = 0.727, P(D-|T+) = 0.273
P(D+|T-) = 0.143, P(D-|T-) = 0.857

D     T     P(D & T)
D+    T+    0.32
D+    T-    0.08
D-    T+    0.12
D-    T-    0.48
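
Arc reversal is just Bayes' theorem applied to the joint distribution; the following Python sketch reproduces the numbers above from the assessment-order parameters:

# Reversing the arc D -> T: derive P(T) and P(D|T) from P(D) and P(T|D),
# reproducing the numbers in the two representations above.
p_d = {"D+": 0.4, "D-": 0.6}
p_t_given_d = {("T+", "D+"): 0.8, ("T-", "D+"): 0.2,
               ("T+", "D-"): 0.2, ("T-", "D-"): 0.8}

# Joint distribution P(D & T) -- identical in both orderings.
joint = {(d, t): p_d[d] * p_t_given_d[(t, d)]
         for d in p_d for t in ("T+", "T-")}

# Marginal P(T) and reversed conditional P(D|T).
p_t = {t: sum(joint[(d, t)] for d in p_d) for t in ("T+", "T-")}
p_d_given_t = {(d, t): joint[(d, t)] / p_t[t] for (d, t) in joint}

print(p_t)           # {'T+': 0.44, 'T-': 0.56}
print(p_d_given_t)   # P(D+|T+) ~ 0.727, P(D+|T-) ~ 0.143, etc.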

Advantages of Influence Diagrams and Belief Networks: Modeling Implications

• Excellent modeling tool that supports acquisition from domain experts
  – Intuitive semantics (e.g., information and influence links)
  – Explicit representation of dependencies
  – Representation in assessment (not observation) order
  – Very concise representation of large decision models

Advantages of Influence Diagrams and Belief Networks: Computational Implications

• "Anytime" algorithms are available (using probability theory) to compute the distribution of values at any node given the values of any subset of the nodes (e.g., at any stage of information gathering)

• Explicit support for value-of-information computations

Disadvantages of Influence Diagrams and Belief Networks for Modeling

• The order of decisions (timing) might be obscured

• The precise relationship between decisions and available information is hidden within the nodes

• Highly asymmetric problems might be easier to represent as decision trees
  – Influence diagrams require using a lot of 0/1 probabilities to represent asymmetry (e.g., if the test is not done, the result is sure to be unknown)

Problems in Using Influence Diagrams and Belief Networks for Computations

• Explicit representation of dependencies often requires acquisition of joint probability distributions (e.g., P(A|B,C))

• Computation is in general intractable (NP-hard), making even moderate-sized problems hard to solve without specialized algorithms

• Solving even a relatively simple influence diagram requires the use of a computer and specialized software

Examples of Successful Belief-Network Applications

• In clinical medicine:
  – Pathological diagnosis at the level of a subspecialized medical expert (Pathfinder)
  – Endocrinological diagnosis (NESTOR)

• In bioinformatics:
  – Recognition of meaningful sites and features in DNA sequences
  – Educated guesses of the tertiary structure of proteins
