
Page 1: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Tree Classification

Prof. Navneet Goyal, BITS Pilani

BITS C464 – Machine Learning

Page 2: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

General Approach

[Figure: the general approach to classification – a learning algorithm performs induction on the Training Set to learn a Model, which is then applied (deduction) to the Test Set.]

Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Figure taken from text book (Tan, Steinbach, Kumar)

Page 3: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Classification by Decision Tree Induction

A decision tree is a classification scheme:
- It represents a model of the different classes
- It generates a tree and a set of rules

A node without children is a leaf node; otherwise it is an internal node. Each internal node has an associated splitting predicate, e.g. binary predicates. Example predicates:
- Age <= 20
- Profession in {student, teacher}
- 5000*Age + 3*Salary – 10000 > 0

Page 4: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Classification by Decision Tree Induction

A decision tree is a flow-chart-like tree structure:
- An internal node denotes a test on an attribute
- A branch represents an outcome of the test
- Leaf nodes represent class labels or class distributions

Decision tree generation consists of two phases:
- Tree construction: at the start, all the training examples are at the root; examples are then partitioned recursively based on selected attributes
- Tree pruning: identify and remove branches that reflect noise or outliers

Use of a decision tree: to classify an unknown sample, test the attribute values of the sample against the decision tree.

Page 5: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision tree classifiers are very popular. WHY?

- They do not require any domain knowledge or parameter setting, and are therefore suitable for exploratory knowledge discovery
- DTs can handle high-dimensional data
- Representation of the acquired knowledge in tree form is intuitive and easy for humans to assimilate
- Learning and classification steps are simple and fast
- Good accuracy

Classification by Decision Tree Induction

Page 6: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Main Algorithms:
- Hunt's algorithm
- ID3
- C4.5
- CART
- SLIQ, SPRINT

Classification by Decision Tree Induction

Page 7: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Example of a Decision Tree

Training Data (categorical attributes: Refund, Marital Status; continuous attribute: Taxable Income; class: Cheat)

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes Refund, MarSt, TaxInc)

[Figure: Refund is the root split – Yes → leaf NO; No → MarSt. MarSt: Married → leaf NO; Single, Divorced → TaxInc. TaxInc: < 80K → leaf NO; > 80K → leaf YES.]

Figure taken from text book (Tan, Steinbach, Kumar)

Page 8: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Another Example of Decision Tree

Training Data: the same ten-record Refund / Marital Status / Taxable Income / Cheat table as on Page 7.

[Figure: an alternative tree for the same data – MarSt is the root split: Married → leaf NO; Single, Divorced → Refund. Refund: Yes → leaf NO; No → TaxInc. TaxInc: < 80K → leaf NO; > 80K → leaf YES.]

There could be more than one tree that fits the same data!

Figure taken from text book (Tan, Steinbach, Kumar)

Page 9: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Some Questions

- Which tree is better, and why?
- How many possible decision trees are there?
- How do we find the optimal tree? Is it computationally feasible? (Instead, try constructing a suboptimal tree in a reasonable amount of time – a greedy algorithm.)
- What should be the order of splits?
- Look for answers in the "20 Questions" and "Guess Who" games!

Page 10: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Apply Model to Test Data

[Figure: the decision tree from Page 7 together with the test record Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?]

Test Data. Start from the root of the tree.

Figure taken from text book (Tan, Steinbach, Kumar)

Page 11: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Apply Model to Test Data

[Figure repeated: the same tree and test record; the traversal proceeds one step further toward a leaf.]

Test Data

Figure taken from text book (Tan, Steinbach, Kumar)

Page 12: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Apply Model to Test Data

[Figure repeated: the same tree and test record; the traversal proceeds one step further toward a leaf.]

Test Data

Figure taken from text book (Tan, Steinbach, Kumar)

Page 13: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Apply Model to Test Data

[Figure repeated: the same tree and test record; the traversal proceeds one step further toward a leaf.]

Test Data

Page 14: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Apply Model to Test Data

[Figure repeated: the same tree and test record; the traversal proceeds one step further toward a leaf.]

Test Data

Figure taken from text book (Tan, Steinbach, Kumar)

Page 15: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Apply Model to Test Data

[Figure: the traversal reaches the leaf node labelled NO.]

Test Data

Assign Cheat to “No”

Figure taken from text book (Tan, Steinbach, Kumar)
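
As a quick illustration of this deduction step, here is a minimal Python sketch (not from the slides; the record keys and the handling of a value exactly at 80K are assumptions) that hard-codes the Page 7 tree and classifies the test record above:

    def classify(record):
        # Root test: Refund
        if record["Refund"] == "Yes":
            return "No"
        # Refund = No: test Marital Status
        if record["MaritalStatus"] == "Married":
            return "No"
        # Single or Divorced: test Taxable Income
        return "No" if record["TaxableIncome"] < 80_000 else "Yes"

    test_record = {"Refund": "No", "MaritalStatus": "Married", "TaxableIncome": 80_000}
    print(classify(test_record))   # -> No  (assign Cheat to "No")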

Page 16: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Trees: Example – Training Data Set

Outlook    Temp  Humidity  Windy  Class
Rain       68    60        True   No Play
Rain       66    70        False  No Play
Rain       78    60        False  Play
Overcast   88    95        False  Play
Overcast   63    75        True   Play
Overcast   88    88        False  Play
Sunny      60    90        True   No Play
Sunny      79    75        True   Play
Sunny      56    70        False  Play
Sunny      79    90        True   No Play

Numerical attributes: Temperature, Humidity
Categorical attributes: Outlook, Windy
Class attribute: Play / No Play (the class label)

Page 17: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Trees: Example – Sample Decision Tree

[Figure: Outlook is the root split – sunny → Humidity; overcast → Play; rain → Windy. Humidity: <= 75 → Play; > 75 → No Play. Windy: true → No Play; false → Play.]

Five leaf nodes – each represents a rule.

Page 18: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Trees: Example

Rules corresponding to the given tree

1. If it is a sunny day and humidity is not above 75%, then play.

2. If it is a sunny day and humidity is above 75%, then do not play.

3. If it is overcast, then play.

4. If it is rainy and not windy, then play.

5. If it is rainy and windy, then do not play.

Is it the best classification?

Page 19: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Trees: Example

Accuracy of the classifier: determined by the percentage of the test data set that is correctly classified.

Classification of a new record: outlook = rain, temp = 70, humidity = 65, windy = true.
The tree routes this record through Outlook = rain and Windy = true, so the predicted class is "No Play".
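
The five rules from the previous slide can be written directly as a small classifier; this sketch (illustrative only, function name assumed) reproduces the "No Play" prediction for the new record:

    def classify_weather(outlook, temp, humidity, windy):
        if outlook == "sunny":
            return "Play" if humidity <= 75 else "No Play"   # rules 1 and 2
        if outlook == "overcast":
            return "Play"                                     # rule 3
        return "No Play" if windy else "Play"                 # rules 4 and 5 (rain)

    print(classify_weather("rain", 70, 65, True))             # -> No Play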

Page 20: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Trees: Example

Test Data Set

Outlook    Temp  Humidity  Windy  Class
Rain       68    60        True   Play
Rain       66    70        False  No Play
Rain       78    60        False  Play
Overcast   88    95        False  Play
Overcast   63    75        True   Play
Overcast   88    88        False  No Play
Sunny      60    90        True   No Play
Sunny      79    75        True   No Play
Sunny      56    70        False  Play
Sunny      79    90        True   Play

Rule 1 (sunny & humidity <= 75): two test records, one correctly classified. Accuracy = 50%
Rule 2 (sunny & humidity > 75): Accuracy = 50%
Rule 3 (overcast): Accuracy = 66%

Page 21: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Practical Issues of Classification

- Underfitting and Overfitting
- Missing Values
- Costs of Classification

Page 22: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting the Data

A classification model commits two kinds of errors: Training Errors (TE) (also called resubstitution or apparent errors) and Generalization Errors (GE).

A good classification model must have low TE as well as low GE.

A model that fits the training data too well can have a higher GE than a model with a higher TE.

This problem is known as model overfitting

Page 23: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Underfitting and Overfitting

Underfitting: when the model is too simple, both training and test errors are large. TE and GE are large when the size of the tree is very small.

Underfitting occurs because the model has not yet learned the true structure of the data; as a result, it performs poorly on both training and test sets.

Figure taken from text book (Tan, Steinbach, Kumar)

Page 24: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting the Data

When a decision tree is built, many of the branches may reflect anomalies in the training data due to noise or outliers.

If we grow the tree deeply enough to perfectly classify the training data set, the tree will also fit these anomalies.

This problem is known as overfitting the data.

Page 25: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting the Data

TE of a model can be reduced by increasing the model complexity.

The leaf nodes of the tree can be expanded until it perfectly fits the training data; TE for such a complex tree = 0.

GE can be large, because the tree may accidentally fit noise points in the training set.

Overfitting and underfitting are two pathologies that are related to model complexity.

Page 26: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Occam's Razor

Given two models with similar generalization errors, one should prefer the simpler model over the more complex one.

For complex models, there is a greater chance that the model was fitted accidentally by errors in the data.

Therefore, one should include model complexity when evaluating a model.

Page 27: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Definition

A decision tree T is said to overfit the training data if there exists some other tree T’ which is a simplification of T, such that T has smaller error than T’ over the training set but T’ has a smaller error than T over the entire distribution of the instances.

Page 28: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Problems of Overfitting

Overfitting can lead to many difficulties:
- Overfitted models are incorrect
- They require more space and more computational resources
- They require the collection of unnecessary features
- They are more difficult to comprehend

Page 29: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting

Overfitting can be due to:

1. Presence of Noise

2. Lack of representative samples

Page 30: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting: Example

Presence of Noise: Training Set (records marked * are the noisy, mislabelled examples)

Name           Body Temperature  Gives Birth  4-legged  Hibernates  Class Label (mammal)
Porcupine      Warm Blooded      Y            Y         Y           Y
Cat            Warm Blooded      Y            Y         N           Y
Bat            Warm Blooded      Y            N         Y           N*
Whale          Warm Blooded      Y            N         N           N*
Salamander     Cold Blooded      N            Y         Y           N
Komodo Dragon  Cold Blooded      N            Y         N           N
Python         Cold Blooded      N            N         Y           N
Salmon         Cold Blooded      N            N         N           N
Eagle          Warm Blooded      N            N         N           N
Guppy          Cold Blooded      Y            N         N           N

Table taken from text book (Tan, Steinbach, Kumar)

Page 31: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting: Example

Presence of Noise: Training Set (same table as on Page 30).

Table taken from text book (Tan, Steinbach, Kumar)

Page 32: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting: Example

Presence of Noise: Test Set

Name            Body Temperature  Gives Birth  4-legged  Hibernates  Class Label (mammal)
Human           Warm Blooded      Y            N         N           Y
Pigeon          Warm Blooded      N            N         N           N
Elephant        Warm Blooded      Y            Y         N           Y
Leopard Shark   Cold Blooded      Y            N         N           N
Turtle          Cold Blooded      N            Y         N           N
Penguin         Cold Blooded      N            N         N           N
Eel             Cold Blooded      N            N         N           N
Dolphin         Warm Blooded      Y            N         N           Y
Spiny Anteater  Warm Blooded      N            Y         Y           Y
Gila Monster    Cold Blooded      N            Y         Y           N

Table taken from text book (Tan, Steinbach, Kumar)

Page 33: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting: Example

Presence of Noise: Models

[Figure – Model M1: Body Temp (warm blooded → Gives Birth; cold blooded → Non-mammals); Gives Birth (yes → 4-legged; no → Non-mammals); 4-legged (yes → Mammals; no → Non-mammals).]

[Figure – Model M2: Body Temp (warm blooded → Gives Birth; cold blooded → Non-mammals); Gives Birth (yes → Mammals; no → Non-mammals).]

Model M1: TE = 0%, GE = 30%. Find out why!
Model M2: TE = 20%, GE = 10%.

Figure taken from text book (Tan, Steinbach, Kumar)

Page 34: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting: Example

Lack of Representative Samples: Training Set

Name        Body Temperature  Gives Birth  4-legged  Hibernates  Class Label (mammal)
Salamander  Cold Blooded      N            Y         Y           N
Eagle       Warm Blooded      N            N         N           N
Guppy       Cold Blooded      Y            N         N           N
Poorwill    Warm Blooded      N            N         Y           N
Platypus    Warm Blooded      N            Y         Y           Y

Table taken from text book (Tan, Steinbach, Kumar)

Page 35: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting: Example

Lack of Representative Samples: Model

[Figure – Model M3: Body Temp (warm blooded → Hibernates; cold blooded → Non-mammals); Hibernates (yes → 4-legged; no → Non-mammals); 4-legged (yes → Mammals; no → Non-mammals).]

Model M3: TE = 0%, GE = 30%. Find out why!

Figure taken from text book (Tan, Steinbach, Kumar)

Page 36: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting due to Noise

Decision boundary is distorted by noise point

Figure taken from text book (Tan, Steinbach, Kumar)

Page 37: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting due to Insufficient Examples

The lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels in that region.

- An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.

Figure taken from text book (Tan, Steinbach, Kumar)

Page 38: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

How to Address Overfitting

Pre-Pruning (Early Stopping Rule)

Stop the algorithm before it becomes a fully-grown tree. Typical stopping conditions for a node:
- Stop if all instances belong to the same class
- Stop if all the attribute values are the same

More restrictive conditions:
- Stop if the number of instances is less than some user-specified threshold
- Stop if the class distribution of instances is independent of the available features (e.g., using a chi-square test)
- Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain)

Page 39: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

How to Address Overfitting…

Post-pruning
- Grow the decision tree to its entirety
- Trim the nodes of the decision tree in a bottom-up fashion
- If the generalization error improves after trimming, replace the sub-tree by a leaf node
- The class label of the leaf node is determined from the majority class of instances in the sub-tree
- Can use MDL for post-pruning

Page 40: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Post-pruning

The post-pruning approach removes branches of a fully grown tree. Subtree replacement replaces a subtree with a single leaf node.

[Figure: subtree replacement example – the subtree below the Price node (with leaves Yes, Yes, No) in a tree rooted at Alt is replaced by a single leaf labelled Yes.]

Page 41: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Post-pruning

Subtree raising moves a subtree to a higher level in the decision tree, subsuming its parent.

[Figure: subtree raising example – in a tree rooted at Alt, the subtree rooted at Price is moved up to replace its parent node (Res), which it subsumes.]

Page 42: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Overfitting: Example

Presence of Noise: Training Set (same table as on Page 30).

Table taken from text book (Tan, Steinbach, Kumar)

Page 43: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Post-pruning: Techniques

Cost-Complexity Pruning: a pruning operation is performed if it does not increase the estimated error rate. Of course, the error on the training data is not a useful estimator (it would result in almost no pruning).

Minimum Description Length (MDL): states that the best tree is the one that can be encoded using the fewest number of bits. The challenge for the pruning phase is to find the subtree that can be encoded with the least number of bits.
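
As a hedged illustration only (not the exact procedure used in this course), scikit-learn's minimal cost-complexity pruning shows the idea in practice; the dataset and the use of a held-out split as the error estimator are arbitrary choices here:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Effective alpha values along the cost-complexity pruning path of the full tree
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

    best = None
    for alpha in path.ccp_alphas:
        tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)
        score = tree.score(X_te, y_te)   # held-out estimate, not the training error
        if best is None or score > best[0]:
            best = (score, alpha)
    print("best alpha = %.5f, held-out accuracy = %.3f" % (best[1], best[0]))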

Page 44: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Hunt's Algorithm

Let Dt be the set of training records that reach a node t, and let y = {y1, y2, …, yc} be the class labels.

Step 1: If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt. If Dt is an empty set, then t is a leaf node labeled by the default class yd.

Step 2: If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each child node.

[Figure: the ten-record Refund / Marital Status / Taxable Income / Cheat training table from Page 7, with Dt denoting the set of records that reach the node currently being split.]

Figure taken from text book (Tan, Steinbach, Kumar)

Page 45: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Hunt's Algorithm

[Figure: the tree grown by Hunt's algorithm on the Cheat data, in four stages –
(a) a single leaf node labelled Don't Cheat;
(b) split on Refund: Yes → Don't Cheat, No → Don't Cheat;
(c) split on Refund: Yes → Don't Cheat, No → Marital Status (Single, Divorced → Cheat; Married → Don't Cheat);
(d) split on Refund: Yes → Don't Cheat, No → Marital Status (Married → Don't Cheat; Single, Divorced → Taxable Income (< 80K → Don't Cheat; >= 80K → Cheat)).]

Figure taken from text book (Tan, Steinbach, Kumar)

Page 46: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Hunt's algorithm should handle the following additional conditions:
- Child nodes created in Step 2 are empty. (When can this happen?) Declare the node a leaf node with the majority class label of the training records of the parent node.
- In Step 2, if all the records associated with Dt have identical attribute values (except for the class label), then it is not possible to split these records further. Declare the node a leaf with the same class label as the majority class of the training records associated with this node.

Hunt’s Algorithm
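
A compact sketch of Hunt's algorithm as described on the last three slides is given below (the four-record toy table and the "pick the first usable attribute" rule are simplifications, not the slides' data; a real implementation would choose the split with an impurity measure):

    from collections import Counter

    def hunt(records, attributes, default="No"):
        if not records:                              # empty partition -> default class leaf
            return default
        labels = [r["Class"] for r in records]
        majority = Counter(labels).most_common(1)[0][0]
        if len(set(labels)) == 1:                    # Step 1: pure node -> leaf
            return labels[0]
        usable = [a for a in attributes if len({r[a] for r in records}) > 1]
        if not usable:                               # identical attribute values -> majority leaf
            return majority
        attr = usable[0]                             # simplification: no impurity-based choice
        tree = {attr: {}}
        for value in {r[attr] for r in records}:     # Step 2: split and recurse
            subset = [r for r in records if r[attr] == value]
            tree[attr][value] = hunt(subset, [a for a in usable if a != attr], majority)
        return tree

    toy = [
        {"Refund": "Yes", "Marital": "Single",   "Class": "No"},
        {"Refund": "No",  "Marital": "Married",  "Class": "No"},
        {"Refund": "No",  "Marital": "Divorced", "Class": "Yes"},
        {"Refund": "No",  "Marital": "Single",   "Class": "Yes"},
    ]
    print(hunt(toy, ["Refund", "Marital"]))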

Page 47: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Tree Induction

Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.

Issues:
- Determine how to split the records: how to specify the attribute test condition? how to determine the best split?
- Determine when to stop splitting

Page 48: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Design Issues of Decision Tree Induction

How should the training records be split?
- At each recursive step, an attribute test condition must be selected. The algorithm must provide a method for specifying the test condition for different attribute types, as well as an objective measure for evaluating the goodness of each test condition.

How should the splitting procedure stop?
- A stopping condition is needed to terminate the tree-growing process. Stop when:
- all records belong to the same class
- all records have identical attribute values
- Both conditions are sufficient to stop any DT induction algorithm; other criteria can be imposed to terminate the procedure early. (Do we need to do this? Think of model overfitting!)


Page 49: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

How to determine the Best Split?

Before splitting: 10 records of class C0 and 10 records of class C1.

Three candidate test conditions:
- Own Car? – Yes: (C0: 6, C1: 4); No: (C0: 4, C1: 6)
- Car Type? – Family: (C0: 1, C1: 3); Sports: (C0: 8, C1: 0); Luxury: (C0: 1, C1: 7)
- Student ID? – one child per ID (c1 … c20), each containing a single record, i.e. (C0: 1, C1: 0) or (C0: 0, C1: 1)

Which test condition is the best?

Slide taken from text book slides available at companion website (Tan, Steinbach, Kumar)

Page 50: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Greedy approach: nodes with a homogeneous class distribution are preferred.

Need a measure of node impurity:
- (C0: 5, C1: 5) – non-homogeneous, high degree of impurity
- (C0: 9, C1: 1) – homogeneous, low degree of impurity

How to determine the Best Split?

Slide taken from text book slides available at companion website (Tan, Steinbach, Kumar)

Page 51: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Measures of Node Impurity

- The goodness of a split is based on the degree of impurity of the child nodes
- The more skewed the class distribution, the smaller the impurity
- A node with class distribution (1, 0) has zero impurity, whereas a node with class distribution (0.5, 0.5) has the highest impurity

The common measures are:
- Gini Index
- Entropy
- Misclassification error

Page 52: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Measures of Node Impurity

Gini Index:
GINI(t) = 1 – Σ_j [p(j|t)]²

Entropy:
Entropy(t) = – Σ_j p(j|t) log₂ p(j|t)

Misclassification error:
Error(t) = 1 – max_i P(i|t)

Page 53: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Comparison among Splitting Criteria

For a 2-class problem:

[Figure: impurity as a function of p, the fraction of records belonging to one of the two classes, for entropy, Gini, and misclassification error.]

Figure taken from text book (Tan, Steinbach, Kumar)

Page 54: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

How to find the best split? Example:

Node N1: (C0 = 0, C1 = 6) – GINI = 0, Entropy = 0, Error = 0
Node N2: (C0 = 1, C1 = 5) – GINI = 0.278, Entropy = 0.650, Error = 0.167
Node N3: (C0 = 3, C1 = 3) – GINI = 0.5, Entropy = 1, Error = 0.5
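
The three impurity values above can be reproduced with a few lines of Python (a sketch; function names are assumed):

    from math import log2

    def gini(counts):
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts)

    def entropy(counts):
        n = sum(counts)
        return -sum((c / n) * log2(c / n) for c in counts if c > 0)

    def error(counts):
        return 1.0 - max(counts) / sum(counts)

    for name, counts in [("N1", (0, 6)), ("N2", (1, 5)), ("N3", (3, 3))]:
        print(name, round(gini(counts), 3), round(entropy(counts), 3), round(error(counts), 3))
    # N1 0.0 0.0 0.0 | N2 0.278 0.65 0.167 | N3 0.5 1.0 0.5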

Page 55: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

How to find the best split?
- The three measures have similar characteristic curves
- Despite this, the attribute chosen as the test condition may vary depending on the choice of the impurity measure
- Need to normalize these measures! Introducing the GAIN of a split:

  Δ = I(parent) – Σ_{j=1..k} [ N(v_j) / N ] · I(v_j)

  where I(·) is the impurity measure of a given node, N is the total number of records at the parent node, k is the number of attribute values, and N(v_j) is the number of records associated with the child node v_j.

- I(parent) is the same for all test conditions
- When entropy is used as the impurity measure, Δ is called the Information Gain, Δ_info
- The larger the gain, the better the split
- Is it the best measure?

Page 56: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

How to Find the Best Split?

[Figure: two candidate binary splits of the same parent node – attribute A (children N1, N2) and attribute B (children N3, N4). The parent has class counts (C0: N00, C1: N01) and impurity M0; the children have counts (N10, N11), (N20, N21), (N30, N31), (N40, N41) and impurities M1, M2, M3, M4. M12 and M34 are the weighted impurities of the children of A and B respectively.]

Gain of the split on A = M0 – M12; gain of the split on B = M0 – M34. Choose the split with the larger gain.

Slide taken from text book slides available at companion website (Tan, Steinbach, Kumar)

Page 57: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Measure of Impurity: GINI

Gini Index for a given node t:

GINI(t) = 1 – Σ_j [p(j|t)]²

(NOTE: p(j|t) is the relative frequency of class j at node t.)

- Maximum (1 – 1/nc) when records are equally distributed among all classes, implying the least interesting information
- Minimum (0.0) when all records belong to one class, implying the most interesting information

Examples:
(C1 = 0, C2 = 6): Gini = 0.000
(C1 = 1, C2 = 5): Gini = 0.278
(C1 = 2, C2 = 4): Gini = 0.444
(C1 = 3, C2 = 3): Gini = 0.500

Slide taken from text book slides available at companion website (Tan, Steinbach, Kumar)

Page 58: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Examples for computing GINI

GINI(t) = 1 – Σ_j [p(j|t)]²

(C1 = 0, C2 = 6): P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Gini = 1 – P(C1)² – P(C2)² = 1 – 0 – 1 = 0

(C1 = 1, C2 = 5): P(C1) = 1/6, P(C2) = 5/6
Gini = 1 – (1/6)² – (5/6)² = 0.278

(C1 = 2, C2 = 4): P(C1) = 2/6, P(C2) = 4/6
Gini = 1 – (2/6)² – (4/6)² = 0.444

Page 59: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Splitting Based on GINI
- Used in CART, SLIQ, SPRINT
- When a node p is split into k partitions (children), the quality of the split is computed as

  GINI_split = Σ_{i=1..k} (n_i / n) · GINI(i)

  where n_i = number of records at child i, and n = number of records at node p.

Page 60: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Binary Attributes: Computing GINI Index

- Splits into two partitions
- Effect of weighting partitions: larger and purer partitions are sought.

Split on attribute A (parent: C1 = 6, C2 = 6, Gini = 0.500):
N1: (C1 = 4, C2 = 3), N2: (C1 = 2, C2 = 3)
Gini(N1) = 1 – (4/7)² – (3/7)² = 0.490
Gini(N2) = 1 – (2/5)² – (3/5)² = 0.480
Gini(children) = 7/12 × 0.490 + 5/12 × 0.480 = 0.486

Page 61: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Binary Attributes: Computing GINI Index

- Splits into two partitions
- Effect of weighting partitions: larger and purer partitions are sought.

Split on attribute B (parent: C1 = 6, C2 = 6, Gini = 0.500):
N1: (C1 = 5, C2 = 2), N2: (C1 = 1, C2 = 4)
Gini(N1) = 1 – (5/7)² – (2/7)² = 0.408
Gini(N2) = 1 – (1/5)² – (4/5)² = 0.320
Gini(children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371

Attribute B is preferred over A
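
The two weighted-Gini computations above can be checked with a short sketch (helper names assumed; each child node is given as a tuple of class counts):

    def gini(counts):
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts)

    def gini_split(children):
        n = sum(sum(c) for c in children)
        return sum(sum(c) / n * gini(c) for c in children)

    print(round(gini_split([(4, 3), (2, 3)]), 3))   # split on A -> 0.486
    print(round(gini_split([(5, 2), (1, 4)]), 3))   # split on B -> 0.371 (preferred)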

Page 62: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Categorical Attributes: Computing Gini Index

- For each distinct value, gather the counts for each class in the dataset
- Use the count matrix to make decisions

Multi-way split (CarType = Family / Sports / Luxury):
C1: 1, 2, 1; C2: 4, 1, 1 – Gini = 0.393

Two-way split (find the best partition of values):
{Sports, Luxury} vs {Family}: C1: 3, 1; C2: 2, 4 – Gini = 0.400
{Sports} vs {Family, Luxury}: C1: 2, 2; C2: 1, 5 – Gini = 0.419

GINI favours multiway splits!!

Page 63: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Continuous Attributes: Computing Gini Index

- Use binary decisions based on one value (e.g., Taxable Income > 80K? Yes / No)
- Several choices for the splitting value v: the number of possible splitting values = the number of distinct values
- Each splitting value has a count matrix associated with it: class counts in each of the partitions, A < v and A >= v
- Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index – computationally inefficient! Repetition of work.

Page 64: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Continuous Attributes: Computing Gini Index...

For efficient computation, for each attribute:
- Sort the attribute on its values
- Linearly scan these values, each time updating the count matrix and computing the Gini index
- Choose the split position that has the least Gini index

Sorted values (Taxable Income) with class (Cheat):
60(No) 70(No) 75(No) 85(Yes) 90(Yes) 95(Yes) 100(No) 120(No) 125(No) 220(No)

Split positions and counts (<=, >) with the weighted Gini of each split:

Split v     55     65     72     80     87     92     97     110    122    172    230
Yes (<=,>)  0,3    0,3    0,3    0,3    1,2    2,1    3,0    3,0    3,0    3,0    3,0
No  (<=,>)  0,7    1,6    2,5    3,4    3,4    3,4    3,4    4,3    5,2    6,1    7,0
Gini        0.420  0.400  0.375  0.343  0.417  0.400  0.300  0.343  0.375  0.400  0.420

The best split is at 97 (Gini = 0.300).

Find the time complexity in terms of # records!
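
A sketch of this sorted linear scan (assumed helper names, not the slides' code; candidate positions are taken as midpoints, so the best split comes out at 97.5 rather than the slide's 97, with the same Gini of 0.300):

    def gini(yes, no):
        n = yes + no
        return 1.0 - (yes / n) ** 2 - (no / n) ** 2 if n else 0.0

    records = sorted([(125, "No"), (100, "No"), (70, "No"), (120, "No"), (95, "Yes"),
                      (60, "No"), (220, "No"), (85, "Yes"), (75, "No"), (90, "Yes")])
    total_yes = sum(1 for _, c in records if c == "Yes")
    total_no = len(records) - total_yes

    values = [v for v, _ in records]
    candidates = [values[0] - 5] + [(a + b) / 2 for a, b in zip(values, values[1:])] + [values[-1] + 10]

    best, left_yes, left_no, idx = (float("inf"), None), 0, 0, 0
    for v in candidates:                   # one pass: move records into the left partition as v grows
        while idx < len(records) and records[idx][0] <= v:
            if records[idx][1] == "Yes":
                left_yes += 1
            else:
                left_no += 1
            idx += 1
        right_yes, right_no = total_yes - left_yes, total_no - left_no
        n = len(records)
        g = ((left_yes + left_no) / n) * gini(left_yes, left_no) + \
            ((right_yes + right_no) / n) * gini(right_yes, right_no)
        if g < best[0]:
            best = (g, v)
    print(best)   # -> (0.3, 97.5)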

Page 65: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Alternative Splitting Criteria based on INFO

Entropy at a given node t:

Entropy(t) = – Σ_j p(j|t) log₂ p(j|t)

(NOTE: p(j|t) is the relative frequency of class j at node t.)

- Measures the homogeneity of a node
- Maximum (log₂ nc) when records are equally distributed among all classes, implying the least information
- Minimum (0.0) when all records belong to one class, implying the most information
- Entropy-based computations are similar to the GINI index computations

Page 66: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Examples for computing Entropy

Entropy(t) = – Σ_j p(j|t) log₂ p(j|t)

(C1 = 0, C2 = 6): P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Entropy = – 0 log₂ 0 – 1 log₂ 1 = – 0 – 0 = 0

(C1 = 1, C2 = 5): P(C1) = 1/6, P(C2) = 5/6
Entropy = – (1/6) log₂ (1/6) – (5/6) log₂ (5/6) = 0.65

(C1 = 2, C2 = 4): P(C1) = 2/6, P(C2) = 4/6
Entropy = – (2/6) log₂ (2/6) – (4/6) log₂ (4/6) = 0.92

Page 67: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Splitting Based on INFO...

Information Gain:

GAIN_split = Entropy(p) – Σ_{i=1..k} (n_i / n) · Entropy(i)

where the parent node p is split into k partitions and n_i is the number of records in partition i.

- Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN)
- Used in ID3 and C4.5
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure

Page 68: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Splitting Based on INFO...

Gain Ratio:

GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = – Σ_{i=1..k} (n_i / n) log₂ (n_i / n)

The parent node p is split into k partitions and n_i is the number of records in partition i.

- Adjusts the Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
- Used in C4.5
- Designed to overcome the disadvantage of Information Gain
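
The following sketch (assumed helper names) contrasts Information Gain and Gain Ratio on the Car Type and Student ID splits from Page 49: Student ID wins on raw gain but is heavily penalized by SplitINFO:

    from math import log2

    def entropy(counts):
        n = sum(counts)
        return -sum(c / n * log2(c / n) for c in counts if c > 0)

    def info_gain(parent, children):
        n = sum(parent)
        return entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)

    def gain_ratio(parent, children):
        n = sum(parent)
        split_info = -sum(sum(c) / n * log2(sum(c) / n) for c in children)
        return info_gain(parent, children) / split_info

    parent = (10, 10)
    car_type = [(1, 3), (8, 0), (1, 7)]          # Family, Sports, Luxury
    student_id = [(1, 0)] * 10 + [(0, 1)] * 10   # one pure child per record

    for name, split in [("CarType", car_type), ("StudentID", student_id)]:
        print(name, round(info_gain(parent, split), 3), round(gain_ratio(parent, split), 3))
    # CarType: gain 0.62, ratio 0.408   StudentID: gain 1.0, ratio 0.231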

Page 69: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Splitting Criteria based on Classification Error

Classification error at a node t:

Error(t) = 1 – max_i P(i|t)

- Measures the misclassification error made by a node
- Maximum (1 – 1/nc) when records are equally distributed among all classes, implying the least interesting information
- Minimum (0.0) when all records belong to one class, implying the most interesting information

Page 70: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Examples for Computing Error

Error(t) = 1 – max_i P(i|t)

(C1 = 0, C2 = 6): P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Error = 1 – max(0, 1) = 1 – 1 = 0

(C1 = 1, C2 = 5): P(C1) = 1/6, P(C2) = 5/6
Error = 1 – max(1/6, 5/6) = 1 – 5/6 = 1/6

(C1 = 2, C2 = 4): P(C1) = 2/6, P(C2) = 4/6
Error = 1 – max(2/6, 4/6) = 1 – 4/6 = 1/3

Page 71: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Misclassification Error vs Gini

Split on attribute A (parent: C1 = 7, C2 = 3, Gini = 0.42):
N1: (C1 = 3, C2 = 0), N2: (C1 = 4, C2 = 3)
Gini(N1) = 1 – (3/3)² – (0/3)² = 0
Gini(N2) = 1 – (4/7)² – (3/7)² = 0.489
Gini(children) = 3/10 × 0 + 7/10 × 0.489 = 0.342

Gini improves, while the misclassification error stays the same (0.3 before and after the split)!

Page 72: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Tree Based Classification

Advantages:
- Inexpensive to construct
- Extremely fast at classifying unknown records
- Easy to interpret for small-sized trees
- Accuracy is comparable to other classification techniques for many simple data sets

Page 73: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Example: C4.5
- Simple depth-first construction
- Uses Information Gain
- Sorts continuous attributes at each node
- Needs the entire data to fit in memory
- Unsuitable for large datasets (needs out-of-core sorting)

You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz

Page 74: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Example: web robot or crawler detection
- Based on access patterns, distinguish between human users and web robots
- Web Usage Mining: build a model from web log data
- Think of some more applications of classification!

Page 75: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Summary: DT Classifiers
- Do not require any prior assumptions about the probability distributions satisfied by the classes
- Finding the optimal DT is NP-complete
- Construction of a DT is fast even for large data sets; testing is also fast: O(w), where w = maximum depth of the tree
- Robust to noise
- Irrelevant attributes can cause problems (use feature selection)
- Data fragmentation problem (leaf nodes having very few records)
- Tree pruning has a greater impact on the final tree than the choice of impurity measure

Page 76: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Decision Boundary

y < 0.33?

: 0 : 3

: 4 : 0

y < 0.47?

: 4 : 0

: 0 : 4

x < 0.43?

Yes

Yes

No

No Yes No

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 10

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

x

y

• Border line between two neighboring regions of different classes is known as decision boundary

• Decision boundary is parallel to axes because test condition involves a single attribute at-a-time

Page 77: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Oblique Decision Trees

x + y < 1

Class = + Class =

• Test condition may involve multiple attributes

• More expressive representation

• Finding optimal test condition is computationally expensive

Page 78: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Tree Replication

[Figure: a tree with root P and internal nodes Q, R, S in which the same subtree (rooted at S) appears in multiple branches.]

• The same subtree appears in multiple branches
• Is the split using P redundant?
• Remove P in post-pruning

Page 79: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Metrics for Performance Evaluation

Focus on the predictive capability of a model, rather than on how long it takes to classify or build models, scalability, etc.

Confusion Matrix:

TP: predicted to be in YES, and is actually in it FP: predicted to be in YES, but is not actually in it TN: predicted not to be in YES, and is not actually in it FN: predicted not to be in YES, but is actually in it

Confusion matrix (counts):

                    PREDICTED Class=Yes      PREDICTED Class=No
ACTUAL Class=Yes    a (TP, true positive)    b (FN, false negative)
ACTUAL Class=No     c (FP, false positive)   d (TN, true negative)

Page 80: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Metrics for Performance Evaluation…

Most widely-used metric:

                    PREDICTED Class=Yes   PREDICTED Class=No
ACTUAL Class=Yes    a (TP)                b (FN)
ACTUAL Class=No     c (FP)                d (TN)

Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)

Page 81: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Limitation of Accuracy

Consider a 2-class problem:
- Number of Class 0 examples = 9990
- Number of Class 1 examples = 10

If the model predicts everything to be Class 0, its accuracy is 9990/10000 = 99.9%. Accuracy is misleading here because the model does not detect any Class 1 example.

Page 82: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Cost Matrix

Cost matrix C(i|j) – the cost of misclassifying a class j example as class i:

                    PREDICTED Class=Yes   PREDICTED Class=No
ACTUAL Class=Yes    C(Yes|Yes)            C(No|Yes)
ACTUAL Class=No     C(Yes|No)             C(No|No)

Page 83: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Computing the Cost of Classification

Cost matrix C(i|j):
                 PREDICTED +   PREDICTED –
ACTUAL +         –1            100
ACTUAL –         1             0

Model M1 confusion matrix:
                 PREDICTED +   PREDICTED –
ACTUAL +         150           40
ACTUAL –         60            250
Accuracy = 80%, Cost = 3910

Model M2 confusion matrix:
                 PREDICTED +   PREDICTED –
ACTUAL +         250           45
ACTUAL –         5             200
Accuracy = 90%, Cost = 4255
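
The two cost figures can be reproduced directly from the matrices above (a sketch; matrices are indexed [actual][predicted] with class order +, –):

    def total_cost(confusion, cost):
        return sum(confusion[i][j] * cost[i][j] for i in range(2) for j in range(2))

    cost_matrix = [[-1, 100],   # actual +: predicted + costs -1, predicted - costs 100
                   [ 1,   0]]   # actual -: predicted + costs  1, predicted - costs   0
    m1 = [[150, 40], [60, 250]]
    m2 = [[250, 45], [5, 200]]
    print(total_cost(m1, cost_matrix), total_cost(m2, cost_matrix))   # -> 3910 4255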

Page 84: Decision Tree Classification Prof. Navneet Goyal BITS, Pilani BITS C464 – Machine Learning

Cost-Sensitive Measures

Precision (p) = a / (a + c)
Recall (r) = a / (a + b)
F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c)

- Precision is biased towards C(Yes|Yes) & C(Yes|No)
- Recall is biased towards C(Yes|Yes) & C(No|Yes)
- F-measure is biased towards all except C(No|No)

Weighted Accuracy = (w1·a + w4·d) / (w1·a + w2·b + w3·c + w4·d)
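
A short sketch of these measures in terms of the confusion-matrix counts a (TP), b (FN), c (FP); the numbers below use Model M1 from the cost example, taking "+" as the positive class (an illustrative choice, not from the slides):

    def precision(a, c):
        return a / (a + c)

    def recall(a, b):
        return a / (a + b)

    def f_measure(a, b, c):
        p, r = precision(a, c), recall(a, b)
        return 2 * r * p / (r + p)        # equivalently 2a / (2a + b + c)

    a, b, c = 150, 40, 60
    print(round(precision(a, c), 3), round(recall(a, b), 3), round(f_measure(a, b, c), 3))
    # -> 0.714 0.789 0.75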