Decision Trees, Part 1 Reading: Textbook, Chapter 6


Page 1: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Decision Trees, Part 1

Reading: Textbook, Chapter 6

Page 2: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Training data:

Day   Outlook    Temp   Humidity   Wind     PlayTennis
D1    Sunny      Hot    High       Weak     No
D2    Sunny      Hot    High       Strong   No
D3    Overcast   Hot    High       Weak     Yes
D4    Rain       Mild   High       Weak     Yes
D5    Rain       Cool   Normal     Weak     Yes
D6    Rain       Cool   Normal     Strong   No
D7    Overcast   Cool   Normal     Strong   Yes
D8    Sunny      Mild   High       Weak     No
D9    Sunny      Cool   Normal     Weak     Yes
D10   Rain       Mild   Normal     Weak     Yes
D11   Sunny      Mild   Normal     Strong   Yes
D12   Overcast   Mild   High       Strong   Yes
D13   Overcast   Hot    Normal     Weak     Yes
D14   Rain       Mild   High       Strong   No

Page 3: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Decision Trees

• Target concept: “Good days to play tennis”

• Example:

<Outlook = Sunny, Temperature = Hot, Humidity = High, Wind = Strong>

Classification?

Page 4: Decision Trees, Part 1 Reading: Textbook, Chapter 6

• Would it be possible to use a “generate-and-test” strategy to find a correct decision tree?

– I.e., systematically generate all possible decision trees, in order of size, until a correct one is generated.

Page 5: Decision Trees, Part 1 Reading: Textbook, Chapter 6

• Why should we care about finding the simplest (i.e., smallest) correct decision tree?

Page 6: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Decision Tree Induction

• Goal: given a set of training examples, construct a decision tree that will classify those training examples correctly (and, hopefully, generalize)

• Original idea of decision trees developed in the 1960s by psychologists Hunt, Marin, and Stone as a model of human concept learning. (CLS = “Concept Learning System”)

• In the 1970s, AI researcher Ross Quinlan used this idea for AI concept learning:
– ID3 (“Iterative Dichotomiser 3”), 1979

Page 7: Decision Trees, Part 1 Reading: Textbook, Chapter 6

The Basic Decision Tree Learning Algorithm (ID3)

1. Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree.

Page 8: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Outlook

Page 9: Decision Trees, Part 1 Reading: Textbook, Chapter 6

The Basic Decision Tree Learning Algorithm (ID3)

1. Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree.

2. Create branches from the root node for each possible value of this attribute. Sort training examples to the appropriate value.

Page 10: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Outlook

Sunny Overcast Rain

D1, D2, D8, D9, D11

D3, D7, D12, D13

D4, D5, D6, D10, D14

Page 11: Decision Trees, Part 1 Reading: Textbook, Chapter 6

The Basic Decision Tree Learning Algorithm (ID3)

1. Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree.

2. Create branches from the root node for each possible value of this attribute. Sort training examples to the appropriate value.

3. At each descendant node, determine which attribute is, by itself, the most useful one for distinguishing the two classes for the corresponding training data. Put that attribute at that node.

Page 12: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Outlook

Sunny Overcast Rain

Humidity

Yes

Wind

Page 13: Decision Trees, Part 1 Reading: Textbook, Chapter 6

The Basic Decision Tree Learning Algorithm (ID3)

1. Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree.

2. Create branches from the root node for each possible value of this attribute. Sort training examples to the appropriate value.

3. At each descendant node, determine which attribute is, by itself, the most useful one for distinguishing the two classes for the corresponding training data. Put that attribute at that node.

4. Go to 2, but for the current node.

Note: This is greedy search with no backtracking
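The four numbered steps above amount to a greedy recursive procedure. As a rough illustration (not from the slides), here is a minimal Python sketch; it assumes examples are stored as dicts mapping attribute names to values, and the attribute-scoring function `score` (for ID3, the information-gain measure developed later in these slides) is passed in as a parameter.

```python
from collections import Counter

def id3(examples, attributes, target, score):
    """Greedy decision-tree construction (no backtracking), per steps 1-4 above.

    examples:  list of dicts mapping attribute names (and `target`) to values
    score(examples, attribute, target): how useful `attribute` is for
               separating the classes in `examples` (ID3 uses information gain)
    """
    labels = [ex[target] for ex in examples]
    # Stop: all examples in this branch agree, or no attributes remain.
    if len(set(labels)) == 1:
        return labels[0]
    if not attributes:
        return Counter(labels).most_common(1)[0][0]

    # Steps 1/3: pick the most useful attribute for these training examples.
    best = max(attributes, key=lambda a: score(examples, a, target))

    # Step 2: one branch per observed value; sort examples to their branch.
    tree = {best: {}}
    for value in set(ex[best] for ex in examples):
        subset = [ex for ex in examples if ex[best] == value]
        remaining = [a for a in attributes if a != best]
        # Step 4: repeat the process at the descendant node.
        tree[best][value] = id3(subset, remaining, target, score)
    return tree
```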

Page 15: Decision Trees, Part 1 Reading: Textbook, Chapter 6

How to determine which attribute is the best classifier for a set of training examples?

E.g., why was Outlook chosen to be the root of the tree?

Page 16: Decision Trees, Part 1 Reading: Textbook, Chapter 6

“Impurity” of a split

• Task: classify as Female or Male

• Instances: Jane, Mary, Alice, Bob, Allen, Doug

• Each instance has two binary attributes: “wears lipstick” and “has long hair”

Page 17: Decision Trees, Part 1 Reading: Textbook, Chapter 6

“Impurity” of a split

Wears lipstick (pure split):
  T: Jane, Mary, Alice
  F: Bob, Allen, Doug

Has long hair (impure split):
  T: Jane, Mary, Bob
  F: Alice, Allen, Doug

For each node of the tree we want to choose the attribute that gives the purest split.

But how do we measure the degree of impurity of a split?

Page 18: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Entropy

• Let S be a set of training examples.

p+ = proportion of positive examples.

p− = proportion of negative examples

• Entropy measures the degree of uniformity or non-uniformity in a collection.

• Roughly, it measures how predictable the collection is, based only on the distribution of + and − examples.

Entropy(S) = −(p+ log2 p+ + p− log2 p−)
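As an aside (not on the slide), the formula above is only a few lines of Python; the function name and sample values below are just for illustration.

```python
import math

def entropy(num_pos, num_neg):
    """Entropy of a collection with num_pos positive and num_neg negative examples."""
    total = num_pos + num_neg
    result = 0.0
    for count in (num_pos, num_neg):
        if count > 0:                 # treat 0 * log2(0) as 0
            p = count / total
            result -= p * math.log2(p)
    return result

print(entropy(3, 3))   # 1.0    (p+ = 0.5: maximally unpredictable)
print(entropy(6, 0))   # 0.0    (p+ = 1: perfectly pure)
print(entropy(9, 5))   # ~0.940 (e.g., 9 Yes vs. 5 No, as in the PlayTennis data)
```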

Page 19: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Entropy

• When is entropy zero?

• When is entropy maximum, and what is its value?

Page 20: Decision Trees, Part 1 Reading: Textbook, Chapter 6
Page 21: Decision Trees, Part 1 Reading: Textbook, Chapter 6

• Entropy gives minimum number of bits of information needed to encode the classification of an arbitrary member of S.

– If p+ = 1, don’t need any bits (entropy 0)

– If p+ = .5, need one bit (+ or -)

– If p+ = .8, can encode collection of {+,-} values using on average less than 1 bit per value

• Can you explain how we might do this?

Page 22: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Entropy of each branch?

Wears lipstick (pure split):
  T: Jane, Mary, Alice
  F: Bob, Allen, Doug

Has long hair (impure split):
  T: Jane, Mary, Bob
  F: Alice, Allen, Doug

Page 23: Decision Trees, Part 1 Reading: Textbook, Chapter 6

[Training data table from Page 2, repeated.]

What is the entropy of the “Play Tennis” training set?

Page 24: Decision Trees, Part 1 Reading: Textbook, Chapter 6

• Suppose you’re now given a new example. In absence of any additional information, what classification should you guess?

Page 25: Decision Trees, Part 1 Reading: Textbook, Chapter 6

What is the average entropy of the “Humidity” attribute?

Page 26: Decision Trees, Part 1 Reading: Textbook, Chapter 6
Page 27: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Information gain

Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|Sv| / |S|) Entropy(Sv),

where Sv = {instances in S with value v for attribute A}

Example: Gain (S, Humidity)
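As a sketch (not from the slides), the gain computation can be checked in Python. The data layout below (a list of dicts built from the Page 2 table) and the function names are my own choices.

```python
import math
from collections import Counter

# The PlayTennis training data from Page 2; target attribute is "PlayTennis".
ROWS = [
    ("D1", "Sunny", "Hot", "High", "Weak", "No"),
    ("D2", "Sunny", "Hot", "High", "Strong", "No"),
    ("D3", "Overcast", "Hot", "High", "Weak", "Yes"),
    ("D4", "Rain", "Mild", "High", "Weak", "Yes"),
    ("D5", "Rain", "Cool", "Normal", "Weak", "Yes"),
    ("D6", "Rain", "Cool", "Normal", "Strong", "No"),
    ("D7", "Overcast", "Cool", "Normal", "Strong", "Yes"),
    ("D8", "Sunny", "Mild", "High", "Weak", "No"),
    ("D9", "Sunny", "Cool", "Normal", "Weak", "Yes"),
    ("D10", "Rain", "Mild", "Normal", "Weak", "Yes"),
    ("D11", "Sunny", "Mild", "Normal", "Strong", "Yes"),
    ("D12", "Overcast", "Mild", "High", "Strong", "Yes"),
    ("D13", "Overcast", "Hot", "Normal", "Weak", "Yes"),
    ("D14", "Rain", "Mild", "High", "Strong", "No"),
]
COLUMNS = ("Day", "Outlook", "Temp", "Humidity", "Wind", "PlayTennis")
S = [dict(zip(COLUMNS, row)) for row in ROWS]

def entropy(examples, target="PlayTennis"):
    counts = Counter(ex[target] for ex in examples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute, target="PlayTennis"):
    total = len(examples)
    remainder = 0.0
    for value in set(ex[attribute] for ex in examples):
        subset = [ex for ex in examples if ex[attribute] == value]
        remainder += (len(subset) / total) * entropy(subset, target)  # (|Sv|/|S|) Entropy(Sv)
    return entropy(examples, target) - remainder

print(information_gain(S, "Humidity"))   # ~0.151
print(information_gain(S, "Outlook"))    # the in-class exercise on the next page
```

Plugging information_gain in as the `score` function of the id3 sketch from Page 13 (with attributes Outlook, Temp, Humidity, Wind) should reproduce the tree being built in these slides.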

Page 28: Decision Trees, Part 1 Reading: Textbook, Chapter 6

In-class exercise:

• Calculate information gain of the “Outlook” attribute.

Page 29: Decision Trees, Part 1 Reading: Textbook, Chapter 6

[Training data table from Page 2, repeated.]

Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|Sv| / |S|) Entropy(Sv), where Sv = {instances in S with value v for attribute A}

Page 30: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Operation of ID3

Page 31: Decision Trees, Part 1 Reading: Textbook, Chapter 6
Page 32: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Exercise: Do calculation to find best attribute to put at each branch

Page 33: Decision Trees, Part 1 Reading: Textbook, Chapter 6

ID3’s Inductive Bias

• Given a set of training examples, there are typically many decision trees consistent with that set.

– E.g., what would be another decision tree consistent with the example training data?

• Of all these, which one does ID3 construct?

– First acceptable tree found in greedy search

Page 34: Decision Trees, Part 1 Reading: Textbook, Chapter 6

ID3’s Inductive Bias, continued

• Algorithm does two things:

– Favors shorter trees over longer ones

– Places attributes with highest information gain closest to root.

• What would be an algorithm that explicitly constructs the shortest possible tree consistent with the training data?

Page 35: Decision Trees, Part 1 Reading: Textbook, Chapter 6

ID3’s Inductive Bias, continued

• ID3: Efficient approximation to “find shortest tree” method

• Why is this a good thing to do?

Page 36: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Overfitting

• ID3 grows each branch of the tree just deeply enough to perfectly classify the training examples.

• What if number of training examples is small?

• What if there is noise in the data?

• Both can lead to overfitting:

– First case can produce an incomplete tree.

– Second case can produce a too-complicated tree.

But...what is bad about over-complex trees?

Page 37: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Overfitting, continued

• Formal definition of overfitting:

– Given a hypothesis space H, a hypothesis h ∈ H is said to overfit the training data if there exists some alternative h’ ∈ H, such that

TrainingError(h) < TrainingError(h’),

but

TestError(h’) < TestError(h).

Page 38: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Overfitting, continued

[Figure: accuracy vs. size of tree (number of nodes) on a medical data set, plotted separately for the training data and the test data.]

Page 39: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Overfitting, continued

• How to avoid overfitting:

– Stop growing the tree early, before it reaches point of perfect classification of training data.

– Allow tree to overfit the data, but then prune the tree.

Page 40: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Pruning the Tree

• Pruning:

– Remove subtree below a decision node.

– Create a leaf node there, and assign most common classification of the training examples affiliated with that node.

– Helps get rid of nodes due to overfitting.

Page 41: Decision Trees, Part 1 Reading: Textbook, Chapter 6

• Reduced-error pruning:

– Consider each decision node as a candidate for pruning.

– For each node, try pruning it, and measure the accuracy of the pruned tree over the pruning set.

– Select single-node pruning that yields best increase in accuracy over pruning set.

– If no increase, select one of the single-node prunings that does not decrease accuracy.

– If all prunings decrease accuracy, then don’t prune. Otherwise, continue this process until further pruning is harmful.
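Not from the slides: a minimal sketch of that loop, assuming a hypothetical Node representation (an internal node stores the attribute it tests, its children keyed by attribute value, and the majority class of the training examples that reached it). The course's actual implementation may differ.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Internal node: `attribute` is tested and `children` maps value -> subtree.
    # A leaf (or a pruned node) simply predicts `majority_label`.
    attribute: str = None
    children: dict = field(default_factory=dict)
    majority_label: str = None     # most common class of training examples at this node
    pruned: bool = False

def classify(node, example):
    if node.pruned or not node.children:
        return node.majority_label
    child = node.children.get(example[node.attribute])
    return classify(child, example) if child else node.majority_label

def accuracy(root, examples, target):
    return sum(classify(root, ex) == ex[target] for ex in examples) / len(examples)

def prunable_nodes(node):
    """Yield every decision (internal, not yet pruned) node in the tree."""
    if node.children and not node.pruned:
        yield node
        for child in node.children.values():
            yield from prunable_nodes(child)

def reduced_error_prune(root, pruning_set, target):
    """Greedily prune single nodes while accuracy on the pruning set does not drop."""
    while True:
        base = accuracy(root, pruning_set, target)
        best_node, best_acc = None, base
        for node in list(prunable_nodes(root)):
            node.pruned = True                      # tentatively replace with a leaf
            acc = accuracy(root, pruning_set, target)
            node.pruned = False
            if acc >= best_acc:                     # best increase wins; ties allow
                best_node, best_acc = node, acc     # "no decrease" prunings
        if best_node is None:                       # every pruning hurts accuracy: stop
            return root
        best_node.pruned = True                     # commit the best single-node pruning
```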

Page 42: Decision Trees, Part 1 Reading: Textbook, Chapter 6

• Problem:

– Now need a separate “test” set to test the pruned tree.

(Always need a final test set not used to do any learning!)

– So split the data into three parts:

• Training data
• Test data used for pruning
• Final test data

Page 43: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Continuous valued attributes

• Original decision trees: Two discrete aspects:

– Target class (e.g., “PlayTennis”) has discrete values

– Attributes (e.g., “Humidity”) have discrete values

• How to incorporate continuous-valued decision attributes?

– E.g., Humidity ∈ [0, 100]

Page 44: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Continuous valued attributes, continued

• Create new attributes, e.g., Humidity_c: true if Humidity < c, false otherwise.

• How to choose c? – Find c that maximizes information gain.

Page 45: Decision Trees, Part 1 Reading: Textbook, Chapter 6

Continuous valued attributes, continued

• Sort examples according to continuous value of Humidity

Humidity: 10 25 55 60 72 85

PlayTennis: Yes No Yes Yes No No

• Find adjacent examples that differ in target classification.

• Choose candidate c as the midpoint of the corresponding interval.

– Can show that the optimal c must always lie at such a boundary.

• Then calculate information gain for each candidate c.

• Choose best one.

• Put new attribute Humidityc in pool of attributes.
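Not from the slides: a short sketch of this procedure applied to the six (Humidity, PlayTennis) pairs above; the helper names are made up.

```python
import math

# (Humidity, PlayTennis) pairs from the slide, already sorted by Humidity.
data = [(10, "Yes"), (25, "No"), (55, "Yes"), (60, "Yes"), (72, "No"), (85, "No")]

def entropy(labels):
    total = len(labels)
    return -sum((labels.count(l) / total) * math.log2(labels.count(l) / total)
                for l in set(labels))

def gain_for_threshold(pairs, c):
    """Information gain of the Boolean attribute 'Humidity < c'."""
    below = [label for value, label in pairs if value < c]
    above = [label for value, label in pairs if value >= c]
    labels = [label for _, label in pairs]
    remainder = (len(below) / len(pairs)) * entropy(below) \
              + (len(above) / len(pairs)) * entropy(above)
    return entropy(labels) - remainder

# Candidate thresholds: midpoints between adjacent examples with different classes.
candidates = [(v1 + v2) / 2
              for (v1, l1), (v2, l2) in zip(data, data[1:]) if l1 != l2]

best_c = max(candidates, key=lambda c: gain_for_threshold(data, c))
print(candidates)   # [17.5, 40.0, 66.0]
print(best_c)       # 66.0 -- the candidate with the highest information gain here
```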

Page 46: Decision Trees, Part 1 Reading: Textbook, Chapter 6

• Note that the version of ID3 we will use in Homework 3 allows only continuous-valued attributes. All nodes in the decision tree are of the form:

[Diagram: a node testing attribute Ai, with one branch taken when Ai < Threshold and the other branch taken otherwise.]

Page 47: Decision Trees, Part 1 Reading: Textbook, Chapter 6

K-Fold Cross Validation

• Used to get a better estimate of generalization accuracy when data is limited.

• K-fold cross-validation:

– Suppose there are m examples in original training set.

– Partition these m examples into k disjoint subsets, each of size m/k. (Keep the ratio between positive and negative examples approximately constant.)

– Run the cross-validation procedure k times, each time using one of the subsets as the validation set and the remaining subsets, combined, as the training set.

– At the end, average the k validation accuracies.
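As an illustration (not part of the slides), here is a minimal plain-Python sketch of the procedure; `train` and `evaluate` are hypothetical callables standing in for whatever learner (e.g., an ID3 implementation) is being measured.

```python
import random

def k_fold_cross_validation(examples, k, train, evaluate, seed=0):
    """Average validation accuracy over k folds.

    train(training_examples) returns a model (e.g., a decision tree);
    evaluate(model, validation_examples) returns its accuracy.
    """
    shuffled = examples[:]                       # don't disturb the caller's list
    random.Random(seed).shuffle(shuffled)
    # (To keep the positive/negative ratio roughly constant, as the slide asks,
    #  positives and negatives could be partitioned separately; omitted here.)

    # Partition the m examples into k disjoint, roughly equal-size subsets.
    folds = [shuffled[i::k] for i in range(k)]

    accuracies = []
    for i in range(k):
        validation = folds[i]
        training = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        model = train(training)
        accuracies.append(evaluate(model, validation))

    # At the end, average the k validation accuracies.
    return sum(accuracies) / k
```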