Text Classification: Naïve Bayes Algorithm

Reference: Introduction to Information Retrieval by C. Manning, P. Raghavan, H. Schütze




Document Classification

Classes (with subclasses): (AI): ML, Planning; (Programming): Semantics, Garb.Coll.; (HCI): Multimedia, GUI

Training data (characteristic terms per subclass):
– ML: learning, intelligence, algorithm, reinforcement, network, ...
– Planning: planning, temporal, reasoning, plan, language, ...
– Semantics: programming, semantics, language, proof, ...
– Garb.Coll.: garbage, collection, memory, optimization, region, ...

Test data: "planning language proof intelligence"

(Note: in real life there is often a hierarchy, not present in the above problem statement; and also, you get papers on "ML approaches to Garb. Coll.")


Categorization/Classification

• Given:
  – A description of an instance, d ∈ X
    • X is the instance language or instance space.
    • Issue: how to represent text documents. Usually some type of high-dimensional space.
  – A fixed set of classes: C = {c1, c2, …, cJ}
• Determine:
  – The category of d: γ(d) ∈ C, where γ(d) is a classification function whose domain is X and whose range is C.
• We want to know how to build classification functions ("classifiers").


Supervised Classification

• Given:
  – A description of an instance, d ∈ X
    • X is the instance language or instance space.
  – A fixed set of classes: C = {c1, c2, …, cJ}
  – A training set D of labeled documents, with each labeled document ⟨d, c⟩ ∈ X × C
• Determine:
  – A learning method or algorithm which will enable us to learn a classifier γ: X → C
  – For a test document d, we assign it the class γ(d) ∈ C


More Text Classification Examples

Many search engine functionalities use classification. Assigning labels to documents or web pages:
• Labels are most often topics
  – "finance," "sports," "news"
• Labels may be genres
  – "editorials," "movie-reviews," "news"
• Labels may be opinion on a person/product
  – "like," "hate," "neutral"
• Labels may be domain-specific
  – "interesting-to-me" : "not-interesting-to-me"
  – "contains adult language" : "doesn't"
  – language identification: English, French, Chinese, …
  – search vertical: about Linux versus not
  – "link spam" : "not link spam"


Classification Methods (1)

• Manual classification
  – Used by the original Yahoo! Directory
  – Looksmart, about.com, ODP, PubMed
  – Very accurate when the job is done by experts
  – Consistent when the problem size and the team are small
  – Difficult and expensive to scale
    • Means we need automatic classification methods for big problems


Classification Methods (2)

• Automatic document classification
  – Hand-coded rule-based systems
    • One technique used by the CS dept's spam filter, Reuters, CIA, etc.
    • It's what Google Alerts is doing
    • Widely deployed in government and enterprise
    • Companies provide "IDE" for writing such rules
    • E.g., assign category if document contains a given Boolean combination of words
• Standing queries: commercial systems have complex query languages (everything in IR query languages + score accumulators)
• Accuracy is often very high if a rule has been carefully refined over time by a subject expert
• Building and maintaining these rules is expensive


A Verity topic: a complex classification rule

• Note:
  – maintenance issues (author, etc.)
  – hand-weighting of terms

[Verity was bought by Autonomy for $500m.]


Classification Methods (3)

• Supervised learning of a document-label assignment function
  – Many systems partly rely on machine learning (Autonomy, Microsoft, Enkata, Google News, …)
    • k-Nearest Neighbors (simple, powerful)
    • Naive Bayes (simple, common method)
    • Support-vector machines (new, more powerful)
    • … plus many other methods
  – No free lunch: requires hand-classified training data
  – But data can be built up (and refined) by amateurs
• Many commercial systems use a mixture of methods
• Bayesian text classification is widely employed for spam filtering


Recall a few probability basics

• For events a and b, the chain rule:
$$p(a, b) = p(a \mid b)\,p(b) = p(b \mid a)\,p(a)$$

• Bayes' Rule:
$$p(a \mid b) = \frac{p(b \mid a)\,p(a)}{p(b)}$$
where $p(a \mid b)$ is the posterior and $p(a)$ is the prior, and by total probability
$$p(b) = \sum_{x \in \{a,\,\bar a\}} p(b \mid x)\,p(x) = p(b \mid a)\,p(a) + p(b \mid \bar a)\,p(\bar a)$$

• Odds:
$$O(a) = \frac{p(a)}{p(\bar a)} = \frac{p(a)}{1 - p(a)}$$


Probabilistic Methods

• Learning and classification methods based on probability theory.

• Bayes theorem plays a critical role in probabilistic learning and classification.

• Builds a generative model that approximates how data is produced

• Uses prior probability of each category given no information about an item.

• Categorization produces a posterior probability distribution over the possible categories given a description of an item.



Bayes’ Rule for text classification

• For a document d and a class c

$$P(c, d) = P(c \mid d)\,P(d) = P(d \mid c)\,P(c)$$
$$P(c \mid d) = \frac{P(d \mid c)\,P(c)}{P(d)}$$


Naive Bayes Classifiers

Task: Classify a new instance $d$ based on a tuple of attribute values $d = \langle x_1, x_2, \ldots, x_n \rangle$ into one of the classes $c_j \in C$:

$$c_{MAP} = \underset{c_j \in C}{\operatorname{argmax}}\; P(c_j \mid x_1, x_2, \ldots, x_n)
= \underset{c_j \in C}{\operatorname{argmax}}\; \frac{P(x_1, x_2, \ldots, x_n \mid c_j)\,P(c_j)}{P(x_1, x_2, \ldots, x_n)}
= \underset{c_j \in C}{\operatorname{argmax}}\; P(x_1, x_2, \ldots, x_n \mid c_j)\,P(c_j)$$

MAP is "maximum a posteriori" = the most likely class.


Naive Bayes Classifier: Naive Bayes Assumption

• $P(c_j)$
  – Can be estimated from the frequency of classes in the training examples.
• $P(x_1, x_2, \ldots, x_n \mid c_j)$
  – $O(|X|^n \cdot |C|)$ parameters
  – Could only be estimated if a very, very large number of training examples was available.

Naive Bayes Conditional Independence Assumption:
• Assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities $P(x_i \mid c_j)$.


The Naive Bayes Classifier

[Graphical model: class variable Flu with feature nodes X1 = fever, X2 = sinus, X3 = cough, X4 = runny nose, X5 = muscle-ache]

• Conditional Independence Assumption: features detect term presence and are independent of each other given the class:
$$P(X_1, \ldots, X_5 \mid C) = P(X_1 \mid C) \cdot P(X_2 \mid C) \cdots P(X_5 \mid C)$$


First Naive Bayes Model

• Model 1: Multivariate Bernoulli
  – One feature $X_w$ for each word in the dictionary
  – $X_w = \text{true}$ if $w$ appears in $d$; otherwise $X_w = \text{false}$
  – Naive Bayes assumption:
    • Given the document's class, appearance of one word in the document tells us nothing about chances that another word appears
• Model learning: $\hat P(X_w = \text{true} \mid c_j)$ = fraction of documents of class $c_j$ in which word $w$ appears


Multivariate Bernoulli Model: Learning the Model

[Graphical model: class variable C with feature nodes X1, ..., X6]

• First attempt: maximum likelihood estimates; simply use the frequencies in the data:
$$\hat P(c_j) = \frac{N(C = c_j)}{N}$$
$$\hat P(X_i = t \mid c_j) = \frac{N(X_i = t, C = c_j)}{N(C = c_j)}$$


Problem with Maximum Likelihood

• What if we have seen no training documents with the word muscle-ache and classified in the topic Flu?
$$\hat P(X_5 = t \mid C = \text{Flu}) = \frac{N(X_5 = t, C = \text{Flu})}{N(C = \text{Flu})} = 0$$
• Zero probabilities cannot be conditioned away, no matter the other evidence!
$$c_{MAP} = \underset{c}{\operatorname{argmax}}\; \hat P(c) \prod_i \hat P(x_i \mid c)$$


Smoothing to Avoid Overfitting

Add-one (Laplace) smoothing:
$$\hat P(X_i = t \mid c_j) = \frac{N(X_i = t, C = c_j) + 1}{N(C = c_j) + k}$$
where k is the number of values of $X_i$ (k = 2 in this case, since each Bernoulli feature is either true or false).
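A quick worked example with made-up counts: suppose 10 training documents are labeled Flu and none of them contains muscle-ache. The smoothed estimate is
$$\hat P(X_5 = t \mid \text{Flu}) = \frac{0 + 1}{10 + 2} = \frac{1}{12} \approx 0.083,$$
so the unseen feature no longer zeroes out the whole product.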


Bernoulli Naive Bayes Algorithm: Learning (Training)

TRAINBERNOULLINB(C, D)
  V ← EXTRACTVOCABULARY(D)
  N ← COUNTDOCS(D)
  for each c ∈ C:
    N_c ← COUNTDOCSINCLASS(D, c)
    prior[c] ← N_c / N
    for each t ∈ V:
      N_ct ← COUNTDOCSINCLASSCONTAININGTERM(D, c, t)
      condprob[t][c] ← (N_ct + 1) / (N_c + 2)
  return V, prior, condprob


Bernoulli Naive Bayes Algorithm: Classifying (Testing)

APPLYBERNOULLINB(C, V, prior, condprob, d)
  V_d ← EXTRACTTERMSFROMDOC(V, d)
  for each c ∈ C:
    score[c] ← log prior[c]
    for each t ∈ V:
      if t ∈ V_d:
        score[c] += log condprob[t][c]
      else:
        score[c] += log(1 − condprob[t][c])
  return argmax_{c ∈ C} score[c]
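A minimal runnable Python sketch of these two procedures (the function names, the (tokens, label) document representation, and the inlined counting helpers are my own, not from the slides; it assumes every class occurs at least once in the training data):

import math
from collections import defaultdict

def train_bernoulli_nb(classes, docs):
    # docs: list of (tokens, label) pairs. Returns the vocabulary, class
    # priors, and add-one-smoothed estimates of P(X_t = true | c).
    vocab = {t for tokens, _ in docs for t in tokens}
    n = len(docs)
    prior, condprob = {}, defaultdict(dict)
    for c in classes:
        class_docs = [set(tokens) for tokens, label in docs if label == c]
        n_c = len(class_docs)
        prior[c] = n_c / n
        for t in vocab:
            n_ct = sum(1 for d in class_docs if t in d)  # docs in c containing t
            condprob[t][c] = (n_ct + 1) / (n_c + 2)      # Laplace smoothing, k = 2
    return vocab, prior, condprob

def apply_bernoulli_nb(classes, vocab, prior, condprob, tokens):
    # Scores every class; note the Bernoulli model also charges for
    # absent vocabulary terms via log(1 - P(t | c)).
    present = set(tokens) & vocab
    scores = {}
    for c in classes:
        score = math.log(prior[c])
        for t in vocab:  # iterates over the whole vocabulary, as in the pseudocode
            p = condprob[t][c]
            score += math.log(p) if t in present else math.log(1 - p)
        scores[c] = score
    return max(scores, key=scores.get)

Because the smoothing keeps every condprob strictly between 0 and 1, both log terms are always defined.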


Second Model

• Model 2: Multinomial = class-conditional unigram
  – One feature $X_i$ for each word position in the document
    • The feature's values are all words in the dictionary
  – The value of $X_i$ is the word in position $i$
  – Naive Bayes assumption:
    • Given the document's class, the word in one position in the document tells us nothing about words in other positions


Multinomial Naïve Bayes Model

$\hat P(X_i = w \mid c_j)$ = fraction of times word $w$ appears among all words in documents of class $c_j$

• Can create a mega-document for class $c_j$ by concatenating all documents in this class
• Use the frequency of $w$ in the mega-document


Using Multinomial Naive Bayes Classifiers to Classify Text: Basic Method

Attributes are text positions, values are words:

$$c_{NB} = \underset{c_j \in C}{\operatorname{argmax}}\; P(c_j) \prod_i P(x_i \mid c_j)
= \underset{c_j \in C}{\operatorname{argmax}}\; P(c_j)\, P(x_1 = \text{"our"} \mid c_j) \cdots P(x_n = \text{"text"} \mid c_j)$$

Still too many possibilities.


Using Multinomial Naive Bayes Classifiers to Classify Text: Basic Method

• Assume that classification is independent of the positions of the words: word appearance does not depend on position,
$$P(X_i = w \mid c) = P(X_j = w \mid c)$$
for all positions $i, j$, words $w$, and classes $c$
• Use the same parameters for each position
• Just have one multinomial feature predicting all words
• Result is the bag-of-words model


Multinomial Naive Bayes: Learning Approach

• From the training corpus, extract Vocabulary
• Calculate the required $P(c_j)$ and $P(x_k \mid c_j)$ terms:

For each $c_j$ in $C$:
  $docs_j$ ← the subset of documents for which the target class is $c_j$
  $$P(c_j) = \frac{|docs_j|}{|\text{total \# documents}|}$$
  $Text_j$ ← a single document containing all of $docs_j$ concatenated
  $n$ ← total number of words in $Text_j$
  For each word $x_k$ in Vocabulary:
    $n_k$ ← number of occurrences of $x_k$ in $Text_j$
    $$P(x_k \mid c_j) = \frac{n_k + 1}{n + |\text{Vocabulary}|}$$


Multinomial Naive Bayes: Classifying (Testing) Approach

• positions ← all word positions in the current document which contain tokens found in Vocabulary
• Return $c_{NB}$, where
$$c_{NB} = \underset{c_j \in C}{\operatorname{argmax}}\; P(c_j) \prod_{i \in \text{positions}} P(x_i \mid c_j)$$


Multinomial Naive Bayes: Example

              docID   words in document                      in c = China?
Training set  1       Chinese Beijing Chinese                yes
              2       Chinese Chinese Shanghai               yes
              3       Chinese Macao                          yes
              4       Tokyo Japan Chinese                    no
Test set      5       Chinese Chinese Chinese Tokyo Japan    ?

Priors:
$$P(c) = \frac{3}{4}, \qquad P(\bar c) = \frac{1}{4}$$

Conditional probabilities with add-one smoothing (class $c$ has 8 tokens, class $\bar c$ has 3 tokens, and the vocabulary has 6 terms):
$$P(\text{Chinese} \mid c) = \frac{5 + 1}{8 + 6} = \frac{6}{14} = \frac{3}{7}$$
$$P(\text{Tokyo} \mid c) = P(\text{Japan} \mid c) = \frac{0 + 1}{8 + 6} = \frac{1}{14}$$
$$P(\text{Chinese} \mid \bar c) = \frac{1 + 1}{3 + 6} = \frac{2}{9}, \qquad P(\text{Tokyo} \mid \bar c) = P(\text{Japan} \mid \bar c) = \frac{1 + 1}{3 + 6} = \frac{2}{9}$$


Multinomial Naive Bayes: Example (continued)

Using the estimates above on test document $d_5$:
$$P(c \mid d_5) \propto \frac{3}{4} \cdot \left(\frac{3}{7}\right)^3 \cdot \frac{1}{14} \cdot \frac{1}{14} \approx 0.0003$$
$$P(\bar c \mid d_5) \propto \frac{1}{4} \cdot \left(\frac{2}{9}\right)^3 \cdot \frac{2}{9} \cdot \frac{2}{9} \approx 0.0001$$

The classifier assigns the test document to c = China.
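The arithmetic is easy to check mechanically; a small sketch using Python's exact fractions (variable names are mine):

from fractions import Fraction as F

# Smoothed estimates from the training table above
p_c, p_not_c = F(3, 4), F(1, 4)
p_chinese_c, p_tokyo_c = F(6, 14), F(1, 14)   # P(Japan|c) equals P(Tokyo|c)
p_chinese_nc = p_tokyo_nc = F(2, 9)           # likewise for the not-China class

# Test doc d5 = Chinese Chinese Chinese Tokyo Japan
score_c = p_c * p_chinese_c**3 * p_tokyo_c * p_tokyo_c
score_nc = p_not_c * p_chinese_nc**3 * p_tokyo_nc * p_tokyo_nc

print(float(score_c), float(score_nc))                   # ~0.000301 vs ~0.000135
print("China" if score_c > score_nc else "not China")    # China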


Multinomial Naive Bayes Algorithm: Learning (Training)

TRAINMULTINOMIALNB(C, D)
  V ← EXTRACTVOCABULARY(D)
  N ← COUNTDOCS(D)
  for each c ∈ C:
    N_c ← COUNTDOCSINCLASS(D, c)
    prior[c] ← N_c / N
    text_c ← CONCATENATETEXTOFALLDOCSINCLASS(D, c)
    for each t ∈ V:
      T_ct ← COUNTTOKENSOFTERM(text_c, t)
    for each t ∈ V:
      condprob[t][c] ← (T_ct + 1) / Σ_t′ (T_ct′ + 1)
  return V, prior, condprob


Multinomial Naive Bayes Algorithm: Classifying (Testing)

APPLYMULTINOMIALNB(C, V, prior, condprob, d)
  W ← EXTRACTTOKENSFROMDOC(V, d)
  for each c ∈ C:
    score[c] ← log prior[c]
    for each t ∈ W:
      score[c] += log condprob[t][c]
  return argmax_{c ∈ C} score[c]
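A matching Python sketch of the multinomial procedures (same caveats as before: the naming and the (tokens, label) layout are mine):

import math
from collections import Counter, defaultdict

def train_multinomial_nb(classes, docs):
    # docs: list of (tokens, label) pairs. Returns the vocabulary, class
    # priors, and add-one-smoothed estimates of P(t | c).
    vocab = {t for tokens, _ in docs for t in tokens}
    n = len(docs)
    prior, condprob = {}, defaultdict(dict)
    for c in classes:
        class_docs = [tokens for tokens, label in docs if label == c]
        prior[c] = len(class_docs) / n
        counts = Counter(t for tokens in class_docs for t in tokens)  # mega-document counts
        denom = sum(counts.values()) + len(vocab)                     # Σ_t′ (T_ct′ + 1)
        for t in vocab:
            condprob[t][c] = (counts[t] + 1) / denom
    return vocab, prior, condprob

def apply_multinomial_nb(classes, vocab, prior, condprob, tokens):
    # Sum log-probabilities over the document's tokens that are in the vocabulary.
    scores = {}
    for c in classes:
        score = math.log(prior[c])
        for t in tokens:
            if t in vocab:
                score += math.log(condprob[t][c])
        scores[c] = score
    return max(scores, key=scores.get)

Trained on the four-document China/not-China set from the example above, apply_multinomial_nb should reproduce the "China" decision for document 5.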


Underflow Prevention: using logs

• Multiplying lots of probabilities, which are between 0 and 1 by definition, can result in floating‐point underflow.

• Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities.

• Class with highest final un‐normalized log probability score is still the most probable.

• Note that model is now just max of sum of weights.

$$c_{NB} = \underset{c_j \in C}{\operatorname{argmax}} \left[\, \log P(c_j) + \sum_{i \in \text{positions}} \log P(x_i \mid c_j) \,\right]$$
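A two-line illustration of why the log trick matters (the probabilities are arbitrary made-up values):

import math

p = [1e-5] * 100                       # 100 small per-token probabilities
prod = 1.0
for x in p:
    prod *= x                          # underflows to 0.0 (the true value is 1e-500)
log_sum = sum(math.log(x) for x in p)  # stays representable: about -1151.29
print(prod, log_sum)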


Naive Bayes Classifier

• Simple interpretation: Each conditional parameter log P(xi|cj) is a weight that indicates how good an indicator xi is for cj.

• The prior log P(cj) is a weight that indicates the relative frequency of cj. 

• The sum is then a measure of how much evidence there is for the document being in the class.

• We select the class with the most evidence for it

$$c_{NB} = \underset{c_j \in C}{\operatorname{argmax}} \left[\, \log P(c_j) + \sum_{i \in \text{positions}} \log P(x_i \mid c_j) \,\right]$$


Feature Selection: Why?

• Text collections have a large number of features
  – 10,000 – 1,000,000 unique words … and more
• May make using a particular classifier feasible
  – Some classifiers can't deal with hundreds of thousands of features
• Reduces training time
  – Training time for some methods is quadratic or worse in the number of features
• Can improve generalization (performance)
  – Eliminates noise features
  – Avoids overfitting


Feature Selection: How?

• Two ideas:
  – Hypothesis-testing statistics:
    • Are we confident that the value of one categorical variable is associated with the value of another?
    • Chi-square test (χ²)
  – Information theory:
    • How much information does the value of one categorical variable give you about the value of another?
    • Mutual information (MI)
• They're similar, but χ² measures confidence in the association (based on available statistics), while MI measures the extent of the association (assuming perfect knowledge of probabilities)


Feature Selection

• For each category we build a list of the k most discriminating terms.
• For example (on 20 Newsgroups):
  – sci.electronics: circuit, voltage, amp, ground, copy, battery, electronics, cooling, …
  – rec.autos: car, cars, engine, ford, dealer, mustang, oil, collision, autos, tires, toyota, …
• Greedy: does not account for correlations between terms


χ² statistic (CHI)

• χ² is interested in $(f_o - f_e)^2 / f_e$ summed over all table entries: is the observed number what you'd expect given the marginals?

Observed counts $f_o$, with expected counts $f_e$ in parentheses, for term = jaguar and class = auto:

                   contains jaguar   does not contain jaguar   total
Class = auto       2 (0.25)          500 (502)                 502
Class ≠ auto       3 (4.75)          9500 (9498)               9503
total              5                 10000                     10005

$$\chi^2(\text{jaguar}, \text{auto}) = \sum \frac{(f_o - f_e)^2}{f_e} = \frac{(2 - 0.25)^2}{0.25} + \frac{(3 - 4.75)^2}{4.75} + \frac{(500 - 502)^2}{502} + \frac{(9500 - 9498)^2}{9498} \approx 12.9 \quad (p < 0.001)$$

• The null hypothesis (independence of term and class) is rejected with confidence .999, since 12.9 > 10.83 (the critical value for .999 confidence).


χ² statistic (CHI)

There is a simpler formula for the 2×2 χ²:

$$\chi^2(t, c) = \frac{N\,(AD - CB)^2}{(A + C)(B + D)(A + B)(C + D)}$$

where A = #(t, c), B = #(t, ¬c), C = #(¬t, c), D = #(¬t, ¬c), and N = A + B + C + D.

Value for complete independence of term and category?
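A small sketch checking both routes to the statistic on the jaguar/auto table from the previous slide (function names are mine):

def chi2_full(A, B, C, D):
    # Sum of (O - E)^2 / E over the four cells of the 2x2 term/class table,
    # with A = #(t,c), B = #(t,not c), C = #(not t,c), D = #(not t,not c).
    N = A + B + C + D
    cells = [(A, A + C, A + B), (B, B + D, A + B),
             (C, A + C, C + D), (D, B + D, C + D)]
    return sum((o - r * k / N) ** 2 / (r * k / N) for o, r, k in cells)

def chi2_shortcut(A, B, C, D):
    # Equivalent closed form for 2x2 tables.
    N = A + B + C + D
    return N * (A * D - C * B) ** 2 / ((A + C) * (B + D) * (A + B) * (C + D))

print(chi2_full(2, 3, 500, 9500))      # ~12.85
print(chi2_shortcut(2, 3, 500, 9500))  # same value; the slide's 12.9 uses rounded expected counts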


Feature Selection via Mutual Information

• In the training set, choose k words which best discriminate (give most information on) the categories.
• The mutual information between a word w and a class c is:

$$I(w, c) = \sum_{e_w \in \{0,1\}} \sum_{e_c \in \{0,1\}} p(e_w, e_c) \log \frac{p(e_w, e_c)}{p(e_w)\,p(e_c)}$$

where $e_w = 1$ when the document contains the word $w$ (0 otherwise); $e_c = 1$ when the document is in class $c$ (0 otherwise).
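In the same spirit, I(w, c) can be computed from 2×2 document counts; a sketch (the function name and the reuse of the jaguar/auto counts are mine):

import math

def mutual_information(A, B, C, D):
    # I(w, c) from document counts: A = #(w,c), B = #(w,not c),
    # C = #(not w,c), D = #(not w,not c). Natural log; 0*log 0 taken as 0.
    N = A + B + C + D
    mi = 0.0
    for joint, n_w, n_c in [(A, A + B, A + C), (B, A + B, B + D),
                            (C, C + D, A + C), (D, C + D, B + D)]:
        if joint:
            mi += (joint / N) * math.log((joint / N) / ((n_w / N) * (n_c / N)))
    return mi

print(mutual_information(2, 3, 500, 9500))  # a small positive value: weak association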


Feature Selection

• Mutual information
  – Clear information-theoretic interpretation
  – May select very slightly informative frequent terms that are not very useful for classification
• Chi-square
  – Statistical foundation
  – May select rare uninformative terms
• Just use the commonest terms?
  – No particular foundation
  – In practice, this is often 90% as good


Feature Selection for NB

• In general, feature selection is necessary for multivariate Bernoulli NB.
• Otherwise you suffer from noise, multi-counting.
• "Feature selection" really means something different for multinomial NB: it means dictionary truncation.
  – The multinomial NB model only has 1 feature.


Naive Bayes on spam email



SpamAssassin

• Naive Bayes has found a home in spam filtering
  – Paul Graham's A Plan for Spam
  – A mutant with more mutant offspring...
  – Naive Bayes-like classifier with weird parameter estimation
  – Widely used in spam filters
• Classic Naive Bayes superior when appropriately used
  – According to David D. Lewis
• But also many other things: black hole lists, etc.
• Many email topic filters also use NB classifiers


Evaluating Categorization

• Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances).
  – It's easy to get good performance on a test set that was available to the learner during training (e.g., just memorize the test set).
  – The holdout method reserves a certain amount for testing and uses the remainder for training.
  – Usually: one third for testing, the rest for training.
• Sometimes use cross-validation (averaging results over multiple training and test splits of the overall data).


Cross-Validation

• Cross-validation avoids overlapping test sets
  – First step: the data is split into k subsets of equal size
  – Second step: each subset in turn is used for testing and the remainder for training
• This is called k-fold cross-validation
• The error estimates are averaged to yield an overall error estimate


Cross-Validation

• Split the available data set into k equal partitions, namely P1, …, Pk

Training set        Testing set   Accuracy
P2, …, Pk           P1            A1
P1, P3, …, Pk       P2            A2
⋮                   ⋮             ⋮
P1, P2, …, Pk−1     Pk            Ak
                    Average       A
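A minimal sketch of this loop in Python (train and evaluate are placeholder hooks; no ML library is assumed, and len(data) is assumed divisible by k for simplicity):

def k_fold_cross_validation(data, k, train, evaluate):
    # Split data into k equal partitions P1..Pk; each partition serves as
    # the test set once while the remaining k-1 form the training set.
    fold = len(data) // k
    accuracies = []
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]                # P_{i+1}
        training = data[:i * fold] + data[(i + 1) * fold:]  # the rest
        model = train(training)
        accuracies.append(evaluate(model, test))            # A_{i+1}
    return sum(accuracies) / k                              # average accuracy A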


Evaluation Metric

• Metrics (measures): precision, recall, F1, classification accuracy
• Classification accuracy: c/n, where n is the total number of test instances and c is the number of test instances correctly classified by the system.
  – Adequate if one class per document
  – Otherwise use the F measure for each class


Per class evaluation measures

• Given a class i, treat it as a binary classification problem.

• Recall: Fraction of docs in class i classified correctly

• Precision: Fraction of docs assigned class i that are actually about class i

• Accuracy: (1 ‐ error rate) Fraction of all docs classified correctly with respect to class i

Sec. 15.2.4



A Combined Measure: F

• The combined measure that assesses the precision/recall tradeoff is the F measure (a weighted harmonic mean):

$$F = \frac{1}{\alpha \frac{1}{P} + (1 - \alpha)\frac{1}{R}} = \frac{(\beta^2 + 1)PR}{\beta^2 P + R}$$

• People usually use the balanced F1 measure
  – i.e., with $\beta = 1$ or $\alpha = \tfrac{1}{2}$
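As a concrete sketch, P, R, and balanced F1 from binary contingency counts (tp, fp, fn = true positives, false positives, false negatives):

def precision_recall_f1(tp, fp, fn):
    # P = tp/(tp+fp), R = tp/(tp+fn), F1 = harmonic mean of P and R.
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)  # the beta = 1 case of the F measure
    return p, r, f1

# Class 1 table from the micro/macro example below: tp=10, fp=10, fn=10
print(precision_recall_f1(10, 10, 10))  # (0.5, 0.5, 0.5)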


Micro‐ vs. Macro‐Averaging

• If we have more than one class, how do we combine multiple performance measures into one quantity?

• Macroaveraging: Compute performance for each class, then average.

• Microaveraging: Collect decisions for all classes, compute contingency table, evaluate.

Sec. 15.2.4



Micro- vs. Macro-Averaging: Example

Class 1:
                   Truth: yes   Truth: no
Classifier: yes    10           10
Classifier: no     10           970

Class 2:
                   Truth: yes   Truth: no
Classifier: yes    90           10
Classifier: no     10           890

Micro-average (pooled) table:
                   Truth: yes   Truth: no
Classifier: yes    100          20
Classifier: no     20           1860

• Macroaveraged precision: (0.5 + 0.9)/2 = 0.7
• Microaveraged precision: 100/120 ≈ 0.83
• The microaveraged score is dominated by the score on common classes

Sec. 15.2.4
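A sketch reproducing both numbers from per-class (tp, fp) counts (the data layout is mine):

def macro_micro_precision(per_class):
    # per_class: list of (tp, fp) pairs, one per class.
    macro = sum(tp / (tp + fp) for tp, fp in per_class) / len(per_class)
    total_tp = sum(tp for tp, _ in per_class)
    total_fp = sum(fp for _, fp in per_class)
    micro = total_tp / (total_tp + total_fp)  # pool all decisions, then measure
    return macro, micro

print(macro_micro_precision([(10, 10), (90, 10)]))  # (0.7, 0.8333...)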


Evaluation: Classic Reuters-21578 Data Set

• Most (over)used data set
• 21,578 documents
• 9,603 training, 3,299 test articles (ModApte/Lewis split)
• 118 categories
  – An article can be in more than one category
  – Learn 118 binary category distinctions
• Average document: about 90 types, 200 tokens
• Average number of classes assigned
  – 1.24 for docs with at least one category
• Only about 10 out of 118 categories are large

Common categories (#train, #test):
• Earn (2877, 1087)
• Acquisitions (1650, 179)
• Money-fx (538, 179)
• Grain (433, 149)
• Crude (389, 189)
• Trade (369, 119)
• Interest (347, 131)
• Ship (197, 89)
• Wheat (212, 71)
• Corn (182, 56)

Sec. 15.2.4


Reuters Text Categorization Data Set (Reuters‐21578)

<REUTERS TOPICS="YES" LEWISSPLIT="TRAIN" CGISPLIT="TRAINING-SET" OLDID="12981" NEWID="798">

<DATE> 2-MAR-1987 16:51:43.42</DATE>

<TOPICS><D>livestock</D><D>hog</D></TOPICS>

<TITLE>AMERICAN PORK CONGRESS KICKS OFF TOMORROW</TITLE>

<DATELINE> CHICAGO, March 2 - </DATELINE><BODY>The American Pork Congress kicks off tomorrow, March 3, in Indianapolis with 160 of the nations pork producers from 44 member states determining industry positions on a number of issues, according to the National Pork Producers Council, NPPC.

Delegates to the three day Congress will be considering 26 resolutions concerning various issues, including the future direction of farm policy and the tax law as it applies to the agriculture sector. The delegates will also debate whether to endorse concepts of a national PRV (pseudorabies virus) control and eradication program, the NPPC said.

A large trade show, in conjunction with the congress, will feature the latest in technology in all areas of the industry, the NPPC added. Reuter

&#3;</BODY></TEXT></REUTERS>

Sec. 15.2.4




Violation of NB Assumptions

• The independence assumptions do not really hold for documents written in natural language.
  – Conditional independence
  – Positional independence



Naive Bayes Posterior Probabilities

• Classification results of naive Bayes (the class with maximum posterior probability) are usually fairly accurate.

• However, due to the inadequacy of the conditional independence assumption, the actual posterior‐probability numerical estimates are not.– Output probabilities are commonly very close to 0 or 1.

• Correct estimation ⇒ accurate prediction, but correct probability estimation is NOT necessary for accurate prediction (we just need the right ordering of probabilities)

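A tiny illustration of why the estimates drift to the extremes (the per-word likelihood ratio here is an invented number): under the independence assumption, every repeated occurrence of a correlated word multiplies in the same ratio, over-counting the evidence.

ratio = 0.6 / 0.4      # assumed per-occurrence likelihood ratio P(w | c1) / P(w | c2)
for n in [1, 5, 25]:   # n occurrences, each (wrongly) treated as independent evidence
    odds = ratio ** n  # posterior odds, assuming equal priors
    print(n, round(odds / (1 + odds), 4))  # 0.6, 0.8836, then ~1.0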


Naive Bayes is Not So Naive

• Naive Bayes won 1st and 2nd place in the KDD-CUP 97 competition, out of 16 systems
  – Goal: financial services industry direct mail response prediction model: predict if the recipient of mail will actually respond to the advertisement. 750,000 records.
• More robust to irrelevant features than many learning methods
  – Irrelevant features cancel each other without affecting results
  – Decision trees can suffer heavily from this
• More robust to concept drift (changing class definition over time)
• Very good in domains with many equally important features
  – Decision trees suffer from fragmentation in such cases, especially if little data
• A good dependable baseline for text classification (but not the best)!
• Optimal if the independence assumptions hold: Bayes Optimal Classifier
  – Never true for text, but possible in some domains
• Very fast learning and testing (basically just count the data)
• Low storage requirements