
Page 1: Information Extraction and Named Entity Recognition

Information Extraction and Named Entity Recognition

Introducing the tasks:

Getting simple structured information out of text

Page 2: Information Extraction and Named Entity Recognition

Information Extraction

• Information extraction (IE) systems:
  • Find and understand limited relevant parts of texts
  • Gather information from many pieces of text
  • Produce a structured representation of relevant information:
    • relations (in the database sense), a.k.a. a knowledge base
• Goals:
  1. Organize information so that it is useful to people
  2. Put information in a semantically precise form that allows further inferences to be made by computer algorithms

Page 3: Information Extraction and Named Entity Recognition

Information Extraction (IE)

• IE systems extract clear, factual information
  • Roughly: Who did what to whom when?
• E.g.:
  • Gathering earnings, profits, board members, headquarters, etc. from company reports
    • The headquarters of BHP Billiton Limited, and the global headquarters of the combined BHP Billiton Group, are located in Melbourne, Australia.
    • headquarters("BHP Billiton Limited", "Melbourne, Australia")
  • Learn drug-gene product interactions from medical research literature

Page 4: Information Extraction and Named Entity Recognition


Low-level information extraction

• Is now available – and I think popular – in applications like Apple or Google mail, and web indexing

• Often seems to be based on regular expressions and name lists

Page 5: Information Extraction and Named Entity Recognition


Low-level information extraction

Page 6: Information Extraction and Named Entity Recognition

Why is IE hard on the web?

[Screenshot of a web product listing, annotated: we need this price; the title; and it is a book, not a toy]

Page 7: Information Extraction and Named Entity Recognition

How is IE useful? Classified Advertisements (Real Estate)

Background:
• Plain text advertisements
• Lowest common denominator: only thing that 70+ newspapers using many different publishing systems can all handle

<ADNUM>2067206v1</ADNUM>
<DATE>March 02, 1998</DATE>
<ADTITLE>MADDINGTON $89,000</ADTITLE>
<ADTEXT>OPEN 1.00 - 1.45<BR>U 11 / 10 BERTRAM ST<BR>NEW TO MARKET Beautiful<BR>3 brm freestanding<BR>villa, close to shops & bus<BR>Owner moved to Melbourne<BR>ideally suit 1st home buyer,<BR>investor & 55 and over.<BR>Brian Hazelden 0418 958 996<BR>R WHITE LEEMING 9332 3477</ADTEXT>

Page 8: Information Extraction and Named Entity Recognition


Page 9: Information Extraction and Named Entity Recognition

Why doesn't text search (IR) work?

What you search for in real estate advertisements:
• Town/suburb. You might think easy, but:
  • Real estate agents: Coldwell Banker, Mosman
  • Phrases: Only 45 minutes from Parramatta
  • Multiple property ads have different suburbs in one ad
• Money: want a range, not a textual match
  • Multiple amounts: was $155K, now $145K
  • Variations: offers in the high 700s [but not rents for $270]
• Bedrooms: similar issues: br, bdr, beds, B/R

Page 10: Information Extraction and Named Entity Recognition


Named Entity Recognition (NER)

• A very important sub-task: find and classify names in text, for example:

• The decision by the independent MP Andrew Wilkie to withdraw his support for the minority Labor government sounded dramatic but it should not further threaten its stability. When, after the 2010 election, Wilkie, Rob Oakeshott, Tony Windsor and the Greens agreed to support Labor, they gave just two guarantees: confidence and supply.

Page 12: Information Extraction and Named Entity Recognition

Named Entity Recognition (NER)

• The same example, with the names classified into the entity classes Person, Date, Location, and Organization.

Page 13: Information Extraction and Named Entity Recognition

Named Entity Recognition (NER)

• The uses:
  • Named entities can be indexed, linked off, etc.
  • Sentiment can be attributed to companies or products
  • A lot of IE relations are associations between named entities
  • For question answering, answers are often named entities
• Concretely:
  • Many web pages tag various entities, with links to bio or topic pages, etc.
    • Reuters' OpenCalais, Evri, AlchemyAPI, Yahoo's Term Extraction, …
  • Apple/Google/Microsoft/… smart recognizers for document content

Page 14: Information Extraction and Named Entity Recognition

Information Extraction and Named Entity Recognition

Introducing the tasks:

Getting simple structured information out of text

Page 15: Information Extraction and Named Entity Recognition

Evaluation of Named Entity Recognition

Precision, Recall, and the F measure;

their extension to sequences

Page 16: Information Extraction and Named Entity Recognition

The 2-by-2 contingency table

              correct   not correct
selected      tp        fp
not selected  fn        tn

Page 17: Information Extraction and Named Entity Recognition

Precision and recall

• Precision: % of selected items that are correct
• Recall: % of correct items that are selected

              correct   not correct
selected      tp        fp
not selected  fn        tn

Page 18: Information Extraction and Named Entity Recognition

A combined measure: F

• A combined measure that assesses the P/R tradeoff is the F measure (weighted harmonic mean):

F = \frac{1}{\alpha \frac{1}{P} + (1 - \alpha) \frac{1}{R}} = \frac{(\beta^2 + 1) P R}{\beta^2 P + R}

• The harmonic mean is a very conservative average; see IIR § 8.3
• People usually use the balanced F1 measure
  • i.e., with β = 1 (that is, α = ½): F = 2PR/(P + R)

Page 19: Information Extraction and Named Entity Recognition


Quiz question

What is the F1?

P = 40% R = 40% F1 =

Page 20: Information Extraction and Named Entity Recognition


Quiz question

What is the F1?

P = 75% R = 25% F1 =
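A minimal sketch (an editorial addition, not from the slides) of the F computation defined above; running it gives the two quiz answers:

    def f_measure(p, r, beta=1.0):
        """Weighted harmonic mean of precision and recall:
        F = (beta**2 + 1) * P * R / (beta**2 * P + R)."""
        if p == 0 and r == 0:
            return 0.0
        return (beta ** 2 + 1) * p * r / (beta ** 2 * p + r)

    print(f_measure(0.40, 0.40))   # 0.4   -> F1 = 40%
    print(f_measure(0.75, 0.25))   # 0.375 -> F1 = 37.5%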

Page 21: Information Extraction and Named Entity Recognition

The Named Entity Recognition Task

Task: Predict entities in a text

Foreign     ORG
Ministry    ORG
spokesman   O
Shen        PER
Guofang     PER
told        O
Reuters     ORG

Standard evaluation is per entity, not per token.

Page 22: Information Extraction and Named Entity Recognition

Precision/Recall/F1 for IE/NER

• Recall and precision are straightforward for tasks like IR and text categorization, where there is only one grain size (documents)
• The measure behaves a bit funnily for IE/NER when there are boundary errors (which are common):
  • First Bank of Chicago announced earnings …
  • This counts as both a fp and a fn
  • Selecting nothing would have been better
• Some other metrics (e.g., the MUC scorer) give partial credit (according to complex rules)

Page 23: Information Extraction and Named Entity Recognition

Evaluation of Named Entity Recognition

Precision, Recall, and the F measure;

their extension to sequences

Page 24: Information Extraction and Named Entity Recognition

Methods for doing NER and IE

The space;

hand-written patterns

Page 25: Information Extraction and Named Entity Recognition

Three standard approaches to NER (and IE)

1. Hand-written regular expressions
   • Perhaps stacked
2. Using classifiers
   • Generative: Naïve Bayes
   • Discriminative: Maxent models
3. Sequence models
   • HMMs
   • CMMs/MEMMs
   • CRFs

Page 26: Information Extraction and Named Entity Recognition

Hand-written Patterns for Information Extraction

• If extracting from automatically generated web pages, simple regex patterns usually work
  • Amazon page:
    <div class="buying"><h1 class="parseasinTitle"><span id="btAsinTitle" style="">(.*?)</span></h1>
• For certain restricted, common types of entities in unstructured text, simple regex patterns also usually work
  • Finding (US) phone numbers:
    (?:\(?[0-9]{3}\)?[ -.])?[0-9]{3}[ -.]?[0-9]{4}
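For instance, the phone-number pattern above can be used directly with Python's re module (a hedged sketch; the sample text is invented for illustration):

    import re

    # The US phone-number pattern from the slide, verbatim.
    PHONE = re.compile(r"(?:\(?[0-9]{3}\)?[ -.])?[0-9]{3}[ -.]?[0-9]{4}")

    text = "Call (650) 723-2300 or 723-4671; fax 650.723.6971."
    print(PHONE.findall(text))
    # ['(650) 723-2300', '723-4671', '650.723.6971']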

Page 27: Information Extraction and Named Entity Recognition


MUC: the NLP genesis of IE

• DARPA funded significant efforts in IE in the early to mid 1990s

• Message Understanding Conference (MUC) was an annual event/competition where results were presented.

• Focused on extracting information from news articles:• Terrorist events• Industrial joint ventures• Company management changes

• Starting off, all rule-based, gradually moved to ML

Page 28: Information Extraction and Named Entity Recognition

MUC Information Extraction Example

Bridgestone Sports Co. said Friday it had set up a joint venture in Taiwan with a local concern and a Japanese trading house to produce golf clubs to be supplied to Japan. The joint venture, Bridgestone Sports Taiwan Co., capitalized at 20 million new Taiwan dollars, will start production in January 1990 with production of 20,000 iron and "metal wood" clubs a month.

JOINT-VENTURE-1
• Relationship: TIE-UP
• Entities: "Bridgestone Sports Co.", "a local concern", "a Japanese trading house"
• Joint Ent: "Bridgestone Sports Taiwan Co."
• Activity: ACTIVITY-1
• Amount: NT$20 000 000

ACTIVITY-1
• Activity: PRODUCTION
• Company: "Bridgestone Sports Taiwan Co."
• Product: "iron and 'metal wood' clubs"
• Start date: DURING: January 1990

Page 29: Information Extraction and Named Entity Recognition

Natural Language Processing-based Hand-written Information Extraction

• For unstructured human-written text, some NLP may help
  • Part-of-speech (POS) tagging
    • Mark each word as a noun, verb, preposition, etc.
  • Syntactic parsing
    • Identify phrases: NP, VP, PP
  • Semantic word categories (e.g., from WordNet)
    • KILL: kill, murder, assassinate, strangle, suffocate

Page 30: Information Extraction and Named Entity Recognition

Finite automaton for noun groups, e.g. "John's interesting book with a nice cover"

[Diagram: a finite automaton with states 0-4 and transitions labeled PN, 's, Art, ADJ, N, PNP]

Grep++ = cascaded grepping

Page 31: Information Extraction and Named Entity Recognition

Natural Language Processing-based Hand-written Information Extraction

• We use cascaded regular expressions to match relations
  • Higher-level regular expressions can use categories matched by lower-level expressions
  • E.g., the CRIME-VICTIM pattern can use things matching NOUN-GROUP
• This was the basis of the SRI FASTUS system in later MUCs
• Example extraction pattern
  • Crime victim:
    • Prefiller: [POS: V, Hypernym: KILL]
    • Filler: [Phrase: NOUN-GROUP]

Page 32: Information Extraction and Named Entity Recognition

Rule-based Extraction Examples

Determining which person holds what office in what organization:
• [person], [office] of [org] (see the sketch below)
  • Vuk Draskovic, leader of the Serbian Renewal Movement
• [org] (named, appointed, etc.) [person] Prep [office]
  • NATO appointed Wesley Clark as Commander in Chief

Determining where an organization is located:
• [org] in [loc]
  • NATO headquarters in Brussels
• [org] [loc] (division, branch, headquarters, etc.)
  • KFOR Kosovo headquarters
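A toy sketch of the first pattern (an editorial addition): in a real system the [person] and [org] slots would be filled by an NER tagger; here runs of capitalized words stand in for them, which is far too permissive in practice:

    import re

    # "[person], [office] of [org]" as a naive regex over raw text.
    CAP = r"[A-Z][a-z]+(?: [A-Z][a-z]+)*"   # crude stand-in for a named entity
    PERSON_OFFICE_ORG = re.compile(r"({0}), (\w+) of (?:the )?({0})".format(CAP))

    m = PERSON_OFFICE_ORG.search("Vuk Draskovic, leader of the Serbian Renewal Movement")
    print(m.groups())   # ('Vuk Draskovic', 'leader', 'Serbian Renewal Movement')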

Page 33: Information Extraction and Named Entity Recognition

Methods for doing NER and IE

The space;

hand-written patterns

Page 34: Information Extraction and Named Entity Recognition

Information extraction as text classification

Page 35: Information Extraction and Named Entity Recognition

Naïve use of text classification for IE

• Use conventional classification algorithms to classify substrings of a document as "to be extracted" or not
• In some simple but compelling domains, this naive technique is remarkably effective
  • But do think about when it would and wouldn't work!

Page 36: Information Extraction and Named Entity Recognition


‘Change of Address’ email

Page 37: Information Extraction and Named Entity Recognition

Change-of-Address detection [Kushmerick et al., ATEM 2001]

[Diagram: a sample email (From: Robert Kubinsky <[email protected]>, Subject: Email update, "Hi all - I'm … everyone so.... My new email address is: [email protected] Hope all is well :)") is fed to 1. Classification, where a "message" naïve Bayes model decides Yes vs. not CoA, and then, if Yes, to 2. Extraction, where an "address" naïve Bayes model scores the candidate addresses: P[[email protected]] = 0.28, P[[email protected]] = 0.72]

Page 38: Information Extraction and Named Entity Recognition

Change-of-Address detection results [Kushmerick et al., ATEM 2001]

• Corpus of 36 CoA emails and 5720 non-CoA emails
• Results from 2-fold cross-validation (train on half, test on the other half)
• Very skewed distribution, intended to be realistic
• Note the very limited training data: only 18 training CoA messages per fold
• The 36 CoA messages have 86 email addresses: old, new, and miscellaneous

                         P     R     F1
Message classification   98%   97%   98%
Address classification   98%   68%   80%

Page 39: Information Extraction and Named Entity Recognition

Information extraction as text classification

Page 40: Information Extraction and Named Entity Recognition

Sequence Models for Named Entity Recognition

Page 41: Information Extraction and Named Entity Recognition

The ML sequence model approach to NER

Training:
1. Collect a set of representative training documents
2. Label each token for its entity class or other (O)
3. Design feature extractors appropriate to the text and classes
4. Train a sequence classifier to predict the labels from the data

Testing:
5. Receive a set of testing documents
6. Run sequence model inference to label each token
7. Appropriately output the recognized entities

Page 42: Information Extraction and Named Entity Recognition

Encoding classes for sequence labeling

Token      IO encoding   IOB encoding
Fred       PER           B-PER
showed     O             O
Sue        PER           B-PER
Mengqiu    PER           B-PER
Huang      PER           I-PER
's         O             O
new        O             O
painting   O             O
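A small editorial sketch that produces both encodings from entity spans and makes the difference concrete: in IO, adjacent same-class entities such as Sue and Mengqiu Huang run together, while IOB's B- tags keep them apart:

    def encode(tokens, entities, scheme="IOB"):
        """Label tokens given entity spans [(start, end, cls), ...].
        IO writes the class on every entity token; IOB marks the first
        token of each entity with B- and the rest with I-."""
        tags = ["O"] * len(tokens)
        for start, end, cls in entities:
            for i in range(start, end):
                if scheme == "IO":
                    tags[i] = cls
                else:
                    tags[i] = ("B-" if i == start else "I-") + cls
        return tags

    tokens = ["Fred", "showed", "Sue", "Mengqiu", "Huang", "'s", "new", "painting"]
    spans = [(0, 1, "PER"), (2, 3, "PER"), (3, 5, "PER")]
    print(encode(tokens, spans, "IO"))    # Sue / Mengqiu Huang indistinguishable
    print(encode(tokens, spans, "IOB"))   # B-PER boundaries keep them apart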

Page 43: Information Extraction and Named Entity Recognition

Features for sequence labeling (see the sketch below)

• Words
  • Current word (essentially like a learned dictionary)
  • Previous/next word (context)
• Other kinds of inferred linguistic classification
  • Part-of-speech tags
• Label context
  • Previous (and perhaps next) label
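A minimal feature-extractor sketch along these lines (editorial; a real system would add POS tags, word shapes, and substring features):

    def token_features(tokens, i, prev_label):
        """Feature dict for position i: current/previous/next word
        plus the previous label, as listed on the slide."""
        return {
            "w0": tokens[i],
            "w-1": tokens[i - 1] if i > 0 else "<S>",      # sentence start
            "w+1": tokens[i + 1] if i + 1 < len(tokens) else "</S>",
            "label-1": prev_label,
        }

    print(token_features(["Shen", "Guofang", "told", "Reuters"], 1, "PER"))
    # {'w0': 'Guofang', 'w-1': 'Shen', 'w+1': 'told', 'label-1': 'PER'}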

Page 44: Information Extraction and Named Entity Recognition

Features: Word substrings

[Chart: counts of entity types for words containing particular substrings: "oxa" picks out drug names (e.g., Cotrimoxazole), ":" picks out movie titles (e.g., Alien Fury: Countdown to Invasion), and "field" picks out locations (e.g., Wethersfield)]

Page 45: Information Extraction and Named Entity Recognition

Features: Word shapes

• Word shapes
  • Map words to simplified representations that encode attributes such as length, capitalization, numerals, Greek letters, internal punctuation, etc.

Varicella-zoster   Xx-xxx
mRNA               xXXX
CPA1               XXXd
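A sketch of the basic mapping (editorial): uppercase to X, lowercase to x, digits to d, everything else kept. This reproduces mRNA -> xXXX and CPA1 -> XXXd exactly; the slide's Varicella-zoster -> Xx-xxx additionally collapses character runs, and exact collapsing schemes vary:

    import re

    def word_shape(word, collapse=False):
        """Map each character to a shape class; optionally cap runs at 2."""
        classes = []
        for ch in word:
            if ch.isupper():
                classes.append("X")
            elif ch.islower():
                classes.append("x")
            elif ch.isdigit():
                classes.append("d")
            else:
                classes.append(ch)   # keep punctuation as-is
        shape = "".join(classes)
        if collapse:
            shape = re.sub(r"(.)\1{2,}", r"\1\1", shape)
        return shape

    print(word_shape("mRNA"))                      # xXXX
    print(word_shape("CPA1"))                      # XXXd
    print(word_shape("Varicella-zoster", True))    # Xxx-xx (one collapsing variant)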

Page 46: Information Extraction and Named Entity Recognition

Sequence Models for Named Entity Recognition

Page 47: Information Extraction and Named Entity Recognition

Maximum entropy sequence models

Maximum entropy Markov models (MEMMs) or

Conditional Markov models

Page 48: Information Extraction and Named Entity Recognition

Sequence problems

• Many problems in NLP have data which is a sequence of characters, words, phrases, lines, or sentences …
• We can think of our task as one of labeling each item:

POS tagging:
VBG     NN          IN DT NN  IN NN
Chasing opportunity in an age of upheaval

Word segmentation:
B B I I B I B I B B
而 相 对 于 这 些 品 牌 的 价

Named entity recognition:
PERS    O         O      O  ORG  ORG
Murdoch discusses future of News Corp.

Text segmentation:
Q A Q A A A Q A

Page 49: Information Extraction and Named Entity Recognition

MEMM inference in systems

• For a Conditional Markov Model (CMM), a.k.a. a Maximum Entropy Markov Model (MEMM), the classifier makes a single decision at a time, conditioned on evidence from observations and previous decisions
• A larger space of sequences is usually explored via search

Local context (the decision point is position 0):
-3   -2   -1    0     +1
DT   NNP  VBD   ???   ???
The  Dow  fell  22.6  %

Features:
W0        22.6
W+1       %
W-1       fell
T-1       VBD
T-1-T-2   NNP-VBD
hasDigit? true

(Ratnaparkhi 1996; Toutanova et al. 2003, etc.)

Page 50: Information Extraction and Named Entity Recognition

Example: POS Tagging

• Scoring individual labeling decisions is no more complex than standard classification decisions
• We have some assumed labels to use for prior positions
• We use features of those and the observed data (which can include current, previous, and next words) to predict the current label

(Same local context and features as on the previous slide.)

(Ratnaparkhi 1996; Toutanova et al. 2003, etc.)

Page 51: Information Extraction and Named Entity Recognition

Example: POS Tagging

• POS tagging features can include:
  • Current, previous, next words, in isolation or together
  • Previous one, two, three tags
  • Word-internal features: word types, suffixes, dashes, etc.

(Same local context and features as on the previous slides.)

(Ratnaparkhi 1996; Toutanova et al. 2003, etc.)

Page 52: Information Extraction and Named Entity Recognition

Inference in Systems

[Diagram: at the local level, local data flows through feature extraction to features and a label, with the design choices being classifier type (maximum entropy models), smoothing (quadratic penalties), and optimization (conjugate gradient); at the sequence level, sequence data plus sequence model inference combine the local decisions into a label sequence]

Page 53: Information Extraction and Named Entity Recognition

Greedy Inference

• Greedy inference:
  • We just start at the left, and use our classifier at each position to assign a label (see the sketch below)
  • The classifier can depend on previous labeling decisions as well as observed data
• Advantages:
  • Fast; no extra memory requirements
  • Very easy to implement
  • With rich features including observations to the right, it may perform quite well
• Disadvantage:
  • Greedy. We commit errors we cannot recover from
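A greedy decoder sketch (editorial). Here `score(tokens, i, prev_label, label)` is a hypothetical stand-in for a trained local classifier, e.g. the log-probability an MEMM assigns:

    def greedy_decode(tokens, labels, score):
        """Left-to-right greedy inference: at each position pick the label
        that scores best given the observations and the labels so far."""
        sequence = []
        for i in range(len(tokens)):
            prev = sequence[-1] if sequence else "<S>"   # assumed start symbol
            sequence.append(max(labels, key=lambda y: score(tokens, i, prev, y)))
        return sequence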

Page 54: Information Extraction and Named Entity Recognition

Beam Inference

• Beam inference:
  • At each position keep the top k complete sequences (see the sketch below)
  • Extend each sequence in each local way
  • The extensions compete for the k slots at the next position
• Advantages:
  • Fast; beam sizes of 3–5 are almost as good as exact inference in many cases
  • Easy to implement (no dynamic programming required)
• Disadvantage:
  • Inexact: the globally best sequence can fall off the beam
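The corresponding beam sketch (editorial), using the same hypothetical `score` function; greedy inference is the k = 1 special case:

    def beam_decode(tokens, labels, score, k=3):
        """Keep the top-k partial sequences at each position, extend each
        with every label, and prune back to k by total score."""
        beam = [((), 0.0)]   # (partial label sequence, running score)
        for i in range(len(tokens)):
            candidates = []
            for seq, total in beam:
                prev = seq[-1] if seq else "<S>"
                for y in labels:
                    candidates.append((seq + (y,), total + score(tokens, i, prev, y)))
            beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
        return list(beam[0][0])   # best complete sequence found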

Page 55: Information Extraction and Named Entity Recognition

Viterbi Inference

• Viterbi inference:
  • Dynamic programming or memoization (see the sketch below)
  • Requires a small window of state influence (e.g., the past two states are relevant)
• Advantage:
  • Exact: the globally best sequence is returned
• Disadvantage:
  • Harder to implement long-distance state-state interactions (but beam inference tends not to allow long-distance resurrection of sequences anyway)
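A Viterbi sketch for a first-order model (editorial), again over the hypothetical local `score`; because the recurrence conditions only on the previous label, the returned sequence is exactly optimal for such models:

    def viterbi_decode(tokens, labels, score):
        """Exact inference by dynamic programming over the previous label."""
        # best[i][y] = (score of best sequence ending in y at i, backpointer)
        best = [{y: (score(tokens, 0, "<S>", y), None) for y in labels}]
        for i in range(1, len(tokens)):
            column = {}
            for y in labels:
                p = max(labels, key=lambda p_: best[i - 1][p_][0] + score(tokens, i, p_, y))
                column[y] = (best[i - 1][p][0] + score(tokens, i, p, y), p)
            best.append(column)
        y = max(labels, key=lambda l: best[-1][l][0])    # best final label
        path = [y]
        for i in range(len(tokens) - 1, 0, -1):          # follow backpointers
            y = best[i][y][1]
            path.append(y)
        return list(reversed(path))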

Page 56: Information Extraction and Named Entity Recognition

Viterbi Inference: J&M Ch. 5/6

• I'm punting on this … read J&M Ch. 5/6
• I'll do dynamic programming for parsing
• Basically, provided you only look at neighboring states, you can use dynamic programming to search for the optimal state sequence

Page 57: Information Extraction and Named Entity Recognition

Christopher Manning

Viterbi Inference: J&M Ch. 5/6

Page 58: Information Extraction and Named Entity Recognition

CRFs [Lafferty, Pereira, and McCallum 2001]

• Another sequence model: Conditional Random Fields (CRFs)
• A whole-sequence conditional model rather than a chaining of local models
• The space of c's is now the space of sequences
  • But if the features fi remain local, the conditional sequence likelihood can be calculated exactly using dynamic programming
• Training is slower, but CRFs avoid causal-competition biases
• These (or a variant using a max-margin criterion) are seen as the state of the art these days … but in practice they usually work much the same as MEMMs

P(c \mid d) = \frac{\exp \sum_i \lambda_i f_i(c, d)}{\sum_{c'} \exp \sum_i \lambda_i f_i(c', d)}
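To make the global normalization concrete, a brute-force toy (editorial): the probability of a label sequence is its exponentiated feature score normalized over all possible sequences. Real CRFs compute the denominator with dynamic programming; enumeration only works for tiny examples:

    import math
    from itertools import product

    def crf_prob(tokens, labels, seq, weighted_features):
        """P(c|d): exp(weighted feature score of c) normalized over all c'.
        `weighted_features(tokens, seq)` returns sum_i lambda_i * f_i(c, d)."""
        z = sum(math.exp(weighted_features(tokens, c))
                for c in product(labels, repeat=len(tokens)))
        return math.exp(weighted_features(tokens, tuple(seq))) / z

    # Toy feature: +1 whenever a capitalized token is labeled PER.
    def wf(tokens, seq):
        return sum(1.0 for t, y in zip(tokens, seq) if t[0].isupper() and y == "PER")

    print(crf_prob(["Shen", "told"], ["PER", "O"], ["PER", "O"], wf))   # ~0.37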

Page 59: Information Extraction and Named Entity Recognition

Maximum entropy sequence models

Maximum entropy Markov models (MEMMs) or

Conditional Markov models

Page 60: Information Extraction and Named Entity Recognition

The Full Task of Information Extraction

Page 61: Information Extraction and Named Entity Recognition

The Full Task of Information Extraction

Information Extraction = segmentation + classification + association + clustering

As a family of techniques:

For years, Microsoft Corporation CEO Bill Gates railed against the economic philosophy of open-source software with Orwellian fervor, denouncing its communal licensing as a "cancer" that stifled technological innovation.

Now Gates himself says Microsoft will gladly disclose its crown jewels--the coveted code behind the Windows operating system--to select customers.

"We can be open source. We love the concept of shared source," said Bill Veghte, a Microsoft VP. "That's a super-important shift for us in terms of code access."

Richard Stallman, founder of the Free Software Foundation, countered saying…

Extracted mentions: Microsoft Corporation, CEO, Bill Gates, Gates, Microsoft, Bill Veghte, Microsoft, VP, Richard Stallman, founder, Free Software Foundation

Associated into a table:
NAME               TITLE     ORGANIZATION
Bill Gates         CEO       Microsoft
Bill Veghte        VP        Microsoft
Richard Stallman   founder   Free Software Foundation

Slide by Andrew McCallum. Used with permission.

Page 62: Information Extraction and Named Entity Recognition

An Even Broader View

[Diagram: spider, document collection, filter by relevance, IE (segment, classify, associate, cluster), database, then query/search and data mining; supporting steps: create ontology, label training data, train extraction models]

Slide by Andrew McCallum. Used with permission.

Page 63: Information Extraction and Named Entity Recognition

Landscape of IE Tasks: Document Formatting

• Text paragraphs without formatting
  • E.g.: Astro Teller is the CEO and co-founder of BodyMedia. Astro holds a Ph.D. in Artificial Intelligence from Carnegie Mellon University, where he was inducted as a national Hertz fellow. His M.S. in symbolic and heuristic computation and B.S. in computer science are from Stanford University.
• Grammatical sentences and some formatting & links
• Non-grammatical snippets, rich formatting & links
• Tables

Slide by Andrew McCallum. Used with permission.

Page 64: Information Extraction and Named Entity Recognition

Landscape of IE Tasks: Intended Breadth of Coverage

Web site specific       Genre specific   Wide, non-specific
Formatting              Layout           Language
Amazon.com book pages   Resumes          University names

Slide by Andrew McCallum. Used with permission.

Page 65: Information Extraction and Named Entity Recognition

Landscape of IE Tasks: Complexity of Entities/Relations

• Closed set (e.g., U.S. states):
  • He was born in Alabama…
  • The big Wyoming sky…
• Regular set (e.g., U.S. phone numbers):
  • Phone: (413) 545-1323
  • The CALD main office is 412-268-1299
• Complex pattern (e.g., U.S. postal addresses):
  • University of Arkansas, P.O. Box 140, Hope, AR 71802
  • Headquarters: 1128 Main Street, 4th Floor, Cincinnati, Ohio 45210
• Ambiguous patterns, needing context and many sources of evidence (e.g., person names):
  • …was among the six houses sold by Hope Feldman that year.
  • Pawel Opalinski, Software Engineer at WhizBang Labs.

Slide by Andrew McCallum. Used with permission.

Page 66: Information Extraction and Named Entity Recognition

Landscape of IE Tasks: Arity of Relation

Jack Welch will retire as CEO of General Electric tomorrow. The top role at the Connecticut company will be filled by Jeffrey Immelt.

• Single entity ("named entity" extraction):
  • Person: Jack Welch
  • Person: Jeffrey Immelt
  • Location: Connecticut
• Binary relationship:
  • Relation: Person-Title (Person: Jack Welch; Title: CEO)
  • Relation: Company-Location (Company: General Electric; Location: Connecticut)
• N-ary record:
  • Relation: Succession (Company: General Electric; Title: CEO; Out: Jack Welch; In: Jeffrey Immelt)

Slide by Andrew McCallum. Used with permission.

Page 67: Information Extraction and Named Entity Recognition

Association task = Relation Extraction

• Checking if groupings of entities are instances of a relation

1. Manually engineered rules
   • Rules defined over words/entities: "<company> located in <location>" (see the sketch below)
   • Rules defined over parsed text: "((Obj <company>) (Verb located) (*) (Subj <location>))"
2. Machine learning-based
   • Supervised: learn a relation classifier from examples
   • Partially supervised: bootstrap rules/patterns from "seed" examples
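A sketch of the first rule (editorial), written over NER-tagged text; the inline <ORG>/<LOC> tag format is assumed purely for illustration:

    import re

    # "<company> located in <location>" over entity-tagged text.
    LOCATED_IN = re.compile(r"<ORG>(.+?)</ORG>[^<]* in <LOC>(.+?)</LOC>")

    m = LOCATED_IN.search("<ORG>NATO</ORG> headquarters in <LOC>Brussels</LOC>")
    print(m.groups())   # ('NATO', 'Brussels')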

Page 68: Information Extraction and Named Entity Recognition

Relation Extraction: Disease Outbreaks

May 19 1995, Atlanta -- The Centers for Disease Control and Prevention, which is in the front line of the world's response to the deadly Ebola epidemic in Zaire, is finding itself hard pressed to cope with the crisis…

The information extraction system fills a table:

Date        Disease Name      Location
Jan. 1995   Malaria           Ethiopia
July 1995   Mad Cow Disease   U.K.
Feb. 1995   Pneumonia         U.S.
May 1995    Ebola             Zaire

Page 69: Information Extraction and Named Entity Recognition

Relation Extraction: Protein Interactions

"We show that CBF-A and CBF-C interact with each other to form a CBF-A-CBF-C complex and that CBF-B does not interact with CBF-A or CBF-C individually but that it associates with the CBF-A-CBF-C complex."

[Diagram: CBF-A and CBF-C are linked by "interact", forming the CBF-A-CBF-C complex; CBF-B is linked to the CBF-A-CBF-C complex by "associates"]

Page 70: Information Extraction and Named Entity Recognition

Binary Relation Association as Binary Classification

Christos Faloutsos conferred with Ted Senator, the KDD 2003 General Chair.
(Christos Faloutsos: Person; Ted Senator: Person; KDD 2003 General Chair: Role)

Person-Role (Christos Faloutsos, KDD 2003 General Chair) → NO
Person-Role (Ted Senator, KDD 2003 General Chair) → YES

Page 71: Information Extraction and Named Entity Recognition

Resolving coreference (both within and across documents)

John Fitzgerald Kennedy was born at 83 Beals Street in Brookline, Massachusetts on Tuesday, May 29, 1917, at 3:00 pm,[7] the second son of Joseph P. Kennedy, Sr., and Rose Fitzgerald; Rose, in turn, was the eldest child of John "Honey Fitz" Fitzgerald, a prominent Boston political figure who was the city's mayor and a three-term member of Congress. Kennedy lived in Brookline for ten years and attended Edward Devotion School, Noble and Greenough Lower School, and the Dexter School, through 4th grade. In 1927, the family moved to 5040 Independence Avenue in Riverdale, Bronx, New York City; two years later, they moved to 294 Pondfield Road in Bronxville, New York, where Kennedy was a member of Scout Troop 2 (and was the first Boy Scout to become President).[8] Kennedy spent summers with his family at their home in Hyannisport, Massachusetts, and Christmas and Easter holidays with his family at their winter home in Palm Beach, Florida. For the 5th through 7th grade, Kennedy attended Riverdale Country School, a private school for boys. For 8th grade in September 1930, the 13-year old Kennedy attended Canterbury School in New Milford, Connecticut.

Page 72: Information Extraction and Named Entity Recognition

Rough Accuracy of Information Extraction

• Errors cascade (an error in an entity tag leads to an error in relation extraction)
• These are very rough, actually optimistic, numbers
  • They hold for well-established tasks, but are lower for many specific/novel IE tasks

Information type   Accuracy
Entities           90-98%
Attributes         80%
Relations          60-70%
Events             50-60%

Page 73: Information Extraction and Named Entity Recognition

The Full Task of Information Extraction