Introduction to IR Systems: Supporting Boolean Text Search
198:541
[Bar chart: unstructured (text) vs. structured (database) data in 1996, comparing data volume and market cap for unstructured vs. structured data]
[Bar chart: unstructured (text) vs. structured (database) data in 2006, comparing data volume and market cap for unstructured vs. structured data]
Information Retrieval
A research field traditionally separate from Databases
- Goes back to IBM, Rand and Lockheed in the '50s
- G. Salton at Cornell in the '60s
- Lots of research since then
Products traditionally separate
- Originally, document management systems for libraries, government, law, etc.
- Gained prominence in recent years due to web search
IR vs. DBMS
Seem like very different beasts:

DBMS                                 IR
Precise semantics                    Imprecise semantics
Structured data                      Unstructured data format
Expect reasonable number of updates  Read-mostly; add docs occasionally
SQL                                  Keyword search
Generate full answer                 Page through top k results

Yet both support queries over large datasets and use indexing. In practice, you currently have to choose between the two (there is some recent research on integrating both).
IR's "Bag of Words" Model
Typical IR data model: each document is just a bag (multiset) of words ("terms")
Detail 1: "Stop Words"
- Certain words are considered irrelevant and not placed in the bag
- e.g., "the"; e.g., HTML tags like <H1>
Detail 2: "Stemming" and other content analysis
- Using English-specific rules, convert words to their basic form
- e.g., "surfing", "surfed" --> "surf"
Unstructured data in 1650
Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia. But:
- Slow (for large corpora)
- NOT Calpurnia is non-trivial
- Other operations (e.g., find the word Romans near countrymen) not feasible
- Ranked retrieval (best documents to return): later lectures
Term-document incidence
1 if play contains word, 0 otherwise:

           Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony     1                     1              0            0       0        1
Brutus     1                     1              0            1       0        0
Caesar     1                     1              0            1       1        1
Calpurnia  0                     1              0            0       0        0
Cleopatra  1                     0              0            0       0        0
mercy      1                     0              1            1       1        1
worser     1                     0              1            1       1        0

Query: Brutus AND Caesar but NOT Calpurnia
Incidence vectors
So we have a 0/1 vector for each term. To answer the query: take the vectors for Brutus, Caesar and Calpurnia (complemented), then bitwise AND them:
110100 AND 110111 AND 101111 = 100100
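As a quick illustration (not from the original slides), here is a minimal Python sketch of this approach, using Python integers as arbitrary-width bit vectors; the INCIDENCE table just transcribes the matrix from the previous slide:

```python
# Incidence vectors from the term-document matrix, one bit per play
# (leftmost bit = Antony and Cleopatra, rightmost = Macbeth).
INCIDENCE = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}

N_PLAYS = 6
ALL_ONES = (1 << N_PLAYS) - 1  # 0b111111

# Brutus AND Caesar AND NOT Calpurnia: bitwise AND, complementing Calpurnia.
result = (INCIDENCE["Brutus"]
          & INCIDENCE["Caesar"]
          & (ALL_ONES & ~INCIDENCE["Calpurnia"]))

print(format(result, "06b"))  # 100100 -> Antony and Cleopatra, Hamlet
```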
Answers to query
Antony and Cleopatra, Act III, Scene ii
  Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus, / When Antony found Julius Caesar dead, / He cried almost to roaring; and he wept / When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii
  Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Bigger corpora
Consider N = 1M documents, each with about 1K terms.
At an average of 6 bytes/term (including spaces and punctuation), that is about 6GB of data in the documents.
Say there are m = 500K distinct terms among these.
Can't build the matrix
A 500K x 1M matrix has half a trillion 0's and 1's (roughly 62.5GB, even at one bit per cell).
But it has no more than one billion 1's (1M docs x at most 1K distinct terms each), so the matrix is extremely sparse.
What's a better representation? We only record the 1 positions.
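The arithmetic behind these estimates, written out as a sanity check (a sketch using the slide's assumed constants):

```python
N = 1_000_000           # documents
TERMS_PER_DOC = 1_000   # average terms per document
BYTES_PER_TERM = 6      # including spaces/punctuation
M = 500_000             # distinct terms

text_bytes   = N * TERMS_PER_DOC * BYTES_PER_TERM  # 6e9  -> ~6 GB of raw text
matrix_cells = M * N                               # 5e11 -> half a trillion 0/1s
matrix_gb    = matrix_cells / 8 / 1e9              # ~62.5 GB at one bit per cell
max_ones     = N * TERMS_PER_DOC                   # <= 1e9 ones
print(max_ones / matrix_cells)                     # 0.002 -> 99.8% of cells are 0
```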
Inverted index
For each term T, we must store a list of all documents that contain T.
Do we use an array or a list for this?

Brutus    -> 2 4 8 16 32 64 128
Calpurnia -> 1 2 3 5 8 13 21 34
Caesar    -> 13 16

What happens if the word Caesar is added to document 14?
Inverted index
Linked lists generally preferred to arrays:
- Dynamic space allocation
- Insertion of terms into documents easy
- Space overhead of pointers

Dictionary     Postings lists (sorted by docID; more later on why)
Brutus    -> 2 4 8 16 32 64 128
Calpurnia -> 1 2 3 5 8 13 21 34
Caesar    -> 13 16

Each entry in a postings list is a "posting".
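A minimal sketch of the dictionary-to-postings mapping in Python, with sorted lists standing in for the linked lists on the slide (names are illustrative):

```python
from bisect import insort

# Dictionary: term -> postings list, kept sorted by docID.
index = {
    "Brutus":    [2, 4, 8, 16, 32, 64, 128],
    "Calpurnia": [1, 2, 3, 5, 8, 13, 21, 34],
    "Caesar":    [13, 16],
}

def add_posting(term, doc_id):
    """Record that term occurs in doc_id, preserving docID order."""
    postings = index.setdefault(term, [])
    if doc_id not in postings:
        insort(postings, doc_id)  # O(n) shifting in an array; cheap in a linked list

add_posting("Caesar", 14)  # the slide's question: 14 slots in between 13 and 16
print(index["Caesar"])     # [13, 14, 16]
```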
Inverted index construction
Documents to be indexed:           Friends, Romans, countrymen.
        |
    Tokenizer
        v
Token stream (remove stop words):  Friends Romans Countrymen
        |
    Linguistic modules
        v
Modified tokens (stemming):        friend roman countryman
        |
    Indexer
        v
Inverted index:
  friend     -> 2 4
  roman      -> 1 2
  countryman -> 13 16
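A toy rendering of this pipeline (the stop list and suffix-stripping "stemmer" here are deliberately crude stand-ins; real linguistic modules use Porter-style rules and would also map "countrymen" to "countryman"):

```python
import re

STOP_WORDS = {"the", "a", "an", "and", "of"}  # toy stop list

def tokenize(text):
    """Split raw text into lowercase tokens, dropping stop words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token):
    """Crude suffix stripping in place of real stemming rules."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) - len(suffix) > 2:
            return token[:-len(suffix)]
    return token

print([stem(t) for t in tokenize("Friends, Romans, countrymen.")])
# ['friend', 'roman', 'countrymen']  (a real stemmer handles the last one)
```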
Indexer steps
Form the sequence of (modified token, document ID) pairs.

Doc 1: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.

Term      Doc #
I         1
did       1
enact     1
julius    1
caesar    1
I         1
was       1
killed    1
i'        1
the       1
capitol   1
brutus    1
killed    1
me        1
so        2
let       2
it        2
be        2
with      2
caesar    2
the       2
noble     2
brutus    2
hath      2
told      2
you       2
caesar    2
was       2
ambitious 2
Indexer steps
Sort by terms: the core indexing step.

Term      Doc #
ambitious 2
be        2
brutus    1
brutus    2
capitol   1
caesar    1
caesar    2
caesar    2
did       1
enact     1
hath      2
I         1
I         1
i'        1
it        2
julius    1
killed    1
killed    1
let       2
me        1
noble     2
so        2
the       1
the       2
told      2
you       2
was       1
was       2
with      2
Indexer steps
Multiple term entries in a single document are merged, and frequency information is added.

Term      Doc #  Term freq
ambitious 2      1
be        2      1
brutus    1      1
brutus    2      1
capitol   1      1
caesar    1      1
caesar    2      2
did       1      1
enact     1      1
hath      2      1
I         1      2
i'        1      1
it        2      1
julius    1      1
killed    1      2
let       2      1
me        1      1
noble     2      1
so        2      1
the       1      1
the       2      1
told      2      1
you       2      1
was       1      1
was       2      1
with      2      1

Why frequency? Will discuss later.
Indexer steps
The result is split into a Dictionary file and a Postings file.

Dictionary:
Term      N docs  Coll freq
ambitious 1       1
be        1       1
brutus    2       2
capitol   1       1
caesar    2       3
did       1       1
enact     1       1
hath      1       1
I         1       2
i'        1       1
it        1       1
julius    1       1
killed    1       2
let       1       1
me        1       1
noble     1       1
so        1       1
the       2       2
told      1       1
you       1       1
was       2       2
with      1       1

Postings file (one entry per posting, in dictionary order):
Doc #  Freq
2      1
2      1
1      1
2      1
1      1
1      1
2      2
1      1
1      1
2      1
1      2
1      1
2      1
1      1
1      2
2      1
1      1
2      1
2      1
1      1
2      1
2      1
2      1
1      1
2      1
2      1
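The three indexer steps, condensed into one sketch (the dictionary entry mirrors slide 19: number of docs, collection frequency, and an offset into the postings file; the function name and layout are mine):

```python
from itertools import groupby

def build_index(pairs):
    """Sort (term, docID) pairs, merge per-doc duplicates into frequencies,
    then split into a dictionary and a flat postings file."""
    pairs.sort()                                    # core indexing step
    dictionary, postings_file = {}, []
    for term, group in groupby(pairs, key=lambda p: p[0]):
        doc_ids = [d for _, d in group]
        entries = [(d, doc_ids.count(d)) for d in sorted(set(doc_ids))]
        dictionary[term] = (len(entries),           # N docs
                            len(doc_ids),           # collection freq
                            len(postings_file))     # offset into postings file
        postings_file.extend(entries)               # (doc #, term freq) pairs
    return dictionary, postings_file

docs = {
    1: "i did enact julius caesar i was killed i' the capitol brutus killed me",
    2: "so let it be with caesar the noble brutus hath told you caesar was ambitious",
}
pairs = [(term, d) for d, text in docs.items() for term in text.split()]
dictionary, postings_file = build_index(pairs)
print(dictionary["caesar"])  # (2, 3, ...) -> 2 docs, collection freq 3
```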
Query processing: AND
Consider processing the query: Brutus AND Caesar
- Locate Brutus in the Dictionary; retrieve its postings.
- Locate Caesar in the Dictionary; retrieve its postings.
- "Merge" the two postings:
  Brutus -> 2 4 8 16 32 64 128
  Caesar -> 1 2 3 5 8 13 21 34
The merge
Walk through the two postings simultaneously, in time linear in the total number of postings entries:
  Brutus -> 2 4 8 16 32 64 128
  Caesar -> 1 2 3 5 8 13 21 34
  Result -> 2 8
If the list lengths are x and y, the merge takes O(x+y) operations. Crucial: postings sorted by docID.
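The merge, transcribed directly into Python (a sketch; any language works the same way):

```python
def intersect(p1, p2):
    """Intersect two docID-sorted postings lists in O(x + y) time."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1                      # advance whichever pointer is behind
        else:
            j += 1
    return answer

print(intersect([2, 4, 8, 16, 32, 64, 128],
                [1, 2, 3, 5, 8, 13, 21, 34]))  # [2, 8]
```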
Boolean queries: Exact match
The Boolean Retrieval model answers any query that is a Boolean expression: Boolean queries use AND, OR and NOT to join query terms.
- Views each document as a set of words
- Is precise: a document matches the condition or it does not
Primary commercial retrieval tool for 3 decades. Professional searchers (e.g., lawyers) still like Boolean queries: you know exactly what you're getting.
Boolean queries: More general merges
Adapt the merge for the queries:
- Brutus AND NOT Caesar
- Brutus OR NOT Caesar
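AND NOT adapts the same two-pointer walk: it stays O(x+y) and never materializes the (huge) complement of Caesar's list. OR NOT is the problematic one, since NOT Caesar alone matches nearly every document. A sketch of the AND NOT case:

```python
def and_not(p1, p2):
    """p1 AND NOT p2 over docID-sorted postings, in O(x + y) time."""
    answer, i, j = [], 0, 0
    while i < len(p1):
        if j == len(p2) or p1[i] < p2[j]:
            answer.append(p1[i]); i += 1   # in p1, with no match left in p2
        elif p1[i] == p2[j]:
            i += 1; j += 1                 # in both lists: excluded
        else:
            j += 1
    return answer

print(and_not([2, 4, 8, 16, 32, 64, 128],
              [1, 2, 3, 5, 8, 13, 21, 34]))  # [4, 16, 32, 64, 128]
```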
Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
Query optimization
What is the best order for query processing?
Consider a query that is an AND of t terms. For each of the t terms, get its postings, then AND them together.

Query: Brutus AND Calpurnia AND Caesar
  Brutus    -> 2 4 8 16 32 64 128
  Calpurnia -> 1 2 3 5 8 13 21 34
  Caesar    -> 13 16
Query optimization example
Process in order of increasing freq: start with the smallest set, then keep cutting further. (This is why we kept freq in the dictionary.)

  Brutus    -> 2 4 8 16 32 64 128
  Calpurnia -> 1 2 3 5 8 13 21 34
  Caesar    -> 13 16

Execute the query as (Caesar AND Brutus) AND Calpurnia.
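A sketch of the optimized multi-term AND, reusing intersect() from the merge sketch above; processing in increasing document frequency means intermediate results can only shrink:

```python
def intersect_many(terms, index, freq):
    """AND several terms together, rarest postings list first."""
    result = None
    for t in sorted(terms, key=lambda t: freq[t]):  # increasing doc freq
        postings = index[t]
        result = postings if result is None else intersect(result, postings)
        if not result:
            return []                               # can stop early when empty
    return result

freq = {"Caesar": 2, "Brutus": 7, "Calpurnia": 8}   # doc counts from the dictionary
# Evaluates the query above as (Caesar AND Brutus) AND Calpurnia.
```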
More general optimization
e.g., (madding OR crowd) AND (ignoble OR strife)
- Get freq's for all terms.
- Estimate the size of each OR by the sum of its freq's (conservative).
- Process in increasing order of OR sizes.
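The same idea in code, with made-up frequencies for illustration; each OR clause's size is bounded above by the sum of its terms' freqs, and clauses are processed cheapest first:

```python
def estimated_size(or_clause, freq):
    """Conservative upper bound on |t1 OR t2 OR ...|: the sum of term
    freqs (ignores any overlap between the postings lists)."""
    return sum(freq[t] for t in or_clause)

freq = {"madding": 10, "crowd": 500, "ignoble": 3, "strife": 40}  # hypothetical
query = [("madding", "crowd"), ("ignoble", "strife")]             # AND of ORs
plan = sorted(query, key=lambda clause: estimated_size(clause, freq))
print(plan)  # [('ignoble', 'strife'), ('madding', 'crowd')]: do the cheap OR first
```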
What's ahead in IR? Beyond term search
- What about phrases? "Stanford University"
- Proximity: find Gates NEAR Microsoft. Need the index to capture position information in docs. More later.
- Zones in documents: find documents with (author = Ullman) AND (text contains automata).
Updates and Text Search
Text search engines are designed to be query-mostly:
- Deletes and modifications are rare
- Can postpone updates (nobody notices, no transactions!)
- Updates done in batch (rebuild the index)
Can't afford to go off-line for an update?
- Create a 2nd index on a separate machine, then replace the 1st index with the 2nd, so there are no concurrency control problems (see the sketch below)
- Can compress to a search-friendly, update-unfriendly format
This is the main reason why text search engines and DBMSs are usually separate products. Also, text-search engines tune that one "SQL query" to death!
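One way to picture the rebuild-and-swap scheme (purely illustrative; the class and method names are mine): queries keep reading the current index while a fresh one is built elsewhere, and the cut-over is a single reference assignment, so readers never block.

```python
class SearchService:
    """Serves queries from whichever index is currently live."""

    def __init__(self, index):
        self._index = index          # the live, read-only index

    def search(self, term):
        return self._index.get(term, [])

    def swap_in(self, new_index):
        # Built offline (e.g., on another machine); replacing the reference
        # is one step, so readers never see a half-updated index.
        self._index = new_index
```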
Ranking search results
Boolean queries give inclusion or exclusion of docs. Often we want to rank or group results:
- Need to measure proximity from query to each doc
- Need to decide whether docs presented to the user are singletons, or a group of docs covering various aspects of the query
IR vs. databases: Structured vs. unstructured data
Structured data tends to refer to information in "tables":

Employee  Manager  Salary
Smith     Jones    50000
Chang     Smith    60000
Ivy       Smith    50000

Typically allows numerical range and exact match (for text) queries, e.g., Salary < 60000 AND Manager = Smith.
Unstructured data
Typically refers to free text. Allows:
- Keyword queries, including operators
- More sophisticated "concept" queries, e.g., find all web pages dealing with drug abuse
This is the classic model for searching text documents.
Semi-structured data
In fact almost no data is "unstructured". E.g., this slide has distinctly identified zones such as the Title and Bullets.
Facilitates "semi-structured" search such as:
  Title contains data AND Bullets contain search
... to say nothing of linguistic structure.
More sophisticated semi-structured search
Title is about Object Oriented Programming AND Author something like stro*rup
(where * is the wild-card operator)
Issues:
- How do you process "about"?
- How do you rank results?
This is the focus of XML search.
Clustering and classification
- Clustering: given a set of docs, group them into clusters based on their contents.
- Classification: given a set of topics, plus a new doc D, decide which topic(s) D belongs to.
The web and its challenges
- Unusual and diverse documents
- Unusual and diverse users, queries, information needs
- Beyond terms: exploit ideas from social networks, link analysis, clickstreams, ...
How do search engines work? And how can we make them better?
More sophisticated information retrieval
- Cross-language information retrieval
- Question answering
- Summarization
- Text mining
- ...
Lots More in IR ...
- How to "rank" the output? I.e., how to compute the relevance of each result item w.r.t. the query? Doing this well and efficiently is hard!
- Other ways to help users paw through the output? Document "clustering", document visualization
- How to take advantage of hyperlinks? Really cute tricks here!
- How to use compression for better I/O performance? E.g., making RID lists smaller; try to make things fit in RAM!
- How to deal with synonyms, misspellings, abbreviations?
- How to write a good web crawler?