CS60092: Information Retrieval
Sourangshu Bhattacharya
Recap
• Information Retrieval is:
  – Finding documents
  – Containing unstructured data
  – In a collection of documents
  – Which are relevant
  – To a query.
Unstructured data in 1620
• Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
• One could grep all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia?
• Why is that not the answer?
  – Slow (for large corpora)
  – NOT Calpurnia is non-trivial
  – Other operations (e.g., find the word Romans near countrymen) not feasible
  – Ranked retrieval (best documents to return): later lectures
Sec. 1.1
Term-document incidence matrices
1 if the play contains the word, 0 otherwise:

            Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony               1                  1             0          0       0        1
Brutus               1                  1             0          1       0        0
Caesar               1                  1             0          1       1        1
Calpurnia            0                  1             0          0       0        0
Cleopatra            1                  0             0          0       0        0
mercy                1                  0             1          1       1        1
worser               1                  0             1          1       1        0

Query: Brutus AND Caesar BUT NOT Calpurnia
Sec. 1.1
Incidence vectors
• So we have a 0/1 vector for each term.
• To answer the query: take the vectors for Brutus, Caesar and Calpurnia (complemented), then bitwise AND them:
    110100 AND
    110111 AND
    101111 =
    100100
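A minimal sketch of this computation, using Python integers as the bit vectors (illustrative, not from the course):

    # Incidence vectors from the matrix above, one bit per play.
    brutus    = 0b110100
    caesar    = 0b110111
    calpurnia = 0b010000

    # Complement Calpurnia within the 6-play universe, then bitwise AND.
    result = brutus & caesar & (~calpurnia & 0b111111)
    print(f"{result:06b}")   # 100100 -> Antony and Cleopatra, Hamlet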
Sec. 1.1
Answers to query
• Antony and Cleopatra, Act III, Scene ii
  Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus,
  When Antony found Julius Caesar dead,
  He cried almost to roaring; and he wept
  When at Philippi he found Brutus slain.
• Hamlet, Act III, Scene ii
  Lord Polonius: I did enact Julius Caesar: I was killed i’ the Capitol; Brutus killed me.
Sec. 1.1
Bigger collections
• Consider N = 1 million documents, each with about 1000 words.
• Avg 6 bytes/word including spaces/punctuation
  – 6GB of data in the documents.
• Say there are M = 500K distinct terms among these.
Sec. 1.1
Can’t build the matrix
• 500K x 1M matrix has half a trillion 0's and 1's.
• But it has no more than one billion 1's. Why? The 1M documents contain about 1000 words each, i.e., at most 10^9 word occurrences in total.
  – The matrix is extremely sparse.
• What's a better representation?
  – We only record the 1 positions.
Sec. 1.1
Introduction to Information Retrieval

The Inverted Index: the key data structure underlying modern IR
Inverted index
• For each term t, we must store a list of all documents that contain t.
  – Identify each doc by a docID, a document serial number
• Can we use fixed-size arrays for this? What happens if the word Caesar is added to document 14?
Sec. 1.2
Brutus    → 1  2  4  11  31  45  173  174
Caesar    → 1  2  4  5  6  16  57  132
Calpurnia → 2  31  54  101
Inverted index
• We need variable-size postings lists
  – On disk, a continuous run of postings is normal and best
  – In memory, can use linked lists or variable length arrays
    • Some tradeoffs in size/ease of insertion
Each dictionary term points to its postings list; each docID in that list is a posting. Postings are sorted by docID (more later on why).
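A minimal in-memory sketch, using Python lists as the variable-length arrays (postings and add_posting are illustrative names; docIDs come from the figure above). bisect keeps each list sorted by docID even when, as the slide asks, Caesar is added to document 14:

    import bisect

    # In-memory postings: one variable-length, sorted list of docIDs per term.
    postings = {
        "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
        "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
        "Calpurnia": [2, 31, 54, 101],
    }

    def add_posting(term, doc_id):
        """Insert doc_id into term's postings list, keeping it sorted."""
        plist = postings.setdefault(term, [])
        i = bisect.bisect_left(plist, doc_id)
        if i == len(plist) or plist[i] != doc_id:   # skip duplicates
            plist.insert(i, doc_id)

    add_posting("Caesar", 14)   # the slide's question: Caesar added to doc 14
    print(postings["Caesar"])   # [1, 2, 4, 5, 6, 14, 16, 57, 132]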
Sec. 1.2
Inverted index construction

Documents to be indexed:  Friends, Romans, countrymen.
        ↓ Tokenizer
Token stream:             Friends  Romans  Countrymen
        ↓ Linguistic modules
Modified tokens:          friend  roman  countryman
        ↓ Indexer
Inverted index:           friend     → 2  4
                          roman      → 1  2
                          countryman → 13  16
Sec. 1.2
Initial stages of text processing
• Tokenization
  – Cut character sequence into word tokens
    • Deal with "John's", a state-of-the-art solution
• Normalization
  – Map text and query term to same form
    • You want U.S.A. and USA to match
• Stemming
  – We may wish different forms of a root to match
    • authorize, authorization
• Stop words
  – We may omit very common words (or not)
    • the, a, to, of
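A toy sketch of these stages in Python (not the course's actual pipeline; the regex "stemmer" is a crude stand-in for something like the Porter stemmer, and it does not attempt the countrymen → countryman mapping shown in the figure above):

    import re

    STOP_WORDS = {"the", "a", "to", "of"}

    def preprocess(text):
        """Tokenize, case-fold, drop stop words, crudely stem."""
        tokens = re.findall(r"[a-z0-9]+", text.lower())       # tokenize + normalize
        tokens = [t for t in tokens if t not in STOP_WORDS]   # stop-word removal
        return [re.sub(r"(ization|ize|s)$", "", t) for t in tokens]

    print(preprocess("Friends, Romans, countrymen."))
    # ['friend', 'roman', 'countrymen']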
Indexer steps: Token sequence
• Sequence of (Modified token, Document ID) pairs.

Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious
Sec. 1.2
Indexer steps: Sort
• Sort by terms
  – And then docID
• This is the core indexing step.
Sec. 1.2
Indexer steps: Dictionary & Postings
• Multiple term entries in a single document are merged.
• Split into Dictionary and Postings
• Doc. frequency information is added. (Why frequency? Will discuss later.)
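The three indexer steps fit in a few lines of Python. A minimal sketch; build_index and the pre-tokenized two-document collection (Doc 1 and Doc 2 from above) are illustrative:

    from itertools import groupby

    def build_index(docs):
        """docs: docID -> list of normalized tokens.
        Returns term -> (doc frequency, sorted postings list)."""
        # Step 1: sequence of (term, docID) pairs
        pairs = [(t, d) for d, tokens in docs.items() for t in tokens]
        # Step 2: sort by term, then docID (the core indexing step)
        pairs.sort()
        # Step 3: merge duplicates, split into dictionary and postings,
        # recording the document frequency for each term
        index = {}
        for term, group in groupby(pairs, key=lambda p: p[0]):
            plist = sorted({d for _, d in group})
            index[term] = (len(plist), plist)
        return index

    docs = {
        1: "i did enact julius caesar i was killed i the capitol brutus killed me".split(),
        2: "so let it be with caesar the noble brutus hath told you caesar was ambitious".split(),
    }
    print(build_index(docs)["caesar"])   # (2, [1, 2]): doc frequency 2, postings [1, 2]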
Sec. 1.2
Where do we pay in storage?
• Dictionary: the terms and their counts
• Postings: pointers and the lists of docIDs
• IR system implementation:
  – How do we index efficiently?
  – How much storage do we need?

Sec. 1.2
Introduction to Information Retrieval

Query processing with an inverted index
The index we just built
• How do we process a query? (our focus here)
  – Later: what kinds of queries can we process?
Sec. 1.3
Query processing: AND
• Consider processing the query: Brutus AND Caesar
  – Locate Brutus in the Dictionary; retrieve its postings.
  – Locate Caesar in the Dictionary; retrieve its postings.
  – "Merge" the two postings (intersect the document sets):

    Brutus → 2  4  8  16  32  64  128
    Caesar → 1  2  3  5  8  13  21  34
Sec. 1.3
The merge
• Walk through the two postings simultaneously, in time linear in the total number of postings entries:

    Brutus → 2  4  8  16  32  64  128
    Caesar → 1  2  3  5  8  13  21  34

• If the list lengths are x and y, the merge takes O(x+y) operations. Crucial: postings sorted by docID.
Sec. 1.3
Intersecting two postings lists (a "merge" algorithm)
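The merge the heading refers to (shown as pseudocode on the original slide) can be rendered directly in Python, with postings as plain sorted lists:

    def intersect(p1, p2):
        """Intersect two postings lists sorted by docID in O(x + y) time."""
        answer = []
        i = j = 0
        while i < len(p1) and j < len(p2):
            if p1[i] == p2[j]:          # docID in both lists: keep it
                answer.append(p1[i])
                i += 1
                j += 1
            elif p1[i] < p2[j]:         # advance the pointer at the smaller docID
                i += 1
            else:
                j += 1
        return answer

    brutus = [2, 4, 8, 16, 32, 64, 128]
    caesar = [1, 2, 3, 5, 8, 13, 21, 34]
    print(intersect(brutus, caesar))    # [2, 8]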
Boolean queries: More general merges
• Exercise: Adapt the merge for the queries:
    Brutus AND NOT Caesar
    Brutus OR NOT Caesar
• Can we still run through the merge in time O(x+y)? What can we achieve?
Sec. 1.3
Merging
• What about an arbitrary Boolean formula?
    (Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
• Can we always merge in "linear" time?
  – Linear in what?
• Can we do better?
Sec. 1.3
Query optimization
• What is the best order for query processing?
• Consider a query that is an AND of n terms.
• For each of the n terms, get its postings, then AND them together.

    Brutus    → 2  4  8  16  32  64  128
    Caesar    → 1  2  3  5  8  16  21  34
    Calpurnia → 13  16

Query: Brutus AND Calpurnia AND Caesar
Sec. 1.3
Query optimization example
• Process in order of increasing freq:
  – start with the smallest set, then keep cutting further.
  – This is why we kept document freq. in the dictionary.
• Execute the query as (Calpurnia AND Brutus) AND Caesar.
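A sketch of frequency-ordered AND processing; process_and_query is an illustrative name, and the doc frequencies and postings come from the figure above. Set intersection is used for brevity; the sorted-list merge shown earlier works just as well:

    def process_and_query(terms, index):
        """AND n query terms together, processing the rarest term first."""
        # index maps term -> (doc frequency, sorted postings list)
        plists = sorted((index[t] for t in terms), key=lambda p: p[0])
        result = plists[0][1]                        # start with the smallest set
        for _, plist in plists[1:]:
            result = sorted(set(result) & set(plist))
            if not result:                           # early exit: already empty
                break
        return result

    index = {
        "brutus":    (7, [2, 4, 8, 16, 32, 64, 128]),
        "caesar":    (8, [1, 2, 3, 5, 8, 16, 21, 34]),
        "calpurnia": (2, [13, 16]),
    }
    print(process_and_query(["brutus", "calpurnia", "caesar"], index))  # [16]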
Sec. 1.3
More general optimization
• e.g., (madding OR crowd) AND (ignoble OR strife)
• Get doc. freq.'s for all terms.
• Estimate the size of each OR by the sum of its doc. freq.'s (conservative).
• Process in increasing order of OR sizes.
Sec. 1.3
Exercise
• Recommend a query processing order for:
    (tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)
• Which two terms should we process first?

    Term          Freq
    eyes          213,312
    kaleidoscope   87,009
    marmalade     107,913
    skies         271,658
    tangerine      46,653
    trees         316,812
Query processing exercises
• Exercise: If the query is friends AND romans AND (NOT countrymen), how could we use the freq of countrymen?
• Exercise: Extend the merge to an arbitrary Boolean query. Can we always guarantee execution in time linear in the total postings size?
• Hint: Begin with the case of a Boolean formula query: in this, each query term appears only once in the query.
Introduction to Information Retrieval

Phrase queries and positional indexes
Boolean queries: Exact match
• The Boolean retrieval model is being able to ask a query that is a Boolean expression:
  – Boolean queries are queries using AND, OR and NOT to join query terms
    • Views each document as a set of words
    • Is precise: document matches condition or not.
  – Perhaps the simplest model to build an IR system on
• Primary commercial retrieval tool for 3 decades.
• Many search systems you still use are Boolean:
  – Email, library catalog, Mac OS X Spotlight
Sec. 1.3
Example: WestLaw (http://www.westlaw.com/)
• Largest commercial (paying subscribers) legal search service (started 1975; ranking added 1992; new federated search added 2010)
• Tens of terabytes of data; ~700,000 users
• Majority of users still use boolean queries
• Example query:
  – What is the statute of limitations in cases involving the federal tort claims act?
  – LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
• /3 = within 3 words, /S = in same sentence
Sec. 1.4
Example: WestLaw (http://www.westlaw.com/)
• Another example query:
  – Requirements for disabled people to be able to access a workplace
  – disabl! /p access! /s work-site work-place (employment /3 place)
• Note that SPACE is disjunction, not conjunction!
• Long, precise queries; proximity operators; incrementally developed; not like web search
• Many professional searchers still like Boolean search
  – You know exactly what you are getting
• But that doesn't mean it actually works better...
Phrase queries
• We want to be able to answer queries such as "stanford university" – as a phrase
• Thus the sentence "I went to university at Stanford" is not a match.
  – The concept of phrase queries has proven easily understood by users; one of the few "advanced search" ideas that works
  – Many more queries are implicit phrase queries
• For this, it no longer suffices to store only <term : docs> entries
Sec. 2.4
A first attempt: Biword indexes
• Index every consecutive pair of terms in the text as a phrase
• For example the text "Friends, Romans, Countrymen" would generate the biwords:
  – friends romans
  – romans countrymen
• Each of these biwords is now a dictionary term
• Two-word phrase query processing is now immediate.
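A minimal sketch of biword generation (biwords is an illustrative helper, not a standard API):

    def biwords(tokens):
        """Turn a token sequence into its consecutive word pairs."""
        return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

    print(biwords(["friends", "romans", "countrymen"]))
    # ['friends romans', 'romans countrymen']
    # Each pair becomes a dictionary term; a two-word phrase query is then
    # a single postings lookup.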
Sec. 2.4.1
Longer phrase queries
• Longer phrases can be processed by breaking them down
• stanford university palo alto can be broken into the Boolean query on biwords:
    stanford university AND university palo AND palo alto
• Without the docs, we cannot verify that the docs matching the above Boolean query do contain the phrase. Can have false positives!
Sec. 2.4.1
Issues for biword indexes
• False positives, as noted before
• Index blowup due to bigger dictionary
  – Infeasible for more than biwords, big even for them
• Biword indexes are not the standard solution (for all biwords) but can be part of a compound strategy
Sec. 2.4.1
Solution 2: Positional indexes
• In the postings, store, for each term, the position(s) in which tokens of it appear:
    <term, number of docs containing term; doc1: position1, position2 ... ; doc2: position1, position2 ... ; etc.>
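In memory this might look like the following sketch; the nested-dict layout is an assumption for illustration, and the data mirrors the <be: ...> example on the next slide:

    # term -> (number of docs containing term, {docID: sorted positions})
    positional_index = {
        "be": (993427, {
            1: [7, 18, 33, 72, 86, 231],
            2: [3, 149],
            4: [17, 191, 291, 430, 434],
            5: [363, 367],            # truncated in the slide
        }),
    }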
Sec. 2.4.2
Positional index example
• For phrase queries, we use a merge algorithm recursively at the document level
• But we now need to deal with more than just equality

    <be: 993427; 1: 7, 18, 33, 72, 86, 231; 2: 3, 149; 4: 17, 191, 291, 430, 434; 5: 363, 367, ...>

Which of docs 1, 2, 4, 5 could contain "to be or not to be"?
Sec. 2.4.2
Processing a phrase query
• Extract inverted index entries for each distinct term: to, be, or, not.
• Merge their doc:position lists to enumerate all positions with "to be or not to be".
  – to: 2: 1, 17, 74, 222, 551; 4: 8, 16, 190, 429, 433; 7: 13, 23, 191; ...
  – be: 1: 17, 19; 4: 17, 191, 291, 430, 434; 5: 14, 19, 101; ...
• Same general method for proximity searches
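A simplified two-term sketch of that document-level merge with a position check (phrase_intersect is illustrative; the textbook's positional intersection generalizes this to within-k proximity). Positions for to and be are the ones listed above:

    def phrase_intersect(p1, p2):
        """Docs where some occurrence of term 2 immediately follows term 1.
        p1, p2: dict of docID -> sorted positions list."""
        answer = []
        for doc_id in sorted(p1.keys() & p2.keys()):    # docs with both terms
            pos2 = set(p2[doc_id])
            if any(p + 1 in pos2 for p in p1[doc_id]):  # adjacency check
                answer.append(doc_id)
        return answer

    to = {2: [1, 17, 74, 222, 551], 4: [8, 16, 190, 429, 433], 7: [13, 23, 191]}
    be = {1: [17, 19], 4: [17, 191, 291, 430, 434], 5: [14, 19, 101]}
    print(phrase_intersect(to, be))   # [4]: "to be" at positions 16-17, 190-191, ...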
Sec. 2.4.2
Proximity queries
• LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
  – Again, here, /k means "within k words of".
• Clearly, positional indexes can be used for such queries; biword indexes cannot.
• Exercise: Adapt the linear merge of postings to handle proximity queries. Can you make it work for any value of k?
Sec. 2.4.2
Positional index size
• A positional index expands postings storage substantially
  – Even though indices can be compressed
• Nevertheless, a positional index is now standardly used because of the power and usefulness of phrase and proximity queries ... whether used explicitly or implicitly in a ranking retrieval system.
Sec. 2.4.2
Positional index size
• Need an entry for each occurrence, not just once per document
• Index size depends on average document size. Why?
  – Average web page has <1000 terms
  – SEC filings, books, even some epic poems ... easily 100,000 terms
• Consider a term with frequency 0.1%:

    Document size   Postings   Positional postings
    1,000           1          1
    100,000         1          100
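Where these numbers come from: with term frequency 0.1%, a document of length L is expected to contain about 0.001 × L occurrences of the term, so a 1,000-term document contributes roughly 0.001 × 1,000 = 1 positional entry and a 100,000-term document roughly 0.001 × 100,000 = 100, while either one contributes just a single docID to a non-positional postings list.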
Sec. 2.4.2
Rules of thumb
• A positional index is 2–4 times as large as a non-positional index
• Positional index size is 35–50% of the volume of the original text
  – Caveat: all of this holds for "English-like" languages
Sec. 2.4.2
Combination schemes
• These two approaches can be profitably combined
  – For particular phrases ("Michael Jackson", "Britney Spears") it is inefficient to keep on merging positional postings lists
    • Even more so for phrases like "The Who"
• Williams et al. (2004) evaluate a more sophisticated mixed indexing scheme
  – A typical web query mixture was executed in ¼ of the time of using just a positional index
  – It required 26% more space than having a positional index alone
Sec. 2.4.3