Hinrich Schütze and Christina Lioma, Lecture 11: Probabilistic Information Retrieval


Page 1: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval

Introduction to Information Retrieval

Hinrich Schütze and Christina Lioma

Lecture 11: Probabilistic Information Retrieval

Page 2: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Overview

❶ Probabilistic Approach to Retrieval

❷ Basic Probability Theory

❸ Probability Ranking Principle

❹ Appraisal & Extensions


Page 3: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Outline

❶ Probabilistic Approach to Retrieval

❷ Basic Probability Theory

❸ Probability Ranking Principle

❹ Appraisal & Extensions


Page 4: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probabilistic Approach to Retrieval

Given a user information need (represented as a query) and a collection of documents, a system must determine how well the documents satisfy the query

Boolean or vector space models of IR: query-document matching done in a formally defined but semantically imprecise calculus of index terms

Page 5: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probabilistic Approach to Retrieval

An IR system has an uncertain understanding of the user query, and makes an uncertain guess of whether a document satisfies the query

Probability theory provides a principled foundation for such reasoning under uncertainty

Probabilistic models exploit this foundation to estimate how likely it is that a document is relevant to a query

Page 6: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Outline

❶ Probabilistic Approach to Retrieval

❷ Basic Probability Theory

❸ Probability Ranking Principle

❹ Appraisal & Extensions


Page 7: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Basic Probability Theory

For events A and B:

Joint probability P(A, B) of both events occurring

Conditional probability P(A|B) of event A occurring given that event B has occurred

Chain rule gives the fundamental relationship between joint and conditional probabilities:

P(A, B) = P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A)

Similarly for the complement of an event Ā:

P(Ā, B) = P(B|Ā)P(Ā)

Page 8: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Basic Probability Theory

Partition rule: if B can be divided into an exhaustive set of disjoint subcases, then P(B) is the sum of the probabilities of the subcases. A special case of this rule gives:

P(B) = P(A, B) + P(Ā, B)

Page 9: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Basic Probability Theory

Bayes' Rule for inverting conditional probabilities:

P(A|B) = P(B|A)P(A) / P(B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|Ā)P(Ā)]

Can be thought of as a way of updating probabilities:

Start off with prior probability P(A) (initial estimate of how likely event A is in the absence of any other information)

Derive a posterior probability P(A|B) after having seen the evidence B, based on the likelihood of B occurring in the two cases that A does or does not hold
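As a quick illustration (not from the slides; the function name and the numbers are invented for the example), the prior-to-posterior update in a few lines of Python:

```python
# Minimal sketch of Bayesian updating (illustrative numbers, not from the lecture).
# prior_a = P(A); likelihoods are P(B|A) and P(B|not A).

def posterior(prior_a, likelihood_b_given_a, likelihood_b_given_not_a):
    """Return P(A|B) via Bayes' Rule, expanding P(B) with the partition rule."""
    evidence = likelihood_b_given_a * prior_a + likelihood_b_given_not_a * (1 - prior_a)
    return likelihood_b_given_a * prior_a / evidence

# Example: prior P(A) = 0.3, P(B|A) = 0.8, P(B|not A) = 0.2
print(posterior(0.3, 0.8, 0.2))  # ~0.632, i.e. seeing B raises our belief in A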

Page 10: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Basic Probability Theory

Odds of an event provide a kind of multiplier for how probabilities change:

Odds: O(A) = P(A) / P(Ā) = P(A) / (1 − P(A))
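A small numeric illustration (my own, not from the slides) of converting between a probability and its odds:

```python
def odds(p):
    """O(A) = P(A) / P(not A) = P(A) / (1 - P(A))."""
    return p / (1.0 - p)

def prob_from_odds(o):
    """Invert the odds back into a probability."""
    return o / (1.0 + o)

print(odds(0.75))           # 3.0  (3-to-1 odds)
print(prob_from_odds(3.0))  # 0.75
```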

Page 11: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Outline

❶ Probabilistic Approach to Retrieval

❷ Basic Probability Theory

❸ Probability Ranking Principle

❹ Appraisal & Extensions


Page 12: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


The Document Ranking Problem

Ranked retrieval setup: given a collection of documents, the user issues a query, and an ordered list of documents is returned

Assume a binary notion of relevance: Rd,q is a random dichotomous variable, such that

Rd,q = 1 if document d is relevant w.r.t. query q

Rd,q = 0 otherwise

Probabilistic ranking orders documents decreasingly by their estimated probability of relevance w.r.t. query: P(R = 1|d, q)

Page 13: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probability Ranking Principle (PRP)

If the retrieved documents are ranked decreasingly on their probability of relevance, then the effectiveness of the system will be the best that is obtainable

If [the IR] system’s response to each [query] is a ranking of the documents [...] in order of decreasing probability of relevance to the [query], where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose, the overall effectiveness of the system to its user will be the best that is obtainable on the basis of those data

Page 14: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probability Ranking Principle (PRP)

Suppose, instead, that we assume a model of retrieval costs. Let C1 be the cost of not retrieving a relevant document and C2 the cost of retrieval of a nonrelevant document. Then the Probability Ranking Principle says that if, for a specific document d and for all documents d′ not yet retrieved,

C2 · P(R = 0|d) − C1 · P(R = 1|d) ≤ C2 · P(R = 0|d′) − C1 · P(R = 1|d′)

then d is the next document to be retrieved. Such a model gives a formal framework where we can model differential costs of false positives and false negatives and even system performance issues at the modeling stage, rather than simply at the evaluation stage
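As an illustration (my own sketch; the document identifiers and probability estimates are made up), the selection rule amounts to picking, among the not-yet-retrieved documents, the one with the lowest expected cost:

```python
# Sketch of the cost-based PRP selection rule (hypothetical candidates and estimates).
# C1: cost of not retrieving a relevant document
# C2: cost of retrieving a nonrelevant document

def expected_cost(p_relevant, c1, c2):
    """C2 * P(R=0|d) - C1 * P(R=1|d): lower is better."""
    return c2 * (1.0 - p_relevant) - c1 * p_relevant

def next_document(candidates, c1=1.0, c2=1.0):
    """candidates: dict mapping doc id -> estimated P(R=1|d, q)."""
    return min(candidates, key=lambda d: expected_cost(candidates[d], c1, c2))

docs = {"d1": 0.8, "d2": 0.4, "d3": 0.6}   # made-up estimates of P(R=1|d, q)
print(next_document(docs))                  # 'd1': with equal costs this is just the PRP ordering
```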

Page 15: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Binary Independence Model (BIM)

Traditionally used with the PRP

Assumptions:

‘Binary’ (equivalent to Boolean): documents and queries represented as binary term incidence vectors

E.g., document d represented by vector x = (x1, . . . , xM), where xt = 1 if term t occurs in d and xt = 0 otherwise

Different documents may have the same vector representation

‘Independence’: no association between terms.
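A toy Python sketch (my own; the vocabulary and the whitespace/lowercase tokenization are assumptions) of the binary term incidence representation described above:

```python
# Toy binary term incidence vector over an assumed fixed vocabulary.
vocabulary = ["information", "retrieval", "probability", "ranking"]

def incidence_vector(document_text, vocab=vocabulary):
    """xt = 1 if term t occurs in the document, 0 otherwise."""
    terms = set(document_text.lower().split())
    return [1 if t in terms else 0 for t in vocab]

print(incidence_vector("Probabilistic models rank documents by probability of relevance"))
# [0, 0, 1, 0] -- note that different documents can map to the same vector
```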

Page 16: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Binary Independence Model

To make a probabilistic retrieval strategy precise, need to estimate how terms in documents contribute to relevance:

Find measurable statistics (term frequency, document frequency, document length) that affect judgments about document relevance

Combine these statistics to estimate the probability of document relevance

Order documents by decreasing estimated probability of relevance P(R|d, q)

Assume that the relevance of each document is independent of the relevance of other documents.

Page 17: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Binary Independence Model


P(R|d, q) is modelled using term incidence vectors as P(R|x, q)

P(x|R = 1, q) and P(x|R = 0, q): probability that if a relevant or nonrelevant document is retrieved, then that document's representation is x


Page 19: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Binary Independence Model

Estimate P(R = 1|q) and P(R = 0|q) from the percentage of relevant documents in the collection.

Since a document is either relevant or nonrelevant to a query, we must have that:

P(R = 1|x, q) + P(R = 0|x, q) = 1

Page 20: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

Given a query q, ranking documents by P(R = 1|d, q) is modeled under BIM as ranking them by P(R = 1|x, q)

Easier: rank documents by their odds of relevance (gives the same ranking, and we can ignore the common denominator):

O(R|x, q) = P(R = 1|x, q) / P(R = 0|x, q) = [P(R = 1|q) / P(R = 0|q)] · [P(x|R = 1, q) / P(x|R = 0, q)]

P(R = 1|q) / P(R = 0|q) is a constant for a given query and can be ignored

Page 21: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

It is at this point that we make the Naive Bayes conditional independence assumption that the presence or absence of a word in a document is independent of the presence or absence of any other word (given the query):

P(x|R = 1, q) / P(x|R = 0, q) = ∏t P(xt|R = 1, q) / P(xt|R = 0, q)

So:

O(R|x, q) = O(R|q) · ∏t P(xt|R = 1, q) / P(xt|R = 0, q)

Page 22: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

Since each xt is either 0 or 1, we can separate the terms to give:

O(R|x, q) = O(R|q) · ∏t: xt=1 [P(xt = 1|R = 1, q) / P(xt = 1|R = 0, q)] · ∏t: xt=0 [P(xt = 0|R = 1, q) / P(xt = 0|R = 0, q)]

Let pt = P(xt = 1|R = 1, q) be the probability of a term appearing in a relevant document.

Let ut = P(xt = 1|R = 0, q) be the probability of a term appearing in a nonrelevant document.

Visualise as a contingency table:

                         relevant (R = 1)    nonrelevant (R = 0)
term present (xt = 1)    pt                  ut
term absent  (xt = 0)    1 − pt              1 − ut

Page 23: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

Additional simplifying assumption: terms not occurring in the query are equally likely to occur in relevant and nonrelevant documents

If qt = 0, then pt = ut

Now we need only consider terms in the products that appear in the query:

O(R|x, q) = O(R|q) · ∏t: xt=qt=1 (pt / ut) · ∏t: xt=0, qt=1 (1 − pt) / (1 − ut)

Page 24: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

Including the query terms found in the document into the right product, but simultaneously dividing through by them in the left product, gives:

O(R|x, q) = O(R|q) · ∏t: xt=qt=1 [pt (1 − ut)] / [ut (1 − pt)] · ∏t: qt=1 (1 − pt) / (1 − ut)

The left product is still over query terms found in the document, but the right product is now over all query terms, hence constant for a particular query and can be ignored. The only quantity that needs to be estimated to rank documents w.r.t. a query is the left product

Hence the Retrieval Status Value (RSV) in this model:

RSVd = log ∏t: xt=qt=1 [pt (1 − ut)] / [ut (1 − pt)] = Σt: xt=qt=1 log [pt (1 − ut)] / [ut (1 − pt)]

Page 25: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Term ID   Query   Doc 1 (R=1)   Doc 2 (R=1)   …   Doc 3 (R=0)   Doc 4 (R=0)
1         0       1             0             …   1             0
2         1       1             1             …   0             0
3         0       0             1             …   0             1
4         1       1             0             …   0             1

Relevant documents = n, nonrelevant documents = M − n

Page 26: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

We can equally rank documents using the log odds ratios for the terms in the query, ct:

ct = log [pt / (1 − pt)] / [ut / (1 − ut)] = log [pt (1 − ut)] / [ut (1 − pt)]

The odds ratio is the ratio of two odds: (i) the odds of the term appearing if the document is relevant (pt/(1 − pt)), and (ii) the odds of the term appearing if the document is nonrelevant (ut/(1 − ut))

ct = 0 if a term has equal odds of appearing in relevant and nonrelevant documents, and ct is positive if it is more likely to appear in relevant documents.

ct functions as a term weight, so that RSVd = Σt: xt=qt=1 ct

Operationally, we sum ct quantities in accumulators for query terms appearing in documents, as in the sketch below.
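A Python sketch (my own; it folds in the add-1/2 smoothing described on the next slides, and the function names, counts, and postings structure are illustrative assumptions):

```python
import math
from collections import defaultdict

def c_t(vr_t, vr, df_t, n_docs):
    """Log odds ratio weight for term t, with add-0.5 smoothing to avoid zeros.
    vr_t: relevant docs containing t, vr: number of relevant docs,
    df_t: docs containing t, n_docs: total number of docs."""
    p_t = (vr_t + 0.5) / (vr + 1.0)                   # P(term present | relevant)
    u_t = (df_t - vr_t + 0.5) / (n_docs - vr + 1.0)   # P(term present | nonrelevant)
    return math.log((p_t * (1 - u_t)) / (u_t * (1 - p_t)))

def rsv_scores(query_terms, postings, weights):
    """Sum ct in accumulators for query terms appearing in each document.
    postings: term -> set of doc ids containing the term; weights: term -> ct."""
    acc = defaultdict(float)
    for t in query_terms:
        for d in postings.get(t, ()):
            acc[d] += weights[t]
    return acc
```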

Page 27: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

Page 28: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Deriving a Ranking Function for Query Terms

To avoid the possibility of zeroes (such as if every or no relevant document has a particular term), add 1/2 to each of the counts used to estimate pt and ut, i.e. the same smoothing used on the relevance feedback slides below

Page 29: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probability Estimates in Practice
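As a sketch of one standard practice (following the textbook treatment this lecture accompanies, and not recovered from the slide itself): when no relevance judgments are available, ut is approximated by the term's relative document frequency dft/N, since relevant documents are a vanishingly small fraction of the collection, and pt is held constant (e.g. 0.5); the resulting ct weight then behaves like an idf weight. A minimal Python illustration, with made-up statistics:

```python
import math

def c_t_no_feedback(df_t, n_docs, p_t=0.5):
    """Term weight when no relevance information is available:
    u_t estimated from collection statistics, p_t held constant."""
    u_t = df_t / n_docs
    # For p_t = 0.5 this reduces to log((N - df_t) / df_t), an idf-like weight.
    return math.log((p_t * (1 - u_t)) / (u_t * (1 - p_t)))

print(c_t_no_feedback(df_t=100, n_docs=100000))  # ~6.9, close to log(N / df_t)
```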

Page 30: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probabilistic approaches to relevance feedback

The probabilistic approach to relevance feedback works as follows:

1. Guess initial estimates of pt and ut. For instance, we can assume that pt is constant over all xt in the query, in particular, perhaps taking pt = 1/2.

2. Use the current estimates of pt and ut to determine a best guess at the set of relevant documents R = {d : Rd,q = 1}.

3. We interact with the user to refine the model of R. We do this by learning from the user relevance judgments for some subset of documents V. Based on these relevance judgments, V is partitioned into two subsets: VR = {d ∈ V : Rd,q = 1}, which is a subset of R, and VNR = {d ∈ V : Rd,q = 0}, which is disjoint from R.

Page 31: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probabilistic approaches to relevance feedback

4. We reestimate pt and ut on the basis of known relevant and nonrelevant documents. If the sets VR and VNR are large enough, we may be able to estimate these quantities directly from these documents as maximum likelihood estimates:

pt = |VRt| / |VR|

ut = |VNRt| / |VNR|

(where VRt is the set of documents in VR containing xt). In practice, we usually need to smooth these estimates. We can do this by adding 1/2 to both the count |VRt| and to the number of relevant documents not containing the term, giving:

pt = (|VRt| + 1/2) / (|VR| + 1)

ut = (|VNRt| + 1/2) / (|VNR| + 1)
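A small Python sketch (my own; the toy documents and function name are invented) of this re-estimation step with add-1/2 smoothing:

```python
def reestimate(vr_docs, vnr_docs, term):
    """Smoothed estimates of p_t and u_t from judged documents.
    vr_docs / vnr_docs: judged relevant / nonrelevant documents, each a set of terms."""
    vr_t = sum(1 for d in vr_docs if term in d)    # |VR_t|
    vnr_t = sum(1 for d in vnr_docs if term in d)  # |VNR_t|
    p_t = (vr_t + 0.5) / (len(vr_docs) + 1.0)
    u_t = (vnr_t + 0.5) / (len(vnr_docs) + 1.0)
    return p_t, u_t

vr = [{"rain", "forecast"}, {"rain", "umbrella"}]   # judged relevant (toy data)
vnr = [{"rainbow", "physics"}]                      # judged nonrelevant (toy data)
print(reestimate(vr, vnr, "rain"))  # (0.833..., 0.25)
```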

Page 32: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probabilistic approaches to relevance feedback

However, the set of documents judged by the user (V) is usually very small, and so the resulting statistical estimate is quite unreliable (noisy), even if the estimate is smoothed. So it is often better to combine the new information with the original guess in a process of Bayesian updating. In this case we have:

pt(k+1) = (|VRt| + κ · pt(k)) / (|VR| + κ)

where pt(k) is the kth estimate of pt and κ is the weight given to the prior.

Page 33: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Probabilistic approaches to relevance feedback

Improve our guesses for pt and ut. We choose from the re-estimation methods above, except now based on the set V instead of VR. If we let Vt be the subset of documents in V containing xt and use add-1/2 smoothing, we get:

pt = (|Vt| + 1/2) / (|V| + 1)

and if we assume that documents that are not retrieved are nonrelevant, then we can update our ut estimates as:

ut = (dft − |Vt| + 1/2) / (N − |V| + 1)

Once we have a real estimate for pt, the ct weights used in the RSV value look almost like a tf-idf value; we have:

ct = log [pt / (1 − pt)] + log [(1 − ut) / ut] ≈ log [pt / (1 − pt)] + log (N / dft)
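A short sketch (my own) of the update above, treating the retrieved set V as pseudo-relevance evidence and assuming non-retrieved documents are nonrelevant:

```python
def pseudo_feedback_estimates(v_docs, term, df_t, n_docs):
    """Update p_t and u_t from the retrieved set V only (no user judgments).
    v_docs: retrieved documents, each a set of terms;
    df_t: document frequency of the term in a collection of n_docs documents."""
    v_t = sum(1 for d in v_docs if term in d)                 # |V_t|
    p_t = (v_t + 0.5) / (len(v_docs) + 1.0)                   # add-1/2 smoothing
    u_t = (df_t - v_t + 0.5) / (n_docs - len(v_docs) + 1.0)   # non-retrieved assumed nonrelevant
    return p_t, u_t
```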

Page 34: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Outline

❶ Probabilistic Approach to Retrieval

❷ Basic Probability Theory

❸ Probability Ranking Principle

❹ Appraisal & Extensions


Page 35: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Okapi BM25: A Nonbinary Model

The simplest score for document d is just idf weighting of the query terms present in the document:

RSVd = Σt∈q∩d log (N / dft)

Improve this formula by factoring in the term frequency and document length:

RSVd = Σt∈q∩d log (N / dft) · [(k1 + 1) · tftd] / [k1 · ((1 − b) + b · (Ld / Lave)) + tftd]

tftd: term frequency in document d

Ld (Lave): length of document d (average document length in the whole collection)

k1: tuning parameter controlling the document term frequency scaling

b: tuning parameter controlling the scaling by document length
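A minimal Python sketch of this scoring function (my own implementation of the formula above; the log(N/dft) idf choice and the toy statistics in the usage line are assumptions):

```python
import math

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, df, n_docs, k1=1.2, b=0.75):
    """Sum over query terms of idf_t * ((k1+1)*tf_td) / (k1*((1-b) + b*Ld/Lave) + tf_td).
    doc_tf: term -> frequency in the document; df: term -> document frequency."""
    score = 0.0
    for t in query_terms:
        tf_td = doc_tf.get(t, 0)
        if tf_td == 0 or t not in df:
            continue  # terms absent from the document (or collection) contribute nothing
        idf = math.log(n_docs / df[t])
        norm = k1 * ((1 - b) + b * doc_len / avg_doc_len) + tf_td
        score += idf * (k1 + 1) * tf_td / norm
    return score

# Toy usage with made-up collection statistics
print(bm25_score(["probabilistic", "retrieval"],
                 doc_tf={"probabilistic": 3, "retrieval": 1}, doc_len=120,
                 avg_doc_len=100, df={"probabilistic": 50, "retrieval": 400},
                 n_docs=10000))
```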

Page 36: Hinrich Schütze and Christina Lioma Lecture 11: Probabilistic Information Retrieval


Okapi BM25: A Nonbinary Model

If the query is long, we might also use similar weighting for query terms:

RSVd = Σt∈q log (N / dft) · [(k1 + 1) · tftd] / [k1 · ((1 − b) + b · (Ld / Lave)) + tftd] · [(k3 + 1) · tftq] / (k3 + tftq)

tftq: term frequency in the query q

k3: tuning parameter controlling term frequency scaling of the query

The above tuning parameters should ideally be set to optimize performance on a development test collection. In the absence of such optimisation, experiments have shown that reasonable values are to set k1 and k3 to a value between 1.2 and 2 and b = 0.75
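A sketch (again my own) of the extra per-term factor for long queries; each term's contribution to the BM25 score above is multiplied by this value:

```python
def bm25_query_factor(tf_tq, k3=1.5):
    """Query-term-frequency factor (k3 + 1) * tf_tq / (k3 + tf_tq).
    For short queries with tf_tq = 1 it equals 1 and can be dropped."""
    return (k3 + 1) * tf_tq / (k3 + tf_tq)

print(bm25_query_factor(2))  # ~1.43: a term repeated in the query counts somewhat more
```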
