Introduction to Information Retrieval
Instructor: Marina Gavrilova


This lecture is an introduction to information retrieval, based in part on slides prepared by Joe Hellerstein.

Outline
- Information Retrieval
- IR vs DBMS
- Boolean Text Search
- Text Indexes
  - Simple relational text index
  - Example of inverted file
- Computing Relevance
  - Vector Space Model
  - Text Clustering
- Probabilistic Models and Ranking Principles
- Iterative query refinement: Rocchio Model
- Query Modification
- Collaborative Filtering and Ringo Collaborative Filtering
- Conclusions

Goal
The goal of this lecture is to introduce information retrieval and how it differs from a DBMS. We will then discuss how the vector space model and text clustering help in computing relevance and similarity between documents.

Information Retrieval
- A research field traditionally separate from databases
  - Goes back to IBM, Rand and Lockheed in the 50s
  - G. Salton at Cornell in the 60s
  - Lots of research since then
- Products traditionally separate
  - Originally, document management systems for libraries, government, law, etc.
  - Gained prominence in recent years due to web search

IR vs. DBMS
They seem like very different beasts:

IR                                     DBMS
Imprecise semantics                    Precise semantics
Keyword search                         SQL
Unstructured data format               Structured data
Read-mostly; add docs occasionally     Expect reasonable number of updates
Page through top k results             Generate full answer

Both support queries over large datasets and use indexing. In practice, you currently have to choose between the two.

IR's Bag of Words Model
Typical IR data model: each document is just a bag (multiset) of words (terms).
- Detail 1: Stop words. Certain words are considered irrelevant and are not placed in the bag, e.g., "the", or HTML tags.
- Detail 2: Stemming and other content analysis. Using English-specific rules, convert words to their basic form, e.g., surfing, surfed -> surf.
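As a rough sketch of this model (the stop-word list and the deliberately naive suffix-stripping "stemmer" below are placeholders; real systems use something like the Porter stemmer):

```python
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "is"}  # assumed small list

def naive_stem(word):
    # Deliberately naive English suffix stripping, just for illustration.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def bag_of_words(document):
    # Lowercase, split into terms, drop stop words, stem, and count:
    # the document becomes a bag (multiset) of terms.
    terms = re.findall(r"[a-z]+", document.lower())
    return Counter(naive_stem(t) for t in terms if t not in STOP_WORDS)

print(bag_of_words("surfing, surfed, surfs"))  # Counter({'surf': 3})
```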

Boolean Text Search
- Find all documents that match a Boolean containment expression, e.g.:
  "Windows" AND ("Glass" OR "Door") AND NOT "Microsoft"
- Note: query terms are also filtered via stemming and stop words.
- When web search engines say "10,000 documents found", that's the Boolean search result size (subject to a common "max # returned" cutoff).

Text Indexes
- When IR people say "text index", they usually mean more than what DB people mean:
  - Both tables and indexes
  - Really a logical schema (i.e., tables) with a physical schema (i.e., indexes)
  - Usually not stored in a DBMS

A Simple Relational Text Index
- Create and populate a table InvertedFile(term string, docURL string)
- Build a B+-tree or hash index on InvertedFile.term
  - Compact entries in the index are critical here for efficient storage!! Fancy list compression is possible
- Note: URL instead of RID; the web is your heap file! Can also cache pages and use RIDs
- This is often called an "inverted file" or "inverted index": it maps from words to docs
- Can now do single-word text search queries!
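A minimal sketch of this relational inverted file in Python, using the standard sqlite3 module (the table and column names follow the slide; the sample postings are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE InvertedFile (term TEXT, docURL TEXT)")
# The index on term is what makes single-word lookups efficient
# (sqlite uses a B-tree; the slide suggests a B+-tree or hash index).
conn.execute("CREATE INDEX term_idx ON InvertedFile (term)")

# Sample postings: term -> document URL.
postings = [
    ("database", "http://www-inst.eecs.berkeley.edu/~cs186"),
    ("developer", "http://www.microsoft.com"),
    ("document", "http://www-inst.eecs.berkeley.edu/~cs186"),
    ("document", "http://www.microsoft.com"),
]
conn.executemany("INSERT INTO InvertedFile VALUES (?, ?)", postings)

# A single-word text search query is just an index lookup:
for (url,) in conn.execute(
        "SELECT docURL FROM InvertedFile WHERE term = ?", ("document",)):
    print(url)
```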

An Inverted File
Search for "databases"
(figure: an excerpt of the InvertedFile table; the full term/docURL listing appears in the embedded sheets at the end)

Computing Relevance, Similarity: The Vector Space Model

Document Vectors
- Documents are represented as bags of words
- Represented as vectors when used computationally
  - A vector is like an array of floating-point numbers
  - Has direction and magnitude
  - Each vector holds a place for every term in the collection
  - Therefore, most vectors are sparse

Document Vectors: One location for each word
(figure: a document-term count matrix for documents A-I over the terms nova, galaxy, heat, h'wood, film, role, diet, fur)
- Nova occurs 10 times in text A
- Galaxy occurs 5 times in text A
- Heat occurs 3 times in text A
- (Blank means 0 occurrences.)
- Rows A-I are document ids.

We Can Plot the Vectors
(figure: documents plotted along "star" and "diet" axes: a doc about astronomy, a doc about movie stars, a doc about mammal behavior)
Assumption: documents that are close together in vector space are similar.

Vector Space Model
- Documents are represented as vectors in term space
  - Terms are usually stems
  - Documents can be represented by binary vectors of terms
- Queries are represented the same way as documents
- A vector distance measure between the query and documents is used to rank retrieved documents
  - Query and document similarity is based on the length and direction of their vectors
  - Vector operations can capture Boolean query conditions
  - Terms in a vector can be weighted in many ways
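A minimal sketch of ranking by a vector similarity measure, assuming sparse dicts for vectors; the toy documents mirror D1-D3 and the query Q = (1, 2, 3) from the embedded sheets at the end:

```python
# Documents and the query are vectors over the same term space;
# here they are sparse dicts storing only non-zero weights.
docs = {
    "D1": {"t1": 1, "t3": 1},
    "D2": {"t1": 1},
    "D3": {"t2": 1, "t3": 1},
}
query = {"t1": 1, "t2": 2, "t3": 3}

def score(doc, q):
    # Inner product Q . D over shared terms (one simple similarity
    # measure; cosine is another).
    return sum(w * q.get(t, 0) for t, w in doc.items())

# Rank retrieved documents by decreasing similarity to the query.
for name, vec in sorted(docs.items(), key=lambda kv: -score(kv[1], query)):
    print(name, score(vec, query))
# D3 5, D1 4, D2 1
```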

Vector Space Documents and Queries
(figure: documents D1-D11 and a query Q plotted in the space of terms t1, t2, t3, illustrating Boolean term combinations; Q is a query, also represented as a vector. The underlying data appears in the embedded sheets at the end.)

Assigning Weights to Terms
- Binary weights
- Raw term frequency
- We want to weight terms highly if they are:
  - frequent in relevant documents, BUT
  - infrequent in the collection as a whole

Binary Weights
- Only the presence (1) or absence (0) of a term is included in the vector

Raw Term Weights
- The frequency of occurrence of the term in each document is included in the vector

TF x IDF Normalization
- Normalize the term weights (so longer documents are not unfairly given more weight)
- The longer the document, the more likely a given term is to appear in it, and the more often it is likely to appear. So we want to reduce the importance attached to a term appearing in a document based on the length of the document.
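The slide describes the normalization only in words; one common tf x idf weighting with length normalization (not necessarily the exact one the original deck had in mind) is:

```latex
w_{ij} = \frac{tf_{ij} \cdot \log(N / df_i)}
              {\sqrt{\sum_{k} \left( tf_{kj} \cdot \log(N / df_k) \right)^2}}
```

where tf_ij is the count of term i in document j, df_i is the number of documents containing term i, and N is the total number of documents in the collection.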

Pair-wise Document Similarity
(reconstructed document-term matrix; blank means 0)

        nova  galaxy  heat  h'wood  film  role  diet  fur
A        1      3      1
B        5      2
C                              2     1     5
D                              4     1

How to compute document similarity?
(the same document-term matrix as above)
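Using those (reconstructed) vectors, a minimal Python sketch of the unnormalized inner-product similarity; the term names follow the matrix, with h'wood written hwood:

```python
# Raw-term-weight vectors from the matrix above (non-zero entries only).
A = {"nova": 1, "galaxy": 3, "heat": 1}
B = {"nova": 5, "galaxy": 2}
C = {"hwood": 2, "film": 1, "role": 5}
D = {"hwood": 4, "film": 1}

def inner(u, v):
    # Unnormalized similarity: sum of products over shared terms.
    return sum(w * v[t] for t, w in u.items() if t in v)

print(inner(A, B))  # 1*5 + 3*2 = 11
print(inner(C, D))  # 2*4 + 1*1 = 9
```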

Pair-wise Document Similarity (cosine normalization)
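The slide names the measure without spelling it out; the standard cosine-normalized similarity between documents D_i and D_j with term weights w is:

```latex
sim(D_i, D_j) = \frac{\sum_{k} w_{ik}\, w_{jk}}
                     {\sqrt{\sum_{k} w_{ik}^2}\;\sqrt{\sum_{k} w_{jk}^2}}
```

Dividing by the vector lengths means a long document is not favored simply for containing more term occurrences.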

Text Clustering
- Finds overall similarities among groups of documents
- Finds overall similarities among groups of tokens
- Picks out some themes, ignores others
- Clustering is "the art of finding groups in data." -- Kaufman and Rousseeuw
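As a toy illustration only (not from the slides): a greedy one-pass grouping of document vectors by cosine similarity, standing in for a real clustering algorithm such as k-means; the documents and threshold are invented:

```python
import math

def cosine(u, v):
    num = sum(u[t] * v.get(t, 0) for t in u)
    den = math.sqrt(sum(w * w for w in u.values())) * \
          math.sqrt(sum(w * w for w in v.values()))
    return num / den if den else 0.0

def cluster(docs, threshold=0.7):
    # Put each document into the first cluster whose representative it
    # is close enough to; otherwise start a new cluster.
    clusters = []
    for name, vec in docs.items():
        for rep_vec, members in clusters:
            if cosine(vec, rep_vec) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((vec, [name]))
    return [members for _, members in clusters]

docs = {
    "A": {"nova": 10, "galaxy": 5},   # astronomy-flavored
    "B": {"nova": 5, "galaxy": 10},
    "I": {"diet": 1, "fur": 3},       # mammal-behavior-flavored
}
print(cluster(docs))  # [['A', 'B'], ['I']]
```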

(figure: document clusters plotted against Term 1 and Term 2)

Problems with Vector Space
- There is no real theoretical basis for the assumption of a term space
  - It is more for visualization than having any real basis
  - Most similarity measures work about the same
- Terms are not really orthogonal dimensions
  - Terms are not independent of all other terms; remember our discussion of correlated terms in text

Probabilistic Models
- A rigorous formal model that attempts to predict the probability that a given document will be relevant to a given query
- Ranks retrieved documents according to this probability of relevance (the Probability Ranking Principle)
- Relies on accurate estimates of probabilities

Probability Ranking Principle
"If a reference retrieval system's response to each request is a ranking of the documents in the collections in the order of decreasing probability of usefulness to the user who submitted the request, where the probabilities are estimated as accurately as possible on the basis of whatever data has been made available to the system for this purpose, then the overall effectiveness of the system to its users will be the best that is obtainable on the basis of that data."
-- Stephen E. Robertson, J. Documentation 1977

Query Modification
- Problem: how can we reformulate the query to help a user who is trying several searches to get at the same information?
  - Thesaurus expansion: suggest terms similar to the query terms
  - Relevance feedback: suggest terms (and documents) similar to retrieved documents that have been judged to be relevant

Relevance Feedback
- Main idea: modify the existing query based on relevance judgements
  - Extract terms from relevant documents and add them to the query,
  - and/or re-weight the terms already in the query
- There are many variations:
  - Usually positive weights for terms from relevant docs
  - Sometimes negative weights for terms from non-relevant docs
- Users, or the system, guide this process by selecting terms from an automatically-generated list.

Rocchio Method
- Rocchio automatically
  - re-weights terms
  - adds in new terms (from relevant docs)
- Have to be careful when using negative terms
- Rocchio is not a machine learning algorithm
(figure: the Rocchio update formula; a standard form is sketched below)
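The standard Rocchio update, with Q the original query vector, D_r the judged-relevant documents, D_nr the non-relevant ones, and alpha, beta, gamma tunable weights, is:

```latex
Q_{new} = \alpha\, Q
        + \frac{\beta}{|D_r|} \sum_{d \in D_r} d
        - \frac{\gamma}{|D_{nr}|} \sum_{d \in D_{nr}} d
```

Negative component weights are usually clipped to zero, which is the caveat about negative terms above.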

Alternative Notions of Relevance Feedback
- Find people whose taste is similar to yours. Will you like what they like?
- Follow a user's actions in the background. Can this be used to predict what the user will want to see next?
- Track what lots of people are doing. Does this implicitly indicate what they think is good and not good?

Collaborative Filtering (Social Filtering)
- "If Pam liked the paper, I'll like the paper"
- "If you liked Star Wars, you'll like Independence Day"
- Rating based on the ratings of similar people
- Ignores text, so it also works on sound, pictures, etc.
- But: initial users can bias the ratings of future users

Ringo Collaborative Filtering
- Users rate items from like to dislike
  - 7 = like; 4 = ambivalent; 1 = dislike
  - Ratings form a normal distribution; the extremes are what matter
- Nearest-neighbors strategy: find similar users and predict a (weighted) average of their ratings
- Pearson algorithm: weight by the degree of correlation between user U and user J
  - 1 means similar, 0 means no correlation, -1 means dissimilar
- Works better to compare against the ambivalent rating (4) rather than the individual's average score
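A minimal sketch of the Ringo-style prediction; the pearson and predict helpers and the sample ratings are hypothetical, and correlation is computed against the ambivalent rating (4) as the slide suggests:

```python
import math

PIVOT = 4  # the "ambivalent" rating on the 1..7 scale

def pearson(u, j):
    # Correlation between two users over their shared items, measured
    # against the ambivalent rating rather than each user's mean.
    shared = [i for i in u if i in j]
    num = sum((u[i] - PIVOT) * (j[i] - PIVOT) for i in shared)
    den = math.sqrt(sum((u[i] - PIVOT) ** 2 for i in shared)) * \
          math.sqrt(sum((j[i] - PIVOT) ** 2 for i in shared))
    return num / den if den else 0.0

def predict(user, others, item):
    # Correlation-weighted average of the other users' deviations.
    num = den = 0.0
    for other in others:
        if item in other:
            w = pearson(user, other)
            num += w * (other[item] - PIVOT)
            den += abs(w)
    return (PIVOT + num / den) if den else PIVOT

me = {"star_wars": 7, "casablanca": 2}
others = [{"star_wars": 7, "casablanca": 1, "independence_day": 6},
          {"star_wars": 1, "casablanca": 7, "independence_day": 2}]
print(predict(me, others, "independence_day"))  # ~6.0: the like-minded user liked it
```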

Computing Relevance
- The relevance calculation involves how often search terms appear in the doc, and how often they appear in the collection as a whole:
  - More search terms found in a doc -> the doc is more relevant
  - Greater importance is attached to finding rare terms
- Doing this efficiently in current SQL engines is not easy:
  - The relevance of a doc w.r.t. a search term is a function that is called once per doc the term appears in (docs found via the inverted index)
  - For efficient function computation, for each term we can store the number of times it appears in each doc, as well as the number of docs it appears in
  - Must also sort the retrieved docs by their relevance value
  - Also, think about Boolean operators (if the search has multiple terms) and how they affect the relevance computation!
- An object-relational or object-oriented DBMS with good support for function calls is better, but you still have long execution path-lengths compared to optimized search engines.
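A rough Python sketch of that bookkeeping (toy data; a real engine stores these statistics inside the inverted index itself): per term, the count in each doc (tf) and the number of docs containing it (df), combined into a score and sorted:

```python
import math

# Inverted index with the per-term statistics the slide suggests storing:
# for each term, (doc -> occurrence count); df is the number of docs.
index = {
    "database": {"url1": 3, "url2": 1},
    "search":   {"url2": 2},
}
N = 10  # assumed total number of documents in the collection

def relevance(query_terms):
    scores = {}
    for term in query_terms:
        postings = index.get(term)
        if not postings:
            continue
        idf = math.log(N / len(postings))  # rare terms matter more
        for doc, tf in postings.items():
            scores[doc] = scores.get(doc, 0.0) + tf * idf
    # Sort the retrieved docs by decreasing relevance value.
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(relevance(["database", "search"]))  # url2 ranks above url1
```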

Updates and Text Search
- Text search engines are designed to be query-mostly:
  - Deletes and modifications are rare
  - Can postpone updates (nobody notices; no transactions!)
  - Updates are done in batch (rebuild the index)
- Can't afford to go off-line for an update?
  - Create a 2nd index on a separate machine, then replace the 1st index with the 2nd!
  - So no concurrency control problems
- Can compress to a search-friendly, update-unfriendly format
- This is the main reason why text search engines and DBMSs are usually separate products
- Also, text-search engines tune that one SQL query to death!

DBMS vs. Search Engine Architecture

(figure: side-by-side architecture stacks. Search engine: The Query -> Search String Modifier -> Ranking Algorithm -> The Access Method -> Buffer Management -> Disk Space Management -> OS, with the lower layers forming a "simple DBMS". DBMS: Query Optimization and Execution -> Relational Operators -> Files and Access Methods -> Buffer Management -> Disk Space Management, with Concurrency and Recovery needed throughout.)

IR vs. DBMS Revisited
- Semantic guarantees
  - A DBMS guarantees transactional semantics
    - If an inserting Xact commits, a later query will see the update
    - Handles multiple concurrent updates correctly
  - IR systems do not do this; nobody notices!
    - Postpone insertions until convenient
    - No model of correct concurrency
- Data modeling & query complexity
  - A DBMS supports any schema & queries, but:
    - requires you to define the schema
    - the complex query language is hard to learn
  - IR supports only one schema & query
    - No schema design required (unstructured text)
    - Trivial-to-learn query language

Lots More in IR ...
- How to rank the output, i.e., how to compute the relevance of each result item w.r.t. the query?
  - Doing this well / efficiently is hard!
- Other ways to help users paw through the output?
  - Document clustering, document visualization
- How to take advantage of hyperlinks?
  - Really cute tricks here!
- How to use compression for better I/O performance?
  - E.g., making RID lists smaller
  - Try to make things fit in RAM!
- How to deal with synonyms, misspellings, abbreviations?
- How to write a good web crawler?

Summary
First we studied the difference between Information Retrieval and DBMSs. Then we discussed the two types of search used in IR systems (Boolean and text-based search). In addition, we learned how to compute relevance between documents based on words using the Vector Space Model, and how text clustering can be used to find similarity between documents. In the end we discussed the Rocchio Model for iterative query refinement.

Summary
- IR relies on computing distances between documents
- Terms can be weighted and distances normalized
- IR can utilize clustering, adaptive query updates and elements of learning to perform document retrieval / query response better
- The idea is to use not only similarity, but also dissimilarity measures to compare documents.

Sheet1 (embedded data for the inverted file example):

term          docURL
data          http://www-inst.eecs.berkeley.edu/~cs186
database      http://www-inst.eecs.berkeley.edu/~cs186
date          http://www-inst.eecs.berkeley.edu/~cs186
day           http://www-inst.eecs.berkeley.edu/~cs186
dbms          http://www-inst.eecs.berkeley.edu/~cs186
decision      http://www-inst.eecs.berkeley.edu/~cs186
demonstrate   http://www-inst.eecs.berkeley.edu/~cs186
description   http://www-inst.eecs.berkeley.edu/~cs186
design        http://www-inst.eecs.berkeley.edu/~cs186
desire        http://www-inst.eecs.berkeley.edu/~cs186
developer     http://www.microsoft.com
differ        http://www-inst.eecs.berkeley.edu/~cs186
disability    http://www.microsoft.com
discussion    http://www-inst.eecs.berkeley.edu/~cs186
division      http://www-inst.eecs.berkeley.edu/~cs186
do            http://www-inst.eecs.berkeley.edu/~cs186
document      http://www-inst.eecs.berkeley.edu/~cs186
document      http://www.microsoft.com
microsoft     http://www.microsoft.com
microsoft     http://www-inst.eecs.berkeley.edu/~cs186
midnight      http://www-inst.eecs.berkeley.edu/~cs186
midterm       http://www-inst.eecs.berkeley.edu/~cs186
minibase      http://www-inst.eecs.berkeley.edu/~cs186
million       http://www.microsoft.com
Monday        http://www.microsoft.com
more          http://www.microsoft.com
most          http://www-inst.eecs.berkeley.edu/~cs186
ms            http://www-inst.eecs.berkeley.edu/~cs186
msn           http://www.microsoft.com
must          http://www-inst.eecs.berkeley.edu/~cs186
necessary     http://www-inst.eecs.berkeley.edu/~cs186
need          http://www-inst.eecs.berkeley.edu/~cs186
network       http://www.microsoft.com
new           http://www-inst.eecs.berkeley.edu/~cs186
new           http://www.microsoft.com
news          http://www.microsoft.com
newsgroup     http://www-inst.eecs.berkeley.edu/~cs186
newsletter    http://www.microsoft.com
now           http://www.microsoft.com
of            http://www.microsoft.com
office        http://www.microsoft.com
p             http://www.microsoft.com
pacific       http://www.microsoft.com
page          http://www.microsoft.com
partner       http://www.microsoft.com
personal      http://www.microsoft.com
portal        http://www.microsoft.com
press         http://www.microsoft.com
privacy       http://www.microsoft.com
product       http://www.microsoft.com
professional  http://www.microsoft.com


Sheet1 (embedded data for the vector space example): binary term weights, with retrieval status values RSV = Q . Di for the query Q = (q1, q2, q3) = (1, 2, 3):

docs   t1  t2  t3   RSV = Q . Di
D1      1   0   1    4
D2      1   0   0    1
D3      0   1   1    5
D4      1   0   0    1
D5      1   1   1    6
D6      1   1   0    3
D7      0   1   0    2
D8      0   1   0    2
D9      0   0   1    3
D10     0   1   1    5
D11     1   0   1    3
Q       1   2   3

Sheet1 (embedded data): raw term frequencies for the same documents and query, RSV values as given in the original embed:

docs   t1  t2  t3   RSV = Q . Di
D1      2   0   3    4
D2      1   0   0    1
D3      0   4   7    5
D4      3   0   0    1
D5      1   6   3    6
D6      3   5   0    3
D7      0   8   0    2
D8      0  10   0    2
D9      0   0   1    3
D10     0   3   5    5
D11     4   0   1    3
Q       1   2   3
