
Page 1

Effective Use of Knowledge Management Systems:

A Process Model of Content Ratings and Credibility Indicators

By: Robin S. Poston, University of Memphis – U.S.A.

Cheri Speier, Michigan State University – U.S.A.

Presenter: Maged Younan

Page 2

Knowledge Management Systems

KMSs should facilitate the efficient and effective use of a firm's intellectual resources

KMSs store huge amounts of information

Corporate investment in KMSs was expected to reach $13 billion by 2007

KMS users should therefore be able to locate relevant, high quality content easily and quickly

Page 3

What is the problem with KMSs?

Have you ever tried to search for information on the internet (or intranet)?

What were the results? How many links / documents did you get?

How many of these were relevant and satisfied your need?

How many included low quality or even incorrect information?

Page 4

Content Ratings

To address this problem with KMSs, content ratings were introduced

Content ratings are simply the feedback of previous visitors to the same document, link, etc.

Content ratings, if valid, should help future knowledge workers (searchers) evaluate and select appropriate content quickly and accurately

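As a minimal sketch of the idea, a KMS might aggregate prior users' feedback into a displayed score as below (the five-point scale and the helper name content_rating are assumptions for illustration; the paper does not prescribe an implementation):

```python
from statistics import mean

# Hypothetical helper: aggregate prior users' feedback (1-5 stars)
# into the single score a searcher would see next to each result.
def content_rating(ratings: list[int]) -> float | None:
    # Show no score (rather than a misleading default) for unrated content.
    return round(mean(ratings), 1) if ratings else None

print(content_rating([5, 4, 4, 5]))  # 4.5
print(content_rating([]))            # None (no feedback yet)
```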

Page 5

Are content ratings always valid?

Do content ratings usually reflect the actual content quality? How accurate are they?

Content ratings may not always be valid, for the following reasons:

Lack of user experience (inappropriate context)
Delegation of search tasks to junior staff
Subjectivity of ratings (bias)
Intentional manipulation of ratings

Page 6

Credibility Indicators

Credibility indicators are used to assess the validity of the content and/or its ratings.

Credibility indicators may include:

No. of raters
Rater expertise
Collaborative filtering
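For illustration only, one plausible way to fold a credibility indicator such as rater expertise into the displayed score is an expertise-weighted average (the weighting scheme below is an assumption, not the paper's model):

```python
# Illustrative only: weight each rating (1-5) by its rater's assumed
# expertise (0.0-1.0), so expert votes move the score more.
def weighted_rating(rating_expertise_pairs: list[tuple[int, float]]) -> float:
    total_weight = sum(w for _, w in rating_expertise_pairs)
    if total_weight == 0:
        raise ValueError("no credible ratings to aggregate")
    return sum(r * w for r, w in rating_expertise_pairs) / total_weight

# Two novices rate 5; one expert rates 2: the expert dominates.
print(weighted_rating([(5, 0.2), (5, 0.2), (2, 1.0)]))  # ~2.86
```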

Page 7

What do we want to study?

Experiment 1: Examined the relationship between rating validity and the KMS search and evaluation process

Experiments 2, 3, and 4: Examined the moderating effect of credibility indicators

Page 8

Experiment 1

“Relationship between rating validity and the KMS search and evaluation process”

Page 9

Experiment 1 - Background

On complex tasks, people often anchor on inappropriate content

Knowledge workers usually begin with the assumption that available ratings are valid

If a rating is not valid, searchers will mislabel highly rated content as high quality, and vice versa

Page 10

Experiment 1 - Assumptions

Searchers will follow one of the following search and evaluation processes, depending on the rating validity:

Anchoring on the content and making no adjustment, because the content rating is valid

Anchoring on the content and making no adjustment, even though the content rating is low in validity

Anchoring on the content and adjusting away, because the content rating is low in validity

Page 11

Experiment 1 - Hypotheses

The following hypotheses were generated:

H1: Knowledge workers will implement different search and evaluation processes depending on the validity of content ratings

H2a: Anchoring on low quality content but adjusting away from that anchor results in higher decision quality than not adjusting away from the anchor

H2b: Anchoring on high quality content (and not adjusting away) results in higher decision quality than anchoring on low quality content and adjusting

H2c: Anchoring on low quality content and adjusting results in longer decision time than anchoring on high or low quality content and not adjusting away

Page 12

Setting Up the Experiment

14 different work plans were created and added to the KMS

3 quality criteria were introduced:
Clarity of project steps
Assignment of consultant levels to each project step
Availability of senior consultant assignments for special tasks

Page 13

Setting Up the Experiment

The 14 work plans varied in quality such that:

1 plan met all 3 quality criteria
6 plans met 2 quality criteria
6 plans met 1 quality criterion
1 plan did not meet any of the 3 quality criteria

Subjects of the experiment had prior but limited experience with the task domain

Page 14

Setting Up the Experiment

A pilot test was conducted before the experiment to ensure that subjects could differentiate between low and high quality work plans

Work plans were then given content ratings as follows:

No. of quality criteria met    Valid rating    Invalid rating
3                              5               1
2                              4               2
1                              2               4
0                              1               5
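A small sketch of how these valid and invalid rating conditions could be assigned in code (the lookup tables simply transcribe the table above; the function and variable names are illustrative):

```python
# Lookup tables transcribing the rating scheme above: in the valid
# condition the rating tracks plan quality; in the invalid condition
# the scale is reversed.
VALID_RATING = {3: 5, 2: 4, 1: 2, 0: 1}
INVALID_RATING = {3: 1, 2: 2, 1: 4, 0: 5}

def assign_rating(criteria_met: int, condition: str) -> int:
    table = VALID_RATING if condition == "valid" else INVALID_RATING
    return table[criteria_met]

print(assign_rating(3, "valid"))    # 5: best plan gets the best rating
print(assign_rating(3, "invalid"))  # 1: best plan gets the worst rating
```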

Page 15

Experiment 1 – Dependent Variables

The dependent variables measured in the experiment were:

Decision quality (the number of lines in the chosen work plan matching the lines of the best quality work plan)

Decision time (measured in minutes)
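A rough sketch of how the decision quality measure might be computed (the exact line matching rules are not given in the slides, so this is an assumption):

```python
def decision_quality(chosen_plan: list[str], best_plan: list[str]) -> int:
    """Count lines of the chosen work plan that also appear in the
    highest quality work plan (higher = better decision)."""
    best_lines = set(best_plan)
    return sum(1 for line in chosen_plan if line in best_lines)

best = ["define scope", "assign seniors to design", "review milestones"]
chosen = ["define scope", "assign juniors to design", "review milestones"]
print(decision_quality(chosen, best))  # 2 of 3 lines match
```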

Chi-square tests were conducted to confirm that subjects' age, gender, experience, and years in school had no significant effect (i.e., the subject pool was homogeneous)
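Such a homogeneity check could be run, for example, as a chi-square test of independence between experimental group and each demographic variable (the counts below are invented for illustration; SciPy is assumed available):

```python
from scipy.stats import chi2_contingency

# Rows: experimental groups; columns: e.g. gender counts (invented data).
observed = [
    [12, 14],  # group 1
    [13, 13],  # group 2
    [11, 15],  # group 3
]
chi2, p, dof, expected = chi2_contingency(observed)
# A large p-value means no significant demographic difference across
# groups, i.e. the subject pool can be treated as homogeneous.
print(f"chi2={chi2:.2f}, p={p:.3f}")
```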

Page 16

Experiment 1 – Interpreting the Results

After running the experiment, subjects were divided into three main clusters (see the sketch after this list):

Anchoring on high quality content and making no adjustment

Anchoring on low quality content and making no adjustment

Anchoring on low quality content and adjusting away from the anchor
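The slides do not say how these clusters were identified; one common approach would be k-means over per-subject process measures, sketched below purely as an assumption (the feature values are invented):

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per subject: [rating of the plan first anchored on (1-5),
# number of plans examined after anchoring]. Values are invented.
X = np.array([
    [5, 1], [5, 2], [4, 1],  # anchor on high quality, no adjustment
    [1, 1], [2, 1], [1, 2],  # anchor on low quality, no adjustment
    [1, 7], [2, 8], [1, 6],  # anchor on low quality, adjust away
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # three groups matching the three process types above
```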

Page 17

Experiment 1 – Results

A strong relationship was found between rating validity and whether the subject adjusted away from the initial anchor (this supports Hypothesis 1)

A significant correlation between time spent and decision quality was also found

Hypotheses 2a and 2b were strongly supported

Hypothesis 2c was not supported

Page 18

Experiment 1 - Results

H1: Knowledge workers will implement different search and evaluation processes depending on the validity of content ratings - Supported

H2a: Anchoring on low quality content but adjusting away from that anchor results in higher decision quality than not adjusting away from the anchor - Supported

H2b: Anchoring on high quality content (and not adjusting away) results in higher decision quality than anchoring on low quality content and adjusting – Supported

H2c: Anchoring on low quality content and adjusting results in longer decision time than anchoring on high or low quality content and not adjusting away - Not Supported

Page 19

Other Experiments

3 more experiments were conducted to assess the moderating effect of adding credibility indicators:

No. of raters
Rater expertise
Collaborative filtering (recommending similar content or identifying content that has been used by others having the same context)
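As a generic illustration of the collaborative filtering idea, and not of the mechanism used in the experiments, a user-based filter can recommend content rated highly by the most similar other user:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rows: users; columns: documents; values: ratings (0 = unrated).
ratings = np.array([
    [5, 4, 0, 1],  # user 0 (doc 2 unrated)
    [4, 5, 3, 1],  # user 1, similar context to user 0
    [1, 0, 5, 4],  # user 2, different context
])

def recommend(user: int) -> int:
    """Suggest the unrated document rated highest by the most similar user."""
    sims = [cosine(ratings[user], ratings[v]) if v != user else -1.0
            for v in range(len(ratings))]
    peer = int(np.argmax(sims))
    unrated = np.where(ratings[user] == 0)[0]
    return int(unrated[np.argmax(ratings[peer][unrated])])

print(recommend(0))  # 2: the document user 0's closest peer rated well
```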

Page 20

Experiments 2, 3, and 4 - Hypotheses

The following hypotheses were generated and assessed:

H3a: Given low validity ratings, knowledge workers will adjust away (from an anchor on low quality content) more when the number of raters is low than when the number is high – NOT Supported

H3b: Given low validity ratings, knowledge workers will adjust away (from an anchor on low quality content) more when the rater expertise is low than when the expertise is high – NOT Supported

H3c: Given low validity ratings, knowledge workers will adjust away (from an anchor on low quality content) more when the collaborative filtering sophistication is low than when it is high – Supported

Page 21

Conclusions

Results suggest that ratings influence the quality of decisions made by knowledge workers (KMS users)

The paper also provides other useful findings for KMS designers and knowledge workers; for example, the finding that collaborative filtering has a more powerful moderating effect than the number of raters or rater expertise is an important new insight.

Future studies should examine how to assist individuals in overcoming invalid ratings

Page 22

Thank You