Architecture and Evaluation Issues

Julie Fitzgerald, Arlington, Virginia
August 9, 2003


Page 1: Architecture and Evaluation Issues

Julie Fitzgerald
Arlington, Virginia

August 9, 2003

Page 2: Architecture and Evaluation Issues

Areas of Research for Future (KBS) Evaluations

- Evaluation Mechanisms
- Role of Participants
- Data collection
- Time management
- Meaningfulness of results

Page 3: Architecture and Evaluation Issues

Open Questions for Future (KBS) Evaluations

Evaluation Mechanisms
- IET has used challenge problems (CPs) and specifications to present evaluation mechanisms, including information related to the questions to be used and the grading format.
- CPs are time consuming to develop.
- Future research: develop a methodology for evaluating large KB systems.

Role of Participants
- In HPKB, the systems were tested using knowledge engineers (KEs); RKF was SME focused.
- The user affects the data we need to collect and the analysis performed on that data.
- Future research: user profiling; interaction profiling (user to system; user to technology developer; user to outside resource).

Page 4: Architecture and Evaluation Issues

Data collection
- Labor intensive and invasive.
- Future research: automated data collection and processing; at a minimum, better specification of needed data.
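As one hedged illustration of "automated data collection in the background," a system call could be instrumented so that each use emits a structured record without a human observer. Everything here (`instrument`, `answer_query`, the record fields) is a hypothetical sketch, not the deck's design.

```python
import functools
import json
import time

def instrument(log: list):
    """Decorator that appends one record per call: event name,
    wall-clock duration, and success/failure. A stand-in for the
    kind of unobtrusive background logging the slide asks for."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            ok = True
            try:
                return fn(*args, **kwargs)
            except Exception:
                ok = False
                raise
            finally:
                log.append({
                    "event": fn.__name__,
                    "seconds": time.perf_counter() - start,
                    "ok": ok,
                })
        return inner
    return wrap

log: list[dict] = []

@instrument(log)
def answer_query(q: str) -> str:
    # Placeholder for a real KBS query; only the logging matters here.
    return f"answer to {q!r}"

answer_query("What is the capital of Virginia?")
print(json.dumps(log, indent=2))
```

Records like these could later feed the "better specification of needed data" point: the schema itself documents what is collected.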

Time management
- Evaluations are time consuming; this is true for development, execution, and analysis.
- A more formalized evaluation methodology should help.
- Future research: more painless evaluations (ongoing evaluations, evaluations in the background, automatic evaluations, other ideas?).


Page 5: Architecture and Evaluation Issues

Meaningfulness of Results
- Methods and metrics need to be better defined. This is context dependent; we won't always be testing the same things.
- Methodology development is required, especially variable isolation and controls.
- Characterization of users needs to improve: test users on related tasks, and track user-system interaction more closely (this requires better task decompositions).
- Need to relate results back to both system and user performance.
- Scope of evaluations needs to widen: need more data (more users, longer durations).
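One way to make a metric "better defined" and explicitly context dependent is to state its rubric as data rather than leaving it implicit in a grader's head. The function and weights below are an invented example, not a metric from HPKB or RKF.

```python
def weighted_score(grades: dict, weights: dict) -> float:
    """Combine per-question grades (each 0.0-1.0) under explicit
    rubric weights; the weights are where context dependence lives."""
    if grades.keys() != weights.keys():
        raise ValueError("every graded question needs a weight")
    total = sum(weights.values())
    return sum(grades[q] * weights[q] for q in grades) / total

grades = {"q1": 1.0, "q2": 0.5, "q3": 0.0}   # hypothetical grading
weights = {"q1": 2.0, "q2": 1.0, "q3": 1.0}  # q1 counts double in this context
print(weighted_score(grades, weights))       # 0.625
```

Publishing the weights alongside the score is a small step toward the variable isolation and controls the slide calls for: two evaluations can then be compared question by question.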
