Validity: Is the Test Appropriate, Useful, and Meaningful?
Properties of Validity
Does the test measure what it purports to measure?
Validity is a property of inferences that can be made.
Validity should be established over multiple inferences.
Evidence of Validity
Meaning of test scores
Reliability
Adequate standardization and norming
Content validity
Criterion-related validity
Construct validity
Content Validity
How well does the test represent the domain?
Appropriateness of items
Completeness of items
How the items assess the content
Do the Items Match the Content?
Face validity: Is the item considered part of the domain?
Curriculum match: Do the items reflect what has been taught?
Homogeneity with the test: Is the item positively correlated with the total test score? (Minimum .25, point-biserial correlation r_pb)
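The item-total check above can be sketched in a few lines. This is a hypothetical illustration with invented scores, not part of any published test: a dichotomous item (0 = wrong, 1 = right) is correlated with each student's total score, and items below about .25 are flagged for review.

```python
import math

def point_biserial(item, totals):
    """r_pb between a 0/1 item and continuous total scores."""
    n = len(item)
    mean1 = sum(t for i, t in zip(item, totals) if i == 1) / item.count(1)
    mean0 = sum(t for i, t in zip(item, totals) if i == 0) / item.count(0)
    p = item.count(1) / n          # proportion answering correctly
    q = 1 - p
    mean_all = sum(totals) / n
    sd = math.sqrt(sum((t - mean_all) ** 2 for t in totals) / n)
    return (mean1 - mean0) / sd * math.sqrt(p * q)

# Invented data: one item's responses and eight students' total scores.
item = [1, 1, 0, 1, 0, 0, 1, 1]
totals = [38, 41, 22, 35, 27, 19, 40, 33]
r_pb = point_biserial(item, totals)
print(f"r_pb = {r_pb:.2f}, keep item: {r_pb >= 0.25}")
```

Here students who got the item right also earned high totals, so the item clears the .25 floor comfortably.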
Are the Total Items Representative?
Are items included representing various parts of the domain?
Are different parts differentially represented?
How well are the parts represented?
How Is the Content Measured?
Different types of measures: multiple choice, short answer; performance-based vs. factual recall
Measures should match the type of content taught
Measures should match HOW the content was taught
Criterion-related (CR) Validity
How well does performance on the test reflect performance on what the test purports to measure? Expressed as a correlation coefficient (r_xy)
Based on concurrent CR validity and predictive CR validity
Concurrent Criterion-related Validity
How well does performance on the test estimate knowledge and/or skill on the criterion measure? Does the reading test estimate the student's current reading performance?
Compares performance on one test with performance on other, similar tests (e.g., KTEA with Woodcock-Johnson)
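As a sketch of the comparison just described, concurrent validity can be estimated as the Pearson correlation between scores on two tests given at about the same time. The scores below are invented for illustration; they stand in for a new reading test and an established criterion measure.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented paired scores for eight students on two reading tests.
new_test = [85, 72, 90, 66, 78, 95, 60, 81]
established = [82, 70, 93, 64, 80, 91, 63, 77]
r_xy = pearson_r(new_test, established)
print(f"concurrent validity r_xy = {r_xy:.2f}")
```

A high r_xy here would support the claim that the new test estimates the same current performance the established test measures.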
Predictive Criterion-related Validity
Will performance on a test now be predictive of performance at a later time?
Will a score derived now from one test be as accurate as a score derived from another, later test?
Will a student's current reading scores accurately reflect reading measured at a later time by another test?
Criterion-related Validity Should
Describe criterion measures accurately
Provide rationales for choices
Provide sufficient information to judge adequacy of the criterion
Describe adequately the sample of students used
Include analytical data used to determine predictiveness
Criterion-related Validity Should
Basic statistics should include: number of cases, reasons for eliminating cases, central tendency estimates
Provide analysis of the limits on generalizability of the test: what kinds of inferences about the content can be made
Construct Validity
How validly does the test measure the underlying constructs it purports to measure?
Do IQ tests measure intelligence? Do self-concept scales measure self-concept?
Definition of Construct
A psychological or personality trait (e.g., intelligence, learning style), or
A psychological concept, attribute, or theoretical characteristic (e.g., problem solving, locus of control, or learning disability)
Ways to Measure Construct Validity
Developmental change: determining expected differences among identified groups (assuming content validity and reliability)
Convergent/divergent validity: high correlation with similar tests and low correlation with tests measuring different constructs
Ways to Measure Construct Validity
Predictive validity: high scores on one test should predict high scores on similar tests.
Accumulation of evidence (failing to disprove): If the concept or trait tested can be influenced by intervention, intervention effects should be reflected in pre- and posttest scores. If the test score should not be influenced by intervention, changes should not be reflected in pre- and posttest scores.
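The pre/post check above amounts to comparing mean scores before and after an intervention. A minimal sketch, with invented scores for six students, is:

```python
# If the trait is teachable, an intervention should raise posttest scores;
# if the trait should be stable, pre/post means should stay close.
pre = [52, 48, 60, 55, 45, 58]
post = [63, 59, 71, 66, 57, 70]

mean_pre = sum(pre) / len(pre)
mean_post = sum(post) / len(post)
gain = mean_post - mean_pre
print(f"mean gain = {gain:.1f}")
```

A clear gain is evidence consistent with the construct being intervention-sensitive; no gain on a trait claimed to be stable would likewise fail to disprove the construct interpretation.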
Factors Affecting Validity
Unsystematic error: lack of reliability
Systematic error: bias
Reliability Effects on Validity
The validity of a measure can NEVER exceed the measure's reliability.
Reliability measures error; validity measures expected traits (content, constructs, criteria).
r_xy = r_x(t)y(t) × √(r_xx × r_yy)
That is, the observed validity coefficient (r_xy) equals the correlation between true scores, attenuated by the square root of the product of the two measures' reliabilities.
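The attenuation relationship can be shown numerically. The reliability and true-score values below are invented for illustration: the observed validity coefficient is the true-score correlation shrunk by the square root of the product of the two reliabilities, so even a perfect true-score correlation is capped below 1.

```python
import math

def observed_validity(r_true, r_xx, r_yy):
    """r_xy = r_x(t)y(t) * sqrt(r_xx * r_yy)."""
    return r_true * math.sqrt(r_xx * r_yy)

r_true = 0.80  # assumed correlation between true scores
r_xx = 0.90    # assumed reliability of the test
r_yy = 0.70    # assumed reliability of the criterion

r_obs = observed_validity(r_true, r_xx, r_yy)
ceiling = observed_validity(1.0, r_xx, r_yy)  # cap when r_true = 1
print(f"observed r_xy = {r_obs:.3f}")
print(f"validity ceiling = {ceiling:.3f}")
```

This is why improving a test's reliability raises the best validity it can possibly show.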
Systematic Bias
Method of measurement
Behaviors in testing
Item selection
Administration errors
Norms
Who is Responsible for Valid Assessment?
The Test Author: authors are responsible for ensuring and publishing validation.
The Test Giver: test administrators are responsible for following procedures outlined in administration guidelines.
Guidelines for Giving Tests
Exact administration: read the administration instructions; note procedures for establishing baselines; note procedures for individual items; practice giving the test.
Appropriate pacing: develop fluency with the test.