Clarifying the Concept and Context of Content Validation


Industrial and Organizational Psychology, 2 (2009), 497-500.
Copyright 2009 Society for Industrial and Organizational Psychology. 1754-9426/09

Clarifying the Concept and Context of Content Validation

BRIAN H. KIM
Occidental College

FREDERICK L. OSWALD
Rice University

Correspondence concerning this article should be addressed to Brian H. Kim, Department of Psychology, Occidental College, 1600 Campus Rd, F-11, Los Angeles, CA 90041-3314. E-mail: briankim@oxy.edu

Murphy (2009) argues convincingly that although content validation commonly enhances the apparent job relevance and legal defensibility of personnel selection systems, its influence on criterion-related validity is often theoretically and empirically unsupported. In part, this might be because the term "content validation" often reflects a mix of conceptual and legal concerns (American Educational Research Association, 1999; Buster, Roth, & Bobko, 2005; Kleiman & Faley, 1978; Wollack, 1976), and that may obscure those aspects of content validation that improve the test validation process. Murphy's main thesis is that, at its worst, the requirement for establishing a "manifest relationship" dictated by the Civil Rights Act of 1964 implies that either (a) the content of relatively abstract selection tests must appear superficially relevant to job performance (e.g., a spatial test for machinists should contain pictures of gears instead of pictures of abstract shapes, even when the underlying questions are the same) or (b) the content of relatively concrete selection tests must match criterion content very closely, if not literally. Murphy correctly claims that neither situation guarantees high levels of validity (also see Buster et al., 2005; SIOP Principles, 2003).

However, we argue that, at its best, content validation is not a narrow prescription for demonstrating superficial linkages between predictor and criterion content. Instead, it represents a broader process using supported theories, past research, and job-related information (e.g., SME data, job analysis, and the O*NET) (a) to determine constructs and construct relationships relevant to personnel selection and (b) to evaluate how the content of selection and criterion measures are faithful to those constructs (Binning & Barrett, 1989). From this perspective, content validation offers a mechanism for improving criterion-related validity by measuring constructs important to selection (i.e., reducing deficiencies) and reducing the measurement of irrelevant factors (i.e., reducing contamination; Hinkin & Tracey, 1999; Little, Lindenberger, & Nesselroade, 1999). A content validation process that leads to incorporating more job-relevant and psychometrically sound predictor and criterion measures into a personnel selection system should add real-world value by reducing actual selection errors (i.e., selecting more of the applicants you want and fewer that you don't). Correlations alone do not provide convincing validation evidence without a clear rationale underlying the constructs and measures they describe.

Furthermore, two points relevant to Murphy's conclusions deserve further elaboration. First, the usefulness of specific versus general test content clearly depends on the purpose of selection. General cognitive ability (or psychometric g) measures typically produce practically significant levels of criterion-related validity for predicting task performance across most jobs, if not all of them (Schmidt & Hunter, 1998). Conversely, a measure saturated with specific content about job tasks could have zero validity if applicants would not have knowledge of such content until posthire (e.g., entry-level sales positions, where product-specific knowledge is learned in training). In support of Murphy's assertions, measuring job knowledge in this case would contribute significantly less to validity than would predictors of knowledge acquisition, such as general cognitive ability (particularly fluid intelligence, reflecting logical and spatial relationships) and motivational characteristics (e.g., Deadrick, Bennett, & Russell, 1997; Eyring, Johnson, & Francis, 1993; Kanfer & Ackerman, 1989; Yeo & Neal, 2004).

That said, evidence of positive manifold, or the fact that ability constructs are hierarchically related and are highly correlated (Carroll, 1993), provides no guarantee that specific ability measures are interchangeable predictors of job performance (e.g., substituting a verbal ability measure for a spatial ability measure). If two ability measures correlate .50 (the level that Murphy claimed reflected high positive manifold) and one has a criterion-related validity of .45, then mathematically, the criterion-related validity of the other predictor could take on values anywhere from -.55 to +.99 (see Stanley & Wang, 1969), with the largest amount of incremental validity possible at these extremes. Murphy points to a wealth of accumulated evidence for a lack of differential validity across jobs for ability measures, which cannot be ignored, but these results may reflect the lack of precision in administrative ratings of job performance, the complexity of the jobs investigated, and the lack of information on mediating variables (e.g., job knowledge and motivation) as much as the effects of positive manifold, if not more so.

Our second point is an implication of the first. Despite strong validity generalization evidence, the level of criterion-related validity achieved by using general mental ability measures alone often leaves much variance in job performance to be predicted, particularly for complex jobs (Hunter & Hunter, 1984). Complex jobs may require applicants at the outset to possess job-specific knowledge and skills that are highly specialized (e.g., neurosurgeon, industrial-organizational psychologist) and indicative of not only general cognitive ability but also years, if not decades, of motivation invested in a specific knowledge or skill domain (Ackerman, 1996; Cattell, 1971); furthermore, research shows that positive manifold between ability measures tends to be lower among individuals with higher levels of g (Detterman & Daniel, 1989; for an example specific to the ASVAB, see Legree, Pifer, & Grafton, 1996). Thus, specific knowledge and skills captured by content-validated measures (e.g., knowledge or work-sample tests) may predict variance in performance that more general measures cannot capture (e.g., resumes, license/certification, and interview information) for complex jobs.

More generally, without content validation one may overlook personality, motivation, and interest constructs shown to predict a vast array of performance criteria recognized during the past 15 to 20 years, such as organizational citizenship behavior (e.g., altruism and civic virtue; see Hoffman, Blair, Meriac, & Woehr, 2007) and counterproductive work behavior (e.g., absenteeism and theft; see Gruys & Sackett, 2003). Although ability tests clearly predict whether applicants can do technical tasks, other measures may be better suited to predict what applicants will do, and both may be necessary for understanding performance behaviors (see Dilchert, Ones, Davis, & Rostow, 2007; Dudley & Cortina, 2008). Personality and biodata measures tend to predict the latter, and the incremental validity of situational judgment tests over ability and noncognitive measures (Oswald, Schmitt, Kim, Ramsay, & Gillespie, 2004) may in part be the result of asking applicants what they would typically do in specific, job-relevant situations (Motowidlo, Hooper, & Jackson, 2006).

In conclusion, the purpose of our commentary was to share Murphy's skepticism about the usefulness of mechanically linking content between predictor and criterion measures, while also exploring the merits of content validation as a broader process that draws on sources of job-relevant knowledge, theory, and expertise that bolster substantive explanations about underlying psychological work-related constructs and their relationships. The former is clearly not a substitute for construct or criterion-related validity (Guion, 1977; Landy, 1986), but the latter clearly supplements them. Judgments about appropriate predictor constructs and their content should not be made independent of an understanding of the job or the desired knowledge, skills, abilities, and other characteristics of job incumbents. Validity coefficients for a set of general cognitive measures that are robust and practically significant do not preclude the need to understand how selection for job-specific knowledge and interest will influence specific job functions. We worry that an overreliance on the validity of general cognitive ability may slow the development of other validation efforts to predict a greater portion of the performance domain.

Content validation is a legitimate and valuable process intimately tied to a job analysis of person, task, and situational constructs and characteristics for improving the selection of applicants (Prien, 1977). We (like Guion, 1977, 1978) are sensitive to the fact that experts often disagree, theory is not perfect, and information and interpretations from job analysis and empirical research are not without fault. But ultimately, both expert consensus and points of disagreement arrived at through content validation efforts should be useful for informing both criterion-related validities and validation research, thereby improving the personnel selection process as a whole.

References

Ackerman, P. L. (1996). A theory of adult intellectual development: Process, personality, interests and knowledge. Intelligence, 22, 227-257.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478-494.

Buster, M. A., Roth, P. L., & Bobko, P. (2005). A process for content validation of education and experience-based minimum qualifications: An approach resulting in federal court approval. Personnel Psychology, 58, 771-799.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.

Cattell, R. B. (1971). Abilities: Their structure, growth and action. Amsterdam: North-Holland.

Deadrick, D. L., Bennett, N., & Russell, C. J. (1997). Using hierarchical linear modeling to examine dynamic performance criteria over time. Journal of Management, 23, 745-757.

Detterman, D. K., & Daniel, M. H. (1989). Correlations of mental tests with each other and with cognitive variables are highest for low IQ groups. Intelligence, 13, 349-359.

Dilchert, S., Ones, D. S., Davis, R. D., & Rostow, C. D. (2007). Cognitive ability predicts objectively measured counterproductive work behaviors. Journal of Applied Psychology, 92, 616-627.

Dudley, N. M., & Cortina, J. M. (2008). Knowledge and skills that facilitate the personal support dimension of citizenship. Journal of Applied Psychology, 93, 1249-1270.

Eyring, J. D., Johnson, D. S., & Francis, D. J. (1993). A cross-level units-of-analysis approach to individual differences in skill acquisition. Journal of Applied Psychology, 78, 805-814.

Gruys, M. L., & Sackett, P. R. (2003). Investigating the dimensionality of counterproductive work behavior. International Journal of Selection and Assessment, 11, 30-42.

Guion, R. M. (1977). Content validity: The source of my discontent. Applied Psychological Measurement, 1, 1-9.

Guion, R. M. (1978). "Content validity" in moderation. Personnel Psychology, 31, 206-213.

Hambrick, D. Z., Pink, J. E., Meinz, E. J., Pettibone, J. C., & Oswald, F. L. (2008). The roles of ability, personality, and interests in acquiring current events knowledge: A longitudinal study. Intelligence, 36, 261-278.

Hinkin, T. R., & Tracey, J. B. (1999). An analysis of variance approach to content validation. Organizational Research Methods, 2, 175-186.

Hoffman, B. J., Blair, C. A., Meriac, J. P., & Woehr, D. J. (2007). Expanding the criterion domain? A quantitative review of the OCB literature. Journal of Applied Psychology, 92, 555-566.

Hunter, J. E., & Hunter, R. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74, 657-690.

Kleiman, L. S., & Faley, R. H. (1978). Assessing content validity: Standards set by the court. Personnel Psychology, 31, 701-713.

Landy, F. J. (1986). Stamp collecting versus science: Validation as hypothesis testing. American Psychologist, 41, 1183-1192.

Legree, P. J., Pifer, M. E., & Grafton, F. C. (1996). Correlations among cognitive abilities are lower for higher ability groups. Intelligence, 23, 45-57.

Little, T. D., Lindenberger, U., & Nesselroade, J. R. (1999). On selecting indicators for multivariate measurement and modeling with latent variables: When "good" indicators are bad and "bad" indicators are good. Psychological Methods, 4, 192-211.

Motowidlo, S. J., Hooper, A. C., & Jackson, H. L. (2006). Implicit policies about relations between personality traits and behavioral effectiveness in situational judgment items. Journal of Applied Psychology, 91, 749-761.

Murphy, K. R. (2009). Content validation is useful for many things, but validity isn't one of them. Industrial and Organizational Psychology: Perspectives on Science and Practice, 2, 453-464.

Oswald, F. L., Schmitt, N., Kim, B. H., Ramsay, L. J., & Gillespie, M. A. (2004). Developing a biodata measure and situational judgment inventory as predictors of college student performance. Journal of Applied Psychology, 89, 187-207.

Prien, E. P. (1977). The function of job analysis in content validation. Personnel Psychology, 30, 167-174.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262-274.

Society for Industrial and Organizational Psychology, Inc. (2003). Principles for the validation and use of personnel selection procedures (4th ed.). Bowling Green, OH: Author.

Stanley, J. C., & Wang, M. D. (1969). Restrictions on the possible values of r12, given r13 and r23. Educational and Psychological Measurement, 29, 579-581.

Wollack, S. (1976). Content validity: Its legal and psychometric basis. Public Personnel Management, 5, 397-408.

Yeo, G. B., & Neal, A. (2004). A multilevel analysis of effort, practice, and performance: Effects of ability, conscientiousness, and goal orientation. Journal of Applied Psychology, 89, 231-247.
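A note on the Stanley and Wang (1969) result cited in the text: the range of possible validities for the second predictor follows from the requirement that a 3 x 3 correlation matrix be positive semidefinite, which constrains r23 to r12*r13 plus or minus sqrt((1 - r12^2)(1 - r13^2)). The following short sketch (not part of the original article; the function name is ours) reproduces the approximately -.55 to +.99 range given r12 = .50 and r13 = .45:

```python
import math

def r23_bounds(r12: float, r13: float) -> tuple[float, float]:
    """Range of r23 values consistent with a valid (positive
    semidefinite) 3x3 correlation matrix, given r12 and r13
    (cf. Stanley & Wang, 1969)."""
    center = r12 * r13
    halfwidth = math.sqrt((1 - r12**2) * (1 - r13**2))
    return center - halfwidth, center + halfwidth

# Two ability measures correlating .50, one with criterion validity .45:
lo, hi = r23_bounds(0.50, 0.45)
print(f"{lo:.3f} to {hi:.3f}")  # -0.548 to 0.998 (roughly -.55 to +.99)
```

As the text notes, incremental validity over the first predictor is largest when r23 sits near either extreme of this interval.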
