SJTs: Avoiding a Grand Failure

DESCRIPTION

This is a presentation on situational judgment tests, created for a discussion on selection planning.

TRANSCRIPT

Page 1: Avoiding A Grand Failure

DAVID DEGEEST, 06J:278 STAFFING

APRIL 13, 2010

SJTs: Avoiding a Grand Failure

Page 2: Avoiding A Grand Failure

WHY ARE PEOPLE SO INTERESTED?

The Big Picture of SJT lit

Page 3: Avoiding A Grand Failure

The claims about SJTs

“They seem to represent PSYCHOMETRIC ALCHEMY” (Landy, 2007)

Adverse impact is down, validity is up

Assessees like them

They seem to address relevant KSAOs

They assess soft skills and tacit knowledge

They provide incremental validity above GMA and personality for predicting college GPA (Oswald et al., 2004)

Some SJTs have demonstrated criterion-related validities as high as r = .36 (Pereira and Harvey, 1999)

They measure tacit knowledge and “non-academic intelligence” (Sternberg et al., 1995)

Page 4: Avoiding A Grand Failure

What is an SJT?

Situational Judgment Tests (SJTs) or Inventories (SJIs) are psychological tests that present respondents with realistic, hypothetical scenarios and ask them to choose an appropriate response.

SJTs are often identified as a type of low-fidelity simulation (Motowidlo, 1990).

SJTs can be designed to predict job performance, managerial ability, integrity, personality, and, apparently, a variety of other constructs.

Page 5: Avoiding A Grand Failure

Example of an SJT item from Becker (2005)

11. You’re retiring from a successful business that you started, and must now decide who will replace you. Two of your children want the position and would probably do a fine job. However, three non-family employees are more qualified. Who would you most likely put in charge?

A. The best performing non-family member, because the most qualified person deserves the job.

B. The lowest performing non-family member, because this won’t hurt your children’s feelings.

C. The highest performing child, because you have the right to do what is best for your kids.

D. The child you love the most, as long as he or she is able to do the job.

Page 6: Avoiding A Grand Failure

History of SJTs

First recorded SJT: George Washington University Social Intelligence Test (1926)

Some usage during WWII by military psychologists

1990: Motowidlo’s research resurrected interest in SJTs with the idea of the “low-fidelity simulation”

Now commonly used in industry as a “customized” tool by organizations, consultants, etc.

Takeaway: There is a lot of perceived promise and sunk cost in SJT research.

Page 7: Avoiding A Grand Failure

WHAT THE HECK IS AN SJT?

Construct Validity and the Development of SJTs

Page 8: Avoiding A Grand Failure

Item Characteristics

McDaniel et al. (2005) claim that SJTs have eight differentiating characteristics:

Test fidelity
Stem length
Stem complexity
Stem comprehensibility
Nested stems
Nature of responses
Response instructions
Degree of item heterogeneity

There are no prescribed standards for developing an SJT

Page 9: Avoiding A Grand Failure

Item Characteristics

Examples of response options:

What is the best answer?
What would you most likely do?
Rate each response for effectiveness
Rate each response on the likelihood that you would engage in the behavior

Knowledge vs. behavioral tendency instructions

The dichotomization issue
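To make the knowledge vs. behavioral-tendency contrast concrete, here is a minimal sketch (Python, with hypothetical field and method names, not drawn from any published SJT) of one item scored both ways: dichotomously against a keyed best answer, and on graded expert effectiveness ratings, which is what the dichotomization issue is about.

```python
# Minimal, hypothetical representation of one SJT item with two scoring rules.
from dataclasses import dataclass

@dataclass
class SJTItem:
    stem: str                        # scenario text shown to the respondent
    options: dict[str, str]          # option letter -> response text
    best_answer: str                 # key used under knowledge instructions
    effectiveness: dict[str, float]  # expert effectiveness rating per option

    def score_knowledge(self, choice: str) -> int:
        """Dichotomous scoring: 1 only if the keyed best answer is chosen."""
        return int(choice == self.best_answer)

    def score_tendency(self, choice: str) -> float:
        """Graded scoring: the expert-rated effectiveness of the chosen option,
        which preserves information that dichotomization throws away."""
        return self.effectiveness[choice]

item = SJTItem(
    stem="You are retiring and must choose a successor...",
    options={"A": "Best-performing non-family member",
             "B": "Lowest-performing non-family member",
             "C": "Highest-performing child",
             "D": "Most-loved capable child"},
    best_answer="A",
    effectiveness={"A": 4.2, "B": 1.1, "C": 2.8, "D": 2.0},
)
print(item.score_knowledge("C"))  # 0 under knowledge instructions
print(item.score_tendency("C"))   # 2.8 under tendency-style scoring
```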

Page 10: Avoiding A Grand Failure

Item Characteristics and Construct Validity

Construct heterogeneity: most items tend to correlate with GMA, Agreeableness, Conscientiousness, or Emotional Stability (McDaniel, 2005)

Ployhart and Ehrhart (2003) note that because SJTs measure multiple constructs, it is hard to compare differences across studies

Takeaway: SJTs are best described as a method, not a construct (Schmitt & Chan, 2006)

Page 11: Avoiding A Grand Failure

EXCITING FINDINGS FROM RESEARCH ON SJTS

The promise of SJTs

Page 12: Avoiding A Grand Failure

Generalizability

McDaniel et al. (2007) meta-analytically demonstrated that SJTs have incremental validity of:

.03 to .05 over GMA
.06 to .07 over the Big Five
.01 to .02 over a GMA/Big Five composite
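Those incremental validity figures are gains in explained variance when SJT scores are added to a model that already contains GMA and Big Five predictors. A minimal sketch of that computation on simulated data (all effect sizes illustrative, not the McDaniel et al. estimates):

```python
# Hierarchical regression sketch: delta R^2 from adding an SJT predictor.
import numpy as np

rng = np.random.default_rng(0)
n = 500
gma = rng.normal(size=n)
big5 = rng.normal(size=(n, 5))
sjt = 0.5 * gma + rng.normal(size=n)                  # SJT partly loaded on g
perf = 0.4 * gma + 0.2 * big5[:, 0] + 0.15 * sjt + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(np.column_stack([gma, big5]), perf)
r2_full = r_squared(np.column_stack([gma, big5, sjt]), perf)
print(f"delta R^2 from adding the SJT: {r2_full - r2_base:.3f}")
```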

McDaniel et al. (2001) showed that SJTs are generalizable predictors of job performance: the 90% credibility value (CV) in the meta-analysis did not contain zero
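For reference, in Hunter-Schmidt style meta-analysis the 90% credibility value is the 10th percentile of the estimated true-validity distribution; the slide's "did not contain zero" amounts to that lower bound staying positive. A sketch with illustrative numbers (not the published McDaniel et al., 2001 estimates):

```python
# 90% credibility value = mean true validity minus 1.28 SDs (10th percentile).
mean_rho, sd_rho = 0.26, 0.10           # illustrative values only
cv_90 = mean_rho - 1.28 * sd_rho
print(f"90% credibility value: {cv_90:.2f}")  # > 0 supports generalizability
```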

Potosky and Bobko (2004) found a .84 score-equivalence correlation between an SJT administered via paper-and-pencil and one administered over the Internet, with no effects attributable to beliefs about computer efficacy

Takeaway: multiple meta-analyses have demonstrated the generalizability of SJTs in predicting job performance.

Page 13: Avoiding A Grand Failure

Variability in SJTs

Lievens and Sackett (2006) showed that video-based SJTs for interpersonal skills have higher predictive validity than written SJTs.

McDaniel et al. (2007) showed that reliabilities for SJTs can range from .63 to .88. The meta-analysis refers to alpha, but other reliability measures matter.

Takeaway: Effects of variations in level of fidelity offer interesting possibilities for research.

Page 14: Avoiding A Grand Failure

Assessment Reactions and Face Validity

Chan & Schmitt (1997) showed that B-W differences in test performance and face validity reactions were lower for video-based SJTs than pencil-and-paper tests Race X Method interaction attributable to reading

comprehension differences in subgroups Increasing fidelity increased mean performance on SJT

Chan (1997) showed that paper-and-pencil SJTs are more consistent with beliefs, values, and expectations of whites. Moving to video-based SJT increased validity perceptions

for both whites and blacks Bauer and Truxillo (2006) asserts that SJTs always have

better face validity than do cognitive and personality measures.

Takeaways: SJTs are useful in terms of face validity and justice perceptions, particularly high-fidelity (video) simulations.

Page 15: Avoiding A Grand Failure

THE TROUBLE WITH…

Problems with SJTs

Page 16: Avoiding A Grand Failure

The problem of g

• Nguyen et al. (2005) found that SJT scores under knowledge instructions correlated .56 with GMA, while scores under behavioral instructions correlated .38 with GMA.

• Peeters and Lievens (2005) found that faking-good instructions produced differences in means and criterion-related validities across subgroups. “Specifically, the fakability of SJTs might depend on their correlation with one particular construct, namely, cognitive ability (see Nguyen & McDaniel, 2001).” (p. 73)

• Schmidt and Hunter (2003) found low discriminant validity between SJTs and job knowledge tests.

• Retesting issue: “If subgroup differences on a test exist, policies that permit retests by candidates who were unsuccessful on the test might inflate calculations of adverse impact.” (Lievens et al., 2005, p. 1005)

Takeaway: If the degree of fakability of an SJT depends on its GMA load, SJTs might just be contaminated g tests or low-reliability job knowledge tests.
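Because the retesting quote above turns on how adverse impact is calculated, a minimal sketch of the four-fifths rule arithmetic may help; the counts are hypothetical, and the closing comment notes how counting retests as new applications can distort the ratio:

```python
# Four-fifths rule: ratio of minority to majority selection rates.
def adverse_impact_ratio(hired_min, applied_min, hired_maj, applied_maj):
    return (hired_min / applied_min) / (hired_maj / applied_maj)

ratio = adverse_impact_ratio(hired_min=12, applied_min=40,
                             hired_maj=45, applied_maj=100)
print(f"AI ratio = {ratio:.2f}  (conventionally flagged if below 0.80)")
# If unsuccessful candidates retest and each attempt counts as a new
# application, the denominators grow unevenly across subgroups, which is
# how retest policies can inflate adverse impact calculations
# (the Lievens et al., 2005 point quoted above).
```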

Page 17: Avoiding A Grand Failure

Faking

Nguyen et al. (2005) found d = .34 for honest instructions and d = .15 for faking
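The d values on this slide are standardized mean differences. A brief sketch of that statistic on simulated data (the groups and numbers are illustrative, not a reanalysis of Nguyen et al.):

```python
# Cohen's d: mean difference in pooled-standard-deviation units.
import numpy as np

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.3, size=150)  # e.g., scores in one condition
group_b = rng.normal(loc=0.0, size=150)  # e.g., scores in another
print(f"d = {cohens_d(group_a, group_b):.2f}")
```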

Ployhart and Ehrhart (2003) also note that behavioral response instructions are both more prone to faking and more subject to reliability problems

Hooper et al. (2006) note that the fragmentation of the literature has made a meta-analytic study of this issue impossible.

Page 18: Avoiding A Grand Failure

Response Instructions

Ployhart and Ehrhart (2003) found that response instructions had dramatic effects on the validity, reliability, and performance of SJTs, and showed that the dimensionality of an SJT is crucial to determining which reliability estimate to use.

McDaniel et al. (2007) found meaningful differences between means for tests with different behavioral and knowledge instructions.

Lievens and Sackett (2009) found no meaningful differences between means in a high-stakes testing environment with medical school applicants.

The last two studies found that knowledge instructions for an SJT increased the scores’ correlation with a GMA measure.

Takeaway: meta-analytic integration of these results is needed, but the primary research base does not yet support one.

Page 19: Avoiding A Grand Failure

What is the reliability for an SJT?

Bess (2001) points out the elephant in the room: “SJTs by definition are multidimensional and therefore internal consistency is not an appropriate measure of reliability” (p. 29).

Schmitt and Chan (1997) also note this problem.

Examples of reliability estimates:

Ployhart and Ehrhart (2003) used split-half estimates to get reliabilities of .67 and .68.

Lievens and Sackett (2009) found low alphas for their SJT (.55-.56).

Lievens and Sackett (2007) noted that generating alternate forms is difficult for SJTs, given the contextual specificity of items; this makes parallel-forms reliability impractical.

Takeaway: no one is quite sure how to systematically assess the reliability of SJT measures.
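To show why the choice of estimator matters, here is a minimal sketch of the two estimates named on this slide, Cronbach's alpha and a Spearman-Brown corrected split-half coefficient, computed on simulated item data; for a genuinely multidimensional SJT, alpha is the one that understates reliability, which is Bess's (2001) point.

```python
# Two reliability estimates on a (n_respondents x n_items) score matrix.
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / total score variance)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def split_half(items):
    """Odd-even split-half correlation with the Spearman-Brown correction."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

rng = np.random.default_rng(2)
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(scale=1.5, size=(300, 12))  # 12 noisy items
print(f"alpha      = {cronbach_alpha(items):.2f}")
print(f"split-half = {split_half(items):.2f}")
```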

Page 20: Avoiding A Grand Failure

WHAT DO WE KNOW ABOUT SJTS?

Conclusions

Page 21: Avoiding A Grand Failure

Things we know fairly clearly

SJTs are primarily a method, not a construct.

SJTs have demonstrated generalizable meta-analytic incremental validity over GMA and Big Five single and composite measures in predicting job performance.

Most SJTs are correlated with GMA to a varying extent and share some benefits and disadvantages with GMA.

SJTs often correlate with the “Big 3” traits noted earlier: Agreeableness, Conscientiousness, and Emotional Stability.

Page 22: Avoiding A Grand Failure

McDaniel et al.’s (2006) integrated model for SJTs

Page 23: Avoiding A Grand Failure

WHERE DO RESEARCHERS GO FROM HERE? PRACTITIONERS?

Future Directions for SJTs

Page 24: Avoiding A Grand Failure

Ployhart & Weekly (2006) Agenda for Research

1. Construct validity
Correlates are known, but the nomological net is uncertain
SJTs targeted to constructs: the “holy grail” (p. 348)
What exactly is “judgment”?

2. Understanding SJT structure
How do we build SJTs to get construct homogeneity?
How do we enhance the reliability of these measures?

3. More experimentation/micro research
Correlation studies and meta-analyses show generalizability
Experimental studies can enhance understanding

Page 25: Avoiding A Grand Failure

Ployhart & Weekly (2006) Agenda for Research

Need for Theoretical Development
Will help to integrate SJTs into mainline I/O research
Theory of situation perception and judgment research

The limits of SJTs
Little is known about applicant conditions for SJTs
Generalizability in international contexts?

Expansion of organizational contexts for SJTs
Possibility of use in training and development contexts (Fritzsche et al., 2006)
Use in team contexts (Mumford et al., 2006)

Page 26: Avoiding A Grand Failure

Other possibilities

Personality and Self-Reports

Ployhart and Ryan (2004) proposed integrating personality measures with an SJT to predict customer service orientation.

Hogan’s (2005) work suggests that it may be possible to build a conscientiousness measure via an SJT that is more resistant to faking than current self-reports.

Bledow and Frese’s (2009) work also supports using SJTs to create non-self-report measures of constructs like personal drive or initiative.

Teams

Mumford et al.’s (2006) work suggests building an SJT that could ably predict a “team player” mentality.

Page 27: Avoiding A Grand Failure

References

Page 28: Avoiding A Grand Failure

Bauer, T. N., & Truxillo, D. M. (2006). Applicant reactions to situational judgment tests: Research and related practical issues. In Situational Judgment Tests: Theory, Measurement, and Application.

Becker, T. E. (2005). Development and validation of a situational judgment test of employee integrity. International Journal of Selection and Assessment, 13(3), 225-232.

Bess, T. L. (2001). Exploring the dimensionality of situational judgment: Task and contextual knowledge. Unpublished master's thesis. Accessed 4/08/10 at: http://scholar.lib.vt.edu/theses/available/etd-04122001-183219/unrestricted/sjtdimensionality.pdf

Bledow, R., & Frese, M. (2009). A situational judgment test of personal initiative and its relation to performance. Personnel Psychology, 229-258.

Chan, D. (1997). Racial subgroup differences in predictive validity perceptions on personality and cognitive ability tests. Journal of Applied Psychology, 82(2), 311-320.

Fritzsche, B. A., Stagl, K. C., Salas, E., & Burke, C. S. (2006). Enhancing the design, delivery, and evaluation of scenario-based training: Can situational judgment tests contribute? In Situational Judgment Tests: Theory, Measurement, and Application.

Hogan, R. (2005). In defense of personality measurement: New wine for old whiners. Human Performance, 18(4), 31-41.

Hooper, A., Cullen, M., & Sackett, P. (2006). Operational threats to the use of SJTs: Faking, coaching, and retesting issues. In Situational Judgment Tests: Theory, Measurement, and Application.

Landy, F. J. (2007). The validation of personnel decisions in the twenty first century: Back to the future. In S. M. McPhail (Ed.), Alternate validation strategies: Developing and leveraging existing validity evidence (pp. 409-426). San Francisco: Jossey-Bass.

Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A comparison in terms of predictive validity. Journal of Applied Psychology, 91, 1181-1188.

Lievens, F., Sackett, P., & Buyse, T. (2009). The effects of response instructions on situational judgment test performance and validity in a high-stakes context. Journal of Applied Psychology, 94(4), 1095-1101.

McDaniel, M., & Nguyen, N. (2001). Situational judgment tests: A review of practice and constructs assessed. International Journal of Selection and Assessment, 9(1/2), 103-113.

McDaniel, M., Finnegan, E., Morgeson, F., Campion, M., & Braverman, E. (2001). Use of situational judgment tests to predict job performance: A clarification of the literature. Journal of Applied Psychology, 86(4), 730-740.

Page 29: Avoiding A Grand Failure

McDaniel, M., Whetzel, D., Hartman, N., Nguyen, N., & Grubb, W. L. (2006). Situational judgment tests: Validity and an integrative model. In Situational Judgment Tests: Theory, Measurement, and Application.

McDaniel, M., Hartman, N. S., Whetzel, D., & Grubb, W. L., III. (2007). Situational judgment tests, response instructions, and validity: A meta-analysis. Personnel Psychology, 60, 63-91.

Motowidlo, S. J., Dunnette, M. D., & Carter, G. W. (1990). An alternative selection procedure: The low-fidelity simulation. Journal of Applied Psychology, 75, 640-647.

Mumford, T. V., Campion, M., & Morgeson, F. P. (2006). Situational judgment tests in work teams: A team role typology. In Situational Judgment Tests: Theory, Measurement, and Application.

Nguyen, N. T., Biderman, M. D., & McDaniel, M. A. (2005). Effects of response instructions on faking a situational judgment test. International Journal of Selection and Assessment, 13, 250-260.

Oswald, F. L., Schmitt, N., Kim, B. H., Ramsay, L. J., & Gillespie, M. A. (2004). Developing a biodata measure and situational judgment inventory as predictors of college student performance. Journal of Applied Psychology, 89, 187-207.

Peeters, H., & Lievens, F. (2005). Situational judgment tests and their predictiveness of college students' success: The influence of faking. Educational and Psychological Measurement, 65, 70-89.

Ployhart, R. E., & Ryan, A. M. (2004, April). Integrating personality tests with situational judgment tests for the prediction of customer service performance. Paper presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Ployhart, R. E., & Weekley, J. (2003). Web-based and paper-and-pencil testing of applicants in a proctored setting: Are personality tests, biodata, and situational judgment tests comparable? Personnel Psychology, 56, 733-752.

Potosky, D., & Bobko, P. (2004). Selection testing via the Internet: Practical considerations and exploratory empirical findings. Personnel Psychology, 57, 1003-1034.

Schmidt, F., & Hunter, J. E. (2003). Tacit knowledge, practical intelligence, general mental ability, and job knowledge. Current Directions in Psychological Science, 2, 8-9.