
QUT Digital Repository: http://eprints.qut.edu.au/29748

Gilmore, Linda and Campbell, Marilyn (2009) Competence in Intelligence Testing : A Training Model for Postgraduate Psychology Students. The Australian Educational and Developmental Psychologist, 26(2). pp. 165-173.

© Copyright 2009 Australian Psychological Society.



RUNNING HEAD: COMPETENCE IN INTELLIGENCE TESTING

Competence in Intelligence Testing:

A Training Model for Postgraduate Psychology Students

Linda Gilmore and Marilyn Campbell

School of Learning and Professional Studies

Queensland University of Technology, Brisbane, Australia

Address for correspondence:
Dr Linda Gilmore
School of Learning and Professional Studies
Queensland University of Technology
Kelvin Grove Q 4059
Tel: +617 3138 9617
Fax: +617 3138 3987
Email: [email protected]


Abstract

The assessment of intellectual ability is a core competency in psychology. The

results of intelligence tests have many potential implications and are used frequently

as the basis for decisions about educational placements, eligibility for various

services, and admission to specific groups. Given the importance of intelligence test

scores, accurate test administration and scoring are essential; yet there is evidence of

unacceptably high rates of examiner error. This paper discusses competency and

postgraduate training in intelligence testing and presents a training model for

postgraduate psychology students. The model aims to achieve high levels of

competency in intelligence testing through a structured method of training, practice

and feedback that incorporates peer support, self-reflection and multiple methods for

evaluating competency.

Key words: intelligence testing, competence, training, Wechsler, WISC, WAIS.


Introduction

Assessment of Intelligence and its Implications

The assessment of intellectual ability is a core competency in psychology. Unlike most other aspects of the psychologist’s role, intelligence testing may not be performed by other professionals. Psychologists acquire

the essential knowledge and skills for competent intelligence testing through their

years of study in areas such as human development, psychometrics and professional

ethics, combined with specific advanced study of assessment and supervised practice

in test administration and interpretation.

The results of intelligence tests have many potential implications and are used

frequently as the basis for decisions about educational placements, eligibility for

various services, and admission to specific groups such as Mensa or the Special

Olympics. The most significant decision to be based on test scores is literally one of

life or death. In some states of the USA, the death penalty is waived for criminals who

are assessed as having an intellectual disability; that is, a score of approximately 70 to

75 or below (Duvall & Morris, 2005). IQ scores are also admissible in courts of law in

Australia. Although important decisions are rarely based solely on the results of

intelligence testing, IQ scores are usually the most important contributor. Given the

importance of intelligence test scores, accurate test administration and scoring are

essential. Errors may have significant consequences including misclassification of

individuals and loss of public confidence in the profession (Belk, LoBello, Ray, &

Zachar, 2002). Psychologists whose assessment practice is found to be unsatisfactory

risk legal proceedings and withdrawal of their professional registration.

Clearly, it is essential that postgraduate psychology programs provide rigorous

training in the assessment of intelligence. Yet surprisingly few descriptions of


teaching methods and strategies are available in the literature. The present paper

discusses competency and postgraduate training in intelligence testing and presents a

training model for postgraduate psychology students that was developed by the

authors.

Competence in Intelligence Testing

Competent administration of standardized tests of intelligence requires both

general and specific knowledge and skills. Basic knowledge about principles of

standardized testing provides the essential foundation for understanding the

requirements of specific tests and for developing the necessary skills to administer,

score and interpret those tests. Although psychology students usually acquire an

understanding of psychometrics and standardized test administration at the

undergraduate level, they seldom have the opportunity to apply this theoretical

knowledge until they enrol in postgraduate courses where specific skills in practical

test administration and interpretation are acquired.

Professional organizations such as the Australian Psychological Society (APS) and the various state registration

boards require psychologists to attain competency in intelligence test administration

(Kendall, Jenkinson, de Lemos, & Clancey, 1997). Test developers and publishers

also have a strong interest in ensuring competent use of their instruments. As far as

can be determined, however, none of these groups provides specific guidelines to

prescribe exactly how competence with intelligence testing should be defined,

achieved or assessed although there are numerous documents related to ethical

standards in testing and the qualifications of testers. Among universities there seems

to be no consensus on the form of training and supervision that should be provided,

the criteria for competence, and exactly how competence should be demonstrated.

Although it would be unreasonable to demand totally error-free administration from


students, how many errors should be permitted, and what type of errors? Because of

financial pressures in higher education, time and cost are factors that undoubtedly

influence the depth of training and the standards set for competency. The

considerable variation among universities in methods for training postgraduate

students in intelligence testing means there is also likely to be considerable variation

in the competence of psychologists in this core area of practice.

Published reports have provided consistent evidence of high error rates in

Wechsler scale administration and scoring among both graduate psychology students

and experienced practitioners (Belk et al., 2002; LoBello & Holley, 1999; Loe,

Kadlubek, & Marks, 2007; Simons, Goddard, & Patton, 2002; Slate & Jones, 1990a;

Whitten, Slate, Jones, & Shine, 1994). The overwhelming majority of protocols are

flawed, with an estimated 50 to 80% having errors that produce inaccurate composite

scores (Loe et al., 2007). Scoring, clerical and computation mistakes are reported

frequently, as well as failures to record responses or query appropriately. While

scoring errors are common (especially on the verbal subtests Similarities, Vocabulary

and Comprehension), of more significance are clerical and computation inaccuracies

because these are likely to have a more dramatic impact on results (Belk et al., 2002;

Hopwood & Richard, 2005; LoBello & Holley, 1999) and because they are usually

caused simply by carelessness rather than lack of knowledge or skill.

Most studies have reported error rates on the basis of examinations of completed

protocols, but many administration errors are not visible on record forms. Fantuzzo,

Sisemore and Spradlin (1983) argued that “scrutiny of tester competencies must begin

with test administration” (p. 224) but, without direct observation, there is no way of

identifying administration errors such as failure to provide accurate and verbatim

instructions, incorrect placement of task materials, or variations from the required rate


for presenting digits or letters. Deviations from standardization procedures were

reported by psychologists in one recent study (Wolfe-Christensen & Callahan, 2008)

with, for instance, fewer than half saying that they correctly presented two pages of

stimuli simultaneously on the WAIS-III Symbol Search subtest.

Given the large number of scoring and clerical errors that are routinely observed

on protocols, it is likely that administration errors also occur at an unacceptably high

rate. If these errors are not identified at an early stage, they may continue to occur and

become well-practised. The fact that there is no systematic external requirement for

monitoring the assessment practices of experienced psychologists makes it crucial that

rigorous training methods are employed during the postgraduate years to prevent

errors becoming established in the first place. The mistakes of experienced

psychologists are often attributed to lack of adequate training, especially in specialist

graduate psychology programs (Slate, Jones, & Covert, 1992), and psychologists

themselves have reported being insufficiently prepared in their postgraduate training

for administering assessment instruments in general (Muniz et al., 2001).

Training Methods

There is no doubt that intelligence testing is a demanding task on which examiner

competence is not easily achieved. It is puzzling then that so few reports of training

methods have been published. The limited literature has, however, produced one

consistent finding: repeated practice has little value as an instructional method (Belk

et al., 2002; Loe et al., 2007). Slate and colleagues (Slate & Jones, 1990a; Slate et al.,

1992) found that, rather than developing competency, the traditional method of

unsupervised practice administrations led students to practise errors. It seems to take 8

to 10 administrations before improvement is typically seen (Patterson, Slate, Jones, &

Steger, 1995; Slate & Jones, 1990b; Slate & Jones, 1990c) although even 10


repetitions may not be sufficient (Slate et al., 1992) and administration errors may

persist (Klassen & Kishor, 1996) or even recur after competency has apparently been

achieved (Boehm, Duker, Haesloop, & White, 1974). Findings about the usefulness of

practice administrations in combination with feedback are mixed, with some evidence

of improvements in scoring accuracy on verbal subtests (Linger, Ray, Zachar,

Underhill & LoBello, 2007) and greater success if feedback is both specific and

immediate (Slate, Jones, & Murray, 1991).

Despite calls for the formulation of rigorous training programs (Belk et al., 2002),

descriptions of systematic methods are rare. Fantuzzo et al. (1983) provided details of

a training package that comprised a structured sequence of activities including study

of the test manual, scoring of observed administrations using a specially developed

checklist, a lecture on the major pitfalls of test administration, practice

administrations and feedback. Blakey, Fantuzzo, Gorsuch and Moon (1987)

described a peer-mediated training package for the WAIS-R using trained peers to

give item-specific feedback with the use of a checklist, and Thompson and Hodgins

(1994) developed a checklist to reduce clerical and computation errors.

Surprisingly, postgraduate assessment courses often have no explicitly defined

criteria for competency (Boehm et al., 1974) and fewer than half of the courses on

intelligence testing surveyed by Cody and Prieto (2000) included a final evaluation of

student competency. Programs typically required students to complete an average of

three WAIS and three WISC administrations, but the survey did not obtain

information about techniques used for teaching test administration or methods for

assessing student competency.

The scarcity of published papers in this area suggests that trainers are either unaware of, or unconcerned about, the challenges of achieving competence in intelligence testing. Perhaps there is simply an acceptance within training organizations that administration errors are inevitable, or an assumption that test administration, like interpretation, will improve with experience and that students will eventually administer tests in an error-free way once they are working as practitioners; research findings suggest that such a belief is misguided. It is possible

that many trainers recognize the importance of achieving errorless administration, but

simply lack the time and resources to monitor the achievement of competency.

Nonetheless, it is still surprising that the literature is so sparse in this area, particularly

given the substantial body that has documented training methods for achieving

competency in other areas of psychology, such as counselling. In 1983 Fantuzzo et al.

commented on the dearth of literature about strategies for training and evaluating

skills in the assessment of intelligence. Now, 25 years later, the same comments are

still being made (Loe et al., 2007).

Training Methods used by Australian Universities

A brief survey of training practices in APS-accredited postgraduate psychology

programs was conducted by the authors. Attempts were made to contact all staff who

coordinated assessment units in which the WAIS or WISC were taught at Australian

universities. During telephone interviews, 28 lecturers responded to a structured series

of questions about their training methods. Among the universities, 12 taught the

WAIS only, four focused on the WISC, and 12 included both instruments. The

number of students varied considerably, but the majority of classes contained 10 to 30

students.

Methods for training and evaluating competency varied from course to course.

Only four of the 28 courses devoted more than 6 hours to teaching the WAIS and/or

WISC. The most common teaching method was viewing live or videotaped


demonstrations of test administration (25 of the 28 courses) although often only a

selection of subtests was observed. Other frequent strategies included reading the test

manual, discussing administration issues in a group and practising the test, sometimes

under supervision.

In evaluating student competence on the WAIS and/or WISC, the most common

requirement was submission of a videotaped administration and completed protocol

(17 universities). At five universities, students administered the test to a tutor or

lecturer who either randomly selected certain subtests for administration or requested

a full administration. In one case, competency assessment involved live

administration to a child volunteer with the lecturer observing, and another required

live administration to a student peer. At four of the universities, competency was

assessed only through observations by supervisors of administrations in practice

settings. The use of administration checklists, either from Sattler (2008) or

specifically developed, was common. However, few respondents mentioned explicit

criterion levels for competency. Many remarked that there was insufficient time and,

in some cases, resources (e.g., multiple test kits) for teaching the Wechsler tests

adequately.

A Training Model for Postgraduate Students

Framework

We developed a training model that is now being used in a postgraduate course on

assessment within an Educational and Developmental Psychology master’s program.

The course runs during the first 13-week semester of the two-year program, and there

are typically 14 to 16 students, most of whom have no prior practical experience of

intelligence testing. WISC-IV training is a major part of the course, which aims to

provide a sound framework of knowledge about assessment processes and practices.


Our training model develops competency in test administration and scoring

through the gradual reduction of errors, with assessment of competency occurring in

multiple and varied ways. The basic aim is to reduce errors and produce fluent

administrations through a sequenced and systematic method of training, practice and

feedback. Consistent with the reflective practitioner model of training (Sheikh,

Milne, & Macgregor, 2007; Schön, 1987), the goal is to encourage self-critical and

self-reflective evaluations of skill development. The model is guided by our belief

that thorough learning of the first test instrument provides a strong basis for mastery

of all subsequent tests, and that achievement of competency on this first test is

essential before other instruments are introduced.

Components

The training program has a number of inter-related components.

1. Introductory phase. Following a preliminary lecture in which formal and

informal methods of assessment are discussed, students are introduced to standardized

assessment with the Peabody Picture Vocabulary Test, Fourth Edition. We believe the

concept of standardized administration is fundamental and more easily demonstrated

initially on a less complex instrument than the WISC-IV. In this phase there is also

consolidation and direct application of the understanding of psychometrics that

students have already gained in their undergraduate studies.

2. Training phase. In a full weekend workshop (14 hours) students undertake

systematic training on the WISC-IV. Each subtest is demonstrated by the lecturer,

and students practise in pairs with continual feedback from the lecturer. In the

following weeks, pairs continue to practise, with the expectation that they will spend a

minimum of 20 hours together going through the instrument. It is emphasized that

student pairs must administer to each other, with the student who is role-playing the


child providing feedback after every subtest administration. At this stage, allowing

sufficient time for practice is essential. After 2-3 weeks, there is a revision tutorial in

which the lecturer observes student pairs working together, gives feedback and

answers questions. The pairs continue to work together and then each student

videotapes a complete administration of the WISC with a child volunteer. Stopping

and re-starting the tape or later editing are not permitted.

3. Evaluation A. After recording the videotape, students complete a self-critique in

which they rate their own competency against the administrative checklist provided in

Sattler (2008) and write an overall evaluation. This technique promotes self-monitoring and self-reflection. The videotapes are then circulated amongst the group

so that each student critiques the administrations of two of their peers using the Sattler

checklist. Although students sometimes feel uncomfortable about criticizing their

peers, we see this component as an important exercise because much can be learned

through observing the performance of others and because peer support and mentoring

is an essential and ongoing part of professional life. The exercise also provides

practice in constructively critiquing others, and further reinforces the importance of

accurate test administration and scoring.

The third phase ends with the lecturers (the authors) meeting together to review all

student administrations. Because critiques have already been prepared by students, it

is not necessary for us to view videotapes in their entirety, thus making this

component more cost-effective. Using criteria developed specifically for the task,

marks are deducted according to the type of error. Major errors, which either

invalidate the test or lead to inaccurate scores (e.g., failure to obtain a basal or ceiling,

transferring scores incorrectly, and not including items before a basal in the total raw

score) are more heavily penalized than less serious errors (e.g., late presentation of


stimulus book for Vocabulary or an unnecessary query). If students have identified

their own errors, half marks are deducted since these errors are considered to be less

serious than those which a student does not self-identify. Students who fail because

they accumulate too many deductions are required to submit another videotape before

progressing to Evaluation B phase. If this second administration is also failed, then

the student does not pass the course and must re-enrol the following year.

The following two examples illustrate the criteria used to assess competence. Out

of 15 marks assigned for the task, Student A lost a total of 5 marks for the following

errors: failure to include items before the basal in the total raw score (1½ marks

deducted), setting out blocks on one trial without the required faces showing (¼

mark), being late to produce the stimulus book for Vocabulary (¼ mark), timing

Coding as 1 minute 20 seconds, rather than 120 seconds (2 marks), failure to query

verbal responses that should have been queried (2 x ¼ marks) and scoring errors on

verbal items (2 x ¼ marks). With a score of 10 out of 15, Student A exceeded the

required 50% criterion of 7.5 marks, and passed the administration. Student B, on the

other hand, made a larger number of errors, losing 9½ marks for the following

reasons: one instance of no basal being obtained (1 mark deducted), no ceiling

obtained on two verbal subtests (2 marks), one addition error (1 mark), timing error

on Block Design (½ mark), inclusion of points for one Block Design trial that was

completed correctly but not within the time limit (2 marks), digits presented in

incorrect order on one Digit Span trial (½ mark) and consistently presented too

quickly (½ mark), scoring errors on verbal items (4 x ¼ marks), continuation of

Matrix Reasoning beyond the discontinue point and subsequent inclusion of items

beyond ceiling in raw score (1 mark). With a result of only 5½ marks out of 15,


Student B failed and was required to make a second attempt, which she subsequently

passed.
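The marking arithmetic above is straightforward to mechanize. The sketch below is a minimal illustration, in Python, of the deduction-based scheme described in this section (15 marks, error-specific deductions, half deductions for self-identified errors, a 50% pass criterion); the function, data structure and error labels are our own hypothetical choices for illustration rather than part of the published criteria.

TOTAL_MARKS = 15.0
PASS_CRITERION = 0.5 * TOTAL_MARKS  # the 50% criterion described above (7.5 marks)

def mark_administration(errors):
    """Score one administration from a list of (description, deduction, self_identified)."""
    deducted = 0.0
    for description, deduction, self_identified in errors:
        # Self-identified errors are treated as less serious: half the usual deduction.
        deducted += deduction / 2 if self_identified else deduction
    score = TOTAL_MARKS - deducted
    return score, score >= PASS_CRITERION

# Student A's errors as reported above (none were self-identified).
student_a = [
    ("items before the basal omitted from raw score", 1.5, False),
    ("blocks set out without required faces showing", 0.25, False),
    ("late presentation of Vocabulary stimulus book", 0.25, False),
    ("Coding timed as 1 min 20 s instead of 120 s", 2.0, False),
    ("missed query on a verbal response", 0.25, False),
    ("missed query on a verbal response", 0.25, False),
    ("scoring error on a verbal item", 0.25, False),
    ("scoring error on a verbal item", 0.25, False),
]

print(mark_administration(student_a))  # (10.0, True): 10/15 meets the 7.5-mark criterion

Running the same function over Student B's deductions (totalling 9.5 marks) returns 5.5 and a fail, matching the worked example.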

4. Final training phase. Taking account of feedback from the previous phase,

students continue to practise in pairs, focusing in particular on their identified areas of

weakness and supporting each other with feedback.

5. Evaluation B. Individually, students administer the WISC to one of the authors

who plays the part of a (well-behaved) child and follows a predetermined script. The

script includes a range of child responses and behaviours that are encountered in

practice and which are likely to elicit examiner error. These include rotations in Block

Design, common verbal responses that are not shown in the manual but which require

querying or prompting, errors in copying designs on Coding, and examinee requests

for repetitions of digits or explanations of verbal questions that examiners must

respond to appropriately. Students are encouraged to learn from their mistakes and to

accept responsibility for them, a process that we believe is the essence of reflective

practice. Although administration of all subtests is preferable, time constraints

sometimes mean that only six of the 10 core subtests are selected by the lecturer for

administration.

This final component in the model is an essential one since students who assessed

a relatively “easy” child in the previous evaluation phase have not been presented

with many of the situations that potentially lead to examiner error. Students score

their protocols immediately following the administration, and feedback about errors is

provided as soon as possible afterwards. We use the same marking criteria as

previously. Students who fail are required to re-sit the competency assessment within

a fortnight, and an alternative script is available for this second administration. Failure

on the second attempt means that students must repeat the course.


Conclusions

The training model presented in this paper was developed in response to our

concerns about levels of competence in intelligence testing and standards in

postgraduate training. Throughout our professional careers, we have observed

examples of grave errors by experienced practitioners, including assertions that the

manual does not need to be used once initial competency has been attained, beliefs

that re-wording instructions or telling examinees the correct answers are acceptable

practices, and the inclusions of varying and inconsistent numbers of core and

supplementary subtest scores in the calculation of composite scores. Our WISC-IV

training program is structured and sequenced with clearly defined and measurable

criteria for competency. The key components are: intensive and comprehensive

training, practice with peer support and feedback, frequent feedback from lecturers,

multiple methods for demonstrating competence, systematic evaluation of

competence, self-reflection and peer-evaluation. The complexity of the program

requires a time commitment from staff that extends well beyond the usual workload

allocation for such training within university settings.

Evidence about the effectiveness of this model is now required to determine the

extent to which our program is successful in producing practitioners who achieve, and

maintain, high professional standards in intelligence testing. There is a clear need for

professional bodies to establish measurable competency criteria which can be applied

within programs that teach intelligence testing. Professional forums in which trainers

from different universities gather to share and brainstorm effective training methods

could be of considerable value. Training within postgraduate programs may not be

sufficient, however; the implementation of refresher courses and external monitoring

of the assessment practices of more experienced practitioners may also be necessary


to ensure that the highest possible levels of competency in intelligence testing are

maintained. Future research that focuses on the development and evaluation of

rigorous training strategies and the ongoing competency of psychologists who

undertake intelligence testing is imperative.


Acknowledgments

We would like to thank the lecturers from various Australian universities who

willingly shared their own methods of training with us.


References

Belk, M. S., LoBello, S. G., Ray, G. E., & Zachar, P. (2002). WISC-III

administration, clerical, and scoring errors made by student examiners. Journal of

Psychoeducational Assessment, 20, 290-300.

Blakey, W. A., Fantuzzo, J. W., Gorsuch, R. L., & Moon, G. W. (1987). A peer-mediated, competency-based package for administering and scoring the WAIS-R. Professional Psychology: Research and Practice, 18, 17-20.

Boehm, A. E., Duker, J., Haesloop, M. D., & White, M. A. (1974). Behavioral

objectives in training for competence in the administration of individual

administration tests. Journal of School Psychology, 12, 150-157.

Cody, M. S., & Prieto, L. R. (2000). Teaching intelligence testing in APA-accredited

programs: A national survey. Teaching of Psychology, 27, 190-194.

Duvall, J. C., & Morris, R. J. (2005). Mental retardation and the death penalty:

Current issues for attorneys and psychologists. In R. J. Morris (Ed.), Disability

research and policy: Current perspectives (pp. 205-220). Abingdon: Lawrence Erlbaum Associates.

Fantuzzo, J. W., Sisemore, T. A., & Spradlin, W. H. (1983). A competency-based

model for teaching skills in the administration of intelligence tests. Professional

Psychology: Research and Practice, 14, 224-231.

Hopwood, C. J., & Richard, D. C. S. (2005). Graduate student WAIS-III scoring

accuracy is a function of Full Scale IQ and complexity of examiner tasks.

Assessment, 12, 445-454.

Kendall, I., Jenkinson, M., de Lemos, M., & Clancey, D. (1997). Supplement to

guidelines for the use of psychological tests. Melbourne: Australian Psychological

Society.


Klassen, R. M., & Kishor, N. (1996). A comparative analysis of practitioners’ errors

on WISC-R and WISC-III. Canadian Journal of School Psychology, 12, 35-43.

Linger, M. L., Ray, G. E., Zachar, P., Underhill, A. T., & LoBello, S. G. (2007).

Decreasing scoring errors on Wechsler Scale vocabulary, comprehension, and

similarities subtests: A preliminary study. Psychological Reports, 101, 661-669.

LoBello, S. G., & Holley, G. (1999). WPPSI-R administration, clerical, and scoring

errors by student examiners. Journal of Psychoeducational Assessment, 17, 15-23.

Loe, S. C., Kadlubek, R. M., & Marks, W. J. (2007). Administration and scoring

errors on the WISC-IV among graduate student examiners. Journal of

Psychoeducational Assessment, 25, 237-247.

Muniz, J., Bartram, D., Evers, A., Boben, D., Matesic, K., Fernandez-Hermida, J. R.,

& Zaal, J. N. (2001). Testing practices in European countries. European Journal of

Psychological Assessment, 17, 201-211.

Patterson, M., Slate, J. R., Jones, C. H., & Steger, H. S. (1995). The effects of practice

administrations in learning to administer and score the WAIS-R: A partial

replication. Educational and Psychological Measurement, 55, 32-37.

Sattler, J. M. (2008). Assessment of children: Cognitive foundations. (5th ed.). San

Diego: J. M. Sattler.

Sheikh, A. I., Milne, D. L., & Macgregor, B. V. (2007). A model of personal

professional development in the systematic training of clinical psychologists.

Clinical Psychology & Psychotherapy, 14, 278-287.

Schön, D. A. (1987). Educating the reflective practitioner: Toward a new design for

teaching and learning in the professions. San Francisco: Jossey-Bass.

Simons, R., Goddard, R., & Patton, W. (2002). Hand-scoring error rates in

psychological testing. Assessment, 9, 292-300.


Slate, J. R., & Jones, C. H. (1990a). Identifying students’ errors in administering the

WAIS-R. Psychology in the Schools, 27, 83-87.

Slate, J. R., & Jones, C. H. (1990b). Examiner errors on the WAIS-R: A source of

concern. The Journal of Psychology, 124, 343-345.

Slate, J. R., & Jones, C. H. (1990c). Student error in administering the WISC-R:

Identifying problem areas. Measurement and Evaluation in Counseling and

Development, 23, 137-140.

Slate, J. R., Jones, C. H., & Covert, T. L. (1992). Rethinking the instructional design

for teaching the WISC-R: The effects of practice administrations. College Student

Journal, 26, 285-289.

Slate, J. R., Jones, C. H., & Murray, R. A. (1991). Teaching administration and

scoring of the Wechsler Adult Intelligence Scale-Revised: An empirical evaluation

of practice administrations. Professional Psychology: Research and Practice, 22,

375-379.

Thompson, A. P., & Hodgins, C. (1994). Evaluation of a checking procedure for

reducing clerical and computational errors on the WAIS-R. Canadian Journal of

Behavioural Science, 26, 492-504.

Whitten, J., Slate, J. R., Jones, C. H., & Shine, A. E. (1994). Examiner errors in

administering and scoring the WPPSI-R. Journal of Psychoeducational Assessment,

12(1), 49-54.

Wolfe-Christensen, C., & Callahan, J. L. (2008). Current state of standardization

adherence: A reflection of competency in psychological assessment. Training and

Education in Professional Psychology, 2(2), 111-116.