
The Teacher Educator, 44:217–231, 2009

Copyright © Taylor & Francis Group, LLC
ISSN: 0887-8730 print/1938-8101 online

DOI: 10.1080/08878730903180200

RESEARCH ARTICLE

LOST IN TRANSLATION: USING VIDEO ANNOTATION SOFTWARE TO EXAMINE HOW A CLINICAL SUPERVISOR INTERPRETS AND APPLIES A STATE-MANDATED TEACHER ASSESSMENT INSTRUMENT

MATTHEW JAMES MILLER and JOANNE CARNEY

Elementary Education, Western Washington University

Address correspondence to Matthew James Miller, Elementary Education, Western Washington University, MS 9090, 516 High Street, Bellingham, WA 98225, USA. E-mail: [email protected]

This case study examines the reasoning of a clinical supervisor as she assesses preservice teacher candidates with a state-mandated performance assessment instrument. The supervisor's evaluations were recorded using video annotation software, which allowed her to record her observations in real-time. The study reveals some of the inherent challenges in clinical supervision and the use of a state's mandated performance rubrics to evaluate teacher competencies. Findings indicate that the clinical supervisor found it difficult to interpret rubric criteria, often made tenuous claims about candidates' performance, and tended to require students to design lessons that were artificial demonstrations of mandated competencies. Findings also suggest that the difficulties faced by the clinical supervisor were likely connected to inadequate professional development regarding the use of the state-mandated performance assessment instrument. The article concludes with a discussion of the need for better professional development for clinical supervisors, given their important role in the professional development of tomorrow's teachers, and suggests other areas for future research.

Statement of the Issue

Researchers have found clear evidence that exemplary teachers have long-lasting and cumulative effects on student achievement (e.g., Johnson, Kahle, & Fargo, 2007; Darling-Hammond, 1997; NCTAF, 2003). With the recognition of the importance of teacher quality and the increase in state and national accountability measures to gauge teacher impact on student achievement (U.S. Department of Education, n.d.; Nelson, 2003; OSPI, 2007), increasing attention is being paid to the assessment of teacher candidates toward rigorous performance outcomes.

Many states are spearheading the effort to require preservice teachers to meet a range of performance expectations before licensure is granted (Nelson, 2003). The State of Washington, for example, uses an evaluative instrument entitled "The Washington State Performance-based Pedagogy Assessment of Teacher Candidates" to assess the teaching performance of candidates seeking professional licensure. Washington policy requires that a collection of rubrics tied to each standard be used by clinical supervisors on at least two occasions during student teaching (OSPI, 2004). The Performance-based Pedagogy Assessment (PPA) evaluates candidates as having "met standard" or "not met standard" on 57 performance criteria necessary to effectively plan and implement lessons. These criteria require observational or documentary evidence of a candidate's ability to do the following (a minimal sketch of this met/not-met structure appears after the list):

1. Set learning targets and align them to state standards.
2. Demonstrate a culturally responsive knowledge of students and their communities.
3. Plan and establish effective interactions with families to support student learning.
4. Design assessment strategies that measure student learning.
5. Design instruction based on research and principles of effective practice.
6. Align instruction with a teaching plan and communicate accurate content knowledge.
7. Enable students to participate in a learning community that supports student learning and well being.
8. Provide opportunities for students to engage in learning activities that are based on research and principles of effective practice.
9. Manage a classroom effectively.
10. Engage, with students, in activities that assess student learning.
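For readers who want a concrete picture of the instrument's binary structure, the following minimal sketch records summative judgments as a simple checklist. It is purely illustrative: the PPA is a paper rubric, and the abbreviated standard descriptions, names, and structure below are our paraphrases and assumptions, not the state's specification.

```python
# Hypothetical sketch only: the PPA is a paper instrument; this structure
# and these names are illustrative, not the state's specification.
PPA_STANDARDS = {
    1: "Set learning targets aligned to state standards",
    6: "Align instruction with the plan; communicate accurate content",
    8: "Engage students in research-based learning activities",
    # ... the full instrument spans 10 standards and 57 criteria
}

def not_yet_demonstrated(ratings: dict) -> list:
    """Return descriptions of standards rated 'not met standard'."""
    return [PPA_STANDARDS[s] for s, met in ratings.items() if not met]

# Example: one supervisor's met/not-met judgments for an observed lesson.
print(not_yet_demonstrated({1: True, 6: True, 8: False}))
# -> ['Engage students in research-based learning activities']
```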

Such state and national goals for teacher performance are admirable; a candidate who demonstrates these standards-based (i.e., INTASC, 1992) strands of teacher knowledge will most likely be a qualified beginning teacher. Yet the high-stakes assessment of these competencies is often done, as it is in Washington State, solely by the student teaching supervisor, and we know little about the manner in which supervisors apply the rigorous performance criteria in such rubrics.

Numerous research studies have indicated that the student teaching internship has a powerful impact on a novice teacher's self-identity and practices (Blocker & Mantle-Bromley, 1997; Borko & Putnam, 1998; Cochran-Smith, 1991; Feiman-Nemser & Buchmann, 1985; Miller, 2009). Supervisors, who are frequently retired teachers or administrators, are generally responsible not only for the summative assessments that grant licensure to candidates, but also for formative assessment and coaching as the student teaching internship progresses. Supervisors thus bear much of the responsibility for ensuring new teachers are competent and ready to engage in the rigorous and challenging profession of teaching. Given the importance of student teaching, clinical supervisors play a critical role in the implementation of the current educational reform agenda. Yet at the same time, clinical supervisors often work at the periphery of teacher education institutions, with limited access to professional development and interaction with peers (Slick, 1998).

Although clinical supervisors play an important role in the advancement of preservice teachers into the profession, little is known about the manner in which they navigate the challenges and dilemmas of their role and what personal criteria they bring to a key aspect of that role: engaging in the formative and summative assessment of teacher candidates. Similarly, we have virtually no detailed knowledge of how supervisors use mandated assessment instruments to evaluate candidates' performances and competencies. On what basis do they make their evaluative judgments, and how do they support their assertions? How do they "translate" rubric descriptors?

Introduction to This Study

This case study is focused on a clinical supervisor using a state-mandated rubric to evaluate preservice candidates' performances, recorded on video. Video annotation software allowed the clinical supervisor to record her observations in real-time through verbal and visual annotations, using a microphone and mouse gestures. The resulting annotations included the original teaching videotape, plus a "commentary track" that followed the action in real-time. In essence, the resulting digital artifact is similar to a DVD commentary track or the play-by-play commentary of a sporting event, with the added benefit of allowing users to indicate points of interest with mouse gestures.
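To make the shape of such an artifact concrete, the following minimal sketch shows one plausible way a timestamped commentary track with pointing gestures might be represented in code. The field names and types are hypothetical illustrations; they are not the actual Video Traces file format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Annotation:
    """One supervisor remark, anchored to a moment in the teaching video."""
    time_sec: float                       # offset into the original video
    speech: str                           # transcribed verbal comment
    pointer_xy: Optional[Tuple[int, int]] = None  # mouse-gesture location, if any

@dataclass
class AnnotatedVideo:
    """The original teaching video plus its real-time commentary track."""
    video_file: str
    annotations: List[Annotation] = field(default_factory=list)

    def remarks_between(self, start: float, end: float) -> List[Annotation]:
        """Return the remarks that follow the action within a time window."""
        return [a for a in self.annotations if start <= a.time_sec < end]
```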

The annotation software provided the researchers with a unique opportunity to capture the supervisor's actions as she observed student teachers and expressed her thoughts by means of the most common ways people communicate: through talking and pointing. These video annotations were done via the Video Traces software (see http://depts.washington.edu/pettt/projects/videotraces.html) developed by Dr. Reed Stevens from the University of Washington, a researcher in cognitive studies.

We anticipated that the clinical supervisor's use of the video annotation software would give us insights into the manner in which evaluators use a mandated teacher assessment instrument and how they support their assertions about candidates' performance in relation to the rubric criteria. We were also interested to investigate, in a preliminary way, the affordances of the software for video-based assessment of teaching. In data analysis we explored the following research questions:

How does a clinical supervisor use a mandated performance assessment instrument to evaluate teacher candidates? On what basis does she make her evaluative judgments and how does she support her assertions?

Previous research has pointed to inconsistencies and challenges inherent to clinical supervision (Borko & Mayfield, 1995; Hess, 2005; Slick, 1997). Yet the most crucial aspect of supervisors' work, their assessment of candidate performance during informal and formal observations, is currently under-researched.

The Problems of Clinical Supervision

Researchers have described how the clinical supervision of teacher candidates poses some knotty challenges. The few studies that have taken on clinical supervision point to the inherent difficulties that surface when supervisors evaluate teacher candidates toward performance criteria and expectations (Marshall, 2005; Slick, 1997, 1998). This is particularly true of clinical supervisors who are not regular college faculty, but are frequently drawn from the retired teacher and/or principal ranks. The few existing studies of clinical supervision indicate the many factors that can make traditional supervision ineffective, including inconsistency in supervision from supervisor to supervisor (Power & Perry, 2002), the marginalized position of clinical supervisors in teacher education programs (Slick, 1998), the over-emphasis on observing teaching practice and inattention to student learning (Paris & Gespass, 2001), inattention to or lack of subject matter content (Marshall, 2005), confusion over one's roles and responsibilities as a clinical supervisor (Slick, 1997), and the high-stakes nature of evaluation and problems with the use of required evaluation instruments (Marshall, 2005). This body of research sheds light on the "powerful and inevitable structural constraints" (Borko & Mayfield, 1995, p. 516) faced by clinical supervisors.

Holding the teacher-supervision and assessment framework together is the notion that there is a common conception of what constitutes teacher quality, and the assumption that all stakeholders in teacher evaluation agree on what represents quality in their professional judgment. Yet researchers have demonstrated that the assessment of teacher candidates can be "unpredictably personal" (Hess, 2005, p. 192) and not consistent from assessor to assessor. Studies have also indicated that clinical supervisors frequently have difficulty accurately evaluating candidates' teaching performances based on state standards and performance expectations (Marshall, 2005). Because of their disparate backgrounds and perspectives, the supervision of teacher candidates by clinical supervisors is often inconsistent.

Researchers have also described how supervisors' dual roles of "evaluators" and "coaches" work against their efficacy (Slick, 1997). As long as supervisors are responsible for evaluating candidates' performance toward required expectations, student teachers are likely to perceive supervisors in an assessment, rather than assistance, role. Teacher candidates' ultimate success often rests in the hands of their supervisors, and such high-stakes evaluation tends to shut down adult learning: candidates take fewer risks because they fear it will have a negative impact on their performance evaluation (Power & Perry, 2002). On the other end of the supervisory continuum, the desire to maximize comfort and minimize risks for student teachers often works against supervisors' impact on student teachers' growth. Slick (1997) noted that supervisors place a high priority on being extremely positive in their interactions with student teachers, in order to build the student teachers' confidence. Supervisors' evaluations often fail to give teachers appropriately critical feedback; they do not explicitly tell teacher candidates where they stand on articulated performance standards, do not give clear direction on the ways in which teachers can improve their performance, and often do not fully answer the question teachers really care about: How am I doing?

Although building up student teachers' confidence is certainly important, if this goal is given exclusive priority, it makes it unlikely that student teachers will be encouraged to take the risks that are an important part of developing teaching practice. At their best, student teachers' relationships with their clinical supervisors strike a balance among specific feedback toward performance criteria, suggestions regarding new ways to think about teaching and learning, and emotional support. When these conditions exist, "the potential of student teaching is realized; student teaching is teacher education" (Borko & Mayfield, 1995, p. 518). Yet this sophisticated and complex dynamic between student teachers and their supervisors is too seldom realized.

Research on clinical supervision alerts us to the many difficulties supervisors in teacher education face. Yet we still know few details about how particular supervisors reason in the moment as they use mandated performance assessment instruments to evaluate candidates toward licensure requirements. Whether or not their use of these instruments provides an accurate assessment of candidates' competency toward licensure goals is as yet undetermined.

Setting and Participants

This case study took place during a 10-week student teaching internship in an undergraduate teacher education program at a small university in the Pacific Northwest. A clinical supervisor, "Felicia," was asked to annotate the teaching videotapes of three preservice teachers. Felicia had spent her career as a special education teacher and administrator for upper elementary school and middle school, and in retirement was supervising teacher candidates.

As she provided her in-the-moment comments via the software, Felicia had access to the Washington PPA as a reference to evaluate the candidates' performance (see OSPI, 2004, pp. 36–42, for sample rubrics).

Prior to the study, Felicia had engaged in two training sessions to orient her to (a) the annotation software and (b) the PPA rubrics. The annotation software session included an introduction to and demonstration of the software, an opportunity to annotate a sample teaching video, and a question-and-answer session about the use of the software in conjunction with clinical supervision. The purpose of this training session was to prepare the supervisor to use the software effectively and to minimize her frustrations in using a new technology.

A training session devoted to the PPA rubric was conducted by the University's Office of Field Experiences (OFE). This training session included all of the clinical supervisors who were hired to supervise candidates. During the training session, the supervisors were introduced to the 10 individual performance rubrics that make up the PPA, and shown how to use the instrument to assess candidates' performance. At this training session, participants were also introduced to the sizable glossary of terms that supported the use of the assessment rubrics. This glossary, which contained definitions of such complex concepts as "multicultural perspective/approach," "perspective consciousness," and "transformative academic knowledge" (see OSPI, 2004, pp. 54–55), was intended to support a nuanced use of the instrument by supervisors and contribute to its reliability. As is often the case in a climate of limited resources for the professional development of clinical supervisors (see Marshall, 2005), this training toward the use of the performance rubrics was limited in its scope, lasting about one-half day.

Methods and Data Sources

We used qualitative research methodologies to investigate how the clinical supervisor used video annotation software as she evaluated the performance of three student teaching interns. Ethnographic observation (Miles & Huberman, 1994), interview (Merriam, 1998), and document collection (Strauss & Corbin, 1998) were useful to better understand the manner in which the participant supervised student teachers and uncover the details of her reasoning processes.

Data sources for this study included digital videotapes of the teacher candidates' lessons, the accompanying annotations created by the clinical supervisor as she viewed the candidates' teaching videotapes, and a semi-structured interview after the annotations were completed. The interview consisted of prompts such as "Describe your approach to clinical supervision," "Describe your process for using the PPA rubrics," "As you revisit this annotation of the candidate's performance, how would you describe the 'lens' you used during supervision?" and "What similarities and differences do you see with the other supervisor's comments?"

The original teaching videotapes, annotations, and focus group interview were transcribed so they could be coded and analyzed. Collecting this range of data was appropriate given the desire to gain an in-depth understanding of the supervisor's perspectives, her interpretation of various teaching and learning events, and her evaluation of candidates' performances in light of the state PPA rubric criteria. The video annotation and interview transcripts provided insights into the clinical supervisor's noticing behavior, the reasoning she used to support her assertions about particular teaching/learning events, and the manner in which the state's PPA instrument guided her assessment.

Analysis

Focusing on only one supervisor-participant in a case study enabled us to closely examine her verbalized assessment of student teachers, the "pointing" that occurred as she annotated the candidates' teaching videos, and her use of the state-mandated assessment rubrics.

Our first step in analyzing the data was to generate content logs while viewing the original classroom-based video annotations and the videotaped interview. This content log noted significant patterns and was the basis for our initial coding. HyperResearch software was used for more detailed coding of complete written transcripts. Gradually, initial codes were grouped into "code families" (Miles & Huberman, 1994, p. 57) as an iterative analysis began to take shape. Finally, we analyzed the data by means of a transcript matrix that displayed the original teaching transcript (with candidate and student dialogue) in the far left column, aligned with the annotations of the supervisor-participant in the next column, with links to our code families in the far right column.
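A minimal sketch of how such a three-column matrix could be assembled programmatically appears below. The inputs, timestamps, and function names are hypothetical illustrations, not the study's actual data or tooling; the alignment logic simply pairs each annotation with the transcript segment it follows in real-time.

```python
import csv

# Hypothetical inputs: (start_seconds, text) pairs; not actual study data.
teaching_transcript = [
    (0.0, "Candidate: Who can tell me what a tall tale is?"),
    (12.5, "Student: A story where everything is exaggerated."),
]
supervisor_annotations = [(13.0, "Good questioning technique here.")]
code_families = {13.0: ["questioning/discussion"]}  # keyed by annotation time

def segment_before(t, segments):
    """Find the transcript segment an annotation follows in real-time."""
    return max((s for s in segments if s[0] <= t), key=lambda s: s[0])

# Write the three-column matrix: transcript | annotation | code families.
with open("transcript_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["teaching transcript", "supervisor annotation", "code families"])
    for t, note in supervisor_annotations:
        _, dialogue = segment_before(t, teaching_transcript)
        writer.writerow([dialogue, note, "; ".join(code_families.get(t, []))])
```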

Findings

Using a Performance Assessment Instrument to Evaluate Teacher Candidates

Felicia did not use the PPA explicitly for establishing teaching goals with candidates at the beginning of the internship. She used the instrument exclusively for formal, summative evaluation of candidates' performance. Prior to assessing by means of the rubric, Felicia spoke of working with teacher candidates to target an upcoming lesson for formal PPA evaluation, asking them to make sure any competencies not yet demonstrated in the ongoing, formative assessments would appear in that PPA lesson:

First of all, the candidates would have a specific lesson that they were going to use ... [I would tell them] "This will be the lesson when I am observing the pedagogy [PPA]." Sometimes before they would [teach the PPA-focused lesson], I would [take the PPA rubric] and I would go through and check, "Okay, these are the areas that I don't have any concerns about. I'm seeing them in every lesson. [But] these are the areas [missing] ... how are you going to embed those in your lesson?"

In this way, candidates were encouraged to develop "exhibition lessons" that would demonstrate a wide variety of PPA-targeted competencies not noted in routine observation. The compressed nature of the student teaching experience and the limited time she was able to spend observing candidates was undoubtedly the major factor inducing Felicia to engage in this practice.


Rather than being focused on student learning goals, these exhibition lessons tended to be "stuffed" with teaching behaviors that needed to be demonstrated for the supervisor to be able to check off numerous PPA competencies. Not surprisingly, these lessons were frequently perceived as artificial by both supervisor and candidates. These lessons often had little to do with the regular curricular sequence in the student teacher's cooperating classroom. The need to devise what felt like artificial demonstrations was especially acute for particular dimensions of the rubric, and as the following section will show, Felicia was often forced to make tenuous claims to support her assertions about a candidate's competency.

Evaluating Candidates With PPA Criteria

Given her limited time and contact with candidates, Felicia reported finding it difficult to assess candidates' performance toward the rigorous performance expectations articulated on the state's PPA. These limitations caused her to stretch the evidence when she was unable to directly observe targeted competencies. She also formed her own, somewhat tenuous, interpretations of the performance criteria.

The following portions of an annotation done by Felicia after observing a candidate's videotaped fifth-grade literacy lesson on tall tales provide a window into one supervisor's reasoning process as she systematically addresses each of the PPA rubric standards, one by one. The intent of this analysis is to illustrate the ways an experienced clinical supervisor makes evaluative judgments, interprets rubric quality indicators, and supports her assertions. It is important to note that our analysis of Felicia's annotation is not intended to cast aspersions on her supervision or evaluative decision making. Rather, it is intended to feature how the intentions of the PPA's designers to assess candidates toward rigorous performance expectations frequently come up short when the instrument is implemented in the field.

After seeing the videotaped lesson in its entirety, Felicia begins her summative assessment of the candidate's performance toward the PPA requirements by verbalizing her intent: "What I want to do right now is look at the PPA and talk to that." She then progresses through each one of the standards in checklist fashion, noting the required performance criteria and evaluating the lesson she has just observed. In the tables that follow, each PPA standard heading is presented first, followed by the supervisor's evaluation of the teacher candidate's performance toward that standard. It is hoped that this data-sharing approach enables the reader to consider the supervisor's evaluative decision making in light of the standard's criteria.


TABLE 1 Felicia's PPA Standard #1 Evaluation

PPA Standard 1. The teacher candidate sets learning targets that address Washington State's Essential Academic Learning Requirements: (a) alignment of learning targets, (b) meaningfulness/importance, (c) developmental and instructional importance, (d) accuracy, and (e) multicultural perspectives.

Felicia's annotation: "You have set targets; you have made it very meaningful and very important for them. It is definitely developmentally appropriate and instructionally accurate. It does cross multi-cultural perspectives, because you are talking about John Henry."

Felicia began her evaluation of the candidate's performance with PPA Standard #1 (Table 1). In checklist fashion and without much explication, Felicia has quickly noted the presence of the Standard 1 rubric criteria, without any reference to the more detailed, nuanced dimensions of these criteria provided by the authors of the PPA. For example, Felicia asserts that the candidate has met the requirement toward multicultural perspectives because the lesson focuses on an African-American character from the classic tall tale John Henry. She is asserting that this performance demonstrates the following rubric criterion: "The (lesson's) learning targets are grounded in transformative multicultural knowledge, reasoning, performance skills, products, or dispositions." One might ask, "Is the John Henry tall tale what the PPA rubric developers had in mind as an example of rich multicultural knowledge?" Is a lesson on John Henry adequate evidence of a candidate's abilities to ground learning targets in transformative multicultural perspectives? Felicia's interpretation of the rubric criteria in this case seems highly problematic. One might theorize, in noting the dubious evidence she uses in her evaluative decision making on this standard, that Felicia simply is not aware of the full meaning of the term "multicultural perspectives" as intended by the PPA authors. Yet these policymakers had anticipated just such uncertainty on the part of clinical supervisors and included a detailed definition for multicultural perspective/approach in the PPA's glossary (see OSPI, 2004, p. 54). The glossary's nuanced and research-based definition of the concept of multiculturalism is not evidenced in Felicia's evaluation of the candidate's teaching. The analysis reveals a large gap between the explicit intentions of the PPA and its implementation by Felicia.

Later in her observation, Felicia evaluates the candidate's success with Standard 6, focusing on instructional alignment (Table 2). The PPA's exhaustive list of candidate competencies in this standard could hardly be met in a single lesson, even with multiple sources of evidence. Yet Felicia makes judgments about the candidate's competencies toward this standard based primarily on the very limited evidence of the candidate's interaction with a small group of children during a 20-minute lesson. As noted previously, Felicia had decided that the John Henry content of this lesson made it multicultural; she now determines that the lesson is interdisciplinary and culturally responsive because of one child's remark about how few girls there were in the story, reasoning that this is evidence of a gender-sensitive perspective.

TABLE 2 Felicia's PPA Standard #6 Evaluation

PPA Standard 6. The teacher candidate aligns instruction with the plan and communicates accurate content knowledge: alignment, meaningful opportunities to learn, accuracy, interdisciplinary instruction, and culturally-responsive and gender-sensitive instruction.

Felicia's annotation: "Ok, number six, 'aligning instruction': this lesson was interdisciplinary, it was culturally responsive and gender sensitive. I thought that it was interesting that one of the boys commented that there weren't very many girls in the story ... so that could be a gender equity issue, that they were sensitive to that."

Felicia's annotation related to Standard 8 (Table 3) reveals a similar cursory evaluation of a highly complex list of criteria. Rather than evaluating learning activities based on research and principles of effective practice, she focuses on easily observed teacher practices such as questioning, delivery, and pacing. When considering the criteria related to differentiation, Felicia hesitates, not knowing how this group was selected and what special learning needs these particular students might have. Although she could not be sure the instruction was properly differentiated, she nonetheless deduced it was effective based on students' engagement.

TABLE 3 Felicia's PPA Standard #8 Evaluation

PPA Standard 8. Students engage in learning activities that are based on research and principles of effective practice: questioning and discussion techniques, delivery and pacing, differentiated instruction, active learning, and technology.

Felicia's annotation: "On number eight, 'They engaged in learning activities that are based on research and principles of effective practice.' Your questioning and discussion techniques were, again, exemplary. Your delivery and pacing were really good. The instruction differentiated ... at this point, you have a group that ... I don't know how the groups were selected ... the instruction worked well for that group and they were certainly actively learning."

Some of Felicia's difficulties in this annotative sequence seem to have been caused by her inability to observe particular dimensions of performance identified in the rubric. In a later interview Felicia commented on this difficulty. She pointed out that some items on the rubric were not appropriate for all classroom environments (e.g., primary classrooms, special education settings), or were non-observable, either because performances in those domains occur before or after the school day, or because the performance involves candidate thinking rather than observable behaviors. Competencies related to parental communication, diversity, changed thinking, and instructional decisions based on formative assessment were almost impossible to observe directly during the course of a typical school day, given a supervisor's "outsider" status. For these criteria, Felicia would sometimes rely on cooperating teachers or the candidates themselves to verify they had demonstrated the competencies, or would make inferences from little or no evidence.

The sometimes simplistic and cursory rationales given by Felicia as she did her checklist application of the PPA may have been caused by the complexity of the rubric criteria, which were not adequately supported by the rubric glossary or the training she received in using the instrument. Clearly, her interpretations were not in accord with the intentions of the rubric designers.

Discussion

These findings provide insight into some of the challenges in the supervision and performance evaluation of teacher candidates toward rigorous performance outcomes. The clinical supervisor in our case study, who required student teachers to design lessons that were artificial demonstrations of mandated competencies, found it difficult to interpret rubric criteria and often made tenuous claims about candidates' performance. Findings also suggest that the difficulties faced by the clinical supervisor were likely connected to inadequate professional development regarding the use of the state-mandated performance assessment instrument.

Our analysis is not intended to be disparaging toward clinical supervisors, many of whom bring a wealth of experience and education to their supervision. Rather, our data suggest that two factors may contribute to inconsistent and inaccurate assessment. First, because of the limited number of observations they are able to do, supervisors are often required to draw on scant evidence from isolated teaching performances to support their judgments about whether candidates meet the performance criteria articulated in mandated rubrics. A second factor is that because clinical supervisors are frequently located at the "fringe" of teacher education, they often receive limited professional development. Too often, they are left to their own devices in interpreting teacher performance standards and applying complex, jargon-laden rubric criteria to their performance observations. In such circumstances, supervisors connect their substantial classroom and supervisory experience to the indicators on mandated performance rubrics in rather idiosyncratic ways and improvise in assessment areas where they have limited or no experience.

We now understand that teaching is a rigorous, demanding, and complex profession. If we wish to ensure that each new teacher demonstrates the rigorous performance standards we consider essential for meeting the learning needs of all children, the findings in this study challenge teacher education institutions and policymakers to better support the important work of clinical supervisors. In most teacher education and certification systems, supervisors play a crucial role in assessing and coaching teacher candidates. When policymakers commission and mandate large-scale performance assessments for teacher licensure, they need to make sure rubric indicators are clearly written and that evaluators have sufficient opportunity to assess those criteria. Teacher education programs and policymakers also need to make sure supervisors are being provided with the professional development and support they need.

This study's analysis suggests video annotation software might offer some affordances for supervisors and others who assess teacher performance. In a focus group interview, Felicia and another participating supervisor noted how the annotation software helped them notice more and different aspects of teaching and learning during their observations, enabled them to provide in-the-moment feedback on classroom-based activity in ways not traditionally possible during "live" classroom observations, and allowed them to strive for better reliability in their observations by comparing their own evaluative decisions with those of others. We plan to continue analyzing our data to identify the possible affordances of video annotation software for teacher assessment.

The findings also support the need for further research into the different perspectives and lenses that supervisory figures and the candidates themselves bring to their observations of teaching performance. We are currently analyzing a longitudinal data set of annotations created by additional participants: teacher candidates, cooperating teachers, school content specialists, and college faculty. We are hopeful that the comparative analysis of these stakeholders' annotations and the changes in their annotations over time will shed light on the unique perspectives different observers bring to their observation and give us insights into the potential for this software to enhance teacher assessment and coaching.

Acknowledgments

This study was supported by a grant from Washington State's Public Education Advisory Board and the Washington State Office of Superintendent of Public Instruction.

References

Blocker, L. S., & Mantle-Bromley, C. (1997). PDS versus campus preparation: Through the eyes of the students. The Teacher Educator, 33(2), 70–89.

Borko, H., & Mayfield, V. (1995). The roles of the cooperating teacher and university supervisor in learning to teach. Teaching and Teacher Education, 11(5), 501–518.

Borko, H., & Putnam, R. T. (1998). The role of context in teacher learning and teacher education. In Contextual teaching and learning: Preparing teachers to enhance student success in and beyond school (Information Series No. 376). Columbus: Ohio State University, Center on Education and Training for Employment.

Cochran-Smith, M. (1991). Reinventing student teaching. Journal of Teacher Education, 42(2), 104–118.

Darling-Hammond, L. (1997). Doing what matters most: Investing in quality teaching (NCTAF Research Report). New York: National Commission on Teaching and America's Future.

Feiman-Nemser, S., & Buchmann, M. (1985). Pitfalls of experience in teacher preparation. In J. Raths & L. Katz (Eds.), Advances in teacher education (Vol. 2, pp. 61–73). Norwood, NJ: Ablex.

Hess, F. (2005). The predictable, but unpredictably personal, politics of teacher licensure. Journal of Teacher Education, 56(3), 192–198.

Interstate New Teacher Assessment and Support Consortium (INTASC). (1992). Model standards for beginning teacher licensing, assessment and development: A resource for state dialogue. Retrieved October 25, 2008, from http://www.ccsso.org/content/pdfs/corestrd.pdf

Johnson, C., Kahle, J., & Fargo, J. (2007). Effective teaching results in increased science achievement for all students. Science Education, 91(3), 371–383.

Marshall, K. (2005). It's time to rethink teacher supervision and evaluation. Phi Delta Kappan, 86(10), 727–735.

Merriam, S. B. (1998). Qualitative research and case study applications in education (2nd ed.). San Francisco: Jossey-Bass.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage.

Miller, M. (2009). Problem-based conversations: Using preservice teachers' problems as a mechanism for their professional development. Teacher Education Quarterly, 35(4), 77–98.

National Commission on Teaching and America's Future (NCTAF). (2003). No dream denied: A pledge to America's children (NCTAF Research Report). Washington, DC: Author.

Nelson, T. (2003). In response to increasing state and national control over the teacher education profession. Teacher Education Quarterly, 30(1), 9–72.

Office of Superintendent of Public Instruction (OSPI). (2004). Performance-based pedagogy assessment of teacher candidates. Retrieved June 8, 2009, from http://www.k12.wa.us/certification/profed/pubdocs/PerfBasedPedagogyAssessTchrCand6-2004SBE.pdf

Office of Superintendent of Public Instruction (OSPI). (2007). Essential academic learning requirements and grade level expectations. Retrieved January 10, 2009, from http://www.k12.wa.us/CurriculumInstruct/EALR_GLE.aspx

Paris, C., & Gespass, S. (2001). Examining the mismatch between learner-centered teaching and teacher-centered supervision. Journal of Teacher Education, 52(5), 398–412.

Power, B., & Perry, C. (2002). True confessions of student teaching supervisors. Phi Delta Kappan, 83(5), 406–413.

Slick, S. (1997). Assessing versus assisting: The supervisor's roles in the complex dynamics of the student teaching triad. Teaching and Teacher Education, 13(7), 713–726.

Slick, S. (1998). The university supervisor: A disenfranchised outsider. Teaching and Teacher Education, 14(8), 821–834.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research. Thousand Oaks, CA: Sage.

U.S. Department of Education. (n.d.). No child left behind. Retrieved October 25, 2008, from http://www.ed.gov/nclb/landing.jhtml
