How US Preservice Teachers ‘Read’ Classroom Performances


This article was downloaded by: [Universitaetsbibliothek Giessen]
On: 22 October 2014, At: 08:25
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954
Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Journal of Education for Teaching: International research and pedagogy
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/cjet20

How US Preservice Teachers 'Read' Classroom Performances
Dona M. Kagan (a) & Deborah J. Tippins (b)
(a) University of Alabama, 207 Graves Hall, Box 870231, Tuscaloosa, AL 35487, USA
(b) Science Education Department, University of Georgia, 212 Aderhold Hall, Athens, GA 30602, USA
Published online: 03 Aug 2006.

To cite this article: Dona M. Kagan & Deborah J. Tippins (1992) How US Preservice Teachers 'Read' Classroom Performances, Journal of Education for Teaching: International research and pedagogy, 18:2, 149-158, DOI: 10.1080/0260747920180204

    To link to this article: http://dx.doi.org/10.1080/0260747920180204






Journal of Education for Teaching, Vol. 18, No. 2, 1992

How US Preservice Teachers 'Read' Classroom Performances

DONA M. KAGAN
University of Alabama, 207 Graves Hall, Box 870231, Tuscaloosa, AL 35487, USA

DEBORAH J. TIPPINS
Science Education Department, University of Georgia, 212 Aderhold Hall, Athens, GA 30602, USA

ABSTRACT Thirty-seven preservice teachers viewed and evaluated three videotapes of actual classroom lessons. To reveal macrostructures that may have guided perception, we asked the novices to recall as much of a lesson as possible one week after evaluating a tape. Some of the novices then evaluated a live classroom performance. As a comparative gauge, 19 inservice teachers also evaluated the three videotaped lessons. Running notes (taken while viewing a video) suggested that the inservice teachers were able to render spontaneous functional interpretations of teacher behavior (a 'deep' reading), while the preservice teachers invariably described teacher behavior first, then only sometimes noted its function (a 'surface' reading). The preservice teachers defined good teaching from a student perspective in terms of fun and involvement, while the inservice teachers defined good teaching from a teacher perspective in terms of clear lesson structure. The preservice teachers appeared to use major activity structures as macrostructures to help interpret lessons. The task of evaluating a live classroom performance appeared to be more difficult and ambiguous than that of evaluating a videotaped lesson. In sum, results contribute to a growing body of literature that documents differences in how novice and experienced teachers process and interpret classroom stimuli.

    INTRODUCTION

Educational psychologists have found that novices bring preconceptions to every learning situation, and that those pre-existing beliefs serve as filters and building blocks of new knowledge (Posner et al., 1982). This appears to be equally true of preservice teachers, who enter their teaching programs with well-established beliefs about students and classrooms (Book et al., 1983; Feiman-Nemser et al., 1988; Hollingsworth, 1989; Weinstein, 1989).

By extension, one can assume that the pre-existing beliefs held by preservice teachers probably shape their perceptions of the classroom performances they observe as part of the field components traditionally included in programs of preservice teacher education (Howey & Zimpher, 1989). This has led several teacher educators to speculate that much of the unsupervised classroom observation entailed in preservice programs could be described as miseducative, i.e. subject to gross misinterpretation by novices (Zeichner, 1986; Feiman-Nemser & Buchmann, 1987; Calderhead, 1988; Koehler, 1988).

Beyerbach et al. (1989) recently conducted an experimental study that provided direct evidence of the potentially counterproductive effect of unsupervised observation during preservice teacher education. In the context of a course on child development, the researchers asked students enrolled in a teacher education program to discuss and analyse video segments of two children playing in various contexts over a span of 18 months. Students were asked to describe the development of the children, noting strengths and weaknesses and speculating as to their family lives.

Beyerbach et al. (1989) found concrete evidence of students' selective attention and inappropriate inferences. Although these preservice candidates attempted to apply the course content to the task, they lacked finely tuned observational skills and tended to draw unsupported inferences. With no corrective feedback, one can imagine how incorrect inferences would simply tend to confirm students' preconceptions.

Our study was designed to explore the insights suggested by Beyerbach et al. (1989), by examining the ways in which preservice candidates interpret teaching performances they are asked to evaluate. We chose to use three videotaped teaching episodes as stimuli so that we could evaluate the variance in novices' interpretations of the same performances.

Since we expected to find considerable variance in their interpretations, we also sampled a population of inservice teachers in order to have a comparative standard. We were also interested in examining the degree to which classroom experience might affect consensus among teachers about the nature of 'good' instruction. On the basis of data obtained from preservice and inservice teachers, we hoped to draw inferences concerning the utility of unsupervised, unstructured classroom observation as a part of preservice teacher education.

We also perceived this as a study of how novice and experienced teachers process classroom information. Here we assumed that the task of evaluating a teaching performance is analogous to that of comprehending a written text (Kagan, 1989). That is, we assumed that teacher evaluators (a term we use generically) unconsciously look for macrostructures: key components of a classroom lesson that they use to interpret incoming stimuli, just as readers use the components of story grammar (or other macrostructures of connected discourse) to help them comprehend narratives (Kagan, 1989). We used recall tasks to reveal the nature of macrostructures, since the structures individuals impose on sensory stimuli are used to store, recall, and interpret events (Bransford et al., 1972).

A variety of empirical research has shown that novice and experienced teachers process and interpret classroom stimuli differently, e.g. experienced teachers are more selective and chunk information (Berliner, 1986) and approach classroom discipline problems differently (Swanson et al., 1990). Other studies have documented differences in the way problems are represented and solved (Housner & Griffey, 1985; Leinhardt & Greeno, 1986; Carter et al., 1988). We designed a task that would allow us to examine several fundamental questions about the way novice and experienced teachers may process and interpret a teaching performance:

(1) What scheme(s) do teachers use when taking running notes on a teaching performance they have been asked to evaluate? What do these schemes suggest about their ability to assign organization and meaning on-line to the behaviors they view?

(2) What elements of a lesson do teachers regard as good and poor teaching? Do novices and experienced teachers approach a performance with different assumptions? Is there consensus within either group?

(3) What macrostructures appear to guide the recall of lessons?

(4) In what ways does the task of evaluating a live classroom lesson appear to differ from that of evaluating a videotaped lesson?

    METHODS

    Participants

Two groups of teachers participated in this study: 37 preservice candidates and 19 inservice elementary teachers. All the inservice teachers were enrolled in a course on methods of teaching science. Their classroom experience ranged from 1 to 33 years. All but two were female, and 68% had been trained in the use of some formal observational system of teacher evaluation.

The preservice candidates were enrolled in one of two undergraduate courses in educational psychology: 22 were preparing to teach at the secondary level, 15 at the elementary. All but eight were female, and none had had prior teaching experience. None of the preservice teachers had been trained in the use of a formal teacher evaluation system.

    Procedure

The participants viewed three videotaped teaching episodes at one-week intervals. Each video, produced by the Association for Supervision and Curriculum Development, captured a 15-min lesson presented in a real classroom environment. Each is described below:

    Tenth-grade (15-16 yrs old) biology lesson on RNA: a highly structured, teacher-centered lesson containing obvious introduction, practice, and review segments.

Third-grade (7-8 yrs old) math lesson on symmetry: a highly inductive, concept attainment approach in which the teacher shows her students examples and nonexamples of symmetrical designs and asks them (the students) to infer the concept of symmetry.

Seventh-grade (12-13 yrs old) social studies lesson on crimes: the teacher defines crimes of omission and commission, then asks her students to brainstorm and analyse examples of each.

Each of the videos included some element of peer teaching. That is, at some point in each lesson, students were asked to pair off and help one another learn new material.

These three videos were shown to the participants in different orders. Participants were provided with an open-ended, structured evaluation form containing the following sections: a place for listing running notes while viewing the video, sections for listing good and poor aspects of the lesson after viewing, and a section where respondents were asked to indicate advice they would give the teachers for improving their lessons. After each video was viewed, participants were given 25 minutes to complete the form. They were told they were not being graded on their evaluations and were not expected to relate the content of the courses to their evaluations. They were simply expected to indicate their own honest opinions.

Recall tasks were used with the preservice teachers only, since the structure of the inservice course did not permit use of a standard interval between viewing and recalling a lesson. One week after viewing a video, the preservice teachers were asked to recall in writing as much of the lesson as possible. Thus, at the end of 4 weeks, each preservice teacher had completed three evaluations and three recalls. The videos were not discussed in class until all the data for this study were collected.

Twenty-two of the preservice teachers were also asked to complete a fourth evaluation: a field observation of an actual class. This could be a 20-40 min lesson on any content and grade level. They completed the same evaluation form used to evaluate the videotaped lessons. The field observations were completed 3 weeks after the third video was recalled.

At the end of the respective courses, the evaluations and recalls completed by the participants were returned to them so they could analyse their own performances and complete a structured questionnaire. Items on this self-analysis questionnaire asked the participants to review and summarize the ways in which they (a) took running notes as they viewed a performance, (b) defined good and poor teaching, and (c) recalled a lesson (preservice teachers only).

    Data Analysis

The evaluation forms and recall descriptions were regarded as primary data. The self-analysis questionnaires were used as secondary data to complement our own interpretation of the evaluations. These two sources of data were evaluated in terms of the research questions listed earlier.

Each of the researchers evaluated the data independently by completing three tasks: (1) reading through the entire set of protocols once; (2) rereading the protocols and inducing categories of responses for each research question, using the constant comparative method (Glaser & Strauss, 1967); and (3) writing interpretive summaries for each research question (Erickson, 1986). In completing these tasks, each researcher looked for key linkages connecting similar instances of the same phenomenon and sought corroborative evidence from the primary and secondary data sources (Goetz & LeCompte, 1984). After completing these tasks, the researchers met to compare results and to resolve differences in interpretation. To validate inferences and categories, each researcher produced excerpts from the data records.

    RESULTS

    Inservice Teachers

Seventy percent of the evaluations completed by the inservice teachers indicated that they took running notes in a straight chronological fashion. Ninety percent of these chronological on-line notes were in a sort of shorthand that noted the functions of particular teacher behaviors without describing...
