Using a standardized video-based assessment in a university teacher education program to examine preservice teachers' knowledge related to effective teaching

Peter D. Wiens*, Kevin Hessberg, Jennifer LoCasale-Crouch, Jamie DeCoster
University of Virginia, USA

Teaching and Teacher Education 33 (2013) 24–33
0742-051X/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.tate.2013.01.010

* Corresponding author. Curry School of Education, Department of Curriculum, Instruction, and Special Education, University of Virginia, 405 Emmett St., Charlottesville, VA 22904, USA. Tel.: +1 540 820 0030. E-mail address: [email protected] (P.D. Wiens).

Article history: Received 30 January 2012; Received in revised form 6 November 2012; Accepted 24 January 2013.

Keywords: Preservice teacher education; Preservice teachers; Video technology; Program evaluation; Teacher effectiveness; Teacher evaluation

Highlights

• Preservice teachers completed the Video Assessment of Interactions and Learning (VAIL).
• The assessment detects variance in ability to identify teaching strategies and behaviors.
• Part of the variance in performance is explained by teacher education program factors.

Abstract

The Video Assessment of Interactions and Learning (VAIL), a video-based assessment of teacher understanding of effective teaching strategies and behaviors, was administered to preservice teachers. Descriptive and regression analyses were conducted to examine trends among participants and identify predictors at the individual level and program level. Results from this study demonstrate that a standardized assessment used previously with in-service teachers can be implemented in a teacher education program. Analysis shows variability in preservice teachers' abilities to detect effective teaching strategies and behaviors that is partially explained by teacher education program factors.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Preservice teacher education programs emphasize the development of effective instructional strategies through pedagogy methods courses, field experiences, and instructional-technology training (Adler, 2008). Recent research highlights that an important step in being able to implement effective instructional strategies is the ability to recognize effective teaching in other teachers (van Es & Sherin, 2002, 2006; Pianta & Hamre, 2009). The extent to which preservice teachers are able to internalize and adapt effective teaching strategies to their own use is a key influence on their future success in the classroom (Cochran-Smith, 2004).

Examining preservice teachers' ability to recognize and internalize effective teaching strategies allows teacher education programs to understand the learning progress of their preservice teachers while also providing an indication of the effectiveness of the education program. However, there is a lack of standardized measures examining the learning of teachers in teacher education programs across contexts (Zeichner, 2005) and of common tools that can connect data from different teacher education settings (Grossman & McDonald, 2008; Wiens, 2012). Designing and implementing measures that assess preservice teacher growth through a teacher education program is an important aspect of understanding and directing the learning that occurs from participation in the program.

This study seeks to understand the effectiveness of implementing a standardized, video-based assessment of teaching across teacher education programs to determine preservice teachers' ability to recognize effective teaching practices. In particular, we looked at the Video Assessment of Interactions and Learning (VAIL, 2010), a standardized measure that has been used reliably in a large study of in-service teachers (Hamre et al., 2012). In the United States, multiple measures of preservice teacher growth are required for accreditation of teacher education programs (NCATE, 2008), and the VAIL may be considered a viable option.

We begin by exploring the need for standardized measures of effective teaching and what constitutes effective practice. We then examine the implementation of a standardized measure of preservice teacher knowledge of effective teaching with a cross-sectional sample of participants. Finally, we discuss the results and the viability of the measure for documenting learning in a teacher education program.

    1.1. Need for standardized measures in teacher education

Teaching is a nuanced and dynamic exercise that is difficult to categorically evaluate (Brophy, 1999; Brophy & Good, 1986). Effective teaching includes developing the cognitive skills of students while also being sensitive to the emotional and social needs of the learner (National Research Council, 2000). A good measure of teaching identifies a teacher's ability to engage in instruction that meets all of these needs. Such measures, if implemented in a standardized way, provide valuable information for teacher learning and improvement (Pianta & Hamre, 2009).

A good measure of preservice teacher learning will determine to what degree a preservice teacher understands the cognitive, social, and emotional aspects of quality instruction. Typically, assessments of student learning are collected and analyzed by the teacher education program itself. The National Council for the Accreditation of Teacher Education (NCATE), the largest teacher accreditation body in the United States, endorses multiple measures of student growth and performance (NCATE, 2008). These measures include lesson plans, evaluations of content knowledge, assessments of student teaching, and evaluations of portfolios, among others. Taken together, these measures theoretically demonstrate whether a preservice teacher has developed a minimal level of skills and knowledge of effective teaching in order to implement this knowledge in a classroom setting. These evaluations are often conducted at the end of the teacher education program and provide a picture of what the preservice teacher knows.

The problems with these commonly used measures of preservice learning are three-fold. First, they are largely subjective. Preservice teacher-created artifacts such as portfolios and lesson plans are evaluated by teacher educators and often lack rigorous controls for variation between assessors. Even evaluation of student teaching is generally done in a subjective format, with university personnel and cooperating teachers judging the performance of the preservice teacher based on the evaluator's own values and perspectives (Greenberg, Pomerance, & Walsh, 2011). The second problem with using end-of-program measures is that they do not provide any understanding of preservice teacher growth in knowledge and abilities vis-à-vis understanding the domains of instructional quality. Finally, these measures are site-specific and do not allow for comparison across programs and contexts.

Standardized evaluation measures provide a tool to measure preservice teachers' knowledge and skills within and across settings. However, not all standardized measures of teaching knowledge or ability can capture growth in these areas throughout a teacher education program. Capturing preservice teachers' growth in understanding effective teaching is a difficult task because of the limited time preservice teachers spend in the classroom. This makes it impossible to use year-after-year value-added models (Glazerman et al., 2010) to measure preservice teacher growth. Additionally, most traditional teacher education programs do not put preservice teachers in a lead position in a real classroom at the beginning of their program. Without a measure of teaching performance at the beginning of the teacher education program, observational protocols (Pianta & Hamre, 2009) will not show preservice teacher growth. Therefore, the standardized measures used for studying effective teaching in in-service situations do not translate directly to measuring growth in preservice teachers. There is a need for a standardized measure that builds on the domains of current observational measures but can measure growth in preservice teacher knowledge.

1.2. A standardized measure of effective classroom interactions – the Classroom Assessment Scoring System (CLASS)

In a 2008 review of teacher education, Adler notes that "teachers and researchers have responded to criticism of schools, teachers, and teacher education by looking for the empirical links between teacher practices and teacher education and student outcomes" (p. 332). From this line of empirical research it has become clear that effective teachers create a positive classroom climate, effectively manage classroom time, and deliver high-quality instruction and feedback (La Paro, Pianta, & Stuhlman, 2004). It is possible to examine these three categories of effective instruction by analyzing the classroom interactions between students and teachers (Hamre, Pianta, Mashburn, & Downer, 2007). The Classroom Assessment Scoring System (CLASS) was developed as a standardized, reliable, and valid measure to capture these components of effective classroom teaching and learning by measuring the quality of interactions between teachers and students (Pianta & Hamre, 2009; Pianta, La Paro, & Hamre, 2008).

Pianta and Hamre (2009) conceptualize the CLASS framework as an observation tool that assesses those teacher–student interactions that contribute to student development as a result of the classroom experience and environment. The CLASS framework (Pianta, La Paro, et al., 2008) divides classroom interactions into three major domains (see Fig. 1): Emotional Supports, Classroom Organization, and Instructional Supports. Hamre et al. (2012) detail the empirically- and theoretically-based research behind each of these domains. Together, the three domains comprise a set of ten specific dimensions of academic and social supports that are linked to student development (Hamre et al., 2007; Pianta & Hamre, 2009; see list in Fig. 1). Finally, each of the dimensions is supported by indicators that are demonstrable to the observer (see Fig. 1). For example, an observer who sees a teacher provide repetitive and scaffolded feedback to students during instruction would place that behavior within the Instructional Support domain, the Quality of Feedback dimension, and the Feedback Loop indicator.

The CLASS framework is supported by research in both education and psychology (Hamre & Pianta, 2007), and is designed to be a useful metric for the systematic research of classroom effects in teacher education (Hamre et al., 2007; Pianta & Hamre, 2009). As described in Hamre et al. (2012), observers are initially trained to reliability on the tool through a rigorous two-day training session where they learn the CLASS framework and complete multiple practice tests. Observers must then pass a reliability test, using the tool successfully across multiple classroom situations. Finally, observers complete regular reliability tests to ensure that they do not drift from established CLASS coding protocols.

CLASS has been utilized by various researchers as an effective measurement in elementary and secondary classrooms both in the United States (Graue, Rauscher, & Sherfinski, 2009; La Paro et al., 2009; LoCasale-Crouch et al., 2007; Malmberg & Hagger, 2009) and internationally (Cadima, Leal, & Burchinal, 2010; Pakarinen et al., 2010). CLASS-based studies consistently find associations between observable classroom behaviors outlined in the CLASS protocol and student development and learning. For example, in a longitudinal study of 147 kindergartners through first grade, Curby, Rimm-Kaufman, and Ponitz (2009) found that teachers high in emotional support had students who demonstrated faster growth in phonological awareness. In another study, Pianta, Belsky, Vandergrift, Houts, and Morrison (2008) and Pianta, La Paro, et al. (2008) examined nearly 800 students in various elementary classrooms and found a link between emotional support and reading achievement. Additionally, the Measures of Effective Teaching (MET) Project assessed nearly 3000 teachers and found a positive relationship between teachers' ratings on CLASS and study value-added estimates (Gates Foundation, 2012). Consequently, several recognized educational research agencies such as the Gates Foundation, Educational Testing Service, and the United States National Institute of Child Health and Human Development have included CLASS as part of their in-service teacher studies (Ewing, 2008; Gates Foundation, 2010).

Fig. 1. Classroom Assessment Scoring System (CLASS) conceptual framework (Pianta & Hamre, 2009, p. 111).

While the use of a standardized observation measure can be helpful in evaluating preservice teaching performance, it cannot typically be used to judge teacher growth. Given that preservice teachers typically do not have the opportunity to take on teaching responsibilities until the very end of the teacher education experience, an observational measure like CLASS cannot always be used to document preservice teacher growth. There is a need to examine students' progressing skills that will translate into effective teaching practice. The question remains whether this can be done using a standardized lens. One standardized measure that could potentially be used to measure teacher learning is the Video Assessment of Interactions and Learning.

    1.3. Video Assessment of Interactions and Learning (VAIL)

The field currently lacks standardized measures for analyzing teachers' understanding of the domains of effective instruction (Zeichner, 2005). Empirical research is beginning to make a strong case that the ability to notice effective teaching is foundational to the ability to conduct effective teaching (Hamre, Downer, Jamil, & Pianta, 2012). Using the Classroom Assessment Scoring System (CLASS) dimensions as a conceptual framework, researchers developed the Video Assessment of Interactions and Learning (VAIL) protocol to assess participants' ability to identify effective teaching strategies and examples as specified within the CLASS domains (Hamre et al., 2012). The VAIL builds upon the understanding that the ability to see and notice good teaching is an important component of being an effective teacher (Jamil, Sabol, Hamre, & Pianta, 2011). The VAIL uses the ability to identify effective teaching strategies and actions as a measure of a participant's knowledge of effective teaching.


Hamre et al. (2012) first used the VAIL as a component of a larger study in which participants in a treatment course were evaluated on whether they had improved skills (compared to a control group) in detecting effective teacher–student interactions through video. In the VAIL component of the Hamre et al. study, CLASS was used as the treatment measure in the form of a CLASS-based course, and the VAIL was used as a comparison outcome measure. Participants in the treatment group were better able to specifically identify multiple aspects of effective instruction in video. In other words, how these teachers performed on the VAIL protocol was related to their quality of instructional practice as measured by CLASS. Both their ability to detect the critical behaviors (VAIL) and to implement critical behaviors (CLASS) was influenced by the intervention.

The VAIL provides a standardized measure that has been used successfully with in-service teachers and that can potentially be implemented as a growth measure in teacher education. As shown previously, observational measures cannot examine growth in preservice teachers because teacher education programs generally do not allow preservice teachers to teach in K-12 classrooms until the end of their program. A measure of knowledge, such as the VAIL, can serve as a measure of preservice teacher growth because it can be administered at several different points during the teacher education program.

This paper seeks to understand the feasibility of utilizing a video-based assessment as a tool for assessing teacher education. We seek to examine the implementation of the VAIL within teacher education and to conduct preliminary analyses that examine the sensitivity of the measure to differences in preservice teachers at both the teacher level and the program level.

2. Method

    2.1. Participants

Data were collected at a publicly funded mid-Atlantic university's teacher education program located in the United States. This university has two teacher education and licensing programs. The first program is a five-year bachelor's plus master's degree. The second program is a two-year post-graduate master's degree program. Students from both programs take most teacher education pedagogy courses together, beginning with the fourth year of the bachelor's plus master's degree program, which is the first year of the post-graduate students' program. Students from the two programs then continue through the final year of their program together as well, ultimately graduating at the same time and earning U.S. state teaching certification.

Before students are allowed to enroll in the five-year bachelor's plus master's degree, they are required to complete an introductory course in teacher education. The majority of students take this class in the second year of their bachelor's degree program. Following completion of this course the students may apply for admission into the teacher education program.

Three distinct groups of preservice teachers participated in this study. The first cohort consisted of students enrolled in an introductory education class (n = 109). As explained above, most of these students were not enrolled in the university's teacher education program. These students completed the VAIL in the first week of the introductory course and completed the demographic survey toward the end of the same course. In this data set it was not possible to know which of these students would pursue enrollment in the teacher education program. Because we did not know which students would apply for admission, we elected to keep all of the results in the data set.

The second cohort of students included students in the spring semester prior to student teaching the following fall (n = 78). This group included students in the fourth year of a five-year bachelor's plus master's degree program and students in the first year of a two-year post-graduate master's degree in teaching.

The third cohort of students included students who completed their one-semester student-teaching experience (n = 80). This group included students in the fifth year of a bachelor's plus master's degree program and students in the second year of a two-year post-graduate master's degree in teaching. Including all three cohorts of students, as well as students in a five-year program and students in a post-graduate program, allowed us to examine the differences in these groups based on which program they were in as well as how far along in the program they happened to be.

Characteristics were self-reported by participants through a demographic survey and are listed in Table 1. The demographic survey elicited demographic information from participants. Demographic data included information on race, gender, and overall grade point average. Programmatic information was also included in the survey, such as teaching area, teaching level, and education program (graduate versus five-year bachelor's plus master's degree).

The total sample consisted of 296 participants. Of the students who completed the demographic survey, 78.5% were female and 21.5% were male. The students were 76% Caucasian, 4.7% African American, 4.4% Asian, and 5.1% in other ethnicity categories. The sample included preservice teachers working toward certification in different areas, including 36.8% pursuing elementary licenses, 45.3% pursuing secondary subject-area licenses, and 12.8% pursuing licensing areas that extend across all grades (K-12). The sample in this study is highly representative of the university's teacher education program as a whole.

Table 1
Analysis of variance table for group performance on the VAIL.

                               ES knowledge          CO knowledge          IS knowledge
Statistics       n    %       Mean   SD    F        Mean   SD    F        Mean   SD    F
Male             58   21.48   2.28   2.50  1.19     2.71   2.45  .06      3.84   2.58  .04
Female          212   78.52   2.70   2.68           2.62   2.37           3.76   2.64
First cohort    109   36.8    2.28   2.40  3.87*    3.00   2.42  6.03**   3.74   2.42  .37
Second cohort    78   26.4    3.32   2.78           2.88   2.47           3.71   2.69
Third cohort     80   27.0    2.48   2.72           1.88   2.07           2.63   2.81
Elementary      109   36.8    2.71   2.65  .51      2.57   2.19  1.12     4.13   2.86  2.08
Secondary       134   45.3    2.49   2.52           2.80   2.56           3.68   2.44
K-12             38   12.8    2.95   2.88           2.16   2.22           3.18   2.33
White           225   76.0    2.55   2.58  .61      2.65   2.31  2.11     3.72   2.64  1.22
African Am.      14    4.7    3.07   2.89           3.43   2.98           4.86   2.03
Asian            13    4.4    3.23   2.77           2.38   2.43           3.85   3.08
Other            15    5.1    2.93   3.37           1.53   2.39           3.40   1.99
Not enrolled     64   23.7    2.31   2.47  .70      2.97   2.69  .86      3.53   2.29  .49
5 year BA MT    167   62.8    2.77   2.68           2.55   2.29           3.83   2.68
Post-graduate    36   13.5    2.56   2.80           2.44   2.05           4.00   2.89

*p < .05, **p < .01. F statistics are from separate one-way ANOVAs. ES = Emotional supports, CO = Classroom organization, IS = Instructional supports.
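The F statistics in Table 1 come from one-way ANOVAs comparing group means on each knowledge composite. As a minimal sketch of that computation (the function name and the toy data are illustrative, not the study's code or data):

```python
# Sketch: one-way ANOVA F statistic for group differences on a VAIL score,
# as reported in Table 1. Group score lists are made-up illustrative values.

def one_way_anova_f(groups):
    """Return the F statistic for a list of per-group score lists."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares (df = number of groups - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = N - number of groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative use: three cohorts' knowledge scores (toy data).
cohorts = [[2, 3, 2, 1, 4], [4, 3, 5, 4, 6], [2, 2, 3, 1, 2]]
f_stat = one_way_anova_f(cohorts)
```

The F value is then compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain the significance levels starred in the table.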

    2.2. Instruments

Data for this study included the VAIL (2010) assessment and the demographic survey noted above. The VAIL assessment was conducted online and was designed to take students less than one hour to complete. For the first cohort, the demographic survey was a paper-and-pencil survey conducted during the introduction to education class. For the other two cohorts the demographic survey was administered online as part of a larger survey measure.

    2.3. Video Assessment of Interactions and Learning

The VAIL was used to determine preservice teachers' ability to identify effective teaching strategies in video segments of real classroom teaching environments. The participants watched three short videos (2–3 min). As the VAIL was originally developed for a large-scale study of pre-kindergarten teachers (Hamre et al., 2012), each video featured a pre-kindergarten language arts lesson taught by a veteran in-service teacher that demonstrated effective teaching characteristics as measured by CLASS. The videos each focused on one dimension of the CLASS framework: quality of feedback, instructional learning formats, and regard for student perspectives.

After watching the video, participants had the opportunity to provide five effective teaching strategies they identified from the video. A strategy is a general marker of effective teaching. Examples


of effective teaching strategies included in the VAIL would be scaffolding, eliciting student ideas, and variety of instructional modalities.

For each strategy, the participant had the opportunity to provide a specific example of the strategy taken from the video. The assessment defines an example as "a teaching method used to meet a specific goal." In other words, examples constituted specific actions observed in the video. For example, if a participant noted scaffolding as a strategy, a matching example might consist of the teacher helping the student sound out the word the student was struggling to read.

Videos were selected based on their ability to provide examples of effective teaching strategies in different areas based on CLASS dimensions. A prompt was included with each video to give participants direction for what to look for in the video. The prompts for the three videos were as follows:

1. Name up to 5 strategies the teacher is using to show she values children's ideas and points of view and encourages children's responsibility and independence.

2. Name up to 5 strategies the teacher is using to engage the students in the lesson and hold their attention.

3. Name up to 5 strategies the teacher uses to effectively provide feedback and extend students' learning, skills, and persistence.

Responses supplied by participants were open-ended and were coded for accuracy against a master code list, which was created by master coders who then reconciled differences between them based on standards identified in the CLASS (VAIL, 2010). The VAIL uses a standardized rating description as outlined in the VAIL Coding Manual (2010), which was used to guide all coding decisions. The VAIL was designed so that CLASS-specific terminology was not necessary to perform well on the assessment. The master codes were descriptive of aspects of effective teaching and the language was not tied to the CLASS system.

When a CLASS-matched strategy was identified, a breadth score was also assigned to identify the specific indicators within the CLASS dimension identified by the participant. For example, for one of the videos, the CLASS dimension was Instructional Learning Formats (ILF, which is in the Classroom Organization domain); Fig. 1 shows its indicators: (1) engaging approach, (2) variety of modalities, (3) student interest, and (4) clarity of learning objectives (Hamre, Downer, et al., 2012). A CLASS-matched strategy for this video would have to match, or constitute a reasonable synonym of, one of these four indicators. For example, a participant could submit "effective questions" as a strategy under (1) engaging approach. This would be credited as a CLASS-matched strategy.

The number of unique indicators supplied by participants was then summed to create a breadth score for the entire set of responses for that video. In the ILF/Classroom Organization example from above, a participant may complete four strategy-example pairs. This participant would have been coded as supplying the following strategies:

Pair 1: engaging approach
Pair 2: none
Pair 3: student interest
Pair 4: engaging approach

This participant would receive a two for the breadth score under ILF because she only provided two different strategy types (engaging approach and student interest).

Additionally, if both the strategy and example supplied were correct, the response was coded based on whether the example was an accurate example of the strategy identified. For example, if the participant was looking for strategies based on ILF, she may have identified effective facilitation. If the example the participant provided was an example of ILF, she would get credit for an example. If the example was an example of effective facilitation the participant would also get credit for a match. However, it was possible for a participant to identify an accurate example while not matching that example to its strategy pair. In this case the participant would get credit for an example and strategy, but no credit for a strategy-example match.

Table 2 provides an example of how a participant's responses may be coded. In the first pair the participant would be credited for providing a correct strategy and a correct example. The participant would also be coded as providing a strategy-example match, as both the strategy and the example belong to the same CLASS indicator. The second strategy-example pair would also be coded as correct for both strategy and example. However, the second pair would not be coded as providing a strategy-example match because they come from different CLASS indicators. This response set would also be coded as a breadth score of one because both supplied strategies belong to the same CLASS indicator.

Table 2
VAIL coding example.

            Participant response                        Code
Strategy 1  Teacher asks good questions                 Engaging approach
Example 1   While one student is volunteering the       Engaging approach
            teacher asks the rest of the class to
            help her
Strategy 2  Teacher asks effective questions            Engaging approach
Example 2   Students have the opportunity to move       Variety of modalities
            around the room as part of a word find

The completion score measured how many responses the participants wrote for each video. Participants were coded for each attempt at identifying a strategy and example even if the strategy and example were not correctly identified. Each participant was required to provide at least one strategy and example to continue in the assessment. While there was the opportunity to identify five strategies and examples, only one response was required to continue with the assessment. Any strategy-example pairs that were left blank were coded as a zero.

2.4. Scoring

Research assistants participated in a half-day training session that included an extended introduction to VAIL coding. The training sessions culminated in a reliability test where research assistants coded two sets of responses and were considered reliable if their codes showed exact agreement with the master codes at least 80% of the time. All research assistants completed reliability tests every one to two weeks while they were coding. Each time the assistant needed to score at least an 80% exact agreement. If an assistant failed to make 80% agreement, he or she stopped coding, retrained, and passed another reliability test before coding again. Additionally, the coding team showed strong reliability in having an 85% exact agreement on the 20% of the VAIL responses that were double coded. Cohen's Kappa was also calculated between raters, with a value of .70, which indicates an acceptable level of inter-rater agreement (Landis & Koch, 1977).

To analyze the VAIL data collected for this study, a sum score was created for each variable (completion, strategy, example, match, breadth) for each video independently. Fig. 2 illustrates how variables were combined to create summative variables for analysis. The sum scores for strategy, match, and breadth were combined into a new variable called knowledge. The new variable identifies the participant's knowledge of effective teaching strategies, but it does not indicate the participant's ability to recognize what those strategies may look like. The example variable was treated separately and renamed skill.

Variables were composited in accordance with previously conducted research using the VAIL (Hamre et al., 2012) and the coding manual (VAIL, 2010), which are based upon a conceptual understanding that the skill variable measured the participant's ability to identify effective teaching behaviors, which is different from knowledge of effective strategies. As shown in Table 3, the two sets of outcome variables were significantly correlated, but the correlations were weak enough to warrant examining the variables independently. The skill variable represents a participant's ability to know what good teaching looks like, or what actions a teacher does that are effective. However, it does not tap into the participant's understanding of why those actions are effective and only demonstrates a shallow understanding of good teaching. The knowledge variable demonstrates the participant's depth of knowledge of the components and strategies behind the teaching activities. The completion score, a measure of how many attempts participants made to identify strategies and examples in each video, was also treated separately.

    2.5. Procedures

When participating in both the demographic survey and the VAIL, students were asked to consent to have their information used for research purposes and to have it linked to other school records. Participants were assured of confidentiality, and any identifying information was stripped from the data before coding began.

All participants completed the VAIL online on their own time with no immediate supervision. The first cohort participants completed the VAIL as part of a one-semester introductory course. The intention was to have these participants complete the VAIL before they received any instruction from the university teacher education program; they were required to complete the VAIL within the first two weeks of the introductory course. Preservice teachers enrolled in the teacher education program are required to complete a certain number of research credits each school year.

Fig. 2. VAIL variable combinations.

Participants who were not enrolled in the introductory course (cohorts two and three) were given the entire spring semester to complete the VAIL, which counted toward their required research credits.

Likewise, first cohort participants completed their demographic surveys during an introductory class time with pencil and paper. Research assistants were responsible for data entry at a later time. Second and third cohort students completed their demographic surveys on their own time. The demographic survey also counted toward students' required research credits for the semester.

Table 3
Skill and knowledge correlations.

                          Correlation   p
ES skill & knowledge      .425          <.001
CO skill & knowledge      .308          <.001
IS skill & knowledge      .376          <.001

ES = Emotional supports, CO = Classroom organization, IS = Instructional supports.
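The Table 3 values are ordinary Pearson correlations between the per-domain skill and knowledge composites. A self-contained sketch with made-up scores, not the study's data:

```python
import math

def pearson_r(x, y):
    """Plain-Python Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative per-participant composites for one domain (invented values).
skill =     [2, 3, 1, 4, 2, 5, 0, 3]
knowledge = [3, 5, 0, 7, 2, 9, 1, 4]
r = pearson_r(skill, knowledge)
print(round(r, 3))
```

With correlations of this moderate size, treating skill and knowledge as separate outcomes, as the authors do, is a reasonable design choice.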

    3. Analysis

The purpose of our analysis was to determine if the VAIL measure provided sufficient variation in scores to be useful and to understand if it was sensitive to expected differences at the preservice teacher level and the program level. Therefore, data analysis occurred in two parts. The first analysis run on the data was descriptive, to understand the variation patterns in the sample. We began by generating mean scores and standard deviations for the entire sample on each outcome variable. The generated outcome variables included measures of the number of completed items for each participant, the knowledge score of each participant, and the skill score of each participant. Because each video measured a different component of effective teaching, means and standard deviations were generated for each video separately in order to examine patterns across the videos.
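This first, descriptive pass can be sketched as follows, computing a mean and standard deviation separately for each video (the scores below are illustrative, not the study's data):

```python
import statistics

# Illustrative skill scores (0-5) for the three videos; invented values.
scores_by_video = {
    'emotional_support':      [2, 3, 1, 4, 2, 2, 3, 1],
    'classroom_organization': [3, 2, 4, 2, 3, 1, 3, 2],
    'instructional_support':  [1, 4, 0, 3, 2, 5, 1, 0],
}

# Because each video targets a different CLASS domain, descriptives are
# computed per video rather than pooled across videos.
for video, scores in scores_by_video.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{video}: M = {mean:.2f}, SD = {sd:.2f}")
```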

The second analysis was intended to determine if any demographic or programmatic factors could explain the variance in outcome measures. Demographic variables included gender, race, age, and GPA, while programmatic variables included cohort (first, second, third), student type (not enrolled in teacher education program, 5-year BA plus master's, post-graduate master's), and teaching content area (elementary, secondary, K-12). Linear regressions were run on each outcome variable, including demographic and programmatic variables, to determine the relationship, if any, between the variables.
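A minimal sketch of the dummy coding this implies for the categorical predictors; the category labels follow Tables 5-7, while the helper itself is our illustration:

```python
# Categorical predictors enter the regressions as 0/1 indicators against a
# reference category (shown in parentheses in Tables 5-7).
def dummy_code(value, categories, reference):
    """Return {category: 0/1} for every non-reference category."""
    assert value in categories and reference in categories
    return {c: int(value == c) for c in categories if c != reference}

cohort = dummy_code('pre-student teaching',
                    ['intro', 'pre-student teaching', 'post-student teaching'],
                    reference='intro')
print(cohort)  # {'pre-student teaching': 1, 'post-student teaching': 0}
```

Continuous predictors such as GPA enter the model as-is alongside these indicators.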

    4. Results

    4.1. Descriptive statistics

As explained previously, each video was designed to measure a different aspect of effective teaching based on the CLASS domains (emotional support, classroom organization, and instructional support). For each of these domains one dimension was selected as a focus. Even though specific dimensions are not an all-inclusive measure of the CLASS domains, they are a measure of knowledge in that domain. Therefore, the discussion of the results of the VAIL analysis will be at the domain level.

The findings illustrated in Table 4 present the results for the total sample on the different outcome measures included in the VAIL. All participants watched and responded to the videos in the same order as discussed here, with emotional support coming first, followed by classroom organization, and the instructional domain coming last. The maximum possible number of responses for each video was five; participants completed on average 4.10 responses for the video demonstrating the domain of emotional supports, 4.12 responses for the classroom organization domain, and 3.88 responses for the instructional supports domain.

Participants' ability to recognize effective teaching behaviors was measured by the skill variable in each of the videos. There was substantial variance in each of the videos in the participants' ability to identify these behaviors. For all skill variables the maximum possible score for each video was five. For the emotional support skill variable, participants had a mean score of 2.24 with a standard deviation of 1.18; there was a normal distribution of scores around the mean. The classroom organization skill variable showed a slightly higher mean score of 2.47 and a similar spread in the data with a standard deviation of 1.15. The instructional support skill variable showed a mean score of 2.04 with a standard deviation of 1.44. In the instructional support skill variable, the data showed greater spread around the mean compared to the other variables.

We were also interested in knowing if participants were better at identifying effective teaching examples in one of the domains. We conducted paired sample t-tests and determined that the means of each of the skill variables were significantly different from each other (p < .05). Participants in our sample most effectively identified teaching behaviors in the domain of classroom organization. They were least effective at identifying examples of good instruction in the instructional support domain, with emotional support falling in between.

Table 4 also shows the mean scores and standard deviations for each video's knowledge variable. For emotional supports, the mean score was 2.66 with a standard deviation of 2.60, which indicates a wide range of scores in our sample. The mode score for emotional supports was zero: nearly 37% of participants failed to identify any teaching strategies in the emotional support domain.

The mean score for the classroom organization knowledge variable in our sample was 2.68 with a standard deviation of 2.41. While the mean score is nearly the same in the classroom organization and emotional support knowledge variables, the spread of the data differs somewhat between the two. As with emotional supports, the most common score (34.5%) for classroom organization knowledge was zero.

The final knowledge variable is in the instructional supports domain. The instructional supports knowledge variable had a mean of 3.77, with a standard deviation of 2.62. Besides having the highest mean, the instructional supports knowledge data also showed the greatest range of scores (0-12), with fewer participants scoring a zero. The mode score for this variable was not zero, like the other knowledge variables, but three.

We also wanted to compare results across the videos to understand if there were significant differences between the participants' ability to understand the effective teaching strategies in the different CLASS domains portrayed in the videos. Paired sample t-tests confirmed there was no statistical difference in participants' identification of teaching strategies associated with emotional supports and classroom organization (p = .921). Our sample showed a statistically greater ability to identify effective teaching strategies related to instructional supports than either emotional supports (p < .001) or classroom organization (p < .001).

Table 4
Descriptive statistics for entire sample.

                Completion                                Knowledge                                 Skill
                ES           CO           IS             ES           CO           IS              ES           CO           IS
Mean (SD)       4.10 (1.05)  4.12 (1.04)  3.88 (1.23)    2.66 (2.60)  2.68 (2.41)  3.77 (2.62)     2.24 (1.18)  2.47 (1.15)  2.04 (1.44)
Total possible  5            5            5              14           14           15              5            5            5

ES = Emotional supports, CO = Classroom organization, IS = Instructional supports.

4.2. Predictor models

We also wanted to analyze if the VAIL assessment was sensitive to demographic and programmatic differences in participants. Hierarchical regressions were run to determine if demographic or programmatic characteristics predicted participants' ability to identify effective teaching actions and effective teaching strategies. Six two-model regressions were run, one for each of the outcome variables in each of the videos. The first model included demographic characteristics (age, GPA) and the full model added programmatic characteristics (cohort, student type, teaching level) coded as dummy variables. Two variables, age and cohort, were highly correlated (r = .633, p < .001), and we decided to include only the cohort variable. Additionally, variables related to race were found to be unbalanced and did not contribute to our regression models and were therefore left out of the final analysis. The results of these analyses indicate the VAIL's ability to identify some individual and program level differences in participants.

Different demographic and programmatic characteristics were predictors of participants' recognition of effective teaching actions as shown by the skill and knowledge variables. Results of these analyses can be seen in Tables 5-7. Analysis will be discussed by examining the results of each teaching domain in turn and looking at both skill and knowledge variables.
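The two-model procedure can be sketched as follows: fit the Block 1 predictors alone, then refit with the Block 2 predictors added, and report the change in R². This is a pure-Python illustration on toy numbers, not the study's data or code:

```python
# Minimal two-block hierarchical regression, illustrating how block-wise
# change-in-R² values (as in Tables 5-7) are obtained. All data are invented.

def ols_r2(X, y):
    """R-squared from ordinary least squares via the normal equations
    (Gaussian elimination on X'X b = X'y). X rows include a leading 1."""
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                      # forward elimination w/ pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    yhat = [sum(beta[c] * row[c] for c in range(k)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

gpa =    [3.1, 3.5, 2.9, 3.8, 3.3, 3.0, 3.6, 3.9]   # Block 1 predictor
cohort = [0,   0,   0,   1,   1,   1,   1,   0]     # Block 2 dummy (toy)
skill =  [2,   3,   1,   5,   4,   3,   4,   3]     # outcome

block1 = [[1, g] for g in gpa]                       # demographics only
block2 = [[1, g, c] for g, c in zip(gpa, cohort)]    # + programmatic dummy
r2_1, r2_2 = ols_r2(block1, skill), ols_r2(block2, skill)
print(f"Block 1 R2 = {r2_1:.3f}, delta R2 for Block 2 = {r2_2 - r2_1:.3f}")
```

Since OLS R² can never decrease when predictors are added, the block-2 change in R² is what carries the evidence that programmatic characteristics add explanatory power.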

The emotional support domain was examined first and is illustrated in Table 5. Neither the partial nor the full model showed predictive ability for the emotional support skill variable, and none of the betas were statistically significant.

Meanwhile, we also examined the relationship between preservice teacher and program level characteristics and participants' ability to identify effective teaching strategies as measured by the emotional support knowledge variable. The results of these regressions can also be seen in Table 5. The emotional support domain was unique in our data because it was the one variable that showed a significant predictive relationship for the individual level characteristics, ΔR² = .021, p = .019. GPA (β = .123, p = .059) showed a marginally significant positive relationship with participants' identification of effective teaching strategies; however, the combined model did not significantly predict participant outcomes.

Our analysis also found significant predictors of participants' identification of effective teaching actions in the classroom organization skill variable. The overall model had a small (CO skill adjusted R² = .031, p = .024) but significant explanation of the variance in the outcome variable. Within the model, student type, teaching level, and cohort all had a significant relationship with the identification of effective teaching behaviors.

Table 5
Hierarchical regression analysis results – emotional support.

Predictors                             ES skill   ES knowledge
Block 1: Demographic (ΔR²)             .007       .021**
  GPA                                  .065       .123*
Block 2: Programmatic (ΔR²)            .023       .027
  Cohort (intro)
    Pre-student teaching               .016       .196**
    Post-student teaching              .123       .057
  Student type (non-TEd student)
    Post-graduate                      .014       .091
    5-Yr BA+MT                         .070       .070
  Teaching level (elementary)
    Secondary                          .057       .047
    K-12                               .063       .005
Final adjusted R²                      .003       .022

*p < .10, **p < .05. Reference categories in parentheses; block rows report ΔR², predictor rows report β.

Table 6
Hierarchical regression analysis results – classroom organization.

Predictors                             CO skill   CO knowledge
Block 1: Demographic (ΔR²)             .002       .000
  GPA                                  .075       .045
Block 2: Programmatic (ΔR²)            .057**     .053**
  Cohort (intro)
    Pre-student teaching               .205**     .028
    Post-student teaching              .095       -.236***
  Student type (non-TEd student)
    Post-graduate                      -.200**    .037
    5-Yr BA+MT                         -.267***   .019
  Teaching level (elementary)
    Secondary                          .007       .027
    K-12                               -.129*     .073
Final adjusted R²                      .031**     .027**

*p < .10, **p < .05, ***p < .01. Reference categories in parentheses; block rows report ΔR², predictor rows report β.

Compared with introductory students, preservice teachers in the semester prior to student teaching scored the highest in classroom organization skill (β = .205, p = .019). However, students not enrolled in the teacher education program out-performed both groups of enrolled students (post-graduate β = -.200, p = .032; bachelor's plus master's β = -.267, p = .004). Using elementary preservice teachers as the comparison group, preservice teachers seeking licensure in K-12 areas were the least able to identify effective teaching actions as measured by the classroom organization skill variable at the alpha level of .10 (β = -.129, p = .060).

The regression analysis on classroom organization knowledge showed that our full predictor model had a significant relationship with the participants' identification of effective teaching strategies in that domain (adjusted R² = .027, p = .031). Programmatic characteristics were associated with participants' identification of effective teaching strategies in the classroom organization domain. The strongest, and only significant, predictor within the model was cohort, where preservice teachers who had completed student teaching performed significantly worse than introductory students (β = -.236, p = .008).

Table 7
Hierarchical regression analysis results – instructional support.

Predictors                             IS skill   IS knowledge
Block 1: Demographic (ΔR²)             .000       .004
  GPA                                  .016       .047
Block 2: Programmatic (ΔR²)            .041       .021
  Cohort (intro)
    Pre-student teaching               .143       .026
    Post-student teaching              .136       .025
  Student type (non-TEd student)
    Post-graduate                      .019       .066
    5-Yr BA+MT                         .085       .051
  Teaching level (elementary)
    Secondary                          .157**     .086
    K-12                               .130*      -.139**
Final adjusted R²                      .015*      .002

*p < .10, **p < .05. Reference categories in parentheses; block rows report ΔR², predictor rows report β.

Neither the partial nor the full model contributed any significant predictors for the instructional support skill variable. In both cases the adjusted R² of the entire model was not significant. However, programmatic characteristics were significant predictors of participants' identification of effective teaching strategies in instructional supports as measured by the knowledge variable. The full model including both individual and program level characteristics did not demonstrate a significant association with the instructional support knowledge variable; however, teaching level did. In our sample, the preservice elementary teachers were the most skilled at identifying teaching behaviors and strategies in the instructional support domain, while the teachers pursuing certification in K-12 licensing areas performed least well (β = -.139, p = .045).

4.3. Summary of findings

The preservice teachers who participated in this study attempted to provide a high number of responses to each video, averaging about 4 out of a possible 5 responses across the three videos. The participants showed a consistent ability to identify effective teaching behaviors as measured by the skill outcome variables across the three videos. Participants were more able to identify effective teaching strategies related to instructional support than to either emotional supports or classroom organization. Overall, the participants were better able to identify effective teaching behaviors (skill variables) than they were to articulate the effective teaching strategies (knowledge variables) behind those behaviors.

Examination of our regression models showed that the programmatic factors tended to have the most ability to explain the variance in the outcome variables. However, the specific factor that was significant varied from model to model. Teaching level was predictive of the participants' ability to identify effective teaching behaviors and teaching strategies in the instructional supports domain. Meanwhile, the point in the teacher education program was a significant predictor of participants' identification of effective teaching strategies in the domain of classroom organization.

    5. Discussion

The purpose of this study was to analyze the implementation of the Video Assessment of Interactions and Learning as a standardized measure of preservice teacher learning. Based on the Classroom Assessment Scoring System, the VAIL measures participants' ability to identify effective teaching strategies and interactions related to emotional supports, classroom organization, and instructional supports. The VAIL has been used successfully with in-service teachers (Hamre et al., 2012), but it had not been used with preservice teachers prior to this study. This study provides an analysis of the performance of a cross-sectional sample of preservice teachers at three different points in their teacher education program. From the analysis conducted in this study, three main conclusions can be made. First, wide-scale, standardized assessment is possible in teacher education. Second, the VAIL is sensitive to variability that can be partially explained by program factors. Third, large variability in preservice teachers' ability to identify effective teaching strategies and behaviors remains unexplained by our model. Each of these points is elaborated below.

    5.1. VAIL as a measure of preservice teacher learning

Analyses reported in this paper suggest that the VAIL may be a useful tool in examining preservice teacher learning. The VAIL offers a wide-scale, standardized assessment that demonstrated reliability in this study while being administered across three different cohorts of students at varying points in their teacher education program. This addresses an important problem in teacher education: a lack of standardized measures that can be used across contexts (Wiens, 2012; Zeichner, 2005). Additionally, the on-line nature of the assessment made it possible to implement the assessment with preservice teachers without impacting the instructional program.

The VAIL offers promise for measuring preservice teacher learning and growth throughout a teacher education program. As a standardized, reliable assessment measure, the VAIL can potentially show the value-added aspects of a teacher education program for preservice teachers. As multiple measures are required for accreditation of teacher education programs in the United States (NCATE, 2008), the VAIL is a viable option.

In our sample there was a negative relationship between the participants' years in the teacher education program and their ability to detect both teaching behaviors and teaching strategies in the instructional supports domain. This is not especially problematic because of the nature of our sample. We analyzed a cross-sectional sample of preservice teachers and therefore cannot make any conclusions from the differences across the cohorts of students. In our sample the preservice teachers earlier in their teacher education program scored higher in the instructional supports domain, but this result could simply be a measure of the differences between the cohorts. Longitudinal data will be needed to judge the adequacy of the VAIL to detect changes over time.

Additionally, our participants were most successful at identifying classroom organization domain teaching strategies compared to the other domains. However, the participants were no more successful in identifying effective teaching behaviors in this domain compared to the other domain skill variables. For whatever reason, the preservice teachers had a better grasp of the theory underlying classroom organization, but were no better at finding examples of the related behaviors. Furthermore, the preservice teachers who participated in this study appear to have a sense of what good teaching looks like and can identify those behaviors; however, they are not as skilled at identifying the effective teaching strategies underlying those actions.

    5.2. Explained and unexplained variance

Data analyzed in this study also indicate that the VAIL can detect variability in participants that can partially be explained by expected program factors. Our analysis showed consistent associations between the programmatic characteristics and participants' performance on the assessment. The fact that the specific predictors within the programmatic characteristics changed from video to video was encouraging: the VAIL does not appear to preferentially benefit one group over another in our sample. The lack of consistent predictors across all the outcome variables suggests that the performance of each participant is based on individual differences more than on group membership as defined by demographic and programmatic characteristics. As more data are collected and the preservice teachers progress through the teacher education program, growth analysis may be an effective indicator of individuals' learning.

While the programmatic factors help explain some of the variation in preservice performance on the VAIL, most of the variance remains unexplained. As noted previously, this is a cross-sectional sample of preservice teachers at various points in their program, which makes it difficult to make specific conclusions across the time points in the program. Differences in scores may be based on differences in the cohorts instead of the effects of the teacher education program. However, we expected to see preservice teachers show more knowledge and skills as they progress in the program. For example, in one case students at an earlier point in their program actually performed better than students who were at the end of the program. This suggests that something other than where they are in their training is contributing to their abilities. It could be their personality, previous experiences, or background: something else about them that enables them to be more (or less) in tune with what the teachers are doing that is effective. Finding what that other factor is would perhaps be useful as a selection criterion for future students and would help teacher preparation programs identify the students who show a proclivity for the nuances of the classroom even before they start training.

    6. Limitations

One area of concern with the design of this study was the use of pre-kindergarten language arts lessons for the video clips. We were concerned that the use of pre-kindergarten classrooms would bias the results toward elementary teachers. Likewise, we were concerned that the use of language arts lessons would bias the results in favor of English/Language Arts teachers. Overall, there was no evidence of bias in favor of elementary or English preservice teachers. The exception to this was the instructional support video, where elementary teachers scored higher in both the knowledge and skill variables. Instructional support contains CLASS indicators including effective feedback, encouragement of responses, and expansion of student performance. The results of this study raise the question of whether secondary and K-12 preservice teachers can identify effective quality of feedback in elementary classrooms as well as elementary teachers can. This is a question that bears investigation as more data are collected using the VAIL assessment.

The data in this study provided a cross-sectional view of preservice teachers at one point in their teacher education program. Due to the cross-sectional nature of the data, it was difficult to make any conclusions about the learning of preservice teachers through their teacher education program. It was equally impossible to develop any conclusions about the effectiveness of the teacher education program. While the VAIL has potential as a value-added measure of teacher education students and teacher education programs, this data set did not provide the opportunity to examine that particular question. As the VAIL continues to be implemented in the teacher education program, it will be possible to create a longitudinal data set that could measure the growth of students over time. For the first group in our sample, the VAIL has been administered as a pre-test. Future research will need to examine how the first, and future, cohorts show changes in VAIL results throughout the course of their teacher education program.

Another limitation of our study was that we only have VAIL data from the preservice teachers who completed both the VAIL and the demographic survey. Even though there was an expectation by the school of education that all preservice teachers in each cohort would complete the VAIL and demographic survey, some 18% of them failed to do so. While we feel that an 82% participation rate is excellent, there is no way to know how the missing preservice teachers would have performed on the VAIL had they participated.

A previous study has linked performance on the VAIL with higher quality teaching as measured by the CLASS (Hamre et al., 2012). For this study, we did not have sufficient CLASS data to analyze the predictive ability of the VAIL for the quality of preservice teachers' instructional practices. Future research should examine if the VAIL is predictive of effective classroom instructional practices. Additional analysis could seek to understand if VAIL scores are broadly predictive of classroom instructional performance, or if performance on specific VAIL indicators translates to effective instruction in the same indicators as measured by the CLASS.

Finally, the VAIL is a new measure that has been used in only one published study. Additional psychometric testing needs to be conducted to further examine the compositing of variables within and across videos.

7. Conclusion

The VAIL shows promise as a standardized tool that can be reliably implemented in a teacher education setting. Since the VAIL is conducted online and can be administered several times during a preservice teacher's teacher education program, the VAIL may be useful for measuring value-added for teacher education programs. However, more research will need to be conducted to determine if the VAIL is sensitive to changes in preservice teachers' learning over time. If future research demonstrates this ability, the VAIL can be a very useful tool in understanding quality in teacher education programs.

References

Adler, S. (2008). The education of social studies teachers. In Levstik, L., & Tyson, C. (Eds.), Handbook of research in social studies education (pp. 329-351). New York, NY: Routledge.

Brophy, J. E. (1999). Teaching. In Educational practices series-1. Geneva, Switzerland: International Academy of Education and International Bureau of Education, UNESCO.

Brophy, J. E., & Good, T. L. (1986). Teacher behavior and student achievement. In Wittrock, M. C. (Ed.), Handbook of research on teaching (3rd ed., pp. 328-375). New York, NY: Macmillan.

Cadima, J., Leal, T., & Burchinal, M. (2010). The quality of teacher-student interactions: associations with first graders' academic and behavioral outcomes. Journal of School Psychology, 48, 457-482.

Cochran-Smith, M. (2004). The problem of teacher education. Journal of Teacher Education, 55(4), 295-299. http://dx.doi.org/10.1177/0022487104268057.

Curby, T. W., Rimm-Kaufman, S. E., & Ponitz, C. C. (2009). Teacher-child interactions and children's achievement trajectories across kindergarten and first grade. Journal of Educational Psychology, 101(4), 912-925.

van Es, E. A., & Sherin, M. G. (2002). Learning to notice: scaffolding new teachers' interpretations of classroom interactions. Journal of Technology and Teacher Education, 10, 571-596.

van Es, E. A., & Sherin, M. G. (2006). How different video club designs support teachers in learning to notice. Journal of Computing in Teacher Education, 22, 125-135.

Ewing, T. (2008). ETS-led team to investigate classroom observational tool for student-teacher interactions. Retrieved May 27, 2011 from http://www.ets.org/newsroom/news_releases/investigating_classroom_observation_tools.

Gates Foundation. (2010). Working with teachers to develop fair and reliable measures of effective teaching. Seattle, WA: Gates Foundation.

Gates Foundation. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Seattle, WA: Gates Foundation.

Glazerman, S., Loeb, S., Goldhaber, D., Staiger, D., Raudenbush, S., & Whitehurst, G. (2010). Evaluating teachers: The important role of value-added. Brookings: Brown Center on Education Policy.

Graue, E., Rauscher, E., & Sherfinski, M. (2009). The synergy of class size reduction and classroom quality. The Elementary School Journal, 110(2), 178-201.

Greenberg, J., Pomerance, L., & Walsh, K. (2011). Student teaching in the United States. Washington, DC: National Council on Teacher Quality.

Grossman, P., & McDonald, M. (2008). Back to the future: directions for research in teaching and teacher education. American Educational Research Journal, 45(1), 184-205.

Hamre, B., Downer, J., Jamil, F., & Pianta, R. (2012). Enhancing teacher intentionality: a conceptual model for the development and testing of professional development interventions. In Pianta, R., Justice, L., Barnett, S., & Sheridan, S. (Eds.), Handbook of early education (pp. 507-532). New York: Guilford Publications.

Hamre, B. K., & Pianta, R. C. (2007). Learning opportunities in pre-school and early elementary classrooms. In Pianta, R., Cox, M., & Snow, K. (Eds.), School readiness and the transition to kindergarten in the era of accountability (pp. 49-84). Baltimore: Brookes.

Hamre, B. K., Pianta, R. C., Burchinal, M., Field, S., LoCasale-Crouch, J., Downer, J. T., et al. (2012). A course on effective teacher-child interactions: effects on teacher beliefs, knowledge, and observed practice. American Educational Research Journal, 49(1), 88-123.

Hamre, B. K., Pianta, R. C., Mashburn, A. J., & Downer, J. T. (2007). Building a science of classrooms: Application of the CLASS framework in over 4,000 U.S. early childhood and elementary classrooms. New York: Foundation for Child Development. Available at http://www.fcd-us.org/resources/resources_show.htm?doc_id507559.

Jamil, F., Sabol, T., Hamre, B., & Pianta, R. (2011, September). A measure of teachers' skills in detecting interactions: the Video Assessment of Interactions and Learning. Paper presented at the Virginia Education Research Association conference, Charlottesville, VA.

La Paro, K., Hamre, B., LoCasale-Crouch, J., Pianta, R., Bryant, D., Early, D., et al. (2009). Quality in kindergarten classrooms: observational evidence for the need to increase children's learning opportunities in early education classrooms. Early Education & Development, 20(4), 657-692.

La Paro, K. M., Pianta, R. C., & Stuhlman, M. (2004). The Classroom Assessment Scoring System: findings from the prekindergarten year. The Elementary School Journal, 104(5), 409-426.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174.

LoCasale-Crouch, J., Konold, T., Pianta, R., Howes, C., Burchinal, M., Bryant, D., et al. (2007). Observed classroom quality profiles in state-funded pre-kindergarten programs and associations with teacher, program, and classroom characteristics. Early Childhood Research Quarterly, 22(1), 3-17.

Malmberg, L. E., & Hagger, H. (2009). Changes in student teachers' agency beliefs during a teacher education year, and relationships with observed classroom quality, and day-to-day experiences. British Journal of Educational Psychology, 79(4), 677-694.

National Council for the Accreditation of Teacher Education (NCATE). (2008). Professional standards for the accreditation of teacher preparation institutions. Washington, DC: NCATE.

National Research Council. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Pakarinen, E., Kiuru, N., Lerkkanen, M.-K., Poikkeus, A.-M., Siekkinen, M., & Nurmi, J.-E. (2010). Classroom organization and teacher stress predict learning motivation in kindergarten children. European Journal of Psychology of Education, 25(3), 281-300.

Pianta, R. C., Belsky, J., Vandergrift, N., Houts, R., & Morrison, F. J. (2008). Classroom effects on children's achievement trajectories in elementary school. American Educational Research Journal, 45(2), 365-397.

Pianta, R. C., & Hamre, B. K. (2009). Conceptualization, measurement, and improvement of classroom processes: standardized observation can leverage capacity. Educational Researcher, 38(2), 109-119. http://dx.doi.org/10.3102/0013189X09332374.

Pianta, R. C., La Paro, K., & Hamre, B. K. (2008). Classroom Assessment Scoring System (CLASS). Baltimore: Paul H. Brookes.

Video Assessment of Interactions and Learning: VAIL. (2010). Coding manual, unpublished manual. Charlottesville, VA: Center for the Advanced Study of Teaching and Learning.

Wiens, P. D. (2012). The missing link: research on teacher education. Action in Teacher Education, 34(3), 249-261.

Zeichner, K. (2005). A research agenda for teacher education. In Cochran-Smith, M., & Zeichner, K. M. (Eds.), Studying teacher education: The report of the AERA panel on research and teacher education (pp. 737-759). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
