The University of Southern Mississippi

The Aquila Digital Community

Dissertations

Fall 12-2018

Mississippi Teacher's Perspectives of Danielson's Framework for Teaching

Donna Floyd University of Southern Mississippi

Follow this and additional works at: https://aquila.usm.edu/dissertations

Recommended Citation

Floyd, Donna, "Mississippi Teacher's Perspectives of Danielson's Framework for Teaching" (2018). Dissertations. 1574. https://aquila.usm.edu/dissertations/1574

This Dissertation is brought to you for free and open access by The Aquila Digital Community. It has been accepted for inclusion in Dissertations by an authorized administrator of The Aquila Digital Community. For more information, please contact [email protected].

MISSISSIPPI TEACHER’S PERSPECTIVES ON DANIELSON’S

FRAMEWORK FOR TEACHING

by

Donna Crosby Floyd

A Dissertation

Submitted to the Graduate School,

the College of Education and Human Sciences

and the School of Education

at The University of Southern Mississippi

in Partial Fulfillment of the Requirements

for the Degree of Doctor of Philosophy

Approved by:

Dr. David Lee, Committee Chair

Dr. Sharon Rouse

Dr. Kyna Shelley

Dr. Richard Mohn

____________________ ____________________ ____________________

Dr. David Lee

Committee Chair

Dr. Lillian Hill

Department Chair

Dr. Karen S. Coats

Dean of the Graduate School

December 2018

COPYRIGHT BY

Donna Crosby Floyd

2018

Published by the Graduate School

ABSTRACT

The focus of this study was to measure Mississippi teachers' perspectives as

identified in the Teacher’s Perspectives on Danielson’s Framework for Teaching

(TPDFT) instrument. This quantitative study investigated whether statistically

significant differences existed between domains as identified in the TPDFT, and if these

findings differed statistically as a function of years of teaching experience, type of

Mississippi teaching certification, and degree of professional trust. A 36-item survey

instrument based on the 2013 version of Charlotte Danielson’s Framework for Teaching

was developed. This survey was piloted and distributed to obtain quantitative data to

assess teacher’s perspectives on Danielson’s Framework for Teaching. A 5-point scale

ranging from 1 (not important at all) to 5 (very important) was used. To test the

hypotheses of this study, descriptive statistics were analyzed, and a repeated measures

analysis of variance (RM-ANOVA) and multiple linear regressions were utilized to

determine statistical significance. The results of this study revealed statistically

significant differences between domains as well as in the area of professional trust.

Linear regressions failed to provide evidence to support the research hypothesis that

domains differ based on teachers’ years of teaching experience or type of Mississippi

teaching certification.
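
The statistical approach summarized above (domain ratings on a 5-point scale compared with a repeated measures ANOVA, then regressed on teacher characteristics) can be illustrated with a brief sketch. The following Python code is not the author's analysis; the column names (teacher_id, domain, rating, years_experience) and the data are hypothetical, and it simply shows how such an RM-ANOVA and a linear regression might be run with pandas and statsmodels.

# Illustrative sketch only, assuming hypothetical column names and toy data.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import AnovaRM

# Long format: one importance rating (1 = not important at all, 5 = very
# important) per teacher per framework domain.
ratings = pd.DataFrame({
    "teacher_id": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    "domain": ["planning", "environment", "instruction", "responsibilities"] * 4,
    "rating": [5, 4, 5, 3, 4, 4, 5, 4, 5, 5, 4, 3, 4, 3, 5, 4],
})

# Repeated measures ANOVA: do mean importance ratings differ across domains?
rm_results = AnovaRM(ratings, depvar="rating", subject="teacher_id",
                     within=["domain"]).fit()
print(rm_results.summary())

# Wide format: one row per teacher, with a mean domain score and a predictor.
teachers = pd.DataFrame({
    "planning_mean": [4.5, 4.0, 4.8, 3.9],
    "years_experience": [2, 9, 17, 25],
})

# Linear regression: does years of experience predict the planning and
# preparation score? (The full study also considers certification and trust.)
ols_results = smf.ols("planning_mean ~ years_experience", data=teachers).fit()
print(ols_results.summary())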

ACKNOWLEDGMENTS

To all of my committee members, I would like to say thank you for working with

me for the last 7 years to accomplish this degree. Thank you to Dr. Lee for chairing my

committee and making all of the arrangements for me to complete this task. Dr. Shelley,

you are always patient, kind, and supportive. Dr. Mohn, thank you for meeting with me

and advising me on my statistics. Dr. Rouse, you cared about my topic and went the extra

mile to guide me. I appreciate you, Dr. Rouse.

To my sweet friend Dr. Elgen Hillman, thank you for your encouragement,

positive support, and kind words. Most of all, thank you for the many prayers you prayed

with me.

DEDICATION

I would like to dedicate this work first to my Lord and Savior Jesus Christ. I

would also like to dedicate this to my incredible husband, Walter, as words cannot

express how blessed I am to have you as my trophy. To my children, Faith, Hope,

Charity, and Joy along with their amazing husbands and my precious grandchildren, I

dedicate this work. To my brothers and sisters, Ray, Cheryl, Gary, Kevin, Wayne, Lou,

April, Paul, David, Lea Ann, and Cheryl K, thank you all for inspiring me. Last but not

least, I dedicate this work to my mom, Cora, who always encourages us and lifts us up in

prayer daily.

TABLE OF CONTENTS

ABSTRACT ........................................................................................................................ ii

ACKNOWLEDGMENTS ................................................................................................. iii

DEDICATION ................................................................................................................... iv

LIST OF TABLES ............................................................................................................ xii

LIST OF ILLUSTRATIONS ........................................................................................... xiii

LIST OF ABBREVIATIONS .......................................................................................... xiv

CHAPTER I – PURPOSE OF THE STUDY ..................................................................... 1

Introduction ..................................................................................................................... 1

Background ..................................................................................................................... 2

Teacher Performance .................................................................................................. 2

Measuring Teacher Performance .................................................................................... 6

Assessment of Teacher Performance in Mississippi ....................................................... 7

Theoretical Framework ................................................................................................... 9

Constructivism and Danielson’s Framework for Teaching ...................................... 10

Constructivism and a Model for Professional Improvement .................................... 11

Summary of Historical and Theoretical Background ................................................... 11

Statement of the Problem .............................................................................................. 12

Purpose of the Research Study ..................................................................................... 12

Research Questions ....................................................................................................... 13

Research Hypotheses .................................................................................................... 14

Limitations, Delimitations, and Assumptions ............................................................... 15

Definitions of Key Terms ............................................................................................. 16

CHAPTER II – LITERATURE REVIEW ....................................................................... 20

Theoretical Framework ................................................................................................. 21

Constructivism and Danielson’s Framework for Teaching ...................................... 21

Constructivism and a Model for Professional Improvement .................................... 22

History of Teacher Assessment .................................................................................... 23

Government and Legislative Influence ......................................................................... 25

A Nation at Risk Report ............................................................................................ 28

No Child Left Behind................................................................................................ 29

Highly Qualified Teachers Versus Effective Teachers ............................................. 30

American Recovery and Reinvestment Act .............................................................. 32

Common Core State Standards Initiative .................................................................. 33

Every Student Succeeds Act ..................................................................................... 34

Regional and State Influences ................................................................................... 35

Purpose of Teacher Assessment.................................................................................... 37

Formative Assessments ............................................................................................. 38

Summative Assessments ........................................................................................... 39

Rationale for Formative and Summative Assessments ............................................. 40

Teacher Evaluation and Effective Teaching ................................................................. 41

Danielson’s Framework Domains................................................................................. 45

Components in Assessing Teacher Performance .......................................................... 47

Planning .................................................................................................................... 50

Student Assessment .................................................................................................. 52

Instruction ................................................................................................................. 56

Classroom Environment............................................................................................ 57

Professional Responsibility ....................................................................................... 57

Professional Evaluation ............................................................................................ 59

Evaluator ............................................................................................................... 59

Principals as evaluators ......................................................................................... 60

Mentors and peers as evaluators ........................................................................... 62

Feedback ............................................................................................................... 64

Classroom observation .......................................................................................... 65

Teacher’s response to feedback ............................................................................ 66

Self-assessment and self-reflection ....................................................................... 67

Teacher’s knowledge of assessment ..................................................................... 68

Professional Trust in Schools.................................................................................... 70

Professional Trust and Social Capital ....................................................................... 71

Trust as it relates to schools .................................................................................. 72

Trust as it relates to collaboration in schools ........................................................ 74

Trust as it relates to teacher assessment ................................................................ 76

Factors Affecting Teacher Performance ....................................................................... 78

Years of Certified Teaching Experience ................................................................... 78

Type of Teacher Certification ................................................................................... 80

Mississippi Identified Components in Assessing Teacher Performance ...................... 85

Mississippi Teacher Assessment Instrument (MTAI) .............................................. 85

Mississippi Statewide Teacher Appraisal Rubric (M-STAR) .................................. 85

Summary and Future Needs in Mississippi Evaluation of Teacher Performance ........ 87

Conclusion .................................................................................................................... 88

CHAPTER III – RESEARCH DESIGN AND METHODOLOGY ................................. 95

Introduction ................................................................................................................... 95

Research Questions and Hypotheses ............................................................................ 96

Research Questions ................................................................................................... 96

Research Hypotheses ................................................................................................ 97

Research Design............................................................................................................ 97

Pilot Study ..................................................................................................................... 99

Draft Instrument Description .................................................................................... 99

Draft Instrument Expert Content Review ............................................................... 101

Content review form ........................................................................................... 101

Content review results from demographic section .............................................. 104

Content review results from professional trust ................................................... 105

Content review results from planning and preparation. ...................................... 107

Content review results from classroom environment ......................................... 107

Content review results from instruction .............................................................. 108

Content review results from professional development ...................................... 108

Content review results from domain importance ................................................ 109

Revised Instrument Pilot Study .............................................................................. 110

Procedures ........................................................................................................... 110

Participants .......................................................................................................... 110

Reliability ............................................................................................................ 111

Final instrument revisions ................................................................................... 112

Study Design ............................................................................................................... 113

Participants and Setting........................................................................................... 113

Procedure ................................................................................................................ 116

Data Analysis .......................................................................................................... 117

CHAPTER IV – RESULTS ............................................................................................ 118

Introduction ................................................................................................................. 118

Organization of Data Analysis .................................................................................... 118

Research Questions ..................................................................................................... 119

Research Hypotheses .................................................................................................. 119

Results Research Question One .................................................................................. 120

Research Hypothesis One ....................................................................................... 120

Descriptive statistics ........................................................................................... 121

Analysis............................................................................................................... 121

Results of research hypothesis one ..................................................................... 125

Results Research Question Two through Four ........................................................... 125

Descriptive Statistics ............................................................................................... 125

Planning and Preparation Results ........................................................................... 126

Classroom Environment Results ............................................................................. 128

Instruction Results .................................................................................................. 129

Professional Responsibility ..................................................................................... 129

Research Hypothesis Two Through Four ............................................................... 130

Results of research hypothesis two ..................................................................... 130

Results of research hypothesis three ................................................................... 130

Results of research hypothesis four .................................................................... 131

CHAPTER V – DISCUSSION ....................................................................................... 132

Summary of Procedures .............................................................................................. 132

Summary of Data ........................................................................................................ 134

Discussion ................................................................................................................... 136

Limitations .................................................................................................................. 138

Implications................................................................................................................. 138

Recommendations ....................................................................................................... 140

APPENDIX A – Teacher’s Perspectives on Danielson’s Framework for Teaching –

(TPDFT) .......................................................................................................................... 142

APPENDIX B – Danielson Letter of Permission ........................................................... 148

APPENDIX C – Charlotte Danielson’s Framework for Teaching Instrument (2013)

Translations of Items....................................................................................................... 149

APPENDIX D –Teacher’s Perspectives on Danielson’s Framework for Teaching –

Content Review (TPDFT-CR) ........................................................................................ 150

APPENDIX E – Consent Review Request Letter ........................................................... 155

APPENDIX F – Expert Reviewers ................................................................................. 156

APPENDIX G – Teacher’s Perspectives on Danielson’s Framework for Teaching

(TPDFT) Review Sheet................................................................................................... 157

APPENDIX H – Expert Reviews Summary Data........................................................... 165

APPENDIX I – Teacher’s Perspectives on Danielson’s Framework for Teaching – Pilot

Study ............................................................................................................................... 173

APPENDIX J – Script for Administration of Research Study ........................................ 179

APPENDIX K – IRB Approval Letter ............................................................................ 182

REFERENCES ............................................................................................................... 183

LIST OF TABLES

Table 1 Demographic Section Revision ......................................................................... 105

Table 2 Professional Trust Domain Revisions ................................................................ 106

Table 3 Planning and Preparation Domain Revisions .................................................... 107

Table 4 Classroom Environment Domain Revisions ...................................................... 108

Table 5 Instruction Domain Revision ............................................................................. 108

Table 6 Domain Importance Revisions ........................................................................... 109

Table 7 Demographic Characteristics of Pilot Study Participants .................................. 111

Table 8 Domain Reliability Statistics ............................................................................. 112

Table 9 Domain Reliability Statistics ............................................................................. 115

Table 10 Descriptive Statistics........................................................................................ 121

Table 11 Repeated Measures Analysis of Variance Domain .......................................... 122

Table 12 Significant Mean Differences Bonferroni Post-Hoc ........................................ 124

Table 13 Intercorrelations Between Variables ................................................................ 126

Table 14 Results of Linear Regression Analysis ............................................................ 127

Table 15 Summary of Linear Regression Analysis ........................................................ 128

LIST OF ILLUSTRATIONS

Figure 1. Content review form directions. ...................................................................... 102

Figure 2. Content review form for demographics question 1 and response stem. .......... 103

Figure 3. Content review overall comment demographics section. ................................ 103

Figure 4. Content review format for trust domain. ......................................................... 104

Figure 5. Demographic information revision.................................................................. 112

LIST OF ABBREVIATIONS

ARRA American Recovery and Reinvestment Act

ESEA Elementary and Secondary Education Act

ESSA Every Student Succeeds Act

FFT Framework for Teaching

HQT Highly Qualified Teacher

LEA Local Educational Agency

MDE Mississippi Department of Education

MSU Mississippi State University

M-STAR Mississippi Statewide Teacher Appraisal

Rubric

MTAI Mississippi Teacher Assessment Instrument

NCEE National Commission on Excellence in

Education

NCLB No Child Left Behind

RCU Research and Curriculum Unit

SEA State Educational Agency

TPDFT Teacher’s Perspectives on Danielson’s

Framework for Teaching

CHAPTER I – PURPOSE OF THE STUDY

Introduction

The classroom teacher is one of the most significant building blocks in our public

educational system. During the 2012-2013 school year, the education system was

responsible for educating over 49 million K-12 students. The single most significant

predictor of success for these millions of students is their classroom teacher. Many

researchers agree that the most significant element to student growth is the teacher

(Goldhaber, 2002; Hanushek & Rivkin, 2010; Johnson, Kraft, & Papay, 2012; Rockoff,

2004).

Teacher assessment and evaluation are two of the most researched and written

about topics in the entire field of education. Great debate surrounds the assessment and

evaluation of the American K-12 public education classroom teacher. Researchers have

developed numerous measures, opinions, and theories over a long stretch of American

history regarding public school classroom teachers.

While reviewing the research literature regarding teacher assessment and evaluation

systems, many questions arose and remain unanswered. However, it became apparent that

the Danielson Framework for Teaching has had a convincing research impact on the field of

education. Danielson's framework has become the basis for the majority of teacher

evaluations (McClellan, Atkinson, & Danielson, 2012; Olson, 2015). As early as 2012, at

least 16 states had adopted Danielson's Framework for Teaching exclusively as their

statewide teacher assessment model. Furthermore, it was one of several state-approved

models in nine other states and was being used in countless school districts across the

country (McClellan et al., 2012; Olson, 2015).

Danielson’s Framework for Teaching is a research-based teacher assessment

model. Because of its reputation, status, and large following, it has been the subject of

numerous studies, articles, and reports, including (a) Rethinking Teacher Evaluation in

Chicago, (b) The Measures of Effective Teaching, and (c) The Effect of Evaluation on

Performance. Moreover, the Framework for Teaching (FFT) is aligned with the

Interstate New Teachers Assessment and Support Consortium (INTASC) standards

(Michigan Association of School Administrators, 2016). Consequently, the Danielson

Framework for Teaching has been determined to be a robust teacher assessment model

regarding reliability, validity, and defensibility (Michigan Association of School

Administrators, 2016). This study used an adaptation of the FFT, the Teacher’s

Perspectives on Danielson’s Framework for Teaching (TPDFT) (See Appendix A), to

examine the relative importance of teaching components.

In the following chapter, the researcher examined the historical and theoretical

background regarding the development of teacher assessment. This background provided

a framework for understanding the context of this study and served as a primer for this

study.

Background

Teacher Performance

A teacher’s ability to perform in the classroom has never been under more

scrutiny by federal and state departments of education, political officials, and the general

public (Bush, 2004; Johnson et al., 2012; McNeil, 2014) than it is today. This scrutiny of

teacher performance, interestingly enough, is driven not by research but rather by national

reports and legislation.

The Faculty Development and Instructional Design Center at Northern Illinois

University viewed the terms assessment and evaluation as having the same or

similar meanings concerning the measurement of teacher performance. However, they are different

in meaning and are in fact separate steps in the process of evaluating teacher performance

or quality level. The term assessment is focused on the gathering of information (Faculty

Development and Instructional Design Center, n.d.; Little & Akin-Little, 2014), while the

term evaluation is focused on the judgment of the information gathered (Danielson &

McGreal, 2000).

The process of evaluating teacher performance can be said to have three steps: (a)

assessment, (b) evaluation, and (c) decision-making (Faculty Development and

Instructional Design Center, n.d.). Much of the literature reviewed as part of this study

did not differentiate between terminology usage of assessment and evaluation; therefore,

these terms were used interchangeably. However, for this study, the researcher utilized

the term assessment as the primary term in reference to measuring a teacher's performance.

The term evaluation was infrequently used in this study and only when referring to a

broad perspective or viewpoint of teacher assessment (e.g., an enterprise or system of

assessment) or when the literature was referring to a teacher evaluation system. The term

assessment was a more accurate term for use in this study because the term focused on

the gathering of information and not the judgment of the information collected.

A balanced education is essential to productive living within today’s society

(Hanushek & Rivkin, 2010; Rockoff, 2004). For teachers to provide well-rounded K-12

education for their students, they must continually address ever-changing educational

topics, including subject matter content, classroom management, instruction, planning,

and preparation (Hanushek & Rivkin, 2010; Rockoff, 2004). Hanushek and Rivkin

(2010) developed descriptors in an effort to capture the many facets and complex nature

of teacher performance. These descriptors are teacher quality and teacher effectiveness.

According to Hanushek and Rivkin (2010), along with Rockoff (2004), teacher quality is

the single most notable variable impacting student achievement within the field of

education.

The genesis of the modern focus on teacher quality was a report published in 1983 by the

National Commission on Excellence in Education (NCEE). This commission was formed

to evaluate the U.S. Public Education System. The product of the commission’s

evaluation was a report entitled A Nation at Risk. The commission’s report, according to

Lingenfelter (2007), caused the nation to become alarmed for the future of American

Public Education. Moreover, the report recognized that teacher quality was a critical

element to student understanding (NCEE, 1983). The issues identified in A Nation at Risk

prompted President George W. Bush and a bipartisan Congress to enact the

reauthorization of the Elementary and Secondary Education Act (ESEA), commonly

known as the No Child Left Behind Act (NCLB) (2002). The NCLB legislation (2002)

mandated that all students would be proficient in basic skills by 2014. Each state

individually developed policy to comply with NCLB mandates or submitted waivers.

Mississippi, the state with which this study is concerned, chose to submit a waiver for

NCLB. Phil Bryant, the governor of Mississippi, realized that the state would not meet

NCLB’s mandates by the end of 2014. As a result, Governor Bryant presented a waiver

to the United States Department of Education asking to excuse Mississippi from the

NCLB mandate, which required 100% of the state’s students to be proficient in basic

skills by the year 2014. In turn, Mississippi submitted the ESEA flexibility request in July

of 2012 (Gilbertson, 2012). This request included a requirement for the State Educational

Agency (SEA) and Local Educational Agency (LEA) to create and implement a statewide

teacher evaluation and support system.

Later, and in support of the ESEA flexibility request, the Research and

Curriculum Unit (RCU) located at Mississippi State University (MSU) developed a

teacher assessment program and entitled it the Mississippi Statewide Teacher Appraisal

Rubric (M-STAR). The creators of M-STAR heavily leveraged the teacher assessment

system (Dechert & Jordan, 2014; Dechert, Kappler, & Nordin, 2015) developed by

Danielson in her 2000 seminal work, Teacher Evaluation to Enhance Professional

Practice (Danielson & McGreal, 2000). The objective of the creators of M-STAR was to

develop a rubric that could successfully assess teacher quality. Over a period of 5 years,

the state of Mississippi invested heavily in M-STAR through collaboration with consultants,

teacher and public reviews, school district piloting, statewide training, and statewide

distribution and implementation. In 2012, Mississippi began piloting M-STAR, and by

the beginning of the 2013 academic year, M-STAR was in every school district.

The NCLB act was replaced in 2015 by the Every Student Succeeds Act (ESSA).

President Barack Obama signed ESSA into law in December 2015 (Gandha &

Baxter, 2016). The act allowed for changes to NCLB’s mandates; one of those changes

regarded teacher assessment. ESSA deferred teacher assessment to the discretion of

the states (Gandha & Baxter, 2016). Hart, director of the Mississippi Department of

Education (MDE) Educator Evaluations department, reported that the evaluation of

teacher performance would remain in the hands of the MDE (T. Hart, personal

communication, July 12, 2016). Hart also reported that the newly formed MDE ESSA

Planning Team would be reviewing the ESSA law requirements and making

recommendations.

Measuring Teacher Performance

There are two primary types of teacher assessments used for evaluating teacher

performance: formative and summative. A formative assessment identifies strengths and

weaknesses of a teacher while providing information the teacher needs to grow

individually in their profession (Danielson & McGreal, 2000; Weisberg, Sexton,

Mulhern, & Keeling, 2009). A summative assessment identifies teachers who have met

the established standards for teachers (Marzano, 2012). As reported by Marzano (2012),

teachers prefer formative assessments because teachers typically use the results to make

decisions regarding their professional growth. The government and community prefer

summative evaluations because stakeholders typically use the results to make

employment decisions regarding teacher status (Danielson & McGreal, 2000).

The overall purpose of summative evaluations is to identify teachers who have not

met the established standards (Marzano, 2012). However, Mielke and Frontier (2012)

found that summative teacher evaluations based on comprehensive teaching frameworks

could be constructed to guide teacher improvements throughout the school year, and not

just for the measurement of a teacher’s competency and growth.

According to Danielson and McGreal (2000), government leaders, policymakers,

and administrators favor summative evaluation. The reason policymakers prefer

summative evaluation is that government funds represent the majority of K-12 public

education funding (Danielson & McGreal, 2000). Government funding is understood to

be monies originating from federal, state, and local tax dollars provided by citizens,

teacher’s salaries being one of the greatest expenditures. Therefore, the citizens or public,

having funded teacher’s salaries, have a right to understand the quality level of the

teachers working in public schools (Danielson & McGreal).

Assessment of Teacher Performance in Mississippi

Prior to the M-STAR rubric, which was the formal Mississippi teacher assessment

instrument until the 2016-2017 school year, teachers were evaluated with a simple,

district-level checklist. M-STAR was crafted over a 5-year period by consulted experts,

who included five domains in the instrument's design (MDE, 2014a).

These domains served to identify teacher strengths and weaknesses in their profession.

The MDE developed M-STAR with the purpose of improving teacher performance and

judgment of teacher abilities (MDE, 2014a). A recognized prerequisite for M-STAR, or

any teacher assessment instrument, to facilitate improvement in a teacher's performance

is that the teacher must recognize and understand the intended purpose of the

assessment (Danielson & McGreal, 2000).

The MDE consulted with the American Institutes for Research in drafting the M-

STAR rubric instrument. In the spring of 2011, an M-STAR rubric instrument draft

included five domains (Planning, Assessment, Instruction, Classroom Environment, and

Professional Responsibilities) and 20 standards (Office of Educator Quality, 2016). The

creators of the M-STAR instrument based the instrument on numerous resources

including the Danielson Framework for Teaching and National Board and INTASC

standards (Office of Educator Quality, 2016).

The MDE provided 5 hours of statewide M-STAR training to local schools via

video (MDE, 2014b). This training included data, examples, and case studies that

modeled M-STAR domain characteristics. The training also included vignettes via video

that afforded teachers an opportunity to practice identification of elements associated

with each M-STAR domain. In 2013, when Mississippi began using M-STAR as a

teacher assessment instrument, issues began to arise. It became evident that teachers'

concerns included not understanding the assessment, the basic tenets of the rubric domains,

or the evaluation process of which the rubric was a part. School administrators also had

concerns with the extreme amount of man-hours required to complete each M-STAR

evaluation rubric with reliability and validity.

By December of 2015, at a time when Mississippi had addressed M-STAR

concerns and reached nearly ninety percent participation in M-STAR, there was a shift in

federal lawmakers’ priority regarding teacher assessment. Then, in 2015, President

Obama signed the “Every Student Succeeds Act” (ESSA) (Gandha & Baxter, 2016). This

act overturned several NCLB mandates, one of which was the requirement for a statewide

teacher performance evaluation system. The responsibility for the evaluation of teacher

performance was to now be given to the discretion of the states by way of ESSA changes

(Gandha & Baxter, 2016).

Because of ESSA, a vital shift occurred in teacher evaluation responsibility. Now,

each state has full liberty to develop a unique teacher evaluation system or systems.

Mississippi chose to allow school districts full autonomy in developing and managing

their teacher evaluation systems. The implementation of these systems took place for the

first time in the 2016-2017 school year (MDE, 2016). Consequently, this is another

critical juncture for teacher assessment in Mississippi. As each school district develops

and implements its teacher evaluation system, an opportunity exists to implement

research findings.

Concerning measuring teacher performance, teachers are valuable intellectual

assets (King & Watson, 2010). Teachers are uniquely qualified to recognize the

discriminating qualities of an effective teacher. They have education-specific training,

knowledge, and classroom experience, and they have been witness to extremely effective and

ineffective teachers (King & Watson, 2010). Consequently, teachers most likely represent

one of the best sources for determining what qualities produce effective teachers and

thus, what makes up an effective teacher assessment instrument (Ackerman &

Mackenzie, 2007). This study surveyed Mississippi Gulf Coast K-12 teachers to

determine the relative importance of teaching components, which are planning and

preparation, classroom environment, instruction, and professional responsibilities, as

identified in the Danielson Framework.

Theoretical Framework

The theoretical basis for this study is constructivism. Constructivism was

developed by John Dewey, Jean Piaget, and Lev Vygotsky and is the theoretical

framework for Danielson as well. This theory holds the supposition that individuals apply

prior knowledge to their current life experiences to construct their cognitive ability

(Danielson, 2011a; Juvova, Chudy, Neumeister, Plischke, & Kvintova, 2015). This

theory states that individuals gain knowledge and meaning through interaction between

their individual experiences and their backgrounds (Juvova et al., 2015).

Jean Piaget (1952) identified an individual’s knowledge systems as schema,

knowledge organized into units of information. A schema, or background of the learner,

is essential to the learning process in the constructivist theory (Baerveldt, 2013).

According to Jones, Jones, and Vermette (2010), the theorists Dewey, Piaget, and Bruner

expanded on the tenets of the constructivist framework. Jones et al. (2010) developed the

idea that the learner must be actively involved in the learning process. These

constructivist tenets are the basis of Danielson's Framework for Teaching.

Constructivism and Danielson’s Framework for Teaching

Danielson (1996) developed the Framework for Teaching from focused research

on the implications of constructivist pedagogy and "the constructivist

view of learning (and therefore teaching) that underlines the framework for professional

practice” (Danielson, 1996, p. 122). Constructivist themes are apparent throughout

Danielson’s (2013) framework. For instance, in Domain I (planning and preparation),

Domain II (classroom environment), and Domain III (instruction), multiple concepts that

incorporate Piaget’s (1952) thoughts on the importance of background knowledge are

present. Constructivist theory is further evident in the inclusion of Dewey's (1938)

emphasis on teacher planning and organization, which are the foundation of Domain I (planning

and preparation) of Danielson’s framework (2013). Concerning Domain III (instruction),

Dewey’s (1938) theory of reflective activity is the fundamental concept underlying

reflective questioning techniques. Constructivism research has had a significant influence

on teaching methods and in the educational system at large (Cleaver & Ballantyne, 2014).

This constructivist framework of learning and teaching sets up a foundation by which the

current paradigm for teacher assessment and evaluation systems can be better understood

(Danielson, 2011a).

Constructivism and a Model for Professional Improvement

In the development of the framework, Danielson performed an exhaustive review

of literature to incorporate significant themes and research on effective teaching

(Danielson, 2007). This constructivist thought that prior knowledge is used to develop

new knowledge reinforces the idea that research and best practice can enlighten

future teacher practice. The majority of current research and literature on the assessment

of teacher performance employs constructivism as the main theme (Danielson, 2011b).

Moreover, many teacher evaluation systems are also founded on the

constructivist theory. In summary, constructivism serves as the foundation for this study,

as well as Danielson’s Framework for teaching. Methodologically, the constructivism

approach as to how individuals learn is the theoretical framework for this study.

Summary of Historical and Theoretical Background

Recognizing the direct relationship that legislation and the United States

Department of Education have on the evaluation of teacher performance is important.

These entities determined the progression of and changes to Mississippi's teacher evaluation

system (Herlihy et al., 2014). Educational legislation at the federal level flowed down to

individual states, which responded to meet the requirements of the law. States implemented

or revised state-level teacher evaluation systems to comply with federal mandates (Goe,

Holdheide, & Miller, 2014; Lavigne, 2014). Danielson’s framework has become a major

foundation for teacher evaluation systems (Alvarez & Anderson-Ketchmark, 2011).

Statement of the Problem

The assessment of teacher performance has become paramount in today’s culture

of accountability. Guerriero (2014) of the Organization for Economic Co-operation and

Development (OECD) reported that, given its grounding in pedagogical knowledge, the teaching profession

is a knowledge-rich profession with teachers being the learning specialists. Teachers as

professionals in the field are expected to process and evaluate new knowledge as it

pertains to their core professional practice (Guerriero, 2014).

There are three factors, supported through the review of research, that influence

teachers' perspectives regarding the relative importance of teaching components: (a)

years of teaching experience, (b) type of teacher’s state certification, and (c) a teacher’s

professional trust in the evaluator. These factors are also included in the Teacher’s

Perspectives on Danielson’s Framework Instrument (TPDFI).

This study seeks to explore the relative importance of teaching components, as

identified in the TPDFI. Furthermore, this study will seek to differentiate the relative

importance of the components regarding years of teaching experience, type of teacher’s

state certification, and teacher’s professional trust in the evaluator.

Although there has been considerable research conducted on the assessment of

teacher performance, little is known regarding the relative level of importance teachers

place on each teaching component, as identified in the TPDFI.

Purpose of the Research Study

The purpose of this study was to gain an understanding regarding Mississippi

Gulf Coast K-12 teacher’s perspectives on the relative importance of teaching

components as identified in the TPDFI. The study also explored the extent to which the

relative importance of the components vary as a function of the teachers’ years of

certified teaching experience, type of state teaching certification, and level of

professional trust in their evaluator.

The purpose of the survey methodology was to (a) make available peer-reviewed

data for Mississippi educational policymakers, MDE, school district administrators,

teachers, and stakeholders; and (b) provide a voice for Mississippi Gulf Coast K-12

teachers regarding their perspective on the relative importance of teaching components as

identified in the TPDFI.

Mississippi is now at a critical juncture in shaping a new teacher evaluation

system. This study aimed to contribute research data that can assist state policymakers,

teachers, and other stakeholders to recognize the level of importance teachers place on

teaching components as identified in the TPDFI.

Research Questions

The central question of this study was as follows: What are the significant

teaching components identified in the TPDFI? Also, this study addressed additional

research questions to provide further insight into teaching components.

RQ1: How do Mississippi Gulf Coast K-12 teachers rate the relative importance of

teaching components as identified in the Teacher’s Perspectives on Danielson’s

Framework Instrument (TPDFI)?

RQ2: Do Mississippi Gulf Coast K-12 teachers rate the relative importance of

teaching components, as identified in the Teacher’s Perspectives on Danielson’s

Framework Instrument (TPDFI), differently based on the teacher's years of certified

teaching experience?

RQ3: Do Mississippi Gulf Coast K-12 teachers rate the relative importance of

teaching components as identified in the Teacher’s Perspectives on Danielson’s

Framework Instrument (TPDFI) differently based on the teacher’s type of Mississippi

state teaching certification?

RQ4: Do Mississippi Gulf Coast K-12 teachers rate the relative importance of

teaching components as identified in the Teacher’s Perspectives on Danielson’s

Framework Instrument (TPDFI) differently based on the teacher’s perspectives of their

professional trust in their assessment evaluator?

Research Hypotheses

RH1: There will be significant differences between individual teacher ratings for

relative importance of teaching components as identified in the Teacher’s Perspectives on

Danielson’s Framework Instrument (TPDFI).

RH2: There will be significant differences between individual teacher ratings,

based on the teacher’s years of certified teaching experience, for relative importance of

teaching components as identified in the Teacher’s Perspectives on Danielson’s

Framework Instrument (TPDFI).

RH3: There will be significant differences between individual teacher ratings,

based on the teacher’s type of Mississippi state teaching certification, for relative

importance of teaching components as identified in the Teacher’s Perspectives on

Danielson’s Framework Instrument (TPDFI).

RH4: There will be significant differences between individual teacher ratings,

based on the teacher’s perspectives of their professional trust in their assessment

evaluator, for relative importance of teaching components as identified in the Teacher’s

Perspectives on Danielson’s Framework Instrument (TPDFI).

Limitations, Delimitations, and Assumptions

The study was limited to certified kindergarten through twelfth-grade public

school teachers within two school districts located on the Mississippi Gulf Coast. The

survey instrument used a 5-point Likert scale. The results were statistics-based, with

respondents' answers limited to teachers' perspectives of Charlotte Danielson's

Framework Instrument and teachers’ trust of evaluators.

First, the study is dependent upon Mississippi Gulf Coast K-12 teachers who

volunteered to participate. Two districts located on the Mississippi Gulf Coast

participated in this research. One limitation is that these school districts have limited

cultural diversity, and the sample may consist largely of Caucasian, mostly middle-class

females, with some variability when compared to the state of

Mississippi at large. An overrepresentation of this demographic group and an

underrepresentation of other groups mean that the sample surveyed in this study may not be representative of

the Mississippi Gulf Coast or the entire state. Thus, these results may not generalize to Mississippi

teachers as a whole.

A delimitation of this study is that it confines itself to surveying teachers from only two

school districts located on the Mississippi Gulf Coast. An assumption of the study is

that teachers participating in the study would have participated in a systematic teacher

evaluation process. A potential limitation is that the data will be self-reported. The

assumption is that participants were not biased and accurately reported their choices when

completing the study instrument, the Teacher's Perspectives on Danielson's

Framework instrument.

Definitions of Key Terms

Assessment within the context of this study is defined as making a decision about

a topic.

Classroom Environment is defined in the current study as developing an

environment of respect and rapport, establishing a culture of learning, managing

classroom procedures, managing student behavior; and organizing physical space

(Danielson, 2007).

A Domain is a broad category of skills, knowledge, dispositions, and related

elements in an educator performance framework. Domains are umbrella descriptions

defined by standards and indicators (Danielson, 2011b).

An Evaluator is defined as the campus principal, although some evaluators may

be trained peers from within or possibly outside the school. These individuals observe,

interview, and review the actions of a teacher in the school setting to assess the teacher’s

skills in their profession (Danielson, 2011a; Peterson, 2004).

Feedback is defined in the current study as information about how a teacher is doing at reaching

his/her goal (Wiggins, 2012).

Formative Assessment is a relatively low-stakes assessment administered primarily to

provide feedback to improve performance. This process provides feedback

on an ongoing basis for adjusting teaching practices in the classroom. Formative

assessments may or may not include the same measures as summative assessments

(Danielson, 2011a).

Highly Qualified Teacher (HQT) is defined in the current study as having a

bachelor’s degree and full state certification or licensure. In middle and high school

teachers must provide documentation of subject matter by majoring in the subject taught,

earn credits equivalent to a major in the subject, pass of a national or state-developed test,

an advanced certification from the state, or a graduate degree (United States Department

of Education, 2004).

Instruction is defined in the current study as communicating clearly with students,

using questioning and discussion techniques, engaging students in learning, evaluating

students during instruction, and being flexible and responsive (Danielson, 2007).

Planning and Preparation is defined in the current study as identifying each

student’s academic traits, proving knowledge of resources; and developing

comprehensible lessons, designing lessons knowing content and pedagogy of material

(Danielson, 2007).

Professional Responsibilities is defined in the current study as teacher reflection,

maintaining precise records, communicating with families, partaking in a professional group,

maturing and improving professionally, and demonstrating professionalism (Danielson,

2007).

Professional Trust is defined as four criteria: respect, competence, personal regard

for others, and integrity (Bryk & Schneider, 2003).

A Rubric is a method for defining and categorizing performance by highlighting

essential aspects of performance and defining observable and measurable levels of

performance along a continuum. In personnel performance assessment, rubrics can

be used to communicate performance expectations that support self-reflection on practice

and facilitate reflection between an evaluator and the evaluated person (Reddy &

Andrade, 2010).

Standards are definitions of the specific teaching activities and responsibilities in

each domain that are research-based best practices (Danielson & McGreal, 2000).

Student Assessment is defined in the current study as establishing outcomes from

instruction and developing student assessments (Danielson, 2007).

Summative Assessment is an often high-stakes assessment administered primarily

at the end of a specific period (e.g., a school year) to provide a judgment on an educator’s

performance (Danielson & McGreal, 2000).

Teaching Effectiveness is defined in the current study as the outcome of teaching

or the impact on student growth (Goe, Bell, & Little, 2008).

Teacher’s Perspectives on Danielson’s Framework for Teaching (TPDFT) is the

assessment derived from “Charlotte Danielson’s Framework for Teaching Instrument”

which measures the teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities.


Type of Mississippi Teacher Certification is defined in the current study as the

training route a teacher used to attain a license to teach in the state, based on credentials earned at a higher education institution.

Years of Teacher Experience is defined in the current study as the number of

years a teacher has taught in the classroom.


CHAPTER II – LITERATURE REVIEW

Teachers are professionals in the field of education, working with students on a

one-on-one basis, and are critical to overall student success in the classroom. Hanushek

and Rivkin (2010) and Rockoff (2004) both concluded that the most critical factor in student growth is the role of the teacher. The MDE continues to rely heavily on statewide teacher input and feedback regarding educational policy, processes, and tool design for teacher assessment. The field of teacher evaluation is dominated by Charlotte Danielson’s

thought structure and research. In fact, it would be a significant challenge for an educator

or administrator to identify a teacher evaluation instrument or plan that is not based on

Danielson’s Framework for Teaching.

The purpose of this study, which was to identify teacher’s perspectives on each element of teacher assessment, specifically Danielson’s Framework for Teaching, informed and guided the review of the literature, as the framework permeates the teacher

evaluation process. Ascertaining which elements Mississippi Gulf Coast K-12 teachers

think are important within Danielson’s framework is timely as the MDE and local school

districts are in the process of developing new teacher evaluation systems and related

policy. Although research abounds regarding teacher evaluation, no research was found

that explored solely teacher’s perspectives, thus making this study unique.

This review of the literature begins by providing the theoretical rationale on which this study is based. Next, a brief history of education as it relates to teacher evaluation

systems and the purpose of assessment are presented. Research regarding teacher

evaluation and effective teaching is naturally grouped into three major themes of literature. These groupings include components of assessing teacher performance, factors


affecting teacher performance, and Mississippi-identified components in assessing

teacher performance. In conclusion, studies that influence teacher’s selection of important

elements in evaluating teacher performance are identified. The literature associated with this study was examined with respect to understanding the need for accurate and reliable teacher assessment.

Theoretical Framework

The theoretical basis for this study is constructivism. Constructivism was

developed by Dewey, Piaget, and Vygotsky and also represents the theoretical framework

for Danielson. This theory holds the supposition that individuals apply prior knowledge

to their current life experiences to construct their cognitive ability (Danielson, 2011a;

Juvova et al., 2015). Moreover, this theory states that individuals gain knowledge and

meaning through interaction between their individual experiences and their backgrounds

(Juvova et al., 2015).

Piaget (1952) identified an individual’s knowledge systems as schema, knowledge

organized into units of information. A schema, or background of the learner, is essential

to the learning process in the constructivist theory (Baerveldt, 2013). Dewey, Piaget, and

Bruner expanded on the tenets of the constructivist framework (Jones et al., 2010). Jones

et al. (2010) developed the idea that the learner must be actively involved in the learning

process. These constructivist tenets form the basis of Danielson’s Framework for

Teaching.

Constructivism and Danielson’s Framework for Teaching

Danielson (1996) developed the Framework for Teaching from focused research

on constructivist pedagogy and its implications, including “the


constructivist view of learning (and therefore teaching) that underlines the framework for

professional practice” (Danielson, 1996, p. 122). Constructivist themes are apparent

throughout Danielson’s (2013) framework. For instance, in Domain 1 (planning and

preparation), Domain 2 (classroom environment), and Domain 3 (instruction), multiple

concepts that incorporate Piaget’s (1952) thoughts on the importance of background

knowledge are present. Further constructivist theory is evident with the inclusion of

Dewey’s (1938) emphasis on teacher planning and organization. Dewey’s (1938)

emphasis is the foundation of Domain 1 (planning and preparation) of Danielson’s

framework (2013). Concerning Domain 3 (instruction), Dewey’s (1938) theory of

reflective activity is the fundamental concept underlying reflective questioning

techniques. Constructivism research has had a significant influence on teaching methods

and in the educational system at large (Cleaver & Ballantyne, 2014). This constructivist

framework of learning and teaching sets up a foundation by which the current paradigm

for teacher assessment and evaluation systems can be better understood (Danielson,

2011a).

Constructivism and a Model for Professional Improvement

In the development of the framework, Danielson performed an exhaustive review

of literature to incorporate significant themes and research on effective teaching

(Danielson, 2007). This constructivist thought that prior knowledge is used to develop

new knowledge reinforces the idea that research and best practices can enlighten

future teacher practice. The majority of current research and literature on the assessment

of teacher performance employs constructivism as the main theme (Danielson, 2011b).

Moreover, many teacher evaluation systems are founded on the constructivist


theory. In summary, Danielson’s Framework for Teaching serves as the foundation for

this study. Methodologically, the constructivist approach to how individuals learn is

the theoretical framework for this study.

History of Teacher Assessment

Teacher assessment has evolved over the last 300 years. In the 1700s, clergy and

respected citizens were the first to determine the effectiveness of teachers (Marzano,

Frontier, & Livingston, 2011). As the importance of education began to grow in the

1800s, along with advancements in industry, a more sophisticated educational system and

faculty staff became increasingly important, as reported by Marzano et al. (2011). Schools

during this time in history had a superintendent and teachers as their only employees. As

industry shifted and populations grew, the school system had to adapt and change by

adding additional educational buildings. The increasing student population and the

demand for educated workers also influenced the growth in the educational infrastructure,

which required extra educational administrative staff. As school systems grew, the

responsibility became more than a single superintendent could manage. The solution was

to move a teacher on site to work in a manager’s role for the school building (Marzano et

al., 2011). This new manager position was termed principal teacher, and eventually it

was shortened to principal (Buckner, n.d.). During this time, according to Marzano et al.

(2011), teachers were hired or allowed to continue employment based on overall teacher

effectiveness. This effectiveness was measured with respect to their students’ learning

(Marzano et al., 2011).

At the turn of the 20th century, two schools of thought emerged regarding

education. First, John Dewey (Watras, 2012) viewed education as a whole child


approach, which puts the focus of education on citizenship and democracy. Second,

Frederick Taylor (Bonstingl, 1992), who lived in the time of the Industrial Revolution,

viewed education as a more scientific approach, such as observing human behaviors to

better understand how to teach.

By the 1960s and 1970s, according to Glickman et al. (1998), student

growth in knowledge and skill was becoming the most important concern of professionals

in the educational community. Danielson and McGreal (2000) wrote:

The 1960s and 1970s brought a huge burst of energy to the teaching research and

prompted a dramatic shift in the focus of teacher assessment. The push to enhance

basic skills acquisition and improve science and mathematics teaching

encouraged research into what teachers did or could do to improve basic skills. (p.

13)

By the 1970s, according to Danielson and McGreal (2000), the correlation between student achievement and teachers’ actions in the classroom was beginning to be noticed. The

scholars’ explanation of the research was that “the ability to more accurately document

teacher classroom behavior led to the design of studies seeking to identify what kind of

teacher behavior could be linked to student achievement” (Danielson & McGreal, 2000,

p. 13).

Glickman et al. (1998) reported that while researching the elements of effective

teaching, a correlation between students’ scores from standardized achievement tests and

teaching strategies was first recognized. The effective teaching research of the 1970s identified four distinct factors in which a correlation between teacher effectiveness and student achievement was observed: (a) the correlation of teacher’s perspectives and student


achievement, (b) classroom climate, (c) academic learning time, and (d) the instructional

process (Glickman et al., 1998). This list of distinct factors, recognized by Danielson and McGreal (2000) and by Glickman et al. (1998), was found to correlate with teacher effectiveness and student achievement.

Glickman et al. (1998) went on to recognize that teacher’s perspectives also had a

correlation to student achievement. Glickman et al. identified four teacher’s perspectives

common to teachers who were considered more effective than their peers: (a)

teachers who looked for and expected mastery and success from their students, (b)

teachers who did whatever it took to ensure that all students learned, (c) teachers who had

realistic expectations for their students, and (d) teachers who reported all students could

learn. According to Glickman et al., these discoveries revealed that teachers who had a

perspective of positive student performance supported by teacher expectations were more

effective. In other words, and according to Lunenburg and Ornstein (2008), teacher

expectations are self-fulfilling prophecies.

Government and Legislative Influence

Most educational policy is a result of state decisions and federal influences

(Black, 2015). State decisions are made by government bodies and through elected

officials, such as state legislators, governors, boards of education as well as local school

boards and community stakeholders (Ornstein & Levine, 2006). Federal government

influences occur through two primary channels which influence education at all levels:

(a) money, in the form of federal education funding that is tied to federal education

mandates, and (b) federal laws primarily focused on ensuring equity in education

(Ornstein & Levine, 2006).


The National Research Council (2002) identified three core channels used by state

and federal governments to influence teaching and learning. These channels of influence

include (a) curriculum, (b) teacher development, and (c) assessment and accountability.

These channels are part of a complex framework of interconnecting influences and

agendas, creating a dynamic theater in which educational policy is shaped and governed.

Moreover, many actors in this theater are of a political nature or have political ties.

Consequently, it is very difficult to reach an understanding of what factors within this

educational theater are associated with notable correlations (National Research Council,

2002). Hence, specific to the study of teacher assessment, there is great difficulty

associated with understanding the choice and development of specific assessment rubrics,

the effect of teacher assessments, and the perceptions of teacher assessment components.

Education is not specifically addressed in the U.S. Constitution. Consequently, as

stated in U.S. Const. amend. X, “The powers not delegated to the United States by

the Constitution, nor prohibited by it to the States, are reserved to the States respectively,

or to the people.” Hence, individual states have the power to regulate education

autonomously as long as no violations of the U.S. Constitution or federal laws exist

(Headen, 2014). As a result, state authorities have been active in education since the

beginning of America as a nation (Coulson, 1999). State roles in education can be

categorized into four eras: (a) Permissive Era of 1642 to 1821, (b) Encouraging Era of

1826 to 1851, (c) Compulsory Era of 1855 to 1980, and (d) School Choice Era of 1980 to

present (Coulson, 1999). The Permissive Era saw the first efforts to enact education laws and set minimum standards (Roberson, 2017). The Encouraging Era saw suggestions

for the formation of school districts and options for funding education (Roberson, 2017).


The Compulsory Era saw the creation of the United States Department of Education and

requirements to establish school districts, curricula, and compulsory attendance laws (Roberson, 2017). The School Choice Era opened up more options for students, such as homeschooling,

vouchers, tuition tax credits, and charter schools (Coulson, 1999).

Federal authority, regarding educational policy, had not been used much at the

national level until the Johnson administration’s “Great Society” movement (McGuinn &

Hess, 2005). According to McGuinn and Hess (2005), during this time, major federal

spending programs were created with a goal of eliminating poverty and racial injustice,

and education was one of several areas targeted. Until this time, the federal government

had not provided significant federal aid to public education (McGuinn & Hess, 2005).

McGuinn and Hess pointed out this was the beginning of a long series of U.S.

government actions toward influencing education at a national level.

Over the past 10 to 15 years, the U.S. government has had a strong influence on

public school teacher evaluation systems (Smylie, 2014). Part of this influence has been

associated with states compelled to comply with federal legislation in order to secure

federal tax dollars for their school districts. An example of such legislation is Race to the Top, which

mandated new teacher evaluation systems (Hershberg & Robertson-Kraft, 2010).

Over the last 40 years, the U.S. public school system has been placed under a microscope

of ever-increasing scrutiny. Moreover, the American public educational system was on a

slippery slope downward from its former top position in the world’s ranking system of

education. The United States, once leading the world in academics, was now found to be

lacking. In 2012, a Huffington Post article ranked the United States 17th in comparison to


other countries in the world (“Best education in the world,” 2012). As the United States

continued to fall in status in the world’s ranking system of education, a sense of urgency

emerged (Lingenfelter, 2007). This new-found sense of urgency led to the U.S.

government’s involvement in teacher assessment. Some examples of U.S. government

involvement through federal reports and legislation follow: (a) A Nation at Risk, (b) No Child Left

Behind, (c) American Recovery and Reinvestment Act, (d) Race to the Top and (e) Every

Student Succeeds Act.

A Nation at Risk Report

During the Reagan administration, the then Secretary of Education, Terrel H. Bell, called for a commission to evaluate the U.S. educational system. In response, the National Commission on Excellence in Education (NCEE) was formed. The first charge of the commission was to consider the quality of instruction in the nation’s public and private schools, colleges, and universities. In 1983, the commission’s findings were published in a report entitled A Nation at Risk (Lingenfelter,

2007).

According to Lingenfelter (2007), the report caused the nation to become alarmed

for the future of the country’s educational system. The report concluded that teacher

quality was a critical element to student understanding, and identified recommendations

for addressing the problems discovered by the NCEE. Under the heading, A Nation at

Risk, Recommendation D, there are seven strands, of which two address teacher

assessment. The NCEE (1983) recommendation was written as:

Salaries for the teaching profession should be increased and should be

professionally competitive, market-sensitive, and performance-based. Salary,


promotion, tenure, and retention decisions should be tied to an effective

evaluation system that includes peer review so that superior teachers can be

rewarded, average ones encouraged, and poor ones either improved or terminated.

(p. 38)

Later, President George W. Bush was also influenced by the A Nation at Risk report. His administration and a bipartisan Congress enacted the

reauthorization of the Elementary and Secondary Education Act (ESEA), commonly

known as No Child Left Behind (2003).

No Child Left Behind

In 2003, the United States Department of Education (2003) issued a report

regarding The No Child Left Behind Act (NCLB). The report made strong points

regarding Congress having passed NCLB with an overwhelming majority of votes and

that Congress had renewed the ESEA at the same time. Moreover, the United States

Department of Education (2003) reported that:

In amending ESEA, the new law represents a sweeping overhaul of federal efforts

to support elementary and secondary education in the United States. The new law

is built on four common-sense pillars: accountability for results, an emphasis on

doing what works based on scientific research, expanded parental options and

expanded local control and flexibility (p. 3).

The United States Department of Education (2003) NCLB report detailed several

themes regarding teacher accountability, which included the following: (a) the report

identified elements that were to be considered anytime a school was in need of

improvement, (b) the report recognized problems in teaching and curriculum, (c) the


report identified available student testing data that could be used for planning

improvement strategies, (d) the report discussed teacher accountability, and (e) the report

defined high quality educators as teachers with a bachelor’s degree in the subject area

taught, certification and license according to state regulations, and competency

demonstrated as designed by the state (United States Department of Education, 2003).

The NCLB of 2001 was a law designed to guarantee government involvement in

K-12 public schools (NCLB, 2002). This law included descriptions and criteria for

quality schools, quality education, and quality teachers. The NCLB Act promised

parental involvement and parental notification (United States Department of Education,

2003). For example, per NCLB, official notification to parents or caregivers would be

provided if there were any discrepancies in a teacher’s qualifications. NCLB had a

deadline for all students to be at grade level and proficient by 2014 (United States

Department of Education, 2003).

Like nearly half of all U.S. states, Mississippi submitted a waiver for the NCLB

Act, according to Gilbertson (2012), due to Mississippi’s inability to meet the deadline.

As Mississippi worked toward meeting NCLB Act mandates, it developed a statewide

teacher appraisal rubric called M-STAR. The rubric was based on Danielson’s

framework. Mississippi would use the rubric in combination with measurements of

student growth as the basis of its statewide teacher evaluation system.

Highly Qualified Teachers Versus Effective Teachers

The United States Department of Education (2003) identified a requirement in

NCLB that there be a highly qualified teacher (HQT) in every classroom. An HQT was

defined as an individual with a degree in the subject taught, a state teaching license, and a


passing score on a test of content knowledge (United States Department of Education,

2003). Knowing the definition of an HQT naturally leads to the question of whether a

“highly qualified teacher” and a “quality teacher” equate to the same definition (Smith,

Desimone, & Ueno, 2005). The answer is yes: according to a study conducted by Mohan, Lundeberg, and Reffitt (2008), a quality teacher is an effective teacher. Moreover, these researchers went on to say that an effective teacher is not necessarily one with specific degrees, licenses, or test scores.

King and Watson (2010) described an effective teacher as one who recognizes

that their students’ success is dependent on their teaching ability and the transfer of

knowledge that happens within their classroom. Moreover, an effective teacher is one

who does not blame factors outside of the classroom for students not learning their

lessons; even when these factors included uninvolved parents, poverty, underrepresented

groups, or illness (King & Watson, 2010).

According to King and Watson (2010), effective teachers were individuals who

had come to the realization that they themselves were the key factor in their students’

success. Therefore, an effective teacher bases student success on their effectiveness and

not on the student’s abilities (Colburn, 2000; King & Watson, 2010). King and Watson

(2010) further explained that effective teachers are committed to their students and

determined to see each student experience an academic growth increase of one year,

minimum. In order for these effective teachers to determine student academic growth,

assessments were administered at the beginning and end of the school year. The

assessments were then analyzed to determine the range of student growth experienced

during the school year.


King and Watson’s (2010) findings on effective teachers are consistent with a study conducted by Glickman et al. (1998), which found a

correlation between teacher’s perspectives and teacher’s attitudes about student success.

In describing what the classroom environment of an effective teacher looks like,

Glickman and colleagues reported that “a warm, supportive environment is fostered by

the teacher who provides specific praise when deserved, respects students’ contributions,

exhibits confidence and enthusiasm, engages in positive interactions with students, and

maintains an orderly classroom with limits on social conversations” (p. 91).

American Recovery and Reinvestment Act

President Barack Obama signed into law the American Recovery and

Reinvestment Act (ARRA) of 2009, which identified the U.S. public educational system

as one needing improvement (McGuinn, 2011; Viteritti, 2012). In addition, the ARRA

(2009) identified specific suggestions on how to achieve improvements to the U.S.

educational system and communicated how these improvements would benefit the nation

as a whole.

In section 14006 of the ARRA (2009), provisions were made to establish a

competition among states for an unprecedented pool of federal funds totaling $500 million. Unlike in prior years, states would be required to compete for the

funds in lieu of receiving them. In addition to providing opportunities to compete for

federal ARRA funding, the written act included a directive to improve student

achievement through school improvement (ARRA, 2009). Built into the student

achievement directive was a mandate to improve teacher quality. Also mandated was that

the placement of quality teachers would be equally distributed throughout individual


schools and school districts. Moreover, it was encouraged that schools with at least 40%

of its students receiving free or reduced lunch should receive a greater share of the

district’s quality teachers (United States Department of Education, 2009). ARRA became

the driving force behind the formation of the Race to the Top (RTT) federal funding

program.

Race to the Top was an incentive for states to make improvements to their public school systems. Race to the Top established an application process which awarded points based on the extent to which RTT criteria were incorporated into state schools (United States Department of

Education, 2009). RTT application criteria included teacher and principal assessments.

According to the United States Department of Education (2009) RTT’s selection criteria

D2, “Improving teacher and principal effectiveness based on performance” (p. 9), the

federal government was establishing the criteria for teacher and principal assessments.

Common Core State Standards Initiative

In addition to the educational changes and improvements made through the

mandates of RTT (United States Department of Education, 2009), there was an added

supplement to aid states transitioning to Common Core State Standards (CCSS). RTT

was not the only help regarding CCSS; the Partnership for Assessment of Readiness for

College and Careers consortium (United States Department of Education, 2010) was also

committed to helping states that were developing and implementing CCSS. Full

implementation of CCSS in participating states was scheduled for the 2014-2015 school

year (United States Department of Education, 2010); Mississippi was a participating

state.


Every Student Succeeds Act

On December 10, 2015, President Barack Obama signed into law The Every

Student Succeeds Act (ESSA). This Act provided for a relaxation of many NCLB

mandates and gave states more control of implementing federal objectives found in ESSA

(ESSA, 2015). As a result, teacher assessments could now be established by each state

without federal government involvement (Klein, 2015). As part of the new act, states

would be permitted to defer control over teacher and principal assessments to local public

schools, namely the district school boards (Ganda & Baxter, 2016).

The ESSA implication for Mississippi regarding teacher assessment is significant.

As of 2016, the state could use federal funds designated for professional development and

Teacher and School Leader Incentive Fund grants to fund teacher assessment programs of

its own design (Partners for Each and Every Child, 2016). These teacher assessment

programs, however, would need to be based on student achievement, growth, several

measures of performance, and teacher professional development. According to Weiss et

al. (2016), ESSA has a focus on keeping stakeholders engaged and actively involved in a

school’s policy making process.

The MDE established an ESSA planning team in order to revise the state’s

teacher evaluation system based on the new assessment environment created by ESSA

(T. Hart, personal communication, July 12, 2016). Teacher input and perspectives on the

state’s new teacher assessment system were considered important to the process of

developing the state’s new teacher evaluation system. This importance was demonstrated

when MDE established the ESSA planning team composed of nearly all certified K-12


teachers. This team was part of a larger steering committee which systematically

reviewed and recommended improvements to M-STAR and Mississippi Principal

Evaluation System (MPES).

Mississippi’s ESSA team seized an opportunity in ESSA’s flexibility to

completely review its teacher and administrator assessments (MDE, 2016). The team

concluded that the M-STAR teacher assessment rubric would be replaced by the Teacher

Growth Rubric (MDE, 2016). A high-level comparison between the new and old rubrics

is as follows: M-STAR – Domains: 5, Standards: 20; and Teacher Growth Rubric –

Domains: 4, Standards: 9 (MDE, 2016).

M-STAR, as an evaluation system, has been replaced by the Mississippi Educator

and Administrator Professional Growth System (MDE, 2016). According to the MDE,

“The Mississippi Educator and Administrator Professional Growth System is designed to

improve student achievement by providing teachers and administrators with clear,

specific, actionable, and timely feedback to inform continuous improvement” (MDE,

2016). The MDE (2016) made this change for the 2016-2017 school year.

Regional and State Influences

Regional philosophies and levels of educational control are rooted in Colonial

America (Ornstein & Levine, 2006). Colonial America consisted of 13 colonies split

among three regions along the Atlantic seaboard (Ornstein & Levine, 2006). Each of these regions had its own view of education and in turn established regional

educational philosophies (Ornstein & Levine, 2006). New England (or Northern

Colonies) was the area around Massachusetts, and was primarily inhabited by Puritan

separatists. The Puritan separatists valued education and made an effort to provide it for


their children. The Middle Colonies were the area between Massachusetts and Virginia, which

was primarily inhabited by Dutch, French, Germans, and several other nationalities. The

Quakers who settled in the Philadelphia area considered it important to educate the

populace. They had a strong influence on the development of education in the region and

established the first public school. The Southern Colonies were the area that stretched south

from Virginia to Florida. The South, overwhelmingly rural, had few schools of any sort

until the Revolutionary era. Education was primarily available only to the affluent (Ornstein & Levine, 2006).

Most decisions regarding public education are made by government bodies at the

state level, followed by local community stakeholders (Kaestle & Smith, 1982). State-

level influence on education is typically deeply rooted in the state’s constitution and

government policymaker choices and agendas (Ornstein & Levine, 2007). In the case of

Mississippi, influences are invoked by way of appointments to the State Board of

Education and by state educational laws.

According to the Mississippi Constitution art. 8, § 202, the State Superintendent

of Education is appointed by the State Board of Education. In accordance with the Mississippi Constitution art. 8, § 203, State Board of Education members are appointed by holders of state political office. Of the nine seats on the board, five are

appointed by the Governor, two by the Lieutenant Governor and two by the Speaker of

the House (National Association of State Board of Education, 2017). Moreover, the

Mississippi state constitution sets boundaries in educational power by way of its

constitutional language (MS Const. art. 8, § 201, § 202, § 203). Some examples are as

follows: (a) in the case of free public schools, “…upon such conditions and limitations as


the Legislature may prescribe…”, (b) in the case of a state superintendent, “…appointed

by the State Board of Education, with the advice and consent of the Senate…”, and (c) in

the case of State Board of Education, “…formulate policies according to law…” (MS

Const. art. 8, § 201, § 202, § 203).

Purpose of Teacher Assessment

The key purpose of teacher assessment is to enhance teaching and improve

student learning (Adams et al., 2015). Furthermore, the purpose of teacher assessment is

to manage educational teaching elements associated with the performance of teachers. In

order to manage these elements, one must first be able to measure, categorize, and rank

the elements as they relate to individual teachers; to this end, teacher assessment can take

on different designs and types.

Teacher assessment today is typically based on a framework model. Many

popular framework models exist for teacher evaluation. A few commonly known and

national-level framework models, along with their associated software tool names, are:

(a) Danielson’s Framework for Teaching using Teachscape (Danielson & McGreal,

2000), (b) Marzano causal teacher evaluation model using iObservation (Marzano &

Toth, 2013), (c) Mid-continent Research for Education and Learning using McREL

(Snow-Renner & Lauer, 2005), and (d) Stronge’s teacher effectiveness performance

evaluation system using OASYS (Szczepaniak, 2010). Each of these models has

significant financial expenses associated with its implementation and annual usage

(Freehold Borough School District, n.d.). Some states use these models, either

exclusively, in-part, or as the basis for their own state developed teacher assessment

models.


Charlotte Danielson, author of the Framework for Teaching model, is an educational consultant based in Princeton, New Jersey. She is both

nationally and internationally known for her work with teacher assessment (Danielson,

2011b). She initially developed her framework, a constructivist model, in 1996. Since

that time, it has undergone three revisions (2007, 2011, and 2013) and is now considered

a seminal work on teacher supervision and evaluation. Danielson’s Framework originates

from her work with the Educational Testing Service’s Praxis III Classroom Performance

Assessments certification. The Danielson Framework for Teaching has gained a large

following from the educational community, policymakers, and state departments of

education.

The design of teacher assessments serves two purposes: (a) to measure teacher

competence, and (b) to foster professional development or growth. There are two types of

teacher assessments: formative and summative (Chukwubikem, 2012; Danielson &

McGreal, 2000; Marzano, 2012). Formative assessments provide teachers with

constructive criticism (Chukwubikem, 2012; Danielson & McGreal, 2000). Summative

assessments, on the other hand, provide teachers with less constructive criticism and are

primarily used by school administrators to make determinations regarding the teacher’s

employment status (Chukwubikem, 2012).

Formative Assessments

Formative assessments are based on teacher growth and professional development

and are focused on teacher skill and growth (Danielson & McGreal, 2000). Often,

teachers prefer formative assessments because they identify their individual professional

strengths and weaknesses while providing feedback information that can be applied


toward professional growth (Danielson & McGreal, 2000; Weisberg et al., 2009).

Likewise, in a study by Weisberg et al. (2009), teachers found evaluator observations to

be beneficial to their professional growth because the assessments identified their

strengths and weaknesses in their classroom teaching.

Summative Assessments

Summative assessments are based on teacher employment decisions and are

focused on assisting school administrators in identifying teachers who have not met

established standards (Marzano, 2012). According to Danielson and McGreal (2000),

government leaders and policymakers favor summative assessments. Likewise, school

district administrators prefer summative assessments because of public education’s

dependency on government funding (Danielson & McGreal, 2000). The public, as tax

payers, has an interest in teacher quality at their public schools and a right to be informed

regarding the same (Mathers & Olivia, 2008).

Mielke and Frontier (2012) wrote that originally, the basis for establishing

summative teacher assessments was to create a teacher guide for professional

improvements using comprehensive teaching frameworks. Marzano (2010) and

Chukwubikem (2012) both conducted summative evaluation research, and their work concluded with divergent findings. Marzano (2010) conveyed that assessment had two different objectives: the first is to measure teacher competence, while the second is to assess teacher growth. Chukwubikem (2012) conveyed that “measuring teacher competence and

simultaneously encouraging development growth is the purpose of effective evaluation of

teachers” (p. 556).


Rationale for Formative and Summative Assessments

Danielson (2011a) questioned the necessity for teacher assessments. Danielson

answered her own question by noting that existing

educational laws require teacher assessments for the purpose of public accountability.

Danielson and McGreal (2000) explained that stewardship of tax dollars to pay teachers’

salaries was one purpose of accountability. They supported the notion that teacher

assessments were the best method for ensuring that public tax dollars were being

expended judiciously. According to Danielson and McGreal (2000), public education had

two new purposes for teacher assessment: (a) to guarantee the quality level of teachers,

and (b) to encourage teacher professional development. In

2012, Marzano conducted a study of 3,000 teacher participants, and found that while

teachers recognized that teacher assessments were important, these same participants

reported that teacher professional development was the most important component of the

teacher evaluation process.

Marzano (2010) noted in his research that some systems of teacher assessment

have failed to calculate the quality of teachers accurately because they failed to

distinguish between effective and ineffective teachers. During a yearlong investigation of

teacher’s perspectives on teacher assessments, seventy-six percent of the 3,000 teachers

surveyed reported that teacher assessments were for the measurement of the teachers’

abilities and for identifying individual areas needing improvement, with improvement

being the prevailing purpose (Marzano, 2010).

Chukwubikem (2012) noted in his research that teacher assessment hinges on the

teacher’s self-reflection and that its main purpose is the improvement of the educational


organization. Consequently, Chukwubikem (2012) considered that it was important to

develop a process for teacher self-reflection and that the process involve colleagues who

could help teachers view themselves as learners. In an expansion of self-reflection,

Sandholtz (2011) conducted a qualitative study with pre-service teachers that was

focused on self-reflection associated with student teaching. Sandholtz (2011) found that

the practice of self-reflection increased the student teacher’s abilities as an educator. The

practice of student teacher self-reflection provided insight and direction regarding

methods to improve their individual teaching skills (Sandholtz, 2011).

Watson (2010) found that self-reflection improved teaching practices. Moreover, King

and Watson (2010) recognized that there were correlations between improved teaching

practices and student success in the classroom.

Bambrick-Santoyo (2012) suggested that teacher improvement took place when teachers knew the specific steps to take for improvement. Teachers developed and experienced success when they persistently applied the specific steps for

improvement (Bambrick-Santoyo, 2012). Chukwubikem (2012) described teacher

improvement as “a process of continuous improvement gained through experience,

targeted professional development and the insights and direction provided through

thoughtful, objective feedback about the teachers’ effectiveness” (p. 561).

Teacher Evaluation and Effective Teaching

Teacher evaluation and effective teaching are identified as two separate entities;

teacher evaluation is an assessment of teaching abilities (Chukwubikem, 2012; Danielson


& McGreal, 2000; Marzano, 2010), whereas effective teaching is the outcome of

instruction and outgrowth of teacher evaluation (Goe et al., 2008; Rockoff & Speroni,

2011).

Well-respected researchers such as Michael Pressley have campaigned for attention to

effective teaching that ensures all students have equal opportunity to similar educational

experiences (Mohan et al., 2008). Moreover, schools are required to develop policies,

with importance placed on teacher effectiveness, in order to qualify for government

educational funds (Rockoff & Speroni, 2011).

Teaching effectiveness is defined as the outcome of teaching or the impact on

student growth (Danielson, 2007; Goe et al., 2008). The definition of effective teaching is

similar to that of teaching effectiveness. The differences between the two are that (a) the

outcome for effective teaching is student growth (Creemers & Kyriakides, 2013; Layne,

2012), and (b) the outcome for teaching effectiveness is teacher growth (OECD, 2011).

King and Watson (2010) had observed highly effective teachers for over 30 years.

During that time, they observed that students who needed highly effective teachers the

most were generally not the students who received exposure to highly effective teachers.

Danielson (2012a) reported that educators must consider guideline changes, from

national and state departments of education, when defining criteria for good teaching.

The Bill and Melinda Gates Foundation (2013) funded a project that researched Measures

of Effective Teaching (MET). The MET project defined effective teaching as, “[a]

sensitivity to students’ academic and social needs; [b] knowledge of subject-matter

content and pedagogy; and [c] the ability to put that knowledge into practice, all in the

service of student success” (Bill & Melinda Gates Foundation, 2013, p. 3). In addition to


the MET project, other researchers (King & Watson, 2010; Mohan et al., 2008; Sanden,

2012) observed additional attributes of effective teachers, such as excellent classroom management, development of independent learners, differentiated instruction, and timely

student feedback.

During the 1980s, Madeline Hunter developed a model of teaching practices that

are still in practice today (Danielson & McGreal, 2000). Hunter’s teaching practices

required that the teacher first model the educational lesson prior to presenting it in the

classroom. After presenting the lesson, students would then work in groups until each

was able to manage independent practice (Danielson & McGreal, 2000). The same

teaching practices described in Hunter’s model have been validated by Mohan et al.

(2008), who communicated that the effective teachers they observed followed the same

teaching practices as prescribed by Hunter. Mohan et al. (2008) supposed that the

teaching practices modeled by Hunter demonstrated effective transfer of information to

students through instruction.

Charlotte Danielson (2007), through her many years of research, identified five

areas that effective teachers should incorporate into their classroom instruction, which are

as follows: (a) communicating clearly with students, (b) using questioning and discussion

techniques, (c) engaging students in learning, (d) evaluating students during instruction,

and (e) being flexible and responsive.

According to Glickman et al. (1998), a single instructional method cannot be

identified that addresses effective instruction for all classroom circumstances, students, or

disciplines. Glickman et al. reported that an effective teacher was one who was able to

match the best teaching strategy for the given educational objective. Using correct


teaching strategies is essential in lesson planning and instruction (Glickman et al., 1998).

Thus, according to research conducted by Glickman et al., effective teachers match the

best teaching strategy to the given classroom dynamics and student characteristics.

The development of effective teachers was found to be contingent upon individual

practice, which is paramount to the teacher’s skill-building habits (Bambrick-Santoyo, 2012).

The Bill and Melinda Gates Foundation (2013) validated this finding from Bambrick-

Santoyo (2012) by stating that it is difficult to defend effective teaching without knowing

the facts regarding the practices of the teacher. Teacher growth occurs with (a) practice,

(b) directed professional development, and (c) assessment feedback, each of which is

constantly applied to the teacher’s daily skills (Chukwubikem, 2012). Bambrick-Santoyo

(2012) observed that true personal change occurred when teachers directly addressed

their individual skills that were lacking, regardless of assessment feedback or evaluation

results.

Effective teaching is more than just teaching the content; students must also

demonstrate learning (Sandholtz, 2011). Danielson and McGreal (2000) supported this

finding through research that recognized the dynamic interaction between the (a) subject

matter, (b) learner, (c) teacher, and (d) learning outcomes. Each element is required for

effective teaching instruction. Sandholtz (2011) studied lesson planning among student

teachers and discovered that student teachers incorrectly reported that designing

classroom lessons which addressed the content was the most important factor in lesson

planning.

King and Watson (2010) observed that individuals outside the public-school

setting have the ability to recognize effective teaching. Moreover, these outsiders,


including student parents, caregivers, and the students themselves, could function as

stakeholders in the process to ensure that classrooms were being led by effective teachers

(King & Watson, 2010).

In summary, Danielson and McGreal’s (2000) discussion of good teaching captures the evolution of effective teaching over the years. They wrote that “the definition of good teaching has gradually changed, from approaches that

specified actual teaching behaviors to those that relied more heavily on methods that

promote student conceptual understanding” (p. 33). The concepts promoted by

Danielson and McGreal’s (2000) work in conjunction with NCLB mandates regarding

HQTs were significant historical influences on the U.S. public school system.

Danielson’s Framework Domains

A discussion of teacher evaluation ultimately leads to Danielson’s Framework for Teaching. Danielson’s evaluation framework has been revised over the years, with the most recent

publication being in 2013. Danielson (2013) developed a teacher evaluation system with

22 components, which are divided into four domains. Each domain is structured to

evaluate a specific part of a teacher’s profession. The four domains are (a) Planning, (b)

Classroom Environment, (c) Instruction, and (d) Professional Responsibilities. A brief

description of each domain is listed below.

Domain 1: Planning is an unobservable domain. First, the teacher reveals a knowledge of the content and pedagogy of the material. Teachers differentiate instruction according to their knowledge of the students. Teachers plan the outcomes of the

instruction. An awareness of the type of resources available for the planned instruction is


noted. Lessons are developed according to the age appropriateness of the students.

Finally, assessments are prepared to organize data on the individual student (Danielson,

2013).

Domain 2: Classroom Environment is an observable domain. The teacher creates

an environment of respect and camaraderie. A culture of learning is established.

Procedures are created and managed. Student behaviors are handled with respect and

solutions. The physical space in the classroom is orderly (Danielson, 2013).

Domain 3: Instruction is an observable domain. Teachers connect with students

through two-way communication. The instructor uses open-ended questions and discussion

methods to help students understand lessons. Students are involved in learning because

the teacher knows how to draw the students into the lesson. Assessment is used by teachers to monitor students’ understanding. Teachers display flexibility and provide feedback to

students while teaching (Danielson, 2013).

Domain 4: Professional Responsibilities is an unobservable domain.

Professionally, teachers reflect on their teaching. Accurate records are maintained and

filed. Communication with families is ongoing. Teachers participate in their professional

group while growing and developing as professionals. Displaying professionalism is

the continuing responsibility of a teacher (Danielson, 2013).

Danielson’s (1996) research developed components for observing and evaluating

the qualities of a teacher. Sanden (2012) recognized that the practices of very effective teachers guided students to success in academics. The MET Project (2010) noted that the qualities of effective teachers could predict success in students. Many components of the

teaching profession led to the domains that Danielson identified.


Components in Assessing Teacher Performance

The Gates Foundation (2013) conducted research which identified and described

teaching elements for improving teacher skills. These teaching elements are: (a)

calculating effective teaching, (b) guaranteeing accurate data, and (c) devoting effort to

improvement. Oliva, Mathers, and Laine (2009) suggested that school principals

continually assess their teacher evaluation systems so that adjustments could be

incorporated in order to continually improve the evaluation system and consequently

improve teacher quality. Oliva et al. (2009) further reported that principals played a

major role in changing teacher evaluation systems in a manner that could facilitate

increased teacher effectiveness and student learning. Oliva et al. related that regular,

consistent feedback through classroom observation was a potential tool that could have

a major positive impact on both novice and veteran teachers. Toch (2008) questioned how

public education could hope to improve teacher value without having a reliable way to

assess teacher worth.

According to Darling-Hammond, Amrein-Beardsley, Haertel, and Rothstein

(2012), teacher effectiveness should be considered when designing teacher assessments.

Danielson and McGreal (2000) reported that there were two conclusions coming from

teachers and administrators regarding teacher assessments, and that these had been

successfully tested against more than three decades of educational literature. The first

conclusion was that there was a consensus that teacher evaluations were valuable and

needed (Danielson & McGreal, 2000). The second conclusion was that there was a

consensus that evaluation systems, based on formative assessments, were more

productive (Danielson & McGreal, 2000). Chukwubikem (2012) noted that an effective


assessment of teachers involved an evaluation system that determined teacher

competence while boosting teacher developmental growth.

Marzano (2012) stated that the purpose of a teacher assessment should be

determined before the assessment begins. He concluded that there were two purposes for

teacher assessments: (a) evaluations for measurement, and (b) evaluations for

improvement. Marzano was careful to draw attention to the fact that these two purposes

were very different from each other. In the 1980s, it was common for teacher classroom

assessments to be composed of simple rating scales and checklists. These assessments

were one-dimensional and positioned as a summative evaluation. The conclusion of these

assessments identified teacher flaws and did not offer any help in improving teacher

skills (Danielson & McGreal, 2000).

Gordon, Kane, and Staiger (2006) suggested that funding for teacher assessment

must originate at the federal level and that each state should be given the autonomy to

determine its own teacher evaluation system design. Additional suggestions by Gordon et

al. (2006) included these three requests: (a) that teacher assessments have no ties to teacher licensure, classes, or degrees earned; (b) that a large portion of the assessment score include student outcomes, such as student test scores on normed tests; and (c) that assessment scores be based on an accumulation of several years rather than a single year. Some

researchers, such as Colby, Bradshaw, and Joyner (2002), went a step further in giving

states autonomy in teacher assessments. The authors suggested creating teacher

assessments locally at the school district level. According to Colby et al. (2002), the

added benefits of educational improvement, teacher growth, and pupil learning were

more powerful when the evaluation was created at the local level.


Chukwubikem (2012) stated that there are four elements to an effective teacher

assessment: (a) consistent feedback to teachers for professional growth, (b) regular

reporting to school principals with information for building valuable educational systems,

(c) recognizing needs for professional growth, and (d) recognizing specific learning

expectations. Kimball (2002) thought it imperative to note that the frameworks for

teaching excluded the identifiers of subject or grade. Kimball wrote that “standards-based teacher evaluation has at its core a vision of teaching

elaborated with broad domains of practice, comprehensive standards, and detailed criteria

through rubrics” (p. 241).

To better identify effective teachers and improve teacher assessment credibility,

public school districts began searching for other teacher assessment approaches that went

beyond the ones centered on student test scores (Gordon et al., 2006). Alicias (2005)

deemed that the purpose for teacher assessment was to (a) measure the teacher’s

attributes as an educator and (b) measure student growth because it was the most crucial

measurement of the teacher evaluation process. Danielson (2011b) considered that a good

teacher evaluation system should contain: (a) a trustworthy description of valuable

instruction, (b) a mutual perception of this description, and (c) trained evaluators.

Chukwubikem (2012) described effectual teacher assessments as those that helped

teachers develop their teaching skills and as a result, helped students increase their

learning. As reported by Smith et al. (2005), teacher certification is not always indicative

of an effective teacher. An effective teacher is not someone with certification only, but

instead someone who can skillfully demonstrate: (a) planning, (b) educational

environment, (c) instruction, (d) professional responsibilities, and (e) student growth


(Smith et al., 2005). Evidence of a teacher certification only demonstrates one’s ability to

successfully complete an educator’s program in academics.

In the education community, there has been growing concern and awareness

regarding the legal aspects of teacher evaluation processes and systems (Baker, Oluwole,

& Green, 2013). Baker et al. (2013) noted one school of thought entailed using existing

teacher assessment framework models associated with significant research and having

histories of success. Another school of thought pertained to self-developing a teacher

assessment framework model or significantly altering an existing one to suit local philosophy (Baker et al., 2013). However, Baker et al. pointed out that the latter school of thought carried with it legal and organizational dangers. According to Stronge (2006), “teacher evaluation systems must be legally defensible, useful, feasible, and accurate measures of teacher performance” (p. 12).

Planning

According to Danielson (2007), teacher planning should consist of four

descriptors: (a) lessons designed with content knowledge and pedagogy of material, (b)

identification of students’ individual academic traits, (c) understanding of resource needs,

and (d) development of comprehensible lessons. As a foundation to good planning,

teachers should have adequate knowledge of the content they are responsible for

teaching. In order to teach knowledge, teachers should be able to understand each

student’s thinking processes in addition to understanding their academic traits (Graff,

2011). Graff (2011) stated that in order for teachers to be successful in the classroom,

they are required to have subject area content knowledge and an extensive understanding

of the what, how, and when of the material. Tanni (2012) went on to state that in order


for a teacher to understand the subject and present an effective lesson, the assimilation of

information from many resources is required in combination with prior knowledge.

Planning an effective lesson is only one component of effective teaching. Lesson

planning is an important skill that effective teachers must master (Danielson & McGreal,

2000).

Not only are teachers expected to skillfully plan lessons, but they are also

expected to assess student learning and mastery as well (Desimone, 2011). This means

that an effective teacher includes student outcomes when constructing the lesson plan

(Skowron, 2001). Rock and Wilson (2005) stated that in order to construct a lesson plan that builds student success, it is essential for the teacher to understand the student’s

background and content knowledge. This building or construction of an effective lesson

plan is consistent with the constructivist theory of building on the current knowledge

(Rock & Wilson, 2005). Danielson and McGreal (2000) further noted that teachers must

also have resources readily available for lesson planning. Teachers must ultimately

possess knowledge of how to help students discover and access resources. Planning a

lesson includes the teacher’s development of an assessment tool that can assess student

learning (Danielson & McGreal, 2000).

According to Williams, Evans, and King (2012), teachers need to know how to

plan instruction that is differentiated. Also, teachers need to know how to include student assessment and participation in the lesson planning process. The lesson planning process must also account for any students with learning disabilities who are in the classroom. With this in mind, Williams et al. (2012) conducted a study to measure the

impact that instruction from Universal Design Learning (UDL) had on pre-service


teachers. According to Spooner, Baker, Harris, Delzell, and Browder (2007), “UDL uses

flexible instructional materials and methods to accommodate a variety of learning

differences” (p. 109).

Williams et al.’s (2012) study group participants consisted of 15 pre-service

teachers. Each was given a pretest to determine their knowledge of the UDL. Afterwards,

the participants were provided instruction regarding UDL using textbook, video, and

illustration. Then, the participants were given a case study which included an imaginary

student, and were tasked with designing a class lesson plan. Upon completion, each

participant presented their lesson to their fellow participants as a group. Afterwards, the

participants critiqued each other. At the end of the exercise, a post-test was given to each

participant. When the pre- and post-test scores were analyzed with a t-test, significant differences were observed. The pre-service teacher participants had rated

themselves significantly higher on the post-test for skills and knowledge of the UDL

planning format (Williams et al., 2012).
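Because the comparison Williams et al. (2012) describe is between each participant’s own pre- and post-test scores, the appropriate analysis is a dependent-samples (paired) t-test. The brief sketch below is illustrative only, assuming hypothetical self-ratings for 15 participants rather than Williams et al.'s actual data.

```python
# Illustrative paired t-test for a pre/post design like the one described above.
# The ratings below are hypothetical; they are not data from Williams et al. (2012).
from scipy import stats

pre = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2, 3, 2, 2, 1, 3]   # self-ratings before UDL instruction
post = [4, 4, 3, 3, 5, 4, 3, 4, 3, 4, 5, 4, 3, 3, 4]  # self-ratings after UDL instruction

# Each participant serves as his or her own control, so a dependent-samples test is used.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < .05 and t_stat > 0:
    print("Post-test ratings are significantly higher than pre-test ratings.")
```

With real data, the degrees of freedom are simply the number of participants minus one (here, 14), and the direction of the difference is read from the sign of the t statistic.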

Student Assessment

School-Based Assessment (SBA) is the first of two types of student assessment

addressed in this section. According to Opara, Onyekuru, and Njoku (2015), SBA

consists of teacher-created, ongoing assessment tools developed to identify the

learner’s strengths and weaknesses. When correctly developed, SBA enhances student

learning; however, when incorrectly developed, SBA impedes student learning. Opara et

al. further stated assessment is the method of recognizing, collecting, and understanding

data about student learning. Moreover, the aim of student assessment is reporting

accomplishment and advancement for continued instruction and understanding.


Opara et al. utilized SBA to predict student success on the Junior Secondary Certificate

Examination. As a result of their study, Opara et al. found that high score outcomes on the Junior Secondary Certificate Examination could be successfully predicted based on

whether teachers had given their students three previous SBA exams. The study

concluded that student SBA scores had an effect on student success.

Portfolio-Based Assessment (PBA) is the second and final type of student

assessment addressed in this section. PBA is an evidence-based, historical collection of a

student’s work which exists to demonstrate a student’s learning (Bırgın & Bakı, 2007).

The portfolio, as an assessment, is based on the constructivist theory; it builds on prior

student knowledge from previous experiences and builds on an application of new

student learning. Teachers guide students in the process of collecting data for their

portfolio by providing rubrics for each student assignment. Bırgın and Bakı (2007)

pointed out that student portfolios are performance-based assessments. Students are allowed to determine the value of their work using the teacher-provided rubrics.

Rockoff and Speroni (2011) stated that student growth in a teacher’s first year of teaching was a predictive indicator of student growth throughout that teacher’s entire career. According to Danielson and McGreal (2000),

educational stakeholders, such as parents, guardians, government leaders, and the general

public, are interested in using student-learning data as a qualifier for teacher assessment

and identification of quality schools. Moreover, Danielson and McGreal stated the following about student learning in regard to teacher assessment:

Given the research findings on the effect of individual teachers on the learning of

students, most educators acknowledge that student learning is relevant to the


evaluation of teachers. The quality of teaching, in other words, is highly

significant to the degree of student learning. The challenge, for both educators and

policymakers, is how to capture and use information on student learning – and,

indeed, deciding what information is most relevant. This is one of the most

complex issues surrounding teacher evaluation, and the one most likely to

generate lively political debate. The factors involved include assessment issues,

outside influences on students, and characteristics of the school system itself.

(Danielson & McGreal, 2000, p. 41)

Peterson (2004) reported that student academic growth is an important part of

teacher assessment. Peterson’s (2004) belief supported the RTT initiative for states to

identify methods for including student accomplishments in their teacher evaluation

processes (Fuhrman, 2010).

Rockoff and Speroni (2011) observed that teacher assessments completed by

principals along with classroom observations were strong predictors of student

achievement. Student learning is the focus of all standards and assessment criteria for

organizations that develop qualifications for beginning teachers (Sandholtz, 2011). Lewis

(1998) observed that schools which do not hire poor quality teachers experience higher student achievement. Rutledge, Harris, and Ingle (2010) noted that the most

important influence on student learning is teacher quality. Osler and Russell (2013) stated

the following: “Teacher quality incorporates both good and effective teaching. Good

teachers adhere to defined teaching standards and effective teachers improve student

achievement” (p. 23).


Gordon et al. (2006) proposed that teacher assessment should include calculated

pupil achievement as an element. Gordon et al. did not consider teacher credentials as a

verifiable factor when it came to determining if a teacher would be effective concerning

student achievement. According to Peterson (2004), a verifiable factor for an effective teacher is documented data demonstrating student growth and teacher improvement. Peterson reported that teachers who continually demonstrated student achievement, or who were demonstrating improvement, chose the data used for documentation.

King and Watson (2010) identified effective teachers as accomplished teachers.

Setting high accomplishment levels was a factor King and Watson observed in

accomplished teachers. Students of these teachers realized success in achieving the high

expectations established by their teacher. King and Watson pointed out that students who

successfully reached their teacher’s high expectations by using everyday experiences

demonstrated an ability to acquire knowledge. Moreover, King and Watson observed that

accomplished teachers had a desire for their students to achieve academic excellence in

testing and in life outside the classroom. These accomplished teachers went beyond

academics by creating situations for their students to use their newly acquired knowledge

in real life. It should be noted, however, that going beyond academics may not add to the

learning process.

Sandholtz (2011) pointed out in a study of pre-service teachers that teachers could

create a fun learning environment including hands-on experiences, and that these

experiences did not necessarily mean that learning was happening. Pre-service teachers in

Sandholtz’s (2011) study continually remarked about the fun experiences the students

were having in the classes they observed. However, the pre-service teachers failed to


evaluate whether learning was or was not actually taking place. Sandholtz (2011) stated

that teachers need to ensure that student learning is taking place.

Instruction

Instruction is the path teachers take to deliver knowledge to their students

(Danielson, 2011a). Danielson (2011b) offered, as an example of instruction, the ability a teacher has to communicate in a manner that facilitates student comprehension of the teacher’s direction and meaning. In the constructivist classroom, the teacher

acts as a facilitator versus a lecturer (Rolloff, 2010). The constructivist teaching model

theorizes that students use background knowledge in building new learning and in

comprehending their experiences (Rolloff, 2010).

The learning theory of constructivism holds that student learning is built on prior

knowledge to form new knowledge. Instruction time with students engaged in the

learning process is essential (Detlor, Booker, Serenko, & Julien, 2012). Questions and

discussions during classroom instruction foster higher order thinking while scaffolding

from the student’s prior lessons and knowledge (Peterson & Taylor, 2012). Detlor et al.

(2012) noted that teachers further incorporate student assessment during instructional

time by monitoring and observing students’ understanding. Adjusting instruction in a way

that capitalizes on teachable moments is an important characteristic included in this

element of instruction. Teachers utilize many different approaches to ensure and assess a student’s comprehension of skills taught. It should be noted that planning,

classroom environment, and instruction are all essential to teaching, but teachers are

responsible for other duties as well (Danielson, 2011b).


Classroom Environment

The classroom environment can be envisioned as a safe space for students and,

according to Holley and Steiner (2005), can be explained in this manner:

The metaphor of the classroom as a “safe space” has emerged as a description of a

classroom climate that allows students to feel secure enough to take risks,

honestly express their views, and share and explore their knowledge, attitudes,

and behaviors. Safety in this sense does not refer to physical safety. Instead,

classroom safe space refers to protection from psychological or emotional harm.

(p. 50)

When assessing the classroom environment, it is essential that the assessor be able

to recognize the teacher’s ability to respect and understand each individual student

(Marzano, Marzano & Pickering, 2003). In addition, building a classroom community is

essential in developing an effective classroom environment (Wong et al., 2013). In an

effective classroom environment, high expectations for learning are set by both the

teacher and the student. Classroom procedures are established which facilitate a smooth-

running classroom during times of transition (Smythe, 2002). In other words, the

effective teacher has a classroom environment that operates like a well-oiled machine.

Professional Responsibility

Danielson (2011a) identifies professional responsibility as an important domain

for teacher assessments. Professional responsibility incorporates many different aspects

of the teaching profession, which include: (a) self-reflection on the effects of their

teaching, (b) determination of whether adjustments are needed based on student achievement and

observed problems, (c) keeping of accurate accounts of students’ progress, (d) consistent


communication with families about student progress, (e) provision of opportunities for

the family to participate in the student’s learning process, (f) participation in

community and school initiatives, (g) collaboration with colleagues to discuss academic

activities for enhancing student progress, (h) participation in training courses and

workshops that improve teaching skills, and (i) displaying aspects of integrity and honesty. These aspects, along with prioritizing student concerns, are key to a teacher’s

professional responsibility.

As previously stated, teacher assessments are for summative and formative

purposes (Danielson & McGreal, 2000; Glickman et al., 1998; Marzano, 2012).

However, Danielson (2011a) introduced another reason for teacher assessment which was

to encourage professional learning. Danielson explained that professional learning is important not because teachers are ineffective, but rather because it is the teacher’s

obligation to continuously improve their skills. Mohan et al. (2008) discovered that

Pressley’s research consistently revealed that effective teachers are teachers that pursue

methods of improving their teaching skills. Teachers who watched themselves teach

through video study have remarked that the experience was their most useful method for improving teaching skills (Bill and Melinda Gates Foundation, 2013).

According to Smith et al. (2005), teacher professional development, when associated with content knowledge, increased teachers’ use of skills that improved their craft. There is a belief in the education community that only professional development of a specific design can be effective. Glickman et al. (1998) went on to state the following:

Professional development must be geared to teachers’ needs and concerns.

Research on successful professional development programs has shown an


emphasis on involvement, long-term planning, problem-solving meetings,

released time, experimentation and risk-taking, concrete training, small group

activities, peer feedback, demonstration and trials, coaching, and leader

participation in activities. (p. 375)

Mohan et al. (2008) examined why professional development was successful at schools where principals provided teachers with the opportunity to choose their type of professional development training. Mohan et al. reasoned that a possible contributing factor to this success was that principals provided teachers with time to communicate what they had learned with one another.

Hussain, Javed, Lin Siew, and Mohammed (2013) noted that professional development

was vital to pre-service teachers and for the development of their teaching skills.

According to Oliva et al. (2009), teacher professional development was futile unless

educational measurements were in place to recognize excellent teachers and pinpoint

problems in student education.

Professional Evaluation

This section details an explanation of the types of evaluators as well as various

processes used to conduct evaluations. Building principals or other administrators within

the school evaluate the majority of teachers (Darling-Hammond, 2013). However, peers

may conduct some teacher assessments (Johnson & Fiarman, 2012). A professional

evaluation includes the evaluators and the procedures used in the process (Darling-

Hammond, 2013; Johnson & Fiarman, 2012).

Evaluator. Most teacher evaluators originate from a pool of trained principals;

however, teacher evaluators can originate from a pool of trained peers (Danielson &


McGreal, 2000). In either case, the pool of teacher evaluators can be formed inside or

outside of the physical school campus. These teacher evaluators observe, interview, and

review teacher actions in school settings to assess their teaching skills (Danielson, 2011;

Peterson, 2004; Zimmerman & Deckert-Pelton, 2003). During the process of evaluating

teachers, evaluators take on different roles. In the role of judge, the evaluator’s objective is staffing related: identifying those who will not be considered for contract renewal (Danielson, 2011). Danielson, however, went on to point out that while teacher

evaluators are conducting assessments, they should also be identifying teachers who

should be considered for promotion. In the role of a coach or mentor, teacher evaluators

are not focused on staffing objectives; rather, they are focused on the teacher’s growth

experience associated with the assessment process (Danielson & McGreal, 2000).

Principals as evaluators. Historically, principals have been the primary choice for

teacher evaluators. In prior years, principals would typically conduct walk-through

assessments which were brief and informal classroom visits (Peterson, 2004). Peterson

explained that principals he observed did not have the individual free time or available

human resources to complete their teacher assessments in the manner in which they

would have preferred. Gordon et al. (2006) identified several approaches that principals

could use for the purpose of conducting teacher evaluations other than walk-throughs.

Research revealed, in a study conducted by Zimmerman and Deckert-Pelton

(2003), that teachers perceived their principals as mentors and their source for

educational information. In this same study, these teachers reported that a principal who

had several years of teaching experience and a successful record as a teacher made a

better teacher evaluator (Zimmerman & Deckert-Pelton, 2003). According to


Zimmerman and Deckert-Pelton (2003), teachers who were assessed by a single principal whom they respected considered their assessment productive. On the other

hand, teachers who received various teacher assessments conducted by different

principals ended up feeling demoralized due to gross inconsistencies in their assessments.

In the case of various teacher assessments, teacher transfers within the same school district were the reason for multiple assessments being made by different evaluators.

Williams (2000) found that the teachers he interviewed were quick to state their belief in

the principal’s need to set clear goals and objectives in order for the school to be

successful.

Principals typically feel pressured when they find an ineffective teacher while

conducting a teacher assessment. In such cases, the presumption is that the principal can

improve the situation (Peterson, 2004). In most cases, principals ignore ineffective

teachers (Weisberg et al., 2009). Unfortunately, school locations where ineffective

teachers existed had the greatest need for effective teachers (Weisberg et al., 2009).

Concerning teacher assessment, Peterson noted that principals could be more effective by

helping teachers become more involved in their assessments (Peterson, 2004). Peterson

stated that principals could guide teachers by suggesting that they learn more about the

characteristics of an effective teacher and explaining the assessment data gathering

process. Principals have been identified as good predictors of effective teachers (Gordon et al., 2006; Peterson, 2004).

Principals have recognized many difficult issues regarding teacher assessment.

One such issue is that the yearly salary increase for effective teachers is the same as for

ineffective teachers (Weisberg et al., 2009). Weisberg et al. (2009) recognized that


administrators do not utilize the teacher assessment ratings in making critical disciplinary

decisions about a teacher’s status. Instead, according to Weisberg et al., a declaration

of incompetence is required before a teacher can be relieved of their position.

Research has indicated that principals rated few teachers as ineffective (Toch,

2008). This principal rating error kept teachers from growing in their craft according to

Toch (2008). Weisberg et al. (2009) noted that most teacher evaluators very rarely

identified teachers as ineffective. Under the legacy teacher evaluation systems,

administrators assured teachers that they were doing a good job, with as few as one percent of teachers being informed that they were not doing a good job (Bill and Melinda Gates Foundation, 2013). Weisberg et al. also noted that administrators rarely

dismissed veteran teachers from their jobs for being ineffective.

Mentors and peers as evaluators. Mentors and peer teachers can take an active

part in the teacher assessment process. The pool of teacher evaluators used for teacher

assessments can include non-principals (Darling-Hammond, 2013; Johnson & Fiarman,

2012). In this case, mentors and peer teachers typically compose this pool (Darling-

Hammond, 2013).

To increase the number of teacher evaluators, some schools and school districts

have reached out to veteran teachers to serve as mentor-peer teacher evaluators (Toch,

2008). Oliva et al. (2009) noted that mentors and peer teachers, with a background in

instruction, had been used in some schools as mentor-peer teacher evaluators. Mentors and peers functioned most valuably as teacher evaluators in situations in which they had similar experience (Oliva et al., 2009). Rockoff and Speroni (2011) noted that teacher

evaluators were better able to predict teacher effectiveness by using various types of


assessments instead of by using various types of teacher evaluators (e.g., mentors, peer

teachers, or principals). The reliability of teacher assessment was improved when schools

began using an array of evaluators and observed lessons (Bill and Melinda Gates

Foundation, 2013).

Johnson and Fiarman (2012) acknowledged controversy regarding the use of

mentors and peer teachers as teacher evaluators. Some of the controversy centered on a

viewpoint that these evaluators were doing a job that was decidedly the principal’s

(Johnson & Fiarman, 2012). Other controversy noted by Johnson and Fiarman (2012)

centered on a viewpoint that evaluators were possibly not being objective in their

observations of colleague teachers. In such cases, the mentor-peer teacher evaluator

might not be able to make tough decisions such as confronting colleagues with

unfavorable comments. In support of this viewpoint, Weisberg et al. (2009) had encountered the same issue of assessment confrontation, but with principals who faced having uncomfortable conversations with teachers after assessment observations. Weisberg et al. as well as Johnson and Fiarman (2012) recognized that if teacher assessments were to be effective, administrators must address uncomfortable topics, that is, the poor choices of teaching along with the good choices of teaching.

Johnson and Fiarman (2012) proposed several ideas that eventually helped make

mentor-peer teacher assessments more acceptable and successful. Johnson and Fiarman

(2012) proposed that: (a) when selecting mentor-peer evaluators, the process should be

unrestricted and thorough, (b) mentor-peer evaluators should be trained using well-

defined performance rules and detailed teaching standards, (c) mentor-peer evaluator


training should be continuous, and (d) the teacher assessment process should be well

supervised (Johnson & Fiarman, 2012).

According to Mielke and Frontier (2012), mentor-peer based assessments also worked in reverse: as mentor-peer evaluators observed and assessed their colleagues’ classrooms, they were then able to assess and evaluate their own teaching practices. In such cases, the mentor-peer evaluator had the opportunity to compare and

validate their teaching practices to those exhibited in their colleague’s classroom (Mielke

& Frontier, 2012). Thus, Mielke and Frontier (2012) found that mentor-peer based

assessments exposed teachers to opportunities for personal, professional development.

Gordon et al. (2006) suggested using teacher assessment evaluators from outside

the physical school campus, regardless of type (e.g., mentors, peer teachers, or

principals), instead of relying on evaluators only from within the school. Gordon et al.

pointed out that when districts used outside evaluators, the principal bias lessened.

Gordon et al. also suggested analyzing student work as an additional type of teacher

assessment.

Feedback. Effective feedback is described as useful for teacher development

(Bambrick-Santoyo, 2012; Danielson, 2011; Weisberg et al., 2009). Effective teaching

takes place when both teacher observation and management coexist (Hussain et al.,

2013). Hussain et al. (2013) went on to suggest that feedback was necessary if

observations were to be successful. Mielke and Frontier (2012) observed that teachers in

their study had used feedback to evaluate their practices for improvement. According to

Weisberg et al. (2009), feedback after an assessment was beneficial to both effective

teachers and marginal teachers. With respect to feedback, Kimball (2002) remarked that


teachers considered their assessment valid if they received feedback. According to

Danielson, evaluator feedback may lead to professional conversations, which in turn

benefit teachers by enhancing their educational practices (Danielson, 2011). Teachers

need support as well as feedback in order to benefit from assessments (Johnson &

Fiarman, 2012). Kimball (2002) established three measurements for determining the types and value of feedback: (a) timing or frequency of feedback, (b) the source of the feedback, and (c) the usefulness of the feedback.

The Bill and Melinda Gates Foundation (2013) committed to conducting research

that was designed to identify methods to support teachers in conjunction with a teacher

evaluation system. This research became known as the MET project. Upon its

completion, the MET project identified nine principles for measuring teacher

effectiveness and developed methods for teacher assessment and feedback (Bill &

Melinda Gates Foundation, 2013). According to the Bill and Melinda Gates Foundation

(2013), teacher feedback represents the pathway to improved teaching.

Classroom observation. In prior years, supervisors, namely principals, visited

school classrooms and conducted assessment observations. The objective of these

classroom assessments was to provide pro and con style feedback of the teacher’s

instructional skills (Danielson & McGreal, 2000). Danielson and McGreal (2000) pointed

out that teacher assessments, when conducted in this format, were not helpful to the

teacher. In more recent years, a new assessment format trend emerged in the teacher

evaluation process. As described by Danielson and McGreal, this trend specifically

addressed the observation phase of the teacher evaluation process. The new trend

established teacher assessment as a more purposeful event and one directed more toward


the observed behaviors of the teacher during the classroom assessment (Garza, Ovando,

& O'Doherty, 2016). Moreover, Garza et al. (2016) noted that while using this new trend,

evaluators (e.g., mentors, peer teachers, or principals) became more focused on teacher

behaviors rather than pro and con teaching practices.

Danielson and McGreal (2000) went on to describe that immediately or very soon

after an assessment, the evaluator and teacher convened for a conference where they

discussed the classroom assessment. The evaluator-teacher conference focused on

improvements versus mistakes or flaws. The conference objective was to offer the

teacher guidance toward improving their teaching skills (Danielson & McGreal, 2000).

Johnson and Fiarman (2012) asserted that these classroom assessments and conferences

were beneficial when the evaluator took on a supportive teacher role. In 1998, Glickman

et al. noted that the evaluator-teacher conferences could also be helpful for collecting and

evaluating information for a formative assessment.

Boyd (1989) communicated that for a teacher assessment to be legitimate, it had

to provide: (a) meaningful feedback, (b) information for teacher growth, and (c) guidance

from principals and peers.

Teacher’s response to feedback. Bambrick-Santoyo (2012) stated it was the

teacher’s responsibility to use their evaluator’s feedback as information for changing

their teaching practices for the better. Bambrick-Santoyo (2012) also noted that feedback

alone could not effect changes in a teacher’s teaching practices; the teacher is required to

take action.

Weisberg et al. (2009) observed in their study that teachers rarely received

meaningful feedback. Within the study group, three out of four teachers did not receive


any feedback after their evaluation. Consequently, 75% of the teacher study group had no

understanding of their assessment outcome, their strengths and weaknesses, or how to

better their teaching skills. According to Darling-Hammond et al. (2012), if evaluators

did not provide frequent feedback, teachers would not take the feedback seriously. Also,

Kimball (2002) stated that designers of teacher evaluation systems relied on literature to

better identify the characteristics of meaningful feedback.

Self-assessment and self-reflection. Teachers have an inherent desire to review

their teaching practices, assess their skills, and find ways to better their skills

(Chukwubikem, 2012). Sandholtz (2011) declared that teachers who used self-analysis

and self-reflection not only improved their teaching skill, but also promoted student

understanding. In addition, teacher education courses found value in using self-reflection

because it fostered an environment where pre-service teachers judged their teaching

practices and student learning (Sandholtz, 2011). According to Danielson, teacher

evaluation systems that utilize self-assessment and self-reflection have proof of quality

assurance (Danielson, 2011). Pieczura (2012) noted that she modified her lessons through

the realizations she discovered about her own teaching practices during self-reflection.

Moreover, Pieczura expressed that her lessons were improved as a result of her self-

reflection.

For teachers who were already excelling in their teaching practice, self-

assessment was a way to grow further (Glickman et al., 1998). Danielson noted that

professional conversations resulting from prior self-assessment and professional

reflection were productive ways to grow and develop professionally. According to

Sandholtz, teachers are better able to evaluate their teaching skills when they utilize self-


reflection (Sandholtz, 2011). Chukwubikem (2012) acknowledged that there is a

worldwide consensus that teachers who use self-reflection effectively are responsible for

improvements we see in the educational system.

Sandholtz (2011) questioned if novice teachers would be able to utilize the

information they discovered during self-assessment to gain insight into their teaching

practices. Mielke and Frontier (2012) concluded that novice teacher could be successful

in utilizing self-assessment information toward professional growth. Mielke and Frontier

(2012) indicated that self-assessment and reflection, in the teaching profession, had

motivated teachers to become experts in their field. Bambrick-Santoyo (2012) agreed with the findings of Mielke and Frontier, who found that if teachers were able to see the steps they needed to take to develop their skills, they developed those identified skills. Mielke and Frontier concluded that effective management and assessment methods

permitted teachers to correctly assess their teaching skills and identify areas for

professional growth.

Teacher’s knowledge of assessment. According to Glickman et al. (1998), the

characteristic of self-actualization was to be true to one’s own tendencies or bent. In

reference to a teacher’s knowledge of their own profession, Osler and Russell (2013)

discovered that teacher expectations directly equated to high student achievement. Also, these same discoveries about teachers’ characteristics were influential in teachers’ understanding of curriculum, staff development, and reading achievement (Osler & Russell, 2013). Peterson (2004) acknowledged that the phase a teacher was in regarding their professional career affected their knowledge about their own evaluation

process. Moreover, a teacher’s understanding of assessments influenced the manner in


which assessment feedback was accepted (Kimball, 2002). Therefore, according to

Kimball (2002), changes in a teacher’s instructional methods influenced their belief in the

assessment process and feedback.

Schacter (2001) conducted a study of pre-service teachers who were teaching

reading to elementary students. In the study, Schacter (2001) discovered that

preconceived teacher beliefs, particularly those regarding their student’s academic

abilities, had a direct correlation to the outcome of student learning. Schacter (2001)

observed that if teachers had a preconceived belief that their students could learn to read,

their students performed well in reading. On the other hand, Schacter observed that if teachers had a preconceived belief that their students could not learn to read, those students performed poorly in reading (Schacter, 2001). Moreover, Hussain et al. (2013) conducted

a study of teachers in Pakistan and found that if teachers were confident in their teaching

abilities, they, in turn, had command over the classroom and were more productive with

students.

Marzano’s (2012) previously mentioned study focused on 3,000 teacher

participants in determining what their perceptions were regarding teacher assessments.

Although Marzano’s sample group was not representative of the population, the results indicated that teachers considered assessments crucial for the measurement of teacher effectiveness and the development of teacher skills. Teacher participants expressed that

the development of teacher skills was the most important purpose for teacher assessments

(Marzano, 2012). This finding agreed with Danielson’s and McGreal’s (2000) belief

regarding the same. Moreover, Danielson and McGreal stated that “experienced

practitioners argue that professional dialogue about teaching, in a safe environment,


managed and led by teachers, is the only means by which teachers will improve their

practice” (2000, p. 9). Kimball (2002) remarked that teachers perceived the assessment process as fair when: (a) teachers knew precisely what the assessment measured, (b) various types of data were used in the assessment, and (c) teachers had more opportunities for input. Kimball also stated that assessments were perceived to have

quality when feedback was related to teaching.

Professional Trust in Schools

The idea of trust has become a central issue to many bodies of research including

teamwork, leadership, organizational relationships, buyer-seller relationships, strategic

alliances, organizational governance, and the focus of this study — teacher assessment.

Blase and Blase (2001) noted that “without trust, a school cannot improve and grow into

the rich, nurturing micro-society needed by children and adults alike” (p. 23). A school culture rooted in trust is difficult to sway; however, trust can be indirectly influenced and grown by providing established times and environments that foster

collaboration (Demir, 2015; Forsyth, Barnes, & Adams, 2006).

In the case of trust in schools, there is a solid relationship between trust and

collaboration (Tschannen-Moran, 2014). The two appear to be mutually inclusive; which comes first is not clear. Recognizing this relationship and inclusiveness is critical because, without it, some literature is omitted and certain viewpoints are overlooked (Tschannen-Moran, 2014).

The National Center for Literacy Education (NCLE) conducted a survey entitled

"National Survey on Collaborative Professional Learning Opportunities” of which the

report Remodeling Literacy Learning is based (NCLE, 2013). In this report, an analysis


of the survey data concluded that a strong correlation existed between trust and collaboration. The

survey involved educators, and among those who agreed that collaboration was not

routine in their schools, 42% agreed that professionals in their school trusted each other.

In contrast, for educators who agreed that collaboration was routine, 82% reported a

culture or climate of trust. According to the report, “[t]hese data suggest that trust, routine

collaboration, and the spread of best practices exist in a reciprocal, mutually reinforcing

relationship, where more of any one leads to more of the others, a classic ‘virtuous

cycle’” (NCLE, 2013, p. 21).
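If the NCLE group sizes were available, the gap between the 42% and 82% figures reported above could be tested directly as a difference in proportions. The sketch below is illustrative only and assumes hypothetical group sizes, since the report excerpt here gives only percentages.

```python
# Illustrative two-proportion z-test for the trust/collaboration contrast.
# The 42% and 82% figures come from NCLE (2013); the group sizes are assumed.
from statsmodels.stats.proportion import proportions_ztest

n_low, n_high = 300, 300             # hypothetical respondents in each group
trust_low = round(0.42 * n_low)      # reported trust where collaboration was not routine
trust_high = round(0.82 * n_high)    # reported trust where collaboration was routine

z_stat, p_value = proportions_ztest([trust_high, trust_low], [n_high, n_low])
print(f"z = {z_stat:.2f}, p = {p_value:.4g}")
```

Under assumed group sizes of this magnitude, a 40-point difference in proportions is highly significant, which is consistent with the reciprocal relationship the report describes.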

Professional Trust and Social Capital

For this study, “[t]rust is defined as a willingness to rely on an exchange partner

in whom one has confidence” (Moorman, Deshpande, & Zaltman, 1993, p. 82).

Moreover, Bryk and Schneider (2003) defined professional trust as the trust between

professionals working in a professional setting, such as in an educational environment.

Within the education community, some have referred to trust between educators in

general as relational trust (Bryk & Schneider, 2003). In communities outside of

education, other terms are used concerning trust as a factor within professional

environments (Bryk & Schneider, 2002). The social sciences community typically prefers

social trust (Bryk & Schneider, 2002, p. xiv) while the business community typically

prefers social capital (Bryk & Schneider, 2002, p. xiii). Concerning this study’s survey

instrument, professional trust is noted to be more closely aligned to the relationship

between assessment evaluator and teacher, which is a focus of this study (Bryk &

Schneider, 2002).


The notion of social capital, and the value associated with community networks, does have application to schools with respect to trust and collaboration, a focus of this study.

According to the seminal work of Putnam (1995, n.p.), social capital represents “features

of social life — networks, norms, and trust — that enable participants to act together

more effectively to pursue shared objectives.” Although social capital can be at the

individual level, the definition commonly relates to a group of individuals (Glaeser,

Laibson, Scheinkman, & Soutter, 1999). In other words, social capital is the byproduct of

mutual trust individuals have placed in each other as a community or network; the

emphasis is on byproduct because it cannot directly be produced (Glaeser et al., 1999).

According to Schultz and Cuneo (2015), social capital is the result of trusting

relationships with communities.

Trust as it relates to schools. In the late 1990s, researchers Tom Tyler and Peter

Degoey (1996) concluded that trust has a strong influence on people’s reactions to

authorities. They found that “inferences about the trustworthiness of the motives of

authorities had a powerful effect on whether people voluntarily deferred to third-party

decisions and group rules” (Tyler & Degoey, 1996, p. 336). Moreover, trust in an

organization and its leadership has productivity consequences and is empirically

supported in the literature (Bryk & Schneider, 2002; Goddard, Tschannen-Moran, &

Hoy, 2001; Tschannen-Moran, 2001).

According to Dr. Austin Buffum, the school climate must maintain trust, and

barriers to building and sustaining trust must also be eliminated (Buffum, 2008). In their

work with Northwest Regional Educational Laboratory on a project partially funded by

the United States Department of Education, Brewster and Railsback (2003) identified the


most common barriers to building trust: (a) top-down decision making that is perceived as arbitrary, misinformed, or not in the best interests of the school, (b) ineffective communication, (c) lack of follow-through on or support for school improvement efforts and other projects, (d) unstable or inadequate school funding, (e) failure to remove teachers or principals who are widely viewed to be ineffective, (f) frequent turnover in school leadership, (g) high teacher turnover, and (h) teacher isolation. Moreover, Brewster and Railsback (2003)

pointed out the following:

Perhaps the greatest obstacle that schools experiencing a lack of trust must

overcome, however, is their past. Identifying the specific causes of mistrust in the

school and making a sincere commitment to address them is the first and probably

most important step. (p. 11)

Bryk and Schneider (2002) conducted one of the largest and best-known studies

regarding trust in schools. This longitudinal study took place over a 10-year period in 400

Chicago elementary schools. This study was an analysis of the relationships between a

school’s productivity trend and periodic survey reports (Bryk & Schneider, 2002). Case

studies as well as surveys of teachers, principals, and students resulted in the research

findings. A series of analyses were used to control for other factors that might also affect

changes in student achievement. Bryk and Schneider (2002) established a significant

relationship between a school’s productivity and levels of trust. Also, Bryk and Schneider

(2002) established a connection between the level of trust in a school and student

learning. According to the Association for Supervision and Curriculum Development, the


study conducted by Bryk and Schneider (2003) illustrates the central role of relational

trust in building successful education communities.

Trust as it relates to collaboration in schools. Holste and Fields (2010) began a

relevant vein of research regarding trust by publishing an article in the Journal of

Knowledge Management. Holste and Fields presented findings from a study

conducted to explore the impact of affect-based and cognition-based trust concerning co-

workers’ willingness to share and use tacit knowledge. Tacit knowledge, defined as a

non-written form including abilities, developed skills, experience, and undocumented

processes, is typically personal and informal (Edge, 2013; Kucharska & Kowalczyk,

2016). Nonaka (2007) went on to provide the following definition of tacit knowledge:

Tacit knowledge is highly personal. It is hard to formalize and, therefore, difficult

to communicate to others. Or, in the words of the philosopher Michael Polanyi,

“We can know more than we can tell.” Tacit knowledge is also deeply rooted in

action and in an individual’s commitment to a specific context – a craft or

profession, a particular technology or product market, or the activities of a work

group or team. Tacit knowledge consists partly of technical skills – the kind of

informal, hard-to-pin-down skills captured in the term “know-how.” (p. 165)

Explicit knowledge is defined as knowledge that is in written form, such as documents,

reports, articles, and presentations. This knowledge is typically impersonal and formal

(Edge, 2013; Kucharska & Kowalczyk, 2016).

In a study by Holste and Fields (2010), the participants were 202 managers and professional staff members in a business setting. From the findings of this

study, the authors concluded the following:


The levels of both types of trust influence the extent to which staff members are

willing to share and use tacit knowledge. Affect-based trust has a significantly

greater effect on the willingness to share tacit knowledge, while cognition-based

trust plays a greater role in willingness to use tacit knowledge. (Holste & Fields, 2010, p. 128)

At the turn of the 21st century, Tschannen-Moran (2001) conducted an essential

study regarding trust in schools and collaboration. The study involved 45 urban public

schools where she examined connections between the school’s level of collaboration and

their level of trust (Tschannen-Moran, 2001). Measures of Trust in Schools Scales were

utilized to assess the variable of trust in schools. This group of scales explored faculty

trust in both principals and colleagues. The results indicated a significant link between (a)

teachers’ collaboration with the principal and their trust in the principal, (b) collaboration

with colleagues and trust in colleagues, and (c) collaboration with parents and trust in

parents (Tschannen-Moran, 2001).

Tschannen-Moran (2001) discussed her findings when she stated: “Schools where

there was a high level of trust could be predicted to be schools where there would be a

high level of collaboration” (p. 327). Even though it has been almost 20 years since Tschannen-Moran (2001) made the following concluding remarks in her seminal work, they remain timely. Tschannen-Moran further stated:

As schools struggle to reinvent themselves to respond to the needs of a changing

world, collaboration provides an important mechanism for schools to work toward


excellence. The problems facing schools are larger than any person or group can

solve alone, and finding solutions will require cooperation and collaboration. (p.

327)

Trust as it relates to teacher assessment. Trust is a valuable factor in implementing educational reform, growing a culture of shared school ownership, and conducting a successful teacher evaluation process. Teacher assessment conducted in a way that fosters mutual trust between the evaluator and the teacher has the most potential for success.

Wang and Day (2002) stated it this way, “respect, safety, trust, and collaboration … are

considered key ingredients of effective teacher development, and hence need to be at the

core of any teacher observation model” (p. 14).

Davis, Pool, and Mits-Cash (2000) studied two separate schools as they adopted

new teacher evaluation systems and found that the principals’ attitudes toward the new

evaluation process, as well as their expectations of teacher success, profoundly impacted the

teachers’ attitudes toward the evaluation process. Moreover, in separate studies, Kelly (2006) and Lansman (2006) likewise found that the principal’s attitude toward the teacher

evaluation process, in addition to collaboration with the observed teachers, had a positive

impact on how teachers viewed and implemented the feedback received. Lansman (2006)

found that the principal’s collaborative leadership was “the major factor impacting the

teacher evaluation process” (p. 156), a finding consistent with that of Milanowski and Heneman (2001), who linked negative teacher attitudes toward the evaluation process to a perceived lack of “collaborative attitude” (p. 207) on the part of their administrators (i.e., their evaluators).


Concerning teacher assessment, trust is a main factor in the success of the evaluation process. According to Bryk and Schneider (2003), clear and consistent communication helps build trust among all stakeholders. Research conducted by Donaldson and Donaldson (2012) concluded that for teachers to benefit from the feedback they receive from the evaluator, they must be able to trust the evaluator. In other words, “[f]or

teachers to find these conjectures credible and respond to them with efforts to build on

their strengths and address their weaknesses, they must trust the observer and have access

to subsequent learning opportunities” (Donaldson & Donaldson, 2012, p. 80).

Research conducted at the University of Chicago revealed that the successful

implementation of their teacher assessment program was heavily dependent on the solid

level of trust between their administrators and the teachers (Sporte, Stevens, Healey,

Jiang, & Hart, 2013). The teacher assessment utilized components of Charlotte

Danielson's 2011 Framework for Teaching. The teachers stated in their survey responses that trust can help minimize the tension that exists during an evaluation if the

same person uses the same instrument and supports the teacher (Sporte et al., 2013).

Teachers carefully determine the trustworthiness of their principal through social

exchanges that include everything from chance meetings to formal assessments (Forsyth

et al., 2006). Forsyth et al. (2006) discovered that the level of teacher trust in the principal was formed through observed behaviors; behaviors indicating respectfulness, benevolence, competence, reliability, and honesty yielded a high level of trust. Research has

concluded that perceptions of individual teachers evolve into shared perceptions of the


teacher role group. In a case analysis of trust and school improvement in five schools,

low levels of trust in school administrators “were associated with lower cohesion within

the building, both among teachers and between the teachers and the principal” (Seashore,

2003, p. 29). Moreover, in a study of middle schools, Tarter, Sabo, and Hoy (1995) found

that “trust in the principal and trust in colleagues independently move the organization

toward effectiveness” (p. 47). Teachers, when perceiving a move toward effectiveness, are

more likely to have confidence in the school’s efficacy as a whole (Tarter et al., 1995).

Factors Affecting Teacher Performance

Two factors affect which teaching elements teachers identify as important regarding the assessment of teacher performance. The first factor is the years of certified teaching experience. The second factor is the route by which the teacher earned their teaching certification (i.e., traditional route or alternate route).

Years of Certified Teaching Experience

In the 1990s, years of certified teaching experience surfaced as a main

contributing factor to the improvement of student performance (Hanushek & Rivkin,

2006). There were some researchers, however, who reported that years of teaching

experience only had a nominal effect on teaching. These researchers were of the belief

that teacher growth occurs during the first few years of teaching and then levels off after

four or five years in the profession (Hanushek, Kain, O’Brien, & Rivkin, 2005; Rockoff,

2004).

An issue with beginning teachers is that they are required to teach content with

which they are not familiar; therefore, the time spent in gaining content knowledge takes

away from their ability to recognize their students as unique individuals (Vonk, 1995).


Vonk (1995) went on to write that the attention taken away from students made the

teacher less effective. Likewise, Shahid and Thompson (2001) discovered similar themes through a synthesized research study on self-efficacy. A positive relationship existed between years of teaching experience and both general and overall teaching pedagogy.

Weisberg et al. (2009) noted that veteran teachers were more effective than

teachers with fewer years of experience. In a study conducted about teacher mental set,

Bishop (2009) explained, “mental-set behaviors were defined as a teacher’s heightened

sense of situational awareness and conscious control over his or her thoughts and

behaviors relative to a particular situation” (p. 105). Bishop (2009) found a significant

difference in the mental-set of experienced teachers.

Teachers with three to five years of teaching experience level off in effectiveness (Boonen, Van Damme, & Onghena, 2014). Moreover, veteran teachers may become less effective in the classroom, as indicated both by their own admission and by measures of their students’ performance (Day, Sammons, & Stobart, 2007). Weisberg et al. (2009) revealed that

students assigned to an ineffective teacher for three consecutive years could fall academically behind other students assigned to an effective teacher for three consecutive years by as much as 50%.

According to Day and Gu (2009), the idea of teacher effectiveness and ineffectiveness, based on years of teaching experience, is associated with teacher commitment, which is a predictor of career success as a teacher progresses in the field of education.


Type of Teacher Certification

Uriegas, Kupczynski, and Mundy (2014) described two different and uniquely

separate types of teacher preparation programs. Each program offers potential teachers a

different route or pathway toward a career in teaching. There are two pathways to teacher

certification: (a) the traditional route and (b) the alternate route.

Turley and Nakai (2000) defined teachers with a traditional route certification as those having obtained their state certification through completing coursework at a

university. This coursework would have included academic classes and practicums

specific to the teaching profession. Unlike traditional route teachers, alternate route

teachers did not attend a university with the initial goal of becoming a teacher. Although

they attended a university like traditional route teachers, their coursework did not include

classes specific to the teaching profession (Feistritzer, 2005; Turley & Nakai, 2000).

Currently, the research community is mixed regarding which route to certification is the better approach to becoming a teacher (Buck & O'Brien, 2005; Lit, Nager, & Snyder,

2010; Mikulecky et al., 2004).

Traditional route education students are individuals enrolled at a university and

working toward a bachelor's degree in a program that includes classes in pedagogy and

content theory (Uriegas et al., 2014). The traditional route education student receives

classroom teaching experience as part of their university program. Typically, a full

semester is dedicated to what is referred to as student teaching, which takes place outside

the university in an actual public classroom environment (Lit et al., 2010; Uriegas et al., 2014). According to Darling-Hammond (2000), traditional route education students

receive training in examining the teaching role from a learner’s viewpoint. Moreover,

once trained through a traditional route program, the education student will be fully

prepared with the knowledge required to understand student behaviors and to differentiate lessons (Lit et al., 2010).

There are, however, some noted disadvantages in being a traditional route teacher.

One disadvantage is that traditional route teachers tend only to have life experiences

centered on the teaching profession and this can be a hindrance when trying to see life

beyond the educational world (Mikulecky, Shkodriani, & Wilner, 2004). Nakai and

Turley (2003) noted that traditional route teachers took longer to complete their

university programs when compared to nontraditional route teachers, because of their

teacher classes, practicums, and rigid processes. Moreover, in multiple studies conducted

by Klagholz (2000), teachers who came by way of the traditional route scored lower on

state licensing exams, which were designed to determine knowledge associated with the

educational field.

Alternate route education students are individuals enrolled at a university and working

toward a bachelor’s degree in any non-education major. These students have almost

never received classroom teaching experience as part of their university program

(Darling-Hammond, 2009; Mikulecky et al., 2004; Uriegas et al., 2014). Nakai and

Turley (2003) reported that thoroughly planned induction support is more imperative for

alternate route teachers than traditional route teachers.

Alternate route teacher programs are considered a fast track pathway that

bypasses the traditional route teacher program (Nakai & Turley, 2003). However, this

fast track to the classroom has omitted some critical teaching skills. According to

Samaras and Wilcox (2009), managing time and students are two of the main issues that

first-year alternate route teachers faced in their new careers. Alternate route teachers themselves identified six concerns they had as teachers, which follow in order of significance: (a) developing curriculum, (b) classroom

equipment and supplies, (c) training tactics, (d) handling challenging students, (e)

managing a classroom, and (f) observation feedback (Nakai & Turley, 2003). Therefore,

the prevailing view of alternate route teachers is that these teachers are ill-prepared to teach

because of the lack of pedagogical knowledge gained in a traditional route teacher

program associated with a university setting (Darling-Hammond, 2009; Nagy & Wang,

2007).

The majority of states have established teacher certification pathways through alternate route programs. Each program has methods and standards by which non-traditional route individuals can obtain teacher certification (Buck & O'Brien,

2005). According to Feistritzer (2005), most states have pathway programs in place for

alternate route certification. Feistritzer (2005) noted the program requirements are not

consistent across states, except for the following six requirements: (a) the program is

designed specifically to recruit, prepare, and license talented individuals for the teaching

profession who already have at least a bachelor’s degree, (b) candidates for these

programs must pass a rigorous screening process, such as passing tests, interviews and

demonstrated mastery of content, (c) the programs are field-based, (d) the programs

include coursework or equivalent experiences in professional education studies before

and while teaching, (e) candidates for teaching work diligently with mentor teachers, and

(f) candidates must meet high performance standards for completion of the programs.

Supporters of traditional route teachers argue that the experiences traditional route teachers received as student teachers prepared them for the real-life environments and situations of a school classroom. These same supporters suggest that the traditional route pathway is the best approach for preparing teachers for real-world teaching (Lit et al., 2010). In research completed by Nelson (2010), mentors compared traditional route teachers and alternate route teachers. Traditional route teachers had a higher mean score when compared to alternate route teachers. Moreover, traditional route teachers were more successful in using student assessments to drive unit planning for differentiated instruction.

In contrast, and according to Mikulecky et al. (2004), alternate route teachers can

bring advantages to the teaching community. One such advantage is that alternate route

teachers are generally placed in communities where the environment is harsh and that are experiencing a teacher shortage. Another advantage is that alternate-route

teachers have previous work experiences. These are non-educational life experiences and

skills that can be applied to educational experiences to enlighten students. Finally,

alternate-route teachers are considered to possess more content knowledge of subjects

because of the training they received while working on their non-teaching bachelor’s

degree (Goldhaber & Brewer, 2000; Zeichner & Paige, 2007).

According to several researchers, alternate-route teachers are a great educator

supply line for regions and communities experiencing teacher shortages (Mikulecky et

al., 2004; Nakai & Turley, 2003). Feistritzer (2005) has observed over the past several

years that the position of alternate-route teacher is evolving into a more decidedly

reputable position within the teaching profession. Therefore, considering alternate-route

teachers as decidedly reputable is a change in professional perspective when compared to

past years when only traditional-route teachers were highly respected (Lit et al., 2010).

School districts have a responsibility to develop their new teachers regardless of how the teacher received their certification (traditional route or alternate route). A new teacher should begin their career under the supervision of a veteran teacher, one who can mentor and guide the new teacher without exception (Darling-Hammond, Holtzman, Gatlin, & Vasquez Heilig, 2005). Ultimately, however, a teacher's career success is heavily dependent on their character. According to Boyd, Grossman, Lankford, Loeb, and Wyckoff (2006) as well as Sass (2008), the success of a traditional route or alternate route teacher depends on the teacher's innate desire.

In some cases, there are non-certified teachers in a classroom. Non-certified

teachers include those who are teaching under an emergency certification and individuals

teaching with no certification at all. Research regarding non-certified teachers is

conclusive in its findings: certified teachers have a more positive effect on student

learning when compared to non-certified teachers. Darling-Hammond et al. (2005)

revealed that non-certified teachers could negatively influence student learning. Mubenga

(2006) discovered that uncertified teachers contributed to an achievement gap in student

learning. He attributed this gap in learning to a lack of essential instructional practice.

Goldhaber and Brewer (2000) investigated the relationship between teacher certification

and the performance of 12th-grade students. The authors determined that students who

received instruction from certified teachers performed significantly higher in the areas of

math and science (Goldhaber & Brewer, 2000). In 2002, Stronge reported that students

have significantly higher achievement rates when taught by a certified teacher in their

field of study. In fact, students scored 7 to 10 points higher than students of non-certified

teachers (probationary, emergency, or no certification).

Mississippi Identified Components in Assessing Teacher Performance

Over the past 30 years, MDE has used two named statewide assessments for evaluating teachers (Furtwengler, 1995; Gilbertson, 2012). In the 1980s, MDE implemented the Mississippi Teacher Assessment Instrument (MTAI), which served as the teacher assessment until the Mississippi Statewide Teacher Appraisal Rubric (M-STAR) was adopted.

Mississippi Teacher Assessment Instrument (MTAI)

MTAI was considered an innovative approach to evaluating student teachers as well as new teachers and veteran teachers (Furtwengler, 1995). The instrument contained 16 competencies and 42 indicators and was designed after Georgia's teacher assessment instrument (Amos & Cheeseman, 1992; Walker, 1992). According to Amos and Cheeseman (1992), MDE endorsed the MTAI after conducting a validation study with many practicing Mississippi teachers. The state trained principals or peer teachers to conduct the evaluation using the MTAI as the instrument to measure the teacher's effectiveness (Furtwengler, 1995). During the era of the MTAI, each state's department of education developed its own teacher evaluation system.

Mississippi Statewide Teacher Appraisal Rubric (M-STAR)

Understanding the historical role that the United States Department of Education

played in initiating and shaping a nationwide movement in teacher assessment is vital in

understanding the rationale behind Mississippi’s development of its statewide teacher

assessment rubric and teacher evaluation system (MDE, 2014a).

Mississippi’s teacher assessment rubric is referred to as the Mississippi Statewide

Teacher Appraisal Rubric (M-STAR). Gilbertson (2012) reported that Mississippi filed

for an NCLB waiver based on data collected through the Mississippi Curriculum Test,

Second Edition. Data at the time indicated that the 2014 deadline set by NCLB (United States Department of Education, 2003) for all children to be at grade level was going to be missed; consequently, Mississippi had no choice but to file for a waiver. At the time, and as a condition of the waiver, Mississippi committed to reforms in its public

education system. One such reform was the development of the statewide teacher

evaluation system (Gilbertson, 2012).

Mississippi Governor Phil Bryant issued a report explaining Race to the Top, a federally funded program for schools meeting designated criteria. Race to the Top was the catalyst for the state's newly developed teacher assessment tool, M-STAR (Cooper, Parker, & Jordan, 2012). Cooper et al. (2012), who authored Governor Bryant's report on M-STAR, noted that a highly qualified teacher, as defined by NCLB, is an individual certified to teach a given subject but possibly not an effective teacher. This observation was consistent with Osler and Russell (2013), who recognized that there is a difference between a qualified teacher and an effective teacher; the two designations are not mutually inclusive.

M-STAR is a highly leveraged teacher assessment instrument based on Charlotte

Danielson’s Teaching Evaluation Instrument (Danielson, 2011b). One of the most

significant differences between M-STAR and Danielson’s instrument (Danielson, 2011b)

is that M-STAR (MDE, 2014a) adds a domain, Domain 2: Assessment.

Both the M-STAR (MDE, 2014a) and Danielson’s Framework for Teaching

Evaluation Instrument (Danielson, 2011b) include a component built in to measure

teacher’s knowledge of their teaching effectiveness. Danielson explained that teachers

when properly doing their job will be able to predict high levels of student learning

(Griffin, 2013).

Danielson’s organized her architecture framework into domains, components,

elements, and indicators (Alvarez & Anderson-Ketchmark, 2011). This study focused on

the framework’s four domains and 22 components. Domains were primary groupings of

teaching activities, while components describe the teaching characteristics of its domain

(Danielson, 2011b). Elements further described teaching characteristics of its higher

order component, and so on with elements and indicators. During an assessment,

evaluators score teachers at the component level of the framework (MET Project, 2010).

Concerning M-STAR, Danielson’s components are called standards. Prior studies had

determined that teachers found the four domains in Danielson’s Framework for Teaching

were useful indicators of teaching and learning (MET Project, 2010; Olson, 2015).

Summary and Future Needs in Mississippi Evaluation of Teacher Performance

In summary, professional trust within organizations has been researched in

education for several decades now, and researchers have concluded that trust is a main

factor in the following relationships: (a) parent to school (Glaeser et al., 1999; Schultz &

Cuneo, 2015), (b) teacher to principal (Bryk & Schneider, 2002; Goddard et al., 2001;

Tschannen-Moran, 2001), (c) teacher to teacher (Tschannen-Moran, 2014), and (d)

teacher to student (Bryk & Schneider, 2002). Moreover, research has determined that

where high levels of trust in schools exist, there are: (a) improving schools (Blase &

Blase, 2001), (b) more collaboration (Demir, 2015; Forsyth et al., 2006), (c) improved

student performance and teacher growth (Tschannen-Moran, 2014), and (d) more positive

perceptions of school reform and individual assessment (Forsyth et al., 2006).

Conclusion

Essential concepts in this review of the literature have been critical in identifying central themes in the research. These themes included the purpose of evaluation, effective teaching, components of evaluation, Charlotte Danielson's domain frameworks, evaluators, feedback, factors that influence teacher evaluation, and trust. Therefore, the purpose of this literature review chapter was not only to evaluate teacher assessment, but also to recognize critical teaching components and elements within a teacher assessment instrument as identified in the Teacher's Perspectives on Danielson's Framework for Teaching instrument.

In the literature on teacher evaluation, the purpose of evaluation emerged as a significant theme. The research reviewed indicated that the purpose of teacher evaluation was not only to enhance teaching, but also to ultimately improve student learning (Adams et al., 2015). Both the purpose of teacher evaluation and the audience of the evaluation vary according to formative and summative assessment. Teachers preferred formative assessment to identify strengths and weaknesses in their teaching (Weisberg et al., 2009), while stakeholders favored summative assessment for employment and global decisions (Mathers & Olivia, 2008). Interestingly, two essential elements of teacher evaluation were participant self-reflection as well as the purpose of the assessment (Chukwubikem, 2012). However, actual changes in teacher behavior

only occurred when teachers knew the specific steps needed to improve their areas of

weakness (Bambrick-Santoyo, 2012).

Effective teaching surprisingly differs from teaching effectiveness. The goal of

effective teaching is student growth (Creemers & Kyriakides, 2013; Layne, 2012),

whereas the goal of teaching effectiveness is teacher growth (OECD, 2011). The MET

project and other researchers (King & Watson, 2010; Mohan et al., 2008; Sanden, 2012) found that attributes contributing to teacher growth include: (a) excellent classroom management, (b) developing independent learners, (c) differentiated instruction, and (d) providing timely student feedback. Hunter introduced the practice of requiring teachers to first model lessons for students; students would then work in groups until they were able to manage independent practice (Danielson & McGreal, 2000). As teachers grow and employ change, the following areas may show improvement: (a) excellent classroom management, (b) developing independent learners, (c) differentiated instruction, (d) providing timely student feedback, and (e) improved student growth. Consequently, each of these is representative of an attribute of an effective teacher (King & Watson, 2010; Mohan et al., 2008; Sanden, 2012).

Empirical evidence included a multitude of frameworks and designs based on

teacher assessment. Among these frameworks, Charlotte Danielson's Framework for Teaching was the leader. Danielson organized her framework into domains, components, elements, and indicators (Alvarez & Anderson-Ketchmark, 2011). Danielson's framework focuses on four domains: (a) instruction, (b) classroom environment, (c) professional responsibility, and (d) planning and preparation. Domains

consist of teaching activities and elements which further describe teaching characteristics

of its higher-order component (Danielson, 2011b). Studies have determined that teachers

have found the four domains in Danielson’s Framework for Teaching to be useful

indicators of teaching and learning (MET Project, 2010; Olson, 2015).

Danielson’s (2011) research found that a good teacher evaluation system should

(a) contain a consistent description of good instruction, (b) a mutual perception of this

description, and (c) trained evaluators. Chukwubikem’s (2012) effective teacher

assessment supported Danielson’s findings. The effective teacher assessment includes (a)

consistent feedback to teachers for professional growth, (b) regular reporting to school

principals with information for building efficient educational systems, (c) recognizing

needs for professional growth, and (d) recognizing specific learning expectations.

Although Danielson’s Frameworks for Teaching do not incorporate all findings from

Chukwubikem (2012), her frameworks are the leader within teacher evaluation research.

Through teacher evaluation, it is apparent that an effective teacher is not someone

who holds a certification only, but rather someone who can skillfully demonstrate

planning, management of the educational environment, instruction, professional responsibilities, and

student growth (Smith et al., 2005).

According to Danielson (2007), teacher planning should consist of four

descriptors: (a) lessons designed with content knowledge and pedagogy of material, (b)

identification of student’s academic traits, (c) understanding of resource needs, and (d)

development of comprehensible lessons. To create a lesson plan for student success

building, it is crucial for the teacher to understand the student’s background and content

knowledge (Rock & Wilson, 2005). Differentiated instruction is vital in the lesson

planning process as is the execution of the plan (Williams et al., 2012).

A teacher’s ability to communicate is an example of instruction in a manner

which facilitates student comprehension of the teacher’s direction and meaning, as

provided by Danielson (2011a). Questions and discussions are main components during

classroom instruction (Peterson & Taylor, 2012). These teaching skills are vital to foster

higher-order thinking while also taking into account a student’s prior lessons and

knowledge (Peterson & Taylor, 2012). However, these skills can be underused if a

student does not feel comfortable in their environment to respond to questions.

The best classroom environment for students is a safe space (Holley & Steiner,

2005); therefore, it is critical that the teacher evaluator be cognizant of the classroom

environment. More importantly, the evaluator should recognize the teacher's interaction with and controlled manipulation of the classroom as they relate to students' learning processes and behavior (Marzano et al., 2003).

One of the most encompassing domains in teacher assessment is professional

responsibility (Danielson, 2011a). Aspects of the teaching profession that fall under professional responsibility include, but are not limited to, self-reflection, communication, record keeping, professional development, collaboration, integrity, and honesty (Danielson, 2011a). Danielson (2007) reiterated that these aspects, along with prioritizing student concerns, are key to a teacher's professional responsibility.

In a majority of school settings, evaluators are trained principals; however, trained peers have begun to be introduced into the observation process as well (Danielson & McGreal, 2000; Darling-Hammond, 2013). Olivia et al.

(2009) found that using mentors and peers as teacher evaluators helped school

administration with teacher assessment overload and also brought a new perspective to

the teacher assessment process. However, no matter who the evaluator, feedback has

proven to be essential in the evaluation process (Hussain et al., 2013).

On this note, Hussain et al. (2013) found that for an observation to be successful,

feedback is vital. The Bill and Melinda Gates Foundation (2013) discovered teacher

feedback is the pathway that leads to the greatest improvements in teaching. This

discovery was supported by findings from Mielke and Frontier (2012), who observed

teachers in their study used feedback to evaluate their practices for improvement. Finally,

upon the MET project’s completion, principles for measuring teacher effectiveness were

identified, a key among these being feedback (Bill & Melinda Gates Foundation, 2013).

Danielson and McGreal (2000) noted that feedback after a classroom observation

must be given immediately or soon afterward to be effective. Also, effective

feedback must be focused on improvements instead of mistakes or flaws. Therefore, the

post-conference evaluation objective should offer guidance for improvement (Danielson

& McGreal, 2000). A crucial element to feedback is the teacher’s response to the

feedback (Bambrick-Santoyo, 2012). The success of the evaluation feedback is indicated

by the change in practices made by the teacher in response to the feedback (Bambrick-

Santoyo, 2012). Darling-Hammond et al. (2012) noted that the frequency of feedback

was a deciding factor as to how earnestly the teachers reacted to feedback.

Another type of feedback from teacher assessments that has shown improved

teaching skills is self-analysis and self-reflection (Sandholtz, 2011). Teachers who use

self-analysis and self-reflection promote student understanding (Sandholtz, 2011).

Pieczura (2012) valued self-reflection based on her own experience as a teacher. She modified lessons in response to realizations she made through self-reflection, and she reported that these modifications improved her lessons.

In summation, trust was the final component of teacher evaluation in this chapter.

Tschannen-Moran (2014) pointed out that in the instance of trust in schools, there is a

powerful connection between trust and collaboration. Also, the two types of trust defined

are relational trust and social capital (Bryk & Schneider, 2003; Schultz & Cuneo, 2015).

Bryk and Schneider (2003) defined relational trust within the education community as the

trust between educators in general, while social capital was defined as trusting

relationships with communities (Schultz & Cuneo, 2015).

Trust in an organization and its leadership has productivity consequences and is

empirically supported in the literature (Bryk & Schneider, 2002; Goddard et al., 2001;

Tschannen-Moran, 2001). Bryk and Schneider (2002) recognized a notable relationship

between a school’s productivity and levels of trust. Trust is a worthy component of

teacher evaluation (Wang & Day, 2002).

Wang and Day (2002) recounted the importance of trust in a teacher evaluation

model as follows: “respect, safety, trust, and collaboration … are considered key

ingredients of effective teacher development, and hence need to be at the core of any

teacher observation model” (p. 14). Trust builds among stakeholders with clear and

consistent communication (Bryk & Schneider, 2003). Finally, according to Donaldson

and Donaldson (2012), for teachers to benefit from evaluator feedback, teachers must be

able to trust the evaluator. The themes of purpose of evaluation, effective teaching,

components of evaluation, Charlotte Danielson’s domain frameworks, evaluators,

feedback, factors that influence teacher evaluation, and trust were all found to be

empirically relevant when studying teacher evaluation.

CHAPTER III – RESEARCH DESIGN AND METHODOLOGY

Introduction

This study explored the relationships and differences between teachers' perspectives regarding Danielson's Framework for Teaching, professional trust, years of teaching experience, and state of Mississippi teacher licensure. This study also sought to describe the "what-is" status regarding teachers' perspectives of teaching components as established by Danielson's framework. This study anticipated informing policymakers' understanding of teachers' perspectives regarding evaluation components as established by Danielson's Framework for Teaching.

The purpose of this study was to gain an understanding of Mississippi Gulf Coast

K-12 teacher’s perspectives on the relative importance of teaching components as

identified in the Teacher’s Perspective on Danielson’s Framework for Teaching. This

study also explored the extent to which the relative components vary as a function of

teachers’ years of teaching experience, type of teaching certification, and perspective of

their professional trust in their assessment evaluator. Moreover, the objectives of this

study were to: (a) describe survey results as they relate to each variable being examined

and capture teacher perspective state-of-affairs as they relate to components of teaching

as established by Danielson’s Framework for Teaching, (b) explain data interaction and

determine findings, and (c) validate findings observed by way of expert panel, pilot

study, and statistical methods (Gay, Mills, & Airasian, 2015). Following research best practices, the researcher was careful to provide comprehensive contextual information that would serve both the reader and any researcher interested in replication.

This chapter addresses the methodology proposed for this study, beginning with

the research questions and research hypotheses of this study. Next, instrument

development and pilot study findings are presented. Lastly, study procedures and data

analysis follow a description of the participants and the research setting.

Research Questions and Hypotheses

Research Questions

The research questions used were as follows:

RQ1: To what degree of importance do Mississippi Gulf Coast K-12 teachers rate teaching components as identified in the Teacher's Perspectives on Danielson's Framework for Teaching (TPDFT) instrument?

RQ2: Does the degree of importance that Mississippi Gulf Coast K-12 teachers assign to teaching components, as identified in the Teacher's Perspectives on Danielson's Framework for Teaching (TPDFT) instrument, differ based on the teacher's years of teaching experience?

RQ3: Does the degree of importance that Mississippi Gulf Coast K-12 teachers assign to teaching components, as identified in the Teacher's Perspectives on Danielson's Framework for Teaching (TPDFT) instrument, differ based on the teacher's type of Mississippi teaching certification?

RQ4: Does the degree of importance that Mississippi Gulf Coast K-12 teachers assign to teaching components, as identified in the Teacher's Perspectives on Danielson's Framework for Teaching (TPDFT) instrument, differ based on the teacher's perspectives of their professional trust in their assessment evaluator?

Research Hypotheses

RH1: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers as identified

in the Teacher’s Perspectives on Danielson’s Framework for Teachers instrument.

RH2: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers based on the

teacher’s years of teaching experience as identified in the Teacher’s Perspectives on

Danielson’s Framework for Teachers instrument.

RH3: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers based on the

teacher’s type of Mississippi state teaching certification as identified in the Teacher’s

Perspectives on Danielson’s Framework for Teachers instrument.

RH4: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers based on the

teacher’s perspectives of their professional trust as identified in the Teacher’s Perspective

on Danielson’s Framework for Teachers instrument.

Research Design

The researcher chose both prediction and description categories for this study.

According to Gall, Gall, and Borg (2006), outcomes of educational research fall into four

categories: (a) description, (b) prediction, (c) improvement, and (d) explanation.

Descriptive studies lend themselves to survey methods for collecting descriptive data

(Gall et al., 2006). Furthermore, descriptive studies have demonstrated their importance

and significance regarding educational research (Gay et al., 2015). Both quantitative and

qualitative research make use of descriptive methods.

This study explored relationships between Danielson's teaching domains as they relate to teachers' perspectives. Moreover, this study established predictions grounded in

generalizations obtained through correlational research. For data collection, this study

used a voluntary cross-sectional field survey using participants convenient for the

researcher. To reduce survey non-responses and study bias associated with the same, the

researcher used a face-to-face approach to survey administration. The study’s sample

frame consisted of Mississippi certified teachers teaching in the two Mississippi coast

counties holding traditional and non-traditional certifications who were over 18 years of

age, including men and women with all years of teaching experience in grades K-12. The

researcher conducted a comparative study using a quantitative design approach.

When using non-probability sampling, the primary challenge at the sampling

stage is obtaining a representative sample. This study reduced bias through sample representation (selecting school districts that are representative of the coast) and diversity (surveying diverse school district teacher populations). For this study, an a priori power

analysis was performed using a power of 0.8. Based on this analysis, a sample size of 119

was needed to detect an effect size of 0.5. Sample size was calculated using G*Power

3.0.3 (Faul, Erdfelder, Lang, & Buchner, 2007).
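As an illustration only, an analogous a priori power computation can be approximated outside G*Power with the open-source statsmodels library in Python. The sketch below assumes a two-sided independent-samples comparison with a medium effect of d = 0.5, alpha = .05, and power = .80; the exact G*Power 3.0.3 test family and settings used for this study are not restated here, so the resulting value is not expected to match the sample size of 119 reported above.

    # Minimal a priori power-analysis sketch using statsmodels instead of G*Power.
    # Assumptions for illustration: two-sided independent-samples comparison,
    # medium effect size d = 0.5, alpha = .05, desired power = .80.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.5,          # Cohen's d (assumed medium effect)
        alpha=0.05,               # significance level
        power=0.80,               # desired statistical power
        ratio=1.0,                # equal group sizes
        alternative="two-sided",
    )
    print(f"Required sample size per group: {n_per_group:.1f}")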

The researcher determined that the best approach to collecting data for this study would be a survey instrument. Through an extensive review of research, Charlotte Danielson's Framework for Teaching instrument was located; however, questions from the 2013 version of Charlotte Danielson's Framework for Teaching addressed many, but not all, of this study's domains. Because the instrument was revised, a pilot study was conducted to evaluate the revised instrument's reliability and validity. A panel of experts evaluated content validity, while reliability and face validity were tested using the pilot study. Coefficient alpha was used to evaluate reliability for the subscales, with a decision rule of .70 based on DeVellis's (2016) recommendation. The instrument development and the findings of the pilot study are presented below, followed by the procedures used in the current study.
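As a minimal sketch of the reliability check described above, the following Python example computes coefficient (Cronbach's) alpha for one subscale and compares it against the .70 decision rule. The item responses shown are hypothetical; this illustrates only the statistic itself and is not the analysis code used in the study.

    # Illustrative computation of coefficient (Cronbach's) alpha for one domain.
    # The responses below are hypothetical 5-point Likert scores.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array with rows = respondents, columns = items in one domain."""
        k = items.shape[1]                                # number of items
        item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
        total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    domain_items = np.array([
        [5, 4, 5, 4],
        [3, 3, 4, 3],
        [4, 4, 4, 5],
        [2, 3, 2, 3],
        [5, 5, 4, 5],
        [4, 3, 4, 4],
    ])
    alpha = cronbach_alpha(domain_items)
    print(f"alpha = {alpha:.3f} (decision rule: acceptable if >= .70)")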

Pilot Study

Draft Instrument Description

Through an extensive review of research, as reported in the previous chapter,

Charlotte Danielson’s Framework for Teaching was the framework supported by a

significant portion of the empirical research (Alvarez & Anderson-Ketchmark, 2011;

Danielson, 2011a; Dechert et al., 2015). This framework is widely accepted by the

educational community as a seminal work and is the basis for many teacher evaluation

systems (Alvarez & Anderson-Ketchmark, 2011; Danielson, 2011a; Danielson &

McGreal, 2000; Dechert et al., 2015). After a comprehensive review of the published

empirical research as reported in Chapter Two, the following domains were identified:

planning and preparation, classroom environment, instruction, professional responsibility,

and professional trust (Bryk & Schneider, 2003; Danielson & McGreal, 2000).

Based on this, it made sense to utilize a measure of Charlotte Danielson's Framework for Teaching. However, upon review of the literature and Danielson's work, a sufficient existing assessment was not identified; the 2013 version of Charlotte Danielson's Framework for Teaching included only four of the five domains identified in Chapter Two. The Danielson Group LLC was contacted to obtain permission

to edit the 2013 version of Charlotte Danielson’s Framework for Teaching. The

Danielson Group LLC granted permission. (See Appendix B).

After permission was granted, the researcher developed a Likert scale questionnaire survey instrument for use in this study. This instrument is a derivative of the 2013 version of Charlotte Danielson's Framework for Teaching. The researcher translated questionnaire statements through an editing process using components from Charlotte Danielson's Framework for Teaching, and the translation of each item was documented (See Appendix C). Isaac and Michael (1997) stated that five positions are most commonly used for a Likert scale; the scale ranged from 1 to 5. However,

additional items to assess teacher’s perspectives of their professional trust in their

assessment evaluator would need to be developed and added. The researcher developed a

question to evaluate the domain of professional trust based on behaviors identified

through the review of the literature. The question to measure the trust of evaluators was:

“I have a professional trust in my assessment evaluator.”

The draft instrument, Teacher’s Perspectives on Danielson’s Framework for

Teaching – Content Review (TPDFT-CR), consisted of self-report Likert-scale items

organized into multiple sections. The first section included informed consent,

instructions, and research study disclosure (See Appendix D). The second section

collected demographic information from participants and started with four questions. The

third section collected information about professional trust defined as four criteria:

respect, competence, personal regard for others, and integrity (Forsyth et al., 2006). The

last section included 26 questions which assessed the four domains of planning and

preparation, classroom environment, instruction, and professional responsibility. To

determine teacher’s perspectives for each component, the researcher selected a 5-point

Likert scale with possible scores ranging from one (Not Important at all) to five (Very

Important). A Likert scale was used to give a raw score value to the participant’s

answers. This format allowed for the precise distinction between items that are important

and not important. After the researcher constructed the survey, a content review was

conducted via an expert panel.
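To illustrate how 5-point ratings translate into the composite domain scores referenced in the research hypotheses, the brief Python sketch below sums hypothetical item ratings within each domain. The column names and domain groupings are invented placeholders and do not reproduce the actual TPDFT item wording.

    # Sketch of composite domain scoring from 5-point Likert items
    # (1 = Not Important at all, 5 = Very Important). Names are hypothetical.
    import pandas as pd

    responses = pd.DataFrame({
        "plan_q1": [5, 4, 3], "plan_q2": [4, 4, 2],   # planning and preparation items
        "env_q1":  [5, 5, 4], "env_q2":  [4, 3, 3],   # classroom environment items
    })

    domains = {
        "planning_preparation": ["plan_q1", "plan_q2"],
        "classroom_environment": ["env_q1", "env_q2"],
    }

    # Composite score per participant = sum of that domain's item ratings
    composites = pd.DataFrame(
        {name: responses[cols].sum(axis=1) for name, cols in domains.items()}
    )
    print(composites)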

Draft Instrument Expert Content Review

The purpose of this expert content review was to establish content validity of the

Teacher’s Perspectives on Danielson’s Framework for Teaching – Content Review

(TPDFT-CR). Content validity is a process of determining if the items on the TPDFT

have both item and sampling validity (Gay et al., 2015). An expert panel was selected

which included teacher’s and researcher’s familiar with Charlotte Danielson’s

Framework for Teaching. An email request was sent to seven individuals to review the

instrument (Appendix E). Five of the seven experts responded and participated on the

content review committee (Appendix F).

Content review form. To receive consistent data from experts, a form was developed based on Ross's (2011) content review form (Appendix G). This structured form

allowed experts to rate if assessment items were representative of the content domain as

well as item clarity. Additional areas were available for comments regarding directions, content coverage, and formatting, along with general comments. The researcher's instrument maintained the original meaning of the language in Charlotte Danielson's Framework.

The content review form began with a description of the dissertation and the

purpose of the assessment instrument (See Figure 1). The form followed with a review of

the demographic section. The reviewers were asked to rate each question along with

possible responses based on wording clarity with an opportunity to provide additional

comments and clarification. Clarity of wording was rated on a 4-point scale of 1 (item is

not clear) to 4 (item is clear) (See Figure 2). At the end of this section, reviewers had the

opportunity to comment on content coverage, items to add, formatting, and general

comments (See Figure 3).

Figure 1. Content review form directions.

Figure 2. Content review form for demographics question 1 and response stem.

Figure 3. Content review overall comment demographics section.

The last section of the content review addressed the domains of trust, planning

and preparation, classroom environment, instruction, and professional responsibility

which asked reviewers to rate the clarity of items. However, these sections also asked

reviewers to rate items on how well items represent the domain content on a four-point

scale. Each domain section began by clearly stating the domain and study definition of

the variable (See Figure 4). Item representatives of domain content ranged from 1 (item is

not representative) to 4 (item is representative) (See Figure 4). At the end of every

content section, reviewers had the opportunity to comment on domain coverage, items to

add, section formatting, and general comments.

Figure 4. Content review format for trust domain.

The researcher provided reviewers both electronic and paper copies of the content

review form along with a copy of the Teacher’s Perspectives on Danielson’s Framework

for Teaching – Content Review (TPDFT-CR) instrument (Appendix D).

Content review results from demographic section. Upon review of the expert

comments, the majority of items of the demographic section were found to be “clear,”

needing no revision. However, one out of the five reviewers indicated that clarification of

teacher certification and additional detail regarding teacher certification was needed.

Reviewer feedback and comments are combined and reported in Appendix H.

Demographic section revision is presented in Table 1.

Table 1

Demographic Section Revision

Comment | Item | Revision
Clarity | What is your teacher's certification based on? | According to your current subject area, what is your teacher's certification based on?

Content review results from professional trust. The domain required additional

items based on the expert review. Within the professional trust domain, reviewer

comments addressed three main concerns: domain content coverage, clarification, and

formatting. Three out of five reviewers expressed concern about the limited number of

items within the domain of trust. All expert reviewers agreed the trust question listed was

clear; however, the reviewers were concerned that one question did not provide sufficient

coverage of the domain. Reviewer One stated: “A single item cannot measure a concept

that has four distinct criteria. Similarly, a single item does not constitute a section.

Integrity, respect, competence, and regard for student’s families and colleagues are not

assessed.” Reviewer Two shared:

Trust is difficult item to measure and it is very personal. It is also a multi-

dimensional construct. For example, as you stated above from Forsyth, Barnes,

and Adams (2006), it includes respect, competence, personal regard, and integrity.

At the very least you should consider turning these four dimensions into questions

and …. then use a composite mean or sum to represent a “trust” score. (Reviewer

2)

The reviewers commented about clarity and formatting. One reviewer commented

in this section that the term “teacher evaluator” may need to be clarified. Regarding

instrument formatting, two reviewers suggested moving the trust section within the

demographic section. Reviewers were clear that trust should be presented and formatted

as a domain. Thus, professional trust section formatting should mirror that of other

domains included on the assessment. One reviewer also noted that the instrument skipped

from item five to item seven omitting the number six. Reviewer feedback and comments

are combined and reported in Appendix H. Professional trust revisions are presented in

Table 2.

Table 2

Professional Trust Domain Revisions

Comment | Item | Revision
Clarification | "Teacher Evaluator" | No revision needed
Additional items needed | I have a professional trust in my assessment evaluator. | Item removed, five items added
Item added | | I think having a primary assessment evaluator that is trustworthy is -
Item added | | I think having a primary assessment evaluator that is worthy of my respect is -
Item added | | I think having a primary assessment evaluator that is competent to assess my teaching is -
Item added | | I think having a primary assessment evaluator that expresses personal regard for others is -
Item added | | I think having a primary assessment evaluator that is a person of integrity is -
Format | Missing number 6 | Renumbered items
Format | Professional Trust Section | Reformatted the professional trust section as a domain

Content review results from planning and preparation. Upon review of the data,

the majority of items in the planning and preparation domain were found to be “clear,”

needing no revision and representative of the domain. The comments within the planning

and preparation domain addressed one major concern: item clarification. Reviewer One

pointed out that both the first and third question included in this domain assessed multiple

concepts and suggested a split of the questions into multiple questions. Reviewer

feedback and comments are combined and reported in Appendix H. Planning and

preparation revisions are presented in Table 3.

Table 3

Planning and Preparation Domain Revisions

Comment | Item | Revision
Clarification | I think having knowledge of content and pedagogy is - | I think having knowledge of contents and teaching strategies is -
Clarification | I think setting or establishing instructional outcomes is - | I think establishing instructional outcomes is -

Content review results from classroom environment. Upon review of the data, the

majority of items in the classroom environment domain were found to be “clear,” needing

no revision and representative of the domain. The comments within the classroom

environment domain addressed item clarification and formatting. Reviewer One pointed

out that several items included in this domain may need clarification or revision to

provide clarity of the questions focus. The last item within this section also contained the

phrase “I think” twice. Reviewer feedback and comments are combined and reported in

Appendix H. Classroom environment revisions are presented in Table 4.

Table 4

Classroom Environment Domain Revisions

Comment | Item | Revision
Clarification | I think creating a classroom environment of respect and rapport is - | Creating a classroom environment of student respect and rapport is -
Format | I think I think organizing physical instructional space is - | I think organizing physical instructional space is -

Content review results from instruction. Upon review of the data, the majority of

items in the instruction domain were found to be “clear,” needing no revision and

representative of the domain. The one comment within the instruction domain addressed

item clarification. Reviewer One pointed out that item four in this domain may need

clarification or revision to provide clarity of the question focus. By replacing “in” with

“to guide” in this item provided the purpose of using student assessment. Reviewer

feedback and comments are combined and reported in Appendix H. Instruction domain

revisions are presented in Table 5.

Table 5

Instruction Domain Revision

Comment | Item | Revision
Clarification | I think using student assessments in instruction is - | using student assessment to guide instruction is -

Content review results from professional development. Upon review of the data,

the majority of items in the professional development domain were found to be “clear,”

needing no revision and representative of the domain. The one comment within the

professional responsibilities domain addressed formatting. Reviewer One pointed out that

item three may contain a grammar error. Upon review, the item was correctly formatted.

Reviewer feedback and comments are combined and reported in Appendix H. No

revisions were needed.

Content review results from domain importance. Upon review of the data, all

items in the section of domain importance were found to be “clear,” needing no revision

and representative. The one suggestion within the section of domain importance

addressed formatting. Reviewer One suggested asking about the importance of each

domain at the end of each section rather than at the end of the survey. Upon review, the

items were added to the end of each corresponding domain. Reviewer feedback and

comments are combined and reported in Appendix H. Revisions are presented in Table 6.

The revised instrument TPDFT-PS can be reviewed in Appendix I.

Table 6

Domain Importance Revisions

Comment | Item | Revision
Formatting | Remove the domain numbers | Domain numbers were removed, and items revised
Formatting | Restructure formatting | Ask about the importance of each domain at the end of each section rather than at the end of the survey
Clarification | In general, how important is Domain 1, Planning and Preparation? | In general, how important is the domain of planning and preparation?
Clarification | In general, how important is Domain 2, Classroom Environment? | In general, how important is the domain of classroom environment?
Clarification | In general, how important is Domain 3, Instruction? | In general, how important is the domain of instruction?
Clarification | In general, how important is Domain 4, Professional Responsibilities? | In general, how important is the domain of professional responsibility?

Revised Instrument Pilot Study

Procedures. Upon district approval, the researcher contacted a faculty member at

each school within the River View district to request they administer a pilot of five

surveys at their school. The researcher compiled packets that included instructions,

surveys (See Appendix I), and a script for the administration of the research survey (See

Appendix J) for each school. These packets were distributed at each school and picked up

the following week.

Participants. Participants consisted of 29 teachers (years of teaching experience

ranging from 1 to 28) from an urban school district within one of Mississippi's coastal counties.

The pilot study included teachers from kindergarten through 12th grade. The instrument

(Appendix I) was administered to teachers using standardized directions.

Of the 29 teachers who participated, 14 reported all domains as being important, as indicated by perfect or near-perfect scores. The highest possible total score for the TPDFT-PS was 155. However, when the internal consistency analysis was run using all 29 participants, the coefficient alpha values for the total measure and for each domain were unreliable due to a lack of score variability. The reliability of the domain scales ranged from .302 to .755. The researcher removed participants with scores of 152 and higher from the data set due to the lack of score variability and the resulting reliability coefficients. Final participants included a total of 15 teachers

with years of teaching experience ranging from 2 to 26 years (See Table 7).

Table 7

Demographic Characteristics of Pilot Study Participants

n %

Grade Level

K-3 Lower Elementary 5 33.3

4-6 Upper Elementary 2 13.3

7-8 Middle School 3 20

9-12 High School 5 33.3

Type of Teacher Certification

Traditional Teacher Program 8 53.3

Alt Teacher Cert (MAT, etc.) 5 33.3

Reciprocity 2 13.3

Highly Qualified Certification

21 hours course work 6 40.0

Praxis / Emer Cert 3 20.0

Trad teacher cert (k-3) 6 40.0

Reliability. Internal consistency reliability coefficients of each domain score were

calculated using coefficient alpha. The reliability of the domain scale ranged from .79 to

.87 (n = 15). Domain reliability statistics are presented below in Table 8.

Table 8

Domain Reliability Statistics

Cronbach’s

Alpha

N of

Items

Planning and Preparation .854 7

Classroom Environment .798 6

Instruction .691 6

Professional Responsibility .805 7

Professional Trust .794 5

Final instrument revisions. Several participants in the pilot study asked

clarification questions regarding the demographic question number 4, which stated:

“According to your current subject area, what is your teacher certification based on?” The

researcher added a question to the demographic section to add clarity to question four. This question asked teachers to identify their current subject area before asking the basis

of their teacher certification (See Figure 5).

Figure 5. Demographic information revision.

The pilot study began establishing content validity through the evaluation of the

TPDFT-CR by experts in the field. Overall, the reliability of the pilot study proved to be

promising, but the researcher needed to explore this reliability further.

Study Design

The researcher conducted the study during the 2016-2017 school year. The data

were gathered in the field at two urban school districts within Mississippi's two coastal

counties.

Participants and Setting

According to the Census Bureau's 2012 estimates, the state of Mississippi has a population of 2.9 million and is composed of three regions, each uniquely different. The Mississippi Gulf Coast is a region of blended cultures consisting of coastal, tourism, and military influences. The region, composed of three counties, borders Louisiana to the

west and Alabama to the east.

The target population for this study was K-12 certified teachers teaching in

Mississippi’s coastal counties. The researcher provided the following target population

demographics by way of calculation from the 2014-2015 Report to PEERS: (a) 4,209

total teachers, (b) 11 separate school districts, (c) an average teacher experience of 11.85 years, and (d) an average annual teacher salary of $47,430 (MDE, 2015). The source

population for this study was two separate school districts.

The researcher drew participants for the study from a convenience sample of

lower, middle, and high school K-12 teachers employed in two urban school districts

along the Mississippi Gulf Coast with complementary demographics and within the target

population. The school districts were located in middle- to upper-income suburban areas. At the

time of the study, the River View community had an estimated population of 17,442, which included a student population of approximately 5,859 (U.S. Census Bureau, 2017). The U.S. Census Bureau estimated that the median household income for the River View school district was $54,264, with 8.7% of residents considered impoverished. The data indicated that 45% of students would be eligible for free and reduced lunch. River

View was composed of English language learners (4.7%) and students with disabilities

(12.8%). The student population was Caucasian (75.9%) and Black (12.8%), followed by

Hispanic (5.5%), 2 or more races (1.7%), Asian (3.4%), American Indian (.5%), and

Native Hawaiian (.2%) (MDE, 2018).

The second school district included in the study, Beach Front, had very similar

characteristics to River View. Beach Front, located in a suburban community, is within

driving distance of a major city. At the time of the study, the community had an estimated population of 4,613, which included a student population of approximately 2,026 (U.S. Census Bureau, 2017). The U.S. Census Bureau estimated that the median household income for Beach Front was $46,802, with 5.2% of residents considered impoverished. The 5.2% poverty rate contrasts with the 21.5% of students calculated as eligible for free and reduced meals. Beach Front was composed of English language learners (3.1%) and students with disabilities (10.1%).

The student population was mostly Caucasian (61.0%) and Black (28.7%), followed by

two or more races (6.0%), and Asian (2.1%) (MDE, 2018).

A sample size of 119 teachers was the required minimum for a medium effect size

with the statistical power of .80 at the .05 alpha level (Faul et al., 2007). The River View

school district employed a total of 384 teachers, while Beach Front school district


employed 138 teachers (MDE, 2018). The researcher gave all certified teachers within both districts an opportunity to participate.
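The minimum sample size above was determined a priori with G*Power (Faul et al., 2007). As an illustrative sketch only, the following Python code shows how a comparable computation can be carried out with scipy for an overall multiple-regression F test; the inputs (Cohen's f2 = .15 for a medium effect and four predictors) are assumptions chosen for illustration, so the n it returns will not necessarily equal the 119 reported here.

from scipy.stats import f, ncf

def required_sample_size(f2=0.15, n_predictors=4, alpha=0.05, target_power=0.80):
    # Smallest n whose overall regression F test reaches the target power,
    # using Cohen's convention for the noncentrality parameter (lambda = f2 * n).
    for n in range(n_predictors + 2, 2000):
        df1, df2 = n_predictors, n - n_predictors - 1
        f_crit = f.ppf(1 - alpha, df1, df2)            # critical F at the chosen alpha
        power = 1 - ncf.cdf(f_crit, df1, df2, f2 * n)  # power under the noncentral F
        if power >= target_power:
            return n, power
    return None

print(required_sample_size())  # smallest n meeting the power target under these assumed inputs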

A diverse sample of 276 teachers, with years of teaching experience ranging from 1 to 45, took the TPDFT. Of the 276 teachers who participated, 91 reported all domains as being very important, as indicated by a score of 155. Because these responses lacked score variability, and because coefficient alpha reliability levels were not met in the pilot study when such scores were included, surveys with scores of 155 were removed, leaving 185 usable surveys (see Table 9).

Table 9

Participant Demographics (n = 185)

                                        n        %
Years of Teaching Experience
  0 - 5 Years                          45       24
  6 - 10 Years                         44       24
  11 - 15 Years                        33       18
  16 - 20 Years                        26       14
  21 - 25 Years                        20       10
  26 - 30 Years                        11        6
  31 - 35 Years                         4        3
  36 - 40 Years                         1       .5
  41 - 45 Years                         1       .5
Grade Level
  K-3 Lower Elementary                 61       33
  4-6 Upper Elementary                 39       21
  7-8 Middle School                    41       22
  9-12 High School                     40       22
  Other                                 4        2
Type of Teacher Certification
  Traditional Teacher Program         120       65
  Alt Teacher Cert (MAT, etc.)         43       23
  Reciprocity                          22       12
Highly Qualified Certification
  21 hours course work                 84       45
  Praxis / Emer Cert                   53       29
  Trad teacher cert (K-3)              48       26


Procedure

The researcher began by receiving IRB approval for the study through the

University of Southern Mississippi’s Institutional Review Board (See Appendix K). After

securing approval from the university, the researcher contacted both school districts.

First, the researcher emailed the superintendent of the River View school district to

determine the best way to contact school principals to discuss the most efficient way to

conduct the study within their school. The researcher next contacted the Beach Front

school district research liaison to discuss survey distribution. The researcher contacted

principals via email followed with a phone call to determine the best faculty meeting at

which to conduct the study. Individual schools and district needs were considered to

ensure that survey distribution had a minimal effect on the school’s daily routine.

Teachers completed the questionnaire at a time of the administrator's choosing. Standardized directions (see Appendix J) were read before each administration of the survey.

One week before the scheduled survey administration time, the researcher

confirmed survey distribution with the building principal. The researcher or researcher’s

representative read standardized directions to the participants (Appendix J). The

researcher or the researcher’s representative answered questions from the assembled

group if needed. The teachers completed the TPDFT. Completion of the survey was

voluntary, and permission was assumed if the participant completed the survey.

Completion took approximately 10 to 15 minutes. The teachers placed the questionnaire

in a box upon leaving.


Data Analysis

Data analysis for this study was conducted using SPSS (IBM Corp, 2015). Four multiple regressions were used to determine whether years of teaching experience, type of Mississippi teacher certification, and professional trust significantly predicted teachers' composite scores on the planning and preparation, classroom environment, instruction, and professional responsibility domains. An alpha level of .05 was used to determine significance, and eta squared was used to estimate effect size. Furthermore, a repeated measures analysis of variance was used to determine whether there were significant differences between teaching components as identified in the TPDFT, again using an alpha level of .05 for significance and eta squared for effect size.
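The analyses described above were carried out in SPSS (IBM Corp, 2015). The following minimal sketch shows how the four multiple regressions could be specified in Python with the statsmodels formula API; the file name and column names (experience, alt_cert, reciprocity, trust, and one composite score per domain) are hypothetical stand-ins for the TPDFT variables, not the actual data set.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wide-format file: one row per teacher, column names assumed for illustration only.
df = pd.read_csv("tpdft_composites.csv")

predictors = "experience + alt_cert + reciprocity + trust"
for domain in ["plan_prep", "environment", "instruction", "responsibility"]:
    model = smf.ols(f"{domain} ~ {predictors}", data=df).fit()
    # For a single regression model, R-squared serves as the eta-squared effect size for the model as a whole.
    print(domain, round(model.fvalue, 3), round(model.f_pvalue, 3), round(model.rsquared, 3))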


CHAPTER IV – RESULTS

Introduction

This chapter presents findings from data collected through the use of the TPDFT.

The goal of this study was to examine if differences existed between domains as

identified in the TPDFT, and if these findings differed statistically as a function of years

of teaching experience, type of Mississippi teaching certification, and degree of

professional trust. The variables included in the study were teacher’s years of teaching

experience, type of Mississippi teacher certification, professional trust, and Danielson’s

framework domains. There were 185 viable surveys collected from 10 schools in the

Southern region of Mississippi. The remainder of this chapter presents the results of the data analysis.

Organization of Data Analysis

This chapter outlines the statistical procedures and findings from this study. The

purpose of this study was to (a) gain understanding regarding Mississippi Gulf Coast K-

12 teacher’s perspectives on the relative importance of teaching components as identified

in the Teacher’s Perspective on Danielson’s Framework Instrument, and (b) explore the

extent to which the relative importance of the components vary as a function of the

teachers’ years of certified teaching experience, type of state teaching certification, and

level of professional trust in their evaluator. Descriptive statistics are presented below,

followed by the analysis of the four research questions.


Research Questions

The researcher used the following research questions noted below:

RQ1: What is the degree of importance that Mississippi Gulf Coast K-12 teachers

rate teaching components as identified in the Teacher’s Perspectives on Danielson’s

Framework for Teaching (TPDFT) instrument?

RQ2: What is the degree of importance that Mississippi Gulf Coast K-12 teachers

rate teaching components as identified differently in the Teacher’s Perspectives on

Danielson’s Framework for Teaching (TPDFT) instrument based on the teacher’s years

of teaching experience?

RQ3: What is the degree of importance that Mississippi Gulf Coast K-12 teachers

rate teaching components as identified differently in the Teacher’s Perspectives on

Danielson’s Framework for Teaching (TPDFT) instrument based on the teacher’s type of

Mississippi teaching certification?

RQ4: What is the degree of importance that Mississippi Gulf Coast K-12 teachers

rate teaching components as identified differently in the Teacher’s Perspectives on

Danielson’s Framework for Teaching (TPDFT) instrument based on the teacher’s

perspectives of their professional trust in their assessment evaluator?

Research Hypotheses

RH1: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers as identified

in the Teacher Perspectives on Danielson’s Framework for Teaching instrument.


RH2: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers based on the

teacher’s years of teaching experience as identified in the Teacher Perspectives on

Danielson’s Framework for Teaching instrument.

RH3: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers based on the

teacher’s type of Mississippi state teaching certification as identified in the Teacher

Perspectives on Danielson’s Framework for Teaching instrument.

RH4: There will be a statistically significant difference in the composite scores

that encompass the respective teaching domains of planning and preparation, classroom

environment, instruction, and professional responsibilities between teachers based on the

teacher’s perspectives of their professional trust as identified in the Teacher Perspectives

on Danielson’s Framework for Teaching instrument.

Results Research Question One

Research Hypothesis One

Research hypothesis one was as follows: There will be a statistically significant

difference in the composite scores that encompass the respective teaching domains of

planning and preparation, classroom environment, instruction, professional

responsibilities, and trust between teachers as identified in the Teacher Perspectives on

Danielson’s Framework for Teaching instrument. A repeated measures ANOVA was

used to analyze the first research hypothesis whether there were significant differences


between composite scores of the respective teaching domains of planning and

preparation, classroom environment, instruction, professional responsibilities, and trust.

Assumption testing for normality, multicollinearity, and homoscedasticity was conducted before the analysis, and the assumptions were found tenable.

Descriptive statistics. The means and standard deviations for the sample (n = 185) were: (a) planning and preparation (M = 4.65, SD = 0.34), (b) classroom environment (M = 4.75, SD = 0.31), (c) instruction (M = 4.78, SD = 0.30), (d) professional responsibility (M = 4.57, SD = 0.39), and (e) trust (M = 4.60, SD = 0.41). For the remaining variables, the means and standard deviations were: years of experience (M = 13.06, SD = 8.91), Mississippi teaching certification (M = 1.47, SD = 0.70), reciprocity (M = 0.12, SD = 0.32), and alternate route (M = 0.23, SD = 0.42). These values are shown in Table 10.

Table 10

Descriptive Statistics

M SD n

Plan Preparation 4.651 .359 185

Classroom Environment 4.752 .313 185

Instruction 4.784 .303 185

Professional Responsibility 4.570 .385 185

Trust 4.604 .412 185

Teaching Experience 13.06 8.913 185

Analysis. A one-way within-subjects (repeated measures) ANOVA was conducted

to compare the teaching domains of planning and preparation, classroom environment,

instruction, professional responsibilities, and trust. Sphericity was analyzed using


Mauchly’s Test. The test reported a high significance, W = .428, p < .001, suggesting that

the observed matrix not have approximately equal variances and or covariances. This

result suggests that using an uncorrected repeated measures ANOVA F-test would

increase the probability of a Type I Errors, thus supporting the research hypothesis while

it was true. Several corrections were considered. However, the Greenhouse-Geisser

correction was selected based on the data. This correction did not affect the computed F-

statistic, but instead raised the critical F value needed to accept the research hypothesis.

The corrective coefficient using Greenhouse-Geisser was ε = .664 Table 11 summarizes

the results of Greenhouse-Geisser correction.

Table 11

Repeated Measures Analysis of Variance: Domain

Effect                    SS        MS        df          F         p
Greenhouse-Geisser       6.371     2.398      2.656     20.033    .001
Error                   58.515      .120    488.752
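The ε value reported above was taken from the SPSS output. As a minimal sketch of what the Greenhouse-Geisser coefficient represents, the following Python function computes ε directly from a wide-format score matrix (one row per teacher, one column per repeated measure); the data file name is a hypothetical placeholder.

import numpy as np

def greenhouse_geisser_epsilon(scores):
    # scores: (n_subjects, k_conditions) array of repeated measures.
    k = scores.shape[1]
    cov = np.cov(scores, rowvar=False)  # k x k sample covariance matrix
    # Double-center the covariance matrix before applying the epsilon formula.
    centered = cov - cov.mean(axis=0) - cov.mean(axis=1)[:, None] + cov.mean()
    return np.trace(centered) ** 2 / ((k - 1) * np.sum(centered ** 2))

# scores = np.loadtxt("domain_scores.csv", delimiter=",")  # hypothetical data file
# eps = greenhouse_geisser_epsilon(scores)
# Corrected degrees of freedom: eps * (k - 1) and eps * (k - 1) * (n - 1),
# which is how fractional df such as those in Table 11 arise.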

A repeated measures ANOVA with a Greenhouse-Geisser correction showed that the mean scores differed significantly across teaching domains, F(2.66, 488.75) = 20.03, p < .001. Post hoc tests using the Bonferroni correction revealed several significant differences between teaching domains (see Table 12).


Bonferroni post hoc pairwise comparisons showed two significant differences related to the domain of trust. These analyses showed that participants' trust in their evaluator was rated significantly lower than both the domain of instruction (mean difference = -.181, p < .001, η2 = .49) and the domain of classroom environment (mean difference = -.148, p < .001, η2 = .36).

When assessing the domain of planning and preparation using the Bonferroni post

hoc, pairwise comparisons showed significant differences with all domains except trust.

Teachers scored the domain of planning and preparation significantly lower than the

domain of instruction (mean difference = -.133, p < .001, η2 = .40) and classroom

environment (mean difference = -.101, p < .001, η2 = .30). However, participants scored

the domain of planning and preparation significantly higher than the domain of

professional responsibility (mean difference = .081, p = .026, η2 = .22).

To further extend these findings, when examining the data regarding the domain of instruction, the researcher found significant differences with all domains except classroom environment. Unlike planning and preparation, teachers scored instruction significantly higher than each of these domains. Mean differences ranged from the smallest, with planning and preparation (mean difference = .133, p < .001, η2 = .40), to the largest, with professional responsibility (mean difference = .214, p < .001, η2 = .60).


Table 12

Significant Mean Differences, Bonferroni Post Hoc

                                                                    95% simultaneous CI
Comparison                                 Mean Difference    p      η2     Lower     Upper
Trust - Instruction                            -.18          .001    .49    -.277     -.084
Trust - Class Environment                      -.15          .001    .36    -.253     -.043
Plan Prep - Instruction                        -.13          .001    .40    -.192     -.074
Plan Prep - Class Environment                  -.10          .001    .30    -.166     -.035
Plan Prep - Prof Responsibility                 .08          .026    .22     .006      .157
Instruction - Prof Responsibility               .21          .001    .60     .146      .283
Class Environment - Prof Responsibility         .18          .001    .51     .117      .246

The majority of the significant findings have been discussed above, leaving only the domains of classroom environment and professional responsibility. Participants' scores on the classroom environment domain were significantly higher than their scores on every domain except instruction: higher than trust (mean difference = .148, p < .001), higher than planning and preparation (mean difference = .101, p < .001), and higher than professional responsibility (mean difference = .182, p < .001).
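The pairwise comparisons above were produced in SPSS. As an illustrative sketch only, the same idea can be expressed as paired t tests whose p values are multiplied by the number of comparisons (the Bonferroni adjustment); the data frame and column names below are the hypothetical ones introduced in the earlier regression sketch, not the study data.

from itertools import combinations

import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("tpdft_composites.csv")  # hypothetical wide-format file
domains = ["plan_prep", "environment", "instruction", "responsibility", "trust"]
pairs = list(combinations(domains, 2))     # 10 pairwise comparisons among the 5 measures

for a, b in pairs:
    t_stat, p = ttest_rel(df[a], df[b])
    p_bonf = min(p * len(pairs), 1.0)      # Bonferroni-adjusted p value
    print(f"{a} - {b}: mean diff = {(df[a] - df[b]).mean():.3f}, adjusted p = {p_bonf:.3f}")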


Results of research hypothesis one. The first research hypothesis was as follows:

There will be a statistically significant difference in the composite scores that encompass

the respective teaching domains of planning and preparation, classroom environment,

instruction, professional responsibilities, and trust between teachers as identified in the

Teacher’s Perspectives on Danielson’s Framework for Teachers instrument. A repeated

measures ANOVA with a Greenhouse-Geisser correction showed that means

significantly differed between teaching domains F (2.66, 488.751) = 20.03, p < .001, thus

providing sufficient evidence to support the first research hypothesis.

Results Research Question Two through Four

For the researcher to assess the remaining research questions, four linear

regressions were used to investigate whether the composite scores of the respective

teaching domains of planning and preparation, classroom environment, instruction, and

professional responsibilities differed based on teacher’s years of teaching experience,

state of Mississippi teaching certification, and professional trust. Assumption testing for normality, linearity, multicollinearity, and homoscedasticity was conducted before running the analyses, and the assumptions were found tenable.

Descriptive Statistics

Descriptive statistics were presented previously in Table 10. Table 13 displays the correlations among the criterion variables (planning and preparation, classroom environment, instruction, professional responsibility) and the predictor variables (years of experience, state of Mississippi teacher certification, professional trust).


Table 13

Intercorrelations Between Variables (p values in parentheses where reported)

Variable                          1            2            3            4            5            6        7       8
1. Plan Preparation             1.00
2. Classroom Environment         .577 (.001) 1.00
3. Instruction                   .648 (.001)  .590 (.001) 1.00
4. Professional Responsibility   .530 (.001)  .626 (.001)  .567 (.001) 1.00
5. Trust                         .211 (.004)  .069 (.354)  .203 (.001)  .058 (.436) 1.00
6. Teaching Experience           .065 (.380)  .120 (.103)  .128 (.082)  .131 (.076)  .070 (.346) 1.00
7. Alternate Route Cert         -.031        -.042        -.082         .099         .013        -.230    1.00
8. Reciprocity                  -.022         .006        -.081         .032         .061         .155    -.202   1.00

Planning and Preparation Results

Results of the linear regression analysis indicated that the linear combination of years of experience, state of Mississippi alternate certification, reciprocity, and professional trust only approached significance in predicting planning and preparation (M = 4.65, SD = 0.34), R2 = .050, adj R2 = .029, F(4, 180) = 2.366, p = .055. Table 14 provides the results of the linear regression analyses. Because the overall model only approached significance, there was not sufficient evidence to support the research hypothesis for this domain. The


multiple correlation coefficient was .223. The results indicated that less than 5% of the variance in the domain of planning and preparation could be accounted for by the linear combination of teaching experience, type of state of Mississippi teacher certification, and professional trust.
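For clarity, the multiple correlation coefficient reported here is simply the square root of the model R2, and the adjusted R2 follows from the sample size (n = 185) and number of predictors (k = 4):

R = \sqrt{R^2} = \sqrt{.050} \approx .223

R^2_{adj} = 1 - (1 - R^2)\frac{n - 1}{n - k - 1} = 1 - (1 - .050)\frac{184}{180} \approx .029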

Table 14

Results of Linear Regression Analyses

Criterion                        F        df         p        R2
Plan Preparation               2.366     4, 180     .055     .050
Classroom Environment           .857     4, 180     .491     .019
Instruction                    3.571     4, 180     .008     .074
Professional Responsibility    1.138     4, 180     .340     .025

While the model approached significance with a p of .055, examination of the regression weights showed that a teacher's years of experience (M = 13.06, SD = 8.91) did not contribute to the multiple regression model predicting planning and preparation, β = .050, t = .669, p = .504 (see Table 15). Similarly, as seen in Table 15, examination of the betas for state teaching certification (M = 1.47, SD = 0.70) showed that reciprocity and alternate route did not significantly contribute to the multiple regression model predicting planning and preparation, β = -.050, t = -.665, p = .507 and β = -.033, t = -.431, p = .667, respectively. Finally, although the regression weight for trust (M = 4.60, SD = 0.41) suggested a possible contribution to the model predicting planning and preparation, β = .211, t = 2.89, p = .004, the lack of a significant overall F did not allow for this interpretation.


Table 15

Summary of Linear Regression Analyses

                               Years of Experience   Alt Certification      Reciprocity       Professional Trust
                                  B        β            B         β          B        β          B        β
Plan Preparation                .002     .050         -.028     -.033      -.055     .050       .184     .211
Classroom Environment           .004     .114         -.015     -.020      -.019    -.019       .047     .062
Instruction                     .004     .115         -.061     -.085      -.120    -.128       .150     .204
Professional Responsibility     .005     .110         -.680     -.075      -.004    -.040       .048     .051

Classroom Environment Results

Results of the linear regression analysis indicated that the linear combination of years of experience, state of Mississippi alternate certification, reciprocity, and professional trust was not a significant predictor of classroom environment (M = 4.75, SD = 0.31), R2 = .019, adj R2 = -.003, F(4, 180) = 0.857, p = .491. Table 14 above provides the results of the linear regression analysis. Therefore, the research hypothesis was not supported, and the researcher concluded that these factors did not predict classroom environment. The multiple correlation coefficient was .137. The results indicated that less than 2% of the variance in the domain of classroom environment could be accounted for by the linear combination of teaching experience, type of state of Mississippi teacher certification, and professional trust. None of the individual predictors was significant: a teacher's years of teaching experience (β = .114, t = 1.49, p = .137), type of teacher certification (β = -.020, t = -.259, p = .796), reciprocity (β = -.019, t = -.254, p = .800), and professional trust (β = .062, t = .83, p = .404). Table 15 above provides a summary of the regression analysis.


Instruction Results

A third multiple regression model with all four predictors produced an R2 = .074 when predicting instruction (M = 4.78, SD = 0.30). The data provided evidence to support the research hypothesis, as the model was significant, F(4, 180) = 3.571, p = .008. Table 14 above provides the results of the linear regression analysis. The multiple correlation coefficient was .271. Approximately 7% of the variance in the domain of instruction could be accounted for by the linear combination of teaching experience, type of state of Mississippi teacher certification, and professional trust.

Upon examination of the regression weights, a teacher's years of experience (β = .115, t = 1.54, p = .123) did not contribute to the multiple regression model predicting instruction. Neither form of state of Mississippi teaching certification (M = 1.47, SD = 0.70) contributed to the model: reciprocity, β = -.128, t = -1.735, p = .084, and alternate route, β = -.085, t = -1.128, p = .261. However, professional trust (M = 4.60, SD = 0.41) significantly contributed to the multiple regression model predicting instruction, β = .204, t = 2.83, p = .005. This p value provided sufficient evidence in support of the research hypothesis.

Professional Responsibility

The final multiple regression model with all four predictors produced an R2 = .025 when predicting professional responsibility (M = 4.57, SD = 0.38). The data failed to support the research hypothesis, leading to the conclusion that the linear combination of years of experience, state of Mississippi alternate certification, reciprocity, and professional trust did not significantly predict professional responsibility, F(4, 180) = 1.138, p = .340. Table 14


above provides the results of the linear regression analysis. A p value of .340 did not provide sufficient evidence to support the research hypothesis; accordingly, a teacher's years of teaching experience (β = .110, t = 1.44, p = .149), type of teacher certification (β = -.004, t = -.048, p = .962), reciprocity (β = -.075, t = -.973, p = .332), and professional trust (β = .051, t = .691, p = .490) were not significant. Table 15 above provides a summary of the regression analysis.

Research Hypothesis Two Through Four

Results of research hypothesis two. Research hypothesis two stated: There will be a statistically significant difference in the composite scores that encompass the respective teaching domains of planning and preparation, classroom environment, instruction, and professional responsibilities between participants based on the teacher's years of teaching experience as identified in the Teacher Perspectives on Danielson's Framework for Teaching instrument. The four linear regressions failed to provide evidence to support the research hypothesis that the domains differ based on a teacher's years of teaching experience.

Results of research hypothesis three. Research hypothesis three stated: There will be a statistically significant difference in the composite scores that encompass the respective teaching domains of planning and preparation, classroom environment, instruction, and professional responsibilities between teachers based on the teacher's type of Mississippi state teaching certification as identified in the Teacher Perspectives on Danielson's Framework for Teaching instrument. The four linear regressions failed to provide evidence to support the research hypothesis that the domains differ based on a teacher's state of Mississippi teaching certification.


Results of research hypothesis four. The fourth research hypothesis stated: There will be a statistically significant difference in the composite scores that encompass the respective teaching domains of planning and preparation, classroom environment, instruction, and professional responsibilities between teachers based on the teacher's perspectives of their professional trust as identified in the Teacher Perspectives on Danielson's Framework for Teaching instrument. Two of the four linear regressions (classroom environment and professional responsibility) failed to provide evidence to support the research hypothesis that the domains differ based on teachers' perspectives of professional trust. The planning and preparation regression approached significance at p = .055. The instruction regression provided evidence to support the research hypothesis: trust significantly contributed to the multiple regression model predicting instruction, β = .204, t = 2.83, p = .005.


CHAPTER V – DISCUSSION

The purpose of this study was to gain an understanding of Mississippi Gulf Coast

K-12 teacher’s perspectives on the relative importance of teaching components as

identified in the Teacher Perspectives on Danielson’s Framework for Teaching. Also, the

study also explored the extent to which the relative importance of the components vary as

a function of the teacher’s years of teaching experience, the teacher’s type of teaching

certification, and the teacher’s perspectives of their professional trust in their assessment

evaluator. Data obtained in the survey was quantitative and provided by use of a Likert

type scale covering five areas ranging from not important at all to very important. By

determining where statistical differences exist in teacher’s perspectives, educational

leaders can better identify and implement strategies that help improve teacher evaluation

methodology. This chapter includes a summary of the procedures, the key findings, a discussion of the findings, limitations of the study, and recommendations for policy, practice, and future research.

Summary of Procedures

The participants in this study consisted of 185 state-licensed teachers, each

employed at their respective participating school. The researcher selected participating

schools from among two separate school districts located along the Mississippi Gulf

Coast. These schools were chosen for being representative of public schools in the

Mississippi coastal region and for convenience.

The study was conducted using a questionnaire based on Charlotte Danielson's Framework for Teaching. Before its official use, an expert content review of the questionnaire instrument was performed to assess its validity. The reviewers suggested minor


changes, which were incorporated into the questionnaire instrument. The revised

instrument was submitted to The University of Southern Mississippi Institutional Review

Board and was approved. A pilot study was then conducted to test the instrument’s

reliability. Coefficient alpha, computed from the pilot study data, was used to evaluate the reliability of the subscales. A decision rule of .70 was established to evaluate reliability, based on DeVellis' (2016) recommendation.
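The sketch below illustrates the coefficient alpha computation referenced above, assuming a (respondents x items) response matrix for one subscale; the file name is a hypothetical placeholder, and the .70 decision rule follows DeVellis (2016).

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) array of Likert responses for one subscale.
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the subscale total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# subscale = np.loadtxt("pilot_subscale_items.csv", delimiter=",")  # hypothetical pilot data
# print(cronbach_alpha(subscale) >= 0.70)  # apply the .70 decision rule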

Two school districts and 10 schools participated in this research. From within this

population framework, the researcher solicited state-licensed teachers. In the end, the

participants completed a total of 276 questionnaires between March 7, 2017 and March

29, 2017. The researcher visited each participating school and conducted the study in

person. Each teacher participant voluntarily completed the 36-question questionnaire after reviewing an informed consent form.

A third of the submitted questionnaires had a raw score of 155, indicating that the respondent rated every assessed domain as very important. The researcher omitted questionnaires with scores of 155 from the analysis because these scores offered no differential data, and because coefficient alpha reliability levels were not met in the pilot study when all scores were included. The questionnaire instrument (Appendix A) collected descriptive data to measure the degree to which participating teachers agreed with statements about the importance of Danielson's Framework for Teaching components. Furthermore, data were analyzed to determine whether teachers' perspectives differed according to their years of certified teaching experience, the type of Mississippi teaching certification obtained, or their level of professional trust in their evaluator.


Summary of Data

The researcher analyzed participants’ demographic data to gain insight into

participants. This data included years of certified teaching experience, type of state

teaching certification, planning and preparation, instruction, classroom environment,

professional responsibility, and level of professional trust in their evaluator. Of the 185

respondents, 24% had 0 to 5 years’ experience, 24% had 6 to 10 years’ experience, 18%

had 11 to 15 years’ experience, 14% had 16 to 20 years’ experience, 11% had 21 to 25

years’ experience, and 9% had over 25 years’ experience. The type of Mississippi

teacher’s certification obtained by the teachers were 65% traditional teacher education

certificate, 23% alternate teacher certification, and 12% reciprocity certificates. Highly

qualified teachers with 21 hours of coursework represented 45%. Teachers who

challenged the praxis or had emergency certification represented 29%. Traditional

teacher certification k-3 represented 26%. These results indicate that the participants were

an experienced group of teachers who understand the dynamics of Charlotte Danielson’s

Framework for Teaching components.

The first research question posed in this study was: “What is the degree of

importance that Mississippi Gulf Coast K-12 teachers rate components as identified in the

Teacher’s Perspectives on Danielson’s Framework for Teaching (TPDFT) instrument?”

K-12 teachers rated the components as highly important, with means ranging from 4.78 (SD = .30) in the instruction domain to 4.57 (SD = .39) in the domain of professional responsibility. Statistically significant differences were found between the domain means, thus supporting the research hypothesis. The data showed significant differences between professional trust and the two domains of instruction and classroom


environment. Danielson's domain of planning and

preparation differed significantly from the framework’s other domains of instruction,

classroom environment, and professional responsibility. Lastly, Danielson’s domain of

professional responsibility significantly differed from instruction and the classroom

environment.

The second research question posed in this study was: “What is the degree of

importance that Mississippi Gulf Coast K-12 teachers rate teaching components as

identified differently in the Teacher’s Perspectives on Danielson’s Framework for

Teaching (TPDFT) instrument based on the teacher’s years of teaching experience?” The

mean for years of teaching experience was 13.06, with a standard deviation of 8.913, indicating a wide spread in years of teaching. K-12 teachers rated the components as highly important, with domain means ranging from 4.570 to 4.784. The highest mean was in the domain of instruction (M = 4.784, SD = .303); a standard deviation this low shows that most participants agreed that instruction is very important. Responses did not vary significantly according to years of experience, thus failing to support the research hypothesis.

The third research question posed in this study was: “What is the degree of

importance that Mississippi Gulf Coast K-12 teachers rate teaching components as

identified differently in the Teacher’s Perspectives on Danielson’s Framework for

Teaching (TPDFT) instrument based on the teacher’s type of Mississippi teaching

certification?” Responses in this section did not vary according to the type of teaching

certification, thus failing to support the research hypothesis.


The fourth question posed in this study was: “What is the degree of importance

that Mississippi Gulf Coast K-12 teachers rate teaching components as identified

differently in the Teacher’s Perspectives on Danielson’s Framework for Teaching

(TPDFT) instrument based on the teacher’s perspectives of their professional trust in their

assessment evaluator? The mean for teacher’s rating the importance of professional trust

in the assessment evaluator was 4.65. The mean response for this section fell into the

agree category, with the scale being not important at all being one and very important

being a four. The standard deviation for question 4 was 0.34, meaning that the scores in

the domain of professional trust did not vary. The analysis showed this question was

close to significance with p =.055. Professional trust also significantly predicted the

domain of instruction.

Discussion

Many states use Charlotte Danielson’s seminal work in teacher evaluation for

developing teacher evaluations. Teacher’s perspectives of the teacher evaluation has

many factors based on their years of service, the type of teacher certification and their

trust in the professional evaluator.

According to Hanushek and Rivkin (2010) as well as Rockoff (2004), teacher

quality is the single most essential variable impacting student achievement within the

field of education. Many researchers agree that the most significant element to student

growth is the teacher (Goldhaber, 2002; Hanushek & Rivkin, 2010; Johnson et al., 2012;

Rockoff, 2004). A primary purpose of teacher assessment is for the enhancement of

teachers and an improvement in student learning (Adams et al., 2015).


Teacher assessment today is typically based on a framework model. Charlotte Danielson (2011b), author of the Framework for Teaching model, is an educational consultant based in Princeton, New Jersey. She is both nationally and internationally known for her work on teacher assessment. Throughout the review of empirical research, Danielson's framework was the dominant work in the literature on teacher evaluation. Mississippi, along with other states, incorporated or based its evaluation process on Danielson's framework. Because of this state implementation and empirical support, Danielson's work leads the field of research on teacher evaluation, making her framework the logical choice for this study. Often, teachers

prefer formative assessments because they identify their professional strengths and

weaknesses. Formative assessments also provide feedback information that can be

applied toward professional growth (Danielson & McGreal 2000; Weisberg et al., 2009).

Bambrick-Santoyo (2012) stated that teacher improvement happened when teachers knew

the specific steps to take for improvement. Chukwubikem (2012) described teacher

improvement as “a process of continuous improvement gained through experience,

targeted professional development and the insights and direction provided through

thoughtful, objective feedback about the teachers’ effectiveness” (p. 561).

The literature defines teaching effectiveness as the outcome of teaching or the

impact on student growth (Danielson, 2007; Goe et al., 2008). Moreover, schools are

required to develop policies, with great importance placed on teacher effectiveness, to

qualify for government educational funds (Rockoff & Speroni, 2011).

Danielson (2012b) reported that educators must consider guideline changes from

national and state departments of education when defining criteria for good teaching.


Bambrick-Santoyo (2012) observed that true personal change occurred when teachers

directly addressed their skills that were lacking, regardless of assessment feedback or

evaluation results. According to Darling-Hammond et al. (2012), those who design teacher assessments should take teacher effectiveness into account.

Chukwubikem (2012) described successful teacher assessments as those that

helped teachers develop their teaching skills and as a result, helped students increase their

learning.

Limitations

The participants were a convenience sample of teachers on the Mississippi Gulf

Coast, thus limiting the generalizability of these findings. The study depended on a self-

reporting instrument and was limited to the teachers who volunteered to participate. The

researcher used a numeric scale in the instrument with a 1 meaning not important at all to

a 5 meaning very important. The researcher could only assume that the data was a

reflection of the participants’ perspectives of the domains and aspects of teacher

evaluation. Of the 276 participants, 33% gave a score of 5 on every question of the instrument, indicating they considered all domains very important. Due to this lack of variability, these responses were removed, which may call the study's outcome into question.

Implications

The results of this study signify an agreement among teachers that the domains

listed in Charlotte Danielson’s Framework for Teaching were of utmost importance. The

importance of trust in the professional evaluator indicated that for teachers to take

ownership of their evaluation, they must completely trust the evaluator.


The findings regarding trust, although surprising, can have significant

implications for the teacher evaluation process. The discovery from the research was the

value teachers placed on professional trust. Professional trust may be the building block

of teacher evaluation for establishing rapport and trust between administration and

teachers. Opportunities should be planned to build relationships between administration

and teachers. Another avenue for building professional trust is professional development that centers on open communication and shared ideas, and that gives value to one another's thoughts and ideas through active listening.

When comparing the importance of Charlotte Danielson’s domains, the researcher

discovered quite a few significant differences between domains. For instance, instruction

was found to be more important to the study participants than the domain of planning and

preparation. Additionally, the domain of instruction was considered more important than

professional responsibility and trust. Interestingly, the teachers in this group of participants indicated that instruction was more important than every other area except the classroom environment. This result implies that the greatest importance lies in the interaction between student and teacher during the teaching of the lesson. Questioning, discussion, engaging students

in active learning, and the responsiveness to students in the process of instruction were

also significant.

A second implication was that the domain of the classroom environment was

significantly more important when compared to planning and preparation, professional

responsibility, and trust. This result implies that the classroom environment, which

includes a place for students to feel secure, is more important than developing

comprehensible lessons based on content, pedagogy, and an understanding of individual


whole-child needs. The simple tenet of classroom environment was significantly more important to the participants than record keeping, family communication, and professional growth. Finally, the security of the student in his or her classroom was rated as even more important than trust in a teacher's evaluator; the findings supported that participants felt a student's security within the classroom was important. Thus, this finding reveals the key role the classroom environment plays in the education of the student.

The final implication was that the participants ranked all Charlotte Danielson’s

domains significantly higher than professional responsibility. In the current study, the

literature defined professional responsibility as teacher reflection, maintenance of

accurate records, communicating with families, partaking in a professional group,

maturing and improving professionally, and demonstrating professionalism (Danielson,

2007). Thus, professional responsibility was considered the least important of all the domains. Administrators who understand this can guide teachers in recognizing the importance and legal obligation of documentation. Professional development can be designed to give teachers tools that simplify the tasks of professional responsibility.

Recommendations

The researcher recommends further research that measures principals' perspectives of trust in teacher evaluations. Replication of this study with a larger sample and in a different geographic area would be beneficial. Future research might also assess principals' perspectives on trust and ask principals how they believe trust can be built between teachers and evaluators. A study that evaluates principals' perspectives on trust and then compares the


responses of teachers to the responses of principals could yield useful data. Understanding the similarities and differences between principals' and teachers' perspectives could revolutionize the partnership between these two groups.

Additional comments emerged from the participants during this research, and future research should take these recommendations into account. Several participants wrote that a comments section would be helpful for adding information. One participant wrote that this evaluation was not applicable to teachers of a severe/profound special education class. This statement was an "aha" moment for this teacher-researcher: teachers are constantly required to differentiate instruction in the classroom, yet administrators often forget to differentiate the teacher assessment. The last additional comment was related to question nine ("I think having a primary assessment evaluator that expresses personal regard for others is"), where a teacher wrote the following in the margin regarding favoritism: "It is to the extreme detriment to morale and trust when this takes place." The participant answered the question with neither important nor not important.

Teachers embrace their duties of planning and preparing lessons, managing the classroom environment, delivering instruction, and completing professional responsibilities. These are responsibilities that teachers are expected to carry out without any need for prompting. Ultimately, the purpose of teacher evaluation is the benefit of the student. In summary, the majority of teachers agreed that Charlotte Danielson's domains were important.


APPENDIX A – Teacher’s Perspectives on Danielson’s Framework for Teaching –

(TPDFT)


APPENDIX B – Danielson Letter of Permission


APPENDIX C – Charlotte Danielson’s Framework for Teaching Instrument (2013)

Translations of Items

Survey Questionnaire Translation: Danielson’s Framework for Teaching (DFFT)

Components, 2013 Edition has been translated into questionnaire statements as listed

below.

Description: The words: “I think” and “is” have been added to each of the 22 DFFT

components. Additional modifications are noted as: (1) Added content- Red color and (2)

Deleted content- Lined through. Statements one through twenty-two are prefixed with a

parenthetical cross-reference; each includes the DFFT component number and survey questionnaire number (see Appendix A).

Questionnaire Statements

1. (1a/07) I think having demonstrating knowledge of content and pedagogy is-

2. (1b/08) I think knowing demonstrating knowledge of my students is-

3. (1c/09) I think setting or establishing instructional outcomes is-

4. (1d/10) I think having demonstrating knowledge of resources available to me is-

5. (1e/11) I think designing or preparing coherent instruction is-

6. (1f/12) I think designing or preparing student assessments is-

7. (2a/13) I think creating a classroom environment of respect and rapport is-

8. (2b/14) I think establishing a culture for learning is-

9. (2c/15) I think managing the classroom smoothly procedures is-

10. (2d/16) I think managing student behavior is-

11. (2e/17) I think organizing physical instructional space is-

12. (3a/18) I think communicating with my students is-

13. (3b/19) I think using questioning and discussion techniques that deepen student

understanding is-

14. (3c/20) I think engaging students in learning is-

15. (3d/21) I think using student assessment in instruction is-

16. (3e/22) I think having demonstrating lesson flexibility and being responsive to students

responsiveness is-

17. (4a/23) I think reflecting on my teaching is-

18. (4b/24) I think maintaining accurate records is-

19. (4c/25) I think communicating with student families is-

20. (4d/26) I think collaborating and participating in my school’s the professional community is-

21. (4e/27) I think growing and developing professionally is-

22. (4f/28) I think showing professionalism, integrity and ethical conduct is-

Last Modified December 7, 2016



APPENDIX D –Teacher’s Perspectives on Danielson’s Framework for Teaching –

Content Review (TPDFT-CR)


APPENDIX E – Consent Review Request Letter

310 Simon Blvd. | Ocean Springs, MS 39564

Phone: 228-860-7844 | [email protected]

January 26, 2017

Dr. _________________

TITLE

Department

University/ Institution / School District

Address

Dear Dr. _____________,

Hello, my name is Donna Floyd, and I am a doctoral candidate at the University of Southern

Mississippi. I would like you to review an instrument that I have revised with permission

from Charlotte Danielson (for use with my dissertation). The purpose of this instrument is to

assess the teacher’s perspectives of the relative importance of teaching components.

I am attempting to secure experts from different areas of the field of teaching and research to

increase the validity of the Teacher’s Perspectives on Danielson’s Framework for Teaching

(TPDFT) instrument, but I have asked you specifically because of your experience in

educational research. I know you are very busy with your own projects, but I do hope you

have time to lend your expertise.

I have successfully defended my proposal; this letter is to request your participation in this review process. I anticipate sending you the instrument and review sheet upon your reply. If you agree to review the instrument, I would like your feedback by February 10th, 2017. I will be sending you a copy of the instrument as well as a review form. I can send the information via mail or electronically. If you choose to conduct the review electronically, the review form and instrument are Word documents, so that you may type your comments directly into them.

I would be so thankful for your assistance with this project. If you have any additional

questions about this study, please feel free to contact me at (228) 860-7844 or by email at

[email protected].

Whether or not you can commit to reviewing the TPDFT, please reply to this email and let me know so that, if necessary, I may find an alternative reviewer. I truly appreciate you!

Sincerely,

Donna Floyd, Ph.D Candidate

The University of Southern Mississippi


APPENDIX F – Expert Reviewers

James Fox, Ph.D.

Rebecca Giles, Ph.D.

Elgen Hillman, Ph.D.

Talia Lock, Ph.D.

Joan Seals, Ed.S.


APPENDIX G – Teacher’s Perspectives on Danielson’s Framework for Teaching

(TPDFT) Review Sheet


APPENDIX H – Expert Reviews Summary Data

Teacher’s Perspectives on Danielson’s Framework for Teaching (TPDFT) Review Sheet*


APPENDIX I – Teacher’s Perspectives on Danielson’s Framework for Teaching – Pilot

Study


APPENDIX J – Script for Administration of Research Study

This script is to be used in conjunction with conducting a research study survey

using a questionnaire instrument entitled- Appendix A, Teacher Perspectives on

Danielson’s Framework for Teaching.

Instructions for Proctor

Objective: Completion and securing of research study questionnaires.

1. Coordinate location and time for a group setting administration of the research

study survey.

2. Prior to administering the survey questionnaire, read and understand this script

document. The researcher should be contacted with concerns or questions.

3. The Proctor Script for Participants section must be read verbatim and as part

of executing the survey. This act is important because it ensures consistency in

the survey’s multiple executions.

4. Once all instruments are within envelope(s), seal with supplied tape.

5. Label envelope(s) and secure under lock and key until researcher makes

collection.

Proctor Script for Participants

1. Hello, my name is (Print Name: ). I’ll be

reading from a script today in order to ensure consistency between multiple

executions of this survey.

2. On behalf of this study’s researcher, Donna Floyd, thank you for your

participation.

3. Participating in this survey is confidential, anonymous, and of no risk to you.

Additional information is provided in the survey’s disclosure statements.

4. This script and the survey you’ll be participating in have been approved by the

University of Southern Mississippi’s Institutional Review Board. Their job is to

determine compliance with federal laws and ethical conduct as related to this

study.

5. There is a raffle associated with this survey. Participation in the survey is not

required to participate in the raffle. Each participating school is awarded one

winning number. Sometime in the next several weeks, your school will make

award of its winning raffle number.

6. Everyone here is eligible to participate in the raffle regardless of your

participation in the survey. It’s important that you save your raffle ticket; you’ll

need it to collect your prize.

7. I will now pass out surveys and raffle tickets (Not to be read: distribute one

survey and one raffle ticket to each participant; It’s acceptable for the proctor to

have volunteer help).

8. Does everyone have a survey, raffle ticket, and a means to mark your survey

responses? (Not to be read: Provide participants with supplied pens as needed

and surveys/tickets as needed).


9. This survey will take 15 minutes to complete. Please take note of the Informed

Consent Form; a single selection is required. If you are not a Mississippi state

licensed teacher, please check the “NO” box on the survey’s Informed Consent

Form and follow the instructions provided.

10. Please look at your survey. You should be looking at an Informed Consent

Form. Please read, understand, and check a YES or NO in the Your Action

Required Here section. Please begin now. (Not to be read: Allow time to

execute the informed consent form).

11. Has everyone completed the Informed Consent Form? (Not to be read: Allow 5

minutes prior to reading this step. If there are outstanding forms, allow extra

time).

12. Now, please turn over the Informed Consent Form page. You should be looking

at Appendix-A. Any questions at this point? (Not to be read: If needed direct

questions to researcher).

13. Please take your time and offer your best responses. When you have completed

your survey, please return it to me. Please begin now. Thank you.

14. (Not to be read: This is the end of the script).

General Study Instrument Care

1. Protect questionnaires as instructed herein and place under lock and key upon

completion.

2. Questions regarding the care of completed questionnaires should be

immediately directed to the researcher.

General Information

1. Research study contact information:

a. Researcher: Donna Floyd, 228-860-7484 or

[email protected]

b. Research Study Chair: Dr. David Lee, [email protected]

c. USM IRB: Chair, 601-266-5997

2. Survey Proctors:

a. The researcher will identify suitable proctor(s) at each participating

school.

b. The researcher will provide proctor(s) with training, verbal instructions,

and a script.

3. Restrictions:

a. Administration of this research study questionnaire can only be

executed by the researcher or designated proctor(s).

b. The researcher is trained per applicable USM policy, procedure and

training courses.

c. Survey proctor(s) are trained by the researcher for the proper use of this

script and proper handling of completed questionnaires.


4. Survey Raffles:

a. A unique winning raffle number will be provided for each participating

school.

b. Researcher will determine the winning number for each school using an

online random number generator or equivalent. Two numbers will be

generated per school- a primary number and an alternate number. The

alternate number will be used as a backup in the event that the primary

number doesn’t result in a participant winner.

c. Upon completion of this research study, the researcher will coordinate

with each school’s administrator and provide winning numbers and gift

cards.

5. Completed Survey Envelopes:

a. Completed surveys should be placed in the supplied envelope immediately

upon receipt.

b. Upon placement of the final survey in the envelope, complete the

following steps:

i. Each survey envelope label will be completed by the proctor.

ii. Each survey script will be included in the survey’s envelope.

iii. Envelope(s) will be sealed with supplied tape.

c. The envelope label will contain the following information:

i. Sensitive Information, Keep Secure At All Times.

ii. Proctor name.

iii. Researcher name.

iv. School district name.

v. School name and survey setting location (room name/room number).

vi. Volunteers involved: YES / NO.

vii. Survey group total count (Proctor not included).

viii. Survey non-participating total count.

ix. Survey completion date and time.

x. Raffle start number and stop number.
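
As an illustration of the drawing described in item 4.b above, the following is a minimal sketch in Python of how a primary and an alternate winning number could be drawn from each school's ticket range. It is offered only as a hedged example: the study itself relied on an online random number generator or equivalent, and the school names, ticket ranges, and function name shown here are hypothetical.

```python
import random

def draw_winning_numbers(start, stop, seed=None):
    """Draw a primary and an alternate winning ticket number from the
    inclusive raffle range [start, stop] recorded for one school."""
    rng = random.Random(seed)
    # Sampling two values without replacement guarantees the alternate
    # number differs from the primary number.
    primary, alternate = rng.sample(range(start, stop + 1), 2)
    return primary, alternate

# Hypothetical ticket ranges (the "raffle start number and stop number"
# recorded on each school's envelope label).
schools = {"School A": (1001, 1040), "School B": (2001, 2035)}

for name, (start, stop) in schools.items():
    primary, alternate = draw_winning_numbers(start, stop)
    print(name, "primary:", primary, "alternate:", alternate)
```

Drawing both numbers in a single sample without replacement ensures the alternate number always differs from the primary number, so it can serve as a true backup if the primary ticket does not belong to a participant.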


APPENDIX K – IRB Approval Letter

