Technical Manual for
the Missouri Performance Assessments
October 2016
Copyright © 2016 by Educational Testing Service. All rights reserved. ETS, the ETS logo, and PRAXIS are registered trademarks of
Educational Testing Service (ETS). MEASURING THE POWER OF LEARNING is a trademark of ETS. All other trademarks (and service
marks) are the property of their respective owners.
Table of Contents
PREFACE
PURPOSE OF A PERFORMANCE ASSESSMENT
ASSESSMENT DEVELOPMENT
STANDARD SETTING STUDIES
SCORING METHODOLOGY
PSYCHOMETRIC PROPERTIES
SCORE REPORTING
APPENDIX A – STATISTICAL CHARACTERISTICS OF THE MISSOURI PERFORMANCE ASSESSMENTS
APPENDIX B – DESIGN TEAM MEETING REPORTS
BIBLIOGRAPHY
Preface
Purpose of This Manual
The purpose of this Technical Manual is to explain
• the purpose of the Missouri Performance Assessments
• the approach taken by ETS in developing the Missouri Performance Assessments
• the validity evidence supporting score use for the Missouri Performance Assessments
• the adoption process for the Missouri Performance Assessments
• the statistical analyses supporting the psychometric quality of the Missouri Performance Assessments
• the score reporting process
• statistical summaries of test-taker performance on the Missouri Performance Assessments
Audience
This manual was written for policy makers and state educators who are interested in
• learning more about the Missouri Performance Assessments
• understanding how the Missouri Performance Assessments relate to state licensure requirements
• understanding the development and scoring of the Missouri Performance Assessments
• reviewing the statistical characteristics of the Missouri Performance Assessments
Purpose of a Performance Assessment
Overview
ETS’s mission is to advance quality and equity in education by providing fair and valid tests,
research, and related services. In support of this mission, ETS has developed the Missouri
Performance Assessments, which provide the state of Missouri with assessments and ancillary
services that support Missouri’s educator licensure and certification process.
What is a Performance Assessment?
A performance assessment is a method of using authentic tasks such as activities, exercises, or
problems for assessing how well test takers apply their knowledge, skills, and abilities to a
particular real-world or authentic situation.
What is its purpose?
A performance assessment allows a candidate to demonstrate his or her performance during a
clinical experience. Successful completion of the assessment helps to demonstrate that the
candidate is prepared to begin practice as an entry-level educator. Because evaluators cannot be
present in all schools at all times, the portfolio, which the candidate can submit to an objective
third party, becomes the most authentic method of evaluation. A purpose of the performance
assessment is, therefore, to evaluate how well a candidate can apply the knowledge and skills
learned to the classroom.
What are the assessments?
The Missouri Performance Assessments comprise four assessments for four types of educators:
Missouri Pre-Service Teacher Assessment (MoPTA)
This assessment allows the candidate to demonstrate
• the knowledge and skills that pertain to understanding the context of the classroom in regard to students, the school, and the community (Task 1)
• an understanding, analysis, and application of assessment and data collection to measure and inform student learning (Task 2)
• an ability to develop instruction, including the use of technology, to facilitate student learning (Task 3)
• an ability to plan and implement a lesson using standards-based instruction, an ability to adjust instruction for the whole class as well as for individual students within the class, and an ability to demonstrate an understanding of reflective practice (Task 4 with Video)
• an ability to plan and implement a sequence of lessons using standards-based instruction, an ability to analyze data and adjust instruction for the whole class as well as for individual students within the class, and an ability to demonstrate an understanding of reflective practice (Task 4 Non-Video)
Missouri School Counselor Performance Assessment (MoSCPA)
This assessment allows the candidate to demonstrate
• an ability to enhance the Missouri Comprehensive Guidance and Counseling Program and contribute to school improvement (Task 1)
• an ability to implement program components within the Missouri Comprehensive Guidance and Counseling Program (Task 2)
• an ability to develop relationships by interacting with faculty, family, and/or the community (Task 3)
Missouri School Leader Performance Assessment (MoSLPA)
This assessment allows the candidate to demonstrate
• an ability to address and resolve a significant problem/challenge in the school that influences instructional practice and student learning (Task 1)
• skills in establishing and supporting effective and continuous professional development with staff (Task 2)
• an ability to facilitate stakeholders’ efforts to build a collaborative team within the school to improve student achievement (Task 3)
Missouri Librarian Performance Assessment (MoLPA)
This assessment allows the candidate to demonstrate
• an ability to identify a user group within the school community whose needs are not being met by the library collection, assess the current collection, select additions to the collection that will meet the needs of the user group, identify a method to promote the new and existing resources to the user group or to those related to the user group, and determine methods for gauging the extent to which the collection meets the needs of the user group (Task 1)
• an ability to plan, promote, and implement a collaborative lesson and to advocate for collaboration among librarians, school leaders, and other faculty (Task 2)
• an ability to plan and deliver a mini-lesson to adult participants on the use of a digital resource and an ability to demonstrate an understanding of reflective practice (Task 3)
How the Missouri Performance Assessments Address the State’s Needs
States have a strong interest in ensuring that beginning educators have the knowledge and
skills necessary for certification as a Missouri educator. The Missouri Performance Assessments
provide the Missouri Department of Elementary and Secondary Education (DESE) with
appropriate tools for making decisions about applicants for educator licensure. In this way, the
Missouri Performance Assessments meet the basic licensure needs of the Department.
In addition to the assessments themselves, the Missouri Performance Assessments provide the
state with ancillary materials that support licensure decisions. Information to help decision
makers understand how the Missouri Performance Assessments support Missouri’s licensure
needs is available at http://mega.ets.org/.
States also want to ensure that their applicants’ needs are being met. To that end, ETS has made
available a number of helpful test preparation tools:
• The MoPTA Candidate and Educator Handbook, a valuable resource for completing the assessment. It includes a basic overview of the assessment and test-taking strategies, as well as information on
o Missouri Teacher Standards and Quality Indicators.
o how test takers should prioritize activities and build responses.
o guidelines for writing. Each task requires some form of written response, and it is imperative that test takers understand what kind of writing (descriptive, analytic, or reflective) is required by each guiding prompt.
o collecting evidence. Evidence is found in the information that test takers provide within the written commentary and in the artifacts that they submit. The handbook describes what test takers need to know about evidence, how to select evidence for tasks, and how to submit both student and teacher artifacts as evidence.
o MoPTA with Video (MoPTA-V) version only: how to prepare for the video recording, how to record a class, how to analyze a video, and the importance of practice videos. The video is meant to provide as authentic and complete a view of a test taker's teaching as possible.
o scoring of tasks.
• the MoPTA Task 1 Handbook, which explains the roles of the student teacher, educator preparation program faculty, and the cooperating teacher/mentor in completing Task 1.
• the MoPTA Reflective Practice Handbook, which contains directions and suggestions for completing the Professional Competency Profile, a requirement of the assessment. The Professional Competency Profile engages test takers in the kind of reflection, goal setting, and action planning that certified teachers practice annually to improve their knowledge and skills in concert with state teacher evaluation systems.
• the MoSLPA Candidate Handbook, the MoSCPA Candidate Handbook, and the MoLPA Candidate Handbook, which are designed to help guide test takers and educator preparation program (EPP) supervising instructors through each of these assessments.
Finally, states have a strong interest in supporting their Educator Preparation Programs. ETS has
made available the ETS Data Manager, a collection of services related to score reporting and
analysis. These services are designed to allow state agencies, national organizations, and
institutions to receive and/or analyze test results. Offered services include Quick and Custom
Analytical Reports, Test-taker Score Reports, and Test-taker Score Reports via Web Service.
Each year, institutions also receive annual summary reports of their test takers’ scores. ETS also
offers Title II Reporting Services to institutions of higher education to help them satisfy federal
reporting requirements.
Assessment Development
Fairness in Test Development
ETS is committed to ensuring that its tests are of the highest quality and as free from bias as
possible. All ETS products and services—including individual test items, tests, instructional
materials, and publications—are evaluated during development so that they are not offensive or
controversial; do not reinforce stereotypical views of any group; are free of racial, ethnic, gender,
socioeconomic, or other forms of bias; and are free of content believed to be inappropriate or
derogatory toward any group.
For more explicit guidelines used in item development and review, please see the ETS
Guidelines for Fair Tests and Communications (2015).
Test Development Standards
During the test development process, ETS follows the strict guidelines detailed in Standards for
Educational and Psychological Testing (AERA, APA, NCME, 2014):
• Define clearly the purpose of the test and the claims one wants to make about the test takers
• Develop test specifications and test blueprints consistent with the purpose of the test and the domains of knowledge identified as important for licensure
• Develop specifications for item types and numbers of items needed to adequately sample the domains of knowledge
• Develop test items that provide evidence of the measurable-behavior indicators detailed in the test specifications
• Review assessments for potential fairness or bias concerns
Validity
The Nature of Validity Evidence
A test is developed to fulfill one or more intended uses. The reason for developing a test is
fueled, in part, by the expectation that the test will provide information about the test taker’s
knowledge and/or skill that
• may not be readily available from other sources
• may be too difficult or expensive to obtain from other sources
• may not be determined as accurately or equitably from other sources.
But regardless of why a test is developed, evidence must show that the test measures what it was
intended to measure and that the meaning and interpretation of the test scores are consistent with
each intended use. Herein lies the basic concept of validity: the degree to which evidence
(rational, logical, and/or empirical) supports the intended interpretation of test scores for the
proposed purpose (Standards for Educational and Psychological Testing).
A test developed to inform licensure1 decisions is intended to convey the extent to which the test
taker (candidate for the credential) has a sufficient level of knowledge and/or skills to perform
important occupational activities in a safe and effective manner (Standards for Educational and
Psychological Testing). “Licensure is designed to protect citizens from mental, physical, or
economic harm that could be caused by practitioners who may not be sufficiently competent to
enter the profession” (Schmitt, 1995). A licensure test is often included in the larger licensure
process—which typically includes educational and experiential requirements—because it
represents a standardized, uniform opportunity to determine if a test taker has acquired and can
demonstrate adequate command of a domain of knowledge and/or skills that the profession has
defined as being important or necessary to be considered qualified to enter the profession.
The main source of validity evidence for licensure tests comes from the alignment between what
the profession defines as knowledge and/or skills important for safe and effective practice and
the content included on the test (Standards for Educational and Psychological Testing). The
knowledge and/or skills that the test requires the test taker to demonstrate must be justified as
being important for safe and effective practice and needed at the time of entry into the
profession. “The content domain to be covered by a credentialing test should be defined and
clearly justified in terms of the importance of the content for credential-worthy performance in
an occupation or profession” (Standards for Educational and Psychological Testing, p. 181). A
licensure test, however, should not be expected to cover all occupationally relevant knowledge
and/or skills; it is only the subset of this that is most directly connected to safe and effective
practice at the time of entry into the profession (Standards for Educational and Psychological
Testing).
The link forged between occupational content and test content is based on expert judgment by
practitioners and other stakeholders in the profession who may have an informed perspective
about requisite occupational knowledge and/or skills.
1 Licensure and certification tests are referred to as credentialing tests by the Standards for Educational and
Psychological Testing (2014). Unless quoted from the Standards, we use the term “licensure.”
Test Development Process
Evidence-Centered Design (ECD)
ETS relies heavily on the evidence-centered design (ECD) process to produce high-quality,
high-stakes assessments.2 The following discussion outlines the process of ECD used in the
design and development of the MoPTA assessment.
In developing and implementing the Standards-Based Assessments for Coursework and Clinical
Experience, we followed the process depicted below:
2 Williamson, D.M, Almond, R.G., and Mislevy, R.J. (2004). Evidence-centered design for certification and licensure.
CLEAR Exam Review, Volume XV, Number 2, 14–18.
1. Convene a Design Team
• Determine target demographics
• Recruit educators
2. Design Assessment
• Using ECD, define the assessment framework
• Develop assessment tasks or items
• Develop rubrics or keys
• Develop scoring materials
• Develop support materials and Web content
3. Pilot/Field Test the Assessment
• Prepare a pilot/field test version
• Define participant pool
• Recruit participants
• Conduct pilot/field test(s)
• Formatively score pilot/field test responses
• Conduct data analysis
4. Operationalize the Assessment
• Refine tasks
• Select benchmarks
• Establish informational website
• Build out online authoring and scoring portals
• QC each component, including systems
• Train raters
• Administer assessments
• Score assessments
• Report results
Convening Design Teams
Our first step was to work with the Missouri Department of Elementary and Secondary
Education (DESE) to bring together design teams consisting of educators who meet specific
demographic requirements. Missouri educators and the university faculty who prepare candidates
are most knowledgeable about the content and pedagogy expectations for Missouri educators at
the various stages of their careers. In every step of the assessment development process,
educators nominated by the Missouri Department of Elementary and Secondary Education
formed the core of the design teams. This is an important foundation of ECD.
Reports of the design team meetings may be found in Appendix B.
Working with assessment and psychometric experts, educators make the decisions that create
assessments that are true to the appropriate standards. Each partner in the relationship brings a
specific set of skills and abilities that are needed for the project to succeed. In this case, ETS
brings its extensive knowledge of performance assessment, validity and reliability, fairness in
performance testing and scoring, and scoring unique types of assessments.
Expert educators aid the development process by bringing their in-depth knowledge of the
standards and an understanding of what educators should know and be doing. Professional
educators know what candidates should be able to do when they enter the classroom.
Diversity and fairness are critical to assessment design. As such, development committees should
be representative of the population. In choosing participants for the development team, ETS
considers the following:
• Geographic location across a state
• Developmental level taught
• Content areas taught
• Gender
• Race/ethnicity
• Years of experience
Designing Assessments – Alignment with Standards
As each new assessment was developed, the development teams “unpacked” the relevant
Missouri state standards as a foundation for each assessment. As we did this, we were guided by
the principle that “not everything that can be measured is important and not everything that is
important can be measured.” As we worked with educator teams to examine the standards, we
encouraged them to consider the following questions:
• Is every standard essential to measure in the performance context?
• Are there specific constructs within each standard that are most important to measure?
• How would we think about measuring the identified standards/constructs?
• What kinds of evidence could we ask educators to supply to meet this standard/construct?
• Can we design an activity, exercise, or evidence-collection device that will reasonably allow teacher candidates to provide evidence?
Who Are We Measuring?
One of the first tasks of a development committee is to achieve consensus on the characteristics
of the educators who will take the assessment. It is critical that as the committee forms a
consensus of these characteristics it designs tasks that are appropriate for a specific group of
candidates at the specific stage in their training or careers. We would not ask initial licensure
candidates to show the same proficiency on a set of tasks as highly experienced teachers.
Therefore, it is of great importance that the development team understands whom the assessment
is measuring, what the test takers know, and what the test takers should be able to demonstrate in
regard to each of the standards.
What Claims Do We Want to Make About These Educators?
The claims should answer the question, “What do we want to say about candidates on the basis
of the standards being assessed?” Claims are the statements we want to be able to make about a
teacher candidate’s knowledge, skills, abilities, or other attributes on the basis of their
performances on the assessment. Claims may be very general or more specific. A single
assessment may be the basis for many claims at different levels of specificity. To know what we
want to say about the candidate requires that we know the purpose of the assessment. Therefore,
ECD begins with a clear description of the purpose of the assessment. Lower-level claims must
support the high-level claim. These lower-level claims must get at the necessary knowledge,
skills, and abilities at increasing levels of specificity.
What Evidence Would Support Those Claims?
Not everything that we can measure is important, and we cannot measure everything that is
important. Determining what is important to measure is therefore a critical step in the
development process: we cannot measure everything included in educators’ domain of practice,
so we must determine which things are most important to measure in light of the criteria, and
which things are measurable. Further, what is measured must be appropriate for the particular
population that is being assessed.
In designing the assessments, we applied the National Board for Professional Teaching
Standards APPLE criteria for quality performance assessments. Such assessments must meet five
criteria to be effective. They must be
• administratively feasible
• professionally acceptable
• publicly credible
• legally defensible
• economically affordable
Evidence can take many forms. Some of the forms that evidence can take include
• direct observation
• portfolio
• video
• student work samples/artifacts
• teacher work samples, including lesson plans, worksheets, and assessments
• published evidence, such as newspaper articles, photos, and papers
• testing documentation, such as standardized test results, teacher-created assessment data, and formative assessment data
• contextual information about classes and schools
• written explanations and documentation
• reflective pieces
Designing Tasks to Generate Evidence
A key job of the development team was to determine which of the various constructs that could
provide evidence of a claim are most important to measure, and then to determine what evidence
would support those claims.
In thinking this through, the team members considered questions like:
• How much evidence is enough? Do we need, for example, 60 minutes of video, or could we obtain sufficient validity and reliability with 15 minutes?
• What is the right evidence to support a specific claim? If, for example, we want to make the claim that a candidate will be able to plan a lesson, using a video would not be effective. A more appropriate approach would be to evaluate a lesson plan for evidence that the teacher candidate has a deep knowledge of the students for whom he or she is designing the lesson and to determine whether the lesson plan has been differentiated to meet their specific needs.
A task is something specific that we ask a teacher candidate to do. Tasks generate evidence that
supports the claims we wish to make about the teacher candidate we are assessing. We have
developed tasks specifically to generate evidence of components of practice, and each task
measures one or more standards. Often, many points of evidence are required for each standard,
and we have developed tasks to generate that evidence. Teacher candidates are sometimes
required to submit and discuss such things as student work, lesson plans, assessments that
measure students’ progress, notes from meetings with colleagues, etc.
After we developed individual tasks, we reviewed them for consistency. It is important to make
certain that the tasks, taken together, measure the various standards and that there is an alignment
between the standards and assessment. Collectively, the tasks must be sufficient to provide an
accurate picture of practice for the specific population of educators. The tasks must also provide
the educators with the opportunity to demonstrate that practice. Finally, the tasks must be of
equal rigor for the various candidates.
The Missouri Pre-Service Teacher Assessment (MoPTA)
ETS created a standards-based assessment design for teacher candidates that has a unique
approach. First, although the assessment is summative, the summative tasks are connected to
formative activities, including coursework and professional interactions with university faculty
and cooperating teachers.
Second, the teacher candidate has access to an electronic portfolio platform that enables the
collection of a variety of artifacts, including student work. This same system will allow access to
an electronic scoring process that provides a quick turnaround, from submission to scoring
results, as test takers submit tasks.
This approach is also unique in that it supports future professional growth by requiring each
teacher candidate to create a student survey to elicit feedback about various aspects of the
classroom learning environment. It also supports professional growth by providing a link to the
next tier in the teacher candidate’s career as a classroom teacher by requiring a professional
growth plan that is a result of the teacher candidate’s reflection on the results of the assessment,
the impact of the clinical experience, and the professional interactions with university faculty and
cooperating teachers.
The Missouri Pre-Service Teacher Assessment (MoPTA) consists of four tasks. Task 1 is
completed early in the 14-week clinical experience, and Tasks 2-4 are completed approximately
two-thirds of the way through the clinical experience. ETS designed each task to address specific
Missouri standards/quality indicators. However, the MoPTA does not address every
standard/indicator. Successful completion of coursework (as certified by the attending university
faculty member) better addresses certain standards/indicators.
ETS enhanced its Online Network for Evaluation (ONE) system to (1) upload candidates’
written commentaries and artifacts and (2) support distributed scoring. This system gives
raters access to the candidate’s portfolio for the centralized scoring of Tasks 2-4. The teacher
candidate can upload each document (tasks and artifacts) as well as the video for Task 4 into the
online submission system during the course of the clinical experience semester.
The Tasks
Each of the MoPTA tasks includes a written commentary in which the teacher candidate
responds to a series of prompts and provides related artifacts. A development team of Missouri
educators, with ETS facilitation, determined the prompts, the length of the written commentary,
and the number of artifacts.
The development team created prompts for each of four tasks that are applicable to all levels and
content areas (i.e., Early Childhood, Elementary, Middle, Secondary, Special Education),
identifying Missouri Teacher Standards that need to be addressed. Early childhood and
elementary teacher candidates are required to focus Task 2 on literacy and Task 3 on
mathematics. It is suggested that Special Education teachers working at the same developmental
levels follow the same approach.
For each task that deals with instruction, we developed prompts that focus on the
decision-making process the candidate used in developing the lesson plan. Also for these tasks,
the candidate should be able to show how he or she differentiated instruction for students.
Task 1: The first task focuses on knowledge of the students with whom the teacher candidate
will be interacting during the clinical experience. Completion of this task allows the teacher
candidate to get to know his or her students.
Task 1 is formative in nature, entered into an online submission system, and evaluated locally.
This gives EPPs an opportunity to introduce the teacher candidate to the evidence
collecting/analyzing process that they will follow for succeeding tasks. Missouri educators,
working with ETS staff, developed the task and the scoring rubrics for the task, focusing on each
of the standards/indicators addressed by the task. No scores are reported to the Department, and
Task 1 scores are not used to calculate the final MoPTA score.
While ETS does not conduct or monitor scoring for Task 1, we did develop scoring materials
(e.g., rubrics and exemplars) and training materials to support university faculty and cooperating
teachers. These scoring materials were also created for Tasks 2, 3, and 4.
Task 1: Knowledge of Students and the Learning Environment
The following Missouri Teacher Standards and Quality Indicators represent the focus of this
task.
The evidence the candidate submits must address, and is scored according to, the following:
Standard 2, Quality Indicators 2C4, 2C5, and 2C6
Standard 3, Quality Indicator 3C2
Standard 4, Quality Indicator 4C2
Standard 5, Quality Indicators 5C1 and 5C3
Standard 6, Quality Indicator 6C2
Standard 8, Quality Indicator 8C3
Standard 9, Quality Indicator 9C3
Tasks 2 and 3: The second and third tasks contribute to the calculation of the overall MoPTA
score, and candidates complete them during the middle of the clinical experience. One task
focuses on Assessment and Data Collection to Measure and Inform Student Learning (Task 2)
and the other on Designing Instruction for Student Learning (Task 3). Candidates complete both
within the parameters of a current subject-specific unit of teaching, though the two tasks will not
focus on the same lesson. Candidates will submit task commentaries and artifacts using an online
submission system. Trained educators score both tasks.
Task 2: Assessment and Data Collection to Measure and Inform Student Learning
The following Missouri Teacher Standards and Quality Indicators represent the focus of this
task.
The evidence the candidate submits must address, and is scored according to, the following:
Standard 1, Quality Indicator 1C5
Standard 2, Quality Indicators 2C2, 2C5, and 2C6
Standard 3, Quality Indicators 3C1 and 3C3
Standard 7, Quality Indicators 7C1, 7C2, and 7C4
Standard 8, Quality Indicator 8C1
Task 3: Designing Instruction for Student Learning
The following Missouri Teacher Standards and Quality Indicators represent the focus of this
task.
The evidence the candidate submits must address, and is scored according to, the following:
Standard 1, Quality Indicator 1C2
Standard 2, Quality Indicators 2C3, 2C4, 2C5, and 2C6
Standard 3, Quality Indicators 3C1, 3C2, and 3C3
Standard 4, Quality Indicators 4C1, 4C2, and 4C3
Standard 5, Quality Indicator 5C1
Standard 6, Quality Indicator 6C4
Standard 7, Quality Indicators 7C1, 7C2, and 7C4
Standard 8, Quality Indicator 8C1
Task 4: The fourth task for the MoPTA-Video assessment is Implementing and Analyzing
Instruction to Promote Student Learning and requires the submission of a 15-minute video
recording in addition to the task commentary and artifacts. The fourth task for the MoPTA
Non-Video assessment is Planning, Implementing, Analyzing and Adjusting Instruction to Promote
Student Learning. Candidates submit the task commentary, artifacts, and (for MoPTA-V) the
video recording using an online submission system, and the task is centrally scored by trained
educators.
Task 4: Implementing and Analyzing Instruction to Promote Student Learning (Video)
The following Missouri Teacher Standards and Quality Indicators represent the focus of this
task.
The evidence the candidate submits must address, and is scored according to, the following:
Standard 1, Quality Indicators 1C1 and 1C2
Standard 2, Quality Indicators 2C4, 2C5, and 2C6
Standard 3, Quality Indicator 3C2
Standard 4, Quality Indicators 4C1 and 4C3
Standard 5, Quality Indicators 5C1 and 5C2
Standard 6, Quality Indicators 6C1 and 6C2
Standard 7, Quality Indicators 7C1, 7C2, and 7C4
Standard 8, Quality Indicator 8C1
The video in Task 4 focuses on the teacher candidate’s ability to implement and use research-
based instructional strategies that have the potential to impact student learning. The development
team created prompts that allow the candidate to show how he or she adapted instruction to meet
the needs of individual students. It also allows the candidate to reflect on his or her practice.
Task 4: Planning, Implementing, Analyzing and Adjusting Instruction to Promote Student Learning
(Non-Video)
The following Missouri Teacher Standards and Quality Indicators represent the focus of this
task.
The evidence the candidate submits must address, and is scored according to, the following:
Standard 1, Quality Indicators 1C1 and 1C2
Standard 2, Quality Indicators 2C1, 2C2, 2C4, and 2C5
Standard 3, Quality Indicators 3C2 and 3C3
Standard 4, Quality Indicator 4C1
Standard 7, Quality Indicators 7C1, 7C2, and 7C4
Standard 8, Quality Indicator 8C1
The development team also created a submission schedule, provided to teacher candidates,
that guides them to submit tasks at appropriate times during the clinical experience.
Reflection
Standard 8 addresses the importance Missouri places on the role of reflection for both teacher
candidates and beginning teachers. The standard reads: “The teacher is a reflective practitioner
who continually assesses the effects of choices and actions on others. The teacher actively seeks
out opportunities to grow professionally to improve learning for all students.” For teacher
candidates, quality indicators 1 and 2 are
• self-assessment and improvement: The teacher candidate reflects on teaching practices to refine his/her instructional process.
• professional learning: The teacher candidate identifies and reflects on the array of professional learning opportunities, including those offered by educator preparation programs, school districts, professional associations, and/or other opportunities.
Missouri educators, along with ETS Assessment Development staff, developed a Reflective
Practice Handbook that the teacher candidate can use as he or she completes each task, including
Task 1. The reflection is to include lessons learned as well as what was successful and how that
will affect the teacher candidate’s future practice and student learning.
The Professional Growth Plan
After completing the tasks and the clinical experience, the teacher candidate creates a
professional growth plan in conjunction with the supporting university faculty member and the
cooperating teacher, referencing the self-evaluation forms and the student survey. This plan
should designate areas of professional development, based on professional needs, on which the
teacher candidate will focus when entering the teaching workforce. The format of the
professional growth plan reflects the current plan used by Missouri educators. The candidate
can use the plan when he or she enters the classroom.
The Missouri School Leader, Librarian, and School Counselor Performance Assessments
The development of the Missouri School Leader, Librarian, and School Counselor Performance
Assessments followed the same process delineated previously for MoPTA. These assessments
are summative in nature consisting of tasks, commentary, and relevant artifacts. The tasks reflect
multiple Missouri standards. One of the tasks for each assessment also includes a 15-minute
video.
Missouri School Leader Performance Assessment (MoSLPA). Missouri building/district
leaders, working with ETS Assessment Development staff, developed a Missouri School Leader
Performance Assessment based on the six Missouri Leader Standards. This assessment focuses
on the knowledge and skills a candidate needs to become a principal.
Missouri Librarian Performance Assessment (MoLPA). Missouri librarians working with
ETS Assessment Development staff developed the Missouri Librarian Performance Assessment
based on the seven Missouri Standards for School Librarians. This assessment focuses on the
school librarian as information specialist, teacher, instructional partner, and program
administrator, stressing the promotion of reading, literacy, and technology skills.
Missouri School Counselor Performance Assessment (MoSCPA). Missouri school counselors
working with ETS Assessment Development staff developed the Missouri School Counselor
Performance Assessment based on the five Missouri School Counselor Standards. This
assessment focuses on the school counselor’s knowledge of student development and behavior,
and his or her ability to collaborate to create and maintain a guidance and counseling program,
develop collaborative professional relationships, and serve as a change agent who advocates and
demonstrates ethical and professional conduct.
The development process for these three assessments mirrored the development activities for the
MoPTA assessment. A development team first spent time examining the appropriate Missouri
standards, determining which standards/elements are most appropriate to cluster together, and
then building an assessment around those clusters of standards. Development of these
assessments also required a small-scale field test (tryout) as well as a large-scale field test (pilot)
followed by a field test scoring session.
Each of the tasks within each assessment includes a written commentary in which the candidate
responds to a series of prompts and provides related artifacts. A development team of Missouri
educators, with ETS facilitation, determined the prompts, the length of the written commentary,
and the number of artifacts.
As with the MoPTA, candidates use an online submission system to create a portfolio in which
they upload the documents (task commentary and artifacts) as well as video. ETS developed this
online submission system to (1) gather documents and videos in order to create a candidate
portfolio, and (2) support distributed scoring.
For MoPTA, the teacher candidate submits on a semester basis with scores delivered before the
end of a semester. School leader and school counselor candidates can submit twice a year.
School librarian candidates submit on a yearly basis.
Field Testing
For performance-based assessments we generally conduct two field tests. The first field test is to
try out the assessment tasks with a smaller targeted group of individuals. The purpose is to
determine at an early point in the development whether or not there are fundamental flaws in the
desired approach of each assessment task. This allows for necessary corrective action before the
second field test, which is larger and more formal.
In the description that follows, the initial field test is referred to as the “tryout” and second field
test is referred to as the “pilot.” We will use those terms in describing our field-testing routine
throughout the text for these standards-based assessments.
Small-scale Field Test: The Tryout
Development team members completed the initial tryout of the assessment tasks. In
addition, we asked each team member to recruit at least two colleagues to also complete the
tryout of the assessment tasks.
The purpose of the tryout was to obtain insight into the
• clarity of the task directions
• value of the evidence elicited
• alignment of the evidence elicited to the directions and the standards
• thoroughness of the responses generated by the assessment directions
• alignment to the draft rubrics
Development Team Meeting: Tryout Review and Field Preparation
The primary focus at this meeting was to use the results of the tryout to finalize the assessment
tasks and rubrics before the formal pilot of that assessment. Having this early opportunity to
determine whether or not assessment directions are guiding candidates properly in building their
response promotes a more efficient and effective pilot. The team also spent time on rubric
development, cross-checking the tasks against the Missouri standards they are supposed to
measure, and aligning the standards and tasks to the draft rubrics as they were developed.
Large-scale Field Test: The Pilot
We asked the Department to convene a review panel to review the field-test plans, assessment
directions, and draft rubrics prior to the pilot. Once the panel provided approval, we piloted the
assessment using a larger number of participants than the tryout.
The pilot pool was selected to be diverse and representative across:
• gender
• race/ethnicity
• content area taught
• developmental level taught
• regional location
ETS delivered the pilot electronically, using our online submission system. The pilot allowed us
to test the online submission system to answer the following questions specific to the Missouri
standards-based assessments:
• Did we enter the tasks properly into the system? Do the directions and tasks display properly?
• Will users see the tasks in the proper sequence?
• Can candidates access and navigate the system easily?
• Are there any issues with entering evidence, submitting responses, or uploading attachments and artifacts?
• Are the directions and instructions clear and easy to follow for each section?
• Does the task measure skills and knowledge that are important for meeting standards? In other words, does the content reflect what candidates should know and be able to do relative to the Missouri standards?
• Are the tasks described equally accessible and applicable to all teachers within the certification areas regardless of ethnicity, gender, race, disability, or teaching context (e.g., urban, rural, and suburban; privileged; at-risk; homogeneous; diverse)?
• Do the activities associated with the task represent authentic (realistic) practices?
• Was the content and nature of the task fair?
• Was any information missing that should be added?
• Does each task adequately address the age range of the learners?
• Are the evaluation criteria equally applicable to all teacher candidates within the certificate area regardless of ethnicity, gender, race, disability, or teaching context (e.g., urban, rural, and suburban; privileged; at-risk; homogeneous; diverse)?
• Are the described tasks free of language and/or content that reinforce stereotypes of teachers or students (e.g., consider diversity issues related to gender, race, ethnicity, disability)?
In addition to submitting the pilot responses, the participants supplied personal information
by completing a Background Information Questionnaire (BIQ). They also provided feedback
regarding the ease of use of the system, the clarity of task directions, the time taken to complete
the assessment, and additional topics determined with our clients.
The pilot responses underwent formative scoring, after which we made final revisions to the task
directions and materials, as well as to the scoring materials, including rubrics.
After ETS scored the pilot responses as part of the formative scoring process, we gathered
performance data for statistical analysis experts to analyze. These experts checked inter-rater
reliability, internal consistency reliability, and issues related to difficulty (tasks that were too
hard or too easy).
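To make the inter-rater reliability check concrete, here is a minimal sketch of the kind of agreement summary such an analysis might start from, assuming double-scored step ratings on the four-point rubric described later in this manual. The function name, data, and thresholds below are illustrative assumptions, not the operational ETS analysis.

```python
# Minimal sketch (hypothetical, not the operational ETS analysis): summarize
# rater agreement for double-scored pilot responses on a 0-4 step rubric.
from typing import List, Tuple

def agreement_rates(pairs: List[Tuple[int, int]]) -> Tuple[float, float]:
    """Return (exact, adjacent) agreement rates for paired step scores.

    exact    = proportion of pairs in which both raters gave the same score
    adjacent = proportion of pairs in which the scores differ by at most 1
    """
    if not pairs:
        raise ValueError("no score pairs supplied")
    exact = sum(1 for a, b in pairs if a == b) / len(pairs)
    adjacent = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)
    return exact, adjacent

# Hypothetical double scores for one step across eight pilot responses.
double_scores = [(3, 3), (2, 3), (4, 4), (1, 2), (3, 3), (2, 2), (4, 3), (3, 3)]
exact, adjacent = agreement_rates(double_scores)
print(f"exact agreement: {exact:.2f}, adjacent agreement: {adjacent:.2f}")
```

A fuller analysis would typically add chance-corrected statistics such as weighted kappa; this sketch shows only the simplest agreement summaries a reviewer might compute first.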
Pilot Scoring, Formative Review, and Benchmarking
ETS recruited 66 educators, including the development team members, to score the pilot
responses and to validate the task directions as appropriate or make suggestions and
modifications to the tasks and scoring rubrics in light of the responses. The emphasis during this
process was not on the performance of the candidate, but rather on the performance of each task.
The pilot scoring session was conducted as a five-day meeting: ETS staff spent three days
training the raters and two days scoring the pilot responses. Raters double scored the various
submissions. Scoring took place electronically, using the Online Network for Evaluation (ONE),
which is also used for operational scoring. A formative review based on the raters’ written
feedback took place immediately after scoring.
At the end of pilot scoring, the team reviewed the data and information acquired and made final
revisions to the tasks and rubrics to prepare them for the first official candidates. ETS prepared
final tasks and rubrics and met with DESE staff to review them prior to uploading them to the
online submission system.
Training Approach
During formative review, we tightly linked the development process and the scoring process.
ETS Assessment Development staff have developed training protocol models that raters use in
live scoring. Pairs of raters work together as a task-specific team that chooses benchmarks. The
primary outcome of the formative review is to determine whether the task is working as
designed. These activities attempt to answer questions like:
• Is there the anticipated connection between the standards, the evidence collected by the response, and what is valued in scoring the task? More specifically, formative review verifies the critical alignment between the standards of focus and the evidence elicited from educators through the assigned tasks.
• Do the tasks allow teacher candidates to produce and document the necessary evidence that speaks to the standards in meaningful ways?
• Does the rubric both value the evidence and support raters in making sense of the evidence in a way that leads to consistent and valid judgments on the accomplishment of the teacher candidate relative to the specific task?
Evaluation of the Evidence
Assessment Development specialists generated and documented the professional judgments of
the review teams. The Assessment Development specialists led these sessions and put the teams
through a strict process of considering feedback through a focused discussion of the explicit
requirements of the ECD process. We asked team members to consider the evidence submitted
by the pilot participants, whether tasks elicited the responses expected, and whether the intended
connections between standards, task directions, and the scoring rubrics were maintained.
Standard Setting Studies
To support the decision-making process for DESE in establishing a passing score (cut score) for
each of the performance assessments, research staff from ETS designed and conducted a
standard-setting study. Separate standard-setting studies were conducted for the following
assessments:
• Missouri Pre-Service Teacher Assessment (MoPTA) – Video and Non-Video
• Missouri School Counselor Performance Assessment (MoSCPA)
• Missouri School Leader Performance Assessment (MoSLPA)
• Missouri Librarian Performance Assessment (MoLPA)
Each study provides a recommended passing score, which represents the combined judgments of
a group of experienced educators. ETS provides only this recommendation; it does not set
passing scores. DESE is responsible for establishing the operational passing score in accordance
with applicable regulations.
Panel Formation
Standard-setting studies provide recommended passing scores, which represent the combined
judgments of a group of experienced educators. For the standard-setting studies for the Missouri
performance assessments, DESE recommended panelists with (a) experience as either educators in
the particular area or college faculty who prepare educators in the area and (b) familiarity with the
knowledge and skills required of beginning educators. ETS selects panelists to represent the diversity
(race/ethnicity, gender, geographic setting, etc.) of the educator population in the state. Each panel
includes 10-20 educators, the majority of whom are practicing, licensed educators in the area covered
by the test.
Standard Setting Method for Missouri Performance Assessments
Each study began with an in-depth discussion of the assessment, the tasks, scoring rubrics, and
candidate handbooks. Panelists were asked to take notes on the tasks and steps, focusing on what
is being measured and the challenge the tasks pose for beginning educators. Then panelists
discussed the state standards, particularly the quality indicators. The quality indicators describe
a continuum of expected performance, from “candidate” to “distinguished educator.” The
candidate-level quality indicators define what is expected of a candidate who is “just” qualified to begin practice.
The just-qualified candidate description plays a central role in standard setting;3 the goal of the
standard-setting process is to identify the test score that aligns with this description.4 The
panelists’ understanding of the assessments and the expectations for the just-qualified candidate
were crucial in the standard-setting process.
3 Perie, M. (2008). A guide to understanding and developing performance-level descriptors. Educational Measurement: Issues and Practice, 27, 15–29.
4 Tannenbaum, R. J., & Katz, I. R. (2013). Standard setting. In K. F. Geisinger (Ed.), APA Handbook of Testing and Assessment in Psychology. Washington, District of Columbia: American Psychological Association.
Standard-setting methods are selected based on the characteristics of the particular test. Given
the structure of the performance assessments—each assessment consists of tasks and each task
consists of steps—a standard setting method was designed to reflect this nested structure. For the
various Missouri performance assessments, a variation on a multiple-round extended Angoff
method5 was used. Multiple rounds of judgments are collected, with panelist discussion between
rounds.
The panelists made independent judgments at the step level for their first round of judgments.
They considered each step in a task, the rubrics, and exemplars. Then the panelists independently
judged, for each step within the task, the score a just-qualified candidate would likely receive.
The task-level result from Round 1 was the simple sum of the recommended likely scores for
each step.
For the second round of judgments, the panelists reviewed a summary of results from Round 1.
They discussed their step-level judgments, then the task-level judgments. They were asked
whether the task-level score from Round 1 reflected the likely performance of the just-qualified
candidate, considering the various patterns of step scores that are likely to result in a given task
score. If sufficient field test data were available, the panel reviewed the distribution of candidate
scores from the field test as a reasonableness check on the Round 1 judgments. The panelists then
made a holistic task-level judgment as their Round 2 judgment. The assessment-level result
from Round 2 was the weighted sum of the recommended likely scores for each task.
For the third round of judgments, the panelists reviewed a summary of results from Round 2.
They discussed their task-level judgments, then the assessment-level judgments. They were asked
whether the assessment-level score from Round 2 reflected the likely performance of the just-qualified
candidate, considering the various patterns of task scores that are likely to result in a given
assessment score. Again, if sufficient field test data were available, the panel reviewed the
distribution of candidate scores from the field test as a reasonableness check on the Round 2
judgments. The panelists then made a holistic assessment-level judgment as their Round 3
judgment. The results from Round 3 were the final judgments, and the panel’s average was the
recommended passing score reported to DESE.
5 Tannenbaum, R. J., & Katz, I. R. (2013). Standard setting. In K. F. Geisinger (Ed.), APA Handbook of Testing and
Assessment in Psychology. Washington, District of Columbia: American Psychological Association.
Figure 1. Standard-setting process: review tasks, rubrics, and exemplars → Round 1 (independent, step-level judgments) → summarize Round 1 judgments and discuss step- and task-level results → Round 2 (adjust task-level passing score based on discussion) → summarize Round 2 judgments and discuss task- and test-level results → Round 3 (adjust test-level passing score based on discussion) → recommended passing score.
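The arithmetic behind the three rounds can be made concrete with a short sketch. Assuming hypothetical panelist judgments, it sums step-level judgments into Round 1 task-level results, rolls holistic task-level judgments into an assessment-level result using the double weighting of the final task, and averages the panel's final Round 3 judgments into the recommended passing score. All names, weights, and numbers are illustrative, not values from an actual Missouri study.

```python
# Illustrative sketch of the multiple-round extended Angoff aggregation
# described above; all judgments and weights are hypothetical.
from statistics import mean

def round1_task_score(step_judgments):
    """Round 1: task-level result = simple sum of the step-level judgments."""
    return sum(step_judgments)

def assessment_level_result(task_judgments, weights):
    """Rounds 2-3: assessment-level result = weighted sum of task judgments."""
    return sum(j * w for j, w in zip(task_judgments, weights))

# One hypothetical panelist's Round 1 judgments: the likely score of a
# just-qualified candidate on each step, grouped by task.
steps_by_task = [[3, 2, 3], [3, 3, 2, 3], [2, 3, 3, 3]]
task_results = [round1_task_score(s) for s in steps_by_task]
print("Round 1 task-level results:", task_results)           # [8, 11, 11]

# Round 2: holistic task-level judgments rolled up, final task double weighted.
print("Round 2 assessment-level result:",
      assessment_level_result(task_results, [1, 1, 2]))      # 41

# Round 3: the panel's final holistic assessment-level judgments are averaged
# to produce the recommended passing score reported to DESE.
round3_judgments = [40.0, 42.0, 38.0, 41.0, 39.0]
print("Recommended passing score:", mean(round3_judgments))  # 40.0
```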
Standard-Setting Reports
Following each study, ETS provided DESE with a technical report. The technical report
describes the content and format of the performance assessment, the standard-setting processes
and methods, and the results of the standard-setting study. The standard setting technical reports
can be made available upon request, subject to DESE’s approval.
Scoring Methodology
Task 1 for MoPTA is formative and is scored locally by the candidate’s Educator Preparation
Program (EPP). Missouri educators, working with ETS staff, developed the task and the scoring
rubrics for the task, focusing on each of the standards/indicators addressed by the task. Scores
are made available to the teacher candidate, the supporting faculty, and the cooperating teacher.
No scores are reported to the Department, and Task 1 scores are not used to calculate the final
MoPTA score.
The other three tasks are centrally scored by trained educators using ETS’s Online Network for
Evaluation (ONE). Within a task, each step receives a score. Scores for the steps are combined to
determine the task score. Scores for the three tasks are then combined to produce an overall
score. The final task is weighted so that it contributes more to the overall score than the other
two tasks. The passing or cut scores were set by standard setting panels composed of Missouri
educators (see the section on Standard Setting above). There are no passing scores for the
individual tasks.
Scoring Model
Centralized scoring. ETS employs and trains the raters who score the Missouri Performance
Assessments using ETS’s Online Network for Evaluation (ONE). Raters who are faculty at
Missouri EPPs are not permitted to score candidates attending their EPP, and cooperating
teachers may not score submissions from candidates completing their clinical experience in
their districts. Submissions are double scored to monitor scoring quality, and ETS provides full
support to the scoring process — scoring materials (e.g., rubrics), training materials, and scoring
oversight (e.g., scoring leaders).
Scoring Process
When scoring tasks, raters review the written commentary the candidate entered in the online
submission system and the artifacts that the candidate uploaded, including the video for MoPTA
with Video (MoPTA-V), MoSLPA, MoLPA, and MoSCPA.
Calculating Step Scores (MoPTA)
For MoPTA there are 11 steps. Step scores are determined using a four-point rubric. Score levels
for each rubric, reflecting the quantitative and qualitative elements of the evidence, are defined
as follows:
Score 4: Consistent and thorough
Score 3: Adequate and appropriate
Score 2: Partial and inconsistent
Score 1: Minimal and ineffective
Score 0: Blank, insufficient evidence, no required artifacts linked to written commentary
The rubric documents contain the task-specific rubrics used during scoring to evaluate the
elements of the evidence provided for each step.
Task 2 Rubric (PDF)
Task 3 Rubric (PDF)
Task 4 Rubric — MoPTA with Video (PDF)
Task 4 Rubric — MoPTA Non-Video (PDF)
Steps that are determined to be nonscorable receive a score of zero.
The scores assigned by the raters to each step are averaged to determine the final step score.
Calculating Task Scores
Step scores are summed to determine the task score for each of the three tasks. The score for
Task 4 is multiplied by two to reflect the double weighting of the task. Tasks that are not
submitted receive a score of zero.
At least six raters contribute to scoring the assessment, and under no circumstances does the
score for any of the tasks depend entirely on one individual rater.
Calculating the Overall Assessment Score
The three task scores are summed to determine the overall assessment score. As noted above, the
score for Task 4 is doubled.
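To make the arithmetic concrete, the sketch below assembles an overall MoPTA score from hypothetical rater scores. This is a minimal illustration under assumed inputs (the function names, data layout, and ratings are invented for the example), not ETS's operational scoring code. The same pattern applies to MoSLPA, MoSCPA, and MoLPA, described below, where Task 3 rather than Task 4 carries the double weight.

```python
from statistics import mean

def step_score(ratings):
    """Average the independent rater scores (0-4) assigned to one step."""
    return mean(ratings)

def task_score(step_ratings):
    """Sum the averaged step scores to get the raw task score.
    A task that was not submitted contributes zero."""
    if not step_ratings:
        return 0.0
    return sum(step_score(r) for r in step_ratings)

def overall_score(tasks, weighted_task):
    """Sum the task scores, doubling the designated task
    (Task 4 for MoPTA; Task 3 for MoSLPA, MoSCPA, and MoLPA)."""
    return sum(2 * s if name == weighted_task else s
               for name, s in tasks.items())

# Hypothetical MoPTA candidate with two ratings per step
# (3 steps in Task 2, 4 steps each in Tasks 3 and 4 -- 11 steps in all).
tasks = {
    "Task 2": task_score([[3, 3], [2, 3], [3, 4]]),
    "Task 3": task_score([[3, 3], [3, 2], [4, 3], [3, 3]]),
    "Task 4": task_score([[3, 4], [3, 3], [2, 3], [3, 3]]),
}
print(overall_score(tasks, weighted_task="Task 4"))  # 9 + 12 + 2*12 = 45.0
```

Under this scheme the maximum possible score is 4 × (3 + 4) + 2 × (4 × 4) = 60, which matches the 0-60 possible range reported in Appendix A.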
Calculating Step Scores (MoSLPA, MoSCPA, and MoLPA)
For MoSLPA, MoSCPA, and MoLPA there are 12 steps. Step scores are determined using a
four-point rubric. Score levels for each rubric are defined as follows:
Score | Quantitative and Qualitative Elements of Evidence
Score of 4 | Consistent and thorough
Score of 3 | Adequate and appropriate
Score of 2 | Partial and inconsistent
Score of 1 | Minimal and ineffective
Score of 0 | Blank, insufficient evidence, no required artifacts linked to written commentary
The rubric documents contain the task-specific rubrics used during scoring to evaluate the
elements of the evidence provided for each step.
MoSLPA Rubrics
Task 1 Rubric (PDF)
Task 2 Rubric (PDF)
Task 3 Rubric (PDF)
MoSCPA Rubrics
Task 1 Rubric (PDF)
Task 2 Rubric (PDF)
Task 3 Rubric (PDF)
MoLPA Rubrics
Task 1 Rubric (PDF)
Task 2 Rubric (PDF)
Task 3 Rubric (PDF)
Steps that are determined to be nonscorable receive a score of zero.
The scores assigned by the raters to a step are averaged to determine the final step score.
Calculating Task Scores
Step scores are summed to determine the task score for each of the three tasks. The score for
Task 3 is multiplied by two to reflect the double weighting of the task. Tasks that are not
submitted receive a score of zero.
At least six raters contribute to scoring the assessment, and under no circumstances does the
score for any of the tasks depend entirely on one individual rater.
Calculating the Overall Assessment Score
The three task scores are summed to determine the overall assessment score. As noted above, the
score for Task 3 is doubled.
Score Reports
As candidates submit each of their completed tasks into the online submission system, raters
score the tasks within a relatively short turnaround time. Score reports are available via the
candidate’s online account approximately four weeks after each task deadline. The candidate and
the EPP faculty member will have access to the scores on each task and can use them to engage
in reflective practice. This will help candidates become more skillful practitioners as they
continue through the clinical experience. The final score report, including a composite score for
each standard/indicator, includes passing status and is available to candidates through their
online account approximately four weeks after the deadline date for the final task. EPPs and the
Department access candidate score reports through the ETS Data Manager (EDM). If a
candidate is not successful, the submission schedule will afford the teacher candidate the
opportunity to resubmit any task(s). Following discussion between the candidate and the EPP
faculty member, the candidate has the opportunity to rewrite parts of tasks and/or choose to do
another lesson, possibly with video.
Quality Assurance Measures
All raters are carefully trained and follow strict scoring procedures, and each submission is
scored by multiple raters. If a candidate's cumulative score nonetheless falls below the
designated passing score for the assessment, the candidate is eligible to resubmit any or all of
the three tasks for a fee during the resubmission window immediately following the original
submission.
Appropriate Score Use
ETS is committed to furthering quality and equity in education by providing valid and fair tests,
research, and related services. Central to this objective is helping those who use our tests to
understand what are considered their proper uses. The booklet Proper Use of The Praxis Series
and Related Assessments defines proper test use as use supported by adequate evidence for the
intended interpretation of the test and for the decisions and outcomes rendered on the basis of
test scores.
Proper assessment use is a joint responsibility of ETS as the test developer, and of states,
agencies, associations, and institutions of higher education as the test users. The Praxis program
is responsible for developing valid and fair assessments in accordance with technical guidelines
established by the American Educational Research Association, the American Psychological
Association, and the National Council on Measurement in Education (Standards for
Educational and Psychological Testing, 2014).
Test users are responsible for selecting a test that meets their credentialing or related needs, and
for using that test in a manner consistent with the test’s intended and validated purpose. Test
users must validate the use of a test for purposes other than those intended and supported by
existing validity evidence. In other words, they must be able to justify that the intended alternate
use is acceptable.
Both ETS and test users share responsibility for minimizing the misuse of assessment
information and for discouraging inappropriate assessment use.
Psychometric Properties
Introduction
ETS's Statistical Analysis division has developed procedures that support valid and reliable
interpretation of test scores for the Missouri Performance Assessments.
Software developed at ETS provides rigorously tested routines to produce task level and test
level statistics at various stages of each test administration, which enables candidates to receive
feedback and make improvements to their responses before final assessment scores are reported.
The analysis process is summarized in the two tables below. After the initial submission, task
score distributions are examined to ensure scores are within range and reasonable. Task ratings
from different raters are compared and rater agreement statistics are calculated to monitor the
rating quality and to provide feedback for training and/or improvement of the scoring process.
Total test score distributions are also examined to ensure scores are within range and reasonable,
and to identify any anomaly for investigation. Test reliability is estimated to evaluate the overall
quality of the assessment. Based on the task level feedback from the initial submission,
candidates can revise their work and resubmit any or all tasks. The task and test level analyses
are repeated before the release of the final task and test scores.
MoPTA-V and MoPTA-NV

Original Submission
Analysis Window 1: Task 2 & Task 3
Analysis Window 2: Task 4
Analysis Window 3: Master

Resubmission
Analysis Window 1: Tasks
Analysis Window 2: Master

MoSLPA, MoSCPA, and MoLPA

Original Submission
Analysis Window 1: Tasks
Analysis Window 2: Master

Resubmission
Analysis Window 1: Tasks
Analysis Window 2: Master
Test Statistics
Reliability
Reliability is a measure of the extent to which scores are expected to be consistent over time. For
this assessment, that is the extent to which scores are likely to be the same, regardless of when
the test was administered and which raters scored the tasks. Reliability coefficients may range
from 0 to 1. The higher the reliability coefficient, the more likely individuals would be to obtain
very similar scores if they were retested. In this report, Cronbach's alpha (Cronbach, 1951) is
used as a measure of the internal consistency of the Missouri assessments. The Cronbach's alpha of the total
assessment score, when calculated on the basis of the task scores (N=3), ranged from .67 for
MoPTA-NV to .87 for MoLPA. This assessment reliability of 0.67 is reasonable, but somewhat
lower than would be ideal for an initial licensure assessment.
Standard Error of Measurement
The standard error of measurement (SEM) is an estimate of the standard deviation of the
distribution of observed scores around a theoretical true score. The SEM can be interpreted as an
index of expected variation if the same test taker could be tested repeatedly on different forms of
the same test without benefiting from practice or being hampered by fatigue. The SEM of a raw
score is computed from the alpha reliability estimate (rx) and the standard deviation (SDx) of the
scores; the formula is included in Appendix A.
Inter-Rater Reliability and the Standard Error of Scoring
The inter-rater reliability coefficient describes the reliability of the scoring process when
constructed response items are scored independently by two raters. It is an estimate of the
correlation between the scores resulting from two independent replications of the same scoring
process. Because it does not take into account the adjudication of discrepancies between the first
and second ratings, it is a slight underestimate of the correlation of the two complete sets of
scores.
The standard error of scoring (SES) is an estimate of the standard deviation of the distribution of
observed scores around a theoretical true score. The SES can be interpreted as an index of
expected variation if the same test taker’s responses were scored repeatedly by different rater
pairs on the same test responses. The SES of a raw score is computed from the inter-rater
reliability estimate (rscoring) and the standard deviation (SDx). The multi-step calculation process
is included in Appendix A.
Score Reporting
Score reporting is the process by which tests are scored and test results are reported to test takers,
institutions, and state agencies.
Test Taker Score Reports -- MoPTA
Test takers access their score reports via their online accounts. The MoPTA score report contains
valuable information, including
a summary page indicating the score for each task, the cumulative score for the
assessment, and passing status
a detailed page for each task indicating scores for each step within a task and feedback
for each step score
See a MoPTA with Video (MoPTA-V) Sample Test-taker Score Report (PDF).
See a MoPTA Non-Video (MoPTA-NV) Sample Test-taker Score Report (PDF).
Pass/Fail
Scores for each task are summed to determine the overall assessment score. The passing score
for the MoPTA will be established in August 2016. This passing score will go into effect starting
with the fall 2016 submission window.
Reviewing the Feedback for Step Scores
Each task of the assessment contains multiple steps.
Task 2 contains three steps
Task 3 contains four steps
Task 4 contains four steps
The test taker score report includes feedback for each step score within a task. This feedback
guides test takers to improve the quality of evidence in their step responses
addresses the possible qualitative and quantitative level of the evidence provided in the
step responses
is connected to the language of the task rubrics and the language of the guiding prompts
is helpful in deciding whether or not to resubmit a task
When making decisions about resubmission, candidates are encouraged to
Read, review and reflect on the feedback provided with each step score.
Reread the rubric at the score level obtained as well as at the next higher level(s).
View the feedback for all steps and score levels of the MoPTA
o Task 2 Score Report Feedback (PDF)
o Task 3 Score Report Feedback (PDF)
o Task 4 Score Report Feedback
MoPTA-V Task 4 Score Report Feedback (PDF)
MoPTA-NV Task 4 Score Report Feedback (PDF)
Read some appropriate examples from the Library of Examples.
Steps with a Score of Zero
If a score report contains a step score of zero, the step response may
be blank
not address the activity and guiding prompts for the step
address the guiding prompts in the written commentary, but may not include links to the
required artifact(s) or the artifact(s) that are linked may not provide the evidence that is
required
not reflect the required content focus of literacy and numeracy (elementary education and
early childhood candidates)
Test Taker Score Reports – MoSLPA
Test takers access their score reports via their online accounts. The Missouri School Leader
Performance Assessment score report contains valuable information, including
a summary page indicating scores for each task, the cumulative score for the assessment,
and passing status
a detailed page for each task indicating scores for each step within a task and feedback for
each step score
See a MoSLPA Sample Test-taker Score Report (PDF).
Pass/Fail
Scores for each task are summed to determine the overall assessment score. Effective with the
fall 2015 submission window, the passing score for the MoSLPA is an overall assessment score
of 41.
Reviewing the Feedback for Step Scores
Each task of the assessment contains four steps, for a total of 12 steps.
The score report includes feedback for each step score within a task. This feedback
guides test takers to improve the quality of evidence in their step responses
addresses the possible qualitative and quantitative level of the evidence provided in the
step responses
is connected to the language of the task rubrics and the language of the guiding prompts
is helpful in deciding whether or not to resubmit a task
When making decisions about resubmission, test takers are encouraged to
read, review and reflect on the feedback provided with each step score.
re-read the rubric at the score level obtained as well as at the next higher level(s).
view the feedback for all steps and score levels of the MoSLPA.
o Task 1 Score Report Feedback (PDF)
o Task 2 Score Report Feedback (PDF)
o Task 3 Score Report Feedback (PDF)
read some appropriate examples from the Library of Examples.
Steps with a Score of Zero
If a score report contains a step score of zero, the step response may
be blank
not address the activity and guiding prompts for the step
address the guiding prompts in the written commentary, but may not include links to the
required artifact(s) or the artifact(s) that are linked may not provide the evidence that is
required
Candidate Score Reports – MoSCPA
Test takers access their score reports via their online accounts. The Missouri School Counselor
Performance Assessment score report contains valuable information, including:
a summary page indicating scores for each task, the cumulative score for the assessment,
and passing status
a detailed page for each task indicating scores for each step within a task and feedback
for each step score
See a MoSCPA Sample Test-taker Score Report (PDF).
Pass/Fail
Scores for each task are summed to determine the overall assessment score. The passing score
for the MoSCPA will be established in August 2016. This passing score will go into effect
starting with the fall 2016 submission window.
Reviewing the Feedback for Step Scores
Each task of the assessment contains four steps, for a total of 12 steps.
The candidate score report includes feedback for each step score within a task. This feedback
guides test takers to improve the quality of evidence in their step responses
addresses the possible qualitative and quantitative level of the evidence provided in the
step responses
is connected to the language of the task rubrics and the language of the guiding prompts
is helpful in deciding whether or not to resubmit a task
When making decisions about resubmission, test takers are encouraged to
read, review and reflect on the feedback provided with each step score.
reread the rubric at the score level obtained as well as at the next higher level(s).
view the feedback for all steps and score levels of the MoSCPA.
o Task 1 Score Report Feedback (PDF)
o Task 2 Score Report Feedback (PDF)
o Task 3 Score Report Feedback (PDF)
read some appropriate examples from the Library of Examples.
Steps with a Score of Zero
If a score report contains a step score of zero, the step response may
be blank
not address the activity and guiding prompts for the step
address the guiding prompts in the written commentary, but may not include links to the
required artifact(s) or the artifact(s) that are linked may not provide the evidence that is
required
Test Taker Score Reports – MoLPA
Test takers access their score reports via their online accounts. The Missouri Librarian
Performance Assessment score report contains valuable information, including
a summary page indicating scores for each task, the cumulative score for the assessment,
and passing status
a detailed page for each task indicating scores for each step within a task and feedback for
each step score
See a MoLPA Sample Test-taker Score Report (PDF).
Pass/Fail
Scores for each task are summed to determine the overall assessment score. The passing score
for the MoLPA will be established in August 2016. This passing score will go into effect starting
with the fall 2016 submission window.
Reviewing the Feedback for Step Scores
Each task of the assessment contains four steps, for a total of 12 steps.
The test taker score report includes feedback for each step score within a task. This feedback
guides test takers to improve the quality of evidence in their step responses
addresses the possible qualitative and quantitative level of the evidence provided in the
step responses
is connected to the language of the task rubrics and the language of the guiding prompts
is helpful in deciding whether or not to resubmit a task
When making decisions about resubmission, test takers are encouraged to
read, review and reflect on the feedback provided with each step score.
reread the rubric at the score level obtained as well as at the next higher level(s).
view the feedback for all steps and score levels of the MoLPA.
o Task 1 Score Report Feedback (PDF)
o Task 2 Score Report Feedback (PDF)
o Task 3 Score Report Feedback (PDF)
Steps with a Score of Zero
If a score report contains a step score of zero, the step response may
be blank
not address the activity and guiding prompts for the step
address the guiding prompts in the written commentary, but may not include links to the
required artifact(s) or the artifact(s) that are linked may not provide the evidence that is
required
Score Reports for EPPs and the Department
EPPs and the Department access candidate score reports through the ETS Data Manager (EDM).
Information provided through EDM includes candidate demographic information; Current
Scores including Task and Step Scores, the cumulative score for the assessment, and passing
status; and Highest Scores earned to date. The Department also has access to aggregate score
data by EPP.
Summary Reports
Every fall, ETS provides the Department with Summary Reports that provide the following:
Overall pass rates
Pass rates by program type
MoPTA pass rates by content area (Elementary, High School, Special Education)
Appendix A – Statistical Characteristics of the
Missouri Performance Assessments
Table 1 in this section provides important scoring and statistical information for the Missouri
Performance Assessments. Notes at the end of the table provide more information about the data
included. Tables 3-7 provide summary statistics for subgroups by gender and ethnicity. All tables
are based on test takers taking the test during the 2015-2016 school year.
Table 1: Statistical Characteristics for the Missouri Performance Assessment (Total Group)
MoPTA-V MoPTA-NV MoLPA* MoSCPA MoSLPA
Sample Size 1,037 2,522 4 280 270
Possible Range 0-60 0-60 0-64 0-64 0-64
Observed Range 0-55 0-57 0-57 0-63 0-57
Median 43 43 34 45 45
Mean 41.06 41.47 38.50 41.86 42.33
Standard Deviation 7.64 7.46 11.24 11.88 12.16
Reliability: Alpha 0.68 0.67 - 0.81 0.87
Standard Error of Measurement 4.30 4.30 - 5.13 4.31
Reliability: Inter-Rater 0.85 0.80 - 0.86 0.81
Standard Error of Scoring 2.94 3.32 - 4.41 5.29
Note: Reliability estimates are not reported for MoLPA because of the very small sample size.
Sample Size — The number of people taking the test.
Possible Range — The lowest to the highest raw score possible on any edition of the test.
Observed Range — The actual maximum and minimum observed raw scores for a given
form of a test.
Median — The score that separates the lower half of the scores from the upper half,
calculated for the scores obtained by the group of test takers.
Mean — The arithmetic average, calculated for the scores obtained by the group of test
takers.
Standard Deviation — The amount of variability among the scores obtained by the group of
test takers.
Alpha Reliability — Cronbach’s alpha is calculated as a measure of the consistency of
scores. It focuses on the extent to which differences in test scores reflect true differences in
the knowledge, ability, or skills being tested. Reliability coefficients range from 0 to 1. Alpha is
calculated using the formula below:

$$\hat{\alpha} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_C^2}\right) \qquad (1)$$

where k is the number of criteria (3 for the Missouri Performance Assessments), $s_i^2$ is the
variance of scores for criterion i, and $s_C^2$ is the variance of the composite score.
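As an illustration of formula (1), the short Python sketch below computes alpha from three columns of task scores. It is a minimal sketch under assumed inputs; the scores are invented, population variances are used, and for MoPTA the Task 4 column would already carry its double weight. It is not the ETS analysis software.

```python
def cronbach_alpha(task_scores):
    """Cronbach's alpha per formula (1), from k lists of criterion scores.

    task_scores: k lists (k = 3 tasks here), each holding one task's
    scores across the same n test takers, in the same order.
    """
    def variance(xs):
        # Population variance; an n-1 denominator changes alpha only slightly.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(task_scores)
    # Composite score for each test taker: sum of that person's task scores.
    composites = [sum(scores) for scores in zip(*task_scores)]
    item_var_sum = sum(variance(scores) for scores in task_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(composites))

# Invented scores for five test takers on three tasks
# (the third column plays the role of an already-doubled Task 4).
print(cronbach_alpha([[9, 10, 8, 11, 7],
                      [12, 11, 10, 13, 9],
                      [24, 22, 20, 26, 18]]))  # about 0.92
```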
Standard Error of Measurement (SEM) — The standard error of measurement (SEM) is a
test statistic to characterize the reliability of the scores of a group of test takers. A test taker’s
score on a single administration of a test will differ somewhat from the score the test taker
would receive on another occasion. The more consistent a test taker’s scores are from one
testing to another, the smaller the SEM. Because estimates of the standard error may vary
slightly from one test administration to another and from one test edition to another, the
tabled values are averages of the SEMs obtained from all forms of the test currently in use.
The SEM is calculated using the formula below:
$$\mathrm{SEM} = SD_{composite} \times \sqrt{1 - \alpha} \qquad (2)$$
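As a worked check on formula (2), substituting the MoPTA-NV values reported in Table 1 (SD = 7.46, alpha = 0.67) gives

$$\mathrm{SEM} = 7.46 \times \sqrt{1 - 0.67} \approx 4.29,$$

which agrees with the tabled SEM of 4.30 up to rounding.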
Inter-rater Reliability and Standard Error of Scoring (SES) — The inter-rater reliability
coefficient describes the reliability of the scoring process when constructed response items
are scored independently by two raters. The standard error of scoring (SES) is an estimate of
the standard deviation of the distribution of observed scores around a theoretical true score.
The calculation uses a multi-step process described below.
Step 1: Calculate Intra-class Correlation for Each Task
The intra-class correlations (ICC) are calculated for each task. Since the first two ratings
are used for the calculation, the ICC values would be somewhat underestimated. The
intra-class correlation is a measure of agreement among ratings (Shrout & Fleiss, 1979).
The ICC ranges from 0 to 1, with 1 indicating the two ratings agree perfectly for all test
takers and 0 indicating no relationship. The intra-class correlation is derived from the
Analysis of Variance (ANOVA) framework. Table 2 shows the decomposition of the
variance for each criterion when there are two ratings for each response. In this table, the
subject represents the candidate’s response and n is the number of test takers.
Table 2. Analysis of Variance Framework

Source of Variation | df | Mean Square
Between subject | n-1 | BMS
Within subject | n | WMS
The intra-class correlation is calculated as:

$$r_{icc} = \frac{BMS - WMS}{BMS + WMS} \qquad (3)$$

where BMS is the Between Subject Mean Square, which is calculated as:

$$BMS = \frac{2\sum_{i=1}^{n} (\bar{X}_{i.} - \bar{X}_{..})^2}{n-1} \qquad (4)$$

and WMS is the Within Subject Mean Square, which is calculated as:

$$WMS = \frac{\sum_{i=1}^{n} (X_{i1}^2 + X_{i2}^2 - 2\bar{X}_{i.}^2)}{n} \qquad (5)$$

In the formulas above, $X_{i1}$ is the first rating for the response of test taker i, $X_{i2}$ is the second
rating for test taker i, $\bar{X}_{i.}$ is the average rating for test taker i, and $\bar{X}_{..}$ is the overall average of
the ratings across all n test takers:

$$\bar{X}_{i.} = \frac{X_{i1} + X_{i2}}{2} \qquad (6)$$

$$\bar{X}_{..} = \frac{\sum_{i=1}^{n} (X_{i1} + X_{i2})}{2n} \qquad (7)$$
Step 2: Calculate Reliability of Combined Item Score (CIS) for Each Task
A combined item score (CIS) is the average of two ratings assigned by different raters to
the same response.
$$r_{CIS} = \frac{2\,r_{icc}}{1 + r_{icc}} \qquad (8)$$
Step 3: Calculate the Variance of Error of Scoring for Each Task
$$\sigma_{e_{CIS}}^2 = \sigma_{x_{CIS}}^2 (1 - r_{CIS}) \qquad (9)$$

where $\sigma_{x_{CIS}}^2$ is the variance of the combined item score (CIS), i.e., the squared SD of the CIS.
Step 4: Calculate the Variance of Error of Scoring for the Entire Test
$$\sigma_{e_{scoring}}^2 = \sum w^2 \sigma_{e_{CIS}}^2 \qquad (10)$$

This is the weighted sum of the three task variances from Step 3, where w is the weight for each
task score in computing the total test score, i.e., 1, 1, and 2 for Task 2, Task 3, and Task 4,
respectively.
Step 5: Calculate the Standard Error of Scoring for the Entire Test
$$SES = \sqrt{\sigma_{e_{scoring}}^2} \qquad (11)$$
Step 6: Calculate the Reliability of Scoring
$$r_{scoring} = 1 - \frac{\sigma_{e_{scoring}}^2}{\sigma_{composite}^2} \qquad (12)$$
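For readers who want to trace Steps 1 through 6 end to end, the Python sketch below implements equations (3) through (12) for hypothetical first and second ratings, using the MoPTA task weights (1, 1, 2). The rating data and names are illustrative only; this is not the operational analysis code.

```python
import math

def cis(pairs):
    """Combined item score: average of the two ratings for each response."""
    return [(a + b) / 2 for a, b in pairs]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def icc(pairs):
    """Step 1: intra-class correlation for one task, eqs. (3)-(7)."""
    n = len(pairs)
    means = cis(pairs)
    grand = sum(means) / n
    bms = 2 * sum((m - grand) ** 2 for m in means) / (n - 1)        # eq. (4)
    wms = sum(a * a + b * b - 2 * m * m
              for (a, b), m in zip(pairs, means)) / n               # eq. (5)
    return (bms - wms) / (bms + wms)                                # eq. (3)

def ses_and_scoring_reliability(tasks, weights, composite_var):
    """Steps 2-6: from per-task rating pairs to SES and r_scoring."""
    err_var = 0.0
    for pairs, w in zip(tasks, weights):
        r_icc = icc(pairs)
        r_cis = 2 * r_icc / (1 + r_icc)                             # eq. (8)
        err_var += w ** 2 * variance(cis(pairs)) * (1 - r_cis)      # eqs. (9)-(10)
    ses = math.sqrt(err_var)                                        # eq. (11)
    return ses, 1 - err_var / composite_var                         # eq. (12)

# Hypothetical first/second ratings for five candidates on three tasks.
task2 = [(9, 10), (8, 8), (11, 10), (7, 8), (10, 9)]
task3 = [(12, 11), (10, 10), (13, 12), (9, 10), (11, 11)]
task4 = [(12, 12), (10, 11), (13, 13), (9, 9), (11, 12)]
composites = [t2 + t3 + 2 * t4
              for t2, t3, t4 in zip(cis(task2), cis(task3), cis(task4))]
print(ses_and_scoring_reliability([task2, task3, task4], [1, 1, 2],
                                  variance(composites)))
```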
Table 3: Summary of Assessment Scores by Gender and Ethnicity for MoPTA-V
N Mean Std Min Max Median
Total 1,037 41.06 7.64 0 55 43
Female 756 42.00 7.15 0 55 43
Male 281 38.52 8.32 0 54 40
Unspecified Ethnicity 9 43.44 4.86 36 51 44
African American 54 38.22 6.91 21 50 39.5
Asian or Pacific Islander 10 41.00 6.48 26 48 44
Hispanic 18 41.00 5.32 27 47 43
American Indian or Alaskan Native 10 41.60 3.56 34 47 42
White 906 41.22 7.77 0 55 43
Other 7 41.29 5.31 32 47 43
Two or More Races 23 40.09 7.36 20 53 42
Table 4: Summary of Assessment Scores by Gender and Ethnicity for MoPTA-NV
N Mean Std Min Max Median
Total 2,522 41.47 7.46 0 57 43
Female 2,013 41.92 6.99 0 57 43
Male 509 39.65 8.83 0 56 42
Unspecified Ethnicity 29 41.90 5.74 27 53 43
African American 85 37.91 8.39 0 51 40
Asian or Pacific Islander 19 40.74 5.70 27 50 41
Hispanic 44 39.14 11.17 0 52 42.5
American Indian or Alaskan Native 10 41.30 5.12 33 50 41
White 2,285 41.65 7.31 0 57 43
Other 15 38.20 11.75 0 49 42
Two or More Races 35 42.43 5.82 26 56 42
Table 5: Summary of Assessment Scores by Gender and Ethnicity for MoLPA
N Mean Std Min Max Median
Total 4 38.5 11.24 29 57 34
Female 4 38.5 11.24 29 57 34
White 4 38.5 11.24 29 57 34
Table 6: Summary of Assessment Scores by Gender and Ethnicity for MoSCPA
N Mean Std Min Max Median
Total 280 41.86 11.88 0 63 45
Female 243 42.18 11.96 0 63 45
Male 37 39.81 11.11 0 59 42
Unspecified Ethnicity 5 39.20 15.54 19 61 46
African American 36 34.08 15.25 0 58 38.5
Asian or Pacific Islander 2 57.00 2.00 55 59 57
Hispanic 2 24.00 24.00 0 48 24
American Indian or Alaskan Native 2 47.50 2.50 45 50 47.5
White 224 43.24 10.42 0 63 45
Other 4 34.25 6.42 28 45 32
Two or More Races 5 44.00 6.96 34 54 45
Table 7: Summary of Assessment Scores by Gender and Ethnicity for MoSLPA
N Mean Std Min Max Median
Total 270 42.33 12.16 0 57 45
Female 187 43.64 10.53 0 57 45
Male 83 39.39 14.78 0 55 44
Unspecified Ethnicity 4 34.75 20.49 0 53 43
African American 25 34.68 18.07 0 54 41
Hispanic 2 39.00 4.00 35 43 39
American Indian or Alaskan Native 3 42.67 6.13 34 47 47
White 233 43.64 10.18 0 57 45
Two or More Races 3 16.67 23.57 0 50 0
Appendix B: Design Team Meeting Reports
The following pages contain the Design Team Meeting Reports.
Missouri Pre-Service Teacher Assessment
Content Development Team - Meeting One
February 26-27, 2013
Jefferson City, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation, Heather MacCleoud, Assistant
Director
ETS: Seth Weiner, Executive Director: Teacher Licensure, Ethan Taylor, Project Owner, Cathy
Owens-Oliver, Senior Client Management, Annette DeLuca, Assessment Specialist, Joe Ciofalo,
Research Project Manager, Kim Hagen, Assessment Specialist, Steve Schreiner, Assessment
Specialist
Content Development Team

Name | Institution | Position
Antrim, Pat | University of Central Missouri | Educational Leadership and Human Development, Department Chair
Banfield, Ron | Washington University | Director of Teacher Education and Academic Services
Callaway, Becky (NBCT) | St. Joseph School District | High School Math Teacher
Cartier, Cathy | Afton Schools | Social Studies Teacher
Cozens, Jeanie | Missouri Southern State | Teacher Education Faculty
Gunn, Sharon | Southeast Missouri State University | Early Childhood and Special Education Faculty
Hansett, Elaine | Mexico Public Schools | K-5 Math Coach and Mentor
Livingston, Diane | MNEA | President, Hazelwood NEA
Ray, Julie | Southeast Missouri State University | Early Childhood and Special Education, Interim Chair
Reed, Marcieta | Kansas City Schools | Early Childhood
Stuart, Diana | Mineral Area Community College | Coordinator of Teacher Education and Professor of English
Young, Marvin | MSTA | 4th Grade Teacher
Purpose of the Meeting:
To design a standards-based performance assessment for teacher candidates in Missouri.
Overall Goals:
A) Analyze the Missouri Teacher Standards and Quality Indicators as they relate to
the teacher candidate and student learning
B) Complete the evidence-centered design (ECD) process
C) Articulate the Missouri blueprint
D) Provide an outline of each task
E) Provide draft prompts for the tasks
Pre-Meeting Homework for the Content Development Team:
1. Read and be familiar with the Missouri Teacher Standards and Quality
Indicators; Kissing the Frog; A Brief Introduction to Evidence-Centered Design;
and Review of Teaching Performance Assessments for Use in Human Capital
Management
2. Think about what they would expect candidate teachers to submit as evidence that
they are addressing these standards.
Meeting Agenda Missouri Pre-Service Teacher Assessment Development
Meeting # 1
February 26-27, 2013
Day 1- Tuesday
8:30 am Introductions and Overview of the Meeting: Missouri Department of Education Staff (Purpose, Vision, History, Teams, Process, Timelines, etc.)
Introduction of ETS staff – Seth Weiner
Introduction of Development Committee – Steve Schreiner and team
Overview of Pre-Service Teacher Assessment Development Process – Steve Schreiner
9:30 Who Are the Assessment Takers?
10:30 Break
10:45 What Does the Student Teaching Experience Look Like?
12:00 pm Lunch
1:00 Unpacking the Missouri Standards/Indicators and Sharing
3:00 Share the Shell/Clustering of the Missouri Standards
3:30 Break
3:45 Clustering the Missouri Standards and Indicators
5:00 Adjournment

Day 2 - Wednesday
8:30 am Reflection on the Previous Day's Work
9:00 Evidence-Centered Design PPT
9:30 Blueprint of the Assessment and Formative and Summative Discussion
10:15 Break
10:30 Blueprint of the Assessment (cont.)
12:30 pm Lunch

Meeting Notes for Tuesday, February 26

Due to a snowstorm, there were only 12 participants (see those listed above).
Hap Hairston provided opening remarks, explaining the licensure continuum and the
MEGA program (Missouri Educator Gateway Assessments)
Seth Weiner provided welcoming remarks and Steve Schreiner introduced the purpose
and timeline for the development committee
Activity 1 - The Who of the Assessment

Annette DeLuca led the first activity: "Who are the assessment takers?" In order to begin
development, the Content Development Team needed to have a common understanding of
“the who” this assessment would be assessing. The Evidence Centered Design (ECD)
activity required brainstorming and responses to the following questions: Who is being
assessed? What assumptions can we make about these teacher candidates when they take
this assessment? Think about the knowledge, skills, and abilities these candidates have as a
pre-service teacher. What conclusions would you like to draw after they’ve taken the
assessment? What claims can we make about what they know and can do? What evidence
can we ask them to produce to show this? Participant comments were charted and posted.
See the related document, “Who Is Being Measured?”
Activity 2 – What Are the Components that Are Important To The Student Teaching
Experience?
Steve Schreiner led the brainstorming of what everyone would agree are the most important
parts of a student teacher’s training experience. “What should the student teaching
experience look like in an ideal world? What characteristics should the process possess?”
There was a lot of discussion about what committee members experienced or lacked when
they were in teacher prep. There were comments on what various EPPs do now based on
what they’ve learned over the years working with student teachers. Comments can be read on
the related document, “The Ideal Student Teaching Experience”
Activity 3 - Unpacking the Standards
Joe Ciofalo led the next activity in which the Missouri Standards were “unpacked” in order
to become familiar with what each would look like in application. Note: In each of the four
break-out groups, there was one representative who had participated in writing the standards
in previous DESE meetings which was helpful, as the committee members closely examined
all of the standards. Committee members asked Hap about adoption of the Common Core
Standards and how DESE ensures EPPs are aware of how the standards are implemented at
the district level.
Modified Agenda
Steve Schreiner explained how it was necessary to modify the agenda since there were fewer
than 20 committee members present for this first development meeting. He explained the
need for more members who could not only contribute to the diversity of the group but also
to the thinking and decision making of the design of this assessment. Note: The plan was to
end the session Wednesday at noon and not go as far as writing prompts until the April 2013
meeting. In April, the missing 8 committee members will come in a day early to cover what
was missed and then join this group for the originally scheduled 3-day meeting. Additional
members will also be recruited to join this group at this second meeting.
The week of April 8th was problematic for a few, both teachers and faculty, due to various
conferences and testing. The week of April 15th seemed to be preferred by most.
Unpacking the Standards (continued)
Joe Ciofalo returned to this activity. Committee members returned to their groups and
discussed their assigned standards and indicators. The groups charted their results and shared
the evidence they thought was indicative of how the standards appear in practice. There was
some discussion about how to go about acquiring parents’ permission to use videos and
student work. There was also some dialogue regarding using videos for evaluative purposes
as well as for student teachers to use for reflection and self-awareness. One person
commented that video use is becoming increasingly problematic because of how quickly and
easily technology can be shared.
There was concern around the table that a single video would not adequately capture
evidence of Standard 5: Positive Classroom Environment.
Committee members commented that the successful implementation of the PTA rests
largely in the training of the evaluators who will need ongoing calibration.
Results of the discussion on Unpacking the Standards can be read on the related document,
“Unpacking the Standards.”
Meeting Notes for Wednesday, February 27

Reflection
Joe Ciofalo opened the second day’s session with a reflective discussion focused on the previous
day’s work. Below are some of the comments.
I realize this assessment has many authors, rich input from the field, needs tweaking
overall.
This is a massive project but it must have a narrow focus, so many pieces to narrow
down to what can go into a portfolio.
I appreciate the continual reminder that the purpose of this is for K-12 students.
We all have different interpretations of beginning practice so it’s good to hear all the
different perspectives.
It’s great that what we are doing here is what we do on a daily basis, interpret
standards, and determine what evidence is needed.
I kept thinking about what I actually do and what evidence I need to show. We have
to be careful not to make them show every single standard and focus on the crucial
part.
We sometimes create a monster with emphasis on too much. So I think we need a
reality check to keep going back to beginner teachers.
I attended the State Board meeting last week when Cathy Owens-Oliver presented,
and I was impressed with the nuances she was able to communicate in laymen’s
terms, with the language she chose. One thing I remember is “electronic submission
of artifacts” so I’d like to get a clear picture of what this assessment will really look
like.
I’d like to determine the “quality indicators and essential standards” that all students
must demonstrate. I think we (at EPPs) are expected to look at all the standards but
we need to know which ones to emphasize even though all of them are included.
This will satisfy our accreditation requirements.
We need to know more about the entry level piece as well, not just this exit
assessment, and we want to understand what this means for student teachers who do
not do well.
Additional Comments
Seth Weiner talked about the need for policies to “wrap around” the assessment, addressing
issues beyond ETS’ role (e.g., the implications of failing the assessment, how many times a
candidate may resubmit).
Hap Hairston responded to several comments and provided background information about
activities DESE is involved in to support the pre-service process. It was agreed that the group
would institute Hap’s Parking Lot to deal with questions beyond the immediate scope of the
development committee’s purpose.
Logistics and Confidentiality
Annette DeLuca dealt with the logistics of the BIQ, W-9s, and travel and substitute
reimbursement. She also clarified the implications of the Confidentiality Form.
The Wiki Site
Kim Hagen provided a brief overview of the Wiki site, the materials that will be uploaded, and
how committee members will be able to access it. All Reports of Meetings and related
documents will be posted on the site. Committee members will be notified when the site is
updated, and they will be encouraged to communicate with ETS through this online, interactive,
collaborative setting.
Task Design
Steve Schreiner shared the four-task shell that the committee will be developing. Committee
members will be determining the standards and indicators addressed in each task’s shell, the
activities to be completed, the prompts for candidate’s response, and the type and number of
artifacts to be included. There were some questions about the timing of submissions and
resubmissions within the time frame of the clinical experience. This issue needs further
discussion, so Steve focused primarily on the overall design of the four tasks. The Development
Committee will determine the submission schedule. ETS will work with DESE to create a plan
for submission and resubmission windows.
There was some concern regarding the use of parallel systems rather than creating something
totally different for candidates to do. Steve asked committee members to write these kinds of
issues down so they can be addressed (with DESE) at the appropriate time. Steve had to clarify
what roles the committee will play in the operations realm (e.g., Can one submit task 3 before 2?
What’s the turnaround time between submissions? Will faculty advisors be able to log in and see
student work as it is being developed?) There was some confusion between “design” and
“operations.” Some committee members kept asking questions about “operational” issues.
Steve also had to clarify that the focus of this meeting is on the exit assessment, not the entry
assessment. He also kept reiterating the importance of inter-rater reliability.
There was discussion about moving away from “how we do it on our campus…and have been
doing it all along” to allowing another faculty member on another campus to externally score
essential outcomes (once that is clearly defined). How does the committee shape this
conversation for colleagues who feel this is “being done to them” rather than with them? Others
commented how critical it is for all faculty members to understand this assessment and scoring,
as well as to score. “We have to make sure faculty members are trained to coach and mentor
students throughout this process, whether or not they score.”
Others commented on the huge implications of external scoring and how it helps to minimize
bias and helps get to the essential standards and expectations of student teachers. Steve stressed
the need to continually consider test validity as the committee develops the assessment. The
group discussed limitations on the training and on who can be a rater.
Scoring: All raters will be trained to score all four tasks. Task 1 will be scored by the
candidate’s faculty advisor. Tasks 2-4 will be scored by up to six other raters until modified
double scoring begins. Not all three tasks will be scored by the same person to limit bias and to
contribute to fostering positive inter-rater reliability.
Clarification: The Task 4: “Culminating” Task includes an overlap of some standards and a
more holistic submission of what student teachers can do. It is not a culminating submission of
everything in all other tasks; the rater for Task 4 does not need to see or score the other three
tasks.
Some committee members mentioned that this assessment will replace a lot of what is happening
on their campus and this is in many ways a good thing—less about how cute and great a younger
teacher is personally and more about his or her ability to meet standards for high quality practice.
Some faculty members have already begun to make the transition since they are already working
with the new standards.
Closing
Hap Hairston closed with brief remarks, reminding us of all the important work that lies ahead.
Next Steps
The next development team meeting will be on April 15-18 in Jefferson City, Missouri. Those
educators who were unable to make this first session will be invited to attend a "repeat" session
on April 15. In addition, new members will be recruited to enhance the diversity of the group.
All committee members will attend starting on April 16. The focus of the meeting will be on task
directions and prompt development.
Missouri Pre-Service Teacher Assessment
Content Development Team - Meeting One
New Members
April 16, 2013
Jefferson City, Missouri
In Attendance:
DESE: Heather MacCleoud, Assistant Director, Office of Educator Quality
ETS: Seth Weiner, Executive Director: Teacher Licensure, Ethan Taylor,
Project Owner, Annette DeLuca, Assessment Specialist, Joe Ciofalo, Research Project Manager,
Kim Hagen, Assessment Specialist, Steve Schreiner, Assessment Specialist
New Members of the Content Development Team

Name | Institution | Position
Cuenca, Alexander | St. Louis University | Assistant Professor: Social Studies
Hausfather, Sam | Maryville University | Dean: School of Education
Hollins, Etta | University of MO, Kansas City | Professor
Kingsley, Laurie | University of MO, Columbia | Elementary: Literacy
Lamas, Cynthia | Independence Schools | High School Spanish
McAnally, Michael | Kansas City Schools | Instructional Coach
Nace, Becky | Kansas City Schools | Communication Arts Curriculum Coach
Obermeir, Nicole | Columbia Schools | Elementary
Poe, Andrea | Columbia Schools | Mentor
Smith, Shelton | MO Baptist University | Dean, Associate Professor
Purpose of the Meeting:
To design a standards-based performance assessment for teacher candidates in Missouri
Overall Goals:
F) Analyze the Missouri Teacher Standards and Quality Indicators as they relate to
the teacher candidate and student learning
G) Complete the evidence-centered design (ECD) process
H) Articulate the Missouri blueprint
I) Provide an outline of each task
Pre-Meeting Homework for the Content Development Team:
3. Read and be familiar with the Missouri Teacher Standards and Quality
Indicators; Kissing the Frog; A Brief Introduction to Evidence-Centered Design;
and Review of Teaching Performance Assessments for Use in Human Capital
Management
4. Think about what they would expect teacher candidates to submit as evidence that
they are addressing these standards.
Meeting Agenda Missouri Pre-Service Teacher Assessment Development
Meeting # 1 for New Development Committee Members
Monday, April 15, 2013
Day 1- Monday
8:30 am Introductions and Overview of the Meeting: Missouri Department of Education Staff
(Purpose, Vision, History, Teams, Process, Timelines, etc.)
Introduction of ETS staff – Seth Weiner
Introduction of Development Committee – Steve Schreiner and team
Overview of Pre-Service Teacher Assessment Development Process – Steve
Schreiner
9:00 Who Are the Assessment Takers?
10:00 What Does the Student Teaching Experience Look Like?
10:30 Break
10:30 Unpacking the Missouri Standards/Indicators and Sharing
12:00 pm Lunch
12:45-3:00 Unpacking the Missouri Standards/Indicators and Sharing (continued)
3:00 Evidence-Centered Design PPT
3:30 Break
3:45 Share the Shell
4:30 Wiki SharePoint Explanation
5:00 Adjournment
Meeting Notes for Monday, April 15th

Due to a snowstorm during the first development meeting in February, not all committee
members were able to attend. These members and several additional recruits were invited to
attend this one-day meeting to catch them up on the development work covered at the February
meeting and to prepare for all members to come together as a development committee the
following day (see new committee members listed above).
Heather MacCleoud provided opening remarks and explained the licensure continuum.
Seth Weiner provided welcoming remarks.
Steve Schreiner introduced the purpose and timeline for the development committee.
Activity 1 -The Who of the Assessment
Assessment Specialist Annette DeLuca led the first activity, “Who are the Assessment
Takers?” In order to begin development, the Content Development Team needed to have a
common understanding of “the who” this performance assessment would be assessing. The
Evidence-Centered Design (ECD) activity required brainstorming and responses to the
following questions: ‘Who is being assessed?’ ‘What assumptions can we make about these
teacher candidates when they take this assessment?’ ‘Think about the knowledge, skills, and
abilities these candidates have as pre-service teachers. What conclusions would you like to
draw after they have taken the assessment?’ ‘What claims can we make about what they
know and can do?’ ‘What evidence can we ask them to produce to show this?’ Participant
comments were charted and posted. See the related document, “Who Is Being Measured?”
Activity 2 – What Are the Components That Are Important To The Student Teaching
Experience?
Assessment Specialist Steve Schreiner led the brainstorming of what everyone would agree
are the most important parts of a student teacher’s training experience. ‘What should the
student teaching experience look like in an ideal world?’ ‘What characteristics should the
process possess?’ There was a lot of discussion about what committee members experienced
or lacked when they were in teacher prep programs. There were comments on what various
EPPs do now, based on what they have learned over the years through working with student
teachers.
Activity 3 - Unpacking the Standards
Assessment Specialist Joe Ciofalo led the next activity in which the Missouri Standards were
“unpacked” in order to help committee members become familiar with what each standard
would look like in application.
The groups charted their results and shared the evidence they thought was indicative of how
the standards might appear in practice. There was some discussion about how to go about
acquiring parents’ permission to use videos and student work. There was also some dialogue
regarding using videos for evaluative purposes, as well as for student teachers to use for
reflection and self-awareness.
Logistics and Confidentiality
Annette DeLuca dealt with the logistics of the BIQ, W-9s, and travel and substitute teacher
reimbursement. She also clarified the implications of the Confidentiality Form.
Task Design
Steve Schreiner shared the four-task shell that the committee will be developing. Committee
members will be determining the standards and indicators addressed in each task’s shell, the
activities to be completed, the prompts for candidate’s response, and the type and number of
artifacts to be included. There were some questions about the timing of submissions and
resubmissions within the time frame of the clinical experience. This issue needs further
discussion; Steve focused primarily on the overall design of the four tasks. The Development
Committee will determine the submission schedule. ETS will work with DESE to create a plan
for submission and resubmission windows.
Scoring: All raters will be trained to score all four tasks. Task 1 will be scored by the
candidate’s faculty advisor. Tasks 2-4 will be scored by up to six other raters until modified
double scoring begins. Not all three tasks will be scored by the same person to limit bias and to
contribute to fostering positive inter-rater reliability.
Clarification: The Task 4: “Culminating Task” includes an overlap of some standards and a
more holistic submission of what student teachers can do. It is not a culminating submission of
everything in all other tasks; the rater for Task 4 does not need to see or score the other three
tasks.
The Wiki Site
Kim Hagen provided a brief overview of the Wiki site, the materials that will be uploaded, and
how committee members will be able to access it. All Reports of Meetings and related
documents will be posted on the site. Committee members will be notified when the site is
updated, and they will be encouraged to communicate with ETS through this online, interactive,
collaborative setting.
Closing
Heather MacCleoud closed with brief remarks, reminding us of all the important work that lies
ahead.
Next Steps
The next development team meeting will be Tuesday, April 16th, and will include all committee
members: those who attended February's meeting and those who attended the April 15th meeting.
The focus of the meeting will be on task directions and prompt development.
Missouri Pre-Service Teacher Assessment
Content Development Team - Meeting Two
Monday, July 15th-July 18th
Columbia, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation, Heather MacCleoud, Assistant
Director
ETS: Seth Weiner, Executive Director: Teacher Licensure, Ethan Taylor, Project Owner,
Annette DeLuca, Assessment Specialist, Joe Ciofalo, Research Project Manager, Kim Hagen,
Assessment Specialist, Steve Schreiner, Assessment Specialist
Content Development Team
Name | Institution | Position
Antrim, Pat | University of Central Missouri | Educational Leadership and Human Development, Department Chair
Banfield, Ron | Washington University | Director of Teacher Education and Academic Services
Callaway, Becky (NBCT) | St. Joseph School District | High School Math Teacher
Cartier, Cathy | Afton Schools | Social Studies Teacher
Cozens, Jeanie | Missouri Southern State | Teacher Education Faculty
Cuenca, Alexander | St. Louis University | Assistant Professor: Social Studies
Gunn, Sharon | Southeast Missouri State University | Early Childhood and Special Education Faculty
Hansett, Elaine | Mexico Public Schools | K-5 Math Coach and Mentor
Hausfather, Sam | Maryville University | Dean: School of Education
Hollins, Etta | University of MO, Kansas City | Professor
Kingsley, Laurie | University of MO, Columbia | Elementary: Literacy
Lamas, Cynthia | Independence Schools | High School Spanish
Livingston, Diane | MNEA | President, Hazelwood NEA
McAnally, Michael | Kansas City Schools | Instructional Coach
Nace, Becky | Kansas City Schools | Communication Arts Curriculum Coach
Obermeir, Nicole | Columbia Schools | Elementary
Poe, Andrea | Columbia Schools | Mentor
Ray, Julie | Southeast Missouri State University | Early Childhood and Special Education, Interim Chair
Reed, Marcieta | Kansas City Schools | Early Childhood
Smith, Shelton | MO Baptist University | Dean, Associate Professor
Stuart, Diana | Mineral Area Community College | Coordinator of Teacher Education and Professor of English
Young, Marvin | MSTA | 4th Grade Teacher
Purpose of the Meeting:
To design a standards-based performance assessment for teacher candidates in Missouri.
Overall Goals:
J) Analyze the Missouri Teacher Standards and Quality Indicators as they relate to
the teacher candidate and student learning
K) Review the tryout responses
L) Finalize the four tasks in preparation for the pilot
Meeting Notes for Monday, July 15th

8:30-9:00 Welcome
Joe Ciofalo opened the session calling attention to two parking lots that accompany Hap’s: the
AD parking lot for issues dealing with development and the General Comments parking lot
where committee members can register their observations on trends they see (e.g., a recurring
misunderstanding of a similar aspect of multiple prompts) as they read/analyze/evaluate the
tryout responses.
9:00-12:00 Purpose of the Tryout Review, Outcomes, and Procedures
Joe Ciofalo gave a high-level overview of the process of using the responses to the tryout to
tweak the prompts as needed (specifics are covered on slides). A key point: are the prompts/tasks
eliciting the right amount of evidence and the kinds of evidence expected? “Among other things,
we have here an opportunity to get rid of unneeded overlap among prompts and even among
tasks.”
Steve Schreiner presented the documents to be used to analyze the tryout responses and
explained how to use them. Form 1 (the ROE) has people listing types and quality of evidence as they read
the tryout responses. Form 2 has people evaluating the directions, prompts (did they elicit the
right kind of evidence and enough evidence?), match between responses to prompts and artifacts,
accessibility of prompts, etc.
Steve also presented the survey forms given to the candidates who did the tryout—a survey that
asked the participants questions a lot like the ones on the two evaluative forms the committee
members will use as they read through the tryout responses. “Did the participants understand
what they were being asked to do? Did the length limitations afford them sufficient opportunity
to carry out the dictates of the prompts fully?”
Annette DeLuca expanded on the reading of the tasks: she modeled the behavior for the group.
She read Task 1 responses (the first of the five we got back) with the group. That is, she read
one in concert with the group; then they read the other four themselves.
Activity 1 - The Tryout Responses
Annette first anatomized the guiding prompts (e.g., for 1.1.1, the guiding prompt’s calling for 2
factors for 3 different areas of interest on the Contextual Factors Chart [community, district,
school] means we are looking for SIX factors. For each of these, we expect one possible
teaching strategy and one learning activity. And, for each teaching strategy and learning activity,
we expect an explanation of how that strategy or activity impacts student learning.) She went on
to do the same for 1.1.2, 1.1.3, etc. As she did so, she noted patterns that emerge (e.g., the task
prompts drill down into the Contextual Factors Chart and call for the same kind of analysis of
each element of the Chart in terms of a teaching strategy and a learning activity for each of the X
factors candidates are called upon to enumerate.)
Committee members wondered whether the rubrics will help guide thinking about how to
evaluate the responses. The current exercise should provide insight into what aspects of a
response should be valued above other aspects. A sidebar focused on the fact that grammar and
mechanics are not handled by the rubric. The thinking is that, if someone writes so poorly that
the quality of the writing interferes with clarity, that candidate, ipso facto, cannot score high. In
this holistic way, the ability to write enters into the equation. But the assessment does not
measure writing ability per se.
Annette led a debrief of the committee’s reading of Task 1 tryouts in anticipation of filling in the
Evaluation Form (Editorial Review Form) for this task. The fruits of this discussion will inform
the work of the Task 1 team as they tweak Task 1.
Shelton: The candidates need every aid we can give them to make the distinctions that need to
be made in order to address the prompts fully. On “activities”—do we want learning
“experience” instead? The candidates don’t seem to make a distinction here.
Another way to help the candidates make the right distinctions is to have the prompts very
starkly distinguish among the community, district, and school sections of the Contextual Factors Chart.
Candidates, left to their own devices, conflate the three contexts or focus mostly on the
classroom—with which they are most directly familiar. The documents that candidates produce
in dealing with Task 1 should be of value to them throughout their student-teaching career. The
Task should ensure that. The documents are not just for this single task.
1:00-3:00 Kim Hagen introduced the reading of Task 2.
Kim anatomized (or “unpacked”) Task 2 in much the way that Annette anatomized Task 1 in the
morning session.
First aspect of the task: the assessment plan. Sharon Gunn pointed out that we need to make clear that the
two focus students do not have to be special needs students, per se. As Diane Livingston put it,
they could be two students in an AP Calculus course who simply require different kinds of
instruction to ensure that each of them understands the material.
On a broader note applicable to the whole task: we ask the candidate to home in on a formative
or summative assessment. This is at odds with the notion of an assessment plan, which includes
the sequencing and interplay of several assessments. We need to fix the disconnect between the
micro and macro aspects of Task 2. One aspect of this involves the distinction between a “trend”
and a “pattern.” Trends, said Pat the Librarian, imply a longitudinal dimension—maybe
unrealistic for the test. “Pattern” might be a better word. But “trend,” said some data wonks in
the group, is an acceptable word in the world of data analysis and does not necessarily imply
developments over time.
The issue of the focus students emerged again: how does a band teacher, say, choose two focus
students? What about someone teaching a less commonly taught foreign language to 2 or 3 students?
Again (in a related context) the point was made that we should make clear that the focus students
do NOT have to be special ed students. Lori: let’s see what the tryout responses teach us about
the focus students.
After the reading of the tasks, Kim led a discussion.
Only one of the 7 turned in a true assessment plan rather than focusing on one assessment. The
task itself fosters the misunderstanding that ONE assessment = an assessment plan.
Some responses confused “pre-assessment” with “formative assessment.” The guiding prompts
need to forestall this confusion. The glossary needs to define these terms, too.
The data analysis prompts need to be better aligned to show how the assessment results relate to
the learning goals.
Laurie had questions about permission issues around the focus students. Steve Schreiner said
candidates will need to get permissions, as required.
Meeting Notes for Tuesday, July 16th
8:30: Hap welcomed the group to start the day and talked about management of the Pilot and
the structure that is required to carry it off. Leadership from the Development Committee is key:
we will tap 2 or 3 members for that.
We will need to ID someone from each EPP as a campus coordinator. We need 250-300
candidates to participate in the Pilot.
Sam Hausfather noted that the fall—the time of the pilot—is the “off” semester. This is not
when the representative population does student teaching. But—as Steve Schreiner noted—we
must weather this inconvenience if we are to go live in the fall of 2014.
Hap noted that ETS will be considering, with DESE at the afternoon debriefing session, the
policy issues that have come up in the parking lot and elsewhere.
8:45-10:00: Joe Ciofalo resumed the discussion unpacking Task 3 before the committee
started reading the tryout responses.
Joe briefly reviewed the final 30-45 minutes of the previous day, then distributed the check-
off sheets and went through the protocol of round-robin reading of the responses.
10:00-10:40 Joe led a discussion after the committee members read the Task 3 tryout responses
Words that need definition were flagged as “glossary candidates.” (NOTE: candidates will have a link to the
glossary as they work in the submission system—maybe not by the time of the pilot, but
certainly operationally. Sam would like them to be able to click right on the word itself and be
carried to that word in the glossary.)
Laurie Kingsley: a global comment: Task 3 seems to make candidates jump through hoops—
hoops that can be jumped through without really focusing on one good lesson.
Other team members offered advice: ensure that the sequencing of demands in the guiding prompts
leads candidates to plan from the very outset, e.g., for the use of technology in their lesson.
Also—and importantly—we need to ENSURE in the way we deploy the guiding prompts that
candidates write a rationale—telling us why they have chosen a particular exemplar as an artifact.
How do we accommodate a situation where a student teacher moves from one group of students
to another? That candidate will need to find different focus students from one task to another.
On the bright side, this will help counter the tendency to provide canned responses—
something that we should try our best to counter even for candidates who will be with the same
group of students throughout. Note that, even if you have the same group of students, you may
well be doing a different lesson when you are working on successive tasks.
Cathy and Sam: The Tasks, to an extent, blend into a seamless whole: you can’t really divorce
assessment from instruction (e.g., Task 2 is continuous with Task 3). The assessment
can’t truly replicate the real-life experience. And there are key parts of the real-life experience—
interruptions, aspects of classroom management—that the assessment does not now touch on.
When we are finished with this process of development, asked Shelton, how will we know we
have something better than what we have had as an assessment system?
Joe Ciofalo noted that even the summative tasks—2, 3, and 4—have a formative aspect: they are
part of the learning experience.
10:40-12:00: Steve Schreiner introduced Task 4.
This task is meant to recapitulate and fuse Tasks 1, 2, 3—so repetition is expected.
The second step entails analysis of the implementation—not just the implementation itself. This
is where the video comes in.
Steps 3 and 4 are summarized.
Sam: same lesson as Tasks 2 and 3? Should we specify that this MUST be a different lesson?
Note, though, that the video-making can happen earlier on—indeed, we will recommend that a
candidate make several videos.
Given the summative nature of tasks 2, 3, and 4, how much help should a candidate get? See
what Joe said above about the tasks being somewhat formative within the overall context of
being summative. They are part of a learning experience, but they are a test with consequences.
How do we resolve this tension?
Steve led a discussion after the task responses were read by the group.
The three 5-minute videos vs. a single 15-minute video? Steve Schreiner responded that
fifteen continuous minutes, in his experience, involve wasting time on aspects of the total lesson
that are not to the point and not what the candidate wants to demonstrate vis-à-vis the assessment
task. Julie Ray suggested we need to foster the ability of candidates to home in on what they
judge to be powerful teaching moments. Maybe one 10-minute and one 5-minute video will
afford them the flexibility and opportunity required. Steve Schreiner: No editing of a fifteen- or
five-minute segment is allowed, though.
The prompt guides are not crystal clear about when candidates can and can’t refer to the video.
Some of the prompt guides, to be sure, are directed right at the video. But, if we want to allow—
even encourage—candidates to comment on the video, say, in their comments on planning the
lesson, we need to be clearer about that.
Another issue: what about when the lesson is not accessible to the rater (e.g., a Spanish lesson or a
lesson in advanced mathematics)? Steve: rater training will ensure that raters defer what they
can’t score, and we will find raters with the required expertise to do the scoring.
There was a brief discussion on the rubric vs. the guiding prompts. Steve laid it down that the
guiding prompts are just as important as the rubrics and that the two need to match and reinforce
each other. Making sure this match exists will be a task for Wednesday and Thursday.
1:00-5:00: Joe Ciofalo set up the afternoon activity: the revision of the tasks on the basis of the
previous day and a half of review and discussion.
Each task team was furnished with the ROEs that were filled in by the twenty-one committee
members present and with the Feedback Forms filled in by the tryout participants.
Each team also had a copy of the MO Teacher Standards and Quality Indicators. Each team
made revisions directly on the formatted task documents.
Meeting Notes for Wednesday, July 17th
8:30-12:00 Activity 2- Standards and Tasks Match
Kim and Annette set this up. Each Task Group looked at another group’s task and came to
consensus on which standards and indicators the task addresses/measures. Each person did
this exercise alone, then shared and discussed with his/her group, then shared the findings with
the Task owners. The Task owners will then need to tweak their tasks to address the findings.
This could mean revamping/deleting a prompt guide (or portion of a prompt guide) or
eliminating indicators that were not addressed.
It might also happen that a standard/indicator is covered but not mentioned in the list of
standards/indicators that heads a Task. That was noted as well.
And, not so incidentally, if you notice a task prompt that just needs a little re-writing
(standards/indicators aside) point that out to the Task owning team/group!
Activity 3- Rubric Writing
12:00-3:30 Steve set up the rubric writing activity.
Steve reminded the group of the evidence-centered design paradigm we went over in session 1.
He homed in on the part of the process dealing with scoring—that is, evaluating the evidence
submitted in terms of the claims we want to make about the “who”—the test-taking population.
Does the rubric match the task and account for all relevant submissions?
The rubric is, at once, for the candidate (one “who”) and for the rater (another “who”). It needs
to be detailed enough to be analytic, general enough to be holistic. The score needs to be based
upon the preponderance of the evidence submitted (and, so, must allow for some unevenness in
the totality of evidence submitted).
Steve expatiated on the principle of parallel construction in a rubric: the wording for each score
point needs to remain the same in structure; the qualifying and quantifying words change.
The space between a 1 and a 2 should be the same as the space between a 2 and a 3, etc. The
qualifying/quantifying words must walk up/down stairs in an even fashion.
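To make the principle concrete, here is a minimal sketch in Python (the wording is invented purely for illustration and is not actual MoPTA rubric language) showing how a rubric row keeps one fixed sentence frame and varies only the qualifiers:

    # Hypothetical sketch of parallel construction in a rubric row.
    # The sentence frame is identical at every score point; only the
    # qualifying words change, stepping down evenly from 4 to 1.
    FRAME = "The candidate provides {qualifier} analysis of the focus students' learning."

    QUALIFIERS = {
        4: "a thorough, detailed",
        3: "an adequate",
        2: "a partial",
        1: "a minimal or inaccurate",
    }

    for score in sorted(QUALIFIERS, reverse=True):
        print(score, FRAME.format(qualifier=QUALIFIERS[score]))

Keeping the frame constant is what makes the “spaces” between adjacent score points feel even to both candidates and raters.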
Alex asked a question about what happens, in scoring, when a performance is uneven: one textbox
good, another not so much, etc. Steve answered that the “preponderance of evidence” must
prevail. The experience of the pilot will richly exemplify how these uneven performances are to
be handled in practice.
As the team members built the rubric, they returned to the “Who” of the assessment, keeping the
rubric simple and clear.
To start the teams off, the ETS team suggested wording for the “3” for each task and provided,
on an occasional basis, examples of what the parallel wording would look like at a “1,” “2”, or
“4” level.
The rubric proceeds by textbox. But the rater, when all is said and done, gives a rating based
upon a holistic evaluation on the basis of where the preponderance of evidence points.
Activity 4- Ancillary Materials
3:30-5:00 Kim set up the various committees with materials to facilitate the creation of the
ancillary materials.
Work on the ancillaries (directions for candidates, for EPPs; glossary; guides, etc.) started on the
WIKI; Kim thanked the committee for what had been done.
Kim provided a caution about using resources from the internet and elsewhere in writing ancillary
materials: don’t take text directly. Read the resources, but write in your own words. We need to
avoid any potential copyright issues.
Meeting Notes for Thursday, July 18th
8:30 Hap talked about literacy and numeracy in the performance assessment. For elementary ed,
these factors will be built into Tasks 2 and 3: one must be projected through a reading
lesson and one through a math lesson. For middle and high school, we will build leading
questions on literacy into the guiding prompts.
Activity 5- Rubric Match
8:45-1:00 Kim returned to the rubrics and initiated a cross-check of the rubrics and tasks.
The team needs to be sure that the language of the rubric matches the language of the prompt
guides, so concentration is on the wording for a “3,” given that this establishes the linguistic
pattern that obtains for all the score points. One team scrutinized another team’s rubric and task
prompts in order to train fresh eyes on each task and rubric combo. A form was devised for this
exercise, and Kim handed it out and discussed its attributes.
Activity 4- Return to Ancillary Materials
Candidate Guide: Joe reviewed the Candidate Guide for the pilot. He emphasized that, though
the title says “candidate,” it is essential that the guide be read by all stakeholders.
The finalized Guide will be ready in early August so we can start disseminating it ASAP,
especially given the very tight timeline to get the pilot up and running.
Joe reminded the team that MoPTA is owned by DESE.
The Committee started to report out on their progress with Ancillary Materials.
Shelton Smith started. He discussed a form that will constitute an exit interview/summation of
the experience, of sorts. DESE wants such a form. Though some EPPs may have and can use
their own forms, DESE wants an official form provided by the Committee.
Becky Calloway reported out on the student survey that will be filled in by the students whom
the teacher candidate taught. Various committee members offered
suggestions to ensure that the focus remains on the students’ experience of the teacher candidate
and on the students’ learning.
Sam Hausfather presented the Lesson Plan Format document.
This is the form to be used for Tasks 3 and 4.
Alex Cuenca presented the Observation Protocol document.
The team used the Danielson framework as a model. They chose one quality indicator for each
standard that had the word “demonstrate” in it. They tried to make the document open-ended
enough so it could foster a conversation between mentor and candidate around the tasks of the
assessment while, at the same time, not forcing people to radically alter their observation
methodology.
DESE (MEES) is creating an observation protocol form. Hap: yes, but their forms are all geared
toward teachers in practice. At the present time, DESE is working on a formative and
summative form. When done, this document may replace the document Alex presented.
Hap noted that the observation evaluation will be combined with the cooperating teacher
score/evaluation, etc., which will be combined with the MoPTA scores/evaluations. Comparison
across the board is desired. Hap’s formative/summative form should be available, in draft, the
week of July 22.
Julie Ray presented the “Reflection on Student Learning” document.
The document (e.g., the section on struggling candidates) is designed not only to help organize
the process of reflecting for candidates, professors, and cooperating teachers but also to help
shape the way teacher educators and candidates think about the elements of teaching.
Pat Antrim presented on the Glossary.
Next Steps: The pilot begins August 19, 2013
Missouri School Leader Assessment
Content Development Team- Meeting One
June 25-27, 2013
Jefferson City, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation; Heather MacCleoud, Assistant Director
ETS: Seth Weiner, Executive Director, Teacher Licensure; Ethan Taylor, Project Owner; Frank Perry, Assessment Specialist; Steve Schreiner, Assessment Specialist
Content Development Team:
Arnold, Mick; Southwest Baptist University; Leadership Preparation Faculty
Barnes, Cindy; Southwest Livingston County R-1; Superintendent
Cooper-Baker, Gustava; George Washington Carver Elementary; Principal
Finch, Kim; Missouri State; Leadership Preparation Faculty
Hawley, Kim; Lee’s Summit High School; Principal
Maher, Carol; University of Missouri, Columbia; Leadership Preparation Faculty
Masters, James; Monroe City; Superintendent
Minter, Joe; Lafayette County R-1; Principal
Okruch, Tom; Southeast RPDC; Principal
Smith, Shelton; Missouri Baptist University; Leadership Preparation Faculty
Spear, Karla; Halfway Elementary; Principal
Watkins, Paul; Southeast Missouri State; Leadership Preparation Faculty
Watt, Jeremy; Putnam County High School; Principal
Wiebers, Dan; Trenton Public Schools; Principal
Purpose of the Meeting:
To design a standards-based performance assessment for school leaders in Missouri.
Overall Goals:
A) Analyze the Missouri Leader Standards as they relate to the school leader candidate and student learning
B) Complete the evidence-centered design (ECD) process
C) Articulate the Missouri School Leader Assessment blueprint
D) Provide an outline of each task
E) Provide draft prompts for the tasks
Pre-Meeting Homework for the Content Development Team:
1. Read and be familiar with the Missouri Leader and Superintendent Standards and
Indicators; A Brief Introduction to Evidence-Centered Design; Review of Teaching
Performance Assessments for Use in Human Capital Management; New Thinking about
Instructional Leadership; and The School Principal as Leader: Guiding Schools to Better
Teaching and Learning
2. Think about what they would expect school leaders to submit as evidence that they are
addressing these standards.
Meeting Agenda
Missouri School Leader Assessment Development
Meeting # 1
June 25-27, 2013

Day 1- Tuesday
8:30 am Introductions and Overview of the Meeting: Missouri Department of Education Staff (Purpose, Vision, History, Teams, Process, Timelines, etc.)
Introduction of ETS staff – Seth Weiner
Introduction of Development Committee – Frank Perry
Overview of School Leader Assessment Development Process – Steve Schreiner
9:30 Who Are the Assessment Takers?
10:30 Break
10:45 What are the Ideal Characteristics of a School Leader Clinical Experience?
11:15 Unpacking the Missouri Standards/Indicators and Sharing
12:00 pm Lunch
1:00 Unpacking the Missouri Standards/Indicators and Sharing (continued)
2:15 Share the Shell/Clustering of the Missouri Standards
2:45 The Best Venue for Assessing the Standards
3:15 Break
3:30 The Best Venue for Assessing the Standards (continued)
4:00 Clustering the Missouri Standards and Indicators
5:00 Adjournment

Day 2 - Wednesday
8:30 am Reflection on the Previous Day’s Work
9:00 Clustering the Missouri Standards and Indicators (continued)
9:30 Evidence-Centered Design PPT
10:15 Break
11:00 School Leader Assessment Design
12:30 pm Lunch
1:00 School Leader Assessment Design (continued)
2:30 Task Writing Template
5:00 Adjournment

Overview:
Before the “who” discussion, Steve Schreiner outlined the arc of the three meetings, leading to
the pilot in the spring of 2014 and the pilot scoring in the summer of 2014—a session that will
lead to the finalization of the assessment. Steve requested that team members keep calendars
clear for these meetings: Meeting 2 begins August 8 (3 days); Meeting 3 begins November 21 (4 days).
Activity 1 - The Who of the Assessment
Steve Schreiner led the first activity: “Who are the assessment takers?” In order to begin
development, the Content Development Team needed to have a common understanding of
“the who” this assessment would be assessing. The Evidence Centered Design (ECD)
activity required brainstorming and responses to the following questions: Who is being
assessed? What assumptions can we make about these school leader candidates when they
take this assessment? Think about the knowledge, skills, and abilities these candidates have
as a principal. What conclusions would you like to draw after they’ve taken the assessment?
What claims can we make about what they know and can do? What evidence can we ask
them to produce to show this? How do the KSAs of a principal differ from those of a
superintendent?
Participant comments were charted and posted. See the related document, “Who Is Being
Measured?”
Activity 2 – What Are the Ideal Components of a School Leader Clinical Experience?
Steve Schreiner led the brainstorming of what everyone would agree are the most important
parts of a school leader candidate’s training experience. “What should the school leader
clinical experience look like in an ideal world? What characteristics should the process
possess?” See the related document, “Ideal Components of a School Leader Clinical
Experience.”
Activity 3 - Unpacking the Standards
Frank Perry led the next activity in which the Missouri Standards were “unpacked” in order
to become familiar with what each would look like in application. Before the unpacking of
the standards began, there was a general discussion of the standards. The efficacy of the
standards as viable indicators of the right set of KSAs was generally supported by the
committee.
The guiding questions for the unpacking were:
- What would a quality indicator look like in practical application?
- What would be convincing evidence that a school leader candidate has the knowledge
and skills addressed in this standard?
- If successful, what would the impact on student learning look like?
An interesting point was made re standard 5 (Ethics), viz., that it embodies a tension
between dispositional attributes (e.g., the desire to be kind and humane) and legal
requirements that sometimes pull in a different direction.
Results of the discussion on Unpacking the Standards can be read on the related document,
“Unpacking the Standards.”
Activity 4– The Best Venue for Assessing the Standards
Steve Schreiner led this activity which asked committee members to match types of
assessments with each standard and indicator. Members were provided a chart listing each
standard/indicator with check-off columns for PA (Performance Assessment), MC (Multiple
Choice), CW (Constructed Response), and Other. The activity helped winnow the
standards/indicators to be used with the new performance assessments.
Activity 5 – Sharing the Shell
Steve Schreiner reviewed the three performance tasks identified by ETS and DESE for use in
this School Leader Assessment. The first task deals with Problem Solving in the Field, the
second centers on Supporting Teacher Leadership, and the third focuses on
School/District-Wide Professional Development.
Steve reinforced that the shell is not set in stone; the group can modify it.
The concern was about creating repetitious work. The example was the issue of the
survey: how might we avoid making people fill in surveys for, say, 4 candidates in a
single building all going for leadership training?
Will the test be taken throughout coursework or afterward as a capstone experience?
Steve answered in terms of validity and accessibility: he thinks there needs to be an
extended period of time to put the assessment together, given that situations take time to
unfold. But the assessment tasks will be submitted at the end of the internship.
A concern was aired about the nature of the performance assessment beast:
candidates get help/mentoring as they do the assessment over time.
The question arose, “Who would not pass?” Steve answered that, in summative
assessments, the mentors can act as resources but cannot help with the actual writing
of explanations or reflections or the composing of artifacts.
A meaningful measure: collaboration DOES take place during the preparation of the
assessment, but the actual writing/artifacts must be the candidates’. Tension exists
between collaboration and independence.
A meaningful assessment has a formative dimension (even if summative in the end)
and should help mold/change the profession in positive ways.
Should dovetail with DESE’s MOLead PD initiative.
Hap Hairston ended with a summary statement about the need to change the internship system:
that’s what all the comments have really circled around. The ideal would be a system where
interns were carefully chosen and paid to be full-time interns (principals-in-training). If they
fail, they go back to the classroom; if not, they receive principal certification.
Activity 6: Clustering the Standards
Frank Perry led this activity which asked for the committee members to identify which standards
and indicators most appropriately ‘clustered’ or matched the three performance tasks of Problem
Solving, Supporting Teacher Leadership, and School/District-Wide Professional Development.
The groups were re-arranged and spent 90 minutes on this activity.
Steve Schreiner and Ethan Taylor dealt with the logistics of the BIQ, W-9s, and travel and
substitute reimbursement. They also clarified the implications of the Confidentiality Form.
Meeting Notes for Wednesday, June 26
Reflection
Steve Schreiner opened the second day’s session with a reflective discussion focused on the
previous day’s work. Below are some of the comments.
Is the test going to get at the things a leader really needs to do to be an instructional
leader? Steve: the test content will be elicited from the committee, and the committee will
ensure that the test covers what is essential to a leader.
A worry that not enough indicators dealing with management survived the previous day’s
winnowing of the standards/indicators. The structures, policies, procedures are not there.
Steve: As the groups construct the tasks, some of the indicators will, perforce, come back
into the picture.
The test can’t be another hoop to jump through: it MUST be authentic, embedded in what
the practice is, and the committee needs to ensure this.
One team member opined that the shell is flexible enough to do the above.
Maybe not enough “people skills” are present (i.e., survived the winnowing of
standards/indicators). Again, Steve cautioned, these skills can be built into the activities in
the prompts. If the committee feels these skills are key, they will be built into the tasks.
One team member stated that she was concerned that this assessment still seems more
like a college project than an assessment. How do we keep that in mind, make it
rigorous, give it teeth, have a cut score?
Where do you get all the artifacts? (e.g., you can write about a conflict or problem and
how you resolved it. What kind of artifacts do you submit?) Steve: the nature of the
artifacts will be clear as they develop the task prompts.
Steve: you can prescribe the artifacts, or you can say: provide X artifacts to demonstrate
how you resolved an issue—i.e., leave it to the candidate to come up with the validating
documentation to show that he/she carried out the intent of a task prompt.
Steve noted that the rubrics will delineate how a task is to be scored and can instill the
right rigor—and establish criteria (e.g., “confronted and dealt with a problem in a
rational and ethical way,” even if the outcome was not a perfect resolution).
One team member suggested, make it real, make it reflective, make it an experience
worthwhile in its own right—like NBPTS. Maybe some candidates will drop out along
the way.
Steve reminded the team that the tasks are summative.
A theme: the test is like real life, but it is a test—and a summative test at that.
Activity 6 – Clustering the Standards (continued)
Frank Perry led the continuation of the clustering activity from Day 1, in which committee
members identified which standards and indicators most appropriately ‘clustered’ or matched the
three performance tasks of Problem Solving, Supporting Teacher Leadership, and
School/District-Wide Professional Development. The groups were re-arranged and spent 90
minutes on this activity. At the end of the activity, the groups identified specific Standards and
Indicators that should be addressed by each task.
The results are:
TASK 1: Standards 1C2, 2C3, 4C1, 5C1 (3C3 was later added after subsequent discussion)
TASK 2: Standards 2C2, 3C3, 4C3 (6C1 was later added after subsequent discussion)
TASK 3: Standards 1C2, 2C2, 2C3, 3C3, 4C1, 5C1, 6C1 (4C2 and 2C1 were later added)
Activity 7 – Task Design:
Committee members split into 3 groups; each group was assigned one of the 3 tasks. Each
group then brainstormed on the claims they would like to make, the evidence for the claims,
and the tasks that might elicit that evidence. Steve Schreiner shared the three-task shell that
the committee will be developing. Committee members will be determining the standards
and indicators addressed in each task’s shell, the activities to be completed, the prompts for
candidates’ responses, and the number and type of artifacts.
Activity 8 – Presentation of the Washington ProTeach Site
The Washington ProTeach site was presented to provide a visceral sense of what the assessment
will look like, what the experience of crafting responses and attaching artifacts will be like,
what a task looks like, etc.
Meeting Notes for Thursday, June 27
Activity 9 – Task Design (continued)
Frank reflected on the fruits of good leadership, contemplated from the vantage of the end of
the school year at a high school, as the students, the center and purpose of the whole
enterprise of school leadership, prepare for the next steps of their lives (the sudden onset of
high-seriousness on the part of hitherto carefree male students, etc.).
Steve then set up the three groups to continue working on their tasks for the rest of
the day. Dismissal took place at about 3 o’clock.
Activity 10 – Cross Check and Alignment
Steve said that this step will occur (in greater detail) at the next meeting. At today’s meeting,
the groups continued refining their performance tasks according to the templates posted
through LCD projectors onto respective screens. An initial cross-check review followed after
each group attempted a preliminary overview of their task’s activities, prompts, and
standards alignments. Groups exchanged their thoughts (group 1 to group 3, group 3 to
group 2, and group 2 to group 1). Vigorous discussion ensued as each group edited the other’s
task on the template provided. Each group then received their edited version back and discussed
among themselves the resulting suggestions/deletions. This discussion served to clarify the
purpose and focus of the performance task as it related to respective task objectives. The
tasks were not yet at a high enough degree of polish to carry out this step fully at this first meeting.
Closing
Steve brought the groups back together as a whole and asked for individual reflections on the
week’s activities. These are recorded in the document “Missouri School Leader Reflections.”
Next Steps
The next development team meeting will be on August 7, 8, and 9 in Jefferson City, Missouri.
The focus of the meeting will continue to be on task directions and prompt development.
Missouri School Leader Assessment
Content Development Team- Meeting Two
August 7th-9th, 2013
Jefferson City, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation
ETS: Ethan Taylor, Project Owner; Frank Perry, Assessment Specialist; Steve Schreiner, Assessment Specialist
Content Development Team:
Arnold, Mick; Southwest Baptist University; Leadership Preparation Faculty
Barnes, Cindy; Southwest Livingston County R-1; Superintendent
Cooper-Baker, Gustava; George Washington Carver Elementary; Principal
Finch, Kim; Missouri State; Leadership Preparation Faculty
Hawley, Kim; Lee’s Summit High School; Principal
Maher, Carol; University of Missouri, Columbia; Leadership Preparation Faculty
Masters, James; Monroe City; Superintendent
McCoy, Art; Ferguson Florissant School District; Superintendent
Minter, Joe; Lafayette County R-1; Principal
Okruch, Tom; Southeast RPDC; Principal
Smith, Shelton; Missouri Baptist University; Leadership Preparation Faculty
Spear, Karla; Halfway Elementary; Principal
Watkins, Paul; Southeast Missouri State; Leadership Preparation Faculty
Watt, Jeremy; Putnam County High School; Principal
Wiebers, Dan; Trenton High School; Principal
Purpose of the Meeting:
To design a standards-based performance assessment for school leaders in Missouri.
Overall Goals:
A) Analyze the Missouri Leader Standards as they relate to the school leader candidate and student learning
B) Complete the evidence-centered design (ECD) process
C) Provide draft prompts for the tasks ready for the tryout
D) Design a rubric template for one performance level in each task.
Meeting Agenda
Missouri School Leader Assessment Development
Meeting # 2
August 7-9, 2013
Day 1- Wednesday
8:30 am Introductions and Overview of the Meeting:
1. Welcome/Introductions/Housekeeping
2. Overview of the development process for this session
9:00 am Review of the previous meeting’s work
9:30 am Evidence Centered Design PowerPoint
10:00 am Break
10:15 am Revisiting the Current Tasks (in groups)
12:00 pm Lunch
1:00 pm Revisiting the Current Task (continued)
3:00 pm Break
3:15 pm Cross Check and Discussion
5:00 pm Adjournment
Activity 1: Overview of Evidence Centered Design
Steve Schreiner led the first activity: “What are Performance Assessments?” The Content
Development Team needed to have a common understanding of validity, reliability, equitability,
and accessibility around this assessment. The team discussed how ETS ensures validity
through the structured development steps. Discussion also came around to the fact that
the assessment is to be used for entrance into the profession and not solely as an
indicator of future success as a school leader.
The presentation went on to get the team thinking about the feasibility of the timeframe a school
leader candidate should have to complete and submit the assessment. Discussion focused on the
timing and how the assessment would be embedded within the program/coursework. School leader
candidates can go to the website and become familiar with the tasks and materials, along with the
requirements for the length of responses and the size of submitted artifacts, prior to actually
registering for the assessment.
Activity 2: Return to the Tasks
Frank Perry reviewed the work that was done at the first meeting around the focus of each task,
what the candidates need to do, and what activities and prompts are present in the tasks. The
team was then broken up into their task team groups to work together on the tasks.
The team broke into three working groups to refine the tasks. Groups exchanged tasks, with
additional comments and suggestions documented by the reviewing groups, until each group had
seen all three tasks.
The team was brought back together to discuss the process and their thoughts on the review of
the tasks. The team spoke about concerns that the tasks are too similar and that a candidate might
be able to use the same response on all three tasks. Members believe they need to make the tasks
more specific in their focus to ensure differentiation of the responses.
Day 2 - Thursday
8:30 am Tryout Review PowerPoint
9:00 am Return to Tasks
10:15 am Break
10:30 am Return to Tasks
12:00 pm Lunch
1:00 pm Sharing the Tasks
3:00 pm Break
3:15 pm Rubric PowerPoint
3:30 pm Rubric Development
5:00 pm Adjournment
Hap Hairston welcomed the team back to day two of the development meeting, responded to
parking lot items, and briefly shared communication about the transition to the new assessments.
Activity 3: Cross Check and Feedback
Next, the team was asked to break out into their task groups to go over the tasks. The group
was pulled back together after roughly a half hour of meeting in the task groups.
The teams shared two major feedback ideas on two tasks in order to ensure that each task was
focused on the correct topic. Written suggestions concerning more mundane ideas were also
shared on paper.
Paul Watkins provided feedback for Task 1. The focus was on safety in the school, looking at
playground safety, injuries that had occurred, and how these could be mitigated through local- or
district-level policies. He thought a case study would be a good way to assess this task.
Shelton Smith provided input on Task 2, and the team focused on the principal in the role of
supporting the teacher. A possible focus could be working with a beginning and a seasoned
teacher: how the candidate would develop the beginning teacher and how he or she might build
upon the strengths of the seasoned teacher.
For Task 3, suggested ideas included program focus, curriculum development, and community;
following colleagues through staff development; making changes and looking at the information
over a period of time to make it meaningful; following up; and making adjustments to improve
the process.
Activity 4: Tryout Review
Steve presented a review of the field test process. The first part was a review of the small-scale
field test called a tryout, and the second focused on the large-scale field test called a pilot. Steve
spent most of the time reviewing the details of the tryout, which will officially begin on August 23.
Results of the tryout will be delivered through the Wiki by November 1. Shelton Smith
encouraged team members to get involved by responding to the prompts themselves and by
recruiting colleagues to help.
Day 3 – Friday
8:30 am Rubric Development (continued)
10:00 am Break
10:15 am Rubric Development (continued)
12:00 pm Lunch
1:00 pm Rubric Development (continued)
3:00 pm Break
3:15 pm Rubric and Task Cross Check
4:30 pm Next steps & Homework
5:00 pm Adjournment
Activity 5: Rubric Development
Discussion began with Steve presenting a PowerPoint on rubric building. The rubric will be both
analytic and holistic. Scorers will look at the entire submission of work when determining the
task score. Each task will be scored on a 0 to 4 point scale, with a zero representing nothing submitted.
There will be benchmarks for each score level 1-4; these will be created from the pilot, along with
training papers. An example of a rubric was shared with the team so they could see the
differentiation of the various points on the scale. It was also determined that the video component
would best fit into the task of Creating a Collaborative Culture. Team 2 took on this task and a
new identity as Task #3. Similarly, the former Team 3 became Task #2. The new Task 2 revised
its prompts and activities to reflect the addition of the video component. The team asked
questions about the rubric. Rubric shells were handed out for Tasks 1 and 3 for the team to see.
Steve explained how the teams would begin to develop the rubrics through identifying parallel
words for the various points on the scale. The team broke into their task groups to work on the
rubric.
Next Steps
The next development team meeting will be on November 18, 19, and 20 in Jefferson City,
Missouri. The focus of the meeting will be on a review of the tryout responses.
Missouri School Leader Assessment
Content Development Team- Meeting Three
November 18th-20th, 2013
Jefferson City, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation
ETS: Ethan Taylor, Project Owner; Frank Perry, Assessment Specialist; Holly Schrum-Mayberry, Assessment Specialist; Steve Schreiner, Assessment Specialist
Content Development Team:
Arnold, Mick; Southwest Baptist University; Leadership Preparation Faculty
Barnes, Cindy; Southwest Livingston County R-1; Superintendent
Cooper-Baker, Gustava; George Washington Carver Elementary; Principal
Finch, Kim; Missouri State; Leadership Preparation Faculty
Hawley, Kim; Lee’s Summit High School; Principal
Maher, Carol; University of Missouri, Columbia; Leadership Preparation Faculty
Masters, James; Monroe City; Superintendent
Okruch, Tom; Southeast RPDC; Principal
Smith, Shelton; Missouri Baptist University; Leadership Preparation Faculty
Spear, Karla; Halfway Elementary; Principal
Watkins, Paul; Southeast Missouri State; Leadership Preparation Faculty
Watt, Jeremy; Putnam County High School; Principal
Purpose of the Meeting:
To design a standards-based performance assessment for school leaders in Missouri.
Overall Goals:
A) Review the tryout submissions in order to be ready for the pilot.
B) Review and modify the tasks and rubrics.
C) Review and modify the ancillary materials.
Meeting Agenda
Missouri School Leader Assessment Development
Meeting # 3
November 18-20, 2013
Day 1- Monday
8:30 - 9:00 Welcome
1. Re-introductions of the group
2. Where we were and where we are now in terms of development
3. Purpose of the Tryout Review
4. Procedures of the Tryout Review
9:00 - 9:30 Review the Format and Purpose of each Tryout Document - (Whole Group)
9:30 - 10:00 Review of Task 2
10:00-10:15 Break
10:15 - 12:15 Reading of Task 2 Responses (cont.)
12:15 – 1:00 Lunch
1:00 – 3:00 Reading of Task 1 Responses
3:00 – 3:15 Break
3:15 – 5:15 Reading of Task 3
Ethan Taylor welcomed the group. Holly Schrum-Mayberry led a discussion about the tryout: a
review of the tryout submissions and guiding prompts with an eye toward readiness for the pilot.
Frank Perry discussed the importance of Evidence-Centered Design: the “who,” the claims to be
assessed, and the evidence gathered through commentary and reflective feedback. He talked about
the evidence that was collected for each of the three tasks through describing the problem, the
goals to address the problem, and the impact on teaching/learning. The committee will collect the
evidence from the written commentary along with the artifacts that were supplied. Review will be
done on the directions and guiding prompts: Was coverage appropriate, were all parts of the
prompt addressed, and did the prompts elicit enough evidence/information? General questions for
the team to think about during the review: Is there an appropriate match between evidence and
guiding prompts? Were the tasks fair and equitable?
Steve Schreiner led a review of the tryout submissions. The first review focused on Task 2. The
committee talked about the contextual information that the candidate supplies so that the rater
understands the milieu of the situation while scoring the task.
Frank led the discussion on Task 1, which centered on the problem that is identified by the
candidate. The team discussed whether it is realistic to expect the candidate to be able to provide
a resolution to the problem. Should the problem be more specifically defined? Many problems
make use of longitudinal data, and the concern was that the submission window will not allow
enough time for the candidate to resolve the problem.
Holly led the review of Task 3 and the responses which continued into the next day.
Day 2 - Tuesday
8:30 - 8:45 Review of yesterday and overview of today
8:45 – 10:15 Revision of Tasks by Individual Groups
10:15 – 12:00 Alignment of Standards
12:00 – 1:00 Lunch
1:00 – 2:00 Final Revisions
2:00 – 3:00 Preparation for the Pilot
3:00 – 4:00 Overview of the Ancillary Materials
5:00 pm Adjournment
The team began by reviewing the remaining Task 3 tryout submissions. After that concluded, the
team was brought together to discuss their observations on the tryout materials. Topics covered
included concerns about how the video submission will work and how the problem the candidate
identifies should be defined so that it is a problem that can be worked within the time allotment
of the submission cycle.
The task teams broke out into their groups to work on updating/revising the tasks based on the
tryout review exercise. The entire committee was brought back to discuss the changes that were
made to each task, with the group being able to ask clarifying questions about the changes.
Steve reviewed a PowerPoint presentation on the requirements of the pilot, including a review of
pilot scoring, and encouraged team members to recruit candidates for the pilot experience and to
participate in the June 2014 pilot scoring session.
Frank oversaw the round-robin review of the tasks and their alignment to the Missouri Leader
Standards.
Holly introduced the ancillary materials that will be used as part of the MoSLPA pilot to support
candidates, mentors, and EPP personnel. Holly asked team members to think about these
materials as they continued to work with their tasks. One of tomorrow’s activities will be to work
with these documents.
Day 3 – Wednesday
8:30 - 8:45 Review of yesterday and overview of today
8:45 – 11:00 Revision, Crosscheck, and QC of rubrics
11:00 – 12:00 Working with the Ancillary Materials
12:00 – 1:00 Lunch
1:00- 3:00 Working with the Ancillary Materials
3:00 - 3:15 BREAK
3:15 - 4:15 Sharing Discussion of Ancillary Materials
4:15 Closing Remarks
4:30 pm Next steps
5:00 pm Adjournment
Holly began the session with the review of the rubrics for each task. The rubric work took place
within the task teams.
Holly and Frank reviewed the list of ancillary materials and teams worked on these for the
remainder of the morning.
Steve shared the online submission system currently being used by the Missouri Pre-Service
Teacher Assessment, which will be used by the School Leader Assessment come February.
Hap Hairston, Director of Education Preparation, entertained many Parking Lot issues that the
team members had collected. Hap clarified several policy issues and put this assessment in
perspective as well.
The meeting adjourned at noon.
Next Steps:
The revised tasks and rubrics will be reviewed by ETS staff, and then the materials will be sent
for editorial and fairness review in preparation for the pilot and the uploading of materials onto
the informational website.
Missouri School Counselor Assessment
Content Development Team- Meeting One
February 4-6, 2014
Jefferson City, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation; Christina Hudson, Assistant Director
ETS: Ethan Taylor, Project Owner; Sue Obetz, Assessment Specialist; Jenna Norton, Assessment Specialist; Steve Schreiner, Assessment Specialist
Content Development Team:
Bader, Karen; Aurora School District; Elementary Counselor
Connor, Kim; Lincoln University; Assistant Professor, Department of Ed
Dowdy, Marci; Missouri State University; Senior Instructor
Hiatt, Rochelle; Northwest Missouri State University; Assistant Professor
Horton, Sharon; Ashland School District; Elementary Counselor
Speck, Janice; Missouri Baptist University
Ward, Janice; Southeast Missouri State University; Professor
Purpose of the Meeting:
To design a standards-based performance assessment for school counselors in Missouri.
Overall Goals:
A) Analyze the Missouri School Counselor Standards as they relate to the school counselor candidate and student learning
B) Complete the evidence-centered design (ECD) process
C) Articulate the Missouri School Counselor Assessment blueprint
D) Provide an outline of each task
E) Provide draft prompts for the tasks
Pre-Meeting Homework for the Content Development Team:
1. Read and be familiar with the Missouri School Counselor Standards and
Indicators; A Brief Introduction to Evidence-Centered Design; Counselor
Education Accountability: Training the Effective Professional School
Counselor; and Review of Teaching Performance Assessments for Use in
Human Capital Management.
2. Think about what they would expect school counselors to submit as evidence that
they are addressing these standards.
Meeting Agenda
Missouri School Counselor Assessment Development
Meeting # 1
February 4-6, 2014

Day 1- Tuesday
8:30 am Introductions and Overview of the Meeting: Missouri Department of Education Staff (Purpose, Vision, History, Teams, Process, Timelines, etc.)
Introduction of ETS staff – Ethan Taylor and Christina Hudson
Introduction of Development Committee – Annette DeLuca
Overview of School Counselor Assessment Development Process – Steve Schreiner
9:30 Who Are the Assessment Takers?
10:30 Break
10:45 What are the Ideal Characteristics of a School Counselor Clinical Experience?
11:15 Unpacking the Missouri Standards/Indicators and Sharing
12:00 pm Lunch
1:00 Unpacking the Missouri Standards/Indicators and Sharing (continued)
2:15 Share the Shell/Clustering of the Missouri Standards
2:45 The Best Venue for Assessing the Standards
3:15 Break
3:30 The Best Venue for Assessing the Standards (continued)
4:00 Clustering the Missouri Standards and Indicators
5:00 Adjournment

Day 2 - Wednesday
8:30 am Reflection on the Previous Day’s Work
9:00 Evidence-Centered Design PPT
9:30 Clustering the Missouri Standards and Indicators (continued)
10:15 Break
11:00 School Counselor Assessment Design
12:30 pm Lunch
1:00 Website Review of Tasks and Authoring System
2:30 Task Writing Template
5:00 Adjournment

Day 3 - Thursday
8:30 am Reflection on the previous day’s work
8:45 Task Writing (Continued)
10:15 Break
10:30 Task Writing (Continued)
12:00 pm Lunch
12:45 Task Writing (Continued)
3:15 Break
3:30 Cross Check and Alignment of Task Designs
5:00 Adjournment
Overview:
Before breaking into certificate areas, Steve Schreiner outlined the arc of the three meetings,
leading to the pilot in the fall of 2014 and the pilot scoring in the spring of 2015—a session that
will lead to the finalization of the assessment.
Tuesday, February 4
Activity 1 - The Who of the Assessment
Sue Obetz led the first activity: “Who are the assessment takers?” In order to begin
development, the Content Development Team needed to have a common understanding of “the
who” this assessment would be assessing. The Evidence Centered Design (ECD) activity
required brainstorming and responses to the following questions: Who is being assessed?
What assumptions can we make about these school counselor candidates when they take this
assessment? Think about the knowledge, skills, and abilities these candidates have as school
counselors. What conclusions would you like to draw after they’ve taken the assessment? What
claims can we make about what they know and can do? What evidence can we ask them to
produce to show this? Participant comments were charted and posted. See the related document,
“Who are the Assessment Takers?”
Activity 2 – What Are the Ideal Components of a School Counselor Training Experience?
Steve Schreiner led the brainstorming of what everyone would agree are the most important parts
of a school counselor candidate’s training experience. “What should the school counselor clinical
experience look like in an ideal world? What characteristics should the process possess?” See the
related document, “Ideal Components of a School Counselor Program.”
Activity 3 - Unpacking the Standards
Jenna Norton led the next activity in which the Missouri Standards were “unpacked” in order to
become familiar with what each would look like in application.
Before the unpacking of the standards began, there was a general discussion of the standards.
The efficacy of the standards as viable indicators of the right set of KSAs was generally
supported by the committee.
The guiding questions for the unpacking were:
- What would a quality indicator look like in practical application?
- What would be convincing evidence that a school counselor candidate has the knowledge
and skills addressed in this standard?
- If successful, what would the impact on student learning look like?
The committee members generally agreed upon the interpretation and understanding of each of
the standards and indicators. One group focused specifically on the indicators as they
“unpacked” their set of standards. This guided the other two groups in discussing the indicators
within their standard(s) during group discussion. A point was made that several components
from each standard overlapped with other standards in one way or another. For example, it was
mentioned that quality indicator 5 from Standard 4 (School Climate and Culture) also plays a
role in quality indicator 4 from Standard 3 (School and Community Involvement.)
Activity 4– The Best Venue for Assessing the Standards
Sue Obetz led this activity which asked committee members to match types of assessments with
each standard and indicator. Members were provided a chart listing each standard/indicator with
check-off columns for PA (Performance Assessment), MC (Multiple Choice), CW (Constructed
Response), and Other. The activity helped identify the standards/indicators to be used with the
new performance assessments.
Activity 5 – Sharing the Shell
Steve Schreiner reviewed the three performance tasks identified by ETS and DESE for use in
this School Counselor Assessment. The tasks deal with Maintaining and Enhancing the
Guidance Program; Organizing, Responding, and Offering Support; and Interacting with the
Classroom, the Faculty, and/or Parent Support Groups.
Steve reinforced that the shell is not set in stone; the group can modify it.
The team’s greatest concern was the video-based task. Team members thought they
would have to be taping a “conversation” between the counselor and a student. When
they learned that the video would be of a presentation to a class or faculty or parents,
their concerns were assuaged, and they felt the video was totally appropriate.
Activity 6: Clustering the Standards
Jenna Norton led this activity which asked for the group members to identify which standards
and indicators most appropriately ‘clustered’ or matched the three performance tasks outlined in
the ‘Sharing the Shell’ activity. As the group members started discussing which standards and
indicators were most appropriate for which tasks they came to a consensus that the three
performance tasks needed to be reworded in a way that better assessed school counselors by
Missouri standards. As a result, the group members renamed the three performance tasks the
following:
1. Planning, Designing, Evaluating, and Enhancing the Guidance Program
2. Implementing Program Components; and Professional Relationships
3. Interacting with students, faculty, family, and community.
Committee members worked in their small groups on this task for the duration of the work day.
Wednesday, February 5
Reflection
Sue Obetz opened the second day’s session with a reflective discussion focused on the previous
day’s work. Below are some of the comments.
The participants reviewed the “who” as in who will be a candidate for the assessment.
They reinforced that candidates for the performance assessment should be in their
internship, should have completed the majority of their course work, and should be in a
Master’s program in School Counseling. Sue reinforced that the expectations of the
knowledge, skills, and abilities that go along with this level of experience should drive
the activities of the committee.
The committee discussed the review of the standards and quality indicators and the
processes of identifying those indicators that should be included in the assessment.
The committee felt there was fairly good agreement among committee members about which
standards and indicators were appropriate to be used in a performance assessment. The
committee was in the process of identifying which standards and indicators were appropriate
for each task. They also renamed the tasks so that they were more consistent with School
Counselor terminology.
Activity 6 – Clustering the Standards (continued)
Jenna Norton led this activity which asked for the committee members to identify which
standards and indicators most appropriately ‘clustered’ or matched the three performance tasks
of Planning, Designing, Evaluating, and Enhancing the Guidance Program; Implementing
Program Components; and Professional Relationships: Interacting with Students, Faculty,
Family, and Community. At the end of this activity, the groups identified specific Standards and
Indicators that should be addressed by each task.
The results are:
TASK 1: Standard 2, Indicators 3 and 4; Standard 4, Indicator 4
TASK 2: Standard 1, Indicators 3, 4, and 6; Standard 2, Indicator 3; Standard 4, Indicator 5
TASK 3: Standard 1, Indicator 3; Standard 3, Indicators 2 and 4
Activity 7 – Task Design Outline
Committee members split into 3 groups; each group was assigned one of the 3 tasks. Each
group then brainstormed on the claims they would like to make, the evidence for the claims,
and the tasks that might elicit that evidence. Steve Schreiner shared the three-task shell that
the committee will be developing. Committee members will be determining the standards
and indicators addressed in each task’s shell, the activities to be completed, the prompts for
candidates’ responses, and the number and type of artifacts.
The Washington ProTeach site was presented to provide a visceral sense of what the assessment
will look like, what the experience of crafting responses and attaching artifacts will be like,
what a task looks like, etc.
The committee spent a good part of the day identifying potential claims, evidence, and
activities that would elicit the desired evidence.
Thursday, February 6
Reflection
Jenna Norton opened the third day’s session with a reflective discussion focused on the previous
day’s work. In general, the committee members were positive. Some discussion occurred in
response to the Task Design Outline that was completed on Wednesday. The members found this
task to be slightly more challenging than the other tasks they had completed. Steve Schreiner did
an excellent job explaining how developing performance assessments is part of a larger process
and how it takes a lot of practice and tweaking of ideas to make sense of everything that needs to be
done. Overall, the reflection went well.
Activity 8 – Task Writing Template
Jenna Norton led this activity. Committee members separated into their 3 groups to begin
working on writing tasks (activities) for the school counselor candidates. Each group focused on
one of the 3 tasks outlined in Handout 7 (the shell). This activity took up the majority of the
day. The committee members worked on the four steps that the candidates would need to
complete as their performance assessment. The steps had to illustrate the following components:
1) Designing/Planning, 2) Implementing, 3) Analyzing, 4) Reflecting. The committee members
were able to complete a good portion of this assignment with guidance from Steve, Susan, and
Jenna.
Activity 9 – Cross Check and Alignment
Steve said that this step will occur (in greater detail) at the next meeting. At today’s meeting, the
groups continued refining their performance tasks according to the templates posted through LCD
projectors onto respective screens. An initial cross-check review followed after each group
attempted a preliminary overview of their task’s activities, prompts, and standards alignments.
Groups exchanged their thoughts (group 1 to group 3, group 3 to group 2, and group 2 to group 1).
Vigorous discussion ensued as each group edited the other’s work on the template provided. Each
group then received their edited version back and discussed among themselves the resulting
suggestions/deletions. This discussion served to clarify the purpose and focus of the performance
task as it related to respective task objectives. The tasks were not yet at a high enough degree of
polish to carry out this step fully at this first meeting.
Next Steps
The next development team meeting will be on April 8, 9, and 10 in Jefferson City, Missouri.
The focus of the meeting will continue to be on task directions and prompt development.
Also, additional team members will be recruited; they will meet on April 7 and experience an
abridged session in order to blend in with the current team of developers.
Missouri School Counselor Assessment
Content Development Team- Meeting One
April 7, 2014
Jefferson City, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation, Christina Hudson, Assistant
Director
ETS: Ethan Taylor, Project Owner, Sue Obetz, Assessment Specialist, Jenna Norton,
Assessment Specialist, Steve Schreiner, Assessment Specialist
Content Development Team
Name – Institution – Position
Heckman, Geoff – Cameron Schools – High School Counselor
McIntyre, Becky – Raytown Schools – Middle School Counselor
Sadewhite, Sara – Columbia Public Schools – Elementary Counselor
Shelton, Laura – Savannah Public Schools – Elementary Counselor
Simpson, Catherine – Missouri Baptist College – Instructor, Retired Counselor
Thompson, Jason – Missouri Baptist College – Instructor, Elementary Counselor
Vertin, Shelly – Savannah Public Schools – High School Counselor
Williams, Cherri – Cole County School District – K-8 Counselor
Purpose of the Meeting:
To design a standards-based performance assessment for school counselors in Missouri.
To add development team members and to have them experience the evidence centered design
process that the other team members previously experienced.
Overall Goals:
EE) Analyze the Missouri School Counselor Standards as they relate to the school
counselor candidate and student learning
FF) Complete the evidence-centered design (ECD) process
GG) Articulate the Missouri School Counselor Assessment blueprint
HH) Provide an outline of each task
II) Provide draft prompts for the tasks
Pre-Meeting Homework for the Content Development Team:
1. Read and be familiar with the Missouri School Counselor Standards and
Indicators; A Brief Introduction to Evidence-Centered Design; Counselor
Education Accountability: Training the Effective Professional School
Counselor; and Review of Teaching Performance Assessments for Use in
Human Capital Management.
2. Think about what they would expect school counselors to submit as evidence that
they are addressing these standards.
Meeting Agenda Missouri School Counselor Assessment Development
Meeting # 1
April 7, 2014
Day 1- Monday
8:30 am Introductions and Overview of the Meeting: Missouri Department of Education Staff (Purpose, Vision, History, Teams, Process, Timelines, etc.)
Introduction of ETS staff – Ethan Taylor and Christina Hudson
Introduction of Development Committee – Annette DeLuca
Overview of School Counselor Assessment Development Process – Steve
Schreiner
9:30 Who Are the Assessment Takers?
10:30 Break
10:45 What are the Ideal Characteristics of a School Counselor Clinical Experience?
11:15 Unpacking the Missouri Standards/Indicators and Sharing
12:00 pm Lunch
1:00 Unpacking the Missouri Standards/Indicators and Sharing (continued)
2:15 Share the Shell
3:15 Break
3:30 Share the First Draft of the Three Tasks
4:30 Adjournment
Overview:
Before breaking into certificate areas, Steve Schreiner outlined the arc of the three meetings,
leading to the pilot in the fall of 2014 and the pilot scoring in the spring of 2015—a session that
will lead to the finalization of the assessment.
Monday, April 7
Activity 1 - The Who of the Assessment
Sue Obetz led the first activity: “Who are the assessment takers?” In order to begin
development, the Content Development Team needed to have a common understanding of “the
who” this assessment would be assessing. The Evidence Centered Design (ECD) activity
required brainstorming and responses to the following questions: “Who is being assessed?
What assumptions can we make about these school counselor candidates when they take this
assessment? Think about the knowledge, skills, and abilities these candidates have as school
counselors. What conclusions would you like to draw after they’ve taken the assessment? What
claims can we make about what they know and can do? What evidence can we ask them to
produce to show this?” Participant comments were charted and posted. See the related document,
“Who are the Assessment Takers?”
Activity 2 – What Are the Ideal Components of a School Counselor Training Experience?
Steve Schreiner led the brainstorming of what everyone would agree are the most important parts
of a school counselor candidate’s training experience. “What should the school counselor clinical
experience look like in an ideal world? What characteristics should the process possess?” See the
related document, “Ideal Components of a School Counselor Program.”
Activity 3 - Unpacking the Standards
Jenna Norton led the next activity in which the Missouri Standards were “unpacked” in order to
become familiar with what each would look like in application.
Before the unpacking of the standards began, there was a general discussion of the standards.
The efficacy of the standards as viable indicators of the right set of KSAs was generally
supported by the committee.
The guiding questions for the unpacking were:
- What would a quality indicator look like in practical application?
- What would be convincing evidence that a school counselor candidate has the knowledge
and skills addressed in this standard?
- If successful, what would the impact on student learning look like?
The committee members generally agreed upon the interpretation and understanding of each of
the standards and indicators. One group focused specifically on the indicators as they
“unpacked” their set of standards. This guided the other two groups in discussing the indicators
within their standard(s) during group discussion. The results from this unpacking led to many of
the same responses made by the first group of committee members back in February. It was
reaffirming to see that this group agreed on the important components that should be assessed in
these standards.
Activity 5 – Sharing the Shell
Steve Schreiner reviewed the three performance tasks identified by ETS and DESE for use in
this School Counselor Assessment. The first task deals with Maintaining and Enhancing the
Guidance Program; the second with Organizing, Responding, and Offering Support; and the
third with Interacting with the Classroom, the Faculty, and/or Parent Support Groups.
Steve reinforced that the shell is not set in stone; the group can modify it.
The team’s greatest concern was the video-based task. Team members thought they
would have to tape a “conversation” between the counselor and a student. When
they learned that the video would instead be of a presentation to a class, faculty, or parents,
their concerns were assuaged, and they felt the video was entirely appropriate.
Activity 7 – Task Design Outline
Committee members split into 3 groups; each group was assigned one of the 3 tasks. Each
group then brainstormed on the claims they would like to make, the evidence for the claims,
and the tasks that might elicit that evidence. Steve Schreiner shared the three-task shell that
the committee will be developing. Committee members will be determining the standards
and indicators addressed in each task’s shell, the activities to be completed, the prompts for
candidates’ responses, and the number and type of artifacts.
The Missouri Pre-Service Teacher Assessment site was presented to provide a visceral sense
of what the assessment will look like.
Activity 8 – Task Writing Template
Jenna Norton led this activity. Committee members separated into their 3 groups to begin
working on writing tasks (activities) for the school counselor candidates. Each group focused on
one of the 3 tasks outlined in Handout 7 (the shell). This activity took up the majority of the
day. The committee members worked on the four steps that the candidates would need to
complete as their performance assessment. The steps had to illustrate the following components:
1) Designing/Planning, 2) Implementing, 3) Analyzing, 4) Reflecting. The committee members
were able to complete a good portion of this assignment with guidance from Steve, Susan, and
Jenna.
Missouri School Counselor Assessment
Content Development Team- Meeting Two
April 8-10, 2014
Jefferson City, Missouri
In Attendance:
DESE: Gale “Hap” Hairston, Director of Education Preparation, Christina Hudson, Assistant
Director
ETS: Ethan Taylor, Project Owner, Sue Obetz, Assessment Specialist, Jenna Norton,
Assessment Specialist, Steve Schreiner, Assessment Specialist
Content Development Team
Name – Institution – Position
Bader, Karen – Aurora School District – Elementary Counselor
Connor, Kim – Lincoln University – Assistant Professor, Department of Ed
Dowdy, Marci – Missouri State University – Senior Instructor
Heckman, Geoff – Cameron Schools – High School Counselor
Hiatt, Rochelle – Northwest Missouri State University – Assistant Professor
McIntyre, Becky – Raytown Schools – Middle School Counselor
Sadewhite, Sara – Columbia Public Schools – Elementary Counselor
Shelton, Laura – Savannah Public Schools – Elementary Counselor
Simpson, Catherine – Missouri Baptist College – Instructor, Retired Counselor
Speck, Janice – Missouri Baptist University – Instructor
Thompson, Jason – Missouri Baptist College – Instructor, Elementary Counselor
Vertin, Shelly – Savannah Public Schools – High School Counselor
Ward, Janice – Southeast Missouri State University – Professor
Williams, Cherri – Cole County School District – K-8 Counselor
Purpose of the Meeting:
To design a standards-based performance assessment for school counselors in Missouri.
Overall Goals:
JJ) Analyze the Missouri School Counselor Standards as they relate to the school
counselor candidate and student learning
KK) Complete the evidence-centered design (ECD) process
LL) Articulate the Missouri School Counselor Assessment blueprint
MM) Provide an outline of each task
NN) Provide draft prompts for the tasks ready for the tryout
OO) Design a rubric template for one performance level in each task
Meeting Agenda Missouri School Counselor Assessment Development
Meeting # 2
April 8-10, 2014
Day 1- Tuesday
8:30 am Introductions and Overview of the Meeting: Missouri Department of Education Staff
(Purpose, Vision, History, Teams, Process, Timelines, etc.)
Introduction of ETS staff – Christina Hudson
Introduction and Overview– Steve Schreiner
8:30- 9:00 Introductions and Overview of the Meeting:
1. Welcome/Introductions/Housekeeping
2. Overview of the development process for this session
9:00- 9:15 Review of the previous meeting’s work/Discussing Standards
9:15- 10:00 Revisiting the Current Tasks (in groups)
10:00 Break
10:15- 12:00 Revisiting the Current Tasks
12:00 Lunch
1:00- 3:00 Cross Check
3:00- 3:15 Break
3:15- 5:00 Discussion (whole group) and Return to Tasks
5:00 Adjournment
Tuesday, April 8
Activity 1 - Review of the previous meeting’s work/Discussing Standards
Jenna Norton led the first activity: “Reviewing the previous meeting’s work and discussing the
standards.” During this activity our new members on the Content Development Team shared
with the previous members their interpretation of the standards and indicators and how they
broke down these standards to be meaningful. Discussion focused on the similarities between
what the old group and the new group came up with. There was a lot of consistency in the
examples the groups offered to demonstrate their understanding of the standards and
indicators.
Activity 2 – Revisiting the Tasks
Sue Obetz reviewed the work that was done at the first meeting addressing the tasks, what the
candidates need to do, and what activities and prompts are presented in the tasks. The team was
then broken up into their task team groups to work together on the tasks.
Activity 3 - Cross-Check and Feedback
Jenna Norton led the cross-check. The task teams broke into three working groups where they
worked on refining the tasks. Groups exchanged tasks where additional comments and
suggestions were documented by the groups until each group saw all three tasks.
The team was brought back together to discuss the process and their thoughts on the review
of the tasks. The team spoke about their concerns with the tasks. A major concern was that two
of the tasks (Task 2 and Task 3) required a great deal more from the school counselor candidate,
and the teams felt it might be too much. Refinements were also made to the guiding prompts and
the reflection section so that they made better sense and flowed smoothly.
Wednesday, April 9
Day 2 - Wednesday
8:30- 9:00 Tryout Review PowerPoint
9:00- 10:15 Return to Tasks
10:15- 10:30 Break
10:30- 12:00 Return to Tasks
12:00- 1:00 Lunch
1:00 -5:00 Tasks
Activity 4– Tryout Review PowerPoint
Steve presented a review of the field test process, which covers two stages: a small-scale field
test called a tryout and a large-scale field test called a pilot. Steve spent most of the time
reviewing the details of the tryout, which will officially begin on April 18.
Results of the tryout will be delivered through the Wiki by May 2. Steve encouraged team
members to get involved by responding to the prompts themselves and by recruiting colleagues
to help.
Activity 5: Return to Tasks
Teams returned to their groups to continue refining their tasks. Steve, Sue, and Jenna took turns
meeting with the groups and assisting them in their development. As the teams finalized their
work, they began sharing with each other changes they made.
Thursday, April 10
Day 3 - Thursday
8:30- 9:15 Rubrics PowerPoint
9:15- 10:15 Rubrics
10:15- 12:00 Task Cross Check
12:00- 1:00 Lunch
1:00- 3:00 Rubrics and Task Refinement (continued)
3:00 – 3:15 Break
3:15- 4:30 Ancillary Materials
4:30-5:00 Next steps, Homework, Adjournment
Activity 6: Rubrics PowerPoint
Steve presented a PowerPoint on rubric building. The rubric will be both analytic and holistic:
raters will look at the entire submission of work when determining the task score. The task will
be scored on a 4-point scale, and there will be benchmarks for each task level, 1-4. The benchmarks
will be identified during the pilot, along with training cases. An example of a rubric was shared with the
team so they could see the differentiation of the various points on a scale.
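To make the rubric structure concrete, the following is a minimal sketch, in Python, of how a holistic 4-point task rubric might be represented as data. Everything here is a hypothetical illustration (the class name, the descriptor wording, the task label); the meeting notes specify only the 1-4 holistic scale with benchmark responses identified per level during the pilot.

```python
# Hypothetical sketch of a holistic 4-point task rubric; not actual MoPTA
# scoring material. Only the 1-4 scale and the idea of per-level benchmarks
# come from the meeting notes.
from dataclasses import dataclass

@dataclass
class TaskRubric:
    task_name: str
    level_descriptors: dict[int, str]  # one descriptor per score point, 1 (low) to 4 (high)

    def describe(self, score: int) -> str:
        # Holistic scoring: a single 1-4 judgment over the entire submission
        if score not in self.level_descriptors:
            raise ValueError("Task scores are holistic, on a 1-4 scale")
        return self.level_descriptors[score]

rubric = TaskRubric(
    task_name="Task 1 (illustrative)",
    level_descriptors={
        1: "Little or no evidence that the clustered standards are addressed",
        2: "Partial or uneven evidence across the task's activities",
        3: "Clear, appropriate evidence across the task's activities",
        4: "Thorough, insightful evidence across the task's activities",
    },
)
print(rubric.describe(3))  # prints the benchmark-style descriptor for a score of 3
```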
The team asked questions about the rubric. Rubric shells were handed out for tasks 1, 2, and 3
for the team to see. Steve explained how the teams would begin to develop the rubrics through
identifying parallel words for the various points on the scale. The team broke into their task
groups to work on the rubric.
Activity 7 – Rubrics
The teams broke up into their groups to begin rubric development. Their task was to come up
with scoring criteria for what a score of three should look like. They worked on the rubric by
comparing it with their task to develop proper scoring criteria. There was enough time for two of
the groups to cross-check each other’s work.
Next Steps
The next development team meeting will be on May 13, 14, and 15 in Columbia, Missouri. The
focus of the meeting will be on review of the Tryout responses and Task/Rubric revision.
Missouri School Librarian Assessment
Content Development Team- Meeting One
February 4 -6, 2014
Jefferson City, Missouri
In Attendance:
DESE: Christina Hudson (Feb. 4th AM), Hap Hairston (Feb 5th PM and Feb 6th PM)
ETS: Ethan Taylor, Project Owner, Annette Deluca, Assessment Specialist, Melissa O’Rourke,
Director of Assessment Development, Kim Segal-Morris, Assessment Specialist
Content Development Team
Name – Institution – Position
Pat Antrim – University of Central Missouri – Instructor
Bill Edgar – Missouri State University – Coordinator, Library Science Education
Sheila Driemeyer – East Central College – Associate Director of Library
Jenni George – Ft. Zumwalt South HS – Library Media Specialist
Heather Mitchell – Nowlin Middle School, Independence Public Schools – Library Media Specialist
Michael Russell – Lee’s Summit North High School – Library Media Specialist
Kerry Townsend – Lead Media Specialist, Columbia Public Schools – Instructional Technology Specialist/LMS
Sharon Nations – Oakgrove Public Schools – School Librarian
Sharon Salmons – Shepard Boulevard Elementary School – School Librarian
Kelli Krause – Knob Noster Middle School – Librarian
Purpose of the Meeting:
To design a standards-based performance assessment for school librarians in Missouri.
Overall Goals:
PP) Analyze the Missouri Standards for School Librarians as they relate to the
school librarian candidate and student learning
QQ) Complete the evidence-centered design (ECD) process
RR) Articulate the Missouri School Librarian Assessment blueprint
SS) Provide an outline of each task
TT) Provide draft prompts for the tasks
Pre-Meeting Homework for the Content Development Team:
1. Read and be familiar with the Missouri Standards for School Librarians
Standards and Indicators; A Brief Introduction to Evidence-Centered Design;
Review of Teaching Performance Assessments for Use in Human Capital
Management; and The New School Library – The Human Connection to Digital
Resources and Academic Success.
2. Think about what they would expect school librarians to submit as evidence that
they are addressing these standards.
Meeting Agenda Missouri School Librarian Assessment Development
Meeting # 1
Day 1- Tuesday, Feb. 4th
8:30 am Introductions and Overview of the Meeting: Missouri Department of Education Staff
(Purpose, Vision, History, Teams, Process, Timelines, etc.)
Introduction of ETS staff – Ethan Taylor
Introduction of Development Committee – Annette DeLuca
Overview of Performance Assessment Development Process – Steve
Schreiner
9:30 Who Are the Assessment Takers?
10:30 Break
10:45 What are the Ideal Characteristics of a School Librarian’s Clinical Experience?
11:15 Unpacking the Missouri Standards/Indicators and Sharing
12:00 pm Lunch
1:00 Unpacking the Missouri Standards/Indicators and Sharing (continued)
2:15 Share the Shell/Clustering of the Missouri Standards
2:45 The Best Venue for Assessing the Standards
3:15 Break
3:30 The Best Venue for Assessing the Standards (continued)
5:00 Adjournment
Day 2 – Wednesday, Feb. 5th
8:30 am Reflection on the Previous Day’s Work
9:00 Evidence-Centered Design PPT
9:30 Clustering the Missouri Standards and Indicators
10:15 Break
11:00 Clustering the Missouri Standards and Indicators
12:00pm Lunch
1:00 Sharing the Missouri MEGA Website
1:30 School Librarian Task Design Outline
5:00 Adjournment
Day 3 – Thursday, Feb. 6th
8:30 am Reflection on the Previous Day’s Work
9:00 School Librarian Task Design Outline
9:30 School Librarian Task Design Outline
10:15 Break
11:00 Task Writing Template
12:00pm Lunch
1:00 Task Writing Template
4:00 Cross Check and Alignment
5:00 Adjournment
Overview:
Activity 1 - The Who of the Assessment
Annette DeLuca led the first activity: “Who are the assessment takers?” In order to begin
development, the Content Development Team needed to have a common understanding of
“the who” this assessment would be assessing. The evidence centered design (ECD) activity
required brainstorming and responses to the following questions: “Who is being assessed?
What assumptions can we make about these school librarian candidates when they take this
assessment? Think about the knowledge, skills, and abilities these candidates have at this
point in time. What conclusions would you like to draw after they’ve taken the assessment?
What claims can we make about what they know and can do? What evidence can we ask
them to produce to show this?”
Clarification was provided by Hap Hairston (DESE) in regard to the “who” of this candidate
when it came to establishing who would be required to complete this assessment. Missouri
currently has five different paths through which a school librarian certificate can be awarded, and
candidates’ clinical experiences vary from one semester of student teaching to completing a 100-hour
practicum. Hap clarified that all candidates, regardless of the path that they choose, will be
required to take the performance assessment. He also stated that the length of this clinical
experience would be one semester. Development team comments for this activity were
charted and posted. See the related document, “Who Are the Assessment Takers?”
Activity 2 – What Are the Ideal Components of a School Librarian’s Clinical Experience?
Melissa O’Rourke led the next activity, during which the development team brainstormed
and agreed on what they considered the most important aspects of a school librarian
candidate’s training experience. Questions that were considered included “What should the
school librarian/media specialist clinical experience look like in an ideal world?” and “What
characteristics should the process possess?” See the related document, “What are the Ideal
Characteristics of a School Librarian’s Clinical Experience” for the results of this activity.
Activity 3 - Unpacking the Missouri Standards for School Librarians
Kim Morris led the next activity in which the Missouri Standards were “unpacked” to become
familiar with what each would look like in application.
During this activity the Committee worked in four small groups, and each group was
responsible for unpacking and becoming experts on one or two of the standards. The guiding
questions for the unpacking were:
1. What would this standard look like in application? What are some examples?
2. What would be convincing evidence that a school librarian candidate has the knowledge
and skills addressed in this standard?
3. If successful, what would the impact on student learning look like?
Each group then presented their standard(s) to the whole group for discussion and consensus.
During the presentation of Standard 2 (Reading and Literacy) the Committee had a
discussion regarding the difference between student achievement and student learning and
what each of these phrases meant in the context of unpacking the standards and the third
guiding question. The Committee debated whether students would need to be assessed in
some manner in order to confirm student learning, or whether it could be confirmed in another way.
The Committee decided to table the discussion and revisit it during a subsequent task.
During the presentation of Standard 1 (Teaching for Learning) the Committee made
additional suggestions for convincing evidence (guiding question 2) and for the impact on
student learning (guiding question 3).
During the presentation of Standard 2 (Reading and Literacy) the Committee contributed
suggestions for convincing evidence (guiding question 2) for each of the examples the group
had provided for guiding question 1 (application).
During the presentation of Standard 3 (Information and Knowledge), Standard 4 (Leadership
and Advocacy), and Standard 7 (Professional Development) the Committee generally agreed
with the unpacked standards as presented.
During the presentation of Standard 5 (Program Management and Administration) and
Standard 6 (Technology Integration) discussion by the Committee focused on how much
opportunity/involvement a practicum participant would have in application of these
standards. For instance, regarding Standard 5, the availability of the policy manual for review
by the candidate was discussed and many Committee members offered that in their own
libraries the policy manual is outdated and rarely referenced. Regarding Standard 6, the
Committee felt that candidates would be restricted to using the technology available at their
practicum sites and the types and modernity of technology available could vary widely. The
Committee debated about how much they could require of a candidate regarding Standards 5
and 6 on the Missouri School Librarian Performance Assessment and agreed to address this
during the completion of subsequent tasks.
Results of the presentation and discussion on Unpacking the Standards can be read on the
related document, “Unpacking the Missouri Standards/Indicators.”
Activity 4 – Sharing the Shell
Annette DeLuca reviewed the three performance tasks identified by ETS and DESE for use
in this School Librarian Assessment. The first task deals with Promoting Access, the second
Collaborative Planning, and the third Understanding, Selecting and Integrating Technology
into Instruction.
Annette reinforced that the shell is not set in stone; once the team sees how the
standards/indicators are clustered, the group can modify the focus/title as needed.
Task 1: There was much discussion of all the possibilities on which this task could focus.
The group was happy that this task could possibly focus partly on the “behind the
scenes” work that all librarians do (e.g., cataloging, budgeting, and deselecting).
Task 2: The development team wanted to change the name from Collaborative Research
to Collaborative Planning. The reason was that in some schools, research is restrictive
and prescribed in regard to when the research happens. By changing the title, the team
felt that this would be more accessible to all schools.
Task 3: The original title of Understanding, Selecting, and Implementing Technology
was changed to Integrating Technology into Instruction. The development team felt that
this better described what a school librarian does in helping other teachers. They wanted
the focus to be on the integration of the technology into instruction; not the librarian
using the technology as their vehicle to teach, but the actual use of it in the learning
process. The team also thought that it fit perfectly with Task 2. They felt that Task 2
could be the planning of the lesson and Task 3 could be the implementation of that
lesson. They wanted this process to fit right into what the candidate would normally be
doing during their clinical experience and timeframe.
Activity 5– The Best Venue for Assessing the Standards
Melissa O’Rourke led this activity which asked committee members to match types of
assessments with each standard and indicator. Members were provided with a chart listing
each standard/indicator with check-off columns for PA (Performance Assessment), MC
(Multiple Choice), CW (Course Work), and Other. The activity helped identify the
standards/indicators to be used with the new performance assessment. The committee was
placed into two groups, shared their ideas, and came to consensus within the large group.
See the related document, “Best Venue to Assess the Standards” from the two groups.
Results of the presentation and the final group decision following discussion on The Best
Venue for Assessing the Standards can be read on the related document, “The Best Venue
for Assessing the Standards.”
Activity 6 – Clustering the Standards
Annette DeLuca led this activity, which asked the committee members to identify which
standards and indicators could be most appropriately “clustered” or matched within the three
performance tasks.
Annette and Ethan Taylor handled the logistics of the BIQ, W-9s, and travel and substitute
reimbursement. They also clarified the implications of the Confidentiality Form.
Meeting Notes for Wednesday, Feb. 5th
Reflection
Melissa O’Rourke opened the second day’s session with a reflective discussion that focused on
the previous day’s work.
Steve Schreiner presented to both (Librarian and Counselor) groups the Missouri Pre-Service
Teacher Assessment web site to provide a general overview of what the assessment will look
like, what the experience of crafting responses and attaching artifacts will be like, what a task
looks like, etc.
Activity 6 – Clustering the Standards (continued)
Annette DeLuca reintroduced this activity, which asked the committee members to continue
to identify which standards and indicators would be most appropriately “clustered” with, or
matched, the three performance tasks of Task 1: Promoting Access, Task 2: Collaborative
Planning, and Task 3: Integrating Technology into Instruction.
At the end of this activity, the groups identified specific Standards and Indicators that should be
addressed by each task.
The results are:
TASK 1: Standards 2.1, 2.2, 2.3, 2.5; 3.1, 3.2, 4.2, 5.1, 5.2, 5.3, 6.1, and 7.1
TASK 2: Standards 1.1, 1.2, 1.3, 2.4, 3.1, 3.2, and 7.1
TASK 3: Standards 1.1, 1.2, 1.3, 2.5, 3.2, 6.1, 6.3, 6.4, and 7.1
Activity 7 – Missouri School Librarian Task Design Outline:
Kim Morris introduced the activity by briefly highlighting work the Committee had
accomplished during previous activities and how this would support the work to be
completed for this activity. During Activity 4, Annette DeLuca previously shared the three-
task shell that the committee will be developing. During Activity 5, led by Melissa O’Rourke
and Activity 6, led by Annette DeLuca, the Committee members determined the best venue
for assessing the standards (performance, multiple choice, or coursework) and subsequently
clustered which standards and indicators should be addressed in each task’s shell.
Committee members then split into 3 groups and each group was assigned one of the 3 tasks
and given a flash drive containing an electronic version of the shell on which to record their
work. Each group then brainstormed on the claims they would like to make about candidates,
the evidence to support those claims, and the tasks that might elicit that evidence.
Task 1 Group Members: Bill Edgar, Sheila Driemeyer, Kelli Krause, and Kerry Townsend
Task 2 Group Members: Pat Antrim, Heather Mitchell, and Sharon Salmons
Task 3 Group Members: Jenni George, Sharon Nations, and Michael Russell
Each group then presented their Task Design Outline to the whole group for discussion and
consensus.
Discussion Points
Task 1: Promoting Access
The Committee established that the most important focus of Task 1 is reading promotion and
agreed that Standard 3, Quality Indicator 2 should be removed from Task 1. The Committee
brainstormed and added all of the suggested evidence for this task and agreed that the Task 1
group members would work on activities that would supply the suggested evidence at a later
time.
Task 2: Collaborative Planning
The Committee recognized the value of the group’s idea of including an experience for the
candidate of collaborating with other educators to create a lesson plan. Annette DeLuca
asked the Committee to think about how the activity might be separated into steps. The
Committee offered suggestions that the group noted for later use during the Task Writing
activity. The Committee discussed how the lesson plan might also be used in Task 3 but
acknowledged that the candidate might not necessarily use one lesson plan for both tasks.
Task 3: Integrating Technology into Instruction
The Committee agreed with the Task Design Outline as presented. Pat Antrim asked about
including assessment in the lesson plan and an analysis of the impact on the student in the task.
Meeting Notes for Thursday, Feb. 6th
Activity 8 – Task Writing
Melissa O’Rourke introduced this activity and template to the group. Committee members
then split into their three previously established groups and each group collaborated to draft
appropriate activities and guiding prompts for their assigned Task (Promoting Access,
Collaborative Planning, and Integrating Technology into Instruction). The groups worked on
this activity for the majority of the day with the intention that they would share their work
with the whole group prior to dismissal.
Activity 9 – Cross Check and Alignment
Annette DeLuca informed the committee that this step will occur (in greater detail) at the
next meeting. At today’s meeting, the groups continued refining their performance tasks
according to the templates posted through LCD projectors onto respective screens.
An initial cross-check review followed after each group attempted a preliminary overview of
their task’s activities, prompts, and standards alignments. The cross-check was conducted as
a whole group activity since there were only 10 committee members. This was intentionally
done in order to get the most input for each of the tasks. The committee then engaged in
vigorous discussions in which each group contributed to the others’ initial attempts at their tasks. Each
group edited as the discussion was conducted. This discussion served to clarify the purpose
and focus of the performance tasks as they related to respective task objectives. This was also a
good forum for the committee to see the content of the other two tasks and to see how the
tasks would all fit cohesively into a candidate’s clinical experience.
Closing
Annette thanked the committee for being so dedicated to this process and for the
contributions made to the tasks designed by the other groups in order to make a strong and
valid assessment.
Next Steps
The next development team meeting will be on April 8, 9, and 10 in Jefferson City, Missouri.
Due to the weather issue for the first development meeting, and the inability of a few potential
team members to attend, we will be adding an additional day (April 7th) to the front of this
development session to catch up the new members so that they may continue with the second
development meeting on the following days. The focus of the meeting will continue to be task
directions, prompts, and rubric development.
Missouri School Librarian Assessment
Content Development Team- Meeting One
New Members
April 7, 2014
Jefferson City, Missouri
In Attendance:
ETS: Ethan Taylor, Project Owner, Annette Deluca, Assessment Specialist, Kim Segal-Morris,
Assessment Specialist, Joanne Aswell, Assessment Specialist, Kimberly Hagen, Assessment
Specialist
New Members of the Content Development Team
Name – Institution – Position
Jennifer Day – Platte County R-III Public Schools – Library Media Specialist
Barbara Morris – Lindenwood University – Adjunct Professor
Paula Erickson – Fort Osage Public Schools – School Librarian
Julie Rodell – West Platte School District – Library Media Specialist
Purpose of the Meeting:
To design a standards-based performance assessment for school librarians in Missouri.
Overall Goals:
UU) Analyze the Missouri Standards for School Librarians as they relate to the
school librarian candidate and student learning
VV) Complete the evidence-centered design (ECD) process
WW) Articulate the Missouri School Librarian Assessment blueprint
XX) Provide an outline of each task
YY) Provide draft prompts for the tasks
Pre-Meeting Homework for the Content Development Team:
1. Read and be familiar with the Missouri Standards for School Librarians
Standards and Indicators; A Brief Introduction to Evidence-Centered Design;
Review of Teaching Performance Assessments for Use in Human Capital
Management; and The New School Library – The Human Connection to Digital
Resources and Academic Success.
2. Think about what they would expect school librarians to submit as evidence that
they are addressing these standards.
Meeting Agenda Missouri School Librarian Assessment Development
Meeting # 1 (new members)
Day 1-Monday
7:30 am Registration and Continental Breakfast
8:30 am Introductions and Overview of the Meeting:
Missouri Department of Education Staff (Welcome, Introductions,
Housekeeping, Purpose, Timelines, etc.)
Introduction of ETS Staff
Introduction of Development Committee
Overview of School Librarian Assessment Development Process
9:00 Who Are the Assessment Takers?
10:00 What Should the School Librarian Clinical Experience Look Like?
10:30 Break
10:45 Unpacking the Missouri Standards/Indicators and Sharing
12:00 pm Lunch
1:00 Evidence Centered Design PPT
1:30 Unpacking the Missouri Standards/Indicators and Sharing (cont.)
3:00 Break
3:15 Share the Clustering and Blueprint
5:00 Adjournment
Meeting Notes for Monday, April 7th
Due to several last-minute cancellations prior to the first development meeting in
February, not all committee members were able to attend. Additional recruitment
provided new committee members, who were able to attend this one-day meeting to catch
them up on the development work that was covered at the first meeting. This meeting
also served to prepare all members to come together as a development committee the
following day (see new committee members listed above).
Activity 1 – Who Are The Assessment Takers?
Annette DeLuca led the first activity for the four new members of the School Librarian
Development Team: “Who are the assessment takers?” In order for these new members to join
the ongoing development of this assessment, they needed to have a common
understanding of “the who” this assessment would be assessing.
The evidence centered design (ECD) activity required brainstorming and responses to the
following questions: “Who is being assessed? What assumptions can we make about these school
librarian candidates when they take this assessment? Think about the knowledge, skills, and
abilities these candidates have at this point in time. What conclusions would you like to draw
after they’ve taken the assessment? What claims can we make about what they know and can do?
What evidence can we ask them to produce to show this?”
Like the original development team, these four new members noted the multiple paths to
acquiring a school librarian certificate. Discussion surrounded the five paths currently available
as well as the length of time a candidate has to complete a clinical experience. (It was clarified
in the first meeting by Hap Hairston that all candidates, regardless of the path that they choose,
will be required to take the performance assessment. He also stated that the length of this
clinical experience would be one semester.) Comments from the new members for this activity
were charted and posted. See the related document, “Who Are the Assessment Takers?”
Activity 2-What Are the Ideal Components of a School Librarian’s Clinical
Experience?
Kim Hagen led the next activity, during which the development team brainstormed and agreed
on what they considered the most important aspects of a school librarian candidate’s training
experience. Questions that were considered included “What should the school librarian/media
specialist clinical experience look like in an ideal world?” and “What characteristics should the
process possess?” The committee members actively participated in the discussion of what they
see as ideal in the school librarian/media specialist clinical experience that will best prepare
future candidates for the Library Media Specialist position once they graduate. There was a
lot of discussion about what committee members experienced, or lacked, when they were in
their librarian preparation programs. Comments on the various experiences of the committee
members prompted conversation about the differences they each encounter in their
schools, their positions, and any clinical experience that they have had. Comments from the new
members for this activity were charted and posted on the overhead. See the related document,
“What are the Ideal Characteristics of a School Librarian’s Clinical Experience” for the results of
this activity.
Activity 3 - Unpacking the Missouri Standards for School Librarians
Kim Morris led the next activity in which the Missouri Standards were “unpacked” to become
familiar with what each would look like in application.
During this activity the four new members of the Committee worked in two groups and each
group was responsible for unpacking and becoming experts for 3 or 4 of the standards.
Group 1: Jennifer Day & Julie Rodell (Standards 1, 2, and 6)
Group 2: Paula Erickson & Barbara Morris (Standards 3, 4, 5, and 7)
The guiding questions for the unpacking were:
1. What would this standard look like in application? What are some examples?
2. What would be convincing evidence that a school librarian candidate has the knowledge and
skills addressed in this standard?
3. If successful, what would the impact on student learning look like?
Each group then presented their standards to the other group for discussion and consensus.
During the presentation of Standard 1 (Teaching for Learning) Joanne Aswell asked Jennifer and
Julie if the candidate is expected to differentiate by ability levels – they said “yes.” Paula
wondered why collaboration was not included in the application section of the breakdown of S1
and was adamant about this point. In response, Jennifer added collaboration to the application
section of the S1 poster. The whole group agreed on the evidence and impact sections of the
poster.
During the presentation of Standard 2 (Reading and Literacy), Paula suggested that the word
diversity be added to the application section. The group discussed what was meant by
“creation of book trailers” in the evidence section. Joanne Aswell asked if Group 1 discussed
multiple formats – Jennifer added multiple formats to the application section of the poster.
During the presentation of Standard 3 (Information and Knowledge), Jennifer suggested adding
a survey to the evidence section – all agreed.
During the presentation of Standard 4 (Leadership and Advocacy), Joanne Aswell suggested that
Web page be added to evidence; Paula also added social media. Joanne added that national
studies have indicated that standardized test scores are raised by having a qualified librarian in a
school. Jennifer asked about the advocacy piece in Standard 4 and did not support the phrase
“will work with.” Paula changed it to say “will communicate with.”
During the presentation of Standard 5 (Program Management and Administration), Jennifer
asked that CSIP and SIP be added to the application section and “measures” to the impact
section. Jennifer was adamant about the candidate being able to justify return on investment in
order to better defend the budget. Joanne suggested that the group move on when the discussion
stalled on point 2. The group discussed what advocacy meant in this context, and Joanne added
her thoughts on the definition of advocacy. Jennifer offered that her view of advocacy leaned
toward advocating for something that you want.
During the presentation of Standard 6 (Technology Integration), Paula suggested that they
needed to add “accessibility” to the application section and added a variety of tools; “student
samples” was added to the evidence section.
During the presentation of Standard 7 (Professional Development), the Committee generally
agreed with the unpacked standards as presented.
Results of the presentation and discussion on Unpacking the Standards can be read on the related
document, “Missouri School Librarian Performance Assessment: Meeting #2 Posters.”
Evidence Centered Design PowerPoint
Steve Schreiner presented a PowerPoint presentation and facilitated discussion based
on evidence-centered design principles. This process was explained to the whole group as
being the foundation on which the tasks will be built. This foundation specifies the claims, the
evidence, and the activities that could be used based on the standards earmarked for these
particular tasks.
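As a concrete picture of the chain this presentation describes, here is a minimal sketch, in Python, of the ECD structure: standards earmarked for a task lead to claims about the candidate, claims to evidence, and evidence to the activities that elicit it. All names and example strings are hypothetical illustrations, not actual task or standards content.

```python
# Hypothetical illustration of the evidence-centered design (ECD) chain the
# committee worked through; none of these strings are actual MoPTA content.
from dataclasses import dataclass, field

@dataclass
class ECDTask:
    name: str
    standards: list[str]                                 # standards/indicators earmarked for the task
    claims: list[str] = field(default_factory=list)      # what we want to say about the candidate
    evidence: list[str] = field(default_factory=list)    # what would support those claims
    activities: list[str] = field(default_factory=list)  # what the task asks the candidate to do

task1 = ECDTask(
    name="Task 1: Promoting Access (illustrative)",
    standards=["2.1", "2.2", "2.3", "2.5"],
    claims=["The candidate can promote reading and access for a specific user group"],
    evidence=["A plan for developing part of the collection, with supporting artifacts"],
    activities=["Design, implement, analyze, and reflect on a promotion activity"],
)
print(task1.name, "->", task1.standards)
```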
Activity 4-Clustering and Sharing the Shell
Annette DeLuca began this activity by describing the process that occurred during the first
meeting in regard to the activities surrounding the School Librarian Standards. She described the
first activity, “Unpacking the Standards,” in which all the standards were described in terms of
what they meant in actual practice. Following this activity, Annette talked about how the team
began the clustering activity by first sharing a shell that contained the three original
performance tasks as identified by ETS and DESE for use in this School Librarian Assessment.
She noted that the first of the original tasks dealt with Promoting Access, the second with
Collaborative Planning, and the third with Understanding, Selecting, and Integrating Technology
into Instruction.
Annette stated that, at the first meeting, it was reinforced that the shell was not set in stone and
that the team had the option to make changes. Once the development team saw how the
standards/indicators could be clustered, the team was able to modify the focus/titles as
needed. (See Meeting #1 Meeting Notes for the full report.)
Annette next described the components of the assessment shell and the shell’s focus as
determined by the development team based on the clustering of the standards. She shared, with
them, the Assessment Blueprint for the three Tasks. (See Meeting #1 Meeting Notes for the full
report.) At the conclusion of this activity, the new members were able to see the evidence-
centered design path that this assessment took to get to this point.
Sharing the Draft Tasks PowerPoint
Following Annette’s explanation of the first group’s work on the three tasks, Joanne presented a
brief PowerPoint presentation on the content of each task as they looked at the end of the session
in February. She explained that Task One was named Promoting Access and contained work
related to developing a part of the collection for a specific user group. Tasks Two (Collaborative
Planning) and Three (Designing a Lesson) were linked in content, with Task Two containing the
planning stages of a collaborative lesson, culminating in Task Three, a video-taped lesson.
Joanne further explained that the tasks were in the preliminary stages and would require much
further development. She explained that the new members of the committee would integrate
among the existing committee groups to further develop the tasks.
Logistics and Confidentiality
Ethan Taylor handled the logistics of the BIQ, W-9s, and travel and substitute reimbursement for
the new members. He also clarified the implications of the Confidentiality Form.
Closing
The ETS team thanked the new members for joining the Missouri School Librarian Assessment
Development Committee, briefly reviewed information about the following days of development
meetings, and answered any questions the new committee members had.
Missouri School Librarian Assessment
Content Development Team- Meeting Two
April 8-10, 2014
Jefferson City, Missouri
In Attendance:
DESE: Christina Hudson (April 8 AM), Hap Hairston (April 9 PM and April 10 AM)
ETS: Ethan Taylor, Project Owner, Annette DeLuca, Assessment Specialist, Kim Segal-Morris,
Assessment Specialist, Joanne Aswell, Assessment Specialist, Kimberly Hagen, Assessment
Specialist
Content Development Team
Name – Institution – Position
Pat Antrim – University of Central Missouri – Instructor
Bill Edgar – Missouri State University – Coordinator, Library Science Education
Sheila Driemeyer – East Central College – Associate Director of Library
Jenni George – Ft. Zumwalt South HS – Library Media Specialist
Heather Mitchell – Nowlin Middle School, Independence Public Schools – Library Media Specialist
Michael Russell – Lee’s Summit North High School – Library Media Specialist
Kerry Townsend – Lead Media Specialist, Columbia Public Schools – Instructional Technology Specialist/LMS
Sharon Nations – Oakgrove Public Schools – School Librarian
Sharon Salmons – Shepard Boulevard Elementary School – School Librarian
Kelli Krause – Knob Noster Middle School – Librarian
Jennifer Day – Platte County R-III Public Schools – Library Media Specialist
Barbara Morris – Lindenwood University – Adjunct Professor
Paula Erickson – Fort Osage Public Schools – School Librarian
Julie Rodell – West Platte School District – Library Media Specialist
Purpose of the Meeting:
To design a standards-based performance assessment for school librarians in Missouri.
Overall Goals:
ZZ) Analyze the Missouri Standards for School Librarians as they relate to the
school librarian candidate and student learning
AAA) Complete the evidence-centered design (ECD) process
BBB) Articulate the Missouri School Librarian Assessment blueprint
CCC) Provide an outline of each task
DDD) Provide draft prompts for the tasks
EEE) Provide effective feedback within their own group and to all other groups
FFF) Provide draft rubrics for the tasks
GGG) Create Ancillary Materials to be used in the pilot (as an option)
Meeting Agenda Missouri School Librarian Assessment Development
Meeting # 2
Day 1- Tuesday, April 8th
7:30 am Registration and Continental Breakfast
8:30- 9:00 Introductions and Overview of the Meeting:
1. Welcome/Introductions/Housekeeping
2. Review of meeting #1, and
3. Overview of the development process for this session
9:00- 9:45 What has Happened Since the Last Meeting?
9:45-10:30 Revisiting the Current Tasks (in task-specific groups)
10:30-10:45 Break
10:45- 12:00 Revisiting the Current Tasks
12:00-1:00 Lunch
1:00- 3:00 Revisiting the Current Tasks
3:00- 3:15 Break
3:15- 5:00 Revisiting the Current Tasks
5:00 Adjournment
Day 2- Wednesday, April 9th
7:30 am Continental Breakfast
8:30- 9:00 Tryout Review PowerPoint
9:00- 10:15 Task Cross Check
10:15- 10:30 Break
10:30- 12:00 Task Cross Check
12:00- 1:00 Lunch
1:00 -3:00 Task Cross Check
3:00 -3:15 Break
3:15 -5:00 Discussion (whole group) and Return to Tasks
Day 3-Thursday, April 10th
7:30 am Continental Breakfast
8:30-12:00 Return to Tasks (break taken based on task owner group’s decision or when
individually needed)
12:00-1:00 Lunch
1:00-2:00 Rubrics PowerPoint
2:00-3:00 Rubric Development
3:00-3:15 Break
3:15-4:30 Rubric Development
4:30-5:00 Next steps, Homework, Adjournment
Note: See group discussions over the course of the three-day meeting at the end of this
document.
Day 1- Tuesday, April 8th
What has Happened since the Last Meeting
Joanne Aswell used a PowerPoint presentation to inform the whole committee of the work that
occurred since the end of the February session. She presented the tasks as they were drafted by
the groups and explained the recommendations ETS was making with respect to their wording
and content. She explained how statements that were general or vague were clarified by the
addition of guiding prompts and how terminology, e.g. learner, user, resource, was made
consistent throughout the tasks, and provided examples. She then showed the group the
suggested four steps in Task One and introduced Tasks Two and Three. Tasks Two and Three,
which were linked, had a significant amount of overlap in content, as well as inherent difficulties
in scoring. She provided examples of the scoring difficulties and possible scenarios arising from
having the tasks linked together, and asked the committee to consider these implications when
revising their tasks. To address the implications, she offered three plans for the committee to
consider: keeping the tasks linked, separating the tasks, and allowing the candidate the choice to
link or separate. Following the presentation, participants asked questions about aspects of the
revision process and specific recommendations for tasks.
Revisiting the Current Tasks in Task-Specific Groups
All task members regrouped into, or (for new members) joined, their specific task groups. Joanne
Aswell, Annette DeLuca, Kim Segal-Morris, and Kim Hagen all facilitated group discussion.
Each facilitator was assigned to a specific task owner group to help guide it as it revisited the
outlines/first drafts that the group had started at Meeting 1. The task owners began to make
revisions to the document and the accompanying artifact requirements.
The remainder of the day was spent with each of the three groups reviewing their previous work
on the task, discussing opinions, suggestions, and connections. Then each group went through
the task textbox by textbox (some groups reviewed numerous times for clarity) to rewrite, add,
revise, and/or change their task.
Day 2- Wednesday, April 9th
Tryout Review PowerPoint
Steve Schreiner presented a PowerPoint that provided a review of the field test process, which
covers two stages: a small-scale field test called a tryout and a large-scale field test called a pilot.
Steve spent most of the time reviewing the details of the tryout, which will officially begin on
April 18. Steve explained that the purpose of a tryout is to determine whether the task directions
guide the participant to provide appropriate evidence. Results of the tryout will be delivered via
email by May 6th. Steve encouraged team members to get involved by responding to the
prompts themselves and by recruiting colleagues to help.
Steve also informed the committee about what happens to the tryout responses once the tryout is
over and about the important role the development committee will have in reviewing the assigned
task and feedback form in detail, as well as the abbreviated rubric each task team created during
this meeting. They will also help determine whether the task, with refinements, can be successfully
completed and identify the revisions needed, which the task owners will discuss, decide on, and
apply at the next meeting.
The purpose of a tryout is to determine if the task directions guide the participant to provide appropriate evidence:
Has the participant understood the directions?
Is the participant being disadvantaged by the task directions and guiding questions?
Does the participant feel that, given sufficient time, he or she could complete the task?
Does the participant's response indicate that all directions and prompts were understood?
Does the evidence that is being submitted support what the team expected the task to elicit?
Is there a range of evidence?
Were all participants, regardless of content/developmental levels, able to address the task requirements and prompts?
Task Cross Check and Whole-Group Feedback
Kim Hagen led the cross-check, in which each group reviewed the other groups' tasks and added suggestions, revisions, changes, and points to keep in mind. The teams broke into their three task owner groups and worked on refining the tasks. Groups then exchanged tasks, documenting additional comments and suggestions through track changes, until each group had seen all three tasks. Each group reviewed the overview of each task's activities, prompts, potential artifacts, and standards alignments.
Whole-Group Feedback on Task Cross Check
Because there were only 14 committee members, the sharing of each group's suggestions on each task became a whole-group activity; this was intentional, to get the most input on each task. The committee engaged in vigorous discussion as each group contributed to the others' initial attempts at their tasks. Each task owner group briefly presented its task to the whole group. This discussion served to clarify the purpose and focus of the performance tasks as they related to the respective task objectives. Each task owner group edited and took notes as its specific task was discussed. This was also a good forum for the committee to see the content of the other two tasks and to see how the tasks would all fit cohesively into a candidate's clinical experience.
Revisiting Tasks after Cross Check Whole-Group Discussion
Each of the groups continued refining its performance task based on the information received from the other two groups through discussion and track changes. Each task was posted through an LCD projector onto its respective screen, and group members were actively engaged: they were able to review the track changes made by the other two groups, as well as the notes taken during the whole-group discussion, and to make their own decisions about their task.
Day 3-Thursday, April 10th
Rubric PowerPoint
Steve presented a PowerPoint on rubric building. The rubric will be both analytic and holistic: raters will look at the entire submission of work when determining the task score. Each task will be scored on a 4-point scale, with benchmarks for each score level, 1 through 4. The benchmarks will be identified during the pilot, along with training cases. An example of a rubric was shared with the team so they could see the differentiation among the various points on a scale.
Rubric Development
Kim explained the next activity, and rubric shells were handed out for Tasks 1, 2, and 3 for the team to see. Steve explained how the teams would begin to develop the rubrics by identifying parallel words for the various points on the scale. The teams broke up into their task groups to begin rubric development. Their assignment was to come up with scoring criteria describing what a score of 3 should look like; they worked on the rubric by comparing it with their task to develop appropriate scoring criteria. The teams did not have time to conduct a Task and Rubric cross-check.
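To make the rubric-development exercise concrete, the following is a minimal, hypothetical sketch in Python of what a four-point rubric shell with parallel descriptor language might look like. Only the 4-point scale and the practice of drafting the score-of-3 criteria first come from the meeting notes; the criterion name, focus, and qualifier wording are invented for illustration and are not the actual MoPTA rubric language.

```python
# Hypothetical sketch of a four-point rubric shell. The criterion and
# qualifier wording are illustrative only, not the actual MoPTA rubrics.
from dataclasses import dataclass


@dataclass
class RubricCriterion:
    """One criterion of an analytic rubric, with a descriptor per score point."""
    name: str
    descriptors: dict[int, str]  # keyed by score point, 1-4


def build_criterion(name: str, focus: str) -> RubricCriterion:
    # The score-of-3 descriptor is drafted first; the other score points
    # are derived by substituting parallel qualifiers, mirroring the
    # "parallel words" exercise described above.
    qualifiers = {
        4: "thorough, insightful",
        3: "clear, appropriate",
        2: "partial, uneven",
        1: "minimal, inaccurate",
    }
    return RubricCriterion(
        name=name,
        descriptors={pt: f"Provides {q} {focus}." for pt, q in qualifiers.items()},
    )


if __name__ == "__main__":
    criterion = build_criterion(
        name="Planning",
        focus="evidence of planning aligned to the identified need",
    )
    for point in sorted(criterion.descriptors, reverse=True):
        print(f"{point}: {criterion.descriptors[point]}")
```

Printing the shell shows how the descriptors stay parallel across the scale, differing only in their qualifiers, which is what makes the score points comparable for raters.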
Logistics
Ethan Taylor dealt with the logistics of the BIQ, W-9s, and travel and substitute reimbursement. He also clarified the implications of the Confidentiality Form.
Important Task Group Discussion and Key Points
Task One Work:
The Task One group met initially to review the recommendations ETS made in the interim between the first and second meetings. They readily agreed that Promoting Access was not an accurate name for the task and agreed upon Selecting and Promoting Resources as the appropriate title. They then reviewed the textboxes box by box, making wording changes and, in some cases, adding, deleting, or relocating a guiding prompt. Possible artifacts were discussed for each textbox and listed for future reference. The group decided to select artifacts at the end of the process.
The major areas of discussion were as follows:
What constitutes a user group, i.e., whether teachers should be included as part of the user group or whether the user group should be strictly students. The conclusion was to include both students and teachers as potential user groups, since school librarians work with both groups and it was entirely possible to respond to the task using either students or teachers.
Whether candidates should be expected to actually purchase items. The group concluded that purchasing procedures and budgets vary among school districts and it was unfair to impose that expectation on candidates. They decided instead to have candidates present a list of what they WOULD purchase (or otherwise acquire) if they had the opportunity.
How candidates should present their list of selected resources and how many resources should be presented. The group decided to create a form for candidates to use to ensure consistency and that 20 items would be included.
How candidates could provide information about the extent to which their selection and promotion plan met the identified need, since no actual items were purchased. Additionally, the group observed that the result of a selection and promotion process may not be known within the timeframe of the internship. The group decided that it was feasible for candidates to design evaluation instruments and then predict the type of data those instruments would return.
At the conclusion of the textbox review, the group members identified the five artifacts to be
included. There was easy consensus on the chosen artifacts because the appropriateness of each
artifact became apparent during the group’s discussion.
After Task One was distributed to the other groups, the group reviewed the comments, which
were, in large part, minor. Based on the other groups’ observations, the group made the
following changes to the task:
Adding standards
Changing the number of selections from 20 to 10 and defining the 10 as “representative
samples” rather than a complete list
Making minor wording changes for clarity in the guiding prompts
The group’s work concluded by revising the introductory paragraph to correctly represent the
entirety of the task, and by identifying terms for inclusion in the glossary.
Task 2 Work:
Task 2 members: Pat Antrim, Paula Erickson, Barbara Morris, Sharon Salmons, and Heather Mitchell
The Task 2 group immediately reviewed the recommendations made by ETS since the first
meeting. The group then reviewed the textboxes box by box, making wording changes and, in
some cases, adding, deleting, or relocating a guiding prompt.
As the group reviewed each textbox and its guiding prompts, they identified and discussed various possible artifacts for each textbox and listed these artifacts to refer to when selecting the artifact that best demonstrates the candidate's understanding. The group decided to select artifacts at the end of the process.
The group had an in-depth and passionate discussion about the option of having Tasks 2 and 3 linked and its implications. The group felt that the candidate should be allowed to choose between linking Task 2 and Task 3 or keeping the tasks completely separate to stand on their own. The group then focused on the main idea of this task, collaboration: how to revise the task to ensure that all areas of collaboration were covered and that the task was created with the possibility in mind that a candidate could link it to Task 3. The group focused on the intent of the task, the standards earmarked for the task, and the cohesiveness and clarity of the prompts.
There were questions from the group about creating guiding prompts versus constructed-response questions for this performance-based assessment. Kim Hagen addressed the difference between the two and how to focus on creating questions based on the Librarian/Media Specialist candidate's work and on selecting the best evidence (artifacts) to demonstrate the candidate's understanding. The assessment is based on the performance of the candidate, not on hypothetical situations like constructed-response questions.
The group was also reminded that the focus now needed to be on the specific activities the Librarian/Media Specialist candidate can do, addressed through the creation of task-specific guiding prompts. The prompts must be directly related to the standards and indicators being assessed in each particular task, and they should be scaffolded to lead the candidate to provide measurable evidence (supporting artifacts) and to connect responses directly to the standards.
A question was posed for the group to think about: "What features, guides, and opportunities of your task will make this pre-service Librarian/Media Specialist most effective in providing the best evidence of their knowledge, skills, and abilities regarding the standards being assessed?"
The group wanted to incorporate the Common Core standards within the guiding prompts. Kim Segal-Morris and Kim Hagen explained the implications this might have for certain districts and schools. Kim Hagen assisted the group by researching specific parts of the Common Core on its website and reporting that information to the group. It was then decided that it would be best to use the word "National" instead of "Common Core" standards, to ensure that all participating districts and schools are on the same page and are not limited to the CCSS, given the current uncertainty about how the Common Core will play out.
Task 3 Work:
Task 3 members: Michael Russell, Jenni George, Julie Rodell, and Sharon Nations
Major points of this group’s work: Standard 2.5 was removed from this task. The team believed
that it would be too restrictive on what a candidate could produce as far as commentary or
evidence. They also decided to eliminate standard 3.2 based on the rationale that the task doesn’t
reflect any of the key points of this standard or indicator.
Following the cross-check, Task 3 members decided to eliminate the two focus students from this task and instead allow the candidate to comment on and explain "any" student's interactions during the video. The team thought it was too limiting for the candidate to first select two focus students and then "hope" to have something to say about those students during the video. By removing the restriction, they felt there was more opportunity to discuss what was happening in the video. The team also agreed that the 15-minute video would be unedited and would reflect the portion of the lesson in which the candidate was instructing in the use of digital technology.
The emphasis on the impact on student learning would now reside mainly in Task 3. After much discussion during the cross-check debriefing about the candidate including evidence of a positive impact on student learning and development, it was decided that further revisions were needed to strengthen this aspect of the task. It was decided that, in order to demonstrate growth, the candidate will submit an artifact demonstrating growth for the entire class.
The team would also like a disclaimer in the Candidate Handbook and in Tasks 2 and 3 stating that the candidate can use the same lesson in both tasks as long as the lesson plan in Task 2 incorporates the digital requirements of Task 3.
The team concluded the meeting by writing the task Overview, checking the standards, indicators, and artifacts that would be part of the task, and identifying vocabulary words that would be helpful to include in the glossary.
Next Steps, Homework, Adjournment
Next steps: participate in the Tryout (see the corresponding homework assignment below for details). The homework assignment was given to all committee members in preparation for the May meeting. Members were to respond to a small-scale tryout of the task that they originally worked on, ask two or more colleagues to respond to the task, and complete a Tryout Feedback Form describing their experience with the task during the tryout. By completing this tryout, committee members will first re-familiarize themselves with the tasks and, second, provide a basis for the pilot: everyone's responses will be reviewed to determine whether the guiding prompts elicit the evidence expected.
The next development team meeting will be on May 13, 14, and 15 in Columbia, Missouri. The focus of the meeting will be on reviewing the Tryout responses, conducting the Task/Rubric cross-check and revision, and creating and working on ancillary materials.
Missouri School Librarian Assessment
Content Development Team- Meeting Three
May 13-15, 2014
Columbia, Missouri
In Attendance:
DESE: Hap Hairston
ETS: Ethan Taylor, Project Owner, Annette DeLuca, Assessment Specialist, Kim Segal-Morris,
Assessment Specialist, Joanne Aswell, Assessment Specialist, Kimberly Hagen, Assessment
Specialist
Content Development Team
Name | Institution | Position
Pat Antrim | University of Central Missouri | Instructor
Bill Edgar | Missouri State University | Coordinator, Library Science Education
Sheila Driemeyer | East Central College | Associate Director of Library
Jenni George | Ft. Zumwalt South HS | Library Media Specialist
Heather Mitchell | Nowlin Middle School, Independence Public Schools | Library Media Specialist
Michael Russell | Lee's Summit North High School | Library Media Specialist
Kerry Townsend (May 14th - 15th) | Columbia Public Schools | Lead Media Specialist, Instructional Technology Specialist/LMS
Sharon Nations | Oakgrove Public Schools | School Librarian
Sharon Salmons | Shepard Boulevard Elementary School | School Librarian
Kelli Krause | Knob Noster Middle School | Librarian
Jennifer Day | Platte County R-III Public Schools | Library Media Specialist
Barbara Morris | Lindenwood University | Adjunct Professor
Paula Erickson | Fort Osage Public Schools | School Librarian
Julie Rodell | West Platte School District | Library Media Specialist
Purpose of the Meeting:
To design a standards-based performance assessment for school librarians in Missouri.
Overall Goals:
HHH) Analyze the Missouri Standards for School Librarians as they relate to the
school librarian candidate and student learning
III) Complete the evidence-centered design (ECD) process
JJJ) Articulate the Missouri School Librarian Assessment blueprint
KKK) Provide an outline of each task
LLL) Provide draft prompts for the tasks
MMM) Provide effective feedback within their own group and to all other groups
NNN) Provide draft rubrics for the tasks
OOO) Create Ancillary Materials to be used in the pilot (as an option)
Meeting Agenda Missouri School Librarian Assessment Development
Meeting # 3
Day 1- Tuesday, May 13th
7:30 am Registration and Continental Breakfast
8:30 - 9:00 Welcome - (Whole Group)
5. Re-introductions of the group/Comments from Ethan Taylor and Hap Hairston
6. Where we were and where we are now in terms of development
7. Purpose of the Tryout Review
8. Procedures of the Tryout Review
9. Review the Format and Purpose of each Tryout Document - (Whole Group)
9:00 - 10:00 Read a tryout response from Task 1 (Model) - Whole Group
10:10-10:30 Break
10:30 - 12:15 Reading of Task 1 Responses (cont.) –Individual Work (These times will need to
be adjusted depending on the number of responses.)
12:15 – 1:00 Lunch
1:10 – 2:15 Whole group discussion about the Task 1 findings
2:20-2:30 Review requirements of Task 2 with whole group
2:30-3:15 Individual Work for Task 2 responses
3:15 – 3:30 Break
3:30 – 4:30 Individual Work for Task 2 (cont.)
4:30 – 5:00 Debrief with Hap Hairston
Day 2- Wednesday, May 14th
8:30 - 8:45 Review of yesterday and overview of today
8:45 – 10:05 Whole group discussion about the Task 2 findings
10:05 – 10:20 Break
10:20 – 10:40 Review requirements of Task 3 with whole group
10:40 – 11:45 Individual Work for Task 3 responses
11:45 – 12:00 Whole group discussion about Task 3 findings
12:00 – 1:00 Lunch
1:00 – 2:00 Whole group discussion of Task 3 findings (cont.)
2:00 – 4:00 Revision of Tasks by Individual Groups
Day 3- Thursday, May 15th
8:30 - 8:35 Hap Hairston – Q&A
8:35 – 9:15 Preparing for the Pilot
9:15 – 9:35 Rubric Development
9:35 – 9:40 Ethan Taylor - Logistics
9:40 – 9:50 Review of yesterday and overview of today
9:50 – 12:00 Revision of Tasks, Alignment of Task to Standards, and Rubric Development by
Task Owners
12:00 – 1:00 Lunch
1:00 – 4:00 Revision of Tasks, Alignment of Task to Standards, and Rubric Development by
Task Owners (cont.) & Adjournment
Day 1- Tuesday, May 13th
Annette DeLuca addressed both groups (counselors and librarians) with a PPT overview of the
purpose and procedures of the Tryout Review meeting. She told the group that they would:
1) Review the tryout responses to see:
• if the Task directions and guiding prompts are reasonable, appropriate, clear, and
equitable
• if the Tasks assess the Missouri School Librarian/School Counselor Standards and
Quality Indicators
• if the Task guiding prompts elicit the amount and type of evidence we thought they would
2) Review the feedback surveys to see:
• if the Tasks are reasonable, appropriate, clear, and measure what we think they
measure
3) Make revisions to the Tasks and the Rubrics based on:
• tryout responses
• feedback surveys
Annette also explained how this process fit into the Evidence-Centered Design model. She reminded the team that in the first and second meetings they had addressed the "who" of the assessment and the claims that they wanted the assessment to make. The team had also established the evidence that they wanted to collect and the tasks and activities that would elicit that evidence. She also told them that this meeting's work would help to refine those tasks and guiding prompts and that they would create the task rubrics after the prompts were set.
Annette then talked through the documents the team would be using as they read the tryout responses, walking through each document and pointing out:
o The purpose of the document
o How the document is formatted
o What types of documentation should be captured on each
The chart that follows reflects those documents.
Document: Evidence Collecting Form (ECF)
Purpose: The ECF is a working document used by the rater to capture the evidence from the response that is targeted to specific standards/indicators in that particular entry. There is a place on the ECF to collect evidence from the candidate's written commentary and from the artifacts.
Structure: One ECF per task. It contains the highlights of the guiding prompts by textbox, a place to document evidence taken from the written commentary, and a place to document (for the purpose of this tryout) the types and kinds of artifacts that could possibly be included.
What is to be captured: The written commentary evidence is what the tryout candidate lists or describes based on the guiding prompts; if there is a guiding prompt, there should be evidence pertaining to that prompt to record. Examples would be a candidate writing about goals, activities related to those goals, and the impact of these on the program/students. For artifacts, the purpose of this tryout is to see what types of things could be attached based on the guiding prompts, such as student survey results, post-test scores, journal entries, phone logs, emails, and progress reports.

Document: Task Editorial Review
Purpose: This document is used to record your evaluation of the performance and clarity of the task regarding the guiding prompts, directions, and artifacts received. (This document asks the development team whether we really got what we were intending to get.)
Structure: The same five questions are asked for each Step in the Task, and there are four general questions that refer to the overall task. Only one of these forms is needed for each of the three tasks; you will not need one form per response.
What is to be captured: Your perceptions regarding clarity, omissions, over-addressing, or weakness in the guiding prompts.

Document: Feedback Form
Purpose: This form provides a vehicle through which the person who completed the Tryout provides feedback on the content and organization of the task as well as his/her experience in completing it. The feedback will help determine whether the task matches the assessment's purpose.
Structure: One Feedback Form per Tryout participant. The form is generic rather than task-specific, with six questions rated on a scale of 1 to 4 and an opportunity for comments and suggestions.
What is to be captured: Suggestions for revisions to the text, the length of the commentary, the number of artifacts to provide, and any general comments about the task and/or directions.
Modeling a Read-Through of a Tryout Response to Task 1: Resource
Selection and Promotion, and Completion of an Evidence Collecting Form
(ECF) and an Editorial Review Form
Joanne Aswell modeled the process of tryout-task analysis by reviewing the guiding prompts for Task 1, emphasizing the salient points of the first textbox for Step One (Planning). The group then applied the criteria explained earlier by Annette to analyze the tryout candidate's response to the guiding prompts in the first textbox. They recorded their observations on the Evidence Collecting and Task Editorial Review forms. Joanne led a discussion of their observations while Kim Hagen kept notes for future use by the Task 1 team. The process was repeated with the second textbox in Step One.
Joanne then reviewed the guiding prompts and salient points for the remaining textboxes in Task 1 before asking the group members to work individually to analyze the candidate responses for those textboxes. She reminded them that, as they analyzed the responses, they should remember that the tryout candidates had not completed the task fully and that some had misunderstood the instructions. Members were given a copy of Task 1 (marked "master") to facilitate their analysis and were encouraged to record their observations on the master itself.
Task 1: Resource Selection and Promotion – Committee Discussion of
Findings
After the group completed its analysis of the remaining textboxes in Task 1, Joanne facilitated a discussion of the observations while a Task 1 team member recorded notes. The Evidence Collecting forms and the master copies of Task 1, with group members' notes, were collected for use by the Task 1 team.
Summary of Task 1 Discussion and Notes
Step One - Planning
Reconsider “typical characteristics” - unclear what is asked for
Is it necessary to break out quantitative and qualitative measures?
Can this task be completed throughout with an adult user group?
Step Two - Implementation
Is it clear that they are to use the attached item description form?
Clarify the statement about ethical and legal considerations
Clarify that only one promotional method is needed as an artifact
Step Three - Assessment
Quantitative and qualitative continue to be unclear
Is predicting what the data might reveal too difficult, given that this step was not executed?
Step Four - Reflection
Can they adequately reflect on what they haven’t done?
Task 2: Collaborative Planning – Presentation of Salient Points
Kim Hagen facilitated the review of the salient points addressed by the guiding prompts in Step 1
(Promotion) and Step 2 (Collaborative Planning) to the Committee. Kim Segal-Morris facilitated
the review of the salient points addressed by the guiding prompts in Step 3 (Individual Planning)
and Step 4 (Reflection) to the Committee. They then distributed the Task 2 master copy, the Evidence Collecting Form (ECF), and the Task Editorial Review form to the Committee members. The Committee was instructed to individually review the tryout responses and record evidence on the ECF and general thoughts/findings on the Task Editorial Review form. Many Committee members chose to record evidence and thoughts directly on the master copy of Task 2 in lieu of the forms provided for that purpose.
The Committee worked on Task 2 until the close of the meeting, when it was decided that the discussion regarding feedback would take place at the start of the day on Wednesday, May 14th.
Day 2- Wednesday, May 14th
Review of Yesterday and Overview of Today
Kim Hagen briefly highlighted the work the Committee completed on Tuesday and gave an
overview of the work for the day.
Kim asked the Committee to review the notes they previously recorded on the Task 2 Tryouts to
prepare for the discussion regarding the findings.
Task 2: Collaborative Planning – Committee Discussion of Findings
Summary of Task 2 Discussion and Notes
Step 1 – Promotion
Candidates are not making the connection that the communication should be addressed to a school leader ("school leader" will be included in the glossary).
There is no distinction between school leader and teacher in the first two textboxes (option of bolding).
Candidates are not providing an evidence-based communication tool; the Committee feels that "evidence based" needs to be clarified.
The school leader communication is unrealistic for a candidate.
Candidates ignored the principal part and the evidence-based part.
Suggested: describe your task and artifact, and reflect on it.
Bill suggested that the artifacts be explicitly mentioned (to be confirmed with Kim).
Jenni mentioned that "why" and "rationale" aren't being addressed by the candidates. Pat asked the Committee which word elicited more responses on the tryouts. (This was added to the Global Comments poster.)
The Committee debated the necessity of 2.1.2, guiding prompt a), since it is more of a brainstorm prompt and word list that is not needed.
Candidates tended to overlook the rationale and "why" components of the prompts.
Discussion of how important the rationale is and which questions are best to have a rationale attached to them (emphasize the importance of the rationale).
Make b) two separate questions.
Bill pointed out that the Committee needs to discuss what they want from the candidates regarding the rationale. He suggested rewording "rationale" to make sure candidates know to explain their thinking clearly (we want candidates to feel driven to answer and explain their reasoning).
Step 2 – Collaborative Planning
Mike suggested limiting the number of examples provided to the candidates in guiding prompt b); the tryouts tended to lump the two guiding prompts together. Either limit the list of examples or do not include that guiding prompt at all.
Bill suggested changing the word "practical" to "logistical" in guiding prompt b).
Bold the words "practical" or "logistical" (based on which word is chosen) and "instructional."
Regarding the second sentence in guiding prompt a), the candidate is choosing the collaborating teacher; it is not a joint decision. Suggestion: change the wording of prompt a) to "If supervisor picked…"
Candidates tended to explain which teacher they picked, but not why both people decided to collaborate.
The difference between the collaborative plan and the lesson plan needs to be clear.
Plan-to-plan form (one was not created because all schools are different, but candidates may need that resource available to them, at least as a reference).
Is guiding prompt c) necessary, especially if it is on the collaborative planning form?
Tryouts glossed over guiding prompt a) in 2.2.2.
Can the 2.2.2 guiding prompts go together with the 2.1.2 prompts (combine them)? The Committee wants more candidates to address students with special needs, gifted and talented students, learning styles, etc.
Summarizing rarely happened; is it necessary?
For guiding prompt b), the word "how" was suggested to replace "what" (i.e., How will you divide up the responsibilities? Provide a rationale for who does what).
Are we asking too many questions or over-guiding the candidates' responses? (There was obvious understanding by candidates in this part.)
Noted that candidates needed more freedom to discuss the students.
Provide a rationale for why the objectives are important.
Step 3 – Individual Planning
What is meant by external resources in 2.3.1? Candidates couldn't distinguish between resources and external resources (the library collection versus online resources was confusing). Are they external or within the library? This needs to be made clearer and defined in the glossary.
Artifact: representative sampling (added as a glossary term). Are 10 resources too many? Could it be 5-10 resources instead?
Tryout candidates tended to skip guiding prompt b) in 2.3.2.
Prompts a) and b) seem similar and difficult to differentiate; b) needs to be clearer regarding the logistics of the school population. Needs clarification.
In 2.3.3, guiding prompt a) was repetitive (is it necessary to have it as a guiding prompt?); in b), the standards and objectives tended to be skipped; and in c), the Committee discussed "prior knowledge."
For b): identify the collaborative instructional CONTENT objectives and information.
Separate b) into two questions: Question 1, the collaborative instructional objectives; Question 2, the Information Literacy Standards.
Importance of reinforcing the content of the classroom and information about the students.
Step 4 – Reflection
Candidates didn't tend to answer the reflection questions.
Joanne asked why 2.4.1 was limited to reflection on the resources chosen for the lesson.
Joanne suggested that in 2.4.2 the word "describe" be changed to "reflect."
The Committee suggested eliminating the second part of guiding prompt c); a rationale doesn't need to be provided here.
The implementation of the lesson is not mentioned until Step 4; this needs to come earlier in the Task. The title of the task was changed to "Collaborative Planning and Instruction Implementation." The front page with the directions of the task needs to include the actual implementation of the lesson (pointed out by Bill).
It was suggested that the Task 2 owners go through the task, looking at each of the steps and the terminology used, to make sure the content is appropriate and the language is standardized.
Task 3: Integrating Technology into Instruction – Presentation of Salient
Points
Annette DeLuca facilitated the review of the salient points addressed by the guiding prompts in
Step 1 (Planning), Step 2 (Implementation), Step 3 (Assessment), and Step 4 (Reflection) to the
Committee. Kim Hagen and Kim Segal-Morris then distributed the Task 3 master copy, the
Evidence Collecting Form (ECF), and the Task Editorial Review form to the Committee members.
Task 3: Integrating Technology into Instruction – Committee Discussion of
Findings
Summary of Task 3 Discussion and Notes
Title:
The Committee agreed, after review of feedback and revision of guiding prompts, that the title Integrating Technology into Instruction should be changed to Technology Instruction, since the task did not involve integration.
It was also decided not to try to relate Tasks 2 and 3 (giving the candidate the option to use the same lesson); linking the two would have been very complicated given the requirements of each Task.
There was also much discussion about whether or not to call this a "mini lesson." It was decided that this is indeed the type of lesson the team wanted the candidate to submit on video: a short, 15-minute lesson teaching a technological skill. That does not mean the lesson from beginning to end had to be 15 minutes; only the portion of the candidate instructing had to be 15 minutes.
Step 1 – Planning
Discussion about the connection between Tasks 2 and 3 occurred at this point. (3.1.1)
The lesson plan was extended to 4 pages; candidates may use the form provided or their own. (3.1.1)
The artifact (accompanying instructional artifact) was deleted from this textbox. (3.1.2)
The rubric/scoring guide artifact was moved to a later part of this task and then later deleted. (3.1.2)
The team concurred that "intellectual freedom" was not the appropriate term; it was changed to "legal and ethical issues." (3.1.2)
No major changes (just editorial ones) were made to 3.1.3.
Step 2 – Implementing the Plan
Evidence of the candidate's response to student feedback and the differentiation of instruction was expanded from just the video to include the "off-camera" instruction that would happen before or after the video. (3.2.1)
Content knowledge was to be an observable and valued component of this instruction, so a brand-new guiding prompt was added to this textbox: "How did you demonstrate your knowledge and understanding of this technological resource (e.g., terminology, process, methods, and application)? Cite evidence from the video." (3.2.1)
Step 3 – Assessment of Student Work
Minor edits were made to prompts (a) and (c) in this textbox. (3.3.1)
A graphical representation of the students' prior and post knowledge of this technological skill was added to this textbox. (3.3.1)
Step 4 – Reflection
In prompt (c), the term "training" was deleted, since candidates would not have access to any training at this point in their careers. (3.4.1)
Step 5 – Video
Discussion confirmed that the video would remain a 15-minute unedited video of the candidate instructing the students on the use of a technological resource.
Glossary
Kim Hagen recorded and photocopied a list of potential terms to include in the glossary and
distributed the list to the Committee members for their review, additions, or deletions.
Revision of Tasks by Individual Groups
Kim Hagen explained that the Committee members would be returning to their Task owner
groups to revise their Tasks based on the discussions of the findings from the Tryout responses.
She asked that the Committee members focus only on changes that would genuinely improve the task and reminded them to review the first page of the Tasks.
Task 2
Group discussion points after the whole-group discussion of the tryout responses and review of the tryout feedback forms received for the task:
Some prompts answer themselves; the directions need to be clearer.
2.2.1 and 2.2.2 refer to questions that need to be asked in the planning steps of the task.
Suggested listing out the standards so candidates don't have to look up the standards they are referencing and those referenced in the actual task directions.
Advocating with the principal is hypothetical and difficult for the candidate to facilitate.
Questioned: who gives the candidate access to the school, resources, communications, and opportunities to speak up (e.g., at faculty meetings)?
Found the last two parts to be redundant; suggestion to combine them or leave the last part off.
In 2.3.1, "work" was replaced with "learning" to match the wording in the prompts and standardize the language.
Discussion of creating a collaborative planning form.
Discussion of, and decisions about, where evidence-based practice and action research would best fit for which rationales.
Clarifying "changes to the library resources."
Changing the titles of the Steps within the Task for clarity.
Day 3-Thursday, May 15th
Hap Hairston began the day by addressing both the School Librarian and School Counselor
Committees.
Preparing for the Pilot
Steve Schreiner facilitated a PowerPoint presentation regarding preparation for the pilot of the School Librarian and Professional School Counselor Performance Assessments.
Rubric Development
Steve Schreiner facilitated a PowerPoint presentation regarding the development of the rubrics for the School Librarian and Professional School Counselor Performance Assessments. He explained that the rubrics developed during the second development meeting needed to be revised based on feedback from CAEP regarding the rubrics developed for the Missouri Pre-Service Teacher and School Leader Performance Assessments.
Logistics
Ethan Taylor dealt with the logistics of the expense vouchers, travel, and substitute
reimbursement.
Review of Yesterday and Overview of Today
Annette DeLuca outlined the day's work for the School Librarian Committee members. She advised the Committee that the members of Task 2 should continue to edit their task while the members of Tasks 1 and 3 began to revise their rubrics. When the groups completed those activities, they were to re-align their tasks to the Missouri School Librarian Standards. The Committee members then split into their respective task groups and worked in those groups until the close of the day.
Standards Crosscheck:
The Task 3 team did a standards crosscheck to make sure that, given the revisions made to the guiding prompts, the proper Missouri School Librarian Standards were represented at the beginning of the task. Standard 5, Quality Indicator 4 was added to the current list.