

Copyright © 2014 Pearson Education, Inc. or its affiliate(s). All rights reserved. Evaluation Systems, Pearson, P.O. Box 226, Amherst, MA 01004

Pearson and its logo are trademarks, in the U.S. and/or other countries, of Pearson Education, Inc. or its affiliate(s)

Test Administration Technical Report

2013–2014 Program Year

October 2014

Prepared for:
Center for the Teaching Profession
Ohio Department of Education
25 South Front Street, MS 505
Columbus, OH 43215-4183


Contents

Preface
Purpose of the Ohio Assessments for Educators (OAE) Program
    Introduction
    Composition of the OAE Program
Technical Properties of the OAE Program
    Scoring
    Item Analyses
    Test Equating
    Scaled Scores
    Reliability
    Validity
Score Reporting
    Candidate Score Reports
    Other Score Recipients
    Test Statistics Reports
    Total Scaled Score Distribution by Test Field
    Title II Reporting
References
Appendices


Preface

This test administration technical report provides information on the technical characteristics of the tests and score reporting for the Ohio Assessments for Educators (OAE) for the 2013–2014 program year.


Purpose of the Ohio Assessments for Educators (OAE) Program

Introduction

The Ohio Assessments for Educators (OAE) program assesses the content-area and professional (pedagogical) knowledge of candidates who are either seeking initial Ohio educator licensure or adding a new licensure area. The OAE program, administered by Pearson, includes 38 content-area assessments and four professional (pedagogical) knowledge assessments. Five OAE assessments include two separate tests each (i.e., Subtest I and Subtest II), for a total of 47 unique tests. The OAE tests are aligned with the Ohio Educational Preparation Standards, the Ohio Standards for the Teaching Profession, and the Ohio Student Standards.

Each test was validated for use in Ohio in accordance with the practices recommended by the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999). The Standards require a clear definition of content domain and a rationale to support a claim that the knowledge and skills being assessed in a licensure test are required for credential-worthy performance. Educators, educator preparation faculty, and administrators from across Ohio were involved in reviewing the test materials for content, job-relatedness, and prevention of bias; validating their appropriateness for use in Ohio; and making recommendations for the passing score for each test. In addition, in accordance with State of Ohio requirements, assessment materials, where available, were to have been previously administered to educator licensure candidates in states other than Ohio.

The OAE tests are computer-based and delivered through a national network of Pearson computer-based testing centers. Most tests are available year round by appointment.

The OAE program offers several web-based resources to help candidates prepare for the tests. These resources include online study guides, practice tests, detailed score reports, and computer-based testing tutorials. In addition, a suite of faculty resources and interactive worksheets is available to assist in candidate preparation. The Ohio Department of Education and educator preparation programs have access to an interactive, electronic database that allows them to create customized reports of candidate test results and institution performance, and to perform customized data queries.

Composition of the OAE Program

Currently, 46 OAE tests¹ are available for test administration. The OAE program includes four professional (pedagogy) knowledge tests that are matched to Ohio licensure grade bands (Early Childhood, Middle Childhood, Adolescence to Young Adult, and Multi-Age). Content-area tests match Ohio license types. Thirty-two OAE tests have been operational since September 3, 2013; 12 OAE tests have been operational since January 21, 2014; two OAE tests have been operational since September 2, 2014.

¹ Validation activities are in progress for the Dance test.


Ohio Assessments for Educators (OAE)

Pedagogical Knowledge Assessments

001 Assessment of Professional Knowledge: Early Childhood (PK–3)

002 Assessment of Professional Knowledge: Middle Childhood (4–9)

003 Assessment of Professional Knowledge: Adolescence to Young Adult (7–12)

004 Assessment of Professional Knowledge: Multi-Age (PK–12)

Content Knowledge Assessments

005 Agriscience

006 Art

007 Biology

008 Business Education

009 Chemistry

010 Computer Information Science

011 Dance

012 Early Childhood Education

013 Early Childhood Special Education

014 Earth and Space Science

015 Educational Leadership

016 Computer/Technology (Subtest I)

017 Computer/Technology (Subtest II)

018 Elementary Education (Subtest I)

019 Elementary Education (Subtest II)

020 English Language Arts

021 English to Speakers of Other Languages

022 Family and Consumer Sciences

023 Health

024 Integrated Science

025 Integrated Social Studies

026 Marketing

027 Mathematics

028 Middle Grades English Language Arts

029 Middle Grades Science

030 Middle Grades Mathematics

031 Middle Grades Social Studies

032 Music

034 Physical Education

035 Physics

036 Prekindergarten (Subtest I)

037 Prekindergarten (Subtest II)

038 Reading (Subtest I)


039 Reading (Subtest II)

040 School Counselor

041 School Library Media Specialist

042 School Psychologist

043 Special Education

044 Special Education Specialist: Deaf/Hard of Hearing

045 Special Education Specialist: Visually Impaired

046 Technology Education (Subtest I)

047 Technology Education (Subtest II)

048 Theater

Technical Properties of the OAE Program

The Standards for Educational and Psychological Testing require that testing agencies provide relevant technical information about the assessments so that test users and reviewers have sufficient information to make judgments about the quality of the test, the resulting scores, and interpretations based on test scores (AERA, APA, & NCME, 1999). This information can ultimately assist test users and reviewers in determining the appropriateness of the test for its intended purpose (AERA, APA, & NCME, 1999).

Scoring

The OAE includes tests consisting of multiple-choice items only as well as tests consisting of both multiple-choice and constructed-response items. The scoring procedures for the program are carefully documented for both multiple-choice and constructed-response items. Additionally, performance monitoring is conducted to check the accuracy of scoring and reporting for the constructed-response items.

Scoring Multiple-Choice Items

Answer keys for multiple-choice items are prepared during the construction of test forms. These keys are reviewed and checked at several points during development and production. Automated technology at computer-based testing centers compares these answer keys to candidate responses immediately following the conclusion of testing. This technology allows on-site unofficial test results to be generated at the testing centers and provided to candidates. These results are provided to candidates only for the OAE assessments that do not include constructed-response items.

Multiple-choice items are dichotomously scored, meaning a single point is awarded for each correct response, and no points are awarded for an incorrect response. For tests composed of multiple-choice items only, the final raw score is the total number of correct responses on the test. The raw scores are transformed and reported on a scale ranging from 100 to 300 with a scaled passing score of 220.

Each test form for all OAE fields includes both scorable and nonscorable multiple-choice items. Scorable items are those that are used to compute candidates’ scores. Nonscorable items are those that are included on a test form to collect additional psychometric information (to support pilot testing) and to support test form equating methodologies but do not contribute to candidates’ scores.


Scoring Constructed-Response Items

Some OAE tests include constructed-response items in addition to multiple-choice items. Candidate responses to constructed-response items are scored using a focused holistic scoring methodology. In this method, scorers judge the overall effectiveness of each response using a set of characteristics that have been defined as important to help inform the overall score. The score is holistic in that each score is based on the overall effectiveness of these characteristics working together, focusing on the response as a whole.

The Special Education Specialist: Visually Impaired constructed-response item is scored differently. This item requires candidates to transcribe a passage into braille, and the score provided is based on the number of errors made in the transcription.

Scoring Components. Candidate responses are scored on a scale. Some tests use a four-point score scale; others use a three-point score scale. A score of 1 represents little or no command of the characteristics, and the highest score (3 or 4) represents a strong command of the characteristics. Each constructed response is independently scored by two scorers, and these scores are summed for a total possible score range of 2 to 6 on three-point score scales or 2 to 8 on four-point score scales.

To participate in the holistic scoring process, scorers must meet specific qualifications such as:

A state teaching certificate/license;

Teaching experience in public schools; and/or

Experience as a college educator responsible for preparing prospective teachers.

Scorer Training. Prior to scoring, each scorer receives training and orientation by a Chief Reader who trains each scorer to evaluate responses to a variety of teacher licensure examinations. The Chief Reader provides each scorer with the background of the OAE tests, the context of the scoring task, the tasks they will perform, the procedures they will follow, the scoring scale, and the characteristics that will guide their scoring judgments. The scorers are also oriented to the prompt-specific rubrics that apply the characteristics and scoring scale to the constructed-response item(s) used on test forms for the current test administration. The Chief Reader leads scoring sessions and identifies and develops orientation materials, conducts calibration orientation and assessments, and monitors the scoring session progress.

Scorers receive practice by scoring training sets of responses to which scores have already been assigned, including marker responses (i.e., exemplar responses at each score point on the score scale). The training emphasizes equity and fair application of the score scale. Once scorers have been trained, they must demonstrate their ability to score accurately by completing a calibration exercise. Each response is read independently and scored by two scorers. If the two scores are not identical or adjacent, additional scoring is conducted to resolve the discrepancy.
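As an illustration of the two-scorer rule just described, the following Python sketch combines two independent holistic scores: identical or adjacent scores are summed, and any larger discrepancy is flagged for resolution scoring. The function and parameter names are illustrative only and are not part of the OAE scoring system.

```python
def combine_cr_scores(score_1: int, score_2: int, scale_max: int = 4):
    """Combine two independent holistic scores for one constructed-response item.

    A minimal sketch of the two-scorer rule described above: if the two scores
    are identical or adjacent, their sum is the item score (range 2 to
    2 * scale_max); otherwise the response is flagged for resolution scoring.
    """
    for s in (score_1, score_2):
        if not 1 <= s <= scale_max:
            raise ValueError(f"score {s} is outside the 1-{scale_max} scale")
    if abs(score_1 - score_2) <= 1:          # exact or adjacent agreement
        return {"resolved": True, "item_score": score_1 + score_2}
    return {"resolved": False, "item_score": None}   # requires additional scoring

# Example: adjacent scores on a four-point scale are summed (3 + 4 -> 7).
print(combine_cr_scores(3, 4))
```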

Performance Monitoring of Scorers

Pearson monitors the performance of scorers throughout the scoring process. Specific areas monitored include a scorer's ability to understand and apply the established scoring scale, the consistency of the scores assigned in comparison with those assigned by the second scorer, and the scorer's consistency over time. At points in the holistic scoring process, scorers are recalibrated to the scale, typically through discussions of specific questions. Scorers must demonstrate continued scoring accuracy on the responses. If scorers fail to demonstrate accuracy, they receive additional, individual reorientation before proceeding with scoring.

Item Analyses

Item analyses are conducted on multiple-choice items to assess the accuracy and psychometric quality of the items. Additionally, data from constructed-response items are reviewed to confirm that items in the item bank for each field are comparable in terms of difficulty and score distribution.

Item Analysis for Multiple-Choice Items

Item analysis for multiple-choice items verifies the accuracy of the answer key for each administered test form and serves as an additional quality assurance check before final results are provided and official candidate score reports are produced. Data are collected on each item, allowing for the empirical consideration of item difficulty, item discrimination, content accuracy, and the plausibility of distractors. These item statistics are calculated and evaluated for the current administration and cumulatively (i.e., using combined statistics for previous operational test administrations).

The item statistics calculated and evaluated for each multiple-choice item include:

Item difficulty (p-value);

Distribution of responses (percentages of participants selecting each response option);

Item-to-total test point biserial correlation (correlation of performance on the item to performance on the total test); and

Mean score by response choice (average score on the total multiple-choice set achieved by all participants selecting each response option).

Items that do not perform within defined statistical parameters are flagged and reviewed. Flagged items are reviewed by content specialists, test development specialists, psychometricians, and/or editors and are deleted, or revised and subjected to additional pilot testing. Specifically, item review includes a review to confirm that the wording on the test form is the same as the wording of the item as validated by the Content Advisory Committees (CACs); a check of content, topicality, and correct answer against documented sources; and/or review by a content expert. Based on the results of the review process, a decision is made regarding whether or not to include those items as scorable, and Pearson implements any necessary scoring actions.
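The following Python sketch illustrates, under simplified assumptions, how the per-item statistics described above might be computed and how items could be flagged for review. The flagging thresholds (p-value bounds and a minimum point-biserial) are hypothetical placeholders, not the OAE program's actual review criteria.

```python
import numpy as np

def item_statistics(responses: np.ndarray, keys: np.ndarray,
                    p_bounds=(0.20, 0.95), min_point_biserial=0.10):
    """Sketch of per-item statistics for a (candidates x items) response matrix.

    `responses` holds each candidate's selected option for each item and `keys`
    is the answer key. The p-value is the proportion answering correctly; the
    point-biserial is the correlation between the 0/1 item score and the total
    multiple-choice score, as described in the report. Thresholds are assumed.
    """
    scored = (responses == keys).astype(float)        # 1 = correct, 0 = incorrect
    totals = scored.sum(axis=1)                        # total multiple-choice score
    stats = []
    for j in range(scored.shape[1]):
        p_value = scored[:, j].mean()
        pbis = np.corrcoef(scored[:, j], totals)[0, 1]
        flagged = (not p_bounds[0] <= p_value <= p_bounds[1]) or pbis < min_point_biserial
        stats.append({"item": j, "p_value": p_value,
                      "point_biserial": pbis, "flagged": flagged})
    return stats
```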

Item Review for Constructed-Response Items

Constructed-response item data are regularly reviewed to monitor the difficulty of the items across administrations. Throughout the scoring process, monitor reports containing mean scores and standard deviations are reviewed. The monitor reports also contain distribution of scores assigned by the first two scorers, distribution of score differences, and distribution of score combinations assigned by the first two scorers.

Test Equating

Each OAE test consists of multiple forms. Multiple forms are utilized within and across test administrations to address issues of item exposure and security. Statistical adjustment (equating) is implemented to adjust for small differences in difficulty across forms.

Purpose of Equating

According to the Standards for Educational and Psychological Testing, equating refers to the process of placing scores from alternate, parallel, or equivalent forms of a test on a common scale (AERA, APA, & NCME, 1999). The central purpose of statistical equating methods is to compensate statistically for possible variability in the characteristics of test forms that may affect candidates’ scores (e.g., differences in the overall difficulty of a new test form compared to a previous test form). Statistical equating methods ensure that a candidate’s scaled score is adjusted for the relative difficulty of the particular test form that was taken. Equating thus allows test developers to attribute differences in scores across test forms to differences in knowledge or skills, and not differences in the tests. Equating helps to ensure that different test forms are comparable and that test scores from multiple test forms can be used interchangeably.

Equating Design

A single-group equating design is utilized for the OAE tests. In a single-group design, the same group of candidates is scored on two alternative forms of the same test. The two forms are defined by designating two alternative subsets of items on the full test form as scorable, so that each alternative contains a set of common scorable items plus a set of scorable items unique to the form. The two alternative forms are then statistically equated.

To equate the two test forms, a linear equating method is implemented for the OAE. In linear equating, two scores are considered equivalent if they are the same number of standard deviation units above or below the mean for some group of candidates (Angoff, 1984). A linear equation relates the scores from the two forms by setting standard deviation scores, or z-scores, equal on the two test forms (Kolen & Brennan, 2004).
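The linear (z-score) equating relationship described above can be written as a short function. The sketch below maps a score on a new form to the reference-form scale by setting z-scores equal on the two forms; the form means and standard deviations in the example are assumed values used only for illustration.

```python
def linear_equate(score_on_new_form, mean_new, sd_new, mean_ref, sd_ref):
    """Linear equating by equal z-scores (Kolen & Brennan, 2004):
        (x - mean_new) / sd_new = (y - mean_ref) / sd_ref
    Solving for y gives the equated score on the reference-form scale.
    """
    z = (score_on_new_form - mean_new) / sd_new
    return mean_ref + z * sd_ref

# Example with assumed form statistics: a raw score of 60 on a slightly harder
# new form (mean 58, SD 8) corresponds to about 62.4 on the reference form
# (mean 60, SD 9.5).
print(round(linear_equate(60, 58, 8, 60, 9.5), 1))
```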

This design is used for the OAE tests because of the need to pre-equate the test forms, enabling candidates to receive unofficial test results at the testing center immediately after the conclusion of their test administration (for multiple-choice only OAE tests). With pre-equating, the passing score for a new test form is established prior to operational administration.

Scaled Scores

The Standards for Educational and Psychological Testing state that scaled scores may aid in interpretation of the test. Specifically, scaled scores allow scores to be easily comparable regardless of test form or administration (AERA, APA, & NCME, 1999). Scaled score reporting is preferred to raw score reporting due to the confusion that may occur as a result of some changes in raw cut scores across test forms. Therefore, the use of scaled scores helps to support the communication of the OAE program results in the following ways:

Candidates, institutions of higher education, and stakeholders will be able to interpret scores from the different OAE tests in a similar manner, regardless of the test taken.

The meaning of the scaled passing scores will be consistent over time, making it possible to compare performance from one test administration to the next.


Scaled Score Range for the OAE

Raw test scores, including the number correct for multiple-choice items, and holistic scores for constructed-response items are transformed to a standard scale ranging from 100 to 300. This scale is used to communicate OAE scores across all tests within the program. For each OAE test, 220 represents the passing score. By using a standard scale across all tests, test scores and feedback are provided in an easy-to-understand format for candidates, institutions of higher education, the ODE, and other stakeholders.

Calculation of Scaled Scores

For tests containing only multiple-choice items, a simple linear transformation is applied to the raw scores to compute the associated scaled score. The scaled score is derived from the candidate’s raw score, the raw cut score, and the maximum possible raw score. Candidates who perform at the raw score that is equivalent to the passing score achieve a scaled score of 220, while those who achieve the maximum possible raw score will receive a scaled score of 300.
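A minimal sketch of such a transformation is shown below. It is anchored at the two points stated above (the raw cut score maps to 220 and the maximum raw score maps to 300); the handling of scores below the cut, shown here as a second linear segment down to 100, is an assumption made for illustration because the report does not specify the below-cut segment.

```python
def scale_score(raw, raw_cut, raw_max, raw_min=0):
    """Sketch of a raw-to-scaled transformation on the 100-300 OAE scale.

    Scores at or above the raw cut are mapped linearly so that the cut score
    becomes 220 and the maximum raw score becomes 300. Scores below the cut
    are mapped with an assumed second linear segment from 100 (at raw_min)
    up to 220 (at the cut).
    """
    if raw >= raw_cut:
        return 220 + (raw - raw_cut) / (raw_max - raw_cut) * (300 - 220)
    return 100 + (raw - raw_min) / (raw_cut - raw_min) * (220 - 100)

# Example: on a 100-item test with a raw cut of 70, a raw score of 85 scales
# to 260, and a raw score exactly at the cut scales to 220.
print(scale_score(85, raw_cut=70, raw_max=100), scale_score(70, 70, 100))
```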

For the OAE tests that contain both multiple-choice and constructed-response items, the multiple-choice and constructed-response sections are each scaled separately. These multiple-choice and constructed-response scaled scores are then combined in accordance with the following weightings (an illustrative calculation follows the table).

OAE Test | Multiple-Choice Weight | Constructed-Response Weight
Assessment of Professional Knowledge: Early Childhood (PK-3) | 80% | 20%
Assessment of Professional Knowledge: Middle Childhood (4-9) | 80% | 20%
Assessment of Professional Knowledge: Adolescence to Young Adult (7-12) | 80% | 20%
Assessment of Professional Knowledge: Multi-Age (PK-12) | 80% | 20%
Computer Information Science | 85% | 15%
Educational Leadership | 60% | 40%
Computer Technology (Subtest I) | 70% | 30%
Computer Technology (Subtest II) | 70% | 30%
Pre-Kindergarten (Subtest I) | 80% | 20%
Pre-Kindergarten (Subtest II) | 80% | 20%
Reading (Subtest I) | 80% | 20%
Reading (Subtest II) | 80% | 20%
School Psychologist | 85% | 15%
Special Education: Visually Impaired | 80% | 20%
Technology Education (Subtest I) | 70% | 30%
Technology Education (Subtest II) | 70% | 30%
Theater | 80% | 20%
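As an illustration of the weightings in the table, the sketch below combines separately scaled section scores with a simple weighted average. The report does not detail the exact combination formula or any rounding rules, so those details are assumptions.

```python
def combined_scaled_score(mc_scaled, cr_scaled, mc_weight, cr_weight):
    """Illustrative weighted combination of separately scaled multiple-choice
    and constructed-response section scores, using the weights in the table
    above. A plain weighted average is assumed; rounding rules are not shown.
    """
    assert abs(mc_weight + cr_weight - 1.0) < 1e-9
    return mc_weight * mc_scaled + cr_weight * cr_scaled

# Example: Educational Leadership (60% MC / 40% CR) with section scaled
# scores of 230 and 210 combines to 222.
print(combined_scaled_score(230, 210, 0.60, 0.40))
```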

Reliability

The Standards for Educational and Psychological Testing refer to reliability as the consistency of test scores for a group of candidates across administrations (AERA, APA, & NCME, 1999). There are many common reasons for individual scores to fluctuate over time, and score fluctuation from one testing occasion to another has an impact on reliability. Some factors that affect reliability include:

Number of candidates. The number of candidates whose test scores contribute to a statistical estimate of reliability affects the stability of the estimate. In general, reliability estimates based on larger numbers of candidates are more stable than estimates based on smaller numbers. For this reason, reliability estimates are calculated for tests that are taken by one hundred or more candidates.

Self-selection of candidates by test administration date. Typically, candidates can decide when to take a particular test. OAE tests are administered throughout the year, and candidates can select when to take and retake the tests. This self-selection can affect the composition, ability level, and variability of the group taking a particular test at a given test administration.

Variability of the group tested. In general, the larger the true variance or true spread of the scores of the candidate group (i.e., the greater the individual differences in the true level of knowledge and skills of the candidates in the particular group taking a test on a particular occasion), the greater will be the reliability coefficient. Reliability estimates tend to be higher if candidates in the group have widely varying levels of knowledge, and lower if they tend to have similar levels of knowledge.

Test length. Statistical estimates of reliability are typically higher for tests with greater numbers of questions. A more reliable estimate of a person's knowledge is obtained by asking more questions.

Test content. Reliability estimates are typically higher for tests that cover narrow, homogeneous content than for tests (such as many used for educator licensure) that cover a broad range of content. Tests for educator licensure must typically test a broad base of knowledge and skills that pertain to licenses that will apply in a wide range of educational settings, grade levels, and teaching assignments.

Because the tests included in the OAE program are used to make high-stakes decisions, several indicators of decision consistency (that is, the degree to which the same decisions are made from two tests) and measures of score reliability (consistency of scores across administrations) are calculated. The statistics presented not only address the reliability of the test scores but also indicate the reliability of the decisions made using the test results.

Several measures are employed to assess the reliability of each test in the OAE program. These measures are described below; a brief computational sketch of several of them follows the list.

Livingston-Lewis estimate of decision consistency. For a test such as the OAE that is used to make licensure decisions, the consistency of those decisions becomes a primary focus (Crocker & Algina, 1986). Decision consistency refers to the degree to which the same decisions are made from two tests. For the OAE program, the Livingston and Lewis (1995) estimate of decision consistency is used. This multi-stage method calculates decision consistency and accuracy using four types of input: the distribution of scores on one form, the minimum and maximum possible scores, the cut points used for classification, and the reliability coefficient (Livingston & Lewis, 1995). Decision consistency is reported in the range of 0 to 1, with estimates close to 1 indicating more consistent or reliable decisions.

Kuder-Richardson formula 20 (KR20). The Kuder-Richardson index of item homogeneity (KR20) is an overall test consistency (reliability) estimate based on a single test administration (Kuder & Richardson, 1937). It is applicable to the multiple-choice section of tests. KR20 is reported in the range 0 to 1, with a higher number indicating a greater level of consistency (reliability). Homogeneity refers to the degree to which the items on the test are consistent with one another. For the OAE, KR20 is computed for tests composed of multiple-choice items only as well as for multiple-choice sections of tests that also include constructed-response items.

Stratified coefficient alpha. Stratified coefficient alpha is an estimate of total test score reliability for a test containing a mixture of item types (e.g., multiple-choice and constructed-response) (Qualls, 1995). Each item type component of the test is treated as a subtest. Internal consistency estimates for the separate subtests are combined to compute stratified coefficient alpha. Stratified coefficient alpha is reported in the range 0 to 1, with a higher number indicating a greater level of consistency (reliability). This statistical estimate was deemed most appropriate for estimating total reliability of tests with both multiple-choice and constructed-response items for the OAE because it takes into account differences in test length and variance of the two item types.

Standard error of measurement. The Standards for Educational and Psychological Testing define the standard error of measurement as an estimate of the variability of measurement errors, that is, of the differences between observed scores and true scores (AERA, APA, & NCME, 1999). This statistic speaks to the reliability of test scores, with smaller standard errors of measurement indicating more reliable test scores.

Generalizability coefficient (G). The Generalizability (G) coefficient is a measure of the percent of total score variance that is attributable to persons (i.e., factors within the candidate, such as subject matter knowledge). It reflects the proportion of variability in individuals' scores that is attributable to true score variability rather than to measurement error (Brennan, 2001). It is reported in the range 0 to 1, with a higher number indicating a greater level of generalizability. The G-coefficient is applicable to test sections composed of constructed-response items. It gauges the degree to which the results from one test form of the constructed-response items are generalizable to other forms, or other test administrations.

Scorer Agreement. Scorer agreement is the degree of agreement between constructed-response scores assigned by independent scorers. Independent scorers are in agreement if the scores they award are either exact or adjacent. The scorers are not in agreement if the scores awarded differ by more than one point. The percent of cases in which the first two independent scorers are in agreement is computed as a measure of scorer agreement (reliability). The following scorer agreement statistics are reported:

• Percent Agreement. Overall agreement determined by summing exact and adjacent agreement.

• Percent Exact. This is the percentage of scores in which the first two scorers were in exact agreement.

• Percentage Adjacent. This is the percentage of scores in which the two scorers assigned adjacent scores.

• Inter-rater Reliability. This is the intraclass correlation between the first and second score assigned to each response, corrected using the Spearman-Brown formula.
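The sketch below illustrates how several of the statistics named above could be computed from scored response data: KR20, the standard error of measurement (using the classical SD × sqrt(1 − reliability) relationship), and the scorer agreement measures. The use of a Pearson correlation as a stand-in for the intraclass correlation is a simplifying assumption, as are the function names; this is not the OAE program's production code.

```python
import numpy as np

def kr20(scored):
    """KR20 for a (candidates x items) 0/1 matrix (Kuder & Richardson, 1937)."""
    k = scored.shape[1]
    p = scored.mean(axis=0)                         # item p-values
    total_var = scored.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def sem(scores, reliability):
    """Classical standard error of measurement: SD * sqrt(1 - reliability)."""
    return np.std(scores, ddof=1) * np.sqrt(1 - reliability)

def scorer_agreement(first, second):
    """Percent exact/adjacent agreement and a Spearman-Brown-corrected
    correlation (2r / (1 + r)) between first and second scores."""
    first, second = np.asarray(first), np.asarray(second)
    diff = np.abs(first - second)
    exact, adjacent = np.mean(diff == 0), np.mean(diff == 1)
    r = np.corrcoef(first, second)[0, 1]
    return {"percent_exact": 100 * exact,
            "percent_adjacent": 100 * adjacent,
            "percent_agreement": 100 * (exact + adjacent),
            "inter_rater_reliability": 2 * r / (1 + r)}
```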

Validity

The Standards for Educational and Psychological Testing state that validity is “the most fundamental consideration in developing and evaluating tests” (AERA, APA, & NCME, 1999, p. 9). Validity relates to the use and interpretation of test scores rather than describing a test itself. Messick (1989) defined validity as “an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment” (p. 13). For the OAE program, collection of both content-based and construct-based validity evidence is a continuous process.

Content-Based Validity Evidence

Because the OAE program is composed of licensure tests, gathering content-based validity evidence is essential to confirm that the assessment frameworks (which represent the content domain) are representative of, and relevant to, the knowledge and skills required of an entry-level educator in the classroom. Content-based validity evidence for the OAE program was gathered throughout the assessment validation process, starting with the frameworks.

OAE Frameworks. In validating the frameworks, content-based validity evidence is initially gathered through an alignment study. Alignment can be used to gather content-based validity evidence by corroborating that the knowledge, skills, and other constructs measured by the tests are consistent with those specified in the frameworks (Koretz & Hamilton, 2006). The OAE assessment frameworks are based on state-approved and nationally recognized professional and academic standards and contain a competency component with information about the knowledge and/or skills necessary for performing the job of a licensed educator in Ohio public and non-public schools. The competencies therefore collectively define the range of content to be measured by the test. Pearson conducted an alignment study of each OAE framework against the Ohio Educational Preparation Standards, Ohio Student Standards, and Ohio Educator Standards. These alignment studies were carried out to confirm that the test content, and therefore the tests themselves, are aligned with standards appropriate to the intended purpose of the test. Additional information on the alignment studies can be found in the Ohio Assessments for Educators Technical Report.

Another source of content-based validity evidence was an initial review of the frameworks by Ohio educators and teacher educators. During this review, the educators and teacher educators checked that the content domain represented the knowledge and skills required by an entry-level educator in that test field by considering the criteria of alignment, completeness, clarity of language and terminology, and freedom from bias.

OAE Content Validation Surveys. Content Validation Surveys provide content-based validity evidence through the input of experts and stakeholders in educator licensure regarding the importance of the necessary knowledge and skills specified in each framework for an entry-level educator. Results of these surveys guided the final definition of the content domain. Practicing educators rated the test components of each framework on a 1–5 scale. Results of the surveys were used to determine the final status of all of the assessment framework components. Results of the content validation surveys for the OAE indicated that across test fields, almost all competencies and descriptive statements achieved a rating of at least 4.0, representing “great importance.” Additional information on the content validation surveys is included in the Ohio Assessments for Educators Technical Report.

OAE Test Items. As described in the Ohio Assessments for Educators Technical Report, the item validation activities by the Bias Review Committee (BRC) and Content Advisory Committees (CACs) provided additional content-based validity evidence. The BRC validated that the items were free from bias in content, language, offensiveness, and stereotypes, and that the items were fair and represented the diversity of the Ohio population. The CACs validated items to indicate that they matched the test competency or content domain to which they were written and were accurate, free from bias, and job-related. Any items that were judged to be problematic were eliminated.

Construct-Based Validity Evidence

The Standards for Educational and Psychological Testing refer to construct validity as the degree to which scores from an assessment can be interpreted as indicating the candidate’s standing on the knowledge and skills assessed by the test (AERA, APA, & NCME, 1999). Some threats to construct validity include construct irrelevance and construct underrepresentation. Construct irrelevance is “the extent to which test scores are influenced by factors that are irrelevant to the construct that the test is intended to measure” (AERA, APA, & NCME, 1999, pp. 173–174). Construct underrepresentation is defined as “the extent to which a test fails to capture important aspects of the construct that the test is intended to measure” (AERA, APA, & NCME, 1999, p. 174).

OAE Content Validation Surveys. For the OAE program, Pearson conducted content validation surveys aimed at gathering evidence to show that each assessment framework adequately reflects the knowledge and skills necessary for an entry-level educator in Ohio. Survey participants were asked to rate the importance of the knowledge and skills described by each competency and its accompanying descriptive statements. The third item on the survey asked participants to rate the set of competencies in terms of how well they represented important aspects of the knowledge and skills required for performing the job of an entry-level educator. Results showed that the set of competencies for each test adequately represents the knowledge and skills the test is intended to measure.

OAE Test Items. The item validation activities by the BRC and CACs provided additional construct-based validity evidence. The BRC validated that the items were free from bias in content, language, offensiveness, and stereotypes and that the items were fair and represented the diversity of the Ohio population. The CACs validated items to indicate that they matched the test competency or content domain to which they were written and were accurate, free from bias, and job-related. Any items that were judged to be problematic were eliminated.


Score Reporting

After administration of the OAE tests, score reports are provided to candidates to inform them of their passing status and performance on the test. Score reports are also provided to the Ohio Department of Education (ODE) and to institutions, as designated by the candidate, to communicate passing status of individual candidates for educator licensure.

Candidate Score Reports

Candidates receive unofficial test results (reported as pass or not pass) on-site following the administration of multiple-choice-only tests. For tests that contain constructed-response items, candidates receive a receipt of completion on-site following the test administration. On the score report date published on the program website, all candidates may request that a score report be e-mailed to the address provided during the registration process. Score reports are posted to the candidate’s online account for 45 days as PDF documents, which candidates may view, print, and save for their records. After 45 days, candidates may request a copy of their scores through their online account.

Score reports include the following information:

The date the candidate took the test

The candidate’s overall scaled score based on the number of items answered correctly converted to a scale ranging from 100 to 300

The candidate’s passing status based on the state-approved passing standard

Details of candidate performance on each content domain assessed by the test for both multiple-choice and constructed-response sections

Candidates who do not pass a test are also provided with a Detailed Performance Summary that includes more detailed information about the candidate’s performance on each test competency and constructed-response item, if applicable.

A sample candidate score report can be found in Appendix A. Score reports are accompanied by an interpretive guide to help candidates understand the reports. Additional information on how to read the score report can be found on the OAE program website (http://www.oh.nesinc.com/Content/Docs/OAE_ScoreReportBacker.pdf).

Other Score Recipients

Candidate scores are sent to the Ohio Department of Education (ODE) and to institutions designated by the candidate. These scores are delivered electronically through Pearson’s secure web-based score-reporting system. Through this system, the ODE and institutions also have access to a web-based tool called ResultsAnalyzer™. This interactive tool allows the ODE and institutions to view, analyze, reorganize, download, and print reports based on test data and to generate customized reports of their choice.

ResultsAnalyzer™ can support the needs of institutions of higher education through the following features:

Create relevant and timely reports on candidate results

Filter reports by candidate demographics, specific tests, or testing year


Organize data into customized tables and graphs

Customize data queries to align with institution goals and areas of interest

Aggregate performance data across testing program years

Export data to Microsoft® Excel or other report software and print graphics

Analyze candidate data for numerous purposes, including Title II reporting, institutional research, program evaluation, and curriculum improvement

ResultsAnalyzer™ also allows the ODE to generate reports based on its needs and areas of focus using the features previously mentioned. In addition, ResultsAnalyzer™ can create reports and filter results by institution, as well as support the analysis of institutional data. These data can assist the ODE in assessing the performance of institutions across the state and help inform future statewide policy decisions.

Test Statistics Reports

This document provides statistical reports for each test and reports on candidate performance for OAE tests administered between September 3, 2013 and July 5, 2014. Test statistics reports consist of test form and field statistics reports that outline statistical characteristics of multiple-choice only tests as well as tests composed of both multiple-choice and constructed-response items administered during the specified timeframe. Total scaled score distributions also are provided. Specific details of each report are outlined below.

Test Form Statistics Reports

The Test Form Statistics Report in Appendix B provides selected statistics for test forms administered to at least 100 candidates between September 3, 2013, and July 5, 2014. This report includes the following information:

Test Field Code and Name

Form Designation

Number of Tests Taken. Scores are included in the analyses for candidates who attempted at least one multiple-choice item and provided scorable responses to the constructed-response items, if applicable. When a candidate took the same test form at more than one test administration during the reporting period, the scores from each attempt are included in the analyses.

Mean. The mean of the total test scaled scores achieved by the candidates taking the test form.

Standard Error of Measurement (SEM). The standard error of measurement of the total test scaled scores is reported.

Decision Consistency. The Livingston-Lewis estimate of decision consistency; the reported value is based on all administrations of a test form during the program year. Test forms are considered identical if they contain identical sets of scorable multiple-choice items, regardless of the order of the items.

Stratified Alpha. This statistic provides an estimate of total test score reliability for a test containing multiple item types (e.g., multiple-choice, constructed-response items) (Qualls, 1995).

Test Length. The number of scorable items in the multiple-choice section.


Kuder-Richardson formula 20 (KR20)

A second Test Form Statistics Report is provided in Appendix C. This report contains the total test scaled score mean and the number of scorable multiple-choice items for test forms administered to fewer than 100 candidates between September 3, 2013, and July 5, 2014. Statistical indices such as the SEM or reliability coefficients cannot be confidently interpreted when computed from small groups of candidates; therefore, such analyses were omitted for test forms administered to fewer than 100 candidates in the reporting period.

Test Field Statistics Report

The Test Statistics Report by Test Field shown in Appendix D provides selected statistics for test fields that include constructed-response items and were administered to at least 100 candidates during the program year. This report includes the following information:

Scorer Agreement. Measures of scorer agreement reported are:

• Percent Agreement. Overall agreement determined by summing exact and adjacent agreement.

• Percent Exact. This is the percentage of scores in which the first two scorers were in exact agreement.

• Percentage Adjacent. This is the percentage of scores in which the two scorers assigned adjacent scores.

• Inter-rater Reliability. This is the intraclass correlation between the first and second score assigned to each response, corrected using the Spearman-Brown formula.

Total Scaled Score Distribution by Test Field

The Total Scaled Score Distribution by Test Field shown in Appendix E provides information about the scaled score distributions associated with all test fields taken by 10 or more candidates. For the OAE, results are reported on a scale ranging from 100 to 300. A scaled score of 220 represents the passing standard for each test.

The scaled score distributions include the following information (a computational sketch follows the list):

Total Scaled Score: the inclusive list of observed total test scaled scores, in intervals of five scale-score points.

N: the number of scores observed within each interval

N at or above: the number of scores observed at or above each scale score interval

Percent: the percent of scores observed within each interval

Percent at or above: the percent of scores observed at or above each scaled score interval
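The following Python sketch builds the distribution table described above: counts and cumulative counts of scaled scores in five-point intervals, with the corresponding percents. The grouping of scores by the lower bound of each interval is an assumption made for illustration; the report does not state the exact interval boundaries.

```python
import numpy as np

def scaled_score_distribution(scores, low=100, high=300, width=5):
    """Counts, cumulative counts, and percents of scaled scores in
    five-point intervals, listed from the top of the scale downward."""
    scores = np.asarray(scores)
    rows = []
    for start in range(high, low - 1, -width):          # 300, 295, ..., 100
        in_interval = int(((scores >= start) & (scores < start + width)).sum())
        at_or_above = int((scores >= start).sum())
        rows.append({"total_scaled_score": start,
                     "n": in_interval,
                     "n_at_or_above": at_or_above,
                     "percent": 100 * in_interval / scores.size,
                     "percent_at_or_above": 100 * at_or_above / scores.size})
    return rows
```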

Title II Reporting

The federal government requires that states submit annual reports on the quality of their teacher preparation programs for accountability purposes. The reporting requirements are spelled out in sections 205 through 208 of Title II of the Higher Education Act (HEA). The act, which was amended in 2008, was put in place to encourage transparency and accountability among states. The data are intended to inform stakeholders about the quality of teacher preparation.


Pearson helps Ohio meet Title II requirements by working with both ODE and institutions of higher education to produce the reports. The Title II reporting system allows the submission, review, and transmission of data that supports collaboration among institutions and ODE. Pearson supports the reporting needs and requirements through the following actions:

Producing Test Pass Rates Tables and Summary Pass Rates Tables for Title II Reporting

Providing web-based training to institutions of higher education on the purpose and process for data collection

Maintaining a secure, password-accessible website for data collection


References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.

Angoff, W. H. (1984). Scales, Norms and Equivalent Scores. Princeton, NJ: Educational Testing Service.

Berk, R. A. (1980). A consumer's guide to criterion-referenced test reliability. Journal of Educational Measurement, 14(4), 323–349.

Brennan, R. L. (2001). Generalizability Theory. New York, NY: Springer-Verlag.

Crocker, L., & Algina, J. (1986). Introduction to Classical and Modern Test Theory. Belmont, CA: Wadsworth Group.

Kolen, M. J., & Brennan, R. L. (2004). Test equating, scaling, and linking: Methods and practices (2nd ed.). New York, NY: Springer Science and Business Media, LLC.

Koretz, D. M., & Hamilton, L. S. (2006). Testing for accountability in K–12. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 531–578). Westport, CT: American Council on Education and Praeger Publishers.

Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151–160.

Livingston, S. A., & Lewis, C. (1995). Estimating the consistency and accuracy of classifications based on test scores. Journal of Educational Measurement, 32(2), 179–197.

Messick, S. (1989). Validity. In R. Linn (Ed.), Educational Measurement (3rd ed., pp. 13–103). Washington, DC: American Council on Education.

Qualls, L. A. (1995). Estimating the reliability of a test containing multiple item formats. Applied Measurement in Education, 8(2), 111–120.


Appendices

Appendix A: Sample OAE Candidate Score Report

Appendix B: Test Form Statistics Report for Test Fields with More than 100 Tests Taken

Appendix C: Test Form Statistics Report for Test Fields with Less than 100 Tests Taken

Appendix D: Test Field Statistics Report

Appendix E: Total Scaled Score Distribution by Test Field


Appendix A: Sample OAE Candidate Score Report


FirstName LastName
300 Venture Way
Sample City, OH XXXXX

Assessment of Professional Knowledge: Early Childhood (PK-3) 001
Minimum Passing Score: 220

Assessment Date: August 20, 2014

Domain Performance Index

Status: Pass
Your Score: 231

Examinee Name: FirstName LastName

Examinee Social Security Number: XXX-XX-XXXX

[Graphic: chart of performance indices by domain]

This barcode contains unique candidate information.


Domains reported:
Student Development and Learning
Assessment, Instruction, and the Learning Environment
The Professional Environment
Constructed Response Assignments




HOW TO READ YOUR SCORE REPORT

Overview. This score report provides your assessment results for the Ohio Assessments for Educators (OAE) that you took on the assessment date indicated on the report. The purpose of the OAE is to assess the knowledge and skills of a prospective Ohio educator. The OAE program helps the Ohio Department of Education and the State Board of Education of Ohio meet their goal of ensuring that candidates have the knowledge and skills needed to perform the job of an educator in Ohio schools.

Assessment Score. Your total assessment score is a scaled score. A scaled score combines the number of scorable questions you answered correctly on the multiple-choice section of the assessment and the scores you received on any constructed-response assignments (if applicable), converted to a scale from 100 to 300, with a score of 220 representing the passing score for the assessment. The number of questions per competency may vary by test form. The total number of questions needed to pass the assessment may also vary across test forms as a result of variations in question difficulty; the conversion of raw scores to scaled scores takes such differences into account. The passing score for each assessment is established by the State Board of Education of Ohio and is based on the professional judgments and recommendations of Ohio educators. "Pass" or "Did Not Pass" status is based on your total score for each assessment.

Performance Indices. Performance indices are provided for each domain of the assessment. These indices can help you understand your areas of strength and weakness. This information should be interpreted with caution, since different domains contain different numbers of assessment questions. For each domain, your performance is reported according to the following performance indices:

Performance Indices: Multiple-Choice Questions

++++ Performance on the competency is well above the level represented by the minimum passing score.

+++ Performance on the competency is just at or above the level represented by the minimum passing score.

++ Performance on the competency is just below the level represented by the minimum passing score.

+ Performance on the competency is well below the level represented by the minimum passing score.
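The four index levels above are defined only qualitatively relative to the minimum passing level; the report does not publish numeric cut regions for the indices. A minimal sketch of such a mapping, using a hypothetical 10% margin to separate "just" from "well" above or below a domain-level passing standard (the function name and threshold are illustrative assumptions, not the program's rule):

def performance_index(domain_score, domain_passing_score, well_margin=0.10):
    # Hypothetical mapping of a domain score to a '+' performance index.
    # The 10% margin separating 'just' from 'well' is an assumed value.
    margin = well_margin * domain_passing_score
    if domain_score >= domain_passing_score + margin:
        return "++++"  # well above the minimum passing level
    if domain_score >= domain_passing_score:
        return "+++"   # just at or above the minimum passing level
    if domain_score >= domain_passing_score - margin:
        return "++"    # just below the minimum passing level
    return "+"         # well below the minimum passing level

print(performance_index(78, 70))  # -> ++++
print(performance_index(66, 70))  # -> ++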

Performance Indices: Constructed-Response Assignments

Constructed-Response Assignments (Assessments with a 4-Point Scoring Scale*)

++++ Response reflects a thorough understanding
+++ Response reflects a general understanding
++ Response reflects a limited understanding
+ Response reflects little or no understanding

Constructed-Response Assignments (Assessments with a 3-Point Scoring Scale*)

+++ Response reflects a thorough understanding
++ Response reflects a general understanding
+ Response reflects limited or no understanding

*See study guides at www.oh.nesinc.com


Each assessment framework describes the content knowledge assessed by the OAE. You may view, print, or download the framework for any OAE assessment by selecting "Assessment Frameworks" in the "Prepare" section of the OAE website at www.oh.nesinc.com. If a response to a constructed-response assignment is designated "Blank" or "Unscorable," you will see one of the following codes:

Codes for Blank/Unscorable Responses

B    Response was blank
U-1  Response was unrelated to assigned topic
U-2  Response was unintelligible
U-3  Response was not primarily in English
U-4  Response lacked sufficient amount of original work

Summary Report for Assessments with Two Subtests. Some OAE assessments are composed of two subtests. If you took a two-subtest assessment, your score report will include an assessment summary. The assessment summary lists passing status information for each subtest within the assessment and includes the date on which you passed each subtest. Please note that for assessments composed of more than one subtest, you must pass both subtests for that assessment to meet Ohio certification requirements.

Reporting of Scores. Your scores are reported directly to the Ohio Department of Education and the Ohio educator preparation institution(s) you indicated during the registration process. This score report is for your information only. Keep a copy for your permanent records.

ADDITIONAL INFORMATION

Retaking the Assessment. You may retake an OAE assessment by following the same registration procedures you completed for previous assessment administrations. For information about retake policies, visit www.oh.nesinc.com.

Assessment Preparation Materials. Assessment preparation materials are available through the website you used to register for this assessment.
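The raw-to-scaled conversion described under "Assessment Score" above can be illustrated with a simple linear mapping. The OAE conversion is form-specific and is not published in this report, so the anchor points below (passing raw score and maximum raw score) are hypothetical; the sketch only shows the general idea of anchoring the scale at 100, 220, and 300:

def raw_to_scaled(raw_score, passing_raw, max_raw,
                  scale_min=100, scale_max=300, passing_scaled=220):
    # Illustrative linear conversion: the form's passing raw score maps to 220
    # and the maximum raw score maps to 300; scores below the passing anchor
    # are interpolated down toward the scale minimum of 100.
    if raw_score >= passing_raw:
        span = max_raw - passing_raw
        frac = (raw_score - passing_raw) / span if span else 1.0
        scaled = passing_scaled + frac * (scale_max - passing_scaled)
    else:
        frac = raw_score / passing_raw if passing_raw else 0.0
        scaled = scale_min + frac * (passing_scaled - scale_min)
    return round(min(max(scaled, scale_min), scale_max))

# On a hypothetical 80-item form with a passing raw score of 53, a candidate
# answering 60 items correctly would receive approximately:
print(raw_to_scaled(60, passing_raw=53, max_raw=80))  # -> 241

Because the passing anchor differs across forms, the same raw score can map to different scaled scores on different forms, which is how the conversion absorbs variations in question difficulty.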


Appendix B: Test Form Statistics Report for Test Fields with More than 100 Tests Taken
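The tables in this appendix report, for each form, the mean scaled score, the standard error of measurement (SEM), a decision-consistency statistic, a stratified alpha estimate, the multiple-choice section length, a KR20 reliability estimate, and a generalizability coefficient (see the table notes). A minimal sketch of how two of these indices, KR20 and the SEM, are conventionally computed; the formulas are the standard classical ones, and the item matrix and score standard deviation in the example are made up, not program data:

def kr20(item_matrix):
    # Kuder-Richardson formula 20 for dichotomously scored (0/1) items.
    # item_matrix: one inner list of 0/1 item scores per candidate.
    n_items = len(item_matrix[0])
    n_cands = len(item_matrix)
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n_cands
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_cands
    pq_sum = 0.0
    for i in range(n_items):
        p = sum(row[i] for row in item_matrix) / n_cands  # proportion correct
        pq_sum += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq_sum / var_total)

def sem(score_sd, reliability):
    # Standard error of measurement on the same metric as the supplied SD.
    return score_sd * (1 - reliability) ** 0.5

# Made-up responses from four candidates to five items:
responses = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
]
rel = kr20(responses)
print(round(rel, 2))                                  # -> 0.55
print(round(sem(score_sd=13.0, reliability=rel), 1))  # -> 8.8

Applied to a form's scaled-score standard deviation and total-test reliability, sem() would give a value on the scaled-score metric, which is the metric in which the SEM column of these tables is expressed.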


Test Form Statistics Report for Test Fields with More than 100 Tests Taken per Test Form

NOTES: *, **, *** - test field, test form administered, and the number of times the test form was taken, respectively
a - the mean total test scaled scores achieved by the candidates taking the test form
b - the standard error of measurement of the total test scaled scores
c - a reliability statistic that describes the consistency of the pass/fail decision on the total test scaled score
d - an estimate of total test reliability for a test containing multiple item types (e.g., multiple-choice, constructed-response items)
e - the number of scorable items in the multiple-choice section
f - an overall test consistency (reliability) estimate based on a single test administration (Kuder-Richardson formula 20)
g - a measure of the proportion of total score variance that is attributable to true score variability rather than to measurement error (Generalizability Coefficient)

September 3, 2013 - August 31, 2014

Columns: Test Field* | Form** | Number of Tests Taken*** | Total Test Scaled Score Indices: Mean(a), SEM(b), Decision Consistency(c), Stratified Alpha(d) | Multiple-Choice Section: Length(e), KR20(f) | G Coefficient(g)

001 Assessment of Professional Knowledge: Early Childhood (PK-3)
A 1,748 238 13.2 0.89 0.70 74 0.70 0.23
B 279 236 13.4 0.87 0.74 80 0.76 0.17
C 255 233 13.2 0.85 0.75 80 0.77 0.32

002 Assessment of Professional Knowledge: Middle Childhood (4-9)
A 960 236 15.0 0.85 0.74 80 0.76 0.31

003 Assessment of Professional Knowledge: Adolescence to Young Adult (7-12)
A 1,370 253 12.4 0.96 0.78 80 0.76 0.41

004 Assessment of Professional Knowledge: Multi-Age (PK-12)
A 1,841 241 13.8 0.89 0.82 80 0.83 0.34

006 Art A 193 235 10.2 0.86 -- 120 0.86 --

007 Biology A 208 220 10.7 0.82 -- 120 0.89 --


012 Early Childhood Education A 1,568 248 7.8 0.94 -- 118 0.81 --

B 253 251 7.5 0.95 -- 120 0.86 --

C 244 249 7.5 0.95 -- 120 0.84 --

013 Early Childhood Special Education A 269 225 13.0 0.76 -- 80 0.76 --

015 Educational Leadership A 734 231 10.3 0.87 0.74 60 0.71 0.57

018 Elementary Education Subtest I A 658 239 12.3 0.81 -- 60 0.74 --

019 Elementary Education Subtest II A 706 229 12.9 0.79 -- 60 0.83 --

020 English Language Arts A 637 225 11.0 0.83 -- 120 0.85 --

024 Integrated Science A 132 247 6.6 0.94 -- 120 0.90 --


025 Integrated Social Studies A 453 218 8.6 0.86 -- 120 0.87 --

B 170 226 7.6 0.83 -- 120 0.87 --

027 Mathematics A 401 217 8.9 0.87 -- 120 0.93 --

B 151 224 8.0 0.91 -- 120 0.94 --

028 Middle Grades English Language Arts A 611 227 11.0 0.79 -- 120 0.84 --

029 Middle Grades Science A 547 213 9.2 0.85 -- 120 0.88 --

B 219 225 8.2 0.84 -- 120 0.87 --

030 Middle Grades Mathematics A 520 219 8.5 0.86 -- 120 0.90 --

B 202 224 7.7 0.87 -- 120 0.90 --


031 Middle Grades Social Studies A 562 196 10.5 0.90 -- 120 0.87 --

B 248 220 8.2 0.79 -- 120 0.85 --

032 Music A 340 235 11.1 0.85 -- 120 0.82 --

034 Physical Education A 211 230 9.5 0.79 -- 120 0.83 --

038 Reading Subtest I A 1,755 243 14.7 0.89 0.78 54 0.78 0.38

039 Reading Subtest II A 1,722 246 14.9 0.89 0.76 54 0.76 0.32

040 School Counselor A 183 232 11.2 0.82 -- 116 0.73 --

043 Special Education A 1,018 235 9.9 0.87 -- 119 0.86 --

B 223 235 9.7 0.85 -- 120 0.84 --

C 259 232 10.0 0.84 -- 120 0.86 --


Appendix C: Test Form Statistics Report for Test Fields with Fewer than 100 Tests Taken


Test Form Statistics Report for Test Fields with Fewer Than 100 Tests Taken per Test Form

NOTES: *, **, *** - test field, test form administered, and the number of times the test form was taken, respectively
a - the mean total test scaled scores achieved by the candidates taking the test form
b - the number of scorable items in the multiple-choice section

September 3, 2013 - August 31, 2014

Columns: Test Field* | Form** | Number of Tests Taken*** | Total Test Scaled Score Mean(a) | Multiple-Choice Section Length(b)

008 Business Education A 55 234 120

009 Chemistry A 64 229 120

010 Computer Information Science A 8 -- 64

014 Earth and Space Science A 21 232 120

016 Computer/Technology Subtest I A 57 266 56

017 Computer/Technology Subtest II A 57 258 32

021 English to Speakers of Other Languages (ESOL) A 82 248 120

022 Family and Consumer Sciences A 25 229 120

026 Marketing A 9 -- 80

035 Physics A 31 231 120

036 Prekindergarten Subtest I A 21 246 54

037 Prekindergarten Subtest II A 21 241 54

041 School Library Media Specialist A 16 232 120

042 School Psychologist A 2 -- 64

044 Special Education Specialist: Deaf/Hard of Hearing A 23 236 80

045 Special Education Specialist: Visually Impaired A 13 237 64


046 Technology Education Subtest I A 9 -- 36

047 Technology Education Subtest II A 7 -- 60

048 Theater A 7 -- 85


Appendix D: Test Field Statistics Report
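The table in this appendix reports, for each field with open-response items, the percentage of responses on which the first two scorers agreed exactly, the percentage on which they assigned adjacent scores, their sum (overall agreement), and an inter-rater reliability obtained by correcting the correlation between first and second scores with the Spearman-Brown formula (see the table notes). A minimal sketch of how such statistics can be computed from paired first/second scores; a Pearson correlation is used here as a simple stand-in for the intraclass correlation named in the notes, and the example scores are made up:

def agreement_and_reliability(first_scores, second_scores):
    # Percent exact agreement, percent adjacent agreement (one point apart),
    # overall agreement (their sum), and a Spearman-Brown-corrected correlation.
    n = len(first_scores)
    pairs = list(zip(first_scores, second_scores))
    exact = 100.0 * sum(a == b for a, b in pairs) / n
    adjacent = 100.0 * sum(abs(a - b) == 1 for a, b in pairs) / n
    mean1 = sum(first_scores) / n
    mean2 = sum(second_scores) / n
    cov = sum((a - mean1) * (b - mean2) for a, b in pairs)
    var1 = sum((a - mean1) ** 2 for a in first_scores)
    var2 = sum((b - mean2) ** 2 for b in second_scores)
    r = cov / (var1 * var2) ** 0.5   # correlation between the two scorings
    corrected = 2 * r / (1 + r)      # Spearman-Brown correction
    return exact, adjacent, exact + adjacent, corrected

# Five made-up constructed responses scored on a 1-4 scale by two scorers:
print(agreement_and_reliability([3, 2, 4, 3, 1], [3, 3, 4, 2, 1]))
# -> (60.0, 40.0, 100.0, ~0.89)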


Test Field Statistics Report for Test Fields with Open-Response Items

NOTES: a - the number of times the test was taken
b - overall agreement determined by summing exact and adjacent agreement
c - the percentage of scores in which the first two scorers were in exact agreement
d - the percentage of scores in which the two scorers assigned adjacent scores
e - the intraclass correlation between the first and second score assigned to each response, corrected using the Spearman-Brown formula

Sep 3, 2013 - Aug 31, 2014
Open-Response Prompts: Items 1 and 2 of 4

Columns: Test Field | Number of Tests Taken(a) | Item 1: Percent Agreement(b), Percent Exact(c), Percent Adjacent(d), Inter-rater Reliability(e) | Item 2: Percent Agreement(b), Percent Exact(c), Percent Adjacent(d), Inter-rater Reliability(e)

001 Assessment of Professional Knowledge: Early Childhood (PK-3)  2,282  97.1  54.7  42.4  0.63  97.7  60.5  37.2  0.81
002 Assessment of Professional Knowledge: Middle Childhood (4-9)  960  97.8  63.9  34.0  0.75  97.4  61.8  35.6  0.81
003 Assessment of Professional Knowledge: Adolescence to Young Adult (7-12)  1,370  97.4  58.8  38.6  0.65  97.9  61.1  36.8  0.71
004 Assessment of Professional Knowledge: Multi-Age (PK-12)  1,841  96.3  55.0  41.3  0.65  95.6  55.3  40.3  0.80
015 Educational Leadership  734  95.1  53.0  42.1  0.65  93.1  55.4  37.6  0.64
038 Reading Subtest I  1,755  96.9  58.9  38.0  0.74  96.5  58.2  38.2  0.67
039 Reading Subtest II  1,722  97.5  59.8  37.7  0.66  96.7  58.8  37.9  0.68


Sep 3, 2013 - Aug 31, 2014
Open-Response Prompts: Items 3 and 4 of 4

Columns: Test Field | Item 3: Percent Agreement(b), Percent Exact(c), Percent Adjacent(d), Inter-rater Reliability(e) | Item 4: Percent Agreement(b), Percent Exact(c), Percent Adjacent(d), Inter-rater Reliability(e)

001 Assessment of Professional Knowledge: Early Childhood (PK-3)  --  --  --  --  --  --  --  --
002 Assessment of Professional Knowledge: Middle Childhood (4-9)  --  --  --  --  --  --  --  --
003 Assessment of Professional Knowledge: Adolescence to Young Adult (7-12)  --  --  --  --  --  --  --  --
004 Assessment of Professional Knowledge: Multi-Age (PK-12)  --  --  --  --  --  --  --  --
015 Educational Leadership  97.3  56.1  41.1  0.65  98.4  61.1  37.2  0.66
038 Reading Subtest I  --  --  --  --  --  --  --  --
039 Reading Subtest II  --  --  --  --  --  --  --  --


Appendix E: Total Scaled Score Distribution by Test Field
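Each table that follows lists, for 5-point total scaled score bands running from 100-104 through 295-300, the number of candidates scoring in the band (N), the number at or above the band, and the corresponding percentages; because the "at or above" column accumulates to 100%, the data rows appear to run from the highest band down to the lowest. A minimal sketch of how the cumulative columns follow from the per-band counts (the counts below are made up):

def cumulative_at_or_above(band_counts):
    # band_counts: candidates per score band, ordered from the highest band
    # to the lowest. Returns rows of (N, N at or above, %, % at or above).
    total = sum(band_counts)
    rows, running = [], 0
    for n in band_counts:
        running += n
        rows.append((n, running, 100.0 * n / total, 100.0 * running / total))
    return rows

# Hypothetical counts for four bands, highest band first:
for row in cumulative_at_or_above([0, 3, 5, 2]):
    print("%d  %d  %.2f  %.2f" % row)
# 0  0  0.00  0.00
# 3  3  30.00  30.00
# 5  8  50.00  80.00
# 2  10  20.00  100.00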

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=001 Assessment of Professional Knowledge: Early Childhood (PK-3)

N | N at or above | % | % at or above

0 0 0.00 0.001 1 0.04 0.044 5 0.18 0.228 13 0.35 0.57

24 37 1.05 1.6253 90 2.32 3.9464 154 2.80 6.75

134 288 5.87 12.62168 456 7.36 19.98194 650 8.50 28.48235 885 10.30 38.78248 1133 10.87 49.65200 1333 8.76 58.41219 1552 9.60 68.01168 1720 7.36 75.37159 1879 6.97 82.34113 1992 4.95 87.29

87 2079 3.81 91.1075 2154 3.29 94.3939 2193 1.71 96.1033 2226 1.45 97.5518 2244 0.79 98.3313 2257 0.57 98.90

8 2265 0.35 99.269 2274 0.39 99.651 2275 0.04 99.693 2278 0.13 99.822 2280 0.09 99.911 2281 0.04 99.961 2282 0.04 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.000 2282 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=002 Assessment of Professional Knowledge: Middle Childhood (4-9)

N | N at or above | % | % at or above

0 0 0.00 0.001 1 0.10 0.105 6 0.52 0.63

11 17 1.15 1.7727 44 2.81 4.5832 76 3.33 7.9235 111 3.65 11.5646 157 4.79 16.3556 213 5.83 22.1971 284 7.40 29.5892 376 9.58 39.1770 446 7.29 46.4685 531 8.85 55.3174 605 7.71 63.0261 666 6.35 69.3856 722 5.83 75.2159 781 6.15 81.3551 832 5.31 86.6736 868 3.75 90.4230 898 3.13 93.5412 910 1.25 94.7917 927 1.77 96.5611 938 1.15 97.71

4 942 0.42 98.137 949 0.73 98.851 950 0.10 98.964 954 0.42 99.384 958 0.42 99.790 958 0.00 99.791 959 0.10 99.900 959 0.00 99.900 959 0.00 99.900 959 0.00 99.900 959 0.00 99.900 959 0.00 99.900 959 0.00 99.901 960 0.10 100.000 960 0.00 100.000 960 0.00 100.000 960 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=003 Assessment of Professional Knowledge: Adolescence to Young Adult (7-12)

N | N at or above | % | % at or above

1 1 0.07 0.074 5 0.29 0.36

21 26 1.53 1.9053 79 3.87 5.7792 171 6.72 12.48

104 275 7.59 20.07129 404 9.42 29.49160 564 11.68 41.17145 709 10.58 51.75172 881 12.55 64.31109 990 7.96 72.26

88 1078 6.42 78.6966 1144 4.82 83.5062 1206 4.53 88.0339 1245 2.85 90.8840 1285 2.92 93.8025 1310 1.82 95.6215 1325 1.09 96.7214 1339 1.02 97.7410 1349 0.73 98.47

6 1355 0.44 98.913 1358 0.22 99.122 1360 0.15 99.275 1365 0.36 99.643 1368 0.22 99.851 1369 0.07 99.931 1370 0.07 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.000 1370 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=004 Assessment of Professional Knowledge: Multi-Age (PK-12)

N | N at or above | % | % at or above

1 1 0.05 0.0512 13 0.65 0.7126 39 1.41 2.1251 90 2.77 4.8972 162 3.91 8.80

100 262 5.43 14.23113 375 6.14 20.37121 496 6.57 26.94132 628 7.17 34.11157 785 8.53 42.64162 947 8.80 51.44113 1060 6.14 57.58114 1174 6.19 63.77105 1279 5.70 69.47105 1384 5.70 75.18103 1487 5.59 80.77

61 1548 3.31 84.0871 1619 3.86 87.9461 1680 3.31 91.2546 1726 2.50 93.7517 1743 0.92 94.6823 1766 1.25 95.9320 1786 1.09 97.0119 1805 1.03 98.0410 1815 0.54 98.59

7 1822 0.38 98.975 1827 0.27 99.241 1828 0.05 99.291 1829 0.05 99.355 1834 0.27 99.623 1837 0.16 99.781 1838 0.05 99.842 1840 0.11 99.951 1841 0.05 100.000 1841 0.00 100.000 1841 0.00 100.000 1841 0.00 100.000 1841 0.00 100.000 1841 0.00 100.000 1841 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=006 Art

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.003 3 1.55 1.552 5 1.04 2.593 8 1.55 4.156 14 3.11 7.25

13 27 6.74 13.999 36 4.66 18.65

14 50 7.25 25.9116 66 8.29 34.2012 78 6.22 40.4114 92 7.25 47.6717 109 8.81 56.4814 123 7.25 63.7312 135 6.22 69.9511 146 5.70 75.6513 159 6.74 82.38

6 165 3.11 85.494 169 2.07 87.562 171 1.04 88.606 177 3.11 91.711 178 0.52 92.234 182 2.07 94.304 186 2.07 96.372 188 1.04 97.411 189 0.52 97.931 190 0.52 98.450 190 0.00 98.450 190 0.00 98.450 190 0.00 98.452 192 1.04 99.480 192 0.00 99.480 192 0.00 99.481 193 0.52 100.000 193 0.00 100.000 193 0.00 100.000 193 0.00 100.000 193 0.00 100.000 193 0.00 100.000 193 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=007 Biology

N | N at or above | % | % at or above

0 0 0.00 0.001 1 0.48 0.481 2 0.48 0.963 5 1.44 2.402 7 0.96 3.371 8 0.48 3.857 15 3.37 7.215 20 2.40 9.628 28 3.85 13.469 37 4.33 17.79

12 49 5.77 23.5612 61 5.77 29.3314 75 6.73 36.06

7 82 3.37 39.4226 108 12.50 51.92

9 117 4.33 56.2511 128 5.29 61.5411 139 5.29 66.8310 149 4.81 71.63

9 158 4.33 75.967 165 3.37 79.333 168 1.44 80.777 175 3.37 84.138 183 3.85 87.987 190 3.37 91.354 194 1.92 93.272 196 0.96 94.233 199 1.44 95.672 201 0.96 96.632 203 0.96 97.601 204 0.48 98.080 204 0.00 98.082 206 0.96 99.042 208 0.96 100.000 208 0.00 100.000 208 0.00 100.000 208 0.00 100.000 208 0.00 100.000 208 0.00 100.000 208 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=008 Business Education

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.001 1 1.82 1.820 1 0.00 1.823 4 5.45 7.272 6 3.64 10.914 10 7.27 18.187 17 12.73 30.910 17 0.00 30.913 20 5.45 36.361 21 1.82 38.184 25 7.27 45.454 29 7.27 52.731 30 1.82 54.555 35 9.09 63.643 38 5.45 69.094 42 7.27 76.360 42 0.00 76.363 45 5.45 81.823 48 5.45 87.272 50 3.64 90.911 51 1.82 92.730 51 0.00 92.731 52 1.82 94.551 53 1.82 96.360 53 0.00 96.361 54 1.82 98.180 54 0.00 98.180 54 0.00 98.180 54 0.00 98.180 54 0.00 98.180 54 0.00 98.180 54 0.00 98.180 54 0.00 98.181 55 1.82 100.000 55 0.00 100.000 55 0.00 100.000 55 0.00 100.000 55 0.00 100.000 55 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=009 Chemistry

N | N at or above | % | % at or above

1 1 1.56 1.561 2 1.56 3.130 2 0.00 3.130 2 0.00 3.131 3 1.56 4.692 5 3.13 7.815 10 7.81 15.632 12 3.13 18.753 15 4.69 23.440 15 0.00 23.441 16 1.56 25.004 20 6.25 31.251 21 1.56 32.819 30 14.06 46.886 36 9.38 56.255 41 7.81 64.062 43 3.13 67.199 52 14.06 81.251 53 1.56 82.812 55 3.13 85.943 58 4.69 90.631 59 1.56 92.192 61 3.13 95.310 61 0.00 95.311 62 1.56 96.881 63 1.56 98.440 63 0.00 98.441 64 1.56 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.000 64 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=012 Early Childhood Education

N | N at or above | % | % at or above

1 1 0.05 0.052 3 0.10 0.15

10 13 0.48 0.6331 44 1.50 2.1355 99 2.66 4.79

140 239 6.78 11.57170 409 8.23 19.81168 577 8.14 27.94248 825 12.01 39.95255 1080 12.35 52.30161 1241 7.80 60.10238 1479 11.53 71.62168 1647 8.14 79.76103 1750 4.99 84.75105 1855 5.08 89.83

77 1932 3.73 93.5642 1974 2.03 95.5937 2011 1.79 97.3822 2033 1.07 98.4511 2044 0.53 98.98

8 2052 0.39 99.372 2054 0.10 99.474 2058 0.19 99.663 2061 0.15 99.812 2063 0.10 99.900 2063 0.00 99.900 2063 0.00 99.900 2063 0.00 99.900 2063 0.00 99.901 2064 0.05 99.950 2064 0.00 99.950 2064 0.00 99.950 2064 0.00 99.950 2064 0.00 99.950 2064 0.00 99.950 2064 0.00 99.950 2064 0.00 99.950 2064 0.00 99.950 2064 0.00 99.951 2065 0.05 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=013 Early Childhood Special Education

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.001 1 0.37 0.371 2 0.37 0.743 5 1.12 1.868 13 2.97 4.835 18 1.86 6.695 23 1.86 8.558 31 2.97 11.52

15 46 5.58 17.1013 59 4.83 21.9312 71 4.46 26.3934 105 12.64 39.0315 120 5.58 44.6120 140 7.43 52.0431 171 11.52 63.5713 184 4.83 68.4016 200 5.95 74.3514 214 5.20 79.55

9 223 3.35 82.9010 233 3.72 86.6215 248 5.58 92.19

5 253 1.86 94.053 256 1.12 95.172 258 0.74 95.915 263 1.86 97.770 263 0.00 97.770 263 0.00 97.773 266 1.12 98.880 266 0.00 98.881 267 0.37 99.260 267 0.00 99.262 269 0.74 100.000 269 0.00 100.000 269 0.00 100.000 269 0.00 100.000 269 0.00 100.000 269 0.00 100.000 269 0.00 100.000 269 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=014 Earth and Space Science

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.000 0 0.00 0.000 0 0.00 0.000 0 0.00 0.002 2 9.52 9.521 3 4.76 14.290 3 0.00 14.291 4 4.76 19.052 6 9.52 28.571 7 4.76 33.331 8 4.76 38.103 11 14.29 52.380 11 0.00 52.383 14 14.29 66.671 15 4.76 71.431 16 4.76 76.190 16 0.00 76.191 17 4.76 80.950 17 0.00 80.952 19 9.52 90.482 21 9.52 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.000 21 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=015 Educational Leadership

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.000 0 0.00 0.002 2 0.27 0.276 8 0.82 1.099 17 1.23 2.32

19 36 2.59 4.9018 54 2.45 7.3658 112 7.90 15.2654 166 7.36 22.6251 217 6.95 29.5668 285 9.26 38.8360 345 8.17 47.0061 406 8.31 55.3179 485 10.76 66.0853 538 7.22 73.3033 571 4.50 77.7945 616 6.13 83.9228 644 3.81 87.7421 665 2.86 90.6022 687 3.00 93.6012 699 1.63 95.23

9 708 1.23 96.4610 718 1.36 97.82

3 721 0.41 98.233 724 0.41 98.643 727 0.41 99.050 727 0.00 99.054 731 0.54 99.592 733 0.27 99.861 734 0.14 100.000 734 0.00 100.000 734 0.00 100.000 734 0.00 100.000 734 0.00 100.000 734 0.00 100.000 734 0.00 100.000 734 0.00 100.000 734 0.00 100.000 734 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=016 Computer/Technology Subtest I

N | N at or above | % | % at or above

1 1 1.75 1.754 5 7.02 8.771 6 1.75 10.539 15 15.79 26.324 19 7.02 33.336 25 10.53 43.868 33 14.04 57.895 38 8.77 66.673 41 5.26 71.934 45 7.02 78.955 50 8.77 87.724 54 7.02 94.740 54 0.00 94.742 56 3.51 98.250 56 0.00 98.250 56 0.00 98.250 56 0.00 98.251 57 1.75 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=017 Computer/Technology Subtest II

N | N at or above | % | % at or above

0 0 0.00 0.001 1 1.75 1.752 3 3.51 5.261 4 1.75 7.027 11 12.28 19.305 16 8.77 28.07

10 26 17.54 45.616 32 10.53 56.145 37 8.77 64.915 42 8.77 73.683 45 5.26 78.952 47 3.51 82.462 49 3.51 85.962 51 3.51 89.470 51 0.00 89.474 55 7.02 96.490 55 0.00 96.490 55 0.00 96.491 56 1.75 98.250 56 0.00 98.251 57 1.75 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.000 57 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=018 Elementary Education Subtest I

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.004 4 0.61 0.61

19 23 2.89 3.5020 43 3.04 6.5312 55 1.82 8.3624 79 3.65 12.0173 152 11.09 23.1032 184 4.86 27.9633 217 5.02 32.9842 259 6.38 39.3694 353 14.29 53.6545 398 6.84 60.4933 431 5.02 65.5045 476 6.84 72.3453 529 8.05 80.4031 560 4.71 85.1114 574 2.13 87.2318 592 2.74 89.9730 622 4.56 94.5311 633 1.67 96.20

5 638 0.76 96.965 643 0.76 97.729 652 1.37 99.092 654 0.30 99.391 655 0.15 99.540 655 0.00 99.542 657 0.30 99.851 658 0.15 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.000 658 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=019 Elementary Education Subtest II

N | N at or above | % | % at or above

2 2 0.28 0.281 3 0.14 0.427 10 0.99 1.42

10 20 1.42 2.8333 53 4.67 7.5121 74 2.97 10.4829 103 4.11 14.5928 131 3.97 18.5618 149 2.55 21.1040 189 5.67 26.7754 243 7.65 34.4234 277 4.82 39.2439 316 5.52 44.7637 353 5.24 50.0029 382 4.11 54.1192 474 13.03 67.1432 506 4.53 71.6734 540 4.82 76.4924 564 3.40 79.8920 584 2.83 82.7235 619 4.96 87.6816 635 2.27 89.9418 653 2.55 92.4911 664 1.56 94.05

7 671 0.99 95.046 677 0.85 95.894 681 0.57 96.464 685 0.57 97.034 689 0.57 97.596 695 0.85 98.443 698 0.42 98.873 701 0.42 99.292 703 0.28 99.581 704 0.14 99.720 704 0.00 99.721 705 0.14 99.860 705 0.00 99.860 705 0.00 99.860 705 0.00 99.861 706 0.14 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=020 English Language Arts

N | N at or above | % | % at or above

0 0 0.00 0.001 1 0.16 0.161 2 0.16 0.313 5 0.47 0.78

13 18 2.04 2.8311 29 1.73 4.5521 50 3.30 7.8522 72 3.45 11.3030 102 4.71 16.0130 132 4.71 20.7237 169 5.81 26.5347 216 7.38 33.9141 257 6.44 40.3537 294 5.81 46.1533 327 5.18 51.3350 377 7.85 59.1846 423 7.22 66.4141 464 6.44 72.8433 497 5.18 78.0219 516 2.98 81.0031 547 4.87 85.8717 564 2.67 88.5414 578 2.20 90.7412 590 1.88 92.6218 608 2.83 95.4511 619 1.73 97.17

4 623 0.63 97.804 627 0.63 98.431 628 0.16 98.591 629 0.16 98.743 632 0.47 99.221 633 0.16 99.371 634 0.16 99.532 636 0.31 99.840 636 0.00 99.840 636 0.00 99.840 636 0.00 99.840 636 0.00 99.841 637 0.16 100.000 637 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=021 English to Speakers of Other Languages (ESOL)

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.003 3 3.66 3.660 3 0.00 3.665 8 6.10 9.768 16 9.76 19.513 19 3.66 23.17

13 32 15.85 39.027 39 8.54 47.569 48 10.98 58.545 53 6.10 64.631 54 1.22 65.85

11 65 13.41 79.272 67 2.44 81.713 70 3.66 85.372 72 2.44 87.800 72 0.00 87.803 75 3.66 91.462 77 2.44 93.900 77 0.00 93.900 77 0.00 93.900 77 0.00 93.902 79 2.44 96.342 81 2.44 98.781 82 1.22 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.000 82 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=022 Family and Consumer Sciences

N | N at or above | % | % at or above

0 0 0 00 0 0 00 0 0 00 0 0 01 1 4 40 1 0 40 1 0 43 4 12 162 6 8 241 7 4 281 8 4 321 9 4 361 10 4 402 12 8 481 13 4 523 16 12 643 19 12 762 21 8 840 21 0 840 21 0 841 22 4 880 22 0 881 23 4 921 24 4 961 25 4 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 1000 25 0 100

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=024 Integrated Science

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.000 0 0.00 0.007 7 5.30 5.304 11 3.03 8.33

11 22 8.33 16.676 28 4.55 21.218 36 6.06 27.27

17 53 12.88 40.1512 65 9.09 49.24

8 73 6.06 55.3014 87 10.61 65.91

6 93 4.55 70.458 101 6.06 76.52

13 114 9.85 86.367 121 5.30 91.672 123 1.52 93.183 126 2.27 95.453 129 2.27 97.730 129 0.00 97.731 130 0.76 98.481 131 0.76 99.241 132 0.76 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.000 132 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=025 Integrated Social Studies

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.000 0 0.00 0.002 2 0.32 0.322 4 0.32 0.644 8 0.64 1.288 16 1.28 2.57

12 28 1.93 4.4921 49 3.37 7.8728 77 4.49 12.3630 107 4.82 17.1732 139 5.14 22.3130 169 4.82 27.1349 218 7.87 34.9952 270 8.35 43.3440 310 6.42 49.7662 372 9.95 59.7146 418 7.38 67.0946 464 7.38 74.4842 506 6.74 81.2243 549 6.90 88.1221 570 3.37 91.4918 588 2.89 94.3811 599 1.77 96.15

9 608 1.44 97.595 613 0.80 98.397 620 1.12 99.521 621 0.16 99.680 621 0.00 99.681 622 0.16 99.841 623 0.16 100.000 623 0.00 100.000 623 0.00 100.000 623 0.00 100.000 623 0.00 100.000 623 0.00 100.000 623 0.00 100.000 623 0.00 100.000 623 0.00 100.000 623 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=027 Mathematics

N | N at or above | % | % at or above

1 1 0.18 0.184 5 0.72 0.915 10 0.91 1.815 15 0.91 2.72

10 25 1.81 4.5321 46 3.80 8.3312 58 2.17 10.5110 68 1.81 12.3219 87 3.44 15.7621 108 3.80 19.5726 134 4.71 24.2827 161 4.89 29.1726 187 4.71 33.8825 212 4.53 38.4122 234 3.99 42.3940 274 7.25 49.6425 299 4.53 54.1735 334 6.34 60.5131 365 5.62 66.1232 397 5.80 71.9223 420 4.17 76.0927 447 4.89 80.9816 463 2.90 83.8818 481 3.26 87.1410 491 1.81 88.9516 507 2.90 91.8513 520 2.36 94.20

8 528 1.45 95.659 537 1.63 97.288 545 1.45 98.731 546 0.18 98.911 547 0.18 99.091 548 0.18 99.282 550 0.36 99.641 551 0.18 99.821 552 0.18 100.000 552 0.00 100.000 552 0.00 100.000 552 0.00 100.000 552 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=028 Middle Grades English Language Arts

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.003 3 0.49 0.490 3 0.00 0.494 7 0.65 1.15

10 17 1.64 2.7823 40 3.76 6.5524 64 3.93 10.4739 103 6.38 16.8629 132 4.75 21.6041 173 6.71 28.3146 219 7.53 35.8443 262 7.04 42.8850 312 8.18 51.0653 365 8.67 59.7437 402 6.06 65.7939 441 6.38 72.1836 477 5.89 78.0724 501 3.93 82.0017 518 2.78 84.7819 537 3.11 87.8915 552 2.45 90.3416 568 2.62 92.96

6 574 0.98 93.949 583 1.47 95.42

11 594 1.80 97.223 597 0.49 97.712 599 0.33 98.043 602 0.49 98.534 606 0.65 99.180 606 0.00 99.182 608 0.33 99.510 608 0.00 99.511 609 0.16 99.670 609 0.00 99.670 609 0.00 99.671 610 0.16 99.841 611 0.16 100.000 611 0.00 100.000 611 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=029 Middle Grades Science

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.002 2 0.26 0.262 4 0.26 0.528 12 1.04 1.579 21 1.17 2.746 27 0.78 3.52

16 43 2.09 5.6117 60 2.22 7.8318 78 2.35 10.1830 108 3.92 14.1036 144 4.70 18.8037 181 4.83 23.6361 242 7.96 31.5953 295 6.92 38.5161 356 7.96 46.4843 399 5.61 52.0963 462 8.22 60.3153 515 6.92 67.2339 554 5.09 72.3250 604 6.53 78.8538 642 4.96 83.8128 670 3.66 87.4738 708 4.96 92.4319 727 2.48 94.9115 742 1.96 96.8710 752 1.31 98.17

5 757 0.65 98.835 762 0.65 99.483 765 0.39 99.871 766 0.13 100.000 766 0.00 100.000 766 0.00 100.000 766 0.00 100.000 766 0.00 100.000 766 0.00 100.000 766 0.00 100.000 766 0.00 100.000 766 0.00 100.000 766 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=030 Middle Grades Mathematics

N | N at or above | % | % at or above

0 0 0.00 0.002 2 0.28 0.281 3 0.14 0.426 9 0.83 1.257 16 0.97 2.229 25 1.25 3.46

15 40 2.08 5.5410 50 1.39 6.9321 71 2.91 9.8318 89 2.49 12.3334 123 4.71 17.0450 173 6.93 23.9647 220 6.51 30.4743 263 5.96 36.4358 321 8.03 44.4655 376 7.62 52.0858 434 8.03 60.1145 479 6.23 66.3456 535 7.76 74.1039 574 5.40 79.5033 607 4.57 84.0724 631 3.32 87.4027 658 3.74 91.1416 674 2.22 93.3513 687 1.80 95.1515 702 2.08 97.23

8 710 1.11 98.348 718 1.11 99.451 719 0.14 99.581 720 0.14 99.722 722 0.28 100.000 722 0.00 100.000 722 0.00 100.000 722 0.00 100.000 722 0.00 100.000 722 0.00 100.000 722 0.00 100.000 722 0.00 100.000 722 0.00 100.000 722 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=031 Middle Grades Social Studies

N | N at or above | % | % at or above

0 0 0.00 0.000 0 0.00 0.001 1 0.12 0.121 2 0.12 0.251 3 0.12 0.371 4 0.12 0.494 8 0.49 0.993 11 0.37 1.36

11 22 1.36 2.7215 37 1.85 4.5716 53 1.98 6.5425 78 3.09 9.6334 112 4.20 13.8344 156 5.43 19.2645 201 5.56 24.8166 267 8.15 32.9652 319 6.42 39.3855 374 6.79 46.1757 431 7.04 53.2150 481 6.17 59.3836 517 4.44 63.8340 557 4.94 68.7736 593 4.44 73.2125 618 3.09 76.3053 671 6.54 82.8431 702 3.83 86.6725 727 3.09 89.7519 746 2.35 92.1025 771 3.09 95.1911 782 1.36 96.54

7 789 0.86 97.4112 801 1.48 98.89

2 803 0.25 99.145 808 0.62 99.752 810 0.25 100.000 810 0.00 100.000 810 0.00 100.000 810 0.00 100.000 810 0.00 100.000 810 0.00 100.00

Total scaled score bands: 100-104 through 295-300, in 5-point increments.

Candidate Performance Statistical Report
September 3, 2013 - August 31, 2014

Total Scaled Score Distribution by Test Field (All Forms)

Test Field=032 Music

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 340.]


Test Field=034 Physical Education

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 211.]


Test Field=035 Physics

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 31.]


Test Field=036 Prekindergarten Subtest I

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 21.]


Test Field=037 Prekindergarten Subtest II

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 21.]


Test Field=038 Reading Subtest I

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 1,755.]


Test Field=039 Reading Subtest II

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 1,722.]


Test Field=040 School Counselor

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 183.]


Test Field=041 School Library Media Specialist

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 16.]


Test Field=043 Special Education

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 1,500.]


Test Field=044 Special Education Specialist: Deaf/Hard of Hearing

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 23.]


Test Field=045 Special Education Specialist: Visually Impaired

[Distribution table and histogram: number of candidates (N), N at or above, %, and % at or above for each 5-point scaled score interval from 100-104 through 295-300; total N = 13.]
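The "% at or above" column is also what a pass-rate calculation would read off for a given passing score. As a self-contained illustration only (the scores and the cut score of 220 below are hypothetical and are not a statement of the program's passing standard):

def pct_at_or_above(scaled_scores, cut_score):
    """Percentage of candidates scoring at or above cut_score."""
    meeting = sum(1 for s in scaled_scores if s >= cut_score)
    return 100.0 * meeting / len(scaled_scores)

# Hypothetical example: 4 of 6 candidates score 220 or higher.
print(f"{pct_at_or_above([214, 220, 233, 233, 187, 241], 220):.2f}%")  # 66.67%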