Study Island Scientific Research Base
Magnolia Consulting, LLC
July 15, 2008

ACKNOWLEDGEMENT

The author gratefully acknowledges the staff at Study Island, especially Tim McEwen and J. W. Marshall, for the value they place on research and their commitment to it, and thanks the many individuals whose contributions and assistance made this work possible, including Mary Styers, Lisa Shannon, Stephanie Baird Wilkerson, and Arianne Welker of Magnolia Consulting.

Jennifer Watts, Ph.D., Author


TABLE OF CONTENTS

Acknowledgement
Introduction
NCLB Accountability Requirements
Study Island Content Is Built From State Standards
Study Island Provides Diagnostic, Formative, and Summative Results
Study Island Builds In An Assessment Feedback Loop
Study Island Reinforces and Extends Learning Through Ongoing and Distributed Skill Practice
Study Island Includes Components that Motivate Students
Study Island Supports Standards Mastery Through A Variety of Instructional Formats
Study Island Includes Dynamic and Generative Content
Study Island Uses a Web-Based Instructional Platform
Study Island Encourages Parental Involvement
Conclusions
Footnotes
References


INTRODUCTION

The No Child Left Behind Act of 2001 (NCLB) is the most comprehensive reform of the Elementary and Secondary Education Act since it was enacted in 1965. The law requires that educational programs used in academic settings be based on scientific research findings (NCLB, 2002). This requirement for a research base applies not only to core instructional materials but also to materials and methods that prepare students to be successful on their statewide high-stakes assessments. NCLB further calls for stronger accountability for student achievement results from schools and districts, requiring academic professionals to ensure that students are meeting or exceeding state proficiency standards for key instructional objectives in the core areas of reading, math, and, more recently, science. Repeated failure to meet these standards can lead to negative and protracted consequences, such as having to provide educational support services, restructuring of the school's internal staffing organization, and ultimately school closure. Therefore, schools need to continually monitor students' academic progress and immediately remediate any difficulties students encounter.

Because of this need for continual progress monitoring, educational publishers have created materials designed to aid schools in achieving their academic progress goals. One such program, Study Island, is a web-based standards mastery program that combines highly specific and dynamic content with real-time reporting to create a customized assessment, diagnostic, and instructional program based on each state's standards. By creating an interactive and flexible instructional platform, Study Island provides engaging, ongoing practice and remediation to help students meet their state-required standards in all major content areas.

The purpose of this work is to create a foundational research base to support the design features and instructional elements of the Study Island program. This work provides documentation that connects the key features of the Study Island program to the scientific and academic research literature. The following sections present supporting research related to the key features of Study Island:

• Content that is developed from specific state standards
• Diagnostic, formative, and summative results
• Assessment feedback loops
• Ongoing and distributed skill practice
• Motivational components
• A variety of instructional formats
• Dynamic and generative content
• Online learning
• Parental involvement


NCLB ACCOUNTABILITY REQUIREMENTS

NCLB legislation requires districts and schools to demonstrate Adequate Yearly Progress (AYP), an individual state's measure of yearly progress toward achieving state academic standards. AYP is the minimum level of improvement that districts and schools within a state must attain each year. Under NCLB, states set their own academic content standards in the core subject areas of reading, math, and science. Additionally, each state sets benchmark standards for the percentage of students meeting proficiency on its state assessment in these core areas. Over time, the required percentage of students reaching proficiency rises, with the goal of reaching 100% proficiency by 2014. Schools are accountable for the academic progress of all students, including subgroup populations that had previously been exempt from accountability measures. If a school does not meet these proficiency goals for all students each year, the school does not meet AYP for the year and enters improvement status, at which time it must take steps to improve student proficiency.

The emphasis NCLB places on accountability at the school and individual student level requires districts to actively monitor student progress toward the expectations of the content standards and benchmark proficiency goals. Without ongoing measurement of this progress, schools must rely on end-of-year assessment data to determine the individual academic needs of each student. Often teachers receive this information too late in the year for it to have any impact on instructional practices. Therefore, research recommends that teachers engage in frequent, ongoing classroom-based assessment, known as formative assessment, to monitor student progress (Black & Wiliam, 1998b; Stiggins, 1999). Not only can these assessment results provide ongoing feedback, research shows that formative assessments can also contribute to gains in student achievement scores (Bangert-Drowns, Kulik, & Kulik, 1991)1 and build student confidence (Stiggins, 1999).

STUDY ISLAND CONTENT IS BUILT FROM STATE STANDARDS

In order for formative tests to accurately monitor student progress toward achieving state and district benchmark goals, these assessments must reflect both the breadth and depth of state standards and state assessments (Herman & Baker, 2005). In other words, there must be alignment between content standards and the assessments that evaluate knowledge of those standards; however, the term alignment can have many different meanings. Webb (2002) best defines alignment within educational settings as "the degree to which expectations and assessments are in agreement and serve in conjunction with one another to guide the system toward students learning what they are expected to know and do" (p. 1).

“Unless benchmark tests reflect state standards and assessments, their results tell us little about whether students are making adequate progress” (Herman & Baker, 2005, p. 49).


From the beginning of the standards movement, many researchers have argued for and created models of alignment judgment and alignment procedures that move assessment-standards alignment past a mirror image of the content of a curriculum framework to a broader definition. These in-depth procedures capture not only the content, but also the depth of knowledge required to demonstrate proficiency of a standard, the key principles underlying a content domain, and the reciprocal relationship between the assessment and the standards (Herman & Baker, 2005; Porter & Smithson, 2002; Rothman, Slattery, Vranek, & Resnick, 2002; Webb, 1997).

Typically, states complete alignment procedures to determine how well an existing state assessment aligns with state standards in an effort to demonstrate the overall strength or predictive validity of the state assessment (e.g., Roach, Elliot, & Webb, 2005). However, with the increased emphasis NCLB has placed on accountability at the district and school level, there is a burgeoning movement for districts to develop formative or benchmark assessments from specific state standards (e.g., Niemi, Vallone, Wang, & Griffin, 2007), allowing schools to confidently monitor student progress toward mastery of state standards throughout the year. Some alignment experts have taken this a step further and have developed procedures to examine the alignment between instructional content and assessments (Porter & Smithson, 2002, April). Research using these procedures has shown that a strong relationship exists between instructional content alignment and student achievement gains, indicating that the better instructional content aligns with assessments, the higher student achievement can be (Gamoran, Porter, Smithson, & White, 1997).2 Taken together, these results suggest that creating a system of instructional content and assessments built from and customized to specific state standards can provide a solid and accurate system of ongoing progress monitoring of student achievement.

Going beyond traditional alignment procedures, which evaluators typically conduct after program development, the Study Island program authors developed the content of Study Island from an in-depth analysis of each state's learning objectives to create highly specific and individualized versions of the program for each state. The deep customization of both the instructional practice and the progress monitoring tools of the program provides precise methods to track and improve students' progress toward meeting state-specific content standards.


STUDY ISLAND PROVIDES DIAGNOSTIC, FORMATIVE, AND SUMMATIVE RESULTS

Within the context of progress monitoring, it is not so much how educators assess students as how they use the assessment results. For assessments to be effective tools and have an impact on student learning, teachers must use them to adjust their practices or provide remediation based on student need (Black & Wiliam, 1998a).3 A recent report by the RAND Corporation (Hamilton et al., 2007) found that such data-based decision making is on the rise within school campuses due to the pressures of state and federal accountability requirements. However, according to their findings, teachers rarely use end-of-year or summative assessment results to address students' specific instructional needs. Administrators are more likely to use these results to guide decisions regarding retention or promotion of students or to determine the professional development needs of teachers. Instead, RAND reports that teachers are more likely to modify instructional practice based on the results of formative assessments used as diagnostic tools to drive instruction or correct gaps in teaching practice.

Likewise, educators within low-performing schools in AYP improvement status are also examining the results of student assessments more closely in an effort to target the instructional practices that led them into improvement status. In a review of districts' and schools' implementation of the accountability provisions of the NCLB Act, Shields et al. (2004) found that 86% of districts with low-performing schools listed using student achievement data to monitor student progress as one of their two most important school improvement plans.

These findings indicate that educators' use of assessment results has a unique role in the learning process. The integration of assessment results within instructional practice from start to finish reflects a multidimensional purpose for assessment and strengthens its role in understanding the effectiveness of a curriculum, the impact of specific teaching practices, or the response of specific students at the point of instruction. The use of assessment in this broader capacity can reveal, over time, the true impact of instruction through the growth in student performance on these assessments (American Association for Higher Education, 1991).

Study Island uses a comprehensive system of assessment tools to provide in-depth feedback regarding student progress toward mastery of content standards. The Study Island program includes reports of diagnostic, formative, and summative assessment results that are instantly and constantly available through the online system. These reports provide instructors and administrators with continual access to information regarding students' instructional weaknesses (diagnostic data), their progress toward overcoming these weaknesses (formative data), and their eventual mastery of learning objectives (summative data).

“For assessment to function formatively, the results have to be used to adjust teaching and learning; thus a significant aspect of any program will be the ways in which teachers make these adjustments” (Black & Wiliam, 1998a, p. 141).


STUDY ISLAND BUILDS IN AN ASSESSMENT FEEDBACK LOOP

Educational programs, especially technology-based programs such as Study Island, promote ongoing standards mastery by using the results of progress monitoring evaluations to inform instructional aspects of the program or to shape classroom practice. For illustrative purposes, one can think of this interactive relationship between assessment results and instructional practice as a continual feedback loop or cycle. For example, poor results on a progress monitoring mechanism can lead to remediation instruction, the scaling down of the level of future practice within the program, or the creation of new instructional paths designed to promote new learning. The cycle is then completed or restarted through further performance evaluation. These cycles typically come in three lengths (long, medium, or short), all of which can operate concurrently. Longer cycles focus on the results of summative assessments and can last an entire school year, while medium and short cycles use formative assessment results or informal findings from ongoing practice (e.g., worksheet or activity results). Regardless of the length of the cycle, researchers suggest that the use of feedback is the most critical element in the cycle (Duke & Pearson, 2002; Wiliam, 2006).

For an assessment feedback loop to be successful, the instructional delivery mechanism, be it a teacher or a computer, must be flexible and proactive, adapting the instructional content or its delivery as needed to ensure mastery (Cassarà, 2004). Research shows that when a feedback loop is applied in practice and instruction is modified based on student performance, student learning is accelerated and improved (Jinkins, 2001; Wiliam, Lee, Harrison, & Black, 2004),4 especially when feedback is used quickly and modifies instruction on a day-by-day or minute-by-minute basis (Leahy, Lyon, Thompson, & Wiliam, 2005). These shorter-cycle feedback loops, such as those found in Study Island, are typically composed of three main functions: ongoing and continual assessment, immediate feedback of results, and quick remediation.

Ongoing, continual assessment is critical to the success of a short-cycle assessment feedback loop. A consensus within the research literature suggests that students who receive frequent assessments have higher achievement scores (Black & Wiliam, 1998a; Fuchs & Fuchs, 1986; Wolf, 2007),5,6 especially when that assessment is cumulative (Dempster, 1991; Rohm, Sparzo, & Bennett, 1986)7 and provides students with opportunities to learn from the assessment (Kilpatrick, Swafford, & Bradford, 2001). Although providing feedback to teachers and students regarding student performance can generally and consistently enhance achievement (Baker, Gersten, & Lee, 2002),8 meta-analytic research indicates that it is the timeliness and type of feedback provided that are critical within applied learning settings. Kulik and Kulik (1988) found that immediate feedback of results has a positive effect on student achievement within classroom settings, especially on applied learning measures such as frequent quizzes.9

“When teachers understand and apply the teaching/learning cycle in their daily planning and instructional delivery, student learning is accelerated” (Jinkins, 2001, p. 281).


Such feedback was even more effective when it immediately followed each answer a student provided. Bangert-Drowns, Kulik, Kulik, and Morgan (1991) extended these findings by showing that timely feedback can correct errors when it informs the learner of the correct answer,10 especially when students were confident in their answers (Kulhavy & Stock, 1989).11 Marzano, Pickering, and Pollock (2001) further concluded that feedback that also provides an explanation of the correct answer is the most effective. Through their meta-analysis, they additionally concluded that feedback works best when it encourages students to keep working on a task until they succeed and tells students where they stand relative to a target level of knowledge rather than how their performance ranks in comparison to the performance of other students.12

Although most of the research literature has focused on the effect of teacher-provided feedback or feedback from classroom-based assessments, research has shown that computers are effective tools for providing feedback as well. In their meta-analysis, Baker et al. (2002) concluded that, although using computers to provide ongoing progress monitoring feedback was effective (ES = 0.29), using a computer to provide instructional recommendations based on these results was even more effective (ES = 0.51), suggesting that the combination of the two factors may be the most beneficial practice. Taken together, these results suggest that a cycle of ongoing feedback followed by remediation and further assessment contributes to increases in student achievement.

Study Island incorporates a short-cycle assessment feedback loop into its design through a system of continual assessment, immediate feedback, and quick remediation. When educators integrate Study Island into their instructional practices, it acts as a formative, ongoing assessment tool that provides students with a platform to practice or demonstrate their knowledge of taught standards. During program implementation, students answer questions that correspond to grade-specific state standards and learning objectives within state-tested content areas. When students answer a question, they immediately learn whether the answer they provided was correct. Following each question, an explanation of the correct answer is available to students, offering ongoing remediation to those who may need it. At the end of each session, students can revisit the questions they missed and again seek learning opportunities for those questions. Students also have the option to engage in additional learning opportunities through lessons on the standards that are available at the beginning and end of a study session.

Additionally, Study Island provides in-depth reports of student performance data to students, teachers, and administrators. Students can learn where they stand relative to specific proficiency goals, teachers can use the reports of individual student performance data to instantly provide additional remediation where needed within a general classroom instruction setting, and administrators can use the reports to access summative data to determine whether students are meeting benchmark standards over time.

The availability of real-time achievement data allows for both quick remediation and the identification of trends in individual student performance, helping teachers to create personalized instructional paths based on demonstrated student need. Furthermore, technology-based programs, such as Study Island, that immediately utilize student performance data can also shift instruction or practice to the appropriate level needed by a student to ensure more effective practice and to meet individual student needs. Such personalization of instructional materials promotes learning through a reduction of the cognitive load required to complete a task (Kalyuga & Sweller, 2004), and research from a variety of learning environments shows that personalized instruction can lead to more efficient training and higher test performance than fixed-sequence, one-size-fits-all programs (Camp, Paas, Rikers, & van Merriënboer, 2001; Corbalan, Kester, & van Merriënboer, 2006; Kalyuga & Sweller, 2004; Salden, Paas, Broers, & van Merriënboer, 2004).13, 14, 15, 16

Study Island uses technology both to provide students with remediation or practice at lower levels and to provide students with a customized and personalized learning experience based on demonstrated need. In many cases throughout the program, if students do not reach the requisite proficiency level on a specific objective, the program cycles students down to lower levels in order to give them practice on skills that are building blocks for higher-level skills. Once students demonstrate proficiency at a lower level, the program cycles them back up to the higher level. Through this process, the Study Island program creates individual learning trajectories for students to follow.

The administrative and reporting features of the Study Island program allow teachers and administrators to constantly monitor how students are progressing through these personalized trajectories toward mastering the required benchmarks and standards. If students begin to fall below or exceed certain levels of achievement, teachers can prescribe additional practice at specific levels through the program and continue to monitor students' progress, or they can provide additional instruction or remediation within the classroom. Therefore, when teachers integrate Study Island into their curriculum, it essentially allows for individualized, differentiated instruction that could otherwise be difficult for one teacher alone to provide.

Using Study Island to track content mastery and individual changes in achievement concurrently, a teacher can efficiently determine whether a student has significantly improved over time and whether that improvement was enough to meet specific content benchmarks and standards. Weiss and Kingsbury (1984) conclude that the combination of these methods is particularly useful for identifying students who may begin the year at the same level but do not respond to instruction at the same rate. This methodology allows for immediate notification when remediation and intervention are necessary. Although NCLB only requires that states examine student performance against the expectation for a student's grade level, these procedures can be an effective tool within an assessment feedback loop model to accurately monitor and propel student progress toward these goals throughout the year (Weiss, 2004).
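
The cycle-down/cycle-up behavior described above is essentially a simple control loop over an ordered hierarchy of skill levels. The sketch below is illustrative only; it is not Study Island's implementation, and the names (select_next_level, MASTERY_THRESHOLD) and the data structures are hypothetical. It assumes each objective has an ordered list of levels and a per-level mastery threshold.

```python
# Illustrative sketch of a cycle-down/cycle-up practice loop (not Study Island's code).
# Assumptions: each objective has ordered levels (0 = most basic) and a mastery threshold.

MASTERY_THRESHOLD = 0.70  # hypothetical proficiency cutoff (fraction of items correct)

def select_next_level(current_level: int, score: float, num_levels: int) -> int:
    """Move down a level after a weak session, back up after a strong one."""
    if score < MASTERY_THRESHOLD and current_level > 0:
        return current_level - 1          # cycle down to a building-block skill
    if score >= MASTERY_THRESHOLD and current_level < num_levels - 1:
        return current_level + 1          # cycle back up toward the target level
    return current_level                  # stay put at the floor or ceiling

def practice_session(student_answers: list[bool]) -> float:
    """Score one practice session as the fraction of correct answers."""
    return sum(student_answers) / len(student_answers) if student_answers else 0.0

# Example: a student struggles at level 2, drops to level 1, recovers, and moves back up.
level, num_levels = 2, 4
for answers in ([True, False, False, False],   # weak session -> cycle down
                [True, True, True, False],     # strong session -> cycle up
                [True, True, True, True]):     # strong session -> cycle up again
    score = practice_session(answers)
    level = select_next_level(level, score, num_levels)
    print(f"score={score:.2f} -> next level {level}")
```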


STUDY ISLAND REINFORCES AND EXTENDS LEARNING THROUGH ONGOING AND DISTRIBUTED SKILL PRACTICE

For instruction or remediation to have a lasting impact on student knowledge and foster further learning, instructors must provide reinforcement through ongoing practice and review of learned material (Dempster, 1991; Marzano et al., 2001).17 Research shows that review can have an impact on both the quantity and quality of the material that students learn. Mayer (1983)18 found that after multiple presentations of text, not only did the overall amount of information that students could recall increase, but participants were also able to recall more conceptual information than technical information. This suggests that after repeated presentations, students may process material at cognitively deeper levels. However, Marzano et al. concluded that students may need many practice sessions in order to reach high levels of competence, and for more difficult, multi-step skills, students may need to engage in focused practice, which allows them to target specific sub-skills within a larger skill. Teachers must allow time for students to internalize skills through practice so students can apply concepts in different and conceptually challenging situations.

Research suggests that the temporal presentation of the material during review mediates the amount of material one can learn through practice. Within typical curriculum implementation, teachers conduct material review either massed, such as once at the end of a unit of study, or distributed, in which students review and practice material continually, spaced out over a longer period. There is a consensus within the research literature that distributed practice produces higher rates of retention and better test performance than massed practice (Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Donovan & Radosevich, 1999; Janiszewski, Noel, & Sawyer, 2003).19, 20, 21 The magnitude of retention, however, depends on the complexity of the task and material as well as the interval between reviews. Donovan and Radosevich (1999) found that both task complexity and interval length were important interacting factors. Specifically, when individuals reviewed cognitively complex material, longer periods between presentations of the material led to higher rates of recall.22 Janiszewski et al. (2003) extended these findings and concluded that, in addition to longer intervals between presentations, other variables contributed to the effectiveness of distributed practice as well. These included the type of learning (intentional learning produced bigger effects than incidental learning), the complexity of the material to be learned (semantically complex material was learned more effectively through distributed practice), the complexity of the intervening material, and the meaningfulness of the material to be learned.23 In a more recent meta-analysis, Cepeda et al. (2006) concluded that distributed learning spaced across different days markedly increased the amount of material learned and the length of time individuals were able to retain the learned material.24

Distributed practice of material not only has an effect on the amount of information learned, it can be a motivating factor as well. Research shows that distributed practice of material is more interesting and enjoyable to students than massed practice (Dempster, 1991; Elmes, Dye, & Herdelin, 1983), indicating that distributed practice could contribute to an increased motivation to learn material.

“Spaced repetitions are likely to encourage exactly the kinds of constructive mental processes, founded on effort and concentration that teachers hope to foster” (Dempster, 1991, p. 72).


Taken together, these results suggest that the distributed practice of instructional material employed under these conditions could significantly impact students' retention as well as their understanding and enjoyment of course material.

The flexibility of the instructional framework of the Study Island program allows for ongoing skill practice and review of learned material, as well as the ability to space out practice in order to foster higher rates of recall and retention. During implementation of the program, teachers can customize the amount and frequency of practice that each student receives or assign students to review specific standards or learning objectives as needed. Because students do not have to complete the program lessons in any specific order, teachers can distribute the presentation of skill practice, especially for the more complex material, over multiple days. When teachers use Study Island in conjunction with classroom instruction, they can present material and assign practice on that material as needed throughout the year, creating an effective and motivating learning environment in which to practice state standards.

STUDY ISLAND INCLUDES COMPONENTS THAT MOTIVATE STUDENTS

Research demonstrates that learning is not a singular, linear process. Instead, learning is multidimensional, integrating a variety of cognitive and behavioral functions, including motivation and interest (Alao & Guthrie, 1999). Increasing motivational factors within a learning task is important for promoting student performance (Taylor & Aldeman, 1999), especially for struggling students (Apel & Swank, 1999). When students are engaged and interested in a task, they participate at higher levels, both cognitively (Klinger, Vaughn, & Schumm, 1998) and physically (Guthrie, Wigfield, Metsala, & Cox, 1999), and increase their involvement in activities that can improve achievement (Guthrie et al., 1996),25 such as the frequency, depth, and breadth of reading (Stanovich & Cunningham, 1993).26

A variety of effective methods can increase student motivation within an instructional environment (Guthrie & Davis, 2003). Providing students with high-interest, diverse materials (Ivey & Broaddus, 2000; Worthy, Moorman, & Turner, 1999), embedding instruction within context (Biancarosa & Snow, 2004; Dole, Sloan, & Trathen, 1995), increasing students' self-efficacy and competence (Ryan & Deci, 2000), offering competition-based rewards for performance (Reeve & Deci, 1996), and providing students with choice and autonomy (Patall, Cooper, & Robinson, 2008; Zahorik, 1996) have all been shown to be effective strategies for increasing student motivation and engagement.

“In brief, the most predictive statistical models show that engagement is a mediator of the effects of instruction on reading achievement. If instruction increases students’ engagement, then students’ achievement increases” (Snow, 2002, p. 42).


Research has also shown that offering performance-contingent rewards, such as the chance to play a game after successfully completing a task, is more motivating than positive performance feedback alone (Harackiewicz & Manderlink, 1984). Rewards symbolize competence at an activity (Boggiano & Ruble, 1979; Harackiewicz, 1979), and research shows that in situations where performance-contingent rewards are available, individuals are more concerned about their performance and perceived competence than those receiving positive performance feedback alone. Furthermore, the personal importance of doing well at a task enhances subsequent interest in the task. Therefore, performance-contingent rewards can foster task interest via an individual's desire to do well and thus overcome the negative, anxiety-producing effects typically associated with performance evaluation (Boggiano, Harackiewicz, Bessette, & Main, 1986; Harackiewicz & Manderlink, 1984).

Additionally, Marzano et al. (2001) found that providing students with personalized recognition for their academic accomplishments, especially in the form of concrete symbols, can be a strong motivator that increases student achievement.27 However, in order to have a positive impact on students' intrinsic motivation, Marzano et al. suggest that recognition should be contingent on the achievement of a specific performance goal, not just the completion of any one task. Therefore, recognition has the strongest impact on achievement when a student connects the reward to reaching a specified level of performance.

Technology-based instructional programs, such as Study Island, although inherently motivating (Relan, 1992, February), have a unique capacity to incorporate such motivational strategies concurrently within their instructional environments. In particular, computer programs can easily build in both flexibility and modifiability of instructional sequences. Such open architecture can provide students with a sense of autonomy and ownership in the instructional tasks. Research has shown that presenting students with choices during instruction, especially choices that enhance or affirm autonomy, augments intrinsic motivation, increases effort, improves task performance,28 and contributes to growth in perceived confidence (Patall et al., 2008). Likewise, Corbalan et al. (2006) suggest that technology-based environments that allow task personalization promote self-regulated learning and give the learner control over his or her environment, which can increase motivation and foster positive learning outcomes (Wolters, 2003).

The Study Island program incorporates motivational factors into the implementation and design of the program in diverse ways, both to engage students and to further program use. For instance, Study Island includes a wide variety of material covering multiple content areas and subjects within those content areas. Additionally, it builds instructional opportunities into the standards practice in order to motivate students to apply skills as they are learning them. Study Island aims to build student confidence and self-efficacy by providing students with sufficient practice and learning opportunities to help them realize positive gains in achievement. Students can monitor their own progress as they complete lessons and feel successful watching their mastery level rise.
When students reach the specified mastery level of an objective, they earn a personalized reward in the form of a blue ribbon icon, which serves as a concrete symbol of recognition for their academic achievements and further motivates students to succeed. As part of the Study Island program, students also have access to a wide variety of simple, short games that they can play when they have answered a question correctly.


Students compete with other Study Island users to try to achieve the highest score on the games, and this competition is intended to motivate students intrinsically to perform well on the task in order to have a chance to play the game and compete with their peers. One of the most significant motivational factors Study Island provides is its open architecture, which allows students to complete lessons in any order and switch between tasks as desired. This offers students ownership of their learning environment, allowing them to set their own goals, plan personalized learning experiences, execute their work with flexibility, and regulate their own progress.

STUDY ISLAND SUPPORTS STANDARDS MASTERY THROUGH A VARIETY OF INSTRUCTIONAL FORMATS

Presenting instructional material in a variety of formats, instead of through a decontextualized, one-size-fits-all program, is both effective and motivating (Tomlinson, 2000). In a meta-analysis of a learning-styles approach to instruction, Lovelace (2005) concluded that both student achievement and student attitudes improve when teachers consider students' learning-style preferences and match instruction to those preferred styles.29 However, others have argued that learning styles may be dynamic, modified by the educational environment, and evolving over time (Sewall, 1986; Yin, 2001), making it difficult to consistently meet the needs of any one learner.

The advent of technology-based instructional approaches has provided educators with a platform to meet learners flexibly at their preferred style, pace, and instructional level and has redefined the learning process. Research has shown that the flexible presentation of instructional material, such as that found in Study Island, can lead to improved performance and allow for repeated, persistent practice (Yin, 2001). In an evaluation of student perceptions of different types of instructional media, D'Arcy, Eastburn, and Bruce (in press) found that students generally value learning from a variety of media formats as opposed to one singular type, and they concluded this could contribute to greater learning. Likewise, when teachers or computers present instructional information via multiple media formats, students are able to recall more information (Daiute & Morse, 1994).

The presentation of instructional material within the context of an interactive classroom environment, combined with real-time feedback, can also be an effective mode for the delivery of instruction or skill practice. Research demonstrates that when students actively participate and interact in classroom discussions, they reach higher levels of critical thinking and demonstrate longer retention of information (McKeachie, 1990; Smith, 1977). Reay, Bao, Li, and Warnakulasooriya (2005) found that the use of clicker technology is an effective method to induce such participation and can lead to improved classroom dynamics and higher levels of student-to-student and student-to-teacher interaction.

“There is no contradiction between effective standards-based instruction and differentiation. Curriculum tells us what to teach; Differentiation tells us how….Differentiation simply suggests ways in which we can make that curriculum work best for varied learners. In other words differentiation can show us how to teach the same standard to a range of learners by employing a variety of teaching and learning modes” (Tomlinson, 2000, pp. 8–9).


Yourstone, Kraye, and Albaum (2008) extended these findings by showing that the use of clicker technology in applied classroom settings, as a means to provide immediate feedback to posed questions, contributed to a significant increase in achievement scores30 and allowed for the immediate discussion of specific questions and answers. Yourstone et al. concluded that real-time response monitoring, combined with immediate discussion of questions and their responses, enhances student understanding of the meaning of questions more than traditional questioning systems (such as paper-based quizzes), in which long delays typically occur between questioning and feedback. Taken together, these results suggest that presenting material in a variety of formats can lead to higher achievement and improved levels of student participation and engagement.

The flexible implementation of the Study Island program allows teachers to use the program for standards practice in a variety of settings and to present the content in multiple formats in order to meet each student at his or her learning and motivational level. The web-based platform allows students to use the program from any computer with access to the internet, be it a classroom computer, a computer lab, or a home computer. Teachers can use the program in a whole- or small-group setting, assign students individual work within the classroom, or have students use the program at home for extra practice or remediation. Because some students may need the material presented in different formats to be successful, the program also provides instructors with a printable worksheet option that students can complete in class or as homework.

Study Island also includes clicker technology that teachers can use in conjunction with program implementation to create an interactive and engaging environment for students, as well as another way to present the content material of the program. Teachers can implement Study Island in this mode concurrently with classroom instruction as a means to gather real-time, formative data regarding students' knowledge of taught standards. After teaching a lesson, instructors present questions from the Study Island program to the whole class, and students respond to the questions using the clickers. The clicker software immediately reports the students' answers, which allows teachers to provide instant remediation, if needed, or to move on quickly to the next topic, confident that students have mastered the previous material. Teachers can also use this technology to create interactive groups in which students discuss the questions and then provide their answers with the clicker quickly in order to compete with other groups. Overall, Study Island combines multiple modes of content presentation within a variety of settings in order to address the instructional and motivational needs of all students.
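
To make the clicker workflow concrete, the sketch below shows one way real-time responses could be tallied to support the remediate-or-move-on decision described above. It is purely illustrative; the function name, the 70% cutoff, and the data format are assumptions, not a description of Study Island's clicker software.

```python
# Illustrative tally of clicker responses for one question (not Study Island's software).
from collections import Counter

REMEDIATION_CUTOFF = 0.70  # assumed share of correct answers needed to move on

def summarize_responses(responses: dict[str, str], correct: str) -> tuple[Counter, bool]:
    """Tally answer choices and decide whether the class is ready to move on."""
    tally = Counter(responses.values())
    share_correct = tally[correct] / len(responses) if responses else 0.0
    return tally, share_correct >= REMEDIATION_CUTOFF

# Example: responses keyed by (anonymous) clicker ID for a question whose answer is "B".
responses = {"clicker-01": "B", "clicker-02": "B", "clicker-03": "A", "clicker-04": "B"}
tally, move_on = summarize_responses(responses, correct="B")
print(tally, "move on" if move_on else "remediate")
```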


STUDY ISLAND INCLUDES DYNAMIC AND GENERATIVE CONTENT

As Dodds and Fletcher (2004) explain, technology-based instruction has long afforded instructors methods:

• to accommodate an individual's rate of progress toward meeting instructional goals
• to tailor content and sequence to meet the needs of each student
• to vary the difficulty and specificity of instructional content as needed
• to adjust instructional formats to meet the instructional style of each student

However, the advent of more intelligent technology, such as that found in the Study Island program, allows for an instructional grammar that generates content on demand rather than requiring developers to preprogram all possible sequences and formats, allowing for dynamic and automatic item generation (see Irvine & Kyllonen, 2002, for an overview). Automatic item generation technology uses algorithms to generate assessment items that are of similar difficulty, or that vary in difficulty systematically, in order to create an unlimited set of test items that can provide additional practice and minimize security risks or cheating (Arendasy, Sommer, Gittler, & Hergovich, 2006). The bulk of the available research within the literature has centered on the feasibility and improvement of the technology, such as creating items based on models that share the same characteristics and will elicit similar cognitive processing (Arendasy et al., 2006). However, recent research has shown that this technology can be used successfully to create complicated educational assessment items with strong psychometric properties (Arendasy & Sommer, 2007; Gorin, 2005) and can produce assessment items that correlate highly in test-retest situations (Bejar, Lawless, Morley, Wagner, Bennett, & Revuelta, 2002).

Study Island makes use of intelligent technology to create dynamic and generative content within the program, providing a unique experience for every child. Within a set of questions, although students will ultimately see the same questions, the order of the questions and answer choices will vary for each student. This renders cheating, while using the Study Island program, virtually impossible within a classroom or testing situation. Additionally, within the math component of the program, the content of the questions is dynamic and generative as well. Although the format of a question remains constant, the program automatically generates the content of the questions, creating a unique and varied set of questions within each learning objective. Therefore, even if students are concurrently practicing the same standard at the same level, each student will see unique question content within the question format.
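
As an illustration of the two mechanisms described above (per-student shuffling of question and answer order, and template-based generation of math item content), the sketch below shows one simple way such behavior could be implemented. It is a hypothetical example, not Study Island's actual algorithm; the names (Question, make_math_item, build_session) and the single multiplication template are invented for illustration.

```python
# Hypothetical sketch of per-student item shuffling and template-based math item
# generation (not Study Island's actual implementation).
import random
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    choices: list[str]   # choices[0] is the correct answer before shuffling
    answer: str

def make_math_item(rng: random.Random) -> Question:
    """Generate one item from a fixed question format with randomly generated content."""
    a, b = rng.randint(2, 12), rng.randint(2, 12)
    correct = a * b
    distractors = {correct + a, correct - b, a + b}      # plausible wrong answers
    choices = [str(correct)] + [str(d) for d in distractors if d != correct]
    return Question(f"What is {a} x {b}?", choices, str(correct))

def build_session(student_id: str, objective_seed: int, n_items: int = 3) -> list[Question]:
    """Same objective, but each student sees unique numbers and a unique ordering."""
    rng = random.Random(f"{student_id}:{objective_seed}")  # per-student random stream
    items = [make_math_item(rng) for _ in range(n_items)]
    for q in items:
        rng.shuffle(q.choices)            # shuffle answer choices per student
    rng.shuffle(items)                    # shuffle question order per student
    return items

# Example: two students practicing the same objective receive different sessions.
for sid in ("student-a", "student-b"):
    session = build_session(sid, objective_seed=42)
    print(sid, [q.prompt for q in session])
```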

“A third revolution in instruction appears to be accompanying the introduction of computer technology. The capability of this technology for real-time adjustment of instructional content, sequence, scope, difficulty, and style to meet the needs of individuals suggests a third pervasive and significant revolution in instruction” (Dodds & Fletcher, 2004, p. 402).


Taken together, these dynamic features of the program can afford teachers confidence that students are applying their own knowledge in each Study Island session.

STUDY ISLAND USES A WEB-BASED INSTRUCTIONAL PLATFORM

Because of the extensive instructional potential technology brings to the classroom, numerous researchers have sought to examine not only the overall effectiveness of Computer-Aided Instruction (CAI) as a modality, but also the implementation environments in which it is operationally most successful. Through meta-analytic research, CAI has been shown to be an effective means of delivering instruction in both primary and secondary school settings (Kulik & Kulik, 1991; Kulik, Bangert, & Williams, 1983; Kulik, Kulik, & Bangert-Drowns, 1985),31, 32, 33 and research indicates that CAI is effective across the curriculum, especially in the critical areas of math (Hasselbring, 1986) and reading (Soe, Koki, & Chang, 2000). CAI is effective with students at all instructional levels, including students who are struggling academically, have learning disabilities, or are learning English (Braun, 1993; Hannaford, 1993; Ormes, 1992). CAI has demonstrated success in a variety of instructional contexts, such as providing individualized practice, self-paced learning, and positive reinforcement, especially for struggling students (Schiffman, Tobin, & Buchanan, 1982).

However, research has shown that CAI has the greatest impact on achievement when educators integrate it well into the curriculum and use it to supplement, not replace, classroom instruction (Hasselbring, 1986).34 Researchers have found that when teachers use CAI to extend instruction, it is more effective than teacher-directed instruction alone (Stennett, 1985). Research shows that CAI produces longer retention and enhances students' learning rates, with students learning up to 32% faster than with traditional instruction alone (Capper & Copple, 1985; Kulik & Kulik, 1987). Furthermore, research findings suggest that the learning effects produced in CAI environments transfer to other contexts, helping students to generalize and apply what they have learned in CAI lessons to other areas, which can ultimately improve self-efficacy (Okolo, Bahr, & Rieth, 1993).

Although web-based instructional delivery platforms are still relatively new (Jones, 2003; Mioduser, Nachmias, & Lahav, 2000), research indicates that this technology is also effective. A qualitative review of studies examining the effectiveness of online learning concluded that web-based learning can be an effective intervention, which "when implemented judiciously and with attention to 'evidence based' practices, apparently can improve student academic performance" (Smith, Clark, & Blomeyer, 2005, p. 56).

“Students generally learned more in classes in which they received computer-based instruction…Educational researchers and developers are therefore no longer asking whether a computer revolution will occur in education. They are asking instead how it will occur” (Kulik & Kulik, 1987, p.1).


Research has also shown that web-based learning can have a positive impact on student learning over classroom-based instruction alone (Sitzmann, Kraiger, Stewart, & Wisher, 2006), especially in areas of applied instruction (Gerber, Shuell, & Harlos; Rich & Joyner, 1998). Through meta-analytic techniques, Sitzmann et al. found that web-based instruction alone was 6%35 more effective than classroom-based instruction for teaching declarative knowledge (defined as the ability to remember taught concepts). Moreover, when teachers used web-based instruction to supplement classroom instruction, it was 13%36 more effective than classroom instruction alone at teaching declarative knowledge, suggesting that a combination of the two formats may lead to better achievement than classroom instruction alone. Additionally, the ease of use and convenience of web-based technology allows students to become more involved in the instructional process, motivating them to become independent learners and more efficient planners of their own instructional needs (Frid, 2001).

Study Island is a computerized, web-based program that uses internet technology to deliver program content. Although students can use Study Island as a stand-alone product in a tutorial setting, teachers can also implement the program in conjunction with their classroom curriculum. Study Island can supplement and extend classroom instruction with mini-lessons and learning opportunities embedded within the practice. Additionally, Study Island can provide ongoing practice of already-taught standards to continually reinforce classroom learning.

STUDY ISLAND ENCOURAGES PARENTAL INVOLVEMENT

The flexibility of web-based instructional environments, such as Study Island, also affords parents the chance to play a larger role in their children's academic success by making it easier for parents to obtain access to ongoing reports of student achievement. Research has demonstrated that when parents are more involved, either at school or at home, students attain higher levels of achievement (Fan & Chen, 2001; Fehrmann, Keith, & Reimers, 1987; Stevenson & Baker, 1987).37, 38 Longitudinal analyses in both elementary and middle school grade levels have demonstrated that the effect of parental involvement on achievement is long lasting and leads to lower rates of grade retention and special education placement (Keith, Keith, Quirk, Sperduto, Santillo, & Killings, 1998; Miedel & Reynolds, 1999).39, 40 Furthermore, Izzo, Weissberg, Kasprow, and Fendrich (1999) suggest that enhancing the quality of parental involvement could be beneficial overall.

In their meta-analysis, which found a positive effect of parental involvement on student achievement,41 Fan and Chen (2001) reported that parents' aspirations and expectations for their children to do well mediate this effect. If parents have higher expectations for their children, they may be more apt to monitor their children's ongoing progress. Additional research supports this notion, finding that parents who demonstrate more concern and interest in their children's schoolwork have higher-achieving students (Englund, Luckner, Whaley, & Egeland, 2004).42

“Parental involvement works to influence children’s educational outcomes primarily through the mechanisms of modeling, reinforcement, and instruction, as tempered or mediated by parents’ selection of developmentally appropriate involvement strategies and the fit between parental involvement activities and the school’s expectation for their involvement” (Hoover-Dempsey & Sandler, 1995, p.326).


Research has documented that although the majority of parents would like to be more involved at their children's school, many factors, such as work schedules, transportation issues, and teacher-parent relationships, can negatively impact the level of parental involvement at school (Hoover-Dempsey & Sandler, 1997; Weiss et al., 2003). Therefore, parents in these situations are more likely to tend to their children's academic needs while at home (Christenson, Rounds, & Gorney, 1992; Hoover-Dempsey & Sandler, 1995), where they could likely benefit from web-based access to student progress.

The US Census Bureau reports that 67% of American households with school-aged children have a computer connected to the internet, although the magnitude of this percentage varies by demographics such as race, socioeconomic status (SES), and geographic location. Regardless, the report shows an overall increase in the use of computers to access the internet, and 89% of adult users report that using the internet is their main computing function (Day, Janus, & Davis, 2005). These results suggest that parents who have access to a computer and the internet would be likely to use these tools to obtain information about their students' academic progress. Recent research indicates that schools, in turn, are also making more information available to parents via the internet. Baker (2007) found, through a survey of school-based websites, that 70% offered information or mechanisms designed to promote parental involvement.

Although the technology is available, research on the usage and effectiveness of such information and tools is still limited. Marshall and Rossett (1997) suggest that the availability of links to general information, suggestions and tips to promote learning, the ability to help students practice taught concepts, and feedback regarding students' academic performance all facilitate parental involvement via the internet. Through a survey of parents with elementary or middle school aged children, Lishka (2002) found that parents, regardless of the age of the child or the length of their work schedule, favored using the internet to be more involved with their children's school, especially those parents who were already frequent users of the internet. Bouffard (2007) extended these findings longitudinally and found that students in 10th grade whose parents were involved with their children's school via the internet had higher math scores43 and decreased dropout rates in the 12th grade, even after controlling for prior achievement and other forms of communication with schools. Taken together, these findings suggest that increasing parental involvement using web technology is viable, and it may impact achievement in much the same way as general parental involvement can.

Study Island encourages parental involvement through its web-based platform. Parents can access student achievement reports from any computer connected to the internet, making it easier for parents to monitor student progress on an ongoing basis. Parents can view the expectations and standards for tested content areas and quickly determine whether a student is meeting those standards.
If a student demonstrates a need for extra practice on specific standards, parents can use the instructional lessons and problem explanations within the Study Island program to help students improve their performance. By allowing parents ongoing access to student achievement, Study Island can help foster higher expectations, as well as increased interest and involvement in students’ academic progress.


CONCLUSIONS

The presence of accountability for student achievement within our educational system is a certainty. Although the operational aspects of NCLB are undergoing revision during the reauthorization of the law, accountability will undoubtedly remain at the forefront. Ultimately, educators and administrators within individual school districts will bear the responsibility to meet these accountability requirements. Concomitant with this responsibility is an overall desire for educators to promote achievement and academic success for all students. In order to meet these goals, educators need effective research-based tools to both monitor and advance student progress.

Study Island incorporates several research-based principles in order to support students and schools in meeting their accountability goals in all major content areas. Through a dynamic and interactive web-based tool, the design of Study Island builds on the following critical instructional elements evidenced in the literature:

• content that is developed from specific state standards
• diagnostic, formative, and summative results
• assessment feedback loops
• ongoing and distributed skill practice
• motivational components
• a variety of instructional formats
• dynamic and generative content
• online learning
• parental involvement

As outlined in this review, research demonstrates that instruction, practice, and assessments that aim toward the mastery of state standards are essential to achieving accountability goals. Recurring progress monitoring that provides immediate feedback, followed by quick remediation and practice, can promote standards-mastery instruction. Additionally, the expansiveness of new technology platforms provides effective systems to further achievement efficiently through flexible and differentiated instructional formats that make use of dynamic and generative content. This allows instruction to meet students at their individual levels of need. Such personalized, technology-based learning environments provide a motivating context for students to practice and build skills over time toward mastery and allow students, teachers, and parents to share in the responsibility of monitoring achievement progress, taking the exclusive burden of accountability off individual schools and districts.


FOOTNOTES

1 In their meta-analysis of the impact of the frequency of testing, Bangert-Drowns, Kulik, and Kulik (1991) found that the average effect size across studies that investigated the impact of frequent testing was 0.23, meaning that frequent testing raised achievement scores by 0.23 standard deviations. On average, the student who had frequent assessments outperformed 59% of the students who were not frequently tested. Furthermore, they found that when groups of students who were frequently tested were compared with groups of students who received no interim tests, the frequently tested groups typically scored about half a standard deviation higher on a criterion examination.

2 Through HLM analysis, Gamoran et al. (1997) found that students in math courses with the highest alignment grew significantly over the period (an average of 1.13 points per period, significant at p < 0.01). Students in math courses with lower alignment did not grow significantly (although they grew an average of 0.66 points over the period, this growth was not significant at p < 0.10). Additionally, their results suggested that when they controlled for content coverage, differences in student achievement within different courses decreased and were not significantly different from one another.

3 Black and Wiliam’s (1998a) qualitative review of existing studies suggested that the consensus within the literature was that teachers must use assessment to adjust their practices in order to have an impact on student achievement. Through descriptive case study research, Jinkins (2001) found that when an assessment feedback loop was applied in practice over a 12-week period, seven of nine students made large gains in achievement, with an average gain of two reading levels, which is equivalent to half a year’s gain.

4 Wiliam et al. (2004) found that when teachers participated in a program to develop and implement formative assessment strategies, such as self-questioning, within the classroom to determine if students are understanding taught concepts, student achievement improved (the mean effect size in favor of the intervention was 0.32). Additionally, analysis of student achievement within each class individually showed that the effect sizes were more consistent for those teachers who were observed to be experts at implementing the formative assessment strategies (average ES = 0.25 with a hinge-spread of 0.07).

5 Black and Wiliam’s (1998a) review of studies looking at the effect of formative assessment on achievement found that effect sizes ranged from 0.4 to 0.7.

6 Through meta-analytic procedures investigating the effect of formative assessment on increasing student achievement, Fuchs and Fuchs (1986) found an average weighted effect size of 0.70.

7 Rohm et al. (1986) found that repeated cumulative assessment promoted higher achievement than repeated single-unit testing in weekly testing conditions (F(2, 24) = 7.15, p < 0.004).

8 Through meta-analytic procedures, Baker et al. (2002) found that providing feedback to students, either by teachers or computers, has a positive effect on their subsequent achievement (average ES of 0.57).

9 Kulik and Kulik (1988) found an average effect size of 0.28. The average student receiving immediate feedback was at the 50th percentile while the average student receiving delayed feedback was at the 61st percentile.

10 Bangert-Drowns, Kulik, Kulik, and Morgan (1991) found an average effect size of 0.31 when learners are guided or given the correct answer.

11 Kulhavy et al. (1989) standardized score distributions and plotted the discrepancy × feedback time functions separately for matched and mismatched feedback (i.e., if an individual was confident in his or her answer and got the question correct versus if an individual was confident in his or her answer but got the question wrong) and found that the difference between the functions was statistically significant (p < 0.05).

12 Marzano et al. (2001) found an effect size of 0.61 in support of providing feedback to increase student achievement.
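The percentile comparisons quoted in footnotes 1 and 9 follow from the usual normal-distribution reading of a standardized effect size. The following is a minimal worked illustration of that conversion, assuming normally distributed scores with equal variances; it is not a calculation reported by the cited authors:

\[
\text{percentile of the average treated student} = \Phi(d), \qquad \Phi(0.23) \approx 0.59, \quad \Phi(0.28) \approx 0.61, \quad \Phi(0.50) \approx 0.69
\]

where \(d\) is the effect size in standard deviation units and \(\Phi\) is the standard normal cumulative distribution function. For example, the effect size of 0.23 in footnote 1 places the average frequently tested student at roughly the 59th percentile of the comparison group.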


13 Through ANOVA, Camp et al. (2001) found that when a computer provided questions selected dynamically based on performance and judged mental effort, individuals had significantly better training efficiency than individuals receiving a fixed sequence of questions (F(2, 50) = 16.7, p < 0.001).

14 In a pilot study that investigated the effects of personalized instruction, Corbalan et al. (2006) found that individuals in personalized learning environments had significantly higher mean performance scores (ES of 0.25) and lower invested mental effort scores (ES of 0.37) than students in fixed-sequence environments.

15 Kalyuga and Sweller (2004) found that when a computer adapted learning tasks to individuals’ expertise, individuals experienced marginally significantly higher average test scores (t(14) = 1.51, p < 0.1, ES = 0.55) and significantly higher gains in efficiency (t(14) = 1.89, p < 0.05, ES = 0.69).

16 Salden et al. (2006) found through ANOVA that dynamic task selection in general leads to more efficient training than fixed-sequence task selection (F(2, 44) = 14.1, p < 0.0001).

17 In their meta-analysis, Marzano et al. (2001) found an effect size of 0.77 in support of the impact practice has on student achievement.

18 Mayer (1983) found that there was an overall effect for multiple presentations of material (F(2, 57) = 21.20, p < 0.001), and planned comparisons among different numbers of presentations indicated that the groups that had the most presentations of the material were able to recall significantly more material (p < 0.05) than groups that had fewer presentations of the material. Additionally, Mayer found an overall effect indicating that students who saw multiple presentations of the material were able to remember more conceptual information about the material (F(10, 285) = 4.11, p < 0.001). Planned comparisons again revealed that those students who saw the most presentations remembered more conceptual information than those who saw the fewest (p < 0.05).

19 In a meta-analytic review, Cepeda et al. (2006) found that, on average, spaced presentations led to better test performance than massed presentations (t(540) = 6.6, p < 0.001; note that there was not enough information for the authors to provide effect size data, so independent-sample t-tests were used for this analysis).

20 Through meta-analysis, Donovan and Radosevich (1999) found an average effect size of 0.46 in support of spaced repetitions.

21 Janiszewski et al.’s (2003) meta-analysis found that, overall, spaced presentations were significantly better than massed presentations (r = 0.339, combined Z = 36.83, p < 0.001; fail-safe N = 148,979; note that Rosenthal’s effect size r was used in lieu of Cohen’s d; qualitatively, the closer the r value is to 1.0, the larger the effect).

22 In a meta-analytic review, Donovan and Radosevich (1999) found that task complexity and time between presentations interact in their effect on spaced presentations. Tasks of high complexity alone had an overall lower effect on the amount of material recalled (ES = 0.07), but if the presentation of cognitively complex material was spaced out over time, the effect was improved (ES = 0.77). Additional analysis compared shorter intervals to longer intervals (ES = 0.24 compared to ES = 0.77) and found that the increase in effect size was significant (QB(1) = 28.65, p < 0.01) in favor of longer intervals.

23 Janiszewski et al. (2003) reported Rosenthal effect sizes for each of their findings, as well as the average percent of material recalled within these studies. Note that each of these effects was larger than its comparisons, but only the largest are presented here: intentional learning (r = 0.352, combined Z = 29.83, p < 0.001; fail-safe N = 78,050; average percent recall score of 52%); semantically complex material (r = 0.586, combined Z = 14.08, p < 0.001; fail-safe N = 867; average percent recall of 53%); the complexity of intervening material (r = 0.331, combined Z = 32.39, p < 0.001; fail-safe N = 83,149; average percent recall score of 51%); meaningful stimuli (r = 0.335, combined Z = 36.44, p < 0.001; fail-safe N = 125,378; average percent recall score of 52%).

24 Through meta-analysis, Cepeda et al. (2006) found that spaced repetitions spread out over multiple days were more effective than spaced repetitions within one day (t(37) = 1.7, p < 0.09), and the longer the interval between repetitions, the better (t(17), p < 0.01; note that there was not enough information for the authors to provide effect size data, so independent-sample t-tests were used for this analysis).
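Footnotes 19–24 mix Cohen-style effect sizes (d) with Rosenthal’s correlation-based effect size (r), which can make the spacing-effect results hard to compare. One common conversion, shown here purely as an illustrative aid and not as a figure reported by Janiszewski et al. (2003), is:

\[
d = \frac{2r}{\sqrt{1 - r^{2}}}, \qquad r = 0.339 \;\Rightarrow\; d \approx 0.72
\]

On this reading, the overall spacing effect in footnote 21 is of roughly the same order as the d-based estimates for spaced practice reported in footnotes 20 and 22.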


25 Guthrie et al. (1996) found that 85% of the students who had increased in intrinsic motivation had increased frequency and depth of reading. Of the students who decreased in intrinsic motivation, 70% also showed decreases in the frequency and depth of reading. This association was statistically significant (χ2(1, N = 20) = 4.06, p < 0.05).

26 Through a hierarchical regression analysis, Stanovich and Cunningham (1993) found that print exposure accounted for 37.1% of the variance in general knowledge measures (such as cultural literacy and practical knowledge) after indicators of general ability were factored out (such as intelligence, comprehension ability, and GPA), and it was a significant and unique factor in the model (p < 0.05). A factor analysis corroborated these results, finding that the print exposure and general knowledge variables loaded on the same factor.

27 Through meta-analytical techniques, Marzano et al. (2001) found that providing students with performance-contingent recognition had a positive effect on student motivation and achievement (ES = 0.80).

28 Through a meta-analysis of the effect of intrinsic motivation on task performance, Patall et al. (2008) found an average effect size of 0.37, which was significantly different from 0 (Q(12) = 38.73, p < 0.001). Note that in this study the authors defined task performance as the accuracy of performance, the quantity of a completed task, or the difference between a pre- and post-test.

29 Lovelace (2005) investigated the effects of a learning-style matching model on student achievement and attitude through a meta-analysis and concluded that there was an average effect size of 0.80 for both achievement and attitude.

30 Yourstone et al. (2008) compared growth in achievement over time between classrooms that used clickers to answer questions on quizzes and classrooms that used traditional paper-and-pencil methods. The authors provided results for two individual instructors, and in both cases, the students who used clickers had higher growth in achievement (one class had significant growth at p < 0.008 and the other had marginally significant growth at p < 0.075).

31 Kulik and Kulik (1991) reported an effect size of 0.30 in support of CAI across elementary and secondary grade levels.

32 Kulik et al. (1983) found an average effect size of 0.32 for students in grades 6–12 who used computers to learn course content.

33 The research of Kulik et al. (1985) resulted in an effect size of 0.47 in support of CAI.

34 Hasselbring (1986) reviewed findings from multiple research reports to make these qualitative conclusions.

35 Sitzmann et al. (2006) reported an effect size of 0.15 for the effect of web-based instruction over classroom instruction.

36 Sitzmann et al. (2006) reported an effect size of 0.52 for the effect of web-based instruction used as a supplement to classroom instruction.

37 Through path analysis, Fehrmann et al. (1987) found an overall direct and meaningful path coefficient of 0.129 to support the effect of parental involvement on achievement.

38 Through regression analysis, Stevenson and Baker (1987) found that parental involvement is a significant predictor of student performance; with the addition of parental involvement to an equation that included mother’s education level and the child’s age and gender, the R2 more than tripled (from 0.04 to 0.15). This indicates that including parental involvement in the model made it a better predictor of student performance.
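The "more than tripled" claim in footnote 38 can be restated in variance-explained terms. The arithmetic below is an illustration of the standard interpretation of R2 in hierarchical regression, not an analysis reported by Stevenson and Baker (1987):

\[
\Delta R^{2} = 0.15 - 0.04 = 0.11, \qquad \frac{0.15}{0.04} = 3.75
\]

That is, adding parental involvement raises the share of variance in student performance explained by the model from 4% to 15%, an increase of about 11 percentage points and a factor of nearly four.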


39 Through structural equation modeling, Keith et al. (1998) found that parental involvement in eighth grade had a strong effect on student GPA measured in tenth grade. The model showed that each standard deviation increase in eighth-grade parental involvement could result in a 0.25 standard deviation increase in GPA.

40 Miedel and Reynolds (1999) found through regression analysis that there was a marginally significant association between the frequency of parental involvement in the earlier grades and eighth-grade reading achievement (β = 1.98, p < 0.10) and a significant association between the number of parental involvement activities and eighth-grade reading achievement (β = 1.58, p < 0.001). Additional analysis found that children whose parents were involved with school-related activities on a weekly basis or more had a 38% lower grade retention rate, and the frequency of parental involvement was marginally associated with the time a child spent in special education services through the eighth grade (β = 2.17, p < 0.07).

41 Fan and Chen (2001) found an effect size of 0.52 across the studies examined.

42 Through path analysis, Englund et al. (2004) showed that parent expectations in first grade had indirect effects on children’s achievement in third grade (β = 0.11, t = 2.30, p < 0.05).

43 Through structural equation modeling, Bouffard (2007) found that models using both a general communication variable and the frequency of communication fit the data well (χ2/df = 4.86, TLI = 0.95, RMSEA = 0.04, and χ2/df = 4.57, TLI = 0.96, RMSEA = 0.04, respectively).
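Footnote 39 reports a standardized path coefficient. The following restatement, offered only as an interpretive sketch under the usual standardized-regression reading and not as an equation from Keith et al. (1998), makes the size of that effect concrete:

\[
\hat{y}_{\text{GPA}} = 0.25\, x_{\text{involvement}} \quad (\text{both in SD units}), \qquad x = 1 \;\Rightarrow\; \hat{y} = 0.25
\]

In other words, a student whose parents are one standard deviation above average on eighth-grade involvement is predicted, other influences in the model held constant, to have a tenth-grade GPA about one quarter of a standard deviation above average.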


REFERENCES

Alao, S., & Guthrie, J. T. (1999). Predicting conceptual understanding with cognitive and motivational variables. The Journal of Educational Research, 92, 243–254.

American Association for Higher Education (1991). Nine principles of good practice for assessing student learning. Sterling, VA: Stylus.

Apel, K., & Swank, L. K. (1999). Second chances: Improving decoding skills in the older student. Language, Speech, and Hearing Services in Schools, 30, 231–242.

Arendasy, M., & Sommer, M. (2007). Using psychometric technology in educational assessment: The case of a schema-based isomorphic approach to the automatic generation of quantitative reasoning items. Learning and Individual Differences, 17, 366–383.

Arendasy, M., Sommer, M., Gittler, G., & Hergovich, A. (2006). Automatic generation of quantitative reasoning items: Pilot study. Journal of Individual Differences, 27, 2–14.

Baker, E. A. (2007). Elementary classroom websites. Journal of Literacy Research, 39, 1–36.

Baker, S., Gersten, R., & Lee, D. S. (2002). A synthesis of empirical research on teaching mathematics to low-achieving students. The Elementary School Journal, 103, 51–73.

Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. C. (1991). Effects of frequent classroom testing. The Journal of Educational Research, 85, 89–99.

Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61, 213–238.

Bejar, I. I., Lawless, R. R., Morley, M. E., Wagner, M. E., Bennett, R. E., & Revuelta, J. (2002). A feasibility study of on-the-fly item generation in adaptive testing (GRE Board Report No. 98-12P). Princeton, NJ: Educational Testing Service.

Biancarosa, G., & Snow, C. E. (2004). Reading next—a vision for action and research in middle and high school literacy: A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education.

Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education, 5, 7–74.

Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 81(2), 139–148.

Boggiano, A. K., Harackiewicz, J. M., Bessette, J. M., & Main, D. S. (1986). Increasing children's interest through performance-contingent reward. Social Cognition, 3, 400–411.


Boggiano, A. K., & Ruble, D. N. (1979). Competence and the overjustification effect: A developmental study. Journal of Personality and Social Psychology, 37, 1462–1468.

Bouffard, S. M. (2006). “Virtual” parent involvement: The role of the internet in parent-school communication. Unpublished doctoral dissertation, Durham, NC: Duke University.

Braun, L. (1993). Help for all the students. Communications of the Association for Computing Machinery, 36(5), 66–69.

Camp, G., Paas, F., Rikers, R., & van Merriënboer, J. (2001). Dynamic problem selection in air traffic control training: A comparison between performance, mental effort and mental efficiency. Computers in Human Behavior, 17, 575–595.

Capper, J., & Copple, C. (1985). Computer use in education: Research review and instructional implications. Washington, DC: Center for Research into Practice.

Cassarà, S. (2004). Many nations: Building community in the classroom. Assessment Update, 16(3), 9–10.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 123, 354–380.

Christenson, S. L., Rounds, T., & Gorney, D. (1992). Family factors and student achievement: An avenue to increase students’ success. School Psychology Quarterly, 7, 178–206.

Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2006). Towards a personalized task selection model with shared instructional control. Instructional Science, 34, 399–422.

D'Arcy, C. J., Eastburn, D. M., & Bruce, B. C. (in press). How media ecologies can address diverse student needs. Retrieved June 11, 2008, from http://www.isrl.uiuc.edu/~chip/pubs/08media-ls/ls.pdf

Daiute, C., & Morse, F. (1994). Access to knowledge and expression: Multimedia writing tools for students with diverse needs and strengths. Journal of Special Education Technology, 12, 221–256.

Day, J. C., Janus, A., & Davis, J. (2005). Computer and internet use in the United States: 2003. Washington, DC: US Census Bureau.

Dempster, F. N. (1991). Synthesis of research on reviews and tests. Educational Leadership, 48(7), 71–76.

Dodds, P., & Fletcher, J. D. (2004). Opportunities for new “smart” learning environments enabled by next-generation web capabilities. Journal of Educational Multimedia and Hypermedia, 13, 391–404.


Dole, J., Sloan, C., & Trathen, W. (1995). Teaching vocabulary within the context of literature. Journal of Reading, 38, 452–460.

Donovan, J. J., & Radosevich, D. J. (1999). A meta-analytic review of the distribution of practice effect: Now you see it, now you don’t. Journal of Applied Psychology, 84, 795–805.

Duke, N. K., & Pearson, P. D. (2002). Effective practices for developing reading comprehension. In A. E. Farstrup & S. J. Samuels (Eds.), What research has to say about reading instruction (pp. 205–243). Newark, DE: International Reading Association.

Elmes, D. G., Dye, C. J., & Herdelin, N. J. (1983). What is the role of affect in the spacing effect? Memory and Cognition, 11, 144–151.

Englund, M. M., Luckner, A. E., Whaley, G. J. L., & Egeland, B. (2004). Children’s achievement in early elementary school: Longitudinal effects of parental involvement, expectations, and quality of assistance. Journal of Educational Psychology, 96, 723–730.

Fan, X., & Chen, M. (2001). Parental involvement and students’ academic achievement: A meta-analysis. Educational Psychology Review, 13, 1–22.

Fehrmann, P. G., Keith, T. Z., & Reimers, T. M. (1987). Home influence on school learning: Direct and indirect effects of parental involvement on high school grades. Journal of Educational Research, 80, 330–337.

Frid, S. (2001). Supporting primary students’ on-line learning in a virtual enrichment program. Research in Education, 66, 9–27.

Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199–208.

Gamoran, A., Porter, A. C., Smithson, J., & White, P. A. (1997, Winter). Upgrading high school mathematics instruction: Improving learning opportunities for low-achieving, low-income youth. Educational Evaluation and Policy Analysis, 19, 325–338.

Gerber, S., Shuell, T. J., & Harlos, C. A. (1998). Using the internet to learn mathematics. Journal of Computers in Mathematics and Science Teaching, 17, 113–132.

Gorin, J. S. (2005). Manipulating processing difficulty of reading comprehension questions: The feasibility of verbal item generation. Journal of Educational Measurement, 42, 351–373.

Guthrie, J. T., & Davis, M. H. (2003). Motivating struggling readers in middle school through an engagement model of classroom practice. Reading and Writing Quarterly, 19, 59–85.

Guthrie, J. T., Van Meter, P., McCann, A. D., Wigfield, A., Bennett, L., Poundstone, C. C., et al. (1996). Growth of literacy engagement: Changes in motivations and strategies during concept-oriented reading instruction. Reading Research Quarterly, 31, 306–332.


Guthrie, J. T., Wigfield, A., Metsala, J. L., & Cox, K. E. (1999). Motivational and cognitive predictors of text comprehension and reading amount. Scientific Studies of Reading, 3, 231–256.

Hamilton, L. S., Stecher, B. M., Marsh, J. A., McCombs, J. S., Robyn, A., Russell, J. L., et al. (2007). Standards-based accountability under No Child Left Behind: Experiences of teachers and administrators in three states. Santa Monica, CA: RAND Corporation.

Hannaford, A. E. (1993). Computers and exceptional individuals. In J. D. Lindsey (Ed.), Computers and exceptional individuals (2nd ed., pp. 3–26). Austin, TX: Pro-Ed.

Harackiewicz, J. M. (1979). The effects of reward contingency and performance feedback on intrinsic motivation. Journal of Personality and Social Psychology, 37, 1352–1361.

Harackiewicz, J. M., & Manderlink, G. (1984). A process analysis of the effects of performance-contingent rewards on intrinsic motivation. Journal of Experimental Social Psychology, 20, 531–551.

Hasselbring, T. S. (1986). Research on the effectiveness of computer-based instruction: A review. International Review of Education, 32, 313–325.

Herman, J. L., & Baker, E. L. (2005). Making benchmark testing work. Educational Leadership, 63(3), 48–55.

Hoover-Dempsey, K. V., & Sandler, H. M. (1995). Parental involvement in children’s education: Why does it make a difference? Teachers College Record, 97, 310–331.

Hoover-Dempsey, K. V., & Sandler, H. M. (1997). Why do parents become involved in their children’s education? Review of Educational Research, 67, 3–42.

Irvine, S. H., & Kyllonen, P. C. (2002). Item generation for test development. Mahwah, NJ: Erlbaum.

Ivey, G., & Broaddus, K. (2000). Tailoring the fit: Reading instruction and middle school readers. The Reading Teacher, 54, 68–78.

Izzo, C. V., Weissberg, R. P., Kasprow, W. J., & Fendrich, M. (1999). Longitudinal assessment of teacher perceptions and parent involvement in children’s education and school performance. American Journal of Community Psychology, 27, 817–839.

Janiszewski, C., Noel, H., & Sawyer, A. G. (2003). A meta-analysis of the spacing effect in verbal learning: Implications for research on advertising repetition and consumer memory. Journal of Consumer Research, 30, 138–149.

Jinkins, D. (2001). Impact of the implementation of the teaching/learning cycle on teacher decision-making and emergent readers. Reading Psychology, 22, 267–288.

Jones, K. (2003). Using the internet in the teaching and learning of mathematics: A research bibliography. Micromath, 19, 43–44.


Kalyuga, S., & Sweller, J. (2005). Rapid dynamic assessment of expertise to optimize the efficiency of e-learning. Educational Technology, Research and Development, 53(3), 83–93.

Keith, T. Z., Keith, P. B., Quirk, K. J., Sperduto, S., Santillo, S., & Killings, S. (1998). Longitudinal effects of parent involvement on high school grades: Similarities and differences across gender and ethnic groups. Journal of School Psychology, 36, 335–363.

Kilpatrick, J., Swafford, J., & Findell, B. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academy Press.

Klinger, J., Vaughn, S., & Schumm, J. (1998). Collaborative strategic reading during social studies in heterogeneous fourth-grade classrooms. The Elementary School Journal, 99, 3–22.

Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1, 279–308.

Kulik, C. C., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7, 75–94.

Kulik, J. A., & Kulik, C. C. (1987, February–March). Computer-based instruction: What 200 evaluations say. Paper presented at the Annual Convention of the Association for Educational Communications and Technology, Atlanta, GA. (ERIC Document Reproduction Service No. ED 285 521)

Kulik, J. A., & Kulik, C. C. (1988). Timing of feedback and verbal learning. Review of Educational Research, 58, 79–97.

Kulik, J. A., Bangert, R. L., & Williams, G. W. (1983). Effects of computer-based teaching on secondary school students. Journal of Educational Psychology, 75, 19–26.

Kulik, J. A., Kulik, C. C., & Bangert-Drowns, R. L. (1985). Effectiveness of computer-based education in elementary schools. Computers in Human Behavior, 1, 59–74.

Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment: Minute by minute, day by day. Educational Leadership, 63(3), 19–24.

Lishka, S. (2002). Using the internet to increase parent-school communication: A survey of parent interest and intended use of school web sites. Unpublished doctoral dissertation, Hartford, CT: University of Hartford.

Lovelace, M. K. (2005). Meta-analysis of experimental research based on the Dunn and Dunn model. The Journal of Educational Research, 98, 176–183.

Marshall, J., & Rossett, A. (1997). How technology can forge links between school and home. Electronic School Online. Retrieved June 11, 2008, from http://www.electronic-school.com/0197f3.html


Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.

Mayer, R. E. (1983). Can you repeat that? Qualitative effects of repetition and advance organizers on learning from science prose. Journal of Educational Psychology, 75, 40–49.

McKeachie, W. (1990). Research on college teaching: The historical background. Journal of Educational Psychology, 82, 189–200.

Miedel, W. T., & Reynolds, A. J. (1999). Parent involvement in early intervention for disadvantaged children: Does it matter? Journal of School Psychology, 37, 379–402.

Mioduser, D., Nachmias, R., & Lahav, O. (2000). Web-based learning environments: Current pedagogical and technological state. Journal of Research on Computing in Education, 33, 55–76.

Niemi, D., Vallone, J., Wang, J., & Griffin, N. (2007). Recommendations for building a valid benchmark assessment system: Interim report to the Jackson Public Schools (CRESST Report 723). Los Angeles, CA: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing.

No Child Left Behind (NCLB) Act of 2001, Pub. L. No. 107-110, §115, Stat. 1425 (2002).

Okolo, C. M., Bahr, C. M., & Rieth, H. J. (1993). A retrospective view of computer-based instruction. Journal of Special Education Technology, 12, 1–27.

Ormes, C. (1992). Science and ESL instruction: Videodisc does it all for Florida elementary school. Technological Horizons in Education Journal, 20(2), 40–42.

Patall, E. A., Cooper, H., & Robinson, J. C. (2008). The effects of choice on intrinsic motivation and related outcomes: A meta-analysis of research findings. Psychological Bulletin, 134, 270–300.

Porter, A. C., & Smithson, J. L. (2002, April). Alignment of assessments, standards, and instruction using curriculum indicator data. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA.

Reay, W. R., Bao, L., Li, P., Warnakulasooriya, R., & Baugh, G. (2005). Toward the effective use of voting machines in physics lectures. American Journal of Physics, 73, 554–558.

Reeve, J., & Deci, E. L. (1996). Elements of the competitive situation that affect intrinsic motivation. Personality and Social Psychology Bulletin, 22, 24–33.

Relan, A. (1992, February). Motivational strategies in computer-based instruction: Some lessons from theories and models of motivation. In Proceedings of selected research and development presentations at the Convention of the Association for Educational Communications and Technology. (ERIC Document Reproduction Service No. ED 348 017)


Rich, W., & Joyner, J. (2002). Using interactive web sites to enhance mathematics learning. Teaching Children Mathematics, 8, 380–383.

Roach, A. T., Elliott, S. N., & Webb, N. L. (2005). Alignment of an alternate assessment with state academic standards. The Journal of Special Education, 38, 218–231.

Rohm, R. A., Sparzo, F. J., & Bennett, C. M. (1986). College student performance under repeated testing and cumulative conditions: Report on five studies. The Journal of Educational Research, 80, 99–104.

Rothman, R., Slattery, J. B., Vranek, J. L., & Resnick, L. B. (2002). Benchmarking and alignment of standards and testing (Technical Report 566). Los Angeles: National Center for Evaluation, Standards, and Student Testing.

Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, 54–67.

Salden, R. J. C. M., Paas, F., Broers, N. J., & van Merriënboer, J. J. G. (2004). Mental effort and performance as determinants for the dynamic selection of learning tasks in air traffic control training. Instructional Science, 32, 153–172.

Schiffman, G., Tobin, D., & Buchanan, B. (1982). Microcomputer instruction for the learning disabled. Journal of Learning Disabilities, 15, 557–559.

Sewall, T. J. (1986). The measurement of learning style: A critique of four assessment tools. Green Bay, WI: University of Wisconsin at Green Bay, Assessment Center. (ERIC Document Reproduction Service No. ED 267247)

Shields, P., Esch, C., Lash, A., Padilla, C., Woodworth, K., LaGuardia, K., et al. (2004). Evaluation of Title I accountability systems and school improvement: First year findings. Washington, DC: US Department of Education.

Smith, D. (1977). College classroom interactions and critical thinking. Journal of Educational Psychology, 69, 180–190.

Smith, R., Clark, T., & Blomeyer, R. L. (2005). A synthesis of new research of K-12 online learning. Naperville, IL: North Central Regional Educational Laboratory. Retrieved June 17, 2008, from http://www.ncrel.org/tech/synthesis/synthesis.pdf

Snow, C. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND.

Soe, K., Koki, S., & Chang, J. M. (2000). Effect of computer-assisted instruction (CAI) on reading achievement: A meta-analysis. Honolulu, HI: Pacific Resources for Education and Learning.


Stanovich, K., & Cunningham, A. (1993). Where does knowledge come from? Specific associations between print exposure and information acquisition. Journal of Educational Psychology, 85, 211–230.

Stennett, R. G. (1985). Computer-assisted instruction: A review of the reviews. London: The Board of Education for the City of London. (ERIC Document Reproduction Service No. ED 260 687)

Stevenson, D. L., & Baker, D. P. (1987). The family–school relation and the child’s school performance. Child Development, 58, 1348–1357.

Stiggins, R. J. (1999). Assessment, student confidence, and school success. Phi Delta Kappan, 81(3), 191–198.

Taylor, L., & Adelman, H. (1999). Personalizing classroom instruction to account for motivational and developmental factors. Reading & Writing Quarterly, 15, 255–276.

Tomlinson, C. A. (2000). Reconcilable differences? Standards-based teaching and differentiation. Educational Leadership, 58(1), 6–11.

VanLoy, W. J. (1996). A comparison of adaptive self-referenced testing and classical approaches to the measurement of individual change. Unpublished doctoral dissertation, Minneapolis, MN: University of Minnesota.

Webb, N. L. (1997). Criteria for alignment of expectations and assessments in mathematics and science education (NISE Research Monograph No. 6). Madison, WI: University of Wisconsin-Madison, National Institute for Science Education.

Webb, N. L. (2002, April). An analysis of the alignment between mathematics standards and assessments for three states. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Weiss, D. J. (2004). Computerized adaptive testing for effective and efficient measurement in counseling and education. Measurement and Evaluation in Counseling and Development, 37, 70–84.

Weiss, D. J., & Kingsbury, G. G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361–375.

Weiss, H. B., Mayer, E., Kreider, H., Vaughn, M., Dearing, E., Hencke, R., et al. (2003). Making it work: Low-income mothers’ involvement in their children’s education. American Educational Research Journal, 40, 879–901.

Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education, 11, 49–65.

Wolf, P. J. (2007). Academic improvement through regular assessment. Peabody Journal of Education, 82, 690–702.


Worthy, J., Moorman, M., & Turner, M. (1999). What Johnny likes to read is hard to find in school. Reading Research Quarterly, 34, 12–27.

Yin, L. R. (2001). Dynamic learning patterns: Temporal characteristics demonstrated by the learner. Journal of Educational Multimedia and Hypermedia, 10, 273–284.

Yourstone, S. A., Kraye, H. S., & Albaum, G. (2008). Classroom questioning with immediate electronic response: Do clickers improve learning? Decision Sciences Journal of Innovative Education, 6, 75–88.

Zahorik, J. A. (1996). Elementary and secondary teachers’ reports of how they make learning interesting. The Elementary School Journal, 96, 551–565.