

18 January/February 2010, Volume 64, Number 1

Research Scholars Initiative

Effect of Assistive Technology in a Public School Setting

KEY WORDS • education, special • educational measurement • learning disorders • outcome assessment (health care) • self-help devices

Anne H. Watson, PhD, OT/L, is School Occupational Therapist, Arlington Public Schools, 1401 North Greenbrier Street, Arlington, VA 22205; anne_watson@apsva.us or [email protected]

Max Ito, PhD, OTR/L, is Director, Doctoral Program, and Associate, Nova Southeastern University, Fort Lauderdale, FL.

Roger O. Smith, PhD, OT, FAOTA, is RESNA Fellow; Interim Associate Dean, Graduate Studies, College of Health Sciences; Professor, Department of Occupational Therapy, University of Wisconsin–Milwaukee; and Director, Rehabilitation Research Design and Disability (R2D2) Center.

Lori T. Andersen, EdD, OTR/L, FAOTA, is Educational Consultant, Buford, GA.

The Individuals With Disabilities Education Improvement Act of 2004 (IDEA) requires that assistive technology (AT) be considered at the yearly individualized education program (IEP) meeting of every student in special education. IDEA also directs that AT be implemented on the basis of peer-reviewed literature despite a paucity of research on AT’s effectiveness in the public schools. This repeated-measures quasi-experimental study explored AT’s effect in a public school special education setting. Participants (N = 13) were a heterogeneous group of students in 1 school system who were newly provided AT to address academic and communication goals in one school year. Results suggest that relative to other interventions, AT provided by a multidisciplinary team may have a significant effect on IEP goal improvement (t[12] = 5.54, p = .00) for students in special education (F[2] = 9.35, p = .00), which may support AT’s use in special education by occupational therapists as directed by IDEA.

Watson, A. H., Ito, M., Smith, R. O., & Andersen, L. T. (2010). Effect of assistive technology in a public school setting. American Journal of Occupational Therapy, 64, 18–29.

Anne H. Watson, Max Ito, Roger O. Smith, Lori T. Andersen

The Individuals With Disabilities Education Improvement Act of 2004 (IDEA) directs that individualized education program (IEP) teams must consider assistive technology (AT) during the drafting of every student’s IEP. IDEA regulations regarding the consideration of AT allow IEP teams and school district administrators the discretion to provide or forgo AT as an intervention strategy without directing when or how AT intervention may be appropriate (U.S. Department of Education, 2006, 34 C.F.R. §§ 300.105, 300.324(a)(2)(v)). Both IDEA and the U.S. Department of Education regulations for IDEA require, however, that incorporation of supplementary aids and services (which include AT) be “based on peer-reviewed research to the extent practicable” (U.S. Department of Education, 2006, § 300.320(a)(4)).

Despite IDEA’s direction to consider AT at every IEP meeting and to base AT service provision on peer-reviewed research, little evidence exists regarding AT’s effectiveness in the public school setting. This gap creates the possibility that an IEP team may decline to implement AT, a potentially effective intervention strategy for helping public school students in special education meet their educational goals and objectives.

The literature promotes the importance of documenting AT outcomes; however, the literature lacks empirical studies of AT’s effectiveness. Many discussions address the problems of measuring AT outcomes, including that clients who use AT are typically a heterogeneous group using a wide variety of AT with endless customizations. High participant heterogeneity creates difficulty in measuring outcomes because a single instrument’s items may not be applicable to some or many of the participants (DeRuyter, 1997; Fuhrer, 2001; Gelderblom & de Witte, 2002; Jutai, Fuhrer, Demers, Scherer, & DeRuyter, 2005; Minkel, 1996; RESNA, 1998a, 1998b, 1998c; Smith, 1996).

Downloaded From: http://ajot.aota.org/pdfaccess.ashx?url=/data/journals/ajot/930031/ on 06/14/2017 Terms of Use: http://AOTA.org/terms


The American Journal of Occupational Therapy 19

A lack of reliable, validated measurement tools adds to the difficulty of conducting empirical studies. The AT outcome tools that do have reliability and validity information are measures of user-reported satisfaction. Many authors have indicated that the domains of satisfaction and subjective well-being have shortcomings as outcome measures because multiple factors contribute to these constructs, including a person’s expectations for AT (Fuhrer, 2001).

The field of AT outcomes is relatively new. Early AT outcome studies investigated discontinuance or abandonment of AT. These early studies found a discontinuance rate of between 8% and 75% (DeRuyter, 1997; Riemer-Ross & Wacker, 2000; Scherer, 1996). A later study by Riemer-Ross and Wacker (2000) sought to identify factors among seven independent variables, including three service delivery variables, associated with the dichotomous dependent variable of use versus nonuse. Their study of 115 participants found three of the seven independent variables to be significantly associated with continued use of AT: relative advantage (advantage of AT over other interventions or methods), compatibility (how well the AT fits the consumer’s needs), and consumer involvement (the consumer’s ability to voice an opinion in the selection of AT).

Many studies subsequent to the abandonment studies used client satisfaction (a subjective measure without regard for clients’ preintervention expectations) as an outcome measure (Kohn, LeBlanc, & Mortola, 1994; Lenker, Scherer, Fuhrer, Jutai, & DeRuyter, 2005; Wuolle et al., 1999). Another study included societal costs in the form of institutionalization as an outcome measure (Mann, Ottenbacher, Fraas, Tomita, & Granger, 1999). An early longitudinal study (conducted over 3 years) compared technology access performance of 7 participants with severe disabilities. The study design was a case series; therefore, the ability to generalize the results is limited (Guerette & Nakai, 1996).

The functional performance changes of clients after AT intervention, especially those regarding children in public school settings, continue to receive less recognition as an outcome variable than user satisfaction or use versus nonuse (Smith, 2002, 2005). Some studies have reported outcomes of specific devices or methods for specific student groups. A few studies used a group design to study AT’s impact on a heterogeneous group of children (Campbell, Milbourne, Dugan, & Wilcox, 2006; Evans & Henry, 1989; Gerlach, 1987; Hall, 1985; Hetzroni & Shrieber, 2004; Higgins & Raskind, 2004; Wallace, 2000). The Ohio Department of Education’s Assistive Technology Infusion Project, a study with >3,000 participants, has studied the effects of AT in a school setting (Fennema-Jansen, 2004; Fennema-Jansen, Smith, & Edyburn, 2004). The literature contains references to studies that examine the effect of specific AT for specific groups of students. In a multiple single-subject–design study, Schepis, Reid, Behrmann, and Sutton (1998) examined the effectiveness of speech-generating devices (augmentative communication) on classroom performance of four children with autism. This multiple single-subject study showed improved communication among all participants in comparison with their baseline function (Schepis et al., 1998).

A descriptive study by Ostensjo, Carlberg, and Vollestad (2005) revealed improvement in the performance of children with cerebral palsy after the introduction of AT, as measured by the Caregiver Assistance scale of the Pediatric Evaluation of Disability Inventory (PEDI™; Haley, Coster, Ludlow, Haltiwanger, & Andrellos, 1992; n = 95, mean age = 4 years, 10 months). The PEDI measures mobility, self-care, and social function. The Social Function scale of the PEDI (which measures comprehension, expression, problem solving, peer play, and safety) most closely compares with the goals and objectives of students in this study, although the comparison is inexact.

A 2004 review article on barriers to AT use revealed hurdles for service delivery, funding, and technology access (Copley & Ziviani, 2004). This literature review pertained to children with multiple disabilities within educational settings. Copley and Ziviani (2004) believed that after funding and access to technology, AT service delivery was lacking for children with multiple disabilities. The two broad areas of service delivery deficiency were training support and planning for assessment and implementation (Copley & Ziviani, 2004).

The field has tolerated studies with methodological shortcomings. The extant studies exhibit many limitations across the age, disability, and setting spectrum. A review article revealed that most studies regarding AT in the occupational therapy literature are qualitative, single subject, or nonexperimental (Ivanoff, Iwarsson, & Sonn, 2006). Many of the group-design studies that exist rely on use versus nonuse or client satisfaction as outcome criteria, variables that do not directly consider functional performance changes. The school-based empirical studies in the literature have a narrow focus (regarding specific devices or children with specific special education classifications) without regard to service delivery.

We considered and implemented the recommendations in three review articles regarding the development of the literature on AT outcomes in a public school setting.

Lenker et al. (2005) recommended that future research provide a rationale for instruments used in research studies along with a description of the participants, the study site, and the duration of AT use at the time of data collection.


The Ivanoff et al. (2006) review article highlighted the need for more group-design studies in the occupational therapy AT literature. Campbell et al. (2006) discussed the decreasing number of empirical studies in the educational AT outcomes literature and the need for more group-design studies.

The purpose of our study was to seek evidence regarding AT’s effectiveness in a public school setting by determining the outcome of AT provided by a multidisciplinary team (AT team) in helping students ages 3–21 enhance their performance. Students in special education receive many interventions, including AT. We examined the relative contribution of AT in supporting IEP goals and objectives in comparison with other interventions students received.

The overarching study question asked how students in special education are affected by the inclusion of AT as an intervention strategy. Subordinate questions that addressed this overarching question concerned how student performance outcomes are affected by the inclusion of AT and, relative to other intervention strategies, how AT contributed to students attaining identified IEP goals and objectives.

Method

Research Design

We used a repeated-measures pretest–posttest quasi-experimental design. The repeated-measures design allowed students to serve as their own controls, which was necessitated by high participant heterogeneity (a wide variety of disabilities, ages, and types of devices used), thus eliminating between-subjects variability in the baseline (pretest) and follow-up (posttest) conditions (Carey & Boden, 2003; Portney & Watkins, 2000; Ritchie, 2001; Smith, 2000). We used a repeated-measures design instead of a randomized controlled trial because the small number of new students served by the AT team prevented analysis of the independent groups necessary for the latter methodology. An additional problem with a randomized controlled design would be the necessity of withholding intervention to create a control group; withholding or delaying intervention is unethical in this situation. Another positive aspect of using a repeated-measures design over independent samples is the greater power in statistical procedures despite the small number of participants (Portney & Watkins, 2000).

Participants

We determined the minimum number of participants for the study through consultation with the coordinator of the AT team. She predicted, based on past referral rates, that there would be a minimum of 10 newly referred students or existing students with new devices within the first semester of the 2005–2006 school year (G. Williams, personal communication, May 2005). The study exceeded the minimum number, with 13 participants.

The main inclusion criterion consisted of all students newly referred to the AT team (or ongoing AT students who had new AT) for the 2005–2006 school year. Additional inclusion criteria were the student’s age (between 3 and 21) and special education status (the student had to have a current IEP). The study limited participation to those students whose case managers could complete the follow-up before the end of the 2005–2006 school year. Limiting data collection to 1 school year was an attempt to limit extraneous variables that could arise from data collection over 2 or more school years (e.g., new teachers, grade expectations, or school buildings).

We excluded students who met the inclusion criteria but whose parents and IEP team case managers declined consent. We also excluded students who would otherwise have met the inclusion criteria but whose parents did not read English or Spanish fluently, because consent forms were available only in those two languages. Only 1 student had parents who spoke a language other than English or Spanish (Mongolian) and was thus excluded.

Outcome Measures

We used the Student Performance Profile (SPP; Assistive Technology Outcomes Measurement Systems, 2004; Fennema-Jansen, 2004) as the study measure. The SPP, developed by the Ohio Department of Education and the Rehabilitation Research Design and Disability (R2D2) Center, began as an online, study-specific instrument for the Ohio Assistive Technology Infusion Project. The Ohio Assistive Technology Infusion Project used the SPP to collect data on approximately 4,000 students. Teachers in the Ohio study recorded their students’ experiences with the AT provided by the project. Evidence for content and discriminant validity has been presented through dissertation studies and national conferences (Fennema-Jansen, 2004, 2005; Fennema-Jansen, Smith, Edyburn, & Binion, 2005). In this study, we used a modified print version of the instrument written in conjunction with and approved by the developers of the original SPP (Assistive Technology Outcomes Measurement Systems, 2004). Readers are directed to the R2D2 Center Web site (www.r2d2.uwm.edu/atoms/archive/sppi3.html) to see the version of the SPP used in this study.

Our version of the SPP included pretest and posttest forms. The pretest form included three sections: Section I, Student Information; Section II, Areas of Need; and Section III, Relevant (to AT) IEP Goals or Objectives and Current Ability Level. Case managers could identify and rate up to three AT-relevant IEP goals or objectives. The case managers rated each identified IEP goal or objective on a scale ranging from 0% ability to 100% ability in 10% increments (e.g., a student who has mastered the goal or objective would receive a 100% ability-level rating). The posttest form included three sections relevant to this article: Section I, Student Information; Section II, Current Ability Level of (previously identified) Relevant IEP Goals or Objectives; and Section III, Contribution of Interventions.

We used the SPP because it can be customized to a wide variety of students, it is specific to the use of AT in a public school setting, and it bases performance on IEP goal or objective ability level. We also chose the SPP because the developers had experience with its use with a heterogeneous group of AT users in public education. The case managers rated each student’s ability level on IEP goals or objectives on the SPP pretest form and on the SPP posttest form. The measure of performance was the difference in averaged ability level (of up to three goals or objectives) on the pretest and posttest forms.
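The performance measure just described (the pre–post difference in ability level, averaged over up to three rated goals or objectives) can be sketched in a few lines of Python. The values below are hypothetical illustrations, not data from the study:

```python
# Sketch of the SPP performance measure: a case manager rates up to three
# AT-relevant IEP goals or objectives (0%-100%, in 10% increments) at
# pretest and posttest; the performance score is the difference of the
# averaged ratings. All ratings here are hypothetical.

def mean(values):
    return sum(values) / len(values)

def spp_change(pretest_ratings, posttest_ratings):
    """Change in averaged ability level, in percentage points."""
    if not 1 <= len(pretest_ratings) <= 3:
        raise ValueError("SPP allows one to three rated goals/objectives")
    if len(pretest_ratings) != len(posttest_ratings):
        raise ValueError("pretest and posttest must rate the same goals")
    return mean(posttest_ratings) - mean(pretest_ratings)

# Hypothetical student: two goals rated 30% and 40% before AT and
# 60% and 80% after AT, so averaged ability rises from 35% to 70%.
print(spp_change([30, 40], [60, 80]))  # 35.0
```

Averaging before differencing means each student contributes one change score regardless of how many goals were rated, which is what permits the paired analysis described under Data Analysis.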

Rating student performance with the SPP (ability level on relevant IEP goals and objectives) does not distinguish between student performance with and without AT. The SPP measures only the change in IEP ability level after the introduction of AT. The SPP posttest form attempts to distinguish the impact of AT on the student’s ability to achieve IEP goals and objectives with an additional section, Section III, Contribution of Interventions. Section III includes 10 rating scales representing 10 interventions. The scales range from 0 (no contribution) to 10 (substantial contribution) to improved IEP ability level. The interventions are broken into four categories: student strategies, teacher strategies, special services, and AT.

Procedures

Institutional review board (IRB) approval was obtained through Nova Southeastern University. The IRB approval preceded the approval from the school system in which the study took place. We also sought additional approval from the school principals at each school with participants. Data collection began after the completion of these three levels of approval.

The single independent variable in the study consisted of the AT devices and services provided by the AT team to the participating students (nominal data). We chose AT team member services (including AT device provision) as a single intervention because this service was the consistent variable across all participants. The services provided by the AT team were at minimum an interdisciplinary team review of the AT referral, discussion about the referred student’s case and assignment of the most appropriate AT team member to the case, an initial visit by the AT team member to the referring case manager (usually the student’s special education teacher), an observation of the student performing the activities for which the AT was being considered, and collection of student baseline performance data (e.g., words per minute produced in the student’s usual written communication method).

If the AT team member recommended AT, the team member provided these services: the AT device or devices, initial training of the teacher and student in the use of the AT device, initiation of a follow-up regarding the student’s AT use at least twice a year, and provision of support for the use and maintenance of the AT as needed. The AT team member’s level of service was higher for students with complex AT (e.g., speech-generating devices or dictation software).

The two dependent variables in this study consisted of student performance and the relative contribution of the AT intervention provided by the AT team, compared with that of nine other commonly used interventions, to improvement in student performance. Student performance was defined by student ability level on AT-relevant IEP goals or objectives as rated by the student’s case manager (0% = no ability, 100% = a fully met IEP objective or goal). Student performance was rated using the SPP. Rating scale scores are usually viewed as ordinal data; however, the SPP ratings can be treated as interval data because, as noted earlier, the ratings fall in equal numerical increments (0% to 100%, with even increments of 10%).

The level of measurement for the second dependent variable, contribution of the AT intervention to the improvement in student IEP objective or ability, is ordinal because the level of contribution is based on an SPP rating scale (ranging from 0 [no contribution] to 10 [substantial contribution]). The case managers were instructed that multiple interventions could be rated the same (i.e., each intervention did not need to be ranked relative to the others) and that the numbers associated with the ratings did not need to add to a specific number (e.g., 100).

We recruited every participant who met the inclusion criteria. The case manager of 1 student declined participation, and 2 students’ parents declined to give consent. The case managers of all participants completed the posttest evaluation. The data collection process started when the AT team provided services to a newly referred student or provided new AT devices to an existing student (and after all parties signed informed consent forms). The first author (Watson) met with the case manager (the IEP team member primarily responsible for the student’s IEP; most were special education teachers) and other appropriate IEP team members for the student at the case manager’s school. The first interview included a verbal introduction to the study but withheld the study hypotheses.

At the first data collection meeting, the first author asked the case manager to identify up to three pertinent areas of student need addressed by the AT and included these on the SPP pretest form. The SPP pretest form was used to guide the case managers to identify one to three IEP goals or objectives most relevant to the AT. Although the school system used goals only for students taking standardized achievement tests, it used goals and objectives for students with alternative testing; thus, either goals or objectives could be used on the SPP.

The first author reviewed the instructions for the instrument with each case manager and answered any questions that arose. The case managers were instructed to rate the student’s ability level on each AT-relevant IEP goal or objective on the SPP pretest’s Section III (Relevant IEP Goals or Objectives and Current Ability Level), using the rating scale described earlier. Case managers rated their student’s ability level on each goal or objective on the basis of the student’s performance immediately before AT was introduced. To avoid the potential for bias, case managers completed the SPP pretest form while Watson was in the building but not in the room.

Approximately 4 months after AT provision, the same case manager and any other IEP team members who completed the SPP pretest form completed the SPP posttest form. Watson delivered all follow-up instruments to the case managers and reviewed the instructions with them. The SPP posttest form included the goals or objectives (entered via a word processor) identified on the SPP pretest form for each student, but it did not include the pretest rating levels. Pretest ratings were excluded from the posttest to minimize case managers’ scoring bias when rating posttest Section II (Current Ability Level of Relevant IEP Goals or Objectives).

The case manager completed the SPP posttest form regarding the student’s ability level on IEP goals and objectives, as indicated earlier, and the relative contribution of 10 intervention strategies for each identified IEP objective (Fennema-Jansen et al., 2004). The SPP posttest form asked the case manager to identify any possible confounding variables that may have influenced the student’s progress negatively or positively before or after the introduction of AT. The case managers did not identify any untoward events during the time of any student’s AT use.

Data Analysis

We used SPSS software (Version 15.0; SPSS, Inc., Chicago) and Microsoft Excel 2004 for Macintosh (Version 11.3.7; Microsoft Corporation, Redmond, WA) to analyze the data.

The study included descriptive statistics for student participant characteristics and for the AT provided to students. The descriptive analysis also provided frequency distributions of data used for inferential statistical tests. We used the paired t test to compare the mean (of IEP ability level) of each participant before AT was introduced with the mean of each participant after AT was introduced. We used a significance level of p < .01 (one-tailed) on all analyses using the t test. The SPP scores used for the t-test analyses were the mean percentage ability level on the chosen, relevant IEP goals or objectives for each student.
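The paired t test described above can be computed from first principles: the mean of the per-student change scores divided by the standard error of those change scores. A minimal Python sketch, using hypothetical pre/post ability levels rather than the study’s data (the authors ran their analyses in SPSS):

```python
import math
from statistics import mean, stdev  # stdev is the sample standard deviation

def paired_t(pre, post):
    """Paired t statistic: mean of the change scores divided by
    their standard error; degrees of freedom = n - 1."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical mean IEP ability levels (%) for five students,
# before and after AT introduction (illustrative values only):
pre = [20, 30, 40, 30, 50]
post = [50, 60, 70, 40, 80]
print(round(paired_t(pre, post), 2))  # 6.5, with df = 4
```

Because each student serves as his or her own control, only the change scores enter the test, which is the source of the extra statistical power noted under Research Design.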

The first author analyzed the case manager–reported contribution level of AT provided by the AT team versus other interventions using a one-way analysis of variance (ANOVA). An analysis of the means revealed that neither any single intervention alone nor the averaged mean contribution of all nine non-AT interventions was rated as highly as the AT intervention. We used an ANOVA to compare the two highest rated interventions (of the nine non-AT interventions) with the mean AT rating to determine whether significant differences existed.
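The one-way ANOVA comparing the AT ratings with the two highest rated non-AT interventions reduces to the between-group mean square divided by the within-group mean square. A sketch under hypothetical ratings (not the study’s data; the group names are illustrative):

```python
from statistics import mean

def one_way_f(*groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 0-10 contribution ratings for AT and two non-AT
# interventions across five students (illustrative values only).
# With three groups, the between-group df is 2, matching the F[2]
# form of the statistic reported in this article.
at_ratings = [8, 9, 7, 8, 9]
intervention_a = [5, 6, 5, 7, 6]
intervention_b = [4, 5, 6, 5, 4]
print(round(one_way_f(at_ratings, intervention_a, intervention_b), 2))
```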

Results

Description of the Sample (Participant Characteristics)

None of the participants were lost at follow-up, and all continued to use their AT (0% abandonment). The participating students ranged in grade level from preschool to eighth grade. Table 1 summarizes the participating students’ grade levels. The participating students represented a variety of special education disability classifications (see Table 2).

Assistive Technology Provided

The AT team provided two broad categories of AT: speech-generating devices (for oral communication) and AT for written communication. The participants received 32 AT devices altogether. Written communication devices consisted of hardware or software. Table 3 summarizes the devices and software provided by the AT team.

Table 1. Grade Levels of Participant Students (N = 13)

Grade Level     n
Preschool       1
First grade     2
Second grade    2
Fourth grade    3
Fifth grade     3
Sixth grade     1
Eighth grade    1


Student Performance Progress

SPP posttest Section II measured student performance as the students' ability level (ranging from 0% to 100%, in 10% increments) on AT-relevant IEP goals and objectives. The case managers identified 27 IEP goals or objectives relevant to AT; the average number of IEP goals and objectives per student was 2.08. (If a student's case manager identified multiple IEP goals and objectives for the SPP, performance was measured as the average of the change in ability level for all identified IEP goals or objectives.) Most of the students (11 of 13) showed improvement in ability level on identified IEP goals or objectives at posttest (after receiving AT) compared with ability level at pretest (before receiving AT). The remaining 2 students' averaged level of ability stayed the same. The mean percentage of ability for the 13 participants was 36% before the AT intervention and 67% after the AT intervention, for a mean improvement of 31%. The participating students' overall performance on SPP posttest Section II (percentage of ability on IEP goals and objectives) improved significantly (t[12] = 5.54, p = .00, one-tailed, d = 1.40). See Table 4 and Figure 1 for a summary of SPP Section II pretest and posttest scores.
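An effect size for a paired comparison like this one can be computed in several ways; one common convention divides the mean difference by the standard deviation of the differences (other conventions divide by a pooled or averaged group SD, which is closer to the d of 1.40 reported here). A minimal sketch with hypothetical data:

```python
import math

def cohens_d_paired(pre, post):
    """One paired-design effect size convention:
    mean difference / SD of the differences.
    (Dividing by a pooled or averaged group SD is another common choice.)"""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / sd_d

# Hypothetical percentage ability levels (not study data).
d = cohens_d_paired([30, 40, 20, 50, 40], [60, 70, 40, 80, 70])
```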

Relative Contribution of AT Provided by the AT Team Compared With Other Interventions

Case managers completed SPP posttest Section III after rating each student's ability level on every identified IEP goal and objective on the SPP posttest form. The participants received multiple concurrent forms of intervention in addition to AT; thus, asking about the level of contribution of AT versus other interventions helped to clarify the reason for the improvement in IEP ability level. The SPP posttest Section III rating scale ranged from 0 (no contribution) to 10 (substantial contribution). Our hypothesis was that AT would contribute to IEP improvement at least as much as, or more than, the nine other interventions.

The two non-AT interventions with the highest means were "adaptations of specific curricular tasks (e.g., worksheet modifications, alternate test taking)" and "related and support services (e.g., occupational therapy, physical therapy, speech–language therapy, Title I, tutoring)," which were Items 3 and 7 on SPP posttest Section III, respectively. For Items 3 and 7, means were 6.00 and 5.37, respectively (standard deviations [SDs] = 1.88 and 2.36, respectively), compared with a mean of 7.74 (SD = 1.99) for Item 10 (the contribution of AT provided by the AT team). Each intervention had 27 ratings of contribution to IEP goal ability level (based on the 27 relevant IEP goals and objectives identified by the case managers). The data from the three data sets mentioned previously (Items 3, 7, and 10) met the ANOVA assumptions of homogeneity of variance (SDs = 1.88, 2.36, and 1.99, respectively) and normal distribution. See Figures 2 and 3 for frequency distributions of the ratings of Items 3, 7, and 10 of SPP posttest Section III.
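The homogeneity-of-variance argument based on the three reported SDs can be expressed as the common rule of thumb that the largest group SD should be no more than about twice the smallest. This is a heuristic screening check, not necessarily the procedure the authors ran:

```python
def sd_ratio_ok(sds, max_ratio=2.0):
    """Rule-of-thumb variance-homogeneity screen:
    largest SD no more than `max_ratio` times the smallest."""
    return max(sds) / min(sds) <= max_ratio

# SDs reported for Items 3, 7, and 10: ratio 2.36 / 1.88 is about 1.26.
homogeneous = sd_ratio_ok([1.88, 2.36, 1.99])
```

A formal alternative would be Levene's test on the raw ratings.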

A one-way ANOVA with Tukey's post hoc test between these three intervention strategies (Items 3, 7, and

Table 2. Special Education Classification of Participant Students (N = 13)

Special Education Classification                          n
Autism spectrum disorder                                  4
Other health impaired                                     2
Specific learning disability                              3
Other health impaired and specific learning disability    1
Cognitive disability                                      2
Developmentally delayed                                   1

Table 3. Assistive Technology Devices and Software Provided by the Assistive Technology Team

Type of Assistive Technology      n   Examples of Devices or Software^a
Written communication hardware    8   AlphaSmart^b (Renaissance Learning, Inc., Wisconsin Rapids, WI)
Written communication software    8   Co:Writer and Write:Outloud (Don Johnston, Inc., Volo, IL)
Speech-generating devices         9   Cheaptalk, Twin Talk, Communication Builder (Enabling Devices, Toys for Special Children, Hastings-on-Hudson, NY) and Step-by-Step (AbleNet, Inc., Roseville, MN)
Curriculum support software       5   First Words (Laureate Learning Systems, Inc., Winooski, VT) and Kurzweil (Cambium Learning Technologies, Inc., Bedford, MA)
Computer access                   2   Intellikeys (Cambium Learning Technologies, Inc., Bedford, MA)

a. Not all devices were named, by model or title, by case managers.
b. AlphaSmart was acquired by Renaissance Learning, Inc., a Wisconsin Rapids, WI–based company, in 2005. Since this study was published, the AlphaSmart computers have evolved into what is now known as the NEO 2.

Table 4. Effect of Assistive Technology Provided by the Assistive Technology Team on Performance, as Measured by the Student Performance Profile at Two Time Points (N = 13)

Test         M (%)   SD (%)   t      df   p      Cohen's d (Effect Size)
Posttest     67      20
Pretest      36      24
Difference   31      20       5.54   12   .00*   1.40

Note. Analysis by paired t test.
*p < .01 (one-tailed).


Figure 1. Frequency distribution of the scores of the averaged Student Performance Profile (SPP) on the pretest and posttest forms, in percentages, of current ability level of individualized education program goals and objectives (N = 13).

Figure 2. Frequency of each rating (contribution) level, by Student Performance Profile Posttest Section III, Item 7 (related and support services) and Item 3 (adaptations of specific curricular tasks; N = 27 questions).



10) revealed a significant difference between AT provided by the AT team (Item 10) and each of the other two interventions (Items 3 and 7) but no significant difference between Items 3 and 7 (F[2, 78] = 9.35, p = .00). This analysis may provide evidence of AT's positive contribution compared with that of other intervention strategies in promoting attainment of IEP goals and objectives (see Table 5).
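The post hoc comparison can be sketched with Tukey's honestly-significant-difference criterion, HSD = q * sqrt(MSE / n), using the mean square error from Table 5 (4.35) and the 27 ratings per item. The studentized-range value q used below is an assumed, approximate value for three groups and roughly 78 error degrees of freedom, so the result is illustrative only:

```python
import math

def tukey_hsd_threshold(q, mse, n):
    """Minimum pairwise mean difference significant under Tukey's HSD,
    for equal group sizes n and ANOVA mean square error mse."""
    return q * math.sqrt(mse / n)

# Item means from the text; q = 3.4 is an assumed approximate critical value.
means = {"Item 3": 6.00, "Item 7": 5.37, "Item 10": 7.74}
hsd = tukey_hsd_threshold(q=3.4, mse=4.35, n=27)

labels = sorted(means)
for i, a in enumerate(labels):
    for b in labels[i + 1:]:
        diff = abs(means[a] - means[b])
        verdict = "significant" if diff > hsd else "n.s."
        print(a, "vs", b, round(diff, 2), verdict)
```

With these numbers, Item 10 differs from both Items 3 and 7 while Items 3 and 7 do not differ, mirroring the pattern reported above.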

Discussion

The problem statement of this study indicates that federal law (IDEA, 2004), despite the paucity of research on AT outcomes, directs IEP teams to consider AT for every child in special education and to use peer-reviewed research to guide AT implementation. This study addressed the problem of lack of evidence of AT's effectiveness to support students in special education.

The characteristics of this study's participants reflected reports in the literature regarding the heterogeneity of clients who use AT. Although the students in this study were heterogeneous, as were the AT devices used, the degree of heterogeneity may be less than is found in many studies cited in the literature (Kohn et al., 1994; Lenker et al., 2005; Ostensjo, Carlberg, & Vollestad, 2005). The students in this study primarily received devices that addressed academic or communication goals and objectives. No students received AT that addressed needs typical of physical disabilities, such as adapted toilet seats, wheelchair or seating systems, or complex computer access systems.

In this study, we indirectly measured use versus nonuse (abandonment), a factor measured in many studies in the AT outcomes literature. The abandonment rates of 8% to 75% cited in various studies contrast with the 0% abandonment rate in this study. The other studies in the literature regarding use versus nonuse of AT were retrospective, and most were imprecise in reporting the length of time before abandonment, making an exact comparison between this study and other studies difficult (DeRuyter, 1997; Riemer-Ross & Wacker, 2000; Scherer, 1996). The continued use of AT by all of the students in this study may reflect the findings of Riemer-Ross and Wacker (2000) regarding the factors associated with continued use of AT. Riemer-Ross and Wacker concluded, on the basis of data for 115 participants, that consumer involvement, compatibility, and relative advantage are significantly associated with continued use of AT. The procedures of the AT team promoted these three factors associated with continued use of AT.

The literature on AT outcomes has indicated that AT has a positive impact on children's performance (Campbell et al., 2006; Evans & Henry, 1989; Gerlach, 1987; Hetzroni & Shrieber, 2004; Higgins & Raskind, 2004; Ostensjo et al., 2005). Most of these studies regarded intervention with a specific AT device; however, we examined AT's effectiveness as a result of a specific service delivery model. We directly

Figure 3. Frequency of each rating (contribution) level, by Student Performance Profile Posttest Section III, Item 10 (assistive technology [AT] intervention).

Table 5. One-Way Analysis of Variance Summary of the Contribution of Three Interventions to Attainment of Individualized Education Program Goals and Objectives

Source           Sum of Squares   df   Mean Square   F      p
Between groups   81.41            2    40.70         9.35   .00*
Error            339.48           78   4.35
Total            420.89           80

*p < .01.



examined student performance but also indirectly measured the performance of specific groups of students (although very small in total number) using their AT.

We found positive results similar to those of the Schepis et al. (1998) study, a multiple single-subject design regarding the effect of speech-generating devices on four children with autism spectrum disorders. The 3 participants in this study with autism spectrum disorders who used AT (speech-generating devices) to address classroom communication objectives showed gains in performance (as measured by the SPP). The low number of students in each study requires cautious interpretation, however.

The Ostensjo et al. (2005) study found little positive effect of AT on progress in social function as reported retrospectively by parents and caregivers. It did show functional improvement and reduced caregiver assistance in mobility and self-care after AT use. The differences in instrumentation and retrospective versus prospective methodology may account for the differences between Ostensjo et al.'s (2005) results and ours. The PEDI, the general-purpose instrument used in their study, may have lacked sensitivity to AT's impact on social function. Social function as a construct may be more difficult to accurately assess by retrospective reporting than the more concrete constructs of mobility and self-care.

No other study in the AT outcomes literature regarding performance is similar to this study in terms of instrumentation, methodology, and participant characteristics. The methodology reported in the AT outcomes literature is primarily descriptive or used single-subject designs. The results of this study, however, generally agree with the limited research regarding AT's effectiveness in positively supporting performance.

The instrument used in the study, the SPP, accounted for participant heterogeneity by allowing for instrument customization for each student. The use of student IEP goals and objectives is aligned with the findings of Trachtman (1996), who proposed that the attainment of goals set for or by the client forms the most important outcome measure. The use of IEP goals and objectives appeared to equalize the measurement of students despite vast differences between them (Fennema-Jansen et al., 2004; Grogan, 2004; Silverman, 1999). Using the students' unique goals and objectives relevant to AT as an outcome measure (which the SPP uses) may account for this study's significant results.

Limitations

This study's limitations include the small sample obtained by consecutive sampling (a form of convenience sampling), the use of instrumentation (the SPP) with relatively little psychometric information, a sample pool limited to one school district, and the use of the t test on ordinal data.

The SPP captured data based on rating scales and therefore met the classification of ordinal data (Portney & Watkins, 2000). We used the t test and the ANOVA on ordinal data with a distribution unknown a priori, factors that violate the t test and ANOVA assumptions of interval- or ratio-based data with a normal distribution (Bridge & Sawilowsky, 1999; Ottenbacher, 1983; Portney & Watkins, 2000; Song et al., 2006).

The t test and the ANOVA, however, typically retain robustness, or avoidance of Type I errors, when the assumptions are not met, which may account for the frequent use of these statistics in many medical and social science studies. Statisticians consider a more likely concern to be whether the t test retains power, or the ability to avoid a Type II error, when used with noninterval and non-normally distributed data. The lack of power is a concern especially in studies that have small (<25) samples (Bridge & Sawilowsky, 1999; Song et al., 2006). We found significant results using the t test despite the violation of the t test's assumptions and the small sample size. The ANOVA in this study, although based on ordinal data, met the assumptions of equal variance and normal distribution with a sample size >25.
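A distribution-free alternative for paired ordinal data of this kind is the Wilcoxon signed-rank test, which ranks the absolute differences rather than assuming interval-scaled, normal data. A minimal sketch of its core statistic (zero differences dropped, tied ranks averaged), with hypothetical data:

```python
def wilcoxon_w_plus(pre, post):
    """Wilcoxon signed-rank W+: sum of ranks of the positive differences.
    Zero differences are dropped; tied absolute differences share an
    averaged rank."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied absolute differences.
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Hypothetical paired scores (not study data).
w_plus = wilcoxon_w_plus([30, 40, 20, 50, 40], [60, 70, 40, 80, 70])
```

W+ would then be compared against tabled critical values (or a normal approximation for larger samples) in place of the t distribution.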

Future Research

The implications and recommendations for future research include SPP instrument development, further research into intervention with specific AT devices, and further definition of the constructs of performance as they relate to AT intervention. Future research on AT outcomes in a public school setting should consider multiple data collection phases over a longer period (>4 months); some devices used with various disability categories may require >4 months to show changes. The first author (Watson) collected data over only 1 school year and interviewed only case managers who requested AT as an intervention. Data collection over multiple school years would also capture case managers who inherit AT with their new students; those case managers may view AT's effect differently than case managers who request AT.

Clinical Implications

The first clinical implication of this study is that AT provided by a multidisciplinary team may be helpful in promoting improved performance (attaining IEP goals and objectives) in a public school setting among heterogeneous groups of students who have difficulty meeting these IEP goals and objectives with other interventions. However, a practitioner or administrator who uses the results of this study as evidence for AT's effectiveness in supporting student performance should consider the service delivery model a critical component of its effectiveness. The participants in this study received AT from a multidisciplinary AT team and from an


IEP team that made a commitment to seeking and implementing AT.

The second implication for practice is the experience gained with the SPP, an AT outcome instrument in its developmental stage. The SPP appears to have potential as an effective means of collecting AT outcomes data in the public school setting. The SPP has the following characteristics: ease of administration, specificity to relevant measurement constructs (IEP goals and objectives), sensitivity to change in performance over time, and ease of scoring. An instrument with these characteristics may help practitioners pursue data collection regarding the effect of AT on student performance.

A third implication of this study for practice is the impetus it may provide for collection of outcome data in this arena of practice. Many articles have attested to the importance of measuring AT outcomes, but the literature has provided little guidance on this process (DeRuyter, 1997; Fuhrer, 2001; Gelderblom & de Witte, 2002; Jutai et al., 2005; Minkel, 1996; RESNA, 1998a, 1998b, 1998c; Smith, 1996). Some authors have directed practitioners in methods to determine which students are appropriate for AT intervention; however, these same authors do not extend the discussion to the monitoring of outcomes (Cook & Hussey, 2002; Edyburn, 2001; Lenker & Paquet, 2003; Quality Indicators for Assistive Technology Consortium, 2005; Zabala, 2001). Guidelines for collection of AT outcomes data may have implications for individual school district practice. Individual school districts that use this study's recommendations for data collection and analyses of AT outcomes may improve service delivery and provide a basis for justification of their service. Service delivery could improve if AT providers find deficits in their service and then work to improve them. Data that demonstrate the effectiveness of AT intervention on student performance may help secure continued support from school administrators.

Conclusion

This study's results provide evidence of improvement in IEP goal and objective ability when students (who were having difficulty achieving IEP goal progress with standard classroom interventions) used AT as an intervention strategy. The study also suggests that AT's contribution as an intervention strategy is greater than that of nine other possible intervention strategies. The results further indicate that a multidisciplinary team service delivery approach may be an effective method of AT implementation for students in special education. Further study is needed to measure the performance of larger populations of students who use AT to help meet IEP goals. Further work is necessary to refine

AT outcome measures appropriate to school system practice, such as the SPP.

Acknowledgments

This work was supported in kind and in the form of ATOMS Project consultation by the ATOMS Project and National Institute on Disability and Rehabilitation Research (NIDRR) Grant H133A010403. The opinions contained in this study are those of the grantee and do not necessarily reflect those of the NIDRR or the U.S. Department of Education.

Anne H. Watson acknowledges several people for their guidance and support: her dissertation committee—Max Ito, Lori Andersen, and Roger Smith—and the members of the 2005–2006 AT team—Therese Smith (who helped pilot the instruments), Nancy Harlan, and Doreen Bender—for help in recruiting participants. She offers special thanks to Grace Williams, the coordinator of the AT team, for guidance and to Kathy Snydor for copyediting.

This study was presented at the AOTA Annual Conference and Expo, April 2009, in Houston, TX.

References

Assistive Technology Outcomes Measurement Systems. (2004). Ohio's Assistive Technology Infusion Project Student Performance Profile. Retrieved from www.r2d2.uwm.edu/atoms/archive/technicalreports/ohioato/sample_spp_follow-up03-04.pdf

Bridge, P. D., & Sawilowsky, S. S. (1999). Increasing physicians' awareness of the impact of statistics on research outcomes: Comparative power of the t-test and Wilcoxon rank-sum test in small samples applied research. Journal of Clinical Epidemiology, 52, 229–235.

Campbell, P., Milbourne, S., Dugan, L. M., & Wilcox, M. J. (2006). A review of evidence on practices for teaching young children to use assistive technology devices. Topics in Early Childhood Special Education, 26, 3–13.

Carey, T., & Boden, S. (2003). A critical guide to case series reports. Spine, 28, 1631–1634.

Cook, A. M., & Hussey, S. M. (2002). A framework for assistive technologies. In Assistive technologies: Principles and practice (2nd ed., pp. 34–53). St. Louis, MO: Mosby.

Copley, J., & Ziviani, J. (2004). Barriers to the use of assistive technology for children with multiple disabilities. Occupational Therapy International, 11, 229–243.

DeRuyter, F. (1997). The importance of outcome measures for assistive technology service delivery systems. Technology and Disability, 6, 89–104.

Edyburn, D. (2001). Models, theories, and frameworks: Contributions to understanding special education technology. Special Education Technology Practice, 4, 16–24.

Evans, C. D., & Henry, J. S. (1989). Keyboarding for the special needs student. Business Education Forum, 43(7), 23–25.


Fennema-Jansen, S. (2004). Technical report—The Assistive Technology Infusion Project (ATIP) database (Version 1.0). Milwaukee: University of Wisconsin.

Fennema-Jansen, S. (2005). An analysis of assistive technology outcomes in Ohio schools: Special education students' access to and participation in general education and isolating the contribution of assistive technology. Milwaukee: University of Wisconsin.

Fennema-Jansen, S., Smith, R. O., & Edyburn, D. L. (2004, June). Measuring AT outcomes using the Student Performance Profile: Analysis and recommendations. Paper presented at the RESNA 27th International Annual Conference, Orlando, FL.

Fennema-Jansen, S., Smith, R. O., Edyburn, D., & Binion, M. (2005). Isolating the contribution of assistive technology to school progress. Paper presented at the RESNA 28th International Conference on Technology and Disability: Research, Design, Practice and Policy, Atlanta, GA.

Fuhrer, M. J. (2001). Assistive technology outcomes research: Challenges met and yet unmet. American Journal of Physical Medicine and Rehabilitation, 80, 528–535.

Gelderblom, G. J., & de Witte, L. (2002). The assessment of assistive technology outcomes, effects and costs. Technology and Disability, 14, 91–94.

Gerlach, G. J. (1987, April). The effect of typing skill on using a word processor for composition. Paper presented at the annual meeting of the American Educational Research Association, Washington, DC. (ERIC Document Reproduction Service No. ED286465)

Grogan, K. M. (2004). Measuring the effects of assistive technology on school activity performance using the School Function Assessment AT supplement (SFA–AT). Unpublished master's thesis, University of Wisconsin–Milwaukee.

Guerette, P., & Nakai, R. (1996). Access to assistive technology: A comparison of integrated and distributed control. Technology and Disability, 5, 63–73.

Hall, C. S. (1985, October). Keyboarding in a self-contained fourth–fifth grade classroom. Paper presented at the North Carolina Educational Microcomputer Conference, Greensboro, NC, and the Computer Coordinator's FOCUS '85 Workshop, Raleigh. (ERIC Document Reproduction Service No. ED269130)

Haley, S., Coster, W. J., Ludlow, L. H., Haltiwanger, J., & Andrellos, P. J. (1992). Pediatric Evaluation of Disability Inventory (PEDI) development, standardization, and administration manual. Boston: Trustees of Boston University, Health and Disability Research Institute.

Hetzroni, O. E., & Shrieber, B. (2004). Word processing as an assistive technology tool for enhancing academic outcomes of students with writing disabilities in the general education classroom. Journal of Learning Disabilities, 37, 143–154.

Higgins, E. L., & Raskind, M. H. (2004). Speech recognition-based and automaticity programs to help students with severe reading and spelling problems. Annals of Dyslexia, 54, 365–384.

Individuals With Disabilities Education Improvement Act of 2004, Pub. L. 108–446, 20 U.S.C. § 1400 et seq.

Ivanoff, S. D., Iwarsson, S., & Sonn, U. (2006). Occupational therapy research on assistive technology and physical environmental issues: A literature review. Canadian Journal of Occupational Therapy, 73, 109–119.

Jutai, J., Fuhrer, M., Demers, L., Scherer, M., & DeRuyter, F. (2005). Toward a taxonomy of assistive technology device outcomes. American Journal of Physical Medicine and Rehabilitation, 84, 294–302.

Kohn, J. G., LeBlanc, M., & Mortola, P. (1994). Measuring quality and performance of assistive technology: Results of a prospective monitoring program. Assistive Technology, 6, 120–125.

Lenker, J. A., & Paquet, V. L. (2003). A review of conceptual models for assistive technology outcomes research and practice. Assistive Technology, 15, 1–15.

Lenker, J. A., Scherer, M. J., Fuhrer, M. J., Jutai, J. W., & DeRuyter, F. (2005). Psychometric and administrative properties of measures used in assistive technology device outcomes research. Assistive Technology, 17, 7–22.

Mann, W. C., Ottenbacher, K. J., Fraas, L., Tomita, M., & Granger, C. V. (1999). Effectiveness of assistive technology and environmental interventions in maintaining independence and reducing home care costs for the frail elderly: A randomized controlled trial. Archives of Family Medicine, 8, 210–217.

Minkel, J. L. (1996). Assistive technology and outcome measurement: Where do we begin? Technology and Disability, 5, 285–288.

Ostensjo, S., Carlberg, E. B., & Vollestad, N. K. (2005). The use and impact of assistive devices and other environmental modifications on everyday activities and care in young children with cerebral palsy. Disability and Rehabilitation, 27, 849–861.

Ottenbacher, K. (1983). A "tempest" over t tests in the occupational therapy literature. American Journal of Occupational Therapy, 37, 700–702.

Portney, L. G., & Watkins, M. P. (2000). Foundations of clinical research: Applications to practice (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.

Quality Indicators for Assistive Technology Consortium. (2005). Quality indicators for assistive technology services—Revised. Retrieved August 26, 2006, from http://natri.uky.edu/assoc_projects/qiat/qualityindicators.html

Rehabilitation Engineering and Assistive Technology Society of North America. (1998a). RESNA resource guide for assistive technology outcomes: Assessment instruments, tools, and checklists from the field (Vol. 2). Arlington, VA: Author.

Rehabilitation Engineering and Assistive Technology Society of North America. (1998b). RESNA resource guide for assistive technology outcomes: Developing domains of need and criteria of services (Vol. 3). Arlington, VA: Author.

Rehabilitation Engineering and Assistive Technology Society of North America. (1998c). RESNA resource guide for assistive technology outcomes: Measurement tools (Vol. 1). Arlington, VA: Author.

Riemer-Ross, M. L., & Wacker, R. R. (2000). Factors associated with assistive technology discontinuance among individuals with disabilities. Journal of Rehabilitation, 66(3), 44–59.

Ritchie, J. E. (2001). Case series research: A case for qualitative method in assembling evidence. Physiotherapy Theory and Practice, 17, 127–135.

Schepis, M. M., Reid, D. H., Behrmann, M. M., & Sutton, K. A. (1998). Increasing communicative interactions of young


children with autism using a voice output communication aid and naturalistic teaching. Journal of Applied Behavior Analysis, 31, 561–578.

Scherer, M. J. (1996). Living in a state of stuck: How assistive technology impacts the lives of people with disabilities (2nd ed.). Cambridge, MA: Brookline Books.

Silverman, M. (1999). Interpretation of the School Function Assessment for the use as an assistive technology outcome measure. Unpublished master's thesis, University of Wisconsin–Milwaukee.

Smith, R. O. (1996). Measuring the outcomes of assistive technology: Challenge and innovation. Assistive Technology, 8, 71–81.

Smith, R. O. (2000). Measuring assistive technology outcomes in education. Diagnostique, 25, 273–290.

Smith, R. O. (2002, June–July). Assistive technology outcome assessment prototypes: Measuring "ingo" variables of "outcomes." Paper presented at the Rehabilitation Engineering and Assistive Technology Society of North America 25th International Annual Conference, Minneapolis, MN.

Smith, R. O. (2005, March). IMPACT2 model. Retrieved August 17, 2006, from www.r2d2.uwm.edu/archive/impact2model.html

Song, F., Jerosch-Herold, C., Holland, R., de Lourdes Drachler, M., Mares, K., & Harvey, I. (2006). Statistical methods for analysing Barthel scores in trials of poststroke interventions: A review and computer simulations. Clinical Rehabilitation, 20, 347–356.

Trachtman, L. (1996). Outcome measures: Are we ready to answer the tough questions? Assistive Technology, 6, 91–92.

U.S. Department of Education. (2006). Assistance to states for the education of children with disabilities and preschool grants for children with disabilities; Final rule, 34 C.F.R. §§ 300 and 301.

Wallace, J. B. (2000). The effects of color-coding on keyboarding instruction of third grade students. Unpublished master's thesis, Johnson Bible College, Knoxville, TN. (ERIC Document Reproduction Service No. ED443389)

Wuolle, K. S., Doren, C. L. V., Bryden, A. M., Peckman, P. H., Keith, M., Kilgore, K. L., et al. (1999). Satisfaction with and usage of a hand neuroprosthesis. Archives of Physical Medicine and Rehabilitation, 80, 206–213.

Zabala, J. (2001). The SETT framework. Retrieved August 26, 2005, from http://atto.buffalo.edu/registered/ATBasics/Foundation/Assessment/sett.php

