
CYBERPSYCHOLOGY & BEHAVIOR

Volume 3, Number 1, 2000
Mary Ann Liebert, Inc.

Evaluating Web-Based Learning Environments: Strategies and Insights

RONALD D. OWSTON, Ph.D.

Center for the Study of Computers in Education, York University, York, Ontario, Canada

ABSTRACT

As the demand for Web-based courses grows, so too will the need for the systematic evaluation of these new learning environments. This article describes four separate evaluation studies of Web-based programs conducted by the author that illustrate the wide range of evaluation methodologies and tools available to the evaluator. The studies also underscore the need to adapt the evaluation methodology to meet the requirements of stakeholders. Through continual experimentation with mixed-method evaluation strategies and new tools, evaluators will be able to contribute significantly to our understanding of learning on the Web.


INTRODUCTION

Over the last several years the growth of Web-based courses has been explosive, and there is every indication that the trend will continue for the foreseeable future. At the high school level, many school districts now provide courses on the Web for students whose own school is not able to offer regular face-to-face instruction in particular subjects because of low enrollment or scarce resources. On a broader scale, the Virtual High School offered 29 credit courses to 500 students in 10 states in 1997–1998,[1] and by 2002, the CLASS Project at the University of Nebraska-Lincoln[2] plans to make available a complete, accredited high school diploma sequence containing 54 interactive Web-based courses. At the university and college level, Peterson's Guide to Distance Learning Programs[3] lists some 900 institutions in the United States and Canada that offer distance education courses. Most of these institutions have either moved their traditional correspondence offerings to the Web or are planning to do so.

Despite the widespread adoption of this technology by our educational institutions, we know very little about Web-supported pedagogy. Owston[4] cautions that before public institutions rush to embrace the innovation we need to be able to answer three questions: (1) Does the Web increase access to learning? (2) Can the Web promote improved learning? and (3) Can increased access and improved learning be attained without increasing the costs of education? He adds that unless we have evidence of satisfying these criteria we may be doomed to promoting just another educational bandwagon. There is room for some optimism, however, as some promising results are now emerging from large-scale efforts to assess the impact of Web-based programs on access to learning opportunities,[5] on student learning and teacher practice,[1,6,7] and on the cost–benefits that ensue.[5,8] Nevertheless, much of the literature in the field still tends to be anecdotal and falls short of asking critical questions about teaching and learning;[9] hence continued efforts to evaluate Web-based learning are essential.

Several frameworks have been proposed for assessing online learning. For example, Riel and Harasim[10] proposed three areas on which data collection might focus: the structure of the network environment, the social interaction that occurs during the course or project, and the effects of the experience on individuals. Bates'[11] ACTION model calls for comparison of two or more instructional delivery modes on the basis of access, costs, teaching functions, interaction and user-friendliness, organizational issues, novelty, and speed of course development/adaptation. And Ravitz[12] suggests a framework that encourages the assessment of a project's evolution through interactive discussion, continual record keeping, and documentation. Although these models do provide some guidance on what factors evaluators should attend to and how they may go about doing so, no single model or framework is likely to capture satisfactorily the complexity of pedagogical, technical, organizational, and institutional issues inherent in Web-based learning. Therefore, as we proceed with evaluative efforts, evaluators must be prepared to employ a wide range of evaluation models, methods, and strategies that take into account the salient features of the particular learning environment and stakeholder needs.

In this article, I describe four evaluation studies of Web-based learning environments that my colleagues and I have undertaken recently. For each, I provide an overview of the project, its methodology, and a discussion of the insights we gained by using the particular evaluation strategy. My goal is to illustrate the application of a variety of strategies, tools, and methodologies that can be employed to evaluate Web-based learning. I begin by describing a clustered study of teacher practices in two online learning projects. This will be followed by a description of a formative evaluation of a multimedia-rich tutorial developed for a university undergraduate course. The third project is an institutional analysis of a large Web-based college program. I conclude by describing an assessment of an English-as-a-second-language course offered on the Web.

A CLUSTERED STUDY OF TWO ONLINE PROJECTS

Project overview

In this study,[7] we investigated the interrelationship of implementation, pedagogical perspectives and practices, and perceived outcomes in two Canadian national telelearning projects based on different pedagogical models and delivery systems, one of which was judged by participating teachers to be more successfully implemented than the other. By means of a comparative analysis of the two programs, we sought to determine what teacher and practice factors beyond training and support contributed to the relative success and failure of the two programs and to describe how these factors interacted with traditional implementation concerns. Grounded in this analysis, a theoretical model of project implementation was then articulated and contrasted with other views of implementation.

The two projects we studied were Writers in Electronic Residence (WIER) and the Satellite Networked Schools (SNS) project. WIER uses a Web-based conferencing system to link English and language arts students to Canadian authors, teachers, and each other for the exchange and discussion of students' original compositions. WIER is a relatively large telelearning network by Canadian standards, involving the participation of up to 120 classes in any given year from all areas of the country and including students ranging from the junior elementary to the senior high school levels. It is also one of the few projects at a mature stage of development, having been in operation for over 12 years. The SNS project linked three Canadian schools via digital satellite to a commercial curriculum content provider headquartered in the United States. Although the SNS service was well established in many schools in the United States at the time of the research, only three Canadian schools had been connected. We studied all three of these sites during their first and second years of implementing the service.

WIER has three primary learning objectives: (1) making use of computer and network media to enhance students' creative autonomy and to broaden the scope and shape of classroom experience; (2) helping students to (re)consider the value of revision in the writing process and the students' role in using language to interpret and understand as well as be understood; and (3) prompting these novice writers to revisit their creative efforts in the light of the ideas that they receive and generate in their conferencing interactions with both author–mentors and their peers. Students "post" drafts of creative works into forums that the assigned author reads and then responds to, posting back comments and suggestions for revision to the student (which others accessing the conference are free to read). Students also read and respond to work posted by other schools. WIER strongly encourages participating teachers to require every student who submits a composition to provide a written response to two other student works.

SNS is a curriculum resource delivery service that was purchased by the schools primarily with the goal of enhancing mathematics and science teaching and learning. The service provided teachers in these schools unlimited access to its collection of over 12,000 videos indexed to the K–12 curriculum. Teachers interacted with company personnel via a two-way television channel to search out and select relevant videos. In addition, the company would develop "custom curriculum" upon a teacher's request, incorporating new or existing videos and live interaction with a subject specialist if desired. As part of its service, the company also provided a website, printed curriculum resources via a faxback service, and a selection of interactive software in CDI (Compact Disk Interactive) format. To ensure that all elements of the service interoperated, the company sold a complete turnkey installation to schools, including a file server, networked computers with Internet connections, fax machines, digital satellite receivers and an antenna, TV monitors, and VCRs.

Methodology

In order to investigate teachers' experiences in the WIER and SNS programs and to collect their reflections on program implementation and its effects, interviews were conducted with staff directly participating in the programs. For WIER, we interviewed teachers using the program in eleven participating schools across Canada. These teachers had recently completed, or were completing, twelve weeks of WIER activities with at least one class in either creative writing or language arts. (The normal duration of a WIER project is twelve weeks; however, schools may elect to participate in many projects.) In the case of the SNS program, we conducted preliminary interviews of teachers and principals at each of the three schools at the beginning of the project and returned a year later to interview the same staff on their experiences during the intervening period. A total of 38 partially structured interviews were conducted. All interviews for both programs were audiotaped and transcribed.

A qualitative coding and analysis of the transcripts suggested that at the end of 2 years the SNS project was perceived by most teachers to be only a very limited success. This was especially true at the two high school sites. On the other hand, WIER was roundly praised by teachers as being a very successful online experience. The apparent success or failure of a program is a function of many factors; however, this study focused on the teacher-related dimensions that determine the success of an online program. We found that the SNS and WIER programs could be differentiated on two factors related to teachers beyond training and support. These factors were (1) the teachers' perceptions of the value of the program for students and (2) the congruence between the pedagogy implicit in the program and the teachers' own practices. Our conclusion was that these two aspects, together with training and support, played a significant role in determining the ultimate success and sustainability of the program.

Discussion

We began this research as two separate multisite studies of teacher practices that coincidentally started during the same year. Originally there was no connection between the WIER and SNS projects other than the fact that the same researchers were working on both projects. It became apparent fairly early on that SNS was not proceeding particularly well in its implementation. Therefore, we felt that more could be learned by comparing SNS to WIER, which was proceeding well, rather than by continuing to study SNS on its own. Now that the project is completed, we are convinced of the wisdom of that decision, as we would not otherwise have been able to develop our understanding of the interaction between teacher training and support, perceptions of value to students, and pedagogical congruence.

Our experience with the WIER/SNS project supports Kozma and Quellmalz's[13] position on the value of clustering network-based projects for evaluation. By clustering projects with generally similar goals, the evaluator can make use of common instruments and data collection procedures, and can aggregate the projects for most analyses and interpretations. Kozma and Quellmalz see "cluster evaluation" not only as a cost-effective method of evaluating several projects simultaneously, but as a way to promote sharing of information among the projects' stakeholders to improve performance and effectiveness. A further advantage they suggest is that clustering will encourage the development of communities of practice "that can share effective project strategies and lessons learned."

VITAL FORMATIVE EVALUATION

Project overview

VITAL is a project of York University's CulTech Collaborative Research Centre aimed at developing university-level multimedia tutorials for broadband networks. We were asked to conduct a formative evaluation of one of the VITAL tutorials designed for the course Introduction to Computer Technology, a required course in the undergraduate programs of several non-computer-science majors at York University.[14] The primary goals of this formative evaluation were (1) to provide the instructor and the VITAL development team at York with data regarding the uses students made of the VITAL tutorial during the semester, (2) to provide these stakeholders with an analysis of students' experiences with and responses to the online tutorial, and (3) to make suggestions for program modification on the basis of these data and analyses. Participation in the study and, indeed, use of the VITAL tutorial was voluntary; however, students were told that the tutorial might help them because it was directly related to their course.

Methodology

Four focus group sessions (n = 16) were held on campus during the week prior to the final week of classes. The focus group leader followed a semi-structured interview protocol that probed students' attitudes and reflections about all aspects of their use of the VITAL system. The group sessions were audiotaped, transcribed, and coded, with indexical counts being developed where possible. A brief questionnaire also was distributed to all students enrolled in the course lectures in the penultimate week of classes that asked students if they had accessed the VITAL multimedia tutorial and, if so, for how long.

To ascertain more precisely the patterns of use of the VITAL tutorial, the log files generated by the course Web server were analyzed using WebTrends Log Analyzer software (http://www.webtrends.com). Log files normally contain data that allow analysis software to obtain records of usage of the website, including such data as pages most frequently accessed, hours of peak usage, location from which a user accessed the server, length of an individual's session, and the paths most frequently taken through a website by a user. Figure 1 shows a typical output file from the analysis. The graph shows that Wednesdays were a peak day of usage of the tutorial.


FIG. 1. Sample output from WebTrends Log Analyzer software.
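To give a concrete sense of what such log file analysis involves, the following is a minimal Python sketch of the kind of summary a tool like WebTrends produces. It assumes the server writes Apache Common Log Format; the regular expression, the file name access.log, and the two summaries chosen are illustrative assumptions, not details from the study.

import re
from collections import Counter
from datetime import datetime

# Matches the start of an Apache Common Log Format line:
# host ident authuser [date] "method path protocol" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)'
)

def summarize(log_path):
    hits_by_weekday = Counter()
    hits_by_page = Counter()
    with open(log_path) as f:
        for line in f:
            m = LOG_PATTERN.match(line)
            if not m:
                continue  # skip malformed lines
            # CLF timestamp, e.g. 10/Mar/1999:14:02:36 -0500 (drop the zone)
            ts = datetime.strptime(m.group("time").split()[0],
                                   "%d/%b/%Y:%H:%M:%S")
            hits_by_weekday[ts.strftime("%A")] += 1
            hits_by_page[m.group("path")] += 1
    return hits_by_weekday, hits_by_page

weekdays, pages = summarize("access.log")  # hypothetical log file name
print("Peak day:", weekdays.most_common(1))
print("Top pages:", pages.most_common(5))

A commercial analyzer adds session reconstruction and path analysis on top of counts like these, but the underlying data source is the same line-per-request log.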


With both the qualitative and quantitative data from the questionnaires and log file analysis, we were able to report to stakeholders that students did not use the tutorial as often as the stakeholders anticipated, nor as often as they reported in the focus groups. Nevertheless, those who did make use of it found that the tutorial was easy to navigate, flowed in a logical manner, and enhanced their interest in and learning of certain course topics. We also reported on many specific suggestions for the tutorial content and ways in which the tutorial could be better integrated into the course.

Discussion

A lesson learned in this study was the importance of triangulating data. Triangulation involves combining data sources to study the same phenomenon for the purpose of improving the inferences from the data.[15] Triangulation may be done by having different observers study the same phenomenon, by using different observation methods (e.g., field observation and interviews), or by using qualitative and quantitative methods to study the phenomenon. Our study employed qualitative techniques (focus group interviews) and quantitative techniques (log file analysis) to study students' usage patterns. Not surprisingly, perhaps, we found that students over-reported their use of the system. Moreover, we found that when we asked questions about specific pages in the tutorial, more students commented on the pages than could possibly have viewed them according to the log file analysis. When we discovered this, we returned to the interview transcripts and attempted to identify those comments that appeared to be most likely based on fact. The end result was that we were able to make recommendations to the stakeholders with a good measure of confidence.
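As a toy illustration of this kind of cross-check, the fragment below compares each participant's self-reported number of tutorial sessions against the count recovered from the server logs. Every identifier and number here is invented for illustration; it simply shows the triangulation logic.

# Hypothetical self-reports from focus groups vs. counts from log files.
self_reported_sessions = {"s01": 10, "s02": 4, "s03": 8}
logged_sessions = {"s01": 6, "s02": 4, "s03": 2}

# Flag participants whose self-reports exceed what the logs can support.
for student, claimed in self_reported_sessions.items():
    logged = logged_sessions.get(student, 0)
    status = "over-reported" if claimed > logged else "consistent"
    print(f"{student}: reported {claimed}, logged {logged} -> {status}")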

AN INSTITUTIONAL ANALYSIS OF ONLINE COLLEGE COURSES

Project overview

Atkinson College is a large undergraduate college affiliated with York University with a mandate to educate working adults. Until several years ago, many of its courses were offered in both face-to-face and traditional print correspondence modes. Then the college undertook a major initiative to offer courses via the Web, initially using an external Web hosting service and, most recently, using Lotus LearningSpace on its own servers. For each Web course, faculty typically mount a weekly study schedule, weekly readings, lecture notes, and occasionally graphics. All courses have an electronic discussion room for the instructor and students, although the use of the room varies widely from instructor to instructor. As the college was planning to expand its Internet offerings, my colleague and I were asked to provide some indication of how the Internet students were faring academically and to determine their level of satisfaction with the courses.

The budget for the evaluation was modest; therefore, we were restricted to analyzing existing institutional data that were available to us. These included final course grades obtained through the registrar's office and course evaluations gathered from an online form. Our results[16] revealed that students in Internet and face-to-face classes achieved significantly higher grades (p < 0.02) than those in correspondence courses; however, no significant differences were found between Internet and face-to-face classes (p > 0.05). Student satisfaction with the Internet courses ran high, with some 70% of respondents rating their course as "average" or "better than average" relative to other courses they had taken at the college.

Methodology

To assess academic achievement we did a one-way analysis of variance comparing course modes (correspondence, n = 2,127; face-to-face, n = 2,262; Internet, n = 971) with achievement (expressed in grade points) as the dependent variable. Included in the analysis were data from the last three offerings in each mode of 12 undergraduate courses. Evaluation data for each question were summarized, and several questions were cross-tabulated.
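As a sketch of how such an analysis might be run, the following Python fragment performs a one-way ANOVA with delivery mode as the factor and grade points as the dependent variable. Only the group sizes come from the study; the 9-point grade scale and the simulated distributions are assumptions standing in for the registrar's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated grade points on an assumed 9-point scale; group sizes
# (2,127 / 2,262 / 971) are from the study, the distributions are not.
correspondence = rng.normal(5.0, 1.8, 2127).clip(0, 9)
face_to_face = rng.normal(5.5, 1.8, 2262).clip(0, 9)
internet = rng.normal(5.5, 1.9, 971).clip(0, 9)

# One-way ANOVA: does mean achievement differ across delivery modes?
f_stat, p_value = stats.f_oneway(correspondence, face_to_face, internet)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")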

Although the methodology for this study was very straightforward, its simplicity belies the difficulty we had in obtaining accurate achievement data. Atkinson College had to search its records to identify courses that were offered in all three modes. Fortunately, both Internet and correspondence courses had unique course section codes; the codes for the face-to-face sections were, by implication, any other course code. Once the courses were identified, the Registrar's Office had to create a file, appropriate for input into SPSS data analysis software, containing the raw data. Several versions of the file had to be created because of human errors in coding before we were satisfied with its accuracy. This process took about 2 months.

The evaluation data for the Internet courses were obtained very readily, although the number responding to the online form was disappointingly low (about 10%). Normally, in face-to-face courses the return rate is very high (about 90%) because students complete the forms in class.

Discussion

We noticed an unusual phenomenon during our data analysis: the Internet courses had a higher proportion of zero scores (11% for Internet courses vs. 8% for the other two instructional modes). Upon checking, we found that these zeroes were not missing data but indeed were Fail grades. We discussed the matter with the college administrator responsible for distance education courses, and by manually checking some incomplete grades we discovered that the failures occurred largely because students did not write their final exam. This suggested that the Internet courses had a higher dropout rate than the other modes of delivery. We could not verify this observation because the official university record system does not distinguish between students who complete a course but fail and students who drop out and automatically receive a failing grade. Nevertheless, we ran another analysis of the data with the zeroes for all modes removed. This time all simple post hoc comparisons were significant (p < 0.001): Internet course achievement was significantly higher than face-to-face achievement, which in turn was significantly higher than correspondence-mode achievement.
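A sketch of that re-analysis follows: drop the zero grades (the presumed dropouts) from every mode, then run all simple pairwise comparisons. The simulated data and the Bonferroni-corrected t-tests are illustrative assumptions; the article does not specify which post hoc procedure was used, only the zero-score rates.

from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate(n, mean, zero_rate):
    """Simulated grade points with a share of zeros standing in for dropouts."""
    grades = rng.normal(mean, 1.8, n).clip(0.5, 9.0)
    grades[rng.random(n) < zero_rate] = 0.0
    return grades

# Zero-score rates observed in the study: 11% Internet, 8% elsewhere.
modes = {
    "correspondence": simulate(2127, 5.0, 0.08),
    "face-to-face": simulate(2262, 5.4, 0.08),
    "internet": simulate(971, 5.8, 0.11),
}

# Re-analysis: remove the zeros, then compare every pair of modes.
nonzero = {name: g[g > 0] for name, g in modes.items()}
pairs = list(combinations(nonzero, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(nonzero[a], nonzero[b])
    print(f"{a} vs {b}: t = {t:+.2f}, "
          f"Bonferroni p = {min(p * len(pairs), 1.0):.4g}")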

This finding has two important implications for evaluators. First, it highlights the need to always check quantitative data for any anomalies or outliers. This is considered routine good practice in qualitative research, but it can easily be overlooked in the rush to complete a study. Second, it again emphasizes the value of using mixed methodology in a study, something we did not do. Ideally, we should have been able to interview students who participated in the Internet courses but did not complete them. Budget limitations prevented us from doing this; however, if we had, we would have been able to determine the reasons why students dropped out before the final exam. We then would have been able to place more confidence in the results of the re-analysis, the implications of which are considerable for the college.

THE NELL PROJECT: A MIXED-METHOD QUASI-EXPERIMENTAL DESIGN

Project overview

The Networked English Language Learning (NELL) Project staff in the Toronto District School Board developed a Web-based course designed for intermediate-level adult English-as-a-second-language learners. The goal of the course was to help participants, most of whom were newly arrived in Canada, develop sufficient English language competency that they could find employment as skilled white-collar workers. The course used Simon Fraser University's Virtual U as a template. In the course, students had online access to multimedia tutorials, daily assignments, asynchronous conferencing, and the capability of sending E-mail messages with voice file attachments to each other and the instructor. In addition, the instructor was available online for live chats to help students with either language or computer problems. My colleagues and I were engaged to evaluate the course from the point of view of its effectiveness in helping students develop their language skills and to provide feedback for course improvement.[17] Our results showed that students with limited computer background learned how to use the website and associated online tools remarkably fast despite their complexity. At the end of the study we found that, relative to their peers who received traditional face-to-face classroom instruction, the Web-using students made educationally significant gains not only in English reading and writing skills, which were practiced extensively, but also in listening and speaking, which were less well practiced. Additionally, they were very favorably disposed to learning English via the Web.

Methodology

Participants in the study were enrolled in a full-time ESL day program for intermediate-level speakers. Twenty-four volunteers from the program were selected to participate in the study. Half of the volunteers were selected by the NELL project staff to follow the Web-based program; the other half formed a control group. (This number of students was chosen because the program had only 12 computers available for the study.) Assignment to the experimental or control group was done so that the groups were matched in language proficiency. Each day of the school week, for half a day, the treatment group followed the Web-based program in the school's computer lab. For the remaining half-day they joined the control group, who were participating in traditional full-day face-to-face classroom instruction. The study lasted for 8 weeks.
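The article does not describe the matching procedure in detail; the sketch below shows one conventional way to form proficiency-matched groups, sorting volunteers by a pretest score and splitting each adjacent pair at random. The function name and the score data are hypothetical.

import random

def matched_assignment(scores):
    """Split students into two groups matched on proficiency.

    Sort by score, then send one member of each adjacent pair to each
    group, randomizing which member goes where.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    treatment, control = [], []
    for i in range(0, len(ranked), 2):
        pair = ranked[i:i + 2]
        random.shuffle(pair)  # randomize within the matched pair
        treatment.append(pair[0])
        if len(pair) > 1:
            control.append(pair[1])
    return treatment, control

# Hypothetical pretest scores for the 24 volunteers.
scores = {f"student{i:02d}": random.uniform(40, 80) for i in range(24)}
web_group, classroom_group = matched_assignment(scores)
print(len(web_group), len(classroom_group))  # 12 and 12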

At the beginning and end of the study, participants' English listening, speaking, reading, and writing skills were assessed using the Canadian English Language Benchmarks Assessment, an individually administered language assessment tool designed for adults. As the study progressed, we regularly observed students as they worked on the computers to determine how well they were able to navigate the website, use the course tools, and respond to the daily lessons. Transcripts of their chat sessions and E-mail messages were obtained to trace the development of written language fluency, to document difficulties they had with understanding the website content and organization, and to obtain a record of the kinds of technical problems they may have experienced. At the end of the course, group interviews were conducted to determine student perceptions of their online learning experience. We grouped students by first language for the interviews and employed an interpreter to assist when students had difficulty expressing themselves in English.

Because of the small number of students in the study, we were not able to make statistical comparisons between the control and experimental groups on language scores. Instead, we relied on the professional judgement of an experienced ESL educator to assess whether the growth on the Benchmarks was greater than what would normally be expected for either the control or the experimental group. (Her assessment was that, indeed, the experimental group did perform significantly better than expected on all skills measured.) For all qualitative data, we developed themes that emerged from multiple readings of the field notes and then coded, analyzed, and wrote up the themes.

Discussion

This study, although traditional in its overall design, provided the stakeholders with the kinds of evaluative data they sought. The stakeholders wanted answers to the same questions that I posed at the beginning of this article, namely: Can the Web be used as a medium for making ESL instruction accessible? Can students learn English using the Web at least as well as with classroom-based instruction? And can the costs be contained? On the basis of our admittedly limited study, we were able to provide the stakeholders with sufficient data demonstrating the feasibility of Web-based ESL instruction to satisfy the first two questions. The stakeholders did not specifically engage us to undertake a cost analysis; however, our suspicion was that the course certainly was not financially viable when offered to the small number of students who participated. We believe, however, that there is a reasonable opportunity for the course to become financially viable if offered to a much larger number of students over several years.

IMPLICATIONS FOR EVALUATORS

Two implications helpful to the evaluation of Web-based learning arise from our work. First, my colleagues and I have increasingly come to appreciate the value of mixing quantitative and qualitative methodologies when evaluating Web-based learning. All four projects described in this article used different mixes of quantitative and qualitative methodologies: the Atkinson study was largely quantitative; the WIER/SNS study was largely qualitative; and the other two studies made more or less equal use of both approaches. We believe that by mixing methods there is greater potential for capturing and understanding the richness and complexity of Web-based learning environments than if either approach is used alone. While some methodologists may argue against mixing research paradigms,[18,19] we support Tashakkori and Teddlie's[20] pragmatic stance that stresses the importance and predominance of the research question over the paradigm. This approach frees the evaluator to choose whatever methods are most appropriate to answer the question once it is articulated.

Second, our initial use of WebTrends, the server log file analysis tool, in the VITAL study suggests a promising avenue of data collection for the evaluator. Builders of websites frequently want answers to questions such as what pages are viewed most often, what paths users typically take through the site, how long users stay at the site, and what links are clicked on most often. WebTrends and the host of other similar tools on the market can be used to answer these and many other similar kinds of questions. The tools typically provide output from these analyses in HTML and Microsoft Word format for direct importing into reports. Evaluators must be cautioned, however, that the log files generated by Web servers are extremely large; busy sites might generate files many gigabytes in size every day. Therefore, the evaluator needs to plan in advance for the storage of these files (system administrators frequently delete the files if they are not asked to keep them). A plan for periodically sampling the files throughout the duration of a course, as sketched below, might be a very practical solution to managing the files. Nevertheless, we encourage evaluators to consider log files as a potentially rich data source for their work.
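A minimal sketch of such a sampling plan follows, assuming a Unix-style server: copy the day's log into evaluator-controlled storage whenever the day falls in the sample. The paths, file names, and the one-week-per-month rule are all assumptions for illustration, not a prescription.

import shutil
from datetime import date
from pathlib import Path

ARCHIVE = Path("/data/evaluation/log-archive")  # hypothetical storage location

def archive_if_sampled(log_file: Path, day: date) -> bool:
    """Copy the day's server log to evaluator storage if the day is sampled.

    Sampling rule (assumed for illustration): keep the first seven days
    of each month for the duration of the course.
    """
    if day.day > 7 or not log_file.exists():
        return False
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    shutil.copy2(log_file, ARCHIVE / f"access-{day.isoformat()}.log")
    return True

# Run daily (e.g., from cron) before the server rotates its logs.
archive_if_sampled(Path("/var/log/httpd/access.log"), date.today())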

As the demand for Web-based learning grows, so too will the need for the systematic evaluation of these new learning environments. The four studies described in this article illustrate the wide range of evaluation tools and methodologies available to the evaluator and the need to adapt the methodology to meet the requirements of stakeholders. Through continual experimentation with various evaluation strategies and new tools, evaluators will be able to contribute significantly to our understanding of learning on the Web.

REFERENCES

1. Kozma, R.B., Zucker, A.A., & Espinoza, C. (1998). An evaluation of the Virtual High School after one year of operation. Arlington, VA: SRI International. Online document: http://128.18.30.66/policy/ctl/html/vhs.htm

2. University of Nebraska-Lincoln. (1999). Class project overview. Online document: http://class.unl.edu/

3. Peterson's guide to distance learning programs. (1999). Princeton, NJ: Author.

4. Owston, R.D. (1997). The World Wide Web: A technology to enhance teaching and learning? Educational Researcher, 26(2):27–33.

5. Jewett, F. (1998). Case studies in evaluating the benefits and costs of mediated and distributed learning. Paper presented at the Third Annual Conference of the TeleLearning Network of Centres of Excellence, Vancouver, BC. Online document: http://www.cal-state.edu/special-projects/

6. Means, B., Coleman, E., Lewis, A., Quellmalz, E., Marder, C., & Valdes, K. (1997). Globe year 2 evaluation: Implementation and progress. Menlo Park, CA: SRI International. Online document: http://128.18.30.66/policy/ctl/html/globe.htm#evaluation

7. Owston, R.D., & Wideman, H.H. (1998). Teacher factors that contribute to implementation success in telelearning networks (Centre for the Study of Computers in Education Technical Report No. 98-3). Toronto, ON: York University, Faculty of Education. Online document: http://www.edu.yorku.ca/csce

8. Bartolic-Zlomislic, S., & Bates, T. (1998). Assessing the costs and benefits of telelearning: A case study from the University of British Columbia. Vancouver, BC: Distance Education and Technology, University of British Columbia. Online document: http://research.cstudies.ubc.ca

9. Windschitl, M. (1998). The WWW and classroom research: What path should it take? Educational Researcher, 27(1):28–33.

10. Riel, M., & Harasim, L. (1994). Research perspectives on network learning. Machine-Mediated Learning, 4(2&3):91–113.

11. Bates, A.W. (1995). Technology, open learning and distance education. London: Routledge.

12. Ravitz, J. (1998). Evaluating learning networks: A special challenge for Web-based instruction. In: Khan, B. (ed.), Web-based instruction. Englewood Cliffs, NJ: Educational Technology Publications, pp. 361–368.

13. Kozma, R.B., & Quellmalz, E. (1996). Issues and needs in evaluating the educational impact of the National Information Infrastructure. Online document: http://www.ed.gov/Technology/Futures/kozma.html

14. Wideman, H.H., Owston, R.D., & Quann, V. (1998). A formative evaluation of the VITAL tutorial "Introduction to Computer Science" (Centre for the Study of Computers in Education Technical Report No. 98-1). Toronto, ON: York University, Faculty of Education. Online document: http://www.edu.yorku.ca/csce

15. Patton, M.Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.

16. Owston, R.D., & Wideman, H.H. (1999). Internet-based courses at Atkinson College: An initial assessment (Centre for the Study of Computers in Education Technical Report No. 99-1). Toronto, ON: York University, Faculty of Education. Online document: http://www.edu.yorku.ca/csce

17. Wideman, H.H., Owston, R.D., Handscombe, J., & Solomon (1999). Web-based ESL learning: An evaluation of the Networked English Language Learning Project (Centre for the Study of Computers in Education Technical Report No. 99-2). Toronto, ON: York University, Faculty of Education. Online document: http://www.edu.yorku.ca/csce

18. Smith, J.K., & Heshusius, L. (1986). Closing down the conversation: The end of the quantitative-qualitative debate among educational researchers. Educational Researcher, 15(1):4–12.

19. Lincoln, Y.S., & Guba, E.G. (1985). Naturalistic inquiry.Beverly Hills, CA: Sage.

20. Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Thousand Oaks, CA: Sage.

Address reprint requests to:
Ronald D. Owston
Center for the Study of Computers in Education
York University
4700 Keele St.

Toronto, Ontario, Canada M3T 1P3

E-mail: [email protected]
