
Clinical Simulation in Nursing (2012) 8, e375-e382

www.elsevier.com/locate/ecsn

Featured Article

Piloting a Method for Comparing Two Experiential Teaching Strategies

Katie Anne Adamson, PhD, RN*
Washington State University College of Nursing, Spokane, WA 99210-1495, USA

KEYWORDS: case study; experiential learning; simulation

*Corresponding author: kaadamson@wsu.edu (K. A. Adamson).

Abstract

Background: Experiential learning activities allow students to apply abstract concepts in realistic patient-care scenarios. The knowledge, values, and abilities that are essential to nursing practice require aptitude in the affective, cognitive, and psychomotor domains. Therefore, research aimed at assessing the effectiveness of teaching strategies should address how they affect learning outcomes in each of these domains.
Method: This pilot study used a quasi-experimental, nonequivalent comparison, modified Solomon four-group design. A total of 14 student participants were exposed to either case study clinical conference or human patient simulation learning activities. Cognitive, affective, and psychomotor learning outcomes were measured and compared between groups.
Results: No significant differences in cognitive, affective, or psychomotor learning outcomes were detected between the groups.
Conclusions: There is much to be learned from the design, methods, analyses, and results of this study. As a pilot, this study tested the feasibility of coordinating and carrying out the complex study procedures and provided data for a power analysis to inform future research.

Cite this article: Adamson, K. A. (2012, October). Piloting a method for comparing two experiential teaching strategies. Clinical Simulation in Nursing, 8(8), e375-e382. doi:10.1016/j.ecns.2011.03.005

© 2012 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved.

Introduction

Kolb’s (1984) definition of learning describes knowledge as being generated through experience. This concept of experiential learning provides a theoretical foundation for many teaching strategies used in nursing education. Experiential learning activities, including case studies and, more recently, human patient simulations (HPSs), have been used extensively to help students apply abstract concepts in realistic patient-care scenarios. Both case studies and HPS incorporate interactive approaches and constructivist principles to engage learners. However, little research has been completed that compares learning outcomes between these two experiential teaching strategies.

Technological advances, including the expanding use of high-fidelity HPS, could change the face of clinical nursing education. However, research has not adequately compared emerging, high-tech teaching strategies such as HPS with other, more traditional teaching strategies (Jeffries, 2005). To improve nursing education and patient care for the future, studies about the use of HPS and other innovative teaching strategies in nursing education must focus on learning outcomes (Decker, Sportsman, Puetz, & Billings, 2008). The body of literature documenting the benefits of HPS in nursing education is growing (Melnyk, 2008; Starkweather, Kardong-Edgren, & Ward, 2008). Yet there is a lack of rigorous experimental studies comparing learning outcomes from HPS with learning outcomes from other, more traditional teaching strategies.

Key Points
• There is little evidence supporting the use of one experiential teaching strategy (CCC or HPS) over the other.
• A modified Solomon four-group design allows the effects of the intervention and the pretests to be separated, and the use of established teaching and evaluation strategies enhances the strength of this study.
• This study found nonsignificant differences between learning outcomes from CCC and HPS, though the probability of a Type II error was high.

Case studies have been used in medical education since the 1700s (Tomey, 2003) and have more recently gained popularity in nursing education. The case-study approach has been praised for helping students practice critical thinking skills, providing context for applying theoretical knowledge, and providing a safe environment in which students can practice problem solving (Rowles & Brigham, 2005). Various authors have developed derivations of case studies, including case method and unfolding case scenarios for teaching and evaluation in nursing education (Oermann & Gaberson, 2006). Problem-based learning and the use of case-based teaching were extensively praised in the Carnegie Foundation’s most recent report (Benner, Sutphen, Leonard, & Day, 2010). Howard (2007) measured and compared learning outcomes, including knowledge acquisition, critical thinking skills, and student perceptions of learning activities, from interactive case-study teaching with learning outcomes from HPS. Although studies consistently demonstrate that students enjoy HPS activities, few, such as Howard’s, show significant differences in learning outcomes between HPS and other teaching strategies.

Researchers have measured learning outcomes from HPS using various indicators, including the acquisition of individual clinical skills such as medication administration (Bearnson & Wiker, 2005) and improvement in safe patient handling (Beyea & Kobokovich, 2004). Learning outcomes have been described by unstructured student evaluations (Arundell & Cioffi, 2005), author-designed evaluation tools (Arundell & Cioffi, 2005; Bearnson & Wiker, 2005; Oermann, 2009), measures of student perceptions of learning (Schoening, Sittner, & Todd, 2006), and measures of student satisfaction ratings of simulation experiences (Block, Lottenberg, Flint, Jakobsen, & Liebnitzky, 2002; Schoening et al., 2006). However, these studies, and others like them, have not addressed the global learning outcomes that are relevant to nursing practice: development of knowledge, skills, and values in the affective, cognitive, and psychomotor domains.

Experiential teaching strategies, including case-study clinical conferences (CCCs) and HPS activities, provide unprecedented opportunities for affecting the way students think about and perform patient care. However, if educators are unable to evaluate learning outcomes beyond student satisfaction or performance on cognitive assessments, they may be missing an opportunity to evaluate how these activities truly affect student learning. In the absence of such evaluation data, educators will be ill prepared to make evidence-based pedagogical decisions.

One commendable example of research comparing learning outcomes between two experiential teaching strategies is Howard’s (2007) study of differences in knowledge acquisition, critical thinking, and student perceptions between students who participated in HPS and interactive case study (ICS) activities. This research (later published as Howard, Ross, Mitchell, & Nelson, 2010) involved 49 baccalaureate and diploma nursing students. The researchers used Health Education Systems Incorporated (HESI) examination questions to measure students’ knowledge and critical thinking and an author-designed Simulation and Case Study Evaluation Survey to measure students’ perceptions of the learning activities. The study yielded important, yet curious, results, including a decrease in the posttest HESI scores of the group of students who participated in the ICS activities. Overall, the study demonstrated superior learning, critical thinking, and student perceptions related to the HPS learning activities. This evidence suggests that teaching with HPS is a better strategy than ICS for preparing nursing students for practice.

To enhance research related to the effectiveness of teaching in nursing education, investigators must continue to strive to measure how experiential teaching strategies contribute to the overarching goal of nursing education, which is to prepare nurses for practice. By comparing learning outcomes from two experiential teaching strategies, this study may help build the evidence base for teaching in nursing by revealing whether one is superior to the other for teaching complicated medical–surgical content such as the care of a patient with congestive heart failure (CHF) experiencing acute coronary syndrome. The knowledge, values, and abilities that are essential to nursing practice require aptitude in the affective, cognitive, and psychomotor domains. Therefore, research aimed at assessing the effectiveness of teaching strategies in nursing education should address how the strategies affect learning outcomes in each of these domains.

The purpose of this pilot study was to compare cognitive, affective, and psychomotor learning outcomes between two experiential teaching strategies, CCCs and HPS, in a sample of senior baccalaureate nursing students. These two teaching strategies were chosen for comparison because they both apply experiential learning theory. They are similar, with the exception of the physical activity involved in the HPS strategy. Both teaching strategies involved the use of a patient scenario including a hypothetical patient chart (with history and physical, cultural information, laboratory values, recent assessment findings, etc.), and students in both groups interacted with the instructor and other students.

Figure. Study design. All 14 fourth-semester BSN students completed a self-directed learning module prior to randomized assignment to the case-study clinical conference or human patient simulation activities; pretests and posttests used Elsevier, Inc. test questions and NLN surveys. Group A completed pretest knowledge exams and Student Satisfaction and Self-Confidence in Learning© surveys, participated in the case-study clinical conference activities, and then completed the posttest knowledge exams and Student Satisfaction and Self-Confidence in Learning© survey. Group B participated in the case-study clinical conference activities and then completed the posttest knowledge exams and Student Satisfaction and Self-Confidence in Learning© survey. Group C participated in the human patient simulation activities and then completed the posttest knowledge exams and Student Satisfaction and Self-Confidence in Learning© survey. Group D completed pretest knowledge exams and Student Satisfaction and Self-Confidence in Learning© surveys, participated in the human patient simulation activities, and then completed the posttest knowledge exams and Student Satisfaction and Self-Confidence in Learning© survey. Randomly selected participants completed a standardized patient performance evaluation. Note. NLN = National League for Nursing.

Method

This quasi-experimental, nonequivalent comparison group study used a modified Solomon four-group design in which, instead of an experimental and a control group, there were two experimental groups that received different interventions: either CCC or HPS. The benefit of the modified Solomon four-group design in this study was that some of the participants who received each of the interventions were pretested and some were not. This design allowed the effects of the interventions (CCC or HPS) and the potential effects of the pretests to be separated. Approval for the study was obtained through the university’s institutional review board prior to initiation of recruitment or study procedures. The investigators and assistants fostered a supportive and collegial environment on the day of the study by emphasizing that the activities were opportunities for participants to learn. In addition to completing the data collection procedures, participants were provided with refreshments and the opportunity to participate in a prize raffle.

Sample

The sample, recruited from a population of baccalaureate nursing students enrolled in a senior practicum course in the spring of 2009, included 14 senior students. Participation was voluntary, and informed consent was obtained from all participants.

Materials

On the day of the study, all the participants completed a demographic questionnaire and then participated individually in a computer-based, self-directed learning module related to the content of the CCC or HPS. The self-directed learning module about CHF was peer reviewed as an award-winning case study (Burns & Poster, 2008). The participants were given 45 minutes to independently complete this self-directed learning module in the college computer lab. After completing the self-directed learning module, the students were separated into randomly assigned groups (A, B, C, and D; see Figure).
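To make the group-assignment step concrete, the following is a minimal sketch of a balanced random assignment in Python; the participant IDs and random seed are invented for illustration and are not part of the study’s actual procedure.

```python
import random

# Hypothetical participant IDs for the 14 students (illustrative only).
participants = [f"P{i:02d}" for i in range(1, 15)]

random.seed(42)  # arbitrary seed so the example is reproducible
random.shuffle(participants)

# Group sizes follow the study: Groups A and D had 4 participants each,
# Groups B and C had 3 each (4 + 3 + 3 + 4 = 14).
groups = {
    "A (CCC, pretested)": participants[0:4],
    "B (CCC, no pretest)": participants[4:7],
    "C (HPS, no pretest)": participants[7:10],
    "D (HPS, pretested)": participants[10:14],
}

for label, members in groups.items():
    print(label, "->", ", ".join(members))
```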

Instruments

Participants in two of the four groups (A and D) completed a knowledge pretest. The test questions were supplied by a national testing corporation (Elsevier Health Education Systems Inc.); the publisher reported that all questions had established reliability and validity and met the standards for test preparation. The knowledge exam consisted of 12 questions specifically related to the nursing care of a patient with CHF. The questions included content related to physical assessment, lab values, nutrition, and appropriate actions for immediate and follow-up care. Scores on the knowledge exam were intended to measure cognitive learning. Participants in Groups A and D also completed the Student Satisfaction and Self-Confidence in Learning© survey (National League for Nursing, 2005). This survey was provided with permission by the National League for Nursing; the Cronbach’s alpha for the Satisfaction scale was .94, and the Cronbach’s alpha for the Self-Confidence in Learning scale was .85. Scores on the Student Satisfaction and Self-Confidence in Learning survey were intended to measure affective learning. While the participants in Groups A and D were completing the pretest and presurvey procedures, participants in Groups B and C completed the CCC activity and the HPS activity, respectively.
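The reliability coefficients reported above can be computed from item-level responses. The following is a minimal sketch of Cronbach’s alpha in Python; the response matrix is invented for illustration and is unrelated to the NLN survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented 5-point Likert responses: 6 respondents x 5 items (illustration only).
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 4, 4, 3, 4],
    [5, 5, 5, 4, 5],
    [2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 3, 4, 3, 3],
])
print(round(cronbach_alpha(responses), 2))
```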

The students who were randomly selected to participate in the standardized patient encounter were video recorded, and their performances were scored with the Seattle University Evaluation Tool© (Mikasa & Cicero, 2008), the Quint Simulation Evaluation Tool (a modification of the Lasater [2007] Clinical Judgment Rubric; Quint & Kardong-Edgren, 2008), and a modified version of the Clark Simulation Evaluation Tool (Clark, 2006). These three instruments covered a broad range of performance criteria for evaluation, including clinical judgment, technical skills, and communication. In the absence of a well-accepted overall performance evaluation instrument, this triad of instruments was selected to evaluate students’ performance in the standardized patient encounter.

Learning Activities

The content for the CCC activity and HPS activity was adapted from patient-care scenarios provided by Elsevier and further developed by a group of nursing faculty members. The content and activities were peer reviewed by nursing instructors and researchers who are considered to be reputable experts in the medical–surgical nursing content relevant to the scenarios. These faculty, instructors, and researchers also facilitated the CCC and HPS activities on the day of the study and coordinated their materials and teaching in an effort to present as close to the same content as possible to the different groups. The intent of these efforts was to provide each of the groups with comparable, high-quality learning experiences with only one difference between the CCC and HPS activities: the HPS activity involved physical interaction with the simulated patient and patient care environment. The facilitators for the CCC and HPS activities remained constant throughout the course of the day. Both the CCC and HPS activities for Groups B and C were conducted in groups of 3 participants and one facilitator.

All of the activities took place in the university’s simulation lab and conference rooms and lasted for 45 minutes. During both the CCC and the HPS activities, participants were given a short time to review the patient’s chart and had access to resources for looking up medications, lab values, and other information pertinent to the patient’s care. Next, the activity facilitator either brought the group of learners into the simulation room (for the HPS activities) or began the CCC scenario involving a patient with CHF. SimMan® (Laerdal) was used for the HPS activities, and a different facilitator ran the computer and patient voice. The students in the HPS groups were encouraged to interact with the patient in order to gain additional assessment information, whereas the students in the CCC groups relied on information contained in the patient chart and discussions with the facilitator. Whereas students in the HPS groups actively engaged in simulated care, the students in the CCC groups discussed such care.

When the participants in Groups B and C had completed their respective learning activities, they completed the knowledge posttest and Student Satisfaction and Self-Confidence in Learning© survey. While they were completing the posttest and survey, the participants in Groups A and D completed the CCC activity and the HPS activity, respectively. For Groups A and D, the CCC and HPS activities were conducted in groups of 4 participants and one facilitator. When the participants in Groups A and D were finished with their respective learning activities, they completed the knowledge posttest and Student Satisfaction and Self-Confidence in Learning© survey. The posttest knowledge exam and Student Satisfaction and Self-Confidence in Learning© survey were the same as the preintervention test and survey given to Groups A and D.

Standardized Patient Encounters

To test overall learning and the students’ ability to transfer that learning into a patient care scenario, students were randomly selected to participate in an individual standardized-patient student performance evaluation. Performance in these activities was intended to measure cognitive, affective, and psychomotor learning. Because of time and resource constraints, not all students were able to participate in the standardized patient encounter.

Seven participants (4 from the pretested group and 3 from the non-pretested group) were randomly selected to participate in a follow-on standardized patient encounter. During this encounter, a lab preceptor role-played a patient exhibiting CHF symptoms. The encounters lasted approximately 15 minutes and were video archived for later evaluation. Students were expected to apply the information learned in their previous educational activities in interviewing and assessing the patient. The content, or “script,” for the standardized patient encounter was developed by a nurse who was part of the simulation lab staff and was peer reviewed by another nursing expert. A schedule that details the coordination of activities on the day of the study is included in the Appendix.

Data Analyses

All of the pretests, posttests, and surveys were scored manually. A nurse educator experienced in HPS but not involved with this study scored the video recorded, standardized patient performances using the Seattle University Evaluation Tool© (Mikasa & Cicero, 2008), the Quint Simulation Evaluation Tool (Quint & Kardong-Edgren, 2008, unpublished manuscript), and a modified version of the Clark Simulation Evaluation Tool (Clark, 2006). The original Clark Simulation Evaluation Tool (Clark, 2006) was designed to evaluate an obstetric trauma scenario. The author modified it in order to evaluate students caring for a patient with CHF. Each video recorded standardized patient performance was scored with each of the three instruments. Scores from each of these instruments were converted into percentages, and an unweighted average of the participant’s scores on each of the instruments was used as the final “standardized patient performance score.” For example, if a participant scored 65.00% on the Seattle University Evaluation Tool©, 70.00% on the Quint Simulation Evaluation Tool, and 80.00% on the Clark Simulation Evaluation Tool, her standardized patient performance score would be the unweighted average of these, or 71.67%.
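The composite scoring rule is simple enough to restate in code; this short sketch just reproduces the worked example above (an unweighted mean of the three instrument percentages).

```python
# Unweighted average of the three instrument percentages from the worked example.
seattle_pct, quint_pct, clark_pct = 65.00, 70.00, 80.00
standardized_patient_score = (seattle_pct + quint_pct + clark_pct) / 3
print(f"Standardized patient performance score: {standardized_patient_score:.2f}%")  # 71.67%
```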

Data were entered into Excel 2007 and SPSS 16. One missing value, grade point average (GPA), was replaced with the mean GPA for the sample. Descriptive statistics were used to analyze the data from the demographic questionnaires. Pretest, posttest, and survey data were explored for normality and group differences. The data were found to be nonnormal with nonequivalent groups. Therefore, a nonparametric test (Mann-Whitney U) was used to determine the significance of the differences between groups. The differences in scores (knowledge exam, Student Satisfaction and Self-Confidence in Learning© survey, and standardized patient encounter) between pretested (A and D) and non-pretested (B and C) groups were found to be nonsignificant. Therefore, data from Groups A and B were compiled to form the CCC group, data from Groups C and D were compiled to form the HPS group, and comparisons were completed for the variable of teaching strategy (CCC or HPS). This procedure reduced the number of groups from four (in the original modified Solomon four-group design) to two. It also increased the number of participants in each group.
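The analysis was run in SPSS; purely as an illustration of the same comparison, the sketch below applies SciPy’s Mann-Whitney U test to invented score vectors (the study’s raw data are not reproduced here).

```python
from scipy.stats import mannwhitneyu, shapiro

# Invented posttest knowledge scores out of 12 (illustration only):
# Groups A + B form the CCC group; Groups C + D form the HPS group.
ccc_scores = [5, 7, 8, 6, 9, 7, 8]
hps_scores = [8, 7, 9, 8, 7, 10, 8]

# Small samples that depart from normality motivate a nonparametric test.
print("CCC Shapiro-Wilk p =", round(shapiro(ccc_scores).pvalue, 3))
print("HPS Shapiro-Wilk p =", round(shapiro(hps_scores).pvalue, 3))

# Two-sided Mann-Whitney U comparison of the two teaching-strategy groups.
result = mannwhitneyu(ccc_scores, hps_scores, alternative="two-sided")
print(f"U = {result.statistic}, p = {result.pvalue:.3f}")
```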

Results

The sample of 14 students was 100% female and had an average age of 24.64 years. Their average GPA was 3.64 on a 4.0-point scale. Of the students in the sample, 14% were currently working in health care and 29% were certified in advanced cardiac life support (ACLS). Table 1 provides overall participant characteristics and the characteristics of participants in each of the groups. There were significant differences (p = .05) between the two groups on each of the characteristics except GPA. Therefore, homogeneity of the groups cannot be assumed.

Table 1 Participant Characteristics

Characteristic                          Overall (N = 14)   Simulation Groups (n = 7)   Case-Study Groups (n = 7)
Female, n (%)                           14 (100)           7 (100)                     7 (100)
Average age, years, M (SD)              24.64 (5.51)       22.86 (1.57)                26.42 (7.48)
Nursing school GPA, M (SD)*             3.64 (0.30)        3.76 (0.13)                 3.50 (0.37)
ACLS certified (%)                      29                 43                          14
Currently working in health care (%)    14                 0                           29

*One missing case replaced with the mean (affects the Overall and Case-Study columns).
Note. ACLS = advanced cardiac life support; GPA = grade point average.

Table 2 describes the results of the study, including overall mean scores on the pretest and posttest knowledge exam, pretest and posttest Student Satisfaction and Self-Confidence in Learning© surveys, and standardized patient performance evaluations. Additionally, the table includes the mean scores from each of these assessments for participants in each of the groups, CCC or HPS. Based on these assessments, no significant differences in cognitive, affective, or psychomotor learning outcomes were detected between the groups.

Table 2 Study Results

Exam                                                 Overall                Case-Study Groups (A and B)   Simulation Groups (C and D)
Pretest knowledge score, M (SD), out of 12           7.87 (2.30), n = 8     6.75 (2.06), n = 4            9.00 (2.16), n = 4
Posttest knowledge score, M (SD), out of 12          7.57 (1.50), n = 14    7.14 (1.86), n = 7            8.00 (1.00), n = 7
Pretest satisfaction score, M (SD), out of 25        18.63 (3.50), n = 8    19.75 (3.30), n = 4           17.50 (3.79), n = 4
Posttest satisfaction score, M (SD), out of 25       21.93 (4.71), n = 14   23.86 (1.57), n = 7           20.00 (6.08), n = 7
Pretest self-confidence score, M (SD), out of 40     28.88 (5.38), n = 8    28.75 (6.13), n = 4           29.00 (5.48), n = 4
Posttest self-confidence score, M (SD), out of 40    32.29 (6.50), n = 14   33.14 (3.39), n = 7           31.42 (8.85), n = 7
Standardized patient performance score, M (SD), %    74.59 (12.05), n = 7   73.50 (4.80), n = 3           75.41 (16.53), n = 4

Discussion

Strengths and Limitations

Strengths of this study included a rigorous experimental design using the modified Solomon four-group approach to rule out pretest–posttest effects. The use of well-established teaching and evaluation tools that measured a broad range of learning outcomes enhanced the strength of the study. The self-directed learning module was an award-winning, peer-reviewed teaching tool. The CCC and HPS activities were provided by a respected nursing publisher and further developed and delivered by experienced medical–surgical nursing faculty. The facilities, including the computer lab, testing rooms, simulation lab, and case-study rooms, as well as the exam room with audiovisual capabilities that was used for the standardized patient encounters, were state-of-the-art. The knowledge test and Student Satisfaction and Self-Confidence in Learning© survey had established validity and reliability. The variety of tools used to score the standardized patient encounters provided a standardized method for evaluating student performance.

Like many studies investigating the effectiveness of teaching modalities, the major limitation of this study was the small sample size. Because of the small sample size, the probability of a Type II error, or the failure of this study to detect differences in learning outcomes between the CCC and HPS groups, must be considered. As previously mentioned, Howard et al. (2010) completed a similar study comparing student learning and perceptions between participants who engaged in ICS and HPS learning activities. Howard et al.’s two-group (vs. the modified Solomon four-group) research design involved 49 participants and yielded significant differences in learning outcomes between the two groups of learners. Specifically, students who participated in the HPS activities showed significantly superior learning (as demonstrated by differences between pretest and posttest scores) compared with their counterparts who participated in the ICS activities. Future research should attempt to marry the strengths of both of these studies. One strategy that needs to be explored in order to overcome the limitation of small sample sizes is collaboration among universities and colleges (Bradley, 2006). Developing and distributing a universal research protocol, including standardized HPS and case-study activities and evaluation methods, could facilitate such collaboration.

Another limitation of this study was that the CCC and HPS activities were not all led by the same facilitator. To isolate the variable of interest, teaching strategy, investigators should attempt to keep as many other variables as possible constant. However, in order to administer the activities efficiently, we decided to run the CCC and HPS activities concurrently. The facilitators worked together to develop and refine the scenarios for the CCC and HPS activities and tried to maintain as much consistency across groups as possible. However, the personal differences between facilitators may have influenced students’ learning outcomes.

Implications

Although researchers are prone to show little interest in nonsignificant findings, there is much to be learned from the design, methods, analyses, and results of this study. As a pilot, this project tested the feasibility of coordinating and executing the complex study procedures. To maximize efficiency of instructor time and resources, a great deal of care was taken in scheduling the activities of the day. The same design may be used in the future, and it can be doubled, with eight instead of four groups participating concurrently. This pilot also provided a platform for testing the CCC and HPS activities that were created for use in this venue. The nursing faculty, instructors, and the lab preceptor who role-played the standardized patient said they felt the study procedures went well. However, they also suggested several revisions to the study procedures. Two common suggestions included allowing additional time for students to orient themselves to the CCC or HPS materials before beginning the activities and building in breaks for the facilitators between groups of students. These modifications may be incorporated to enhance future studies.

The analyses from this pilot study also provide food for thought. The nonsignificant differences between pretested and non-pretested groups indicate that the modified Solomon four-group design may not be necessary. However, the ease of dividing groups in this way, the expanded options for data analyses, and the potential of this design for establishing the rigor of the study make it useful for future studies. Conversely, if all study participants are pretested, the ability to complete data analyses including general linear model repeated measures would expand the potential findings of the study. A power analysis was completed with the posttest knowledge scores. To power the study at 80%, at the alpha = .05 level, a future study will require 35 participants in each group.
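Such a sample-size estimate can be reproduced with standard power-analysis tools. In the sketch below, the effect size is an assumed illustrative value (not one reported in the article), chosen so the result lands near the 35 participants per group cited above.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed standardized effect size (Cohen's d); 0.68 is illustrative only.
effect_size = 0.68

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,              # two-sided significance level
    power=0.80,              # desired power
    alternative="two-sided",
)
print(f"Required participants per group: {n_per_group:.1f}")  # roughly 35
```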

Finally, the nonsignificant results suggest the need for further research in order to discover the most effective evidence-based pedagogy for nursing education. Oermann (2009) calls for nurse researchers to conduct and replicate rigorous studies to provide an evidence base for making curricular and pedagogical decisions for nursing education. The abundance of small, limited studies and the lack of larger, more rigorous studies are not without cause. The resources for studies that focus on easily measured outcomes and use small sample sizes are readily available. However, the tools and resources for conducting larger, more rigorous studies that measure multiple learning outcomes are much more difficult to access.

Acknowledgments

Acknowledgment of extramural funding/support: Washington Center for Nursing, Washington State University OGRD #109099. The author would also like to acknowledge Elsevier, Inc., the National League for Nursing, Dr. Kenn Daratha, and Dr. Suzan Kardong-Edgren for their support of this study.

References

Arundell, F., & Cioffi, J. (2005). Using a simulation strategy: An educator’s experience. Nurse Education in Practice, 5(5), 296-301. doi:10.1016/j.nepr.2005.03.001

Bearnson, C. S., & Wiker, K. M. (2005). Human patient simulators: A new face in baccalaureate nursing education at Brigham Young University. Journal of Nursing Education, 44(9), 421-425.

Benner, P., Sutphen, M., Leonard, V., & Day, L. (2010). Educating nurses: A call for radical transformation. San Francisco, CA: Jossey-Bass.

Beyea, S. C., & Kobokovich, L. J. (2004). Human patient simulation: A teaching strategy. AORN Journal, 80(4), 738, 741-742.

Block, E. F. J., Lottenberg, L., Flint, L., Jakobsen, J., & Liebnitzky, D. (2002). Use of a human patient simulator for the advanced trauma life support course. American Journal of Surgery, 68(7), 648-651.

Bradley, P. (2006). The history of simulation in medical education and possible future directions. Medical Education, 40(3), 254-262. doi:10.1111/j.1365-2929.2006.02394.x

Burns, P., & Poster, E. C. (2008). Competency development in new registered nurse graduates: Closing the gap between education and practice. Journal of Continuing Education in Nursing, 39(2), 67-73.

Clark, M. (2006). Evaluating an obstetric trauma scenario. Clinical Simulation in Nursing, 2(2), 75-77. doi:10.1016/j.ecns.2009.05.028

Decker, S., Sportsman, S., Puetz, L., & Billings, L. (2008). The evolution of simulation and its contribution to competency. Journal of Continuing Education in Nursing, 39(2), 74-80.

Elsevier. (2009). Medical-surgical nursing: Assessment and management of clinical problems (7th ed.). St. Louis, MO: Elsevier.

Howard, V. M. (2007). A comparison of educational strategies for the acquisition of medical-surgical nursing knowledge and critical thinking skills: Human patient simulator vs. the interactive case study approach (Doctoral dissertation). University of Pittsburgh.

Howard, V. M., Ross, C., Mitchell, A. M., & Nelson, G. M. (2010). Human patient simulators and interactive case studies: A comparative analysis of learning outcomes and student perceptions. Computers, Informatics, Nursing, 28(1), 42-48.

Jeffries, P. R. (2005). Guest editorial: Technology trends in nursing education: Next steps. Journal of Nursing Education, 44(1), 3-4.

Kolb, D. A. (1984). Experiential learning. Englewood Cliffs, NJ: Prentice Hall.

Lasater, K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46(11), 496-503.

Melnyk, B. M. (2008). Evidence to support the use of patient simulation to enhance clinical practice skills and competency in health care professionals and students. Worldviews on Evidence-Based Nursing, 5(1), 49-52.

Mikasa, A., & Cicero, T. (2008). Seattle University Evaluation Tool©.

National League for Nursing. (2005). Satisfaction and self-confidence in learning©. Retrieved from www.nln.org/research/toolsandinstruments.htm

Oermann, M. H. (2009). Evidence-based programs and teaching/evaluation methods: Needed to achieve excellence in nursing education. In M. Adams & T. Valiga (Eds.), Achieving excellence in nursing education (pp. 63-76). New York, NY: National League for Nursing.

Oermann, M. H., & Gaberson, K. B. (2006). Evaluation and testing in nursing education (2nd ed.). New York, NY: Springer.

Rowles, C. J., & Brigham, C. (2005). Strategies to promote critical thinking and active learning. In D. M. Billings & J. A. Halstead (Eds.), Teaching in nursing: A guide for faculty (2nd ed., pp. 283-303). St. Louis, MO: Elsevier.

Schoening, A. M., Sittner, B. J., & Todd, M. J. (2006). Simulated clinical experience: Nursing students’ perceptions and the educators’ role. Nurse Educator, 31(6), 253-258.

Starkweather, A. R., Kardong-Edgren, S., & Ward, L. (2008). Diffusion of innovation: Embedding simulation into nursing curricula. International Journal of Nursing Education Scholarship, 5, Article 13.

Tomey, A. M. (2003). Learning with cases. Journal of Continuing Education in Nursing, 34(1), 34-38.


Appendix Scheduling Details

Group A Schedule: Case Study, Pretests and Posttests
0900–0915 (15 minutes): Arrive, check in, complete demographic questionnaire
0915–0945 (30 minutes): Self-directed learning module
0945–1030 (45 minutes): Pretests (knowledge and satisfaction/self-confidence in learning)
1030–1045 (15 minutes): Break
1045–1130 (45 minutes): Case study activity
1130–1215 (45 minutes): Posttests (knowledge and satisfaction/self-confidence in learning)
1215–1230 (15 minutes): Check out, enter raffle, select standardized patient participants
Standardized patient encounters: 1245–1300, 1300–1315, 1315–1330, 1330–1345

Group B Schedule: Case Study, No Pretests
0900–0915 (15 minutes): Arrive, check in, complete demographic questionnaire
0915–0945 (30 minutes): Self-directed learning module
0945–1000 (15 minutes): Break
1000–1045 (45 minutes): Case study activity
1045–1130 (45 minutes): Posttests (knowledge and satisfaction/self-confidence in learning)
1130–1145 (15 minutes): Check out, enter raffle, select standardized patient participants
Standardized patient encounters: 1145–1200, 1200–1215, 1215–1230, 1230–1245

Group C Schedule: Simulation, No Pretests
0900–0915 (15 minutes): Arrive, check in, complete demographic questionnaire
0915–0945 (30 minutes): Self-directed learning module
0945–1000 (15 minutes): Break
1000–1045 (45 minutes): Simulation (Rooms 217 G & H)
1045–1130 (45 minutes): Posttests (knowledge and satisfaction/self-confidence in learning)
1130–1145 (15 minutes): Check out, enter raffle, select standardized patient participants
Standardized patient encounters: 1145–1200, 1200–1215, 1215–1230, 1230–1245

Group D Schedule: Simulation, Pretests and Posttests
0900–0915 (15 minutes): Arrive, check in, complete demographic questionnaire
0915–0945 (30 minutes): Self-directed learning module
0945–1030 (45 minutes): Pretests (knowledge and satisfaction/self-confidence in learning)
1030–1045 (15 minutes): Break
1045–1130 (45 minutes): Simulation
1130–1215 (45 minutes): Posttests (knowledge and satisfaction/self-confidence in learning)
1215–1230 (15 minutes): Check out, enter raffle, select standardized patient participants
Standardized patient encounters: 1245–1300, 1300–1315, 1315–1330, 1330–1345
