Preservice elementary teachers' views toward a science methods curriculum
JOURNAL OF ELEMENTARY SCIENCE EDUCATION, Vol. 5, No. 2, pp. 37-51 (1993). © 1993, Curry School of Education, University of Virginia
PRESERVICE ELEMENTARY TEACHERS' VIEWS TOWARD A SCIENCE METHODS CURRICULUM
By William J. Boone
Abstract
One factor affecting the success of an elementary science methods curriculum is preservice teachers' perceptions of a course's usefulness. In the fall of 1991, over 100 elementary science methods students were administered a survey to assess their views toward a curriculum. Survey results supply a distinct ordering of curricular components. Some class components were viewed in a favorable manner, while others were viewed less positively. Three class components elicited unpredictable student responses. Survey results and implications for reforming this methods course are presented.
Introduction
One important aspect of elementary education is the "science methods class" presented to teachers in training, for often such courses provide students with their only exposure to a variety of science teaching techniques. Usually a methods curriculum is built by instructors who carefully select topics they gauge as useful for future elementary science teachers.
Certainly the success of a methods curriculum, as is true for all curricula, can be influenced by many factors unrelated to subject matter (e.g., teachers, class meeting time); however, this study emphasizes the gathering of basic attitudinal data to provide useful information on students' attitudes toward a curriculum. Once collected, these types of data can be used to evaluate and improve a course.
Previous Research and Goals of this Study
A number of researchers have previously developed and/or utilized attitudinal instruments to supply information helpful for science education efforts. Enochs and Riggs (1990) used a Likert scale to measure the science teaching efficacy beliefs of elementary science teachers. Stefanish and Kelsey (1989) utilized the Shrigley Science Attitude Scale for Preservice Elementary Teachers (Shrigley, 1971) to measure the beliefs of preservice elementary science teachers toward science and science teaching. Hartly et al. (1984) employed an attitudinal instrument (Shrigley & Johnson, 1974) to investigate, in part, whether differences in preservice teachers' attitudes could be traced to differences between two methods courses.
In an effort to extend the research base involving the collection and evaluation of Likert scale data and to improve elementary science methods courses, this project was conducted to collect and evaluate student attitudes toward a science methods curriculum (Thurstone, 1928). Many researchers have considered how to change student attitudes toward science teaching (e.g., Morrisey, 1981), but research regarding attitudes toward class topics presented in a science methods course seems to be lacking. If students use methods they view as being most useful, then it certainly is important to collect these types of data.
Data Collection
At the end of the fall 1991 term an attitude survey (topics listed in Table 1) was administered to students completing a science methods course at Indiana University-Bloomington. This course was taken solely by elementary education majors who were primarily of junior standing and near the end of their formal course work. The instrument asked students to evaluate how important they believed 21 class components to be in preparing them for elementary science teaching. A six-step Likert scale (excellent, very good, good, fair, poor, terrible) was provided. The 21 surveyed class elements represented major segments of the course. Many other topics were covered during the semester, but in order to present students with a manageable survey, a limited number of class components were used for survey construction. Surveys were administered during the final week of classes in December of 1991, and were completed by more than 95% of the enrolled students. It is important to note that while these students completed this course they were concurrently enrolled in an elementary field teaching experience. The field experience involved teaching science once a week for four weeks at local elementary schools.
Surveyed Course Topics

1) Portfolio Item #1: Look up 5 science books and journals.
2) Portfolio Item #2: Writing an organization or corporation for free teaching materials.
3) Portfolio Item #3: Writing a national or state teacher's organization about membership.
4) Portfolio Item #4: The design of a bulletin board.
5) Portfolio Item #5: Listing five field trip sites.
6) Attending the dinosaur lectures in October.
7) Writing 10 single-page Science Journals.
8) Writing the post-critiques of your 4 field science teaching experiences.
9) Developing your own lesson plans for 3 field science teaching experiences.
10) Being supplied with an already-made lesson plan for your first field science teaching.
11) Being provided with classroom time to refine and develop your four field science lessons.
12) Developing your own teaching tools and props for the field science lessons.
13) Developing a science game or learning center for your field science teaching.
14) Lectures on cognition (Piaget's findings, how students learn).
15) Lectures on the scientific method.
16) Lectures on how to write test items.
17) Your four field science teaching experiences.
18) The consumer product lab.
19) The paper-clip and string "pendulum" labs.
20) The electrical circuit labs with aluminum foil, light bulbs, wire, and so on.
21) The university furnishing science supplies (science kit) for the four field science teaching experiences.
Table 1 lists the survey topics administered to the fall 1991 elementary science methods course. Students who completed this class were concurrently enrolled in a science field teaching experience. Respondents were supplied with six Likert scale responses for each topic (excellent, very good, good, fair, poor, terrible). These were selected by students on the basis of whether or not the class activity was viewed as preparing each student for elementary science teaching.
Data Evaluation
The stochastic Rasch model (Rasch, 1960) was used to evaluate these data. This evaluation technique was selected because the ordinal attitudinal scale must first be converted to an interval scale. This step can best be understood by noting that a step in attitude from "excellent" to "very good" does not necessarily represent the same quantifiable change in attitude as a step from "very good" to "good" (e.g., Thurstone, 1929; Wright & Masters, 1982). For example, in coding attitudinal data using the categories "excellent", "very good", "good", "fair", and "poor", many evaluators assign weights to each response. In this case an "excellent" might be named a "5", while a "very good" is considered a "4" and a rating of "good" is assigned a "3". This naming of categories with numbers is fine; however, one cannot immediately use these numerical identifiers for statistical calculations. If calculations are made at once, an implicit assumption is made that an attitude of "very good" is indeed equidistant from views of "excellent" and "good". Therefore, after respondents answer an attitudinal survey such as the one presented here, one must take into consideration that the numbers used to code the data imply an ordering of attitudes (5 is greater than 4, thus "excellent" is a more supportive statement than "very good"), but not a known spacing.
A basic probabilistic (stochastic) model which can be used to convert "raw scores" of coded responses to true "measures" is presented below. For the sake of this explanation, the case involving the evaluation of dichotomous data is presented. In a "rating scale" scenario this can best be viewed as the situation in which responses such as "excellent", "very good", and "good" are all considered a "positive" answer and are coded as a "1", while responses of "fair", "poor", and "terrible" are judged a "negative" answer and are coded as a "0". For the data reported in this paper, the formula can be viewed as having been used a number of times for each person and item combination so that the six rating steps could be taken into account. Rating Scale Analysis (Wright & Masters, 1982) presents a detailed discussion that goes beyond the dichotomous case. Rasch (1968), Andersen (1973, 1977), and Barndorff-Nielsen (1978) are additional references which can furnish further information.
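In its standard form (the notation here follows the usual convention of Wright & Masters, 1982, and may differ from the paper's own symbols), the dichotomous Rasch model gives the probability that person n responds positively to item i as:

```latex
P(x_{ni} = 1 \mid \beta_n, \delta_i) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}
```

where \(\beta_n\) is the attitude measure of person n and \(\delta_i\) is the difficulty of item i (here, an item's resistance to favorable endorsement), both expressed in logits. A positive response becomes more probable as \(\beta_n\) exceeds \(\delta_i\).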
Much of the data presented in this paper is reported in terms of "logits" derived from the formula presented below. What does it mean for a certain survey item to be given a higher logit rating than other class components? First, the general relationship between logit values can be seen by looking at the "measure logit" and "average" columns of Table 2. The "average" column reports the raw average response to each item using the coding 1 (excellent), 2 (very good), 3 (good), 4 (fair), 5 (poor), and 6 (terrible). Thus "lectures on the scientific method" were rated on average as "good" by the students. However, this average raw numerical value must be interpreted with the understanding that only numerical labels were used to calculate it. The raw average must be converted to a scale that takes the potential unequal spacing in attitudes between rating categories into consideration. That is why reporting the average in terms of "logits" is so important.
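The idea of a logit can be illustrated with a much simpler calculation than the paper's full joint Rasch estimation. The sketch below (the function name and sample ratings are hypothetical, not the study's data) dichotomizes the six-category scale as the text describes, codes 1-3 as positive and 4-6 as negative, and expresses an item's endorsement as the natural log of the odds of a positive response:

```python
import math

def item_logit(ratings, positive_max=3):
    """Rough logit for one survey item from 1-6 Likert codes.

    Codes 1-3 (excellent / very good / good) count as positive;
    codes 4-6 (fair / poor / terrible) count as negative.
    Returns ln(p / (1 - p)), the log-odds of a positive response.
    """
    positives = sum(1 for r in ratings if r <= positive_max)
    p = positives / len(ratings)
    # The log-odds is undefined when every response falls on one side.
    if p in (0.0, 1.0):
        raise ValueError("logit undefined for extreme proportions")
    return math.log(p / (1 - p))

# Hypothetical codes for one class component: 6 of 8 respondents positive.
ratings = [1, 2, 2, 3, 3, 3, 4, 5]
print(round(item_logit(ratings), 2))  # ln(0.75 / 0.25) = ln 3 ~ 1.10
```

Unlike raw category averages, log-odds values of this kind sit on an interval scale, so differences between items can be compared directly; the Rasch model extends this idea by estimating person and item measures jointly across all six rating steps.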