Preservice elementary teachers' views toward a science methods curriculum


JOURNAL OF ELEMENTARY SCIENCE EDUCATION, VOL. 5, NO. 2, pp. 37-51 (1993). © 1993, Curry School of Education, University of Virginia

    PRESERVICE ELEMENTARY TEACHERS' VIEWS TOWARD A SCIENCE METHODS CURRICULUM

    By William J. Boone

Abstract

One factor affecting the success of an elementary science methods curriculum is preservice teachers' perceptions of a course's usefulness. In the fall of 1991, over 100 elementary science methods students were administered a survey to assess their views toward a curriculum. Survey results supply a distinct ordering of curricular components. Some class components were viewed in a favorable manner, while others were viewed less positively. Three class components elicited unpredictable student responses. Survey results and implications for reforming this methods course are presented.

Introduction

One important aspect of elementary education is the "science methods class" presented to teachers in training, for often such courses provide students with their only exposure to a variety of science teaching techniques. Usually a methods curriculum is built by instructors who carefully select topics they gauge as useful for future elementary science teachers.

Certainly the success of a methods curriculum, as is true for all curricula, can be influenced by many factors unrelated to subject matter (e.g., teachers, class meeting time); however, this study emphasizes the gathering of basic attitudinal data on students' views of a curriculum. Once collected, these types of data can be used to evaluate and improve a course.

Previous Research and Goals of this Study

A number of researchers have previously developed and/or utilized attitudinal instruments to supply information helpful for science education efforts. Enochs and Riggs (1990) used a Likert scale to measure the science teaching efficacy beliefs of elementary science teachers. Stefanich and Kelsey (1989) utilized the Shrigley Science Attitude Scale for Preservice Elementary Teachers (Shrigley, 1971) to measure the beliefs of preservice elementary science teachers toward science and science teaching. Hartley et al. (1984) employed an attitudinal instrument (Shrigley & Johnson, 1974) to investigate, in part, whether differences in preservice teachers' attitudes could be traced to differences between two methods courses.

In an effort to extend the research base involving the collection and evaluation of Likert scale data, and to improve elementary science methods courses, this project was conducted to collect and evaluate student attitudes toward a science methods curriculum (Thurstone, 1928). Many researchers have considered how to change student attitudes toward science teaching (e.g., Morrisey, 1981), but research regarding attitudes toward class topics presented in a science methods course seems to be lacking. If students use the methods they view as most useful, then it certainly is important to collect these types of data.

Data Collection

At the end of the fall 1991 term an attitude survey (topics listed in Table 1) was administered to students completing a science methods course at Indiana University-Bloomington. This course was taken solely by elementary education majors who were primarily of junior standing and near the end of their formal course work. The instrument asked students to evaluate how important they believed 21 class components to be in preparing them for elementary science teaching. A six-step Likert scale (excellent, very good, good, fair, poor, terrible) was provided. The 21 surveyed class elements represented major segments of the course. Many other topics were covered during the semester, but in order to present students with a manageable survey, a limited number of class components were used for survey construction. Surveys were administered during the final week of classes in December of 1991 and were completed by more than 95% of the enrolled students. It is important to note that while these students completed this course they were concurrently enrolled in an elementary field teaching experience. The field experience involved teaching science once a week for four weeks at local elementary schools.


    Table 1

Surveyed Course Topics

1) Portfolio Item #1 - Look up 5 science books and journals.
2) Portfolio Item #2 - Writing to an organization or corporation for free teaching materials.
3) Portfolio Item #3 - Writing to a national or state teacher's organization about membership.
4) Portfolio Item #4 - The design of a bulletin board.
5) Portfolio Item #5 - Listing five field trip sites.
6) Attending the dinosaur lectures in October.
7) Writing 10 single-page Science Journals.
8) Writing the post-critiques of your 4 field science teaching experiences.
9) Developing your own lesson plans for 3 field science teaching experiences.
10) Being supplied with an already-made lesson plan for your first field science teaching.
11) Being provided with classroom time to refine and develop your four field science lessons.
12) Developing your own teaching tools and props for the field science lessons.
13) Developing a science game or learning center for your field science teaching.
14) Lectures on Cognition (Piaget's findings, how students learn).
15) Lectures on the Scientific Method.
16) Lectures on how to write test items.
17) Your four field science teaching experiences.
18) The consumer product lab.
19) The paper-clip and string "pendulum" labs.
20) The electrical circuit labs with aluminum foil, light bulbs, wire, and so on.
21) The university furnishing science supplies (science kit) for the four field science teaching experiences.

Table 1 lists the survey topics administered to the fall 1991 elementary science methods course. Students who completed this class were concurrently enrolled in a science field teaching experience. Respondents were supplied with six Likert scale responses for each topic (excellent, very good, good, fair, poor, terrible). These were selected by students on the basis of how each class activity was viewed in light of preparing the student for elementary science teaching.

Data Evaluation

The stochastic Rasch model (Rasch, 1960) was used to evaluate these data. This evaluation technique was selected because the ordinal attitudinal scale must first be converted to an interval scale. This step can best be understood by noting that a step in attitude from "excellent" to "very good" does not necessarily represent the same quantifiable change in attitude as a step from "very good" to "good" (e.g., Thurstone, 1929; Wright and Masters, 1982). For example, in coding attitudinal data using the categories "excellent", "very good", "good", "fair", and "poor", many evaluators assign weights to each response. In this case an "excellent" might be named a "5", while a "very good" is considered a "4" and a rating of "good" is assigned a "3". This naming of categories with numbers is fine; however, one cannot immediately use these numerical identifiers for statistical calculations. If calculations are made at once, an implicit assumption is made that an attitude of "very good" is indeed equidistant from a view of "excellent" and "good". Therefore, after respondents answer an attitudinal survey such as the one presented here, one must take into consideration that the numbers used to code the data imply an ordering of attitudes (5 is greater than 4, thus "excellent" is a more supportive statement than "very good"), but not a known spacing.

A basic probabilistic (stochastic) model which can be used to convert "raw scores" of coded responses to true "measures" is presented below. For the sake of this explanation, the case involving the evaluation of dichotomous data is presented. In a "rating scale" scenario this can best be viewed as the situation in which responses such as "excellent", "very good", and "good" are all considered a "positive" answer and are coded as a "1", while responses of "fair", "poor", and "terrible" are judged a "negative" answer and are coded as a "0". For the data reported in this paper the formula can be viewed as having been used a number of times for each person and item combination so that the six rating steps could be taken into account. Rating Scale Analysis (Wright and Masters, 1982) presents a detailed discussion that goes beyond the dichotomous case. Rasch (1968), Andersen (1973, 1977), and Barndorff-Nielsen (1978) are additional references which can furnish further information.
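As a minimal sketch of this dichotomous coding (in Python; not part of the original study, and the function name and sample ratings below are hypothetical), the six Likert categories can be collapsed to 0/1 before the model is applied:

```python
# Sketch only: collapse the six-step Likert categories into the dichotomous
# coding described in the text ("excellent"/"very good"/"good" -> 1,
# "fair"/"poor"/"terrible" -> 0).

POSITIVE = {"excellent", "very good", "good"}   # coded as "1"
NEGATIVE = {"fair", "poor", "terrible"}         # coded as "0"

def dichotomize(response: str) -> int:
    """Return 1 for a positive rating, 0 for a negative rating."""
    response = response.lower()
    if response in POSITIVE:
        return 1
    if response in NEGATIVE:
        return 0
    raise ValueError(f"Unknown rating category: {response!r}")

# One hypothetical respondent's ratings of three class components
ratings = {
    "Attending dinosaur lectures": "fair",
    "Developing own lesson plans": "excellent",
    "Pendulum lab": "good",
}
coded = {item: dichotomize(r) for item, r in ratings.items()}
print(coded)  # {'Attending dinosaur lectures': 0, 'Developing own lesson plans': 1, 'Pendulum lab': 1}
```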

Much of the data presented in this paper is reported in terms of "logits" derived from the formula presented below. What does it mean for a certain survey item to be given a higher logit rating than other class components? First, the general relationship between logit values can be seen by looking at the "measure logit" and "average" columns of Table 2. The "average" column reports the raw average response to each item using the coding 1 (excellent), 2 (very good), 3 (good), 4 (fair), 5 (poor), and 6 (terrible). Thus, "lectures on the scientific method" were rated on average as "good" by the students. However, this average raw numerical value must take into consideration that only numerical labels were used to calculate it. The raw average must be converted to a scale that takes the potentially unequal spacing in attitudes between rating categories into consideration. That is why reporting the average in terms of "logits" is so important.

$$\log\left(\frac{P_{ni}}{1 - P_{ni}}\right) = B_n - D_i$$

Pni is the probability of a person "n" answering a survey item "i" in a positive manner (excellent, very good, good). Bn is a calculated "attitude" of person "n", while Di is a measure of how "difficult" it was, in general, for the respondents to positively respond to the particular survey item "i". The units of Bn, Di, and the left side of the formula are in "logits". This name comes from the fact that the left side of the equation involves the logarithm of the odds of a person's response.
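As a numerical illustration of this formula (a sketch only, not the estimation procedure actually used for the study's data), the snippet below computes Pni for an assumed person attitude of Bn = 0.0 logits; the two item calibrations are the extreme values reported in Table 2.

```python
import math

def rasch_probability(person_attitude: float, item_calibration: float) -> float:
    """Dichotomous Rasch model: returns Pni, where
    log(Pni / (1 - Pni)) = Bn - Di, with Bn and Di expressed in logits."""
    logit = person_attitude - item_calibration
    return math.exp(logit) / (1.0 + math.exp(logit))

# Assumed person attitude Bn = 0.0 logits; item calibrations of 1.46 and -1.88
# logits correspond to the highest and lowest measures in Table 2.
for d in (1.46, -1.88):
    p = rasch_probability(0.0, d)
    print(f"Di = {d:+.2f} logits -> P(positive response) = {p:.2f}")
# Prints roughly 0.19 for the hardest-to-endorse item and 0.87 for the easiest.
```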

The key component of this formula is that it involves probabilities. More specifically, the probability of a particular response is viewed as a predictor of each person's overall "attitude", as determined by their own responses to all the survey items, and of the overall rating of a particular item, as determined when all respondents' views toward that single item are taken into consideration.

What is the implication of one survey item being 1 or 2 logits greater than another? To best understand the meaning of a 1- or 2-logit separation, one should first consider someone whose attitude is "on the fence" (they cannot make up their mind whether they are positive or negative toward a particular statement). If they are exactly on the fence with regard to one item, their probability of responding with a favorable rating is .50. For that same person, the probability of furnishing a positive response to an item 1 logit more positive is .27. The probability of answering positively to an item 2 logits greater than the first is .12, while the probability of supplying positive views on an item 3 logits higher than the first item is .05. With each change of 1 logit, the probability roughly decreases by 50%. The same changes in the probability of a response take place for this fictitious person when they answer an item that is 1 logit below the first item discussed. For this item they have a .73 probability of considering the class component in a positive manner. With each change of 1 logit in this direction, the probability roughly increases by 50%.
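The probabilities quoted above (.50, .27, .12, .05, and .73) follow directly from the formula; the short check below, assuming a person located exactly at the calibration of the first item, reproduces them.

```python
import math

def p_positive(person_minus_item: float) -> float:
    """Probability of a positive response given (Bn - Di) in logits."""
    return 1.0 / (1.0 + math.exp(-person_minus_item))

# A person exactly "on the fence" for a reference item (Bn - Di = 0), facing items
# 1, 2, and 3 logits harder to endorse, and one item 1 logit easier to endorse.
for separation in (0, -1, -2, -3, 1):
    print(f"Bn - Di = {separation:+d} logits -> P = {p_positive(separation):.2f}")
# Prints 0.50, 0.27, 0.12, 0.05, and 0.73, matching the values in the text.
```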

Another way to visualize the "meaning" of a "logit" is to also look at the traditional "raw score" in Table 2. By doing so, the relationship between the reported "logit" measure and the "raw average" can be best gauged. Some may ask: if one can use "raw average scores" to determine the typical responses circled on a particular survey, why report the data in terms of logits? Again, by converting the raw data to logits, the possible non-equal spacing between categories can be corrected. A careful review of Figure 1 reveals that the logit spacing between the category boundaries (e.g., poor-fair, fair-good) is not equal. If this probabilistic model had not been used, this characteristic of the rating system used in this survey would not have been detected and taken into consideration.

This method of analysis is also well suited for these data because 1) it allows an evaluation of persons and items when data are incomplete (i.e., each survey respondent need not respond to every item), 2) errors of each surveyed item and respondent are reported, 3) statistics which help indicate the relevance of items are provided, and 4) persons and items are plotted on the same scale. The FACETS computer program (Linacre and Wright, 1991) was utilized.

Data Interpretation - Items

In Figure 1 the results of the students' class component ratings are presented. The class component with the highest logit value (Item 6: Attending Dinosaur Lectures) was viewed least favorably by students. This item was rated, on average, as being between "fair" and "poor". Items positioned below this (less positive logit calibration) represent class activities viewed in a more favorable manner by students. The item at the base of Figure 1 (Item 9: Developing Own Lesson Plans for Science Teaching) was viewed most favorably by the students.


Table 2

Item Calibrations

Item                                 Meas.     Error     Score   Count   Avg.   Outfit
                                     (Logits)  (Logits)                         Std.
 6  Attending Dinosaur Lectures       1.46      0.11      419     118     4.6      3
 7  Writing Science Journals          0.54      0.10      340     121     3.8      0
 1  Look Up Books or Journals         0.13      0.10      307     123     3.5      0
 3  Write for Membership             -0.05      0.11      289     122     3.4      1
16  Lecture on Test Items            -0.34      0.11      261     121     3.2     -1
18  Lab-Consumer Product             -0.34      0.11      264     122     3.2     -1
19  Lab-Pendulum                     -0.34      0.11      264     122     3.2     -1
14  Lecture on Cognition             -0.49      0.11      250     122     3.0     -4
15  Lecture on Scientific Method     -0.54      0.11      246     122     3.0     -3
 5  Field Trip List                  -0.55      0.11      244     120     3.0      0
20  Lab-Circuits                     -0.74      0.11      232     123     2.9      0
 2  Write for Free Materials         -0.83      0.11      224     122     2.8      3
10  Supplied Lesson Plan             -0.96      0.12      201     114     2.8      3
21  Furnished with Science Kit       -0.96      0.12      202     115     2.8      2
 8  Writing Post-Critique            -0.98      0.11      213     123     2.7     -2
 4  Making a Bulletin Board          -1.13      0.11      199     121     2.6      0
12  Developing Own Props             -1.34      0.12      181     121     2.5     -1
11  Practice in Class                -1.39      0.12      175     118     2.5      0
17  Science Teaching Experience      -1.70      0.12      152     118     2.3      1
13  Developing a Game                -1.76      0.12      152     122     2.2      0
 9  Developing Lessons               -1.88      0.12      143     122     2.2     -1

Legend and Explanation for Table 2
Item - Survey number described in Table 1; Meas. (Logits) - Item measure in logits; Error (Logits) - The standard error of the item measure in logits...
