


Instructional Science 12 (1983) 67-82 Elsevier Scientific Publishing Company, Amsterdam - Printed in The Netherlands

A CURRICULUM TO IMPROVE THINKING UNDER UNCERTAINTY

RUTH BEYTH-MAROM* Decision Research, A Branch of Perceptronics, Inc., Eugene, Oregon, U.S.A.

SHLOMIT DEKEL School of Education, The Hebrew University, Jerusalem, Israel

ABSTRACT

Recent evidence indicates that people's intuitive judgments are sometimes affected by systematic biases that can lead to bad decisions. Much of the value of this research depends on its applicability, i.e., showing people when and how their judgments are wrong and how they can be improved. This article describes one step toward that goal, i.e., the development of a curriculum for junior high school students aimed at improving thought processes, specifically, those necessary in uncertain situations (probabilistic thinking). The relevant psychological literature is summarized and the main guidelines in the curriculum development are specified: (a) encouraging students to introspect and examine their own (and others') thought processes consciously, (b) indicating the circumstances in which common modes of thinking may cause fallacies, and (c) providing better tools for coping with the problems that emerge. Two detailed examples are given. In addition, the problem of training teachers is briefly discussed and a small-scale evaluation effort is described.

We have begun to teach elementary school children about sex. Why? Because we are afraid that if we do not, they will make 'mistakes,' i.e., behave in ways that are socially if not individually disadvantageous. But mistakes in thinking can be no less socially disadvantageous. Why then do we not teach the principles of thought in the same way we teach the principles of sex? (Mathew Lipman, Philosophy for Children, p. 9)

This quotation from Mathew Lipman, director of the Institute for the Advancement of Philosophy for Children, is but one of many such. Recently, many educators have advocated the teaching of thinking, be it principles of logic, critical thinking, problem solving, or every-day reasoning (Fletcher and Wooddell, 1981; Goodlad, 1973; Michael, 1968; Seif, 1981; Vye and Bransford, 1981).

* Reprint requests and correspondence to be addressed to the author at: Everyman's University, 16 Klausner Street, P.O.B. 39328, Ramat Aviv, Tel Aviv, Israel.

0020-4277/83/$03.00 © 1983 Elsevier Science Publishers B.V.


At the same time, psychologists studying cognitive processes have been focusing on people's abilities and limitations in inductive and deductive reasoning (e.g., Johnson-Laird and Wason, 1977; Kahneman et al., 1982).

When these two trends, represented by Seymour Fox and Daniel Kahneman, were brought together at the School of Education at the Hebrew University of Jerusalem, the result was a project that used the findings from the psychological literature on thinking under uncertainty to create a curriculum aimed at improving students' abilities to think about uncertainty [1].

Only preliminary findings on the effectiveness of that curriculum are presented here. We hope to present further evaluation data, together with details of classroom processes, in a later paper.

The Psychological Research

The traditional view of people's higher mental processes assumes that "we are so built that what seems reasonable to us is likely to be confirmed by experience or we could not live in the world at all" (Knight, 1921, p. 227). During the last two decades, many psychologists have questioned this view as the result of studying thought processes in such diverse tasks as classification and coding (Miller, 1956), concept formation (Bruner et al., 1956), problem solving (Newell and Simon, 1972), deductive reasoning (Wason and Johnson-Laird, 1972), and thinking under uncertainty (Slovic et al., 1977; Tversky and Kahneman, 1974). Their results have demonstrated cognitive limitations in perceiving, memorizing, and processing information.

To cope with these limitations, people have developed simple thought processes (sometimes called "heuristics" or "strategies") that enable them to function in the face of information overload. Although generally adaptive, these thought processes may lead to failures in specific situations. For example, we cannot remember all that we have seen because our memory has a limited capacity. As a result, selection criteria are needed to determine what will be stored and retrieved most readily. These criteria seem to give priority in storage and retrieval to items that are most recent, important, and impressive (Underwood, 1969). Although adaptive, these selection criteria can bias estimates of event frequencies or probabilities, so that vivid, memorable events such as homicides and accidents are overestimated (Lichtenstein et al., 1978; Tversky and Kahneman, 1973).
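In modern terms, this availability idea can be sketched as a small simulation (ours, not the authors'; the frequencies and vividness weights below are hypothetical): if frequency judgments are based on instances retrieved from a memory that favors vivid events, a rare but memorable cause of death will seem more common than a frequent, mundane one.

```python
import random

def remembered_frequency(true_freq, vividness, recalled=200, seed=5):
    """Judge frequencies from instances retrieved from memory, where
    retrieval weight = actual frequency x vividness of the event."""
    rng = random.Random(seed)
    events = list(true_freq)
    weights = [true_freq[e] * vividness[e] for e in events]
    sample = rng.choices(events, weights=weights, k=recalled)
    return {e: sample.count(e) / recalled for e in events}

true_freq = {"homicide": 1, "stroke": 10}   # strokes 10x more common (assumed)
vividness = {"homicide": 20, "stroke": 1}   # but homicides far more memorable (assumed)
est = remembered_frequency(true_freq, vividness)
print(est["homicide"] > est["stroke"])  # True: the rare, vivid event dominates recall
```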

The curriculum described in this paper deals with just one aspect of thinking: thinking under uncertainty - how people perceive uncertainty, assess probabilities, evaluate risks, and judge the quality of their own and others' decisions. Of particular concern is the determination of the thought processes that have developed for coping with problems of judgment under uncertainty, and the


discovery of circumstances in which these processes are maladaptive.

Research in this area has compared common thought processes (intuitions) with normative strategies such as those from statistical, probability, and decision theories. Several conclusions emerge from this comparison (Kahneman and Tversky, 1979):

1. Errors of judgment are often systematic rather than random, manifesting bias rather than confusion.

2. Many errors of judgment are shared by experts and lay people alike (Fischhoff et al., 1981).

3. Erroneous intuitions resemble visual illusions in an important respect: errors remain compelling even when one is fully aware of their nature. For example, despite knowing that a fair coin has no memory, seeing five "heads" leads us to expect a "tail."
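The coin example can be checked directly with a short simulation (our illustration, not part of the curriculum): in a long sequence of fair flips, the proportion of tails immediately following a run of five heads stays close to one half.

```python
import random

def prob_tail_after_streak(n_flips=1_000_000, streak=5, seed=42):
    """Estimate P(tail | previous `streak` flips were all heads)
    from one long sequence of simulated fair-coin flips."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]  # True = heads
    tails_after = streaks = run = 0
    for f in flips:
        if run >= streak:          # the previous `streak` flips were all heads
            streaks += 1
            if not f:
                tails_after += 1
        run = run + 1 if f else 0  # length of the current run of heads
    return tails_after / streaks

print(round(prob_tail_after_streak(), 2))  # close to 0.5: the coin has no memory
```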

The failures of our intuitions are shown in persistent tendencies to neglect various kinds of normatively important information as well as in the misinterpretation of information to which we do attend. The tendency to neglect information may be seen in the underweighting of base-rate information. When assessing the probability of a particular event, people fail to give proper consideration to how common the event is (Bar-Hillel, 1980; Kahneman and Tversky, 1972). This may cause, for example, a probability overestimation of catastrophic rare events like the collapse of large dams, and a probability underestimation of less impressive though more common events like getting an electrical shock from house appliances. Neglect also occurs in the failure to recognize regression effects in prediction tasks (Kahneman and Tversky, 1973) and in the failure to consider sample size and sample selection procedures when evaluating the quality of information (Tversky and Kahneman, 1971).
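The regression effect mentioned above is easy to reproduce in a simulation (ours; the correlation parameter and selection cutoff are arbitrary): when two test scores share only part of their variance, cases selected for an extreme first score average much closer to the mean on the retest.

```python
import random

def regression_demo(n=100_000, r=0.5, cutoff=1.5, seed=3):
    """Simulate two noisy measurements of the same underlying 'talent'
    and compare first and second scores among top first-score cases."""
    rng = random.Random(seed)
    noise = (1 - r * r) ** 0.5
    pairs = []
    for _ in range(n):
        talent = rng.gauss(0, 1)
        first = r * talent + noise * rng.gauss(0, 1)
        second = r * talent + noise * rng.gauss(0, 1)
        pairs.append((first, second))
    top = [(f, s) for f, s in pairs if f > cutoff]
    mean_first = sum(f for f, _ in top) / len(top)
    mean_second = sum(s for _, s in top) / len(top)
    return mean_first, mean_second

first, second = regression_demo()
print(round(first, 2), round(second, 2))  # the retest mean regresses toward 0
```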

Misinterpretation is seen in (a) people's tendency to be more confident when making predictions on the basis of redundant information than with independent information, although the latter has greater predictive validity (Kahneman and Tversky, 1973) and (b) people's readiness to find interpretable patterns in random sequences (Furby, 1973; Fama, 1965) and significant relationships in mere coincidence (Chapman and Chapman, 1969).
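The readiness to see patterns in randomness is itself easy to demonstrate numerically (a sketch of ours, not from the article): genuinely random binary sequences contain longer runs than most people expect, which makes them look "streaky."

```python
import random

def longest_run(bits):
    """Length of the longest run of identical symbols in a sequence."""
    best = cur = 0
    prev = None
    for b in bits:
        cur = cur + 1 if b == prev else 1
        prev = b
        best = max(best, cur)
    return best

def mean_longest_run(n_flips=20, trials=20_000, seed=1):
    """Average longest run across many random sequences of fair flips."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += longest_run([rng.randint(0, 1) for _ in range(n_flips)])
    return total / trials

# People asked to write down a "random" sequence rarely produce runs this long.
print(round(mean_longest_run(), 1))
```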

In summary, the evidence indicates that people's intuitive judgments are sometimes affected by systematic biases that can lead to bad decisions. Our curriculum represents an effort to utilize the research results by (a) showing students when and how their judgments are wrong and (b) presenting corrective procedures to improve their inductive reasoning.

Theoretical Considerations

The idea that errors in reasoning are often systematic rather than random is not restricted to probabilistic thinking. For example, when solving arithmetic


problems, students are remarkably adept at following procedures, but too often follow the wrong one (Brown and Burton, 1978; Young and O'Shea, 1981).

The assumption of systematic bias leads to a focus on the process by which someone solves a problem rather than on the problem solution itself. Such an emphasis on process is not unique to our curriculum. On the contrary, all of the few existing programs for teaching thinking skills work on the student's thought processes. For example, the goal of Feuerstein's (1980) instrumental enrichment exercises is not to arrive at solutions but to understand how solutions are arrived at; not to acquire facts but to learn how facts are acquired. In The Complete Problem Solver, Hayes (1981) emphasizes the advantages of verbal protocols in giving a more or less reliable description of the psychological processes that a person uses to perform a task. Whimbey and Lochhead's (1980) analytical reasoning course teaches a "think aloud" procedure whereby problem solvers are encouraged to verbalize their thoughts while solving a problem. In a like way, our goal was to make students' implicit thought processes explicit by getting them to talk about their own beliefs.

This explication of thought processes often reveals pre-existing schemas or conceptual systems concerning the material about to be taught. Before learning something formal about probability, students already have some notions about what probability is (Falk et al., 1980). These intuitive notions, if correct, can facilitate learning. But if they are incorrect, they can hinder learning. We strongly agree with Fischbein, who said, "[We] cannot afford to ignore the intuitive endowment of the child. A conceptual system cannot be built in a vacuum, but only by making use of the (sometimes contradictory) pre-existing intuitive biases." Referring to a "primary intuitive substrate," he added, "A proper curriculum... should... concern itself with improving it and with finding methods of building new intuitions which are readily compatible with it" (Fischbein, 1975, pp. 3-4).

Similarly, Champagne et al. (1980) showed that students' prior knowledge about the motion of objects is sometimes consistent with the physical concepts they are supposed to learn and sometimes not. McCloskey et al. (1980, 1981) summarized similar experiments, concluding that students do not lack the correct information but have strong preconceptions and misconceptions. If students' naive beliefs are not addressed, instruction may serve only to teach the students new terminology for expressing their erroneous beliefs.

Method

The traditional approach to improving judgment under uncertainty is to teach the normative tools of statistics, probability, and decision theory. Although widespread in colleges and even in some high schools, this approach has done little to help people outside the classroom (e.g., Tversky and Kahneman, 1971). Why?


First, most courses in statistics, probability, and decision theory teach the normative without devoting any time to the descriptive. They teach formulas instead of cognitive processes. They teach what is "right," not why and when our intuitions are wrong. The only exception of which we are aware is a British Open University interdisciplinary course on Risk and Chance (Dowie, 1980; Lefrere et al., 1980); this presents decision-making in a psychological context, as well as mathematically using Bayes' Theorem, but differs from our work in its audience (adults learning mainly by correspondence) and its far lengthier duration (400 hours of study time over one year).

Second, because traditional methods teach formal statistical and probabilistic concepts, it may be difficult to convince students that probability theory is relevant to life events, instead of being just the "science of coins and playing cards." As Fong et al. (1982) have pointed out, people fail to apply a probabilistic model for tasks outside the classroom, because most daily problems are not formulated as neat, textbook probability problems. Thus, the analogies are not easily perceived.

Finally, the complex mathematical content of courses in statistics, probability, and decision theory makes the material difficult to understand and apply.

We have tried to avoid these pitfalls. To teach thinking skills, thinking must be the primary topic. We have put emphasis on existing intuitions. By encouraging students to examine their (and others') thought processes consciously, we lead them to ask "How do people think?" and "Why did I reach that conclusion?" We believe introspection is a necessity. After students learn to recognize a common thought process, they can start to learn when it is appropriate and when it may lead them into a fallacy. Finally, they are ready to learn how to substitute better processes for fallacious ones.

Our knowledge about children's abilities at different ages determined the choice of the target population: below about 7th grade, children have a very limited ability to appreciate arguments about certainty or uncertainty, but can be taught to recognize situations which are uncertain (Sieber et al., 1970), with various long-term benefits (Sieber et al., 1976). Above 8th grade, it becomes increasingly feasible to introduce more complex thought processes, such as we have incorporated in our curriculum.

More than five years of work have been devoted to bringing the curriculum to its present state: a 200-page textbook for 9th grade students, currently in press, and an accompanying teacher's manual. The textbook contains almost no mathematical formulas. Readers are expected to be able to compute only percentages. The related course is planned to involve 35 to 40 hours of instruction.

The nature of the curriculum is best illustrated by a description of the textbook. As shown in Table I, the textbook is divided into three sections. The first, chapters 1 through 4, provides a general framework for thinking about


TABLE I

Contents of the Textbook

First unit: General framework

1. Certainty and uncertainty. What is certainty? What is uncertainty? Different characteristics of uncertainty. Decisions under uncertainty. Good/bad decisions vs. good/bad outcomes.

2. Defining the uncertain situation. Ambivalence and ambiguity in daily language: advantages and disadvantages. The importance of a clear definition when defining the problem. The clairvoyance test.

3. Listing and grouping possible answers. Think about other possibilities. How to build a good and efficient list: grouping, exclusiveness, exhaustiveness.

4. Defining degree of belief. Demonstrate the ambiguity of verbal probability expressions. Why do we use words to express probability? Why are numbers better?

Second unit: Some tools

5. Estimation. Quantitative questions. Algorithms and how to use them. Use more than one algorithm to estimate a quantity, compare estimates.

6. Sampling, part I. Sampling as a tool for estimation. What is a good/representative sample? What is random sampling? Why is a small random sample worth less than a large random one?

7. Sampling, part II: mental sampling. Mental sampling is never random. How memory characteristics affect mental sampling. The availability heuristic.

Third unit: Probability assessment

8. From group percentages to individual chances, part I. Why and how one can move from the characteristics of a group to the probability that an individual belonging to it has a specific characteristic. What is the best group to consider/use?

9. From group percentages to individual chances, part II. Narrowing the group according to the data one has about the individual.

10. Estimating chances in exceptional (unique) problems. Going from an approximate frequentistic estimate (which takes into account only the information that can be used to form a relevant group) to a final estimate (which uses all additional information). Judging information according to its redundancy, validity and reliability.

11. Demonstrations. Detailed solution of three common probability problems.
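Chapter 6's claim that a small random sample is worth less than a large one can be illustrated numerically (our sketch, not the textbook's; the population proportion 0.3 is arbitrary):

```python
import random

def mean_abs_error(sample_size, true_p=0.3, trials=5_000, seed=7):
    """Average absolute error of a sample proportion as an estimate
    of the population proportion, for a given random-sample size."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        hits = sum(rng.random() < true_p for _ in range(sample_size))
        total += abs(hits / sample_size - true_p)
    return total / trials

for n in (10, 100, 1000):
    print(n, round(mean_abs_error(n), 3))  # error shrinks roughly as 1/sqrt(n)
```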


uncertainty. The second section, chapters 5 through 7, discusses some tools that are frequently used in dealing with problems under uncertainty. The last section, chapters 8 through 11, deals specifically with probability assessment.

For each topic, misconceptions are listed and discussed. A misconception is not necessarily a strategy or belief shared by all junior high school students, nor is it applied under all circumstances. For example, not all the students at all times will mistakenly believe that good decisions always lead to good outcomes. However, students often act or make judgments based on such a belief.

The following are examples that show the way we teach two subjects, each involving a number of misconceptions:

CERTAINTY AND UNCERTAINTY

Certainty and uncertainty are the first concepts discussed in class; both are introduced in chapter 1. We address four misconceptions:

1. Certainty and uncertainty are features of the external world.
2. Uncertainty mostly concerns future events.
3. The more evidence one has, the more confident one should be.
4. Good decisions are always followed by the desired outcome, while bad decisions are always followed by the undesired outcome.

A detective story was chosen as a counterexample for the third misconception, that the more evidence one has, the more confident one should be. A really good detective story has the characteristic that, as you read more, you get more evidence, but you become more confused. Figure 1 describes Henry and James, two readers of a 200-page detective story. On the first page of the book there is a list of all the characters in the book. Before reading further, both Henry and James feel that any one of the characters could be the murderer. After reading 100 pages, Henry thinks it is highly probable that the murderer is X, while James is less sure, suspecting X, Y, and Z. After reading 94 more pages (just 6 pages from the end), Henry has become more doubtful, now suspecting X, Y, and Z, while James becomes more confident that it is X. When they finish the book, both know that the murderer is X. This example demonstrates two important points:

1. Additional evidence may decrease feelings of uncertainty (as for James), increase them (as for Henry), or not change them at all.

2. The same evidence (e.g., reading the first hundred pages of the book) may affect different people differently.

Students discuss the misconceptions and their possible causes. For the example just given, students may respond, "What is the sense in gathering more evidence if it only confuses us more?" This question creates an opportunity to talk more about the differences between facing reality and sticking one's head in the sand, between accepting uncertainty and rejecting it, and between the need to acknowledge one's ignorance and unjustified confidence.


[Figure 1, a diagram titled "Who is the murderer?", did not survive scanning; its content, recoverable from the text, is summarized below.]

Pages read    Henry                          James
1 page        Everybody on the list is a     Everybody on the list is a
              possible murderer              possible murderer
100 pages     Very probable that it is 'X'   It may be 'X' or 'Y' or 'Z'
194 pages     It may be 'X' or 'Y' or 'Z'    Very probable that it is 'X'
200 pages     'X' is the murderer            'X' is the murderer

Fig. 1. A counterexample for the misconception that the more evidence one has, the more confident one should be.

THE BASE-RATE FALLACY

The tenth chapter deals, implicitly, with the base-rate problem. The aims of this chapter are to question two common strategies for assessing probabilities and to teach students to use both base-rate and individuating information. There are four teaching stages:

1. The examples. Students are given several examples. One of them is, "Judy is a beautiful young woman. She takes care of herself and her figure is slim and sexy. She always wears fashionable clothes and is frequently seen in beauty parlors, coffee houses, and clothing boutiques. What is the probability that Judy is a fashion model?"

A typical answer is 70%.

2. Analyzing thought processes. Frequent reasons given for high estimates are "She looks like a fashion model" and "Most fashion models look like that." The first reason shows reasoning based on the similarity between Judy's description and that of a fashion model. The second is based on frequentistic considerations.

3. Questioning common intuitions. In an earlier chapter, the students learned to make lists of mutually exclusive and exhaustive possibilities before assessing probabilities. We now remind the students of this technique by giving them a partial list and asking them to assess all the probabilities: "What is the probability that she is an actress? A cosmetics distributor? A salesperson in a boutique?" All these probabilities seem very high to the students, but because the sum of the probabilities cannot exceed 100%, the students feel that something must be wrong. This opens the door to a discussion of reasoning based on similarity. The problem here is that similarity considerations do not comply with the rules of probability. When we assign probabilities to mutually exclusive and exhaustive events, our assignment is compensatory; the more confidence we assign to one possibility, the less we have available to assign to all the others. (The idea that one's total store of confidence is a constant, that probabilities must sum to 100%, was covered in chapter 4.) But similarity does not work that way. There is no constant store of similarity, so our assessments of similarity need not be compensatory. Judy can be highly similar to an actress and highly similar to a fashion model, a salesperson, and so forth. Thus, similarity is not always a good basis for assessing chances.
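The compensatory point lends itself to a simple classroom-style check (our sketch; the percentages below are the kind of answers described in the text, not data from the study): probabilities over mutually exclusive and exhaustive possibilities must sum to at most 100%, while similarity judgments are under no such constraint.

```python
def check_compensatory(assessments):
    """Sum a set of probability assessments (in %) over mutually
    exclusive and exhaustive possibilities and flag whether the
    total respects the 100% ceiling."""
    total = sum(assessments.values())
    return total, total <= 100

# Hypothetical student assessments for Judy's profession (in %):
probs = {"fashion model": 70, "actress": 60, "cosmetics distributor": 50,
         "boutique salesperson": 60, "other": 10}
total, ok = check_compensatory(probs)
print(total, ok)  # 250 False: something must be wrong
```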

In chapters 9 and 10 the students learned to apply frequentistic considerations in the assessment of probabilities. They were taught to define a relevant total set, to specify the relevant subset, to estimate the size of the set and the subset, and finally to find the ratio of these two frequencies.

Our emphasis in the present case is to teach the students that while the frequentistic approach is often advantageous, it must be applied properly, that is, by correctly defining the total set and the subset. The teacher and the students identify the total set and the subset in the original question and in the students' answer:
- In the original question: The total set is "Women who look like Judy"; the subset is "Fashion models."
- In the students' answer: The total set is "Fashion models"; the subset is "Women who look like Judy."

The students are shown graphically that one should not make inferences about one of these frequentistic arguments based on the other one. It may be true that "most fashion models look like Judy," but it does not necessarily follow that "most young women who look like Judy are fashion models." They come to understand, then, that not every frequency consideration is a good one and that special care must be taken in translating a probability question to a relative frequency question.


4. Suggesting an alternative, desirable approach. "While walking downtown at 6 p.m., I bumped into a woman. What is the probability that she is a fashion model?" A typical answer is less than 5% because "She could have a lot of professions" or "Just a few women are models; why would you necessarily bump into one of them?" Research has shown that when no individuating information is presented, subjects do rely on base-rate information (Kahneman and Tversky, 1973), as most of the students' answers indicate. When the individuating information about Judy is added, the estimates do go up but seldom reach the high estimates originally given.

Such experiences convinced us that when many items of individuating information exist, a recommended approach is to work step by step:

1. Temporarily ignore most of the information and develop a preliminary relative frequency estimate based on part of the information (for example, forget for a moment Judy's good looks and usual haunts; what proportion of women Judy's age are fashion models?).

2. Evaluate each additional piece of information by considering its redundancy, reliability, and validity.

3. According to your evaluation of the evidence, change your preliminary estimate to a final one. Since we are not attempting to teach the students the calculus of Bayes' Theorem, these adjustments to the preliminary estimate are quite informal.

4. Check your estimate by doing the whole process twice, beginning from different preliminary estimates, based on different items of information.
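The four steps can be caricatured as code (a loose formalization of ours; the textbook keeps this procedure entirely informal, and the adjustment values below are invented for illustration):

```python
def informal_estimate(preliminary, evidence):
    """Anchor-and-adjust sketch of the textbook's procedure: start from a
    frequentistic preliminary estimate (in %), then nudge it for each
    evaluated item of evidence. `evidence` maps item name -> signed
    adjustment in percentage points, already discounted for the item's
    redundancy, reliability, and validity."""
    estimate = preliminary
    for item, adjustment in evidence.items():
        estimate = min(100, max(0, estimate + adjustment))
    return estimate

# Hypothetical walk-through for Judy: a tiny base-rate anchor,
# adjusted modestly for her looks and her usual haunts.
prelim = 0.1
final = informal_estimate(prelim, {"looks": +2, "haunts": +1})
print(round(final, 1))  # 3.1
```

Step 4's cross-check amounts to running `informal_estimate` again from a different anchor and comparing the two finals.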

THE TEACHER'S MANUAL

The Teacher's Manual is organized according to the chapters in the students' textbook. For each chapter, the manual contains:

1. The chapter as written for the students, with the addition of marginal remarks that (a) discuss the examples used in the text, (b) point at difficult and/or important issues, and (c) refer to theoretical ideas not mentioned in the text.
2. A didactic section on lesson planning, including suggestions about activities (games, experiments, etc.) for the students.
3. A short essay on topics that are relevant to the chapter's content but that seem difficult for students.
4. Detailed answers to the study questions given in the chapter.
5. A bibliography.


Implementation and Evaluation

During the process of developing the curriculum, we trained some teachers, taught the course in six classes, and evaluated its effect. This implementation was done when the curriculum was in its earliest stages; the teachers had some lesson plans that simulated typical in-class experiences, but no textbook or manual.

The training and teaching experiences have made it clear that the teachers themselves had difficulties in comprehending some of the curriculum topics. To absorb the ideas, to be able to identify reasoning errors, and to adopt correct strategies when necessary require much practice and training. A short workshop is obviously not enough. We do not yet have enough experience to make firm recommendations with regard to how and for how long the teachers should be trained, what their educational background should be, or how deeply they should be instructed in the normative models (i.e., statistics and probability theory). We strongly believe that the future of the curriculum depends heavily on the answers to these questions.

A small-scale evaluation of the curriculum was performed at the end of the first teaching experience. The evaluation involved a questionnaire with twenty questions that (a) dealt with daily events, (b) were comprehensible even to students who did not study our curriculum, and (c) afforded an opportunity for curriculum students to exhibit their knowledge. The following is a sample question. Included below are the expected answers of students who took the course and those who did not.

Yossi and David are two 12th grade students.

Yossi: My cousin registered for pilot training. What are his chances of being accepted: higher than 50%, lower than 50%, or exactly 50%?
David: Why ask me? How should I know? I have never met your cousin and don't know anything about him.
Question: Is David's answer right or wrong? If right, why? If wrong, should he answer more than 50%, less than 50%, or exactly 50%?

Expected answer from a student of the course:
- David's answer is wrong. He probably knows that only a very small percentage of the pilot candidates are accepted. Even without knowing Yossi's cousin, he should have answered "less than 50%."

Expected answers from untutored students:
- He is right, he doesn't know anything.
- He is wrong. He does not know Yossi's cousin, but he knows that Yossi's cousin can either be accepted or rejected. Therefore his answer should be "exactly 50%."

Of the 210 students who got the questionnaire, 126 participated in the course and 84 did not. Some students in each group were from schools known to have a high level of academic achievement, whereas some were from lower-level schools. From 0 to 5 points were given for each question. Scorers did not know who answered which questionnaire. The mean scores of the four groups are shown in Table II.

TABLE II

Mean Scores on Evaluation Questionnaire

School academic level    Course participation
                         no               yes
High                     34.6 (N = 33)    61.5 (N = 96)
Low                      31.6 (N = 51)    45.9 (N = 30)

Examination of Table II reveals that all scores were low; this was expected because of all the difficulties the teachers and the students faced in this first year of curriculum development. An analysis of variance showed that the two main effects and the interaction were statistically significant (α < 0.05):

1. Students who took the course got higher scores. This effect was found for all twenty questions.

2. Students from high-level schools earned higher scores. This effect, too, was found for all twenty questions.

3. The children from the high-level schools gained more from the curriculum than those from the lower-level schools. This effect was found for 16 questions. The last result is not surprising, since the curriculum was not intended for disadvantaged students [2].
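The interaction can be read directly from the cell means in Table II: the gain associated with the course is larger in the high-level schools than in the low-level ones. A minimal sketch of that comparison (using only the four means from Table II; the significance test itself would of course require the raw scores):

```python
# Cell means from Table II, by (school level, course participation).
means = {
    ("high", "no"): 34.6, ("high", "yes"): 61.5,
    ("low", "no"): 31.6, ("low", "yes"): 45.9,
}

# Simple effect of the course within each school level.
gain_high = means[("high", "yes")] - means[("high", "no")]  # about 26.9 points
gain_low = means[("low", "yes")] - means[("low", "no")]     # about 14.3 points

# A difference between these gains is the interaction pattern that the
# analysis of variance found statistically significant.
interaction = gain_high - gain_low
print(round(gain_high, 1), round(gain_low, 1), round(interaction, 1))
```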

These results were encouraging and represent the sort of evaluation that must accompany further teaching experiments that will take place after the publication of the text and Teacher's Manual.

Concluding Remarks

The time has come for a fuller implementation of the curriculum. Special attention must be given to two issues:

1. Teacher training. What background should teachers have, and how should they be trained?

2. Curriculum evaluation. Will students improve their thinking skills? Do they generalize their knowledge to different content areas? If they do improve, how long will the improvement last?

Much work remains for the future. Many more topics should be added to the curriculum, including the perception of correlational relationships, judgments made in hindsight, assessing the probability of conjunctions, and so forth. Since it is too early to judge the success of the present curriculum, supplements to the current version are premature.

While working on the curriculum, we found ourselves fascinated by the mutual contributions of, and the interactions between, psychological research and curriculum development. Our curriculum was written at a time when research in thinking about uncertainty was strongly oriented toward demonstrating the existence of heuristics and biases in judgment. We set out to improve students' thinking skills, desperately looking for the most efficient tools but too often not finding them in the relevant literature. More recently, psychologists have become increasingly interested in debiasing and have suggested some ways to overcome specific biases (e.g., Fischhoff, 1982). The debiasing approaches we invented for our curriculum must be subjected to similar research. Two examples should suffice:

1. People too often give insufficient weight to the base rate. Some debiasing techniques have been suggested only recently (Fischhoff et al., 1979; Fischhoff and Bar-Hillel, 1982). We used a quite different technique, the "step-by-step" approach described above. We are now beginning to do research on the effectiveness of this approach to debiasing the base-rate fallacy.

2. People have difficulties in estimating uncertain quantities (How many people live in Australia? How many dimples are there on a golf ball?). The literature on improving such estimates is sparse (Armstrong et al., 1975; Lichtenstein et al., 1978). Our curriculum recommends some approaches to improving quantitative estimates, all of which should be tested empirically.
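One well-studied aid to such estimates is the decomposition principle (Armstrong et al., 1975): replace one hard estimate with several easier ones and combine them. A minimal sketch for the golf-ball question, in which every input is a rough, hypothetical guess rather than a measured value:

```python
import math

# Decomposition: estimate the dimple count from the ball's size,
# a dimple's size, and a guessed coverage fraction.
# All figures below are rough, hypothetical guesses.
ball_diameter_cm = 4.3       # a golf ball is roughly 4 cm across
dimple_diameter_cm = 0.35    # each dimple looks a few millimetres wide
coverage = 0.75              # guess: dimples cover about 3/4 of the surface

ball_area = math.pi * ball_diameter_cm ** 2            # sphere surface area
dimple_area = math.pi * (dimple_diameter_cm / 2) ** 2  # area of one dimple

estimate = coverage * ball_area / dimple_area
print(round(estimate))  # a few hundred -- the right order of magnitude
```

Even with crude inputs, the decomposed estimate lands in the hundreds, far closer to the true count than most direct intuitive guesses.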

We hope that just as psychological research initiated this curriculum, so will this curriculum trigger new areas of research. We look forward to a future of mutual interaction between psychological research and educational efforts.

Acknowledgements

The preparation of this paper was supported by a Lady Davis Fellowship. We wish to thank Baruch Fischhoff, Don McGregor, Sarah Lichtenstein and Paul Slovic for their many constructive comments on this article.


Notes

1 The principal investigators of this project are Ruth Beyth-Marom and Shlomit Dekel from the Hebrew University, Jerusalem, Israel; Daniel Kahneman from the University of British Columbia; and Seymour Fox from the Hebrew University. Baruch Fischhoff and Sarah Lichtenstein from Decision Research in Eugene, Oregon, serve as consultants. Ruth Gombo and Moshe Shaked from the Hebrew University serve as assistants.

2 We believe that the form of a curriculum aimed at probabilistic thinking should depend upon the population for which it is intended. Different curricula should be developed for different populations such as disadvantaged children, medical students, or military officers.

References

Armstrong, J. S., Denniston, W. B. and Gordon, M. M. (1975). "The use of the decomposition principle in making judgments," Organizational Behavior and Human Performance 14: 257-263.

Bar-Hillel, M. (1980). "The base-rate fallacy in probability judgments," Acta Psychologica 44: 211-233.

Brown, J. S. and Burton, R. R. (1978). "Diagnostic models for procedural bugs in basic mathematical skills," Cognitive Science 2: 155-192.

Bruner, J. S., Goodnow, J. J. and Austin, G. A. (1956). A Study of Thinking. New York: Wiley.

Caramazza, A., McCloskey, M. and Green, B. (1981). "Naive beliefs in 'sophisticated' subjects: Misconceptions about trajectories of objects," Cognition 9: 117-123.

Champagne, A. B., Klopfer, L. E., Solomon, C. A. and Cahn, A. D. (1980). Interactions of students' knowledge with their comprehension and design of science experiments. LRDC Report, University of Pittsburgh.

Chapman, L. J. and Chapman, J. P. (1969). "Illusory correlation as an obstacle to the use of valid psychodiagnostic signs," Journal of Abnormal Psychology 74: 271-280.

Dowie, J. (1980). U201 Risk: Course Handbook. Milton Keynes: Open University Press.

Falk, R., Falk, R. and Levin, I. (1980). "A potential for learning probability in young children," Educational Studies in Mathematics 11.

Fama, E. F. (1965). "Random walks in stock market prices," Financial Analysts Journal 21: 55-60.

Feuerstein, R., Rand, Y., Hoffman, M. D. and Miller, R. (1980). Instrumental Enrichment. Baltimore: University Park Press.

Fischbein, E. (1975). The Intuitive Sources of Probabilistic Thinking in Children. New York: D. Reidel.

Fischhoff, B. (1982). "Debiasing," in D. Kahneman, P. Slovic and A. Tversky (eds.), Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Fischhoff, B. and Bar-Hillel, M. (1982). "Focusing techniques as aids to inference," Decision Research Report 82-4.

Fischhoff, B., Slovic, P. and Lichtenstein, S. (1979). "Subjective sensitivity analysis," Organizational Behavior and Human Performance 23: 339-359.

Fischhoff, B., Slovic, P. and Lichtenstein, S. (1981). "Lay foibles and expert fables in judgments about risk," in T. O'Riordan and R. K. Turner (eds.), Progress in Resource Management and Environmental Planning, Vol. 3. Chichester: Wiley.

Fletcher, G. H. and Wooddell, G. (1981). "Educating for a changing world," The Journal of Thought 16 (3).

Fong, G. T., Krantz, D. H. and Nisbett, R. E. (1982). "Improving inference through statistical training," paper presented at the meeting of the American Psychological Association, Washington, D.C., August 1982.

Furby, L. (1973). "Interpreting regression toward the mean in developmental research," Developmental Psychology 8: 172-179.

Goodlad, J. I. (1973). "A concept of the school in 2000 A.D.," in R. W. Holstrop (ed.), Foundations of Futurology in Education. Homewood, Illinois: ETC Publications.

Hayes, J. R. (1981). The Complete Problem Solver. Philadelphia: The Franklin Institute Press.

Johnson-Laird, P. N. and Wason, P. C. (eds.) (1977). Thinking. New York: Cambridge University Press.

Kahneman, D. and Tversky, A. (1972). "Subjective probability: A judgment of representativeness," Cognitive Psychology 3: 430-454.

Kahneman, D. and Tversky, A. (1973). "On the psychology of prediction," Psychological Review 80: 237-251.

Kahneman, D. and Tversky, A. (1979). "Intuitive predictions: Biases and corrective procedures," TIMS Studies in Management Science 12: 313-327.

Kahneman, D., Slovic, P. and Tversky, A. (eds.) (1982). Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Knight, F. H. (1921). Risk, Uncertainty, and Profit. Boston and New York: Houghton Mifflin.

Lefrere, P., Dowie, J. and Whalley, P. (1980). "Educating for justified uncertainty," in R. Winterburn and L. Evans (eds.), Aspects of Educational Technology 14. London: Kogan Page.

Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M. and Combs, B. (1978). "Judged frequency of lethal events," Journal of Experimental Psychology: Human Learning and Memory 4: 551-578.

Lipman, M. (undated). Philosophy for Children. Unpublished manuscript, Montclair State College, p. 9.

McCloskey, M., Caramazza, A. and Green, B. (1980). "Curvilinear motion in the absence of external forces: Naive beliefs about the motion of objects," Science 210: 1139-1141.

Michael, D. (1968). The Unprepared Society: Planning for a Precarious Future. New York: Basic Books.

Miller, G. A. (1956). "The magical number seven, plus or minus two: Some limits on our capacity for processing information," Psychological Review 63: 81-97.

Newell, A. and Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, N.J.: Prentice-Hall.

Sieber, J., Epstein, M. and Petty, C. (1970). "The effectiveness of modelling and concept-learning procedures in teaching children to indicate uncertainty," The Irish Journal of Education 4 (2): 90-106.

Sieber, J., Clark, R. E., Smith, H. M. and Sanders, N. (1976). "The effects of learning to be uncertain on children's knowledge and use of drugs," R & D Memo 144, Stanford Center for Research and Development in Teaching.

Seif, E. (1981). "Thinking and education: A futures approach," The Journal of Thought 16 (3).

Slovic, P., Fischhoff, B. and Lichtenstein, S. (1977). "Behavioral decision theory," Annual Review of Psychology 28: 1-39.

Tversky, A. and Kahneman, D. (1971). "The belief in the 'law of small numbers'," Psychological Bulletin 76: 105-110.

Tversky, A. and Kahneman, D. (1973). "Availability: A heuristic for judging frequency and probability," Cognitive Psychology 5: 207-232.

Tversky, A. and Kahneman, D. (1974). "Judgment under uncertainty: Heuristics and biases," Science 185: 1124-1131.

Underwood, B. J. (1969). "Attributes of memory," Psychological Review 76: 559-573.

Vye, N. J. and Bransford, J. D. (1981). "Programs for teaching thinking," Educational Leadership (October), pp. 26-28.

Wason, P. C. and Johnson-Laird, P. N. (1972). Psychology of Reasoning: Structure and Content. London: Batsford.

Whimbey, A. and Lochhead, J. (1980). Problem Solving and Comprehension: A Short Course in Analytical Reasoning. Philadelphia: The Franklin Institute Press.

Young, R. M. and O'Shea, T. (1981). "Errors in children's subtractions," Cognitive Science 5: 153-177.