
Article type: Advanced Review

Characterizing uncertainty in expert assessments

Jessica O'Reilly, [email protected], Woodrow Wilson School, Princeton University, and Program in Science Studies, University of California, San Diego

Keynyn Brysse, [email protected], Woodrow Wilson School, Princeton University

Michael Oppenheimer, [email protected], Department of Geosciences and Woodrow Wilson School, Princeton University

Naomi Oreskes, Department of History and Program in Science Studies, University of California, San Diego

Keywords

assessment, climate change, policy, uncertainty, IPCC

Abstract

Large scale assessments have become an important vehicle for organizing, interpreting, and presenting scientific information relevant to environmental policy. At the same time, the characterization of scientific uncertainty with respect to the very questions these assessments were designed to address has become more difficult, as ever more complex problems involving greater portions of the Earth system and longer timescales have emerged at the science-policy interface. In this article, we review recent research on the categorization of uncertainty in the context of problems of global change, and explore key approaches to characterizing and communicating uncertainty employed in such assessments. Our discussion is based on recent research involving case studies of two sets of global assessments, one focused on stratospheric ozone depletion and the other on the response of the West Antarctic ice sheet (WAIS) to global warming. We find that assessments have been fairly adept at analyzing and communicating one type of uncertainty in models (parameter uncertainty) while encountering much greater difficulty in dealing with structural model uncertainty, sometimes entirely avoiding grappling with it. In the absence of viable models, innovative approaches were developed in the ozone case for consolidating information about highly uncertain future outcomes, whereas little such progress has been made in the case of WAIS. Similarly, these cases indicate that future assessments need to develop improved approaches to representing internal conflicts of judgment, in order to produce a more complete evaluation and representation for policy makers.


In the second half of the 20th century, a new phenomenon emerged in science: large-

scale, organized and formalized assessments of the state of scientific knowledge for the

purposes of public policy. Scientists have long had mechanisms for summarizing the state of

their art for themselves—textbooks, review papers, annual and ad hoc professional meetings—

and various forms of assessment for external audiences have existed for some time. In the 19th

century there were commissions intended to answer specific policy questions, such as the Royal

Commission on Vaccination [1, 2]. In the twentieth century, various national and royal academies

have produced thousands of reports on myriad subjects; the U.S. National Academy of Sciences

alone produces over two hundred studies each year [3]. Still, the more recent assessments are

somewhat different, intended not merely to address a specific matter of fact or policy, but to

summarize a broad body of policy-relevant knowledge, with the presumption that such summaries

are needed to inform appropriate public policy. Assessments of science for policy have become

particularly prominent in the earth and environmental sciences, where formal assessments have

played a major role in political and social debates over acid rain, ozone depletion, and global

warming [4].

A number of scholars have studied assessment processes from policy and social

perspectives [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], and there is a copious literature on risk

assessment, perception and communication [e.g. 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26].

However, a major issue in these assessments--largely unaddressed in the literature--is how they

deal with scientific uncertainty. Issues like global warming and the ozone hole have become the

subject of scrutiny in part because the science surrounding them was, in important ways,

uncertain or incomplete. Yet because the potential ramifications of ozone depletion and global

warming are so great and involve lags between causes and consequences, policies are often

planned or implemented under a great deal of scientific uncertainty.

By uncertainty, we mean “imprecision in the characterization of any outcome” (Reference

27, p. 158). Scientists (and other experts) are well aware that their characterizations of potential

outcomes may be imprecise, and have grappled with the uncertainty question, recommending a

variety of techniques for uncertainty estimation and articulation [23, 28, 29, 30, 31, 32]. Recently,


some scientists have also begun to consider how well prior assessments have predicted actual

environmental change [33, 34; see also 35]. By and large, however, few scholars have

considered how uncertainty is addressed by scientists within the framework of formal

assessments [some examples, though, include 36, 37]. This paper undertakes such an

examination. First, we review how existing research categorizes uncertainty, and consider the

ways in which dominant uncertainty paradigms might omit key features of interest to the user

community. Then, we analyze two cases to illuminate some approaches to uncertainty in

scientific assessments relating to global change. Our cases are: 1) stratospheric ozone depletion

as discussed in international ozone assessments, and 2) the stability of the West Antarctic Ice

Sheet as discussed by the Intergovernmental Panel on Climate Change. The division of model

uncertainty into “parameter” and “structural” uncertainty is useful in describing these cases. We

conclude by suggesting that, while the assessments studied here did a great deal to consider and

address parameter uncertainty, structural uncertainty was often treated only in passing.

Furthermore, we find that no systematic approach is employed to articulate and, if possible, explain disagreement among experts.

Characterizing Uncertainty

Scientific assessments attempt to summarize the state of the art in a manner useful to

both a broad community of scientists and policy makers, but they are driven primarily by the

demand for information that can be used beyond the community of experts. Policy makers need

to know both the range of potential outcomes and the likelihood of those outcomes; this has led

to considerable discussion about the diverse types of uncertainty that policy-makers ought to

comprehend. Characterizing uncertainty can be a way of 1) admitting what you don’t know; 2)

indicating the imprecision of what you do know; and 3) quantifying how much it matters if you don’t know something, or only know it within a certain range of precision.

Studies have used various terms to convey these distinct but related ideas.


Webster [38, 39] identified four challenges to articulating uncertainty: empirical (gaps in data),

methodological (different models with different approaches), institutional (the social dynamics of

consensus and/or expert judgments), and philosophical (contrasting views of probability, risk, and

the future itself). Patt [12] distinguishes between model uncertainty (uncertainties in the model

itself) and conflict uncertainty (disagreement among experts in their judgment of the matters at

hand).

Model uncertainty is part and parcel of the proliferation of simulations of climate aimed at

prediction, and is generally depicted by reporting a range of possible outcomes (Reference 12, p.

38). This may be done by running multiple simulations (or ‘realizations’) of a single model while

varying initial or boundary conditions, or by presenting outcomes for an ensemble of models using a

single set of initial conditions. Recent studies indicate that the performance of ensembles of

models improves with an increase in ensemble size--as judged by the match between model

outcome and observational data--and that Bayesian approaches applied to ensembles may yield

additional improvement [40, 41].
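
To make these notions concrete, the following Python sketch illustrates both strategies in miniature: reporting a range from multiple realizations of one model, and a simple performance-weighted combination in the spirit of the Bayesian ensemble approaches just cited. The model, its numbers, and the "observation" are invented placeholders, not any assessment's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(initial_condition, model_bias):
    # Hypothetical stand-in for a climate model: maps an initial condition
    # and a structural bias to one projected quantity (e.g., warming in K).
    return 2.0 + model_bias + 0.1 * initial_condition + rng.normal(0.0, 0.05)

# (1) Many realizations of a single model under varied initial conditions.
single_model = [toy_model(ic, model_bias=0.0) for ic in rng.normal(0.0, 1.0, 50)]
print(f"single-model spread: {min(single_model):.2f} to {max(single_model):.2f}")

# (2) An ensemble of structurally different models, one run each.
ensemble = np.array([toy_model(0.0, b) for b in (-0.3, -0.1, 0.0, 0.2, 0.5)])

# (3) A crude Bayesian-flavored combination: weight each model by how well
# it matches an observation, then report the weighted estimate.
observation = 2.1
weights = np.exp(-0.5 * ((ensemble - observation) / 0.2) ** 2)
weights /= weights.sum()
print(f"performance-weighted ensemble estimate: {weights @ ensemble:.2f}")
```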

Model uncertainty can itself be divided into “structural uncertainty” and “parameter

uncertainty.” Structural uncertainty refers to questions about the way in which the model

construction reflects the processes--be they physical, chemical, biological, or social--represented

by the model. By definition, a model reflects but does not replicate the natural world, so there are

always uncertainties introduced when describing the essentially infinite complexity of natural

systems by simplified representations. These uncertainties include both “known unknowns”--processes known or suspected to exist, but which we have no good way to describe mathematically, or for which we lack measurements--and “unknown unknowns,”

processes and variables that might be pertinent but of which we are (as yet) unaware. Both of

these categories trouble the prediction-making capabilities of models [42]. Statistical approaches

are being applied in an attempt to reduce structural uncertainty but the verdict on their efficacy is

unclear [43, 44].


Parameter uncertainty refers to the questions and approximations surrounding the values

of parameters used as model input. These can be explored through sensitivity testing, which

examines the impact of variations in model parameter input on model output, in some cases

using probabilistic representations of parameters combined with Monte Carlo analysis.
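
A minimal sketch of these two techniques, using a deliberately trivial stand-in model (the parameter names, distributions, and sensitivities here are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def model_output(k1, k2):
    # Hypothetical model response to two uncertain input parameters.
    return 100.0 - 30.0 * k1 - 5.0 * k2

# One-at-a-time sensitivity test: perturb each parameter around a baseline
# and observe the change in model output.
baseline = {"k1": 1.0, "k2": 1.0}
for name in baseline:
    for factor in (0.9, 1.1):
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        print(f"{name} x{factor}: output = {model_output(**perturbed):.1f}")

# Monte Carlo propagation: draw parameters from assumed probability
# distributions and report the spread of the resulting outputs.
k1 = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)
k2 = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)
outputs = model_output(k1, k2)
print("5th-95th percentile of output:", np.percentile(outputs, [5, 95]))
```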

Conflict uncertainty refers to the simple fact that, even presented with the same

information, experts may come to different conclusions and therefore find themselves in conflict

with each other. (Experts may also find themselves in conflict with their own prior expressed

judgments or published opinions.) Formalized expert elicitation has been used to determine the

range of expert opinion on topics for which general agreement is lacking [23, 45]. This method is

also used to illuminate the origins of such differences and in some cases to narrow them, by

making the assumptions on which they are based explicit, and inviting experts to reconsider their

views in light of this additional information [23, 38]. On the other hand, there is the risk that such

“invitations” can effectively act as pressure on outliers to bring their views into concordance with

those of their colleagues.

Many scientists believe that better communication of uncertainties is crucial for effective

public policy. While Webster emphasizes the need to improve approaches to uncertainty from

within the scientific community, Manning, for example, focuses on the challenges of

communicating uncertain knowledge to the public and policy makers. “Are there better strategies

for communicating the information that is important from a policy perspective without straying into

one of the twin traps of ignoring the necessary caveats or else producing statements so hedged

in qualifications as to misrepresent what we really do understand?” he asks [46]. Whether or not this presumption--that better communication of uncertainty is necessary--is correct, it is clear that scientists have been consistent neither in their approaches to uncertainty nor in its communication.

Morgan et al. [47] surveyed assessors for the U.S. National Assessment of the Potential

Consequences of Climate Variability and Change and found their approach to uncertainty uneven

or lacking. They attributed this to assessors’ relative discomfort with and lack of knowledge about


conveying uncertainties. For example, few assessors surveyed knew that terms such as “likely” or

“unlikely” could be interpreted by readers in various ways (Reference 47, p. 9030). While

assessors had relevant scientific expertise, they did not have expertise in communicating

uncertainties. These authors argue, in effect, that judging scientific uncertainty and

communicating it are two different things, and suggest that future assessments utilize experts in

uncertainty as well as experts on the particular scientific subject matter.

Others have critiqued the assumption that uncertainty reduction is necessary for policy

action. Lempert et al. [28] call the IPCC’s current operating paradigm “predict-then-act,” a

process that seeks to quantify uncertainty to the greatest extent practicable before making a

policy decision based on these calculations. This strategy encounters two difficulties when

applied to climate change: 1) the topic “climate change” is so diverse that it is extremely difficult to

quantify many aspects, and 2) climate science and policy operate in conditions of deep

uncertainty where human actors do not and cannot agree on many factors. The authors offer their

“assess-risk-of-policy” framework as an antidote, where vulnerabilities to a range of plausible

policy options are detailed. Instead of providing a numerical estimate of likelihood of specific

outcomes, with uncertainty bars, they suggest that assessors provide a list of possible policy

scenarios and then analyze the risks, benefits, and uncertainties for each scenario. This

approach, they argue, may help to mitigate the “fallacy of misplaced concreteness” [48], in which

one places excess confidence in results that do not warrant it. It also allows for surprises to

be included in assessing risk, and encourages analysts to consider a wide range of possible

futures, since focusing on concrete ramifications instead of abstractions allows a broader range of

policy scenarios to become apparent and thus potentially feasible to implement.

Below, we put aside these questions of efficacy and examine two case studies—

stratospheric ozone depletion and the risk of rapid disintegration of the West Antarctic Ice

Sheet—to illuminate how these challenges and proposed solutions were actually addressed in

practice.

The Assessment of Stratospheric Ozone Depletion


Structural Uncertainty in Ozone Assessments

The Antarctic ozone hole, a region characterized by massive depletion of ozone occurring

in the lower stratosphere, was first reported in 1985. It came as a surprise, because elaborations

of the original Rowland-Molina hypothesis of ozone depletion by anthropogenic chlorine

compounds predicted much smaller (perhaps unobservable) depletion in the late 20th century and

relatively modest (up to 7%) depletion of global ozone [49] decades later, expected to occur

largely in the upper stratosphere at middle and lower latitudes. Instead, the ozone hole was a

geographically limited circumpolar area of major ozone depletion (up to 60% in total column

ozone), occurring only in the austral spring. The location, timing, and extent of the Antarctic

ozone hole puzzled scientists. The unusual nature of the ozone hole suggested an unusual cause

[50].

Scientists were also troubled that the ozone hole had not been reported sooner. The

British Antarctic Survey team which announced the discovery in 1985 had detected the hole

some four years earlier, but before making the announcement, took time to make sure the effect

was not a malfunction of the Halley Bay instrument that first detected it. However, once scientists

knew what to look for, they were able to trace the appearance of the hole back to about 1978.

Given that satellite observations of ozone in the region had begun during the earlier period, why

was the ozone hole not detected sooner? It soon came out that the NASA satellite indeed had

recorded the extremely low springtime values, but had been programmed to flag all readings

below a certain level (180 Dobson Units) as probably in error. Because ozone levels inside the

hole were dropping to around 150 DU, these readings were consistently identified as

instrumentation errors. Unfortunately, NASA also had an incorrectly calibrated ground-based

instrument in Antarctica giving readings of 300 DU, the expected level, at the same time.

It appears, then, that if the NASA team noticed the anomalously low satellite readings,

they dismissed them as instrument failures, since they did not agree with the ground-based

instrument’s readings [51]. Because the ozone detectors on the satellite generated a huge

amount of raw data, and the availability of both computer time to process the data and human

technicians to search for patterns within the data was limited, it may have made sense for the


data processing programs to have some built-in automatic flagging. However, as it turned out,

this automated response masked a real and important trend. This speaks to an expectation

about what ozone levels should have been--an expectation that proved to be incorrect.

By 1987 scientists had conclusively demonstrated that the ozone hole was caused by

heterogeneous (i.e. multi-phase) chemical reactions involving anthropogenic chlorine, facilitated

by the unusual dynamics of the polar winter vortex. Reactions on the liquid or solid surfaces of

polar stratospheric cloud (PSC) particles in the Antarctic (and, when conditions were cold

enough, in the Arctic as well) caused large ozone depletion each polar spring. However, before

the ozone hole discovery forced them to re-think their conceptual models, scientists had

dismissed the possible importance of multi-phase reactions (for example, the catalytic chlorine

cycle proposed by Molina and Rowland involved only gas phase chemistry). At the time, gas

phase atmospheric chemistry was much better understood than multi-phase chemistry, and

heterogeneous reactions were seen as arcane and generally unimportant to atmospheric

processes. Heterogeneous reactions are also notoriously difficult to simulate and measure in the

laboratory, particularly under conditions that may be relevant to the real atmosphere.

The earliest national and international ozone assessments either ignored heterogeneous

reactions entirely, or mentioned them only to dismiss their possible significance [note 1]. For

example, a 1976 U.S. National Academy of Sciences report discussed heterogeneous reactions

only in an appendix called “Detailed Discussion of Removal Processes.” It notes that both CFC

and ClO molecules are unlikely to react with or on the sulfate aerosols of the stratosphere’s

Junge layer to any significant extent, due to slow predicted reaction rates, the extreme inertness

of the CFC molecules, and the sparseness of the aerosol particles. They “conclude that inactive

removal of CFMs [a class of CFCs] from the stratosphere by heterogeneous processes is not at

all significant,” and “It is therefore clear that heterogeneous removal of ClO is negligible when

compared with the homogeneous ozone catalysis processes” (Reference 52, p. 226).

However, the authors were only considering heterogeneous removal of CFCs and

chlorine (processes that would protect ozone), not heterogeneous reactions that could speed ozone


depletion. Heterogeneous reactions can only take place in the presence of solid or liquid

particles; if there are no such surfaces present, any reactions will necessarily occur among gases

only. Only the sulfate particles of the Junge layer were known to occur in the stratosphere, but

these were generally thought to be too low in stratospheric altitude and too sparse to contribute to

ozone depletion, which was thought to occur higher in the stratosphere via gas-phase reactions.

The sulfate layer might, however, participate in the removal of CFC molecules or chlorine atoms before they could contribute to ozone depletion, and it is this possibility that was considered in early

assessments like the 1976 report.

Elsewhere in the report, the authors offered the caveat that their predicted ozone

depletion reaction rates would be accurate “unless there is some presently unknown process that

quickly returns ClONO2 into active ClOx and NOx species” (Reference 52, p. 154). However,

nowhere did they suggest that heterogeneous reactions might enable such a process, which

turned out to be precisely the case. The possibility that heterogeneous reactions might directly

contribute to ozone depletion was not raised explicitly in ozone assessments until after the

Antarctic ozone hole had been discovered.

The absence of discussion of heterogeneous chemistry from ozone assessments of the

late 1970s and early 1980s did not reflect an absence of basic scientific research on the subject,

however. If scientists were studying heterogeneous reactions, why did this work not percolate

into the assessments? The answer to this may be that these studies produced ambiguous or

negative results, which seemed to suggest that heterogeneous chemistry was not important in

ozone depletion. A number of studies, involving both laboratory experiments and computer

modeling, were conducted to evaluate the potential occurrence and rapidity of ozone-relevant

reactions on sulfate aerosols of the Junge layer (the only commonly known source of

stratospheric particles, before the discovery of the Antarctic ozone hole called attention to polar

stratospheric cloud particles). All of the reactions investigated in these experiments and models

were apparently too slow, either inherently or due to the sparseness of the aerosol particles, to

contribute significantly to ozone depletion [see for example 53, 54, 55, 56]. Therefore, because


scientific knowledge of heterogeneous chemistry was highly uncertain and the reactions themselves apparently unimportant, the subject was left out of early assessments virtually entirely.

However, once the ozone hole had been discovered and linked to heterogeneous

chemistry, assessment authors had to find a way to incorporate this new phenomenon into their

predictions, even absent appropriate photochemical models of the processes. Because the

ozone hole was restricted to a small and self-contained geographic area around the South Pole,

scientists pondered whether some particular characteristic of the region was causing this special

phenomenon. One obvious and unusual characteristic of the polar winter stratosphere is the

formation of polar stratospheric clouds (PSCs). These are clouds of particles composed of nitric

acid (HNO3), water, and sulfuric acid (H2SO4) (Type I PSCs), or just water ice (Type II PSCs),

which form at extremely low temperatures (-78°C or lower) found at lower stratospheric altitudes

over Antarctica. Atmospheric chemists knew that the presence of solid or liquid surfaces could

potentially speed up or enable reactions that happened more slowly, or not at all, in the gas

phase. Such surfaces were not previously known to be present in significant quantities in regions

of the global stratosphere where ozone depletion was thought to occur. But now, ozone depletion

was found to be occurring in precisely the area where such surfaces might be available in

abundance.

Scientists immediately wondered if such heterogeneous reactions might be causing the

Antarctic ozone hole. For example, Rowland and colleagues conducted laboratory experiments

which showed that ClONO2, formerly regarded as a reservoir locking up chlorine and nitrogen in

inert forms, would actually hydrolyze extremely quickly on any available surface, potentially

freeing both chlorine and nitrogen to participate in ozone depletion [57]. Molina and colleagues

were able to show that contrary to previous belief, HCl and ClONO2 did not have to collide

simultaneously on the PSC surfaces in order to react; instead, at the extremely cold Antarctic

temperatures, the HCl would be adsorbed onto the cloud particles, thus needing only a collision of the cloud particle with ClONO2--a much more likely event [56]. Two different teams now

proposed discrete sets of heterogeneous chlorine reactions which might be causing the large

Antarctic ozone depletion [55, 58].


It seems, then, that with very few exceptions (see ref. 49 p.42), scientists did not address

heterogeneous reactions in detail, as potential contributors to ozone depletion, until they were

forced to by the discovery of an unexpected phenomenon. This made them realize their models

were either inaccurate or incomplete, so they re-considered factors they had previously

neglected. Among the reasons why scientists omitted heterogeneous chemistry from their

models of ozone depletion was that heterogeneous reactions are extremely hard to measure: the

reaction rates depend on the presence of surfaces, as well as their particular type. There are

always surfaces in contained laboratory experiments, which may not in any way resemble

stratospheric aerosol, and thus may produce misleading outcomes. Even once it became clear

that heterogeneous reactions were important, this difficulty inhibited experiments aimed at

characterizing the reactions so that they could be incorporated into models that might be able to replicate the ozone hole (or mid-latitude ozone depletion), and this remains a challenge to this

day.

Meanwhile, however, policy-makers have been making judgments about ozone depletion

based on scientific information despite the absence of reliable models. How have they managed

to do so? Ozone assessors responded to the scientific difficulties by developing a series of

metrics for ozone depletion that in effect served as surrogates for predictive

photochemical/dynamical models. These metrics--or proxies--were employed as a pragmatic way

to proceed despite this deficiency.

The first was the Ozone Depletion Potential (ODP), proposed by atmospheric modeler

Don Wuebbles in 1983, and first appearing in an ozone assessment in 1989 [59, 60]. The ODP

incorporated information from models about the emission rate and atmospheric lifetime of a given

ozone depleting substance; results were presented relative to the action of an equal mass of

CFC-11 (i.e., CFC-11 was arbitrarily assigned an ODP of 1). While ODPs continued to be

discussed in all subsequent ozone assessments, they became increasingly difficult to calculate

(because of the increasing complexity of the science once the importance of heterogeneous

reactions and, later, of reactions involving bromine was recognized), and for this reason were


supplemented by another proxy metric: the chlorine loading potential or CLP. Use of the CLP

circumvented the need to calculate actual ozone depletion.
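
The underlying arithmetic of the ODP is a simple normalization, which the following sketch illustrates; the per-kilogram loss figures are invented placeholders, not assessed values.

```python
# ODP-style normalization: ozone loss per unit mass emitted, expressed
# relative to CFC-11 (assigned 1.0 by definition). The per-kilogram loss
# figures below are illustrative placeholders, not assessment values.
loss_per_kg = {"CFC-11": 1.0e-6, "CFC-113": 0.8e-6, "HCFC-22": 0.05e-6}

reference = loss_per_kg["CFC-11"]
odp = {gas: loss / reference for gas, loss in loss_per_kg.items()}

for gas, value in odp.items():
    print(f"{gas}: ODP = {value:.2f}")  # CFC-11 prints 1.00 by construction
```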

The CLP, which first appeared in a July 1988 report of the U.S. EPA and was also first

used in the 1989 ozone assessment, represented how much chlorine (or equivalent ozone-

depleting substance, such as bromine) from a given halocarbon would be delivered to the

stratosphere [61]. By 2002, another shift had occurred in the ozone assessment reports, and

CLP had itself been replaced by the equivalent effective stratospheric chlorine, or EESC. The

EESC provides an estimate of the total amount of chlorine and chlorine equivalent (e.g., bromine)

delivered to the stratosphere to participate in ozone depletion at a particular time. EESC was first

proposed by Daniel and colleagues in 1995. In addition to serving as an easily interpreted

measure of a particular chemical’s contribution to chlorine loading in the stratosphere, it has

another purpose: it is set up for easy conversion into GWP or global warming potential [62]. With

this new metric, policy-makers could quickly see how much a given ozone-depleting chemical

also contributes to climate change, a concern which has overtaken ozone depletion in the last few

decades, and which the Science Assessment panelists were asked to begin to consider

alongside ozone depletion beginning in the early 1990s [note 2].
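
In essence, the EESC is a weighted sum over halogen source gases: each gas contributes in proportion to its abundance, the number of chlorine or bromine atoms it carries, and the fraction of the gas expected to have released its halogen in the stratosphere, with bromine weighted by an efficiency factor reflecting its greater per-atom potency. The Python sketch below shows the structure of such a calculation; the mixing ratios, release fractions, and efficiency factor are illustrative assumptions rather than assessed values.

```python
# Simplified EESC-style calculation: a weighted sum over halogen source
# gases. Mixing ratios (ppt) and fractional release factors below are
# illustrative placeholders, not values from any assessment.

ALPHA = 60  # assumed bromine efficiency factor relative to chlorine

gases = [
    # (name, halogen, atoms per molecule, mixing ratio ppt, fractional release)
    ("CFC-11",     "Cl", 3, 240.0, 0.47),
    ("CFC-12",     "Cl", 2, 530.0, 0.23),
    ("halon-1211", "Br", 1,   4.0, 0.62),
]

eesc = 0.0
for name, halogen, n_atoms, mixing_ratio, frac_release in gases:
    contribution = n_atoms * frac_release * mixing_ratio
    if halogen == "Br":
        contribution *= ALPHA  # bromine destroys ozone far more efficiently
    eesc += contribution

print(f"illustrative EESC: {eesc:.0f} ppt equivalent effective chlorine")
```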

The authors of Analysis of Global Change Assessments: Lessons Learned call the EESC

“perhaps the most important advance in avoiding the political pitfalls associated with

characterizing uncertainties” [63]. Since 1998, most of the assessments’ graphical predictions of

stratospheric chlorine levels under various emissions scenarios have been presented in terms of

EESC. Even without a sophisticated knowledge of the relevant chemistry, it is easy for anyone

examining these graphs to conclude that the higher the EESC, the worse the expected ozone

depletion. In the most recent assessment [64], the EESC also provides a dramatic visual effect in

these diagrams, which include a grey bar showing when this metric of stratospheric chlorine is

predicted to return to 1980 levels (roughly the time of initial observation of ozone depletion) for

the first time (the period 2034-2060, depending on latitude and emissions scenarios) (Reference

64, 6.28-6.35).


Computer models of ozone depletion are only now finally catching up to the complex

chemistry. The two most recent assessments [64, 65] employed both two-dimensional chemical

transport models and three-dimensional coupled chemistry-climate models. The 2D models have

greater computational efficiency than the more computationally intensive 3D models, which allows them to incorporate more parameters and permits more sensitivity analysis. The 3D models, on

the other hand, are better suited to modeling the polar processes that cause the Antarctic (and

sometimes Arctic) ozone hole. Several different 2D and 3D models are used in the

assessments, and the different models are known to have varying strengths and weaknesses.

There also seem to be a few systematic biases across the models; for example, most of the 3D

models overestimate total column ozone (Reference 64, 6.25).

Conflict and Parameter Uncertainty in Ozone Assessments

In contrast to the complexities associated with structural uncertainties (absence of

heterogeneous chemistry) described above, ozone scientists seem to have had an easier time

dealing with parameter and conflict uncertainty. For example, while scientists were grappling with

the unexpected discovery of the Antarctic ozone hole, forcing them to re-consider factors they’d

previously dismissed as unimportant, another discovery had precipitated a dispute among ozone

experts.

In 1981, NASA scientist Don Heath announced the detection of significant global ozone depletion, measured by solar backscatter ultraviolet (SBUV) instruments on a NASA satellite (the

results were leaked to newspapers before official publication; see reference 66). Some scientists

discounted these reports, however, arguing that Heath’s attempted corrections for known

instrument degradation had not been done properly. In response to this difference of

interpretation, NASA (along with the U.S. National Oceanic and Atmospheric Administration, the

Federal Aviation Administration, the WMO, and UNEP) formed the Ozone Trends Panel. As its

name suggests, the Ozone Trends Panel was charged with analyzing these and other reports of

recent trends in ozone levels (they eventually also took on the task of analyzing the reports of

severe Antarctic ozone loss), and producing a detailed report about them. The executive


summary was released in March 1988; the complete report followed a couple of years later [67].

This report found that while Heath’s results overestimated observed ozone depletion, other

studies had underestimated it. Taken together, the studies indicated that a small but significant

downward trend in global ozone levels was indeed observable. One reason this trend had been

so difficult to detect previously was that scientists were annually and globally averaging their data,

and giving more weight to the less variable (and therefore presumably more reliable) summer

months. However, it turned out that the most significant non-polar ozone depletion was

happening in the winter at high latitudes, a trend that was not revealed until the data were

analyzed on a month-by-month and station-by-station basis. In effect, the creation of the Ozone

Trends Panel was a means to manage, and resolve, conflict uncertainty (Reference 67, p.3).

While the results of the Ozone Trends Panel were not publicly announced until 15 March

1988, one day after the U.S. Senate ratified the Montreal Protocol, the Panel report is widely

recognized as having had a major influence on subsequent amendments to the Protocol,

amendments which called for more stringent regulation of CFCs and (eventually) other ozone

depleting substances (Reference 68, p.124, and Reference 69, pp.153-161).

The international ozone assessments seem also to have found some means to address

parameter uncertainty. For example, the handling of parameter uncertainty can be seen in the

adoption of strategies developed by Stolarski, Douglass, and collaborators, in a series of papers

beginning in the late 1970s [see for example 70, 71]. They advocated the use of Monte Carlo

simulations to determine the sensitivity of atmospheric computer model simulations to variations

in different input parameters. The Monte Carlo method, first adopted in the 1985 assessment,

allowed the assessment authors to combine the uncertainty in several parameters – in this case,

different rates for several different chemical reactions – and determine which ones were

contributing the most to the overall uncertainty. Subsequent assessments of ozone depletion

have continued to apply and refine this approach [note 3].
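
A minimal sketch of this idea: draw several reaction-rate coefficients from assumed uncertainty distributions, propagate them through a toy depletion calculation, and rank each rate's contribution to the overall spread by its correlation with the output. Every rate name, spread, and sensitivity below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Sample three hypothetical reaction-rate coefficients from assumed
# lognormal uncertainty distributions (the spreads are invented).
rates = {
    "k_ClO_O":   rng.lognormal(0.0, 0.30, n),
    "k_ClO_NO2": rng.lognormal(0.0, 0.15, n),
    "k_HCl_OH":  rng.lognormal(0.0, 0.10, n),
}

# Toy depletion response with made-up sensitivities to each rate.
depletion = (5.0 * rates["k_ClO_O"]
             - 2.0 * rates["k_ClO_NO2"]
             + 0.5 * rates["k_HCl_OH"])

# Rank each rate's contribution to the overall spread by its correlation
# with the output -- a common screening step in Monte Carlo sensitivity work.
for name, samples in rates.items():
    r = np.corrcoef(samples, depletion)[0, 1]
    print(f"{name}: correlation with predicted depletion = {r:+.2f}")

print(f"overall 2-sigma spread in depletion: {2 * depletion.std():.2f}")
```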

The international ozone assessments also adopted Bayesian (what-if or scenario)

approaches, most notably in the selection and consideration of a handful of discrete CFC


emissions scenarios. While the first few reports presented the results of several model runs with

different chemical reaction rates but under a single emission scenario [60, 72, 73], beginning with

the 1991 report, the assessments presented the predicted ozone depletion associated with

several different emissions scenarios, corresponding to varying levels of global compliance with

the Montreal Protocol (Reference 74, 8.33).
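
The scenario logic amounts to a small what-if calculation, as in the sketch below; the compliance trajectories, decay rates, and baseline loading are invented placeholders rather than assessment values.

```python
import numpy as np

# Illustrative what-if comparison: project a chlorine-loading proxy under
# hypothetical compliance scenarios. All rates and the baseline are
# invented for illustration, not taken from any assessment.
years = np.arange(2000, 2061, 10)
scenarios = {
    "full compliance":    -0.010,  # assumed fractional change per year
    "partial compliance": -0.004,
    "no protocol":        +0.015,
}
baseline_ppb = 3.0  # assumed chlorine loading in year 2000, ppb

for name, rate in scenarios.items():
    loading = baseline_ppb * np.exp(rate * (years - 2000))
    print(f"{name}: {np.round(loading, 2)}")
```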

Discussion

The history of international ozone assessments suggests two distinct judgments on how

assessments handle uncertainty. One view is that ozone scientists did a poor job managing

structural uncertainty, ignoring heterogeneous processes that were just too hard to measure, until

they were forced to re-examine their assumptions when an unexpected result (the Antarctic

ozone hole) emerged. However, a detailed reading of the scientific literature contemporaneous

with these assessments supports an alternative interpretation. Ozone scientists did not entirely

ignore these hard-to-measure processes: they tested them to the best of their abilities and made

a judgment call. The available data suggested heterogeneous reactions were unimportant, so

scientists understandably did not focus on them. However, when an unexpected phenomenon

emerged in the form of the Antarctic ozone hole, suggesting that some crucial piece of the puzzle

was missing, they went back to work, and found fairly quickly that heterogeneous processes were

the missing piece. Even knowing this, however, they still could not adequately model the relevant

chemistry, and so they developed surrogates (ODP, CLP, and EESC), consistent with the known

science, for the benefit of policy-makers.

Either way, there is a useful lesson here: scientists need to be on the lookout for

processes they have assumed or presumed are unimportant, especially if the data on which they

have based that judgment are scant. The history of ozone assessments also suggests that

surrogates, like the EESC, can be an important tool for scientists to express what is known, even

while acknowledging the complexity of the systems under investigation, and without burying the

uncertainties.


Ozone metrics, particularly the EESC, have become critical in part because assessment

authors recognize that the models still have a long way to go. With all the processes that need to

be incorporated completely and correctly into the models, it is difficult to predict when ozone

levels will return to “normal” [note 4]. With the use of the EESC, however, and on the

understanding that current decreases in ozone depletion are “dominated by changes in EESC,” it

is easier to predict when stratospheric chlorine and equivalents will return to levels which are

characterized as “normal,” with the assumption that, all else being equal, actual ozone levels

won’t lag far behind.

Ozone scientists seem to have grappled more successfully with parameter and conflict

uncertainty, by, for example, adopting Monte Carlo methods to test the sensitivity of their

outcomes to changes in parameters, and creating the Ozone Trends Panel to resolve the dispute

over mid-latitude ozone depletion. However, we are still left with some key questions: First, how

complete are the results of Monte Carlo analysis? The ozone assessments have heavily favored

these in their handling of parameter uncertainties, but as a recent NRC report emphasizes, they

are no panacea – not even for parameter uncertainty alone [75]. Second, what happens when

the creation of an assessment panel fails to resolve the conflict uncertainties it was created to

address? In the history of ozone science, the Ozone Trends Panel was the response to conflict

uncertainty. Scientists had divergent views of the data, and the panel was created to sort these

out, which it did. This leaves open the question of what happens when experts disagree within a

panel – an issue that arose in the most recent attempt of the Intergovernmental Panel on Climate

Change to address the potential for the rapid disintegration of the West Antarctic Ice Sheet.

Assessing the Potential for Rapid Disintegration of the West Antarctic Ice Sheet

The West Antarctic Ice Sheet (WAIS)

Experts have been concerned about the potential for rapid disintegration of WAIS and the

resulting rise in global sea level since the 1960s [76, 77, 78, 79, 80]. WAIS contains about 10% of

all Antarctic ice and is considered to be less stable than the larger East Antarctic ice sheet. If

WAIS disintegrated entirely, global average sea level would rise by about 5 m, a high-


impact event. But experts are unclear as to the details of the mechanisms by which the ice sheet

might disintegrate, how rapidly this might occur, how soon the process might begin and whether it

may have already begun in some places. These are considered great unanswered questions and

challenges to our limited ability to model ice sheets. Many experts have spent their careers

working on this topic without achieving a satisfactory resolution of many questions, and on some

aspects, making little or no progress at all. Yet clearly, this is an area in which society has a

pressing need for answers.

Many scientists presume that more research will lead to less uncertainty, but the question

of WAIS’ contribution to sea level rise has become more uncertain, instead of less, over time--or at least it has in the IPCC’s representation of it. In the Summary for Policymakers (SPM) of the Third

Assessment Report (TAR), the IPCC’s Working Group I included a contribution from WAIS in its

projections for global sea level rise. While acknowledging in the Technical Summary that the

results were “strongly dependent on model assumptions regarding climate change scenarios, ice

dynamics and other factors,” the TAR authors nevertheless provided an estimate of the potential

contribution to sea level rise should WAIS begin to disintegrate: “Current ice dynamic models

project that the West Antarctic ice sheet (WAIS) will contribute no more than 3 mm/yr to sea level

rise over the next thousand years...” (Reference 81, p.77).

In the SPM of the Fourth Assessment Report, however, WGI specifically excluded

potential increases in “dynamical ice loss” from either the Greenland or Antarctic ice sheets from

its tabulated sea level rise estimates for the 21st century. Moreover, there was no separate

judgment presented in the SPM of the potential rate of the WAIS contribution to sea level rise, either over the long term, as there was in the TAR, or during the 21st century. The authors

stated that “dynamical processes related to ice flow not included in current models but suggested

by recent observations could increase the vulnerability of the ice sheets to warming, increasing

future sea level rise. Understanding of these processes is limited and there is no consensus on

their magnitude” (Reference 82, p.17). Numerical estimates of future sea level rise based only on

thermal expansion of ocean water, melting of mountain glaciers, and current ice movement were

presented in the charts in the SPM, although the authors did insert the caveat that “larger values


cannot be excluded, but understanding of these effects is too limited to assess their likelihood or

provide a best estimate or an upper bound for sea level rise.” (Reference 82, p.14). That is to say,

increased discharge of ice from Antarctica and Greenland could raise sea level dramatically and

quickly, but no quantitative estimates of how dramatically or how quickly were offered. Thus, at

least on the face of it, it would appear that the science surrounding WAIS became more

uncertain, rather than less, from the Third to the Fourth Assessments.

If we define scientific progress in terms of greater certainty, then we would have to say

that in recent years, science has gone backward. Yet this is a rather pinched view of progress,

and in fact, it is doubtful that any scientists involved with WAIS do hold this view: even if modeling

capability remains limited (due to structural uncertainty), information from observations has

increased rapidly. Where progress has been absent is in our ability to assess all the new

information and make coherent judgments based upon it, due to a lack of models capable of interpreting these data.

Many people involved consider the IPCC’s treatment appropriate, asserting that it would

have been wrong to offer an estimate that might have been incorrect, in light of significant new

information [83]. Some have even argued that the AR4 authors were brave in deciding to leave

the ice sheets out of numerical sea level rise tabulations for the 21st century in AR4, resisting the

pressure to say something they were not honestly able to say [83]. However, others consider this

a serious omission, insofar as the scientists’ declining to provide an estimate fails to communicate

the knowledge that does exist [83]. The result, they feel, is that the sea level rise estimates that

were offered are virtually useless—in that they leave out a contributory factor that is not only

significant, but that potentially would overwhelm the other contributory factors.

It is nevertheless surprising that a more complete assessment of uncertainty was not

achieved, and other factors may have been at work. Several explanations have been offered.

One, invoked by some of the AR4 authors, is that given the data long at hand, the immense

amount of new data that had recently come to light, and the limited time to assimilate it, WAIS

experts simply could not come to an agreement. New remote sensing data, in particular, showed


the ice sheet to be far more dynamic than previously thought: not just a chunk of solid ice, but rife

with cracks, streams, and lakes—in effect, a complex plumbing system that provided many more

potential mechanisms for rapid ice loss than were previously known. As a result, the physical

picture of WAIS became more complex, and the understanding of which processes are important

to its dynamics remained in flux. The new data suggested that modelers have substantial work

still to do to make the models run like the ice sheet flows. The model estimates for AR4 were

based on simulations that did not take the new information into account—due to time

constraints—and therefore, the authors could not claim that those estimates realistically

approximated WAIS conditions and behavior.

Social and institutional factors may also have contributed to increased uncertainty in AR4.

For example, IPCC chapters have a (largely) new writing team for each assessment. For sea

level rise, new estimates must be made in light of the new data sets, methods, scales, and so on

based on the hundreds of sources assessed. Writing teams are composed of scientists with not only

professional credentials, but also beliefs and opinions on how to manage the massive project of

writing an IPCC chapter, and subjective approaches to risk and uncertainty. While the SPM

represents a consensus of a large group of scientists and receives careful scrutiny for bias and

approval by governments, the underlying technical material upon which the SPMs are based

represents the product of much smaller groups of scientists. In the role of assessors, the writing

team must make judgments about how to organize and convey the available material, so

personal experience and bias inevitably matter.

Another explanation focuses on the organization of the reports: bureaucratic organization

can contribute to propagating as well as mitigating judgment uncertainty. In the Fourth

Assessment Report, the scientific question of WAIS became more fractured than it had previously

been, as a result of a top-down chapter reorganization. In the first, second, and third IPCC

assessment reports, the WAIS question was primarily confined to the chapter on sea level rise.

However, in the Fourth Assessment Report, there was no single chapter on sea level rise, and

thus the WAIS issue was splintered among three chapters in Working Group I: Chapter 4—

Observations: Changes in Snow, Ice and Frozen Ground, Chapter 6—Paleoclimate, and Chapter


10—Global Climate Projections. Two chapters in Working Group II also had responsibility for

assessing WAIS (Chapter 15—Polar Regions and Chapter 19—Assessing Key Vulnerabilities

and the Risk from Climate Change). With different writing groups focusing on different aspects of

the WAIS question, it was difficult to achieve consistency among the chapters (see ref. 84 for a

discussion of how differing views of uncertainty and risk were rationalized between the Working

Groups). Authors informally worked with one another to tell a consistent story about ice sheets

across the chapters, but with more people involved and the cohesion of several chapters at stake,

decisions became more difficult to manage [83]. The bureaucratic decision to reorganize

assessment report chapters resulted in a reorganization of scientific content and a less unified

approach. Writing teams for chapters meet separately, have massive workloads, and would have

to make extra meeting arrangements to collaborate among chapter groups [83]. There is also the simple fact that with more people, there are more views to consider. The organizational

structure of AR4 may not have created a social environment that maximally facilitated

collaborative decision-making.

Discussion

It is clear that a good deal of the explanation for what happened between the third and

fourth assessment reports involves the impact of new data, which made scientists understand

that the ice sheet system was more complex than earlier models allowed, underscoring the

importance of structural model uncertainty. Increased knowledge led to a deeper appreciation of

this natural complexity. The increase in uncertainty estimates was in part an honest reflection of a

deepened understanding of a complex natural system. On the other hand, it also suggests that

the presumption that more research will lead to less uncertainty is not necessarily correct, which

has implications for the value of assessments in public policy.

Data alone, however, are not the whole story, and indeed the question remains as to

whether AR4 could have presented a more complete assessment of the uncertainties arising from

structural model uncertainty affecting ice sheets and sea level rise in general and the WAIS

contribution in particular. Certainly the plethora of articles appearing in the literature attempting to

do so since AR4 was published reveals a certain dissatisfaction in this community [85].


Uncertainty also was generated through the IPCC institution and process itself. Some of this

socially-derived uncertainty, such as changes in how uncertainty bounds are calculated across

assessment reports, can be managed by the institution. Other sources, like the subjectivity of the writing

teams and the bureaucratic reorganization of report chapters, are inherent qualities of the IPCC

institution itself, a bureaucratic dynamism that has both positive and negative aspects.

Differences of opinion among experts fall intellectually into the category of “conflict

uncertainty,” but the word “conflict” belies the eventual agreement of at least the key, influential

authors in this situation. We suggest that a better term might be “judgment uncertainty,” to

suggest the fact that differences in expert judgment, without overt (or even covert) conflict, can

play a significant role in assessment outcomes. In this case, the increase in uncertainty was in

part the result of an apparent convergence among authors, a judgment call of the writing group in

place.

Daniel Sarewitz and colleagues [14, 86] have argued that increased science does not

necessarily help resolve policy conflicts, as more science can be used to defend or attack various

policy positions, and it is also always possible to say that more science is needed [see also 27,

87]. They suggest that the appropriate role for science in complex environmental decision-

making is not to try to fully summarize the scientific information in order to enable the “right” policy

decision (a quixotic goal), but rather to provide ongoing monitoring of how policy choices are

being implemented and whether that implementation is having the desired effect (e.g., ozone

recovery, decreased acid rain, etc.). While we do not entirely agree with this argument, the history

of the scientific study and assessment of WAIS does suggest that the presumption that more

science will lead to less uncertainty--and therefore to clearer policy implications and better policy

choices--is not always supported.

Conclusion

Two cases are admittedly a small sample from the universe of scientific assessment, but

our cases do offer insights into the assessment process and the management and


communication of uncertainty in it. We offer the following comments as a starting point for

discussion.

Our case studies appear to be consistent with the taxonomy presented above, drawn

from existing literature: both ozone and climate change assessments have faced parameter,

conflict, and structural uncertainties. In addition, we have suggested that an additional term--

“judgment uncertainty”--may be useful to underscore that scientists interpreting evidence may

make divergent judgments. Given identical data, honest and competent experts may come to

different conclusions based on their own subjective weighing and evaluation of it, as well as their

epistemic and social affinities [87]. Such judgment uncertainty is not eliminable, but it can

perhaps be better managed and communicated.

Moving beyond taxonomy, however, is important because the ways assessors deal with

uncertainty shed light on the larger process of writing assessments and the social dynamics that shape uncertainty, and call attention to the uneven handling of different types of uncertainty.

Our cases suggest that, at least as a first-order approximation, scientists have done a good job

dealing with parameter uncertainty, and have had more difficulty dealing with structural, conflict,

and judgment uncertainty. We have seen how, for ozone, scientists handled parameter

uncertainty with Monte Carlo simulations; in the IPCC, scientists turned to multi-model

ensembles. However, ozone scientists largely dismissed the potential significance of

heterogeneous reactions until they were forced to reconsider them by the discovery of the ozone

hole (structural uncertainty). WAIS scientists declined to make an estimate of the dynamic ice

loss contribution to sea level rise because the models were found to lack key processes

(structural uncertainty) and authors could not settle on whether or how to represent the resulting

effect of this uncertainty on sea level rise (conflict or judgment uncertainty).

It is perhaps not surprising that scientists working in assessments have done a good job

addressing parameter uncertainty. Scientists have scientific techniques for doing so: sensitivity

analysis, Monte Carlo analysis, multi-model comparisons, and model ensembles. These

techniques draw on the expertise of scientific experts, to address what are understood and


recognized as scientific questions (e.g., the impact of initial and boundary conditions on model

outcomes).

Structural, conflict, and judgment uncertainty are a different matter. Scientists do not have

scientific techniques to address conflict among experts, and the fact that different experts can

come to different conclusions from the same data is awkward for men and women engaged in an

activity that is supposed, at its core, to be logical and objective. It is difficult to discuss

subjectivity in science. Yet it seems clear that scientists must discuss it, or else run the risk that

their results will, at minimum, be less useful than they should be, and at worst, suggest to the

public that the so-called experts don’t know what they are talking about. Moreover, if the

impression arises that scientific experts cannot answer the questions that the public needs

answered, then support for scientific assessments may well founder. To address these forms of

uncertainty, natural scientists may need to collaborate with social scientists, who can offer relevant methods and support.

Structural uncertainty is particularly difficult to address, because by definition one cannot

analyze what one does not know. In the ozone assessments, the structural uncertainty

represented by heterogeneous chemical reactions was essentially ignored, until its inclusion was

forced by the discovery of the Antarctic ozone hole. In the IPCC assessments, the potential for

rapid disintegration of WAIS is a similar structural uncertainty, one that assessors are trying to

grapple with but have not yet found a mechanism for doing so.

As we noted above, the ozone story can be read in pessimistic or optimistic ways. The

pessimistic view notes that scientists missed a crucial mechanism in ozone depletion and only re-

examined their assumptions when forced to by a major, unanticipated and unexplained discovery,

one which they almost missed because their theoretical framework did not allow for it. The

optimistic interpretation notes that once events forced scientists to re-examine their work, they

found the missing mechanism, and did so rather quickly. Moreover, ozone scientists then

developed clever approaches to express the expected effects and outcomes. This leaves us,

however, with the question of whether ozone scientists were especially clever or lucky, in that


they worked out the role of heterogeneous chemistry in just a few years. One could easily

imagine a similar missed process, for example in ice sheet dynamics, which could go

unrecognized for much longer. What mechanisms, if any, does the IPCC have for encouraging

scientists to explore factors that may, like heterogeneous ozone chemistry, have been brushed

aside, but may be important, or even crucial? The history of science could provide help here, in

considering how past scientific communities have tried to explore the domain of “unknown

unknowns” and maintain a stance of openness to the possible significance of previously

neglected matters.

Future climate change assessments must recognize that these several kinds of

uncertainty exist, and that solutions like multi-model testing and Monte Carlo simulations only

address one kind: parameter uncertainty. This is important, but it is not enough. It also seems

important not merely to classify different types of uncertainty, but to actually test the impact of

uncertainty on scientific assessment and policy. For example, it seems to be a common

assumption that decreased uncertainty aids in decision-making. Is this true? People worry about

uncertainties in climate change in part because of the issue's politically sensitive nature, but also out of

sheer concern for getting the science right. By reducing uncertainties in climate change assessments,

proponents argue, policy makers are able to move forward more assuredly. Conversely, high

uncertainties in scientific research may hinder the translation of scientific information into

appropriate, effective social policy--and the suggestion or perception of uncertainty can provide

cover for inaction to parties resistant to change. Yet there is substantial historical

evidence to cast doubt on these presumptions. After all, leading U.S. scientists had reached a

consensus by 1979 that global warming would result from human activities, yet here we are, thirty

years later, and little action has been taken in response [88, 89, 90, 91].

Notes

Note 1: In the late 1970s and early 1980s several countries produced national assessments of ozone depletion. The Montreal Protocol on Substances that Deplete the Ozone Layer was adopted in 1987. It mandated periodic assessments of the science, environmental effects, and technological and economic impacts of ozone depletion (or recovery). These assessments have been published every four years since 1989.


Note 2: Depending on the chemical nature of the particular species, ozone-depleting substances may directly contribute to global warming, or contribute an indirect cooling effect (Reference 60, p. 1283).

Note 3: While Monte Carlo methods are valuable, they may not address all possible uncertainties, particularly those that are not easily captured in a single probability distribution. A recent NRC report [75] recommends combining Monte Carlo simulations with other methods, such as sensitivity analysis and the use of alternative scenarios, to achieve a more rounded treatment of parameter uncertainties.

Note 4: “Normal” is defined in the latest assessment as a return to pre-1980 levels, since anthropogenic ozone depletion was not yet significant in 1980 (Reference 64, Section 6.1).
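To illustrate the combination recommended in Note 3, a one-at-a-time sensitivity sweep can accompany Monte Carlo sampling. The sketch below reuses the same hypothetical toy model and parameter ranges as the Monte Carlo example above; it is illustrative only.

```python
# A minimal sketch of one-at-a-time sensitivity analysis: vary a single
# parameter across its plausible range while holding the others at central
# values. Model and ranges are hypothetical stand-ins.
def ozone_change(k1, k2):
    """Toy stand-in for a photochemical model (see Monte Carlo sketch)."""
    return -10.0 * (k1 / 1.0e-11) * (k2 / 5.0e-13) ** 0.5

central = {"k1": 1.0e-11, "k2": 5.0e-13}
ranges = {"k1": (0.5e-11, 2.0e-11), "k2": (2.5e-13, 1.0e-12)}

for name, (low, high) in ranges.items():
    args_lo = dict(central, **{name: low})   # all central except one low
    args_hi = dict(central, **{name: high})  # all central except one high
    span = abs(ozone_change(**args_hi) - ozone_change(**args_lo))
    print(f"{name}: outcome varies by {span:.1f} percentage points")
```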

References

1. Report. A report on the Royal Commission on Vaccination. Nature 1896; 55: 15-17.

2. Royal Commission. A Report On Vaccination and Its Results, Based On the Evidence Taken by the Royal Commission During the Years 1889-1897. Vol. 1: The Text of the Commission Report. 1898. New Sydenham Society, London.

3. National Academy of Sciences. Our Study Process: Insuring Independent, Objective Advice. 2009. The National Academy of Sciences, Washington, DC.

4. Social Learning Group. Learning to Manage Global Environmental Risks, Vols. 1 and 2: A Comparative History of Social Responses to Climate Change, Ozone Depletion and Acid Rain, edited by Clark, WC, Jaeger, J, van Eijndhoven, J and Dickson, NM. 2001. MIT Press, Cambridge, MA.

5. Jasanoff, S. The Fifth Branch: Science Advisers as Policymakers. 1990. Harvard University Press, Cambridge, MA.

6. Jasanoff, S. Acceptable evidence in a pluralistic society pp 29-47 in Mayo, DG and Hollander, RD, eds. Acceptable Evidence: Science and Values in Risk Management. 1994. Oxford University Press, Oxford.

7. Jasanoff, S. Designs on Nature: Science and Democracy in Europe and the United States. 2005. Princeton University Press, Princeton.

8. Science and Policy Associates. Joint Climate Project to Address Decision Makers Uncertainties: Report of Findings. 1992. Science and Policy Associates, Washington, D.C.

9. Mayo, DG, and Hollander, RD, eds. Acceptable Evidence: Science and Values in Risk Management. 1994. Oxford University Press, Oxford.


10. Bernabo, JC. Communication among scientists, decision-makers and society: developing policy-relevant global climate change research pp 103-117 in S. Zwerver et al, ed. Climate Change Research: Evaluation and Policy Implications. 1995. Elsevier, Amsterdam.

11. Bernabo, JC. Improving integrated assessments for applications to decision making pp 183-197 in T Schneider, ed. Air Pollution in the 21st Century: Priority Issues and Policy, Studies in Environmental Science 72. 1998. Elsevier, Amsterdam.

12. Patt, AG. Extreme outcomes: the strategic treatment of low probability events in scientific assessments Risk, Decision and Policy 1999; 4:1-15.

13. Patt, AG. Assessing model-based and conflict-based uncertainty Global Environmental Change 2007; 17:37-46.

14. Sarewitz, D. How science makes environmental controversies worse Environmental Science & Policy 2004; 7:385-403.

15. Farrell, AE, and Jager, J eds. Designing Processes for the Effective Use of Science in Decisionmaking. 2006. RFF Press, Washington, D.C.

16. Douglas, Mary. Purity and Danger: An Analysis of the Concepts of Pollution and Taboo. 2002 (original publication date 1966). Routledge Classics, London.

17. Douglas, M, and Wildavsky, A. Risk and Culture: Essays on the Selection of Technological and Environmental Dangers. 1983. University of California Press, Berkeley.

18. Jasanoff, S. Risk Management and Political Culture: A Comparative Study of Science in the Policy Context. 1986. Russell Sage Foundation, New York.

19. Shrader-Frechette, K. Risk and Rationality: Philosophical Foundations for Populist Reforms. 1991. University of California Press, Berkeley.

20. Beck, U. Risk Society: Towards a New Modernity. 1992. Sage Publications Ltd, Thousand Oaks.

21. Slovic, P, ed. The Perception of Risk. 2000. Earthscan Publications Ltd, Virginia.

22. Morgan, GM, Fischhoff, B, Bostrom, A, and Atman, CJ. Risk Communication: A Mental Models Approach. 2001. Cambridge University Press, Cambridge.

23. Morgan, GM, and Henrion, M. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. 1990. Cambridge University Press, Cambridge.


24. Sunstein, CR. Risk and Reason: Safety, Law, and the Environment. 2002. Cambridge University Press, Cambridge.

25. Pidgeon, NR, Kasperson, E, and Slovic, P eds. The Social Amplification of Risk. 2003. Cambridge University Press, Cambridge.

26. Revesz, RL, and Livermore, MA. Retaking Rationality: How Cost-Benefit Analysis Can Better Protect the Environment and Our Health. 2008. Oxford University Press, Oxford.

27. Oppenheimer, M, O’Neill, BC and Webster, M. Negative learning Climatic Change 2008; 89:155-172.

28. Lempert, R, Nakicenovic, N, Sarewitz, D and Schlesinger, M. Characterizing Climate-Change Uncertainties for Decision-Makers: An editorial essay Climatic Change 2004; 65:1-9.

29. Moss, RH, and Schneider, SH. Uncertainties in the IPCC TAR: recommendations to lead authors for more consistent assessment and reporting in Pachauri, R, Taniguchi, T, and Tanaka, K, eds. Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC. 2000. World Meteorological Organization, Geneva.

30. Betz, G. Probabilities in climate policy advice: a critical comment. Climatic Change 2007; 85:1-9.

31. Kandlikar, M, Risbey, J, and Dessai, S. Representing and communicating deep uncertainty in climate-change assessments Comptes Rendus Geoscience 2005; 337:443-455.

32. Walker, WE, Harremoes, P, Rotmans, J, Van Der Sluijs, JP, Van Asselt, MBA, Janssen P, and Krayer Von Krauss, MP. Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support, Integrated Assessment 2003; 4:5–17.

33. Rahmstorf, S, Cazenave, A, Church, JA, Hansen, JE, Keeling, RF, Parker, DE, and Somerville, RCJ. Recent Climate Observations Compared to Projections Science 2007; 316:709.

34. Raupach, MR, Marland, G, Ciais, P, Le Quere, C, Canadell, JG, Klepper, G, Field, CB. Global and regional drivers of accelerating CO2 emissions Proceedings of the National Academy of Sciences of the United States of America 2007; 104:10288-10293.

35. Pielke, RA. Climate predictions and observations Nature Geoscience 2008; 1:206.

36. Reilly, J, Stone, PH, Forest, CE, Webster, MD, Jacoby, HD and Prinn, RG. Uncertainty and climate change assessments Science 2001; 293:430-433.


37. Swart, R, Bernstein, L, Ha-Duong, M, and Petersen, A. Agreeing to disagree: uncertainty management in assessing climate change, impacts and responses by the IPCC Climatic Change 2009; 92:1-29.

38. Webster, M. Communicating Climate Change Uncertainty to Policy-Makers and the Public: An editorial comment Climatic Change 2003; 61:1-8.

39. Webster, M. Uncertainty and the IPCC. An editorial comment Climatic Change 2009; 92:37-40.

40. Jackson, CS, Sen, MK, Huerta, G, Deng, Y, and Bowman, KP. Error reduction and convergence in climate prediction Journal of Climate 2008; 21:6698-6709.

41. Reifen, C, and Toumi, R. Climate projections: Past performance no guarantee of future skill? Geophysical Research Letters 2009; 36: 3704.

42. Funtowicz, SO, and Ravetz, JR. Three types of risk assessment and the emergence of post-normal science pp 251-274 in Krimsky, S, and Golding, D, eds. Social Theories of Risk. 1992. Praeger, Westport, CT.

43. Goldstein, M, and Rougier, J. Reified Bayesian modeling and inference for physical systems Journal of Statistical Planning and Inference 2009; 139:1221-1239.

44. Lavine, M, Hegerl, GC, and Lozier, S. Discussion of reified Bayesian modeling and inference for physical systems by Michael Goldstein and Jonathan Rougier Journal of Statistical Planning and Inference 2009; 139:1243-1245.

45. Kriegler, E, Hall, JW, Held, H, Dawson, R, and Schellnhuber, HJ. Imprecise probability assessment of tipping points in the climate system Proceedings of the National Academy of Sciences of the United States of America 2009; 106:5041-5046.

46. Manning, M. The Difficulty of Communicating Uncertainty Climatic Change 2003; 61:9-16.

47. Morgan, GM, Cantor, R, Clark, WC, Fisher, A, Jacoby, HD, Janetos, AC, Kinzig, AP, Melillo, J, Street, RB, and Wilbanks, TJ. Learning from the U.S. Assessment of Climate Change Impacts Environmental Science and Technology. 2005; 39: 9023-9032.

48. Whitehead, AN. Process and Reality. 1929. Harper Brothers, New York.

49. Rowland, FS. Chlorofluorocarbons and the depletion of stratospheric ozone American Scientist 1989; 77:36-45.


50. Solomon, S. Progress toward a quantitative understanding of Antarctic ozone depletion Nature 1990; 347: 347-354.

51. Christie, M. The Ozone Layer: A Philosophy of Science Perspective. 2001. Cambridge University Press, Cambridge.

52. National Academy of Sciences, Panel on Atmospheric Chemistry. Halocarbons: Effects on Stratospheric Ozone. 1976. National Academy Press, Washington, D.C.

53. Cadle, RD, Crutzen, P, and Ehhalt, D. Heterogeneous chemical reactions in the stratosphere Journal of Geophysical Research 1975; 80:3381-3385.

54. Krishnan, PN, and Salomon, RE. Solubility of hydrogen chloride in ice Journal of Physical Chemistry 1969; 73: 2680-2683.

55. McElroy, MB, Salawitch, RJ, Wofsy, SC, and Logan, JA. Reductions of Antarctic ozone due to synergistic interactions of chlorine and bromine Nature 1986; 321:759-762.

56. Molina, MJ, Tso, T, Molina, LT, Wang, FC. Antarctic stratospheric chemistry of chlorine nitrate, hydrogen chloride, and ice: release of active chlorine Science 1987; 238:1253-1257.

57. Rowland, FS, Sato, H, Khwaja, H, and Elliott, SM. The hydrolysis of chlorine nitrate and its possible atmospheric significance Journal of Physical Chemistry 1986; 90:1985-1988.

58. Solomon, S, Garcia, RR, Rowland, FS, and Wuebbles, DJ. On the depletion of Antarctic ozone Nature 1986; 321:755-758.

59. Wuebbles, DJ. Chlorocarbon emission scenarios: potential impact on stratospheric ozone Journal of Geophysical Research 1983; 88:1433-1443.

60. World Meteorological Organization. Scientific Assessment of Stratospheric Ozone: 1989. 2 Vols. Global Ozone Research and Monitoring Project – Report No. 20. 1989. WMO, Geneva, Switzerland.

61. Hoffman, JS, and Gibbs, MJ. Future Concentrations of Stratospheric Chlorine and Bromine. EPA Report 400/1-88/005. 1988. U.S. Environmental Protection Agency, Office of Air and Radiation, Washington, D.C.

62. Daniel, JS, Solomon, S, and Albritton, DL. On the evaluation of halocarbon radiative forcing and global warming potentials. Journal of Geophysical Research 1995; 100:1271-1285.

63. National Academy of Sciences. Analysis of Global Change Assessments: Lessons Learned. 2007. National Academies Press, Washington, D.C.


64. World Meteorological Organization. Scientific Assessment of Ozone Depletion: 2006. Global Ozone Research and Monitoring Project – Report No. 50. 2006. WMO, Geneva, Switzerland.

65. World Meteorological Organization. Scientific Assessment of Ozone Depletion: 2002. Global Ozone Research and Monitoring Project – Report No. 47. 2002. WMO, Geneva, Switzerland.

66. Norman, C. Satellite data indicate ozone depletion Science 1981; 213:1088-1089.

67. World Meteorological Organization. Report of the International Ozone Trends Panel 1988. 2 Vols. Global Ozone Research and Monitoring Project – Report No. 18. 1988. WMO, Geneva, Switzerland.

68. Litfin, KT. Ozone Discourses: Science and Politics in Global Environmental Cooperation. 1994. Columbia University Press, New York.

69. Parson, EA. Protecting the Ozone Layer: Science and Strategy. 2003. Oxford University Press, Oxford.

70. Stolarski, RS, Butler, DM, and Rundel, RD. Uncertainty propagation in a stratospheric model 2: Monte Carlo analysis of imprecisions due to reaction rates Journal of Geophysical Research 1978; 83:3074-3078.

71. Stolarski, RS, and Douglass, AR. Parameterization of the photochemistry of stratospheric ozone including catalytic loss processes Journal of Geophysical Research 1985; 90:10709-10718.

72. World Meteorological Organization. The Stratosphere 1981: Theory and Measurements. Global Ozone Research and Monitoring Project – Report No. 11. 1981. WMO, Geneva, Switzerland.

73. World Meteorological Organization. Atmospheric Ozone 1985: Assessment of Our Understanding of the Processes Controlling its Present Distribution and Change. 3 Vols. Global Ozone Research and Monitoring Project – Report No. 16. 1986. WMO, Geneva, Switzerland.

74. World Meteorological Organization. Scientific Assessment of Ozone Depletion: 1991. Global Ozone Research and Monitoring Project – Report No. 25. 1991. WMO, Geneva, Switzerland.

75. Committee on Models in the Regulatory Decision Process. Models in Environmental Regulatory Decision Making. 2007. National Academy of Sciences-National Research Council, Board on Environmental Studies and Toxicology, Washington, D.C.


76. Carbon Dioxide Assessment Committee (CDAC). Increasing Carbon Dioxide and the West Antarctic Ice Sheet: Notes on an Informal Workshop. 1982. Scripps Institution of Oceanography.

77. Mercer, JH. Antarctic ice and Sangamon sea level International Association of Scientific Hydrology Symposia 1968; 79: 217–225.

78. Mercer, JH. West Antarctic Ice Sheet and CO2 greenhouse effect: a threat of disaster Nature 1978; 271: 321–325.

79. Carbon Dioxide Assessment Committee (CDAC). Changing Climate: Report of the Carbon Dioxide Assessment Committee. 1983. National Academy Press, Washington, D.C.

80. President's Science Advisory Committee (PSAC). Restoring the Quality of Our Environment. Report of the Environmental Pollution Panel. 1965. The White House, Washington, D.C.

81. Intergovernmental Panel on Climate Change (IPCC). Climate Change 2001: The Scientific Basis. Edited by Houghton, JT, Ding, Y, Griggs, DJ, Noguer, M, van der Linden, PJ, Dai, X, Maskell, K and Johnson, CA. 2001. Cambridge University Press, Cambridge.

82. Intergovernmental Panel on Climate Change (IPCC). Climate Change 2007: The Physical Science Basis. Edited by Solomon, S, Qin, D, Manning, M, Chen, Z, Marquis, M, Averyt, KB, Tignor, M and Miller, HL. 2007. Cambridge University Press, Cambridge.

83. O’Reilly, J. The Rapid Disintegration of Predictions: Climate Change, Bureaucracy, and the West Antarctic Ice Sheet. 2009. Paper presented at the Society for the Social Studies of Science meeting, Washington, D.C.

84. Schneider, SH. Science as a Contact Sport: Inside the Battle to Save Earth's Climate. 2009. National Geographic, Washington, D.C.

85. Milne, GA, Gehrels, WR, Hughes, CW, and Tamisiea, ME. Identifying the causes of sea-level change Nature Geoscience 2009; published online June 14.

86. Herrick, C and Sarewitz, D. Ex Post Evaluation: A More Effective Role for Scientific Assessments in Environmental Policy Science, Technology, and Human Values 2000; 25:309-331.

87. Oreskes, N. The Rejection of Continental Drift. 1999. Oxford University Press, Oxford.

88. Oreskes, N and Conway, EM. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. In Press. Bloomsbury Press, New York.


89. National Research Council (NRC). Carbon Dioxide and Climate: A Scientific Assessment. National Research Council, Ad Hoc Study Group on Carbon Dioxide and Climate. 1979. National Academy Press, Washington, D.C.

90. Weart, S. The Discovery of Global Warming. 2004. Harvard University Press, Cambridge.

91. Oreskes, N. The Long Consensus on Climate Change The Washington Post 2007; February 1.


Cross-References

CC-0019: Probabilistic approaches to assessing uncertainties and risks of climate change impacts

CC-0177: Characterising uncertainty in expert panel assessments

CC-0180: Approaches to uncertainty management in integrated assessment modelling

Supplementary Information

Table 1. Metrics for estimating ozone depletion

Definitions:

T = lifetime (in years)

M = molecular weight

n_X = number of chlorine (or bromine) atoms in halocarbon X

Cl_{t-3} = stratospheric halocarbon mixing ratio at time t (with a 3-year lag for transport)

FC = fractional release of Cl (or Br) in the stratosphere (when calculating with bromine, there is also a constant that accounts for bromine's greater ability to destroy ozone relative to chlorine's)



Ozone Depletion Potential (ODP)

What it calculates: How much ozone depletion a fixed quantity of a given substance will cause, over its entire atmospheric lifetime (at steady-state emission levels), relative to the same amount of CFC-11.

Formula: $\mathrm{ODP} = \dfrac{\Delta \mathrm{O}_3\ (\text{global, due to } X)}{\Delta \mathrm{O}_3\ (\text{global, due to CFC-11})}$

Interpretation of results: The calculated ODP must be combined with the actual quantities of this and other ODSs released to estimate actual ozone depletion.

Chlorine Loading Potential (CLP) or Bromine Loading Potential (BLP)

What it calculates: How much chlorine (or bromine) a fixed quantity of a given substance will deliver from the troposphere to the stratosphere, relative to the amount of chlorine delivered by the same quantity of CFC-11.

Formula: $\mathrm{CLP} = \dfrac{T_X \times M_{\text{CFC-11}} \times n_X}{T_{\text{CFC-11}} \times M_X \times 3}$

Interpretation of results: The CLPs for all ODSs must be combined with the actual quantities of all ODSs released to estimate actual ozone depletion.

Equivalent Effective Stratospheric Chlorine (EESC)

What it calculates: The total number of chlorine (and bromine) atoms delivered to the stratosphere over a given time period t.

Formula: $\mathrm{EESC} = \sum_{X} n_X \times \mathrm{Cl}_{t-3} \times \dfrac{FC_X}{FC_{\text{CFC-11}}}$

Interpretation of results: Gives the total amount of chlorine (and bromine) delivered to the stratosphere by all ODSs; allows easy calculation of the GWP contributed by this chlorine (and bromine).
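The Table 1 formulas translate directly into code. The sketch below applies them to hypothetical inputs: the default lifetime and molecular weight for CFC-11 are approximate literature values, the fractional-release figures and the example species are illustrative placeholders, and the factor of 3 in the CLP denominator reflects the three chlorine atoms carried by CFC-11.

```python
# A minimal sketch of the CLP and EESC formulas from Table 1. All inputs
# are illustrative placeholders, not assessed values.
def clp(T_x, M_x, n_x, T_cfc11=45.0, M_cfc11=137.4):
    """Chlorine Loading Potential relative to CFC-11.
    T = lifetime (years), M = molecular weight, n_x = Cl atoms in X;
    the 3 in the denominator is the number of Cl atoms in CFC-11."""
    return (T_x * M_cfc11 * n_x) / (T_cfc11 * M_x * 3)

def eesc(species, FC_cfc11=0.47):
    """Equivalent Effective Stratospheric Chlorine, summed over ODSs.
    Each entry: (n_x, Cl mixing ratio at t-3, fractional release FC_x).
    FC_cfc11 is a hypothetical fractional release for CFC-11."""
    return sum(n * cl * (fc / FC_cfc11) for n, cl, fc in species)

# Example: a CFC-12-like species (illustrative lifetime/weight only).
print(f"CLP = {clp(T_x=100.0, M_x=120.9, n_x=2):.2f}")
print(f"EESC = {eesc([(3, 0.25, 0.47), (2, 0.50, 0.23)]):.2f} (arbitrary units)")
```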