

EXTREME RISK INITIATIVE —NYU SCHOOL OF ENGINEERING WORKING PAPER SERIES

The Precautionary Principle (with Application to the Genetic Modification of Organisms)

Nassim Nicholas Taleb*, Rupert Read§, Raphael Douady‡, Joseph Norman†, Yaneer Bar-Yam†

* School of Engineering, New York University
† New England Complex Systems Institute
‡ Institute of Mathematics and Theoretical Physics, C.N.R.S., Paris
§ School of Philosophy, University of East Anglia

Abstract—The precautionary principle (PP) states that if an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety. Under these conditions, the burden of proof about absence of harm falls on those proposing an action, not those opposing it. The PP is intended to deal with uncertainty and risk in cases where the absence of evidence and the incompleteness of scientific knowledge carry profound implications and in the presence of risks of "black swans": unforeseen and unforeseeable events of extreme consequence.

This non-naive version of the PP allows us to avoid paranoia and paralysis by confining precaution to specific domains and problems. Here we formalize the PP, placing it within the statistical and probabilistic structure of "ruin" problems, in which a system is at risk of total failure, and in place of risk we use a formal "fragility" based approach. In these problems, what appear to be small and reasonable risks accumulate inevitably to certain irreversible harm. Traditional cost-benefit analyses, which seek to quantitatively weigh outcomes to determine the best policy option, do not apply, as outcomes may have infinite costs. Even high-benefit, high-probability outcomes do not outweigh the existence of low probability, infinite cost options—i.e. ruin. Uncertainties result in sensitivity analyses that are not mathematically well behaved. The PP is increasingly relevant due to man-made dependencies that propagate impacts of policies across the globe. In contrast, absent humanity the biosphere engages in natural experiments due to random variations with only local impacts.

Our analysis makes clear that the PP is essential for a limited set of contexts and can be used to justify only a limited set of actions. We discuss the implications for nuclear energy and GMOs. GMOs represent a public risk of global harm, while harm from nuclear energy is comparatively limited and better characterized. PP should be used to prescribe severe limits on GMOs.

September 4, 2014

1 INTRODUCTION

The aim of the precautionary principle (PP) is to prevent decision makers from putting society as a whole—or a significant segment of it—at risk from the unexpected side effects of a certain type of decision. The PP states that if an action or policy has a suspected risk of causing severe harm to the public domain (such as general health or the environment), and in the absence of scientific near-certainty about the safety of the action, the burden of proof about absence of harm falls on those proposing the action. It is meant to deal with effects of absence of evidence and the incompleteness of scientific knowledge in some risky domains.1

Corresponding author: N N Taleb, email [email protected]

We believe that the PP should be invoked only in extreme situations: when the potential harm is systemic (rather than localized) and the consequences can involve total irreversible ruin, such as the extinction of human beings or of all life on the planet.

The aim of this paper is to place the concept of precaution within a formal statistical and risk-analysis structure, grounding it in probability theory and the properties of complex systems. Our aim is to allow decision makers to discern which circumstances require the use of the PP and in which cases invoking the PP is inappropriate.

2 DECISION MAKING AND TYPES OF RISK

Taking risks is necessary for individuals as well as for decision makers affecting the functioning and advancement of society. Decision and policy makers tend to assume all risks are created equal. This is not the case. Taking into account the structure of randomness in a given system can have a dramatic effect on which kinds of actions are, or are not, justified. Two kinds of potential harm must be considered when determining an appropriate approach to the role of risk in decision-making: 1) localized non-spreading impacts and 2) propagating impacts resulting in irreversible and widespread damage.

1. The Rio Declaration on Environment and Development presents it as follows: "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."




Traditional decision-making strategies focus on the case where harm is localized and risk is easy to calculate from past data. Under these circumstances, cost-benefit analyses and mitigation techniques are appropriate. The potential harm from miscalculation is bounded.

On the other hand, the possibility of irreversible and widespread damage raises different questions about the nature of decision making and what risks can be reasonably taken. This is the domain of the PP.

Criticisms are often levied against those who argue for caution, portraying them as unreasonable and possibly even paranoid. Those who raise such criticisms are implicitly or explicitly advocating for a cost-benefit analysis, and necessarily so. Critics of the PP have also expressed concern that it will be applied in an overreaching manner, eliminating the ability to take reasonable risks that are needed for individual or societal gains. While indiscriminate use of the PP might constrain appropriate risk-taking, at the same time one can also make the error of suspending the PP in cases when it is vital.

Hence, a non-naive view of the precautionary principle is one in which it is only invoked when necessary, and only to prevent a certain variety of very precisely defined risks based on distinctive probabilistic structures. But, also, in such a view, the PP should never be omitted when needed.

The remainder of this section will outline the difference between the naive and non-naive approaches.

2.1 What we mean by a non-naive PP

Risk aversion and risk-seeking are both well-studied human behaviors. However, it is essential to distinguish the PP so that it is neither used naively to justify any act of caution, nor dismissed by those who wish to court risks for themselves or others.

The PP is intended to make decisions that ensure survival when statistical evidence is limited—because it has not had time to show up—by focusing on the adverse effects of "absence of evidence."

Table 1 encapsulates the central idea of the paper and shows the differences between decisions with a risk of harm (warranting regular risk management techniques) and decisions with a risk of total ruin (warranting the PP).

2.2 Harm vs. Ruin: When the PP is necessary

The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called "ruin" problems [1]. A ruin problem is one where outcomes of risks have a non-zero probability of resulting in unrecoverable losses. An often-cited illustrative case is that of a gambler who loses his entire fortune and so cannot return to the game. In biology, an example would be a species that has gone extinct. For nature, "ruin" is ecocide: an irreversible termination of life at some scale, which could be planetwide. The large majority of variations that

Standard Risk Management        Precautionary Approach
------------------------        ----------------------
localized harm                  systemic ruin
nuanced cost-benefit            avoid at all costs
statistical                     fragility based
statistical probabilistic       non-statistical
variations                      ruin
convergent probabilities        divergent probabilities
recoverable                     irreversible
independent factors             interconnected factors
evidence based                  precautionary
thin tails                      fat tails
bottom-up, tinkering            top-down engineered
evolved                         human-made

Table 1: Two different types of risk and their respective characteristics compared

Figure 1: Why Ruin is not a Renewable Resource. [Plot: probability of ruin (0 to 1.0) against number of exposures (0 to 10,000).] No matter how small the probability, in time, something bound to hit the ruin barrier is all but guaranteed to hit it.

occur within a system, even drastic ones, fundamentally differ from ruin problems: a system that achieves ruin cannot recover. As long as the instance is bounded, e.g. a gambler who can work to gain additional resources, there may be some hope of reversing the misfortune. This is not the case when the harm is global.

Our concern is with public policy. While an individual may be advised to not "bet the farm," whether or not he does so is generally a matter of individual preferences. Policy makers have a responsibility to avoid catastrophic harm for society as a whole; the focus is on the aggregate, not at the level of single individuals, and on global-systemic, not idiosyncratic, harm. This is the domain of collective "ruin" problems.

Precautionary considerations are relevant much more broadly than to ruin problems. For example, there was a precautionary case against cigarettes long before there was an open-and-shut evidence-based case against them. Our point is that the PP is a decisive consideration for ruin problems, while in a broader context precaution is not decisive and can be balanced against other considerations.

3 WHY RUIN IS SERIOUS BUSINESS

The risk of ruin is not sustainable. By the ruin theorems, if you incur a tiny probability of ruin as a "one-off" risk, survive it, then do it again (another "one-off" deal), you will eventually go bust with probability 1. Confusion arises because it may seem that the "one-off" risk is reasonable, but that also means that an additional one is reasonable. This can be quantified by recognizing that the probability of ruin approaches 1 as the number of exposures to individually small risks, say one in ten thousand, increases (see Fig. 1). For this reason a strategy of repeated risk taking is not sustainable and we must consider any genuine risk of total ruin as if it were inevitable.
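The compounding of "one-off" risks can be sketched numerically. The following fragment is only an illustration of the ruin theorems cited above; the one-in-ten-thousand figure comes from the text, while the function name and everything else are our construction:

```python
# Illustrative sketch: cumulative probability of ruin after n independent
# exposures, each carrying a small "one-off" ruin probability p.
def ruin_probability(p: float, n: int) -> float:
    """P(at least one ruin event in n independent exposures)."""
    return 1.0 - (1.0 - p) ** n

p = 1e-4  # "one in ten thousand" per exposure, as in the text
for n in (1, 1_000, 10_000, 100_000):
    print(n, ruin_probability(p, n))
```

A per-exposure risk of one in ten thousand already gives roughly a 63% chance of ruin after ten thousand exposures, and near-certainty after a hundred thousand, which is the sense in which repeated risk taking makes ruin inevitable.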

The good news is that some classes of risk can be deemed to be practically of probability zero: the earth survived trillions of natural variations daily over 3 billion years, otherwise we would not be here. By recognizing that normal risks are not in the category of ruin problems, we recognize also that it is not necessary or even normal to take risks that involve a possibility of ruin.
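The survival argument can be given a rough quantitative form using the standard "rule of three" from statistics: when n independent trials produce zero occurrences of an event, an approximate 95% upper confidence bound on its per-trial probability is 3/n. This framing is ours, not the paper's, and the trial count below is a deliberately conservative illustration:

```python
# Rule of three: with zero ruin events observed in n independent trials,
# an approximate 95% upper bound on the per-trial probability is 3/n.
def rule_of_three_bound(n_trials: float) -> float:
    return 3.0 / n_trials

# Counting just one natural variation per day over 3 billion years
# (far fewer than the "trillions daily" in the text) already bounds the
# per-variation ruin probability near one in a trillion.
n = 3e9 * 365.0
print(rule_of_three_bound(n))
```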

3.1 PP is not Risk Management

It is important to contrast, and not conflate, the PP and risk management. Risk management involves various strategies to make decisions based upon accounting for the effects of positive and negative outcomes and their probabilities, as well as seeking means to mitigate harm and offset losses. Risk management strategies are important for decision-making when ruin is not at stake. However, the only risk management strategy of importance in the case of the PP is ensuring that actions which can result in ruin are not taken, or equivalently, modifying potential choices of action so that ruin is not one of the possible outcomes.

More generally, we can identify three layers associated with strategies for dealing with uncertainty and risk. The first layer is the PP, which addresses cases that involve potential global harm, whether probabilities are uncertain or known and whether they are large or small. The second is risk management, which addresses the case of known probabilities of well-defined, bounded gains and losses. The third is risk aversion or risk-seeking behavior, which reflects quite generally the role of personal preferences for individual risks when uncertainty is present.

3.2 Ruin is forever

A way to formalize the ruin problem in terms of the destructive consequences of actions identifies harm not as the amount of destruction, but as a measure of the integrated level of destruction over the time it persists. When the impact of harm extends to all future times, i.e. forever, then the harm is infinite. When the harm is infinite, the product of any non-zero probability and the harm is also infinite, and it cannot be balanced against any potential gains, which are necessarily finite. This strategy for evaluating harm as involving the duration of destruction can be used for localized harms for better assessment in risk management. Our focus

Figure 2: A variety of temporal states for a process subjected to an absorbing barrier. [Plot: state against time, with the absorbing barrier marked.] Once the absorbing barrier is hit, the process terminates, regardless of its future potential.

here is on the case where destruction is complete for a system or an irreplaceable aspect of a system.

Figure 2 shows ruin as an absorbing barrier, a point that does not allow recovery.
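The absorbing-barrier behavior of Figure 2 can be mimicked with a toy simulation. The random-walk model below is our illustrative construction, not the paper's:

```python
import random

# Toy absorbing-barrier process: a Gaussian random walk started in a safe
# state is terminated permanently the first time it touches the barrier at 0.
def fraction_absorbed(start: float, steps: int, trials: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(trials):
        state = start
        for _ in range(steps):
            state += rng.gauss(0.0, 1.0)
            if state <= 0.0:  # the barrier is absorbing: no recovery
                absorbed += 1
                break
    return absorbed / trials

# The longer the process is allowed to run, the larger the fraction of
# paths that has been absorbed, echoing Figure 1 as well.
short_run = fraction_absorbed(start=5.0, steps=20, trials=2000)
long_run = fraction_absorbed(start=5.0, steps=500, trials=2000)
```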

For example, for humanity global devastation cannot be measured on a scale in which harm is proportional to the level of devastation. The harm due to complete destruction is not the same as 10 times the destruction of 1/10 of the system. As the percentage of destruction approaches 100%, the assessment of harm diverges to infinity (instead of converging to a particular number) due to the value placed on a future that ceases to exist.

Because the "cost" of ruin is effectively infinite, cost-benefit analysis (in which the potential harm and potential gain are multiplied by their probabilities and weighed against each other) is no longer a useful paradigm. Even if probabilities are expected to be zero but have a non-zero uncertainty, then a sensitivity analysis that considers the impact of that uncertainty results in infinities as well. The potential harm is so substantial that everything else in the equation ceases to matter. In this case, we must do everything we can to avoid the catastrophe.
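The divergence can be made concrete with a stylized harm curve. The specific functional form below is our assumption for illustration only; the argument does not depend on it:

```python
# Stylized harm curve: encode "harm diverges as the destroyed fraction d
# approaches 100%" as harm(d) = d / (1 - d). Illustrative assumption only.
def harm(d: float) -> float:
    return d / (1.0 - d)

# Harm is not proportional to the destroyed fraction: complete destruction
# is not 10 times the destruction of 1/10 of the system.
print(harm(0.10))        # destroying a tenth: modest, recoverable harm
print(10 * harm(0.10))   # what a linear scale would predict for total loss
print(harm(0.999))       # near-total destruction: harm grows without bound
```

Under any such curve, the product of harm with even a tiny probability eventually exceeds every finite benefit, which is why the cost-benefit paradigm breaks down.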

4 SCIENTIFIC METHODS AND THE PP

How well can we know either the potential consequences of policies or their probabilities? What does science say about uncertainty? To be helpful in policy decisions, science has to encompass not just expectations of potential benefit and harm but also their probability and uncertainty.

Just as the imperative of analysis of decision-making changes when there is infinite harm for a small, non-zero risk, so is there a fundamental change in the ability to apply scientific methods to the evaluation of that harm. This influences the way we evaluate both the possibility of and the risk associated with ruin.



The idea of precaution is the avoidance of adverse consequences. This is qualitatively different from the idea of evidentiary action (from statistics). In the case of the PP, evidence may come too late. The non-naive PP bridges the gap between precaution and evidentiary action using the ability to evaluate the difference between local and global risks.

4.1 Precautionary vs. Evidentiary Action

Statistical-evidentiary approaches to risk analysis and mitigation count the frequency of past events (robust statistics), or calibrate parameters of statistical distributions to generate probabilities of future events (parametric approach), or both. Experimental evidentiary methods follow the model of medical trials, computing probabilities of harm from side effects of drugs or interventions by observing the reactions in a variety of animal and human models. Generally they assume that the risk itself (i.e. the nature of harm and its probability) is adequately determined by available information. However, the level of risk may be hard to gauge, as its probability may be uncertain, and, in the case of potential infinite harm, an uncertainty that allows for a non-zero probability results in infinities, so that the problem is ill-defined mathematically.
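The sensitivity of the parametric approach to calibration error can be shown in a few lines. The Pareto form below is our illustrative choice of a calibrated distribution, not one the paper commits to:

```python
# Under an illustrative Pareto tail, P(X > k) = k ** (-alpha) for k >= 1.
# Modest uncertainty in the calibrated exponent alpha moves the estimated
# probability of an extreme event by orders of magnitude.
def tail_probability(k: float, alpha: float) -> float:
    return k ** (-alpha)

k = 1_000.0  # an "extreme" threshold
p_fat = tail_probability(k, alpha=1.5)
p_thin = tail_probability(k, alpha=2.5)
print(p_fat / p_thin)  # the two calibrations disagree by a factor of 1000
```

When the harm attached to the event is unbounded, this parameter uncertainty is precisely what makes the expected-harm computation ill-defined.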

While evidentiary approaches are often considered to reflect adherence to the scientific method in its purest form, it is apparent that these approaches do not apply to ruin problems. In an evidentiary approach to risk (relying on evidence-based methods), the existence of a risk or harm occurs when we experience that risk or harm. In the case of ruin, by the time evidence comes it will by definition be too late to avoid it. Nothing in the past may predict one fatal event, as illustrated in Fig. 4. Thus standard evidence-based approaches cannot work.

More generally, evidentiary action is a framework based upon the quite reasonable expectation that we learn from experience. The idea of evidentiary action is embodied in the kind of learning from experience that is found in how people often react to disasters—after the fact. When a disaster occurs people prepare for the next one, but do not anticipate it in advance. For the case of ruin problems, such behavior guarantees extinction.

4.2 Invalid Empirical Arguments Against Ruin

In the case of arguments about ruin problems, claims that experience thus far has not provided evidence for ruin, and thus that it should not be considered, are not valid.

4.3 Unknowability, Uncertainty and Unpredictability

It has been shown that the complexity of real world systems limits the ability of empirical observations to determine the outcomes of actions upon them [2]. This means that a certain class of systemic risks will remain inherently unknown. In some classes of complex systems, controlled experiments cannot evaluate all of the possible systemic consequences under real-world conditions. In these circumstances, efforts to provide assurance of the "lack of harm" are insufficiently reliable. This runs counter both to the use of empirical approaches (including controlled experiments) to evaluate risks, and to the expectation that uncertainty can be eliminated by any means.

Figure 3: Thin Tails from Tinkering, Bottom-Up, Evolution. In nature no individual variation represents a large share of the sum of the variations. Natural boundaries prevent cascading effects from propagating globally. Mass extinctions arise from the rare cases where large impacts (meteorite hits and volcanism) propagate across the globe through the atmosphere and oceans.

Figure 4: Fat Tails from a Top-Down, Engineered Design. In human-made variations the tightly connected global system implies that a single deviation will eventually dominate the sum of the effects. Examples include pandemics, invasive species, financial crises and monoculture.

4.4 Distinguishing Global and Local Risks

Since there are mathematical limitations to the predictability of outcomes in a complex system, the central issue to determine is whether the threat of harm is local (hence globally benign) or carries global consequences. Scientific analysis can robustly determine whether a risk is systemic, i.e. by evaluating the connectivity of the system to the propagation of harm, without determining the specifics of such a risk. If the consequences are systemic, the associated uncertainty of risks must be treated differently than if they are not. In such cases, precautionary action is not based on direct empirical evidence but on analytical approaches based upon the theoretical understanding of the nature of harm. It relies on probability theory without computing probabilities. The essential question is whether or not global harm is possible. Theory enables generalizing from experience in order to apply it to new circumstances. In the case of the PP, the existence of a robust way to generalize is essential.

The relevance of the precautionary principle today is greater than in the past, owing to the global connectivity of civilization, which enables the spreading of effects to places previously insulated.

5 FAT TAILS AND FRAGILITY

5.1 Thin and Fat Tails

To figure out whether a given decision involves the risk of ruin and thus warrants the use of the PP, we must first understand the relevant underlying probabilistic structures.

There are two classes of probability distributions of events: one in which events are accompanied by well behaved, mild variations (e.g. Gaussian or thin tails), and the other where small probabilities are associated with large variations that have no characteristic scale (e.g. power law or fat tails). Allegorically these are illustrated by Mediocristan and Extremistan (Figs. 3 and 4), the former being typical of human weight distributions, and the latter of human wealth distributions. Given a series of events (a sequence of measurements of weight or wealth), in the case of thin tails the sum is proportional to the average, and in the case of fat tails a sum over them may be entirely dominated by a single one. Thus, while no human being can be heavier than, say, ten average adults (since weight is thin-tailed), a single individual can be richer than the poorest two billion humans (since wealth is fat-tailed).

In thin-tailed domains (Fig. 3) harm comes from the collective effect of many, many events; no event alone can be consequential enough to affect the aggregate. For an illustration, it is practically impossible for a single day to account for 99% of all heart attacks in a given year (the probability is small enough to be practically zero). Statistical distributions that belong to the thin-tailed domain include: Gaussian, Binomial, Bernoulli, Poisson, Gamma, Beta and Exponential.

In fat-tailed domains of risk (Fig. 4) harm comes from the largest single event. Examples of relevant statistical distributions include: Pareto, Levy-Stable distributions with infinite variance, Cauchy, and power law distributions, especially those with smaller exponents.
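The "sum dominated by a single event" property can be checked by simulation. The sampling choices below (Gaussian for the thin-tailed case, Pareto for the fat-tailed case) follow the distributions named in the text, while the specific parameters are our own illustrative picks:

```python
import random

# Share of a sample's total accounted for by its single largest observation.
def max_share(sample):
    magnitudes = [abs(x) for x in sample]
    return max(magnitudes) / sum(magnitudes)

rng = random.Random(42)
n = 100_000
thin = [rng.gauss(100.0, 15.0) for _ in range(n)]  # weight-like, thin-tailed
fat = [rng.paretovariate(1.1) for _ in range(n)]   # wealth-like, fat-tailed

print(max_share(thin))  # negligible: no single "weight" matters to the sum
print(max_share(fat))   # substantial: one observation can dominate the sum
```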

5.2 Why interdependence brings fat tails

When variations lead to independent impacts locally, the aggregate effect of those variations is small according to the central limit theorem, guaranteeing thin-tailed distributions. When there is interdependence, the central limit theorem does not apply, and aggregate variations may become much more severe due to mutual reinforcement. Interdependence arises because of the coupling of behavior in different places. Under these conditions, cascades propagate through the system in a way that can cause large impacts. Whether components are independent or dependent clearly matters to systemic disasters such as pandemics and financial or other crises. Interdependence increases the probability of ruin, ultimately to the point of certainty.
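The effect of coupling on aggregate variations can be sketched directly. The construction is ours for illustration: n unit Gaussian shocks summed either independently or as one fully shared shock:

```python
import random
import statistics

# Spread of the aggregate of n unit shocks, either independent or fully
# coupled (every component driven by one common shock).
def aggregate_std(n: int, coupled: bool, trials: int = 4000, seed: int = 1) -> float:
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        if coupled:
            totals.append(n * rng.gauss(0.0, 1.0))  # one shared shock
        else:
            totals.append(sum(rng.gauss(0.0, 1.0) for _ in range(n)))
    return statistics.stdev(totals)

n = 100
independent = aggregate_std(n, coupled=False)    # ~ sqrt(n) = 10 (CLT)
interdependent = aggregate_std(n, coupled=True)  # ~ n = 100
```

Independence gives the square-root scaling of the central limit theorem; full coupling makes the aggregate as volatile as n times a single shock, which is the mechanism by which interdependence fattens tails.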

Consider the global financial crash of 2008. As financial firms became increasingly interdependent during the latter part of the 20th century, small fluctuations during periods of calm masked the vulnerability of the system to cascading failures. Instead of a local shock in an independent area of the system, we experienced a global shock with cascading effects. The crisis of 2008, in addition, illustrates the failure of evidentiary risk management: because data from the time series beginning in the 1980s exhibited stability, the period was dubbed "the great moderation," deceiving those relying on historical statistical evidence.

6 WHAT IS THE RISK OF HARM TO THE EARTH?

At the largest systemic scale on Earth, nature has thin tails, though tails may be fat at smaller length scales or sufficiently long time scales; occasional mass extinctions occur at very long time scales. This is characteristic of a bottom-up, local tinkering design process, where things change primarily locally and only mildly and iteratively on a global scale.

In recent years, it has been shown that natural systems often have fat tail (power law) behaviors associated with the propagation of shocks [3]. This, however, applies to selected systems that do not have barriers (or circuit-breakers) that limit those propagations. The earth has an intrinsic heterogeneity of oceans/continents, deserts, mountains, lakes, rivers and climate differences that limit the propagation of variations from one area to another. There are also smaller natural boundaries associated with organism sizes and those of local groups of organisms. Among the largest propagation events we commonly observe are forest fires, but even these are bounded in their impacts compared to a global scale. The various forms of barriers limit the propagation of the cascades that would enable large scale events.

At longer time scales of millions of years, mass extinctions can achieve a global scale. Connectivity of the oceans and the atmosphere enables the propagation of impacts: gas, ash and dust propagating through the atmosphere due to meteor impacts and volcanism is considered a scenario for these extinction events [4]. The variability associated with mass extinctions can especially be seen in the fossil record of marine animal species; those of plants and land insects are comparatively robust. It is not known to what extent these events are driven extrinsically, by meteor impacts, geological events including volcanos, or cascading events of coupled species extinctions, or combinations of them. The variability associated with mass extinctions, however, indicates that there are fat tail events that can affect the global biosphere. The major extinction events during the past 500 million years occur at intervals of millions of years [5]. While mass extinctions occur, the extent of that vulnerability is driven by both sensitivity to external events and connectivity among ecosystems.

The greatest impact of human beings on this natural system connectivity is through dramatic increases in global transportation. The impact of invasive species and the rapid global transmission of diseases demonstrates the role of human activity in connecting previously much more isolated natural systems. The role of transportation and communication in connecting civilization itself is apparent in the economic interdependence manifest in cascading financial crises that were not possible even a hundred years ago. The danger we are facing today is that we as a civilization are globally connected, and the fat tail of the distribution of shocks extends globally, to our peril.

Had nature not imposed sufficiently thin-tailed variations at the aggregate or macro level, we would not be here today. A single one of the trillions, perhaps trillions of trillions, of variations over evolutionary history would have terminated life on the planet. Figures 1 and 2 show the difference between the two separate statistical properties. While tails can be fat for subsystems, nature remains predominantly thin-tailed at the level of the planet [6]. As connectivity increases, the risk of extinction increases dramatically and nonlinearly [7].

6.1 Risk and Global Interventionism

Currently, global dependencies are manifest in the expressed concerns about policy maker actions that nominally appear to be local in their scope. In just recent months, headlines have been about Russia's involvement in Ukraine, the spread of Ebola in West Africa, the expansion of ISIS control into Iraq, ongoing posturing in North Korea and the Israeli-Palestinian conflict, among others. These events reflect upon local policy maker decisions that are justifiably viewed as having global repercussions. The connection between local actions and global risks compels widespread concern and global responses to alter or mitigate local actions. In this context, we point out that the broader significance and risk associated with policy actions that impact global ecological and human survival is the essential point of the PP. Paying attention to the headline events without paying attention to these even larger risks is like being concerned about the wine being served on the Titanic.

Figure 5: Nonlinear response compared to linear response. The PP should be evoked to prevent impacts that result in complete destruction due to the nonlinear response of natural systems; it is not needed for smaller impacts, where risk management methods can be applied.

7 FRAGILITY

We define fragility in the technical discussion in Appendix C as "is harmed by uncertainty", with the mathematical result that what is harmed by uncertainty has a certain type of nonlinear response to random events.

The PP applies only to the largest scale impacts due to the inherent fragility of systems that maintain their structure. As the scale of impacts increases, the harm increases non-linearly up to the point of destruction.

7.1 Fragility as Nonlinear Response

Everything that has survived is necessarily non-linear to harm. If I fall from a height of 10 meters I am injured more than 10 times as much as if I fell from a height of 1 meter, and more than 1000 times as much as if I fell from a height of 1 centimeter; hence I am fragile. In general, every additional meter, up to the point of my destruction, hurts me more than the previous one.

Similarly, if I am hit with a big stone I will be harmed a lot more than if I were pelted serially with pebbles of the same total weight.

Everything that is fragile and still in existence (that is, unbroken) will be harmed more by a certain stressor of intensity X than by k times a stressor of intensity X/k, up to the point of breaking. If I were not fragile (susceptible to harm more than linearly), I would be destroyed by the accumulated effects of small events, and thus would not survive. This non-linear response is central for everything on planet earth.
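The inequality can be illustrated numerically. The sketch below is our own illustration, not the paper's model: it uses an arbitrary convex harm function, harm(x) = x², purely as a stand-in for a fragile response.

```python
def harm(x):
    """Hypothetical convex harm function (an illustrative assumption):
    the response grows faster than linearly with the stressor intensity."""
    return x ** 2

X, k = 10.0, 10
one_large_shock = harm(X)         # a single stressor of intensity X
k_small_shocks = k * harm(X / k)  # k stressors of intensity X/k

# For any convex harm function, harm(X) > k * harm(X/k):
print(one_large_shock, k_small_shocks)  # 100.0 10.0
```

Any strictly convex harm function yields the same ordering; the big stone always does more damage than the equivalent weight in pebbles.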

This explains the necessity of considering scale when invoking the PP. Polluting in a small way does not warrant the PP because it is essentially less harmful than polluting in large quantities, since harm is non-linear.

7.2 Why is fragility a general rule?

The statistical structure of stressors is such that small variations are much, much more frequent than large ones. Fragility is intimately connected to the ability to withstand small impacts and recover from them. This ability is what makes a system retain its structure. Every system has a threshold of impact beyond which it will be destroyed, i.e. its structure is not sustained.

Consider a coffee cup sitting on a table: there are millions of recorded earthquakes every year; if the coffee cup were linearly sensitive to earthquakes and accumulated their effects as small deteriorations of its form, it would not persist even for a short time, as it would have been broken down by the accumulated impact of small vibrations. The coffee cup, however, is non-linear to harm, so that small or remote earthquakes only make it wobble, whereas one large one would break it forever.

This nonlinearity is necessarily present in everything fragile.

Thus, when impacts extend to the size of the system, harm is severely exacerbated by non-linear effects. Small impacts, below a threshold of recovery, do not accumulate for systems that retain their structure. Larger impacts cause irreversible damage. We should be careful, however, of actions that may seem small and local but then lead to systemic consequences.

7.3 Fragility, Dose-Response and the 1/n Rule

Another area where we see non-linear responses to harm is the dose-response relationship. As the dose of some chemical or stressor increases, the response to it grows non-linearly. Many low-dose exposures do not cause great harm, but a single large dose can cause irreversible damage to the system, like overdosing on painkillers.

In decision theory, the 1/n heuristic is a simple rule in which an agent invests equally across n funds (or sources of risk) rather than weighting their investments according to some optimization criterion such as mean-variance or Modern Portfolio Theory (MPT), which dictates some amount of concentration in order to increase the potential payoff. The 1/n heuristic mitigates the risk of suffering ruin due to an error in the model; there is no single asset whose failure can bring down the ship. While the potential upside of the large payoff is dampened, ruin due to an error in prediction is avoided. This heuristic works best when the sources of variation are uncorrelated; in the presence of correlation or dependence between the various sources of risk, the total exposure needs to be reduced.
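A minimal simulation can make the heuristic concrete. The sketch below is our own illustration with arbitrary parameters (a 5% chance of total loss per source per period, n = 10 sources): it compares the chance of losing more than half of one's wealth when betting everything on one source of risk versus spreading equally.

```python
import random

random.seed(0)

def surviving_fraction(weights, p_fail=0.05):
    """One period: each source of risk independently suffers total loss
    with probability p_fail; return the fraction of wealth that survives."""
    return sum(w for w in weights if random.random() > p_fail)

def ruin_freq(weights, trials=100_000, threshold=0.5):
    """Frequency of ending a period with less than half the initial wealth."""
    return sum(surviving_fraction(weights) < threshold
               for _ in range(trials)) / trials

concentrated = [1.0]       # everything on a single source of risk
diversified = [0.1] * 10   # the 1/n heuristic with n = 10

print(ruin_freq(concentrated))  # roughly p_fail: one failure sinks the ship
print(ruin_freq(diversified))   # vastly smaller: 6 of 10 must fail at once
```

The upside is capped in the same way on both sides; what diversification buys is the near-elimination of the single-error ruin path, which is exactly the trade the text describes.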

Hence, because of non-linearities, it is preferable to diversify our effect on the planet, e.g. across distinct types of pollutants, spreading it over the broadest number of uncorrelated sources of harm, rather than to concentrate it. In this way, we avoid the risk of an unforeseen, disproportionately harmful response to a pollutant deemed "safe" by virtue of responses observed only in relatively small doses.

8 THE LIMITATION OF TOP-DOWN ENGINEERING IN COMPLEX ENVIRONMENTS

In considering the limitations of risk-taking, a key question is whether or not we can analyze the potential outcomes of interventions and, knowing them, identify the associated risks. Can't we just "figure it out"? With such knowledge we could gain assurance that extreme problems such as global destruction will not arise.

Since the same issue arises for any engineering effort, we can ask: what is the state of the art of engineering? Does it enable us to know the risks we will encounter? Perhaps it can just determine the actions we should, or should not, take. There is justifiably widespread respect for engineering because it has provided us with innovations ranging from infrastructure to electronics that have become essential to modern life. What is not as well known by the scientific community and the public is that engineering approaches fail in the face of complex challenges, and this failure has been extensively documented by the engineering community itself [8]. The underlying reason for the failure is that complex environments present a wide range of conditions. Which conditions will actually be encountered is uncertain. Engineering approaches involve planning that requires knowledge of the conditions that will be encountered. Planning fails due to the inability to anticipate the many conditions that will arise.

This problem arises particularly for "real-time" systems that deal with large amounts of information and have critical functions in which lives are at risk. A classic example is the air traffic control system. An effort to modernize that system by traditional engineering methods cost $3-6 billion and was abandoned without changing any part of the system because of the inability to evaluate the risks associated with its implementation.

Significantly, the failure of traditional engineering to address complex challenges has led to the adoption of innovation strategies that mirror evolutionary processes, creating platforms and rules that can serve as a basis for safely introducing small incremental changes that are extensively tested in their real world context [8]. This strategy underlies the approach used by highly successful, modern, engineered-evolved, complex systems ranging from the Internet, to Wikipedia, to iPhone App communities.

9 SKEPTICISM AND PRECAUTION

We show in Figures 6 and 7 that an increase in uncertainty leads to an increase in the probability of ruin; hence "skepticism" about models should lead to increased, not decreased, conservatism in the presence of ruin. More skepticism about models implies more uncertainty about the tails, which necessitates more precaution about newly implemented techniques, or larger sizes of exposure. As we said, Nature might not be smart, but its longer track record means smaller uncertainty in following its logic.

Figure 6: [Plot of ruin probability under low vs. high model uncertainty; the shaded left tail marks ruin.] The more uncertain or skeptical one is of "scientific" models and projections, the higher the risk of ruin, which flies in the face of arguments in the style of "skeptical of climate models". No matter how large the probability of benefits, ruin, as an absorbing barrier (i.e. causing extinction without further recovery), can more than cancel them out. This graph assumes changes in uncertainty without changes in benefits (a mean-preserving sensitivity); the next one isolates the changes in benefits.

Figure 7: The graph shows the asymmetry between benefits and harm and its effect on the ruin probabilities: the effect on the ruin probability of changes in the Information Ratio, that is, expected benefit / uncertainty (or signal divided by noise). Benefits are small compared to negative effects. Three cases are considered, two from Extremistan: extremely fat-tailed (α = 1) and less fat-tailed (α = 2), and one from Mediocristan.

Mathematically, more uncertainty about the future (or about a model) increases the scale of the distribution, hence thickens the "left tail" (as well as the "right one"), which raises the potential for ruin. The survival probability is reduced no matter what takes place in the right tail.

Hence skepticism about climate models should lead to more precautionary policies.

In addition, such increased uncertainty matters far more in Extremistan, and has benign effects in Mediocristan. Figure 7 shows the asymmetries between costs and benefits as regards ruin probabilities, and why these matter more for fat-tailed domains than thin-tailed ones. In thin-tailed domains, an increase in uncertainty changes the probability of ruin by several orders of magnitude, but the effect remains small: from, say, 10^-40 to 10^-30 is not quite worrisome. In fat-tailed domains, the effect is sizeable as we start with a substantially higher probability of ruin (which is typically underestimated, see [6]).
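This contrast can be sketched with closed-form tail probabilities. In the sketch below, a Gaussian stands in for Mediocristan and a power-law (Pareto-like) tail for Extremistan; the ruin threshold K, the scales, and the tail exponent are arbitrary illustrative choices of ours, not calibrated to anything in the paper.

```python
import math

K = 10.0  # hypothetical ruin threshold, arbitrary units

def gaussian_tail(scale):
    """P(X < -K) for a centered Gaussian with the given scale: thin-tailed."""
    return 0.5 * math.erfc(K / (scale * math.sqrt(2)))

def powerlaw_tail(scale, alpha=2.0):
    """P(X < -K) for a symmetric power-law tail, P(|X| > K) = (K/scale)^-alpha."""
    return 0.5 * (K / scale) ** (-alpha)

for tail in (gaussian_tail, powerlaw_tail):
    low, high = tail(1.0), tail(2.0)  # doubling the scale = more uncertainty
    print(f"{tail.__name__}: {low:.2e} -> {high:.2e} (x{high / low:.1e})")
```

Doubling the Gaussian scale multiplies the ruin probability by a factor of roughly 10^16 here, yet it stays negligible (around 3 x 10^-7); the power-law tail only quadruples, but it starts at 5 x 10^-3, which is already worrisome, matching the point made above.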

10 WHY SHOULD GMOS BE UNDER PP BUT NOT NUCLEAR ENERGY?

As examples that are relevant to the discussion of the different types of strategies, we consider the differences between concerns about nuclear energy and GM crops.

In short, nuclear exposure is nonlinear, and can be local (under some conditions), while GMOs are not and present systemic risks even in small amounts.

10.1 Nuclear energy

Many are justifiably concerned about nuclear energy. It is known that the potential harm due to radiation release, core meltdowns and waste can be large. At the same time, the nature of these risks has been extensively studied, and the risks from local uses of nuclear energy have a scale that is much smaller than global. Thus, even though some uncertainties remain, it is possible to formulate a cost-benefit analysis of risks for local decision-making. The large potential harm at a local scale means that decisions about whether, how and how much to use nuclear energy, and what safety measures to use, should be made carefully so that decision makers and the public can rely upon them. Risk management is a very serious matter when potential harm can be large and should not be done casually or superficially. Those who perform the analysis must not only do it carefully, they must have the trust of others that they are doing it carefully. Nevertheless, the known statistical structure of the risks and the absence of global systemic consequences make the cost-benefit analysis meaningful. Decisions can be made in the cost-benefit context; evoking the PP is not appropriate for small amounts of nuclear energy, as the local nature of the risks is not indicative of the circumstances to which the PP applies.

In large quantities we should worry about an unseen risk from nuclear energy and invoke the PP. In small quantities it may be OK; how small, we should determine by direct analysis, making sure threats never cease to be local.

In addition to the risks from nuclear energy use itself, we must keep in mind the longer term risks associated with the storage of nuclear waste, which are compounded by the extended length of time it remains hazardous. The problem of such longer term "lifecycle" effects is present in many different industries. It arises not just for nuclear energy but also for fossil fuels and other sources of pollution, though the sheer duration of toxicity effects for nuclear waste, enduring for hundreds of thousands of years in some cases, makes this problem particularly intense for nuclear power.

As we saw earlier, we need to remain careful in limiting nuclear exposure, as with other sources of pollution, to sources that, owing to their quantity, do not allow for systemic effects.

10.2 GMOs

Genetically Modified Organisms (GMOs) and their risks are currently the subject of debate [9]. Here we argue that they fall squarely under the PP because their risk is systemic. There are two aspects of systemic risk: the widespread impact on the ecosystem and the widespread impact on health.

Ecologically, in addition to intentional cultivation, GMOs have the propensity to spread uncontrollably, and thus their risks cannot be localized. The cross-breeding of wild-type plants with genetically modified ones prevents their disentangling, leading to irreversible system-wide effects with unknown downsides. The ecological implications of releasing modified organisms into the wild are not tested empirically before release.

Healthwise, the modification of crops impacts everyone. Corn, one of the primary GMO crops, is not only eaten fresh or as cereals, but is also a major component of processed foods in the form of high-fructose corn syrup, corn oil, corn starch and corn meal. In 2014 in the US, almost 90% of corn and 94% of soybeans were GMO [11]. Foods derived from GMOs are not tested in humans before they are marketed.

The widespread impacts of GMOs on ecologies and human health imply that they are in the domain of the PP. This should itself compel policy makers to take extreme caution. However, many have difficulty understanding the abstract nature of the engagement in risks and imagining the many possible ways that harm can be caused. Thus, we summarize further the nature of the risks involved.

10.3 GMOs in detail

The systemic global impacts of GMOs arise from a combination of (1) engineered genetic modifications and (2) monoculture, the use of single crops over large areas. Global monoculture itself is of concern for potential global harm, but the evolutionary context of traditional crops provides important assurances (see Figure 8). Invasive species are frequently a problem, but one might at least argue that the long term evolutionary testing of the harmful impacts of organisms on local ecological systems mitigates, if not eliminates, the largest potential risks.

Figure 8: A simplified illustration of the mechanism behind the potato famine of the 19th century, showing how concentration from monoculture increases the risk of ruin. Inspired by Berkeley's Understanding Evolution.

Monoculture in combination with genetic engineering dramatically increases the risks being taken. Instead of a long history of evolutionary selection, these modifications rely not just on naive engineering strategies that do not appropriately consider risk in complex environments, but also on explicitly reductionist approaches that ignore unintended consequences and employ very limited empirical testing.

Ironically, at a time when engineering is adopting evolutionary approaches due to the failure of top-down strategies, biologists and agronomists are adopting top-down engineering strategies and taking global systemic risks by introducing organisms into the wild.

One argument in favor of GMOs is that they are no more "unnatural" than the selective farming our ancestors have been doing for generations. In fact, the ideas developed in this paper show that this is not the case. Selective breeding over human history is a process in which change still happens in a bottom-up way, and can be expected to result in a thin-tailed distribution. If there is a mistake, some harmful variation, it will not spread throughout the whole system but will end up dying out through local experience over time. Human experience over generations has chosen the biological organisms that are relatively safe for consumption. There are many that are not, including parts of, and varieties of, the crops we do cultivate [12]. Introducing rapid changes in organisms is inconsistent with this process. There is a limited rate at which variations can be introduced while selection remains effective [13].

There is no comparison between tinkering with the selective breeding of genetic components of organisms that have previously undergone extensive histories of selection and the top-down engineering of taking a gene from a fish and putting it into a tomato. Saying that such a product is natural misses the process of natural selection by which things become "natural." While there are claims that all organisms include transgenic materials, those genetic transfers that are currently present were subject to selection over long times and survived. The success rate is tiny. Unlike GMOs, in nature there is no immediate replication of mutated organisms to become a large fraction of the organisms of a species. Indeed, any one genetic variation is unlikely to become part of the long term genetic pool of the population. Instead, just like any other genetic variation or mutation, transgenic transfers are subject to competition and selection over many generations before becoming a significant part of the population. A new genetic transfer engineered today is not the same as one that has survived this process of selection.

An example of the effect of transferring biologically evolved systems to a different context is that of zoonotic diseases. Even though pathogens consume their hosts, they evolve to be less harmful than they would otherwise be. Pathogens that cause highly lethal diseases are selected against because their hosts die before they are able to transmit to others. This is the underlying reason for the greater dangers associated with zoonotic diseases, caused by pathogens that shift from the hosts in which they evolved to human beings; these include HIV, avian flu and swine flu, which transferred from monkeys (through chimpanzees), birds and hogs, respectively.

More generally, engineered modifications to ecological systems (through GMOs) are categorically and statistically different from bottom-up ones. Bottom-up modifications do not remove the crops from their long term evolutionary context, enabling the push and pull of the ecosystem to locally extinguish harmful mutations. Top-down modifications that bypass this evolutionary pathway unintentionally manipulate large sets of interdependent factors at the same time, with dramatic risks of unintended consequences. They thus result in fat-tailed distributions and place a huge risk on the food system as a whole.

For the impact of GMOs on health, the evaluation by the FDA of whether the genetic engineering of a particular chemical (protein) into a plant is acceptable is based upon limited existing knowledge of the risks associated with that protein. The number of ways such an evaluation can be in error is large. The genetic modifications are biologically significant, as the purpose is to strongly impact the chemical functions of the plant, modifying its resistance to other chemicals such as herbicides or pesticides, or affecting its own lethality to other organisms, i.e. its antibiotic qualities. The limited existing knowledge generally does not include long term testing of the exposure of people to the added chemical, even in isolation. The evaluation is independent of the ways the protein affects the biochemistry of the plant, including interactions among the various metabolic pathways and regulatory systems, and of the impact of the resulting changes in biochemistry on the health of consumers. The evaluation is also independent of its farm-ecosystem combination (i.e. pesticide-resistant crops are subject to increased use of pesticides, which are subsequently present in the plant in larger concentrations and cannot be washed away). Rather than recognizing the limitations of current understanding, poorly grounded perspectives about the potential damage, resting on unjustified assumptions, are being advanced. Limited empirical validation of both essential aspects of the conceptual framework and specific conclusions is being accepted because testing is recognized to be difficult.

We should exert the precautionary principle here (our non-naive version) because we do not want to discover errors only after considerable and irreversible environmental and health damage.

10.4 Red herring: How about the risk of famine without GMOs?

An argument used by those who advocate for GMOs is that they will reduce hunger in the world. Invoking the risk of famine as an alternative to GMOs is a deceitful strategy, no different from urging people to play Russian roulette in order to get out of poverty.

The evocation of famine also prevents clear thinking not just about GMOs but also about global hunger. The idea that GMO crops will help avert famine ignores evidence that the problem of global hunger is due to poor economic and agricultural policies. Those who care about the supply of food should advocate for an immediate impact on the problem by reducing the amount of corn used for ethanol in the US: burning food for fuel consumes over 40% of the US corn crop, which could provide enough food to feed two-thirds of a billion people [14].

One of the most extensively debated cases for GMOs is a variety of rice, "golden rice", to which a precursor of vitamin A has been added as a potential means to alleviate vitamin A deficiency, a key medical condition affecting impoverished populations. Since there are alternatives, including traditional vitamin fortification, one approach is to apply a cost-benefit analysis comparing these approaches. Against this approach stand both the largely unknown risks associated with the introduction of GMOs, and the need and opportunities for more systemic interventions to alleviate not just malnutrition but poverty and hunger worldwide. While great attention should be placed on immediate needs, neglecting the larger scale risks is unreasonable [10]. Here science should adopt an unyielding rigor for both health benefit and risk assessment, including careful application of the PP. Absent such rigor, advocacy by the scientific community not only fails to be scientific, but also becomes subject to challenge for short term interests, not much different from corporate endorsers. Thus, cutting corners on tests, including tests performed on Chinese children without adequate consent or approvals [15], undermines scientific claims to humanitarian ideals. Given the promotion of "golden rice" by agribusinesses that also promote biofuels, their interest in humanitarian impacts versus profits gained through wider acceptance of GMO technology can be legitimately questioned [16].

We can frame the problem in the probabilistic argument of Section 9. The asymmetry of adding another risk, here a technology (with uncertainty attending some of its outcomes), to solve a given risk (which can be solved by less complicated means) is illustrated in Figures 6 and 7. Model error, or errors from the technology itself, i.e. its iatrogenics, can turn a perceived "benefit" into a highly likely catastrophe, simply because an error from, say, "golden rice" or some such technology would have much worse outcomes than an equivalent benefit. Most of the discussions on "saving the poor from starvation" via GMOs miss the fundamental asymmetry shown in Figure 7.

10.5 GMOs in summary

In contrast to nuclear energy (which, as discussed in section 10.1 above, may or may not fall under the PP, depending on how and where (how widely) it is implemented), Genetically Modified Organisms (GMOs) fall squarely under the PP because of their systemic risk. The understanding of the risks is very limited and the scope of the impacts is global, both due to the engineering approach replacing an evolutionary approach, and due to the use of monoculture.

Labeling the GMO approach "scientific" betrays a very poor, indeed warped, understanding of probabilistic payoffs and risk management. A lack of observations of explicit harm does not show absence of hidden risks. Current models of complex systems only contain the subset of reality that is accessible to the scientist. Nature is much richer than any model of it. To expose an entire system to something whose potential harm is not understood because extant models do not predict a negative outcome is not justifiable; the relevant variables may not have been adequately identified.

Given the limited oversight of GMO introductions in the US, and the global impact of those introductions, we are precisely in the regime of the ruin problem. A rational consumer should say: we do not wish to pay, or have our descendants pay, for errors made by executives of Monsanto, who are financially incentivized to focus on quarterly profits rather than long term global impacts. We should exert the precautionary principle (our non-naive version) simply because otherwise we will discover errors with large impacts only after considerable damage.

10.6 Vaccination, Antibiotics, and Other Exposures

Our position is that while one may argue that vaccination is risky, or risky under some circumstances, it does not fall under the PP owing to the lack of systemic risk. The same applies to such interventions as antibiotics, provided the scale remains limited to the local.

11 PRECAUTION AS POLICY AND NAIVE INTERVENTION

When there is a risk of ruin, obstructionism and policy inaction are important strategies, impeding the rapid headlong experimentation with global ruin by those with short-term, self-centered incentives and perspectives. Two approaches for policy action are well justified. In the first, actions that avoid the inherent sensitivity of the system to the propagation of harm can be used to free the system to enable local decision-making and exploration with only local harm. This involves introducing boundaries, barriers and separations that inhibit the propagation of shocks, preventing ruin in overly connected systems. In the second, where such boundaries don't exist or cannot be introduced due to other effects, there is a need for actions that are adequately evaluated as to their global harm. Scientific analysis of such actions, meticulously validated, is needed to prevent small risks from causing ruin.

What is not justified, and is dangerous, are actions intended to prevent harm by additional intervention. The reason is that indirect effects are likely to create precisely the risks that one intends to avoid.

When existing risks are perceived as having the potential for ruin, it may be assumed that any preventive measure is justified. There are at least two problems with such a perspective. First, localized harm is often mistaken for ruin, and the PP is wrongly invoked where risk management techniques should be employed. When a risk is not systemic, overreaction will typically cause more harm than benefit, like undergoing dangerous surgery to remove a benign growth. Second, even if the threat of ruin is real, taking specific (positive) action in order to ward off the perceived threat may introduce new systemic risks. It is often wiser to reduce or remove activity that is generating or supporting the threat and allow natural variations to play out in localized ways.

Preventive action should be limited to correcting situations by removing threats via negativa in order to bring them back in line with a statistical structure that avoids ruin. It is often better to remove structure, or to allow natural variation to take place, rather than to add something additional to the system.

When one takes the opposite approach, taking specific action designed to diminish some perceived threat, one is almost guaranteed to induce unforeseen consequences. Even when there appears to be a direct link from a specific action to a specific preventive outcome, the web of causality extends in complex ways, with consequences that are far from the intended goal. These unintended consequences may generate new vulnerabilities or strengthen the harm one is hoping to diminish. Thus, when possible, limiting fragilizing dependencies is better than imposing additional structure that increases the fragility of the system as a whole.

12 FALLACIOUS ARGUMENTS AGAINST PP

In this section we respond to a variety of arguments that have been made against the PP.

12.1 Crossing the road (the paralysis fallacy)

Many have countered the invocation of the PP with "nothing is ever totally safe": "I take risks crossing the road every day, so according to you I should stay home in a state of paralysis." The answer is that we don't cross the street blindfolded; we use sensory information to mitigate risks and reduce exposure to extreme shocks.

Even more importantly in the context of the PP, the probability distribution of death from road accidents at the population level is thin-tailed; I do not incur the risk of generalized human extinction by crossing the street: a human life is bounded in duration and its unavoidable termination is an inherent part of the bio-social system [17]. The error of my crossing the street at the wrong time and meeting an untimely demise in general does not cause others to do the same; the error does not spread. If anything, one might expect the opposite effect, that others in the system benefit from my mistake by adapting their behavior to avoid exposing themselves to similar risks. Equating the risks a person takes with his or her own life with risking the existence of civilization is an inappropriate ego trip. In fact, the very idea of the PP is to avoid such a frivolous focus.

The paralysis argument is often used to present the PP as incompatible with progress. This is untrue: tinkering, bottom-up progress where mistakes are bounded, is how progress has taken place in history. The non-naive PP simply asserts that the risks we take as we innovate must not extend to the entire system; local failure serves as information for improvement. Global failure does not.

This fallacy illustrates a misunderstanding, found in the literature, of the difference between systemic and idiosyncratic risk. Individuals are fragile and mortal. The idea of sustainability is to strive to make systems as close to immortal as possible.

12.2 The Psychology of Risk and Thick-Tailed Distributions

One concern about the utility of the PP is that its evocation may become commonplace because of risk aversion. Is it true that people overreact to small probabilities, and would the PP feed into human biases? While we have carefully identified the scope of the domain of applicability of the PP, it is also helpful to review the evidence of risk aversion, which we find not to be based upon sound studies.

Certain empirical studies appear to support the existence of a bias toward risk aversion, claiming evidence that people choose to avoid risks that are beneficial, inconsistent with cost-benefit analyses. The relevant experiments ask people questions about single probability events, showing that people overreact to small probabilities. However, those researchers failed to include the consequences of the associated events, which humans underestimate. Thus, this empirical strategy as a way of identifying the effectiveness of response to risk is fundamentally flawed [18].

The proper consideration of risk involves both probability and consequence, which should be multiplied together. Consequences in many domains have thick tails, i.e. much larger consequences can arise than are considered in traditional statistical approaches. Overreacting to small probabilities is not irrational when the effect is large, as the product of probability and harm is larger than expected from the traditional treatment of probability distributions.
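The probability-times-consequence point can be illustrated numerically. The sketch below is our own, not from the paper: the 1% event probability, the uniform thin-tailed model, and the Pareto tail exponent of 1.1 are all illustrative assumptions.

```python
import random

random.seed(7)

N = 100_000      # number of simulated consequence draws
P_EVENT = 0.01   # assumed probability of the adverse event

def expected_harm(draw_consequence):
    """Probability of the event times the average simulated consequence."""
    return P_EVENT * sum(draw_consequence() for _ in range(N)) / N

# Thin-tailed consequences: bounded around a typical size of 1.
thin = expected_harm(lambda: random.uniform(0.5, 1.5))

# Fat-tailed consequences: Pareto draws with tail exponent 1.1
# (inverse-transform sampling). Same typical size near 1, but rare
# draws are enormous and dominate the average.
fat = expected_harm(lambda: random.random() ** (-1 / 1.1))

print(f"thin-tailed expected harm: {thin:.4f}")
print(f"fat-tailed expected harm:  {fat:.4f}")
# Same event probability, but the fat-tailed expected harm is several
# times larger, because the product of probability and harm is driven
# by the tail.
```

The same small probability thus carries a very different risk depending on the tail of the consequence distribution, which is why "overreaction" to small probabilities can be the rational response.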

12.3 The Loch Ness fallacy

Many have countered that we have no evidence that the Loch Ness monster doesn't exist, and, to take the argument of evidence of absence being different from absence of evidence, we should act as if the Loch Ness monster existed. The argument is a corruption of the absence of evidence problem and certainly not part of the PP.

The relevant question is whether the existence of the Loch Ness monster has implications for decisions about actions that are being taken. We are not considering a decision to swim in Loch Ness. If the Loch Ness monster did exist, there would still be no reason to invoke the PP, as the harm it might cause is limited in scope to Loch Ness itself, and does not present the risk of ruin.

12.4 The fallacy of misusing the naturalistic fallacy

Some people invoke "the naturalistic fallacy," a philosophical concept that is limited to the moral domain. According to this critique, we should not claim that natural things are necessarily good; human innovation can be equally valid. We do not claim to use nature to derive a notion of how things "ought" to be organized. Rather, as scientists, we respect nature for the extent of its experimentation. The high level of statistical significance given by a very large sample cannot be ignored. Nature may not have arrived at the best solution to a problem we consider important, but there is reason to believe that it is smarter than our technology based only on statistical significance.

The question about what kinds of systems work (as demonstrated by nature) is different from the question about what working systems ought to do. We can take a lesson from nature—and time—about what kinds of organizations are robust against, or even benefit from, shocks, and in that sense systems should be structured in ways that allow them to function. Conversely, we cannot derive the structure of a functioning system from what we believe the outcomes ought to be.

To take one example, Cass Sunstein—who has written an article critical of the PP [19]—claims that there is a "false belief that nature is benign." However, his conceptual discussion fails to distinguish between thin and fat tails, local harm and global ruin. The method of analysis misses both the statistical significance of nature and the fact that it is not necessary to believe in the perfection of nature, or in its "benign" attributes, but rather in its track record: its sheer statistical power as a risk evaluator and as a risk manager in avoiding ruin.

12.5 The "Butterfly in China" fallacy

The statement "if I move my finger to scratch my nose, by the butterfly-in-China effect, owing to non-linearities, I may terminate life on earth" is known to be flawed. The explanation is not widely understood. The fundamental reason arises because of the existence of a wide range in levels of predictability and the presence of a large number of fine scale degrees of freedom for every large scale one [20]. Thus, the traditional deterministic chaos, for which the butterfly effect was named, applies specifically to low dimensional systems with a few variables in a particular regime. High dimensional systems, like the earth, have large numbers of fine scale variables for every large scale one. Thus, it is apparent that not all butterfly wing flaps can cause hurricanes. It is not clear that any one of them can, and, if small perturbations can influence large scale events, it happens only under specific conditions where amplification occurs.

Empirically, our thesis rebuts the butterfly fallacy with the argument that, in the aggregate, nature has experienced trillions of small variations and yet it survives. Therefore, we know that the effects of scratching one's nose fall into the thin tailed domain and thus do not warrant the precautionary principle.

As described previously, barriers in natural systems lead to subsystems having a high degree of independence. Understanding how modern systems with a high degree of connectivity have cascading effects is essential for understanding when it is and isn't appropriate to use the PP.

12.6 The potato fallacy

Many species were abruptly introduced into the Old World starting in the 16th Century that did not cause environmental disasters (perhaps aside from diseases affecting Native Americans). Some use this observation in defense of GMOs. However, the argument is fallacious at two levels:

First, by the fragility argument, potatoes, tomatoes and similar "New World" goods were developed locally through progressive, bottom-up tinkering in a complex system in the context of its interactions with its environment. Had they had an impact on the environment, it would have caused adverse consequences that would have prevented their continual spread.

Second, a counterexample is not evidence in the risk domain, particularly when the evidence is that taking a similar action previously did not lead to ruin. Lack of ruin due to several or even many trials does not indicate safety from ruin in the next one. This is also the Russian roulette fallacy, detailed below.

12.7 The Russian roulette fallacy (the counterexamples in the risk domain)

The potato example, assuming potatoes had not been generated top-down by some engineers, would still not be sufficient. Nobody says "look, the other day there was no war, so we don't need an army," as we know better in real-life domains. Nobody argues that a giant Russian roulette with many barrels is "safe" and a great money making opportunity because it didn't blow up someone's brains last time.

There are many reasons a previous action may not have led to ruin while still having the potential to do so. If you attempt to cross the street with a blindfold and earmuffs on, you may make it across, but this is not evidence that such an action carries no risk.

More generally, one needs a large sample for claims of absence of risk in the presence of a small probability of ruin, while a single "n = 1" example would be sufficient to counter the claims of safety—this is the Black Swan argument [27]. Simply put, systemic modifications require a very long history in order for the evidence of lack of harm to carry any weight.
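A back-of-the-envelope calculation makes the compounding concrete. The per-trial ruin probabilities below are our own purely illustrative assumptions:

```python
def survival_probability(p_ruin: float, n_trials: int) -> float:
    """Probability of never hitting ruin across n independent trials,
    each carrying the same per-trial probability of ruin."""
    return (1.0 - p_ruin) ** n_trials

# One round of Russian roulette: survival is the likely outcome,
# which is exactly why a single survival proves nothing.
print(survival_probability(1 / 6, 1))        # ~0.83

# Twenty rounds: ruin becomes near-certain.
print(survival_probability(1 / 6, 20))       # ~0.03

# Even a "negligible" 1-in-1000 risk, repeated 5000 times,
# compounds to near-certain ruin.
print(survival_probability(1 / 1000, 5000))  # ~0.007
```

The asymmetry follows: many survivals are weak evidence of safety, while a single ruin settles the question.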

12.8 The Carpenter Fallacy

Risk managers skeptical of the understanding of risk of biological processes, such as GMOs, by the experts are sometimes asked "are you a biologist?" But nobody asks a probabilist dealing with roulette sequences if he is a carpenter. To understand the gambler's ruin problem by roulette betting, we know to ask a probabilist, not a carpenter. No amount of expertise in carpentry can replace rigor in understanding the properties of long sequences of small probability bets. Likewise, no amount of expertise in the details of biological processes can be a substitute for probabilistic rigor.

The context for evaluating risk is the extent of knowledge or lack of knowledge. Thus, when considering GMO risks, a key question is the extent to which we know the impacts of genetic changes in organisms. Claims that geneticists know these consequences as a basis for GMOs recognize neither that their knowledge is incomplete in its own domain nor that genetics is incomplete as a body of knowledge. Geneticists do not know the developmental, physiological, medical, cognitive and environmental consequences of genetic changes in organisms. Indeed, most of these are not part of their training or competency. Neither are they trained in recognizing the impact of the limitations of knowledge on risk.

Some advocates dismiss the very existence of risk due to the role of scientific knowledge in GMOs. According to this view, scientists from Monsanto and similar companies can be trusted to provide safe foods without risk, and even a question about risk is without basis. Scientific knowledge as a source of engineering innovation has a long tradition. At the same time, engineering itself is a different discipline and has different imperatives. While construction of bridges and buildings relies upon well established rules of physics, the existence of risks does not end with that knowledge and must be considered directly in planning and construction, as it is in other forms of engineering. The existence of risk in engineering, even where knowledge is much better established than in genetics, is widely recognized. That proponents dismiss the very existence of risk attests to their poor understanding or blind, extrinsically motivated advocacy.

The FDA has adopted as a policy the approach that current scientific knowledge assures safety of GMOs, and relies upon Monsanto or similar companies for assurances. It therefore does not test the impact of chemical changes in GMO plants on human health or ecological systems. This is despite experiments showing that increased concentrations of neurotoxins in maternal blood are linked to GMOs [21]. A variety of studies show experimental evidence that risks exist [22], [23], [24], [25], and global public health concerns are recognized [26]. We note that it is possible that there are significant impacts of neurotoxins on human cognitive function as a result of GMO modification, as FDA testing does not evaluate this risk.

Consistent with these points, the track record of the experts in understanding biological and medical risks has been extremely poor. We need policies to be robust to such miscalculations. The "expert problem" in medicine, by which experts mischaracterize the completeness of their own knowledge, is manifest in a very poor historical record of risks taken with innovations in biological products. These range from biofuels to trans fat to nicotine, etc. Consider the recent major drug recalls such as Thalidomide, Fen-Phen, Tylenol and Vioxx—all of these show blindness on the part of the specialist to large scale risks associated with absence of knowledge, i.e., Black Swan events. Yet most of these risks were local and not systemic (with the exception of biofuel impacts on global hunger and social unrest). Since systemic risks would result in a recall happening too late, we need the strong version of the PP.

12.9 The technological salvation fallacy

Iatrogenics is harm done by a healer despite positive intentions; see Appendix A for a list of innovations in care that have extensive documentation of adverse consequences. Each of these underwent best practices testing that did not reveal the iatrogenic consequences prior to widespread application. The controlled tests that are used to evaluate innovations for potential harm cannot replicate the large number of conditions in which interventions are applied in the real world. Adverse consequences are exposed only by extensive experience with the combinatorial number of real world conditions. Natural, i.e. evolutionary, selection implements as a strategy the use of selection for lack of harm under such conditions, in a way that bounds the consequences because the number of replicates is increased only gradually during the process in which success is determined.

In contrast, traditional engineering of technological solutions does not. Thus, the more technological a solution to a current problem—the more it departs from solutions that have undergone evolutionary selection—the more exposed one becomes to iatrogenics owing to combinatorial branching of conditions with adverse consequences.

Our concern here isn't mild iatrogenics, but the systemic case.

12.10 The pathologization fallacy

Today many mathematical or conceptual models that are claimed to be rigorous are based upon unvalidated and incorrect assumptions. Such models are rational in the sense that they are logically derived from their assumptions, except that it is the modeler who is using an incomplete representation of the reality. Often the modelers are not familiar with the dynamics of complex systems or use Gaussian statistical methods that do not take into account fat tails and make inferences that would not be acceptable under different classes of probability distributions. Many claims of bias, such as the ones made by Cass Sunstein (mentioned above) about the overestimation of the probabilities of rare events, in fact correspond to the testers using a bad probability model that is thin-tailed. See Ref. [6] for a deeper discussion.
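The gap between the two model classes is stark even at moderate deviations. The comparison below is our own illustration: a standard normal and a Pareto with tail exponent 2 serve as stand-ins for a thin-tailed and a fat-tailed model of the same phenomenon.

```python
import math

def gaussian_tail(k: float) -> float:
    """P(X > k) for a standard normal variable, via the complementary
    error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k: float, alpha: float = 2.0) -> float:
    """P(X > k) for a Pareto variable with minimum 1 and tail exponent
    alpha (an illustrative choice)."""
    return k ** (-alpha) if k > 1 else 1.0

for k in (3, 5, 10):
    g, p = gaussian_tail(k), pareto_tail(k)
    print(f"deviation of {k}: normal {g:.2e}, pareto {p:.2e}, ratio {p / g:.1e}")
# A 10-unit deviation is essentially impossible under the Gaussian
# model yet has probability 1% under the fat-tailed one.
```

An observer armed with the thin-tailed model will therefore diagnose "overestimation of rare events" in anyone whose implicit model is fat-tailed; the pathology lies in the model, not the subject.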

It has become popular to claim irrationality for GMO and other skepticism on the part of the general public—not realizing that there is in fact an "expert problem" and such skepticism is healthy and even necessary for survival. For instance, in The Rational Animal, the authors pathologize people for not accepting GMOs although "the World Health Organization has never found evidence of ill effects," a standard confusion of evidence of absence and absence of evidence. Such pathologizing is similar to behavioral researchers labeling hyperbolic discounting as "irrational" when in fact it is largely the researcher who has a very narrow model, and richer models make the "irrationality" go away.

These researchers fail to understand that humans may have precautionary principles against systemic risks, and can be skeptical of the untested consequences of policies for deeply rational reasons, even if they do not express such fears in academic format.

13 CONCLUSIONS

This formalization of the two different types of uncertainty about risk (local and systemic) makes clear when the precautionary principle is, and when it isn't, appropriate. The examples of GMOs and nuclear energy help to elucidate the application of these ideas. We hope this will help decision makers to avoid ruin in the future.

ACKNOWLEDGMENTS

Gloria Origgi, William Goodlad, Maya Bialik, David Boxenhorn, Jessica Woolley, Phil Hutchinson...


CONFLICTS OF INTEREST

One of the authors (Taleb) reports having received monetary compensation for lecturing on risk management and Black Swan risks from the Institute of Nuclear Power Operations, INPO, the main association in the United States, in 2011, in the wake of the Fukushima accident.

REFERENCES

[1] Asmussen, S., & Albrecher, H., 2010, Ruin Probabilities (Vol. 14), World Scientific.

[2] Bar-Yam, Y., 2013, The Limits of Phenomenology: From Behaviorism to Drug Testing and Engineering Design, arXiv:1308.3094.

[3] Bak, P., 2009, How Nature Works, Copernicus.

[4] Schulte, P., Alegret, L., Arenillas, I., Arz, J. A., Barton, P. J., Bown, P. R., ... & Willumsen, P. S., 2010, The Chicxulub asteroid impact and mass extinction at the Cretaceous-Paleogene boundary, Science, 327(5970), 1214-1218.

[5] Alroy, J., 2008, Dynamics of origination and extinction in the marine fossil record, Proceedings of the National Academy of Sciences, 105(Supplement 1), 11536-11542.

[6] Taleb, N.N., 2014, Silent Risk: Lectures on Fat Tails, (Anti)Fragility, and Asymmetric Exposures, SSRN.

[7] Rauch, E.M. and Bar-Yam, Y., 2006, Long-range interactions and evolutionary stability in a predator-prey system, Physical Review E, 73, 020903.

[8] Bar-Yam, Y., 2003, When Systems Engineering Fails—Toward Complex Systems Engineering, in International Conference on Systems, Man & Cybernetics, Vol. 2, IEEE Press, Piscataway, NJ, pp. 2021-2028.

[9] Thompson, P.B. (Ed.), 2007, Food Biotechnology in Ethical Perspective (Vol. 10), Springer.

[10] Read, R. and Hutchinson, P., 2014, What is Wrong With GM Food?, Philosophers' Magazine.

[11] Recent Trends in GE Adoption, Adoption of Genetically Engineered Crops in the U.S., USDA Economic Research Service.

[12] See e.g. List of poisonous plants, Wikipedia.

[13] Nowak, M. and Schuster, P., 1989, Error thresholds of replication in finite populations: mutation frequencies and the onset of Muller's ratchet, Journal of Theoretical Biology, 137, 375-395.

[14] Albino, D.K., Bertrand, K.Z., and Bar-Yam, Y., 2012, Food for fuel: The price of ethanol, arXiv:1210.6080.

[15] Qiu, J., 2012, China sacks officials over Golden Rice controversy, Nature News, 10.

[16] Harmon, A., 2013, Golden Rice: Lifesaver?

[17] Taleb, N.N., 2007, Black swans and the domains of statistics, The American Statistician, 61, 198-200.

[18] Taleb, N.N. and Tetlock, P.E., 2014, On the Difference between Binary Prediction and True Exposure with Implications for Forecasting Tournaments and Decision Making Research, http://dx.doi.org/10.2139/ssrn.2284964.

[19] Sunstein, C.R., 2003, Beyond the Precautionary Principle, U Chicago Law & Economics, Olin Working Paper No. 149; U of Chicago Public Law Working Paper No. 38.

[20] Bar-Yam, Y., Complex Systems: The Science of Prediction.

[21] Aris, A. and Leblanc, S., 2011, Maternal and fetal exposure to pesticides associated to genetically modified foods in Eastern Townships of Quebec, Canada, Reproductive Toxicology, 31(4), 528-533.

[22] Mesnage, R., Clair, E., Gress, S., Then, C., Székács, A., & Séralini, G.E., 2013, Cytotoxicity on human cells of Cry1Ab and Cry1Ac Bt insecticidal toxins alone or with a glyphosate-based herbicide, Journal of Applied Toxicology, 33(7), 695-699.

[23] Mesnage, R., Bernay, B., & Séralini, G.E., 2013, Ethoxylated adjuvants of glyphosate-based herbicides are active principles of human cell toxicity, Toxicology, 313(2), 122-128.

[24] López, S.L., Aiassa, D., Benitez-Leite, S., Lajmanovich, R., Manas, F., Poletta, G., ... & Carrasco, A.E., 2012, Pesticides used in South American GMO-based agriculture: A review of their effects on humans and animal models, Advances in Molecular Toxicology, 6, 41-75.

[25] Mezzomo, B.P., Miranda-Vilela, A.L., & Friere, I.S., 2013, Hematotoxicity of Bacillus thuringiensis as spore-crystal strains Cry1Aa, Cry1Ab, Cry1Ac or Cry2Aa in Swiss albino mice, J Hematol Thromb Dis, 1(1).

[26] Sears, M.E. and Genuis, S.J., 2012, Environmental determinants of chronic disease and medical approaches: recognition, avoidance, supportive therapy, and detoxification, Journal of Environmental and Public Health, 356798, doi:10.1155/2012/356798.

[27] Taleb, N.N., 2010, The Black Swan: The Impact of the Highly Improbable, Random House.


APPENDIX A
A SAMPLE OF IATROGENICS, "UNFORESEEN" CRITICAL ERRORS

Medical Intervention | Intended Effects | Unintended Effects
Rofecoxib (Vioxx, Ceoxx, Ceeoxx) | relieve osteoarthritis, dysmenorrhoea | myocardial infarctions [1]
Thalidomide (Immunoprin, Talidex, Talizer, Thalomid) | sedative | severe birth defects [2]
Fen-phen (Pondimin) | weight loss | valvular heart disease, pulmonary hypertension [3]
Diethylstilbestrol (Distilbene, Stilbestrol, Stilbetin) | reduce miscarriage | cancerous tumors in daughters exposed in utero [4]
Cerivastatin (Baycol, Lipobay) | lower cholesterol, reduce cardiovascular disease | rhabdomyolysis leading to renal failure [5]
lobotomy | improve mental disorder | loss of personality, intellect [6]
Troglitazone (Rezulin, Resulin, Romozin, Noscal) | antidiabetic, antiinflammatory | drug-induced hepatitis [7]
Terfenadine (Seldane, Triludan, Teldane) | antihistamine | cardiac arrhythmia [8]
Phenylpropanolamine (Accutrim) | appetite suppressant, stimulant, decongestant | increased stroke [9]
hospitalization | patient treatment and monitoring | nosocomial infection; medication errors [10]
antibiotics | clear bacterial infections | treatment-resistant bacteria [11]
antidepressants | relieve depression | increased suicide risk [12]
Encainide (Enkaid), flecainide (Tambocor) | reduced arrhythmia | increased mortality [13]
Acetaminophen (Tylenol) | pain relief | liver damage [14]
coronary angioplasty | increased blood flow | increased risk of death/myocardial infarction [15]
cosmetic surgery | improved aesthetics | infection, death, deformity, other malfunction [16]
obsessive hygiene | keeping safe from 'germs' | autoimmune disorders [17]
ear-tubes | otitis media with effusion | tympanosclerosis [18]

Table 2: Examples of iatrogenics in the medical field. The upper portion of the table shows medications and treatments whose use has been significantly reduced or completely discontinued due to their undesired effects (which were discovered only after significant damage had been done). The lower portion of the table lists examples where unintended side effects are significant but treatment continues to be applied due to expected benefits.


APPENDIX B
DEFINITION OF FAT TAILS AND DISTINCTION BETWEEN MEDIOCRISTAN AND EXTREMISTAN

Probability distributions range between extreme thin-tailed (Bernoulli) and extreme fat-tailed [6]. Among the categories of distributions that are often distinguished due to the convergence properties of moments are: 1) having a support that is compact but not degenerate, 2) subgaussian, 3) Gaussian, 4) subexponential, 5) power law with exponent greater than 3, 6) power law with exponent less than or equal to 3 and greater than 2, 7) power law with exponent less than or equal to 2. In particular, power law distributions have a finite mean only if the exponent is greater than 1, and have a finite variance only if the exponent exceeds 2.
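The moment condition can be verified directly for the canonical power law, a Pareto distribution with density $f(x) = \alpha x^{-\alpha-1}$ for $x \geq 1$ (our worked example, added for concreteness):

```latex
\mathbb{E}[X^k] \;=\; \int_1^{\infty} x^k \,\alpha x^{-\alpha-1}\, dx
\;=\; \alpha \int_1^{\infty} x^{k-\alpha-1}\, dx
\;=\;
\begin{cases}
\dfrac{\alpha}{\alpha-k}, & k < \alpha,\\[1ex]
+\infty, & k \geq \alpha.
\end{cases}
```

Setting $k = 1$ and $k = 2$ recovers the statement above: a finite mean requires exponent $\alpha > 1$ and a finite variance requires $\alpha > 2$.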

Our interest is in distinguishing between cases where tail events dominate impacts, as a formal definition of the boundary between the categories of distributions to be considered as Mediocristan and Extremistan. The natural boundary between these occurs at the subexponential class, which has the following property:

Let $X = (X_i)_{1 \leq i \leq n}$ be a sequence of independent and identically distributed random variables with support in the positive real numbers ($\mathbb{R}^+$), with cumulative distribution function $F$. The subexponential class of distributions is defined by [19], [20]:

$$\lim_{x \to +\infty} \frac{1 - F^{*2}(x)}{1 - F(x)} = 2$$

where $F^{*2} = F' * F$ is the cumulative distribution of $X_1 + X_2$, the sum of two independent copies of $X$. This implies that the probability that the sum $X_1 + X_2$ exceeds a value $x$ is twice the probability that either one separately exceeds $x$. Thus, every time the sum exceeds $x$, for large enough values of $x$, the value of the sum is due to either one or the other exceeding $x$—the maximum over the two variables—and the other of them contributes negligibly.

More generally, it can be shown that the sum of $n$ variables is dominated by the maximum of the values over those variables in the same way. Formally, the following two properties are equivalent to the subexponential condition [21], [22]. For a given $n \geq 2$, let $S_n = \sum_{i=1}^{n} x_i$ and $M_n = \max_{1 \leq i \leq n} x_i$:

a) $\lim_{x \to \infty} \dfrac{P(S_n > x)}{P(X > x)} = n$,

b) $\lim_{x \to \infty} \dfrac{P(S_n > x)}{P(M_n > x)} = 1$.

Thus the sum $S_n$ has the same magnitude as the largest sample $M_n$, which is another way of saying that tails play the most important role.

Intuitively, tail events in subexponential distributions should decline more slowly than an exponential distribution, for which large tail events should be irrelevant.

Indeed, one can show that subexponential distributions have no exponential moments:

$$\int_0^{\infty} e^{\epsilon x} \, dF(x) = +\infty$$

for all values of $\epsilon$ greater than zero. However, the converse isn't true, since distributions can have no exponential moments yet not satisfy the subexponential condition.

We note that if we choose to indicate deviations as negative values of the variable $x$, the same result holds by symmetry for extreme negative values, replacing $x \to +\infty$ with $x \to -\infty$. For two-tailed variables, we can separately consider positive and negative domains.
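The "sum behaves like the maximum" property (b) lends itself to a quick Monte Carlo check. The sketch below is our own illustration; the Pareto tail exponent of 1.5, the sample size of 10 draws, and the thresholds are arbitrary choices.

```python
import random

random.seed(42)

def max_share_given_exceedance(draw, n=10, threshold=100.0, trials=200_000):
    """Among trials where the sum of n i.i.d. draws exceeds the threshold,
    average fraction of the sum contributed by the single largest draw."""
    shares = []
    for _ in range(trials):
        xs = [draw() for _ in range(n)]
        s = sum(xs)
        if s > threshold:
            shares.append(max(xs) / s)
    return sum(shares) / len(shares)

# Fat-tailed: Pareto with tail exponent 1.5, via inverse-transform sampling.
pareto_share = max_share_given_exceedance(lambda: random.random() ** (-1 / 1.5))

# Thin-tailed: unit-rate exponential (threshold lowered so that the
# sum of 10 draws still produces exceedances).
expo_share = max_share_given_exceedance(lambda: random.expovariate(1.0),
                                        threshold=25.0)

print(f"Pareto: largest draw supplies {pareto_share:.0%} of an exceeding sum")
print(f"Exponential: largest draw supplies {expo_share:.0%}")
```

For the fat-tailed variable a high total is almost always the work of one dominant draw, while for the thin-tailed one the exceedance is shared roughly evenly among the draws, in line with the subexponential characterization above.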

APPENDIX C
MATHEMATICAL DERIVATIONS OF FRAGILITY

In this Section we provide a formal definition of fragility as the detrimental (negative) expected response from increasing uncertainty. Uncertainty itself arises from deviations from expected conditions. We also show why fragility is necessarily linked to increasing impact of larger deviations, i.e. how "size matters." The derivations explain, among other things,

• How risks associated with events that spread across the system are much more dangerous than those that are limited to local areas; the derivations characterize risk spreading as a non-concave response.

• Why errors of analysis are a problem in the presence of nonlinearity.

• Why polluting "a little" is qualitatively different from polluting "a lot."

• Why fat tails can arise from accelerating response.

Our analysis is designed to characterize the response of a system to a distribution of environmental perturbations. It differs from traditional response theory, which considers a system's behavior resulting from a single precisely defined perturbation [23]; from stochastic system dynamics, which typically considers noise as a well characterized Normal or other thin tailed distribution [24]; and from thermodynamics, which considers contact with a reservoir at a specified temperature [25]. These analyses do not adequately characterize a system in a far from equilibrium real world environment, often having fat tailed distributions of perturbations, and whose distributions may be poorly characterized.

In order to describe the system response we assume a single dimensional measure of the structural integrity of the system, $s$. The damage, dissolution or destruction of the system is measured by the deviation from a reference value, described by negative values of $s$ of increasing magnitude. No assumption is made of the existence of corresponding positive values of improvement in structure, though such may exist.

We note that this mathematical framework enables physical system damage to be mapped onto a single dimension, as is done in market pricing of value, and thus we adopt for fragility the terminology "vega," the price sensitivity to uncertainty associated with derivatives contracts.

[Figure 9: A definition of fragility as left tail-vega sensitivity, in other words how an increase in uncertainty (which includes errors) affects adverse outcomes. The figure shows the effect of the perturbation of a measure of the lower semi-deviation $s^-$ on the tail integral $\xi$ of $(x - \Omega)$ below $K$, $\Omega$ being a centering constant. Centrally, our detection of fragility does not require the specification of the probability distribution.]

Intrinsic and Inherited Fragility: Our definition of fragility is two-fold. First, of concern is the intrinsic fragility: the shape of the probability distribution of a variable and its sensitivity to $s^-$, a parameter controlling the left side of its own distribution. But we do not often directly observe the statistical distribution of objects, and, if we did, it would be difficult to measure their tail-vega sensitivity. Nor do we need to specify such a distribution: we can gauge the response of a given object to the volatility of an external stressor that affects it. For instance, an option is usually analyzed with respect to the scale of the distribution of the "underlying" security, not its own; the fragility of a coffee cup is determined as a response to a given source of randomness or stress; that of a house with respect to, among other sources, the distribution of earthquakes. This fragility coming from the effect of the underlying is called inherited fragility. The transfer function, which we present next, allows us to assess the effect, increase or decrease in fragility, coming from changes in the underlying source of stress.

Transfer Function: A nonlinear exposure to a certain source of randomness maps into tail-vega sensitivity (hence fragility). We prove that

Inherited Fragility $\Leftrightarrow$ Concavity in exposure on the left side of the distribution

and build $H$, a transfer function giving an exact mapping of tail vega sensitivity to the second derivative of a function. The transfer function will allow us to probe parts of the distribution and generate a fragility-detection heuristic covering both physical fragility and model error.

Taking $z$ as a stress level and $\Pi(z)$ the harm function, it suffices to see that, with $n > 1$,

$$\Pi(nz) < n\,\Pi(z) \quad \text{for all} \quad 0 < nz < Z^*$$

where $Z^*$ is the level (not necessarily specified) at which the item is broken. Such an inequality leads to $\Pi(z)$ having a negative second derivative at the initial value $z$.

So if a coffee cup is less harmed by $n$ times a stressor of intensity $Z$ than once by a stressor of $nZ$, then harm (as a negative function) needs to be concave to stressors up to the point of breaking; such stricture is imposed by the structure of survival probabilities and the distribution of harmful events, and has nothing to do with subjective utility or some other figments. Just as with a large stone hurting more than the equivalent weight in pebbles, if, for a human, jumping one millimeter caused an exact linear fraction of the damage of, say, jumping to the ground from thirty feet, then the person would be already dead from cumulative harm. Actually a simple computation shows that he would have expired within hours from touching objects or pacing in his living room, given the multitude of such stressors and their total effect. The fragility that comes from linearity is immediately visible, so we rule it out because the object would be already broken and the person already dead. The relative frequency of ordinary events compared to extreme events is the determinant. In the financial markets, there are at least ten thousand times more events of 0.1% deviations than events of 10%. There are close to 8,000 micro-earthquakes daily on planet earth, that is, those below 2 on the Richter scale—about 3 million a year. These are totally harmless, and, with 3 million per year, you would need them to be so. But shocks of intensity 6 and higher on the scale make the newspapers. Accordingly, we are necessarily immune to the cumulative effect of small deviations, or shocks of very small magnitude, which implies that these affect us disproportionally less (that is, nonlinearly less) than larger ones.

Model error is not necessarily mean preserving. $s^-$, the lower absolute semi-deviation, does not just express changes in overall dispersion in the distribution, such as for instance the "scaling" case, but also changes in the mean, i.e. when the upper semi-deviation from $\Omega$ to infinity is invariant, or even declines in a compensatory manner to make the overall mean absolute deviation unchanged. This would be the case when we shift the distribution instead of rescaling it. Thus the same vega-sensitivity can also express sensitivity to a stressor (dose increase) in medicine or other fields in its effect on either tail. Thus $s^-(l)$ will allow us to express the sensitivity to the "disorder cluster" in Antifragile: i) uncertainty, ii) variability, iii) imperfect, incomplete knowledge, iv) chance, v) chaos, vi) volatility, vii) disorder, viii) entropy, ix) time, x) the unknown, xi) randomness, xii) turmoil, xiii) stressor, xiv) error, xv) dispersion of outcomes.


C.1 Tail Sensitivity to Uncertainty

The analysis of ruin problems led in the main paper to a number of conclusions based upon considering events causing destruction above a well defined level K, the ruin threshold, and their cumulative probability F(K).

Here we show that the same conclusions apply to the case of events with different levels of damage, where the extent of that damage matters. We continue to apply a threshold that differentiates events that do not cause high levels of devastation from those that do. The analysis thus also applies to "near ruin" problems: the difference between destruction of 20% and 30% of the world's forests matters, and our conclusions apply to this case as well. This will also allow us to examine how systems approach ruin.

Formally, we consider a system where events cause a certain level of losses, X. The losses can be anything that can be quantified and is undesirable. The variable X can be an economic measure (a P/L for a portfolio, unemployment or GDP for a country, the value of a company's assets), or epidemiological (number of victims of a pandemic), ecological (deforestation, biodiversity loss), or other measures consisting of a single measure for which larger values correspond to an undesirable outcome. For consistency with conventions in economic value, we treat the increasing levels of loss to be represented by negative numbers.

We construct a measure of "vega", that is, the sensitivity to uncertainty, in the left tails of the distribution that depends on the variations of s⁻, the semi-deviation below a certain level W, chosen in the L¹ norm in order to ensure its existence under "fat tailed" distributions with finite first semi-moment. In fact s⁻ would exist as a measure even in the case of undefined moments to the right side of W.

Let X be a random variable, the distribution of which is one among a one-parameter family of pdf f_λ, λ ∈ I ⊂ ℝ. We consider a fixed reference value Ω and, from this reference, the "raw" left-semi-absolute deviation:²

\[ s^{-}(\lambda) = \int_{-\infty}^{\Omega} (\Omega - x)\, f_\lambda(x)\, dx \tag{1} \]

We assume that λ ↦ s⁻(λ) is continuous, strictly increasing and spans the whole range ℝ⁺ = [0, +∞), so that we may use the left-semi-absolute deviation s⁻ as a parameter by considering the inverse function λ(s): ℝ⁺ → I, defined by s⁻(λ(s)) = s for s ∈ ℝ⁺. This condition is for instance satisfied if, for any given x < Ω, the probability is a continuous and increasing function of λ. Indeed, denoting

\[ F_\lambda(x) = \mathbb{P}_{f_\lambda}(X < x) = \int_{-\infty}^{x} f_\lambda(t)\, dt, \tag{2} \]

an integration by parts yields:

\[ s^{-}(\lambda) = \int_{-\infty}^{\Omega} F_\lambda(x)\, dx \]

2. We use a measure related to the left-semi-absolute deviation, or roughly half the mean absolute deviation (the part in the negative domain), because 1) distributions are not symmetric and might have changes on the right of Ω that are not of interest to us, and 2) standard deviation requires a finite second moment. Further, we do not adjust s⁻ by its probability, with no loss of generality. Simply, probability in the negative domain is close to 1/2 and would not change significantly in response to changes in parameters. Probabilities in the tails are nonlinear to changes, not those in the body of the distribution.
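As a numerical sanity check of Eq. (1) and the integration-by-parts identity (an illustrative sketch, not part of the paper: we assume Ω = 0 and take f₁ to be the standard normal density):

```python
import math

OMEGA = 0.0  # reference point Ω (assumed for this illustration)

def pdf(x):
    # standard normal density f_1
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    # standard normal distribution function F_1
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def riemann(f, a, b, n=200_000):
    # midpoint Riemann sum, accurate enough for this check
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# s-(λ=1) two ways: Eq. (1) directly, and via the integration-by-parts identity
s_direct = riemann(lambda x: (OMEGA - x) * pdf(x), -12.0, OMEGA)
s_parts = riemann(cdf, -12.0, OMEGA)
print(s_direct, s_parts)  # both ≈ 1/√(2π) ≈ 0.3989 for the standard normal
```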

This is the case when λ is a scaling parameter, i.e., X ∼ Ω + λ(X₁ − Ω); indeed one has in this case

\[ F_\lambda(x) = F_1\!\left(\Omega + \frac{x - \Omega}{\lambda}\right), \qquad \frac{\partial F_\lambda}{\partial \lambda}(x) = \frac{\Omega - x}{\lambda^2}\, f_\lambda(x) \qquad \text{and} \qquad s^{-}(\lambda) = \lambda\, s^{-}(1). \]

It is also the case when λ is a shifting parameter, i.e. X ∼ X₀ − λ; indeed, in this case F_λ(x) = F₀(x + λ) and ∂s⁻/∂λ = F_λ(Ω).

For K < Ω and s ∈ ℝ⁺, let:

\[ \xi(K, s^{-}) = \int_{-\infty}^{K} (\Omega - x)\, f_{\lambda(s^{-})}(x)\, dx \tag{3} \]

In particular, ξ(Ω, s⁻) = s⁻. We assume, in a first step, that the function ξ(K, s⁻) is differentiable on (−∞, Ω] × ℝ⁺. The K-left-tail-vega sensitivity of X at stress level K < Ω and deviation level s⁻ > 0 for the pdf f_λ is:

\[ V(X, f_\lambda, K, s^{-}) = \frac{\partial \xi}{\partial s^{-}}(K, s^{-}) = \left( \int_{-\infty}^{K} (\Omega - x)\, \frac{\partial f_\lambda}{\partial \lambda}(x)\, dx \right) \left( \frac{ds^{-}}{d\lambda} \right)^{-1} \tag{4} \]

As in the many practical instances where threshold effects are involved, it may occur that ξ does not depend smoothly on s⁻. We therefore also define a finite-difference version of the vega-sensitivity as follows:

\[ V(X, f_\lambda, K, s^{-}, \Delta s) = \frac{1}{2\Delta s} \left( \xi(K, s^{-} + \Delta s) - \xi(K, s^{-} - \Delta s) \right) = \int_{-\infty}^{K} (\Omega - x)\, \frac{f_{\lambda(s^{-} + \Delta s)}(x) - f_{\lambda(s^{-} - \Delta s)}(x)}{2\Delta s}\, dx \tag{5} \]

Hence omitting the input Δs implicitly assumes that Δs → 0.
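The finite-difference definition can be exercised on a concrete family (a sketch assuming the Gaussian scaling case with Ω = 0 and λ = σ, for which a direct computation gives s⁻(λ) = λ/√(2π) and ξ(K, s⁻(λ)) = λφ(K/λ), φ the standard normal pdf); at K = Ω, the identity ξ(Ω, s⁻) = s⁻ forces the sensitivity to equal 1:

```python
import math

SQ2PI = math.sqrt(2 * math.pi)
phi = lambda z: math.exp(-z * z / 2) / SQ2PI  # standard normal pdf

def xi(K, s):
    # Gaussian scaling family with Ω = 0 and λ = σ (assumed for illustration):
    # s-(λ) = λ/√(2π), so λ(s) = s √(2π), and ξ(K, s-(λ)) = λ φ(K/λ)
    lam = s * SQ2PI
    return lam * phi(K / lam)

s, ds = 1.0 / SQ2PI, 1e-6
# Eq. (5) with K = Ω = 0
V_at_center = (xi(0.0, s + ds) - xi(0.0, s - ds)) / (2 * ds)
print(round(V_at_center, 6))  # 1.0, consistent with ξ(Ω, s-) = s-
```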

Note that ξ(K, s⁻) = (Ω − E(X | X < K)) P_{f_λ}(X < K). It can be decomposed into two parts:

\[ \xi\!\left(K, s^{-}(\lambda)\right) = (\Omega - K)\, F_\lambda(K) + P_\lambda(K) \tag{6} \]

\[ P_\lambda(K) = \int_{-\infty}^{K} (K - x)\, f_\lambda(x)\, dx \tag{7} \]

where the first part (Ω − K)F_λ(K) is proportional to the probability of the variable being below the stress level K and the second part P_λ(K) is the expectation of the amount by which X is below K (counting 0 when it is not). Making a parallel with financial options, while s⁻(λ) is a "put at-the-money", ξ(K, s⁻) is the sum of a put struck at K and a digital put also struck at K with amount Ω − K; it can equivalently be seen as a put struck at Ω with a down-and-in European barrier at K.

Figure 10: The different curves of F_λ(K) and F_λ′(K) showing the difference in sensitivity to changes at different levels of K.

Letting λ = λ(s⁻) and integrating by parts yields

\[ \xi\!\left(K, s^{-}(\lambda)\right) = (\Omega - K)\, F_\lambda(K) + \int_{-\infty}^{K} F_\lambda(x)\, dx = \int_{-\infty}^{\Omega} F^{K}_\lambda(x)\, dx \tag{8} \]

where F^K_λ(x) = F_λ(min(x, K)) = min(F_λ(x), F_λ(K)), so that

\[ V(X, f_\lambda, K, s^{-}) = \frac{\partial \xi}{\partial s^{-}}(K, s^{-}) = \frac{\displaystyle\int_{-\infty}^{\Omega} \frac{\partial F^{K}_\lambda}{\partial \lambda}(x)\, dx}{\displaystyle\int_{-\infty}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, dx} \tag{9} \]

For finite differences

\[ V(X, f_\lambda, K, s^{-}, \Delta s) = \frac{1}{2\Delta s} \int_{-\infty}^{\Omega} \Delta F^{K}_{\lambda, \Delta s}(x)\, dx \tag{10} \]

where λ⁺ₛ and λ⁻ₛ are such that s(λ⁺ₛ) = s⁻ + Δs, s(λ⁻ₛ) = s⁻ − Δs, and ΔF^K_{λ,Δs}(x) = F^K_{λ⁺ₛ}(x) − F^K_{λ⁻ₛ}(x).
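The double identity in Eq. (8) can be verified numerically (an illustrative sketch assuming a standard normal with Ω = 0 and K = −1; both sides should then equal φ(−1)):

```python
import math

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal cdf
phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

OMEGA, K = 0.0, -1.0  # assumed for this illustration

def riemann(f, a, b, n=200_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Eq. (8), middle form: (Ω - K) F(K) + ∫_{-∞}^{K} F(x) dx
lhs = (OMEGA - K) * Phi(K) + riemann(Phi, -12.0, K)
# Eq. (8), right form: ∫_{-∞}^{Ω} F^K(x) dx with F^K(x) = min(F(x), F(K))
rhs = riemann(lambda x: min(Phi(x), Phi(K)), -12.0, OMEGA)
print(lhs, rhs)  # both ≈ φ(-1) ≈ 0.2420 for the standard normal
```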

C.2 Mathematical Expression of Fragility

In essence, fragility is the sensitivity of a given risk measure to an error in the estimation of the (possibly one-sided) deviation parameter of a distribution, especially due to the fact that the risk measure involves parts of the distribution—tails—that are away from the portion used for estimation. The risk measure then assumes certain extrapolation rules that have first order consequences. These consequences are even more amplified when the risk measure applies to a variable that is derived from that used for estimation, when the relation between the two variables is strongly nonlinear, as is often the case.

C.2.1 Definition of Fragility: The Intrinsic Case

The local fragility of a random variable X_λ depending on parameter λ, at stress level K and semi-deviation level s⁻(λ) with pdf f_λ, is its K-left-tailed semi-vega sensitivity V(X, f_λ, K, s⁻).

The finite-difference fragility of X_λ at stress level K and semi-deviation level s⁻(λ) ± Δs with pdf f_λ is its K-left-tailed finite-difference semi-vega sensitivity V(X, f_λ, K, s⁻, Δs).

In this definition, the fragility relies on the unstated assumptions made when extrapolating the distribution of X_λ from areas used to estimate the semi-absolute deviation s⁻(λ), around Ω, to areas around K on which the risk measure ξ depends.

C.2.2 Definition of Fragility: The Inherited Case

Next we consider the particular case where a random variable Y = φ(X) depends on another source of risk X, itself subject to a parameter λ. Let us keep the above notations for X, while we denote by g_λ the pdf of Y, Ω_Y = φ(Ω), and u⁻(λ) the left-semi-deviation of Y. Given a "strike" level L = φ(K), let us define, as in the case of X:

\[ \zeta\!\left(L, u^{-}(\lambda)\right) = \int_{-\infty}^{L} (\Omega_Y - y)\, g_\lambda(y)\, dy \tag{11} \]

The inherited fragility of Y with respect to X at stress level L = φ(K) and left-semi-deviation level s⁻(λ) of X is the partial derivative:

\[ V_X\!\left(Y, g_\lambda, L, s^{-}(\lambda)\right) = \frac{\partial \zeta}{\partial s^{-}}\!\left(L, u^{-}(\lambda)\right) = \left( \int_{-\infty}^{L} (\Omega_Y - y)\, \frac{\partial g_\lambda}{\partial \lambda}(y)\, dy \right) \left( \frac{ds^{-}}{d\lambda} \right)^{-1} \tag{12} \]

Note that the stress level and the pdf are defined for the variable Y, but the parameter which is used for differentiation is the left-semi-absolute deviation of X, s⁻(λ). Indeed, in this process, one first measures the distribution of X and its left-semi-absolute deviation, then the function φ is applied, using some mathematical model of Y with respect to X, and the risk measure ζ is estimated. If an error is made when measuring s⁻(λ), its impact on the risk measure of Y is amplified by the ratio given by the "inherited fragility".

Once again, one may use finite differences and define the finite-difference inherited fragility of Y with respect to X, by replacing, in the above equation, differentiation by finite differences between values λ⁺ and λ⁻, where s⁻(λ⁺) = s⁻ + Δs and s⁻(λ⁻) = s⁻ − Δs.


C.3 Effect of Nonlinearity on Intrinsic Fragility

Let us study the case of a random variable Y = φ(X), the pdf g_λ of which also depends on parameter λ, related to a variable X by the nonlinear function φ. We are now interested in comparing their intrinsic fragilities. We shall say, for instance, that Y is more fragile at the stress level L and left-semi-deviation level u⁻(λ) than the random variable X, at stress level K and left-semi-deviation level s⁻(λ), if the L-left-tailed semi-vega sensitivity of Y_λ is higher than the K-left-tailed semi-vega sensitivity of X_λ:

\[ V(Y, g_\lambda, L, u^{-}) > V(X, f_\lambda, K, s^{-}) \tag{13} \]

One may use finite differences to compare the fragility of two random variables: V(Y, g_λ, L, Δu) > V(X, f_λ, K, Δs). In this case, finite variations must be comparable in size, namely Δu/u⁻ = Δs/s⁻.

Let us assume, to start, that φ is differentiable, strictly increasing and scaled so that Ω_Y = φ(Ω) = Ω. We also assume that, for any given x < Ω, ∂F_λ/∂λ(x) > 0. In this case, as observed above, λ ↦ s⁻(λ) is also increasing.

Let us denote G_λ(y) = P_{g_λ}(Y < y). We have:

\[ G_\lambda(\varphi(x)) = \mathbb{P}_{g_\lambda}(Y < \varphi(x)) = \mathbb{P}_{f_\lambda}(X < x) = F_\lambda(x). \tag{14} \]

Hence, if ζ(L, u⁻) denotes the equivalent of ξ(K, s⁻) with variable (Y, g_λ) instead of (X, f_λ), we have:

\[ \zeta\!\left(L, u^{-}(\lambda)\right) = \int_{-\infty}^{\Omega} F^{K}_\lambda(x)\, \frac{d\varphi}{dx}(x)\, dx \tag{15} \]

because φ is increasing and min(φ(x), φ(K)) = φ(min(x, K)). In particular

\[ u^{-}(\lambda) = \zeta\!\left(\Omega, u^{-}(\lambda)\right) = \int_{-\infty}^{\Omega} F_\lambda(x)\, \frac{d\varphi}{dx}(x)\, dx \tag{16} \]

The L-left-tail-vega sensitivity of Y is therefore:

\[ V\!\left(Y, g_\lambda, L, u^{-}(\lambda)\right) = \frac{\displaystyle\int_{-\infty}^{\Omega} \frac{\partial F^{K}_\lambda}{\partial \lambda}(x)\, \frac{d\varphi}{dx}(x)\, dx}{\displaystyle\int_{-\infty}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, \frac{d\varphi}{dx}(x)\, dx} \tag{17} \]

For finite variations:

\[ V(Y, g_\lambda, L, u^{-}(\lambda), \Delta u) = \frac{1}{2\Delta u} \int_{-\infty}^{\Omega} \Delta F^{K}_{\lambda, \Delta u}(x)\, \frac{d\varphi}{dx}(x)\, dx \tag{18} \]

where λ⁺ᵤ and λ⁻ᵤ are such that u(λ⁺ᵤ) = u⁻ + Δu, u(λ⁻ᵤ) = u⁻ − Δu, and ΔF^K_{λ,Δu}(x) = F^K_{λ⁺ᵤ}(x) − F^K_{λ⁻ᵤ}(x).

Next, Theorem 1 proves how a concave transformation φ(x) of a random variable x produces fragility.

Fragility Transfer Theorem

Theorem 1: Let, with the above notations, φ: ℝ → ℝ be a twice differentiable function such that φ(Ω) = Ω and, for any x < Ω, dφ/dx(x) > 0. The random variable Y = φ(X) is more fragile at level L = φ(K) and pdf g_λ than X at level K and pdf f_λ if, and only if, one has:

\[ \int_{-\infty}^{\Omega} H^{K}_\lambda(x)\, \frac{d^2\varphi}{dx^2}(x)\, dx < 0 \]

where

\[ H^{K}_\lambda(x) = \frac{\partial P^{K}_\lambda}{\partial \lambda}(x) \Big/ \frac{\partial P^{K}_\lambda}{\partial \lambda}(\Omega) \;-\; \frac{\partial P_\lambda}{\partial \lambda}(x) \Big/ \frac{\partial P_\lambda}{\partial \lambda}(\Omega) \tag{19} \]

and where

\[ P_\lambda(x) = \int_{-\infty}^{x} F_\lambda(t)\, dt \tag{20} \]

is the price of the "put option" on X_λ with "strike" x and

\[ P^{K}_\lambda(x) = \int_{-\infty}^{x} F^{K}_\lambda(t)\, dt \]

is that of a "put option" with "strike" x and "European down-and-in barrier" at K.

H can be seen as a transfer function, expressed as the difference between two ratios. For a given level x of the random variable on the left hand side of Ω, the second one is the ratio of the vega of a put struck at x normalized by that of a put "at the money" (i.e. struck at Ω), while the first one is the same ratio, but where puts struck at x and Ω are "European down-and-in options" with triggering barrier at the level K.

The proof is detailed in [26] and [6].

Fragility Exacerbation Theorem

Theorem 2: With the above notations, there exists a threshold Θ_λ < Ω such that, if K ≤ Θ_λ, then H^K_λ(x) > 0 for x ∈ (−∞, κ_λ] with K < κ_λ < Ω. As a consequence, if the change of variable φ is concave on (−∞, κ_λ] and linear on [κ_λ, Ω], then Y is more fragile at L = φ(K) than X at K.

One can prove that, for a monomodal distribution, Θ_λ < κ_λ < Ω (see discussion below), so whatever the stress level K below the threshold Θ_λ, it suffices that the change of variable φ be concave on the interval (−∞, Θ_λ] and linear on [Θ_λ, Ω] for Y to become more fragile at L than X at K. In practice, as long as the change of variable is concave around the stress level K and has limited convexity/concavity away from K, the fragility of Y is greater than that of X.
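The threshold Θ_λ of Theorem 2 can be located numerically in a concrete case (a sketch assuming the Gaussian scaling family with Ω = 0 and λ = σ = 1; a short computation reduces the defining condition ∂P^K_λ/∂λ(Ω) = ∂P_λ/∂λ(Ω) to (1 + K²)e^(−K²/2) = 1 for K < 0):

```python
import math

def g(k):
    # For a standard Gaussian scaling family (Ω = 0, λ = σ = 1), the threshold
    # condition ∂P^K/∂λ(Ω) = ∂P/∂λ(Ω) reduces to (1 + k²) exp(-k²/2) = 1
    return (1 + k * k) * math.exp(-k * k / 2) - 1

# bisection on [-3, -1], where g changes sign
lo, hi = -3.0, -1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
theta = 0.5 * (lo + hi)
print(round(theta, 3))  # ≈ -1.585, matching the value quoted for the Gaussian case
```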

Figure 11 shows the shape of H^K_λ(x) in the case of a Gaussian distribution where λ is a simple scaling parameter (λ is the standard deviation σ) and Ω = 0. We represented K = −2σ, while in this Gaussian case Θ_λ = −1.585σ.

Figure 11: The transfer function H for different portions of the distribution: its sign flips in the region slightly below Ω.

DISCUSSION

Monomodal case

We say that the family of distributions (f_λ) is left-monomodal if there exists μ_λ < Ω such that ∂f_λ/∂λ ≥ 0 on (−∞, μ_λ] and ∂f_λ/∂λ ≤ 0 on [μ_λ, Ω]. In this case ∂P_λ/∂λ is a convex function on the left half-line (−∞, μ_λ], then concave after the inflexion point μ_λ. For K ≤ μ_λ, the function ∂P^K_λ/∂λ coincides with ∂P_λ/∂λ on (−∞, K], then is a linear extension, following the tangent to the graph of ∂P_λ/∂λ at K (see Figure 12). The value of ∂P^K_λ/∂λ(Ω) corresponds to the intersection point of this tangent with the vertical axis. It increases with K, from 0 when K → −∞ to a value above ∂P_λ/∂λ(Ω) when K = μ_λ. The threshold Θ_λ corresponds to the unique value of K such that ∂P^K_λ/∂λ(Ω) = ∂P_λ/∂λ(Ω). The functions

\[ G_\lambda(x) = \frac{\partial P_\lambda}{\partial \lambda}(x) \Big/ \frac{\partial P_\lambda}{\partial \lambda}(\Omega) \qquad \text{and} \qquad G^{K}_\lambda(x) = \frac{\partial P^{K}_\lambda}{\partial \lambda}(x) \Big/ \frac{\partial P^{K}_\lambda}{\partial \lambda}(\Omega) \]

are such that G_λ(Ω) = G^K_λ(Ω) = 1 and are proportional for x ≤ K, the latter being linear on [K, Ω]. On the other hand, if K < Θ_λ then ∂P^K_λ/∂λ(Ω) < ∂P_λ/∂λ(Ω) and G_λ(K) < G^K_λ(K), which implies that G_λ(x) < G^K_λ(x) for x ≤ K. An elementary convexity analysis shows that, in this case, the equation G_λ(x) = G^K_λ(x) has a unique solution κ_λ with μ_λ < κ_λ < Ω. The "transfer" function H^K_λ(x) is positive for x < κ_λ, in particular when x ≤ μ_λ, and negative for κ_λ < x < Ω.

Figure 12: The distribution of G_λ and the various derivatives of the unconditional shortfalls.

Scaling Parameter

We assume here that λ is a scaling parameter, i.e. X_λ = Ω + λ(X₁ − Ω). In this case, as we saw above, we have

� ⌦). In this case, as we saw above, we have

f�

(x) =1

�f1

✓⌦+

x� ⌦

◆, F

(x) = F1

✓⌦+

x� ⌦

P�

(x) = �P1

✓⌦+

x� ⌦

◆and s�(�) = �s�(1).

Hence

⇠(K, s�(�)) = (⌦�K)F1

✓⌦+

K � ⌦

+ �P1

✓⌦+

K � ⌦

◆(21)

\[ \frac{\partial \xi}{\partial s^{-}}(K, s^{-}) = \frac{1}{s^{-}(1)}\, \frac{\partial \xi}{\partial \lambda}(K, \lambda) = \frac{1}{s^{-}(\lambda)} \left( P_\lambda(K) + (\Omega - K)\, F_\lambda(K) + (\Omega - K)^2\, f_\lambda(K) \right) \tag{22} \]
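Eq. (22) can be checked against the finite-difference definition of Eq. (5) (a sketch assuming the Gaussian scaling case with Ω = 0 and λ = σ, where ξ(K, s⁻(λ)) reduces to λφ(K/λ), φ the standard normal pdf):

```python
import math

SQ2PI = math.sqrt(2 * math.pi)
phi = lambda z: math.exp(-z * z / 2) / SQ2PI            # standard normal pdf
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal cdf

def xi(K, lam):
    # Gaussian scaling family, Ω = 0, λ = σ: the tail measure collapses to λ φ(K/λ)
    return lam * phi(K / lam)

K, lam = -2.0, 1.0
s = lam / SQ2PI                                       # s-(λ) = λ/√(2π)
ds = 1e-6
lam_p, lam_m = (s + ds) * SQ2PI, (s - ds) * SQ2PI     # λ(s ± Δs)
V_fd = (xi(K, lam_p) - xi(K, lam_m)) / (2 * ds)       # finite difference, Eq. (5)

# closed form, Eq. (22): (P_λ(K) + (Ω - K) F_λ(K) + (Ω - K)² f_λ(K)) / s-(λ)
P = K * Phi(K / lam) + lam * phi(K / lam)
F = Phi(K / lam)
f = phi(K / lam) / lam
V_cf = (P + (0.0 - K) * F + (0.0 - K) ** 2 * f) / s
print(V_fd, V_cf)  # agree to high precision; both ≈ 5 e^{-2} ≈ 0.677
```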

When we apply a nonlinear transformation φ, the action of the parameter λ is no longer a scaling: when small negative values of X are multiplied by a scalar λ, so are large negative values of X. The scaling λ applies to small negative values of the transformed variable Y with a coefficient dφ/dx(0), but large negative values are subject to a different coefficient dφ/dx(K), which can potentially be very different.

C.4 Fragility Drift

Fragility is defined as the sensitivity—i.e. the first partial derivative—of the tail estimate ξ with respect to the left semi-deviation s⁻. Let us now define the fragility drift:

\[ V'_K(X, f_\lambda, K, s^{-}) = \frac{\partial^2 \xi}{\partial K\, \partial s^{-}}(K, s^{-}) \tag{23} \]

In practice, fragility varies with the stress level: by definition, we know that ξ(Ω, s⁻) = s⁻, hence V(X, f_λ, Ω, s⁻) = 1. The fragility drift measures the speed at which fragility departs from its original value of 1 when K departs from the center Ω.

Second-order Fragility

The second-order fragility is the second order derivative of the tail estimate ξ with respect to the semi-absolute deviation s⁻:

\[ V'_{s^{-}}(X, f_\lambda, K, s^{-}) = \frac{\partial^2 \xi}{(\partial s^{-})^2}(K, s^{-}) \]

As we shall see later, the second-order fragility drives the bias in the estimation of stress tests when the value of s⁻ is subject to uncertainty, through Jensen's inequality.


C.5 Definitions of Robustness and Antifragility

Antifragility is not the simple opposite of fragility, as we saw in Table 1. Measuring antifragility, on the one hand, consists of the flipside of fragility on the right-hand side, but on the other hand requires a control on the robustness of the probability distribution on the left-hand side. From that aspect, unlike fragility, antifragility cannot be summarized in one single figure but necessitates at least two of them.

When a random variable depends on another source of randomness, Y_λ = φ(X_λ), we shall study the antifragility of Y_λ with respect to that of X_λ and to the properties of the function φ.

DEFINITION OF ROBUSTNESS

Let (X_λ) be a one-parameter family of random variables with pdf f_λ. Robustness is an upper control on the fragility of X, which resides on the left hand side of the distribution.

We say that f_λ is b-robust beyond stress level K < Ω if V(X_λ, f_λ, K′, s⁻(λ)) ≤ b for any K′ ≤ K. In other words, the robustness of f_λ on the half-line (−∞, K] is

\[ R_{(-\infty, K]}(X_\lambda, f_\lambda, K, s^{-}(\lambda)) = \max_{K' \le K} V(X_\lambda, f_\lambda, K', s^{-}(\lambda)), \tag{24} \]

so that b-robustness simply means

\[ R_{(-\infty, K]}(X_\lambda, f_\lambda, K, s^{-}(\lambda)) \le b \]

We also define b-robustness over a given interval [K₁, K₂] by the same inequality being valid for any K′ ∈ [K₁, K₂]. In this case we use

\[ R_{[K_1, K_2]}(X_\lambda, f_\lambda, K, s^{-}(\lambda)) = \max_{K_1 \le K' \le K_2} V(X_\lambda, f_\lambda, K', s^{-}(\lambda)). \tag{25} \]

Note that the lower R, the tighter the control and the more robust the distribution f_λ.

Once again, the definition of b-robustness can be transposed using finite differences, V(X_λ, f_λ, K′, s⁻(λ), Δs).

In practical situations, setting a material upper bound b to the fragility is particularly important: one needs to be able to come up with actual estimates of the impact of the error on the estimate of the left-semi-deviation. However, when dealing with certain classes of models, such as Gaussian, exponential or stable distributions, we may be led to consider asymptotic definitions of robustness, related to those classes.
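The measure in Eq. (24) amounts to a simple maximization over the half-line (a sketch again assuming the Gaussian scaling case with Ω = 0 and λ = σ = 1, for which the scaling-case formulas yield the closed form V(K′) = (1 + K′²)e^(−K′²/2)):

```python
import math

# Fragility V(K') for the Gaussian scaling example (Ω = 0, λ = σ = 1), derived
# from the scaling-case closed form: V(K') = (1 + K'^2) exp(-K'^2 / 2).
V = lambda k: (1 + k * k) * math.exp(-k * k / 2)

# robustness of Eq. (24): sup of V over the half-line (-∞, K], here on a grid
K = -0.5
grid = [K - i * 0.001 for i in range(20_000)]   # K' from K down to K - 20
R = max(V(k) for k in grid)
print(round(R, 3))  # ≈ 1.213, attained at K' = -1 (V(0) = V(-1.585) = 1)
```

Note how the maximum sits strictly inside the tail: fragility first rises as K′ moves away from the center, peaks, then decays, so the robustness bound b must cover the interior peak, not just the endpoint K.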

For instance, for a given decay exponent a > 0, assuming that f_λ(x) = O(e^{ax}) when x → −∞, the a-exponential asymptotic robustness of X_λ below the level K is:

\[ R_{\mathrm{exp}}(X_\lambda, f_\lambda, K, s^{-}(\lambda), a) = \max_{K' \le K} \left( e^{a(\Omega - K')}\, V(X_\lambda, f_\lambda, K', s^{-}(\lambda)) \right) \tag{26} \]

If one of the two quantities e^{a(Ω−K′)} f_λ(K′) or e^{a(Ω−K′)} V(X_λ, f_λ, K′, s⁻(λ)) is not bounded from above when K′ → −∞, then R_exp = +∞ and X_λ is considered as not a-exponentially robust.

Similarly, for a given power α > 0, and assuming that f_λ(x) = O(x^{−α}) when x → −∞, the α-power asymptotic robustness of X_λ below the level K is:

\[ R_{\mathrm{pow}}(X_\lambda, f_\lambda, K, s^{-}(\lambda), \alpha) = \max_{K' \le K} \left( (\Omega - K')^{\alpha - 2}\, V(X_\lambda, f_\lambda, K', s^{-}(\lambda)) \right) \]

If one of the two quantities

\[ (\Omega - K')^{\alpha}\, f_\lambda(K'), \qquad (\Omega - K')^{\alpha - 2}\, V(X_\lambda, f_\lambda, K', s^{-}(\lambda)) \]

is not bounded from above when K′ → −∞, then R_pow = +∞ and X_λ is considered as not α-power robust. Note the exponent α − 2 used with the fragility, for homogeneity reasons, e.g. in the case of stable distributions, when a random variable Y_λ = φ(X_λ) depends on another source of risk X_λ.

Definition 1: Left-Robustness (monomodal distribution). A payoff y = φ(x) is said (a, b)-robust below L = φ(K) for a source of randomness X with pdf f_λ assumed monomodal if, letting g_λ be the pdf of Y = φ(X), one has, for any K′ ≤ K and L′ = φ(K′):

\[ V_X\!\left(Y, g_\lambda, L', s^{-}(\lambda)\right) \le a\, V\!\left(X, f_\lambda, K', s^{-}(\lambda)\right) + b \tag{27} \]

The quantity b is of an order deemed of "negligible utility" (subjectively), that is, it does not exceed some tolerance level in relation with the context, while a is a scaling parameter between variables X and Y.

Note that robustness is in effect impervious to changes of probability distributions. Also note that this measure of robustness ignores first order variations since, owing to their higher frequency, these are detected (and remedied) very early on.

REFERENCES

[1] R. Horton, "Vioxx, the implosion of Merck, and aftershocks at the FDA," The Lancet, vol. 364, no. 9450, pp. 1995–1996, 2004.

[2] J. H. Kim and A. R. Scialli, "Thalidomide: the tragedy of birth defects and the effective treatment of disease," Toxicological Sciences, vol. 122, no. 1, pp. 1–6, 2011.

[3] A. P. Fishman, "Aminorex to fen/phen: an epidemic foretold," Circulation, vol. 99, no. 1, pp. 156–161, 1999.

[4] A. L. Herbst, H. Ulfelder, and D. C. Poskanzer, "Adenocarcinoma of the vagina," New England Journal of Medicine, vol. 284, no. 16, pp. 878–881, 1971, PMID: 5549830. [Online]. Available: http://www.nejm.org/doi/full/10.1056/NEJM197104222841604

[5] C. D. Furberg and B. Pitt, "Withdrawal of cerivastatin from the world market," Curr Control Trials Cardiovasc Med, vol. 2, no. 5, pp. 205–207, 2001.

[6] R. P. Feldman and J. T. Goodrich, "Psychosurgery: a historical overview," Neurosurgery, vol. 48, no. 3, pp. 647–659, 2001.

[7] P. B. Watkins and R. W. Whitcomb, "Hepatic dysfunction associated with troglitazone," New England Journal of Medicine, vol. 338, no. 13, pp. 916–917, 1998, PMID: 9518284. [Online]. Available: http://www.nejm.org/doi/full/10.1056/NEJM199803263381314

Page 24: The Precautionary Principle (with Application to the Genetic

EXTREME RISK INITIATIVE —NYU SCHOOL OF ENGINEERING WORKING PAPER SERIES 24

[8] R. L. Woosley, Y. Chen, J. P. Freiman, and R. A. Gillis, "Mechanism of the cardiotoxic actions of terfenadine," JAMA, vol. 269, no. 12, pp. 1532–1536, 1993. [Online]. Available: http://dx.doi.org/10.1001/jama.1993.03500120070028

[9] W. N. Kernan, C. M. Viscoli, L. M. Brass, J. P. Broderick, T. Brott, E. Feldmann, L. B. Morgenstern, J. L. Wilterdink, and R. I. Horwitz, "Phenylpropanolamine and the risk of hemorrhagic stroke," New England Journal of Medicine, vol. 343, no. 25, pp. 1826–1832, 2000.

[10] J. T. James, "A new, evidence-based estimate of patient harms associated with hospital care," Journal of Patient Safety, vol. 9, no. 3, pp. 122–128, 2013.

[11] J. Davies and D. Davies, "Origins and evolution of antibiotic resistance," Microbiology and Molecular Biology Reviews, vol. 74, no. 3, pp. 417–433, 2010.

[12] D. Fergusson, S. Doucette, K. C. Glass, S. Shapiro, D. Healy, P. Hebert, and B. Hutton, "Association between suicide attempts and selective serotonin reuptake inhibitors: systematic review of randomised controlled trials," BMJ, vol. 330, no. 7488, p. 396, 2005.

[13] D. S. Echt, P. R. Liebson, L. B. Mitchell, R. W. Peters, D. Obias-Manno, A. H. Barker, D. Arensberg, A. Baker, L. Friedman, H. L. Greene et al., "Mortality and morbidity in patients receiving encainide, flecainide, or placebo: the Cardiac Arrhythmia Suppression Trial," New England Journal of Medicine, vol. 324, no. 12, pp. 781–788, 1991.

[14] A. M. Larson, J. Polson, R. J. Fontana, T. J. Davern, E. Lalani, L. S. Hynan, J. S. Reisch, F. V. Schiødt, G. Ostapowicz, A. O. Shakil et al., "Acetaminophen-induced acute liver failure: results of a United States multicenter, prospective study," Hepatology, vol. 42, no. 6, pp. 1364–1372, 2005.

[15] RITA Trial Participants, "Coronary angioplasty versus coronary artery bypass surgery: the Randomised Intervention Treatment of Angina (RITA) trial," The Lancet, vol. 341, no. 8845, pp. 573–580, 1993.

[16] K. P. Morgan, "Women and the knife: Cosmetic surgery and the colonization of women's bodies," Hypatia, vol. 6, no. 3, pp. 25–53, 1991.

[17] G. A. Rook, C. A. Lowry, and C. L. Raison, Evolution, Medicine,and Public Health, vol. 2013, no. 1, pp. 46–64, 2013.

[18] I. F. Wallace, N. D. Berkman, K. N. Lohr, M. F. Harrison, A. J. Kimple, and M. J. Steiner, "Surgical treatments for otitis media with effusion: A systematic review," Pediatrics, pp. peds–2013, 2014.

[19] J. L. Teugels, "The class of subexponential distributions," The Annals of Probability, vol. 3, no. 6, pp. 1000–1011, 1975.

[20] E. Pitman, "Subexponential distribution functions," J. Austral. Math. Soc. Ser. A, vol. 29, no. 3, pp. 337–347, 1980.

[21] V. Chistyakov, "A theorem on sums of independent positive random variables and its applications to branching random processes," Theory of Probability & Its Applications, vol. 9, no. 4, pp. 640–648, 1964.

[22] P. Embrechts, C. M. Goldie, and N. Veraverbeke, "Subexponentiality and infinite divisibility," Probability Theory and Related Fields, vol. 49, no. 3, pp. 335–347, 1979.

[23] B. Kuo, Automatic Control Systems. Prentice Hall, 1982.

[24] C. W. Gardiner, Stochastic Methods. Springer Berlin/Heidelberg, 1985.

[25] E. A. Guggenheim, Thermodynamics—An Advanced Treatment for Chemists and Physicists. North-Holland, 1985.

[26] N. N. Taleb and R. Douady, "Mathematical definition, mapping, and detection of (anti)fragility," Quantitative Finance, 2013.


Precaution and GMOs: an Algorithmic Complexity Approach

Yaneer Bar-Yam†, Joseph Norman†, and Nassim Nicholas Taleb*
†New England Complex Systems Institute
*Real World Risk Institute

SUMMARY: This is a supplement to our Precautionary Principle paper presenting the problem from the perspective of computational/algorithmic complexity, which can clarify the risks of GMOs. The point is that—in addition to the change in risk classes—the difference between conventional breeding and transgenics may change the complexity class associated with the problem of harm evaluation.

Our PP approach

Our analysis of the risk of GMOs in the preliminary version of the PP paper [1] was probabilistic, based upon the conjunction of three problems:

• the opacity of tail risks: the difficulty of obtaining information now about potential deferred future harm to health or the environment;

• the systemic consequences of fat tailed events in GMO risks that are not present in conventional breeding and agricultural technology innovation—the difference lies in the absence, for GMOs, of passive barriers or reactive circuit breakers (a term also used in market regulation) that limit the propagation of errors and prevent wider damage;

• that measures of harm scale nonlinearly with measures of impact, e.g., a reduction of 10% in crop production or genetic diversity can multiply the harm to social systems or ecologies by orders of magnitude.

The problem of identifying the harm analytically arises because of the many ways that, e.g., insertion of a gene from another species can affect molecular, cellular, physiological, and other organismal aspects of the organism, and those modifications may impact long term health or agricultural and ecological systems—impacts that may be unobserved due to the complexity of societal changes in the absence of monitoring. Unintended effects arise from the many interactions that are possible, and increasing global connectivity that converts local to systemic risks. The ecosystem and civilization are at risk due to the absence of boundaries of GMO use in human consumption or ecological systems globally.

Boundedness of conventional breeding

Counter to the idea that GMOs are similar to natural evolution or breeding, the viability and other qualities of offspring from the mating of two members of the same species imply that the range of outcomes of breeding is bounded. Offspring that arise within a species must have a high probability of long term compatibility with other members of the species and contextual ecology. Otherwise the species, and the ecosystem of which it is part, would not have persisted over many generations. The same statement need not be true of GMOs.

Indeed the reason that GMOs are being introduced is that (1) GMOs depart significantly from the set of organisms that can arise through breeding, and (2) many of the advantages of breeding have already been explored and exploited.

Unfortunately, the FDA does not require testing of GMOs, as it has accepted industry claims that GMOs are no different from conventional breeding. This means there are few, and surely insufficient, tests of the harm that might be caused—or monitoring of the effects.

Some believe—and have convinced others—that they can "figure out" what the effect of GMO modifications is, and consider that unintended consequences will not occur. This is a stretch: "figuring out" the impact of GMO modifications may very well not be possible. We do not mean unlikely due to current computational restrictions and the state of science, but literally impossible.

NP computational limits and impossibility

There are many problems that are intractable because the level of effort to solve them grows rapidly with the dimension of the problem. Thus, for example, it is widely believed by mathematicians that NP-complete problems (e.g. the traveling salesman problem, the boolean satisfiability problem) cannot be solved for large enough problems because the level of effort grows more than polynomially in the size of the system. We may check a solution, if we know it, in polynomial time, but cannot guarantee that we can derive it. Whether NP-complete or not, there is a wide range of problems whose computation time grows exponentially. These problems—those with exponential rather than polynomial growth—are deemed impossible to solve for large systems.

Is the determination of harm from GMOs such a "hard" problem? Identifying all of the possible consequences of GMO modifications may very well involve combinatorially many effects to consider in the interaction of one (or more) new genes with the other genes, other organisms, and agricultural and ecological processes. (The need for combinatorial numbers of observations for behavioral characterizations of such complex systems has been previously discussed [2].) On the other hand, the limitations that exist on the traits of members of the same species suggest that the probability of harm in breeding is constrained, and the generally incremental impact of breeding suggests a much lower, if not well characterized, computational effort to determine it. Therefore outcomes of breeding are more amenable to testing or analysis in comparison to GMOs, including the projection of future harm.
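The growth rates contrasted above can be tabulated directly (a toy illustration, not the paper's model: exhaustive checking of interaction subsets versus merely pairwise testing for n interacting genes):

```python
# If evaluating harm requires examining every subset of n interacting genes,
# the number of cases grows as 2^n; checking only pairwise effects grows as n².
# The gap is what makes exhaustive harm evaluation computationally out of reach.
for n in (10, 20, 40, 80):
    print(n, n * n, 2 ** n)
```

Already at n = 80 the subset count exceeds 10^24, while the pairwise count is a trivial 6,400.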

Advanced complex systems science methodologies exist that can identify large scale impacts without characterizing all details of a system [3]. However their application to the system of harm and risk in the context of genetic modification that is pervasively present in the system has to be established, and is not yet available to address the current risks being taken.

REFERENCES

[1] N. N. Taleb, R. Read, R. Douady, J. Norman, and Y. Bar-Yam, "The precautionary principle (with application to the genetic modification of organisms)," arXiv preprint arXiv:1410.5787, 2014.

[2] Y. Bar-Yam, "The limits of phenomenology: From behaviorism to drug testing and engineering design," Complexity, 2015.

[3] Y. Bar-Yam and M. Bialik, "Beyond big data: Identifying important information for real world challenges," ArXiv, 2013.