
Page 1:

European Parliament STOA meeting

October 19, 2017

Remarks on artificial intelligence and rational optimism

Olle Häggström

Dept of Mathematical Sciences, Chalmers University of Technology

http://www.math.chalmers.se/~olleh/

http://haggstrom.blogspot.se/

Olle Häggström · AI and rational optimism


Page 4:

What is rational optimism?

Rational optimism is not...

...to claim, based on insufficient evidence, that everything is going to be all right.

A better definition of rational optimism is...

...to have an epistemically well-calibrated view of the future and its uncertainties, to accept that it is not written in stone, and to act upon the working assumption that the chances for a good future may depend on what actions we take today.

Page 5:

Let me illustrate the difference with an example from Pinker (2011).


Page 7:

Following an impressive amount of empirical evidence, Pinker summarizes by congratulating the readers on living in a time when people...

...no longer have to worry about abduction into sexual slavery, divinely commanded genocide, lethal circuses and tournaments, punishments on the cross, rack, wheel, stake, or strappado for holding unpopular beliefs, decapitation for not bearing a son, disembowelment for having dated a royal, pistol duels to defend their honor, beachside fisticuffs to impress their girlfriends, and the prospect of a nuclear world war that would put an end to civilization or to human life itself.

Page 8:

In order to state that present-day (or 2011) citizens live in a blessed state of safety from violence, we need in particular to establish that the risk of being killed in global nuclear war is small.

Page 9:

Compare estimating the risk to present-day citizens from

- lethal bicycle accident (easy; lots of data)

- lethal bicycle accident caused by a piano falling from a third-floor balcony (very little data; doesn't matter)

- global nuclear war (hard; very little data, yet crucial)

Page 10:

Assuming (not quite realistically) stationary conditions, how do we estimate, based on the number of outbreaks of global nuclear war during the last 70 years, the annual probability λ of such an outbreak?

Standard statistical procedures give a point estimate of λ = 0, and a 95% confidence interval

0 ≤ λ ≤ 0.06.
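As a sketch of where a bound of this order of magnitude comes from (not from the talk itself; the exact value depends on the method chosen and on rounding), here is the standard exact binomial bound for zero observed events in 70 years:

```python
# Sketch, not from the slides: a 95% upper confidence bound for the annual
# probability lambda, given 0 observed outbreaks in 70 years, via the exact
# (Clopper-Pearson) binomial bound. With zero successes it has a closed form.

def upper_bound_zero_events(n_years: int, alpha: float) -> float:
    """Exact upper confidence bound for a per-year probability,
    given zero events observed in n_years independent years:
    solve (1 - p)**n_years = alpha for p."""
    return 1.0 - alpha ** (1.0 / n_years)

one_sided = upper_bound_zero_events(70, 0.05)    # ~0.042
two_sided = upper_bound_zero_events(70, 0.025)   # ~0.051 (95% two-sided)

print(f"one-sided 95% upper bound: {one_sided:.3f}")
print(f"two-sided 95% upper bound: {two_sided:.3f}")
```

Depending on which convention one adopts, the bound comes out at roughly 0.04 to 0.05, consistent with the slide's rounded figure of 0.06.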

Page 11:

The observed data is therefore highly consistent with (say) λ = 0.05, corresponding to an expected time of

1/0.05 = 20 years

until the next global nuclear war – and, for the individual citizen, an annual death rate of several percent.

(And here we haven't even considered the observer selection bias that may infect statistical estimates in this field.)
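To make the arithmetic behind these figures explicit (the 50% fatality fraction below is a purely illustrative assumption of mine, not a claim from the talk), one can model the first outbreak as a geometric random variable:

```python
# Sketch with stated assumptions: what lambda = 0.05 per year would imply,
# modelling the first outbreak as a geometric random variable.

annual_p = 0.05

# Expected waiting time until the first outbreak: 1 / lambda = 20 years.
expected_wait = 1 / annual_p

# Probability of at least one outbreak within a 50-year horizon: ~0.92.
p_within_50 = 1 - (1 - annual_p) ** 50

# "Annual death rate of several percent": the annual outbreak probability
# times the fraction of humanity killed. The 50% fatality fraction here is
# an illustrative assumption only.
fatality_fraction = 0.5
annual_death_rate = annual_p * fatality_fraction  # 0.025, i.e. 2.5%

print(expected_wait, round(p_within_50, 2), annual_death_rate)
```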

Page 12:

The take-home message from this is that in order to give good estimates of λ, we need to look much deeper into the plausible mechanisms for nuclear war. This requires insights into, e.g., political science and electrical engineering, and detailed analyses of incidents such as the 1962 Cuban missile crisis and the 1983 Soviet nuclear false alarm incident.

Page 13:

I cannot give a precise estimate of λ, but in my 2016 book Here Be Dragons I do my best to evaluate the risk for global nuclear war and a number of other existential threats to humanity, including some more exotic ones such as

- the creation of a superintelligent AI whose goals are not aligned with human values,

- self-replicating nanobots running amok in a grey goo scenario,

- gamma-ray bursts from sufficiently nearby supernovas,

- genocide by extraterrestrials.


Page 16:

What about the case of artificial intelligence (AI)?

Like other emerging technologies (nanotech, biotech, ...), AI comes with the potential for enormous economic and other benefits, but also enormous risks.

Risks include that of an autonomous weapons arms race, that of social unrest due to skyrocketing unemployment caused by robotization, and...

...the issue of whether, once we obtain an AI breakthrough sufficient to create superintelligent AGI, so that we humans are no longer the most intelligent beings on the planet, we can expect to remain in control.


Page 19:

Paperclip Armageddon

- A paperclip factory is run by an advanced (but not superintelligent) AI, programmed to find ways to maximize paperclip production.

- That AI happens to become the first one to reach the critical threshold to enter the rapid spiral of self-improvement known as a Singularity or an intelligence explosion.

- Having thus reached superintelligence, the AI promptly goes on to turn the entire solar system (including us) into a giant heap of paperclips.

Page 20:

The point of the Paperclip Armageddon scenario is to stress that for an AI breakthrough to become an existential risk, no ill intentions are needed – no mad scientist needs to plan to destroy the world as revenge against humanity.


Page 24:

The question of whether anything like Paperclip Armageddon is a forthcoming risk is usefully analysed in two steps.

(1) When (if ever) can we expect a superintelligent AI to emerge?

(2) What can we expect to happen once superintelligent AI has emerged?

Concerning (1), polls among experts show that they are highly divided on if/when such a breakthrough is to be expected during the 21st century – so the possibility should be taken seriously.

I'll skip the technical details on (1), and move on directly to (2).


Page 28:

Once the superintelligent AI has escaped its box, our fate depends on what it is motivated to do. What will its goals be?

About this we know very little, but the best attempt to move beyond mere speculation is the Omohundro–Bostrom theory of ultimate versus instrumental AI goals.

- The orthogonality thesis: Virtually any ultimate goal is compatible with arbitrarily high levels of intelligence.

- The instrumental convergence thesis: There are a number of instrumental goals that a sufficiently intelligent AI is likely to set up to help promote its final goal, pretty much no matter what the final goal is.


Page 34:

Some basic instrumental goals to which the instrumental convergence thesis seems to apply:

- Self-preservation (don't let them pull the plug on you).

- Acquisition of hardware (and other resources).

- Improving one's own software and hardware.

- Preservation of the final goal.

- If your ultimate goal is misaligned with human values, keep a low profile until you are strong enough.


Page 37:

The idea of Friendly AI is to somehow make sure that the first superintelligent AI has values that align well with ours.

Because of the instrumental goal of preservation of the final goal, Yudkowsky (2008) and Bostrom (2014) emphasize the need to instill (directly or indirectly) the AI with such values prior to the AI reaching superintelligence levels.

This seems to be a very difficult project, where even small discrepancies can lead to catastrophe. A suggestion like "maximize hedonic utility in the world" may sound tempting, but a problem (for us) is that a solution to this maximization problem is unlikely to involve the existence of humans.


Page 41:

Because we are discussing such uncharted territories, so far from the familiar and the well-established, I think there is a fair chance that we are making some fundamental mistake somewhere, and that Yudkowsky–Bostrom-style AI risk is mere confusion.

But then again, it may well not be mere confusion. There is a fair chance that the risk is real.

Therefore, Yudkowsky–Bostrom-style AI risk is worth taking seriously. And it is worth taking seriously now.

Not because the emergence of a superintelligence would be likely in the next few years, but because Friendly AI is such a difficult project that we may need decades or more to make it work.

Page 42:

For more on AI-related and other existential risks to humanity, see...