THE RISKS AND BENEFITS OF ARTIFICIAL INTELLIGENCE AND ROBOTICS

An event in collaboration with and hosted by

Acknowledgements

This report has been prepared by UNICRI.

UNICRI would like to express its appreciation to the high-level experts and participants of the

workshop held in Cambridge, United Kingdom, in February 2017. Special thanks also go to all

those that supported the workshop at both UNICRI and the Centre for Risk Studies at the

University of Cambridge Judge Business School, in particular: Ms. Marina Mazzini (UNICRI),

Mr. Irakli Beridze (UNICRI) and Dr. Michelle Tuveson (Cambridge Centre for Risk Studies) for

the initiative and overall management of the workshop; Mr. Fabrizio De Rosa (UNICRI) for its

organization and his multimedia services; Ms. Sona Krajciova (Cambridge Centre for Risk

Studies) for logistics; Mr. Odhran McCarthy (UNICRI) for organizational support and the

preparation of this report; and Mr. Beniamino Garrone (UNICRI) for its design.

Table of Contents

Introduction

Opening of the workshop

Artificial intelligence and robotics 101: what is it and where are we now

Ethics and artificial intelligence

The cyber security overlap

From fear to accountability – the state of artificial intelligence journalism

Emerging technologies: quantum computing

Economic and social implications of robotics and artificial intelligence

Long-term issues of artificial intelligence and the future of humanity

Robotics and artificial intelligence at the United Nations

Concluding remarks

Annex 1: Agenda

Annex 2: Speakers’ biographies

Annex 3: Event photos

1. Introduction

The potential risks and benefits associated with advancements being made in the fields of

artificial intelligence (AI) and robotics were analysed and discussed during a two-day

workshop organized by the United Nations Interregional Crime and Justice Research Institute

(UNICRI) in collaboration with, and hosted by, the Cambridge Centre for Risk Studies. The

event took place at the University of Cambridge Judge Business School (United Kingdom)

from 6 to 7 February 2017.

As part of UNICRI's Artificial Intelligence and Robotics Programme and its Public Information

Programme on New Threats, journalists, representatives from academia, international

organizations and the private sector from 20 countries met with leading AI and Robotics

experts to deepen their understanding of advancements in AI and robotics, with a special

focus on their potential global security implications.

This report summarizes the issues presented and discussed during the workshop.

2. Opening of the workshop

The workshop was opened by Mr. Irakli Beridze, Senior Strategy and Policy Advisor at the

United Nations Interregional Crime and Justice Research Institute (UNICRI), and Dr. Michelle

Tuveson, Founder & Executive Director, Cambridge Centre for Risk Studies at the University

of Cambridge Judge Business School, who officially welcomed participants to Cambridge. Mr.

Beridze mentioned that this was the second edition of the workshop. The previous edition

took place at the Netherlands Institute of International Relations Clingendael in The Hague, in

March 2016.

Following the initial welcoming remarks, Dr. Ing. Konstantinos Karachalios, Managing Director

of The Institute of Electrical and Electronics Engineers (IEEE) Standards Association and

member of the Management Council of IEEE, delivered a keynote address which appealed to

the audience’s logic and emotion when considering the very messy, yet fundamentally

important theme of AI and robotics.

Dr. Karachalios noted that, in September 2015, world leaders at the United Nations adopted

the Sustainable Development Goals, also known as “Transforming our world: the 2030

Agenda for Sustainable Development.” Therein, the United Nations made reference to

technology as one of the major pillars for implementing the 17 goals. While it

is indeed the case that technology can help us achieve the development goals and have a

massive impact on our world and our lives, for example by doubling life expectancy, he noted that technology also paradoxically threatens our existence.

Approximately 45 years ago, Buckminster Fuller, the American author, inventor, and architect,

observed that technology had reached a point at which we had the ability to utilise it to provide

the necessary protection and nurturing that our society requires to fulfil our needs and ensure

growth. We were crossing a singularity, he felt, making things such as war obsolete. In this

technical era, he questioned which political system can or should be the structure and

backbone of our society. Or, for that matter, if one was even required at all.

In contrast, the German Philosopher Martin Heidegger held a more pessimistic view of

technology. While many feel that technology is something under our control, this was not the

case for Heidegger. For him, once set on its course, the development of and advancements in

technology were something beyond our control.

Dr. Karachalios felt that the reality is probably somewhere in between and it is up to us to

decide what we feel or believe. For him, this decision was the heart of the two-day workshop.

To help us come to our own conclusions, Dr. Karachalios suggested we look for impact. For

instance, has technology made the world more sustainable? Has technology made the world

more democratic, fair or safe? In his opinion we should seek to measure if and how

technology has delivered under each of these topics before concluding optimistically or

pessimistically.

At the same time, looking more broadly at the agenda of the workshop, Dr. Karachalios

emphasised that in this technical era it is important that we properly educate and inform

ourselves about the potential dangers, risks and dilemmas in our path. With technology, he

noted, we tend to lack a clear understanding of the direction in which we are moving, and

instead follow the impetus of the various drivers or “war machines” that push forward

developments and advancements. Such drivers, Dr. Karachalios clarified, include: militarism,

geopolitics, the religion of “do-ability” within the techno-scientific community and even our own

fear of death. It is important for us to better understand these drivers or “war machines” to be

able to choose our own direction. He added the caveat however, that going against these

drivers will be costly. There will be no reward along the way for the individuals engaged

against the "war machines", but it is his hope that the end result will be worth it for the

collective.

He commended the workshop as a forum to hold dialogue on these drivers or “war machines”

behind technology and encouraged all those present to continue building bridges amongst the concerned communities to foster engagement.

3. Artificial intelligence and robotics 101: what is it and

where are we now

Prof. Noel Sharkey, University of Sheffield, UK, Co-Founder of the Foundation for Responsible

Robotics (FRR) and Chairman of the International Committee for Robot Arms Control

(ICRAC), opened the first session of the workshop. Before commencing however, Prof.

Sharkey stressed that it is important to remember that, while we do not know what the future

holds for us, we do know the present, and, in this reality, there are many problems facing us.

In this regard, he felt it prudent to avoid unnecessarily talking about the future for fear of

distracting ourselves from the real risks and problems that we face today.

To get started, Prof. Sharkey provided participants with a brief introduction to AI and robotics.

He noted the origins of the term “AI” – originally coined in 1955 by the American computer

scientist John McCarthy – and observed the great deal of confusion that surrounds the term,

particularly with respect to the word “intelligence”. The resulting ambiguities were lamented by

many, including even John McCarthy himself, who is noted as having wished he never called it

AI at all. Moving along, Prof. Sharkey noted that grand claims and false predictions of capabilities are often associated with AI. More than 60 years since our first foray into the field of AI, the promises of machines cleaning our homes have not yet materialised.

Notwithstanding this, there have been many significant milestones, notably including Google

DeepMind's 'AlphaGo' competing against, and ultimately prevailing over, world renowned

professional Go player, Lee Sedol, in 2016. Prof. Sharkey noted however, that the systems

making these milestones have been very singular in their function, for example, in playing Go,

Chess or Jeopardy. We have yet to meet a universally intelligent system. In this regard, he

speculated that these milestones resulted not from what the founding fathers of artificial

intelligence were trying to achieve with programming, but rather from a combination of the

massive computational power of big data and sophisticated pattern recognition techniques.

These games are uniquely suited to modern artificial intelligence.

Pushing aside the curtain, Prof. Sharkey explained that what is called “machine learning” is

the computer science behind the field of artificial intelligence. Machine learning or “statistical

parameter estimation”, he added, is in essence a network or several matrices of numbers,

wherein an input is converted into numbers (binary) and multiplied across the matrices. This

results in an output classification of the input. In the event of a mistake, an error correction

formula modifies the weights and the input is inserted again and the whole process is iterated

until the proper classification results.
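The error-correction loop Prof. Sharkey describes can be sketched in a few lines of code. The following single-layer perceptron (an illustrative example, not something presented at the workshop) converts binary inputs to numbers, multiplies them against a vector of weights, and applies an error-correction update on each mistake, iterating until the classification comes out right:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Error-correction learning for a single-layer network: the input is
    multiplied across the weights, the result is classified, and any
    mistake triggers a weight update -- iterated until correct."""
    w = [0.0, 0.0]  # the "matrix of numbers" (here just a weight vector)
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # forward pass: weighted sum of the numeric inputs
            activation = w[0] * x[0] + w[1] * x[1] + b
            output = 1 if activation > 0 else 0
            # error-correction formula: modify the weights on a mistake
            error = target - output
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# learn the logical OR function from binary inputs
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([classify(w, b, x) for x, _ in data])  # -> [0, 1, 1, 1]
```

This toy network is "singular in its function" in exactly the sense Prof. Sharkey describes: it learns one classification and nothing else.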

Turning to robotics, Prof. Sharkey suggested that robots are conceptually more

straightforward. Simplifying his description, he explained that robots can be understood as

computers with sensors and motors connected to wheels or legs. The computer runs a

process in response to a sensor detecting a stimulus, such as heat, and the computer sends a

signal to the motors to move the robot accordingly. There are two main types of robots –

industrial and service. As with AI, there have been a number of significant developments in

this field. Most notably, there has been an explosion in the quantity of robots, with the World

Federation for Robotics (WFR) predicting that there will be 38 million robots worldwide by

2018 for a wide range of activities, from farming to cleaning to pumping gas to assisting the

elderly.
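The sense-process-act cycle described above can be sketched as a short control loop. This is an illustrative example with hypothetical sensor and motor interfaces, not code from the workshop: the computer reads a sensor, runs a process on the stimulus (here, heat), and signals the motors in response.

```python
def control_loop(read_heat_sensor, set_motor_speed, steps):
    """Minimal robot control cycle: sense a stimulus, process it,
    and signal the motors accordingly."""
    for _ in range(steps):
        heat = read_heat_sensor()               # sense: detect a stimulus
        command = -1.0 if heat > 40.0 else 1.0  # process: decide a response
        set_motor_speed(command)                # act: signal the motors

# Usage with stub hardware: reverse away from readings above 40 degrees.
readings = iter([20.0, 45.0, 30.0])
commands = []
control_loop(lambda: next(readings), commands.append, steps=3)
print(commands)  # -> [1.0, -1.0, 1.0]
```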

Given this prediction and the increasing prevalence of robotics in our daily lives, Prof. Sharkey

next turned his attention to shining the spotlight on some important societal issues that are

often overlooked.

For instance, he noted immediate concerns with the use of robotics in both child care and the

care of the elderly. While there are of course many advantages, such as assisting the elderly

and keeping them independent, humans require human contact. Depriving the young or the

old of this can have serious psychological consequences and may, in the case of children, lead to attachment disorders.

Japan, Korea and the United Kingdom into so-called companion robots for the young and old,

he stressed that we must not be blinded by potential economic benefits. We must remain

sensitive to the very human needs of people.

We have also seen the emergence of robotics and AI on our roads, with the increasing

popularity of autonomous vehicles. While he acknowledged the potential for this technology to

save lives, he also stressed caution. In essence, robots are machines and, as machines, there

is the ever-present possibility of a malfunction. Already we have seen fatalities on the road

with autonomous vehicles, one in the US, one in China and one in the Netherlands. While we

push ahead with this technology, we need to take care that proper systems are in place. At

present, this does not appear to be the case. He pointed out that self-driving cars could save lives if we get them right, and that proceeding with caution can engender public trust.

One of the largest points of concern for Prof. Sharkey is when robotics and AI come into

conflict with our right to life. Lethal autonomous weapons systems (LAWS), or so-called “killer

robots”, are not the humanoid robots portrayed in Hollywood movies. Rather they are drones

and submarines, operating either individually or collectively in swarms. While we have yet to

see a fully autonomous weapon that makes targeting and attack decisions without human

intervention, there are now a number of weapon systems with varying degrees of autonomy.

Uralvagonzavod’s T-14 Armata tank and Northrop Grumman’s unmanned combat air vehicle, the X-47B, are examples of very advanced weapons systems edging toward full

autonomy. The problem, Prof. Sharkey explained, is that while these systems are the pinnacle

of advancements in AI, the technology nonetheless encounters difficulties in distinguishing

civilians and combatants, which presents a challenge for complying with International

Humanitarian Law, also known as ‘the laws of war’. At the United Nations level, there is an

ongoing debate on LAWS within the context of the Convention on Prohibitions or Restrictions

on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively

Injurious or to Have Indiscriminate Effects (CCW). During the 2014 CCW Meeting of High

Contracting Parties, Member States agreed on a new mandate on LAWS and, since then, have

discussed the matter on a yearly basis. While progress is slow, Prof. Sharkey noted that in

2016 the United Nations established a Group of Governmental Experts (GGE) to specifically

discuss, over a three-week period, the legal, ethical and societal concerns of LAWS in the

context of the objectives and purposes of the Convention.

Prof. Sharkey also noted that developments in this field were beginning to spill over into law

enforcement. In this regard, he noted the Skunk Riot Control Copter that is armed with pepper

spray for crowd control and Chaotic Moon’s "stun copter". He noted the positive side of the

technology, which keeps police out of harm’s way, but stressed the importance of not letting

these developments get out of hand by dehumanising violence and changing the very nature

of policing.

Echoing Dr. Karachalios’ earlier comments, Prof. Sharkey concluded by noting the importance of

education as we move forward. Specifically, he acknowledged that, while professional ethics is

taught to engineers in universities, this is largely ethics with respect to the client and not the

ethics of social responsibility. We must be more aware of the various rights and societal issues

at stake here.

4. Ethics and artificial intelligence

Building on the foundation laid by Prof. Sharkey, Ms. Kay Firth-Butterfield, Barrister-at-Law,

Distinguished Scholar at the Robert S. Strauss Center for International Security and Law,

University of Texas, Austin and Co-Founder of the Consortium for Law and Ethics of Artificial

Intelligence and Robotics, led the second session on Ethics and AI. At the outset of her talk,

Ms. Firth-Butterfield noted that advancements in this field are often compared to the industrial

revolution of the 18th and 19th centuries, but according to her, it is simply not the same thing.

The speed, rate of change, and impact of this new industrial revolution are unparalleled. In this

regard, she expressed that it is absolutely paramount that we already start discussing the

many ethical concerns raised by advancements in these technologies. We must do so, she

noted, because, in the end, AI is just software, bugs and all. Society cannot afford to “beta-

test” AI, addressing problems with software updates as they arise.

While AI is currently trending upwards, coming more and more into the public eye in recent

years, she felt it necessary not to overplay the capabilities of new technology, for fear of

sparking panic and ushering in the next AI winter. If we allow fear to take control of us, we will

only end up with unnecessary, and perhaps ineffectual, regulation. She aptly grounded the

discussion in reality, making reference to influential deep learning specialist Yann LeCun, who

once noted that, at present, we cannot even build a machine as intelligent as a mouse.

Returning to the issue of ethics and AI, she explained that discussions really took off with the

sale of DeepMind to Google in January 2014, and the announcement that, as part of the deal,

Google was setting up an ethics advisory board to ensure that its AI technology is not abused.

The discussion bloomed thereafter following remarks from Professors Stephen Hawking and

Max Tegmark on the positive and negative effects of AI. Around this time, other commercial AI entities created their own ethics boards; for instance, Lucid AI set up its Ethics Advisory Panel in October 2014, which she led and which included Professors Max Tegmark, Murray Shanahan and Derek Jinks as members. This attention contributed to a great deal of ongoing research into AI ethics and to commendable work such as the Future of Life Institute’s 23 high-level principles on AI ethics, the Asilomar Principles.

In connection with this, Ms. Firth-Butterfield noted that AI and robotics has also recently

received a lot of governmental attention in terms of reports and regulations. Europe, for

instance, is very advanced and even produced a report in 2016 on Robotics and AI, proposing

legal personhood and addressing important issues such as intellectual property. On the other

side of the Atlantic, she said, the White House also produced two noteworthy reports in 2016

outlining a strategy for promoting AI research and development. Within the US, the State of

Nevada has even adopted legislation and regulations which allow for the use of autonomous

trucks on its roads.

In 2016, the Institute of Electrical and Electronics Engineers (IEEE) launched its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, an open, collaborative and consensus-based approach to ethics and AI. Ms. Firth-Butterfield explained

that the IEEE’s goal is to educate, train and empower technologists to prioritize ethical

considerations in the design and development of autonomous and intelligent systems.

Through this initiative, she added, the IEEE issued a report, Ethically Aligned Design, in

December 2016 and is actively seeking contributions from the community to review and

improve the report. She also indicated that work on two new standards projects on

Transparency of Autonomous Systems and Data Privacy Process was started and, as part of

the IEEE’s peer-driven and consensus-based approach, she invited participants to join the

working groups.

Concluding, Ms. Firth-Butterfield reflected on earlier comments by Prof. Sharkey, stating that

we have to question AI when it begins to spill over into our lives. While innovations such as the

use of AI in predictive policing may save lives and prevent harm, they can have a significant

impact on our civil liberties and our freedoms. The question remains: where do we draw the

line with innovation? She further speculated that, if innovation continues to curb our civil liberties, we may reach a point at which it becomes necessary to challenge the very purpose of innovation or, as Salesforce’s CEO Marc Benioff said at Davos in 2017, even to slow down the rate of innovation through regulation. This may be an option in the future, but it is not

something being considered by Governments at present. Currently, national policies are

focused on maximising technology for economic purposes.

5. The cyber security overlap

In the next session, the cyber security overlap was introduced by the team from the

Cambridge Centre for Risk Studies, which consisted of Professor Daniel Ralph, Academic

Director and Professor of Operations Research at University of Cambridge Judge Business

School; Mr. Simon Ruffle, Director of Technology Research & Innovation; and Ms. Jennifer

Copic, Research Assistant.

Kicking off the discussion, Professor Ralph took the floor to explain the work of the Cambridge

Centre for Risk Studies to address systemic risks in business, the economy and society.

Prof. Ralph observed that we have moved beyond the information technology revolution into

the data revolution, and in this new era there are novel risks for us to deal with, like cyber risk.

He defined cyber risk as any risk of financial loss, disruption or damage to the reputation of an

organisation from some sort of failure of its information technology (or operational technology)

systems.

Our environments have also changed, he explained, noting that critical infrastructure has

become an increasingly complicated mix of public and private sector actors, so much so that

we are not fully clear who owns the risk and who is ultimately responsible for it. Governments

and regulators? Private sector critical infrastructure companies? Or society more broadly, as

represented by corporations and private consumers? The result is what Prof. Ralph refers to

as ‘the triangle of pain’. The effects of the triangle are further amplified by growing

interdependency between sectors. The energy sector, for instance, plays a central role for all

critical infrastructure, feeding various other sectors, and, in this regard, cyber-attacks, such as

that which occurred in Ukraine in December 2015 leaving more than 200,000 people without

power, can be extremely harmful.

Building on this, Mr. Ruffle explained that part of the Centre’s ambition is to find ways to measure

the economic impact of a catastrophic event and advance the scientific understanding of how

systems can be made more resilient. This is what they refer to as “catastronomics” or the

economics of catastrophes. Specifically, he explained, the Centre employs the use of stress

test scenarios and insurance loss models for cyber-attacks (for example DDOS, financial theft,

malware ransomware). To further the discussion, Mr. Ruffle described the ‘Erebos’ Trojan

attack, a hypothetical cyber-attack from a malware affected laptop targeting the North Eastern

electrical grid in the US from a distance. The attack involves overloading the air-conditioning

system in a facility, causing 50 generators to catch fire, prompting the shutdown of the

electrical grid, leaving 93 million people in 15 states without power — an area responsible for

30% of the country’s economy. Assessing the GDP risk for the next 5 years, they estimate

losses of between 243 and 1,024 billion USD, with the insurance industry losing 21.4 and 71.1

billion USD.

In the final part of the session, Ms. Copic discussed the insurance models for cyber risks,

including affirmative standalone cyber coverage, affirmative cyber endorsements, and silent

cyber exposure policies with gaps in explicit cyber exclusions or without cyber exclusions. She

noted that there is a wide variation in the coverage language in insurance policies, with no two

policies really being the same. To try to measure what these policies really mean in economic

terms, the Centre developed a series of cyber-attack scenarios for insurance accumulation

management. The scenarios can be used to stress test both individual syndicates and the

market as a whole.

6. From fear to accountability - the state of artificial

intelligence journalism

Mr. John C. Havens, Executive Director of The IEEE Global Initiative for Ethical

Considerations in Artificial Intelligence and Autonomous Systems and contributing writer for

Mashable and the Guardian, took the floor next on the topic of the media’s representation of

AI. He noted that AI is represented in the media in a highly polarised manner. It is either

dystopian or utopian in nature. Very rarely does the media present balance or even consider

solutions. Fear, he added, is an important tool in journalism, and some editors rely on

fear to sell their publications. As a result, while the AI community regularly laments the use of

the Terminator in presenting AI to the public, the Terminator will always be there. The

challenge, Mr. Havens suggested, is to get beyond the fear and to really look at what

developments like AI mean for individuals and for our society. This matters now more than ever: technology is already beginning to change us, he explained, and although the Singularity may still be far away, we have already started to cross a threshold, to merge with the very technology we are developing.

In light of this, Mr. Havens expressed the importance of ethics in AI, adding that it is not

something we can afford to have only as an afterthought. Thorough examinations and

assessments should be done before a technology is released to make sure it is ethically

aligned with end users, and methodologies applied to provide robust due diligence to identify

and prioritize human values. The problem, he clarified, is this: How will machines know what

we value if we do not know ourselves?

He observed that, as part of his H(app)athon Project, an initiative to create digital tools to drive

global contentment, a survey was conducted to get a sense of how people view their well-

being in the digital age. Psychologists have determined that while values may differ amongst

people, there are twelve values that people in different cultures all prioritize. These values can

help identify what general ethical principles we can honour to build into emerging

technologies. While it is not an easy challenge, it is something that the IEEE’s work, in creating its paper Ethically Aligned Design and new Standards Working Groups, will help address.

At the same time, Mr. Havens stressed that we should not lose sight of the importance of data

privacy as this digital boom continues. Data is a valuable commodity. This will increasingly

become apparent with the rise of augmented and virtual reality, where users will be looking

through the lenses of private companies for every part of their waking life.

In his conclusions, Mr. Havens noted that regardless of how the media represents AI,

technology is neither good nor evil. Nevertheless, he warned that it is certainly not inert. He

observed that the World Health Organization expects that by 2030 depression will be the largest contributor to the global disease burden. Noting that unemployment is a major contributor

to depression, he urged that we must recognise the reality that, while autonomous cars will

save lives on our roads, their impact will be profound on our individual and collective psyche.

Not only will the trucking profession become obsolete almost overnight, but the entire community

surrounding the profession will also feel the impact, as, for instance, the importance of gas

stations, roadside diners and everyone supporting the industry diminishes. In this regard, Mr.

Havens stressed that when we develop AI, we need to think of it in terms of what helps us

increase human well-being versus only prioritizing GDP and exponential growth, which is the

status quo of today.

7. Emerging technologies: quantum computing

Next, Dr. Natalie Mullin, researcher at 1QBit, a Vancouver-based quantum computing software

company, introduced the audience to quantum computing, an emerging technology poised to

have a significant impact on AI and on our society as a whole. She explained that quantum

computing attempts to harness the strange nature of quantum mechanics, and in particular the

phenomena known as superposition and entanglement. Operating on the basis of quantum

bits (or qubits), instead of bits as in the case of classical computers, quantum computers offer

considerable computational advantages over classical computers. Dr. Mullin explained that

there are different types of quantum computers, with the original ideal being a universal

quantum computer. The more limited, yet commercially available, quantum computers are

known as quantum annealing computers.

Dr. Mullin explained that quantum computing is a highly relevant emerging technology because cybersecurity relies on the difficulty of the mathematical problems that underpin public-key encryption schemes. For instance, in 2009, researchers factored a 232-digit number (RSA-768). Using multiple computers, this effort took over two years.

On one standard desktop processor, the researchers estimated the task would take

approximately 1,500 years. The current standard of 617-digit numbers (RSA 2048) is 4.3

billion times more difficult than a 232-digit number. Future universal quantum computers could

significantly reduce the time required to solve these complex mathematical problems from

hundreds of years to only a matter of weeks.
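The asymmetry underlying this can be shown with a toy example (ours, not from the talk): multiplying two primes is trivial, but recovering them from the product by brute force takes on the order of sqrt(n) trial divisions, so each extra digit multiplies the work. Real records such as RSA-768 were set with the far more sophisticated general number field sieve, and it is the prospect of a quantum factoring algorithm that makes future universal quantum computers so disruptive.

```python
# Toy illustration of why RSA security rests on factoring difficulty:
# recovering p and q from n = p * q by trial division takes on the order
# of sqrt(n) steps. Real attacks on numbers like RSA-768 use the general
# number field sieve, not this method.

def factor_semiprime(n):
    """Find the two prime factors of a semiprime n by trial division."""
    f = 2 if n % 2 == 0 else 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n has no nontrivial factor")

p, q = factor_semiprime(2021)      # 2021 = 43 * 47
print(p, q)                        # -> 43 47
```

For a 617-digit modulus the same brute-force search would need roughly 10**308 divisions, which is the intuition behind the time estimates quoted above.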

Fortunately for cybersecurity’s sake, Dr. Mullin noted that we have not yet reached that point

because to run the algorithm to crack RSA 2048, we would need a computer that operates with 4,096 qubits, plus millions more for error correction. Given that current universal

quantum computers control between 5 and 20 qubits, this is massively beyond our current

capabilities.

Nevertheless, given that quantum annealing computers are scaling up and doubling in power

every 12 to 24 months, we have seen increasing interest in this field. For example, in 2015, the National Security Agency in the United States announced that it is preparing to move towards quantum-resistant algorithms, and the National Institute of Standards and Technology (NIST) has similarly called for the creation of such algorithms.

Turning to the issue of AI, Dr. Mullin went on to explain how quantum computing can benefit AI, pointing to the 2013 launch of MIT and Google's drive toward quantum algorithms for machine


learning. As an illustration, she noted that quantum computing might boost facial recognition

capabilities by reducing the number of data points required for accurate facial recognition.

Returning again to the issue of security, Dr. Mullin noted that the more qubits we have, the more useful the computers become, but also the more dangerous they become in terms of cybersecurity.

Concluding, Dr. Mullin noted that it is a very interesting time for quantum computing. For instance, IBM has said it can scale up its universal quantum computer to 50-100 qubits within the next decade, while D-Wave has recently released a 2,000-qubit, sparsely-connected quantum annealing computer. Interestingly, Google is at the same time advocating a quantum annealing computer more powerful than D-Wave's, with fewer but better-connected qubits. Only last week, she observed, the University of Sussex unveiled its plan for the construction of a universal quantum computer, as opposed to a quantum annealing computer. Even though the blueprints indicate the computer will be the size of a football pitch, it is a significant breakthrough. In this regard, the future is bright for quantum computing, and we should start to think about how to apply this technology in innovative ways to address problems in our society.

8. Economic and social implications of robotics and

artificial intelligence

Mr. Olly Buston, Founding Director of Future Advocacy, opened the next session by examining the term "AI" and noting the difficulties in defining it, primarily because "intelligence" is itself a challenging term to define. To overcome this, the AI community uses a broad understanding of AI based on problem-solving abilities. In spite of its artificial nature, however, AI touches upon the very essence of being human: intelligence. After all, he observed, intelligence is in our name: Homo sapiens. Intelligence is how our species distinguishes itself from animals. In this regard, AI raises questions about the nature of humanity.

Mr. Buston noted a significant development boom in AI of late, resulting primarily from advancements in machine learning. Quoting Professor Stephen Hawking, he observed that the creation of AI will be "either the best, or the worst thing, ever to happen to humanity". The difference between this intelligence revolution and past revolutions, he added, is the extraordinary scale and scope of the change. He noted that, although AI will, for instance, turbocharge productivity, maximise the effective use of resources, create safer roads and even lead to the creation of entirely new categories of employment for the children of the future, there


are serious underlying economic and social concerns that should not be glossed over.

Exemplifying this, he noted that 35% of the routine intellectual jobs in the United Kingdom are

at risk from automation over the next 20 years. This amounts to approximately 15 million jobs.

Similarly, he referred to the State of Nevada’s recent adoption of legislation and regulations

allowing for the use of autonomous trucks on its roads, observing that “truck driver” is the top

job in the majority of US States. Moving beyond the West, he questioned what automation

means for low-income countries, or those countries whose economic strategy is based upon

the provision of low cost labour. Developments in AI and robotics will seriously challenge

these countries to diversify their economic strategies.

Notwithstanding this, there seems to be a certain degree of complacency regarding

automation and a clear lack of public and political focus. He observed that a recent poll in the

UK indicated that British people generally tended not to be worried about jobs being replaced.

In the political context, he noted that there have been only 32 references to AI in the House of Commons. Notwithstanding this, he noted some positive political developments, including the House of Commons Science and Technology Committee's 2016 report on robotics and artificial intelligence.

Rather than overly focusing on the negative aspects however, Mr. Buston felt it is important to

be propositional and focus on what can be done to ensure the progression of this intelligence

revolution is consistent with these economic and social concerns. Regulating AI however, he

explained, would not be the solution, simply because regulating AI is as unrealistic as

regulating mathematics. If regulation is to be seriously considered, it would need to target applications of AI to specific functions. Rather, Mr. Buston proposed the following steps:

1) AI should be at the heart of the UK's industrial and trade deals in a post-Brexit environment.

2) Detailed and granular research on AI's impact on the job market must be performed.

3) Smart strategies to address automation's impact on the job market must be developed.

4) A universal online self-employment system should be developed.

5) Radical reform of the educational system should be undertaken.

In conclusion, Mr. Buston noted that we always hear about the need to avoid “hindering

innovation” and, while this is true, we must also start thinking about the need to avoid

“hindering our wellbeing”. The problem, he suggested, is that general understanding of the

real impact of robotics and AI is weak, and the media's presentation of a dystopian robotic future does not help the cause. To really effect change, the various concerned sectors must organize themselves into a community and tackle the issues head-on with an advocacy strategy firmly focused on our economic and social wellbeing.


9. Long term issues of artificial intelligence and the

future of humanity

Following Mr. Buston’s talk on some of the immediate issues associated with AI and robotics,

Mr. Kyle Scott of the Future of Humanity Institute (FHI) at the University of Oxford turned the

focus to longer-term issues. Referencing Prof. Nick Bostrom and his seminal works on AI, Mr.

Scott observed that humans, as a species, exist in a relatively unstable state, somewhere between extinction and utilising technological maturity to unlock our cosmic endowment. These are the two attractor states, and mankind is on a balance beam between them, trying to avoid being drawn toward extinction.

Mr. Scott continued, noting that, if earthquakes and volcanoes have not destroyed mankind over many thousands of years, we can assume the likelihood that they will destroy us in the

next 100 years is low. In this regard, if we as a species are to face an extinction-level event it

is more likely to be from a new challenge, with emerging technologies being primary

candidates.

Looking at the concept of artificial intelligence, Mr. Scott explained that intelligence is what separates us from gorillas; intelligence and cooperation enabled our species to take over this

planet, and an AI that far surpasses our own intelligence is likely to have a similar strategic

advantage. He went on to distinguish between the different types of AI: narrow intelligence,

which can exceed human performance in a narrow domain such as chess; general

intelligence, which can match human performance in a variety of domains; and

superintelligence, which greatly exceeds the performance of humans in virtually all domains of

interest. Deep Blue, Mr. Scott elaborated, was great at chess but could not play checkers. The algorithms used to create AlphaGo, on the other hand, already constitute a broader AI than Deep Blue, as they can achieve superhuman performance in a variety of games. If an AI is capable

of matching human-level performance in designing improved agents, it is plausible that this

could trigger an "intelligence explosion" resulting in an artificial "superintelligence".

Speculation about AI is however, Mr. Scott explained, significantly affected by

anthropomorphic bias, leading to erroneous analogies with humans. He continued, noting that humans tend to rationalise their behaviour and position in the world on the basis of concepts like "Mind", "Consciousness" and "Spirit". Accordingly, some feel that true 'artificial' intelligence

can never really be achieved because an AI can never have these special characteristics that

are unique to us alone. Mr. Scott went on to point out that an agent does not need these

special characteristics in order to perform at a superhuman level. A heat-seeking missile, for example, is capable of tracking targets at a superhuman level of performance but does not


have ‘consciousness’. These anthropomorphic mistakes, he noted, can affect our ability to

understand and assess the potential of these technologies.

In terms of timelines, Mr. Scott noted that, while they do not like to give predictions, the Future

of Humanity Institute did conduct a general survey amongst eminent representatives of the AI

community, seeking to ascertain when they thought AI would arrive. Most experts felt that

there was a 50% probability that human-level machine intelligence would arise between 2020

and 2075. The median answer to the survey, he stated, fell between 2040 and 2050. Mr. Scott

concluded, noting that there was also general consensus that, before we reach this stage,

there was a lot of work to be done in addressing the problems of AI safety.

10. Robotics and artificial intelligence at the United

Nations

In the final session, Mr. Irakli Beridze, Senior Strategy and Policy Advisor at UNICRI, talked

about the United Nations’ (UN) perspective on advancements in the field of robotics and AI. To

start, Mr. Beridze noted that, to a large degree, the public considers AI and robotics to be something futuristic, something confined to the realm of science fiction. As we have already

heard many times today though, he observed, this is very much not the case. AI and robotics

are already all around us and are here to stay. Exemplifying this, he noted the rate of

technological change in both the civil and military settings from 2003 to 2017.

However, how has the world’s foremost international organization reacted to these

technological developments? Explaining the structure and nature of the UN system, Mr.

Beridze noted that there has not been a common unified approach to AI and robotics within

the UN. Notwithstanding this, a number of organizations and agencies within the UN system

have taken note of the advancements in the field of AI and robotics from their respective

positions. Perhaps most notably, as Prof. Sharkey also mentioned earlier, since 2014 there

has been an ongoing discussion of lethal autonomous weapons systems (LAWS), or so-called

“killer robots”, in the context of the United Nations’ Convention on Certain Conventional

Weapons (CCW). The establishment of a Group of Governmental Experts (GGE) in December

2016 to specifically discuss the legal, ethical and societal concerns of LAWS over a three-

week period later this year is a significant advancement in these discussions. The United

Nations Institute for Disarmament Research (UNIDIR), he added, has also supported the

discussion of LAWS at the CCW through its work on the weaponization of increasingly

autonomous technologies in the context of security and disarmament, work that has resulted


in the publication of a series of reports to frame the complex issues that surround LAWS in an

accessible manner.

Moving beyond the realm of conflict, he observed that the International Telecommunication

Union is set to explore the latest developments in AI innovation and their implications at its

upcoming AI for Good Global Summit in Geneva in June 2017, which will be the first of a

series of annual conferences on AI Innovation. The ITU has recognized the important role of

AI in the achievement of the United Nations' Sustainable Development Goals (SDGs) and in

helping to solve humanity's challenges by capitalizing on the unprecedented quantities of data now being generated on sentiment, behaviour, human health, commerce, communications,

migration and more. The United Nations Chief Information Technology Officer, Ms. Atefeh

Riazi, has also noted the world-changing potential of AI, underlining at the same time the importance of considering both the positive and negative moral and ethical implications and the importance of crafting appropriate policies.

Looking closer to home, Mr. Beridze explained that UNICRI launched its own programme on

AI and robotics in 2015. The programme seeks to support the development of an international

infrastructure to identify and understand in greater detail the risks and benefits of

advancements in AI and robotics; to facilitate stakeholder discussions; and to support the

development of international and national approaches that minimize the risks and maximise

the benefits of AI. He then described some of UNICRI’s contributions in this regard, which

included the organization of a side-event during the 70th session of the UN General Assembly

in New York in October 2015 with the participation of renowned experts, including Prof.

Tegmark and Prof. Bostrom, to brief delegations on the current and likely future capabilities of

artificially intelligent systems. The event was repeated in 2016, with the support of 1QBit, the

FBI, SICPA, DSTL, and eminent ethicist and scholar Prof. Wendell Wallach. Following the

2015 event, the UN Group of Friends on Chemical, Biological, Radiological and Nuclear

(CBRN) Risk Mitigation and Security Governance acknowledged the increasing importance of

AI and robotics for international security and decided to remain seized of the matter. In

November 2015, Mr. Beridze noted, UNICRI also collaborated with the Organization for the

Prohibition of Chemical Weapons (OPCW) and The Hague Security Delta to organize a side-

event during the 20th session of the Conference of States Parties to the Chemical Weapons

Convention (CWC). During the event member states were briefed on the technological trends

and on what AI and robotics innovations might mean for the implementation of the Chemical

Weapons Convention. Subsequently, in March 2016, UNICRI launched the first edition of its AI

and robotics public-awareness and educational programme with a training course for media

and security professionals hosted by the Netherlands Institute of International Relations

Clingendael. Today’s workshop, Mr. Beridze observed, is the second edition of this


programme, which he hopes will continue into the future, becoming a more regular event and

spreading from Europe to Africa, the Americas, and Asia.

All this, Mr. Beridze explained, is building toward the opening of UNICRI's Centre on AI and

Robotics later this year in the city of The Hague, the Netherlands. The Centre, he continued,

will serve to enhance understanding of the risk-benefit duality of AI and robotics through

improved coordination, knowledge collection and dissemination, awareness-raising and

outreach activities. Recognizing the potential for AI to contribute to the implementation of the

SDGs, Mr. Beridze said that one of the Centre's immediate ambitions is to find ways to bring practical AI tools to the developing world to support implementation of the SDGs. Overall, it was

his hope that this Centre will support policy-makers in developing balanced and appropriate

policies.

Concluding remarks

Before bringing the event to a close, Mr. Beridze invited the speakers to deliver final remarks

on the most important take-home point from the workshop. In general, there was consensus

amongst the speakers that the current status quo with respect to AI and robotics should not

continue. Change is required and this change should have AI ethics at its core. Although there

is an increasing interest in the field, with more and more discussion of the various issues, what

is evidently still lacking is a collective strategy on how to proceed in addressing the many

economic, social, legal, and ethical concerns associated with AI and robotics. Fostering a

global AI and robotics community and securing the support of global governance are critical

steps in this regard. It was suggested that inspiration might be taken from the struggle of climate change advocates against the purely economic approach to energy, and from how they went about placing climate change on the international agenda, prompting the formulation of international legal instruments such as the Kyoto Protocol to the United Nations Framework Convention on Climate Change.

In conclusion, Mr. Beridze reflected on Dr. Karachalios’ keynote address, in which he

highlighted the importance of dialogue and building bridges amongst concerned communities,

and argued that international dialogue on the risks and benefits of AI and robotics should continue.

Relying on the combined expertise in the room, he felt that they could bring techno-scientific

communities and UN Member States together on AI and robotics for the benefit of humanity.

With that, Dr. Tuveson and Mr. Beridze, on behalf of the Cambridge Centre for Risk Studies

and UNICRI, officially brought the event to a close, thanking the distinguished faculty of

speakers for their fascinating contributions and the participants for their active participation in

discussions.


Hosted by:

The Risks and Benefits of Artificial Intelligence and Robotics A workshop for media and security professionals

Date: 6 - 7 February 2017; 09:00 - 17:00

Location: University of Cambridge Judge Business School

Trumpington Street, Cambridge, UK CB2 1AG

Meeting Convenors:

• Mr. Irakli Beridze, Senior Strategy and Policy Advisor, United Nations Interregional Crime and Justice

Research Institute

• Dr Michelle Tuveson, Founder & Executive Director, Cambridge Centre for Risk Studies at the University

of Cambridge Judge Business School

Workshop Agenda

Monday 6 February 2017

08:50 – 09:00 Welcome and Introductions, UNICRI and the Cambridge Centre for Risk Studies

09:00 – 09:30 Keynote Address: Konstantinos Karachalios, Ph.D, Managing Director of The Institute of Electrical and Electronics Engineers (IEEE) Standards Association and Member of the Management Council of IEEE

Session One:

09:30 – 10:15 Artificial Intelligence and Robotics 101: What Is It and Where Are We Now?, Prof. Noel Sharkey, University of Sheffield, UK, Co-Founder of the Foundation for Responsible Robotics (FRR) and Chairman of the International Committee for Robot Arms Control (ICRAC) (TBC)

10:15 – 10:45 Discussion moderated by Prof. Sharkey

10:45 – 11:15 Coffee & Tea


Session Two:

11:15 – 12:00 Ethics and Artificial Intelligence, Ms. Kay Firth-Butterfield, Barrister-at-Law, Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin, Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics

12:00 – 12:45 Discussion moderated by Ms. Firth-Butterfield

12:45 – 13:30 Lunch in the common room

13:30 – 14:15 Demo from Darktrace (in the large lecture theatre)

Session Three:

14:15 – 15:00 The Cyber-Security Overlap:

The Triangle of Pain: The Role of Policy, Public and Private Sectors in Mitigating the Cyber Threat - Professor Daniel Ralph, Academic Director, Cambridge Centre for Risk Studies & Professor of Operations Research, University of Cambridge Judge Business School

Modeling the Cost of Cyber Catastrophes to the Global Economy - Simon Ruffle, Director of Technology Research & Innovation, Cambridge Centre for Risk Studies

Towards Cyber Insurance: Approaches to Data and Modeling - Jennifer Copic, Research Assistant, Cambridge Centre for Risk Studies

15:00 – 15:45 Discussion moderated by Dr. Michelle Tuveson, Executive Director, Cambridge Centre for Risk Studies

15:45 – 16:15 Coffee & Tea

Session Four:

16:15 – 17:00 From Fear to Accountability - the State of Artificial Intelligence Journalism, Mr. John C. Havens, Executive Director of the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems and contributing writer for Mashable and The Guardian

17:00 – 17:45 Discussion moderated by Mr. Havens

Tuesday 7 February 2017

09:00 – 09:15 Recap of First Day Takeaways, Mr. Irakli Beridze, UNICRI

Session Five:

09:15 – 10:00 Emerging Technologies: Quantum Computing, Dr. Natalie Mullin, 1QBit

10:00 – 10:30 Discussion moderated by Dr. Mullin

10:30 – 11:00 Coffee & Tea

Session Six:

11:00 – 11:45 Economic and Social Implications of Robotics and Artificial Intelligence, Mr. Olly Buston, Founding Director, Future Advocacy

11:45 – 12:30 Discussion moderated by Mr. Buston


12:30 – 14:00 Lunch

Session Seven:

14:00 – 14:45 Long Term Issues of Artificial Intelligence and the Future of Humanity, Mr. Kyle Scott, the Future of Humanity Institute (University of Oxford)

14:45 – 15:30 Discussion moderated by Mr. Scott

15:30 – 16:00 Coffee & Tea

Session Eight:

16:00 – 16:45 Robotics and Artificial Intelligence at the United Nations, Mr. Irakli Beridze, UNICRI

16:45 – 17:15 Discussion moderated by Mr. Beridze

Panel Discussion:

17:15 – 18:15 Open panel discussion moderated by Mr. Irakli Beridze, UNICRI, and Dr. Michelle Tuveson, the Cambridge Centre for Risk Studies. Panellists include:

• Dr. Ing. Konstantinos Karachalios

• Mr Olly Buston,

• Mr. John C. Havens,

• Dr. Natalie Mullin,

• Mr. Kyle Scott,

• Ms. Kay Firth-Butterfield, and

• Dr Stephen Cave, Executive Director, Leverhulme Centre for the Future of Intelligence, University of Cambridge


Hosted by:

The Risks and Benefits of Artificial Intelligence and Robotics

A workshop for media and security professionals

Speakers' Biographies

Mr. Irakli Beridze

Senior Strategy and Policy Advisor at UNICRI, with more than 18 years of experience in leading highly political and complex multilateral negotiations and in developing stakeholder engagement programmes and channels of communication with governments, UN agencies, international organizations, think tanks, civil society, foundations, academia, private industry and other partners at the international level. Prior to joining UNICRI, he served as a special projects officer at the Organisation for the Prohibition of Chemical Weapons (OPCW), undertaking extensive missions in politically sensitive areas around the globe. He received recognition on the awarding of the Nobel Peace Prize to the OPCW in 2013. Since 2015, he has initiated and headed the first UN programme on artificial intelligence and robotics, and he is leading the creation of the UN Centre on AI and Robotics, with the objective of enhancing understanding of the risk-benefit duality of AI through improved coordination, knowledge collection and dissemination, awareness-raising and global outreach activities. He is a member of various international task forces and working groups advising governments and international organisations on numerous issues related to international security, emerging technologies and global political trends.

Dr. Michelle Tuveson

Michelle Tuveson is a Founder and Executive Director at the Cambridge Centre for Risk Studies hosted at the University of Cambridge Judge Business School. Her responsibilities include the overall executive leadership at the Centre. This includes developing partnership relationships with corporations, governments, and other academic centres. Dr Tuveson leads the Cambridge CRO Council and she chairs the organising committee for the Cambridge Risk Centre's Annual Risk Summits. She is one of the lead organisers of the Aspen Crisis and Risk Forum. She is an advisor to the World Economic Forum's 2015 Global Risk Report and a contributor to the Financial Times Special Report on Risk Management. She is also an advisor to a number of corporations and boards as well as a frequent conference speaker.


Dr Tuveson has worked in corporations within the technology sector, with her most recent position in the Emerging Markets Group at Lockheed Martin. Prior to that, she held positions with management strategy firm Booz Allen & Hamilton and US R&D organisation MITRE Corporation. Dr Tuveson's academic research focusses on the application of simulation models to study risk governance structures associated with the role of the Chief Risk Officer. She was recognized by Career Communications Group, Inc. as a Technology Star for Women in Science, Technology, Engineering and Maths (STEM). She earned her BS in Engineering from the Massachusetts Institute of Technology, her MS in Applied Math from Johns Hopkins University, and her PhD in Engineering from the University of Cambridge. She is a member of Christ's College, Cambridge.

Dr. Ing. Konstantinos Karachalios

A globally recognized leader in standards development and intellectual property, Dr. Ing. Konstantinos Karachalios is managing director of the IEEE Standards Association and a member of the IEEE Management Council. As managing director, he has been enhancing IEEE efforts in global standards development in strategic emerging technology fields, through technical excellence of staff, expansion of global presence and activities and emphasis on inclusiveness and good governance, including reform of the IEEE standards-related patent policy. As member of the IEEE Management Council, he championed expansion of IEEE influence in key techno-political areas, including consideration of social and ethical implications of technology, according to the IEEE mission to advance technology for humanity. Results have been rapid in coming and profound; IEEE is becoming the place to go for debating and building consensus on issues such as a trustworthy and inclusive Internet and ethics in design of autonomous systems. Before IEEE, Konstantinos played a crucial role in successful French-German cooperation in coordinated research and scenario simulation for large-scale nuclear reactor accidents. And with the European Patent Office, his experience included establishing EPO’s patent academy, the department for delivering technical assistance for developing countries and the public policy department, serving as an envoy to multiple U.N. organizations. Konstantinos earned a Ph.D. in energy engineering (nuclear reactor safety) and masters in mechanical engineering from the University of Stuttgart.

Prof. Noel Sharkey

Noel Sharkey, PhD, DSc, FIET, FBCS, CITP, FRIN, FRSA, is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics (http://responsiblerobotics.org) and chair-elect of the NGO International Committee for Robot Arms Control (ICRAC, http://icrac.net). He has moved freely across academic disciplines, lecturing in departments of engineering, philosophy, psychology, cognitive science, linguistics, artificial intelligence, computer science, robotics, ethics, law, art, design and military colleges. He has held research and teaching positions in the US (Yale and Stanford) and the UK (Essex, Exeter and Sheffield). Noel has been working in AI/robotics and related disciplines for more than three decades and is known for his early work on neural computing and genetic algorithms. As well as writing academic articles, he writes for national newspapers and magazines. Noel has created thrilling robotics museum exhibitions and mechanical art installations, frequently appears in the media and has worked on popular tech TV shows, for example as head judge of Robot Wars. His research since 2006 has been on ethical, legal and human rights issues in robot applications in areas such as the military, child care, elder care, policing, autonomous transport, robot crime, medicine/surgery, border control, sex and civil surveillance. A major part of his current work is advocacy (mainly at the United Nations) about the ethical, legal and technical aspects of autonomous weapons systems.


Ms. Kay Firth-Butterfield

Kay Firth-Butterfield is a Barrister-at-Law and part-time Judge in the United Kingdom where she has also worked as a mediator, arbitrator, business owner and Professor of Law. In the United States, Kay is Executive Director of AI-Austin and former Chief Officer, and member, of the Lucid.ai Ethics Advisory Panel (EAP). She is a humanitarian with a strong sense of social justice and has advanced degrees in Law and International Relations. Kay advises governments, think tanks and non-profits about artificial intelligence, law and policy. Kay co-founded the Consortium for Law and Policy of Artificial Intelligence and Robotics at the University of Texas and as an adjunct Professor of Law teaches “Artificial Intelligence and emerging technologies: Law and Policy”. She is a Distinguished Scholar of the Robert E. Strauss Center for International Security and Law. Kay thinks about and advises on how AI and other emerging technologies will impact business and society, including how business can prepare for that impact in its internal planning and external interaction with customers and other stakeholders and how society will be affected by these technologies. Kay speaks regularly to international audiences addressing many aspects of these challenging changes.

Mr. Dave Palmer

Dave Palmer is a cyber security technical expert with over ten years' experience at the forefront of government intelligence operations. He has worked across UK intelligence agencies GCHQ and MI5, where he delivered mission-critical infrastructure services, including the replacement and security of entire global networks, the development of operational internet capabilities and the management of critical disaster recovery incidents. At Darktrace, Dave oversees the mathematics and engineering teams and product strategy. He holds a first class degree in Computer Science and Software Engineering from the University of Birmingham.

Prof. Daniel Ralph

Professor Daniel Ralph is a Founder and Academic Director of the Centre for Risk Studies, Professor of Operations Research at the University of Cambridge Judge Business School, and a Fellow of Churchill College. Daniel's research interests include the identification and management of systemic risk, risk aversion in investment, economic equilibrium models and optimisation methods. Management stress testing, via the selection and construction of catastrophe scenarios, is one focus of his work in the Cambridge Centre for Risk Studies; another is the role and expression of risk management within organisations. Daniel engages across scientific and social science academia, a variety of commercial and industrial sectors, and government policy making. He was Editor-in-Chief of Mathematical Programming (Series B) from 2007 to 2013.

Mr. Simon Ruffle

Simon Ruffle is Director of Research & Innovation at the Cambridge Centre for Risk Studies and a member of its Executive Team. Simon's responsibilities include managing research in the Centre, particularly the TechCat track (solar storm and cyber catastrophe research) and the Cambridge Risk Framework, a platform for analysing multiple global systemic risks through unified modelling software, a common database architecture and information interchange standards. He is responsible for developing and maintaining partnership relationships with corporations, governments and other academic centres, and he speaks regularly at seminars and conferences. He is developing methods for storing and applying the Centre's stress test scenarios and other risk assessment tools to macro-economic analysis, financial markets and insurance loss aggregation. He is also researching how network theory can be applied to understanding the impact of catastrophes in a globalised world, including on supply chains, insurance and banking.

Originally studying architecture at Cambridge, Simon has spent most of his career in industry, developing software for natural hazards risk. He has worked on risk pricing for primary insurers, catastrophe modelling for reinsurers, and has been involved in placing catastrophe bonds in the capital markets. He has many years of experience in software development, relational databases and geospatial analysis and has worked in a variety of organisations from start-ups to multinationals.

Ms. Jennifer Copic

Jennifer Copic is a Research Associate at the Centre for Risk Studies. Jennifer supports research on scenario stress test development and insurance loss estimation, specifically on emerging topics such as cyber risk. She is particularly excited to work with tools that help visualise complex data sets and enable organisations to make data-driven decisions. She holds a BS in Chemical Engineering from the University of Louisville and an MS in Industrial and Operations Engineering from the University of Michigan.

Prior to joining the Centre for Risk Studies, Jennifer worked as a systems engineer for General Mills at a manufacturing plant. She really enjoys modelling and visualising data in order to help others make more informed decisions.

Mr. John C. Havens

John C. Havens is Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The Initiative is creating a document called Ethically Aligned Design to provide recommendations for values-driven Artificial Intelligence and Autonomous Systems, as well as standards recommendations. Guided by over one hundred thought leaders, The Initiative has a mission of ensuring every technologist is educated, trained and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems. John is also a regular contributor on issues of technology and wellbeing to Mashable, The Guardian, HuffPo and TechCrunch, and is the author of Heartificial Intelligence: Embracing Our Humanity To Maximize Machines and Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Change the World. John was an EVP of a Top Ten PR firm, a VP of a tech startup, and an independent consultant, working with clients such as Gillette, P&G, HP, Wal-Mart, Ford, Allstate, Monster, Gallo Wines and Merck. He was also the Founder of The Happathon Project, a non-profit utilizing emerging technology and positive psychology to increase human wellbeing. John has spoken at TEDx, at SXSW Interactive (six times), and as a global keynote speaker for clients including Cisco, Gillette, IEEE and NXP Semiconductors. John was also a professional actor on Broadway, TV and film for fifteen years. For more information, visit John's site (http://www.johnchavens.com/) or follow him @johnchavens.

Dr. Natalie Mullin

Natalie Mullin is a mathematician and quantum algorithms researcher at 1QBit, the world’s first software company dedicated to quantum computing. Natalie completed her doctorate in Combinatorics and Optimization at the University of Waterloo. Her research interests include graph theory, machine learning, and operations research. At 1QBit, Natalie develops optimization algorithms that utilize quantum annealing. She is currently investigating hybrid classical and quantum combinatorial algorithms that make optimal use of both computational paradigms.

Mr. Olly Buston

Olly Buston is CEO of the think tank and consultancy Future Advocacy (www.futureadvocacy.org), which works on some of the greatest challenges faced by humanity in the 21st century. Olly is the author of the recent report An Intelligent Future?, which focuses on what governments can do to maximise the opportunities and minimise the risks of artificial intelligence. Previously, Olly was Director of the ONE campaign for seven years. He has also run the global anti-slavery movement Walk Free, been an Executive Director of the UK Labour Party, and led Oxfam International's global education campaign from Washington DC. Follow him on Twitter @ollybuston.

Mr. Kyle Scott

Kyle Scott is the Press Officer at the Future of Humanity Institute at the University of Oxford. He began working on artificial intelligence and existential risk through his previous roles in effective altruist organizations such as the Centre for Effective Altruism and 80,000 Hours, where he juggled generalist responsibilities spanning research, finance, marketing, web development, office administration and more. Kyle graduated from Whitman College, where he studied philosophy and international development.

Dr. Stephen Cave

Dr Stephen Cave is Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI) and Senior Research Fellow at the University of Cambridge. Previously, he worked for the British Foreign Office as a policy advisor and diplomat. He has written on a wide range of philosophical and scientific subjects, including for the New York Times, the Atlantic, the Guardian, the Telegraph, the Financial Times and Wired, and has appeared on television and radio around the world. His book 'Immortality' was a New Scientist book of the year. He has a PhD in philosophy from the University of Cambridge.
