

Ethical & Social: Societal Implications of AI

Contents

Readings:

• Introductory

• General

• Videos

• FAQs

• Related Resources

• News Feed

Any sci-fi buff knows that when computers become self-aware, they ultimately destroy their creators.

From 2001: A Space Odyssey to Terminator, the message is clear: The only good self-aware machine is an

unplugged one. We may soon find out whether that's true. ... But what about HAL 9000 and the other

fictional computers that have run amok? "In any kind of technology there are risks," [Ron] Brachman

acknowledges. That's why DARPA is reaching out to neurologists, psychologists - even philosophers - as

well as computer scientists. "We're not stumbling down some blind alley," he says. "We're very cognizant

of these issues."

- Good Morning, Dave... The Defense Department is working on a self-aware computer. By Kathleen

Melymuka. Computerworld (November 11, 2002)

With respect to social consequences, I believe that every researcher has some responsibility to assess,

and try to inform others of, the possible social consequences of the research products he is trying to

create.

- from Herbert A. Simon's autobiography, Models of My Life

"Technology is heading here. It will predictably get to the point of making artificial intelligence," [Eliezer]

Yudkowsky said. "The mere fact that you cannot predict exactly when it will happen down to the day is no

excuse for closing your eyes and refusing to think about it."

- Techies ponder computers smarter than us. By Marcus Wohlsen. The Associated Press via Yahoo! (September 8, 2007)

As computers are programmed to act more like people, several social and ethical concerns come into

focus. For example: Are there ethical bounds on what computers should be programmed to do? Sources

listed here focus on AI, but also included are works that range more broadly into the general impact of 


computerization.

INTRODUCTORY MATERIALS

AAAI Presidential Panel on Long-term AI Futures. 2008-2009.

AAAI Presidential Panel Interim Report. October 2009.

Associated Press story on the AAAI Presidential Panel and Stanford Law School discussion. December 6,

2009.

NY Times story on the AAAI Presidential Panel on Long-term AI Futures (on AI and Society). July 26, 2009.

"The Online Ethics Center for Engineering and Science National Academy of Engineering. "Computers andNew Technology: Material addressing the specific ethical issues arising from computers,

computer/software engineering, and the Internet, as well as other emerging technologies, such as

nanotechnology. The section includes cases, essays, ethical guidelines, and web resources. ." Be sure to

see the Essays and Articles section and theirGlossary of Ethical Terms.

We have the technology - Bionic eyes, robot soldiers and kryptonite were once just film fantasy. But now

science fiction is fast becoming fact. So how will it change our lives? By Gwyneth Jones. The Guardian

(April 25, 2007). "Our gadgets are just like our children. They have the potential to be marvellous, to

surpass all expectations. But children (and robots) don't grow up intelligent, affectionate, helpful and

good-willed all by themselves. They need to be nurtured. The technology, however fantastic, is neutral.

It's up to us to decide whether that dazzling new robot brain powers a caring hand, or a speedy fist highly

accurate at throwing grenades."

Gianmarco Veruggio - Roboethics [podcast]. Talking Robots (January 3, 2008). "In this interview we talk to

Gianmarco Veruggio who founded the association Scuola di Robotica in Genova (Italy) to study the

complex relationship between Robotics and Society. This led him to coin the term and propose the

concept of Roboethics, or the field of Ethics applied to robotics. He discusses topics such as the use of 

robots in our everyday environments, the lethality and benefits of medical robots or military robots,

augmented humans and robots as human-like artifacts. Should we start thinking like Asimov, deriving

laws and limits to apply for the peaceful cohabitation of humans and robots?"

• Visit Scuola di Robotica below.

Of Robots and Men - Rights for the Artificially Intelligent (radio broadcast; January 23, 2007). Listen as

"KJZZ's Dennis Lambert speaks with Scottsdale attorney David Calverley, whose research into bioethics is

driving him to artificial intelligence."


• Also see David J. Calverley's paper, Additional Thoughts Concerning the Legal Status of a Non-

Biological Machine. In Machine Ethics: Papers from the 2005 AAAI Fall Symposium, ed. Michael

Anderson, Susan Leigh Anderson, and Chris Armen. Technical Report FS-05-06. American

Association for Artificial Intelligence, Menlo Park, California. Abstract: "Law, as a pragmatic tool,

provides us with a way to test, at a conceptual level, whether a humanly created non-biological

machine could be considered a legal person. This paper looks first at the history of law in order to

set the foundation for the suggestion that as a normative system it is based on a folk psychology

model. Accepting this as a starting point allows us to look to empirical studies in this area to

gather support for the idea that 'intentionality', in the folk psychology sense, can give us a

principled way to argue that non-biological machines can become legal persons. In support of this

argument I also look at corporate law theory. However, as is often the case, because law has

historically been viewed as a human endeavor, complications arise when we attempt to apply its

concepts to non-human persons. The distinction between human, person and property is

discussed in this regard, with particular note being taken of the concept of slavery. The

conclusion drawn is that intentionality in the folk sense is a reasonable basis upon which to rest at

least one leg of an argument that a nonbiological machine can be viewed as a legal person."

Surveillance Society - New High-Tech Cameras Are Watching You. In the era of computer-controlled

surveillance, your every move could be captured by cameras, whether you're shopping in the grocery

store or driving on the freeway. Proponents say it will keep us safe, but at what cost? By James Vlahos.

Popular Mechanics (January 2008). "Liberty Island's video cameras all feed into a computer system. The

park doesn't disclose details, but fully equipped, the system is capable of running software that analyzes

the imagery and automatically alerts human overseers to any suspicious events. The software can spot

when somebody abandons a bag or backpack. It has the ability to discern between ferryboats, which are

allowed to approach the island, and private vessels, which are not. And it can count bodies, detecting if 

somebody is trying to stay on the island after closing, or assessing when people are grouped too tightly

together, which might indicate a fight or gang activity. 'A camera with artificial intelligence can be there

24/7, doesn't need a bathroom break, doesn't need a lunch break and doesn't go on vacation,' says Ian

Ehrenberg, former vice president of Nice Systems, the program's developer. Most Americans would

probably welcome such technology at what clearly is a marquee terrorist target. An ABC

News/Washington Post poll in July 2007 found that 71 percent of Americans favor increased video

surveillance. What people may not realize, however, is that advanced monitoring systems such as the

one at the Statue of Liberty are proliferating around the country. ... 'Society is fundamentally changing

and we aren't having a conversation about it,' [Bruce] Schneier says. ... In the late 18th century, English

philosopher Jeremy Bentham dreamed up a new type of prison: the panopticon. It would be built so that

guards could see all of the prisoners at all times without their knowing they were being watched, creating

'the sentiment of an invisible omniscience,' Bentham wrote." [A toy sketch of this kind of video analytics appears after the related items below.]

• Also read this opinion piece: Watching the Watchers - Why Surveillance Is a Two-Way Street. If 

governments and businesses can keep an eye on us in public spaces, we ought to be able to look

back. Op-Ed by Glenn Harlan Reynolds. Popular Mechanics (January 2008). "Today's pervasive


surveillance may seem like something out of 1984, but access to technology has become a lot

more democratic since Orwell's time."

• Also listen to this podcast: America’s New Surveillance Society. By Matt Sullivan. Popular

Mechanics (December 7, 2007). "Every day we're being watched a little bit more, by intelligent

cameras, unmanned aircraft and newfound gadgetry. We'll get an exclusive report on FAA-

approved drone tests by American law-enforcement agencies, suggestions from Instapundit

blogger and PM contributing editor Glenn Reynolds on how to watch back, and a first look at

eye-tracking hardware that might make Google millions."
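To make the "abandoned bag" capability described in the surveillance feature above concrete, here is a minimal, hypothetical sketch of one textbook way such analytics are built: background subtraction plus a persistence test. It assumes OpenCV and an invented video file name; it is in no way Nice Systems' actual product, whose internals are undisclosed.

```python
# Hypothetical sketch: flag regions that stay in the foreground for a long
# time, a crude stand-in for "abandoned object" detection. Assumes OpenCV
# (pip install opencv-python); "dock_camera.mp4" is an invented file name.
import cv2

cap = cv2.VideoCapture("dock_camera.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500)
persistence = None  # per-pixel count of consecutive foreground frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)        # 255 = foreground, 127 = shadow
    fg = (mask == 255).astype("uint16")
    # Increment where still foreground; reset to zero where background.
    persistence = fg if persistence is None else (persistence + fg) * fg
    # Foreground for ~300 straight frames (about 10 s at 30 fps).
    stale = (persistence > 300).astype("uint8") * 255
    contours, _ = cv2.findContours(stale, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:      # ignore small noise blobs
            print("possible abandoned object at", cv2.boundingRect(c))

cap.release()
```

A deployed system would add tracking, owner association, and human review, but the core signal really is this cheap, which is part of why such systems proliferate and why the policy questions raised above are pressing.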

Trust me, I'm a robot - Robot safety: As robots move into homes and offices, ensuring that they do not

injure people will be vital. But how? The Economist Technology Quarterly (June 8, 2006). "Last year there

were 77 robot-related accidents in Britain alone, according to the Health and Safety Executive. With

robots now poised to emerge from their industrial cages and to move into homes and workplaces,

roboticists are concerned about the safety implications beyond the factory floor. To address these

concerns, leading robot experts have come together to try to find ways to prevent robots from harming

people. Inspired by the Pugwash Conferences -- an international group of scientists, academics and

activists founded in 1957 to campaign for the non-proliferation of nuclear weapons -- the new group of 

robo-ethicists met earlier this year in Genoa, Italy, and announced their initial findings in March at the

European Robotics Symposium in Palermo, Sicily. ... According to the United Nations Economic

Commission for Europe's World Robotics Survey, in 2002 the number of domestic and service robots more

than tripled, nearly outstripping their industrial counterparts. ... So what exactly is being done to protect

us from these mechanical menaces? 'Not enough,' says Blay Whitby, an artificial-intelligence expert at

the University of Sussex in England. ... Robot safety is likely to surface in the civil courts as a matter of 

product liability. 'When the first robot carpet-sweeper sucks up a baby, who will be to blame?' asks John

Hallam, a professor at the University of Southern Denmark in Odense. If a robot is autonomous and

capable of learning, can its designer be held responsible for all its actions? Today the answer to these

questions is generally 'yes'. But as robots grow in complexity it will become a lot less clear cut, he says."

Machine Ethics: Creating an Ethical Intelligent Agent. By Michael Anderson and Susan Leigh Anderson. AI

Magazine 28(4): Winter 2007, 15. "The newly emerging field of machine ethics (Anderson and Anderson

2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics -- which has

traditionally focused on ethical issues surrounding humans’ use of machines -- machine ethics is

concerned with ensuring that the behavior of machines toward human users, and perhaps other machines

as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for


machines that represent ethical principles explicitly, and the challenges facing those working on machine

ethics. We also give an example of current research in the field that shows that it is possible, at least in a

limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments

and use that principle to guide its own behavior."

Machine Ethics - special issue of IEEE Intelligent Systems 21(4): July/August 2006. As stated in

the introduction: "This special issue stems from the AAAI 2005 Fall Symposium on Machine Ethics. The

symposium brought together participants from computer science and philosophy to clarify the nature of 

this newly emerging field and discuss potential approaches toward realizing the goal of creating an

ethical machine."

• Guest Editors' Introduction. By Michael Anderson and Susan Leigh Anderson. "Machine ethics is

concerned with how machines behave toward human users and other machines. It aims to create

a machine that's guided by an acceptable ethical principle or set of principles in the decisions it

makes about possible courses of action it could take. As ethics experts continue to progress

toward consensus concerning the right way to behave in ethical dilemmas, the task for those

working in machine ethics is to codify these insights. Eight articles in this special issue address

the issues." [Full text available.]

• Why Machine Ethics? By Colin Allen, Wendell Wallach, and Iva Smit. "Machine ethics, machine

morality, artificial morality, and computational ethics are all terms for an emerging field of study

that seeks to implement moral decision-making faculties in computers and robots. Machine ethics

is not merely science fiction but a topic that requires serious consideration given the rapid

emergence of increasingly complex autonomous software agents and robots. The authors

introduce the issues shaping this new field of enquiry and describe issues regarding the

development of artificial moral agents."

• The Nature, Importance, and Difficulty of Machine Ethics. By James H. Moor. "Machine ethics has a

broad range of possible implementations in computer technology--from maintaining detailed

records in hospital databases to overseeing emergency team movements after a disaster. From a

machine ethics perspective, you can look at machines as ethical-impact agents, implicit ethical

agents, explicit ethical agents, or full ethical agents. A current research challenge is to develop

machines that are explicit ethical agents. This research is important, but accomplishing this goal

will be extremely difficult without a better understanding of ethics and of machine learning and

cognition."

• Particularism and the Classification and Reclassification of Moral Cases. By Marcello Guarini. "Is it

possible to learn to classify cases as morally acceptable or unacceptable without using moral

principles? Jonathan Dancy has suggested that moral reasoning (including learning) could be

done without moral principles, and he has suggested that neural network models could aid in

understanding how to do this. This article explores Dancy's suggestion by presenting a neural

network model of case classification. The author argues that although some nontrivial case

classification might be possible without explicitly consulting or executing moral principles, the

process of reclassifying cases is best explained by using moral principles."

• Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions. By


Bruce M. McLaren. "Computational models of ethical reasoning are in their infancy in the field of 

artificial intelligence. Ethical reasoning is a particularly challenging area of human behavior for AI

scientists and engineers because of its reliance on abstract principles, philosophical theories not

easily rendered computational, and deep-seated, even religious, beliefs. A further issue is this

endeavor's ethical dimension: Is it even appropriate for scientists to try to imbue computers with

ethical-reasoning powers? A look at attempts to build computational models of ethical reasoning

illustrates this task's challenges. In particular, the Truth-Teller and SIROCCO programs incorporate

AI computational models of ethical reasoning, both of which model the ethical approach known as

casuistry. Truth-Teller compares pairs of truth-telling cases; SIROCCO retrieves relevant past

cases and principles when presented with a new ethical dilemma. The computational model

underlying Truth-Teller could serve as the basis for an intelligent tutor for ethics."

• Toward a General Logicist Methodology for Engineering Ethically Correct Robots. By Selmer

Bringsjord, Konstantine Arkoudas, and Paul Bello. "It's hard to deny that robots will become

increasingly capable and that humans will increasingly exploit these capabilities by deploying

them in ethically sensitive environments, such as hospitals, where ethically incorrect robot

behavior could have dire consequences for humans. How can we ensure that such robots will

always behave in an ethically correct manner? How can we know ahead of time, via rationales

expressed clearly in natural language, that their behavior will be constrained specifically by the

ethical codes selected by human overseers? In general, one approach is to insist that robots only

perform actions that can be proved ethically permissible in a human-selected deontic logic--that

is, a logic that formalizes an ethical code. Ethicists themselves work by rendering ethical theories

and dilemmas in declarative form and reasoning over this information using informal and formal

logic. The authors describe a logicist methodology in general terms, free of any commitment to

particular systems, and show it solving a challenge regarding robot behavior in an intensive care

unit." [A toy permissibility check in this spirit appears after this list.]

• Prospects for a Kantian Machine. By Thomas M. Powers. "Rule-based ethical theories like Kant's

appear to be promising for machine ethics because of the computational structure of their

judgments. Kant's categorical imperative is a procedure for mapping action plans (maxims) onto

traditional deontic categories--forbidden, permissible, obligatory--by a simple consistency test on

the maxim. This test alone, however, would be trivial. We might enhance it by adding a

declarative set of "buttressing" rules. The ethical judgment is then an outcome of the consistency

test, in light of the supplied rules. While this kind of test can generate nontrivial results, it might

do no more than reflect the prejudices of the builder of the declarative set; the machine will

"reason" straightforwardly, but not intelligently. A more promising (though speculative) option

would be to build a machine with the power of nonmonotonic inference. But this option too faces

formal challenges. The author discusses these challenges to a rule-based machine ethics, starting

from a Kantian framework." [A toy sketch of such a consistency test appears after this list.]

• There Is No "I" in "Robot": Robots and Utilitarianism. By Christopher Grau. "Utilizing the film I,

Robot as a springboard, this article considers the feasibility of robot utilitarians, the moral

responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics

for robot-robot interaction as opposed to robot-human interaction." [Full text available for a

limited time.]


• An Approach to Computing Ethics. By Michael Anderson, Susan Leigh Anderson, and Chris Armen.

"To make ethics computable, we've adopted an approach to ethics that involves considering

multiple prima facie duties in deciding how one should act in an ethical dilemma. We believe this

approach is more likely to capture the complexities of ethical decision making than a single,

absolute-duty ethical theory. However, it requires a decision procedure for determining the

ethically correct action when the duties give conflicting advice. To solve this problem, we employ

inductive-logic programming to enable a machine to abstract information from ethical experts'

intuitions about particular ethical dilemmas, to create a decision principle. We've tested our

method in the MedEthEx proof-of-concept system, using a type of ethical dilemma that involves

18 possible combinations of three prima facie duties. The system needed just four training cases

to create an ethically significant decision principle that covered the remaining cases."
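As a rough illustration of the approach in the entry directly above: the sketch below substitutes a brute-force search over integer duty weightings for the authors' inductive-logic programming, but it plays the same role of abstracting a general decision principle from a few expert-labeled dilemmas. The duty names echo Beauchamp and Childress; the cases and numbers are invented.

```python
# Minimal sketch, not the authors' system: learn a decision principle
# (duty weights) consistent with expert judgments on training dilemmas.
from itertools import product

DUTIES = ["autonomy", "beneficence", "nonmaleficence"]

# Each case: duty effects of action A, of action B (-2..+2), expert choice.
TRAINING = [
    ({"autonomy": +1, "beneficence": -1, "nonmaleficence": 0},
     {"autonomy": -1, "beneficence": +1, "nonmaleficence": 0}, "A"),
    ({"autonomy": +2, "beneficence": 0, "nonmaleficence": -1},
     {"autonomy": -1, "beneficence": 0, "nonmaleficence": +1}, "B"),
]

def choose(weights, a, b):
    score = lambda eff: sum(weights[d] * eff[d] for d in DUTIES)
    return "A" if score(a) >= score(b) else "B"

# Search for weights that reproduce every expert judgment.
for w in product(range(1, 5), repeat=len(DUTIES)):
    weights = dict(zip(DUTIES, w))
    if all(choose(weights, a, b) == label for a, b, label in TRAINING):
        print("learned decision principle:", weights)
        break
# -> nonmaleficence ends up weighted above autonomy: "do no harm" dominates.
```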
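For Powers's "Prospects for a Kantian Machine" above, a toy rendering of the consistency test buttressed by declarative rules. Every predicate and rule here is invented; the point is only the shape of the computation: universalize the maxim, chain through the buttressing rules, and forbid the maxim if it undermines its own presupposition.

```python
# Toy sketch of a buttressed consistency test; not Powers's proposal.
# Buttressing rules: a universalized practice entails these consequences.
RULES = {
    "everyone_lies_when_convenient": {"no_one_believes_assertions"},
    "no_one_believes_assertions": {"lying_is_impossible"},
}

def consequences(fact, seen=None):
    """Transitively close the rule set starting from one fact."""
    seen = set() if seen is None else seen
    for c in RULES.get(fact, ()):
        if c not in seen:
            seen.add(c)
            consequences(c, seen)
    return seen

def kantian_test(maxim, presupposes):
    """Forbidden if universalizing the maxim defeats what it presupposes."""
    undermined = {f"{p}_is_impossible" for p in presupposes}
    entailed = consequences(f"everyone_{maxim}")
    return "forbidden" if undermined & entailed else "permissible"

print(kantian_test("lies_when_convenient", presupposes=["lying"]))
# -> forbidden: universal lying makes lying itself impossible.
```

As Powers notes, the hard (and open) part is everything this sketch hides: where the buttressing rules come from, and whether the needed nonmonotonic inference can be formalized.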
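And for the logicist methodology of Bringsjord, Arkoudas, and Bello above, the essential commitment is architectural: nothing is actuated unless its permissibility has first been established. A faithful version would discharge proofs in a deontic logic; this invented toy reduces "proof" to checking predicted effects against a forbidden set, but keeps the default-deny control flow.

```python
# Toy sketch of "prove it permissible before acting"; the real proposal
# uses mechanized deontic logic. States and actions here are invented.
FORBIDDEN = {"patient_harmed", "consent_violated"}

def provably_permissible(effects):
    """Stand-in for a proof: no predicted effect is a forbidden state."""
    return not (set(effects) & FORBIDDEN)

def actuate(action, effects):
    if provably_permissible(effects):
        print("executing:", action)
    else:
        print("refusing:", action, "(no permissibility proof)")

actuate("adjust_morphine_drip", ["pain_reduced"])           # executes
actuate("silence_alarm", ["alarm_off", "patient_harmed"])   # refuses
```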

Robot Wars. Hack radio program on triple j radio (August 17, 2006). Listen as Kaitlyn Sawrey (host), Luke

Williams (reporter), and Dr. Rob Sparrow of Monash University explore the question: "For the countries

with big defence budgets robot soldiers might seem like a good, clean way of fighting a war... But can a

robot fight a war ethically?"

Robots and the Rest of Us. View by Bruce Sterling. Wired Magazine (May 2004; Issue 12.05). "Since when

do machines need an ethical code? For 80 years, visionaries have imagined robots that look like us, work

like us, perceive the world, judge it, and take action on their own. The robot butler is still as mystical as

the flying car, but there's trouble rising in the garage. In Nobel's vaulted ballroom, experts uneasily point

out that automatons are challenging humankind on four fronts. First, this is a time of war. ... The prospect

of autonomous weapons naturally raises ethical questions. ... The second ominous frontier is brain

augmentation, best embodied by the remote-controlled rat recently created at SUNY Downstate in

Brooklyn. ... Another troubling frontier is physical, as opposed to mental, augmentation. ... Frontier

number four is social: human reaction to the troubling presence of the humanoid. ... If the [First

International Symposium on Roboethics] offers a take-home message, it's not about robots, but about

us."

• For more information about the symposium, see Roboethics below.

Machines and Man: Ethics and Robotics in the 21st Century. From the Tech Museum of Innovation. "This

section contains four questions examining robotics and ethics. Each question contains audio responses

collected from researchers, scientists, labor leaders, artists, and others. In addition, an online discussion

area is provided to allow you to post your own comments or respond to the questions."

The Social Impact of Artificial Intelligence. By Margaret A. Boden. From the book: The Age of Intelligent

Machines (ed. Kurzweil, Raymond. 1990. Cambridge, MA: The MIT Press). "Is artificial intelligence in

human society a utopian dream or a Faustian nightmare? Will our descendants honor us for making

machines do things that human minds do or berate us for irresponsibility and hubris?"


SENIOR project initiates ethical debate on ICT for the elderly. CORDIS News (March 5, 2008). "Dubbed

Assistive Technologies (AT), these technologies aim to improve the day-to-day activities of the elderly, as

well as people with disabilities, to supplement their loss of independence. However, while they hold great

promise on the one hand, these technologies can also run the risk of further isolating these population groups. ‘Technology can alleviate the burden of dependency by allowing people to live

autonomously at home or in an assisted environment,’ Professor [Emilio] Mordini told CORDIS News. ‘Yet

technology can also seriously threaten people's autonomy and dignity,’ he added. For these reasons the

project will aim to provide a systematic assessment of the social, ethical and privacy issues involved in

ICT and ageing. … Surveillance technology is just one area which is likely to undergo rigorous assessment

by the project consortium."

Should computer scientists worry about ethics? Don Gotterbarn says, "Yes!". By Saveen Reddy. (1995).

ACM Crossroads. [This article was also republished in the Spring 2004 issue of Crossroads (10.3): Ethics

and Computer Science.] "The problem is that we don't emphasize that what we build will be used by

people.... I want students to realize what they do has consequences."

• To learn more about Don Gotterbarn, visit his site. That's also where you'll find, among other

things, his list of ethics courses being taught at various educational institutions.

Computer Ethics: Basic Concepts and Historical Overview. By Terrell Bynum. The Stanford

Encyclopedia of Philosophy (Winter 2001 Edition), Edward N. Zalta (ed.). "Computer ethics as a field of 

study has its roots in the work of MIT professor Norbert Wiener during World War II (early 1940s), in which

he helped to develop an antiaircraft cannon capable of shooting down fast warplanes. The engineering

challenge of this project caused Wiener and some colleagues to create a new field of research that

Wiener called 'cybernetics' -- the science of information feedback systems. The concepts of cybernetics,

when combined with digital computers under development at that time, led Wiener to draw some

remarkably insightful ethical conclusions about the technology that we now call ICT (information and

communication technology). He perceptively foresaw revolutionary social and ethical consequences."

Why the future doesn't need us. By Bill Joy. Wired Magazine (April 2000; Issue 8.04). "From the moment I

became involved in the creation of new technologies, their ethical dimensions have concerned me, but it

was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in

the 21st century."

• Then read: Ray Kurzweil's Promise and Peril. KurzweilAI.net (April 9, 2001). "Bill Joy wrote a

controversial article in Wired advocating 'relinquishment' of research on self-replicating

technologies, such as nanobots. In this rebuttal, originally published in Interactive Week, Ray

Kurzweil argues that these developments are inevitable and advocates ethical guidelines and

responsible oversight."

o Additional related articles can be found in the KurzweilAI.net Point/Counterpoint 

collection.


• Then read: "Hope Is a Lousy Defense." Sun refugee Bill Joy talks about greedy markets, reckless

science, and runaway technology. On the plus side, there's still some good software out there. By

Spencer Reiss. Wired Magazine (December 2003; Issue 11.12).

• Then check our news collection for articles & interviews such as: Singularity - Ubiquity interviews

Ray Kurzweil (January 10-17, 2006).

• Also see Raj Reddy's talk, Infinite Memory and Bandwidth: Implications for Artificial Intelligence:

"The main thesis of my talk is that none of the dire consequences of Bill Joy or the predictions of 

Kurzweil and Moravec about the possible emergence of a robot nation will come to pass. Not

because they are incorrect, but because . . ."

o . . . and from our Philosophy page: Will Spiritual Robots Replace Humanity by 2100?

Isaac Asimov's Three Laws of Robotics (actually it's three plus a 'zeroth law') courtesy of the Robotics

Research Group at the University of Texas at Austin.

In gadget-loving Japan, robots get hugs in therapy sessions. By Yuri Kageyama. Associated Press /

available from The San Diego Union-Tribune & SignOnSanDiego.com (April 10, 2004). "[W]hile proponents

say robot therapy is no different from pet therapy, in which animals offer companionship, the idea of 

children and older people becoming emotionally attached to machines unnerves many people. ...

[Toshiyo] Tamura and colleagues recently published research that found that some patients' activity,

such as talking, watching and touching, increased with the introduction of the robot in therapy

sessions. ... Tamura also found that introducing a stuffed animal shaped like a dog got almost the same

effect from patients. But a stuffed animal can't be programmed to, for example, help an Alzheimer's

patient remember the names of their visiting children. Neither, of course, can real animals. ... [H]ow

robots will change people remains to be seen. Will robots make people lazy if they can do mundane

chores? Will they make us more callous or more humane? ... Ranges of appropriate behavior toward

robots will have to be socially defined, [John] Jordan said. Might it be weird to pat a robot for bringing a

drink? 'Humans are very good at attributing emotions to things that are not people,' Jordan said. 'Many,

many moral questions will arise.' ... 'People aren't going to be able to throw away robots even when they

break,' [Yasuyuki] Toki said. 'These are major issues that researchers must keep in the back of our

minds.'"

Robotics and Intelligent Systems in Support of Society. By Raj Reddy. IEEE Intelligent Systems (May/June 

2006) 21(3): 24-31. "Over the past 50 years, there has been extensive research into robotics and

intelligent systems. While much of the research has targeted specific technical problems, advances in

these areas have led to systems and solutions that will have a profound impact on society. This article

provides several examples of the use of such ecotechnologies in the service of humanity, in the areas of 

robotics, speech, vision, human computer interaction, natural language processing, and artificial

intelligence. Underlying most of the advances is the unprecedented exponential improvement of 

information technology. ... The question is, what will we do with all this power? How will it affect the way

we live and work? Many things will hardly change -- our social systems, the food we eat, the clothes we

wear, our mating rituals, and so forth. Others, such as how we learn, work, and interact with others, and


the quality and delivery of healthcare, will change profoundly. Here I present several examples of using

intelligent technologies in the service of humanity. In particular, I briefly discuss the areas of robotics,

speech recognition, computer vision, human-computer interaction, natural language processing, and

artificial intelligence. I also discuss current and potential applications of these technologies that will benefit humanity -- particularly the elderly, poor, sick, and illiterate." [The full text of this article is

available to non-subscribers for a limited period.]

Readings Online

"Data Mining" Is NOT Against Civil Liberties. Letter by the Executive Committee, ACM Special Interest

Group on Knowledge Discovery in Data and Data Mining (SIGKDD).

AI Magazine's AI in the news column shines its spotlight on issues such as predictive technology (Fall

2002), humans & robots (Winter 2002), and privacy (Spring 2003).

AI & Society: Journal of Human-Centred Systems. Published by Springer-Verlag London Ltd. "Established in 1987, the journal focuses on the issues associated with the policy, design and management of information, communications and media technologies, and their broader social, economic, cultural and philosophical implications." The table of contents and article abstracts for several issues can be accessed without a subscription.

Proceedings of the AISB 2000 Symposium on Artificial Intelligence, Ethics and (Quasi-) Human Rights. One

of the many convention proceedings available from The Society for the Study of Artificial Intelligence and

Simulation of Behaviour (SSAISB).

Machine Ethics: Papers from the 2005 AAAI Fall Symposium, ed. Michael Anderson, Susan Leigh Anderson,

and Chris Armen. Technical Report FS-05-06. American Association for Artificial Intelligence, Menlo Park,

California. "Past research concerning the relationship between technology and ethics has largely focused

on responsible and irresponsible use of technology by human beings, with a few people being interested

in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical

reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of 

the ethical ramifications of behavior involving machines, as well as recent and potential developments in

machine autonomy, necessitates this. In contrast to computer hacking, software property issues, privacy

issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the

behavior of machines towards human users and other machines. We contend that research in machine

ethics is key to alleviating concerns with autonomous systems --- it could be argued that the notion of 

autonomous machines without such a dimension is at the root of all fear concerning machine intelligence.

Further, investigation of machine ethics could enable the discovery of problems with current ethical

theories, advancing our thinking about ethics."


• Also see this special issue of IEEE Intelligent Systems.

MedEthEx: A Prototype Medical Ethics Advisor. By Michael Anderson, Susan Leigh Anderson, and Chris

Armen. In Proceedings of the Eighteenth Innovative Applications of Artificial Intelligence Conference, July

16 – 20 2006. Menlo Park, Calif.: AAAI Press. "As part of a larger Machine Ethics Project, we are

developing an ethical advisor that provides guidance to health care workers faced with ethical dilemmas.

MedEthEx is an implementation of Beauchamp’s and Childress' Principles of Biomedical Ethics that

harnesses machine learning techniques to abstract decision principles from cases in a particular type of 

dilemma with conflicting prima facie duties and uses these principles to determine the correct course of 

action in similar and new cases. We believe that accomplishing this will be a useful first step towards

creating machines that can interact with those in need of health care in a way that is sensitive to ethical

issues that may arise." A demo is available online.
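A hypothetical fragment of how an advisor in MedEthEx's style might apply an already-learned principle to a new dilemma (the learning step is sketched earlier, under "An Approach to Computing Ethics"). The weights and the dilemma below are invented, not taken from the demo.

```python
# Invented example: rank the options in a dilemma by a learned principle,
# i.e., weighted satisfaction/violation of prima facie duties.
PRINCIPLE = {"autonomy": 2, "beneficence": 1, "nonmaleficence": 3}

def advise(options):
    def score(effects):
        return sum(PRINCIPLE[duty] * level for duty, level in effects.items())
    return max(options, key=lambda name: score(options[name]))

# A patient has refused a beneficial treatment; try again or accept?
dilemma = {
    "accept_refusal": {"autonomy": +2, "beneficence": -1, "nonmaleficence": 0},
    "try_again":      {"autonomy": -1, "beneficence": +2, "nonmaleficence": 0},
}
print("advised action:", advise(dilemma))  # -> accept_refusal
```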

Ethics dilemma in killer bots. By Philip Argy (National President of the Australian Computer Society).

Australian IT (January 16, 2007). "When science fiction writer Isaac Asimov developed his Three Laws of 

Robotics back in 1940, the first law was: 'A robot may not harm a human being, or, through inaction,

allow a human being to come to harm.' Asimov later amended the laws to put the needs of humanity as a

whole above those of a single individual, but his intention was unchanged: that robots should be designed

to protect human life and should be incapable of endangering it. So reports out of Korea of newly

developed guard robots capable of firing autonomously on human targets are raising concerns about their

potential uses. ... Ethicists have always questioned the use of technology in weapons development, but

the new robots are causing additional disquiet because of their self-directing capabilities. ... It is the

responsibility of all technology professionals to ensure that those in our organisation and within our

influence are both responsible and ethical in the way they develop and apply technology."
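To make the priority structure Argy invokes explicit: Asimov's laws, with the "zeroth" amendment, form a lexicographic ordering, under which no order can justify harming a human. A toy encoding, with invented action descriptions:

```python
# Toy sketch: Asimov's laws as a lexicographic preference, highest first.
LAWS = [  # the "zeroth" law, then the original three
    lambda a: not a["harms_humanity"],
    lambda a: not a["harms_human"],
    lambda a: a["obeys_order"],
    lambda a: a["preserves_self"],
]

def compliance(action):
    """Lexicographic key: an earlier law dominates all later ones."""
    return tuple(law(action) for law in LAWS)

candidates = {
    "fire_on_target": {"harms_humanity": False, "harms_human": True,
                       "obeys_order": True, "preserves_self": True},
    "hold_fire":      {"harms_humanity": False, "harms_human": False,
                       "obeys_order": False, "preserves_self": True},
}
best = max(candidates, key=lambda name: compliance(candidates[name]))
print(best)  # -> hold_fire: not harming a human outranks obeying the order
```

The guard robots in the article are unsettling precisely because they drop the dominant constraint.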

Georgia Tech's Ronald Arkin (September 12, 2005). "Technology Research News Editor Eric Smalley

carried out an email conversation with Georgia Institute of Technology professor Ronald C. Arkin in

August of 2005 that covered the economics of labor, making robots as reliable as cars, getting robots to

trust people, biorobotics, finding the boundaries of intimate relationships with robots, how much to let

robots manipulate people, giving robots a conscience, robots as humane soldiers and The Butlerian

Jihad. ... TRN: So what are the boundaries between human-robot relationships? Arkin: I tend not to be

prescriptive about these boundaries, that's a question of morality. I am interested in the ethical issues

surrounding these questions though, which will lead to the formulation perhaps of a moral code some

day. A few of the human-robot interface questions that concern me include: · How intimate should a

relationship be with an intelligent artifact? · Should a robot be able to mislead or manipulate human

intelligence? · What, if any, level of force is acceptable in physically managing humans by robotic

systems? · What do major religions think about the prospect of intelligent humanoids? (The Vatican and

Judaism to date have had related commentary on the subject). These are all ethical questions, and

depending upon your social convention, religious beliefs, or moral bias, every individual can articulate

their opinion. My concern now as a scientist that is concerned with the ethical outcome of his own

research is to get the debate going and begin to explore what the appropriate use of this technology [is] from


a variety of ethical stances (relativistic to deontological). TRN: And what is the role of lethality in the

deployment of autonomous systems by the military? Arkin: ... "

Ethical Issues in Advanced Artificial Intelligence. By Nick Bostrom, Faculty of Philosophy, Oxford University; Director, Oxford Future of Humanity Institute. "This is a slightly revised version of a paper

published in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial

Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and

Cybernetics, 2003, pp. 12-17."

No Where to Hide. By Alan Cohen. PC Magazine (July 13, 2004). "TIA [Total Information Awareness]

demonstrated the fundamental conflict that often arises between technology and privacy: We want the

benefits of convenience and safety that new tools can bring us, but at the same time we want to insure

our right to be left alone. Often, in the rush for the benefits, privacy suffers. Yet sometimes, attempts to

protect our privacy cripple or even jettison a promising technology."

• Also in this issue of PC Magazine: Visiting the Future. Opinion by Michael J. Miller. "As denizens of 

the 21st century, we can't just look at technology for its own sake. We need to understand how it

affects society."

"Chickens are Us" and other observations of robotic art. By Patricia Donovan. University at Buffalo

Reporter (December 4, 2003; Volume 35, Number 14). "Hundreds of artists in all corners of the world -- a

number of them at UB -- use emerging technologies as a tool for material and cultural analysis. One of 

them is conceptual artist Marc Böhlen, assistant professor in the Department of Media Study. His medium

is not oil or bronze, but robotics and site-specific data, and his practice combines the structured approach

of scientific investigation with artistic intuition, spiced with a deliberate and effective dash of good or bad

taste. ... Böhlen considers the media arts in the context of the history of automation technologies. They

were invented with the hope of improving everyday life, he notes, and in some ways they have. 'Our

unquestioned pursuit of efficiency, however, has made us slaves of automation,' he says, a point made by

artists from the mid-19th century on. 'Through our very inventiveness and persistence, we have

separated ourselves from the constraints of our natural surroundings. In my work, I attempt to contradict

preconceptions of what technical mediation is by a practice that is poetically inspired, radical and

technically competent.' To this end, Böhlen builds machines whose functions contradict their assumed

utilitarian purpose. ... He says 'the Keeper' is designed to re-imagine -- beyond issues of security and

repression -- how machines that use biometric technology are able to control our identities and validate

our right to gain access to any space."

Humanoids With Attitude - Japan Embraces New Generation of Robots. By Anthony Faiola, with Akiko

Yamamoto. Washington Post (March 11, 2005; registration req'd.) and from The Sydney Morning Herald

(We, robot: the future is here; March 14, 2005). "'I almost feel like she's a real person,' said Kobayashi, an

associate professor at the Tokyo University of Science and [Saya, the cyber-receptionist's] inventor.


Having worked at the university for almost two years now, she's an old hand at her job. 'She has a temper

. . . and she sometimes makes mistakes, especially when she has low energy,' the professor said. Saya's

wrath is the latest sign of the rise of the robot. Analysts say Japan is leading the world in rolling out a new

generation of consumer robots. Some scientists are calling the wave a technological force poised to change human lifestyles more radically than the advent of the computer or the cell phone. ... In the quest

for artificial intelligence, the United States is perhaps just as advanced as Japan. But analysts stress that

the focus in the United States has been largely on military applications. By contrast, the Japanese

government, academic institutions and major corporations are investing billions of dollars on consumer

robots aimed at altering everyday life, leading to an earlier dawn of what many here call the 'age of the

robot.' But the robotic rush in Japan is also being driven by unique societal needs. ... It is perhaps no

surprise that robots would find their first major foothold in Japan. ... 'In Western countries, humanoid

robots are still not very accepted, but they are in Japan,' said Norihiro Hagita, director of the ATR

Intelligent Robotics and Communication Laboratories in Keihanna Science City near Kyoto. 'One reason is

religion. In Japanese [Shinto] religion, we believe that all things have gods within them. But in Western

countries, most people believe in only one God. For us, however, a robot can have an energy all its own.'"

Two interviews with Anne Foerst, researcher and theological advisor for the robots Cog and Kismet at

MIT's Artificial Intelligence Laboratory: "Baptism by Wire - Bringing religion to the Artificial Intelligence

lab" & "Do Androids Dream?"

Constructions of the Mind--Artificial Intelligence and the Humanities. Stefano Franchi and Guven

Guzeldere, editors (1995). A special issue of the Stanford Humanities Review 4(2): Spring 1995. From the

Table of Contents, you may link to several full-text articles.

"It's the Computer's Fault" -- Reasoning About Computers as Moral Agents. By Batya Friedman and

Lynette Millett. A short paper from the 1995 Conference on Human Factors in Computing Systems

sponsored by ACM/SIGCHI. "The data reported above joins a growing body of research that suggests

people, even computer literate individuals, may at times attribute social attributes to and at times

engage in social interaction with computer technology."

Future technologies, today's choices - Nanotechnology, Artificial Intelligence and Robotics: A technical,

political and institutional map of emerging technologies. Greenpeace UK July 2003. "[W]hile Greenpeace

accepts and relies upon the merits of many new technologies, we campaign against other technologies

that have a potentially profound negative impact on the environment. This prompted Greenpeace to

commission a comprehensive review of nanotechnology and artificial intelligence/robotics developments

from an organisation with a reputation for technological expertise - Imperial College London. We asked

them to document existing applications and to analyse current research and development (R&D), the

main players behind these developments, and the associated incentives and risks."

• Also see:


o Nanotechnology: Small wonders. By Mike Toner. The Atlanta Journal-Constitution

(December 5, 2004). "The National Science Foundation predicts that within a decade

nanotechnology will be a $1 trillion market --- and provide as many as 2 million new

jobs. ... In his oft-cited 'Engines of Creation,' nanotech pioneer K. Eric Drexler --- formerly

a researcher at MIT's artificial intelligence lab --- warned that 'replicating assemblers and

thinking machines pose basic threats to people and life on Earth' --- threatening to turn

everything on the planet into an amorphous 'gray goo.' Michael Crichton breathed new

life into the notion a few years ago with 'Prey,' a sci-fi thriller about the escape of 

microscopic, self-replicating assemblers from a secret desert research lab. ... Drexler, who

now heads the nonprofit educational Foresight Institute, has recanted much of his original

claim, but he insists that the industry should have a policy prohibiting 'the construction of 

anything resembling a dangerous self-replicating nanomachine.'"

o Mean machines. By Dylan Evans. The Guardian (July 29, 2004). "Computer scientist Bill

Joy is not the only expert who has urged the general public to start thinking about the

dangers posed by the rapidly advancing science of robotics, and Greenpeace issued a

special report last year urging people to debate this matter as vigorously as they have

debated the issues raised by genetic engineering."

o Nanotechnology and Nanoscience. "In June 2003 the UK Government commissioned the

Royal Society, the UK national academy of science, and the Royal Academy of 

Engineering, the UK national academy of engineering, to carry out an independent study

of likely developments and whether nanotechnology raises or is likely to raise new ethical,

health and safety or social issues which are not covered by current regulation."

Read the final report: Nanoscience and nanotechnologies: opportunities and 

uncertainties (29 July 2004).

o Nanoethics Group: "a non-partisan and independent organization that studies the ethical

and societal implications of nanotechnology. We also engage the public as well as

collaborate with nanotech ventures and research institutes on related issues that will

impact the industry."

Too Much Information. Comment by Hendrik Hertzberg. The New Yorker (December 9, 2002). "But the

[Information Awareness] Office's main assignment is, basically, to turn everything in cyberspace about

everybody ... into a single, humongous, multi-googolplexibyte database that electronic robots will mine

for patterns of information suggestive of terrorist activity. Dr. Strangelove's vision -- 'a chikentic gomplex

of gumbyuders' -- is at last coming into its own."

Artificial Intelligence and Ethics: An Exercise in the Moral Imagination. By Michael R. LaChat. AI Magazine

7(2): Summer 1986, 70-79. "The possibility of constructing a personal AI raises many ethical and religious

questions that have been dealt with seriously only by imaginative works of fiction; they have largely been

ignored by technical experts and by philosophical and theological ethicists. Arguing that a personal AI is

possible in principle, and that its accomplishments could be adjudicated by the Turing Test, the article

suggests some of the moral issues involved in AI experimentation by comparing them to issues in medical

experimentation. Finally, the article asks questions about the capacities and possibilities of such an


artifact for making moral decisions. It is suggested that much a priori ethical thinking is necessary and

that such a project can not only stimulate our moral imaginations, but can also tell us much about

our moral thinking and pedagogy, whether or not it is ever accomplished in fact."

The Moral Challenge of Modern Science. By Yuval Levin. The New Atlantis (Fall 2006; 14: 32-46). "A few

years ago, in the course of a long speech about health policy, President George W. Bush spoke of the

challenge confronting a society increasingly empowered by science. He put his warning in these

words: The powers of science are morally neutral -- as easily used for bad purposes as good ones. In the

excitement of discovery, we must never forget that mankind is defined not by intelligence alone, but by conscience. Even the most noble ends do not justify every means. In the president’s sensible formulation,

the moral challenge posed for us by modern science is that our scientific tools simply give us raw power,

and it is up to us to determine the right ways to use that power and to proscribe the wrong ways. The

notion that science is morally neutral is also widely held and advanced by scientists. ... The moral

challenge of modern science reaches well beyond the ambiguity of new technologies because modern

science is much more than a source of technology, and scientists are far more than mere investigators

and toolmakers. Modern science is a grand human endeavor, indeed the grandest of the modern age. Its

work employs the best and the brightest in every corner of the globe, and its modes of thinking and

reasoning have come to dominate the way mankind understands itself and its place. We must therefore

judge modern science not only by its material products, but also, and more so, by its intentions and its influence upon the way humanity has come to think. In both these ways, science is far from morally

neutral."

What We Don’t Know Can Hurt Us. By Heather MacDonald. City Journal (Spring 2004; Vol. 14, No. 2).

"Immediately after 9/11, politicians and pundits slammed the Bush administration for failing to 'connect

the dots' foreshadowing the attack. What a difference a little amnesia makes. For two years now, left- and

right-wing advocates have shot down nearly every proposal to use intelligence more effectively -- to

connect the dots -- as an assault on 'privacy.' Though their facts are often wrong and their arguments

specious, they have come to dominate the national security debate virtually without challenge. The

consequence has been devastating: just when the country should be unleashing its technological

ingenuity to defend against future attacks, scientists stand irresolute, cowed into inaction. 'No one in the

research and development community is putting together tools to make us safer,' says Lee Zeichner of 

Zeichner Risk Analytics, a risk consultancy firm, 'because they’re afraid' of getting caught up in a privacy

scandal. The chilling effect has been even stronger in government. 'Many perfectly legal things that could

be done with data aren’t being done, because people don’t want to lose their jobs,' says a computer

security entrepreneur who, like many interviewed for this article, was too fearful of the advocates to let


his name appear. ... The goal of TIA [the Total Information Awareness project] was this: to prevent

another attack on American soil by uncovering the electronic footprints terrorists leave as they plan and

rehearse their assaults. ... TIA would have been the most advanced application yet of a young technology

called 'data mining,' which attempts to make sense of the explosion of data in government, scientific, and commercial databases." [Other projects discussed in this article: Human Identity at a Distance; LifeLog;

CAPPS II, Computer Assisted Passenger Prescreening System; MATRIX, Multistate Anti-Terrorism

Information Exchange; and FIDNet.]

If you kick a robotic dog, is it wrong? By G. Jeffrey MacDonald. The

Christian Science Monitor (February 5, 2004). "How should people treat creatures that seem ever more

emotional with each step forward in robotic technology, but who really have no feelings?" Also see the

two related articles.

• And see: Humans have rights, should human-like animals? By Kate Douglas. New Scientist (May

30, 2007; Issue 2606).

Armchair warlords and robot hordes. Comment and Analysis by Paul Marks. New Scientist (October 28,

2006; Issue 2575: page 24 | subscription req'd). "It sounds like every general's dream: technology that

allows a nation to fight a war with little or no loss of life on its side. It is also a peace-seeking citizen's

nightmare. Without the politically embarrassing threat of soldiers returning home in flag-wrapped coffins,

governments would find it far easier to commit to military action. The consequences for countries on the

receiving end - and for world peace - would be immense. This is not a fantasy scenario. ... 'Teleoperation

[remote control] is the norm, but semi-autonomous enhancements are being added all the time,' says

Bob Quinn of Foster-Miller, a technology firm in Waltham, Massachusetts, owned by the UK defence

research company Qinetiq." [Also see this related article.]

• A Congressional Research Service (CRS) Report mentioned in the article (Unmanned Vehicles for

U.S. Naval Forces: Background and Issues for Congress, updated July 26, 2006) is available from

the Conventional Weapons Systems collection maintained by Steven Aftergood for The Federation

of American Scientists (FAS).

Rob Kling, 58; Specialist in Computers' Societal Effect. By Myrna Oliver. Los Angeles Times (May 26,

2003). "Rob Kling, an author and educator regarded as the founding father of social informatics -- how

computers influence social change -- has died. ... Concerned that all discussion of computers focused on

technology, Kling studied government, manufacturers and insurance companies to determine how

computers affect society and require choices that consider human values as well as technological

values. ... 'Many people, particularly white-collar workers, have a view that the best factory is one where

almost nobody is there,' he said in a speech to the Computer Professionals for Social Responsibility

meeting at Chapman University in 1985. 'Most functions are automated. In this view the factory is a

production machine, a gadget, and there's no honorable role for people except to fill in where the

machines aren't good enough yet.'"

Programming doesn't begin to define computer science. By Jim Morris ["professor of computer science


and dean of Carnegie Mellon University's West Coast campus"]. Pittsburgh Post-Gazette (July 4, 2004).

"Computer scientists must know enough history and social science to chart and predict the impact of 

computers on the intersecting worlds of work, entertainment and society. To do this, they must

understand the modern world and its roots. To participate in today's debates about privacy, one must understand both computers and society."

Rise of the machines. Next News by James M. Pethokoukis. USNews.com (April 22, 2004). "But [Bill] Joy is

probably just as well known for his belief that the accelerating technologies of genetics, nanotechnology,

and robotics pose a dire threat to humanity by opening the way to new weapons of mass destruction such

as tiny, replicating nanobots run wild. But Joy isn't the only techie who frets about what his own labors

might one day help create. Hugo de Garis is a Belgian-born associate professor of computer science at

Utah State University. A former theoretical physicist, de Garis now researches neural networks, a branch

of artificial intelligence. ... Yet de Garis worries that one day supersmart machines -- or artilects (for

artificial intellects) -- will dominate humanity. ... De Garis admits some ambivalence himself. He is

involved with building artificial brains -- the precursors to the artilects -- but he's also raising the alarm

about their political effects. How could such conflict be prevented? I recently E-mailed de Garis that exact

question. His response: 'Ah, the $100 trillion question. I wish I knew. I haven't yet found a plausible way

out of this terrible dilemma. ... "

Ethics for the Robot Age - Should bots carry weapons? Should they win patents? Questions we must

answer as automation advances. View by Jordan Pollack. Wired Magazine (January 2005; Issue 13.01).

"While our hopes for and fears of robots may be overblown, there is plenty to worry about as automation

progresses. The future will have many more robots, and they'll most certainly be much more advanced.

This raises important ethical questions that we must begin to confront. 1. Should robots be

humanoid? ... 2. Should humans become robots? ... 4. Should robots eat? ... 6. Should robots carry 

weapons? ... "

Oppenheimer's Ghost - Can we control the evolution and uses of technology? Editorial by Jason Pontin.

Technology Review (November / December 2007). "Oppenheimer believed that technology and science

had their own imperatives, and that whatever could be discovered or done would be discovered and

done. 'It is a profound and necessary truth,' he told a Canadian audience in 1962, 'that the deep things in

science are not found because they are useful; they are found because it was possible to find them.'"

Infinite Memory and Bandwidth: Implications for Artificial Intelligence - Not to worry about superintelligent

machines taking over, says AI pioneer Dr. Raj Reddy. A more likely scenario: people who can think and

act 1000 times faster, using personal intelligent agents. By Raj Reddy. Originally presented as a talk at 

the Newell-Simon Hall Dedication Symposium, October 19, 2000. Published on KurzweilAI.net February 

22, 2001. "The main thesis of my talk is that none of the dire consequences of Bill Joy or the predictions of 

Kurzweil and Moravec about the possible emergence of a robot nation will come to pass. Not because

they are incorrect, but because we live in a society in which progress depends on the investment of 


research dollars."

Biocyberethics: should we stop a company from unplugging an intelligent computer? By Martine

Rothblatt. KurzweilAI.net. "Attorney Dr. Martine Rothblatt filed a motion for a preliminary injunction to prevent a corporation from disconnecting an intelligent computer in a mock trial at the International Bar

Association conference in San Francisco, Sept. 16, 2003. The issue could arise in a real court within the

next few decades, as computers achieve or exceed the information processing capability of the human

mind and the boundary between human and machine becomes increasingly blurred." You can also access

a webcast and a transcript of the hearing via links from the article.

• Be sure to see the other articles in the Will Machines Become Conscious? collection at

KurzweilAI.net, which includes:

o The Rights of Robots: Technology, Culture and Law in the 21st Century. By Sohail

Inayatullah and Phil McNally. "Robot rights are already part of judiciary planning--can

sentient machines be far off? This discussion of robot rights looks in-depth at issues once

reserved for humans only."

o A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen. By Frank W. Sudia. "Will

artilects have difficulties seeking rights and legal recognition? Will they make problems

for humans once they surpass our knowledge and reasoning capacities?"

• Also see: Man and the Machines - It's time to start thinking about how we might grant legal rights

to computers. By Benjamin Soskis. Legal Affairs (January / February 2005).

Robots 'R' us? The machines are getting smarter every day. Human beings better be thinking about

science fiction becoming reality. Opinion by Charles Rubin. post-gazette.com (May 14, 2006). "After

decades of slow change and unfulfilled promise, it may be that robots and artificial intelligence are on the

verge of transforming what people do and how we do it. Yet popular culture has long reflected how the

rise of robots is not a prospect that everyone greets with enthusiasm. If people's fears are to be

addressed honestly, the hopes behind the serious work of invention going on here will need to be

matched by equally serious thought about the consequences for the human future these cutting-edge

efforts will have."

Essays on Science and Society. From Science Magazine. "In monthly essays on Science and Society,

Science features the views of individuals from inside or outside the scientific community as they explore

the interface between science and the wider society. This series continues the weekly viewpoints on

science and society that Science published in 1998 in honor of the 150th anniversary of AAAS [The

American Association for the Advancement of Science]."

Someone to Watch over You. Editorial by Nigel Shadbolt. IEEE Intelligent Systems (March/April 2003).

"Our own disciplines of AI and IS can serve to maintain or invade privacy. They can be used for legitimate

law enforcement or to carry out crime itself."


Man and the Machines - It's time to start thinking about how we might grant legal rights to computers. By

Benjamin Soskis. Legal Affairs (January / February 2005). "The story of the self-aware computer asserting

its rights -- and, in the dystopian version of the tale, its overwhelming power -- is a staple of science

fiction books and movies. ... At some point in the not-too-distant future, we might actually face a sentient, intelligent machine who demands, or who many come to believe deserves, some form of legal

protection."

• Go here for additional information about the mock trial that is referenced in this article.

Data Mining and Domestic Security: Connecting the Dots to Make Sense of Data. By K. A. Taipale. The

Columbia Science and Technology Law Review (Volume V, 2003-2004; page 83). "New technologies

present new opportunities and new challenges to existing methods of law enforcement and domestic

security investigation and raise related civil liberties concerns. Although technology is not deterministic,

its development follows certain indubitable imperatives. The commercial need to develop these powerful

analytic technologies as well as the drive to adopt these technologies to help ensure domestic security is

inevitable. For those concerned with the civil liberties and privacy issues that the use of these

technologies will present, the appropriate and useful course of action is to be involved in guiding the

research and development process towards outcomes that provide opportunity for traditional legal

procedural protection to be applied to their usage. To do so requires a more informed engagement by

both sides in the debate based on a better understanding of the actual technological potential and

constraints."

Privacy-Aware Autonomous Agents for Pervasive Healthcare. By Monica Tentori, Jesus Favela, and Marcela

D. Rodríguez. IEEE Intelligent Systems (November/December 2006; 21(6): 55-62). "Pervasive technology in

hospital work raises important privacy concerns. Autonomous agents can help developers design privacy-

aware systems that handle the threats raised by pervasive technology."
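
The abstract stops short of design detail, but the flavor of a "privacy-aware" agent can be suggested with a minimal sketch: before releasing a pervasive-sensor reading, the agent withholds or coarsens fields according to who is asking. The field names, roles, and policy table below are invented for illustration and are not taken from Tentori et al.

```python
# Minimal sketch, assuming invented field names and roles: an agent that
# degrades the precision of hospital sensor data based on the requester.

SENSITIVE = {"patient_id", "room"}

POLICY = {            # requester role -> location granularity it may see
    "attending_physician": "room",
    "hospital_admin": "ward",
    "visitor": "none",
}

def disclose(record: dict, requester_role: str) -> dict:
    level = POLICY.get(requester_role, "none")
    if level == "none":
        return {}                                   # reveal nothing at all
    out = {k: v for k, v in record.items() if k not in SENSITIVE}
    if level == "room":
        out["room"] = record["room"]                # full precision
    elif level == "ward":
        out["ward"] = record["room"].split("-")[0]  # coarsened location
    return out

reading = {"patient_id": "P-1042", "room": "3B-214", "heart_rate": 78}
print(disclose(reading, "hospital_admin"))   # {'heart_rate': 78, 'ward': '3B'}
```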

Schlock Mercenary, The Online Comic Space Opera by Howard Tayler. See the January 4, 2006

installment in which Captain Tagon asks: "Is this one of those 'machine ethics' questions?"

A Question of Responsibility. M. Mitchell Waldrop. AI Magazine 8(1): Spring 1987, 28-39. "So we return to

the questions we started with. Robots, in the broad sense that we have defined them, play the role

of agent. Coupled with AI, moreover, they will be able to take on responsibility and authority in ways that

no machines have ever done before. So perhaps it’s worth asking before we get to that point just how

much power and authority these intelligent machines ought to have -- and just who, if anyone, will control

them. ... [O]ne thing that is apparent from the above discussion is that intelligent machines will embody

values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus,

as computers and robots become more and more intelligent, it becomes imperative that we think

carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory

and practice of machine ethics, in the spirit of Asimov’s three laws of robotics. Admittedly, a concept like


'machine ethics' sounds hopelessly fuzzy and far-fetched -- at first. But maybe it’s not as far out of reach as

it seems. Ethics, after all, is basically a matter of making choices based on concepts of right and wrong,

duty and obligation."
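
Waldrop leaves "machine ethics" as an open research program, but even a toy example shows what a first cut at Asimov-style prioritized rules could look like in code. The sketch below is purely illustrative, with invented action attributes; a real system would need far richer models of harm, obedience, and context.

```python
# Toy sketch of lexically ordered duties in the spirit of Asimov's three laws.
# Purely illustrative; the Action fields are invented for this example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool     # would this action injure a human?
    disobeys_order: bool  # would it violate a human instruction?
    endangers_self: bool  # would it risk damage to the robot itself?

def choose(actions: list) -> Optional[Action]:
    """First Law is absolute; among safe actions, obedience outranks
    self-preservation (a strict lexicographic ordering of the laws)."""
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None   # refuse to act at all rather than harm a human
    return min(safe, key=lambda a: (a.disobeys_order, a.endangers_self))

options = [
    Action("ignore evacuation order", False, True, False),
    Action("carry patient through fire", False, False, True),
]
print(choose(options).name)   # -> "carry patient through fire"
```

Even this toy makes Waldrop's point: the ordering of the tuple in the sort key is itself a value judgment that someone had to program in, consciously or not.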

Launching a new kind of warfare - Robot vehicles are increasingly taking a role on the battlefield - but

their deployment raises moral and philosophical as well as technical questions. By Pete Warren. The

Guardian / Guardian Unlimited Technology (October 26, 2006). "By 2015, the US Department of Defense

plans that one third of its fighting strength will be composed of robots, part of a $127bn (£68bn) project

known as Future Combat Systems (FCS), a transformation that is part of the largest technology project in

American history. The US army has already developed around 20 remotely controlled Unmanned Ground

Systems that can be controlled by a laptop from around a mile away, and the US Navy and US Air Force

are working on a similar number of systems with varying ranges. According to a US general quoted in the

US Army's Joint Robotics Program Master Plan [link], 'what we're doing with unmanned ground and air

vehicles is really bringing movies like Star Wars to reality'. The US military has 2,500 uncrewed systems

deployed in conflicts around the world. But is it Star Wars or I, Robot that the US is bringing to reality? By

2035, the plan is for the first completely autonomous robot soldiers to stride on to the battlefield. The US

is not alone. Around the globe, 32 countries are now working on the development of uncrewed

systems. ... But if this is the beginning of the end of humanity's presence on the battlefield, it merits an

ethical debate that the military and its weapons designers are shying away from." [Also see this related 

article.]

Technology, Work, and the Organization: The Impact of Expert Systems. By Rob R. Weitz. AI Magazine

11(2): Summer, 1990, 50-60.

The Virtual Sky is not the Limit: Ethics in Virtual Reality. By Blay Whitby (1993). Intelligent Tutoring Media,

Vol. 3, No. 2. "Its reality stems from the convincing nature of the illusion, and most importantly for moral

considerations, the way in which human participants can interact with it."

RELATED RESOURCES

AAAI Corporate Bylaws (Bylaws of The Association for the Advancement of Artificial Intelligence): Article

II. Purpose - "This corporation is a nonprofit public benefit corporation and is not organized for the private

gain of any person. It is organized under the California Nonprofit Corporation Law for scientific and

educational purposes in the field of artificial intelligence to promote research in, and _responsible use_ of, artificial intelligence." [Emphasis added.]

ACM (Association for Computing Machinery) has several pertinent resources, including:

• ACM Professional Standards, one of which is the General ACM Code of Ethics.

• Computers and Public Policy: an overview of association-level policy activities with links to related


committees, codes, declarations, resolutions, policies & statements.

• USACM: "The ACM U.S. Public Policy Committee (USACM) serves as the focal point for ACM's

interaction with U.S. government organizations, the computing community, and the U.S. public in

all matters of U.S. public policy related to information technology."

"The Center for the Study of Science and Religion (CSSR) was founded in the summer of 1999 as a forum

for the examination of issues that lie at the boundary of these two complementary ways of 

comprehending the world and our place in it. By examining the intersections that cross over the

boundaries between one or another science and one or another religion, the CSSR hopes to stimulate

dialogue and encourage understanding. The CSSR is not interested in promoting one or another science

or religion, and we hope that the service we provide will be of benefit and offer understanding into all

sciences and religions."

The Center for Unified Biometrics and Sensors (CUBS), University at Buffalo, The State University of New

York, has a research program on Ethical, Legal, and Social Implications: "Many social and legal

issues surround the field of biometrics since by its very nature, the technology requires measurements of 

human physical traits and behavioral features. Co-operative and uncooperative users, user psychology,

dislike of intrusive systems, backlash at public rejection by a biometric sensor, and privacy concerns are

some of the myriad of issues that will be thoroughly studied to advance the field of biometrics. The

Center in partnership with the University at Buffalo's Baldy Center for Law and Social Policy will address

the growing interest in Biometrics among the government, industry and the lay public."

Centre for Computing and Social Responsibility (CCSR), De Montfort University. Resource collections

include:

• Conferences

• Professionalism, Artificial Intelligence & Robotics

Computer Ethics Institute at The Brookings Institution provides "an advanced forum and resource for

identifying, assessing and responding to ethical issues associated with the advancement of information

technologies in society."

Computer Professionals for Social Responsibility. "CPSR is a global organization promoting the responsible

use of computer technology. Founded in 1981, CPSR educates policymakers and the public on a wide

range of issues." - from About CPSR.

• The CPSR Wiener Award for Social and Professional Responsibility: "In 1987, CPSR began a

tradition to recognize outstanding contributions for social responsibility in computing technology.

The organization wanted to cite people who recognize the importance of a science-educated

public, who take a broader view of the social issues of computing. We aimed to share concerns

that lead to action in arenas of the power, promise, and limitations of computer technology."


Past winners include Joe Weizenbaum (1988) and Doug Engelbart (2005).

Essays on the Philosophy of Technology. Maintained by Dr. Frank Edler, Metropolitan Community College,

Omaha, Nebraska. A well-presented and wide-ranging list of links to full-text online papers and other web

sites.

Institute for Ethics and Emerging Technologies. As stated on their About the Institute page: "By promoting

and publicizing the work of thinkers who examine the social implications of scientific and technological

advance, we seek to contribute to the understanding of the impact of emerging technologies on

individuals and societies."

• See this related magazine article (2007) and this related newspaper article (2005).

Machine Ethics. Maintained by Dr. Michael Anderson, Department of Computer Science, University of 

Hartford. "Machine Ethics is concerned with the behavior of machines towards human users and other

machines. Allowing machine intelligence to effect change in the world can be dangerous without some

restraint. Machine Ethics involves adding an ethical dimension to machines to achieve this restraint."

MedEthEx demo.
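
Anderson's site demonstrates the idea with MedEthEx, which balances prima facie duties rather than applying rigid rules. The fragment below is a toy of that general style only, not the MedEthEx algorithm: the duty names, weights, and scores are invented for illustration.

```python
# Toy illustration of duty-balancing machine ethics (not MedEthEx itself):
# each option is scored by how strongly it satisfies (+) or violates (-)
# several duties, and the best-balanced option wins. Values are invented.

DUTY_WEIGHTS = {"nonmaleficence": 3, "beneficence": 2, "autonomy": 2}

def score(option):
    return sum(DUTY_WEIGHTS[d] * v for d, v in option["duties"].items())

options = [
    {"act": "accept patient's refusal of treatment",
     "duties": {"nonmaleficence": -1, "beneficence": -1, "autonomy": +2}},
    {"act": "try again to persuade the patient",
     "duties": {"nonmaleficence": +1, "beneficence": +1, "autonomy": -1}},
]
best = max(options, key=score)
print(best["act"])   # persuading scores 3 vs. -1, so the agent tries again
```

Where the Asimov-style sketch earlier encodes a strict priority ordering, this style trades duties off against one another, which is closer to how medical ethicists actually reason about hard cases.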

Nanoethics Group: "a non-partisan and independent organization that studies the ethical and societal

implications of nanotechnology. We also engage the public as well as collaborate with nanotech ventures

and research institutes on related issues that will impact the industry."

No Place To Hide, a multimedia investigation led by Robert O'Harrow, Jr. and The Center for Investigative

Reporting. "When you go to work, stop at the store, fly in a plane, or surf the web, you are being watched.

They know where you live, the value of your home, the names of your friends and family, in some cases

even what you read. Where the data revolution meets the needs of national security, there is no place to

hide. No Place To Hide is a multimedia investigation by news organizations working together across print

and broadcast platforms, to make a greater impact than any one organization could alone."

• Educators: click here for a link to their instructional resources which include materials suitable for

high school students, as well as college students and professionals.

Roboethics. Based at the Scuola di Robotica, Genova, Italy. "It is therefore important to open a debate on

the ethical basis which should inspire the design and development of robots, to avoid problems incurred

by other human activities, forced to become conscious of the ethical basis under the pressure of grievous

events. We are entering the time when robots are among us and the new question is: Could a robot do

'good' and 'evil'? We know about robots helping mankind in scientific, humanitarian and ecological

enterprises, useful for safeguarding our planet and its inhabitants. But we heard also about 'intelligent'


weapons which kill people. It is important to underline that not only robotic scientists are called to give

their contribution to the definition of the problem whether a robot can or cannot do harm to a human

being (Isaac Asimov's 'First Law of Robotics'), but also philosophers, jurists, sociologists and many

scholars involved in similar themes. This website www.roboethics.org aims to be a reference point for the ongoing debate on the human/robot relationship and a forum where Scientists and concerned people can

share their opinions."

• Also see:

o an article about the symposium

• And LISTEN to this related podcast.

sciencehorizons: "a national series of conversations about new technologies, the future and society. It has

been set up by the UK government and will run during 2007."

Technology & Citizenship Symposium, McGill University, Montréal, Canada, June 9 - 10, 2006. "This

symposium will address the complex relations between Technology and Citizenship. Technology is deeply

implicated in the organisation and distribution of social, political and economic power. Technological

artefacts, systems and practices arise from particular historical situations, and they condition subsequent

social, political and economic identities, practices and relationships."

Workshop on Roboethics (14 April 2007) at the 2007 IEEE International Conference on Robotics and

Automation (ICRA'07).

• As stated on the Objectives page: "Roboethics deals with the ethical aspects of the design,

development and employment of Intelligent Machines. It shares many 'sensitive areas' with

Computer Ethics, Information Ethics and Bioethics. Along these disciplines, it investigates the

social and ethical problems due to the effects of the Second and Third Industrial Revolutions in

the Humans/Machines interaction’s domain. Urged by the responsibilities involved in their

professions, an increasing number of roboticists from all over the world have started - in cross-

cultural collaboration with scholars of Humanities - to thoroughly develop the Roboethics, the

applied ethics that should inspire the design, manufacturing and use of robots. The goal of the

Workshop is a cross-cultural update for engineering scientists who wish to monitor the medium

and long effects of applied robotics technologies."

FAQs about Ethical and Social Implications

"Q: I am a University Student at ___ . I am part of an honors seminar that will debate whether or not AI is a

threat, or could become a threat to mankind and why." Response by Patrick J. Hayes (from our collection).

""Q:" Should we use robots as caregivers in the home?" Response by Bruce Buchanan.


""Q;" The question is: Is the Artificial Intelligence a menace to the Human Brain in the near

future?" Response by Bruce Buchanan.

Other References Offline

Amato, Ivan. Big Brother Logs On. Technology Review (September 2001). "Feeling exposed? Watchful

technologies could soon put everyone under surveillance. ... Now, similarly, police departments,

government agencies, banks, merchants, amusement parks, sports arenas, nanny-watching homeowners,

swimming-pool operators, and employers are deploying cameras, pattern recognition algorithms,

databases of information, and biometric tools that when taken as a whole can be combined into

automated surveillance networks able to track just about anyone, just about anywhere."

Anderson, David. 1989. Artificial Intelligence and Intelligent Systems: The Implications. Chichester, UK:

Ellis Horwood.

Bailey, James, David Gelernter, Jaron Lanier, et al. 1997. Our Machines, Ourselves. Harper's (May 1997):

45-54. If we are to accept the idea that computers, as well as humans, can be intelligent, then what

makes human beings "special"? Several computer science visionaries address this and other related

questions.

Bailey, James. 1996. After Thought: The Computer Challenge to Human Intelligence. New York: Basic

Books.

Beardon, Colin. 1992. The Ethics of Virtual Reality. Intelligent Tutoring Media. 3(1), 23-28.

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence, New York: Basic

Books of Harper Collins Publishers. Chapter 12 (pp. 312-341).

Dennett, Daniel C. 1996. When HAL Kills, Who's to Blame? Computer Ethics. Abstract from HAL's Legacy:

2001's Computer as Dream and Reality. Edited by David G. Stork. MIT Press.

Dray, J. 1987. Social Issues of AI. From the Encyclopedia of Artificial Intelligence, Vol. 2., Shapiro, Stuart

C., editor, 1049-1060. New York: John Wiley and Sons.

Edgar, Stacey L. 1997. Morality and Machines: Perspectives on Computer Ethics. Sudbury, MA: Jones and Bartlett Publishers. Includes a chapter titled "The Artificial Intelligentsia and Virtual Worlds," as well as

chapters on computer reliability and liability issues, and military uses.

Epstein, Richard G. 1997. The Case of the Killer Robot: Stories About the Professional, Ethical, and

Societal Dimensions of Computing. New York: John Wiley & Sons. A collection of fictional stories and


factual chapters that complement each other in discussion of the issues.

Gill, K. S., editor. 1986. Artificial Intelligence for Society. Chichester, UK: John Wiley and Sons.

• Also see: AI in the news column Fall 2002

Kizza, Joseph M. 1997. New Frontiers for Ethical Considerations: Artificial Intelligence, Cyberidentity, and Virtual Reality. In Ethical and Social Issues in the Information Age. New York: Springer-Verlag.

Landauer, Thomas K. 1995. The Trouble with Computers. Cambridge, MA and London: MIT Press. The author is critical of techno-hype and, though overly dismissive of AI and expert systems, Landauer extensively documents and analyzes the relationship between poor computer design and low productivity. The last chapter of the book imaginatively describes many wonderful tools that can be expected from AI-type computers if good user-centered design prevails.

Leonard, Andrew. 1997. Bots: The Origin of New Species. San Francisco: Hardwired. Surveys the vast spectrum of software agents---from bots that retrieve information to bots that chat---and compares them to living, evolving organisms.

Levinson, Paul. 1997. The Soft Edge: A Natural History and Future of the Information Revolution. London and New York: Routledge. Chapter 18 (pp. 205-221) discusses AI in the context of other technological advances.

Murray, Denise. 1995. Knowledge Machines: Language and Information in a Technological Society. London; New York: Longman.

Nilsson, Nils J. Artificial Intelligence, Employment, and Income. AI Magazine 5(2): Summer 1984, 5-14.

"Artificial intelligence (AI) will have profound societal effects. It promises potential benefits (and may also

pose risks) in education, defense, business, law and science. In this article we explore how AI is likely to

affect employment and the distribution of income. We argue that AI will indeed reduce drastically the

need of human toil. We also note that some people fear the automation of work by machines and the

resulting unemployment. Yet, since the majority of us probably would rather use our time for activities

other than our present jobs, we ought thus to greet the work-eliminating consequences of AI

enthusiastically. The paper discusses two reasons, one economic and one psychological, for this

paradoxical apprehension. We conclude with discussion of problems of moving toward the kind of 

economy that will be enabled by developments in AI."

Picard, Rosalind W. 1997. Affective Computing. Cambridge, MA: MIT Press. Addresses ethical, social and

technical issues associated with synthesizing emotions in computers.

Rawlins, Gregory J. E. 1997. Slaves of the Machine: The Quickening of Computer Technology. Cambridge,

MA: MIT Press.

Rawlins, Gregory J. E. 1996. Moths to the Flame: The Seductions of Computer Technology. Cambridge,

MA: MIT Press/Bradford Books. Some historical perspective along with some future prophecy.


Sale, Kirkpatrick. 1996. Rebels Against the Future: The Luddites and their War on the Industrial

Revolution. Reading, MA: Addison-Wesley Publishing Co. Providing first a fascinating history of the early

1800's Luddite movement, the author describes transformative changes wrought by technology and

computerization in our time, and claims that, contrary to popular belief, technology is neither neutral nor subservient to humankind.

Salveggio, Eric. Your (un)Reasonable Expectations for Privacy - While law enforcement adapts to the

challenges of the electronic era, expectations of privacy diminish. Ubiquity 5(9) (April 28 - May 4, 2004).

"Anyone who knows how the Internet works, realizes all of the e-commerce information contains a wealth

of information on people. All it takes is simply knowing how to get access to it. Congress killed the

Pentagon's 'Total Information Awareness' data mining program, but now the Florida police have instituted

a State-run equivalent, dubbed the Matrix. In this case, the system is supposed to enable investigators

and analysts across the country find links and patterns in crimes more effectively and quickly by

combining all police records with commercially available collections of personal information about most

American habits."

Simon, Herbert A. 1991. Models of My Life. New York, NY: Basic Books. The following is an excerpt from a

letter he wrote to his daughter, Barbara, in 1977: "With respect to social consequences, I believe that

every researcher has some responsibility to assess, and try to inform others of, the possible social

consequences of the research products he is trying to create." [page 274]

Tavani, Herman T. 1996. Selecting a Computer Ethics Coursebook: A Comparative Study of Five Recent

Works. Computers and Society 26 (4): 15-21.

Tavani, Herman T. 1997. Journals and Periodicals on Computers, Ethics, and Society II: Fifty Publications

of Interest. Computers and Society 27 (3): 39-43.

Turkle, Sherry, and Diane L. Coutu. Technology and Human Vulnerability. Harvard Business Review.

September 2003. "Children are growing up with interactive toy animals. If we want to be sure we'll like

who we've become in 50 years, we need to take a closer look at the psychological effects of current and

future technologies. The smartest people in technology have already started. Universities like MIT and

Caltech have been pouring millions of dollars into researching what happens when technology and

humanity meet. To learn more about this research, HBR senior editor Diane L. Coutu spoke with one of 

the field's most distinguished scholars--Sherry Turkle, MIT's Abby Rockefeller Mauze Professor in the

Program in Science, Technology, and Society and the author of Life on the Screen, which explores how

the Internet is changing the way we define ourselves. In a conversation with Coutu, Turkle discusses the

psychological dynamics that can develop between people and their high-tech toys, describes ways in

which machines might substitute for managers, and explains how technology is redefining what it means

to be human."


Turkle, Sherry. 1984. The Second Self: Computers And the Human Spirit. New York: Simon and Schuster.

Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San

Francisco, CA: W. H. Freeman. An early vocal critic of AI, Weizenbaum predicts dire consequences of relying on intelligent machines.

Yazdani, M., and A. Narayanan. 1986. Artificial Intelligence--Human Effects. Chichester, UK: E. Horwood.