Précis: Being in the World


Being in the World Chapter III – The Artificial Intelligence Debate

“What we are at bottom, much more fundamental than […] being thinking subjects, is that we care about something. Something matters to us.” – Taylor Carman

Humans share many characteristics, but not all of them define us as human beings. Carman's statement is a claim about what humans fundamentally are: our caring, he says, is what defines us as human beings. He sets this against the idea that (rational) thinking defines us, a view that is popular among AI (Artificial Intelligence) scholars.

This matters for the artificial intelligence debate: if humans are fundamentally rational, thinking beings, then it should in principle be possible to build a computer that simulates human thought. But if humans are fundamentally caring beings, the implication is that computers can never simulate humans in all their aspects. What Carman (along with Hubert Dreyfus and several others) is suggesting is that computers are fundamentally incapable of caring the way humans do. Or, in John Haugeland's words:

“the trouble with computers is that they don't give a damn”

The idea that humans have some quality that is fundamentally unexplainable by science and unattainable by computers is a popular one, perhaps because it is so intuitive. After all, the alternative is that humans are nothing but matter, organized in a somewhat more complicated way than the rest of the world. The idea that scientists will one day be able to completely understand (and replicate) all your thoughts and emotions seems downright outrageous.

But a theory being counter-intuitive doesn't automatically make it false. The idea that the earth spins was once counter-intuitive too, simply because the stars seemed to revolve around the earth, not the other way around. Similarly, the field of computer science is already challenging our assumptions about intelligence, thanks to its rapid progress in recent years. In 1997, after the computer program Deep Blue beat the reigning world chess champion for the first time ever, the New York Times published an article about the game of Go, saying:

“To play a decent game of Go, a computer must be endowed with the ability to recognize subtle, complex patterns and to draw on the kind of intuitive knowledge that is the hallmark of human intelligence.”

The article also quoted astrophysicist Dr. Piet Hut:

“It may be a hundred years before a computer beats humans at Go – maybe even longer.”

But less than twenty years later, we witnessed the AlphaGo program defeat professional Go player Lee Sedol. Apparently, AlphaGo managed to replicate the kind of “intuitive knowledge” needed to play Go well. The core technique behind the program is so-called deep learning, which allowed it to learn from a database of high-level games of Go played in the past, and subsequently play millions of games against itself to keep improving its strategies.

An interesting consequence of this method is that none of its creators has full oversight of, or control over, AlphaGo's learning process. As one of them explains: “Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands—and much better than we, as Go players, could come up with.” Deep learning, it seems, lets us program a computer to pursue a certain goal while devising its own path to the fulfillment of that goal.
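The two-phase training loop described above, first imitating expert games and then improving through self-play, can be sketched in miniature. This is a toy illustration only, not AlphaGo's actual architecture: the "game" (pick the larger of two numbers) and the one-parameter `Policy` class are invented for the example.

```python
import random

class Policy:
    """Prefers the 'winning' move (the larger number) with probability p."""

    def __init__(self, p=0.5):
        self.p = p

    def choose(self, a, b):
        return max(a, b) if random.random() < self.p else min(a, b)

    def reinforce(self, lr):
        # Nudge the policy toward the winning behaviour, capped at 1.0.
        self.p = min(1.0, self.p + lr)


def train(policy, n_expert=5, n_selfplay=200):
    # Phase 1: imitation. In this toy game every expert move picks the
    # larger number, so each expert example nudges the policy that way.
    for _ in range(n_expert):
        policy.reinforce(lr=0.05)

    # Phase 2: self-play. The policy plays on its own; whenever its move
    # turns out to be the winning one, that behaviour is reinforced.
    for _ in range(n_selfplay):
        a, b = random.random(), random.random()
        if policy.choose(a, b) == max(a, b):
            policy.reinforce(lr=0.01)
    return policy


random.seed(0)
trained = train(Policy())
```

The point mirrors the quote above: the programmer specifies only the data, the reward, and the update rule; the behaviour that emerges (here, the final value of `p`) is a product of the training, not of hand-written move logic.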


But does that mean they taught computers to “give a damn”? Given the way we humans can passionately care about things, get emotional over failures and successes, value relationships, and so on, computers still seem very far from resembling humans in their most fundamental characteristics. And given the current state of AI technology, whether this will ever happen remains very much an open question.

Another question that’s often debated is whether we should want computers to act like humans. This is, I

think, where the humanities come in. They can observe scientific process within the context of humanity as a

whole, something science itself is incapable of doing. If AI keeps progressing like this in the following years,

the debate it undoubtedly going to heat up. I myself sincerely hope that the talk about AI won’t just be about

what we can do, but also about what we should do.