
October 2, 1998

1st Annual Lecture

Daniel Dennett

Co-Director, Center for Cognitive Studies

University Professor

Austin B. Fletcher Professor of Philosophy

Tufts University

Note: A portion of the Q&A has been removed from the transcription at the request of

Daniel Dennett. The transcription has also been edited for clarity by Professor Dennett.

Things About Things

Lecture Introduction

Moderator

It gives me the greatest pleasure to welcome you here today to the first annual Benjamin

and Anne Pinkel lecture on Mind-Brain Paradigms.

I would like first to take this occasion to thank Sheila Pinkel, on behalf of the students and faculty of the University of Pennsylvania, for her generous gift on behalf of the estate of her parents, which has made this annual series of lectures possible.

I can think of no more fitting tribute to the memory of Benjamin Pinkel than the creation

of a forum for the continuing discussion and investigation of the fundamental questions

concerning the nature of the mind, which were his intellectual passion.

Mr. Pinkel, who received his BSE in Electrical Engineering from the University of

Pennsylvania in 1930, sought in his monograph "Consciousness, Matter and Energy" to,

and here I quote Mr. Pinkel, "propose an expansion of the scientific view of nature to

include a concept of mind." In light of this proposal it seems especially appropriate that

these lectures should be sponsored by our university's Institute for Research in Cognitive

Science. This Institute, which is home to the only National Science Foundation Science

and Technology Center devoted to the study of cognition, has as its mission the

development of a scientific understanding of cognitive processes, and the creation of

technologies based on this understanding - a mission very much in harmony with Mr.

Pinkel's vision.


I am delighted to introduce Sheila Pinkel, who will share with us her special perspective

on this vision.

Sheila Pinkel

Thank you very much, it's an honor to be here today, and by way of introducing the first lecture in this series on Mind-Brain Paradigms, I thought I would tell you a little

about my family.

My father graduated from the Moore School of Electrical Engineering at the University

of Pennsylvania in 1930, and then began working at NACA, the National Advisory

Committee on Aeronautics, which was later to become NASA, the National Aeronautics

and Space Administration, first at Langley Field and then at the Lewis Laboratory in Cleveland, Ohio. My mother graduated from William and Mary at the top of her class

and immediately joined the editorial staff of NACA at Langley Field where she started

editing my father's reports. They soon married, and moved from Virginia to Cleveland, Ohio. During World War II, Dad devoted his energies to perfecting the cooling system

for the jet engine, joining the legions of people fighting to develop technology to win the

war. He went on to design the first nuclear reactor at the Lewis Laboratory, headed one

of the largest divisions of scientists at the Lab, and spent a great deal of his professional

life working on nuclear airplane propulsion systems. In 1956 he and the family moved to

Santa Monica so that he could join the RAND Corporation, to function as a consultant to

the Air Force on novel airplane propulsion feasibility studies.

When he retired in 1972 he began to focus full-time on his passion and interest, the

philosophy of mind. He took numerous courses in neuroscience at UCLA, and

voraciously read neurological research and mind-brain philosophy, in an attempt to

understand how the neurological system, including the brain, worked. As he continued

this research he became fascinated by the remarkableness of the neurological system, and

increasingly believed that conventional descriptions of brain functioning could not

account for the phenomenon of mind, which he viewed as a kind of energy in nature

which had not yet been accounted for by physicists. When he investigated what people

called the physical world, he found in fact that matter is described as a minuscule nucleus

amidst a huge area vaguely described as a "field" containing rings of electrons. What that

field is made of, or how it asserts its energy, is not understood, so that while we call

matter material, in fact there is almost nothing solid about it, and what that energy is,

which holds the atom together, is also not understood. We do have the word "field,"

however, and in his view, this word and words like it are used in science to give the false

impression that the physical world is well understood. In fact, from his point of view we

do not understand magnetism, gravity, the strong force or the weak force, the four forces

identified as the fundamental forces in nature. In addition, a force which is not

acknowledged in this list is the force of mind. He did not believe in psychokinesis; rather, he approached the demonstration of the energy of mind as a scientist would, looking for proof. What one must do is think, "pick up this pencil"; then mind instructs the body, and

one's hand picks up a pencil. So he believed that, while we cannot explain energy of

mind any more than we can explain the other forces, we should not ignore it, but rather

include it in the list of forces in nature. In fact this was a very radical way of


restructuring our understanding of the physical and mental world. What impressed me

most in my discussions with him was his open attitude about knowing, and an ability to

imagine an alternative to the prevailing structure. During the last years of his life every

time I visited him he would describe yet another feature of the neurological system,

which is awesome in its constructive process. It struck me that what he had done was to

give perspective to human understanding of physical and mental structures, and it was in

acknowledging the remarkableness that he found the unimaginable. Ultimately I came to

understand that he was really asking, "of what benefit is it to believe that it is just a

matter of time until human beings understand it all? Doesn't it serve us better to believe

that there is far more in this world than we can possibly imagine, and that by opening up

to the possibility of the unimaginable we can open up to new paradigms and new

solutions that we cannot now imagine." My family and I are quite grateful to the

University of Pennsylvania for giving us the opportunity to sponsor the current series on

Mind-Brain Paradigms. In 10 years we hope to publish a book of these lectures and

discussions, as a way of making the dialogue on this subject visible. The speakers

participating in this series will expand and extend the ideas which were so fascinating to

my father. My parents would have been proud to participate in this series, and I look

forward to watching this dialogue unfold.

Thank you again.

Moderator: Thank you very much, Sheila.

Speaker Introduction

We are most fortunate to have Professor Daniel Dennett here today, to inaugurate this

series on Mind-Brain Paradigms. Professor Dennett, who is Distinguished Arts and

Sciences Professor, and Director of the Center for Cognitive Studies at Tufts University,

is an internationally-renowned scholar in the philosophy of mind and cognitive science.

He is the author of numerous books and essays, which have profoundly influenced

thought about the mind and its relation to nature over the past three decades. His works

include Content and Consciousness, which appeared in 1969, followed by Brainstorms,

Elbow Room, The Intentional Stance, Consciousness Explained, Darwin's Dangerous

Idea, Kinds of Minds, and most recently, Brainchildren.

Today's lecture will be followed by a panel discussion; we are very fortunate to have our

distinguished colleagues Professor Gary Hatfield, recently chair of the philosophy

department, and Professor Robert Seyfarth, currently chair of the University's psychology

department, as discussants for today's lecture.

And without further ado, I am delighted to present Professor Dennett, to lecture on

"Things about Things."

Lecture: Things About Things


Daniel Dennett

It's a very great honor to me to be invited to give the inaugural lecture in this series.[1] I

am delighted to be here in this beautifully refurbished room. I want to thank Mrs. Pinkel

for setting this wonderful program up, and for the copy of the book by her father, which I

will take home with me and treasure. It's a particular honor to be invited as a philosopher

to give this lecture, because philosophers aren't always in such high regard in scientific

quarters. In a review of Steven Pinker's book, How the Mind Works, in the New York Review

of Books, the British geneticist Steve Jones had the following comment to make: "To

most wearers of white coats, philosophy is to science as pornography is to sex. It is

cheaper, easier, and some people seem, bafflingly, to prefer it." Now that view is all too

common, and I understand it from the depths of my soul. I appreciate why people think

this, but I think it is also important to combat this stereotype in a friendly and

constructive spirit, and no place better than in a center for research in cognitive science.

What philosophers can be good at – there aren't many things we can be good at – is

helping people figure out what the right questions are. When people ask me whether

there's been any progress in philosophy I say, "Oh yes, mathematics, astronomy, physics,

physiology, psychology – these all started out as philosophy, and once we philosophers

got them whipped into shape we set them off on their own to be sciences. We figured out

how to ask the right questions, and then we turned them over to other specialists to answer."

Of those fields that have been born out of philosophy, perhaps the most recent is

psychology, the study of the mind, and some people would say it was a premature birth; it

should have been kept in the oven a little longer. This is why it is such a rich field, I

think, for philosophers coming to cognitive science these days, and it is so delightful to

find people in white coats struggling with our issues, grappling with the questions that we

philosophers have been grappling with for a few thousand years. They have come to

realize these questions are not that easy. And one of the great side benefits of the boom

in works on consciousness by neuroscientists and physicists and psychologists that we've

seen in the last decade, is that writing these books has been a humbling experience for the

authors. It is hard to know when you're asking the right sorts of questions, and whenever

that is your problem, you’re stuck doing philosophy. And so it is gratifying to me that

the Institute for Research in Cognitive Science has declared an interest in having a

philosopher come and give the inaugural lecture in this series. On behalf of my

discipline, I am delighted to be here.

Perhaps we can all agree that in order for intelligent activity to be produced by embodied

nervous systems, those nervous systems have to have things in them that are about other

things in the following minimal sense: there is information about these other things not

just present but usable by the nervous system in its modulation of behavior. (There is

information about the climatic history of a tree in its growth rings--the information is

present, but not usable by the tree.) The disagreements set in when we start trying to

characterize what these things-about-things are - are they “just” competences or

dispositions embodied somehow (e.g., in connectionist networks) in the brain, or are they

more properly mental representations, such as sentences in a language of thought,

images, icons, maps, or other data structures? And if they are “symbols”, how are they


“grounded”? What, more specifically, is the analysis of the aboutness that these things

must have? Is it genuine intentionality or mere as if intentionality? These oft-debated

questions are, I think, the wrong questions to be concentrating on at this time, even if, “in

the end”, they make sense and deserve answers. These questions have thrived in the

distorting context provided by two ubiquitous idealizing assumptions that we should try

setting aside: an assumption about how to capture content and an assumption about how

to isolate the vehicles of content from the “outside” world.

A Thing about Redheads

The first is the assumption that any such aboutness can be (and perhaps must be) captured

in terms of propositions, or intensions - sometimes called concepts. What would an

alternative claim be? Consider an old example of mine:

Suppose, for instance, that Pat says that Mike “has a thing about redheads.” What Pat

means, roughly, is that Mike has a stereotype of a redhead which is rather derogatory and

which influences Mike’s expectations about and interactions with redheads. It’s not just

that he’s prejudiced against redheads, but that he has a rather idiosyncratic and particular

thing about redheads. And Pat might be right - more right than he knew! It could turn out

that Mike does have a thing, a bit of cognitive machinery, that is about redheads in the

sense that it systematically comes into play whenever the topic is redheads or a redhead,

and that adjusts various parameters of the cognitive machinery, making flattering

hypotheses about redheads less likely to be entertained, or confirmed, making relatively

aggressive behavior vis-à-vis redheads closer to implementation than otherwise it would

be, and so forth. Such a thing about redheads could be very complex in its operation or

quite simple, and in either case its role could elude characterization in the format:

Mike believes that: (x)(x is a redhead ⊃ . . . )

no matter how deviously we piled on the exclusion clauses, qualifiers, probability

operators, and other explicit adjusters of content. The contribution of Mike’s thing about

redheads could be perfectly determinate and also undeniably contentful and yet no

linguification of it could be more than a mnemonic label for its role. In such a case we

could say, as there is often reason to do, that various beliefs are implicit in the system.

(“Beyond Belief,” in The Intentional Stance, p. 148)

But if we do insist on recasting our description of the content in terms of implicit beliefs,

this actually masks the functional structure of the things that are doing the work, and

hence invites us to ask the wrong questions about how they work. Suppose we could

“capture the content” of such a component by perfecting the expression of some

sentence-implicitly-endorsed (and whether or not this might be “possible in principle,” it

is typically not remotely feasible). Still, our imagined triumph would not get us one step

closer to understanding how the component accomplished this. After all, our model for

such an activity is the interpretation of data structures in computer programs, and the

effect of such user-friendly interpretations (“this is how you tell the computer to treat

what follows as a comment, not an instruction to be obeyed”) is that they direct the


user/interpreter’s attention away from the grubby details of performance by providing a

somewhat distorted (and hyped up) sense of what the computer “understands”. Computer

programmers know enough not to devote labor to rendering the intentional interpretations

of their products “precise” because they appreciate that these are mnemonic labels, not

specifications of content that can be used the way a chemist uses formulae to describe

molecules. By missing this trick, philosophers have created fantasy worlds of

propositional activities marshaled to accomplish reference, recognition, expectation-

generation, and so forth. What is somewhat odd is that these same philosophers have also

largely ignored the areas of Artificial Intelligence that actually do take such content

specifications seriously: the GOFAI worlds of expert systems, inference engines, and the

techniques of resolution theorem-proving and the like. If you want to look at a model of

Mentalese, or the language of thought, look at Prolog, look at some expert systems, look

at the data structures therein. But no philosophers seem to take these seriously as models

of what goes on in minds. Presumably they can see at a glance that whatever these

researchers are doing, their products are not remotely likely to serve as realistic models of

cognitive processes in living minds. But then why do they take the idea of a language of

thought seriously if they're not prepared to look at those models? This is a good

unanswered question for those philosophers.

A thing-about-redheads is not an axiomatized redhead-theory grafted into a large data

base. We do not yet know how much can be done by a host of things-about-things of this

ilk because we have not yet studied them directly, except in very simple models - such as

the insectoid subsumption architectures of Rodney Brooks and his colleagues. One of the

chief theoretical interests of Brooks’ Cog project is that it is pushing these profoundly

non-propositional models of contentful structures into territory that is recognizable as

human psychology. Let’s see how they work, how they interact, and how much work

they can do before we take on the task of linguifying their competences as a set of

propositions-believed.
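Here is a minimal sketch of the contrast, a toy illustration in Python in which every name and number is invented: the propositional picture stores an explicit sentence-like rule, while the dispositional picture is nothing but a parameter-adjuster that comes into play whenever the topic is redheads, with no sentence anywhere to capture.

```python
# A toy contrast between "linguified" content and a thing-about-redheads.
# All names and numbers here are invented for illustration.

class PropositionalAgent:
    """The GOFAI picture: content captured as an explicit, sentence-like rule."""
    def __init__(self):
        self.beliefs = ["forall x: redhead(x) -> unreliable(x)"]  # a mnemonic label at best

class DispositionalAgent:
    """Mike's 'thing about redheads': no sentence, just biased machinery."""
    def __init__(self):
        self.confirm_threshold = 0.5    # evidence needed to accept a flattering hypothesis
        self.aggression_readiness = 0.1

    def on_topic(self, topic):
        # The 'thing' systematically comes into play whenever the topic is redheads,
        # adjusting parameters of the cognitive machinery rather than asserting anything.
        if topic == "redhead":
            self.confirm_threshold = 0.9     # flattering hypotheses less likely confirmed
            self.aggression_readiness = 0.4  # aggressive behavior closer to implementation

    def confirm(self, hypothesis_support):
        return hypothesis_support >= self.confirm_threshold

mike = DispositionalAgent()
mike.on_topic("redhead")
print(mike.confirm(0.7))  # False: determinate and contentful, but written down nowhere
```

The bias is perfectly determinate, and undeniably contentful, yet the only "sentence" in sight is a comment.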

I want to continue my harping on the theme of philosophers playing a role, because it's

important to realize that the direction that Rod Brooks is now famously taking is a

direction that was argued for many years ago by a philosopher, and he earned a great deal

of hooting derision for his efforts. That, of course is Hubert Dreyfus, who claimed way

back in 1972 that in order to be intelligent you have to have a body. The artificial

intelligence community rose up en masse and said he was crazy. When I talk about this

before AI audiences, I use an overhead that says "Just because Bert Dreyfus said it,

doesn't mean it couldn't be true." Now people have come around to seeing that maybe

Bert was right about something, even if he put it in suspiciously aprioristic and

philosophical terms.

Cog has become something of a media star, and has been featured in so many television

documentaries on robotics and artificial intelligence that I hardly need introduce it to you.

[At the lecture I showed some video clips, from which the following indented text is

drawn, introducing Cog, and Cynthia Breazeal’s “emotional infant” robot, Kismet:


In order to act intelligently, there's a lot of things you have to know about the world. And

one approach is to try and tell an artificial intelligence program, write it out in great detail

and tell it all the facts. By building a robot, we're trying to build a system which can act

in the world, interact with people, and learn for itself. Our hope is that that will lead to a

quicker accumulation of the sort of knowledge of what it is to act in the world, so that we

can have true artificial intelligence. To encourage people to interact with the robot

naturally, we've built the robot to look like a human and to act like a human. He has two

eyes, microphones for ears, and a set of gyroscopes to give it a sense of balance. Each of

Cog's eyes has two cameras, one that has a very wide-angle, peripheral field of view, and

one that has a very narrow field of view, but much higher resolution. Cog has a total of

21 degrees of freedom, including two six-degree-of-freedom arms, three degrees of

freedom in the torso, three in the neck, and three in the eyes. . . .

...This is Kismet. Kismet is my infant robot; it gives me facial expressions which tell me what its motivational state is. This one is anger, [...] disgust, excitement, fear, happiness, this one is interest, this one is sadness, surprise, this one is tired, and this one is sleep. In a

suitable learning environment, Kismet's drives are in homeostatic balance. This means

that the robot is neither understimulated, nor overwhelmed by its interaction with the

caretaker. Stimulation intensity is computed by the perceptual system; moving faces are social stimuli whose intensity is proportional to the amount of motion. Any other

motion is viewed as a non-social stimulus. Kismet works with the caretaker to keep the

perceptual stimuli within an acceptable range. Kismet's emotions and expressions reflect

its motivational state. By reading Kismet's facial expressions, the caretaker can respond

to the robot's needs and stimulate the robot appropriately. One of Kismet's drives is to be

social. If Kismet does not receive any social stimulation, it becomes lonely and looks

sad. The caretaker responds by making face-to-face contact with the robot. This satiates

the social drive, and Kismet displays happiness. However, if the social stimulus is too

intense, Kismet becomes asocial and shows disgust. This is a cue for the caretaker to back off and restore the interaction to a suitable intensity level.]
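The video describes a simple homeostatic regulation loop, which can be sketched in a few lines; this is a toy model of my own devising, not the actual Kismet code, and its names and constants are invented:

```python
# A toy homeostatic "social" drive of the kind the video describes: it drifts
# toward loneliness without stimulation, toward overwhelm with too much, and
# an expression is read off its position relative to the acceptable range.

class SocialDrive:
    def __init__(self, low=-1.0, high=1.0):
        self.level = 0.0               # 0.0 = homeostatic balance
        self.low, self.high = low, high

    def step(self, stimulation):
        # Stimulation is taken to be proportional to face motion (per the video);
        # the drive decays toward "lonely" and is pushed up by social stimulation.
        self.level += stimulation - 0.2

    def expression(self):
        if self.level < self.low:
            return "sadness"   # understimulated: a cue for the caretaker to engage
        if self.level > self.high:
            return "disgust"   # overwhelmed: a cue for the caretaker to back off
        return "happiness"     # within the acceptable range

kismet = SocialDrive()
for face_motion in [0.0] * 7:          # no faces in view for a while
    kismet.step(face_motion)
print(kismet.expression())             # "sadness" -> caretaker makes face-to-face contact
```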

The opponent-process, homeostatic system for these quasi-pseudo-emotional states is

actually much more subtle than is suggested in that bit of video. Once it's ported over

onto Cog itself, it will play a big role in allowing Cog to get its infant education from

many different human interactors. The idea is that Cog is going to go through a period of

infancy, and is going to learn a lot about the world the way you and I do, by playing with

things, reaching for things, learning about occlusion and gravity and bumping things and

discovering things that it can't get that it wants, and having people around it all the time.

If you're going to make this work, you have to make the robot as engaging as possible to

people, and the team has already succeeded beyond the predictions of many of the

skeptics. People faced with Cog or Kismet often make fools of themselves the same way

people do when encountering a darling baby in a baby carriage. The sense they have that

there is an agent, a self in there, whose interests they begin to care about very deeply, is

already very potent. This confirms a point I've been making to philosophers for years.

Some philosophers have said that if you ever did make a conscious robot you'd have a

problem, a civil rights problem, convincing the world that it was conscious, and not just a


zombie. And I've said, no, it's actually going to turn out to be the other way around.

Long before you have a conscious robot, you're going to have pseudo-conscious robots

and the hard thing is going to be convincing the world that they aren’t conscious. We're

already beginning to see this, but I have no idea what percentage of those who encounter

Cog come away with the conviction that they have been in the presence of another

conscious being.

Transducers, Effectors, and Media

The second ubiquitous assumption is that we can think of a nervous system as an

information network tied to the realities of the body at various restricted places:

transducer or input nodes and effector or output nodes. In a computer, there is a neat

boundary between the "outside" world and the information channels. A computer can

have internal transducers too, such as a temperature transducer that informs it when it is

getting too hot, or a transducer that warns it of irregularities in its power supply, but these

count as input devices since they extract information from the (internal) environment and

put it into the common medium of information-processing. It would be theoretically tidy

if we could identify the same segregation of information channels from "outside" events

in a body with a nervous system, so that all interactions happened at identifiable

transducers and effectors. The division of labor this permits is often very illuminating. In

modern machines it is often possible to isolate the control system from the system that is

controlled, so that control systems can be readily interchanged with no loss of function.

The familiar remote controllers of electronic appliances are obvious examples, and so are

electronic ignition systems (replacing the old mechanical linkages) and other computer-

chip-based devices in automobiles. And up to a point, the same freedom from particular

media is a feature of animal nervous systems, whose parts can be quite clearly segregated

into the peripheral transducers and effectors, and the intervening transmission pathways,

which are all in the common medium of impulse trains in the axons of neurons.

At millions of points, the control system has to interface with the bodily parts being

controlled, as well as with the environmental events that must be detected for control to

be well-informed. In order to detect light, you need something photosensitive, something

that will respond swiftly and reliably to photons, amplifying their sub-atomic arrival into

larger-scale events that can trigger still further events. In order to identify and disable an

antigen, for instance, you need an antibody that has the right chemical composition.

Nothing else will do the job. It would be theoretically neat if we could segregate these

points of crucial contact with the physics and chemistry of bodies, thereby leaving the

rest of the control system, the "information-processing proper," to be embodied in

whatever medium you like. After all, the power of information theory (and automata

theory) is that they are entirely neutral about the media in which the information is

carried, processed, stored. You can make computer signals out of anything - electrons or

photons or slips of paper being passed among thousands of people in ballrooms. The very

same algorithm or program can be executed in these vastly different media, and achieve

the very same effects, if hooked up at the edges to the right equipment.
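That medium-neutrality is easy to exhibit in code; in this toy sketch of my own (the class names are invented), the same control algorithm runs unchanged over entirely different media, so long as each is hooked up at the edges through the same transducer/effector interface:

```python
# The same algorithm over different "media": only the edges differ.
from abc import ABC, abstractmethod

class Medium(ABC):
    @abstractmethod
    def read(self) -> int: ...         # transducer: world -> signal

    @abstractmethod
    def write(self, signal: int): ...  # effector: signal -> world

class ElectronicMedium(Medium):
    def __init__(self, value): self.value = value
    def read(self): return self.value
    def write(self, s): print(f"voltage set to {s}")

class PaperSlipMedium(Medium):
    def __init__(self, value): self.value = value
    def read(self): return self.value
    def write(self, s): print(f"slip passed across the ballroom reading {s}")

def control_algorithm(medium: Medium):
    # The algorithm itself never mentions electrons or paper.
    signal = medium.read()
    medium.write(signal * 2)

control_algorithm(ElectronicMedium(3))  # the very same program...
control_algorithm(PaperSlipMedium(3))   # ...the very same effects, different medium
```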


As I say, it would be theoretically elegant if we could carry out (even if only in our

imagination) a complete segregation. In theory, every information-processing system is

tied at both ends, you might say, to transducers and effectors whose physical composition

is forced by the jobs that have to be done by them, but in between, everything is

accomplished by medium-neutral processes. In theory, we could declare that what a mind

is is just the control system of a body, and if we then declared the transducers and

effectors to be just outside the mind proper--to be part of the body, instead--we could

crisply declare that a mind can in principle be made out of anything, anything at all that had the

requisite speed and reliability of information-handling. Now this important theoretical

idea is close to being the Grand Enabling Assumption of cognitive science. It has

liberated theorists for more than two decades from having to cope with the unimaginable

complexities of neural connectivity and interactivity.

This important theoretical idea sometimes leads to serious confusions, however. The

most seductive confusion is what I call the myth of double transduction: first the nervous

system transduces light, sound, temperature, and so forth into neural signals (trains of

impulses in nerve fibers) and second, in some special central place, it transduces these

trains of impulses into some other medium, the medium of consciousness! This is, in

effect, what Descartes thought, and he declared the pineal gland, right in the center of the

brain, to be the locus of that second transduction. While nobody today takes Descartes'

model of the second transduction seriously, the idea that such a second transduction must

somewhere occur (however distributed in the brain's inscrutable corridors) is still a

powerfully attractive, and powerfully distorting, subliminal idea. After all (one is tempted

to argue) the neuronal impulse trains in the visual pathways for seeing something green,

or red, are practically indistinguishable from the neuronal impulse trains in the auditory

pathways for hearing the sound of a trumpet, or a voice. These are mere transmission

events, it seems, that need to be "decoded" into their respective visual and auditory

events, in much the way a television set transduces some of the electromagnetic radiation

it receives into sounds and some into pictures. How could it not be the case that these

silent, colorless events are transduced into the bright, noisy world of conscious

phenomenology? This rhetorical question invites us to endorse the myth of double

transduction in one form or another, but we must decline the invitation. As is so often the

case, the secret to breaking the spell of an ancient puzzle is to take a rhetorical question,

like this one, and decide to answer it. How could it not be the case? That is what we must

see. I can't answer all of that question today. After all, I am a philosopher; I ask the questions, I don't answer them. But I can perhaps make a little progress.

What is the literal truth in the case of the control systems for ships, automobiles, oil

refineries and other complex human artifacts doesn't stand up so well when we try to

apply it to animals, not because minds, unlike other control systems, have to be made of

particular materials in order to generate that special aura or buzz or whatever, but because

minds have to interface with historically pre-existing control systems. Minds evolved as

new, faster control systems in creatures that were already lavishly equipped with highly

distributed control systems (such as their hormonal systems), so their minds had to be

built on top of, and in deep collaboration with, these earlier systems.[2]


This distribution of responsibility throughout the body, this interpenetration of old and

new media, makes the imagined segregation more misleading than useful. But still one

can appreciate its allure. It has been tempting to argue that the observed dependencies on

particular chemicals, and particular physical structures, are just historical accidents, part

of an evolutionary legacy that might have been otherwise. True cognitive science (it has

been claimed) ought to ignore these historical particularities and analyze the fundamental

logical structure of the information-processing operations executed, independent of the

hardware.

The Walking Encyclopedia

This chain of reasoning led to the creation of a curious intellectual artifact, or family of

artifacts, that I call The Walking Encyclopedia. In America, almost every schoolyard has

one student picked out by his classmates as the Walking Encyclopedia--the scholarly

little fellow who knows it all, who answers all the teacher's questions, who can be counted

on to know the capital cities of all the countries of the world, the periodic table of

chemical elements, the dates of all the Kings of France, and the scores of all the World

Cup matches played during the last decade. His head is packed full of facts, which he can

call up at a moment's notice to amaze or annoy his companions. Although admired by

some, the Walking Encyclopedia is sometimes seen to be curiously misusing the gifts he

was born with. I want to take this bit of folkloric wisdom and put it to a slightly different

use: to poke fun at a vision of how a mind works.

According to this vision, a person, a living human body, is composed of a

collection of transducers and effectors intervening between a mind and the world. A

mind, then, is the control system of a vessel called a body; the mind is material--this is

not dualism, in spite of what some of its ideological foes have declared--but its material

details may be safely ignored, except at the interfaces--the overcoat of transducers and

effectors. Here is a picture of the Walking Encyclopedia.

[figure 1 about here]

In this picture--there are many variations--we see that just inboard of the transducers are

the perceptual analysis boxes that accept their input, and yield their output to what Jerry

Fodor has called the "central arena of belief-fixation" (The Modularity of Mind, 1983).

Just inboard of the effectors are the action-directing systems, which get their input from

the planning department(s), interacting with the encyclopedia proper, the storehouse of

world knowledge, via the central arena of belief-fixation. This crucial part of the system,

which we might call the thinker, or perhaps the cognition chamber, updates, tends,

searches, and - in general - exploits and manages the encyclopedia. Logic is the module

that governs the thinker's activities, and Noam Chomsky's LAD, the Language

Acquisition Device, with its Lexicon by its side, serves as a special purpose, somewhat

insulated module for language entry and exit.
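A deliberately cartoonish rendering of that flow in code, with every function name invented for the purpose, makes the picture vivid:

```python
# The Walking Encyclopedia, as a cartoon: transducers feed perceptual analysis,
# which feeds a central arena of belief fixation that manages the encyclopedia
# and hands results to planning, which drives the effectors.

def perceptual_analysis(raw_input):
    return {"percept": raw_input}               # just inboard of the transducers

def belief_fixation(percept, encyclopedia):
    encyclopedia.append(percept["percept"])     # the "thinker" updates the storehouse
    return encyclopedia

def planning(beliefs):
    return f"act on: {beliefs[-1]}"             # consults the central arena

def walking_encyclopedia(raw_input, encyclopedia):
    percept = perceptual_analysis(raw_input)
    beliefs = belief_fixation(percept, encyclopedia)
    return planning(beliefs)                    # -> action-directing systems -> effectors

print(walking_encyclopedia("red light ahead", ["the capital of France is Paris"]))
```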

This is the generic vision of traditional cognitive science. For several decades, controversy has raged about the right way to draw the connecting boxes that compose the flow charts - the "boxology" - but little attention has been devoted to the overcoat. That

is not to say that perception, for instance, was ignored - far from it. But people who were

concerned with the optics of vision, or the acoustics of audition, or the physics of the

muscles that control the eye, or the vocal tract, were seen as working on the periphery of

cognitive science. Moreover, those who concerned themselves with the physics or

chemistry of the activities of the central nervous system were seen to be analogous to

electrical engineers (as contrasted with computer scientists).

We must not let this caricature get out of hand. Boxologists have typically been quite

careful to insist that the interacting boxes in such flow diagrams are not supposed to be

anatomically distinct subregions of the brain, separate organs or tissues “dedicated” (as

one says in computer science) to the tasks inscribed in the boxes, but rather a sort of

logical decomposition of the task into its fundamental components, which could then be

executed by “virtual machines” whose neuroanatomical identification could be as

inscrutable and gerrymandered as you like - just as the subroutines that compose a

complex software application have no reserved home in the computer’s hardware but get

shunted around by the operating system as circumstances dictate.

The motivation for this vision is not hard to find. Most computer scientists don’t really

have to know anything much about electricity or silicon; they can concentrate on the

higher, more abstract software levels of design. It takes both kinds of experts to build a

computer: the concrete details of the hardware are best left to those who needn’t concern

themselves with algorithms or higher level virtual machines, while voltages and heat-

dispersion are ignorable by the software types. It would be elegant, as I said, if this

division of labor worked in cognitive science as well as it does in computer science, and a

version of it does have an important role to play in our efforts to reverse-engineer the

human mind, but the fundamental insight has been misapplied. It is not that we have yet

to find the right boxology; it is that this whole vision of what the proper functioning parts

of the mind are is wrong. The right questions to ask are not:

How does the Thinker organize its search strategies?

or

Isn't the Lexicon really a part of the World Knowledge storehouse?

or

Do facts about the background have to pass through Belief Fixation in order to influence

Planning, or is there a more direct route from World Knowledge?

These questions, and their kin, tend to ignore the all-important question of how

subsystems could come into existence, and be maintained, in the highly idiosyncratic

environment of a mammalian brain. They tend to presuppose that the brain is constructed

of functional subsystems that are themselves designed to perform in just such an

organization - an organization roughly like that of a firm, with a clear chain of command


and reporting, and each sub-unit with a clear job description. We human beings do indeed

often construct such artificial systems - virtual machines - in our own minds, but the way

they come to be implemented in the brain is not how the brain itself came to be

organized. The right questions to ask are about how else we might conceptualize the

proper parts of a person.

Evolution embodies information in every part of every organism. A whale's baleen

embodies information about the food it eats, and the liquid medium in which it finds its

food. A bird's wing embodies information about the medium in which it does its work. A

chameleon's skin, more dramatically, carries information about its current environment.

An animal's viscera and hormonal systems embody a great deal of information about the

world in which its ancestors have lived. This information doesn't have to be copied into

the brain at all. It doesn't have to be "represented" in "data structures" in the nervous

system. It can be exploited by the nervous system, however, which is designed to rely on,

or exploit, the information in the hormonal systems just as it is designed to rely on, or

exploit, the information embodied in the limbs and eyes. So there is wisdom, particularly

about preferences, embodied in the rest of the body. By using the old bodily systems as a

sort of sounding board, or reactive audience, or critic, the central nervous system can be

guided - sometimes nudged, sometimes slammed - into wise policies. Put it to the vote of

the body, in effect.

Let us consider briefly just one aspect of how the body can contribute to the wise

governance of a mind without its contribution being a data structure or a premise or a rule

of grammar or a principle, in a phenomenon modeled by Kismet. When young children

first encounter the world, their capacity for attending is problematic. They alternate

between attention-capture - a state of being transfixed by some object of attention from

which they are unable to deflect their attention until externally distracted by some more

powerful and enticing signal - and wandering attention, attention skipping about too

freely, too readily distracted. These contrasting modes are the effects of imbalances

between two opponent processes, roughly captured under the headings of boredom and

interest. These emotional states - or proto-emotional states, in the infant - play a heavy

role in protecting the infant's cognitive systems from debilitating mismatches: when

confronted with a problem of pattern-recognition that is just too difficult, given the

current immature state of the system, boredom ensues, and the infant turns off, as we say.

Or turns away, in random search of a task more commensurate with the current state of

its epistemically hungry specialists. When a nice fit is discovered, interest or enthusiasm changes the balance, focusing attention and excluding, temporarily, the distractors.[3]
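The opponent process can be rendered as a toy model; the thresholds and update rule below are my own inventions, for illustration only:

```python
# "Interest" and "boredom" respond oppositely to how well a task matches the
# system's current competence; their balance gates whether attention stays
# locked on or wanders off in search of a better-matched task.

def attention_step(task_difficulty, competence, interest, boredom):
    mismatch = abs(task_difficulty - competence)
    interest += 0.5 - mismatch     # a good fit feeds interest...
    boredom += mismatch - 0.5      # ...a bad fit (too hard, or too easy) feeds boredom
    if boredom > interest:
        return "turn away", 0.0, 0.0           # reset, and search for a better task
    return "stay focused", interest, boredom   # distractors temporarily excluded

state, interest, boredom = "stay focused", 0.0, 0.0
for difficulty in [0.9, 0.9, 0.9]:             # far too hard for a competence of 0.2
    state, interest, boredom = attention_step(difficulty, 0.2, interest, boredom)
print(state)   # "turn away": boredom protects the system from a debilitating mismatch
```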

I suppose this sort of meta-control might in theory have been accomplished by some

centralized executive monitor of system-match and system-mismatch, but in fact, it

seems to be accomplished as a byproduct of more ancient, and more visceral, reactions to

frustration. The moral of this story may not strike one as news until one reflects that

nobody in traditional Artificial Intelligence or cognitive science would ever have

suggested that it might be important to build a capacity for boredom or enthusiasm into the control structure of an artificially intelligent agent.[4] We are now beginning to see, in


many different ways, how crippled a mind can be without a full complement of emotional

susceptibilities.[5]

Things that go Bump in the Head

But let me make the point in a deeper and more general context. We have just seen an

example of an important type of phenomenon: the elevation of a byproduct of an existing

process into a functioning component of a more sophisticated process. This is one of the

royal roads of evolution.[6] The traditional engineering perspective on all the supposed

subsystems of the mind - the modules and other boxes - has been to suppose that their

intercommunications (when they talk to each other in one way or another) were not

noisy. That is, although there was plenty of designed intercommunication, there was no

leakage. The models never supposed that one box might have imposed on it the ruckus

caused by a nearby activity in another box. By this tidy assumption, all such models

forego a tremendously important source of raw material for both learning and

development. Or to put it in a slogan, such over-designed systems sweep away all

opportunities for opportunism. What has heretofore been mere noise can be turned, on

occasion, into signal. But if there is no noise - if the insulation between the chambers is

too perfect - this can never happen. A good design principle to pursue, then, if you are

trying to design a system that can improve itself indefinitely, is to equip all processes, at

all levels, with "extraneous" byproducts. Let them make noises, cast shadows, or exude

strange odors into the neighborhood; these broadcast effects willy-nilly carry information

about the processes occurring inside. In nature, these broadcast byproducts come as a

matter of course, and have to be positively shielded when they create too many problems;

in the world of computer simulations, however, they are traditionally shunned - and

would have to be willfully added as gratuitous excess effects, according to the common

wisdom. But they provide the only sources of raw material for shaping into novel

functionality.

It has been recognized for some time that randomness has its uses. For instance, sheer

random noise can be useful in preventing the premature equilibrium of dynamical

systems - it keeps them jiggling away, wandering instead of settling, until some better

state can be found. This has become a common theme in discussions of these hot topics,

but my point is somewhat different: My point is not that systems should make random

noise--though this does have its uses, as just noted - but that systems should have squeaky

joints, in effect, wherever there is a pattern of meaningful activity. The noise is not

random from that system's point of view, but also not useful to it. A neighboring system

may learn to "overhear" these activities, however, thereby exploiting them, turning into new

functionality what had heretofore been noise.
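A speculative sketch of such squeaky joints, with all names and numbers invented: one module broadcasts a byproduct it has no use for, and a neighbor, by correlating that byproduct with observed outcomes, turns the noise into signal.

```python
import random

class WorkerModule:
    """Does its job and, willy-nilly, broadcasts a byproduct of doing it."""
    def work(self):
        load = random.random()
        hum = 0.9 * load + random.gauss(0, 0.05)   # a leaky, informative "squeak"
        return load, hum

class NeighborModule:
    """Learns to 'overhear' the hum as a predictor of the worker's load."""
    def __init__(self):
        self.samples = []
    def overhear(self, hum, outcome):
        self.samples.append((hum, outcome))
    def predict_load(self, hum):
        # crude nearest-neighbor exploitation of the overheard correlation
        return min(self.samples, key=lambda s: abs(s[0] - hum))[1]

worker, neighbor = WorkerModule(), NeighborModule()
for _ in range(200):
    load, hum = worker.work()
    neighbor.overhear(hum, load)    # noise for the worker, raw material for the neighbor
load, hum = worker.work()
print(f"actual load {load:.2f}, predicted from the hum {neighbor.predict_load(hum):.2f}")
```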

This design desideratum highlights a shortcoming in most cognitive models: the absence

of such noise. In a real hotel, the fact that the guests in one room can overhear the

conversations in an adjacent room is a problem that requires substantial investment (in

soundproofing) to overcome. In a virtual hotel, just the opposite is true: Nobody will ever

overhear anything from an “adjacent” phenomenon unless this is specifically provided for

(a substantial investment). There is even a generic name for what must be provided:


“collision detection”. In the real world, collisions are automatically “detected”; when

things impinge on each other they engage in multifarious interaction without any further

ado; in virtual worlds, all such interactions have to be provided for, and most cognitive

models thriftily leave these out - a false economy that is only now beginning to be

recognized.
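The asymmetry is easy to exhibit in a toy illustration of my own:

```python
# In a virtual world, two objects can occupy the same spot with no effect at
# all unless the collision test and its consequences are explicitly written.

class VirtualObject:
    def __init__(self, x):
        self.x = x

def step_without_collision_detection(a, b):
    return "silence"    # nothing happens: no interaction comes for free

def step_with_collision_detection(a, b):
    if abs(a.x - b.x) < 1.0:   # the explicitly provided-for (and paid-for) test
        return "thud"
    return "silence"

a, b = VirtualObject(0.0), VirtualObject(0.5)
print(step_without_collision_detection(a, b))  # "silence": they pass through each other
print(step_with_collision_detection(a, b))     # "thud", only because we provided for it
```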

Efficient, effective evolution depends on having an abundant supply of raw material

available to shape into new functional structures. This raw material has to come from

somewhere, and either has paid for itself in earlier economies, or is a coincidental

accompaniment of features that have paid for themselves up till then. Once one elevates

this requirement to the importance it deserves, the task of designing (or reverse

engineering) intelligent minds takes on a new dimension, a historical, opportunistic

dimension. This is just one aspect of the importance of maintaining an evolutionary

perspective on all questions about the design of a mind. After all, our minds had to evolve

from simpler minds, and this brute historical fact puts some important constraints on what

to look for in our own designs. Moreover, since learning in the individual must be, at

bottom, an evolutionary process conducted on a different spatio-temporal scale, the

same moral should be heeded by anybody trying to model the sorts of learning that go

beyond the sort of parameter-tuning that is exhibited by self-training neural nets whose

input and output nodes have significances assigned outside the model.

Conclusions

Cognitive science, like any other science, cannot proceed efficiently without large

helpings of oversimplification, but the choices that have more or less defined the field are

now beginning to look like false friends. I have tried to suggest some ways in which

several of the traditional enabling assumptions of cognitive science - assumptions about

which idealized (over-)simplifications will let us get on with the research - have sent us on

wild goose chases. The “content capture” assumption has promoted the mis-motivated

goal of explicit expression of content in lieu of the better goal of explicit models of

functions that are only indirectly describable by content-labels. The “isolated vehicles”

assumption has enabled the creation of many models, but these models have tended to be

too “quiet,” too clean for their own good. If we set these assumptions aside, we will have

to take on others, for the world of cognition is too complicated to study in all its

embodied particularity. There are good new candidates, however, for simple things about

things now on offer. Let’s give them a ride and see where we get. Thank you very much

for your attention.

Panel

Gary Hatfield

I chose to go first and respond to Dan Dennett's philosophical talk in the normal

philosophical way, that is, to read a shorter philosophical paper in response to his paper.

Cognitive science arose through an interdisciplinary federation of approaches to mind

and cognition, comprising psychology, philosophy, linguistics, computer science and AI,


and sometimes neuroscience, biological studies of animal behavior, and anthropology.

At its origin, it carried the stamp of three of these fields: linguistics, artificial intelligence,

and philosophy, especially a philosophy of mind closely allied with philosophy of

language. The originating ideology was expressed by Jerry Fodor, who treated cognition as essentially linguistic, and modeled all cognitive processes as transitions among sentences in the head - sentences expressed in an innate language of thought, which he compared to

the machine language of a digital computer. Although cognitive science itself has moved

on to a healthy disunity of approaches, variously embracing all of the disciplines I have

named, the core literature is still dominated by the founding ideology. Dan Dennett has

been a participant in the development of cognitive science for three decades, from before

it was known as cognitive science. He has from the beginning been an admirer of AI, but

a critic of the "sentences in the head" view of cognition. He has encouraged the use of

mentalistic language from within the intentional stance, a useful but perhaps provisional

view that treats organisms and artifacts such as heating systems as rational agents. He has

also been a staunch critic of the attribution of determinate phenomenal states to perceivers, such as, in the case of visual perception, perceptual images filled with color and form. He

sees such attributions as falling prey to traditional mentalism, going back to Descartes.

His criticisms in this domain come in conflict with an area of scientific psychology, the

experimental study of perception, which typically does ascribe determinate phenomenal

states, including images containing color and form, to perceivers. Finally, Dennett has

urged cognitive scientists, who are usually dismissive toward B. F. Skinner and other

behaviorists, not to throw the baby out with the bathwater, but instead to preserve useful

parts of behaviorism, especially its learning theory.

Now today Dennett has offered a diagnosis of the current state of the core literature in

cognitive science by questioning two assumptions that still have currency. The first is the

assumption, not as widespread as it once was, that the content of mental states must be

fully capturable without remainder or significant distortion by equating that content with

sentences or propositions. He argues that some content is too amorphous to be rendered

with the precision of English or some other language. Instead, he encourages us to

pursue the tack exemplified by the work of Rodney Brooks, who has built insect-like creatures that find their way about in complex environments without the benefit of internal propositional structures. I think this is good advice. Not that no cognition is

linguistic, or that linguifying models of some cognitive process shouldn't be pursued. It's

just that it would be good to explore other avenues of explaining contentful processes,

and not to assume from the start that all cognitive processes in both humans and animals

must be conceived as sentences in the language of thought or as implicit propositions.

The second assumption is the tendency to treat the nervous system, or the internal

information system, as tied to the body only at specific locales, called transducers and

effectors, which are usually equated with sensory transducers such as rods and cones in

the eye, and motor effectors such as the nerve terminals that control muscle activity.

Dennett complains that this assumption tends to treat cognitive processes as isolated and

insulated from the body in two ways. First, it treats cognitive processes in isolation from

the bodily structures they control. But, he rightly reasons, some or much information

about the environment, information which must be taken into account in explaining the

cognitive and motor achievements of organisms, may be embodied in the structure of the

muscles and the limbs that internal cognitive processes are assumed to control - whether these limbs be leg, wing, fin, or flipper. Here Dennett takes a commendable step in the

direction of a more ecologically-informed cognitive science, which understands that

organisms, their bodies and their psychologies, evolve in relation to environments.

Second, Dennett argues that traditional models regard internal processes of information

transmission as isolated from the brain structures that realize those processes, which is to say two different things: that details about brain structure can be ignored (on the view Dennett criticizes), and that these ignorable brain processes are insulated from other brain

processes. But, he rightly asserts, surely the brain structures that mediate cognition have evolved from earlier, simpler neural structures. Whatever we can learn of

the history of their evolving function, whether through comparative work or

paleontology, is likely to help in understanding their present function. Moreover,

although brain structures are specialized for the tasks they perform, Dennett suggests that

they nevertheless are likely to be subject to influence or perturbations from the other structures, and that this might have good effects, of the sort he mentions in connectionist models that don't settle down prematurely to local maxima, though he considers these good effects perhaps to be found at a larger system level.

Dennett's warning about this second assumption of insulation and isolation is a helpful

counter to some models in AI and cognitive psychology that posit an internal boxology, conceived independently of ecological constraints, and constructed on the assumption

that hardware, or the brain's wetware, doesn't matter. It could also serve as a useful

corrective to some aspects of Fodor's 1983 book Modularity of Mind, namely the

conception of modules as fully insulated from other modules, cognitively and presumably

mechanically. There are however two points in Dennett's discussion of the second

assumption that I would qualify. First, where he says that questions such as "Isn't the

lexicon really a part of the world knowledge storehouse?" are not the right questions to

ask, I would say rather that they're not the only questions to ask. As I said at the outset, I

take the increasing disunity of cognitive science to be a healthy thing. It makes sense to

pursue multiple research strategies simultaneously, rather than putting all the eggs in the

basket of a single research school. This division of labor strategy has been defended in

philosophy of science by Philip Kitcher and Miriam Solomon, among others. Second,

Dennett's warning about examining isolated systems could mistakenly be taken as an

indictment of one of the leading research strategies of post-behavioristic psychology.

Behaviorism, especially in its Skinnerian form, encouraged thinking of the organism in

terms of inputs and outputs, stimulus and response, and in solving an equation for what

mediates between them. The ideal was that psychological science should explain the

behavior of the whole organism, by postulating appropriate laws mediating between S

and R. Although cognitive science rejected behaviorism, the founding ideology pursued

the same ideal, of solving for the behavior of the whole organism, now inserting a

complicated boxology between S and R. This tendency was especially clear in the work

of philosophers who spoke of attributing beliefs and desires to explain the pattern of

external behavior. That goal was incorporated into Fodor's early statements of the

language of thought thesis, and it informed Dennett's intentional stance. Since the demise

of behaviorism, another research strategy has driven the main areas of experimental

psychology. The strategy has been to give up, for now at least, the claim to solve the


organism, and to focus on functioning subsystems within the organism or specific classes

of phenomena. The subsystems include the various perceptual systems, whether visual,

auditory, olfactory or internal to the body, the control of sequential motor action, the

perceptual processes in reading and in listening to speech, and the memory systems. The

phenomena include various dimensions of visual experience and the play of attention. In

these cases, psychologists have fruitfully investigated a single psychological capacity, or

a group of related capacities. They have then sought to explain that capacity, rather than

the behavior of the whole organism, through a functional decomposition of its workings.

A functional decomposition looks something like a subpart of Dennett's boxology, but it

is more narrowly focused than the explanation of the walking encyclopedia. It is likely to

aim at capacities of the organism that are biologically salient, such as distance or color

perception, and it takes into account what is known about the physiology or the

evolutionary history of the structure subserving that capacity. The success of this

research strategy in psychology reveals an important moral which I think is left out of

Dennett's account. The psychologist can study and seek to explain various sensory,

cognitive, and motor capacities even if physiological or evolutionary knowledge is not

available. Psychology can make its own approach to the mind, independent of

neuroscience or evolution. Indeed, this is a good thing, since it helps to know what an

organism can do, and how it does it, if one is to approach our massively complex brains

or our sketchily known evolutionary history, and ask how these capacities are realized

physiologically or how they evolved. (So I'm suggesting there that psychology can and

in fact does often lead the way toward neuroscientific investigations or towards posing

evolutionary questions.)
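To make the idea of a functional decomposition a little more concrete, here is a toy sketch in Python. It is my illustration, not Hatfield's: the stages (cone responses, then opponent channels) are standard textbook simplifications of early color vision, and the Gaussian sensitivity curves and their peak wavelengths are rough placeholders rather than measured cone fundamentals.

import math

# Toy functional decomposition of early color vision (illustrative only):
# wavelength -> cone responses -> opponent channels.

def cone_responses(wavelength_nm):
    # Approximate L, M, and S cone sensitivities as Gaussians; real cone
    # fundamentals are empirically measured curves, not Gaussians.
    def sensitivity(peak_nm, width_nm=45.0):
        return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)
    return sensitivity(565), sensitivity(535), sensitivity(445)

def opponent_channels(l, m, s):
    # Classic opponent-process stage: red-green, blue-yellow, luminance.
    return l - m, s - (l + m) / 2, l + m

def perceive(wavelength_nm):
    # The capacity under study, decomposed into its functional stages.
    l, m, s = cone_responses(wavelength_nm)
    red_green, blue_yellow, luminance = opponent_channels(l, m, s)
    return {"cones": (l, m, s), "red-green": red_green,
            "blue-yellow": blue_yellow, "luminance": luminance}

for wavelength in (450, 530, 620):
    print(wavelength, perceive(wavelength))

The point of the sketch is purely structural: each stage explains part of the capacity, and the decomposition can be tested and refined without yet knowing how the stages are realized in neurons.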

So far I've commended much of what Dennett says, with a few qualifications. Now I

come to a point of disagreement. At one point, Dennett describes what he terms a

confusion embodied in the myth of double transduction. According to the myth, "first the

nervous system transduces light, sound, temperature, and so forth into neural signals, and

second in some special central place, it transduces these trains of impulses into some

other medium, the medium of consciousness." He compares this position to that of

Descartes, and suggests that while nobody holds it explicitly, it has a powerful

subliminal effect on research. Now, it's true that only a very few modern psychologists

and neuroscientists explicitly endorse Cartesian dualism (there have been a few but not

very many) according to which the mind is a special, separate, immaterial substance,

entirely distinct from matter and sharing none of its essential properties. But

contemporary scientists do hold something that is formally equivalent to the doctrine of

double transduction, that is, some contemporary scientists do. I say that it is formally

equivalent to suggest that while the set of relations named in Dennett's doctrine is

preserved, and so its mathematical form, while the metaphysics is left aside. Let me explain this.

The formally equivalent law is found in the part of scientific psychology known as

psychophysics. The name psychophysics describes its original subject matter.

Psychophysics studies the lawful relations between physical stimuli and psychological

experience, between, in the case of vision, say, wavelengths of light and the experience of

color they cause in the observer. Psychophysics, and the study of auditory and visual


perception more generally, were the heartland of psychology during its rapid expansion

in the last part of the 19th century. The findings of psychophysics were what convinced many people that psychology could be a science. Study of perceptual experience

was and remains a royal road to empirical study of mind, and of the mind-brain. Reports

of phenomenal experience provide access to the products of psychological processes and

to the existence of certain central brain states. Now from its inception, psychophysics

was intended to deal with the physiological side of the relation between stimulus and

experience. That is, it aimed also to discover facts about the neurophysiological events

that yield experience. About 15 years ago two respected psychophysicists, Davida Teller at the University of Washington and Ed Pugh here at Penn, formulated what they

took to be an important goal for psychophysicists, one that lay implicit in the science.

They argued for the importance of formulating what they called "psychophysical linking

propositions." A linking proposition makes explicit the reasoning that there must be a

physiological locus, whether in a small area of the brain or across areas, at which

physiological activity is related to the experienced content, what philosophers call

"qualia" and some psychophysicists such as Pew of "sensory experience." Now the

general form of a psychophysical linking proposition is like this: one conceives of a

stimulus, which is mapped by a certain relation onto a physiological state, which is

mapped onto other physiological states, leading finally to a physiological state that still

maps onto other physiological states, but which has the characteristic of being what Pugh and Teller call the bridge locus, the physiological state that directly maps onto the

experienced content, say a visual, auditory or olfactory perception.

[Explaining diagram] So S here is a stimulus, the M's are various mappings, the phis are all physiological states, and psi is an experienced content, a psychological state; M-star is the mapping between the final physiological state and the experience, and phi-m is the bridge locus.
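In notation, the scheme just described can be written out as a chain (this is my reconstruction of the diagram from the verbal description, so the subscripts are inferred rather than Pugh and Teller's exact formulation):

S \xrightarrow{M_1} \phi_1 \xrightarrow{M_2} \phi_2 \rightarrow \cdots \rightarrow \phi_m \xrightarrow{M^{*}} \psi

where \phi_m is the bridge locus and M^{*} is the hypothesized law-like mapping from that physiological state to the experienced content \psi. For comparison, a classic stimulus-to-experience law of the kind psychophysics has long supplied (my example, not Hatfield's) is Fechner's law, \psi = k \log (S / S_0), relating sensation magnitude to stimulus intensity.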

Although Pugh and Teller do not explicitly call such propositions "laws", it is natural to

read their general scheme, this general scheme here, as giving the form for hypothetical

laws relating brain states or brain activity to qualia. Pugh and Teller would then be seen

as proposing that psychophysicists formulate and test various law statements relating

brain states to sensory experiences, to go along with known or newly discovered laws

relating stimulus to experience. In this way, one can narrow in on central brain processes

from both ends, through the stimulus pathway (the physiological side) and through the phenomenal effect. By the way, neither Pugh nor Teller is a dualist, nor need they be;

they are both materialists. We can now see that what Dennett portrays as an undesirable

implicit assumption is in fact an explicitly formulated and defended tenet of current

science. Since Dennett is usually respectful of science, and usually prefers to cast his

philosophical points politely as recommendations rather than as summary executions of

scientific hypotheses, we need an explanation of why he would reject, outright, the

legitimacy of a proposal like Pugh and Teller's. I foreshadowed the immediate

explanation earlier: Dennett denies that determinate, imagistic phenomenal experience

exists. In particular, he denies that qualia exist. Qualia include the concrete, experienced

content of a red sensation, which is and has been the very object of psychophysical laws

and linking propositions. Dennett favors instead a view according to which we attribute

qualia to ourselves as part of a narrative redescription of the information we receive


perceptually. In his view, qualia are just the subject matter of fictional stories, having the

same status as Santa Claus or the Three Bears. In technical terminology, they are

nonexistent intentional objects of narrative linguistic descriptions. But why does Dennett

deny qualia? We need a deeper explanation. This is especially demanded, since his

account of qualia as objects of narrative is an instance of a tendency he otherwise

bemoans: the tendency to linguify psychological content. In an article in 1988 and in his 1991 book, Dennett offered some arguments meant to discredit qualia, which came in the form of thought experiments about taste and vision. Now I myself don't find the

arguments compelling, for reasons of experimental design. I think they fall prey to

confounded variables including uncontrolled response bias and memory effects. But I

don't think these are his main arguments. In fact, his denial of qualia long predates these

particular arguments. It's one of the most persistent features of his writing, spanning 30

years. We could perhaps trace it to his early admiration of AI models, which tend to

linguify, or to an unavowed remnant of his admiration for behaviorism, but I think that

would be unfair and not very interesting. I think the real explanation comes in what I

would guess to be Dan's most repeated rhetorical charge against qualia, that they are

mysterious and immaterial, irreconcilable with natural science. Those who posit qualia

are in fact unable to say how those qualia could be produced by, or be identical with, the

activity of neurons in the brain, that is, with the differential flow of ions across cellular

membranes in accordance with an electrical potential. But, Dennett reasons, the brain is

made of matter. The red experienced in a red sensation is not in the ontology of physics,

although physical light is. So while the brain may carry information

about red light, it can't contain or produce red qualia; a properly naturalistic ontology

won't allow them. This line of reasoning contains two interrelated assumptions that are

interesting, widely shared, and by my lights wrong.

The first assumption concerns the domain of the natural. How shall we decide what is

natural? One way is to let the constituents of things decide. A thing is natural if it is

made of matter, and so its states and properties are in principle describable in physical

language. This contrasts with other putative objects not made of matter, such as souls or

God, which are supernatural. Let us call this "ontological naturalism." It is a version of

what Larry Shapiro has called "Lego naturalism": a thing is natural if it is made out of

properly certified physical building blocks. This contrasts with what might be called

"scientific naturalism." According to scientific naturalism, a thing is natural in virtue of

being described by natural scientific law, or counted among the objects of natural

scientific explanation. The natural sciences are specified by a list, under this conception,

which always includes physics, chemistry, and biology, and often includes psychology.

On this view, it's one of the great things about Penn, in fact, that psychology is here

classed among the natural sciences. On this view, if qualia are the object of

psychophysical laws, they are natural phenomena, as long of course as one considers

perceptual psychology to be natural science.

The second assumption concerns the proper way to conduct science. Shall we see science

primarily as the search for lawful relations, and so include the laws of psychophysics in

the proper domain of science, or shall we conduct science by seeking explanations of

things in terms of what they are made of, in terms of their constituent parts? I think


Dennett favors the latter approach, given his repeated discussions of what things are

made of, and his use of what things are alleged to be made of as a criterion for

distinguishing the real and the unreal. The assumption that a real explanation involves

specifying an underlying mechanism built out of material parts is deep, but not universal.

It was encouraged by early proponents of modern science in the 17th century. It was

especially prominent in the physical explanations of Descartes and Robert Boyle, who

argued that real explanations must be cast in terms of the shapes and motions of the parts

of things, like the shapes and motions of the parts of a machine. Let us call this

mechanistic mode of science the "Boylean" model. Dennett's two assumptions

are deeply embedded in the history of science, or at least in a certain reading of the

history of science, that is, ontological naturalism and the Boylean model. It is widely held that the Scientific Revolution in the 17th century had the effect of banishing the mind from nature and relegating mental phenomena to an immaterial mind. Some investigators did banish mind from nature in the seventeenth century, but a larger group of investigators considered mind to be part of

nature. This included many dualists, who saw no reason why the states of an immaterial

substance could not be studied empirically, and so made an object of empirical science.

And this kind of naturalism about the mind and empiricism toward the mind, I think,

informed the work of Benjamin Pinkel, one of those for whom these lectures are named.

The new scientific study of mind was slower to gel than the new physics, and it was only

in the 18th century that substantial works in the new scientific psychology appeared.

Early works in the 1750s were by the Swiss naturalist Charles Bonnet and the German physician Johann Krüger. As the century wore on, works in psychology became more numerous. Interestingly, in 1808, when F. A. Carus wrote his History of

Psychology he could discuss 125 authors that he considered to belong to the community

of psychologists. Across time, the tendency within this body of literature was to defer

ontological questions about what the mind is made of, in order to study the empirical

properties of the mind, so that even the dualists deferred the metaphysical questions. The

strategy of 18th century authors to bracket ontological questions in favor of seeking

lawful relations is in the spirit of another style of science that was beginning to supplant

Boylean science at the time. The second style of science posited explanatory laws, even

where no proper Boylean or material building block ontology could be found. The most

famous was Newton's postulation of a law of universal gravitation, in the absence of any

mechanistic explanation of how the attractive force worked. Newton himself hankered

after a mechanical explanation of gravity in terms of the pushes and pulls and contacts of

particles. Others took the lesson from his writing that laws are equally or more important

than stories about constituent parts, so let's dub this second style that looks for laws rather

than constituent parts, Newtonian science.
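For reference, the law Newton postulated has the familiar form (I am supplying the formula here; it is not in the talk):

F = G \, \frac{m_1 m_2}{r^2}

It specifies exactly how the attractive force F depends on the two masses and their separation r, while saying nothing at all about any mechanism of pushes, pulls, or contacts that might produce the attraction.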

This brief bit of history gives us a way to understand Dennett's objection to qualia, and a

way to describe Pugh and Teller's work. Dennett subscribes to the tradition that sees mind

outside of nature. If mind is to be brought into nature it must be done carefully, by

equating mental activity with the activity of the material brain. We must explain mind in

terms of constituent mechanisms, for which we are able to see, in principle, how their

operation could be explained by physics. We must therefore be able to go from the intentional stance to the design stance and ultimately to the physical stance, in his words. If we now

can't see how to specify the component parts equated with some mental phenomena,


these phenomena should be jettisoned. This is the road of ontological naturalism in

Boylean science. By contrast, those who take psychological science such as

psychophysics seriously can describe themselves as scientific naturalists, who accept the

Newtonian model. Of course, these persons might also look for Boylean explanations

where they can be found. But they won't look for Boylean explanations exclusively, and

they'll perhaps conjecture that in many areas of science it's the Newtonian style that rules.

In the end, my main objection to Dennett's talk is his exclusionary metaphysics. He

wants to rule qualia out of present science, because it's hard to see how they can be

explained by appeal to the physical properties of the brain. In effect he is supposing that

we should extrapolate current knowledge of the brain and current physics into the future,

and predict that qualia will never be explicated, and he concludes from this that they should

be denied existence. As a Boylean, he is ready to rule out what can't be explained in

Lego fashion. By contrast, the approach I advocate takes seriously the lawful in the

phenomenon. It approaches the mind on all fronts, by studying psychological capacities

and processes in their own right, by studying their relation to physiology, and by

drawing on evolutionary findings where available. But it insists that the science of

perceptual psychology can proceed without needing a prior ontological certification,

based on current knowledge of the brain and current physics. Posited qualia are justified

by their incorporation into successful science. Psychology is an autonomous science. It

can proceed in advance of neuroscience, while listening for whatever news neuroscience brings.

Further reflection might even show that neuroscience needs psychology more than

psychology needs neuroscience, for the reason I gave earlier: psychology provides the

functional language for describing brain function, but that's another topic for another

philosophical age.

Dan Dennett

I'm really glad that [Hatfield] drew your attention to Teller and Pugh's concept of a bridge

locus, and he's exactly right that this is contrary to what I've claimed about qualia. If you

want to see a beautiful demolition job on the data in support of the Teller and Pugh

notion, look at Thompson and Palacio and Varela's target article in Behavior and Brain

Sciences of 1992, in which I also have a commentary.7 I think that we can't do justice to

this issue here, but I think it is really interesting to see why you don't need a bridge locus

to do psychophysics; something that Thompson, Palacios, and Varela explain in some

detail. As for the distinction between Newtonian and Boylean science, I think Hatfield's

right about the history, and I'm also going to accept his claim that I am playing the

Boylean role, but I want to point out that in doing this I am simply supposing that, just as

one can be a good Boylean about reproduction, metabolism, and locomotion (for

instance), one can be a good Boylean about psychology. I don't see any reason to

suppose that the phenomena of psychology will require a different attitude towards the

natural sciences than the other, initially deeply perplexing phenomena of life. If we go

back in time, not so very far, we find that the mystery of reproduction was one that was

so great that preformationism reigned, or was at least the most serious contender for a

theory, and people hadn't a clue how, using mechanisms, you could explain the process of

reproduction. We now have a very good detailed Boylean mechanistic explanation of

how reproduction is possible. Our aspirations in cognitive science should be the


aspirations to treat the mind the way we treat the rest of the body, as a very complex bit

of machinery that will succumb to reverse engineering. So I accept Hatfield’s distinction

between Newtonian and Boylean science and I say, as far as psychology is concerned,

let's be good Boyleans.

Hatfield

Well, the only response I would make is that for a convincing defense of the cogency of

the Pugh-Teller approach, see the forthcoming, still-in-preparation article by Hatfield and Pugh, appearing soon near you. Other than that, I would just argue that the Newtonian

approach should be pursued as well.

Seyfarth

Well, all this is going to do is serve to show the different ways in which philosophers and

psychologists either prepare themselves or don't prepare themselves for being a

commentator. I should start off by saying that I work on the social behavior and

communication of animals, and I think Dan's paper makes a very cogent argument for the

integration of lots more work on nonhuman creatures into the central part of cognitive

science. Of course I believed this before I heard his paper, so in that sense he had no

effect on me whatsoever.

When one embarks on a program in artificial intelligence, as Dan was describing, one

sets up a structure in the program that embodies in most cases the two kinds of

assumptions that he's criticizing, and hence as he argues, quite well I think, you recreate

the problem because you're working within a specific framework. Now consider the

difference between this kind of research and the sort of research one does when one goes

out to study an animal in its natural habitat or in the laboratory. There, you take for granted that you're dealing with an evolved structure. There are two things

about this evolved structure that are important. First of all, you know that it's going to

have a lot of gerrymandering and ad-hockery, because we know that's how evolution has

worked over the animal's history. Second, you can't isolate the brain from the rest of the

animal. It forces you to integrate all of the biological systems in order to explain how the

animal achieves what it does. This is certainly what has happened in the course of our

understanding of the mechanisms that govern reproduction in vertebrates or invertebrates.

But this also forces us to entertain the possibility that there are many different answers to

questions about the proper functioning of mind. Dan gives us three questions that he

says aren't the right questions, and I think Gary is right that it might be better to say they're not the only questions; and he says what we've got to do, in the future

of cognitive science, is to ask how else we might conceptualize the proper parts of a

person.

I would suggest that a lot of the reason why we frame the questions in this way is because

so much of cognitive science deals exclusively with humans. Imagine that I had sitting

here a research subject, a salamander. Would I be prompted to ask, do facts about the

background have to pass through belief fixation in order to influence planning or is there

a more direct route from world knowledge? Probably not. I'd be forced to ask some


other kind of question that nevertheless gets at the same general biological problem. So

what I take this as is an argument very strongly in favor of taking a large part of the Institute's space and devoting it to work on things like salamanders.

Having said that, I just want to throw out one other idea. Every year, I think it's every year, the Boston Museum of Science has a contest in which people are asked to submit programs; the programs talk with you on a computer terminal, and the judges sit at the other end of the computer terminal, and they're supposed to figure out whether it's a computer program or a real person at the other end. I know that Dan has been the judge of this contest one year; maybe it's not every year, maybe it's every so often. I am also pretty sure that members of the general public are

also invited to be judges. Here's an example in which a person is presented with a mind,

or a candidate mind, and chances are, if it's a member of the general public there would

be an attempt to linguify mental content, to structure it in the way that we in cognitive

science tend to do, and similarly in the Cog project, what matters to the success of that

project in some respects is whether Kismet elicits this parental response. Not how it does

it. So projects like these, like ethology in animal behavior, force you to use function as a

guide rather than structure. We want to know how an animal achieves its success in the

world before we start talking about the structure, and I think this is an important sort of

shift of emphasis and it's certainly an emphasis that involves more biological

approaches. A lot of this is what Dan Dennett has been saying for years. He wrote a

commentary in Behavioral and Brain Sciences many, many years ago, in which he chastised

people in artificial intelligence for using fancy computers to try and model one particular

tiny part of the human brain. He said, there's a much, much more difficult problem, and

he titled his article, "Why not the whole iguana?" And I think that is a much more

difficult problem, and it's something that we ought to start tackling.

Dennett

There’s very little for me to disagree with there. Indeed, "why not the whole iguana?"

This has turned out to be a very fruitful research strategy within artificial intelligence and

artificial life.8

A little story about the early days of the Turing Test may shed some light on some of

Robert’s other comments. The recent limited Turing Test competition, for the Loebner

Prize, was held not at the Museum of Science but (the first year) at the Computer

Museum in Boston, and I was chairman of the Prize Committee for several years.

Some years before that, Joseph Weizenbaum, another member of the Prize Committee

and the creator of the famous or notorious Eliza program (the Rogerian psychotherapist

that interviews you about your psychological problems) ran a little informal experiment

at Harvard Medical School. He wanted to see what would happen if they confronted

people with something like a Turing Test, so he introduced subjects (psychiatry residents

as I recall, but I’m not certain) one at a time to a human being and said, “you're either

going to be talking to this human being or you're going to be talking to a computer.”

And the human being would shake hands with the subject, and they'd make a few

pleasantries perhaps, and then the human being would go off, and then the subject would

sit down at a terminal and start a question-and-answer game with . . . either the human


being in another room, or Eliza, the computer program. After a few minutes of

interaction, Weizenbaum asked the subject in passing what his opinion was so far: did he

think he was conversing with a person or a computer. If the subject gave the wrong

answer, Weizenbaum would surreptitiously adjust the performance: if the subject said he

was conversing with a computer, the actual human interlocutor would raise the quality of

his responses, making them ever “more human” and if the subject said he was conversing

with a human being, Weizenbaum would degrade Eliza’s performance. Subjects were

remarkably perseverative in their hypotheses, clinging to them in the face of ever

mounting evidence to the contrary. In one instance the human interlocutor was driven to

the point of saying something like, "I really liked that blue necktie that you had on when I

shook hands with you a few minutes ago." In one case in which a subject thought Eliza

was a human being, Weizenbaum found that when he degraded the performance of Eliza

to the point of "word salad", the response from the subject was, "I don't know, I think this

man's sick, I don't know what's on his mind." The fact is that we are much more

susceptible to the intentional stance than we are prepared to acknowledge. Once you get

into that frame of mind, it’s child's play to make the world fit the hypothesis; it's child's

play to find reasons, to find intention, in almost any behavior that emerges. A hard thing

to do in cognitive science is both to exploit the intentional stance and also resist its allure

when it should be resisted.

Lecture Notes

1. A shorter version of this talk has been published in The Foundations of Cognitive

Science, Joao Branquinho, ed., Oxford University Press, 1999. I want to thank Chris

Westbury and Rick Griffin for comments on an earlier draft.

2. The previous 6 paragraphs are drawn, with some revisions and additions, from my

Kinds of Minds (1996).

3. Cynthia Breazeal, discussion at American Association for Artificial Intelligence Symposium on Embodied Cognition and Action, MIT, Nov. 1996.

4. Consider a sort of problem that often arises for learning or problem-solving programs

whose task can be characterized as “hill-climbing”--finding the global summit in a

problem landscape pocked with lower, local maxima. Such systems have characteristic

weaknesses in certain terrains, such as those with a high, steep, knife-edge “ridge” whose

summit very gently slopes, say, east to the global summit. Whether to go east or west on

the ridge is something that is “visible” to the myopic hill-climbing program only when it

is perched right on the knife-edge; at every other location on the slopes, its direction of

maximum slope (up the “fall line” as a skier would say), is roughly perpendicular to the

desired direction, so such a system tends to go into an interminable round of

overshooting, back and forth over the knife-edge, oblivious to the futility of its search.

Trapped in such an environment, an otherwise powerful system becomes a liability. What

one wants in such a situation, as Geoffrey Hinton has put it, is for the system to be

capable of “noticing” that it has entered into such a repetitive loop, and resetting itself on


a different course. Instead of building an eye to oversee this job, however, one can just let

boredom ensue.
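To make the scenario concrete, here is a minimal Python sketch of the idea. It is not from the lecture: the landscape, the fixed step size, and the reset rule are all invented for illustration, and "boredom" is implemented crudely as noticing that the climber has barely moved over its last several steps.

import math
import random

def grad(f, p, eps=1e-6):
    # Two-sided numerical gradient of f at p = (x, y).
    x, y = p
    return ((f(x + eps, y) - f(x - eps, y)) / (2 * eps),
            (f(x, y + eps) - f(x, y - eps)) / (2 * eps))

def climb(f, p, step=0.1, iters=4000, patience=8, rng=None):
    # Fixed-step steepest ascent ("up the fall line") that lets boredom
    # ensue: if the climber has barely moved over the last `patience`
    # steps -- it is shuttling back and forth across the knife-edge --
    # it resets itself on a different course instead of relying on an
    # overseeing "eye".
    rng = rng or random.Random(0)
    recent, best = [], p
    for _ in range(iters):
        gx, gy = grad(f, p)
        norm = math.hypot(gx, gy) or 1.0
        p = (p[0] + step * gx / norm, p[1] + step * gy / norm)
        if f(*p) > f(*best):
            best = p
        recent.append(p)
        if len(recent) > patience:
            old = recent.pop(0)
            if math.dist(p, old) < step / 2:  # looping, not climbing
                p = (rng.uniform(-12, 12), rng.uniform(-12, 12))
                recent.clear()
    return best

# A knife-edge ridge along x = 0 whose crest slopes gently toward the
# global summit at (0, 10): on the slopes, the fall line is nearly
# perpendicular to the direction the climber actually needs to travel.
def ridge(x, y):
    return -5.0 * abs(x) - 0.01 * (y - 10.0) ** 2

print(climb(ridge, (3.0, 0.0)))

Run as written, the climber marches to the ridge, oscillates across it while creeping only a few thousandths of a unit along the crest per step, gets "bored", and resets on a new course, keeping the best point found so far.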

5. Antonio Damasio's recent book Descartes' Error (New York: Grosset/Putnam, 1994)

is a particularly effective expression of the new-found appreciation of the role of

emotions in the control of successful cognition. To be fair to poor old Descartes,

however, we should note that even he saw--at least dimly--the importance of this union of

body and mind:

By means of these feelings of pain, hunger, thirst, and so on, nature also teaches that I am

present to my body not merely in the way a seaman is present to his ship, but that I am

tightly joined and, so to speak, mingled together with it, so much so that I make up one

single thing with it. (Meditation Six)

6. In what follows I owe many insights to Lynn Stein's concept of "post-modular

cognitive robotics" and Eric Dedieu, "Contingency as a Motor for Robot Development"

AAAI Symposium on Embodied Cognition and Action, MIT Nov, 1996.

7. "Hitting the Nail on the Head," (commentary on Thompson, Palacios and Varela),

Behavioral and Brain Sciences, 15, 1, p. 35, 1992.

8. In July of 2000, I participated in an international workshop on the island of Lanzarote,

in the Canary Islands, entitled “Towards the Whole Iguana.” A volume of the papers and

discussions presented by the roboticists and artificial life researchers who participated,

edited by Owen Holland and David McFarland, is forthcoming from Oxford University

Press.