
Source: ruccs.rutgers.edu/images/personal-zenon-pylyshyn/Burge...

Chapter 1

Completely Meaningless

We are, of course, aware that many philosophers, psychologists and AI-niks dispute the bona fides of `belief/intention' explanations, at least for the purpose of doing serious science. Behaviorists hold that causal interactions involving mental states and processes can't possibly explain anything because there are no such states or processes. And there is a certain kind of reductionist who holds that, though there are mental states and processes after all, psychological explanations in which they figure must eventually be replaced by explanations couched in the vocabulary of the neurosciences. We will simply assume that behaviorists and reductionists are both wrong. The theories of cognition that behaviorists have generally endorsed patently lack the sort of explanatory power that belief/intention theories very often achieve. And successful reductions of psychological explanations to neurological explanations have proved to be very thin on the ground. The merest glance at a textbook of neuropsychology discovers a plethora of unreduced psychological commitments. It is true, but unsurprising, that neurological theories are often able to exhibit the mechanisms by which psychological states and processes are implemented; in effect, they explain how things that happen in brains can be the sorts of causes of which psychology claims that behavioral phenomena are the effects. But to do that isn't to replace psychological explanations; it's to presuppose them.

A word more about this since, though behaviorism is largely moribund these days, neuro-reductionists of one sort or other continue to flourish.

Suppose you hold the following theories:

T1: It's the ice cubes in your drink that make it become cooler and more dilute.

T2: Ice cubes are certain arrangements of water molecules.

The question then arises: how could an arrangement of water molecules cause a drink (or anything else) to become cooler and more dilute? There is, as everybody knows, a widely accepted answer to this question; one that is beyond doubt more or less true. It involves complicated claims about heat transfer and the diffusion of liquids. Our point is that this answer doesn't reject either T1 or T2; to the contrary, it presupposes both and explains how they both can be true. That requires explaining what sort of things ice cubes are, and why things of that sort can cool and dilute liquids that contain them, even though many other sorts of things (submarines, for example) do not. This is the usual situation when a macrolevel science and a microlevel science converge on the same phenomena: the latter elucidates the implementation of processes that the former has revealed; and we don't know why the same sort of story shouldn't apply to the relation between psychological explanations and explanations in the brain sciences. If everything works out right, neurology explains how brain


events cause the sorts of phenomena that psychology uncovers. Each science respects the ontology of the other. What's special about the psychology/neurology relation is that it has so often been denied that a belief/intention theory of the mind can meet the conditions of `naturalization' to which scientific explanations are supposed to conform. To suppose that mental causes have effects in the natural world, the objection goes, is to imagine that a ghost could run a machine. But that's a philosophical misperception of what is, in fact, quite a normal pattern of inter-science relations. So far, we think, nothing has turned up that lacks respectable precedents in the practice of other sciences; nothing that should alarm a philosopher of science, a psychologist, an AI person, or, since Chomsky, a linguist.

But the scientific respectability of belief/intention explanations is by no means the only tendentious thesis we propose to take for granted. Another is the priority of thought to language: both in the course of ontogeny and in the course of verbal communication, linguistic forms inherit their semantic content from the concepts and thoughts that they are used to express. The word `cat' means cat because it is used to express the concept CAT. (So too do the Bantu and Russian translations of `cat', if the translations are accurate.) Likewise, the reason one utters the word `cat' when one wants to say there's a cat is that `cat' is the word we use to say what we are thinking about when we are thinking about cats.

There is, both in philosophy and psychology, an enormous literature contrary to this thought-first view of mind-language relations; none of which strikes us as convincing. Usually either radical Behaviorism or radical Empiricism (or both) is lurking in the polemical background, and the credentials of both have long expired. Moreover, there are some empirically plausible and reasonably intuitive arguments suggesting that the use of language presupposes complex mental capacities rather than the other way around.

Item: The alternatives to `thought first' thus far on offer are all discredited. For example: there is a tradition according to which `he said X because he intended to' specifies the background of dispositions with which the speaker gave voice to his utterance. But that can't be right. John's being disposed to say what he did isn't a sufficient condition for his saying it; whereas whatever caused John's saying X must, ipso facto, have been; all causes are ipso facto sufficient to bring about their effects.

Another example: Wittgenstein in philosophy and Skinner in psychology both held that first language acquisition is somehow the effect of `training’ (by ‘social reinforcement’ or whatever). But it turns out that children generally don’t get much language training in the course of first-language acquisition; nor, apparently, do they need much. And, more to the point, there is no serious suggestion of how such training might work its putative effects. Skinner takes learning theory for granted, which it is no longer possible to do; Wittgenstein offers hypothetical vignettes along the lines of: `Jane-says-`Slab’, Tarzan-brings-slab’.

Since acquiring a first language is, prima facie, a very complex cognitive achievement, it’s hard to imagine how it could proceed in a creature that lacks conceptual sophistication. Neither pigeons nor chimpanzees can do it. Nor, so far, can computers.


Item: Thought-first may explain the intuition that we can’t always say what we think (cf the notorious tribulations of lawyers, logicians and poets).

Item: Thought-first may explain the intuition that we can almost always think what we say (compare: `I can't tell what I think until I hear what I say', which is either a rhetorical joke or simply false).

Item: Thought-first may explain why we can communicate much of what we think to people who speak our language.

Item: Thought-first may explain why (with translation) we can communicate much of what we think even to people who don’t speak our language.

Item: Thought-first may explain why, even if the `Whorf hypothesis’ (fn) turns out to be true, much evidence suggests that the effects of one’s language on one’s thought, perception, cognitive style, and the like are pretty marginal. REFERENCES

Fn. Whorf hypothesized that cognitive states and processes are profoundly affected by differences between languages. (Presumably languages that are very different aren't inter-translatable.) These days, it is generally defended only in its `weak' version, according to which there are at least some cognitive consequences of which language one speaks. (References)

Item: Thought-first avoids having to hold that infrahuman animals and pre-verbal infants can’t think. Though both claims are frequently heard, there is, to our knowledge, no evidence for either; and the second seems increasingly unattractive with the collapse of the Piagetian program.

While none of this is conclusive, we think it’s persuasive enough to warrant taking the priority of thought to language as a working hypothesis and seeing where it leads; here as elsewhere, the only way to prove the pudding is to see what happens if you swallow it.

But, even assuming both that belief/intention explanation is OK, at least in principle, and that you can’t learn or speak a language (including a first language) unless you can already think, there remains something disturbing and perplexing about the cognitive sciences; something that philosophers and cognitive scientists really should worry about. As remarked, it is characteristic of cognitive science to offer theories in which propositional attitudes figure as causes and as effects; and propositional attitudes have semantic contents; and no other kinds of causes or effects do; not rocks or plants, not acids or protons; not anything else with which science has thus far concerned itself. So the cognitive sciences raise two questions that more familiar sciences do not, and to which nobody has yet managed to find a satisfactory answer. These are the questions on which this book is focused: Just what is semantic content and just


what role does it play in cognitive science explanation? Both questions are very hard, and we don’t claim to have dealt with them even to our own satisfaction. But we think we know which way the answers may lie; and we will try to point in that direction.

The standard theory of conceptual content

Everybody knows that nobody knows much about the content of concepts. But at least there used to be a sort of consensus that embraced three of the fields most obviously concerned with it: linguistics, psychology and philosophy. We'll refer to this more or less consensus view as the `Standard Theory of Semantics' (STS). We propose to use STS as a point of reference for our discussion even though, for good and sufficient reasons that we'll try to make clear, it has become less and less standard for the last fifty years or so and will, we think, quite likely continue to do so. It would hardly be an exaggeration to say that the scuttling of STS has been the major accomplishment of cognitive science so far.

The basic tenets of STS are these:

Semantic content has two distinct but related components: intension (with an `s'), or `sense', or meaning; and extension, or reference.

(Fn. You may object to speaking of the meaning of a concept: `concepts don't have meanings', you may wish to say; `concepts are meanings; they're the meanings of words.' So be it. We don't have strong feelings, but we'll generally stick to `intensions'. But the view that concepts and words are equivalent, or at least stand in one-one relation, has lots of problems. <which we either will or will not discuss later>)

The intension of a concept determines its extension.

This is a bare minimum; STS may be extended beyond these theses, and they themselves may be elaborated in one way or other. Still, we think some version of this bare-bones theory was common ground in both cognitive psychology and philosophy at least as recently as the early 1970s, and that it remains, to this day, the common ground of most psychologists' accounts of cognition. We will presently turn to a sketch of several theories of mental content that are versions of STS, all of which have enjoyed a substantial following in philosophy and cognitive psychology, and all of which we take to be false; not just incomplete or less than fully confirmed, but radically false; false root and branch.

Psychological Reality

One way to summarize the discussion so far is to say that cognitive science must support a `Realistic' interpretation of the theories it has on offer. That, we think, is an obligation to which all sciences are subject as such. Another way to say this is that cognitive science is a natural science in whatever sense of the term applies to, for example, biology, chemistry and other untendentious examples of `nonbasic' sciences that purport to explain empirical phenomena. We take that to require that the causal processes it posits must be of a sort that physical objects and events can undergo. We take these claims to be relatively untendentious, though some do not. For example, an influential book by the philosopher Tyler Burge (19xx) denies that Naturalism is


required of theories in cognitive psychology, but we see no plausible grounds for this claim. If a putative cognitive process could be shown not to be implementable by a physical mechanism, that would be universally taken to show that a theory that requires it must be false; surely that is the consensus view among scientists themselves. That it has regularly turned out that otherwise well-confirmed theories do meet this condition is, we think, one of the major discoveries of the scientific enterprise. (fn) Likewise, it is sometimes suggested, both by philosophers and linguists (REFERENCES; Jackson; Somes; check with JDF), that accurate prediction of speaker intuitions (modal or grammatical or both) is all that could reasonably be required of a cognitive science. We assume, however, that speakers' intuitions are of interest because they are generally epistemically reliable, and we take them to be epistemically reliable because we think that they actually are the effects of epistemically reliable mental processes of belief fixation of the sort that cognitive science studies. If there really aren't any such processes, who cares what informants intuit?

We take the points we've been making to be relatively untendentious; they are consequences of the cognitive sciences being sciences. But there are also features that they have in virtue of their being cognitive sciences; that is, sciences the domains of which include content-bearing mental states. Here the issues are much less clear, and much more interesting.

The kind of cognitive psychology for which we will propose a theory of mental content is one that takes propositional attitudes (believing, intending, intuiting, etc.) to be content-bearing states that are also bona fide causes and effects. But this may seem to be a paradox. Propositional attitudes are relations that creatures bear to propositions; and propositions are abstract objects (as are numbers, properties and the like). And abstract objects can't be causes or effects. The number three can't make anything happen, nor can it be an effect of something's having happened (though, of course, a state of affairs that instantiates threeness, for example, there being three bananas on the shelf, can perfectly well be a cause of John's looking for bananas there, or an effect of someone's having put three bananas there). Likewise, propositions can't be causes or effects; the proposition that it is raining can't cause John to bring his umbrella; it can't even cause John to believe that it's raining. (fn) But then, if propositions can't be causes or effects, mustn't propositional attitudes be likewise causally inert? It looks as though we must be wrong either about propositional attitudes being causes, or about their being content-bearing states. In either case, we must be wrong about something.

Fn. What can, of course, cause John to believe that it's raining (and hence to carry his umbrella) is someone's telling him that it is raining (i.e. someone's telling him that the proposition it's raining is true). But someone's telling John that it's raining is an event, not a proposition.

This is a metaphysical minefield, and has been at least since Plato; one in which we don't propose to wander. We will simply take for granted that abstracta are without causal powers; only `things in the world' (including, in particular, individual states and events) can have causes or effects. We take those truths to be self-evident.

RTM


It helps the exposition here and further on if we introduce the `type/token' distinction. If one writes `cat' three times, one has written three word tokens of the same word type. Likewise if one utters `this cat has no tail' three times. Propositions are types of which there may (or may not) be tokens either in language or, according to the kind of cognitive science we endorse, in thought. Proposition types are abstract objects; proposition tokens in language are acoustic events, or marks on paper. Proposition tokens in thought are mental representations. That being assumed, it's proposition tokens, not propositions per se, that figure as causes and/or effects according to our kind of cognitive psychology. It's tokenings of propositions `in the head' that connect propositions (which, being abstract, are causally inert) with the concrete particulars whose causal interactions with one another, and with things in the world, cognitive science attempts to explain. We think that positing mental representations is the way out of the paradox that seemed to be looming: how could there be a causal theory of propositional attitudes if propositions are abstracta? There may be a better out, but we don't know of one. So the Representational Theory of Mind (RTM) will be the background theory in everything that follows. As previously remarked, it's convenient also to suppose that mental representations are neural entities of some sort or other; doing so helps with the naturalization problem. But you don't have to suppose this if you don't want to, and we are officially neutral; all we insist on is that, whatever else they are, they have to be the sorts of things that physics talks about.
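The type/token distinction is easy to put in extensional terms. A minimal sketch (our illustration, not the authors'; the example sentence is invented):

```python
# Toy illustration of the type/token distinction (not from the text):
# each occurrence of a word is a distinct token; occurrences of the
# same form count as tokens of a single type.
utterance = "the cat saw the cat"

tokens = utterance.split()   # ['the', 'cat', 'saw', 'the', 'cat']
types = set(tokens)          # {'the', 'cat', 'saw'}

print(len(tokens))  # 5 tokens
print(len(types))   # 3 types
```

Writing `cat' three times, in these terms, produces three tokens while adding only one type.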

We do understand that RTM is a lot to ask you to swallow, even as a working hypothesis. Still, we don't propose to defend it here; suffice it that we're pretty much certain that it will have to be swallowed if cognitive science is to be interpreted Realistically, viz. as a causal account of how the cognitive mind works. (fn) Suffice it that the idea of mental processes consisting of causal chains of tokenings of mental representations isn't radical (or even particularly modern). It has been explicit in theories that are at least as old as Locke and Hume (and arguably as old as Aristotle). It is almost always taken for granted in both Rationalism and Empiricism. To be sure, our version of RTM differs in a number of ways from Classical formulations. We don't, for example, think that mental representations are images (images have a terrible time expressing propositions, which is what thoughts do routinely). And we aren't Associationists. We think that mental processes are typically causal interactions, but not that the causal interactions are governed by the Laws of Association; rather, we think that mental processes (at least the ones that subserve cognition) are typically computations, of which mental representations are the typical domain.

Behaviorism and Reductionism have been so much the norm in philosophy and psychology for going on a century that it's hard to keep in mind that both are radical departures from the mainline tradition in theories of mind (a fact of which their early proponents were fully aware). RTM revives that tradition. There are mental representations; mental representations are symbols, of which the semantic values are typically propositions and/or concepts; propositional attitudes are relations to propositions that are typically mediated by relations to mental representations; to a first approximation, to remember that Sam is a veterinarian is to have a token of a mental representation which (like tokens of the English sentence `Sam is a veterinarian') expresses the


proposition that Sam is a veterinarian stored `in your memory' (which, we suppose, is in turn somewhere in your head). (fn) But, fond of it though we are, this is not a book about RTM. (fn)

Fn. For expository purposes, it's convenient to pretend that some mental representations are tokens of English expressions. It is very unlikely that thoughts are expressed as actual natural language sentences, unless they are thoughts about or representations of sentences, in which case the sentences would be in quotations.

Our topic is the nature of mental representations; what semantic properties they have and how their having them is compatible with the ontology of cognitive science being physicalistic. To which we now return.

Fn. There is plenty about RTM, pro and con, in both the psychological and philosophical literatures. Standard formulations include REFERENCES.

VERSIONS OF STS

STS is, in large part, a theory about mental representations that express concepts. Its characteristic property is that it takes concepts to have intensions and extensions, the former of which determine the latter. We're about to discuss (albeit briefly) a number of versions of STS that have been, and still are, frequently espoused in the cognitive science literature. We think it is more or less demonstrable that all of them fail. We hope that will be enough to make you seriously consider the thought that maybe STS ought to be replaced.

What are concepts? (fn)

fn. This is an abbreviation. Strictly speaking, our question is: `What are the mental representations that express concepts?' But it's convenient to talk this way as, for better or worse, much of the cognitive science literature does.

Concepts as mental images

Very likely, this is the first thing people will tell you if you ask them: concepts are something like pictures that float before the mind's eye. To think about (/remember/want/imagine/dream of) a Martini is like having a photograph of a Martini, only it's `in your head' rather than on your mantelpiece. So, if you believe, as many/most psychologists do, that people have at least some access to the nature of their thoughts, memories, visual perceptions and so forth, it's natural to conclude that the representations we have of sensory concepts are very much like pictures (although of course not identical to pictures, for obvious reasons).


But though this intuition is widespread, it can't be right; the reasons are familiar from Phil 101. For one thing, real images have all sorts of properties other than the ones that they show. For example, a photograph has a size, weight and color of its own, none of which its subject may share: a photograph of John, but not John himself, is, perhaps, a fraction of an inch thick. Likewise, your photograph of John (type or token) has perhaps been admired by Ansel Adams, even if Adams didn't much admire John. Conversely, as Berkeley famously pointed out, your concept of John may `abstract from' many properties that both your photograph and, mutatis mutandis, your mental image of John may exhibit. Indeed, the concept must abstract from indefinitely many such properties if (as is usually supposed) the concept of X specifies only those properties that X has essentially. John's photograph shows him as sitting or standing, as wearing a hat or as bare-headed; come to think of it, as clothed or naked. But your concept of John allows him to be any of these. In this respect, it's plausible to say that concepts are more like descriptions than pictures.

You’ve probably heard all that before, so we won’t subject you to further elaborate. Suffice it that that it’s a mistake to paraphrase Barkley as saying that the image theory fails for abstract concepts’. It’s true that you can’t picture the property that triangles have in common as such; but it’s equally true that you can’t picture the property that chairs have in common as such. The problem, in both cases, is that the property that the things in the extension of a concept have in common as such is, ipso facto, a property; and, of course, you can’t make a picture of a property. The best you can do is make an image of something that has (`personifies’) the property.

But we do want to emphasize a couple of points about mental images that are less frequently discussed.

First point, about mental images qua images: suppose you believe, as many/most psychologists appear to do, that people have phenomenological (/conscious) access to at least some properties of their mental states, including thoughts, memories, visual perceptions and so on. What can one then conclude about what sorts of states these are? Well, if their properties are accessible to consciousness, they must be the kinds of properties one can be conscious of; and the most obvious examples of things you can be conscious of are how something looks, feels, smells … etc. This makes imagery (visual, auditory, olfactory, etc.) a natural candidate for the nature of at least some mental states, since how things look, feel, smell, etc. are paradigms of things that you can be conscious of. This makes some sort of case for there being mental images, whether or not they are good candidates for identifying with concepts.

Consider, in particular, the case of visual images. It's pretty generally agreed that images have spatial properties (not just the thing that an image is an image of, but the image itself). But it's hard to see how mental images could have them (e.g. width or depth); not unless it's supposed that the image is displayed on a physical surface, such as the surface of the primary visual cortex. Likewise, a visual image must be of some color or other (that is, it can't be transparent); also it may or may not have a tear somewhere or other; also it may or may not be laid over something, part or all of which it may or may not obscure. Suppose, then, that mental images are displayed on the surface of the visual


cortex; then the image does indeed have spatial properties, and some psychologists have managed to persuade themselves (and/or one another) that big mental images correspond to big pieces of cortex, little images correspond to little areas of cortex... and so forth. But not, surely, that red images correspond to red areas of cortex (cortex is widely known to be more or less gray). If, in short, there are mental images, then (assuming that they aren't ghostly) the image must itself have physical properties; and there are, in general, no physical properties of the brain that can be identified with those of the image. Onions have a very strong smell. But things in the brain don't; or, if they do, their smells aren't the strong smells of onions. Likewise, images typically have `intentional' objects; that is, they are typically images of something. Are bits of cortex likewise `of' things? And, if so, do they typically resemble what they are of? We don't say that such problems about mental images can't be solved; but we do say that the kinds of properties we've mentioned are intrinsic to images as such. So, if we can't figure out how mental images could have them, then they can't; and then the suggestion that there are mental images makes no clear sense.

Second problem: concepts have constituents, but images don't; they only have parts. So, for example, the concept MARY AND HER BROTHER has a semantic interpretation (viz. it refers to Mary and her brother); and each of its constituents has a referent too; MARY refers to (the person) Mary; MARY'S BROTHER refers to (the person) Mary's brother; and so forth. A crucial problem (the problem of `compositionality') is how the referents of complex concepts are determined by the referents of their constituents. What makes this problem crucial is that answering it is the only way we know of to explain why concepts are `productive': MARY'S BROTHER; MARY'S BROTHER'S BROTHER; MARY'S BROTHER'S BROTHER'S BROTHER…. And that, in turn, is needed to explain why there are so many thoughts one is able to have. A precisely analogous compositionality problem arises for the phrases and sentences of natural languages; neither has been solved so far, but at least we are working on the latter.

Whereas, consider a picture of Mary and her brother. It has, of course, parts; some of which are pictures of parts of Mary and some of which aren't. So, part of a picture of Mary can be a picture of Mary's left arm and her brother's nose. But there are parts of a picture of Mary that aren't pictures of parts of Mary. In fact, there are indefinitely many such parts (think of all the ways in which you could carve a picture of Mary into a jigsaw puzzle). Accordingly, if concepts are images, then they too should have (not just parts but) constituents. But they don't. So concepts aren't mental images. (fn)

Fn. This is putting it roughly. Better would be: `It’s intrinsic to complex concepts (like MARY’S BROTHER) to have constituents from which they inherit their semantic properties.’ But now, consider a picture of the blue sea. If it’s a picture by Seurat, then it’s quite possible that none of its parts is blue; in fact, you may well have to step far enough back that you can’t see the colors of the parts that make the picture blue.

It’s important, in thinking about whether pictures have constituents, to keep in mind not just the distinction between a thing and its parts, but also the distinction between parts of the picture and parts of what it pictures. Arguably, at least, things in the world have parts that may, or may not, count as constituents. (Maybe the constituents of an automobile are its `functional’



parts; arguably, the drive chain is one of its constituents, but an arbitrary bit of the front seat cover probably isn’t.) This sort of distinction has implications for psychology: It’s quite true that memory images, when they fragment, tend to respect the constituent structure of what they are representations of. Introspection suggests that you may `lose’ part of a memory image of John and Mary, but it’s likely to be a part that represents John or a part that represents Mary (rather than, say, a part that represents John’s nose and Mary’s toes). But this consideration does not tend to show that pictures have constituents; it shows, at most, that John and Mary do. The question remains open whether mental images (as opposed to the things that they’re mental images of) have constituents; and we’re suggesting that they don’t. If we’re right about that, then mental images are very poor candidates for concepts.

Concepts as definitions

This is perhaps the most familiar version of TST. It says there are two aspects of the content of a concept: referential content and meaning (or `intension’, with an `s’, or `sense’). The extension of a concept is the set of things that the concept applies to (or, according to some views, the set of actual or possible things it applies to). The intension of a concept is the property such that things fall in the concept’s extension in virtue of having it. So, for example, the extension of the concept CAT is the set of cats. Its intension is (maybe) the property of being a domestic feline.

That is probably more or less the semantic theory they taught you in grade school. It’s because they believed it that your teachers kept telling you how important it is to `define your terms’, thereby making clear which concept you have in mind. Like so much of what they taught you in grade school, it most likely isn’t true. We’ll say presently why; but let’s start with some of its virtues.

-As we just saw, the theory that concepts are definitions seems to explicate the relation between the meaning of a concept (fn) and its reference. (Likewise for words, insofar as they express concepts.)

Fn. Perhaps you object to speaking of the `meaning’ of a concept: `Concepts don’t have meanings; concepts are meanings.’ So be it. When we wish to speak of conceptual content in what follows, we’ll usually stick to `intensions’ or to `senses’.

-Likewise, it suggests an account of having a concept: To have a concept is to know its definition. (In some cases a stronger condition may be endorsed: To have a concept is not just to know its definition but also to know how to apply it; that is, how to tell whether something satisfies the definition and is thus in the concept’s extension.) Philosophers of an epistemological turn of mind often like to think of having a concept as having `ways of telling’ (criteria for..., or conceptually sufficient conditions for..., etc.) what kind of thing a thing is. That seems to avoid the (crazy) skeptical thesis that there is no way of telling (or perhaps of telling `for sure’) what kind of thing it is. If a concept is a definition, then we can know for sure whether a thing is in its extension (assuming, of course, that we can know for sure whether the thing satisfies the definition).



- Many philosophers, and more psychologists than you might suppose, have thought that semantics should underwrite some notion of analytic truth (`truth in virtue of meaning alone’), thereby connecting semantic issues with questions about modality. (Cf. Bruner, Goodnow and Austin, who are pretty explicit in endorsing a definitional theory of concepts.) (fn: describe their experiments and why they count as studies of concept attainment (word learning)) If it’s true by definition that cats are felines, then there couldn’t be a cat that isn’t a feline; not even in `possible worlds’ other than our own. So, if you think that it’s true solely in virtue of the meaning of CAT that cats have to be felines, you might well think that the definitional theory of conceptual (/lexical) content is the very semantic theory that you’re looking for.

-It is plausible (still only at first blush) that the definition story can account for the fact that concepts compose; i.e., that you can make up new concepts by putting old ones together. The putative explanation is that concepts compose because concepts are definitions and definitions compose. If you have the concepts BROWN and DOG (that is, if you know their definitions), you can compute the content (i.e., the definition) of the concept BROWN DOG. And if you have the concept BROWN DOG and the concept BARK, you can figure out the content of the concept BARKING BROWN DOG: a barking brown dog is, by definition, a thing that is brown, and is barking, and is a dog. And so on, ad infinitum. That might be how a merely finite brain can master an indefinitely large repertoire of concepts. This begins to look sort of promising.
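The compositional story just sketched can be put computationally. The following is a toy illustration of our own (the predicates and the example object are invented, not drawn from any theory in the literature): if a definition is treated as an applicability test, then complex definitions are just conjunctions of simpler ones, and a finite stock of primitives yields indefinitely many composites.

```python
# Toy sketch: treat a definition as a predicate (an applicability test).
# Complex "definitions" are conjunctions of simpler ones, so a finite
# stock of primitives yields indefinitely many composite concepts.

def conj(*defs):
    """Compose definitions: satisfied iff every conjunct is satisfied."""
    return lambda thing: all(d(thing) for d in defs)

# Invented primitive "definitions" for illustration:
is_brown = lambda thing: thing.get("color") == "brown"
is_dog = lambda thing: thing.get("kind") == "dog"
is_barking = lambda thing: thing.get("barking", False)

brown_dog = conj(is_brown, is_dog)
barking_brown_dog = conj(brown_dog, is_barking)  # compose the composite again

rex = {"kind": "dog", "color": "brown", "barking": True}
print(barking_brown_dog(rex))                      # True
print(brown_dog({"kind": "cat", "color": "brown"}))  # False
```

The point of the sketch is only that conjunction iterates without limit: a finite mind that stores the primitives and the composition rule thereby commands an unbounded repertoire.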

Nonetheless, concepts aren’t definitions.

-Most words just don’t have definitions, which they surely would if the concepts they express were definitions. More than anything else, it’s this lack of clear examples that has of late led to the steep decline in the popularity of the definitional account of conceptual content in cognitive science. (fn)

Fn. Though most of cognitive science is increasingly dubious about the definitional theory of conceptual content, there are still many in linguistics who hold that word meanings are often definitions (which is much the same as holding that concepts are, assuming that the meaning of a word is the concept it expresses). It is, for example, claimed that there are structural arguments that favor a definition of `kill’ as CAUSE TO DIE, and likewise in the case of other `causative’ verbs. The status of such claims depends on a number of quite complicated issues in `morphosyntax’ and is so far very unclear. For what it’s worth, however: `x killed y’ and `x caused y to die’ are clearly not synonyms, since you can cause someone to die without killing him; e.g., by getting someone else to kill him. There is a very large literature on this and related issues; the end is not yet in sight. REFERENCES

There is, no doubt, a scattering of cases to the contrary (like BACHELOR =df unmarried man; and maybe WANT =df desire to have, etc.). A relative handful of these provide the standard examples in introductory courses in the philosophy of language, where they have become dull from overuse. Then there are concepts drawn from specialist vocabularies. These are, often enough, the products of explicit agreement and stipulation; so a yawl is a two-masted sailing vessel whose after mast is stepped behind the tiller. Similarly, a square is a four-sided closed figure all of whose edges are straight lines of equal length that intersect at 90-degree angles; a Jack is the card



that comes between the ten and the Queen in a standard deck of playing cards... and so forth. These do specify more or less necessary and sufficient conditions of application, and they are typically learned by explicit instruction. But they clearly don’t suggest themselves as plausible exemplars for a general theory of word meaning.

You may have been taught in grade school that a good way to `define your terms’ is to look them up in a dictionary. So it bears emphasis that dictionary entries generally don’t provide definitions in the sense of the term that definitional theories of content have in mind. People who write dictionaries do not take definitions as their models for entries (though they sometimes pretend to). Here, for example, is an entry drawn from a dictionary that we happen to have to hand: “game”: a contest governed by set rules, entered into for amusement, as a test of prowess, or for money or other stakes. But, as Wittgenstein pointed out, skip-rope doesn’t count as a game by this criterion. Nor does a discarded tin can count as “garbage” according to the dictionary `definition’: refuse from a kitchen, etc., consisting of unwanted or unusable pieces of meat, vegetable matter, eggshells, etc. For one thing, such definitions are too open-ended to determine extensions (vide `and so on’, `etc.’). Rather, they’re informal guides to usage, written for users who can be relied on to pick up a term from a scattering of examples of things it applies to. That is generally just what we do when we consult them. But (as Wittgenstein also pointed out) it isn’t just the examples but also the `knowing how to go on’ that does the work.

- Even if lots of concepts did have definitions, there couldn’t, barring circularity, be definitions for every concept; so what is one to do about the semantics of `primitive’ concepts, the concepts in terms of which the others are defined? This question is urgent, since, if it isn’t answered, it is easy to trivialize the claim that the sense of a concept is its definition; i.e., a specification of a property that is shared by all and only things in the concept’s extension.

It is, after all, just a truism that there is a property that all and only (actual or possible) green things share: the property of being green. Likewise, there is a property that all and only actual and possible cats share: the property of being cats; likewise, there is a property that all and only Sunday afternoons share: the property of being a Sunday afternoon; likewise, for that matter, there is a property that all and only Ulysses S. Grants share: the property of being Ulysses S. Grant. If the thesis that concepts are definitions is to be of any use, such circular definitions must somehow be ruled out. Presumably that requires saying which concepts are to count as primitives and what their content consists in. But, in fact, nobody knows how to do either. For example, here’s one suggestion we’ve heard:

-There are primitive concepts in terms of which all the others are (directly or indirectly) defined.

-The primitive concepts are very general ones like, as it might be, PHYSICAL OBJECT and EVENT, etc. (fn) But this seems to beg the question since, presumably, PHYSICAL OBJECT has an extension too, so the question arises: what do things in its extension have in common as such?

Fn. Notice, here too, how much of the work the `etc.’ is doing. Perhaps that wouldn’t matter if the examples really did show us `how to go on’; but as far as we can see, they don’t.



It seems not to help to say that the intension of PHYSICAL OBJECT is something that is physical and an object, since this raises the question what the intensions of PHYSICAL and of OBJECT might be. Nor does the dilemma’s other horn seem more attractive. It has been suggested, for example, that PHYSICAL OBJECT can be defined after all: perhaps something is a physical object in virtue of its having a `closed shape’, and/or a `continuous trajectory in space’, etc. We’ve tried hard, and failed, to convince ourselves that the concept of a TRAJECTORY is more basic than the concept of a PHYSICAL OBJECT (isn’t it true by definition that the trajectory of an object is its path through space?); for that matter, we find it hard to convince ourselves that the concept of a physical object is more basic than the concept of a thing.

It’s to be said in praise of the Empiricists that they (Hume, for example) offered a serious suggestion about how to deal with this complex of worries: according to the `Empiricist Principle’, insofar as their content is concerned, all concepts reduce to sensory (/experiential) concepts. Or rather, that’s a principled answer to `Which concepts are primitive?’ if there is a principled answer to `Which concepts are sensory?’ Empiricists thought there is: roughly, a sensation is a mental object such that you have one if and only if you believe that you do. (fn) This, when taken together with the claim that sensory definability is the general case for concepts that aren’t primitive, provides Empiricists with a refutation of skepticism: if all your concepts are directly or indirectly sensory, and if your beliefs about when your sensory concepts apply aren’t

Fn. Unnoticed headaches, phantom limbs and the like appear to be counterinstances; and, anyhow, if your goal is to say what propositional attitudes are, you mustn’t define `sensation’ in terms of belief.

subject to error, then, contrary to what skeptics say, some of your beliefs about nonmental things (tables and chairs and the like) are certainly true. (fn)

Fn. But, arguably, this is a pyrrhic refutation. The strongest claim that is at all plausible is that you can’t be wrong about what sensation you are having. If you want to know what sensation somebody else is having, you will generally need to ask them. This suggests, and the history of the discussion tends to confirm, that the price of adding the Empiricist Principle to the definitional account of concepts is likely to be solipsism. That seems to us much too much to pay for answering skeptics.

As for us, we’re not at all sure that answering skeptics is a project that’s worth the effort. Why exactly does it matter whether or not it is, in some tortured sense of the term, `possible’ that there are no tables or chairs or elephants, given that, as a matter of fact, there are perfectly clearly lots of each? In any case, nothing more will be said about skepticism in what follows.

But, according to TST, intensions determine extensions; and, these days, it’s hard to believe that, in the general case, things fall in the extensions of concepts in virtue of their sensory properties (that is, in virtue of how they look, sound, feel, taste and so forth). Maybe that works for GREEN (though that it does is capable of being doubted). REFERENCES But it quite clearly doesn’t work for CAT (or for PLUMBER, or for PHYSICAL OBJECT, come to think of it). If it’s



true that there are very few bona fide definitions, there are still fewer bona fide sensory definitions; if it is rare for things to fall under a concept because they satisfy a definition, it is rarer still for things to fall under a concept because they satisfy a sensory definition. The crux is: being a cat isn’t the same property as being cat shaped, cat colored, and/or being disposed to make cat noises. The identity claim fails in both directions. Since that’s true, CAT doesn’t have a sensory definition.

What with one thing and another, the mainstream opinion these days is that Hume’s program of conceptual analysis by sensory reduction can’t be carried through.

Concepts as stereotypes

If, as we suggested, the sheer scarcity of plausible examples was a main cause of so many cognitive scientists abandoning the definitional model of conceptual content, a main cause of the enthusiasm that greeted the work on stereotypes (by Eleanor Rosch and many others... refs; Rosch is not really the standard bearer of this idea, just one of the worker bees) was that there’s a plethora of examples. (REFERENCES) Moreover, while there are relatively few cases of reliable effects of a subject’s knowledge of definitions on the sorts of experimental tasks that cognitive psychologists like to use, such effects are easy to elicit when the manipulated variable is whether a stimulus is stereotypic of the kind that it belongs to. (fn) There are large effects of stereotypy on, for example, the strength of associations (if you are asked to

Fn. This is an abbreviation; we’re aware that everything belongs to more than one kind. This desk belongs to the kind `thing in my office’ and to the kind `thing purchased at a discount’ and to the kind `wooden thing’ and so forth. It is, indeed, perfectly possible for a thing to be stereotypic of one of the kinds it belongs to but not of another. An example is given below.

think of an animal, you are much more likely to think `dog’ than `weasel’; it’s not unreasonable to think this is because DOG is a stereotype for ANIMAL and weasels aren’t very similar to dogs). Likewise, there are large effects of stereotypy on reaction times in identification tasks (dogs are more quickly recognized as dogs than weasels are recognized as weasels); on `interjudge reliability’ (we are more likely to agree about whether sparrows are birds than about whether whales are fish); on the age of word acquisition (`dog’ is learned earlier than `weasel’; in fact, stereotypy is a better predictor of how early a word is learned than is the frequency with which it occurs); on the probability that a property of a stimulus will generalize to other stimuli of the same kind (a subject who is told that sparrows have ten-year life spans, and is then asked to guess whether starlings do, is more likely to guess `yes’ than is a subject who is told that starlings have ten-year life spans and is then asked to guess whether sparrows do); etc. These sorts of effects persist when `junk variables’ are controlled. In short, effects of stereotypy on subjects’ behavior are widespread and reliable. That stereotypy affects a wide variety of empirical measures is about as certain as facts of cognitive psychology ever get. The open question is not whether subjects know which stimuli are stereotypic of their kind; it’s whether stereotypes have some of the properties that concepts need to have. <REFERENCES FOR THE ABOVE>



To begin with virtues: stereotypy is a graded notion; things can be more or less stereotypic of a kind. A dog is a more stereotypic ANIMAL than a pig is; a pig is a more stereotypic ANIMAL than a weasel is; and a weasel is a much more stereotypic ANIMAL than a paramecium is; and so on. Indeed, a thing that is pretty stereotypic of one kind that it belongs to can be far less stereotypic of another. A chicken is a reasonably good example of something for dinner; but it’s only a so-so example of a bird. All else equal, this argues for the thesis that concepts are stereotypes, as does the relative ubiquity of stereotypy effects. It is something of an embarrassment for the definitional view that a dog is a better example of ANIMAL than a weasel is; either weasels are in the extension of ANIMAL or they aren’t, and the definitional theory of conceptual content offers no account of how that could be so. (Likewise, mutatis mutandis, for vague concepts, marginal examples and so on.) Because the stereotype theory prefers graded parameters to dichotomies, it has no principled problem with these sorts of facts. (fn)

Fn. On the other hand, there are dichotomous categories, and they do embarrass the identification of concepts with stereotypes. There is no such thing as a number that is more or less even; but subjects will tell you that 2 is a better example of an even number than 8 is. (REFERENCES)

A question in passing: Is the theory that the contents of concepts are stereotypes really a version of TST? Strictly speaking, TST claims that intensions determine extensions. Does the stereotype for a concept determine the set of things that the concept applies to?

That depends, of course, on what `determine’ means. A definition determines an extension in the sense that its clauses are true of all and only the things that the concept applies to. If `bachelor’ means unmarried man, then its extension is the set whose members are all and only the unmarried men. That isn’t, however, how stereotypes work. The extension of a stereotype is the set of things that are `sufficiently similar’ to the stereotype (fn); and it is left open what

Fn. Assuming that stereotypes can properly be said to have extensions. It’s no accident that people who hold that concepts are stereotypes are also prone to hold that the extension of a stereotype is a `fuzzy’ set.

sufficient similarity consists in; indeed, what it consists in may well differ from concept to concept. The way that kings are similar to one another is quite different from the way that oboes or crickets are. Hence it is natural for stereotype theorists to speak of a stereotype as a location in a `multidimensional’ similarity space. (fn)

Fn. There is a variant of stereotype theory according to which the dimensions of the similarity space in which stereotypes are located are such properties as color, shape and the like. But this is just empiricism with graph paper unless there is a serious attempt to say which properties the dimensions can correspond to and which they can’t. It’s a bit depressing how often cognitive science exhausts its energy by chasing its own tail. REFERENCES (Churchland)

But this may be a virtue of the stereotype view. It is an embarrassment for definition theories that there are often both good and bad instances of things that fall under a concept; concepts can be more or less vague, and a thing that is a good example of one concept may be only a so-so



example of another. Chickens are good examples of things served for dinner; but they are at best middling examples of birds. All of that is grist for the stereotype theorist’s mill, since none of it is hard to explain on the assumption that the basic relation between a concept and its instances is similarity, similarity being itself a graded notion. (fn)
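The picture of a stereotype as a location in similarity space, with membership graded by similarity, can be made concrete in a toy model. The following is our own illustration, not anything from the stereotype literature; the two feature dimensions and all the numbers are invented for the purpose.

```python
import math

# Toy model: a stereotype is a point in a feature space; how stereotypic
# a thing is of the kind is graded by its similarity to that point
# (here, a simple inverse-distance measure). Both dimensions and all
# coordinates below are invented for illustration.

def similarity(a, b):
    """Graded similarity: 1.0 at zero distance, falling off with distance."""
    return 1.0 / (1.0 + math.dist(a, b))

# Invented dimensions: (size, furriness), each scaled 0..1.
STEREOTYPE_ANIMAL = (0.5, 0.8)   # something rather dog-like
dog = (0.5, 0.8)
pig = (0.7, 0.2)
paramecium = (0.0, 0.0)

for name, x in [("dog", dog), ("pig", pig), ("paramecium", paramecium)]:
    print(name, round(similarity(STEREOTYPE_ANIMAL, x), 2))
# Dogs come out most stereotypic, paramecia least: gradedness for free.
```

The gradedness the text describes (dog > pig > weasel > paramecium as examples of ANIMAL) falls out of the geometry with no extra machinery, which is exactly the stereotype theorist’s selling point.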

Still, concepts can’t be stereotypes. The argument that they can’t is surprisingly straightforward: If concepts are stereotypes, stereotypes have to compose; if they don’t, the productivity of concepts defies explanation.

But, like definitions, stereotypes don’t compose. (fn) Here’s the classic example: there are stereotypic fish (say, trout), to which trout and bass are both more or less similar. (Flounders aren’t, perhaps, very similar to trout, but perhaps they are adequately similar for the purposes at hand. Certainly trout and bass are more similar to one another than either is to a rock or a tree.) Likewise, there are stereotypic pets; dogs win by miles, with cats a bad second. But the relevant point is that the stereotypic pet fish isn’t a dog or a cat; it’s (maybe) a goldfish. The stereotype structure of PET FISH can’t be computed from the stereotype structures of PET and FISH.
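The pet-fish point can be dramatized with a toy calculation of our own (the three feature dimensions and every number are invented): take one naive proposal for composing stereotypes, averaging the constituent prototypes, and observe that it predicts the wrong prototype for PET FISH.

```python
import math

# Toy illustration of stereotype non-compositionality. Invented feature
# dimensions: (size, furriness, lives_indoors), each scaled 0..1.
# A naive proposal: the composite stereotype is some simple function of
# the constituents' stereotypes -- say, their average.

PET = (0.4, 0.9, 1.0)        # stereotypic pet: roughly, a dog
FISH = (0.3, 0.0, 0.0)       # stereotypic fish: roughly, a trout
GOLDFISH = (0.05, 0.0, 1.0)  # the actual stereotypic pet fish

predicted = tuple((p + f) / 2 for p, f in zip(PET, FISH))
error = math.dist(predicted, GOLDFISH)
print("predicted PET FISH prototype:", predicted)
print("distance from goldfish:", round(error, 2))
# The prediction is mid-sized and half-furry -- nothing like a goldfish.
```

Averaging is only one candidate combination rule, of course; the argument in the text is that no such function of the constituent stereotypes gets the goldfish out, because which pet fish is stereotypic depends on facts about the world (what people actually keep), not on the PET and FISH stereotypes themselves.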

We think this is as close to a knock-down argument that concepts can’t be stereotypes as a reasonable cognitive psychologist could hope to get. (fn) So we’re going to drop the topic.

Fn. It is ungratifying that definitional theories of conceptual content tend to be good at the things that stereotype theories are bad at, and vice versa. (For example, the latter but not the former offers an account of vagueness; the former but not the latter offers a semantic account of analyticity.) We suspect this is because both are chasing a wild goose. More on this presently.

Fn. The cognitive science literature offers many proposed rebuttals REFERENCES, but none of them strikes us as remotely convincing. We’re sticking to our guns.

Concepts as nodes in a network of associations (fn)

Fn. Since associative connections are generally probabilistic (some associations are stronger than others), associationistic accounts of concepts tend to be compatible with stereotype views; indeed, the two are often held together. We’re dubious whether this speaks well of either; but let’s waive the question for now.

Digression: concepts and thoughts

Concepts aren’t the only kinds of mental representations; there are also thoughts. But, though our review of versions of TST has had many things to say about the one, it’s said very little about the other (except that, whatever theory of thoughts you opt for, it must not confuse them with concepts). This is no accident; the tradition in the psychology of cognition ---especially



the Associationist tradition in the psychology of cognition--- has consisted, in considerable part, of missing the concept/thought distinction and suffering the horrendous consequences. Concepts are different from thoughts in all sorts of ways. Before we consider associative accounts of concepts, we want to enumerate some of the differences between concepts and thoughts.

-Thoughts and (complex) concepts both have constituent structures, but thoughts also have `logical form’. Suppose that the concept HEAVY IRON consists of an association between IRON and HEAVY, such that tokenings of the latter regularly cause tokenings of the former. Still, as Kant and Frege both emphasized vehemently, that could not be true of the thought that iron is heavy. In the thought, the property denoted by HEAVY is predicated of the stuff denoted by IRON; accordingly, the thought is true or false depending on whether the stuff in question does in fact have the property in question. (fn)

Fn. `The thought predicates HEAVY of IRON’ and `the thought is true iff iron is heavy’ may well just be two ways of saying the same thing. In any case, to speak of thoughts as having logical form is to invoke denotation (reference) and truth as parameters of their content. And, remember, the content of a thought is supposed to be inherited compositionally from the content of its constituents; so the fact that thoughts have logical form places constraints on the content of concepts. All of that is overlooked if you think of the semantics of mental representations the way Empiricists and Associationists did: as primarily a theory about the combinatorial structure of conceptual content. But it is grist for our mill, as later chapters will explain.

Thoughts are not a species of complex concepts. Because they ignored that, associationists were led to greatly overestimate the generality of their account of mental representation; in particular, the likelihood that their theory of complex ideas might be generalized to provide a theory of mental representations at large.

- Concepts are constituents of thoughts, not the other way around. Thus, for example, the thought that John is remarkably stupid has the concept JOHN, the concept STUPID and the concept REMARKABLY STUPID among its constituents. By contrast, concepts either have no constituents (if they are primitive) or, if they are complex, their constituents are other concepts. (The constituents of the concept REMARKABLY STUPID include the concepts REMARKABLE and STUPID.) (fn)

(fn) There are many variations. For example, if you like to think of concepts as definitions, you may want to say that a concept can have constituents at some levels of analysis (in particular, at the semantic level, where meanings are displayed explicitly) but may lack them at other levels of analysis. Perhaps the definition of KILL is CAUSE TO DIE. Then, presumably, KILL has CAUSE as a constituent at the semantic level of representation, but not at the level of representation where thoughts are encoded by utterances. We’ll largely ignore such options in what follows. But they are interesting to think about and much discussed in the linguistics literature.



-Because they involve predication (typically of properties to things), thoughts often (maybe always) have `truth values’ (they are either true or false); but concepts don’t. The thought that John is remarkably stupid is true, false, or somewhere in between. (fn) But the concept REMARKABLY STUPID can’t be any of those. (fn)

fn On some views, thoughts can exhibit `truth value gaps’. Does the thought that John is remarkably stupid have a truth value if there is no such person?

Fn. Of course, an utterance of the form of words `remarkably stupid’ can have a truth value if there is an implicit subject that is evident from context (if, for example, it is indicated by an ostensive gesture of the speaker’s). But what such an utterance expresses is not the concept REMARKABLY STUPID but (elliptically) the thought: [that person is] remarkably stupid.

Having done our best to rub in the concept/thought distinction, we return to our main business, which is to survey some versions of TST that offer an account of the content of concepts.

Concepts as nodes in a network of associations

It is possible to represent a `conceptual repertoire’--- that is, the totality of concepts available to a certain mind at a certain time--- as a graph consisting of finitely many labeled nodes with paths connecting some of them to others. See Fig. 1-1. (fn) On the intended interpretation,

Fn: `I thought you said that a mind at a given time has access to indefinitely many concepts. So how could a finite graph represent an indefinitely large conceptual repertoire?’

Well, it depends on how you read `available’. For the purpose of discussing associationism, it’s easiest to take `conceptual repertoire available to the mind at t’ to mean something like: the set of concepts that are constituents of thoughts that the mind has actually tokened up to and including t. The question of how associationists might represent the indefinitely large set of concepts that would be constituents of the thoughts that the mind could think, given its conceptual repertoire, is generally not treated in the canonical associationist literature. So, for example, imagine a mind that has the concept RED and the concept TRIANGLE but has never happened to think about red triangles; it could think RED TRIANGLE but, for some reason, happens never to have done so. Associationists don’t generally consider such cases; that’s why they haven’t needed to face the fact that concepts are productive.

the label on a node tells you which concept it stands for, and the length of the path between nodes varies inversely with the strength of the association between the concepts they express; so relatively long paths correspond to relatively weak associations. Fig. 1-1 is an (entirely hypothetical) representation of what might be (a very small part of) a graph of the structure of the associative relations among the concepts in someone’s associative space. (fn)

Fig. 1-1 about here

Fn: It helps the exposition to assume (what is quite possibly not true) that the relation `associatively connected to' is transitive; so nodes may be connected by paths that reach them
via other nodes. (Cf. the many discussions of `mediated association' in the psychological literature, REFERENCES, the upshot of which seems to be that, if there are such associations, they are relatively rare and relatively weak.)
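The associative-network picture just described can be mocked up as a small data structure. The sketch below is entirely our own illustration (the labels, the strengths, and the `AssociativeNetwork' class are invented for the purpose); it shows only the bookkeeping: labeled nodes, symmetric associative links, and path lengths that vary inversely with associative strength.

```python
class AssociativeNetwork:
    def __init__(self):
        # node label -> {neighbor label: associative strength in (0, 1]}
        self.edges = {}

    def associate(self, a, b, strength):
        # On the usual picture, association is symmetric.
        self.edges.setdefault(a, {})[b] = strength
        self.edges.setdefault(b, {})[a] = strength

    def path_length(self, a, b):
        # Path length varies inversely with strength: strong
        # associations correspond to short paths.
        return 1.0 / self.edges[a][b]

net = AssociativeNetwork()
net.associate("CAT", "DOG", 0.9)        # a strong (hence short) link
net.associate("CAT", "WHISKERS", 0.5)   # a weaker (hence longer) link
net.associate("DOG", "BONE", 0.8)

# The CAT--DOG path is shorter than the CAT--WHISKERS path.
assert net.path_length("CAT", "DOG") < net.path_length("CAT", "WHISKERS")
```

Nothing in this sketch touches the question at issue in the text, namely what makes a node's label its content; the code simply takes the labels for granted.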

`But surely, something has gone wrong? The theory that we're considering is that the intension of a concept supervenes on its associations. But if the totality of a node's connections doesn't determine a node's label, that implies that its connectivity and its intension must be independent parameters of a concept; which is to say that the label of a node does not supervene on its connectivity. That contradicts (what we're taking to be) the associationist's account of conceptual content.'

Right; here's another way to make much the same point: The associationist we're imagining says that the content of a node supervenes on its connections. But its connections to what? He can't mean its connections to labeled nodes. That would beg the very question his theory claims to answer: `What determines what content (i.e. what label) a node has?' Rather, what he must hold is that corresponding nodes in isomorphic graphs have the same content, whatever the labels of the nodes they're connected to may be. That would avoid the threatened circularity, but it surely can't be right. It is perfectly possible, for example, that the concept PUPPY has the same location on one graph as the concept KITTEN does on another, but that, whereas one of the nodes that KITTEN is connected to has the label FELINE, the corresponding node on the PUPPY graph has the label CANINE. If that's the situation, then surely the KITTEN node and the PUPPY node differ in content. So the associationist's suggestion must be that two nodes have the same content if they are connected to the same labeled nodes. But if that is the associationist's story, then his account of conceptual content is circular; in effect, his characterization of the conditions under which two nodes have the same labels requires that the nodes they are connected to have the same labels. And you aren't allowed to specify the conditions for being such-and-such in terms of the conditions for being such-and-such. Quine was right when he warned, in his iconic article `Two Dogmas of Empiricism,' that theories of meaning tend to run in circles; and that, in and of itself, makes the notion of meaning seem suspect.

For all we know, it may be true that the node labeled A and the node labeled B must have the same content if they have the same associative connections to correspondingly labeled nodes in isomorphic graphs. But even if it is true, it would be of no use when the project is to explain what identity/difference of content is; that would be cheating, because the notion of identity of labels just is the notion of identity of content. A dog can't make a living by chasing its own tail. And since, as far as we know, associationists have no other cards up their sleeves, they have no notion of conceptual content, advertisements to the contrary notwithstanding. It has been our practice, in each of these discussions of accounts of conceptual content, to start by discussing their virtues. But we don't think that associationistic accounts of content have any, except that, since associative relations are causal by definition, associationism has a head start over other kinds of semantics insofar as they aspire to naturalizability.

Short form: it's maybe all right to treat a concept's content as its position in a space of connected nodes. But then, you mustn't identify the nodes to which a concept is connected by
reference to their labels. So there's a dilemma: if `the connectivity of a node' means its connectivity to unlabeled nodes, then identity of connectivity isn't sufficient for identity of content. But if `the connectivity of a node' means its connectivity to labeled nodes, then the theory is circular. We know of no associationist account of conceptual content that is free of this dilemma.
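The dilemma can be made concrete with a toy computation. The mini-graphs below are our own invention, modeled on the KITTEN/PUPPY example: with labels ignored, the two graphs are structurally indistinguishable, so connectivity alone can't distinguish the contents; to tell them apart, one must consult the labels of the connected nodes, which is to explain content in terms of content.

```python
# Two invented mini "conceptual spaces": adjacency as sets of neighbors.
kitten_graph = {"KITTEN": {"FELINE", "SMALL"},
                "FELINE": {"KITTEN"},
                "SMALL":  {"KITTEN"}}
puppy_graph  = {"PUPPY":  {"CANINE", "SMALL"},
                "CANINE": {"PUPPY"},
                "SMALL":  {"PUPPY"}}

def unlabeled_shape(graph):
    # Strip the labels: all that is left of "connectivity" is the
    # (sorted) list of node degrees.
    return sorted(len(neighbors) for neighbors in graph.values())

# Horn 1: with labels ignored, the graphs are indistinguishable,
# though KITTEN and PUPPY surely differ in content.
assert unlabeled_shape(kitten_graph) == unlabeled_shape(puppy_graph)

# Horn 2: to distinguish them, one must consult the labels of the
# connected nodes (FELINE vs. CANINE), i.e. their contents.
assert kitten_graph["KITTEN"] != puppy_graph["PUPPY"]
```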

Concepts as inferential roles.

The idea that the content of a concept is (or supervenes on) its inferential connections is very widespread in current philosophical writing. It started with Sellars' REFERENCE observation that the content of the logical constants can be so specified; AND, for example, is the concept whose inferential role is specified by the `Introduction Rule' P, Q ⊢ P&Q and the `Elimination Rules' P&Q ⊢ P; P&Q ⊢ Q. fn
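The introduction and elimination rules for AND can be transcribed mechanically. The following sketch is our own illustration (propositions are modeled as strings, conjunctions as tagged tuples; none of this is anyone's official formalism), and it does no more than restate the rules above as executable operations.

```python
# A toy transcription of AND's inferential role. Propositions are
# strings; a conjunction P&Q is the tagged tuple ("and", p, q).

def and_intro(p, q):
    # Introduction Rule: from P and Q, infer P&Q.
    return ("and", p, q)

def and_elim_left(conj):
    # Elimination Rule: from P&Q, infer P.
    assert conj[0] == "and"
    return conj[1]

def and_elim_right(conj):
    # Elimination Rule: from P&Q, infer Q.
    assert conj[0] == "and"
    return conj[2]

pq = and_intro("P", "Q")
assert and_elim_left(pq) == "P" and and_elim_right(pq) == "Q"
```

On the view under discussion, these rules are not merely facts about AND; they are (or they fix) the content of AND.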

Since it's a major commitment of STS that intensions determine extensions (their reference), and since the logical concepts don't have extensions (`the set whose members are all and only ANDs' is not well-defined), it's a puzzle just how this treatment of AND could serve as a model for, say, the content of the concept TREE. Not surprisingly, people who are into IR theories of content are a little inclined to condescend to the notion of reference (and, often enough, to the notion of truth as well). REFERENCES (Brandom). But, for the moment, let's put that aside.

So, then, there's a certain similarity between IRS and associationism, in that both understand content in terms of connectivity. But whereas associations are causal relations, inferences are semantic ones. Roughly, inferences are operations on propositions; and what they are supposed to preserve is truth (if P&Q is true, then P and Q are true). But since truth is itself a semantic notion, it takes some dodging and wiggling for IRS to avoid a circular theory of content. It's an interesting and possibly edifying story how proponents of IRS have sought to avoid such circles. We'll discuss that in Part 2.

The idea of constructing a theory of content that is based on inference rather than association is proof against at least some of the troubles that have plagued the latter. For example, it's a problem that, at least on the face of it, associating doesn't seem much like reasoning; and reasoning is one of the processes that cognitive scientists most want to understand. It's also a problem that association isn't productive, given the usual assumption that productivity is grounded in compositionality; i.e. that what makes a system of symbols productive is the availability of procedures by which the content of relatively complex symbols is constructed from the content of their relatively simple constituents. To a first approximation, English is productive because, if you know what `red' means, and you know what `triangle' means, then you can figure out what `red triangle' means. That, basically, is how a creature with a finite brain can have indefinitely many concepts. But it's not likely that a similar story can be told about associations. Maybe your strongest associate to `cat' is `dog'; maybe you generally associate `dead' to `bone'; it wouldn't begin to follow that you associate `dead cat' to `dog bone'; associative relations aren't productive. fn But, prima facie, inferential relations might well be. If you know the inference rules that govern `and', you can deduce the inferential roles of P&Q and P&Q&R and P&Q&R&S ... etc.
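That last point can be made vivid computationally. Using a toy encoding of conjunctions as tagged tuples (the encoding is ours, for illustration only), one short recursive rule suffices to extract the conjuncts of P&Q, P&Q&R, P&Q&R&S, and so on without bound; that is just the sort of thing productivity requires.

```python
# A conjunction P&Q is the tagged tuple ("and", p, q); atomic
# propositions are strings. One recursive rule recovers every
# conjunct of an arbitrarily deep conjunction.

def conjuncts(prop):
    # Recursively apply &-elimination to both sides.
    if isinstance(prop, tuple) and prop[0] == "and":
        return conjuncts(prop[1]) + conjuncts(prop[2])
    return [prop]

# P&Q&R&S, built left-associatively:
deep = ("and", ("and", ("and", "P", "Q"), "R"), "S")
assert conjuncts(deep) == ["P", "Q", "R", "S"]
```

Nothing analogous is available to the associationist: knowing the associates of `dead' and of `cat' yields no rule for computing the associates of `dead cat'.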

Fn: Associationists sometimes try to get out of this by postulating a process of `generalization' from previously learned associations to new ones that are `similar'. But `similar' had better not mean `similar in content', because, if it does, we once again have a circle in the theory. And we can't think of any reading of `similar' that might make association productive except `similar in content'. If we're right about that, then appeals to `generalization of association' won't help at all to make associations productive.

So productivity isn't, in and of itself, a problem for IRS. The content of RED TRIANGLE is constructed compositionally from the contents of RED and TRIANGLE, just as productivity requires; and the inferences that are valid with X IS A RED TRIANGLE as the premise are exactly the ones that are valid with X IS RED and X IS A TRIANGLE as premises.

Also, IRS may be able to cope with a thing that gives other kinds of semantic theories nightmares. Consider concepts that have empty extensions: these include AND (see above) but also, presumably, SANTA CLAUS, THE SECOND EVEN PRIME AFTER ONE, GOD, GHOST, SQUARE CIRCLE, and so on. But most of them appear to have intensions, at least in the sense that there is a story about each, and people who have the concepts know the corresponding stories. Maybe, then, the contents of `empty' concepts can be identified with their inferential roles in the corresponding stories. (As you won't be surprised to hear by now, there are problems with this proposal. We'll discuss some of them in Part 2.)

Notice that the definition and stereotype theories of concepts are themselves instances of IRS, and one can imagine an eclectic theorist who holds more than one of them. fn For example, everything would be fine with IRS if it were possible to believe that lots of concepts/words have

Fn: But to make the stereotype theory consistent with IRS, you must allow that some contingent inferences are concept constitutive. That is, in fact, the view of many people who hold IRS; see (e.g., Brandom; OTHER REFERENCES).

definitions. For, if C is definable and D is its definition, then all and only the inferences that are valid with C in the premises are also valid with D in the premises. But, in fact, as we mentioned earlier, most words/concepts don't have definitions; so the question remains: how can a theory that works for AND be extended to, as it might be, `tree'? Various philosophers have offered various suggestions (e.g. that to have a concept is to know how to tell (or maybe, how to tell in normal circumstances) whether a thing satisfies the concept). But, really, that's preposterous. I'm pretty sure I have the concept `tree'; but there are lots of situations in which I couldn't tell whether a certain object is one. Imagine a kind of bird which, in the winter, disguises itself as a tree. Imagine that its disguise is very, very good; good enough, in fact, to fool anyone who isn't an ornithologist. Surely, there are lots of people who couldn't tell such a bird from a tree; but whether they have the concept TREE doesn't hang on such epistemological questions. And it isn't true that if the last ornithologist died, the concept TREE would ipso facto vanish.

Finally, there is a sort of natural liaison between the idea that concepts are inferential roles and the idea that cognitive mental processes are some sort of computations. Computations are plausibly operations that are defined over representations and which preserve some favored
property; and representation tokens (unlike propositions) have causal powers (see above). So it might be a good idea to take (at least some) mental processes (including, in particular, thinking) as defined over sentence-like mental representations. And that, in turn, permits treating thinking as causal chains of mental representations. To a first approximation, the links in such chains are inferences; and these, in turn, have the virtue of being (more or less) truth-preserving when all goes well.

That is the general shape of the theory of mind that we endorse. We find it attractive since mental representations can exhibit properties of propositions to which computations require access; as, for example, their constituent structure and their logical form. The idea would be that the computations that minds perform are sensitive to the content of mental representations because, on the one hand, content is defined in terms of inferential role and, on the other hand, inferential roles are determined by the constituent structure and logical form of the representations that have them.

But none of that implies that IRS is viable as semantics. Indeed, we’re pretty sure that it’s not.

Over the last fifty years or so, anglophone philosophers have expended much effort and many tears on two problems that threaten inferential theories of content. No appreciable progress has been made. We think the problems facing IRS are unsolvable because, if it is understood as a theory of conceptual content, IRS isn't true.

Holism and Analyticity

If you want to hold that content is constituted by inferential role, you will have to say which inferences comprise which roles. You have, we think, only two options: you can say that every inference is constitutive of some inferential role or other; or you can say that only some inferences are. In the latter case, you are obliged to say something about which inferences are content constitutive (and, if possible, why those inferences are and the others are not). We are convinced that both options invite catastrophe; indeed, that both demand it.

The first option: Holism

Suppose that last Tuesday you saw a butterfly between your house and the house next door. Doing so adds a cluster (in fact, quite a large cluster) of new beliefs to the ones that you had previously: I saw a butterfly; I saw a butterfly between my house and the one next door; there was a butterfly visible from my house yesterday; there are butterflies around here; the total of my lifetime butterfly sightings has increased by one; there was something between my house and the one next door yesterday that probably wasn't there last January; if I hadn't been home yesterday, I likely would not have seen a butterfly ... and so forth, and so on and on. And each belief adds a corresponding new rule of inference to those you were previously wont to apply in the course of your reasoning: Tuesday was the fourth of the month ⊢ I saw an insect on the fourth of the month; I saw a butterfly on the fourth of the month ⊢ there was a butterfly visible on the fourth of the month; butterflies are insects ⊢ there was an insect visible on the fourth of
the month ... and so on. (Fn) And, according to the present suggestion, each of these new rules of inference alters the content of some (maybe all) of your other beliefs, depending on how far the effects of changing the inferential role of a given concept are supposed to spread to other concepts with which it is (directly or indirectly) inferentially connected.

Crazy consequences follow: your beliefs change minute by minute (indeed, instant by instant), so the content of your concepts changes instant by instant too. Though you and your spouse both used to agree on the proposition that butterflies are often aesthetically pleasing, you don't any more. Indeed, neither you nor your spouse can so much as think the proposition about which you once agreed, since neither of you now has the concepts that were required to do so. That sort of thing can be very hard on relationships.

Or suppose you have come to believe that butterflies are insects. Accordingly, you come to believe that if you saw a butterfly, then you saw an insect, on the grounds that ((P -> Q) and P) ⊢ Q. Perhaps, even, you are pleased with yourself for having drawn that inference; it speaks well for the rationality of your mental processes. But not so, according to the present account of conceptual content: by adding Q to the inventory of your beliefs, you have changed what it (and P) meant in the premise. You committed a fallacy of ambiguity and have no cause for self-congratulation.

The natural thing to say here is that, though the content of your concepts (hence of your beliefs) changes instant by instant, it doesn't usually change very much. But how much is very much? And in what direction does it change? If, the day before yesterday, you believed that the Sun is a considerable distance from here, and yesterday you came to believe that you saw a butterfly, what belief does your belief about the distance of the Sun change into? This is a morass from whose boundaries no traveler returns. We strongly recommend that you stay out of it.

The second option: analyticity.

The truth values of propositions are connected in all sorts of ways. For example, if P and P -> Q are true, so too is Q. If this is a butterfly is true, so too is this is an insect. If my house is near your house is true, so too is your house is near my house. (On the other hand, even if my house is near your house is true, and your house is near John's house is true, it may not also be true that my house is near John's house.) If this glass is full of H2O is true, so too is this glass is full of water. If all the observed swans have been white, none of the observed swans have not been white, and very many swans have been observed are all true, then, all else equal, all swans are white is well confirmed. If swans are white is a law of nature is true, so too are this swan is white and, all else equal, if this had been a swan, it would have been white. And so on, world without end.

For all sorts of reasons, it is often of great interest to us to know which propositions have truth values that are interdependent; sometimes because our well-being depends on it, but often enough just because we're curious. Accordingly, one of the things our cognitive processes allow us to do is trace such connections; having grasped one bit of the web, we can follow it to
other ones that it's connected to. If we know the truth value of the one that we've grasped (for example, because we've seen, or heard, or been told by a reliable source whether it's true), then that may fix the truth value of some proposition that it's connected to and whose truth value we care about. We can, of course, never know about all of the connections there are; nor would we conceivably wish to. But, from time to time, we can know about some of the connections that our well-being depends on, or that our curiosity requires us to look into. That's what logic and science and mathematics and history are for. That's what thinking is for.

But still, if we can't, even in principle, know all the connections there are, maybe we can know all of the kinds of connections there are? Or all of the important kinds? In effect, the Empiricist tradition made two epistemological suggestions about the structure of the web, both of which have seemed plausible in their time: that propositions whose truth values are accessible to perception have a special role to play in finding one's way through the web; and that all propositions have their truth values either in virtue of their content alone or in virtue of their content together with how the world is. The latter thesis is of special interest in the present context because, if there are propositions that are true/false in virtue of their content alone, then there are at least some fixed points in the tangle of connections of which the web is constituted. If (or to the extent that) the content of `bachelor' fixes the truth value of John is a bachelor ⊢ John is unmarried, then we can rely on that inference being sound, whichever of our other beliefs may alter. So if there are propositions that are true in virtue of their meaning, and if the content of a proposition is its inferential role, then holism is false: I have some beliefs whose content doesn't change when I add or subtract other beliefs. Call the beliefs whose content doesn't depend on what other beliefs I have `analytic'. Then the present point is that, if there are analytic beliefs, then, at a minimum, holism is false of them.

But convenient as a substantive notion of analyticity might be, there are a number of reasons why philosophers have ceased to believe that such a notion can be sustained. Two main ones are these. First, analytic beliefs are supposed to be ones that can't be revised without changing the content of their constituents; but there don't seem to be any such beliefs. I can (reasonably) revise any of my beliefs under sufficient pressure from data and background theories. The trouble is that belief change is conservative: if enough rests on a belief of mine, and if there is some replacement for it waiting in the wings, then any belief may be rationally abandoned, even the ones that are allegedly analytic. The second worry is that you clearly can't use the notion of analyticity to explicate meaning (or anything else that's central in semantics), on pain of the usual problem: since analyticity is itself a semantic notion par excellence, you end up in a circle. Both these lines of argument were spelled out in Quine's iconic paper `Two Dogmas of Empiricism'; to our knowledge, neither has been rebutted.

The moral of this chapter is that all the available accounts of conceptual content (or, anyhow, all the ones we've heard of) seem to be pretty clearly not viable; those who cling to them do so largely in despair. At a minimum, the arguments against the available theories of content are sufficiently impressive that it would be unwise to take meanings, intensions and the like for granted in any theory that you care about, cognitive science included. So, what now?

* * *

Why, after all these years, have we still not caught the Loch Ness Monster? Of course, it might be that we've been looking in the wrong places; we're told that Loch Ness is very large, very deep (and very wet); and we believe it. But as the failures pile up, an alternative explanation suggests itself: the reason we haven't caught the LNM is that there is no such beast. Likewise, we think, in semantics: nobody has found anything that can bear the weight that meaning has been supposed to bear ---it determines extensions, it is preserved under translation and paraphrase, it is transmitted in successful communication, it is what synonymy is the identity of, it supports a semantic notion of necessity, it supports philosophically interesting notions of analyticity and conceptual analysis, it is psychologically real, it distinguishes coextensive concepts (including empty ones), it is compositional, it is systematic and productive, it isn't occult, and, even if it doesn't meet quite all of those criteria, it meets a substantial number--- and the reason meaning has proved so elusive is that there is no such beast as that either. We think that, like the Loch Ness Monster, meaning is a myth.

The rest of the book is about how it may be possible to construct a semantics for mental representations that is sufficient for the purposes of cognitive science, and is compatible with reasonable naturalistic constraints on empirical explanations, but which dispenses with the notion of meaning altogether: it recognizes reference as the only relevant factor of content. Accordingly, there are plenty of extensions, but there are no intensions, and there are no senses. We don't claim to know for sure that any such position is viable; and even assuming that there is one, we don't claim to know how to get there. But maybe we can point in the right general direction.
