
The epistemological import of morphological content

Jack C. Lyons

Philosophical Studies, DOI 10.1007/s11098-013-0240-5
© Springer Science+Business Media Dordrecht 2013

Abstract  Morphological content (MC) is content that is implicit in the standing structure of the cognitive system. Henderson and Horgan claim that MC plays a distinctive epistemological role unrecognized by traditional epistemic theories. I consider the possibilities that MC plays this role either in central cognition or in peripheral modules. I argue that the peripheral MC does not play an interesting epistemological role and that the central MC is already recognized by traditional theories.

Keywords  Epistemology · Evidence · Connectionism · Reliabilism · Unconscious

Henderson and Horgan's Epistemological Spectrum is an ambitious and innovative effort to develop a naturalized epistemology. H&H are explicit and self-reflective about their methodology, but this is not one of those abstract meta-level attempts to argue that empirical data are relevant to epistemology; they do a lot of first-order epistemological theorizing, across a wide range of the titular spectrum, from low-grade a priori reasoning, which has a small empirical component, to richly empirical science. The result is a novel synthesis of some traditional epistemological views, where the bulk of the novelty results from an incorporation of what they call "morphological content", which is information "implicit in the standing structure" of the system, rather than explicit in the occurrent, tokened representations of the system.

J. C. Lyons, Department of Philosophy, MAIN 318, University of Arkansas, Fayetteville, AR 72701, USA. e-mail: [email protected]

Although there is much of value and interest in this book, I want to focus on the nature and role of morphological content (henceforth, MC). H&H argue that

(1) MC is required to solve the classic "frame problem" of AI; therefore, the human cognitive system must contain a good deal of MC;

(2) an appeal to at least some of this MC is needed to distinguish propositional justification (an agent's having justification for some proposition, whether she believes it or not) from doxastic justification (an actual belief's being justified); and

(3) the result is an "iceberg epistemology" that combines elements of coherentism and foundationalism but offers a theory that is importantly different from traditional versions of either.

Let me cover these in a bit more detail.

(1) The frame problem from AI is the problem of knowing what needs to be reasoned about and what doesn't. In a seminal work, Fodor (1983) argued that peripheral modules (perceptual and motor systems) solve this problem by being informationally encapsulated, and thus architecturally constrained to ignore vast amounts of potentially relevant data, taking into account only a limited set of current (usually sensory) inputs and some hardwired rules, or constraints, or assumptions about the narrow domain to which the module applies. Central systems of reasoning and belief-fixation, on the other hand, are "Quinean" and "isotropic": confirmation is in the end holistic, and any piece of information is potentially relevant (given the right background beliefs) to any particular belief. Fodor claims that this leaves central systems subject to the frame problem and consequently despairs of our being able to offer a computationalist theory of the central systems. H&H argue that MC solves the frame problem for central systems. Determining whether a given belief fits with the background system would be computationally intractable, perhaps, if the background system existed only as discrete, explicit representations. But if this information is embodied implicitly and morphologically, then it can automatically constrain belief-fixation, thus solving the frame problem. When information is embodied morphologically, it is typically all bundled together, in a way I will discuss further below, so it really is the whole background theory that is involved in belief-fixation.

(2) A nice bonus of this is that if the background system is morphologically encoded, then the whole background system can be causally implicated in any episode of belief-fixation. This allows us to bridge the gap between propositional and doxastic justification. If belief-fixation were causally local, involving only explicit contents, then there would be a great deal of leftover information in the system, information that could serve as reasons for a given belief, but doesn't serve as the agent's actual reasons for the belief, because it isn't causally implicated in that belief; the belief thus isn't in any sense based on these reasons. Typical forms of coherentism and classical foundationalism implicate a vast number of beliefs in the propositional justification of a typical target belief. But this leaves these theories without a credible account of doxastic justification, since it is implausible to hold that all these justifier beliefs are causally relevant to the target belief. Unless, however, these beliefs are encoded as MC; in that case, they can be causally relevant in a clear and straightforward way.

(3) Finally, this move allows H&H to develop a nontraditional coherentist view with a quasi-foundationalist element as well. Traditional foundationalism and coherentism, they claim, restrict the epistemically relevant states to the tip of the iceberg: explicit representations, including occurrent and dispositional beliefs—where the latter are construed as dispositions to token the relevant occurrent beliefs. H&H claim that foundationalism gets the explicit part of the story approximately correct, but the bulk of the iceberg is MC, which is ignored by traditional epistemology. H&H proceed to endorse a nontraditional coherentism, one that understands justification in terms of coherence with the morphological background system, not just the explicit background. Together with the previous point, this brand of coherentism can understand doxastic justification as the causal dependence of a belief on that which propositionally justifies it.

In what follows, I will focus on MC and its epistemic significance. First, I want to say a bit more about MC, distinguishing two very different kinds of inexplicit information H&H might have in mind. One kind typifies the peripheral modules, and the other kind typifies the central systems. A problem then arises: given my opening reconstruction, and given the epistemic irrelevance of peripheral MC (to be argued below), the epistemically significant MC has to be central MC. But central MC doesn't seem able to fit the bill. Because it's the wrong one of the two kinds of inexplicit content, it (a) doesn't offer a new alternative to the traditional view, and (b) is not able to play the right causal role to underwrite doxastic justification in the way they hope. I close by suggesting that unconscious but in-principle introspectable contents can play the epistemic role H&H reserve for MC, but that this role doesn't depend on those contents being morphological.

1 Two kinds of MC

Cognitive scientists often distinguish between explicit and inexplicit encoding of information. One way to illustrate the difference is to compare two ways of storing long-term memory about general facts: semantic networks versus distributed connectionist networks.

In a semantic network (Collins and Quillian 1969), information is explicitly represented.

[Figure not reproduced: a semantic network in which a "Clyde" node is linked by an isa-arc to an "elephant" node, which carries the property "has a trunk".]

This particular example explicitly encodes the information that Clyde is an elephant and that elephants have trunks, but not the information that Clyde has a trunk. Redundancy and clutter are avoided by allowing the latter information to be recovered if and when necessary. And it is quite easily recovered, as individuals (defeasibly) inherit the properties of their subordinating kinds unless specified otherwise. On a given probe of the network regarding the question whether Clyde has a trunk, a specific and determinate subset of the network will be activated, thus making explicit what is implicit in the at-rest network: the information that Clyde has a trunk.
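To make the retrieval mechanism concrete, here is a minimal sketch of a semantic network with defeasible property inheritance (my own illustration, not H&H's or Collins and Quillian's code; the node names and structure are invented):

```python
# A toy semantic network with defeasible ("unless specified otherwise")
# property inheritance, in the spirit of Collins and Quillian (1969).

NETWORK = {
    # node: (superordinate kind, locally stored properties)
    "animal":   (None,       {"breathes": True}),
    "elephant": ("animal",   {"has_trunk": True, "color": "grey"}),
    "clyde":    ("elephant", {}),                    # inherits everything
    "tusko":    ("elephant", {"has_trunk": False}),  # local exception wins
}

def probe(node, prop):
    """Activate the path from `node` up the isa-hierarchy until the
    property is found, making explicit what the at-rest network only
    implicitly encodes (e.g., that Clyde has a trunk)."""
    while node is not None:
        kind, props = NETWORK[node]
        if prop in props:          # explicitly stored at this node: stop
            return props[prop]
        node = kind                # otherwise climb to the superordinate
    return None                    # not encoded anywhere

print(probe("clyde", "has_trunk"))   # True  -- recovered, never stored
print(probe("tusko", "has_trunk"))   # False -- exception overrides kind
```

Nothing in NETWORK stores "Clyde has a trunk"; the probe makes it explicit on demand, which is the sense in which the information is implicit in the at-rest network.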

Compare this with a typical connectionist network that uses distributed representations:

[Figure not reproduced: a feedforward network with a layer of input units, a layer of hidden units, and a layer of output units, connected by weighted links.]

In such a network, we might let input units represent features like 'brown fur', 'grey skin', 'barks', 'swims', 'has a trunk', 'can fly', etc. and let the output layer represent various kinds of animals, with one unit lighting up to represent 'dog', another for 'fish', another for 'elephant', and so on. The representations—strictly speaking—in the network are limited to the input and output units (and perhaps the hidden units, though this will be an empirical question to be discovered after the network is trained up); the units of the input layer, for example, constitute a finite and rudimentary vocabulary, with a simple conjunctive syntax that generates complex representations (layers) out of concatenations of primitives (units), where the meaning of the whole is a function of the meanings of the parts. Of course, there is further information embodied in the weights. In a properly wired network, the connection weights will encode enduring knowledge about animals—e.g., that dogs bark, fish swim, elephants have trunks, etc.—without ever explicitly representing this information. The information that elephants have trunks is distributed throughout the weights and is inseparable from the information that dogs bark or that cats have fur, etc. Because all the knowledge is jumbled together, on any use of the network, all of it is equally activated, even the parts that are intuitively irrelevant to the task at hand (e.g., information about cats is activated in the course of reasoning about elephants).
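As a rough sketch of the contrast (again my own illustration; the feature and animal vocabulary is the one imagined above, and the delta-rule training is a standard choice rather than anything H&H specify), all of such a network's "knowledge" ends up in a single weight matrix:

```python
import numpy as np

# A tiny distributed network over the feature and animal vocabulary
# imagined above. All of its "knowledge" lives in one weight matrix W.
FEATURES = ["brown_fur", "grey_skin", "barks", "swims", "has_trunk", "can_fly"]
ANIMALS = ["dog", "fish", "elephant"]

X = np.array([[1, 0, 1, 0, 0, 0],    # dog: brown fur, barks
              [0, 0, 0, 1, 0, 0],    # fish: swims
              [0, 1, 0, 0, 1, 0]])   # elephant: grey skin, has a trunk
Y = np.eye(3)                        # one output unit per animal kind

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (6, 3))

# Delta-rule training. Nothing like a stored sentence "elephants have
# trunks" ever appears; the fact ends up smeared across all of W.
for _ in range(200):
    W += 0.2 * X.T @ (Y - X @ W)

probe = np.array([0, 1, 0, 0, 1, 0])          # grey skin + trunk
print(ANIMALS[int(np.argmax(probe @ W))])     # -> elephant
```

Every probe multiplies through the entire weight matrix, which is the cash value of the claim that all of the stored knowledge is activated on every use.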

So while semantic networks prominently feature explicit content (or representational content), most of the heavy lifting in connectionist networks is done by what we can call implicit content. This is clearly a kind of MC. But there is a third category, too, that might involve a different sort of MC. Dennett has an example regarding a poorly written chess program that "thinks it should get the queen out early." The program, we will assume, consists entirely of a set of explicit rules, none of which says to get the queen out early, but the combination of the rules consistently results in the program's bringing the queen into play early on. Unlike the semantic or connectionist network, there is no determinate and causally active part of the system that realizes or embodies this information; rather, it is an entirely emergent-level phenomenon. No part of the system encodes a goal of getting the queen out early, either explicitly or implicitly, but because of what really is encoded, the system acts like one that represented that goal. Let's call this virtual content.
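A minimal toy version of Dennett's point (my own invention; the moves, features, and scores are made up for illustration, and this is not Dennett's actual program): every rule is explicit, none mentions the queen, and yet the queen reliably comes out early.

```python
# Toy "chess player": every rule is explicit, and none of them says
# anything about the queen -- yet the queen reliably comes out early,
# because she scores highest on mobility and attack. The "goal" of
# getting the queen out is virtual content: encoded nowhere, merely
# emergent from what is encoded.

# Candidate opening moves, described by the features the rules care
# about. (The numbers are invented for illustration.)
MOVES = {
    "Qh5": {"mobility_gain": 17, "squares_attacked": 6},
    "Nf3": {"mobility_gain": 5,  "squares_attacked": 5},
    "e4":  {"mobility_gain": 2,  "squares_attacked": 2},
}

RULES = [  # explicit rules; neither mentions the queen
    lambda m: 2 * m["mobility_gain"],      # prefer mobile pieces
    lambda m: 3 * m["squares_attacked"],   # prefer aggressive moves
]

def choose(moves):
    return max(moves, key=lambda name: sum(r(moves[name]) for r in RULES))

print(choose(MOVES))   # -> 'Qh5': the queen comes out early
```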

One important difference between implicit MC on the one hand, and explicit and virtual content on the other, is that implicit MC is implicit because it is distributed throughout the weight matrix. There isn't a part that encodes the knowledge that elephants are grey and another that encodes the knowledge that cats have fur. All this knowledge is spread out through the whole system and is inextricably entangled and therefore difficult to modify. This is both bad and good. On the down side, it makes learning a slow, arduous process of repetition and incremental change; if the system were to quickly change all the weights to incorporate some new information about elephants, it would likely forget what it previously knew about cats. On the up side, it makes forgetting difficult as well and makes the system nicely resistant to noise and damage (connectionists call this "graceful degradation").
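The trade-off can be illustrated by continuing the earlier sketch with overlapping features, so that the knowledge really is entangled (again an invented example, not H&H's): quickly absorbing a new elephant fact, with no rehearsal of the old patterns, partially overwrites what the network knew about cats.

```python
import numpy as np

FEATURES = ["four_legs", "tail", "whiskers", "trunk"]
X = np.array([[1., 1., 1., 0.],    # cat
              [1., 1., 0., 1.]])   # elephant (shares legs and tail)
Y = np.eye(2)                      # output units: cat, elephant

# Slow, interleaved learning: both facts acquired without mutual damage.
W = np.zeros((4, 2))
for _ in range(500):
    W += 0.1 * X.T @ (Y - X @ W)
print("cat before:", np.round(X[0] @ W, 2))   # ~[1. 0.]: a confident cat

# Quickly absorb one new elephant fact ("elephants have whiskers too"),
# with no rehearsal of the cat pattern:
new_elephant = np.array([1., 1., 1., 1.])
for _ in range(500):
    W += 0.1 * np.outer(new_elephant, np.array([0., 1.]) - new_elephant @ W)
print("cat after: ", np.round(X[0] @ W, 2))   # ~[0.55 0.3]: the cat
# knowledge is partially overwritten -- the price of entangled storage.
```

With more overlap, or repeated one-sided updates, the erosion compounds; this is the interference that slow, incremental, interleaved learning is meant to avoid, and its flip side is the graceful degradation just mentioned.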

I would think that the concept of MC is really aimed at implicit, rather than virtual, content. But H&H might need the term to have broader scope, so let us consider it as encompassing both implicit and virtual content; where the differences matter—and they usually will—I will specify whether I am talking about implicit MC or virtual MC.


2 Central and peripheral systems

Suppose a vaguely Fodorian mental architecture: a set of relatively encapsulated, domain-specific input and output modules, coupled with one or more Quinean and isotropic central systems for integrating and adjudicating the outputs of the input systems, reasoning, forming plans, and sending commands to the motor output systems.1 The peripheral modules make initial guesses about the distal environment, and the central processes confirm or override these proposals. In my reconstruction of H&H above, I assumed that the relevant MC would come in at the level of central processes, rather than peripheral processes. Central MC would allow the whole background belief set to be causally implicated, in a tractable manner, in the formation/retention of a given belief.2

The problem, however, is that the kind of MC necessary for this kind of causal relevance is implicit, rather than virtual, MC, and H&H have given us little reason to think that there is any MC that is both implicit and central.

Much of the central information may be explicit; semantic networks and the like are still taken quite seriously in psychology and AI. Of any remaining information, the bulk of it would seem to be virtual, rather than implicit. I say this because the background information relevant to belief-fixation tends to be susceptible to quick and easy change. Persuade me that all the lights nearby are tinted green, and although my perceptual modules will still operate just as they did before, I will instantly suspend or modify my considered judgments about color. If there is much content that is both morphological and central, it seems to be highly labile and thus virtual, rather than implicit.

This example illustrates the contrast with implicit peripheral MC, which seems to be more or less the norm. Perceptual systems need to employ substantive assumptions or constraints about the nature of distal stimuli and/or their environment, e.g., that two retinally adjacent points map to equally distant parts of the array, that certain kinds of texture discontinuities indicate object boundaries, that illumination comes from above and is unevenly distributed across uniform surfaces in predictable ways, etc. It is unlikely that these assumptions are explicitly represented anywhere, and they are substantive and pervasive enough that they seem to be wired into the basic architecture of the systems and not merely virtual. Furthermore, these systems tend to be fairly modular, and the information they employ is highly resistant to change: it is highly nonlabile.

1 Fodor's own (1983) view puts too much emphasis on innateness, strict encapsulation, domain specificity, shallowness of the peripheral modules, and unity of central system(s) to be empirically plausible, by my lights, anyway. (Hence the 'relatively' hedge in the text.) Embracing weak modularity, massive modularity, or other variants on the Fodorian position shouldn't affect the present points much. It is probably better to think of things in System 1/System 2 terms (Kahneman 2011; Schneider and Shiffrin 1977), replacing 'central' with 'System 2' and 'peripheral' with 'System 1', but we've already started down the Fodor path, so I'll stick with that terminology and framework.

2 This claim may be too strong. Whether it is really the whole background system that is causally implicated will depend on whether central cognition involves one or several distinct networks. For present purposes, I will suppose that the whole background system really is causally involved each time, simply because this supposition renders H&H's claims all the more interesting.


The causal differences between implicit/peripheral and virtual/central content are significant. In the periphery, the realizers of implicit content (e.g., connection weights) are directly, actively, causally implicated in the production of the belief. By contrast, central contents seem—on the face of it, at least—to involve bare counterfactual dependence, often entirely negative, e.g., 'if S had believed that dogs were 100 feet tall, she wouldn't have continued to believe that x was a dog'. I call this example negative because there aren't any particular beliefs of S's that are held to be counterfactually responsible for the belief that this is a dog, but rather the absence of a belief (that dogs are 100 feet tall). Sometimes we do ascribe positive dependencies, e.g., 'if S hadn't believed that dogs have fur, she wouldn't have continued to believe x was a dog'. Here, there is an actual belief being appealed to. But for one thing, it is unclear that such ascriptions are plausible where they are not merely elliptical statements of negative dependencies. Surely if S had believed that dogs don't have fur, she wouldn't have believed this was a dog (that's a negative dependency). But would her lacking that belief (e.g., not having the concept of fur or not having a settled opinion about the matter) have resulted in suspension of belief? Furthermore, in the cases where the counterfactual link is direct and the dependency is positive, we have counterfactual dependency on individual beliefs, not the dependency on the whole background that we would have were the background embodied as implicit content.

I say that this seems on the face of it to be the role of central contents. Perhaps this is an erroneous bit of folk psychology, or perhaps the dependency is more robust than folk psychology recognizes. Again, I doubt that the contents in question are encoded in a genuinely implicit manner, because they are too labile. But in any case, H&H need some positive argument for thinking that these contents are in fact implicit and not merely virtual.

3 MC and traditional epistemology

H&H defend a coherentism whose novelty consists largely in its inclusion of MC, something left out of traditional coherentist theories. Of course, coherentists have long put a lot of weight on inexplicit beliefs. For Lehrer (1990), for instance, anything you could say in response to a skeptic counts as part of your justification for believing p. This includes things like 'I don't have any reason to believe I'm a three-legged pony being tricked by clever dalmatians into believing that I am a human seated at a computer keyboard'. Such content is more likely virtual than implicit. And it is probably not the kind of content H&H have in mind, for their proposal is more radical: they want MC to contrast with explicit or potentially explicit (i.e., dispositional) beliefs. And surely the moves one could make in Lehrer's skeptic game involve potentially explicit beliefs.

However, once these are put aside, it is unclear what other inexplicit content there is in the central systems to supplement the traditional view. If there is something new and nontraditional here, H&H must have some implicit, presumably central, information in mind, but it would be good for them to tell us what it is. To go beyond the traditional theories in the way H&H promise, this content needs to be something that doesn't figure into the agent's dispositional beliefs. Maybe it's a failure of imagination on my part, but examples of such information aren't leaping to mind.

4 Evidential relevance and peripheral modules

One possibility is that the MC H&H are interested in lies not in the central systems after all, but in the peripheral modules. I think this won't work, for reasons that are independently interesting.

First, let's distinguish, among the class of epistemically significant factors, those that serve as evidence (i.e., reasons, grounds) from those that do not. I will leave 'evidence' undefined, but it is a familiar enough notion. Anyone who understands Feldman and Conee's evidentialism (e.g., Conee and Feldman 2004) and knows why it is plausible but controversial understands the relevant distinction. While beliefs and experiences are often said to serve as evidence, other factors—e.g., reliability, proper function, assertoric force—are intended to play a very different kind of epistemic role. Roughly, the distinction is between that which a belief is based on and that which justification supervenes on.3 Some epistemologists allow evidence to consist of things (e.g., experiential states, distal states of affairs) that are not themselves justified, but a distinguishing feature of evidence is that if a piece of evidence is itself unjustified, it cannot then confer justification.

Now the question arises: is peripheral MC supposed to play an evidential role or not? Some implicit peripheral contents were mentioned earlier: the assumption that retinally adjacent points tend to lie at equal distances, that objects are lit from above, etc. Because the peripheral modules are relatively encapsulated—in fact, because the content is implicit—I could come to have very good reason to deny these facts about adjacency and lighting conditions, without that affecting the outputs of my visual modules; things would continue to look exactly as they always did. I could, at the same time, have no idea that my visual system relies on such assumptions and thus no idea that these assumptions—which I should now regard as false—are influencing the way things look. In such a case, it seems that my visual beliefs are just as justified as they were otherwise. That is, the fact that I have compelling defeaters for the MC of my peripheral modules does nothing to diminish the justification of the outputs of those modules. (That is, in the cases where I'm unaware of the psychological role of the relevant contents.) But then these intramodular contents are not playing an evidential role in the normal case, for the evidential status of my perceptual beliefs is undiminished in the case where the MCs are unjustified.

At best, then, the peripheral MC serves a nonevidential role; it is relevant to justification insofar as it contributes to whatever it is on which justification supervenes (transglobal reliability, etc.), but is not evidentially relevant. But then the MC doesn't seem to be doing much; it is the transglobal reliability that's doing all the work. If MC makes for transglobal reliability, fine, but if the system had another way to achieve reliability (using explicit contents, or none at all), that would be just as good.

3 I discuss this in more detail in Lyons (2008, 2009).

5 Which contents are evidentially relevant?

We might be tempted to draw a similar conclusion about implicit content generally, whether it is centrally or peripherally located. However, I think this isn't quite right. I will argue that whether a content has evidential relevance is a matter of accessibility, not implicitness per se, though these do tend to overlap.4 (I focus on evidential relevance rather than epistemic relevance more broadly because it's the controversial and interesting issue.)

First, explicitness and evidential relevance do not line up neatly. There are plenty of unconscious, explicit representations occurrently tokened in the perceptual modules, and I doubt we would claim that these serve as evidence for perceptual beliefs.

Second, a suitably positioned network could make its implicit contents explicit by a process of off-line simulation. By feeding a number of samples into the network from section 1 above, one might be able to inductively extract the information that elephants have trunks. This is roughly how Williamson (2007) thinks we arrive at general counterfactual claims. Peripheral networks tend not to be suitably positioned; we can't just feed whatever inputs we like into them to discover their inner principles. When networks are suitably positioned, and their inner principles are easily induced, it is plausible that implicit content can have evidential significance. In such cases, however, the MC implicit in the network happens to be introspectably accessible, and it is yoked to dispositional beliefs. A network with accessible though implicit contents will be part of a system whose dispositional beliefs mirror the MC of that network.
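A minimal sketch of such off-line extraction (my own illustration; the hand-set weights merely stand in for a trained network): probe the network with every possible input, collect what it says, and induce the generalization.

```python
import numpy as np
from itertools import product

FEATURES = ["barks", "swims", "has_trunk"]
ANIMALS = ["dog", "fish", "elephant"]

# Hand-set weights standing in for a trained network: the knowledge that
# elephants have trunks is nowhere written down as a rule -- it is only
# implicit in how the weights map features to kinds.
W = np.array([[ 2., -1., -1.],     # barks     -> dog
              [-1.,  2., -1.],     # swims     -> fish
              [-1., -1.,  2.]])    # has_trunk -> elephant

def classify(x):
    return ANIMALS[int(np.argmax(np.array(x) @ W))]

# Off-line simulation: feed the network every possible input and inspect
# what it does, inductively extracting its "inner principles".
elephant_cases = [x for x in product([0, 1], repeat=3)
                  if classify(x) == "elephant"]
if all(x[FEATURES.index("has_trunk")] == 1 for x in elephant_cases):
    print("Extracted: everything classified 'elephant' has a trunk.")
```

The extraction works only because we can feed the network arbitrary inputs off-line; encapsulated peripheral modules, by contrast, accept only what the senses hand them, which is why their implicit contents stay buried.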

Third, examples seem to show that even unconscious beliefs can be evidentially relevant, provided that they are accessible. Here's a real-life example: I often leave the top down on my Jeep in the summer, and frequently the sound of rain causes me to immediately—without any conscious inference or ratiocination—form the belief that the seats are getting wet. This belief is cognitively spontaneous in BonJour's (1985) sense. This happens a bit too often, and I know the phenomenology quite well. The first thought I'm aware of is not 'it's raining', or 'the top is down', or 'if it's raining and the top is down, then the seats are getting wet'. The first conscious thought—sometimes as I awaken in the middle of the night—is 'Shit! My seats are getting wet!' It seems pretty clear that this spontaneous belief is causally dependent on the aforementioned beliefs, at least in the sense that if I didn't have those, I wouldn't have this one. I'm fairly good, for instance, about having this reaction only when the top was down earlier. It also seems fairly clear that there is an evidential connection here: if I were unjustified in believing the top was down, for example, I wouldn't be justified in thinking that the seats were getting wet.

4 I don't actually think this is quite right. I argue elsewhere (Lyons, in prep) that the relevant distinction is really between beliefs/contents that are beliefs of the agent vs. those that are contents of the agent's subpersonal modules; the former but not the latter are evidentially relevant. I don't have the space to make this argument here, but the claim presently defended lies in the direction of the one I want to really endorse. Also, the claim I defend here is pretty much the "proto theory" (p. 204) that H&H explicitly reject.

The belief that the top is down plays a causal role here much like that played in the visual case by the continuity constraint, although the former is evidentially relevant and the latter is not. This suggests that certain unconscious beliefs can have evidential relevance to a given belief, even though they are unconscious, provided (a) that they are in some important sense accessible, and (b) that the very same states that are thus accessible are also causally implicated in the production or sustaining of that belief. In the Jeep case, however, what makes the relevant beliefs accessible is that they are explicitly represented, so this is no vindication of the epistemic role of MC. The beliefs are unconscious because I have overlearned and automated the process of going from the auditory input to the belief that the seats are wet. But it doesn't seem to be automated in a sense that involves implicit content, because, once again, it is too flexible, too labile, too sensitive to my current standing beliefs. The beliefs are unconscious, but they're not buried in the way they would be if they were encoded only implicitly.

Thus, the kind of MC most deserving of the name—implicit content—seems to be the kind of content least likely to be epistemically significant.

6 Conclusion

There are two main candidates for MC: implicit content and virtual content. Virtual MC can't do the causal work H&H intend for MC to do. Implicit MC can be either central or peripheral MC, and I have argued that peripheral MC is generally not evidentially relevant. I have suggested that central MC can be evidentially relevant, but only if the content is in some important sense accessible. It is an empirical question how prevalent central implicit content is, and I've claimed that introspective accessibility won't tell us whether some given information is explicit or implicit. However, if I am right that MC is evidentially relevant only when it is thus accessible, then MC is relevant only when it appears in the dispositional beliefs of the agent. H&H's view, however, was supposed to go beyond traditional views in attributing epistemic relevance—presumably evidential relevance—to contents that were neither explicit nor dispositional beliefs of the agent.

I think there's a very important insight here: that some form of connectionism can allow certain episodes of belief fixation to involve positive causal dependence on whole, large theories. But I doubt that much of this is going to involve genuinely implicit content, because the relevant theories are too labile. And I doubt that the resulting epistemology will be a large departure from the traditional view, because all the evidentially relevant contents will be contents of the agent's dispositional beliefs.


References

BonJour, L. (1985). The structure of empirical knowledge. Cambridge, MA: Harvard University Press.
Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8(2), 240–247.
Conee, E., & Feldman, R. (2004). Evidentialism. Oxford: Oxford University Press.
Fodor, J. A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus & Giroux.
Lehrer, K. (1990). Theory of knowledge. New York: Routledge.
Lyons, J. C. (2008). Experience, evidence, and externalism. Australasian Journal of Philosophy, 86, 461–479.
Lyons, J. C. (2009). Perception and basic beliefs: Zombies, modules, and the problem of the external world. New York: Oxford University Press.
Lyons, J. C. (in prep). Unconscious evidence.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing I: Detection, search and attention. Psychological Review, 84, 1–66.
Williamson, T. (2007). The philosophy of philosophy. Malden, MA: Blackwell.
