
R. Van Gulick

Rethinking the Unity of Consciousness

16

E pluribus unum: Rethinking the Unity of Consciousness

Robert Van Gulick

Etymology is not always a reliable guide to meaning and even less so to truth, but

perhaps there is something to be learned from the fact that the word “conscious” derives

from the Latin verb “conscio,” which literally translates as “know together” (con + scio).

Indeed, in one archaic use, it could mean knowledge shared among different people. The

Oxford English Dictionary (2nd edition, 2000) defines this obsolete use as “sharing

knowledge with another” and cites Thomas Hobbes in Leviathan (1651, I. vii. 31) where

he wrote, “When two, or more men, know one and the same fact, they are said to be

conscious of it,” as well as Robert South slightly later (1693, II.ii.88), “Nothing is to be conceal’d from the other self. To be a friend and to be conscious are terms equivalent.”

Being conscious in this sense of knowing together is a mutual or shared mental activity,

just as one confides or conspires—literally “breathes together”—with another. We no

longer use “conscious” in that way, but perhaps the surviving concept of “conscious” we

apply to single individuals retains some sense of being known together, a way in which

the very word “conscious” implies some form of unity or integration. The relevant unity


would be within one mind or self, but still involve some way in which features or states

of mind are shared or integrated.

Consciousness is generally believed to be unified in some important respect, but

in what specific ways and to what degree is not as clear. Nor is there agreement about the

status of such unity: Is it essential to consciousness as a logical or empirical matter? And

if so, how so and why? If not, might unity nonetheless be important to our understanding

of consciousness, and how so?

Unity and integration might figure in two distinct but complementary ways in

theories of consciousness: either as an explanandum or as an explanans, that is, as a

real feature of consciousness that needs to be explained, or as something to which we

might appeal in explaining consciousness and its properties. Indeed, given the complexity

of the actual theoretical situation, it might serve as both.

As with any complex phenomenon, a theory of consciousness needs to describe,

and perhaps model, its many important features and properties. We need a good sense of

what consciousness is before we can explain how it can exist or be produced. Unities of

various sorts seem likely candidates for inclusion on any adequate list of the properties of

consciousness, including representational unity, object unity, and subject unity, as well

as introspective, access, and phenomenal unity. Indeed, each of those unities subdivides

into yet more specific types. Representational unity might be unity of content or of

vehicle, and unity of content in turn can take many forms and degrees. Unity of


subject might concern a unified subject of thought or one of action, and each in turn can

take many yet more specific forms and degrees of integration.

All these various possible unities need to be adequately described or modeled, and

each serves as a possible explanandum, a property or feature whose existence and basis

needs to be explained by a comprehensive theory of consciousness. Some forms of

conscious unity might also serve as an explanans, in so far as we might appeal to one sort

of conscious unity to explain another, e.g., explaining phenomenal unity in terms of the

representational unity of consciousness (Tye, 2003).

Unification and integration of various sorts can also occur at unconscious levels,

and some theories try to explain consciousness or its properties in terms of such

unconscious unities or integrations. Like conscious unity, unconscious unity comes in

many forms both psychological and neural, including representational, spatial, and

multi-modal unities, as well as many sorts of functional and causal integrations, both

within and between modules or subsystems of the mind or brain.

In answering the “how question,” many theories of consciousness appeal to such

nonconscious unities. Indeed, some explain the crucial transition from unconscious

mental state to conscious state in terms of such integrative or unifying processes. For

example, on Bernard Baars’s global workspace theory (1988, 1997), a specific

unconscious mental state becomes conscious when it is brought into that workspace and

thus globally “broadcast” for integration with other contentful states in a wide range of

different subsystems or modules. Stanislas Dehaene has further developed the global


workspace theory and combined it with a proposed neural model of the brain regions

involved in carrying out the relevant integrations (Dehaene & Naccache, 2001).

Integration plays a more direct and essential role in Giulio Tononi’s (2008)

integrated information theory of consciousness. On Tononi’s model, a state of a system is

conscious just if it has the highest degree of integrated informational content, which

Tononi defines in terms of an information-theory-based measure he calls Φ, which

depends in part on the degree of interdependence between the states of the system and

thus on their integration. I myself have proposed a model, the Higher Order Global States

model (or HOGS), that explains the transition from unconscious to conscious mental state

as a matter of its being recruited into the unified global state that constitutes the

transient substrate of a subject’s conscious mental stream (Van Gulick, 2004, 2006).

Though the HOGS model agrees with the workspace theories of Baars and Dehaene in

treating the transition as a matter of increased global integration, it differs in the specific

form of self-like unity it proposes.

Thus the unity of consciousness is not one issue or one question. It generates a

variety of questions within a problem space defined by the many possible forms of

conscious and unconscious integration and their possible explanatory connections. We

must determine which types of conscious unity are real, and then describe and explain

them. As to unconscious forms of unity and integration, they too must be modeled and

described. And at least aAccording to many theorists at least, they are likely to play an

important role in explaining the “how” of consciousness. Their guiding hypothesis is that


consciousness, or at least some of its key features, is realized or produced by underlying

nonconscious integrative processes. If so, nonconscious forms of unity and integration

may figure as key explanantia in our understanding of consciousness.

Philosophical discussions of the unity of consciousness often concern whether

unity of one sort or another is a necessary condition for consciousness, or alternatively

whether it is sufficient for it. Both sorts of questions are open to logical as well as

empirical readings. If phenomenal unity is a necessary feature of human consciousness, is

that a matter of logical necessity, nomic necessity, or merely a contingent fact about the

particular structure of human consciousness or its substrate? Some scientific theories of

consciousness also assert or imply claims about the necessity or sufficiency of one or

another sort of unity or integration. Tononi’s integrated information theory explicitly

equates consciousness with having a high Φ value, and global workspace and HOGS

models both regard integration into a larger unified state as a necessary element of the

transition from unconscious to conscious state.

Unity may bear an important relation to consciousness even if it is not strictly

necessary or sufficient. Scientific theories of a complex phenomenon Z often invoke

explanatory properties that are in themselves neither necessary nor sufficient for Z, but

nonetheless help us understand the nature of Z. The relevant property P, for example,

might be a necessary part of some condition S that is sufficient for producing Z, but not

uniquely so. Even though there may be other alternative ways to produce or realize Z,

doing so in the S-way essentially involves P. For example, consciousness might be


realized in one architecture that requires integration of content across modular

subsystems; human consciousness may in fact do so. But there may nonetheless be

other ways to produce consciousness in systems with a different functional organization

—e.g., some conditions S* that suffice in systems without a modular structure. Thus,

what might be necessary for consciousness in one systemic context might not be required

in another.

Even if unity were not necessary for consciousness per se, it might nonetheless be

necessary to understanding its function. Given any sort of unity one might initially think

essential to consciousness, both clinical evidence and thought experiments may provide

reason to believe that some limited cases of consciousness can occur without that form of

unity, no matter how common it is in ordinary conscious experience. Nonetheless,

consciousness may need to be unified in that way to carry out at least some of the

functions that make it valuable and adaptive.

For example, our normal conscious life involves the unified experience of

integrated objects and scenes, and having such experiences surely requires specific forms

of representational integration at the conscious and underlying nonconscious levels.

However, we know that patients suffering from apperceptive visual agnosia have great

difficulty integrating visual stimuli into coherent wholes, though there is no doubt that

they have visual experiences. Patients with simultanagnosia (Bálint’s syndrome)

cannot see more than one object at a time, and thus are incapable of having a unified

experience of a scene. Moreover, with unimpaired subjects, it seems possible to have


some minimal experience with no integration of object or scene. Imagine having

just the experience of a dim flicker that passes so quickly that one cannot say just where

it occurred or whether it was of any given color or shape, or the experience of a brief, faint sound whose location and tone one cannot discern. Such stripped-down experience

seems possible despite its lack of any parts to integrate or unify. It seems possible to have

at least some conscious experiences that do not involve such unity. Thus if we think in

terms of necessary conditions, we might conclude that such unities of object and scene

are not essential or central to understanding consciousness.

However, that need not follow. Even if consciousness in some pathologically

restricted cases lacks such unities, it may be the capacity of consciousness to support and

enable such forms of unity and integration that explains why consciousness is important

and useful. Enabling and supporting widespread integration in a dynamically unified

representation may be one of consciousness’s central powers, even if it can be blocked

from doing so in special cases. If so, understanding the nature of consciousness would

require explaining how it comes to have that power and exercise it in normal conditions.

If that is one of its key functions, then we need to understand what it is about

consciousness and its underlying basis that enables it to play that role in normal contexts.

The fact that the exercise of that power may be blocked in abnormal cases does not show

that its capacity to support such integration is not central to its nature and value.

In introducing these issues, I have spoken interchangeably of “unity” and

“integration,” and I will continue to do so below. The two notions are closely related,


though they may have subtly different associations and convey somewhat different

implications. Integration leads us to think in terms of a process, whereas unity may seem

more like a basic fact or result. It is also natural to think of integration as admitting of

degrees. Unity as well can be treated as a matter of degree, but there is also some pull

toward thinking of it as all or none.

Once again, etymology is worth noting. The Latin root of “unity” literally invokes

the idea of “one-ness” from the number “unum.” What is united is one thing; and that

might seem like a simple and determinate fact: for example, is there one conscious

subject or not? “Integration,” which shares its root with “integer,” turns on a slightly

different metaphor, that of combining into an integer or whole (a whole as what is

literally “untouched”—from “in,” meaning not, plus “tangere,” to touch). Especially when one is

dealing with complex systems, what constitutes a whole may turn on many factors, and

we are accustomed to the idea that new wholes may arise from suitably related or

interacting parts. Though the idea of unity as one-ness may incline us more to think in

terms of what is simple, and integration more in terms of what has an underlying complex

basis, each of the two notions can be used to think about the way in which consciousness

coheres and how it might result from the coherent interaction of underlying nonconscious

processes. Indeed, having both notions may aid our theorizing by offering two slightly

different conceptual perspectives on the same basic process.

Before moving on to consider some more specific questions, let me recap the

general structure of the problem space. Unity may occur in many conscious and


nonconscious forms as shown in table 16.1. Some questions concern the reality of those

varying sorts of unity. Which are true of consciousness in general, or of human

consciousness? Other questions concern relations between the various sorts of unity, both

conscious and nonconscious. Which sorts of unities might be explained fully, or at least

partly, in terms of others? Which forms of unity might be necessary or sufficient for

consciousness, or human consciousness, or at least important to our understanding of its

nature, function, and substrate?

[Table 16.1 near here]

Table 16.1 aims to display the general problem space, with column 3 having a

special structure that includes both various types of conscious unity as well as

consciousness itself. The table can be read either across the columns or up and down

within column 3 (and perhaps within column 1). Reading across, one set of questions can

be generated by selecting specific items from each of the three columns: Is unconscious

multimodal integration necessary for multimodal conscious integration? Is the

unconscious unity of thought and action sufficient for the conscious unity of subject? Is

the unconscious representational unity of content sufficient for consciousness itself?

Other questions can be generated by applying one of the linking relations from column 2

with various pairings within column 3, either between various specific forms of

conscious unity, or between such unities and consciousness itself: Is conscious

representational unity sufficient for phenomenal unity? Is conscious object unity


necessary for conscious subject unity? Is phenomenal unity necessary for consciousness?

Does the unity of the experienced world explain the functional value of consciousness?

Some cross pairings generate more interesting and plausible linkages than others,

but it is useful to have an overview of the full range of possible connections.

Understanding the unity of consciousness requires understanding how its various forms

relate to each other and to consciousness itself, as well as to the various sorts of

nonconscious unity that may provide their underlying substrate. A comprehensive examination of the full problem space is beyond the scope of the present chapter; for the remainder, I will instead focus on a few specific questions about the relations

between representational unity, phenomenal unity, and consciousness.

As noted above, the neuroscientist Giulio Tononi has developed an influential

theory of consciousness that identifies it with a form of integrated information that his

theory defines in purely information theoretic terms (Tononi, 2008). Tononi’s proposal is

thus a reductive theory that aims to fully explain consciousness in terms of nonconscious

integration. He writes, “The integrated information theory (IIT) of consciousness claims

that, at the fundamental level, consciousness is integrated information, and that its quality

is given by the informational relationships generated by a complex of elements” (2008). Since the supposed relation is one of identity, relative to table 16.1 Tononi’s theory

should be understood as asserting that a type of nonconscious informational unity from

column 1 provides both a necessary and sufficient condition (link from column 2) for

consciousness itself in column 3. The key idea in Tononi’s IIT is that of integrated


information for which he proposes a mathematical measure he terms “Φ” defined in

purely information theoretic terms (with the symbol “Φ” itself composed of two

components “I” for information and the circular “Ο” for integration within a whole).

For present purposes, we need not go into the precise mathematical definition of

Φ used by IIT. What matters is that Φ concerns the information within a complex or

system that results from the interactions and causal dependencies among its parts as

opposed to the information in the parts themselves. As Tononi puts it, “In short,

integrated information captures the information generated by causal interactions in the

whole, over and above the information generated by the parts” (2008, p. 221).

To illustrate his point, Tononi uses the detector in a digital camera as an example of non-integrated information. The camera’s detector may have five

million pixel elements, each with its own information value, but that information is not

integrated; each is an independent unit simply signaling the light value for its small

portion of the scene in isolation. By contrast, when one has a conscious visual experience

—as when I look at the cluttered desk in front of me—the information about all the parts

of the scene is integrated into a unified awareness of the overall environment from a

single subjective viewpoint. That awareness includes an understanding of how the parts fit together, as well as their connections with all sorts of other stored information, including my knowledge and memory of the various items on my desk.
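The contrast Tononi draws between the camera’s independent pixels and an integrated experience can be made concrete with a toy calculation. The sketch below does not compute Tononi’s Φ, which is defined over causal perturbations of a system; it computes a much cruder statistical stand-in (sometimes called total correlation): the information in the joint distribution of a system over and above the information in its parts taken separately. The function names and the two example distributions are mine, for illustration only.

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, i):
    """Marginal distribution of the i-th variable of a joint over tuples."""
    m = {}
    for outcome, p in joint.items():
        m[outcome[i]] = m.get(outcome[i], 0.0) + p
    return m

def total_correlation(joint, n_vars):
    """Sum of the parts' entropies minus the whole's entropy.

    Zero exactly when the variables are independent. A statistical proxy for
    'information generated by the whole over and above the parts'; NOT Phi.
    """
    return sum(entropy(marginal(joint, i)) for i in range(n_vars)) - entropy(joint)

# Two independent camera "pixels", each dark/light with probability 1/2:
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

# Two perfectly coupled elements that always agree:
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(total_correlation(independent, 2))  # 0.0 bits: no integration
print(total_correlation(coupled, 2))      # 1.0 bit: the whole adds structure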

According to Tononi, a complex that embodies such integrated information

literally has a point of view, or at least does so if it is not embedded within a yet more


integrated complex with a higher Φ value. He writes, “Specifically, a complex X is a set of elements that generate integrated information (Φ > 0) that is not fully contained in some larger set of higher Φ” (2008, p. 221). A complex, then, can be properly considered to

form a single entity having its own, intrinsic “point of view” (as opposed to being treated

as a single entity from an outside, extrinsic point of view). The restriction on not being

contained within a set of elements with a higher Φ is relevant to the case of the conscious

mind or brain. A human brain will contain many subsystems with some significant

measure of integrated information such as the visual cortex or auditory cortex, but they

do not each have their own separate consciousness or subjective point of view. Only the

larger thalamo-cortical complex of globally integrated elements is conscious and has such a viewpoint, or at least that is the supposed implication of Tononi’s theory.
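The exclusion rule just described can be stated as a short selection procedure. The sketch below is a hypothetical toy following the quoted formulation: the complex names, element sets, and Φ values are simply stipulated for illustration (computing real Φ values for a brain is not feasible in practice).

```python
def conscious_complexes(candidates):
    """Toy version of IIT's exclusion rule: a candidate with positive Phi counts
    as conscious only if it is not properly contained in another candidate with
    a higher Phi value.

    `candidates` maps a name to (set_of_elements, phi).
    """
    winners = []
    for name, (elems, phi) in candidates.items():
        if phi <= 0:
            continue
        contained_in_better = any(
            elems < other_elems and other_phi > phi  # proper subset of a higher-Phi set
            for other_name, (other_elems, other_phi) in candidates.items()
            if other_name != name
        )
        if not contained_in_better:
            winners.append(name)
    return winners

# Stipulated (not measured) values for three nested candidate complexes:
brain = {
    "visual cortex":    ({"V1", "V2", "V4"}, 8.0),
    "auditory cortex":  ({"A1", "A2"}, 5.0),
    "thalamo-cortical": ({"V1", "V2", "V4", "A1", "A2", "Thal"}, 20.0),
}
print(conscious_complexes(brain))  # ['thalamo-cortical']
```

On these stipulated numbers only the global complex qualifies, mirroring the claim that the visual cortex, despite its own high Φ, has no separate point of view while embedded in the larger complex; delete the containing entry and the two subsystems immediately qualify instead, which is exactly the extrinsicness worry raised later in the chapter.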

Tononi’s IIT is an interesting attempt to capture our phenomenological intuitions

about the integrated nature of consciousness and translate them into a rigorous

mathematical theory that might be applied to the brain (though the actual computation of

Φ values for any system as complex as a brain is at present not possible in practice).

However, as offering a strictly necessary and sufficient condition for consciousness, IIT

confronts a number of challenges. First, it is an entirely abstract theory, that is, the conditions it specifies are purely mathematical and highly medium-independent. They might be satisfied by all sorts of physical systems, not just biological and electronic

systems, but also bizarre realizations of the sorts that have been raised by critics of the

computational theory of mind (Searle, 1980). Indeed, John Searle (2013) makes this point


himself in reviewing a recent book by another famed neuroscientist, Christof Koch, who

is a staunch advocate of IIT (Koch, 2012). Tononi accepts this consequence and does

not regard it as a reductio of his system, but others will surely balk at the idea that being

conscious and having a subjective point of view in the “what it is like” sense does not

depend on the medium in which a mathematical structure is realized but only on the mathematical structure itself.

Tononi’s theory also commits him to a form of panpsychism. Any system that

forms a complex with a Φ value that is not itself contained within a system with a higher

Φ value will have some sort or degree of consciousness on his theory, and will thus have

some sort of point of view. Some might regard this as well as a reductio of his theory, but

he again accepts it because he allows that consciousness admits of degrees in quantity

and that its quality is determined by the network of elements linked within the complex.

Thus he allows that an ant, an amoeba, or even a single isolated photodiode can be conscious in some way; it is just that its consciousness is of a far lower

degree in quantity than the consciousness of a human or a mouse because it has a far

lower Φ value, and its consciousness will not be similar to ours in quality since it does

not involve the same vast network of integrated elements. Despite Tononi’s attempts to

make panpsychism acceptable, many may find the implication that photodiodes have any

consciousness at all a reason to reject his theory.

A third objection concerns IIT’s claim that, within a system with overlapping

complexes, only the complex with the maximal Φ will be conscious and have a subjective


point of view. If a complex C either contains or is contained within a complex C' with a

higher Φ, then only C' will be conscious and C will not be conscious no matter how high

its Φ value. As noted above, this would yield the intuitively correct result for situations

like the human brain. We take it to have a single conscious point of view perhaps

associated with a global pattern of thalamo-cortical integration, and we do not associate

separate points of view with smaller complexes or subsystems such as the visual cortex

even though they may have a high Φ value.

So far so good for IIT. However, the theory also entails that the visual cortex

would have a conscious point of view if it were not contained within the larger, global

thalamo-cortical complex. Thus whether a complex is conscious and whether or not it has

a subjective point of view turns not just on intrinsic facts about the level of informational

integration internal to the complex, but also on extrinsic facts about what larger complex

it may or may not be contained within. Consider two visual cortices, VC1 and VC2, that

are exactly alike in all their physical properties and processes over a temporal interval T,

but such that VC1 is contained within a complex with a higher Φ value during T while VC2 is not. VC1 and VC2 would have the same Φ value during T, but according to IIT,

VC2 would have a conscious subjective point of view during T while VC1 would not.

This seems to conflict both with IIT’s supposed identification of integrated information

with consciousness as well as with our strong intuitions about the supervenience of

consciousness on a system’s intrinsic properties. It seems odd to suppose that VC1 and

VC2 could be physically the same in all respects during T, and yet one of them has


a conscious point of view and the other does not. Having such a consequence is not a

knock-down argument against IIT, but it does weigh against it.

A lot more could be said about IIT, and it continues to be regarded as a serious

theory of consciousness by at least some neuroscientists. But overall it does not seem

plausible as a reductive proposal to provide necessary and sufficient conditions for

consciousness in terms of nonconscious forms of unity and integration.

Issues of a different sort within the problem space of table 16.1 concern relations

not between items in columns 1 and 3, but relations solely within column 3. Unlike IIT,

these do not involve proposals to explicate consciousness in terms of some form of

nonconscious integration, but rather raise questions about the relations that various forms

of conscious unity bear to each other and to consciousness itself. My discussion will

again be selective, with a focus on the relation between conscious representational unity

and phenomenal unity, especially as that issue has been recently addressed by Tim Bayne in The Unity of Consciousness (2010).

According to Bayne, phenomenal unity is not identical with conscious

representational unity. Though phenomenal unity may typically be accompanied by

representational unity, he argues that the former is not merely a special case of the latter.

It is a separate and distinct type of conscious unity. He thus disagrees with

representationalists, such as Michael Tye (2003), who argue that phenomenal unity is

nothing over and above representational unity and analyze the unity of consciousness in

terms of unified representational content.


Having developed his nonrepresentational view of phenomenal unity, Bayne goes

on to argue for the truth of what he calls the “unity thesis,” namely the claim that all

the experiences had by a conscious subject at a time are phenomenally unified. I will

argue that careful consideration of the unity thesis reveals that phenomenal unity is in

fact a form of representational unity or at least that it depends essentially upon

representational unity, though in a way that is different and more indirect than that

proposed by standard representationalist accounts.

Bayne offers three specifications of what it is for two experiences to be

phenomenally unified:

(1) They are subsumed by a single conscious state, that is, by being parts or components of
that single state (Bayne, 2010, p. 15).

(2) They occur within a single phenomenal field (2010, p. 11).

(3) They possess a conjoint experiential character (2010, p. 10).

They are not intended as three distinct conditions but as three ways of explicating one

and the same relation. Of the three, the third is the most informative. The meaning of (1)

depends crucially on how one understands the notion of a “conscious state,” and the

notion of a state can be interpreted so broadly as to make the unity thesis trivial. One

might interpret a subject’s conscious state at a time to be simply the totality of all her

conscious states at that moment, just as one might define a subject’s belief state as the

totality of all her beliefs. Reading “conscious state” in that way would make it a

tautology that all one’s experiences at a time were parts of a single such state. Explication


(2) does not fare much better in so far as it relies on the similarly vague metaphor of

being part of a single “phenomenal field,” which clearly cannot be interpreted in a spatial

sense in so far as it is intended to cover many experiences without any explicit spatial

aspect.

Thus the notion of conjoint phenomenality invoked by (3) offers the best

possibility for unpacking Bayne’s notion of phenomenal unity. The idea is that if two

experiences, E1 and E2, are phenomenally unified, there is something that it is like to

experience them together, something more than the mere conjunction of experiencing E1

and experiencing E2. As in Bayne’s example, if I smell the coffee in the café and hear the

rumba at the same time, there is something it is like for me to experience both of them

together. This makes the unity thesis a rather strong claim and less than

intuitively obvious. Given all the many diverse experiences a subject can be having at a

given moment, is there always a further experience (or experiential feature) of their

conjoint togetherness over and above the mere conjunctive fact that one is having each of

them at the same time? It is not at all obvious that there is, especially when one

considers peripheral as well as focal experiences.

The meaning of the unity thesis also depends on how one interprets the

notion of a “conscious subject.” For most of the book, Bayne interprets the subject as the

human organism. Thus understood, the unity thesis asserts that all the experiences had by

a given human organism at a time are phenomenally unified. Bayne takes this to be an a

posteriori claim about actual human beings, and a good part of the book is devoted to


considering and replying to empirical examples that might seem to falsify the thesis, such

as cases of hypnosis, dissociative identity disorder, and split-brain patients. Only in the

last chapter of the book, chapter 12, does he turn his attention to a more a priori interpretation of the thesis, which takes the relevant subject to be the “conscious self” rather than the organism. It is by considering this latter version of the thesis that I believe we

can see how Bayne’s notion of phenomenal unity essentially depends upon

representational unity.

First, though, let us back up a bit and be clear about the basic disagreement between Bayne and representationalists such as Tye who explicate

phenomenal unity in terms of the unity of representational content. If one is a

representationalist who accepts the so-called “transparency of experience,” there is a

simple direct argument one can give for equating the unity of consciousness with

representational unity. According to the transparency thesis, the only properties of our

experiences of which we are consciously aware are their contents, that is, how they

represent the world as being. They are “transparent” in the sense that we “look right

through” them to the represented world without being aware of any intrinsic or

nonrepresentational properties of those experiences themselves. Thus if the unity of

consciousness is to be a phenomenologically manifest property, that is, one present as

part of the “what-it-is-likeness” of experience, then it must be a unity of representational

content, a unity of the world as it is represented by experience. If we are aware only of

content, then the only unity of which we can be aware is unity of content. Obviously,


representationalists like Tye have a lot more to say in defense of their position, but

I hope the basic argument will suffice to motivate the view for present purposes.

Bayne’s contrary view is based in large part on the existence of cases, many of

them pathological ones, in which he believes our experience is phenomenally unified

despite the presence of profound failures of representational integration.

Even in ordinary experience, we fail to make many logical or inferential

connections between items we simultaneously experience, but which are nonetheless

likely to be phenomenally unified in Bayne’s sense. Of course, the representationalist’s

claim is not that we make every such connection, but only that we make a sufficient

number of such connections, e.g., sufficient to form a representation of unified objects

possessing multiple properties and relations within unified scenes. Cases of illusion and

hallucination might also be raised as objections to the representationalist view of unity

since they seem to involve unified experiential states with nonunified contents—the stick

looks bent but feels straight. Tye, however, denies there is any problem. He argues that

the representational content in such cases is inconsistent but unified; unity of content

need not involve consistency. It requires only that there is a single overall

representational state that represents the stick as being both straight and bent (Tye, 2003,

p. 37).

In pathological cases, the failures to unify content may be extreme. Neglect

patients, agnosics, and schizophrenics may fail to make the most obvious contentful

connections, and even the representation of unified objects, spaces, or body parts may be


absent. Yet it seems plausible to regard their experience as phenomenally unified. For

Bayne, this is further reason to distinguish phenomenal unity from representational unity,

but again the representationalist may reply that representational unity does not require the

particular sorts of logical connections or integrations that are lacking in such pathological

cases. He may argue that his position commits him only to the claim that whatever

conscious or phenomenal unity is present is simply a fact about the total content of the

subject’s overall representational state, no matter how disorganized or chaotic that

content may be. Thus there seems little possibility of resolving the basic dispute on the

basis of such evidence. Nonetheless, considering such cases may help to clarify more

specific issues about just what sorts and degrees of representational integration are

required for phenomenal unity, or even for consciousness itself. Moreover, as noted

above, even if a certain type of integration is not strictly necessary for consciousness or

for phenomenal unity and is absent (or very limited) in some special cases, it may

nonetheless play a major role in explaining the function and value of consciousness. Its

presence in normal conscious cases may be essential to understanding what is distinctive

and valuable about consciousness. Various integrative capacities of consciousness may

play a key part in enabling it to fulfill major roles, even if those capacities are not always

exercised or blocked in special cases.

Though the empirical evidence about actual failures of integration may not settle

the basic dispute, I believe there is another more a priori route one might follow to show

that phenomenal unity of the sort Bayne describes is committed at a deeper level to a


form of representational unity on which it essentially depends. Thus even if one stops

short of identifying phenomenal unity as just a special case of representational unity, the

links between the two may turn out to be tighter than Bayne proposes. At least that is

what I hope to show.

Recall that Bayne’s unity thesis has two interpretations that depend upon

how one interprets the idea of a conscious subject, either as the human organism or as the

conscious self. The empirical evidence is relevant largely to the former interpretation,

which is an empirical claim about actual humans—as a factual matter, all the experiences

occurring in an actual human at a time are phenomenally unified.

The latter interpretation about the conscious self is a more a priori claim, and it is

that second version of the thesis that promises to give us a deeper understanding of the

link between representational and phenomenal unity.

First, as to the empirical version, Bayne defends it against various empirical cases

that might seem like counterexamples by offering an interpretation of the data in each

instance that is consistent with his thesis. As we just saw, that sometimes involves

distinguishing representational integration from phenomenal unity and arguing the latter

can be present, even when some forms of the former are absent. Other replies turn on the

fact that the unity thesis is a claim about simultaneous phenomenal unity,

which is compatible with some measure of disunity across time, a lack of diachronic

unity.


The interpretations that Bayne gives of the various problem cases seem plausible

enough to defend his thesis from refutation, with one notable exception: that of split-brain

patients. As is well known, in split-brain patients, after the severing of the corpus

callosum connecting their two hemispheres, there seem to be at least some times in which

phenomenally disunified experiences occur within a single human organism. Each

hemisphere is able to act on the basis of experiences to which the other appears to have

no access, and the standard view of such cases is that they involve two separate centers of

consciousness. Indeed, both Tononi (2008) and Tye (2003) endorse that position.

If so, the split-brain cases would refute the unity thesis understood as an

empirical claim equating subjects with human organisms. Bayne offers an alternative

account in terms of a rapid switching model, according to which there are quickly

alternating centers of consciousness in the split-brain patients that are distinct and

diachronically unlinked but never simultaneous. If they never occur at the same time,

then their distinctness would pose no threat to Bayne’s unity thesis, which is a claim

solely about synchronic phenomenal unity. Though it may not be possible to conclusively

disprove the switching hypothesis, it seems implausible and somewhat ad hoc. It is the

least plausible of Bayne’s various interpretations of the problem cases (see Prinz, 2013,

for a similar critique). The split-brain patients seem capable of carrying out independent

and contrary actions with their left and right hands at the same time, each of which is

complex and nonhabitual to a degree that would indicate conscious control rather than

control by a “zombie” system according to Bayne’s own criteria. Though the detailed


data may not suffice to rule out rapid switching, it does not seem to provide evidence to

support it. Thus the empirical interpretation of the unity thesis is called into serious

question by the split- brain cases.

However, from a philosophical point of view, the a priori reading of the thesis that

interprets “subjects” as conscious selves may be the more interesting and important

claim. That claim, which Bayne addresses in the final chapter of his book (2010,

chapter 12), need not conflict with the split-brain cases, since such cases can be viewed

as having two conscious selves in one organism, each of whose experiences are

phenomenally unified. Indeed that is the view put forward by Tononi (2008), who views

the two hemispheres of the split-brain patients as complexes each with maximal Φ value,

neither of which is contained within the other and both of which thus have a conscious

subjective point of view.

On the a priori reading, the unity thesis asserts that all the experiences of a

conscious self at a time are phenomenally unified, leaving open the key issue of what

counts as a conscious self and how such selves are to be individuated. Bayne argues

against both animalist and bundle theory accounts of the self. Discussing Peter van

Inwagen’s example of Cerberus, the two-headed dog (van Inwagen, 1990), Bayne argues

convincingly that Cerberus with two disjoint and phenomenally disunified centers of

experience—one in each of its two brains—would constitute two selves rather than one,

as van Inwagen’s animalist criterion implies.


Bayne also offers sound objections to theories that equate selves with mere

bundles or streams of experiences, what he calls “naïve phenomenalism.” He notes that

they get the ontology of selves wrong, writing that selves “cannot simply be streams of

consciousness for selves are things in their own right whereas streams of consciousness

are not—they are modifications of selves” (2010, p. 281). Such theories, as he argues,

also have difficulty accounting both for the sense in which selves “own” their

experiences and for our modal intuitions about how a given self might have a radically

different set of experiences yet remain one and the same self (2010, pp. 282–283).

Rather than reviewing Bayne’s reasons for rejecting such views,

however, I want to focus on the view he supports, that of the self as a virtual entity

implicit in the structure of phenomenal intentionality, both because I myself regard that

view as the most promising option (Van Gulick, 2004, 2006) and because it allows us to

finally see the deep connection between phenomenal and representational unity. The

phenomenal, virtual self- view is a variant of Daniel Dennett’s theory of the self as the

“center of narrative gravity.” That center or point of view on Dennett’s and Bayne’s

accounts is not a character in the story, nor the author of the story, but rather a point of

view implicit in how the parts of the story cohere. They hang together in a way that

implies the existence of the relevant observer without needing to explicitly refer to or

describe that observer. What is explicit is the story. The point of view itself need not ever

be described; rather it is implicit in the narrative stream of experience.


Extending the metaphor to the case of conscious experience, the idea is that the

self too is a virtual structure, an intentional entity implicit in how our experience coheres

as that of a unified subject. Thus it is not a version of the bundle theory or naïve

phenomenalism. The self is not identical with the stream of experience, but rather an

intentional entity implicit in the organization of that stream. Those experiences are

unified and coherently connected in their content as if they were the experiences of a

single conscious subject, and thus that point of view is implicit in those experiences.

They hang together and make sense as the experiences of a single self. Moreover, Bayne

argues that each of the experiences has de se intentionality, i.e., its intentionality has an

inherently self- referential character that refers each experience to the subject whose

experience it is in a direct and nondescriptive way.

Given that basic explanation of the virtual self theory, we can now see how it

implies a deep connection between phenomenal unity and representational unity. If we

understand the unity thesis as an a priori claim about subjects considered as conscious

selves, then it asserts that all the experiences had by such a self at a given moment are

phenomenally unified. But according to the virtual self theory, whether or not a set of

experiences at a given time count as the experiences of a single self will depend on the

contentful connections that hold among them. Whether or not they imply the existence of

a single self, i.e., a single shared experiential point of view, is an intentional fact that

depends on what relations of coherence hold among their representational contents. They

may fail to be fully integrated in terms of their logical consequences, even some of their


obvious logical consequences in the pathological cases, but those contents must at least

be integrated so as to imply the existence of a single self as their shared subject. On the

virtual self view, the subject unity of consciousness thus depends upon a form of

representational or content unity.

If one agrees with Bayne and accepts a virtual self view, as I believe one should

(Van Gulick, 2004, 2006), then one can give a fairly simple and direct argument linking

phenomenal and representational unity:

(P1). The self is an intentional entity implicit in the structure of conscious

representations and their integrated contents—contents that are integrated as being

from the perspective or point of view of a single self.

(P2). Whether or not a set of conscious representations is integrated in that way—i.e.,

whether or not there is a virtual self implicit in those representations—depends upon

the contents of those representations and how they are linked and integrated, thus on a

type of representational unity.

(P3). A set of experiences is phenomenally unified only if they are all experienced from

the point of view of one and the same self, only if they are “like something” for one

and the same self or subject.

(P4). Therefore, whether two experiences are phenomenally unified ultimately depends

on representational facts about whether or not their contents are integrated as implicit

parts of one and the same point of view or virtual self.


Indeed, one can extend the argument to show that such representational unity is a

necessary condition for conscious experience itself:

(P5). A conscious mental state CM (or experience E) can exist at time t only if there is

“something that it is like” to be in the state (or have that experience) at t.

(P6). There can be something that it is like to be in CM (or have experience E) at t only

if there is some self or subject for whom it is like some way to be in CM (or have E) at

t.

(P7). Therefore, a conscious mental state CM (or experience E) can exist at t only if it is

contained within a set of representations whose contents are integrated or unified in a

way that implies the existence of a single self or subject.

We can put the latter point in terms of a specific example. There cannot be a conscious

pain without some self or subject for whom it is like something to be in or have that pain.

But on the virtual self view, the existence of such a subject ultimately depends upon

facts about whether the contents of a set of representations are integrated in a way that

implies the existence of such a single self or point of view. Thus consciousness per se

requires at least some significant measure of representational unity or integration.

Of course, both the basic argument and its extension assume the virtual self view

in premises (P1) and (P2), and that view is far from obvious. Indeed, it is likely a

minority view among current views of the self. So the arguments above are perhaps best

viewed as conditional arguments that show what follows if one accepts the virtual self

theory. Since Bayne seems to do so in his final chapter, he ought to accept the


conclusions of both arguments, including the thesis that phenomenal unity depends in a

deep way on a type of representational unity. As to others, who are more skeptical about

the virtual self view, more persuasion will be needed. But I leave that for another time.

Table 16.1

Consciousness and unity

1. Nonconscious Unity       2. Relation            3. Conscious Unity
Synchronic/Diachronic                              Synchronic/Diachronic
Representational unity      [Sufficiency]          Representational unity
  Vehicle/Content                                    Vehicle/Content
Object unity                [Necessity]            Object unity
Scene unity                                        Scene unity
Spatial unity               [Functional value]     Spatial unity
World unity                                        World unity
Multimodal unity            [Other relations?]     Multimodal unity
Subject unity                                      Subject unity
  Thought/Action                                     Thought/Action
Functional unity                                   Phenomenal unity
Neural unity

References

Baars, B. (1988). A cognitive theory of consciousness. New York: Cambridge University Press.

Baars, B. (1997). In the theater of consciousness: The workspace of the mind. New York: Oxford University Press.

Bayne, T. (2010). The unity of consciousness. Oxford: Oxford University Press.

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79, 1–37.

Hobbes, T. (1651). Leviathan, or The matter, forme, and power of a common-wealth ecclesiasticall and civill. London: Andrew Crooke.

Oxford English dictionary (2nd ed.). (2000). Oxford: Oxford University Press.

Prinz, J. (2013). Attention, atomism, and the disunity of consciousness. Philosophy and Phenomenological Research, 86, 215–222.

Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.

Searle, J. (2013). Can information theory explain consciousness? New York Review of Books, 60.

South, R. (1693). Twelve sermons preached upon several occasions. London: Jonah Bowyer.

Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215, 216–242.

Tye, M. (2003). Consciousness and persons. Cambridge, MA: MIT Press.

Van Gulick, R. (2004). HOGS (higher-order global states)—an alternative higher-order model of consciousness. In R. Gennaro (Ed.), Higher-order theories of consciousness (pp. 67–92). Amsterdam: John Benjamins.

Van Gulick, R. (2006). Mirror-mirror, is that all? In U. Kriegel & K. Williford (Eds.), Self-representational approaches to consciousness (pp. 11–40). Cambridge, MA: MIT Press.

van Inwagen, P. (1990). Material beings. Ithaca, NY: Cornell University Press.

