
Eve Sweetser, Dept. of Linguistics

University of California, Berkeley

[email protected]

CogSci Faculty Retreat, Dec. 7, 2007

Gesture and language: Iconicity and viewpoint

People gesture when they speak. In every culture. Gesture is minutely co-timed with speech production in what is clearly a common neural production package. (McNeill, Hand and Mind)

What varies:
Size of gesture space
Precise pattern of co-timing of speech and gesture (this takes a lot of learning! cf. McNeill, Goldin-Meadow)
Conventional "emblems"

Gesture is a universal

How much of a gesture is "meaningful" - and to whom?

Speakers gesture in the absence of a physical interlocutor who sees the gestures (on the telephone, for example).
Speakers' lexical access is impeded when gesture is impeded - especially lexical access to spatial vocabulary.
Speakers gesture LESS without a physical interlocutor: in particular, INTERACTIVE gestures are diminished (Bavelas et al.). (Though nods and head-shakes may persist!) "Content" gestures are still there.

Speakers alter gestural patterns to take addressees into account (Özyurek 2000).

Gesture structure

Gestures have prosodic structure like speech. A linguistic PHRASE often coincides with a single GESTURE, which is therefore called a gestural "phrase" by some (Kendon).

A gestural phrase has various stages, including:
a preparatory phase
possibly a pre-stroke "hold"
a STROKE (the major motion phase of a gesture, often temporally associated with some specific constituent of the linguistic phrase, in English often the VERB)
a post-stroke hold
retraction

Intermodal meaning overlap

Gestural “phrases” are temporally associated with linguistic forms whose meaning is related to theirs.

Gesture can add information not present in the linguistic form. It can also give "interactional" meaning about how to take the linguistic content. And it can contradict (and win out over) linguistic information in the listener/viewer's interpretation.

HOW and WHY does gesture accompany language this way?

Attentional focus is one aspect of the interaction - Mischa will talk about that.

Things to notice in “Listen”

The use of the INTER-SPEAKER space: the floor is gained by reaching gesturally into the space between the two speakers’ spaces - and even into the interlocutor’s space.

The use of the “unclaimed” adjacent space: the |cupboard| could NOT just as well be in between the interlocutors, but the |stacking| can be done in the speaker’s own gesture space.

Listen (video clip)

Listen transcript

S1: …the underside.
S2: OK, this is what would happen.
S1: You'd stack dishes.
S2: Listen. We did stack dishes buut I'd like reach in to get a plate to get ready to eat and there'd be like [grea-
S1: [laughs]
S2: - there'd be like grease on the bottom]
S1: yeah
S2: And I'd be like…

Listen transcript

S1 mimes washing the underside of the dishes as she says "the underside," then shapes a stack of plates (or makes a stacking gesture) as she says "you'd stack dishes." S2 is meanwhile trying to break into S1's high-involvement feedback, which is keeping her from the floor. She first says, "OK, this is what would happen," with hands shaping a new topic in her own gesture space, which return to rest as she fails to get the floor. She then tries again with "Listen" - accompanying her attempt with three left-hand D-points (on "listen", "did" and "buut"), which reach well out of her own space into the shared interactional gesture space. She gains the floor.

Listen (video clip)

Iconicity

ICONICITY: a representation is iconic if the form in some way resembles the meaning. Spoken language examples: (1) phonosymbolism: meow, crash, pop; (2) She talked on and on and on.

But (cf. Taub, Language from the Body) the VISUAL-GESTURAL medium allows for a greater variety of effective iconic mechanisms than the auditory one.

Signed languages therefore share this extra iconic character with gesture, although they are conventional in ways that gesture is not.

Visual-gestural iconicity

Iconic mappings (thanks to S. Taub)

ASL TREE iconically represents a schematic concept of a tree.

DH FOREARM = TRUNK
DH HAND & FINGERS = BRANCHES
(note: doesn't mean a tree with 5 branches!)
NDH FOREARM = GROUND
(DH = dominant hand, NDH = non-dominant hand)

Upper arms, signer’s head & trunk: NOT MAPPED

How conventional is gesture?

There are cultural and crosscultural regularities in how people gesture about both concrete and abstract domains. (Metaphor and iconicity can be conventional, we know, and culture-specific: HKSL "Tree" vs. ASL "Tree.")

Local "catchments" (cf. McNeill and Duncan, in McNeill (2000)) regularly arise in interaction, just as local repeated phrasings do. Reduced forms of these catchments still carry meaning. (cf. also LeBaron and Streeck.)

McNeill’s Snow White experiment.

Gesture forwards - does that mean physically ahead of me, or in the future?

Rotation of hand - does that mean some physical object is rotating, or does it show repetitive or ongoing aspect of an action?

Hand up, palm out - is the speaker trying to prevent the addressee from approaching her (or fend off a projectile), or is she metaphorically "fending off" questions?

Cf. Parrill and Sweetser 2004.

Iconic gestures can then be interpreted metaphorically

Metaphor

All languages use METAPHOR, because all cultures have metaphoric cognitive patterns.

One way of looking at metaphor: understanding more abstract things in terms of more concrete things.

TIME IS SPACE
FUTURE IS IN FRONT OF EGO
PAST IS BEHIND EGO

last year is behind us; look ahead to next year

Source and target domains

Metaphoric gestures are iconic for the SOURCE domain of the metaphoric mapping.

Gesture forwards for future: FUTURE IS AHEAD.
Hand up, palm out to forestall questions: IDEAS ARE OBJECTS, COMMUNICATION IS OBJECT EXCHANGE…
Palm-up "offering" hand, meaning "it's obvious" or "now I'm sure you see": again, COMMUNICATION IS OBJECT EXCHANGE.

“Put you away” iconic mappings

2 B hands, palms facing each other (thumb side up) = the hands of someone putting something away.

Motion of the gesturer's hands = motion of the hands putting something away.

Space between the gesturer's hands = the object being put away (a box?). (Invisible surrogate! Moves when the hands do.)

“Put you away” metaphoric mappings

CONVICTED CRIMINAL = AN OBJECT (a box?)
PUTTING CRIMINAL IN PRISON = PUTTING THE OBJECT AWAY
AGENT PUTTING CRIMINAL IN PRISON = AGENT PUTTING THE OBJECT AWAY
RESULTING STATUS OF CRIMINAL (STUCK IN ONE PLACE, MONITORED) = RESULTING STATUS OF OBJECT (WHEN PUT AWAY, WE KNOW WHERE TO FIND IT AND WE DON'T THINK IT WILL GET LOST).

Sign language examples

TOMORROW, YESTERDAY
KNOW, IDEA
VOTE, TEABAG
DIPLOMA
DRIVE, FLY(PLANE)
NEPHEW, NIECE, COUSIN

Things to note in “concepts”

Setting up of two spatial areas, |concepts| and |forms|.

Mapping between them. The encircling gestures for "framework."

Concepts (video clip)

Transcript of “concepts”

Clip name: "concepts map onto the world" (lecture by Mark Johnson)

...have fixed definitions and they map onto the world...um...and that knowledge consists in...framing a set of concepts that neatly map onto states of affairs in the world whether those states of affairs have to do with morality or politics or um...or um...quantum physics or whatever.

Gesture transcript of "concepts"

Clip name: "concepts map onto the world" (lecture by Mark Johnson)

Two B hands to Left delineate a definition/concept space.
They then move to the Right and delineate a World space.
Delineation of a globe-shaped central space is |framing a set of concepts|.
Moving hands from one side to the other is |mapping|.
Handwaving shows "it doesn't matter" at the discourse level (cf. whatever).

Concepts (video clip)

Language is basically, intrinsically viewpointed.
Cognition is basically, intrinsically viewpointed.
(BECAUSE) The body is basically, intrinsically viewpointed.

Viewpoint, language and body

Cognition could not be genuinely independent of bodily experience, and language could not be independent of (embodied) cognition.

The surprising thing would be if we did NOT exploit our constant use of irrealis space understandings for other less obviously or immediately “functional” purposes.

It would also be really surprising if fictional characters and situations lacked viewpoint.

Implications

Message in a bottle: Meet me here tomorrow.

Deictic marking (e.g., here/there, this/that) is pervasive in human language. In a physical scene it marks speaker’s physical viewpoint.

BUT: cf. Rubba 1986, or Hanks 1990: a distal deictic can just as well mean social non-identification. This cooking-fire can mean the one I cook on, while that cooking fire can mean the one I don’t get to cook on. This part of town can mean my part of town, and that kind of neighborhood can mean the kind I lack an ethnic affiliation with.

Linguistic viewpoint

First, second and third person seem to be linguistic universals: I, You, Other = Speaker, Addressee, Third Party. No big surprise, since in actual communication, these distinctions are inevitable.

BUT (Rubba 1986) there are differences between so-called "impersonal" uses of English 2nd-person you and 3rd-person one or they. You shows more identification of the speaker with the referent, even though all the referents are third-person.

Linguistic viewpoint 2

Linguistic viewpoint markers 1

All the different ways that content is presented and construed differently depending on (among other things!):

Where the Speaker and Addressee are assumed to be, and what they are able to see, able to reach, etc. (The Real Space.) here, there, this, that, next door, …

When S and A are assumed to be: now, then, tomorrow, last year…

What an imagined participant can see, reach, etc., from an imagined location in some imagined space.

What the Speaker and Addressee are assumed to know, think, presuppose, and be able to calculate mentally about whatever mental space is involved. The/a, if/when/since, choice of formal/informal pronouns, presuppositional verbs like stop,...

What the Speaker and Addressee feel about the contents of the relevant spaces - how they evaluate them affectively, culturally, etc. Thrifty/stingy, maybe, hopefully,….

And what imagined participants know, think, presuppose, calculate, feel, etc. about relevant spaces.

(And more, including possession, social identification and differentiation, ….)

In short, language seems affected by just about anything about the way that a particular individual’s mental space construal is specific to that individual’s cognitive and perceptual access.

Linguistic viewpoint markers 2

Literal origo of visual access and perspective.
Social "viewpoint"
Literary "viewpoint"
Cognitive "viewpoint"

We co-experience:
Our physical visual perspective on events
Our self-location and definition of peripersonal space
Our tactile and other sensory access to the situation
Our cognitive assessment of the situation
Our emotional reaction to the situation

Video systems may give us multiple simultaneous visual perspectives on the same event; but normally all we get is one: our own.

What is “viewpoint”?

Front-back:
Asymmetric physical access, manual affordances
Asymmetric visual access
Asymmetric movement affordances

Up-down:
We have no experience of life outside an asymmetric gravitic field. Motion, vision, etc. are all affected by this.

Left-right: Dominant/non-dominant hand; asymmetric manual affordances

Relative spatial languages vs. absolute spatial languages.

Inherent and transferred asymmetry of non-human entities.

The asymmetric Self

Asymmetries in visual access correlate with asymmetries in:
Informational access attendant on visual access
Physical access to object manipulation
Motion affordances

This complex set of correlations with located visual viewpoint also correlates with evaluative differences in assessing the situation. The kid with the plate of cookies in front of her, as opposed to the one who doesn't have one, not only has different visual, tactile, etc., access, but a reason to plan differently, make different inferences, and experience the situation differently from an emotional viewpoint.

What is viewpoint, cont.

What the evidence seems to show is that children from the start react differently to humans vs. nonhumans and animates vs. inanimates, even though they only gradually develop the adult concepts of human-ness and animacy.

Relatively early shared attention loci, attention to direction of caregiver’s eye-gaze.

Even at early stages of language-learning, children show considerable interactional ability to cope with the fact that caregivers may disapprove of their actions or try to thwart them, and versatile mechanisms for getting approval.

They also request information and transmit it.

Why these belong together

Moreover, meta-awareness of even some of the basic perceptual aspects of differentiation develops slowly. Small children clearly understand that they can’t see or know everything their parents see or know - they ask questions expecting the caregiver to know more than they do, and they ask to be picked up to get a better view, as they know the grownup is getting one. BUT they tend to think that interlocutors (esp. adults??) not only know but actually see everything they do. They point and say “this” and “that” when speaking on the telephone, or to a caregiver who can’t see the object in question.

Levels of theory of mind

A late stage in all this is the acquisition of a conscious meta-awareness that other people have MINDS whose STATES may be DIFFERENT from theirs: other people may know or believe different things from what they know and believe (Wimmer, Perner, Tomasello,….), and people may feel differently about the same stimuli. (E.g. celery and goldfish crackers - cf. Gopnik.)

Levels of theory of mind 2

The ability to "put yourself in someone else's position" cognitively and emotionally is one that adults never fully learn (if they did, one supposes they would essentially have "out-of-body experiences" in other people's situations).

Incomplete shared viewpoint

BUT humans can’t unlearn or do without such viewpoint sharing.

We can’t help having physical and emotional responses to film images of humans involved in eating, crying, laughing, kissing, or hitting each other.

Reading a newspaper story about strangers who have lost their jobs, or can’t get ex-husbands to pay child support, or have overcome odds to win a sports event - these things, to a greater or lesser degree, put us in sympathy with the participants’ viewpoints.

Inasmuch as we are considerate of others in daily life, much of our consideration derives not just from “following the rules” but from being able to imagine “how the other person would feel” if we did the opposite.

Incomplete shared viewpoint 2

All primates appear to have mirror neuron circuits which (at least for certain aspects of physical interaction and spatial relations) are activated both by the Self’s motions of hand, mouth, foot and the Self’s peripersonal space, and by an observed primate’s actions and spatial relations.

The ability to maintain local coherence between differing viewpoints apparently follows from our physical perceptions of motion, space, etc., including mirror neurons; NOTE, we do not have trouble tracking which things are in our interlocutor's field of vision, even though it may be very different from ours.

This is a plausible basis on which to build later higher-level awareness of different viewpoints as parts of a coherent larger scene.

Local and global coherence

Other people’s viewpoints

Add to this a basic experience of INTERACTION with another human with viewpoint (babies interact with care-givers actively from the start). A contrast between SPEAKER and ADDRESSEE - or more generally between a communicatively expressive agent and the intended interpreting observer - is thus another deeply entrenched experiential correlation. From the start, we experience ourselves in BOTH of these two roles.

And, thanks to our understanding that everyone has Viewpoint, Viewpoint blends freely with either the Speaker or the Hearer role in the Speaker-Hearer contrast.

(1) Have the experiential correlations involved in having a Viewpoint “from the inside”.

(2) Project that kind of experiential correlation of Viewpoint onto other people, assuming that they also have that kind of perspectival experience of the world.

(3) Meta-navigate this system - be able to go back and forth between representing one viewpoint and another, and know when/whether our language requires or allows particular viewpoint representations of situations. (May S, or must S, say "I'd love to come to your party" rather than using go?)

(4) Also have some natural representation of global, less viewpointed knowledge (e.g. spatial knowledge).

(5) Be able to represent many situations linguistically in "global" as well as in "participant" viewpoints.

Adult cognition requires that speakers:

Adult cognition requires that speakers also:

(6) Project much of their systematic spatiomotor "viewpoint" structure onto their understanding of less concrete domains such as Time, social relations, cognition, selfhood.

(7) Maintain more and less perspectival models of these domains as well, and know when to use language which shows the appropriate perspectival construal.

(8) Be able to TAKE APART viewpoint blends, maintaining personal "I"-ness and "you"-ness separate from (for example) a proximal/distal structure, and use linguistic forms appropriate to this dissection.

[Diagram: a viewpoint blend.
Input 1: the S/H space, with EGO at Speaker (S: "I", H: "you").
Input 2: the deictic coordinate space.
Blend: "Can I come to your party?" - vs.: "Please come to my party."]

Experiential basis of deixis

Most languages seem to have at least a two-way distinction between this and that, here and there. Some have more complex three-way distinctions (here, there, yonder).

(1) Basis in Speaker's visual field and manual access field:
Here = within S's manual access range
There = within the visual field but outside manual access range
Yonder = outside both the visual field and the manual access field.

In a two-term system, the manual access field would be the central sense of here, the area outside the manual access and vision fields would be unmistakably there, and the area of the visual field beyond manual access would be negotiable, depending on what objects were being contrasted.

Experiential basis of deixis, 2

(2) Basis in the Speaker/Hearer contrast:
Here = near S
There = near H
Yon = away from both.
(Once again, in a two-term system, things get fuzzy.)

This is not necessarily in opposition to the analysis in terms of S’s different fields of access; we might expect that in a prototypical communicative exchange, S and H will be within visual range of each other, and that there may very possibly be more overlap between their visual fields than between their fields of manual access.

So “near H” (or “nearer to H than to S”) might well also refer to a location beyond S’s manual access, but inside S’s visual field.

Problems with the spatial view of deixis

(1) It doesn't, on its own, explain systematic extensions to time, or the independent system of temporal deixis; more on this soon!
(2) It doesn't explain all the SOCIAL, non-spatial uses of deixis (cf. Rubba 1996, Hanks, Referential Practice).
(3) It doesn't explain language-specific spatial uses (is the "this" term or the "that" term, the "come" verb or the "go" verb, the unmarked member of the pair, for example?)

A combination of problems (2) and (3) can be noted in French:
Professeur Jones n'est pas ici. (She works at UCLA, not at UCB.)
Professeur Jones n'est pas là. (She is not at her desk just now.)

Metaphoric viewpoint spaces

Social uses of here/there, this/that can readily be seen as FURTHER blends, between physical space and social space. This should be seen as coherent with the very general metaphorical spatialization of our concepts of Self and social structure.

SOCIAL RELATIONSHIP IS PHYSICAL CLOSENESS
SOCIAL ALIENATION IS PHYSICAL DISTANCE

Displaced deixis

The speaker's body is one of our most basic landmarks for understanding whatever she says. It is because bodily viewpoint infuses our cognitive and interactional structure that deixis and perspective are so pervasively manifested in language.

Yet actual here-and-now bodily viewpoint is very flexibly displaced to represent other imagined ones; we don't really know the meaning of here or the reference of a pointing gesture unless we know whether, for example, the speaker is enacting some irrealis situation.

Such displacement phenomena are present equally in language and gesture, and in spoken and signed languages.

They can usefully be analyzed as blends of the here-and-now deictic space with an imagined one.

Real-space viewpoint blends

The visual/gestural modality uses S's body to represent (among other things!):
- itself, at other times and places
- other human bodies and animal bodies

Every representation of a body brings with it the physical asymmetries of affordance and sensory access which are characteristic of bodies: in other words,
VIEWPOINT can represent VIEWPOINT
DEICTIC CENTER can represent DEICTIC CENTER
BODILY AFFORDANCES can represent BODILY AFFORDANCES

This is crucial to the ways in which gesture can represent social and abstract concepts.

Displaced gestural deixis

Displacement of deictic centers occurs in gesture as well as in language. cf. Haviland’s work, esp. Haviland 2000.

“Simple” spatial points are anything but simple.

The "same" pointing gesture is used by a Mayan compadre of Haviland to refer first to the direction in which he would find the ruins if he went to the town of Palenque (pointing AS IF FROM a location in the distant town), and then to the direction from the speech location to a local landmark. (Palenque is in the opposite direction from the speech location.)

Pointing is INDEXICAL rather than iconic? Or both?

Secondary iconicity

Very saliently, a set of deep metaphoric mappings based in the spatial source domain allows what I refer to as "secondary iconicity" effects in the visual/gestural domain.

These are crucially mappings of one deictically centered domain onto another, which is why they are naturally and saliently representable in the visual/gestural modality.

One obvious and complex example is TIME IS SPACE. Gesture, like most signed languages, normally and conventionally uses body-centered spatial deixis to represent "now"-centered temporal deixis.
FUTURE = FORWARDS, PAST = BACK
(contrast with the sideways "time line")

This is so "natural" that the connection is nearly as strong as any direct iconic mapping.

Spatiotemporal metaphors and experiential bases
(cf. Moore 2000, Nuñez and Sweetser 2001 & forthcoming)

(1) The experience of moving along a path, and encountering one location after another.
Linear mapping of locations to times; inferential structure is parallel.
Past, already-encountered locations are behind Ego; future, yet-to-be-seen locations are in front of Ego.

(2) The experience of standing and looking in front of Ego.
Asymmetry: can see in front of self, not in back. (Therefore can know what's happening in front.)
Corresponding temporal asymmetry: can know past events, not future ones.

Going to

Example: "I'm going to discuss deixis."
[Timeline diagram; labels: profiled time; "I" (subjective experience center) and deictic "viewpoint" center.]

Coming to

Example: "I'm coming to appreciate John's sense of humor."
[Timeline diagram; labels: "I", center of subjective experience; time profiled and deictic "viewpoint" center.]
(cf. Michele Emanatian 1992, Chagga 'come' and 'go': Metaphor and the development of tense-aspect. Studies in Language 16:1, 1-33.)

Nayra mara (video clip)

Reference to current physical speech-act setting:
Speaker points to a present object in the room.
Speaker points to her own body, meaning to refer to herself.
Blends already present: content and physical setting!

(e.g. points at an object: "What is wrong with that?" - gesture identifies entity in world AS part of content)

Deictic references

English speaker gestures backwards when referring to long ago...

A more complex blend: a French speaker says C'était bien avant ("it was well before (that time)", i.e. earlier), and gestures backwards. The speech here metaphorically sets up a moving-time structure, while the gesture seems to refer to an ego-centered (past is back, future is ahead) metaphorical mapping.

An Aymara speaker says nayra ("long ago") and gestures forwards. (Nuñez)

Prevalence of deictically centered ("Egocentric") models of time in gesture. (What would this be like in an "absolute" spatial language?)

Abstract described deixis

Speaker discusses planning, long-range and short-range. Speaker's body is the center from which relative futurity from a hypothetical present is calculated.

Speaker compares two topics or gives two viewpoints: Speaker's dominant hand's space represents the central topic or the speaker's viewpoint, while the non-dominant hand represents the contrasting topic or viewpoint.

Long and short range

Dynamic programming (video clip)

chess (video clip)

Mental book-keeping (video clip)

Stop and take questions (video clip)

Generalizations: more than one direction away from the center.

SIDES: Dominant vs. non-dominant:
Trajector vs. landmark
Main topic vs. secondary topic
Speaker's views vs. contrasting views

UP/DOWN: Up or down vs. level:
"normative" place on scale is on the speaker's gesture-space level
(A speaker identifying with embodiment places "reality" on level, and "abstract ideas" on a higher level opposite his face.)

Motion and event structure: Event trajectories start near the speaker and move towards farther away. (The start is a baseline; Ego is a baseline.)

Aspect (cf. Duncan): Mapping of aspectual structure from gesture to represented activity, etc.

Query: how “global” can gesture ever be? There’s always perspective, even if Ego isn’t mapped onto some represented entity.


In gesture, discourse and temporal deictic spaces are not only conceived spatially (as they are pervasively in spoken language), they are actually enacted in Real Space.

The same is true of signed language. (One more reason why Sign Linguistics needs to incorporate viewpoint from the ground up, as Liddell and others have argued.)

Enactment of other spaces in Real Space, or the use of deictically centered words with displaced senses in spoken language, need not of course diminish the cognitive and perceptual priority of Real Space. It is precisely their dependence on Real Space which gives discourse, social, and temporal spatial uses their cognitive power and flexibility.

Conclusions:

The physical viewpoint of the speaker is highly polysemous in gesture, and in some very conventional ways.

It represents much of the range of phenomena which linguists and literary analysts and speakers have referred to by the label “viewpoint” metaphorically.

It does so systematically, in ways that parallel linguistic metaphor, but are directly embodied as spoken-language metaphor is not.

No surprise that novelists use descriptions of physical situations from some specific physical vantage point, as well as evaluative and descriptive and deictic and other aspects of language, to show “character viewpoint.”

Written language is a very strange medium. In principle, it needs to step back from "meet me here tomorrow" and give information that is less situated and more explicit. In fact, the very same issues permeate written language at the next level down.

Joey sat quietly. (Is this an onlooking teacher's viewpoint?) Daddy would come soon to pick him up. Everything would be all right then. (cf. Banfield)

The police spent the morning trying to locate the kidnapped child, but they could not find her. Finally they received a telephone call. A man had spotted a small girl in a park. The kidnappers had apparently released her.


References

Cienki, Alan. 1998. Metaphoric gestures and some of their relations to verbal metaphoric expressions. In Discourse and Cognition, ed. J-P Koenig, 189-204. Stanford CA: CSLI Publications.

Hanks, William. 1990. Referential Practice: Language and Lived Space among the Maya. University of Chicago Press.

Haviland, John. 2000. Pointing, gesture spaces and mental maps. In McNeill (2000), pp. 13-46.

Kendon, Adam. 2000. Language and gesture: unity or duality? In McNeill (2000), pp. 47-63.

Kendon, Adam. 1995. Gestures as illocutionary and discourse structure markers in southern Italian conversation. Journal of Pragmatics 23:3, pp. 247-279.


Langacker, Ronald W. 1987, 1991. Foundations of Cognitive Grammar. Stanford: Stanford University Press.

_______. 1990. Subjectification. Cognitive Linguistics 1: 5-38.

Levinson, Stephen. 2003. Space in Language and Cognition. Cambridge: Cambridge University Press.

Liddell, Scott. 1998. Grounded blends, gestures, and conceptual shifts. Cognitive Linguistics 9:3, 283-314.

McNeill, David (ed.). 2000. Language and Gesture. Cambridge: Cambridge University Press.


Sweetser, Eve. 1990. From Etymology to Pragmatics. Cambridge: Cambridge University Press.

_______. 1998. Regular metaphoricity in gesture: bodily-based models of speech interaction. In Actes du 16e Congrès International des Linguistes (CD-ROM). Elsevier.

Taub, Sarah. 2001. Language from the Body. Cambridge: Cambridge University Press.

Traugott, Elizabeth Closs. 1989. On the rise of epistemic meanings in English. Language 65: 31-55.

_______. 1995. Subjectification in grammaticalization. In Dieter Stein and Susan Wright (eds.), Subjectivity and Subjectivisation in Language. Cambridge: Cambridge University Press. Pp. 31-54.

Traugott, Elizabeth Closs and Richard Dasher. 2002. Regularity in Semantic Change. Cambridge: Cambridge University Press.

Webb, Rebecca. 1996. Linguistic features of metaphoric gestures. Ph.D. dissertation, University of Rochester.