
    3. Phonology

1. Introduction
2. Structure
3. Modality effects
4. Iconicity effects
5. Conclusion
6. Literature

    Abstract

This chapter is concerned with the sub-lexical structure of sign language phonology: features and their organization into phonological units, such as the segment, syllable, and word. It is organized around three themes – structure, modality, and iconicity – because these themes have been well studied since the inception of the field and they touch on the reasons why the consideration of sign languages is essential if one wishes to understand the full range of possibilities of the phonology of natural languages. The cumulative work described here makes two main arguments. First, modality affects the phonological representation in sign and spoken languages; that is, the phonological structure represents the strengths of the phonetic and physiological systems employed. Without a comparison between sign and spoken languages, it is easy to lose sight of this point. Second, iconicity works with phonology, not against it. It is one of the pressures – like ease of perception and ease of production – that shape a phonological system. This interaction is more readily seen in sign languages because of the availability of visual iconicity and the ease with which it is assumed by phonological structures.

    1. Introduction

Why should phonologists, who above all else are fascinated with the way things sound, care about systems without sound? The short answer is that the organization of phonological material is as interesting as the phonological material itself – whether it is of spoken or sign languages. Moreover, certain aspects of work on spoken languages can be seen in a surprising new light, because sign languages offer a new range of possibilities both articulatorily and perceptually.

In this chapter the body of work on the single sign will be described under the umbrella terms structure, modality, and iconicity. Under the term structure is included all the work that showed that sign languages were natural languages with demonstrable structure at all levels of the grammar including, of course, phonology. Much progress has been achieved toward the aim of delineating the structures, distribution, and operations in sign language phonology, even though this work is by no means over and debates about the segment, feature hierarchies, contrast, and phonological operations continue. For now, it will suffice to say that it is well established crosslinguistically that sign languages have hierarchical organization of structures analogous to those of spoken languages. Phonologists are in a privileged place to see differences between sign and spoken languages, because, unlike semantics or syntax, the language medium affects the organization of the phonological system. This chapter deals with the word-sized unit (the sign) and phonological elements relevant to it; phonetic structure and prosodic structure above the level of the word are dealt with in chapter 2 and chapter 4 of the handbook, respectively.

Taken together, the five sign language parameters of Handshape, Place of Articulation (where the sign is made), Movement (how the articulators move), Orientation (the hands' relation towards the Place of Articulation), and Non-manual behaviors (what the body and face are doing) function similarly to the cavities, articulators, and features of spoken languages. Despite their different content, these parameters (i.e., phonemic groups of features) in sign languages are subject to operations that are similar to their counterparts in spoken languages. These broad-based similarities must be seen, however, in light of important differences due to modality and iconicity effects on the system. Modality addresses the effect of peripheral systems (i.e., visual/gestural vs. auditory/vocal) on the very nature of the phonological system that is generated (see also chapter 25). Iconicity refers to the non-arbitrary relationships between form and meaning, either visual/spatial iconicity in the case of sign languages (Brennan 1990, 2005), or sound symbolism in the case of spoken languages (Hinton/Nichols/Ohala 1995; Bodomo 2006; see also chapter 18).

This chapter will be structured around the three themes of structure, modality, and iconicity because these issues have been studied in sign language phonology (indeed, in sign language linguistics) from the very beginning. Section 2 will outline the phonological structures of sign languages, focusing on important differences from and similarities to their spoken language counterparts. Section 3 will discuss modality effects by using a key example of word-level phonotactics. I will argue that modality effects allow sign languages to occupy a specific typological niche, based on signal processing and experimental evidence. Section 4 will focus on iconicity. Here I will argue that this concept is not in opposition to arbitrariness; instead, iconicity co-exists along with other factors, such as ease of perception and ease of production, that contribute to sign language phonological form.

    2. Structure

    2.1. The word and sublexical structure

The structure in Figure 3.1 shows the three basic manual parameters – Handshape (HS), Place of Articulation (POA), and Movement (MOV) – in a hierarchical structure from the Prosodic Model (Brentari 1998), which will be used throughout the chapter to make generalizations across sets of data. This structure presents a fundamental difference between sign and spoken languages. Besides the different featural content, the most striking difference between sign and spoken languages is the hierarchical structure itself – i.e., the root node at the top of the structure is an entire lexeme, a stem, not a consonant- or vowel-like unit. This is a fact that is – if not explicitly stated – inferred in many models of sign language phonology (Sandler 1989; Brentari 1990a, 1998; Channon 2002; van der Hulst 1993, 1995, 2000; Sandler/Lillo-Martin 2006).

Fig. 3.1: The hierarchical organization of a sign's Handshape, Place of Articulation, and Movement in the Prosodic Model (Brentari 1998).

Both sign and spoken languages have simultaneous structure, but the representation in Figure 3.1 encodes the fact that a high number of features are specified only once per lexeme in sign languages. This idea will be described in detail below. Since the beginning of the field there has been debate about how much to allow the simultaneous aspects of sublexical sign structure to dominate the representation: whether sign languages have the same structures and structural relationships as spoken languages, but with lots of exceptional behavior, or a different structure entirely. A proposal such as the one in Figure 3.1 is proposing a different structure, a bold move not to be taken lightly. Based on a wide range of available evidence, it appears that the simultaneous structure of words is indeed more prevalent in sign than in spoken languages. The point here is that the root node refers to a lexical unit, rather than a C- or V-unit or a syllabic unit.

The general concept of 'root-as-lexeme' in sign language phonology accurately reflects the fact that sign languages typically specify many distinctive features just once per lexeme: not once per segment or once per syllable, but once per word. Tone in tonal languages and features that harmonize across a lexeme (e.g., vowel features and nasality) behave this way in spoken languages, but fewer features seem to have this type of domain in spoken than in sign languages. And when features do operate this way in spoken languages, it is not universal across spoken languages. In sign languages a larger number of features operate this way, and they do so universally across the sign languages that have been well studied to date.
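As a rough illustration of this generalization, the sketch below (my own, not the chapter's; the frame encoding and feature names are invented for illustration) treats a sign as a sequence of phonetic frames and checks that the inherent features stay constant across the lexeme while prosodic values such as aperture are free to change.

    # A minimal sketch of 'root-as-lexeme': inherent features (selected
    # fingers, major place) must be constant within a lexeme, while
    # prosodic values (here, aperture) may change. Encoding is invented.
    def respects_root_as_lexeme(frames: list) -> bool:
        inherent_keys = ("selected_fingers", "major_place")
        first = frames[0]
        return all(frame[k] == first[k]
                   for frame in frames for k in inherent_keys)

    throw = [  # ASL THROW: aperture changes, inherent features do not
        {"selected_fingers": "index+middle", "major_place": "x-plane",
         "aperture": "closed"},
        {"selected_fingers": "index+middle", "major_place": "x-plane",
         "aperture": "open"},
    ]
    assert respects_root_as_lexeme(throw)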

    2.2. The Prosodic Model

In the space provided, I can provide a complete discussion neither of all the phonological models nor of the internal debates about particular elements of structure; please see Brentari (1998) and Sandler and Lillo-Martin (2006) for a more comprehensive treatment of these matters. I will be as theory-neutral as possible, but given that many points made in the chapter refer to the Prosodic Model, I will provide a brief overview here of the major structures of a sign in the Prosodic Model for Handshape, Place of Articulation, Movement, and Orientation. Non-manual properties of signs will be touched on only as necessary, since their sublexical structure is not well worked out in any phonological model of sign language, and, in fact, it plays a larger role in prosodic structure above the level of the word (sign); see chapter 4. The structure follows Dependency Theory (Anderson/Ewen 1987; van der Hulst 1993) in that each node is maximally binary branching, and each branching structure has a head, which is more elaborate, and a dependent, which is less elaborate. The specific features will be introduced only as they become relevant; the discussion below will focus on the class nodes of the feature hierarchy.
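To make the Dependency Theory geometry concrete, here is a minimal sketch (my own; the node labels and the particular head/dependent assignment are illustrative placeholders, not the model's actual claims) of maximally binary branching with a head and a dependent at each branch.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        label: str
        head: Optional["Node"] = None        # the more elaborate branch
        dependent: Optional["Node"] = None   # the less elaborate branch

        def well_formed(self) -> bool:
            # a dependent without a head violates headedness
            if self.dependent is not None and self.head is None:
                return False
            kids = [k for k in (self.head, self.dependent) if k is not None]
            return all(k.well_formed() for k in kids)

    root = Node("root (lexeme)",
                head=Node("inherent features",
                          head=Node("HS"), dependent=Node("POA")),
                dependent=Node("prosodic features (MOV)"))
    assert root.well_formed()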

The inherent feature structure (Figure 3.2a) includes both Handshape and Place of Articulation. The Handshape (HS) structure (Figure 3.2ai) specifies the active articulator. Moving down the tree in (2ai), the head and body (non-manual articulators) can be active articulators in some signs, but in most cases the arm and hands are the active articulators. The manual node branches into the dominant (H1) and non-dominant (H2) hands. If the sign is two-handed, as in SIT and HAPPEN (Figures 3.3aiii and 3.3aiv), it will have both H1 and H2 features. There are a number of extremely interesting issues about two-handed signs, since nothing like this exists in spoken languages (i.e., two articulators potentially active at the same time); unfortunately, these issues will not be covered in this chapter in the interest of space (Battison 1978; Crasborn 1995, submitted; Brentari 1998). If the sign is one-handed, as in WE, SORRY, and THROW (Figures 3.3ai, 3.3aii, and 3.3av), it will have only H1 features. The H1 features enable each contrastive handshape in a sign language to be distinguished from every other. These features indicate, for instance, which fingers are 'active' (selected), how many of these selected fingers there are (quantity), and whether they are straight, bent, flat, or curved (joints). The Place of Articulation (POA) structure (Figure 3.2aii) specifies the passive articulator, divided into the three dimensional planes: horizontal (y-plane), vertical (x-plane), and midsagittal (z-plane). If the sign occurs in the vertical plane, then it might also require further specification for the major place on the body where the sign is articulated (head, torso, arm, H2) and even for a particular location within that major body area; each major body area has eight possibilities. The POA specifications allow all of the contrastive places of articulation in a given sign language to be distinguished from one another. The inherent features have only one specification per lexeme; that is, no changes in values.

Returning to our point of root-as-lexeme, we can see this concept at work in the signs illustrated in Figure 3.3a. There is just one Handshape in the first three signs: WE (3.3ai), SORRY (3.3aii), and SIT (3.3aiii). The Handshape does not change at all throughout articulation of the sign. In each case, the letters '1', 'S', and 'V' stand for entire feature sets that specify the given handshape. In the last sign, THROW (3.3av), the two fingers change from closed to open, but the selected fingers used in the handshape do not change. The opening is itself a type of movement, which is described below in more detail. Regarding Place of Articulation, even though it looks like the hand starts and stops in different places in each sign, the major region where the sign is articulated is the same: the torso in WE and SORRY, the horizontal plane (y-plane) in front of the signer in SIT and HAPPEN, and the vertical plane (x-plane) in front of the signer in THROW. These are examples of contrastive places of articulation within the system, and the labels given in Figure 3.3b stand for the entire Place of Articulation structure.

Fig. 3.2: The feature geometry for Handshape, Place of Articulation, and Movement in the Prosodic Model.
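The POA combinatorics just described can be counted directly; in the sketch below (an illustration of the arithmetic only, with invented location indices), four major body places with eight locations each yield 32 body-anchored POA specifications.

    MAJOR_PLACES = ["head", "torso", "arm", "H2"]
    LOCATIONS_PER_PLACE = 8   # each major body area has eight locations

    body_poas = [(place, loc) for place in MAJOR_PLACES
                 for loc in range(LOCATIONS_PER_PLACE)]
    print(len(body_poas))     # -> 32 body-anchored POA specifications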

The prosodic feature structure in Figure 3.1 (shown in detail in Figure 3.2b) specifies movements within the sign, such as the aperture change just mentioned for the sign THROW (3.3av). These features allow for changes in their values within a single root node (lexeme) while the inherent features do not, and this phonological behavior is part of the justification for isolating the movement features on a separate autosegmental tier. Each specification indicates which anatomical structures are responsible for articulating the movement. Going from top to bottom, the more proximal joints of the shoulder and arm are at the top and the more distal joints of the wrist and hand are at the bottom. In other words, the shoulder articulating the setting movement in WE is located closer to the center of the body than the elbow that articulates a path movement in SORRY and SIT. A sign having an orientation change (e.g., HAPPEN) is articulated by the forearm or wrist, a joint that is even further away from the body's center, and an aperture change (e.g., THROW) is articulated by joints of the hand, furthest away from the center of the body. Notice that it is possible to have two simultaneous types of movement articulated together; the sign THROW has both a path movement and an aperture change. Despite their blatantly articulatory labels, these features may have an articulatory or a perceptual basis (see Crasborn 2001). The trees in Figure 3.3c demonstrate the different types of movement features for the signs in Figure 3.3a. Note that Figures 3.3ai, 3.3aiv, and 3.3av (WE, HAPPEN, and THROW) all have changes in their movement feature values: one contrastive feature, but changes in values.

Fig. 3.3: Examples of ASL signs that demonstrate how the phonological representation organizes sublexical information in the Prosodic Model (3a). Inherent features (HS and POA) are specified once per lexeme in (3b), and prosodic features (PF) may have different values within a lexeme; PF features also generate the timing units (x-slots) (3c).
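The proximal-to-distal ordering of the movement features can be rendered as a simple ranking; the sketch below is my own illustration (the numeric ranks are arbitrary), pairing each movement type named above with its articulating joint.

    JOINT_RANK = {            # higher = more proximal = higher in the tree
        "shoulder": 4,        # setting movement, e.g. WE
        "elbow": 3,           # path movement, e.g. SORRY, SIT
        "wrist/forearm": 2,   # orientation change, e.g. HAPPEN
        "hand": 1,            # aperture change, e.g. THROW
    }

    def most_proximal(joints: list) -> str:
        return max(joints, key=JOINT_RANK.get)

    # THROW combines a path movement with an aperture change:
    print(most_proximal(["elbow", "hand"]))   # -> "elbow"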

Battison (1978) proposed Orientation as a major manual parameter alongside Handshape, Place of Articulation, and Movement, but there are only a few minimal pairs based on Orientation alone. In the Prosodic Model, Orientation is derivable from a relation between the handpart specified in the Handshape structure and the Place of Articulation, following a convincing proposal by Crasborn and van der Kooij (1997). The mini-representations of the signs in Figure 3.3 show their orientation as well. The position of the fingertip of the 1-handshape towards the POA determines the hand's orientation in WE and THROW; the position of the back of the fingers towards the torso determines the hand's orientation in SORRY; and the position of the front of the fingers towards the POA determines the hand's orientation in SIT and HAPPEN.

The timing slots (segments) are projected from the prosodic structure, shown as x-slots in Figure 3.2b. Path features generate two timing slots; all other features generate one timing slot. The inherent features do not generate timing slots at all; only movement features can do this in the Prosodic Model. When two movement components are articulated simultaneously, as in THROW, they align with one another and only two timing slots are projected onto the timing tier. The movement features play an important role in the sign language syllable, discussed in the next section.
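The slot-projection generalization can be stated compactly; the following sketch (mine, under the simplifying assumption that a movement is just a list of component labels) returns the number of x-slots the Prosodic Model would project.

    # Path features project two slots; any other movement feature projects
    # one; inherent features alone project none; simultaneous movement
    # components align rather than add slots.
    def timing_slots(movement_components: list) -> int:
        if not movement_components:
            return 0
        return max(2 if m == "path" else 1 for m in movement_components)

    print(timing_slots(["path"]))              # SORRY, SIT: 2 slots
    print(timing_slots(["path", "aperture"]))  # THROW: aligned, still 2
    print(timing_slots(["aperture"]))          # aperture change alone: 1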

    2.3. The syllable

The syllable is as fundamental a unit in sign languages as it is in spoken languages. One point of nearly complete consensus across models of sign language phonology is that movements are the nuclei of the syllable. This idea has its origin in the correlation between the function of movements and the function of vowels (Liddell 1984; Brentari 2002): movements are the 'medium' by which signs are visible from a considerable distance, just as vowels are the 'medium' by which spoken words are audible from a considerable distance. This physical fact was determined to have theoretical consequences and was developed into a theory of syllable structure by Brentari (1990a) and Perlmutter (1992). The arguments for the syllable are based on its importance to the system (see also Jantunen/Takkinen 2010). They are as follows:

    2.3.1. The babbling argument

Petitto and Marentette (1991) observed that a sequential dynamic unit formed around a phonological movement appears in young Deaf children at the same time as hearing children start to produce syllabic babbling. Because the distributional and phonological properties of such units are analogous to the properties usually associated with syllabic babbling, this activity has been referred to as manual babbling. Like syllabic babbling, manual babbling includes a great deal of repetition of the same movement, and it likewise makes use of only a part of the phonemic units available in a given sign language. The period of manual babbling develops without interruption into the first signs (just as syllabic babbling continues without interruption into the first words in spoken languages). Moreover, manual babbling can be distinguished from excitatory motor hand activity and other communicative gestures by its rhythmic timing, velocity, and spectral frequencies (Petitto 2000).

    2.3.2. The minimal word argument

This argument is based on the generalization that all well-formed (prosodic) words must contain at least one syllable. In spoken languages, a vowel is inserted to ensure well-formedness; in sign languages, a movement is inserted for the same reason. Brentari (1990b) observed that American Sign Language (ASL) signs without a movement in their input, such as the numeral signs ONE to NINE, add a small epenthetic path movement when used as independent words, signed one at a time. Jantunen (2007) observed that the same is true in Finnish Sign Language (FinSL), and Geraci (2009) has observed a similar phenomenon in Italian Sign Language (LIS).

    2.3.3. Evidence of a sonority hierarchy

Many researchers have proposed sonority hierarchies based on 'movement visibility' (Corina 1990; Perlmutter 1992; Sandler 1993; Brentari 1993). Such a sonority hierarchy is built into the prosodic features' structure in Figure 3.2b, since movements represented by the more proximal joints higher in the structure are more visible than those articulated by the distal joints represented lower in the structure. For example, movements executed by the elbow are typically more easily seen from further away than those articulated by opening and closing of the hand; see Crasborn (2001) for experimental evidence demonstrating this point. Because of this finding, some researchers have observed that movements articulated by more proximal joints are a manifestation of visual 'loudness' (Crasborn 2001; Sandler/Lillo-Martin 2006). In both spoken and sign languages, more sonorous elements of the phonology are louder than less sonorous ones (/a/ is louder than /i/; /l/ is louder than /b/, etc.). The evidence from the nativization of fingerspelled words, below, demonstrates that sonority has also infiltrated the word-level phonotactics of sign languages.

In a study of fingerspelled words used in a series of published ASL lectures on linguistics (Valli/Lucas 1992), Brentari (1994) found that fingerspelled forms containing strings of eight or more handshapes representing the English letters were reduced in a systematic way to forms that contain fewer handshapes. The remaining handshapes are organized around just two movements. This is a type of nativization process: native signs conform to a word-level phonotactic of having no more than two syllables. By native signs I am referring to those that appear in the core vocabulary, including monomorphemic forms and lexicalized compounds (Brentari/Padden 2001). Crucially, the movements retained were the most visible ones, argued to be the most sonorous ones; e.g., movements made by the wrist were retained while aperture changes produced by the hand were deleted. Figure 3.4 contains an example of this process: the carefully fingerspelled form p-h-o-n-o-l-o-g-y is reduced to the underlined letters, which are the letters responsible for the two wrist movements.

Fig. 3.4: An example of nativization of the fingerspelled word p-h-o-n-o-l-o-g-y, demonstrating evidence of the sonority hierarchy by organizing the reduced form around the two more sonorous wrist movements.
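The logic of the reduction can be sketched as follows (my own illustration: the pairing of letters with articulating joints is invented, not Brentari's (1994) analysis): of all the movements in a long fingerspelled string, keep only the two most sonorous.

    SONORITY = {"shoulder": 4, "elbow": 3, "wrist": 2, "hand": 1}

    def nativize(transitions: list) -> list:
        # transitions: (letter, joint articulating its movement) pairs
        order = sorted(range(len(transitions)), reverse=True,
                       key=lambda i: SONORITY[transitions[i][1]])
        kept = sorted(order[:2])   # at most two movements survive
        return [transitions[i][0] for i in kept]

    word = [("p", "hand"), ("h", "hand"), ("o", "wrist"), ("n", "hand"),
            ("o", "hand"), ("l", "hand"), ("o", "wrist"), ("g", "hand"),
            ("y", "hand")]
    print(nativize(word))   # -> ['o', 'o']: the two wrist movements remain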

    2.3.4. Evidence for light vs. heavy syllables

Further evidence for the syllable comes from a division between movements that contain just one movement element (features on only one tier of Figure 3.2b are specified), which behave as light syllables (e.g., WE, SORRY, and SIT in Figure 3.3), and those that contain more than one simultaneous movement element, which behave as heavy syllables (e.g., THROW in Figure 3.3). It has been observed in ASL that a process of nominalization by movement reduplication can apply only to forms that consist of a light syllable (Brentari 1998). In other words, holding other semantic factors constant, there are signs, such as SIT, that have two possible forms: a verbal form with the whole sequential movement articulated once and a nominal form with the whole movement articulated twice in a restrained manner (Supalla/Newport 1978). The curious fact is that the verb SIT has such a corresponding reduplicated nominal form (CHAIR), while THROW does not. Reduplication is not the only type of nominalization process in sign languages, so when reduplication is not possible, other forms of nominalization are available (see Shay 2002). These facts can be explained by the following generalization: ceteris paribus, the set of forms that allow reduplication have just one simultaneous movement component and are light syllables, while those that disallow reduplication, such as THROW, have two or more simultaneous movement elements and are therefore heavy. A process in FinSL requiring the distinction between heavy and light syllables has also been observed by Jantunen (2007) and Jantunen and Takkinen (2010). Both analyses call syllables with one movement component light, and those with more than one heavy.
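The weight criterion reduces to counting simultaneous movement components; here is a minimal sketch of the generalization (mine, not the chapter's formalism).

    def syllable_weight(movement_components: list) -> str:
        # one simultaneous movement component = light; two or more = heavy
        return "light" if len(movement_components) == 1 else "heavy"

    def allows_nominal_reduplication(movement_components: list) -> bool:
        # only light syllables admit nominalizing reduplication in ASL
        return syllable_weight(movement_components) == "light"

    print(allows_nominal_reduplication(["path"]))              # SIT: True
    print(allows_nominal_reduplication(["path", "aperture"]))  # THROW: False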

    2.4. The segment and feature organization

This is an area of sign language phonology where there is still lively debate. Abstracting away from the lowest level of representation, the features themselves (e.g., [one], [all], [flexed], etc.), I will try to summarize one trend, namely features and their relation to segmental (timing) structure. Figure 3.5 shows schematic structures capturing the changes in perspective on how timing units, or segments, are organized with respect to the feature material throughout the 50 years of work in this area. All models in Figure 3.5 are compatible with the idea of 'root-as-lexeme' described in section 2.1; the root node at the top of the structure represents the lexeme.

Fig. 3.5: Schematic structures showing the relationship of segments to features in different models of sign language phonology (left to right): the Cheremic Model (Stokoe 1960; Stokoe et al. 1965), the Hold-Movement Model (Liddell/Johnson 1989), the Hand-Tier Model (Sandler 1989), the Dependency Model (van der Hulst 1995), and the Prosodic Model (Brentari 1998).

Figure 3.5a represents Stokoe's Cheremic Model, described in Sign Language Structures (1960). The sub-lexical parameters of Handshape, Place of Articulation, and Movement had no hierarchical organization and, like the spoken language models of the 1950s (e.g., Bloomfield 1933), were based entirely on phonemic structure (i.e., minimal pairs). It was the first linguistic work on sign languages of any type, and the debt owed to Stokoe is enormous for bringing the sub-lexical parameters of signs to light.

Thirty years later, Liddell and Johnson (1989) looked primarily to sequential structure (timing units) to organize phonological material (see Figure 3.5b). Their Hold-Movement Model was also a product of spoken language models of the period, which were largely segmental (Chomsky/Halle 1968) but were moving in the direction of non-linear phonology, starting with autosegmental phonology (Goldsmith 1976). Segmental models depended heavily on slicing up the signal into units of time as a way of organizing the phonological material; in such a model, the consonant and vowel units of spoken language, which can be identified sequentially, take center stage. Liddell and Johnson called the static Holds the 'consonants' and the Movements the 'vowels' of sign languages. While this type of division is certainly possible phonetically, several problems related to the phonological distribution of Holds make this model implausible. First, the presence and duration of most instances of holds are predictable (Perlmutter 1992; Brentari 1998). Second, length is not contrastive in movements or holds; a few morphologically related forms are realized by lengthening – e.g., [intensive] forms have a geminated first segment, such as GOOD vs. GOOD[intensive] 'very good', LATE vs. LATE[intensive] 'very late' – but no lexical contrast is achieved by segment length. Third, the feature matrix of all of the holds in a given lexeme contains a great deal of redundant material (Sandler 1989; Brentari 1990a, 1998).

As spoken language theories became increasingly non-linear (Clements 1985; Sagey 1986), sign language phonology re-discovered and re-acknowledged the non-linear, simultaneous structure of these languages. The Hand-Tier Model (Sandler 1989 and Figure 3.5c) and all subsequent models use feature geometry to organize the properties of the sign language parameters according to phonological behavior and articulatory properties. The Hand-Tier Model might be considered balanced in terms of sequential and simultaneous structure. Linear segmental timing units still hold a prominent place in the representation, but Handshape was identified as having non-linear (autosegmental) properties. The Moraic Model (Perlmutter 1992) is similar to the Hand-Tier Model in hierarchical organization, but this approach uses morae, a different type of timing unit (Hyman 1985; Hayes 1989).

Two more recent models have placed the simultaneous structure back in central position and have made further use of feature geometry. In these models timing units play a role, but this role is not as important as the one they play in spoken languages. The Dependency Model (van der Hulst 1993, 1995; see Figure 3.5d) derives timing slots from the dependent features of Handshape and Place of Articulation. In fact, this model calls the root node a segment/lexeme and refers to the timing units as timing (X-)slots, shown at the bottom of this representation. The Movement parameter is demoted in this model, and van der Hulst argues that most of movement can be derived from Handshape and Place of Articulation features, despite its role in the syllable (discussed in section 2.3) and in morphology (see sections 4.3 and 4.4). The proposals by Uyechi (1995) and Channon (2002) are similar in this regard. This differs from the Hand-Tier and Prosodic Models, which award movement a much more central role in the structure.

Like the Dependency Model, the Prosodic Model (already discussed in section 2.2; Brentari 1990a, 1998) derives segmental structure. It recognizes that Handshape, Place of Articulation, and Movement all have autosegmental properties. The role of the sign language syllable is acknowledged by incorporating it into the representation (see Figure 3.5e). Because of their role in the syllable and in generating segments, prosodic (Movement) features are set apart from Handshape and Place of Articulation on their own autosegmental tier, and the skeletal structure is derived from them, as in the Dependency Model described above. In Figure 3.5e timing slots are at the bottom of the representation.

To summarize these sections on sign language structure, it is clear that sign languages have all of the elements one might expect to see in a spoken language phonological system, yet their organization and content are somewhat different: features are organized around the lexeme, and segmental structure assumes a more minor role. What motivates this difference? One might hypothesize that this is in part due to the visual/gestural nature of sign languages, and this topic of modality effects will be taken up in section 3.

    3. Modality effects

The modality effects described here refer to the influence that the phonetics (or communication mode) used in a signed or spoken medium has on the very nature of the phonological system that is generated. How is communication modality expressed in the phonological representation? Brentari (2002) describes several ways in which signal processing differs in sign and spoken languages (see also chapters 2 and 25). 'Simultaneous processing' is a cover term for our ability to process various input types presented roughly at the same time (e.g., pattern recognition; paradigmatic processing in phonological terms), for which the visual system is better equipped than audition. 'Sequential processing' is our ability to process temporally discrete inputs into temporally discrete events (e.g., ordering and sequencing of objects in time; syntagmatic processing in phonological terms), for which the auditory system is better equipped than vision. I am claiming that these differences have consequences for the organization of units in the phonology at the most fundamental level. Word shape will be used as an example of how modality effects ultimately become reflected in phonological and morphological representations.

    3.1. Word shape

In this section, first outlined in Brentari (1995), the differences in the shape of the canonical word in sign and spoken languages will be described, first in terms of typological characteristics alone, and then in terms of factors due to communication modality. Canonical word shape refers to the preferred phonological shape of words in a given language. As an example of such canonical word properties, many languages, including the Bantu language Shona (Myers 1987) and the Austronesian language Yidin (Dixon 1977), require that all words be composed of binary branching feet. With regard to statistical tendencies at the word level, there is also a preferred canonical word shape exhibited by the relationship between the number of syllables and morphemes in a word, and it is here that sign languages differ from spoken languages. Signed words tend to be monosyllabic (Coulter 1982) and, unlike spoken languages, sign languages have an abundance of monosyllabic, polymorphemic words, because most affixes in sign languages are feature-sized and are layered simultaneously onto the stem rather than concatenated (see also Aronoff/Meir/Sandler (2005) for a discussion of this point). This relationship between syllables and morphemes is a hybrid measurement, both phonological and morphological in nature, due in part to the shape of stems and in part to the type of affixal morphology in a given language. A spoken language such as Hmong contains words that tend to be monosyllabic and monomorphemic, with just two syllable positions (CV) but a rather large segmental inventory of 39 consonants and 13 vowels. The distinctive inventory of consonants includes voiced and voiceless nasals, as well as several types of secondary articulations (e.g., pre- and post-nasalized obstruents, lateralized obstruents); the inventory of vowels includes monophthongs and diphthongs, and there are seven contrastive tones, both simple and contour (Golston/Yang 2001; Andruski/Ratliff 2000). Affixal morphology is linear, but there isn't a great deal of it. In contrast, a language such as West Greenlandic contains stems of a variety of shapes and a rich system of affixal morphology that lengthens words considerably (Fortescue 1984). In English, stems tend to be polysyllabic, and there is relatively little affixal morphology. In sign languages, words tend to be monosyllabic, even when they are polymorphemic. An example of such a form, re-presented from Brentari (1995, 633), is given in Figure 3.6; this form means 'two bent-over upright-beings advance-forward carefully side-by-side' and contains at least six morphemes in a single syllable. All of the classifier constructions in Figure 3.10 (discussed later in section 4) are monosyllabic, as are the agreement forms in Figure 3.11. There is also a large amount of affixal morphology, but most of these affixes are smaller than a segment in size; hence, both polymorphemic and monomorphemic words are typically just one syllable in length. Table 3.1 schematizes the canonical word shape in terms of the number of morphemes and syllables per word.

Fig. 3.6: An example of a monosyllabic, polymorphemic form in ASL: 'two bent-over upright-beings advance-forward carefully side-by-side'.

Tab. 3.1: Canonical word shape according to the number of syllables and morphemes per word

                     monosyllabic      polysyllabic
     monomorphemic   Hmong             English, German, Hawaiian
     polymorphemic   sign languages    West Greenlandic, Turkish, Navajo

This typological fact about sign languages has been attributed to communication modality, as a consequence of their visual/gestural nature. Without a doubt, spoken languages have simultaneous phenomena in phonology and morphophonology, such as tone, vowel harmony, nasal harmony, ablaut marking (e.g., the preterite in English: 'sing' [pres.]/'sang' [preterite], 'ring' [pres.]/'rang' [preterite]), and even person marking in Hua, indicated by the [±back] feature on the vowel (Haiman 1979). There is also the nonconcatenative morphology found in Semitic languages, another type of simultaneous phenomenon, where lexical roots and grammatical vocalisms alternate with one another in time. Even collectively, however, this does not approach the degree of simultaneity in sign languages, because many features are specified once per stem to begin with: one Handshape, one Place of Articulation, one Movement. In addition, the morphology is feature-sized and layered onto the same monosyllabic stem, adding additional features but no more linear complexity. The result is that sign languages have two sources of simultaneity – one phonological and another morphological. I would argue that it is the combination of these two types of simultaneity that causes sign languages to occupy this typological niche (see also Aronoff et al. (2004) for a similar argument). Many researchers since the 1960s have observed a preference for simultaneity of structure in sign languages, but for this particular typological comparison it was important to have understood the nature of the syllable in sign languages and its relationship to the Movement component (Brentari 1998).


Consider the typological fact about canonical word shape just described from the perspective of the peripheral systems involved and their particular strengths in signal processing, described in detail in chapters 2 and 25. What I have argued here is that signal processing differences between the visual and auditory systems have typological consequences for the shape of words, a notion that goes to the heart of what a language looks like. In the next section we explore this claim experimentally using a word segmentation task.

    3.2. Word segmentation is grounded in communication modality

If this typological difference between words in sign and spoken languages is deeply grounded in communication modality, it should be evident in populations with different types of language experience. From a psycholinguistic perspective, this phenomenon of word shape can be fruitfully explored using word segmentation tasks, because they can address how language users with different experience handle the same types of items. We discuss such studies in this section. In other words, if the typological niche in Table 3.1 is due to the visual nature of sign languages, rather than to historical similarity or language-particular constraints, then signers of different sign languages and non-signers should segment nonsense strings of signed material into word-sized units in the same way.

The cues that people use to make word segmentation decisions are typically put into conflict with each other in experiments, in order to determine their relative salience to perceivers. Word segmentation judgments in spoken languages are based on (i) the rhythmic properties of metrical feet (syllabic or moraic in nature), (ii) segmental cues, such as the distribution of allophones, and (iii) domain cues, such as the spreading of tone or nasality. Within the word, the first two of these are 'linear' or 'sequential' in nature, while domain cues are simultaneous in nature – they are co-extensive with the whole word. These cues have been put into conflict in word segmentation experiments in a number of spoken languages, and it has been determined crosslinguistically that rhythmic cues are more salient when put into conflict with domain cues or segmental cues (Vroomen/Tuomainen/Gelder 1998; Jusczyk/Cutler/Redanz 1993; Jusczyk/Hohne/Bauman 1999; Houston et al. 2000). By way of background, while both segmental and rhythmic cues in spoken languages are realized sequentially, segmental alternations, such as knowing the allophonic form that appears in coda vs. onset position, require language-particular knowledge at a rather sophisticated level: several potential allophonic variants can be associated with different positions in the syllable or word, though infants master this sometime between 9 and 12 months of age (Jusczyk/Hohne/Bauman 1999). Rhythmic cues unfold more slowly than segmental alternations and require less specialized knowledge about the grammar. For instance, there are fewer degrees of freedom (e.g., strong vs. weak syllables in 'chil.dren', 'break.fast'), and there are only a few logically possible alternatives in a given word: if we assume that there is at least one prominent syllable in every word, two-syllable words have three possible prominence patterns and three-syllable words have seven. Incorporating modality into the phonological architecture of spoken languages would help explain why certain structures, such as the trochaic foot, may be so powerful a cue to word learning in infants (Jusczyk/Hohne/Bauman 1999).
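The counts just cited follow from simple arithmetic: if each syllable is either prominent or not, and at least one must be prominent, an n-syllable word allows 2^n − 1 prominence patterns. A one-line check:

    for n in (2, 3):
        print(n, 2**n - 1)   # -> 2 3 and 3 7, as cited above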


Word-level phonotactic cues are available for sign languages as well, and these have also been used in word segmentation experiments. Rhythmic cues are not used at the word level in ASL; they begin to be in evidence at the phrasal level (see Miller 1996; see also chapter 4 on visual prosody). The word-level phonotactics described in (1) hold above all for lexical stems; they are violated in ASL compounds to different degrees.

(1) Word-level phonotactics
    a. Handshape: within a word, selected finger features do not change in their value; aperture features may change. (Mandel 1981)
    b. Place of Articulation: within a word, major Place of Articulation features may not change in their value; setting features (minor place features) within the same major body region may change. (Brentari 1998)
    c. Movement: within a word, repetition of movement is possible, as are 'circle+straight' sequences (*'straight+circle' sequences). (Uyechi 1996)
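Stated procedurally, the three phonotactics in (1) amount to a checker like the following sketch (my own; the sign encoding and feature names are invented for illustration).

    def obeys_word_phonotactics(sign: dict) -> bool:
        ok_hs = len(set(sign["selected_fingers"])) == 1       # (1a)
        ok_poa = len(set(sign["major_place"])) == 1           # (1b)
        movs = sign["movements"]                              # (1c)
        ok_mov = (len(set(movs)) == 1                         # repetition
                  or movs == ["circle", "straight"])          # circle+straight
        return ok_hs and ok_poa and ok_mov

    candidate = {"selected_fingers": ["index", "index"],
                 "major_place": ["torso", "torso"],
                 "movements": ["circle", "straight"]}
    print(obeys_word_phonotactics(candidate))                 # -> True

    bad = dict(candidate, movements=["straight", "circle"])   # *straight+circle
    print(obeys_word_phonotactics(bad))                       # -> False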

Within a word, which properties play more of a role in sign language word segmentation: those that span the whole word (the domain cues) or those that change within the word (i.e., the linear ones)?

Fig. 3.7: Examples of one- and two-movement nonsense forms in the word segmentation experiments (ai/bi: one sign, based on ASL; aii/bii: two signs, based on ASL). The forms in (a) with one movement were judged to be one sign by our participants; the forms in (b) with two movements were judged to be two signs by our participants. Based on ASL phonotactics, however, the forms in (ai) and (bi) should have been judged to be one sign and those in (aii) and (bii) should have been judged to be two signs.


These cues were put into conflict with one another in a set of balanced nonsense stimuli that were presented to signers and non-signers. The use of a linear cue might be, for example, noticing that the open and closed aperture variants of a handshape are related, and thereby judging a form containing such a change to be one sign. The use of a domain strategy might be, for example, ignoring sequential alternations entirely, and judging every handshape or movement as a new word. The nonsense forms in Figure 3.7 demonstrate this. If an ASL participant relied on a linear strategy, Figure 3.7ai would be judged as one sign because it has an open and a closed variant of the same handshape, and Figure 3.7aii would be judged as two signs because it contains two distinctively contrastive handshapes (two different selected finger groups). Figure 3.7bi should be judged as one sign because it has a repetition of the movement and only one handshape, and 3.7bii as two signs because it has two contrastive handshapes and two contrastive movements.

In these studies there were six groups of subjects included in two experiments. In one study, groups of native users of ASL and English participated (Brentari 2006); in a second study, four more groups were added, totaling six: native users of ASL, Croatian Sign Language (HZJ), and Austrian Sign Language (ÖGS), and speakers of English, Austrian German, and Croatian (Brentari et al. 2011). The method was the same in both studies. All groups were administered the same word segmentation task using signed stimuli only. Participants were asked to judge whether controlled strings of nonsense stimuli based on ASL words were one sign or two signs. It was hypothesized (i) that signers and non-signers would differ in their strategies for segmentation, and (ii) that signers would use their language-particular knowledge to segment sign strings. Overall, the results were mixed. A '1 value = 1 word' strategy was employed overall, primarily based on how many movements were in the string, despite language-particular grammatical knowledge. As stated earlier, ASL participants should have judged Figures 3.7ai and 3.7bi to be one sign, because the two handshapes in 3.7ai are allowable in one sign, as are the two movements in 3.7bi, and they should have judged the forms in Figures 3.7aii and 3.7bii to be two signs. This did not happen in general; however, if each parameter is analyzed separately, the way that the Handshape parameter was employed was significantly different both between signing and non-signing groups and among the sign language groups.
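The dominant strategy can be caricatured in a few lines; the sketch below (my own; the input encoding is invented) segments a nonsense string purely by counting changes in movement value, which is the '1 value = 1 word' behavior observed across groups.

    def segment_by_movement(movements: list) -> int:
        # movements: sequence of movement values, e.g. ['arc', 'straight']
        words = 1
        for prev, cur in zip(movements, movements[1:]):
            if cur != prev:       # each new movement value opens a new 'word'
                words += 1
        return words

    print(segment_by_movement(["arc", "arc"]))        # repetition -> 1 word
    print(segment_by_movement(["arc", "straight"]))   # two values -> 2 words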

The conclusion drawn from the word segmentation experiments is that modality (the visual nature of the signal) plays a powerful role in word segmentation; this drives the strong similarity in performance across groups in the use of the Movement parameter. It suggests that, when faced with a new type of linguistic string, modality will play a role in segmenting it. Incorporating this factor into the logic of phonological architecture might help to explain why certain structures, such as the trochaic foot, may be so powerful a cue to word learning in infants (Jusczyk/Cutler/Redanz 1993) and why prosodic cues are so resilient crosslinguistically in spoken languages.

    3.3. The reversal of segment to melody

A final modality effect is the organization of melody features relative to skeletal segments in the hierarchical structure in Figure 3.1, and this will be described here in detail. The reason that timing units are located at the top of the hierarchical structure of spoken languages is that they can be contrastive. In spoken languages, affricates, geminates, long vowels, and diphthongs demonstrate that the number of timing slots must be represented independently from the melody, even if the default case is one timing slot per root node. Examples of affricate and geminate root-to-segment ratios in Italian are given in (2).

(2) Spoken language phonology root:segment ratios [Italian]
    a. Affricate: two root (melody) specifications linked to one timing slot, e.g. /tʃ/ in 'cena' ('dinner')
    b. Geminate: one root (melody) specification linked to two timing slots, e.g. /tt/ in 'fatto' ('done')

The Dependency and Prosodic Models of sign language phonology build in the fact that length is not contrastive in any known sign language and that the number of timing slots is predictable from the content of the features. As a consequence, the melody (i.e., the feature material) has a higher position in the structure and the timing slots a lower position – in other words, the reverse of what occurs in spoken languages, where timing units are the highest node in the structure (see also van der Hulst (2000) on this same point). As shown in Figures 3.2b and 3.3c, the composition of the prosodic features generates the number of timing slots: in the Prosodic Model, path features generate two timing slots, and all other features generate one timing slot.

What would motivate this structural difference between the two types of languages? One reason has already been mentioned: audition has the advantage over vision in making temporal judgments, so it makes sense that the temporal elements of speech have a powerful and independent role in phonological structure with respect to the melody. One logical consequence of this is that the timing tier, containing either segments or moras, is more heavily exploited to produce contrast within the system and must assume a more prominent role in spoken than in sign languages. A schema for the relationship between timing slots, root node, and melody in sign and spoken languages is given in (3).

(3) Organization of phonological material in sign vs. spoken languages

    a. Spoken languages          b. Sign languages

       x (timing slot)              root
             |                        |
           root                    melody
             |                        |
          melody               x (timing slot)

To conclude this section on modality, we see that it affects several levels of representation. An effect of modality on the phonetic representation can be seen in the similar use of movement by signers and non-signers in making word segmentation judgments. An effect on the phonological representation can be seen when the single movement (now assuming the role of the syllable in a sign language phonological system) is used to express a particular phonological rule or constraint, such as the phonotactic constraint on handshape change: that is, one handshape change per syllable. An effect of modality on the morphophonological representation can be seen in the typological niche that sign languages occupy, whereby words are monosyllabic and polymorphemic.

Modality effects are readily observable when considering sign languages because of their contrast with structures in spoken languages, and they also encourage an additional look at spoken language systems for similar effects of modality on speech that might typically be taken for granted.

    4. Iconicity effects

The topic of iconicity in sign languages is vast, covering all linguistic areas – e.g., pragmatics, lexical organization, phonetics, morphology, the evolution of language – but in this chapter only aspects of iconicity that are specifically relevant for the phonological and morphophonemic representation will be discussed in depth (see also chapter 18 on iconicity and metaphor). The idea of analyzing iconicity and phonology together is fascinating and relatively recent. See, for instance, van der Kooij (2002), who examined the phonology-iconicity connection in native signs and has proposed a level of phonetic implementation rules where iconicity exerts a role. Even more recently, Eccarius (2008) provides a way to rank the effects of iconicity throughout the whole lexicon of a sign language. Until recently, research on phonology and research concerning iconicity were taken up by sub-fields completely independent from one another, one side sometimes even going so far as to deny the importance of the other. Iconicity has been a serious topic of study in cognitive, semiotic, and functionalist linguistic perspectives, most particularly in work dealing with productive, metaphoric, and metonymic phenomena (Brennan 1990; Cuxac 2000; Taub 2001; Brennan 2005; Wilcox 2001; Russo 2005; Cuxac/Sallandre 2007). In contrast, with the notable exceptions just mentioned, phonology has been studied within a generative approach, using tools that make as little reference to meaning or iconicity as possible. For example, the five models in Figure 3.5 (the Cheremic, Hold-Movement, Hand-Tier, Dependency, and Prosodic Models) make reference to iconicity in neither the inventory nor the system of rules.

'Iconicity' refers to a mapping between a source domain and the linguistic form (Taub 2001); it is one of the three Peircean notions of iconicity, indexicality, and symbolicity (Peirce 1932 [1902]); see chapter 18 for a general introduction to iconicity in sign languages. From the very beginning, iconicity has been a major topic of study in sign language research. It is always the '800-lb. gorilla in the room', despite the fact that the phonology can be constructed without it. Stokoe (1960), Battison (1978), Friedman (1976), Klima and Bellugi (1979), Boyes Braem (1981), Sandler (1989), Brentari (1998), and hosts of references cited therein have all established that ASL has a phonological level of representation using exclusively linguistic evidence based on the distribution of forms – examples come from slips of the hand, minimal pairs, phonological operations, and processes of word-formation (see Hohenberger/Happ/Leuninger 2002 and chapter 30). In native signers, iconicity has been shown experimentally to play little role in first-language acquisition (Bonvillian/Orlansky/Folven 1990; Conlin et al. 2000; see also chapter 28) or in language processing: Poizner, Bellugi, and Tweney (1981) demonstrated that iconicity has no reliable effect on short-term recall of signs; Emmorey et al. (2004) showed specifically that the motor-iconicity of sign languages (involving movement) does not alter the neural systems underlying tool and action naming; and Thompson, Emmorey, and Gollan (2005) used 'tip of the finger' phenomena (i.e., almost but not quite being able to recall a sign) to show that the meaning and form of signs are accessed independently, just as they are in spoken languages (see also chapter 29 for further discussion). Yet iconicity is present throughout the lexicon, and every one of the authors mentioned above also acknowledges that iconicity is pervasive.

There is, however, no means to quantitatively and absolutely measure just how much iconicity there is in a sign language lexicon. The question 'Iconic to whom, and under what conditions?' is always relevant, so we need to acknowledge that iconicity is age-specific (signs for TELEPHONE have changed over time, yet both the older and newer forms are iconic, cf. Supalla 1982, 2004) and language-specific (signs for TREE are different in Danish, Hong Kong, and American Sign Languages, yet all are iconic). Except for a restricted set of cases where gestures from the surrounding (hearing) community are incorporated in their entirety into a specific sign language, iconicity resides in the sub-lexical units, either in classes of features that reside at a class node or in individual features themselves. Iconicity is thought to be one of the factors that makes sign languages look so similar to one another (Guerra 1999; Guerra/Meier/Walters 2002; Wilcox/Rossini/Pizzuto 2010; Wilbur 2010), and sensitivity to and productive use of iconicity may be one of the reasons why signers from different language families can communicate with each other so readily after such little time, despite crosslinguistic differences in lexicon and, in many instances, also in the grammar (Russo 2005). Learning how to use iconicity productively within the grammar is undoubtedly a part of acquiring a sign language.

I will argue that iconicity and phonology are not incompatible, and this view is gaining support within the field (van der Kooij 2002; Meir 2002; Brentari 2007; Eccarius 2008; Brentari/Eccarius 2010; Wilbur 2010). After all of the work over recent decades showing indisputably that sign languages have phonology and duality of patterning, one can only conclude that it is the distribution that must be arbitrary and systematic in order for phonology to exist. In other words, even if a property is iconic, it can also be phonological because of its distribution. Iconicity should not be thought of as a hindrance or an opposition to a phonological grammar, but rather as another mechanism, on a par with ease of production or ease of perception, that contributes to inventories. Saussure wasn't wrong, but since he based his generalizations on spoken languages, his conclusions reflect tendencies in a communication modality that can only use iconicity on a more limited basis than sign languages can. Iconicity does exist in spoken languages, in reduplication (e.g., Haiman 1980) as well as in expressives/ideophones; see, for example, Bodomo (2006) for a discussion of these in Dagaare, a Gur language of West Africa. See also Okrent (2002), Shintel, Nussbaum, and Okrent (2006), and Shintel and Nussbaum (2007) on the use of vocal quality, such as length and pitch, in an iconic manner.

Iconicity contributes to the phonological shape of forms more in sign than in spoken languages, so much so that we cannot afford to ignore it. I will show that iconicity is a strong factor in building signed words, but it is also restricted and can ultimately give rise to arbitrary distribution in the morphology and phonology. What problems can be confronted or insights gained from considering iconicity? In the next sections we will see some examples of iconicity and arbitrariness working in parallel to build words and expressions in sign languages, using the feature classes of handshape, orientation, and movement. See also chapter 20 for a discussion of the Event Visibility Hypothesis (Wilbur 2008, 2010), which also pertains to iconicity and movement. The morphophonology of word formation exploits and restricts iconicity at the same time; it is used to build signed words, yet outputs are still very much restricted by the phonological grammar. Section 4.1 concerns the historical development of a particular aspect of sign language phonology; the other sections concern synchronic phenomena.

    4.1. The historical emergence of phonology

Historically speaking, Frishberg (1975) and Klima and Bellugi (1979) have established that sign languages become ‘less iconic’ over time, but iconicity never reduces to zero and continues to be productive in contemporary sign languages. Let us consider the two contexts in which sign languages arise. In most Deaf communities, sign languages are passed down from generation to generation not through families, but through communities – i.e., schools, athletic associations, social clubs, etc. But initially, before there is a community per se, signs begin to be used through interactions among individuals – either among deaf and hearing individuals (‘homesign systems’), or in stable communities in which there is a high incidence of deafness. In inventing a homesign system, isolated individuals live within a hearing family or community and devise a method for communicating through gestures that become systematic (Goldin-Meadow 2001). Something similar happens on a larger scale in systems that develop in communities with a high incidence of deafness due to genetic factors, such as the island of Martha’s Vineyard in the seventeenth century (Groce 1985) and Al-Sayyid Bedouin Sign Language (ABSL; Sandler et al. 2005; Meir et al. 2007; Padden et al. 2010). In both cases, these systems develop at first within a context where being transparent through the use of iconicity is important in making oneself understood.

Mapping this path from homesign to sign language has become an important research topic, since it allows linguists the opportunity to follow the diachronic path of a sign language al vivo in a way that is no longer possible for spoken languages. In the case of a pidgin, a group of isolated deaf individuals are brought together at a school for the deaf. Each individual brings to the school a homesign system that, along with other homesign systems, undergoes pidginization and ultimately creolization. This has happened in the development of Nicaraguan Sign Language (NSL; Kegl/Senghas/Coppola 1999; Senghas/Coppola 2001). This work to date has largely focused on morphology and syntax, but when and how does phonology arise in these systems? Aronoff et al. (2008) and Sandler et al. (2011) have claimed that ABSL, while highly iconic, still has no duality of patterning even though it is ~75 years old. It is well known, however, that in first-language acquisition of spoken languages, infants are statistical learners and phonology is one of the first components to appear (Locke 1995; Aslin/Saffran/Newport 1998; Creel/Newport/Aslin 2004; Jusczyk et al. 1993, 1999).

Phonology emerges in a sign language when properties – even those with iconic origins – take on conventionalized distributions, which are not predictable from their iconic forms. Over the last several years, a project has been studying how these subtypes of features adhere to similar patterns of distribution in sign languages, gesture, and homesign (Brentari et al. 2012). It is an example of the intertwined nature of iconicity and phonology that addresses how a phonological distribution might emerge in sign languages over time.


Productive handshapes – particularly the selected finger features of handshape – were studied in adult native signers, hearing gesturers (gesturing without using their voices), and homesigners. The results show that the distribution of selected finger properties is re-organized over time. Handshapes were divided into three levels of selected finger complexity. Low complexity handshapes have the simplest phonological representation (Brentari 1998), are the most frequent handshapes crosslinguistically (Hara 2003; Eccarius/Brentari 2007), and are the earliest handshapes acquired by native signers (Boyes Braem 1981). Medium complexity and high complexity handshapes are defined in structural terms – i.e., the simpler the structure, the less complexity it contains. Medium complexity handshapes have one additional elaboration of the representation of a [one]-finger handshape, either by adding a branching structure or an extra association line. High complexity handshapes are all other handshapes. Examples of low and medium complexity handshapes are shown in Figure 3.8.

Fig. 3.8: The three handshapes with low finger complexity and examples of handshapes with medium finger complexity. The parentheses around the B-handshape indicate that it is the default handshape in the system.
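To make the structural definition concrete, a minimal sketch follows. It is purely expository: it assumes a handshape representation can be summarized by whether it is one of the three basic handshapes and by how many elaborations (branching structures or extra association lines) it adds to a [one]-finger handshape, and its names are invented here rather than drawn from any published model.

    # Hypothetical encoding of the three-way selected finger complexity scale.
    from dataclasses import dataclass

    @dataclass
    class Handshape:
        name: str
        is_basic: bool      # one of the three low-complexity handshapes (e.g., the default B)
        elaborations: int   # branching structures + extra association lines added

    def finger_complexity(hs: Handshape) -> str:
        if hs.is_basic:
            return "low"     # simplest phonological representation
        if hs.elaborations == 1:
            return "medium"  # one elaboration of a [one]-finger handshape
        return "high"        # all other handshapes

    print(finger_complexity(Handshape("B (default)", True, 0)))          # low
    print(finger_complexity(Handshape("two-finger example", False, 1)))  # medium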

The selected finger complexity of two types of productive handshapes was analyzed: those representing objects and those representing the handling of objects (corresponding to whole entity and handling classifier handshapes in a sign language, respectively; see section 4.2 and chapter 8). The pattern that appeared in signers and homesigners showed no significant differences along the dimension analyzed: relatively higher finger complexity in object handshapes and lower in handling handshapes (Figure 3.9). The opposite pattern appeared in gesturers, who differed significantly from the other two groups: higher finger complexity in handling handshapes and lower in object handshapes. These results indicate that as handshape moves from gesture to homesign and ultimately to a sign language, object handshapes gain finger complexity and handling handshapes lose it relative to their distribution in gesture. In other words, even though all of these handshapes are iconic in all three groups, the features involved in selected fingers are heavily re-organized in sign languages, and the homesigners already display signs of this re-organization.


Fig. 3.9: Mean finger complexity, using a Mixed Linear statistical model, for Object handshapes and Handling handshapes in signers, homesigners, and gesturers (Brentari et al. 2012).

    4.2. Orientation in classifier constructions is arbitrarily distributed

Another phonological structure that has iconic roots but is ultimately distributed arbitrarily in ASL is the orientation of the handshape of classifier constructions (see chapter 8 for further discussion). For our purposes here, classifier constructions can be defined as complex predicates in which movement, handshape, and location are meaningful elements; we focus here on handshape, which includes the orientation relation discussed in section 2. We will use Engberg-Pedersen’s (1993) system, given in (4), which divides the classifier handshapes into four groups. Examples of each are given in Figure 3.10.

(4) Categories of handshape in classifier constructions (Engberg-Pedersen 1993)
a. Whole Entity: These handshapes refer to whole objects (e.g., 1-handshape: ‘person’ (Figure 3.10a)).
b. Surface: These handshapes refer to the physical properties of an object (e.g., B-B-handshape: ‘flat_surface’ (Figure 3.10b)).
c. Limb/Body Part: These handshapes refer to the limbs/body parts of an agent (e.g., V-handshape: ‘by_legs’ (Figure 3.10c)). In ASL we have found that the V-handshape ‘by_legs’ can function as either a body part or a whole entity classifier.
d. Handling: These handshapes refer to how an object is handled or manipulated (e.g., S-handshape: ‘grasp_gear_shift’ (Figure 3.10d)).

Benedicto and Brentari (2004) and Brentari (2005) argued that, while all types of classifier constructions use handshape morphologically (because at least part of the handshape is used in this way), only classifier handshapes of the handling and limb/body part type can use orientation in a morphological way; whole entity and surface classifier handshapes cannot. This is shown in Figure 3.10, which illustrates the variation of the forms using orientation phonologically and morphologically. The forms using the whole entity classifier in Figure 3.10ai ‘person’ and the surface classifier in Figure 3.10bi ‘flat surface’ are not grammatical if the orientation is changed to the hypothetical forms in 3.10aii (‘person upside down’) and 3.10bii (‘flat surface upside down’), indicated by an ‘x’ through the ungrammatical forms. Orientation differences in whole entity classifiers are shown by signing the basic form and then sequentially adding a movement to that form to indicate a change in orientation. In contrast, forms using the body part classifier and the handling classifier in Figure 3.10ci (‘by-legs’) and 3.10di (‘grasp gear shift’) are grammatical when articulated with different orientations, as shown in 3.10cii (‘by-legs be located upside down’) and 3.10dii (‘grasp gear shift from below’).

Fig. 3.10: Examples of the distribution of phonological (top) and morphological (bottom) use of orientation in classifier predicates (ASL). Top row: (ai) upright and (aii) upside down, whole entity (1-HS ‘person’); (bi) surface of table and (bii) surface upside down, surface/extension (B-HS ‘flat surface’). Bottom row: (ci) upright and (cii) upside down, body part (V-HS ‘person’); (di) grasp from above and (dii) grasp from below, handling (S-HS ‘grasp-gear shift’). Whole Entity and Surface/Extension classifier handshapes (10a) and (10b) allow only phonological use of orientation (so a change in orientation is not permissible), while Body Part and Handling classifier handshapes (10c) and (10d) allow both phonological and morphological use of orientation, so a change in orientation is possible.

This analysis requires phonology because the representation of handshape must allow for subclasses of features to function differently, according to the type of classifier handshape being used. In all four types of classifiers, part of the phonological orientation specification expresses a relevant handpart’s orientation (palm, fingertips, back of hand, etc.) toward a place of articulation, but only in body part and handling classifiers is it allowed to function morphologically as well. It has been shown that these four types of classifiers have different syntactic properties as well (Benedicto/Brentari 2004; Grose et al. 2007).
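Stated as a distribution, the generalization can be captured in a few lines. The sketch below is a hedged illustration only, not an implementation from Benedicto and Brentari (2004); the enum and function names are invented for exposition.

    # The four classifier handshape categories of (4), with the generalization
    # that only handling and limb/body part classifiers may use orientation
    # morphologically; whole entity and surface classifiers use it only
    # phonologically, so a change in orientation is ungrammatical for them.
    from enum import Enum, auto

    class ClassifierType(Enum):
        WHOLE_ENTITY = auto()    # e.g., 1-handshape: 'person'
        SURFACE = auto()         # e.g., B-B-handshape: 'flat_surface'
        LIMB_BODY_PART = auto()  # e.g., V-handshape: 'by_legs'
        HANDLING = auto()        # e.g., S-handshape: 'grasp_gear_shift'

    def orientation_is_morphological(c: ClassifierType) -> bool:
        return c in {ClassifierType.LIMB_BODY_PART, ClassifierType.HANDLING}

    assert not orientation_is_morphological(ClassifierType.WHOLE_ENTITY)  # *'person upside down'
    assert orientation_is_morphological(ClassifierType.HANDLING)          # 'grasp gear shift from below'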

It would certainly be more iconic to have the orientation expressed uniformly across the different classifier types, but the grammar does not allow this. We therefore have evidence that iconicity is present but constrained in the use of orientation in classifier predicates in ASL.

    4.3. Directional path movement and verb agreement

Another area in sign language grammars where iconicity plays an important role is verb agreement (see also chapters 7 and 10). Agreement verbs manifest the transfer of entities, either abstract or concrete. Salience and stability among arguments may be encoded not only in syntactic terms, but also by visual-spatial means. Moreover, path movements, which are an integral part of these expressions, are phonological properties in the feature tree, as are the spatial loci of sign language verb agreement. There is some debate about whether the locational loci are, in fact, part of the phonological representation because they have an infinite number of phonetic realizations. See Brentari (1998) and Mathur (2000) for two possible solutions to this problem.

There are three types of verbs attested in sign languages (Padden 1983): those that do not manifest agreement (‘plain’ verbs), and those that do, which divide further into those known as ‘spatial’ verbs, which take only source-goal agreement, and ‘agreement’ verbs, which take source-goal agreement as well as object and potentially subject agreement (Brentari 1988; Meir 1998, 2002; Meir et al. 2007). While Padden’s 1983 analysis was based on syntactic criteria alone, these more recent studies include both semantics (including iconicity) and syntax in their analysis. The combination of syntactic and semantic motivations for agreement in sign languages was formalized as the ‘direction of transfer principle’ (Brentari 1988), but the analysis of verb agreement as having an iconic source was first proposed in Meir (2002). Meir (2002) argues that the main difference between verb agreement in spoken languages and sign languages is that verb agreement in sign languages seems to be thematically (semantically), rather than syntactically, determined (Kegl (1985) was the first to note this). Agreement typically involves the representation of phi-features of the NP arguments, and functionally it is a part of the referential system of a language. Meir observes that typically in spoken languages there is a closer relationship between agreement markers and structural positions in the syntax than between agreement markers and semantic roles, but sign language verbs can agree not only with themes and agents; they can also agree with their source and goal arguments.

Crucially, Meir argues that ‘DIR’, which is an abstract construct used in a transfer (or directional) verb, is the iconic representation of the semantic notion ‘path’ used in theoretical frameworks, such as Jackendoff (1996, 320); DIR denotes spatial relations. It can appear as an independent verb or as an affix to other verbs. This type of iconicity is rooted in the fact that referents in a signed discourse are tracked both syntactically and visuo-spatially; however, this iconicity is constrained by the phonology. Independently, a [direction] feature has been argued for in the phonology, indicating a path moving to or from a particular plane of articulation, as described in section 2 (Brentari 1998).


Fig. 3.11: Examples of verb agreement in ASL and how it is expressed in the phonology: be-sorry expresses no manual agreement; say-yes expresses the direction feature of agreement in orientation; help in path; and pay in both orientation and path.

The abstract morpheme DIR and the phonological feature [direction] are distributed in a non-predictable (arbitrary) fashion both across sign languages (Mathur/Rathmann 2006, 2010) and language-internally. In ASL it can surface in the path of the verb or in the orientation; that is, on one or both of these parameters. It is the phonology of the stem that accounts for the distribution of orientation and path as agreement markers, predicting if it will surface, and if so, where it will surface. Figure 3.11 provides ASL examples of how this works. In Figure 3.11a we see an example of the agreement verb be-sorry, which takes neither orientation nor source-goal properties. Signs in this set have been argued to have eye gaze substituted for the manual agreement marker (Bahan 1996; Neidle et al. 2000), but there is debate about exactly what role eye gaze plays in the agreement system (Thompson et al. 2006). The phonological factor relevant here is that many signs in this set have a distinct place of articulation that is on or near the body. In Figure 3.11b we see an example of an agreement verb that takes only the orientation marker of agreement, say-yes; this verb has no path movement in the stem that can be modified in its beginning and ending points (Askins/Perlmutter 1995), but the affixal DIR morpheme is realized on the orientation, with the palm of the hand facing the vertical plane of articulation associated with the indirect object. In Figure 3.11c there is an example of an agreement verb that has a path movement in the stem – help – whose beginning and end points can be modified according to the subject and object locus. Because of the angle of the wrist and forearm, it would be very difficult (if not impossible) to modify the orientation of this sign (Mathur/Rathmann 2006). In Figure 3.11d we see an example of the agreement verb pay, which expresses the DIR verb agreement on both path movement and orientation; the path moves from the payer to the payee, and the orientation of the fingertip is towards the payee at the end of the sign. The analysis of this variation depends in part on the lexical specification of the stem – whether orientation or path is specified in the stem of the verb or supplied by the verb-agreement morphology (Askins/Perlmutter 1995) – and in part on the phonetic-motoric constraints on the articulators involved in articulating the stem, i.e., the joints of the arms and hands (Mathur/Rathmann 2006).
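The pattern in Figure 3.11 can be summarized in tabular form. The sketch below is purely expository – a hypothetical lexicon fragment limited to the four verbs just discussed, with invented names and encoding – recording, for each stem, whether DIR surfaces on path, on orientation, on both, or on neither.

    # Hypothetical lexicon fragment: how the DIR agreement morpheme surfaces
    # on the four ASL verbs of Figure 3.11. The realization follows from the
    # phonology of each stem rather than from iconicity alone.
    DIR_REALIZATION = {
        #  verb       (on_path, on_orientation)
        "be-sorry": (False, False),  # no manual agreement; body-anchored place of articulation
        "say-yes":  (False, True),   # no path in the stem; palm faces the object's plane
        "help":     (True,  False),  # path endpoints modified; wrist/forearm angle blocks orientation
        "pay":      (True,  True),   # path from payer to payee; fingertip oriented toward payee
    }

    for verb, (on_path, on_orientation) in DIR_REALIZATION.items():
        markers = [m for m, used in (("path", on_path), ("orientation", on_orientation)) if used]
        print(f"{verb}: DIR on {' and '.join(markers) if markers else 'neither parameter'}")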


In summary, iconicity is a factor that contributes to the phonological inventories of sign languages. Based on the work presented in this section, I would maintain that the distribution of the material is more important for establishing the phonology of sign languages than the material used – iconic or otherwise. One can generalize across sections 4.1–4.3 and say that, taken alone, each of the elements discussed has iconic roots, yet even so, this iconicity is distributed in unpredictable ways (that is, unpredictable if iconicity were the only motivation). This is true for which features – joints or fingers – will be the first indications of an emerging phonology (section 4.1), for the orientation of the hand representing the orientation of the object in space (section 4.2), and for the realization of verb agreement (section 4.3).

    5. Conclusion

As stated in the introduction to this chapter, this piece was written in part to answer the following questions: ‘Why should phonologists, who above all else are fascinated with the way things sound, care about systems without sound? How does it relate to their interests?’ I hope that I have shown that by using work on sign languages, phonologists can broaden the scope of the discipline to one that includes not only analyses of phonological structures, but also accounts of how modality and iconicity infiltrate and interact with phonetic, phonological, and morphophonological structure. This is true in both sign and spoken languages, but we see these effects more vividly in sign languages. In the case of modality this is because, chronologically speaking, analyses of sign languages set up comparisons with what has come before (e.g., analyses of spoken languages grounded in a different communication modality), and we now see that some of the differences between the two types of languages result from modality differences. An important point of this chapter was that general phonological theory can be better understood by considering its uses in sign language phonology. For example, non-linear phonological frameworks allowed for breakthroughs in understanding spoken and sign languages that would not have been possible otherwise, but also allowed the architectural building blocks of a phonological system to be isolated and examined in such a way as to see how both the visual and auditory systems (the communication modalities) affect the ultimate shape of words and the organization of units, such as features, segments, and syllables.

The effects of iconicity on phonological structure are seen more strongly in sign languages because of the stronger role that visual iconicity can play in these languages compared with auditory iconicity in spoken languages. Another important point for general phonological theory that I have tried to communicate in this chapter has to do with the ways in which sign languages manage iconicity. Just because a property is iconic doesn’t mean it can’t also be phonological. Unfortunately, some phonologists studying sign languages called attention away from iconicity for a long time, but iconicity is a pervasive pressure on the output of phonological form in sign languages (on a par with ease of perception and ease of articulation), and we can certainly benefit from studying its differential effects both synchronically and diachronically.

Finally, the more phonologists focus on the physical manifestations of the system – the vocal tract, the hands, the ear, the eyes – the more sign and spoken language phonology will look different, but in interesting ways. The more focus there is on the mind, the more sign language and spoken language phonologies will look the same, in ways that can lead to a better understanding of a general (cross-modal) phonological competence.

Acknowledgements: This work is being carried out thanks to NSF grants BCS 0112391 and BCS 0547554 to Brentari. Portions of this chapter have appeared in Brentari (2011) and are reprinted with permission of Blackwell Publishing.

    6. Literature

Alpher, Barry
1994 Yir-Yoront Ideophones. In: Hinton, Leanne/Nichols, Johanna/Ohala, John J. (eds.), Sound Symbolism. Cambridge: Cambridge University Press, 161–177.
Anderson, John/Ewen, Colin J.
1987 Principles of Dependency Phonology. Cambridge: Cambridge University Press.
Andruski, Jean E./Ratliff, Martha
2000 Phonation Types in Production of Phonological Tone: The Case of Green Mong. In: Journal of the International Phonetics Association 30(1/2), 63–82.
Aronoff, Mark/Padden, Carol/Meir, Irit/Sandler, Wendy
2004 Morphological Universals and the Sign Language Type. In: Booij, Geert/Marle, Jaap van (eds.), Yearbook of Morphology 2004. Dordrecht: Kluwer Academic Publishers, 19–40.
Aronoff, Mark/Meir, Irit/Sandler, Wendy
2005 The Paradox of Sign Language Morphology. In: Language 81, 301–344.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy
2008 The Roots of Linguistic Organization in a New Language. In: Bickerton, Derek/Arbib, Michael (eds.), Holophrasis vs. Compositionality in the Emergence of Protolanguage (Special Issue of Interaction Studies 9(1)), 131–150.
Askins, David/Perlmutter, David
1995 Allomorphy Explained through Phonological Representation: Person and Number Inflection of American Sign Language. Paper Presented at the Annual Meeting of the German Linguistic Society, Göttingen.
Aslin, Richard N./Saffran, Jenny R./Newport, Elissa L.
1998 Computation of Conditional Probability Statistics by 8-Month-Old Infants. In: Psychological Science 9, 321–324.
Bahan, Benjamin
1996 Nonmanual Realization of Agreement in American Sign Language. PhD Dissertation, Boston University.
Battison, Robbin
1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Benedicto, Elena/Brentari, Diane
2004 Where Did All the Arguments Go? Argument Changing Properties of Classifiers in ASL. In: Natural Language and Linguistic Theory 22, 743–810.
Bloomfield, Leonard
1933 Language. New York: Henry Holt and Co.
Bodomo, Adama
2006 The Structure of Ideophones in African and Asian Languages: The Case of Dagaare and Cantonese. In: Mugane, John (ed.), Selected Proceedings of the 35th Annual Conference on African Languages. Somerville, MA: Cascadilla Proceedings Project, 203–213.


Bonvillian, John D./Orlansky, Michael D./Folven, Raymond J.
1990 Early Sign Language Acquisition: Implications for Theories of Language Acquisition. In: Volterra, Virginia/Erting, Carol J. (eds.), From Gesture to Language in Hearing and Deaf Children. Berlin: Springer, 219–232.
Boyes Braem, Penny
1981 Distinctive Features of the Handshapes of American Sign Language. PhD Dissertation, University of California.
Brennan, Mary
1990 Word Formation in British Sign Language. PhD Dissertation, University of Stockholm.
Brennan, Mary
2005 Conjoining Word and Image in British Sign Language (BSL): An Exploration of Metaphorical Signs in BSL. In: Sign Language Studies 5, 360–382.
Brentari, Diane
1988 Backwards Verbs in ASL: Agreement Re-Opened. In: Proceedings from the Chicago Linguistic Society Issue 24, Volume 2: Parasession on Agreement in Grammatical Theory. Chicago: University of Chicago, 16–27.
Brentari, Diane
1990a Theoretical Foundations of American Sign Language Phonology. PhD Dissertation, Linguistics Department, University of Chicago.
Brentari, Diane
1990b Licensing in ASL Handshape Change. In: Lucas, Ceil (ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet University Press, 57–68.
Brentari, Diane
1993 Establishing a Sonority Hierarchy in American Sign Language: The Use of Simultaneous Structure in Phonology. In: Phonology 10, 281–306.
Brentari, Diane
1994 Prosodic Constraints in American Sign Language. In: Proceedings from the 20th Annual Meeting of the Berkeley Linguistics Society. Berkeley: Berkeley Linguistics Society.
Brentari, Diane
1995 Sign Language Phonology: ASL. In: Goldsmith, John (ed.), Handbook of Phonological Theory. Oxford: Blackwell, 615–639.
Brentari, Diane
1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brentari, Diane
2002 Modality Differences in Sign Language Phonology and Morphophonemics. In: Meier, Richard/Cormier, Richard/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 35–64.
Brentari, Diane
2005 The Use of Morphological Templates to Specify Handshapes in Sign Languages. In: Linguistische Berichte 13, 145–177.
Brentari, Diane
2006 Effects of Language Modality on Word Segmentation: An Experimental Study of Phonological Factors in a Sign Language. In: Goldstein, Louis/Whalen, Douglas H./Best, Catherine (eds.), Papers in Laboratory Phonology VIII. The Hague: Mouton de Gruyter, 155–164.
Brentari, Diane
2007 Sign Language Phonology: Issues of Iconicity and Universality. In: Pizzuto, Elena/Pietrandrea, Paola/Simone, Raffaele (eds.), Verbal and Signed Languages, Comparing Structures, Constructs and Methodologies. Berlin: Mouton de Gruyter, 59–80.
Brentari, Diane
2011 Sign Language Phonology. In: Goldsmith, John/Riggle, Jason/Yu, Alan (eds.), Handbook of Phonological Theory. Oxford: Blackwell, 691–721.


Brentari, Diane/Coppola, Marie/Mazzoni, Laura/Goldin-Meadow, Susan
2012 When Does a System Become Phonological? Handshape Production in Gesturers, Signers, and Homesigners. In: Natural Language and Linguistic Theory 30, 1–31.
Brentari, Diane/Eccarius, Petra
2010 Handshape Contrasts in Sign Language Phonology. In: Brentari, Diane (ed.), Sign Languages: A Cambridge Language Survey. Cambridge: Cambridge University Press, 284–311.
Brentari, Diane/González, Carolina/Seidl, Amanda/Wilbur, Ronnie
2011 Sensitivity to Visual Prosodic Cues in Signers and Nonsigners. In: Language and Speech 54, 49–72.
Brentari, Diane/Padden, Carol
2001 Native and Foreign Vocabulary in American Sign Language: A Lexicon with Multiple Origins. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 87–119.
Channon, Rachel
2002 Beads on a String? Representations of Repetition in Spoken and Signed Languages. In: Meier, Richard/Cormier, Richard/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 65–87.
Chomsky, Noam/Halle, Morris
1968 The Sound Pattern of English. New York: Harper and Row.
Clements, G. Nick
1985 The Geometry of Phonological Features. In: Phonology Yearbook 2, 225–252.
Conlin, Kimberly E./Mirus, Gene/Mauk, Claude/Meier, Richard P.
2000 Acquisition of First Signs: Place, Handshape, and Movement. In: Chamberlain, Charlene/Morford, Jill P./Mayberry, Rachel (eds.), Language Acquisition by Eye. Mahwah, NJ: Lawrence Erlbaum, 51–69.
Corina, David
1990 Reassessing the Role of Sonority in Syllable Structure: Evidence from a Visual-gestural Language. In: Proceedings from the 26th Annual Meeting of the Chicago Linguistic Society: Vol. 2: Parasession on the Syllable in Phonetics and Phonology. Chicago: Chicago Linguistic Society, 33–44.
Coulter, Geoffrey
1982 On the Nature of ASL as a Monosyllabic Language. Paper Presented at the Annual Meeting of the Linguistic Society of America, San Diego, California.
Crasborn, Onno
1995 Articu