62 Acquisition of Languages: Infant and Adult Data

JACQUES MEHLER and ANNE CHRISTOPHE, LSCP, EHESS-CNRS URA, Paris

ABSTRACT This chapter advocates a multidimensional approach to the issues raised by language acquisition. We bring to bear evidence from several sources (experimental investigations of infants, studies of bilingual infants and adults, and data from functional brain imaging) to focus on the problem of early language learning. At bottom, we believe that our understanding of development and knowledge attainment requires the joint study of the initial state, the stable state, and the mechanisms that constrain the timetables of learning. And we are persuaded that such an approach must take into account data from many disciplines.

What advantage can a newborn infant draw from listening to speech? Possibly none, as was thought not so long ago (Mehler and Fox, 1985). But now we conjecture that the speech signal furnishes information about the structure of the mother tongue (see Christophe et al., 1997; Gleitman and Wanner, 1982; Mazuka, 1996; Mehler and Christophe, 1995). This position, dubbed the "phonological bootstrapping" hypothesis (Morgan and Demuth, 1996a), reflects an increasing tendency in language acquisition research (Morgan and Demuth, 1996b; Weissenborn and Höhle, 1999): namely, that language acquisition starts very early and that a purely phonological analysis of the speech signal (a surface analysis) gives information about the grammatical structure of the language.

Phonological bootstrapping rests on the idea that language is a species-specific ability, the product of a "language organ" specific to human brains. Hardly new, this view has been advocated by psychologists, linguists, and neurologists since Gall (1835). Lenneberg (1967) initially provided much evidence for the biological foundations of language, evidence that has been corroborated by other research programs (for review, see Mehler and Dupoux, 1994; Pinker, 1994). Human brains appear to have specific neural structures, part of the species endowment, that mediate the grammatical systems of language. Noam Chomsky (1988) proposed conceiving of the "knowledge of language" that newborns bring to the task of acquiring a language in terms of principles and parameters: universal principles that are common to all human languages and parameters that elucidate the diversity of natural languages (together, then, principles and parameters constitute Universal Grammar). Parameters are set through experience with the language spoken in the environment (for a different opinion about how species-specific abilities may be innately specified, see Elman et al., 1996).

In this chapter, we first present experimental results with infants to illustrate the phonological bootstrapping approach. We then examine how word forms can be learned. Finally, we discuss studies of the cortical representation of languages in bilingual adults, who, by being bilingual, exhibit the end result of a special (though frequent) language-learning case. The structure of this chapter illustrates an unconventional view of development. Given the task of presenting language acquisition during the first year of life, why do we include studies on the representation of languages in adult bilinguals? We do so in the belief that development is more than the study of change in developing organisms. To devise an adequate theory of how an initial capacity turns into the full-fledged adult capacity, we need a good description of both endpoints. In addition, considering both endpoints together while focusing on the problem of development is a good research strategy, one that has increased our understanding of language acquisition.

In our previous chapter (Mehler and Christophe, 1995), we started from the fact that many babies are exposed to more than one language, languages that must be kept separate in order to avoid confusion. We reviewed a number of studies that illustrate such babies' ability to discriminate between languages. The picture that emerged was that, from birth onward, babies are able to distinguish their mother tongue from foreign languages (Bahrick and Pickens, 1988; Mehler et al., 1988); moreover, they show a preference for their mother tongue (Moon, Cooper, and Fifer, 1993). Both these facts hold when speech is low-pass-filtered, preserving only prosodic information (Dehaene-Lambertz and Houston, 1998; Mehler et al., 1988). In addition, after reanalyzing our data (from Mehler et al., 1988), we noticed a developmental trend between birth and 2 months of age: Whereas newborns discriminated between two foreign languages, 2-month-olds did not. More recent data confirm this developmental trend. Thus, Nazzi, Bertoncini, and Mehler (1998) showed that French newborns could discriminate filtered sentences in English and Japanese; and Christophe and Morton (1998) showed that, while English 2-month-olds do not react to a change from French to Japanese (using unfiltered sentences), they do discriminate between English and Japanese in the same experimental setting. A possible interpretation of this counterintuitive result is that, while newborns attempt to analyze every speech sample in detail, 2-month-olds have gained enough knowledge of their mother tongue to be able to filter out utterances from a foreign language as irrelevant. As a consequence, they do not react to a change from one foreign language to another. This takes place when the infant is about 2 months old, and thus marks one of the earliest reorganizations of the perceptual responses as a result of exposure to speech.

When Mehler and colleagues (1996) proposed a framework to explain babies' ability to discriminate languages, they began with two facts: first, that babies discriminate languages on the basis of prosodic properties and, second, that vowels are salient features for babies (see Bertoncini et al., 1988; Kuhl et al., 1992). They conjectured that babies construct a grid-like representation containing only vowels. Such a representation would facilitate the discrimination of the language pairs reported. Indeed, the pairs invariably involved distant languages. In addition, it predicts that infants should have most difficulty discriminating languages having similar prosodic/rhythmic properties. It is assumed that the vowel grid represents the languages of the world as clusters around discrete positions in the acoustic space.

Nazzi, Bertoncini, and Mehler (1998) began to test some predictions of this framework. First, they showed that French newborns fail to discriminate English and Dutch filtered sentences. English and Dutch share a number of prosodic properties (complex syllables, vowel reduction, similar word stress) and both are "stress-timed"; they should therefore receive similar grid-like representations. The authors also investigated the question of whether languages receiving similar grid representations would be grouped into one single category, a language family or class. To that end, they selected four languages falling into two language classes: Dutch and English (stress-timed), and Spanish and Italian (syllable-timed). They presented infants with a mixture of filtered sentences from two different languages.

When habituated to sentences in Dutch and English, infants responded to a change to Spanish and Italian sentences, and vice versa. In contrast, when habituated to English and Spanish and tested with Dutch and Italian, infants failed to discriminate. (Note: All the possible interlanguage class combinations were used.) Infants reacted only if a change of language class had taken place (see figure 62.1). These experiments suggest that infants spontaneously classify languages into broad classes or rhythmic-prosodic families, as hypothesized.

So now we know that some languages are more similar than others, even for babies. How can we learn more about the metric underlying perceptual judgments? Two lines of research have allowed us to investigate the perceptual space. One of these exploits adaptation to time-compressed speech (artificially accelerated speech, which is about twice as fast as normal speech). Subjects are asked to listen to and comprehend compressed sentences. Comprehension is initially rather poor; but subjects habituate to compressed speech, and their performance improves after listening to a few sentences (Mehler et al., 1993). Dupoux and Green (1997) observed that adaptation resists speaker change, demonstrating that adaptation takes place at a relatively abstract level. However, Altmann and Young (1993) reported adaptation to compressed speech in sentences composed of nonwords, showing that lexical access is not necessary for adaptation. Similarly, Pallier and colleagues (1998) found that monolingual Spanish subjects listening to Spanish compressed sentences benefit from previous exposure to highly compressed Catalan, a language that they were unable to understand. The same pattern of results was observed when monolingual British subjects were exposed to highly compressed Dutch. Pallier and colleagues (1998) also reported that French-English bilinguals showed no transfer of adaptation from French to English, or vice versa, again demonstrating that comprehension contributes, at best, very little to adaptation.

These results were interpreted in terms of the phonological (prosodic and rhythmical) similarity of the adapting and the target languages. Phonologically similar languages, such as Catalan and Spanish or English and Dutch, show transfer of adaptation from one member of the pair to the other; however, adaptation to French or English, two phonologically dissimilar languages, does not transfer from one to the other. Sebastian-Galles, Dupoux, and Costa (1999) extended these findings by adapting Spanish subjects to Spanish, Italian, French, Greek, English, or Japanese sentences. Sebastian-Galles and colleagues replicated and extended the findings by Pallier and co-workers; in particular, they observed that Greek, which shares rhythmic properties with Spanish but has little lexical overlap, is very efficient in promoting adaptation to Spanish. These authors concluded that the technique of time-compressed speech can serve as a tool to explore the grouping of languages into classes. Another technique being assessed as a tool to study the metric of natural languages is speech resynthesis, a procedure that makes it possible to selectively preserve some aspects of the original sentences, such as phonemes, phonotactics, rhythm, and intonation (see Ramus and Mehler, 1999). It is too early to relate these early categorizations of languages to the language classifications proposed by comparative linguists and biologists (see Cavalli-Sforza, 1991; Renfrew, 1994).

FIGURE 62.1 Results of a nonnutritive sucking experiment: Auditory stimulation is presented contingently upon babies' high-amplitude sucks on a blind dummy. After a baseline period without stimulation, babies hear sentences from one category until they reach a predefined habituation criterion. They are then switched to sentences from the second category. The graph displays sucking-rate averages for 32 French newborns for the baseline period, 5 minutes before the change in stimulation, and 4 minutes after the change. The rhythmic group was switched from a mixture of sentences taken from two stress-timed languages (Dutch and English) to a mixture of sentences from two syllable-timed languages (Spanish and Italian), or vice versa. The nonrhythmic group also changed languages, but in each phase of the experiment there were sentences from one stress-timed and one syllable-timed language (e.g., Spanish and English, then Italian and Dutch). Infants from the rhythmic group reacted significantly more to the change of stimulation than infants from the nonrhythmic group, indicating that only those in the rhythmic group were able to form a category with the habituation sentences and notice a change in category (syllable-timed vs. stress-timed). (Adapted from Nazzi, Bertoncini, and Mehler, 1998.)

So far, we have seen that the languages of the world can be organized on a metric; that is, some languages are more distant than others. This metric could be discrete, like the one Miller and Nicely (1955) proposed for phonemes: a space structured by a finite number of dimensions (e.g., the parameters within the principles-and-parameters theory). Or it could be a continuous space, without structure, with as many dimensions as there are languages. Mehler and colleagues (1996) claimed that the first option should be correct because the metric of languages, the underlying structure, could aid in acquisition. The way a language sounds (i.e., its phonological information) would help infants to discover some properties of their native language and allow them to start the process of acquisition. This is a kind of phonological bootstrapping.

One recent proposal (Nespor, Guasti, and Christophe, 1996) illustrates how phonological information may bootstrap the acquisition of syntax. Languages vary as to the way words are organized in sentences. Either complements follow their heads, as in English or Italian (e.g., He reads the book, where book is the complement of read), or they precede their heads, as in Turkish or Japanese (e.g., Kitabı yazdım [the-book I-read]). This structural property possesses a prosodic correlate: In head-initial languages like English, prominence falls at the end of phonological phrases (small prosodic units, e.g., the big book), but in head-final languages like Turkish, prominence falls at the beginning of phonological phrases. Therefore, if babies can hear whether prominence falls at the beginning or end of phonological phrases, they can decipher their language's word order. Infants could understand some structural aspects before they learn words (see Mazuka, 1996, for a discussion). They could use their knowledge of the typical order of words in their mother tongue to guide the acquisition of word meanings (e.g., see Gleitman, 1990).

If babies are to use prosodic information to determine the word order of their language, they should first be able to perceive the prosodic correlates of word order. In order to test the plausibility of this hypothesis, Christophe and colleagues (1999) compared two languages that differ on the head-direction parameter but have otherwise similar prosodic properties: French and Turkish. Both languages have word-final stress, fairly simple syllabic structure through resyllabification, and no vowel reduction. Matched sentences in the two languages were constructed such that they had the same number of syllables, and their word boundaries and phonological phrase boundaries fell in the same places. Only prominence within phonological phrases distinguished these sentences. They were read naturally by native speakers of French and Turkish. And, in order to eliminate phonemic information while preserving prosodic information, the sentences were resynthesized so that all vowels were mapped onto schwa and consonants were replaced according to their manner of articulation (stops, fricatives, liquids, etc.). The prosody of the original sentences was copied onto the resynthesized sentences. An initial experiment showed that 2-month-old French babies were able to distinguish between these two sets of sentences (see also Guasti et al., 1999). This result suggests that babies are able to perceive the prosodic correlate of word order well before the end of the first year of life (although additional control experiments are needed to rule out alternative explanations). As they stand, the findings support the notion that babies determine the word order of their language before the end of the first year of life, about the time that lexical and syntactic acquisition begins.

This proposal is an example of how purely phonological information (in that case, prosodic information) directly gives information about syntax. Languages differ not only in syntax, but in their phonological properties as well; and babies may learn about these properties of their mother tongue early in life. (The alternative is that children learn the phonology of a language when they learn its lexicon, after age 1.) In fact, recent experiments have shown that the adult speech processing system is shaped by the phonological properties of the native language (e.g., Cutler and Otake, 1994; Otake et al., 1996; see Pallier, Christophe, and Mehler, 1997, for a review). For instance, Dupoux and his colleagues examined Spanish and French adult native speakers' processing of stress information (stress is contrastive in Spanish: BEbe and beBE are different words, meaning "baby" and "drink," respectively; in French, stress is uniformly word-final). They used an ABX paradigm in which subjects listened to triplets of pseudowords and had to decide whether the third one was like the first or like the second one (Dupoux et al., 1997). They observed that French native speakers were almost unable to make the correct decision when stress was the relevant factor (e.g., VAsuma, vaSUma, VAsuma, correct answer first item), whereas Spanish speakers found it as straightforward to monitor for stress as to monitor for phonetic content. In addition, Spanish speakers found it very hard to base their decision on phonemes when stress was an irrelevant factor (e.g., VAsuma, faSUma, vaSUma, correct answer first item), whereas French speakers happily ignored the stress information. In another set of experiments, Dupoux and colleagues investigated the perception of consonant clusters and vowel length in French and Japanese speakers (Dupoux et al., 1999). In French, consonant clusters are allowed but vowel length is irrelevant, while in Japanese, consonant clusters are not allowed and vowel length is relevant. They observed that Japanese speakers were at chance level in an ABX task when the presence of a consonant cluster was the relevant variable (e.g., ebzo, ebuzo, ebuzo, correct answer second item), whereas they performed very well when they had to base their decision on vowel length (e.g., ebuzo, ebuuzo, ebuzo, correct answer first item). In contrast, French speakers showed exactly the reverse pattern of performance (see figure 62.2).

All these results suggest that adult speakers of a language listen to speech through the filter of their own phonology. Presumably, they represent all speech sounds with a sublexical representation that is most adequate for their mother tongue, but inadequate for foreign languages. How and when do babies learn about such phonological aspects of their mother tongue? Although we still lack experimental results on this topic, we conjecture that babies must stabilize the correct sublexical representation for their mother tongue sometime during the first year of life. Thus, when they start acquiring a lexicon toward the end of their first year, they may directly establish lexical representations in a format that is most suitable to their mother tongue. Consistent with this view, evidence is emerging that the representations in adults' auditory input lexicon depend on the phonological properties of the native language (Pallier et al., 1999).

FIGURE 62.2 Reaction times (in gray) and error rates (in black) to ABX judgments in French and Japanese subjects on a vowel length contrast and on an epenthesis contrast. (Adapted from Dupoux et al., 1999.)

Here we have discussed one particular kind of phonological bootstrapping, whereby the way a language sounds gives information as to its abstract structure (such as the head-direction parameter, or the kind of sublexical representation it uses). Phonological information may help bootstrap acquisition in other ways (Morgan and Demuth, 1996a): For instance, prosodic boundary information may help parse a sentence (e.g., Morgan, 1986); prosodic and distributional information may help reveal word boundaries (e.g., Christophe and Dupoux, 1996; Christophe et al., 1997); and phonological information may help to categorize words (see Morgan, Shi, and Allopenna, 1996). This is not to imply that everything about language can be learned through a purely phonological analysis of the speech signal during the first year of life and before anything else has been learned. Of course this is not the case. For instance, although we argued that some syntactic properties may be cued by prosodic correlates (e.g., the head-direction parameter), we do not claim that prosody contains traces for all syntactic properties. But if the child can discover only a few basic syntactic properties from such traces, learning is made easier. The number of possible grammars diminishes by half with each parameter that is set (for binary parameters). As a consequence, the first parameter that is set eliminates the largest number of candidate grammars (see Fodor, 1998; Gibson and Wexler, 1994; Tesar and Smolensky, 1997, for discussions about learnability). In other words, a purely phonological analysis may bootstrap the acquisition process but not solve it entirely. Fernald (Fernald and McRoberts, 1996) also discussed the potential help of prosody in bootstrapping syntactic acquisition: She criticized this approach on the grounds that the proportion of syntactic boundaries reliably cued by prosodic cues (pauses, lengthening, etc.) is not high. Some syntactic boundaries are not marked and some cues appear in inadequate positions. If so, children may often think that a syntactic boundary is present when none is and posit erroneous grammars. This argument holds only if one assumes that children use prosodic boundaries to parse every sentence they hear. Infants' brains may instead keep statistical tabs established on the basis of prosody and use such information to learn something about their language. For instance, they may tally the most frequent syllables at prosodic edges, in which case they find the function words and morphemes of their language at the top of the list. In that view, it does not matter very much if prosodic boundaries do not correlate perfectly with syntactic boundaries.
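For concreteness, the arithmetic behind the halving claim can be sketched in a few lines (the number of parameters used here is purely illustrative):

```python
# Halving of the grammar space: with N independent binary parameters there are
# 2**N candidate grammars, and each parameter that is set halves what remains,
# so the first parameter set rules out the largest number of grammars.
N = 20                                    # hypothetical number of binary parameters
remaining = 2 ** N
for k in range(1, N + 1):
    eliminated = remaining // 2           # grammars ruled out by setting parameter k
    remaining -= eliminated
    if k in (1, 2, N):
        print(f"parameter {k}: eliminates {eliminated}, leaves {remaining}")
# parameter 1: eliminates 524288, leaves 524288
# parameter 2: eliminates 262144, leaves 262144
# parameter 20: eliminates 1, leaves 1
```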

Learning the syntax and phonology of their native language is not all babies have to do to acquire a functioning language. They also have to learn the lexicon, an arbitrary collection of word forms and their associated meanings (word forms respect the phonology of the language, but there is no systematic relationship between form and meaning). Experimental work shows that babies start to know the meaning of a few words at about 1 year of age (Oviatt, 1980; Thomas et al., 1981), and this is consistent with parental report. Mehler, Dupoux, and Segui (1990) argued that infants must be able to segment and store word forms in an appropriate language-specific representation before the end of the first year of life in order to be ready for the difficult task of linking word forms to their meaning (Gleitman and Gleitman, 1997). Recent experiments by Peter Jusczyk and colleagues supported this view by showing that babies are able to extract word forms from the speech stream and remember them by about 8 months of age. Jusczyk and Aslin (1995) showed that 7.5-month-olds attended less to new words than to "familiar" words, i.e., isolated monosyllabic words with which they had been familiarized in the context of whole sentences. This result was later replicated with multisyllabic words, indicating that babies of this age have at least some capacity to extract word-like units from continuous speech (Jusczyk, 1996). These word forms are not forgotten immediately afterwards. When 8-month-olds were repeatedly exposed to recorded stories at home and then brought into the lab a week later, they exhibited a listening preference for words that frequently occurred in these stories (Jusczyk and Hohne, 1997).

How do babies extract word forms from the speech stream? This area of research has exploded since our last version of this chapter, and we now have a reasonably good view of what babies might do. Babies may exploit distributional regularities (i.e., segments belonging to a word tend to cohere). Brent and Cartwright (1996) showed that an algorithm exploiting these cues could recover a significant number of words from unsegmented input. In addition, Morgan (1994) demonstrated that 8-month-old babies exploited distributional regularity to package unsegmented strings of syllables (see also Saffran, Aslin, and Newport, 1996). Babies may also exploit the phonotactic regularities of the language, such as the fact that some strings of segments occur only at the beginning or at the end of words. Babies may learn about these regularities by examining utterance boundaries, then use these regularities to find word boundaries (Brent and Cartwright, 1996; Cairns et al., 1997). Interestingly, Friederici and Wessels (1993) showed that at 9 months, babies are already sensitive to the phonotactic regularities of their native language (see also Jusczyk, Cutler, and Redanz, 1993; Jusczyk, Luce, and Charles-Luce, 1994).
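To make the notion of a distributional regularity concrete, here is a minimal sketch of one such statistic, the forward transitional probability between syllables, with word boundaries posited at local dips. It only illustrates the general idea on a made-up corpus; it is not the algorithm of Brent and Cartwright (1996) or of Saffran, Aslin, and Newport (1996):

```python
import random
from collections import Counter

def transitional_probabilities(corpus):
    """Estimate P(next syllable | current syllable) from a list of utterances."""
    unigrams, bigrams = Counter(), Counter()
    for utterance in corpus:
        unigrams.update(utterance)
        bigrams.update(zip(utterance, utterance[1:]))
    return {pair: n / unigrams[pair[0]] for pair, n in bigrams.items()}

def segment(utterance, tp):
    """Posit a word boundary wherever the transitional probability dips below its neighbors."""
    probs = [tp.get(pair, 0.0) for pair in zip(utterance, utterance[1:])]
    words, current = [], [utterance[0]]
    for i in range(1, len(utterance)):
        left = probs[i - 1]
        before = probs[i - 2] if i >= 2 else 1.0
        after = probs[i] if i < len(probs) else 1.0
        if left < min(before, after):       # local dip: low cohesion between these syllables
            words.append(current)
            current = []
        current.append(utterance[i])
    words.append(current)
    return words

# Toy corpus: three made-up trisyllabic "words" concatenated in random order, so
# within-word transitions are perfectly predictable and between-word ones are not.
random.seed(0)
lexicon = [["to", "ki", "bu"], ["go", "pi", "la"], ["ti", "po", "lu"]]
corpus = [[s for w in random.sample(lexicon, 3) for s in w] for _ in range(200)]
tp = transitional_probabilities(corpus)
print(segment(corpus[0], tp))   # recovers the three words of the first utterance
```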

Next, babies may rely on their knowledge of the typical word shapes of their language. Anne Cutler and her colleagues extensively explored a well-known example of such a strategy in English. Cutler's metrical segmentation strategy relies on the fact that English content words predominantly start with a strong syllable (containing a full vowel, as opposed to a weak syllable containing a reduced vowel). Adult English-speaking listeners make use of this regularity when listening to faint speech or when locating words in nonsense syllable strings (e.g., Cutler and Butterfield, 1992; McQueen, Norris, and Cutler, 1994). Importantly, 9-month-old American babies prefer to listen to lists of strong-weak words (bisyllabic words in which the first syllable is strong and the second weak) rather than to lists of weak-strong words (Jusczyk et al., 1993). In addition, Peter Jusczyk and his colleagues showed, in a recent series of experiments, that babies of about 8 months find it easier to segment strong-weak (SW) words than weak-strong (WS) words from passages (Newsome and Jusczyk, 1995).
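In its crudest form, the metrical segmentation strategy amounts to positing a word onset at every strong syllable. A toy sketch of that heuristic follows; the capital-letter strength coding and the example utterance are invented for illustration and are not taken from Cutler's work:

```python
def metrical_segmentation(syllables, is_strong):
    """Posit a word onset at every strong (full-vowel) syllable, as a crude
    stand-in for a metrical segmentation heuristic."""
    words, current = [], []
    for syll in syllables:
        if is_strong(syll) and current:   # a strong syllable starts a new candidate word
            words.append(current)
            current = []
        current.append(syll)
    words.append(current)
    return words

# Strong syllables written in capitals, weak ones in lower case ("the doctor decided").
utterance = ["the", "DOC", "tor", "de", "CI", "ded"]
print(metrical_segmentation(utterance, lambda s: s.isupper()))
# [['the'], ['DOC', 'tor', 'de'], ['CI', 'ded']]
```

Note how the weak initial syllable of "decided" gets glued onto the preceding word; errors of exactly this kind are why weak-strong words are harder to segment under such a strategy.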

Finally, babies may rely on prosodic cues to perform an initial segmentation of continuous speech. Prosodic units roughly corresponding to phonological phrases (small units containing one or two content words plus some grammatical words or morphemes) were shown to be available to adult listeners (de Pijper and Sanderman, 1994), and presumably to babies as well (Christophe et al., 1994; Gerken, Jusczyk, and Mandel, 1994; see also Morgan, 1996; Fisher and Tokura, 1996, for evidence that prosodic boundary cues have robust acoustic correlates in infant-directed speech). These prosodic units would not allow babies to isolate every word; however, they would provide a first segmentation of the speech stream and restrict the domain of operation of other strategies (e.g., distributional regularities). In addition, prosodic units possess the interesting property that they derive from syntactic constituents (in a nonisomorphic fashion; see Nespor and Vogel, 1986); as a consequence, function words and morphemes tend to occur at prosodic edges. Therefore, infants may keep track of frequent syllables at prosodic edges, and the most frequent will correspond to the function words and morphemes in their language. Actually, LouAnn Gerken and her colleagues showed that 11-month-old American babies reacted to the replacement of function words by nonwords, indicating that they already knew at least some of the English function words (Gerken, 1996). Once babies have identified the function words of their native language, they may "strip" syllables homophonous to function words from the beginning and end of prosodic units; the remainder can then be treated as one or several content words.
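The tallying-and-stripping idea can be sketched the same way. This is a toy illustration with invented phrases written as word lists; infants would of course operate over syllables and acoustic units rather than printed words:

```python
from collections import Counter

def frequent_edge_syllables(prosodic_units, top_n=2):
    """Tally the first and last element of each prosodic unit; the most frequent
    edge elements are candidate function words or morphemes."""
    edges = Counter()
    for unit in prosodic_units:
        edges[unit[0]] += 1
        edges[unit[-1]] += 1
    return {item for item, _ in edges.most_common(top_n)}

def strip_edges(unit, function_words):
    """Strip elements homophonous to function words from both edges of a unit;
    the remainder is treated as one or several content words."""
    start, end = 0, len(unit)
    while start < end and unit[start] in function_words:
        start += 1
    while end > start and unit[end - 1] in function_words:
        end -= 1
    return unit[start:end]

# Toy phonological phrases.
units = [["the", "big", "dog"], ["a", "cat"], ["the", "ball"],
         ["a", "red", "ball"], ["the", "small", "cat"], ["a", "dog"]]
candidates = frequent_edge_syllables(units)          # {'the', 'a'} on this toy input
print([strip_edges(u, candidates) for u in units])   # the content-word remainders
```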

This area of research illustrates particularly well the advantages of studying adults and babies simultaneously. For instance, the fact that English speakers rely on the predominant word pattern of English to perform lexical segmentation was first shown for adult subjects; this result was then extended to babies between 8 and 12 months of age. Reciprocally, the use of prosody to hypothesize word or syntactic boundaries was first advocated to solve the acquisition problem, and is now being fruitfully studied in adults (e.g., Warren et al., 1995).

We have argued that the speech signal provides fairly useful hints to isolate some basic properties of one's maternal language. There is, however, a situation that presents the infant with incompatible evidence; that is, sometimes two languages are systematically presented at the same time in the environment. But this apparently insurmountable problem is readily solved by every child raised in a multilingual society; no important delay in language acquisition has ever been documented. We have to provide a model of language acquisition that can account for that.

Bosch and Sebastian-Galles (1997) have shown that at 4 months bilingual Spanish/Catalan babies already behave differently than do monolingual controls of either language. These investigators showed that monolingual babies orient faster to their mother tongue (Spanish for half of the babies and Catalan for the other half) than to English. In contrast, bilingual babies orient to Spanish or Catalan significantly more slowly than to English. In more recent and still unpublished work, these authors show that the bilingual infants can discriminate between Spanish and Catalan, indicating that confusion cannot be adduced to explain the above results. Needing to keep the languages separate, bilingual infants might be performing more fine-grained analyses of both Spanish and Catalan, and hence need more processing time.

Does the bilingual baby become equally competent in both languages? To answer this question, we have to study adult bilinguals. Pallier, Bosch, and Sebastian-Galles (1997a) studied Spanish/Catalan bilinguals who learned to speak Catalan and Spanish before the age of 4 (although one language was always dominant, the one spoken by both parents). Whereas Spanish has only one /e/ vowel, Catalan has two, an open and a closed /e/. Pallier and colleagues showed that the Spanish-dominant speakers could not correctly perceive the Catalan vocalic contrast, even though they had had massive exposure to that language and spoke it fluently. Had these subjects been exposed to both languages in the crib, would they have learned both vowel systems? Behavioral studies are not yet readily available.

In the remainder of this section, we will review what brain-imaging techniques can tell us about language representation in bilingual adults. Mazoyer and colleagues (1993) explored brain activity in monolingual adults who were listening to stories in their maternal tongue (French) or in an unknown foreign language (Tamil). Listening to French stories activated a large left-hemisphere network including parts of the prefrontal cortex and the temporal lobes, a result that is congruent with the standard teachings of neuropsychology.1 In contrast, the activation for Tamil was restricted to the midtemporal areas in both the left and right hemispheres. Was the left hemisphere of the French subjects activated by French because it was the participants' mother tongue or because French, in contrast to Tamil, was a language they understood? Perani and colleagues (1996) investigated native Italians with moderate-to-good English comprehension. When Italians were listening to Italian, they exhibited the same pattern of activation as that reported in the Mazoyer study. Italians listening to and understanding English displayed a weak symmetrical right-hemisphere and left-hemisphere activation. Surprisingly, they displayed a similar activation when listening to stories in Japanese, a language they did not understand.2 Comparing the cortical activation to English and Japanese ought to have uncovered the cortical areas dedicated to processing the words, syntax, and semantics of English. Figure 62.3 illustrates the fact that our expectation was not fulfilled.

How can we explain this failure? It seems unlikely that natural languages should yield comparable activation regardless of comprehension. An alternative account is that the native language yields the same cortical structures in every volunteer, while a second language "shows greater inter-individual variability and therefore fails to stand out when averaged across subjects." Dehaene and colleagues (1997) used fMRI to explore this possible explanation. They examined eight French volunteers who had a fair understanding of English, comparable in proficiency to the bilinguals tested by Perani and co-workers. The French volunteers listened to French or English stories and backward speech in alternation. When listening to French, their mother tongue, they all showed more activity in and around the left superior temporal sulcus. But when they listened to English, left and right cortical activation was highly variable among subjects (see figure 62.4). This result may reflect the fact that a second language may be learned in a number of different ways (e.g., through explicit tuition or more naturally), and hence the end result may vary from one individual to the next. To assess this, we need to examine the cortical representation of the second language when it was learned in a natural setting during childhood.

Perani and colleagues (1997) tried to explore these issues by testing two groups of highly proficient bilinguals. The first were Spanish/Catalan bilinguals who had acquired their second language between ages 2 and 4 and appeared to be equally proficient in both of their languages (but see Pallier, Bosch, and Sebastian-Galles, 1997a, who tested the same population of subjects). The second were Italians who had learned English after the age of 10 and had attained excellent performance. The main finding of this PET study was that the cortical representations of the two languages were very similar (for both groups of high-proficiency bilinguals), and significantly different from that in low-proficiency bilinguals. This study suggests that proficiency is more important than age of acquisition as a determinant of cortical representation of the second language, at least in bilinguals who speak languages that are historically, lexically, and syntactically reasonably close (even though there are phonological differences).

FIGURE 62.3 Patterns of activation in a PET study measuring the activity in Italian speakers' brains while listening to Italian (mother tongue), English (second language), Japanese (unknown language), and backward Japanese (not a possible human language). There was a significant activation difference between Italian and English. In contrast, English and Japanese did not differ significantly. Japanese differed significantly from backward Japanese. (Adapted from Perani et al., 1996.)

All these studies investigated the cortical activation while listening to speech. What happens during speech production?

Kim and colleagues (1997) investigated a kind of speech production situation: They used fMRI with bilingual volunteers who were silently speaking in either of their languages. They tested bilingual volunteers who mastered their second language either early or late in life (the languages involved in the study were highly varied). They found that the first and second languages have overlapping representations in Broca's area in early learners, whereas these representations are segregated in late learners. In contrast, both languages overlap over Wernicke's area regardless of age of acquisition. However, only age of acquisition and not proficiency was controlled; thus it is difficult to evaluate their separate contributions. As Perani and colleagues have indicated, there is a negative correlation between age of acquisition and proficiency (Johnson and Newport, 1989).

How can we harmonize the picture we get from the brain imaging studies with behavioral results? Even with very proficient bilinguals, it has been shown that the languages are not equivalent: People behave as natives in their dominant language and do not perform perfectly well in the other language. This has been shown for phonetic perception (Pallier, Bosch, and Sebastian-Galles, 1997a), sublexical representations (Cutler et al., 1989, 1992), grammaticality judgments (Weber-Fox and Neville, 1996), and speech production (Flege, Munro, and MacKay, 1995). Possibly, the fact that the cortical representations for the first and second languages tend to span overlapping areas with increased proficiency cannot be taken to mean equivalent competence. As Perani and colleagues state:

A possible interpretation of what brain imaging is telling us is that in the case of low-proficiency individuals, the brain is recruiting multiple, and variable, brain regions to handle as far as possible the dimensions of the second language, which are different from the first language. As proficiency increases, the highly proficient bilinguals use the same neural machinery to deal with the first and the second languages. However, this anatomical overlap cannot exclude that this brain network is using the linguistic structures of the first language to assimilate less than perfectly the dimensions of the second language.

Another inconsistency will have to be explained in future work. We know from several studies (e.g., Cohen et al., 1997; Kaas, 1995; Rauschecker, 1997; Sadato et al., 1996) that changing the nature and distribution of sensory input can result in important modifications of the cortical maps, even in adult organisms. However, people do not reach native performance in a second language, despite massive exposure, training, and motivation. Can we reconcile cortical plasticity with behavioral rigidity?

FIGURE 62.4 Intersubject variability in the cortical representation of language is greater for the second than for the first language. Each bar represents an anatomical region of interest (left and right hemisphere). Its length represents the average active volume (in square millimeters) in that region. Its color reflects the number of subjects in which that region was active. (Adapted from Dehaene et al., 1997.)

Yet another area in great need of more research is the functional significance of activation maps in general. Consider the work by Neville and her colleagues on the representation of American Sign Language (ASL) in deaf and hearing native signers. These authors carried out an fMRI study and showed that ASL is bilaterally represented in both populations, while written English is basically represented in the left hemisphere of the hearing signers (who were also native speakers of English). Hickok, Bellugi, and Klima (1996) studied 23 native speakers of ASL with unilateral brain lesions and found clear and convincing evidence for a left-hemispheric dominance, much like the one found in speakers of natural languages. This leaves us in a dilemma: Either we assert that it is not easy to draw functional conclusions from activation maps, and/or we have to posit that ASL is not a natural language.

Mapping the higher cognitive functions to neural networks is an evolving skill. In the near future, we expect to witness a trend toward a more exhaustive study of a given function in a single subject. For instance, each volunteer could be tested several times to investigate in detail how language competence is organized and represented in his/her cortex. This should help us to elucidate several of the seemingly paradoxical findings we reported above. However things evolve, it is clear that when we understand these issues in greater depth, we will be able to make gigantic leaps in our understanding of development.

We have presented data drawn from experimental psychology, infant psychology, and brain imaging to propose another way of addressing the study of development; this, to illustrate an approach to the study of development now gaining in support. By bringing together some of the areas that jointly clarify how the human mind gains access to language, logic, mathematics, and other such recursive systems, we can begin to answer several fundamental questions: (1) Why is the acquisition of some capacities possible for human brains and not for the brains of other higher vertebrates? (2) Are there skills that can be acquired more naturally and with a better outcome before a certain age? (3) Does the acquisition of some skill facilitate the acquisition of similar skills in older organisms, those who would normally show little disposition to learn "from scratch"? The need to answer such questions is the essence of many developmental research programs. The fact that questions like these are being asked has implications for the cognitive psychologist and for cognitive neuroscience. Cognitive psychologists are becoming increasingly aware that they must conceive of development as a critical aspect of cognitive neuroscience. If, for instance, there are time schedules to naturally learn certain skills, the joint study of the time course of development and the concurrent maturation of neural structures may tell us which neural structures are responsible for the acquisition of which capacities. Coupled with brain-imaging and neuropsychological data, these facts may help us to understand better the mapping between neural structure and cognitive function. One may call such a study the "neuropsychology of the normal."

In concluding, we dispute the notion that the acquisition of mental abilities can be sufficiently studied by looking only at growing children. We suggest that to understand the mechanisms responsible for change, one must first specify the normal envelope of stable states and the species-specific endowment; then we can go on to develop a theory of acquisition that accounts for the mapping from the initial to the stable state. Many different disciplines will have to participate in achieving this goal. Neuroscience has made many discoveries that are essential to the understanding of development; in particular, brain-imaging studies may bring in a new perspective to the research of growth and development. Thus, rather than making the study of development an autonomous part of cognition, we conceive of it as a unique source of information that should be an integral component of cognitive psychology.

ACKNOWLEDGMENTS We wish to thank the Human Frontiers Science Programme, the Human Capital and Mobility Programme, the Direction de la Recherche, des Etudes, et de la Technologie, and the Franco-Spanish grant.

NOTES

1. There are some areas (e.g., the anterior poles of the temporal lobes and Brodmann's area 8) that had not often been related to language processing by classical neuropsychology.

2. Interestingly, the left inferior frontal gyrus, the left midtemporal gyrus, and the inferior left parietal lobule were more active when subjects were listening to Japanese compared to backward Japanese. These activations could mirror the subjects' attempt to store the input as meaningless phonological information in auditory short-term memory (see Paulesu, Frith, and Frackowiak, 1993). Perani and colleagues observed that the left middle temporal gyrus activation appeared only when subjects were listening to speech, regardless of whether they understood the stories, but not when listening to backward speech, an unnatural stimulus. The authors suggest that the left middle temporal gyrus is highly attuned to the processing of the sound patterns of any natural language.

REFERENCES

ALTMANN, G. T. M., and D. H. YOUNG, 1993. Factors affecting adaptation to time-compressed speech. Paper presented at Eurospeech 93, Berlin.

BAHRICK, L. E., and J. N. PICKENS, 1988. Classification of bimodal English and Spanish language passages by infants. Infant Behav. Dev. 11:277-296.

BERTONCINI, J., R. BIJELJAC-BABIC, P. W. JUSCZYK, L. KENNEDY, and J. MEHLER, 1988. An investigation of young infants' perceptual representations of speech sounds. J. Exp. Psychol. 117:21-33.

BOSCH, L., and N. SEBASTIAN-GALLES, 1997. Native-language recognition abilities in four-month-old infants from monolingual and bilingual environments. Cognition 65:33-69.

BRENT, M. R., and T. A. CARTWRIGHT, 1996. Distributional regularity and phonotactic constraints are useful for segmentation. Cognition 61:93-125.

CAIRNS, P., R. SHILLCOCK, N. CHATER, and J. LEVY, 1997. Bootstrapping word boundaries: A bottom-up corpus-based approach to speech segmentation. Cogn. Psychol. 33:111-153.

CAVALLI-SFORZA, L. L., 1991. Genes, people and languages. Scientific American 265:104-110.

CHOMSKY, N., 1988. Language and Problems of Knowledge. Cambridge, Mass.: MIT Press.

CHRISTOPHE, A., and E. DUPOUX, 1996. Bootstrapping lexical acquisition: The role of prosodic structure. Linguistic Rev. 13:383-412.

CHRISTOPHE, A., E. DUPOUX, J. BERTONCINI, and J. MEHLER, 1994. Do infants perceive word boundaries? An empirical study of the bootstrapping of lexical acquisition. J. Acoust. Soc. Amer. 95:1570-1580.

CHRISTOPHE, A., M. T. GUASTI, M. NESPOR, E. DUPOUX, and B. VAN OOYEN, 1997. Reflections on phonological bootstrapping: Its role for lexical and syntactic acquisition. Lang. Cogn. Processes 12:585-612.

CHRISTOPHE, A., M. T. GUASTI, M. NESPOR, E. DUPOUX, and B. VAN OOYEN, 1999. Prosodic structure and syntactic acquisition: The case of the head-complement parameter. In preparation.

CHRISTOPHE, A., and J. MORTON, 1998. Is Dutch native English? Linguistic analysis by 2-month-olds. Dev. Sci., in press.

COHEN, L. G., P. CELNIK, A. PASCUAL-LEONE, B. CORWELL, L. FALZ, J. DAMBROSIA, M. HONDA, N. SADATO, C. GERLOFF, M. D. CATALA, and M. HALLETT, 1997. Functional relevance of cross-modal plasticity in blind humans. Nature 389:180-183.

CUTLER, A., and S. BUTTERFIELD, 1992. Rhythmic cues to speech segmentation: Evidence from juncture misperception. J. Mem. Lang. 31:218-236.

CUTLER, A., J. MEHLER, D. NORRIS, and J. SEGUI, 1989. Limits on bilingualism. Nature 340:229-230.

CUTLER, A., J. MEHLER, D. NORRIS, and J. SEGUI, 1992. The monolingual nature of speech segmentation by bilinguals. Cogn. Psychol. 24:381-410.

CUTLER, A., and T. OTAKE, 1994. Mora or phoneme: Further evidence for language-specific listening. J. Mem. Lang. 33:824-844.

DE PIJPER, J. R., and A. A. SANDERMAN, 1994. On the perceptual strength of prosodic boundaries and its relation to suprasegmental cues. J. Acoust. Soc. Amer. 96:2037-2047.

DEHAENE, S., E. DUPOUX, J. MEHLER, L. COHEN, E. PAULESU, D. PERANI, P.-F. VAN DE MOORTELE, S. LEHERICY, and D. LE BIHAN, 1997. Anatomical variability in the cortical representation of first and second languages. Neuroreport 8:3809-3815.

DEHAENE-LAMBERTZ, G., and D. HOUSTON, 1998. A chronometric study of language discrimination in two-month-old infants. Lang. Speech, in press.

DUPOUX, E., and K. GREEN, 1997. Perceptual adjustment to highly compressed speech: Effects of talker and rate changes. J. Exp. Psychol.: Human Percept. Perform. 23:914-927.

DUPOUX, E., K. KAKEHI, Y. HIROSE, C. PALLIER, S. FITNEVA, and J. MEHLER, 1999. Epenthetic vowels in Japanese: A perceptual illusion. J. Mem. Lang., submitted.

DUPOUX, E., C. PALLIER, N. SEBASTIAN-GALLES, and J. MEHLER, 1997. A destressing "deafness" in French? J. Mem. Lang. 36:406-421.

ELMAN, J. L., E. A. BATES, M. H. JOHNSON, A. KARMILOFF-SMITH, D. PARISI, and K. PLUNKETT, 1996. Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, Mass.: MIT Press.

FERNALD, A., and G. MCROBERTS, 1996. Prosodic bootstrapping: A critical analysis of the argument and the evidence. In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth, eds. Mahwah, N.J.: Lawrence Erlbaum, pp. 365-388.

FISHER, C., and H. TOKURA, 1996. Prosody in speech to infants: Direct and indirect acoustic cues to syntactic structure. In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth, eds. Mahwah, N.J.: Lawrence Erlbaum, pp. 343-363.

FLEGE, J. E., M. J. MUNRO, and I. R. A. MACKAY, 1995. Factors affecting strength of perceived foreign accent in a second language. J. Acoust. Soc. Amer. 97:3125-3131.

FODOR, J. D., 1998. Unambiguous triggers. Linguistic Inquiry 29, in press.

FRIEDERICI, A. D., and J. M. I. WESSELS, 1993. Phonotactic knowledge of word boundaries and its use in infant speech perception. Percept. Psychophys. 54:287-295.

GALL, F. J., 1835. Works: On the Function of the Brain and Each of Its Parts. Boston: Marsh, Capen and Lyon.

GERKEN, L., 1996. Phonological and distributional information in syntax acquisition. In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth, eds. Mahwah, N.J.: Lawrence Erlbaum, pp. 411-425.

GERKEN, L., P. W. JUSCZYK, and D. R. MANDEL, 1994. When prosody fails to cue syntactic structure: 9-month-olds' sensitivity to phonological versus syntactic phrases. Cognition 51:237-265.

GIBSON, E., and K. WEXLER, 1994. Triggers. Linguistic Inquiry 25:407-454.

GLEITMAN, L., 1990. The structural sources of verb meanings.Language Acquisition 1:3-55.

GLEITMAN, L., and H. GLEITMAN, 19970 What is a languagemade out of? Lingua 100:29-55.

GLEITMAN, L., and E. WANNER, 19H2. The state of the state ofthe art. In Language Acquisition: The State of the Art, E. Wannerand L. Gleitman, eds, Cambridge: Cambridge UniversityPress, pp. 3-4H.

GUASTI, M. T., M. NESPOR, A. CHRISTOPHE, and B. VAN OOYEN, 1999. Pre-lexical setting of the head-complement parameter through prosody. In Signal to Syntax II, J. Weissenborn and B. Hohle, eds. Submitted.

HICKOK, G., U. BELLUGI, and E. S. KLIMA, 1996. The neurobiology of sign language and its implications for the neural basis of language. Nature 381:699-702.

JOHNSON, J. S., and E. L. NEWPORT, 1989. Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language. Cogn. Psychol. 21:60-99.

JUSCZYK, P. W., 1996. Investigations of the word segmentation abilities of infants. Paper presented at the 4th International Conference on Spoken Language Processing, Philadelphia.

JUSCZYK, P. W., and R. N. ASLIN, 1995. Infants' detection of the sound patterns of words in fluent speech. Cogn. Psychol. 29:1-23.

JUSCZYK, P. W., A. CUTLER, and N. J. REDANZ, 1993. Infants' preference for the predominant stress patterns of English words. Child Dev. 64:675-687.

JUSCZYK, P. W., and E. A. HOHNE, 1997. Infants' memory for spoken words. Science 277:1984-1986.

JUSCZYK, P. W., P. A. LUCE, and J. CHARLES-LUCE, 1994. Infants' sensitivity to phonotactic patterns in the native language. J. Mem. Lang. 33:630-645.

KAAS, J. H., 1995. The reorganization of sensory and motor maps in adult mammals. In The Cognitive Neurosciences, M. S. Gazzaniga, ed. Cambridge, Mass.: MIT Press, pp. 51-72.

KIM, K. H. S., N. R. RELKIN, K. M. LEE, and J. HIRSCH, 1997. Distinct cortical areas associated with native and second languages. Nature 388:171-174.

KUHL, P. K., K. A. WILLIAMS, F. LACERDA, K. N. STEVENS, and B. LINDBLOM, 1992. Linguistic experience alters phonetic perception in infants by 6 months of age. Science 255:606-608.

LENNEBERG, E., 1967. Biological Foundations of Language. New York: Wiley.

MAZOYER, B. M., S. DEHAENE, N. TZOURIO, V. FRAK, N. MURAYAMA, L. COHEN, O. LEVRIER, G. SALAMON, A. SYROTA, and J. MEHLER, 1993. The cortical representation of speech. J. Cogn. Neurosci. 5:467-479.

MAZUKA, R., 1996. How can a grammatical parameter be set before the first word? In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth, eds. Mahwah, N.J.: Lawrence Erlbaum, pp. 313-330.

MCQUEEN, J. M., D. NORRIS, and A. CUTLER, 1994. Competition in spoken word recognition: Spotting words in other words. J. Exp. Psychol.: Learn. Mem. Cognit. 20:621-638.

MEHLER, J., and A. CHRISTOPHE, 1995. Maturation and learning of language in the first year of life. In The Cognitive Neurosciences, M. S. Gazzaniga, ed. Cambridge, Mass.: MIT Press, pp. 943-954.

MEHLER, J., and E. DUPOUX, 1994. What Infants Know. Cambridge, Mass.: Basil Blackwell.

MEHLER, J., E. DUPOUX, T. NAZZI, and G. DEHAENE-LAMBERTZ, 1996. Coping with linguistic diversity: The infant's viewpoint. In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth, eds. Mahwah, N.J.: Lawrence Erlbaum, pp. 101-116.

MEHLER, J., E. DUPOUX, and J. SEGUI, 1990. Constraining models of lexical access: The onset of word recognition. In Cognitive Models of Speech Processing: Psycholinguistic and Computational Perspectives, G. T. M. Altmann, ed. Cambridge, Mass.: MIT Press, pp. 263-280.

MEHLER, J., and R. FOX (eds.), 1985. Neonate Cognition: Beyond the Blooming, Buzzing Confusion. Hillsdale, N.J.: Lawrence Erlbaum.

MEHLER, J., P. W. JUSCZYK, G. LAMBERTZ, N. HALSTED, J. BERTONCINI, and C. AMIEL-TISON, 1988. A precursor of language acquisition in young infants. Cognition 29:143-178.

MEHLER, J., N. SEBASTIAN-GALLES, G. ALTMANN, E. DUPOUX, A. CHRISTOPHE, and C. PALLIER, 1993. Understanding compressed sentences: The role of rhythm and meaning. Ann. N.Y. Acad. Sci. 682:272-282.

MILLER, G. A., and P. E. NICELY, 1955. An analysis of perceptual confusions among some English consonants. J. Acoust. Soc. Amer. 27:338-352.

MOON, C., R. COOPER, and W. FIFER, 1993. Two-day-olds prefer their native language. Infant Behav. Dev. 16:495-500.

MORGAN, J., 1996. Phonological bootstrapping in early acquisition: The importance of starting small versus starting "sharp." Paper presented at How to Get into Language: Approaches to Bootstrapping in Early Language Development, Potsdam, Germany.

MORGAN, J. L., 1986. From Simple Input to Complex Grammar. Cambridge, Mass.: MIT Press.

MORGAN, J. L., 1994. Converging measures of speech segmentation in preverbal infants. Infant Behav. Dev. 17:389-403.

MORGAN, J. L., and K. DEMUTH, 1996a. Signal to syntax: An overview. In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth, eds. Mahwah, N.J.: Lawrence Erlbaum, pp. 1-22.

MORGAN, J. L., and K. DEMUTH (eds.), 1996b. Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition. Mahwah, N.J.: Lawrence Erlbaum.

MORGAN, J. L., R. SHI, and P. ALLOPENNA, 1996. Perceptual bases of rudimentary grammatical categories: Toward a broader conceptualization of bootstrapping. In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth, eds. Mahwah, N.J.: Lawrence Erlbaum, pp. 263-283.

NAZZI, T., J. BERTONCINI, and J. MEHLER, 1998. Language discrimination by newborns: Towards an understanding of the role of rhythm. J. Exp. Psychol.: Human Percept. Perform. 24:1-11.

NESPOR, M., M. T. GUASTI, and A. CHRISTOPHE, 1996. Selecting word order: The rhythmic activation principle. In Interfaces in Phonology, U. Kleinhenz, ed. Berlin: Akademie Verlag, pp. 1-26.

NESPOR, M., and I. VOGEL, 1986. Prosodic Phonology. Dordrecht: Foris.

NEWSOME, M. R., and P. W. JUSCZYK, 1995. Do infants use stress as a cue in segmenting fluent speech? In Proceedings of the 19th Boston University Conference on Language Development, Vol. 2, D. MacLaughlin and S. McEwen, eds. Boston, Mass.: Cascadilla Press, pp. 415-426.

OTAKE, T., K. YONEYAMA, A. CUTLER, and A. VAN DER LUGT, 1996. The representation of Japanese moraic nasals. J. Acoust. Soc. Amer. 100:3831-3842.

OVIATT, S. L., 1980. The emerging ability to comprehend language: An experimental approach. Child Dev. 51:97-106.

PALLIER, C., L. BOSCH, and N. SEBASTIAN-GALLES, 1997. A limit on behavioral plasticity in vowel acquisition. Cognition 64:B9-B17.

PALLIER, C., A. CHRISTOPHE, and J. MEHLER, 1997. Language-specific listening. Trends Cogn. Sci. 1:129-132.

PALLIER, C., N. SEBASTIAN-GALLES, and A. COLOME, 1999. The use of phonological code in auditory repetition priming. In preparation.

PALLIER, C., N. SEBASTIAN-GALLES, E. DUPOUX, A. CHRISTOPHE, and J. MEHLER, 1998. Perceptual adjustment to time-compressed speech: A cross-linguistic study. Mem. Cognit., in press.

PAULESU, E., C. D. FRITH, and R. S. J. FRACKOWIAK, 1993. The neural correlates of the verbal component of working memory. Nature 362:342-345.

PERANI, D., S. DEHAENE, F. GRASSI, L. COHEN, S. F. CAPPA, E. DUPOUX, F. FAZIO, and J. MEHLER, 1996. Brain processing of native and foreign languages. Neuroreport 7:2439-2444.

PERANI, D., E. PAULESU, N. SEBASTIAN-GALLES, E. DUPOUX, S. DEHAENE, V. BETTINARDI, S. F. CAPPA, J. MEHLER, and F. FAZIO, 1997. The bilingual brain: Proficiency and age of acquisition of the second language.

PINKER, S., 1994. The Language Instinct. London: Penguin Books.

RAMUS, F., and J. MEHLER, 1999. Language identification with suprasegmental cues: A study based on speech resynthesis. J. Acoust. Soc. Amer., submitted.

RAUSCHECKER, J. P., 1997. Mechanisms of compensatory plasticity in cerebral cortex. In Brain Plasticity: Advances in Neurology, Vol. 73, H.-J. Freund, B. A. Sabel, and O. W. Witte, eds. Philadelphia: Lippincott-Raven, pp. 137-146.

RENFREW, C., 1994. World linguistic diversity. Scientific American 270:116-123.

SADATO, N., A. PASCUAL-LEONE, J. GRAFMAN, V. IBANEZ, M. P. DEIBER, G. DOLD, and M. HALLETT, 1996. Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380:526-528.

SAFFRAN, J. R., R. N. ASLIN, and E. L. NEWPORT, 1996. Statistical learning by 8-month-old infants. Science 274:1926-1928.

SEBASTIAN-GALLES, N., E. DUPOUX, and A. COSTA, 1999. Adaptation to time-compressed speech: Phonological determinants. Submitted.

TESAR, B., and P. SMOLENSKY, 1997. Learnability in Optimality Theory.

THOMAS, D. G., J. J. CAMPOS, D. W. SHUCARD, D. S. RAMSEY, and J. SHUCARD, 1981. Semantic comprehension in infancy: A signal detection analysis. Child Dev. 52:798-803.

WARREN, P., F. NOLAN, E. GRABE, and T. HOLST, 1995. Post-lexical and prosodic phonological processing. Lang. Cogn. Processes 10:411-417.

WEBER-FOX, C. M., and H. J. NEVILLE, 1996. Maturational constraints on functional specializations for language processing: ERP and behavioral evidence in bilingual speakers. J. Cogn. Neurosci. 8:231-256.

WEISSENBORN, J., and B. HOHLE (eds.), 1999. From Signal to Syntax, II: Approaches to Bootstrapping in Early Language Development. In preparation.