Hemispheric differences in processing dichotic meaningful and non-meaningful words

Neuropsychologia 45 (2007) 2718–2729

Hemispheric differences in processing dichotic meaningful and non-meaningful words

Ifat Yasin
Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
Tel.: +44 1865 281860; fax: +44 1865 310447. E-mail address: ifat.yasin@psy.ox.ac.uk

Received 16 August 2006; received in revised form 30 March 2007; accepted 6 April 2007. Available online 13 April 2007.
doi:10.1016/j.neuropsychologia.2007.04.009

Abstract

Classic dichotic-listening paradigms reveal a right-ear advantage (REA) for speech sounds as compared to non-speech sounds. This REA is assumed to be associated with a left-hemisphere dominance for meaningful speech processing. This study objectively probed the relationship between ear advantage and hemispheric dominance in a dichotic-listening situation, using event-related potentials (ERPs). The mismatch negativity (MMN) and a late negativity (LN) were measured for bisyllabic meaningful words and non-meaningful pseudowords, which differed in their second syllable. Eighteen normal-hearing listeners were presented with a repeating diotic standard (a standard [beI-gi:] or [leI-gi:] presented to each ear) and an occasional dichotic deviant (a standard presented to one ear and a deviant [beI-bi:], [beI-di:], [leI-bi:] or [leI-di:] presented to the opposite ear). As predicted, there was a REA for meaningful words compared to non-meaningful words. Also, dipole source analysis suggested that dipole strength was stronger in the left than the right cortical region for meaningful words. However, there were differences in response within meaningful words as well as between meaningful and non-meaningful words, which may be explained by the characteristics of embedded words and the position-specific probability of phoneme occurrence in words.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Cerebral lateralization; Auditory; Dichotic; Speech; MMN

1. Introduction

In a dichotic-listening task (Broadbent, 1954; Hugdahl, 1988; Kimura, 1961, 1967; Repp, 1977) a listener is simultaneously presented with differing auditory items to each ear, and is asked to report the item heard in either the left or right ear. Dichotic-listening tasks using speech sounds such as consonant-vowel (CV) syllables, stop consonants and words typically find that right-handed listeners show a right-ear advantage (REA), i.e., they are more accurate in reporting stimuli presented to the right ear (Bryden, 1988; Hugdahl, Carlsson, & Eichele, 2001; Schwartz & Tallal, 1980). In contrast, some dichotic-listening tasks using non-speech sounds have found a left-ear advantage (LEA) (Bryden, 1986; Kimura, 1964; Piazza, 1980).

The association between a REA and speech processing, and between a LEA and non-speech processing, has been explained by the complex neural interactions involved in projecting auditory information from the cochlea to the cortex, which result in stronger activation of the cortex on the side contralateral to the stimulated ear. Auditory neurones relay auditory information from the cochlea via the eighth cranial nerve to the cochlear nucleus in the brainstem, from where they progress to the nuclei of the superior olivary nucleus (SON). In the SON the auditory neurones either connect with other fibres or terminate (Liberman, 1991, 1993). From the SON, the auditory neurones follow either ipsilateral or contralateral pathways to the inferior colliculus in the midbrain. From the midbrain, the auditory information is projected to the ipsilateral medial geniculate body in the thalamus and then to the primary auditory cortex and auditory association areas in the temporal lobes (Tonnquist-Uhlen, 1996). In this way, the auditory information from each ear terminates in both the contralateral and ipsilateral temporal lobes of each cerebral hemisphere. Ear advantages in dichotic listening are explained in terms of two factors. First, the contralateral projections consist of more fibres and produce more cortical activity than the ipsilateral projections (Philips & Gates, 1982; Rosenzweig, 1951). Second, the stronger activity in the contralateral projections inhibits the processing in the ipsilateral projections (Aiello et al., 1994; Kimura, 1967; Pantev, Hoke, Lutkenhoner, & Lehnertz, 1986).
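As a worked illustration of how such ear advantages are typically quantified (this index is a common convention in the dichotic-listening literature, not a computation reported in this paper), the report accuracies for the two ears can be combined into a percentage laterality index; the listener scores below are hypothetical:

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Percentage ear-advantage index: positive = right-ear advantage
    (REA), negative = left-ear advantage (LEA)."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical listener: 62 right-ear vs. 48 left-ear correct reports.
print(laterality_index(62, 48))  # ~12.7, i.e. a right-ear advantage
```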
Studies on patients undergoing temporary inactivation of one cerebral hemisphere prior to epilepsy surgery have found a strong correlation between a behavioural REA and left-hemisphere language dominance, and between a behavioural LEA and right-hemisphere dominance (Strauss, 1988). Other studies involving patients with classified regions of temporal-lobe lesions show a reduced REA or LEA with damage to the contralateral auditory region of the temporal lobe (Eslinger & Damasio, 1988; Ilvonen et al., 2004; Kimura, 1961). Such studies support the conclusion that the left and right hemispheres predominantly process auditory information from the contralateral ear. On this view, the REA can be interpreted as indicating a cerebral left-hemispheric dominance for processing speech sounds (Schwartz & Tallal, 1980), whilst the LEA indicates a cerebral right-hemispheric dominance for processing certain types of non-speech sound (Bryden, 1986; Kimura, 1964; Piazza, 1980).

In the light of such evidence for cerebral lateralization, it has proved surprisingly difficult to demonstrate hemispheric asymmetries in auditory processing using more direct methods of measuring brain activity, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), in unimpaired humans. Although some studies have shown a clear left-lateralization to speech stimuli (PET: Scott, Blank, Rosen, & Wise, 2000; Scott, Rosen, Wickam, & Wise, 2004; fMRI: Narain et al., 2003), others suggest similar involvement of both hemispheres in processing speech stimuli (PET: Mummery, Ashburner, Scott, & Wise, 1999; fMRI: Scott & Johnsrude, 2003).

Other studies have used electro-encephalography (EEG) to study lateralization of brain responses to auditory stimuli, but here too the results have been mixed. EEG studies measuring auditory event-related potentials (ERPs) have investigated cerebral laterality using the mismatch negativity (MMN). The MMN is elicited by an occasional deviant auditory stimulus amongst a sequence of more frequently occurring standard auditory stimuli, and is regarded as a cortical index of the perception of an auditory change along a given dimension (Alho, 1995; Naatanen, Gaillard, & Mantysalo, 1978; Naatanen et al., 1997). The MMN is defined as the difference between the amplitude of the ERP waveform generated in response to the standard auditory stimulus and the amplitude of the ERP waveform generated in response to the deviant auditory stimulus, over a given time interval. It is typically maximal at frontal and central midline sites (electrodes Fz and Cz) and is generated in the auditory cortices, as shown by converging evidence from magneto-encephalography (MEG) (Alho et al., 1996), PET (Tervaniemi et al., 2000) and fMRI (Opitz, Mecklinger, Friederici, & von Cramon, 1999; Opitz, Mecklinger, von Cramon, & Kruggel, 1999; Opitz, Rinne, Mecklinger, von Cramon, & Schroger, 2002).
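Expressed computationally, this definition amounts to subtracting the trial-averaged standard ERP from the trial-averaged deviant ERP and summarizing the difference over a latency window. A minimal numpy sketch follows; the sampling rate, array shapes and default 100–200 ms window are illustrative assumptions, not parameters taken from this study's analysis pipeline:

```python
import numpy as np

def mmn_amplitude(standard_trials, deviant_trials, fs=500.0,
                  window_ms=(100.0, 200.0)):
    """Mean deviant-minus-standard difference amplitude at one
    electrode (e.g. Fz or Cz).

    standard_trials, deviant_trials: (n_trials, n_samples) arrays,
    baseline-corrected, with stimulus onset at sample 0.
    Returns the mean of the difference wave within the given
    post-onset latency window; an MMN appears as a negative value.
    """
    erp_standard = np.asarray(standard_trials).mean(axis=0)
    erp_deviant = np.asarray(deviant_trials).mean(axis=0)
    difference_wave = erp_deviant - erp_standard
    # Convert the latency window from milliseconds to sample indices.
    start = int(window_ms[0] * fs / 1000.0)
    stop = int(window_ms[1] * fs / 1000.0)
    return float(difference_wave[start:stop].mean())
```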
Some studies measuring MMNs for speech stimuli report greater MMNs in the left hemisphere (Naatanen et al., 1997; Rinne et al., 1999; Shtyrov et al., 1998). However, other studies have failed to show the expected left-hemisphere dominance for speech stimuli (Bellis, Nicol, & Kraus, 2000; Shtyrov et al., 2000). There are many technical differences between MEG and EEG that could contribute to the presence or absence of a lateralization to speech. Source localization has often been shown to be more accurate for MEG than for EEG (e.g. Hari, Levanen, & Raij, 2000), although recent modelling studies (Liu, Dale, & Belliveau, 2002), which separate the effects of the head model from differences between the two methods, suggest that EEG and MEG may provide similar estimates of source localization and hence lateralization.

Another important reason for weak and inconsistent laterality effects in ERP speech-processing studies may be the use of monotic (auditory items presented to only one ear; e.g. Bellis et al., 2000) or diotic (the same auditory item presented to both ears at the same time; e.g. Alho et al., 1998; Korpilahti, Krause, Holopainen, & Lang, 2001; Naatanen et al., 1997; Pulvermuller et al., 2001; Shtyrov et al., 2000) stimulus presentations. A dichotic presentation may provide stronger evidence for laterality, due to ipsilateral suppression of the neural activity generated in the auditory projections in both the left and right hemispheres (Bryden, 1988). Some earlier studies did use a dichotic stimulus presentation to measure ERPs (Haaland, 1974; Neville, 1974), but limitations in the synchrony of signal generation often led to slight discrepancies between stimulus onset times.
Furthermore, in most MMN studies only the early MMN, usually calculated for the latency range of 100–200 ms after stimulus onset, was investigated. It has been proposed that a later negativity, referred to hereafter as the late negativity (LN) and occurring later than 200 ms after stimulus onset, may be a more salient indicator of more complex higher-level processing prior to categorization (Korpilahti, Lang, & Aaltonen, 1995). If this is the case, then any lateralization for meaningful speech may be more evident in the LN than in the MMN.

Another reason why ERP studies do not reliably report left-lateralized responses to speech stimuli may be the use of CV syllables, rather than meaningful stimuli. The overall magnitude of the MMN generated in response to hearing meaningful words appears to be greater than the magnitude of the MMN generated in response to hearing non-meaningful pseudowords (Korpilahti et al., 2001; Shtyrov & Pulvermuller, 2002). It has been proposed that the increased amplitude of the MMN to real words is evidence for long-term stored cortical memory traces existing for spoken language (Pulvermuller, Shtyrov, Kujala, & Naatanen, 2004), predominantly in the left hemisphere. If so, we might expect a dichotic paradigm to reveal evidence for stronger MMNs to meaningful words presented to the right ear (with direct projections to the left hemisphere) than to those presented to the left ear.

Using a paradigm based on Pulvermuller et al. (2001), bisyllables were constructed such that the same second syllable could form either a meaningful word ("baby" or "lady") or a meaningless pseudoword ("bady" or "laby"). These items were presented dichotically in a stream of pseudoword standards (e.g. "bagy" or "lagy"). The current paper had two main aims. The first was to test the prediction that a dichotic presentation will reveal lateralized effects in the MMN, with larger responses to deviants presented to the right ear, especially over left-hemisphere sites. On the basis of previous work it was also predicted that such effects would be greater for meaningful words than for phonologically matched non-meaningful words. The second aim was to compare such effects between the MMN and the LN. Insofar as the LN indexes later stages of speech processing, we might expect to see stronger lateralized effects (for meaningful words) for the LN than for the MMN.

2. Experiment: bisyllabic meaningful words and non-meaningful pseudowords

2.1. Methods

2.1.1. Listeners
The 18 listeners (10 females and 8 males) ranged in age from 18 to 32 years (mean age, 24.9 years). All were right-handed, as indicated by the Edinburgh inventory of handedness (adapted from Oldfield, 1971), and were screened to confirm that they had hearing thresholds of 20 dB HL or less from 0.25 to 4.0 kHz. [For a normal-hearing population within this age group, inter-aural thresholds are typically within about 4 dB across this frequency range (Lutman & Davis, 1994).] English was the predominant language spoken by all the listeners. Listeners were paid for their participation and gave informed consent.

2.1.2. Conditions
Table 1 presents the four stimulus conditions with the respective standard and deviant probabilities of presentation. In condition B the standard was "bagy", with two subconditions in which the deviant could be either "baby" or "bady". In condition L the standard was "lagy", with two subconditions in which the deviant could be either "lady" or "laby". In each of these stimulus conditions the standard and deviant stimuli were composed of an identical first syllable but a different second syllable. For each stimulus type, a standard trial was defined as the diotic presentation of the standard stimulus: standard stimuli presented to both right and left ears at the same time. A deviant trial was one in which there was a dichotic presentation of stimuli: the deviant stimulus was presented to either the right or the left ear, whilst a standard stimulus was presented to the opposite ear at the same time. Each block contained 400 standard presentations, 50 right-deviant presentations and 50 left-deviant presentations. That is, in each stimulus condition, listeners were presented with a repeating diotic standard stimulus ("bagy" or "lagy") at p = 0.8, with a dichotic presentation of a right- or left-deviant of "baby", "bady", "laby" or "lady" at p = 0.1 in the ear opposite to the one presented with a standard stimulus. Within each block of stimuli, standard and deviant stimuli were presented in a pseudorandom order with a minimum of two standard stimuli between any two deviant stimuli, as sketched after Table 1. The total testing time per listener was around 1.5–2 h. A break was offered at the end of every block.

Table 1
Standard and deviant stimuli for each condition, with their probabilities of presentation

Stimulus condition   Standard (p = 0.8)   Left-deviant (devL), p = 0.1   Right-deviant (devR), p = 0.1
bagy-bady            bagy                 bady                           bady
bagy-baby            bagy                 baby                           baby
lagy-laby            lagy                 laby                           laby
lagy-lady            lagy                 lady                           lady
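The within-block ordering just described (400 standards, 50 right-deviants and 50 left-deviants, with at least two standards between successive deviants) can be generated by shuffling the deviants and then spacing them among the standards. The sketch below uses the published counts, but the generation routine itself is an assumption; the paper does not specify how its pseudorandom sequences were produced:

```python
import random

def oddball_block(n_standard=400, n_dev_right=50, n_dev_left=50,
                  min_gap=2, seed=None):
    """One block of trial labels ('std', 'devR', 'devL') with at least
    `min_gap` standards between successive deviants.

    Defaults reproduce the published block structure:
    p(std) = 400/500 = 0.8, p(devR) = p(devL) = 50/500 = 0.1.
    """
    rng = random.Random(seed)
    deviants = ['devR'] * n_dev_right + ['devL'] * n_dev_left
    rng.shuffle(deviants)
    n_dev = len(deviants)
    reserved = min_gap * (n_dev - 1)   # standards forced between deviants
    spare = n_standard - reserved
    if spare < 0:
        raise ValueError("too few standards to satisfy the spacing rule")
    # Scatter the unreserved standards uniformly over the gaps before,
    # between and after the deviants.
    gaps = [0] * (n_dev + 1)
    for _ in range(spare):
        gaps[rng.randrange(n_dev + 1)] += 1
    trials = ['std'] * gaps[0]
    for i, dev in enumerate(deviants):
        trials.append(dev)
        pad = gaps[i + 1] + (min_gap if i < n_dev - 1 else 0)
        trials.extend(['std'] * pad)
    return trials

block = oddball_block(seed=1)
print(len(block), block.count('std'), block.count('devR'), block.count('devL'))
# -> 500 400 50 50
```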
The order of the four stimulus conditions was randomized. The order of presentation of the right- and left-deviants within each condition was also randomized. In order to account for any slight differences between right- and left-headphone output, the headphones were reversed for...

2.1.3. Stimuli
Five syllables ([beI], [leI], [bi:], [di:] and [gi:]; see Appendix A) were generated with the Klatt speech synthesizer (Klatt, 1980) and combined to form the bisyllabic stimuli. The same second syllable could thus yield either a meaningful or a non-meaningful deviant: the second syllable [bi:] makes the meaningful deviant "baby" but the non-meaningful pseudoword deviant "laby". Likewise, the second syllable [di:] makes the meaningful deviant "lady" but the non-meaningful pseudoword "bady". The bisyllabic stimuli were converted to stereo .wav files in Cool Edit 2000 to construct diotic standard stimuli of "bagy" or "lagy" and dichotic right- and left-deviant stimuli of "bady", "baby", "laby" or "lady". Each bisyllable onset-to-offset duration was 612 ms. The inter-stimulus interval of 588 ms was specified as the equivalent time-interval between the offset of one bisyllable and the onset of the next bisyllable. The stimuli were presented at 75 dB SPL for all listeners.

2.1.4. Apparatus
The stimuli were stored and output via a personal computer (PC) with a 16-bit resolution soundcard (Creative Labs Sound Blaster Live 5.1). The stimuli ...
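To make the presentation scheme of Sections 2.1.2–2.1.3 concrete, the sketch below assembles one stereo trial: a diotic standard (the same token in both ears) or a dichotic deviant (deviant in one ear, standard in the other), using the 612 ms token duration and 588 ms offset-to-onset interval reported above (a 1200 ms onset-to-onset asynchrony). The 44.1 kHz sample rate and the array handling are illustrative assumptions; the original stimuli were prepared in Cool Edit 2000.

```python
import numpy as np

FS = 44100            # assumed sample rate (Hz); not stated in the excerpt
TOKEN_S = 0.612       # bisyllable duration from Section 2.1.3
ISI_S = 0.588         # offset-to-onset interval from Section 2.1.3

def make_trial(standard, deviant=None, deviant_ear=None):
    """One stereo trial as an (n_samples, 2) array: columns (left, right).

    With no deviant: a diotic standard trial (standard in both ears).
    With a deviant: a dichotic trial, deviant in `deviant_ear` ('L' or
    'R') and the standard in the opposite ear, onsets aligned.
    Each trial spans 612 + 588 = 1200 ms onset-to-onset.
    """
    n = int(round((TOKEN_S + ISI_S) * FS))
    trial = np.zeros((n, 2))
    left = right = np.asarray(standard)
    if deviant is not None:
        if deviant_ear == 'L':
            left = np.asarray(deviant)
        elif deviant_ear == 'R':
            right = np.asarray(deviant)
        else:
            raise ValueError("deviant_ear must be 'L' or 'R'")
    trial[:len(left), 0] = left     # left channel
    trial[:len(right), 1] = right   # right channel
    return trial
```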
