Hemispheric differences in processing dichotic meaningful and non-meaningful words

Ifat Yasin
Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK

Neuropsychologia 45 (2007) 2718–2729. Received 16 August 2006; received in revised form 30 March 2007; accepted 6 April 2007; available online 13 April 2007.

Abstract

Classic dichotic-listening paradigms reveal a right-ear advantage (REA) for speech sounds as compared to non-speech sounds. This REA is assumed to be associated with a left-hemisphere dominance for meaningful speech processing. This study objectively probed the relationship between ear advantage and hemispheric dominance in a dichotic-listening situation, using event-related potentials (ERPs). The mismatch negativity (MMN) and a late negativity (LN) were measured for bisyllabic meaningful words and non-meaningful pseudowords, which differed in their second syllable. Eighteen normal-hearing listeners were presented with a repeating diotic standard ([beI-gi:] or [leI-gi:]) and an occasional dichotic deviant (a standard presented to one ear and a deviant [beI-bi:], [beI-di:], [leI-bi:] or [leI-di:] presented to the opposite ear). As predicted, there was a REA for meaningful words compared to non-meaningful words. Dipole source analysis also suggested that dipole strength was stronger in the left than in the right cortical region for meaningful words. However, there were differences in response within meaningful words, as well as between meaningful and non-meaningful words, which may be explained by the characteristics of embedded words and the position-specific probability of phoneme occurrence in words.

Keywords: Cerebral lateralization; Auditory; Dichotic; Speech; MMN

1. Introduction

In a dichotic-listening task (Broadbent, 1954; Hugdahl, 1988; Kimura, 1961, 1967; Repp, 1977) a listener is simultaneously presented with differing auditory items to each ear, and is asked to report the item heard in either the left or right ear. Dichotic-listening tasks using speech sounds such as consonant–vowel (CV) syllables, stop consonants and words typically find that right-handed listeners show a right-ear advantage (REA); that is, they are more accurate in reporting stimuli presented to the right ear (Bryden, 1988; Hugdahl, Carlsson, & Eichele, 2001; Schwartz & Tallal, 1980). In contrast, some dichotic-listening tasks using non-speech sounds have found a left-ear advantage (LEA) (Bryden, 1986; Kimura, 1964; Piazza, 1980).

The association between a REA and speech processing, and between a LEA and non-speech processing, has been explained by the complex neural interactions involved in projecting auditory information from the cochlea to the cortex, which result in stronger activation of the cortex on the side contralateral to the stimulated ear. Auditory neurones relay auditory information from the cochlea via the eighth cranial nerve to the cochlear nucleus in the brainstem, from where they progress to the nuclei of the superior olivary nucleus (SON). In the SON the auditory neurones either connect with other fibres or terminate (Liberman, 1991, 1993). From the SON, the auditory neurones follow either ipsilateral or contralateral pathways to the inferior colliculus in the midbrain. From the midbrain, the auditory information is projected to the ipsilateral medial geniculate body in the thalamus and then to the primary cortex and auditory association areas in the temporal lobes (Tonnquist-Uhlén, 1996). In this way, the auditory information from each ear terminates in both the contralateral and ipsilateral temporal lobes of each cerebral hemisphere. Ear advantages in dichotic listening are explained in terms of two factors. First, the contralateral projections consist of more fibres and produce more cortical activity than the ipsilateral projections (Philips & Gates, 1982; Rosenzweig, 1951). Second, the stronger activity in the contralateral projections inhibits the processing in the ipsilateral projections (Aiello et al., 1994; Kimura, 1967; Pantev, Hoke, Lütkenhöner, & Lehnertz, 1986).


Studies on patients undergoing temporary inactivation of one cerebral hemisphere prior to epilepsy surgery have found a strong correlation between a behavioural REA and left-hemisphere language dominance, and between a behavioural LEA and right-hemisphere dominance (Strauss, 1988). Other studies involving patients with classified regions of temporal-lobe lesions show a reduced REA or LEA with damage to the contralateral auditory region of the temporal lobe (Eslinger & Damasio, 1988; Ilvonen et al., 2004; Kimura, 1961). Such studies support the conclusion that the left and right hemispheres predominantly process auditory information from the contralateral ear. On this view, the REA can be interpreted as indicating a cerebral left-hemispheric dominance for the processing of speech sounds (Schwartz & Tallal, 1980), whilst the LEA indicates a cerebral right-hemispheric dominance for processing certain types of non-speech sound (Bryden, 1986; Kimura, 1964; Piazza, 1980).

In the light of such evidence for cerebral lateralization, it has proved surprisingly difficult to demonstrate hemispheric asymmetries in auditory processing using more direct methods of measuring brain activity, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), in unimpaired humans. Although some studies have shown a clear left-lateralization to speech stimuli (PET: Scott, Blank, Rosen, & Wise, 2000; Scott, Rosen, Wickam, & Wise, 2004; fMRI: Narain et al., 2003), others suggest similar involvement of both hemispheres in processing speech stimuli (PET: Mummery, Ashburner, Scott, & Wise, 1999; fMRI: Scott & Johnsrude, 2003).

Other studies have used electro-encephalography (EEG) to study the lateralization of brain responses to auditory stimuli, but here too results have been mixed. EEG studies measuring auditory event-related potentials (ERPs) have investigated cerebral laterality using the mismatch negativity (MMN). The MMN is elicited by an occasional deviant auditory stimulus amongst a sequence of more frequently occurring standard auditory stimuli, and is regarded as a cortical index of the perception of an auditory change along a given dimension (Alho, 1995; Näätänen, Gaillard, & Mäntysalo, 1978; Näätänen et al., 1997). The MMN is defined as the difference between the amplitude of the ERP waveform generated in response to the standard auditory stimulus and the amplitude of the ERP waveform generated in response to the deviant auditory stimulus, over a given time interval. It is typically maximal at frontal and central midline sites (electrodes Fz and Cz) and is generated in the auditory cortices, as shown by converging evidence from magneto-encephalography (MEG) (Alho et al., 1996), PET (Tervaniemi et al., 2000) and fMRI (Opitz, Mecklinger, Friederici, & von Cramon, 1999; Opitz, Mecklinger, von Cramon, & Kruggel, 1999; Opitz, Rinne, Mecklinger, von Cramon, & Schröger, 2002). Some studies measuring MMNs for speech stimuli report greater MMNs in the left hemisphere (Näätänen et al., 1997; Rinne et al., 1999; Shtyrov et al., 1998). However, other studies have failed to show the expected left-hemisphere dominance for speech stimuli (Bellis, Nicol, & Kraus, 2000; Shtyrov et al., 2000). There are many technical differences between MEG and EEG that could contribute to the presence or absence of a lateralization to speech. Source localization has often been shown to be more accurate for MEG than for EEG (e.g. Hari, Levänen, & Raij, 2000), although recent modelling studies (Liu, Dale, & Belliveau, 2002), which separate the effects of the head model from differences between the two methods, suggest that EEG and MEG may provide similar estimates of source localization and hence lateralization.

Another important reason for weak and inconsistent laterality effects in ERP speech-processing studies may be the use of monotic (auditory items presented to only one ear; e.g. Bellis et al., 2000) or diotic (the same auditory item presented to both ears at the same time; e.g. Alho et al., 1998; Korpilahti, Krause, Holopainen, & Lang, 2001; Näätänen et al., 1997; Pulvermüller et al., 2001; Shtyrov et al., 2000) stimulus presentations. A dichotic presentation may provide stronger evidence for laterality, due to ipsilateral suppression of the neural activity generated in the auditory projections in both the left and right hemispheres (Bryden, 1988). Some earlier studies did use a dichotic stimulus presentation to measure ERPs (Haaland, 1974; Neville, 1974), but limitations in the synchrony of signal generation often led to slight discrepancies between stimulus onset times across ears. Furthermore, most MMN studies investigated only the early MMN, usually calculated for the latency range of 100–200 ms after stimulus onset. It has been proposed that a later negativity, referred to hereafter as the late negativity (LN), occurring later than 200 ms after stimulus onset, may be a more salient indicator of complex higher processing prior to categorization (Korpilahti, Lang, & Aaltonen, 1995). If this is the case, then any lateralization for meaningful speech may be more evident in the LN than in the MMN.

Another reason why ERP studies do not reliably report left-lateralized responses to speech stimuli may be the use of CV syllables rather than meaningful stimuli. The overall magnitude of the MMN generated in response to hearing meaningful words appears to be greater than the magnitude of the MMN generated in response to hearing non-meaningful pseudowords (Korpilahti et al., 2001; Shtyrov & Pulvermüller, 2002). It has been proposed that the increased amplitude of the MMN to real words is evidence for long-term stored cortical memory traces existing for spoken language (Pulvermüller, Shtyrov, Kujala, & Näätänen, 2004), predominantly in the left hemisphere. If so, we might expect a dichotic paradigm to reveal evidence for stronger MMNs to meaningful words presented to the right ear (with direct projections to the left hemisphere) than to those presented to the left ear.

Using a paradigm based on Pulvermüller et al. (2001), bisyllables were constructed such that the same second syllable could either form a meaningful word ("baby" or "lady") or a meaningless pseudoword ("bady" or "laby"). These items were presented dichotically in a stream of pseudoword standards (e.g. "bagy" or "lagy"). The current paper had two main aims. The first was to test the prediction that a dichotic presentation will reveal lateralized effects in the MMN, with larger responses to deviants presented to the right ear, especially over left-hemisphere sites; on the basis of previous work it was also predicted that such effects would be greater for meaningful words than for phonologically matched non-meaningful words. The second was to compare such effects between the MMN and the LN: insofar as the LN indexes later stages of speech processing, we might expect to see stronger lateralized effects (for meaningful words) for the LN than for the MMN.
2. Experiment: bisyllabic meaningful words and non-meaningful pseudowords

2.1. Methods

2.1.1. Listeners
The 18 listeners (10 females and 8 males) ranged in age from 18 to 32 years (mean age, 24.9 years). All were right-handed, as indicated by the Edinburgh inventory of handedness (adapted from Oldfield, 1971), and were screened to confirm that they had hearing thresholds of 20 dB HL or less from 0.25 to 4.0 kHz. [For a normal-hearing population within this age group, inter-aural thresholds are typically within about 4 dB across this frequency range (Lutman & Davis, 1994).] English was the predominant language spoken by all the listeners. Listeners were paid for their participation and gave informed consent.

2.1.2. Conditions
Table 1 presents the four stimulus conditions with the respective standard and deviant probabilities of presentation. In condition B the standard was bagy, with two subconditions in which the deviant could be either baby or bady. In condition L the standard was lagy, with two subconditions in which the deviant could be either lady or laby. In each of these stimulus conditions the standard and deviant stimuli were composed of an identical first syllable but a different second syllable. For each stimulus type, a standard trial was defined as the diotic presentation of the standard stimulus: standard stimuli presented to both the right and left ears at the same time. A deviant trial was one in which there was a dichotic presentation of stimuli: the deviant stimulus was presented to either the right or left ear, whilst a standard stimulus was presented to the opposite ear at the same time. Each block contained 400 standard presentations and 50 right-deviant and 50 left-deviant presentations. That is, in each stimulus condition, the listeners were presented with a repeating diotic standard stimulus ("bagy" or "lagy") presented at p = 0.8, with a dichotic presentation of a right- or left-deviant of "baby", "bady", "laby" or "lady" presented at p = 0.1 in the ear opposite to the one presented with the standard stimulus. Within each block of stimuli, standard and deviant stimuli were presented in a pseudorandom order with a minimum of two standard stimuli between any two deviant stimuli; a sketch of one way to satisfy this constraint is given after Table 1. The total testing time per listener was around 1.5–2 h. A break was offered at the end of every block.

The order of the four stimulus conditions was randomized, as was the order of presentation of the right- and left-deviants within each condition. To account for any slight differences between right- and left-headphone output, the headphones were reversed for half of the presented conditions: each block of each condition was run once with the headphones in the correct position and once with the headphones reversed. The eight blocks (four runs with headphones correct and four runs with headphones reversed) were also randomized.

Table 1
The probabilities of presentation for the standard, right-deviant and left-deviant meaningful and non-meaningful words

Stimulus condition   Standard (stand), p = 0.8   Left-deviant (devL), p = 0.1   Right-deviant (devR), p = 0.1
bagy-bady            bagy                        bady                           bady
bagy-baby            bagy                        baby                           baby
lagy-laby            lagy                        laby                           laby
lagy-lady            lagy                        lady                           lady

Meaningful conditions: bagy-baby and lagy-lady.
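The sequencing constraint above is easy to state procedurally. The following is a minimal sketch, in Python, of one way to generate a 500-trial block honouring the two-standard minimum gap; this is an illustration, not the presentation software actually used, and the function name and the 'std'/'devL'/'devR' labels are invented here:

```python
import random

def make_block(n_std=400, n_dev_left=50, n_dev_right=50, min_gap=2, seed=None):
    """Return one block's trial order, with at least `min_gap`
    standards between any two deviants."""
    rng = random.Random(seed)
    deviants = ['devL'] * n_dev_left + ['devR'] * n_dev_right
    rng.shuffle(deviants)
    n_dev = len(deviants)
    spare = n_std - n_dev * min_gap          # standards beyond the quota
    assert spare >= 0, "not enough standards to honour the gap rule"
    extras = [0] * (n_dev + 1)               # extra standards per gap
    for _ in range(spare):
        extras[rng.randrange(n_dev + 1)] += 1
    seq = ['std'] * extras[0]                # leading standards
    for dev, extra in zip(deviants, extras[1:]):
        seq.append(dev)
        seq.extend(['std'] * (min_gap + extra))
    return seq

block = make_block(seed=1)
assert len(block) == 500 and block.count('std') == 400
```

Reserving the minimum gap after every deviant and then scattering the remaining standards guarantees the constraint by construction, rather than by rejection sampling.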

2.1.3. Stimulus generation
Five syllables were generated using the default output from the All-Prosodic Speech Synthesis Architecture (IPOX) interface (Dirksen & Coleman, 1994). The global parameters were set so that the sampling rate was 11,025 Hz and the frame size was 5 ms. The parameters were passed to a Klatt speech synthesizer (Klatt, 1980) to synthesize an adult male voice. The syllables were [beI], [leI], [bi:], [di:] and [gi:]. Stimulus characteristics of the syllables are defined in Appendix A. The parameter files were saved as .wav files and syllable duration was modified so that onset-to-offset duration was 306 ms for all syllables.

A bisyllable comprised a first syllable of [beI] or [leI] and a second syllable of [bi:], [di:] or [gi:]. There was a 0 ms gap between the offset of the first syllable and the onset of the second syllable. Six bisyllabic stimuli were constructed in this way: [beI-bi:] and [leI-di:], corresponding to the real words "baby" and "lady", and [beI-di:], [beI-gi:], [leI-bi:] and [leI-gi:], corresponding to the pseudowords "bady", "bagy", "laby" and "lagy". The waveforms of the six bisyllables are shown in Fig. 1. To control for the acoustic difference between the second syllables, two conditions were constructed so that the second syllable which completed a meaningful word in one condition completed a meaningless pseudoword in the other condition. The standard was always a pseudoword ending in [gi:], either "bagy" or "lagy". The deviants were formed by substituting [di:] or [bi:] as the second syllable. The second syllable [bi:] makes a meaningful word deviant "baby" in the bagy-baby condition but a non-meaningful pseudoword deviant "laby" in the lagy-laby condition. Likewise, the second syllable [di:] makes a meaningful word deviant "lady" but a non-meaningful pseudoword deviant "bady".

Fig. 1. Waveforms of the six bisyllables: [beI-bi:] and [leI-di:], corresponding to the real words "baby" and "lady", and [beI-di:], [beI-gi:], [leI-bi:] and [leI-gi:], corresponding to the pseudowords "bady", "bagy", "laby" and "lagy". The duration of each syllable is 306 ms.

The bisyllabic stimuli were converted to stereo .wav files in Cool Edit 2000 to construct diotic standard stimuli of "bagy" or "lagy" and dichotic right- and left-deviant stimuli of "bady", "baby", "laby" or "lady". Each bisyllable onset-to-offset duration was 612 ms. The inter-stimulus interval of 588 ms was specified as the silent time-interval between the offset of one bisyllable and the onset of the next. The stimuli were presented at 75 dB SPL for all listeners. A sketch of the channel assignment for the diotic and dichotic files follows.
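The channel assignment can be illustrated as follows; this is a minimal sketch assuming equal-length 16-bit mono .wav files, with hypothetical file names and helper functions (the stimuli were actually assembled in Cool Edit 2000):

```python
import numpy as np
from scipy.io import wavfile

FS = 11025                       # synthesis sampling rate (Hz)

def load(path):
    """Read a mono 16-bit bisyllable; assumes the synthesis rate."""
    rate, data = wavfile.read(path)
    assert rate == FS
    return data.astype(np.int16)

def stereo_trial(left_word, right_word):
    """Stack two equal-length bisyllables into one stereo trial:
    diotic if both channels carry the same word, dichotic otherwise."""
    return np.column_stack([left_word, right_word])

bagy = load('bagy.wav')          # standard, 612 ms at 11,025 Hz
baby = load('baby.wav')          # deviant

standard = stereo_trial(bagy, bagy)     # diotic standard
dev_right = stereo_trial(bagy, baby)    # deviant in the right ear
dev_left = stereo_trial(baby, bagy)     # deviant in the left ear

isi = np.zeros((int(0.588 * FS), 2), dtype=np.int16)   # 588 ms silence
wavfile.write('bagy_baby_devR.wav', FS, np.vstack([dev_right, isi]))
```

Appending the 588 ms silent inter-stimulus interval to each trial file gives a fixed stimulus-onset asynchrony of 1200 ms.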

2.1.4. Apparatus
All stimuli were stored and output via a personal computer (PC) with a 16-bit resolution soundcard (Creative Labs Sound Blaster Live 5.1).

The stimuli were presented through both the right and left channels of Sennheiser HD25 headphones (frequency response of 16 Hz–22 kHz; −3 dB/1.0 kHz). The headphone input came directly from the output of the soundcard DAC. The stimulus level was checked before each run by adjusting the output of the PC soundcard whilst measuring the output of either the right or left earphone. The earphone sound-intensity output was measured by coupling an earphone to an ear simulator (Brüel & Kjær type 4153) with a 0.5 in. polarising microphone (Brüel & Kjær type 4192), a 2669 pre-amplifier and a 2610 measuring amplifier. The earphone sound-intensity output was measured by a Modular Precision Sound Analyzer (Observer; Brüel & Kjær type 2260) set to record intensity using a time window of 1 s duration, passing an unmodified signal (no weighting was applied to the input signal). The sound intensity for a stimulus was set so that, for 400 presentations of a given stimulus, the sound analyzer recorded a value of 75 ± 0.2 dB SPL from both right and left earphones. The sound intensity for all stimuli was re-checked before each experimental run.

The EEG activity was recorded using a SynAmps amplifier and the Acquire option of Scan 4.2 software (NeuroScan, El Paso, TX, USA). The ac signals were amplified with a gain of 250 (resolution of 0.336 μV/bit) and digitally sampled at an A/D rate of 500 Hz (16-bit resolution). The signals were filtered with a low-pass cut-off of 70 Hz and a high-pass cut-off of 0.05 Hz. A notch filter with a centre frequency of 50 Hz was used to reduce any mains-power-generated interference. An external TTL trigger code was used to trigger event recording; the trigger delay was maintained at 6 ms throughout testing.

Listeners were fitted with an electrode cap fitted with sintered silver/silver-chloride ring electrodes (EasyCap system; Falk Minow Services, Herrsching-Breitbrunn, Germany). Thirty scalp electrodes were placed at the following 10–20 scalp recording positions: FP1, FP2, AFz (ground), Fz, F3, F4, F7, F8, FC1, FC2, FT9, FC5, FCz, FC6, FT10, T7, T8, TP7, TP8, Cz, C3, C4, CP5, CP6, Pz, P3, P4, P7, P8, Oz. Three other electrodes were positioned at the left mastoid (M1), right mastoid (M2) and nose (reference). Four electrodes were used to record eye movements (electro-oculograms; EOG) for subsequent eye-movement artifact rejection. Two electrodes to record vertical electro-oculograms (VEOG) were placed above and below the left eye (VEOGU and VEOGL, respectively) and two electrodes to record horizontal electro-oculograms (HEOG) were placed 1 cm from the outer canthi of the right and left eyes (HEOGR and HEOGL, respectively). Inter-electrode impedance was monitored and maintained at a minimum (typically below 5 kΩ). The data were recorded continuously and stored for later off-line analysis.

2.1.5. Procedure
Listeners were tested individually whilst seated 1.0 m in front of a 0.2 m × 0.27 m color television screen within an IAC electrically shielded single-walled sound-attenuating booth. A pair of Sennheiser HD25 headphones was placed over the listeners' ears and the inter-electrode impedance was re-checked to ensure the average impedance was below 5 kΩ. Stimuli were presented diotically or dichotically through the headphones at 75 dB SPL. During the EEG recording, listeners watched a silent video (without subtitles) of their choice, and were asked not to attend to the auditory stimuli and to keep overt movements to a minimum.

2.2. EEG analysis

The EEG recordings were processed off-line using the Edit option of Scan 4.3 software (NeuroScan Labs, El Paso, TX). After visual inspection of files and removal of bad electrodes, the EEG was re-referenced to the average of all electrodes (excluding the eye electrodes). This procedure gives a better signal-to-noise ratio than the nose reference, without altering the relative amplitude of left- and right-sided activity. For a given stimulus condition, the EEG data were epoched over a latency interval of −100 to 1000 ms. The VEOGU and VEOGL recording channels were used to apply ocular artefact correction. This method attempts to "correct" the recorded EEG for eye movements by subtracting a portion of the electro-oculogram from the EEG (Semlitsch, Anderer, Schuster, & Presslich, 1986). The ocular artefact correction was applied with a positive-going EEG at 10%, with a minimum of 20 sweeps of 400 ms duration. Each epoch was baseline-corrected to the time period extending from pre-stimulus to the start of the second syllable. The latency interval of 0–800 ms was used for rejecting epochs with voltages exceeding 150 μV; this criterion led to the rejection of 12.7% of the data. The epoched data were low-pass filtered with a cut-off of 30 Hz and an attenuation of 6 dB. Epochs were then averaged separately for standards (excluding those immediately preceded by a deviant) and for right- and left-ear deviants, with the averaged data from the headphone-correct and headphone-reversed conditions combined. This gave a possible total of 1000 stimuli per stimulus condition (800 standards, 100 right-ear deviants, and 100 left-ear deviants).

In order to define the latency intervals for calculation of the greatest response negativity, the Mean Global Field Power (MGFP; an estimate of the signal-to-noise ratio) was calculated from the grand average of all listeners. Two regions of response denoted by a high MGFP (greater than 1.3 μV) were of interest: an early-latency negativity occurring 76–190 ms after second-syllable onset, referred to as the MMN, and a later-latency negativity occurring 234–386 ms after second-syllable onset, referred to as the late negativity (LN). The MGFP is shown in Fig. 2; a sketch of its computation is given after the figure caption below.

Fig. 2. The Mean Global Field Power (MGFP) with maximum signal-to-noise estimated to be within the time latencies of 382–496 ms (MMN) and 540–692 ms (LN) post-stimulus onset.
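The MGFP and the deviant-minus-standard subtraction used in the Results can be sketched as follows; this is a minimal illustration with invented variable names, assuming grand-average ERP arrays of shape electrodes × samples, and is not the Scan 4.3 pipeline itself:

```python
import numpy as np

FS = 500                  # EEG sampling rate (Hz)
EPOCH_START = -0.100      # epoch onset relative to stimulus (s)

def mgfp(erp):
    """Mean Global Field Power: the spatial standard deviation across
    electrodes at each time sample. erp: (n_channels, n_samples)."""
    return erp.std(axis=0)

def window(t0_ms, t1_ms):
    """Convert a post-stimulus latency window (ms) to sample indices."""
    to_idx = lambda ms: int((ms / 1000.0 - EPOCH_START) * FS)
    return slice(to_idx(t0_ms), to_idx(t1_ms))

# Hypothetical grand-average ERPs: 30 channels, -100 to 1000 ms epoch
# at 500 Hz gives 550 samples (placeholder data for illustration).
grand_std = np.random.randn(30, 550)
grand_dev = np.random.randn(30, 550)

mmn_win = window(382, 496)                 # MMN latency window
difference = grand_dev - grand_std         # deviant minus standard
mmn_amplitude = difference[:, mmn_win].mean(axis=1)   # per electrode
```

The latency windows with the highest MGFP in the grand average (382–496 ms and 540–692 ms post-stimulus) are then taken as the MMN and LN measurement windows.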

2.3. Source analysis

Prior to dipole analysis, principal component analysis (PCA) was carried out on the selected latency intervals for both the MMN and LN to provide an estimate of the number of components/dipoles. Based on the results of the PCA, independent component analysis (ICA) was performed to select and retain viable (highest signal-to-noise ratio) components for source analysis. For latency intervals corresponding to the MMN and LN, ICA typically revealed two to three components. Spatiotemporal dipole modelling was then performed using Source (Neuroscan) as follows. Since digitized positions of the electrodes used in this study were not available, the Source label-matching algorithm was used, which is based on a model of the conventional extended 10–20 positioning of electrodes across the scalp.


In order to estimate EEG source activity, a fixed symmetrical dipole model was applied in which dipole location and orientation were jointly fitted for all time points. For each time point the source-strength parameter was iteratively adjusted until the residual variance (the difference between the recorded and the modelled activity) was minimised. This model localized two hemispherically symmetrical sources (minimally 10 mm apart) using a standard boundary element model (BEM) consisting of 4656 nodes. Dipole solutions were obtained for time windows of 382–496 ms (MMN) and 540–692 ms (LN), corresponding to the peaks of the MGFP curve. The main criteria used to specify source locations were low residual deviations (typically below 60%, although in some cases this value had to be exceeded in order to provide a meaningful fit) and minimal confidence ellipsoids. The residual-variance computation is sketched below.
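The fit criterion can be written compactly; the following is a minimal numpy sketch, not the Source/Neuroscan implementation, and the percentage it yields is of the same kind as (though not necessarily identical in definition to) the RDev entries in Table 2:

```python
import numpy as np

def residual_variance(recorded, modelled):
    """Fraction of the recorded signal power left unexplained by the
    modelled scalp activity over a latency window.
    recorded, modelled: arrays of shape (n_channels, n_samples)."""
    err = recorded - modelled
    return (err ** 2).sum() / (recorded ** 2).sum()

# Hypothetical scalp maps over a 382-496 ms window (57 samples at 500 Hz).
rec = np.random.randn(30, 57)
mod = rec + 0.3 * np.random.randn(30, 57)
rdev_percent = 100 * residual_variance(rec, mod)
```

The iterative fit adjusts the dipole parameters to drive this quantity down; solutions with low residual deviation and tight confidence ellipsoids are retained.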

2.4. Results

2.4.1. MMN and LN
The grand average response waveforms for the 18 listeners elicited by the presentation of the standard stimulus bagy and the deviant stimulus baby are shown in Fig. 3. Each panel presents three average waveforms: the standard stimulus bagy (black line), the left-ear deviant stimulus baby (dark gray line) and the right-ear deviant stimulus baby (light gray line). The set of nine panels presents the three standard and deviant waveforms measured at each of nine electrodes: F3, Fz, F4, FC1, FCz, FC2, C3, Cz, C4. The small rectangular boxes depict the latency intervals over which the MMN (382–496 ms) or LN (540–692 ms) was calculated. Similarly, Fig. 4 presents the average waveforms for the stimulus condition in which the standard was bagy and the deviant was bady, Fig. 5 presents the average waveforms for the stimulus condition in which the standard was lagy and the deviant was lady, and Fig. 6 presents the average waveforms for the stimulus condition in which the standard was lagy and the deviant was laby.

An ANOVA was conducted on the mean amplitudes within the MMN latency window of 382–496 ms. The main factors were word condition (condition B or L), deviant condition (diotic standard, dichotic left-ear deviant, or dichotic right-ear deviant) and electrode position (FC1, Fz, FC2, F3, FCz, F4, C3, Cz, C4). There was a significant effect of deviant condition, F(2, 34) = 44.986, p < 0.001, with effect size η2 = 0.726, and of electrode position, F(8, 136) = 13.186, p < 0.001, with effect size η2 = 0.436. There was also an interaction between word condition and deviant condition, F(2, 34) = 5.135, p = 0.011, with effect size η2 = 0.232.

Fig. 3. Each row presents the grand average waveforms for the 18 listeners for the stimulus condition in which the standard was bagy and the deviant was baby. Each panel presents the average standard (black line), left-ear deviant (dark gray line) and right-ear deviant (light gray line) waveforms for a given electrode (F3, Fz, F4, FC1, FCz, FC2, C3, Cz, C4). The boxes depict the latency intervals over which the MMN (382–496 ms) or LN (540–692 ms) was calculated.

Fig. 4. As for Fig. 3 but for the condition in which the standard was bagy and the deviant was bady.

Post hoc t-tests revealed that, as expected, both left and right deviants elicited a greater response than the corresponding standard for both word conditions [condition L: mean difference between standard and left-ear deviant = 0.72, S.D. = 0.75, t(17) = 4.13, p(two-tailed) = 0.001; mean difference between standard and right-ear deviant = 0.72, S.D. = 0.67, t(17) = 4.54, p(two-tailed) < 0.001] [condition B: mean difference between standard and left-ear deviant = 1.08, S.D. = 0.51, t(17) = 8.96, p(two-tailed) < 0.001; mean difference between standard and right-ear deviant = 1.17, S.D. = 0.49, t(17) = 4.54, p(two-tailed) < 0.001]. The response to the standard stimuli was subsequently subtracted from the responses to the right- and left-ear deviant stimuli within the latency window of 382–496 ms to calculate the MMN.

Post hoc t-tests conducted on the MMNs showed that the response to a left-ear deviant (summing across meaningful or non-meaningful words and electrodes) for condition B was significantly greater than the response to a right-ear deviant for condition L [mean ear difference = 0.55, S.D. = 0.54, t(17) = 4.33, p(two-tailed) < 0.001]. In addition, a right-ear deviant (again summing across meaningful or non-meaningful words and electrodes) for condition B was significantly greater than both a left- and a right-ear deviant for condition L [mean difference for left-ear deviants = 0.55, S.D. = 0.54, t(17) = 4.33, p(two-tailed) < 0.001, and mean difference for right-ear deviants = 0.64, S.D. = 0.52, t(17) = 5.21, p(two-tailed) < 0.001].

To investigate the relationship between ear of deviant presentation and hemispheric lateralization, responses were averaged across three electrodes from each of three cortical regions: a left cortical region (electrodes C3, FC1, and F3), a mid-cortical region (electrodes Cz, FCz, and Fz) and a right cortical region (electrodes C4, FC2, and F4). Interestingly, for all cortical regions (left, mid or right), a meaningful deviant (summing across conditions B and L) presented to the right ear elicited a significantly greater response than a non-meaningful deviant presented to the same ear [left cortical region: mean difference between meaningful and non-meaningful deviants = 0.33, S.D. = 0.45, t(17) = 3.15, p(two-tailed) = 0.006; right cortical region: mean difference between meaningful and non-meaningful deviants = 0.31, S.D. = 0.46, t(17) = 2.90, p(two-tailed) < 0.01; mid-cortical region: mean difference between meaningful and non-meaningful deviants = 0.26, S.D. = 0.49, t(17) = 2.25, p(two-tailed) = 0.038]. The expected finding of a greater response for a meaningful deviant presented to the right ear than to the left ear was observed (summing across conditions B and L) in only the left cortical region [mean difference between responses elicited by a meaningful deviant presented to the right and left ears = 0.32, S.D. = 0.62, t(17) = 3.15, p(two-tailed) = 0.044]. However, this effect may be mainly attributable to the finding of a greater response for the meaningful word baby presented to the right ear than for the meaningful word lady presented to the left ear [mean difference = 0.53, S.D. = 0.89, t(17) = 2.52, p(two-tailed) = 0.022]. A minimal sketch of the electrode grouping is given below.
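The regional averaging can be sketched as follows; the dictionary layout and function names are invented here for illustration:

```python
import numpy as np

REGIONS = {                       # electrode groupings used in the text
    'left':  ['C3', 'FC1', 'F3'],
    'mid':   ['Cz', 'FCz', 'Fz'],
    'right': ['C4', 'FC2', 'F4'],
}

def region_means(mmn, channel_index):
    """Average per-electrode MMN amplitudes over each region.
    mmn: 1-D array of MMN amplitudes, one value per electrode."""
    return {name: np.mean([mmn[channel_index[ch]] for ch in chans])
            for name, chans in REGIONS.items()}

# channel_index maps electrode labels to array rows (hypothetical order).
channel_index = {ch: i for i, ch in enumerate(
    ['F3', 'Fz', 'F4', 'FC1', 'FCz', 'FC2', 'C3', 'Cz', 'C4'])}
example = region_means(np.random.randn(9), channel_index)
```

Collapsing the nine electrodes into three lateral regions is what allows the ear-of-presentation by cortical-region comparisons reported above.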

Fig. 5. As for Fig. 3 but for the condition in which the standard was lagy and the deviant was lady.

There was, however, a difference between the responses elicited by the meaningful words of conditions B and L which approached significance: baby elicited a greater response than lady (summing across electrodes and ears) [mean difference between baby and lady = 0.38, S.D. = 0.77, t(17) = 2.09, p(two-tailed) = 0.052]. The non-meaningful word for condition L, laby, elicited smaller responses compared to baby, bady, and lady [mean difference between laby and baby = 0.60, S.D. = 0.71, t(17) = 2.90, p(two-tailed) = 0.002; mean difference between laby and bady = 0.42, S.D. = 0.58, t(17) = 3.13, p(two-tailed) = 0.006; mean difference between laby and lady = 0.22, S.D. = 0.43, t(17) = 2.14, p(two-tailed) = 0.047].

A similar ANOVA was conducted on the mean amplitudes within the LN latency window of 540–692 ms. The main factors were word condition (condition B or L), deviant condition (diotic standard, dichotic left-ear deviant, or dichotic right-ear deviant) and electrode position (FC1, Fz, FC2, F3, FCz, F4, C3, Cz, C4). There was a significant effect of deviant condition, F(2, 34) = 5.991, p = 0.006, with effect size η2 = 0.261. Post hoc t-tests on the LN revealed that, as expected, both left and right deviants elicited a greater response than the corresponding standard for both word conditions [condition L: mean difference between standard and left-ear deviant = 0.34, S.D. = 0.45, t(17) = 3.21, p(two-tailed) = 0.005; mean difference between standard and right-ear deviant = 0.27, S.D. = 0.41, t(17) = 2.82, p(two-tailed) = 0.012] [condition B: mean difference between standard and left-ear deviant = 0.38, S.D. = 0.53, t(17) = 3.03, p(two-tailed) = 0.008; mean difference between standard and right-ear deviant = 0.39, S.D. = 0.36, t(17) = 4.63, p(two-tailed) < 0.001]. The response to the standard stimuli was then subtracted from the responses to the right- and left-ear deviant stimuli within the latency window of 540–692 ms to calculate the LN. There were no further main effects or interactions to investigate.

2.4.2. Dipole source analysis

Grand average headmaps depicting the dipole analysis results for all conditions are shown in Fig. 7. Dipole locations are indicated by black circles and dipole orientations by the attached bars. The dipole source strength (nA m) and the residual deviation (RDev; a measure of the fit quality) are shown in Table 2. The dipole sources occupy a location close to the auditory cortex, with the dipole in the right temporal lobe in most cases orientated towards the contralateral hemisphere. However, not all dipole sources appear to be located within the temporal lobes; some appear to be located in more medial or posterior positions. This variation may be due to the use of a standard electrode-labelling algorithm rather than the actual digitized locations of the electrodes.

Fig. 6. As for Fig. 3 but for the condition in which the standard was lagy and the deviant was laby.

Fig. 7. Grand average headmaps of the dipole source analysis. Dipole location is indicated by the black circle and the dipole orientation by the attached bar. Small gray discs represent the electrode positions. The top row shows the headmaps for the latency interval corresponding to the MMN for all conditions, whilst the bottom row shows the headmaps for the interval corresponding to the LN for the same conditions. A capital R or L after the word-condition represents the ear of deviant presentation, i.e., either the right (R) or left (L) ear.

An ANOVA was conducted on the individual values of the source strengths within the MMN latency window of 382–496 ms. The main factors were word meaning (meaningful or non-meaningful), second syllable (by or dy), ear of deviant presentation (right or left), and dipole location (right or left hemisphere).

Table 2
The grand average dipole source strength (nA m) and the residual deviation (RDev) estimated for the average MMN and LN, for both meaningful and non-meaningful words presented to the left (L) or right ear (R)

Meaningful word          babyR            babyL            ladyR             ladyL
                         MMN      LN      MMN      LN      MMN       LN      MMN      LN
Dipole strength (nA m)
  Left dipole            154.00   53.38   103.00   38.35   200.50    77.86   136.70   26.15
  Right dipole           −81.65   −78.88  −69.33   −14.76  −144.50   −25.48  −63.53   38.66
RDev (%)                 24.80    41.40   22.20    85.20   35.40     37.30   27.30    81.90

Non-meaningful word      labyR            labyL            badyR             badyL
                         MMN      LN      MMN      LN      MMN       LN      MMN      LN
Dipole strength (nA m)
  Left dipole            91.67    39.90   84.31    39.45   37.64     82.18   65.50    81.98
  Right dipole           −82.61   95.31   −77.81   43.09   80.88     −57.11  −59.44   −32.82
RDev (%)                 35.40    56.90   53.20    62.70   85.20     67.00   25.10    39.60
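To make the hemispheric asymmetry for meaningful words concrete, the grand-average MMN dipole strengths in Table 2 can be averaged per hemisphere. This is a small illustration only; treating the absolute value of the right-dipole entries as a strength magnitude, on the assumption that the sign reflects dipole orientation, is an interpretation made here, not a computation reported in the paper:

```python
# Grand-average MMN dipole source strengths (nA m) from Table 2.
left_dipole  = {'babyR': 154.00, 'babyL': 103.00, 'ladyR': 200.50,
                'ladyL': 136.70}
right_dipole = {'babyR': -81.65, 'babyL': -69.33, 'ladyR': -144.50,
                'ladyL': -63.53}

words = list(left_dipole)
mean_left = sum(left_dipole[w] for w in words) / len(words)
mean_right = sum(abs(right_dipole[w]) for w in words) / len(words)
print(mean_left, mean_right)   # 148.55 vs 89.75: stronger left source
```

On this reading, the left-hemisphere source for meaningful words is on average roughly 1.7 times stronger than the right-hemisphere source, consistent with the interaction reported below.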

There was a significant three-way interaction between word meaning, second syllable and ear of deviant presentation, F(1, 17) = 9.202, p = 0.008, with effect size η2 = 0.351. The interaction between word meaning and dipole location was also close to significant, F(1, 17) = 4.511, p = 0.049, with effect size η2 = 0.210.

Post hoc t-tests were carried out to investigate the three-way interaction (by summing across dipoles). The meaningful word baby presented to the right ear elicited a greater source strength than the meaningful word lady presented to the left ear [mean difference = 47.53, S.D. = 89.06, t(17) = 2.26, p(two-tailed) = 0.037], the non-meaningful word laby presented to the right ear [mean difference = 58.66, S.D. = 108.12, t(17) = 2.30, p(two-tailed) = 0.034], or the non-meaningful word bady presented to the left ear [mean difference = 83.06, S.D. = 123.34, t(17) = 2.86, p(two-tailed) = 0.011]. In addition, the non-meaningful word bady presented to the right ear elicited a greater source strength than the non-meaningful word bady presented to the left ear [mean difference = 53.911, S.D. = 106.36, t(17) = 2.15, p(two-tailed) = 0.046].

The interaction between word meaning and dipole location was probed by summing across ears and second syllable. The dipole strength due to a meaningful-word presentation was greater in the left hemisphere than in the right hemisphere [mean difference = 118.201, S.D. = 74.78, t(17) = 6.706, p(two-tailed) < 0.001], and greater than that due to a non-meaningful word [mean difference = 114.910, S.D. = 82.65, t(17) = 5.89, p(two-tailed) < 0.001].

An ANOVA was also conducted on the individual values of the source strengths within the LN latency window of 540–692 ms. The main factors were word meaning (meaningful or non-meaningful), second syllable (by or dy), ear of deviant presentation (right or left), and dipole location (right or left). There was a significant main effect of dipole location, F(1, 16) = 21.322, p < 0.001, with effect size η2 = 0.571. Post hoc t-tests (summing across ears, second syllable, and word meaning) showed that the dipole strength in the left hemisphere was significantly greater than the dipole strength in the right hemisphere [mean difference = 78.037, S.D. = 68.72, t(17) = 4.818, p(two-tailed) < 0.001].

3. Discussion

This study investigated the electrophysiological analogue of the REA seen in behavioural dichotic-listening studies. A greater response was observed for meaningful words presented to the right ear compared to non-meaningful words presented to the right ear or meaningful words presented to the left ear. There was, however, a difference between the responses within meaningful words, such that the response to the word baby was greater than the response to the word lady (both from the waveform amplitude averaging and the dipole source analysis); the response to the word baby presented to the right ear was greater than the response to the word lady presented to the left ear. Furthermore, the findings showed that the word laby produced the weakest activity, with a response that was smaller than the responses elicited by the words baby, bady and lady. The dipole source analysis also suggested that the left-hemisphere response was greater than the right-hemisphere response for meaningful words for the MMN. At a later stage of processing (LN) this trend was retained, albeit for all words regardless of meaningfulness.

The source analysis results in part support the findings of Pulvermüller et al. (2001), in that there was overall greater activity in the left hemisphere for meaningful words than for non-meaningful words. Pulvermüller et al. (2001), using a similar paradigm, showed that diotically presented meaningful words (in Finnish) elicited a greater MMN than meaningless pseudowords. Turning to the effects of meaning on the LN, Korpilahti et al. (2001), using diotic stimuli, also found a greater LN for meaningful words than pseudowords, whilst others have shown the LN (latency of 400 ms) to be greater for pseudowords than for words (e.g. Holcomb & Neville, 1990; Pulvermüller, Lutzenberger, & Birbaumer, 1995). However, the latter studies used a lexical decision task, which encourages searching the lexicon for the presented word. In such a scenario it may be expected that the additional load of searching for a pseudoword may actually enhance a later negativity for pseudowords.

If the LN is an indicator of higher-level complex processing prior to categorization (Korpilahti et al., 1995), then lateralization to meaningful stimuli may be expected to be more evident in the LN than in the MMN. Whilst the present dipole source analysis suggests that lateralization remains at a later stage of processing (LN), the present findings do not support the idea of a greater lateralization for meaningful words in the LN than in the MMN.

Since the same second syllable [bi:] was presented after [beI] to form a meaningful word, and after [leI] to form a non-meaningful pseudoword, the effect of a greater MMN in response to the word baby versus the word laby cannot be a consequence of the physical differences between the two stimuli. The word baby elicited an overall greater response than the word lady, and one possible explanation for this finding may be the lexical status of the second syllable, which affects the brain's response to bisyllables. The second syllable of 'baby' corresponds to a common real word, 'be', whereas the second syllable of 'lady' does not. If the MMN is enhanced by the lexical status of individual syllables as well as of the bisyllabic compound, then 'baby' will benefit, as it is the only item in our study in which both individual syllables as well as the bisyllable are clearly meaningful.

Could the fact that in the present study the first syllable [beI] of the word [beI][bi:] is itself an embedded word affect the size of the REA response? The answer to this question may be found in the dynamics of sequential and associative models of word recognition. In a classic sequential model of word recognition, the Cohort model (Marslen-Wilson & Welsh, 1978), all the phonemes of a given word need to be recognised before recognition of the word as a whole. This definition constrains word recognition to occur at the word's offset. But what about polysyllabic words composed of syllables which are themselves words (McQueen & Cutler, 1996)? The recognition of these words may be difficult to explain with the original version of the sequential model, although there is scope within the model to allow top-down syntactic and semantic processing to guide the recognition of the word. Unlike the Cohort model, the TRACE model (McClelland & Elman, 1986) allows competition between word nodes at different points in time, potentially speeding up word recognition. In this case each group of phonemes comprising an embedded word would initiate competition from both short words and longer words which share the same initial grouping of phonemes (McClelland & Elman, 1986). However, in order to substantiate competition at different time points within a phoneme stream comprising a word, the model requires a large number of lexical networks simultaneously operating in multiple layers at different time points for a given string of phonemes. The Shortlist model (Norris, 1994) reduces the need for such multiple-layer lexical networks by reducing the number of potential lexical candidates at the outset (shortlisting), by incorporating a set of rules which use cues such as the metrical segmentation strategy (MSS) or the possible-word constraint (PWC; when words are segmented, the point of segmentation is chosen so as not to end up with a remainder without a vowel) (Norris, McQueen, Cutler, & Butterfield, 1997). These rules reduce the number of competitors. In the case of a word embedded within a word, these rules decrease the word-recognition time. Some of the present findings, especially the increased response to the word baby compared to lady, may be explained by speeded recognition for words comprising embedded words. However, this does not explain the relatively small response to the word laby (comprising the syllables "lay" and "be", which are also meaningful). A more probable explanation may be based on phonotactic probabilities. Phonotactic probability refers to the frequency of distribution of the phonemes making up the words of a language. If the phonotactic probability (Vitevitch & Luce, 2004) of the position-specific occurrence of each phoneme (entered in Klattese) in a word is calculated, then the following values are obtained: bady (0.0512, 0.0292, 0.0380, 0.0432), baby (0.0512, 0.0292, 0.0260, 0.0432), lady (0.0341, 0.0292, 0.0380, 0.0432), and laby (0.0341, 0.0292, 0.0260, 0.0432). The greater response for baby than lady may be explained by the greater total phoneme-specific probability for the word baby (0.1496) than lady (0.1445); the summation is sketched below. However, the word bady did not appear to follow the expected trend; the word elicited the strongest source strength only when presented to the right ear rather than the left. The word laby was also shown to elicit the smallest MMN, and this finding may also be explained by the smallest total phoneme-specific probability for the word laby (0.1325) compared to the words bady (0.1616), baby (0.1496), and lady (0.1445). In a recent study by Bonte, Poelmans, and Blomert (2007), MMN responses were shown to be similarly sensitive to manipulations of phonotactic probability.
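The total phoneme-specific probabilities quoted above are simple sums of the four position-specific values; a minimal sketch of the arithmetic, with the values transcribed from the text and invented variable names:

```python
# Position-specific phonotactic probabilities (Vitevitch & Luce, 2004)
# for each phoneme of the four bisyllables, as quoted in the text.
phonotactic = {
    'bady': (0.0512, 0.0292, 0.0380, 0.0432),
    'baby': (0.0512, 0.0292, 0.0260, 0.0432),
    'lady': (0.0341, 0.0292, 0.0380, 0.0432),
    'laby': (0.0341, 0.0292, 0.0260, 0.0432),
}

# Total phoneme-specific probability per word, highest first.
totals = {word: round(sum(p), 4) for word, p in phonotactic.items()}
for word, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f'{word}: {total}')
# Prints: bady 0.1616, baby 0.1496, lady 0.1445, laby 0.1325
```

The ordering of these totals matches the ordering of response sizes discussed above, with the exception of bady noted in the text.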

In conclusion, a right-ear advantage was observed for meaningful words compared to non-meaningful words. In addition, source analysis suggested that dipole strength was stronger in the left than in the right hemisphere for meaningful words at a relatively early stage of processing (MMN), and this lateralization persisted to a later stage of processing (LN), albeit irrespective of word meaning. The differences in response within meaningful words and between meaningful and non-meaningful words may be explained by considering the effects on word recognition of embedded words and the position-specific probability of phoneme occurrence in words.

Acknowledgements

The author is very grateful to Douglas Kieweg (The Center for Biobehavioral Neurosciences in Communication Disorders, University of Kansas) for his assistance with the use of the phonotactic calculator, and to two anonymous reviewers. The author was supported by a post-doctoral fellowship from the Economic and Social Research Council (ESRC) and the Medical Research Council (MRC).

Appendix A

A.1. Bisyllables

The first syllable of the bisyllable was either [beI] or [leI]. The second syllable could be either [bi:], [gi:] or [di:].


For [beI], f0 had a steady-state value of 119 Hz maintained over 56 ms, falling over the next 218 ms to a value of 58 Hz, which was then maintained over 32 ms. The onset frequencies for F1, F2 and F3 were 387, 1442 and 2274 Hz, respectively. F1, F2 and F3 increased to 551, 1723 and 2449 Hz over 25, 55 and 51 ms. All formants then remained at these steady-state values for the next 101 ms (F1), 74 ms (F2) and 77 ms (F3). F1 decreased to 368 Hz over the next 97 ms and remained at this frequency for 83 ms. F2 and F3 increased to values of 2042 and 2603 Hz over the next 93 and 89 ms, remaining at these frequencies for the next 84 and 89 ms.

For [leI], f0 remained at a steady-state value of 108 Hz for 154 ms, decreasing to 57 Hz over the next 82 ms and remaining at this frequency for the next 70 ms. F1, F2 and F3 had onset frequencies of 319, 1200 and 2855 Hz, remaining at these frequencies for 107, 100 and 31 ms, respectively. F1 and F2 increased to 542 and 1732 Hz over 30 ms, remaining at these frequencies for the next 82 and 79 ms, after which they decreased to 377 and 2003 Hz over 87 and 97 ms. F3 remained at 2855 Hz for 31 ms, then decreased over 76 ms to a steady-state value of 2410 Hz, which was maintained for 75 ms. F3 then increased to 2623 Hz over 91 ms and remained at this value for 33 ms.

For [bi:], f0 remained at 123 Hz for 54 ms, falling to a steady-state value of 55 Hz over 220 ms and remaining at this frequency for 32 ms. F1 had a steady-state value of 300 Hz for 306 ms. F2 and F3 had onset frequencies of 1645 and 2391 Hz, increasing to 2003 and 2952 Hz over 31 and 51 ms, respectively. F2 and F3 remained at these frequencies for 275 and 255 ms, respectively.

For [gi:], f0 remained at 119 Hz for 54 ms, falling to a steady-state value of 55 Hz over 221 ms and remaining at this frequency for 31 ms. F2 decreased in frequency from 2129 to 2023 Hz over 54 ms and remained at 2023 Hz for 252 ms. F1 and F3 had onset frequencies of 251 and 2458 Hz, increasing to 300 Hz and 2962 Hz over 42 and 50 ms, respectively. F1 and F3 remained at these frequencies for 264 and 256 ms, respectively.

For [di:], f0 remained at 121 Hz for 74 ms, falling to a steady-state value of 56 Hz over 201 ms and remaining at this frequency for 31 ms. F1, F2 and F3 had onset frequencies of 261, 1878 and 2720 Hz, increasing to 290 Hz, 2032 Hz and 2962 Hz over 38, 38 and 47 ms, respectively. F1, F2 and F3 remained at these frequencies for 268, 268 and 259 ms, respectively. (A sketch of the [beI] tracks in machine-readable form is given below.)
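As an illustration of how such piecewise-linear parameter tracks can be encoded before synthesis, the following sketch re-expresses the [beI] trajectories as breakpoints. The breakpoint format and helper function are invented here and are not the IPOX/Klatt parameter-file format; the times and frequencies are taken from the description above:

```python
# Each track is a list of (time_ms, value_Hz) breakpoints; values
# between breakpoints are linearly interpolated at the 5 ms frame rate.
BEI = {
    'f0': [(0, 119), (56, 119), (274, 58), (306, 58)],
    'F1': [(0, 387), (25, 551), (126, 551), (223, 368), (306, 368)],
    'F2': [(0, 1442), (55, 1723), (129, 1723), (222, 2042), (306, 2042)],
    'F3': [(0, 2274), (51, 2449), (128, 2449), (217, 2603), (306, 2603)],
}

def value_at(track, t_ms):
    """Linearly interpolate a parameter track at time t_ms."""
    for (t0, v0), (t1, v1) in zip(track, track[1:]):
        if t0 <= t_ms <= t1:
            return v0 + (v1 - v0) * (t_ms - t0) / (t1 - t0)
    return track[-1][1]

frames = [value_at(BEI['F2'], t) for t in range(0, 306, 5)]  # 5 ms frames
```

Sampling each track at the 5 ms frame size reproduces the frame-by-frame parameter sequence that a Klatt-style synthesizer consumes.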

eferences

iello, I., Sontgui, S., Sau, G. F., Manca, S., Conti, M., & Rosati, G. (1994).Long-latency evoked potentials in a case of corpus callosum agenesia. ItalianJournal of Neurological Sciences, 15, 497–505.

lho, K. (1995). Cerebral generators of mismatch negativity (MMN) and itsmagnetic counter-part (MMNm) elicited by sound changes. Ear and Hear-ing, 16, 38–51.

lho, K., Tervaniemi, M., Huotilainen, M., Lavikainen, J., Tiitinen, H.,Ilmoniemi, R. J., et al. (1996). Processing of complex sounds in the humanauditory cortex as revealed by magnetic brain responses. Psychophysiology,

33, 369–375.

lho, K., Winkler, I., Escera, C., Huotilainen, M., Virtanen, J., Jaaskelainen, I. P.,et al. (1998). Processing of novel sounds and frequency changes in the humanauditory cortex: Magnetoencephalographic recordings. Psychophysiology,35, 211–224.

M

45 (2007) 2718–2729

ellis, T. J., Nicol, T., & Kraus, N. (2000). Aging affects hemispheric asymmetryin the neural representation of speech sounds. Journal of Neuroscience, 20,791–797.

onte, M. L., Poelmans, H., & Blomert, L. (2007). Deviant neurophysiolog-ical responses to phonological regularities in speech in dyslexic children.Neuropsychologia, 45, 1427–1437.

roadbent, D. E. (1954). The role of auditory localization in attention andmemory span. Journal of Experimental Psychology, 47, 191–196.

ryden, M. P. (1986). Dichotic listening performance, cognitive ability, andcerebral organization. Canadian Journal of Psychology, 40, 445–456.

ryden, M. P. (1988). An overview of the dichotic listening procedure and itsrelation to cerebral organization. In K. Hugdahl (Ed.), Handbook of dichoticlistening: Theory, methods and research (pp. 1–43). New York: John Wiley& Sons.

irksen, A., & Coleman, J. S. (1994). All-Prosodic Synthesis Architecture. InProceedings of the second ESCA/IEEE workshop on speech synthesis (pp.232–235).

slinger, P. J., & Damasio, H. (1988). Anatomical correlates of paradoxic earextinction. In K. Hugdahl (Ed.), Handbook of dichotic listening: Theory,methods and research (pp. 140–160). New York: John Wiley & Sons.

aaland, K. Y. (1974). The effect of dichotic, monaural and diotic verbal stimulion auditory evoked potentials. Neuropsychologia, 12, 339–345.

ari, R., Levanen, S., & Raij, T. (2000). Timing of human cortical functionsduring cognition: Role of MEG. Trends in Cognitive Sciences, 4, 455–462.

olcomb, P. J., & Neville, H. J. (1990). Auditory and visual semantic prim-ing in lexical decision: A comparison using event-related brain potentials.Language and Cognitive Processes, 5, 281–312.

Hugdahl, K. (1988). Handbook of dichotic listening: Theory, methods and research. New York: John Wiley & Sons.

Hugdahl, K., Carlsson, G., & Eichele, T. (2001). Age effects in dichotic listening to consonant–vowel syllables: Interactions with attention. Developmental Neuropsychology, 20, 445–457.

Ilvonen, T., Kujala, T., Kozou, H., Kiesilainen, A., Salonen, O., Alku, P., et al. (2004). Neuroscience Letters, 366, 235–240.

Kimura, D. (1961). Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology, 15, 166–171.

Kimura, D. (1964). Left–right differences in the perception of melodies. Quarterly Journal of Experimental Psychology, 16, 355–358.

Kimura, D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3, 163–168.

Klatt, D. H. (1980). Software for a cascade/parallel formant synthesizer. Journal of the Acoustical Society of America, 67, 971–995.

Korpilahti, P., Krause, C. M., Holopainen, I., & Lang, A. H. (2001). Early and late mismatch negativity elicited by words and speech-like stimuli in children. Brain and Language, 76, 332–339.

Korpilahti, P., Lang, H., & Aaltonen, O. (1995). Is there a late latency mismatch negativity (MMN) component? Electroencephalography and Clinical Neurophysiology, 95, 96.

Liberman, M. C. (1991). Central projections of auditory-nerve fibers of differing spontaneous rate. I. Anteroventral cochlear nucleus. Journal of Comparative Neurology, 313, 240–258.

Liberman, M. C. (1993). Central projections of auditory-nerve fibers of differing spontaneous rate. II. Posteroventral and dorsal cochlear nuclei. Journal of Comparative Neurology, 327, 17–36.

Liu, A. K., Dale, A. M., & Belliveau, J. W. (2002). Monte Carlo simulation studies of EEG and MEG localization accuracy. Human Brain Mapping, 16, 47–62.

Lutman, M. E., & Davis, A. C. (1994). The distribution of hearing threshold levels in the general population aged 18–30 years. Audiology, 33, 327–350.

Marslen-Wilson, W. D., & Welsh, A. (1978). Processing interaction and lexical access during word recognition in continuous speech. Cognitive Psychology, 10, 29–63.

McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1–86.

McQueen, J. M., & Cutler, A. (1996). Words within words: Lexical statistics and lexical access. In Proceedings of the international conference on spoken language processing, vol. 1 (pp. 221–224). Banff, Alberta, Canada: University of Alberta.

Mummery, C. J., Ashmore, J., Scott, S. K., & Wise, R. J. S. (1999). Functional neuroimaging of speech perception in six normal and two aphasic subjects. Journal of the Acoustical Society of America, 106, 449–457.

Naatanen, R., Gaillard, A. W. K., & Mantysalo, S. (1978). Early selective attention effect on evoked potentials reinterpreted. Acta Psychologica, 42, 313–329.

Naatanen, R., Lehtokoski, A., Lennes, M., Cheour, M., Huotilainen, M., Iivonen, A., et al. (1997). Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385, 432–434.

Narain, C., Scott, S. K., Wise, R. J. S., Rosen, S., Leff, A., Iversen, S. D., et al. (2003). Defining a left-lateralized response specific to intelligible speech using fMRI. Cerebral Cortex, 13, 1362–1368.

Neville, H. (1974). Electrographic correlates of lateral asymmetry in the processing of verbal and nonverbal auditory stimuli. Journal of Psycholinguistic Research, 3, 151–163.

Norris, D. (1994). Shortlist: A connectionist model of continuous speech recognition. Cognition, 52, 189–234.

Norris, D. G., McQueen, J. M., Cutler, A., & Butterfield, S. (1997). The possible-word constraint in the segmentation of continuous speech. Cognitive Psychology, 34, 191–243.

Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113.

Opitz, B., Mecklinger, A., Friederici, A. D., & von Cramon, D. Y. (1999). The functional neuroanatomy of novelty processing: Integrating ERP and fMRI results. Cerebral Cortex, 9, 379–391.

Opitz, B., Mecklinger, A., von Cramon, D. Y., & Kruggel, F. (1999). Combining electrophysiological and hemodynamic measures of the auditory oddball. Psychophysiology, 36, 142–147.

Opitz, B., Rinne, T., Mecklinger, A., von Cramon, D. Y., & Schroger, E. (2002). Differential contribution of frontal and temporal cortices to auditory change detection: fMRI and ERP results. NeuroImage, 15, 167–174.

Pantev, Ch., Hoke, M., Lutkenhoner, B., & Lehnertz, K. (1986). Comparison between simultaneously recorded auditory evoked magnetic fields and potentials elicited by ipsilateral, contralateral and binaural tone burst stimulation. Audiology, 25, 54–61.

Philips, D. P., & Gates, G. R. (1982). Representation of the two ears in the auditory cortex: A re-examination. International Journal of Neuroscience, 16, 41–46.

Piazza, D. M. (1980). The influence of sex and handedness in the hemispheric specialization of verbal and nonverbal tasks. Neuropsychologia, 18, 163–176.

Pulvermuller, F., Kujala, T., Shtyrov, Y., Simola, J., Tiitinen, H., Alku, P., et al. (2001). Memory traces for words as revealed by the mismatch negativity. NeuroImage, 14, 607–616.

Pulvermuller, F., Lutzenberger, W., & Birbaumer, N. (1995). Electrocortical distinction of vocabulary types. Electroencephalography and Clinical Neurophysiology, 94, 357–370.

Pulvermuller, F., Shtyrov, Y., Kujala, T., & Naatanen, R. (2004). Word-specific cortical activity as revealed by the mismatch negativity. Psychophysiology, 41, 106–112.

Repp, B. (1977). Measuring laterality effects in dichotic listening. Journal of the Acoustical Society of America, 62, 720–737.

Rinne, T., Alho, K., Alku, P., Holi, M., Sinkkonen, J., & Virtanen, J. (1999). Analysis of speech sounds is left hemisphere predominant at 100–150 ms after sound onset. Neuroreport, 10, 113–117.

Rosenzweig, M. R. (1951). Representations of the two ears at the auditory cortex. American Journal of Physiology, 167, 147–158.

Semlitsch, H. V., Anderer, P., Schuster, P., & Presslich, O. (1986). A solution for reliable and valid reduction of ocular artifacts, applied to the P300 ERP. Psychophysiology, 23, 695–703.

Schwartz, J., & Tallal, P. (1980). Rate of acoustic change may underlie hemispheric specialization for speech perception. Science, 207, 1380–1381.

Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400–2406.

Scott, S. K., & Johnsrude, I. S. (2003). The neuroanatomical and functional organization of speech perception. Trends in Neurosciences, 26, 100–107.

Scott, S. K., Rosen, S., Wickham, L., & Wise, R. J. S. (2004). A positron emission tomography study of the neural basis of informational and energetic masking effects in speech perception. Journal of the Acoustical Society of America, 115, 813–821.

Shtyrov, Y., Kujala, T., Ahveninen, J., Tervaniemi, M., Alku, P., Ilmoniemi, R. J., et al. (1998). Background acoustic noise and the hemispheric lateralization of speech processing in the human brain: Magnetic mismatch negativity study. Neuroscience Letters, 251, 141–144.

Shtyrov, Y., Kujala, T., Lyytinen, H., Kujala, J., Ilmoniemi, R. J., & Naatanen, R. (2000). Lateralization of speech processing in the brain as indicated by mismatch negativity and dichotic listening. Brain and Cognition, 43, 392–398.

Shtyrov, Y., & Pulvermuller, F. (2002). Neurophysiological evidence of memory traces for words in the human brain. Neuroreport, 13, 521–525.

Strauss, E. (1988). Dichotic listening and sodium amytal: Functional and morphological aspects of hemispheric asymmetry. In K. Hugdahl (Ed.), Handbook of dichotic listening: Theory, methods and research (pp. 117–138). New York: John Wiley & Sons.

Tervaniemi, M., Medvedev, S. V., Alho, K., Pakhomov, S. V., Roudas, M. S., van Zuijen, T. L., et al. (2000). Lateralized automatic auditory processing of phonetic versus musical information: A PET study. Human Brain Mapping, 10, 74–79.

Tonnquist-Uhlen, I. (1996). Topography of auditory evoked cortical potentials in children with severe language impairment. Scandinavian Audiology, 44, 1–40.

Vitevitch, M. S., & Luce, P. A. (2004). A web-based interface to calculate phonotactic probability for words and nonwords in English. Behavior Research Methods, Instruments & Computers, 36, 481–487.