

www.elsevier.com/locate/ynimg

NeuroImage 38 (2007) 331–345

Double dissociation between rules and memory in music: An event-related potential study☆

Robbin A. Miranda a,b,⁎ and Michael T. Ullman a,⁎

a Brain and Language Laboratory, Department of Neuroscience, Georgetown University, New Research Building, 3970 Reservoir Road, NW, Washington, DC 20057, USA
b Interdisciplinary Program in Neuroscience, Georgetown University, USA

Received 25 April 2007; revised 16 July 2007; accepted 17 July 2007
Available online 8 August 2007

Language and music share a number of characteristics. Crucially, both domains depend on both rules and memorized representations. Double dissociations between the neurocognition of rule-governed and memory-based knowledge have been found in language but not music. Here, the neural bases of both of these aspects of music were examined with an event-related potential (ERP) study of note violations in melodies. Rule-only violations consisted of out-of-key deviant notes that violated tonal harmony rules in novel (unfamiliar) melodies. Memory-only violations consisted of in-key deviant notes in familiar well-known melodies; these notes followed musical rules but deviated from the actual melodies. Finally, out-of-key notes in familiar well-known melodies constituted violations of both rules and memory. All three conditions were presented, within-subjects, to healthy young adults, half musicians and half non-musicians. The results revealed a double dissociation, independent of musical training, between rules and memory: both rule violation conditions, but not the memory-only violations, elicited an early, somewhat right-lateralized anterior-central negativity (ERAN), consistent with previous studies of rule violations in music, and analogous to the early left-lateralized anterior negativities elicited by rule violations in language. In contrast, both memory violation conditions, but not the rule-only violation, elicited a posterior negativity that might be characterized as an N400, an ERP component that depends, at least in part, on the processing of representations stored in long-term memory, both in language and in other domains. The results suggest that the neurocognitive rule/memory dissociation extends from language to music, further strengthening the similarities between the two domains.
© 2007 Elsevier Inc. All rights reserved.

☆ Portions of this article were included as part of a doctoral dissertation by R.A. Miranda.

⁎ Corresponding authors. Brain and Language Laboratory, Department of Neuroscience, Georgetown University, New Research Building, 3970 Reservoir Road, NW, Washington, DC 20057, USA. Fax: +1 202 687 6914.

E-mail addresses: [email protected] (R.A. Miranda), [email protected] (M.T. Ullman).

Available online on ScienceDirect (www.sciencedirect.com).

1053-8119/$ - see front matter © 2007 Elsevier Inc. All rights reserved.
doi:10.1016/j.neuroimage.2007.07.034

Introduction

Language and music share a range of characteristics (Besson and Schön, 2001; Maess et al., 2001; Patel, 2003; Koelsch, 2005). Of particular interest here, both domains crucially rely on both rules and memorized representations. In both language and music, higher-order structures (e.g., sentences and melodies) are made up of basic units (e.g., words and notes) that are arranged in a rule-governed hierarchical configuration. Language and music also both depend on long-term memorized representations that are at least partly idiosyncratic, that is, not entirely derivable from rules. For example, in language one must memorize various types of lexical knowledge, such as the sequence of phonemes /c/, /a/, and /t/ and that this refers to a furry, four-legged animal, as well as the fact that the combination of words “let the cat out of the bag” means “disclose a secret.” Likewise in music, a familiar tune must be memorized as a particular sequence of notes and rhythms.

The neurocognition of this rule/memory distinction has been well studied in language but not in music. In language, evidence suggests that the two capacities rely on at least partially distinct neural bases. Double dissociations between rules and memory have been found using a variety of empirical approaches, including lesion studies of patients (Damasio, 1992; Goodglass, 1993; Ullman et al., 1997), neuroimaging (Newman et al., 2001; Friederici et al., 2003b), and event-related potentials (ERPs) (De Vincenzi et al., 2003; Sanders and Neville, 2003; Friederici et al., 2004).

ERPs have been particularly revealing. Violations of grammatical rules, e.g., in syntactic phrase structure or morpho-syntax, often elicit a “biphasic” pattern of ERPs. Initial negativities, generally found around 150–200 or 300–500 ms, and commonly referred to as (early) left anterior negativities – (E)LANs – due to their typical scalp distribution, are followed by later positivities (500–1000 ms), often called P600s due to their positivity and typical onset latency (Neville et al., 1991; Friederici et al., 1996; Coulson et al., 1998). LANs seem to be automatic, have been linked to aspects of grammatical structure building, and appear to depend at least in part on left frontal lobe structures (Hahne and Friederici, 1997; Friederici et al., 1998, 1999b; Hagoort and Brown, 2000; Newman et al.,



2007). The late positivities, by contrast, have a central-posterior distribution, seem to be controlled rather than automatic, have been implicated (at least in part) in aspects of syntactic integration, and appear to involve left temporal/temporo-parietal structures and the basal ganglia (Osterhout and Holcomb, 1992; Hahne and Friederici, 1997; Friederici et al., 1998, 2003a; Kaan et al., 2000; Friederici, 2002).

Lexical-conceptual processing generally elicits bilateral central-to-posterior negativities known as N400s, which typically occur from about 250 to 500 ms post-stimulus (Kutas and Hillyard, 1980; Friederici et al., 1999a; Kutas and Federmeier, 2000; Hinojosa et al., 2001; Laurent et al., 2006). N400s seem to reflect one or more possibly related processes, including access to representations in long-term memory (thus the component is modulated by the frequency of the stimulus) and contextual integration, that is, the integration of the given stimulus with the current context held in working memory and with established long-term knowledge (Kutas and Federmeier, 2000). For example, reading or hearing a word that does not match the preceding context and long-term knowledge (e.g., “He shaved off his moustache and city.”) elicits an N400, as would seeing a picture of a city or hearing the sound of a car horn honking following the same sentence fragment (Van Petten and Rheinfelder, 1995; McPherson and Holcomb, 1999; Kutas and Federmeier, 2000; Federmeier and Kutas, 2001). Based on findings from patient, neuroimaging, and intracranial recording studies, N400s seem to reflect activity in multiple brain areas, including modality-specific regions of the temporal neocortex, ventrolateral prefrontal areas involved in processing information across modalities, and medial temporal lobe structures, including the hippocampus (Kutas and Federmeier, 2000).

ERP studies of rule violations in music have yielded results intriguingly similar to those found in language, suggesting at least some shared neurocognitive correlates across the domains. In a series of musical chords, harmonically inappropriate or dissonant chords generally elicit anterior negativities in a similar time window (about 150 to 280 ms) as some of the earlier LANs (i.e., ELANs) found in language studies (Verleger, 1990; Paller et al., 1992; Patel et al., 1998; Koelsch et al., 2000, 2002b,c; Loui et al., 2005; Leino et al., 2007). However, whereas the negativities elicited by rule violations in language are generally left lateralized, those elicited by rule violations in music are usually right lateralized, or sometimes bilateral – hence the term early (right) anterior negativity, or E(R)AN – and have been linked to frontal lobe structures in both hemispheres (Maess et al., 2001; Koelsch et al., 2002a).

Later bilateral or right-lateralized anterior negativities (N5s/Late Negativities, 450–1500 ms) have also been elicited by the same types of rule violations in music (Koelsch et al., 2000, 2002b,c; Loui et al., 2005). Although it has been suggested that these negativities may be similar to the N400s found in language studies, reflecting musical “meaning” conveyed by the violations (Koelsch, 2005), the fact that they have an anterior scalp distribution and are elicited by rule-based violations suggests that they might also or instead be related to rule processing. Finally, musical rule violations also typically elicit late central to posterior positivities, in both melodies (Verleger, 1990; Paller et al., 1992; Besson and Faïta, 1995) and chord sequences (Patel et al., 1998; Koelsch et al., 2000, 2002c; Regnault et al., 2001). Interestingly, the later anterior negativities seem to be observed only (though not always; Steinbeis et al., 2006) when subjects do not attend to the violations (Koelsch et al., 2000, 2002b,c; Loui et al., 2005), resulting in the reduction of the temporally overlapping late positivities that may otherwise mask the negativities due to additivity effects (Paller et al., 1992; Besson and Faïta, 1995; Koelsch et al., 2002c).

In contrast, the processing of idiosyncratic, non-rule-based aspects of long-term memorized representations has not been well studied in music. In particular, we are aware of no ERP studies examining this issue. Moreover, to our knowledge, N400s have not been directly elicited by music in any studies. Note that although one study investigated in-key note (“diatonic”) violations in melodies, these notes actually appeared to violate musical rules, for example, non-tonic notes following leading notes (Besson and Faïta, 1995; Besson and Schön, 2001). Indeed, in a separate group of subjects, these violations in unfamiliar melodies were detected 46% of the time by musicians and 31% of the time by non-musicians, suggesting that at least some of these notes violated non-memory-based aspects of the melodies. Interestingly, the “diatonic” violations, as well as out-of-key note (“non-diatonic”) violations, appeared to elicit negativities between 200 and 400 ms in familiar melodies but not in unfamiliar melodies. However, these negativities were only shown in the figures and not reported in the text, and were immediately followed by large positivities that may have masked the negativities in later time windows due to additivity effects. It is thus unclear whether or not these negativities were related to N400s.

Despite the fact that N400s have not clearly been elicited by note violations in familiar melodies, neuroimaging studies have revealed that a similar network of brain areas is involved in processing memorized aspects of both music and language. For instance, semantic judgments of single words (Friederici et al., 2000; Booth et al., 2002; Pilgrim et al., 2002; Chou et al., 2006) and of idioms or metaphors (Rapp et al., 2004; Eviatar and Just, 2006; Zempleni et al., 2007) have been linked to activation of left or bilateral temporal/temporo-parietal regions, including Brodmann's areas (BA) 20 and 21, as well as left or bilateral anterior ventro-lateral prefrontal cortex (BA 45/47). The processing of semantic anomalies has additionally been linked to activation of medial temporal lobe structures, namely the hippocampus and parahippocampal gyrus (Newman et al., 2001). Each of these brain regions has also been activated during familiarity judgments of music, or while processing familiar as compared to unfamiliar melodies (Platel et al., 1997, 2003; Satoh et al., 2006; Plailly et al., 2007), suggesting at least partly overlapping neural substrates involved in processing memorized aspects of both music and language.

In the present ERP study, we examined the rule/memory contrast in music within subjects. Two types of note violations were examined in the same set of well-known melodies, as well as in matched novel melodies. “In-key” deviant notes in well-known melodies were appropriate with respect to musical rules but differed from the actual melody, thus serving as memory-based violations in well-known but not novel melodies. “Out-of-key” deviant notes violated rules of musical tonality. In novel melodies, out-of-key notes served solely as rule violations, whereas in well-known melodies, the notes violated both rules and memory. To ensure that each subject was actually familiar with each well-known melody, only those well-known melodies that the particular subject rated as familiar, and those novel melodies rated as unfamiliar, in a separate test session were included for analysis. To reduce the attention-related late positivity, subjects were not informed of the in-key and out-of-key violations, but instead made responses about the timbre of the last note in each melody. We predicted a double dissociation between rule and memory violations: ERANs for rule violations but



not for purely memory violations, and N400s for memory violations but not for purely rule violations. Finally, to probe whether these effects vary according to musical training, we included musical training as a factor in the experimental design: half the subjects were musicians and half were non-musicians.

Methods

Participants

We tested 64 adult (32 male, 32 female) monolingual native speakers of American English. Because knowledge of the well-known melodies in this study is largely culturally dependent, none of the subjects had lived outside the United States for more than six months before the age of 18. None of the subjects had any known developmental, neurological, or psychiatric disorders, and all had normal or corrected hearing and vision. Half the subjects (16 male, 16 female) were musicians, having at least five years of musical training, and half (16 male, 16 female) were non-musicians, with one year or less of musical training. Musical training was defined as private instrument or voice lessons, or participation in a musical ensemble such as band, orchestra, or chorus. Due to ERP data loss from extensive noise, one female non-musician was excluded, and a new female non-musician was tested in her place. All subjects were right-handed (Oldfield, 1971). Familial sinistrality, defined as one or more left-handed or ambidextrous individuals among biological parents or full siblings, was matched across musicians (4 males, 5 females) and non-musicians (4 males, 5 females). The musicians and non-musicians did not differ in age (musicians: mean=22.8 years, SE=0.7; non-musicians: mean=22.5, SE=0.6; t(62)=0.39, p=0.699) or education (musicians: mean=15.9 years, SE=0.5; non-musicians: mean=15.6, SE=0.4; t(62)=0.50, p=0.617). As expected, the musicians had significantly more years of musical training than the non-musicians (musicians: range 5–24 years, mean=10.6, SE=0.8; non-musicians: range 0–1 years, mean=0.3, SE=0.1; t(31)=13.73, p<0.001). Research methods were approved by the Institutional Review Board at Georgetown University. All subjects gave written informed consent and received monetary compensation for their participation.

Stimuli

The stimuli consisted of 240 melodies, which ranged from 5.0 to 18.0 s in length (mean=9.8 s) and contained between 10 and 36 notes (mean=17.8 notes). The melodies were created in MIDI format, using Finale Version 3.5.1 (Coda Music), and were then converted to WAV files with a “grand piano” Sound Font, using MidiSyn Version 1.9 (Future Algorithms). All melodies were in the key of C-major (94.2%) or A-minor (5.8%), whose natural minor scale contains the same notes as the C-major scale. Half of the melodies (120) were segments from well-known tunes, including traditional, folk, children's, patriotic, holiday, classical, and “pop” music, as well as themes from movies, television, and Broadway musicals (for a full list of the well-known melodies, see Appendix A). The other half (120) were novel melodies composed by one of the authors (Miranda). Each novel melody was exactly matched to one of the 120 well-known melodies with respect to tempo, length, and implied harmony. To minimize the potential of subjects falsely recognizing the novel melodies as their matched well-known melodies, distinctive rhythms in some of the novel melodies were slightly altered.

Three versions of each well-known and novel melody were created: control, in-key violation, and out-of-key violation (Fig. 1). The control melodies (the original versions) contained no violations. Each melody with an in-key violation contained one note that differed from its corresponding control melody. The in-key violation notes never violated rules of musical tonality/harmony, and did not constitute abnormal pitch intervals with the preceding or following note. For listeners familiar with a given well-known melody, however, the in-key deviant note violated the memorized sequence of notes. In contrast, the in-key “deviant” notes in novel melodies did not constitute such violations, since the note sequences in these melodies had presumably not been previously encountered, and thus should not have been memorized. Each out-of-key violation melody contained a harmonically deviant note, that is, a note that violated the implicit harmony at that point in the melody. Therefore, out-of-key deviant notes constituted violations of musical rules in both well-known and novel melodies, as well as violations of the memorized note sequences in familiar well-known melodies.
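The in-key/out-of-key distinction can be illustrated with a small sketch (ours, not part of the study's materials): since every melody was in C-major or A-minor (natural), which share the same seven pitch classes, a deviant note violates tonal rules exactly when its pitch class falls outside that set.

```python
# Illustrative sketch (not from the study): test whether a MIDI note is
# "in-key" for C major / A natural minor, which share one set of pitch classes.
C_MAJOR_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}  # C, D, E, F, G, A, B

def is_in_key(midi_note):
    """True if the note's pitch class belongs to the C-major scale."""
    return midi_note % 12 in C_MAJOR_PITCH_CLASSES
```

For example, F#4 (MIDI 66) would count as an out-of-key deviant, while D4 (MIDI 62) remains in-key.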

Thus the stimuli consisted of 120 pairs of well-known/novel melodies, with control, in-key violation, and out-of-key violation conditions for each. Within each set of six matched conditions (well-known/novel melody pairs in control, in-key, and out-of-key versions), the target notes – i.e., the deviant notes or corresponding control notes – occurred at the same time relative to the onset of the melodies. Moreover, the note(s) within an interval of 600 ms immediately preceding and following the target notes were the same in all six conditions. All target notes were 600 ms in length and had the same pitch across each well-known/novel pair of control, in-key, and out-of-key conditions. In a majority of the melodies (103 out of the 120 sets), the target note was equivalent to a quarter note in a 4/4 or 3/4 time signature. These 103 melodies were presented at a tempo of 100 beats per minute. In the 17 remaining sets of melodies, the target note was equivalent to a quarter note in 6/8 time (9 melodies) or a dotted quarter note in 4/4 or 3/4 time (8 melodies), and these were presented at a tempo of either 66.7 or 150 beats per minute, respectively. To increase the likelihood that subjects would be familiar with each well-known melody before the onset of the target note, each target note began after the mean recognition time for the melody in which it occurred, based on response times for familiar-rated melodies from a previous behavioral study (Wood and Ullman, 2005; Miranda and Ullman, under revision).
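The constant 600-ms target-note duration follows directly from the tempos chosen; a quick check of the arithmetic (our own, using the standard beats-per-minute formula):

```python
def note_duration_ms(tempo_bpm, note_value_in_beats):
    """Duration of a note in milliseconds, given the tempo (beats per minute)
    and the note's length expressed in beats."""
    return 60000.0 / tempo_bpm * note_value_in_beats

# Quarter note (1 beat) at 100 bpm: 600 ms.
# Quarter note in 6/8 at 66.7 bpm (dotted-quarter beat, so 2/3 of a beat): ~600 ms.
# Dotted quarter note (1.5 beats) at 150 bpm: 600 ms.
```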

In 25% of the melodies, the final note was a timbre deviant. For these melodies, the timbre of the last note was altered using the “Wahwah Effect” in Audacity Version 1.2.0, resulting in a muffled piano sound. For each melody containing a timbre deviant, the corresponding melodies in each of the remaining five conditions also contained the same timbre deviant. To prevent overlap between ERP responses to the timbre deviants and the target notes of interest, the onset of each timbre deviant occurred at least 1200 ms after the onset of the preceding in-key, out-of-key, or control target note.

Counterbalancing and presentation order

In the previous behavioral study, we found that novel melodies were rated as more familiar when they were heard after, as compared to before, their well-known counterparts (Wood and Ullman, 2005). In contrast, no such priming effects were found for well-known melodies that followed their matched novel melodies. To avoid any such well-known-to-novel priming effects in the present study, novel melodies were always presented before their


Fig. 1. Examples of the stimuli used in the experiment. Asterisks indicate target control or violation notes.


well-known counterparts. To counterbalance the stimuli appropriately, and to satisfy various constraints, including that novel melodies had to occur before their well-known counterparts, the items were presented in the following manner. Each subject completed three experimental runs, with each run containing 60 different melodies that were presented in equal proportions (10 each) across each of the six conditions (well-known and novel melodies, in control, in-key, and out-of-key conditions). Thus, each subject heard a total of 180 melodies, 30 in each of the six conditions. To avoid the previously observed priming effects, the 30 well-known melodies in the first run were not followed by their corresponding novel melodies in the subsequent runs. Likewise, the 30 novel melodies in the third run were not preceded by their corresponding well-known melodies. For the 60 novel melodies in the first and second runs that were followed by their 60 corresponding well-known melodies in the second and third runs, respectively, the matched well-known and novel melodies were presented in different control/violation conditions. With this design, all six conditions of the 120 well-known/novel melody pairs (that is, 720 melody versions) were presented once across every four subjects (within musicians or non-musicians, balanced across males and females), with the presentation order counterbalanced across the four subjects and melody pairs. The melodies in each run were presented in a different pseudo-random order for each subject, with restrictions on the number of each melody type (well-known vs. novel), violations, and timbre deviants that could occur in a row.
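The central ordering constraint (a novel melody must never follow its matched well-known counterpart) is easy to verify mechanically. The sketch below is our own illustration, with hypothetical data structures, not the authors' stimulus-ordering code:

```python
def novel_precedes_well_known(playlist):
    """Verify that no novel melody appears after its matched well-known
    counterpart. playlist: ordered (pair_id, melody_type) tuples, where
    melody_type is either "novel" or "well-known"."""
    seen_well_known = set()
    for pair_id, melody_type in playlist:
        if melody_type == "well-known":
            seen_well_known.add(pair_id)
        elif pair_id in seen_well_known:
            return False  # a novel melody came after its well-known twin
    return True
```

A pseudo-random order would be regenerated (or repaired) until this check, along with the run-composition constraints, was satisfied.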

Procedure

Subjects were seated 70 cm from a 16-inch CRT monitor in a dark, quiet testing room. The melodies were presented via ER-4 insert earphones (Etymotic Research) at approximately 70 dB. Five hundred milliseconds prior to each melody, a fixation cross appeared in the center of the screen. The cross remained until the end of the melody, when it was replaced by the words “Same” and “Different”, which remained until the subject responded by pressing either the left or right mouse button, respectively. Prior to the experimental session, subjects were instructed to listen to each melody and determine whether the “sound quality” (i.e., timbre) of the final note was the same as or different from that of the preceding notes. Subjects were not informed about the in-key and out-of-key violations. Each subject was given a practice session containing 12 different melodies, which were presented in equal proportions (2 each) across the six experimental conditions. Three of the practice melodies (25%) contained timbre deviants. Visual feedback (“Correct” or “Incorrect”) was given in the practice session, but not in the subsequent experimental session.

Follow-up behavioral assessments

To test each subject's familiarity with the melodies s/he had been presented with, all subjects returned (after 1 to 7 days) for a follow-up behavioral test session. Each subject was presented with the control versions (with no timbre deviants) of the 180 well-known and novel melodies that had been presented to that particular subject in the ERP experiment. To ensure that the behavioral measures reflected the subject's familiarity with each melody prior to the target note in the ERP study, melodies were presented only up to, and not including, the target note. The runs were presented to each subject in the same order as in the ERP study, although the melodies within each run were presented in a new pseudo-randomized order, with the same ordering restrictions as described above. After listening to each melody, subjects were prompted to rate their familiarity with the melody on a scale from 1 to 5, with a rating of “1” indicating “very unfamiliar” and a rating of “5” indicating “very familiar.” Prior to presentation of the melodies, subjects were instructed to base their familiarity ratings on how familiar they were with each melody before participating in the ERP session. These ratings were then used to identify “familiar” melodies (well-known melodies rated 4 or 5) and “unfamiliar” melodies (novel melodies rated 1 or 2) for each subject, for inclusion in the ERP analyses (see below). Differences in behavioral results between musicians and non-musicians were analyzed using two-tailed t-tests (SPSS 15.0).
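The inclusion rule amounts to a simple per-subject filter. The sketch below uses hypothetical data structures (melody type plus a 1–5 rating), not the authors' actual analysis code:

```python
def select_for_erp_analysis(ratings):
    """ratings: dict mapping melody_id -> (melody_type, familiarity_rating).
    Returns (familiar, unfamiliar): well-known melodies rated 4-5 and
    novel melodies rated 1-2; all other melodies are excluded."""
    familiar = sorted(m for m, (t, r) in ratings.items()
                      if t == "well-known" and r >= 4)
    unfamiliar = sorted(m for m, (t, r) in ratings.items()
                        if t == "novel" and r <= 2)
    return familiar, unfamiliar
```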

EEG recordings, pre-processing, and analysis

Scalp EEG was continuously recorded in DC mode at a sampling rate of 500 Hz from 64 electrodes mounted in an elastic cap (Electro-Cap International) and referenced to the left mastoid, in accordance with the 10–20 international system. Impedance at



each scalp electrode was reduced to 5 kΩ or below. The EEG was amplified by Neuroscan SynAmps2 amplifiers and filtered on-line with a band-pass filter (DC to 100 Hz, 24 dB/octave attenuation). Electrodes were also placed above and below the right eye and on the outer canthi of both eyes to measure vertical and horizontal eye movements and eye-blinks. Off-line, the EEG was re-referenced to the average of the left and right mastoids, and filtered with a band-pass filter (0.1 to 40 Hz, 24 dB/octave attenuation). Eye-blinks (determined by reference to the vertical EOG) were removed from the raw EEG using an ocular artifact reduction algorithm in SCAN 4.3.3 (Compumedics Neuroscan, El Paso, TX). Trials (the EEG from 200 ms before to 1000 ms after the onset of the target note) containing additional artifacts exceeding ±100 μV at any given scalp electrode were excluded from analysis.

ERPs time-locked to the onset of each target note were averaged by condition type for each subject at each electrode site, using a 200-ms pre-stimulus baseline. For each violation and control condition, the computation of ERP averages for each subject was restricted to either well-known melodies that the given subject rated in the follow-up behavioral session as "familiar" (given a familiarity rating of 4 or 5) or novel melodies that were rated as "unfamiliar" (given a familiarity rating of 1 or 2). These correctly rated melodies will henceforth be referred to as "familiar" and "unfamiliar" melodies.
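Per-condition averaging with the 200-ms pre-stimulus baseline can be sketched as follows (hypothetical helper; at 500 Hz, the first 100 samples of a -200 to +1000 ms epoch form the baseline):

```python
import numpy as np

def erp_average(epochs, n_baseline=100):
    """epochs: (n_trials, n_channels, n_samples) array spanning -200 to
    +1000 ms at 500 Hz. Subtracts each trial's mean pre-stimulus voltage
    per channel, then averages across trials."""
    baseline = epochs[:, :, :n_baseline].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)   # (n_channels, n_samples)
```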

Fig. 2. Regions of interest (ROIs) used for statistical analyses of ERP data: Left Anterior (L. Ant.): average of F1, F3, F5, F7, FF1, FF3, and FP3; Right Anterior (R. Ant.): average of F2, F4, F6, F8, FF2, FF4, and FP4; Midline Anterior (M. Ant.): includes only FZ; Left Central (L. Cent.): average of C1, C3, C5, and T3; Right Central (R. Cent.): average of C2, C4, C6, and T4; Midline Central (M. Cent.): includes only CZ; Left Posterior (L. Post.): average of P1, P3, P5, T5, PO3, TO1, and O1; Right Posterior (R. Post.): average of P2, P4, P6, T6, PO4, TO2, and O2; and Midline Posterior (M. Post.): includes only POZ.

ERPs were statistically evaluated over nine regions of interest (ROIs; Fig. 2): left anterior (F1, F3, F5, F7, FF1, FF3, FP3), right anterior (F2, F4, F6, F8, FF2, FF4, FP4), midline anterior (FZ), left central (C1, C3, C5, T3), right central (C2, C4, C6, T4), midline central (CZ), left posterior (P1, P3, P5, T5, PO3, TO1, O1), right posterior (P2, P4, P6, T6, PO4, TO2, O2), and midline posterior (POZ).
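The nine ROIs lend themselves to a direct tabular representation; the electrode groupings below are copied from the text, while `roi_waveforms` is a hypothetical helper that collapses electrode-level ERPs into ROI-level waveforms by simple averaging:

```python
import numpy as np

# Electrode groupings from the text (Fig. 2).
ROIS = {
    "L. Ant.":  ["F1", "F3", "F5", "F7", "FF1", "FF3", "FP3"],
    "R. Ant.":  ["F2", "F4", "F6", "F8", "FF2", "FF4", "FP4"],
    "M. Ant.":  ["FZ"],
    "L. Cent.": ["C1", "C3", "C5", "T3"],
    "R. Cent.": ["C2", "C4", "C6", "T4"],
    "M. Cent.": ["CZ"],
    "L. Post.": ["P1", "P3", "P5", "T5", "PO3", "TO1", "O1"],
    "R. Post.": ["P2", "P4", "P6", "T6", "PO4", "TO2", "O2"],
    "M. Post.": ["POZ"],
}

def roi_waveforms(erps):
    """erps: dict mapping electrode name -> ERP waveform array.
    Returns one averaged waveform per ROI."""
    return {roi: np.mean([erps[e] for e in elecs], axis=0)
            for roi, elecs in ROIS.items()}
```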

Mean amplitudes for each condition were computed for appropriate time windows for each ERP component of interest. The time windows were selected on the basis of previous research and visual inspection of the grand averages across all conditions that showed the component in question. Visual inspection of the waveforms suggested that the time windows for each component were similar across all conditions showing the component.

Repeated measures ANOVAs were performed on each time window (SAS 9.1). In all cases, the Huynh–Feldt correction was applied to account for violations of the sphericity assumption. The threshold for significance used in all analyses was p≤0.05 (two-tailed). Separate ANOVAs were performed for lateral and midline ROIs. For the sake of simplicity, and because results were always similar for lateral and midline ROIs, only analyses on lateral ROIs are reported here, unless the ERP component under investigation was clearly larger over the midline than lateral sites, in which case only midline analyses are reported.
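As a point of reference, the sphericity correction referred to above can be computed from a subjects × conditions matrix of mean amplitudes using the standard Greenhouse-Geisser and Huynh-Feldt epsilon formulas (a textbook sketch, not the SAS 9.1 implementation used in the study):

```python
import numpy as np

def sphericity_epsilons(X):
    """X: (n_subjects, k_conditions) array of mean ERP amplitudes.
    Returns (Greenhouse-Geisser epsilon, Huynh-Feldt epsilon)."""
    n, k = X.shape
    S = np.cov(X, rowvar=False)                 # k x k condition covariance
    # double-center the covariance matrix
    Sc = S - S.mean(axis=0) - S.mean(axis=1)[:, None] + S.mean()
    eps_gg = np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc ** 2))
    # Huynh-Feldt adjustment of the GG estimate, capped at 1
    eps_hf = (n * (k - 1) * eps_gg - 2) / ((k - 1) * (n - 1 - (k - 1) * eps_gg))
    return eps_gg, min(eps_hf, 1.0)
```

The corrected ANOVA then multiplies both degrees of freedom by the epsilon before looking up the p-value; Huynh-Feldt is less conservative than Greenhouse-Geisser.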




For each of the four violation/control contrasts (unfamiliar in-key, unfamiliar out-of-key, familiar in-key, and familiar out-of-key), each time window for which visual inspection suggested a component of interest was analyzed with three levels of analysis. These analyses allowed us to identify the ROIs in which the component was most prominent, and to reveal any musician/non-musician differences.

In the first-level analyses, the ANOVAs on the lateral ROIs included four factors: Violation (2 levels: violation vs. control), Hemisphere (2 levels: left vs. right), Anterior/Posterior (3 levels: anterior, central, and posterior), and Musical Training (2 levels: musicians vs. non-musicians). ANOVAs on the midline ROIs included three of these factors (Violation, Anterior/Posterior, and Musical Training). Only interactions between Violation and one or both scalp-distribution factors (Hemisphere, Anterior/Posterior) were included in these models. The mean ERP amplitude for each condition in the time window constituted the dependent measure.

Any significant interactions were followed up with the second-level analyses, which included only those factors in the interaction, and allowed us to identify the set of ROIs in which the effect was most prominent (the "area of activation").

This area of activation was examined with third-level analyses that included Musical Training as well as relevant scalp-distribution factors, allowing us to examine possible effects of these factors on the component of interest, while avoiding interference from temporally overlapping components with other areas of activation.

Finally, after all components had been analyzed in each contrast, we further examined each component of interest by directly comparing the different violation/control contrasts in which the component had been found. The difference waves (the difference of the mean amplitudes of the two contrasted conditions) of the relevant violation/control contrasts constituted the dependent measure in these ANOVAs, which included the factors Hemisphere, Musical Training, and Violation Condition (up to three levels: unfamiliar out-of-key, familiar in-key, familiar out-of-key), or just Musical Training and Violation Condition for midline analyses. All main effects and interactions (not just those including Violation Condition) were included, allowing us to examine effects of Hemisphere and Musical Training across violation conditions.
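The dependent measure used in these cross-contrast ANOVAs (a difference wave reduced to a mean amplitude over a component's time window) can be sketched as follows (hypothetical helper; the 500-Hz sampling rate and the -200 ms epoch start are from the text):

```python
import numpy as np

FS = 500              # sampling rate (Hz)
EPOCH_START_MS = -200 # epoch onset relative to the target note

def mean_amplitude(violation, control, win_ms):
    """violation/control: (n_samples,) ERP waveforms for one ROI (µV).
    win_ms: (start, end) of the component's time window in ms."""
    diff = violation - control                          # difference wave
    i0 = int((win_ms[0] - EPOCH_START_MS) * FS / 1000)  # ms -> sample index
    i1 = int((win_ms[1] - EPOCH_START_MS) * FS / 1000)
    return diff[i0:i1].mean()
```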

Results

Behavioral results

In the ERP experiment, subjects accurately judged the presence (mean=97.4%, SE=1.2%) and absence (mean=98.1%, SE=0.6%) of timbre violations, with no significant differences between musicians and non-musicians (ps>0.09). In the follow-up behavioral session, the familiarity ratings of the well-known melodies (mean=4.23, SE=0.06) differed between musicians (mean=4.44, SE=0.06) and non-musicians (mean=4.02, SE=0.09; t(52)=3.76, p<0.001), whereas the ratings of novel melodies (mean=1.49, SE=0.04) did not (musicians: mean=1.47, SE=0.05; non-musicians: mean=1.51, SE=0.05; t(62)=0.58, p=0.562). The subjects rated 78.9% (SE=1.7%) of the well-known melodies as familiar and 88.2% (SE=1.0%) of the novel melodies as unfamiliar. More well-known melodies were rated as familiar by musicians (mean=84.4%, SE=1.7%) than non-musicians (mean=73.5%, SE=2.6%; t(62)=3.54, p=0.001), whereas no such difference was observed for the novel melodies (musicians: mean=88.3%, SE=1.5%; non-musicians: mean=88.2%, SE=1.4%; t(62)=0.08, p=0.940).

ERP results

Visual inspection of the waveforms indicated a characteristic auditory P1–N1–P2 complex for all conditions (i.e., in both violation and control conditions), in addition to five ERP components elicited by the violation conditions as compared to the control conditions (Fig. 3). Time windows for each of these five components were selected on the basis of previous research and visual inspection of the waveforms: 150–270 ms for an early anterior-central negativity elicited by out-of-key violations in both familiar and unfamiliar melodies; 220–380 ms for a posterior negativity elicited by both in-key and out-of-key violations in familiar melodies; 270–350 ms for an anterior-central positivity also elicited by both types of violations in familiar melodies; 500–700 ms for a later anterior negativity elicited by all three violation conditions; and 500–700 ms for a late posterior positivity also elicited by all three violation conditions.

Violation/control contrasts

In-key violations in unfamiliar melodies

These "violations" did not constitute actual violations of either musical rules or known musical melodies, and thus were not predicted to elicit any ERP components, as compared to control target notes in unfamiliar melodies. Indeed, visual inspection suggested no clear ERP effects in this contrast (Fig. 3b), so the initial set of three-level analyses was not performed (see below for further analyses on this condition).

Out-of-key violations in unfamiliar melodies

Analyses were performed on the time windows of the three components suggested by visual inspection (Fig. 3b): 150–270 ms for the early anterior-central negativity, 500–700 ms for the later anterior negativity, and 500–700 ms for the late posterior positivity (see above). Analysis of the 150–270 ms window with the first-level ANOVA (that is, with factors Violation, Hemisphere, Anterior/Posterior, and Musical Training on lateral ROIs; see Methods) revealed a significant Violation by Anterior/Posterior interaction (F(4,312)=12.07, p<0.0001), with no other significant interactions that included both Violation and either of the scalp-distribution factors (Hemisphere, Anterior/Posterior). The second-level analyses revealed that although the negativity extended across all lateral ROIs, it was clearly greatest at anterior and central ROIs (anterior: F(1,63)=33.31, p<0.0001; central: F(1,63)=30.72, p<0.0001; posterior: F(1,63)=9.33, p=0.003). Therefore the anterior and central lateral ROIs were selected as the "area of activation" for further targeted analyses. The third-level analyses on this anterior-central area of activation (factors Violation, Hemisphere, and Musical Training, collapsing across anterior and central levels of the factor Anterior/Posterior) revealed the expected main effect of Violation (F(1,61)=63.13, p<0.0001), confirming the anterior-central negativity. No interactions with Violation were obtained, indicating that the negativity did not vary across the hemispheres or between musicians and non-musicians.

In the 500–700 ms time window, the first-level ANOVA again revealed only a Violation by Anterior/Posterior interaction (F(4,312)=12.47, p<0.0001), with no other significant interactions that included both Violation and either scalp-distribution


Fig. 3. Grand average ERPs for control notes (black line), in-key violations (blue line), and out-of-key violations (red line) in (a) familiar melodies and (b) unfamiliar melodies. Timing is displayed in milliseconds. Negative voltage is plotted upward. The ordinate indicates the onset of the target note (onset of the subsequent note occurred 600 ms later). Individual plots represent average waveforms from all electrodes within each of the nine regions of interest. Arrows indicate the five ERP components of interest: the early anterior-central negativity (ERAN), the posterior negativity (N400), the anterior-central positivity (P3a), the later anterior negativity (N5), and the late posterior positivity (P600).


factor. The second-level analyses revealed that the anterior ROIs showed a significant negativity (F(1,63)=14.59, p=0.0003), with no significant effect for the central ROIs (F(1,63)=1.18, p=0.282), and a significant positivity for the posterior ROIs (F(1,63)=30.35, p<0.0001). To investigate the anterior area of activation, a third-level ANOVA was performed on the anterior lateral ROIs. This revealed a main effect of Violation (F(1,61)=

14.45, p=0.0003), confirming the anterior negativity, and no significant interactions of Hemisphere or Musical Training with Violation.

To further examine the posterior positivity in the 500–700 ms time window, a third-level analysis was performed over the posterior area of activation (posterior lateral ROIs). This analysis revealed only a main effect of Violation (F(1,61)=30.07, p<0.0001),



verifying the positivity, with no significant interactions between Violation and either of the other two factors (Hemisphere and Musical Training).

In-key violations in familiar melodies

Analyses were performed on three time windows: 220–380 ms for the posterior negativity, 270–350 ms for the anterior-central positivity, and 500–700 ms for both the later anterior negativity and the late posterior positivity (Fig. 3a). In the 220–380 ms window, the first-level ANOVA revealed only a significant Violation by Anterior/Posterior interaction (F(4,312)=16.30, p<0.0001). The second-level analyses confirmed that the negativity was restricted to the posterior ROIs (F(1,63)=2.19, p=0.0009), with significant positivities at anterior (F(1,63)=4.85, p=0.031) and central (F(1,63)=5.63, p=0.021) ROIs. The third-level analyses on the posterior lateral ROIs revealed only a main effect of Violation (F(1,61)=11.93, p=0.001).

In the 270–350 ms time window, visual inspection of the waveforms suggested that the anterior-central positivity was much larger at midline than lateral ROIs (Fig. 3a), so we report ANOVAs

Fig. 4. Average ERPs for (a) musicians and (b) non-musicians: for familiar melodies (left column) and unfamiliar melodies (right column). Individual plots represent average waveforms from all electrodes within each of the nine regions of interest.

only on midline ROIs (see Methods). The first-level ANOVA revealed only a Violation by Anterior/Posterior interaction (F(5,309)=13.52, p<0.0001). Second-level analyses confirmed the anterior-central area of activation, with significant positivities at anterior (F(1,63)=20.16, p<0.0001) and central (F(1,63)=9.46, p=0.007) ROIs, and a marginally significant negativity at the posterior ROI (F(1,63)=3.54, p=0.064). The third-level ANOVA on the anterior and central midline ROIs revealed a main effect of Violation (F(1,61)=37.83, p<0.0001), reflecting the positivity over this area of activation. Additionally, this ANOVA yielded a significant interaction between Violation and Musical Training (F(2,61)=3.71, p=0.030), reflecting a greater positivity for musicians than non-musicians (Fig. 4), although the positivity was significant in both groups (musicians: F(1,31)=31.09, p<0.0001; non-musicians: F(1,31)=8.55, p=0.006).

In the 500–700 ms time window, the first-level ANOVA again showed only a Violation by Anterior/Posterior interaction (F(4,312)=12.86, p<0.0001). Second-level analyses revealed that the negativity was indeed restricted to the anterior ROIs (F(1,63)=10.62, p=0.002), with significant positivities at central




(F(1,63)=6.96, p=0.011) and posterior (F(1,63)=27.76, p<0.0001) ROIs. The third-level analysis on anterior lateral ROIs revealed only the expected main effect of Violation (F(1,61)=10.79, p=0.002), verifying the anterior negativity.

The second-level analyses revealed that the positivity in the 500–700 ms time window was significantly greater over posterior than central ROIs (F(2,126)=4.27, p=0.016). The third-level analysis on posterior lateral ROIs revealed a main effect of Violation (F(1,61)=28.44, p<0.0001), as well as an interaction between Violation and Musical Training (F(2,61)=4.16, p=0.020), with musicians showing a larger positivity than non-musicians. Follow-up analyses showed that the positivity was significant only for musicians (F(1,31)=46.84, p<0.0001; non-musicians: non-significant positivity, F(1,31)=2.61, p=0.116).

Out-of-key violations in familiar melodies

Analyses were performed on the time windows of the five components suggested by visual inspection (Fig. 3a): 150–270 ms for the early anterior-central negativity, 220–380 ms for the posterior negativity, 270–350 ms for the anterior-central positivity, and 500–700 ms for the later anterior negativity and the late posterior positivity.

In the 150–270 ms time window, the first-level ANOVA revealed only a Violation by Anterior/Posterior interaction (F(4,312)=4.64, p=0.001). The second-level analyses revealed the same pattern as that observed in this time window for out-of-key violations in unfamiliar melodies: whereas the negativity extended across all lateral ROIs, it was greatest at anterior and central ROIs (anterior: F(1,63)=36.75, p<0.0001; central: F(1,63)=34.95, p<0.0001; posterior: F(1,63)=21.28, p<0.0001). Moreover, it was significantly smaller in posterior than in central (F(2,126)=6.73, p=0.002) and anterior (F(2,126)=7.23, p=0.001) ROIs, but did not differ between anterior and central ROIs (F(2,126)=0.96, p=0.385), confirming that the negativity was most prominent across anterior and central ROIs. The third-level analysis, on anterior and central lateral ROIs, revealed only the expected main effect of Violation (F(1,61)=72.07, p<0.0001).

The first-level ANOVA in the 220–380 ms time window showed only a Violation by Anterior/Posterior interaction (F(4,312)=10.15, p<0.0001). The posterior distribution of the negativity was confirmed by second-level analyses (posterior: F(1,63)=19.41, p<0.0001; central: F(1,63)=0.16, p=0.687; anterior: F(1,63)=0.01, p=0.937). The third-level analysis, on posterior lateral ROIs, revealed only the predicted main effect of Violation (F(1,61)=19.24, p<0.0001).

The anterior-central positivity in the 270–350 ms time window again appeared to be largest over midline ROIs (as was observed for in-key violations in familiar melodies), and thus only analyses on the midline ROIs are reported. The first-level ANOVA revealed a Violation by Anterior/Posterior interaction (F(5,309)=14.25, p<0.0001), as well as a three-way interaction between Violation, Anterior/Posterior, and Musical Training (F(6,309)=4.61, p=0.0002). Second-level analyses confirmed the anterior-central distribution of the positivity, with a significant positivity at anterior (F(1,63)=13.08, p=0.0006) and central (F(1,63)=10.58, p=0.002) ROIs, and a significant negativity at the posterior ROI (F(1,63)=4.58, p=0.036). Second-level analyses following up on the three-way interaction revealed that only musicians showed the anterior-central positivity (musicians: anterior: F(1,31)=15.72, p=0.0004; central: F(1,31)=11.17, p=0.002; posterior: F(1,31)=1.24, p=0.273; non-musicians: anterior: F(1,31)=0.92, p=0.344;

central: F(1,31)=0.89, p=0.352; posterior: negativity, F(1,31)=5.20, p=0.030). The third-level ANOVA, on the anterior-central midline area of activation, yielded the expected main effect of Violation (F(1,61)=36.37, p<0.0001), and an interaction between Violation and Musical Training (F(2,61)=11.02, p<0.0001), with musicians showing a greater positivity than non-musicians. Follow-up analyses revealed that only musicians showed a significant positivity (musicians: F(1,31)=38.82, p<0.0001; non-musicians: F(1,31)=2.41, p=0.131).

The first-level ANOVA in the 500–700 ms window showed only a Violation by Anterior/Posterior interaction (F(4,312)=78.79, p<0.0001). Second-level analyses confirmed the anterior distribution of the negativity (anterior ROIs: F(1,63)=36.99, p<0.0001), with significant positivities at central (F(1,63)=25.98, p<0.0001) and posterior (F(1,63)=169.08, p<0.0001) ROIs. The third-level analysis on anterior lateral ROIs revealed only the expected main effect of Violation (F(1,61)=36.99, p<0.0001), verifying the anterior negativity.

The second-level analyses showed that the positivity in the 500–700 ms time window was significantly greater at posterior than at central ROIs (F(2,126)=33.88, p<0.0001). The third-level analysis examining the positivity over posterior lateral ROIs revealed the predicted main effect of Violation (F(1,61)=172.99, p<0.0001), as well as a significant interaction between Violation and Musical Training (F(2,61)=3.56, p=0.034). Although the positivity was larger for musicians than for non-musicians, the effect was significant for both groups (musicians: F(1,31)=152.90, p<0.0001; non-musicians: F(1,31)=47.31, p<0.0001).

Comparisons of contrasts for each ERP component of interest

Before directly comparing the violation/control contrasts found above for each component, we statistically examined whether the components were indeed absent in those contrasts not analyzed above, due to their apparent absence on the basis of visual inspection. For all such contrasts, ANOVAs with the single factor Violation were performed in the appropriate time windows and areas of activation. These revealed no significant effects, confirming visual inspection.

Early anterior-central negativity (150–270 ms)

This component was found for out-of-key violations in both unfamiliar and familiar melodies (Figs. 3 and 5a). In both cases, the effect was most prominent across anterior and central lateral ROIs. It might be argued that visual inspection of the waveforms suggests a very slight negativity within this time window for in-key violations in familiar melodies (Figs. 3 and 5a). However, an ANOVA on the anterior and central lateral ROIs confirmed that this negativity did not reach significance in the 150–270 ms time window (F(1,63)=2.59, p=0.113). Moreover, direct comparisons of the difference waves (violation minus control) in this time window and anterior-central area of activation revealed that both types of out-of-key violations elicited significantly greater negativity than in-key violations in familiar melodies (out-of-key familiar vs. in-key familiar: F(1,63)=39.32, p<0.0001; out-of-key unfamiliar vs. in-key familiar: F(1,63)=18.95, p<0.0001).

To follow up on the significant early anterior-central negativities in the two out-of-key violation conditions, we performed an ANOVA on the difference waves for these conditions in the anterior-central area of activation, with factors Hemisphere, Musical Training, and Violation Condition (2 levels, for the


Fig. 5. Scalp topographic maps showing the distribution of the two ERP components of primary interest, averaged across all 64 subjects. The maps represent difference waves across the specified time windows for each violation/control contrast (i.e., in-key violations in familiar melodies minus control notes in familiar melodies, out-of-key violations in familiar melodies minus control notes in familiar melodies, and out-of-key violations in unfamiliar melodies minus control notes in unfamiliar melodies). Where significant, these components are indicated by an asterisk. Early anterior-central negativities (a) were elicited by rule violations (out-of-key violations in both familiar and unfamiliar melodies) but not by notes that violated only memorized representations (in-key violations in familiar melodies). Posterior negativities (b) were elicited by violations of memorized representations (both types of violations in familiar melodies) but not by violations only of rules (out-of-key violations in unfamiliar melodies). Note that the time window for the anterior-central positivity (270–350 ms), also elicited by familiar melody violations, falls within the time window for the posterior negativity (220–380 ms), and thus the anterior-central positivity is evident in the scalp topographic maps shown here for familiar melody violations.


difference waves for out-of-key violations in unfamiliar and familiar melodies). All main effects and interactions were included in the model (see Methods). The analysis revealed no main effect of Violation Condition (F(1,62)=1.31, p=0.257), indicating that the amplitude of the early anterior-central negativity did not significantly differ between out-of-key violations in unfamiliar and familiar melodies. The only significant result was a main effect of Hemisphere (F(1,62)=4.44, p=0.039), indicating a greater negativity in the right hemisphere across both violation conditions (that is, it was right-lateralized), although follow-up analyses revealed that the negativity was significant in both hemispheres (left: t(62)=4.33, p<0.0001; right: t(62)=6.92, p<0.0001).

Posterior negativity (220–380 ms)

This effect was found for both in-key and out-of-key violations in familiar melodies (Figs. 3a and 5b), and was restricted to posterior lateral ROIs in both cases. The ANOVA on the difference waves of these two contrasts in this posterior area of activation revealed no significant main effects or interactions, indicating that the amplitude of the posterior negativity did not significantly differ between the two violation conditions, between musicians and non-musicians, or between the two hemispheres (that is, it was bilaterally distributed).

Anterior-central positivity (270–350 ms)

This component was found for in-key and out-of-key violations in familiar melodies (Fig. 3a), with the effect found in anterior and central (midline) ROIs in both cases. The ANOVA on the difference waves of these contrasts in these ROIs revealed, first, a

main effect of Musical Training (F(1,62)=5.69, p=0.020), reflecting a larger positivity across both violation conditions for musicians than for non-musicians (Fig. 4), and second, a significant interaction between Violation Condition and Musical Training (F(1,62)=8.51, p=0.005). Follow-up analyses showed that, across both violation types, the positivity was significant for musicians (t(62)=5.09, p<0.0001), but only marginally significant for non-musicians (t(62)=1.72, p=0.091). Moreover, musicians showed a significantly greater positivity for out-of-key than in-key violations (F(1,31)=7.75, p=0.009), whereas no significant difference between these contrasts was observed for non-musicians (F(1,31)=1.69, p=0.204).

Anterior negativity (500–700 ms)

This effect was found for all three violation conditions: out-of-key violations in both unfamiliar and familiar melodies, and in-key violations in familiar melodies (Fig. 3). In all cases, the negativity was restricted to anterior ROIs. The ANOVA on the three difference waves in anterior lateral ROIs revealed three significant effects. First, a main effect of Violation Condition (F(2,124)=3.57, p=0.031) was found. This indicated a greater negativity for out-of-key violations in familiar melodies as compared to in-key violations in familiar melodies (F(1,63)=5.62, p=0.021) and out-of-key violations in unfamiliar melodies (F(1,63)=5.07, p=0.028), with no significant difference between the latter two (F(1,63)<0.01, p=0.950). Second, a main effect of Hemisphere (F(1,62)=7.39, p=0.009) reflected an overall greater negativity over the right than left hemisphere, though the effect was significant in both hemispheres (left: t(62)=2.92, p=0.005; right: t(62)=6.08, p<0.0001).



Third, a Hemisphere by Musical Training interaction (F(1,62)=6.71, p=0.012) reflected the finding that musicians showed no significant effect of Hemisphere (F(1,31)=0.01, p=0.926; Fig. 4a), whereas the negativity was right-lateralized in non-musicians (F(1,31)=12.96, p=0.001), who showed a significant negativity on the right (t(31)=5.59, p<0.0001) but not on the left (t(31)=0.81, p=0.424; Fig. 4b).

Late posterior positivity (500–700 ms)

The late positivity was also found for all three violation conditions (Fig. 3). In all cases, the positivity was largest over posterior ROIs. The ANOVA on the difference waves of the three contrasts in lateral posterior ROIs revealed two effects: a main effect of Violation Condition (F(2,124)=39.56, p<0.0001), and a significant interaction between Violation Condition and Musical Training (F(2,124)=4.89, p=0.009). Follow-up ANOVAs were run to compare each of the three pairs of difference waves, with the factors Violation Condition, Musical Training, and their interaction in each ANOVA. These analyses revealed that out-of-key violations in familiar melodies elicited a significantly larger positivity than either in-key violations in familiar melodies (F(1,62)=74.05, p<0.0001) or out-of-key violations in unfamiliar melodies (F(1,62)=75.84, p<0.0001), which themselves did not differ from each other (F(1,62)=0.67, p=0.415). In addition, these follow-up analyses revealed interactions between Violation Condition and Musical Training in the comparisons between out-of-key violations in unfamiliar melodies and both in-key (F(1,62)=6.93, p=0.011) and out-of-key (F(1,62)=8.36, p=0.005) violations in familiar melodies (Fig. 4). This reflected, first of all, the finding that musicians showed a significantly greater positivity for in-key violations in familiar melodies than out-of-key violations in unfamiliar melodies (F(1,31)=7.36, p=0.011), whereas non-musicians showed, if anything, a trend for the opposite effect (F(1,31)=1.38, p=0.249). Moreover, the difference in the effect between out-of-key violations in familiar and unfamiliar melodies was greater for musicians than for non-musicians, although the former showed a significantly greater positivity than the latter both in musicians (F(1,31)=93.63, p<0.0001) and non-musicians (F(1,31)=13.20, p=0.001).

Summary and discussion

In this study, a double dissociation was observed between rule and memory violations in music. Early anterior-central negativities were elicited by out-of-key violations in both unfamiliar and familiar melodies, whereas posterior negativities were elicited by both in-key and out-of-key violations in familiar melodies. The amplitude of the early anterior-central negativity did not differ between the two out-of-key (rule) violation conditions, while the amplitude of the posterior negativity did not differ between the two memory-based violations (i.e., the two types of violations in familiar melodies). These results suggest that the early anterior-central negativity is driven by violations of musical rules, independent of familiarity, that is, independent of whether the melody is memorized or not, whereas the posterior negativity is driven by violations of memorized (familiar) representations, regardless of whether or not the deviant note violates musical rules. Thus the two components appear to be independent of one another.

The early anterior-central negativity is similar in scalp distribution and latency to the early (right) anterior negativity, or E(R)AN, found in previous studies of harmonic rule violations in melodies (Verleger, 1990; Paller et al., 1992) and chord sequences

(Patel et al., 1998; Koelsch et al., 2000, 2002b,c, 2005; Loui et al., 2005; Leino et al., 2007). In these previous studies, the negativities were either right-lateralized (Patel et al., 1998; Koelsch et al., 2000, 2002b,c, 2005) or bilateral (Paller et al., 1992; Loui et al., 2005; Leino et al., 2007). In the present study, the early anterior-central negativity was significantly right-lateralized, but only when analyzed across both out-of-key violation conditions (i.e., in both familiar and unfamiliar melodies). In one study of musical rule violations, musicians showed a larger ERAN than non-musicians (Koelsch et al., 2002b). This difference was not statistically observed here, although visual inspection of the waveforms suggests a trend in this direction (Fig. 4).

Posterior negativities have not previously been reported in ERPstudies of music, even by studies that examined violations infamiliar melodies. In one study of out-of-key violations in well-known melodies (Verleger, 1990), in which subjects were informedabout the violations, a posterior positivity began around 200 msand thus may have masked any posterior negativity. Additionally,since behavioral results were not reported, it is not clear whetherthe subjects were actually familiar with the melodies. Paller et al.(1992) also examined ERP responses to out-of-key violations inwell-known melodies. In this case, 99% were judged as familiar.No posterior negativities were apparent for these deviants, evenwhen the positivities were somewhat diminished by reducingattention to the violations. However, it is possible that thepositivities still masked a posterior negativity. In addition, theabsence of such a negativity could be explained by the 800-msdelay prior to the final (target) note in all melodies during whichsubjects were instructed to imagine the final note. This delay itselfviolated the melody (ERP responses to this delay were notexamined), and may have disrupted the memory-based contextnecessary to elicit the posterior negativity. Finally, Besson andFaïta (1995) also examined melody-final pitch violations in bothfamiliar and unfamiliar melodies. Both out-of-key violations andin-key violations were examined, though the latter appear to haveviolated other musical rules (see discussion in Introduction above).In familiar but not unfamiliar melodies, both out-of-key and in-keyviolations appeared to elicit a central-posterior negativity betweenabout 200–400 ms, immediately preceding the late positivity.However, the negativity only appeared in the waveform figures andwas not mentioned in the text. 
The present study differed from each of these studies: subjects were not informed about the violations and were instructed to attend to other aspects of the stimuli; ERPs were only analyzed for melodies that were familiar; violations occurred in the appropriate temporal context within each melody; and the study included violations of memorized representations that did not violate any musical rules.

342 R.A. Miranda, M.T. Ullman / NeuroImage 38 (2007) 331–345

We suggest that the posterior negativity can be characterized as an N400. Both the scalp distribution and latency seem consistent with N400s, which vary from posterior to central distributions, and in latency from about 250 to 500 ms (Kutas and Hillyard, 1980; Friederici et al., 1999a; Kutas and Federmeier, 2000; Hinojosa et al., 2001). Note that although the negativity was posteriorly distributed compared to typical N400s, it is possible that its distribution extended more centrally, but was hidden by additivity effects from the temporally overlapping anterior-central positivity (Fig. 3). Additionally, some evidence suggests that the posterior negativity extended beyond the 220–380 ms time window, but was largely masked by additivity effects from the subsequent posterior positivity. First, in-key deviants in familiar melodies, which elicited significantly smaller late posterior positivities than out-of-key deviants in familiar melodies, also elicited longer posterior negativity latencies (based on visual inspection; Fig. 3). Second, non-musicians, who showed significantly smaller posterior positivities than musicians to both in-key and out-of-key violations in familiar melodies, also showed longer-latency posterior negativities than musicians (Fig. 4). Perhaps most importantly, in the one case in which the posterior positivity was non-significant (i.e., in-key violations in familiar melodies, in non-musicians; see Fig. 6), the posterior negativity continued to about 500 ms, further strengthening the view that the negativity can indeed be characterized as an N400.

While the N400 is most commonly known for being elicited by semantic violations in language, similar negativities have also been found for line drawings (Holcomb and McPherson, 1994; Federmeier and Kutas, 2001), photos of objects (McPherson and Holcomb, 1999), faces (Bobes et al., 1994), and environmental sounds (Van Petten and Rheinfelder, 1995). Most notable with respect to the present findings is an ERP study by Bobes et al. (1994), in which subjects viewed photos of faces, some of which contained altered features that were detectable only if the faces were familiar. Similar to the in-key deviants in familiar melodies examined in the present study, the altered faces elicited a posteriorly distributed N400 only in subjects who were previously familiar with the faces. Thus, N400s appear to be elicited by violations of familiar (i.e., memorized) stimuli across perceptual/cognitive domains. Note also that because the in-key violations in the present study do not seem to be related to any violation or anomaly of meaning, the apparent N400 observed here suggests that this component may not be restricted to semantics, but rather may extend to the processing of non-semantic information in long-term memory (Ullman, 2004).

Additional studies are clearly necessary to further clarify the nature of the posterior negativity. In particular, the true time window of the negativity could be revealed by additional reductions of potentially additive effects from any subsequent posterior positivity. For instance, having subjects read a book or play a video game during presentation of the melodies may further reduce attention to the note violations, thus diminishing the attention-related positivities. However, it is not yet clear whether such shallow processing of the stimuli would be sufficient to elicit the posterior negativities. Alternatively, future studies could examine whether the posterior negativity may be more strongly elicited by other tasks, such as those with greater involvement of the memorized representations (e.g., making a familiarity judgment after each melody). In addition, one would ideally investigate the posterior negativity using in-key violations in familiar melodies in non-musician subjects, since this condition did not elicit a significant posterior positivity in non-musicians in the present study. Future studies will also be necessary to reveal whether or not the neural generators of the musical posterior negativity overlap, partially or wholly, with those of N400s elicited by lexical-semantic violations in language and other domains (e.g., see Koelsch et al. (2005) for one approach that investigated the relationship between the anterior negativities in language and music).

Fig. 6. Average ERP waveforms for the non-musician group, shown for control notes in familiar melodies (black line) and in-key violations in familiar melodies (blue line). Individual plots represent average waveforms from all electrodes within the left and right posterior regions of interest. In this condition and subject group, there was no significant late posterior positivity, and the posterior negativity extended to approximately 500 ms, supporting the view that this component can be characterized as an N400. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

It is also possible that the posterior negativity may be related to N400s, but was not elicited by violations of the melodies themselves. For example, a majority of the melodies used in the present study (103 out of 120, based on responses from a pilot study with different subjects) are linked to lyrics. Since melodies and their lyrics are strongly associated with one another in memory (Hebert and Peretz, 2001), a note violation in a familiar melody could potentially be interpreted as a prosodic violation of the lyrics associated with that melody. Thus the posterior negativity could in principle be related to N400 or N400-like components that have been elicited by prosodic violations in speech (Bostanov and Kotchoubey, 2004; Mietz et al., 2007), although it should be pointed out that in the current study it is not clear how many subjects knew how many of the lyrics.

Finally, it is clear that evidence from converging methodologies will be important for elucidating the neural generators of the posterior negativity. For instance, neuroimaging studies could reveal whether the brain areas activated in response to in-key violations in familiar melodies are similar to those activated by (lexical-)semantic anomalies in language and other domains (see above), while lesion studies may reveal whether damage to these brain regions results in deficits in detecting these musical violations.

In addition to the early right anterior-central negativity (ERAN) and posterior negativity (apparent N400), three other ERP components were observed. First, an anterior-central positivity was elicited by both types of violations in familiar melodies. The scalp distribution and time window (270–350 ms) of this component are consistent with those of a P3a, which is thought to reflect the automatic orientation of attention to a novel/deviant stimulus (Münte et al., 1998; Koelsch et al., 2000; Tervaniemi et al., 2005). The fact that the positivity was found for familiar but not unfamiliar melodies may be due to increased attentional shift to violations of the highly specific expectancies generated by familiar note sequences. This may also help explain the finding that the positivity was larger in musicians than in non-musicians, and within musicians, was larger for out-of-key than in-key violations. Additionally, previous studies using auditory “oddball” tasks have shown reduced P3a components in non-musicians (Tervaniemi et al., 2005) and in patients with musical processing deficits (Münte et al., 1998).

Second, a later anterior negativity (500–700 ms) was generated by all three violation conditions. This component is consistent with the bilateral or right-lateralized N5/Late Negativity components elicited in musical chord sequences by harmonically incongruous chords (Koelsch et al., 2000, 2002b,c; Loui et al., 2005), tone clusters (Koelsch et al., 2000), and timbre deviants (Koelsch et al., 2000). In the present study the negativity was bilateral in musicians and right-lateralized in non-musicians, consistent with behavioral and neuroimaging studies that have found greater right hemisphere dominance for processing music in non-musicians, as compared to musicians (Bever and Chiarello, 1974; Johnson et al., 1977; Messerli et al., 1995; Evers et al., 1999; Ohnishi et al., 2001). The different patterns of activation across violation conditions between the early anterior-central negativity and the later anterior negativity suggest that these are indeed different components. Whereas the early anterior-central negativity was observed only for out-of-key violations (and did not differ between familiar and unfamiliar melodies), the later anterior negativity was largest for the out-of-key violations in familiar melodies (which constituted both rule and memory violations), and smaller but equivalent to one another for in-key violations in familiar melodies and out-of-key violations in unfamiliar melodies. Thus, the N5/Late Negativity does not appear to reflect solely rule-based or solely memory-based processes, but rather seems to be influenced by additive expectancies generated by both.

Third, a late posterior positivity (500–700 ms) was observed, again for all three violation conditions. Late positivities have also been found in many other studies of musical violations, in both familiar and unfamiliar melodies, as well as in chord sequences (Verleger, 1990; Paller et al., 1992; Besson and Faïta, 1995; Patel et al., 1998; Koelsch et al., 2000, 2002c). Although the present study was designed to diminish the late positivity by reducing attention to in-key and out-of-key violations, all but one subject (a non-musician) reported noticing these violations. Importantly, however, the late posterior positivity was not large enough to mask the N5/anterior negativity that occurred in a similar time window. In fact, the late positivity and anterior negativity showed the same pattern of activation across the violation conditions. This suggests that the differences among the violation conditions were not due to additivity effects between these two components, and that the late posterior positivity also seems to reflect additive rule- and memory-based expectancies. Finally, musicians showed significantly larger late posterior positivities than non-musicians for in-key and out-of-key violations in familiar melodies, but not for out-of-key violations in unfamiliar melodies. These effects may reflect differences between musicians and non-musicians in the amount of attention allocated to violations in the familiar melodies.

Together, the results from the present study suggest that different neural mechanisms are involved in rule-governed and memory-based processing of melodies, and that these mechanisms are independent from one another. The study demonstrates a double dissociation analogous to that observed for grammatical and lexical-conceptual processing in language, and thus suggests an extension of the rule/memory dissociation in language to the domain of music. While the anterior negativities in music and language have previously been linked to both overlapping and partially distinct structures in the frontal cortex, future studies should reveal whether N400s generated by the two domains also reflect activity from the same and/or overlapping neural substrates.

Acknowledgments

We thank Aaron Newman, Petr Janata, John Drury, Matthew Walenski, Matthew Moffa, Jocelyn Curchack, Marco Piñeyro, Heidi Getz, and Brittany Sonnichsen for help on various aspects of this study. Support was provided to RAM from an NSF Graduate Research Fellowship and from Georgetown University, and to MTU from NIH R01 MH58189 and R01 HD049347.

Appendix A. Well-known melody stimuli

Addams Family Theme
Amazing Grace
America
America (from West Side Story)
America the Beautiful
Anchors Aweigh
Andy Griffith Theme
Angels We Have Heard on High
Auld Lang Syne
Baby Bumble Bee
Back to the Future Theme
Battle Hymn of the Republic
B-I-N-G-O
California Girls
Chariots of Fire Theme
Clementine
Coca Cola Theme
Colonel Bogey March
Deck the Halls
Ding Dong! The Witch is Dead
Dixie
Do Your Ears Hang Low
Edelweiss
Eine Kleine Nachtmusik
For He's a Jolly Good Fellow
Frere Jacques
Frosty the Snowman
Go Tell It on the Mountain
God Bless America
Good King Wenceslas
Habanera (from Carmen)
Hail to the Chief
Happy Birthday
Hark! the Herald Angels Sing
Heart and Soul
Here Comes Santa Claus
Holly Jolly Christmas
How Much is that Doggie in the Window
Hush Little Baby
I Feel Pretty (from West Side Story)
I'm a Little Teapot
Imperial March (from Star Wars)
Inspector Gadget Theme
It's a Small World
It's Beginning to Look a Lot Like Christmas
Itsy Bitsy Spider
I've Been Working on the Railroad
Jesus Loves Me
Jingle Bells
John Jacob Jingleheimer Schmidt
Jolly Old Saint Nicholas
Joyful Joyful We Adore Thee
Jurassic Park Theme (End Credits)
Killing Me Softly
Korobochka
Let it Snow
London Bridge is Falling Down
Long Long Ago
Mary Had a Little Lamb
Meet the Flintstones (from The Flintstones)
Minuet in G (from Anna Magdalena's Notebook)
Ninety-nine Bottles of Beer
O Christmas Tree
O Come, All Ye Faithful
O Holy Night
O Little Town of Bethlehem
Oh Susanna
Old MacDonald
Oscar Meyer Theme
Over the River and Through the Woods
Pomp and Circumstance
Pop Goes the Weasel
Puff the Magic Dragon
Raiders March (from Indiana Jones: Raiders of the Lost Ark)
Rock-a-Bye Baby
Rockin' Around the Christmas Tree
Row Row Row Your Boat
Rudolph the Red Nosed Reindeer
Seventy-Six Trombones
Silent Night
Silver Bells
Simple Gifts
Skip to my Lou
Someone's in the Kitchen with Dinah (section of “I've Been Working on the Railroad”)
Spoonful of Sugar (from Mary Poppins)
Star Spangled Banner
Ta-ra-ra Boom-der-e
The Ballad of Gilligan's Isle (from Gilligan's Island)
The Entertainer
The Farmer in the Dell
The First Noel
The Godfather Theme
The Lion Sleeps Tonight
The More We Get Together
The Muffin Man
The Rose
The Song That Never Ends
The Wheels on the Bus
This Land is Your Land
This Old Man
Three Blind Mice
Top Gun Theme
Toreador March (from Carmen)
Turkey in the Straw
Twelve Days of Christmas
Twinkle Twinkle Little Star
United States Army Anthem (The Caissons Go Rolling Along)
United States Marine Hymn
Up on the Housetop
We Wish You a Merry Christmas
Wedding March (from A Midsummer Night's Dream)
What Child is This
When the Saints Go Marching In
When You Wish Upon a Star
Wiegenlied (Cradle Song)
Winnie the Pooh Theme
Yankee Doodle
You Are My Sunshine
You're a Grand Old Flag

References

Besson, M., Faïta, F., 1995. An event-related potential (ERP) study of musical expectancy: comparison of musicians with nonmusicians. J. Exp. Psychol. Hum. Percept. Perform. 21, 1278–1296.

Besson, M., Schön, D., 2001. Comparison between language and music. Ann. N. Y. Acad. Sci. 930, 232–258.

Bever, T.G., Chiarello, R.J., 1974. Cerebral dominance in musicians and nonmusicians. Science 185, 537–539.

Bobes, M.A., Valdes-Sosa, M., Olivares, E., 1994. An ERP study of expectancy violation in face perception. Brain Cogn. 26, 1–22.

Booth, J.R., Burman, D.D., Meyer, J.R., Gitelman, D.R., Parrish, T.B., Mesulam, M.M., 2002. Modality independence of word comprehension. Hum. Brain Mapp. 16, 251–261.

Bostanov, V., Kotchoubey, B., 2004. Recognition of affective prosody: continuous wavelet measures of event-related brain potentials to emotional exclamations. Psychophysiology 41, 259–268.

Chou, T.L., Booth, J.R., Burman, D.D., Bitan, T., Bigio, J.D., Lu, D., Cone, N.E., 2006. Developmental changes in the neural correlates of semantic processing. NeuroImage 29, 1141–1149.

Coulson, S., King, J.W., Kutas, M., 1998. Expect the unexpected: event-related brain response to morphosyntactic violations. Lang. Cogn. Processes 13, 21–58.

Damasio, A.R., 1992. Aphasia. N. Engl. J. Med. 326, 531–539.

De Vincenzi, M., Job, R., Di Matteo, R., Angrilli, A., Penolazzi, B., Ciccarelli, L., Vespignani, F., 2003. Differences in the perception and time course of syntactic and semantic violations. Brain Lang. 85, 280–296.

Evers, S., Dannert, J., Rodding, D., Rotter, G., Ringelstein, E.B., 1999. The cerebral haemodynamics of music perception. A transcranial Doppler sonography study. Brain 122 (Pt 1), 75–85.

Eviatar, Z., Just, M.A., 2006. Brain correlates of discourse processing: an fMRI investigation of irony and conventional metaphor comprehension. Neuropsychologia 44, 2348–2359.

Federmeier, K.D., Kutas, M., 2001. Meaning and modality: influences of context, semantic memory organization, and perceptual predictability on picture processing. J. Exp. Psychol.: Learn. Mem. Cogn. 27, 202–224.

Friederici, A.D., 2002. Towards a neural basis of auditory sentence processing. Trends Cogn. Sci. 6, 78–84.

Friederici, A.D., Hahne, A., Mecklinger, A., 1996. The temporal structure of syntactic parsing: early and late effects elicited by syntactic anomalies. J. Exp. Psychol. Learn. Mem. Cogn. 22, 1219–1248.

Friederici, A.D., Hahne, A., von Cramon, D.Y., 1998. First-pass versus second-pass parsing processes in a Wernicke's and a Broca's aphasic: electrophysiological evidence for a double dissociation. Brain Lang. 62, 311–341.

Friederici, A.D., Steinhauer, K., Frisch, S., 1999a. Lexical integration: sequential effects of syntactic and semantic information. Mem. Cogn. 27, 438–453.

Friederici, A.D., von Cramon, D.Y., Kotz, S.A., 1999b. Language related brain potentials in patients with cortical and subcortical left hemisphere lesions. Brain 122, 1033–1047.

Friederici, A.D., Opitz, B., von Cramon, D.Y., 2000. Segregating semantic and syntactic aspects of processing in the human brain: an fMRI investigation of different word types. Cereb. Cortex 10, 698–705.

Friederici, A.D., Kotz, S.A., Werheid, K., Hein, G., von Cramon, D.Y., 2003a. Syntactic comprehension in Parkinson's disease: investigating early automatic and late integrational processes using event-related brain potentials. Neuropsychology 17, 133–142.

Friederici, A.D., Ruschemeyer, S.A., Hahne, A., Fiebach, C.J., 2003b. The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes. Cereb. Cortex 13, 170–177.

Friederici, A.D., Gunter, T., Hahne, A., Mauth, K., 2004. The relative timing of syntactic and semantic processes in sentence comprehension. NeuroReport 15, 165–169.

Goodglass, H., 1993. Understanding Aphasia. Academic Press, San Diego, CA.

Hagoort, P., Brown, C., 2000. ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia 38, 1531–1549.

Hahne, A., Friederici, A.D., 1997. Two stages in parsing: early automatic and late controlled processes. Exp. Brain Res. 117, 47.


Hebert, S., Peretz, I., 2001. Are text and tune of familiar songs separable by brain damage? Brain Cogn. 46, 169–179.

Hinojosa, J.A., Martin-Loeches, M., Rubia, F.J., 2001. Event-related potentials and semantics: an overview and an integrative proposal. Brain Lang. 78, 128–139.

Holcomb, P.J., McPherson, W.B., 1994. Event-related brain potentials reflect semantic priming in an object decision task. Brain Cogn. 24, 259–276.

Johnson, R.C., Bowers, J.K., Gamble, M., Lyons, F.M., Presbrey, T.W., Vetter, R.R., 1977. Ability to transcribe music and ear superiority for tone sequences. Cortex 13, 295–299.

Kaan, E., Harris, A., Gibson, E., Holcomb, P., 2000. The P600 as an index of syntactic integration difficulty. Lang. Cogn. Processes 15, 159–201.

Koelsch, S., 2005. Neural substrates of processing syntax and semantics in music. Curr. Opin. Neurobiol. 15, 207–212.

Koelsch, S., Gunter, T., Friederici, A.D., Schröger, E., 2000. Brain indices of music processing: “Nonmusicians” are musical. J. Cogn. Neurosci. 12, 520–541.

Koelsch, S., Gunter, T.C., von Cramon, D.Y., Zysset, S., Lohmann, G., Friederici, A.D., 2002a. Bach speaks: a cortical “language-network” serves the processing of music. NeuroImage 17, 956–966.

Koelsch, S., Schmidt, B.H., Kansok, J., 2002b. Effects of musical expertise on the early right anterior negativity: an event-related brain potential study. Psychophysiology 39, 657–663.

Koelsch, S., Schröger, E., Gunter, T.C., 2002c. Music matters: preattentive musicality of the human brain. Psychophysiology 39, 38–48.

Koelsch, S., Gunter, T.C., Wittfoth, M., Sammler, D., 2005. Interaction between syntax processing in language and in music: an ERP study. J. Cogn. Neurosci. 17, 1565–1577.

Kutas, M., Federmeier, K.D., 2000. Electrophysiology reveals semantic memory use in language comprehension. Trends Cogn. Sci. 4, 463–470.

Kutas, M., Hillyard, S.A., 1980. Reading senseless sentences: brain potentials reflect semantic incongruity. Science 207, 203–205.

Laurent, J.-P., Denhières, G., Passerieux, C., Iakimova, G., Hardy-Baylé, M.-C., 2006. On understanding idiomatic language: the salience hypothesis assessed by ERPs. Brain Res. 1068, 151–160.

Leino, S., Brattico, E., Tervaniemi, M., Vuust, P., 2007. Representation of harmony rules in the human brain: further evidence from event-related potentials. Brain Res. 1142, 169–177.

Loui, P., Grent-'t-Jong, T., Torpey, D., Woldorff, M., 2005. Effects of attention on the neural processing of harmonic syntax in Western music. Cogn. Brain Res. 25, 678–687.

Maess, B., Koelsch, S., Gunter, T.C., Friederici, A.D., 2001. Musical syntax is processed in Broca's area: an MEG study. Nat. Neurosci. 4, 540–545.

McPherson, W.B., Holcomb, P.J., 1999. An electrophysiological investigation of semantic priming with pictures of real objects. Psychophysiology 36, 53–65.

Messerli, P., Pegna, A., Sordet, N., 1995. Hemispheric dominance for melody recognition in musicians and non-musicians. Neuropsychologia 33, 395–405.

Mietz, A., Toepel, U., Ischebeck, A., Alter, K., in press. Inadequate and infrequent are not alike: ERPs to deviant prosodic patterns in spoken sentence comprehension. Brain Lang. doi:10.1016/j.bandl.2007.03.005.

Miranda, R.A., Ullman, M.T., under revision. Sex differences in music: a female advantage in the recognition of familiar melodies. Mem. Cogn.

Münte, T.F., Schuppert, M., Johannes, S., Wieringa, B.M., Kohlmetz, C., Altenmuller, E., 1998. Brain potentials in patients with music perception deficits: evidence for an early locus. Neurosci. Lett. 256, 85–88.

Neville, H.J., Nicol, J.L., Barss, A., Forster, K.I., Garrett, M.F., 1991. Syntactically based sentence processing classes: evidence from event-related brain potentials. J. Cogn. Neurosci. 3, 151–165.

Newman, A.J., Pancheva, R., Ozawa, K., Neville, H.J., Ullman, M.T., 2001. An event-related fMRI study of syntactic and semantic violations. J. Psycholinguist. Res. 30, 339–364.

Newman, A.J., Ullman, M.T., Pancheva, R., Waligura, D.L., Neville, H.J., 2007. An ERP study of regular and irregular English past tense inflection. NeuroImage 34, 435–445.

Ohnishi, T., Matsuda, H., Asada, T., Aruga, M., Hirakata, M., Nishikawa, M., Katoh, A., Imabayashi, E., 2001. Functional anatomy of musical perception in musicians. Cereb. Cortex 11, 754–760.

Oldfield, R.C., 1971. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9, 97–113.

Osterhout, L., Holcomb, P.J., 1992. Event-related brain potentials elicited by syntactic anomaly. J. Mem. Lang. 31, 785–806.

Paller, K., McCarthy, G., Wood, C., 1992. Event-related potentials elicited by deviant endings to melodies. Psychophysiology 29, 202–206.

Patel, A.D., 2003. Language, music, syntax and the brain. Nat. Neurosci. 6, 674–681.

Patel, A.D., Gibson, E., Ratner, J., Besson, M., Holcomb, P.J., 1998. Processing syntactic relations in language and music: an event-related potential study. J. Cogn. Neurosci. 10, 717–733.

Pilgrim, L., Fadili, J., Fletcher, P., Tyler, L., 2002. Overcoming confounds of stimulus blocking: an event-related fMRI design of semantic processing. NeuroImage 16, 713.

Plailly, J., Tillmann, B., Royet, J.-P., 2007. The feeling of familiarity of music and odors: the same neural signature? Cereb. Cortex. doi:10.1093/cercor/bhl173.

Platel, H., Price, C., Baron, J.-C., Wise, R., Lambert, J., Frackowiak, R.S.J., Lechevalier, B., Eustache, F., 1997. The structural components of music perception: a functional anatomical study. Brain 120, 229–243.

Platel, H., Baron, J.C., Desgranges, B., Bernard, F., Eustache, F., 2003. Semantic and episodic memory of music are subserved by distinct neural networks. NeuroImage 20, 244–256.

Rapp, A.M., Leube, D.T., Erb, M., Grodd, W., Kircher, T.T., 2004. Neural correlates of metaphor processing. Brain Res. Cogn. Brain Res. 20, 395–402.

Regnault, P., Bigand, E., Besson, M., 2001. Different brain mechanisms mediate sensitivity to sensory consonance and harmonic context: evidence from auditory event-related brain potentials. J. Cogn. Neurosci. 13, 241–255.

Sanders, L.D., Neville, H.J., 2003. An ERP study of continuous speech processing: I. Segmentation, semantics, and syntax in native speakers. Cogn. Brain Res. 15, 228–240.

Satoh, M., Takeda, A., Nagata, K., Shimosegawa, E., Kuzuhara, S., 2006. Positron-emission tomography of brain regions activated by recognition of familiar music. Am. J. Neuroradiol. 27, 1101–1106.

Steinbeis, N., Koelsch, S., Sloboda, J.A., 2006. The role of harmonic expectancy violations in musical emotions: evidence from subjective, physiological, and neural responses. J. Cogn. Neurosci. 18, 1380–1393.

Tervaniemi, M., Just, V., Koelsch, S., Widmann, A., Schröger, E., 2005. Pitch discrimination accuracy in musicians vs nonmusicians: an event-related potential and behavioral study. Exp. Brain Res. 161, 1–10.

Ullman, M.T., 2004. Contributions of memory circuits to language: the declarative/procedural model. Cognition 92, 231–270.

Ullman, M.T., Corkin, S., Coppola, M., Hickok, G., Growdon, J.H., Koroshetz, W.J., Pinker, S., 1997. A neural dissociation within language: evidence that the mental dictionary is part of declarative memory, and that grammatical rules are processed by the procedural system. J. Cogn. Neurosci. 9, 266–276.

Van Petten, C., Rheinfelder, H., 1995. Conceptual relationships between spoken words and environmental sounds: event-related brain potential measures. Neuropsychologia 33, 485–508.

Verleger, R., 1990. P3-evoking wrong notes: unexpected, awaited, or arousing? Int. J. Neurosci. 55, 171–179.

Wood, R., Ullman, M.T., 2005. Processing of familiar melodies: effects of sex and musical training. Cogn. Neurosci. Abstr. C-138, 98.

Zempleni, M.-Z., Haverkort, M., Renken, R., Stowe, L.A., 2007. Evidence for bilateral involvement in idiom comprehension: an fMRI study. NeuroImage 34, 1280–1291.