Scores Level Composition Based on the Guido Music Notation




In this example, shown in figures 4 and 5, it is possible to listen to either the leader or the followers. The y coordinate of each turtle controls the frequency of one sine wave oscillator, while the x coordinate is mapped to stereo panning. It is also possible to change the minimum and maximum frequencies in the osc/dataMapping box and set different frequency ranges for each breed as well.

    Figure 5. Pure data patch for the Pursuit model

In the sonogram of this example, shown in figure 6, the different pursuits and different paths of the leader and followers can be clearly observed. Also, the sonic shape that each oscillator produces can be audibly mapped to the equations governing the behavior of the leader.

    Figure 6. Sonogram excerpt for the Pursuit model

    5. CONCLUSIONS

We have developed OSC-NETLOGO, a NetLogo extension tool that allows one to create very complex sonic phenomena by taking advantage of NetLogo's power for designing and building models of complex systems. The extension is very simple to install and use and gives the possibility of mapping any variable of a NetLogo model into an OSC-enabled audio synthesis engine.

We have provided two examples taken from the available models in NetLogo's library of models. These examples provide evidence for the capabilities and potential of NetLogo as a sound generating and processing tool. The behavior of complex systems is something that is very appealing from a musical standpoint, and we hope that this tool can be of aid in the efforts of creating new complex sounds and interesting musical material.

    6. ACKNOWLEDGEMENTS

This research was funded by Fondecyt Grant #11090193, Conicyt, Government of Chile.


SCORES LEVEL COMPOSITION BASED ON THE GUIDO MUSIC NOTATION

    D. Fober, Y. Orlarey, S. Letz

Grame, Centre national de création musicale, Lyon, France

    [email protected]

    ABSTRACT

Based on the Guido Music Notation format, we have developed tools for music score "composition" (in the etymological sense), i.e. operators that take scores both as target and arguments of high level transformations, applicable for example to the time domain (e.g. cutting the head or the tail of a score) or to the structural domains (e.g. putting scores in sequence or in parallel). Providing these operations at score level is particularly convenient to express music ideas and to compose these ideas in a homogeneous representation space. However, score level composition gives rise to a set of issues related to music notation consistency. This paper introduces the GUIDO Music Notation format, presents the score composition operations, the notation issues and a proposal to solve them.

    1. INTRODUCTION

The GUIDO Music Notation format [GMN] [4] was designed by H. Hoos and K. Hamel more than ten years ago. It is a general purpose formal language for representing score level music in a platform independent, plain text and human readable way. It is based on a conceptually simple but powerful formalism: its design concentrates on general musical concepts (as opposed to graphical characteristics). A key feature of the GUIDO design is adequacy, which means that simple musical concepts are represented in a simple way and only complex notions require complex representations.

Based on the GMN language, the GUIDO Library [2, 3] provides a powerful score layout engine that differs from the compiler solutions for music notation [5, 1] by its ability to be embedded into standalone applications, and by its fast and efficient rendering engine, making the system usable in real-time for simple music scores.

Based on the combination of the GUIDO language and engine, score level composition operators have been designed, providing time or pitch transformations, composition in sequence or in parallel, etc. Developing score level composition operators provides a homogeneous way to write scores and to manipulate them while remaining at a high music description level. Moreover, the design allows scores to be used both as target and as arguments of the operations, enforcing the notation level metaphor.

However, applied at score level, these operations raise a set of issues related to music notation consistency. We propose a simple typology of the music notation elements and a set of rules based on this typology to enforce music notation coherence.

The next section introduces the GUIDO Music Notation format, followed by a presentation of the score composition operations, the related notation problems and the proposed solutions, including a language extension to handle reversibility issues.

    2. THE GUIDO MUSIC NOTATION FORMAT

    2.1. Basic concepts

Basic GUIDO notation covers the representation of notes, rests, accidentals, single and multi-voiced music and the most common concepts from conventional music notation such as clefs, meter, key, slurs, ties, beaming, stem directions, etc. Notes are specified by their name (a b c d e f g h), optional accidentals ('#' and '&' for sharp and flat), an optional octave number and an optional duration. Duration is specified in one of the forms:

    *enum/denom dotting
    *enum dotting
    /denom dotting

where enum and denom are positive integers and dotting is either empty, '.', or '..'. When enum or denom is omitted, it is assumed to be 1. The duration represents a whole note fraction.

When omitted, optional note description parts are assumed to be equal to the previous specification in the current sequence.

Chords are described using comma separated notes enclosed in braces, e.g. {c, e, g}.
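As an illustration of this grammar, the basic note syntax above can be sketched as a small parser. This is a hypothetical helper written for this paper, not part of the GUIDO library; it covers only the basic note forms of section 2.1 (name, accidentals, octave, duration with dotting).

```python
import re
from fractions import Fraction

# Hypothetical parser sketch for basic GMN notes (not part of the GUIDO
# library): name, optional accidentals, optional octave, optional duration.
NOTE_RE = re.compile(
    r"(?P<name>[a-h])"                     # note name
    r"(?P<accidental>[#&]*)"               # optional sharps / flats
    r"(?P<octave>-?\d+)?"                  # optional octave number
    r"(?P<duration>[*/][\d/]*\.{0,2})?"    # optional duration with dotting
)

def parse_duration(text):
    """Return a duration as a whole-note fraction, applying the dots."""
    if not text:
        return None
    dots = len(text) - len(text.rstrip("."))
    text = text.rstrip(".")
    enum, denom = 1, 1
    if text.startswith("*"):
        body = text[1:]
        if "/" in body:
            e, d = body.split("/")
            enum, denom = int(e or 1), int(d or 1)
        elif body:
            enum = int(body)
    else:                                   # "/denom" form
        denom = int(text[1:] or 1)
    # each dot adds half of the previous value: d dots -> base * (2 - 2**-d)
    return Fraction(enum, denom) * (2 - Fraction(1, 2 ** dots))

def parse_note(token):
    m = NOTE_RE.fullmatch(token)
    if m is None:
        raise ValueError("not a basic GMN note: %r" % token)
    return (m.group("name"), m.group("accidental"),
            m.group("octave"), parse_duration(m.group("duration")))
```

For example, parse_note("e&1*3/4.") yields the name e, a flat, octave 1 and a dotted three-quarter duration, i.e. 9/8 of a whole note.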

    2.2. GUIDO tags

Tags are used to represent additional musical information, such as slurs, clefs, keys, etc. A basic tag has one of the forms:

    \tagname
    \tagname<param-list>

where param-list is a list of string or numerical arguments, separated by commas (','). In addition, a tag may have a time range and be applied to a series of notes (e.g. slurs, ties, etc.); the corresponding forms are:


    \tagname(note-series)
    \tagname<param-list>(note-series)

The following GMN code illustrates the concision of the notation; figure 1 represents the corresponding GUIDO engine output.

[ \meter<"4/4"> \key<-2> c d e& f/8 g ]

Figure 1. A simple GMN example

    2.3. Notes sequences and segments

A note sequence is of the form [tagged-notes] where tagged-notes is a series of notes, tags, and tagged ranges separated by spaces. Note sequences represent single-voiced scores. Note segments represent multi-voiced scores; they are denoted by {seq-list} where seq-list is a list of note sequences separated by commas as shown by the example below (figure 2):

{ [ e g f ], [ a e a ] }

Figure 2. A multi-voices example

2.4. Advanced GUIDO

The advanced GUIDO specification extends basic GUIDO with more tags and more tag parameters, giving more control over the score layout. For example, it introduces tag parameters like dx and dy for fine positioning of the score elements, notes and rests format specifications, staff assignments, etc.

    3. COMPOSING MUSIC SCORES

Since GUIDO is a concise textual format, it seems natural to use operations commonly applied to text, like cut, copy and paste, text concatenation, etc. Thus the first idea with the score level operations was based on textual manipulation, extended to music specific operations.

    3.1. Operations

Score level operations are given by table 1. These operations are available as library API calls, as command line tools, or using a graphic environment named GUIDOCalculus. Almost all of the operations take a GMN score and a value parameter as input and produce a GMN score as output. The value parameter can be taken from another GMN score: for example, the top operation cuts the bottom voices of a score after a given voice number; when using a score as parameter, the voice number is taken from the score's voice count.

All the operations concentrate on the transformed dimension (pitch, time), without modifying user defined elements or trying to interfere with the automatic layout of the GUIDO Engine (which may add notation elements like clefs or barlines). For example, the duration operation recomputes the note lengths but doesn't affect the time signature or the barlines. When two scores are put in parallel, the system preserves each voice's time and key signatures, even when they don't match. The transposition operation is the only exception: it adds or modifies the key signature and selects the simplest enharmonic diatonic transposition.

The design allows all the operations to take place consistently at the notation level. Using the command line tools, series of transformations can be expressed as pipelining scores through operators, e.g.

    head s1 s2 | par s2 | transpose "[ c ]"

    3.2. Notation issues

Actually, the score level composition functions operate on a memory representation of the music notation, but we'll illustrate the notation issues with the textual representation, which is equivalent to the memory representation.

Let's take an example with the tail operation applied to the following simple score:

[\clef c d e c]

A raw cut of the score after 2 notes would give [e c], removing the clef information and potentially leading to unexpected results (figure 3).

Figure 3. Tail operation consistency

Here is another example with the seq operation: a raw sequence of [\clef c d] and [\clef e c] would give

[\clef c d \clef e c]

where the clef repetition (figure 4) is useless and blurs the reading.

Figure 4. A raw sequence operation

Some operations may also result in syntactically incorrect results. Consider the following code:

    [g \slur(f e) c]

operation    args             description
seq          s1 s2            puts the scores s1 and s2 in sequence
par          s1 s2            puts the scores s1 and s2 in parallel
rpar         s1 s2            puts the scores s1 and s2 in parallel, right aligned
top          s1 [n | s2]      takes the n top voices of s1; when using a score s2 as
                              parameter, n is taken from s2 voices count
bottom       s1 [n | s2]      takes the bottom voices of s1 after the voice n; when
                              using a score s2 as parameter, n is taken from s2
                              voices count
head         s1 [d | s2]      takes the head of s1 up to the date d; when using a
                              score s2 as parameter, d is taken from s2 duration
evhead       s1 [n | s2]      id. but on an events basis, i.e. the cut point is
                              specified as n events; when using a score s2 as
                              parameter, n is taken from s2 events count
tail         s1 [d | s2]      takes the tail of a score after the date d; when using
                              a score s2 as parameter, d is taken from s2 duration
evtail       s1 [n | s2]      id. but on an events basis, i.e. the cut point is
                              specified as n events; when using a score s2 as
                              parameter, n is taken from s2 events count
transpose    s1 [i | s2]      transposes s1 by an interval i; when using a score s2
                              as parameter, i is computed as the difference between
                              the first voice, first notes of s1 and s2
duration     s1 [d | r | s2]  stretches s1 to a duration d or using a ratio r; when
                              using a score s2 as parameter, d is computed from s2
                              duration
applypitch   s1 s2            applies the pitches of s1 to s2 in a loop
applyrythm   s1 s2            applies the rhythm of s1 to s2 in a loop

Table 1. Score level operations

Slicing the score in 2 parts after f would result in

a) [g \slur(f]    and    b) [e) c]

i.e. with uncompleted range tags. We'll use the term opened-end tags to refer to the a) form and opened-begin tags for the b) form.

These simple examples illustrate the problem, and there are many more cases where music notation consistency has to be preserved across score level operations.

    4. MUSIC NOTATION CONSISTENCY

In order to solve the notation issues, we propose a simple typology of the notation elements regarding their time extent and a set of rules defining adequate consistency policies according to the operations and the element types.

    4.1. Notation elements time extent

The GMN format makes a distinction between position tags (e.g. \clef, \meter) and range tags (e.g. \slur, \beam). Position tags are simple notation marks at a given time position while range tags have an explicit time extent: the duration of the enclosed notes. However, this distinction is not sufficient to cover the time status of the elements: many of the position tags have an implicit time duration and generally last up to the next similar notation or to the end of the score. For example, a dynamic lasts to the next dynamic or to the end of the score.

Table 2 presents a simple typology of the music notation elements, mainly grounded on their time extent. Based on this typology, provisions have to be made when:

computing the beginning of a score:
1) the pending explicit time extent elements must be properly opened (i.e. opened-begin tags, see section 3.2),
2) the current implicit time extent elements must be recalled;

computing the end of a score:
3) the explicit time extent elements must be properly closed (i.e. opened-end tags);

putting scores in sequence:
4) implicit time extent elements starting the second score must be skipped when they correspond to current existing elements.
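Rules 1) and 2) can be sketched on a simplified token encoding of a single voice; this encoding is an assumption made for illustration, not the GUIDO library's internal model.

```python
# Sketch of rules 1) and 2) on a simplified single-voice token list
# (hypothetical encoding, not the GUIDO library's internal model).
# Tokens: ("note", name) for notes, ("tag", kind) for implicit time extent
# elements (clef, meter, dynamics...), and ("open", kind) / ("close", kind)
# for explicit time extent (range) tags such as slurs.

def tail_tokens(tokens, n_notes):
    """Cut a voice after n_notes notes, recalling the current implicit
    elements (rule 2) and reopening pending range tags (rule 1)."""
    implicit = {}      # kind -> last implicit tag seen before the cut
    pending = []       # range tags still open at the cut point
    seen = 0
    cut = len(tokens)
    for i, tok in enumerate(tokens):
        if seen == n_notes:
            cut = i
            break
        if tok[0] == "note":
            seen += 1
        elif tok[0] == "tag":
            implicit[tok[1]] = tok
        elif tok[0] == "open":
            pending.append(tok[1])
        elif tok[0] == "close":
            pending.remove(tok[1])
    prefix = list(implicit.values())              # rule 2: recall
    prefix += [("open", k) for k in pending]      # rule 1: reopen
    return prefix + tokens[cut:]
```

Applied to the [\clef c d e c] example of section 3.2, cutting after two notes now yields a tail that still carries the clef.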

    4.2. Structure control issues

Elements in the others / structure control time extent category may also give rise to inconsistent notation: a repeat begin bar without repeat end, a dal segno without segno, a da capo al fine without fine, etc. We introduce new rules to catch the repeat bar issue. Let's first define a pending repeat end as the case of a voice with a repeat begin tag without matching repeat end.

5) when computing the end of a score, every pending repeat end must be closed with a repeat end tag.

6) from successive unmatched repeat begin tags, only the first one must be retained.


  • _386 _387

time extent   description                                        sample
explicit      duration is explicit from the notation             slurs, cresc.
implicit      element lasts to the next similar element          meter, dynamics, key
              or to the end of the score
others        structure control                                  coda, da capo, repeats
              formatting instructions                            new line, new page
              misc. notations                                    breath mark, bar

Table 2. Typology of notation elements.

7) from successive repeat end tags, only the last one must be retained.

No additional provision is made for the other structure control elements: possible inconsistencies are ignored, but this choice preserves the operations' reversibility.

    4.3. Operations reversibility

The above rules solve most of the notation issues but they do not permit the operations to be reversed: consider a score including a slur, sliced in the middle of the slur and restored by putting the parts back in sequence. The result will include two slurs (figure 5) due to rules 1) and 3), which enforce opening opened-begin tags and closing opened-end tags.

Figure 5. A score sliced and put back in sequence

To solve the problem, we need the support of the GMN language, and we introduce a new tag parameter intended to keep the history of range tags and to denote opened-end and/or opened-begin ancestors. The parameter has the form:

open="type"

where type is in [begin, end, begin-end], corresponding to opened-begin, opened-end, and opened-begin-end ancestors.

Next, we introduce a new rule for score level operations. Let's first define adjacent tags as tags placed on the same voice that are not separated by any note or chord. Note that range tags are viewed as containers and thus, notes in the range do not separate tags.

8) adjacent similar tags carrying an open parameter are mutually cancelled when the first one is opened-end and the second one opened-begin.

For example, the application of this rule to the following score:

[ \anytag<open="end">(f g) \anytag<open="begin">(f e) ]

will give the score below:

[ \anytag(f g f e) ]

Note that Advanced GUIDO allows range tags to be expressed using a Begin and End format (e.g. \slurBegin, \slurEnd instead of \slur(range)). This format is handled similarly to regular range tags and the open parameter is also implemented for Begin/End tags.
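The cancellation rule can be sketched as follows. The tuple encoding of tags is hypothetical, and only the plain opened-end followed by opened-begin case of rule 8) is handled; the begin-end ancestor case is left out for brevity.

```python
# Sketch of rule 8) (illustration only): an adjacent pair of similar range
# tags cancels when the first is opened-end and the second opened-begin.
# A range tag is modeled as (kind, opened, notes), with opened one of
# None, "begin", "end".

def merge_adjacent(tags):
    """Merge adjacent similar tags according to rule 8."""
    out = []
    for kind, opened, notes in tags:
        if (out and out[-1][0] == kind
                and out[-1][1] == "end" and opened == "begin"):
            # splice the two ranges into a single plain tag
            out[-1] = (kind, None, out[-1][2] + notes)
        else:
            out.append((kind, opened, notes))
    return out
```

Applied to the sliced-slur example of figure 5, a slur marked open="end" over (f g) followed by a slur marked open="begin" over (f e) merges back into a single slur over (f g f e).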

    5. CONCLUSION

Music notation is complex due to the large number of notation elements and to the heterogeneous status of these elements. The typology proposed in table 2 is actually a simplification intended to cover the needs of score level operations and is not representative of this complexity. However, it reflects the music notation semantics and could be reused with other score level music representation languages. Thus, apart from the reversibility rule that requires the support of the music representation language, all the other rules are independent from the GMN format and applicable in other contexts.

Score level operations could be very useful in the context of batch processing (e.g. voice separation from a conductor's score, excerpt extraction, etc.). The operations presented in table 1 support this kind of processing but they also open the door to a new approach to the music creative process.

    6. REFERENCES

    [1] A. E. Daniel Taupin, Ross Mitchell. Musixtex usingtex to write polyphonic or instrumental music.[Online]. Available: http://icking-music-archive.org/

    [2] C. Daudin, D. Fober, S. Letz, and Y. Orlarey, TheGuido Engine - a toolbox for music scores rendering.in Proceedings of the Linux Audio Conference 2009,2009, pp. 105111.

    [3] D. Fober, S. Letz, and Y. Orlarey, Open source toolsfor music representation and notation. in Proceed-ings of the first Sound and Music Computing confer-ence - SMC04. IRCAM, 2004, pp. 9195.

    [4] H. Hoos, K. Hamel, K. Renz, and J. Kilian, "The GUIDO Music Notation Format - a Novel Approach for Adequately Representing Score-level Music," in Proceedings of the International Computer Music Conference. ICMA, 1998, pp. 451-454.

    [5] H.-W. Nienhuys and J. Nieuwenhuizen, "LilyPond, a system for automated music engraving," in Proceedings of the XIV Colloquium on Musical Informatics (XIV CIM 2003), May 2003.

    ENGRAVINGHAMMERINGCASTING: EXPLORING THE SONIC-ERGOTIC MEDIUM FOR LIVE MUSICAL PERFORMANCE

    Edgar Berdahl

    Audio Communication Group
    TU Berlin, Germany

    Alexandros Kontogeorgakopoulos

    Cardiff School of Art & Design
    Cardiff, United Kingdom

    ABSTRACT

    EngravingHammeringCasting is a live music composition written for two performers, who interact with force-feedback haptic interfaces. This paper describes the philosophy and development of the composition. A virtual physical model of vibrating resonators is designed and employed to generate both the sound and the haptic force feedback. Because the overall system, which includes the physical model and the operators coupled to it, is approximately energy conserving, the model simulates what is known as ergotic interaction.

    It is believed that the presented music composition is the first live composition in which performers interact with an acoustic physical model that concurrently generates sound and ergotic haptic force feedback. The composition consists of three sections, each of which is motivated by a particular kind of craft process involving manipulation of a tool by hand.

    1. BACKGROUND

    Physical modeling has been employed for decades to synthesize sound [5, 16, 15]. In real-time applications, the approach is typically to compute difference equations that model the equations of motion of virtual acoustic musical instruments [9]. However, besides merely imitating pre-existing musical instruments, new virtual instruments can be designed with a computer by simulating the acoustics of hypothetical situations [6], creating a metaphorisation of real instruments. Sounds generated using physical models tend to be physically plausible, enhancing the listener's percept due to familiarity [7, 14].
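    As a minimal illustration of the kind of difference equation meant here, the sketch below simulates one damped mass-spring resonator with an explicit finite-difference scheme. It is not the paper's model; the parameter values and the simple discretization are assumptions chosen only to show the real-time update structure.

    ```python
    # Hedged sketch: a single damped resonator, m*x'' = -k*x - r*x',
    # discretized sample by sample as a difference equation.
    # Parameters are illustrative, not taken from the paper.

    def resonator(n_samples, m=0.001, k=1000.0, r=0.02, fs=44100.0, x0=1e-3):
        """Return n_samples of displacement for a plucked resonator."""
        dt = 1.0 / fs
        x_prev, x = x0, x0          # initially displaced, at rest
        out = []
        for _ in range(n_samples):
            # central difference for x'', backward difference for x'
            x_next = (2.0 * x - x_prev
                      - dt * dt * (k / m) * x
                      - dt * (r / m) * (x - x_prev))
            out.append(x_next)
            x_prev, x = x, x_next
        return out

    samples = resonator(1000)       # ~159 Hz decaying sinusoid
    ```

    One such update per audio sample is cheap enough to run in real time, which is what makes coupling the same model to both sound output and force feedback feasible.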

    Besides synthesizing sound, a physical model can also be employed concurrently for synthesizing visual feedback and haptic force feedback. When these feedback modalities are provided concurrently to a human, the sensory percepts can fuse in the brain and provide a distinctive sense of immersion. The ACROE-ICA laboratory has a long history of working in this area [10], and they have developed extraordinarily high quality hardware for synthesizing haptic force feedback for musical applications [13]. They have also introduced key terminology into the discourse, as outlined in the book Enaction and Enactive Interfaces: A Handbook of Terms [12].

    In this paper, the term ergotic interaction will be used. A human interacts ergotically with a system when the human exchanges significant mechanical energy with it and the energy exchange is necessary to perform a task [12]. For example, employing a tool to deform an object or move it is ergotic. Bowing a string or playing a drum is also ergotic. There is a mechanical feedback loop between the human and the environment: the human exerts a force on the environment, and the environment exerts a force on the human. In ergotic interaction, the user not only informs and transforms the world, but the world also informs and transforms the user [12].
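    The mechanical feedback loop described above can be sketched numerically: a hand position drives a virtual mass through a coupling spring, the same spring force acts in opposite directions on hand and mass, and integrating force times velocity measures the mechanical energy exchanged. Everything in this sketch — the coupling stiffness, the hand trajectory, the simple integrator — is a hypothetical illustration, not the paper's system.

    ```python
    # Illustrative sketch of an ergotic loop: force flows both ways
    # through a coupling spring, and mechanical work is accumulated.
    import math

    def ergotic_loop(n, fs=1000.0, k_couple=50.0, m=0.1):
        dt = 1.0 / fs
        x, v = 0.0, 0.0          # virtual mass state
        work_on_mass = 0.0       # energy the hand puts into the model
        for i in range(n):
            hand = 0.01 * math.sin(2 * math.pi * 1.0 * i * dt)  # slow gesture
            f = k_couple * (hand - x)   # force on the mass; -f acts on the hand
            v += (f / m) * dt           # semi-implicit Euler update
            x += v * dt
            work_on_mass += f * v * dt  # integrate instantaneous power
        return x, work_on_mass

    final_x, work = ergotic_loop(2000)
    ```

    The sign of the accumulated work can change over a gesture: when it is positive the human is doing work on the model, and when negative the model is pushing energy back into the hand, which is exactly the two-way exchange that distinguishes ergotic interaction from a one-way control signal.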

    As far as the authors know, there has never been a portable musical act that explored the musical applications of simulated ergotic interaction in live performance. This paper describes the development of a new composition in this area.

    2. HUMANS USING TOOLS

    The authors are inspired not only by the way people interact with traditional acoustic musical instruments, but also by the way people interact skillfully with tools in general. Indeed, seasoned craftspeople leverage thousands of hours of experience in operating tools. They can almost imagine that a favored tool is an extension of their body, allowing them to focus more on the result than on the tool itself [8]. They use the tool efficiently to preserve energy, while often making graceful gestures to achieve an aesthetically pleasing result.

    Interaction with tools for craft was emphasized at the Victoria and Albert Museum in London. The Power of Making exhibition presented over 100 crafted objects and provided a glossary outlining processes used to make the objects [18]. The following processes were particularly inspiring: carving, casting, cutting, drawing, forging, glassblowing, grinding, hammering, incising, milling, molding, painting, polishing, striking, tapping, welding, wood turning. These words provided a strong concept and dictated the form and the sonic qualities of the composition.

    3. PORTABLE, DURABLE, AND AFFORDABLE HARDWARE

    Prior research has focused on accessible haptic hardware for musicians [3]. In contrast with precise yet expensive and fragile devices designed for simulating surgery,