


ENGRAVING–HAMMERING–CASTING: EXPLORING THE SONIC-ERGOTIC MEDIUM FOR LIVE MUSICAL PERFORMANCE

    Edgar Berdahl

Audio Communication Group, TU Berlin, Germany

    Alexandros Kontogeorgakopoulos

Cardiff School of Art & Design, Cardiff, United Kingdom

    ABSTRACT

Engraving–Hammering–Casting is a live music composition written for two performers, who interact with force-feedback haptic interfaces. This paper describes the philosophy and development of the composition. A virtual physical model of vibrating resonators is designed and employed to generate both the sound and the haptic force feedback. Because the overall system, which includes the physical model and the operators coupled to it, is approximately energy conserving, the model simulates what is known as ergotic interaction.

It is believed that the presented music composition is the first live composition in which performers interact with an acoustic physical model that concurrently generates sound and ergotic haptic force feedback. The composition consists of three sections, each of which is motivated by a particular kind of craft process involving manipulation of a tool by hand.

    1. BACKGROUND

Physical modeling has been employed for decades to synthesize sound [5, 16, 15]. In real-time applications, the approach is typically to compute difference equations that model the equations of motion of virtual acoustic musical instruments [9]. However, besides merely imitating pre-existing musical instruments, new virtual instruments can be designed with a computer by simulating the acoustics of hypothetical situations [6], creating a “metaphorisation of real instruments.” Sounds generated using physical models tend to be physically plausible, enhancing the listener's percept due to familiarity [7, 14].
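To make the difference-equation approach concrete, the following sketch discretizes a single damped resonance into a second-order recursion and excites it with a unit impulse. This is a minimal illustration of the technique, not the paper's model; the sample rate, frequency, and decay values are arbitrary choices.

```python
import math

fs = 44100.0   # sample rate (Hz), illustrative
f0 = 440.0     # resonance frequency (Hz), illustrative
tau = 0.5      # amplitude decay time constant (s), illustrative

# Second-order digital resonator: y[n] = x[n] + a1*y[n-1] + a2*y[n-2]
r = math.exp(-1.0 / (tau * fs))        # per-sample decay factor
w0 = 2.0 * math.pi * f0 / fs           # resonance frequency (rad/sample)
a1 = 2.0 * r * math.cos(w0)
a2 = -r * r

y1, y2 = 0.0, 0.0
out = []
x = 1.0  # unit impulse excitation (a "strike" at n = 0)
for n in range(1000):
    y = x + a1 * y1 + a2 * y2
    x = 0.0
    y2, y1 = y1, y
    out.append(y)
# `out` now holds a decaying sinusoid at roughly 440 Hz.
```

In a real-time implementation the same recursion would run sample by sample inside the audio callback, with the excitation supplied by the performer's gesture rather than a fixed impulse.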

Besides synthesizing sound, a physical model can also be employed concurrently for synthesizing visual feedback and haptic force feedback. When these feedback modalities are provided concurrently to a human, the sensory percepts can fuse in the brain of a human and provide a distinctive sense of immersion. The ACROE-ICA laboratory has a long history of working in this area [10], and they have developed extraordinarily high quality hardware for synthesizing haptic force feedback for musical applications [13]. They have also introduced key terminology into the discourse, as outlined in the book “Enaction and Enactive Interfaces: A Handbook of Terms” [12].

In this paper, the term ergotic interaction will be used. A human interacts ergotically with a system when the human exchanges significant mechanical energy with it and the energy exchange is necessary to perform a task [12]. For example, employing a tool to deform an object or move it is ergotic. Bowing a string or playing a drum is also ergotic. There is a mechanical feedback loop between the human and the environment: the human exerts a force on the environment, and the environment exerts a force on the human. In ergotic interaction, the user not only informs and transforms the world, but the world also informs and transforms the user [12].

As far as the authors know, there has never been a portable musical act that explored the musical applications of simulated ergotic interaction in live performance. This paper describes the development of a new composition in this area.

    2. HUMANS USING TOOLS

The authors are inspired not only by the way people interact with traditional acoustic musical instruments, but also by the way people interact skillfully with tools in general. Indeed, seasoned craftspeople leverage thousands of hours of experience in operating tools. They can almost imagine that a favored tool is an extension of their body, allowing them to focus more on the result than on the tool itself [8]. They use the tool efficiently to conserve energy, while often making graceful gestures to achieve an aesthetically pleasing result.

Interaction with tools for craft was emphasized at the Victoria and Albert Museum in London. The “Power of Making” exhibition presented over 100 crafted objects and provided a glossary outlining processes used to make the objects [18]. The following processes were particularly inspiring: “carving, casting, cutting, drawing, forging, glassblowing, grinding, hammering, incising, milling, molding, painting, polishing, striking, tapping, welding, wood turning.” These words provided a strong concept and dictated the form and the sonic qualities of the composition.

3. PORTABLE, DURABLE, AND AFFORDABLE HARDWARE

Prior research has focused on accessible haptic hardware for musicians [3]. In contrast with precise yet expensive and fragile devices designed for simulating surgery,


such as those manufactured by Sensable,¹ it was essential to use devices that are more affordable to musicians and more durable. For this reason, the authors have recently been using the NovInt Falcon device, which is a commercial gaming device with a USB interface.

Figure 1 shows a human hand gripping the Falcon device. It does not look as artistic as we would prefer, but it satisfies our requirements for now, and it operates in three dimensions. It measures position in the XYZ Cartesian coordinate space, and it can exert a force in the Cartesian coordinate space. Furthermore, an open-source driver is available for the NovInt Falcon for Mac OS, Linux, and Windows, and this driver has been compiled into both Max/MSP and Pure Data (pd) objects, making it easier to access the device for computer music applications [1, 2].
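Conceptually, driving such a device amounts to a tight servo loop that reads the handle position and writes back a force computed from the virtual model. The sketch below illustrates the idea with a one-sided “virtual wall”; the `FalconLike` class and its method names are hypothetical stand-ins, not the API of the actual open-source driver.

```python
K_WALL = 300.0  # virtual wall stiffness (N/m), illustrative value

class FalconLike:
    """Minimal stand-in for a 3-DOF haptic device (hypothetical API)."""
    def __init__(self, trajectory):
        self.trajectory = trajectory  # pre-recorded XYZ positions (m)
        self.forces = []              # forces "sent" to the motors

    def read_position(self, n):
        return self.trajectory[n]

    def write_force(self, f):
        self.forces.append(f)

def wall_force(pos, k=K_WALL):
    """Push back along z when the handle penetrates the wall at z = 0."""
    x, y, z = pos
    fz = -k * z if z > 0.0 else 0.0   # one-sided (contact-only) spring
    return (0.0, 0.0, fz)

# Drive the loop with a trajectory that crosses into the wall.
dev = FalconLike([(0.0, 0.0, -0.01), (0.0, 0.0, 0.005), (0.0, 0.0, 0.02)])
for n in range(3):
    dev.write_force(wall_force(dev.read_position(n)))
```

In practice this loop runs at the device's servo rate, and the force law is replaced by the physical model described in the next section.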

    Figure 1. NovInt Falcon haptic force-feedback device.

    4. MODEL

The authors of this paper designed and implemented a reconfigurable model that allows the performer to experiment with “sonic-ergotic” sounds that could correspond to the crafting processes listed in Section 2. The shape is simple so that we could choose to expand upon it someday in future compositions. In the model, the musician reaches inside a virtual shape and can interact with the sides (see Figure 2). The square has been used because it is the only regular polygon with angles of 90°, allowing the hand to quickly move around striking all sides without getting stuck in any corners, while leaving open the possibility to bounce back and forth within one corner at will. Since the model is two-dimensional, the performer is allowed to move freely within the third dimension.

Each of the four sides is modeled as a rigid side moving in and out according to a lumped model. The lumped model is reconfigurable, and the ergotic interaction is simulated using a form of the Cordis-Anima equations for simplicity [11].

In the authors' opinion, the simplest musical model is that of a single mechanical resonator, which vibrates at only a single frequency when vibrating freely.

¹ http://www.sensable.com

Figure 2. Hand reaching inside of a square to interact with it.

It is enjoyable to interact with simple models such as this one, particularly while making early explorations of the sonic-ergotic medium; however, it was eventually decided to add additional resonances to each side in order to enable a wider range of sounds. Thus, each side's lumped model corresponds to the mechanical equivalent diagram in Figure 3, in which the blue arrow emphasizes the fact that, at least for the purpose of modeling the sound and ergotic interaction, the movement is assumed to be orthogonal to the surface.

For example, the ith resonance is modeled by the mass mi, which is connected to mechanical ground by a spring ki and damper Ri in parallel. The performer interacts with the ith resonator through a similar parallel link combination of spring k and damper R, with the exception that k and R only engage when the position of the haptic force-feedback device is beyond the position of mi. In other words, k and R allow the performer to push into the mass, but only when the performer is touching the mass. The mass mi does not stick to the performer. This contact spring-damper link (k, R) element is referred to as the BUT element in the Cordis-Anima formalism [11]. There is a separate link (k, R) for each resonator so that the tuning of each resonator is independent of the other resonators. It is remarkable that so many diverse sounds can be obtained with this basic model, simply by employing different physical gestures and by adjusting the parameters of the model.
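A minimal sketch of such a conditional contact link follows the description above. The function name and sign convention are our own (a positive return value pushes the device back out of the mass), and the zero clamp on release is an implementation assumption, not a detail specified in the text.

```python
def but_link_force(x_dev, v_dev, x_mass, v_mass, k, R):
    """Conditional spring-damper contact, in the spirit of the
    Cordis-Anima BUT element: (k, R) engage only while the device
    has pushed past the mass position; otherwise no force (no sticking)."""
    penetration = x_dev - x_mass
    if penetration <= 0.0:
        return 0.0                                # not in contact
    f = k * penetration + R * (v_dev - v_mass)    # spring + damper
    # Clamping at zero keeps the damper from pulling the device inward
    # on release; this detail is an assumption, not from the paper.
    return max(f, 0.0)

# While touching, the force grows with penetration; out of contact it is zero.
in_contact = but_link_force(0.002, 0.0, 0.0, 0.0, 1000.0, 5.0)   # ~2 N
no_contact = but_link_force(-0.002, 0.0, 0.0, 0.0, 1000.0, 5.0)  # 0 N
```

One such function would be evaluated per resonator at every sample, so that each (k, R) link can be tuned independently as the text describes.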

    5. COMPOSITION

Ergotic interaction is an integral part of our compositional medium. We do not merely synthesize or create the sound; instead, we transform and deform it, while it transforms and deforms us physically and mentally. Ergotic interaction inextricably links the gesture of the musician with the sound. We believe that the audience can comprehend this linkage and appreciate it, as we explore the new possibilities of artistic expression enabled by the sonic-ergotic medium.

    5.1. Structure

The composition consists of three sections. In the first section, the performers interact with model parameterizations designed to evoke perceptions of engraving. With

Figure 3. Mechanical equivalent diagram for hand/haptic device touching five independent resonators on the right-hand side.

high resonance frequencies and low masses for the resonators, the sound is delicate and responds intimately to the small, precise movements made by the performers.

In the second section, the resonators are re-tuned to sound more like pieces of metal or bells. The performers make hammering gestures to play melody-like passages.

Finally, in the third section, the k and R parameters of the contact links are varied rhythmically in time. Through this modulation, the virtual instrument seems to gain the ability to exert forces on the performer. It asserts a rhythmic form on the gestures of the performers, as if it were casting the performers' gestures into a specific form.

    5.2. Score

The score for the composition consists of six staves, which are notated in a special manner but also contain traditional marks from Western music notation such as rests, dynamics, etc. The first staff describes which sides the first performer should play and at what time (see Figure 4, top). The “f” note indicates the right side, the “a” note indicates the left side, “c” indicates the bottom side, and “e” indicates the top side.

Consider the engraving section, for which k is small and R is big, resulting in a kind of frictional interaction. Arrows on the score indicate bowing-like gestures to be performed. For example, subject to this interaction, the hypothetical top staff in Figure 4 would specify that the performer should first play a rest for four beats, and then for five beats the performer should slowly push down into the bottom “c” side. Next, the performer should push to the left into the left “a” bar, at a position low enough (see Figure 2) that both the bottom and the left sides will create sound. Similarly, the second staff (see Figure 4, bottom) would indicate that only in the third measure, the second performer should play by gradually pushing into his or her bottom side.

Figure 4. Top two staves indicating to the two performers when to play which sides.

The stiffness (k) and damping (R) interaction parameters are prespecified by the score and are not under the control of the performers. The lower four staves of the score specify how the k and R interaction parameters vary during the composition. In the excerpt from the engraving section shown in Figure 5 (left), the interaction stiffness remains low for both performers, while the interaction damping gradually increases over five bars for both performers. Figure 5 (right) shows another example, in which the damping remains generally low for both performers. The stiffness for performer one varies periodically to emulate engraving, and after three bars, the stiffness for performer two also begins to vary, to emulate casting. Through the variation of the interaction parameters, the haptic force-feedback device asserts its influence over the performers, in a sense casting their gestures into a form that suits the model's programming.

    6. CONCLUSIONS

The form of the composition is shaped by the affordances of the force-feedback device. The NovInt Falcon is designed for simulating interaction with virtual tools, and the composition explores interaction with tools within part of the sonic-ergotic medium. The authors also explore the limitations of the force-feedback device. Because there is a delay in the feedback control loop of the device, it will become unstable for sufficiently large k and R. In this case, the device will tend to chatter when coming in contact with the virtual resonators, which produces a sound characteristic of the haptic drum [4]. The chattering interaction is not ergotic, but it is nevertheless interesting because it could not normally occur without the external energy source of the force-feedback device's motors. Indeed, in contrast with other human-input devices, haptic force-feedback devices allow for the possibility of the device asserting partial control of a performer [2, 17]. In the context of the current composition, the devices only behave assertively for short time frames, in order to augment and accentuate the gestures of the performers, as left up to the volition of the performers.
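The energy injection caused by loop delay can be demonstrated with a toy simulation: a point mass bounces off a stiff one-sided spring wall, once with the contact force computed from the current position and once from the one-sample-old position. With delay, the mass leaves the wall faster than it arrived, i.e. the interaction is no longer passive. All parameter values are illustrative and the units are arbitrary; this is a sketch of the mechanism, not the Falcon's actual control loop.

```python
def bounce_exit_speed(delayed, k=1.0, m=1.0, dt=1.0, steps=6):
    """Symplectic-Euler point mass bouncing off a one-sided spring
    wall occupying x < 0. If `delayed`, the wall force is computed from
    the previous sample's position, mimicking the delay in a haptic
    feedback control loop."""
    x, v = 0.5, -1.0      # start outside the wall, moving toward it
    prev_x = x            # stale position seen by the delayed path
    for _ in range(steps):
        probe = prev_x if delayed else x
        f = -k * probe if probe < 0.0 else 0.0   # wall pushes back out
        prev_x = x
        v += f / m * dt   # velocity update (force held for one step)
        x += v * dt       # position update
    return v

v_ideal = bounce_exit_speed(delayed=False)   # leaves at the arrival speed
v_delayed = bounce_exit_speed(delayed=True)  # leaves faster: energy added
```

The delayed case gains kinetic energy on every contact, which is the seed of the chattering behavior described above: the motors, not the performer, supply the extra energy.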

Figure 5. Two example excerpts of the bottom four staves, specifying the interaction stiffness (k) and interaction damping (R) for the two performers (left excerpt: from engraving; right excerpt: from casting).

7. REFERENCES

[1] E. Berdahl, A. Kontogeorgakopoulos, and D. Overholt, “HSP v2: Haptic signal processing with extensions for physical modeling,” in Proceedings of the Haptic Audio Interaction Design Conference, Copenhagen, Denmark, Sept. 2010, pp. 61–62.

[2] E. Berdahl, J. Smith III, and G. Niemeyer, "Mechanical sound synthesis and the new application of force-feedback teleoperation of acoustic musical instruments," in Proceedings of the 13th International Conference on Digital Audio Effects (DAFx-10), Graz, Austria, Sept. 6–10, 2010.

[3] E. Berdahl, H.-C. Steiner, and C. Oldham, "Practical hardware and algorithms for creating haptic musical instruments," in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME-2008), Genova, Italy, June 5–7, 2008, pp. 61–66.

[4] E. Berdahl, B. Verplank, J. O. Smith, and G. Niemeyer, "A physically intuitive haptic drumstick," in Proceedings of the International Computer Music Conference, Copenhagen, Denmark, August 2007, pp. 363–366.

[5] C. Cadoz, A. Luciani, and J.-L. Florens, "Synthèse musicale par simulation des mécanismes instrumentaux," Revue d'Acoustique, vol. 59, pp. 279–292, 1981.

[6] N. Castagné and C. Cadoz, "Creating music by means of physical thinking: The musician-oriented Genesis environment," in Proceedings of the 5th International Conference on Digital Audio Effects, Hamburg, Germany, Sept. 2002, pp. 169–174.

[7] N. Castagné and C. Cadoz, "A goals-based review of physical modeling," in Proceedings of the International Computer Music Conference, Barcelona, Spain, Sept. 5–9, 2005.

[8] P. Dourish, Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA, USA: MIT Press, 2001.

[9] N. Fletcher and T. Rossing, The Physics of Musical Instruments, 2nd ed. New York, NY: Springer, 1998.

[10] J.-L. Florens, C. Cadoz, and A. Luciani, "A real-time workstation for physical model of multi-sensorial and gesturally controlled instrument," in Proceedings of the International Computer Music Conference, Ann Arbor, MI, USA, July 1998.

[11] A. Kontogeorgakopoulos and C. Cadoz, "CORDIS-ANIMA physical modeling and simulation system analysis," in Proceedings of the 4th Sound and Music Computing Conference, Lefkada, Greece, July 2007, pp. 275–282.

[12] A. Luciani and C. Cadoz, Eds., Enaction and Enactive Interfaces: A Handbook of Terms. Grenoble, France: Enactive Systems Books, 2007. ISBN 978-2-9530856-0-0.

[13] A. Luciani, J.-L. Florens, D. Couroussé, and C. Cadoz, "Ergotic sounds: A new way to improve playability, believability and presence of digital musical instruments," in Proceedings of the 4th International Conference on Enactive Interfaces, Nov. 2007, pp. 373–376.

[14] I. Peretz, D. Gaudreau, and A.-M. Bonnel, "Exposure effects on music preference and recognition," Memory and Cognition, vol. 26, no. 5, pp. 884–902, 1998.

[15] C. Roads, The Computer Music Tutorial. Cambridge, MA: MIT Press, 1996.

[16] J. Smith III, "Synthesis of bowed strings," in Proceedings of the International Computer Music Conference, Venice, Italy, 1982.

[17] B. Verplank and E. Berdahl, "Assertive haptics for music," in Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction, Funchal, Portugal, January 23–26, 2011, pp. 25–29.

[18] Victoria and Albert Museum and Crafts Council, Power of Making: The Importance of Being Skilled, catalogue of the Power of Making exhibition at the Victoria and Albert Museum, London, United Kingdom, 2011.

INTERFACING THE NETWORK: AN EMBEDDED APPROACH TO NETWORK INSTRUMENT CREATION

Tom Davis
University of Bournemouth
Poole House, Talbot Campus
BH12 5BB
[email protected]

Jason E. Geistweidt
The University of Tromsø, VERDIONE/The World Opera
The Music Conservatory
N-9037 Tromsø, Norway
[email protected]

Alain Renaud
University of Bournemouth
Poole House, Talbot Campus
BH12 5BB
[email protected]

Jason Dixon
University of East Anglia
Norwich Research Park
Norwich NR4 7TJ
[email protected]

ABSTRACT

This paper discusses the design, construction, and development of a multi-site collaborative instrument, The Loop, developed by the JacksOn4 collective during 2009–10 and formally presented in Oslo at the arts.on.wires and NIME conferences in 2011. The development of this instrument is primarily a reaction to historical network performance that either attempts to present traditional acoustic practice in a distributed format or utilises the network as a conduit to shuttle acoustic and performance data amongst participant nodes. In both scenarios the network is an integral and indispensable part of the performance; however, the network is not perceived as an instrument per se. The Loop is an attempt to create a single, distributed hybrid instrument retaining traditionally acoustic interfaces and resonant bodies that are mediated by the network. The embedding of the network into the body of the instrument raises many practical and theoretical discussions, which are explored in this paper through a reflection upon the notion of the distributed instrument and the way in which its design impacts the behaviour of the participants (performers and audiences); the mediation of musical expression across networks; the bi-directional relationship between instrument and design; and the way in which the instrument assists in the realisation of the creators' compositional and artistic goals.

1. INTRODUCTION

This introduction is not an attempt to provide a comprehensive review of the field of network performance; rather, it outlines some general trends in order to provide a context for the work. Early examples of distributed performance, such as the Telematic Circle [1], sought to recreate traditional concert settings over the network by creating a shared environment, or telepresent performance, via the transmission of high-quality video and audio assets over high-bandwidth networks. Such performances often used traditional acoustic instruments in their attempt to produce a performance in which the boundaries between local and remote spaces dissolved into a single co-located experience or shared environment. This is exemplified in projects such as the Playing Apart study [2], which aimed to promote situated types of musicianship over the network. This study aimed to better understand the conditions of playability over a network, especially when long distances were involved. It also devised ways of introducing new technologies and principles to facilitate playability and increase interactions between geographically displaced musicians despite high latency values. The study used two contrasting pieces of music (slow/fast), allowing experimentation across several aspects of distanced performance, such as dealing with large latencies. The study also investigated the impact of interactive technologies, such as spatialised monitoring, video, and simple displays using motion capture technology, upon the musicians' ability to convey gestures via the network.

As the networked performance tradition matured, there was a realisation that the acoustics of the network could be utilised as part of the formal compositional process. Early studies of network acoustics [3] stated that, depending upon the distance between nodes and the resulting latency, the network can generate acoustical features ranging from reverberation to echo-like effects. This paradigm was exemplified in Renaud's Renditions [4], a multi-site composition which exploits the delay of the network as a catalyst for musical exchange. A related example is Rebelo's Netrooms [5], which utilises the network to extend/blend the natural colours of co-located spaces into a hyper-acoustic. In both scenarios the acoustic properties afforded by the network are exploited for an artistic purpose, and the acoustics of the network become an integral part of the performance.
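The echo-like behaviour of network latency can be sketched as a feedback delay line. This is a simplified illustration, not any of the cited systems: the sample rate, latency, and feedback gain are assumed values. A node that hears its own signal returned after one round trip behaves like y[n] = x[n] + g·y[n-D], which sounds reverberant for short D and echo-like for long D.

```python
# Illustrative sketch (assumed values): a round-trip network delay modelled as
# a feedback delay line, producing reverberation-like (short D) or echo-like
# (long D) repetitions of the input.

def network_feedback(signal, delay_samples, gain, total_len):
    """Feedback delay line: y[n] = x[n] + gain * y[n - delay_samples]."""
    y = [0.0] * total_len
    for n in range(total_len):
        x = signal[n] if n < len(signal) else 0.0
        fb = gain * y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x + fb
    return y

sr = 44100
latency_s = 0.25                       # 250 ms round trip: clearly an echo
impulse = [1.0]                        # a single clap sent into the network
y = network_feedback(impulse, int(latency_s * sr), gain=0.5, total_len=sr)
# The clap recurs every 250 ms at half the previous level.
```

Lowering the latency toward a few tens of milliseconds while keeping the feedback gain moderate shifts the percept from discrete echoes toward a diffuse, reverberation-like tail, which is the range of effects the network studies above describe.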