DISTORT THEM AS YOU PLEASE: THE SONIC ARTIFACTS OF COMPOSITIONAL TRANSFORMATION
A Thesis
Submitted to the Faculty
in partial fulfillment of the requirements for the
degree of
Master of Arts
in
Digital Musics
by
Ryan Maguire
DARTMOUTH COLLEGE
Hanover, New Hampshire
May 2013
Examining Committee:
(chair) Larry Polansky
Michael Casey
Aden Evens
Tara Rodgers
__________________________
F. Jon Kull, Ph.D.
Dean of Graduate Studies
Abstract
This thesis discusses a series of compositions created over the past year that
combine the use of external data with intuitive aesthetic decisions. I begin with a brief
chapter that draws out connections between recent projects by placing them in historical
context. In each composition, I bring non-musical information into the creation cycle as
novel stimuli, placing myself both in charge and in service of this material. I ask how
different representations of data can generate new music and what my role as composer
might be. I assess several compositional strategies characterized by the act of
transformation. First, a series of pieces entitled Heard incorporates various transcription
strategies into the creative process. Next, I present a compositional method which passes
digital data through a variety of file formats, audifying the data at each step. Following
this, I use MP3 compression to develop musical material and explore the potential of this
format as an aesthetic object. Finally, I review all of the compositions and highlight
common ideas. Through transcription, reformatting data, and digital compression, each
composition makes use of the artifacts of its particular transformation as musical
material.
Preface
"Get your facts first, then you can distort them as you please."
— Mark Twain
Acknowledgements
I'd like to thank my amazing thesis committee—Michael Casey, Aden Evens,
Larry Polansky, & Tara Rodgers—for their guidance, revisions, comments, and general
good humor. I truly could not have done this without you! Special thanks to Larry for
being the chair of my committee—living through two Hanover winters was worth it to
work with you. I was fortunate to study with David Dunn and John King at Dartmouth
College in 2012. I left every lesson and class with them in higher spirits than I had
entered—a young composer could not ask for more inspiring or cooler role models. I'd
also like to thank Robert Cogan at the New England Conservatory of Music. The two-
and-a-half years in Boston of composition lessons, both formal and informal, were life
changing. It's been a pleasure to learn and play alongside my classmates in Digital
Musics—I can't wait to see what awesome work you all do in the future. To my close
friends & loved ones—I am so happy to know each and every one of you. You make life
worth living. Lastly, I'd like to thank my strong and beautiful mother, Maureen Maguire,
for...everything! What a journey it's been!
Table of Contents
Abstract ii
Preface iii
Acknowledgements iv
Table of Contents v
List of Illustrations vi
Introduction 1
Connections 4
Heard 13
Data / Format 34
MP3 49
Conclusion 61
Appendix 64
References 84
List of Illustrations
Dunn's “Cows and Thunderstorm” 14
Heard Sound Installation 16
“Cows and Thunderstorm” (excerpt used in Heard) 17
Heard transcription sound file 19
Heard Transcription Program in Pure Data 20
Onset Detection Pure Data Code for MIDI note 21 20
Heard, for sextet and tape excerpt 22
Heard spatial notation solution 24
Heard ― thunder solution 25
Heard Spectrograms, First 5 minutes 29
Heard, for wind ensemble – first page 30
Original White Noise Soundfile 54
MP3 Compressed White Noise 54
Chaos Uncompressed 55
Chaos Compressed 55
Ascending Kick Drum Frequency Sweep 57
Ascending Kick Drum MP3 57
Ascending Kick Drum MP3 with Reverberation 57
Chords Uncompressed 58
Chords MP3 58
Chords “Ghost” File 58
MP3 “Ghost” Harmony Study 59
I. Introduction
This thesis discusses a series of compositions created over the past year that
combine the use of external data with intuitive aesthetic decisions. I bring non-musical
information into the creation cycle as novel stimuli, placing myself both in charge and in
service of this material. I ask how different representations of data can generate new
music and what my role as composer might be. I develop compositional strategies
characterized by the act of transformation. Through transcription, reformatting data, and
digital compression, each composition makes use of the artifacts of a particular
transformation as its musical material. In doing so, I propose that these works manifest a
developing digital ecology.
I begin with a brief chapter that draws out connections between recent projects by
placing them in historical context. Following this, the next chapter examines a series of
compositions entitled Heard. This series includes an hour-long acousmatic recording, a
fixed–media sound installation, a score for chamber sextet and tape, a set of instructions
for live improvisers, an orchestration for wind ensemble, and a solo piano composition
on which I focus my discussion. Inspired by ideas from spectralism and music information
retrieval, I use transcription by both machines and people as the central creative act. By
reworking the original source material through multiple and varied transcriptions, I seek
the musical potential inherent in this transformative process. I discuss both the techniques
employed and my aesthetic motivations at length.
Shifting focus from the act of transcription, I use digital data as compositional
material in the next chapter. I present a compositional method which passes digital data
through a variety of file formats, audifying the data at each step. These sounds are then
used as the source material for new sonic constructions. I give thought to the limitations
and possibilities of several working methods with this material. Further, I briefly survey
the work of other artists in a variety of media who utilize data in similar ways. I ask what
the implications of this work are and examine its connection to ecologically inspired
musics. In doing so, I posit that data composition is a new form of soundscape
composition reflecting a burgeoning digital ecology.
The following chapter deals with one file format only. I use MP3 compression to
develop musical material and explore the potential of this format as an aesthetic object. I
first compare and contrast this project to the work of previous “glitch” artists. Then, I
experiment with extreme MP3 compression at very low bit rates and use the resultant
artifacts as material for musical studies. After discussing these results, I develop
techniques to extract sonic material usually lost in MP3 compression. Future technical
and aesthetic directions are suggested at the end.
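The extraction technique summarized here amounts to subtracting a decoded, time-aligned lossy version of a recording from the original and keeping the residual. The following sketch is only an illustration of that subtraction, not the actual MP3 workflow: since reimplementing the codec is out of scope, a crude moving-average low-pass stands in for the lossy stage, and all names here are my own assumptions.

```python
import numpy as np

def residual(original: np.ndarray, decoded: np.ndarray) -> np.ndarray:
    """Return the material absent from `decoded`: the sample-wise
    difference, assuming the two signals are already time-aligned
    (real MP3 decoders add a delay that must be trimmed first)."""
    n = min(len(original), len(decoded))
    return original[:n] - decoded[:n]

def fake_lossy(signal: np.ndarray, width: int = 8) -> np.ndarray:
    # Stand-in for a lossy codec: a moving-average low-pass that
    # discards high-frequency detail, roughly what heavy compression does.
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode='same')

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
ghost = residual(tone, fake_lossy(tone))
# `ghost` now holds mostly the high-frequency content the "codec" removed.
```

With a real codec, the decoded file would be produced by an external encoder/decoder and aligned before subtraction; the principle is the same.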
In the final chapter, I review all of the compositions and highlight common ideas.
I review how each project uses transformation uniquely as a creative act. I assess the
compositional strategies that I have developed and suggest future refinements. Finally, I
comment on the aspects of digital ecology that these works reveal.
Throughout this thesis, I return to two central questions. The first is how I might
balance external structures with my desire to make intuitive, improvisatory compositional
decisions. As I will show, I have developed techniques for transforming sonic material in
response to this inquiry. The second question is how my music interacts with the digital
environment in which I increasingly live and work. I frequently refocus discussion
around these two lines of thought.
II. Connections
This chapter provides an historical overview of ideas related to my work.
Understanding that all historical surveys are necessarily incomplete, I begin at the onset
of the twentieth century and work towards the present day. Key concepts are developed
concurrently through the work of a cadre of musicians, artists, theorists, and scientists. I
trace the use of creative musical transcription throughout the twentieth century, highlight
artists using data as material in various media, and develop the concept of digital ecology.
From the beginning of the twentieth century, transcription has played an important
part in music composition. Béla Bartók incorporated folk melodies of his own
transcription into his compositions in the early decades of the twentieth century (Lendvai
and Bartók, 1971). Others reworked their own pieces for various ensembles. An early
example is The Rite of Spring: Stravinsky first published his own transcription for
piano four hands in 1913, before releasing the full orchestral score. Though distinct from
transcription, Ravel's 1922 orchestration of Pictures at an Exhibition shares a similar
spirit.
Meanwhile in the art world, Marcel Duchamp was creating his famous
readymades, transformative acts in their own right. These found-object sculptures signified
a new role for the artist. By simply selecting a manufactured object and displaying it, the
artist transformed it into art. It would be several decades before similar concepts were
taken up fully in music, as I will show.
Instead, musicians were working with other kinds of external constraints. Serialism,
developed in the 1920s, imposed strict mathematical
transformations on tightly controlled musical material. Composers like Ruth Crawford
Seeger utilized serial techniques and other formal strategies in composing, such as
Charles Seeger's dissonant counterpoint.
In the field of literary criticism, Walter Benjamin's 1923 essay The Task of the
Translator notes that the job of the translator is not merely to copy a text word for word.
Instead he states that “to some degree all great texts contain their potential translation
between the lines” (Benjamin 1968, 82). He says that the task of the translator “consists
in finding that intended effect upon the language into which he is translating which
produces in it the echo of the original” (Benjamin 1968, 76). I take a similar approach
towards the act of musical transcription in Heard.
As serialism continued to develop in the 1940s and 1950s through composers
such as Messiaen and Boulez, others began exploring environmental sounds as
compositional material. Musique Concrète composers working with Pierre Schaeffer at
the Groupe de Recherches Musicales (GRM) used tape recordings as their primary source
material rather than mathematical formalism. In America, Cage's 1953 Williams Mix was
an early example of how musicians could incorporate found sounds into their work. Just
the year before, Cage had made a bold symbolic statement about the role of the composer
and listener through his infamous 4'33”. Both of these works did for music what
Duchamp had done for art in the 1920s. They gave the sounds of everyday life aesthetic
status through an act of transformative recontextualization.
Interestingly, the composer Olivier Messiaen was both a prominent voice in
serialism and a notable practitioner of creative transcription. He incorporated his own
transcriptions of bird songs into his important body of spiritually and environmentally
focused art music beginning in the 1950s. Similar to Bartók, Messiaen often used whole
melodies from his source instead of smaller melodic fragments. Where Bartók implanted
folk music, Messiaen infused bird song into his large-scale compositions to great effect.
The 1950s and 60s saw composers exploring the boundary between the serialist
and GRM approaches. Karlheinz Stockhausen's Gesang der Jünglinge is both musique
concrète and elektronische musik. Xenakis was accepted to work at the GRM while
developing the ideas that would become the basis for his 1963 text Formalized
Music: Thought and Mathematics in Composition. By the 1960s Helmut Lachenmann had
developed the term musique concrète instrumentale to describe his acoustic
compositions (Ryan, 1999). All the while, recording and synthesis technologies were
undergoing constant development.
In the art world, the 1960s saw the emergence of conceptual and serial artists. Sol
LeWitt explained that “the serial artist does not attempt to produce a beautiful or
mysterious object but functions merely as a clerk cataloging the results of his premise”
(LeWitt 1967 cited in Buchloh 1990). Painter Josef Albers worked systematically and in
series to explore his conceptual ideas. Later, I show how this way of working is
important in my own practice.
Moving into the 1970s, the acoustic ecology movement was founded in Canada
by R. Murray Schafer. These composers sought to study the relationship between man
and the natural and artificial sounds of the environment, making this the subject of their
compositions. This work was distinct from earlier practices in musique concrète in its
focus on the sources of the sounds and the relationships between these sound sources
and the composer or listener. At the same time, algorithmic composition was gaining
prevalence:
"In recent years [the '70s and '80s], the behaviour of
systems of nonlinear dynamical equations when iterated
has generated interest into their uses as note generation
algorithms. The systems are described as systems of
mathematical equations, and, as noted by Bidlack and
Leach, display behaviours found in a large number of
systems in nature, such as the weather, the mixing of fluids,
the phenomenon of turbulence, population cycles, the
beating of the human heart, and the lengths of time between
water droplets dripping from a leaky faucet" (Alpern,
1995).
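The note-generation approach Alpern describes can be illustrated with the logistic map, one of the simplest iterated nonlinear systems. The sketch below is my own hedged example, not code from Alpern or Bidlack; the function name, parameter values, and MIDI range are all assumptions chosen for the illustration.

```python
def logistic_notes(r=3.9, x0=0.5, n=16, low=48, high=84):
    """Iterate the logistic map x -> r*x*(1-x) and quantize each value
    onto a MIDI note in [low, high]. For r near 4 the map is chaotic,
    giving the kind of quasi-natural behaviour the quotation describes."""
    notes, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)                     # one iteration of the map
        notes.append(low + int(x * (high - low)))  # map (0, 1) onto pitches
    return notes
```

Varying `r` moves the system between periodic and chaotic regimes, which is precisely what made such equations attractive as note generators.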
Unlike algorithmic composition, the field of sonification, also called auditory
display, seeks to make the relationships in data audible. As this field developed in the
1980s and 1990s, a distinction between artistic and scientific sonification emerged. As
Polansky points out, “There is no canon of art which necessitates "efficiency,"
"economy," or even, to play devil’s advocate, "clarity"” (Polansky, 2002). This
distinction brings artistic sonification closer to the music of Xenakis and to later experiments in
data art.
While composers and engineers were working with data, a movement to utilize
preexisting recordings as raw material in composition developed. DJs began reusing old
Motown records as the basis of instrumental tracks over which MCs rhymed. Hip-Hop
was born from this exploration of recycled media artifacts, reflecting the sonic
environment of the era. In this same vein, artist Christian Marclay, influenced by the
earlier work of Marcel Duchamp, began using records and turntables in the late 1970s
and early 80s to create a “theatre of found sound”. His project Recycled Records, from
1980 to 1986, took fragments of records and reassembled them in a collage-like manner.
Another important voice is John Oswald. His 1985 article Plunderphonics, or
Audio Piracy as Compositional Prerogative argues for sampling as a creative act. He
states, “A sampler, in essence a recording, transforming instrument, is simultaneously a
documenting device and a creative device, in effect reducing a distinction manifested by
copyright” (Oswald, 1985). His music reflects this philosophy by transforming snippets
from pop recordings into dense new sound compositions. This strategy of transformation
is one which I will draw on later. Similar in effect to Cage's Williams Mix, Oswald's work
is distinguished by the source of its material, harkening back to Tenney's 1961 Collage
#1 (Blue Suede) constructed from the famous Elvis recording.
As these sample–based and data–inspired compositions continued, other artists
were looking to the sounds of the natural environment again. Alaskan composer John
Luther Adams is an interesting example. His 1980 songbirdsongs, scored for piccolos
and percussion, has become well known as a transcription of birdsong and an evocation
of the outdoors. He combines this interest with a focus on natural, mathematical forms in
Strange and Sacred Noise, which derives its form from a variety of fractals. Others, such
as composer and sound artist David Dunn, were creating music for specific environments
while raising important philosophical questions about music, consciousness, and nature.
From 1973 to 1985 Dunn published a series of site–specific works focused on
environmental interaction through structured improvisation. Dunn's work continues to be
important for the way in which it becomes a part of its ecosystem through active
involvement with the given environment.
Moving into the 1990s, the field of automatic transcription began to emerge as
hardware and algorithms advanced. Automatic music transcription has become an active
subfield in computer science, as researchers continue to develop systems more and more
capable of autonomously transcribing music from the western canon with great accuracy.
These systems still have some distance to go towards creating accurate polyphonic,
multi-instrument transcriptions, but the current technology is still of use to composers,
especially in situations when these systems “fail” in interesting ways.
This interest in the failures of digital technology is seen in the work of “glitch”
artists emerging in the 1990s. This genre of minimalist post-techno focuses on digital
noise and error as compositional material. An example is German group Oval, whose
“Textuell” (1996) uses the repeated sound of a skipping CD for its percussion track. This
is related to earlier work using samples and found sounds; however, the sound source has
evolved yet again. The changing technological environment of musicians is reflected in
this music. The industry standard is always shifting—from vinyl, past tape recordings,
and on to digital CDs. Perhaps the most visible practitioner and exponent of glitch music
is Kim Cascone. His 2000 article “The Aesthetics of Failure: 'Post-Digital' Tendencies in
Contemporary Computer Music” has become an unofficial manifesto of glitch music.
Cascone argues that musicians, having become disillusioned with their digital tools, are
using signal processing techniques to magnify the errors inherent in sound reproduction
technologies (Cascone, 2000).
Moving from CDs to computers, artists and musicians in recent years have begun
to develop a practice known as data-bending. Taking its name from the electronic toy
hacking movement known as circuit-bending, data-bending manipulates raw digital data
to generate visual and sonic art. This is distinct from sonification because there is usually
no mapping involved, and often no interest in displaying the relationships in the data
beyond their aesthetic appeal. Much current work in data-bending stems from the visual arts.
Common techniques include opening an image file as raw data with an audio editor,
processing it, and then reopening it as an altered image. In the audio domain, an
important artist is Ryoji Ikeda, whose large scale audiovisual installations and recordings
use “pure data” as the basic material of sound (Ikeda, 2010). Throughout this thesis, I
return to Ikeda's work as a recurring case study.
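The image-as-audio technique described above can be sketched in a few lines. The following is my own generic illustration, not any particular artist's process: it treats an image's pixel bytes as a signal, applies an audio-style delay, and writes the bytes back, leaving the header untouched so the file still parses as an image. The function name, header length, and delay effect are assumptions for the sketch.

```python
import numpy as np

def databend(raw: bytes, header_len: int, delay: int = 64,
             mix: float = 0.5) -> bytes:
    """Apply an audio-style delay ("echo") to the pixel bytes of an
    image file, preserving the first `header_len` bytes so the result
    still opens as an image."""
    header = raw[:header_len]
    data = np.frombuffer(raw[header_len:], dtype=np.uint8).astype(np.float32)
    echoed = data.copy()
    echoed[delay:] += mix * data[:-delay]   # mix in a delayed copy
    echoed = np.clip(echoed, 0, 255).astype(np.uint8)
    return header + echoed.tobytes()
```

Run on an uncompressed format such as a BMP (54-byte header), this yields a visibly smeared image; compressed or checksummed formats will usually just break, which is part of the genre's appeal.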
Returning to the idea of translation as previously highlighted in Benjamin's
writing, Christian Marclay's piece Mixed Reviews (1999–present) presents a text
translated repeatedly in series. This is one kind of transformation, distinct from other
means and mediums, but there are parallels between translation and practices like music
transcription. Culled from music reviews, the text in Mixed Reviews is translated at each
new installation into the local language, working from the previous installation's version.
Over this chain of translations, words and phrases inevitably drift. The most interesting
part of the work is precisely this drift: what the accumulated changes reveal about the
act of translation and about language itself.
As a final point of reference, I will briefly discuss my 2012 composition Pivot.
This was the last large work I completed before embarking on the projects discussed in
this thesis. Pivot is a composition for amplified quartet and computer processed sounds
organized as a large fractal. In writing Pivot, I first created a rigid temporal structure
similar to Cage's square root form and then attempted to formalize as many musical
parameters as I could. I organized these parameters to elucidate the temporal structure as
clearly as possible. This work was a response to Boulez's take on total serialism and to
the fractal composition Strange and Sacred Noise by John Luther Adams. Additionally, I
sought to categorize the timbres using the same parameters as the timbre classification
tables from Cogan's New Images of Musical Sound (Cogan, 1984). This pre-
compositional process was extremely interesting, but I found it very difficult to compose
with so many constraints. As I wrote more of the piece, I followed the structure I had
created more and more loosely, ultimately making some very intuitive decisions, only
nominally bound by my rules, as the deadline approached. All of the compositions
discussed in the remainder of this thesis, especially the Heard project, stem from this
experience.
III. Heard
Heard is a series of compositions transcribed from a field recording from David
Dunn's Why Do Whales and Children Sing?. It exists as a fixed media acousmatic
composition, a sound installation, a composition for solo piano, a set of instructions for
live improvisers, a score for chamber sextet and tape, and a sketch for wind ensemble. Of
particular interest is the piano composition (see Appendix), which functions as the crux of
the entire series. I will describe the creation of this work and the methods utilized, then
analyze and critique the results. Finally, future directions for work in this vein will be
suggested.
In the spring of 2012 I received a commission to write a composition for solo
piano from the Wisconsin Alliance for Composers. The commission, part of the
“Wisconsin Soundscapes Project”, requested a new composition reflecting the
soundscape of Wisconsin. I had just completed a term studying closely with sound artist,
field recordist, and composer David Dunn, and was working through his book (and
accompanying recording) Why Do Whales and Children Sing?. One of the recordings was
particularly interesting. While hiking down from an alpine mountain pass, Dunn and a
group of his friends encountered a small herd of cows with delicately plinking cow bells.
A thunderstorm was approaching and resonating through the stark mountains. This
“wonderful contradiction in the soundscape” made it impossible for David not to stop and
record (Dunn, 1999). The inharmonic bell timbres, their ebb and flow in the sound field,
and their paired relationship with the occasional roll of thunder stood out to my ear. This
recording became the basis for my composition. Utilizing a recording of a group of cows
for a Wisconsin-themed commission seemed appropriate as well.
I decided to make an automatic transcription of this recording for the commission.
I found the source material interesting, and I was curious about how a transcription
algorithm would capture some of the sonic information but also how it would fail and
misinterpret other sounds. At this point in the process, I thought that I would simply
deliver the automatic transcription to the pianist as the final composition. As the project
progressed, that plan changed.
Dunn's “Cows and Thunderstorm”
Taking this recording (see spectrogram above) into the Bregman studio, I used a
time–stretching algorithm to expand the entire recording to thirty-two times its original
length without changing the pitch. I next dropped the pitch of this lengthened recording
two octaves. My hope was to move the high–frequency content in the bells into the range
of the solo piano. I made a copy of this time–stretched and pitch-shifted recording and
lowered the frequency of the copy to twenty-five percent of its value, another two-octave
drop. Layering these two versions of the recording on top of one another, I used heavy
reverberation in Logic to add ambience to the recording. The recording had been
transformed into a series of layered, shifting frequencies in semi–harmonic relation to
each other with occasional swells of low–frequency noise from the thunder.
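The arithmetic behind this processing chain is easy to verify. The sketch below is my own bookkeeping illustration of the ratios described above, not the actual DSP (which was done with a time-stretch algorithm and Logic); the variable names are assumptions.

```python
from math import log2

STRETCH = 32        # time-stretch factor, pitch unchanged
OCTAVE = 0.5        # each octave down halves frequency

def semitones(ratio):
    """Express a frequency ratio in equal-tempered semitones."""
    return 12 * log2(ratio)

original_minutes = 2
stretched_minutes = original_minutes * STRETCH   # "over an hour"

layer1 = OCTAVE ** 2      # first layer: two octaves down
layer2 = layer1 * 0.25    # copy at 25% of that, two further octaves

print(stretched_minutes)              # 64
print(layer1, semitones(layer1))      # 0.25 -24.0
print(layer2, semitones(layer2))      # 0.0625 -48.0
```

That is, a roughly two-minute source becomes about sixty-four minutes, and the lower layer ends up four octaves (48 semitones) below the original bells.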
At this point, the recording had expanded from roughly two minutes to over an
hour. Listening to the recording repeatedly, I noticed that the strong lowest harmonics of
the bells masked the upper partials I had hoped to reveal. Through selective notch
filtering I was able to bring out the overtones of various bells at different points in the
recording, resulting in a soundscape of shifting harmonies. This version became the basis
for a sound installation and serves as the fixed media acousmatic recording of the piece.
Its length and pacing remind me of ambient compositions such as Brian Eno's.
played at a high volume on good speakers, the lowest frequencies from the transposed
bells and thunder rolls can be experienced physically. This physical sensation from low
frequency sound is something I hope to explore further in future work.
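The selective filtering step can be sketched with standard DSP tools. The fragment below is a hedged illustration rather than the original studio session: it uses SciPy's `iirnotch` to attenuate a strong fundamental so that an upper partial becomes relatively more prominent, which is the effect described above. The synthetic "bell" frequencies and Q value are my assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def notch_out(signal, sr, freq, q=30.0):
    """Attenuate a narrow band around `freq` Hz with a high-Q notch,
    applied forward and backward for zero phase distortion."""
    b, a = iirnotch(freq, q, fs=sr)
    return filtfilt(b, a, signal)

def level(x, sr, f):
    # Single-bin DFT magnitude at frequency f (a quick level probe).
    n = np.arange(len(x))
    return np.abs(np.dot(x, np.exp(-2j * np.pi * f * n / sr))) / len(x)

# Synthetic "bell": strong fundamental at 220 Hz masking a partial at 554 Hz.
sr = 8000
t = np.arange(sr) / sr
bell = 1.0 * np.sin(2 * np.pi * 220 * t) + 0.2 * np.sin(2 * np.pi * 554 * t)

filtered = notch_out(bell, sr, 220)
# After notching, the 220 Hz fundamental sits well below the 554 Hz partial.
```

Sweeping the notch frequency over the strongest partial of each bell, at different points in time, yields the shifting-harmony effect described above.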
With this starting material complete, I prepared to make multiple transcriptions. I
had become interested in how multiple versions of the same piece might reveal
something interesting about both the distortions of the transcription process and the
original material. Walter Benjamin states in his 1923 text The Task of the Translator that
it “is the task of the translator to release in his own language that pure language which is
under the spell of another, to liberate the language imprisoned in a work in his re–
creation of that work” (Benjamin 1968, 80). Music is distinct from language and does not
require translation to be understood. However, my hope in this work is that multiple
transcriptions of the same material will distill something musical from the source.
For the piano commission, I selected a 15-minute excerpt with the most complex
behavior. In the original field recording, this was the section from 1:27 to 1:54 (below).
After excerpting the corresponding segment from the time-stretched recording, I added a
gradual fade-in over the first minute and a fade-out at the end. To transcribe this material
for piano, I asked the composer John King for advice.
Heard Sound Installation
“Cows and Thunderstorm” (excerpt used in Heard)
Before beginning, I had been committed to making this an automatic transcription
via pitch tracking and onset detection. At this later stage, however, I thought that, with
such a noisy, polyphonic recording, I would have better luck transcribing this material
manually than with a technological solution. Nonetheless, there was an appeal to using
computational methods to transcribe.
My relationship with this field recording is an indirect one. Despite my personal
relationship with the recordist, I have never been to the Swiss Alps, have not sat out in the
rain on a mountain pass with a microphone and field recording rig, and have not
nervously awaited an impending storm amongst a herd of cows. It was only through
listening to this digital recording that I had arrived at the source material for my
composition. By providing the computer with a chance to listen as well, I could explicitly
enhance this digital relationship. I could make the distortions of the material arising from
the transcription algorithm an integral part of the composition and not just an unwanted
artifact.
I thus devised a transcription system. Inherent in the act of transcription is a
filtering of information, diminishing certain qualities of the sound source and
strengthening others. I work with this filtration as an element of aesthetic interest rather
than fighting against it. The recording was heavily reverberated, with mostly long notes
that were slow to attack and even slower to decay, and a considerable amount of
background noise. This is not ideal material for automatic transcription. Signal
processing techniques for automatic transcription range from beat tracking and musical
meter analysis to multiple fundamental frequency estimation and source separation
algorithms. I hoped to transcribe the prominent frequency information from the recording
while not admitting the background noise that had developed. Precise rhythm was not
important to me, but I needed at least the approximate onset and offset timing.
I settled on a relatively lo-fi approach to transcribing the file, opting to develop a
threshold-based transcription system myself for both personal and aesthetic reasons.
Using the programming language environment Pure Data, I created a patch to detect
frequencies in the mix with amplitudes above a given threshold for a pre–determined
length of time, and then record MIDI note-on messages for the nearest piano key to these
frequencies. If a given frequency then falls below a given amplitude for a certain length
of time, a MIDI note-off message is generated. This is done with a bank of high-Q
band-pass filters centered around the frequency of each key on an equal-tempered piano.
Routing the output of each filter to onset-detection and amplitude-tracking Pure Data
objects, coupled with a few logic operations, I was able to implement a fairly successful
frequency detector for this particular material.
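The patch logic described above, per-key filtering plus amplitude thresholds held for a length of time before note-on and note-off messages fire, can be sketched outside Pure Data as well. The following Python sketch is my own illustration of that scheme, not the original patch: a single-bin measurement stands in for the per-key filter, and names such as `track_notes`, the frame length, and the hold count are assumptions.

```python
import numpy as np

def key_to_freq(midi_note: int) -> float:
    # Equal-tempered frequency for a MIDI note (A4 = note 69 = 440 Hz).
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def band_energy(frame, sr, freq):
    # Single-bin DFT magnitude at `freq`, standing in for the per-key
    # high-Q filter plus amplitude tracking used in the patch.
    n = np.arange(len(frame))
    phasor = np.exp(-2j * np.pi * freq * n / sr)
    return np.abs(np.dot(frame, phasor)) / len(frame)

def track_notes(signal, sr, midi_notes, on_thresh, off_thresh,
                frame_len=2048, hold_frames=3):
    """Emit (frame_index, midi_note, 'on'/'off') events.

    A note turns on after its band stays above `on_thresh` for
    `hold_frames` consecutive frames, and off after the same number
    of frames below `off_thresh` (simple hysteresis)."""
    events = []
    state = {m: {'on': False, 'count': 0} for m in midi_notes}
    for i in range(len(signal) // frame_len):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        for m in midi_notes:
            e = band_energy(frame, sr, key_to_freq(m))
            s = state[m]
            if not s['on']:
                s['count'] = s['count'] + 1 if e > on_thresh else 0
                if s['count'] >= hold_frames:
                    s['on'], s['count'] = True, 0
                    events.append((i, m, 'on'))
            else:
                s['count'] = s['count'] + 1 if e < off_thresh else 0
                if s['count'] >= hold_frames:
                    s['on'], s['count'] = False, 0
                    events.append((i, m, 'off'))
    return events
```

Running all 88 keys in parallel, as the patch does, is just a matter of passing the full list of MIDI notes; the hold-frames hysteresis is what suppresses the brief spurious onsets described below.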
Heard transcription sound file
Heard Transcription Program in Pure Data
Onset Detection Pure Data Code for MIDI note 21
The Pure Data patch required a great deal of tuning to output acceptable results.
The early transcriptions were riddled with noise, and it was difficult to find a balance
between setting the noise threshold too high and thus blocking out relevant frequency
information, and setting it too low and allowing too much noise into the transcription.
Given my choice to transcribe a sound file that had been so heavily reverberated, this
might have been expected. The false note onsets that the computer detected did end up
being interesting, but they did not suit my vision for the piano piece. I was confronted
here with a choice between leaving the data from my transcription process unedited and
intervening as a composer and listener.
After multiple attempts, I arrived at an automatic transcription that I felt captured
an interesting aspect of the original recording, noise and all. This transcription became
the direct material for a score for chamber sextet and tape. I went through the
transcription data and selected monophonic lines that could be performed within the
range of the respective members of the sextet and copied this machine transcription
exactly to the score. The instructions were then given to perform this along with the 15-
minute tape part from which the score had been transcribed (see page 22). It was then
time to decide how to use this material for the piano commission.
I imported the transcription from Pure Data to Finale in its entirety and converted
it directly to a wholly unplayable piano score. Pure Data had captured certain notes that I
might not have paid attention to in the recording. On the other hand, there was also a
great deal of sonic information that had not been captured by my transcription program.
This transcription had brought the properties of the technology to the forefront,
highlighting my abstract relationship with the source recording. However, the sonic
qualities of the original recording that had attracted me had been obscured. In order to
rectify this situation, I decided to undergo a painstaking manual transcription process
with the piece as well.
Heard, for sextet and tape excerpt
I now had to determine how strictly to transcribe this material by ear. I took the
transcribed score and the recording to a grand piano and worked through the piece in 12-
second-long sections, comparing these to the automatic transcription and negotiating a
kind of middle ground between these two hearings of the material. This negotiation
between machine and human is a recurring theme in my recent work and takes many
different guises. I found the process of repeatedly listening to small sections, playing
notes at the piano, and trying to represent both what I heard and what the recording
suggested to my ear extremely interesting. Working with this external data provided a
structure within which I could exercise my musical imagination. This continues to be important
to me as a musician.
From the aural transcription process through the preparation of the final score,
several decisions needed to be made. These included how to represent time in the
notation, how to represent the low-frequency thunder noises in the score (if at all), how to
utilize the range of the piano, and how literal the translation should be. I addressed the
issue of time first.
In Heard, the source material is an environmental recording of incidental ambient
sounds. These sounds are loosely arranged and stochastic in their distribution, suggesting
spatial notation as a possible solution. I wondered whether the precise location of
particular onsets was important to the overall effect of the material. Tightly syncopated
sounds might require a different solution than the sounds of a rainstorm, for example. In
this case, precise rhythms were less important.
I also considered the potential mindset of a performer. A tightly constructed
rhythmic notation makes the piece more challenging for both performer and transcriber,
whereas a spatial representation produces a more fluid situation. I was concerned that
an overly rigorous rhythmic notation would draw attention away from controlling the
sonorities and timbres in favor of a precise rhythmic execution on the part of the
performer. On the other hand, I also worried that spatial notation would be too simplistic for
a highly skilled pianist and might lessen their engagement with the piece. Weighing these
pros and cons I chose to use spatial notation, in the hopes that the performer would then
be free to focus on timbre, harmony, and expression, and that the fluid nature of the
source material would be more accurately reflected. I also added dashed lines marking off
each second, to encourage tightly controlled rehearsal of the piece, before freeing the
performer up in concert.
Heard spatial notation solution
Representing extra-musical sounds with the piano is a challenge. I wanted to
transcribe the low-frequency statistical noise of thunder in the recording, but was not
allowed to prepare the piano due to the commission guidelines. This was problematic
because playing the keys in a traditional manner creates significantly more pitch content
than noise. To work around this, I requested that the pianist reach inside the piano and tap
on the lowest piano strings rapidly with the pads of her fingers. This sounded surprisingly
similar to the rumbling low-frequency sounds in the recording.
Heard — thunder solution
Different instruments impose different frequency limitations, and tessitura is
always a concern in orchestration. As previously noted, I am interested in low frequency
sounds and tend to use them as major structural points in composition. Heard is no
exception to this. Low frequency notes provide salient points of interest in the form. The
recording from which I transcribed provided a wealth of frequency information, covering
the entire range of the piano. I used as much of this as I could in the piano transcription.
My original solution saw regularly occurring high-pitched notes marked 8va, an octave
above where they are notated. This notation quickly became cumbersome and Spencer
Topel suggested using a three-stave representation, rather than the traditional two-stave
notation common for piano music. This solution made it far easier to notate chords and
passages with mixed frequency content at the extreme high and low registers of the
piano.
The final question was how literal my transcription should be. By working in
series, I was able to explore multiple versions, ranging from the note-by-note automatic
transcription in the sextet to more intuitive approaches. In the piano composition, I aimed
for somewhere in the middle, though I stayed closer to literal rendering. Transcription is
an imperfect act. The material is always colored by what is heard and what is written.
Thus, rather than being a pure copy, a transcription should always be considered as a
document of what the transcriber could hear and what they could notate. Four different
solutions were attempted for Heard in this regard. First was the previously mentioned
sextet, which was essentially a direct machine transcription, slightly filtered by the ranges
of the instruments available and by my mapping the transcription onto these ranges. Next
along this continuum is the piano piece. I tried to be as true to what I heard and what the
computer had transcribed as possible. However, as the piece progressed, I gave myself
increasing leeway to stray towards imaginative notation—though always returning to
consult the original computer generated score and the recording. In the end, the piano
piece remains a fairly “pure” transcription. Next is the composition for wind ensemble. In
this piece, I allowed myself to freely elaborate on what I could hear. While I always
returned to the recording to provide the basic structure, I allowed myself to extrapolate
from the recording freely. If the recording suggested some sonic idea, I notated that idea,
rather than trying only to notate what was literally present in the soundfile.
Finally, the last version of the piece is for live improvisers. This takes the act of
translation to one extreme. Five improvisers, on any instruments, are provided with
headphones which play back the source recording. The improvisers are asked to respond
to what they hear in the recording, in real-time. Mimicry is permitted. This version of the
piece becomes a living transcription. The sonic result is a version of what Dunn
originally heard in the Alps, filtered through his recording device, through the recording
studio and my ears, subjected to digital transformations, and then finally through these
performers, through their ears, minds, and instruments. Comparing the results to the
original recording and the other transcriptions provides a unique perspective. Below I
have taken spectrograms of the live improvisation and piano recordings and compared
them to the original tape.
Returning to the solo piano composition, I wonder: what exactly have I captured
by using this field recording of a country scene? There is interest in both the time and
frequency content. Bells have long been fascinating to composers and musicians. One of
the earliest pieces of spectralism, Grisey's Partiels, uses bells for their inharmonic
overtone structure. In the time domain, the segment of the field recording that I used had
a strong shape with a clear build in momentum and a release into stasis towards the end.
Finally, the method of composition and transcription provided me with a focused
opportunity to engage in a deep listening of the material.
In the time domain, it has been shown that much classical music displays 1/f
spectral behavior (Voss and Clarke, 1978). Additionally, soundscapes judged as pleasing
often show this same behavior, as demonstrated by Bert De Coensel (De Coensel et al.,
2003). By transcribing from a rural soundscape, I have set up the potential to capture this
characteristic temporal behavior. This can be done even with field recordings where pitch
is a less salient feature. In these cases, a sparse transcription of the power spectrum is still
possible. This can guide a stochastic or algorithmic composition manifesting 1/f spectral
and temporal properties. Alternately, the information thus derived could be used to
provide major structural foci in a fixed/notated composition.
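The 1/f property cited above can also be checked numerically. The following sketch is illustrative only and is not part of any of the compositions: it generates approximately pink (1/f) noise with the Voss-McCartney algorithm and estimates the spectral slope by a least-squares fit of log power against log frequency. For ideal 1/f noise the slope is close to -1. All function names and parameter values here are my own, chosen for the illustration.

```python
import cmath
import math
import random

def fft(x):
    """Minimal radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def voss_pink_noise(n, octaves=8, seed=1):
    """Approximate 1/f noise: a sum of random rows held for octave-spaced spans."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1, 1) for _ in range(octaves)]
    out = []
    for i in range(n):
        for j in range(octaves):
            if i % (1 << j) == 0:  # row j is refreshed every 2**j samples
                rows[j] = rng.uniform(-1, 1)
        out.append(sum(rows))
    return out

def spectral_slope(signal):
    """Least-squares slope of log(power) versus log(frequency bin)."""
    spec = fft(signal)
    xs, ys = [], []
    # fit inside the band actually shaped by the octave rows
    for k in range(16, len(signal) // 4):
        power = abs(spec[k]) ** 2
        if power > 0:
            xs.append(math.log(k))
            ys.append(math.log(power))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

slope = spectral_slope(voss_pink_noise(4096))
# for pink noise the fitted slope is markedly negative, in the vicinity of -1
```

The same slope estimate could be run over an onset or loudness curve extracted from a field recording to test whether it shares this temporal character.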
I am also interested in the harmonic language that arose from this process. The
number of bells present manifested interesting statistical behavior in time and pitch.
Some bells were in tune together while others were not, creating contrast between
consonant and dissonant sections of the composition. As previously mentioned, the
inharmonic partials of the bells provided the opportunity for non-traditional harmonies in
the upper ranges of the piano. The time and frequency behaviors in the source recording
combined to provide the composition with a musical language that “avoids both cliche
tonal and atonal idioms” typical of the last century (Dunn, 2012). In doing so, I hope that
attention can be given over more exclusively to listening, rather than to expectation or
reliance on a learned musical vocabulary.
This is a crucial point. By working with external data, in this case a carefully
chosen field recording, I hoped to create a music that, while vaguely familiar, was not
derived entirely from previous idiomatic writing. By using these particular bells, I was
assured of having a harmonic language that drew mostly on the harmonic series, though
not perfectly. The number of bells present, and the fact that they were not tuned together,
provided a degree of complexity and dissonance.
Heard spectrograms, first 5 minutes: acousmatic source (top), piano (middle), improvisers (bottom)
Heard, for wind ensemble – first page
Fortunately, several prominent bells were almost in tune together, so the harmonic
language hovered somewhere between familiar and exotic. Combining this harmonic
language from a section of the recording with a clear dynamic shape, I was able to instill
momentum in the composition. The trajectory was derived from a variety of interacting
circumstances and decisions, but was not based on what a traditional musical form should
be. This was important to me. I wanted a structure with definition, but one that was not
familiar or derivative. I believe that the techniques used here have successfully met that
end.
With Heard I have taken a field recording of an outdoor scene, one that carries
no particular social message for most people and at which I was not present, and have
transcribed it for instruments. This digital-only relationship is a strange product of our
current sound technologies. The transcription itself was a challenge to arrive at, and has
both elements of found sound and careful arrangement by the composer. As a sparse,
15-minute-long piece for solo piano, it opens up, I believe, a listening space similar to
ambient or minimalist compositions. I also hope that an object of aesthetic interest has
been born through non-reliance on idiom, engagement with significant forces of nature
(thunderstorm, sentient animal life), harnessing technology, and placing the act of
listening at the center of the creative method. This piece is not as much about personal
expression, about what I have tried to say or communicate, as it is about what I have
heard and tried to share. I find this approach to creating music very satisfying and would
like to pursue it more in the future.
The threshold-based automatic transcription resulted in a specific rendering of the
source material. Other methods for extracting temporal and harmonic information from a
field recording exist as well. These techniques, combined with an effort to clean up and
denoise the source recording, rather than essentially muddying it before transcription,
could have resulted in a cleaner representation of the information in the source file, were
I aiming for a clean transcription. Other methods have other shortcomings. I consider
these mishearings to be similar to the pleasant distortion acquired in a signal when
overdriving a tube amplifier. It would be interesting to explore these possibilities further.
How could a transcription algorithm be designed so that it would fail in an interesting
way?
I have also considered whether the transcription I did by hand could be formalized
and applied algorithmically. Supposing it could, I would somewhat mourn the loss of the
ineffable experience of listening, imagining, and transcribing by hand. Any such
formalization and algorithmic development would, I hope, be grounded in a similar
method of working by the composer. This warrants deeper examination in future work.
In conclusion, I have presented a series of compositions derived from the same
source recording in a variety of ways. All four transcriptions were derived from the same
source. A future project might involve transcribing each new version from the previous
one, as in Marclay's Mixed Reviews. It would be interesting to observe the accumulation
of a transcription 'residue', so to speak. As it stands now, the piano piece represents a
distillation of Dunn's field recording and, I hope, captures something of its aura in
addition to the pitches and rhythms of that summer scene. By pursuing a transcription
project as suggested above, the idiosyncratic accumulation of signal noise could
eventually stand equal to (and perhaps overtake) the influence of the source. This version,
however, has shed a clearer light on the original source and on the distinctions between the
various methods. Isolated in this way, they become clearer as individual transcription
techniques, and their unique distortions are on full display.
IV. Data / Format
In this chapter, I use digital data in a variety of file formats as compositional
material. I begin by contextualizing this work through a survey of other artists using data
in similar ways, paying particular attention to the sound art and music of Ryoji Ikeda. I
then present a compositional method which converts digital files between different
formats and audifies them at each step. I will discuss the limitations and possibilities of
this method and material. I ask what the implications of this work are and examine its
connection with ecologically inspired musics. In doing so, I posit that data composition is
a new form of soundscape composition reflecting a burgeoning digital ecology. Finally, I
suggest future directions for this project.
One advantage to working with digital sound is that the composer is afforded a
control over the materials of sound that cannot be achieved with traditional notation.
Though a highly precise score can convey a great deal of information to a performer,
there is a limit beyond which the composer can no longer shape the sound. Generally
speaking, this limitation is different and often less cumbersome when working with
computerized sound in a digital audio workstation. The composer can account for micro-
variations in timing, adjust them at will, subtly alter timbres, and audition candidates for
the final solution numerous times, at relatively low cost.
Increasingly, digital technologies have come to mediate common musical
experiences. Composers now work regularly in digital studios. In comparison to previous
decades, consumers now listen to proportionally more recorded music than live music. The
composition studio has moved away from pencil and paper and towards mouse and
monitor. This has opened up more possibilities than it has closed, but engaging with
digital content in some way has become all but inevitable.
Hoping to reflect this situation, I use digital computer data as compositional
material in this project. A lesson with the composer Chaya Czernowin in the fall of 2012
challenged me to consider more carefully the materials of my music, rather than focusing
so closely on structure and form. With this in mind, I began this studio-centric project.
This led to the development of a technique for harvesting data from the hard drive on my
computer. I found in this technique an opportunity to reflect on past work, create strange
and new sounds, and engage creatively with old, recycled computer files.
The use of data as the raw material of art has gained numerous devotees in the last
twenty years. This trend can be traced back even further to early work in algorithmic
composition. More recently, with the age of personal computers, laptops, and mobile
computing in full swing, artists' relationship with data has shifted. Practitioners of data art
span both the new media arts and traditional genres such as literature, poetry, visual art,
performance art, and of course music and sound art. I will discuss recent practices and
raise questions arising from this work below.
Using data as sonic material is closely related to the field of sonification and more
specifically, audification. Thomas Hermann, in The Sonification Handbook, explains that
sonification is “the technique of rendering sound in response to data and interaction”
(Hermann et al. 2011, 1). Furthermore, audification is defined as “the direct play back of
data samples” (Kramer, 1994), and is differentiated from other forms of sonification
which tend to involve parameter mapping techniques. Discussing audification, Dombois
and Eckel distinguish between four “different types of data that result in different types of
sounds” (Hermann et al. 2011, 302). These are sound recording data, general acoustical
data, physical data, and abstract data. Examples from each category include ultrasonic
signals, sonar, EEG data, and stock-market data (Hermann et al., 2011).
Using the above definition of audification, one can trace the practice back to
Thomas Edison in 1878, with his “Time Axis Manipulation” of sound recording data
(Gelatt, 1977). Audification, in theory and practice, has existed in the arts for at least a
century. An early exemplar is the work of Oskar Fischinger, who began painting directly
on the soundtracks of films in 1932, resulting in new synthetic sounds not unlike those
from electric synthesizers. In this case, the data was derived from his painting. Whereas
most sonification and audification practitioners aim primarily to make the relationships
within data clear, my goal is to draw on the data as source material. I am interested in the
resultant sounds as sounds themselves, not as representations of data relationships or
information dynamics. Other artists have shared this approach, but this distinguishes my
goals from the goals of auditory display and scientific sonification practitioners.
Data from chaos theory, probability distributions, and star charts have
underpinned many of the most significant works of 20th century art music. An example of
the use of external information in music is Earth's Magnetic Field, by Charles Dodge, in
which the composer sonifies data gathered from instruments measuring the magnetic
field of the earth over time. The use of data is not entirely new, though the kinds of data
and ways in which it is used are evolving.
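Dodge's actual mapping procedure is not reproduced here, but the general parameter-mapping approach his piece exemplifies can be sketched. In this hypothetical Python illustration, a short data series (a stand-in for magnetic-field readings) is rescaled into an audible frequency range and rendered as a sequence of sine tones; every name and value below is my own invention.

```python
import math

def map_to_frequencies(data, lo_hz=220.0, hi_hz=880.0):
    """Linearly rescale arbitrary data values into an audible frequency range."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [lo_hz + (v - lo) / span * (hi_hz - lo_hz) for v in data]

def render_tones(freqs, dur=0.1, rate=8000):
    """One short sine tone per data point -- a direct parameter mapping."""
    samples = []
    for f in freqs:
        n = int(dur * rate)
        samples.extend(math.sin(2 * math.pi * f * i / rate) for i in range(n))
    return samples

readings = [3.1, 3.4, 2.9, 3.8, 3.3, 2.7]  # hypothetical field readings
audio = render_tones(map_to_frequencies(readings))
```

A direct audification, by contrast, would play the data samples themselves back at an audio rate rather than mapping each value to a pitch.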
In Digital Art and Meaning, Roberto Simanowski addresses several important
issues that arise when creating data driven art. For example, data practices call into
question traditional notions of authorship. Simanowski calls such works “mapping art” as
they typically involve the mapping of some dataset onto aesthetic parameters
(Simanowski, 2011). Issues of hermeneutics are complicated by digital technologies, and
materiality often finds itself transformed and promoted to a position of primacy. What's
more, data art brings into question “crucial issues such as consumption, reflection, critical
distance, and affirmative embrace” (Simanowski 2011, 21). Simanowski considers
mapping art as the 21st century equivalent to naturalism in late nineteenth-century
literature due to its seemingly objective posture (Simanowski, 2011). Data is scientific,
and by using it artists appropriate some of the prestige that society grants to
science. He also notes that data art is closely connected to “ready-mades, photography”
and dada through its transformation of mundane information into aesthetic objects
(Simanowski, 2011).
The compositional project I discuss below fits into the emerging practice of data-
bending. This practice takes the inspiration for its name from the established genre of
circuit-bending in which musical toys and gadgets are subjected to exploratory hacking in
search of new sounds. Practitioners such as Qubais Reed Ghazala, Alec Feld (a.k.a.
Expensive Looks), and Jeff Morton emphasize a process of exploration and often
discover “new personal and unique narratives … reflecting the position of an individual
inside a culture overrun with cheap electronics.” (Whitelaw, 2004). Where circuit-
bending takes hardware as its object, data-bending is purely digital. This reflects the new
information-society of the twenty-first century, where data is available freely,
immediately, and in massive quantities.
Much current interest in data-bending stems from the visual arts. A common
practice is importing an image file as raw data into an audio editor such as Audacity and
then doing basic signal processing operations on the file before exporting as raw data
again. When reopened as an image file, the original image will have been altered with
interesting “glitchy” artifacts and effects. In music, an early example of data-bending
comes from the artist stAllio!, whose 2003 twelve-inch vinyl release True Data consists
of excerpts from “random data files” edited into techno beats (Whitelaw, 2004).
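The glitch practice described above can be mimicked in a few lines. The sketch below is a generic, hypothetical byte-level effect of my own devising, not the behavior of Audacity or any particular tool: it applies a crude "echo" to the raw bytes of a file while leaving an initial header region untouched, so the result can still be reopened in its original format.

```python
def glitch(raw: bytes, header: int = 128, delay: int = 500, mix: float = 0.5) -> bytes:
    """Byte-level 'echo': blend each byte with one `delay` positions earlier,
    leaving at least the first `header` bytes untouched so the file still opens."""
    out = bytearray(raw)
    for i in range(max(header, delay), len(out)):
        out[i] = int(out[i] * (1 - mix) + out[i - delay] * mix) & 0xFF
    return bytes(out)

# a deterministic stand-in for the bytes of an image or document file
source = bytes((i * 37) % 256 for i in range(4096))
glitched = glitch(source)
```

Reopened as an image, the altered bytes would appear as visual artifacts; imported as raw audio, they would be heard directly.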
I propose that data-bending is a variation on ecologically focused musics, such as
soundscape composition and site-specific works. Whereas ecologically focused media
have previously tended to engage the cityscape or the countryside as the environment of
the artist, data-bending reflects the increasing digitalization of society. The machine is
becoming our primary ecosystem. As artist and scholar Mitchell Whitelaw states, “Given
that the basic platform for sound culture is the personal computer, it's not surprising that
it has begun to draw on data as the raw material of that environment.” (Whitelaw, 2004).
In this sense, I place my work with data next to works centered around field recording
and transcription, such as the Heard project. In future work, I would like to process and
alter data in real time, in a manner similar to David Dunn's interactive improvisations
with site-specific environments.
Data-bending engages the conversion from data to signal. All file types become
possible audio files (samples, field recordings) with their own characteristic sonic
signatures. The hard drive becomes a sample library (Whitelaw, 2004). Beneath the
veneer of graphic user interfaces and sophisticated operating systems lies a wash of
information in a simpler form. Whitelaw reminds us that, “despite the structured
media artifacts (software) produces … its internal representations are abstract; purely
data.” (Whitelaw, 2004). Numerous approaches are possible. In using data as raw sonic
material, I decontextualize it. It is transformed from a representation of some
information, into pure sound—an aesthetic object.
This materialization of digital data is a common trend in contemporary digital art.
Whitelaw suggests that any pretense that such art is accessing data in its “pure” form
does not hold up to closer inspection (Whitelaw, 2004). All digital data is necessarily
ordered according to some standard or definition. It is impossible to truly audify pure
digital data. By converting data between different formats, I am accessing a new kind of
signal processing transformation—reformatting. When converting non-audio data into
sound, I unpack and reorder the data, interpreting and coloring this information at each
pass. The formats which store digital data are also cultural artifacts: widely agreed-upon
conventions of digital form that carry this cultural baggage with them (Sterne,
2012). This last idea will be explored in the next chapter. Here, my goal is not to access
data in its pure form, but to explore how data is translated and transformed via
reformatting, and what kind of sonic material this process offers.
Ryoji Ikeda, a prominent Japanese sound artist known for his large-scale
audiovisual installations engaging data as primary material, offers only minimal
information about the sources of his material (Abe et al., 2012). Interviews are almost
non-existent, and his output consists almost exclusively of the art itself, with very little
prose surrounding it. Ikeda describes his work as an exploration of “the potential to
perceive the invisible multi-substance of data that permeates our world” (Ikeda, 2010). In
his work, he seeks “to materialize pure data” (Ikeda, 2010). His style is highly rhythmic,
tightly controlled, and seems sequenced. His use of data is mainly for its timbre and
symbolic implications, not for its information dynamics or form.
There is a continuum between auditory display meant chiefly to convey the
information in data, and purely aesthetic approaches, wherein the information in the data
can be distorted or obscured for artistic purposes. Roberto Simanowski discusses these
choices and their aesthetic consequences, elucidating the range of possibilities between
dadaism and scientific sonification. Ikeda's work tends towards the low information,
highly aestheticized extreme of this continuum, where the information in his data exists
largely as surface spectacle to a viewer. An example is his dataphonics project, in which
the audience is told that all of the sounds come from digital data, but not what the sources are
or what the data conveys. This paucity of information leaves several questions
unanswered: how was this made, and what information is at play here? Ikeda culls his
material from various data sources, but it is not clear if these data sources share a similar
origin, or if they are related beyond their juxtaposition in the final work of art.
In formatBreak0, my first composition utilizing data as raw material, I used
several data-bending techniques. I sought to recycle some of my previous music in
creating this new composition. The files that I manipulated for this project were all
documents storing various forms of previous projects. Specifically, the score, film, and
audio files from my composition Pivot were extensively utilized. Pivot was an exercise in
formalism that was successful in many ways, but also tested the limits of how much
parametric restriction I could stand to place on myself as a composer. Mining the leftover
files from the creation of this project was a cathartic act. Just as artists in urban environments
create works from junk and trash, this project was their digital
analogue—recycling the digital leftovers from a previous creation cycle.
In formatBreak0, I am engaging explicitly with the digital information ecology in
which many of us increasingly live. I am accessing old data files from digital storage and
repurposing them. This is an act of data recycling. By creating a new work and the
accompanying new files, I am returning new data into this information ecology. This all
takes place on my hard drive, but by making the music available on-line it enters an even
larger ecosystem. Now the data can spread—it can be copied to other computers, listened
to by other users, and potentially recycled in the same way by someone else. An
interesting collaborative project would be to take a data set and transform it into an
aesthetic work, sharing the results with a collaborator who then does the same and passes
it on, and so on.
I used MP3, PDF, M4V, and AIFF files in creating formatBreak0. All of the files
came from the Pivot project save one. This was a PDF on the theory of MP3 encoding
rendered as an audio file and converted into MP3 format. This file was used in a
contrasting subsection of the piece, against the Pivot material. I pored over these files for
sonic material and then subjected them to basic signal processing, editing, and mixing
techniques in Audacity and Logic.
To access the data in these files as sound I used two techniques in alternation. I first
used the Import Raw Data function in Audacity. This function has a variety of options
and opens the file directly in the digital audio editing environment. As an alternative
technique, I created a patch in Pure Data which could open any file and play it back as a
sound file. Each method sounded slightly different. Each format tended to have its own
audible fingerprint as well. As in Heard, I am interested not just in the “pure” data stored
in these files, but in the residue that develops upon converting to different formats. To
me, this residue is analogous to the distortion that develops from transcription. By
opening data stored as one file type as another type of file, a different ordering of the data
is made audible along with the new file header and footer information. Passing this along
to another format repeats this process. In essence this is a game of digital “Telephone”.
These transformations can accumulate in a manner similar to Lucier's I am sitting in a
room. Unlike Lucier's piece however, this process is not foregrounded in the final
composition. I use these transformations to develop raw material which I then edit, cut,
and collage.
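This "Telephone" process can be modelled in miniature. The following sketch is my own hypothetical illustration, not the Pure Data patch used in the piece: each pass wraps a byte string as an 8-bit WAV and then feeds the entire resulting file, header included, back in as the next generation's raw samples, so every pass embeds the previous container's header as audible data.

```python
import io
import wave

def telephone_pass(data: bytes) -> bytes:
    """One generation: reinterpret all previous bytes, header included,
    as 8-bit unsigned mono PCM samples inside a fresh WAV container."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(1)       # 1 byte per sample
        w.setframerate(44100)
        w.writeframes(data)     # the old file's bytes become the new samples
    return buf.getvalue()

generations = [b"leftover score data from a previous project " * 8]
for _ in range(4):
    generations.append(telephone_pass(generations[-1]))
# each generation is longer: the previous WAV header is now sample data
```

After a few passes the accumulated headers form exactly the kind of transformation residue described above, growing with every generation.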
This returns us to the question I ask in the introduction: How can I balance the use
of external data with a desire to improvise and compose intuitively? By using the data as
a building block, I can take advantage of its structure and inherent logic while still giving
myself the opportunity to react intuitively as a musician to this material. I find value in
listening to the data and imagining how it might be transformed. Allowing myself to
realize this sonic vision is an empowering act.
Developing an approach to using data requires multiple decisions. I can map data
directly to sonic parameters, allowing the information dynamics of the data to be
displayed in an approximately 1-to-1 manner. Alternately, I can work intuitively with the
data, reacting to its materiality as sound, and edit it in a digital audio workstation as if it
were raw sculptural material. I can also take a mixed approach. A key question here is
this: at what level will I engage with the data?
In formatBreak0, I have a personal relationship with the source material, as it
represents a previous artistic project of mine. However, the data was not created with its
eventual audification in mind. All of the data was generated previously and functions as
a found object, and my process is akin to that of someone practicing sample-based music. Is there
any fundamental difference between this and someone who digs through vinyl at a record
shop, or an artist working with ready-mades? The depth of my engagement with the
data is, at least in this sense, limited. I have engaged deeply with the data since
recovering it from the hard drive, but I did not impose any intentionality in its creation at
the start.
Is anything lost by engaging with found material as the starting point, as opposed
to creating the initial material from scratch? If I had begun the project with some pure
conception in mind, then perhaps something would have been lost, especially if I had been unable to
mould the found material to fit this conception. On the other hand, working with found
material provides me with a stimulus to react to and build upon. Even a pure conception
is in reality informed by previous sonic experiences and constraints. Working with found
sound makes that initial seed more explicit. What's more, if the found material is new and
interesting, it provides an opportunity to avoid previous musical idioms. There are
limitations to working with found sound, and it is not always the most efficient means of
arriving at certain results. Nonetheless, given the level of intention that can be exercised
with this material, this is an interesting and artistically valid mode of working with sound.
The structure of formatBreak0 is derived from the PDF score of Pivot. Most of the
opening section comes from this file. When rendered as 16-bit PCM audio, the PDF file
assumes an imperfect rondo structure. Rendered with my technique, the file yields a great deal of varied
low-frequency noise and pitch material, alternating at regular intervals with pure
sustained tones set against metrical ticking sounds. This structure repeats for almost 10
minutes at a 44,100 Hz sample rate. I condensed the material to about two minutes, cutting
out the sections of fairly static, undifferentiated noise. I wanted to use the portions with a
great deal of internal, timbral contrast and with active dynamic fluctuations. Having done
this, I found the material mostly interesting, but a little long-winded. To combat this, I
began selecting small portions for pitch shifting, cutting, pasting, copying, overlaying,
filtering, and reverberating. Working through the file little by little, I enhanced the
amount of dynamic activity in the file, creating a collage with contrasting short segments
juxtaposed in rapid succession. The material was well suited to this kind of collage as it
already had some of these properties inherently. I was essentially enhancing these
characteristics. These edits were all done “by hand” and with consideration. Might a
sufficiently robust algorithm be able to achieve similar results? I suspect so and future
work could involve exploring this possibility.
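The rendering step behind this material, reading a file's raw bytes and simply relabeling them as PCM samples, can be sketched in a few lines. This is my own minimal reconstruction, not the exact tool used for formatBreak0; the function name, file paths, and default sample rate are illustrative.

```python
import wave

def audify_file(in_path, out_path, sample_rate=44100):
    """Reinterpret any file's raw bytes as mono 16-bit PCM audio.

    No conversion is performed: the byte stream is relabeled as audio
    samples, so the file's internal structure (headers, object streams,
    compressed regions) becomes audible texture.
    """
    with open(in_path, "rb") as f:
        data = f.read()
    if len(data) % 2:                  # 16-bit frames need an even byte count
        data = data[:-1]
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)              # mono
        w.setsampwidth(2)              # 16 bits per sample
        w.setframerate(sample_rate)
        w.writeframes(data)
```

Pointing such a function at a PDF score yields the kind of noisy, sectional material described above; changing the nominal sample rate transposes and time-scales the result without altering the data.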
Data carries information and references processes and realities beyond its
representation in the computer. As an artist working with data, I have to decide how to
engage with this semantic value. Should the substance of the data, the information
contained therein, contribute in a significant way towards its sonification? Further, should
this semantic information be conveyed to the audience? Is it worthwhile to tell an
audience where the data comes from, or is it better to leave them in the dark, to leave
them with only the materiality of sound?
Returning to Ryoji Ikeda as a case study, recall that his work tends to reveal little
about the information content of his data. The viewer is seldom told anything about the
source, though he does give an occasional cryptic reference to mathematics and science,
an effective strategy. His main approach, however, has both merit and fault. From his
comments (“My approach is practical, not conceptual”) and the conventional rhythmic
patterns that often emerge in Ikeda's music, one might assume that he has essentially
audified arbitrary data and cut it into snippets which are entered into a digital sequencer
such as Logic (Abe et al., 2012). In such a practice, the information in the data is not
contributing much at all to the form of the composition. His music is interesting for its
novel timbres while the rhythms he creates draw on electronic dance music. In this case,
it would be clear why he has not elaborated upon the source of the data—the information
it carried is not of essential import to the finished work. The data has been
decontextualized and stripped of its meaning and form and thus knowing what it was
would offer very little to the listener. This is similar in effect to the tape edits of Pierre
Schaeffer, whose goal was to divorce sounds from their context and approach them as
pure sound. Ikeda has a different aim, but arrives at a similar decontextualization.
In formatBreak0, I decided to make the source of my data clear. In doing so, I was
not attempting a scientific sonification. Instead, I provided this information so that
audiences will have something to consider conceptually after listening to the work for its
sonic properties. The process of recycling old files to create a new work seems interesting
to me, and if carried out through further iterations could be a theme for an entire series of
compositions. The material could be nearly anything and yield a similar sonic result;
however, I find it interesting to draw attention to the data that people leave behind on
computers and online servers and consider what might happen to that data, or creative
ways in which it might be reused.
Other data sources are possible as well. I am interested in generating data from
scratch, as suggested earlier. Mining the computer for data files to sonify, I propose
analyzing the statistical sonic properties of the data in different arrangements, correlating
this with its sonic output and labeling it with aesthetic descriptors. Doing this opens up
the possibility of generating new data stochastically. I do not wish for this to replace the
process of discovery when audifying a new data file that one has not encountered before,
but merely to augment it by providing another level of potential engagement and control.
A preliminary implementation could involve saving, in a repository, small snippets of data
that have been deemed acoustically interesting and classifying them by their
characteristics as sonic material. Building up a large enough database of these will result
in a kind of data lexicon from which to draw. Analyzing this material for its sonic
characteristics, one could develop a data synthesizer. The control parameters for such a
synthesizer could include entropy, spectral centroid, spectral spread, along with more
traditional synthesizer controls such as frequency and amplitude. By making the synthesis
stochastic and generative in nature, one could still retain an element of discovery for the
end user. Depending on how tightly a user decides to restrict the parameters of the output,
they would generate a range of data sounds. One might also develop data mining
approaches to search through large numbers of files stored on a hard drive looking for
segments that are statistically similar to selected elements in the data lexicon, then
extracting these file segments for audification. Yet another approach could be to devise a
measurement (perhaps entropy?) that could be used to classify data files by their potential
as interesting sonic material.
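The classification step could begin with a handful of descriptors per snippet. The sketch below, assuming numpy, computes the features named above (entropy, spectral centroid, spectral spread) on the audified bytes; the function and its defaults are hypothetical, not an existing tool.

```python
import numpy as np

def snippet_features(raw_bytes, sample_rate=44100):
    """Rough sonic descriptors for a data snippet, suitable for tagging
    entries in a lexicon of audified material."""
    samples = np.frombuffer(raw_bytes, dtype=np.uint8)

    # Shannon entropy of the byte distribution (bits per byte, 0..8)
    counts = np.bincount(samples, minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())

    # Audify the bytes as centered 8-bit samples and take one spectrum
    x = samples.astype(np.float64) - 128.0
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    total = mag.sum() or 1.0
    centroid = float((freqs * mag).sum() / total)   # spectral centroid (Hz)
    spread = float(np.sqrt((((freqs - centroid) ** 2) * mag).sum() / total))
    return {"entropy": entropy, "centroid": centroid, "spread": spread}
```

Stored alongside each snippet, descriptors like these would let the lexicon be searched by sonic character, or sampled stochastically as control data for the proposed data synthesizer.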
As a composer, I appreciate the opportunity to audition material by ear and to
engage in the act of listening and responding during the composition process. Other
composers have worked in this way with formal schemes and this has become a recurring
theme in my own work. I often seek to generate material in strict or automated ways and
then pass it through the final prism of my own listening and editing. This grants me
agency as a composer and allows me to exercise the basic musical skills that I have
developed over many years in an intuitive way. I try to establish this balance between
formal rigor and intuitive ways of working to draw on the benefits of both. By
incorporating external stimuli, I force myself to react to an unfamiliar situation and find
new solutions. At the same time, working intuitively allows me to edit from the listener's
perspective and shape the music dynamically.
Other kinds of music might call for a different creation cycle than the one
described above. Algorithmic work, for example, could place the listening and
auditioning stage at the end of the execution of the algorithm. If the sonic results need to
be tweaked, more general results can be obtained by tweaking the algorithm than by
simply editing the given output by hand. In formatBreak0 however, I am not generating
material algorithmically. I am working through found data-sound looking for interesting
material. This is a kind of collage approach, more than an algorithmic one. Cage wanted
to let sounds be themselves. I agree, but I also want to be myself in the compositional
process.
In conclusion, this project has laid the groundwork for continuing work in
formatted data audification, defined here as a sub-genre of data-bending. By focusing on
the unique artifacts and arrangements of data in various formats, I draw a distinction
between this and the general data-bending aim of accessing “pure” data. Here, data is
always colored in some way by its format, and these colors are sought out as important
sonic material. I have discussed my working method and an initial composition in this
style, formatBreak0. I have also discussed how this project satisfies both my desire to
work with data intuitively and reflects my place in a developing digital information
ecology. Ideas for collaborative data composition have been suggested which would
make this ecological relationship more explicit. Future technical developments have been
suggested as well. First, using statistical measures of data mined from a variety of
formats to generate new data, irrespective of format, has been proposed. Additionally, a
lexicon–based approach to auditioning data files for their value as sound has been
suggested, along with the idea of developing a measure that would rate files according to
their potential sonic interest.
V. MP3
The MPEG-1/2 Layer III standard, more commonly referred to as MP3, has
become a nearly ubiquitous digital audio file format. First published in 1993, this codec
implements a lossy compression algorithm based on a perceptual model of human
hearing. In doing so, it is capable of reducing large audio files to a fraction of their
original size, expediting their transfer over low–bandwidth networks. This was of critical
importance in the early-to-mid 1990s when internet speeds were a significant bottleneck
in the transfer of information. It has remained important for individuals looking to
maximize available storage capacity with media content.
The MP3 standard has become an interesting object of critique in contemporary
technology studies (Sterne, 2006). That a standard which subtly reduces the audio quality
of files has remained in place, despite massively increased bandwidths and storage
capacities, is impressive, and highlights the foresight (and fortune) of the format's creators.
Due to a complex combination of market and social factors, the majority of music
listeners today continue to prefer a standard which minimizes download times and maximizes
the effective storage capacity of their audio devices (Sterne 2012, 5). These are often portable
machines such as the iPod, on which much listening occurs in noisy environments (gyms,
subways, city streets) through (often cheap) ear bud headphones and inexpensive
preamplifiers. The loss of fidelity from these external factors, along with the cleverness
with which MP3s are coded, a socialization to the sound of MP3 files, and other factors
have obviated the need for an upgrade to higher fidelity formats for most end users
(Evens 2005, 121).
Regardless, the MP3 is not always the most appropriate format for a given task,
and a critical evaluation of the technology and its limitations is warranted. In the project
below, I have made the MP3 compression codec an integral part of the sonic material in a
series of compositions. Despite its highly touted performance in listening tests, this lossy
compression codec does generate audible artifacts, especially when implemented at low
bit rates. In order to work with these compression artifacts as compositional material, I
have created several audio files and encoded them at various bit rates using the LAME
MP3 encoder. These newly compressed MP3 files are the basis for my compositional
work discussed below.
Approaching MP3
Working with digital artifacts has become a fairly common practice in the last
decade among digital artists. Referred to as “glitch”, practitioners in a variety of media
seek out technological errors as aesthetic material (Cascone, 2000). Glitch artists focus on
digital noise and mechanical errata as the substance of their compositions. The glitch
aesthetic is present in the visual arts as well, with artworks focusing on compression
artifacts from JPEG and movie file formats. Kim Cascone suggests in his 2000 article
“The Aesthetics of Failure” that, having become disillusioned with the digital revolution,
artists are using signal processing techniques to magnify the errors inherent in these
technologies. I'm not sure if disillusionment is the only factor at play here though, as
fascination with distortions and noise has been present in music since at least the
beginning of the 20th century.
MP3 compression has been described as producing metallic, ringing, warbling,
and other varieties of sound artifacts. Highly transient signals, such as percussion
instruments, and recordings with a high degree of randomness are especially vulnerable
to these digital distortions (Pras et al., 2009). In the artifact commonly referred to as pre-echo,
sounds with a sharp attack are often encoded as beginning too early. This loss of precision
is a manifestation of the time and frequency limitations of quantization and
compression. Most interesting to me as a composer is
how seemingly new information can be generated by the MP3 compression algorithm
when presented with sufficiently noisy material.
I am interested in a compositional method that highlights the artifacts generated
by this compression standard. As a preliminary research direction I considered creating
sonic material crafted to interact with the MP3 perceptual model. This material could
assume a variety of characteristics. One possibility is to evade the intentions of the
perceptual model. This material would be highly altered by MP3 compression because it
would not fit neatly within the perceptual model. Material with a great deal of percussive
and/or high frequency content most easily undermines this assumed model. Additionally,
audio with content at the boundaries of the critical bands modeled in the MP3
codec generates artifacts. Another possibility is developing content that would be
minimally affected by MP3 compression. The MP3 format was created with a specific
kind of music in mind—typical western popular music. Judging from this encoder
specification, music with predominantly low to midrange frequency content, gradual note
onsets, and widely spaced harmonic and timbral information would be least altered by the
compression.
Given these possibilities, I wonder—what compositional structures and processes
are suited to these materials? One possibility is a composition which foregrounds the
consumption of an audio file by MP3 compression. Such a composition would function
as a process piece where a sound file is slowly transformed from uncompressed to full,
low-bit rate MP3 compression. This would articulate the digital compression space of the
codec. An implementation of this, which I have not yet created, could span the range of
available compression settings in series through a sequence of crossfades. Material would
move from uncompressed audio to slightly compressed and then on to the final material
encoded at 8 kbps, the minimum MPEG-1/2 Layer III bit rate (Brandenburg, 1999). The
music could also undergo a simultaneous transformation from well-handled, slow, and
timbrally smooth sounds to high–frequency, highly percussive sounds by the end of the
composition.
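One way such a sweep could be realized, assuming the successive bit-rate versions have already been encoded and decoded back to equal-length arrays (with LAME and any MP3 decoder), is to give each version a triangular gain window along a single timeline, so that neighbouring versions crossfade continuously. This is a sketch of the idea, not a tool I have built; the function name and windowing scheme are my own.

```python
import numpy as np

def compression_sweep(versions):
    """Crossfade through renderings of the same material along one timeline.

    `versions` are equal-length mono arrays, ordered from uncompressed down
    to the lowest bit rate. Version i receives a triangular gain window
    centered on its share of the timeline; interior gains sum to one, so the
    sound drifts continuously from clean audio to full compression artifacts.
    """
    versions = [np.asarray(v, dtype=np.float64) for v in versions]
    n = len(versions[0])
    t = np.linspace(0.0, len(versions) - 1.0, n)   # position along the sweep
    out = np.zeros(n)
    for i, v in enumerate(versions):
        gain = np.clip(1.0 - np.abs(t - i), 0.0, 1.0)
        out += gain * v
    return out
```

Because adjacent triangular windows sum to unity, no level dip occurs at the crossfade points; an equal-power curve could be substituted if the versions are decorrelated.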
Another possibility is to work with the “negative space” of compression, an
approach which I have pursued below. When an audio file is compressed using a lossy
standard such as MP3, some information in the original file is left out. This information is
effectively lost. Finding a way to capture these sounds is an interesting aesthetic and
technical challenge. I suggest two possible approaches. The first involves editing or
creating an encoder for the MP3 standard which, after the analysis stage, stores the
information that is to be eliminated by the compression process in a separate file. The
second approach, which I have implemented here, is to encode the MP3 file and then take
a time-aligned distance measure between the original file and the compressed file. At
each point where the two files are within a given distance threshold, one can implement
an auditory mask by subtracting the compressed file content from the uncompressed, thus
deleting shared material. What remains is a file containing only the information in the
original sound file that did not make it through the compression process. This will have
captured what I call the “ghost” of compression, the negative space behind MP3
encoding.
In implementing this technique, I compared the time–frequency matrices of the
two sound files. Where they fell within a given distance of each other, I zeroed out the
original audio file. Adjusting the threshold distance produced varied results. I could
alternatively have implemented a finer method of control than zeroing. For example,
I could set the amplitude of a given component to the distance measured between the two
files at that particular point. This would be a more accurate means of harvesting the “lost”
material by capturing less extra noise in the masking process.
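The masking procedure just described can be sketched with a plain STFT in numpy. My implementation used the Bregman toolkit; this reconstruction keeps only the logic (compare time-frequency magnitudes, zero the original's bins wherever the two files agree, resynthesize by overlap-add), and the window, hop, and threshold values are illustrative.

```python
import numpy as np

def ghost_of_compression(orig, comp, n_fft=1024, hop=512, threshold=1.0):
    """Keep only what MP3 compression discarded from `orig`.

    `orig` and `comp` are equal-length mono arrays: the original audio and
    the decoded MP3. Wherever their spectrogram magnitudes lie within
    `threshold` of each other, the original's bin is zeroed as "shared"
    material; what survives is the negative space of the codec.
    """
    win = np.hanning(n_fft)

    def stft(x):
        frames = [x[i:i + n_fft] * win
                  for i in range(0, len(x) - n_fft + 1, hop)]
        return np.fft.rfft(np.array(frames), axis=1)

    S_orig, S_comp = stft(orig), stft(comp)
    dist = np.abs(np.abs(S_orig) - np.abs(S_comp))      # per-bin distance
    S_ghost = np.where(dist < threshold, 0.0, S_orig)   # delete shared bins

    # Overlap-add resynthesis, normalized by the summed squared window
    out = np.zeros(len(orig))
    norm = np.zeros(len(orig))
    for k, frame in enumerate(np.fft.irfft(S_ghost, n=n_fft, axis=1)):
        out[k * hop:k * hop + n_fft] += frame * win
        norm[k * hop:k * hop + n_fft] += win ** 2
    norm[norm == 0] = 1.0
    return out / norm
```

Raising the threshold erases more of the original; the distance-scaled amplitude suggested above would replace the hard `np.where` with a continuous weight.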
Initial Experiments
As preliminary steps towards composing in this manner, I did a series of tests.
First, I generated several sound files and subjected them to varying levels of MP3
compression, the results of which I will detail below. Second, I made initial attempts at
utilizing the negative space of MP3 compression. Finally, I organized these materials into
a short study.
[Spectrograms: original white noise soundfile; MP3-compressed white noise]
I began with a short sample of white noise with a low pass filter. At first the
noise is allowed to bypass the filter, then the filter is turned on with a fairly low cutoff
frequency and gradually rolled up until the full spectrum is allowed to pass through
again. This sound file was saved in an uncompressed format at 44100 Hz and then
copied. The copy was then rendered as an extremely low bit–rate MP3 file, encoded at
8 kbps with a sampling rate of 11,025 Hz. The compression artifacts are clearly audible,
particularly in the range between 1 and 2.5 kHz.
Especially notable is the frequency content between 2 and 2.5 kHz in the MP3 file
which, upon listening, sounds like short pitched “bleeping” sounds. I was surprised that a
file containing only noise developed clearly pitched “notes” upon compression. The next
test involved a sound file I generated using a series of networked chaotic oscillators
created in the Pure Data programming language. This sound file contained a great deal of
noise but also has clearly pitched content. It was copied in a similar fashion to the
previous test and compressed into an 8 kbps LAME encoded MP3 file with a sample rate
of 11 kHz. The results of compression are again clearly audible.
[Spectrograms: chaos file uncompressed; chaos file MP3-compressed]
With this file the effects of compression are even more noticeable. All but the
lowest frequency noise content is attenuated. Pitched information seems to have survived
relatively better, but noise above 500 Hz has been mostly eliminated in the MP3
compression process. This suggests the potential to contrast frequencies above and below
500 Hz and to work with noise in different registers as compositional material.
Next, I recorded a synthesized bass drum at dozens of fundamental frequencies
from 20 Hz to 800 Hz. I time–stretched this recording, acquiring digital sampling
artifacts in the process. This transformed the drum synthesis from a clearly percussive
sound, to one with an elongated, vocal quality. I saved the sound in both WAV format and
as an MP3, encoded at 8 kbps with a constant bit rate in joint stereo. Both files had an 11,025 Hz
sample rate. Spectrograms of the two files are below. The MP3 file has significant
frequency artifacts in the upper registers that are absent from the original sound file. These
sometimes create a descending glissando effect, visible in spectrograms though
only faintly audible. These unexpected artifacts made for a complex soundfile and took
on the quality of a choir singing in a cathedral when passed through a reverb unit.
Finally, I crafted a preliminary study using the negative space of MP3
compression as material. I took a sequence of microtonal harmonies, programmed using
Pure Data, and saved them in both uncompressed and MP3 file formats. Then, working in
Python with Michael Casey's Bregman Python Music and Audio Toolkit, I took a distance
measure between each point in the time frequency spectrogram of both files, storing these
values in a matrix of the same dimensionality as the two sound files. Having done this,
where the distance between the two files fell below a certain threshold (i.e., where the
two files were most similar) I zeroed the corresponding matrix position in the original
uncompressed file. This essentially erased all of the information from the uncompressed
file that had been successfully encoded in the MP3, leaving behind only the information
that had been deemed unnecessary by the perceptual model in the compression codec.
[Spectrograms: ascending kick drum frequency sweep; the same rendered as MP3; the MP3 version with reverberation]
[Spectrograms: chords uncompressed; chords MP3; chords “ghost” file]
The result was a sound file presenting information that is lost when compressing
to MP3. Taking what I have learned from the previous experiments and literature about
MP3 compression, I aim to compose a piece elucidating this material. As a preliminary
step in this direction, I have made an initial study using the chordal material mentioned
above. It begins with the chord progression as synthesized in Pure Data, uncompressed.
Then over the course of the sound file, a crossfade between the uncompressed and the
MP3 compressed version occurs. The progression reaches its penultimate chord and, after
a brief silence, a final harmony emerges rendered as the “ghost” of its own MP3.
MP3 “Ghost” Harmony Study
Future Directions
Composing with MP3 files in the manner suggested above is an attempt to find
interesting material to work with. It is also meant to utilize and draw attention to the
ubiquitous MP3 file format. In keeping with the theme of this thesis, the compression
artifacts provide externally generated material with which I can work as a composer. The
MP3 artifacts are difficult to predict and thus add a degree of chance to the process. There
is, at the same time, more room for intentionality in the development of material here
than in the previous two chapters. The MP3 format requires a source soundfile. Choosing
or creating this file requires agency on the part of the composer that was missing in, for example,
Heard. This project also draws attention to the current state of music consumption in our
digital society. By making MP3 artifacts explicitly a part of the music, I hope to bring this
codec into question as a universal standard. I am not against MP3, but I believe there are
and will be better alternatives. I would be happy if the music industry could move sooner
rather than later to a standard sound file format of higher fidelity. I am also probing the format
for its aesthetic possibilities with this project, inspired by musics built around previous
technologies—“tape music”, for example.
The initial experiments in this chapter suggest several possibilities for future
work. Progress can be made in both technical and aesthetic directions. In the technical
domain, it would be interesting to create an MP3 or other lossy compression encoder that
saves the sounds from the “negative space” automatically, as a second sound file, during
the encoding process. This would require work that is currently beyond my coding
experience, and so collaboration with a more experienced programmer would be
necessary. A second direction would be to explore different methods of implementing an
auditory mask. As previously mentioned, rather than zeroing out all components, one can
use a sliding scale based on the distance measure to determine these values. Aesthetically,
encoding material with different mask settings and distance metrics can provide
additional dimensions along which to compose. I hope to pursue these directions shortly.
VI. Conclusion
This thesis has elucidated the themes and processes at play in my recent work. In
all three projects, I have tried to interact with novel elements somewhat beyond my
control. In doing so, I have provided myself with the opportunity and impetus to create
something that had previously been beyond my view of what might be possible. A
recurring theme has been the transformation of data and the noise that this data takes on
as a result of this process. This remains an interesting concept to me and one which I
hope to continue working with.
When I began, I felt that there was a conflict between working systematically with
found materials, data, or algorithms and working intuitively or improvisationally. I have
begun to find strategies for bringing these modes of working into harmony. I have
concluded that intuition and improvisation are always a part of artistic creation; it is just
the point in the creation cycle at which they occur that varies.
In the projects in this thesis, I felt most comfortable with a creative process that
generates some material first, then enters into a loop, pivoting between listening and
editing this material until arriving at a satisfactory result. Moving forward, I would like to
continue working in this manner for some projects but would like to try a different cycle
for other pieces. In particular, I am interested in working with algorithms or strict
processes and letting them generate results, as in the automatic transcription of Heard,
and then reviewing the output. Now, instead of editing the output directly, I would like to
edit the original algorithm or process more extensively, run it again, and judge the output
anew. Thus, the entire creation cycle will be contained in one large loop. Until now, I
have not worked in this manner and I am excited to try it in the days and months to come.
Working as a composer with technology has opened many possibilities. The
potential to integrate data with the compositional process is greater than ever before and
is always increasing. Sophisticated composition assistance algorithms, for example, are
now entirely possible. This digital information ecology is the new ecosystem for
composers, citizens, and artists. How individuals react to this increased access to data is
of vital importance. Through the transcription of a digital audio file, through the
audification of data, and through a close study of a particularly important file format, I
have engaged with this ecosystem. Other strategies are possible such as interactive,
networked, and collaborative artworks. There should always be room as well for people
to reject these systems or to create works that subvert them.
My work here has engaged with particular technology and information available
at this historical juncture. I have leaned towards works which create fixed or static
compositions as final products, as opposed to interactive or generative works. This was
done in part to draw on my previous compositional experience and training and also
because the resultant objects and process of creation were interesting to me. Moving
ahead, I would like to experiment with more open and generative pieces. I have been
doing this already, but that work is beyond the scope of this current thesis.
It seems to me that, although society is entering an increasingly networked era,
there is still value in fixed, individually composed music. Egalitarian ideals suggest to me
that overthrowing this hierarchical tradition would be beneficial, doing away with
top–down compositional models in favor of collaborative modes of music creation. While I
am a strong advocate of enacting more democratic systems in politics, I am not opposed
to the work of art meticulously created and directed by one individual. I believe that this
provides the opportunity for a deep expression of personal imagination that is still of
value to our society. Digital music studios provide an interesting venue in which the individual
composer can create such works. The situation is complex, however, and, in addition to
the old models, new modes of creation and collaboration are developing every day.
Looking back, it is easier to see the connections between these compositional
projects than it was to see them at the time of their genesis. Inspiration often seems to
take fickle leaps, and yet, upon close examination, common themes emerge. I cannot say
which ideas will prove most fruitful moving forward, but I hope that I have suggested
more possibilities than I have shut down.
“I can't understand why people are frightened of new ideas. I'm frightened of the old
ones.”
— John Cage (Kostelanetz 2003, 211)
Appendix
References
Abe, Kazunao, Maria Belen Saez de Ibarra, Benjamin Weil, and Ryoji Ikeda. Ryoji Ikeda:
Datamatics. Charta, 2012.
Alpern, Adam. “Techniques for Algorithmic Composition of Music.” On the Web:
http://hamp.hampshire.edu/~adaF92/algocomp/algocomp95.html (1995).
Ballet, Jérôme, and Roland Guillon. Regards Croisés Sur Le Capital Social. Editions
L’Harmattan, 2003.
Bartók, Béla, and Albert Bates Lord. Yugoslav Folk Music: Serbo-Croatian Folk Songs
and Instrumental Pieces from the Milman Parry Collection. 7. SUNY Press, 1978.
Ben-Tal, Oded, and Jonathan Berger. “Creative Aspects of Sonification.” Leonardo 37,
no. 3 (2004): 229–233.
Benjamin, Walter. The Task of the Translator. Vol. 79. Illuminations, 1968.
———. The Work of Art in the Age of Mechanical Reproduction. Penguin, 2008.
Botteldooren, Dick, Bert De Coensel, and Tom De Muer. “The Temporal Structure of
Urban Soundscapes.” Journal of Sound and Vibration 292, no. 1 (2006): 105–123.
Brandenburg, Karlheinz. “MP3 and AAC Explained.” In Audio Engineering Society
Conference: 17th International Conference: High-Quality Audio Coding, 1999.
Brandenburg, Karlheinz, and Gerhard Stoll. “ISO/MPEG-1 Audio: A Generic Standard
for Coding of High-quality Digital Audio.” Journal of the Audio Engineering
Society 42, no. 10 (1994): 780–792.
Buchloh, Benjamin HD. “Conceptual Art 1962–1969: From the Aesthetic of
Administration to the Critique of Institutions.” October (1990): 105–143.
Cage, John. Silence: Lectures and Writings. Wesleyan, 2011.
———. “The Future of Music: Credo.” Audio Culture: Readings in Modern Music
(1937): 25–28.
Cascone, Kim. “The Aesthetics of Failure: ‘Post-digital’ Tendencies in Contemporary
Computer Music.” Computer Music Journal 24, no. 4 (2000): 12–18.
Childs, Edward P., and Dartmouth College. “Musical Sonification Design,” 2003.
Cogan, Robert. New Images of Musical Sound. Cambridge, MA: Harvard University
Press, 1984.
De Coensel, Bert, and Dick Botteldooren. “The Quiet Rural Soundscape and How to
Characterize It.” Acta Acustica United with Acustica 92, no. 6 (2006): 887–897.
De Coensel, Bert, Dick Botteldooren, and Tom De Muer. “1/f Noise in Rural and Urban
Soundscapes.” Acta Acustica United with Acustica 89, no. 2 (2003): 287–295.
———. “Classification of Soundscapes Based on Their Dynamics.” In Proceedings of
the 8th International Congress on Noise as a Public Health Problem (ICBEN),
Rotterdam, The Netherlands, 2003.
Demers, Joanna. Listening Through the Noise: The Aesthetics of Experimental Electronic
Music. Oxford University Press, USA, 2010.
Dunn, David. “Ahoy!,” August 24, 2012.
———. “Purposeful Listening in Complex States of Time.” Site of Sound: Of
Architecture & the Ear (1999): 77–87.
———. Why Do Whales and Children Sing?: A Guide to Listening in Nature. Earth Ear,
1999.
Eno, Brian. “Ambient Music.” Audio Culture. Readings in Modern Music (2004): 94–97.
Eno, Brian, and Robert Wyatt. Ambient 1: Music for Airports. Editions eg, 1978.
Escot, Pozzi. The Poetics of Simple Mathematics in Music. Publication Contact
International, 1999.
Evens, Aden. Sound Ideas: Music, Machines and Experiences. Minneapolis: University
of Minnesota Press, 2005.
Gelatt, Roland. The Fabulous Phonograph, 1877–1977. London: Cassell, 1977.
Grisey, Gérard. Partiels: Pour 18 Musiciens: Partitura. Ricordi, 1976.
Grisey, Gérard, and Joshua Fineberg. “Did You Say Spectral?” Contemporary Music
Review 19, no. 3 (2000): 1–3.
Hermann, Thomas, Andy Hunt, and John G. Neuhoff. The Sonification Handbook. Logos
Verlag, 2011.
Howat, Roy. Debussy in Proportion: a Musical Analysis. Cambridge University Press,
1986.
Ikeda, Ryoji. Ryoji Ikeda: Dataphonics. Pap/Com. Dis Voir, 2010.
Klapuri, Anssi, and Manuel Davy. Signal Processing Methods for Music Transcription.
Springer, 2006.
Kostelanetz, Richard. Conversing with Cage. Routledge, 2003.
Kramer, Gregory. Auditory Display: Sonification, Audification, and Auditory Interfaces.
Reading, MA: Addison-Wesley, 1994.
Lendvai, Ernő. Béla Bartók: An Analysis of His Music. London: Kahn & Averill, 1971.
LeWitt, Sol. “Paragraphs on Conceptual Art.” Artforum 5, no. 10 (1967): 79–83.
———. "Sentences on Conceptual Art" (1968). Art-Language, 1969.
LeWitt, Sol, Nicholas Baume, Jonathan Flatley, and Pamela M. Lee. Sol LeWitt:
Incomplete Open Cubes. Wadsworth Atheneum Museum of Art, 2001.
LeWitt, Sol, and Dwan Gallery. Serial Project #1, 1966. Dwan Gallery, 1967.
Lucier, Alvin. I Am Sitting in a Room. Lovely Music, 1981.
Maguire, Ryan. "Creating Musical Structure from the Temporal Dynamics of Soundscapes."
In 2012 11th International Conference on Information Science, Signal Processing
and Their Applications (ISSPA), 1432–1433, 2012.
Manovich, Lev. The Language of New Media. The MIT Press, 2001.
Marclay, Christian, Kim Gordon, Jennifer A. González, Matthew Higgs, and Blaise
Cendrars. Christian Marclay. Phaidon, 2005.
Moritz, William. Optical Poetry: The Life and Work of Oskar Fischinger. John Libbey,
2004.
“MP3.” Wikipedia, the Free Encyclopedia, April 13, 2013.
Oliveros, Pauline. Deep Listening: A Composer’s Sound Practice. iUniverse, 2005.
———. On Sonic Meditation. Vol. 27. Center for Music Experiment and Related
Research, University of California at San Diego, 1973.
———. Software for People: Collected Writings 1963–80. Printed Editions, 1984.
Oswald, John. “Plunderphonics, or Audio Piracy as a Compositional Prerogative.” In
Wired Society Electro-Acoustic Conference, 1985.
Painter, Ted, and Andreas Spanias. “Perceptual Coding of Digital Audio.” Proceedings of
the IEEE 88, no. 4 (2000): 451–515.
Polansky, Larry. "Manifestation and Sonification." 2002. Accessed April 16, 2013.
http://eamusic.dartmouth.edu/~larry/sonification.html.
Pras, Amandine, Rachel Zimmerman, Daniel Levitin, and Catherine Guastavino.
"Subjective Evaluation of MP3 Compression for Different Musical Genres." In
Audio Engineering Society Convention 127, 2009.
Rodgers, Tara. Pink Noises: Women on Electronic Music and Sound. Duke University
Press Books, 2010.
Ryan, David. "Composer in Interview: Helmut Lachenmann." Tempo (1999): 20–25.
Sangild, Torben. “The Beauty of Malfunction.” Bad Music: The Music We Love to Hate
(2013): 257.
Schaeffer, Pierre. "Acousmatics." Audio Culture: Readings in Modern Music (2004):
76–81.
Schafer, R. Murray. The Tuning of the World. New York: Knopf, 1977.
Schultz, Rob. “Melodic Contour and Nonretrogradable Structure in the Birdsong of
Olivier Messiaen.” Music Theory Spectrum 30, no. 1 (2008): 89–137.
Simanowski, Roberto. Digital Art and Meaning: Reading Kinetic Poetry, Text Machines,
Mapping Art, and Interactive Installations. Vol. 35. U of Minnesota Press, 2011.
Sterne, Jonathan. MP3: The Meaning of a Format. Duke University Press Books, 2012.
———. "The MP3 as Cultural Artifact." New Media & Society 8, no. 5 (2006): 825–842.
Truax, Barry. Acoustic Communication. Ablex Publishing, 2001.
———. “Handbook of Acoustic Ecology, (CD-ROM Version).” Computer Music
Journal 25 (2001): 93–94.
———. “Soundscape, Acoustic Communication and Environmental Sound
Composition.” Contemporary Music Review 15, no. 1–2 (1996): 49–65.
Vergine, Lea. When Trash Becomes Art: Trash, Rubbish, Mongo. Skira-Berenice, 2007.
Voss, Richard F., and John Clarke. "'1/f Noise' in Music: Music from 1/f Noise." The
Journal of the Acoustical Society of America 63 (1978): 258.
Westerkamp, Hildegard. “Linking Soundscape Composition and Acoustic Ecology.”
Organised Sound 7, no. 1 (2002): 51–56.
Whitelaw, Mitchell. "Hearing Pure Data: Aesthetics and Ideals of Data-sound." In
Unsorted: Thoughts on the Information Arts: An A to Z for Sonic Acts X, edited
by A. Altena, 2004.
Xenakis, Iannis. Arts-sciences, Alloys: The Thesis Defense of Iannis Xenakis Before
Olivier Messiaen, Michel Ragon, Olivier Revault D’Allonnes, Michel Serres, and
Bernard Teyssèdre. Pendragon Press, 1985.
———. Formalized Music: Thought and Mathematics in Composition. Pendragon Press,
1992.