An Accelerometer and Gyroscope Based Sensor System for Dance Performance
Technical Report: UL-CSIS-07-2
Giuseppe Torre1, Mikael Fernstrom1, Margaret Cahill2
1Interaction Design Centre/2Centre for Computational Musicology and Computer Music, Department of Computer Science and Information Systems,
University of Limerick, Ireland
[email protected], [email protected]
Abstract
This project explores the use of wearable wireless sensors for the generation and manipulation of music from dance performances. The design of a prototype wireless sensor that includes dual-axis accelerometers and three single-axis gyroscopes is discussed. The input of sensor data to a Pd (Pure Data) patch is explored, and possible mapping strategies for a system of sensors are discussed.
1. Introduction
This project investigates the use of a new wireless sensor system for interactive dance performances. Such a system would allow the dancer to control the generation or manipulation of their own musical accompaniment and could be useful in a variety of ways, including the choreography stage as well as the performance itself. The implementation consists of two main components: a series of wireless body sensors worn by the dancer and a Pd object for the real-time manipulation of the data received from these sensors. The development process consisted of four main steps:
• a study of the properties of the device used
• the creation of an interface between the device and a host computer
• the creation of a new object to input the data into Pd
• the tracking and manipulation of the sensor data
The chosen sensor is an array of accelerometers and gyroscopes which are used to
track the dancer’s movements.
1.1 Sensors
Sensors are electronic components whose purpose is to transform different types of physical energy into data. It is common to distinguish between wearable and non-wearable sensor types, depending on whether or not they are worn by a performer.
Camera input devices are among the most commonly used non-wearable devices. The proliferation of low-cost webcams and free software such as EyesWeb (Camurri, 2004; Volpe, 2003) means that such technology is readily available and easily accessible.
Tracking movements using cameras can be problematic, however, due to their sensitivity to changes in light and the heavy computational burden they place on the host computer.
Among the reasons for choosing a sensor-based approach for this project are
the relatively low cost, ease of tracking multiple performers, accuracy of movement
tracking, and user-centered control. The issue of interference due to changing light
conditions is also avoided. Wireless sensors in particular have been chosen because of
their non-intrusive nature.
1.2 Wearable Wireless Sensors
A number of different wearable wireless sensor systems have been developed for similar uses. A survey of the literature on these systems follows, and the suitability of these approaches for our wireless sensor system is discussed.
1.2.1 DIEM – Digital Dance System
The DIEM Digital Dance System (DIEM, 1999) consists of 14 bending
sensors that communicate with a transmitter (Dancer Unit) worn by a dancer on a belt
(see Figure 1). The sensors can track angles of rotation between 0 and 129 degrees, and the Dancer Unit transmits this information to a receiver using radio
frequencies (RF). As with other similar systems such as Troika Ranch (Troika Ranch, 2006) and ShapeWrap (ShapeWrap II, 2004), there are a number of obvious disadvantages to this system. Firstly, the system is not fully wireless.
Figure 1: DIEM Digital Dance System
The only wireless component in the system is the radio worn in a beltpack by the
dancer; the sensors need to be tethered across the body and connected via a wire to the
radio. This can make the system cumbersome for the performer. Although the system has already been used successfully in many musical performances (Siegel, 1999), it does not provide a comprehensive tracking mechanism for the full range of human movements: it is limited to the analysis of movements involving bending limbs, for example the fingers, the neck or the knees. Another disadvantage relates to the radio frequencies (RF) used by the radio to transmit to the base station. The frequency used by this system is around 433.92 MHz. This can be problematic, as use of frequencies in this range usually requires some form of authorization from communications authorities or may be prohibited altogether.
1.2.2 Expressive Footwear
Developed at the MIT Media Laboratory by Joseph Paradiso and his team
between 1997 and 2000 (Paradiso, 1997, 1999, and 2000), this sensor system consists
of a pair of shoes each of which comes with 16 different sensors that communicate
with a base station (receiver). Through an interesting engineering design innovation,
the sensors are able to detect a range of movements of the foot, including twisting,
pressure on the left or right of the toe area, the distance from the floor, and rotation
speed. Among the technologies used are a gyroscope, an accelerometer, some
piezoelectric foil, and a piezoceramic sonar receiver. The sonar receiver uses a
frequency of 5 Hz to acoustically locate the direction and determine the distance of
objects. Each of these components, along with a strip of copper mesh on the base of
the shoe, transmits data to the base station.
Figure 2: The sensors used in the Expressive Footwear shoe.
The base station consists of a PIC16C73 microcontroller that receives serial input from an RF receiver communicating with the shoe. It then forwards the messages to the host computer via RS-232. The Radiometrix RX Series receiver uses a fixed frequency, so once again this can be a prohibitive factor.
Figure 3: Expressive Footwear
In addition, each shoe-sensor must have its own base station which can prove costly.
The developers are working on new approaches, including higher-bandwidth channel-sharing options such as Code Division Multiple Access (CDMA) or Time Division Multiple Access (TDMA).
1.2.3 PAIR and WISEAR
This system is based on the development and combination of two different
types of hardware for audio and video generation from dance performance. It is a
recent product developed at the University of Virginia by D. Topper and P. Swendsen
(Topper and Swendsen, 2005). WISEAR is a Linux-based Embedded x86 TS-5600 Single Board Computer with an onboard processor (see Figure 4). PAIR is the
wireless sensor system worn by the dancer. It is designed to track the movements of
two dancers and uses a digital compass to track distance and relative orientation
between the two dancers. A Force Sensing Resistor (FSR) is used to detect pressure
on the hand and a bending sensor and accelerometer also retrieve data based on the
position of the hand. Mapping is executed with two different software products: Max/MSP for audio processing and Isadora for video, both of which run on OS X.
Figure 4: The WISEAR TS-5600 SBC and processor
Transmission Control Protocol (TCP) is used for simultaneous communication between the audio and video engines, as each WISEAR box has its own IP address. WISEAR samples at 12-bit resolution.
This system seems to work quite accurately, although its design focuses only on a particular and limited range of movements. This problem becomes more acute if the system is used with only one dancer.
1.2.4 ECO
Eco is an ultra-compact and low-power wireless sensor node developed at the
University of California (Park and Pai, 2006). The entire system consists of three
parts: Eco wireless sensors, a wireless data aggregator and a wireless interface board.
The most important characteristic of the system is the body sensor network.
Originally designed for health monitoring, it is now used with some hardware
modifications for dance performance.
Eco is currently the smallest wireless sensor available (12 x 12 x 4.5 mm); it
can track both the movements and the physical activities of the dancer such as his/her
heartbeat. It consists of a wireless transmitter, an accelerometer, a temperature sensor,
a light-sensing unit (ATLS) and an Image Sensing and Gyroscope (ISG) module. Each of these sensors communicates with a wireless data aggregator worn around the dancer's waist.
Figure 5: The Eco lamp, ultrasonic sensor, relay array and MIDI I/O terminal
The sensors and data aggregator together form a network which uses a 2.4 GHz
Industrial Scientific Medical band radio (ISM) and a TDMA-based MAC protocol
with a maximum data rate of 250 kbps. A second network, comprising the data aggregator and a computer, uses the 802.11 wireless standard.
The big advantage of this system is that it can be used for performances
involving either single or multiple performers. For the latter, it is sufficient to assign a different channel to each data aggregator used. The platform has been used
and tested in live performance using Max/MSP and Jitter with good results.
Despite these advantages some drawbacks are still evident with this system.
Firstly, the use of ISM band radio requires permission from the relevant authorities.
Also, the sensors and the data aggregator use 40 mAh and 700 mAh Li-Polymer batteries respectively, providing a lifetime of just one hour.
1.2.5 Other Wireless Sensors
Along with the Expressive Footwear project, two other related projects at the
MIT Media Lab are of interest. The first, by Feldmeier (2003), experiments with the design of a low-cost wireless sensor to track global activity among a large group of dancers. The architecture is fairly simple: a 3-Volt lithium
battery, a dual monostable-multivibrator, a vibration sensor, and a 300 MHz
transmitter similar to the remote control transmitters used to open and close garage
doors. The platform, while effective, is limited in the range of movements it can track.
Figure 6: A low-cost sensor used by Feldmeier (2003).
More recently, another wireless sensor for Interactive Dance has been
proposed by Paradiso (2006). Again, the primary intention of this system is the
creation of an interactive environment for more than one dancer. The design of this
sensor includes a 6-axis inertial movement unit (IMU) with an orthogonal gyroscope
(ADXRS300), an accelerometer (ADXL203), and a capacitive sensor to detect the
proximity of the sensors. An onboard processor digitizes voltage at 12-bit resolution.
Sensors communicate with the base station through an nRF2401A data radio, which transmits at a data rate of 1 Mbps. One of the biggest advantages of the nRF2401A is that it does not need permission from the relevant authorities to transmit signals. In addition, the high bit rate makes possible the simultaneous use of up to 25 nodes with an RF range of about 15 meters. The protocol used is a TDMA scheme. The sensor node is shown in Figures 7 and 8.
Figure 7: Sensor node on wrist Figure 8: Sensor with battery
This research approach and system design looks promising, but the dimensions may be a weak point (4 x 4 x 2 cm and 45 g), especially when compared to very small lightweight sensors such as those used in Eco.
2. Understanding Sensors
For the purpose of physical interaction design, a sensor is a transducer that converts a form of energy into an electrical signal. (sensorwiki.org, 2006)
Almost all sensors can be classified into two groups, which differ according to their particular electronic mechanism:
- Resistive sensors
- Voltage-producing sensors
Changing the status of the sensor in some way results in a corresponding change of resistance or voltage. Every sensor has default minimum and maximum values, but these differ depending on the sensor type and model. It is often necessary to convert the range of output values to an appropriate range for the intended use.
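This range conversion amounts to a simple linear mapping. The following sketch is our own illustration (the function name `scale` is not part of the project's code):

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max]."""
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

# e.g. rescale a 12-bit ADC reading (0-4095) onto the MIDI range 0-127
midi_value = scale(2048, 0, 4095, 0, 127)
```

The same helper can map a sensor's native output range onto whatever range a synthesis parameter expects.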
2.1 A more detailed classification
Given that a sensor is a converter of energy, it may be useful to further subdivide the different sensor types according to the energy that they are converting. Table 1 below summarises the most common (highlighted items are fundamental components in the Mote device; see Chapter 4).
2.2 Sensor Interfaces
The output from these types of sensors is not directly readable by a computer. For this reason, an important component inside a sensor system is its microprocessor. The first step of the interfacing process is to use the microprocessor to convert an analogue signal (a voltage) into digital format. An important characteristic of the microprocessor is the number of bits used by its A/D (analogue-to-digital) converter, usually referred to as its resolution. This determines the number of distinct values that can be output from the sensor: for example, a 10-bit A/D converter is able to store and send values in the range between 0 and 1023, while a 7-bit converter can represent values between 0 and 127. This also explains why the DIEM Digital Dance System used a 7-bit microprocessor (MIDI data values also range between 0 and 127).
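The relationship between resolution and representable values is a single line of arithmetic; this small sketch (our own illustration, not project code) computes the code range of an n-bit converter:

```python
def adc_code_range(bits):
    """Return the (min, max) output codes of an n-bit A/D converter."""
    return 0, (1 << bits) - 1

print(adc_code_range(10))  # (0, 1023)
print(adc_code_range(7))   # (0, 127), matching the MIDI data-value range
```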
Table 1: Types of sensors
2.3 Communication Protocols
Once the sensor has been chosen, a means of communication with the host computer/device is needed. Several methods that facilitate this task are currently available. The most common communication protocols are listed below:
- RS232 serial
- USB CDC serial
- USB HID
- IEEE 1284 Parallel Port
As will be shown in the next chapter, the Mote sensor uses a combination of the RS232 serial and USB protocols.
2.4 The Hardware - 25mm Wireless Inertial Measurement System (WIMU)
The 25mm Wireless Inertial Measurement (WIMU) System is an array of
sensors built as part of a unique piece of hardware assembled at the Tyndall
National Institute in Cork (Ireland). An Inertial Measurement System is a system used to detect attitude, location and motion. These data are normally retrieved through an accelerometer, which measures acceleration, and a gyroscope, which measures the rate of rotation about the three axes used in aviation: pitch, roll and yaw. The 25mm in the name refers to the 25 x 25 mm footprint of the module, whose Atmel ATmega128 microcontroller manages the wireless communication.
Figure 9: An overview of a number of Motes
2.5 A closer examination of the Motes
The array of sensors used includes two accelerometers, three single-axis gyroscopes and two magnetometers. The sensors communicate with a 12-bit resolution A/D converter that uses a 5 Volt supply, with an offset of 2.5 Volts applied to input values. This means that the 12-bit ADC can send values between 0 and 4095, but that an acceleration of 0, for example, will be read as 2048.
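The offset arithmetic can be illustrated as follows. This is a sketch under the stated 12-bit/2048-offset assumptions, not the project's actual driver code:

```python
ADC_OFFSET = 2048  # corresponds to the 2.5 V offset on the 5 V supply

def signed_reading(raw_code):
    """Convert a raw 12-bit ADC code (0-4095) into a zero-centred value."""
    if not 0 <= raw_code <= 4095:
        raise ValueError("raw_code out of 12-bit range")
    return raw_code - ADC_OFFSET

print(signed_reading(2048))  # 0, e.g. zero acceleration at rest
```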
2.5.1 The Accelerometers
The accelerometers installed in the 25mm WIMU are two low-cost, dual-axis ADXL202 accelerometers from Analog Devices. The main task of the accelerometers is to retrieve acceleration values along the three-dimensional axes X, Y and Z. Because a single accelerometer is able to track movements along only two coordinates (dual-axis), a second accelerometer is mounted orthogonally to the first. The second accelerometer's Y axis is then read as the Z coordinate, while its X axis duplicates the X value of the first accelerometer and is therefore omitted in the interface code.
Figure 10: Accelerometers Displacement
At the moment, the 25mm WIMU is just a prototype, and the solution adopted here will eventually be replaced in the next generation with a single three-axis accelerometer, thereby reducing the total dimensions of the device. For now, the ADXL202 accelerometer has been chosen because its output is Pulse Width Modulated (PWM), which makes it easy to connect directly to the microprocessor; a three-axis accelerometer, by contrast, has an analogue output. Both accelerometers record variations in acceleration as particular voltage values. The resolution of the sensors is 600 mV per g (where g is the gravitational acceleration, 9.81 m/s2). The ADC has a step increment of 0.002 g which, when converted, is equal to 0.0196 m/s2. (See Appendix C for a more detailed specification.)
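As a worked example of these figures (0.002 g per ADC step, g = 9.81 m/s2), the following sketch converts a raw code into metres per second squared; the function is illustrative only:

```python
G = 9.81            # gravitational acceleration in m/s^2
G_PER_STEP = 0.002  # one ADC step corresponds to 0.002 g

def accel_m_per_s2(raw_code, offset=2048):
    """Convert a raw 12-bit ADC code into acceleration in m/s^2."""
    return (raw_code - offset) * G_PER_STEP * G

# one step above the offset gives 0.002 g = 0.01962 m/s^2,
# matching the 0.0196 m/s^2 step increment quoted in the text
step = accel_m_per_s2(2049)
```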
2.5.2 The Gyroscope
The gyroscope is used to track the rotations of the object around its own
axis. There are three possible rotations of an object in space, the names of which
are usually stated in aeronautical terms, i.e. pitch, yaw and roll. Pitch is the rotation around the lateral axis, roll is the rotation around the longitudinal axis and yaw is the rotation around the vertical axis.
Figure 11: Pitch, Roll and Yaw
The data retrieved here is particularly useful, as it can be combined with
the data coming from the accelerometer, so as to calculate the actual position of
the mote in three-dimensional space. The gyroscopes implemented in the 25mm WIMU are three single-axis ADXRS150 devices from Analog Devices. To retrieve the three possible rotational movements they are mounted on the mote platform as shown in Figure 12.
Figure 12: Orientation of the three single-axis gyroscopes
The unit of measurement here is degrees per second (°/s). A single ADC step records 0.27 °/s of turn. The datasheet specifies a total range of 150 °/s, but at Tyndall this has been extended to 406 °/s so that the sensor can track limb rotations, which exceed the factory pre-set range.
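The same conversion for the gyroscopes (0.27 °/s per ADC step, extended to a 406 °/s range) can be sketched as follows; again, an illustration rather than project code:

```python
DEG_S_PER_STEP = 0.27  # one ADC step corresponds to 0.27 degrees/second

def angular_velocity(raw_code, offset=2048):
    """Convert a raw 12-bit ADC code into angular velocity in degrees/s."""
    return (raw_code - offset) * DEG_S_PER_STEP

# the 406 deg/s full range spans about 1503 ADC steps
steps_for_full_range = int(406 / DEG_S_PER_STEP)
```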
2.5.3 The Magnetometers
The magnetometers are two dual-axis Honeywell HMC1052L devices. They are designed to track magnetic field strength along the three-dimensional axes x, y and z and are arranged in a similar way to the accelerometers.
2.6 The FPGA module
Figure 13: 25mm FPGA Schematic Block diagram
An FPGA (Field-Programmable Gate Array) is a semiconductor device which consists of multiple Programmable Logic (PL) components (see Figure 13). These components can be configured to implement logic functions such as AND, OR, XOR and NOT, or simple mathematical functions. Although this device is generally slower than alternatives such as an Application-Specific Integrated Circuit (ASIC), one of its main advantages is its "easy" re-programmability. As shown in the block diagram above, the FPGA has six inputs and three outputs. The inputs are a clock, two voltage regulators, a bidirectional I/O port and a link to the Electrically Erasable Programmable Read-Only Memory (EEPROM). The EEPROM, a small non-volatile memory used to store configuration data, represents the connection with the ATMega128L microcontroller. This connection is made through a JTAG (Joint Test Action Group) port, which is used for testing sub-blocks of integrated circuits and also for debugging the embedded system when necessary.
2.7 The Transceiver
The transceiver enables communication between the nodes and the base station. It consists of two main components: a microprocessor and an RF transmitter-receiver. The microprocessor is an ATMega128L, which converts and packs the data retrieved from the sensors in digital format. It has its own clock and voltage regulator and is connected to the FPGA module through a JTAG port. The transmitter is a Nordic VLSI nRF2401, which consists of an antenna, a voltage regulator and a crystal oscillator. The latter provides a precise clock signal and works to stabilize the frequency of the transmitter.
Figure 14 shows the schematic diagram for the transceiver.
Figure 14: Transceiver Schematic Block Diagram
2.8 Communication Protocol
The connection to a host computer is made through an RS232-USB converter (see Figure 15).
Figure 15: RS232-USB Converter
An early design provided a male RS232 connector for each single mote. The next generation of the 25mm WIMU system should remove the need for an RS232 serial port on each mote, making the sensors smaller and more wearable. A recent implementation has seen the 25mm WIMU mounted on top of a 4 V lithium battery. The motes' new look is shown in Figure 16.
Figure 16: The new generation Mote (left) and lithium battery with charger (right)
2.9 Communications Packet Structure
A brief description of the communications packet structure is given here. A full datasheet is included in Appendix C.
The data arrives at the base station as a packet of bytes. The packet length is 20 bytes: 18 bytes of data and 2 synchronisation/delimiting bytes. The delimiting bytes are Line Feed (0x0A) and Carriage Return (0x0D).
The 18 bytes of data are made up of nine two-byte words representing the ADC data. The first 4 most significant bits of each word denote the ADC channel, with the remaining 12 bits representing the voltage recorded by the ADC (0-4095).
(Tyndall, 2006)
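This layout can be illustrated with a short parser. The sketch below is our own: in particular, big-endian byte order within each two-byte word is an assumption not stated in the report.

```python
def parse_packet(packet):
    """Parse a 20-byte mote packet into {adc_channel: value} readings.

    The packet is 18 data bytes (nine 2-byte words, channel in the top
    4 bits, 12-bit value below) followed by the 0x0A/0x0D delimiter pair.
    Big-endian word order is assumed here for illustration.
    """
    if len(packet) != 20 or packet[18:] != b"\x0a\x0d":
        raise ValueError("malformed packet")
    readings = {}
    for i in range(0, 18, 2):
        word = (packet[i] << 8) | packet[i + 1]
        readings[word >> 12] = word & 0x0FFF  # channel -> 12-bit value
    return readings
```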
2.10 Device Programming
Interfacing the computer with the external hardware represents one of the most important programming processes. It is vital at this stage to ensure that the computer can read from the external device, i.e. the sensors. This part of the project was carried out by Stephen Shirley, and his description of the algorithm is provided here (the full code is included in Appendices A1, A2, A3 and A4).
“The data from the master node consists of a stream of packets, one per node
that the master is aware of. The processing of this stream is handled by the
cel_ser library. The packet format starts with a node number in ascii and the ’:’
character, and finishes with ’\n\r’. As the request for data from the applications
using cel_ser is not synchronous with the data stream, the library tries to be
intelligent about how it handles requests. Another consideration to be taken into
account is that not all nodes may be present at any given time, and we want to
avoid having to reconfigure the library every time the list of connected nodes
changes. Bearing all these in mind, the rest of this section describes the
algorithm used in cel_ser.
cel_ser is multi-threaded; the serialio thread handles reading from the serial
port and processing the data, the main thread handles the passing of data back to
the application that calls cel_ser_read(). Whenever the application calls
cel_ser_read(), it attempts to lock the new_data_mut mutex. This mutex is only
unlocked by the serialio thread whenever a full cycle has been completed,
ensuring that the application never gets old or partial data.
The serialio thread loops constantly, calling cel_ser_read_real(). This
function starts off by setting all the node data buffers to 0xFF, so that if a node
doesn’t send data, the application will get all -1’s for that node’s sensors. This
makes it easy to notice and deal with a node going offline from the application
level. If the library is out of sync with the datastream (happens on startup, or if
the stream became temporarily corrupted), it will begin searching the datastream
for a valid packet. This is done by reading an amount of data equal to twice the
packet length. That data is then searched for a sequence of bytes the same length
as a packet, that start with a byte between ’1’ and ’8’, followed by ’:’, and has
’\n\r’ as the last two bytes. If this is found, then that sequence is a valid packet.
Further checks could be done, as each of the sensor bytes have the sensor
number encoded into the top four bits, but experience showed that it was
overkill.
Once a valid packet has been found, it is processed (i.e. the node number
and various sensor readings are extracted) and the result stored in the appropriate
node data buffer. cel_ser_read_real() then starts reading in one packet length’s
worth of data and processing it, and continues doing so until it encounters an
invalid packet (in which case the stream is corrupted and a re-sync needs to be
attempted), data for all nodes has been collected, or a duplicate node number is
detected. In the latter case, any nodes that haven’t shown up in the data stream so
far are assumed to be missing/offline as the master node cycles through all the
nodes sequentially. When cel_ser_read_real() has reached one of those
terminating conditions, it unlocks the new_data_mut mutex allowing
cel_ser_read() to read the node data buffers and return that data to the calling
application.” (Shirley 2006).
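The resync search Shirley describes (scan a buffer for a run that starts with an ASCII node number and ':' and ends with '\n\r') might look roughly like the following outline. The constant and names here are illustrative, not the actual cel_ser C code:

```python
PACKET_LEN = 20  # assumed packet length, including the '\n\r' terminator

def find_sync(buf):
    """Return the offset of the first plausible packet in buf, or -1.

    A packet starts with a node number '1'-'8' in ASCII followed by ':',
    and has '\n\r' as its last two bytes.
    """
    for i in range(len(buf) - PACKET_LEN + 1):
        window = buf[i:i + PACKET_LEN]
        if (ord("1") <= window[0] <= ord("8")
                and window[1:2] == b":"
                and window[-2:] == b"\n\r"):
            return i
    return -1
```

Reading twice the packet length, as the library does, guarantees that at least one complete packet lies inside the search buffer whenever the stream is healthy.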
2.11 Pure Data (Pd)
Pure Data (Puckette, 2006) has been chosen for the real-time manipulation of the data retrieved from the motes. Pd is a graphical object-oriented programming language developed by Miller Puckette. There are two main reasons for choosing this language. Firstly, Pd is an Open Source program, which provides total freedom of access to its code structure. This characteristic makes it a powerful tool for software development and gives the programmer the freedom to modify it according to his/her own requirements. Another important factor in this choice is the large web community which supports the language and the consequent potential to consult with others on any problems which may arise.
2.11.1 “mote” Object
The name of the new graphical object for Pd is “mote” and it consists of
one inlet and three outlets (see Figure 17).
Figure 17: Mote Object
When the message “1” is received by the object, a function call is made to the
driver code. At this stage, the code packages and orders the numbers coming
from the mote base station. Six arrays of seven elements are passed back to the
mote code. The leftmost outlet then sends out this list. An “unpack” object is
used to view the contents of a single array in the examples shown below.
Figure 18: Inlet & Leftmost outlet
2.11.2 The middle outlet
The middle outlet is designed to “bang” each time a new array of numbers is sent out. This bang is then connected to the message box containing the number 1, which again activates the function call to the driver code. In this way, a cyclic loop is created. If left unchecked, such a loop can cause a stack overflow error and crash the machine (see Figure 19).
Figure 19: Erroneous connection
To avoid this scenario, a “pipe” object is connected after the “bang”, delaying the next “1” message by a pre-determined number of milliseconds. This delay is set to 20 ms, which is just below the smallest recorded latency value (see Figure 20).
Figure 20: Correct Loop
2.11.3 The right-hand outlet
The hardware and the software employed in this process both function within a discrete time domain, meaning that data arrives as discrete packets over time. The gap in time between two successive packets of incoming data is defined here as latency. The right-hand outlet sends out this value, which is useful for integrating the incoming acceleration values so as to retrieve the speed and location of the nodes in space (see Figure 21).
Figure 21: Outlet Latency
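Using the latency value as the integration step, velocity can be estimated from successive acceleration readings. A minimal rectangular-integration sketch, illustrative rather than the project's mapping code:

```python
def integrate_velocity(accelerations, latency_ms):
    """Accumulate acceleration samples (m/s^2) into a velocity (m/s).

    Each sample is assumed to hold for one latency interval.
    """
    dt = latency_ms / 1000.0
    velocity = 0.0
    for a in accelerations:
        velocity += a * dt
    return velocity

# three samples of 1 m/s^2 at 20 ms spacing give roughly 0.06 m/s
v = integrate_velocity([1.0, 1.0, 1.0], 20)
```

Integrating the resulting velocity once more, in the same fashion, yields a position estimate.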
2.11.4 The files
The code to compile the new object consists of four source code files and
two header files named as outlined below:
Source Files Header Files
cel_ser.c cel_ser.h
cel_ser_util.c cel_ser_util.h
mote.c
mote.def
cel_ser.c/.h and cel_ser_util.c/.h are files which allow the computer to read from
the base station and pack the data as a sequence of arrays. The mote.c file
contains all the attributes and characteristics of the new Pd object. mote.def
contains the definitions of the library file that will be subsequently exported.
3. SENSOR TRACKING
Kia Ng (Ng, 2002) divides the building of an interactive system into four fundamental steps:
- Input sensing and data acquisition
- Feature detection and tracking
- Mapping
- Output and simulation
The first step, which has been discussed previously, concerns the interfacing of the sensors with a host computer and the most suitable software. The next section discusses the building of a preliminary class of algorithms to impose structure on the numerical data. This is the step prior to the mapping process, in which the retrieved numbers are used to generate an appropriate output. Finally, it is the quality of the output and the simulation that defines the real character of each performance.
3.1 Feature Detection and Tracking
An awareness of the range of the data detectable by the external device is important: it gives a complete view of the numbers that can subsequently be manipulated during the mapping phase, of which they constitute the core.
3.1.1 Raw input data
The first numerical values to arrive in Pd are those recorded and emitted by the microprocessor which, being of 12-bit resolution, sends them within a range of 0 to 4095. These values represent the acceleration in ADC format. The “mote_input” Pd patch is the space in which these numbers are displayed, and it therefore represents the main graphical input to the system, as depicted in Figure 22.
Figure 22: “mote_input” patch
Six motes are available. Each of these arrives into the system as an array of seven
elements organised as follows:
Figure 23: Mote’s array
The first element of the array is the mote's identifier. Since there are six operating motes, numbers between one and six have been assigned to distinguish them from one another; this clarifies to which mote each array belongs. The following six elements can be divided into two groups of three. The first group carries data coming from the accelerometers, giving the acceleration recorded along the x, y and z axes respectively. The last three elements of the array display the pitch, roll and yaw values coming from the gyroscopes.
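The seven-element array can be modelled with a small record type. The field names below are our own labels for the elements just described, not identifiers from the project's code:

```python
from collections import namedtuple

# one mote's frame: identifier, three acceleration values, three rotations
MoteFrame = namedtuple("MoteFrame",
                       ["mote_id", "ax", "ay", "az", "pitch", "roll", "yaw"])

def frame_from_array(values):
    """Wrap a raw seven-element mote array in a named structure."""
    if len(values) != 7:
        raise ValueError("expected a seven-element mote array")
    return MoteFrame(*values)

frame = frame_from_array([1, 2048, 2048, 2048, 2048, 2048, 2048])
```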
3.2 Manipulation
The raw data arriving in Pd is not sufficient in itself to track the movements of a performer. It needs to be further manipulated to provide a more precise view of the motes' movement. At the moment the “mote_input” Pd patch reports just the simplest acceleration values in digital format along the six coordinates.
3.2.1 Range of obtainable data
Both the accelerometers and the gyroscopes have minimum and maximum output values, and the range of each sensor is defined between them. With the offset value set at 2048, numbers above this value indicate acceleration in the positive direction for the accelerometers and an anticlockwise angular velocity for the gyroscopes; the opposite applies for numbers below 2048. The range of obtainable values is normally well within the range of the 12-bit ADC. The accelerometers can register a minimum acceleration of 0.002 g, which also corresponds to a single ADC step. Given that the maximum retrievable acceleration is 2 g, the accelerometers' range is approximately +/- 1000 ADC steps.
Figure 24: Accelerometer Range
The minimum angular velocity for the gyroscope is 0.27 °/s, a figure which also corresponds to a single ADC step. The maximum value is 406 °/s which, when divided by the single ADC step, gives a range of about +/- 1503 ADC steps.
Figure 25: Gyroscope Range
3.2.2 Jittering
An initial examination of the mote patch in operation highlights a
number of issues. When the six motes are placed on a table, all in the same
position and with no perceptible movement, the incoming data is affected by
strong jitter. Although this phenomenon was anticipated, its behaviour was
unusual. In a steady position, the jitter was expected to be an almost
constant oscillation of values around a middle point. What was observed
instead was an irregularity in the frequency of the data output. Closer
observation revealed the system's initial problem: the object was
sporadically outputting duplicate copies of the arrays, because the function
‘cel_ser_read’ in the mote's code was polled at a rate faster than the
sensor's rate of data transmission. Modifying ‘cel_ser_read’ so that it only
passes back new strings solved this problem and made the jittering behaviour
more predictable.
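The de-duplication fix can be sketched as a small wrapper. This is illustrative Python, not the actual change to the mote's C code; `read_raw` stands in for the real ‘cel_ser_read’ function:

```python
# When the reader polls faster than the mote transmits, the same packet
# can be returned twice. This wrapper drops any packet identical to the
# previous one, passing back only genuinely new data.

def make_new_packet_reader(read_raw):
    last = [None]
    def read_new_packet():
        pkt = read_raw()
        if pkt == last[0]:
            return None   # stale copy of the previous packet: ignore it
        last[0] = pkt
        return pkt
    return read_new_packet
```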
A second issue concerned the centre point around which the jitter was
expected to operate. Tyndall chose a value of 2048 for the initial offset of
the sensor, so this value was expected to correspond to the centre of the
jitter. Observation of the patch made it clear, however, that this value
corresponds to the middle point of neither the accelerometer nor the
gyroscope. Further study was undertaken to investigate this. With all motes
in the same fixed position, the problems can be summarized as follows:
- no jittering around 2048
- dissimilar values from one mote to another
It is appropriate at this point to distinguish between the accelerometers
and the gyroscopes. Accelerometers are sensitive to gravity, whereas this
issue does not affect the gyroscopes. The gravitational constant
(9.81 m/s²) influences an accelerometer even in a steady position, depending
on its inclination and orientation. The 25mm WIMU is hand-made, so even the
smallest misalignment of a sensor can affect the data produced, which may
partly explain the erroneous offset displayed. Even after taking the
inclination into account, however, the offset remains too far from the
expected point. Experimentation has shown that for some motes an inclination
of 45° on the x-axis aligns the jitter with the proper offset value (2048).
This explanation alone does not account for the exact nature of the
retrieved data, so further analysis was necessary.
It was initially observed that the offset value changed significantly
with the power level of the battery (the sensors are ratiometric). When the
battery voltage was low, differences of up to 200 points were detected. To
rule out ratiometric effects, the experiment was repeated using a regulated
power supply. Unlike the accelerometers, the gyroscopes in all six motes
reported similar offset errors, around the 1890–1970 ADC values.
3.2.3 Solution Adopted
Despite numerous experiments, the offset was never found at the exact
correct point. It was decided to continue with the data manipulation process
regardless, by considering a relative rather than an absolute starting
point. This means that the calculated middle point of the initial jitter
zone is treated as the “correct” offset.
3.2.4 Setting the offset of the gyroscope
It was decided to calculate the gyroscopes' offset first, mainly
because the gyroscopes had demonstrated greater stability. Two Pd patches
were designed for this purpose. Figure 26 shows the first patch, named
“angle_calcul_is2”.
Figure 26: Angle_calcul_is2
Preliminary experiments demonstrated that the jitter zone does not have
a fixed middle point. The following steps describe an experiment carried
out using the above patch.
Condition: mote placed in a steady position on a table / mode: running
Patch 1: the average of the first 100 numeric values in the data
stream is calculated and the result is set as the offset.
Patch 2: numbers below the offset are given a negative sign and
numbers above it a positive sign; an “accumulator” patch then adds
and subtracts these values.
Result expected: if the system is balanced, the accumulator patch
should output values oscillating around ‘0’.
Result obtained: after a few seconds of the expected jitter around
‘0’, the accumulator patch began outputting steadily increasing
values, moving further and further from the ‘0’ point.
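The two stages of this experiment can be sketched as follows. This is illustrative Python rather than the Pd patch, and the function names are ours:

```python
# Stage 1: take the mean of the first 100 samples as the offset.
# Stage 2: accumulate each subsequent sample with its sign relative to
# that offset; a balanced system should hover around zero.

def calibrate_offset(samples, n=100):
    head = samples[:n]
    return sum(head) / len(head)

def accumulate(samples, offset):
    total = 0.0
    for s in samples:
        total += s - offset   # negative below the offset, positive above
    return total
```

If the jitter drifts away from the calibrated offset, as observed in the experiment, `accumulate` grows without bound instead of oscillating around zero.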
This result highlights the irregularity of the jitter around different
middle points. When the average point is recalculated continuously (rather
than over the first hundred values only), the jitter does appear to
oscillate around ‘0’; at the same time, however, this makes it impossible to
process the data while the mote is in motion. It was therefore necessary to
widen the offset from a single fixed number to a range of numbers (see the
‘setboundaries’ patch, Figure 27). The boundaries are calculated using the
“serialize” and “minmax” Pd objects: “serialize” packs the first 100 numbers
into a list and sends it to “minmax”, which outputs the minimum and maximum
numbers in the received list.
Figure 27: “setboundaries” Patch
A cross-test between the two patches demonstrated that the error
accumulated by ‘setboundaries’ is smaller than that accumulated by
‘angle_calcul_is2’, so this is the preferred method. The solution adopted
always yields a constant error, equal to the difference between the two
calculated boundaries (max − min).
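The boundary idea can be sketched as a dead band. This is illustrative Python, not the Pd objects; `deadband` is our name for the behaviour:

```python
# The min and max of the first 100 samples define a jitter band; values
# inside the band are treated as zero movement, and values outside it
# are measured from the nearest boundary.

def set_boundaries(samples, n=100):
    head = samples[:n]
    return min(head), max(head)

def deadband(value, lo, hi):
    if lo <= value <= hi:
        return 0                                # inside the jitter zone
    return value - hi if value > hi else value - lo
```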
3.2.5 Setting the offset of the accelerometer
Accelerometers are sensitive to gravitational force, and this factor
has to be carefully considered in any calculations. The acceleration due to
gravity on Earth is:
9.81 m/s² = 1 g (at sea level)
The output of each accelerometer axis is affected by this force according to
its orientation. For slopes between 0° and ±90°, the gravitational component
can vary between −1 g and +1 g (Figure 28).
Figure 28: Gravity and orientation
This figure demonstrates how the acceleration is a function of the tilt angle.
Generally it can be stated that:
Gn = G * cos (θ)
where G is the gravity constant and θ represents the tilt angle.
When in a steady position, the motes are affected by this force. Taking
0.002 g as a single accelerometer ADC step, the range over which the sensor
is influenced by gravity can be calculated as follows:
±1 g ÷ 0.002 g per step = ±500 ADC steps
The following graph shows the ADC steps as a function of the tilt angle.
Figure 29: ADC steps versus Degrees
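The Gn = G·cos(θ) relation can be expressed directly in ADC steps, as in this illustrative Python sketch (the 500-step figure follows from 1 g ÷ 0.002 g per step):

```python
import math

# At 0 deg tilt the gravity component occupies the full +/-500 ADC steps,
# and it falls off with the cosine of the tilt angle.

STEPS_PER_G = 500   # 1 g / 0.002 g per ADC step

def gravity_steps(tilt_deg):
    """Gravity component along the sensing axis, in ADC steps."""
    return STEPS_PER_G * math.cos(math.radians(tilt_deg))
```

At 0° the axis sees the full 500 steps of gravity; at 90° it sees none.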
The accelerometer limit given in ADC steps is relative to an
orientation of 0°; data retrieved for movement along the direction of
gravity must be corrected accordingly. In the Tyndall Datasheet, a single
ADC step is equal to 39.85 mm/s. This value has been calculated from the
acceleration over the sampling time, meaning that an acceleration of 1000
steps corresponds to movement at a speed of 39.85 m/s. The speeds achievable
by a dancer fall well within this range.
3.2.6 Setting Accelerometer Boundaries
Like the gyroscopes, the accelerometers also need an initial offset
value. The original offset was given as 2048 (as reported in the Tyndall
Datasheet). If a mote's accelerometer sent data above or below 2048, the
difference was interpreted as arising from the initial inclination of the
sensor, and the angle had to be calculated with this taken into account. In
practice this method was problematic because, for example, rotating the
x-axis by up to 90° (pitch) drove the ADC value beyond the +500 steps
allowed. A further example clarifies this. Suppose the starting value is
2200 ADC, due to an imperceptible inclination of the mote; the remaining
headroom should then be 348 ADC steps:
2200 − 2048 = 152 (ADC steps due to gravitational acceleration), then
500 − 152 = 348
When the mote was rotated, however, the ADC values increased up to 2700.
This shows that the correct offset point to take into account is not 2048
but 2200 (2700 − 2200 = 500). To determine the initial tilt of the mote, it
was therefore decided to average the first hundred incoming ADC values.
Figure 30: Averaging process
It is then possible to calculate the initial angles that define the actual
orientation of the mote:
Xv = actual ADC data − calculated average point
With an initial orientation of around 0° for X and Y (which implies the use
of sine rather than cosine), the initial inclination γ of the mote can be
retrieved using:
Xv = G * sin (γ)
then: sin (γ) = Xv / G
and
γ = sin⁻¹ (Xv / G)
Figure 31 shows the “initial_tilt_acc” patch, which computes the two
elements γ and Xv that define the initial status of the mote.
Figure 31: “initial_tilt_acc” Patch
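The arcsine relation γ = sin⁻¹(Xv/G) can be sketched as follows. This is illustrative Python, not the “initial_tilt_acc” patch; G is expressed in ADC steps (±500 for ±1 g), and the clamping of the ratio to [−1, 1] is our addition to guard against jitter pushing Xv slightly past the gravity limit:

```python
import math

STEPS_PER_G = 500   # gravity expressed in ADC steps (1 g / 0.002 g per step)

def initial_tilt_deg(adc_value, average_offset):
    """Initial inclination in degrees, from the resting ADC reading."""
    xv = adc_value - average_offset            # gravity component, ADC steps
    ratio = max(-1.0, min(1.0, xv / STEPS_PER_G))
    return math.degrees(math.asin(ratio))
```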
3.2.7 Final Definition of the Initial Position
A further step is now required to properly initialize the position of
the mote: the γ angle has to be sent as a starting value for the gyroscope
(see the “accumulgyro” patch in Figure 32). The function of this patch is to
create a “history” of the rotation by tracking the current orientation
against the initial position γ.
Figure 32: “accumulgyro” Patch
Using data from the x-axis, the initial “pitch” inclination can now be
calculated, and using data from the y-axis the “roll” inclination likewise.
The “yaw” rotation is set to ‘0’ at this point.
3.2.8 ADC angular speed (Gyroscope)
A gyroscope measures angular speed. The SI (International System of
Units) unit is radians per second. The ADXRS150 gyroscopes have a minimum
resolution of 0.27°/s, which also represents a single ADC step in the
microprocessor. The following formula converts ADC steps into angular speed:
ADC steps × 0.27°/s = angular speed (°/s)
The “angle_calcul” patch (Figure 33) shows the process whereby the angular
speed is calculated:
Figure 33 : “Angle_calcul” Patch
This patch also integrates the speed values to calculate the angular
distance travelled. The angular speed is combined with the time value
0.0155 s, which represents half of the lowest latency retrievable.
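The integration step can be sketched as follows. This is illustrative Python, not the “angle_calcul” patch: a conventional rectangular integration multiplies each angular-speed sample by the sample interval, and the 0.0155 s figure is the one quoted above:

```python
# Integrating gyroscope readings into an angle: each signed ADC deviation
# from the offset is converted to deg/s and accumulated over the sample
# interval, starting from the initial tilt.

GYRO_DPS_PER_STEP = 0.27    # one ADC step = 0.27 deg/s
SAMPLE_INTERVAL_S = 0.0155  # half of the lowest latency recorded

def integrate_angle(adc_deviations, start_angle_deg=0.0):
    angle = start_angle_deg
    for step in adc_deviations:        # signed ADC steps from the offset
        angle += step * GYRO_DPS_PER_STEP * SAMPLE_INTERVAL_S
    return angle
```

Starting the accumulation from the accelerometer-derived tilt γ gives the “history” of rotation described in section 3.2.7.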
3.2.9 ADC Instant Acceleration (Accelerometer)
The SI unit for acceleration is metres per second squared (m/s²). To
convert ADC values to m/s², the following formula is used:
ADC steps × 0.002 g × 9.81 m/s² = instantaneous acceleration (m/s²)
where 0.002 g is a single ADC step and 9.81 m/s² is the conversion factor
equivalent to 1 g.
Figure 34: instant_accel.pd
It is then necessary to multiply the ADC values by 0.0196133.
3.2.10 ADC Instant Speed (Accelerometer)
The incoming acceleration value is multiplied by time to calculate the
corresponding speed. This system gives only the instantaneous speed of the
mote. In the following patch two different time values were considered: the
latency time of the hardware, and 0.0155 s, which represents half of the
lowest latency value recorded (on the basis that if the minimum number of
motes needed for a functional system is two, the latency of each single mote
is half of the retrieved value).
Figure 35: instant_speed.pd
On first examination the patch seems to work well. It is nevertheless
important to test it further, i.e. to extend the patch with an averaging
object so that the system can retrieve the change in speed over a
determined period of time.
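The two conversions of sections 3.2.9 and 3.2.10 can be sketched together. This is illustrative Python, not the Pd patches; note that the report's multiplier 0.0196133 corresponds to 0.002 × 9.80665 (standard gravity), slightly more precise than the rounded 9.81 quoted in the formula:

```python
# ADC steps -> instantaneous acceleration (m/s^2) and speed (m/s).

ACCEL_MS2_PER_STEP = 0.002 * 9.80665   # = 0.0196133 m/s^2 per ADC step

def instant_accel(adc_steps):
    """Instantaneous acceleration in m/s^2 from signed ADC steps."""
    return adc_steps * ACCEL_MS2_PER_STEP

def instant_speed(adc_steps, dt=0.0155):
    """Instantaneous speed in m/s over one sample interval dt (seconds)."""
    return instant_accel(adc_steps) * dt
```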
3.2.11 Accelerometer Issues
The research undertaken so far has attempted to integrate the
acceleration values to retrieve the speed and, subsequently, the distance.
However, the numerous tests performed did not produce a successful result.
One of the main problems relates to the initial calibration of the
accelerometers. The averaging process proved a successful method for
calculating the initial tilt angles with the mote in an apparently steady
position. Once the mote began moving, however, the averaging process had to
be stopped so that the incoming values due to real acceleration would not be
affected. At this stage the integrated values increase continuously, even
after the mote stops moving. This happens because the initial orientation
never equals the final orientation, so values are still read against an
“incorrect” offset point. Resetting the offset each time the mote stops
moving might solve this problem, but that is only possible if a fixed offset
point is known. If, for instance, 2048 were always the offset, the
acceleration from the accelerometer could be compared with the gravitational
acceleration derived from the angle recorded by the gyroscope, giving a
result of approximately 0 and thus implying no movement. Since the offset is
continuously changing, it is not possible to know whether the mote is moving
without examining it closely, and the accumulated error grows rapidly,
producing inaccurate results.
The intention is to achieve a calibrated system which, once set, runs
with only a small cumulative error. A problem relating to gravity still
remains: if the gyroscope retrieves imprecise data, it becomes difficult to
subtract the correct gravitational acceleration from the accelerometer
values.
3.3 Data Retrieved So Far and Range The following data has been retrieved so far:
Gyroscope × 3 axes
- ADC steps: ±1000
- Angular speed (°/s): ±270
- Degrees (°): ±360
Accelerometer × 3 axes
- ADC steps: ±1300
- Acceleration (m/s²): ±25
- Gravitational acceleration: ±500 ADC steps or ±9.8 m/s²
The global patch containing the complete series of sub-patches used to solve the
mathematical issues is called “MAIN_CALCULATION” and is shown below:
Figure 36: MAIN_CALCULATION.pd
A further patch called “DISPLAY.pd” shows all of the data retrieved so far
in a single window:
Figure 37: DISPLAY.pd
The tool also has a main patch which allows the user to start and stop the reading
process from the 25mmWIMU. This patch has been named CTRL_WINDOW.pd
Figure 38: CTRL_WINDOW.pd
4. Possible Mapping Strategies In the previous chapter we demonstrated the retrieval of data from the
sensors in use. Once this data is available, a further series of patches
needs to be built to interpret the performer's movements and to choose a
method of mapping these movements to output processes.
4.1 Preliminary Considerations Four main types of data have been retrieved at this stage:
- Acceleration
- Speed
- Distance
- ADC values
With the exception of the acceleration values, each of these types of data
comes from both the accelerometers and the gyroscopes. Depending on the
intended use of each type of data, some scaling of values may be necessary.
If, for example, the speed values are used to control the overall volume of
a synthetic instrument, the incoming values need to be scaled to fall into
the range between 0 and 1, i.e. the amplitude range in Pure Data.
Another important issue must be taken into account at this point. The
data arrives in Pd as a continuous stream of jittering values. If the scaled
speed is assigned to the amplitude through a simple line object, the ramp
generator is continuously interrupted, thereby clipping the signal. The main
problem is that a simple line object driven directly by the stream receives
non-contiguous pairs of numbers, for example the sequence:
( 3 – 4 ) / ( 8 – 3 ) / ( 6 – 10 ) etc.
In this sequence a gap is created in the ramp between the second number of
each pair and the first number of the next. Referring to the example above,
the proper sequence of pairs should be:
(3 – 4) / (4 – 8) / (8 – 3) / (3 – 6) / (6 – 10) etc.
The Pd patch shown below avoids the gap between each pair, making the
movement of the ramp continuous.
Figure 38: cont_ramp.pd
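The re-pairing idea can be sketched as follows. This is illustrative Python, not the Pd patch: each new target is paired with the previous target so that consecutive ramp segments share endpoints:

```python
# Turn a stream of target values into contiguous (start, end) ramp pairs,
# so the ramp never jumps between segments.

def make_ramp_pairs(targets):
    pairs = []
    prev = None
    for t in targets:
        if prev is not None:
            pairs.append((prev, t))   # each segment starts where the last ended
        prev = t
    return pairs
```

Applied to the stream 3, 4, 8, 3, 6, 10 this yields exactly the corrected sequence of pairs shown above.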
The mapping process can involve many approaches beyond the simple
one-to-one correlation, also known as “direct mapping”. In the Seine Hohle
Form project report (Rovan et al. 2001) three main mapping approaches are
outlined:
- one to one: “direct mapping”
- one to many: “divergent mapping”
- many to one: “convergent mapping”
A “divergent mapping” approach uses a single data source to create a
range of different outputs. The scaled speed values, ranging between 0 and 1
(floating-point values), could for example also be assigned to produce
micro-variations in the partials of a ring modulation synthesis. In this
way, the same values are used for two different tasks (amplitude scaling and
changing the values of the partials).
A “convergent mapping” process is the opposite of the divergent one:
more than one data input is used to manipulate a single output parameter.
For example, the ratio between the speed recorded by the accelerometer and
the speed recorded by the gyroscope could be used to set the phase of two
melodic lines. If the ratio is 1, a synchronous melodic line is created; if
it is below 1, a slightly phased effect results.
4.2 The Vitruvian Man A further mapping idea, based on Leonardo da Vinci's “Vitruvian Man”,
is also being explored. The idea is to divide the space around the human figure
Figure 39: Leonardo's Vitruvian Man
according to the proportions of the performer's body, creating a virtual 3-D
musical instrument around the performer. The system of sensors can then be
used to create an improvisatory dance performance in which the dancer
interacts with or controls music in real time. A potential mapping strategy
is discussed in the next sections.
4.2.1 Placement of the Motes
Six motes are used, placed around the body of the performer as follows:
Mote_left_leg: ankle of the left leg
Mote_right_leg: ankle of the right leg
Mote_baricenter: middle of the chest
Mote_left_arm: wrist of the left arm
Mote_right_arm: wrist of the right arm
Mote_head: top of the head
Figure 40: Mote Placement
4.2.2 Initial Settings
A preliminary step in the mapping process involves the creation of a
virtual spherical space around the body of the individual performer. To set
the initial system attributes, some of the principles described in Leonardo
da Vinci's design notes for his Vitruvian Man are taken into account. In his
notes, da Vinci uses the height to deduce all the other proportions of the
human body. He places the genitals at the middle of the circumscribing
square, representing the middle point of the human figure. The circle is
drawn using the distance between the navel and the feet as its radius. The
navel location is found by halving the distance between the feet and the
arms held outstretched above the head.
Figure 41: Sphere
4.2.3 Subdivision of the sphere
The most important step in the initialization of the virtual
environment involves dividing the sphere into as many portions or zones as
required. The number, shape and dimensions of the zones can vary according
to the needs of the composer/programmer and the dancer. Generally, a point
in three-dimensional space is defined by its three coordinates (x, y, z). On
a sphere these coordinates can be calculated according to the following
formulas:
X = r cos(α) cos(β)
Y = r sin(α) cos(β)
Z = r sin(β)
with −π < α < π and −π/2 < β < π/2
Having defined the fundamental coordinates of the spaces by dividing the
sphere, it is then possible to know which zone a particular mote is located
in. A matrix is then implemented in which the current position of the mote
is compared with the respective zone boundaries. This division of the sphere
defines a series of active zones which are sensitive to the presence of the
mote.
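The zone test can be sketched as follows. This is an illustrative Python sketch: the coordinate conversion uses the formulas above, while the simple octant numbering stands in for the composer-defined zone matrix, which the report does not specify:

```python
import math

def sphere_to_cartesian(r, alpha, beta):
    """Convert radius and angles (radians) to Cartesian coordinates."""
    x = r * math.cos(alpha) * math.cos(beta)
    y = r * math.sin(alpha) * math.cos(beta)
    z = r * math.sin(beta)
    return x, y, z

def octant_zone(x, y, z):
    """Illustrative zone scheme: number the eight octants 0-7 from the
    signs of the coordinates (0 = all positive)."""
    return (x < 0) * 1 + (y < 0) * 2 + (z < 0) * 4
```

A finer subdivision (e.g. by angular sectors) would replace `octant_zone` while keeping the same position-to-zone lookup structure.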
The movement of the performer can then be mapped to control or generate
elements of audio and/or video.
4.3 FUTURE DEVELOPMENT AND IMPROVEMENT This system is still in development, and a number of problematic
areas and potential solutions have been identified:
• Smaller dimension
The sensor is still not small enough to be worn comfortably by a
performer. One solution is to replace the two current dual-axis
accelerometers with a single three-axis accelerometer. Further
reductions in the size of the hardware are being explored at the
Tyndall Institute, Cork.
• Protecting the sensors
The sensors need a durable casing to withstand the rigorous movement
of a performer.
• Hiding the sensors
It is often desirable to hide all the technological components during a live
performance. The Fashion School of Limerick is currently working on
this particular issue.
• Testing the system
The system needs to be tested in many different environments and
across a range of scenarios, particularly live performance. This can
only be done once the mapping process is complete. Such tests could
focus on issues such as:
• Software performance
• Hardware performance
• Performer feedback
• Artistic tasks explored and implemented
Appendix A
The Tyndall Datasheet for the 25mm Wireless Inertial Measurement System (WIMU)
General Description
The 25mmWIMU is an array of sensors combined with a 12-bit ADC. The
WIMU utilises the communications functionality of the 25mm Atmel ATMEGA128.
The WIMU sensor array is made up of three single axis gyroscopes (ADXRS150
Analog Devices), two dual axis accelerometers (ADXL202 Analog Devices) and two
dual axis magnetometers (HMC1052L Honeywell). The ADC in the design is the
Analog Devices part, AD7490.
Sensor Specification
The gyroscopes have a default measurement range of 150°/s, but this can
be increased up to 600°/s if required. The accelerometers have a measurement
range of ±2 g, and the magnetometers are specified with a measurement range
of ±6 gauss.
In terms of ADC steps:
The ADC is powered by a 5 V supply and has a resolution of 12 bits,
which gives a voltage step of 1.22 mV. The ADC inputs are offset around
2.5 V, which means a zero input reads as 2048. The maximum positive
voltage it can read is then 2.5 V.
The gyroscopes' range has been modified to 406°/s with a resolution
of 4.5 mV/°/s. A single ADC step increment corresponds to a 0.27°/s
rate of turn. A rate of turn of 406°/s produces a 1.84 V output, which
is well within the 2.5 V limit.
The accelerometers' resolution has been recorded as 600 mV/g. A single
ADC step increment corresponds to a 2 mg acceleration (19.6 mm/s²). The
maximum acceleration registrable corresponds to a 1.2 V signal, which
is well within the 2.5 V ADC limit.
The magnetometer resolution has been registered as 385 mV/gauss. A
single ADC step increment corresponds to 317 mG (31.7 µT). The maximum
magnetic field the sensor can register is 6 gauss, which corresponds to
a voltage of 2.31 V, well within the ADC maximum limit.
Sensor          Resolution      Min           Max
Gyroscope       4.5 mV/°/s      0.27 °/s      406 °/s
Accelerometer   600 mV/g        0.002 g       2 g
Magnetometer    385 mV/gauss    0.317 gauss   6 gauss
Communications Packet Structure
The data arrives at the base station as a packet of bytes. The packet
length is 20 bytes: 18 bytes of data plus 2 synchronisation/delimiting
bytes. The delimiting bytes are Carriage Return (0x0D) and Line Feed (0x0A).
The 18 bytes of data comprise 9 two-byte values representing the ADC
data. The 4 most significant bits of each two-byte value denote the ADC
channel, while the remaining 12 bits represent the voltage recorded by the
ADC (0–4095).
REFERENCES
Analog Devices (2006) ADXRS150 Angular Rate Sensor [online],
available: http://www.analog.com/en/prod/0,2877,ADXRS150,00.html [accessed 30
June 2006]
Analog Devices (2006) ADXL202 - ±2 g Dual Axis Accelerometer [online],
available: http://www.analog.com/en/prod/0,2877,ADXL202,00.html [accessed 30
June 2006]
Aylward, R., Paradiso, J.A., ‘Sensemble: A Wireless, Compact, Multi-User Sensor
System for Interactive Dance’, Proceedings of the 2006 International Conference on
New Interfaces for Musical Expression (NIME06)[online], available:
http://www.informatik.uni-trier.de/~ley/db/conf/nime2006.html#AylwardP06
[accessed 30 Jul 2006].
Camurri, A., C. Krumhansl, L., Mazzarino, B., Volpe, G., An Exploratory Study of
Anticipating Human Movement in Dance, in Proc. 2nd International Symposium on
Measurement, Analysis and Modeling of Human Functions, Genova, Italy, June 2004.
available: http://musart.dist.unige.it/Publications.html [accessed 30 Jul 2006].
Choi, I., Zheng, G., Chen, K., ‘Embedding a sensory data retrieval system in a
movement-sensitive space and a surround sound system’,
Chulsung, P., Pai, H.C. and Sun, Y., ‘A wearable wireless sensor for Interactive
Dance Performance’, in Proc. Fourth Annual IEEE International Conference on
Pervasive Computing and Communication [online], available:
http://www.ece.eci.edu/~chou/pubblications.html [accessed 31 Jul 2006].
DIEM (1999), DIEM Digital Dance System [online], available:
http://hjem.get2net.dk/diem/products.html [accessed 12 August 2006]
DIEM (1999), Wayne Siegel - Movement Study I/II [online], available:
http://hjem.get2net.dk/diem/notes-sister.html [accessed 12 August 2006]
DIEM (1999), Wayne Siegel - Sister [online], available:
http://hjem.get2net.dk/diem/notes-mvst.html [accessed 12 August 2006]
Dimitrov, S., Serafin, S., ‘A simple practical approach to a wireless data acquisition
board’, Proceedings of the 2006 International Conference on New Interfaces for
Musical Expression (NIME06)[online], available: http://www.informatik.uni-
trier.de/~ley/db/conf/nime2006.html#AylwardP06 [accessed 28 Jul 2006].
Dobrian, C., Bevilacqua, F., ‘Gestural Control of Music Using the Vicon 8 Motion
Capture System’, Proceedings of the 2003 International Conference on New Interfaces
for Musical Expression (NIME03)[online], available:
http://hct.ece.ubc.ca/nime/2003/onlineproceedings/Papers/NIME03_Dobrian.pdf ,
[accessed 4 July 2006].
Dubost, G., Tanaka, A., ‘A Wireless, Network-based Biosensor Interface for Music’ ,
Mats Nordhal, ed., ‘Voice of Nature: International Computer Music Conference
2002’, Goteborg, 16-21 Sept, 2002, Sweden, ICMC 2002, 92-95.
Elert, G. (2006), The Physics Hypertextbook [online],
http://hypertextbook.com/physics/mechanics/acceleration/ [accessed 10 July 2006]
Erman di Rienzo (2006) Matematica e Storia [online], available:
http://www.matematicamente.it/storia/divina_proporzione.htm [accessed 9 June
2006]
Feldmeier, M., Paradiso, J. A., ‘Giveaway wireless sensors for large group interaction’,
Conference on Human Factors in Computer System (CHI ’04)[online], available:
http://portal.acm.org/citation.cfm?id=986046&dl=ACM&coll=&CFID=15151515&C
FTOKEN=6184618 , [accessed 30 June 2006].
Feldmeier, M., Paradiso, J. A. and Malinowsky, M., ‘Large Group Musical Interaction
using Disposable Wireless Motion Sensor’, Mats Nordhal, ed., ‘Voice of Nature:
International Computer Music Conference 2002’, Goteborg, 16-21 Sept, 2002,
Sweden, ICMC 2002, 83-87.
Ghione, F. (unknown), Lezioni di Geometria [online], available:
http://www.mat.uniroma2.it/~ghione/Testi/Geo1/Uni5/Uni5.html [accessed 31 July
2006]
Jensenius, A. R., Kvifte, T., Godoy, R. I., ‘Towards a Gesture Description
Interchange Format’, Proceedings of the 2006 International Conference on New
Interfaces for Musical Expression (NIME06)[online], available:
http://www.hf.uio.no/imv/forskningsprosjekter/musicalgestures/publications/pdf/jense
nius-nime2006.pdf , [accessed 5 July 2006].
Ng, K., ‘Interactive Gesture Music Performance Interface’, Proceedings of the 2002
International Conference on New Interfaces for Musical Expression
(NIME02)[online], available: http://hct.ece.ubc.ca/nime/2002/papers/ng.pdf ,
[accessed 1 July 2006].
Lunar (1998), Thinking About Accelerometers and Gravity [online], available:
http://www.lunar.org/docs/LUNARclips/v5/v5n1/Accelerometers.html [accessed 12
August 2006]
McGill's IDMI Laboratory (2006), SensorWiki [online], available:
http://www.sensorwiki.org/index.php/Main_Page [accessed 30 August 2006]
Orio, N., Schnell, N., Wanderley, M. M., ‘Input Devices for Musical Expression:
Borrowing Tools from HCI’, Proceedings of the 2001 International Conference on
New Interfaces for Musical Expression (NIME01)[online], available:
http://www.music.mcgill.ca/musictech/idmil/papers/nime01.pdf , [accessed 3 July
2006].
Palindrome (2006), Palindrome Inter-media Performance [online], available:
http://www.palindrome.de/index.html?/video.htm [accessed 31 August 2006]
Paradiso, J., Wearable Wireless Sensing for Interactive Media [online], available:
http://www.media.mit.edu/resenv/pubs/papers/2004-04-JoeP-BSN-Abstract.pdf ,
[accessed 22 June 2006]
Paradiso, J. and Hu, E., "Expressive Footwear for Computer-Augmented Dance
Performance," in Proc. of the First International Symposium on Wearable Computers,
Cambridge, MA, IEEE Computer Society Press, Oct. 13-14, 1997, pp. 165-166.
Paradiso, J.A., Hsiao, K., Benbasat, A.Y., Teegarden, Z. (2000) ‘Design and
implementation of expressive footwear’, IBM System Journal, vol. 39, 511 – 529.
Paradiso, J. A., Hu, E., Hsiao, K-Y., ‘The CyberShoe: A Wireless Multisensor
Interface for a Dancer’s Feet’, Proceedings of International Dance and Technology
99., Tempe, AZ, FullHouse Publishing, Colombus. OH (2000), pp. 57-60 [online]
available: http://www.media.mit.edu/resenv/pubs/papers/98_11_IDAT99_Shoe.pdf ,
[accessed 29 June 2006].
Rovan, B. J., Wechsler, R., Weiss, F., ‘Seine hohle Form, a project report’, available
online: http://www.palindrome.de/4-paper-2w.htm , [accessed 29 June 2006].
Rubino, A. (2000) Leonardo e la Geometria Segreta e Sacra [online], available:
http://freeweb.supereva.com/flobert/geometria_sacra.htm?p [accessed 13 July 2006]
Saggini, V., ‘Mapping Human Gesture into Electronic Media’, Teleura, ed. October
16, 2002 [online], available: http://www.thereminvox.com/article/articleprint/24/=1/3/
[accessed 11 August 2006].
Measurand Shape Advantage (2004), ShapeWrap II [online], available:
www.measurand.com [accessed 3 July 2006]
Shirley, S. (2006), Algorithm Description For Celeritas Project, unpublished.
Texas Instruments, Accelerometers and How they Work [online], available:
http://www2.usfirst.org/2005comp/Manuals/Acceler1.pdf [accessed 2 August 2006]
Topper, D., Swendsen, P.V., ‘Wireless Dance Control: PAIR and WISEAR’,
Proceedings of the 2005 International Conference on New Interfaces for Musical
Expression (NIME05)[online], available: http://www.nime.org/2005/proceedings.html
[accessed 20 Jul 2006].
Troika Ranch (2006), Troika Ranch Digital Dance Theater [online], available:
http://www.troikaranch.org [accessed 20 June 2006]
Ulyate, R., Bianciardi, D., ‘The Interactive Dance Club: Avoiding Chaos in A Multi
Participant Environment’, Proceedings of the 2001 International Conference on New
Interfaces for Musical Expression (NIME01)[online], available:
http://hct.ece.ubc.ca/nime/2001/papers/ulyate.pdf , [accessed 3 August 2006].
Volpe, G., Computational Models of Expressive Gesture in Multimedia Systems,
Ph.D. Dissertation, Faculty of Engineering, University of Genova, April 2003,
available: http://musart.dist.unige.it/Publications.html , [accessed 1 August 2006].
Wanderley, M. M., ‘Gestural Control of Music’, IRCAM (France) [online],
available: http://www.ircam.fr/equipes/analyze-
synthese/wanderle/Gestes/Externe/kassel.pdf , [accessed 20 June 2006].
Wanderley, M. M., Birnbaum, D., Malloch, J., Sinyor, E., Boissinot, J.,
‘SensorWiki.org: A Collaborative Resource for Researchers and Interface Designers’,
Proceedings of the 2006 International Conference on New Interfaces for Musical
Expression (NIME06)[online], available:
http://www.music.mcgill.ca/musictech/idmil/ , [accessed 4 July 2006].
Wikipedia Foundation Inc. (2001) Wikipedia [online], available:
http://en.wikipedia.org/wiki/Main_Page [accessed 31 August 2006]
Winkler, T., ‘Creating Interactive Dance with the Very Nervous System’, Proceedings
of 1997 Connecticut College Symposium on Arts and Technology [online], available:
http://www.brown.edu/Departments/Music/sites/winkler//papers/Interactive_Dance_1
997.pd.pdf , [accessed 23 August 2006].
Winkler, T. (1998), Composing Interactive Music: Techniques and Ideas Using Max,
Cambridge, MA: The MIT Press.
Winkler, T., ‘Participation and Response in Movement-Sensing Installations’,
Proceedings of the 2000 International Computer Music Conference