

ACOUSTIC FINGERPRINTING SYSTEM

Submitted in partial fulfilment of the requirements for the award of the degree of

Bachelor of Technology

in

Computer Engineering

Under the Guidance of
Mr. Mumtaz Ahmed
Asst. Professor
Dept. of Computer Engineering

Submitted by
Ainuddin Faizan 08CSS04
Devansh Kumar Gupta 08CSS18
Yash Dhingra 08CSS73

Department of Computer Engineering

Faculty of Engineering and Technology

Jamia Millia Islamia

New Delhi (2008-2012)


ACKNOWLEDGEMENT

We are greatly indebted to our supervisor and guide Mr. Mumtaz Ahmed, Department of Computer Engineering, for his invaluable technical guidance, great innovative ideas and overwhelming moral support during the course of the project. We are also thankful to the Department of Computer Engineering and all the faculty members, especially Prof. M. N. Doja, Prof. M. M. Sufyan Beg, Dr. Bashir Alam, Mr. Tanvir Ahmad, Mr. Md. Amjad, Mr. Musheer Ahmad, Mr. Jawarhar Lal, Mr. Faiyaz Ahmad, Mr. Zeeshan Ansari and Mr. Sarfaraz Masood for their teaching, guidance and encouragement. We are also thankful to our classmates and friends for their valuable suggestions and active support.

Ainuddin Faizan
Devansh Kumar Gupta
Yash Dhingra

Department of Computer Engineering, Faculty of Engineering and Technology, Jamia Millia Islamia, New Delhi


ABSTRACT

Imagine the following situation. You’re in your car, listening to the radio and

suddenly you hear a song that catches your attention. It’s the best new song

you have heard for a long time, but you missed the announcement and don’t

recognize the artist. Still, you would like to know more about this music.

What should you do? You could call the radio station, but that’s too

cumbersome. Wouldn’t it be nice if you could record a small clip of a few

seconds with your phone, push a few buttons and a few seconds later the

phone would respond with the name of the artist and the title of the music

you’re listening to? In this project we present an audio fingerprinting system,

which makes the above scenario possible. By using the fingerprint of an

unknown audio clip as a query on a fingerprint database, which contains the

fingerprints of a large library of songs, the audio clip can be identified. At the

core of the presented system is a highly robust fingerprint extraction method

which is robust against considerable noise and compression of various kinds.


CONTENTS

1. Introduction

2. Acoustic Fingerprinting Concepts

3. Acoustic Fingerprint System Parameters

4. Applications

5. Guiding Principles

6. Fingerprint Generation Algorithm

7. Search Algorithm

8. Our Improvisation

9. Implementation

10. Screenshots

11. Future Scope

12. References


INTRODUCTION


Fingerprint systems are over one hundred years old. In 1893 Sir Francis

Galton was the first to prove that no two fingerprints of human beings were

alike. Conceptually a fingerprint can be seen as a human summary or

signature that is unique for every human being. It is important to note that a

human fingerprint differs from a textual summary in that it does not allow

the reconstruction of other aspects of the original. For example, a human

fingerprint does not convey any information about the colour of the person’s

hair or eyes.

Recent years have seen a growing scientific and industrial interest in

computing fingerprints of multimedia objects.

The prime objective of multimedia fingerprinting is to provide an efficient

mechanism to establish the perceptual equality of two multimedia objects: not by

comparing the (typically large) objects themselves, but by comparing the

associated fingerprints (small by design). In most systems using fingerprinting

technology, the fingerprints of a large number of multimedia objects, along

with their associated meta-data (e.g. name of artist, title and album) are

stored in a database. The fingerprints serve as an index to the meta-data. The

meta-data of unidentified multimedia content are then retrieved by

computing a fingerprint and using this as a query in the fingerprint/meta-

data database. The advantage of using fingerprints instead of the multimedia

content itself is three-fold:

1. Reduced memory/storage requirements as fingerprints are relatively

small;

2. Efficient comparison as perceptual irrelevancies have already been

removed from fingerprints;

3. Efficient searching as the dataset to be searched is smaller.

As can be concluded from the above, a fingerprint system generally consists

of two components: a method to extract fingerprints and a method to

efficiently search for matching fingerprints in a fingerprint database.

In this project we focus on the technical aspects of some audio

fingerprinting algorithms. We have described the implementation of each in

detail and also mentioned the pros and cons of each algorithm.


ACOUSTIC FINGERPRINTING CONCEPTS


An acoustic/audio fingerprint is a condensed digital summary, generated

from an audio signal, which can be used to identify an audio sample or

quickly locate similar items in an audio database. It aims at defining a small

signature (the fingerprint) from the audio signal based on its perceptual

properties.

Here we can draw an analogy with so-called hash functions, which are well

known in cryptography. A cryptographic hash function H maps a (usually

large) object X to a (usually small) hash value (a.k.a. message digest). A

cryptographic hash function allows comparison of two large objects X and Y,

by just comparing their respective hash values H(X) and H(Y). Strict

mathematical equality of the latter pair implies equality of the former, with

only a very low probability of error. For a properly designed cryptographic

hash function this probability is 2^-n, where n equals the number of bits of the

hash value. Instead of storing and comparing with all of the data in Y, it is

sufficient to store the set of hash values {hi = H(Yi)}, and to compare H(X) with

this set of hash values.

At first one might think that cryptographic hash functions are a good

candidate for fingerprint functions. However recall from the introduction

that, instead of strict mathematical equality, we are interested in perceptual

similarity. For example, an original CD quality version of ‘Rolling Stones –

Angie’ and an MP3 version at 128Kb/s sound the same to the human

auditory system, but their waveforms can be quite different. Although the

two versions are perceptually similar they are mathematically quite different.

Therefore cryptographic hash functions cannot decide upon perceptual

equality of these two versions. Even worse, cryptographic hash functions are

typically bit-sensitive: a single bit of difference in the original object results in

a completely different hash value.
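This bit-sensitivity is easy to demonstrate. The short Python sketch below (illustrative only; the project itself was implemented in MATLAB) hashes two byte strings that differ in exactly one bit and shows that the digests are completely different:

```python
import hashlib

# "song" and "sonf" differ in exactly one bit: 'g' is 0x67, 'f' is 0x66.
clip_a = b"song"
clip_b = b"sonf"

digest_a = hashlib.sha256(clip_a).hexdigest()
digest_b = hashlib.sha256(clip_b).hexdigest()

# A single-bit change in the input yields a completely different hash value.
print(digest_a == digest_b)  # False
```

This is exactly the property that makes cryptographic hashes unusable for perceptual comparison.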

Given the above arguments, we need to construct a fingerprint function in

such a way that perceptually similar audio objects result in similar fingerprints.

Furthermore, in order to be able to discriminate between different audio

objects, there must be a very high probability that dissimilar audio objects

result in dissimilar fingerprints. More mathematically, for a properly designed

fingerprint function F, there should be a threshold T such that with very high

probability ||F(X)-F(Y)||≤T if objects X and Y are similar and ||F(X)-F(Y)||>T

when they are dissimilar.
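As a toy illustration of this property (a Python sketch with made-up 32-bit fingerprints and an arbitrary threshold T, not values from the system described here), perceptual similarity becomes a simple bit-counting test:

```python
def hamming(x: int, y: int, nbits: int = 32) -> int:
    """Number of differing bits between two nbits-wide fingerprints."""
    return bin((x ^ y) & ((1 << nbits) - 1)).count("1")

T = 3  # illustrative threshold
orig  = 0b10110010101001011100001110101101
noisy = 0b10110010101001011100011110101101  # one bit flipped by "noise"
other = 0b01001101010110100011110001010010  # an unrelated clip

print(hamming(orig, noisy))  # 1  -> below T, declared similar
print(hamming(orig, other))  # 32 -> above T, declared dissimilar
```

With bit-string fingerprints, ||F(X)-F(Y)|| reduces to counting differing bits, which is why this representation keeps the search cheap.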


ACOUSTIC FINGERPRINT SYSTEM PARAMETERS


Robustness:

Can an audio clip still be identified after severe signal degradation? In

order to achieve high robustness the fingerprint should be based on

perceptual features that are invariant (at least to a certain degree)

with respect to signal degradations. Preferably, severely degraded

audio still leads to very similar fingerprints. The false negative rate is

generally used to express the robustness. A false negative occurs when

the fingerprints of perceptually similar audio clips are too different to

lead to a positive match.

Reliability:

How often is a song incorrectly identified? For example “Rolling Stones

– Angie” being identified as “Beatles – Yesterday”. The rate at which

this occurs is usually referred to as the false positive rate.

Fingerprint size:

How much storage is needed for a fingerprint? To enable fast

searching, fingerprints are usually stored in RAM. Therefore

the fingerprint size, usually expressed in bits per second or bits per

song, determines to a large degree the memory resources that are

needed for a fingerprint database server.

Granularity:

How many seconds of audio are needed to identify an audio clip?

Granularity is a parameter that can depend on the application. In some

applications the whole song can be used for identification, in others

one prefers to identify a song with only a short excerpt of audio.

Search speed and scalability:

How long does it take to find a fingerprint in a fingerprint database?

What if the database contains thousands and thousands of songs? For

the commercial deployment of audio fingerprint systems, search speed

and scalability are key parameters. Search speed should be in the

order of milliseconds for a database containing over 100,000 songs

using only limited computing resources (e.g. a few high-end PCs).


APPLICATIONS


In this section we elaborate on a number of applications for audio

fingerprinting.

1. BROADCAST MONITORING

Broadcast monitoring is probably the most well-known application for

audio fingerprinting. It refers to the automatic playlist generation of

radio, television or web broadcasts for, among others, purposes of

royalty collection, program verification and advertisement. Currently

broadcast monitoring is still largely a manual process: 'real'

people listen to broadcasts and fill out scorecards.

A large-scale broadcast monitoring system based on fingerprinting

consists of several monitoring sites and a central site where the

fingerprint server is located. At the monitoring sites fingerprints are

extracted from all the (local) broadcast channels. The central site

collects the fingerprints from the monitoring sites. Subsequently, the

fingerprint server, containing a huge fingerprint database, produces

the playlists of all the broadcast channels.

2. CONNECTED AUDIO

Connected audio is a general term for consumer applications where

music is somehow connected to additional and supporting

information. The example given in the abstract, using a mobile phone

to identify a song, is one such application. This business is actually

pursued by a number of companies. The audio signal in this application

is severely degraded due to processing applied by radio stations,

FM/AM transmission, the acoustical path between the loudspeaker

and the microphone of the mobile phone, speech coding and finally

the transmission over the mobile network. Therefore, from a technical

point of view, this is a very challenging application.

Other examples of connected audio are (car) radios with an

identification button or fingerprint applications “listening” to the audio

streams leaving or entering a soundcard on a PC. By pushing an “info”

button in the fingerprint application, the user could be directed to a

page on the Internet containing information about the artist. In other

words, audio fingerprinting can provide a universal linking system for

audio content.


3. FILTERING TECHNOLOGY FOR FILE SHARING

Filtering refers to active intervention in content distribution. The prime

example for filtering technology for file sharing was Napster. Starting in

June 1999, users who downloaded the Napster client could share and

download a large collection of music for free. Later, due to a court case

by the music industry, Napster users were forbidden to download

copyrighted songs. In May 2001 Napster introduced an audio

fingerprinting system by Relatable, which aimed at filtering out

copyrighted material even if it was misspelled. Owing to Napster’s

closure only two months later, the effectiveness of that specific

fingerprint system is, to the best of the authors' knowledge, not

publicly known.

Although, from a consumer standpoint, audio filtering could be viewed

as a negative technology, there are also a number of potential benefits

to the consumer. Firstly, it can organize song titles in search

results in a consistent way by using the reliable meta-data of the

fingerprint database. Secondly, fingerprinting can guarantee that what

is downloaded is actually what it says it is.

4. AUTOMATIC MUSIC LIBRARY ORGANIZATION

Nowadays many PC users have a music library containing several

hundred, sometimes even thousands, of songs. The music is generally

stored in compressed format (usually MP3) on their hard-drives. When

these songs are obtained from different sources, such as ripping from

a CD or downloading from file sharing networks, these libraries are

often not well organized. Meta-data is often inconsistent, incomplete

and sometimes even incorrect. Assuming that the fingerprint database

contains correct meta-data, audio fingerprinting can make the meta-

data of the songs in the library consistent, allowing easy organization

based on, for example, album or artist. For example, ID3Man, a tool

powered by Auditude fingerprinting technology is already available for

tagging unlabelled or mislabelled MP3 files. A similar tool from

Moodlogic is available as a Winamp plug-in.


GUIDING PRINCIPLES


Audio fingerprints intend to capture the relevant perceptual features of

audio. At the same time extracting and searching fingerprints should be fast

and easy, preferably with a small granularity to allow usage in highly

demanding applications (e.g. mobile phone recognition). A few fundamental

questions have to be addressed before starting the design and

implementation of such an audio fingerprinting scheme. The most prominent

question to be addressed is: what kind of features are the most suitable? A

scan of the existing literature shows that the set of relevant features can be

broadly divided into two classes: the class of semantic features and the class

of non-semantic features. Typical elements in the former class are genre,

beats-per-minute, and mood. These types of features usually have a direct

interpretation, and are actually used to classify music, generate play-lists and

more.

We have for a number of reasons explicitly chosen to work with non-

semantic features:

1. Semantic features don’t always have a clear and unambiguous

meaning. That is, personal opinions differ over such classifications.

Moreover, semantics may actually change over time. For example,

music that was classified as hard rock 25 years ago may be viewed as

soft listening today. This makes mathematical analysis difficult.

2. Semantic features are in general more difficult to compute than non-

semantic features.

3. Semantic features are not universally applicable. For example, beats-

per-minute does not typically apply to classical music.

A second question to be addressed is the representation of fingerprints. One

obvious candidate is the representation as a vector of real numbers, where

each component expresses the weight of a certain basic perceptual feature.

A second option is to stay closer in spirit to cryptographic hash functions and

represent digital fingerprints as bit-strings. For reasons of reduced search

complexity we have opted in this work for the latter. The first option

would imply a similarity measure involving real additions/subtractions and

depending on the similarity measure maybe even real multiplications.

Fingerprints that are based on bit representations can be compared by

simply counting bits. Given the expected application scenarios, we do not

expect a high robustness for each and every bit in such a binary fingerprint.


Therefore, in contrast to cryptographic hashes that typically have a few

hundred bits at the most, we will allow fingerprints that have a few thousand

bits. Fingerprints containing a large number of bits allow reliable

identification even if the percentage of nonmatching bits is relatively high.

A final question involves the granularity of fingerprints. In the applications

that we envisage there is no guarantee that the audio files that need to be

identified are complete. For example, in broadcast monitoring, any interval

of 5 seconds is a unit of music that has commercial value, and therefore may

need to be identified and recognized. Also, in security applications such as

file filtering on a peer-to-peer network, one would not wish that deletion of

the first few seconds of an audio file would prevent identification. In this

work we therefore adopt the policy of fingerprinting streams by assigning sub-

fingerprints to sufficiently small atomic intervals (referred to as frames).

These sub-fingerprints might not be large enough to identify the frames

themselves, but a longer interval, containing sufficiently many frames, will

allow robust and reliable identification.


FINGERPRINT GENERATION ALGORITHM


Most fingerprint extraction algorithms are based on the following approach.

First the audio signal is segmented into frames. For every frame a set of

features is computed. Preferably the features are chosen such that they are

invariant (at least to a certain degree) to signal degradations. Features that

have been proposed are well known audio features such as Fourier

coefficients, Mel Frequency Cepstral Coefficients (MFCC), spectral flatness,

sharpness, Linear Predictive Coding (LPC) coefficients and others. Also

derived quantities such as derivatives, means and variances of audio features

are used.

Generally the extracted features are mapped into a more compact

representation by using classification algorithms, such as Hidden Markov

Models, or quantization. The compact representation of a single frame will

be referred to as a sub-fingerprint. The global fingerprint procedure converts

a stream of audio into a stream of sub-fingerprints.

One sub-fingerprint usually does not contain sufficient data to identify an

audio clip. The basic unit that contains sufficient data to identify an audio clip

(and therefore determining the granularity) will be referred to as a

fingerprint block.


Our algorithm is based on segmentation of the audio signal into overlapping

frames of length 0.37 seconds.

1. It extracts 32-bit sub-fingerprints for every interval of 11.6

milliseconds. A fingerprint block consists of 256 subsequent sub-

fingerprints, corresponding to a granularity of only 3 seconds.

2. The audio signal is first segmented into overlapping frames. The

overlapping frames have a length of 0.37 seconds and are weighted by

a Hamming window with an overlap factor of 31/32.

3. This strategy results in the extraction of one sub-fingerprint for every

11.6 milliseconds. In the worst-case scenario the frame boundaries

used during identification are 5.8 milliseconds off with respect to the

boundaries used in the database of pre-computed fingerprints.

4. The large overlap assures that even in this worst-case scenario the sub-

fingerprints of the audio clip to be identified are still very similar to the

sub-fingerprints of the same clip in the database. Due to the large

overlap subsequent sub-fingerprints have a large similarity and are

slowly varying in time.
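The framing parameters above imply a hop of 0.37/32 ≈ 11.6 ms between consecutive sub-fingerprints. The Python sketch below makes the arithmetic concrete (illustrative only; the project itself was implemented in MATLAB, and the 5 kHz sample rate is our assumption, not stated in this report):

```python
FRAME_SEC = 0.37       # frame length from the text
OVERLAP = 31 / 32      # overlap factor from the text
FS = 5000              # assumed sample rate (not specified here)

frame_len = int(round(FRAME_SEC * FS))       # samples per frame
hop = int(round(frame_len * (1 - OVERLAP)))  # samples between frame starts

def frames(signal):
    """Yield overlapping frames; a Hamming window would be applied to each."""
    for start in range(0, len(signal) - frame_len + 1, hop):
        yield signal[start:start + frame_len]

# One sub-fingerprint per hop; 256 consecutive sub-fingerprints
# (one fingerprint block) then span roughly 3 seconds of audio.
clip = [0.0] * (3 * FS)
n = sum(1 for _ in frames(clip))
print(frame_len, hop, n)  # 1850 58 227
```

Note that the hop of 58 samples at 5 kHz is exactly 11.6 ms, matching the extraction rate stated above.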

The most important perceptual audio features live in the frequency domain.

Therefore a spectral representation is computed by performing a Fourier

transform on every frame. Due to the sensitivity of the phase of the Fourier

transform to different frame boundaries and the fact that the Human

Auditory System (HAS) is relatively insensitive to phase; only the absolute

value of the spectrum, i.e. the power spectral density, is retained.

In order to extract a 32-bit sub-fingerprint value for every frame, 33 non-

overlapping frequency bands are selected. These bands lie in the range from

300Hz to 2000Hz (the most relevant spectral range for the Human Auditory

System) and have a linear spacing. The sign of energy differences

(simultaneously along the time and frequency axes) is a property that is very

robust to many kinds of processing. If we denote the energy of band m of

frame n by E(n,m) and the m-th bit of the sub-fingerprint of frame n by

F(n,m), the bits of the sub-fingerprint are formally defined as:

F(n,m) = 1 if E(n,m) - E(n,m+1) - (E(n-1,m) - E(n-1,m+1)) > 0
F(n,m) = 0 otherwise
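The sign-of-energy-differences rule can be sketched as follows (an illustrative Python translation of the scheme from the Haitsma-Kalker paper this work follows; the project itself was implemented in MATLAB). With 33 band energies per frame, each pair of consecutive frames yields one 32-bit sub-fingerprint; a 5-band toy example gives 4 bits:

```python
def sub_fingerprint(E_prev, E_cur):
    """Derive one sub-fingerprint from band energies of consecutive frames.

    E_prev, E_cur: energies of the frequency bands for frames n-1 and n.
    Bit m is 1 iff the energy difference between bands m and m+1
    increased from frame n-1 to frame n (sign of the double difference).
    """
    bits = 0
    nbands = len(E_cur)              # 33 bands -> 32 bits in the real system
    for m in range(nbands - 1):
        d = (E_cur[m] - E_cur[m + 1]) - (E_prev[m] - E_prev[m + 1])
        bits = (bits << 1) | (1 if d > 0 else 0)
    return bits

# Toy example with 5 bands -> a 4-bit sub-fingerprint:
prev = [1.0, 2.0, 3.0, 4.0, 5.0]
cur  = [2.0, 1.0, 4.0, 3.0, 5.0]
print(format(sub_fingerprint(prev, cur), "04b"))  # 1010
```

Because only the sign of the double difference is kept, moderate changes in absolute band energies (volume changes, equalization, coding noise) tend to leave most bits unchanged.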


SEARCH ALGORITHM


PROPOSED SEARCH ALGORITHM

The inspiration for our search algorithm came from the search algorithm

proposed by Jaap Haitsma and Ton Kalker in their paper “A Highly Robust

Audio Fingerprinting System”. The criterion for matching the recorded clip

and the song is the value of BER (Bit Error Rate). Both values are XORed and

the number of ones is counted. If this value is below a threshold, we can say

that a match has been found. Instead of calculating the BER for every

possible position in the database, such as in a brute-force search method, it

is calculated for a few candidate positions only. These candidates contain

with very high probability the best matching position in the database. In the

simple version of the improved search algorithm, candidate positions are

generated based on the assumption that it is very likely that at least one sub-

fingerprint has an exact match at the optimal position in the database. If this

assumption is valid, the only positions that need to be checked are the ones

where one of the 256 sub-fingerprints of the fingerprint block query matches

perfectly. The positions in the database where a specific 32-bit sub-

fingerprint is located are retrieved using the database architecture shown in

the figure. The fingerprint database contains a lookup table (LUT) with all

possible 32 bit sub-fingerprints as an entry. Every entry points to a list with

pointers to the positions in the real fingerprint lists where the respective 32-

bit sub-fingerprint is located. In practical systems with limited memory, a

lookup table containing 2^32 entries is often not feasible, or not practical, or

both. Furthermore the lookup table will be sparsely filled, because only a

limited number of songs can reside in the memory. At the cost of a lookup-

table, the proposed search algorithm is approximately 800,000 times

faster than the brute force approach. The assumption that at least one of the

sub-fingerprints is error free almost always holds for audio signals with

“mild” audio signal degradations. However, for heavily degraded signals the

assumption is indeed not always valid.


An example of how the proposed search algorithm works can be seen from

the figure. The last extracted sub-fingerprint of the fingerprint block in the

figure is 0x00000001. First the fingerprint block is compared to the positions

in the database where sub-fingerprint 0x00000001 is located. The LUT is

pointing to only one position for sub-fingerprint 0x00000001, a certain

position p in song 1. We now calculate the BER between the 256 extracted

sub-fingerprints (the fingerprint block) and the sub-fingerprint values of song

1 from position p-255 up to position p. If the BER is below the threshold of

0.35, the probability is high that the extracted fingerprint block originates

from song 1. However if this is not the case, either the song is not in the

database or the sub-fingerprint contains an error. Let us assume the latter

and that bit 0 is the least reliable bit. The next most probable candidate is

the sub-fingerprint 0x00000000. Still referring to the figure, sub-fingerprint

0x00000000 has two possible candidate positions: one in song 1 and one in

song 2. If the fingerprint block has a BER below the threshold with respect to

the associated fingerprint block in song 1 or 2, then a match will be declared

for song 1 or 2, respectively. If neither of the two candidate positions give a

below threshold BER, either other probable sub-fingerprints are used to

generate more candidate positions, or there is a switch to one of the 254

remaining sub-fingerprints where the process repeats itself. If all 256 sub-

fingerprints and their most probable sub-fingerprints have been used to


generate candidate positions and no match below the threshold has been

found the algorithm decides that it cannot identify the song.
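The candidate-generation idea above can be condensed into a short Python sketch (illustrative only; the actual system was built in MATLAB, and the 4-bit sub-fingerprints and single-song database here are toy values). The in-memory dictionary stands in for the LUT of the figure:

```python
def ber(block_a, block_b, bits=32):
    """Bit error rate between two equal-length lists of sub-fingerprints."""
    wrong = sum(bin(a ^ b).count("1") for a, b in zip(block_a, block_b))
    return wrong / (len(block_a) * bits)

def search(query_block, database, lut, threshold=0.35, bits=32):
    """For each sub-fingerprint of the query block, fetch its exact
    occurrences from the LUT, align the whole block at that position,
    and accept the first candidate whose BER is below the threshold."""
    B = len(query_block)
    for i, sfp in enumerate(query_block):
        for song_id, pos in lut.get(sfp, []):
            start = pos - i          # align query block with the database
            if start < 0:
                continue
            window = database[song_id][start:start + B]
            if len(window) == B and ber(query_block, window, bits) <= threshold:
                return song_id
    return None

# Toy database with 4-bit sub-fingerprints and its lookup table:
db = {"song1": [0b0001, 0b0010, 0b0011, 0b0100, 0b0101]}
lut = {}
for sid, fps in db.items():
    for p, v in enumerate(fps):
        lut.setdefault(v, []).append((sid, p))

# One sub-fingerprint of the clip is corrupted, but another matches exactly:
print(search([0b0010, 0b0111, 0b0100], db, lut, bits=4))  # song1
```

Only positions where some sub-fingerprint matches exactly are ever tested with a BER computation, which is the source of the speed-up over brute force.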

ISSUES

1. This kind of structure can exist only in theory; no standard database
system stores linked lists of this kind directly.

2. If the sub-fingerprint is 32 bits long, the lookup table would have
2^32 entries, i.e. it would take about 4 GB of space and would thus
require 4 GB of RAM to reside in, which is not very efficient.

The solutions to these problems are discussed in the upcoming sections.


OUR IMPROVISATION


We converted the entire approach to 16 bits from 32 bits. This greatly eased
the implementation of the search algorithm. The benefits of the 16-bit
approach over the proposed 32-bit approach are as follows:

1. The probability of a successful match is higher with 16 bits than with 32.

2. It is more robust to noise, since the reduction of bits is somewhat
like mapping 2 bits into 1 bit (the number of bits is halved). Hence,
fewer errors are introduced while recording.

3. The search becomes faster because the system has to match only 16
bits instead of 32.

4. In the 32-bit version, the lookup table would take up 2^32 memory
locations, which is around 4 GB and impractical to keep in the memory
of a computer. In the 16-bit version, it takes up 2^16 memory
locations, which is just 64 KB. Hence there is a huge difference in the
space occupied.

5. There is no need for the hashing proposed in the 32-bit scheme.
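The storage contrast in the list above is straightforward arithmetic, sketched here in Python for concreteness (assuming, as the text does, one memory location per table entry):

```python
# A full lookup table needs one entry per possible sub-fingerprint value.
entries_32 = 2 ** 32    # 32-bit sub-fingerprints
entries_16 = 2 ** 16    # 16-bit sub-fingerprints (our approach)

# At one byte per entry this is the ~4 GB vs ~64 KB contrast from the text:
print(entries_32 // 2 ** 30, "GB vs", entries_16 // 2 ** 10, "KB")  # 4 GB vs 64 KB
```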


IMPLEMENTATION


To implement the aforementioned idea, we have used a combination of a set

of files and a database.

We have simulated an LUT by creating a folder which consists of 2^16

files with file names 0 to 65535.

Each of these files represents a sub-fingerprint value (just like the

entries in the LUT).

A file contains the entries of songs in which this sub-fingerprint value

occurs. These entries occur in the form song_ID, position. (These

entries thus act like the pointers to the occurrences of the sub-

fingerprint value in the songs).

This song_ID field points to the Song_No field in the table

song_meta_data which contains, as the name suggests, metadata of

the songs stored in the database, namely, Song_No, Name, Artist and

Album. Thus, this field serves as an identifier for the song and hence

can be used to extract information from the database to present to the

user.

During search, the Name field is used to look for the song fingerprint

files since these files are named after the song’s name.

Whenever a new song is added to the database, the fingerprint for

that song is generated and stored. Also, the entries of occurrences of

all sub-fingerprint values in the fingerprint are appended in the

respective files.
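The indexing step described above can be sketched in a few lines of Python (illustrative only; the project's own implementation is in MATLAB, and the numeric song_id and sub-fingerprint values below are made up):

```python
import os
import tempfile

def index_song(lut_dir, song_id, sub_fingerprints):
    """Append one "song_id,position" line to the file named after each
    sub-fingerprint value, mimicking the folder of files 0..65535 above."""
    os.makedirs(lut_dir, exist_ok=True)
    for pos, value in enumerate(sub_fingerprints):
        with open(os.path.join(lut_dir, str(value)), "a") as f:
            f.write(f"{song_id},{pos}\n")

# Indexing a toy song whose sub-fingerprint value 5 occurs at positions 0 and 2:
lut_dir = tempfile.mkdtemp()
index_song(lut_dir, 7, [5, 9, 5])
with open(os.path.join(lut_dir, "5")) as f:
    print(f.read().splitlines())  # ['7,0', '7,2']
```

Opening the files in append mode is what makes adding a new song cheap: only the files for values that actually occur in the song are touched.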


So, when a song is queried, these are the steps that take place:

1. The fingerprint for that clip is generated.

2. Blocks of 256 sub-fingerprint values are taken up.

3. The file for each sub-fingerprint value in the block is opened and

the entries in it are parsed.

4. For each entry, that song’s fingerprint file is opened and the BER is

calculated for that block versus the values in the actual file. This is

done by moving the window (block) such that the matching value

of the sub-fingerprint in both the block and the fingerprint file are

in line. Then the 256 values are matched and the BER is calculated.

5. A threshold of 1500 (ratio = 0.35) is checked.

6. If the BER calculated is below the threshold, a match is announced

else the search moves on to the next entry in the sub-fingerprint

file. If no match is found for that value, the next value in the block is

checked. This process continues until a match is found or the sub-

fingerprint values in the clip are exhausted.
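Steps 2 to 6 can be sketched as follows in Python (an illustrative stand-in for the MATLAB implementation; the in-memory song_fps dictionary stands in for the per-song fingerprint files, and the toy values and the small threshold in the example are made up):

```python
import os

def query_clip(lut_dir, song_fps, clip_block, threshold=1500):
    """Look up each sub-fingerprint of the clip block in the LUT files,
    slide the block into alignment at every recorded occurrence, and
    accept the first candidate whose bit-error count is below the
    absolute threshold."""
    B = len(clip_block)
    for i, value in enumerate(clip_block):
        path = os.path.join(lut_dir, str(value))
        if not os.path.exists(path):
            continue  # this sub-fingerprint value occurs in no stored song
        with open(path) as f:
            entries = [line.split(",") for line in f.read().splitlines()]
        for song_id, pos in ((s, int(p)) for s, p in entries):
            start = pos - i               # align clip block with the song
            if start < 0:
                continue
            window = song_fps[song_id][start:start + B]
            if len(window) != B:
                continue
            errors = sum(bin(a ^ b).count("1")
                         for a, b in zip(clip_block, window))
            if errors < threshold:        # below-threshold BER: match found
                return song_id
    return None
```

With 256 sub-fingerprint values of 16 bits each per block, the default threshold of 1500 is the absolute bit-error bound used in step 5 above.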


TOOLS USED

1. MATLAB – Main Programming Tool

MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran.

MATLAB users come from various backgrounds of engineering, science, and economics. MATLAB is widely used in academic and research institutions as well as industrial enterprises.

2. AUDACITY – For waveform analysis, playback and recording

Audacity is a free, cross-platform digital audio editor and recording application. It can import and export WAV, AIFF, MP3, Ogg Vorbis and all file formats supported by the libsndfile library. Audacity can also be used for recording and playing back sounds, editing via cut, copy and paste (with unlimited levels of undo), multi-track mixing, and post-processing of all types of audio by adding effects such as normalization, trimming, and fading in and out.

3. XILISOFT VIDEO CONVERTER – For converting sound file formats to .wav

Xilisoft Video Converter is an easy, fast and reliable application for converting between audio formats such as MP3, WMA, WAV and AAC. It also offers audio and video editing features such as clipping, merging and splitting.

4. MYSQL – For managing the database of songs

MySQL is a relational database management system (RDBMS) that runs as a server providing multi-user access to a number of databases. MySQL is a popular choice of database for use in web applications. MySQL is used in some of the most frequently visited websites on the Internet, including Flickr, Nokia.com, YouTube, Wikipedia, Google, Facebook and Twitter.
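As a concrete illustration of the kind of metadata table such a database might hold, here is a minimal sketch using Python's sqlite3 as a stand-in for MySQL; the table and column names are assumptions, not the project's actual schema.

```python
import sqlite3

# In-memory database standing in for the MySQL song database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE songs (
        id      INTEGER PRIMARY KEY,
        title   TEXT NOT NULL,
        artist  TEXT,
        album   TEXT,
        fp_file TEXT            -- path to the song's fingerprint file
    )
""")

# Register one song's metadata alongside its fingerprint file.
conn.execute(
    "INSERT INTO songs (title, artist, album, fp_file) VALUES (?, ?, ?, ?)",
    ("Example Song", "Example Artist", "Example Album", "fp/0001.fp"),
)

# When a fingerprint match identifies a song id, its metadata is retrieved.
row = conn.execute("SELECT title, artist FROM songs WHERE id = 1").fetchone()
```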


SCREENSHOTS


CLIENT

Following are the screenshots of the client process.

The GUI is initiated by typing FP_GUI on the command line.

There are two options, the user can either use an existing recorded

clip or record a new sound clip.


Clicking on the Record sound clip button opens a new dialogue.


Clicking on the record button initiates the recording process.


Clicking on the Use existing sound clip button opens a new dialogue.

The user can click on the browse button to browse for pre-recorded

sound clips.

The path of the clip appears in the textbox.


After finding the sound clip, the Send to server button is clicked and

the client sends the clip to the server for querying.


The server queries the database and sends the result to the client

which includes all the meta-data associated with the song.


SERVER

Following are the screenshots of the server process.

The server process is initiated by typing serverside on the

command line.


The server receives the recorded clip from the client and queries the

database.

It sends the results to the client and also displays the same on the

command line.
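The exchange above can be sketched with plain sockets. This is an illustrative Python sketch, not the project's MATLAB code; the `identify` stand-in and the `<END>` delimiter are assumptions made for the example.

```python
import socket
import threading

def identify(clip_bytes):
    # Stand-in for the real pipeline: fingerprint the clip and query the
    # database. Here it just returns fixed metadata for illustration.
    return b"Title: Example Song | Artist: Example Artist"

def serve_once(server_sock):
    # Accept one client, read the clip up to the delimiter, reply with metadata.
    conn, _addr = server_sock.accept()
    with conn:
        clip = b""
        while chunk := conn.recv(4096):
            clip += chunk
            if clip.endswith(b"<END>"):
                break
        conn.sendall(identify(clip[:-len(b"<END>")]))

# Server side: listen on an ephemeral local port and handle one request.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: send the recorded clip, then wait for the metadata reply.
client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"...wav bytes...<END>")
result = client.recv(4096)
client.close()
server.close()
```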


FUTURE SCOPE

1. The length of the recorded clip required for identification can be reduced to 3-4 seconds.

2. The above algorithms can be modified so as to identify plagiarism in music.

3. The algorithms can be used to automatically organise all the music on the user's device by filling in the missing meta-data associated with each song.

4. Instead of sending the audio clip, the client can stream it to the server, which can identify it on the fly.


REFERENCES


1. J. Haitsma and T. Kalker. "A Highly Robust Audio Fingerprinting System." In Proceedings of ISMIR 2002, pages 144-148, 2002.

2. F. Kurth. "A Ranking Technique for Fast Audio Identification." In Proceedings of the International Workshop on Multimedia Signal Processing, 2002.

3. Jérôme Lebossé, Luc Brun and Jean-Claude Pailles. "A Robust Audio Fingerprint Extraction Algorithm." In SPPR '07: Proceedings of the Fourth IASTED International Conference on Signal Processing, Pattern Recognition, and Applications, ACTA Press, Anaheim, CA, USA, 2007.

4. Shazam website <www.shazam.com>

5. Midomi website <www.midomi.com>

6. J. Haitsma, T. Kalker and J. Oostveen. "Robust Audio Hashing for Content Identification." In Content-Based Multimedia Indexing 2001, Brescia, Italy, September 2001.

7. Christopher J. C. Burges, John C. Platt and Soumya Jana. "Distortion Discriminant Analysis for Audio Fingerprinting." IEEE Transactions on Speech and Audio Processing.

8. Matthew L. Miller, Manuel Acevedo Rodriguez and Ingemar J. Cox. "Audio Fingerprinting: Nearest Neighbor Search in High-Dimensional Binary Spaces." In Proceedings of the IEEE Multimedia Signal Processing Workshop 2002, St. Thomas, US Virgin Islands.

9. Pedro Cano, Eloi Batlle, Emilia Gomez, Leandro de C. T. Gomes and Madeleine Bonnet. "Audio Fingerprinting: Concepts and Applications." In Proceedings of the 1st International Conference on Fuzzy Systems and Knowledge Discovery.