Page 1:

Artificial Neural Network for Speech Recognition

Austin Marshall
March 3, 2005
2nd Annual Student Research Showcase

Page 2:

Overview

- Presenting an Artificial Neural Network to recognize and classify speech
  - Spoken digits
    - “one”, “two”, “three”, etc.
- Choosing a speech representation scheme
- Training the perceptron
- Results

Page 3:

Representing Speech

- Problem
  - Recorded samples never produce identical waveforms
    - Length
    - Amplitude
    - Background noise
    - Sample rate
  - However, the perceptual information relevant to speech remains consistent
- Solution
  - Extract the speech-related information
    - See: Spectrogram

Page 4:

Representing Speech

[Figure: waveforms and spectrograms of two recordings of “one”]

Page 5:

Spectrogram

- Shows the change in amplitude spectra over time
- Three dimensions
  - X axis: Time
  - Y axis: Frequency
  - Z axis: Color intensity represents magnitude

[Figure: spectrogram of “one”]

Page 6:

Mel Frequency Cepstrum Coefficients

- The spectrogram provides a good visual representation of speech but still varies significantly between samples
- A cepstral analysis is a popular method for feature extraction in speech recognition applications, and can be accomplished using Mel Frequency Cepstrum Coefficient analysis (MFCC)

Page 7:

Mel Frequency Cepstrum Coefficients

- Inverse Fourier transform of the log of the Fourier transform of a signal, using the Mel Scale filterbank
- The mfcc function returns vectors of 13 dimensions
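
To make the definition concrete, here is a minimal MATLAB sketch of the plain cepstrum computation on a single windowed frame. It deliberately omits the Mel-scale filterbank warping and the truncation to 13 coefficients that the Auditory Toolbox mfcc function performs, and the placeholder frame x is an assumption rather than anything from the original slides.

    % Plain cepstrum of one frame (Mel filterbank step omitted here).
    x = randn(256, 1);               % placeholder for a 256-sample windowed frame
    X = fft(x);                      % Fourier transform of the frame
    logMag = log(abs(X) + eps);      % log magnitude spectrum (eps avoids log(0))
    c = real(ifft(logMag));          % inverse Fourier transform -> cepstrum
    % MFCC additionally warps the magnitude spectrum through a Mel-scale
    % filterbank before the log/inverse step and keeps 13 coefficients per frame.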

Page 8:

Network Architecture

- Input layer
  - 26 Cepstral Coefficients
- Hidden layer
  - 100 fully connected hidden-layer units
  - Weight range between -1 and +1
    - Initially random
    - Remain constant
- Output
  - 1 output unit for each target
  - Limited to values between 0 and +1
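
As a concrete illustration of this architecture, the following MATLAB sketch sets up the weights as described on this slide; the variable names (W, v, bias) and the initial range of the output weights are assumptions, not the original code.

    % Sketch of the network described above (assumed variable names).
    nIn = 26; nHidden = 100;            % 26 cepstral inputs, 100 hidden units
    W = 2 * rand(nHidden, nIn) - 1;     % input-to-hidden weights in [-1, +1],
                                        % initially random and held constant
    v = 2 * rand(1, nHidden) - 1;       % hidden-to-output weights (these are learned)
    bias = -1;                          % bias value reported on the Results slide
    sigmoid = @(x) 1 ./ (1 + exp(-x));  % limits outputs to values between 0 and +1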

Page 9:

Sample Training Stimuli (Spectrograms)

[Figure: spectrograms of the spoken digits “one”, “two”, and “three”]

Page 10:

Training the network

- Spoken digits were recorded
  - Seven samples of each digit
  - “One” through “eight” recorded
  - Total of 56 different recordings with varying lengths and environmental conditions
- Background noise was removed from each sample

Page 11:

Training the network

- Calculate MFCCs using Malcolm Slaney’s Auditory Toolbox
  - c = mfcc(s, fs, fix((3*fs)/(length(s)-256)))
  - Limits the frame rate such that mfcc always produces a matrix of two vectors corresponding to the coefficients of the two halves of the sample
- Convert the 13x2 matrix to a 26-dimensional column vector, as in the sketch below
  - c = c(:)
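
Putting the two bullets together, the feature extraction for one recording might look like the following sketch; the file name and the use of audioread (in place of the era’s wavread) are assumptions, while the mfcc call and the flattening step come from the slide.

    % Feature extraction for one recording (file name is hypothetical).
    [s, fs] = audioread('one_01.wav');             % samples and sample rate
    frameRate = fix((3 * fs) / (length(s) - 256)); % forces mfcc to return two frames,
                                                   % one per half of the sample
    c = mfcc(s, fs, frameRate);                    % 13x2 matrix (Auditory Toolbox)
    c = c(:);                                      % flatten to a 26x1 feature vector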

Page 12:

Training the network

- Supervised learning
  - Choose the intended target and create a target vector
  - 56-dimensional target vector
- If training the network to recognize a spoken “one”, the target has a value of +1 for each of the known “one” stimuli and 0 for everything else (see the sketch below)
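
A small sketch of building that target vector for the “one” detector; the assumption that the seven “one” recordings occupy the first seven positions is purely illustrative.

    % Target vector for the "one" detector (ordering of stimuli is assumed).
    nStimuli = 56;
    t = zeros(nStimuli, 1);   % 0 for every non-target stimulus
    t(1:7) = 1;               % +1 for each of the seven known "one" stimuli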

Page 13:

Training the network

- Train a multilayer perceptron with the feature vectors (simplified):
  - Select a stimulus at random
  - Calculate the response to the stimulus
  - Calculate the error
  - Update the weights
  - Repeat
- In a finite amount of time, the perceptron will successfully learn to distinguish stimuli of the intended target from everything else.

Page 14:

Training the network

- Calculate the response to a stimulus
  - Calculate the hidden layer
    - h = sigmoid(W*s + bias)
  - Calculate the response
    - o = sigmoid(v*h + bias)
- Sigmoid transfer function
  - Maps values between 0 and +1

sigmoid(x) = 1 / (1 + e^(-x))
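
In MATLAB this forward pass is only a few lines, assuming W, v, and bias were set up as on the architecture slide and s is a 26x1 feature vector.

    % Forward pass for one stimulus s (26x1 feature vector);
    % W, v, and bias are assumed to exist as on the architecture slide.
    sigmoid = @(x) 1 ./ (1 + exp(-x));  % maps values between 0 and +1
    h = sigmoid(W * s + bias);          % 100x1 hidden-layer activations
    o = sigmoid(v * h + bias);          % scalar response between 0 and +1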

Page 15:

Training the network

- Calculate the error
  - For a given stimulus, the error is the difference between the target and the response
  - t - o
  - t will be either 0 or 1
  - o will be between 0 and +1

Page 16:

Training the network

- Update weights
  - v = v_previous + γ(t - o)h^T
  - v is the weight vector between hidden-layer units and output
  - γ (gamma) is the learning rate
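
Combining the last three slides, one possible version of the training loop is sketched below. The learning rate, bias, and iteration count come from the Results slide; the random-sampling details, variable names, and placeholder data are assumptions, not the original code.

    % Sketch of the full training loop for one target (a "one" detector).
    sigmoid = @(x) 1 ./ (1 + exp(-x));
    nIn = 26; nHidden = 100; nStimuli = 56;
    S = randn(nIn, nStimuli);            % placeholder: columns would be the 26x1
                                         % MFCC feature vectors of the 56 recordings
    t = [ones(7, 1); zeros(49, 1)];      % placeholder target vector for "one"
    W = 2 * rand(nHidden, nIn) - 1;      % fixed random input-to-hidden weights
    v = 2 * rand(1, nHidden) - 1;        % hidden-to-output weights (learned)
    bias = -1; gamma = 1;                % values reported on the Results slide

    for iter = 1:3000                    % iteration count from the Results slide
        k = randi(nStimuli);             % select a stimulus at random
        s = S(:, k);                     % its 26x1 feature vector
        h = sigmoid(W * s + bias);       % hidden-layer response
        o = sigmoid(v * h + bias);       % network response
        v = v + gamma * (t(k) - o) * h'; % v = v_previous + γ(t-o)h^T
    end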

Page 17:

Results

- Learning rate: +1
- Bias: -1
- 100 hidden-layer units
- 3000 iterations
- 316 seconds to learn target

Target = “one”

Page 18:

Results

- Response to unseen stimuli
  - Stimuli produced by the same voice used to train the network, with noise removed
  - Network was tested against eight unseen stimuli corresponding to the eight spoken digits
  - Returned 1 (full activation) for “one” and zero for all other stimuli
  - Results were consistent across targets
    - i.e., when trained to recognize “two”, “three”, etc.
  - sigmoid(v*sigmoid(W*t1 + bias) + bias) == 1
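
For completeness, that check might look like this in MATLAB, assuming t1 is the 26x1 feature vector of an unseen recording (extracted the same way as the training data) and W, v, and bias come from the trained network.

    % Response of the trained network to an unseen stimulus t1 (26x1 vector);
    % W, v, and bias are assumed to be the trained network's parameters.
    sigmoid = @(x) 1 ./ (1 + exp(-x));
    response = sigmoid(v * sigmoid(W * t1 + bias) + bias);
    % The slides report a response of 1 for unseen "one" stimuli and 0 for the
    % others; a simple decision rule would be: detected = response > 0.5;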

Page 19:

Results

- Response to a noisy sample
  - Network returned a low but nonzero (> 0) response to a sample whose noise had not been removed
- Response to a foreign speaker
  - Network responded with mixed results when presented with samples from speakers other than the one in the training stimuli
- In all cases, the error rate decreased and accuracy improved with more learning iterations

Page 20:

References

- Jurafsky, Daniel and Martin, James H. (2000). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (1st ed.). Prentice Hall.
- Golden, Richard M. (1996). Mathematical Methods for Neural Network Analysis and Design (1st ed.). MIT Press.
- Anderson, James A. (1995). An Introduction to Neural Networks (1st ed.). MIT Press.
- Hosom, John-Paul, Cole, Ron, Fanty, Mark, Schalkwyk, Joham, Yan, Yonghong, and Wei, Wei (1999, February 2). Training Neural Networks for Speech Recognition. Center for Spoken Language Understanding, Oregon Graduate Institute of Science and Technology. http://speech.bme.ogi.edu/tutordemos/nnet_training/tutorial.html
- Slaney, Malcolm. Auditory Toolbox. Interval Research Corporation.