
CHAPTER 4

ISOLATED-WORD AUTOMATIC SPEECH RECOGNITION (IWASR) TECHNIQUES

AND ALGORITHMS

4.1 Introduction

The success of an isolated-word automatic speech recognition system requires a combination of various techniques and algorithms, each of which performs a specific task towards the main goal of the system. A well-chosen combination of related algorithms therefore improves the accuracy, or recognition rate, of such applications.

The previous chapter discussed the main steps required for designing most automatic speech recognition systems, starting from spectral analysis (also known as features extraction), followed by features classification and features matching. This chapter presents the techniques and algorithms used for the development and implementation of the Isolated-Word Automatic Speech Recognition (IWASR) system. In particular, it provides a step-by-step Matlab implementation of the features extraction, features classification and features matching processes used in developing IWASR.

This chapter describes the Mel-Frequency Cepstral Coefficients (MFCC) algorithm as the main algorithm used for the features extraction of the entire set of distinct words. IWASR also implements the Vector Quantization (VQ) algorithm for features classification/matching and pattern recognition, which is also described in this chapter. In addition, this chapter explains the Euclidean distance measure, the similarity or distortion measure implemented in IWASR.


4.2 Speech Samples Collection (Speech Recording)

Speech samples collection is mostly concerned with recording various speech samples of each distinct word by different speakers. Rabiner and Juang (1993) identified four main factors that must be considered when collecting speech samples, since they affect the training set vectors used to train the VQ codebook: who the talkers are; the speaking conditions; the transducers and transmission systems; and the speech units.

The first factor is the profile of the talkers/speakers. Speech samples for IWASR were collected from five different speakers: three male and two female, of different ages and races. Table 4.1 summarizes the talkers' profiles.

Table 4.1
Summary of Talkers’ Profiles

Range of Age: adults ranging from 22 to 40 years old
Gender: three male speakers and two female speakers
Races: two main categories, Malaysians and non-Malaysians

The second factor is the speaking conditions under which the speech samples were collected, which basically refer to the environment of the recording stage. The IWASR speech samples were recorded in the faculty’s speech technology lab; however, that lab is neither quiet nor noise-proof, so all the speech samples contain background noise. The rationale behind collecting the speech samples in a noisy environment is to represent real-world conditions, because most speech recognition systems are meant to be used in a variety of environments and spheres. Therefore, collecting the speech samples in a noisy environment was done on purpose.

The third factor is the transducers and transmission systems. Speech samples were recorded

and collected using a normal microphone.

The fourth factor is the speech units. The main speech units of IWASR are specific isolated words; in other words, the purpose of IWASR is to recognize words that belong to the isolated-word recognition category of applications. A set of nine distinct words was recorded (Al-Arabiyyah, Bahasa_Melayu, English, Computer_Science, Engineering, Science, Ainon, Salwah, and Salimah), each of which was recorded separately. Therefore, IWASR is an isolated-word recognizer rather than a continuous or conversational speech recognizer.

IWASR uses a simple Matlab function for recording speech samples. This function requires certain parameters to be defined, namely the sampling rate in hertz and the recording time length in seconds. Figure 4.1 shows the Matlab code used for recording the speech samples. The recording time is set to two seconds, which was found to be long enough to capture a single word; a longer recording time would only add silence around the word’s utterance.

Figure 4.1
Matlab Code “wavrecord” for Recording Speech Signals

% Record Button
% Set sampling rate and time length info for recording speech samples
R_fs = 16000;                                     % Sampling rate in Hz
R_samp_len = 2;                                   % Recording time length in seconds
% "wavrecord" is a Matlab function used for recording speech signals
ai = wavrecord(R_samp_len*R_fs, R_fs, 'double');

At this stage, the speech samples have been recorded and collected, ready for the stages explained in the following sections. The collected speech samples then pass through the features extraction, features training and features testing stages.

4.3 Features Extraction Using MFCC Algorithm

As mentioned earlier in section 3.2, the main objective of features extraction is to extract characteristics from the speech signal that are unique to each word and that can be used to differentiate between a wide set of distinct words. The most frequently used features in speech processing, the Mel-Frequency Cepstral Coefficients (MFCC), are based on the known variation of the human ear’s critical bandwidths with frequency: filters spaced linearly at low frequencies and logarithmically at high frequencies are used to capture the phonetically important characteristics of speech.

According to Ursin (2002), MFCC is considered the standard method for feature extraction in speech recognition tasks. Perhaps the most popular features used for speech recognition today are the mel-frequency cepstral coefficients (MFCCs), which are obtained by first performing a standard Fourier analysis and then converting the power spectrum to a mel-frequency spectrum; by taking the logarithm of that spectrum and computing its inverse Fourier transform, one obtains the MFCC. The individual MFCC features are only weakly correlated, which turns out to be an advantage for the creation of statistical acoustic models (Martens, 2000). Compared with alternative feature extraction techniques, MFCC has generally achieved better accuracy at a lower computational complexity (Davis and Mermelstein, 1980).


This stage focuses on the MFCC computational process, the main algorithm used in this research for features extraction (front-end analysis). It is concerned with applying the MFCC algorithm to all collected speech samples in order to produce a features vector from each collected speech sample.

Certain parameters need to be defined in advance in order to execute the MFCC algorithm and produce its coefficients. Table 4.2 shows the parameters and their defined values, which are used throughout the MFCC Matlab code.

Table 4.2
MFCC Parameters Definition

Parameter                    Defined Value
Time Length (len)            2 seconds
Sampling Rate (R_fs)         16000 Hz
Frame Size (N)               256 samples
Overlap Size (M)             156 samples
Number of Filters (nof)      40
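For reference, the short sketch below shows how these parameters might be declared once in Matlab before the code fragments in the rest of this chapter are run; the variable names and values follow Table 4.2.

% Parameter values taken from Table 4.2
len  = 2;          % recording time length in seconds
R_fs = 16000;      % sampling rate in Hz
N    = 256;        % frame size in samples
M    = 156;        % overlap size in samples
nof  = 40;         % number of Mel filters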

Mel-Frequency Cepstral Coefficient (MFCC) analysis is a deconvolution algorithm applied in order to obtain the vocal tract impulse response from the speech signal. It transforms the speech signal, which is the convolution between the glottal pulse and the vocal tract impulse response, into a sum of two components known as the cepstrum. This computation is carried out by taking the inverse DFT of the logarithm of the magnitude spectrum of the speech frame, as shown in (4.1) (Becchetti and Ricotti, 1999):

x^[n] = IDFT{ log( DFT{ h[n] * u[n] } ) }
      = h^[n] + u^[n]                                          (4.1)

Where h^[n], u^[n] and x^[n] are the complex cepstra of h[n], u[n] and x[n] respectively.


From (4.1), the convolution of the two components becomes a multiplication when the Fourier transform is performed; taking the logarithm then turns the multiplication into an addition. This is how the complex cepstrum x^[n] is obtained.

Figure 3.4 showed a block diagram of the computation steps of MFCC, which are discussed in detail in the next sections. MFCC consists of a sequence of steps applied to an input speech signal: preprocessing, framing, windowing, the Discrete Fourier Transform (DFT), the Mel filterbank, the logarithm, and finally the inverse DFT.

4.3.1. Preprocessing

According to Gordon (1998), preprocessing is considered as the first step of speech signal

processing, which involves the conversion of analog speech signal into a digital form. It is a very

crucial step for enabling further processing. Here the continuous time signal (speech) is sampled

at discrete time points to form a sample data signal representing the continuous time signal. Then

samples are quantized to produce a digital signal.

A discrete-time representation of a continuous-time signal is obtained through periodic sampling, whereby a sequence of samples x[n] is obtained from a continuous-time signal x(t), as stated in the relationship below (4.2).

x[n] = x(nT)                                                   (4.2)

Where T is the sampling period, 1/T = fs is the sampling frequency in samples/second, and n is the sample index. It is apparent that more signal data will be obtained if the samples are taken closer together, that is, by making the value of T smaller (Alan and Ronald, 1999).


The number of samples in a digital signal is determined by the sampling frequency and the length of the speech signal in seconds. For example, if a speech signal is recorded for 2 seconds at a sampling frequency of 10000 Hz, the number of samples is 10000 × 2 = 20000 samples.

In IWASR, the speech is sampled at 16000 Hz for a time length of 2 seconds, giving 32000 samples. The preprocessing obtains an array of samples from the microphone after recording, computes the time graph and the spectrum of the speech signal, and displays both in figure plots. Figure 4.2a shows the time graph obtained for the recorded word “English”, whereas Figure 4.2b shows the spectrum obtained for the same word.

Figure 4.2a

Plot of the Time Graph for the Recorded Word “English”

Figure 4.2b

Plot of the Spectrum for the Recorded Word “English”
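As an illustration only (not the thesis code), plots such as those in Figures 4.2a and 4.2b could be produced along the following lines, assuming the recorded samples ai and the sampling rate R_fs from the code of Figure 4.1.

% Sketch: plot the time graph and magnitude spectrum of the recorded word
t = (0:length(ai)-1) / R_fs;                 % time axis in seconds
subplot(2,1,1); plot(t, ai);                 % time graph of the recording
xlabel('Time (s)'); ylabel('Amplitude');
A = abs(fft(ai));                            % magnitude spectrum
f = (0:length(A)-1) * R_fs / length(A);      % frequency axis in Hz
half = 1:floor(length(A)/2);                 % keep the non-redundant half
subplot(2,1,2); plot(f(half), A(half));
xlabel('Frequency (Hz)'); ylabel('Magnitude');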


4.3.2. Framing

Framing is the process of segmenting the speech samples obtained from the analog to digital

(A/D) conversion into small frames with time length in the range of (20 to 40) milliseconds.

In reference to the human speech production mechanism discussed earlier, the speech signal is known to exhibit quasi-stationary behavior over short periods of time (20 to 40 milliseconds). Framing therefore segments the non-stationary speech signal into quasi-stationary frames and makes Fourier analysis of the signal possible: a single Fourier transform of the entire speech signal cannot capture its time-varying frequency content, so the Fourier transform is performed on each segment separately (Thomas, 2002).

If the frame length is not too long (20 to 40 milliseconds), the properties of the signal will not change appreciably from the beginning of the segment to the end. Thus, the DFT of a windowed speech segment should display the frequency-domain properties of the signal at the time corresponding to the window location (Alan and Ronald, 1999).

Alan and Ronald (1999) also note that if the frame length is long enough for the harmonics to be resolved (more than 80 milliseconds), the DFT of a windowed segment of voiced speech should show a series of peaks at integer multiples of the fundamental frequency of the signal in that interval; this would normally require the window to span several periods of the waveform. If, on the other hand, the frame is too short (less than 10 milliseconds), the harmonics will not be resolved, but the general spectral shape will still be evident. This is the typical tradeoff between frequency resolution and time resolution required in the analysis of non-stationary signals.

In addition, each frame overlaps its previous frame by a predefined size. The goal of the

overlapping scheme is to smooth the transition from frame to frame. Refer to Figure 3.1, which

was shown earlier in order to get a clear idea of the concepts of framing, windowing and

overlapping.

In IWASR, framing segments the speech samples into frames small enough that each segment exhibits quasi-stationary behavior. The length of each segment is 256 samples, which is equivalent to (256 / 16000) × 1000 = 16 milliseconds. Figure 4.3 shows the Matlab code for framing the speech samples, whereas Figure 4.4 shows a segmented speech signal with a frame size of 256 samples.

Figure 4.3

Matlab Code for the Framing Stage of MFCC

s = data;                                 % speech samples returned by the Matlab function "wavread"
a = zeros(256, 83);                       % pre-allocate the frame matrix (framing the signal with overlap)
a(:,1) = s(1:256);                        % first frame
for j = 2:83
    a(:,j) = s((N-M)*j+1:(N-M)*j+256);    % N is the size of each frame and M is the overlap size
end


Figure 4.4

Segmented Speech Signal (Frame Size = 256 samples)

4.3.3. Windowing

This processing step is meant to window each individual frame so as to minimize the signal

discontinuities at the beginning and end of each frame. The concept here is to minimize the

spectral distortion by using the window to taper the signal to zero at the beginning and end of

each frame. If we define the window as w(n), 0 ≤ n ≤ N -1, where N is the number of samples in

each frame, then the result of windowing is the signal as shown in (4.3).

y(n) = x(n) · w(n),   0 ≤ n ≤ N − 1                            (4.3)

According to Alan and Ronald (1999) and Thomas (2002), windowing is necessary when working with short-term frames of the speech signal, in order to select a portion of the signal that can reasonably be assumed to be stationary. It is performed to avoid unnatural discontinuities in the speech segment and distortion in the underlying spectrum, and to ensure that all parts of the speech signal are recovered and possible gaps between frames are eliminated.

Becchetti and Ricotti (1999) mention that the Hamming window is the most commonly used window shape in speech recognition technology, because high resolution is not required, considering that the next block in the feature extraction processing chain integrates all the closest frequency lines. The Hamming window, whose impulse response is a raised cosine, has the form (4.4):

w(n) = 0.54 − 0.46 cos(2πn / (N − 1)),   0 ≤ n ≤ N − 1
w(n) = 0,                                otherwise             (4.4)
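As a quick check (an illustration, not part of the thesis code), the raised-cosine formula of (4.4) should reproduce Matlab's built-in hamming window.

% The window of equation (4.4) compared with Matlab's hamming(N)
N = 256;
n = (0:N-1)';
w = 0.54 - 0.46*cos(2*pi*n/(N-1));   % equation (4.4) for 0 <= n <= N-1
max(abs(w - hamming(N)))             % difference is zero up to floating-point precision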

In IWASR, this step windows each speech segment using the Hamming window. Figure 4.5 shows the Matlab code for windowing the segmented speech samples, whereas Figure 4.6 shows the Hamming window used for the segmented speech signal of Figure 4.4.

Figure 4.5

Matlab Code for the Windowing Stage of MFCC

The effect of windowing on the speech segment in Figure 4.4 can be seen clearly in Figure

4.7. There seems to be a smooth transition towards the edges of the frame.

h = hamming(256);              % windowing using the Hamming window
for j = 1:83
    b(:,j) = a(:,j) .* h;      % a holds the segmented frames of 256 samples; h is the Hamming window
end


Figure 4.6

Hamming Window

Figure 4.7

Windowed Speech Segment

4.3.4. Discrete Fourier Transform (DFT)

Owens (1993) stated that the discrete Fourier transform (DFT) is normally computed via the

fast Fourier transform (FFT) algorithm, which is a widely used technique for evaluating the

frequency spectrum of speech. FFT converts each frame of N samples from the time domain into


the frequency domain. The FFT is a fast algorithm, which exploits the inherent redundancy in the

DFT and reduces the number of calculations. FFT provides exactly the same result as the direct

calculation.

According to Alexander and Sadiku (2000), Fourier series enable a periodic function to be represented as a sum of sinusoids and thus convert a signal from the time domain to the frequency domain. The same analysis can be carried out on non-periodic functions using the Fourier transform. Therefore, the Fourier transform is used because of the non-periodic behavior of the speech signal.

Alexander and Sadiku (2000) also add that the purpose of performing the Fourier transform is to convert the convolution of the glottal pulse u[n] and the vocal tract impulse response h[n] in the time domain into a multiplication in the frequency domain. This is supported by the convolution theorem (4.5).

If X(w), H(w) and Y(w) are the Fourier transforms of x(t), h(t) and y(t) respectively, then:

Y(w) = FT[ h(t) * x(t) ] = H(w) × X(w)                         (4.5)

In analyzing speech signals, the Discrete Fourier Transform (DFT) is used instead of the continuous Fourier transform, because after preprocessing the speech signal consists of a discrete number of samples. The discrete Fourier transform is represented by equation (4.6), where X(k) is the Fourier transform of x(n):

X(k) = Σ x(n) e^(−j2πkn/N),  summed over n = 0, …, N − 1,  for k = 0, 1, …, N − 1        (4.6)
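As an illustration of equation (4.6) (not part of the thesis code), the direct DFT sum and Matlab's fft command give the same result, which is the point made at the beginning of this section.

% Direct evaluation of equation (4.6) compared with Matlab's fft
x = randn(256,1);                                % any frame of 256 samples
Nfft = length(x);
n = (0:Nfft-1)';
Xd = zeros(Nfft,1);
for k = 0:Nfft-1
    Xd(k+1) = sum(x .* exp(-1j*2*pi*k*n/Nfft));  % X(k) from equation (4.6)
end
max(abs(Xd - fft(x)))                            % difference is at the level of numerical round-off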


In IWASR, the windowed speech segment is transformed to the frequency domain using the Matlab command shown in Figure 4.8, which computes the fast Fourier transform (FFT) and returns the discrete Fourier transform (DFT) of each windowed frame.

Figure 4.8
Matlab Code for the FFT Computation

for j = 1:83                  % FFT(X) is the discrete Fourier transform (DFT) of vector X
    y(:,j) = fft(b(:,j));     % calculating the FFT of the windowed segment, stored in the matrix "y"
end

4.3.5. Mel Filterbank

The information carried by the low-frequency components of the speech signal is more important than that carried by the high-frequency components. In order to place more emphasis on the low-frequency components, Mel scaling is applied.

According to Thomas (2002), the Mel scale is a scale of the perceived pitch of a tone. It does not correspond linearly to normal frequency: it behaves linearly below 1 kHz and logarithmically above 1 kHz. This is based on studies of human perception of the frequency content of sound. Formula (4.7) computes the Mel value for a given frequency f in Hz and shows the relationship between frequency in hertz and Mel-scaled frequency:

Frequency (Mel scaled) = 2595 · log10(1 + f(Hz) / 700)          (4.7)

In order to implement the filterbank, the magnitude coefficients of each Fourier-transformed speech segment are binned by correlating them with each triangular filter in the filterbank. To perform this Mel scaling, a number of triangular filters, collectively called the filterbank, are used. Figure 4.9 shows the configuration of the filters.



Figure 4.9

(a) Filterbank in Mel Scale Frequency and (b) Filterbank Representation in Normal Frequency

From Figure 4.9 (a) it can be seen that, on the Mel frequency scale, the center frequencies of the triangular filters are spaced uniformly with respect to each other. The spacing between the center frequencies increases logarithmically when the scale is converted to Hertz, as shown in Figure 4.9 (b).

In Mel filterbank analysis, the magnitude coefficients of the Fourier transform of the speech segment are binned by correlating them with each triangular filter in the filterbank.
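The Matlab code of Figure 4.10 in the next section multiplies the power spectrum by a Mel filterbank matrix m, whose construction is not listed in this chapter. As an illustration only, assuming nof = 40 triangular filters, a frame size N = 256 and a sampling rate R_fs = 16000 Hz, such a matrix could be built along the following lines using the Mel mapping of equation (4.7).

% Sketch of a triangular Mel filterbank matrix m (nof filters x (N/2 + 1) spectral bins)
nof = 40; N = 256; R_fs = 16000;
n2   = 1 + floor(N/2);
mel  = @(f) 2595 * log10(1 + f/700);                  % Hz -> Mel, equation (4.7)
imel = @(x) 700 * (10.^(x/2595) - 1);                 % Mel -> Hz, inverse of equation (4.7)
edges_mel = linspace(mel(0), mel(R_fs/2), nof + 2);   % filter edges equally spaced on the Mel scale
edges_hz  = imel(edges_mel);                          % the same edges expressed in Hz
fbin = (0:n2-1) * R_fs / N;                           % centre frequency of each FFT bin in Hz
m = zeros(nof, n2);
for i = 1:nof
    lo = edges_hz(i); c = edges_hz(i+1); hi = edges_hz(i+2);
    rising  = (fbin - lo) / (c - lo);                 % rising slope of triangle i
    falling = (hi - fbin) / (hi - c);                 % falling slope of triangle i
    m(i,:)  = max(0, min(rising, falling));           % triangular filter i
end

The exact filter shapes and normalisation vary between implementations, so this sketch should be read as one possible construction rather than the one used to produce the thesis results.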

4.3.6. Logarithm

The logarithm has the effect of changing multiplication into addition; this step therefore converts the multiplicative components of the magnitude spectrum into additive ones. According to Becchetti and Ricotti (1999), this step computes the logarithm of the magnitude of the coefficients, exploiting the algebraic property of the logarithm which turns a power into a multiplication by a scaling factor.


Becchetti and Ricotti (1999) also note that magnitude and logarithm processing is performed by the ear as well: the magnitude discards the useless phase information, while the logarithm performs a dynamic compression that makes the feature extraction process less sensitive to variations in dynamics. The result obtained after this step is often referred to as the signal's log Mel spectrum.

In IWASR, the logarithm of the Mel-filtered speech segment is computed using the Matlab command “log”, which returns the natural logarithm of the elements of the Mel-filtered speech segment. Figure 4.10 shows the Matlab code that applies “log”, whereas Figure 4.11 shows the resulting output.

Figure 4.10

Matlab Code for Logarithm Computation

for j = 1:83
    y(:,j) = fft(b(:,j));          % calculating the FFT of the windowed segment
    n2 = 1 + floor(N/2);           % keep only the non-redundant half of the spectrum
    ms = m * abs(y(1:n2,j)).^2;    % apply the Mel filterbank matrix m to the power spectrum
    v(:,j) = dct(log(ms));         % take the logarithm, then convert back to the time domain with
                                   % the discrete cosine transform; Y = dct(X) returns the DCT of X
end


Figure 4.11

Result of the Matlab Logarithm Command

4.3.7. Inverse of Discrete Fourier Transform (IDFT)

According to Becchetti and Ricotti (1999), the final procedure for the Mel frequency cepstral

coefficients (MFCC) computation consists of performing the inverse of DFT on the logarithm of

the magnitude of the Mel filterbank output.

The speech signal is represented as a convolution between slowly varying vocal tract impulse

response and quickly varying glottal pulse. Therefore, by taking the inverse of DFT of the

logarithm of the magnitude spectrum, the glottal pulse and the impulse response can be

separated. The result obtained after this step is often referred to as signal’s Mel cepstrum.

In IWASR, this was the final step of computing the MFCCs. It required computing the

inverse Fourier transform of the logarithm of the magnitude spectrum in order to obtain the Mel

frequency cepstrum coefficients. This is shown in Figure 4.12.


Figure 4.12

Mel Frequency Cepstrum Coefficients (MFCCs)

At this stage the MFCCs are ready to be arranged into a vector format known as the features vector. This features vector is then used as an input for the next stage, which is concerned with training the feature vectors that are randomly chosen to form the VQ codebook. Each features vector has a size of [1 × 3237].
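The stated size of [1 × 3237] corresponds to 39 coefficients per frame over the 83 frames (39 × 83 = 3237). The following sketch is an assumption only (the exact number and choice of retained coefficients is inferred from that size, not stated in the text): the per-frame coefficients of the matrix v produced in Figure 4.10 could be flattened into a single features vector as follows.

% Sketch: flatten the per-frame cepstral coefficients into one features vector
c    = v(1:39, :);          % keep 39 coefficients per frame (inferred from the stated vector size)
feat = reshape(c, 1, []);   % row vector of size [1 x 3237] for 83 frames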

4.4 Features Classification Using Vector Quantization (VQ) Algorithm

In section 4.3, the features extraction process using MFCC was discussed, whereby discriminative features of each isolated word are extracted from the speech signal. In this section, the classification or clustering method known as vector quantization is discussed. This method is part of the decision-making process of determining a word based on previously stored information, and it takes the feature vectors extracted from the speech signals by MFCC as its inputs.

This step is divided into two parts, namely features training and features matching/testing. Features training is the process of enrolling or registering a new speech sample of a distinct word in the identification system database by constructing a model of the word based on the features extracted from the word's speech samples. It is mainly concerned with randomly selecting feature vectors of the recorded speech samples and training the codebook using the LBG vector quantization (VQ) algorithm. Features matching/testing, on the other hand, is the process of computing a matching score, which is the measure of similarity between the features extracted from the unknown word and the stored word models in the database. The unknown word is identified as the one with the minimum matching score in the database.

4.4.1. Features Training Using Vector Quantization (VQ) Algorithm

Vector quantization (VQ) is the process of taking a large set of feature vectors and producing a smaller set of feature vectors that represent the centroids of the distribution, i.e. points spaced so as to minimize the average distance to every other point. Vector quantization is used because it would be impractical to store every single feature vector generated from the speech utterances by the MFCC algorithm.

The training process of the VQ codebook applies an important algorithm known as the LBG VQ algorithm, which clusters a set of L training vectors into a set of M codebook vectors. The algorithm is formally implemented by the following recursive procedure (Linde et al., 1980); the steps below, as described by Rabiner and Juang (1993), are required for training the VQ codebook.

1. Design a 1-vector codebook; this is the centroid of the entire set of training vectors. Therefore,

no iteration is required in this step.


2. Double the size of the codebook by splitting each codevector yn of the current codebook according to the following rule (4.8):

yn+ = yn (1 + ε)
yn− = yn (1 − ε)                                               (4.8)

Where n varies from 1 to the current size of the codebook, and ε is the splitting parameter, usually in the range 0.01 ≤ ε ≤ 0.05.
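As a small sketch of this splitting rule (illustration only, assuming a current codebook matrix CODE_n with one codevector per row and a splitting parameter of 0.01):

% Doubling the codebook by splitting every codevector, as in equation (4.8)
eps_split  = 0.01;
CODE_plus  = CODE_n * (1 + eps_split);    % yn+ = yn (1 + eps)
CODE_minus = CODE_n * (1 - eps_split);    % yn- = yn (1 - eps)
CODE_n     = [CODE_plus; CODE_minus];     % the codebook size doubles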

In IWASR, the initial codebook is obtained by combining all the selected feature vectors for

each distinct word in one database. The purpose of this initial codebook is to serve as a starting

codebook for training each selected feature vector against one another. The initial codebook is

referred to as the variable “CODE” in the LBG VQ Matlab function. Figure 4.13 shows the

Matlab code for defining the initial codebook as well as the values for the relevant parameters

such as the number of iterations for training the VQ codebook.

function [CODE_n, dist] = vq(X, CODE, n_it, QUIET)
% Performs n_it iterations of the K-means algorithm on X, using CODE as the initial codebook.
% X is the input features matrix and CODE is the initial codebook.
n_it = 5;                       % n_it is the number of iterations
error(nargchk(3, 4, nargin));
if (nargin < 4)
    QUIET = 0;
end
% Dimensions of X
[n,p] = size(X);
% m is the codebook size
m = length(CODE(:,1));
% dist is the distortion value recorded at each iteration
dist = zeros(1,n_it);


Figure 4.13

Defining the Initial Codebook and Relevant Variables

3. Nearest-Neighbor Search: for each training vector, find the codeword in the current codebook

that is closest (in terms of similarity measurement), and assign that vector to the corresponding

cell (associated with the closest centroid). This is done using the K-means iterative algorithm.

In IWASR, this step finds the nearest neighbor in terms of squared distortion. For each speech vector, the nearest codevector in the current codebook is found using the Euclidean distance, and the vector is assigned to that nearest codevector. In Matlab, both the feature vectors and the codevectors are represented in matrix form; for the Euclidean distance to be measured between them, the row size must be the same for both matrices, i.e. the feature vectors and codevectors must have the same dimension. Figure 4.14 shows the Matlab computation for finding the nearest neighbor using the Euclidean distance measure.

% Main loop
CODE_n = CODE;
for iter = 1:n_it
    % Find the nearest neighbor in terms of squared distortion
    DIST = zeros(m,n);
    if (p > 1)
        for i = 1:m
            DIST(i,:) = sum(((X - ones(n,1)*CODE_n(i,:))').^2);
        end
    else
        DIST = (ones(m,1)*X' - CODE_n*ones(1,n)).^2;
    end
    [vm,label] = min(DIST);    % nearest codevector (label) for every training vector
    % Mean distortion for this iteration
    dist(iter) = mean(vm);
    % (the loop continues with the codebook update shown in Figure 4.15)


Figure 4.14

Finding the Nearest Neighbor Using the Euclidean Distance Measure

4. Centroid Update: update the centroid in each cell using the centroid of the training vectors assigned to that cell. In IWASR, the centroid update also requires updating the codebook, by taking the average of the speech vectors in a cell to find the new value of the codevector. Figure 4.15 shows the Matlab code for this step.

Figure 4.15
Updating the Codebook

% Update the codebook
for i = 1:m
    ind = (1:n);
    ind = ind((label == i));              % training vectors assigned to codebook entry i
    if (length(ind) == 1)
        % Only one training vector is assigned to this codebook entry
        CODE_n(i,:) = X(ind,:);
    elseif (~isempty(ind))
        CODE_n(i,:) = mean(X(ind,:));     % centroid of the training vectors in the cell
    end
end

5. Iteration 1: repeat steps 3 and 4 until the average distance falls below a preset threshold.

6. Iteration 2: repeat steps 2, 3, and 4 until a codebook of size M is reached.
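Taken together, steps 1 to 6 can be sketched as a small driver loop around the vq() K-means function of Figure 4.13. This is an illustration only; the variables X (the training vectors, one per row) and Mtarget (the final codebook size) are assumed names, not names used in the thesis code, and a fixed number of K-means passes stands in for the distortion threshold of step 5.

% Sketch of the overall LBG procedure (steps 1 to 6)
CODE = mean(X, 1);                                     % step 1: 1-vector codebook, the centroid of all training vectors
eps_split = 0.01;                                      % splitting parameter
while size(CODE, 1) < Mtarget
    CODE = [CODE*(1+eps_split); CODE*(1-eps_split)];   % step 2: split every codevector
    [CODE, dist] = vq(X, CODE, 5);                     % steps 3 to 5: K-means refinement of the doubled codebook
end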

Figure 4.16 shows a flow diagram of the detailed steps of the LBG algorithm. “Cluster

vectors” is the nearest-neighbor search procedure which assigns each training vector to a cluster

associated with the closest codeword. “Find centroids” is the centroid update procedure.



“Compute D (distortion)” sums the distances of all training vectors in the nearest-neighbor

search so as to determine whether the procedure has converged.

Figure 4.16

Flow Diagram of the LBG Algorithm (Rabiner and Juang, 1993)

By performing these steps for training the VQ codebook, IWASR now has trained sets of codebooks, which serve as the main databases of the system for the testing purposes explained in the next section.

4.4.2. Features Matching/Testing Using Euclidean Distance Measure

The Euclidean distance measure is applied in order to measure the similarity or dissimilarity between two spoken words, which takes place after a spoken word has been quantized into its codebook. The matching of an unknown word is performed by measuring the Euclidean distance between the features vector of the unknown word and the models (codebooks) of the known words in the database. The goal is to find the codebook with the minimum distance, which identifies the unknown word (Franti et al., 1997).

For example, in the testing or identification session, the Euclidean distance between the features vector and the codebook of each spoken word is calculated, and the word with the smallest average minimum distance is picked, as shown in equation (4.9) below:

d(x, y) = Σ (xi − yi)²,  summed over i = 1, …, D               (4.9)

Where xi is the ith element of the input features vector x, yi is the ith element of the features vector y in the codebook, D is the vector dimension, and d(x, y) is the distance between x and y.

In IWASR, a simple Euclidean distance measure is applied to an unknown features vector, which is compared against the trained codebooks. Therefore, an unknown speech signal and a trained codebook are the inputs to this algorithm, which measures their distance and thereby tests the overall performance of IWASR. The outputs of this algorithm are the ID numbers assigned to each features vector in the trained codebook as well as the distances, or squared error values. The algorithm then picks the ID number of the features vector that has the minimum distance to the unknown features vector. The main purpose of this stage is to measure the accuracy, or recognition rate, of IWASR and thereby assess the validity of the algorithms used in this application.
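As a hedged sketch of this matching stage (illustration only: the names codebooks, a cell array holding one trained codebook per word, and feat, the MFCC features vector of the unknown word, are assumptions rather than names from the thesis code):

% Sketch: identify the unknown word as the codebook with the minimum (squared) Euclidean distance
best_id = 0; best_dist = Inf;
for id = 1:length(codebooks)
    C = codebooks{id};                                % trained codebook of word id, one codevector per row
    D = sum((C - ones(size(C,1),1)*feat).^2, 2);      % squared Euclidean distance to every codevector
    if min(D) < best_dist
        best_dist = min(D);                           % smallest distortion found so far
        best_id   = id;                               % ID number of the best matching word
    end
end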

4.5 Summary

This chapter has presented a detailed technical overview of MFCC and VQ, and of how the two algorithms relate to each other. MFCC handles the features extraction process and produces speech feature vectors that are then used as the training set in the VQ algorithm to train the VQ codebook. VQ therefore works as a classification or pattern recognition technique that assigns different speech signals to classes. LBG VQ is the most commonly used VQ algorithm and is divided into two phases. The first phase is training, whereby randomly selected speech signals form a training set of samples used as an initial codebook for training the VQ codebook. The second phase is matching/testing, which uses the Euclidean distance measure to compare an unknown speech signal against the VQ codebooks and selects the codeword with the minimum distance.

The combination of MFCC and VQ has been widely used in speaker recognition. Thus, this

research studies the possibility of using this combination in telephony speech recognition

systems.

This chapter has also presented a detailed IWASR methodology and the Matlab implementation of MFCC and VQ. It also emphasized that IWASR is a standalone application categorized under the telephony applications of speech recognition.