Auto-Regressive Hidden Markov Models


Page 1: Auto-Regressive Hidden Markov Models

Page 1 of 2: Auto-Regressive HMMs

Auto-Regressive Hidden Markov Models

Continuous density HMMs – The random vector observed during a hidden state is drawn from a continuous pdf modeled by the following mixture:

\[ b_j(\mathbf{O}) = \sum_{m=1}^{M} c_{jm}\,\mathcal{N}[\mathbf{O}, \boldsymbol{\mu}_{jm}, \mathbf{U}_{jm}], \qquad 1 \le j \le N \]

where \(\mathcal{N}\) is an elliptically symmetric density (e.g., Gaussian) with mean vector \(\boldsymbol{\mu}_{jm}\) and covariance matrix \(\mathbf{U}_{jm}\).

+ Avoids quantization errors (e.g., from codebooks in discrete HMMs) and yields better performance
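As a concrete illustration of the mixture emission density above, the sketch below evaluates \(b_j(\mathbf{O})\) for the Gaussian case with NumPy. The function and variable names are illustrative, not from the original slides.

```python
import numpy as np

def mixture_density(O, c, mu, U):
    """Evaluate b_j(O) = sum_m c[m] * N(O; mu[m], U[m]) for Gaussian components.

    O  : (K,) observation vector
    c  : (M,) mixture weights (should sum to 1)
    mu : list of M mean vectors, each (K,)
    U  : list of M covariance matrices, each (K, K)
    """
    K = O.shape[0]
    density = 0.0
    for cm, mum, Um in zip(c, mu, U):
        diff = O - mum
        quad = diff @ np.linalg.solve(Um, diff)            # (O - mu)^T U^{-1} (O - mu)
        norm = ((2 * np.pi) ** K * np.linalg.det(Um)) ** -0.5
        density += cm * norm * np.exp(-0.5 * quad)
    return density
```

For a sanity check: a single zero-mean, identity-covariance component in two dimensions evaluated at the origin gives \(1/(2\pi)\).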

Auto-Regressive HMMs – The random vector observed during a hidden state is drawn from a Gaussian autoregressive (AR) process, where \(O_k\) denotes the k-th component of the observation vector \(\mathbf{O}\).

HMM training involves learning the state transition matrix, the mixture weights, and the parameters of the basis density for each mixture component, using maximum likelihood estimation.

+ AR modeling accounts for the correlation between the components of the observation vector

+ Known to be effective for recognition of discrete speech utterances (e.g., isolated digit recognition)

\[ b_j(\mathbf{O}) = \sum_{m=1}^{M} c_{jm}\, b_{jm}(\mathbf{O}) \]

\[ b_{jm}(\mathbf{O}) = (2\pi)^{-K/2} \exp\left\{-\tfrac{1}{2}\,\delta(\mathbf{O}, \mathbf{a}_{jm})\right\} \]

\[ \delta(\mathbf{O}, \mathbf{a}) = r_a(0)\,r(0) + 2\sum_{i=1}^{p} r_a(i)\,r(i) \]

where \(r_a(i)\) is the autocorrelation of the AR parameters and \(r(i)\) is the autocorrelation of the observation samples, with the AR process itself given by

\[ O_k = -\sum_{i=1}^{p} a_i O_{k-i} + e_k \]

where \(e_k\) is a zero-mean Gaussian white-noise excitation (with the convention \(a_0 = 1\)).
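The distance \(\delta(\mathbf{O}, \mathbf{a})\) can be computed directly from the two autocorrelation sequences. The sketch below assumes the standard convention \(a_0 = 1\) with \(r_a(i) = \sum_{n=0}^{p-i} a_n a_{n+i}\) and unnormalized observation autocorrelations; the function names are illustrative.

```python
import numpy as np

def ar_autocorr(a):
    """Autocorrelation r_a(i) of the AR parameter vector, with a_0 = 1 prepended."""
    coeffs = np.concatenate(([1.0], a))
    p = len(a)
    return np.array([coeffs[: len(coeffs) - i] @ coeffs[i:] for i in range(p + 1)])

def obs_autocorr(O, p):
    """Autocorrelation r(i) of the observation samples, lags 0..p."""
    K = len(O)
    return np.array([O[: K - i] @ O[i:] for i in range(p + 1)])

def ar_distance(O, a):
    """delta(O, a) = r_a(0) r(0) + 2 * sum_{i=1}^{p} r_a(i) r(i)."""
    p = len(a)
    ra = ar_autocorr(a)
    r = obs_autocorr(O, p)
    return ra[0] * r[0] + 2.0 * (ra[1:] @ r[1:])
```

For example, with p = 1, a = [0.5], and O = [1, 1]: r_a = (1.25, 0.5) and r = (2, 1), so δ = 1.25·2 + 2·0.5·1 = 3.5.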

Page 2: Auto-Regressive Hidden Markov Models

Page 2 of 2: Auto-Regressive HMMs

References:

• Lawrence R. Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition”, Proceedings of the IEEE, Vol. 77, No. 2, February 1989.
• Biing-Hwang Juang and Lawrence R. Rabiner, “Mixture Autoregressive Hidden Markov Models for Speech Signals”, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-33, No. 6, December 1985.