research article adaptive algorithm for...
TRANSCRIPT
Research ArticleAdaptive Algorithm for Multichannel Autoregressive Estimationin Spatially Correlated Noise
Alimorad Mahmoudi
Electrical Engineering Department Shahid Chamran University Ahvaz Iran
Correspondence should be addressed to Alimorad Mahmoudi amahmoudiscuacir
Received 24 April 2014 Accepted 5 June 2014 Published 19 June 2014
Academic Editor Chi-Yi Tsai
Copyright copy 2014 Alimorad Mahmoudi This is an open access article distributed under the Creative Commons AttributionLicense which permits unrestricted use distribution and reproduction in any medium provided the original work is properlycited
This paper addresses the problemofmultichannel autoregressive (MAR) parameter estimation in the presence of spatially correlatednoise by steepest descent (SD) method which combines low-order and high-order Yule-Walker (YW) equations In additionto yield an unbiased estimate of the MAR model parameters we apply inverse filtering for noise covariance matrix estimationIn a simulation study the performance of the proposed unbiased estimation algorithm is evaluated and compared with existingparameter estimation methods
1 Introduction
ThenoisyMARmodeling hasmany applications such as highresolution multichannel spectral estimation [1] parametricmultichannel speech enhancement [2] MIMO-AR timevarying fading channel estimation [3] and adaptive signaldetection [4]
When the noise-free observations are available theNuttall-Strand method [5] the maximum likelihood (ML)estimator [6] and the extension of some standard schemesin scalar case to multichannel can be used for estimation ofMAR model parameters The relevance of the Nuttall-Strandmethod is explained in [7] by carrying out a comparativestudy between these methods The noise-free MAR estima-tion methods are sensitive to the presence of additive noisein the MAR process which limits their utility [1]
The modified Yule-Walker (MYW) method is a conven-tional method for noisy MAR parameter estimation Thismethod uses estimated correlation at lags beyond the ARorder [1] The MYW method is perhaps the simplest onefrom the computational point of view [8] but it exhibitspoor estimation accuracy and relatively low efficiency due tothe use of large-lag autocovariance estimates [8] Moreover
numerical instability issues may occur when it is used inonline parameter estimation [9]
The least-squares (LS) method is another method fornoisy MAR parameter estimation The additive noise causesthe least-squares (LS) estimates of MAR parameters to bebiased In [10] an improved LS (ILS) based method hasbeen developed for estimation of noisy MAR signals In thismethod bias correction is performed using observation noisecovariance estimation
The method proposed in [10] denoted by vector ILSbased (ILSV) method is an extension of Zhengrsquos method [11]to the multichannel case In the ILSV method the channelnoises can be correlated and no constraint is imposed onthe covariance matrix of channel noises Nevertheless thismethod has poor convergence when the SNR is low
In [12] the ILSV algorithm is modified using symmetryproperty of the observation covariance matrix which isnamed an advanced least square vector (ALSV) algorithmOne step called symmetrisation is added to the ILSV algo-rithm to estimate the observation noise covariance matrix
In [13 14] methods based on errors-in-variables andcross coupled Kalman andH
infin filters are suggested respec-
tively These methods are also an extension of the scalarmethods presented in [15] and [16 17] respectively In these
Hindawi Publishing CorporationJournal of StochasticsVolume 2014 Article ID 502406 7 pageshttpdxdoiorg1011552014502406
2 Journal of Stochastics
methods the observation noise covariancematrix is assumedto be a diagonal matrix This assumption means that thechannel noises are uncorrelated These methods are notsuitable for spatially correlated noise case
Note that cross coupled filter idea consists of using twomutually interactive filters Each time a new observation isavailable the signal is estimated using the latest estimatevalue of the parameters and conversely the parameters areestimated using the latest a posteriori signal estimate
In this paper we combine low-order and high-orderYW equations in a new scheme and use adaptive steepestdescent (SD) algorithm to estimate the parameters of theMAR process We also apply the inverse filtering idea forobservation noise covariance matrix estimation Finally wecan theoretically perform the convergence analysis of the pro-posedmethodUsing computer simulations the performanceof the proposed algorithm is evaluated and compared withthat of the LS MYW ILSV and ALSV methods in spatiallycorrelated noise case Note that we do not do comparisonswith [13 14] because they assume that the channel noises areuncorrelated
The proposed estimation algorithm provides more accu-rate parameter estimationThe convergence behaviour of theproposed method is better than those of the ILSV and theALSV methods
2 Data Model
A real noisy119898-channel AR signal can be modeled as
z (119899) = A1z (119899 minus 1) + sdot sdot sdot + A
119901z (119899 minus 119901) + e (119899)
y (119899) = z (119899) + w (119899) (1)
where e(119899) is the119898times1 input noise vector and z(119899) is the119898times1MAR output noise vector A
119894rsquos (119894 = 1 2 119901) are the119898 times 119898
MARparametermatrices and y(119899) is the119898times1 observed noisevectorThe noises e(119899) andw(119899) are assumed to be zero-meanstationary white noises with the 119898 times 119898 covariance matricesgiven by
Re (119897) = 119864 e (119899) e(119899 minus 119897)119879
= Σ119890120575119897
R119908(119897) = 119864 w (119899)w(119899 minus 119897)119879 = Σ
119908120575119897
(2)
where 120575119897is the Kronecker delta and Σ
119890and Σ
119908are the
unknown input noise covariance matrix and the observationnoise covariance matrix which are generally nondiagonalThis means that the observation noise is spatially correlated119864sdot is the expectation operator and 119879 denotes the transposeoperation
In this data model we have the following assumptions
(i) The AR order 119901 and the number of channels 119898 areknown
(ii) TheMARparametermatricesA119894are constrained such
that the roots of
det A119901(119911) = 0 (3)
lie inside the unit circle where A119901(119911) = I
119898minus sum119901
119894=1A119894119911minus119894 I119898
is the 119898 times 119898 identity matrix and det denotes determinantof a matrix This condition on the matrices A
119894guarantees the
stability of the MAR model [1] Note that the roots of (3) arepoles of the MAR model
(iii) The observation noise w(119899) is uncorrelated with theinput noise e(119899) that is 119864w(119899)e(119896)119879 = 0 for all 119899and 119896
Using (1) we can write the following linear regression fory(119899)
y (119899) = A119879g (119899) + 120576 (119899) (4)
where A is the119898119901 times 119898 parameter matrix defined as
A = [A1sdot sdot sdot A
119901]
119879 (5)
and g(119899) is the119898119901 times 1 regression vector defined as
g (119899) = [y(119899 minus 1)119879 sdot sdot sdot y(119899 minus 119901)119879]119879
(6)
The119898 times 1 equation error vector 120576(119899) is
120576 (119899) = minusA119879n (119899) + w (119899) + e (119899) (7)
where
n (119899) = [w(119899 minus 1)119879 sdot sdot sdot w(119899 minus 119901)119879]119879
(8)
The covariance matrices of 120576(119899) and y(119899) are given as
R120576(119897) = 119864 120576 (119899) 120576(119899 minus 119897)
119879
=
119901
sum
119895=119897
A119895Σ119908A119879119895minus119897+ Σ119890120575119897 0 le 119897 le 119901
(9)
R119910(119897) = 119864 y (119899) y(119899 minus 119897)119879 = R
119911(119897) + Σ
119908120575119897 (10)
where A0= minusI119898
The main objective in this paper is to estimateA Σ119908 and
Σ119890using the given samples y(1) y(2) y(119873)
3 Steepest Descent Algorithm forNoisy MAR Models
The proposed estimation algorithm uses steepest descentmethod It iterates between the following two steps untilconvergence (I) given noise covariance matrix estimateMAR matrices using steepest descent method (II) givenMAR matrices estimate noise covariance matrix based oninverse filtering idea
31 MAR Parameter Estimation Given Observation NoiseCovariance Matrix The Yule-Walker equations for the noisyMAR process given by (4) are
R119910(119897) minus
119901
sum
119894=1
A119894R119910(119897 minus 119894) +
119901
sum
119894=1
A119894Σ119908120575119897minus119894= 0 119897 ge 1 (11)
Journal of Stochastics 3
Evaluating (11) for 119897 = 1 119901 and using R119910(119897) = R
119910(minus119897)119879
give the following system of linear equations
(R119910minus R119908)A minus r
119910= 0 (12)
where
R119910=
[
[
[
[
[
[
R119910(0) R
119910(1) sdot sdot sdot R
119910(119901 minus 1)
R119910(minus1) R
119910(0) sdot sdot sdot R
119910(119901 minus 2)
d
R119910(1 minus 119901) R
119910(2 minus 119901) sdot sdot sdot R
119910(0)
]
]
]
]
]
]
r119910= [R119910(1) R
119910(2) sdot sdot sdot R
119910(119901)]
119879
(13)
and R119908is a block-diagonal119898119901 times 119898119901matrix given by
R119908=
[
[
[
[
[
[
Σ119908
0 sdot sdot sdot 0
0 Σ119908
0 d 00 sdot sdot sdot sdot sdot sdot Σ
119908
]
]
]
]
]
]
= Σ119908otimes I119901 (14)
Evaluating (11) for 119897 = 119901 + 1 119901 + 119902 and using R119910(119897) =
R119910(minus119897)119879 give the following modified Yule-Walker (MYW)
equations
R119910119902A minus r119910119902= 0 (15)
where
R119910119902=
[
[
[
[
[
[
R119910(minus119901) R
119910(minus119901 + 1) sdot sdot sdot R
119910(minus1)
R119910(minus119901 minus 1) R
119910(minus119901) sdot sdot sdot R
119910(minus2)
d
R119910(minus119901 minus 119902) R
119910(minus119901 minus 119902 + 1) sdot sdot sdot R
119910(minus119902)
]
]
]
]
]
]
r119910119902= [R119910(119901 + 1) R
119910(119901 + 2) sdot sdot sdot R
119910(119901 + 119902)]
119879
(16)
and 119902 is an arbitrary nonnegative integerCombining the low-orderYule-Walker equations given by
(12) and the high-order Yule-Walker equations given by (15)we obtain
H119910A minusH
119908A minus h
119910= 0 (17)
where
H119910= [
R119910
R119910119902
] h119910= [
r119910
r119910119902
] (18)
H119908= [
R119908
0 ] (19)
Premultiplying (17) by (H119879119910H119910)minus1H119879119910 we obtain
A minus A119867minus R119867H119908A = 0 (20)
where
A119867= (H119879119910H119910)
minus1
H119879119910h119910
R119867= (H119879119910H119910)
minus1
H119879119910
(21)
And R119867of dimension119898119901 times 119898(119901 + 119902) can be partitioned as
R119867= [R1198671
R1198672] (22)
where the dimensions ofR1198671
andR1198672
are119898119901times119898119901 and119898119901times119898119902 respectively
Substituting (19) and (22) into (20) we get
A minus A119867minus R1198671R119908A = 0 (23)
If R119908is assumed to be known we can iteratively estimate A
using steepest descent algorithm as follows
A(119894) = A(119894minus1) + 120583 (A119867+ R1198671R119908A(119894minus1) minus A(119894minus1)) (24)
where 120583 is the step size parameter and A119867 R1198671
can beestimated from the observations We can also control thestability and rate of convergence of the algorithmby changing120583 The above equation is converged if 120583 is selected betweenzero and 2(1minus120582min)where 120582min is the maximum eigenvalueof the R
1198671R119908matrix Furthermore if we set 120583 = 1 and 119902 = 0
in (24) it is changed to
A(119894) = A119897119904+ Rminus1119910R119908A(119894minus1) (25)
where A119897119904= Rminus1119910r119910 which is previously derived in [8]
32 Application of Inverse Filtering for Estimation of Obser-vation Noise Covariance Matrix Given MAR Matrices If theobserved process is filtered by MAR matrices the outputprocess of the filter is obtained as
120576 (119899) = y (119899) minus A119879g (119899) (26)
By evaluating (9) for 119897 = 1 we can obtain
R120576(1) =
119901
sum
119895=1
A119895Σ119908A119879119895minus1 (27)
We can rewrite (27) as
(
119901
sum
119895=1
A119895otimes A119895minus1) vec (Σ119879
119908) = vec (R
120576(1)119879
) (28)
We solve (28) as follows
vec (Σ119879119908) = (
119901
sum
119895=1
A119895otimes A119895minus1)
minus1
r120576 (29)
where r120576= vec(R
120576(1)119879
) and vec is a column vectorizationoperator that is
vec([1198861 119886211988631198864
]) =
[
[
[
[
1198861
1198863
1198862
1198864
]
]
]
]
(30)
4 Journal of Stochastics
By assuming Σ119908= Σ119879
119908 symmetry property of Σ
119908 we can
improve estimate as [12]
Σ119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
(31)
where unvec is the inverse operation of vecUsing (26) R
120576(1) is evaluated from the observed signal as
R120576(1) = 119864 120576 (119899) 120576(119899 minus 1)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899 minus 1) minus A119879g(119899 minus 1)119879) (32)
In (32) A is unknown So we will use a recursive algorithmfor estimating Σ
119908and A This algorithm is discussed in
Section 33Now we assume that Σ
119908and A are known and derive an
estimate for Σ119890
Evaluating (9) for 119897 = 0 we obtain
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895 (33)
where
R120576(0) = 119864 120576 (119899) 120576(119899)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879) (34)
33 The Algorithm The results derived in previous sectionsand subsections are summarized to propose a recursivealgorithm for estimating Σ
119908and A
119894rsquos (119894 = 1 2 119901) The
improved least-squares algorithm for multichannel processesbased on inverse filtering (IFILSM) is as follows
Step 1 Compute autocovariance estimates by means of givensamples y(1) y(2) y(119873)
R119910(119896) =
1
119873
119873
sum
119899=119901+119902+1
y (119899) y(119899 minus 119896)119879 119896 = 0 1 2 119901 + 119902
(35)
and use them to construct the covariance matrix estimatesR119910 r119910 R119910119902 and r
119910119902
Step 2 Initialize 119894 = 0 where 119894 denotes the iteration numberThen compute the estimate
A(119894) = A119867= (H119879119910
H119910)
minus1
H119879119910
h119910 (36)
Step 3 Set 119894 = 119894 + 1 and calculate the estimate of observationnoise covariance matrix Σ
119908
R120576(1) =
1
119873
119873
sum
119899=119901+2
(y (119899) minus A(119894minus1)IFILSM119879g (119899))
times (y (119899 minus 1) minus A(119894minus1)IFILSM119879g(119899 minus 1)119879)
vec (Σ119879119908) = (
119901
sum
119895=0
A119895otimes A119895minus1)
minus1
r120576
Σ(119894)
119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
R(119894)119908= Σ119908otimes I119901
(37)
Step 4 Perform the bias correction
A(119894)IFILSM = A(119894minus1)IFILSM + 120583 (A119867 + R1198671R(119894)119908A(119894minus1)IFILSM minus A(119894minus1)IFILSM)
(38)
Step 5 If10038171003817100381710038171003817
A(119894)IFILSM minus A(119894minus1)
IFILSM10038171003817100381710038171003817
10038171003817100381710038171003817A(119894minus1)IFILSM
10038171003817100381710038171003817
le 120575 (39)
where sdot is the Euclidean norm and 120575 is an appropriatesmall positive number the convergence is achieved and theiteration process must be terminated otherwise go to Step 3
Step 6 Estimate Σ119890via (33)
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895
R120576(0) =
1
119873
119873
sum
119899=119901+1
(y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879)
(40)
4 Simulation Result
In simulation study we have compared the IFILSM methodwith the other existing methods Two examples have beenconsidered first frequency estimation and second syntheticnoisyMARmodel estimation In all simulations we set 120583 = 1for the proposed IFILSM algorithm
41 Estimating the Frequency of Multiple Sinusoids Embeddedin Spatially Colored Noise Consider several sinusoidal sig-nals in spatially colored noise as follows
z (119899) =
[
[
[
[
[
[
[
[
[
1199111(119899) =
119871
sum
119896=1
1198861119896cos (120596
1119896119899 + 1205931119896)
119911119898(119899) =
119871
sum
119896=1
119886119898119896
cos (120596119898119896119899 + 120593119898119896)
]
]
]
]
]
]
]
]
]
y (119899) = z (119899) + w (119899)
(41)
It is well known that z(119899) can be modeled as an MAR (2119871)process with zero driving noise vector (e(119899) = 0) So wecan apply the method proposed in the previous subsection to
Journal of Stochastics 5
Table 1 Simulation results (119872 = 1000 119873 = 500 SNR = 5 10 dB1205961= 212058701 and 120596
2= 212058703)
SNR 5 dB 5 dB 10 dB 10 dBMethods MSE1 (dB) MSE2 (dB) MSE1 (dB) MSE2 (dB)IFILSM minus4218 minus4704 minus5111 minus6006
ILSV minus3264 minus4191 minus4222 minus5051
ALSV minus3235 minus4276 minus4233 minus5136
LS minus575 minus2302 minus32 minus3127
MYW minus3758 minus4505 minus4682 minus5194
estimate the frequency of sinusoidal signals corrupted withspatially colored observation noise Note that the frequencyof sinusoids can be estimated by locating the peaks in theMAR spectrum
Example 1 We consider 119873 = 500 data samples of a signalcomprised of two sinusoids corrupted with spatially colorednoise Consider y(119899) as follows
y (119899) = [[
1198861cos (120596
1119899 + 1205931)
1198862cos (120596
2119899 + 1205932)
]
]
+ w (119899) (42)
where 1205961= 212058701 120596
2= 212058703 120593
1 1205932are independent
random phases uniformly distributed over [0 2120587] and w(119899)is Gaussian spatially colored noise having the followingcovariance matrix
Σ119908= [
1205902
11990811
1 1205902
1199082
] (43)
The signal-to-noise ratio (SNR) of each channel is defined as
SNR119894=
1198862
119894
21205902
119908119894
(44)
which is set to 5 and 10 dB The number of trials 119872 isset to 1000 and the parameter 119902 in the IFILSM and MYWmethods is set to 4 The 120575 in the ILSV ALSV and IFILSMmethods is set to 001We compare the proposedmethodwiththe LS MYW ALSV and ILSV methods in terms of meansquare error (MSE) of frequency estimates Note that otherexisting methods cannot work in the spatially correlatedobservation noise In Table 1 we tabulate the MSEs in termsof dB for the LS MYW ALSV ILSV and proposed methodsWe see that from the table the proposedmethod has the bestperformance
42 Synthetic MAR Processes
Example 2 Consider a noisy MARmodel with 119901 = 2119898 = 2and coefficient matrices
A1= [
11625 01432
01432 15443]
A2= [
minus096 minus00064
minus0064 minus064]
(45)
1000 1500 2000 2500 3000 3500 4000
minus28
minus26
minus24
minus22
minus20
minus18
minus16
minus14
minus12
minus10
minus8
N
MSE
(dB)
ALSVILSV
IFILSM
Figure 1MSE values ofMARmatrices estimate versus data samples119872 = 1000 SNR = 15 dB
The driving process e(119899) and observation noise w(119899) covari-ance are mutually uncorrelated Gaussian temporally whiteand spatially correlated processes with covariance matrices
Σ119890= [
1 02
02 07]
Case 1 Σ119908asymp [
04149 001
001 04689]
Case 2 Σ119908asymp [
129 001
001 144]
(46)
We compare the ILSV the ALSV and the proposed IFILSMmethod in terms MSE of MAR coefficient estimate and theconvergence probability defined as
119875 =
119872
119872119905
(47)
where 119872 is the number of runs that an iterative algorithmis converged and119872
119905is the total number of runs We set the
parameters of the algorithms119872 = 1000 and the parameter119902 in the IFILSM method is set to 4 The 120575 in the ILSV ALSVand IFILSM methods is set to 001
In Case 1 and Case 2 SNR for each channel is set to15 and 10 dB respectively In this example data samples 119873are changed from 1000 to 4000 with step size 1000 TheMSE and 119875 values are plotted in Figures 1 2 and 3 versus119873 respectively The probability of convergence is one forall algorithms in Case 1 and the figure is not plotted hereFrom the figures we can see that the accuracy and theconvergence of the IFILSM are better than the ILSV andALSVmethods Note that the ILSV and ALSVmethods havepoor convergence when SNR is lower or equal to 10 dBIt can be seen from the figures that the performance ofthe all algorithms is constant over the sample size becauseautocovariance estimates of the observations are not changedin this rage of data sample size
6 Journal of Stochastics
1000 1500 2000 2500 3000 3500 4000
minus40
minus35
minus30
minus25
minus20
minus15
minus10
minus5
0
5
N
MSE
(dB)
IFILSMILSV
ALSV
Figure 2 MSE values of MAR matrices estimate versus datasamples119872 = 1000 SNR = 10 dB
1000 1500 2000 2500 3000 3500 4000
0
02
04
06
08
1
12
N
Prob
abili
ty o
f con
verg
ence
ALSVIFILSMILSV
Figure 3 Probability of convergence values of MAR matricesestimate versus data samples119872 = 1000 SNR = 10 dB
5 Conclusion
Steepest descent (SD) algorithm is used for estimating noisymultichannel AR model parameters The observation noisecovariancematrix is nondiagonal thismeans that the channelnoises are assumed to be spatially correlated The inversefiltering idea is used to estimate the observation noisecovariance matrix Computer simulations showed that theproposed method has better performance and convergencethan the ILSV and the ALSV methods
Appendix
The following result discusses the convergence conditions ofthe proposed algorithm
Theorem A1 The necessary and sufficient condition for theconvergence of the proposed algorithm is to require that the stepsize parameter 120583 satisfy the following condition
0 lt 120583 lt
2
1 minus 120582min (A1)
where 120582min is the minimum eigenvalue of the R1198671R119908matrix
Proof Defining estimation error matrix x(119894) = A(119894) minus Asubstituting A minus A
119867minus R1198671R119908A = 0 into (24) and using
x(119894) = A(119894) minus A we obtain
x(119894) = (I minus 120583 (I minus R1198671R119908)) x(119894minus1) (A2)
The eigenvalues of I minus 120583(I minus R1198671R119908) are 1 minus 120583(1 minus 120582
119894) where
120582119894= 120582119894(R1198671R119908) is between 0 and 1
If minus1 lt 1 minus 120583(1 minus 120582119894) lt 1 then lim
119894rarrinfinx(119894) = 0 In this
case we have 0 lt 120583 lt 2(1 minus 120582119894) and their intersection is
0 lt 120583 lt 2120582max(I minus R1198671R119908) = 2(1 minus 120582min)
Conflict of Interests
The author declares that there is no conflict of interestsregarding the publication of this paper
References
[1] S M KayModern Spectral Estimation Theory and ApplicationPrentice-Hall Englewood Cliffs NJ USA 1988
[2] S Srinivasan R Aichner W B Kleijn and W KellermannldquoMultichannel parametric speech enhancementrdquo IEEE SignalProcessing Letters vol 13 no 5 pp 304ndash307 2006
[3] C Komninakis C Fragouli A H Sayed and R D WeselldquoMulti-input multi-output fading channel tracking and equal-ization using Kalman estimationrdquo IEEE Transactions on SignalProcessing vol 50 no 5 pp 1065ndash1076 2002
[4] P Wang H Li and B Himed ldquoA new parametric GLRT formultichannel adaptive signal detectionrdquo IEEE Transactions onSignal Processing vol 58 no 1 pp 317ndash325 2010
[5] S Marple Digital Spectral Analysis with Applications Prentice-Hall Englewood Cliffs NJ USA 1987
[6] D T Pham and D Q Tong ldquoMaximum likelihood estimationfor a multivariate autoregressive modelrdquo IEEE Transactions onSignal Processing vol 42 no 11 pp 3061ndash3072 1994
[7] A Schlogl ldquoA comparison of multivariate autoregressive esti-matorsrdquo Signal Process vol 86 no 9 pp 2426ndash2429 2006
[8] A Nehorai and P Stoica ldquoAdaptive algorithms for constrainedARMA signals in the presence of noiserdquo IEEE Transactions onAcoustics Speech and Signal Processing vol 36 no 8 pp 1282ndash1291 1988
[9] W X Zheng ldquoFast identification of autoregressive signals fromnoisy observationsrdquo IEEE Transactions on Circuits and SystemsII Express Briefs vol 52 no 1 pp 43ndash48 2005
[10] A Mahmoudi andM Karimi ldquoEstimation of the parameters ofmultichannel autoregressive signals from noisy observationsrdquoSignal Processing vol 88 no 11 pp 2777ndash2783 2008
[11] W X Zheng ldquoAutoregressive parameter estimation from noisydatardquo IEEE Transactions on Circuits and Systems II Analog andDigital Signal Processing vol 47 no 1 pp 71ndash75 2000
[12] X M Qu J Zhou and Y T Luo ldquoA new noise-compensatedestimation scheme formultichannel autoregressive signals fromnoisy observationsrdquo Journal of Supercomputing vol 58 no 1 pp34ndash49 2011
[13] J Petitjean E Grivel W Bobillet and P Roussilhe ldquoMultichan-nel AR parameter estimation from noisy observations as anerrors-in-variables issuerdquo Signal Image and Video Processingvol 4 no 2 pp 209ndash220 2010
Journal of Stochastics 7
[14] A Jamoos E Grivel N Shakarneh and H Abdel-NourldquoDual optimal filters for parameter estimation of a multivariateautoregressive process from noisy observationsrdquo IET SignalProcessing vol 5 no 5 pp 471ndash479 2011
[15] D Labarre E Grivel Y Berthoumieu E Todini andM NajimldquoConsistent estimation of autoregressive parameters from noisyobservations based on two interacting Kalman filtersrdquo SignalProcessing vol 86 no 10 pp 2863ndash2876 2006
[16] D Labarre E Grivel M Najim and N Christov ldquoDual119867infin
algorithms for signal processing-application to speechenhancementrdquo IEEE Transactions on Signal Processing vol 55no 11 pp 5195ndash5208 2007
[17] W Bobillet R Diversi E Grivel R Guidorzi M Najim and USoverini ldquoSpeech enhancement combining optimal smoothingand errors-in-variables identification of noisy AR processesrdquoIEEE Transactions on Signal Processing vol 55 no 12 pp 5564ndash5578 2007
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
2 Journal of Stochastics
methods the observation noise covariancematrix is assumedto be a diagonal matrix This assumption means that thechannel noises are uncorrelated These methods are notsuitable for spatially correlated noise case
Note that cross coupled filter idea consists of using twomutually interactive filters Each time a new observation isavailable the signal is estimated using the latest estimatevalue of the parameters and conversely the parameters areestimated using the latest a posteriori signal estimate
In this paper we combine low-order and high-orderYW equations in a new scheme and use adaptive steepestdescent (SD) algorithm to estimate the parameters of theMAR process We also apply the inverse filtering idea forobservation noise covariance matrix estimation Finally wecan theoretically perform the convergence analysis of the pro-posedmethodUsing computer simulations the performanceof the proposed algorithm is evaluated and compared withthat of the LS MYW ILSV and ALSV methods in spatiallycorrelated noise case Note that we do not do comparisonswith [13 14] because they assume that the channel noises areuncorrelated
The proposed estimation algorithm provides more accu-rate parameter estimationThe convergence behaviour of theproposed method is better than those of the ILSV and theALSV methods
2 Data Model
A real noisy119898-channel AR signal can be modeled as
z (119899) = A1z (119899 minus 1) + sdot sdot sdot + A
119901z (119899 minus 119901) + e (119899)
y (119899) = z (119899) + w (119899) (1)
where e(119899) is the119898times1 input noise vector and z(119899) is the119898times1MAR output noise vector A
119894rsquos (119894 = 1 2 119901) are the119898 times 119898
MARparametermatrices and y(119899) is the119898times1 observed noisevectorThe noises e(119899) andw(119899) are assumed to be zero-meanstationary white noises with the 119898 times 119898 covariance matricesgiven by
Re (119897) = 119864 e (119899) e(119899 minus 119897)119879
= Σ119890120575119897
R119908(119897) = 119864 w (119899)w(119899 minus 119897)119879 = Σ
119908120575119897
(2)
where 120575119897is the Kronecker delta and Σ
119890and Σ
119908are the
unknown input noise covariance matrix and the observationnoise covariance matrix which are generally nondiagonalThis means that the observation noise is spatially correlated119864sdot is the expectation operator and 119879 denotes the transposeoperation
In this data model we have the following assumptions
(i) The AR order 119901 and the number of channels 119898 areknown
(ii) TheMARparametermatricesA119894are constrained such
that the roots of
det A119901(119911) = 0 (3)
lie inside the unit circle where A119901(119911) = I
119898minus sum119901
119894=1A119894119911minus119894 I119898
is the 119898 times 119898 identity matrix and det denotes determinantof a matrix This condition on the matrices A
119894guarantees the
stability of the MAR model [1] Note that the roots of (3) arepoles of the MAR model
(iii) The observation noise w(119899) is uncorrelated with theinput noise e(119899) that is 119864w(119899)e(119896)119879 = 0 for all 119899and 119896
Using (1) we can write the following linear regression fory(119899)
y (119899) = A119879g (119899) + 120576 (119899) (4)
where A is the119898119901 times 119898 parameter matrix defined as
A = [A1sdot sdot sdot A
119901]
119879 (5)
and g(119899) is the119898119901 times 1 regression vector defined as
g (119899) = [y(119899 minus 1)119879 sdot sdot sdot y(119899 minus 119901)119879]119879
(6)
The119898 times 1 equation error vector 120576(119899) is
120576 (119899) = minusA119879n (119899) + w (119899) + e (119899) (7)
where
n (119899) = [w(119899 minus 1)119879 sdot sdot sdot w(119899 minus 119901)119879]119879
(8)
The covariance matrices of 120576(119899) and y(119899) are given as
R120576(119897) = 119864 120576 (119899) 120576(119899 minus 119897)
119879
=
119901
sum
119895=119897
A119895Σ119908A119879119895minus119897+ Σ119890120575119897 0 le 119897 le 119901
(9)
R119910(119897) = 119864 y (119899) y(119899 minus 119897)119879 = R
119911(119897) + Σ
119908120575119897 (10)
where A0= minusI119898
The main objective in this paper is to estimateA Σ119908 and
Σ119890using the given samples y(1) y(2) y(119873)
3 Steepest Descent Algorithm forNoisy MAR Models
The proposed estimation algorithm uses steepest descentmethod It iterates between the following two steps untilconvergence (I) given noise covariance matrix estimateMAR matrices using steepest descent method (II) givenMAR matrices estimate noise covariance matrix based oninverse filtering idea
31 MAR Parameter Estimation Given Observation NoiseCovariance Matrix The Yule-Walker equations for the noisyMAR process given by (4) are
R119910(119897) minus
119901
sum
119894=1
A119894R119910(119897 minus 119894) +
119901
sum
119894=1
A119894Σ119908120575119897minus119894= 0 119897 ge 1 (11)
Journal of Stochastics 3
Evaluating (11) for 119897 = 1 119901 and using R119910(119897) = R
119910(minus119897)119879
give the following system of linear equations
(R119910minus R119908)A minus r
119910= 0 (12)
where
R119910=
[
[
[
[
[
[
R119910(0) R
119910(1) sdot sdot sdot R
119910(119901 minus 1)
R119910(minus1) R
119910(0) sdot sdot sdot R
119910(119901 minus 2)
d
R119910(1 minus 119901) R
119910(2 minus 119901) sdot sdot sdot R
119910(0)
]
]
]
]
]
]
r119910= [R119910(1) R
119910(2) sdot sdot sdot R
119910(119901)]
119879
(13)
and R119908is a block-diagonal119898119901 times 119898119901matrix given by
R119908=
[
[
[
[
[
[
Σ119908
0 sdot sdot sdot 0
0 Σ119908
0 d 00 sdot sdot sdot sdot sdot sdot Σ
119908
]
]
]
]
]
]
= Σ119908otimes I119901 (14)
Evaluating (11) for 119897 = 119901 + 1 119901 + 119902 and using R119910(119897) =
R119910(minus119897)119879 give the following modified Yule-Walker (MYW)
equations
R119910119902A minus r119910119902= 0 (15)
where
R119910119902=
[
[
[
[
[
[
R119910(minus119901) R
119910(minus119901 + 1) sdot sdot sdot R
119910(minus1)
R119910(minus119901 minus 1) R
119910(minus119901) sdot sdot sdot R
119910(minus2)
d
R119910(minus119901 minus 119902) R
119910(minus119901 minus 119902 + 1) sdot sdot sdot R
119910(minus119902)
]
]
]
]
]
]
r119910119902= [R119910(119901 + 1) R
119910(119901 + 2) sdot sdot sdot R
119910(119901 + 119902)]
119879
(16)
and 119902 is an arbitrary nonnegative integerCombining the low-orderYule-Walker equations given by
(12) and the high-order Yule-Walker equations given by (15)we obtain
H119910A minusH
119908A minus h
119910= 0 (17)
where
H119910= [
R119910
R119910119902
] h119910= [
r119910
r119910119902
] (18)
H119908= [
R119908
0 ] (19)
Premultiplying (17) by (H119879119910H119910)minus1H119879119910 we obtain
A minus A119867minus R119867H119908A = 0 (20)
where
A119867= (H119879119910H119910)
minus1
H119879119910h119910
R119867= (H119879119910H119910)
minus1
H119879119910
(21)
And R119867of dimension119898119901 times 119898(119901 + 119902) can be partitioned as
R119867= [R1198671
R1198672] (22)
where the dimensions ofR1198671
andR1198672
are119898119901times119898119901 and119898119901times119898119902 respectively
Substituting (19) and (22) into (20) we get
A minus A119867minus R1198671R119908A = 0 (23)
If R119908is assumed to be known we can iteratively estimate A
using steepest descent algorithm as follows
A(119894) = A(119894minus1) + 120583 (A119867+ R1198671R119908A(119894minus1) minus A(119894minus1)) (24)
where 120583 is the step size parameter and A119867 R1198671
can beestimated from the observations We can also control thestability and rate of convergence of the algorithmby changing120583 The above equation is converged if 120583 is selected betweenzero and 2(1minus120582min)where 120582min is the maximum eigenvalueof the R
1198671R119908matrix Furthermore if we set 120583 = 1 and 119902 = 0
in (24) it is changed to
A(119894) = A119897119904+ Rminus1119910R119908A(119894minus1) (25)
where A119897119904= Rminus1119910r119910 which is previously derived in [8]
32 Application of Inverse Filtering for Estimation of Obser-vation Noise Covariance Matrix Given MAR Matrices If theobserved process is filtered by MAR matrices the outputprocess of the filter is obtained as
120576 (119899) = y (119899) minus A119879g (119899) (26)
By evaluating (9) for 119897 = 1 we can obtain
R120576(1) =
119901
sum
119895=1
A119895Σ119908A119879119895minus1 (27)
We can rewrite (27) as
(
119901
sum
119895=1
A119895otimes A119895minus1) vec (Σ119879
119908) = vec (R
120576(1)119879
) (28)
We solve (28) as follows
vec (Σ119879119908) = (
119901
sum
119895=1
A119895otimes A119895minus1)
minus1
r120576 (29)
where r120576= vec(R
120576(1)119879
) and vec is a column vectorizationoperator that is
vec([1198861 119886211988631198864
]) =
[
[
[
[
1198861
1198863
1198862
1198864
]
]
]
]
(30)
4 Journal of Stochastics
By assuming Σ119908= Σ119879
119908 symmetry property of Σ
119908 we can
improve estimate as [12]
Σ119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
(31)
where unvec is the inverse operation of vecUsing (26) R
120576(1) is evaluated from the observed signal as
R120576(1) = 119864 120576 (119899) 120576(119899 minus 1)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899 minus 1) minus A119879g(119899 minus 1)119879) (32)
In (32) A is unknown So we will use a recursive algorithmfor estimating Σ
119908and A This algorithm is discussed in
Section 33Now we assume that Σ
119908and A are known and derive an
estimate for Σ119890
Evaluating (9) for 119897 = 0 we obtain
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895 (33)
where
R120576(0) = 119864 120576 (119899) 120576(119899)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879) (34)
33 The Algorithm The results derived in previous sectionsand subsections are summarized to propose a recursivealgorithm for estimating Σ
119908and A
119894rsquos (119894 = 1 2 119901) The
improved least-squares algorithm for multichannel processesbased on inverse filtering (IFILSM) is as follows
Step 1 Compute autocovariance estimates by means of givensamples y(1) y(2) y(119873)
R119910(119896) =
1
119873
119873
sum
119899=119901+119902+1
y (119899) y(119899 minus 119896)119879 119896 = 0 1 2 119901 + 119902
(35)
and use them to construct the covariance matrix estimatesR119910 r119910 R119910119902 and r
119910119902
Step 2 Initialize 119894 = 0 where 119894 denotes the iteration numberThen compute the estimate
A(119894) = A119867= (H119879119910
H119910)
minus1
H119879119910
h119910 (36)
Step 3 Set 119894 = 119894 + 1 and calculate the estimate of observationnoise covariance matrix Σ
119908
R120576(1) =
1
119873
119873
sum
119899=119901+2
(y (119899) minus A(119894minus1)IFILSM119879g (119899))
times (y (119899 minus 1) minus A(119894minus1)IFILSM119879g(119899 minus 1)119879)
vec (Σ119879119908) = (
119901
sum
119895=0
A119895otimes A119895minus1)
minus1
r120576
Σ(119894)
119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
R(119894)119908= Σ119908otimes I119901
(37)
Step 4 Perform the bias correction
A(119894)IFILSM = A(119894minus1)IFILSM + 120583 (A119867 + R1198671R(119894)119908A(119894minus1)IFILSM minus A(119894minus1)IFILSM)
(38)
Step 5 If10038171003817100381710038171003817
A(119894)IFILSM minus A(119894minus1)
IFILSM10038171003817100381710038171003817
10038171003817100381710038171003817A(119894minus1)IFILSM
10038171003817100381710038171003817
le 120575 (39)
where sdot is the Euclidean norm and 120575 is an appropriatesmall positive number the convergence is achieved and theiteration process must be terminated otherwise go to Step 3
Step 6 Estimate Σ119890via (33)
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895
R120576(0) =
1
119873
119873
sum
119899=119901+1
(y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879)
(40)
4 Simulation Result
In simulation study we have compared the IFILSM methodwith the other existing methods Two examples have beenconsidered first frequency estimation and second syntheticnoisyMARmodel estimation In all simulations we set 120583 = 1for the proposed IFILSM algorithm
41 Estimating the Frequency of Multiple Sinusoids Embeddedin Spatially Colored Noise Consider several sinusoidal sig-nals in spatially colored noise as follows
z (119899) =
[
[
[
[
[
[
[
[
[
1199111(119899) =
119871
sum
119896=1
1198861119896cos (120596
1119896119899 + 1205931119896)
119911119898(119899) =
119871
sum
119896=1
119886119898119896
cos (120596119898119896119899 + 120593119898119896)
]
]
]
]
]
]
]
]
]
y (119899) = z (119899) + w (119899)
(41)
It is well known that z(119899) can be modeled as an MAR (2119871)process with zero driving noise vector (e(119899) = 0) So wecan apply the method proposed in the previous subsection to
Journal of Stochastics 5
Table 1 Simulation results (119872 = 1000 119873 = 500 SNR = 5 10 dB1205961= 212058701 and 120596
2= 212058703)
SNR 5 dB 5 dB 10 dB 10 dBMethods MSE1 (dB) MSE2 (dB) MSE1 (dB) MSE2 (dB)IFILSM minus4218 minus4704 minus5111 minus6006
ILSV minus3264 minus4191 minus4222 minus5051
ALSV minus3235 minus4276 minus4233 minus5136
LS minus575 minus2302 minus32 minus3127
MYW minus3758 minus4505 minus4682 minus5194
estimate the frequency of sinusoidal signals corrupted withspatially colored observation noise Note that the frequencyof sinusoids can be estimated by locating the peaks in theMAR spectrum
Example 1 We consider 119873 = 500 data samples of a signalcomprised of two sinusoids corrupted with spatially colorednoise Consider y(119899) as follows
y (119899) = [[
1198861cos (120596
1119899 + 1205931)
1198862cos (120596
2119899 + 1205932)
]
]
+ w (119899) (42)
where 1205961= 212058701 120596
2= 212058703 120593
1 1205932are independent
random phases uniformly distributed over [0 2120587] and w(119899)is Gaussian spatially colored noise having the followingcovariance matrix
Σ119908= [
1205902
11990811
1 1205902
1199082
] (43)
The signal-to-noise ratio (SNR) of each channel is defined as
SNR119894=
1198862
119894
21205902
119908119894
(44)
which is set to 5 and 10 dB The number of trials 119872 isset to 1000 and the parameter 119902 in the IFILSM and MYWmethods is set to 4 The 120575 in the ILSV ALSV and IFILSMmethods is set to 001We compare the proposedmethodwiththe LS MYW ALSV and ILSV methods in terms of meansquare error (MSE) of frequency estimates Note that otherexisting methods cannot work in the spatially correlatedobservation noise In Table 1 we tabulate the MSEs in termsof dB for the LS MYW ALSV ILSV and proposed methodsWe see that from the table the proposedmethod has the bestperformance
42 Synthetic MAR Processes
Example 2 Consider a noisy MARmodel with 119901 = 2119898 = 2and coefficient matrices
A1= [
11625 01432
01432 15443]
A2= [
minus096 minus00064
minus0064 minus064]
(45)
1000 1500 2000 2500 3000 3500 4000
minus28
minus26
minus24
minus22
minus20
minus18
minus16
minus14
minus12
minus10
minus8
N
MSE
(dB)
ALSVILSV
IFILSM
Figure 1MSE values ofMARmatrices estimate versus data samples119872 = 1000 SNR = 15 dB
The driving process e(119899) and observation noise w(119899) covari-ance are mutually uncorrelated Gaussian temporally whiteand spatially correlated processes with covariance matrices
Σ119890= [
1 02
02 07]
Case 1 Σ119908asymp [
04149 001
001 04689]
Case 2 Σ119908asymp [
129 001
001 144]
(46)
We compare the ILSV the ALSV and the proposed IFILSMmethod in terms MSE of MAR coefficient estimate and theconvergence probability defined as
119875 =
119872
119872119905
(47)
where 119872 is the number of runs that an iterative algorithmis converged and119872
119905is the total number of runs We set the
parameters of the algorithms119872 = 1000 and the parameter119902 in the IFILSM method is set to 4 The 120575 in the ILSV ALSVand IFILSM methods is set to 001
In Case 1 and Case 2 SNR for each channel is set to15 and 10 dB respectively In this example data samples 119873are changed from 1000 to 4000 with step size 1000 TheMSE and 119875 values are plotted in Figures 1 2 and 3 versus119873 respectively The probability of convergence is one forall algorithms in Case 1 and the figure is not plotted hereFrom the figures we can see that the accuracy and theconvergence of the IFILSM are better than the ILSV andALSVmethods Note that the ILSV and ALSVmethods havepoor convergence when SNR is lower or equal to 10 dBIt can be seen from the figures that the performance ofthe all algorithms is constant over the sample size becauseautocovariance estimates of the observations are not changedin this rage of data sample size
6 Journal of Stochastics
1000 1500 2000 2500 3000 3500 4000
minus40
minus35
minus30
minus25
minus20
minus15
minus10
minus5
0
5
N
MSE
(dB)
IFILSMILSV
ALSV
Figure 2 MSE values of MAR matrices estimate versus datasamples119872 = 1000 SNR = 10 dB
1000 1500 2000 2500 3000 3500 4000
0
02
04
06
08
1
12
N
Prob
abili
ty o
f con
verg
ence
ALSVIFILSMILSV
Figure 3 Probability of convergence values of MAR matricesestimate versus data samples119872 = 1000 SNR = 10 dB
5 Conclusion
Steepest descent (SD) algorithm is used for estimating noisymultichannel AR model parameters The observation noisecovariancematrix is nondiagonal thismeans that the channelnoises are assumed to be spatially correlated The inversefiltering idea is used to estimate the observation noisecovariance matrix Computer simulations showed that theproposed method has better performance and convergencethan the ILSV and the ALSV methods
Appendix
The following result discusses the convergence conditions ofthe proposed algorithm
Theorem A1 The necessary and sufficient condition for theconvergence of the proposed algorithm is to require that the stepsize parameter 120583 satisfy the following condition
0 lt 120583 lt
2
1 minus 120582min (A1)
where 120582min is the minimum eigenvalue of the R1198671R119908matrix
Proof Defining estimation error matrix x(119894) = A(119894) minus Asubstituting A minus A
119867minus R1198671R119908A = 0 into (24) and using
x(119894) = A(119894) minus A we obtain
x(119894) = (I minus 120583 (I minus R1198671R119908)) x(119894minus1) (A2)
The eigenvalues of I minus 120583(I minus R1198671R119908) are 1 minus 120583(1 minus 120582
119894) where
120582119894= 120582119894(R1198671R119908) is between 0 and 1
If minus1 lt 1 minus 120583(1 minus 120582119894) lt 1 then lim
119894rarrinfinx(119894) = 0 In this
case we have 0 lt 120583 lt 2(1 minus 120582119894) and their intersection is
0 lt 120583 lt 2120582max(I minus R1198671R119908) = 2(1 minus 120582min)
Conflict of Interests
The author declares that there is no conflict of interestsregarding the publication of this paper
References
[1] S M KayModern Spectral Estimation Theory and ApplicationPrentice-Hall Englewood Cliffs NJ USA 1988
[2] S Srinivasan R Aichner W B Kleijn and W KellermannldquoMultichannel parametric speech enhancementrdquo IEEE SignalProcessing Letters vol 13 no 5 pp 304ndash307 2006
[3] C Komninakis C Fragouli A H Sayed and R D WeselldquoMulti-input multi-output fading channel tracking and equal-ization using Kalman estimationrdquo IEEE Transactions on SignalProcessing vol 50 no 5 pp 1065ndash1076 2002
[4] P Wang H Li and B Himed ldquoA new parametric GLRT formultichannel adaptive signal detectionrdquo IEEE Transactions onSignal Processing vol 58 no 1 pp 317ndash325 2010
[5] S Marple Digital Spectral Analysis with Applications Prentice-Hall Englewood Cliffs NJ USA 1987
[6] D T Pham and D Q Tong ldquoMaximum likelihood estimationfor a multivariate autoregressive modelrdquo IEEE Transactions onSignal Processing vol 42 no 11 pp 3061ndash3072 1994
[7] A Schlogl ldquoA comparison of multivariate autoregressive esti-matorsrdquo Signal Process vol 86 no 9 pp 2426ndash2429 2006
[8] A Nehorai and P Stoica ldquoAdaptive algorithms for constrainedARMA signals in the presence of noiserdquo IEEE Transactions onAcoustics Speech and Signal Processing vol 36 no 8 pp 1282ndash1291 1988
[9] W X Zheng ldquoFast identification of autoregressive signals fromnoisy observationsrdquo IEEE Transactions on Circuits and SystemsII Express Briefs vol 52 no 1 pp 43ndash48 2005
[10] A Mahmoudi andM Karimi ldquoEstimation of the parameters ofmultichannel autoregressive signals from noisy observationsrdquoSignal Processing vol 88 no 11 pp 2777ndash2783 2008
[11] W X Zheng ldquoAutoregressive parameter estimation from noisydatardquo IEEE Transactions on Circuits and Systems II Analog andDigital Signal Processing vol 47 no 1 pp 71ndash75 2000
[12] X M Qu J Zhou and Y T Luo ldquoA new noise-compensatedestimation scheme formultichannel autoregressive signals fromnoisy observationsrdquo Journal of Supercomputing vol 58 no 1 pp34ndash49 2011
[13] J Petitjean E Grivel W Bobillet and P Roussilhe ldquoMultichan-nel AR parameter estimation from noisy observations as anerrors-in-variables issuerdquo Signal Image and Video Processingvol 4 no 2 pp 209ndash220 2010
Journal of Stochastics 7
[14] A Jamoos E Grivel N Shakarneh and H Abdel-NourldquoDual optimal filters for parameter estimation of a multivariateautoregressive process from noisy observationsrdquo IET SignalProcessing vol 5 no 5 pp 471ndash479 2011
[15] D Labarre E Grivel Y Berthoumieu E Todini andM NajimldquoConsistent estimation of autoregressive parameters from noisyobservations based on two interacting Kalman filtersrdquo SignalProcessing vol 86 no 10 pp 2863ndash2876 2006
[16] D Labarre E Grivel M Najim and N Christov ldquoDual119867infin
algorithms for signal processing-application to speechenhancementrdquo IEEE Transactions on Signal Processing vol 55no 11 pp 5195ndash5208 2007
[17] W Bobillet R Diversi E Grivel R Guidorzi M Najim and USoverini ldquoSpeech enhancement combining optimal smoothingand errors-in-variables identification of noisy AR processesrdquoIEEE Transactions on Signal Processing vol 55 no 12 pp 5564ndash5578 2007
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Journal of Stochastics 3
Evaluating (11) for 119897 = 1 119901 and using R119910(119897) = R
119910(minus119897)119879
give the following system of linear equations
(R119910minus R119908)A minus r
119910= 0 (12)
where
R119910=
[
[
[
[
[
[
R119910(0) R
119910(1) sdot sdot sdot R
119910(119901 minus 1)
R119910(minus1) R
119910(0) sdot sdot sdot R
119910(119901 minus 2)
d
R119910(1 minus 119901) R
119910(2 minus 119901) sdot sdot sdot R
119910(0)
]
]
]
]
]
]
r119910= [R119910(1) R
119910(2) sdot sdot sdot R
119910(119901)]
119879
(13)
and R119908is a block-diagonal119898119901 times 119898119901matrix given by
R119908=
[
[
[
[
[
[
Σ119908
0 sdot sdot sdot 0
0 Σ119908
0 d 00 sdot sdot sdot sdot sdot sdot Σ
119908
]
]
]
]
]
]
= Σ119908otimes I119901 (14)
Evaluating (11) for 119897 = 119901 + 1 119901 + 119902 and using R119910(119897) =
R119910(minus119897)119879 give the following modified Yule-Walker (MYW)
equations
R119910119902A minus r119910119902= 0 (15)
where
R119910119902=
[
[
[
[
[
[
R119910(minus119901) R
119910(minus119901 + 1) sdot sdot sdot R
119910(minus1)
R119910(minus119901 minus 1) R
119910(minus119901) sdot sdot sdot R
119910(minus2)
d
R119910(minus119901 minus 119902) R
119910(minus119901 minus 119902 + 1) sdot sdot sdot R
119910(minus119902)
]
]
]
]
]
]
r119910119902= [R119910(119901 + 1) R
119910(119901 + 2) sdot sdot sdot R
119910(119901 + 119902)]
119879
(16)
and 119902 is an arbitrary nonnegative integerCombining the low-orderYule-Walker equations given by
(12) and the high-order Yule-Walker equations given by (15)we obtain
H119910A minusH
119908A minus h
119910= 0 (17)
where
H119910= [
R119910
R119910119902
] h119910= [
r119910
r119910119902
] (18)
H119908= [
R119908
0 ] (19)
Premultiplying (17) by (H119879119910H119910)minus1H119879119910 we obtain
A minus A119867minus R119867H119908A = 0 (20)
where
A119867= (H119879119910H119910)
minus1
H119879119910h119910
R119867= (H119879119910H119910)
minus1
H119879119910
(21)
And R119867of dimension119898119901 times 119898(119901 + 119902) can be partitioned as
R119867= [R1198671
R1198672] (22)
where the dimensions ofR1198671
andR1198672
are119898119901times119898119901 and119898119901times119898119902 respectively
Substituting (19) and (22) into (20) we get
A minus A119867minus R1198671R119908A = 0 (23)
If R119908is assumed to be known we can iteratively estimate A
using steepest descent algorithm as follows
A(119894) = A(119894minus1) + 120583 (A119867+ R1198671R119908A(119894minus1) minus A(119894minus1)) (24)
where 120583 is the step size parameter and A119867 R1198671
can beestimated from the observations We can also control thestability and rate of convergence of the algorithmby changing120583 The above equation is converged if 120583 is selected betweenzero and 2(1minus120582min)where 120582min is the maximum eigenvalueof the R
1198671R119908matrix Furthermore if we set 120583 = 1 and 119902 = 0
in (24) it is changed to
A(119894) = A119897119904+ Rminus1119910R119908A(119894minus1) (25)
where A119897119904= Rminus1119910r119910 which is previously derived in [8]
32 Application of Inverse Filtering for Estimation of Obser-vation Noise Covariance Matrix Given MAR Matrices If theobserved process is filtered by MAR matrices the outputprocess of the filter is obtained as
120576 (119899) = y (119899) minus A119879g (119899) (26)
By evaluating (9) for 119897 = 1 we can obtain
R120576(1) =
119901
sum
119895=1
A119895Σ119908A119879119895minus1 (27)
We can rewrite (27) as
(
119901
sum
119895=1
A119895otimes A119895minus1) vec (Σ119879
119908) = vec (R
120576(1)119879
) (28)
We solve (28) as follows
vec (Σ119879119908) = (
119901
sum
119895=1
A119895otimes A119895minus1)
minus1
r120576 (29)
where r120576= vec(R
120576(1)119879
) and vec is a column vectorizationoperator that is
vec([1198861 119886211988631198864
]) =
[
[
[
[
1198861
1198863
1198862
1198864
]
]
]
]
(30)
4 Journal of Stochastics
By assuming Σ119908= Σ119879
119908 symmetry property of Σ
119908 we can
improve estimate as [12]
Σ119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
(31)
where unvec is the inverse operation of vecUsing (26) R
120576(1) is evaluated from the observed signal as
R120576(1) = 119864 120576 (119899) 120576(119899 minus 1)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899 minus 1) minus A119879g(119899 minus 1)119879) (32)
In (32) A is unknown So we will use a recursive algorithmfor estimating Σ
119908and A This algorithm is discussed in
Section 33Now we assume that Σ
119908and A are known and derive an
estimate for Σ119890
Evaluating (9) for 119897 = 0 we obtain
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895 (33)
where
R120576(0) = 119864 120576 (119899) 120576(119899)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879) (34)
33 The Algorithm The results derived in previous sectionsand subsections are summarized to propose a recursivealgorithm for estimating Σ
119908and A
119894rsquos (119894 = 1 2 119901) The
improved least-squares algorithm for multichannel processesbased on inverse filtering (IFILSM) is as follows
Step 1 Compute autocovariance estimates by means of givensamples y(1) y(2) y(119873)
R119910(119896) =
1
119873
119873
sum
119899=119901+119902+1
y (119899) y(119899 minus 119896)119879 119896 = 0 1 2 119901 + 119902
(35)
and use them to construct the covariance matrix estimatesR119910 r119910 R119910119902 and r
119910119902
Step 2 Initialize 119894 = 0 where 119894 denotes the iteration numberThen compute the estimate
A(119894) = A119867= (H119879119910
H119910)
minus1
H119879119910
h119910 (36)
Step 3 Set 119894 = 119894 + 1 and calculate the estimate of observationnoise covariance matrix Σ
119908
R120576(1) =
1
119873
119873
sum
119899=119901+2
(y (119899) minus A(119894minus1)IFILSM119879g (119899))
times (y (119899 minus 1) minus A(119894minus1)IFILSM119879g(119899 minus 1)119879)
vec (Σ119879119908) = (
119901
sum
119895=0
A119895otimes A119895minus1)
minus1
r120576
Σ(119894)
119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
R(119894)119908= Σ119908otimes I119901
(37)
Step 4 Perform the bias correction
A(119894)IFILSM = A(119894minus1)IFILSM + 120583 (A119867 + R1198671R(119894)119908A(119894minus1)IFILSM minus A(119894minus1)IFILSM)
(38)
Step 5 If10038171003817100381710038171003817
A(119894)IFILSM minus A(119894minus1)
IFILSM10038171003817100381710038171003817
10038171003817100381710038171003817A(119894minus1)IFILSM
10038171003817100381710038171003817
le 120575 (39)
where sdot is the Euclidean norm and 120575 is an appropriatesmall positive number the convergence is achieved and theiteration process must be terminated otherwise go to Step 3
Step 6 Estimate Σ119890via (33)
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895
R120576(0) =
1
119873
119873
sum
119899=119901+1
(y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879)
(40)
4 Simulation Result
In simulation study we have compared the IFILSM methodwith the other existing methods Two examples have beenconsidered first frequency estimation and second syntheticnoisyMARmodel estimation In all simulations we set 120583 = 1for the proposed IFILSM algorithm
41 Estimating the Frequency of Multiple Sinusoids Embeddedin Spatially Colored Noise Consider several sinusoidal sig-nals in spatially colored noise as follows
z (119899) =
[
[
[
[
[
[
[
[
[
1199111(119899) =
119871
sum
119896=1
1198861119896cos (120596
1119896119899 + 1205931119896)
119911119898(119899) =
119871
sum
119896=1
119886119898119896
cos (120596119898119896119899 + 120593119898119896)
]
]
]
]
]
]
]
]
]
y (119899) = z (119899) + w (119899)
(41)
It is well known that z(119899) can be modeled as an MAR (2119871)process with zero driving noise vector (e(119899) = 0) So wecan apply the method proposed in the previous subsection to
Journal of Stochastics 5
Table 1 Simulation results (119872 = 1000 119873 = 500 SNR = 5 10 dB1205961= 212058701 and 120596
2= 212058703)
SNR 5 dB 5 dB 10 dB 10 dBMethods MSE1 (dB) MSE2 (dB) MSE1 (dB) MSE2 (dB)IFILSM minus4218 minus4704 minus5111 minus6006
ILSV minus3264 minus4191 minus4222 minus5051
ALSV minus3235 minus4276 minus4233 minus5136
LS minus575 minus2302 minus32 minus3127
MYW minus3758 minus4505 minus4682 minus5194
estimate the frequency of sinusoidal signals corrupted withspatially colored observation noise Note that the frequencyof sinusoids can be estimated by locating the peaks in theMAR spectrum
Example 1 We consider 119873 = 500 data samples of a signalcomprised of two sinusoids corrupted with spatially colorednoise Consider y(119899) as follows
y (119899) = [[
1198861cos (120596
1119899 + 1205931)
1198862cos (120596
2119899 + 1205932)
]
]
+ w (119899) (42)
where 1205961= 212058701 120596
2= 212058703 120593
1 1205932are independent
random phases uniformly distributed over [0 2120587] and w(119899)is Gaussian spatially colored noise having the followingcovariance matrix
Σ119908= [
1205902
11990811
1 1205902
1199082
] (43)
The signal-to-noise ratio (SNR) of each channel is defined as
SNR119894=
1198862
119894
21205902
119908119894
(44)
which is set to 5 and 10 dB The number of trials 119872 isset to 1000 and the parameter 119902 in the IFILSM and MYWmethods is set to 4 The 120575 in the ILSV ALSV and IFILSMmethods is set to 001We compare the proposedmethodwiththe LS MYW ALSV and ILSV methods in terms of meansquare error (MSE) of frequency estimates Note that otherexisting methods cannot work in the spatially correlatedobservation noise In Table 1 we tabulate the MSEs in termsof dB for the LS MYW ALSV ILSV and proposed methodsWe see that from the table the proposedmethod has the bestperformance
42 Synthetic MAR Processes
Example 2 Consider a noisy MARmodel with 119901 = 2119898 = 2and coefficient matrices
A1= [
11625 01432
01432 15443]
A2= [
minus096 minus00064
minus0064 minus064]
(45)
1000 1500 2000 2500 3000 3500 4000
minus28
minus26
minus24
minus22
minus20
minus18
minus16
minus14
minus12
minus10
minus8
N
MSE
(dB)
ALSVILSV
IFILSM
Figure 1MSE values ofMARmatrices estimate versus data samples119872 = 1000 SNR = 15 dB
The driving process e(119899) and observation noise w(119899) covari-ance are mutually uncorrelated Gaussian temporally whiteand spatially correlated processes with covariance matrices
Σ119890= [
1 02
02 07]
Case 1 Σ119908asymp [
04149 001
001 04689]
Case 2 Σ119908asymp [
129 001
001 144]
(46)
We compare the ILSV the ALSV and the proposed IFILSMmethod in terms MSE of MAR coefficient estimate and theconvergence probability defined as
119875 =
119872
119872119905
(47)
where 119872 is the number of runs that an iterative algorithmis converged and119872
119905is the total number of runs We set the
parameters of the algorithms119872 = 1000 and the parameter119902 in the IFILSM method is set to 4 The 120575 in the ILSV ALSVand IFILSM methods is set to 001
In Case 1 and Case 2 SNR for each channel is set to15 and 10 dB respectively In this example data samples 119873are changed from 1000 to 4000 with step size 1000 TheMSE and 119875 values are plotted in Figures 1 2 and 3 versus119873 respectively The probability of convergence is one forall algorithms in Case 1 and the figure is not plotted hereFrom the figures we can see that the accuracy and theconvergence of the IFILSM are better than the ILSV andALSVmethods Note that the ILSV and ALSVmethods havepoor convergence when SNR is lower or equal to 10 dBIt can be seen from the figures that the performance ofthe all algorithms is constant over the sample size becauseautocovariance estimates of the observations are not changedin this rage of data sample size
6 Journal of Stochastics
1000 1500 2000 2500 3000 3500 4000
minus40
minus35
minus30
minus25
minus20
minus15
minus10
minus5
0
5
N
MSE
(dB)
IFILSMILSV
ALSV
Figure 2 MSE values of MAR matrices estimate versus datasamples119872 = 1000 SNR = 10 dB
1000 1500 2000 2500 3000 3500 4000
0
02
04
06
08
1
12
N
Prob
abili
ty o
f con
verg
ence
ALSVIFILSMILSV
Figure 3 Probability of convergence values of MAR matricesestimate versus data samples119872 = 1000 SNR = 10 dB
5 Conclusion
Steepest descent (SD) algorithm is used for estimating noisymultichannel AR model parameters The observation noisecovariancematrix is nondiagonal thismeans that the channelnoises are assumed to be spatially correlated The inversefiltering idea is used to estimate the observation noisecovariance matrix Computer simulations showed that theproposed method has better performance and convergencethan the ILSV and the ALSV methods
Appendix
The following result discusses the convergence conditions ofthe proposed algorithm
Theorem A1 The necessary and sufficient condition for theconvergence of the proposed algorithm is to require that the stepsize parameter 120583 satisfy the following condition
0 lt 120583 lt
2
1 minus 120582min (A1)
where 120582min is the minimum eigenvalue of the R1198671R119908matrix
Proof Defining estimation error matrix x(119894) = A(119894) minus Asubstituting A minus A
119867minus R1198671R119908A = 0 into (24) and using
x(119894) = A(119894) minus A we obtain
x(119894) = (I minus 120583 (I minus R1198671R119908)) x(119894minus1) (A2)
The eigenvalues of I minus 120583(I minus R1198671R119908) are 1 minus 120583(1 minus 120582
119894) where
120582119894= 120582119894(R1198671R119908) is between 0 and 1
If minus1 lt 1 minus 120583(1 minus 120582119894) lt 1 then lim
119894rarrinfinx(119894) = 0 In this
case we have 0 lt 120583 lt 2(1 minus 120582119894) and their intersection is
0 lt 120583 lt 2120582max(I minus R1198671R119908) = 2(1 minus 120582min)
Conflict of Interests
The author declares that there is no conflict of interestsregarding the publication of this paper
References
[1] S M KayModern Spectral Estimation Theory and ApplicationPrentice-Hall Englewood Cliffs NJ USA 1988
[2] S Srinivasan R Aichner W B Kleijn and W KellermannldquoMultichannel parametric speech enhancementrdquo IEEE SignalProcessing Letters vol 13 no 5 pp 304ndash307 2006
[3] C Komninakis C Fragouli A H Sayed and R D WeselldquoMulti-input multi-output fading channel tracking and equal-ization using Kalman estimationrdquo IEEE Transactions on SignalProcessing vol 50 no 5 pp 1065ndash1076 2002
[4] P Wang H Li and B Himed ldquoA new parametric GLRT formultichannel adaptive signal detectionrdquo IEEE Transactions onSignal Processing vol 58 no 1 pp 317ndash325 2010
[5] S Marple Digital Spectral Analysis with Applications Prentice-Hall Englewood Cliffs NJ USA 1987
[6] D T Pham and D Q Tong ldquoMaximum likelihood estimationfor a multivariate autoregressive modelrdquo IEEE Transactions onSignal Processing vol 42 no 11 pp 3061ndash3072 1994
[7] A Schlogl ldquoA comparison of multivariate autoregressive esti-matorsrdquo Signal Process vol 86 no 9 pp 2426ndash2429 2006
[8] A Nehorai and P Stoica ldquoAdaptive algorithms for constrainedARMA signals in the presence of noiserdquo IEEE Transactions onAcoustics Speech and Signal Processing vol 36 no 8 pp 1282ndash1291 1988
[9] W X Zheng ldquoFast identification of autoregressive signals fromnoisy observationsrdquo IEEE Transactions on Circuits and SystemsII Express Briefs vol 52 no 1 pp 43ndash48 2005
[10] A Mahmoudi andM Karimi ldquoEstimation of the parameters ofmultichannel autoregressive signals from noisy observationsrdquoSignal Processing vol 88 no 11 pp 2777ndash2783 2008
[11] W X Zheng ldquoAutoregressive parameter estimation from noisydatardquo IEEE Transactions on Circuits and Systems II Analog andDigital Signal Processing vol 47 no 1 pp 71ndash75 2000
[12] X M Qu J Zhou and Y T Luo ldquoA new noise-compensatedestimation scheme formultichannel autoregressive signals fromnoisy observationsrdquo Journal of Supercomputing vol 58 no 1 pp34ndash49 2011
[13] J Petitjean E Grivel W Bobillet and P Roussilhe ldquoMultichan-nel AR parameter estimation from noisy observations as anerrors-in-variables issuerdquo Signal Image and Video Processingvol 4 no 2 pp 209ndash220 2010
Journal of Stochastics 7
[14] A Jamoos E Grivel N Shakarneh and H Abdel-NourldquoDual optimal filters for parameter estimation of a multivariateautoregressive process from noisy observationsrdquo IET SignalProcessing vol 5 no 5 pp 471ndash479 2011
[15] D Labarre E Grivel Y Berthoumieu E Todini andM NajimldquoConsistent estimation of autoregressive parameters from noisyobservations based on two interacting Kalman filtersrdquo SignalProcessing vol 86 no 10 pp 2863ndash2876 2006
[16] D Labarre E Grivel M Najim and N Christov ldquoDual119867infin
algorithms for signal processing-application to speechenhancementrdquo IEEE Transactions on Signal Processing vol 55no 11 pp 5195ndash5208 2007
[17] W Bobillet R Diversi E Grivel R Guidorzi M Najim and USoverini ldquoSpeech enhancement combining optimal smoothingand errors-in-variables identification of noisy AR processesrdquoIEEE Transactions on Signal Processing vol 55 no 12 pp 5564ndash5578 2007
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
4 Journal of Stochastics
By assuming Σ119908= Σ119879
119908 symmetry property of Σ
119908 we can
improve estimate as [12]
Σ119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
(31)
where unvec is the inverse operation of vecUsing (26) R
120576(1) is evaluated from the observed signal as
R120576(1) = 119864 120576 (119899) 120576(119899 minus 1)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899 minus 1) minus A119879g(119899 minus 1)119879) (32)
In (32) A is unknown So we will use a recursive algorithmfor estimating Σ
119908and A This algorithm is discussed in
Section 33Now we assume that Σ
119908and A are known and derive an
estimate for Σ119890
Evaluating (9) for 119897 = 0 we obtain
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895 (33)
where
R120576(0) = 119864 120576 (119899) 120576(119899)
119879
= 119864 (y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879) (34)
33 The Algorithm The results derived in previous sectionsand subsections are summarized to propose a recursivealgorithm for estimating Σ
119908and A
119894rsquos (119894 = 1 2 119901) The
improved least-squares algorithm for multichannel processesbased on inverse filtering (IFILSM) is as follows
Step 1 Compute autocovariance estimates by means of givensamples y(1) y(2) y(119873)
R119910(119896) =
1
119873
119873
sum
119899=119901+119902+1
y (119899) y(119899 minus 119896)119879 119896 = 0 1 2 119901 + 119902
(35)
and use them to construct the covariance matrix estimatesR119910 r119910 R119910119902 and r
119910119902
Step 2 Initialize 119894 = 0 where 119894 denotes the iteration numberThen compute the estimate
A(119894) = A119867= (H119879119910
H119910)
minus1
H119879119910
h119910 (36)
Step 3 Set 119894 = 119894 + 1 and calculate the estimate of observationnoise covariance matrix Σ
119908
R120576(1) =
1
119873
119873
sum
119899=119901+2
(y (119899) minus A(119894minus1)IFILSM119879g (119899))
times (y (119899 minus 1) minus A(119894minus1)IFILSM119879g(119899 minus 1)119879)
vec (Σ119879119908) = (
119901
sum
119895=0
A119895otimes A119895minus1)
minus1
r120576
Σ(119894)
119908=
unvec (Σ119879119908) + unvec(Σ119879
119908)
119879
2
R(119894)119908= Σ119908otimes I119901
(37)
Step 4 Perform the bias correction
A(119894)IFILSM = A(119894minus1)IFILSM + 120583 (A119867 + R1198671R(119894)119908A(119894minus1)IFILSM minus A(119894minus1)IFILSM)
(38)
Step 5 If10038171003817100381710038171003817
A(119894)IFILSM minus A(119894minus1)
IFILSM10038171003817100381710038171003817
10038171003817100381710038171003817A(119894minus1)IFILSM
10038171003817100381710038171003817
le 120575 (39)
where sdot is the Euclidean norm and 120575 is an appropriatesmall positive number the convergence is achieved and theiteration process must be terminated otherwise go to Step 3
Step 6 Estimate Σ119890via (33)
Σ119890= R120576(0) minus
119901
sum
119895=0
A119895Σ119908A119879119895
R120576(0) =
1
119873
119873
sum
119899=119901+1
(y (119899) minus A119879g (119899)) (y (119899) minus A119879g(119899)119879)
(40)
4 Simulation Result
In simulation study we have compared the IFILSM methodwith the other existing methods Two examples have beenconsidered first frequency estimation and second syntheticnoisyMARmodel estimation In all simulations we set 120583 = 1for the proposed IFILSM algorithm
41 Estimating the Frequency of Multiple Sinusoids Embeddedin Spatially Colored Noise Consider several sinusoidal sig-nals in spatially colored noise as follows
z (119899) =
[
[
[
[
[
[
[
[
[
1199111(119899) =
119871
sum
119896=1
1198861119896cos (120596
1119896119899 + 1205931119896)
119911119898(119899) =
119871
sum
119896=1
119886119898119896
cos (120596119898119896119899 + 120593119898119896)
]
]
]
]
]
]
]
]
]
y (119899) = z (119899) + w (119899)
(41)
It is well known that z(119899) can be modeled as an MAR (2119871)process with zero driving noise vector (e(119899) = 0) So wecan apply the method proposed in the previous subsection to
Journal of Stochastics 5
Table 1 Simulation results (119872 = 1000 119873 = 500 SNR = 5 10 dB1205961= 212058701 and 120596
2= 212058703)
SNR 5 dB 5 dB 10 dB 10 dBMethods MSE1 (dB) MSE2 (dB) MSE1 (dB) MSE2 (dB)IFILSM minus4218 minus4704 minus5111 minus6006
ILSV minus3264 minus4191 minus4222 minus5051
ALSV minus3235 minus4276 minus4233 minus5136
LS minus575 minus2302 minus32 minus3127
MYW minus3758 minus4505 minus4682 minus5194
estimate the frequency of sinusoidal signals corrupted withspatially colored observation noise Note that the frequencyof sinusoids can be estimated by locating the peaks in theMAR spectrum
Example 1 We consider 119873 = 500 data samples of a signalcomprised of two sinusoids corrupted with spatially colorednoise Consider y(119899) as follows
y (119899) = [[
1198861cos (120596
1119899 + 1205931)
1198862cos (120596
2119899 + 1205932)
]
]
+ w (119899) (42)
where 1205961= 212058701 120596
2= 212058703 120593
1 1205932are independent
random phases uniformly distributed over [0 2120587] and w(119899)is Gaussian spatially colored noise having the followingcovariance matrix
Σ119908= [
1205902
11990811
1 1205902
1199082
] (43)
The signal-to-noise ratio (SNR) of each channel is defined as
SNR119894=
1198862
119894
21205902
119908119894
(44)
which is set to 5 and 10 dB The number of trials 119872 isset to 1000 and the parameter 119902 in the IFILSM and MYWmethods is set to 4 The 120575 in the ILSV ALSV and IFILSMmethods is set to 001We compare the proposedmethodwiththe LS MYW ALSV and ILSV methods in terms of meansquare error (MSE) of frequency estimates Note that otherexisting methods cannot work in the spatially correlatedobservation noise In Table 1 we tabulate the MSEs in termsof dB for the LS MYW ALSV ILSV and proposed methodsWe see that from the table the proposedmethod has the bestperformance
42 Synthetic MAR Processes
Example 2 Consider a noisy MARmodel with 119901 = 2119898 = 2and coefficient matrices
A1= [
11625 01432
01432 15443]
A2= [
minus096 minus00064
minus0064 minus064]
(45)
1000 1500 2000 2500 3000 3500 4000
minus28
minus26
minus24
minus22
minus20
minus18
minus16
minus14
minus12
minus10
minus8
N
MSE
(dB)
ALSVILSV
IFILSM
Figure 1MSE values ofMARmatrices estimate versus data samples119872 = 1000 SNR = 15 dB
The driving process e(119899) and observation noise w(119899) covari-ance are mutually uncorrelated Gaussian temporally whiteand spatially correlated processes with covariance matrices
Σ119890= [
1 02
02 07]
Case 1 Σ119908asymp [
04149 001
001 04689]
Case 2 Σ119908asymp [
129 001
001 144]
(46)
We compare the ILSV the ALSV and the proposed IFILSMmethod in terms MSE of MAR coefficient estimate and theconvergence probability defined as
119875 =
119872
119872119905
(47)
where 119872 is the number of runs that an iterative algorithmis converged and119872
119905is the total number of runs We set the
parameters of the algorithms119872 = 1000 and the parameter119902 in the IFILSM method is set to 4 The 120575 in the ILSV ALSVand IFILSM methods is set to 001
In Case 1 and Case 2 SNR for each channel is set to15 and 10 dB respectively In this example data samples 119873are changed from 1000 to 4000 with step size 1000 TheMSE and 119875 values are plotted in Figures 1 2 and 3 versus119873 respectively The probability of convergence is one forall algorithms in Case 1 and the figure is not plotted hereFrom the figures we can see that the accuracy and theconvergence of the IFILSM are better than the ILSV andALSVmethods Note that the ILSV and ALSVmethods havepoor convergence when SNR is lower or equal to 10 dBIt can be seen from the figures that the performance ofthe all algorithms is constant over the sample size becauseautocovariance estimates of the observations are not changedin this rage of data sample size
6 Journal of Stochastics
1000 1500 2000 2500 3000 3500 4000
minus40
minus35
minus30
minus25
minus20
minus15
minus10
minus5
0
5
N
MSE
(dB)
IFILSMILSV
ALSV
Figure 2 MSE values of MAR matrices estimate versus datasamples119872 = 1000 SNR = 10 dB
1000 1500 2000 2500 3000 3500 4000
0
02
04
06
08
1
12
N
Prob
abili
ty o
f con
verg
ence
ALSVIFILSMILSV
Figure 3 Probability of convergence values of MAR matricesestimate versus data samples119872 = 1000 SNR = 10 dB
5 Conclusion
Steepest descent (SD) algorithm is used for estimating noisymultichannel AR model parameters The observation noisecovariancematrix is nondiagonal thismeans that the channelnoises are assumed to be spatially correlated The inversefiltering idea is used to estimate the observation noisecovariance matrix Computer simulations showed that theproposed method has better performance and convergencethan the ILSV and the ALSV methods
Appendix
The following result discusses the convergence conditions ofthe proposed algorithm
Theorem A1 The necessary and sufficient condition for theconvergence of the proposed algorithm is to require that the stepsize parameter 120583 satisfy the following condition
0 lt 120583 lt
2
1 minus 120582min (A1)
where 120582min is the minimum eigenvalue of the R1198671R119908matrix
Proof Defining estimation error matrix x(119894) = A(119894) minus Asubstituting A minus A
119867minus R1198671R119908A = 0 into (24) and using
x(119894) = A(119894) minus A we obtain
x(119894) = (I minus 120583 (I minus R1198671R119908)) x(119894minus1) (A2)
The eigenvalues of I minus 120583(I minus R1198671R119908) are 1 minus 120583(1 minus 120582
119894) where
120582119894= 120582119894(R1198671R119908) is between 0 and 1
If minus1 lt 1 minus 120583(1 minus 120582119894) lt 1 then lim
119894rarrinfinx(119894) = 0 In this
case we have 0 lt 120583 lt 2(1 minus 120582119894) and their intersection is
0 lt 120583 lt 2120582max(I minus R1198671R119908) = 2(1 minus 120582min)
Conflict of Interests
The author declares that there is no conflict of interestsregarding the publication of this paper
References
[1] S M KayModern Spectral Estimation Theory and ApplicationPrentice-Hall Englewood Cliffs NJ USA 1988
[2] S Srinivasan R Aichner W B Kleijn and W KellermannldquoMultichannel parametric speech enhancementrdquo IEEE SignalProcessing Letters vol 13 no 5 pp 304ndash307 2006
[3] C Komninakis C Fragouli A H Sayed and R D WeselldquoMulti-input multi-output fading channel tracking and equal-ization using Kalman estimationrdquo IEEE Transactions on SignalProcessing vol 50 no 5 pp 1065ndash1076 2002
[4] P Wang H Li and B Himed ldquoA new parametric GLRT formultichannel adaptive signal detectionrdquo IEEE Transactions onSignal Processing vol 58 no 1 pp 317ndash325 2010
[5] S Marple Digital Spectral Analysis with Applications Prentice-Hall Englewood Cliffs NJ USA 1987
[6] D T Pham and D Q Tong ldquoMaximum likelihood estimationfor a multivariate autoregressive modelrdquo IEEE Transactions onSignal Processing vol 42 no 11 pp 3061ndash3072 1994
[7] A Schlogl ldquoA comparison of multivariate autoregressive esti-matorsrdquo Signal Process vol 86 no 9 pp 2426ndash2429 2006
[8] A Nehorai and P Stoica ldquoAdaptive algorithms for constrainedARMA signals in the presence of noiserdquo IEEE Transactions onAcoustics Speech and Signal Processing vol 36 no 8 pp 1282ndash1291 1988
[9] W X Zheng ldquoFast identification of autoregressive signals fromnoisy observationsrdquo IEEE Transactions on Circuits and SystemsII Express Briefs vol 52 no 1 pp 43ndash48 2005
[10] A Mahmoudi andM Karimi ldquoEstimation of the parameters ofmultichannel autoregressive signals from noisy observationsrdquoSignal Processing vol 88 no 11 pp 2777ndash2783 2008
[11] W X Zheng ldquoAutoregressive parameter estimation from noisydatardquo IEEE Transactions on Circuits and Systems II Analog andDigital Signal Processing vol 47 no 1 pp 71ndash75 2000
[12] X M Qu J Zhou and Y T Luo ldquoA new noise-compensatedestimation scheme formultichannel autoregressive signals fromnoisy observationsrdquo Journal of Supercomputing vol 58 no 1 pp34ndash49 2011
[13] J Petitjean E Grivel W Bobillet and P Roussilhe ldquoMultichan-nel AR parameter estimation from noisy observations as anerrors-in-variables issuerdquo Signal Image and Video Processingvol 4 no 2 pp 209ndash220 2010
Journal of Stochastics 7
[14] A Jamoos E Grivel N Shakarneh and H Abdel-NourldquoDual optimal filters for parameter estimation of a multivariateautoregressive process from noisy observationsrdquo IET SignalProcessing vol 5 no 5 pp 471ndash479 2011
[15] D Labarre E Grivel Y Berthoumieu E Todini andM NajimldquoConsistent estimation of autoregressive parameters from noisyobservations based on two interacting Kalman filtersrdquo SignalProcessing vol 86 no 10 pp 2863ndash2876 2006
[16] D Labarre E Grivel M Najim and N Christov ldquoDual119867infin
algorithms for signal processing-application to speechenhancementrdquo IEEE Transactions on Signal Processing vol 55no 11 pp 5195ndash5208 2007
[17] W Bobillet R Diversi E Grivel R Guidorzi M Najim and USoverini ldquoSpeech enhancement combining optimal smoothingand errors-in-variables identification of noisy AR processesrdquoIEEE Transactions on Signal Processing vol 55 no 12 pp 5564ndash5578 2007
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Journal of Stochastics 5
Table 1 Simulation results (119872 = 1000 119873 = 500 SNR = 5 10 dB1205961= 212058701 and 120596
2= 212058703)
SNR 5 dB 5 dB 10 dB 10 dBMethods MSE1 (dB) MSE2 (dB) MSE1 (dB) MSE2 (dB)IFILSM minus4218 minus4704 minus5111 minus6006
ILSV minus3264 minus4191 minus4222 minus5051
ALSV minus3235 minus4276 minus4233 minus5136
LS minus575 minus2302 minus32 minus3127
MYW minus3758 minus4505 minus4682 minus5194
estimate the frequency of sinusoidal signals corrupted withspatially colored observation noise Note that the frequencyof sinusoids can be estimated by locating the peaks in theMAR spectrum
Example 1 We consider 119873 = 500 data samples of a signalcomprised of two sinusoids corrupted with spatially colorednoise Consider y(119899) as follows
y (119899) = [[
1198861cos (120596
1119899 + 1205931)
1198862cos (120596
2119899 + 1205932)
]
]
+ w (119899) (42)
where 1205961= 212058701 120596
2= 212058703 120593
1 1205932are independent
random phases uniformly distributed over [0 2120587] and w(119899)is Gaussian spatially colored noise having the followingcovariance matrix
Σ119908= [
1205902
11990811
1 1205902
1199082
] (43)
The signal-to-noise ratio (SNR) of each channel is defined as
SNR119894=
1198862
119894
21205902
119908119894
(44)
which is set to 5 and 10 dB The number of trials 119872 isset to 1000 and the parameter 119902 in the IFILSM and MYWmethods is set to 4 The 120575 in the ILSV ALSV and IFILSMmethods is set to 001We compare the proposedmethodwiththe LS MYW ALSV and ILSV methods in terms of meansquare error (MSE) of frequency estimates Note that otherexisting methods cannot work in the spatially correlatedobservation noise In Table 1 we tabulate the MSEs in termsof dB for the LS MYW ALSV ILSV and proposed methodsWe see that from the table the proposedmethod has the bestperformance
42 Synthetic MAR Processes
Example 2 Consider a noisy MARmodel with 119901 = 2119898 = 2and coefficient matrices
A1= [
11625 01432
01432 15443]
A2= [
minus096 minus00064
minus0064 minus064]
(45)
1000 1500 2000 2500 3000 3500 4000
minus28
minus26
minus24
minus22
minus20
minus18
minus16
minus14
minus12
minus10
minus8
N
MSE
(dB)
ALSVILSV
IFILSM
Figure 1MSE values ofMARmatrices estimate versus data samples119872 = 1000 SNR = 15 dB
The driving process e(119899) and observation noise w(119899) covari-ance are mutually uncorrelated Gaussian temporally whiteand spatially correlated processes with covariance matrices
Σ119890= [
1 02
02 07]
Case 1 Σ119908asymp [
04149 001
001 04689]
Case 2 Σ119908asymp [
129 001
001 144]
(46)
We compare the ILSV the ALSV and the proposed IFILSMmethod in terms MSE of MAR coefficient estimate and theconvergence probability defined as
119875 =
119872
119872119905
(47)
where 119872 is the number of runs that an iterative algorithmis converged and119872
119905is the total number of runs We set the
parameters of the algorithms119872 = 1000 and the parameter119902 in the IFILSM method is set to 4 The 120575 in the ILSV ALSVand IFILSM methods is set to 001
In Case 1 and Case 2 SNR for each channel is set to15 and 10 dB respectively In this example data samples 119873are changed from 1000 to 4000 with step size 1000 TheMSE and 119875 values are plotted in Figures 1 2 and 3 versus119873 respectively The probability of convergence is one forall algorithms in Case 1 and the figure is not plotted hereFrom the figures we can see that the accuracy and theconvergence of the IFILSM are better than the ILSV andALSVmethods Note that the ILSV and ALSVmethods havepoor convergence when SNR is lower or equal to 10 dBIt can be seen from the figures that the performance ofthe all algorithms is constant over the sample size becauseautocovariance estimates of the observations are not changedin this rage of data sample size
6 Journal of Stochastics
1000 1500 2000 2500 3000 3500 4000
minus40
minus35
minus30
minus25
minus20
minus15
minus10
minus5
0
5
N
MSE
(dB)
IFILSMILSV
ALSV
Figure 2 MSE values of MAR matrices estimate versus datasamples119872 = 1000 SNR = 10 dB
1000 1500 2000 2500 3000 3500 4000
0
02
04
06
08
1
12
N
Prob
abili
ty o
f con
verg
ence
ALSVIFILSMILSV
Figure 3 Probability of convergence values of MAR matricesestimate versus data samples119872 = 1000 SNR = 10 dB
5 Conclusion
Steepest descent (SD) algorithm is used for estimating noisymultichannel AR model parameters The observation noisecovariancematrix is nondiagonal thismeans that the channelnoises are assumed to be spatially correlated The inversefiltering idea is used to estimate the observation noisecovariance matrix Computer simulations showed that theproposed method has better performance and convergencethan the ILSV and the ALSV methods
Appendix
The following result discusses the convergence conditions ofthe proposed algorithm
Theorem A1 The necessary and sufficient condition for theconvergence of the proposed algorithm is to require that the stepsize parameter 120583 satisfy the following condition
0 lt 120583 lt
2
1 minus 120582min (A1)
where 120582min is the minimum eigenvalue of the R1198671R119908matrix
Proof Defining estimation error matrix x(119894) = A(119894) minus Asubstituting A minus A
119867minus R1198671R119908A = 0 into (24) and using
x(119894) = A(119894) minus A we obtain
x(119894) = (I minus 120583 (I minus R1198671R119908)) x(119894minus1) (A2)
The eigenvalues of I minus 120583(I minus R1198671R119908) are 1 minus 120583(1 minus 120582
119894) where
120582119894= 120582119894(R1198671R119908) is between 0 and 1
If minus1 lt 1 minus 120583(1 minus 120582119894) lt 1 then lim
119894rarrinfinx(119894) = 0 In this
case we have 0 lt 120583 lt 2(1 minus 120582119894) and their intersection is
0 lt 120583 lt 2120582max(I minus R1198671R119908) = 2(1 minus 120582min)
Conflict of Interests
The author declares that there is no conflict of interestsregarding the publication of this paper
References
[1] S M KayModern Spectral Estimation Theory and ApplicationPrentice-Hall Englewood Cliffs NJ USA 1988
[2] S Srinivasan R Aichner W B Kleijn and W KellermannldquoMultichannel parametric speech enhancementrdquo IEEE SignalProcessing Letters vol 13 no 5 pp 304ndash307 2006
[3] C Komninakis C Fragouli A H Sayed and R D WeselldquoMulti-input multi-output fading channel tracking and equal-ization using Kalman estimationrdquo IEEE Transactions on SignalProcessing vol 50 no 5 pp 1065ndash1076 2002
[4] P Wang H Li and B Himed ldquoA new parametric GLRT formultichannel adaptive signal detectionrdquo IEEE Transactions onSignal Processing vol 58 no 1 pp 317ndash325 2010
[5] S Marple Digital Spectral Analysis with Applications Prentice-Hall Englewood Cliffs NJ USA 1987
[6] D T Pham and D Q Tong ldquoMaximum likelihood estimationfor a multivariate autoregressive modelrdquo IEEE Transactions onSignal Processing vol 42 no 11 pp 3061ndash3072 1994
[7] A Schlogl ldquoA comparison of multivariate autoregressive esti-matorsrdquo Signal Process vol 86 no 9 pp 2426ndash2429 2006
[8] A Nehorai and P Stoica ldquoAdaptive algorithms for constrainedARMA signals in the presence of noiserdquo IEEE Transactions onAcoustics Speech and Signal Processing vol 36 no 8 pp 1282ndash1291 1988
[9] W X Zheng ldquoFast identification of autoregressive signals fromnoisy observationsrdquo IEEE Transactions on Circuits and SystemsII Express Briefs vol 52 no 1 pp 43ndash48 2005
[10] A Mahmoudi andM Karimi ldquoEstimation of the parameters ofmultichannel autoregressive signals from noisy observationsrdquoSignal Processing vol 88 no 11 pp 2777ndash2783 2008
[11] W X Zheng ldquoAutoregressive parameter estimation from noisydatardquo IEEE Transactions on Circuits and Systems II Analog andDigital Signal Processing vol 47 no 1 pp 71ndash75 2000
[12] X M Qu J Zhou and Y T Luo ldquoA new noise-compensatedestimation scheme formultichannel autoregressive signals fromnoisy observationsrdquo Journal of Supercomputing vol 58 no 1 pp34ndash49 2011
[13] J Petitjean E Grivel W Bobillet and P Roussilhe ldquoMultichan-nel AR parameter estimation from noisy observations as anerrors-in-variables issuerdquo Signal Image and Video Processingvol 4 no 2 pp 209ndash220 2010
Journal of Stochastics 7
[14] A Jamoos E Grivel N Shakarneh and H Abdel-NourldquoDual optimal filters for parameter estimation of a multivariateautoregressive process from noisy observationsrdquo IET SignalProcessing vol 5 no 5 pp 471ndash479 2011
[15] D Labarre E Grivel Y Berthoumieu E Todini andM NajimldquoConsistent estimation of autoregressive parameters from noisyobservations based on two interacting Kalman filtersrdquo SignalProcessing vol 86 no 10 pp 2863ndash2876 2006
[16] D Labarre E Grivel M Najim and N Christov ldquoDual119867infin
algorithms for signal processing-application to speechenhancementrdquo IEEE Transactions on Signal Processing vol 55no 11 pp 5195ndash5208 2007
[17] W Bobillet R Diversi E Grivel R Guidorzi M Najim and USoverini ldquoSpeech enhancement combining optimal smoothingand errors-in-variables identification of noisy AR processesrdquoIEEE Transactions on Signal Processing vol 55 no 12 pp 5564ndash5578 2007
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
6 Journal of Stochastics
1000 1500 2000 2500 3000 3500 4000
minus40
minus35
minus30
minus25
minus20
minus15
minus10
minus5
0
5
N
MSE
(dB)
IFILSMILSV
ALSV
Figure 2 MSE values of MAR matrices estimate versus datasamples119872 = 1000 SNR = 10 dB
1000 1500 2000 2500 3000 3500 4000
0
02
04
06
08
1
12
N
Prob
abili
ty o
f con
verg
ence
ALSVIFILSMILSV
Figure 3 Probability of convergence values of MAR matricesestimate versus data samples119872 = 1000 SNR = 10 dB
5 Conclusion
Steepest descent (SD) algorithm is used for estimating noisymultichannel AR model parameters The observation noisecovariancematrix is nondiagonal thismeans that the channelnoises are assumed to be spatially correlated The inversefiltering idea is used to estimate the observation noisecovariance matrix Computer simulations showed that theproposed method has better performance and convergencethan the ILSV and the ALSV methods
Appendix
The following result discusses the convergence conditions ofthe proposed algorithm
Theorem A1 The necessary and sufficient condition for theconvergence of the proposed algorithm is to require that the stepsize parameter 120583 satisfy the following condition
0 lt 120583 lt
2
1 minus 120582min (A1)
where 120582min is the minimum eigenvalue of the R1198671R119908matrix
Proof Defining estimation error matrix x(119894) = A(119894) minus Asubstituting A minus A
119867minus R1198671R119908A = 0 into (24) and using
x(119894) = A(119894) minus A we obtain
x(119894) = (I minus 120583 (I minus R1198671R119908)) x(119894minus1) (A2)
The eigenvalues of I minus 120583(I minus R1198671R119908) are 1 minus 120583(1 minus 120582
119894) where
120582119894= 120582119894(R1198671R119908) is between 0 and 1
If minus1 lt 1 minus 120583(1 minus 120582119894) lt 1 then lim
119894rarrinfinx(119894) = 0 In this
case we have 0 lt 120583 lt 2(1 minus 120582119894) and their intersection is
0 lt 120583 lt 2120582max(I minus R1198671R119908) = 2(1 minus 120582min)
Conflict of Interests
The author declares that there is no conflict of interestsregarding the publication of this paper
References
[1] S M KayModern Spectral Estimation Theory and ApplicationPrentice-Hall Englewood Cliffs NJ USA 1988
[2] S Srinivasan R Aichner W B Kleijn and W KellermannldquoMultichannel parametric speech enhancementrdquo IEEE SignalProcessing Letters vol 13 no 5 pp 304ndash307 2006
[3] C Komninakis C Fragouli A H Sayed and R D WeselldquoMulti-input multi-output fading channel tracking and equal-ization using Kalman estimationrdquo IEEE Transactions on SignalProcessing vol 50 no 5 pp 1065ndash1076 2002
[4] P Wang H Li and B Himed ldquoA new parametric GLRT formultichannel adaptive signal detectionrdquo IEEE Transactions onSignal Processing vol 58 no 1 pp 317ndash325 2010
[5] S Marple Digital Spectral Analysis with Applications Prentice-Hall Englewood Cliffs NJ USA 1987
[6] D T Pham and D Q Tong ldquoMaximum likelihood estimationfor a multivariate autoregressive modelrdquo IEEE Transactions onSignal Processing vol 42 no 11 pp 3061ndash3072 1994
[7] A Schlogl ldquoA comparison of multivariate autoregressive esti-matorsrdquo Signal Process vol 86 no 9 pp 2426ndash2429 2006
[8] A Nehorai and P Stoica ldquoAdaptive algorithms for constrainedARMA signals in the presence of noiserdquo IEEE Transactions onAcoustics Speech and Signal Processing vol 36 no 8 pp 1282ndash1291 1988
[9] W X Zheng ldquoFast identification of autoregressive signals fromnoisy observationsrdquo IEEE Transactions on Circuits and SystemsII Express Briefs vol 52 no 1 pp 43ndash48 2005
[10] A Mahmoudi andM Karimi ldquoEstimation of the parameters ofmultichannel autoregressive signals from noisy observationsrdquoSignal Processing vol 88 no 11 pp 2777ndash2783 2008
[11] W X Zheng ldquoAutoregressive parameter estimation from noisydatardquo IEEE Transactions on Circuits and Systems II Analog andDigital Signal Processing vol 47 no 1 pp 71ndash75 2000
[12] X M Qu J Zhou and Y T Luo ldquoA new noise-compensatedestimation scheme formultichannel autoregressive signals fromnoisy observationsrdquo Journal of Supercomputing vol 58 no 1 pp34ndash49 2011
[13] J Petitjean E Grivel W Bobillet and P Roussilhe ldquoMultichan-nel AR parameter estimation from noisy observations as anerrors-in-variables issuerdquo Signal Image and Video Processingvol 4 no 2 pp 209ndash220 2010
Journal of Stochastics 7
[14] A Jamoos E Grivel N Shakarneh and H Abdel-NourldquoDual optimal filters for parameter estimation of a multivariateautoregressive process from noisy observationsrdquo IET SignalProcessing vol 5 no 5 pp 471ndash479 2011
[15] D Labarre E Grivel Y Berthoumieu E Todini andM NajimldquoConsistent estimation of autoregressive parameters from noisyobservations based on two interacting Kalman filtersrdquo SignalProcessing vol 86 no 10 pp 2863ndash2876 2006
[16] D Labarre E Grivel M Najim and N Christov ldquoDual119867infin
algorithms for signal processing-application to speechenhancementrdquo IEEE Transactions on Signal Processing vol 55no 11 pp 5195ndash5208 2007
[17] W Bobillet R Diversi E Grivel R Guidorzi M Najim and USoverini ldquoSpeech enhancement combining optimal smoothingand errors-in-variables identification of noisy AR processesrdquoIEEE Transactions on Signal Processing vol 55 no 12 pp 5564ndash5578 2007
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Journal of Stochastics 7
[14] A Jamoos E Grivel N Shakarneh and H Abdel-NourldquoDual optimal filters for parameter estimation of a multivariateautoregressive process from noisy observationsrdquo IET SignalProcessing vol 5 no 5 pp 471ndash479 2011
[15] D Labarre E Grivel Y Berthoumieu E Todini andM NajimldquoConsistent estimation of autoregressive parameters from noisyobservations based on two interacting Kalman filtersrdquo SignalProcessing vol 86 no 10 pp 2863ndash2876 2006
[16] D Labarre E Grivel M Najim and N Christov ldquoDual119867infin
algorithms for signal processing-application to speechenhancementrdquo IEEE Transactions on Signal Processing vol 55no 11 pp 5195ndash5208 2007
[17] W Bobillet R Diversi E Grivel R Guidorzi M Najim and USoverini ldquoSpeech enhancement combining optimal smoothingand errors-in-variables identification of noisy AR processesrdquoIEEE Transactions on Signal Processing vol 55 no 12 pp 5564ndash5578 2007
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of