
Beamforming regularization matrix and inverse problems applied to sound field measurement and extrapolation using microphone array

P.-A. Gauthier a,b,*, C. Camier a,b, Y. Pasco a,b, A. Berry a,b, E. Chambatte a,b, R. Lapointe c, M.-A. Delalay d

a Groupe d'Acoustique de l'Université de Sherbrooke, Université de Sherbrooke, 2500 boul. de l'Université, Sherbrooke, Québec, Canada J1K 2R1
b Centre for Interdisciplinary Research in Music, Media and Technology, McGill University, 527 Sherbrooke St. West, Montreal, Quebec, Canada H3A 1E3
c Bombardier Aerospace, P.O. Box 6087, Station Centre-Ville, Montreal, Quebec, Canada H3C 3G9
d CAE, 858 ch. de la Côte-de-Liesse, Saint-Laurent, Quebec, Canada H4T 1G6

* Corresponding author at: Groupe d'Acoustique de l'Université de Sherbrooke, Université de Sherbrooke, 2500 boul. de l'Université, Sherbrooke, Québec, Canada J1K 2R1. Tel.: +1 819 821 8000 x63773. E-mail address: [email protected] (P.-A. Gauthier).

Article history:
Received 12 July 2010
Received in revised form 18 July 2011
Accepted 19 July 2011
Handling Editor: M.P. Cartmell
Available online 12 August 2011

Journal of Sound and Vibration 330 (2011) 5852–5877. doi:10.1016/j.jsv.2011.07.022

Abstract

For sound field reproduction using multichannel spatial sound systems such as Wave Field Synthesis and Ambisonics, sound field extrapolation is a useful tool for the measurement, description and characterization of a sound environment to be reproduced in a listening area. In this paper, inverse problem theory is adapted to sound field extrapolation around a microphone array for further spatial sound and sound environment reproduction. A general review of inverse problem theory and analysis tools is given and used for the comparative evaluation of various microphone array configurations. Classical direct regularization methods such as truncated singular value decomposition and Tikhonov regularization are recalled. On the basis of the reviewed background, a new regularization method adapted to the problem at hand is introduced. This method involves the use of an a priori beamforming measurement to define a data-dependent discrete smoothing norm for the regularization of the inverse problem. This method, which represents the main contribution of this paper, shows promising results and opens new research avenues.

© 2011 Elsevier Ltd. All rights reserved.

    1. Introduction

In the last decades, a strong and widening interest in microphone and loudspeaker arrays and multichannel signal processing has been observed. Typical applications range from acoustic imaging, sound source localization and separation using near-field acoustical holography [1–3], beamforming [4–6], inverse problems [3,7,8], subspace methods [6], time-reversal and other array processing algorithms [9,10] to spatial sound reproduction using Wave Field Synthesis (WFS) [11–14], Ambisonics [15], multichannel Surround sound [16], etc. This paper deals with spatial sound recording within a larger sound field reproduction context.

Microphone array applications for acoustic imaging aim at the experimental visualization and characterization of noise sources [4]. This type of application is common in industrial and transport engineering for noise abatement purposes. A microphone array is typically used to measure the sound field over a given measurement grid, and a post-processing stage


involves sound field extrapolation outside the measurement grid, up to the measured sound source [1]. Applications such as sound source localization, identification, quantification, and separation look for algorithms and signal processing schemes that can either determine the angular directions from which the target sounds arrive or output a signal which is caused by a single source in a noisy or reverberating environment [10].

Among the acoustical localization techniques, the beamforming method [4,5] is one of the most widely used for far-field problems in the presence of noise. Beamforming is indeed robust against environmental and metrological errors [17] and has consequently been widely studied and improved for transport applications [18]. Basically, the conventional beamforming map shows the source strength spatially convolved with an array-dependent point-spread function.

In the recent past, hybrid methods using subspace analysis and beamforming have been actively developed, following the prominent approaches of MUSIC [19] and then ESPRIT [20]. The aim is to split the signal and noise components into identified subspaces in order to attenuate the measurement noise. One has to note that this underlying framework differs from the deconvolution approaches, which aim at attenuating the effect of the point-spread function in the beamforming map and consequently at sharpening the localization of the sources; widespread methods of this kind are DAMAS [21] and CLEAN [22].

Recently, and dedicated to aeroacoustic applications, Suzuki developed the generalized inverse beamforming (GIB) method, which aims at identifying sources of compact or distributed, coherent or incoherent nature [23]. Sarradj proposed a different subspace-based beamforming method focused on the signal subspace and leading to a computationally efficient estimation of the source strength and location [24]. This was extended to monopole or multipole radiation patterns by Zavala et al. [25] and Bouchard [26]. The general idea of these approaches is to improve the performance of beamforming by estimating the assumed distribution of sources by solving an inverse problem. In this paper, we propose a different method which can be interpreted as the opposite train of thought to these approaches. Indeed, the following developments will lead to using the beamforming map to penalize the non-signal region in an inverse problem. This is the main contribution of this paper.

A considerable number of recent papers on inverse problems in acoustic imaging are devoted to the selection of the optimal regularization parameter [27–30], which is a key aspect of inverse problems. In this paper, we introduce a new direct regularization method (as opposed to the iterative regularization methods [31]) that increases the inverse problem spatial resolution without being more sensitive to measurement noise.

Among the current trends, one also notes the combination and modification of known classical techniques such as near-field acoustical holography and beamforming. The method proposed in this paper fits within this trend of hybridizing known and existing methods.

On the loudspeaker counterpart of array processing, part of the applications is related to sound field reproduction. In this paper, we are specifically concerned with array measurements for spatial sound field reproduction for audio applications such as surround auditory display, sound environment reproduction or vehicle interior noise spatial rendering in vehicle mock-ups. Sound field reproduction applications can be further differentiated in terms of their goals or targets (the quantity or metric that must be reproduced). The most straightforward target is a sound pressure field: the target is then a set of (or a continuous distribution of) acoustic pressures at different spatial locations. This is typically the aim of Wave Field Synthesis [11] and classical Ambisonics (using decoders based on Fourier–Bessel series) [15]. Other sound field reproduction methods may involve other spatial targets such as sound intensity [32,33], sound diffuseness [32,33], acoustic impedance [34], sound contrast maximization [35,36], sound power minimization [36], random pressure fields over plane surfaces [37,38] and evanescent waves [39]. Although less common, it could even involve psychoacoustic metrics such as binaural interaural correlation, listener envelopment [40], etc. For all these sound field reproduction methods, the target is either measured in situ or synthesized from a theoretical definition (examples: simple plane or spherical wave, diffuse sound field). The methods presented in this paper address the in situ measurement of real sound environments.

For spatial sound reproduction, there is therefore a need for on-site sound field measurement, characterization and description. Most characterization methods rely on microphone arrays using from four (such as for first-order Ambisonics [16,32]) to hundreds of microphones. For microphone arrays with few microphones, the sensors are close enough to avoid any spatial aliasing issue and to evaluate sound pressure gradients. However, these very compact microphone arrays do not provide a spatially extended sound field description. To achieve a sound field capture over a larger area, one must increase the number of measurement channels and possibly increase the smallest distance between microphones, hence exposing the measurement to spatial aliasing issues above the spatial aliasing frequency [6,41]. However, as long as the microphone array spatial aliasing frequency is higher than the spatial aliasing frequency of the sound field reproduction system, this is not a problem. Indeed, in all cases, the sound field above the spatial aliasing frequency should be processed and reproduced using appropriate methods which are different from the ones presented in this paper.

Generally speaking, sound field extrapolation (SFE) aims at the prediction of a measured sound field inside and outside a measurement area [2,42]. This is somewhat similar to the definition of near-field acoustical holography (NAH) and acoustic imaging. However, what really distinguishes SFE from NAH and acoustic imaging is simply its more general aim, i.e. contrary to NAH and acoustic imaging, SFE is not necessarily concerned with the sound field extrapolation up to a potential sound source. Indeed, in our case, SFE is simply concerned with an extrapolation area surrounding the measurement area. This fits the need of spatial sound reproduction for a given listening area where the microphone array originally stands [42].

In this paper, we investigate an SFE method (below the array spatial aliasing frequency) that allows later post-processing and sound field characterization for subsequent sound field reproduction using loudspeaker or vibration source arrays in closed spaces (i.e. listening rooms or mock-ups). The main objective of the presented methods is to achieve the largest


extrapolation region around the listening area (where the microphone array is installed) so that any relevant spatial parameters (acoustic pressure field, spatial coherence, mode shapes, intensity maps, direct and diffuse energy densities as functions of spatial coordinates, etc.) can be computed and characterized from a measurement using a single array. An example of sound environment synthesis using general spatial parameters has been reported by Verron et al. [43].

    2. Paper outline

In Section 3 of this paper, the general inverse problem theory is reviewed and presented for linear acoustics with an emphasis on sound field extrapolation. Section 4 recalls the classical inverse problem analysis tools and discusses their ability to evaluate prototypical microphone array configurations. These two sections are provided as a general theoretical review that simplifies the presentation and justification of the main contribution of this paper. Classical inverse problem regularization methods and the beamforming regularization matrix, which is the main contribution of this paper, are presented in Section 5. Section 6 illustrates the aforementioned tools and regularization methods for a few microphone array configurations along with sound field extrapolation examples. Concluding remarks are reported in Section 7.

    3. Inverse problem in acoustics

The problem geometry is depicted in Fig. 1. Any field point is described by $\mathbf{x} \in V$. A point which belongs to a source surface $S_s$ (surrounding a source volume $V_s \subset V$) is denoted $\mathbf{y}$. A point which belongs to a continuous measurement surface $S_a \subset V$ is denoted $\mathbf{x}_a$. A single microphone $m$ is located at $\mathbf{x}_m \in S_a$ with $m \in \{1,\ldots,M\}$. In this paper, two scalar products associated with the aperture surface and the source surface will be used: $\int_{S_a} f^*(\mathbf{x}_a)\, g(\mathbf{x}_a)\, dS_a$ and $\int_{S_s} f^*(\mathbf{y})\, g(\mathbf{y})\, dS_s$, respectively, where $*$ denotes complex conjugation.

    3.1. General inverse problem of sound radiation

For a point source at $\mathbf{x}_0$, the Helmholtz equation is
$$(\nabla^2 + k^2)\,G(\mathbf{x}|\mathbf{x}_0) = -4\pi\,\delta(\mathbf{x}-\mathbf{x}_0), \quad (1)$$
where $G(\mathbf{x}|\mathbf{x}_0)$ is the Green function of the problem at hand and $k$ the wavenumber (rad/m). For sound sources confined in a source volume $V_s$, the resulting continuous sound field is given by the simple source formulation [1,7]
$$p(\mathbf{x}) = \int_{S_s} G(\mathbf{x}|\mathbf{y})\, q(\mathbf{y})\, dS_s, \quad (2)$$
where $q(\mathbf{y})$ is the monopole-amplitude distribution per unit surface. For the direct problem, the sound pressure field $p(\mathbf{x})$ is computed from the source distribution $q(\mathbf{y})$. For the general inverse problem in acoustics, the aim is to estimate the sound source distribution $q(\mathbf{y})$ from a given sound field measurement in $S_a$, either spatially continuous ($\hat{p}(\mathbf{x}_a)$) or sampled ($\hat{p}(\mathbf{x}_m)$). In the latter case, Eq. (2) is a Fredholm integral equation of the first kind [44,45] that must be solved. From this observation, many general properties of Eq. (2) as an inverse problem are derived in the sequel [31]. In most practical applications of the inverse problem method in acoustics, the continuous formulation is soon discretized before being actually analyzed and solved [7,8].

Fig. 1. Geometrical and symbol conventions. Any field point is described by $\mathbf{x}$. A point which belongs to a source surface $S_s$ is denoted $\mathbf{y}$. Microphone $m$ is located at $\mathbf{x}_m$. Rectangular coordinates are given by $x_1, x_2, x_3$ and spherical coordinates by $(R,\theta,\varphi)$ with azimuth $\theta$ and elevation $\varphi$.

However, some general ideas that help the understanding of inverse problem issues and that simplify the justification of the proposed regularization method can be derived from the continuous inverse problem. Therefore, we recall some of those classical principles in the remainder of this section [31].

Typically, $G$ is a system model either based on theory or on system identification. As is the case for most inverse problems encountered in physics, the problem is often ill-posed in the sense that it does not simultaneously satisfy the following three properties: existence of the solution, uniqueness of the solution and stability of the solution [45].

In Eq. (2), $G(\mathbf{x}|\mathbf{y})$ (or $G(\mathbf{x}_m|\mathbf{y})$) is the integral's kernel, which represents the physics of sound radiation since it must satisfy the Helmholtz equation (Eq. (1)). For any kernel of the integral in Eq. (2), the integral operation associated with this kernel has a smoothing effect that reduces the contribution of the high-frequency content (or spatial details) of $q(\mathbf{y})$ to $p(\mathbf{x})$ [31]. This smoothing effect is at the heart of many difficulties encountered in sound source imaging (beamforming, acoustical holography, etc.). Moreover, it justifies near-field measurement approaches such as near-field acoustical holography [1], where the microphones are close enough to the source to capture the evanescent waves.

    3.2. Singular value expansion of the integral operator

    The inverse problem can be usefully interpreted through the singular value expansion (SVE) of the integral operator.

    Detailed references about SVE can be found in various textbooks [31,44,45]. Within the field of acoustics, array processing

    and sound field reproduction, the use of the SVE and singular value decomposition is not new: it was reported by Borgiotti

    [46], Photiadis [47] and Fazi et al. [48], to name just a few. As an example, recent work by Fazi [49] on sound field

reproduction found its foundation in the SVE. In this paper, the general theory is recalled to justify the proposed method. For any kernel found in the integral operator of Eq. (2), the integral operator has its singular value expansion expressed as
$$G(\mathbf{x}_a|\mathbf{y}) = \sum_{i=1}^{\infty} \mu_i\, u_i(\mathbf{x}_a)\, v_i^*(\mathbf{y}), \quad (3)$$
where $u_i(\mathbf{x}_a)$ and $v_i(\mathbf{y})$ are the singular functions of the integral kernel in the domains of interest and the $\mu_i$ are the corresponding singular values. The singular values are real positive numbers typically ordered in decreasing order ($\mu_1 \geq \mu_2 \geq \cdots > 0$). The singular functions $u_i(\mathbf{x}_a)$ and $v_i(\mathbf{y})$ are orthonormal in $S_a$ and $S_s$, respectively,
$$\int_{S_a} u_i^*(\mathbf{x}_a)\, u_j(\mathbf{x}_a)\, dS_a = \delta_{ij}, \quad (4)$$
$$\int_{S_s} v_i^*(\mathbf{y})\, v_j(\mathbf{y})\, dS_s = \delta_{ij}. \quad (5)$$

This leads to the important property [31,45]
$$\mu_i\, u_i(\mathbf{x}_a) = \int_{S_s} G(\mathbf{x}_a|\mathbf{y})\, v_i(\mathbf{y})\, dS_s, \quad (6)$$
which shows that if the source distribution is a singular function $v_i(\mathbf{y})$, the resulting sound pressure field is the corresponding singular function $u_i(\mathbf{x}_a)$ scaled by $\mu_i$. The singular value $\mu_i$ represents the coupling between the two singular functions $v_i$ and $u_i$. Working in the singular value expansion coordinate system then involves uncoupled source distributions and resulting sound pressure fields. Each group $u_i(\mathbf{x}_a)$, $v_i(\mathbf{y})$ and $\mu_i$ is called a singular system of the integral operator equation (2) [44]. The singular functions and values are only known theoretically for simple geometries of the domains $S_s$ and $S_a$: planar, cylindrical and spherical [49]. Indeed, in these cases, one finds the corresponding orthogonal solutions of the complex Helmholtz equation (1): plane waves, cylindrical harmonics and spherical harmonics. Examples of SVE analysis and development on the basis of such geometries are reported by Williams [2] (for conformal planes) and by Fazi et al. [48] (for concentric spheres). Other recent examples of sound field capture and reproduction based on spherical harmonics are reported in Refs. [50,51]. In this section, no assumption is made about the geometry of the domains $S_a$ and $S_s$, as this allows the use of irregular arrays.

Since the singular values satisfy the relation
$$\sum_{i=1}^{\infty} \mu_i^2 = \|G(\mathbf{x}_a|\mathbf{y})\|^2 = \int_{S_a}\!\int_{S_s} G^*(\mathbf{x}_a|\mathbf{y})\, G(\mathbf{x}_a|\mathbf{y})\, dS_s\, dS_a, \quad (7)$$
the singular values $\mu_i$ must strictly decay faster than $i^{-1/2}$ to ensure a square-integrable kernel (according to the convergence properties of Riemann series). This condition should be verified before solving the inverse problem.

Another fundamental property of the kernel is visible through the SVE: the singular values decay faster to zero when the kernel is smoother, where the smoothness is defined by the number of non-zero continuous partial derivatives of the kernel $G$ [31]. Hence, one notes that a kernel with very few fluctuations for a varying coordinate will: (1) involve fewer non-zero continuous partial derivatives and (2) present a much more pronounced smoothing of the input function $q(\mathbf{y})$ once the linear integration is evaluated. An additional property reported by Hansen [31] is that the smaller the singular values $\mu_i$ are, the more oscillations and zero-crossings the corresponding singular functions $u_i$ and $v_i$ exhibit. This is often


observed in practice, but not yet formally proven, and difficult to generalize. According to these two properties, a singular value spectrum that rapidly decreases to zero indicates a smoother kernel. Therefore, the smoothness of the kernel is also measured through the singular value decay rate: a fast decay rate testifies to a smoother kernel. This rate will later be used to compare various microphone array configurations in Section 6.

    3.3. Solution of the general inverse problem of sound radiation

On the basis of the SVE, the continuous inverse problem solution is given by
$$q(\mathbf{y}) = \sum_{i=1}^{\infty} \frac{\int_{S_a} u_i^*(\mathbf{x}_a)\, p(\mathbf{x}_a)\, dS_a}{\mu_i}\, v_i(\mathbf{y}) \quad \forall\, \mathbf{y} \in S_s. \quad (8)$$
As this equation shows, the smoother the kernel, the faster the singular values $\mu_i$ decay and the greater will be the amplification of the corresponding singular system in the inverse problem.

One of the most important verification tools for continuous inverse problems is the continuous Picard condition [31,44]. This condition states that in order to obtain a square-integrable solution $q(\mathbf{y})$ in $S_s$, the following condition must be satisfied:
$$\sum_{i=1}^{\infty} \left( \frac{\int_{S_a} u_i^*(\mathbf{x}_a)\, p(\mathbf{x}_a)\, dS_a}{\mu_i} \right)^2 < \infty, \quad (9)$$
which shows that from a certain point in the summation the coefficients $\int_{S_a} u_i^*(\mathbf{x}_a)\, p(\mathbf{x}_a)\, dS_a$ must decay faster than the $\mu_i$ [52]. Put simply, this means that for large $i$, the projection of the measured pressure $p(\mathbf{x}_a)$ on the $i$-th singular function $u_i(\mathbf{x}_a)$ must decay with $i$ since, as the denominators $\mu_i$ decay with $i$, they may amplify the contribution of the coefficients $\int_{S_a} u_i^*(\mathbf{x}_a)\, p(\mathbf{x}_a)\, dS_a$. The corresponding small singular values $\mu_i$ are often associated with coefficients $\int_{S_a} u_i^*(\mathbf{x}_a)\, p(\mathbf{x}_a)\, dS_a$ that possibly carry noise only. Indeed, as mentioned earlier, as $i$ increases the number of zero-crossings and oscillations in $u_i$ typically increases. Thus, these singular functions may approach spatially incoherent signals, such as measurement noise.

Many properties and difficulties of the continuous inverse problem are understood on the basis of the singular value expansion of the system's kernel. Moreover, many of these properties and difficulties carry over to the spatially sampled version of the continuous inverse problem. A good understanding of the continuous inverse problem is mandatory to circumvent the numerical difficulties encountered in the practical sampled inverse problem.

    3.4. Free-field example

The ideas mentioned in Sections 3.1–3.3 are illustrated for the free-field case in three dimensions. In that case, the following Green function is used:
$$G(\mathbf{x}|\mathbf{y}) = e^{-jkr}/r, \quad (10)$$
which gives
$$p(\mathbf{x}) = \int_{S_s} \frac{e^{-jkr}}{r}\, q(\mathbf{y})\, dS_s, \quad (11)$$
with $r = |\mathbf{x}-\mathbf{y}|$. Here the smoothness of the kernels $e^{-jkr}/r$ (or $e^{-jkr_m}/r_m$), as introduced in Section 3.2, is easily observed in terms of their derivatives $d^n G/dr^n$ [31]. According to the general Leibniz rule for higher derivatives of products [53], these are given by
$$\frac{d^n G}{dr^n} = \frac{d^n}{dr^n}\!\left(e^{-jkr}\, r^{-1}\right) = \sum_{\eta=0}^{n} \frac{n!}{\eta!}\,(-jk)^{\eta}\,(-1)^{n-\eta}\, r^{\eta-n}\, \frac{e^{-jkr}}{r}, \quad (12)$$

where $r$ is replaced by $r_m$ for the sampled case. If we only take into account the magnitude of these $n$-order derivatives, it is clear that at low frequency the corresponding small wavenumber $k$ makes the derivative magnitude decay faster than at higher frequency (for a fixed $r$). As expected, this means that the free-field Green function smoothness is large at low frequency. For a fixed frequency, a small distance $r$ ($r \leq 1$) makes the derivative magnitude increase with $n$, hence giving a less smooth kernel at small distance from the source, while a large $r$ ($r \geq 1$) makes the derivative magnitude decrease with $n$, hence giving a smoother kernel at large distance from the source. As will be shown in Section 4, these kernel smoothness variations with distance and frequency are directly transposed to the inverse problem singular value spectrum and condition number. These observations also fit the numerical results and observations reported by Nelson and Yoon [54] and Yoon and Nelson [27] for acoustical imaging of sound sources using inverse problems.

Another case is when the separation between the sources and the receivers tends to infinity; the sound field at the receivers then locally takes the form of propagating plane waves. Then, one writes
$$p(\mathbf{x}) = \int_0^{2\pi}\!\int_{-\pi/2}^{\pi/2} e^{-j\mathbf{k}\cdot\mathbf{x}}\, s(\varphi,\theta)\, \cos\varphi\, d\varphi\, d\theta, \quad (13)$$
with the wavenumber vector $\mathbf{k} = k(\cos\varphi\cos\theta\, \mathbf{e}_1 + \cos\varphi\sin\theta\, \mathbf{e}_2 + \sin\varphi\, \mathbf{e}_3)$, where $\theta$ and $\varphi$ are the azimuth and elevation angles and the $\mathbf{e}_i$ are the unit vectors along the rectangular coordinates. In Eq. (13), $s(\varphi,\theta)$ are the elementary plane wave amplitudes. In this simpler case of propagating plane waves, the $n$-order spatial derivative magnitude along the propagation direction $(\theta,\varphi)$ is proportional to $k^n$ and the previous remarks regarding the smoothness of the integral kernel as a function of frequency apply.

    3.5. Sampled inverse sound radiation problem

For practical sound field extrapolation using inverse problem theory, the general inverse problem must be discretized both for the system's input $q(\mathbf{y})$ (or $s(\varphi,\theta)$) and output $p(\mathbf{x})$ [45]. Assuming a set of $M$ measurement microphones and a set of $L$ point sources, the spatially sampled direct problem is written in matrix form
$$\mathbf{p}(\mathbf{x}_m) = \mathbf{G}(\mathbf{x}_m,\mathbf{y}_l)\,\mathbf{q}(\mathbf{y}_l), \quad (14)$$
with
$$\mathbf{p} \in \mathbb{C}^{M}, \quad \mathbf{G} \in \mathbb{C}^{M\times L}, \quad \mathbf{q} \in \mathbb{C}^{L}, \quad (15)$$
and $\mathbf{y}_l$ the $l$-th element of a set $\mathcal{L} = \{\mathbf{y}_1,\ldots,\mathbf{y}_l,\ldots,\mathbf{y}_L\} \subset S_s$. For the plane wave case, the Green function matrix $\mathbf{G}$ is replaced by
$$G_{ml} = e^{-j\mathbf{k}(\theta_l,\varphi_l)\cdot\mathbf{x}_m} \in \mathbb{C}^{M\times L}, \quad (16)$$
with $\mathbf{q}$ then designating a vector $\mathbf{s}$ of plane wave amplitudes. Note that for the remainder of this paper, we assume that the number of sources used in the inverse problem is always larger than or equal to the number of measurement microphones, i.e. $M \leq L$.

Depending on the system dimensions $M$ and $L$, the inverse problem can be presented in different forms. If $M = L$, the inverse solution is directly written as
$$\mathbf{q} = \mathbf{G}^{-1}\hat{\mathbf{p}}. \quad (17)$$
For the more general case of a rectangular matrix $\mathbf{G}$ (with $M < L$), usual matrix inversion is impossible. To overcome this limitation, a more general formulation is used. It involves the minimization of the 2-norm of the error between the reconstructed sound pressure $\mathbf{p}$ and the measured sound pressure $\hat{\mathbf{p}}$. The problem is then to find the optimal $\mathbf{q}$ for the minimization problem
$$\mathbf{q}_{\mathrm{opt}} = \arg\min_{\mathbf{q}}\{\|\hat{\mathbf{p}}-\mathbf{G}\mathbf{q}\|_2^2\}. \quad (18)$$
It is also possible to regularize the inverse problem using Tikhonov regularization
$$\mathbf{q}_{\mathrm{opt}} = \arg\min_{\mathbf{q}}\{\|\hat{\mathbf{p}}-\mathbf{G}\mathbf{q}\|_2^2 + \lambda^2\,\Omega(\mathbf{q})^2\}, \quad (19)$$
where $\lambda$ is the regularization parameter and $\Omega$ the discrete smoothing norm used to shape the regularization. The purpose of the discrete smoothing norm is at the heart of the beamforming regularization matrix. This is presented in Section 5.4. For clarity's sake, note that $\mathbf{q}$ in the previous equations (Eqs. (14)–(19)) substitutes for $\mathbf{s}(\theta_l,\varphi_l)$ in the inverse problem with a set of $L$ plane wave sources (see Eq. (16)).
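As a concrete illustration of Eqs. (14)–(19), the short sketch below builds the plane-wave matrix $\mathbf{G}$ of Eq. (16) for an illustrative planar array and direction grid and computes a basic Tikhonov solution. The geometry, direction grid, noise level and regularization parameter are assumptions chosen for illustration only (they are not the exact configurations of Section 6), and the $e^{-j\mathbf{k}\cdot\mathbf{x}_m}$ sign convention follows the $e^{-jkr}$ Green function used above.

```python
import numpy as np

# Minimal sketch of the discretized plane-wave inverse problem of Eqs. (14)-(19).
# The array geometry, direction grid, noise level and lambda are illustrative
# assumptions only; the sign convention follows the e^{-jkr} Green function.
c = 343.0                      # speed of sound [m/s]
freq = 589.0                   # analysis frequency [Hz]
k = 2 * np.pi * freq / c       # wavenumber [rad/m]

# Example microphone positions: 10 x 10 planar grid, 0.11 m spacing (M = 100).
xg = np.arange(10) * 0.11
X1, X2 = np.meshgrid(xg, xg, indexing="ij")
xm = np.column_stack([X1.ravel(), X2.ravel(), np.zeros(X1.size)])   # (M, 3)

# Example hemispherical plane-wave grid (L = 196 >= M), zenith excluded.
theta = np.linspace(0.0, 2 * np.pi, 28, endpoint=False)             # azimuth
phi = np.linspace(0.0, np.pi / 2, 8)[:-1]                           # elevation
TH, PH = [a.ravel() for a in np.meshgrid(theta, phi)]
kvec = k * np.column_stack([np.cos(PH) * np.cos(TH),
                            np.cos(PH) * np.sin(TH),
                            np.sin(PH)])                             # (L, 3)

# Plane-wave transfer matrix, Eq. (16): G_ml = exp(-j k(theta_l, phi_l) . x_m).
G = np.exp(-1j * xm @ kvec.T)                                        # (M, L)

# Synthetic measurement: one unit-amplitude plane wave plus uncorrelated noise.
q_true = np.zeros(kvec.shape[0], dtype=complex)
q_true[40] = 1.0
rng = np.random.default_rng(0)
p_hat = G @ q_true + 0.001 * (rng.standard_normal(xm.shape[0])
                              + 1j * rng.standard_normal(xm.shape[0]))

# Basic Tikhonov solution of Eq. (19) with Omega(q) = ||q||_2:
# q = (G^H G + lambda^2 I)^(-1) G^H p_hat.
lam = 1e-2
q_reg = np.linalg.solve(G.conj().T @ G + lam**2 * np.eye(G.shape[1]),
                        G.conj().T @ p_hat)
```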

    4. Discrete inverse problem analysis tools and solutions

In this section, the analysis tools for the discrete inverse problem are recalled. They are derived from the continuous inverse problem and can be found in many textbooks [1,31,55]. The solution of the discrete inverse problem is also presented.

    4.1. Singular value decomposition and singular value spectrum

The SVE finds its discrete equivalent in the singular value decomposition (SVD) [55] (assuming that $M \leq L$ and that $\mathbf{G}$ is full rank)
$$\mathbf{G} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H = \sum_{i=1}^{M} \mathbf{u}_i\,\sigma_i\,\mathbf{v}_i^H, \quad (20)$$
with unitary matrices $\mathbf{U} \in \mathbb{C}^{M\times M}$ and $\mathbf{V} \in \mathbb{C}^{L\times L}$ ($\mathbf{U}^H\mathbf{U} = \mathbf{V}^H\mathbf{V} = \mathbf{I}$). In Eq. (20), the vectors $\mathbf{u}_i$ and $\mathbf{v}_i$ are the left and right singular vectors, respectively. They correspond to the columns of $\mathbf{U}$ and $\mathbf{V}$. Each singular vector pair corresponds to a singular value $\sigma_i$ stored on the main diagonal of $\boldsymbol{\Sigma} \in \mathbb{R}^{M\times L}$. As for the continuous case, the singular values are ordered in decreasing order ($\sigma_1 \geq \sigma_2 \geq \cdots > 0$). The SVE properties related to the smoothness of the kernel are transposed to the SVD of the discretized kernel. The orthogonality of the singular vectors leads to
$$\sigma_i\,\mathbf{u}_i = \mathbf{G}\mathbf{v}_i. \quad (21)$$
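A minimal sketch of how the singular value spectrum, condition number and effective rank of a discretized $\mathbf{G}$ can be computed; these are the quantities compared across array configurations in Sections 6.1 and 6.2. The random complex matrix below is only a placeholder for the plane-wave matrix of Eq. (16).

```python
import numpy as np

# Sketch: singular value spectrum, condition number and effective rank of a
# discretized transfer matrix G (Eq. (20)). The random complex matrix below is
# only a placeholder for the plane-wave matrix of Eq. (16).
rng = np.random.default_rng(0)
M, L = 100, 196
G = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))

U, s, Vh = np.linalg.svd(G, full_matrices=False)    # G = U diag(s) V^H
print("condition number kappa(G):", s[0] / s[-1])

# Effective numerical rank: number of singular values above a tolerance tied to
# machine precision (the quantity reported per frequency in Fig. 7).
tol = max(M, L) * np.finfo(float).eps * s[0]
print("rank(G):", int(np.sum(s > tol)))
```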


However, the development of the discrete Picard condition is subtle and complex, and involves many details and practical shortcuts which are beyond the scope of this paper. The reader is redirected to Refs. [31,44,45,52] for the complete developments. The following paragraph summarizes the resulting condition and definition for practical microphone array applications.

Although the SVD theory suggests that both the singular values $\sigma_i$ and the coefficients $|\mathbf{u}_i^H\hat{\mathbf{p}}|$ decrease monotonically, measurement noise in $\hat{\mathbf{p}}$ will typically make the coefficients settle at a given level $\tau_p$ for $i \geq i_{\tau_p}$ in any real situation. For uncorrelated noise with covariance matrix $\tau_p^2\mathbf{I}$, the coefficients will settle at $\tau_p$. Accordingly, one defines the resolution limit
$$s_{\mathrm{res}} = \tau_p/\sigma_{i_{\tau_p}}, \quad (30)$$
as the size of the smallest SVD component $|\mathbf{u}_i^H\hat{\mathbf{p}}/\sigma_i|$ in Eq. (25) that can be recovered in the solution without being entirely determined by measurement noise [31]. If one has a priori information about the measurement noise level, the resolution limit is useful to set a penalization parameter for a regularization strategy. Moreover, the Picard condition and the resolution limit are related to the corner location of the L-curve [31] of the inverse problem. The L-curve is a common tool used for the selection of an appropriate regularization parameter, especially when the measurement noise level is unknown or inaccessible. In Section 6, it will be shown how the penalization parameter selected using the Picard condition and the resolution limit approaches the optimal corner of the L-curves. The Picard condition and the resolution limit have the great advantage of allowing the selection of a regularization parameter without the need to compute the inverse problem solution many times, as is the case for the traditional L-curve. A complete discussion of the L-curve analysis is given by Hansen [31].

As for the continuous case (Section 3.2), the discrete Picard condition states that $|\mathbf{u}_i^H\hat{\mathbf{p}}|$ must decay faster than $\sigma_i$ to obtain a viable inverse problem solution. The easiest and simplest way to verify the discrete Picard condition is by visual inspection of the coefficients $|\mathbf{u}_i^H\hat{\mathbf{p}}|$ and singular values. However, some automatic verifications of the discrete Picard condition exist, the easiest and most usual being the moving geometric mean [52] defined by
$$\rho_i = \left(\prod_{j=i-\Delta i}^{i+\Delta i} |\mathbf{u}_j^H\hat{\mathbf{p}}|\right)^{1/(2\Delta i + 1)} \Big/ \sigma_i, \quad (31)$$
where $\Delta i$ defines the size of the averaging window. The discrete Picard condition is satisfied up to a given $i$ as long as $\rho_i$ decays with increasing $i$. From that given $i$, $\rho_i$ typically starts to increase and the Picard condition is not satisfied above that $i$. Examples of plotted Picard conditions with measurement noise are shown in Section 6.
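A hedged sketch of these diagnostics follows: the coefficients $|\mathbf{u}_i^H\hat{\mathbf{p}}|$, the moving geometric mean of Eq. (31) (assuming a centred window of $2\Delta i + 1$ terms) and the resolution limit of Eq. (30). The simple level-off detection used for $i_{\tau_p}$ is an illustrative choice, not the paper's procedure.

```python
import numpy as np

def picard_quantities(G, p_hat, delta_i=3):
    """Discrete Picard diagnostics (sketch): singular values sigma_i,
    coefficients |u_i^H p_hat| and the moving geometric mean rho_i of Eq. (31),
    assuming a centred window of 2*delta_i + 1 terms."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    coeffs = np.abs(U.conj().T @ p_hat)              # |u_i^H p_hat|
    rho = np.full_like(s, np.nan)
    for i in range(delta_i, len(s) - delta_i):
        window = coeffs[i - delta_i:i + delta_i + 1]
        rho[i] = np.exp(np.mean(np.log(window + 1e-300))) / s[i]
    return s, coeffs, rho

def resolution_limit(s, coeffs, tau_p):
    """Resolution limit of Eq. (30): tau_p / sigma_{i_tau_p}. The level-off
    index i_tau_p is detected here as the first coefficient below tau_p,
    which is a crude illustrative choice."""
    below = np.flatnonzero(coeffs < tau_p)
    i_tau = below[0] if below.size else len(s) - 1
    return tau_p / s[i_tau], i_tau
```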

    5. Inverse problem regularization

As mentioned earlier, small singular values or smooth kernels have the potential to amplify measurement noise in the inverse problem solution. Regularization methods aim at the identification and filtering (or removal) of these small singular values in the inverse problem solution as expressed in Eq. (25). For the case where the discrete smoothing norm is simply the solution norm $\Omega(\mathbf{q}) = \|\mathbf{q}\|_2$, this filtering or removal is easily generalized through the definition of the filter factors $f_i$
$$\mathbf{q}_{\mathrm{reg}} = \sum_{i=1}^{\mathrm{rank}(\mathbf{G})} f_i\,\frac{\mathbf{u}_i^H\hat{\mathbf{p}}}{\sigma_i}\,\mathbf{v}_i, \quad (32)$$
with $f_i \leq 1$. The filter factors $f_i$ have the power to leave untouched ($f_i = 1$) or attenuate ($f_i < 1$) the potentially erroneous contribution of the coefficients $\mathbf{u}_i^H\hat{\mathbf{p}}/\sigma_i$ in $\mathbf{q}_{\mathrm{reg}}$. The regularization method uniquely defines the filter factors. In the following subsections, various direct regularization methods are presented and the novel regularization method based on the beamforming regularization matrix as the discrete smoothing norm is introduced (Sections 5.3 and 5.4). In this paper, we are concerned with direct regularization methods. Direct regularization methods involve the computation of the inverse problem solution (here $\mathbf{q}_{\mathrm{reg}}$) in a single step [31]. Iterative regularization methods involve an iterative computation of the inverse problem solution [2,31,45].

    5.1. Truncated SVD solution

The truncated SVD (TSVD) regularization [31] simply involves the cancellation of the contribution of the singular values for $i > I$. This is written as follows:
$$\mathbf{q}_I = \sum_{i=1}^{I} \frac{\mathbf{u}_i^H\hat{\mathbf{p}}}{\sigma_i}\,\mathbf{v}_i. \quad (33)$$
The resulting filter factors for Eq. (32) are given by
$$f_i = \begin{cases} 1, & \forall\, i \leq I, \\ 0, & \forall\, i > I. \end{cases} \quad (34)$$
Application and discussion of truncated SVD for inverse problems in acoustics is presented in [8].

    5.2. Tikhonov regularization

Basic Tikhonov regularization involves the minimization task shown in Eq. (19) with the discrete smoothing norm corresponding to the solution norm ($\Omega(\mathbf{q}) = \|\mathbf{q}\|_2$). This gives
$$\mathbf{q}_{\mathrm{reg}}(\lambda) = \arg\min_{\mathbf{q}}\{\|\mathbf{G}\mathbf{q}-\hat{\mathbf{p}}\|_2^2 + \lambda^2\|\mathbf{q}\|_2^2\}. \quad (35)$$
The solution to this minimization task is given by Eq. (32) with the following filter factors [31]:
$$f_i = \frac{\sigma_i^2}{\sigma_i^2 + \lambda^2}. \quad (36)$$
The basic Tikhonov regularization filter factors are much more progressive than those of TSVD. Generic filter factors for basic Tikhonov regularization are shown in Fig. 2. As $\lambda^2$ crosses $\sigma_i^2$, the filter factor $f_i$ decreases progressively. At $\lambda = \sigma_i$, $f_i = 0.5$.
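The two direct methods recalled above can be written compactly in terms of the filter factors of Eq. (32). The sketch below assumes $\mathbf{G}$ and $\hat{\mathbf{p}}$ are given and simply implements Eqs. (32)–(36); it is an illustration, not the authors' code.

```python
import numpy as np

def svd_components(G, p_hat):
    """SVD of G and the coefficients u_i^H p_hat appearing in Eq. (32)."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    return s, U.conj().T @ p_hat, Vh

def tsvd_factors(s, I):
    """TSVD filter factors, Eq. (34): f_i = 1 for i <= I and 0 for i > I."""
    return (np.arange(1, len(s) + 1) <= I).astype(float)

def tikhonov_factors(s, lam):
    """Basic Tikhonov filter factors, Eq. (36): f_i = s_i^2/(s_i^2 + lam^2)."""
    return s**2 / (s**2 + lam**2)

def filtered_solution(s, coeffs, Vh, f):
    """Regularized solution of Eq. (32): q = sum_i f_i (u_i^H p_hat / s_i) v_i."""
    return Vh.conj().T @ (f * coeffs / s)
```

For instance, filtered_solution(s, coeffs, Vh, tikhonov_factors(s, lam)) returns the basic Tikhonov solution of Eq. (35), while using tsvd_factors(s, I) reproduces the truncated solution of Eq. (33).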

    5.3. Tikhonov regularization with discrete smoothing norm

A more general form of Tikhonov regularization involves a discrete smoothing norm $\Omega(\mathbf{q})$ as shown in Eq. (19). The most common form of discrete smoothing norm is based on a weighting matrix $\mathbf{L}$
$$\Omega(\mathbf{q}) = \|\mathbf{L}\mathbf{q}\|_2. \quad (37)$$
This type of discrete smoothing norm can also encompass an a priori (or favored) solution $\tilde{\mathbf{q}}$ [31]
$$\Omega(\mathbf{q}) = \|\mathbf{L}(\mathbf{q} - \tilde{\mathbf{q}})\|_2. \quad (38)$$
In acoustics, the idea of an a priori solution was applied to the definition of adaptive Wave Field Synthesis for sound field reproduction in listening rooms [12]. Typically, matrix $\mathbf{L}$ is either the identity matrix or a scaled approximation of a first or second derivative operator [31]. Examples of derivative operators in the acoustical inverse problem for sound field equalization were recently reported by Stefanakis and Jacobsen [56] and by Langrenne and Garcia [57]. Another work using a weighting matrix has been presented by Poletti [58] for surround sound reproduction, where the weighting matrix is used to add more penalization to the loudspeakers which are far from the angle of the sound that should be reproduced. However, note that none of these works took advantage of the generalized SVD (GSVD) or of the inverse problem standard form [31] to highlight the impact of the weighting matrix. When matrix $\mathbf{L}$ is the identity matrix, the problem corresponds to the basic Tikhonov regularization as described in Section 5.2.

To study the impact of the matrix $\mathbf{L}$ on the inverse problem, one can rely on the GSVD, which allows for a specific understanding of the problem and sheds some light on a specific property of the proposed method, namely the increased spatial resolution. Therefore, the theoretical analysis of inverse problems with Tikhonov regularization and $\mathbf{L} \neq \mathbf{I}$ will be based on the GSVD of the matrix pair $\mathbf{G}$ and $\mathbf{L}$. The development is postponed to Section 5.4. The reader is directed to Refs. [31,52] for more details.

Fig. 2. Classical Tikhonov filter factors $f_i$ as a function of the normalized penalization parameter $\lambda/\sigma_i$.

In this case, the generalized singular values are given by $\gamma_i$ and the filter factors are
$$f_i = \frac{\gamma_i^2}{\gamma_i^2 + \lambda^2}. \quad (39)$$
As this equation shows, the filter factors for Tikhonov regularization with $\mathbf{L} \neq \mathbf{I}$ exhibit the same behavior as for basic Tikhonov regularization with $\mathbf{L} = \mathbf{I}$: the generalized singular values of the matrix pair $\mathbf{G}$ and $\mathbf{L}$ for which $\gamma_i^2 \leq \lambda^2$ are filtered out from the solution by the regularization. Although it is possible to obtain an inverse problem solution expression (different from Eq. (32)) based on the GSVD, we will first rely on a simpler approach that illustrates the link between the inverse problem and the beamforming approach. This link will be used to present, in the following section, a novel approach to sound field measurement and extrapolation using a matrix $\mathbf{L}$ derived from a classical delay-and-sum operation, such as for classical beamforming. The interpretation of this method on the basis of the GSVD will then be developed.
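Since common numerical libraries do not always expose a GSVD routine, one practical route (a sketch under the assumption that $\mathbf{L}$ is square and invertible, as is the diagonal matrix used in the next section) is the standard-form transformation described by Hansen [31]: substituting $\bar{\mathbf{q}} = \mathbf{L}\mathbf{q}$ turns Eq. (19) into a basic Tikhonov problem for $\bar{\mathbf{G}} = \mathbf{G}\mathbf{L}^{-1}$, whose ordinary singular values coincide, up to ordering, with the generalized singular values $\gamma_i$ of the pair ($\mathbf{G}$, $\mathbf{L}$).

```python
import numpy as np

def generalized_tikhonov(G, p_hat, Lmat, lam):
    """Sketch: solve min ||G q - p_hat||^2 + lam^2 ||Lmat q||^2 for a square,
    invertible Lmat via the standard-form substitution q_bar = Lmat q [31]."""
    Linv = np.linalg.inv(Lmat)
    G_bar = G @ Linv                                   # standard-form operator
    U, s, Vh = np.linalg.svd(G_bar, full_matrices=False)
    f = s**2 / (s**2 + lam**2)                         # filter factors, Eq. (39)
    q_bar = Vh.conj().T @ (f * (U.conj().T @ p_hat) / s)
    # Back-substitute q = Lmat^{-1} q_bar; s are the generalized singular
    # values of the pair (G, Lmat), up to ordering.
    return Linv @ q_bar, s
```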

    5.4. Beamforming regularization matrix for the inverse problem

The inverse problem with Tikhonov regularization is formulated through Eq. (19). Although it is possible to write down the optimal solution in terms of the GSVD, it is easier to first show that, for a discrete smoothing norm in the form of Eq. (37), the optimal solution is given by
$$\mathbf{q}_{\mathrm{reg}}(\lambda) = \mathbf{G}^{\#}\hat{\mathbf{p}} = \left(\mathbf{G}^H\mathbf{G} + \lambda^2\mathbf{L}^H\mathbf{L}\right)^{-1}\mathbf{G}^H\hat{\mathbf{p}}, \quad (40)$$
where $\mathbf{G}^{\#}$ is the regularized inverse of matrix $\mathbf{G}$ [8,31]. This result is obtained by setting the derivative of the quadratic cost function to zero (see Eq. (19)).

One can interpret this optimal solution in the framework of beamforming. Indeed, a part of this inverse problem solution corresponds to a beamforming delay-and-sum operation [5,9,59]. It is possible to write the simple delay-and-sum spatial responses [59] $\mathbf{Q}_{BF} \in \mathbb{C}^{L}$ using
$$\mathbf{Q}_{BF} = \mathbf{G}^H\hat{\mathbf{p}}, \quad (41)$$
or, for the $l$-th listening direction (or point),
$$Q_{BF}(l) = \mathbf{g}_l^H\hat{\mathbf{p}}, \quad (42)$$
where the column $\mathbf{g}_l$ of matrix $\mathbf{G}$ (as introduced in Eq. (14)) exactly corresponds to a classical steering vector used for focused (point source) or non-focused (plane wave) beamforming [59]. Indeed, as reported in the beamforming literature, the steering vector is the evaluation of the Green function for one listening direction (or point) at the entire microphone array. This exactly corresponds to the matrix $\mathbf{G}$ definition presented in this paper.

The squared magnitude of the spatial response is known as the beamformer output [59]
$$B(l) = Q_{BF}^*(l)\,Q_{BF}(l). \quad (43)$$
In the inverse problem solution equation (40), the premultiplication of the measured pressure by $\mathbf{G}^H$ involves a time alignment and summation of the $M$ pressure signals for a set of $L$ listening points, just like classical delay-and-sum beamforming. The only difference with the classical beamforming notation stems from the normalized steering vector (often known as the weight vector), which is not present in the previous equation. However, for the purpose of the presentation, we will only focus on the time alignment by delay-and-sum operations (Eq. (41)), which is at the heart of basic beamforming principles. Note that for the case of plane wave sources, the previous equation exactly corresponds to a non-focused delay-and-sum operation as found in basic beamforming.
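For reference, the delay-and-sum quantities of Eqs. (41)–(43) reduce to two matrix operations; the sketch below assumes $\mathbf{G}$ and $\hat{\mathbf{p}}$ are available and omits any steering-vector normalization, as in the presentation above.

```python
import numpy as np

def delay_and_sum(G, p_hat):
    """Spatial response Q_BF of Eq. (41) and beamformer output B of Eq. (43)."""
    Q_BF = G.conj().T @ p_hat     # time alignment and summation per direction
    B = np.abs(Q_BF)**2           # squared magnitude of the spatial response
    return Q_BF, B
```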

Comparing Eqs. (40) and (41), one notes that this delay-and-sum operation corresponds to the numerator of the inverse problem solution. Moreover, each column or row of the first part of the denominator ($\mathbf{G}^H\mathbf{G}$) corresponds to the classical delay-and-sum spatial responses (Eq. (41)) obtained for a source at $\mathbf{y}_l$ for all the $L$ listening points or directions. Put simply, each element $\mathbf{g}_i^H\mathbf{g}_j$ of $\mathbf{G}^H\mathbf{G}$ corresponds to the delay-and-sum beamformer output for a listening point $\mathbf{y}_i$ and a source at $\mathbf{y}_j$.

One then concludes that the inverse problem solution equation (40), through the division of the beamformer output $\mathbf{G}^H\hat{\mathbf{p}}$ by the spatial response matrix $\mathbf{G}^H\mathbf{G}$ (with $\lambda = 0$ for now), simply detects the classical beamforming patterns that might appear in the beamformer output $\mathbf{Q}_{BF}$ and replaces them with unitary impulses corresponding to the detected sound source positions or directions (for the noise-free scenario). Although the inverse solution can achieve that goal, depending on the condition number and singular value decay rate of $\mathbf{G}$, it can be highly sensitive to noise. Hence the need for regularization. However, in the context of sound field extrapolation, it would be interesting to use the a priori information from the beamforming-like signal $\mathbf{Q}_{BF}$ to properly regularize the inverse problem with a discrete smoothing norm matrix $\mathbf{L}$ derived from a priori beamforming results. To introduce this approach, a simple delay-and-sum beamformer is used for illustration purposes and for simplicity's sake.

To achieve and illustrate such a goal, it is possible to introduce the following weighting matrix $\mathbf{L}$:
$$\mathbf{L} = \mathrm{diag}\!\left(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty\right)^{-1} \in \mathbb{R}^{L\times L}. \quad (44)$$

The corresponding minimization task, from Eq. (35), is
$$\mathbf{q}_{BF} = \arg\min_{\mathbf{q}}\left\{\mathbf{e}^H\mathbf{e} + \lambda^2\left[\mathrm{diag}\!\left(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty\right)^{-1}\mathbf{q}\right]^H\mathrm{diag}\!\left(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty\right)^{-1}\mathbf{q}\right\}, \quad (45)$$
with $\mathbf{e} = \hat{\mathbf{p}} - \mathbf{G}\mathbf{q} \in \mathbb{C}^M$ and where $\mathrm{diag}(\mathbf{a})$ indicates that the vector $\mathbf{a} \in \mathbb{C}^L$ is mapped onto the main diagonal of an $L\times L$ matrix. Note that the beamforming-like signal $|\mathbf{Q}_{BF}|$ has been normalized by its infinity norm $\|\mathbf{Q}_{BF}\|_\infty$ [55] to ensure that the regularization is normalized in terms of beamformer signal level. This formulation is equivalent to Eq. (19), where the discrete smoothing norm matrix $\mathbf{L}$ (see Eq. (37)) is simply the inverse of the a priori beamforming output absolute value normalized by the beamformer maximum value. Therefore, the inverse solution with such a regularization matrix favors source positions or directions for which classical beamforming yields a large output. Note that the use of a regularization matrix different from $\mathbf{I}$ that favors some angular directions for the derivation of panning functions for surround sound reproduction was reported by Poletti [58]. The square diagonal matrix $\mathrm{diag}(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty)^{-1}$ will be called the beamforming regularization matrix. It is important to note that this regularization approach involves a data-dependent weighting matrix $\mathbf{L}$ in the discrete smoothing norm of the regularization: this differentiates the method from most of the classical regularization methods, which are based on a fixed weighting matrix. Moreover, although in this development a simple delay-and-sum beamforming algorithm is used to derive a data-dependent weighting matrix in the discrete smoothing norm, one should note that it is possible to apply the proposed method with any other beamforming or acoustic imaging method. This could be the topic of further research.

The inverse problem solution that minimizes the above cost function is
$$\mathbf{q}_{BF} = \left[\mathbf{G}^H\mathbf{G} + \lambda^2\left(\mathrm{diag}\!\left(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty\right)^{-1}\right)^{2}\right]^{-1}\mathbf{G}^H\hat{\mathbf{p}}, \quad (46)$$
where $(\cdot)^2$ applied to a diagonal matrix indicates that each of the main diagonal elements is squared. This type of inverse problem solution is also written in another often-encountered and relevant format [31]
$$\mathbf{q}_{BF} = \left[\mathrm{diag}\!\left(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty\right)^{2}\mathbf{G}^H\mathbf{G} + \lambda^2\mathbf{I}\right]^{-1}\mathrm{diag}\!\left(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty\right)^{2}\mathbf{G}^H\hat{\mathbf{p}}. \quad (47)$$
As mentioned earlier, since $\mathbf{L} \neq \mathbf{I}$, one can rely on the GSVD of the matrix pair $\mathbf{G}$ and $\mathbf{L} = \mathrm{diag}(|\mathbf{Q}_{BF}|/\|\mathbf{Q}_{BF}\|_\infty)^{-1}$ to analyze the problem.

For the studied case with $M \leq L$, the GSVD of this matrix pair is given by [31,55]
$$\mathbf{G} = \mathbf{U}\mathbf{C}\mathbf{Z}^{-1}, \quad \mathbf{L} = \mathbf{V}\mathbf{M}\mathbf{Z}^{-1}, \quad (48)$$
with $\mathbf{U} \in \mathbb{C}^{M\times M}$, $\mathbf{V} \in \mathbb{C}^{L\times L}$, $\mathbf{C} \in \mathbb{R}^{M\times L}$, $\mathbf{M} \in \mathbb{R}^{L\times L}$ and $\mathbf{Z} \in \mathbb{C}^{L\times L}$. The columns of $\mathbf{U}$ and $\mathbf{V}$ are orthonormal ($\mathbf{U}^H\mathbf{U} = \mathbf{I}$ and $\mathbf{V}^H\mathbf{V} = \mathbf{I}$) and $\mathbf{Z}$ is non-singular. The columns of $\mathbf{U}$, $\mathbf{V}$ and $\mathbf{Z}$ ($\mathbf{u}_i$, $\mathbf{v}_i$ and $\mathbf{z}_i$, respectively) form a new set of singular vectors that are used as independent basis vectors. The columns of $\mathbf{U}$ are used as basis vectors for the acoustic pressure $\hat{\mathbf{p}}$ while the columns of $\mathbf{Z}$ are used as basis vectors for the source distribution $\mathbf{q}$. Note that $\mathbf{U}$ and $\mathbf{V}$ are not equal to those found from the standard SVD. Matrices $\mathbf{C}$ and $\mathbf{M}$ have the values $c_i$ and $m_i$ stored in increasing order on their main diagonals. The generalized singular values $\gamma_i$ are given by
$$\gamma_i = c_i/m_i. \quad (49)$$
In our very specific case, $\mathbf{L}$ is a diagonal matrix so that the nullspace of $\mathbf{L}$ is $\{\mathbf{0}\}$, which simplifies the development. According to the GSVD, one can rewrite the regularized solution with these new basis vectors
$$\mathbf{q}_{BF} = \sum_{i=1}^{M} f_i\,\frac{\mathbf{u}_i^H\hat{\mathbf{p}}}{c_i}\,\mathbf{z}_i. \quad (50)$$
The generalized singular value filter factors $f_i$ are given by Eq. (39). Note that while the overall structure of Eq. (50) is very similar to that of Eq. (32), the basis vectors $\mathbf{z}_i$ for the regularized solution are altered by the regularization matrix $\mathbf{L}$. This is an extremely important and powerful feature of Tikhonov regularization with a discrete smoothing norm $\mathbf{L} \neq \mathbf{I}$. Indeed, as mentioned by Hansen [31], although the SVD gives the optimal basis vectors $\mathbf{u}_i$ and $\mathbf{v}_i$ for $\mathbf{G}$, it does not necessarily provide the optimal singular vectors from which the regularized solution should be built as a linear combination. Examples of basis vectors $\mathbf{z}_i$ obtained for the beamforming regularization matrix and for the identity regularization matrix will be given in Section 6 to illustrate that behavior. They will also illustrate the advantages of the method based on the beamforming regularization matrix.
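A minimal sketch of the proposed approach, written in the form of Eq. (47) (which avoids inverting the diagonal beamforming matrix when some entries of $|\mathbf{Q}_{BF}|$ are small), is given below. It assumes $\mathbf{G}$ and $\hat{\mathbf{p}}$ are given and uses the simple delay-and-sum map of Eq. (41) as the a priori data; it is an illustration of the method, not the authors' implementation.

```python
import numpy as np

def beamforming_regularized_solution(G, p_hat, lam):
    """Sketch of the Tikhonov solution with the data-dependent beamforming
    regularization matrix, written in the form of Eq. (47)."""
    Q_BF = G.conj().T @ p_hat                          # a priori map, Eq. (41)
    w = np.abs(Q_BF) / np.max(np.abs(Q_BF))            # |Q_BF| / ||Q_BF||_inf
    W2 = np.diag(w**2)                                 # squared diagonal weights
    A = W2 @ (G.conj().T @ G) + lam**2 * np.eye(G.shape[1])
    return np.linalg.solve(A, W2 @ (G.conj().T @ p_hat))
```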

    6. Microphone array configurations and numerical examples

The three microphone arrays shown in Fig. 3 are compared. For configurations (a) and (b), the microphone spacing along $x_1$ and $x_2$ is 0.11 m. The spatial aliasing frequency is then approximately 1500 Hz. The array side lengths are 0.99 m for these 100-microphone arrays. Specific details of each of the configurations are given in the sequel. For the numerical examples reported in this paper, a discrete set of plane waves is used as the solution basis for the inverse problem (see Eqs. (14) and (16)). Despite the fact that a propagating plane wave distribution cannot describe every sound pressure field (such as that of a sound source very close to the array), this does not compromise the generality of the analytical theory detailed above. The direction cosines of the spherical distribution of 392 plane waves are shown in Fig. 4. For the two-dimensional array (configuration (a)), only the North hemisphere of the plane wave distribution is used (for a total of 196 plane waves). The measurement-data-independent analysis tools presented in Section 4 are first used to compare the inherent limitations of these three arrays.
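The sketch below reproduces the flavor of this comparison for configurations (a) and (b), as discussed in Sections 6.1 and 6.2: it follows the 0.11 m spacing and the 100-microphone count given above, but the random layer assignment, the 392-direction grid construction (a simple azimuth/elevation lattice here) and the 589 Hz test frequency are illustrative assumptions, not the exact setups of Figs. 3–5.

```python
import numpy as np

# Hedged sketch of the comparison of Section 6.1 for configurations (a) and (b).
# The 0.11 m spacing and 100 microphones follow the text; the layer assignment,
# the direction lattice and the 589 Hz frequency are illustrative assumptions.
rng = np.random.default_rng(1)
c, freq = 343.0, 589.0
k = 2 * np.pi * freq / c

xg = np.arange(10) * 0.11
X1, X2 = np.meshgrid(xg, xg, indexing="ij")
ura = np.column_stack([X1.ravel(), X2.ravel(), np.zeros(100)])       # config (a)
two_layer = ura.copy()
two_layer[:, 2] = 0.11 * rng.integers(0, 2, size=100)                # config (b)

# Illustrative 392-direction azimuth/elevation lattice (poles excluded).
theta = np.linspace(0.0, 2 * np.pi, 28, endpoint=False)
phi = np.linspace(-np.pi / 2, np.pi / 2, 16)[1:-1]
TH, PH = [a.ravel() for a in np.meshgrid(theta, phi)]
kvec = k * np.column_stack([np.cos(PH) * np.cos(TH),
                            np.cos(PH) * np.sin(TH),
                            np.sin(PH)])
hemi = PH >= 0                                  # upper hemisphere, 196 directions

for name, mics, kv in [("URA (a), hemispherical grid", ura, kvec[hemi]),
                       ("two-layer URA (b), spherical grid", two_layer, kvec)]:
    G = np.exp(-1j * mics @ kv.T)
    s = np.linalg.svd(G, compute_uv=False)
    print(f"{name}: sigma_1/sigma_M = {s[0] / s[-1]:.2e}")
```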

    6.1. Singular value spectrum

As mentioned in Section 4.1, the singular value decay rate is a simple yet powerful analysis tool to compare various array configurations exposed to a predetermined and fixed plane wave distribution $(\theta_l,\varphi_l)$. The singular value spectra are shown in Fig. 5 as a function of frequency for the three configurations shown in Fig. 3, with some additional randomization of the plane wave propagation directions $(\theta_l,\varphi_l)$ when mentioned.

As shown in Fig. 5(a) for the URA with a circular distribution of 196 plane waves ($\theta_l \in \{0,\ldots,2\pi\}$ and $\varphi_l = 0$), the singular values decay rapidly towards the numerical accuracy limit. This steep decay rate suggests that the kernel for such a microphone array and a circular plane wave distribution is very smooth and has the potential to amplify measurement noise once inverted. Hence, this entire two-dimensional arrangement of a planar microphone array and circularly distributed plane waves should be rejected outright. However, as shown in Fig. 5(b), for the same array configuration but with a hemispherical impinging set of plane waves, the singular values decay much more gradually. In this very specific

Fig. 3. Three studied microphone arrays: (a) uniform rectangular array (URA), (b) URA with random layer distribution along $x_3$ and (c) same as (b) with random variation of the microphone rows along $x_2$. The microphones are shown as black dots. The measurement grid projected onto the $x_1$–$x_2$ plane is shown as a dashed line. The microphone $x_3$ coordinates are shown as thick black lines.

Fig. 4. Direction cosines of the spherical distribution of 392 plane waves. For the hemispherical case, the upper hemisphere with 196 plane waves is used. Each direction $(\theta_l,\varphi_l)$ is shown by a small dot on the grid.


case, the hemispherical distribution is selected to avoid the inherent bottom-up ambiguity of a single-layer sensor array such as the one shown in Fig. 3(a).

For practical applications, one is looking for a complete three-dimensional mapping of the measured sound field. For this purpose, the array configuration shown in Fig. 3(b) is a modification of the URA (Fig. 3(a)) with an additional random assignment of each microphone to a top or bottom layer. The top and bottom layers are separated by 0.11 m along $x_3$. The singular value decay rate, shown in Fig. 5(c), is lower, suggesting that this configuration is much better than the first in terms of inversion and possible noise amplification. This was to be expected since a three-dimensional distribution of plane waves will involve much more variation at the microphone array and hence minimize the redundancy between the columns of $\mathbf{G}$. An additional random variation of the plane wave distribution $(\theta_l,\varphi_l)$ (the maximal amplitude of the random variation is set to half the angular separation of two original plane waves) does not seem to provide a slower decaying singular value spectrum, as shown in Fig. 5(d). The microphone array configuration (c) is the same as configuration (b) with a supplementary random variation of the microphone row positions along $x_2$. The singular value spectra for the third microphone array configuration are shown in Fig. 5(e) and (f). According to these singular value spectra, the difference between configurations (b) and (c) is marginal. Indeed, at least according to the singular value spectrum, configuration (b) seems the most efficient yet simple to implement.

Fig. 5. Singular value spectra $\sigma_i$ as a function of frequency for the three configurations shown in Fig. 3. (a) Configuration (a) with a circular distribution of plane waves over $2\pi$ rad ($\varphi_l = 0$). (b) Configuration (a) with a hemispherical distribution of plane waves. (c) Configuration (b) with a spherical distribution of plane waves. (d) Configuration (b) with a spherical distribution of plane waves with a supplementary random variation on $\theta_l$ and $\varphi_l$. (e) Configuration (c) with a spherical distribution of plane waves. (f) Configuration (c) with a spherical distribution of plane waves with a supplementary random variation on $\theta_l$ and $\varphi_l$.


    6.2. Matrix rank and condition number

Fig. 6 shows the condition number $\kappa(\mathbf{G})$ of matrix $\mathbf{G}$ (see Section 4.2) as a function of frequency for the six cases reported in Fig. 5. The condition numbers correlate with the results obtained from the singular value spectrum: configurations (b) and (c) will be much less affected by measurement noise since their condition numbers are much lower (see Eq. (29)). The condition number somewhat summarizes the range of the singular value spectrum over a frequency range. However, it does not provide the more insightful information on the singular value decay rate.

Matrix ranks are shown in Fig. 7. These results suggest that arrangement (a) with a circular plane wave distribution is rank-deficient at all frequencies, i.e. $\mathrm{rank}(\mathbf{G}) < \min\{M,L\}$. Indeed, it would be impossible to resolve more than 20 and 65 plane waves at the lowest and highest frequencies, respectively. Above about 125 Hz, configurations (b) and (c) are able to resolve 100 plane waves, which is the best one can obtain from a 100-microphone array. For arrangements (b) and (c), the system is not able to resolve 100 plane waves in the low frequency range. This is expected since the array overall dimension becomes small compared to the acoustical wavelength in this frequency range.

    6.3. Picard condition

Besides the singular value spectrum and the condition number, which illustrate the inherent sensitivity of the microphone array and plane wave distribution to measurement noise, one must look at the Picard condition, which shows how a measured array signal $\hat{\mathbf{p}}$ is coupled to the inverse problem. The Picard condition is shown in Fig. 8 for $\hat{\mathbf{p}}$ given by a unitary incident plane wave at 589 Hz coming from $\theta = 0.65\pi$ and $\varphi = 0$. For the noise-free cases, the Picard condition is not satisfied for configuration (a): the coefficients $|\mathbf{u}_i^H\hat{\mathbf{p}}|$ decay more slowly than the singular values (although it is difficult to judge from the figure). For the other

Fig. 6. Condition number $\kappa(\mathbf{G})$ as a function of frequency for the three configurations shown in Fig. 3 and for the six cases reported in Fig. 5.


Fig. 7. Matrix rank as a function of frequency for the three configurations shown in Fig. 3 and for the six cases reported in Fig. 5.


For the other configurations, the noise-free situation seems to involve a similar decay rate of the coefficients |u_i^H p| and of the singular values: the Picard condition should then be respected. Hence, on the basis of a visual inspection of the Picard condition (at least at that frequency), configurations (b) and (c) lead to a viable inverse solution, contrary to configuration (a). When spatially uncorrelated noise is added to the array signal, the coefficients |u_i^H p| level off at τ_p, which is equal to the noise level. Clearly, configuration (b) offers the highest index i_τp from which the coefficients |u_i^H p| level off (i_τp = 90). This means that configuration (b) involves more coefficients |u_i^H p| that are not affected by noise, making it the most attractive.

Evaluation of the Picard condition through the moving average r_i (Eq. (31)) is shown in Fig. 9 for the same cases. We recall that the Picard condition is satisfied up to the index i at which r_i stops decreasing or starts to increase. The previous observations hold. The moving average r_i also shows the index i from which the average ratio of the coefficients |u_i^H p| to the singular values σ_i starts to increase, hence highlighting the critical i beyond which the inverse problem mainly amplifies measurement noise. This facilitates the identification of the point from which regularization should be effective, i.e. the point from which the filter factor must attenuate the SVD components (f_i < 1).

The Picard condition is very effective for identifying the regularization parameter of the classical Tikhonov regularization: for a known or expected noise level τ_p and signal level, one can predict the singular value index i_τp from which the measurement signal coefficients level off. Setting the regularization parameter equal to the singular value corresponding to i_τp ensures that all the singular components for which i > i_τp are filtered out by a filter factor f_i < 1. For the TSVD regularization (Eq. (33)), one then simply uses I = i_τp. As an example, for a known noise level τ_p = 0.001, the coefficients start to level off at i_τp = 62 for the case reported in Fig. 8(b). One would then select the regularization parameter λ equal to the corresponding singular value, λ = σ_62. Note that the index i_τp from which the measurement signal coefficients level off depends on both the noise and the signal levels.
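The following sketch (an assumption-laden illustration, not the authors' code) implements this selection rule: given the SVD of G, a measured pressure vector p and an assumed noise level τ_p, it locates the level-off index i_τp of the Picard plot and returns the corresponding TSVD truncation order I and Tikhonov parameter λ = σ_{i_τp}. The simple thresholding used to detect the level-off point and the moving-average helper are assumptions; the exact form of the moving average r_i is given by Eq. (31).

```python
import numpy as np

def picard_parameters(G, p, tau_p):
    """Locate the level-off index of |u_i^H p| and derive I (TSVD) and lambda (Tikhonov)."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    beta = np.abs(U.conj().T @ p)                 # Picard coefficients |u_i^H p|
    above = np.nonzero(beta > tau_p)[0]           # components still dominated by the signal
    i_tau = int(above[-1]) + 1 if above.size else 0
    I_tsvd = i_tau                                # TSVD truncation order I = i_tau
    lam = s[i_tau - 1] if i_tau > 0 else s[0]     # Tikhonov parameter lambda = sigma_{i_tau}
    return beta, s, i_tau, I_tsvd, lam

def picard_moving_average(beta, s, di=4):
    """Simple moving average of |u_i^H p| / sigma_i over a window of width di
    (an assumed stand-in for the exact r_i of Eq. (31))."""
    ratio = beta / s
    return np.convolve(ratio, np.ones(di) / di, mode="valid")
```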


Fig. 8. Verification of the Picard condition and noise effect at 589.2308 Hz for the six cases reported in Fig. 5: singular values σ_i, |u_i^H p| coefficients for a unitary impinging plane wave (θ = 0.65π, φ = 0) without noise, and |u_i^H p| coefficients for the same impinging plane wave with noise (0.1 percent). The noise level τ_p is shown by a horizontal dashed line and the index i_τp from which the coefficients |u_i^H p| level off is shown by a vertical dashed line. The level-off indices indicated in the panels are i_τp = 28, 62, 90, 90, 87 and 87.


As mentioned earlier, the identification of the regularization parameter through the verification of the Picard condition is closely related to the L-curve of the problem under consideration (Section 4.3). To illustrate that relation, several TSVD and Tikhonov regularization L-curves are shown in Fig. 10 for the reported cases. Clearly, the points on the L-curves that correspond to the regularization parameters selected on the basis of the Picard condition approach the corners of the L-curves. Since the corner (the region of maximum curvature) of the L-curve is often used as a selection criterion for the regularization parameter [31], these examples illustrate that the Picard condition and the resolution limit are a possible replacement for the L-curve corner selection criterion when one has access to an estimate of the measurement noise level. This could be the subject of further verification.
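For completeness, the sketch below shows one way (an assumption, not the authors' code) to sample the Tikhonov L-curve from the SVD of G: for each candidate λ, the regularized solution is built with the filter factors f_i = σ_i²/(σ_i² + λ²), and the pair (||Gq − p||_2, ||q||_2) gives one point of the curve; the corner can then be located, e.g. by maximum curvature.

```python
import numpy as np

def tikhonov_l_curve(G, p, lambdas):
    """Return residual norms ||G q - p|| and solution norms ||q|| along the L-curve."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    beta = U.conj().T @ p
    res_norms, sol_norms = [], []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors f_i
        q = Vh.conj().T @ (f * beta / s)      # q_lambda = sum_i f_i (u_i^H p / sigma_i) v_i
        res_norms.append(np.linalg.norm(G @ q - p))
        sol_norms.append(np.linalg.norm(q))
    return np.array(res_norms), np.array(sol_norms)

# Typical usage: sweep lambda over several decades and plot on log-log axes.
lambdas = np.logspace(-4, 1, 60)
```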

    6.4. Numerical examples with regularization

Since the previous numerical examples suggest that microphone array configuration (b) (see Fig. 3) is the most attractive, numerical examples of sound field extrapolation are given only for that configuration.


Fig. 9. Verification of the Picard condition for the six cases (with added noise) reported in Fig. 5 using r_i as defined in Eq. (31) with Δi = 4 at 589.2308 Hz.


Fig. 10. TSVD and Tikhonov regularization L-curves (solution norm ||q_opt||_2 versus error norm ||e||_2) for the first three cases (with added noise) reported in Fig. 5. (a) Configuration (a) with a circular distribution of plane waves over 2π rad (φ_l = 0). (b) Configuration (a) with a hemispherical distribution of plane waves. (c) Configuration (b) with a spherical distribution of plane waves. The discrete TSVD L-curve is shown as small markers and the continuous Tikhonov L-curve is shown as a thin black line. The directions of decreasing λ are indicated, along with the oversmoothing and undersmoothing regions. For the TSVD regularization, I increases in the direction of decreasing λ. The points of the L-curves that correspond to the regularization parameters selected through the Picard condition are shown as circles (Tikhonov) and squares (TSVD).


The extrapolated sound fields at 589 Hz are shown in Figs. 11-15 for a unitary amplitude plane wave coming from θ = 0.65π and φ = 0 with 0.1 percent spatially incoherent measurement noise (60 dB signal-to-noise ratio). Fig. 11 shows the extrapolated sound field in the horizontal plane x_1-x_2 for the case without any regularization (Eq. (27)). For the remainder of this paper, the black (10 percent) and white (0.1 percent) lines shown on the sound field extrapolation results are isocontour lines of the local normalized quadratic extrapolation error (based on the difference between the original and the extrapolated sound fields). These lines help in the evaluation of the spatial extent of the effective extrapolation area. Clearly, the sound field extrapolation is not accurate, except in the immediate vicinity of the array.

The extrapolated sound field with TSVD regularization and I = 90 (see Eq. (33); we recall that rank(G) = 100 for the present case) is shown in Fig. 12. The extrapolation is much more progressive (it does diverge with increasing distance from the microphone array) and the valid extrapolation region (defined by the relative quadratic local error between the original and the extrapolated sound fields) is larger than for the case without any regularization. The resulting plane wave solution q_90 is shown in Fig. 13, where one clearly localizes the impinging plane wave from θ = 0.65π and φ = 0. In this figure, the spherical distribution of plane waves is directly mapped onto a Cartesian plot where the azimuth angles θ_l are shown on the abscissa and the elevation angles φ_l on the ordinate. The same representation is used in the subsequent plots of the plane wave solutions q. Using basic Tikhonov regularization (L = I) for the same case, a slightly smoother extrapolated sound field is obtained, as shown in Figs. 14 and 15. Here the penalization parameter λ was set to 0.0189, which is equal to the singular value σ_i at i_τp = 90 (see Fig. 8). These two examples show that sound field extrapolation using a microphone array and inverse problem theory is effective for sound field capture in an extended area around the microphone array for subsequent sound field reproduction.
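A minimal sketch of the two regularized solutions used above, expressed through the SVD filter factors, is given below; the extrapolation step simply multiplies the solution by a second propagation matrix evaluated at the field points. Variable names such as G_field are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def tsvd_solution(G, p, I):
    """TSVD: keep the first I singular components (filter factors 1, then 0)."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    beta = U.conj().T @ p
    return Vh[:I].conj().T @ (beta[:I] / s[:I])

def tikhonov_solution(G, p, lam):
    """Basic Tikhonov (L = I): filter factors f_i = sigma_i^2 / (sigma_i^2 + lambda^2)."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    beta = U.conj().T @ p
    f = s**2 / (s**2 + lam**2)
    return Vh.conj().T @ (f * beta / s)

# Extrapolation on a grid of field points: p_extrap = G_field @ q, where
# G_field contains the same plane-wave propagation terms evaluated at the
# extrapolation grid (assumed to be assembled elsewhere).
```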

Fig. 11. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz with added noise and without regularization. Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots.

Fig. 12. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz with added noise using TSVD with I = i_τp = 90 (see Eq. (33) and Fig. 8(c)). Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots.


    6.5. Numerical examples with beamforming regularization

This section exemplifies the beamforming regularization method. The extrapolated sound field, for a case similar to the one reported above, is shown in Fig. 16 with λ set to 0.01 in Eq. (46) and with 0.1 percent of added spatially incoherent measurement noise.

Fig. 14. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz using classical Tikhonov regularization with λ = 0.0189. Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots.


Fig. 15. Magnitude of the plane wave distribution |q_λ(θ_l, φ_l)| using classical Tikhonov regularization for the extrapolated sound field shown in Fig. 14. The impinging plane wave incident angle is shown as a white and black circle.


Fig. 13. Magnitude of the plane wave distribution |q_90(θ_l, φ_l)| using TSVD for the extrapolated sound field shown in Fig. 12. The impinging plane wave incident angle is shown as a white and black circle.



Fig. 17. Magnitude of the plane wave distribution |q_BF(θ_l, φ_l)| using the beamforming regularization matrix for the extrapolated sound field shown in Fig. 16. The impinging plane wave incident angle is shown as a white and black circle.

Fig. 18. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz in the x_1-x_2 plane using Tikhonov regularization (λ = 0.0189) for a monopole source located at x = (1, 2, 1) m. Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots. The spherical source is shown as a white and black circle.

Fig. 16. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz using the beamforming regularization matrix with λ = 0.01. Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots.


Fig. 20. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz in the x_1-x_2 plane using beamforming regularization (λ = 0.01) for a monopole source located at x = (1, 2, 1) m. Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots. The spherical source is shown as a white and black circle.

Fig. 21. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz in the x_1-x_3 plane using beamforming regularization (λ = 0.01) for a monopole source located at x = (1, 2, 1) m. Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots. The spherical source is shown as a white and black circle.

Fig. 19. Real part (a) and imaginary part (b) of the sound field extrapolation at 589.2308 Hz in the x_1-x_3 plane using Tikhonov regularization (λ = 0.0189) for a monopole source located at x = (1, 2, 1) m. Isocontours of the local normalized quadratic extrapolation error are given for 10 percent (black line) and 0.1 percent (white line). The microphones are shown as black dots. The spherical source is shown as a white and black circle.



    Fig. 22. Real (a) and imaginary (b) parts of the basis vector v1 (largest singular value) for classical Tikhonov regularization at 589.2308 Hz.


    Fig. 23. Real (a) and imaginary (b) parts of the basis vector v2 (second largest singular value) for classical Tikhonov regularization at 589.2308 Hz.


Clearly, the extrapolated sound field is closer to the original sound field than with the TSVD and basic Tikhonov regularizations exemplified in Section 6.4. Among the characteristics that distinguish the sound field extrapolation results of the beamforming regularization method, one finds: (1) a larger extrapolation region, (2) a more progressive transition from the effective extrapolation region to the far field of the array and (3) a well developed sound field in the propagation direction. The smoothness of the transition between the effective extrapolation region centered on the array and the extrapolated sound field at larger distances from the array is a desirable feature for sound field reproduction and audio applications. Indeed, a low-level sound field outside the listening area is preferable to the loud and spatially varying sound fields observed in Figs. 11, 12 and 14. This aspect makes the inverse problem with a beamforming regularization matrix even more attractive. Note that these observations result from a more precise solution q_BF, as shown in Fig. 17. By comparing Figs. 13, 15 and 17, one sees that the inverse problem solution with the beamforming regularization matrix provides a much more spatially accurate localization of the incoming sound wave.

These observations show that inverse modeling with a beamforming regularization matrix has an interesting potential for sound field extrapolation and sound field capture applications. Moreover, this opens several research avenues, such as the construction of the beamforming regularization matrix from a more precise beamforming algorithm.
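To make the procedure concrete, the sketch below shows one possible numerical route (an assumption, not the paper's exact Eq. (46)): once a smoothing matrix L has been derived from an a priori beamforming map, the regularized solution can be obtained by solving the equivalent augmented least-squares problem min_q ||Gq − p||² + λ²||Lq||², i.e. min_q ||[G; λL]q − [p; 0]||². The diagonal form chosen here for L, which penalizes plane-wave directions where the delay-and-sum output b_l is weak, is only an illustrative assumption.

```python
import numpy as np

def beamforming_smoothing_matrix(b, eps=1e-3):
    """Assumed illustrative form: diagonal L with larger entries where the
    a priori beamformer output |b_l| is small (weakly supported directions)."""
    w = 1.0 / (np.abs(b) + eps)
    return np.diag(w / w.max())

def tikhonov_general_L(G, p, L, lam):
    """Solve min_q ||G q - p||^2 + lam^2 ||L q||^2 via an augmented least squares."""
    A = np.vstack([G, lam * L])                              # stacked system [G; lam L]
    rhs = np.concatenate([p, np.zeros(L.shape[0], dtype=complex)])
    q, *_ = np.linalg.lstsq(A, rhs, rcond=None)              # regularized solution q_BF
    return q
```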

    6.6. Three-dimensional sound field extrapolation

For practical applications, one is interested in a complete three-dimensional description of a measured sound field. Three-dimensional examples (for microphone array configuration (b) with a plane wave source distribution in the inverse problem) are shown in Figs. 18 and 19 for basic Tikhonov regularization and in Figs. 20 and 21 for beamforming regularization, for the sound field p created by a monopole source located at x = (1, 2, 1) m. Clearly, as shown in Figs. 18 and 19, the inverse problem solution with classical Tikhonov regularization (L = I) is able to extrapolate the geometrical spreading loss, the curvature and the three-dimensional features of the spherical sound wave. For the same incident sound field, the inverse problem with the beamforming regularization matrix (see Figs. 20 and 21) again provides (1) a soft transition from the effective extrapolation region to the surrounding regions and (2) a more accurate three-dimensional sound field extrapolation.

In light of these results, the beamforming regularization method appears to be a promising approach for sound field extrapolation using inverse problems.
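For reference, the monopole field used as the target in these three-dimensional examples and the local normalized quadratic error plotted as the 10 percent and 0.1 percent isocontours can be sketched as follows; the Green's function normalization, the time convention and the exact error normalization are assumptions that must match the conventions used to build G.

```python
import numpy as np

def monopole_field(x_field, x_src, k):
    """Free-field monopole pressure at field points x_field (N x 3), source at x_src (3,)."""
    r = np.linalg.norm(x_field - x_src, axis=1)
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

def local_normalized_error(p_extrap, p_ref):
    """Local quadratic extrapolation error, normalized by the reference field
    (assumed definition: |p_extrap - p_ref|^2 / |p_ref|^2)."""
    return np.abs(p_extrap - p_ref) ** 2 / np.abs(p_ref) ** 2
```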


Fig. 24. Real (a) and imaginary (b) parts of z_1 (largest generalized singular value) for the beamforming regularization matrix at 589.2308 Hz. The impinging plane wave incident angle is shown as a white and black circle.


    6.7. Illustration of the differences between basic Tikhonov and beamforming matrix regularizations

In this section, the fundamental differences between classical Tikhonov and beamforming matrix regularizations are illustrated and explained on the basis of the SVD and the GSVD using a numerical example.

As mentioned earlier, the SVD gives the right singular vectors v_i of G. These vectors create an optimal basis for the decomposition of G and also create a basis for the formulation of the inverse problem solution, Eq. (32). For the inverse problem with a discrete smoothing norm different from the identity matrix, such as the beamforming regularization matrix, the GSVD gives a different set of generalized singular vectors z_i which are used to reconstruct the solution, see Eq. (50). The interest of using a matrix L different from the identity matrix comes from its ability to shape the generalized singular vectors z_i. The first two singular vectors v_i (corresponding to the two largest singular values) and the first three generalized singular vectors z_i (corresponding to the three largest generalized singular values) are shown in Figs. 22-26 for configuration (b) with a set of 393 incident plane waves at 589 Hz.

In marked contrast with the singular vectors v_i, the generalized singular vectors z_i obtained with the beamforming regularization matrix are much more spatially compact. This provides an increased spatial resolution for the beamforming regularization right from the largest generalized singular values. For the basic Tikhonov method, one must go to a much higher index i before reaching a singular vector v_i with such a high spatial resolution, and this comes at the cost of an increased noise sensitivity. This is clearly illustrated in Figs. 22-26 and highlights the interesting behavior of the inverse problem with a beamforming regularization matrix for sound field extrapolation and sound source localization. Moreover, we draw the reader's attention to the fact that the generalized singular vectors z_1, z_2 and z_3 seem to correspond to simple directivity patterns (monopole, dipole, etc.) around the source direction. This should be the topic of further investigation for sound source localization and characterization.
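One practical way to compute the generalized singular vectors z_i used in this comparison (an assumption about the numerical route, not necessarily the authors' implementation) is to exploit the fact that they satisfy the generalized eigenproblem G^H G z = γ² L^H L z, where γ = σ/μ are the generalized singular values; the vectors are then recovered, up to scaling, from a generalized eigensolver. Dedicated GSVD routines are numerically preferable for severely ill-conditioned pairs.

```python
import numpy as np
from scipy.linalg import eig

def generalized_singular_vectors(G, L):
    """Generalized singular vectors of the pair (G, L), up to scaling,
    obtained from the pencil (G^H G, L^H L) and sorted by decreasing gamma."""
    A = G.conj().T @ G
    B = L.conj().T @ L
    gamma2, Z = eig(A, B)                       # solves A z = gamma^2 B z
    order = np.argsort(np.abs(gamma2))[::-1]    # largest generalized singular values first
    return np.sqrt(np.abs(gamma2[order])), Z[:, order]
```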

    7. Conclusion

This paper addressed the sound field extrapolation problem for the general application of sound field reproduction. The extrapolation problem was developed on the basis of the well-known inverse problem theory in acoustics. One of the benefits of the inverse problem formulation is that it can easily cope with any array geometry.


Fig. 25. Real (a) and imaginary (b) parts of z_2 (second largest generalized singular value) for the beamforming regularization matrix at 589.2308 Hz. The impinging plane wave incident angle is shown as a white and black circle.


Special care was taken to explain and review the underlying inverse problem behavior from a continuous-space viewpoint. The discrete version of the inverse problem with a fixed set of measurement microphones and sound sources was then presented and analyzed on the basis of classical mathematical tools such as the singular value spectrum, the condition number, the matrix rank and the Picard condition. Numerical examples showed the potential of these tools to evaluate various source and sensor arrangements and to design an efficient sensor array. Two classical direct regularization methods for inverse problems (truncated singular value decomposition and basic Tikhonov regularization) were reviewed and tested for sound field extrapolation. These methods are able to stabilize the inverse problem and they provide a fairly good sound field extrapolation around the microphone array. A new data-dependent regularization method was then introduced on the basis of the beamforming regularization matrix in the discrete smoothing norm; this is the main contribution of this paper. Both theoretical analysis and numerical examples show the advantages and effectiveness of this novel sound field extrapolation method. One of its distinct features is its ability to advantageously modify the vector basis used to build the inverse problem solution. This was highlighted through the use of the generalized singular value decomposition, one of the contributions presented in this paper. As shown through numerical examples, this feature creates a much more localized and accurate vector basis for the construction of the inverse problem solution. Of the three regularization methods investigated, the beamforming regularization matrix provides the best sound field extrapolation.

Although the beamforming regularization matrix method was illustrated with a simple non-focusing delay-and-sum beamformer, one could easily imagine variations of the data-dependent beamforming regularization matrix (focused beamforming, source maps obtained from any other method, etc.): this opens many interesting research avenues and could further improve the sound field extrapolation results. Moreover, the proposed concept of the beamforming regularization matrix could be transposed and evaluated within the context of iterative regularization methods [2,31]. This is left for future research.

On the basis of the inverse problem solution, defined in this paper as a plane wave source distribution, one should be able to progress towards sound field characterization, which seeks the extraction of meta-descriptors of the sound field for subsequent sound environment reproduction. This is the topic of current research.

    Acknowledgments

This work is part of a project involving the Consortium for Research and Innovation in Aerospace in Québec, Bombardier Aéronautique and CAE, supported by a Natural Sciences and Engineering Research Council of Canada grant.


Fig. 26. Real (a) and imaginary (b) parts of z_3 (third largest generalized singular value) for the beamforming regularization matrix at 589.2308 Hz. The impinging plane wave incident angle is shown as a white and black circle.


    References

[1] E.G. Williams, Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography, Academic Press, San Diego, 1999.
[2] E.G. Williams, Continuation of acoustic near-fields, Journal of the Acoustical Society of America 113 (2003) 1273-1281.
[3] S.F. Wu, Methods for reconstructing acoustic quantities based on acoustic pressure measurements, Journal of the Acoustical Society of America 124 (2008) 2680-2697.
[4] J. Billingsley, R. Kinns, The acoustic telescope, Journal of Sound and Vibration 48 (1976) 485-510.
[5] B.D. Van Veen, K.M. Buckley, Beamforming: a versatile approach to spatial filtering, IEEE ASSP Magazine (April) (1988) 4-24.
[6] P.S. Naidu, Sensor Array Signal Processing, CRC Press, Boca Raton, 2001.
[7] G.H. Koopmann, L. Song, J.B. Fahnline, A method for computing acoustic fields based on the principle of wave superposition, Journal of the Acoustical Society of America 86 (1989) 2433-2438.
[8] P.A. Nelson, A review of some inverse problems in acoustics, International Journal of Acoustics and Vibration 6 (2001) 118-134.
[9] H. Teutsch, W. Kellermann, Acoustic source detection and localization based on wavefield decomposition using circular microphone arrays, Journal of the Acoustical Society of America 120 (2006) 2724-2736.
[10] H. Teutsch, Modal Array Signal Processing: Principles and Application of Wavefield Decomposition, Springer, Berlin, 2007.
[11] A.J. Berkhout, D. de Vries, P. Vogel, Acoustic control by wave field synthesis, Journal of the Acoustical Society of America 93 (1993) 2764-2778.
[12] P.-A. Gauthier, A. Berry, Adaptive wave field synthesis with independent radiation mode control for active sound field reproduction: theory, Journal of the Acoustical Society of America 119 (2006) 2721-2737.
[13] P.-A. Gauthier, A. Berry, Adaptive wave field synthesis with independent radiation mode control for active sound field reproduction: signal processing, Journal of the Acoustical Society of America 123 (2008) 2003-2016.
[14] P.-A. Gauthier, A. Berry, Adaptive wave field synthesis for active sound field reproduction: experimental results, Journal of the Acoustical Society of America 123 (2008) 1991-2002.
[15] J. Daniel, R. Nicol, S. Moreau, Further investigations of high order ambisonics and wavefield synthesis for holophonic sound imaging, Convention Paper 5788, Audio Engineering Society 114th Convention, Amsterdam, March 2003.
[16] F. Rumsey, Spatial Audio, Focal Press, Burlington, 2001.
[17] P. Castellini, M. Martarelli, Acoustic beamforming: analysis of uncertainty and metrological performances, Journal of Sound and Vibration 22 (2008) 672-692.
[18] J.J. Christensen, S. Hald, Beamforming, Brüel & Kjær Technical Review, 2004.
[19] R.O. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Transactions on Antennas and Propagation AP-34 (March) (1986) 276-280.
[20] R. Roy, A. Paulraj, T. Kailath, ESPRIT: a subspace rotation approach to estimation of parameters of cisoids in noise, IEEE Transactions on Acoustics, Speech and Signal Processing 34 (1986) 1340-1342.
[21] T.F. Brooks, W.M. Humphreys, A deconvolution approach for the mapping of acoustic sources (DAMAS) determined from phased microphone arrays, Journal of Sound and Vibration 294 (2006) 856-879.
[22] P. Sijtsma, CLEAN based on spatial source coherence, AIAA Paper 2007-3436, 2007.
[23] T. Susuki, Generalized beamforming algorithm resolving coherent/incoherent distributed and multipole sources, 14th AIAA/CEAS Aeroacoustics Conference (29th AIAA Aeroacoustics Conference), No. 2954, Vancouver, British Columbia, Canada, 5-7 May 2008.
[24] E. Sarradj, A fast signal subspace approach for the determination of absolute levels from phased microphone array measurements, Journal of Sound and Vibration 329 (2008) 1553-1569.
[25] P.A.G. Zavala, W. De Roeck, K. Janssens, J.R.F. Arruda, P. Sas, W. Desmet, Generalized inverse beamforming investigation and hybrid estimation, Berlin Beamforming Conference, February 2010.
[26] C. Bouchard, Beamforming with microphone arrays for directional sources, PhD Thesis, University of Ottawa, 2010.
[27] S.H. Yoon, P.A. Nelson, Estimation of acoustic source strength by inverse methods: Part II, experimental investigation of methods for choosing regularization parameters, Journal of Sound and Vibration 233 (2000) 669-705.
[28] Y. Kim, P.A. Nelson, Optimal regularisation for acoustic source reconstruction by inverse methods, Journal of Sound and Vibration 275 (2004) 463-487.
[29] H.G. Choi, A.N. Thite, D.J. Thompson, Comparison of methods for parameter selection in Tikhonov regularization with application to inverse force determination, Journal of Sound and Vibration 304 (2007) 894-917.
[30] Q. Leclère, Acoustic imaging using under-determined inverse approaches: frequency limitations and optimal regularization,