
  • 7/30/2019 Wave Let Book


Introduction to Wavelets


    Preface

    About the Book

    This book represents an attempt to present the most basic concepts of wavelet analysis,

    in addition to the most relevant applications. Compared to the half a dozen or so in-

    troductory books on the subject, this book is designed, with its very detailed treatment,

    to serve the undergraduates and newcomers to the subject, assuming a general calculus

    preparation and what comes with it of matrix computations. The essential subjects

    needed for wavelet analysis, namely, vector spaces and Fourier series and integrals, are

presented in a simple way to suit such a background, or the background of those that have been away for a while from such subjects. It is a challenging task, and this book

    is an attempt at meeting such a challenge.

The author has consulted, primarily, half a dozen basic introductory books: those of Nievergelt [1], Boggess and Narcowich [2], Walker [3], Mix and Olejniczak [4], Walter [5],

    Aboufadel and Schlicker [6], Hubbard [7], Burrus et al [8], and, of course, the standard

    reference of Daubechies [9]. This is besides other books, papers, and tutorials on the

subject, with due thanks and much appreciation for their demonstrating what is important for an introductory book. The present book goes beyond their treatments by adding

    all the details necessary, for an undergraduate in science and engineering or others new

    to the subject, to make the text applicable to using wavelets for their varied purposes.

    It is hoped that this attempt meets the needs of students as well as their instructors to

    have a very readable book on wavelets, even with basically only a calculus background.



    Special Features of this Wavelet Book

Compared to all books on wavelets, this book comes with detailed or partial solutions to almost all the exercises, which are found in the Student's Solution Manual (ISBN: 0967330-0-8) that accompanies the book.

Audience and Suggestions for Course Adoption

The book may be used for a one-semester course, using a selection of sections from Chapters 1-6 and covering Chapters 9-12. The whole book makes a good one-year course.

    Abdul J. Jerri

    Potsdam, New York

    April 2011


1 Introduction

    This text, written on the relatively new subject of wavelets, is an attempt to introduce

    wavelets to undergraduate students, newcomers to the field, and other interested read-

    ers. The goal is to do this with basic calculus tools. This is not an easy task since

    the topic of wavelets requires a higher level of mathematical background, which is not

covered in basic calculus courses. It is clear from almost all books written on this subject to date, even those claiming to be aimed at an undergraduate level, that a minimum

    of such additional requirements would be basic elements of linear algebra and Fourier

    analysis. In addition, for the practical implementation of wavelet decomposition and

    signal reconstruction a rudimentary exposure to signal analysis, namely, filtering, is

vital for completing the subject of wavelets as a practical, fast, and efficient algorithm. The latter subject of filtering signals, fortunately, can be covered within the applications

    of the Fourier transform, the subject of Chapter 6.

Doing justice to the necessary topics would move this book to the level of the "introductory" texts that preceded it, which is not within our aim of the simplest book for the

    undergraduate student in engineering and science, as well as the interested practitioners.

So, remaining true to our purpose, we decided to "formally" introduce the essentials

    of the above necessary requirements in the simplest and most illustrative way. This

    approach, we hope, will make it accessible to the first-time reader of the new topic of

    wavelets.

    In doing so, we will begin with the relevant basic ideas found in calculus courses

    and build on them for introducing the above two basic requirements of Fourier analysis

    and linear algebra. This may mean that our presentation will be necessarily elementary

    and a formal one. However, we will attempt to support it by quoting the most basic

    mathematical results and theorems in a very clear manner. As such, we will not be able

    1


    to present the proofs of all such results.

Since wavelet analysis may be considered "superior" for representing signals or

    functions, we shall review the familiar ways of expressing functions in terms of much

simpler ones in Chapter 2, e.g., f(x) = e^x in terms of the basic simple monomials {1, x, x², ...}. Another example is expressing the two-dimensional force F in terms of the simpler unit vectors i and j, F = iF_x + jF_y. Speaking of vectors, we will recall the basic elements of matrices as needed, since one way of representing a vector is by

    a column (or row) matrix.

    Another most basic representation of signals in engineering and applied mathematics

is that of periodic functions in terms of the very familiar harmonic functions sin nx and cos nx on (−π, π). This constitutes the Fourier series branch of Fourier analysis, which we will visit briefly in Chapter 2 and with more detail in Chapter 5. Readers familiar

    with these topics may just as well skip them or scan them quickly.

Wavelets may be seen as small waves ψ(t), which oscillate at least a few times, but unlike the harmonic waves must die out to zero as t → ±∞. The most applicable wavelets are those that die out to identically zero after a few oscillations on a finite interval [a, b), i.e., ψ(t) = 0 outside the interval [a, b).

Such a special interval [a, b) is called the "support" or "compact support" of the given (basic) wavelet ψ(t). We say a basic wavelet since it will be equipped with two parameters, namely, "scale" and "translation," to result in a "family" of wavelets ψ((t − b)/a). An example of a basic wavelet is the Daubechies 2 wavelet ψ(t) shown in Figure 1.1.

Fig. 1.1: The Daubechies 2 wavelet ψ(t) and its associated scaling function φ(t), −∞ < t < ∞.


The construction of basic wavelets is established in terms of their associated "building blocks" or "scaling functions" φ(t). The latter is governed by an equation called the "recurrence relation" or "scaling equation." We should mention here that there is

    something relatively new about this fundamental equation. It comes with unknown

    coefficients, whose determination constitutes a significant part of this book. Some very

    basic theorems will lead us in this direction with the aid of the Fourier transform.

The Daubechies 2 scaling function φ(t) is shown in Figure 1.1, where we note that both ψ(t) and φ(t) in this case vanish identically to zero beyond their respective finite intervals (or supports). In Figure 1.2 we show the oldest known basic wavelet, the Haar wavelet, which dates to 1910:

ψ(t) =  1,   0 ≤ t < 1/2
       −1,   1/2 ≤ t < 1
        0,   otherwise          (1.1)

and its associated scaling function,

φ(t) =  1,   0 ≤ t < 1
        0,   otherwise          (1.2)

with their (common) compact support [0, 1).

Fig. 1.2: The Haar basic wavelet ψ(t) and its associated scaling function φ(t), −∞ < t < ∞.
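The Haar pair (1.1)-(1.2) is simple enough to code directly. Below is a minimal Python sketch (the function names haar_phi and haar_psi are ours, not the book's), checking numerically that the wavelet's area vanishes, the admissibility condition discussed later, while the scaling function has unit area:

```python
import numpy as np

def haar_phi(t):
    """Haar scaling function (1.2): 1 on [0, 1), 0 otherwise."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 1), 1.0, 0.0)

def haar_psi(t):
    """Haar wavelet (1.1): +1 on [0, 1/2), -1 on [1/2, 1), 0 otherwise."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1), -1.0, 0.0))

# Riemann sums on a fine grid: the area under psi is (numerically) zero,
# while phi integrates to one.
t = np.linspace(-1.0, 2.0, 3001)
dt = t[1] - t[0]
area_psi = float(np.sum(haar_psi(t)) * dt)
area_phi = float(np.sum(haar_phi(t)) * dt)
print(abs(area_psi) < 0.01, abs(area_phi - 1.0) < 0.01)  # True True
```

The half-open intervals matter: they are what makes the translates φ(t − k) tile the line without overlap.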

    In Section 3.3 of Chapter 3 we will present a few other wavelets and their associ-

    ated scaling functions. The emphasis will be on the successive approximation iterative

method of solving the recurrence or scaling equation to find the scaling function. Once that has been accomplished, constructing the associated basic wavelet amounts to a

    rather simple computation.

    This brings us to the most relevant question in wavelet analysis: what is the source

    of the scaling equation? Here, we have a relatively new subject compared to our usual


    source of constructing the different familiar functions, differential equations. The new

subject is that of Multiresolution Analysis (MRA), which depends mainly on the new parameter of the scale that is introduced to the basic wavelet as ψ(t/a); this will be the subject of Chapter 8. Its definition depends on the topic of vector spaces and sub

    (vector) spaces. This is a relatively new subject to readers with calculus background.

However, we will attempt in Chapter 4 to introduce it gradually, building on the familiarity of the usual vectors in two and three dimensions. In fact, such vectors, with

    their binary operation of addition and multiplication by a scalar, constitute one example

    of a vector space. The same two operations on vectors in n-dimensions are used for

    the definition of the n-dimensional vector space. We will then move to real-valued

    functions with their very familiar binary operation of addition and multiplication by a

    scalar to show that they also constitute a vector space. In the same chapter we will

    discuss the basis for such vector spaces and introduce the inner product among their

    members, to reach the new subject of projecting a member of (the general) vector space

onto another. Finally, we will introduce the very important subject of "orthogonality"

    among members of the (desired) vector space, or members of different vector spaces.

    After preparing for the vector spaces, the subject of Multiresolution Analysis will

    be covered in Chapter 8, following the coverage of the Fourier series and transforms

    in Chapters 5 and 6, respectively. In Chapter 7 we will briefly present the windowed

(or short-term) Fourier transform (WSFT) and the continuous wavelet transform (CWT).

Wavelet analysis is the benefactor of Fourier analysis, in the sense that it answers the latter's serious shortcomings, namely the fact that Fourier analysis represents a

    signal as either a function of time or frequency, but not both. There are also other new

    advantages of wavelet analysis to be discussed throughout the book. Thus, discussing

    or showing some wavelets is incomplete without comparing them with the harmonic

sine and cosine waves of Fourier series. This is the reason behind having some preliminaries of the latter subject in Section 2.2 of Chapter 2, as we intend at this very early stage to compare Fourier and wavelet analysis and highlight the advantages of the latter.

As we shall see in Section 2.2, the Fourier series decomposes a given signal, e.g., f(t), −∞ < t < ∞, in terms of the harmonic functions {1, cos nt, sin nt}, n = 1, 2, ..., on (−π, π) with their (time-independent) frequency n in radians/sec (or period 2π/n seconds). In contrast to the wavelets, which die out after a few oscillations on their compact supports, the above harmonic functions continue to repeat their shape (periodic) on the whole real line with (constant) period 2π/n. In Figure 1.3 we illustrate such periodicity for f(t) = sin t and g(t) = cos t with frequency n = 1 (period 2π), and in Figure 1.4 we illustrate f(t) = sin 2t and g(t) = cos 2t with their higher frequency n = 2 (period 2π/n = 2π/2 = π).


2.1 TAYLOR SERIES REPRESENTATION OF FUNCTIONS

signal if we take f(t) = i(t) as the current flowing in a wire of unit resistance. The power in the Δt time interval is |f(t)|²Δt, with the total power (for the whole time) as ∫_{−∞}^{∞} |f(t)|² dt. Such a class of signals with finite power are termed square integrable functions. The mathematical symbol for such a set of functions is L²(−∞, ∞), where the 2 in L² refers to the square of the function in the above integral. L refers to a more general aspect of integration, i.e., integration in the sense of Lebesgue, as we will elaborate on briefly at the end of Section 2.2.3. In our treatment we shall confine ourselves to the integration used in calculus courses, which is the Riemann type of integration. Hence, we take lim_{n→∞} V_n as f(t) ∈ L²(−∞, ∞).

Combining the result of the nested subsets of the dyadic 1/2ⁿ scale in (2.14), and lim_{n→−∞} V_n = {0}, lim_{n→∞} V_n = L²(−∞, ∞), we have the nested subsets concept that we will need for Multiresolution Analysis,

{0} ⊂ ··· ⊂ V_{−n} ⊂ ··· ⊂ V_{−2} ⊂ V_{−1} ⊂ V_0 ⊂ V_1 ⊂ V_2 ⊂ ··· ⊂ V_n ⊂ ··· ⊂ L²(−∞, ∞).   (2.15)

Here, we try to introduce the concept of nested subsets that are characterized by their decreasing scale 1/2ⁿ. For Multiresolution Analysis we will need nested "sub (vector) spaces," of which the above nested subsets are an example. We will present and define vector spaces, subspaces, and their bases in Chapter 4.

    The next example, with its illustrations, may clarify further this vital concept of

    nested subsets for wavelet analysis with its dyadic scale.

    Example 2.1 Discretizing with a Dyadic Scale

Consider the function f(t) = sin(π/8)t, t ∈ (0, 8), and its discrete values f(t_k) = f(k/2^j) with their spacing Δt = (k+1)/2^j − k/2^j = 1/2^j. Let us take the case of j = 0, where Δt = 1, as shown in Figure 2.3.

We note how crude the approximation of sin(π/8)t is with the eight sample points and their spacing of Δt = 1/2⁰ = 1. The scaling analysis of wavelets spaces the samples at t_{j,k} = k/2^j, so the incremental scale used is Δt_{j,k} = (k+1)/2^j − k/2^j = 1/2^j. This, we note, will decrease very fast as we increase j, as shown in Figures 2.4 and 2.5 for Δt = 1/2¹ = 1/2 and Δt = 1/2² = 1/4, respectively. It is in contrast to the manner in which we usually sample in basic calculus with n points on the interval (a, b), where the constant increment (scale) is Δt = (b − a)/n, which decreases as 1/n instead of the (fast) dyadic 1/2ⁿ of wavelets.

Let us denote the set of signals discretized with scale Δ_n = 1/2ⁿ as V_n. Thus, what we have in Figure 2.3 is an example of a signal discretized with a scale of 1/2⁰ = 1; such signals are elements of V_0. As seen, it is a crude or coarse approximation of the sin(π/8)t signal. If we decrease the scale to 1/2¹ = 1/2, we have a more refined approximation of the signal, and all signals done that way with scale 1/2¹ constitute the set V_1. This better


approximation (with half the scale) is shown for f(t) = sin(π/8)t in Figure 2.4. The next, more refined scale is 1/2² = 1/4, shown in Figure 2.5, a better approximation than those of Figures 2.3 and 2.4 (with their respective scales of 1 and 1/2). The result is the set V_2 of the discretized signals. It is clear that the details in V_0 are found in V_1, and in turn the details of the latter are found in V_2. This enables us to write the three sets with their nested subsets relation as V_0 ⊂ V_1 ⊂ V_2. As we did above, we can still decrease the scale to 1/2ⁿ for n > 2 or increase it as 1/2^{−n} = 2ⁿ, n > 0, with their corresponding sets V_n and V_{−n}, where, following the above observation for V_0, V_1, and V_2, we can write

V_{−n} ⊂ ··· ⊂ V_{−2} ⊂ V_{−1} ⊂ V_0 ⊂ V_1 ⊂ V_2 ⊂ ··· ⊂ V_n.

Fig. 2.3: f(t) = sin(π/8)t and its dyadic discretization with a scale of Δt = 1/2⁰ = 1.

Fig. 2.4: f(t) = sin(π/8)t and its dyadic discretization with a scale of Δt = 1/2¹ = 1/2.


Fig. 2.5: f(t) = sin(π/8)t and its discretization with a scale of Δt = 1/2² = 1/4.

    Exercises 2.1

    Note: These exercises are also meant for a review of the basic function approximation

    of Taylor series, plus the important concept of nested subspaces.

1. Solve the differential equation

   d²y/dx² + λ²y = 0,   0 ≤ x ≤ π   (E.1)

   with its two boundary conditions,

   y(0) = 0,   (E.2)
   y(π) = 0.

   Hint: y₁(x) = cos λx and y₂(x) = sin λx are the two linearly independent solutions of (E.1).

2. (a) Write the Taylor series expansion of f(t) = sin t, 0 ≤ t ≤ 2π, about x₀ = 0.

   (b) Consider the S₂(t) partial sum of the Taylor series in part (a) for approximating f(t) = sin t. Follow what was done in equation (2.6) to show the added refinement to the S₂(t) approximation due to involving the next seven terms in S₉(t). Graph your results for the comparison. (Note that both S₂(t) and S₉(t) are still not good approximations of f(t) = sin t, while S₉(t) − S₂(t) is only an added refinement to S₂(t) that does not improve the approximation by much, which means that we need a very large N in S_N(t), such as N = 40.)

3. Consider the function f(t) = cos t, 0 < t < 2π. Follow Example 2.1 (and Figure 2.3) to show the approximation of this function with the dyadic discretization


3 The Scaling Recurrence Relation

    3.1 INTRODUCTION

    In the second chapter (Section 2.3.1) we mentioned that in wavelet analysis, usually, a

single scaling function series is used to yield an "approximated" or "blurred" version

    of the given signal. Another (double) series of the associated wavelets is added to

    the former to bring about a refinement. The result is a satisfactory approximation (or

    representation) of the signal.

The question now is: how are these scaling functions found? Once they are, it is a simple computation to construct their associated basic wavelets. The scaling functions or "building blocks" are of paramount importance in our study of wavelet analysis in this book. They are the solutions of the following scaling recurrence relation (or simply the scaling equation) in the scaling function φ(x):

φ(x) = Σ_k p_k φ(2x − k)   (3.1)

or

φ(x) = Σ_k h_k √2 φ(2x − k).   (3.2)

The second version of the scaling equation in (3.2), with coefficients h_k √2 replacing p_k in (3.1), is used to emphasize the normalization of √2 φ(2x − k) such that ∫_{−∞}^{∞} |√2 φ(2x − k)|² dx = 1, in order to have the normalization at the different energies of the signal, which will be explained later. When the computations become long, we may choose the equivalent form in (3.1) with {p_k} as the scaling coefficients to avoid carrying the coefficient of √2 in (3.2). As the name implies, we note in (3.2) that the scaling function φ(x) is scaled as φ(2x) and translated by k/2, for integer k, as the final



φ(2x − k) = φ(2(x − k/2)) inside the series (3.2).

    )) inside the series (3.2).

We have mentioned that the above scaling equation is with coefficients h_k (or p_k)

    to be determined at the outset. This turns out not to be a simple matter! Indeed, we

    will depend on Multiresolution Analysis (MRA) in Chapter 8 to derive a number of

properties for these coefficients. Then, with guidance from some basic theorems and the Fourier transform, we will be able to determine these coefficients for a number of

    very applicable scaling functions. Fortunately, the coefficients needed for constructing

    the associated basic wavelets are related in a very simple way to the above scaling

coefficients h_k of (3.2).

    In our discussion and illustration of a number of well-known scaling functions and

    their associated basic wavelets, we will assume, for now, that we know the scaling

    coefficients.

    With the involvement of scaling and translation in the scaling equation (3.2), we

begin this chapter with a simple review of scaling and translating functions. This is followed by the iterative method of successive approximations for solving the scaling

    equation, assuming knowledge of the coefficients. We end the chapter by presenting a

    number of well-known basic wavelets.

    3.2 SCALING AND TRANSLATION: HALLMARKS OF WAVELET ANALYSIS

    Here we will review and illustrate the concept of translating a function, followed by

    scaling its domain, and finally the two operations together.

    Translation

To translate a function f(x) to the right of the origin by x₀ = a, we choose a new coordinate x' measured from a new y-axis erected at x₀ = a, as shown in Figure 3.1(a), x' = x − a. So, in the new coordinates (x', y), the translated y = f(x) to the right is y = f(x') = f(x − a), a > 0, as shown in Figure 3.1(b). The same is done for the translation f(x + a) of f(x) to the left of the origin by x₀ = −a, as y = f(x − (−a)) = f(x + a). This can also be seen by erecting a new y-axis at x₀ = −a, with new coordinate x' = x − (−a) = x + a, where we have y = f(x') = f(x + a) as shown in Figure 3.1(c).
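These translation and scaling rules can be sanity-checked numerically; in particular, the identity 2x − k = 2(x − k/2) used with the scaling equation follows immediately. A small Python sketch (the test function f is our arbitrary choice):

```python
import numpy as np

f = lambda x: np.exp(-x**2)          # an arbitrary smooth test function
x = np.linspace(-4.0, 4.0, 801)

# Translation to the right by a: f(x - a) peaks where x - a = 0,
# i.e., the peak of f at the origin moves to x = a.
a = 2.0
shifted = f(x - a)
print(float(x[np.argmax(shifted)]))

# Scaling then translating: since 2x - k = 2(x - k/2), f(2x - k) is f
# compressed by a factor of 2 and shifted right by k/2.
k = 3
print(np.allclose(f(2 * x - k), f(2 * (x - k / 2))))  # True
```

The same two operations, applied to φ, are exactly what appears on the right side of (3.2).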


    3.5 ITERATIVE METHODS FOR SOLVING THE SCALING EQUATION

    We will present here two different iterative methods for solving the scaling equation

    (3.2). The first subject in the next section is the successive approximations iterative

    method, which is often seen in solving integral equations, for example. The second (in

Section 3.5.2) starts with initial values of the scaling function at integers, for example, on the right side of (3.2), to generate values of φ(x) on the left side at half integers. So, this method starts with initial values, and it parallels what we do in solving differential

    equations when given the initial values of the solution.

    3.5.1 The iterative method of successive approximations

    It may be time to give some idea about the usual (approximate) method of solving the

    above scaling equation (3.2),

φ(x) = Σ_k h_k √2 φ(2x − k).

    It is an iterative method of successive approximations. This approach starts by

assigning a zeroth approximation φ₀(x) for φ(x) inside the sum of (3.2) as an input, and looks at the result of the sum as a first approximation φ₁(x) as an output,

φ₁(x) = Σ_k h_k √2 φ₀(2x − k).   (3.9)

The φ₁(x) is then used again as an input inside the sum of (3.9) to result in φ₂(x) as an output,

φ₂(x) = Σ_k h_k √2 φ₁(2x − k).   (3.10)

This iterative process continues with φ_n(x) as input and φ_{n+1}(x) as output¹,

φ_{n+1}(x) = Σ_k h_k √2 φ_n(2x − k),   (3.11)

until in practical situations the difference between successive iterations becomes negligible, that is, |φ_{n+1}(x) − φ_n(x)| ≈ 0. In Chapter 10 we will present Theorem 10.4 in Section 10.3, with conditions that guarantee the convergence of the presented iterative

    method of successive approximations. In Example 3.8(a) we will use this iterative

    method to generate a Daubechies scaling function (as shown in Figure 3.14).

In the following example we will illustrate this iterative method for the Haar scaling function with its scaling coefficients h₀ = h₁ = 1/√2.

¹This notation of the approximation φ_n(x) of the scaling function is not to be confused with the well-established notation φ_n(x) of the B-splines, of which we will present φ₃(x) as a special case in Equation (3.22).
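The iteration (3.11) can be carried out directly with nested function closures. A minimal Python sketch for the Haar coefficients h₀ = h₁ = 1/√2 (names are ours; the zeroth approximation is the 1/3-height box used in Example 3.1, taken half-open for numerical convenience):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def cascade_step(phi_n, h):
    """One step of (3.11): phi_{n+1}(x) = sum_k h_k sqrt(2) phi_n(2x - k)."""
    def phi_next(x):
        return sum(hk * SQRT2 * phi_n(2.0 * x - k) for k, hk in enumerate(h))
    return phi_next

h = [1.0 / SQRT2, 1.0 / SQRT2]          # Haar: h_0 = h_1 = 1/sqrt(2)

# Zeroth approximation: height 1/3 on [0, 3), so its integral is one.
phi = lambda x: np.where((0 <= x) & (x < 3), 1.0 / 3.0, 0.0)

for _ in range(7):                      # produce phi_1, ..., phi_7
    phi = cascade_step(phi, h)

# phi_7 is already close to the exact Haar box: ~1 inside (0, 1),
# ~0 well outside it, as in Figure 3.5(h).
print(phi(np.array([0.25, 0.5, 0.75])))   # ≈ [1, 1, 1]
print(phi(np.array([-0.5, 1.5, 2.5])))    # ≈ [0, 0, 0]
```

Each step doubles the number of recursive evaluations, so a closure-based sketch like this is only practical for a handful of iterations; grid-based versions (used later for Daubechies 2) scale better.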


    Example 3.1 The Iterative Method of Successive Approximations: Computing for the

    Haar Scaling Function.

In this case, the scaling equation (3.2) (with k = 0, 1) becomes

φ(x) = φ(2x) + φ(2x − 1),   (E.1)

where h₀ = h₁ = 1/√2. Another constraint is given by the normalization, ∫_{−∞}^{∞} φ(x) dx = 1.

Fig. 3.5: The iterative method for the Haar scaling function, φ₀(x) to φ₇(x).

We begin the iterative method with the zeroth approximation, shown in Figure 3.5(a),

φ₀(x) = 1/3, 0 ≤ x ≤ 3;  0, otherwise,   (E.2)


which satisfies ∫₀³ φ₀(x) dx = 1. From (3.11), using again h₀ = h₁ = 1/√2, the first approximation is

φ₁(x) = φ₀(2x) + φ₀(2(x − 1/2)),   (E.3)

which is shown in Figure 3.5(b). Then we use this φ₁(x) as the input in (3.11) to obtain the second approximation φ₂(x),

φ₂(x) = φ₁(2x) + φ₁(2(x − 1/2)),   (E.4)

as shown in Figure 3.5(c). This process is repeated, where the graph of φ_{n+1}(x) in (3.11) with n = 4 in Figure 3.5(f) comes close, and then φ₇(x) in Figure 3.5(h) comes closer, to the exact Haar scaling function in Figure 1.2 of Chapter 1. We leave it as an exercise to use the successive approximations iterative method to solve for the roof scaling function,

φ(x) = x, 0 ≤ x < 1;  2 − x, 1 ≤ x < 2;  0, otherwise,   (3.12)

as shown in Figure 3.8, with its three non-zero scaling coefficients h₀ = 1/(2√2), h₁ = 1/√2, and h₂ = 1/(2√2). As we did for the Haar scaling function, it is advisable to start with a zeroth approximation,

φ₀(x) = 1/2, 0 ≤ x < 2;  0, otherwise,

with its integral over (−∞, ∞) being one (see Exercise 4).

    With the very simple form of the Haar and the roof scaling functions we will attempt,

    in Examples 3.3 and 3.4, a graphical method to verify that they are solutions to the

    scaling equation, given their respective scaling coefficients. For such simple forms of

    scaling functions, we will also illustrate the method graphically by using these scaling

functions for constructing their corresponding wavelets (as seen in Examples 3.6 and 3.7).

We will leave the use of such an approach for solving the scaling equation to Example 3.8(a), in the case of the Daubechies 2 scaling function φD2(x) shown in Figure 3.14. Then, we will use such results in Example 3.8(b) to construct the corresponding Daubechies 2 basic wavelet ψD2(x), as shown in Figure 3.15.

    Before we do this, it is instructive to give a simple example of how such a successive

    approximations iterative method is used for solving other types of equations, in this

    case solving a simple integral equation.


    It should be easy to show that this basic roof wavelet satisfies the necessary condition

    of all admissible basic wavelets, i.e., their average over the real line must vanish. This

is done by showing that the integral ∫_{−∞}^{∞} ψ(x) dx = 0, which is the case when we use the ψ(x) in (3.26). It can also be seen from the graph of ψ(x) in Figure 3.13, where the area under ψ(x) is zero.

Example 3.8(a) The Iterative Method for Constructing the Daubechies 2 Scaling Function φD2

This example is for the Daubechies 2 scaling function φD2(x) with its four coefficients h₀ = (1 + √3)/(4√2), h₁ = (3 + √3)/(4√2), h₂ = (3 − √3)/(4√2), and h₃ = (1 − √3)/(4√2). The scaling equation (3.2) gives,

    gives,

    (x) =3

    k=0

    hk

    2(2x k)

    = h0

    2(2x) + h1

    2(2x

    1) + h2

    2(2x

    2) + h3

    2(2x

    3)

    =1 + 3

    4

    2

    2(2x) +

    3 + 34

    2

    2(2x 1) + 3

    34

    2

    2(2x 2)

    +1 3

    4

    2

    2(2x 3). (3.27)

We have already seen the very simple forms of the Haar and roof scaling functions, which we were able to use in even a graphical way to verify their respective scaling equations. The present Daubechies 2 scaling function, unfortunately, does not have an analytical expression by any measure for us to attempt to verify its above scaling equation (3.27). The only method remaining at our disposal is the iterative one of (3.11), which we must carry out numerically. Here, we may start with the zeroth approximation as a small constant, φ₀(x) = c, but we need some idea about its compact support. So, we assume (depending on prior knowledge) that φ₀(x) has a compact support of (0, 3) in order to start the iterative process with, for example,

φ₀(x) = 1/3, 0 < x < 3;  0, otherwise.

Note that our first guess satisfies ∫_{−∞}^{∞} φ₀(x) dx = ∫₀³ (1/3) dx = 1. We use this φ₀(x) inside the following sum of the scaling equation,

φ_{n+1}(x) = Σ_{k=0}^{3} h_k √2 φ_n(2x − k),   (3.11)

to find φ₁(x). Then we continue the process to find φ₂(x), φ₃(x), ..., φ₁₀(x), as shown in Figure 3.14.


Fig. 3.14: The result of seven iterations, φ₀(x), φ₁(x), ..., φ₇(x), of the scaling equation associated with the D2 scaling function.

Example 3.8(b) Constructing the Daubechies 2 Wavelet ψD2

As it is for any basic wavelet, once its associated scaling function is found (in the present case numerically), it is easy (again, numerically in the present case) to find the basic wavelet via equation (3.23),

ψ(x) = Σ_k (−1)^k h_{1−k} √2 φ(2(x − k/2)).   (3.23)

  • 7/30/2019 Wave Let Book

    18/75

    72 Chapter 3 THE SCALING RECURRENCE RELATION

We determine here the k index values of the sum, given that we have h₀, h₁, h₂, and h₃. So, for the above equation, h_{1−k} is non-zero for 1 − k = 0, 1, 2, 3, which makes k of the sum run from −2 to 1,

ψ(x) = Σ_{k=−2}^{1} (−1)^k h_{1−k} √2 φ(2(x − k/2))
     = (−1)^{−2} h₃ √2 φ(2x + 2) + (−1)^{−1} h₂ √2 φ(2x + 1) + (−1)⁰ h₁ √2 φ(2x) + (−1)¹ h₀ √2 φ(2x − 1).

In Figure 3.15 we show the Daubechies 2 scaling function φ(x) and its associated wavelet ψ(x − 1).

Fig. 3.15: The Daubechies 2 scaling function φD2(x) and its associated wavelet ψD2(x), translated by 1 to the right.
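Formula (3.23) can also be verified in closed form in the simplest case. For the Haar coefficients h₀ = h₁ = 1/√2, the index 1 − k = 0, 1 gives k = 0, 1, and the sum collapses to φ(2x) − φ(2x − 1), which is exactly the Haar wavelet (1.1). A short Python sketch of this check (names are ours):

```python
import numpy as np

phi = lambda x: np.where((0 <= x) & (x < 1), 1.0, 0.0)   # Haar phi, (1.2)
h = [1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)]             # h_0, h_1

def psi(x):
    """psi(x) = sum_k (-1)**k h_{1-k} sqrt(2) phi(2(x - k/2)), k = 0, 1."""
    return sum((-1) ** k * h[1 - k] * np.sqrt(2.0) * phi(2.0 * (x - k / 2.0))
               for k in range(2))

# Sampling inside each half interval and outside the support recovers
# the Haar wavelet of (1.1): +1, then -1, then 0.
print(psi(np.array([0.25, 0.75, 1.5])))   # ≈ [ 1. -1.  0.]
```

For Daubechies 2 the same four lines apply with the four h_k of Example 3.8(a) and the numerically computed φ in place of the box function.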

    In Section 3.6 we will present a scaling function and its associated basic wavelet that

die out to zero as x → ±∞. Thus, they do not have a compact support, because they do not die out to identically zero beyond a finite interval. These are the Shannon (sinc) scaling function and one of its associated basic wavelets.

    3.5.2 An Initial Value Problem for Computing the Scaling Functions

Here we present another numerical (iterative) method for computing the scaling function. It, of course, assumes knowledge of the scaling coefficients for having a definite scaling equation. In this method, we also assume that we have initial values of the sought scaling function, for example, φ(0), φ(1), φ(2), and φ(3) in the case of the


    104 Chapter 4 VECTOR SPACES, SUBSPACES AND BASES

orthonormal scaling functions {2^{j/2} φ(2^j x − k)}, is

f_j(x) = Σ_k α_{j,k} 2^{j/2} φ(2^j x − k). (4.20)

If we use the orthonormality of the present Haar basis, after multiplying both sides of the above equation by 2^{j/2} φ(2^j x − l) and integrating, we can obtain the following expression for the coefficients {α_{j,k}}:

α_{j,k} = ∫ f_j(x) 2^{j/2} φ(2^j x − k) dx. (4.21)

The derivation goes as follows:

∫ f_j(x) 2^{j/2} φ(2^j x − l) dx = ∫ [Σ_k α_{j,k} 2^{j/2} φ(2^j x − k)] 2^{j/2} φ(2^j x − l) dx
    = Σ_k α_{j,k} ∫ 2^{j/2} φ(2^j x − k) 2^{j/2} φ(2^j x − l) dx
    = Σ_k α_{j,k} δ_{k,l} = α_{j,l},

α_{j,l} = ∫ f_j(x) 2^{j/2} φ(2^j x − l) dx,

where the orthonormality of the above Haar scaling functions on (−∞, ∞) has been used in the last integral inside the sum.

The same can be done for g_j(x) ∈ W_j in terms of the orthonormal discrete wavelet set {2^{j/2} ψ(2^j x − k)},

g_j(x) = Σ_k β_{j,k} 2^{j/2} ψ(2^j x − k), (4.22)

β_{j,k} = ∫ g_j(x) 2^{j/2} ψ(2^j x − k) dx. (4.23)

    We now illustrate these two expansions for the Haar scaling functions and their

    associated wavelets with a couple of examples.

    Example 4.12 Approximation of Functions by the Haar Scaling Functions Series

For simplicity, we will start the computations by considering expansion in the space V_0 with its Haar scaling functions basis {φ(x − k)} to give a (rough) approximation f_0(x) of the function in (E.1) (as shown in Figure 4.2). We will follow this by finding a more refined f_1(x) ∈ V_1 decomposition of the function using the smaller scale λ_1 = 1/2, as shown in Figure 4.4.

Consider the function

f(x) = x, 0 < x < 3. (E.1)
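For the Haar basis of V_0, the coefficient formula (4.21) reduces to α_{0,k} = ∫_k^{k+1} f(x) dx, so for f(x) = x it gives k + 1/2. A brief numerical sketch (the quadrature step count is an illustrative choice):

```python
def haar_v0_coeff(f, k, steps=100000):
    # alpha_{0,k} = integral over [k, k+1) of f(x) * phi(x - k) dx,
    # with phi the box (Haar) scaling function; midpoint rule
    h = 1.0 / steps
    return sum(f(k + (i + 0.5) * h) for i in range(steps)) * h

alpha = [haar_v0_coeff(lambda x: x, k) for k in range(3)]
# alpha is approximately [0.5, 1.5, 2.5], i.e., k + 1/2 for k = 0, 1, 2
```

The same quadrature taken against √2 φ(2x − k) yields the finer V_1 coefficients α_{1,k} = √2 (2k + 1)/8 used for f_1(x).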


    4.4 BASIS OF A VECTOR SPACE 105

Note that for the Fourier series expansion we had f(x) = x, 0 < x < . . .

. . . dilation (for a > 1) and compression (for a < 1), and b represents the translation parameter. Here, as it was presented in this and previous chapters, we again write the discrete

wavelet as

ψ_{j,k}(t) ≡ 2^{j/2} ψ(2^j t − k) = 2^{j/2} ψ(2^j (t − 2^{−j} k)), (7.37)

which "formally" for now is the discrete version of the continuous wavelet ψ_{a,b}(t) = |a|^{−1/2} ψ((t − b)/a), evaluated (for its variables a and b) at the (discrete) dyadic position b_{j,k} = 2^{−j} k with binary scale a_j = 2^{−j}. The scaling factor 1/√a_j = 2^{j/2} in (7.33) represents a contraction ("approximately" high frequency for j > 0) and a dilation ("approximately" low frequency for j < 0). We mention that a_j = 2^j is also used, as done in some references. This makes no difference, since in the discrete wavelet representation (7.30) we sum over j = −∞ to ∞.


    7.3 THE CONTINUOUS (CWT) AND DISCRETE WAVELET TRANSFORM (DWT) 231

As we had it in (7.30)-(7.32), the associated ψ_{j,k}-wavelet transform is the following discrete version of the continuous wavelet transform in (7.9) for the above discrete values of its two parameters a and b, namely, a_j = 2^{−j} and b_{j,k} = k 2^{−j},

(Wf)(2^{−j}, k2^{−j}) ≡ Wf(2^{−j}, k2^{−j}) ≡ c_{j,k} ≡ 2^{j/2} ∫ f(t) ψ(2^j t − k) dt. (7.32)

The inverse of this Wf(2^{−j}, k2^{−j}), if it exists, would be the following double series expansion, of the square integrable signal f(t) on (−∞, ∞), in terms of the above discrete wavelets:

f(t) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} (Wf)(2^{−j}, k2^{−j}) 2^{j/2} ψ(2^j t − k), −∞ < t < ∞. (7.30)

A close look at the possible derivation of (7.37), in comparison to what we do for the Fourier series expansion, would suggest that the derivation becomes easier if we limit ourselves to the set of discrete wavelets that are orthonormal on (−∞, ∞), i.e.,

∫ ψ_{j,k}(t) ψ_{m,n}(t) dt = ∫ 2^{j/2} ψ(2^j t − k) 2^{m/2} ψ(2^m t − n) dt = δ_{jm} δ_{kn}, (7.38)

where k, m, j, and n are integers and δ_{km} is the Kronecker delta symbol,

δ_{km} ≡ 1 for k = m, and 0 for k ≠ m. (7.39)
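The orthonormality condition (7.38) can be spot-checked numerically for the simplest case, the Haar wavelet (ψ = 1 on [0, 1/2), −1 on [1/2, 1)). In the sketch below the integration grid is an illustrative choice, made fine enough that the piecewise-constant integrands are integrated exactly:

```python
def haar_psi(t):
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def psi_jk(j, k):
    # the discrete wavelet 2^(j/2) * psi(2^j t - k) of (7.37)
    return lambda t: 2.0 ** (j / 2) * haar_psi(2.0 ** j * t - k)

def inner(f, g, a=-4.0, b=4.0, n=1 << 12):
    # midpoint rule; the Haar breakpoints fall on the grid, so this is exact here
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h
```

Running it gives ⟨ψ_{0,0}, ψ_{0,0}⟩ = ⟨ψ_{1,0}, ψ_{1,0}⟩ = 1, while mixed pairs such as ⟨ψ_{0,0}, ψ_{1,0}⟩ and ⟨ψ_{0,0}, ψ_{0,1}⟩ vanish, as (7.38) requires.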

The question that presents itself now, in light of our discussion in this chapter, is how does the double wavelet series in (7.30) make a representation of the function f(t)? This question is valid since in Chapter 4 we always spoke about the role of the wavelet series in offering only a "refinement" to be added to the approximate (blurred) version of the signal via the scaling function series of (7.34)-(7.35).

The catch here, as compared to the infinite series (j = −∞ to ∞) in the wavelet series (7.30), is that, as we have already indicated earlier, we stop at an acceptable scale level j = J, that is, we use a finite sum. There, for example, we had to appeal to the nested decomposition V_{J+1} = V_J ⊕ W_J = V_{−J} ⊕ W_{−J} ⊕ . . . ⊕ W_0 ⊕ . . . ⊕ W_{J−1} ⊕ W_J, where for every high J with scale λ_{−J} = 2^J, the approximation f_{−J} ∈ V_{−J} of (7.34) is a very blurred version of the signal f(t) for such a large scale. Hence, f_{−J} ∈ V_{−J} can be considered a mere constant, especially if we let J → ∞. Thus, we can see the double wavelet series (7.30), formally (since, in practice, we do not need to go to such a large λ_{−J} = 2^J), as a representation of the signal f(t), −∞ < t < ∞, plus the (added) constant represented by f_{−J}(t) of the scaling function series at the very high scale λ_{−J} = 2^J.


    232 Chapter 7 THE WINDOWED FOURIER AND THE CONTINUOUS WAVELET TRANSFORMS

As we briefly mentioned in Chapter 4 and will discuss in detail in Chapter 8, this can be put in the language of implementing the discrete wavelet-scaling function decomposition of the signal in terms of successive high pass filters corresponding to {g_j(t)}_{−J}^{J} of the wavelet series in W_{−J} ⊕ W_{−J+1} ⊕ . . . ⊕ W_J and the (one) scaling function series of f_{−J}(t) ∈ V_{−J}.

In terms of the frequency spectrum of the signal, such high pass filters of the wavelets use almost all of the spectrum corresponding to (−2^J, 2^J), except for the very small interval (0, 2^{−J}) that is covered by the low pass filter corresponding to f_{−J} ∈ V_{−J}. The latter fills the small gap (0, 2^{−J}), which is called the "plug." So, theoretically, the spectra of the 2J + 1 high pass filters and the (only) one low pass filter add up to the full spectrum of the signal (−2^J, 2^J). However, there is of course an overlap due to the individual spectra not being ideal, i.e., they do not vanish abruptly to identically equal to zero.

    7.3.3 Frames - Before the Orthonormal Wavelets: Towards Discretizing the

    Continuous Wavelet Transform

Attention must be paid to the resolution of the identity in (7.20) for representing a continuous signal as the inverse of the continuous wavelet (double integral) transform in (7.9). Also, we recognize here the power of the Fourier transform analysis in reaching such an important result. This is in the sense that the wavelet transform in (7.9) gives us a complete characterization of the signal f(t) as in (7.20). However, there is a serious problem regarding the numerical computations of the single integral in (7.9) and, especially, the double integral in (7.10).

There will be a question when we come to discretize the wavelet |a|^{−1/2} ψ((t − b)/a) as

ψ_{j,k}(t) = (1/√a_j) ψ((t − b_{j,k})/a_j), with a_j = a_0^j and b_{j,k} = k a_0^j b_0, a_0 > 1, b_0 > 0,

then try to use the set {ψ_{j,k}(t)} of these discrete wavelets with appropriate coefficients to characterize or represent a signal. Here we are away from the Fourier series analysis. So,

    such a wavelet series must stand on its own to answer the important question, whether

    it completely characterizes the signal. In addition, this series may be required to satisfy

    a condition similar to the resolution of the identity of the continuous wavelet transform.

    So, the test for such a series must be that it allows a good (stable) numerical scheme,

    and a simple algorithm for computing the coefficients of the sought series. A very

    simple way of defining a stable system, is that a bounded (or small) input to the system

    should produce a correspondingly bounded (small) output. In other words, the output

depends continuously on the input. In our brief reference in Sections 3.3, 4.3, and 4.4 to the scaling function and wavelet series, these questions were not raised. The reason, as we will see shortly, is that we considered only the special case of the orthonormal wavelets {2^{j/2} ψ(2^j(t − k2^{−j}))} and their associated scaling functions {2^{j/2} φ(2^j(t − k2^{−j}))}, with their very special dyadic scaling a_j = 1/2^j and translation b_{j,k} = k/2^j.


    8.2 MULTIRESOLUTION ANALYSIS (MRA) 241

W{f} = (W_ψ f)(a, b) = (1/√|a|) ∫ f(t) ψ((t − b)/a) dt. (8.3)

Here we note the dependence of the transform (W_ψ f)(a, b) on the two continuous parameters a and b.

As expected, the inverse (continuous) wavelet transform involves integration over the two parameters a and b. Thus, we expect a double integral; as a result, we must worry about the redundancy and long computations involving such an integral. This is especially apparent when we allow (a, b) ∈ R², i.e., when they cover the entire plane. The inverse (continuous) ψ((t − b)/a)-wavelet transform is

f(t) = W^{−1}{(W_ψ f)(a, b)} = (1/C_ψ) ∫∫ (da db/a²) {(W_ψ f)(a, b) (1/√|a|) ψ((t − b)/a)}, (7.10, 8.4)

where C_ψ is a finite number depending on the wavelet ψ used, and is defined as

C_ψ = ∫_{−∞}^{∞} (|Ψ(ω)|²/|ω|) dω < ∞, (7.11, 8.5)

where Ψ(ω) is the Fourier transform of the wavelet ψ(t).

In equation (8.5) we see the importance of the required "admissibility condition" Ψ(0) = 0, since it is needed in the integrand to ensure the convergence of the integral, as was shown in Section 7.3 following equation (7.21).

    As we will see in the next sections, the shift from the continuous wavelet transform

    to the discrete wavelet series was done to efficiently remedy the redundancy of the

    (double integral) inverse continuous wavelet transform. This departure to a great extent

    is manifested in the following Multiresolution Analysis.

    8.2 MULTIRESOLUTION ANALYSIS (MRA)

Now we present a rather formal definition of the MRA, which is the most usual (typical) one involving an orthonormal basis.

    Definition 8.2 Multiresolution Analysis

A Multiresolution Analysis first requires the existence of a nested sequence

. . . ⊂ V_{−2} ⊂ V_{−1} ⊂ V_0 ⊂ V_1 ⊂ V_2 ⊂ . . . (8.6)

of subspaces {V_j}_{j=−∞}^{∞} such that the following conditions are met:

1. Density: Their union ∪_{j∈Z} V_j is dense in the real square integrable functions f(x) ∈ L²(−∞, ∞),


    242 Chapter 8 MULTIRESOLUTION ANALYSIS AND FILTER BANKS

the closure of ∪_{j∈Z} V_j equals L²(−∞, ∞), (8.7)

    which again means that any square integrable function can be approximated as

    closely as desired by a sequence of members of the union of these vector spaces.

    For example, this is the case for a double series of scaling functions summed overthe scale levels and the translations by integers.

2. Separation: Their intersection is the zero set,

∩_{j∈Z} V_j = {0}. (8.8)

3. Scaling: f(x) ∈ V_j if and only if f(x/2^j) ∈ V_0 (or f(x) ∈ V_0 if and only if f(2^j x) ∈ V_j). This means that by scaling up (or down) by the dyadic scale λ_j = 1/2^j we can move from one of the nested spaces to the other.

4. Orthonormal basis: The set of the scaling functions {φ(x − k)}_{k∈Z} is an orthonormal basis, i.e.,

∫_{−∞}^{∞} φ(x − k) φ(x − k′) dx = 0 for k ≠ k′, and 1 for k = k′, with k, k′ ∈ Z. (8.9)

We note that in the space V_j we have the orthonormal set of scaling functions {2^{j/2} φ(2^j x − k)}. We must note, as we had illustrated in Chapter 3, that this fourth requirement (of the usual MRA) is not satisfied by all applicable scaling functions. For example, it is satisfied by the Daubechies ones, whose (simplest) special case is the Haar scaling function. However, it is not satisfied by the hat (roof) scaling function,

φ(x) = x for 0 ≤ x ≤ 1,
φ(x) = 2 − x for 1 ≤ x ≤ 2,
φ(x) = 0 otherwise, (8.10)

where, for example, φ(x) and φ(x − 1) are not orthogonal on (−∞, ∞). This is seen in Figure 8.1, where the product φ(x)φ(x − 1) on (1, 2) is positive. Hence,

∫_{−∞}^{∞} φ(x)φ(x − 1) dx = ∫_1^2 φ(x)φ(x − 1) dx ≠ 0. (8.11)
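The non-vanishing inner product is quickly confirmed by hand or by machine: on (1, 2), φ(x) = 2 − x and φ(x − 1) = x − 1, so ∫_1^2 (2 − x)(x − 1) dx = 1/6. A short numerical check (midpoint rule; the step count is an illustrative choice):

```python
def hat(x):
    # the roof (hat) scaling function of (8.10)
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

def inner_shift1(n=100000):
    # integral of hat(x) * hat(x - 1); the product lives only on (1, 2)
    h = 1.0 / n
    return sum(hat(1.0 + (i + 0.5) * h) * hat((i + 0.5) * h) for i in range(n)) * h

val = inner_shift1()   # approximately 1/6, clearly not zero
```

This is exactly the failure of condition 4 of the MRA definition for the hat function.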

Fig. 8.1: The roof scaling function φ(x) and its translation φ(x − 1) as an example of a non-orthogonal scaling function with respect to translation by integers.


    8.2 MULTIRESOLUTION ANALYSIS (MRA) 243

    As we mentioned in Chapter 3, this hat scaling function is a special case of a more

    general class of functions, namely, the B-splines. For such scaling functions that do not

    satisfy the above fourth condition of the MRA definition, there is a theory with weaker

    conditions. For more on this, we refer to the authoritative reference of Daubechies.

    There is also a method of orthogonalizing the B-splines that yields the Battle-Lemarie

    scaling functions, as we will discuss in Section 11.3.2.

With the Daubechies scaling functions we can show that the above four requirements of the MRA are met. In particular, starting from their simplest one, the Haar scaling function, we find that conditions 1 and 2 are clear when we examine the limits of j ∈ Z. Condition 3 is intuitive when we look at the scaling by the dyadic scale λ_j = 1/2^j. Condition 4 is easily satisfied, since in this special case of the Haar scaling functions there is no overlap between their compact supports, as their translations by the integers k and k′ differ by integers. Thus, their product is zero, and the integral in (8.9) vanishes.

    8.2.1 Establishing The Scaling Equation

The first result we can easily obtain from the MRA definition is the establishment of the scaling equation,

φ(x) = Σ_k h_k √2 φ(2x − k). (8.12)

We are after only the form in the above equation; the coefficients {h_k} are to be found later from their properties that are derived using the MRA, a number of basic theorems, and the use of the Fourier transform. This will be the subject of Chapter 10.

Since {φ(x − k)} form an orthonormal basis of V_0, we can express φ(x) ∈ V_0 in a series of them,

φ(x) = Σ_k a_k φ(x − k). (8.13)

However, this is not the scaling equation, since the latter involves φ(2x − k) inside the sum and not φ(x − k). Now, the existence of nested subspaces, demanded by the first condition of the MRA, in particular V_0 ⊂ V_1 with {2^{1/2} φ(2x − k)} as the orthonormal basis of V_1, becomes important. Thus, since φ(x) ∈ V_0 ⊂ V_1, then φ(x) ∈ V_1, and we can write its series expansion in terms of the basis {2^{1/2} φ(2x − k)} of V_1, which yields

φ(x) = Σ_k h_k √2 φ(2x − k) (8.12)

as the sought scaling equation.

We will return to the MRA in Sections 9.2 and 9.3 for a helping hand in developing the properties of the scaling coefficients {h_k}. (We have spelled out four of these in equations (3.18)-(3.21); they will be used in conjunction with the Fourier transform, which will prove to be very instrumental in determining the coefficients.)
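For the Haar scaling function the series (8.12) terminates: with h_0 = h_1 = 1/√2, the scaling equation reads φ(x) = φ(2x) + φ(2x − 1). This brief sketch confirms the identity at a handful of sample points:

```python
import math

def haar_phi(x):
    # the Haar (box) scaling function on [0, 1)
    return 1.0 if 0.0 <= x < 1.0 else 0.0

h = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]  # Haar scaling coefficients

def rhs(x):
    # sum_k h_k * sqrt(2) * phi(2x - k) = phi(2x) + phi(2x - 1)
    return sum(h[k] * math.sqrt(2.0) * haar_phi(2.0 * x - k) for k in range(2))
```

The two half-width boxes φ(2x) and φ(2x − 1) tile [0, 1), which is the geometric content of the Haar scaling equation.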


    262 Chapter 8 MULTIRESOLUTION ANALYSIS AND FILTER BANKS

[Diagram: the scale-level-1 coefficients are split by the filter pair into scale-level-0 scaling and wavelet coefficients and then recombined.]

Fig. 8.10: Decomposition and reconstruction of f_1(x) ∈ V_1, a special case of quadrature mirror filter pairs.

In Figure 8.10 we show the decomposition as well as the reconstruction of f_1(x) in V_1. Such a figure depicts the first stage of the two parallel low and high pass filters at the scale λ_1 = 1/2, expressed as V_1 = V_0 ⊕ W_0, as can be seen clearly in Figure 8.13 (for 16 input samples). This means that the details in V_1 are formed via the projections of the signal onto the wavelet subspace W_0. The role of the scaling functions, on the other hand, is only to supply a coarse (blurred) projection of the signal onto the subspace V_0, with its large scale of 1 in the above case, resulting in an approximation of the signal itself.

Thus, for the general case when a signal is suspected to need a very fine scale, for example, λ_5 = 1/32, then we begin with f_5(x) ∈ V_5, and the above decomposition goes as V_5 = V_0 ⊕ W_0 ⊕ W_1 ⊕ W_2 ⊕ W_3 ⊕ W_4, which requires five low and high pass filter pairs.

In the next section we will elaborate more on the above implementation of the signal decomposition and reconstruction, by dealing more with filter banks, with illustrations in terms of the filters' frequency bands.

    8.5.2 Schematic Formulation of the Filter Banks Process

The decomposition in Figure 8.8 of the signal f(t) ∈ V_1 = V_0 ⊕ W_0 may be further illustrated with the actual frequencies that the low and high pass filters allow to pass through them. In Figure 8.11 we designate the action of the low and high pass filters by the absolute value of their respective system functions |H(ω)| and |G(ω)|. Let H(ω) and G(ω) be the Fourier transforms of the impulse responses h(t) and g(t) of the low and high pass filters, respectively. Here ω is the frequency measured in radians per second. For an ideal low pass filter, |H(ω)| is represented by a constant value for ω ∈ (−a, a), and zero otherwise. Here we take a = π/2 for Figure 8.11(a). Thus, the low pass filter allows only low frequencies limited to (−π/2, π/2). On the other hand (for the total interval of (−π, π)), the high pass filter of Figure 8.11(b) allows the high frequencies (−π, −π/2) ∪ (π/2, π). In our illustration we will consider a half low pass filter with (0, π/2), and a half high pass filter with (π/2, π). Hence, the


    8.5 A BRIEF LOOK AT IMPLEMENTING SIGNAL DECOMPOSITION WITH FILTERS 263

combination of the above low and high pass filters in parallel will allow all frequencies ω, 0 < ω < π, to pass.

Fig. 8.11: (a) A low pass filter |H(ω)|, (b) A high pass filter |G(ω)|.

The use of such filters is an implementation of the scaling functions and wavelets computations for the signal samples (or coefficients). For example, considering V_1 = V_0 ⊕ W_0, in V_1 we have a refined version of the signal at the scale λ_1 = 1/2, which means that there we have all frequencies 0 < ω < π. The decomposition V_0 ⊕ W_0, on the other hand, corresponds to the combination of the low frequencies 0 < ω < π/2, allowed by the low pass filter for an approximation of the signal, and the remaining high frequencies π/2 < ω < π allowed by the high pass filter for the added refinement. This decomposition is illustrated in Figure 8.12 for 16 samples input to the initial (single) low pass filter at the scale λ_1 = 1/2.

Fig. 8.12: A simple low pass filter outputting 16 coefficients to a parallel low and high pass filter pair, representing the decomposition V_1 = V_0 ⊕ W_0.


    264 Chapter 8 MULTIRESOLUTION ANALYSIS AND FILTER BANKS

In the usual computations the refined (high frequency) output coefficients of the high pass filter are down sampled, then stored, resulting in the eight coefficients of W_0, in this case. The sixteen coefficients output of the low pass filter are also down sampled to eight, representing the coefficients of V_0. These latter coefficients, output of the low pass filter, are input again to another parallel pair of low and high pass filters, with half the previous frequency bands, i.e., 0 < ω < π/4 and π/4 < ω < π/2, respectively. This corresponds to the next (larger) scale following λ_0 = 1, that is λ_{−1} = 2, as shown in Figure 8.13.

[Diagram: the 16 input samples pass through the cascade V_1 (λ_1 = 1/2) → V_0, W_0 (λ_0 = 1) → V_{−1}, W_{−1} (λ_{−1} = 2) → V_{−2}, W_{−2} (λ_{−2} = 4) → V_{−3}, W_{−3} (λ_{−3} = 8), each low/high pass pair followed by down sampling, with the coefficient counts halving at each stage: 16, 8, 4, 2, 1.]

Fig. 8.13: Four parallel low and high pass filter pairs constituting the filter banks for the decomposition V_1 = V_{−3} ⊕ W_{−3} ⊕ W_{−2} ⊕ W_{−1} ⊕ W_0.
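The coefficient counts in Fig. 8.13 (16 → 8 → 4 → 2 → 1) can be traced with the simplest filter pair, the Haar one, where the low pass output is the normalized pairwise average and the high pass output the normalized pairwise difference. The sketch below uses made-up sample values purely for illustration:

```python
import math

def haar_step(a):
    # one low/high pass pair with down sampling: len(a) -> len(a)//2 each
    r2 = math.sqrt(2.0)
    low = [(a[2 * i] + a[2 * i + 1]) / r2 for i in range(len(a) // 2)]
    high = [(a[2 * i] - a[2 * i + 1]) / r2 for i in range(len(a) // 2)]
    return low, high

samples = [float(i % 5) for i in range(16)]   # 16 arbitrary input samples
details = []
approx = samples
while len(approx) > 1:                        # four stages for 16 samples
    approx, d = haar_step(approx)
    details.append(d)

counts = [len(d) for d in details]            # [8, 4, 2, 1], as in Fig. 8.13
energy_in = sum(x * x for x in samples)
energy_out = sum(x * x for x in approx) + sum(x * x for d in details for x in d)
```

Since each Haar step is orthonormal, the total energy of the 16 samples is preserved across the 8 + 4 + 2 + 1 detail coefficients plus the single final approximation coefficient.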


    9.2 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT) AND ITS INVERSE (IFDWT) 289

[Plots of |L(ω)| and |H(ω)| for −3π ≤ ω ≤ 3π.]

Fig. 9.1: Low and high pass filters system functions.

On the band (−2π, 2π) the two filters complement each other, since together they pass all frequencies in that band. Let us concentrate on the right side of this band, i.e., (0, 2π). The low pass filter is concerned with the low frequencies (large scale) on (0, π), while the high pass filter passes the higher frequencies (smaller scale refinements) output on its band (π, 2π). The output of the low pass filter that we will follow for further decompositions will be with lower frequency, i.e., larger scale. This means that the decomposition process moves from high frequencies (smaller scale) to low frequencies (larger scale) for the outputs of the low pass filters.

    9.2 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT) AND ITS

    INVERSE (IFDWT)

For our illustration of the Fast Wavelet Transform we concentrate in this Chapter on using the Daubechies scaling function φ_{D2}, with its four coefficients

h_0 = (1 + √3)/(4√2), h_1 = (3 + √3)/(4√2), h_2 = (3 − √3)/(4√2), h_3 = (1 − √3)/(4√2),

and its associated wavelet ψ_{D2}, with its coefficients

ĥ_0 = h_3 = (1 − √3)/(4√2), ĥ_1 = −h_2 = −(3 − √3)/(4√2),
ĥ_2 = h_1 = (3 + √3)/(4√2), ĥ_3 = −h_0 = −(1 + √3)/(4√2).

The latter coefficients are used to construct ψ_{D2} in terms of its associated scaling functions φ_{D2}. Note that, in order not to carry the √2 in the denominator of the expression of the above eight coefficients, a number of authors use

p_0 = √2 h_0 = (1 + √3)/4, p_1 = √2 h_1 = (3 + √3)/4,


    290 Chapter 9 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT)

p_2 = √2 h_2 = (3 − √3)/4, p_3 = √2 h_3 = (1 − √3)/4.

What we have been using as h_0, h_1, h_2, h_3 correspond to the normalized scaling functions {√2 φ(2t − k)} in the scaling equation,

φ(t) = Σ_k h_k √2 φ(2t − k), (3.2)

while the p_0, p_1, p_2, and p_3 correspond to the use of the non-normalized scaling functions {φ(2t − k)} in the same scaling equation,

φ(t) = Σ_k h_k √2 φ(2t − k) = Σ_k p_k φ(2t − k). (3.2a)

So, in order not to carry the √2 in the computations for the fast Daubechies wavelet transform, we will adopt the latter notation of {p_k}, but for the time being we call them {h_k}, remembering that the φ(2t − k) used are not normalized. We stay with the use of {h_k} because it is customary that these symbols are associated with the structure of the filters, such as {h_0, h_1, h_2, h_3} and {h_3, −h_2, h_1, −h_0}, for the Daubechies 2 low and high pass filters, respectively.

Our discussion in this section concentrates on the basic development of the fast Daubechies wavelet transform and its inverse. The illustrations with numerical computations are the subject of Section 9.2.2. Our aim is to transform the coefficient output {a_{k,0}} of the first low pass filter, at the scale λ_0 = 1, to those {a_{k,−1}, b_{k,−1}} of the next low and high pass filters outputs, at the larger scale λ_{−1} = 2. This is done by showing the decomposition transformation in this example of the scaling functions {φ(t − k)} to {φ(t/2 − k), ψ(t/2 − k)}, as shown in equation (9.24). Then we will associate the scaling function and wavelet coefficients a_{k,0}, a_{k,−1}, and b_{k,−1} with φ(t − k), φ(t/2 − k), and ψ(t/2 − k), respectively. We add that we have 2^{n+1} coefficients {a_{k,0}}, and the total of the {a_{k,−1}, b_{k,−1}} coefficients, after downsampling, will also be 2^{n+1}.

Let us remember that for the averaging process for 2^n samples in (8.51),

a_{k,0} = Σ_{i=0}^{2^n − 1} f(i) φ(i − k), (8.51(a))

we needed to extend the samples to have them match the translated given values of the scaling functions, as demonstrated in Examples 8.6 and 8.7. We will assume here a periodic extension. So, if we start with four samples {f_0, f_1, f_2, f_3}, for example, we have a period of 4, so that f_4 = f_0, f_5 = f_1, f_6 = f_2, and f_7 = f_3. (This is different from the periodic extension variations of Section 8.6, which were done to reduce the edge effect.) We will see that going from {a_{k,1}} to {a_{k,0}, b_{k,0}} amounts to a simple matrix equation, whose input is the column of {a_{k,1}} and the output is the column with alternate elements in the sequence {a_{k,0}, b_{k,0}}, as will be shown in (9.33). Furthermore, the square matrix


    9.2 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT) AND ITS INVERSE (IFDWT) 291

Φ of the transformation is somewhat sparse. Also, because of the following properties of the above four Daubechies scaling coefficients,

h_0² + h_1² + h_2² + h_3² = 2, (9.7)
h_2 h_0 + h_3 h_1 = 0, (9.8)

and the sparsity of Φ, the inverse matrix Φ^{−1} is related to Φ in a very simple way, namely, as a constant multiple of its transpose, Φ^{−1} = (1/2)Φᵀ. This inverse matrix will, of course, be needed for the reconstruction process of the signal as we return from {a_{k,0}, b_{k,0}} to {a_{k,1}}.

Note that had we used the coefficients with normalized scaling functions in their scaling equation, the sum in (9.7) would have become 1 because of the 1/√2 in the denominator of our usual expressions for these coefficients.

To prepare for our illustration we will show next that ΦΦᵀ = 2I. Our illustration here will start with an 8 × 8 decomposition matrix (of the eight scaling coefficients) at the scale λ_2 = 1/2² = 1/4, with period 2 · 2² = 8. What remains is to show that ΦΦᵀ = 2I in the following.

The Decomposition Matrix Φ and its Inverse Φ^{−1} for the Reconstruction, Φ^{−1} = (1/2)Φᵀ

We will show soon in (9.34), using equations (9.7)-(9.8), that the transformation matrix Φ of the scaling functions basis, associated with the scale λ_2 = 1/4, to those of the scaling functions-wavelets, associated with the scale λ_1 = 1/2, is

Φ =
[ h0   h1   h2   h3    0    0    0    0 ]
[ h3  −h2   h1  −h0    0    0    0    0 ]
[  0    0   h0   h1   h2   h3    0    0 ]
[  0    0   h3  −h2   h1  −h0    0    0 ]
[  0    0    0    0   h0   h1   h2   h3 ]
[  0    0    0    0   h3  −h2   h1  −h0 ]
[ h2   h3    0    0    0    0   h0   h1 ]
[ h1  −h0    0    0    0    0   h3  −h2 ] . (9.9)

This will be used in equation (9.24) for transforming a scaling function sequence of period 8 at the scale λ_0 = 1 to those of the scaling function-wavelet sequence at the


    292 Chapter 9 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT)

scale λ_{−1} = 2, with period 4 for each. The transpose of this matrix is

Φᵀ =
[ h0   h3    0    0    0    0   h2   h1 ]
[ h1  −h2    0    0    0    0   h3  −h0 ]
[ h2   h1   h0   h3    0    0    0    0 ]
[ h3  −h0   h1  −h2    0    0    0    0 ]
[  0    0   h2   h1   h0   h3    0    0 ]
[  0    0   h3  −h0   h1  −h2    0    0 ]
[  0    0    0    0   h2   h1   h0   h3 ]
[  0    0    0    0   h3  −h0   h1  −h2 ] . (9.10)

Now, we will show that Φ^{−1} = (1/2)Φᵀ, i.e., that ΦΦᵀ = 2I. In the product ΦΦᵀ, every diagonal entry equals

h_0² + h_1² + h_2² + h_3² = 2,

by (9.7), while every off-diagonal entry reduces to one of

h_0h_3 − h_1h_2 + h_1h_2 − h_0h_3 = 0, h_0h_1 − h_0h_1 = 0, h_3h_2 − h_3h_2 = 0 (identically),
h_0h_2 + h_1h_3 = 0 (by (9.8)).

Hence all off-diagonal entries vanish, and ΦΦᵀ = 2I.


    9.2 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT) AND ITS INVERSE (IFDWT) 301

at the scale λ_2 and

d_1 = [a_{0,1}, b_{0,1}, a_{1,1}, b_{1,1}, . . . , a_{3,1}, b_{3,1}]ᵀ

at the scale λ_1, we use the matrix Φᵀ to go from d_1 to a_2 = 2Φ^{−1} d_1 = Φᵀ d_1,

a_2 = [a_{0,2}, a_{1,2}, a_{2,2}, a_{3,2}, a_{4,2}, a_{5,2}, a_{6,2}, a_{7,2}]ᵀ = Φᵀ d_1 = Φᵀ [a_{0,1}, b_{0,1}, a_{1,1}, b_{1,1}, a_{2,1}, b_{2,1}, a_{3,1}, b_{3,1}]ᵀ, (9.42)

with Φᵀ as displayed in (9.10).

In the following example, we will illustrate the decomposition process in part (a), the approximated function for the different decompositions in part (b), and the reconstruction of the original function in part (c).

    Example 9.1 Decomposing a Signal with Daubechies 2 Filters

    (a) The Decomposition (Analysis)

In this example we start by illustrating d_1 = (1/2)Φ a_2 of Equation (9.34). We are given the sequence s = {0, 1, 2, 3}. For the necessary extension to period 8 (corresponding to n = 2 in 8 = 2(2^n) for the index n in a_n), we choose to do it with a smooth periodic extension (with matching slopes at the two ends) to avoid the edge effect [see Nievergelt, 1]. There, for s_5 and s_6 we took the easy way of interpolating with a straight line to have the extended sequence as {0, 1, 2, 3, 4, 7/3, 2/3, −1}. Had we done the interpolation with splines, we would have obtained the extended sequence as {0, 1, 2, 3, 4, 2, 1, −1}. This is the one used for this example. It is instead of the {0, 1, 2, 3, 2, 1, 0} of the mirror image extension of Example 8.7 that showed the edge effect.

We first prepare the coefficients of a_2 = [a_{0,2}, a_{1,2}, a_{2,2}, a_{3,2}, a_{4,2}, a_{5,2}, a_{6,2}, a_{7,2}]ᵀ for (9.34). Here we easily compute a_{0,2} = (3 − √3)/2, a_{1,2} = (5 − √3)/2, and a_{7,2} = (1 − √3)/2. Also a_{2,2} = (7 − √3)/2 and a_{6,2} = −(1 + √3)/2. For example,

a_{0,2} = s_0 φ(0) + s_1 φ(1) + s_2 φ(2) + s_3 φ(3)


    302 Chapter 9 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT)

= 0 + (1)(1 + √3)/2 + (2)(1 − √3)/2 + 0 = (3 − √3)/2.

Our illustration is for computing a_{3,2}, a_{4,2}, and a_{5,2}:

a_{3,2} = s_3 φ(0) + s_4 φ(1) + s_5 φ(2) + s_6 φ(3)
        = (3)(0) + (4)(1 + √3)/2 + (2)(1 − √3)/2 + (1)(0) = 3 + √3,

a_{4,2} = s_4 φ(0) + s_5 φ(1) + s_6 φ(2) + s_7 φ(3)
        = (4)(0) + (2)(1 + √3)/2 + (1)(1 − √3)/2 + (−1)(0) = (3 + √3)/2,

a_{5,2} = s_5 φ(0) + s_6 φ(1) + s_7 φ(2) + s_8 φ(3)
        = (2)(0) + (1)(1 + √3)/2 + (−1)(1 − √3)/2 + (0)(0) = √3.

So,

a_2 = [ (3 − √3)/2, (5 − √3)/2, (7 − √3)/2, 3 + √3, (3 + √3)/2, √3, −(1 + √3)/2, (1 − √3)/2 ]ᵀ. (E.1)

These are the coefficients for the signal decomposition as f_2(t) ∈ V_2. We are using the Daubechies 2 filters of h_0 = (1 + √3)/4, h_1 = (3 + √3)/4, h_2 = (3 − √3)/4, and h_3 = (1 − √3)/4. Now we will have the decomposition of f_2(t) as f_2(t) ∈ V_2 = V_1 ⊕ W_1 in d_1 = (1/2)Φ a_2, where Φ is as in (9.34). Therefore, we have

a_{0,1} = (1/2)[h_0 a_{0,2} + h_1 a_{1,2} + h_2 a_{2,2} + h_3 a_{3,2}]
        = ((1 + √3)/8)((3 − √3)/2) + ((3 + √3)/8)((5 − √3)/2) + ((3 − √3)/8)((7 − √3)/2) + ((1 − √3)/8)(3 + √3),

a_{0,1} = (18 − 5√3)/8,

    b0,1 = (1/√2)[h3 a0,2 − h2 a1,2 + h1 a2,2 − h0 a3,2]
         = (1 − √3)/8 · (3 − √3)/2 − (3 − √3)/8 · (5 − √3)/2 + (3 + √3)/8 · (7 − √3)/2 − (1 + √3)/8 · (3 + √3),

    b0,1 = −3/8,

    a1,1 = (1/√2)[h0 a2,2 + h1 a3,2 + h2 a4,2 + h3 a5,2]
         = (1 + √3)/8 · (7 − √3)/2 + (3 + √3)/8 · (3 + √3) + (3 − √3)/8 · (3 + √3)/2 + (1 − √3)/8 · √3,

    a1,1 = (7 + 5√3)/4,

    9.2 THE FAST DAUBECHIES WAVELET TRANSFORM (FDWT) AND ITS INVERSE (IFDWT)

    b1,1 = (1/√2)[h3 a2,2 − h2 a3,2 + h1 a4,2 − h0 a5,2]
         = (1 − √3)/8 · (7 − √3)/2 − (3 − √3)/8 · (3 + √3) + (3 + √3)/8 · (3 + √3)/2 − (1 + √3)/8 · √3,

    b1,1 = (1 − √3)/4.

    Following the same computations, we find

    a2,1 = (8 + 3√3)/8, b2,1 = (1 − 6√3)/8, a3,1 = 1 − √3, b3,1 = 0.

    Hence, we have

    d1 = [a0,1, b0,1, a1,1, b1,1, a2,1, b2,1, a3,1, b3,1]^T
       = [(18 − 5√3)/8, −3/8, (7 + 5√3)/4, (1 − √3)/4, (8 + 3√3)/8, (1 − 6√3)/8, 1 − √3, 0]^T.  (E.2)

    From this d1 we obtain

    a1 = [(18 − 5√3)/8, (7 + 5√3)/4, (8 + 3√3)/8, 1 − √3]  (E.3)

    and

    b1 = [−3/8, (1 − √3)/4, (1 − 6√3)/8, 0].  (E.4)

    The coefficients a1 and b1 are considered to be the outputs of the first pair of the

    Daubechies 2 scaling function and wavelet filters, respectively. On the other hand, a2was the output of the (first) Daubechies scaling function filter.
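    The stage just computed can be cross-checked numerically. The sketch below assumes the filter normalization hk = (1 ± √3)/(4√2), (3 ± √3)/(4√2) used above and the periodic wrap-around of the rows of Φ in (9.34); the helper name fdwt_stage is ours, not the book's:

```python
import numpy as np

s2, s3 = np.sqrt(2), np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * s2)   # Daubechies 2 filter

# a2 from (E.1)
a2 = np.array([(3 - s3)/2, (5 - s3)/2, (7 - s3)/2, 3 + s3,
               (3 + s3)/2, s3, -(1 + s3)/2, (1 - s3)/2])

def fdwt_stage(a, h):
    """One FDWT stage: interleaved scaling/wavelet outputs, periodic wrap-around."""
    n = len(a)
    g = np.array([h[3], -h[2], h[1], -h[0]])      # wavelet filter built from h
    d = np.zeros(n)
    for k in range(n // 2):
        idx = np.arange(2 * k, 2 * k + 4) % n     # periodic extension of indices
        d[2 * k] = np.dot(h, a[idx]) / np.sqrt(2)       # a_{k,1}
        d[2 * k + 1] = np.dot(g, a[idx]) / np.sqrt(2)   # b_{k,1}
    return d

d1 = fdwt_stage(a2, h)
print(np.round(d1, 4))   # matches (E.2): (18-5√3)/8, -3/8, (7+5√3)/4, ...
```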

    Now, we will decompose a1 using (9.33), moving toward the decomposition in V1 = V0 ⊕ W0,

    d0 = [a0,0, b0,0, a1,0, b1,0]^T = (1/√2) Φ a1, with

    Φ = [ h0   h1   h2   h3
          h3  −h2   h1  −h0
          h2   h3   h0   h1
          h1  −h0   h3  −h2 ],   a1 = [(18 − 5√3)/8, (7 + 5√3)/4, (8 + 3√3)/8, 1 − √3]^T,  (E.5)

    where Φ is as in (9.33). Hence,

    a0,0 = (1 + √3)/8 · (18 − 5√3)/8 + (3 + √3)/8 · (7 + 5√3)/4 + ⋯

    Chapter 10 SEARCHING FOR THE SCALING EQUATION COEFFICIENTS

    10.1 BASIC GUIDING THEOREMS

    10.1.1 The Scaling Coefficients Representation as an Integral

    Here, we will use the above orthonormality property of the scaling functions to derive the form of the scaling coefficients as an integral. This parallels the method we employed in deriving the Fourier coefficients {a0, ak, bk}, k = 1, 2, …, of the Fourier series in Example 2.3, where we used the orthogonality property of the harmonic functions {1, cos kx, sin kx} on (−π, π). We also used the fact that the function f(x), −π < x < π, being represented by the Fourier series, is square integrable on (−π, π). This allowed us to integrate the (infinite) series term by term or, in other words, to exchange the integration operation with the infinite summation of the series. Such an exchange is based on the series converging in the mean to the square integrable function f(x) ∈ L²(−π, π).

    In our analysis here, we assume that we are working with square integrable functions on (−∞, ∞), i.e., f(x) ∈ L²(−∞, ∞).

    Consider the scaling equation

    φ(x) = Σk hk √2 φ(2x − k).  (10.1)

    To find hl, for example, we multiply both sides of this equation by √2 φ(2x − l), then integrate from x = −∞ to ∞,

    ∫ φ(x) √2 φ(2x − l) dx = ∫ √2 φ(2x − l) Σk hk √2 φ(2x − k) dx
     = Σk hk ∫ √2 φ(2x − l) √2 φ(2x − k) dx
     = Σk hk δkl = hl,

    after using the orthonormality of the set {√2 φ(2x − k)} in the integral inside the sum. Hence,

    hk = ∫ φ(x) √2 φ(2x − k) dx.  (10.3)

    Note that in the case of the Fourier coefficients we integrate the product of the given function and one of the known harmonic functions. Here, hk is expressed as the integral of the product of the unknown (sought) scaling function φ(x) and its scaled and translated version √2 φ(2x − k). However, we will find a good use for this formula of hk.

    The Length of the Scaling Coefficients Sequence

    Indeed, the above form of hk is very helpful in telling us how the number of non-vanishing scaling coefficients depends on the compact support of φ(x). This is so since the integrand in (10.3) is φ(x) √2 φ(2(x − k/2)), and the second factor, with its scale of 1/2 and translation by k/2, will cease to have a non-zero overlap with φ(x) after k/2 exceeds the width of the compact support of φ(x). For the Haar scaling function, for example, with its compact support [0, 1) of width one, only φ(2(x − k/2)) with k = 0 and k = 1 will have non-zero overlap with φ(x), resulting in the possible two non-zero coefficients h0, h1.

    For the Daubechies 2 scaling function, the support is [0, 3). So, there is no overlap to possibly constitute non-vanishing coefficients, except for k = 0, 1, 2, 3, 4, 5. Thus, we expect at most six non-vanishing coefficients, and we know that there are four. The reason must be that the integral of the overlaps corresponding to the other two of the translations is zero.

    Such a simple explanation for the length of the sequence of the scaling coefficients {hn} can be, with more details and care, developed in the following Theorem 10.2. Let us call hn = h(n) a function of n ∈ Z; the theorem will tell us about the strong relation between the length N of the non-zero members of the sequence {h(n)}, n = 0, 1, …, N − 1, and the compact support of the associated scaling function.

    Theorem 10.2
    "If the scaling function φ(x) has compact support on 0 ≤ x ≤ N − 1, and if {φ(x − k)} are linearly independent, then hn = h(n) = 0 for n < 0 and n > N − 1. Hence, N is the length of the sequence."

    Again, in our example of the Daubechies 2 scaling function with compact support [0, 3) (or [0, 4 − 1)), the translates are definitely linearly independent (as these scaling functions are orthogonal there). Hence, h(n) = 0 for n < 0 and n > 3, which gives the four non-vanishing coefficients of D2.

    This theorem covers more than the orthogonal scaling functions of the Daubechies type, for example. It is more relaxed, as it assumes only linear independence of the compactly supported scaling functions. Hence, it is valid for the roof scaling function too, for example, whose translates are linearly independent with compact support [0, 2). So, N = 3 and we have three non-vanishing scaling coefficients, namely, h0 = 1/(2√2), h1 = 1/√2, h2 = 1/(2√2).

    10.1.2 The Coefficients for Constructing the Basic Wavelet in Terms of Its Associated Scaling Functions

    Similar to what we did in finding the above formula (10.3), the integral representation of the scaling coefficients {hn} of the (orthonormal) scaling functions {√2 φ(2x − k)}, we can do the same for the coefficients {ck} used in constructing the basic wavelet

    10.3 EXACT METHOD FOR DETERMINING THE DAUBECHIES SCALING COEFFICIENTS

    compact support to (0, 2) denies it the orthogonality property on (−∞, ∞) with respect to its translations by integers. So, we repeat once more that our attempt in this direction to seek continuous orthogonal scaling functions has failed.

    10.3.1 Determining the Daubechies 2 Scaling Coefficients

    We should note at this stage that moving from the Haar scaling function polynomial

    P(e^{iω/2}) = P1(z) = (1/2)[1 + e^{iω/2}]  (10.39)

    to the new one,

    p(ω) = P(e^{iω}) = (1/2)[1 + e^{iω}] = e^{iω/2} · (1/2)[e^{−iω/2} + e^{iω/2}] = e^{iω/2} cos(ω/2),  (10.38)

    results in scaling down by a factor of 2 in the ω-frequency space, since we went from P(e^{iω/2}) to P(e^{iω}) with the smaller scale of 1/2. This, according to the Fourier transform pair, corresponds to scaling up by a factor of 2 in the time space. Thus, p(ω) corresponds to a Haar scaling function with the larger compact support [0, 2). Note that this may be in the direction of our development, since according to Theorem 10.1 a larger compact support increases the number of non-vanishing coefficients.

    10.3.2 What To Do Next? The Creative Step

    Now, the other possibility for aiming at a continuous scaling function, possibly differentiable, is to consider the polynomial

    q(z) = p²(ω) = (1/4)(1 + e^{iω})² = e^{iω} cos²(ω/2), z = e^{iω}.  (10.44)

    Unfortunately, this polynomial does not satisfy Condition 2 of Theorem 10.4:

    |q(z)|² + |q(−z)|² = |e^{iω} cos²(ω/2)|² + |−e^{iω} sin²(ω/2)|²
     = cos⁴(ω/2) + sin⁴(ω/2) ≠ 1,

    after writing

    q(−z) = (1/4)(1 − e^{iω})² = −e^{iω} sin²(ω/2).

    Condition 2 is satisfied only by p(ω) itself in (10.38), as we showed in Example 10.2. So, there is no point in going further to consider pⁿ(ω), n > 1.

    We see from this and the above discussion that the crux of the matter, in this part of the attempt at satisfying Condition 2 of Theorem 10.4, is the simple identity

    cos²(ω/2) + sin²(ω/2) = 1,

    which will ensure Condition 2. At this stage comes the promised crucial step, which is to stay with this identity and try higher integer powers of it instead of higher integer


    powers of p(ω) in (10.38). For example, we start with cubing both sides of this identity, obtaining

    [cos²(ω/2) + sin²(ω/2)]³ = 1³
     = cos⁶(ω/2) + 3 cos⁴(ω/2) sin²(ω/2) + 3 cos²(ω/2) sin⁴(ω/2) + sin⁶(ω/2)
     = [cos⁶(ω/2) + 3 cos⁴(ω/2) sin²(ω/2)] + [3 sin²((ω + π)/2) cos⁴((ω + π)/2) + cos⁶((ω + π)/2)],  (10.45)

    after using cos(ω/2) = sin((ω + π)/2) and sin(ω/2) = −cos((ω + π)/2).

    The grouping of the two parts in (10.45) is done in preparation for the first two terms (in brackets) being a nominee for a polynomial Q(ω) = |p(ω)|², where p(ω) is the sought polynomial P(ω) to be tested with the conditions of Theorem 10.4,

    |p(ω)|² = Q(ω) = cos⁶(ω/2) + 3 cos⁴(ω/2) sin²(ω/2).  (10.46)

    The second grouping in (10.45) makes the polynomial Q(ω + π) = |p(ω + π)|²,

    |p(ω + π)|² = Q(ω + π) = 3 sin²((ω + π)/2) cos⁴((ω + π)/2) + cos⁶((ω + π)/2).  (10.47)

    This ensures that Condition 2 of Theorem 10.4 is satisfied in advance, since

    |p(ω)|² + |p(ω + π)|² = Q(ω) + Q(ω + π) = 1.  (10.48)

    Condition 3 of Theorem 10.4 requires that |p(ω)| > 0 for |ω| ≤ π/2, which is satisfied here, as we shall show next. From (10.46) we have

    |p(ω)|² = cos⁴(ω/2)[cos²(ω/2) + 3 sin²(ω/2)],  (10.49)

    where cos(ω/2) ≥ 1/√2 for |ω| ≤ π/2, and the sum of the two terms in brackets is positive.

    Hence, |p(ω)| > 0 for |ω| ≤ π/2. We note that the above choice of p(ω) = P(e^{iω}) in (10.38) allows the latter to be P(e^{−iω}), since if we change i to −i in Q(ω) = |p(ω)|² of (10.46) we obtain the same result. Hence, we can speak of p(ω) as a function of e^{−iω}, where Condition 3 of Theorem 10.4 is satisfied for P(e^{−iω}).

    What remains is Condition 1 of Theorem 10.4, which requires p(0) = 1 for the above new polynomial p(ω). However, we do not yet have p(ω), since in (10.46) we only defined its absolute value |p(ω)|. We need to find p(ω) from |p(ω)|, and at the end of this computation we will show that p(0) = 1.

    Note that we can write a complex number in its polar form as

    z = x + iy = re^{iθ}, r = √(x² + y²) = |z|.


    So, in |z| = |re^{iθ}| = r, we lose the phase factor e^{iθ}, which is what we must recover for p(ω) from having |p(ω)| in (10.46). We will for now allow such a phase factor ν(ω), writing p(ω) = |p(ω)| ν(ω), with ν(ω) to be determined in the sequel in such a way that serves our purpose of determining the scaling coefficients.

    Now, by factorizing the sum in (10.49) and realizing that |x − iy| = |x + iy| = √(x² + y²), we can write |cos(ω/2) − i√3 sin(ω/2)| = |cos(ω/2) + i√3 sin(ω/2)|, so we have

    |p(ω)|² = cos⁴(ω/2)[cos²(ω/2) + 3 sin²(ω/2)]
     = cos⁴(ω/2) (cos(ω/2) + i√3 sin(ω/2))(cos(ω/2) − i√3 sin(ω/2))
     = cos⁴(ω/2) |cos(ω/2) + i√3 sin(ω/2)|².  (10.50)

    By taking the square root of this equation we choose

    p(ω) = cos²(ω/2) |cos(ω/2) + i√3 sin(ω/2)| ν(ω),  (10.51)

    where ν(ω) is the phase factor; since the phase of cos(ω/2) + i√3 sin(ω/2) can be absorbed in ν(ω), we may replace the modulus |cos(ω/2) + i√3 sin(ω/2)| by [cos(ω/2) + i√3 sin(ω/2)] itself.

    For the present case of cubing cos²(ω/2) + sin²(ω/2) = 1 in (10.45), in the search for the scaling coefficients of the Daubechies 2 wavelet, the phase factor ν(ω) in (10.51) is chosen to make p(ω) a polynomial P3(z) of degree 3. This is to aim at the four coefficients of the polynomial P3(z) in (6.52).

    We are after the four coefficients of p(ω) = P3(z) = (1/2)[a0 + a1 z + a2 z² + a3 z³] = (1/√2)[h0 + h1 z + h2 z² + h3 z³], z = e^{−iω}. So, we write the trigonometric functions in (10.51) in terms of complex exponentials. After this, the phase factor ν(ω) is chosen such that the first term in our result is of degree zero in z,

    p(ω) = [(e^{iω/2} + e^{−iω/2})/2]² [(e^{iω/2} + e^{−iω/2})/2 + i√3 (e^{iω/2} − e^{−iω/2})/(2i)] ν(ω)  (10.52)

    p(ω) = (1/8)[e^{iω} + e^{−iω} + 2][e^{iω/2} + e^{−iω/2} + √3 (e^{iω/2} − e^{−iω/2})] ν(ω)
     = (1/8)[{e^{i3ω/2} + e^{iω/2} + √3 (e^{i3ω/2} − e^{iω/2})} + {e^{−iω/2} + e^{−i3ω/2} + √3 (e^{−iω/2} − e^{−i3ω/2})} + 2{e^{iω/2} + e^{−iω/2} + √3 (e^{iω/2} − e^{−iω/2})}] ν(ω)
     = (1/8)[(e^{i3ω/2} + √3 e^{i3ω/2}) + (e^{iω/2} + 2e^{iω/2} − √3 e^{iω/2} + 2√3 e^{iω/2}) + (e^{−iω/2} + √3 e^{−iω/2} + 2e^{−iω/2} − 2√3 e^{−iω/2}) + (e^{−i3ω/2} − √3 e^{−i3ω/2})] ν(ω),

    p(ω) = (1/8)[(1 + √3) e^{i3ω/2} + (3 + √3) e^{iω/2} + (3 − √3) e^{−iω/2} + (1 − √3) e^{−i3ω/2}] ν(ω)  (10.53)


    after grouping the similar terms involving e^{i3ω/2}, e^{iω/2}, e^{−iω/2}, and e^{−i3ω/2}.

    Now, to have the first term be of degree zero in z = e^{−iω}, we choose the phase factor ν(ω) = e^{−i3ω/2}:

    p(ω) = (1/8)[(1 + √3) + (3 + √3) e^{−iω} + (3 − √3) e^{−2iω} + (1 − √3) e^{−3iω}]
     = (1/2)[(1 + √3)/4 + ((3 + √3)/4) e^{−iω} + ((3 − √3)/4) e^{−2iω} + ((1 − √3)/4) e^{−3iω}],

    p(ω) = P3(z) = (1/2)[(1 + √3)/4 + ((3 + √3)/4) z + ((3 − √3)/4) z² + ((1 − √3)/4) z³], z = e^{−iω}.  (10.54)

    From this result we have Condition 1, p(0) = P3(1) = 1, since z = e^{−iω}|ω=0 = 1.

    The last step is to equate coefficients of the same powers of z in the above polynomial and in P3(z) of (6.52), noting that we went to P(e^{iω}) = p(ω) in (10.38) instead of P(e^{iω/2}) in (10.39), as explained there around (10.39) and (10.38):

    P3(z) = (1/√2)[h0 + h1 z + h2 z² + h3 z³], z = e^{−iω},

    to have the Daubechies 2 scaling coefficients

    h0 = (1 + √3)/(4√2), h1 = (3 + √3)/(4√2), h2 = (3 − √3)/(4√2), and h3 = (1 − √3)/(4√2).  (10.55)

    10.3.3 Toward Determining the Daubechies N (or DN) Scaling Coefficients

    For the above case of Daubechies N = 2, we raised the identity cos²(ω/2) + sin²(ω/2) = 1 to the power 2N − 1 = 2(2) − 1 = 3 for the 2N = 2(2) = 4 non-zero coefficients. It is tempting to generalize the above method for finding the scaling coefficients of Daubechies 3, 4, etc., which happened to be the course to follow, as was done by Daubechies. But, before we do that in Section 10.5, let us note that for the Haar (Daubechies 1) scaling function we had a polynomial P1(z) of degree 1, where the Haar scaling function is discontinuous. For the above Daubechies 2 we have a polynomial P3(z) of degree 3, and we know that the scaling function φ2(t) is continuous.

    We must sense from this observation that the higher degree polynomial has something to do with the quality of the scaling function. Indeed, there is another observation: such a polynomial P_{2N−1}(z), resulting from raising cos²(ω/2) + sin²(ω/2) = 1 to the power 2N − 1 for Daubechies N (with 2N non-zero coefficients), factorizes as

    P_{2N−1}(z) = (1 + z)^N Q_{N−1}(z),  (10.56)

    where Q_{N−1}(z) is another polynomial of degree N − 1, and Q_{N−1}(−1) ≠ 0.

    It is easy to verify (10.56) for the Haar scaling function, with N = 1, where P1(z) = (1/√2) h0 + (1/√2) h1 z = (1/√2)(1/√2) + (1/√2)(1/√2) z = (1 + z)(1/2), and Q_{N−1}(z) = Q0(z) is of

    11 Other Wavelet Topics

    In this chapter we will primarily discuss and illustrate the two-dimensional Haar wavelet

    transform and an elementary method for its inverse. Other topics presented include some

    other important wavelets and the biorthogonal wavelets.

    11.1 THE TWO-DIMENSIONAL HAAR WAVELET TRANSFORM

    In this book we do not have the space to present the general two-dimensional wavelet transforms, which are of utmost importance for analyzing and constructing images. However, with the simple interpretation of the fast Haar wavelet transform in Example 9.2 as averaging and differencing with the scaling function and wavelet filters, respectively, we are encouraged to give some ideas about the first two basic steps of the fast two-dimensional Haar wavelet transform. This will enable us to retrace our steps for a method of finding the inverse of this transform.

    We will illustrate the first major step of averaging and differencing in the x-direction, followed by the same two operations in the y-direction. The interested reader may also see Nievergelt [1].

    As shown in Example 9.2, the Haar wavelet transform does simple averaging and differencing on each pair of a one-dimensional sequence, for example, S = {s0, s1, s2, s3}. So, now we have an idea about a two-dimensional Haar wavelet transform of a two-dimensional sequence, such as the four points in space: z0 = f(0, 0) = 9, z1 = f(0, 1/2) = 7, z2 = f(1/2, 0) = 5, and z3 = f(1/2, 1/2) = 3. Even though this is a 2 × 2 array, it will give some feeling about what we may expect in dealing with two-dimensional images. Here, we can take the values 9, 7, 5, and 3 as a measure of the intensity in the grey scale of an image.

    These samples are easily interpreted by using a two-dimensional Haar scaling function as a parallelepiped, for example, whose base has an area of (1/2)(1/2) = 1/4 when using scale 2^{−1} = 1/2 in both the x and y directions. With scale 2^0 = 1, the double Haar scaling function becomes

    φ^(0)_(0,0)(x, y) = { 1, 0 ≤ x < 1 and 0 ≤ y < 1; 0, otherwise }  (11.1)

    as shown in Figure 11.1.

    Fig. 11.1: The Haar wavelet φ^(0)_(0,0)(x, y) (scaling function in two dimensions).

    The expression in (11.1) is obtained from the tensor product of our usual scaling function,

    φ(x) = φ^(0)_(0,0)(x) = { 1, 0 ≤ x < 1; 0, otherwise }  (11.2)

    and a similar one in the y-direction,

    φ(y) = φ^(0)_(0,0)(y) = { 1, 0 ≤ y < 1; 0, otherwise },  (11.3)

    φ^(0)_(0,0)(x, y) = φ^(0)_(0,0)(x) φ^(0)_(0,0)(y) = { 1, 0 ≤ x < 1, 0 ≤ y < 1; 0, otherwise }.  (11.4)


    Even though this is a Haar scaling function in two dimensions, it is called a wavelet as

    one of the four possible ones.

    We will use the subscript (a, b) in φ^(j)_(a,b)(x, y) to indicate the top-left corner of its base support in the x-y plane (with the y-axis being the top in Figure 11.1). The superscript (j) is used to indicate the level of the scale 2^{−j} (in this case j = 0 for 2^0 = 1) in both the x and y directions, where we see that the base of the cube is 1 × 1 in Figure 11.1.

    We may note that φ^(0)_(0,0)(x, y) in (11.4) is associated with averaging in the x and y directions. However, we expect more operations, such as the average of differences, the difference of averages, and the difference of differences. As it may sound, this would involve other Haar wavelet actions in the x, the y, and both (namely, diagonal) directions, to which we will refer as ψh_(0,0)(x, y), ψv_(0,0)(x, y), and ψd_(0,0)(x, y), respectively. The scale used will be spelled out, or we may write ψh^(1)_(0,0)(x, y), ψv^(1)_(0,0)(x, y), and ψd^(1)_(0,0)(x, y) at scale 2^{−1} = 1/2, for example. Here h and v refer to the horizontal and vertical edges resulting from the differencing caused by the wavelet action along the direction perpendicular to that particular edge. Such three mixed combinations that involve wavelets are the reason for using the symbol ψ instead of φ, as the latter is reserved for the pure averaging as in (11.4).
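    The four operations just described can be sketched on the 2 × 2 samples above, using the plain averaging/differencing convention of Example 9.2. Whether a given output is labeled horizontal or vertical depends on the orientation convention, so the labels in the comments below are our assumption:

```python
import numpy as np

# 2x2 grey-scale samples from the text:
# z0 = f(0,0) = 9, z1 = f(0,1/2) = 7, z2 = f(1/2,0) = 5, z3 = f(1/2,1/2) = 3
z = np.array([[9.0, 7.0],
              [5.0, 3.0]])

avg = lambda u, v: (u + v) / 2
dif = lambda u, v: (u - v) / 2

# average/difference along one direction first ...
a_row, d_row = avg(z[:, 0], z[:, 1]), dif(z[:, 0], z[:, 1])
# ... then along the other direction
aa = avg(a_row[0], a_row[1])   # average of averages     (the phi-phi output)
da = dif(a_row[0], a_row[1])   # difference of averages  (one edge orientation)
ad = avg(d_row[0], d_row[1])   # average of differences  (the other orientation)
dd = dif(d_row[0], d_row[1])   # difference of differences (diagonal)
print(aa, da, ad, dd)          # 6.0 2.0 1.0 0.0
```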

    We note that we are moving to the scale 1/2 for the four 1/2 × 1/2 squares of the unit square, where we will also involve translations by 1/2. So, before writing the expressions for the above two-dimensional Haar wavelets, we may recall the two important operations of scaling and translation. An example of scaling with 2^{−1} = 1/2 for the two-dimensional wavelet is

    φ^(1)_(0,0)(x, y) = φ^(0)_(0,0)(2x, 2y) = { 1, 0 ≤ x < 1/2, 0 ≤ y < 1/2; 0, otherwise },  (11.5)

    which is located at the top-left 1/2 × 1/2 square in Figure 11.1. Its translation by 1/2 in the y-direction is

    φ^(0)_(0,0)(2x, 2(y − 1/2)) = φ^(1)_(0,1)(x, y) = { 1, 0 ≤ x < 1/2, 1/2 ≤ y < 1; 0, otherwise },  (11.6)

    which is located at the top-right 1/2 × 1/2 square. We are using (0, 1) in φ^(1)_(0,1) (instead of φ^(1)_(0,1/2)) to symbolically indicate the translation direction and to simplify the notation.

    The same thing is done for the other φ^(1)_(0,0) functions. Examples are:

    φ^(0)_(0,0)(2(x − 1/2), 2y) = φ^(1)_(1,0)(x, y),
    φ^(0)_(0,0)(2(x − 1/2), 2(y − 1/2)) = φ^(1)_(1,1)(x, y).  (11.7)

    These are located in Figure 11.1 at the bottom-left and bottom-right 1/2 × 1/2 squares.

    Now, we return to the two-dimensional Haar wavelet bases,

    ψh^(0)_(0,0)(x, y) = φ^(0)_(0,0)(x) ψ^(0)_(0,0)(y) = { 1, 0 ≤ x < 1, 0 ≤ y < 1/2; −1, 0 ≤ x < 1, 1/2 ≤ y < 1; 0, otherwise },  (11.8)

    as illustrated in Figure 11.2. This can be interpreted as the wavelet operation in the y-direction, which results in differences, followed by the scaling function operation in the x-direction, which averages these differences. Here, we are using the scale 2^{−1} = 1/2. So, from Figures 11.1 and 11.2 we may see that ψh^(0)_(0,0)(x, y) can be written in terms of the φ^(1) basis at t