
A Modified Vector Quantization Based Image Compression Technique Using Wavelet Transform

by
Jayanta Kumar Debnath

A thesis submitted to the Department of Electrical and Electronic Engineering of Bangladesh University of Engineering and Technology in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN ELECTRICAL AND ELECTRONIC ENGINEERING

DEPARTMENT OF ELECTRICAL AND ELECTRONIC ENGINEERING
BANGLADESH UNIVERSITY OF ENGINEERING AND TECHNOLOGY

August 2006


Declaration

It is hereby declared that this thesis, or any part of it, has not been submitted elsewhere for the award of any degree or diploma.


(Jayanta Kumar Debnath)


Approval

The thesis titled "A Modified Vector Quantization Based Image Compression Technique Using Wavelet Transform", submitted by Jayanta Kumar Debnath, Roll No. 040306222F, Session: April 2003, has been accepted as satisfactory in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN ELECTRICAL AND ELECTRONIC ENGINEERING on 27 August 2006.

BOARD OF EXAMINERS

1. Dr. Newaz Muhammad Syfur Rahim, Associate Professor, Department of Electrical and Electronic Engineering, BUET, Dhaka-1000, Bangladesh. (Chairman)

2. Dr. Satya Prasad Majumder, Professor and Head, Department of Electrical and Electronic Engineering, BUET, Dhaka-1000, Bangladesh. (Member, Ex-officio)

3. Prof. Dr. Saiful Islam, Professor, Department of Electrical and Electronic Engineering, BUET, Dhaka-1000, Bangladesh. (Member)

4. Prof. Dr. Abdul Mottalib, Professor and Head, Department of CIT, IUT, Gazipur, Dhaka, Bangladesh. (Member, External)


Dedication

To my beloved parents


Acknowledgements

I would like to thank my honorable supervisor, Dr. Newaz Muhammad Syfur Rahim, Associate Professor, Department of Electrical and Electronic Engineering (EEE), Bangladesh University of Engineering and Technology (BUET), Bangladesh, for his kind supervision, constructive suggestions and constant support during the whole research work. In particular, I would like to thank him for introducing me to the area of wavelets and image processing, and for his extremely helpful and sincere behavior throughout this research work.

Special thanks are given to Dr. Md. Saifur Rahman, Professor of the EEE department, BUET, my former undergraduate thesis supervisor, for his kind advice regarding higher studies.

I would like to thank our honorable Head of the Department of EEE, Professor Dr. Satya Prasad Majumder, and the honorable Dean of the Faculty of Electrical and Electronic Engineering, Professor Dr. Mohammad Ali Choudhury, for supporting me in proceeding with this thesis work.

I would also give my heartfelt thanks to my honorable former teacher, Prof. Dr. Saiful Islam, Professor in the EEE department of BUET, for his constant motivation toward higher studies and research work.

Finally, I express my deep gratitude to my parents for their continuous support, love and encouragement.


Contents

Declaration (ii)
Approval (iii)
Acknowledgements (v)
List of Tables (viii)
List of Figures (ix)
Abstract (x)

Chapter 1  Introduction
  1.1 Introduction 1
  1.2 Background of digital image processing 1
  1.3 Objective of this work 2
  1.4 Introduction to this thesis 3

Chapter 2  Image Compression Methodologies
  2.1 Introduction 5
  2.2 Some important factors to be considered in the compression process 6
    2.2.1 Spatial redundancy 7
    2.2.2 Coding redundancy 7
    2.2.3 Psychovisual redundancy 8
  2.3 Different classes of compression techniques 8
    2.3.1 Lossy compression process 8
    2.3.2 Lossless encoding process 9
    2.3.3 Predictive coding 10
    2.3.4 Transform coding 11
  2.4 Summary 14

Chapter 3  An Introduction to Wavelet Transform
  3.1 Introduction 15
  3.2 Brief introduction to Fourier Transform 15
  3.3 Brief introduction to Discrete Fourier Transform 17
    3.3.1 Time/frequency problem 18
  3.4 Brief introduction to Short-Time Fourier Transform (STFT) 19
  3.5 Historical background of wavelets 20
    3.5.1 Definition of wavelets and Wavelet Transform 21
  3.6 Continuous Wavelet Transform (CWT) 23
    3.6.1 Basic theory of CWT 23
    3.6.2 Continuous Wavelet Transform computation 25
    3.6.3 Frequency and time resolution 25
  3.7 Discrete Wavelet Transform (DWT) 26
    3.7.1 Importance of Discrete Wavelet Transform 26
    3.7.2 Wavelet features for image compression 28
    3.7.3 Subband coding 29
  3.8 Summary 30

Chapter 4  Vector Quantization
  4.1 Introduction 31
  4.2 Brief introduction to vector quantization 31
    4.2.1 Vector formation 32
    4.2.2 Training set generation 32
    4.2.3 Codebook design 33
    4.2.4 Quantization 36
  4.3 Classification of vector quantization 36
    4.3.1 Full search vector quantization 36
    4.3.2 Tree structured vector quantizer 37
    4.3.3 Pruned tree structured vector quantization 38
  4.4 Full search vector quantizer design: the generalized Lloyd algorithm 40
  4.5 Summary 41

Chapter 5  Vector Quantization Based Image Compression Using Wavelet Transform
  5.1 Introduction 42
  5.2 The complete methods of image compression 42
    5.2.1 Codebook generation step 42
    5.2.2 Encoding step 43
    5.2.3 Decoding step 45
  5.3 Summary 46

Chapter 6  Simulation Results and Discussions
  6.1 Introduction 47
  6.2 Simulation results of the proposed method 47
  6.3 Discussions 53

Chapter 7  Conclusions and Suggestions for Future Works
  7.1 Conclusions 55
  7.2 Suggestions for future works 56

Appendices
  Appendix A  Different standard images used for generating different codebooks 57
  Appendix B  Different standard test images 58
  Appendix C  Matlab codes corresponding to different Matlab files used in this algorithm 59

References


List of Tables

Table 6.1: Details about different codebook sizes used in this work 47
Table 6.2: Different experimental results using the proposed method 48
Table 6.3: Comparison of the results of the proposed method with other methods 48


List of Figures

Figure 2.1: Lossy data compression model 8
Figure 2.2: DCT based image compression model 11
Figure 3.1: Illustration of the Fourier transform 16
Figure 3.2: Illustration of the Short Time Fourier transform 19
Figure 3.3: Illustrations of Short Time Fourier transform and wavelets 21
Figure 3.4: Illustration of the difference of a sine wave with a wavelet 22
Figure 3.5: Illustration of the Wavelet transform 22
Figure 3.6: Different views of a signal 23
Figure 3.7: Illustration of the effect from varying the scaling factor, a 24
Figure 3.8: Illustration of the effect from varying the translation factor, k 24
Figure 3.9: Time/frequency representation in CWT 26
Figure 3.10: Wavelet decomposition tree 29
Figure 4.1: Principle of vector quantization 32
Figure 4.2: Schematic explanation of the SOFM algorithm 35
Figure 4.3: Full search vector quantization methodology 37
Figure 4.4: Tree structured vector quantizer 38
Figure 4.5: A schematic TSVQ and pruned TSVQ structure explanation 39
Figure 5.1: Different subbands of a general image after 3-level wavelet transform 43
Figure 5.2: Flowchart of the encoder for image compression based on wavelet transform and vector quantization 44
Figure 5.3: Flowchart of the decoding process of this image compression process 46
Figure 6.1: Different reconstructed images using proposed method at different PSNR and CR 52


Abstract

In this thesis, an image compression method combining the discrete wavelet transform (DWT) and vector quantization (VQ) is presented. First, a three-level DWT is performed on the original image, resulting in ten separate subbands. Ten separate codebooks, one per subband, are generated using four training images; the self-organizing feature map (SOFM) algorithm is used for codebook generation. An error correction scheme is also employed to improve the peak signal-to-noise ratio (PSNR) of the reconstructed image: ten error codebooks are generated, again with the SOFM algorithm, from the differences between the original wavelet coefficients and the vector-quantized coefficients. The codebook indices are Huffman coded to further increase the compression ratio at the transmitting end of the encoder. The error correction scheme is an iterative process that checks the image quality each time the Huffman-coded bit stream of the error codebook indices is sent through the channel. The proposed scheme shows better image quality, in terms of PSNR at the same compression ratio, than other DWT- and VQ-based image compression techniques found in the literature. The proposed method will be especially helpful in situations where high-quality data is required at the expense of compression ratio.


Chapter 1
Introduction

1.1 Introduction

In their raw form, digital images require an enormous amount of memory. In fact, according to a recent estimate, ninety percent of the total volume of traffic on the internet is composed of images or image-related data. With the advent of multimedia computing, the demand for processing, storing and transmitting images has increased exponentially, and a considerable amount of research has been devoted in the last two decades to tackling the problem of image compression. Digital imaging has had an enormous impact on scientific and industrial applications. Uncompressed images require considerable storage capacity and transmission bandwidth, and the solution to this problem is to compress an image for the desired application. The wavelet transform [1] has recently emerged as the tool of choice for image compression. In this thesis work, a combined approach to image compression based on the wavelet transform and vector quantization [2] is proposed.

1.2 Background of digital image processing

The need for compressing data was felt in the past, even before the advent of computers, as the following quotation by Blaise Pascal suggests: "I have made this letter longer than usual because I lack the time to make it shorter." The need to compress a stream of data is in fact very old. As recounted in [3], in 1558 Giambattista della Porta, a Renaissance scientist, was the author of Magia Naturalis (Natural Magic), a book in which he discusses many subjects, including demonology, magnetism, and the camera obscura. The book mentions an imaginary device that has since become known as the "sympathetic telegraph." This device was to have consisted of two circular boxes, similar to compasses, each with a magnetic needle. Each box was to be labeled with the 26 letters, instead of the usual directions, and the main point was that the two needles were supposed to be magnetized by the same lodestone. Porta assumed that this would somehow coordinate the needles such that when a letter was dialed in one box,


the needle in the other box would swing to point to the same letter. Then again, in 1711 a worried wife wrote to the Spectator, a London periodical, asking for advice on how to bear the long absences of her beloved husband. The adviser, Joseph Addison, offered some practical ideas, then mentioned Porta's device, adding that a pair of such boxes might enable her and her husband to communicate with each other even when they "were guarded by spies and watches, or separated by castles and adventures." Mr. Addison then added that, in addition to the 26 letters, the sympathetic telegraph dials should contain, when used by lovers, "several entire words which always have a place in passionate epistles." The message "I love you," for example, would, in such a case, require sending just three symbols instead of ten. This advice is an early example of text compression achieved by using short codes for common messages and longer codes for other messages. Even more importantly, it shows how the concept of data compression comes naturally to people who are interested in communications. We seem to be preprogrammed with the idea of sending as little data as possible in order to save time.

An image may be defined as a two-dimensional function, f(x,y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, the image is a digital image. Processing digital images by means of digital computers is called digital image processing. Digital images are composed of a finite number of elements, each with a particular location and value; these elements are called pixels. There are three levels of digital image processing: i) low-level, ii) mid-level and iii) high-level image processing. Low-level processing involves primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. Mid-level processes involve tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects. Finally, high-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision [4].
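As a minimal illustration of this definition (a hypothetical 4x4 image invented here for illustration, not one of the images used in this thesis), a grayscale digital image can be held as a 2-D array of finite intensity values, with f(x,y) reading out the gray level of one pixel:

```python
# A tiny hypothetical 4x4 grayscale digital image: each element is a
# finite, discrete gray level in the range 0 (black) to 255 (white).
image = [
    [0,   64, 128, 255],
    [16,  80, 144, 240],
    [32,  96, 160, 224],
    [48, 112, 176, 208],
]

def f(x, y):
    """Return the gray level of the pixel at spatial coordinates (x, y)."""
    return image[x][y]

print(f(0, 3))                     # gray level of a single pixel
print(len(image), len(image[0]))   # image dimensions in pixels
```

Each list element plays the role of one pixel, so the array dimensions are exactly the image dimensions in pixels.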

1.3 Objective of this work

The wavelet transform [1,3] of an image is a multi-resolution representation of the image, in which vertical, horizontal and diagonal image details are represented at different resolutions. One favorable property of wavelet transforms is that their coefficients at different resolutions and the same orientation exhibit strong similarities. This similarity arises in the following sense: if a wavelet coefficient is small, then the coefficients of the same spatial orientation and higher frequency have a great probability of also being small. When these coefficients are quantized to zero, their positions form a zero-tree. When it comes to quantizing and coding the wavelet coefficients, the method used here is vector quantization, as explained in [4], [5] and [6]. The objective of this thesis work is to develop a combined approach of wavelet transform and vector quantization for compressing 2-D gray-scale images.

1.4 Introduction to this thesis

Chapter 2 describes the very basics of digital data compression: different performance measures of data compression, basic ideas of data compression, etc. The chapter also introduces lossy and lossless compression, and describes predictive and transform based image coding techniques.

Chapter 3 briefly describes the evolution of the discrete wavelet transform (DWT) starting from the Fourier transform. It begins with the very basics of the Fourier transform, then gradually describes the Discrete Fourier Transform, the Fast Fourier Transform and their time-frequency limitations, the Short-Time Fourier Transform and windowing techniques, and the multi-resolution problem and the Continuous Wavelet Transform, and finally introduces the wavelet transform as a powerful tool for image processing.

Chapter 4 introduces the vector quantization process in detail, starting from the very basic definitions of vector quantization. The Self-Organizing Feature Map (SOFM), which is used to train the codebooks in the image compression process of this thesis work, is then introduced. The Lloyd algorithm and the generalized Lloyd algorithm, which are necessary to generate the codebooks, are then described, and the LBG algorithm is introduced for the same purpose.

Chapter 5 describes the whole process of image compression based on the proposed method, which is a combined application of vector quantization and the discrete wavelet transform.


Chapter 6 presents different simulation results of the proposed method, along with a comparison of the proposed method with other methods. An overall discussion of the whole method ends the chapter.

Chapter 7 draws conclusions for the thesis and provides suggestions for future research work.


Chapter 2
Image Compression Methodologies

2.1 Introduction

Uncompressed images require considerable storage capacity and transmission bandwidth. The solution to this problem is to compress an image for the desired application. The wavelet transform has recently emerged as the tool of choice for image compression. The basic principles underlying the compression of images, and the advantages of the wavelet transform in this regard, are explained in this chapter. Digital images have become an important source of information in the modern world of communication systems. In their raw form, these images require an enormous amount of memory; in fact, according to a recent estimate, 90% of the total volume of traffic on the internet is composed of images or image-related data [7]. With the advent of multimedia computing, the demand for processing, storing and transmitting images has increased exponentially, and a considerable amount of research has been devoted in the last two decades to tackling the problem of image compression. There are two different compression categories: lossless and lossy. Lossless compression preserves the information in the image: an image can be compressed and decompressed without losing any information. Applications requiring this type of compression include medical and legal record imaging, and military and satellite photography. In lossy compression, information is lost, but this is tolerated because it gives a high compression ratio. Lossy compression is useful in areas such as video conferencing, fax or multimedia applications, and is the focus of this work. The chapter begins with an introductory note on data compression, then gives a brief description of the various attributes of images which make them amenable to compression, and of how to evaluate compression performance. The discrete cosine and wavelet transforms, the workhorses of JPEG-93 and JPEG-2000 respectively, are then presented and the advantages of wavelets pointed out. In this work, two-dimensional wavelet transforms over images, which are represented as two-dimensional arrays of numbers, are used.


2.2 Some important factors to be considered in the compression process

Data compression refers to the process in which given information is represented with a reduced number of data points. Let us clearly understand the difference between information and data. Data refers to the means by which the given information is conveyed; it may be in the form of symbols, ASCII characters or numbers. Various amounts of data can be used to represent the same amount of information. For example, consider two individuals telling the same story in different numbers of words: one of them may convey the same story with more words than the other. This non-essential data introduces redundancy, and hence the scope for compression. In this work, lossy compression of images, which tries to achieve high compression at the cost of some loss of information, is described. A digital image can be represented by a two-dimensional (2-D) array, i.e., a matrix, each of whose elements f(i,j) corresponds to the value of the (i,j)th pixel in the image. If each pixel represents a shade of gray in a monochrome image, we need to allocate only one byte, or 8 bits, per pixel (bpp). With 2^8 = 256 combinations, one can represent numbers ranging from 0 to 255; thus, a gray-scale image, when displayed, will have shades of gray ranging from 0 (black) to 255 (white). An uncompressed 800 x 800 pixel image will need 64 x 10^4 bytes, i.e. about 0.64 MB, so if 1000 such images are to be stored, we need 640 MB! Obviously, we then need to compress the image, i.e. represent the same image with a reduced number of bits, possibly with some loss, without changing the original size of the image. Mathematically, a measure of compression is given by the compression ratio (CR), defined as

CR = Number of bits in the original image / Number of bits in the compressed image .......... (2.1)

While compressing an image, two important objectives are kept in mind. On one hand, the compressed image should not be distorted; on the other, it should require the minimum number of bytes to store. Typically, these two objectives conflict, so a suitable criterion is needed to reach a compromise. This criterion depends upon the particular application; as will be seen soon, it is possible to have a visually pleasing image quality with a highly lossy compression! Before proceeding further, let us define the mathematical quantity which measures the quality of the reconstructed image compared with the original image. It is called the peak signal-to-noise ratio (PSNR) [4], [7], measured in decibels (dB), and is defined as

PSNR = 20 log10(255 / RMSE) .......... (2.2)


where RMSE is the root-mean-square error, defined as

RMSE = sqrt( (1/(M*N)) * sum_{i=1..M} sum_{j=1..N} [f(i,j) - f̂(i,j)]^2 ) .......... (2.3)

Here, M and N are the width and height, respectively (in pixels), of the image array, f is the original image and f̂ is the reconstructed image. Note that the original and the reconstructed images must be of the same size. Images can be compressed primarily by eliminating the following types of redundancies:

2.2.1 Spatial redundancy

In most natural images, the values of neighboring pixels are strongly correlated, i.e., the value of any given pixel can be reasonably predicted from the values of its neighbors. Thus the information carried by individual pixels is relatively small. This type of redundancy can be reduced if the 2-D array representation of the image is transformed into a format which only keeps the differences in pixel values. From our previous discussions on wavelets [1,8], we are familiar with the ability of wavelet transforms to effectively capture variations at different scales; hence, wavelets are ideally suited for this purpose.

2.2.2 Coding redundancy

For a given image stored as a matrix with values ranging from 0 to 255, one can find the number of times each value occurs in the image. This frequency distribution of pixel values from 0 to 255 is called a histogram. In typical images, a few pixel values have a much greater frequency of occurrence than the others. Hence, if the same code-word size is assigned to every pixel (8 bits in the case of grayscale images), coding redundancy is incurred. This redundancy can be removed by assigning fewer bits to the more probable gray-scale values than to the less probable ones, and can be reduced using Huffman coding.

2.2.3 Psychovisual redundancy

This takes into account the properties of human vision. Human eyes do not respond with equal sensitivity to all visual information: certain information has less relative importance than other information in normal visual processing. This information is said to be


psychovisually redundant. Broadly speaking, an observer searches for distinguishing features such as edges or textural regions and mentally combines them into recognizable groupings. The brain then relates these groupings with its prior knowledge in order to complete the image interpretation process. This redundancy can be overcome by the processes of thresholding and quantization of the wavelet coefficients, to be discussed later.
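Before moving on to the compression techniques themselves, the fidelity measures of equations 2.1 to 2.3 can be sketched in a few lines of Python (a minimal illustration, not the Matlab implementation used in this thesis; the two tiny 2x2 "images" are hypothetical values chosen only to exercise the formulas):

```python
import math

def compression_ratio(bits_original, bits_compressed):
    # Equation (2.1): CR = bits in the original image / bits in the compressed image.
    return bits_original / bits_compressed

def rmse(original, reconstructed):
    # Equation (2.3): root-mean-square error between two M x N images,
    # stored here as plain nested lists of gray levels.
    m, n = len(original), len(original[0])
    total = sum((original[i][j] - reconstructed[i][j]) ** 2
                for i in range(m) for j in range(n))
    return math.sqrt(total / (m * n))

def psnr(original, reconstructed):
    # Equation (2.2): PSNR = 20 log10(255 / RMSE), in dB, for 8-bit images.
    return 20 * math.log10(255 / rmse(original, reconstructed))

# Hypothetical 2x2 original and reconstruction, for illustration only.
f_orig = [[100, 110], [120, 130]]
f_rec  = [[101, 108], [121, 129]]
print(round(rmse(f_orig, f_rec), 3))   # small reconstruction error
print(round(psnr(f_orig, f_rec), 2))   # correspondingly high PSNR in dB
```

Note that a higher PSNR indicates a reconstruction closer to the original, and that both images must have the same dimensions, exactly as required by equation 2.3.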

2.3 Different classes of compression techniques

There are mainly four classes of compression techniques for digital data: i) lossless compression, ii) lossy compression, iii) predictive coding, and iv) transform coding. These methods are briefly described below.

2.3.1 Lossy compression process

In lossless compression schemes, the reconstructed image after compression is numerically identical to the original image; however, lossless compression can only achieve a modest amount of compression. An image reconstructed following lossy compression contains degradation relative to the original, often because the compression scheme completely discards redundant information. In exchange, lossy schemes are capable of achieving much higher compression, and under normal viewing conditions no visible loss is perceived (visually lossless). A typical lossy image compression system contains four closely related components, as shown in Figure 2.1: (a) source encoder, (b) thresholder, (c) quantizer, and (d) entropy encoder. Compression is accomplished by first applying a linear transform to decorrelate the image data; the transformed coefficients are then thresholded and quantized, and finally the quantized values are entropy coded. Decompression is achieved by applying the above four operations in the reverse order.

[Figure: Source Image -> Source Encoder -> Thresholder -> Quantizer -> Entropy Encoder -> Compressed Image]

Figure 2.1: Lossy data compression model


2.3.2 Lossless encoding process

In this type of data compression the original data is exactly decoded, i.e. no information is lost. Repeated patterns in a message are found and encoded in an efficient manner, which is why this process is also called the 'redundancy reduction' method of encoding. For textual data, executable code, word-processing files and tabulated numbers, lossless compression is of the utmost importance. Popular algorithms for lossless compression are i) the Lempel-Ziv-Welch (LZW) algorithm, ii) the Run Length Encoding (RLE) algorithm, iii) Huffman coding, iv) arithmetic coding, v) delta encoding, etc. An example of lossless compression of images is the GIF image. These algorithms are briefly described here.

A) Huffman coding

In this method of data compression, the characters in a data file are converted into binary codes: the most common characters in the input file (characters with higher probability) are assigned short binary codes, and the least common characters (those with lower probabilities) are assigned longer binary codes, so codes in this algorithm can be of different lengths. The basic idea behind Huffman coding [9] is simply to use shorter bit patterns for more common characters. We can make this idea quantitative by considering the concept of entropy. Suppose the input alphabet has N_ch characters, and that these occur in the input string with respective probabilities p_i, i = 1, 2, ..., N_ch, so that sum p_i = 1. Then the fundamental theorem of information theory says that strings consisting of independently random sequences of these characters (a conservative, but not always realistic, assumption) require, on average, at least

H = - sum_i p_i log2 p_i .......... (2.4)

bits per character, where H is the entropy of the probability distribution. Moreover, coding schemes exist which approach this bound arbitrarily closely. For the case of equiprobable characters, with all p_i = 1/N_ch, one easily sees that H = log2 N_ch, which is the case of no compression at all. Any other set of p_i gives a smaller entropy, allowing some useful compression. If each character i could be given a code of length L_i = -log2 p_i bits, then equation 2.4 would simply become the average sum_i p_i L_i and the bound would be achieved; the problem is that -log2 p_i is not generally an integer. Huffman coding makes a stab at this by approximating the probabilities p_i by integer powers of 1/2, so that all the L_i are integers; when the p_i actually are integer powers of 1/2, a Huffman code does achieve the entropy bound H given by equation 2.4. This Huffman coding [9] is used in this thesis


work, over the data after encoding the image and before transmitting it through the channel, to get better compression performance.
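The ideas above can be sketched in a short Python example (an illustration only, not the Matlab implementation used in this thesis): the entropy of equation 2.4, and a Huffman code built with a priority queue, whose average code length lies between H and H + 1 bits per character.

```python
import heapq
import math
from collections import Counter

def entropy(probabilities):
    # Equation (2.4): H = -sum(p_i * log2 p_i), the bound in bits/character.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def huffman_code(freq):
    # Build a Huffman tree bottom-up with a min-heap: repeatedly merge the
    # two least frequent subtrees, prepending '0'/'1' to their codes, so the
    # most common characters end up with the shortest bit patterns.
    heap = [[f, i, {ch: ""}] for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so equal frequencies never compare dicts
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {ch: "0" + c for ch, c in lo[2].items()}
        merged.update({ch: "1" + c for ch, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]  # {character: bit string}

message = "this is an example of huffman coding"
freq = Counter(message)
code = huffman_code(freq)
n = len(message)
avg_len = sum(freq[ch] * len(code[ch]) for ch in freq) / n
h = entropy([f / n for f in freq.values()])
print(round(h, 3), round(avg_len, 3))  # average code length is close to H
```

The resulting code is prefix-free, so the concatenated bit stream can be decoded unambiguously, and its average length never beats the entropy bound of equation 2.4.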

B) Lempel-Ziv-Welch algorithm

This algorithm uses a dictionary, or code table. The dictionary is constructed from words, or parts of words, in a message, which are then represented by pointers to the corresponding dictionary entries. Compression ratios of about 1.5 or higher can be achieved using this method. LZW is used to compress text, executable code, and similar data files.

C) Run Length Encoding algorithm

This method is used when the data contains frequently repeated values. A run is made of the repeated bits, which are coded in fewer bits by simply stating how many bits there were.
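A minimal run-length coder along these lines (illustrative only; runs are represented here as (symbol, count) pairs rather than any specific bit-level format):

```python
def rle_encode(data):
    # Collapse each run of identical symbols into a (symbol, count) pair.
    runs = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1
        else:
            runs.append([symbol, 1])
    return [(s, c) for s, c in runs]

def rle_decode(runs):
    # Expand the (symbol, count) pairs back into the original sequence.
    return "".join(s * c for s, c in runs)

bits = "0000011111111000"
runs = rle_encode(bits)
print(runs)                      # [('0', 5), ('1', 8), ('0', 3)]
assert rle_decode(runs) == bits  # lossless round trip
```

The method pays off only when runs are long; for data with few repeats, the (symbol, count) pairs can exceed the original length.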

D) Arithmetic coding

In this algorithm a message or data is encoded as a real number in the interval from 0 to 1. This algorithm shows better performance than Huffman coding, and it is much more flexible, as it can use an adaptive model of the signal statistics. But it has the disadvantage that the whole codeword must be received before decoding can start. A second disadvantage is that a single corrupt bit in the codeword can corrupt the entire message. A third disadvantage is the limited number of symbols that can be encoded within a codeword.

2.3.3 Predictive coding

In predictive coding, information already sent or available is used to predict future values, and the difference is coded. Since this is done in the image or spatial domain, it is relatively simple to implement and is readily adapted to local image characteristics. Differential Pulse Code Modulation (DPCM) is one particular example of predictive coding. Transform coding, on the other hand, first transforms the image from its spatial-domain representation to a different type of representation using some well-known transform and then codes the transformed values (coefficients). This method provides greater data compression than predictive methods, although at the expense of greater computation.
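A minimal DPCM sketch, using the previous sample as the predictor (the pixel values are hypothetical):

```python
import numpy as np

def dpcm_encode(samples):
    """Code each sample as its difference from the previous sample."""
    x = np.asarray(samples, dtype=int)
    return np.diff(x, prepend=0)  # the first residual is the sample itself

def dpcm_decode(residuals):
    """Invert DPCM by accumulating the residuals."""
    return np.cumsum(residuals)

pixels = [100, 102, 101, 105, 110, 110]
residuals = dpcm_encode(pixels)   # [100, 2, -1, 4, 5, 0]
restored = dpcm_decode(residuals)
# The residuals cluster near zero, so they entropy-code well.
```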


2.3.4 Transform coding

This refers to the linear transforms which are used to map the original image into some transformed domain. Popular transform techniques are the discrete Fourier transform (DFT), the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). Each transform has its own advantages and disadvantages. In this work the DWT is used, but the DCT will also be described briefly. JPEG-93 was based on the DCT, while the recent JPEG-2000 is based entirely on the DWT. The pros and cons of each method and the advantages of the DWT over the DCT will be briefly described in this chapter.

A) Discrete Cosine Transform (DCT) based coding

An important discovery of the mid-1970s, the DCT gives an approximate representation of the DFT, considering only the real part of the series. For data of N values, the DCT's time complexity (broadly speaking, the amount of computational time) is of the order of N log2 N, similar to the DFT. But the DCT gives better convergence than the DFT. The block diagram of DCT-based coding is shown in Figure 2.2. First, a given image is divided into 8 x 8 blocks and the forward discrete cosine transform (FDCT) is carried out over each block. Since adjacent pixels are highly correlated, the FDCT processing step lays the foundation for achieving data compression. This transformation concentrates most of the signal in the lower spatial frequencies, leaving the remaining coefficients with values that are zero (or near zero). These coefficients are then quantized and encoded to get a compressed image. Decompression is obtained by applying the above operations in reverse order and replacing the FDCT by the inverse discrete cosine transform (IDCT).

Figure 2.2: DCT-based image compression model (source image → 8 x 8 blocks → FDCT → quantizer → entropy encoder → compressed image).
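The FDCT/IDCT pair of Figure 2.2 can be sketched with an orthonormal DCT-II matrix; this is a generic illustration, not the thesis's implementation:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C[k, i] ~ cos(pi * (2i + 1) * k / (2n))."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2)  # the DC row gets a smaller weight
    return c

C = dct_matrix(8)
block = np.outer(np.arange(8.0), np.ones(8))  # a smooth 8 x 8 test block
coeffs = C @ block @ C.T     # forward 2-D DCT (FDCT) of one block
restored = C.T @ coeffs @ C  # inverse 2-D DCT (IDCT)
# For smooth blocks the energy concentrates in a few low-frequency coefficients.
```

Because `C` is orthonormal, the inverse transform is simply its transpose, and the round trip reconstructs the block exactly (before any quantization).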

B) Discrete Wavelet Transform (DWT) based coding

From a practical point of view, wavelets [4] provide a basis set which allows one to represent a data set in the form of differences and averages, called the high-pass (detail) and low-pass (average) coefficients, respectively. The number of data points to be averaged, and the weights to be attached to each data point, depend on the wavelet one chooses to use. Usually, one


takes N = 2^n (where n is a positive integer) data points for analysis. In the case of the simplest, Haar, wavelet, the level-1 high-pass and low-pass coefficients are the nearest-neighbor differences and nearest-neighbor averages, respectively, of the given set of data with the alternate points removed. Subsequently, the level-1 low-pass coefficients can again be written in the form of level-2 high-pass and low-pass coefficients, having one fourth the number of points of the original set. In this way, with 2^n points, at the n-th level of decomposition the low-pass will have only one point. For the Haar case, modulo a normalization factor, the n-th level low-pass coefficient is the average of all the data points. In principle, an infinite choice of wavelets exists. The choice of a given wavelet depends upon the problem at hand. As one can easily imagine, wavelets are ideal for finding variations at different scales present in a data set. This procedure can easily be extended to the two-dimensional case, for applications to image processing. Wavelets are probing functions which give optimal time-frequency localization of a given signal. Due to its flexible mathematical modeling, the DWT has certain distinct advantages over the DCT. Below, we point out the relative merits of the DCT and the DWT.
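The nearest-neighbor averaging and differencing just described can be sketched for the normalized Haar case; the data values are arbitrary:

```python
import numpy as np

def haar_level(x):
    """One level of the normalized Haar transform."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: neighbor averages
    high = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass: neighbor differences
    return low, high

data = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])  # N = 2^3 points
low1, high1 = haar_level(data)  # level 1: 4 low-pass + 4 high-pass points
low2, high2 = haar_level(low1)  # level 2: 2 + 2 points
low3, high3 = haar_level(low2)  # level 3: a single low-pass point
# low3[0] is, modulo the normalization factor, the average of all data points.
```

With this normalization the transform is orthonormal, so the total energy of the coefficients equals that of the data.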

C) Relative advantages and disadvantages of DCT and DWT

i) Gibbs' phenomenon: In the transformed domain, one usually performs thresholding, i.e. discards all the coefficients whose values are below a threshold. In the case of the DCT, thresholding is done in the frequency domain, where the time information about the signal is hidden in the relative phases of the different frequency modes. Hence, the effect of chopping certain coefficients (a local phenomenon) manifests itself throughout the signal (a global phenomenon) after reconstruction. This effect is called Gibbs' phenomenon. Its ill effects become obvious if the selected threshold introduces large errors, which will eventually corrupt the entire signal. In the case of the DWT, we get so-called time-frequency localization. Consider the (1, 1) element of the low-pass sub-matrix after applying the 2-D DWT to the image. In the case of the Haar basis, this element is the result of an average performed on the first 2 x 2 (parent) elements of the original image. Similarly, the origin of the remaining three high-pass elements can be traced to differences. This procedure clarifies the notion of localization, i.e. each and every point in the sub-matrices can be attributed to particular sets of points in the original image. If thresholding is performed on these coefficients, it will affect only the corresponding parent elements of the original image, and any error will be local in nature.


ii) Time complexity: The time complexity (broadly speaking, the amount of computational time) of the DCT is O(N log2 N), while many wavelet transforms can be calculated with O(N) operations. More general wavelets require O(N log2 N) calculations, the same as the DCT.

iii) Blocking artifacts: In the DCT, the given image is sub-divided into 8 x 8 blocks. Due to this, the correlation between adjacent blocks is lost. The result is noticeable and annoying, particularly at low bit rates. In the DWT, no such blocking is done and the transformation is carried out over the entire image.

iv) Advantage of a designed wavelet set: It is possible to construct one's own basis function for wavelets, depending upon the application in hand. Thus, if a suitable basis function is designed for a given task, it will capture most of the energy of the function with very few coefficients. This freedom is curtailed in the DCT, where one has only cosine functions as the basis set.

v) Compression performance: The DCT-based JPEG-93 compressor performs well for compression ratios up to about 25:1, but image quality rapidly deteriorates above 30:1, while wavelet-based coders degrade gracefully, well beyond ratios of 100:1 [7].

vi) Disadvantages: The biggest disadvantage of wavelet-based coding is the problem of selecting a basis function for a particular operation. This is because a particular wavelet is suited to a particular purpose, so the properties of each wavelet should be known beforehand. Most of the time, the selection is done after experimenting with different sets of wavelets for a given application. Once the DWT is performed, the next task is thresholding, i.e. neglecting certain wavelet coefficients. For this, one has to decide the value of a threshold and how to apply it. This is an important step which affects the quality of the compressed image. The basic idea is to truncate the insignificant coefficients, since the amount of information contained in them is negligible. Later chapters describe in detail the procedure for selecting the desired threshold and other aspects of image compression.


2.4 Summary

This chapter focused mainly on the very basics of digital data compression, its importance, and the different classifications of compression. The DCT- and DWT-based data compression techniques were compared briefly, as they are the most widely used techniques in digital image compression. Lastly, lossy and lossless compression were introduced, and predictive and transform-based image coding techniques were also described.


Chapter 3

An Introduction to Wavelet Transform

3.1 Introduction

The wavelet transform [1] has emerged as a powerful mathematical tool in many areas of science and engineering, specifically for data compression. Wavelets were developed independently in the fields of mathematics, physics, electrical engineering and seismic geology. Interchanges between these fields during the last ten years have led to many wavelet applications, such as image compression, de-noising, human vision, radar, etc. Wavelets are also used in many areas of science and engineering such as signal processing, fractal analysis, numerical analysis, statistics, and astronomy. Recently, wavelets were determined to be the best way to compress a huge library of fingerprints [10]. Before the wavelet transform is introduced, some elementary topics such as the Fourier transform are briefly described, which will ultimately help in understanding the wavelet transform. Fourier analysis has been available to signal analysts for many years. It serves as a powerful tool for examining the frequency spectrum of a signal, where hidden information about the signal's properties is buried. This chapter starts with the very basics of the Fourier transform, then briefly describes the discrete Fourier transform, the fast Fourier transform and their time-frequency limitations, introduces the short-time Fourier transform and the multi-resolution problem, and finally introduces the wavelet transform as a powerful tool for image processing.

3.2 Brief introduction to Fourier Transform

Fourier analysis is named after its inventor, Jean Baptiste Joseph Fourier, who showed a keen interest in mathematics at an early age. In 1780, Fourier entered the Royal Military Academy of Auxerre, where at age 13 he became fascinated with mathematics and took to creeping down at night to a classroom where he studied by candlelight. His contribution to areas such as mathematics and science has made an enormous impact on our modern way of


life. Yet he was neither a professional mathematician nor a scientist. There are two parts to Fourier's contribution: first, a mathematical statement, and second, an explanation of why this statement is useful. The mathematical statement is that "any periodic function can be represented as a sum of sines and cosines" [11]. This means any curve that periodically repeats itself can be expressed as the sum of perfectly smooth oscillations (sines and cosines). Fourier's statement can be expressed mathematically as

f(t) = (1/2) a_0 + Σ_{k=1}^{∞} [a_k cos(kt) + b_k sin(kt)] ............. (3.1)

where the coefficients a_0, a_k, b_k are defined as

a_0 = (1/π) ∫_0^{2π} f(t) dt,   a_k = (1/π) ∫_0^{2π} f(t) cos(kt) dt,   b_k = (1/π) ∫_0^{2π} f(t) sin(kt) dt.

This famous mathematical statement is known as the Fourier series. A more understandable way of thinking about Fourier analysis is as a mathematical technique for transforming our view of a particular signal from the time domain to the frequency domain. This process is commonly known as the Fourier transform, as shown in Figure 3.1.

Figure 3.1: Illustration of the Fourier transform.

Nowadays, the Fourier transform is used across many industries, and has been a catalyst in their development. Most signals in practice are time-domain signals in their raw format. That is, whatever the signal is measuring, it is a function of time. When a time-domain signal is plotted, a time-amplitude representation of the signal is produced. Unfortunately, this representation may not be suitable for the type of signal processing we intend to do. In many cases, the most useful information is hidden in the frequency content of the signal. Therefore, a frequency representation of the signal is needed. If the Fourier transform of a signal is taken, a frequency-amplitude representation of the signal is obtained. Working in the frequency domain can help to extract a whole new set of information from the signal. The Fourier transform decomposes a signal into complex exponential functions of different frequencies, which tells how much of each frequency the signal contains. This is done using the following two equations:

F(ω) = ∫_{-∞}^{∞} f(t) e^{-jωt} dt ..................... (3.2)


f(t) = (1/2π) ∫_{-∞}^{∞} F(ω) e^{jωt} dω .................. (3.3)

where t is time, ω is frequency, f denotes the signal in the time domain and F denotes the signal in the frequency domain. Equation (3.2) is called the Fourier transform of f(t) and equation (3.3) is called the inverse Fourier transform of F(ω).
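As a quick illustration of equation (3.2) in discrete form, NumPy's FFT recovers the frequency content of a two-tone test signal; the sampling rate and tone frequencies below are arbitrary choices:

```python
import numpy as np

fs = 1000                                # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)              # one second of samples
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)                       # frequency-domain view, F(w)
freqs = np.fft.rfftfreq(len(x), 1 / fs)  # frequency of each bin
dominant = np.sort(freqs[np.argsort(np.abs(X))[-2:]])
# The two dominant bins sit at 50 Hz and 120 Hz, but all time
# information about *when* each tone occurs is lost.
```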

3.3 Brief introduction to discrete Fourier Transform

With the introduction of digital computers, a discrete Fourier Transform was needed. Digital

computers are finite machines; any desired computation can only use a finite number of

operations. No digital computer, then, can handle real- (or complex)- valucd functions of real

numbers. Any such function must be sampled in order to be rcpresented and proccsscd in a

computer. We have only the values offat finitc number ofpointsf(tl),j(IJJ, ..... ,j(I,,), instead

off(x) for all x. Sample points are taken at regular intervals ofT. If the sampling was started at

II= 0, the sampling sequence will becomc 0, "t,2"t, , (n-1)"t and the sample valucs are

frO), f(r), f(2r) f((n-J)r). This allows data functions to be represented. In order to use

digital computers in Fourier Analysis, a finite analogue of the Fourier Transform is needed

too. Thus Equation - (2) becomes the sum as

Σ_{r=0}^{n-1} f(rτ) e^{-jωrτ} .................... (3.4)

Now e^{-jωrτ}, for fixed r, is a periodic function of ω with period 2π/τ. Restricting the range to 0 ≤ ω ≤ 2π/τ, we take as many sample points of ω as there are in the time domain, spaced equidistantly. Equation (3.4) then becomes the discrete Fourier transform

D_ω f(2πk/(nτ)) = (1/n) Σ_{r=0}^{n-1} f(rτ) e^{-j2πkr/n} ................. (3.5)
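A direct transcription of equation (3.5), with its 1/n normalization, can be checked against NumPy's FFT (which omits that factor):

```python
import numpy as np

def dft(samples):
    """Direct O(n^2) discrete Fourier transform, as in equation (3.5)."""
    n = len(samples)
    r = np.arange(n)   # time-sample index
    k = r[:, None]     # frequency-sample index
    return (samples * np.exp(-2j * np.pi * k * r / n)).sum(axis=1) / n

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
# Equation (3.5) carries a 1/n factor that np.fft.fft leaves out.
assert np.allclose(dft(x), np.fft.fft(x) / len(x))
```

The O(n^2) cost of this direct sum is exactly what the FFT, discussed next, reduces to O(n log n).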

Using the Fourier transform, a signal can be expressed as sums of sines and cosines. But since the integration is over all time, the sums are infinite. Therefore, the complex signal has been translated into an endless arithmetic problem, in which we must do the following [11]:

1. Calculate an infinite number of coefficients.

2. Sum an infinite number of waves.

Fortunately, a small number of coefficients is often adequate for an accurate Fourier transform. With the introduction of computers, the Fourier transform of a signal can be processed even more quickly. Indeed, the gain in speed from the FFT is greater than the gain in speed from better computers; significant gains in computing speed have come from such fast algorithms built into the hardware of the computer. The Matlab software has the built-in


command 'fft' that implements the FFT technique to perform a quick Fourier transform on a given signal.

Carl Friedrich Gauss discovered the idea underlying the FFT, probably in 1805, two years before Fourier presented his memoir to the Academy of Sciences in Paris. The algorithm was rediscovered and implemented as a computer program by James Cooley and John Tukey in 1965. The algorithm has helped Fourier analysis become more widely used by considerably reducing the number of necessary computations. It catapulted Fourier analysis from being a mathematical tool to being a practical one used in signal analysis. The algorithm works by recursively halving the data, until enough coefficients have been sampled that the signal can be reconstructed. Thus the FFT can only be applied to signals whose length is a power of 2 (i.e. 2, 4, 8, 16, 32, 64, etc.).

3.3.1 Time/Frequency problem

The Fourier transform is extremely useful because of its ability to capture the signal's frequency content, but it has a serious drawback. The Fourier transform is a reversible transform: it allows you to go back and forth between the raw and processed signals. However, only one of them is available at any given time. This means no frequency information is available in the time domain, and no time information is available in the frequency domain. In transforming a signal from the time domain to the frequency domain, time information is lost. When looking at the Fourier transform of a signal, it is impossible to tell when in time a particular frequency occurs. The question now is: is it necessary to have both the time and the frequency information at the same time? The answer depends on the particular application and the nature of the signal in hand. The Fourier transform of a signal displays the frequency information of the signal, which means that it tells us how much of each frequency exists in the signal, but it does not tell us when in time these frequency components exist [11]. If the signal is stationary, then this information is not required. Signals whose frequency content does not change in time are called stationary signals. If the frequency content of a stationary signal does not change in time, then we do not need to know at what times frequency components exist, since all frequency components exist at all times. This is where the wavelet transform excels; it eliminates the disjunction between the time and frequency information. The wavelet transform can capture both the frequency and time properties of a signal in a single representation, which allows more specific filtering to be done. In addition, the lack of time information makes the Fourier transform terribly vulnerable to errors: the information in one part of a signal, whether real or erroneous, is necessarily spread throughout the entire transform.


3.4 Brief introduction to Short-Time Fourier Transform (STFT)

In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyze only a small section of the signal at a time, thus limiting the span of time during which something is happening. The signal is divided into segments small enough that the signal can be assumed to be stationary within each. For this purpose, a window function is chosen, whose width must be equal to the width of the segment. The window function is first located at the very beginning of the signal, at t = 0; let T be the width of the window. At t = 0, the window function overlaps with the first T/2 seconds of the signal. The window function and the signal are then multiplied. The resulting product is treated as another signal, and its Fourier transform is taken. If this portion of the signal is stationary, as assumed, then there is no problem and the result is a true frequency representation of the first T/2 seconds of the signal. Next, the window is shifted along the signal to a new location, multiplied with the signal, and the Fourier transform of the product is taken again. This process is repeated until the end of the signal is reached. This technique is known as windowing the signal. Gabor's adaptation, called the short-time Fourier transform (STFT) and illustrated in Figure 3.2, maps a signal into a two-dimensional function of time and frequency, as shown in the following equation.

Figure 3.2: Illustration of the short-time Fourier transform.

STFT(τ, f) = ∫ x(t) w(t - τ) e^{-j2πft} dt .............. (3.6)

where x(t) is the signal, w(t) is the window function, τ is the time location where the window is stationed, and f is the frequency.
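A hand-rolled sketch of equation (3.6) shows how windowing recovers time information; the window length, hop size and the test signal are arbitrary choices:

```python
import numpy as np

def stft(x, win_len=128, hop=64):
    """Slide a window along x and take the DFT of each windowed segment."""
    window = np.hanning(win_len)
    frames = [np.fft.rfft(x[s:s + win_len] * window)
              for s in range(0, len(x) - win_len + 1, hop)]
    return np.array(frames)  # rows: time locations; columns: frequencies

fs = 1000
t = np.arange(0, 1, 1 / fs)
# Non-stationary signal: 50 Hz in the first half, 200 Hz in the second.
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))
S = stft(x)
bin_hz = fs / 128                          # frequency resolution of each bin
early = np.argmax(np.abs(S[0])) * bin_hz   # dominant frequency, first frame
late = np.argmax(np.abs(S[-1])) * bin_hz   # dominant frequency, last frame
# `early` lies near 50 Hz and `late` near 200 Hz: both time AND frequency
# information survive, at a precision set by the window size.
```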

The STFT makes a compromise between the time and frequency information of a signal. The smaller the window used, the better you can locate sudden changes, such as peaks or discontinuities. Unfortunately, you become less able to detect low-frequency components of the signal, since they appear to have zero frequency within the window. If you choose a bigger window, you can see more of the lower frequencies, but the worse you do at


localizing in time and detecting sudden changes. Therefore, the STFT can show only a limited amount of information about both components of the signal. The information can only be obtained with limited precision, and that precision is determined by the size of the window used. This problem traces back to what is known as Heisenberg's uncertainty principle: 'The momentum and the position of a moving particle cannot be known simultaneously.' The frequency and time information of a signal at some certain point in the time-frequency plane cannot be known; the exact time-frequency representation of a signal simply cannot be known. We can only ascertain the time intervals in which certain bands of frequencies exist. The STFT's ability to compromise between time and frequency information can be useful, but the disadvantage is that once you have chosen a particular size for the time window, that window remains the same for all frequencies. Many signal analysts require a more flexible approach, where the window size can vary, in order to determine more accurately either the time or frequency information of the signal. This problem is known as the resolution problem. Then, in 1975, Jean Morlet came onto the scene. Morlet realised the resolution problem with the STFT, and decided to take an alternate approach to it, which led to the discovery of the wavelet.

3.5 Historical Background of wavelet

The history of the wavelet is an ambiguous one. Many people had realized that the above-mentioned problem in applying the STFT to different signals needed to be solved. Both engineers and researchers had used wavelets for some time, yet no one had foreseen the ability of wavelets to solve the problem with the STFT. The theory behind wavelets was developed independently in a large number of areas, but it was Yves Meyer who eventually made the connection. He made the statement: "Tracing the history of wavelets is almost a job for an archaeologist. I have found at least 15 distinct roots of the theory, some going back to the 1930's." [11] Meyer got involved almost by accident. He was waiting to photocopy some material when he came across his department's chairman. The chairman was making photocopies too, and they chatted. One day in the spring of 1985, the chairman showed Meyer an article by a physicist called Alex Grossmann, and asked whether it interested him. After reading the article, Meyer realized it involved signal processing, using mathematical techniques he was familiar with. He then took the train to Marseille and started working with


Grossmann. Around 1975, Jean Morlet came across Gabor's time-frequency representation of signals whilst using the STFT. He noticed the STFT had the disadvantage of being imprecise about time at high frequencies when the window was too large, while if the window were set to a small size, information about low frequencies would be lost instead. So Morlet took a different approach. Instead of keeping the size of the window fixed and filling it with oscillations of different frequencies, he kept the number of oscillations fixed and varied the width of the window, thus stretching or compressing it. Frequency is a measure of change over time. Since the number of oscillations in the window is kept constant, stretching the window stretches the oscillations, thus decreasing their frequency; compressing the window compresses the oscillations, thus producing a higher frequency. This window is called a wavelet.

Figure 3.3: Illustrations of the short-time Fourier transform and wavelets.

The upper part of Figure 3.3 shows the STFT of a general signal. The size of the STFT window is fixed and the number of oscillations varies. A small window is 'blind' to low frequencies, which are too large for the window. On the other hand, if one uses a large window, information about a brief change will be lost among the information concerning the entire interval corresponding to the window.

The lower part of Figure 3.3 shows a mother wavelet. Here the mother wavelet (left) is stretched or compressed to change the size of the window. This makes it possible to analyze a signal at different scales. The wavelet transform is sometimes called a 'mathematical microscope': big wavelets give an approximate image of the signal, while smaller wavelets zoom in on the small details.


3.5.1 Definition of wavelets and wavelet transform

Wavelets are a family of functions generated from one single function ψ, called the mother wavelet, by dilations and translations:

ψ_{a,b}(x) = |a|^{-1/2} ψ((x - b)/a)

where ψ must satisfy ∫ ψ(x) dx = 0 and ∫ |ψ(x)|^2 dx = 1.

The wavelet transform is the representation of an arbitrary function f as a decomposition over the wavelet basis, i.e. writing f as an integral over a and b of ψ_{a,b} [5]. In other words, a wavelet is a waveform of effectively limited duration that has an average value of zero. In Figure 3.4, a comparison of a sine wave with a wavelet is shown. There are many available types of wavelet families (known as mother wavelets), such as Daubechies, Meyer, Gaussian, Mexican hat, Morlet and many more. In this work, the Daubechies wavelet is used.

Figure 3.4: Illustration of the difference between a sine wave and a wavelet. The sine wave is not a wavelet: it does not have limited duration, and it is smooth and predictable. The wavelet has limited duration and is irregular and asymmetric.

Wavelet analysis is a version of the windowing technique, but with a varying window size. It allows the use of longer windows when more precise low-frequency information is required, and shorter windows where high-frequency information is needed. The wavelet transform in wavelet analysis is analogous to the Fourier transform in Fourier analysis.

Figure 3.5: Illustration of the wavelet transform.

The very elementary concept of the wavelet transform is shown in Figure 3.5. Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into translated and scaled versions of the original (or mother) wavelet. The wavelet transform compares the signal to the mother wavelet, and produces a set of


coefficients measuring their similarity. Instead of transforming a signal from the time domain into the time-frequency representation, the wavelet transform transforms a signal from the time domain to the scale-time representation. Figure 3.6 contrasts the wavelet view of a signal with the time-domain, frequency-domain and STFT time-frequency views of a signal.

Figure 3.6: Different views of a signal: the time domain (Shannon), the frequency domain (Fourier), the time-frequency plane (Gabor), and wavelet analysis (time-scale).

3.6 Continuous wavelet transform (CWT)


In this section the continuous wavelet transform (CWT) is briefly described, as it is the very basis of the wavelet transform and will give a clear understanding of the discrete wavelet transform. The section starts with the basic theory of the CWT and then discusses its computation.

3.6.1 Basic theory of CWT

The CWT was developed to overcome the resolution problem identified in the STFT. The wavelet analysis is done in a similar way to the STFT analysis: the signal is multiplied with a function, similar to the window function in the STFT, and the transform is computed separately for different segments of the time-domain signal. However, there are two distinctive differences between the STFT and the CWT [11]:

- The Fourier transforms of the windowed signals are not taken, and therefore a single peak will be seen corresponding to a sinusoid.

- The width of the window is changed as the transform is computed for every single spectral component, which is probably the most significant characteristic of the wavelet transform.


The CWT is defined as below:

CWT_x^ψ(τ, s) = (1/√|s|) ∫ x(t) ψ*((t - τ)/s) dt ......... (3.7)

The transformed signal is a function of two variables, τ and s, the translation and scale parameters respectively, and ψ(t) is the transforming function, called the mother wavelet. The term mother wavelet gets its name from two important properties of wavelet analysis. The term wavelet means a small wave: the smallness refers to the condition that this (window) function is of finite length (compactly supported), and the wave refers to the condition that the function is oscillatory. The term mother implies that the functions with different regions of support that are used in the transformation process are derived from one main function, the mother wavelet. In other words, the mother wavelet is a prototype for generating the other window functions. Figures 3.7 and 3.8 illustrate the effect of scaling and translation on a general signal.

Figure 3.7: Illustration of the effect of varying the scaling factor a.

Figure 3.8: Illustration of the effect of varying the translation factor k, showing the wavelet function ψ(t) and the shifted wavelet function ψ(t - k).

Scaling either dilates or compresses the signal. Larger scales correspond to dilated signals, and smaller scales correspond to compressed signals. In terms of mathematical functions, if f(t) is a given function, then f(st) corresponds to a compressed version of f(t) if s > 1, and a dilated version of f(t) if s < 1.


However, in the definition of the wavelet transform, the scaling term appears in the denominator, and therefore the opposite of the above statements holds: scales s > 1 dilate the signal, whereas scales s < 1 compress the signal.

3.6.2 Continuous wavelet transform computation

First a mother wavelet is chosen to serve as a prototype for all windows in the process. All windows used are translated and dilated/compressed versions of the mother wavelet. Once the wavelet is chosen, the computation starts with s = 1, and the CWT is computed for all values of s. In practice the signals are band-limited, so a limited interval of scales suffices. The procedure starts from scale s = 1 and continues for increasing values of s; thus the analysis starts from high frequencies and proceeds towards low frequencies, since scale is the reciprocal of frequency.

The wavelet is placed at the beginning of the signal at time t = 0. The wavelet function at scale 1 is multiplied by the signal and then integrated over all times. The result is then multiplied by a constant, for energy normalization purposes, so that the transformed signal will have the same energy at every scale. The result is the value of the transformation at time t = 0 and scale s = 1, which corresponds to τ = 0 and s = 1 in the time-scale plane.

The wavelet is then translated towards the right by τ while the scale remains at 1. The transformation value is obtained and the wavelet is translated right again by τ. When the wavelet reaches the end of the signal, one row of the time-scale plane has been obtained. The wavelet is then placed at the beginning of the signal again, with the scale value increased. The process is repeated until the entire time-scale plane has been filled.
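The translate-then-scale procedure just described can be sketched numerically. The fragment below is an illustrative Python/NumPy sketch (the implementation in this thesis is in Matlab), and the Mexican hat mother wavelet is an assumption made only for the example:

```python
import numpy as np

def cwt(signal, scales, dt=1.0):
    """Naive CWT: for each scale s, slide the dilated wavelet along the
    signal, multiply and integrate over all times, and normalize by
    1/sqrt(s) so every scale carries the same energy."""
    t = np.arange(len(signal)) * dt

    def mexican_hat(x):  # example mother wavelet (an assumption)
        return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

    rows = []
    for s in scales:              # one row of the time-scale plane per scale
        row = []
        for tau in t:             # translate the wavelet along the signal
            w = mexican_hat((t - tau) / s) / np.sqrt(s)
            row.append(np.sum(signal * w) * dt)  # integrate over all times
        rows.append(row)
    return np.array(rows)         # shape: (number of scales, translations)

sig = np.sin(2 * np.pi * 0.05 * np.arange(128))
plane = cwt(sig, scales=[1, 2, 4, 8])
print(plane.shape)                # (4, 128)
```

Each outer loop iteration fills one row of the time-scale plane, exactly as in the description above.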

3.6.3 Frequency and time resolution

The main reason for switching from the STFT to the wavelet transform was the resolution problem. Every box in Figure-3.9 corresponds to a value of the wavelet transform in the time-frequency plane. The boxes have a certain non-zero area, which implies that the value of a particular point in the time-frequency plane cannot be known. All the points in the time-frequency plane that fall into a box are represented by one value of the wavelet transform. Although the widths and heights of the boxes change, the area is constant: each box represents an equal portion of the time-frequency plane, but gives different proportions to time and frequency.


At low frequencies, the heights of the boxes are shorter, corresponding to better frequency resolution, but their widths are longer, corresponding to poor time resolution. At higher frequencies, the width of the boxes decreases, so the time resolution gets better, and the heights of the boxes increase, making the frequency resolution poorer. In the STFT the time and frequency resolutions are determined by the width of the analysis window, which is selected once for the entire analysis. Therefore the time-frequency plane consists of squares in the STFT.

Figure-3.9: Time/frequency representation in CWT

3.7 Discrete wavelet transform (DWT)

In this section the discrete wavelet transform (DWT) is introduced; this is the class of wavelet transforms that it is possible to implement on a personal computer. In this work the discrete wavelet transform is used.

3.7.1 Importance of discrete wavelet transform

The application of the wavelet transform in signal and image compression has attracted a great deal of attention. It is known that it generates a multi-resolution representation of an image. There are several subimages or subbands that might be encoded more efficiently than the original image. The wavelet transform technique breaks the image information into various frequency bands and encodes each subband using a suitable coding system [10]. Consequently, different coding approaches or different bit rates can be assigned to each subimage. Separate coding of different subbands provides some desirable features. First, by allocating the available bits for encoding among subbands and using an appropriate quantizer for each of them, the encoding process can be tailored to the statistics of each subband. Second, spectral shaping of quantization noise is possible. This feature can be used to take advantage of the noise perception of the human auditory system for speech, or of the human visual system for images. Third, subband decomposition of a signal or image leads naturally to multiresolution decomposition. This is


useful for progressive transmission of images, in which an increasingly higher resolution and quality image can be reconstructed by the decoder. To get a high compression ratio, one cannot code the whole information of an image; only the significant information of an object is needed to reconstruct the image with little distortion or degradation [10]. A more sophisticated wavelet can also provide more energy compaction than the Haar wavelet. Daubechies et al. [10] have shown that a measure of a wavelet's ability to provide compaction is the number of vanishing moments it possesses. More vanishing moments imply more compaction in smooth regions. The Haar wavelet has only one vanishing moment; therefore, it does not possess very strong compaction ability. The wavelet series is simply a sampled version of the CWT, and the information provided is highly redundant. This redundancy requires a significant amount of computation time and resources. The discrete wavelet transform provides sufficient information both for analysis and reconstruction of the original signal, with a reduction in the computation time [4].

As defined previously, the general form of the wavelet is

ψ^{a,b}(t) = |a|^{-1/2} ψ((t − b)/a)

In the case of the discrete wavelet transform, a common choice for a and b is

a = 2^{−m}, b = n 2^{−m}, where n, m ∈ Z

This choice for a and b reduces the continuous-time wavelet to the discrete-time wavelet as follows:

ψ_{m,n}(t) = 2^{m/2} ψ(2^m t − n)

These wavelets are used in the wavelet transform. The purpose of the wavelet transform is to represent a signal, x(t), as a superposition of wavelets. For special choices of ψ the signal can be represented as follows; this process is called the discrete wavelet transform (DWT):

x(t) = Σ_{m,n} c_{m,n} ψ_{m,n}(t), where c_{m,n} = ∫ x(t) ψ_{m,n}(t) dt

The purpose of obtaining this description is that it provides a representation of the signal x(t) in terms of both space and frequency localization. The coefficients c_{m,n} characterize the projection of x(t) onto the basis formed by ψ_{m,n}. For different m, ψ_{m,n} represents different frequency characteristics; n is the translation of the dilated mother wavelet. Therefore the c_{m,n} represent the combined space-frequency characteristics of the signal. These c_{m,n} are called the wavelet coefficients.
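As a concrete illustration of these coefficients, one level of the Haar DWT (the Haar wavelet is chosen here only because its filters are the shortest) can be written directly, and both perfect reconstruction and energy preservation can be verified. This is an illustrative Python sketch, not the Matlab code of this work:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: approximation and detail coefficients
    as inner products with the scaled/translated Haar basis."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass, downsampled by 2
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass, downsampled by 2
    return approx, detail

def haar_idwt_level(approx, detail):
    """Perfect reconstruction from the two coefficient bands."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
cA, cD = haar_dwt_level(sig)
assert np.allclose(haar_idwt_level(cA, cD), sig)  # invertible transform
# Orthonormal basis: coefficient energy equals signal energy.
assert np.isclose(np.sum(cA ** 2) + np.sum(cD ** 2), np.sum(sig ** 2))
```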


3.7.2 Wavelet features for image compression

In this section a summary of some features of image compression using wavelets is given.

i. The wavelet transform has a good energy compaction capability, and energy is preserved across the transform, i.e. the sum of squares of the wavelet coefficients is equal to the sum of squares of the original image.

ii. Wavelets can provide good compression [10]; they can perform better than DCT-based JPEG, both in terms of SNR and image quality, and show no blocking effect, unlike JPEG.

iii. The entire image is transformed and compressed as a single data object using the wavelet transform, rather than block by block. This allows for uniform distribution of compression error across the entire image and at all scales.

iv. Wavelet transform methods have been shown to preserve integrity at higher compression rates than other methods, which matters where integrity of data is important, e.g., medical images and fingerprints.

v. Multiresolution properties allow for progressive transmission and zooming, without extra storage.

vi. The operation is fast, and it is symmetric: both the forward and inverse transform have the same complexity, in both the compression and decompression phases.

vii. Many image operations such as noise reduction and image scaling can be performed on wavelet-transformed images.

The resulting wavelet multiresolution consists of scaled subbands: a coarse approximation of the original image and detail coefficients, and there is statistical dependence across scales in these coefficients. Efficient encoding exploits this dependence; this is how state-of-the-art wavelet transform compression is achieved [10].


3.7.3 Subband coding

Subband coding is a method for calculating the discrete wavelet transform. The whole subband process consists of a filter bank, and filters of different cut-off frequencies are used to analyze the signal at different scales. The procedure starts by passing the signal through a half-band high-pass filter and a half-band low-pass filter. A half-band low-pass filter eliminates exactly half the frequencies, from the upper end of the frequency scale. For example, if a signal has a maximum frequency component of 1000 Hz, then half-band low-pass filtering removes all the frequencies above 500 Hz. The filtered signal is then down-sampled, meaning half of the samples are removed. The resulting signal from the down-sampled half-band low-pass filter is then processed in the same way again. This process produces sets of wavelet transform coefficients that can be used to reconstruct the signal. An example of this process is illustrated in Figure-3.10. The resolution of the signal is changed by the filtering operations, and the scale is changed by the down-sampling operations. Down-sampling a signal corresponds to reducing the sampling rate, which is equivalent to removing some of the samples of the signal.


Figure 3.10: Wavelet decomposition tree, where cAx is the approximation coefficients at decomposition level x, cDx is the detail coefficients at decomposition level x, and S is the original signal.

From Figure 3.10, it is seen that the original signal is broken down into different levels of decomposition; in the above case, it is a 3-level decomposition. Every time the newly scaled wavelet is applied to the signal, the information captured by the coefficients remains stored at that level. Thus the remaining information contains the higher frequencies of the signal as the scaling factor decreases.
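The filter-then-down-sample recursion behind the decomposition tree can be sketched as follows; the Haar half-band filter pair is an assumption made for the example:

```python
import numpy as np

def analysis_step(s, lo, hi):
    """Filter with the half-band low/high-pass pair, then down-sample by 2."""
    return np.convolve(s, lo)[::2], np.convolve(s, hi)[::2]

def wavelet_tree(s, levels=3):
    """Repeatedly split the low (approximation) band, as in Figure 3.10."""
    lo = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low-pass (an assumption)
    hi = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass
    details = []
    for _ in range(levels):
        s, d = analysis_step(s, lo, hi)
        details.append(d)                    # cD1, cD2, cD3, ...
    return s, details                        # cA3 plus all detail bands

sig = np.random.default_rng(0).standard_normal(64)
cA3, (cD1, cD2, cD3) = wavelet_tree(sig, levels=3)
print(len(cD1), len(cD2), len(cD3), len(cA3))
```

Each band is roughly half the length of its parent, mirroring the halving of scale at every level of the tree.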

Wavelet encoding involves taking the discrete-time wavelet transform of the data and quantizing the wavelet subbands [4]. The design of the quantizer is based on statistical


analyses of the wavelet coefficients for sample data and requires careful study [4]. There are two issues in particular that must be addressed. The first is bit allocation, i.e. the assignment of the rate for each of the wavelet subbands, and the second is the size of the vectors for each wavelet scale. Bit allocation is the process of assigning a given number of bits to a set of different sources (i.e. wavelet subbands) to minimize the overall distortion of the coder. For uniform scalar quantization, the bit allocation scheme chooses the size of the quantizer step for each source.

3.8 Summary

In this chapter the evolution of the discrete wavelet transform (DWT) was explained briefly, starting from the Fourier transform. The problem with the Fourier transform is that time-domain information is lost when frequency information is obtained. In the Short Time Fourier Transform (STFT), a short-width window was used to address this problem. Then came the wavelet transform, with the facility of localization in both time and frequency at the same time. Lastly the DWT, the sampled version of the CWT, which can be implemented on a simple personal computer, was introduced.


Chapter 4
Vector Quantization

4.1 Introduction

Vector quantization (VQ) is a lossy compression technique that has been used extensively in speech and image compression [4]. An extension of scalar quantization, VQ exploits the memory or correlation that exists between neighboring samples of a signal by quantizing them together rather than individually. This chapter introduces VQ in detail, starting from the very elementary concepts behind it. Given N possible data symbols S_n, n = 0, 1, ..., (N − 1), the rate R needed to uniquely represent the data with a fixed number of bits is defined as R = ⌈log₂ N⌉. The first-order entropy, which is the lower bound on the number of bits required to uniquely represent the data when each symbol is coded independently, is defined as

H = − Σ_{n=0}^{N−1} P(S_n) log₂ P(S_n)

If blocks of data are coded, lower entropies can be achieved. This is the very basic idea behind VQ. There are mainly two types of VQ: i) full search VQ, and ii) tree-structured VQ. Both of these will be described briefly in this chapter.
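As a small numerical illustration of the fixed rate and the first-order entropy (the symbols and their counts below are invented for the example):

```python
import math
from collections import Counter

def fixed_rate(n_symbols):
    """Bits needed for a fixed-length code: R = ceil(log2 N)."""
    return math.ceil(math.log2(n_symbols))

def first_order_entropy(data):
    """H = -sum P(s) log2 P(s): the lower bound when each symbol
    is coded independently."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

data = ['a', 'a', 'a', 'a', 'b', 'b', 'c', 'd']
print(fixed_rate(4))               # 2 bits per symbol
print(first_order_entropy(data))   # 1.75 bits per symbol
```

The entropy (1.75 bits) falls below the fixed rate (2 bits) because the symbols are not equiprobable; coding blocks of symbols can push the rate lower still, which is the idea VQ builds on.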

4.2 Brief introduction to vector quantization

A block diagram of the various steps involved in vector quantization, as applied to image coding, is depicted in Figure 4.1. The first step in image vector quantization is the decomposition of the image into a set of vectors {V}. In the second step, a subset {T} of {V} is chosen as a training set. In the third step, a codebook is generated from the training set {T}, normally with the use of an iterative clustering algorithm.


Figure-4.1: Principle of vector quantization. The dashed lines correspond to training set generation, codebook generation, and transmission.

The quantizer or coding step involves searching, for each input vector, the closest codeword w in the codebook W; the corresponding label of this codeword is then transmitted. Thus, the design decisions in implementing image vector quantization are as follows [7]:

a) Vector formation,

b) Training set generation,

c) Codebook generation,

d) Quantization.

Briefly these are introduced at this stage as follows:

4.2.1 Vector formation

The first step in vector quantization is vector formation, that is, the decomposition of the image into a set of vectors [6]. Many different decompositions have been proposed; examples include the color components of a pixel [6], the intensity values of a spatially contiguous block of pixels, these same intensity values normalized by the mean and variance, the transformed coefficients of a block of pixels, and the adaptive linear predictive coding (LPC) coefficients for a block of pixels.

4.2.2 Training set generation

An optimal vector quantizer should ideally match the statistics of the input vector source. However, if the statistics of an input vector source are unknown, a training set representative of the expected input vector source is used to design the vector quantizer. If the expected input vector source has a large variance, then a large training set (and codebook) is needed. To alleviate this problem the input vector source can be subdivided. For example, a single input vector source can be subdivided into two more homogeneous sources corresponding to "edge" and "shade" [6] vectors, respectively. Separate training sets are then used for each source and the resulting


codebooks are then concatenated. Small local input sources, corresponding to portions of the image, may be used as training sets; thus the codebook can better match the local statistics.

4.2.3 Codebook design

In general, one is concerned with optimal quantizers whose performance minimizes the average distortion D = E[d(X,Y)]. There are two conditions that must be met for a codebook to be optimal. These conditions are defined on the cells, and if they are met, the cells are called Voronoi regions. The first condition is the nearest neighbor (NN) condition, which states that every value in a given Voronoi region R_i (as given in equation 4.1) is "closer" or nearer to the codeword Y_i of that Voronoi region than to any other codeword with respect to a given distortion measure, that is

R_i = {X : d(X,Y_i) ≤ d(X,Y_j), for j = 0, 1, 2, ..., (N − 1)} ....... (4.1)

In the case of a tie, the vector X can be arbitrarily assigned to the Voronoi region with the lesser index to maintain mutual exclusivity of the partitions. The second condition for optimality is the centroid condition (as given in equation 4.2), which states that the codeword must minimize the average distortion of the entire Voronoi region with respect to a given distortion measure, that is

Y_i = Centroid(R_i) = arg min_{Y ∈ R_i} E[d(X,Y) | X ∈ R_i] ............. (4.2)

When the MSE distortion measure is used, the centroid corresponds to the "average" value or the geometric centre of the Voronoi region. These two conditions imply that the encoder is optimized for the given decoder by the nearest neighbor condition, and that the decoder is optimized for the given encoder by the centroid condition.

A generalization of a least squares quantization method for scalar data [4] to multidimensional data in 1980 was the first vector quantization design technique introduced in the literature and is known as the generalized Lloyd algorithm (GLA). The GLA is a descent algorithm that finds codebooks with progressively lower average distortions by using iterations of the nearest neighbor and centroid conditions. The algorithm is initialized with an arbitrary set of initial codewords Y[0] = {Y_i}, i = 0, 1, ..., (N−1), training data X, and initial distortion D[0] = E[d(X, Y[0])].


One iteration of the GLA, where n refers to the current iteration, works as follows: given codebook Y[n] = {Y_i}, the first step is to find the optimal partition of the training data using the nearest neighbor condition as in equation 4.1. Then, using the centroid condition, the optimal reproduction vectors Y_i for the given partitions R_i are found. These reproduction vectors constitute the (n+1)th codebook Y[n+1]. The distortion due to codebook Y[n+1] is evaluated as

D[n + 1] = E[d(X, Y[n + 1])] .............. (4.3)

The GLA is a descent algorithm that will converge to a local minimum. However, while convergence is guaranteed, the number of iterations it takes to converge can be quite large. Typically the algorithm is halted when the relative change in distortion incurred between two iterations becomes arbitrarily small, as defined by ε in the following equation:

(D[n] − D[n + 1]) / D[n] < ε .............. (4.4)

and the final codebook is determined to be Y = Y[n+1].
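Under the MSE distortion measure, one GLA run can be sketched as follows (an illustrative Python/NumPy sketch rather than the Matlab used in this work; the training data, initial codebook, and threshold ε are invented for the example):

```python
import numpy as np

def gla(training, codebook, eps=1e-3, max_iter=100):
    """Generalized Lloyd algorithm: alternate the nearest-neighbor and
    centroid conditions until the relative distortion change drops
    below eps (the Eq. 4.4 stopping rule)."""
    prev_d = np.inf
    for _ in range(max_iter):
        # Nearest-neighbor condition: partition the training data (Eq. 4.1).
        dists = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        d = dists[np.arange(len(training)), labels].mean()
        if prev_d < np.inf and (prev_d - d) / prev_d < eps:
            break
        # Centroid condition: recompute codewords (Eq. 4.2, MSE centroid).
        for i in range(len(codebook)):
            if np.any(labels == i):
                codebook[i] = training[labels == i].mean(axis=0)
        prev_d = d
    return codebook, d

rng = np.random.default_rng(1)
train = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
cb, dist = gla(train, train[:4].copy())
print(cb.shape, dist)
```

Each pass strictly lowers (or holds) the average distortion, which is why the descent converges to a locally optimal codebook.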

To design a full search VQ with an arbitrary number of codewords, one may start with any initial codebook Y[0] with the desired rate and apply the GLA to it until it converges to a locally optimal codebook Y. Obviously, different initial codebooks will converge to different locally optimal codebooks; the choice of initial codebook is therefore important. One common method of creating a full-search codebook is the Linde-Buzo-Gray (LBG) splitting algorithm [5]. In this method an initial codebook of rate 0 containing 1 codeword, Y⁰ = {Y₀}, is used, where the optimal codeword is the centroid of the training data. Note that the superscript on the codebook Yⁱ now refers to the rate i of the codebook. An initial codebook of size 2 (rate 1) is created by "splitting" the codeword, which may be done by adding a vector ε of arbitrarily small, random values, e.g. Y¹[0] = {Y₀, Y₀+ε} = {Y₀, Y₁}. The GLA is applied to Y¹[0] to create codebook Y¹. For each LBG iteration, the codebook size is increased by splitting the codewords of the previous codebook Yⁿ to create a larger initial codebook Yⁿ⁺¹[0], and the GLA is applied to the larger codebook to achieve the optimized codebook Yⁿ⁺¹. The iterations continue until the desired rate is achieved.
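The splitting procedure can be sketched in the same style (again an illustrative Python fragment; the perturbation size and the fixed number of inner GLA iterations are simplifying assumptions of this sketch):

```python
import numpy as np

def lbg(training, target_size, perturb=1e-2):
    """LBG splitting: start from the rate-0 codebook (the global centroid),
    double the codebook by splitting each codeword with a small random
    vector epsilon, then re-optimize with GLA iterations after each split."""
    rng = np.random.default_rng(0)
    codebook = training.mean(axis=0, keepdims=True)      # rate-0 codebook
    while len(codebook) < target_size:
        eps = rng.normal(0.0, perturb, codebook.shape)
        codebook = np.vstack([codebook, codebook + eps])  # split step
        for _ in range(10):                     # a few GLA iterations
            d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            for i in range(len(codebook)):
                if np.any(labels == i):
                    codebook[i] = training[labels == i].mean(axis=0)
    return codebook

rng = np.random.default_rng(2)
train = rng.standard_normal((256, 2))
cb8 = lbg(train, target_size=8)
print(cb8.shape)    # (8, 2)
```

Each split adds one bit of rate (1 → 2 → 4 → 8 codewords), matching the Y⁰, Y¹, Y² progression described above.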

In this work the Self Organizing Feature Map (SOFM) [5] is used to generate the codebook for full search vector quantization. The SOFM is a network structure model with two layers. Each input node is connected with each output node by variable weights, as shown in Figure-4.2. The SOFM


produces the codebook for vector quantization by modifying the weights between the input nodes and the output nodes.

Figure-4.2: Schematic explanation of the SOFM algorithm. (The figure shows input layer nodes x₁, x₂, ..., x_K connected to the output layer through weight vectors W.)

The dimension of a sample is K and the target is to obtain N output quantization vectors. After training, the weight vectors W_j = (ω_{0j}, ω_{1j}, ..., ω_{(K−1)j})ᵀ, j = 0, 1, 2, ..., N−1 represent the representative vectors of the coded image; a sample X is input to the network and the output nodes compete again.

The steps in the SOFM algorithm [2] are shown below:

a. Given the number of output nodes N (the number of codebook vectors) and the number of input nodes K (the elements of a vector), initialize the weights ω_{ij} from input node i to output node j;

b. Input X = (x₁, x₂, ..., x_K)ᵀ to the network;

c. Calculate the distance between the input vector X and the weight vectors W_j connected with each of the output nodes:

d_j = Σ_{i=1}^{K} [x_i − ω_{ij}]², here j = 1, 2, ..., N

d. Select node j* (corresponding to the minimum of d_j) as the responding node.

e. Adjust the weights of j* and its neighborhood NE_{j*}(t):

ω_{ij}(t+1) = ω_{ij}(t) + α(t) [x_i(t) − ω_{ij}(t)], j ∈ NE_{j*}(t), 1 ≤ i ≤ K, 0 ≤ α(t) ≤ 1

Here, α(t) is a variable learning rate.

If there are still input sample data, then return to step b until the algorithm converges.
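Steps a-e can be sketched as follows (a one-dimensional, linearly shrinking neighborhood NE(t) and a linearly decaying α(t) are assumptions of this illustration, not prescribed by [2]):

```python
import numpy as np

def sofm_codebook(samples, n_codewords, epochs=20):
    """1-D SOFM sketch of steps a-e: find the winning output node and pull
    it (and a shrinking neighborhood) towards each input vector, with a
    decaying learning rate alpha(t)."""
    rng = np.random.default_rng(0)
    K = samples.shape[1]
    W = rng.standard_normal((n_codewords, K)) * 0.1  # step a: init weights
    for t in range(epochs):
        alpha = 0.5 * (1.0 - t / epochs)             # variable learning rate
        radius = max(1, int(n_codewords // 2 * (1.0 - t / epochs)))
        for x in samples:                            # step b: input a vector
            d = ((W - x) ** 2).sum(axis=1)           # step c: distances d_j
            j_star = int(d.argmin())                 # step d: winning node
            lo = max(0, j_star - radius)
            hi = min(n_codewords, j_star + radius + 1)
            W[lo:hi] += alpha * (x - W[lo:hi])       # step e: adjust weights
    return W

rng = np.random.default_rng(3)
data = rng.standard_normal((200, 4))
weights = sofm_codebook(data, n_codewords=16)
print(weights.shape)   # (16, 4)
```

After training, each row of the weight matrix serves as one codeword of the VQ codebook.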

Full search vector quantization is very easy to understand and also not very complex to implement. It is also the basic vector quantization technique. In this work full search VQ is


used. Matlab is used for the software implementation of this algorithm. First the codebooks were created using the SOFM algorithm and then full search VQ is applied.

4.2.4 Quantization

In vector quantization, the process of quantization involves finding the closest codeword for each input vector. In this work the straightforward expedient of searching the entire codebook is feasible, as the codebooks are of relatively small size.

4.3 Classification of vector quantization

In this section the different types of existing vector quantization are discussed briefly. The main classifications of vector quantization are:

i. Full Search Vector Quantization,

ii. Tree Structured Vector Quantization,

iii. Pruned Tree Structured Vector Quantization.

4.3.1 Full search vector quantization

All forms of vector quantization use codebooks for encoding and decoding. A codebook is a collection of codewords or possible reproduction vectors. In full search VQ, as shown in Figure-4.3, the encoder computes the distortion between an input vector (a group of data samples) and all codewords in an unstructured codebook. The binary index of the codeword that has the least distortion with respect to the input vector is transmitted (or stored). The decoder performs a simple table lookup with the transmitted (or stored) index and outputs the reproduction vector. Note that the input is in general not equal to the output, because this is a many-to-one mapping. The rate R of a full search VQ codebook with vector dimension d is defined to be R = (log₂ N)/d bits per vector element. Using this definition, the size of the codebook can be rewritten as N = 2^{Rd}. The size of the codebook, and hence the size of the search, grows exponentially with rate and vector dimension. The storage requirements of the codebook are low, as only the N codewords need to be stored. The full search, while computationally complex, guarantees that the best possible representation of the input vector will be selected. Several fast full-search methods, which significantly reduce the search time by ordering the codebook (requiring a larger storage structure) to restrict searches to a small portion of the search space, have been developed for full search VQ.
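The encoder/decoder pair described above reduces to a few lines (the codebook and input vectors below are invented for the example):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Full search: compare each input vector against every codeword and
    keep the index of the one with least (squared-error) distortion."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)      # indices, log2(N) bits each

def vq_decode(indices, codebook):
    """Decoder: a simple table lookup of the stored codewords."""
    return codebook[indices]

codebook = np.array([[0., 0.], [0., 4.], [4., 0.], [4., 4.]])  # N=4, d=2
vectors = np.array([[0.2, 0.1], [3.9, 4.2]])
idx = vq_encode(vectors, codebook)
rec = vq_decode(idx, codebook)
rate = np.log2(len(codebook)) / codebook.shape[1]  # R = log2(N)/d
print(idx, rate)    # indices [0 3], rate 1.0 bit per vector element
```

The mapping is many-to-one: `rec` is the nearest codeword, not the original input, which is what makes the scheme lossy.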


Figure-4.3: Full search vector quantization methodology.

4.3.2 Tree structured vector quantizer

Tree-structured VQ (TSVQ) is a low-complexity alternative to standard full search VQ. As shown in Figure 4.4, the codebook is structured as a binary (or M-ary) tree where the leaves of the tree are codewords Y_i, and the intermediate nodes of the tree are centroids of the codewords of their children. Beginning from the root node of the codebook, the encoder computes the distortion between an input vector X and a node's children and selects the child node that produces the lowest distortion with respect to the input vector. This process is repeated until a leaf node (codeword) is reached. The binary index I of the leaf node codeword (i.e. the path map from the root to the leaf node) is then transmitted (or stored). The decoder performs a simple table lookup with the transmitted (or stored) index I and outputs the reproduction vector Y_i. Note that X ≠ Y_i in general, and that Y_i is not necessarily the best possible representation of the input vector that would be found among all of the codewords if a full search were done.


Figure-4.4: Tree structured vector quantizer.

The rate R of a balanced binary tree structured VQ codebook with vector dimension d and N leaf codewords is defined to be R = (log₂ N)/d bits per vector element. Since the encoder performs a sequence of binary (or larger) searches instead of the one large search done in full search VQ, encoding complexity increases linearly with rate and vector dimension rather than increasing exponentially. TSVQ codebooks require twice the storage space of a full search VQ with the same number of codewords, since the N − 1 intermediary codewords must also be stored. In general the output of a TSVQ will suffer more degradation than the output of a full search VQ with the same number of codewords. This is due to the constraint on the search. However, an unbalanced TSVQ with the same rate as a full search VQ may have many more codewords, and may outperform a full search VQ in terms of distortion as well as speed. Hence, the trade-off of greatly reduced search and design complexity for some possible increase in distortion usually makes TSVQ attractive. Another type of vector quantization exists, the pruned tree structured vector quantization, as explained in [4] and [12]. Here it is described in brief.
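The constrained tree search can be sketched as follows; the small hand-built tree, its intermediate test vectors, and the dictionary representation are all invented for the example:

```python
import numpy as np

def tsvq_encode(x, tree):
    """Tree search: at each internal node compare the input against the two
    child vectors and descend towards the closer one; the collected bits
    form the path map that is transmitted."""
    path = []
    node = tree
    while 'children' in node:
        left, right = node['children']
        dl = ((x - left['vector']) ** 2).sum()
        dr = ((x - right['vector']) ** 2).sum()
        bit = 0 if dl <= dr else 1
        path.append(bit)
        node = (left, right)[bit]
    return path, node['vector']          # path map + leaf codeword

leaf = lambda v: {'vector': np.array(v, dtype=float)}
tree = {'vector': np.array([2., 2.]),
        'children': ({'vector': np.array([0.5, 0.5]),
                      'children': (leaf([0., 0.]), leaf([0., 1.]))},
                     {'vector': np.array([3.5, 3.5]),
                      'children': (leaf([4., 4.]), leaf([3., 2.]))})}

path, codeword = tsvq_encode(np.array([3.8, 4.1]), tree)
print(path, codeword)   # [1, 0] -> codeword [4. 4.]
```

Encoding this vector takes two 2-way comparisons instead of the four a full search over the same leaves would need; the path map `[1, 0]` itself is the transmitted index.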

4.3.3 Pruned Tree structured vector quantization

A natural variable rate code results if the balanced tree (a tree with all terminal nodes at the same layer) underlying tree structured vector quantization is replaced with a pruned tree in which terminal nodes lie at varying depths, as shown in Figure 4.5. The variable rate code arises if the path map is used to specify the chosen reproduction, since the number of bits required will then depend on the particular reproduction.


Figure 4.5a: The structure of a depth-two binary TSVQ. Figure 4.5b: A pruned tree structured vector quantizer.

Figure 4.5: A schematic TSVQ and pruned TSVQ structure explanation.

In designing a pruned tree structured vector quantizer, first one has to design a large TSVQ. Then the generalized Breiman, Friedman, Olshen, and Stone (BFOS) algorithm [12] is used to remove portions of the TSVQ in an optimal manner so as to trade average rate (length of the path map) for average distortion. The algorithm has complexity much less than exhaustively searching all pruned subtrees of a tree. Finding the optimal subtrees of an M node tree requires a search through at most M subtrees, each requiring O(M) time at most and O(log M) time on average. This is much less time than required for an exhaustive search through all pruned subtrees [12]. (As an example, for a balanced tree with M = 63 nodes, the number of pruned subtrees is 458300 [12].) The key to the algorithm is that these optimal subtrees are nested. The resulting quantizers have the virtue of simplicity, since the search is still a tree search rather than a full search and no second stage of noiseless variable rate coding is required. In addition, pruned TSVQ inherits the graceful degradation or embedded coding property of TSVQ: the quality of reproduction for any particular vector can be gracefully sacrificed for reduced bit rate by deleting bits from the path map, working from the terminal node towards the root. This property is useful in progressive transmission and in buffered systems, since the effects of buffer overflow can be mitigated.

In general it is preferable to prune as large an initial tree as possible, even for a fixed training sequence. Initial tree size is generally limited by memory, computational resources, and the amount of available training data. The generalized BFOS algorithm can also be applied to a tree structured vector quantizer to trade off the entropy of the choice of terminal node for average distortion. This is called entropy-pruned tree structured vector quantization [12], another type of vector quantization. Unlike pruned tree structured vector quantization, effective application of these entropy-pruned tree structured vector quantizers usually requires a second stage of coding in which the codeword indices (the labels at the leaves) are coded with


a variable rate noiseless code. Alternatively, the sequence of left-child/right-child decisions can be transmitted using a binary arithmetic code. Thus the entropy-pruned tree structured vector quantizer is somewhat more complicated than the straightforward pruned tree structured vector quantizer, but it is more efficient in the distortion-rate sense [12].

4.4 Full search vector quantizer design: The generalized Lloyd algorithm

A vector quantizer Q maps a d-dimensional Euclidean source vector X ∈ Rᵈ onto a finite set of reproduction vectors Y = {Y_i}, i = 0, 1, ..., N − 1, where Y is called a codebook and the Y_i ∈ Rᵈ are the codewords of the codebook. Associated with each codeword Y_i is a cell, or region, R_i, which is a partition of Rᵈ. The cells are mutually exclusive, and the union of the cells must cover the entire space Rᵈ, as expressed by the following equation:

R_i ∩ R_j = ∅, i ≠ j, and ∪_i R_i = Rᵈ ........... (4.5)

Quantization works by assigning to any random vector X the associated cell's codeword Y_i, given that X is an element of cell R_i, as given in the following equation:

Q(X) = {Y_i : X ∈ R_i} ......... (4.6)

The mapping Q : Rᵈ → Y is many-to-one and is thus irreversible.

Quantizer performance is typically evaluated by a distance metric or distortion measure d(x,y), where x and y are vectors in Rᵈ. The average distortion D due to quantization is evaluated as the expected distortion per source vector, where the source X is treated as a random variable:

D = E[d(X,Q(X))] = E[d(X,Y_i)] = Σ_{i=0}^{N−1} P_i E[d(X,Y_i) | X ∈ R_i] ....... (4.7)

where P_i is the probability that the source vector X is in R_i.

Typical metrics used for d(x,y) are the L_p norms, as defined by the following equation:

d(x,y) = [Σ_i |x_i − y_i|^p]^{1/p} .................... (4.8)

and the squared error, which is the square of the L₂ norm (i.e. the square of the Euclidean distance), defined by the following equation:

d(x,y) = |x − y|² .................. (4.9)

leading to the mean square error (MSE) distortion measure defined by the equation

MSE = E[d(X,Q(X))] = E[|X − Q(X)|²] ........... (4.10)

The MSE is often used due to its numerical tractability. Although it is not always perceptually meaningful, low MSEs (on compressed images) usually correspond to high quality, and high MSEs usually correspond to poor quality. Other commonly used measures are the Peak

Page 51: Chapter 3 An Introduction to Wavelet Transform

Signal to Noise Ratio (PSNR) and Signal to Noise Ratio (SNR) as defined by equations as

follows:

PSNR = IOIOglo[~] (4.11)MSE

and SNR = IOIOglo[_E_(X_'_)] (4.12)MSE

where A is the maximum value of X. PSNR is typically used to evaluate image quality. Since each pixel in an 8-bit image relates to color or grayscale intensity, the energy of different images can vary greatly even though each image is just as visually important. For example, a light gray image can be just as visually important as a dark gray image, but the power of these two images varies greatly due to the arbitrary numerical assignments given to pixels. In this work PSNR is used to evaluate the image quality.
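For an 8-bit image the peak value is A = 255, and equations 4.10 and 4.11 can be computed directly. A small sketch with made-up pixel values:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """PSNR = 10*log10(A^2 / MSE), with MSE as in equation 4.10."""
    n = len(original)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    return 10.0 * math.log10(peak ** 2 / mse)

orig = [52, 55, 61, 66]     # made-up pixel values
recon = [54, 55, 60, 68]    # their "reconstruction"
# MSE = (4 + 0 + 1 + 4) / 4 = 2.25, so PSNR = 10*log10(255^2 / 2.25)
```

In practice the sums run over all pixels of the image; the one-row example only shows the arithmetic.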

The definitions of the previously mentioned distortion measures depend on well-defined source characteristics. Unfortunately, good models for multidimensional data and image data are not common, so training data or empirical data are commonly used to approximate the probability distribution function of the source data. Training data are often a sequence of vectors or samples X = {Xi} gathered from the data source. The average distortion is still defined as given by the following equation:

D = E[d(X, Y)] = Σ (i = 0 to N−1) Pi E[d(X, Yi) | X ∈ Ri] .......... (4.13)

but X now refers to the training data, Ri refers to partitions of the training data that are subsets of the previously defined cell subspaces, rather than to the entire subspace itself, and Pi is redefined as the probability that a training vector X is in partition Ri, representing the probability that the codeword Yi is used for encoding. In this work, X and Ri respectively refer to the training data and the partitions of the training data into cells, although the descriptions are still valid if X is a random source variable and the Ri are subspaces of R^d.

4.5 Summary

In this chapter, starting from the basic definitions of vector quantization, the Self-Organizing Feature Map (SOFM) was introduced, which is used to train the codebooks in the image compression process of this thesis work. The Lloyd algorithm and the Generalized Lloyd algorithm, which are necessary to generate the codebooks, were then described, and the LBG algorithm was introduced for the same purpose.


Chapter 5
Vector Quantization Based Image Compression Using Wavelet Transform

5.1 Introduction

Wavelets are able to de-correlate image data, so that the resulting coefficients can be efficiently coded, and they also possess good energy compaction capability, which results in a high compression ratio. The wavelet transform decomposes an image into a set of different-resolution sub-images corresponding to the various frequency bands. This results in a multi-resolution representation of images with spatial and frequency domain localization. This multi-resolution property distinguishes the wavelet domain from other domains, such as the Fourier and Discrete Cosine domains, as explained in earlier chapters. However, the application of the wavelet transform to images does not by itself reduce the amount of data enough for practical purposes. Vector quantization [2, 4] is a powerful tool for digital image compression. A combined approach to image compression, based on the wavelet transform [1, 13, 16] and vector quantization [4, 16], is presented in this chapter. Details of the steps involved in this work will be described in this chapter.

5.2 The complete method of image compression

The images used for generating the different codebooks are called training images, as shown in Appendix A. The different images used for encoding are called test images. The different steps of the image compression process will now be discussed. The whole work of this thesis can be divided into three steps: i) codebook generation, ii) encoding of the original image, and iii) decoding of the image. All of these steps are described in detail in the following sections.

5.2.1 Codebook generation step

In this step of the work, ten codebooks for original image reconstruction and another ten error codebooks for error reconstruction are generated for the whole vector quantization process. In the first step of generating the ten codebooks, four different images are used; these ten codebooks are necessary for the vector quantization process applied to the image data in the compression process. Each of these images is subjected to a 3-level discrete wavelet transform (DWT) decomposition. After the three-level wavelet decomposition of an image, ten subbands are generated for each image, as shown in Figure 5.1. There were four such images. Similar subbands of each image are then combined (i.e., there were four cA2, cV2, cD2, cH2, etc., one for each image, so the four cA2's, and likewise the other subbands, were each used to form one image) to form a single image, so that there were ten separate images. Using these ten separate images, ten separate codebooks are generated using the Self-Organizing Feature Map (SOFM) (as explained in chapter four).

[Subband layout: cA2, cH2, cV2, cD2 (coarsest level); cH1, cV1, cD1 (middle level); cH, cV, cD (finest level)]

Figure 5.1: Different subbands of a general image after 3-level wavelet transform.

At this stage the previous ten sub-images are used as test images. Using the previously generated ten codebooks, the ten sub-images are reconstructed using the codebook search method of vector quantization. These ten reconstructed images are then compared with the original ten images, and the error between the two is calculated, so there exist ten error images (one for each subband). These ten error sub-images are now taken as new test images for generating the error codebooks. Using these ten initial image errors, the SOFM algorithm is again used to create ten error codebooks. This ends the codebook generation step. At this stage ten original codebooks and ten error codebooks are available for the experimental encoding and decoding processes.
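The 3-level decomposition underlying the codebook generation can be illustrated in Python with a hand-written one-level Haar ('db1') transform applied recursively. This is a simplified sketch (even-sized inputs only, one common normalization), not the thesis Matlab code:

```python
def haar_dwt2(img):
    """One level of the 2-D Haar ('db1') DWT.
    Returns (cA, cH, cV, cD); img must have even dimensions."""
    rows, cols = len(img), len(img[0])
    cA, cH, cV, cD = [], [], [], []
    for r in range(0, rows, 2):
        ra, rh, rv, rd = [], [], [], []
        for c in range(0, cols, 2):
            a, b = img[r][c],     img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            ra.append((a + b + d + e) / 2.0)   # approximation cA
            rh.append((a + b - d - e) / 2.0)   # horizontal detail cH
            rv.append((a - b + d - e) / 2.0)   # vertical detail cV
            rd.append((a - b - d + e) / 2.0)   # diagonal detail cD
        cA.append(ra); cH.append(rh); cV.append(rv); cD.append(rd)
    return cA, cH, cV, cD

# Three levels on an 8x8 image yield 3*3 detail bands + 1 approximation
# band = 10 subbands, matching Figure 5.1.
img = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
subbands, cA = [], img
for level in range(3):
    cA, cH, cV, cD = haar_dwt2(cA)
    subbands += [cH, cV, cD]
subbands.append(cA)
```

Each level halves both dimensions, so after three levels the approximation band of the 8x8 toy image is a single coefficient.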

5.2.2 Encoding step

Details of this step of the image compression process are shown in Figure 5.2. The test image (the image which is to be compressed) undergoes three levels of the wavelet transform, which generates ten subbands, as shown previously in Figure 5.1. Then, in the first step of the encoding process, each of these ten subbands is vector quantized using the codebook search method (the codebook entry corresponding to the minimum error is searched for and used to represent the vector). A separate codebook is used for each subband. The codebook indices resulting from the vector quantization process are transmitted to the decoder through the channel. Using these vector quantization indices, the image is reconstructed at the encoder end, and the image quality is tested in terms of the required Peak Signal to Noise Ratio (PSNR) (as explained before in chapter two, Equation 2.2). If the PSNR of this reconstructed image is equal to or greater than the desired PSNR, the encoding process ends. If the image PSNR is below the required image quality, the progressive error correction method starts, as explained in step two.

[Flowchart: Step 1: a 3-level DWT generates 10 sub-images; each sub-image is vector quantized using a separate codebook per subband, and the codebook indices are transmitted to the decoder. Step 2: using the original codebooks, each image subband is reconstructed, giving the reconstructed image (R.I.). Step 3: the image error (I.E.) between the original image and R.I. is calculated; the I.E. is vector quantized using the error codebooks, the error codebook indices are transmitted to the decoder, the I.E.'s are reconstructed using the error codebooks (R.I.E.), and the new reconstructed image is recalculated as R.I. = R.I. + R.I.E.; the loop repeats until the desired PSNR is reached.]

Figure 5.2: Flowchart of the encoder for image compression based on wavelet transform and vector quantization.


Step two is actually the beginning of the progressive error correction method. In this step, at the transmission end (i.e. at the encoder), the errors between the original image and the reconstructed image are taken as test images. For these ten error sub-images, using the previously generated error codebooks, each of the error sub-images is reconstructed using the error codebook search method; these are the reconstructed error sub-images. The error codebook indices corresponding to these error sub-images are transmitted to the decoder through the channel. This is the start of step three of the encoding process, also called the progressive error correction method.

At the beginning of this progressive error correction step, each of the reconstructed error sub-images is added to the corresponding image reconstructed in step one. After adding the reconstructed error sub-images to the original reconstructed images of step one, the resulting ten sub-images are treated as the new reconstructed images. At this stage the PSNR is calculated using these new reconstructed sub-images and compared with the desired PSNR. If this PSNR is equal to or greater than the required PSNR, the process ends. If the PSNR requirement is not met, the error between the new reconstructed image and the original image is calculated again and step two is repeated. This iteration continues until the desired image quality is achieved, which ends the encoding process. However, to guard against an endless loop, and because trial and error showed that after the third iteration of the error loop the compression ratio becomes too poor, the loop is terminated after the third iteration. If the decoder prefers better quality over compression ratio, the number of iterations may be set to any desired limit.
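The iterative loop described above can be sketched as follows. This is an illustrative toy only: a coarse scalar rounding stands in for the actual codebook search, and the signal, quantizer steps, and PSNR target are made-up assumptions.

```python
import math

def quantize_coarse(vals, step=8):
    """Stand-in for the VQ codebook search: round to a coarse grid."""
    return [step * round(v / step) for v in vals]

def psnr(orig, recon, peak=255.0):
    mse = sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def progressive_encode(orig, target_psnr=40.0, max_loops=3):
    recon = quantize_coarse(orig)            # step one: first reconstruction
    loops = 0
    while psnr(orig, recon) < target_psnr and loops < max_loops:
        err = [o - r for o, r in zip(orig, recon)]    # residual image
        fix = quantize_coarse(err, step=2)            # "VQ" of the error
        recon = [r + f for r, f in zip(recon, fix)]   # new reconstruction
        loops += 1                           # loop is cut off after 3 passes
    return recon, loops

orig = [10, 23, 45, 200, 131, 77]            # made-up "image" row
recon, loops = progressive_encode(orig, target_psnr=45.0)
```

Each pass re-quantizes only the residual, so the reconstruction error can never grow; the `max_loops` guard mirrors the thesis decision to stop after the third iteration.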

After encoding the image, and before transmitting the codebook indices over the channel, Huffman coding [9] is applied to the encoded vector quantized data of each subband to improve the compression ratio (a measure of the efficiency of image compression). The compression ratio is a measure of the degree of data reduction achieved by data compression. Huffman coding was used for each subband of the image. This coded information is then transmitted through the channel to the decoder end, after which the decoding process starts.
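The Huffman stage can be sketched generically with a standard heap-based code-length construction over a stream of VQ indices. The index values and frequencies below are made up, and this is a generic illustration, not the exact coder of [9]:

```python
import heapq
from collections import Counter

def huffman_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code."""
    freq = Counter(symbols)
    if len(freq) == 1:                 # degenerate single-symbol stream
        return {next(iter(freq)): 1}
    # Heap entries: (weight, tiebreak, {symbol: depth_so_far})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        w1, _, m1 = heapq.heappop(heap)    # two lightest subtrees
        w2, _, m2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**m1, **m2}.items()}
        heapq.heappush(heap, (w1 + w2, nxt, merged))
        nxt += 1
    return heap[0][2]

# A skewed VQ index stream compresses below 2 bits/index of a fixed code
indices = [0] * 12 + [1] * 5 + [2] * 2 + [3] * 1
lengths = huffman_lengths(indices)
bits = sum(lengths[s] for s in indices)
```

A fixed-length code would spend 2 bits on each of the 20 indices (40 bits), whereas the skewed distribution here needs only 31 bits; this skew in codeword usage after vector quantization is exactly what the Huffman stage exploits.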

5.2.3 Decoding step

Details of this decoding step are shown in Figure 5.3. In the first step of the decoding process, the decoder receives the Huffman-coded transmission bit stream corresponding to the step-one reconstructed image of the encoding process. From these Huffman codes the decoder first recovers the original vector quantization codebook indices and reconstructs the wavelet coefficients of the ten sub-images. In step two of the decoding process, the decoder receives the Huffman-coded information for the error codebook indices. The decoder recovers the error codebook indices and thus reconstructs the error sub-images. The decoder then adds these error sub-images to the corresponding sub-images reconstructed in step one. This process of adding error sub-images continues until the encoder ends the progressive error correction process.

In the final stage of the decoding process, the decoder simply applies the 3-level inverse wavelet transform to reconstruct the image. Here the decoding process ends.

[Flowchart: From the transmission channel, Step 1: receive the Huffman-coded information of the VQ indices; recover the VQ indices and the corresponding image data from the codebook. Step 2: receive the Huffman-coded information of the VQ indices of the error subbands; add the error image data of step 2 to the step 1 image data. Finally, reconstruct the image using the 3-level inverse DWT.]

Figure 5.3: Flowchart of the decoding process of this image compression process.

All of the steps of encoding and decoding in this compression scheme were implemented in Matlab 7, Release 14.

5.3 Summary

In this chapter, a technique for digital image compression based on multiresolution analysis using the wavelet transform and vector quantization was proposed. This method can provide satisfactory image quality with a reasonably high compression ratio. The proposed method of image compression is intended for those areas of digital imaging where high-precision reconstructed images are required (for example, criminal investigations, medical photographs, etc.).


Chapter 6
Simulation Results and Discussions

6.1 Introduction

In this chapter, different simulation results are presented. Matlab version 7, Release 14 was used for the simulation of the proposed image compression algorithm. The different Matlab codes and Matlab files used for the simulations are listed in Appendix C. Some simulation results, as well as comparisons of the proposed method with other methods, are now presented.

6.2 Simulation results of the proposed image compression method

For generating the original and the error codebooks in this work, four images (as shown in Appendix A), namely Lenna, Couple, Frog, and Baboon, were used. In this step the codebook sizes were determined on a trial-and-error basis to obtain the optimum performance. In this training phase of generating codebooks, ten original and ten error codebooks were generated, as explained before. Different codebooks were used for encoding the different sub-images of the wavelet-transformed image. Table 6.1 lists the details of the different codebook sizes.

Table 6.1: Details of the different codebook sizes used in this work

Wavelet Subband    Original Codebook Size    Error Codebook Size
cA2                128 = 2^7                 32 = 2^5
cV2                32 = 2^5                  32 = 2^5
cD2                32 = 2^5                  32 = 2^5
cH2                32 = 2^5                  32 = 2^5
cH1                16 = 2^4                  16 = 2^4
cD1                16 = 2^4                  16 = 2^4
cV1                16 = 2^4                  16 = 2^4
cH                 16 = 2^4                  16 = 2^4
cD                 16 = 2^4                  16 = 2^4
cV                 16 = 2^4                  16 = 2^4

Here cA2, cV2, cD2, cH2, etc. are as defined previously in Figure 5.1.


In the testing phase, the images Peppers, Boat, Plane, and Woman (as shown in Appendix B) were used. Figure 6.1 shows some of the reconstructed images at different Peak Signal to Noise Ratios (PSNR) and Compression Ratios (CR). The different PSNRs and compression ratios obtained from the testing phase for the different images are listed in Table 6.2. Here the compression ratios are calculated after applying Huffman coding to the VQ indices. All of the experimental images were 512x512 pixels, and Matlab was used to implement the proposed algorithm. For the wavelet decomposition of the different images, Daubechies ('db1') wavelet filters were used. Comparisons of the results of the proposed method with the vector quantization based and wavelet based methods proposed in [14] and [15] are listed in Table 6.3. In [16], a combined approach of VQ and wavelet based methods was applied to a 160x88 pixel image, with the result PSNR = 18 dB and CR = 15; we could not compare this result with our proposed method because the image size differs from the standard size used in our work. From the comparisons with the other methods, however, it is clear from Tables 6.2 and 6.3 that our proposed method gives better results.

Table 6.2: Different experimental results using the proposed method

Image     Metric    Original reconstruction    1st Error Loop    2nd Error Loop
Peppers   PSNR      30.7023                    35.1845           38.6182
          CR        38.9378                    22.6614           18.4291
Boat      PSNR      29.8715                    34.5667           38.1755
          CR        36.9657                    21.5688           17.5181
Plane     PSNR      28.8031                    33.5962           37.3361
          CR        41.2750                    24.0239           19.3073
Woman     PSNR      36.4113                    43.1257           ....
          CR        46.9092                    27.4220           ....

Table 6.3: Comparison of the results of the proposed method with other methods

                                 Proposed Method
          FMFSVQ [14]      Original reconstruction   1st Error Loop    2nd Error Loop
Image     PSNR     CR      PSNR      CR              PSNR      CR      PSNR     CR
Peppers   29.70    21.739  30.7023   38.9378         35.1845   22.66   38.618   18.429
Woman     31.71    23.392  36.4113   46.9092         43.1257   27.42   ....     ....

                                 Proposed Method
          Method in [15]   Original reconstruction   1st Error Loop    2nd Error Loop
Image     PSNR     CR      PSNR      CR              PSNR      CR      PSNR     CR
Boat      29.70    32.0    29.87     36.965          34.57     21.569  38.175   17.52



Figure 6.1 d: Reconstructed image Peppers, PSNR = 30.7023 and CR = 38.9378.


6.3 Discussions

This dissertation explains in detail the process of image compression based on the wavelet transform and vector quantization. In the first step of the encoding process, the test image undergoes three levels of the discrete wavelet transform. The ten subbands available after the three-level discrete wavelet transform are then subjected to vector quantization using the ten separate codebooks created in the training phase of this algorithm, using four different images as training images.

The encoding process includes an error correction algorithm to improve the image quality. In this error correction algorithm, after the first stage of the encoding process, the encoded codebook indices are transmitted to the decoder; at the encoder, the ten subbands are also reconstructed using these codebook indices, and the peak signal to noise ratio of this image is calculated and compared with the desired peak signal to noise ratio. If this calculated peak signal to noise ratio (PSNR) is greater than or equal to the desired PSNR, the process stops. Otherwise, the reconstructed image subbands are compared with the original ten subbands, and the error between the two is again subjected to vector quantization using the ten separate error codebooks previously created in the training phase of the image compression process.

The encoded error codebook indices are also transmitted to the decoder, and the errors are likewise reconstructed at the encoder using the error codebook indices. These reconstructed errors are then added to the previously reconstructed image, and this new image is taken as the new reconstructed image. At this stage the peak signal to noise ratio of the new reconstructed image is calculated and compared with the desired peak signal to noise ratio; if it is equal to or greater than the desired value, the process ends. Otherwise, this new reconstructed image is again compared with the original image, the error between the two is again vector quantized, and the process continues until the desired peak signal to noise ratio is achieved. To prevent this iteration from continuing indefinitely, a stopping rule is used: the decoder is informed that if the desired PSNR is not met within the third iteration, the encoder will stop the iteration process. If the PSNR requirement is met before the third iteration, the process stops automatically.


Matlab software was used to implement this algorithm. For generating the different codebooks in the elementary stage of the algorithm, the Matlab file Im.m was used (the codes corresponding to this Matlab file are given in Appendix C). For the vector quantization, i.e. converting the image into vectors and converting the vectors back into image locations, blkM2vc.m and vc2blkM.m were used (the codes corresponding to these Matlab files are given in Appendix C). For the experimental encoding and decoding, the imag7i.m file was used (the codes corresponding to this Matlab file are given in Appendix C).

The comparison of results provided in Table 6.3 shows that the simulation results of the proposed method are superior to those of the other methods. This improvement in results is mainly due to the error loop, which is the main focus of this thesis work. In most cases the desired PSNR requirement is met within three iterations. If the decoder requires further image quality, the number of iterations may be increased, but we truncated the loop after the third iteration because the compression ratio becomes very poor beyond that point.


Chapter 7
Conclusions and suggestions for future works

7.1 Conclusions

In this work, a technique for digital image compression based on multiresolution analysis using the wavelet transform and vector quantization was presented. It is clear from Table 6.3 that the proposed method gives superior performance. In [14], feature map finite state vector quantization (FMFSVQ) is used; with that method, the PSNR was 29.70 and the compression ratio was 21.739 for the image Peppers. With the proposed method, in the first original reconstruction, a PSNR of 30.7023 with a compression ratio of 38.9378 is obtained, which is much higher than the method used in Ref. [14]. If the error loop is used for the same image Peppers, then after the first error loop the PSNR becomes 35.1845 with a compression ratio of 22.66; these results still demonstrate the superiority of the proposed method. In Ref. [14], for the image Woman, the FMFSVQ method gave a PSNR of only 31.71 and a compression ratio of only 23.392, whereas the proposed method obtains, for the original reconstruction of the same image, a PSNR of 36.4113 with a compression ratio of 46.9092, which is clearly superior to the results in Ref. [14]. If the error loop is used for the image Woman, then after the first error loop the PSNR becomes 43.1257 with a compression ratio of 27.42; here also the simulation results prove the superiority of the proposed method over the FMFSVQ method of [14]. For the wavelet based image compression method of [15], the PSNR for the image Boat was 29.70 with a compression ratio of 32.0; with the proposed method, for the same image, the PSNR is 29.87, slightly improved compared with the method in [15], while the compression ratio is 36.965, which is greater than that of [15]. Thus the error loop gives an improvement in the PSNR as well as in the compression ratio.

This method can provide satisfactory image quality with a reasonably high compression ratio. The choice remains with the decoder to check whether the image quality is satisfactory or not; if not satisfied with the image quality, the decoder simply asks the encoder to retransmit a better quality bit stream. The proposed method of image compression is intended for those areas of digital imaging where high-precision reconstructed images are required (for example, criminal investigations, medical photographs, etc.).

7.2 Suggestions for future works

Further research can be initiated with the expectation of obtaining better results and improvements to the proposed method. Performance parameters of digital data transmission, such as the compression ratio, can be further improved through modifications to the error encoding step: since there may be many zero error elements, instead of transmitting the whole codeword for the zeros, only a flag can be transmitted so that the decoder can easily identify that these elements are zero.
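The zero-flag idea could be sketched as a substitution on the transmitted index stream. This is a hypothetical encoding for illustration only; it assumes (as a convention invented here) that index 0 denotes the all-zero error vector.

```python
def flag_zeros(indices, zero_index=0):
    """Replace runs of the all-zero codeword with ('Z', run_length)
    flags, so the decoder can skip whole zero blocks instead of
    reading a full codeword index for each of them."""
    out, run = [], 0
    for idx in indices:
        if idx == zero_index:
            run += 1
        else:
            if run:
                out.append(('Z', run))
                run = 0
            out.append(idx)
    if run:
        out.append(('Z', run))
    return out

def unflag_zeros(stream, zero_index=0):
    out = []
    for item in stream:
        if isinstance(item, tuple):        # ('Z', n) expands to n zeros
            out.extend([zero_index] * item[1])
        else:
            out.append(item)
    return out

seq = [0, 0, 0, 5, 0, 7, 7, 0, 0]          # made-up error-index stream
flagged = flag_zeros(seq)
```

The transform is lossless, so the decoder can recover the original index stream exactly while long zero runs cost only one flag each.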

Further improvement could be obtained if Feature Map Finite State Vector Quantization were used in the encoding and decoding processes of this algorithm.


Appendices

Appendix A

Figure A1: Image Lenna of size 512x512 pixels
Figure A2: Image Couple of size 512x512 pixels

Appendix B

The different standard test images used in this work are as follows:

Figure B-1: Image Plane of size 512x512 pixels

Appendix C

Matlab codes corresponding to the different Matlab files used in this algorithm:

Codes for the Matlab file Im.m:

dispCwc will now train our network'):disp(Training images arc tmQ I.pug. trn03.png. trn04.pngnnd lm05.png');II = doublc(imrcad('trnO I.png'»;12 = doublc(imread{'trn03.png'»;13 = doublc(imreadClm04.png'»;14= doublc(imrcadCtrn05.png'»;dispCPress anykcy to continue ... .');pause(2)rcA II, cHI I, cVI I, cDllJ ~ dw<2(1I, 'dbl '):[cA12, cHI2, cVI2, cDI2J ~ dwl2(eAII, 'dbl'):[cA13, cHI3, eVI3, cD 13J ~ dw<2(cA 12, 'dbl'):[eA21, cHZI, cV2 I, cl)21 J ~ dwl2(12, 'db I'):[cA22, eHZ2. eV22, cD22] ~ dw<2(cA2I, 'dbl'):[cA23, cH23, cV23, eD23] ~ dwt2(cA22, 'dbl '):[cA3 I, eH31, cV3!. cDJI] ~ dw12(13, 'db I '):[cA32, eH32, eV32, cD32] ~ dwt2(eA3l, 'dbl'):(cA33, cH33, cV33, eDJ3] ~ dwt2(cA32, 'dbl '):[cA41, cH4 I, cV4!. cl)41] ~ dwI2(14, 'db I '):[eA42, el142, cV42, cD42] ~ dwl2(cA4l, 'dbl '):[cA43, cl143, cV43, cD43J ~ dw12(cA42, 'dbl'):8 I ~ [eA 13 eA23; cA33 cA43J:112 ~ [cH13 cHZ3; cl133 cH43]:83 ~ [cV13 cV23; cV33 cV43]:114~ [cD 13 eD23; cD33 cD43]:115= (cl1 12 eH22; cIB2 cH42J;86 = [cV12 cV22; eV32 cV42];117~ [cD12 cD22; cDJ2 eD42];88~[cl1ll cHZI;cIBI c1141J;119~ [cVII cV21; eV31 eV41]:1110 = [cD 11 c02 I ; cDJI eD41];c1ear!l 12I314cAII cH11 eVil cOli cAl2cI-1I2cY12cOil cAl3 cHI3 cVl3 c013 cA21 cH21 cV21 cD2!;clear cA22 cH22 cV22 cD22 cA23 clU3 cV23 cD23 cA32cH32 cV32 cD32 cA31 cH31 cV31 cD3! cA33 cJ-l33cV33 cD33;clear cA42 cH42 cV42 cD42 cA41 cJ-l41 cV41 cD41 cA43eH43 cV43 cD43;Iprinltrln');IMG~BI;imS = sizc(lMG);blkS ~ 14 4J:ebkL = 2"7:

lmt= 200;vIMG ~ blkM2vc(IMG. blkS):cdbk ~ vIMG(:, I :cbkL)';[NN KK] ~ sizc(vIMG);Ireq ~ zeros(l, cbkL);lor k ~ cbkL+(I :(KK-ebkL)xl = vIMG(:, k)';dxC8~xl(ones(cbkL, I), :)-cdbk;[Lwinjwin] = min(sum(abs(dxC13'))):if (frcq(jwin) < Imt)freq(jwin) = freq(jwin) + I:edbkUwin,:) ~ cdbkUwin, :) +

(1/Ii'cqUwin»"'dxCB(jwin, :):end

endedbk = round(edbk);x = sizc(cdbk,2);

59

cdbk =sort(cdbk(:. x/2»;clear IM(j vIM(; dxCB xl:fprintlnn');IMG = B2;illlS = sizc(lM0);hlkS ~ [4 41:cbkL = 2"5;lmt= 200;vIMG = blkM2ve(IMG, blkS):cdbkl = vIMG(:. I:chkL)':[NN KK] ~ size(vIMG):freq = zeros(1. cbkL):lor k ~ ebkL +( I:(KK-ebkL))xl~vIMG(:.k)':dxCB=xt(OIlCS{chkL.I).:).cdhkl:IL\vin jwinj = l11in{sum(,lhs{d:'\CB'))):if«(j'cq(jwin) < 1m!)li'cqOwin) = freq(jwill) -+ I:cdhk I (jwin.:) = cdhk IOwin. ;)-+

(l/frcq(jwin»*dxCB(jwill. ;):end

endcdbk I = rouncJ{cdbk I):x =size(cdbkI.2):cdbkl =sort{cdbkl(:. :.;/2»:clcar IM(; vlMG dxCB xl:fprintfl'\n');IMG = U3:imS = sizc(1MG);blkS ~ 1'14]:cbkL = 2"5:

Imt= 200:vIMG ~ blkMZvc(lMG. blkS):cdbk2 = vIMG(:, I :cbkL)':INN KKJ ~ sizc(vIMG):freq = zeros(I. cbkL):lor k ~ cbkL +(1 :(K K.cbkL»)xl = vIMG(:, k)':dxCIl ~ xl(oncs(cbkL. I), :) - cdbk2:[Lwin .iwin] = min(sum(ahs(dxCB'»}:if(frcq(jwin) < Imt)frcq(jwin) = li'cq(jwin) + I:cdbk2(jwin.:) = cdbk2(iwin. :) +

(1/freq(jwin»*dxCB(jwin. :);end

endcdbk2 = l"Olllld(cdhk2);x = size(cJbk2.2):cdbk2 =sort{cdhk2(:. x/2»:clear IMG vlMG dxCB xl;IIKintlt'\n'):IMG = 1l4:imS = size(IMG):blkS ~ [4 41:cbkL = 2"5:

Imt = 200:vIMG = blkM2vc(lMG, blkS):cdbk3 ~ vIMG(:, I :cbkL)':INN KK] ~ sizc(vIMG):freq = zcros( I. chk L).;lor k ~ cbkL+(1 :(KK-ebkL»)

Page 70: Chapter 3 An Introduction to Wavelet Transform

imS = size(IMG);blkS ~ [4 4];cbkL = 21\4:

Imt= 100:vlMG ~ blkM2ve(IMG, blkS);edbk9 ~ vIMG(:, I :ebkL)';[NN KK] ~ size(vIMG);freq = zeros( I. cbkL);for k ~ ebkL+(1 :(KK-ebkL»

xl ~ vIMG(:, k)';dxCB ~ xl(ones(ebkL, I), :) - edbk9:[Lwin jwin] = min(sum(abs(dxCB'))):if(freqUwin) < lint)

freqUwin) = freq(iwin) + I:cdbk9Uwin.:) = cdbk9(jwin. :) +

(l/freqUwin»*dxCI3(iwin. :):end

endedbk9 ~ round(edbk9);x ~ slze(edbk9,2);edbk9 ~sort(edbk9(:, x/2));clear IMG vlMG dxCB xt;disp('Training eooebook generation completed');pause( I)dispCRcnding dntn for error code book');fprinIW\n');I ~BI;imS = size(l):vlMG ~ blkM2vc(L blkS);X = size(vIMG. I);Y = size(vIMG. 2);Z = size(cdbk, I);fori=I:X

forj=1:Yxl ~ (vIMG(ij)'ones(Z,I ));dxCB ~ xl - edbk;[A,I] ~ min(abs(dxCB»;z(ij) ~ edbk(l);

endendIMG ~ ve2blkM(z, imS, blkS);zll ~IMG;clear I vlMG X Y Z dxCB xl;I ~B2;imS = sizc(l):vlMG ~ blkM2vc(l, blkS);X = size{vIMG. I);Y ~ size(vIMG, 2);Z~size(edbkl, I);fori=I:X

forj~ I:Yxl ~ (vIMG(i,j)'ones(Z, I ));dxCB ~ xl - edbk I;[A,J] ~ min(abs(dxCB»);z(ij)~edbkl(I);

endendclear vlMG X Y Z dxCB xl;IMG ~ vc2blkM(z, imS, blkS);zl ~IMG;I ~B3;imS = sizc(l);vlMG ~ blkM2ve(l, blkS);X = sizc(vIMG. I);Y = size(vIMG, 2);Z ~ size(cdbk2, I);fori= \:X

forj= I:Yxl ~ (vIMG(I,j)'oncs(Z, I));

(6)1

dxCB = xl ~ cdbk2:[AJ] = min(ahs(dxCB)):z(i,i) = cdhk2(1):

endendIMG = ve2hlkM(z. imS. blkS):z2 = IMG:clear I vlMG X Y Z dxCB xl:fprintfl'\n'):I ~ B4;imS = size(l);vlMG ~ blkM2vc(L blkS);X = size(vIMG. I):Y = sizc(vIMG. 2):Z = sizc(cdbk3. I):fori=I:X

-forj=I:Yxl = (vlMG(i.j)*olles{LI »):d."CB = xl - cdhk3:[A.I] = l11in(abs(dxCB):z(i.j) ~ cdbk3(1);

endendIMG = ve2blkM(z. illlS. hlkS):1.3 = IMG:clear I v[MG X Y Z dxCB xl:I ~ B5:imS = sizc(l):vlMG ~ blkM2vc(L blkS);X=size(vlMG.I):Y = size(vIMG. 2):Z = sizc(cdbk4. I):fori=\:X

forj=I:Yxl = (vlrvIG{i.j)*oncs(Z.\ »):dxCB = ."l - cdbk4:IA.I] = min(abs(dxCB):z(i,i) ~ cdbk4(1);

endendlMG = vc2blkM(z. imS. blkS):7.4~IMG;clear I vlMG X Y Z dxCn .-.:1:1=136:illlS "" sizc(I):v1MG = blkM2vc(1. b\kS):X = sizc(vIMG. I):Y = sizc(vIMG. 2);Z = siz~(cdbk5. \):for j= I:X

for j = I:Yxl = (vIMG(ij)*OIlCS(Z.\»:dxCB = xt - cubk5:[A,I] ~ mln(abs(dxCIl);z(l.j) ~ cdbk5(1);

endendIMG ~ vc2blkM(7, IIllS. blkS);z5 = IMG:dear I vlMG X Y Z dxCB xl:fprintf'('\n'):1 ~ 137;illlS = size(l):vlMG ~ blkM2vc(1. bIkS);X = sizc(vIMG. 1):Y = sizc(vIMG. 2):Z = sizc(edbk6. I):llxi= I:X

forj=I:Yxl ~ (vIMG(I.j)'UllCS(Z.1 i);

a

p

Page 71: Chapter 3 An Introduction to Wavelet Transform

xl ~ vIMG(:, k)';dxCI3 ~ xl(ones(cbkL, I), :) - cdbkJ;elwin jwin] = min{sum(abs(dxCI3'»);if(freq(jwin) < Imt)freq(jwin) = freq(jwin) + I;cdbkJUwin,:) = cdbkJUwin. :) +

(1/frcqUwin))'dxCI3Uwin, :);end

endcdbk3 = round(cdbU):x = sizc(cdbk3,2);cdbkJ =sort(edbkJ(:. x/2)):clear IMG vlMG oxeB xt;fprintf{'\n');IMG=1l5:imS = sizc(lMG);blkS ~ [4 4]:cbkL = 2"'4:

Iml~ 200:vlMG ~ blkM2vc(lMG, blkS):cdbk4 ~ vIMG(:, I :ebkL)';[NN KK] ~ size(vIMG):freg_= zeros(I, cbkL);for k ~ ebkL+(1 :(KK-cbkL))xl = vIMG(:, k)':dxCll ~ xl(ones(ebkL, I). :) - cdbk4;[Lwin jwin] = min(sum(abs(dxCB'»);if(freq(jwin) < Jmt)freq(jwin) = freq(jwin) + I:cdbk4(jwin,:) = cdbk4(jwin. :) +

(l/freq(jwin»*dxCB(jwin. :):end

endedbk4 ~ roond(cdbk4):x ~ size(cdbk4,2);cdbk4 ~sort(edbk4(:, x/2));clear IMG vlMG dxen xt;fprintf{'\n');IMG = 136;imS = sizc(IMG);blkS ~ [4 4];cbkL ~ 2"4;

lmt= 200:vlMG ~ blkM2vc(lMG, blkS);cdbk5 = vIMG(:, I :ebkL)';[NN KK] ~ size(vIMG);freg = zeros(l, cbkL);for k ~ cbkL +(1 :(KK-ebkL))xl = vIMG(:, k)';dxCI3 = xt(ones(cbkL, I), :) - edhk5;[Lwin jwin] = min(sum{abs(dxCB')));if(frcq(jwin) < Imt)freq(jwin) = freq(jwin) + I;cdbk5(jwin,:) = cdbk5(jwin, :) +

(1/frcq(jwin))*dxCB(jwin, :);end

endedhkS ~ rnund(cdhk5);x = size(cdhk5,2);cdhkS ~sort(edhkS(:, x/2));clear IMG vIMG dxCB xl;fprinIWIn');IMG ~ 137;imS = size(lMG);hlkS ~ [4 4];cbkL ""2A4;

lmt = 200;
vIMG = blkM2vc(IMG, blkS);
cdbk6 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - cdbk6;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        cdbk6(jwin,:) = cdbk6(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
cdbk6 = round(cdbk6);
x = size(cdbk6,2);
cdbk6 = sort(cdbk6(:, x/2));
clear IMG vIMG dxCB xt;
fprintf('\n');
IMG = B8;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^4;

lmt = 100;
vIMG = blkM2vc(IMG, blkS);
cdbk7 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - cdbk7;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        cdbk7(jwin,:) = cdbk7(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
cdbk7 = round(cdbk7);
x = size(cdbk7,2);
cdbk7 = sort(cdbk7(:, x/2));
clear IMG vIMG dxCB xt;
fprintf('\n');
IMG = B9;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^4;

lmt = 100;
vIMG = blkM2vc(IMG, blkS);
cdbk8 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - cdbk8;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        cdbk8(jwin,:) = cdbk8(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
cdbk8 = round(cdbk8);
x = size(cdbk8,2);
cdbk8 = sort(cdbk8(:, x/2));
clear IMG vIMG dxCB xt;
fprintf('\n');
IMG = B10;
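Each codebook-generation block above repeats the same frequency-capped competitive update: the first cbkL block vectors seed the codebook, and every later training vector pulls its nearest (L1-distance) codeword toward itself, with each codeword's update count capped at lmt. The following NumPy sketch is an illustrative translation of one such block, not the thesis code; the function name and signature are mine.

```python
import numpy as np

def train_codebook(vectors, cbk_len=16, lmt=200):
    """Frequency-capped competitive codebook training, mirroring the
    MATLAB loops above. vectors is a (block_size, K) array of block
    columns; the first cbk_len columns seed the codebook."""
    cdbk = vectors[:, :cbk_len].T.astype(float)   # (cbk_len, block_size)
    freq = np.zeros(cbk_len, dtype=int)
    K = vectors.shape[1]
    for k in range(cbk_len, K):
        xt = vectors[:, k].astype(float)
        dx = xt[None, :] - cdbk                   # difference to every codeword
        jwin = np.argmin(np.abs(dx).sum(axis=1))  # L1-nearest codeword
        if freq[jwin] < lmt:                      # cap updates per codeword
            freq[jwin] += 1
            cdbk[jwin] += dx[jwin] / freq[jwin]   # running-mean style pull
    return np.round(cdbk)
```

As in the MATLAB, the 1/freq factor makes each codeword the running mean of the vectors assigned to it, and the cap keeps early winners from absorbing the whole training set.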


        dxCB = xt - cdbk6;
        [A,I] = min(abs(dxCB));
        z(i,j) = cdbk6(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z6 = IMG;
clear I vIMG X Y Z dxCB xt;
I = B8;
imS = size(I);
vIMG = blkM2vc(I, blkS);
X = size(vIMG, 1);
Y = size(vIMG, 2);
Z = size(cdbk7, 1);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk7;
        [A,I] = min(abs(dxCB));
        z(i,j) = cdbk7(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z7 = IMG;
clear I vIMG X Y Z dxCB xt;
I = B9;
imS = size(I);
vIMG = blkM2vc(I, blkS);
X = size(vIMG, 1);
Y = size(vIMG, 2);
Z = size(cdbk8, 1);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk8;
        [A,I] = min(abs(dxCB));
        z(i,j) = cdbk8(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z8 = IMG;
clear I vIMG X Y Z dxCB xt;
I = B10;
imS = size(I);
vIMG = blkM2vc(I, blkS);
X = size(vIMG, 1);
Y = size(vIMG, 2);
Z = size(cdbk9, 1);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk9;
        [A,I] = min(abs(dxCB));
        z(i,j) = cdbk9(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z9 = IMG;
fprintf('\n');
clear I vIMG X Y Z dxCB xt;
R = z11;
R1 = z1; R2 = z2; R3 = z3; R4 = z4; R5 = z5;
R6 = z6; R7 = z7; R8 = z8; R9 = z9;
clear R R1 R2 R3 R4 R5 R6 R7 R8 R9 z z1 z11 z2 z3 z4 z5 z6 z7 z8 z9;
fprintf('Error codebook generation completed...\n Now you can encode and decode images...');
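The encode loops above do a per-element nearest-codeword lookup: after training, each codebook is reduced to its sorted median column (sort(cdbk(:, x/2))), so each subband sample is matched against a one-dimensional list of scalar codewords. A NumPy sketch of that lookup follows; the names are mine, and it is vectorized rather than written as the MATLAB double loop.

```python
import numpy as np

def encode_scalar(vimg, cdbk):
    """Nearest-scalar-codeword lookup mirroring the encode loops above.

    vimg: array of samples; cdbk: 1-D array of scalar codewords.
    Returns the quantized values and the codeword indices."""
    vimg = np.asarray(vimg, dtype=float)
    cdbk = np.asarray(cdbk, dtype=float).ravel()
    # broadcast |sample - codeword| and pick the nearest codeword per sample
    idx = np.abs(vimg[..., None] - cdbk[None, :]).argmin(axis=-1)
    return cdbk[idx], idx
```

The index array plays the role of the D matrices recorded by the encoder; only those indices (plus the codebooks) need to be transmitted.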

Codes for Matlab File blkM2vc.m:

function vc = blkM2vc(M, blkS)
[rr cc] = size(M);
r = blkS(1); c = blkS(2);
if (rem(rr, r) ~= 0) | (rem(cc, c) ~= 0)
    error('blocks do not fit into matrix')
end
nr = (rr/r);
nc = (cc/c);
rc = r*c;
vc = zeros(rc, nr*nc);
for ii = 0:(nr-1)
    vc(:, (1:nc) + ii*nc) = reshape(M((1:r) + (ii*r), :), rc, nc);
end
end

Codes for Matlab File vc2blkM.m:

function M = vc2blkM(vc, imgS, blkS)
r = blkS(1); c = blkS(2);
rr = imgS(1); cc = imgS(2);
nr = rr/r; nc = cc/c;
M = zeros(imgS);
for ii = 0:(nr-1)
    M((1:r) + (ii*r), :) = reshape(vc(:, (1:nc) + (ii*nc)), r, cc);
end
end
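blkM2vc and vc2blkM are exact inverses: the first lays every r-by-c image block out as one column (using MATLAB's column-major reshape), the second reassembles the image. A NumPy sketch of the pair, for illustration only (function names are mine; order='F' mimics MATLAB's column-major reshape):

```python
import numpy as np

def blk_m2vc(M, blk):
    """Split M into blk[0] x blk[1] blocks, one block per column."""
    r, c = blk
    rr, cc = M.shape
    assert rr % r == 0 and cc % c == 0, 'blocks do not fit into matrix'
    nr, nc = rr // r, cc // c
    vc = np.zeros((r * c, nr * nc))
    for ii in range(nr):
        # each row-strip of r rows becomes nc columns of length r*c
        vc[:, ii * nc:(ii + 1) * nc] = \
            M[ii * r:(ii + 1) * r, :].reshape(r * c, nc, order='F')
    return vc

def vc_2blkM(vc, imgS, blk):
    """Inverse of blk_m2vc: reassemble the image from block columns."""
    r, c = blk
    rr, cc = imgS
    nr, nc = rr // r, cc // c
    M = np.zeros(imgS)
    for ii in range(nr):
        M[ii * r:(ii + 1) * r, :] = \
            vc[:, ii * nc:(ii + 1) * nc].reshape(r, cc, order='F')
    return M
```

Round-tripping any image whose dimensions are multiples of the block size returns it unchanged, which is what the coder relies on when it maps quantized block columns back into subband images.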

Codes for Matlab File im4wt.m:

disp('Now we will encode and then reconstruct the image');
disp('Image value I1 = ...');
keyboard
[cA, cH, cV, cD] = dwt2(I1, 'db1');
[cA1, cH1, cV1, cD1] = dwt2(cA, 'db1');
[cA2, cH2, cV2, cD2] = dwt2(cA1, 'db1');
clear I1;
I = cA2;
imS = size(I);
%blkS = [1 1];
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk, 1);
B11 = zeros(size(cdbk));
D = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk;
        [A,I] = min(abs(dxCB));
        B11(I) = B11(I) + 1;
        D(i,j) = I;
        z(i,j) = cdbk(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z11 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cH2;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk1, 1);
B12 = zeros(size(cdbk1));
D1 = zeros(X, Y);


for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk1;
        [A,I] = min(abs(dxCB));
        B12(I) = B12(I) + 1;
        D1(i,j) = I;
        z(i,j) = cdbk1(I);
    end
end
clear vIMG X Y Z dxCB xt;
IMG = vc2blkM(z, imS, blkS);
z1 = IMG;
I = cV2;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk2, 1);
B21 = zeros(size(cdbk2));
D2 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk2;
        [A,I] = min(abs(dxCB));
        B21(I) = B21(I) + 1;
        D2(i,j) = I;
        z(i,j) = cdbk2(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z2 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cD2;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk3, 1);
B3 = zeros(size(cdbk3));
D3 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk3;
        [A,I] = min(abs(dxCB));
        B3(I) = B3(I) + 1;
        D3(i,j) = I;
        z(i,j) = cdbk3(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z3 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cH1;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk4, 1);
B4 = zeros(size(cdbk4));
D4 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk4;
        [A,I] = min(abs(dxCB));
        B4(I) = B4(I) + 1;
        D4(i,j) = I;
        z(i,j) = cdbk4(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z4 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cV1;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk5, 1);
B5 = zeros(size(cdbk5));
D5 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk5;
        [A,I] = min(abs(dxCB));
        B5(I) = B5(I) + 1;
        D5(i,j) = I;
        z(i,j) = cdbk5(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z5 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cD1;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk6, 1);
B6 = zeros(size(cdbk6));
D6 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk6;
        [A,I] = min(abs(dxCB));
        B6(I) = B6(I) + 1;
        D6(i,j) = I;
        z(i,j) = cdbk6(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z6 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cH;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk7, 1);
B7 = zeros(size(cdbk7));
D7 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk7;
        [A,I] = min(abs(dxCB));
        B7(I) = B7(I) + 1;
        D7(i,j) = I;
        z(i,j) = cdbk7(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z7 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cV;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk8, 1);
B8 = zeros(size(cdbk8));
D8 = zeros(X, Y);


for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk8;
        [A,I] = min(abs(dxCB));
        B8(I) = B8(I) + 1;
        D8(i,j) = I;
        z(i,j) = cdbk8(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z8 = IMG;
clear I vIMG X Y Z dxCB xt;
I = cD;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(cdbk9, 1);
B9 = zeros(size(cdbk9));
D9 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - cdbk9;
        [A,I] = min(abs(dxCB));
        B9(I) = B9(I) + 1;
        D9(i,j) = I;
        z(i,j) = cdbk9(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z9 = IMG;
clear I vIMG X Y Z dxCB xt;
R = z11;
R1 = z1; R2 = z2; R3 = z3; R4 = z4; R5 = z5;
R6 = z6; R7 = z7; R8 = z8; R9 = z9;
clear z11 z1 z2 z3 z4 z5 z6 z7 z8 z9;
B1 = [cA2 cH2; cV2 cD2];
B2 = [B1 cH1; cV1 cD1];
B = [B2 cH; cV cD];
clear B1 B2;

X = size(B, 1);
Y = size(B, 2);
fprintf('\n');
for d = 1:20
    if (d > 1)
        enc3
    end

    C1 = [R R1; R2 R3];
    C2 = [C1 R4; R5 R6];
    C = [C2 R7; R8 R9];
    clear C1 C2;
    G = size(C, 1);
    k = 255;
    Y1 = (abs(B) - abs(C)).^2;
    Y2 = sum(Y1, 2);
    Y = sum(Y2);
    ME = Y / G^2;
    PSNR = 10 * log10(k^2 / ME)
    if (PSNR > 36)
        break;
    end
    disp('Now at error loop..')
end
clear cA2 cH2 cV2 cD2 cH1 cV1 cD1 cA cH cV cD;
disp('The reconstructed image is in X');
disp('Press any key to see the reconstructed image.');
pause(2)

A1 = idwt2(R, R1, R2, R3, 'db1');
A2 = idwt2(A1, R4, R5, R6, 'db1');
A = idwt2(A2, R7, R8, R9, 'db1');
X = A/255;
pause(2)
imshow(X)
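The quality test inside the correction loop compares the subband mosaic B against the reconstruction mosaic C: ME = sum((|B| - |C|).^2)/G^2 with G the side length of the square mosaic, and PSNR = 10*log10(255^2/ME). The same computation in NumPy, as an illustrative sketch (the function name is mine):

```python
import numpy as np

def psnr(B, C, peak=255.0):
    """PSNR between mosaics B and C, mirroring the MATLAB above:
    mean-squared error over the G x G mosaic, then 10*log10(k^2/MSE)."""
    B = np.asarray(B, dtype=float)
    C = np.asarray(C, dtype=float)
    mse = np.sum((np.abs(B) - np.abs(C)) ** 2) / C.shape[0] ** 2
    return 10.0 * np.log10(peak ** 2 / mse)
```

The loop stops as soon as this value crosses the target threshold, so the number of residual-correction passes adapts to the image.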

Codes for Matlab File Im41.m:

E = B1 - R;
E1 = B2 - R1;
E2 = B3 - R2;
E3 = B4 - R3;
E4 = B5 - R4;
E5 = B6 - R5;
E6 = B7 - R6;
E7 = B8 - R7;
E8 = B9 - R8;
E9 = B10 - R9;
clear B1 B2 B3 B4 B5 B6 B7 B8 B9 B10;
fprintf('\n');
IMG = E;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^5;
lmt = 200;
vIMG = blkM2vc(IMG, blkS);
redbk = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))

    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk(jwin,:) = redbk(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk = round(redbk);
x = size(redbk,2);
redbk = sort(redbk(:, x/2));
clear E IMG vIMG dxCB xt;
fprintf('\n');
IMG = E1;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^5;

lmt = 200;
vIMG = blkM2vc(IMG, blkS);
redbk1 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk1;


    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk1(jwin,:) = redbk1(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk1 = round(redbk1);
x = size(redbk1,2);
redbk1 = sort(redbk1(:, x/2));
clear E1 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E2;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^5;

lmt = 200;
vIMG = blkM2vc(IMG, blkS);
redbk2 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk2;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk2(jwin,:) = redbk2(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk2 = round(redbk2);
x = size(redbk2,2);
redbk2 = sort(redbk2(:, x/2));
clear E2 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E3;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^5;

lmt = 200;
vIMG = blkM2vc(IMG, blkS);
redbk3 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk3;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk3(jwin,:) = redbk3(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk3 = round(redbk3);
x = size(redbk3,2);
redbk3 = sort(redbk3(:, x/2));
clear E3 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E4;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^4;

lmt = 200;
vIMG = blkM2vc(IMG, blkS);
redbk4 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk4;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk4(jwin,:) = redbk4(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk4 = round(redbk4);
x = size(redbk4,2);
redbk4 = sort(redbk4(:, x/2));
clear E4 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E5;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^4;

lmt = 200;
vIMG = blkM2vc(IMG, blkS);
redbk5 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk5;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk5(jwin,:) = redbk5(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk5 = round(redbk5);
x = size(redbk5,2);
redbk5 = sort(redbk5(:, x/2));
clear E5 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E6;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^4;

lmt = 200;
vIMG = blkM2vc(IMG, blkS);
redbk6 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk6;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk6(jwin,:) = redbk6(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk6 = round(redbk6);
x = size(redbk6,2);
redbk6 = sort(redbk6(:, x/2));
clear E6 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E7;
imS = size(IMG);
blkS = [4 4];


cbkL = 2^4;

lmt = 100;
vIMG = blkM2vc(IMG, blkS);
redbk7 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk7;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk7(jwin,:) = redbk7(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk7 = round(redbk7);
x = size(redbk7,2);
redbk7 = sort(redbk7(:, x/2));
clear E7 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E8;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^4;

lmt = 100;
vIMG = blkM2vc(IMG, blkS);
redbk8 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk8;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk8(jwin,:) = redbk8(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk8 = round(redbk8);
x = size(redbk8,2);
redbk8 = sort(redbk8(:, x/2));
clear E8 IMG vIMG dxCB xt;
fprintf('\n');
IMG = E9;
imS = size(IMG);
blkS = [4 4];
cbkL = 2^4;

lmt = 100;
vIMG = blkM2vc(IMG, blkS);
redbk9 = vIMG(:, 1:cbkL)';
[NN KK] = size(vIMG);
freq = zeros(1, cbkL);
for k = cbkL+(1:(KK-cbkL))
    xt = vIMG(:, k)';
    dxCB = xt(ones(cbkL,1), :) - redbk9;
    [Lwin jwin] = min(sum(abs(dxCB')));
    if (freq(jwin) < lmt)
        freq(jwin) = freq(jwin) + 1;
        redbk9(jwin,:) = redbk9(jwin,:) + (1/freq(jwin))*dxCB(jwin,:);
    end
end
redbk9 = round(redbk9);
x = size(redbk9,2);
redbk9 = sort(redbk9(:, x/2));
clear E9 IMG vIMG dxCB xt;

Codes for Matlab File enc3.m:

E = cA2 - R;
E1 = cH2 - R1;
E2 = cV2 - R2;
E3 = cD2 - R3;
E4 = cH1 - R4;
E5 = cV1 - R5;
E6 = cD1 - R6;
E7 = cH - R7;
E8 = cV - R8;
E9 = cD - R9;
clear Y1 Y2 Y C;
I = E;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk, 1);
G11 = zeros(size(redbk));
H = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk;
        [A,I] = min(abs(dxCB));
        G11(I) = G11(I) + 1;
        H(i,j) = I;
        z(i,j) = redbk(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z11 = IMG;
clear I vIMG X Y Z dxCB xt;
I = E1;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk1, 1);
G1 = zeros(size(redbk1));
H1 = zeros(X, Y);
for i = 1:X

    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk1;
        [A,I] = min(abs(dxCB));
        G1(I) = G1(I) + 1;
        H1(i,j) = I;
        z(i,j) = redbk1(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z1 = IMG;
clear I vIMG X Y Z dxCB xt;
I = E2;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk2, 1);
G2 = zeros(size(redbk2));
H2 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk2;
        [A,I] = min(abs(dxCB));
        G2(I) = G2(I) + 1;
        H2(i,j) = I;
        z(i,j) = redbk2(I);


    end
end
IMG = vc2blkM(z, imS, blkS);
z2 = IMG;
clear I vIMG X Y Z dxCB xt;
I = E3;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk3, 1);
G3 = zeros(size(redbk3));
H3 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk3;
        [A,I] = min(abs(dxCB));
        G3(I) = G3(I) + 1;
        H3(i,j) = I;
        z(i,j) = redbk3(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z3 = IMG;
clear I vIMG X Y Z dxCB xt;
I = E4;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk4, 1);
G4 = zeros(size(redbk4));
H4 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk4;
        [A,I] = min(abs(dxCB));
        G4(I) = G4(I) + 1;
        H4(i,j) = I;
        z(i,j) = redbk4(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z4 = IMG;
clear I vIMG X Y Z dxCB xt;

I = E5;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk5, 1);
G5 = zeros(size(redbk5));
H5 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk5;
        [A,I] = min(abs(dxCB));
        G5(I) = G5(I) + 1;
        H5(i,j) = I;
        z(i,j) = redbk5(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z5 = IMG;
clear I vIMG X Y Z dxCB xt;
I = E6;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);

Z = size(redbk6, 1);
G6 = zeros(size(redbk6));
H6 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk6;
        [A,I] = min(abs(dxCB));
        G6(I) = G6(I) + 1;
        H6(i,j) = I;
        z(i,j) = redbk6(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z6 = IMG;
clear I vIMG X Y Z dxCB xt;
I = E7;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk7, 1);
G7 = zeros(size(redbk7));
H7 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk7;
        [A,I] = min(abs(dxCB));
        G7(I) = G7(I) + 1;
        H7(i,j) = I;
        z(i,j) = redbk7(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z7 = IMG;
clear I vIMG X Y Z dxCB xt;
I = E8;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk8, 1);
G8 = zeros(size(redbk8));
H8 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk8;
        [A,I] = min(abs(dxCB));
        G8(I) = G8(I) + 1;
        H8(i,j) = I;
        z(i,j) = redbk8(I);
    end
end
clear vIMG X Y Z dxCB xt;
IMG = vc2blkM(z, imS, blkS);
z8 = IMG;
I = E9;
imS = size(I);
vIMG = blkM2vc(I, blkS);
[X Y] = size(vIMG);
Z = size(redbk9, 1);
G9 = zeros(size(redbk9));
H9 = zeros(X, Y);
for i = 1:X
    for j = 1:Y
        xt = (vIMG(i,j)*ones(Z,1));
        dxCB = xt - redbk9;
        [A,I] = min(abs(dxCB));
        G9(I) = G9(I) + 1;
        H9(i,j) = I;


        z(i,j) = redbk9(I);
    end
end
IMG = vc2blkM(z, imS, blkS);
z9 = IMG;
clear I vIMG X Y Z dxCB xt;
R = (R + z11);
R1 = (R1 + z1);
R2 = (R2 + z2);
R3 = (R3 + z3);
R4 = (R4 + z4);
R5 = (R5 + z5);
R6 = (R6 + z6);
R7 = (R7 + z7);
R8 = (R8 + z8);
R9 = (R9 + z9);
clear E E1 E2 E3 E4 E5 E6 E7 E8 E9 z11 z z1 z2 z3 z4 z5 z6 z7 z8 z9;
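Each pass of this error-correction stage quantizes the residual E = subband - R with a residual codebook and adds the decoded residual back into the running reconstruction (R = R + z). A minimal NumPy sketch of one such pass, for illustration only (the helper name is mine, and the residual codebook is treated as a 1-D list of scalar codewords, matching the median-column reduction used above):

```python
import numpy as np

def refine(R, subband, redbk):
    """One progressive-correction pass: quantize the residual
    E = subband - R with the residual codebook, then add the decoded
    residual back into the running reconstruction R."""
    R = np.asarray(R, dtype=float)
    E = np.asarray(subband, dtype=float) - R          # current residual
    cb = np.asarray(redbk, dtype=float).ravel()
    # nearest residual codeword per sample
    z = cb[np.abs(E[..., None] - cb[None, :]).argmin(axis=-1)]
    return R + z
```

Repeating this until the PSNR target is met is what makes the reconstruction progressive: each pass only has to code what the previous passes missed.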


References

[1] Prof. Lesley Ward, Lectures on "Introduction to Wavelets and their Applications", a Mathematics course in the Department of Mathematics, Harvey Mudd College, Spring 2000, Lectures 1-20.

[2] Ruey-Feng Chang, Wen-Tsuen Chen, and Jia-Shung Wang, "A Fast Finite State Algorithm for Vector Quantizer Design", IEEE Transactions on Signal Processing, Vol. 40, No. 1, January 1992, pp. 221-225.

[3] David Salomon, "Data Compression: The Complete Reference", Third Edition, ISBN 0-387-40697-2, Springer-Verlag New York, Inc., 2004, pp. 2-100.

[4] Jill R. Goldschneider, "Lossy Compression of Scientific Data via Wavelet Transform and Vector Quantization", Ph.D. thesis, University of Washington, 1997, pp. 5-86.

[5] Hong Wang, Ling Lu, Da-Shun Que, Xun Luo, "Image Compression based on Wavelet Transform and Vector Quantization", Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, 4-5 November 2002, pp. 1778-1780.

[6] Morris Goldberg, Huifang Sun, "Image Sequence Coding Using Vector Quantization", IEEE Transactions on Communications, Vol. COM-34, No. 7, July 1986, pp. 703-710.

[7] Sachin P. Nanavati and Prasanta K. Panigrahi, "Wavelets: Applications to Image Compression-I", General Article, Resonance, February 2005, pp. 52-61.

[8] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, "Digital Image Processing Using MATLAB", ISBN 81-297-0515-X, LPE, 2004, pp. 3-192.

[9] Khalid Sayood, "Introduction to Data Compression", Second Edition, Morgan Kaufmann, San Francisco, 2000, pp. 35-95.

[10] O. O. Khalifa, "Fast Algorithm for VQ-based wavelet coding system", IEEE Transactions on Image Processing, Vol. 1, No. 2, April 1992, pp. 205-220.

[11] Panrong Xiao, "Image Compression by Wavelet Transform", M.Sc. thesis, Computer and Information Science Department, East Tennessee State University, May 2001, pp. 10-48.

[12] Tom Lookabaugh, Eve A. Riskin, Philip A. Chou, Robert M. Gray, "Variable Rate Vector Quantization for Speech, Image, and Video Compression", IEEE Transactions on Communications, Vol. 41, No. 1, January 1993, pp. 186-199.

[13] Sonja Grgic, Mislav Grgic and Branka Zovko-Cihlar, "Performance Analysis of Image Compression Using Wavelets", IEEE Transactions on Industrial Electronics, Vol. 48, No. 3, June 2001, pp. 682-695.



[14] Newaz Muhammad Syfur Rahim, "Study of Digital Image Compression using Neural Network and Vector Quantization", Ph.D. thesis, Graduate School of Science and Technology, Chiba University, Japan, January 2003, pp. 40-65.

[15] Satyabrata Rout, "Orthogonal and Biorthogonal Wavelets for Image Compression", M.Sc. thesis, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA, August 21, 2003, pp. 70-100.

[16] K. Safi, A. Badri, "Image Compression by Wavelet Transform and Vector Quantization with Progressive and Corrected Reconstruction", Laboratoire de Traitement du Signal et de l'Image (LATSI), FST Mohammedia, BP 146, Morocco, pp. 1-4.