IMAGE DENOISING IN THE TIME-FREQUENCY DOMAIN


An image is often corrupted by noise during its acquisition or transmission. The goal of denoising is to remove the noise while retaining as much as possible the important signal features. In recent years there has been a fair amount of research on wavelet thresholding and threshold selection for signal de-noising, because the wavelet transform provides an appropriate basis for separating the noisy signal from the image signal. The motivation is that, as the wavelet transform is good at energy compaction, small coefficients are more likely due to noise while large coefficients are due to important signal features. These small coefficients can be thresholded without affecting the significant features of the image.

Wavelet thresholding is a nonlinear method and does not remove noise by low-pass filtering like many traditional methods. Low-pass filtering approaches, which are linear time invariant, can blur the sharp features in a signal, and it is sometimes difficult to separate noise from the signal where their Fourier spectra overlap. In the wavelet domain, the signal differs from the noise in the amplitude of its coefficients rather than in their location in the spectrum, which allows thresholding of the wavelet coefficients to remove the noise. These localizing properties of the wavelet transform make the filtering of noise from a signal very effective. While linear methods trade off suppression of noise against broadening of the signal features, noise reduction using wavelets allows features in the original signal to remain sharp. This works very well and can even overcome the pseudo-Gibbs phenomena that are often seen due to lack of shift invariance.

Thresholding is a simple nonlinear technique, which operates on one wavelet coefficient at a time. In its most basic form, each coefficient is thresholded by comparing it against a threshold: if the coefficient is smaller than the threshold, it is set to zero; otherwise it is kept or modified. Replacing the small noisy coefficients by zero and taking the inverse wavelet transform of the result may lead to a reconstruction that keeps the essential signal characteristics with less noise.
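As a minimal sketch of this idea (assuming the Wavelet Toolbox functions dwt, wthresh, and idwt; the test signal, noise level, and the db2 wavelet are illustrative choices, not the project's actual settings):

% Decompose, zero the small detail coefficients, and reconstruct.
x  = sin(20*linspace(0,pi,1000));                       % clean test signal
xn = x + 0.1*randn(1,1000);                             % signal corrupted by white Gaussian noise
[cA,cD] = dwt(xn,'db2');                                % one-level wavelet decomposition
T  = (median(abs(cD))/0.6745)*sqrt(2*log(numel(xn)));   % a common threshold estimate (universal threshold)
cD = wthresh(cD,'s',T);                                 % soft-threshold the detail coefficients
xd = idwt(cA,cD,'db2');                                 % inverse transform: denoised reconstruction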

This project is implemented in MATLAB. In this project, we first discuss the features that a practical digital image denoising algorithm should have. Second, we present a wavelet-based denoising algorithm. Experimental results and analyses are then given to demonstrate that the proposed algorithm is effective and can be used in a practical system.

    Acknowledgement


We are very grateful to the Head of the Department of Electronics and Communication Engineering, Mr. -------, ------------ College of Engineering & Technology, for having provided the opportunity for taking up this project.

We would like to express our sincere gratitude and thanks to Mr. ---------, Department of Electronics & Communication Engineering, ------- College of Engineering & Technology, for having allowed us to do this project.

    Special thanks to Deccan Embedded Solutions Pvt. Ltd., for permitting us to do this

    project work in their esteemed organization, and also for guiding us through the entire

    project.

    We also extend our sincere thanks to our parents and friends for their moral support

throughout the project work. Above all, we thank God Almighty for His manifold mercies in carrying out the project successfully.

CONTENTS

1. Introduction
1.1 Images in MATLAB
1.2 Image Representation
1.3 Digital Image File Types
1.4 Image Coordinate Systems
2.0 Digital Image Processing
2.1 Image Digitization
2.2 Image Pre-processing
2.3 Image Segmentation
3.0 Image Denoising
3.1 Introduction to Wavelet Representation
A. Fourier Analysis
B. Short-Time Fourier Analysis
C. Wavelet Analysis
4.0 Introduction to MATLAB

    1. INTRODUCTION


    Image:

    A digital image is a computer file that contains graphical information

    instead of text or a program. Pixels are the basic building blocks of all digital images.

    Pixels are small adjoining squares in a matrix across the length and width of your digital

image. They are so small that you don't see the actual pixels when the image is on your

    computer monitor.

    Pixels are monochromatic. Each pixel is a single solid color that is blended from

    some combination of the 3 primary colors of Red, Green, and Blue. So, every pixel has a

RED component, a GREEN component, and a BLUE component. The physical dimensions of

    a digital image are measured in pixels and commonly called pixel or image resolution.

    Pixels are scalable to different physical sizes on your computer monitor or on a photo print.

    However, all of the pixels in any particular digital image are the same size. Pixels as

represented in a printed photo become round, slightly overlapping dots.


    Pixel Values: As shown in this bitonal image, each pixel is assigned a tonal value,

    in this example 0 for black and 1 for white.

    PIXEL DIMENSIONS are the horizontal and vertical measurements of an image

    expressed in pixels. The pixel dimensions may be determined by multiplying both the

    width and the height by the dpi. A digital camera will also have pixel dimensions,

    expressed as the number of pixels horizontally and vertically that define its resolution (e.g.,

2,048 by 3,072). The dpi achieved can be calculated by dividing each pixel dimension by the corresponding physical dimension of the document along which it is aligned; for example, a 2,400-by-3,000-pixel scan of an 8-by-10 inch photo corresponds to 300 dpi.


    Fig: Pixel Values in a Binary Image

    Grayscale Images:

    A grayscale image (also called gray-scale, gray scale, or gray-level) is a data matrix

    whose values represent intensities within some range. MATLAB stores a grayscale image

    as an individual matrix, with each element of the matrix corresponding to one image pixel.

    By convention, this documentation uses the variable name I to refer to grayscale images.

    The matrix can be of class uint8, uint16, int16, single, or double. While grayscale

    images are rarely saved with a color map, MATLAB uses a color map to display them.

    For a matrix of class single or double, using the default grayscale color map, the

    intensity 0 represents black and the intensity 1 represents white. For a matrix of type uint8,

uint16, or int16, the intensity intmin(class(I)) represents black and the intensity intmax(class(I)) represents white.
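For instance, a small illustrative snippet (assuming the Image Processing Toolbox and its bundled cameraman.tif image) shows the intensity conventions described above:

I8 = imread('cameraman.tif');    % grayscale image of class uint8 (intensities 0..255)
I  = im2double(I8);              % class double: intensities rescaled into [0, 1]
[min(I(:)) max(I(:))]            % 0 represents black, 1 represents white
imshow(I)                        % displayed with the default grayscale color map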


    The figure below depicts a grayscale image of class double.

    Fig: Pixel Values in a Grayscale Image Define Gray Levels

    Color Images:

A color image is an image in which each pixel is specified by three values, one each for the red, blue, and green components of the pixel's color. MATLAB stores color images as an m-by-n-by-3 data array that defines red, green, and blue color components for

    each individual pixel. Color images do not use a color map. The color of each pixel is

    determined by the combination of the red, green, and blue intensities stored in each color

    plane at the pixel's location.

Graphics file formats store color images as 24-bit images, where the red, green, and blue components are 8 bits each. This yields a potential of 16 million colors. The precision with which a real-life image can be replicated has led to the commonly used term truecolor image.

A color array can be of class uint8, uint16, single, or double. In a color array of class single or double, each color component is a value between 0 and 1. A pixel whose

    color components are (0, 0, 0) is displayed as black, and a pixel whose color components

    are (1, 1, 1) is displayed as white. The three color components for each pixel are stored

    along the third dimension of the data array. For example, the red, green, and blue color

    components of the pixel (10,5) are stored in RGB(10,5,1), RGB(10,5,2), and

    RGB(10,5,3), respectively.
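A small illustrative snippet (assuming MATLAB's bundled peppers.png image) reads a truecolor image and picks out the three color components of one pixel:

RGB = im2double(imread('peppers.png'));   % m-by-n-by-3 truecolor array of class double
r = RGB(10,5,1);                          % red   component of the pixel at row 10, column 5
g = RGB(10,5,2);                          % green component
b = RGB(10,5,3);                          % blue  component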

    The following figure depicts a color image of class double.


    Fig: Color Planes of a True color Image


    Indexed Images:

    An indexed image consists of an array and a colormap matrix. The pixel values in

    the array are direct indices into a colormap. By convention, this documentation uses the

    variable name X to refer to the array and map to refer to the colormap.

    The colormap matrix is an m-by-3 array of class double containing floating-point

values in the range [0, 1]. Each row of map specifies the red, green, and blue components

    of a single color. An indexed image uses direct mapping of pixel values to colormap

    values. The color of each image pixel is determined by using the corresponding value of X

    as an index into map.

    A colormap is often stored with an indexed image and is automatically loaded with

    the image when you use the imread function. After you read the image and the colormap

    into the MATLAB workspace as separate variables, you must keep track of the association

    between the image and colormap. However, you are not limited to using the default

    colormap--you can use any colormap that you choose.

    The relationship between the values in the image matrix and the colormap depends

    on the class of the image matrix. If the image matrix is of class single or double, it

    normally contains integer values 1 through p, where p is the length of the colormap. The

    value 1 points to the first row in the colormap, the value 2 points to the second row, and so

    on. If the image matrix is of class logical, uint8 oruint16, the value 0 points to the first

    row in the colormap, the value 1 points to the second row, and so on.
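As an illustration (assuming MATLAB's bundled indexed image trees.tif), the image array and its colormap are read together, and the colormap row addressed by a pixel value gives that pixel's color:

[X,map] = imread('trees.tif');   % indexed image X and its colormap map (an m-by-3 double array)
size(map)                        % each row of map is one [R G B] triple with values in [0, 1]
rgb = ind2rgb(X,map);            % expand the indexed image into a truecolor array
imshow(X,map)                    % display X together with its colormap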


    The following figure illustrates the structure of an indexed image. In the figure, the image

    matrix is of class double, so the value 5 points to the fifth row of the colormap.

    Fig: Pixel Values Index to Colormap Entries in Indexed Images

    1.3 Digital Image File Types:

    The 5 most common digital image file types are as follows:

    1. JPEG is a compressed file format that supports 24 bit color (millions of colors). This is

    the best format for photographs to be shown on the web or as email attachments. This is

    because the color informational bits in the computer file are compressed (reduced) and

    download times are minimized.


2. GIF is a losslessly compressed file format that supports only 256 distinct colors. Best used

    with web clip art and logo type images. GIF is not suitable for photographs because of its

    limited color support.

3. TIFF is an uncompressed file format with 24 or 48 bit color support. Uncompressed

    means that all of the color information from your scanner or digital camera for each

    individual pixel is preserved when you save as TIFF. TIFF is the best format for saving

digital images that you will want to print. TIFF supports embedded file information,

    including exact color space, output profile information and EXIF data. There is a lossless

    compression for TIFF called LZW. LZW is much like 'zipping' the image file because there

    is no quality loss. An LZW TIFF decompresses (opens) with all of the original pixel

    information unaltered.

    4. BMP is a Windows (only) operating system uncompressed file format that supports 24

    bit color. BMP does not support embedded information like EXIF, calibrated color space

    and output profiles. Avoid using BMP for photographs because it produces approximately

    the same file sizes as TIFF without any of the advantages of TIFF.

    5. Camera RAW is a lossless compressed file format that is proprietary for each digital

    camera manufacturer and model. A camera RAW file contains the 'raw' data from the

    camera's imaging sensor. Some image editing programs have their own version of RAW

    too. However, camera RAW is the most common type of RAW file. The advantage of

    camera RAW is that it contains the full range of color information from the sensor. This

    means the RAW file contains 12 to 14 bits of color information for each pixel. If you shoot

JPEG, you only get 8 bits of color for each pixel. These extra color bits give RAW files much more latitude for editing and tonal adjustment.


1.4 Image Coordinate Systems:

Pixel Coordinates:

In the pixel coordinate system, an image is treated as a grid of discrete elements ordered from top to bottom and left to right, so that, for example, the data for the pixel in the fifth row, second column is stored in the matrix element (5, 2). You use normal MATLAB matrix subscripting to access values of individual pixels.

For example, the MATLAB code

I(2,15)

returns the value of the pixel at row 2, column 15 of the image I.

    Spatial Coordinates:

    In the pixel coordinate system, a pixel is treated as a discrete unit, uniquely

    identified by a single coordinate pair, such as (5, 2). From this perspective, a location such

    as (5.3, 2.2) is not meaningful.

    At times, however, it is useful to think of a pixel as a square patch. From this

    perspective, a location such as (5.3, 2.2) is meaningful, and is distinct from (5, 2). In this

    spatial coordinate system, locations in an image are positions on a plane, and they are

described in terms of x and y (not r and c as in the pixel coordinate system). The following

    figure illustrates the spatial coordinate system used for images. Notice that y increases

    downward.


    2.0 Digital Image Processing

    Digital image processing is the use of computer algorithms to perform image

    processing on digital images. As a subfield of digital signal processing, digital image

    processing has many advantages over analog image processing; it allows a much wider

    range of algorithms to be applied to the input data, and can avoid problems such as the

    build-up of noise and signal distortion during processing.

    2.1 Image digitization:

    An image captured by a sensor is expressed as a continuous function f(x,y) of two

    co-ordinates in the plane. Image digitization means that the function f(x,y) is sampled into

    a matrix with M rows and N columns. The image quantization assigns to each continuous

    sample an integer value. The continuous range of the image function f(x,y) is split into K

intervals. The finer the sampling (i.e., the larger M and N) and the quantization (the larger K), the better the approximation of the continuous image function f(x,y).
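A toy illustration of sampling and quantization (using an already-digitized image as a stand-in for the continuous function f(x,y); the sampling step and K are arbitrary choices):

f = im2double(imread('cameraman.tif'));   % stand-in for the continuous image function f(x,y)
g = f(1:4:end,1:4:end);                   % coarser sampling: smaller M and N
K = 8;                                    % number of quantization intervals
q = round(g*(K-1))/(K-1);                 % quantize the continuous range into K levels
imshow(q)                                 % coarsely sampled and quantized approximation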

    2.2 Image Pre-processing:

    Pre-processing is a common name for operations with images at the lowest level of

    abstraction -- both input and output are intensity images. These iconic images are of the

    same kind as the original data captured by the sensor, with an intensity image usually

represented by a matrix of image function values (brightness). The aim of pre-processing is an improvement of the image data that suppresses unwanted distortions or enhances some image features important for further processing. Image pre-processing methods fall into four categories according to the size of the pixel neighborhood that is used for the calculation of a new pixel brightness:

    o Pixel brightness transformations.

    o Geometric transformations.

o Pre-processing methods that use a local neighborhood of the processed pixel (a minimal sketch follows this list).

    o Image restoration that requires knowledge about the entire image.
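As a minimal sketch of the third category (local-neighborhood pre-processing; the 3-by-3 averaging kernel is just one illustrative choice):

I = im2double(imread('cameraman.tif'));   % input intensity image
h = ones(3)/9;                            % 3-by-3 averaging kernel (the local neighborhood)
J = conv2(I,h,'same');                    % each new pixel brightness computed from its neighborhood
imshow(J)                                 % smoothed output image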

    2.3 Image Segmentation:

    Image segmentation is one of the most important steps leading to the analysis of

    processed image data. Its main goal is to divide an image into parts that have a strong

correlation with objects or areas of the real world contained in the image. Two kinds of segmentation are distinguished:

1. Complete segmentation: This results in a set of disjoint regions uniquely

    corresponding with objects in the input image. Cooperation with higher

    processing levels which use specific knowledge of the problem domain is

    necessary.

2. Partial segmentation: in which regions do not correspond directly with image objects. The image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, reflectivity, texture, etc. (a minimal sketch follows this list).
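A minimal sketch of partial segmentation by brightness (assuming the Image Processing Toolbox and its bundled coins.png image; Otsu's method stands in for any threshold-selection rule):

I  = im2double(imread('coins.png'));   % bright objects on a dark background
T  = graythresh(I);                    % global threshold chosen by Otsu's method
BW = I > T;                            % regions homogeneous with respect to brightness
imshow(BW)                             % binary segmentation result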


Unfortunately, there is no general theory for determining what 'good' image enhancement is when it comes to human perception. If it looks good, it is good! However, when image

    enhancement techniques are used as pre-processing tools for other image processing

    techniques, then quantitative measures can determine which techniques are most

    appropriate.

    3.0 Image Denoising

    Introduction:

An image is often corrupted by noise during its acquisition or transmission. The goal of

    denoising is to remove the noise while retaining as much as possible the important signal

    features. Traditionally, this is achieved by linear processing such as Wiener filtering. A

    vast literature has emerged recently on signal denoising using nonlinear techniques, in the

    setting of additive white Gaussian noise. The seminal work on signal denoising via wavelet

thresholding has shown that various wavelet thresholding schemes for denoising have near-optimal properties in the minimax sense and perform well in simulation studies of one-dimensional curve estimation. Wavelet thresholding has been shown to have better rates of convergence

    than linear methods for approximating functions. Thresholding is a nonlinear technique, yet

    it is very simple because it operates on one wavelet coefficient at a time. Alternative

approaches to nonlinear wavelet-based denoising can be found in the literature and the references therein.


    The intuition behind using lossy compression for denoising may be

    explained as follows. A signal typically has structural correlations that a good coder can

    exploit to yield a concise representation. White noise, however, does not have structural

redundancies and thus is not easily compressible. Hence, a good compression method can

    provide a suitable model for distinguishing between signal and noise. The discussion will

    be restricted to wavelet-based coders, though these insights can be extended to other

    transform-domain coders as well. A concrete connection between lossy compression and

    denoising can easily be seen when one examines the similarity between thresholding and

    quantization, the latter of which is a necessary step in a practical lossy coder. That is, the

    quantization of wavelet coefficients with a zero-zone is an approximation to the

    thresholding function. Thus, provided that the quantization outside of the zero-zone does

    not introduce significant distortion, it follows that wavelet-based lossy compression

achieves denoising. With this connection in mind, this project is about wavelet thresholding

    for image denoising and also for lossy compression. The threshold choice aids the lossy

    coder to choose its zero-zone, and the resulting coder achieves simultaneous denoising and

compression if such a property is desired.

Denoising, i.e. the restoration of electronically distorted images, is an old but still relevant problem. There are many different kinds of distortions. One of the most prevalent is distortion due to additive white Gaussian noise, which can be caused by poor image acquisition or by transferring the image data over noisy communication channels. Early methods to restore the image used linear filtering or smoothing. These methods were simple and easy to apply, but their effectiveness is limited since they often blur or smooth out high-frequency regions.


All denoising methods use images artificially distorted with well-defined white Gaussian noise to achieve objective test results. Note, however, that in real-world images, discriminating the distorting signal from the true image is an ill-posed problem, since it is not always well defined whether a pixel value belongs to the image or is part of

    unwanted noise.

    Newer and better approaches perform some thresholding in the wavelet domain

    of an image. The idea of wavelet thresholding relies on the assumption that the signal

    magnitudes dominate the magnitudes of the noise in a wavelet representation, so that

    wavelet coefficients can be set to zero if their magnitudes are less than a predetermined

    threshold. More recent developments focus on more sophisticated methods, like local or

    context-based thresholding in the wavelet domain. Some methods are inspired by wavelet-

    based image compression methods.

The theoretical formalization of filtering additive i.i.d. Gaussian noise (of zero mean and standard deviation σ_n) via thresholding wavelet coefficients was pioneered by Donoho

    and Johnstone. A wavelet coefficient is compared to a given threshold and is set to zero if

    its magnitude is less than the threshold; otherwise, it is kept or modified (depending on the

    thresholding rule). The threshold acts as an oracle which distinguishes between the

    insignificant coefficients likely due to noise, and the significant coefficients consisting of

    important signal structures.

    Thresholding rules are especially effective for signals with sparse or near-sparse

    representations where only a small subset of the coefficients represents all or most of the

    signal energy. Thresholding essentially creates a region around zero where the coefficients


    are considered negligible. Outside of this region, the thresholded coefficients are kept to

    full precision (that is, without quantization).

    Since the works of Donoho and Johnstone, there has been much research on

    finding thresholds for nonparametric estimation in statistics. However, few are specifically

    tailored for images. In this project, we propose a framework and a near-optimal threshold

    in this framework more suitable for image denoising. This approach can be formally

    described as Bayesian, but this only describes our mathematical formulation, not our

    philosophy. The formulation is grounded on the empirical observation that the wavelet

    coefficients in a sub band of a natural image can be summarized adequately by a

generalized Gaussian distribution (GGD). This observation is well-accepted in the image processing community and is used for state-of-the-art image coders. It follows from this observation that the average MSE (in a sub band) can be approximated by the corresponding Bayesian squared-error risk with the GGD as the prior applied to each coefficient in an i.i.d. fashion. That is, a sum is approximated by an integral. We emphasize that this is an analytical approximation and our framework is broader than assuming wavelet coefficients are i.i.d. draws from a GGD. The goal is to find the soft-threshold that minimizes this Bayesian risk, and we call our method BayesShrink.

Adaptive Threshold for BayesShrink:

The wavelet coefficients in each detail sub band are modeled as samples of a generalized Gaussian distribution (GGD) with standard deviation σ_X and shape parameter β, and the adaptive threshold is derived from this prior.
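The threshold usually quoted for BayesShrink in the literature is T_B = σ_n^2/σ_X, the noise variance divided by the signal standard deviation of the sub band (the same expression appears again in the thresholding discussion later in this report). A minimal per-sub-band sketch, assuming D holds the wavelet coefficients of one detail sub band:

sigma_n  = median(abs(D(:)))/0.6745;           % noise estimate (in practice taken once from the finest HH sub band)
sigma_Y2 = mean(D(:).^2);                      % variance of the observed, noisy coefficients
sigma_X  = sqrt(max(sigma_Y2 - sigma_n^2,0));  % estimated signal standard deviation
if sigma_X > 0
    T = sigma_n^2/sigma_X;                     % BayesShrink-style threshold for this sub band
else
    T = max(abs(D(:)));                        % sub band judged to be all noise: remove everything
end
Dt = wthresh(D,'s',T);                         % soft-threshold the sub band coefficients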


Fig (from "Adaptive Wavelet Thresholding for Image Denoising and Compression"): Histogram of the wavelet coefficients of four test images. For each image, from top to bottom: fine to coarse scales; from left to right: the HH, HL, and LH sub bands, respectively.


    3.1 Introduction to wavelet representation:

    The wavelet concept and its origins

The central idea of wavelets is to analyze (a signal) according to scale. Imagine a function

    that oscillates like a wave in a limited portion of time or space and vanishes outside of it.

    The wavelets are such functions: wave-like but localized. One chooses a particular wavelet,

    stretches it (to meet a given scale) and shifts it, while looking into its correlations with the

    analyzed signal. This analysis is similar to observing the displayed signal (e.g., printed or

    shown on the screen) from various distances. The signal correlations with wavelets

stretched to large scales reveal gross (coarse) features, while at small scales fine signal structures are discovered. It is therefore often said that wavelet analysis lets us see both the forest and the trees.

    In such a scanning through a signal, the scale and the position can vary

continuously or in discrete steps. The latter case is of practical interest in this project. From

    an engineering point of view, the discrete wavelet analysis is a two channel digital filter

    bank (composed of the low pass and the high pass filters), iterated on the low pass output.

    The low pass filtering yields an approximation of a signal (at a given scale), while the high

    pass (more precisely, band pass) filtering yields the details that constitute the difference

    between the two successive approximations. A family of wavelets is then associated with

    the band pass and a family of scaling functions with the low pass filters.

A) Fourier Analysis:


    Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps

    the most well-known of these is Fourier analysis, which breaks down a signal into

    constituent sinusoids of different frequencies. Another way to think of Fourier analysis is

as a mathematical technique for transforming our view of the signal from time-based to frequency-based.

    Figure 2

For many signals, Fourier analysis is extremely useful because the signal's frequency content is of great importance. So why do we need other techniques, like wavelet analysis?

    Fourier analysis has a serious drawback. In transforming to the frequency domain,

time information is lost. When looking at a Fourier transform of a signal, it is impossible to tell when a particular event took place. If the signal properties do not change much over time (that is, if it is what is called a stationary signal), this drawback isn't very important. However, most interesting signals contain numerous nonstationary or transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These

    characteristics are often the most important part of the signal, and Fourier analysis is not

    suited to detecting them.

B) Short-Time Fourier Analysis:

In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyze only a small section of the signal at a time, a technique called windowing the signal. Gabor's adaptation, called the Short-Time Fourier Transform (STFT), maps a signal into a two-dimensional function of time and frequency.

    Figure 3

    The STFT represents a sort of compromise between the time- and frequency-based

    views of a signal. It provides some information about both when and at what frequencies a

    signal event occurs. However, you can only obtain this information with limited precision,

    and that precision is determined by the size of the window. While the STFT compromise

    between time and frequency information can be useful, the drawback is that once you

    choose a particular size for the time window, that window is the same for all frequencies.

Many signals require a more flexible approach, one where we can vary the window size to determine more accurately either time or frequency.

C) Wavelet Analysis:

    Wavelet analysis represents the next logical step: a windowing technique with

    variable-sized regions. Wavelet analysis allows the use of long time intervals where we

    want more precise low-frequency information, and shorter regions where we want high-

    frequency information.

    Figure 4


Here's what this looks like in contrast with the time-based, frequency-based,

    and STFT views of a signal:

    Figure 5

You may have noticed that wavelet analysis does not use a time-frequency region, but rather a time-scale region. For more information about the concept of scale and the link between scale and frequency, see "How to Connect Scale to Frequency?"

    What Can Wavelet Analysis Do?

    One major advantage afforded by wavelets is the ability to perform local analysis,

    that is, to analyze a localized area of a larger signal. Consider a sinusoidal signal with a

small discontinuity, one so tiny as to be barely visible. Such a signal could easily be

    generated in the real world, perhaps by a power fluctuation or a noisy switch.

    Figure 6


    A plot of the Fourier coefficients (as provided by the fft command) of this signal shows

    nothing particularly interesting: a flat spectrum with two peaks representing a single

    frequency. However, a plot of wavelet coefficients clearly shows the exact location in time

    of the discontinuity.

    Figure 7

    Wavelet analysis is capable of revealing aspects of data that other

    signal analysis techniques miss, aspects like trends, breakdown points, discontinuities in

    higher derivatives, and self-similarity. Furthermore, because it affords a different view of

    data than those presented by traditional techniques, wavelet analysis can often compress or

    de-noise a signal without appreciable degradation. Indeed, in their brief history within the

    signal processing field, wavelets have already proven themselves to be an indispensable

addition to the analyst's collection of tools and continue to enjoy a burgeoning popularity

    today.

    What Is Wavelet Analysis?


Now that we know some situations in which wavelet analysis is useful, it is worthwhile asking "What is wavelet analysis?" and, even more fundamentally, "What is a wavelet?"

    A wavelet is a waveform of effectively limited duration that has an average value of zero.

    Compare wavelets with sine waves, which are the basis of Fourier analysis.

Sinusoids do not have limited duration; they extend from minus to plus

    infinity. And where sinusoids are smooth and predictable, wavelets tend to be

    irregular and asymmetric.

Figure 8

Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet. Just looking at pictures of wavelets and

    sine waves, you can see intuitively that signals with sharp changes might be better analyzed

    with an irregular wavelet than with a smooth sinusoid, just as some foods are better

    handled with a fork than a spoon. It also makes sense that local features can be described

    better with wavelets that have local extent.

    The Continuous Wavelet Transform:

Mathematically, the process of Fourier analysis is represented by the Fourier transform

F(w) = ∫ f(t) e^(-jwt) dt,

which is the sum over all time of the signal f(t) multiplied by a complex exponential. (Recall that a complex exponential can be broken down into real and imaginary sinusoidal components.) The results of the transform are the Fourier coefficients F(w), which when multiplied by a sinusoid of frequency w yield the constituent sinusoidal components of the original signal. Graphically, the process looks like:

    Figure 9

Similarly, the continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet function ψ:

C(scale, position) = ∫ f(t) ψ(scale, position, t) dt.

The result of the CWT is a series of wavelet coefficients C, which are a function of scale and position. Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the constituent wavelets of the original signal:


    Figure 10

    Scaling

We've already alluded to the fact that wavelet analysis produces a time-scale view of a signal, and now we're talking about scaling and shifting wavelets. What exactly do we mean by scale in this context?

Scaling a wavelet simply means stretching (or compressing) it. To go beyond colloquial descriptions such as stretching, we introduce the scale factor, often denoted by the letter a. If we're talking about sinusoids, for example, the effect of the scale factor is very easy to see:


    Figure 11

    The scale factor works exactly the same with wavelets. The smaller the scale factor, the

    more compressed the wavelet.

    Figure 12

It is clear from the diagrams that for a sinusoid sin(wt) the scale factor a is related (inversely) to the radian frequency w. Similarly, with wavelet analysis the scale is related to the frequency of the signal.

Shifting

Shifting a wavelet simply means delaying (or advancing) its onset; mathematically, delaying a function f(t) by k is represented by f(t - k).


The CWT is computed in five steps:

1. Take a wavelet and compare it to a section at the start of the original signal.

2. Calculate a number, C, that represents how closely correlated the wavelet is with this section of the signal; the higher C is, the greater the similarity.

Figure 14

3. Shift the wavelet to the right and repeat steps 1 and 2 until you've covered the whole signal.

    Figure 15

    4. Scale (stretch) the wavelet and repeat steps 1 through 3.


    Figure 16

5. Repeat steps 1 through 4 for all scales.

When you're done, you'll have the coefficients produced at different scales by

    different sections of the signal. The coefficients constitute the results of a regression of the

    original signal performed on the wavelets.

How to make sense of all these coefficients? You could make a plot on which the x-axis represents position along the signal (time), the y-axis represents scale, and the color at each x-y point represents the magnitude of the wavelet coefficient C. These are the

    coefficient plots generated by the graphical tools.

    Figure 17

    These coefficient plots resemble a bumpy surface viewed from above.


Low scale a => Compressed wavelet => Rapidly changing details => High frequency w.

High scale a => Stretched wavelet => Slowly changing, coarse features => Low frequency w.

    The Scale of Nature:

It's important to understand that the fact that wavelet analysis does not produce a time-frequency view of a signal is not a weakness, but a strength of the technique.

    Not only is time-scale a different way to view data, it is a very natural way to view data

    deriving from a great number of natural phenomena.

    Consider a lunar landscape, whose ragged surface (simulated below) is a result of

    centuries of bombardment by meteorites whose sizes range from gigantic boulders to dust

    specks.

    If we think of this surface in cross-section as a one-dimensional signal, then it is

reasonable to think of the signal as having components of different scales: large features carved by the impacts of large meteorites, and finer features abraded by small meteorites.

    Figure 20

    Here is a case where thinking in terms of scale makes much more sense than thinking

in terms of frequency. Inspection of the CWT coefficient plot for this signal reveals patterns among scales and shows the signal's possibly fractal nature.


Approximations and Details:

For many signals, the low-frequency content is the most important part. It is what gives the signal its identity. The high-frequency content, on the other hand, imparts flavor or nuance. Consider the human voice. If you remove the high-frequency components, the voice sounds different but you can still tell what's being said. However, if you remove enough of the low-frequency components, you hear gibberish. In wavelet analysis, we

    often speak of approximations and details. The approximations are the high-scale, low-

    frequency components of the signal. The details are the low-scale, high-frequency

    components.

    The filtering process at its most basic level looks like this:

    Figure 23

    The original signal S passes through two complementary filters and emerges as two

    signals.

    Unfortunately, if we actually perform this operation on a real digital signal, we

wind up with twice as much data as we started with. Suppose, for instance, that the original

    signal S consists of 1000 samples of data. Then the resulting signals will each have 1000

    samples, for a total of 2000.

    These signals A and D are interesting, but we get 2000 values instead of the 1000

we had. There exists a more subtle way to perform the decomposition using wavelets. By looking carefully at the computation, we may keep only one point out of two in each of the two 1000-length sequences to get the complete information. This is the notion of down sampling. We produce two sequences called cA and cD.


    Figure 24

The process on the right, which includes down sampling, produces the DWT coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise added to it.

    Here is our schematic diagram with real signals inserted into it:

    Figure 25

    The MATLAB code needed to generate s, cD, and cA is:

    s = sin(20*linspace(0,pi,1000)) + 0.5*rand(1,1000);


    [cA,cD] = dwt(s,'db2');

    where db2 is the name of the wavelet we want to use for the analysis.

Notice that the detail coefficients cD are small and consist mainly of high-frequency noise, while the approximation coefficients cA contain much less noise than does the original

    signal.

    [length(cA) length(cD)]

    ans = 501 501

    You may observe that the actual lengths of the detail and approximation coefficient

    vectors are slightly more than half the length of the original signal. This has to do with the

    filtering process, which is implemented by convolving the signal with a filter. The

    convolution smears the signal, introducing several extra samples into the result.

    Multiple-Level Decomposition:

    The decomposition process can be iterated, with successive approximations being

    decomposed in turn, so that one signal is broken down into many lower resolution

    components. This is called the wavelet decomposition tree.


    Figure 26

Looking at a signal's wavelet decomposition tree can yield valuable information.

    Figure 27

    Number of Levels:

    Since the analysis process is iterative, in theory it can be continued indefinitely. In

    reality, the decomposition can proceed only until the individual details consist of a single

sample or pixel. In practice, you'll select a suitable number of levels based on the nature of

    the signal, or on a suitable criterion such as entropy.
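A short sketch of a multiple-level decomposition in MATLAB (reusing the noisy sinusoid from the one-stage example; the choice of three levels is arbitrary):

s = sin(20*linspace(0,pi,1000)) + 0.5*rand(1,1000);  % the noisy sinusoid used earlier
wmaxlev(numel(s),'db2')            % largest sensible number of levels for this signal length
[C,L] = wavedec(s,3,'db2');        % 3-level decomposition: C holds all coefficients, L their lengths
cA3 = appcoef(C,L,'db2',3);        % level-3 approximation coefficients
cD1 = detcoef(C,L,1);              % level-1 detail coefficients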

    Wavelet Reconstruction:

We've learned how the discrete wavelet transform can be used to analyze, or decompose, signals and images. This process is called decomposition or analysis. The other

    half of the story is how those components can be assembled back into the original signal

    without loss of information. This process is called reconstruction, or synthesis. The


mathematical manipulation that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a signal in the Wavelet Toolbox, we reconstruct it from

    the wavelet coefficients:

    Figure 28

    Where wavelet analysis involves filtering and down sampling, the wavelet

    reconstruction process consists of up sampling and filtering. Up sampling is the process of

    lengthening a signal component by inserting zeros between samples:

    Figure 29

    The Wavelet Toolbox includes commands like idwt and waverec that perform

    single-level or multilevel reconstruction respectively on the components of one-

    dimensional signals. These commands have their two-dimensional analogs, idwt2 and

    waverec2.
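For example (a minimal sketch continuing the signals defined earlier), the reconstruction commands recover the original signal from its coefficients up to round-off error:

x1 = idwt(cA,cD,'db2');            % single-level reconstruction from the dwt pair computed earlier
[C,L] = wavedec(s,3,'db2');        % multilevel decomposition of the same signal
s_rec = waverec(C,L,'db2');        % multilevel reconstruction
max(abs(s - s_rec))                % error is at the level of floating-point round-off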

    Reconstruction Filters:


    The filtering part of the reconstruction process also bears some discussion, because it

    is the choice of filters that is crucial in achieving perfect reconstruction of the original

    signal. The down sampling of the signal components performed during the decomposition

    phase introduces a distortion called aliasing. It turns out that by carefully choosing filters

    for the decomposition and reconstruction phases that are closely related (but not identical),

    we can cancel out the effects of aliasing.

    The low- and high pass decomposition filters (L and H), together with their

    associated reconstruction filters (L' and H'), form a system of what is called quadrature

    mirror filters:

    Figure 30

    Reconstructing Approximations and Details:

    We have seen that it is possible to reconstruct our original signal from the

    coefficients of the approximations and details.

    Figure31


    It is also possible to reconstruct the approximations and details themselves from

    their coefficient vectors.

As an example, let's consider how we would reconstruct the first-level

    approximation A1 from the coefficient vector cA1. We pass the coefficient vector cA1

    through the same process we used to reconstruct the original signal. However, instead of

    combining it with the level-one detail cD1, we feed in a vector of zeros in place of the

    detail coefficients

    vector:

    Figure 32

The process yields a reconstructed approximation A1, which has the same length as

    the original signal S and which is a real approximation of it. Similarly, we can reconstruct

    the first-level detail D1, using the analogous process:

    Figure 33

    The reconstructed details and approximations are true constituents of the original

    signal. In fact, we find when we combine them that:

A1 + D1 = S
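A small sketch verifying this identity in MATLAB (using the noisy sinusoid s from the earlier examples):

[C,L] = wavedec(s,1,'db2');        % one-level decomposition of s
A1 = wrcoef('a',C,L,'db2',1);      % level-1 approximation reconstructed to full length
D1 = wrcoef('d',C,L,'db2',1);      % level-1 detail reconstructed to full length
max(abs(s - (A1 + D1)))            % A1 + D1 reproduces s up to round-off error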


Note that the coefficient vectors cA1 and cD1, because they were produced by down sampling and are only half the length of the original signal, cannot directly be combined to reproduce the signal.

    It is necessary to reconstruct the approximations and details before combining

    them. Extending this technique to the components of a multilevel analysis, we find that

    similar relationships hold for all the reconstructed signal constituents.

    That is, there are several ways to reassemble the original signal:

    Figure 34

    Relationship of Filters to Wavelet Shapes:

    In the section Reconstruction Filters, we spoke of the importance of choosing the

    right filters. In fact, the choice of filters not only determines whether perfect reconstruction

    is possible, it also determines the shape of the wavelet we use to perform the analysis. To

    construct a wavelet of some practical utility, you seldom start by drawing a waveform.

    Instead, it usually makes more sense to design the appropriate quadrature mirror filters, and

    then use them to create the waveform. Lets see

    how this is done by focusing on an example.

Consider the low pass reconstruction filter (L') for the db2 wavelet. Reversing the order of this four-element filter and negating alternate samples gives the associated high pass filter; up sampling that high pass filter by two and convolving the result with L' gives a first, rough approximation of the wavelet shape:

plot(H2)

Figure 36

    If we iterate this process several more times, repeatedly up sampling and

    convolving the resultant vector with the four-element filter vector Lprime, a pattern begins

    to emerge:

    Figure 37
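A sketch of that iteration (assuming the Wavelet Toolbox helpers dbaux, qmf, and dyadup; the number of iterations and the sign/reflection conventions are illustrative, so the plotted curve may appear mirrored or rescaled):

Lprime = dbaux(2);                 % db2 low pass (scaling) filter: four coefficients
Hprime = qmf(Lprime);              % derived high pass filter (reverse order, negate alternate samples)
H = Hprime;
for k = 1:4                        % iterate: up sample by two, then convolve with Lprime
    H = conv(dyadup(H),Lprime);
end
plot(H)                            % the curve approaches the shape of the db2 wavelet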


    The curve begins to look progressively more like the db2 wavelet. This means that

the wavelet's shape is determined entirely by the coefficients of the reconstruction filters. This relationship has profound implications. It means that you cannot choose just any shape, call it a wavelet, and perform an analysis. At least, you can't choose an arbitrary

    wavelet waveform if you want to be able to reconstruct the original signal accurately. You

    are compelled to choose a shape determined by quadrature mirror decomposition filters.

    The Scaling Function:

We've seen the interrelation of wavelets and quadrature mirror filters. The wavelet function is determined by the high pass filter, which also produces the details of the wavelet decomposition.

There is an additional function associated with some, but not all, wavelets. This is the so-called scaling function. The scaling function is very similar to the wavelet function.

    It is determined by the low pass quadrature mirror filters, and thus is associated with the

    approximations of the wavelet decomposition. In the same way that iteratively up-

    sampling and convolving the high pass filter produces a shape approximating the wavelet

    function, iteratively up-sampling and convolving the low pass filter produces a shape

    approximating the scaling function.

    Multi-step Decomposition and Reconstruction:


A multi-step analysis-synthesis process can be represented as:

    Figure 38

    This process involves two aspects: breaking up a signal to obtain the wavelet

coefficients, and reassembling the signal from the coefficients. We've already discussed

    decomposition and reconstruction at some length. Of course, there is no point breaking up a

    signal merely to have the satisfaction of immediately reconstructing it. We may modify the

    wavelet coefficients before performing the reconstruction step. We perform wavelet

    analysis because the coefficients thus obtained have many known uses, de-noising and

    compression being foremost among them. But wavelet analysis is still a new and emerging

    field. No doubt, many uncharted uses of the wavelet coefficients lie in wait. The Wavelet

    Toolbox can be a means of exploring possible uses and hitherto unknown applications of

    wavelet analysis. Explore the toolbox functions and see what you discover.

    WAVELET DECOMPOSITION:


Images are treated as two-dimensional signals; they change horizontally and vertically, so 2-D wavelet analysis must be used for images. 2-D wavelet analysis uses the same mother wavelets but requires an extra step at every level of decomposition. The 1-D analysis separated the high-frequency information from the low-frequency information at every level of decomposition, so only two sub-signals were produced at each level.

In 2-D, the images are considered to be matrices with N rows and M columns. At every level of decomposition the data is first filtered along rows; then the approximation and details produced from this are filtered along columns.

    Fig 1: Decomposition of an Image

At every level, four sub-images are obtained: the approximation, the vertical

    detail, the horizontal detail and the diagonal detail. Below the Saturn image has been


    decomposed to one level. The wavelet analysis has found how the image changes

    vertically, horizontally and diagonally.

Fig 2: 2-D Decomposition of Saturn Image to Level 1

To get the next level of decomposition, the approximation sub-image is itself decomposed; this idea can be seen in Fig 3.


Fig 3: Saturn Image decomposed to Level 3. Only the 9 detail sub-images and the final approximation sub-image are required to reconstruct the image perfectly.
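A minimal 2-D decomposition sketch in MATLAB (the cameraman.tif test image and the db2 wavelet are illustrative choices):

X = im2double(imread('cameraman.tif'));
[cA,cH,cV,cD] = dwt2(X,'db2');        % approximation plus horizontal, vertical, and diagonal details
[cA2,cH2,cV2,cD2] = dwt2(cA,'db2');   % next level: decompose the approximation sub-image again
Xr = idwt2(cA,cH,cV,cD,'db2');        % one-level reconstruction of the original image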

When compressing with orthogonal wavelets, the energy retained (in percent) is defined as 100 times the squared norm of the coefficients of the current decomposition divided by the squared norm of the coefficients of the original image. The number of zeros in percentage is defined as 100 times the number of zeros in the current decomposition divided by the total number of coefficients.

    Wavelet based denoising schemes:

    The idea of wavelet thresholding relies on the assumption that the signal

    magnitudes dominate the magnitudes of the noise in a wavelet representation, so that

    wavelet coefficients can be set to zero if their magnitudes are less than a predetermined

    threshold. Donoho and Johnstone proposed hard- and soft-thresholding methods for

    denoising, where the former leaves the magnitudes of coefficients unchanged if they are

larger than a given threshold, while the latter shrinks them toward zero by the threshold value.

However, the major problem with both methods and most of their variants is the

    choice of a suitable threshold value. Most signals show a spatially non-uniform energy

    distribution, which motivates the choice of a non-constant threshold. Since a given noisy

    signal may consist of some parts where the magnitudes of the signal are below the globally

    defined threshold and other parts where the noise magnitudes exceed that given threshold,

methods relying on a globally defined threshold cut off parts of the signal, on the one hand,

    and leave some noise untouched, on the other hand. This observation led to the idea of a

    spatially adaptive threshold choice depending on the relationship of local energy (variance)

    of the observed signal and the noise variance.


Chang et al. [3, 4] were the first to propose this kind of spatially adaptive wavelet

    thresholding for image denoising. Their method of selecting a spatially adaptive threshold

    is based on a context model, which involves neighboring coefficients of the wavelet

    decomposition for the estimation of the local variance. The authors extended this idea by

    using a more elaborate context model and by iterating the context-based thresholding

process in the denoised wavelet representation, which led to significantly improved results.

    Denoising by wavelet thresholding:

    Wavelet thresholding is a popular approach for denoising due to its simplicity. In its most

    basic form, this technique operates in the orthogonal wavelet domain, where each

coefficient is thresholded by comparing it against a threshold; if the coefficient is smaller than the threshold it is set to zero; otherwise, it is kept or modified. One of the first reports about this approach was by Weaver et al. [Weaver92].

    Hard and soft thresholding:

Two standard thresholding policies are hard-thresholding (keep or kill) and soft-thresholding (shrink or kill). In both cases, the coefficients that are below a certain threshold are set to zero. In hard thresholding, the remaining coefficients are left unchanged, while in soft thresholding they are shrunk toward zero by the threshold value.


Fig: Shrinkage factors that multiply the wavelet coefficients in (a) hard-thresholding and (b) soft-thresholding.
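Both policies are available through the Wavelet Toolbox function wthresh; a short sketch reproduces the two shrinkage characteristics shown above:

w  = linspace(-2,2,401);        % a range of coefficient values
T  = 0.5;                       % threshold
wh = wthresh(w,'h',T);          % hard: keep-or-kill, surviving values unchanged
ws = wthresh(w,'s',T);          % soft: shrink-or-kill, surviving values shrunk toward zero by T
plot(w,wh,w,ws)                 % the keep/kill and shrink/kill input-output curves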

Most methods for estimating the threshold assume AWGN and an orthogonal wavelet transform. Among those, well known is the universal threshold of Donoho and Johnstone,

T = σ_n * sqrt(2 log n),

where σ_n is the estimate of the standard deviation of the additive white noise and n is the total number of wavelet coefficients in a given detail image. The rationale behind this threshold is to remove all the coefficients that are smaller than the expected maximum of i.i.d. normal noise: if {u_i} is a sequence of n i.i.d. random variables with the normal distribution N(0, 1), then the maximum max_i{|u_i|} is smaller than sqrt(2 log n) with a probability approaching one as n tends to infinity. Moreover, the probability that max_i{|u_i|} exceeds sqrt(2 log n) by a value t is smaller than e^(-t^2/2) [Donoho92a, Vidakovic94]. At different resolution scales, this threshold differs only in the factor related to the number of coefficients in the given sub band.
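A minimal sketch of the universal threshold applied to one detail sub-image (assuming a noisy grayscale image Xnoisy is already in the workspace; the MAD-based noise estimate is the rule commonly used with this threshold):

[cA,cH,cV,cD] = dwt2(Xnoisy,'db2');     % one-level 2-D decomposition of the noisy image
sigma_n = median(abs(cD(:)))/0.6745;    % robust noise estimate from the diagonal (HH) detail sub-image
n       = numel(cD);                    % number of coefficients in that detail image
T_univ  = sigma_n*sqrt(2*log(n));       % universal threshold of Donoho and Johnstone
cDt     = wthresh(cD,'s',T_univ);       % soft-threshold the sub-image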

    Other thresholds that are estimated in an adaptive way for each level were

    proposed, e.g., in [Donoho95b, Hilton97, Jansen97, Nason94, Weyrich98]. Among those,

well known is the SURE threshold of [Donoho95b], derived from minimizing Stein's

    unbiased risk estimate [Stein81] when soft-thresholding is used. Nason [Nason94]

    proposed a threshold selection based on a cross-validation procedure, which is further

    extended in [Jansen97, Weyrich98] and applied to correlated noise. Other methods, like

    [Chang00b, Ruggeri99], derive the optimum threshold by minimizing the mean squared

    error in a thresholded signal under an assumed prior distribution of the wavelet

coefficients. Hilton's data-analytic threshold [Hilton97] takes into account the spatial clustering properties of wavelet coefficients. However, this threshold, as well as all the others mentioned above, is spatially uniform, i.e., of constant value for the whole detail image.

    It is obvious that spatially uniform thresholding is not the best thing one can do.

    Instead of applying a constant threshold to all the coefficients (in a given sub band) it

    would be better to decide for each coefficient separately what is better: keeping or killing (a

nice discussion is in [Jansen01b, p.102]). It was shown in Sec. 2.3.5 that the mean squared error would be minimized by keeping the coefficients whose signal component is above the noise standard deviation and removing the others. A spatially varying threshold selection can better approach this unrealistic ideal. In this respect, spatially adaptive thresholding with context modeling of wavelet coefficients [Chang98, Chang00a] is a state-of-the-art approach for image denoising. Briefly, this approach applies soft thresholding


with the threshold equal to σ_n^2/σ_X, where σ_n is the noise standard deviation and σ_X is the standard deviation of the signal; to estimate σ_X at a given position, the coefficients with a similar context are clustered; the context variable in [Chang00a] is actually a weighted average of the coefficient magnitudes in a moving window. This method appears as a reference method in Table 5.1. Other approaches, which rely on the decay of individual coefficients across scales, will be reviewed in the next section.

    Wavelet domain Bayes estimation:

    Bayesian approaches to wavelet shrinkage are less ad-hoc than earlier proposals

    and were shown to be effective. In general, Bayes rules are shrinkers and their shape in

    many cases has a desirable property: it can heavily shrink small arguments and only

    slightly shrink large arguments. The resulting actions on wavelet coefficients can be very

    close to thresholding.
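To illustrate only the shape of such a rule (this is not the Bayes rule of any of the cited works), the short MATLAB fragment below compares a Wiener-like shrinkage curve, which shrinks small inputs heavily and large inputs only slightly, with plain soft thresholding; the noise level σ = 1 is an arbitrary assumption.

    % Illustrative shrinkage curve versus soft thresholding (shape only).
    sigma    = 1;                                    % assumed noise level
    x        = linspace(-10, 10, 1000);              % noisy coefficient values
    y_shrink = x .* (x.^2 ./ (x.^2 + sigma^2));      % smooth, Wiener-like shrinkage
    y_soft   = sign(x) .* max(abs(x) - sigma, 0);    % soft thresholding, for comparison
    plot(x, y_shrink, x, y_soft, x, x, ':');         % the dotted line is the identity
    legend('smooth shrinkage', 'soft threshold', 'identity');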

    4.0 INTRODUCTION TO MATLAB


    What Is MATLAB?

    MATLAB is a high-performance language for technical computing. It integrates

    computation, visualization, and programming in an easy-to-use environment where

    problems and solutions are expressed in familiar mathematical notation. Typical uses

    include

    1. Math and computation

    2. Algorithm development

    3. Data acquisition

    4. Modeling, simulation, and prototyping

    5. Data analysis, exploration, and visualization

    6. Scientific and engineering graphics

    7. Application development, including graphical user interface building.

    MATLAB is an interactive system whose basic data element is an array that does not

    require dimensioning. This allows you to solve many technical computing problems,

    especially those with matrix and vector formulations, in a fraction of the time it would take

to write a program in a scalar, noninteractive language such as C or FORTRAN.
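For instance, a small linear system can be set up and solved without declaring array sizes or writing loops; the values below are arbitrary and serve only as an illustration.

    A = pascal(4);       % a 4x4 matrix, created without dimensioning
    b = (1:4)';          % a column vector
    x = A \ b;           % solve the linear system A*x = b in one statement
    r = norm(A*x - b);   % residual, effectively zero
    s = sum(A(:));       % sum of all matrix elements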

    The name MATLAB stands for matrix laboratory. MATLAB was originally written to

    provide easy access to matrix software developed by the LINPACK and EISPACK


    projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries,

    embedding the state of the art in software for matrix computation.

    MATLAB has evolved over a period of years with input from many users. In

    university environments, it is the standard instructional tool for introductory and advanced

    courses in mathematics, engineering, and science. In industry, MATLAB is the tool of

    choice for high-productivity research, development, and analysis.

    MATLAB features a family of add-on application-specific solutions called

toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and

    apply specialized technology. Toolboxes are comprehensive collections of MATLAB

    functions (M-files) that extend the MATLAB environment to solve particular classes of

    problems. Areas in which toolboxes are available include signal processing, control

    systems, neural networks, fuzzy logic, wavelets, simulation, and many others.
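The Wavelet Toolbox is the one used in this project. As a hedged illustration of how compactly a denoising step can be written with it, the fragment below calls the toolbox functions ddencmp and wdencmp with their default global-thresholding settings; the test image, the noise level, the wavelet name and the number of levels are assumptions made only for this example.

    X  = im2double(imread('cameraman.tif'));             % example image (assumed available)
    Xn = X + 0.05*randn(size(X));                        % add white Gaussian noise
    [thr, sorh, keepapp] = ddencmp('den', 'wv', Xn);     % default de-noising parameters
    Xd = wdencmp('gbl', Xn, 'db4', 3, thr, sorh, keepapp);  % global wavelet de-noising
    figure, imshow([Xn, Xd]);                            % noisy and de-noised images side by side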

    The MATLAB System:

    The MATLAB system consists of five main parts:

    Development Environment:

    This is the set of tools and facilities that help you use MATLAB functions and files.

    Many of these tools are graphical user interfaces. It includes the MATLAB desktop and

    Command Window, a command history, an editor and debugger, and browsers for viewing

    help, the workspace, files, and the search path.

The MATLAB Mathematical Function Library:


    This is a vast collection of computational algorithms ranging from elementary

    functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions

like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
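A few representative calls from this library are shown below; the numerical values are arbitrary.

    A  = [2 1; 1 3];
    Ai = inv(A);                    % matrix inverse
    ev = eig(A);                    % eigenvalues
    bj = besselj(0, 2.5);           % Bessel function of the first kind, order 0
    F  = fft([1 2 3 4]);            % fast Fourier transform
    z  = cos(pi/3) + 1i*sin(pi/3);  % complex arithmetic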

    The MATLAB Language:

    This is a high-level matrix/array language with control flow statements, functions,

    data structures, input/output, and object-oriented programming features. It allows both

    "programming in the small" to rapidly create quick and dirty throw-away programs, and

    "programming in the large" to create complete large and complex application programs.

    Graphics:

    MATLAB has extensive facilities for displaying vectors and matrices as graphs, as

    well as annotating and printing these graphs. It includes high-level functions for two-

    dimensional and three-dimensional data visualization, image processing, animation, and

    presentation graphics. It also includes low-level functions that allow you to fully customize

    the appearance of graphics as well as to build complete graphical user interfaces on your

    MATLAB applications.
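For example, a noisy image and its de-noised version (variables named as in the earlier sketches, which is an assumption) can be displayed side by side and exported with a few high-level calls:

    figure;
    subplot(1,2,1), imagesc(noisy),    colormap(gray), axis image, title('Noisy image');
    subplot(1,2,2), imagesc(denoised), colormap(gray), axis image, title('De-noised image');
    print -dpng denoising_result.png        % export the figure to a PNG file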

    The MATLAB Application Program Interface (API):

    This is a library that allows you to write C and Fortran programs that interact with

    MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking),

    calling MATLAB as a computational engine, and for reading and writing MAT-files.


    MATLAB WORKING ENVIRONMENT:

    MATLAB DESKTOP:-

The MATLAB desktop is the main MATLAB application window. The desktop contains five

    sub windows, the command window, the workspace browser, the current directory

    window, the command history window, and one or more figure windows, which are shown

    only when the user displays a graphic.

    The command window is where the user types MATLAB commands and

    expressions at the prompt (>>) and where the output of those commands is displayed.

    MATLAB defines the workspace as the set of variables that the user creates in a work

    session. The workspace browser shows these variables and some information about them.

Double-clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information about, and in some instances edit, certain properties of the variable.

    The current Directory tab above the workspace tab shows the contents of the current

    directory, whose path is shown in the current directory window. For example, in the

Windows operating system the path might be as follows: C:\MATLAB\Work, indicating that directory Work is a subdirectory of the main directory MATLAB, which is installed in drive C. Clicking on the arrow in the current directory window shows a

    list of recently used paths. Clicking on the button to the right of the window allows the user

    to change the current directory.


MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu on the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.
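For example, the project directory (using the example path above) can be added to the search path from the command window:

    addpath('C:\MATLAB\Work');   % add the project directory to the search path
    path                         % list the directories currently on the search path
    savepath                     % save the modified path for future sessions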

    The Command History Window contains a record of the commands a user has

    entered in the command window, including both current and previous MATLAB sessions.

    Previously entered MATLAB commands can be selected and re-executed from the

command history window by right-clicking on a command or sequence of commands. This action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

    Using the MATLAB Editor to create M-Files:

    The MATLAB editor is both a text editor specialized for creating M-files and a

    graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a

    sub window in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The


    MATLAB editor window has numerous pull-down menus for tasks such as saving,

    viewing, and debugging files. Because it performs some simple checks and also uses color

    to differentiate between various elements of code, this text editor is recommended as the

tool of choice for writing and editing M-functions. To open the editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory, or in a directory on the search path.

    Getting Help:

    The principal way to get help online is to use the MATLAB help browser, opened as

    a separate window either by clicking on the question mark symbol (?) on the desktop

toolbar, or by typing helpbrowser at the prompt in the command window. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the Help Navigator pane, used to find information, and the display pane, used to view it. Self-explanatory tabs in the Help Navigator pane are used to perform a search.
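Help on individual functions is also available directly at the prompt, for example:

    help wavedec2      % print short help text for a function in the command window
    doc wavedec2       % open the corresponding reference page in the Help Browser
    lookfor wavelet    % search the first help line of all M-files for a keyword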