Image Fusion Sati


    Abstract

Fusing information contained in multiple images plays an increasingly important role in quality inspection for industrial processes as well as in situation assessment for autonomous and assistance systems. The aim of image fusion in general is to use images as redundant or complementary sources and to extract information from them with higher accuracy or reliability. This dissertation describes image fusion in detail. It first introduces the three basic levels, namely pixel-level, feature-level and decision-level fusion, and compares their properties and other aspects. It then describes the evaluation criteria for image fusion results from two perspectives, subjective evaluation and objective evaluation. For the quantitative evaluation of fusion results and their quality, this text defines and uses multiple evaluation parameters, such as the entropy of the fused image, mutual information (MI), average gradient, standard deviation, cross-entropy, joint entropy, bias, relative bias, mean square error, root mean square error and peak signal-to-noise ratio (PSNR), and establishes the corresponding evaluation criteria.

Keywords: image fusion, wavelet transform, DCT, neural network, genetic algorithm

    Introduction

With the continuous development of sensor technology, people have more and more ways to obtain images, and the types of image fusion are also increasingly rich, such as fusion of images from the same sensor, multi-spectral image fusion from a single sensor, fusion of images from sensors of different types, and fusion of image and non-image data. Traditional data fusion can be divided into three levels: pixel-level fusion, feature-level fusion and decision-level fusion. The different fusion levels use different fusion algorithms and have different applications; pixel-level fusion is the most commonly researched. Classical fusion algorithms include computing the average pixel-by-pixel gray level of the source images, the Laplacian pyramid, the contrast pyramid, the ratio pyramid, and the Discrete Wavelet Transform (DWT). However, averaging the pixel-by-pixel gray levels of the source images leads to undesirable side effects such as contrast reduction.


The basic idea of DWT-based methods is to perform a decomposition of each source image and then combine all these decompositions into a composite representation, from which the fused image can be recovered by the inverse transform. This method has been shown to be effective. However, the wavelet transform can only reflect characteristics "through" an edge, and cannot express characteristics "along" an edge. At the same time, the wavelet transform cannot precisely capture edge direction since it is isotropic. To address this limitation of the wavelet transform, Donoho et al. proposed the Curvelet transform, which uses edges as basic elements, is well developed, and can adapt well to image characteristics. Moreover, the Curvelet transform is anisotropic, has better directional selectivity, and can provide more information for image processing [1-2]. From the principle of the Curvelet transform we know that, in addition to the multi-scale and local characteristics of the wavelet transform, the Curvelet transform has directional characteristics and its basis support satisfies an anisotropic scaling relation. The Curvelet transform can represent the edges and smooth areas of an image appropriately at the same precision under the inverse transform. After research on fusion algorithms for the low-frequency and high-frequency coefficients in the Curvelet transform, a scheme was proposed in which the low-frequency coefficients are fused with the NGMS method and the high-frequency coefficients of each direction are fused with the LREMS method.

Figure 1: Process of the image fusion algorithm based on the Curvelet transform

Fusion Methods


The following sections summarize several approaches to the pixel-level fusion of spatially registered input images. Most of these methods have been developed for the fusion of stationary input images (such as multispectral satellite imagery). Due to the static nature of the input data, temporal aspects arising in the fusion of image sequences, e.g. stability and consistency, are not addressed.

    A generic categorization of image fusion methods is the following:

    linear superposition

    nonlinear methods

    optimization approaches

    artificial neural networks

    image pyramids

    wavelet transform

    generic multiresolution fusion scheme

    Linear Superposition

Probably the most straightforward way to build a fused image from several input frames is to perform the fusion as a weighted superposition of all input frames. The optimal weighting coefficients, with respect to information content and redundancy removal, can be determined by a principal component analysis (PCA) of all input intensities. By performing a PCA of the covariance matrix of the input intensities, the weightings for each input frame are obtained from the eigenvector corresponding to the largest eigenvalue. A similar procedure is the linear combination of all inputs in a pre-chosen colorspace (e.g. R-G-B or H-S-V), leading to a false color representation of the fused image.
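As an illustration, here is a minimal NumPy sketch of this PCA weighting; the function name pca_fusion and the normalization of the dominant eigenvector into non-negative weights summing to one are our own assumptions, not prescribed by the text:

    import numpy as np

    def pca_fusion(images):
        # Stack each co-registered grayscale image as one row: (n_images, n_pixels).
        data = np.stack([img.ravel().astype(float) for img in images])
        cov = np.cov(data)                       # covariance matrix of input intensities
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        w = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
        w = np.abs(w) / np.abs(w).sum()          # normalize to weights that sum to 1
        # Fused frame = weighted superposition of all input frames.
        return sum(wi * img.astype(float) for wi, img in zip(w, images))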

    Nonlinear Methods

Another simple approach to image fusion is to build the fused image by the application of a simple nonlinear operator such as max or min. If the bright objects are of interest in all input images, a good choice is to compute the fused image by a pixel-by-pixel application of the maximum operator.


An extension of this approach follows from the introduction of morphological operators such as opening or closing. One application is the use of conditional morphological operators, with the definition of highly reliable 'core' features present in both images and a set of 'potential' features present in only one source, where the actual fusion is performed by the application of conditional erosion and dilation operators. A further extension of this approach is image algebra, a high-level algebraic extension of image morphology designed to describe all image processing operations. The basic types defined in image algebra are value sets, coordinate sets (which allow the integration of different resolutions and tessellations), images and templates. For each basic type, binary and unary operations are defined, ranging from basic set operations to more complex operations on images and templates. Image algebra has been used in a generic way to combine multisensor images.
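A hedged NumPy sketch of the pixel-wise max and min rules described above (a morphological variant would replace these with grey-scale opening or closing, e.g. from scipy.ndimage):

    import numpy as np

    def max_fusion(img_a, img_b):
        # Keep the brighter pixel from either source: appropriate when the
        # bright objects carry the information of interest in both inputs.
        return np.maximum(img_a, img_b)

    def min_fusion(img_a, img_b):
        # Dual rule: keep the darker pixel when dark objects are of interest.
        return np.minimum(img_a, img_b)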

    Optimization Approaches

In this approach to image fusion, the fusion task is expressed as a Bayesian optimization problem. Using the multisensor image data and an a priori model of the fusion result, the goal is to find the fused image which maximizes the a posteriori probability. Since this problem cannot be solved in general, some simplifications are introduced: all input images are modeled as Markov random fields to define an energy function which describes the fusion goal. Due to the equivalence of Gibbs random fields and Markov random fields, this energy function can be expressed as a sum of so-called clique potentials, where only pixels in a predefined neighborhood affect the actual pixel. The fusion task then consists of an optimization of this energy function. Since the energy function will in general be non-convex, stochastic optimization procedures such as simulated annealing, or modifications like iterated conditional modes, are typically used.
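The following toy sketch only illustrates the flavor of such energy-based fusion, under strong simplifying assumptions of our own: quadratic data and smoothness (clique) potentials, for which the pixel-wise update has a closed form. It is not the non-convex formulation the text refers to, which would need simulated annealing:

    import numpy as np

    def quadratic_mrf_fusion(images, lam=0.5, sweeps=10):
        # Minimizes sum_k (f - I_k)^2 + lam * (squared 4-neighbor differences)
        # by simple fixed-point sweeps over all pixels.
        imgs = [im.astype(float) for im in images]
        f = np.mean(imgs, axis=0)               # initialize with the mean image
        n = len(imgs)
        for _ in range(sweeps):
            padded = np.pad(f, 1, mode='edge')  # replicate borders
            nb_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:])
            f = (sum(imgs) + lam * nb_sum) / (n + 4 * lam)
        return f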

    Image Pyramids

Image pyramids were initially described for multiresolution image analysis and as a model for binocular fusion in human vision. A generic image pyramid is a sequence of images in which each image is constructed by low-pass filtering and subsampling its predecessor.


Due to the sampling, the image size is halved in both spatial directions at each level of the decomposition process, leading to a multiresolution signal representation. The difference between the input image and the filtered image is necessary to allow an exact reconstruction from the pyramidal representation. The image pyramid approach thus leads to a signal representation with two pyramids: the smoothing pyramid containing the averaged pixel values, and the difference pyramid containing the pixel differences, i.e. the edges. The difference pyramid can therefore be viewed as a multiresolution edge representation of the input image.

The actual fusion process can be described by a generic multiresolution fusion scheme which is applicable both to image pyramids and to the wavelet approach. There are several modifications of the generic pyramid construction method described above. Some authors propose the computation of nonlinear pyramids, such as the ratio and contrast pyramids, where the multiscale edge representation is computed by a pixel-by-pixel division of neighboring resolutions. A further modification is to substitute the linear filters by morphological nonlinear filters, resulting in the morphological pyramid. Another type of image pyramid, the gradient pyramid, results if the input image is decomposed into its directional edge representation using directional derivative filters.
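A minimal sketch of the two-pyramid construction, assuming NumPy and SciPy; the Gaussian filter, its sigma, and the simple pixel-replication upsampling are our own choices:

    import numpy as np
    from scipy import ndimage

    def build_pyramids(img, levels=4):
        # Smoothing pyramid: low-pass filter, then halve both directions.
        # Difference pyramid: what the coarser level cannot represent.
        smooth, diff = [img.astype(float)], []
        for _ in range(levels):
            low = ndimage.gaussian_filter(smooth[-1], sigma=1.0)
            sub = low[::2, ::2]
            up = np.repeat(np.repeat(sub, 2, 0), 2, 1)[:low.shape[0], :low.shape[1]]
            diff.append(smooth[-1] - ndimage.gaussian_filter(up, sigma=1.0))
            smooth.append(sub)
        return smooth, diff

    def reconstruct(top, diff):
        # Exact inverse of build_pyramids by construction.
        img = top
        for d in reversed(diff):
            up = np.repeat(np.repeat(img, 2, 0), 2, 1)[:d.shape[0], :d.shape[1]]
            img = d + ndimage.gaussian_filter(up, sigma=1.0)
        return img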

    Wavelet Transform

A signal analysis method similar to image pyramids is the discrete wavelet transform. The main difference is that while image pyramids lead to an overcomplete set of transform coefficients, the wavelet transform results in a nonredundant image representation. The discrete two-dimensional wavelet transform is computed by the recursive application of low-pass and high-pass filters in each direction of the input image (i.e. rows and columns), followed by subsampling. Details on this scheme can be found in the reference section. One major drawback of the wavelet transform when applied to image fusion is its well-known shift dependency: a simple shift of the input signal may lead to completely different transform coefficients. This results in inconsistent fused images when invoked in image sequence fusion. To overcome the shift dependency of the wavelet fusion scheme, the input images must be decomposed into a shift-invariant representation. There are several ways to achieve this. The straightforward way is to compute the wavelet transform for all possible circular shifts of the input signal.


In this case, not all shifts are necessary, and it is possible to develop an efficient computation scheme for the resulting wavelet representation. Another simple approach is to drop the subsampling in the decomposition process and instead modify the filters at each decomposition level, resulting in a highly redundant signal representation.
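For instance, with the PyWavelets package the undecimated (stationary) wavelet transform implements exactly this drop-the-subsampling idea; the placeholder image, the wavelet name and the level below are arbitrary choices of ours:

    import numpy as np
    import pywt  # PyWavelets

    image = np.zeros((256, 256))   # placeholder input; side lengths must be
                                   # divisible by 2**level for pywt.swt2
    # Shift-invariant, highly redundant decomposition: one approximation and
    # three detail subbands per level, all at full image size.
    coeffs = pywt.swt2(image, wavelet='db2', level=2)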

    The actual fusion process can be described by a generic multiresolution fusion

    scheme which is applicable both to image pyramids and the wavelet approach.

    Generic Multiresolution Fusion Scheme

The basic idea of the generic multiresolution fusion scheme is motivated by the fact that the human visual system is primarily sensitive to local contrast changes, i.e. edges. Motivated by this insight, and keeping in mind that both image pyramids and the wavelet transform result in a multiresolution edge representation, it is straightforward to build the fused image as a fused multiscale edge representation. The fusion process is summarized as follows: in the first step, the input images are decomposed into their multiscale edge representation, using either an image pyramid or a wavelet transform. The actual fusion then takes place in the difference (respectively wavelet) domain, where the fused multiscale representation is built by a pixel-by-pixel selection of the coefficients with maximum magnitude. Finally, the fused image is computed by applying the appropriate reconstruction scheme.

Fig. 2: Block diagram of the basic image fusion process
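A compact sketch of this select-maximum scheme using a wavelet decomposition (PyWavelets; the wavelet name and decomposition depth are our own choices):

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_max_fusion(img_a, img_b, wavelet='db2', level=3):
        # Decompose both inputs into their multiscale edge representation.
        ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
        cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
        # Pixel-by-pixel selection of the coefficient with maximum magnitude.
        keep = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused = [keep(ca[0], cb[0])]
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            fused.append((keep(ha, hb), keep(va, vb), keep(da, db)))
        # Reconstruct the fused image with the inverse transform.
        return pywt.waverec2(fused, wavelet)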


    Related work (Survey)

Low-frequency coefficient fusion algorithm

The Curvelet transform is close to the wavelet transform in the low-frequency region; the image components carrying the main energy determine the image contour, so the visual effect of the fused image can be enhanced by correctly selecting the low-frequency coefficients. Existing fusion rules mainly include the max pixel method, the min pixel method, averaging the pixel-by-pixel gray levels of the source images, the LREMS method, and the local region deviation method [6]. The max pixel method, the min pixel method and the averaging method do not take the correlation between local neighbors into account, so the fusion result is not ideal; the local region energy method and the deviation method do consider the correlation between local neighbors, but do not take image edges and definition into account. To address this shortcoming, the NGMS method was introduced; it mainly describes image detail and the degree of focus. The eight-neighborhood sum-of-Laplacian operator is adopted to evaluate image definition, defined as [9]:

$$ L(x, y) = \Big| \, 8 f(x, y) - \sum_{(m, n) \in N_8(x, y)} f(m, n) \, \Big| \qquad (1) $$

High-frequency coefficient fusion algorithm

The Curvelet transform has rich directional characteristics, so it can precisely express the orientation of image edges, and the high-frequency coefficient regions express the detail information of image edges. The pixel absolute maximum method, the LREMS method, the local region deviation method, the direction contrast method, etc. have been used for the high-frequency coefficients. Suppose the high-frequency coefficient of an image is CH; then the fusion algorithm is:


$$ C_{HF}(x, y) = \begin{cases} C_{HA}(x, y), & E_{CHA}(x, y) \ge E_{CHB}(x, y) \\ C_{HB}(x, y), & \text{otherwise} \end{cases} \qquad (2) $$

where CHA and CHB denote the Curvelet high-frequency coefficients of image A and image B, CHF(x, y) denotes the fused high-frequency coefficient at point (x, y), ECHA(x, y) denotes the local region energy of the Curvelet high-frequency coefficients of image A at point (x, y), and ECHB(x, y) denotes the local region energy of the Curvelet high-frequency coefficients of image B at point (x, y).
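A small sketch of the rule in Eq. (2), with the local region energy approximated by a windowed mean of squared coefficients; the window size and the use of scipy.ndimage.uniform_filter are our assumptions:

    import numpy as np
    from scipy import ndimage

    def energy_select(ch_a, ch_b, win=3):
        # Local region energy of each coefficient map: mean of squares over
        # a win x win neighborhood (proportional to the windowed sum).
        e_a = ndimage.uniform_filter(ch_a.astype(float) ** 2, size=win)
        e_b = ndimage.uniform_filter(ch_b.astype(float) ** 2, size=win)
        # Keep, at every point, the coefficient with the larger region energy.
        return np.where(e_a >= e_b, ch_a, ch_b)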

Image fusion at different levels

    Pixel-level fusion

Pixel-level fusion operates on the raw data layer under strict registration conditions, carrying out data integration and analysis on the raw data of the various sensors before they are pre-processed. Pixel-level image fusion is the lowest level of image fusion; it keeps as much of the raw data as possible in order to provide rich and accurate image information that other fusion levels cannot provide, so that the image is easier to analyze and process, for tasks such as image fusion, image segmentation and feature extraction. The pixel-level image fusion structure is shown in figure 2.

The images participating in the fusion may come from multiple image sensors of different types, or from a single image sensor. The various images provided by a single image sensor may come from different observation times or spaces (perspectives), or may be images with different spectral characteristics at the same time or space.


The image resulting from pixel-level fusion contains much richer and more accurate information, which is conducive to the analysis and processing of the image signal, makes observation easier for people, and is more suitable for computer detection and processing; it is the most important and most fundamental multi-sensor image fusion method. The advantage of pixel-level image fusion is the minimal loss of information, but it has the largest amount of information to process, the slowest processing speed, and higher demands on equipment.

    Feature-level fusion

Feature-level fusion is the intermediate level. It carries out feature extraction (features can be target edges, direction, speed, etc.) on the original information from the various sensors, and then comprehensively analyzes and processes the feature information, as shown in the figure.

In general, the extracted feature information should be a sufficient statistic of the pixel information; the multisensor data are then classified, collected and integrated according to the feature information. If the data obtained by the sensor are image data, the features are abstractly extracted from the image pixel information.


Typical feature information includes line type, edges, texture, spectrum, areas of similar brightness and areas of similar depth of field; multi-sensor image feature integration and classification are then achieved. The advantage of feature-level fusion is that it achieves considerable compression of information, which is conducive to real-time processing, and its fusion results give, to the greatest extent, the feature information needed for decision analysis, because the extracted features are directly related to the decision analysis.

Improved IHS-based fusion

The basic idea of the IHS fusion method is to convert a color image from the RGB (Red, Green, Blue) color space into the IHS (Intensity, Hue, Saturation) color space. Once the intensity information of both images has been obtained, the intensity of one image is replaced by that of the other. The IHS representation, with the H and S of the replaced image, is then converted back into the RGB color space. The procedure is as follows.

Step 1: Transform the color space from RGB to IHS:

$$ \begin{pmatrix} I_v \\ V_1 \\ V_2 \end{pmatrix} = \begin{pmatrix} 1/3 & 1/3 & 1/3 \\ -\sqrt{2}/6 & -\sqrt{2}/6 & 2\sqrt{2}/6 \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}, \quad H = \tan^{-1}\!\frac{V_2}{V_1}, \quad S = \sqrt{V_1^2 + V_2^2} \qquad (3) $$

where Iv is the intensity of the visual image and R, G, B are its color components. V1 and V2 are the components used to calculate the hue H and the saturation S.

Step 2: The intensity component is replaced by the intensity of the infrared image, Ii.

Step 3: Transform the color space from IHS back to RGB:

    10

  • 8/3/2019 Image Fusion Sati

    11/19

$$ \begin{pmatrix} R' \\ G' \\ B' \end{pmatrix} = \begin{pmatrix} 1 & -1/\sqrt{2} & 1/\sqrt{2} \\ 1 & -1/\sqrt{2} & -1/\sqrt{2} \\ 1 & \sqrt{2} & 0 \end{pmatrix} \begin{pmatrix} I_i \\ V_1 \\ V_2 \end{pmatrix} \qquad (4) $$

where Ii is the intensity of the infrared image and R', G', B' are the color components of the fused image. Because the basic idea is to add useful information from the far-infrared image to the visual image, we set fused parameters in the matrix instead of simply substituting the intensity of the far-infrared image Ii for the intensity of the visual image Iv. The fused parameters are adjusted according to the information of each region. The modified result is:

$$ \begin{pmatrix} R' \\ G' \\ B' \end{pmatrix} = \begin{pmatrix} 1 & -1/\sqrt{2} & 1/\sqrt{2} \\ 1 & -1/\sqrt{2} & -1/\sqrt{2} \\ 1 & \sqrt{2} & 0 \end{pmatrix} \begin{pmatrix} \alpha I_v + \beta I_i \\ V_1 \\ V_2 \end{pmatrix} \qquad (5) $$

where $\alpha$ and $\beta$ are the fused parameters, $0 \le \alpha, \beta \le 1$.
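A NumPy sketch of Steps 1-3 with the fused parameters of Eq. (5); the matrices are the ones reconstructed above, and the array shapes and the default values of alpha and beta are our own assumptions:

    import numpy as np

    FWD = np.array([[1/3, 1/3, 1/3],
                    [-np.sqrt(2)/6, -np.sqrt(2)/6, 2*np.sqrt(2)/6],
                    [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])
    INV = np.array([[1.0, -1/np.sqrt(2),  1/np.sqrt(2)],
                    [1.0, -1/np.sqrt(2), -1/np.sqrt(2)],
                    [1.0,  np.sqrt(2),    0.0]])

    def ihs_fusion(rgb, ir, alpha=0.5, beta=0.5):
        # rgb: (H, W, 3) visual image, ir: (H, W) infrared intensity.
        ivv = rgb @ FWD.T                              # (Iv, V1, V2) per pixel
        ivv[..., 0] = alpha * ivv[..., 0] + beta * ir  # Eq. (5): fused intensity
        return ivv @ INV.T                             # back to (R', G', B')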

    Artificial neural network

An artificial neural network (ANN) is well suited to estimating the relation between input and output when that relation is unknown, especially when it is nonlinear. Generally speaking, using an ANN is divided into two parts: training and testing. During training, we have to define the training data and the relational parameters. In testing, we define the testing data and then obtain the fused parameters. An ANN has a good ability to learn from examples and to extract the statistical properties of the examples during the training procedure. Feature extraction is an important pre-processing step for the ANN. In our case, we choose four features: the average intensity of the visual image Mv, the average intensity of the infrared image Mi, the average intensity of a region in the infrared image Mir,


and the visibility Vi, which serve as the inputs of the ANN. The features are introduced as follows. The average intensity of the visual image, Mv, is:

$$ M_v = \frac{1}{H \cdot W} \sum_{x=1}^{H} \sum_{y=1}^{W} f_v(x, y) \qquad (6) $$

where fv is the visual gray image and H and W are the height and width of the visual image. Generally speaking, a larger Mv probably means the image was shot in the daytime; otherwise, it was shot at night. But this is an initial assumption, not an accurate one. The average intensity Mi is defined analogously as

$$ M_i = \frac{1}{H \cdot W} \sum_{x=1}^{H} \sum_{y=1}^{W} f_i(x, y) $$

where fi is the infrared image and H and W are the height and width of the image. Generally speaking, a larger Mi probably means the image was shot in the daytime; otherwise, it was shot at night. Considering Mv and Mi together, we can assume a night shot when Mv and Mi are both large or both small. If Mi is larger and Mv is smaller, we can suppose that the highlights of the infrared image could be useful information for us. If Mi is smaller and Mv is larger, we can suppose that the infrared image contains no useful information to add to the visual image. The average intensity Mir of a region in the infrared image is defined analogously over the pixels of that region. Once the four features are available, we can define the training data and testing data. Fig. 2 shows one of our training samples: the visual image, the infrared image and the segmented infrared image, from left to right. We segment only the infrared image here, and use color depth to represent each region; there are five levels representing five regions. Table I integrates the features of each region of the segmented infrared image; regions 1 to 5 correspond to color levels from deep to shallow, and each region has the four features.
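As a hedged sketch of the training/testing procedure described here, one could fit a small multilayer perceptron mapping the four features (Mv, Mi, Mir, Vi) to the fused parameters; the feature and target numbers below are placeholders of ours, not values from the paper:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Each training row holds the four region features (Mv, Mi, Mir, Vi);
    # each target row holds the fused parameters (alpha, beta).
    X_train = np.array([[0.62, 0.30, 0.41, 0.55],
                        [0.18, 0.71, 0.80, 0.23],
                        [0.50, 0.52, 0.47, 0.60]])
    y_train = np.array([[0.8, 0.2],
                        [0.3, 0.7],
                        [0.5, 0.5]])

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X_train, y_train)

    # Testing: features of a new region in, fused parameters out.
    alpha, beta = net.predict(np.array([[0.40, 0.60, 0.65, 0.40]]))[0]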

    Proposed Technique for Image Fusion


Various methods are available for image fusion, such as wavelet, DCT and WCT trained pixel processing using a neural network, but none of these are efficient at properly matching pixels during property matching. We therefore propose a new method for image fusion using a metaheuristic, the genetic algorithm. A genetic algorithm is a heuristic procedure used to optimize the result produced by a process or algorithm.

    Genetic algorithm

Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. As such, they represent an intelligent exploitation of a random search used to solve optimization problems. Although randomized, GAs are by no means random; instead, they exploit historical information to direct the search into regions of better performance within the search space. The basic techniques of GAs are designed to simulate processes in natural systems necessary for evolution, especially those that follow the principles first laid down by Charles Darwin of "survival of the fittest", since in nature competition among individuals for scanty resources results in the fittest individuals dominating the weaker ones. GAs are more robust than conventional AI: unlike older AI systems, they do not break easily when the inputs change slightly or in the presence of reasonable noise. Also, in searching a large state space, a multi-modal state space, or an n-dimensional surface, a genetic algorithm may offer significant benefits over more typical search and optimization techniques (linear programming, heuristic, depth-first, breadth-first, and praxis). GAs simulate the survival of the fittest among individuals over consecutive generations to solve a problem. Each generation consists of a population of character strings, analogous to the chromosomes we see in our DNA. Each individual represents a point in the search space and a possible solution. The individuals in the population are then made to go through a process of evolution. GAs are based on an analogy with the genetic structure and behaviour of chromosomes within a population of individuals, resting on the following foundations:

    Individuals in a population compete for resources and mates.


Those individuals most successful in each 'competition' will produce more offspring than those that perform poorly.

Genes from 'good' individuals propagate throughout the population, so that two good parents will sometimes produce offspring better than either parent.

Thus each successive generation becomes more suited to its environment.

    Search Space

A population of individuals is maintained within the search space of a GA, each representing a possible solution to a given problem. Each individual is coded as a finite-length vector of components, or variables, in terms of some alphabet, usually the binary alphabet {0, 1}. To continue the genetic analogy, these individuals are likened to chromosomes and the variables are analogous to genes; thus a chromosome (solution) is composed of several genes (variables). A fitness score is assigned to each solution, representing the ability of an individual to 'compete'. The individual with the optimal (or generally near-optimal) fitness score is sought. The GA aims to use selective 'breeding' of the solutions to produce 'offspring' better than the parents by combining information from the chromosomes. The GA maintains a population of n chromosomes (solutions) with associated fitness values. Parents are selected to mate on the basis of their fitness, producing offspring via a reproductive plan. Consequently, highly fit solutions are given more opportunities to reproduce, so that offspring inherit characteristics from each parent. As parents mate and produce offspring, room must be made for the new arrivals, since the population is kept at a static size. Individuals in the population die and are replaced by the new solutions, eventually creating a new generation once all mating opportunities in the old population have been exhausted. In this way, it is hoped that over successive generations better solutions will thrive while the least fit die out. New generations of solutions are produced containing, on average, more good genes than a typical solution in a previous generation; each successive generation contains more good 'partial solutions' than previous generations. Eventually, once the population has converged and is no longer producing offspring noticeably different from those of previous generations, the algorithm itself is said to have converged to a set of solutions to the problem at hand.


    Based on Natural Selection

After an initial population is randomly generated, the algorithm evolves through three operators:

    1. selection which equates to survival of the fittest;

    2. crossover which represents mating between individuals;

    3. mutation which introduces random modifications.

1. Selection Operator

Key idea: give preference to better individuals, allowing them to pass on their genes to the next generation.

The goodness of each individual depends on its fitness. Fitness may be determined by an objective function or by subjective judgement.

2. Crossover Operator

The prime factor distinguishing GAs from other optimization techniques.

Two individuals are chosen from the population using the selection operator.

A crossover site along the bit strings is randomly chosen, and the values of the two strings are exchanged up to this point. For example, if S1 = 000000 and S2 = 111111 and the crossover point is 2, then S1' = 110000 and S2' = 001111.

The two new offspring created from this mating are put into the next generation of the population.

By recombining portions of good individuals, this process is likely to create even better individuals.


3. Mutation Operator

With some low probability, a portion of the new individuals will have some of their bits flipped.

Its purpose is to maintain diversity within the population and inhibit premature convergence.

Mutation alone induces a random walk through the search space.

Mutation and selection (without crossover) create a parallel, noise-tolerant, hill-climbing algorithm.

Effects of Genetic Operators

Using selection alone will tend to fill the population with copies of the best individual.

Using selection and crossover will tend to cause the algorithm to converge on a good but sub-optimal solution.

Using mutation alone induces a random walk through the search space.

Using selection and mutation creates a parallel, noise-tolerant, hill-climbing algorithm.

The Algorithm

1. Randomly initialize population(t).
2. Determine the fitness of population(t).
3. Repeat:
   a. Select parents from population(t).
   b. Perform crossover on the parents, creating population(t+1).
   c. Perform mutation on population(t+1).
   d. Determine the fitness of population(t+1).
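A self-contained Python version of this loop; tournament selection, one-point crossover and bit-flip mutation are our concrete choices, and the one-max fitness at the end is only a toy objective:

    import random

    def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                          p_cross=0.9, p_mut=0.01):
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            def select():                      # tournament selection of size 2
                a, b = random.sample(pop, 2)
                return list(a if fitness(a) >= fitness(b) else b)
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = select(), select()
                if random.random() < p_cross:  # one-point crossover
                    cut = random.randrange(1, n_bits)
                    p1[cut:], p2[cut:] = p2[cut:], p1[cut:]
                for child in (p1, p2):         # bit-flip mutation
                    for i in range(n_bits):
                        if random.random() < p_mut:
                            child[i] ^= 1
                    nxt.append(child)
            pop = nxt[:pop_size]
            best = max(pop + [best], key=fitness)
        return best

    # Toy example: maximize the number of ones in the bit string.
    print(genetic_algorithm(fitness=sum))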


    Conclusion

This dissertation describes an application of the genetic algorithm to the image fusion problem. We improve on the traditional IHS method, wavelet method, NN method and pattern matching method, and add the concept of region-based processing to image fusion. The aim is that different regions can use different parameters in different states of time or weather. Because the relation between the environment and the fused parameters is nonlinear, we adopt an artificial neural network to solve this problem. On the other hand, the fused parameters are estimated automatically, which lets us obtain an adaptive appearance in different states. The proposed architecture is not only useful for many applications but also adaptable to many kinds of fields. In the next semester we will implement this entire concept in MATLAB.


    REFERENCES

[1] Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1391-1402, June 2006.

[2] J. G. Liu, "Smoothing filter-based intensity modulation: a spectral preserve image fusion technique for improving spatial details," Int. J. Remote Sensing, vol. 21, no. 18, pp. 3461-3472, 2000.

[3] M. Li, W. Cai, and Z. Tan, "A region-based multi-sensor image fusion scheme using pulse-coupled neural network," Pattern Recognition Letters, vol. 27, pp. 1948-1956, 2006.

[4] L. J. Guo and J. M. Moore, "Pixel block intensity modulation: adding spatial detail to TM band 6 thermal imagery," Int. J. Remote Sensing, vol. 19, no. 13, pp. 2477-2491, 1998.

[5] P. S. Chavez and J. A. Bowell, "Comparison of the spectral information content of Landsat Thematic Mapper and SPOT for three different sites in the Phoenix, Arizona region," Photogramm. Eng. Remote Sensing, vol. 54, no. 12, pp. 1699-1708, 1988.

[6] A. R. Gillespie, A. B. Kahle, and R. E. Walker, "Color enhancement of highly correlated images II: channel ratio and chromaticity transformation techniques," Remote Sensing of Environment, vol. 22, pp. 343-365, 1987.

[7] J. Sun, J. Li, and J. Li, "Multi-source remote sensing image fusion," Int. J. Remote Sensing, vol. 2, no. 1, pp. 323-328, Feb. 1998.

[8] W. J. Carper, T. M. Lillesand, and R. W. Kiefer, "The use of Intensity-Hue-Saturation transformation for merging SPOT panchromatic and multispectral image data," Photogramm. Eng. Remote Sensing, vol. 56, no. 4, pp. 459-467, 1990.


[9] K. Edwards and P. A. Davis, "The use of Intensity-Hue-Saturation transformation for producing color shaded-relief images," Photogramm. Eng. Remote Sensing, vol. 60, no. 11, pp. 1369-1374, 1994.

[10] E. M. Schetselaar, "Fusion by the IHS transform: should we use cylindrical or spherical coordinates?," Int. J. Remote Sensing, vol. 19, no. 4, pp. 759-765, 1998.

[11] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," Int. J. Remote Sensing, vol. 19, no. 4, pp. 743-757, 1998.

[12] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, pp. 985-997, 2002.

[13] Q. Yuan, C. Y. Dong, and Q. Wang, "An adaptive fusion algorithm based on ANFIS for radar/infrared system," Expert Systems with Applications, vol. 36, pp. 111-120, 2009.
