Color Image Restoration Based on Dynamic Recurrent RBF Neural Network

Hongwei Ge, Weinan Yang
School of Information, Southern Yangtze University
Wuxi, Jiangsu, People's Republic of China
Email: [email protected]

Abstract

A kind of nearest neighbor classification (NNC) based on a dynamic recurrent RBF neural network is used to restore color images. It combines an MLP and an RBF neural network, and allows for explicit representation of prototype patterns as network parameters. The system structure is self-adaptive, and prototypes can be added or removed freely. The dynamic classification implemented by the network eliminates all comparisons, which are the vital steps of the conventional NNC process. Results of image de-noising show the high performance of this model. Moreover, the new model also provides excellent robustness with respect to various percentages of noise in our testing examples.

1. Introduction

Digital images often suffer degradation when they are acquired and processed. The goal of noise removal is to suppress the noise while preserving the integrity of the edge and detail information associated with the original image. Noise removal methods can be divided into two classes, time-domain and frequency-domain methods, such as the median filter, the self-adaptive median filter [1], neural networks [2], and wavelet analysis [3].

The radial basis function (RBF) network is a special type of network with several distinctive features [4]. It has been widely applied in diverse fields, such as pattern recognition, function approximation, and data clustering. Recurrent RBF networks have been applied successfully to areas that require rapid and accurate data interpolation, such as adaptive control and noise cancellation [5]. In this paper, a dynamic recurrent neural network is applied to color image restoration; it employs a hybrid of two algebraic networks, namely a radial basis function (RBF) network and a multi-layer perceptron (MLP) network. The experimental results show that this method can effectively suppress noise while preserving image details.

2. The structure of RBF networks and NNC

RBF networks have traditionally been associated with a simple architecture of three layers. Each of the input components feeds forward to the radial functions. The hidden layer is composed of a number of nodes with radial activation functions called radial basis functions. The outputs of these functions are linearly combined with weights into the network output. Each radial function has a local response, since its output depends only on the distance of the input from a center point.

Radial functions in the hidden layer have a structure that can be represented as follows:

$$\phi_i(x) = \varphi\big((x - c_i)^T R^{-1} (x - c_i)\big) \qquad (1)$$

where $\varphi$ is the radial function used, $\{c_i,\ i = 1, 2, \ldots, m\}$ is the set of radial-function centers, and $R$ is a metric. The term $(x - c_i)^T R^{-1} (x - c_i)$ denotes the distance from the input $x$ to the center $c_i$ in the metric defined by $R$. Several types of radial function are in common use, though the Gaussian is the most typical choice, combined with the Euclidean metric. In this case, the output of the RBF network is:

$$F(x) = w_0 + \sum_{i=1}^{m} w_i \exp\!\left(-\frac{\|x - c_i\|_2^2}{\gamma}\right) \qquad (2)$$

where $m$ is the number of basis functions, $\{w_i,\ i = 1, \ldots, m\}$ are the synaptic weights, $\|\cdot\|_2$ denotes the Euclidean norm, and $\gamma$ is the width parameter of the radial function.
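As a concrete illustration, the following minimal NumPy sketch (the function name and array layout are our own choices, not taken from the paper) evaluates Eq. (2) for a single input vector:

```python
import numpy as np

def rbf_output(x, centers, weights, w0, gamma):
    """Evaluate Eq. (2): F(x) = w0 + sum_i w_i * exp(-||x - c_i||^2 / gamma).

    x       : (n,) input vector
    centers : (m, n) array of radial-function centers c_i
    weights : (m,) synaptic weights w_i
    w0      : scalar bias weight
    gamma   : width parameter of the Gaussians
    """
    sq_dists = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 for each i
    return w0 + weights @ np.exp(-sq_dists / gamma)
```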

Nearest neighbor classification (NNC) is the problem of evaluating the association map

$$y^* = f(z) = \arg\min_{y \in M} d(z, y) \qquad (3)$$

defined on a pattern space $P$, where $M \subseteq P$ is a finite set of prototype patterns and $d(\cdot,\cdot)$ is a metric on $P$. A straightforward way of evaluating Eq. (3) exactly for any given instance $z \in P$, $M \subseteq P$, requires computing an array of $m = |M|$ distances from $z$ to each $y \in M$, then obtaining the index of the minimum element through comparisons, and finally extracting the pattern $y^* \in M$ associated with the resulting index.
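For reference, a brute-force sketch of the comparison-based procedure just described (Euclidean metric assumed; all names are ours):

```python
import numpy as np

def nearest_neighbor(z, prototypes):
    """Evaluate Eq. (3) directly: m distance computations, then a
    comparison-based arg min, then extraction of the winning prototype.

    z          : (n,) distorted input pattern
    prototypes : (m, n) array holding the finite prototype set M
    """
    dists = np.linalg.norm(prototypes - z, axis=1)  # d(z, y) for each y in M
    return prototypes[np.argmin(dists)]             # y* at the minimum index
```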

3. Dynamic recurrent RBF network model

In this section, we propose a dynamic recurrent neural network model with adaptive structure that qualifies as an NNC. Each RBF node in the model inserts into the state space an open basin of attraction around its center, thus


introducing a stable fixed point, whereas the MLP is utilized merely as a multiplier. The dynamics of the model are defined in the bounded state space $[0,1]^n$, such that they maximize an associated scalar energy function, which has the form of a sum of Gaussian functions.

3.1. Gradient system model

A single RBF $\phi(\cdot)$ centered at $c$ is viewed here as a continuous function to be maximized by the gradient system

$$\dot{x}(t) = \nabla_x \phi(x)\big|_{x = x(t)} = \frac{2}{\gamma}\big(c - x(t)\big)\,\phi(x(t)) \qquad (4)$$

Such convergence to the unique fixed point $c$ from all initial conditions $x(0) \in \mathbb{R}^n$ is interpreted as a trivial instance of NNC for $M = \{c\}$.

Given $m$ prototypes $M = \{p_i\}_{i=1}^{m}$, there are $m$ distinct energy functions $\{\phi_i(\cdot)\}_{i=1}^{m}$ centered at $c_i = p_i$. These energy forms yield $m$ separate dynamical classifiers of the form (4). In order to obtain a single state equation with stable fixed points at the prototypes, we sum the right-hand sides of the individual equations:

$$\dot{x}(t) = \frac{2}{\gamma} \sum_{i=1}^{m} \big(c_i - x(t)\big)\,\phi_i(x(t)) \qquad (5)$$

The resulting state equation (5) defines a gradient system, which now maximizes the continuous and bounded energy function

$$E(x) = \sum_{i=1}^{m} \phi_i(x) \qquad (6)$$

This guarantees that the continuous-time system (5) is globally convergent, by Lyapunov's direct method, and that its fixed points are the extremum points of (6).

The unit hypercube as a state space has nice features: it offers a clearer analysis and a canonical implementation. We therefore focus on the transformed classification problem by assuming that $M \subset [0,1]^n$. The NNC problem, considered initially on $P = \mathbb{R}^n$, is equivalent to the one posed within the unit hypercube $[0,1]^n$ through an objective transform, which maps the prototypes and the distorted pattern into $[0,1]^n$ in a distance-preserving way. If the given prototypes $M = \{p_i\}_{i=1}^{m}$ are not already contained within the unit hypercube, then a linear contraction that scales them by the maximum entry $\max_{i,j} (p_i)_j$ among all prototypes transforms the problem to $[0,1]^n$. The contracted prototypes reduce the state space of (5) to $[0,1]^n$.
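A minimal sketch of this contraction (assuming non-negative pattern entries, e.g. pixel intensities, so that dividing by the largest prototype entry maps everything into $[0,1]^n$; the function name is ours):

```python
import numpy as np

def contract_to_unit_hypercube(prototypes, pattern):
    """Scale prototypes and the distorted pattern into [0,1]^n by the
    maximum entry max_{i,j} (p_i)_j; all distances are scaled by the
    same factor, so the arg min of Eq. (3) is preserved."""
    scale = prototypes.max()
    return prototypes / scale, pattern / scale
```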

3.2. Optimal width parameter γ and scale coefficient K

The disadvantage of picking a large width parameter is the following: a larger $\gamma$ not only corrupts the partitioning realized by (5) and increases the distortion on the prototypes, but may also cause some prototypes to disappear completely from the energy landscape. A smaller $\gamma$ approximates the ideal partition more closely and represents the prototypes more accurately; however, it has an adverse effect on system performance, as it decreases the magnitude of the gradient exponentially. In this case, the dynamic classifier may converge too slowly when handling distorted input patterns dissimilar to the prototypes. The scale coefficient $K$ is therefore introduced to increase the convergence speed of the gradient system.

In addition, the width parameters of the Gaussians must be picked equal for the sake of classification performance, even though the definition of the energy function (6) potentially allows an independent width $\gamma_i$ for each $\phi_i(\cdot)$, $i = 1, \ldots, m$. The derivation is as follows. The ideal NNC creates $m$ distinct partitions on the pattern space, defined by

$$\Re_i = \{x \in [0,1]^n : d(x, p_i) < d(x, p_j),\ j = 1, \ldots, m,\ j \neq i\} \qquad (7)$$

Due to the triangle inequality in the definition of the norm-induced metric $d(\cdot,\cdot)$, $\Re_i$ is a convex subset of $[0,1]^n$ bounded by the hyperplanes

$$\Omega_{i,j} = \{x \in [0,1]^n : \|x - p_i\| = \|x - p_j\|\},\quad j \in I_i \qquad (8)$$

where $I_i$ denotes the index set of the partitions adjacent to $\Re_i$. To achieve correct NNC for all points in $[0,1]^n$, each basin of attraction must be made equal to the corresponding ideal partition $\Re_i$. If the RBFs other than $\phi_i(\cdot)$ and $\phi_j(\cdot)$ are neglected along $\Omega_{i,j}$, then the ideal partitioning among distinct basins of attraction occurs if and only if

$$\|x - c_i\| = \|x - c_j\| \Leftrightarrow \phi_i(x) = \phi_j(x), \quad \forall x \in \Omega_{i,j} \qquad (9)$$

This condition is equivalent to $\gamma_i = \gamma_j$. Since it applies to all prototype pairs, we have

$$\gamma_1 = \gamma_2 = \cdots = \gamma_m = \gamma \qquad (10)$$

All qualitative features of a time-invariant dynamical system are invariant under scaling the right-hand side of its state representation by a constant $K > 0$. Such a scaling corresponds to multiplying each RBF by $K$, thus switching to the energy form $K \cdot E(\cdot)$, which has exactly the same extremum points as the original $E(\cdot)$. The effect of $K$, on the other hand, is on the convergence speed only, as it scales the magnitude of the gradient to $K \nabla E(x)$. Consequently, for $K > 1$, the dynamics

$$\dot{x}(t) = \frac{2K}{\gamma} \sum_{i=1}^{m} \big(c_i - x(t)\big)\,\phi_i(x(t)) \qquad (11)$$

would perform faster than the original one (5).
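The paper does not state how the continuous-time system is discretized; the sketch below integrates (11) with a simple forward-Euler rule (step size, iteration count, and stopping tolerance are illustrative assumptions of ours):

```python
import numpy as np

def dynamic_nnc(x0, centers, gamma=0.2, K=1.0, dt=1e-3, steps=10000, tol=1e-8):
    """Forward-Euler integration of Eq. (11):
        dx/dt = (2K / gamma) * sum_i (c_i - x) * phi_i(x),
    with phi_i(x) = exp(-||x - c_i||^2 / gamma). The state converges to
    the prototype whose basin of attraction contains x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / gamma)  # RBF outputs
        dx = (2.0 * K / gamma) * ((centers - x).T @ phi)           # rhs of (11)
        x += dt * dx
        if np.linalg.norm(dx) < tol:   # effectively at a fixed point
            break
    return x
```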

3.3. Dynamic recurrent RBF network model that performs NNC

Having addressed the gradient system and its parameters, we now outline the network model of the gradient system (11). We begin by rearranging the state representation (11) as:

$$\dot{x}(t) = -\frac{2K}{\gamma}\left[\,x(t)\sum_{i=1}^{m}\exp\!\left(-\frac{\|x(t)-c_i\|^2}{\gamma}\right) - \sum_{i=1}^{m} c_i\,\exp\!\left(-\frac{\|x(t)-c_i\|^2}{\gamma}\right)\right] \qquad (12)$$

The block diagram of the gradient model is shown in Fig. 1. It has a feedforward path and a feedback one. The feedforward path employs a hybrid of an RBF network and an MLP network realizing the right-hand side of (12). The feedback path, provided via $n$ integrators, implements the dynamic feedback.

Figure 1. Dynamic recurrent RBF network model

Note that the form (12) requires one straight and $n$ weighted summations of the RBFs, which can be achieved by an RBF network with $n + 1$ output nodes. This network employs $m$ RBF nodes to compute the vector

$$r = \left[\exp\!\left(-\frac{\|x - c_1\|^2}{\gamma}\right), \ldots, \exp\!\left(-\frac{\|x - c_m\|^2}{\gamma}\right)\right]^T$$

The output node computing the straight sum is a single summing unit labeled $s$, whereas the remaining $n$ output nodes perform the matrix multiplication $w \cdot r$, where $w = [c_1, \ldots, c_m]$. To realize the first term within the brackets in (12), $n$ additional blocks are required in order to multiply the state vector $x(t)$ by the output of node $s$.
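A sketch of this feedforward RBF stage (array layout and names are our own; `centers` stacks $c_1, \ldots, c_m$ row-wise, so $w = [c_1, \ldots, c_m]$ corresponds to its transpose):

```python
import numpy as np

def rbf_stage(x, centers, gamma):
    """Compute the (n+1) RBF-network outputs used in Eq. (12): the
    straight sum s = sum_i phi_i(x) from node s, and the n weighted
    sums w @ r with w = [c_1, ..., c_m]."""
    r = np.exp(-np.sum((centers - x) ** 2, axis=1) / gamma)  # vector r, shape (m,)
    s = r.sum()               # straight sum; multiplies x(t) in the first term
    weighted = centers.T @ r  # w @ r = sum_i c_i * phi_i(x), shape (n,)
    return s, weighted

# The right-hand side of (12) is then -(2 * K / gamma) * (x * s - weighted).
```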

To achieve multiplication of two arbitrary scalar variables $a, b \in [0,1]$, we consider the two-layer perceptron network shown in Fig. 2, which utilizes four sigmoidal nodes and a linear one. The activation functions of the first-layer nodes are all $\tanh(\cdot)$.

Figure 2. MLP network model

Finally, a single linear layer subtracts the second term from the first one and scales the outcome by the constant $-2K/\gamma$.

To determine the parameters of this subnetwork to perform multiplication, we generated the training set

$$\big\{\big([x_1, x_2]^T,\ \varphi = x_1 \cdot x_2\big) : x_1 = 10^{-2}k,\ x_2 = 10^{-2}l,\ k, l = 0, \ldots, 100\big\}$$

The classical backpropagation training algorithm with adaptive step size and random initial conditions was performed. The training results after 2000 epochs are shown in Table 1 ($m = 1$ for now).
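The grid of this training set can be generated as follows (how the authors ordered or batched the $101 \times 101$ samples is not stated):

```python
import numpy as np

# Inputs (x1, x2) with x1 = k/100, x2 = l/100 for k, l = 0, ..., 100,
# paired with the target product x1 * x2.
grid = np.arange(101) / 100.0
x1, x2 = np.meshgrid(grid, grid)
inputs = np.stack([x1.ravel(), x2.ravel()], axis=1)  # shape (10201, 2)
targets = (x1 * x2).ravel()                          # shape (10201,)
```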

Table 1. Parameters of the MLP network

    Parameter   Value           Parameter   Value        Parameter   Value
    v11         0.13204         v41         12.52180     w1          69.52540m
    v12         -0.14537/m      v42         0.21656/m    w2          63.61120m
    v21         0.08676         θ1          -0.79622     w3          -81.75210m
    v22         -49.565230/m    θ2          -5.56220     w4          0.00322m
    v31         0.05429         θ3          0.69874      ρ           -71.4552m
    v32         -0.03215/m      θ4          6.36672
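As a check of these values, the following sketch evaluates the trained multiplier for $m = 1$. The wiring is our reading of Fig. 2 (first-layer node $i$ computes $\tanh(v_{i1} a + v_{i2} b + \theta_i)$; the linear output node adds $\rho$ to the $w_i$-weighted node outputs), so treat it as an assumption rather than a statement from the paper:

```python
import numpy as np

# Table 1 parameters with m = 1 (the /m and *m factors are then identity).
v1    = np.array([0.13204, 0.08676, 0.05429, 12.52180])       # v11, v21, v31, v41
v2    = np.array([-0.14537, -49.565230, -0.03215, 0.21656])   # v12, v22, v32, v42
theta = np.array([-0.79622, -5.56220, 0.69874, 6.36672])      # biases of tanh nodes
w     = np.array([69.52540, 63.61120, -81.75210, 0.00322])    # w1, ..., w4
rho   = -71.4552                                              # output bias

def approx_product(a, b):
    """Two-layer perceptron approximation of a*b for a, b in [0, 1]."""
    return rho + w @ np.tanh(v1 * a + v2 * b + theta)
```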

Note that each state variable is constrained within [0, 1]. However, the sum of $m$ Gaussians is in general not within this bound, though it certainly lies in $[0, m]$; thus, it can be applied only after being scaled by $1/m$, and the output should then be scaled by $m$ in order to realize the first term. These two modifications can be achieved by scaling the input-layer weights $v_{i2}$ ($i = 1, \ldots, 4$) and the output-layer parameters $w_1, \ldots, w_4, \rho$ by $1/m$ and $m$, respectively.

4. Experimental results

Several experiments were organized to demonstrate the color image restoration performance of the dynamic recurrent neural network. These experiments were conducted on a variety of $512 \times 512$ benchmark images to compare the performance of the proposed method with a number of existing noise removal techniques. Here, we adopt the peak signal-to-noise ratio (PSNR) and mean absolute error (MAE) criteria to measure the image restoration performance. The experimental steps can be described as follows:


Figure 3. PSNR varied with width parameter

(1) Input the corrupted image, separate out its R, G, B channel components, and represent them as three matrices by mapping each pixel intensity linearly to the interval [0, 1]. Using these matrices, we generate the prototype sets.

(2) Input each column of the channel component matrices to the corresponding NNC. These data serve as the initial conditions of the classifiers.

(3) Combine the result vectors of each column into matrices. These matrices are the corresponding channel component matrices of the restored image (sketched below).
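A minimal sketch of steps (1)-(3) for one channel, together with the PSNR and MAE criteria. It reuses the hypothetical `dynamic_nnc` sketch from Section 3; how the prototype set is built from the corrupted image is not detailed in the paper, so `prototypes` is taken as given:

```python
import numpy as np

def psnr(original, restored):
    """Peak signal-to-noise ratio in dB, assuming 8-bit intensities."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def mae(original, restored):
    """Mean absolute error between two images."""
    return np.mean(np.abs(original.astype(float) - restored.astype(float)))

def restore_channel(channel, prototypes, gamma=0.2, K=1.0):
    """Steps (1)-(3) for a single R, G or B channel component."""
    x = channel.astype(float) / 255.0                    # (1) map pixels to [0,1]
    cols = [dynamic_nnc(x[:, j], prototypes, gamma, K)   # (2) classify each column
            for j in range(x.shape[1])]
    return np.stack(cols, axis=1)                        # (3) reassemble the matrix
```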

Figure 4. PSNR varied with noise ratio

Figure 5. MAE varied with noise ratio

To obtain an optimal width parameter $\gamma$, we take the color image 'bridge' corrupted by 40% salt-and-pepper noise as the reference image. If we set $K = 10^8$, the relationship between the parameter $\gamma$ and PSNR is as shown in Fig. 3, where the curve roughly indicates the trend between $\gamma$ and PSNR. It can easily be seen from Fig. 3 that the parameter $\gamma$ should be small. But as $\gamma$ decreases, the convergence time of the NNC increases quickly; when $\gamma = 0.1$, the convergence time becomes too long to be acceptable. In our experiments, $\gamma$ is set to 0.2, a value obtained empirically through extensive experiments.

The first experiment compares the proposed method with the standard median filter and the self-adaptive median filter. Fig. 4 and Fig. 5 compare the PSNR and MAE results for filtering the 'lena' and 'peppers' color images corrupted by salt-and-pepper noise at different percentages, respectively. Figs. 4 and 5 reveal that the proposed method achieves a significant improvement over the other filters in suppressing salt-and-pepper noise. Furthermore, the amplitude variation of the two evaluation parameters is smaller than that of the others, which shows that the proposed method is more robust than the other two methods.


Figure 6. Restoration performance comparison on the 'lena' and 'peppers' images degraded by 55% salt-and-pepper noise: (a) (b) original images; (c) (d) noisy images (PSNR = 7.3893 dB, 7.1409 dB); and images filtered by (e) (f) the median filter (PSNR = 27.2201 dB, 21.2389 dB), (g) (h) the self-adaptive median filter (PSNR = 31.0430 dB, 24.8311 dB), and (i) (j) the dynamic recurrent RBF neural network (PSNR = 33.4683 dB, 25.5706 dB)

The second experiment shows the capability of the new method to preserve image details while effectively suppressing noise. Fig. 6 shows the comparative restoration results of the standard median filter, the self-adaptive median filter, and the proposed method for the 'lena' and 'peppers' images corrupted by 55% salt-and-pepper noise. The proposed method clearly produces restored images of better subjective visual quality, with more noise suppression and detail preservation.

5. Conclusions

In this paper, an NNC based on a hybrid dynamic recurrent neural network is applied to color image de-noising. This model is self-adaptive: the number of RBF nodes in the first layer equals the cardinality of M, and prototypes can be added or removed freely. Furthermore, although the network calculates all distances from the current state vector to the prototypes, convergence to the nearest one is guaranteed for each initial state vector without performing any comparisons along the autonomous process. The experimental results show that this model achieves much better performance than several other de-noising techniques, both quantitatively and qualitatively. The only network parameter influencing the classification performance is $\gamma$; how to choose the optimal $\gamma$ is a topic for further study.

6. References

[1] D.T. Kuan, A.A. Sawchuk, T.C. Strand and P. Chavel, "Adaptive noise smoothing filter for images with signal-dependent noise", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1985, 7(2), pp. 165-177.
[2] H. Kong and L. Guan, "A neural network adaptive filter for the removal of impulse noise in digital images", Neural Networks, 1996, 9(3), pp. 373-378.
[3] J. Portilla, V. Strela, M.J. Wainwright and E.P. Simoncelli, "Image denoising using scale mixtures of Gaussians in the wavelet domain", IEEE Transactions on Image Processing, 2003, 12(11), pp. 1338-1351.
[4] R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification, China Machine Press, Beijing, 2003.
[5] M.K. Muezzinoglu and J.M. Zurada, "RBF-based neurodynamic nearest neighbor classification in real pattern space", Pattern Recognition, 2006, 39, pp. 747-760.
