

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 27, NO. 11, NOVEMBER 2016 2413

Echo State Networks With Orthogonal Pigeon-Inspired Optimization for Image Restoration

Haibin Duan, Senior Member, IEEE, and Xiaohua Wang

Abstract— In this paper, a neurodynamic approach for image restoration is proposed. Image restoration is a process of estimating original images from blurred and/or noisy images. It can be considered as a mapping problem that can be solved by neural networks. Echo state network (ESN) is a recurrent neural network with a simplified training process, which is adopted to estimate the original images in this paper. The parameter selection is important to the performance of the ESN. Thus, the pigeon-inspired optimization (PIO) approach is employed in the training process of the ESN to obtain desired parameters. Moreover, the orthogonal design strategy is utilized in the initialization of PIO to improve the diversity of individuals. The proposed method is tested on several deteriorated images with different sorts and levels of blur and/or noise. Results obtained by the improved ESN are compared with those obtained by several state-of-the-art methods. It is verified experimentally that better image restorations can be obtained for different blurred and/or noisy instances with the proposed neurodynamic method. In addition, the performance of the orthogonal PIO algorithm is compared with that of several existing bioinspired optimization algorithms to confirm its superiority.

Index Terms— Echo state network (ESN), image restoration, neurodynamic, orthogonal, pigeon-inspired optimization (PIO).

I. INTRODUCTION

IMAGE acquisition can be considered as a process corrupting the image source with a point spread function (PSF) accounting for the distortion [1]. Image restoration aims at obtaining the best estimation of the unknown original image from a blurred and/or noisy observed image [2]. Many effective image restoration approaches have been reported [3]–[8]. Though regularization and iterative methods are appropriate to solve the ill-conditioned problems in image restoration [9], [10], regularization solutions are sensitive to the variation of parameters. A neural network approach has been shown in [11] to achieve a competitive performance in image restoration. However, the adopted multilayer perceptron (MLP) model can only deal with Gaussian noise. Note that there are few image restoration methods that can present good performance on both deblurring and denoising problems. Motivated by this, we focus on the problem of developing an image restoration approach to deal with different sorts of deterioration. Neural network is utilized as the basis of our proposed image restoration approach, since the technique can automatically learn an image restoring procedure from the training examples.

Manuscript received August 15, 2014; revised May 10, 2015 and September 11, 2015; accepted September 12, 2015. Date of publication October 27, 2015; date of current version October 17, 2016. This work was supported in part by the National Natural Science Foundation of China under Grant 61425008, Grant 61333004, and Grant 61273054, the Aeronautical Foundation of China under Grant 20135851042, and the Innovation Foundation of BUAA for PhD Graduates.

The authors are with the State Key Laboratory of Virtual Reality Technology and Systems, School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China (e-mail: [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNNLS.2015.2479117

Neurodynamic optimization algorithms based on recurrent neural networks (RNNs) are appropriate for solving real-time optimization problems [12]–[15]. RNNs, with their parallel and distributed information processing, are computationally powerful. Therefore, neural network models have successfully been utilized in many different areas, such as nonlinear control design [16], system identification [17], signal processing [18], and time-series prediction [19], [20]. Significant progress has been made in the research on neurodynamic optimization since the Hopfield neural network was constructed for optimization problems [21], [22]. Moreover, various neurodynamic optimization models, including the dynamical canonical nonlinear programming circuit [23], the Adachi neural network [24], the Elman network [25], and the echo state network (ESN) [26], have been proposed to guarantee optimality, improve convergence properties, extend applicability, and reduce complexity.

ESN is a discrete-time RNN with a fast and efficient learning property. It can not only reduce the computational complexity in comparison with classical RNNs but also overcome their shortcomings, including slow convergence and local minima [27]. Thus, it has successfully been adopted in many applications, such as speech processing [28], time-series prediction [29], and pattern recognition [30]. ESN is adopted in this paper to get the best estimate of an unknown original image from a given blurred and/or noisy image. The RNN architecture of ESN can be divided into two components: 1) the dynamical reservoir and 2) the readout neurons [17]. The former component is a recurrent architecture in the hidden layer. It contains a large number of neurons that are randomly and sparsely connected [31]. The latter component denotes the memory-less output layer.

2162-237X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Fig. 1. Architecture of the basic ESN. u_m, x_n, and y_l represent an input unit, a unit in the reservoir, and an output unit, respectively. Teacher is the supervisor in the training process, which helps the network reform the output weights.

Although ESN has received increasing attention from the research community in recent years, some topics related to its practical implementation, such as reservoir adaptation, hardware implementation, and the optimal design of ESN, still deserve to be extensively explored. Pigeon-inspired optimization (PIO), a novel evolutionary computation algorithm [32]–[34], is employed in this paper for the purpose of obtaining the optimal design of ESN.

It has been verified that complex optimization problems can be solved efficiently by bioinspired algorithms, such as the genetic algorithm (GA) [35], bee colony algorithms [36], [37], particle-swarm optimization (PSO) [38], brain storm optimization [39], [40], and biogeography-based optimization (BBO) [41]–[44]. PIO is proposed according to the behaviors of pigeons. By adopting the orthogonal design strategy [45], [46] in the initialization of PIO, the diversity of individuals can be improved, and the optimal solution can be obtained in fewer generations. The number of reservoir neurons, spectral radius, input scale, and connectivity density, which are the parameters important to the performance of ESN, will be optimized with the orthogonal pigeon-inspired optimization (OPIO) algorithm.

The OPIO-ESN image restoration algorithm is tested on several images with various sorts of deterioration. The proposed algorithm is compared with several state-of-the-art image restoring algorithms. The restoration results are quantitatively evaluated. Moreover, comparative experiments on OPIO and other optimization algorithms are conducted to demonstrate the advantage of the OPIO approach.

The rest of this paper is organized as follows. In Section II, the ESN model is introduced, and its important parameters are discussed. In Section III, the PIO algorithm is presented, and the orthogonal design strategy is proposed in its initialization process. The image restoration problem is formulated in Section IV, where the detailed implementation procedure of the proposed OPIO-ESN image restoration algorithm is provided. In Section V, a series of comparative experimental results is given to show the effectiveness of the proposed approach, followed by the concluding remarks given in Section VI.

II. ECHO STATE NETWORK

A. ESN Architecture

ESN is normally composed of three components: 1) an input layer; 2) a discrete-time RNN (the reservoir); and 3) a linear output layer [17]. The architecture of the basic ESN is shown in Fig. 1. The dynamic reservoir produces the dynamics of the internal processing units and has short-term memory. Besides, a teacher series is added to the output layer. The learning goals can be achieved by calculating the recurrent network state matrix and training the output signal state matrix in terms of minimizing the normalized root mean square error (NRMSE).

RNNs are characterized by recurrent loops in the synaptic connection pathways of the ESN architecture. Suppose that the basic ESN is composed of M external input neurons, N internal units, and L readout neurons. Then, the state and output equations of ESN can be presented as

x(k + 1) = H(W_in u(k + 1) + W_res x(k) + W_back y(k))
y(k + 1) = W_out (x(k + 1), u(k + 1), y(k))    (1)

where H(·) is a nonlinear activation function of the hidden layer, which is normally a sigmoid function, for example, H(·) = tanh(·). W_in ∈ ℝ^(M×N) is the input-hidden connection weight matrix. W_res ∈ ℝ^(N×N) is the weight matrix of synaptic connections among reservoir neurons. W_back ∈ ℝ^(L×N) denotes the output-hidden connection weight matrix, and W_out ∈ ℝ^((M+N+L)×L) is the output weight matrix. x(k) and y(k) are the reservoir state and the output value at time k, respectively. The training process aims at minimizing the error between the desired output d and the network output y. An important property of ESN is that its internal neurons work as echo functions, which display systematic variants of the exciting external signal. The elements of W_in, W_res, and W_back are fixed values to guarantee the echo states in ESN. W_out is the matrix to be trained. The state of the dynamic reservoir is driven by the input together with the internal dynamics of the network, which is independent of the task.
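The state and output equations in (1) can be sketched in code. The following is a minimal NumPy sketch, not the authors' implementation: the sizes M, N, L are illustrative, the function name esn_step is ours, and the weight shapes follow the column-vector convention (so W_in is stored N × M here, the transpose of the paper's layout).

```python
import numpy as np

# Illustrative sizes: M inputs, N reservoir units, L outputs (assumed values).
M, N, L = 3, 50, 2
rng = np.random.default_rng(0)

W_in = rng.uniform(-0.5, 0.5, (N, M))            # input-to-reservoir weights (fixed)
W_res = rng.uniform(-0.5, 0.5, (N, N))           # reservoir weights (fixed)
W_back = rng.uniform(-0.5, 0.5, (N, L))          # output feedback weights (fixed)
W_out = rng.uniform(-0.5, 0.5, (L, M + N + L))   # readout: the only trained matrix

def esn_step(x, u_next, y):
    """One step of (1): compute x(k+1) and y(k+1) from x(k), u(k+1), y(k)."""
    x_next = np.tanh(W_in @ u_next + W_res @ x + W_back @ y)
    # Readout acts on the concatenation (x(k+1), u(k+1), y(k)).
    y_next = W_out @ np.concatenate([u_next, x_next, y])
    return x_next, y_next

x = np.zeros(N)
y = np.zeros(L)
x, y = esn_step(x, rng.standard_normal(M), y)
```

In training, only W_out would be fitted (e.g., by least squares on collected states), which is what makes ESN training so cheap compared with full RNN backpropagation.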

B. Important Parameters in the ESN

Neurons in the hidden layer of ESN are sparsely interconnected. The size of the dynamic reservoir N_DR has a great influence on the performance of ESN [27]. Clearly, a larger number of dynamic reservoir internal processing units will result in a more complicated fitting system, while costing longer computing time.

The spectral radius (SR) is another key factor in the ESN, which is defined as

SR = max{|eigenvalue of W_res|}.    (2)

The parameter should be selected as SR < 1 to guarantee that the internal neurons work as echo functions [27]. In general, the selected range of SR is [0.1, 0.99] in practical applications.
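A common way to enforce SR < 1 in practice is to generate a random reservoir matrix and rescale it to a target spectral radius per (2). A minimal sketch (the function name and the target value 0.9 are illustrative choices, not from the paper):

```python
import numpy as np

def scale_to_spectral_radius(W, target_sr=0.9):
    """Rescale W so that max|eig(W)| equals target_sr, per definition (2)."""
    sr = max(abs(np.linalg.eigvals(W)))
    return W * (target_sr / sr)

rng = np.random.default_rng(1)
W_res = scale_to_spectral_radius(rng.uniform(-0.5, 0.5, (100, 100)), 0.9)
sr = max(abs(np.linalg.eigvals(W_res)))  # spectral radius after rescaling
```

Because scaling a matrix by a constant scales every eigenvalue by the same constant, the rescaled reservoir hits the target spectral radius exactly (up to floating-point error).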

The sparse degree (SD) is defined as

SD = n / N    (3)

where n is the number of interconnected neurons, and N denotes the number of all the neurons in the reservoir. The value of SD determines the variety of vectors in the dynamic reservoir. A reservoir with sparse interconnectivity can be decomposed into loosely coupled subsystems. This condition establishes a richly structured reservoir of excitable dynamics.

Moreover, before the training process, the input signals should be adjusted to the activation function of the reservoir units. Therefore, the input scale (IS) is also a parameter to be carefully selected. Normally, a large IS should be chosen for a highly nonlinear task. It is determined by the property of the activation function.

Fig. 2. Map and compass operator in the PIO algorithm. Each pigeon is a possible solution. Solid arrows: flying directions of the pigeons. The navigation information can be shared among the pigeons, since they can communicate with each other. Therefore, the pigeon with the best position can be regarded as the leader (the right-centered one). Dotted arrows: the rest of the pigeons will then improve their directions according to the differences between their current positions and the leader's position.

In summary, the OPIO algorithm is used to derive the optimal values of N_DR, SR, SD, and IS, which are the factors of the optimization problem. The range of each factor can be separated into several possible levels. The factors and levels will be used in the orthogonal design process of OPIO. More details are given in Section III.

III. ORTHOGONAL PIGEON-INSPIRED OPTIMIZATION

A. Pigeon-Inspired Optimization

It is shown in [47] that homing pigeons can easily find their home with the aid of three homing tools: 1) the magnetic field; 2) the sun; and 3) landmarks. Pigeons rely more on map- and compass-like information at the beginning of the journey. Landmarks provide more information to pigeons in the midway. Moreover, the route is evaluated and revised in a timely manner to guarantee that they can reach the destination through the optimal route [48]. Inspired by these facts, two operators are introduced in the PIO algorithm, i.e., the map and compass operator and the landmark operator [32]. The former is designed according to the functions of the magnetic field and the sun, while the latter is proposed based on the contribution of landmarks. The optimal solution and the process of optimization can be regarded as the destination of the pigeons and the homing process, respectively. The position of each pigeon is a feasible solution, which corresponds to the NRMSE between the estimated output and the original signals.

1) Map and Compass Operator: Previous work shows that the perception of the magnetic field can help pigeons shape a map in their brains. In addition, the altitude of the sun can also provide information for navigation as a compass [48]. The map and compass operator, as shown in Fig. 2, is designed according to the utilization of the magnetic field and the sun.

Fig. 3. Landmark operator in the PIO algorithm. The pigeon in the center of the circle represents the best solution in each generation, and it is the intermediate destination of the other pigeons. Half of the pigeons that are far from the intermediate destination (pigeons outside the circle) will follow the nearest pigeons in the circle, and thus they are considered to share those positions. Therefore, the pigeons outside the circle are ignored.

It is shown in Fig. 2 that the map and compass operator utilizes the position X_i and the velocity V_i of pigeon i. The dimensions of the positions and velocities (both set as four in this paper) are determined by the problem. The map and compass operator is presented as

V_i(t) = V_i(t − 1) · e^(−Rt) + rand · (X_g − X_i(t − 1))    (4)

X_i(t) = X_i(t − 1) + V_i(t)    (5)

where t is the iteration number, R is the map and compass factor to be set according to the problem, rand is a random number between 0 and 1, and X_g is the global best position.

2) Landmark Operator: Recent research indicated that pigeons can also obtain guidance information from regular landmarks [47], such as railways, main roads, and rivers. The landmark operator is shown in Fig. 3.

As shown in Fig. 3, each pigeon outside the circle is considered to share the same position with the nearest pigeon in the circle. Therefore, half of the pigeons are discarded in each generation by the landmark operator. The number of pigeons at the t-th iteration can be computed as

N_P(t) = ceil(N_P(t − 1) / 2)    (6)

where ceil(A) rounds the variable A to the nearest integer greater than or equal to A. Suppose that each pigeon can fly straight to the intermediate destination. Then, the position of pigeon i at the t-th iteration is updated as

X_i(t) = X_i(t − 1) + rand · (X_c(t) − X_i(t − 1))    (7)

where X_c(t) denotes the center position at the t-th iteration, which is defined as

X_c(t) = Σ_i X_i(t) · fitness(X_i(t)) / (N_P(t) · Σ_i fitness(X_i(t))).    (8)

fitness(·) is the criterion to evaluate the quality of each pigeon individual. It is defined as fitness(X_i(t)) = 1/(f_min(X_i(t)) + ε) for minimization problems or fitness(X_i(t)) = f_max(X_i(t)) for maximization problems.
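Equations (6)–(8) together form one landmark iteration: halve the flock, compute the fitness-weighted center, and pull the survivors toward it. A sketch for the minimization case, implementing (8) exactly as written (including the N_P(t) factor in the denominator); the function name, population size, and toy objective are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
EPS = 1e-12  # the ε in fitness = 1/(f_min + ε)

def landmark_step(X, f):
    """One iteration of (6)-(8) for a minimization objective f."""
    costs = np.array([f(x) for x in X])
    keep = math.ceil(len(X) / 2)              # (6): N_P(t) = ceil(N_P(t-1)/2)
    best = np.argsort(costs)[:keep]           # discard the worse half
    X = X[best]
    fitness = 1.0 / (costs[best] + EPS)       # fitness for minimization
    # (8): fitness-weighted center, normalized by N_P(t) * sum of fitness.
    X_c = (fitness[:, None] * X).sum(axis=0) / (len(X) * fitness.sum())
    # (7): move each surviving pigeon a random fraction toward X_c.
    return X + rng.random((len(X), 1)) * (X_c - X)

X = rng.uniform(-1.0, 1.0, (8, 4))
X = landmark_step(X, lambda x: float(np.sum(x**2)))  # population shrinks 8 -> 4
```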


B. Orthogonal Design

The orthogonal design strategy [45] is introduced to guarantee the diversity of the initialized pigeons in PIO. For n factors with Q levels, the orthogonal design is executed based on the orthogonal array L_m(Q^n) with m rows and n columns, where m denotes the total number of combinations of levels. A general method to construct an m × n orthogonal array is given in [46]. The following example explains the composition of the orthogonal array:

L_9(3^4) =
    [ 1 1 1 1
      1 2 2 2
      1 3 3 3
      2 1 2 3
      2 2 3 1
      2 3 1 2
      3 1 3 2
      3 2 1 3
      3 3 2 1 ].    (9)

It can be seen that there are four factors with three levels for each factor. Therefore, a total of 3^4 = 81 different combinations of levels would have to be tested in a complete search. However, only nine combinations have to be tested with the orthogonal design, as shown in L_9(3^4). As presented in Section II, there are four factors of the ESN to be optimized with the OPIO algorithm. Supposing that the range of each factor is divided into five levels, the orthogonal array is L_25(5^4).

The operation of factor analysis can evaluate the effect of each factor on the objective function and identify the most important factor, after which the best level for each factor can be determined. In this paper, however, the orthogonal design is utilized only to improve the initialization process of PIO: the individuals are selected randomly from the orthogonal array, instead of being chosen through factor analysis, so as to retain the diversity of the individuals.
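The defining property of an orthogonal array such as (9) is that, for any two columns, every ordered pair of levels appears equally often — which is why nine rows can stand in for the full 3^4 = 81 search pairwise. This can be checked directly; the helper name is ours:

```python
from itertools import combinations
from collections import Counter

# The L9(3^4) array from (9); each row is one combination of factor levels.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def is_orthogonal(array, levels=3):
    """Check that every pair of columns covers each level pair exactly once."""
    n_cols = len(array[0])
    for a, b in combinations(range(n_cols), 2):
        pairs = Counter((row[a], row[b]) for row in array)
        if any(pairs[(i, j)] != 1
               for i in range(1, levels + 1)
               for j in range(1, levels + 1)):
            return False
    return True

assert is_orthogonal(L9)  # 9 rows, yet balanced across all column pairs
```

Initializing the pigeon population by sampling rows of such an array spreads the initial solutions evenly across the factor-level space, which is the diversity argument made above.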

IV. OPIO-ESN FOR IMAGE RESTORATION

A. Image Restoration

Image restoration can be formulated as an inverse problem in image processing. The purpose of image restoration is to recover the unknown original image X from a noisy observation Y [4], which can be modeled as

Y = K X + B (10)

where K typically represents a convolution of the original image, and B is the additive observation noise, which can be white Gaussian noise or impulse noise. The operator K is generally considered to have finite support given by the PSF. The degradation caused by the PSF can be represented by a Fredholm integral equation of the convolution type [2], which is given as

U(i, j) = Y(i, j) − B(i, j) = ∫_p ∫_q K(i, j, p, q) X(p, q) dp dq    (11)

where K(i, j, p, q) is a 2-D PSF and U(i, j) is the image with degradation caused by the PSF. Therefore, the process of image restoration is to estimate the additive noise and find a solution to the Fredholm equation of the convolution type in the 2-D space. In addition, the PSF is assumed to be a continuous function that is spatially invariant and separable.

Assume that the PSF either is known or can be estimated from the data. Then, K is known or can be estimated. The solution can be considered as the minimum of a convex objective function, which is given as

F(X) = (1/2) ‖U − K X‖² + λ Φ(X)    (12)

where Φ is the regularization operator, λ is the regularization parameter, and ‖·‖ is the Euclidean norm. The objective function simultaneously minimizes the error and a measure of the roughness of the solution.
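The objective in (12) is straightforward to evaluate once K and a regularizer are fixed. A sketch using a Tikhonov-type choice Φ(X) = ‖X‖², which is one illustrative regularizer, not the one the paper commits to; the matrix sizes and the function name are also assumptions:

```python
import numpy as np

def objective(X, U, K, lam):
    """Evaluate F(X) = 0.5 * ||U - K X||^2 + lam * Phi(X) with Phi(X) = ||X||^2."""
    residual = U - K @ X
    return 0.5 * float(residual @ residual) + lam * float(X @ X)

rng = np.random.default_rng(4)
K = rng.standard_normal((20, 10))   # stand-in for a discretized blur operator
X_true = rng.standard_normal(10)
U = K @ X_true                       # noiseless degraded observation

# The true image zeroes the data term; any perturbation increases it.
assert objective(X_true, U, K, lam=0.0) < 1e-9
assert objective(X_true + 0.1, U, K, lam=0.0) > 0.0
```

With λ > 0, the minimizer trades data fidelity against solution roughness, which is exactly the balance described above.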

B. OPIO-ESN Based Image Restoration

Deteriorated images are decomposed into several patches, each of which is restored separately. Image restoration is implemented with OPIO-ESN by mapping each deteriorated image patch to the corresponding original patch. OPIO-ESN can automatically learn an image restoring procedure from training examples, which consist of pairs of deteriorated and original images. The input samples of the network are blurred and/or noisy image patches, and the output samples are the corresponding original patches. Estimations of the original images are obtained by placing the output patches at the corresponding locations.
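The decompose–restore–reassemble pipeline above can be sketched as follows. This is an illustrative skeleton, not the authors' code: restore_patch stands in for the trained network, the patch size is an example, and the sketch assumes image dimensions divisible by the patch size.

```python
import numpy as np

def restore_image(img, patch=9, restore_patch=lambda v: v):
    """Restore an image patchwise: vectorize each non-overlapping patch,
    map it through restore_patch, and place the output at the same location."""
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            vec = img[i:i + patch, j:j + patch].reshape(-1)   # patch -> vector
            out[i:i + patch, j:j + patch] = restore_patch(vec).reshape(patch, patch)
    return out

img = np.arange(81.0).reshape(9, 9)
assert np.allclose(restore_image(img), img)  # identity mapping returns the image
```

In the paper's setting the input patches are larger than the output patches (they carry surrounding context), so the real input window would extend beyond each output patch; the sketch omits that border handling for brevity.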

The pixels in each patch are arranged as a vector, as required by the input form of OPIO-ESN. The NRMSE is given as the fitness function of OPIO, which can be computed as

NRMSE = √( (1 / (S‖X‖²)) Σ_{k=1}^{S} (X̂(k) − X(k))² )    (13)

where S is the number of pixels to be restored in a patch, and X̂(k) and X(k) are the kth pixels in the restored patch and the original patch, respectively. The values of the pixels in both the deteriorated and the original images are normalized to [0, 1]. For a series of deteriorated images with the same sort of blur and/or noise, the original images can be restored with the corresponding well-trained OPIO-ESN.
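The fitness in (13) translates directly into code; a minimal sketch, with the function name and the flat test patch as illustrative choices:

```python
import numpy as np

def nrmse(X_hat, X):
    """NRMSE of (13): restored patch X_hat vs. original patch X with S pixels."""
    X_hat, X = np.ravel(X_hat), np.ravel(X)
    S = X.size
    return float(np.sqrt(np.sum((X_hat - X) ** 2) / (S * np.sum(X ** 2))))

X = np.full(81, 0.5)           # a flat 9x9 original patch, values in [0, 1]
assert nrmse(X, X) == 0.0      # perfect restoration scores zero
```

The ‖X‖² term in the denominator makes the score scale-invariant with respect to the original patch's intensity, so bright and dark patches contribute comparably to the fitness.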

C. Implementation Procedure of OPIO-ESN for Image Restoration

The detailed implementation procedure of OPIO-ESN for image restoration, as shown in Fig. 4, is described as follows.

Step 1: Initialize the parameters of the OPIO algorithm, including the dimension of the solution D, the range of each variable, the range of velocity, the population size N_p, the map and compass factor R, and the numbers of iterations Nc1_max and Nc2_max corresponding to the two operators, which are restricted by Nc2_max > Nc1_max.

Step 2: Generate an orthogonal array according to the factors and levels of the ESN parameters. There are four factors of the ESN. Suppose that the level number of each factor is set as five. Then, there are 25 rows in the orthogonal array (the number of levels should be larger if N_p > 25). In addition, velocities are generated randomly for all individuals according to the velocity range.

Fig. 4. Procedure of OPIO-ESN for image restoration.

Step 3: Create input and output samples for ESN. The output training samples are original image patches that are randomly extracted from a data set. The input training samples are obtained by corrupting the original patches with blur and/or noise.

Step 4: Randomly select N_p individuals from the orthogonal array. Calculate the fitness of each pigeon and find the best solution. Set the current iteration number Nc = 1.

Step 5: Execute the map and compass operator. Update the velocity and position of each pigeon with (4) and (5).

Step 6: Calculate the fitness of each pigeon and record the new best solution. If Nc > Nc1_max, go to Step 7. Otherwise, update the iteration count with Nc = Nc + 1 and go to Step 5.

Step 7: Calculate the fitness of each pigeon and sort the pigeons on the basis of their fitness values.

TABLE I

PARAMETERS OF THE OPIO-ESN ALGORITHM

Step 8: Execute the landmark operator. Half of the pigeons with worse fitness values will be abandoned. Determine the center of the remaining half of the pigeons according to (8) and define the center position as the temporary destination. The pigeons will fly toward the intermediate destination by adjusting their flying states with (7).

Step 9: Calculate the fitness of each pigeon, sort the pigeons according to their fitness values, and store the best solution. If Nc > Nc2_max, go to Step 10. Otherwise, update the iteration count with Nc = Nc + 1 and go to Step 8.

Step 10: Stop OPIO and train ESN with the optimal parameters.
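Steps 1–10 can be condensed into a short driver that runs the map-and-compass phase followed by the landmark phase on a generic objective. This is a compact sketch under stated simplifications — random rather than orthogonal initialization, a toy objective in place of the ESN fitness evaluation, and illustrative parameter values; the function name opio is ours.

```python
import math
import numpy as np

rng = np.random.default_rng(5)

def opio(f, D=4, Np=16, R=0.2, nc1_max=30, nc2_max=40, lo=0.0, hi=1.0):
    """Minimal OPIO sketch following Steps 1-10, minimizing f over [lo, hi]^D."""
    X = rng.uniform(lo, hi, (Np, D))               # Step 1/2 (random init here)
    V = rng.uniform(-0.1, 0.1, (Np, D))
    X_g = X[np.argmin([f(x) for x in X])].copy()   # Step 4: initial best
    for t in range(1, nc1_max + 1):                # Steps 5-6: map and compass
        V = V * np.exp(-R * t) + rng.random((len(X), 1)) * (X_g - X)
        X = np.clip(X + V, lo, hi)
        i = int(np.argmin([f(x) for x in X]))
        if f(X[i]) < f(X_g):
            X_g = X[i].copy()
    for t in range(nc1_max + 1, nc2_max + 1):      # Steps 7-9: landmark
        order = np.argsort([f(x) for x in X])
        X = X[order[:math.ceil(len(X) / 2)]]       # (6): keep the better half
        fit = 1.0 / (np.array([f(x) for x in X]) + 1e-12)
        X_c = (fit[:, None] * X).sum(axis=0) / (len(X) * fit.sum())   # (8)
        X = np.clip(X + rng.random((len(X), 1)) * (X_c - X), lo, hi)  # (7)
        i = int(np.argmin([f(x) for x in X]))
        if f(X[i]) < f(X_g):
            X_g = X[i].copy()
    return X_g                                      # Step 10: best parameters

best = opio(lambda x: float(np.sum((x - 0.3) ** 2)))
```

In the paper's setting, f would train an ESN with the candidate (N_DR, SR, SD, IS) and return the NRMSE of (13), which makes each fitness evaluation far more expensive than this toy objective.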

V. EXPERIMENTAL RESULTS

A series of comparative experiments is conducted to verify the feasibility and effectiveness of the proposed OPIO-ESN algorithm for image restoration. The algorithms are programmed using MATLAB 2012 and implemented on a PC with 4 GB of RAM. Image patches for OPIO-ESN training are randomly picked from the Judd and Torralba data set [49]. Deteriorated images are generated by corrupting the original ones with blur and/or noise. The number of training samples is set as 20 000. The sizes of the input patches and output patches are set as 17 × 17 and 9 × 9, respectively. Thus, the number of input neurons is set as 289, and the number of output neurons is set as 81. The stride size is 9, and thus, the patches do not overlap. The initial parameters of OPIO-ESN are selected based on tests and practical experience; they are presented in Table I.

A. Criteria to Evaluate Experimental Results

1) Normalized Mean Square Error: The MSE [50] is a criterion that measures the average of the squared errors of an estimator. The normalized MSE (NMSE) [51] is computed based on the MSE and takes the values in the original image into consideration. The NMSE is computed as

NMSE = Σ_{i=1}^{r} Σ_{j=1}^{c} (X(i, j) − X̂(i, j))² / Σ_{i=1}^{r} Σ_{j=1}^{c} X(i, j)²    (14)

where X(i, j) is the value at pixel (i, j) in the original image, and X̂(i, j) is the estimated value at pixel (i, j) in the restored image. r and c are the numbers of rows and columns, respectively. A smaller NMSE indicates a better estimator.


TABLE II

BLUR PSF USED IN EACH SCENARIO

TABLE III

PARAMETERS OF LPA-ICI-RI AND LPA-ICI-RWI

2) Signal-to-Noise Ratio: The signal-to-noise ratio (SNR) [52] is the ratio of signal power to noise power, often expressed in decibels. An SNR greater than 0 dB indicates that the ratio is higher than 1:1, i.e., the power of the signal is larger than that of the noise. The variance is used instead of the power, as the power spectrum of an image is difficult to compute. The SNR is defined as

\[
\mathrm{SNR} = 10\log\left(\frac{\sum_{i=1}^{r}\sum_{j=1}^{c}\bigl(X(i,j)-X_{\mathrm{ave}}\bigr)^{2}}{\sum_{i=1}^{r}\sum_{j=1}^{c}\bigl(X(i,j)-\hat{X}(i,j)\bigr)^{2}}\right) \tag{15}
\]

where $X_{\mathrm{ave}}$ is the average of the values at all pixels in the original image. The other variables are defined the same as those in the NMSE.
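A sketch of (15); the function name is ours, and a base-10 logarithm is assumed since the value is expressed in decibels:

```python
import numpy as np

def snr_db(original, restored):
    """SNR of (15): variance of the original image (signal power
    proxy) over the total squared restoration error, in dB."""
    original = np.asarray(original, dtype=float)
    restored = np.asarray(restored, dtype=float)
    signal = np.sum((original - original.mean()) ** 2)
    noise = np.sum((original - restored) ** 2)
    return 10 * np.log10(signal / noise)
```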

3) Peak Signal-to-Noise Ratio: The peak SNR (PSNR) [53] is defined as the ratio of the maximum possible power of the original image to the power of the noise

\[
\mathrm{PSNR} = 10\log\left(\frac{r\,c\,X_{\max}^{2}}{\sum_{i=1}^{r}\sum_{j=1}^{c}\bigl(X(i,j)-\hat{X}(i,j)\bigr)^{2}}\right) \tag{16}
\]

where $X_{\max}$ is the maximum possible pixel value of the original image.
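Since dividing the summed squared error by $r\,c$ gives the MSE, (16) reduces to the peak power $X_{\max}^{2}$ over the MSE. A sketch, again assuming a base-10 logarithm and with a function name of our choosing:

```python
import numpy as np

def psnr_db(original, restored, x_max=255.0):
    """PSNR of (16): peak power x_max^2 over the mean squared
    restoration error, in dB. x_max defaults to 255 for 8-bit images."""
    original = np.asarray(original, dtype=float)
    restored = np.asarray(restored, dtype=float)
    mse = np.mean((original - restored) ** 2)
    return 10 * np.log10(x_max ** 2 / mse)
```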

B. Experiment I: Restoration of Images With Blur

Four benchmark images frequently used in many publications, such as [7] and [54], are utilized to test the performance of the proposed image restoration method. The blur PSFs used in all scenarios are given in Table II.

ForWaRD [55], linear time-invariant (LTI) Wiener filtering [55], group-based sparse representation (GSR) [7], LPA-ICI-RI, which uses local polynomial approximation (LPA), the intersection of confidence intervals (ICI), and regularized inversion (RI) [56], LPA-ICI-RWI, which uses LPA, ICI, and regularized Wiener inversion (RWI) [56], iterative decoupled deblurring block matching 3-D (IDDBM3D) [54], and fast total variation deconvolution (FTVd) [6] are selected as the seven comparative algorithms tested on the four images. The results of the seven algorithms are obtained with the corresponding software available online. Parameters of the LPA-ICI-RI and LPA-ICI-RWI algorithms are

Fig. 5. Deblurring results of the Lena image in Scenario 4 of Experiment I. (a) Original image. (b) Blurred image. (c)–(j) Images restored by ForWaRD [55], LTI Wiener [55], GSR [7], LPA-ICI-RI [56], LPA-ICI-RWI [56], IDDBM3D [54], FTVd [6], and the proposed OPIO-ESN algorithm.

provided in Table III. The balance parameter is set as μ = 5.0 × 10^10 in the FTVd method, as the noise level is zero. Default parameters of the other five comparative algorithms are used. The four parameters of the ESN (NDR, SR, SD, and IS) are obtained by the OPIO algorithm.

Simulation results on the Lena image in Scenario 4 of Experiment I are shown in Fig. 5. It can be seen that the proposed OPIO-ESN algorithm outperforms the GSR and LPA-ICI-RI algorithms. The details, such as the straw on the hat,


DUAN AND WANG: ESNs WITH ORTHOGONAL PIO FOR IMAGE RESTORATION 2419

TABLE IV

PSNRs OF THE EIGHT METHODS IN EXPERIMENT I

Fig. 6. Average NMSEs of the restored benchmark images in the five blur scenarios of Experiment I. The horizontal axis represents the names of the tested images, and the vertical axis gives the NMSEs obtained by each method.

are well restored, and the edges are preserved with the proposed OPIO-ESN algorithm. The performance of ForWaRD, LTI Wiener, LPA-ICI-RWI, IDDBM3D, FTVd, and OPIO-ESN shown in Fig. 5 is difficult to compare visually. Thus, the PSNRs and SNRs of the eight algorithms are further given in Tables IV and V to compare them objectively. The average NMSEs over the five scenarios obtained by the eight algorithms for each image are provided in Fig. 6.

TABLE V

SNRs OF THE EIGHT METHODS IN EXPERIMENT I

The IDDBM3D algorithm obtains the largest PSNRs in all five scenarios, as shown in Table IV. The average PSNRs given in the last column of Table IV indicate that OPIO-ESN and FTVd are the two best among the remaining seven algorithms. More specifically, OPIO-ESN achieves larger PSNRs than the GSR, LPA-ICI-RI, and LPA-ICI-RWI algorithms in all five scenarios. FTVd, ForWaRD, and LTI Wiener outperform OPIO-ESN in Scenarios 1 and 2; however, the opposite is observed in the remaining scenarios. The comparative results for the SNRs are similar to those for the PSNRs, as shown in Table V, i.e., OPIO-ESN achieves the second largest average SNRs over all five scenarios. Moreover, the results shown in Fig. 6 reflect that smaller NMSEs can be obtained by the proposed OPIO-ESN algorithm in comparison with all the other algorithms except the IDDBM3D method.

It is worth noting that although IDDBM3D achieves better results than OPIO-ESN, its time cost is higher than that of OPIO-ESN. The parameters of the ESN are selected as NDR = 98, SR = 0.78, SD = 0.89, and IS = 0.95 in the first scenario. The training time of the ESN is 8.31 s, and the testing time for the Barbara image (256 × 256) is 0.88 s. The restoration of the Barbara image in Scenario 1 of Experiment I costs 95.2148 s when the number of iterations



TABLE VI

NOISE VARIANCE USED IN EACH SCENARIO

is set as 200 in IDDBM3D. It costs 2.05 s using the FTVd algorithm. Thus, OPIO-ESN costs less time than both IDDBM3D and FTVd.

C. Experiment II: Restoration of Images With Blur and Noise

Experiments on the restoration of images with both blur and noise are conducted to test the robustness of the proposed OPIO-ESN algorithm. The seven comparative methods adopted in Experiment I are used again here. Several balance parameters in the FTVd method are tested, and the best restoration result is kept for each image. The balance parameters tested for each image are μ, 10μ, 20μ, 50μ, 100μ, 200μ, 300μ, 400μ, 500μ, 600μ, and 1000μ, where μ = 0.05/σ and σ is the standard deviation of the additive white Gaussian (AWG) noise. Parameters of the other six comparative algorithms are selected the same as those in Experiment I. Parameters of the ESN are the corresponding optimal results obtained by OPIO. The blur PSFs are given in Table II, and the variances of the AWG noises are provided in Table VI.

Restoration results of the Barbara image in Scenario 1 of Experiment II are provided in Fig. 7. The PSNRs and SNRs of the eight algorithms tested on different blurred and noisy images are given in Tables VII and VIII, respectively. For each tested image, the NMSEs of the five scenarios obtained by each algorithm are averaged, and the results are provided in Fig. 8. The experimental results show that the performance of all eight algorithms is worse when AWG noise is added to the blurred images. The FTVd algorithm is especially sensitive to noise. OPIO-ESN exhibits better robustness than FTVd and LTI Wiener.

Comparing the PSNRs shown in Tables IV and VII, the results indicate that the PSNRs of all eight algorithms decline when there is noise in the blurred images. GSR achieves higher PSNRs and SNRs in Experiment II, although it performs worse than several other methods in Experiment I. The performance of OPIO-ESN cannot be compared with that of IDDBM3D and GSR, but is better than that of the other methods. Fig. 8 shows that the NMSEs obtained by LTI Wiener are larger than those obtained by the other methods. The NMSEs obtained on the House image are the smallest among the four tested images. The reason should be that the House image has a simpler and more regular structure, while the Lena and Cameraman images contain more details. In addition, the time cost of OPIO-ESN is significantly lower than those of GSR and IDDBM3D. The testing time of OPIO-ESN on the Barbara image (256 × 256) in Scenario 1 of Experiment II is 0.91 s, whereas that of IDDBM3D on the Barbara image in Scenario 1 of Experiment II is 94.9478 s.

Fig. 7. Restoration results of the Barbara image in Scenario 1 of Experiment II. (a) Original image. (b) Degraded image. (c)–(j) Images restored by ForWaRD [55], LTI Wiener [55], GSR [7], LPA-ICI-RI [56], LPA-ICI-RWI [56], IDDBM3D [54], FTVd [6], and the proposed OPIO-ESN algorithm.

The restoration of the Barbara image in Scenario 1 of Experiment II costs 653.1772 s using the GSR algorithm when the iteration number is set as 80.

D. Experiment III: Restoration of Images With Gaussian Noise

The proposed OPIO-ESN algorithm is also trained and tested on noisy images. Original images are corrupted with



TABLE VII

PSNRs OF THE EIGHT METHODS IN EXPERIMENT II

Fig. 8. Average NMSEs of the restored benchmark images in the five blur and noise scenarios of Experiment II. The horizontal axis represents the names of the tested images, and the vertical axis gives the NMSEs obtained by each method.

AWG noise in Experiment III. The noise level varies from 10 to 30. The performance of OPIO-ESN is compared with those of the block matching 3-D (BM3D) [8], BM3D with shape-adaptive principal component analysis (SAPCA) [57], MLP [11], nonlocal means denoising (NLMD) [58], and IDDBM3D [54] algorithms. Results of the five comparative algorithms are obtained with the corresponding software available online. The MLP model trained with σ = 10, given in the software available online [11], is used when the noise levels are

TABLE VIII

SNRs OF THE EIGHT METHODS IN EXPERIMENT II

σ = 10 and σ = 15. The model trained with σ = 20 is used when the noise levels are σ = 20 and σ = 25. When the noise level is σ = 30, the MLP model trained with σ = 30 is used. Therefore, the comparisons with the MLP are unfair when the noise levels are σ = 15 and σ = 25. Default parameters of the remaining algorithms are used. The restoration results of the House image with AWG noise σ = 25 are shown in Fig. 9. The PSNRs and SNRs of the six methods on noisy images are given in Tables IX and X, respectively. The averaged NMSEs are given in Fig. 10.

The performance of OPIO-ESN on the House image is better than that of most algorithms when the noise level is low. However, its performance worsens as the noise level becomes higher. OPIO-ESN achieves better results than BM3D and MLP in most test cases. NLMD and BM3D-SAPCA perform better than the other four methods, especially on the Lena image, which contains more details. The performance of OPIO-ESN in Experiment III is worse than in the former two experiments. A possible reason for this could be that the ESN is relatively more sensitive to random noise.

E. Experiment IV: Comparison of Evolutionary Optimization Methods

Seven evolutionary optimization methods, including PIO [32], BBO [41], PSO [38], the GA [35], the stud GA [59], the differential evolution (DE) algorithm [60], and the evolutionary



Fig. 9. Restoration of the House image with AWG noise σ = 25 in Experiment III. (a) Original image. (b) Noisy image. (c)–(h) Images restored by BM3D [8], BM3D-SAPCA [57], MLP [11], NLMD [58], IDDBM3D [54], and the proposed OPIO-ESN algorithm.

strategy [61], are adopted in Experiment IV for comparison with the proposed OPIO algorithm.

The number of individuals and the number of iterations are both set as 20 for all the optimization algorithms. The optimization algorithms are used to optimize the four parameters of the ESN, as presented in Section II. The input samples of the ESN are 20 000 original patches, and the output patches are the corresponding deteriorated patches. The sizes of the original patches and the output patches are 17 × 17 and 9 × 9, respectively.
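The fitness evaluation that each optimizer repeats over its 20 individuals and 20 iterations can be sketched as follows: build a reservoir from the four candidate parameters, fit a readout, and score it by the training NRMSE, so that a lower fitness means better parameters. This is a minimal stand-in, not the authors' ESN: the static treatment of patches, the ridge-regression readout, the fixed number of reservoir updates, and the function name are our assumptions.

```python
import numpy as np

def esn_nrmse(X, Y, ndr=98, sr=0.78, sd=0.89, is_=0.95, seed=0):
    """Fitness of one candidate: train a minimal ESN readout on (X, Y)
    and return the NRMSE. The four tuned parameters mirror the paper:
    ndr (reservoir size), sr (spectral radius), sd (connection density),
    and is_ (input scaling)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W_in = is_ * (rng.random((ndr, n_in)) * 2 - 1)   # scaled input weights
    W = rng.random((ndr, ndr)) * 2 - 1
    W[rng.random((ndr, ndr)) > sd] = 0.0             # keep a fraction sd of links
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    if radius > 0:
        W *= sr / radius                             # rescale spectral radius to sr
    # Drive the reservoir a few steps per (static) sample, keep the last state.
    states = np.tanh(X @ W_in.T)
    for _ in range(3):
        states = np.tanh(X @ W_in.T + states @ W.T)
    # Ridge-regression readout on the collected states.
    reg = 1e-6 * np.eye(ndr)
    W_out = np.linalg.solve(states.T @ states + reg, states.T @ Y)
    err = Y - states @ W_out
    return np.sqrt(np.mean(err ** 2) / np.var(Y))
```

An optimizer would call this function once per individual per iteration, keeping the parameter vector with the smallest returned value, which matches the "smallest NRMSE" criterion reported in Figs. 11–13.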

Iterative curves of all eight optimization algorithms are provided in Figs. 11–13, where ESNs are trained to

TABLE IX

PSNRs OF THE SIX METHODS IN EXPERIMENT III

TABLE X

SNRs OF THE SIX METHODS IN EXPERIMENT III

restore images with different sorts of deterioration. The trained OPIO-ESN model corresponding to Fig. 11 is used to restore blurred images in Scenario 1 of Experiment I. Comparative results presented in Fig. 11 show that the optimal fitness value obtained by OPIO is lower than that obtained by PIO. The results indicate that the orthogonal design helps



Fig. 10. Average NMSEs of the restored benchmark images with different levels of AWG noise in Experiment III. The horizontal axis represents the names of the tested images, and the vertical axis gives the NMSEs obtained by each method.

Fig. 11. Iterative curves of OPIO in comparison with other evolutionary optimization methods. The evolutionary optimization methods are all used to obtain the optimal parameters for the ESN. The training patches are blurred with the first PSF.

to provide a wide diversity of individuals for the proposed OPIO algorithm. Moreover, OPIO converges faster than the other seven optimization methods.

The trained OPIO-ESN model corresponding to Fig. 12 is used to restore deteriorated images in Scenario 1 of Experiment II. The OPIO algorithm achieves the smallest NRMSE within ten iterations. DE achieves the same NRMSE as OPIO at the 17th iteration. After 20 iterations, the fitness values of the other methods are higher than those of OPIO and DE. Comparing the results shown in Figs. 11 and 12, it can be seen that the NRMSEs increase when noise is added to the blurred patches. This indicates that the performance of the ESN worsens when there is additive noise in the blurred images, which is consistent with the results obtained in the previous experiments.

In Fig. 13, AWG noise with σ = 30 is added to the patches for ESN training. The results shown in Fig. 13 indicate that OPIO converges faster than the other algorithms. The NRMSEs shown in Fig. 13 are larger than those in the former two experiments, which is consistent with the denoising results. The comparative results given in Figs. 11–13 show the advantage of the proposed OPIO algorithm in that it achieves the smallest NRMSEs within the fewest iterations.

Fig. 12. Iterative curves of OPIO in comparison with other evolutionary optimization methods. The evolutionary optimization methods are all used to obtain the optimal parameters for the ESN. The training patches are blurred and noisy; the corresponding PSF and noise variance σ² are given in the first scenario.

Fig. 13. Iterative curves of OPIO in comparison with other evolutionary optimization methods. The evolutionary optimization methods are all used to obtain the optimal parameters for the ESN. The output patches contain AWG noise (σ = 30).

VI. CONCLUSION

In this paper, the ESN is optimized by the OPIO algorithm, a newly proposed optimization algorithm with an orthogonal design strategy introduced in the initialization process. The proposed OPIO-ESN method is utilized to solve the image restoration problem.

The restoration performance of OPIO-ESN is compared with that of several existing methods for different types of deterioration according to several quantitative criteria. The comparative results show that OPIO-ESN can deliver the desired restoration performance for images with different sorts and levels of blur and/or noise. Moreover, OPIO-ESN can be trained well at a relatively low time cost. The trained model is effective for all images with the same deterioration factor. Comparative experiments of OPIO and seven other evolutionary optimization methods are conducted. The experimental results verify the advantages of OPIO in achieving the global optimum with faster convergence.

It is worth noting that although only gray images are tested in this paper, the proposed method can be extended to color image restoration. This is because color images can be considered as a combination of three separate color channels.
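The channel-wise extension mentioned above can be sketched in a few lines: a grayscale restorer is applied to each color channel independently and the results are restacked. The function names are ours; any single-channel restorer can be passed in.

```python
import numpy as np

def restore_color(image, restore_gray):
    """Lift a grayscale restorer to color images by applying it to each
    channel (last axis) independently and stacking the results."""
    return np.stack(
        [restore_gray(image[..., c]) for c in range(image.shape[-1])],
        axis=-1,
    )
```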



Furthermore, the performance of OPIO-ESN worsens when there is additive noise in blurred images; thus, our future work will focus on improving the robustness of OPIO-ESN.

REFERENCES

[1] V. Agarwal, A. V. Gribok, and M. A. Abidi, “Image restoration using L1 norm penalty function,” Inverse Problems Sci. Eng., vol. 15, no. 8, pp. 785–809, Dec. 2007.

[2] Y. Xia and M. S. Kamel, “Novel cooperative neural fusion algorithms for image restoration and image fusion,” IEEE Trans. Image Process., vol. 16, no. 2, pp. 367–381, Feb. 2007.

[3] M. Makitalo and A. Foi, “Optimal inversion of the generalized Anscombe transformation for Poisson–Gaussian noise,” IEEE Trans. Image Process., vol. 22, no. 1, pp. 91–103, Jan. 2013.

[4] J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process., vol. 16, no. 12, pp. 2992–3004, Dec. 2007.

[5] Y.-W. Wen and R. H. Chan, “Parameter selection for total-variation-based image restoration using discrepancy principle,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1770–1781, Apr. 2012.

[6] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM J. Imag. Sci., vol. 1, no. 3, pp. 248–272, Aug. 2008.

[7] J. Zhang, D. Zhao, and W. Gao, “Group-based sparse representation for image restoration,” IEEE Trans. Image Process., vol. 23, no. 8, pp. 3336–3351, Aug. 2014.

[8] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, Aug. 2007.

[9] A. Chambolle, R. A. De Vore, N.-Y. Lee, and B. J. Lucier, “Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage,” IEEE Trans. Image Process., vol. 7, no. 3, pp. 319–335, Mar. 1998.

[10] L. He, A. Marquina, and S. J. Osher, “Blind deconvolution using TV regularization and Bregman iteration,” Int. J. Imag. Syst. Technol., vol. 15, no. 1, pp. 74–83, Jul. 2005.

[11] H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: Can plain neural networks compete with BM3D?” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Providence, RI, USA, Jun. 2012, pp. 2392–2399.

[12] X. Le and J. Wang, “Robust pole assignment for synthesizing feedback control systems using recurrent neural networks,” IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 2, pp. 383–393, Feb. 2014.

[13] Z. Guo, Q. Liu, and J. Wang, “A one-layer recurrent neural network for pseudoconvex optimization subject to linear equality constraints,” IEEE Trans. Neural Netw., vol. 22, no. 12, pp. 1892–1900, Dec. 2011.

[14] M. J. Pérez-Ilzarbe, “Convergence analysis of a discrete-time recurrent neural network to perform quadratic real optimization with bound constraints,” IEEE Trans. Neural Netw., vol. 9, no. 6, pp. 1344–1351, Nov. 1998.

[15] Q. Liu and J. Wang, “A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 5, pp. 812–824, May 2013.

[16] Z. Yan and J. Wang, “Model predictive control of nonlinear systems with unmodeled dynamics based on feedforward and recurrent neural networks,” IEEE Trans. Ind. Informat., vol. 8, no. 4, pp. 746–756, Nov. 2012.

[17] Y. Xia, B. Jelfs, M. M. Van Hulle, J. C. Principe, and D. P. Mandic, “An augmented echo state network for nonlinear adaptive filtering of complex noncircular signals,” IEEE Trans. Neural Netw., vol. 22, no. 1, pp. 74–83, Jan. 2011.

[18] A. Balavoine, J. Romberg, and C. J. Rozell, “Convergence and rate analysis of neural networks for sparse approximation,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 9, pp. 1377–1389, Sep. 2012.

[19] S. Haykin and L. Li, “Nonlinear adaptive prediction of nonstationary signals,” IEEE Trans. Signal Process., vol. 43, no. 2, pp. 526–535, Feb. 1995.

[20] D. P. Mandic and J. A. Chambers, “Toward an optimal PRNN-based nonlinear predictor,” IEEE Trans. Neural Netw., vol. 10, no. 6, pp. 1435–1442, Nov. 1999.

[21] J. J. Hopfield and D. W. Tank, “‘Neural’ computation of decisions in optimization problems,” Biol. Cybern., vol. 52, no. 3, pp. 141–152, Jul. 1985.

[22] D. Tank and J. J. Hopfield, “Simple ‘neural’ optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit,” IEEE Trans. Circuits Syst., vol. 33, no. 5, pp. 533–541, May 1986.

[23] W. Zhang and Q. Liu, “A one-layer discrete-time projection neural network for support vector classification,” in Proc. IEEE Int. Joint Conf. Neural Netw., Beijing, China, Jul. 2014, pp. 3143–3148.

[24] M. Adachi and K. Aihara, “Associative dynamics in a chaotic neural network,” Neural Netw., vol. 10, no. 1, pp. 83–98, Jan. 1997.

[25] M. Ardalani-Farsa and S. Zolfaghari, “Chaotic time series prediction with residual analysis method using hybrid Elman–NARX neural networks,” Neurocomputing, vol. 73, nos. 13–15, pp. 2540–2553, Aug. 2010.

[26] H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science, vol. 304, no. 5667, pp. 78–80, Apr. 2004.

[27] M. C. Ozturk, D. Xu, and J. C. Príncipe, “Analysis and design of echo state networks,” Neural Comput., vol. 19, no. 1, pp. 111–138, Jan. 2007.

[28] M. D. Skowronski and J. G. Harris, “Automatic speech recognition using a predictive echo state network classifier,” Neural Netw., vol. 20, no. 3, pp. 414–423, Apr. 2007.

[29] Z. Shi and M. Han, “Support vector echo-state machine for chaotic time-series prediction,” IEEE Trans. Neural Netw., vol. 18, no. 2, pp. 359–372, Mar. 2007.

[30] M. J. Embrechts, L. A. Alexandre, and J. D. Linton, “Reservoir computing for static pattern recognition,” in Proc. 17th Eur. Symp. Artif. Neural Netw., Amsterdam, The Netherlands, 2009, pp. 245–250.

[31] H. Jaeger, M. Lukoševicius, D. Popovici, and U. Siewert, “Optimization and applications of echo state networks with leaky-integrator neurons,” Neural Netw., vol. 20, no. 3, pp. 335–352, Apr. 2007.

[32] H. Duan and P. Qiao, “Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning,” Int. J. Intell. Comput. Cybern., vol. 7, no. 1, pp. 24–37, Mar. 2014.

[33] C. Li and H. Duan, “Target detection approach for UAVs via improved pigeon-inspired optimization and edge potential function,” Aerosp. Sci. Technol., vol. 39, pp. 352–360, Dec. 2014.

[34] S. Zhang and H. Duan, “Gaussian pigeon-inspired optimization approach to orbital spacecraft formation reconfiguration,” Chin. J. Aeronautics, vol. 28, no. 1, pp. 200–205, Feb. 2015.

[35] J. E. Beasley and P. C. Chu, “A genetic algorithm for the set covering problem,” Eur. J. Oper. Res., vol. 94, no. 2, pp. 392–404, Oct. 1996.

[36] H.-B. Duan and C.-F. Xu, “A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems,” Int. J. Neural Syst., vol. 20, no. 1, pp. 39–50, Feb. 2010.

[37] H. Duan, Y. Deng, X. Wang, and C. Xu, “Small and dim target detection via lateral inhibition filtering and artificial bee colony based selective visual attention,” PLoS One, vol. 8, no. 8, p. e72035, Aug. 2013.

[38] H. B. Duan, Y. X. Yu, and Z. Y. Zhao, “Parameters identification of UCAV flight control system based on predator-prey particle swarm optimization,” Sci. China Inf. Sci., vol. 56, no. 1, pp. 1–12, Jan. 2013.

[39] Y. Shi, “Brain storm optimization algorithm,” in Proc. 2nd Int. Conf. Swarm Intell., Chongqing, China, 2011, pp. 303–309.

[40] H. Duan, S. Li, and Y. Shi, “Predator–prey brain storm optimization for DC brushless motor,” IEEE Trans. Magn., vol. 49, no. 10, pp. 5336–5340, Oct. 2013.

[41] D. Simon, “Biogeography-based optimization,” IEEE Trans. Evol. Comput., vol. 12, no. 6, pp. 702–713, Dec. 2008.

[42] X. Wang and H. Duan, “Predator-prey biogeography-based optimization for bio-inspired visual attention,” Int. J. Comput. Intell. Syst., vol. 6, no. 6, pp. 1151–1162, Jul. 2013.

[43] X. H. Wang and H. B. Duan, “Biologically adaptive robust mean shift algorithm with Cauchy predator-prey BBO and space variant resolution for unmanned helicopter formation,” Sci. China Inf. Sci., vol. 57, no. 11, pp. 1–13, Nov. 2014.

[44] X. Wang and H. Duan, “A hybrid biogeography-based optimization algorithm for job shop scheduling problem,” Comput. Ind. Eng., vol. 73, pp. 96–114, Jul. 2014.

[45] S.-Y. Ho, H.-S. Lin, W.-H. Liauh, and S.-J. Ho, “OPSO: Orthogonal particle swarm optimization and its application to task assignment problems,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 38, no. 2, pp. 288–298, Mar. 2008.

[46] Y.-W. Leung and Y. Wang, “An orthogonal genetic algorithm with quantization for global numerical optimization,” IEEE Trans. Evol. Comput., vol. 5, no. 1, pp. 41–53, Feb. 2001.

[47] T. Guilford, S. Roberts, D. Biro, and I. Rezek, “Positional entropy during pigeon homing II: Navigational interpretation of Bayesian latent state models,” J. Theoretical Biol., vol. 227, no. 1, pp. 25–38, Mar. 2004.



[48] A. Whiten, “Operant study of sun altitude and pigeon navigation,” Nature, vol. 237, no. 5355, pp. 405–406, Jun. 1972.

[49] T. Judd, K. Ehinger, F. Durand, and A. Torralba, “Learning to predict where humans look,” in Proc. IEEE 12th Int. Conf. Comput. Vis., Kyoto, Japan, Sep./Oct. 2009, pp. 2106–2113.

[50] D. Wallach and B. Goffinet, “Mean squared error of prediction as a criterion for evaluating and comparing system models,” Ecol. Model., vol. 44, nos. 3–4, pp. 299–306, Jan. 1989.

[51] A. A. Poli and M. C. Cirillo, “On the use of the normalized mean square error in evaluating dispersion model performance,” Atmos. Environ., A, General Topics, vol. 27, no. 15, pp. 2427–2434, Oct. 1993.

[52] D. I. Hoult and R. E. Richards, “The signal-to-noise ratio of the nuclear magnetic resonance experiment,” J. Magn. Reson., vol. 24, no. 1, pp. 71–85, Oct. 1976.

[53] Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81–84, Mar. 2002.

[54] A. Danielyan, V. Katkovnik, and K. Egiazarian, “BM3D frames and variational image deblurring,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1715–1728, Apr. 2012.

[55] R. N. Neelamani, H. Choi, and R. Baraniuk, “ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems,” IEEE Trans. Signal Process., vol. 52, no. 2, pp. 418–433, Feb. 2004.

[56] V. Katkovnik, A. Foi, K. Egiazarian, and J. Astola, “Directional varying scale approximations for anisotropic signal processing,” in Proc. 12th Eur. Signal Process. Conf., Vienna, Austria, Sep. 2004, pp. 101–104.

[57] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “BM3D image denoising with shape-adaptive principal component analysis,” in Proc. Signal Process. Adapt. Sparse Struct. Represent. (SPARS), Saint-Malo, France, 2009, pp. 1–6.

[58] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2, San Diego, CA, USA, Jun. 2005, pp. 60–65.

[59] W. Khatib and P. J. Fleming, “The stud GA: A mini revolution?” in Parallel Problem Solving From Nature, A. E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel, Eds. New York, NY, USA: Springer-Verlag, 1998.

[60] R. Storn and K. Price, “Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces,” J. Global Optim., vol. 11, no. 4, pp. 341–359, Dec. 1997.

[61] E. Mezura-Montes and C. A. Coello Coello, “A simple multimembered evolution strategy to solve constrained optimization problems,” IEEE Trans. Evol. Comput., vol. 9, no. 1, pp. 1–17, Feb. 2005.

Haibin Duan (M’07–SM’08) received the Ph.D. degree from the Nanjing University of Aeronautics and Astronautics, Nanjing, China, in 2005.

He was a Technician with the AVIC Aviation Motor Control System Institute, Wuxi, China, from 1996 to 2000, an Engineer with the Shenyang Aircraft Design Research Institute, AVIC, Shenyang, China, in 2006, an Academic Visitor with the National University of Singapore, Singapore, in 2007, and a Senior Visiting Scholar with The University of Suwon, Hwaseong, Korea, in 2011.

He is currently a Full Professor with the School of Automation Science and Electrical Engineering, Beihang University, Beijing, China, where he is also the Head of the Bioinspired Autonomous Flight Systems Research Group. He has authored or co-authored over 70 publications and three monographs. His current research interests include bioinspired computing, biological computer vision, and multiple UAVs autonomous formation control.

Prof. Duan was a recipient of the National Science Fund for Distinguished Young Scholars of China, the 16th MAO Yi-Sheng Beijing Youth Science and Technology Award, and the Sixth National Outstanding Scientific and Technological Worker of China in 2014. He was also a recipient of the 19th National Youth Five-Four Medal Award in 2015, the 13th China Youth Science and Technology Award, and the 12th Youth Science and Technology Award of the Chinese Association of Aeronautics and Astronautics in 2013. He is also enrolled in the Top-Notch Young Talents Program of China, the Program for New Century Excellent Talents in University of China, and the Beijing NOVA Program. He is the Editor-in-Chief of the International Journal of Intelligent Computing and Cybernetics.

Xiaohua Wang was born in Hebei, China, in 1991. She received the B.S. degree in automation control from the University of Science and Technology Beijing, Beijing, China, in 2013. She is currently pursuing the Ph.D. degree with the State Key Laboratory of Virtual Reality Technology and Systems, School of Automation Science and Electrical Engineering, Beihang University, Beijing.

She is a member of the Bioinspired Autonomous Flight Systems Research Group with Beihang University. Her current research interests include bioinspired computation and computer vision.