
Swathi J.N* et al. / International Journal of Pharmacy & Technology

IJPT | Sep-2016 | Vol. 8 | Issue No. 3 | 4567-4590

ISSN: 0975-766X

CODEN: IJPTFI

Available through Online Review Article

www.ijptonline.com

A SURVEY ON NATURE INSPIRED METAHEURISTIC TECHNIQUES FOR TRAINING FEEDFORWARD NEURAL NETWORKS

Swathi J.N*,

School of Computing Science and Engineering, VIT University, Vellore-14.

Email: [email protected]

Received on 17-07-2016; Accepted on 15-08-2016

Abstract

Among several machine learning algorithms, Feed Forward Neural Networks are one of the most widely used techniques for pattern classification. Generally, to improve classification accuracy, we optimize the parameters (weights and biases) of the neural network. The optimization aims to minimize the mean square error (MSE) calculated from the actual output produced by the feed forward network and the desired output. The back-propagation (BP) training algorithm is the most prominent optimization approach in the supervised learning strategy. Recently, several nature inspired metaheuristic techniques have been widely used for training neural networks. These techniques can be broadly categorized into Swarm-based, Bio-inspired, Physics-Chemistry based and other categories. In this paper, we review the steady improvements made in training neural networks using nature inspired metaheuristic techniques for various domains including medical, manufacturing, business, scientific, etc.

Keywords: Artificial Neural Network (ANN); Back Propagation Algorithm (BPA); Mean Square Error (MSE); Multilayer Feed Forward Neural Network (MLFNN); Pattern Classification.

1. Introduction

Artificial Neural Networks (ANNs) are biologically inspired methods that process information in much the same way as the neurons in the brain. An ANN consists of small processing units known as Artificial Neurons, which can be trained to perform complex calculations. ANNs have several characteristics such as adaptability, the capability of learning by examples, generalization, function approximation, optimization, pattern matching and associative memory [1-2]. The architecture of the neural network and the training algorithm used largely contribute to the success of ANNs for pattern classification. Feed-forward NNs (FNNs) have an input layer of source nodes and an output layer of neurons. In


addition to these two layers, FNNs generally have one or more hidden layers, which extract important features embedded in the input data. A sample FNN with two hidden layers is shown in Figure 1. In an FNN, each node sends a signal to the nodes of the next layer, and each signal is multiplied by a separate weight value. The weighted inputs are summed and passed through a limiting function, which scales the output to a fixed range of values. The output of the limiting function is then broadcast to all the nodes in the next layer. The output of the $i$-th node is obtained using Equation (1):

$$y_i = f_i\left(\sum_{j=1}^{n} w_{ij} x_j + b_i\right) \qquad (1)$$

where $y_i$ is the output of the node, $x_j$ is the $j$-th input to the node, $w_{ij}$ is the connection weight between the node and input $x_j$, $b_i$ is the threshold (or bias) of the node, and $f_i$ is the node transfer function. Usually, the node transfer function is a linear function, a sigmoid function, a Gaussian function, etc. Here, we assume the logarithmic sigmoid transfer function (Equation 2) at the hidden and output layer neurons:

$$y = f(net) = \frac{1}{1 + e^{-net}} \qquad (2)$$

The optimization goal is to minimize the mean square error (MSE), given in Equation (3), by optimizing the neural network parameters (weights and biases):

$$E(w(t)) = \frac{1}{N}\sum_{j=1}^{N}\sum_{k=1}^{K}\left(d_k - o_k\right)^2 \qquad (3)$$

where $E(w(t))$ is the error at the $t$-th iteration, $w(t)$ are the connection weights at the $t$-th iteration, $d_k$ and $o_k$ represent the desired and actual values of the $k$-th output node, $K$ is the number of output nodes and $N$ is the number of patterns.
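For concreteness, the computations in Equations (1)-(3) can be sketched in a few lines of Python. This is an illustrative sketch only: the layer sizes, random weights and sample data below are assumptions for demonstration, not values taken from the survey.

```python
import numpy as np

def logsig(net):
    # Logarithmic sigmoid transfer function, Equation (2).
    return 1.0 / (1.0 + np.exp(-net))

def forward(x, W1, b1, W2, b2):
    # Forward pass of a one-hidden-layer FNN. Each node computes
    # y_i = f(sum_j w_ij * x_j + b_i), Equation (1), with the
    # log-sigmoid applied at the hidden and output layers.
    h = logsig(W1 @ x + b1)       # hidden layer outputs
    return logsig(W2 @ h + b2)    # output layer outputs

def mse(patterns, targets, W1, b1, W2, b2):
    # Mean square error over N patterns and K outputs, Equation (3).
    N = len(patterns)
    total = sum(np.sum((d - forward(x, W1, b1, W2, b2)) ** 2)
                for x, d in zip(patterns, targets))
    return total / N

# Illustrative dimensions: 4 inputs, 5 hidden nodes, 3 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
X = [rng.normal(size=4) for _ in range(10)]
D = [rng.uniform(size=3) for _ in range(10)]
print(mse(X, D, W1, b1, W2, b2))
```

A metaheuristic trainer treats the concatenation of all weights and biases as a single candidate solution and uses this MSE as the fitness function to be minimized.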


Figure 1: Feed Forward Networks with hidden layers.

Back propagation using gradient descent is the most widely used neural network training method [3-9] for optimizing the neural network parameters in the supervised learning strategy. In recent years, many improved learning algorithms have been developed that aim to remove the shortcomings of gradient descent based systems. The Stuttgart Neural Network Simulator (SNNS) [10], developed in the recent past, uses many different algorithms including Error Back Propagation [11], Resilient Error Back Propagation [12], Backpercolation, Delta-bar-Delta and Cascade Correlation [13]. All these algorithms are derivatives of steepest gradient search; hence ANN training is relatively slow. To obtain fast and efficient training, second order learning algorithms have been developed. The most effective of these is the Levenberg-Marquardt (LM) algorithm [14, 15], a derivative of Newton's method. It is a fairly complex algorithm, since both the gradient and the Jacobian matrix are calculated. The LM algorithm was developed only for layer-by-layer ANN topologies, which are far from optimal. LM is ranked as one of the most efficient training algorithms for small and medium sized problems. It is a good combination of Newton's method and steepest descent [16]: it borrows speed from Newton's method and the convergence capability of steepest descent. It is best suited for training neural networks whose performance index is the Mean Squared Error (MSE) [17], but it still fails to escape local minima [16, 19].
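The trade-off between the two parent methods is visible in the standard LM weight update, stated here for reference (a textbook formulation, not an equation reproduced from this survey):

$$\Delta w = -\left(J^{T} J + \mu I\right)^{-1} J^{T} e$$

where $J$ is the Jacobian of the network error vector $e$ with respect to the weights and $\mu$ is an adaptive damping factor: a large $\mu$ pushes the step towards steepest descent, while a small $\mu$ recovers the Gauss-Newton (Newton-like) step.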

In order to cope with the local minimum problem, many global optimization techniques have been adopted for the training of NNs. Most of these techniques draw their inspiration from nature: evolutionary algorithms [20], genetic algorithms [21-23], ant colony optimization [24-25], particle swarm optimization [26-27], differential evolution [28-29] and the artificial bee colony algorithm [30]. The harmony search (HS) algorithm, which derives from the improvisation process of musicians rather than from a biological or physical process, has also been adopted for the training of NNs. Kattan et al. [31] introduced a variant of the improved harmony search algorithm to train NNs for binary classification.

In this paper, we review the various nature inspired metaheuristic techniques for training feed forward neural networks. The algorithms are categorized as Swarm based, Bio inspired, Physics and chemistry based, and other algorithms. In this survey, we consider nine swarm intelligence based algorithms, namely ant colony optimization, particle swarm optimization, the fish swarm algorithm, artificial bee colony, bacterial foraging, bat, cat swarm, cuckoo search and the firefly algorithm. Under the category of bio inspired algorithms, we consider three algorithms, namely biogeography based optimization, differential evolution and invasive weed optimization. For physics and chemistry based metaheuristic techniques, we consider three algorithms, namely central force optimization, harmony search and simulated annealing. Among other algorithms, we consider grammatical evolution and the imperialist competitive algorithm. For hybrid techniques, we explore combinations of a few of the above mentioned algorithms with Glowworm Swarm, Brain Storm, Gravitational Search and Accelerated PSO.

The rest of the paper is organized as follows: In Section 2, we review the existing literature on nature inspired techniques for training neural networks. In Section 3, we tabulate the characteristics of the various nature inspired algorithms considered in our study. In Section 4, we give concluding remarks with possible research directions, followed by references.

2. Literature Review

A. Research Done In Swarm Intelligence Based Algorithms For Training Neural Network

Li and Chung [32] proposed a new Back-Propagation Neural Network (BPN) training algorithm optimized using Ant Colony Optimization (ACO) to obtain the optimal connection weights of the BPN. Several novel applications were introduced for FNN training using an ant colony optimization algorithm for continuous optimization [24-25]. Sivagaminathan and Ramakrishnan [33] suggested a hybrid approach using ACO and NNs for feature subset selection. Ramesh et al. [34] proposed an ANN based cost tolerance model, which is then optimized using ACO to obtain the optimum combination of tolerances for minimum manufacturing cost.

Van den Bergh and Engelbrecht [35] proposed a method to use Particle Swarm Optimization (PSO) for NNs in a cooperative configuration. Mendes et al. [36] proposed an application for FNN training using particle swarms. Al-Kazemi et al. [37] proposed training that uses a multi-phase PSO for FNNs. Gudise and Venayagamoorthy [26] made a comparison between BP and PSO for training neural networks. Juang [38] proposed a hybrid of the Genetic Algorithm (GA) and PSO for recurrent NNs. Van den Bergh and Engelbrecht [39] suggested a change to the traditional PSO algorithm, called the cooperative particle swarm optimizer, employing cooperative behavior to greatly increase the performance of the original algorithm. Meissner et al. [40] proposed an optimized PSO for training NNs. Chau [41] proposed a novel application to predict water levels in the Shing Mun River of Hong Kong with different lead times on the basis of the upstream gauging stations or stage/time history at the specific station.
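To make the recipe shared by these population-based trainers concrete, the minimal sketch below encodes all weights and biases of the small FNN described in Section 1 as a flat vector and searches over such vectors with a standard inertia-weight PSO loop, using the MSE of Equation (3) as the fitness. It is an illustrative sketch only: the hyperparameters (swarm size, inertia, acceleration coefficients) and network dimensions are assumed values, not settings reported in the cited works.

```python
import numpy as np

def mse_of(w, X, D, sizes=(4, 5, 3)):
    # Decode a flat weight vector into a one-hidden-layer FNN and return its MSE.
    n_in, n_hid, n_out = sizes
    i = 0
    W1 = w[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_out * n_hid].reshape(n_out, n_hid); i += n_out * n_hid
    b2 = w[i:i + n_out]
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))   # log-sigmoid, Equation (2)
    total = sum(np.sum((d - sig(W2 @ sig(W1 @ x + b1) + b2)) ** 2)
                for x, d in zip(X, D))
    return total / len(X)

def pso_train(X, D, dim, n_particles=30, iters=200,
              inertia=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimal inertia-weight PSO over FNN weight vectors (illustrative settings).
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))   # particles = candidate weight vectors
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([mse_of(p, X, D) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()       # best weight vector found so far
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse_of(p, X, D) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# A 4-5-3 network has 5*4 + 5 + 3*5 + 3 = 43 free parameters.
rng = np.random.default_rng(1)
X = [rng.normal(size=4) for _ in range(10)]
D = [rng.uniform(size=3) for _ in range(10)]
best_w, best_mse = pso_train(X, D, dim=43)
print(best_mse)
```

The same skeleton accommodates most of the algorithms surveyed here: only the position-update rule changes, while the encoding of the network and the MSE fitness stay the same.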

Zhang et al. [42] proposed a hybrid approach using PSO and BP to train the weights of feedforward neural networks. Wang et al. [43] proposed an improved artificial fish swarm algorithm and its use in feed-forward neural networks. A hybrid of the artificial fish swarm algorithm and particle swarm optimization was proposed by Li et al. [44] for feedforward neural network training. Shen et al. [45] suggested the application of radial basis function neural networks optimized by the artificial fish swarm algorithm to forecasting stock indices. Tsai [46] proposed several improvements of the FSA, including reformulating the FSA with a particle swarm optimization formulation, integrating communication behaviour into the FSA, and creating formulas for the major FSA parameters.

Karaboga et al. proposed an Artificial Bee Colony (ABC) optimization algorithm for training feed-forward neural networks [47, 49]. Pham et al. [48] proposed an application to identify defects in wood using neural networks optimized with the bees algorithm. Zhu and Kwong [50] proposed an enhanced ABC algorithm called Gbest-Guided ABC (GABC). Ozturk and Karaboga [51] proposed a hybrid algorithm combining ABC and the Levenberg-Marquardt (LM) algorithm to train artificial neural networks (ANNs). Hsieh et al. [52] proposed an integrated system where wavelet transforms and a recurrent neural network (RNN) are trained based on the ABC algorithm (ABC-RNN) for stock price forecasting. Rashidi et al. [53] proposed a novel application for ANNs with ABC. Akay and Karaboga [54] applied the ABC algorithm to large-scale problems of engineering design optimization.

Ulagammai et al. [55] proposed artificial and wavelet neural network optimization using the Bacterial Foraging Algorithm (BFA) for load forecasting. Majhi and Panda [56] proposed a BFA algorithm for nonlinear dynamic system identification. Cho et al. [57] introduced a parameter optimization method for extreme learning machines using BFA. Zhang et al. [58] proposed a BFA for training NNs for short-term load forecasting. Al-Hadi et al. [59] proposed a Bacterial Foraging Optimization Algorithm for neural network learning enhancement. Khan and Sahai [60] compared two gradient descent algorithms with three population-based heuristic techniques, namely the Bat Algorithm, the Genetic Algorithm and Particle Swarm Optimization, and showed the superiority of the Bat Algorithm with respect to time, performance and quality of solutions.

Yusiong [61] proposed to use the Cat Swarm Optimization (CSO) algorithm, which mimics the behavior of cats, as the training algorithm and the Optimal Brain Damage (OBD) method as the eliminating algorithm, optimizing the connection weights and the ANN structure simultaneously. Nawi et al. [62] proposed to use the cuckoo search (CS) technique, which is based on the cuckoo bird's behavior, to train back propagation neural networks; the results were compared with ABC-BP and other hybrid variants. Brajevic and Tuba [63] used the Firefly algorithm to train feed-forward neural networks (FNNs) for classification purposes and compared the results with ABC and GA.

B. Research Done In Bio Inspired (Not SI-Based) Algorithms For Training Neural Network

Ovreiu and Simon [64] proposed Biogeography-Based Optimization (BBO) of neuro-fuzzy system parameters and applied it to the diagnosis of cardiac disease. Mirjalili et al. [65] proposed BBO for training multi-layer perceptron NNs. Wang et al. [66] introduced a novel fruit classification problem to which they applied ABC and BBO.

Ilonen et al. [28] proposed the Differential Evolution (DE) optimization technique for the global optimization of FNNs. Magoulas et al. [67] applied online NN learning with differential evolution to colonoscopy diagnosis. Pavlidis et al. [68] proposed a parallel differential evolution algorithm to improve the computational time of training NNs. Slowik and Bialko [29] introduced a novel application of DE with ANNs. Chauhan et al. [69] proposed DE for training wavelet neural networks, termed DEWNN. Giri et al. [70] applied the Invasive Weed Optimization (IWO) algorithm, inspired by the ecological process of weed colonization and distribution. Club et al. [71] proposed to solve pixel-based potato classification by combining IWO and ANNs. Ahmed and Amin [72] designed two evolutionary algorithms, an Invasive Weed Optimization (IWO) based power system stabilizer (PSS) and a particle swarm optimization (PSO) based power system stabilizer, for multi-machine power systems to compare their tuning performance. Zaharis et al. [73] applied a variant of IWO to an application related to antenna array beamforming. Safari et al. [74] used ANNs with IWO to forecast electricity prices. Their ANN model uses the traditional back propagation technique; however, the number of hidden neurons, the learning rate and the momentum constant are optimally determined using the IWO method.

C. Research Done In Physics And Chemistry Based Algorithms For Training Neural Network

Green et al. [75] first applied the Central Force Optimization (CFO) algorithm to train a basic neural network representing the logical XOR function. The work was then extended to train two different neural networks to properly classify members of the Iris data set. Similarities and differences between CFO and Particle Swarm Optimization were also investigated in the areas of algorithm design, computational complexity, and common underlying principles. Chao et al. [76] formulated the classical multi-criterion optimization problem and reviewed the most successful evolutionary algorithms for it. Kattan et al. [31] presented a novel technique using the Harmony Search (HS) algorithm for the supervised training of FNNs. Lee and Yoon [77] proposed a new methodology using the harmony search (HS) algorithm and neural networks (NNs) for concrete mix proportioning. Hasançebi et al. [78] presented an adaptive harmony search algorithm for solving structural optimization problems. Wong and Guo [79] introduced a hybrid intelligent (HI) model, a combination of a data pre-processing component and an HI forecaster, to tackle the medium-term fashion sales forecasting problem.

Kulluk et al. [80, 83] addressed a novel application of the Self-adaptive Global Best Harmony Search (SGHS) algorithm for the supervised training of FNNs. Razfar et al. [81] researched the impact of tuning a harmony search-based neural network for predicting surface roughness. Zinati and Razfar [82] dealt with a modified harmony search (MHS) optimization algorithm coupled with modified harmony search-based neural networks (MHSNN).

Treadgold and Gedeon [84] examined combining gradient descent with the global optimization technique of Simulated Annealing (SA). SA, in the form of noise and weight decay, is added to resilient backpropagation (RPROP), a powerful gradient descent algorithm for training feedforward neural networks. Sexton et al. [85] presented a performance comparison between two well-known global search techniques, SA and GA. They also conducted a Monte Carlo study to test the appropriateness of these global search techniques for optimizing neural networks.

Yamazaki et al. [86] applied SA to optimizing neural network architectures and weights for a novel application. Da and Xiurun [87] presented a modified particle swarm optimization (PSO) with a simulated annealing technique and developed an improved PSO-based artificial neural network. Liao and Tsao [88] proposed a fuzzy neural network combined with a chaos-search genetic algorithm and simulated annealing for power-system load forecasting as a sample test. Pham and Karaboga [89] conducted experiments on intelligent optimization techniques, namely genetic algorithms, tabu search, simulated annealing and neural networks.

D. Research Done In Other Algorithms For Training Neural Network

Jacob and Rehder [90] presented a hierarchically structured system for the evolution of connectionist systems. Giles et al. [91] discussed fundamental limitations and inherent difficulties in using neural networks to process high-noise, small-sample-size signals, and introduced a new intelligent signal processing method that addresses these difficulties. The proposed method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. They applied the technique to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and found significant predictability in comprehensive experiments covering five different foreign exchange rates. Tsoulos et al. [92] presented a method based on grammatical evolution for the construction of artificial neural networks (ANNs). Delgado and Pegalajar [93] introduced a multi-objective evolutionary algorithm to decide the optimal size of recurrent neural networks. Motsinger et al. [94] compared the performance of grammatical evolution with that of a random search neural network strategy to better understand the advantages of these methods. Tsoulos et al. [95] introduced a new mechanism for neural network evolution that evolves the network topology along with the network parameters. Turner et al. [96] presented changes to a neural network algorithm to discover gene-gene interactions that influence human traits. De Mingo López [97] applied an Artificial Neural Network (ANN) trained with Particle Swarm Optimization (PSO) and grammatical evolution to the problem of channel equalization.

Abdechiri et al. [98] proposed a new method for training an Artificial Neural Network using the Chaotic Imperialist Competitive Algorithm (C-ICA). Ahmadi [99] proposed a model based on a feed-forward artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) to predict asphaltene precipitation. Berneti and Shahbazian [100] used an ANN with ICA to predict the oil flow rate of wells. Ahmadi et al. [101] presented a new method for oil flow rate prediction of wells based on ICA, ANN and fuzzy logic. Nia et al. [102] investigated the adsorption of reactive orange 12 (RO-12) by gold nanoparticles packed with activated carbon (Au-NP-AC) using ICA and ANN. Hajihassani et al. [103] used an ANN with ICA to predict the peak particle velocity (PPV) created as a result of quarry blasting. Duan and Huang [104] used ICA and ANN for planning the globally optimal path of an unmanned combat aerial vehicle (UCAV).

E. Research Done In Hybrid Meta-Heuristic Techniques For Back Propagation Neural Network Training

Fang et al. [105] proposed a hybrid combination of the artificial fish swarm algorithm and particle swarm optimization for feed forward neural network training, showing the hybrid method to be more effective than the single algorithms. Mirjalili et al. [106] proposed the hybrid PSOGSA technique and showed that PSOGSA outperforms both PSO and GSA for training FNNs in terms of convergence speed and avoiding local minima. It was also shown that an FNN trained with PSOGSA has better accuracy than one trained with GSA. Nawi et al. [107] proposed a hybrid technique, Accelerated Particle Swarm Optimization using Levenberg-Marquardt, to achieve a faster convergence rate and to avoid the local minima problem; the results were compared with the ANC and BP techniques. Cui et al. [108] proposed a hybrid technique where glowworm swarm optimization (GSO) is combined with a linearly decreasing inertia weight in the location update formula (LWGSO); then, to increase robustness, LWGSODE was formed by introducing differential evolution (DE) into LWGSO. The results of a statistical experiment show that the proposed LWGSODE approach has better performance than basic GSO in terms of solution accuracy, convergence speed and robustness, and it is applied to time series prediction. Cao et al. [109] proposed an Improved Brain Storm Optimization (BSO) algorithm combined with a differential evolution strategy using a new step size: a new step size control method is added to operator creation to drive the differential evolution strategy, which helps balance exploitation and exploration across the searching generations. Comparative experiments illustrate that the proposed algorithm has better performance than the original BSO.

3. Characteristics of Nature Inspired Metaheuristic Techniques.

| Algorithm | Basic principle | Solution representation | Evolutionary operators | Fitness | Selection process | Type of decision variables |
|---|---|---|---|---|---|---|
| Ant colony optimization [110] | Cooperative group of ants | Graph | None | Scaled objective value | Probabilistic, preservative | Mainly for discrete values |
| Artificial bee colony [111] | Collective knowledge of bees | Real-valued | None | Objective function value | Probabilistic, preservative | Both discrete and continuous |
| Bacterial foraging [112] | Foraging behavior of Escherichia coli bacteria | Real-valued nutrient values; bacteria die or migrate based on split/adapt | None | Objective function value | Deterministic | Mainly for continuous values |
| Bat [113] | Echolocation behavior of microbats | Real-valued | None | Objective function value | Metaheuristic (stochastic) | Discrete and continuous |
| Cat swarm [114] | Based on cat behavior | Real-valued | None | Objective function value | Metaheuristic (stochastic) | Mainly for discrete values |
| Cuckoo search [115] | Cunning breeding behavior of cuckoos (obligate brood parasitism) | Nests ranked from best to worst | None | Objective function value | Metaheuristic (stochastic) | Discrete and continuous |
| Firefly [116] | Flashing patterns of fireflies | Real-valued (fireflies ranked best to worst by attractiveness) | None | Objective function value | Metaheuristic (stochastic) | Both discrete and continuous; variants include a discrete-only approach (DFA) |
| Fish swarm algorithm [117] | Imitates fish behaviors such as preying, swarming, etc. | Real-valued | None | Objective function value | Deterministic | Position is a vector value, step is a discrete length, crowd factor takes values in (0, 1) |
| Particle swarm optimization [118] | Cooperative group of swarm intelligence | Real-valued | None | Objective function value | Deterministic, extinctive | Continuous and discrete values |
| Accelerated PSO [119-120] | Improved version of particle swarm optimization | Real-valued | None | Objective function value | Deterministic, extinctive | Continuous and discrete values |
| Biogeography based optimization [121] | Models of biogeography that describe speciation, migration and extinction of species | HSI of habitats is real-valued | Includes mutation | Habitat Suitability Index (HSI) | Probabilistic | Discrete and continuous |
| Differential evolution [122] | Survival of the fittest | Real-valued | Mutation and crossover | Objective function value | Deterministic, extinctive | Real values (extended to discrete as well) |
| Invasive weed optimization (IWO) [123] | Imitates the colonizing behavior of weeds in reproduction | Real-valued | None | Objective function value | Stochastic | Continuous and discrete |
| Central force optimization [124] | Gravitational kinematics | Probe positions (vector values) | None | Objective function value | Deterministic | Position and acceleration are vector values; objective function may be continuous or discontinuous |
| Harmony search [125] | Inspired by the fact that music aims for a perfect state of harmony | Real-valued (harmonic values like pitch, range, etc.) | None | Objective function value | Metaheuristic (stochastic) | Discrete and continuous values |
| Simulated annealing [126] | Annealing process during heat treatment of metals; Metropolis algorithm | Solution is a state with lower energy | None | Objective function value | Probabilistic | Best for discrete values; used for continuous values to a lower extent |
| Grammatical evolution [127] | Inspired by the biological process that separates genotype from phenotype | Expression obtained by mapping integers to expressions from a BNF grammar | Point mutation, one-point crossover | Objective function value | Stochastic | Works on binary strings |
| Imperialist competitive algorithm [128] | Mathematical model and computer simulation of the social evolution of humans | "Empires" visualized at each iteration | Assimilation, revolution | Objective function value | Metaheuristic | Originally designed for continuous values; variants for discrete values also developed |
| Glowworm swarm [129-130] | Behavior of glowworms changing luciferin emission intensity | Graphical representation | None | Objective function value | Probabilistic | Discrete as well as continuous |
| Brain storm [131] | Based on the brainstorming process of human beings | Solutions represented as ideas | Clustering, mutation, selection | Objective function value | Deterministic | Both discrete and continuous |
| Gravitational search [132] | Law of gravity | Solutions represented as objects with agents | None | Objective function value | Stochastic | Both discrete and continuous |

4. Conclusion

In this paper, we reviewed metaheuristic techniques for training neural networks. By applying feedforward neural networks, several real world problems related to pattern recognition, dynamic modeling and sensitivity analysis can be addressed. For this reason, many researchers have shown interest in applying neural networks to scientific applications, life and behavioural sciences, industrial applications, medical applications, agricultural applications, governmental applications, etc. From the literature survey conducted, it is clear that there is still scope for several research directions: (i) development of efficient mathematical models using novel nature inspired metaheuristic techniques; (ii) direct use of the existing metaheuristic techniques for several real world problems; (iii) exploring metaheuristic techniques for constrained neural network optimization problems; (iv) exploring metaheuristic techniques for multiobjective neural network training with and without constraints; (v) exploring nature inspired techniques for several other network architectures such as recurrent neural networks, self-organizing maps, etc.; (vi) exploring the applicability of other new metaheuristic techniques, such as the lions algorithm and water wave optimization, for neural network training and optimization; (vii) evaluating the advantages of the metaheuristic techniques under each of the categories, such as swarm based, bio-inspired, physics and chemistry based, evolutionary algorithms, etc.; (viii) exploring parallel execution of nature inspired metaheuristic techniques for neural network training; (ix) conducting experiments to study the advantages of hybrid metaheuristic techniques for optimizing neural network parameters. (x) Finally, developing fast and efficient algorithms for optimizing neural network parameters remains a challenging problem for researchers.

References

1. Dayhoff, J. E. (1990). Neural network architectures: an introduction. Van Nostrand Reinhold Co.
2. Mehrotra, K., Mohan, C. K., & Ranka, S. (1997). Elements of artificial neural networks. MIT Press.
3. Werbos, P. J. (1990). Backpropagation through time: what it does and how to do it. In Proc. of the IEEE, 78(10): 1550-1560.
4. Werbos, P. J. (1994). The roots of backpropagation. NY: John Wiley & Sons.
5. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1985). Learning internal representations by error propagation (No. ICS-8506). California Univ San Diego La Jolla Inst For Cognitive Science.
6. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536.
7. Wilamowski, B. (2002). Neural networks and fuzzy systems. The Microelectronic Handbook.
8. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1988). Learning representations by back-propagating errors. Cognitive Modeling, 5, 3.
9. Gupta, J. N., & Sexton, R. S. (1999). Comparing backpropagation with a genetic algorithm for neural network training. Omega, 27(6): 679-684.
10. Zell, A. (2002). SNNS Stuttgart Neural Network Simulator. http://www-ra.informatik.uni-tuebingen.de.
11. Fahlman, S. E. (1988). Faster-learning variations on back-propagation: An empirical study.
12. Riedmiller, M., & Braun, H. (1993). A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proc. of the IEEE International Conference on Neural Networks (pp. 586-591).
13. Fahlman, S. E., & Lebiere, C. (1989). The cascade-correlation learning architecture.
14. Hagan, M. T., & Menhaj, M. B. (1994). Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5(6), 989-993.
15. Wilamowski, B. M., Cotton, N., Hewlett, J., & Kaynak, O. (2007). Neural network trainer with second order learning algorithms. In Proc. of the 11th IEEE International Conference on Intelligent Engineering Systems (pp. 127-132).
16. Cao, X. P., Hu, C. H., Zheng, Z. Q., & Lv, Y. J. (2005). Fault prediction for inertial device based on LMBP neural network. Electronics Optics and Control, 12(6): 38-41.
17. Haykin, S. (2004). Neural Networks Principle.
18. Xue, Q., Yun, F., Zheng, C., Liu, Y., Wei, Y., Yao, Y., & Zhou, S. (2010). Improved LMBP algorithm in the analysis and application of simulation data. In Proc. of the IEEE International Conference on Computer Application and System Modeling (Vol. 6, pp. V6-545).
19. Yan, J., Cao, H., Wang, J., Liu, Y., & Zhao, H. (2009). Levenberg-Marquardt algorithm applied to forecast the ice conditions in Ningmeng Reach of the Yellow River. In Proc. of the Fifth IEEE International Conference on Natural Computation (Vol. 1, pp. 184-188).
20. Castellani, M., & Rowlands, H. (2009). Evolutionary artificial neural network design and training for wood veneer classification. Engineering Applications of Artificial Intelligence, 22(4), 732-741.
21. Kim, D., Kim, H., & Chung, D. (2005). A modified genetic algorithm for fast training neural networks. In Advances in Neural Networks - ISNN 2005 (pp. 660-665). Springer Berlin Heidelberg.
22. Montana, D. J., & Davis, L. (1989). Training feedforward neural networks using genetic algorithms. In IJCAI (Vol. 89, pp. 762-767).
23. Zhang, L., & Bai, Y. F. (2005). Genetic algorithm-trained radial basis function neural networks for modelling photovoltaic panels. Engineering Applications of Artificial Intelligence, 18(7): 833-844.
24. Blum, C., & Socha, K. (2005). Training feed-forward neural networks with ant colony optimization: An application to pattern classification. In Proc. of the Fifth IEEE International Conference on Hybrid Intelligent Systems (6 pp.).
25. Socha, K., & Blum, C. (2007). An ant colony optimization algorithm for continuous optimization: application to feed-forward neural network training. Neural Computing and Applications, 16(3): 235-247.
26. Gudise, V. G., & Venayagamoorthy, G. K. (2003). Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks. In Proc. of the Swarm Intelligence Symposium (pp. 110-117). IEEE.
27. Zamani, M., & Sadeghian, A. (2010). A variation of particle swarm optimization for training of artificial neural networks. INTECH Open Access Publisher.
28. Ilonen, J., Kamarainen, J. K., & Lampinen, J. (2003). Differential evolution training algorithm for feed-forward neural networks. Neural Processing Letters, 17(1), 93-105.
29. Slowik, A., & Bialko, M. (2008). Training of artificial neural networks using differential evolution algorithm. In Proc. of the IEEE Conference on Human System Interactions (pp. 60-65).
30. Karaboga, D., & Ozturk, C. (2009). Neural networks training by artificial bee colony algorithm on pattern classification. Neural Network World, 19(3), 279.
31. Kattan, A., Abdullah, R., & Salam, R. A. (2010). Harmony search based supervised training of artificial neural networks. In Proc. of the IEEE International Conference on Intelligent Systems, Modelling and Simulation (pp. 105-110).

32. Li, J. B., & Chung, Y. K. (2005). A novel back-propagation neural network training algorithm designed by an ant

colony optimization. In IEEE Transmission and Distribution Conference and Exhibition: Asia and Pacific (pp. 1-5).

Sivagaminathan, R. K., & Ramakrishnan, S. (2007). A hybrid approach for feature subset selection using neural

networks and ant colony optimization. Expert systems with applications, 33(1): 49-60.

33. Ramesh, R., Jerald, J., Page, T., &Arunachalam, S. (2009). Concurrent tolerance allocation using an artificial neural

network and continuous ant colony optimization. International Journal of Design Engineering, 2(1):1-25.

Page 16: ISSN: 0975-766X CODEN: IJPTFI Available through Online Review … · IJPT| Sep-2016 | Vol. 8 ... A SURVEY ON NATURE INSPIRED METAHEURISTIC TECHNIQUES FOR TRAINING FEEDFORWAD NEURAL

IJPT| Sep-2016 | Vol. 8 | Issue No.3 | 4567-4590 Page 4582

34. Van den Bergh, F., &Engelbrecht, A. P. (2000). Cooperative learning in neural networks using particle swarm

optimizers. South African Computer Journal, (26), p-84.

35. Mendes, R., Cortez, P., Rocha, M., & Neves, J. (2002). Particle swarms for feedforward neural network training.

Learning, 6(1).

36. Al-Kazemi, B., & Mohan, C. K. (2002). Training feedforward neural networks using multi-phase particle swarm

optimization. In Proc. of the 9th International Conference onNeural Information Processing, 2002. (Vol. 5, pp. 2615-

2619). IEEE.

37. Juang, C. F. (2004). A hybrid of genetic algorithm and particle swarm optimization for recurrent network design.

Systems, Man, and Cybernetics, Part B: IEEE Transactions on Cybernetics, 34(2): 997-1006.

38. Van den Bergh, F., &Engelbrecht, A. P. (2004). A cooperative approach to particle swarm optimization. IEEE

Transactions on Evolutionary Computation, 8(3): 225-239.

39. Meissner, M., Schmuker, M., & Schneider, G. (2006). Optimized Particle Swarm Optimization (OPSO) and its

application to artificial neural network training. BMC bioinformatics, 7(1), 125.

40. Chau, K. W. (2006). Particle swarm optimization training algorithm for ANNs in stage prediction of ShingMun

River. Journal of hydrology, 329(3): 363-367.

41. Zhang, J. R., Zhang, J., Lok, T. M., &Lyu, M. R. (2007). A hybrid particle swarm optimization–back-propagation

algorithm for feedforward neural network training. Applied Mathematics and Computation, 185(2), 1026-1037.

42. Wang, C. R., Zhou, C. L., & Ma, J. W. (2005, August). An improved artificial fish-swarm algorithm and its

application in feed-forward neural networks. In Proc. of IEEE International Conference on Machine Learning and

Cybernetics, (Vol. 5, pp. 2890-2894).

43. Li, H. C. S. W. J., & Li, Y. (2007). A hybrid of artificial fish swarm algorithm and particle swarm optimization for

feedforward neural network training. IEEE Advanced Intelligence system research.

44. Shen, W., Guo, X., Wu, C., & Wu, D. (2011). Forecasting stock indices using radial basis function neural networks

optimized by artificial fish swarm algorithm. Knowledge-Based Systems, 24(3): 378-385.

45. Tsai, H. C., & Lin, Y. H. (2011). Modification of the fish swarm algorithm with particle swarm optimization

formulation and communication behavior. Applied Soft Computing, 11(8): 5367-5374.

Page 17: ISSN: 0975-766X CODEN: IJPTFI Available through Online Review … · IJPT| Sep-2016 | Vol. 8 ... A SURVEY ON NATURE INSPIRED METAHEURISTIC TECHNIQUES FOR TRAINING FEEDFORWAD NEURAL

Swathi J.N*et al. /International Journal Of Pharmacy & Technology

IJPT| Sep-2016 | Vol. 8 | Issue No.3 | 4567-4590 Page 4583

46. Karaboga, D., Akay, B., &Ozturk, C. (2007). Artificial bee colony (ABC) optimization algorithm for training feed-

forward neural networks. In Modeling decisions for artificial intelligence (pp. 318-329). Springer Berlin Heidelberg.

47. Pham, D. T., Soroka, A. J., Ghanbarzadeh, A., Koc, E., Otri, S., &Packianather, M. (2006). Optimising neural

networks for identification of wood defects using the bees algorithm. In IEEE International Conference on Industrial

Informatics, (pp. 1346-1351).

48. Karaboga, D., &Akay, B. (2007). Artificial bee colony (ABC) algorithm on training artificial neural networks. In

Proc. of 15th Signal Processing and Communications Applications, IEEE.

49. Zhu, G., &Kwong, S. (2010). Gbest-guided artificial bee colony algorithm for numerical function optimization.

Applied Mathematics and Computation, 217(7):3166-3173.

50. Ozturk, C., &Karaboga, D. (2011). Hybrid artificial bee colony algorithm for neural network training. In Proc. of

IEEE Congress on Evolutionary Computation (pp. 84-88).

51. Hsieh, T. J., Hsiao, H. F., &Yeh, W. C. (2011). Forecasting stock markets using wavelet transforms and recurrent

neural networks: An integrated system based on artificial bee colony algorithm. Applied soft computing, 11(2):2510-

2525.

52. Rashidi, M. M., Galanis, N., Nazari, F., Parsa, A. B., &Shamekhi, L. (2011). Parametric analysis and optimization of

regenerative Clausius and organic Rankine cycles with two feedwater heaters using artificial bees colony and

artificial neural network. Energy, 36(9): 5728-5740.

53. Akay, B., &Karaboga, D. (2012). Artificial bee colony algorithm for large-scale problems and engineering design

optimization. Journal of Intelligent Manufacturing, 23(4): 1001-1014.

54. Ulagammai, M., Venkatesh, P., Kannan, P. S., &Padhy, N. P. (2007). Application of bacterial foraging technique

trained artificial and wavelet neural networks in load forecasting. Neurocomputing, 70(16): 2659-2667.

55. Majhi, B., & Panda, G. (2007). Bacterial foraging based identification of nonlinear dynamic system. In Proc. of IEEE

Congress onEvolutionary Computation, (pp. 1636-1641).

56. Cho, J. H., Lee, D. J., & Chun, M. G. (2007). Parameter optimization of extreme learning machine using bacterial

foraging algorithm. In Proc. of the 8TH Symposium on Advanced Intelligent Systems (pp. 742-747).

Page 18: ISSN: 0975-766X CODEN: IJPTFI Available through Online Review … · IJPT| Sep-2016 | Vol. 8 ... A SURVEY ON NATURE INSPIRED METAHEURISTIC TECHNIQUES FOR TRAINING FEEDFORWAD NEURAL

IJPT| Sep-2016 | Vol. 8 | Issue No.3 | 4567-4590 Page 4584

57. Zhang, Y., Wu, L., & Wang, S. (2010). Bacterial foraging optimization based neural network for short-term load

forecasting. Journal of Computational Information Systems, 6(7): 2099-2105.

58. Al-Hadi, I. A. A., Hashim, S. Z. M., &Shamsuddin, S. M. H. (2011, December). Bacterial Foraging Optimization

Algorithm for neural network learning enhancement. In Proc. of 11th IEEE International Conference on Hybrid

Intelligent Systems (pp. 200-205).

59. Khan, K., &Sahai, A. (2012). A comparison of BA, GA, PSO, BP and LM for training feed forward neural networks

in e-learning context. International Journal of Intelligent Systems and Applications (IJISA), 4(7), 23.

60. Yusiong, J. P. T. (2012). Optimizing Artificial Neural Networks using Cat Swarm Optimization

Algorithm. International Journal of Intelligent Systems and Applications (IJISA), 5(1), 69.

61. Nawi, N. M., Khan, A., &Rehman, M. Z. (2013). A new back-propagation neural network optimized with cuckoo

search algorithm. In Computational Science and Its Applications–ICCSA 2013 (pp. 413-426). Springer Berlin

Heidelberg.

62. Brajevic, I., & Tuba, M. (2013). Training feed-forward neural networks using firefly algorithm. In Proc. of the 12th

International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases (AIKED’13)(pp. 156-

161).

63. Ovreiu, M., & Simon, D. (2010). Biogeography-based optimization of neuro-fuzzy system parameters for diagnosis

of cardiac disease. In Proc. of the 12th Annual Conference OnGenetic And Evolutionary Computation(pp. 1235-

1242). ACM.

64. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Let a biogeography-based optimizer train your multi-layer

perceptron. Information Sciences, 269, 188-209.

65. Wang, S., Zhang, Y., Ji, G., Yang, J., Wu, J., & Wei, L. (2015). Fruit classification by wavelet-entropy and

feedforward neural network trained by fitness-scaled chaotic ABC and biogeography-based optimization. Entropy,

17(8), 5711-5728.

66. Magoulas, G. D., Plagianakos, V. P., &Vrahatis, M. N. (2004). Neural network-based colonoscopic diagnosis using

on-line learning and differential evolution. Applied Soft Computing, 4(4): 369-379.

Page 19: ISSN: 0975-766X CODEN: IJPTFI Available through Online Review … · IJPT| Sep-2016 | Vol. 8 ... A SURVEY ON NATURE INSPIRED METAHEURISTIC TECHNIQUES FOR TRAINING FEEDFORWAD NEURAL

Swathi J.N*et al. /International Journal Of Pharmacy & Technology

IJPT| Sep-2016 | Vol. 8 | Issue No.3 | 4567-4590 Page 4585

67. Pavlidis, N. G., Tasoulis, D. K., Plagianakos, V. P., Nikiforidis, G., &Vrahatis, M. N. (2005). Spiking neural network

training using evolutionary algorithms. In Proc. of IEEE International Joint Conference onNeural Networks, 2005.

(Vol. 4, pp. 2190-2194).

68. Chauhan, N., Ravi, V., & Chandra, D. K. (2009). Differential evolution trained wavelet neural networks: Application

to bankruptcy prediction in banks. Expert Systems with Applications, 36(4): 7659-7665.

69. Giri, R., Chowdhury, A., Ghosh, A., Das, S., Abraham, A., &Snasel, Y. (2010). A modified invasive weed

optimization algorithm for training of feed-forward neural networks. In Proc. of IEEE International Conference

onSystems Man and Cybernetics (pp. 3166-3173).

70. Club, E. R., & Branch, M. (2012). A multi layer perceptron neural network trained by invasive weed optimization for

potato color image segmentation. Trends in Applied Sciences Research, 7(6), 445-455.

71. Ahmed, A., & Amin, B. R. (2012). Performance Comparison of Invasive Weed Optimization and Particle Swarm

Optimization Algorithm for the tuning of Power System Stabilizer in Multi-machine Power System. International

Journal of Computer Applications, 41(16).

72. Zaharis, Z. D., Skeberis, C., Xenos, T. D., Lazaridis, P., & Cosmas, J. (2013). Design of a novel antenna array

beamformer using neural networks trained by modified adaptive dispersion invasive weed optimization based data.,

IEEE Transactions on Broadcasting, 59(3): 455-460.

73. Safari, M. I. K. M., Dahlan, N. Y., Razali, N. S., & Rahman, T. K. A. (2013). Electricity Prices Forecasting Using

ANN Hybrid with Invasive Weed Optimization (IWO). In Proc. of 3rd IEEE International Conference on System

Engineering and Technology (pp. 275-280).

74. Green, R. C., Wang, L., &Alam, M. (2012). Training neural networks using central force optimization and particle

swarm optimization: insights and comparisons. Expert Systems with Applications, 39(1): 555-563.

75. Chao, M., Xin, S. Z., & San Min, L. (2014). Neural network ensembles based on copula methods and Distributed

Multiobjective Central Force Optimization algorithm. Engineering Applications of Artificial Intelligence, 32: 203-

212.

76. Lee, J. H., & Yoon, Y. S. (2009). Modified harmony search algorithm and neural networks for concrete mix

proportion design. Journal of Computing in Civil Engineering, 23(1): 57-61.

Page 20: ISSN: 0975-766X CODEN: IJPTFI Available through Online Review … · IJPT| Sep-2016 | Vol. 8 ... A SURVEY ON NATURE INSPIRED METAHEURISTIC TECHNIQUES FOR TRAINING FEEDFORWAD NEURAL

IJPT| Sep-2016 | Vol. 8 | Issue No.3 | 4567-4590 Page 4586

77. Hasançebi, O., Erdal, F., & Saka, M. P. (2009). Adaptive harmony search method for structural optimization. Journal

of Structural Engineering, 136(4): 419-431.

78. Wong, W. K., &Guo, Z. X. (2010). A hybrid intelligent model for medium-term sales forecasting in fashion retail

supply chains using extreme learning machine and harmony search algorithm. International Journal of Production

Economics, 128(2): 614-624.

79. Kulluk, S., Ozbakir, L., &Baykasoglu, A. (2011). Self-adaptive global best harmony search algorithm for training

neural networks. Procedia Computer Science, 3, 282-286.

80. Razfar, M. R., Zinati, R. F., &Haghshenas, M. (2011). Optimum surface roughness prediction in face milling by

using neural network and harmony search algorithm. The International Journal of Advanced Manufacturing

Technology, 52(5-8): 487-495.

81. Zinati, R. F., &Razfar, M. R. (2012). Constrained optimum surface roughness prediction in turning of X20Cr13 by

coupling novel modified harmony search-based neural network and modified harmony search algorithm. The

International Journal of Advanced Manufacturing Technology, 58(1-4): 93-107.

82. Kulluk, S., Ozbakir, L., &Baykasoglu, A. (2012). Training neural networks with harmony search algorithms for

classification problems. Engineering Applications of Artificial Intelligence, 25(1): 11-19.

83. Treadgold, N. K., &Gedeon, T. D. (1998). Simulated annealing and weight decay in adaptive learning: the

SARPROP algorithm. IEEE Transactions on Neural Networks, 9(4): 662-668.

84. Sexton, R. S., Dorsey, R. E., & Johnson, J. D. (1999). Optimization of neural networks: A comparative analysis of

the genetic algorithm and simulated annealing. European Journal of Operational Research, 114(3): 589-601.

85. Yamazaki, A., De Souto, M. C. P., &Ludermir, T. B. (2002, May). Optimization of neural network weights and

architectures for odor recognition using simulated annealing. In Proc. International Joint Conference on Neural

Networks (pp. 547-552).

86. Da, Y., &Xiurun, G. (2005). An improved PSO-based ANN with simulated annealing technique. Neurocomputing,

63: 527-533.

87. Liao, G. C., &Tsao, T. P. (2006). Application of a fuzzy neural network combined with a chaos genetic algorithm and

simulated annealing to short-term load forecasting. IEEE Transactions onEvolutionary Computation, 10(3): 330-340.

Page 21: ISSN: 0975-766X CODEN: IJPTFI Available through Online Review … · IJPT| Sep-2016 | Vol. 8 ... A SURVEY ON NATURE INSPIRED METAHEURISTIC TECHNIQUES FOR TRAINING FEEDFORWAD NEURAL

Swathi J.N*et al. /International Journal Of Pharmacy & Technology

IJPT| Sep-2016 | Vol. 8 | Issue No.3 | 4567-4590 Page 4587

88. Pham, D., &Karaboga, D. (2012). Intelligent optimisation techniques: genetic algorithms, tabu search, simulated

annealing and neural networks. Springer Science & Business Media.

89. Jacob, C., &Rehder, J. (1993). Evolution of neural net architectures by a hierarchical grammar-based genetic system.

In Artificial Neural Nets and Genetic Algorithms (pp. 72-79). Springer Vienna.

90. Giles, C. L., Lawrence, S., &Tsoi, A. C. (2001). Noisy time series prediction using recurrent neural networks and

grammatical inference. Machine learning, 44(1-2):161-183.

91. Tsoulos, I. G., Gavrilis, D., &Glavas, E. (2005, December). Neural network construction using grammatical

evolution. In Proc. of the Fifth IEEE International Symposium on Signal Processing and Information Technology,

2005. (pp. 827-831). IEEE.

92. Delgado, M., &Pegalajar, M. C. (2005). A multiobjective genetic algorithm for obtaining the optimal size of a

recurrent neural network for grammatical inference. Pattern Recognition, 38(9): 1444-1456.

93. Motsinger, A., Reif, D. M., Dudek, S. M., & Ritchie, M. D. (2006). Understanding the evolutionary process of

grammatical evolution neural networks for feature selection in genetic epidemiology. In Proc. of IEEE Symposium

onComputational Intelligence and Bioinformatics and Computational Biology, (pp. 1-8).

94. Tsoulos, I., Gavrilis, D., &Glavas, E. (2008). Neural network construction and training using grammatical evolution.

Neurocomputing, 72(1): 269-277.

95. Turner, S. D., Dudek, S. M., & Ritchie, M. D. (2010). ATHENA: A knowledge-based hybrid backpropagation-

grammatical evolution neural network algorithm for discovering epistasis among quantitative trait Loci. BioData

mining, 3(1): 5.

96. De Mingo López, L. F., Blas, N. G., &Arteta, A. (2012). The optimal combination: grammatical swarm, particle

swarm optimization and neural networks. Journal of Computational Science, 3(1):46-55.

97. Abdechiri, M., Faez, K., &Bahrami, H. (2010). Neural network learning based on chaotic imperialist competitive

algorithm. In Proc. of 2nd IEEE International Workshop onIntelligent Systems and Applications (pp. 1-5).

98. Ahmadi, M. A. (2011). Prediction of asphaltene precipitation using artificial neural network optimized by imperialist competitive algorithm. Journal of Petroleum Exploration and Production Technology, 1(2-4): 99-106.

99. Berneti, S. M., & Shahbazian, M. (2011). An imperialist competitive algorithm artificial neural network method to predict oil flow rate of the wells. International Journal of Computer Applications, 26(10): 47-50.

100. Ahmadi, M. A., Ebadi, M., Shokrollahi, A., & Majidi, S. M. J. (2013). Evolving artificial neural network and imperialist competitive algorithm for prediction oil flow rate of the reservoir. Applied Soft Computing, 13(2): 1085-1098.

101. Nia, R. H., Ghaedi, M., & Ghaedi, A. M. (2014). Modeling of reactive orange 12 (RO 12) adsorption onto gold nanoparticle-activated carbon using artificial neural network optimization based on an imperialist competitive algorithm. Journal of Molecular Liquids, 195: 219-229.

102. Hajihassani, M., Armaghani, D. J., Marto, A., & Mohamad, E. T. (2014). Ground vibration prediction in quarry blasting through an artificial neural network optimized by imperialist competitive algorithm. Bulletin of Engineering Geology and the Environment, 1-14.

103. Duan, H., & Huang, L. (2014). Imperialist competitive algorithm optimized artificial neural networks for UCAV global path planning. Neurocomputing, 125: 166-171.

104. Fang, L., Chen, P., & Liu, S. (2007). Particle swarm optimization with simulated annealing for TSP. In Proc. of the 6th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering and Data Bases (pp. 16-19).

105. Mirjalili, S., Hashim, S. Z. M., & Sardroudi, H. M. (2012). Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Applied Mathematics and Computation, 218(22): 11125-11137.

106. Nawi, N. M., Rehman, M. Z., Aziz, M. A., Herawan, T., & Abawajy, J. H. (2014). An Accelerated Particle Swarm Optimization Based Levenberg Marquardt Back Propagation Algorithm. In Neural Information Processing (pp. 245-253). Springer International Publishing.

107. Cui, H., Feng, J., Guo, J., & Wang, T. (2015). A novel single multiplicative neuron model trained by an improved glowworm swarm optimization algorithm for time series prediction. Knowledge-Based Systems, 88: 195-209.

108. Cao, Z., Hei, X., Wang, L., Shi, Y., & Rong, X. (2015). An Improved Brain Storm Optimization with Differential Evolution Strategy for Applications of ANNs. Mathematical Problems in Engineering, 2015.

109. Dorigo, M. (1992). Optimization, learning and natural algorithms. Ph.D. thesis, Politecnico di Milano, Italy.

110. Karaboga, D., & Basturk, B. (2007). A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 39(3): 459-471.

111. Passino, K. M. (2002). Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Systems Magazine, 22(3): 52-67.

112. Yang, X.-S. (2010). A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010) (pp. 65-74). Springer.

113. Chu, S.-C., Tsai, P.-W., & Pan, J.-S. (2006). Cat swarm optimization. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 4099 LNAI: 854-858.

114. Yang, X.-S., & Deb, S. (2009). Cuckoo search via Lévy flights. In Proc. of the World Congress on Nature & Biologically Inspired Computing (pp. 210-214). IEEE.

115. Yang, X.-S. (2010). Firefly algorithm, stochastic test functions and design optimisation. International Journal of Bio-Inspired Computation, 2(2): 78-84.

116. Li, X.-L., Shao, Z.-J., & Qian, J.-X. (2002). Optimizing method based on autonomous animats: fish-swarm algorithm. Xitong Gongcheng Lilun yu Shijian / System Engineering Theory and Practice, 22(11): 32.

117. Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proc. of the IEEE International Conference on Neural Networks, Vol. 4 (pp. 1942-1948).

118. Yang, X.-S. (2008). Nature-Inspired Metaheuristic Algorithms. Luniver Press, UK.

119. Yang, X.-S. (2010). Nature-Inspired Metaheuristic Algorithms (2nd ed.). Luniver Press.

120. Simon, D. (2008). Biogeography-based optimization. IEEE Transactions on Evolutionary Computation, 12(6): 702-713.

121. Storn, R., & Price, K. (1997). Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4): 341-359.

122. Mehrabian, A. R., & Lucas, C. (2006). A novel numerical optimization algorithm inspired from weed colonization. Ecological Informatics, 1(4): 355-366.

123. Formato, R. A. (2007). Central force optimization: A new metaheuristic with applications in applied electromagnetics. Progress In Electromagnetics Research, 77: 425-491.

124. Geem, Z. W., Kim, J. H., & Loganathan, G. V. (2001). A new heuristic optimization algorithm: harmony search. Simulation, 76(2): 60-68.

125. Kirkpatrick, S., Gelatt, C. D., Jr., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598): 671-680.

126. Ryan, C., Collins, J. J., & O'Neill, M. (1998). Grammatical evolution: Evolving programs for an arbitrary language. In Genetic Programming (pp. 83-96). Springer.

127. Atashpaz-Gargari, E., & Lucas, C. (2007). Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In Proc. of the IEEE Congress on Evolutionary Computation (pp. 4661-4667).

128. Krishnanand, K. N., & Ghose, D. (2005). Detection of multiple source locations using a glowworm metaphor with applications to collective robotics. In Proc. of the IEEE Swarm Intelligence Symposium (pp. 84-91).

129. Krishnanand, K. N., & Ghose, D. (2009). Glowworm swarm optimisation: a new method for optimising multi-modal functions. International Journal of Computational Intelligence Studies, 1(1): 93-119.

130. Shi, Y. (2011). An optimization algorithm based on brainstorming process. International Journal of Swarm Intelligence Research (IJSIR), 2(4): 35-62.

131. Rashedi, E., Nezamabadi-pour, H., & Saryazdi, S. (2009). GSA: a gravitational search algorithm. Information Sciences, 179(13): 2232-2248.

Corresponding author:

Dr. Swathi J. N.*,

Email: [email protected]