Neural Optimization of Evolutionary Algorithm Strategy Parameters
Hiral Patel
Outline
Why optimize parameters of an EA?
Why use neural networks?
What has been done so far in this field?
Experimental Model
Preliminary Results and Conclusion
Questions
Why optimize parameters of an EA?
Faster convergence
Better overall results
Avoid premature convergence
Why use neural networks?
Ability to learn
Adaptability
Pattern recognition
Faster than using another EA
What has been done so far in this field?
Machine Learning primarily used to optimize ES and EP
Optimized mutation operators
Little has been done to optimize GA parameters
Experimental Model Outline
Neural Network Basics
Hebbian Learning
Parameters of the Genetic Algorithm to be optimized
Neural Network Inputs
Neural Network Basics
[Single-neuron diagram: the vector input signal x(k) ∈ R^(n×1) is weighted by the synaptic weights w_q1(k), …, w_qn(k) and offset by the bias b_q(k) to form the activation v_q(k); the sigmoid activation function f(·) produces the neuron response (output) y_q(k); the error e(k) between y_q(k) and the desired neuron response d_q(k), together with the derivative of the activation function g(·) = f′(·), drives the weight update algorithm.]
Adapted from: Ham, F. M., Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw-Hill, NY, 2001
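To make the diagram concrete, here is a minimal Python sketch of the single-neuron model it depicts; the function and variable names are mine, not from the presentation:

```python
import numpy as np

def f(v):
    """Sigmoid activation function f(.)."""
    return 1.0 / (1.0 + np.exp(-v))

def g(v):
    """Derivative of the activation function, g(.) = f'(.)."""
    s = f(v)
    return s * (1.0 - s)

def neuron_step(w, b, x, d):
    """Forward pass of the single neuron plus its error signal."""
    v = w @ x + b      # activation v_q(k) = w_q(k)^T x(k) + b_q(k)
    y = f(v)           # neuron response (output) y_q(k)
    e = d - y          # error e(k) against the desired response d_q(k)
    return v, y, e

# Example use with arbitrary values:
rng = np.random.default_rng(0)
x, w = rng.random(4), rng.random(4)
v, y, e = neuron_step(w, b=0.1, x=x, d=1.0)
```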
Hebbian Learning
Unsupervised learning
Time-dependent
Learning signal and Forgetting factor
Hebb Learning for single neuron
[Single-neuron Hebbian learning diagram: inputs x_0, x_1, …, x_n with weights w_0, w_1, …, w_n form the activation v, which passes through f(v) to give the output y. The standard Hebbian learning rule uses the learning signal l = y = f(v), so each weight is updated in proportion to the product of its input and the neuron's output.]
Adapted from: Ham, F. M., Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw-Hill, NY, 2001
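A minimal sketch of one Hebbian update for a single neuron, assuming a sigmoid f and modeling the forgetting factor as a simple weight-decay term; mu and gamma are illustrative names, not values from the experiments:

```python
import numpy as np

def hebbian_step(w, x, mu=0.01, gamma=0.001):
    """One Hebbian update: mu is the learning rate, gamma the forgetting factor."""
    v = w @ x                          # activation v = w^T x
    y = 1.0 / (1.0 + np.exp(-v))       # learning signal l = y = f(v)
    return w + mu * y * x - gamma * w  # Hebbian term plus forgetting (decay)
```

The decay term keeps the weights from growing without bound, which is the usual role of a forgetting factor in unsupervised Hebbian learning.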
Parameters of the Genetic Algorithm to be optimized
Crossover Probability
Crossover Cell Divider
Cell Crossover Probability
Mutation Probability
Mutation Cell Divider
Cell Mutation Probability
Bit Mutation Probability
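For illustration, the seven strategy parameters above could be grouped into one structure for the network to adjust each generation; the field names below are assumptions based on the slide, not the author's code:

```python
from dataclasses import dataclass

@dataclass
class GAStrategyParams:
    crossover_prob: float          # Crossover Probability
    crossover_cell_divider: float  # Crossover Cell Divider
    cell_crossover_prob: float     # Cell Crossover Probability
    mutation_prob: float           # Mutation Probability
    mutation_cell_divider: float   # Mutation Cell Divider
    cell_mutation_prob: float      # Cell Mutation Probability
    bit_mutation_prob: float       # Bit Mutation Probability
```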
Neural Network Inputs
Current Parameter Values
Variance
Mean
Max fitness
Average bit changes for crossover
Constant parameters of the GA
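A sketch of how these quantities might be assembled into the network's input vector each generation; the function name and ordering are assumptions:

```python
import numpy as np

def nn_inputs(param_values, fitnesses, avg_bit_changes, ga_constants):
    """Concatenate the slide's quantities into one input vector."""
    stats = np.array([np.var(fitnesses),   # variance of population fitness
                      np.mean(fitnesses),  # mean fitness
                      np.max(fitnesses),   # max fitness
                      avg_bit_changes])    # average bit changes for crossover
    return np.concatenate([np.asarray(param_values),  # current parameter values
                           stats,
                           np.asarray(ga_constants)]) # constant GA parameters
```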
Preliminary Results
Tests run with Knapsack problem with dataset 3, pop. size 800, rep. size 1600
Learning signal and forgetting factor are not yet well-tuned enough to suggest better performance with the NN
Output for 1600 generations
[Plot: Fitness, Mean, Variance, CCD, and MCD plotted against generation (x-axis 0–2000); y-axis from −100 to 500.]
Probabilities for 1600 generations
[Plot: the probabilities CP, CCP, MP, CMP, and BMP plotted against generation (x-axis 0–2000); y-axis from 0 to 1.2.]
Conclusion
It may be possible to get better performance out of a Neural Optimized EA as long as the (unsupervised) Neural Network is able to adapt to the changes quickly and to recognize local minima.
Possible Future Work
Use an ES to optimize parameters, use a SOM to do feature extraction of the optimized parameter values, use the SOM output as codebook vectors for an LVQ network to classify the output of the original ES, and use the classifications to perform supervised training of a Levenberg-Marquardt backpropagation network to form a rule set.
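Purely as an outline of that proposed pipeline, here is a toy sketch in which every stage is a trivial placeholder standing in for the real ES, SOM, and LVQ components (the Levenberg-Marquardt backpropagation stage is left as a comment); none of this is the author's implementation:

```python
import numpy as np

def es_optimize(n_runs, dim, rng):
    """Placeholder for the ES: returns 'optimized' parameter vectors."""
    return rng.random((n_runs, dim))

def som_codebook(data, n_codes, rng):
    """Placeholder for SOM feature extraction: pick codebook vectors."""
    idx = rng.choice(len(data), n_codes, replace=False)
    return data[idx]

def lvq_classify(codebook, data):
    """Placeholder for LVQ: label each sample by its nearest codebook vector."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
params = es_optimize(32, 7, rng)      # ES optimizes the strategy parameters
codes = som_codebook(params, 4, rng)  # SOM output as codebook vectors
labels = lvq_classify(codes, params)  # classify the original ES output
# labels would then supervise LM-backprop training to form the rule set
```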