Research Article
Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming
Kai Yang¹ and Zhuhong Zhang²
¹College of Computer Science, Guizhou University, Guiyang 550025, China
²Department of Big Data Science and Engineering, College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China
Correspondence should be addressed to Zhuhong Zhang; sci.zhzhang@gzu.edu.cn
Received 24 December 2015; Accepted 23 February 2016
Academic Editor: Eduardo Rodríguez-Tello
Copyright © 2016 K. Yang and Z. Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments show that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.
1. Introduction
Many real-world engineering optimization problems, such as industrial control, project management, portfolio investment, and transportation logistics, usually include stochastic parameters or random variables. Generally, they can be solved by some existing intelligent optimization approaches with static sampling strategies (i.e., each candidate has the same sampling size) after being transformed into constrained expected value programming (CEVP), chance-constrained programming, or probabilistic optimization models. Although CEVP is a relatively simple topic in the context of stochastic programming, it is still a challenging one, as it is difficult to find feasible solutions, and the quality of the solution depends greatly on environmental disturbance. The main concern of solving CEVP involves two aspects: (i) when stochastic probability distributions are unknown, it becomes crucial to distinguish high-quality individuals from the current population in uncertain environments, and (ii) although static sampling strategies are a usual way to handle random factors, their expensive computational cost is inevitable, and hence adaptive sampling strategies with low computational cost are desired.
When stochastic characteristics are unknown, CEVP models are usually replaced by their sample average approximation models [1, 2], and thereafter some new or existing techniques can be used to find approximate solutions. Mathematically, several researchers [3–5] probed the relationship between CEVP models and their approximation models and acquired some valuable lower bound estimates on sample size, capable of being used to design adaptive sampling rules. On the other hand, intelligent optimization techniques have become popular for unconstrained expected value programming problems [6–8], in which some advanced sampling techniques, for example, adaptive sampling techniques and sample allocation schemes, can effectively suppress environmental influence on the process of solution search. Unfortunately, studies on general CEVP have rarely been reported in the literature because of expected value constraints. Even so, several researchers have made great efforts to investigate new or hybrid intelligent optimization approaches for this kind of uncertain programming problem. For example, B. Liu and Y.-K. Liu [9] proposed a hybrid intelligent approach to solve general fuzzy expected value models by combining evolutionary algorithms with neural network learning methods. Sun and Gao [10] suggested an improved differential evolutionary approach to solve an expected value programming problem depending on static sampling and flabby selection.

Hindawi Publishing Corporation, Scientific Programming, Volume 2016, Article ID 2148362, 9 pages, http://dx.doi.org/10.1155/2016/2148362
Whereas immune optimization, as another popular branch, has been well studied for static or dynamic optimization problems [11, 12], it still remains open for stochastic programming problems. Some comparative works between classical intelligent approaches and immune optimization algorithms for stochastic programming demonstrated that this branch is competitive. For example, Hsieh and You [13] proposed a two-phase immune optimization approach to solve the optimal reliability-redundancy allocation problem. Their numerical results on four benchmark problems showed that such an approach is superior to the compared algorithms. Zhao et al. [14] presented a hybrid immune optimization approach to deal with chance-constrained programming, in which two operators of double cloning and double mutation were adopted to accelerate the process of evolution.
In the present work, we theoretically study two lower bound estimates on sample size based on Hoeffding's inequalities [15, 16]. Afterwards, two efficient adaptive racing sampling approaches are designed to compute the empirical values of the stochastic objective and constraint functions. These, together with immune inspirations included in the clonal selection principle, are used to develop a microimmune optimization algorithm (μIOA) for handling general nonlinear and higher-dimensional constrained expected value programming problems. This approach is significantly different from existing immune optimization approaches. On the one hand, the two lower bound estimates are developed to control the sample sizes of random variables, while a local search approach is adopted to strengthen the ability of local exploitation; on the other hand, the two adaptive racing sampling methods are utilized to determine such sample sizes dynamically in order to compute the empirical values of the objective and constraint functions at each individual. Experimental results illustrate that μIOA is an alternative tool for higher-dimensional multimodal expected value programming problems.
2. Problem Statement and Preliminaries
Consider the following general single-objective nonlinear constrained expected value programming problem:

$$
\begin{aligned}
\min_{\mathbf{x}\in D}\quad & E[f(\mathbf{x},\boldsymbol{\xi})] \\
\text{s.t.}\quad & E[G_i(\mathbf{x},\boldsymbol{\xi})]\le 0, \quad 1\le i\le I, \\
& g_j(\mathbf{x})\le 0, \quad 1\le j\le J, \\
& h_k(\mathbf{x})=0, \quad 1\le k\le K,
\end{aligned}
\tag{EP}
$$
with bounded and closed domain $D$ in $R^p$, decision vector $\mathbf{x}$ in $D$, and random vector $\boldsymbol{\xi}$ in $R^q$, where $E[\cdot]$ is the operator of expectation; $f(\mathbf{x},\boldsymbol{\xi})$ and $G_i(\mathbf{x},\boldsymbol{\xi})$ are the stochastic objective and constraint functions, respectively, among which at least one is nonlinear and continuous in $\mathbf{x}$; $g_j(\mathbf{x})$ and $h_k(\mathbf{x})$ are the deterministic and continuous constraint functions. If a candidate solution satisfies all the above constraints, it is called a feasible solution, and an infeasible solution otherwise. Introduce the following constraint violation function to check whether candidate $\mathbf{x}$ is feasible:
$$
\Gamma(\mathbf{x}) = I^{-1}\sum_{i=1}^{I}\max\{E[G_i(\mathbf{x},\boldsymbol{\xi})],0\} + J^{-1}\sum_{j=1}^{J}\max\{g_j(\mathbf{x}),0\} + K^{-1}\sum_{k=1}^{K}\left|h_k(\mathbf{x})\right|. \tag{1}
$$
Obviously, $\mathbf{x}$ is feasible only when $\Gamma(\mathbf{x})=0$. If $\Gamma(\mathbf{x})<\Gamma(\mathbf{y})$, we prescribe that $\mathbf{x}$ is superior to $\mathbf{y}$. In order to solve CEVP, we transform the above problem into the following sample-dependent approximation model (SAM):
$$
\begin{aligned}
\min_{\mathbf{x}\in D}\quad & \frac{1}{n(\mathbf{x})}\sum_{l=1}^{n(\mathbf{x})} f(\mathbf{x},\boldsymbol{\xi}^l) \\
\text{s.t.}\quad & \frac{1}{m(\mathbf{x})}\sum_{r=1}^{m(\mathbf{x})} G_i(\mathbf{x},\boldsymbol{\xi}^r)\le 0, \quad 1\le i\le I, \\
& g_j(\mathbf{x})\le 0, \quad 1\le j\le J, \\
& h_k(\mathbf{x})=0, \quad 1\le k\le K,
\end{aligned}
\tag{2}
$$
where $n(\mathbf{x})$ and $m(\mathbf{x})$ are the sampling sizes of $\boldsymbol{\xi}$ at the point $\mathbf{x}$ for the stochastic objective and constraint functions, respectively, and $\boldsymbol{\xi}^l$ is the $l$th observation. It is known that the optimal solution of problem SAM can approach that of problem (EP) when $\varepsilon\to 0$ and $m(\mathbf{x}), n(\mathbf{x})\to\infty$, based on the law of large numbers [17]. We say that $\mathbf{x}$ is an empirically feasible solution for (EP) if the above constraints in SAM are satisfied.
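To make the sample-average estimates in SAM concrete, the following sketch computes the empirical means that stand in for $E[f(\mathbf{x},\boldsymbol{\xi})]$ and $E[G(\mathbf{x},\boldsymbol{\xi})]$. The two-dimensional functions `f` and `G`, the Gaussian noise, and the sampling sizes are illustrative assumptions, not the paper's benchmarks; (EP) itself presumes no particular distribution.

```python
import random

def sample_mean(h, x, n, seed=None):
    """Empirical mean (1/n) * sum_l h(x, xi^l) over n i.i.d. observations,
    as used in SAM. The noise xi ~ N(0, 1) is purely illustrative."""
    rng = random.Random(seed)
    return sum(h(x, rng.gauss(0.0, 1.0)) for _ in range(n)) / n

# Hypothetical 2-dimensional instance: objective f(x, xi) = x1^2 + x2^2 + xi
# and one stochastic constraint G(x, xi) = xi * x1 - 1.
f = lambda x, xi: x[0] ** 2 + x[1] ** 2 + xi
G = lambda x, xi: xi * x[0] - 1.0

x = (0.5, -0.5)
obj = sample_mean(f, x, n=2000, seed=1)   # tends to E[f(x, xi)] = 0.5
con = sample_mean(G, x, n=2000, seed=2)   # tends to E[G(x, xi)] = -1.0
print(obj, con)
```

By the law of large numbers cited above, both estimates concentrate around the true expectations as the sampling size grows.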
In the subsequent work, two adaptive sampling schemes will be designed to estimate the empirical objective and constraint values for each individual. We here cite the following conclusions.
Theorem 1 (Hoeffding's inequality; see [15, 16]). Let $X$ be a set and let $F(\cdot)$ be a probability distribution function on $X$; let $f_1,\ldots,f_n$ denote real-valued functions defined on $X$ with $f_j: X\to[a_j,b_j]$ for $j=1,\ldots,n$, where $a_j$ and $b_j$ are real numbers satisfying $a_j<b_j$. Let $x_1,\ldots,x_n$ be the samples of i.i.d. random variables $X_1,X_2,\ldots,X_n$ on $X$, respectively. Then the following inequality is true:

$$
\Pr\left(\left|\frac{1}{n}\sum_{j=1}^{n} f_j(x_j) - \frac{1}{n}\sum_{j=1}^{n}\int_{x\in X} f_j(x)\,dF(x)\right| \ge \varepsilon\right) \le e^{-2\varepsilon^2 n^2 / \sum_{j=1}^{n}(b_j-a_j)^2}. \tag{3}
$$
Corollary 2 (see [15, 16]). If $X_1,X_2,\ldots,X_n$ are i.i.d. random variables with mean $\mu$ and $a\le X_j\le b$, $1\le j\le n$, then

$$
\left|\bar{X}_n-\mu\right| \le K(\Delta,n,\delta) \equiv \Delta\sqrt{\frac{1}{2n}\ln\left(\frac{2}{\delta}\right)} \tag{4}
$$

with probability at least $1-\delta$, where $\bar{X}_n=(1/n)\sum_{j=1}^{n}x_j$ and $\Delta=b-a$; $x_j$ and $\delta$ denote the observation of $X_j$ and the significance level, respectively.
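The half-width $K(\Delta,n,\delta)$ of Corollary 2 is cheap to evaluate; the following sketch (with illustrative $\Delta$, $n$, and $\delta$) shows its $O(1/\sqrt{n})$ decay:

```python
import math

def hoeffding_halfwidth(delta_range, n, sig):
    """K(Delta, n, delta) = Delta * sqrt(ln(2/delta)/(2n)) from (4): the
    empirical mean of n samples confined to an interval of width Delta
    deviates from the true mean by more than this with probability <= delta."""
    return delta_range * math.sqrt(math.log(2.0 / sig) / (2.0 * n))

k200 = hoeffding_halfwidth(delta_range=1.0, n=200, sig=0.05)
k800 = hoeffding_halfwidth(delta_range=1.0, n=800, sig=0.05)
print(k200, k800)   # the half-width halves when n quadruples
```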
3. Racing Sampling Approaches
3.1. Expected Value Constraint Handling. Usually, when an intelligent optimization approach with static sampling is chosen to solve the above problem (EP), each individual has the same and sufficiently large sampling size, which necessarily causes high computational complexity. Therefore, in order to ensure that each individual in a given finite population $A$ has a rational sampling size, we in this subsection give a lower bound estimate to control the value of $m(\mathbf{x})$ with $\mathbf{x}\in A$, based on the sample average approximation model of the above problem. Define

$$
\begin{aligned}
X_\varepsilon &= \{\mathbf{x}\in A \mid E[G_i(\mathbf{x},\boldsymbol{\xi})]\le\varepsilon,\ 1\le i\le I\}, \\
X_m &= \left\{\mathbf{x}\in A \,\middle|\, \frac{1}{m}\sum_{j=1}^{m} G_i(\mathbf{x},\boldsymbol{\xi}^j)\le 0,\ 1\le i\le I\right\}.
\end{aligned}
\tag{5}
$$
We next give a lower bound estimate to justify that $X_m$ is a subset of $X_\varepsilon$ with probability $1-\delta$, for which the proof can be found in the Appendix.
Lemma 3. If there exist $a_i(\mathbf{x})$ and $b_i(\mathbf{x})$ such that $\Pr\{a_i(\mathbf{x})\le G_i(\mathbf{x},\boldsymbol{\xi})\le b_i(\mathbf{x}),\ 1\le i\le I\}=1$ with $\mathbf{x}\in A$, one has that $\Pr\{X_m\subseteq X_\varepsilon\}\ge 1-\delta$, provided that

$$
m \ge M_\delta \equiv \frac{\Lambda^2}{2\varepsilon^2}\ln\frac{|A|}{\delta}, \tag{6}
$$

where $\Lambda=\max\{b_i(\mathbf{x})-a_i(\mathbf{x}) \mid \mathbf{x}\in A,\ 1\le i\le I\}$ and $|A|$ denotes the size of $A$.
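The bound (6) is a one-line computation; the numerical values below (range bound $\Lambda$, tolerance $\varepsilon$, population size $|A|$, and significance level $\delta$) are purely illustrative:

```python
import math

def m_delta(lam, eps, size_a, sig):
    """Lower bound M_delta = (Lambda^2 / (2*eps^2)) * ln(|A| / delta) from (6)."""
    return lam ** 2 / (2.0 * eps ** 2) * math.log(size_a / sig)

# Illustrative values: Lambda = 4, eps = 0.1, |A| = 5, delta = 0.05.
m = m_delta(4.0, 0.1, 5, 0.05)
print(math.ceil(m))
```

Note that the dependence on $|A|$ is only logarithmic, so the bound grows slowly with the population size.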
In (6), $a_i(\mathbf{x})$ and $b_i(\mathbf{x})$ are decided by the bounds of the stochastic constraint functions at the point $\mathbf{x}$; $\Lambda$ is the maximal sampling difference computed from the observations of the stochastic constraints. We also observe that, once $\varepsilon$ and $\delta$ are defined, $M_\delta$ is determined by $|A|$. Additionally, high-quality individuals in $A$ should usually get large sampling sizes, and conversely, inferior ones can only get small sampling sizes. This means that different individuals will gain different sampling sizes. Based on this consideration and the idea of racing ranking, we next compute the empirical value $\hat{G}(\mathbf{x})$ of any expected value constraint function $G(\mathbf{x},\boldsymbol{\xi})$ at a given individual $\mathbf{x}$ in $A$. This is completed by the following racing-based constraint evaluation approach (RCEA).
Step 1. Input parameters: initial sampling size $m_0$, sampling amplitude $\lambda$, relaxation factor $\varepsilon$, significance level $\delta$, and maximal sampling size $M_\delta$.

Step 2. Set $m = m_0$, $s = m_0$, and $\lambda \leftarrow \ln(1+m_0)$; calculate the estimate $\hat{G}(\mathbf{x})$ through $m$ observations.

Step 3. Set $s \leftarrow \lambda s$.

Step 4. Create $s$ observations and update $\hat{G}(\mathbf{x})$; that is,

$$
\hat{G}(\mathbf{x}) \leftarrow \frac{m\hat{G}(\mathbf{x}) + \sum_{i=1}^{s} G(\mathbf{x},\boldsymbol{\xi}^i)}{m+s}. \tag{7}
$$

Step 5. Set $m \leftarrow m+s$; if $m \le M_\delta$ and $\hat{G}(\mathbf{x}) \le K(\Delta,m,\delta)$, then go to Step 3.

Step 6. Output $\hat{G}(\mathbf{x})$ as the estimated value of $G(\mathbf{x},\boldsymbol{\xi})$.
In the above formulation, $M_\delta$ and $K(\Delta,m,\delta)$ are used to decide when the above algorithm terminates. Once the above procedure is stopped, $\mathbf{x}$ obtains its sampling size $m(\mathbf{x})$; that is, $m(\mathbf{x})=m$. We note that $K(\Delta,m,\delta)$ is very small if $m$ is large. Thereby, we say that $\mathbf{x}$ is an empirically feasible solution if, on the precondition of $\hat{G}(\mathbf{x})\le 0$, the above deterministic constraints are satisfied. Further, RCEA indicates that an empirically feasible solution $\mathbf{x}$ can acquire a large sampling size, so that $\hat{G}(\mathbf{x})$ is close to the expected value of $G(\mathbf{x},\boldsymbol{\xi})$.
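Steps 1–6 of RCEA can be sketched as follows. The constraint function `G`, the $N(0,1)$ noise, and all parameter values are hypothetical placeholders, and the `ceil` rounding of the sample increment is an implementation choice the paper leaves unspecified:

```python
import math
import random

def hoeffding_halfwidth(delta_range, n, sig):
    # K(Delta, n, delta) from Corollary 2
    return delta_range * math.sqrt(math.log(2.0 / sig) / (2.0 * n))

def rcea(G, x, m0, M_delta, sig, delta_range, rng):
    """Racing-based constraint evaluation approach (RCEA), Steps 1-6.

    Sampling keeps racing while the budget M_delta is not exhausted and
    the running estimate of E[G(x, xi)] stays below K(Delta, m, delta);
    empirically infeasible points are abandoned early with a small m.
    """
    lam = math.log(1.0 + m0)                      # Step 2: sampling amplitude
    m, s = m0, m0
    g_hat = sum(G(x, rng.gauss(0.0, 1.0)) for _ in range(m)) / m
    while m <= M_delta and g_hat <= hoeffding_halfwidth(delta_range, m, sig):
        s = int(math.ceil(lam * s))               # Step 3: s <- lambda * s
        total = sum(G(x, rng.gauss(0.0, 1.0)) for _ in range(s))
        g_hat = (m * g_hat + total) / (m + s)     # Step 4: update via (7)
        m += s                                    # Step 5
    return g_hat, m                               # Step 6

# Hypothetical constraint G(x, xi) = x + xi at x = -0.5, so E[G] = -0.5 < 0;
# the point is empirically feasible and therefore earns a large sampling size.
rng = random.Random(0)
g_hat, m_used = rcea(lambda x, xi: x + xi, -0.5, m0=30, M_delta=5000,
                     sig=0.05, delta_range=4.0, rng=rng)
print(g_hat, m_used)
```

An empirically infeasible point would instead trip the $\hat{G}(\mathbf{x}) > K(\Delta,m,\delta)$ test after only a few rounds, which is exactly the early-stopping behavior the racing scheme is designed for.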
3.2. Objective Function Evaluation. Depending on the above RCEA and the deterministic constraints in problem (EP), the above population $A$ is divided into two subpopulations $B$ and $C$, where $B$ consists of the empirically feasible solutions in $A$. We investigate another lower bound estimate to control the value of $n(\mathbf{x})$ with $\mathbf{x}\in B$, relying upon the sample average approximation model of problem (EP). Afterwards, an approach is designed to calculate the empirical objective values of the empirically feasible solutions in $B$. To this point, introduce

$$
\begin{aligned}
S_\varepsilon &= \{\mathbf{x}\in B \mid E[f(\mathbf{x},\boldsymbol{\xi})]\le f^* + \varepsilon\}, \\
S_n &= \left\{\mathbf{x}\in B \,\middle|\, \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x},\boldsymbol{\xi}^k)\le \bar{f}_n\right\},
\end{aligned}
\tag{8}
$$
where $f^*$ and $\bar{f}_n$ stand for the minima of the theoretical and empirical objective values of individuals in $B$, respectively. The lower bound estimate is given below by identifying the approximation relation between $S_\varepsilon$ and $S_n$; the proof can be found in the Appendix.
Lemma 4. If there exist $c(\mathbf{x})$ and $d(\mathbf{x})$ such that $\Pr\{c(\mathbf{x})\le f(\mathbf{x},\boldsymbol{\xi})\le d(\mathbf{x})\}=1$ with $\mathbf{x}\in B$, then $\Pr\{S_n\subseteq S_\varepsilon\}\ge 1-\delta$, provided that

$$
n \ge n_{|B|} \equiv \frac{2\Gamma^2}{\varepsilon^2}\ln\frac{2|B|}{\delta}, \tag{9}
$$

where $\Gamma=\max\{d(\mathbf{x})-c(\mathbf{x}) \mid \mathbf{x}\in B\}$.
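As with (6), the bound (9) is immediate to evaluate; the values of the range bound $\Gamma$, tolerance $\varepsilon$, subpopulation size $|B|$, and significance level $\delta$ below are purely illustrative:

```python
import math

def n_lower_bound(gamma, eps, size_b, sig):
    """Lower bound n_|B| = (2*Gamma^2 / eps^2) * ln(2|B| / delta) from (9)."""
    return 2.0 * gamma ** 2 / eps ** 2 * math.log(2.0 * size_b / sig)

# Illustrative values: Gamma = 2, eps = 0.1, |B| = 5, delta = 0.05.
n = n_lower_bound(gamma=2.0, eps=0.1, size_b=5, sig=0.05)
print(math.ceil(n))
```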
Like the above constraint handling approach, we next calculate the empirical objective values of individuals in $B$ through the following racing-based objective evaluation approach (ROEA).
Step 1. Input parameters: $\varepsilon$ and $\delta$ mentioned above, initial sampling size $n_0$, and population $B$.

Step 2. Set $n \leftarrow n_0$, $s \leftarrow n_0$, $\lambda \leftarrow \ln(1+n_0)$, and $\Phi \leftarrow B$.

Step 3. Calculate the empirical objective average of $n$ observations for each individual $\mathbf{x}$ in $\Phi$, that is, $\bar{f}_n(\mathbf{x})$; write $f_{\min} = \min\{\bar{f}_n(\mathbf{x}): \mathbf{x}\in\Phi\}$ and $f_{\max} = \max\{\bar{f}_n(\mathbf{x}): \mathbf{x}\in\Phi\}$.

Step 4. Remove those elements in $\Phi$ satisfying $\bar{f}_n(\mathbf{x}) > f_{\min} + K(f_{\max}-f_{\min},n,\delta)$.

Step 5. Set $s \leftarrow \lambda s$.

Step 6. Update the empirical objective values for the elements in $\Phi$ through

$$
\bar{f}_n(\mathbf{x}) \leftarrow \frac{\bar{f}_n(\mathbf{x})\times n + \sum_{k=1}^{s} f(\mathbf{x},\boldsymbol{\xi}^k)}{n+s}, \quad \mathbf{x}\in\Phi. \tag{10}
$$

Step 7. Set $n \leftarrow n+s$.

Step 8. If $n < n_{|B|}$ and $\Phi \neq \emptyset$, then return to Step 3; otherwise, output all the empirical objective values of individuals in $B$.
Through the above algorithm, the individuals in $B$ acquire their respective empirical objective values with different sampling sizes. High-quality individuals get large sampling sizes, and hence their empirical objective values can approach their theoretical objective values.
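A minimal sketch of ROEA follows, under stated assumptions: a hypothetical objective `f` with $N(0,1)$ noise, illustrative parameters, and the range bound $\Gamma$ of Lemma 4 assumed known (`gamma`):

```python
import math
import random

def hoeffding_halfwidth(delta_range, n, sig):
    # K(Delta, n, delta) from Corollary 2
    return delta_range * math.sqrt(math.log(2.0 / sig) / (2.0 * n))

def roea(f, B, n0, sig, eps, gamma, rng):
    """Racing-based objective evaluation approach (ROEA), Steps 1-8.

    Clearly inferior individuals leave the race early and keep their
    current estimates; survivors accumulate the largest sample sizes.
    """
    n_B = math.ceil(2.0 * gamma ** 2 / eps ** 2 * math.log(2.0 * len(B) / sig))
    lam = math.log(1.0 + n0)                      # Step 2
    n, s = n0, n0
    est = {x: sum(f(x, rng.gauss(0.0, 1.0)) for _ in range(n)) / n for x in B}
    racing = set(B)                               # Phi <- B
    while n < n_B and racing:                     # Step 8
        f_min = min(est[x] for x in racing)       # Step 3
        f_max = max(est[x] for x in racing)
        racing = {x for x in racing               # Step 4: drop losers
                  if est[x] <= f_min + hoeffding_halfwidth(f_max - f_min, n, sig)}
        s = int(math.ceil(lam * s))               # Step 5
        for x in racing:                          # Step 6: update via (10)
            total = sum(f(x, rng.gauss(0.0, 1.0)) for _ in range(s))
            est[x] = (est[x] * n + total) / (n + s)
        n += s                                    # Step 7
    return est

# Hypothetical 1-dimensional objective f(x, xi) = x^2 + xi over a tiny B:
rng = random.Random(3)
est = roea(lambda x, xi: x * x + xi, B=(0.0, 1.0, 2.0), n0=20,
           sig=0.05, eps=0.1, gamma=2.0, rng=rng)
best = min(est, key=est.get)
print(best, est[best])
```

In this toy run the clearly worst individual ($x=2$) is expected to be eliminated on a small sample, while the minimizer keeps racing until the budget $n_{|B|}$ is reached.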
4. Algorithm Statement
The clonal selection principle explains how immune cells learn the pattern structures of invading pathogens. It includes many biological inspirations capable of being adopted to design μIOA, such as immune selection, cloning, and reproduction. Based on RCEA and ROEA above, as well as general immune inspirations, μIOA can be illustrated by Figure 1. We here view antigen Ag as problem SAM itself, while candidates from the design space $D$ are regarded as real-coded antibodies. Within a run period of μIOA by Figure 1, the current population is first divided into empirically feasible and infeasible antibody subpopulations after executing RCEA above. Second, the empirically feasible antibodies compute their empirical objective values through ROEA; they produce many more clones than empirically infeasible antibodies through proliferation. Afterwards, all the clones undergo mutation. If a parent is superior to its clones, it carries out local search; conversely, it is updated by its best clone. Based on Figure 1, μIOA can be formulated in detail below.
Step 1. Input parameters: population size $N$, maximal clonal size $C_{\max}$, sampling parameters $m_0$ and $n_0$, relaxation factor $\varepsilon$, significance level $\delta$, and maximal iteration number $G_{\max}$.

Step 2. Set $t \leftarrow 1$.

Step 3. Generate an initial population $A$ of $N$ random antibodies.

Step 4. Compute the empirical constraint violations of antibodies in $A$ through RCEA and (1), that is, $\Gamma(\mathbf{x})$ with $\mathbf{x}\in A$.

Step 5. Divide $A$ into empirically feasible subpopulation $B$ and infeasible subpopulation $C$.

Step 6. Calculate the empirical objective values of antibodies in $B$ through ROEA above.

Step 7. For each antibody $\mathbf{x}$ in $B$, we have the following.
Step 7.1. Proliferate a clonal set $\mathrm{Cl}(\mathbf{x})$ with size $C_{\max}$ (i.e., $\mathbf{x}$ multiplies into $C_{\max}$ offspring).

Step 7.2. Mutate all clones with mutation rate $p_m(\mathbf{x}) = 1/(\Gamma_{\max} + \ln(t+1))$ through the classical polynomial mutation, and thereafter produce a mutated clonal set $\mathrm{Cl}^*(\mathbf{x})$, where $\Gamma_{\max} = \max\{\Gamma(\mathbf{x}) \mid \mathbf{x}\in C\}$.

Step 7.3. Eliminate empirically infeasible clones in $\mathrm{Cl}^*(\mathbf{x})$ through RCEA.

Step 7.4. Calculate the empirical objective values of clones in $\mathrm{Cl}^*(\mathbf{x})$ through ROEA.

Step 7.5. If the best clone has a smaller empirical objective value than $\mathbf{x}$, it updates $\mathbf{x}$; otherwise, antibody $\mathbf{x}$, as an initial state, creates a better empirically feasible antibody to replace it by a local search approach [18] with ROEA and sampling size $m_0$.
Step 8. For each antibody $\mathbf{y}$ in $C$, we have the following.

Step 8.1. Antibody $\mathbf{y}$ creates a clonal population $C(\mathbf{y})$ with clonal size $\mathrm{Cl}(\mathbf{y})$; all clones in $C(\mathbf{y})$ are mutated with mutation rate $p_m(\mathbf{y})$ through the conventional nonuniform mutation, creating a mutated clonal population $C^*(\mathbf{y})$, where

$$
\mathrm{Cl}(\mathbf{y}) = \mathrm{round}\left(\frac{C_{\max}}{\Gamma(\mathbf{y})+1}\right), \qquad p_m(\mathbf{y}) = \frac{1}{\Gamma_{\max}-\Gamma(\mathbf{y})+1}. \tag{11}
$$
Step 8.2. Check whether there exist empirically feasible clones in $C^*(\mathbf{y})$ by RCEA and (1); if yes, the best clone with the smallest empirical objective value by ROEA replaces $\mathbf{y}$; conversely, the clone with the smallest constraint violation updates $\mathbf{y}$.
Step 9. Set $A \leftarrow B \cup C$ and $t \leftarrow t+1$.

Step 10. If $t < G_{\max}$, go to Step 4; conversely, output the best antibody, viewed as the optimal solution.
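The clonal sizes and mutation rates of Steps 7.2 and 8.1 follow directly from (11); the numerical values of $\Gamma_{\max}$ and $\Gamma(\mathbf{y})$ below are purely illustrative:

```python
import math

def feasible_mutation_rate(gamma_max, t):
    """Step 7.2: p_m(x) = 1/(Gamma_max + ln(t + 1)); the mutation rate of
    empirically feasible antibodies shrinks as the iteration t grows."""
    return 1.0 / (gamma_max + math.log(t + 1))

def infeasible_clone_size(c_max, gamma_y):
    """Equation (11): Cl(y) = round(C_max/(Gamma(y) + 1)); antibodies with
    larger constraint violation Gamma(y) receive fewer clones."""
    return round(c_max / (gamma_y + 1.0))

def infeasible_mutation_rate(gamma_max, gamma_y):
    """Equation (11): p_m(y) = 1/(Gamma_max - Gamma(y) + 1); the worst
    violator (Gamma(y) = Gamma_max) mutates with rate 1."""
    return 1.0 / (gamma_max - gamma_y + 1.0)

# Illustrative values: C_max = 2 as in Section 5; Gamma_max = 3 is made up.
print(feasible_mutation_rate(3.0, t=1))            # 1/(3 + ln 2) ~ 0.271
print(infeasible_clone_size(2, gamma_y=0.0))       # 2 clones
print(infeasible_clone_size(2, gamma_y=2.0))       # round(2/3) -> 1 clone
print(infeasible_mutation_rate(3.0, gamma_y=3.0))  # rate 1.0
```

The design intent visible in these formulas: near-feasible antibodies explore widely through many clones, while badly infeasible ones receive few clones but aggressive mutation.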
[Figure 1: The flowchart of μIOA. Stages shown: initial antibody population; empirical feasibility identification; for empirically feasible antibodies, objective evaluation, proliferation, mutation, comparison between parents and clones, and local search; for empirically infeasible antibodies, constraint violation evaluation, proliferation, mutation, and elitism clone replacement; then a new population and the termination test, with the best solution output on termination.]

As formulated above, the current population $A$ is split into two subpopulations after checking whether there are empirically feasible antibodies. Each subpopulation is updated through proliferation, mutation, and selection. Step 7 makes the empirically feasible antibodies search for better clones through proliferation and polynomial mutation. Once some antibody cannot produce a valuable clone, a reported local search algorithm is used to perform local evolution so as to enhance the quality of solution search. The purpose of Step 8 is to urge poor antibodies to create diverse clones through proliferation and nonuniform mutation.
Additionally, μIOA's computational complexity is decided by Steps 4, 6, and 7.2. Step 4 needs at most $N(IM_\delta + J + K + 1)$ evaluations to calculate the empirical constraint violations. In Step 6, we need to compute the empirical objective values of antibodies in $B$, which takes at most $Nn_N$ evaluations. Step 7.2 enforces mutation at most $Np \times C_{\max}$ times. Consequently, μIOA's computational complexity in the worst case can be given by

$$
O_c = O\left(N(IM_\delta + J + K) + Nn_N + NpC_{\max}\right) = O\left(N(IM_\delta + J + K + n_N + pC_{\max})\right). \tag{12}
$$
5. Experimental Analysis
Our experiments are executed on a personal computer (CPU 3 GHz, RAM 2 GB) with VC++ software. In order to examine μIOA's characteristics, four intelligent optimization algorithms, that is, two competitive steady-state genetic algorithms (SSGA-A and SSGA-B) [19] and two recent immune optimization approaches, NIOA-A and NIOA-B [20], are taken to participate in a comparative analysis by means of two 100-dimensional multimodal expected value optimization problems. It is pointed out that the two genetic algorithms, with the same fixed sampling size for each individual, can still solve (EP) problems, since their constraint scoring functions are designed based on static sampling strategies; the two immune algorithms use dynamic sampling sizes for all individuals in the process of evolution. On the other hand, since NIOA-B cannot effectively handle high-dimensional problems, we in this section improve it by OCBA [21]. These comparative approaches, together with μIOA, share the same termination criterion; namely, each approach uses a total of $8\times 10^5$ evaluations while executing 30 runs on each test problem. Their parameter settings are the same as those in their respective literature, except for their evolving population sizes. After manual experimental tuning, they take population size 40; μIOA takes a small population size within 3 and 5, while the three parameters $m_0$, $n_0$, and $C_{\max}$, as crucial efficiency parameters, are usually set as small integers. We here set $N=5$, $m_0=30$, $n_0=20$, $C_{\max}=2$, $\varepsilon=0.1$, and $\delta=0.05$. Additionally, the best solutions acquired by all the algorithms for each test problem are reevaluated $10^6$ times because of the demand of algorithm comparison.
Example 5. Consider

$$
\begin{aligned}
\max\quad & E\left[\left|\frac{\sum_{i=1}^{p}\cos^{4}(x_i) - 2\prod_{i=1}^{p}\cos^{2}(x_i)}{\sqrt{\sum_{i=1}^{p} i x_i^{2}}}\right| + \xi\right] \\
\text{s.t.}\quad & E\left[0.75 - \eta_1\prod_{i=1}^{p} x_i\right] \le 0, \\
& \sum_{i=1}^{p} E\left[\eta_2 x_i - 7.5p\right] \le 0, \\
& 0 \le x_i \le 10, \qquad \xi \sim N(0,1), \quad 1\le i\le p, \\
& \eta_1 \sim N(0.0015, 0.2), \qquad \eta_2 \sim N(1, 2).
\end{aligned}
\tag{13}
$$

Table 1: Comparison of statistical results for Example 5.

Algor.   Max    Min    Mean   Std dev   CI              FR(%)   AR(s)
SSGA-A   0.14   0.10   0.11   0.01      [0.11, 0.12]    80      49.5
SSGA-B   0.13   0.10   0.11   0.01      [0.11, 0.12]    87      49.6
NIOA-A   0.25   0.16   0.21   0.03      [0.20, 0.22]    80      27.6
NIOA-B   0.26   0.17   0.20   0.02      [0.19, 0.21]    87      24.7
μIOA     0.39   0.22   0.29   0.05      [0.27, 0.31]    90      16.1

CI represents the confidence interval of objective values for the solutions acquired; FR stands for the rate of feasible solutions among all the obtained solutions; AR denotes the average runtime required over the 30 runs.
This is a multimodal expected value programming problem obtained by modifying a static multimodal optimization problem [22], where $p=100$. The main difficulty in solving this problem is that the high dimension and multimodality make it difficult to find the desired solution. We solve the above problem by means of the approximation model SAM as in Section 2, instead of transforming the model into a deterministic analytical one. After 30 runs each, the above algorithms acquire 30 best solutions apiece, used for comparison. Their statistical results are listed in Table 1, while Figures 2 and 3 draw the box plots of the objective values acquired and their average search curves, respectively.
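For reference, the deterministic part of (13) and its sample-average objective can be sketched as follows; the evaluation point and the sampling size are arbitrary assumptions, and the noise follows the stated $N(0,1)$:

```python
import math
import random

def bump(x):
    """Deterministic part of Example 5's objective:
    |sum cos^4(x_i) - 2*prod cos^2(x_i)| / sqrt(sum i*x_i^2)."""
    num = abs(sum(math.cos(v) ** 4 for v in x)
              - 2.0 * math.prod(math.cos(v) ** 2 for v in x))
    den = math.sqrt(sum((i + 1) * v * v for i, v in enumerate(x)))
    return num / den

def empirical_objective(x, n, rng):
    # Sample-average estimate of E[bump(x) + xi] with xi ~ N(0, 1); the
    # additive noise has mean zero, so the estimate tends to bump(x).
    return sum(bump(x) + rng.gauss(0.0, 1.0) for _ in range(n)) / n

x = [1.0] * 100                 # arbitrary point, not necessarily feasible
b = bump(x)
rng = random.Random(7)
est = empirical_objective(x, n=4000, rng=rng)
print(b, est)
```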
In Table 1, the values of FR listed in the seventh column hint that, whereas all the algorithms can find many feasible solutions over the runs, their rates of feasible solutions differ; μIOA acquires many more feasible solutions than the compared approaches. This illustrates that the constraint handling approach RCEA presented in Section 3 can ensure that μIOA finds feasible solutions with high probability (90%) in a single run. On the other hand, the statistical results in columns 2 to 6 show that μIOA's solution quality is clearly superior to that of the other approaches, with NIOA-A and NIOA-B second. With respect to performance efficiency, we see easily that μIOA is a high-efficiency optimization algorithm, as it spends the least time seeking the desired solution in a run. We also notice that SSGA-A and SSGA-B present high computational complexity, as reflected in their average runtimes, which shows that such genetic algorithms with static sampling strategies easily cause expensive computational cost.
[Figure 2: Example 5: comparison of box plots acquired (objective values for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA, roughly in the range 0.1–0.4).]
[Figure 3: Example 5: comparison of average search curves ($\bar{f}$ versus number of evaluations $n$, 0–10000) for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.]
The box plots in Figure 2, formed by the 30 objective values for each algorithm, exhibit the fact that, besides μIOA obtaining the best effect by comparison with the other four algorithms, NIOA-A and NIOA-B, with similar performance characteristics, work well over SSGA-A and SSGA-B. By Figure 3, we observe that μIOA can find the desired solution rapidly and achieve a stable search, but the other algorithms, in particular the two genetic algorithms, easily get stuck in local search. In total, when solving the above higher-dimensional problem, the five algorithms have different efficiencies, solution qualities, and search characteristics. μIOA suits the above problem and exhibits the best performance, for which the main reason is that it can effectively combine RCEA with ROEA to find many more empirically feasible individuals and urge them to evolve gradually towards the desired regions by proliferation and mutation.
Table 2: Comparison of statistical results for Example 6.

Algor.   Max     Min     Mean    Std dev   CI                FR(%)   AR(s)
SSGA-A   488.0   401.8   458.9   19.2      [452.1, 465.8]    53      26.6
SSGA-B   489.7   415.7   463.0   17.4      [456.8, 469.2]    53      25.8
NIOA-A   410.8   378.2   393.9   8.7       [390.8, 397.0]    100     16.0
NIOA-B   414.6   371.3   395.2   3.9       [391.3, 399.0]    100     13.9
μIOA     464.7   444.3   456.8   5.5       [454.1, 458.0]    100     12.1
However, the compared approaches present relatively weak performances. In particular, SSGA-A and SSGA-B easily get stuck in local search and spend the most runtime to solve the above problem, as their constraint handling and selection techniques are hard to adapt to high-dimensional multimodal problems. NIOA-A and NIOA-B are superior to these two genetic algorithms owing to their adaptive sampling and constraint handling techniques.
Example 6. Consider

$$
\begin{aligned}
\max\quad & E\left[\sum_{i=1}^{p} x_i \sin(\pi i x_i) + \xi\right] \\
\text{s.t.}\quad & E\left[\sum_{i=1}^{p} \eta_1 x_i\right] \le 500, \\
& E\left[\sum_{i=1}^{p} \eta_2 x_i^{2}\right] \le 3000, \\
& x_i \ge 0, \qquad \xi \sim N(0,1), \quad 1\le i\le p, \\
& \eta_1 \sim U(0.8, 1.2), \qquad \eta_2 \sim N(1, 0.5).
\end{aligned}
\tag{14}
$$
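The model in Example 6 is easy to evaluate empirically: since $E[\eta_1]=E[\eta_2]=1$, its expected constraints reduce to deterministic ones. A sketch, assuming the objective reads $\sum_i x_i\sin(\pi i x_i)+\xi$ as in (14), with an arbitrary feasible point:

```python
import math
import random

def expected_constraints(x):
    """Under Example 6's distributions, E[eta1] = E[eta2] = 1, so the two
    expected constraints reduce to sum x_i <= 500 and sum x_i^2 <= 3000."""
    return sum(x), sum(v * v for v in x)

def empirical_objective(x, n, rng):
    # Sample-average of sum_i x_i*sin(pi*i*x_i) + xi with xi ~ N(0, 1).
    base = sum(v * math.sin(math.pi * (i + 1) * v) for i, v in enumerate(x))
    return sum(base + rng.gauss(0.0, 1.0) for _ in range(n)) / n

x = [0.5] * 100       # sum = 50 <= 500 and sum of squares = 25 <= 3000
g1, g2 = expected_constraints(x)
rng = random.Random(11)
est = empirical_objective(x, n=4000, rng=rng)
print(g1, g2, est)
```

Of course, the algorithms under comparison do not exploit this reduction; they see only noisy observations of $\eta_1$, $\eta_2$, and $\xi$, which is what makes the problem hard.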
This is a still more difficult multimodal optimization problem, obtained by modifying a static multimodal optimization problem [23], including 100 decision variables (i.e., $p=100$) and 3 random variables, which greatly influence the quality of solution search. The difficulty of solving this problem is that the objective function is multimodal and the decision variables are unbounded above. Similar to the above experiment, we acquire the statistical results of the approaches given in Table 2, while the corresponding box plots and average search curves are drawn in Figures 4 and 5, respectively.

The values of FR in Table 2 illustrate that it is also difficult to solve Example 6, due to the random variables and the high dimensionality. Even so, NIOA-A, NIOA-B, and μIOA can all acquire feasible solutions for each run; SSGA-A and SSGA-B, however, can only reach a feasible-solution rate of at most 53%, namely, they acquire only 53 feasible solutions after 100 executions. This, along with the values of FR in Table 1, shows that although these two approaches can acquire larger values of Mean than the other approaches, they are poor with respect to solution quality, efficiency, and search stability. Consequently, they need some improvements, for example, in their constraint handling. We notice that μIOA is better than either NIOA-A or NIOA-B because of its efficiency and its value of Mean. We emphasize that, whereas NIOA-A and NIOA-B can only obtain small values of Mean by comparison with μIOA, they are still valuable when solving this kind of hard problem. Relatively, NIOA-B behaves well over NIOA-A.

[Figure 4: Example 6: comparison of box plots acquired (objective values for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA, roughly in the range 380–480).]

[Figure 5: Example 6: comparison of average search curves ($\bar{f}$ versus number of evaluations $n$, 0–10000) for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.]

It seems to be true that SSGA-A and SSGA-B are superior to NIOA-A, NIOA-B, and μIOA by Figures 4 and 5, since the box plots in Figure 4 and the average search curves in Figure 5 are far from the horizontal line. As a matter of fact, column 7 in Table 2 shows clearly that these two genetic algorithms fail to find feasible solutions 47 times out of 100 runs, whereas the other approaches are the opposite.
This indicates the fact that infeasible solutions in the decision domain have larger objective function values than feasible ones.
In summary, when solving the above two examples, the five algorithms present different characteristics. SSGA-A and SSGA-B have similar performance attributes, and so do NIOA-A and NIOA-B. μIOA performs well over the compared approaches, while NIOA-B is second.
6. Conclusions and Further Work
We in this work concentrate on a microimmune optimization algorithm to solve a general kind of constrained expected value programming model, relying upon transforming that model into a sample-dependent approximation model (SAM). Such a model requires that different candidate solutions have different sample sizes; in particular, each candidate has two kinds of sample sizes. Subsequently, two lower bound estimates of random vectors are theoretically developed and respectively applied to handling the expected value constraints of individuals in the current population and computing their empirical objective values. Based on these two estimates and the idea of racing ranking, two racing sampling methods are suggested to execute constraint handling and individual evaluation, respectively. Afterwards, a racing sampling based microimmune optimization algorithm, μIOA, is proposed to deal with this approximation model, with the merits of small population, strong disturbance suppression, effective constraint handling, adaptive sampling, and high efficiency. The theoretical analysis has indicated that μIOA's computational complexity depends mainly on $n_N$, $M_\delta$, and $C_{\max}$, due to the small population size $N$ and known problem parameters. By means of the higher-dimensional multimodal test problems, the comparative experimental results lead to some conclusions: (1) RCEA and ROEA can make μIOA dynamically determine the sample sizes of different kinds of individuals in the process of evolution; (2) the local search approach adopted can help μIOA strengthen local search; (3) stochastic parameters can be efficiently addressed by adaptive sampling techniques; (4) μIOA performs well over the compared approaches; and (5) SSGA-A and SSGA-B need some improvements for higher-dimensional problems. Further, whereas we have made some studies on how to explore immune optimization approaches to solve higher-dimensional constrained expected value programming problems, some issues remain to be studied; for example, μIOA's theoretical convergence analysis needs to be investigated, while its engineering applications are to be discussed.
Appendix
Proof of Lemma 3. Let x ∈ X_m \ X_ε. Since there exists i₀ with 1 ≤ i₀ ≤ I such that E[G_{i₀}(x, ξ)] > ε, by (3) we obtain that

Pr{x ∈ X_m} = Pr{ (1/m) Σ_{j=1}^m G_i(x, ξ^j) ≤ 0, 1 ≤ i ≤ I }
            ≤ Pr{ (1/m) Σ_{j=1}^m G_{i₀}(x, ξ^j) − E[G_{i₀}(x, ξ)] ≤ −ε }
            ≤ exp[ −2ε²m / (a_{i₀}(x) − b_{i₀}(x))² ]
            ≤ exp[ −2ε²m / Λ² ].   (A.1)

Hence

1 − Pr{X_m ⊆ X_ε} = Pr{∃ x ∈ X_m s.t. x ∉ X_ε}
                  ≤ Σ_{x ∈ A\X_ε} Pr{x ∈ X_m}
                  ≤ |A| exp[ −2ε²m / Λ² ].   (A.2)

Thereby,

Pr{X_m ⊆ X_ε} ≥ 1 − |A| exp[ −2ε²m / Λ² ].   (A.3)

So the conclusion is true.
Proof of Lemma 4. Since B is finite, there exists y₀ ∈ B such that E[f(y₀, ξ)] = f*. If x ∈ S_n \ S_ε, then

E[f(x, ξ)] > E[f(y₀, ξ)] + ε.   (A.4)

Hence, if

(1/n) Σ_{k=1}^n f(x, ξ^k) ≤ (1/n) Σ_{k=1}^n f(y₀, ξ^k),   (A.5)

we have that

(1/n) Σ_{k=1}^n f(x, ξ^k) < E[f(x, ξ)] − ε/2   (A.6)

or

(1/n) Σ_{k=1}^n f(y₀, ξ^k) > E[f(y₀, ξ)] + ε/2.   (A.7)

Otherwise, we acquire that

(1/n) Σ_{k=1}^n f(x, ξ^k) ≥ E[f(x, ξ)] − ε/2 > E[f(y₀, ξ)] + ε/2 ≥ (1/n) Σ_{k=1}^n f(y₀, ξ^k).   (A.8)

This yields a contradiction. Consequently, by (3), we obtain that

Pr{x ∈ S_n} = Pr{ (1/n) Σ_{k=1}^n f(x, ξ^k) ≤ f̄_n }
            ≤ Pr{ (1/n) Σ_{k=1}^n f(x, ξ^k) ≤ (1/n) Σ_{k=1}^n f(y₀, ξ^k) }
            ≤ Pr{ (1/n) Σ_{k=1}^n f(x, ξ^k) < E[f(x, ξ)] − ε/2 }
              + Pr{ (1/n) Σ_{k=1}^n f(y₀, ξ^k) > E[f(y₀, ξ)] + ε/2 }
            ≤ 2 exp[ −ε²n / (2Γ²) ].   (A.9)

Hence

1 − Pr{S_n ⊆ S_ε} = Pr{∃ x ∈ S_n s.t. x ∉ S_ε}
                  ≤ Σ_{x ∈ B\S_ε} Pr{x ∈ S_n}
                  ≤ 2|B| exp[ −ε²n / (2Γ²) ].   (A.10)

This way, it follows from (A.10) that the above conclusion is true.
Competing Interests
The authors declare that they have no competing interests
Acknowledgments
This work is supported in part by the National Natural Science Foundation (61563009) and Doctoral Fund of Ministry of Education of China (20125201110003).
References
[1] J. Luedtke and S. Ahmed, "A sample approximation approach for optimization with probabilistic constraints," SIAM Journal on Optimization, vol. 19, no. 2, pp. 674–699, 2008.
[2] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with several chance constraints," Operations Research Letters, vol. 40, no. 3, pp. 207–211, 2012.
[3] A. Shapiro, D. Dentcheva, and A. Ruszczynski, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, Pa, USA, 2009.
[4] W. Wang and S. Ahmed, "Sample average approximation of expected value constrained stochastic programs," Operations Research Letters, vol. 36, no. 5, pp. 515–519, 2008.
[5] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with expected value constraints," Optimization Letters, vol. 8, no. 3, pp. 861–875, 2014.
[6] B. Liu, Theory and Practice of Uncertain Programming, Springer, Berlin, Germany, 2009.
[7] Y. Jin and J. Branke, "Evolutionary optimization in uncertain environments—a survey," IEEE Transactions on Evolutionary Computation, vol. 9, no. 3, pp. 303–317, 2005.
[8] K. Deb, S. Gupta, D. Daum, J. Branke, A. K. Mall, and D. Padmanabhan, "Reliability-based optimization using evolutionary algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1054–1074, 2009.
[9] B. Liu and Y.-K. Liu, "Expected value of fuzzy variable and fuzzy expected value models," IEEE Transactions on Fuzzy Systems, vol. 10, no. 4, pp. 445–450, 2002.
[10] Y. Sun and Y. L. Gao, "An improved differential evolution algorithm of stochastic expected value models," Microelectronics & Computer, vol. 29, no. 4, pp. 23–25, 2012.
[11] D. Dasgupta, S. Yu, and F. Nino, "Recent advances in artificial immune systems: models and applications," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1574–1587, 2011.
[12] K. Trojanowski and S. T. Wierzchon, "Immune-based algorithms for dynamic optimization," Information Sciences, vol. 179, no. 10, pp. 1495–1515, 2009.
[13] Y.-C. Hsieh and P.-S. You, "An effective immune based two-phase approach for the optimal reliability-redundancy allocation problem," Applied Mathematics and Computation, vol. 218, no. 4, pp. 1297–1307, 2011.
[14] Q. Zhao, R. Yang, and F. Duan, "An immune clonal hybrid algorithm for solving stochastic chance-constrained programming," Journal of Computational Information Systems, vol. 8, no. 20, pp. 8295–8302, 2012.
[15] W. Hoeffding, "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, vol. 58, pp. 13–30, 1963.
[16] Z. Lin and Z. Bai, Probability Inequalities, Springer, Berlin, Germany, 2010.
[17] K. L. Chung, A Course in Probability Theory, Academic Press, 2001.
[18] M. Olguin-Carbajal, E. Alba, and J. Arellano-Verdejo, "Micro-differential evolution with local search for high dimensional problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 48–54, June 2013.
[19] C. A. Poojari and B. Varghese, "Genetic algorithm based technique for solving chance constrained problems," European Journal of Operational Research, vol. 185, no. 3, pp. 1128–1154, 2008.
[20] Z.-H. Zhang, "Noisy immune optimization for chance-constrained programming problems," Applied Mechanics and Materials, vol. 48-49, pp. 740–744, 2011.
[21] C.-H. Chen, "Efficient sampling for simulation-based optimization under uncertainty," in Proceedings of the 4th International Symposium on Uncertainty Modeling and Analysis (ISUMA '03), pp. 386–391, IEEE, College Park, Md, USA, September 2003.
[22] E. Mezura-Montes and C. A. Coello Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, 2005.
[23] B. Varghese and C. A. Poojari, "Genetic algorithm based technique for solving chance-constrained problems arising in risk management," Tech. Rep., 2004.
intelligent approach to solve general fuzzy expected value models after combining evolutionary algorithms with neural network learning methods. Sun and Gao [10] suggested an improved differential evolutionary approach to solve an expected value programming problem, depending on static sampling and flabby selection.
Whereas immune optimization, as another popular branch, has been well studied for static or dynamic optimization problems [11, 12], it still remains open for stochastic programming problems. Some comparative works between classical intelligent approaches and immune optimization algorithms for stochastic programming have demonstrated that this branch is competitive. For example, Hsieh and You [13] proposed a two-phase immune optimization approach to solve the optimal reliability-redundancy allocation problem; their numerical results on four benchmark problems showed that such an approach is superior to the compared algorithms. Zhao et al. [14] presented a hybrid immune optimization approach to deal with chance-constrained programming, in which two operators of double cloning and double mutation were adopted to accelerate the process of evolution.
In the present work, we study two lower bound estimates on sample size, developed theoretically from Hoeffding's inequalities [15, 16]. Afterwards, two efficient adaptive racing sampling approaches are designed to compute the empirical values of the stochastic objective and constraint functions. These, together with the immune inspirations included in the clonal selection principle, are used to develop a microimmune optimization algorithm (μIOA) for handling general nonlinear and higher-dimensional constrained expected value programming problems. Such an approach is significantly different from any existing immune optimization approach. On one hand, the two lower bound estimates are developed to control the sample sizes of random variables, while a local search approach is adopted to strengthen the ability of local exploitation; on the other hand, the two adaptive racing sampling methods are utilized to dynamically determine such sample sizes in order to compute the empirical values of the objective and constraint functions at each individual. Experimental results have illustrated that μIOA is an alternative tool for higher-dimensional multimodal expected value programming problems.
2 Problem Statement and Preliminaries
Consider the following general single-objective nonlinear constrained expected value programming problem:

min_{x∈D}  E[f(x, ξ)]
s.t.  E[G_i(x, ξ)] ≤ 0,  1 ≤ i ≤ I,
      g_j(x) ≤ 0,  1 ≤ j ≤ J,
      h_k(x) = 0,  1 ≤ k ≤ K,   (EP)

with bounded and closed domain D in R^p, decision vector x in D, and random vector ξ in R^q, where E[·] is the operator of expectation; f(x, ξ) and G_i(x, ξ) are the stochastic objective and constraint functions, respectively, among which at least one is nonlinear and continuous in x; g_j(x) and h_k(x) are the deterministic and continuous constraint functions. If a candidate solution satisfies all the above constraints, it is called a feasible solution, and an infeasible solution otherwise. Introduce the following constraint violation function to check whether candidate x is feasible:
Γ(x) = I⁻¹ Σ_{i=1}^I max{E[G_i(x, ξ)], 0} + J⁻¹ Σ_{j=1}^J max{g_j(x), 0} + K⁻¹ Σ_{k=1}^K |h_k(x)|.   (1)
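The violation measure in (1) averages the positive (violated) parts of each constraint group; a minimal sketch (the function name and the plain-list interface are ours, not from the paper):

```python
def constraint_violation(eg, g, h):
    """Gamma(x) of (1): averaged positive parts of the expected value
    constraint estimates eg (values of E[G_i(x, xi)]), the deterministic
    inequality values g (g_j(x)), and absolute equality values h (h_k(x)).
    Empty groups simply contribute nothing."""
    total = 0.0
    if eg:
        total += sum(max(v, 0.0) for v in eg) / len(eg)
    if g:
        total += sum(max(v, 0.0) for v in g) / len(g)
    if h:
        total += sum(abs(v) for v in h) / len(h)
    return total

# Only violated parts count: a feasible x gives Gamma(x) = 0.
print(constraint_violation([-0.2, 0.4], [0.1, -1.0], [0.3]))  # approximately 0.55
```

Satisfied constraints (negative inequality values) contribute nothing, so Γ(x) = 0 exactly characterizes feasibility, matching the prescription below.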
Obviously, x is feasible only when Γ(x) = 0. If Γ(x) < Γ(y), we prescribe that x is superior to y. In order to solve CEVP, we transform the above problem into the following sample-dependent approximation model (SAM):
min_{x∈D}  (1/n(x)) Σ_{l=1}^{n(x)} f(x, ξ^l)
s.t.  (1/m(x)) Σ_{r=1}^{m(x)} G_i(x, ξ^r) ≤ 0,  1 ≤ i ≤ I,
      g_j(x) ≤ 0,  1 ≤ j ≤ J,
      h_k(x) = 0,  1 ≤ k ≤ K,   (2)
where 119898(x) and 119899(x) are the sampling sizes of 120585 at thepoint x for the stochastic objective and constraint functionsrespectively 120585119894 is the 119894th observation It is known that theoptimal solution of problem SAM can approach that ofproblem (EP) when 120576 rarr 0 and 119898(x) 119899(x) rarr infin based onthe law of large number [17] We say that x is an empiricallyfeasible solution for (EP) if the above constraints in SAM aresatisfied
In the subsequent work, two adaptive sampling schemes will be designed to estimate the empirical objective and constraint values for each individual. We here cite the following conclusions.
Theorem 1 (Hoeffding's inequality; see [15, 16]). Let X be a set and let F(·) be a probability distribution function on X. Let f_1, ..., f_n denote real-valued functions defined on X with f_j: X → [a_j, b_j] for j = 1, ..., n, where a_j and b_j are real numbers satisfying a_j < b_j. Let x_1, ..., x_n be the samples of i.i.d. random variables X_1, X_2, ..., X_n on X, respectively. Then the following inequality is true:

Pr( | (1/n) Σ_{j=1}^n f_j(x_j) − (1/n) Σ_{j=1}^n ∫_{x∈X} f_j(x) dF(x) | ≥ ε ) ≤ exp( −2ε²n² / Σ_{j=1}^n (b_j − a_j)² ).   (3)
Corollary 2 (see [15, 16]). If X_1, X_2, ..., X_n are i.i.d. random variables with mean μ and a ≤ X_j ≤ b, 1 ≤ j ≤ n, then

|X̄_n − μ| ≤ K(Δ, n, δ) ≡ Δ √((1/2n) ln(2/δ))   (4)

with probability at least 1 − δ, where X̄_n = (1/n) Σ_{j=1}^n x_j and Δ = b − a; x_j and δ denote the observation of X_j and the significance level, respectively.
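The radius K(Δ, n, δ) of Corollary 2 is the stopping quantity reused by both racing schemes in the next section; a minimal sketch of (4):

```python
import math

def hoeffding_radius(spread, n, delta):
    """K(Delta, n, delta) of Corollary 2: half-width of the
    (1 - delta)-confidence interval for the sample mean of n i.i.d.
    observations bounded in an interval of length Delta = b - a."""
    return spread * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Observations in [0, 1] (Delta = 1) at significance level 0.05.
for n in (30, 120, 480):
    print(n, round(hoeffding_radius(1.0, n, 0.05), 4))
```

Since the radius decays as O(1/√n), it halves exactly when the sample size quadruples, which is what lets the racing schemes stop sampling a candidate once its mean is resolved.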
3 Racing Sampling Approaches
3.1. Expected Value Constraint Handling. Usually, when an intelligent optimization approach with static sampling is chosen to solve the above problem (EP), each individual is assigned the same, sufficiently large sampling size, which necessarily causes high computational complexity. Therefore, in order to ensure that each individual in a given finite population A has a rational sampling size, we in this subsection give a lower bound estimate to control the value of m(x) with x ∈ A, based on the sample average approximation model of the above problem. Define
X_ε = { x ∈ A | E[G_i(x, ξ)] ≤ ε, 1 ≤ i ≤ I },
X_m = { x ∈ A | (1/m) Σ_{j=1}^m G_i(x, ξ^j) ≤ 0, 1 ≤ i ≤ I }.   (5)
We next give a lower bound estimate to justify that X_m is a subset of X_ε with probability 1 − δ, for which the proof can be found in the Appendix.
Lemma 3. If there exist a_i(x) and b_i(x) such that Pr{a_i(x) ≤ G_i(x, ξ) ≤ b_i(x), 1 ≤ i ≤ I} = 1 with x ∈ A, one has that Pr{X_m ⊆ X_ε} ≥ 1 − δ, provided that

m ≥ M_δ ≡ (Λ²/2ε²) log(|A|/δ),   (6)

where Λ = max{b_i(x) − a_i(x) | x ∈ A, 1 ≤ i ≤ I} and |A| denotes the size of A.
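As a quick numeric check of (6), the bound can be evaluated with the paper's ε and δ; a sketch (the spread Λ = 2 and the population size are hypothetical inputs, and the logarithm is taken as natural, consistent with the exponential tail in the proof):

```python
import math

def min_constraint_samples(spread, eps, delta, pop_size):
    """Lower bound M_delta of (6): smallest m guaranteeing
    Pr{X_m subset of X_eps} >= 1 - delta over a population of |A|
    individuals, where spread = Lambda bounds b_i(x) - a_i(x)."""
    return math.ceil(spread**2 / (2.0 * eps**2) * math.log(pop_size / delta))

# With eps = 0.1 and delta = 0.05 (the paper's settings) and a
# hypothetical spread Lambda = 2 for |A| = 5 individuals:
print(min_constraint_samples(2.0, 0.1, 0.05, 5))  # 922
```

The quadratic dependence on Λ/ε explains why the relaxation factor ε cannot be made too small without the per-individual sampling budget exploding.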
In (6), a_i(x) and b_i(x) are decided by the bounds of the stochastic constraint functions at the point x, and Λ is the maximal sampling difference computed from the observations of the stochastic constraints. We also observe that, once ε and δ are defined, M_δ is determined by |A|. Additionally, high-quality individuals in A should usually get large sampling sizes, and conversely, inferior ones can only get small sampling sizes; this means that different individuals will gain different sampling sizes. Based on this consideration and the idea of racing ranking, we next compute the empirical value Ĝ(x) of any expected value constraint function G(x, ξ) at a given individual x in A. This is completed by the following racing-based constraint evaluation approach (RCEA).
Step 1. Input parameters: initial sampling size m_0, sampling amplitude λ, relaxation factor ε, significance level δ, and maximal sampling size M_δ.

Step 2. Set m = m_0, s = m_0, and λ ← ln(1 + m_0); calculate the estimate Ĝ(x) through m observations.

Step 3. Set s ← λs.

Step 4. Create s observations and update Ĝ(x), that is,

Ĝ(x) ← (m Ĝ(x) + Σ_{i=1}^s G(x, ξ^i)) / (m + s).   (7)

Step 5. Set m ← m + s; if m ≤ M_δ and Ĝ(x) ≤ K(Δ, m, δ), then go to Step 3.

Step 6. Output Ĝ(x) as the estimated value of G(x, ξ).
In the above formulation, M_δ and K(Δ, m, δ) are used to decide when the above algorithm terminates. Once the above procedure is stopped, x obtains its sampling size m(x), that is, m(x) = m. We note that K(Δ, m, δ) is very small if m is large. Thereby, we say that x is an empirical feasible solution if, under the precondition Ĝ(x) ≤ 0, the above deterministic constraints are satisfied. Further, RCEA indicates that an empirical feasible solution x can acquire a large sampling size, so that Ĝ(x) is close to the expected value of G(x, ξ).
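Steps 2 to 6 of RCEA can be sketched as a single racing loop, assuming a hypothetical `sample_G` that draws one observation of G(x, ξ) for the individual at hand and that the spread Δ of Corollary 2 is known; the integer batch sizes are an illustrative detail:

```python
import math

def hoeffding_radius(spread, m, delta):
    # K(Delta, m, delta) of Corollary 2
    return spread * math.sqrt(math.log(2.0 / delta) / (2.0 * m))

def rcea(sample_G, m0, M_delta, spread, delta):
    """Racing loop of RCEA (Steps 2-6): keep enlarging the sample of
    G(x, xi) while the running mean G_hat stays below the shrinking
    confidence radius and the budget M_delta is not exhausted."""
    lam = math.log(1.0 + m0)          # sampling amplitude (Step 2)
    m, s = m0, m0
    G_hat = sum(sample_G() for _ in range(m)) / m
    while True:
        s = max(1, int(lam * s))      # Step 3: grow the batch
        G_hat = (m * G_hat + sum(sample_G() for _ in range(s))) / (m + s)
        m += s                        # Steps 4-5
        if m > M_delta or G_hat > hoeffding_radius(spread, m, delta):
            return G_hat, m           # Step 6: estimate and its sample size
```

A clearly infeasible individual (mean well above the radius) exits after a single extra batch, while a safely feasible one keeps sampling up to the budget M_δ, which is exactly the adaptive behavior the text describes.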
3.2. Objective Function Evaluation. Depending on the above RCEA and the deterministic constraints in problem (EP), the above population A is divided into two subpopulations B and C, where B consists of the empirical feasible solutions in A. We investigate another lower bound estimate to control the value of n(x) with x ∈ B, relying upon the sample average approximation model of problem (EP). Afterwards, an approach is designed to calculate the empirical objective values of the empirical feasible solutions in B. To this point, introduce
S_ε = { x ∈ B | E[f(x, ξ)] ≤ f* + ε },
S_n = { x ∈ B | (1/n) Σ_{k=1}^n f(x, ξ^k) ≤ f̄_n },   (8)
where f* and f̄_n stand for the minima of the theoretical and empirical objective values of individuals in B, respectively. The lower bound estimate is given below by identifying the approximation relation between S_ε and S_n; the proof can be found in the Appendix.
Lemma 4. If there exist c(x) and d(x) such that Pr{c(x) ≤ f(x, ξ) ≤ d(x)} = 1 with x ∈ B, then Pr{S_n ⊆ S_ε} ≥ 1 − δ, provided that

n ≥ n_{|B|} ≡ (2Γ²/ε²) ln(2|B|/δ),   (9)

where Γ = max{d(x) − c(x) | x ∈ B}.
Like the above constraint handling approach, we next calculate the empirical objective values of individuals in B through the following racing-based objective evaluation approach (ROEA).
Step 1. Input parameters: ε and δ mentioned above, initial sampling size n_0, and population B.

Step 2. Set n ← n_0, s ← n_0, λ ← ln(1 + n_0), and Φ ← B.

Step 3. Calculate the empirical objective average of n observations for each individual x in Φ, that is, f̄_n(x); write f_min = min{f̄_n(x) | x ∈ Φ} and f_max = max{f̄_n(x) | x ∈ Φ}.

Step 4. Remove those elements in Φ satisfying f̄_n(x) > f_min + K(f_max − f_min, n, δ).

Step 5. Set s ← λs.

Step 6. Update the empirical objective values for the elements in Φ through

f̄_n(x) ← [f̄_n(x) × n + Σ_{k=1}^s f(x, ξ^k)] / (n + s), x ∈ Φ.   (10)

Step 7. Set n ← n + s.

Step 8. If n < n_{|B|} and Φ ≠ ∅, then return to Step 3; otherwise, output all the empirical objective values of individuals in B.
Through the above algorithm, the individuals in B can acquire their respective empirical objective values with different sampling sizes. High-quality individuals can get large sampling sizes, and hence their empirical objective values can approach their theoretical objective values.
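The elimination loop of ROEA can be sketched likewise; the `samplers` mapping, the freezing of removed individuals at their current estimates, and the deterministic loop order are illustrative assumptions on top of Steps 1 to 8:

```python
import math

def hoeffding_radius(spread, n, delta):
    # K(spread, n, delta) of Corollary 2
    return spread * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def roea(samplers, n0, n_max, delta):
    """Racing-based objective evaluation (Steps 1-8): samplers maps each
    individual to a function drawing one observation of f(x, xi).
    Individuals whose running mean is provably worse than the current
    best are frozen early; survivors keep refining their estimates."""
    lam = math.log(1.0 + n0)
    n, s_size = n0, n0
    means = {x: sum(f() for _ in range(n)) / n for x, f in samplers.items()}
    racing = set(samplers)                      # Phi in the paper
    while n < n_max and racing:
        f_min = min(means[x] for x in racing)
        f_max = max(means[x] for x in racing)
        radius = hoeffding_radius(f_max - f_min, n, delta)
        racing = {x for x in racing
                  if means[x] <= f_min + radius}   # Step 4: eliminate
        s_size = max(1, int(lam * s_size))         # Step 5
        for x in racing:                           # Step 6: enlarge samples
            extra = sum(samplers[x]() for _ in range(s_size))
            means[x] = (means[x] * n + extra) / (n + s_size)
        n += s_size                                # Steps 7-8
    return means
```

The design point is the one the paragraph above makes: sampling effort concentrates on the contenders whose means are still statistically indistinguishable from the best, while evidently inferior individuals stop consuming observations.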
4 Algorithm Statement
The clonal selection principle explains how immune cells learn the pattern structures of invading pathogens. It includes many biological inspirations capable of being adopted to design μIOA, such as immune selection, cloning, and reproduction. Based on RCEA and ROEA above, as well as general immune inspirations, μIOA can be illustrated by Figure 1. We here view antigen Ag as problem SAM itself, while candidates from the design space D are regarded as real-coded antibodies. Within a run period of μIOA, by Figure 1, the current population is first divided into empirical feasible and infeasible antibody subpopulations after executing the above RCEA. Second, the empirical feasible antibodies are required to compute their empirical objective values through ROEA; they will produce many more clones than the empirical infeasible antibodies through proliferation. Afterwards, all the clones undergo mutation. If a parent is superior to its clones, it carries out local search; conversely, it is updated by its best clone. Based on Figure 1, μIOA can be formulated in detail below.
Step 1. Input parameters: population size N, maximal clonal size C_max, sampling parameters m_0 and n_0, relaxation factor ε, significance level δ, and maximal iteration number G_max.

Step 2. Set t ← 1.

Step 3. Generate an initial population A of N random antibodies.

Step 4. Compute the empirical constraint violations of the antibodies in A through RCEA and (1), that is, Γ(x) with x ∈ A.

Step 5. Divide A into an empirical feasible subpopulation B and an infeasible subpopulation C.

Step 6. Calculate the empirical objective values of the antibodies in B through ROEA above.

Step 7. For each antibody x in B, we have the following.

Step 7.1. Proliferate a clonal set Cl(x) with size C_max (i.e., x multiplies into C_max offspring).

Step 7.2. Mutate all clones with mutation rate p_m(x) = 1/(Γ_max + ln(t + 1)) through the classical polynomial mutation, and thereafter produce a mutated clonal set Cl*(x), where Γ_max = max{Γ(x) | x ∈ C}.

Step 7.3. Eliminate the empirical infeasible clones in Cl*(x) through RCEA.

Step 7.4. Calculate the empirical objective values of the clones in Cl*(x) through ROEA.

Step 7.5. If the best clone has a smaller empirical objective value than x, it updates x; otherwise, antibody x, taken as an initial state, creates a better empirical feasible antibody to replace itself by a local search approach [18] with ROEA and sampling size m_0.

Step 8. For each antibody y in C, we have the following.

Step 8.1. Antibody y creates a clonal population C(y) with clonal size Cl(y); all clones in C(y) are enforced to mutate with mutation rate p_m(y) through the conventional nonuniform mutation, creating a mutated clonal population C*(y), where

Cl(y) = round(C_max / (Γ(y) + 1)),
p_m(y) = 1 / (Γ_max − Γ(y) + 1).   (11)

Step 8.2. Check whether there exist empirical feasible clones in C*(y) by RCEA and (1); if yes, the best clone with the smallest empirical objective value by ROEA replaces y; conversely, the clone with the smallest constraint violation updates y.

Step 9. Set A ← B ∪ C and t ← t + 1.

Step 10. If t < G_max, go to Step 4; conversely, output the best antibody, viewed as the optimal solution.
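The clone budget and mutation rate of (11) translate directly into code; a small sketch (the helper name and the sample values of Γ_max are ours):

```python
def infeasible_clone_plan(gamma, gamma_max, c_max):
    """Eq. (11): an empirically infeasible antibody y with violation
    Gamma(y) = gamma receives round(C_max / (gamma + 1)) clones, fewer
    as its violation grows, while its mutation rate
    1 / (Gamma_max - gamma + 1) rises towards 1 for the worst antibody
    in the population."""
    return round(c_max / (gamma + 1.0)), 1.0 / (gamma_max - gamma + 1.0)

# With the paper's setting C_max = 2 and a population whose worst
# violation is Gamma_max = 3:
print(infeasible_clone_plan(0.0, 3.0, 2))  # (2, 0.25)
print(infeasible_clone_plan(1.0, 3.0, 2))  # 1 clone, mutation rate 1/3
```

The two formulas pull in opposite directions on purpose: nearly feasible antibodies get more clones but gentler mutation, whereas badly violating ones get few clones mutated aggressively, encouraging large jumps back towards the feasible region.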
As we formulate above, the current population A is split into two subpopulations after checking whether there are empirical feasible antibodies; each subpopulation is updated through proliferation, mutation, and selection. Step 7 makes the empirical feasible antibodies search for better clones through proliferation and polynomial mutation; once some antibody cannot produce a valuable clone, a reported local search algorithm is used to perform local evolution so as to enhance the quality of solution search. The purpose of Step 8 is to urge the poor antibodies to create diverse clones through proliferation and nonuniform mutation.

Figure 1: The flowchart of μIOA (initial antibody population → empirical feasibility identification → objective evaluation for feasible antibodies and constraint violation for infeasible ones → proliferation and mutation, with local search or elitism clone → comparison between parents and clones, then replacement → new population → termination check → output best solution).
Additionally, μIOA's computational complexity is decided by Steps 4, 6, and 7.2. Step 4 needs at most N(I·M_δ + J + K + 1) evaluations to calculate the empirical constraint violations. In Step 6, we need to compute the empirical objective values of the antibodies in B, which requires at most N·n_N evaluations. Step 7.2 enforces mutation at most N·p·C_max times. Consequently, μIOA's computational complexity in the worst case can be given by

O_c = O(N(I M_δ + J + K) + N n_N + N p C_max) = O(N(I M_δ + J + K + n_N + p C_max)).   (12)
5 Experimental Analysis
Our experiments are executed on a personal computer (CPU 3 GHz, RAM 2 GB) with VC++ software. In order to examine μIOA's characteristics, four intelligent optimization algorithms, that is, two competitive steady genetic algorithms (SSGA-A and SSGA-B) [19] and two recent immune optimization approaches, NIOA-A and NIOA-B [20], are taken to participate in comparative analysis by means of two 100-dimensional multimodal expected value optimization problems. It is pointed out that the two genetic algorithms, with the same fixed sampling size for each individual, can still solve (EP) problems, since their constraint scoring functions are designed based on static sampling strategies; the two immune algorithms use dynamic sampling sizes for all individuals presented in the process of evolution. On the other hand, since NIOA-B cannot effectively handle high-dimensional problems, we in this section improve it by OCBA [21]. These comparative approaches, together with μIOA, share the same termination criterion; namely, each approach has a total of 8 × 10^5 evaluations while executing 30 runs on each test problem. Their parameter settings are the same as those in their literature, except for their evolving population sizes. After manual experimental tuning, they take population size 40; μIOA takes a small population size within 3 and 5, while the three parameters m_0, n_0, and C_max, as crucial efficiency parameters, are usually set as small integers. We here set N = 5, m_0 = 30, n_0 = 20, C_max = 2, ε = 0.1, and δ = 0.05. Additionally, those best solutions acquired by all the algorithms for each test problem are reevaluated 10^6 times because of the demand of algorithm comparison.
Example 5. Consider

max  E[ | (Σ_{i=1}^p cos⁴(x_i) − 2 Π_{i=1}^p cos²(x_i)) / √(Σ_{i=1}^p i·x_i²) | + ξ ]
s.t.  E[0.75 − η_1 Π_{i=1}^p x_i] ≤ 0,
      Σ_{i=1}^p E[η_2 x_i − 7.5p] ≤ 0,
      0 ≤ x_i ≤ 10,  ξ ~ N(0, 1),  1 ≤ i ≤ p,
      η_1 ~ N(0.0015, 0.2),  η_2 ~ N(1, 2).   (13)

Table 1: Comparison of statistical results for Example 5.

Algor.   Max   Min   Mean  Std dev  CI            FR(%)  AR(s)
SSGA-A   0.14  0.10  0.11  0.01     [0.11, 0.12]  80     495
SSGA-B   0.13  0.10  0.11  0.01     [0.11, 0.12]  87     496
NIOA-A   0.25  0.16  0.21  0.03     [0.20, 0.22]  80     276
NIOA-B   0.26  0.17  0.20  0.02     [0.19, 0.21]  87     247
μIOA     0.39  0.22  0.29  0.05     [0.27, 0.31]  90     161

CI represents the confidence interval of the objective values for the solutions acquired; FR stands for the rate of feasible solutions among all the gotten solutions; AR denotes the average runtime required over 30 runs.
This is a multimodal expected value programming problem, obtained by modifying a static multimodal optimization problem [22], where p = 100. The main difficulty of solving this problem is that the high dimension and multimodality make it difficult to find the desired solution. We solve the above problem by means of the approximation model SAM as in Section 2, instead of transforming the model into a deterministic analytical one. After 30 runs, each of the above algorithms acquires 30 best solutions used for comparison. Their statistical results are listed in Table 1, while Figures 2 and 3 draw the box plots of the objective values acquired and their average search curves, respectively.

In Table 1, the FR values listed in the seventh column hint that, whereas all the algorithms can find many feasible solutions for each run, their rates of feasible solutions are different; μIOA can acquire many more feasible solutions than the compared approaches. This illustrates that the constraint handling approach RCEA presented in Section 3 can ensure that μIOA finds feasible solutions with high probability (90%) in a single run. On the other hand, the statistical results in columns 2 to 6 show that μIOA's solution quality is clearly superior to those acquired by the other approaches, with NIOA-A and NIOA-B secondary. With respect to performance efficiency, we see easily that μIOA is a high-efficiency optimization algorithm, as it spends the least time to seek the desired solution in a run. We also notice that SSGA-A and SSGA-B present high computational complexity, judged by their average runtime, which shows that these two genetic algorithms with static sampling strategies easily cause expensive computational cost.
Figure 2: Example 5, comparison of the box plots acquired (objective values of the 30 best solutions for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA).

Figure 3: Example 5, comparison of the average search curves (empirical objective value f versus n) for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
The box plots in Figure 2, formed by the 30 objective values of each algorithm, exhibit that, besides μIOA obtaining the best effect in comparison with the other four algorithms, NIOA-A and NIOA-B, with similar performance characteristics, work well over SSGA-A and SSGA-B. By Figure 3, we observe that μIOA can find the desired solution rapidly and achieve stable search, but the other algorithms, in particular the two genetic algorithms, easily get into local search. Totally, when solving the above higher-dimensional problem, the five algorithms have different efficiencies, solution qualities, and search characteristics. μIOA suits the above problem and exhibits the best performance, for which the main reason consists in that it can effectively combine RCEA with ROEA to find many more empirically feasible individuals and urge them to evolve gradually towards the desired regions by proliferation and mutation.
Scientific Programming 7
Table 2: Comparison of statistical results for Example 6.

Algor.    Max     Min     Mean    Std. dev.   CI               FR (%)   AR (s)
SSGA-A    488.0   401.8   458.9   19.2        [452.1, 465.8]   53       26.6
SSGA-B    489.7   415.7   463.0   17.4        [456.8, 469.2]   53       25.8
NIOA-A    410.8   378.2   393.9   8.7         [390.8, 397.0]   100      16.0
NIOA-B    414.6   371.3   395.2   3.9         [391.3, 399.0]   100      13.9
μIOA      464.7   444.3   456.8   5.5         [454.1, 458.0]   100      12.1
However, the compared approaches present relatively weak performance. In particular, SSGA-A and SSGA-B easily get stuck in local search and spend the most runtime to solve the above problem, as their constraint handling and selection techniques are hard to adapt to high-dimensional multimodal problems. NIOA-A and NIOA-B are superior to these two genetic algorithms owing to their adaptive sampling and constraint handling techniques.
Example 6. Consider

\[
\begin{aligned}
\max\quad & E\Big[\sum_{i=1}^{p} x_i \sin(\pi i x_i) + \xi\Big] \\
\text{s.t.}\quad & E\Big[\sum_{i=1}^{p} \eta_1 x_i\Big] \le 500, \\
& E\Big[\sum_{i=1}^{p} \eta_2 x_i^2\Big] \le 3000, \\
& x_i \ge 0,\quad 1 \le i \le p, \\
& \xi \sim N(0,1),\quad \eta_1 \sim U(0.8, 1.2),\quad \eta_2 \sim N(1, 0.5).
\end{aligned}
\tag{14}
\]
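For concreteness, a single noisy evaluation of problem (14) can be sketched as below. This is a minimal illustration, not code from the paper: the function names are ours, and the random draws ξ, η₁, η₂ are passed in explicitly so any sampler can supply N(0, 1), U(0.8, 1.2), and N(1, 0.5) variates, respectively.

```python
import math

# One observation of the Example 6 objective (i is 1-based, as in (14)).
def e6_objective_obs(x, xi=0.0):
    return sum(v * math.sin(math.pi * i * v)
               for i, v in enumerate(x, start=1)) + xi

# One observation of each expected value constraint.
def e6_constraint1_obs(x, eta1=1.0):   # E[.] <= 500
    return sum(eta1 * v for v in x)

def e6_constraint2_obs(x, eta2=1.0):   # E[.] <= 3000
    return sum(eta2 * v * v for v in x)
```

Averaging such observations over the racing sample sizes of Section 3 yields the empirical objective and constraint values used during evolution.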
This is still a difficult multimodal optimization problem, obtained by modifying a static multimodal optimization problem [23]; it includes 100 decision variables (i.e., p = 100) and 3 random variables, which greatly influence the quality of solution search. The difficulty of solving this problem is that the objective function is multimodal and the decision variables are unbounded above. Similar to the above experiment, we acquire the statistical results of the approaches, given in Table 2, while the corresponding box plots and average search curves are drawn in Figures 4 and 5, respectively.
The values of FR in Table 2 illustrate that Example 6 is also difficult to solve, due to the random variables and high dimensionality. Even so, NIOA-A, NIOA-B, and μIOA can all acquire feasible solutions in each run. SSGA-A and SSGA-B, however, can only reach a feasible-solution rate of at most 53%; namely, they can only acquire 53 feasible solutions after 100 executions. This, along with the values of FR in Table 1, indicates that although such two approaches can
Figure 4: Example 6, comparison of box plots acquired (objective values of SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA).
Figure 5: Example 6, comparison of average search curves (empirical objective f̄ versus evaluation count n, up to 10000) for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
acquire larger values of Mean than the other approaches, they are poor with respect to solution quality, efficiency, and search stability. Consequently, they need some improvements, for example, in their constraint handling. We notice that μIOA is better than either NIOA-A or NIOA-B because of its efficiency and its value of Mean. We emphasize that, whereas NIOA-A and NIOA-B can only obtain small values of Mean by comparison with μIOA, they are still valuable when solving this kind of hard problem. Relatively, NIOA-B behaves well over NIOA-A.
It may seem from Figures 4 and 5 that SSGA-A and SSGA-B are superior to NIOA-A, NIOA-B, and μIOA, since the box plots in Figure 4 and the average search curves in Figure 5 are far from the horizontal line. As a matter of fact, column 7 in Table 2 shows clearly that these two genetic algorithms fail to find feasible solutions 47 times out of 100 runs, whereas the other approaches are the opposite.
This indicates that infeasible solutions in the decision domain have larger objective function values than feasible ones.
In summary, when solving the above two examples, the five algorithms present different characteristics. SSGA-A and SSGA-B have similar performance attributes, and so do NIOA-A and NIOA-B. μIOA performs well over the compared approaches, while NIOA-B is secondary.
6 Conclusions and Further Work
We in this work concentrate on studying a microimmune optimization algorithm to solve a general kind of constrained expected value programming model, relying upon transforming such a model into a sample-dependent approximation model (SAM). One such model requires that different candidate solutions have different sample sizes and, especially, that each candidate has two kinds of sample sizes. Subsequently, two lower bound estimates of random vectors are theoretically developed and, respectively, applied to handling the expected value constraints of individuals in the current population and computing their empirical objective values. Based on these two estimates and the idea of racing ranking, two racing sampling methods are suggested to execute individual evaluation and constraint handling, respectively. Afterwards, a racing sampling based microimmune optimization algorithm, μIOA, is proposed to deal with such an approximation model, with the merits of small population, strong disturbance suppression, effective constraint handling, adaptive sampling, and high efficiency. The theoretical analysis has indicated that μIOA's computational complexity depends mainly on n_N, M_δ, and C_max, due to the small population size N and known problem parameters. By means of the higher-dimensional multimodal test problems, the comparative experimental results support some conclusions: (1) RCEA and ROEA can make μIOA dynamically determine the sample sizes of different kinds of individuals in the process of evolution; (2) the local search approach adopted can help μIOA to strengthen local search; (3) stochastic parameters may be efficiently addressed by adaptive sampling techniques; (4) μIOA performs well over the compared approaches; and (5) SSGA-A and SSGA-B need some improvements for higher-dimensional problems. Further, whereas we make some studies on how to explore immune optimization approaches to solve higher-dimensional constrained expected value programming problems, some issues remain to be studied; for example, μIOA's theoretical convergence analysis needs to be developed, while its engineering applications are to be discussed.
Appendix
Proof of Lemma 3. Let x ∈ X^m \ X_ε. Since there exists i_0 with 1 ≤ i_0 ≤ I such that E[G_{i_0}(x, ξ)] > ε, by (3) we obtain that

\[
\begin{aligned}
\Pr\{x \in X^m\} &= \Pr\Big\{\frac{1}{m}\sum_{j=1}^{m} G_i(x, \xi^j) \le 0,\ 1 \le i \le I\Big\} \\
&\le \Pr\Big\{\frac{1}{m}\sum_{j=1}^{m} G_{i_0}(x, \xi^j) - E[G_{i_0}(x, \xi)] \le -\varepsilon\Big\} \\
&\le \exp\Big[\frac{-2\varepsilon^2 m}{(a_{i_0}(x) - b_{i_0}(x))^2}\Big] \le \exp\Big[\frac{-2\varepsilon^2 m}{\Lambda^2}\Big].
\end{aligned}
\tag{A.1}
\]

Hence

\[
1 - \Pr\{X^m \subseteq X_\varepsilon\} = \Pr\{\exists x \in X^m \text{ s.t. } x \notin X_\varepsilon\}
\le \sum_{x \in A \setminus X_\varepsilon} \Pr\{x \in X^m\}
\le |A| \exp\Big[\frac{-2\varepsilon^2 m}{\Lambda^2}\Big].
\tag{A.2}
\]

Thereby,

\[
\Pr\{X^m \subseteq X_\varepsilon\} \ge 1 - |A| \exp\Big[\frac{-2\varepsilon^2 m}{\Lambda^2}\Big].
\tag{A.3}
\]

So the conclusion is true.
Proof of Lemma 4. Since B is finite, there exists y_0 ∈ B such that E[f(y_0, ξ)] = f*. If x ∈ S^n \ S_ε, then

\[
E[f(x, \xi)] > E[f(y_0, \xi)] + \varepsilon.
\tag{A.4}
\]

Hence, if

\[
\frac{1}{n}\sum_{k=1}^{n} f(x, \xi^k) \le \frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi^k),
\tag{A.5}
\]

we have that

\[
\frac{1}{n}\sum_{k=1}^{n} f(x, \xi^k) < E[f(x, \xi)] - \frac{\varepsilon}{2}
\tag{A.6}
\]

or

\[
\frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi^k) > E[f(y_0, \xi)] + \frac{\varepsilon}{2}.
\tag{A.7}
\]

Otherwise, we acquire that

\[
\frac{1}{n}\sum_{k=1}^{n} f(x, \xi^k) \ge E[f(x, \xi)] - \frac{\varepsilon}{2} > E[f(y_0, \xi)] + \frac{\varepsilon}{2} \ge \frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi^k).
\tag{A.8}
\]
This yields a contradiction. Consequently, by (3) we obtain that

\[
\begin{aligned}
\Pr\{x \in S^n\} &= \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(x, \xi^k) \le \bar{f}_n\Big\} \\
&\le \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(x, \xi^k) \le \frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi^k)\Big\} \\
&\le \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(x, \xi^k) < E[f(x, \xi)] - \frac{\varepsilon}{2}\Big\}
 + \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi^k) > E[f(y_0, \xi)] + \frac{\varepsilon}{2}\Big\} \\
&\le 2\exp\Big[\frac{-\varepsilon^2 n}{2\Gamma^2}\Big].
\end{aligned}
\tag{A.9}
\]

Hence

\[
1 - \Pr\{S^n \subseteq S_\varepsilon\} = \Pr\{\exists x \in S^n \text{ s.t. } x \notin S_\varepsilon\}
\le \sum_{x \in B \setminus S_\varepsilon} \Pr\{x \in S^n\}
\le 2|B| \exp\Big[\frac{-\varepsilon^2 n}{2\Gamma^2}\Big].
\tag{A.10}
\]

This way, it follows from (A.10) that the above conclusion is true.
Competing Interests
The authors declare that they have no competing interests
Acknowledgments
This work is supported in part by the National Natural Science Foundation of China (61563009) and the Doctoral Fund of the Ministry of Education of China (20125201110003).
References
[1] J. Luedtke and S. Ahmed, "A sample approximation approach for optimization with probabilistic constraints," SIAM Journal on Optimization, vol. 19, no. 2, pp. 674–699, 2008.
[2] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with several chance constraints," Operations Research Letters, vol. 40, no. 3, pp. 207–211, 2012.
[3] A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, Pa, USA, 2009.
[4] W. Wang and S. Ahmed, "Sample average approximation of expected value constrained stochastic programs," Operations Research Letters, vol. 36, no. 5, pp. 515–519, 2008.
[5] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with expected value constraints," Optimization Letters, vol. 8, no. 3, pp. 861–875, 2014.
[6] B. Liu, Theory and Practice of Uncertain Programming, Springer, Berlin, Germany, 2009.
[7] Y. Jin and J. Branke, "Evolutionary optimization in uncertain environments—a survey," IEEE Transactions on Evolutionary Computation, vol. 9, no. 3, pp. 303–317, 2005.
[8] K. Deb, S. Gupta, D. Daum, J. Branke, A. K. Mall, and D. Padmanabhan, "Reliability-based optimization using evolutionary algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1054–1074, 2009.
[9] B. Liu and Y.-K. Liu, "Expected value of fuzzy variable and fuzzy expected value models," IEEE Transactions on Fuzzy Systems, vol. 10, no. 4, pp. 445–450, 2002.
[10] Y. Sun and Y. L. Gao, "An improved differential evolution algorithm of stochastic expected value models," Microelectronics & Computer, vol. 29, no. 4, pp. 23–25, 2012.
[11] D. Dasgupta, S. Yu, and F. Nino, "Recent advances in artificial immune systems: models and applications," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1574–1587, 2011.
[12] K. Trojanowski and S. T. Wierzchoń, "Immune-based algorithms for dynamic optimization," Information Sciences, vol. 179, no. 10, pp. 1495–1515, 2009.
[13] Y.-C. Hsieh and P.-S. You, "An effective immune based two-phase approach for the optimal reliability-redundancy allocation problem," Applied Mathematics and Computation, vol. 218, no. 4, pp. 1297–1307, 2011.
[14] Q. Zhao, R. Yang, and F. Duan, "An immune clonal hybrid algorithm for solving stochastic chance-constrained programming," Journal of Computational Information Systems, vol. 8, no. 20, pp. 8295–8302, 2012.
[15] W. Hoeffding, "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, vol. 58, pp. 13–30, 1963.
[16] Z. Lin and Z. Bai, Probability Inequalities, Springer, Berlin, Germany, 2010.
[17] K. L. Chung, A Course in Probability Theory, Academic Press, 2001.
[18] M. Olguin-Carbajal, E. Alba, and J. Arellano-Verdejo, "Micro-differential evolution with local search for high dimensional problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 48–54, June 2013.
[19] C. A. Poojari and B. Varghese, "Genetic algorithm based technique for solving chance constrained problems," European Journal of Operational Research, vol. 185, no. 3, pp. 1128–1154, 2008.
[20] Z.-H. Zhang, "Noisy immune optimization for chance-constrained programming problems," Applied Mechanics and Materials, vol. 48-49, pp. 740–744, 2011.
[21] C.-H. Chen, "Efficient sampling for simulation-based optimization under uncertainty," in Proceedings of the 4th International Symposium on Uncertainty Modeling and Analysis (ISUMA '03), pp. 386–391, IEEE, College Park, Md, USA, September 2003.
[22] E. Mezura-Montes and C. A. Coello Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, 2005.
[23] B. Varghese and C. A. Poojari, "Genetic algorithm based technique for solving chance-constrained problems arising in risk management," Tech. Rep., 2004.
Corollary 2 (see [15, 16]). If X_1, X_2, ..., X_n are i.i.d. random variables with mean μ and a ≤ X_j ≤ b, 1 ≤ j ≤ n, then

\[
\big|\bar{X}_n - \mu\big| \le K(\Delta, n, \delta) \equiv \Delta\sqrt{\frac{1}{2n}\ln\Big(\frac{2}{\delta}\Big)}
\tag{4}
\]

with probability at least 1 − δ, where X̄_n = (1/n) Σ_{j=1}^{n} X̂_j and Δ = b − a; X̂_j and δ denote the observation of X_j and the significance level, respectively.
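Corollary 2 underlies all the racing decisions below: the confidence half-width K(Δ, n, δ) shrinks as O(1/√n), so each additional sampling round tightens the interval. A direct transcription of (4) (the function name is ours):

```python
import math

def hoeffding_radius(delta_range, n, sig):
    """K(Delta, n, delta) of (4): half-width of the (1 - delta)-confidence
    interval for the sample mean of n i.i.d. observations whose values
    span a range of length delta_range."""
    return delta_range * math.sqrt(math.log(2.0 / sig) / (2.0 * n))

# Quadrupling the sample size halves the half-width.
r100 = hoeffding_radius(1.0, 100, 0.05)
r400 = hoeffding_radius(1.0, 400, 0.05)
```

For example, with Δ = 1 and δ = 0.05 the radius at n = 100 is about 0.136 and is exactly twice the radius at n = 400.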
3 Racing Sampling Approaches
3.1. Expected Value Constraint Handling. Usually, when an intelligent optimization approach with static sampling is chosen to solve the above problem (EP), each individual takes the same and sufficiently large sampling size, which necessarily causes high computational complexity. Therefore, in order to ensure that each individual in a given finite population A has a rational sampling size, we in this subsection give a lower bound estimate to control the value of m(x) with x ∈ A, based on the sample average approximation model of the above problem. Define

\[
\begin{aligned}
X_\varepsilon &= \{x \in A \mid E[G_i(x, \xi)] \le \varepsilon,\ 1 \le i \le I\}, \\
X^m &= \Big\{x \in A \,\Big|\, \frac{1}{m}\sum_{j=1}^{m} G_i(x, \xi^j) \le 0,\ 1 \le i \le I\Big\}.
\end{aligned}
\tag{5}
\]
We next give a lower bound estimate to justify that X^m is a subset of X_ε with probability 1 − δ; the proof can be found in the Appendix.
Lemma 3. If there exist a_i(x) and b_i(x) such that Pr{a_i(x) ≤ G_i(x, ξ) ≤ b_i(x), 1 ≤ i ≤ I} = 1 with x ∈ A, one has that Pr{X^m ⊆ X_ε} ≥ 1 − δ, provided that

\[
m \ge M_\delta \equiv \frac{\Lambda^2}{2\varepsilon^2}\log\frac{|A|}{\delta},
\tag{6}
\]

where Λ = max{b_i(x) − a_i(x) | x ∈ A, 1 ≤ i ≤ I} and |A| denotes the size of A.
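To see the scale of (6): the required sample size grows only logarithmically in the population size |A|, which is part of what makes a micro-population affordable. A quick calculation (a sketch; the unit constraint range Λ = 1 is our assumption, and in practice Λ comes from the problem's constraint bounds):

```python
import math

def m_delta(lam, eps, a_size, sig):
    # Lemma 3: m >= (Lambda^2 / (2 * eps^2)) * log(|A| / delta)
    return math.ceil(lam ** 2 / (2 * eps ** 2) * math.log(a_size / sig))

# With the paper's later settings eps = 0.1, delta = 0.05, and a
# micro-population |A| = 5, assuming Lambda = 1:
m_bound = m_delta(1.0, 0.1, 5, 0.05)
```

Raising |A| from 5 to 40 increases the bound only from 231 to 335 samples, illustrating the logarithmic dependence.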
In (6), a_i(x) and b_i(x) are decided by the bounds of the stochastic constraint functions at the point x, and Λ is the maximal sampling difference computed from the observations of the stochastic constraints. We also observe that, once ε and δ are fixed, M_δ is determined by |A|. Additionally, those high-quality individuals in A should usually get large sampling sizes, and conversely those inferior ones can only get small sampling sizes; this means that different individuals will gain different sampling sizes. Based on this consideration and the idea of racing ranking, we next compute the empirical value Ĝ(x) of any expected value constraint function G(x, ξ) at a given individual x in A. This is completed by the following racing-based constraint evaluation approach (RCEA).
Step 1. Input parameters: initial sampling size m_0, sampling amplitude λ, relaxation factor ε, significance level δ, and maximal sampling size M_δ.

Step 2. Set m = m_0 and s = m_0, and set λ ← ln(1 + m_0); calculate the estimate Ĝ(x) through m observations.

Step 3. Set s ← λs.

Step 4. Create s observations and update Ĝ(x); that is,

\[
\hat{G}(x) \longleftarrow \frac{m\hat{G}(x) + \sum_{i=1}^{s} G(x, \xi^i)}{m + s}.
\tag{7}
\]

Step 5. Set m ← m + s; if m ≤ M_δ and Ĝ(x) ≤ K(Δ, m, δ), then go to Step 3.

Step 6. Output Ĝ(x) as the estimated value of E[G(x, ξ)].

In the above formulation, M_δ and K(Δ, m, δ) are used to decide when the algorithm terminates. Once the procedure stops, x obtains its sampling size m(x); that is, m(x) = m. We note that K(Δ, m, δ) is very small if m is large. Thereby, we say that x is an empirical feasible solution if, under the precondition Ĝ(x) ≤ 0, the above deterministic constraints are satisfied. Further, RCEA indicates that an empirical feasible solution x can acquire a large sampling size, so that Ĝ(x) is close to the expected value of G(x, ξ).
3.2. Objective Function Evaluation. Depending on the above RCEA and the deterministic constraints in problem (EP), the above population A is divided into two subpopulations B and C, where B consists of the empirical feasible solutions in A. We investigate another lower bound estimate to control the value of n(x) with x ∈ B, relying upon the sample average approximation model of problem (EP). Afterwards, an approach is designed to calculate the empirical objective values of the empirical feasible solutions in B. To this point, introduce

\[
\begin{aligned}
S_\varepsilon &= \{x \in B \mid E[f(x, \xi)] \le f^* + \varepsilon\}, \\
S^n &= \Big\{x \in B \,\Big|\, \frac{1}{n}\sum_{k=1}^{n} f(x, \xi^k) \le \bar{f}_n\Big\},
\end{aligned}
\tag{8}
\]
where f* and f̄_n stand for the minima of the theoretical and empirical objective values of individuals in B, respectively. The lower bound estimate is given below by identifying the approximation relation between S_ε and S^n; the proof can be found in the Appendix.
Lemma 4. If there exist c(x) and d(x) such that Pr{c(x) ≤ f(x, ξ) ≤ d(x)} = 1 with x ∈ B, then Pr{S^n ⊆ S_ε} ≥ 1 − δ, provided that

\[
n \ge n_{|B|} \equiv \frac{2\Gamma^2}{\varepsilon^2}\ln\frac{2|B|}{\delta},
\tag{9}
\]

where Γ = max{d(x) − c(x) | x ∈ B}.
Like the above constraint handling approach, we next calculate the empirical objective values of individuals in B through the following racing-based objective evaluation approach (ROEA).
Step 1. Input parameters: ε and δ mentioned above, initial sampling size n_0, and population B.

Step 2. Set n ← n_0, s ← n_0, λ ← ln(1 + n_0), and Φ ← B.

Step 3. Calculate the empirical objective average of n observations for each individual x in Φ, that is, f̄_n(x); write f_min = min{f̄_n(x) : x ∈ Φ} and f_max = max{f̄_n(x) : x ∈ Φ}.

Step 4. Remove those elements in Φ satisfying f̄_n(x) > f_min + K(f_max − f_min, n, δ).

Step 5. Set s ← λs.

Step 6. Update the empirical objective values of the elements in Φ through

\[
\bar{f}_n(x) \longleftarrow \frac{\bar{f}_n(x) \cdot n + \sum_{k=1}^{s} f(x, \xi^k)}{n + s}, \quad x \in \Phi.
\tag{10}
\]

Step 7. Set n ← n + s.

Step 8. If n < n_{|B|} and Φ ≠ ∅, then return to Step 3; otherwise, output all the empirical objective values of individuals in B.
Through the above algorithm, the individuals in B acquire their respective empirical objective values with different sampling sizes. Those high-quality individuals get large sampling sizes, and hence their empirical objective values approach their theoretical objective values.
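A compact sketch of ROEA follows. It is illustrative, not the paper's code: `draw_f` yields one noisy objective observation per candidate, the integer rounding of the batch size is our assumption, and for brevity the sketch returns only the surviving candidates' means (eliminated candidates keep the mean they had at elimination time).

```python
import math

def roea(draw_f, b_pop, n0, sig, n_cap):
    """Racing-based objective evaluation (Steps 1-8): race the candidates
    in B, dropping any whose empirical mean exceeds f_min plus the racing
    threshold K(f_max - f_min, n, delta), and resample the survivors."""
    lam = math.log(1 + n0)                                     # Step 2
    n, s = n0, n0
    sums = {x: sum(draw_f(x) for _ in range(n0)) for x in b_pop}
    active = set(b_pop)
    while n < n_cap and active:                                # Step 8
        means = {x: sums[x] / n for x in active}               # Step 3
        f_min, f_max = min(means.values()), max(means.values())
        k = (f_max - f_min) * math.sqrt(math.log(2 / sig) / (2 * n))
        active = {x for x in active if means[x] <= f_min + k}  # Step 4
        s = max(1, int(lam * s))                               # Step 5
        for x in active:                                       # Step 6
            sums[x] += sum(draw_f(x) for _ in range(s))
        n += s                                                 # Step 7
    return {x: sums[x] / n for x in active}
```

With a noiseless toy objective, clearly inferior candidates are eliminated in the first racing round, so subsequent sampling effort concentrates on the leader.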
4 Algorithm Statement
The clonal selection principle explains how immune cells learn the pattern structures of invading pathogens. It includes many biological inspirations capable of being adopted to design μIOA, such as immune selection, cloning, and reproduction. Based on RCEA and ROEA above, as well as general immune inspirations, μIOA can be illustrated by Figure 1. We here view the antigen Ag as problem SAM itself, while candidates from the design space D are regarded as real-coded antibodies. Within a run period of μIOA, by Figure 1, the current population is first divided into empirical feasible and infeasible antibody subpopulations after executing the above RCEA. Second, the empirical feasible antibodies are required to compute their empirical objective values through ROEA; they will produce many more clones than the empirical infeasible antibodies through proliferation. Afterwards, all the clones undergo mutation. If a parent is superior to its clones, it carries out local search; conversely, it is updated by its best clone. Based on Figure 1, μIOA can be formulated in detail below.
Step 1. Input parameters: population size N, maximal clonal size C_max, sampling parameters m_0 and n_0, relaxation factor ε, significance level δ, and maximal iteration number G_max.

Step 2. Set t ← 1.

Step 3. Generate an initial population A of N random antibodies.

Step 4. Compute the empirical constraint violations of antibodies in A through RCEA and (1), that is, Γ(x) with x ∈ A.

Step 5. Divide A into an empirical feasible subpopulation B and an infeasible subpopulation C.

Step 6. Calculate the empirical objective values of antibodies in B through the above ROEA.

Step 7. For each antibody x in B, we have the following.

Step 7.1. Proliferate a clonal set Cl(x) with size C_max (i.e., x multiplies into C_max offspring).

Step 7.2. Mutate all clones with mutation rate p_m(x) = 1/(Γ_max + ln(t + 1)) through the classical polynomial mutation, and thereafter produce a mutated clonal set Cl*(x), where Γ_max = max{Γ(x) | x ∈ C}.

Step 7.3. Eliminate the empirical infeasible clones in Cl*(x) through RCEA.

Step 7.4. Calculate the empirical objective values of the clones in Cl*(x) through ROEA.

Step 7.5. If the best clone has a smaller empirical objective value than x, it updates x; otherwise, antibody x, as an initial state, creates a better empirical feasible antibody to replace it by a local search approach [18] with ROEA and sampling size m_0.

Step 8. For each antibody y in C, we have the following.

Step 8.1. Antibody y creates a clonal population C(y) with clonal size Cl(y); all clones in C(y) are forced to mutate with mutation rate p_m(y) through the conventional nonuniform mutation and create a mutated clonal population C*(y), where

\[
\mathrm{Cl}(y) = \mathrm{round}\Big(\frac{C_{\max}}{\Gamma(y) + 1}\Big), \qquad p_m(y) = \frac{1}{\Gamma_{\max} - \Gamma(y) + 1}.
\tag{11}
\]

Step 8.2. Check whether there exist empirical feasible clones in C*(y) by RCEA and (1); if yes, the best clone with the smallest empirical objective value by ROEA replaces y, and conversely the clone with the smallest constraint violation updates y.

Step 9. Set A ← B ∪ C and t ← t + 1.

Step 10. If t < G_max, go to Step 4; conversely, output the best antibody, viewed as the optimal solution.
As we formulate above, the current population A is split into two subpopulations after checking whether there are empirical feasible antibodies. Each subpopulation is
Figure 1: The flowchart of μIOA (initial antibody population → empirical feasibility identification → objective evaluation of empirical feasible antibodies / constraint violation of infeasible ones → proliferation and mutation → comparison between parents and clones → local search or replacement → new population → termination check → output best solution).
updated through proliferation, mutation, and selection. Step 7 makes the empirical feasible antibodies search for better clones through proliferation and polynomial mutation; once an antibody cannot produce a valuable clone, a reported local search algorithm is used to perform local evolution so as to enhance the quality of solution search. The purpose of Step 8 is to urge those poor antibodies to create diverse clones through proliferation and nonuniform mutation.
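The allocation rules of (11) in Step 8.1 can be read off directly: an empirical infeasible antibody with a small violation Γ(y) gets more clones and a gentler mutation rate, while the most violated antibody (Γ(y) = Γ_max) mutates at rate 1. A literal transcription (function names are ours):

```python
def clone_size(c_max, gamma_y):
    # Cl(y) = round(C_max / (Gamma(y) + 1)): fewer clones as the
    # constraint violation grows.
    return round(c_max / (gamma_y + 1.0))

def mutation_rate(gamma_max, gamma_y):
    # p_m(y) = 1 / (Gamma_max - Gamma(y) + 1): the most violated
    # antibody (Gamma(y) = Gamma_max) mutates at the maximal rate 1.
    return 1.0 / (gamma_max - gamma_y + 1.0)
```

This pairing biases search effort towards near-feasible antibodies while keeping heavily violated ones exploring aggressively.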
Additionally, μIOA's computational complexity is decided by Steps 4, 6, and 7.2. Step 4 needs at most N(I·M_δ + J + K + 1) evaluations to calculate the empirical constraint violations. In Step 6, we need to compute the empirical objective values of antibodies in B, which takes at most N·n_N evaluations. Step 7.2 enforces mutation at most N·p·C_max times. Consequently, μIOA's computational complexity in the worst case can be given by

\[
O_c = O\big(N(IM_\delta + J + K) + Nn_N + NpC_{\max}\big) = O\big(N(IM_\delta + J + K + n_N + pC_{\max})\big).
\tag{12}
\]
5 Experimental Analysis
Our experiments are executed on a personal computer (CPU 3 GHz, RAM 2 GB) with VC++. In order to examine μIOA's characteristics, four intelligent optimization algorithms, that is, two competitive steady-state genetic algorithms (SSGA-A and SSGA-B) [19] and two recent immune optimization approaches, NIOA-A and NIOA-B [20], are taken to participate in a comparative analysis by means of two 100-dimensional multimodal expected value optimization problems. It is pointed out that the two genetic algorithms, with the same fixed sampling size for each individual, can still solve (EP) problems, since their constraint scoring functions are designed based on static sampling strategies; the two immune algorithms use dynamic sampling sizes for all individuals presented in the process of evolution. On the other hand, since NIOA-B cannot effectively handle high-dimensional problems, we in this section improve it by OCBA [21]. These comparative approaches, together with μIOA, share the same termination criterion; namely, each approach has a total of 8 × 10^5 evaluations, while executing 30 runs on each test problem. Their parameter settings are the same as those in their literature, except for their evolving population sizes: after manual experimental tuning, they take population size 40. μIOA takes a small population size within 3 to 5, while the three parameters m_0, n_0, and C_max, as crucial efficiency parameters, are usually set as small integers. We here set N = 5, m_0 = 30, n_0 = 20, C_max = 2, ε = 0.1, and δ = 0.05. Additionally, the best solutions acquired by all the algorithms for each test problem are reevaluated 10^6 times because of the demand of algorithm comparison.
Example 5. Consider

\[
\begin{aligned}
\max\quad & E\Bigg[\Bigg|\frac{\sum_{i=1}^{p}\cos^4(x_i) - 2\prod_{i=1}^{p}\cos^2(x_i)}{\sqrt{\sum_{i=1}^{p} i x_i^2}}\Bigg| + \xi\Bigg] \\
\text{s.t.}\quad & E\Big[0.75 - \eta_1\prod_{i=1}^{p} x_i\Big] \le 0, \\
& \sum_{i=1}^{p} E[\eta_2 x_i - 7.5p] \le 0, \\
& 0 \le x_i \le 10,\quad 1 \le i \le p, \\
& \xi \sim N(0, 1),\quad \eta_1 \sim N(0.0015, 0.2),\quad \eta_2 \sim N(1, 2).
\end{aligned}
\tag{13}
\]

Table 1: Comparison of statistical results for Example 5.

Algor.    Max    Min    Mean   Std. dev.   CI             FR (%)   AR (s)
SSGA-A    0.14   0.10   0.11   0.01        [0.11, 0.12]   80       49.5
SSGA-B    0.13   0.10   0.11   0.01        [0.11, 0.12]   87       49.6
NIOA-A    0.25   0.16   0.21   0.03        [0.20, 0.22]   80       27.6
NIOA-B    0.26   0.17   0.20   0.02        [0.19, 0.21]   87       24.7
μIOA      0.39   0.22   0.29   0.05        [0.27, 0.31]   90       16.1

CI represents the confidence interval of the objective values of the solutions acquired; FR stands for the rate of feasible solutions among all the obtained solutions; AR denotes the average runtime over 30 runs.
This is a multimodal expected value programming problem obtained by modifying a static multimodal optimization problem [22], where p = 100. The main difficulty of solving this problem is that the high dimension and multimodality make it hard to find the desired solution. We solve the above problem by means of the approximation model SAM as in Section 2, instead of transforming the model into a deterministic analytical one. After 30 runs, each of the above algorithms acquires 30 best solutions used for comparison. Their statistical results are listed in Table 1, while Figures 2 and 3 draw the box plots of the objective values acquired and their average search curves, respectively.
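For reference, the deterministic core of (13) resembles the bump-ratio test function g02 used in [22]; one noisy observation might be sketched as follows (our transcription, with the random draws passed in explicitly; it assumes Python 3.8+ for `math.prod` and some x_i > 0 so the denominator is nonzero):

```python
import math

def e5_objective_obs(x, xi=0.0):
    """One observation of the Example 5 objective: the absolute value
    of the bump ratio plus additive noise xi ~ N(0, 1)."""
    num = (sum(math.cos(v) ** 4 for v in x)
           - 2.0 * math.prod(math.cos(v) ** 2 for v in x))
    den = math.sqrt(sum(i * v * v for i, v in enumerate(x, start=1)))
    return abs(num / den) + xi

def e5_constraint1_obs(x, eta1=1.0):
    # One observation of E[0.75 - eta1 * prod(x_i)] <= 0.
    return 0.75 - eta1 * math.prod(x)
```

Such per-observation functions are exactly what RCEA and ROEA resample at the rates dictated by Lemmas 3 and 4.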
In Table 1 the values on FR listed in the seventh columnhint that whereas all the algorithms can find many feasiblesolutions for each run their rates of feasible solutions aredifferent 120583IOA can acquire many more feasible solutionsthan the compared approaches This illustrates that the con-straint handling approach RCEA presented in Section 3 canensure that 120583IOA find feasible solutions with high probability90 for a single run On the other hand the statisticalresults in columns 2 to 6 show that 120583IOArsquos solution qualityis clearly superior to those acquired by other approachesand meanwhile NIOA-A and NIOA-B are secondary Withrespect to performance efficiency we see easily that 120583IOAis a high-efficiency optimization algorithm as it spendsthe least time to seek the desired solution in a run Wealso notice that SSGA-A and SSGA-B present their highcomputational complexity because of their average runtimewhich shows that such two genetic algorithms with staticsampling strategies cause easily expensively computationalcost
SSGA-A SSGA-B NIOA-A NIOA-B
01
015
02
025
03
035
04
Obj
ectiv
e val
ues
120583IOA
Figure 2 Example 5 comparison of box plots acquired
0 2000 4000 6000 8000 100000
01
02
03
04
05
06
SSGA-ASSGA-BNIOA-A
NIOA-B120583IOA
n
f
Figure 3 Example 5 comparison of average search curves
The box plots in Figure 2 formed by 30 objective valuesfor each algorithm exhibit the fact that in addition to 120583IOAobtaining the best effect by comparison with the other fouralgorithms NIOA-A and NIOA-B with similar performancecharacteristics can work well over SSGA-A and SSGA-BBy Figure 3 we observe that 120583IOA can find the desiredsolution rapidly and achieve stable search but the otheralgorithms in particular the two genetic algorithms get easilyinto local search Totally when solving the above higher-dimensional problem the five algorithms have different effi-ciencies solution qualities and search characteristics 120583IOAsuits the above problem and exhibits the best performancefor which the main reason consists in that it can effectivelycombine RCEA with ROEA to find many more empiri-cally feasible individuals and urge them to evolve graduallytowards the desired regions by proliferation and mutation
Scientific Programming 7
Table 2: Comparison of statistical results for Example 6.

Algor.  | Max   | Min   | Mean  | Std. dev. | CI             | FR(%) | AR(s)
SSGA-A  | 488.0 | 401.8 | 458.9 | 19.2      | [452.1, 465.8] | 53    | 26.6
SSGA-B  | 489.7 | 415.7 | 463.0 | 17.4      | [456.8, 469.2] | 53    | 25.8
NIOA-A  | 410.8 | 378.2 | 393.9 | 8.7       | [390.8, 397.0] | 100   | 16.0
NIOA-B  | 414.6 | 371.3 | 395.2 | 3.9       | [391.3, 399.0] | 100   | 13.9
μIOA    | 464.7 | 444.3 | 456.8 | 5.5       | [454.1, 458.0] | 100   | 12.1
However, the compared approaches present relatively weak performances. In particular, SSGA-A and SSGA-B easily get into local search and spend the most runtime on the above problem, as their constraint handling and selection techniques are hard to adapt to high-dimensional multimodal problems. NIOA-A and NIOA-B are superior to these two genetic algorithms owing to their adaptive sampling and constraint handling techniques.
Example 6. Consider
\[
\begin{aligned}
\max\quad & E\Big[\sum_{i=1}^{p} x_i \sin(\pi i x_i) + \xi\Big] \\
\text{s.t.}\quad & E\Big[\sum_{i=1}^{p} \eta_1 x_i\Big] \le 500, \qquad E\Big[\sum_{i=1}^{p} \eta_2 x_i^2\Big] \le 3000, \\
& x_i \ge 0,\ 1 \le i \le p, \qquad \xi \sim N(0, 1),\ \eta_1 \sim U(0.8, 1.2),\ \eta_2 \sim N(1, 0.5).
\end{aligned} \tag{14}
\]
This is another difficult multimodal optimization problem, obtained by modifying a static multimodal optimization problem [23]; it includes 100 decision variables (i.e., p = 100) and 3 random variables, which greatly influence the quality of solution search. The difficulty of solving this problem is that the objective function is multimodal and the decision variables have no upper bounds. Similar to the above experiment, we acquire the statistical results of the approaches given in Table 2, while the corresponding box plots and average search curves are drawn in Figures 4 and 5, respectively.
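As a concrete illustration of how a candidate is assessed under the sample-dependent approximation model, the objective and the two expected value constraints of problem (14) can be estimated by sample averages. The sketch below is ours, not the paper's code: the fixed sample size n stands in for μIOA's adaptive racing sizes, and the second parameter of η₂ ~ N(1, 0.5) is read as a standard deviation.

```python
import numpy as np

def empirical_eval14(x, n=2000, rng=None):
    """Sample-average estimates of the objective and constraint violation of
    problem (14) for a candidate x (p-dimensional)."""
    rng = np.random.default_rng() if rng is None else rng
    p = len(x)
    xi = rng.normal(0.0, 1.0, size=n)              # xi   ~ N(0, 1)
    eta1 = rng.uniform(0.8, 1.2, size=(n, p))      # eta1 ~ U(0.8, 1.2)
    eta2 = rng.normal(1.0, 0.5, size=(n, p))       # eta2 ~ N(1, 0.5)
    i = np.arange(1, p + 1)
    # empirical objective: sum_i x_i * sin(pi * i * x_i) + sample mean of xi
    obj = np.sum(x * np.sin(np.pi * i * x)) + xi.mean()
    # empirical constraint values (<= 0 means empirically feasible)
    g1 = (eta1 @ x).mean() - 500.0                 # E[sum_i eta1 x_i] - 500
    g2 = (eta2 @ (x ** 2)).mean() - 3000.0         # E[sum_i eta2 x_i^2] - 3000
    return obj, max(g1, 0.0) + max(g2, 0.0)

obj, violation = empirical_eval14(np.full(100, 1.5), rng=np.random.default_rng(0))
```

For the feasible point x = (1.5, ..., 1.5) both empirical constraints are satisfied by a wide margin, so the returned violation is zero.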
The values of FR in Table 2 illustrate that Example 6 is also difficult to solve, due to the random variables and the high dimensionality. Even so, NIOA-A, NIOA-B, and μIOA can all acquire feasible solutions in each run; SSGA-A and SSGA-B, however, can only get at most 53% feasible solutions, namely, only 53 feasible solutions after 100 executions. This, along with the values of FR in Table 1, shows that although such two approaches can
Figure 4: Example 6, comparison of the box plots of the objective values acquired by SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
Figure 5: Example 6, comparison of the average search curves (objective value f versus evaluation count n) of SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
acquire larger values of Mean than the other approaches, they are poor with respect to solution quality, efficiency, and search stability. Consequently, they need some improvements, for example, in their constraint handling. We notice that μIOA is better than either NIOA-A or NIOA-B because of its efficiency and its value of Mean. We emphasize that, whereas NIOA-A and NIOA-B obtain smaller values of Mean than μIOA, they are still valuable for this kind of hard problem; relatively, NIOA-B performs better than NIOA-A.
It may seem from Figures 4 and 5 that SSGA-A and SSGA-B are superior to NIOA-A, NIOA-B, and μIOA, since their box plots in Figure 4 and their average search curves in Figure 5 lie higher. As a matter of fact, column 7 in Table 2 shows clearly that these two genetic algorithms fail to find feasible solutions in 47 out of 100 runs, whereas the other approaches do not.
These results indicate that infeasible solutions in the decision domain have larger objective function values than feasible ones.
In summary, when solving the above two examples the five algorithms present different characteristics. SSGA-A and SSGA-B have similar performance attributes, and so do NIOA-A and NIOA-B; μIOA performs well over the compared approaches, while NIOA-B is second.
6 Conclusions and Further Work
In this work we concentrate on a microimmune optimization algorithm for a general kind of constrained expected value programming model, relying upon transforming such a model into a sample-dependent approximation model (SAM). This model requires that different candidate solutions have different sample sizes and, especially, that each candidate has two kinds of sample sizes. Subsequently, two lower bound estimates of random vectors are theoretically developed and respectively applied to handling the expected value constraints of individuals in the current population and computing their empirical objective values. Based on these two estimates and the idea of racing ranking, two racing sampling methods are suggested to execute individual evaluation and constraint handling, respectively. Afterwards, a racing sampling based microimmune optimization algorithm, μIOA, is proposed to deal with such an approximation model, with the merits of small population, strong disturbance suppression, effective constraint handling, adaptive sampling, and high efficiency. The theoretical analysis has indicated that μIOA's computational complexity depends mainly on n_N, M_δ, and C_max, owing to the small population size N and the known problem parameters. By means of the higher-dimensional multimodal test problems, the comparative experimental results support some conclusions: (1) RCEA and ROEA enable μIOA to dynamically determine the sample sizes of different kinds of individuals in the process of evolution; (2) the local search approach adopted helps μIOA strengthen local search; (3) stochastic parameters may be efficiently addressed by adaptive sampling techniques; (4) μIOA performs well over the compared approaches; and (5) SSGA-A and SSGA-B need some improvements for higher-dimensional problems. Further, whereas we have made some studies on exploring immune optimization approaches for higher-dimensional constrained expected value programming problems, some issues remain to be studied; for example, μIOA's theoretical convergence analysis needs to be established, and its engineering applications are to be discussed.
Appendix
Proof of Lemma 3. Let \(\mathbf{x} \in X_m \setminus X_\varepsilon\). Since there exists \(i_0\) with \(1 \le i_0 \le I\) such that \(E[G_{i_0}(\mathbf{x}, \xi)] > \varepsilon\), by (3) we obtain that
\[
\begin{aligned}
\Pr\{\mathbf{x} \in X_m\} &= \Pr\Big\{\frac{1}{m}\sum_{j=1}^{m} G_i(\mathbf{x}, \xi_j) \le 0,\ 1 \le i \le I\Big\} \\
&\le \Pr\Big\{\frac{1}{m}\sum_{j=1}^{m} G_{i_0}(\mathbf{x}, \xi_j) - E[G_{i_0}(\mathbf{x}, \xi)] \le -\varepsilon\Big\} \\
&\le \exp\Big[\frac{-2\varepsilon^2 m}{(a_{i_0}(\mathbf{x}) - b_{i_0}(\mathbf{x}))^2}\Big] \le \exp\Big[\frac{-2\varepsilon^2 m}{\Delta^2}\Big].
\end{aligned} \tag{A.1}
\]
Hence
\[
1 - \Pr\{X_m \subseteq X_\varepsilon\} = \Pr\{\exists \mathbf{x} \in X_m \ \text{s.t.}\ \mathbf{x} \notin X_\varepsilon\} \le \sum_{\mathbf{x} \in A \setminus X_\varepsilon} \Pr\{\mathbf{x} \in X_m\} \le |A| \exp\Big[\frac{-2\varepsilon^2 m}{\Delta^2}\Big]. \tag{A.2}
\]
Thereby
\[
\Pr\{X_m \subseteq X_\varepsilon\} \ge 1 - |A| \exp\Big[\frac{-2\varepsilon^2 m}{\Delta^2}\Big]. \tag{A.3}
\]
So the conclusion is true.
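To see what the bound (A.3) buys in practice, one can invert it: requiring \(|A|\exp(-2\varepsilon^2 m/\Delta^2) \le \delta\) gives the minimum sample size \(m \ge (\Delta^2/2\varepsilon^2)\ln(|A|/\delta)\). The short Python sketch below computes this; the concrete numbers are illustrative and not taken from the paper's experiments.

```python
import math

def min_sample_size(Delta, eps, card_A, delta):
    """Smallest m with |A| * exp(-2*eps^2*m / Delta^2) <= delta, i.e. the
    sample size making Pr{X_m subset of X_eps} >= 1 - delta in bound (A.3)."""
    return math.ceil(Delta ** 2 / (2.0 * eps ** 2) * math.log(card_A / delta))

# illustrative numbers: range Delta = 1, relaxation eps = 0.1,
# |A| = 40 candidates, significance level delta = 0.05
m = min_sample_size(1.0, 0.1, 40, 0.05)
```

The logarithmic dependence on |A| and 1/δ is what makes racing affordable: doubling the population or halving δ adds only an additive increment to m.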
Proof of Lemma 4. Since \(B\) is finite, there exists \(\mathbf{y}_0 \in B\) such that \(E[f(\mathbf{y}_0, \xi)] = f^\ast\). If \(\mathbf{x} \in S_n \setminus S_\varepsilon\), then
\[
E[f(\mathbf{x}, \xi)] > E[f(\mathbf{y}_0, \xi)] + \varepsilon. \tag{A.4}
\]
Hence, if
\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k), \tag{A.5}
\]
we have that
\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) < E[f(\mathbf{x}, \xi)] - \frac{\varepsilon}{2} \tag{A.6}
\]
or
\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k) > E[f(\mathbf{y}_0, \xi)] + \frac{\varepsilon}{2}. \tag{A.7}
\]
Otherwise, that is, if neither (A.6) nor (A.7) holds, we acquire by (A.4) that
\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) \ge E[f(\mathbf{x}, \xi)] - \frac{\varepsilon}{2} > E[f(\mathbf{y}_0, \xi)] + \frac{\varepsilon}{2} \ge \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k), \tag{A.8}
\]
which contradicts (A.5). Consequently, by (3) we obtain that
\[
\begin{aligned}
\Pr\{\mathbf{x} \in S_n\} &= \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) \le \min_{\mathbf{y} \in B}\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}, \xi_k)\Big\} \\
&\le \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k)\Big\} \\
&\le \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) < E[f(\mathbf{x}, \xi)] - \frac{\varepsilon}{2}\Big\} + \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k) > E[f(\mathbf{y}_0, \xi)] + \frac{\varepsilon}{2}\Big\} \\
&\le 2\exp\Big[\frac{-\varepsilon^2 n}{2\Gamma^2}\Big].
\end{aligned} \tag{A.9}
\]
Hence
\[
1 - \Pr\{S_n \subseteq S_\varepsilon\} = \Pr\{\exists \mathbf{x} \in S_n \ \text{s.t.}\ \mathbf{x} \notin S_\varepsilon\} \le \sum_{\mathbf{x} \in B \setminus S_\varepsilon} \Pr\{\mathbf{x} \in S_n\} \le 2|B|\exp\Big[\frac{-\varepsilon^2 n}{2\Gamma^2}\Big]. \tag{A.10}
\]
It follows from (A.10) that the conclusion is true.
Competing Interests
The authors declare that they have no competing interests
Acknowledgments
This work is supported in part by the National Natural Science Foundation of China (61563009) and the Doctoral Fund of the Ministry of Education of China (20125201110003).
References
[1] J. Luedtke and S. Ahmed, "A sample approximation approach for optimization with probabilistic constraints," SIAM Journal on Optimization, vol. 19, no. 2, pp. 674–699, 2008.
[2] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with several chance constraints," Operations Research Letters, vol. 40, no. 3, pp. 207–211, 2012.
[3] A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, Pa, USA, 2009.
[4] W. Wang and S. Ahmed, "Sample average approximation of expected value constrained stochastic programs," Operations Research Letters, vol. 36, no. 5, pp. 515–519, 2008.
[5] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with expected value constraints," Optimization Letters, vol. 8, no. 3, pp. 861–875, 2014.
[6] B. Liu, Theory and Practice of Uncertain Programming, Springer, Berlin, Germany, 2009.
[7] Y. Jin and J. Branke, "Evolutionary optimization in uncertain environments: a survey," IEEE Transactions on Evolutionary Computation, vol. 9, no. 3, pp. 303–317, 2005.
[8] K. Deb, S. Gupta, D. Daum, J. Branke, A. K. Mall, and D. Padmanabhan, "Reliability-based optimization using evolutionary algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1054–1074, 2009.
[9] B. Liu and Y.-K. Liu, "Expected value of fuzzy variable and fuzzy expected value models," IEEE Transactions on Fuzzy Systems, vol. 10, no. 4, pp. 445–450, 2002.
[10] Y. Sun and Y. L. Gao, "An improved differential evolution algorithm of stochastic expected value models," Microelectronics & Computer, vol. 29, no. 4, pp. 23–25, 2012.
[11] D. Dasgupta, S. Yu, and F. Nino, "Recent advances in artificial immune systems: models and applications," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1574–1587, 2011.
[12] K. Trojanowski and S. T. Wierzchoń, "Immune-based algorithms for dynamic optimization," Information Sciences, vol. 179, no. 10, pp. 1495–1515, 2009.
[13] Y.-C. Hsieh and P.-S. You, "An effective immune based two-phase approach for the optimal reliability-redundancy allocation problem," Applied Mathematics and Computation, vol. 218, no. 4, pp. 1297–1307, 2011.
[14] Q. Zhao, R. Yang, and F. Duan, "An immune clonal hybrid algorithm for solving stochastic chance-constrained programming," Journal of Computational Information Systems, vol. 8, no. 20, pp. 8295–8302, 2012.
[15] W. Hoeffding, "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, vol. 58, pp. 13–30, 1963.
[16] Z. Lin and Z. Bai, Probability Inequalities, Springer, Berlin, Germany, 2010.
[17] K. L. Chung, A Course in Probability Theory, Academic Press, 2001.
[18] M. Olguin-Carbajal, E. Alba, and J. Arellano-Verdejo, "Micro-differential evolution with local search for high dimensional problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 48–54, June 2013.
[19] C. A. Poojari and B. Varghese, "Genetic algorithm based technique for solving chance constrained problems," European Journal of Operational Research, vol. 185, no. 3, pp. 1128–1154, 2008.
[20] Z.-H. Zhang, "Noisy immune optimization for chance-constrained programming problems," Applied Mechanics and Materials, vols. 48-49, pp. 740–744, 2011.
[21] C.-H. Chen, "Efficient sampling for simulation-based optimization under uncertainty," in Proceedings of the 4th International Symposium on Uncertainty Modeling and Analysis (ISUMA '03), pp. 386–391, IEEE, College Park, Md, USA, September 2003.
[22] E. Mezura-Montes and C. A. Coello Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, 2005.
[23] B. Varghese and C. A. Poojari, "Genetic algorithm based technique for solving chance-constrained problems arising in risk management," Tech. Rep., 2004.
Step 1. Input the parameters ε and δ mentioned above, the initial sampling size n₀, and the population B.

Step 2. Set n ← n₀, s ← n₀, λ ← ln(1 + n₀), and Φ ← B.

Step 3. Calculate the empirical objective average of the n observations for each individual x in Φ, that is, f̄_n(x); write f_min = min{f̄_n(x) : x ∈ Φ} and f_max = max{f̄_n(x) : x ∈ Φ}.

Step 4. Remove those elements in Φ satisfying f̄_n(x) > f_min + K(f_max − f_min, n, δ).

Step 5. Set s ← λs.

Step 6. Update the empirical objective values of the elements in Φ through
\[
\bar f_n(\mathbf{x}) \leftarrow \frac{\bar f_n(\mathbf{x}) \cdot n + \sum_{k=1}^{s} f(\mathbf{x}, \xi_k)}{n + s}, \quad \mathbf{x} \in B. \tag{10}
\]

Step 7. Set n ← n + s.

Step 8. If n < n_{|B|} and Φ ≠ ∅, then return to Step 3; otherwise, output all the empirical objective values of the individuals in B.
Through the above algorithm, the individuals in B acquire their respective empirical objective values with different sampling sizes. The high-quality individuals get large sampling sizes, and hence their empirical objective values approach their theoretical objective values.
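The racing loop above can be sketched as follows. This is our illustrative Python rendering, not the paper's code: the noisy objective f, the Hoeffding-style elimination radius standing in for K(·), and the stopping budget n_max are all stand-ins whose exact forms the text does not fix.

```python
import math
import random

def roea(pop, f, n0=20, delta=0.05, n_max=500):
    """Racing objective evaluation (sketch): every individual starts with n0
    samples; individuals whose running mean exceeds f_min + K are dropped from
    further sampling, so the promising survivors accumulate larger samples."""
    n, s = n0, n0
    lam = math.log(1 + n0)
    mean = {x: sum(f(x) for _ in range(n0)) / n0 for x in pop}
    racing = set(pop)
    while n < n_max and racing:
        f_min, f_max = min(mean.values()), max(mean.values())
        # illustrative Hoeffding-style radius playing the role of K(f_max - f_min, n, delta)
        k = (f_max - f_min) * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        racing = {x for x in racing if mean[x] <= f_min + k}
        s = max(1, int(lam * s))            # enlarge the per-round sample size
        for x in racing:                    # refine survivors with s new samples
            extra = sum(f(x) for _ in range(s))
            mean[x] = (mean[x] * n + extra) / (n + s)
        n += s
    return mean

random.seed(1)
candidates = (0.0, 0.5, 2.0)                # true objective values; lower is better
estimates = roea(candidates, lambda x: x + random.gauss(0.0, 0.3))
```

The clearly worst candidate (true value 2.0) is racing-eliminated early and keeps a coarse estimate, while the best one keeps receiving samples.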
4 Algorithm Statement
The clonal selection principle explains how immune cells learn the pattern structures of invading pathogens. It includes many biological inspirations capable of being adopted to design μIOA, such as immune selection, cloning, and reproduction. Based on RCEA and ROEA above, as well as general immune inspirations, μIOA can be illustrated by Figure 1. We here view the antigen Ag as problem SAM itself, while candidates from the design space D are regarded as real-coded antibodies. Within a run period of μIOA, by Figure 1, the current population is first divided into empirical feasible and infeasible antibody subpopulations after executing RCEA. Second, the empirical feasible antibodies compute their empirical objective values through ROEA; they produce many more clones than the empirical infeasible antibodies through proliferation. Afterwards, all the clones undergo mutation. If a parent is superior to its clones, it carries out local search; conversely, it is updated by its best clone. Based on Figure 1, μIOA is formulated in detail below.
Step 1. Input the parameters: population size N, maximal clonal size C_max, sampling parameters m₀ and n₀, relaxation factor ε, significance level δ, and maximal iteration number G_max.

Step 2. Set t ← 1.

Step 3. Generate an initial population A of N random antibodies.

Step 4. Compute the empirical constraint violations of the antibodies in A through RCEA and (1), that is, Γ(x) with x ∈ A.

Step 5. Divide A into an empirical feasible subpopulation B and an infeasible subpopulation C.

Step 6. Calculate the empirical objective values of the antibodies in B through ROEA above.

Step 7. For each antibody x in B, do the following.

Step 7.1. Proliferate a clonal set Cl(x) with size C_max (i.e., x multiplies into C_max offspring).

Step 7.2. Mutate all clones with mutation rate p_m(x) = 1/(Γ_max + ln(t + 1)) through the classical polynomial mutation, thereafter producing a mutated clonal set Cl*(x), where Γ_max = max{Γ(x) | x ∈ C}.

Step 7.3. Eliminate the empirical infeasible clones in Cl*(x) through RCEA.

Step 7.4. Calculate the empirical objective values of the clones in Cl*(x) through ROEA.

Step 7.5. If the best clone has a smaller empirical objective value than x, it updates x; otherwise, antibody x, taken as an initial state, creates a better empirical feasible antibody to replace itself by a local search approach [18] with ROEA and sampling size m₀.

Step 8. For each antibody y in C, do the following.

Step 8.1. Antibody y creates a clonal population C(y) with clonal size Cl(y); all clones in C(y) mutate with mutation rate p_m(y) through the conventional nonuniform mutation, creating a mutated clonal population C*(y), where
\[
\mathrm{Cl}(\mathbf{y}) = \mathrm{round}\Big(\frac{C_{\max}}{\Gamma(\mathbf{y}) + 1}\Big), \qquad p_m(\mathbf{y}) = \frac{1}{\Gamma_{\max} - \Gamma(\mathbf{y}) + 1}. \tag{11}
\]

Step 8.2. Check whether there exist empirical feasible clones in C*(y) by RCEA and (1); if yes, the best clone with the smallest empirical objective value by ROEA replaces y; conversely, the clone with the smallest constraint violation updates y.

Step 9. Set A ← B ∪ C and t ← t + 1.

Step 10. If t < G_max, go to Step 4; otherwise, output the best antibody, viewed as the optimal solution.
Figure 1: The flowchart of μIOA. Feasibility identification splits the initial antibody population; empirical feasible antibodies undergo objective evaluation, proliferation, mutation, comparison between parents and clones, and local search; empirical infeasible antibodies undergo constraint-violation evaluation, elitism cloning, and replacement; the new population loops until termination, after which the best solution is output.

As formulated above, the current population A is split into two subpopulations after checking whether empirical feasible antibodies exist, and each subpopulation is updated through proliferation, mutation, and selection. Step 7 makes the empirical feasible antibodies search for better clones through proliferation and polynomial mutation; once an antibody cannot produce a valuable clone, a reported local search algorithm [18] is used to perform local evolution so as to enhance the quality of solution search. The purpose of Step 8 is to urge the poor antibodies to create diverse clones through proliferation and nonuniform mutation.
Additionally, μIOA's computational complexity is decided by Steps 4, 6, and 7.2. Step 4 needs at most N(I M_δ + J + K + 1) evaluations to calculate the empirical constraint violations. In Step 6 we compute the empirical objective values of the antibodies in B, which takes at most N n_N evaluations. Step 7.2 enforces mutation at most N p × C_max times. Consequently, μIOA's computational complexity in the worst case can be given by
\[
O_c = O(N(I M_\delta + J + K) + N n_N + N p C_{\max}) = O(N(I M_\delta + J + K + n_N + p C_{\max})). \tag{12}
\]
5. Experimental Analysis

Our experiments are executed on a personal computer (CPU 3 GHz, RAM 2 GB) with VC++. In order to examine μIOA's characteristics, four intelligent optimization algorithms, namely, two competitive steady-state genetic algorithms (SSGA-A and SSGA-B) [19] and two recent immune optimization approaches (NIOA-A and NIOA-B) [20], participate in the comparative analysis on two 100-dimensional multimodal expected value optimization problems. We point out that the two genetic algorithms, which use the same fixed sampling size for each individual, can still solve (EP) problems, since their constraint scoring functions are designed on static sampling strategies; the two immune algorithms use dynamic sampling sizes for all individuals presented in the process of evolution. On the other hand, since NIOA-B cannot effectively handle high-dimensional problems, we improve it in this section by OCBA [21]. These compared approaches, together with μIOA, share the same termination criterion, namely, a total of 8 × 10⁵ evaluations, and each executes 30 runs on each test problem. Their parameter settings are the same as those in their literature, except for the evolving population sizes: after manual experimental tuning, they take population size 40, whereas μIOA takes a small population size between 3 and 5, and its three parameters m₀, n₀, and C_max, as crucial efficiency parameters, are usually set to small integers. We here set N = 5, m₀ = 30, n₀ = 20, C_max = 2, ε = 0.1, and δ = 0.05. Additionally, the best solutions acquired by all the algorithms for each test problem are reevaluated 10⁶ times because of the demands of algorithm comparison.
Example 5. Consider
\[
\begin{aligned}
\max\quad & E\Bigg[\Bigg|\frac{\sum_{i=1}^{p}\cos^4(x_i) - 2\prod_{i=1}^{p}\cos^2(x_i)}{\sqrt{\sum_{i=1}^{p} i x_i^2}}\Bigg| + \xi\Bigg] \\
\text{s.t.}\quad & E\Big[0.75 - \eta_1 \prod_{i=1}^{p} x_i\Big] \le 0, \qquad \sum_{i=1}^{p} E[\eta_2 x_i - 7.5p] \le 0, \\
& 0 \le x_i \le 10,\ 1 \le i \le p, \qquad \xi \sim N(0, 1),\ \eta_1 \sim N(0.0015, 0.2),\ \eta_2 \sim N(1, 2).
\end{aligned} \tag{13}
\]

Table 1: Comparison of statistical results for Example 5.

Algor.  | Max  | Min  | Mean | Std. dev. | CI           | FR(%) | AR(s)
SSGA-A  | 0.14 | 0.10 | 0.11 | 0.01      | [0.11, 0.12] | 80    | 49.5
SSGA-B  | 0.13 | 0.10 | 0.11 | 0.01      | [0.11, 0.12] | 87    | 49.6
NIOA-A  | 0.25 | 0.16 | 0.21 | 0.03      | [0.20, 0.22] | 80    | 27.6
NIOA-B  | 0.26 | 0.17 | 0.20 | 0.02      | [0.19, 0.21] | 87    | 24.7
μIOA    | 0.39 | 0.22 | 0.29 | 0.05      | [0.27, 0.31] | 90    | 16.1

CI represents the confidence interval of the objective values of the solutions acquired; FR stands for the rate of feasible solutions among all the solutions obtained; AR denotes the average runtime over 30 runs.
This is a multimodal expected value programming problem obtained by modifying a static multimodal optimization problem [22], where p = 100. The main difficulty in solving it is that the high dimension and multimodality make the desired solution hard to find. We solve the problem by means of the approximation model SAM of Section 2 instead of transforming it into a deterministic analytical one. After 30 runs, each of the above algorithms acquires 30 best solutions used for comparison. Their statistical results are listed in Table 1, while Figures 2 and 3 draw the box plots of the objective values acquired and the average search curves, respectively.
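The deterministic core of (13) is a Keane-bump-type multimodal function. Under a fixed illustrative sample size (μIOA itself adapts sample sizes via racing), the empirical objective of a candidate can be sketched as follows; the function name and the concrete candidate are our own choices.

```python
import numpy as np

def example5_objective(x, xi_samples):
    """Empirical objective of (13): |(sum_i cos^4(x_i) - 2*prod_i cos^2(x_i))
    / sqrt(sum_i i*x_i^2)| plus the sample mean of xi ~ N(0, 1)."""
    i = np.arange(1, len(x) + 1)
    num = np.sum(np.cos(x) ** 4) - 2.0 * np.prod(np.cos(x) ** 2)
    return np.abs(num / np.sqrt(np.sum(i * x ** 2))) + xi_samples.mean()

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 10.0, size=100)      # a random candidate with 0 <= x_i <= 10
val = example5_objective(x, rng.normal(0.0, 1.0, size=1000))
```

Because the additive noise ξ has zero mean, the sample mean of the ξ observations shrinks toward zero as the sample size grows, which is exactly why larger samples give high-quality candidates more trustworthy empirical values.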
2
+ Pr1
119899
119899
sum
119896=1
119891 (y0 120585119896
) gt 119864 [119891 (y0 120585)] +
120576
2
le 2 exp[minus1205762119899
2Γ2]
(A9)
Hence
1 minus Pr 119878119899sube 119878120576
= Pr existx isin 119878119899 st x notin 119878
120576
le sum
xisin119861119878120576
Pr x isin 119878119899
le 2 |119861| exp[minus1205762119899
2Γ2]
(A10)
This way it follows from (A10) that the above conclusion istrue
Competing Interests
The authors declare that they have no competing interests
Acknowledgments
This work is supported in part by the National Natural Sci-ence Foundation (61563009) and Doctoral Fund of Ministryof Education of China (20125201110003)
References
[1] J Luedtke and S Ahmed ldquoA sample approximation approachfor optimization with probabilistic constraintsrdquo SIAM Journalon Optimization vol 19 no 2 pp 674ndash699 2008
[2] M Branda ldquoSample approximation technique for mixed-integer stochastic programming problems with several chanceconstraintsrdquoOperations Research Letters vol 40 no 3 pp 207ndash211 2012
[3] A Shapiro D Dentcheva and A Ruszczynski Lectures onStochastic Programming Modeling andTheory SIAM Philadel-phia Pa USA 2009
[4] W Wang and S Ahmed ldquoSample average approximation ofexpected value constrained stochastic programsrdquo OperationsResearch Letters vol 36 no 5 pp 515ndash519 2008
[5] M Branda ldquoSample approximation technique for mixed-integer stochastic programming problems with expected valueconstraintsrdquo Optimization Letters vol 8 no 3 pp 861ndash8752014
[6] B LiuTheory and Practice of Uncertain Programming SpringerBerlin Germany 2009
[7] Y Jin and J Branke ldquoEvolutionary optimization in uncertainenvironmentsmdasha surveyrdquo IEEE Transactions on EvolutionaryComputation vol 9 no 3 pp 303ndash317 2005
[8] K Deb S Gupta D Daum J Branke A K Mall and D Pad-manabhan ldquoReliability-based optimization using evolutionaryalgorithmsrdquo IEEE Transactions on Evolutionary Computationvol 13 no 5 pp 1054ndash1074 2009
[9] B Liu and Y-K Liu ldquoExpected value of fuzzy variable and fuzzyexpected value modelsrdquo IEEE Transactions on Fuzzy Systemsvol 10 no 4 pp 445ndash450 2002
[10] Y Sun and Y L Gao ldquoAn improved differential evolutionalgorithmof stochastic expected valuemodelsrdquoMicroelectronicsamp Computer vol 29 no 4 pp 23ndash25 2012
[11] D Dasgupta S Yu and F Nino ldquoRecent advances in artificialimmune systems models and applicationsrdquo Applied Soft Com-puting Journal vol 11 no 2 pp 1574ndash1587 2011
[12] K Trojanowski and S T Wierzchon ldquoImmune-based algo-rithms for dynamic optimizationrdquo Information Sciences vol 179no 10 pp 1495ndash1515 2009
[13] Y-C Hsieh and P-S You ldquoAn effective immune based two-phase approach for the optimal reliability-redundancy alloca-tion problemrdquo Applied Mathematics and Computation vol 218no 4 pp 1297ndash1307 2011
[14] Q Zhao R Yang and F Duan ldquoAn immune clonal hybrid algo-rithm for solving stochastic chance-constrained programmingrdquoJournal of Computational Information Systems vol 8 no 20 pp8295ndash8302 2012
[15] W Hoeffding ldquoProbability inequalities for sums of boundedrandom variablesrdquo Journal of the American Statistical Associa-tion vol 58 pp 13ndash30 1963
[16] Z Lin and Z Bai Probability Inequalities Springer BerlinGermany 2010
[17] K L Chung A Course in Probability Theory Academic Press2001
[18] M Olguin-Carbajal E Alba and J Arellano-Verdejo ldquoMicro-differential evolution with local search for high dimensionalproblemsrdquo in Proceedings of the IEEE Congress on EvolutionaryComputation (CEC rsquo13) pp 48ndash54 June 2013
[19] C A Poojari and B Varghese ldquoGenetic algorithm basedtechnique for solving chance constrained problemsrdquo EuropeanJournal of Operational Research vol 185 no 3 pp 1128ndash11542008
[20] Z-H Zhang ldquoNoisy immune optimization for chance-constrained programming problemsrdquo Applied Mechanics andMaterials vol 48-49 pp 740ndash744 2011
[21] C-H Chen ldquoEfficient sampling for simulation-based optimiza-tion under uncertaintyrdquo in Proceedings of the 4th InternationalSymposium on Uncertainty Modeling and Analysis (ISUMA rsquo03)pp 386ndash391 IEEE College Park Md USA September 2003
[22] E Mezura-Montes and C A Coello Coello ldquoA simple multi-membered evolution strategy to solve constrained optimizationproblemsrdquo IEEE Transactions on Evolutionary Computationvol 9 no 1 pp 1ndash17 2005
[23] B Varghese and C A Poojari ldquoGenetic algorithm basedtechnique for solving chance-constrained problems arising inrisk managementrdquo Tech Rep 2004
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Scientific Programming 5
[Figure 1: The flowchart of μIOA. Its boxes comprise: initial antibody population; empirical feasibility identification, splitting the population into empirical feasible and empirical infeasible antibodies; objective evaluation; proliferation; constraint violation; local search; mutation; elitism clone; comparison between parents and clones; replacement; new population; and a termination check that outputs the best solution.]
updated through proliferation, mutation, and selection. Step 7 makes the empirically feasible antibodies search for better clones through proliferation and polynomial mutation. Once an antibody cannot produce a valuable clone, a reported local search algorithm is used to perform local evolution so as to enhance the quality of solution search. The purpose of Step 8 is to urge the poor antibodies to create diverse clones through proliferation and nonuniform mutation.
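The cycle around Figure 1 can be sketched schematically as below. This is a minimal reconstruction for a minimization-style empirical objective, not the authors' implementation; every operator argument (`proliferate_fn`, `mutate_fn`, `local_search_fn`, and the feasibility/violation/objective estimators) is a hypothetical placeholder for the corresponding operator in the paper.

```python
def micro_ioa(init_pop, feasible_fn, violation_fn, objective_fn,
              proliferate_fn, mutate_fn, local_search_fn, max_iter=100):
    """Schematic sketch of the Figure 1 cycle for a minimization-style
    empirical objective; all *_fn arguments are user-supplied placeholders,
    not the paper's exact operators."""
    pop = list(init_pop)
    best = None
    for _ in range(max_iter):
        feasible = [x for x in pop if feasible_fn(x)]
        infeasible = [x for x in pop if not feasible_fn(x)]
        next_pop = []
        for x in feasible:
            # empirically feasible antibodies: proliferation + mutation,
            # keeping the best clone (elitism)
            clones = [mutate_fn(c) for c in proliferate_fn(x)]
            champ = min(clones, key=objective_fn)
            if objective_fn(champ) >= objective_fn(x):
                champ = local_search_fn(x)  # no valuable clone: local evolution
            next_pop.append(min(x, champ, key=objective_fn))
        for x in infeasible:
            # poor antibodies: diverse clones, selected by constraint violation
            clones = [mutate_fn(c) for c in proliferate_fn(x)]
            next_pop.append(min(clones + [x], key=violation_fn))
        pop = next_pop
        for x in feasible:
            if best is None or objective_fn(x) < objective_fn(best):
                best = x
    return best
```

The split into a feasibility-driven branch and a violation-driven branch mirrors the two arms of the flowchart; sampling-based estimators would be plugged in as `objective_fn` and `feasible_fn`.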
Additionally, μIOA's computational complexity is decided by Steps 4, 6, and 7.2. Step 4 needs at most \(N(IM_\delta + J + K + 1)\) evaluations to calculate the empirical constraint violations. In Step 6, we need to compute the empirical objective values of the antibodies in \(B\), which takes at most \(Nn_N\) evaluations. Step 7.2 enforces mutation at most \(Np \times C_{\max}\) times. Consequently, μIOA's computational complexity in the worst case can be given by

\[
O_c = O\left(N(IM_\delta + J + K) + Nn_N + NpC_{\max}\right) = O\left(N(IM_\delta + J + K + n_N + pC_{\max})\right). \tag{12}
\]
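As a quick arithmetic check of (12), the worst-case evaluation count can be tallied directly. The parameter values below are illustrative assumptions (only the small population size N = 5 echoes Section 5), not values from the paper:

```python
def worst_case_cost(N, I, M_delta, J, K, n_N, p, C_max):
    """Tally the worst-case evaluation count behind (12):
    O(N(I*M_delta + J + K) + N*n_N + N*p*C_max)."""
    return N * (I * M_delta + J + K) + N * n_N + N * p * C_max

# Illustrative values only; I, J, K, p, C_max are assumptions.
cost = worst_case_cost(N=5, I=2, M_delta=30, J=1, K=1, n_N=20, p=100, C_max=2)
```

Factoring out \(N\) gives the same count, matching the second form of (12).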
5 Experimental Analysis
Our experiments are executed on a personal computer (CPU 3 GHz, RAM 2 GB) with VC++ software. In order to examine μIOA's characteristics, four intelligent optimization algorithms, that is, two competitive steady-state genetic algorithms (SSGA-A and SSGA-B) [19] and two recent immune optimization approaches, NIOA-A and NIOA-B [20], are taken to participate in the comparative analysis by means of two 100-dimensional multimodal expected value optimization problems. It is pointed out that the two genetic algorithms, with the same fixed sampling size for each individual, can still solve (EP) problems, since their constraint scoring functions are designed based on static sampling strategies; the two immune algorithms use dynamic sampling sizes for all individuals presented in the process of evolution. On the other hand, since NIOA-B cannot effectively handle high-dimensional problems, we in this section improve it by OCBA [21]. These comparative approaches, together with μIOA, share the same termination criterion, namely, a total of \(8 \times 10^5\) evaluations, while being executed 30 times on each test problem. Their parameter settings are the same as those in their literatures, except for their evolving population sizes. After manual experimental tuning, they take population size 40; μIOA takes a small population size within 3 and 5, while the three parameters \(m_0\), \(n_0\), and \(C_{\max}\), as crucial efficiency parameters, are usually set as small integers. We here set \(N = 5\), \(m_0 = 30\), \(n_0 = 20\), \(C_{\max} = 2\), \(\epsilon = 0.1\), and \(\delta = 0.05\). Additionally, the best solutions acquired by all the algorithms for each test problem are reevaluated \(10^6\) times because of the demand of algorithm comparison.
Example 5. Consider

\[
\begin{aligned}
\max\quad & E\left[\left|\frac{\sum_{i=1}^{p}\cos^4(x_i) - 2\prod_{i=1}^{p}\cos^2(x_i)}{\sqrt{\sum_{i=1}^{p} i x_i^2}}\right| + \xi\right] \\
\text{s.t.}\quad & E\left[0.75 - \eta_1 \prod_{i=1}^{p} x_i\right] \le 0, \\
& \sum_{i=1}^{p} E\left[\eta_2 x_i - 7.5p\right] \le 0, \\
& 0 \le x_i \le 10,\quad 1 \le i \le p, \\
& \xi \sim N(0, 1),\quad \eta_1 \sim N(0.0015, 0.2),\quad \eta_2 \sim N(1, 2).
\end{aligned} \tag{13}
\]

Table 1: Comparison of statistical results for Example 5.

Algor.    Max    Min    Mean   Std dev   CI              FR(%)   AR(s)
SSGA-A    0.14   0.10   0.11   0.01      [0.11, 0.12]    80      4.95
SSGA-B    0.13   0.10   0.11   0.01      [0.11, 0.12]    87      4.96
NIOA-A    0.25   0.16   0.21   0.03      [0.20, 0.22]    80      2.76
NIOA-B    0.26   0.17   0.20   0.02      [0.19, 0.21]    87      2.47
μIOA      0.39   0.22   0.29   0.05      [0.27, 0.31]    90      1.61

CI represents the confidence interval of objective values for the solutions acquired; FR stands for the rate of feasible solutions among all the acquired solutions; AR denotes the average runtime required by 30 runs.
This is a multimodal expected value programming problem obtained by modifying a static multimodal optimization problem [22], where \(p = 100\). The main difficulty in solving such a problem is that the high dimension and multimodality make it hard to find the desired solution. We solve the above problem by means of the approximation model SAM as in Section 2, instead of transforming such a model into a deterministic analytical one. After 30 runs, each of the above algorithms acquires 30 best solutions used for comparison. Their statistical results are listed in Table 1, while Figures 2 and 3 draw the box plots of the objective values acquired and the average search curves, respectively.
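Under the sample-dependent approximation model, a candidate for Example 5 is scored by replacing each expectation in (13) with a sample average. The sketch below is illustrative, not the authors' VC++ code; treating the second parameter of \(N(\cdot,\cdot)\) as a standard deviation is an assumption.

```python
import math
import random

def example5_empirical(x, m=30, seed=0):
    """Sample-average estimates for Example 5: returns the empirical objective
    and both expected-value constraints (a positive g1 or g2 means the
    candidate is empirically infeasible)."""
    rnd = random.Random(seed)
    p = len(x)
    prod_x = math.prod(x)
    numerator = abs(sum(math.cos(v) ** 4 for v in x)
                    - 2.0 * math.prod(math.cos(v) ** 2 for v in x))
    base = numerator / math.sqrt(sum((i + 1) * v * v for i, v in enumerate(x)))
    obj = g1 = g2 = 0.0
    for _ in range(m):
        xi = rnd.gauss(0.0, 1.0)        # xi ~ N(0, 1)
        eta1 = rnd.gauss(0.0015, 0.2)   # eta1 ~ N(0.0015, 0.2), std assumed
        eta2 = rnd.gauss(1.0, 2.0)      # eta2 ~ N(1, 2), std assumed
        obj += base + xi                          # sampled objective value
        g1 += 0.75 - eta1 * prod_x                # E[0.75 - eta1*prod x_i] <= 0
        g2 += sum(eta2 * v - 7.5 * p for v in x)  # sum_i E[eta2*x_i - 7.5p] <= 0
    return obj / m, g1 / m, g2 / m
```

In a racing scheme, `m` would grow adaptively for competitive candidates rather than being fixed.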
In Table 1, the values of FR listed in the seventh column hint that, whereas all the algorithms can find many feasible solutions in each run, their rates of feasible solutions differ; μIOA can acquire many more feasible solutions than the compared approaches. This illustrates that the constraint handling approach RCEA, presented in Section 3, can ensure that μIOA finds feasible solutions with high probability (90%) in a single run. On the other hand, the statistical results in columns 2 to 6 show that μIOA's solution quality is clearly superior to those acquired by the other approaches, while NIOA-A and NIOA-B are secondary. With respect to performance efficiency, we see easily that μIOA is a high-efficiency optimization algorithm, as it spends the least time seeking the desired solution in a run. We also notice that SSGA-A and SSGA-B present high computational complexity, reflected by their average runtime, which shows that such genetic algorithms with static sampling strategies easily cause expensive computational cost.
[Figure 2: Example 5 — comparison of the box plots acquired (objective values of SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA).]

[Figure 3: Example 5 — comparison of the average search curves of SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA (average objective value f versus evaluation count n, 0 to 10000).]
The box plots in Figure 2, each formed by 30 objective values per algorithm, exhibit that, in addition to μIOA obtaining the best effect by comparison with the other four algorithms, NIOA-A and NIOA-B, with similar performance characteristics, work well over SSGA-A and SSGA-B. By Figure 3 we observe that μIOA can find the desired solution rapidly and achieve a stable search, but the other algorithms, in particular the two genetic algorithms, easily get trapped in local search. In total, when solving the above higher-dimensional problem, the five algorithms have different efficiencies, solution qualities, and search characteristics. μIOA suits the above problem and exhibits the best performance; the main reason is that it can effectively combine RCEA with ROEA to find many more empirically feasible individuals and urge them to evolve gradually towards the desired regions by proliferation and mutation.
Table 2: Comparison of statistical results for Example 6.

Algor.    Max     Min     Mean    Std dev   CI               FR(%)   AR(s)
SSGA-A    488.0   401.8   458.9   19.2      [452.1, 465.8]   53      2.66
SSGA-B    489.7   415.7   463.0   17.4      [456.8, 469.2]   53      2.58
NIOA-A    410.8   378.2   393.9   8.7       [390.8, 397.0]   100     1.60
NIOA-B    414.6   371.3   395.2   3.9       [391.3, 399.0]   100     1.39
μIOA      464.7   444.3   456.8   5.5       [454.1, 458.0]   100     1.21
However, the compared approaches present relatively weak performances. In particular, SSGA-A and SSGA-B easily get trapped in local search and spend the most runtime to solve the above problem, as their constraint handling and selection techniques can hardly adapt to high-dimensional multimodal problems. NIOA-A and NIOA-B are superior to these two genetic algorithms owing to their adaptive sampling and constraint handling techniques.
Example 6. Consider

\[
\begin{aligned}
\max\quad & E\left[\sum_{i=1}^{p} x_i \cdot \sin(\pi i x_i) + \xi\right] \\
\text{s.t.}\quad & E\left[\sum_{i=1}^{p} \eta_1 x_i\right] \le 500, \\
& E\left[\sum_{i=1}^{p} \eta_2 x_i^2\right] \le 3000, \\
& x_i \ge 0,\quad 1 \le i \le p, \\
& \xi \sim N(0, 1),\quad \eta_1 \sim U(0.8, 1.2),\quad \eta_2 \sim N(1, 0.5).
\end{aligned} \tag{14}
\]
This is a still more difficult multimodal optimization problem, obtained by modifying a static multimodal optimization problem [23]; it includes 100 decision variables (i.e., \(p = 100\)) and 3 random variables, which greatly influence the quality of solution search. The difficulty in solving such a problem is that the objective function is multimodal and the decision variables are unbounded above. Similar to the above experiment, we acquire the statistical results of the approaches given in Table 2, while the corresponding box plots and average search curves are drawn in Figures 4 and 5, respectively.
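As with Example 5, each expectation in (14) can be estimated by a sample average; since \(\eta_1\) and \(\eta_2\) multiply the whole sums, the empirical constraints reduce to sampled scalings of the deterministic sums. The sketch below is illustrative only, with the same distribution conventions assumed as before.

```python
import math
import random

def example6_empirical(x, n=20, seed=0):
    """Sample-average scores for Example 6: empirical objective plus the two
    expected-value constraints, returned as c1 = mean[sum eta1*x_i] - 500 and
    c2 = mean[sum eta2*x_i^2] - 3000 (<= 0 means empirically feasible)."""
    rnd = random.Random(seed)
    det = sum(v * math.sin(math.pi * (i + 1) * v) for i, v in enumerate(x))
    obj = c1 = c2 = 0.0
    for _ in range(n):
        xi = rnd.gauss(0.0, 1.0)      # xi ~ N(0, 1)
        eta1 = rnd.uniform(0.8, 1.2)  # eta1 ~ U(0.8, 1.2)
        eta2 = rnd.gauss(1.0, 0.5)    # eta2 ~ N(1, 0.5), std assumed
        obj += det + xi
        c1 += eta1 * sum(x)
        c2 += eta2 * sum(v * v for v in x)
    return obj / n, c1 / n - 500.0, c2 / n - 3000.0
```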
The values of FR in Table 2 illustrate that it is also difficult to solve Example 6, owing to the random variables and high dimensionality. Even so, NIOA-A, NIOA-B, and μIOA can all acquire feasible solutions in each run. SSGA-A and SSGA-B, however, can only reach feasibility rates of at most 53%; namely, they acquire only 53 feasible solutions over 100 executions. This, along with the values of FR in Table 1, shows that although these two approaches can acquire larger values of Mean than the other approaches, they are poor with respect to solution quality, efficiency, and search stability; consequently, they need some improvements, for example, in their constraint handling. We notice that μIOA is better than either NIOA-A or NIOA-B because of its efficiency and its value of Mean. We emphasize that, whereas NIOA-A and NIOA-B can only obtain small values of Mean by comparison with μIOA, they are still valuable when solving such a kind of hard problem. Relatively, NIOA-B behaves well over NIOA-A.

[Figure 4: Example 6 — comparison of the box plots acquired (objective values of SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA).]

[Figure 5: Example 6 — comparison of the average search curves of SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.]
It may seem from Figures 4 and 5 that SSGA-A and SSGA-B are superior to NIOA-A, NIOA-B, and μIOA, since their box plots in Figure 4 and average search curves in Figure 5 are far from the horizontal line. As a matter of fact, column 7 in Table 2 shows clearly that these two genetic algorithms cannot find feasible solutions 47 times out of 100 runs, whereas the other approaches are the opposite. This indicates that infeasible solutions in the decision domain have larger objective function values than feasible ones.

Summarily, when solving the above two examples, the five algorithms present different characteristics. SSGA-A and SSGA-B have similar performance attributes, and so do NIOA-A and NIOA-B; μIOA performs well over the compared approaches, while NIOA-B is secondary.
6 Conclusions and Further Work
We in this work concentrate on studying a microimmune optimization algorithm to solve a general kind of constrained expected value programming model, relying upon transforming such a model into a sample-dependent approximation model (SAM). Such a model requires that different candidate solutions have different sample sizes and, especially, that each candidate has two kinds of sample sizes. Subsequently, two lower bound estimates of random vectors are theoretically developed and respectively applied to handling the expected value constraints of individuals in the current population and to computing their empirical objective values. Based on these two estimates and the idea of racing ranking, two racing sampling methods are suggested to execute individual evaluation and constraint handling, respectively. Afterwards, a racing sampling based microimmune optimization algorithm, μIOA, is proposed to deal with such an approximation model, with the merits of small population, strong disturbance suppression, effective constraint handling, adaptive sampling, and high efficiency. The theoretical analysis has indicated that μIOA's computational complexity depends mainly on \(n_N\), \(M_\delta\), and \(C_{\max}\), owing to the small population size \(N\) and known problem parameters.

By means of the higher-dimensional multimodal test problems, the comparative experimental results support some conclusions: (1) RCEA and RCOEA can make μIOA dynamically determine the sample sizes of different kinds of individuals in the process of evolution; (2) the local search approach adopted can help μIOA strengthen local search; (3) stochastic parameters may be efficiently addressed by adaptive sampling techniques; (4) μIOA performs well over the compared approaches; and (5) SSGA-A and SSGA-B need some improvements for higher-dimensional problems. Further, whereas we have made some studies on how to explore immune optimization approaches to solve higher-dimensional constrained expected value programming problems, some issues remain to be studied; for example, μIOA's theoretical convergence analysis needs to be addressed, while its engineering applications are to be discussed.
Appendix
Proof of Lemma 3. Let \(x \in X_m \setminus X_\epsilon\). Since there exists \(i_0\), with \(1 \le i_0 \le I\), such that \(E[G_{i_0}(x, \xi)] > \epsilon\), by (3) we obtain that

\[
\begin{aligned}
\Pr\{x \in X_m\} &= \Pr\left\{\frac{1}{m}\sum_{j=1}^{m} G_i(x, \xi_j) \le 0,\ 1 \le i \le I\right\} \\
&\le \Pr\left\{\frac{1}{m}\sum_{j=1}^{m} G_{i_0}(x, \xi_j) - E[G_{i_0}(x, \xi)] \le -\epsilon\right\} \\
&\le \exp\left[\frac{-2\epsilon^2 m}{(a_{i_0}(x) - b_{i_0}(x))^2}\right] \le \exp\left[\frac{-2\epsilon^2 m}{\Delta^2}\right].
\end{aligned} \tag{A1}
\]

Hence

\[
1 - \Pr\{X_m \subseteq X_\epsilon\} = \Pr\{\exists x \in X_m,\ \text{s.t.}\ x \notin X_\epsilon\} \le \sum_{x \in A \setminus X_\epsilon} \Pr\{x \in X_m\} \le |A| \exp\left[\frac{-2\epsilon^2 m}{\Delta^2}\right]. \tag{A2}
\]

Thereby

\[
\Pr\{X_m \subseteq X_\epsilon\} \ge 1 - |A| \exp\left[\frac{-2\epsilon^2 m}{\Delta^2}\right]. \tag{A3}
\]

So the conclusion is true.
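A practical corollary of (A3): to guarantee \(\Pr\{X_m \subseteq X_\epsilon\} \ge \beta\) it suffices that \(|A|\exp(-2\epsilon^2 m/\Delta^2) \le 1 - \beta\), that is, \(m \ge (\Delta^2/(2\epsilon^2))\ln(|A|/(1-\beta))\). The sketch below checks this with illustrative values (\(\Delta\), \(|A|\), and \(\beta\) are assumptions, not values from the paper):

```python
import math

def min_m_for_constraints(eps, delta, card_A, beta):
    """Smallest integer m with card_A * exp(-2*eps**2*m/delta**2) <= 1 - beta,
    so that (A3) yields Pr{X_m subset of X_eps} >= beta."""
    return math.ceil(delta ** 2 / (2.0 * eps ** 2)
                     * math.log(card_A / (1.0 - beta)))

m = min_m_for_constraints(eps=0.1, delta=1.0, card_A=1000, beta=0.95)
```

Note the bound grows only logarithmically in \(|A|\), which is what makes a population-wide guarantee affordable.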
Proof of Lemma 4. Since \(B\) is finite, there exists \(y_0 \in B\) such that \(E[f(y_0, \xi)] = f^*\). If \(x \in S_n \setminus S_\epsilon\), then

\[
E[f(x, \xi)] > E[f(y_0, \xi)] + \epsilon. \tag{A4}
\]

Hence, if

\[
\frac{1}{n}\sum_{k=1}^{n} f(x, \xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi_k), \tag{A5}
\]

we have that

\[
\frac{1}{n}\sum_{k=1}^{n} f(x, \xi_k) < E[f(x, \xi)] - \frac{\epsilon}{2} \tag{A6}
\]

or

\[
\frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi_k) > E[f(y_0, \xi)] + \frac{\epsilon}{2}. \tag{A7}
\]

Otherwise, we acquire that

\[
\frac{1}{n}\sum_{k=1}^{n} f(x, \xi_k) \ge E[f(x, \xi)] - \frac{\epsilon}{2} > E[f(y_0, \xi)] + \frac{\epsilon}{2} \ge \frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi_k). \tag{A8}
\]

This yields a contradiction. Consequently, by (3), we obtain that

\[
\begin{aligned}
\Pr\{x \in S_n\} &= \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(x, \xi_k) \le \min_{y \in B}\frac{1}{n}\sum_{k=1}^{n} f(y, \xi_k)\right\} \\
&\le \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(x, \xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi_k)\right\} \\
&\le \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(x, \xi_k) < E[f(x, \xi)] - \frac{\epsilon}{2}\right\} + \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(y_0, \xi_k) > E[f(y_0, \xi)] + \frac{\epsilon}{2}\right\} \\
&\le 2\exp\left[\frac{-\epsilon^2 n}{2\Gamma^2}\right].
\end{aligned} \tag{A9}
\]

Hence

\[
1 - \Pr\{S_n \subseteq S_\epsilon\} = \Pr\{\exists x \in S_n,\ \text{s.t.}\ x \notin S_\epsilon\} \le \sum_{x \in B \setminus S_\epsilon} \Pr\{x \in S_n\} \le 2|B| \exp\left[\frac{-\epsilon^2 n}{2\Gamma^2}\right]. \tag{A10}
\]

This way, it follows from (A10) that the above conclusion is true.
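Symmetrically to the constraint case, (A10) can be inverted into a lower bound sample size for the objective: \(\Pr\{S_n \subseteq S_\epsilon\} \ge \beta\) once \(2|B|\exp(-\epsilon^2 n/(2\Gamma^2)) \le 1 - \beta\), that is, \(n \ge (2\Gamma^2/\epsilon^2)\ln(2|B|/(1-\beta))\). A small check with illustrative values (\(\Gamma\), \(|B|\), and \(\beta\) are assumptions):

```python
import math

def min_n_for_objective(eps, gamma, card_B, beta):
    """Smallest integer n with 2*card_B * exp(-eps**2*n/(2*gamma**2)) <= 1 - beta,
    so that (A10) yields Pr{S_n subset of S_eps} >= beta."""
    return math.ceil(2.0 * gamma ** 2 / eps ** 2
                     * math.log(2.0 * card_B / (1.0 - beta)))

n = min_n_for_objective(eps=0.1, gamma=1.0, card_B=1000, beta=0.95)
```

The factor of 2 relative to the constraint bound comes from the two one-sided deviation events summed in (A9).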
Competing Interests
The authors declare that they have no competing interests
Acknowledgments
This work is supported in part by the National Natural Science Foundation (61563009) and the Doctoral Fund of the Ministry of Education of China (20125201110003).
References
[1] J. Luedtke and S. Ahmed, "A sample approximation approach for optimization with probabilistic constraints," SIAM Journal on Optimization, vol. 19, no. 2, pp. 674–699, 2008.
[2] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with several chance constraints," Operations Research Letters, vol. 40, no. 3, pp. 207–211, 2012.
[3] A. Shapiro, D. Dentcheva, and A. Ruszczynski, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, Pa, USA, 2009.
[4] W. Wang and S. Ahmed, "Sample average approximation of expected value constrained stochastic programs," Operations Research Letters, vol. 36, no. 5, pp. 515–519, 2008.
[5] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with expected value constraints," Optimization Letters, vol. 8, no. 3, pp. 861–875, 2014.
[6] B. Liu, Theory and Practice of Uncertain Programming, Springer, Berlin, Germany, 2009.
[7] Y. Jin and J. Branke, "Evolutionary optimization in uncertain environments—a survey," IEEE Transactions on Evolutionary Computation, vol. 9, no. 3, pp. 303–317, 2005.
[8] K. Deb, S. Gupta, D. Daum, J. Branke, A. K. Mall, and D. Padmanabhan, "Reliability-based optimization using evolutionary algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1054–1074, 2009.
[9] B. Liu and Y.-K. Liu, "Expected value of fuzzy variable and fuzzy expected value models," IEEE Transactions on Fuzzy Systems, vol. 10, no. 4, pp. 445–450, 2002.
[10] Y. Sun and Y. L. Gao, "An improved differential evolution algorithm of stochastic expected value models," Microelectronics & Computer, vol. 29, no. 4, pp. 23–25, 2012.
[11] D. Dasgupta, S. Yu, and F. Nino, "Recent advances in artificial immune systems: models and applications," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1574–1587, 2011.
[12] K. Trojanowski and S. T. Wierzchon, "Immune-based algorithms for dynamic optimization," Information Sciences, vol. 179, no. 10, pp. 1495–1515, 2009.
[13] Y.-C. Hsieh and P.-S. You, "An effective immune based two-phase approach for the optimal reliability-redundancy allocation problem," Applied Mathematics and Computation, vol. 218, no. 4, pp. 1297–1307, 2011.
[14] Q. Zhao, R. Yang, and F. Duan, "An immune clonal hybrid algorithm for solving stochastic chance-constrained programming," Journal of Computational Information Systems, vol. 8, no. 20, pp. 8295–8302, 2012.
[15] W. Hoeffding, "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, vol. 58, pp. 13–30, 1963.
[16] Z. Lin and Z. Bai, Probability Inequalities, Springer, Berlin, Germany, 2010.
[17] K. L. Chung, A Course in Probability Theory, Academic Press, 2001.
[18] M. Olguin-Carbajal, E. Alba, and J. Arellano-Verdejo, "Micro-differential evolution with local search for high dimensional problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 48–54, June 2013.
[19] C. A. Poojari and B. Varghese, "Genetic algorithm based technique for solving chance constrained problems," European Journal of Operational Research, vol. 185, no. 3, pp. 1128–1154, 2008.
[20] Z.-H. Zhang, "Noisy immune optimization for chance-constrained programming problems," Applied Mechanics and Materials, vol. 48-49, pp. 740–744, 2011.
[21] C.-H. Chen, "Efficient sampling for simulation-based optimization under uncertainty," in Proceedings of the 4th International Symposium on Uncertainty Modeling and Analysis (ISUMA '03), pp. 386–391, IEEE, College Park, Md, USA, September 2003.
[22] E. Mezura-Montes and C. A. Coello Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, 2005.
[23] B. Varghese and C. A. Poojari, "Genetic algorithm based technique for solving chance-constrained problems arising in risk management," Tech. Rep., 2004.
6 Scientific Programming
Table 1 Comparison of statistical results for Example 5
Algor Max Min Mean Std dev CI FR()
AR(s)
SSGA-A 014 010 011 001 [011 012] 80 495SSGA-B 013 010 011 001 [011 012] 87 496NIOA-A 025 016 021 003 [020 022] 80 276NIOA-B 026 017 020 002 [019 021] 87 247120583IOA 039 022 029 005 [027 031] 90 161CI represents the confidence interval of objective values for the solutionsacquired FR stands for the rate of feasible solutions among all the gottensolutions AR denotes the average runtime required by 30 runs
119901
sum
119894=1
119864 [1205782119909119894minus 75119901] le 0
0 le 119909119894le 10
120585 sim 119873 (0 1)
1 le 119894 le 119901
1205781sim 119873 (00015 02)
1205782sim 119873 (1 2)
(13)
This is a multimodal expected value programming prob-lem gotten through modifying a static multimodal optimiza-tion problem [22] where 119901 = 100 The main difficultyof solving such problem is that the high dimension andmultimodality make it difficult to find the desired solutionWe solve the above problem by means of the approximationmodel SAM as in Section 2 instead of transforming suchmodel into a deterministic analytical one After respectively30 runs each of the above algorithms acquires 30 bestsolutions used for comparison Their statistical results arelisted in Table 1 while Figures 2 and 3 draw the box plots ofthe objective values acquired and their average search curvesrespectively
In Table 1 the values on FR listed in the seventh columnhint that whereas all the algorithms can find many feasiblesolutions for each run their rates of feasible solutions aredifferent 120583IOA can acquire many more feasible solutionsthan the compared approaches This illustrates that the con-straint handling approach RCEA presented in Section 3 canensure that 120583IOA find feasible solutions with high probability90 for a single run On the other hand the statisticalresults in columns 2 to 6 show that 120583IOArsquos solution qualityis clearly superior to those acquired by other approachesand meanwhile NIOA-A and NIOA-B are secondary Withrespect to performance efficiency we see easily that 120583IOAis a high-efficiency optimization algorithm as it spendsthe least time to seek the desired solution in a run Wealso notice that SSGA-A and SSGA-B present their highcomputational complexity because of their average runtimewhich shows that such two genetic algorithms with staticsampling strategies cause easily expensively computationalcost
Figure 2: Example 5, comparison of box plots of the objective values acquired by SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
Figure 3: Example 5, comparison of average search curves (objective value f versus evaluation count n) for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
The box plots in Figure 2, formed by the 30 objective values of each algorithm, show that, besides μIOA obtaining the best effect in comparison with the other four algorithms, NIOA-A and NIOA-B, with similar performance characteristics, work well over SSGA-A and SSGA-B. From Figure 3 we observe that μIOA finds the desired solution rapidly and achieves stable search, whereas the other algorithms, in particular the two genetic algorithms, easily get trapped in local search. Overall, when solving the above higher-dimensional problem, the five algorithms have different efficiencies, solution qualities, and search characteristics. μIOA suits this problem and exhibits the best performance; the main reason is that it effectively combines RCEA with ROEA to find many more empirically feasible individuals and urges them to evolve gradually towards the desired regions through proliferation and mutation.
Scientific Programming 7
Table 2: Comparison of statistical results for Example 6.

Algor.   Max     Min     Mean    Std. dev.  CI                FR(%)  AR(s)
SSGA-A   488.0   401.8   458.9   19.2       [452.1, 465.8]    53     26.6
SSGA-B   489.7   415.7   463.0   17.4       [456.8, 469.2]    53     25.8
NIOA-A   410.8   378.2   393.9   8.7        [390.8, 397.0]    100    16.0
NIOA-B   414.6   371.3   395.2   3.9        [391.3, 399.0]    100    13.9
μIOA     464.7   444.3   456.8   5.5        [454.1, 458.0]    100    12.1
However, the compared approaches present relatively weak performance. In particular, SSGA-A and SSGA-B easily get trapped in local search and spend the most runtime on this problem, as their constraint handling and selection techniques are ill-suited to high-dimensional multimodal problems. NIOA-A and NIOA-B are superior to these two genetic algorithms owing to their adaptive sampling and constraint handling techniques.
Example 6. Consider

$$\max\ E\Big[\sum_{i=1}^{p} x_i \cdot \sin(\pi i x_i) + \xi\Big]$$
$$\text{s.t.}\quad E\Big[\sum_{i=1}^{p} \eta_1 x_i\Big] \le 500, \qquad E\Big[\sum_{i=1}^{p} \eta_2 x_i^2\Big] \le 3000,$$
$$x_i \ge 0, \quad 1 \le i \le p, \qquad \xi \sim N(0,1), \quad \eta_1 \sim U(0.8,\, 1.2), \quad \eta_2 \sim N(1,\, 0.5). \tag{14}$$
This is another difficult multimodal optimization problem, obtained by modifying a static multimodal optimization problem [23]; it includes 100 decision variables (i.e., p = 100) and 3 random variables that greatly influence the quality of solution search. The difficulty in solving it is that the objective function is multimodal and the decision variables are unbounded above. Similarly to the previous experiment, we report the statistical results of the approaches in Table 2, while the corresponding box plots and average search curves are drawn in Figures 4 and 5, respectively.
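An empirical evaluation of the objective in (14) needs only the additive noise term to be sampled, since the multimodal part is deterministic. The sketch below is illustrative Python (not the authors' code) and assumes the reconstructed term sin(π i x_i).

```python
import numpy as np

def empirical_objective(x, n, rng):
    """Monte Carlo estimate of E[sum_i x_i * sin(pi*i*x_i) + xi] from (14),
    with xi ~ N(0, 1); n is the sample size allotted to this candidate."""
    i = np.arange(1, len(x) + 1)                  # indices 1..p
    deterministic = np.sum(x * np.sin(np.pi * i * x))
    noise = rng.normal(0.0, 1.0, size=n).mean()   # empirical E[xi], near 0
    return deterministic + noise
```

Because the noise enters additively with mean zero, a small n already gives a low-variance estimate here; the harder stochastic effort in (14) lies in the two expected-value constraints.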
The FR values in Table 2 illustrate that Example 6 is also difficult to solve, owing to its random variables and high dimensionality. Even so, NIOA-A, NIOA-B, and μIOA all acquire feasible solutions in each run; SSGA-A and SSGA-B, however, reach a feasible-solution rate of at most 53%, that is, they acquire only 53 feasible solutions over 100 executions. This, together with the FR values in Table 1, shows that although these two approaches can
Figure 4: Example 6, comparison of box plots of the objective values acquired by SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
Figure 5: Example 6, comparison of average search curves (objective value f versus evaluation count n) for SSGA-A, SSGA-B, NIOA-A, NIOA-B, and μIOA.
acquire larger values of Mean than the other approaches, they are poor in solution quality, efficiency, and search stability. Consequently, they need some improvements, for example, in their constraint handling. We notice that μIOA is better than either NIOA-A or NIOA-B because of its efficiency and its value of Mean. We emphasize that, whereas NIOA-A and NIOA-B only obtain small values of Mean in comparison with μIOA, they are still valuable when solving this kind of hard problem. Relatively, NIOA-B performs somewhat better than NIOA-A.
From Figures 4 and 5 it may seem that SSGA-A and SSGA-B are superior to NIOA-A, NIOA-B, and μIOA, since their box plots in Figure 4 and average search curves in Figure 5 lie highest. As a matter of fact, column 7 in Table 2 shows clearly that these two genetic algorithms fail to find feasible solutions in 47 out of 100 runs, whereas the other approaches always succeed.
This indicates that infeasible solutions in the decision domain have larger objective function values than feasible ones.
In summary, when solving the above two examples the five algorithms present different characteristics: SSGA-A and SSGA-B have similar performance attributes, and so do NIOA-A and NIOA-B; μIOA performs well over the compared approaches, while NIOA-B is second best.
6 Conclusions and Further Work
In this work we study a microimmune optimization algorithm for a general kind of constrained expected value programming model, relying on transforming the model into a sample-dependent approximation model (SAM). Such a model requires that different candidate solutions have different sample sizes; in particular, each candidate has two kinds of sample sizes. Subsequently, two lower bound estimates for random vectors are theoretically developed and applied, respectively, to handling the expected value constraints of individuals in the current population and to computing their empirical objective values. Based on these two estimates and the idea of racing ranking, two racing sampling methods are suggested to perform individual evaluation and constraint handling, respectively. Afterwards, a racing sampling based microimmune optimization algorithm, μIOA, is proposed to deal with the approximation model, with the merits of small population, strong disturbance suppression, effective constraint handling, adaptive sampling, and high efficiency. The theoretical analysis indicates that μIOA's computational complexity depends mainly on n, N, M_δ, and C_max, owing to the small population size N and the known problem parameters. On the higher-dimensional multimodal test problems, the comparative experimental results support several conclusions: (1) RCEA and RCOEA enable μIOA to dynamically determine the sample sizes of different kinds of individuals in the process of evolution; (2) the adopted local search approach helps μIOA strengthen local search; (3) stochastic parameters may be efficiently addressed by adaptive sampling techniques; (4) μIOA performs well over the compared approaches; and (5) SSGA-A and SSGA-B need some improvements for higher-dimensional problems. Further, whereas we have made some progress in exploring immune optimization approaches for higher-dimensional constrained expected value programming problems, some issues remain for future study; for example, μIOA's theoretical convergence analysis needs to be established, and its engineering applications are to be investigated.
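As a rough illustration of the racing idea summarized above (ranking-dependent sampling budgets), the following toy allocation rule gives better-ranked individuals larger sample sizes. The linear ramp is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def racing_sample_sizes(empirical_means, n_min=10, n_max=200):
    """Allocate per-individual sample sizes by rank of empirical objective
    value (smaller-is-better convention; negate the values first for a
    maximization problem such as (14)). The best individual receives n_max
    samples, the worst n_min, with a linear ramp between (an assumption)."""
    order = np.argsort(empirical_means)          # best (smallest) first
    ranks = np.empty(len(empirical_means), dtype=int)
    ranks[order] = np.arange(len(empirical_means))
    frac = 1.0 - ranks / max(len(empirical_means) - 1, 1)  # 1 for best, 0 for worst
    return (n_min + frac * (n_max - n_min)).astype(int)
```

The design intent matches the adaptive-sampling conclusion above: competitive individuals are re-evaluated with more samples, while clearly inferior ones are dropped from the race cheaply.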
Appendix
Proof of Lemma 3. Let $\mathbf{x} \in X_m \setminus X_\varepsilon$. Since there exists $i_0$ with $1 \le i_0 \le I$ such that $E[G_{i_0}(\mathbf{x},\xi)] > \varepsilon$, by (3) we obtain that

$$\Pr\{\mathbf{x} \in X_m\} = \Pr\Big\{\frac{1}{m}\sum_{j=1}^{m} G_i(\mathbf{x},\xi_j) \le 0,\ 1 \le i \le I\Big\} \le \Pr\Big\{\frac{1}{m}\sum_{j=1}^{m} G_{i_0}(\mathbf{x},\xi_j) - E[G_{i_0}(\mathbf{x},\xi)] \le -\varepsilon\Big\}$$
$$\le \exp\Big[\frac{-2\varepsilon^2 m}{(a_{i_0}(\mathbf{x}) - b_{i_0}(\mathbf{x}))^2}\Big] \le \exp\Big[\frac{-2\varepsilon^2 m}{\Delta^2}\Big]. \tag{A.1}$$

Hence

$$1 - \Pr\{X_m \subseteq X_\varepsilon\} = \Pr\{\exists\, \mathbf{x} \in X_m\ \text{s.t.}\ \mathbf{x} \notin X_\varepsilon\} \le \sum_{\mathbf{x} \in A \setminus X_\varepsilon} \Pr\{\mathbf{x} \in X_m\} \le |A| \exp\Big[\frac{-2\varepsilon^2 m}{\Delta^2}\Big]. \tag{A.2}$$

Thereby

$$\Pr\{X_m \subseteq X_\varepsilon\} \ge 1 - |A| \exp\Big[\frac{-2\varepsilon^2 m}{\Delta^2}\Big]. \tag{A.3}$$

So the conclusion is true.
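Bound (A.3) can be inverted into a worst-case sample size: to guarantee $\Pr\{X_m \subseteq X_\varepsilon\} \ge 1-\delta$ it suffices that $|A|\exp(-2\varepsilon^2 m/\Delta^2) \le \delta$, that is, $m \ge (\Delta^2/2\varepsilon^2)\ln(|A|/\delta)$. A small numerical check with illustrative parameter values:

```python
import math

def feasibility_sample_size(eps, delta, size_A, Delta):
    """Smallest integer m with size_A * exp(-2*eps**2*m / Delta**2) <= delta,
    i.e. the per-individual sample size that bound (A.3) requires for
    Pr{X_m subset of X_eps} >= 1 - delta. Parameter values are illustrative."""
    return math.ceil(Delta**2 / (2.0 * eps**2) * math.log(size_A / delta))

m = feasibility_sample_size(eps=0.1, delta=0.05, size_A=1000, Delta=1.0)
```

The logarithmic dependence on $|A|$ and $1/\delta$ is what makes sampling tractable: tightening $\delta$ tenfold adds only an additive $\ln 10$ term, whereas halving $\varepsilon$ quadruples $m$.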
Proof of Lemma 4. Since $B$ is finite, there exists $\mathbf{y}_0 \in B$ such that $E[f(\mathbf{y}_0,\xi)] = f^{*}$. If $\mathbf{x} \in S_n \setminus S_\varepsilon$, then

$$E[f(\mathbf{x},\xi)] > E[f(\mathbf{y}_0,\xi)] + \varepsilon. \tag{A.4}$$

Hence, if

$$\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x},\xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0,\xi_k), \tag{A.5}$$

we have that

$$\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x},\xi_k) < E[f(\mathbf{x},\xi)] - \frac{\varepsilon}{2} \tag{A.6}$$

or

$$\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0,\xi_k) > E[f(\mathbf{y}_0,\xi)] + \frac{\varepsilon}{2}. \tag{A.7}$$

Otherwise, we acquire that

$$\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x},\xi_k) \ge E[f(\mathbf{x},\xi)] - \frac{\varepsilon}{2} > E[f(\mathbf{y}_0,\xi)] + \frac{\varepsilon}{2} \ge \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0,\xi_k). \tag{A.8}$$
This yields a contradiction. Consequently, by (3) we obtain that

$$\Pr\{\mathbf{x} \in S_n\} = \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x},\xi_k) \le f^{*}_{n}\Big\} \le \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x},\xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0,\xi_k)\Big\}$$
$$\le \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x},\xi_k) < E[f(\mathbf{x},\xi)] - \frac{\varepsilon}{2}\Big\} + \Pr\Big\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0,\xi_k) > E[f(\mathbf{y}_0,\xi)] + \frac{\varepsilon}{2}\Big\} \le 2\exp\Big[\frac{-\varepsilon^2 n}{2\Gamma^2}\Big], \tag{A.9}$$

where $f^{*}_{n} = \min_{\mathbf{y} \in B} \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y},\xi_k)$ denotes the minimal empirical objective value. Hence

$$1 - \Pr\{S_n \subseteq S_\varepsilon\} = \Pr\{\exists\, \mathbf{x} \in S_n\ \text{s.t.}\ \mathbf{x} \notin S_\varepsilon\} \le \sum_{\mathbf{x} \in B \setminus S_\varepsilon} \Pr\{\mathbf{x} \in S_n\} \le 2|B| \exp\Big[\frac{-\varepsilon^2 n}{2\Gamma^2}\Big]. \tag{A.10}$$

It thus follows from (A.10) that the conclusion is true.
Competing Interests
The authors declare that they have no competing interests
Acknowledgments
This work is supported in part by the National Natural Science Foundation of China (61563009) and the Doctoral Fund of the Ministry of Education of China (20125201110003).
References
[1] J. Luedtke and S. Ahmed, "A sample approximation approach for optimization with probabilistic constraints," SIAM Journal on Optimization, vol. 19, no. 2, pp. 674–699, 2008.
[2] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with several chance constraints," Operations Research Letters, vol. 40, no. 3, pp. 207–211, 2012.
[3] A. Shapiro, D. Dentcheva, and A. Ruszczynski, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, Pa, USA, 2009.
[4] W. Wang and S. Ahmed, "Sample average approximation of expected value constrained stochastic programs," Operations Research Letters, vol. 36, no. 5, pp. 515–519, 2008.
[5] M. Branda, "Sample approximation technique for mixed-integer stochastic programming problems with expected value constraints," Optimization Letters, vol. 8, no. 3, pp. 861–875, 2014.
[6] B. Liu, Theory and Practice of Uncertain Programming, Springer, Berlin, Germany, 2009.
[7] Y. Jin and J. Branke, "Evolutionary optimization in uncertain environments—a survey," IEEE Transactions on Evolutionary Computation, vol. 9, no. 3, pp. 303–317, 2005.
[8] K. Deb, S. Gupta, D. Daum, J. Branke, A. K. Mall, and D. Padmanabhan, "Reliability-based optimization using evolutionary algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1054–1074, 2009.
[9] B. Liu and Y.-K. Liu, "Expected value of fuzzy variable and fuzzy expected value models," IEEE Transactions on Fuzzy Systems, vol. 10, no. 4, pp. 445–450, 2002.
[10] Y. Sun and Y. L. Gao, "An improved differential evolution algorithm of stochastic expected value models," Microelectronics & Computer, vol. 29, no. 4, pp. 23–25, 2012.
[11] D. Dasgupta, S. Yu, and F. Nino, "Recent advances in artificial immune systems: models and applications," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1574–1587, 2011.
[12] K. Trojanowski and S. T. Wierzchon, "Immune-based algorithms for dynamic optimization," Information Sciences, vol. 179, no. 10, pp. 1495–1515, 2009.
[13] Y.-C. Hsieh and P.-S. You, "An effective immune based two-phase approach for the optimal reliability-redundancy allocation problem," Applied Mathematics and Computation, vol. 218, no. 4, pp. 1297–1307, 2011.
[14] Q. Zhao, R. Yang, and F. Duan, "An immune clonal hybrid algorithm for solving stochastic chance-constrained programming," Journal of Computational Information Systems, vol. 8, no. 20, pp. 8295–8302, 2012.
[15] W. Hoeffding, "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, vol. 58, pp. 13–30, 1963.
[16] Z. Lin and Z. Bai, Probability Inequalities, Springer, Berlin, Germany, 2010.
[17] K. L. Chung, A Course in Probability Theory, Academic Press, 2001.
[18] M. Olguin-Carbajal, E. Alba, and J. Arellano-Verdejo, "Micro-differential evolution with local search for high dimensional problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 48–54, June 2013.
[19] C. A. Poojari and B. Varghese, "Genetic algorithm based technique for solving chance constrained problems," European Journal of Operational Research, vol. 185, no. 3, pp. 1128–1154, 2008.
[20] Z.-H. Zhang, "Noisy immune optimization for chance-constrained programming problems," Applied Mechanics and Materials, vol. 48-49, pp. 740–744, 2011.
[21] C.-H. Chen, "Efficient sampling for simulation-based optimization under uncertainty," in Proceedings of the 4th International Symposium on Uncertainty Modeling and Analysis (ISUMA '03), pp. 386–391, IEEE, College Park, Md, USA, September 2003.
[22] E. Mezura-Montes and C. A. Coello Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, 2005.
[23] B. Varghese and C. A. Poojari, "Genetic algorithm based technique for solving chance-constrained problems arising in risk management," Tech. Rep., 2004.
Scientific Programming 7
Table 2 Comparison of statistical results for Example 6
Algor Max Min Mean Std dev CI FR()
AR(s)
SSGA-A 4880 4018 4589 192 [4521 4658] 53 266SSGA-B 4897 4157 4630 174 [4568 4692] 53 258NIOA-A 4108 3782 3939 87 [3908 3970] 100 160NIOA-B 4146 3713 3952 39 [3913 3990] 100 139120583IOA 4647 4443 4568 55 [4541 4580] 100 121
However those compared approaches present relatively weakperformances In particular SSGA-A and SSGA-B get easilyinto local search and spend the most runtime to solve theabove problem as their constraint handling and selectiontechniques are hard to adapt to high-dimensionalmultimodalproblems NIOA-A and NIOA-B are superior to such twogenetic algorithms owing to their adaptive sampling andconstraint handling techniques
Example 6 Consider
max 119864[
119901
sum
119894=1
119909119894sdot sin (120587119894119909
119894) + 120585]
st 119864 [
119901
sum
119894=1
1205781119909119894] le 500
119864 [
119901
sum
119894=1
12057821199092
119894] le 3000
119909119894ge 0
120585 sim 119873 (0 1)
1 le 119894 le 119901
1205781sim 119880 (08 12)
1205782sim 119873 (1 05)
(14)
This is a still difficult multimodal optimization problemobtained by modifying a static multimodal optimizationproblem [23] including 100 decision variables (ie 119901 = 100)and 3 random variables which greatly influence the qualityof solution search The difficulty of solving such problem isthat the objective function is multimodal and the decisionvariables are unbounded Similar to the above experimentwe acquire the statistical results of the approaches given inTable 2 while the corresponding box plots and their averagesearch curves are drawn by Figures 4 and 5 respectively
The values on FR in Table 2 illustrate that it is alsodifficult to solve Example 6 due to the random variablesand high dimensionality Even if so NIOA-A NIOA-B and120583IOA can all acquire feasible solutions for each run SSGA-Aand SSGA-B however can only get at most 53 of feasiblesolutions namely they can only acquire 53 feasible solutionsafter 100 executions This along with the values on FRin Table 1 follows that although such two approaches can
SSGA-A SSGA-B NIOA-A NIOA-B
380
400
420
440
460
480
Obj
ectiv
e val
ues
120583IOA
Figure 4 Example 6 comparison of box plots acquired
0 2000 4000 6000 8000 100000
100
200
300
400
500
SSGA-ASSGA-BNIOA-A
NIOA-B120583IOA
n
f
Figure 5 Example 6 comparison of average search curves
acquire larger values on Mean than the other approachesthey are poor with respect to solution quality efficiencyand search stability Consequently they need to make someimprovements for example their constraint handling Wenotice that 120583IOA is better than either NIOA-A or NIOA-Bbecause of its efficiency and the value onMeanWeemphasizethat whereas NIOA-A and NIOA-B can only obtain smallvalues on Mean by comparison with 120583IOA they are stillvaluable when solving such kind of hard problem RelativelyNIOA-B behaves well over NIOA-A
It seems to be true that SSGA-A and SSGA-B are superiorto NIOA-A NIOA-B and 120583IOA by Figures 4 and 5 sincethe box plots in Figure 4 and the average search curvesin Figure 5 are far from the horizontal line As a matterof fact column 7 in Table 2 shows clearly that such twogenetic algorithms cannot find feasible solutions 47 timesout of 100 runs whereas the other approaches are opposite
8 Scientific Programming
These indicate a fact that infeasible solutions in the decisiondomain have larger objective function values than feasibleones
Summarily when solving the above two examples thefive algorithms present different characteristics SSGA-Aand SSGA-B have similar performance attributes and sodo NIOA-A and NIOA-B 120583IOA performs well over thecompared approaches while NIOA-B is secondary
6 Conclusions and Further Work
We in this work concentrate on studying a microimmuneoptimization algorithm to solve a general kind of constrainedexpected value programming model relying upon trans-forming suchmodel into a sample-dependent approximationmodel (SAM) One such model requires that different candi-date solutions have different sample sizes and especially eachcandidate has two kinds of samplesrsquo sizes Subsequently twolower bound estimates of random vectors are theoreticallydeveloped and respectively applied to handling expectedvalue constraints of individuals in the current population andcomputing their empirical objective values Based on suchtwo estimates and the idea of racing ranking two racingsampling methods are suggested to execute individual eval-uation and constraint handling respectively Afterwards aracing sampling basedmicroimmune optimization algorithm120583IOA is proposed to deal with such approximation modelwith the merits of small population strong disturbance sup-pression effective constraint handling adaptive samplingand high efficiency The theoretical analysis has indicatedthat 120583IOArsquos computational complexity depends mainly on119899119873 119872120575 and 119862max due to small population 119873 and known
problem parameters By means of the higher-dimensionalmultimodal test problems the comparatively experimentalresults can draw some conclusions (1) RCEA and RCOEAcan make 120583IOA dynamically determine the sample sizes ofdifferent kinds of individuals in the process of evolution(2) the local search approach adopted can help 120583IOA tostrengthen local search (3) stochastic parametersmay be effi-ciently addressed by adaptive sampling techniques (4) 120583IOAperforms well over the compared approaches and (5) SSGA-A and SSGA-B need to make some improvements for higher-dimensional problems Further whereas we make somestudies on how to explore immune optimization approachesto solve higher-dimensional constrained expected value pro-gramming problems some issues will be further studiedFor example 120583IOArsquos theoretical convergence analysis needsto be studied while its engineering applications are to bediscussed
Appendix
Proof of Lemma 3 Let x isin 119883119898
119883120576 Since there exists 119894
0
such that 119864[1198661198940
(x 120585)] gt 120576 with 1 le 1198940
le 119868 by (3) we obtainthat
Pr x isin 119883119898 = Pr
1
119898
119898
sum
119895=1
119866119894(x 120585119895) le 0 1 le 119894 le 119868
le Pr
1
119898
119898
sum
119895=1
1198661198940
(x 120585119895) minus 119864 [1198661198940(x 120585)] le minus120576
le exp[
[
minus21205762119898
(1198861198940(x) minus 119887
1198940(x))2
]
]
le exp[minus21205762119898
Δ2]
(A1)
Hence
1 minus Pr 119883119898
sube 119883120576 = Pr existx isin 119883
119898 st x notin 119883
120576
le sum
xisin119860119883120576
Pr x isin 119883119898
le |119860| exp[minus21205762119898
Δ2]
(A2)
Thereby
Pr 119883119898
sube 119883120576 ge 1 minus |119860| exp[
minus21205762119898
Δ2] (A3)
So the conclusion is true
Proof of Lemma 4 Since 119861 is finite there exists 1199100isin 119861 such
that 119864[119891(y0 120585)] = 119891
lowast If x isin 119878119899 119878120576 then
119864 [119891 (x 120585)] gt 119864 [119891 (y0 120585)] + 120576 (A4)
Hence if
1
119899
119899
sum
119896=1
119891 (x 120585119896) le1
119899
119899
sum
119896=1
119891 (y0 120585119896
) (A5)
we have that
1
119899
119899
sum
119896=1
119891 (x 120585119896) lt 119864 [119891 (x 120585)] minus120576
2(A6)
or
1
119899
119899
sum
119896=1
119891 (y0 120585119896
) gt 119864 [119891 (y0 120585)] +
120576
2 (A7)
Otherwise we acquire that
1
119899
119899
sum
119896=1
119891 (x 120585119896) ge 119864 [119891 (x 120585)] minus120576
2
gt 119864 [119891 (y0 120585)] +
120576
2
ge1
119899
119899
sum
119896=1
119891 (y0 120585119896
)
(A8)
Scientific Programming 9
This yields contraction Consequently by (3) we obtain that
Pr x isin 119878119899 = Pr1
119899
119899
sum
119896=1
119891 (x 120585119896) le 119899
le Pr1
119899
119899
sum
119896=1
119891 (x 120585119896) le1
119899
119899
sum
119896=1
119891 (y0 120585119896
)
le Pr1
119899
119899
sum
119896=1
119891 (x 120585119896) lt 119864 [119891 (x 120585)] minus120576
2
+ Pr1
119899
119899
sum
119896=1
119891 (y0 120585119896
) gt 119864 [119891 (y0 120585)] +
120576
2
le 2 exp[minus1205762119899
2Γ2]
(A9)
Hence
1 minus Pr 119878119899sube 119878120576
= Pr existx isin 119878119899 st x notin 119878
120576
le sum
xisin119861119878120576
Pr x isin 119878119899
le 2 |119861| exp[minus1205762119899
2Γ2]
(A10)
This way it follows from (A10) that the above conclusion istrue
Competing Interests
The authors declare that they have no competing interests
Acknowledgments
This work is supported in part by the National Natural Sci-ence Foundation (61563009) and Doctoral Fund of Ministryof Education of China (20125201110003)
References
[1] J Luedtke and S Ahmed ldquoA sample approximation approachfor optimization with probabilistic constraintsrdquo SIAM Journalon Optimization vol 19 no 2 pp 674ndash699 2008
[2] M Branda ldquoSample approximation technique for mixed-integer stochastic programming problems with several chanceconstraintsrdquoOperations Research Letters vol 40 no 3 pp 207ndash211 2012
[3] A Shapiro D Dentcheva and A Ruszczynski Lectures onStochastic Programming Modeling andTheory SIAM Philadel-phia Pa USA 2009
[4] W Wang and S Ahmed ldquoSample average approximation ofexpected value constrained stochastic programsrdquo OperationsResearch Letters vol 36 no 5 pp 515ndash519 2008
[5] M Branda ldquoSample approximation technique for mixed-integer stochastic programming problems with expected valueconstraintsrdquo Optimization Letters vol 8 no 3 pp 861ndash8752014
[6] B LiuTheory and Practice of Uncertain Programming SpringerBerlin Germany 2009
[7] Y Jin and J Branke ldquoEvolutionary optimization in uncertainenvironmentsmdasha surveyrdquo IEEE Transactions on EvolutionaryComputation vol 9 no 3 pp 303ndash317 2005
[8] K Deb S Gupta D Daum J Branke A K Mall and D Pad-manabhan ldquoReliability-based optimization using evolutionaryalgorithmsrdquo IEEE Transactions on Evolutionary Computationvol 13 no 5 pp 1054ndash1074 2009
[9] B Liu and Y-K Liu ldquoExpected value of fuzzy variable and fuzzyexpected value modelsrdquo IEEE Transactions on Fuzzy Systemsvol 10 no 4 pp 445ndash450 2002
[10] Y Sun and Y L Gao ldquoAn improved differential evolutionalgorithmof stochastic expected valuemodelsrdquoMicroelectronicsamp Computer vol 29 no 4 pp 23ndash25 2012
[11] D Dasgupta S Yu and F Nino ldquoRecent advances in artificialimmune systems models and applicationsrdquo Applied Soft Com-puting Journal vol 11 no 2 pp 1574ndash1587 2011
[12] K Trojanowski and S T Wierzchon ldquoImmune-based algo-rithms for dynamic optimizationrdquo Information Sciences vol 179no 10 pp 1495ndash1515 2009
[13] Y-C Hsieh and P-S You ldquoAn effective immune based two-phase approach for the optimal reliability-redundancy alloca-tion problemrdquo Applied Mathematics and Computation vol 218no 4 pp 1297ndash1307 2011
[14] Q Zhao R Yang and F Duan ldquoAn immune clonal hybrid algo-rithm for solving stochastic chance-constrained programmingrdquoJournal of Computational Information Systems vol 8 no 20 pp8295ndash8302 2012
[15] W Hoeffding ldquoProbability inequalities for sums of boundedrandom variablesrdquo Journal of the American Statistical Associa-tion vol 58 pp 13ndash30 1963
[16] Z Lin and Z Bai Probability Inequalities Springer BerlinGermany 2010
[17] K L Chung A Course in Probability Theory Academic Press2001
[18] M Olguin-Carbajal E Alba and J Arellano-Verdejo ldquoMicro-differential evolution with local search for high dimensionalproblemsrdquo in Proceedings of the IEEE Congress on EvolutionaryComputation (CEC rsquo13) pp 48ndash54 June 2013
[19] C A Poojari and B Varghese ldquoGenetic algorithm basedtechnique for solving chance constrained problemsrdquo EuropeanJournal of Operational Research vol 185 no 3 pp 1128ndash11542008
[20] Z-H Zhang ldquoNoisy immune optimization for chance-constrained programming problemsrdquo Applied Mechanics andMaterials vol 48-49 pp 740ndash744 2011
[21] C-H Chen ldquoEfficient sampling for simulation-based optimiza-tion under uncertaintyrdquo in Proceedings of the 4th InternationalSymposium on Uncertainty Modeling and Analysis (ISUMA rsquo03)pp 386ndash391 IEEE College Park Md USA September 2003
[22] E Mezura-Montes and C A Coello Coello ldquoA simple multi-membered evolution strategy to solve constrained optimizationproblemsrdquo IEEE Transactions on Evolutionary Computationvol 9 no 1 pp 1ndash17 2005
[23] B Varghese and C A Poojari ldquoGenetic algorithm basedtechnique for solving chance-constrained problems arising inrisk managementrdquo Tech Rep 2004
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
8 Scientific Programming
These indicate a fact that infeasible solutions in the decisiondomain have larger objective function values than feasibleones
Summarily when solving the above two examples thefive algorithms present different characteristics SSGA-Aand SSGA-B have similar performance attributes and sodo NIOA-A and NIOA-B 120583IOA performs well over thecompared approaches while NIOA-B is secondary
6 Conclusions and Further Work
We in this work concentrate on studying a microimmuneoptimization algorithm to solve a general kind of constrainedexpected value programming model relying upon trans-forming suchmodel into a sample-dependent approximationmodel (SAM) One such model requires that different candi-date solutions have different sample sizes and especially eachcandidate has two kinds of samplesrsquo sizes Subsequently twolower bound estimates of random vectors are theoreticallydeveloped and respectively applied to handling expectedvalue constraints of individuals in the current population andcomputing their empirical objective values Based on suchtwo estimates and the idea of racing ranking two racingsampling methods are suggested to execute individual eval-uation and constraint handling respectively Afterwards aracing sampling basedmicroimmune optimization algorithm120583IOA is proposed to deal with such approximation modelwith the merits of small population strong disturbance sup-pression effective constraint handling adaptive samplingand high efficiency The theoretical analysis has indicatedthat 120583IOArsquos computational complexity depends mainly on119899119873 119872120575 and 119862max due to small population 119873 and known
problem parameters By means of the higher-dimensionalmultimodal test problems the comparatively experimentalresults can draw some conclusions (1) RCEA and RCOEAcan make 120583IOA dynamically determine the sample sizes ofdifferent kinds of individuals in the process of evolution(2) the local search approach adopted can help 120583IOA tostrengthen local search (3) stochastic parametersmay be effi-ciently addressed by adaptive sampling techniques (4) 120583IOAperforms well over the compared approaches and (5) SSGA-A and SSGA-B need to make some improvements for higher-dimensional problems Further whereas we make somestudies on how to explore immune optimization approachesto solve higher-dimensional constrained expected value pro-gramming problems some issues will be further studiedFor example 120583IOArsquos theoretical convergence analysis needsto be studied while its engineering applications are to bediscussed
Appendix
Proof of Lemma 3. Let $\mathbf{x} \in X_m \setminus X_\varepsilon$. Since there exists $i_0$ with $1 \le i_0 \le I$ such that $E[G_{i_0}(\mathbf{x}, \xi)] > \varepsilon$, by (3) we obtain that

\[
\begin{aligned}
\Pr\{\mathbf{x} \in X_m\} &= \Pr\left\{\frac{1}{m}\sum_{j=1}^{m} G_i(\mathbf{x}, \xi_j) \le 0,\ 1 \le i \le I\right\} \\
&\le \Pr\left\{\frac{1}{m}\sum_{j=1}^{m} G_{i_0}(\mathbf{x}, \xi_j) - E[G_{i_0}(\mathbf{x}, \xi)] \le -\varepsilon\right\} \\
&\le \exp\left[\frac{-2\varepsilon^2 m}{(a_{i_0}(\mathbf{x}) - b_{i_0}(\mathbf{x}))^2}\right]
\le \exp\left[\frac{-2\varepsilon^2 m}{\Delta^2}\right].
\end{aligned}
\tag{A1}
\]

Hence,

\[
1 - \Pr\{X_m \subseteq X_\varepsilon\}
= \Pr\{\exists \mathbf{x} \in X_m \ \text{s.t.}\ \mathbf{x} \notin X_\varepsilon\}
\le \sum_{\mathbf{x} \in A \setminus X_\varepsilon} \Pr\{\mathbf{x} \in X_m\}
\le |A| \exp\left[\frac{-2\varepsilon^2 m}{\Delta^2}\right].
\tag{A2}
\]

Thereby,

\[
\Pr\{X_m \subseteq X_\varepsilon\} \ge 1 - |A| \exp\left[\frac{-2\varepsilon^2 m}{\Delta^2}\right].
\tag{A3}
\]

So the conclusion is true.
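The bound (A3) can be inverted into an explicit sample-size requirement: to guarantee $\Pr\{X_m \subseteq X_\varepsilon\} \ge 1 - \alpha$ it suffices that $|A|\exp(-2\varepsilon^2 m/\Delta^2) \le \alpha$, that is, $m \ge (\Delta^2 / 2\varepsilon^2)\ln(|A|/\alpha)$. A small sketch (the function name and the example numbers are ours, not the paper's):

```python
import math

def min_constraint_samples(eps, delta, a_size, alpha):
    """Smallest m with |A| * exp(-2 * eps**2 * m / delta**2) <= alpha,
    i.e. Pr{X_m subset of X_eps} >= 1 - alpha by the Lemma 3 bound (A3)."""
    return math.ceil(delta ** 2 / (2 * eps ** 2) * math.log(a_size / alpha))

# e.g. eps = 0.1, Delta = 1, |A| = 1000, confidence 0.95
m = min_constraint_samples(0.1, 1.0, 1000, 0.05)
```

Note the logarithmic dependence on $|A|$ and $1/\alpha$: doubling the decision set or halving the risk level only adds a constant number of samples.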
Proof of Lemma 4. Since $B$ is finite, there exists $\mathbf{y}_0 \in B$ such that $E[f(\mathbf{y}_0, \xi)] = f^{*}$. If $\mathbf{x} \in S_n \setminus S_\varepsilon$, then

\[
E[f(\mathbf{x}, \xi)] > E[f(\mathbf{y}_0, \xi)] + \varepsilon.
\tag{A4}
\]

Hence, if

\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k),
\tag{A5}
\]

we have that

\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) < E[f(\mathbf{x}, \xi)] - \frac{\varepsilon}{2}
\tag{A6}
\]

or

\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k) > E[f(\mathbf{y}_0, \xi)] + \frac{\varepsilon}{2}.
\tag{A7}
\]

Otherwise, we acquire that

\[
\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k)
\ge E[f(\mathbf{x}, \xi)] - \frac{\varepsilon}{2}
> E[f(\mathbf{y}_0, \xi)] + \frac{\varepsilon}{2}
\ge \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k).
\tag{A8}
\]
Scientific Programming 9
This yields a contradiction. Consequently, by (3) we obtain that

\[
\begin{aligned}
\Pr\{\mathbf{x} \in S_n\}
&= \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) \le \min_{\mathbf{y} \in B}\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}, \xi_k)\right\} \\
&\le \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) \le \frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k)\right\} \\
&\le \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{x}, \xi_k) < E[f(\mathbf{x}, \xi)] - \frac{\varepsilon}{2}\right\}
+ \Pr\left\{\frac{1}{n}\sum_{k=1}^{n} f(\mathbf{y}_0, \xi_k) > E[f(\mathbf{y}_0, \xi)] + \frac{\varepsilon}{2}\right\} \\
&\le 2\exp\left[\frac{-\varepsilon^2 n}{2\Gamma^2}\right].
\end{aligned}
\tag{A9}
\]
Hence,

\[
\begin{aligned}
1 - \Pr\{S_n \subseteq S_\varepsilon\}
&= \Pr\{\exists \mathbf{x} \in S_n \ \text{s.t.}\ \mathbf{x} \notin S_\varepsilon\} \\
&\le \sum_{\mathbf{x} \in B \setminus S_\varepsilon} \Pr\{\mathbf{x} \in S_n\}
\le 2|B| \exp\left[\frac{-\varepsilon^2 n}{2\Gamma^2}\right].
\end{aligned}
\tag{A10}
\]

This way, it follows from (A10) that the above conclusion is true.
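Symmetrically, (A10) yields the objective-side sample-size requirement: $\Pr\{S_n \subseteq S_\varepsilon\} \ge 1 - \alpha$ holds once $2|B|\exp(-\varepsilon^2 n/2\Gamma^2) \le \alpha$, that is, $n \ge (2\Gamma^2/\varepsilon^2)\ln(2|B|/\alpha)$. A companion sketch under the same illustrative numbers (names and values are ours):

```python
import math

def min_objective_samples(eps, gamma, b_size, alpha):
    """Smallest n with 2 * |B| * exp(-eps**2 * n / (2 * gamma**2)) <= alpha,
    i.e. Pr{S_n subset of S_eps} >= 1 - alpha by the Lemma 4 bound (A10)."""
    return math.ceil(2 * gamma ** 2 / eps ** 2 * math.log(2 * b_size / alpha))

# e.g. eps = 0.1, Gamma = 1, |B| = 1000, confidence 0.95
n = min_objective_samples(0.1, 1.0, 1000, 0.05)
```

Comparing the two lemmas, the objective side needs roughly four times the exponent's sample budget for the same $\varepsilon$ (a factor $2\Gamma^2$ versus $\Delta^2/2$ in the denominator of the exponent), which is consistent with racing schemes that assign candidates two different sample sizes.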
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
This work is supported in part by the National Natural Science Foundation of China (61563009) and the Doctoral Fund of the Ministry of Education of China (20125201110003).