Research Article

An Improved Hybrid Encoding Cuckoo Search Algorithm for 0-1 Knapsack Problems

Yanhong Feng,1 Ke Jia,2 and Yichao He1

1 School of Information Engineering, Shijiazhuang University of Economics, Shijiazhuang 050031, China
2 School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China

Correspondence should be addressed to Yanhong Feng; qinfyh@163.com

Received 31 August 2013; Revised 4 December 2013; Accepted 16 December 2013; Published 12 January 2014

Academic Editor: Christian W. Dawson

Copyright © 2014 Yanhong Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cuckoo search (CS) is a new robust swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First of all, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed, based on the idea of individual hybrid encoding, into a synchronous evolutionary search over a discrete space. Subsequently, the concept of confidence interval (CI) is introduced; hence a new position updating rule is designed, and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, and the latter effectively prevents the ICS from being trapped in a local optimum. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Experiments with a large number of KP instances show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions.
1. Introduction
Combinatorial optimization plays a very important role in operational research, discrete mathematics, and computer science. The knapsack problem is one of the classical combinatorial optimization problems that are difficult to solve, and it has been extensively studied since the pioneering work of Dantzig [1]. Generally speaking, if the classification of the methods used to solve such problems is based on the nature of the algorithm, they can be simply divided into two categories [2]: exact methods and heuristic methods. Exact methods, like the enumeration method [3, 4], branch and bound [5], and dynamic programming [6], can give exact solutions; nevertheless, in the worst case they may take a long time to reach a satisfactory solution, and sometimes the time increases exponentially with the size of the instance.
Recently, nature-inspired metaheuristic algorithms have performed powerfully and efficiently in solving diverse optimization problems, including combinatorial problems. Metaheuristic algorithms include the genetic algorithm [7], particle swarm optimization [8], ant colony optimization [9], the artificial bee colony algorithm [10], the differential evolution algorithm [11], the harmony search algorithm [12, 13], and the krill herd algorithm [14-16].
As mentioned above, metaheuristic methods have proven to be an effective means of coping with combinatorial optimization problems, including the 0-1 knapsack problem. Unlike deterministic search approaches, which have the drawback of unavoidably being trapped in local minima, metaheuristic methods have the main advantage of delivering satisfactory solutions in a reasonable time. Because of this, it is valuable to present new nature-inspired methods for the 0-1 knapsack problem, especially for tackling intractable and complex large-scale instances, which are closer to practical applications.
Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2014, Article ID 970456, 9 pages, http://dx.doi.org/10.1155/2014/970456

Cuckoo search (CS) is a population-driven nature-inspired metaheuristic algorithm originally proposed by Yang and Deb in 2009 and 2010 [17, 18], which showed promising efficiency for global optimization and is becoming a new research hotspot in evolutionary computation. CS is inspired by the brood parasitism of some cuckoo species, which lay their eggs in the nests of other host birds. Each egg (nest or cuckoo) represents a solution, and a cuckoo egg represents a new solution. The aim is to use the new and potentially better solutions (cuckoos) to replace the not-so-good solutions in the nests [19]. Like other metaheuristic algorithms, CS uses no gradient information during the search, so it has the ability to solve nonconvex, nonlinear, nondifferentiable, and multimodal problems. Furthermore, there is essentially only a single parameter p_a in CS, and thus it is potentially more generic and able to adapt to a wider class of optimization problems [19]. In addition, Yang and Deb showed that CS outperforms particle swarm optimization and genetic algorithms on some real-world optimization problems [18, 20]. By virtue of its simplicity, robustness, and so on, books and articles on the subject have proliferated recently. CS has received more and more attention and application, and it spans a large number of areas [20-25]. More details can be found in [26].
As far as we know, the emphasis of much of the previous work on CS was placed on solving optimization problems over discrete or continuous spaces, and only a few scholars have been concerned with binary problems. In 2011, Layeb [25] developed a variant of cuckoo search in combination with a quantum-based approach to solve knapsack problems efficiently. Subsequently, Gherboudj et al. [24] utilized a purely binary cuckoo search to tackle knapsack problems. In summary, studies on binary-coded CS have just begun, and its performance needs further improvement so as to expand its field of application.
Given the above considerations, an improved CS algorithm (ICS), based on the CS framework in combination with a novel greedy strategy, is brought forward to solve the 0-1 knapsack problem. Compared with the original CS, the outstanding characteristics of Lévy flights used in the original CS, such as stability and power-law asymptotics, are retained in ICS. Meanwhile, the operation in which a fraction of the worse nests are abandoned with a probability p_a and new solutions are built randomly is eliminated, and a novel operator, in which the search range is adjusted with an adaptive step size and a genetic mutation is embedded, is introduced. We assess the performance of the proposed algorithm in terms of solution quality, convergence rate, and robustness by testing twenty knapsack instances of different scales. The simulation results not only demonstrate that the proposed algorithm is workable and robust but also show its superior approximation capabilities, even in high-dimensional spaces.
The remainder of this paper is structured as follows. Section 2 describes the mathematical model of the 0-1 knapsack problem. Then the improvement strategies and the original intention behind these improvements are given in detail in Section 3, and the greedy transform method is described. Subsequently, Section 4 presents the results of comparative experiments. Finally, some conclusions and comments for further research are made in Section 5.
2. Knapsack Problems
The knapsack problem (KP) is a typical optimization problem with high theoretical and practical value. Many practical applications can be formulated as a KP, such as cutting stock problems, portfolio optimization, scheduling problems, and cryptography [27]. This problem has been proven to be NP-hard; hence it cannot be solved in polynomial time unless P = NP [1]. The classical 0-1 knapsack problem can be defined as follows.
Let U = {u_1, u_2, ..., u_n} be a set of n items, and let w_j and p_j represent the weight and profit of item j, respectively. Here w_j, p_j, and j are all positive integers. The problem is to choose a subset of the items such that their total weight does not exceed a given capacity C while the total profit is maximized. Without loss of generality, it may be assumed that the weight of each item is smaller than the capacity C, so that each item fits into the knapsack. We use the binary decision variable x_i, with x_i = 1 if item i is selected and x_i = 0 otherwise. The problem can be formulated as follows:

    max f = Σ_{i=1}^{n} p_i x_i
    s.t.  Σ_{i=1}^{n} w_i x_i ≤ C.    (1)
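To make the model in (1) concrete, the sketch below enumerates all 2^n decision vectors of a small hypothetical instance (the profits, weights, and capacity are assumed for illustration, not taken from the paper's test set) and keeps the best feasible one; this is the exact but exponential-time approach mentioned in Section 1.

```python
from itertools import product

# Toy instance (assumed for illustration).
profits = [10, 7, 4, 3]
weights = [5, 4, 3, 2]
C = 9  # knapsack capacity

best_value, best_x = 0, None
# Enumerate every binary decision vector x in {0,1}^n (exact, exponential).
for x in product([0, 1], repeat=len(profits)):
    weight = sum(w * xi for w, xi in zip(weights, x))
    value = sum(p * xi for p, xi in zip(profits, x))
    if weight <= C and value > best_value:
        best_value, best_x = value, x

print(best_x, best_value)  # -> (1, 1, 0, 0) 17
```

Even for this tiny instance the search space has 2^4 = 16 vectors; the exponential growth of this space is what motivates the metaheuristic treatment below.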
3. The Improved Cuckoo Search Algorithm (ICS)
Although the original CS algorithm possesses some excellent features, such as structural simplicity and an ability to escape from local optima, compared with several traditional optimization approaches, the phenomena of slow convergence and low accuracy still exist. That is to say, the basic algorithm does not adequately exploit the potential of CS. Therefore, in order to improve the convergence rate and precision of CS, we designed a series of appropriate strategies, and a more efficient algorithm (ICS) is proposed.
ICS introduces the following five improving strategies:

(1) using an adaptive step size to adjust the search range;
(2) using the confidence interval to enhance the local search;
(3) using a genetic mutation operation with a low probability to prevent the ICS from being trapped in a local optimum;
(4) using hybrid encoding to represent each individual in the population;
(5) using the greedy transform method to repair infeasible solutions and optimize feasible solutions.
More detailed descriptions of these strategies are given in the following subsections.
3.1. Hybrid Encoding. The standard CS algorithm operates in continuous space. Consequently, we cannot use it directly for optimization in binary space. Additionally, the operations of the original CS algorithm are closed over the set of real numbers but do not have the closure property in the binary set {0, 1}. Given the wide application of binary optimization problems in real-world engineering, the main objective of the ICS algorithm is to deal with binary
optimization problems. One of the most significant features of the ICS is that it adopts a hybrid coding scheme [28], and each cuckoo individual is represented by a two-tuple.
Definition 1 (auxiliary search space). An auxiliary search space S' denotes a subspace of the n-dimensional real space R^n, where S' ⊂ R^n. An auxiliary search space S' corresponds to a solution space S = {0, 1}^n. Additionally, S and S' are two parallel search spaces. Here, the search in S' is called active search, while the search in S is called passive search.

Definition 2 (hybrid encoding representation). Each cuckoo individual in the population is represented by the two-tuple ⟨x_i, b_i⟩ (i = 1, 2, ..., n), where x_i works in the auxiliary search space and b_i performs in the solution space accordingly, and n is the dimensionality of the solution. Further, the sigmoid function [26] is adopted to transform a real-coded vector x_i = (x_1, x_2, ..., x_n)^T ∈ [-3.0, 3.0]^n into a binary vector b_i = (b_1, b_2, ..., b_n)^T ∈ {0, 1}^n. The procedure works as follows:

    b_i = 1 if sig(x_i) ≥ 0.5,
    b_i = 0 otherwise,    (2)

where sig(x) = 1/(1 + e^{-x}) is the sigmoid function.
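As a minimal illustration of the hybrid encoding, the following Python sketch applies Eq. (2) to map a real-coded vector onto its binary counterpart (the sample vector values are assumed):

```python
import math

def sig(x):
    """Sigmoid transfer function used in Eq. (2)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(x_real):
    """Map a real-coded vector from the auxiliary search space to a
    binary vector in the solution space, following Eq. (2)."""
    return [1 if sig(xj) >= 0.5 else 0 for xj in x_real]

print(binarize([-2.1, 0.0, 1.7]))  # -> [0, 1, 1]
```

Note that sig(0) = 0.5, so a component at exactly zero maps to 1 under the "≥ 0.5" rule of Eq. (2).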
3.2. Greedy Transform Method. Many optimization problems are constrained; accordingly, constraint handling is crucial for the efficient design of metaheuristics. Constraint handling strategies, which mainly act on the representation of solutions or on the objective function, can be classified as reject strategies, penalizing strategies, repairing strategies, decoding strategies, and preserving strategies [29]. Repairing strategies, most of which are greedy heuristics, can be applied to the knapsack problem [29]. However, the traditional greedy strategy has some disadvantages when solving the knapsack problem [30]. Truong invented a new repair operator that depends on both a greedy strategy and random selection [31]. Although this method can correct infeasible solutions, the random selection reduces efficiency because it is not greedy enough to improve the convergence speed and accuracy. In this paper, a novel greedy transform method (GTM) is introduced to solve this problem [32]. It can effectively repair infeasible solutions and optimize feasible solutions.

This GTM consists of two stages. The first stage (called RS) examines each variable in descending order of p_i/w_i and confirms a variable value of one as long as feasibility is not violated. The second stage (called OS) changes the remaining variables from zero to one until feasibility is violated. The purpose of the RS stage is to repair an abnormal chromosome coding into a normal chromosome, while the OS stage is to achieve the best chromosome coding. Then, according to the mathematical model in Section 2, pseudocode of the GTM is described in Algorithm 1.
3.3. New Position Updating with Adaptive Step and Genetic Mutation. An outstanding characteristic of PSO is that each individual tends to mimic its successful companions. Each individual follows a simple act, namely, emulating the successful experience of adjacent individuals, and the accumulated behavior is to search the best area of a high-dimensional space [33]. Compared with PSO, there are some differences and similarities. First, in PSO the particle velocity consists of three parts: the previous velocity, the cognitive component, and the social component. The role of the social component is to pull the particles in the direction of the global optimum. In the CS algorithm, a new cuckoo individual is generated with a probability p_a in a completely random manner, which can be seen as the social component of CS; however, it does not reflect well the influence of the entire population on the individual. Second, PSO demonstrates adaptive behavior because the population state changes in step with the traced individual optimum and global optimum, whereas a cuckoo individual does not fully show adaptive behavior in the CS algorithm. Third, in PSO the position update formula performs mutation in an embedded-memory manner, which is similar to that used in CS. From the above analysis we can conclude that the CS algorithm also has some minor disadvantages. Inspired by the idea of particle swarm optimization, a novel position updating operator is proposed and utilized to strengthen the local search. The ICS and the CS differ in two aspects, as follows:

(1) The position updating with adaptive step in ICS completely replaces the random walk of CS in the local search stage.

(2) The probability p_a of alien eggs being discovered by host birds is excluded from the CS, and a genetic mutation probability (p_m) is included in the ICS.
The concept of "confidence interval" is introduced first, and its schematic is given as well.
Definition 3 (confidence interval). Let x_best,j(t) be the jth component of x_best in generation t, where x_best = (x_best,1, x_best,2, ..., x_best,n) is the global best cuckoo individual in generation t. Let x_worst,j(t) be the jth component of x_worst in generation t, where x_worst = (x_worst,1, x_worst,2, ..., x_worst,n) is the global worst cuckoo individual in generation t, accordingly. Then step_j = |x_best,j - x_worst,j| is the adaptive step of the jth component of an individual x_i = (x_i1, x_i2, ..., x_in)^T, and the confidence interval (CI) of every component x_ij (j = 1, 2, ..., n) of x_i (i = 1, 2, ..., m) is defined as CI_ij ∈ [-step_j, step_j]. Figure 1 gives the schematic representation of the confidence interval.
Two major components of any metaheuristic algorithm are intensification and diversification, or exploitation and exploration [19], and their interaction can have a significant effect on the efficiency of a metaheuristic algorithm. The confidence interval is essentially a region near the global best cuckoo. Significantly, the search step size is adjusted gradually during the evolutionary process, which can effectively balance the contradiction between exploration and exploitation. In the early stage of the search, cuckoo individuals are randomly distributed over the entire search space, so most adaptive
Input: X = (x_1, x_2, ..., x_n) ∈ {0, 1}^n
Step 1 (sort): Sort the items according to the value-to-weight ratio p_i/w_i (i = 1, 2, 3, ..., n) in descending order, forming a queue s_1, s_2, ..., s_n of length n; that is, p_{s_1}/w_{s_1} ≥ p_{s_2}/w_{s_2} for s_1 < s_2.
Step 2 (repair stage):
    i = 1; Tempc = 0
    while (i ≤ n)
        if (x_{s_i} = 1) and (Tempc + w_{s_i} ≤ C) then
            y_{s_i} = 1; Tempc = Tempc + w_{s_i}
        end if
        i = i + 1
    end while
Step 3 (optimize stage):
    for j = 1 to n
        if (y_{s_j} = 0) and (Tempc + w_{s_j} ≤ C) then
            y_{s_j} = 1; Tempc = Tempc + w_{s_j}
        end if
    end for
Output: Y = (y_1, y_2, ..., y_n); the computation is terminated.

Algorithm 1: Greedy transform method.
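A possible Python rendering of the GTM, following the RS/OS description given in the text (the function name and the toy arguments used below are ours, not the paper's):

```python
def greedy_transform(x, profits, weights, C):
    """Greedy transform method (GTM) sketch: repair a possibly infeasible
    binary solution, then greedily pack any remaining items that still fit."""
    n = len(x)
    # Step 1: sort item indices by profit-to-weight ratio, descending.
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    y = [0] * n
    cap = 0
    # Repair stage (RS): keep selected items while feasibility is preserved.
    for i in order:
        if x[i] == 1 and cap + weights[i] <= C:
            y[i] = 1
            cap += weights[i]
    # Optimize stage (OS): flip unselected items to one while they still fit.
    for i in order:
        if y[i] == 0 and cap + weights[i] <= C:
            y[i] = 1
            cap += weights[i]
    return y

# Repairing an overweight solution (toy data: profits 10,7,4,3; weights 5,4,3,2; C=9).
print(greedy_transform([1, 1, 1, 1], [10, 7, 4, 3], [5, 4, 3, 2], 9))  # -> [1, 1, 0, 0]
```

Note that the same routine also optimizes feasible solutions: an all-zero input is filled greedily by the OS stage alone.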
Figure 1: The schematic representation of the confidence interval.
steps are large and most confidence intervals are wide, which is very beneficial for extensive exploration. As the iterations continue, most adaptive steps gradually become small and most confidence intervals become narrow accordingly. Thus, the exploitation capability is gradually strengthened.
The purpose of mutation is to introduce new genes so as to increase the diversity of the population. Mutation can also play a role in balancing the exploration-exploitation contradiction. A genetic mutation operation with a small probability is carried out because it can effectively prevent premature convergence of the ICS. The new position updating formula of ICS is shown in Algorithm 2.
Here, "best" and "worst" are the indexes of the global best cuckoo and the global worst cuckoo, respectively, and r, r_1, and rand(·) are all uniformly generated random numbers in [0, 1]. Based on the above analysis, the pseudocode of the ICS for 0-1 knapsack problems is described in Algorithm 3.
for i = 1 to n
    step_i = |x_best,i - x_worst,i|
    x_i' = x_best,i ± r * step_i
    if (r_1 ≤ p_m) then
        x_i' = a_i + rand() * (b_i - a_i)
    end if
end for

Algorithm 2: New position updating formula of ICS.

The time complexity of our proposed algorithm is approximately O(maxT * m * 2n) + O(n log n), and it is thus still linear. The time complexity of the proposed algorithm does not increase in order of magnitude compared with the original CS algorithm.
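One plausible Python sketch of the Algorithm 2 update, with the "±" sign drawn at random and the bounds a, b of the auxiliary search space passed explicitly (all names below are ours):

```python
import random

def update_position(x_best, x_worst, a, b, p_m=0.15):
    """Sketch of the new ICS position updating (Algorithm 2): move near the
    global best inside the confidence interval, with a small genetic mutation
    probability p_m; a and b bound the auxiliary search space."""
    n = len(x_best)
    x_new = [0.0] * n
    for i in range(n):
        step = abs(x_best[i] - x_worst[i])          # adaptive step
        r = random.random()
        sign = 1 if random.random() < 0.5 else -1   # the "±" in Algorithm 2
        x_new[i] = x_best[i] + sign * r * step      # sample inside the CI
        if random.random() <= p_m:                  # genetic mutation
            x_new[i] = a + random.random() * (b - a)
    return x_new

print(update_position([1.0, 2.0], [0.0, -1.0], -3.0, 3.0))
```

With p_m = 0 each component stays within step_i of the global best, which is exactly the confidence interval of Definition 3.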
4. Experimental Results and Analysis
In order to test the optimization ability of ICS and investigate the effectiveness of the algorithms on different instance types, we
Step 1 (sorting): According to the value-to-weight ratio p_i/w_i (i = 1, 2, 3, ..., n) in descending order, a queue s_1, s_2, ..., s_n of length n is formed.
Step 2 (initialization): Generate m cuckoo nests randomly: ⟨x_1, b_1⟩, ⟨x_2, b_2⟩, ..., ⟨x_m, b_m⟩. Calculate the fitness of each individual f(b_i), 1 ≤ i ≤ m, and determine ⟨x_best, b_best⟩. Set the generation counter G = 1 and the mutation parameter p_m.
Step 3: while (the stopping criterion is not satisfied)
    for i = 1 to m
        for j = 1 to n
            x_i(j) = x_i(j) + α ⊕ Levy(λ)
            Apply the new position updating formula of ICS (Algorithm 2)
            Repair the illegal individuals and optimize the legal individuals (Algorithm 1)
        end for
    end for
Step 4 (keep best solutions): Rank the solutions and find the current best (b_best, f(b_best)); G = G + 1.
Step 5: end while

Algorithm 3: The main procedure of ICS.
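Putting the pieces together, the following condensed Python sketch runs the ICS-specific loop (hybrid encoding, the Algorithm 2 update with mutation, and GTM repair) on a small assumed instance; for brevity the Lévy-flight move of the standard CS, which Algorithm 3 also performs, is omitted here, so this is an illustrative reading rather than the paper's exact procedure.

```python
import math
import random

random.seed(1)

# Toy KP instance (assumed for illustration; not one of the paper's benchmarks).
profits = [10, 7, 4, 3, 9, 6]
weights = [5, 4, 3, 2, 6, 4]
C = 12
n = len(profits)
m, p_m, max_gen = 20, 0.15, 100     # nests, mutation probability, generations
LB, UB = -3.0, 3.0                  # auxiliary search space bounds

def binarize(xs):
    # Eq. (2): sigmoid transfer from the real-coded to the binary vector.
    return [1 if 1.0 / (1.0 + math.exp(-v)) >= 0.5 else 0 for v in xs]

def repair(b):
    # Greedy transform method (Algorithm 1): repair stage, then optimize stage.
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    y, cap = [0] * n, 0
    for keep_stage in (True, False):
        for i in order:
            wanted = (b[i] == 1) if keep_stage else True
            if wanted and y[i] == 0 and cap + weights[i] <= C:
                y[i], cap = 1, cap + weights[i]
    return y

def fitness(b):
    return sum(p * bi for p, bi in zip(profits, b))

# Initialize the population of hybrid-encoded cuckoos <x_i, b_i>.
nests = [[random.uniform(LB, UB) for _ in range(n)] for _ in range(m)]
bins = [repair(binarize(x)) for x in nests]
best = max(range(m), key=lambda i: fitness(bins[i]))
x_best, b_best = nests[best][:], bins[best][:]

for gen in range(max_gen):
    worst = min(range(m), key=lambda i: fitness(bins[i]))
    x_worst = nests[worst][:]
    for i in range(m):
        for j in range(n):
            # Algorithm 2: adaptive-step move inside the confidence interval
            # around the global best, with a small genetic mutation probability.
            step = abs(x_best[j] - x_worst[j])
            if random.random() <= p_m:
                nests[i][j] = LB + random.random() * (UB - LB)
            else:
                nests[i][j] = x_best[j] + random.choice((-1, 1)) * random.random() * step
            nests[i][j] = min(max(nests[i][j], LB), UB)
        bins[i] = repair(binarize(nests[i]))
        if fitness(bins[i]) > fitness(b_best):
            x_best, b_best = nests[i][:], bins[i][:]

print(b_best, fitness(b_best))
```

Because every candidate passes through the GTM repair, every evaluated solution is feasible and maximal, which is why the method needs no penalty function.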
Table 1: Experimental results of three algorithms with small KP instances.

Fun   Size   HS         CS         ICS
f1    10     295        295        295
f2    20     1024       1024       1024
f3    4      35         35         35
f4    4      23         23         23
f5    15     481.0694   481.0694   481.0694
f6    10     50         52         52
f7    7      107        107        107
f8    23     9761       9776       9777
f9    5      130        130        130
f10   20     1025       1025       1025
consider twenty 0-1 knapsack problems: ten small-scale instances, six medium-scale instances, and four large-scale instances. The solution quality and performance are compared with a binary version of HS and a binary version of CS, for simplicity denoted as HS and CS, respectively.

Test problems 1-10 are taken from [12]. Test problems 11 and 13 of He et al. [28] are used in our numerical experiments. Test problem 15 is constructed from test problems 11 and 13. Test problem 16 is generated following Kellerer et al. [34]. Test problems 12 and 14 are generated following Gherboudj et al. [24]. Test problems 17-20 are taken from [12].
All the algorithms were implemented in Visual C++ 6.0. The test environment is a personal computer with an AMD Athlon(tm) II X2 250 processor (3.01 GHz) and 1.75 GB RAM, running Windows XP. Three groups of experiments were performed to assess the efficiency and performance of our algorithm. In all experiments, the parameters of CS were set as follows: p_a = 0.25, and the number of cuckoos is 20. For the HS algorithm, the harmony memory size HMS = 5, harmony memory consideration rate HMCR = 0.9, pitch adjusting rate PAR = 0.3, and bandwidth b_w = x^U - x^L. For the ICS algorithm, the number of cuckoos is 20 and the genetic mutation
Figure 2: Best profits (100 items).
probability p_m = 0.15. The experiments on each function were repeated 30 times independently. The quantification of the solutions is tabulated in Tables 1-3.

4.1. Comparison among Three Algorithms on Small Dimension Knapsack Problems. Table 1 shows the experimental results of our ICS algorithm, the HS, and the CS on ten KP tests of different dimensions. Observation of the results presented in Table 1 indicates that the proposed ICS algorithm performs better than the HS algorithm and the CS algorithm on f8. The optimal solution of test problem 8 found by ICS is x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) and f8(x*) = 9777. Additionally, CS and ICS have the same result on f6, which is better than that obtained by the HS algorithm, and the three algorithms have the same results on the other instances. In a word, the solutions obtained by the three algorithms are similar and there is almost no
Table 2: Experimental results of three algorithms with medium KP instances.

Fun   Dim   Algorithm   Vbest/Cbest   Vworst/Cworst   Tbest/Tavg    Gbest/Gavg   Mean    Std dev   SR
f11   50    HS          3103/1000     3103/1000       0000007       96498        3103    0         100
            CS          3103/1000     3097/1000       0.015/1.231   6502         3102    2009      95
            ICS         3103/1000     3103/1000       0000064       222          3103    0         100
f12   80    HS          9201/1505     9199/1505       0000033       1881445      9200    045       95
            CS          9199/1505     9106/1505       2.924/3.776   757980       9176    2627      15
            ICS         9201/1505     9199/1505       0.015/1.359   5315         9200    103       50
f13   100   HS          26559/6717    26559/6717      0000460       15016521     26559   0         100
            CS          26559/6717    25882/6713      1.719/2.565   351529       26447   17858     20
            ICS         26559/6717    26534/6706      0.062/0.999   12182        26558   559       95
f14   120   HS          7393/1109     7393/1109       0000123       523787       7393    0         100
            CS          7393/1109     7334/1109       1.328/1.399   151159       7361    1388      6
            ICS         7393/1109     7393/1109       0.344/2.298   53355        7393    0         100
f15   150   HS          30085/7718    30081/7718      0.578/2.429   1614068532   30081   123       10
            CS          30081/7718    29943/7717      0.656/3.142   182872       30072   3213      90
            ICS         30085/7718    30081/7718      1.203/1.843   144215       30082   1642      20
f16   200   HS          88228/71893   88129/71893     0.921/1.718   1911535429   88150   4147      25
            CS          88228/71893   88018/71881     4.200/4.225   868872       88160   6742      10
            ICS         88228/71893   88221/71886     1.570/3.479   145323       88225   297       30
Table 3: Experimental results of three algorithms with large KP instances.

Fun   Dim    Algorithm   Vbest/Cbest   Vworst/Cworst   Tbest/Gbest   Mean    Std dev
f17   300    HS          14328/1700    14307/1700      339150277     14325   523
             CS          14306/1696    13027/1698      3422239       14021   27506
             ICS         14318/1700    14279/1698      2125131       14303   1061
f18   500    HS          16031/2000    15989/1999      182817885     16015   1121
             CS          16009/2000    15353/2000      2500104       15755   17470
             ICS         16042/2000    15991/1999      215679        16016   1376
f19   800    HS          39759/5000    39656/5000      381222659     39720   2875
             CS          38987/4999    38296/4998      321883        38630   19160
             ICS         39775/4995    39432/4996      396891        39578   8506
f20   1000   HS          67635/10000   67630/10000     31115208      67633   159
             CS          66992/9997    66712/9998      01091         66877   7068
             ICS         67123/10000   66975/10000     068713        67042   4125
significant difference among the three algorithms. Further, the ICS algorithm does not show its advantages thoroughly here. Therefore, in order to further test the performance of the algorithm, we conducted the experiments described in the next subsection.
4.2. Comparison among Three Algorithms on Medium Dimension Knapsack Problems. Figures 2, 3, 4, and 5 show convergence curves of the best profits of the ICS over 30 runs on four test problems with 100, 120, 150, and 200 items. They indicate the global search ability and the convergence ability of the ICS. Several observations can be made, as follows.

The best profit of the 100-item test problem increases quickly, reaching the approximately optimal profit at nearly one second. Although the algorithm shows a momentarily slow evolution on the 120-item problem, the best profit is still obtained after about 2.5 seconds. In the 150-item test problem, the best profit increases quickly for more than a second. For the large 200-item test problem, the best profit also increases very rapidly over 2.5 seconds. The performance of the ICS can be further understood and analyzed from Table 2.

We observe from Table 2 that the ICS has demonstrated an overwhelming advantage over the other two algorithms in solving 0-1 knapsack problems of medium scale. ICS and HS obtained the same optimal solution on all test problems. The CS has the worst performance, and the best solutions found by CS are worse than those obtained by the other two algorithms for f12 and f15. Furthermore, the worst solutions found by the ICS are all better than those obtained by CS. The ICS and the HS obtained the same worst solutions except on f13 and f16; unfortunately, the worst solutions obtained by ICS cannot exceed those of the HS. The ICS uses less "time" and less "average time" than CS on almost all
Figure 3: Best profits (120 items).
Figure 4: Best profits (150 items).
Figure 5: Best profits (200 items).
Figure 6: Comparison of the average computation time of the ICS with the HS and the CS (f11-f16).
Figure 7: The performance on function 17.
of the test problems. In addition, the "Gbest/Gavg" of most problems is much smaller than those of the CS and the HS, which shows that the ICS has fast convergence. "SR" is more than 95 for almost all problems except f15 and f16; further, the "SR" for f15 and f16 is slightly higher than those of the other two algorithms, which indicates the high efficiency of the ICS in solving 0-1 knapsack problems. "Std dev" is much smaller than that of the CS, and the difference between ICS and HS is not very large, which indicates the good stability and superior approximation ability of the ICS.

Figure 6 shows a comparison of the average computation time, in seconds, on f11 to f16 for the proposed
Figure 8: The performance on function 18.
Figure 9: The performance on function 19.
algorithm, the HS, and the CS. In terms of average computation time, Figure 6 shows that the HS algorithm is the best and the CS algorithm is the worst. Moreover, ICS converges to the optima faster than CS on most instances.

Although ICS has shown some advantages in solving the 0-1 knapsack problem on medium-scale instances, the optimal solutions obtained by ICS are not very prominent compared with the other two algorithms. Therefore, in order to further verify the efficiency of our proposed algorithm, we designed the large-scale knapsack tests as follows.

4.3. Comparison among Three Algorithms on Solving 0-1 Knapsack Problems with Large Dimension. Similar to the results on medium-scale knapsack instances, for the large-scale
Figure 10: The performance on function 20.
knapsack problems we observe from Table 3 that the ICS algorithm obtains better solutions in a shorter time and has more obvious advantages over the CS algorithm. Regrettably, ICS is slightly inferior to HS in terms of optimal solution quality on functions 17 and 20. In a word, the ICS has demonstrated better performance, and it thus provides an efficient alternative for solving 0-1 knapsack problems.

The convergence curves shown in Figures 7, 8, 9, and 10 similarly establish that ICS is more effective than CS on all four large-scale KP instances. Through careful observation, it can be seen that HS attains outstanding profits in the initial stage of the evolution and the best value in the final population. Compared with HS and ICS, CS obtained the worst mean profits at various stages. ICS and HS have roughly the same convergence speed. In addition, it is obvious that CS and ICS get stuck in local optima quickly, as can be seen from Figure 10, whereas HS converges to the global optimum rapidly.
5. Conclusions
In this paper, the ICS algorithm has been proposed, based on the CS framework and the greedy transform method, to solve the 0-1 knapsack problem efficiently. An adaptive step is carefully designed to balance the local search and the global search. The genetic mutation operator helps the algorithm achieve fast convergence and avoid local optima. The simulation results demonstrate that the proposed algorithm has superior performance compared with HS and CS. The proposed algorithm thus provides a new method for solving 0-1 knapsack problems.

Further studies will focus on two issues. On one hand, we will apply the proposed approach to other combinatorial optimization problems, such as the multidimensional knapsack problem (MKP) and the traveling salesman
problem (TSP). On the other hand, we will examine new metaheuristic hybrids to solve more complicated 0-1 knapsack problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This project is supported by the National Natural Science Foundation of China (Grant no. 10971052).
References
[1] G. B. Dantzig, "Discrete-variable extremum problems," Operations Research, vol. 5, no. 2, pp. 266–288, 1957.
[2] L. Jourdan, M. Basseur, and E.-G. Talbi, "Hybridizing exact methods and metaheuristics: a taxonomy," European Journal of Operational Research, vol. 199, no. 3, pp. 620–629, 2009.
[3] A. V. Cabot, "An enumeration algorithm for knapsack problems," Operations Research, vol. 18, no. 2, pp. 306–311, 1970.
[4] R. J. W. James and Y. Nakagawa, "Enumeration methods for repeatedly solving Multidimensional Knapsack sub-problems," IEICE Transactions on Information and Systems, vol. 88, no. 10, pp. 2329–2340, 2005.
[5] P. J. Kolesar, "A branch and bound algorithm for the knapsack problem," Management Science, vol. 13, no. 9, pp. 723–735, 1967.
[6] S. Martello, Knapsack Problems: Algorithms and Computer Implementations, John Wiley & Sons, New York, NY, USA, 1990.
[7] F.-T. Lin, "Solving the knapsack problem with imprecise weight coefficients using genetic algorithms," European Journal of Operational Research, vol. 185, no. 1, pp. 133–145, 2008.
[8] J. C. Bansal and K. Deep, "A modified binary particle swarm optimization for knapsack problems," Applied Mathematics and Computation, vol. 218, no. 22, pp. 11042–11061, 2012.
[9] M. Kong, P. Tian, and Y. Kao, "A new ant colony optimization algorithm for the multidimensional Knapsack problem," Computers and Operations Research, vol. 35, no. 8, pp. 2672–2683, 2008.
[10] M. H. Kashan, N. Nahavandi, and A. H. Kashan, "DisABC: a new artificial bee colony algorithm for binary optimization," Applied Soft Computing Journal, vol. 12, no. 1, pp. 342–352, 2012.
[11] L. Wang, X. Fu, Y. Mao et al., "A novel modified binary differential evolution algorithm and its applications," Neurocomputing, vol. 98, pp. 55–75, 2012.
[12] D. Zou, L. Gao, S. Li, and J. Wu, "Solving 0-1 knapsack problem by a novel global harmony search algorithm," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1556–1564, 2011.
[13] L. H. Guo, G. G. Wang, H. Wang et al., "An effective hybrid firefly algorithm with harmony search for global numerical optimization," The Scientific World Journal, vol. 2013, Article ID 125625, 9 pages, 2013.
[14] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "An effective krill herd algorithm with migration operator in biogeography-based optimization," Applied Mathematical Modelling, 2013.
[15] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "Stud krill herd algorithm," Neurocomputing, 2013.
[16] G. G. Wang, L. H. Guo, H. Wang et al., "Incorporating mutation scheme into krill herd algorithm for global numerical optimization," Neural Computing and Applications, 2012.
[17] X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proceedings of the IEEE World Congress on Nature and Biologically Inspired Computing (NaBIC '09), pp. 210–214, December 2009.
[18] X.-S. Yang and S. Deb, "Engineering optimisation by cuckoo search," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
[19] X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2nd edition, 2010.
[20] K. Chaowanawate and A. Heednacram, "Implementation of cuckoo search in RBF neural network for flood forecasting," in Proceedings of the 4th International Conference on Computational Intelligence, Communication Systems and Networks, pp. 22–26, 2012.
[21] V. R. Chifu, C. B. Pop, I. Salomie, D. S. Suia, and A. N. Niculici, "Optimizing the semantic web service composition process using Cuckoo Search," Intelligent Distributed Computing V, vol. 382, pp. 93–102, 2011.
[22] K. Choudhary and G. N. Purohit, "A new testing approach using cuckoo search to achieve multi-objective genetic algorithm," Journal of Computing, vol. 3, no. 4, pp. 117–119, 2011.
[23] M. Dhivya, M. Sundarambal, and L. N. Anand, "Energy efficient computation of data fusion in wireless sensor networks using cuckoo based particle approach (CBPA)," International Journal of Computer Networks and Security, vol. 4, no. 4, pp. 249–255, 2011.
[24] A. Gherboudj, A. Layeb, and S. Chikhi, "Solving 0-1 knapsack problems by a discrete binary version of cuckoo search algorithm," International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 229–236, 2012.
[25] A. Layeb, "A novel quantum inspired cuckoo search for knapsack problems," International Journal of Bio-Inspired Computation, vol. 3, no. 5, pp. 297–305, 2011.
[26] X. S. Yang and S. Deb, "Cuckoo search: recent advances and applications," Neural Computing and Applications, 2013.
[27] T. K. Truong, K. Li, and Y. M. Xu, "Chemical reaction optimization with greedy strategy for the 0-1 knapsack problem," Applied Soft Computing, vol. 13, no. 4, pp. 1774–1780, 2013.
[28] Y. C. He, X. Z. Wang, and Y. Z. Kou, "A binary differential evolution algorithm with hybrid encoding," Journal of Computer Research and Development, vol. 44, no. 9, pp. 1476–1484, 2007.
[29] E. G. Talbi, Metaheuristics: From Design to Implementation, John Wiley & Sons, New York, NY, USA, 2009.
[30] J. Kennedy and R. C. Eberhart, "A discrete binary version of the particle swarm algorithm," in Proceedings of the IEEE International Conference on Computational Cybernetics and Simulation, vol. 5, pp. 4104–4108, October 1997.
[31] J. Zhao, T. Huang, F. Pang, and Y. Liu, "Genetic algorithm based on greedy strategy in the 0-1 knapsack problem," in Proceedings of the 3rd International Conference on Genetic and Evolutionary Computing (WGEC '09), pp. 105–107, October 2009.
[32] Y. C. He, K. Q. Liu, and C. J. Zhang, "Greedy genetic algorithm for solving knapsack problems and its applications," Computer Engineering and Design, vol. 28, no. 11, pp. 2655–2657, 2007.
[33] A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, Hoboken, NJ, USA, 2009.
[34] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.
a new solution. The aim is to use the new and potentially better solutions (cuckoos) to replace not-so-good solutions in the nests [19]. Like other metaheuristic algorithms, CS uses no gradient information during the search, so it is able to solve nonconvex, nonlinear, nondifferentiable, and multimodal problems. Furthermore, there is essentially only a single parameter p_a in CS, and thus it is potentially more generic and able to adapt to a wider class of optimization problems [19]. In addition, Yang and Deb showed that CS outperforms particle swarm optimization and genetic algorithms on some real-world optimization problems [18, 20]. By virtue of its simplicity and robustness, books and articles on the subject have proliferated recently; CS has received more and more attention and application across a large number of areas [20–25]. More details can be found in [26].
As far as we know, the emphasis of much of the previous research on CS was placed on solving optimization problems over discrete or continuous spaces, and only a few scholars were concerned with binary problems. In 2011, Layeb [25] developed a variant of cuckoo search in combination with a quantum-based approach to solve knapsack problems efficiently. Subsequently, Gherboudj et al. [24] utilized a purely binary cuckoo search to tackle knapsack problems. In summary, studies on binary-coded CS have just begun, and its performance needs further improvement so as to expand its field of application.
Given the above considerations, an improved CS algorithm (ICS), based on the CS framework in combination with a novel greedy strategy, is brought forward to solve the 0-1 knapsack problem. Compared with the original CS, the outstanding characteristics of Lévy flights used in the original CS, such as stability and power-law asymptotics, are still retained in ICS. Meanwhile, the operation in which a fraction of the worse nests is abandoned with a probability p_a and new solutions are built randomly is eliminated, and a novel operator, in which the search range is adjusted with an adaptive step size and a genetic mutation is embedded, is introduced. We assess the performance of the proposed algorithm in terms of solution quality, convergence rate, and robustness by testing twenty knapsack instances of different scales. The simulation results not only demonstrate that the proposed algorithm is workable and robust but also show superior approximation capabilities even in high-dimensional spaces.
The remainder of this paper is structured as follows. Section 2 describes the mathematical model of the 0-1 knapsack problem. The improvement strategies and their motivation are given in detail in Section 3, where the greedy transform method is also described. Subsequently, Section 4 presents the results of comparative experiments. Finally, conclusions and comments for further research are given in Section 5.
2. Knapsack Problems
The knapsack problem (KP) is a typical optimization problem of high theoretical and practical value. Many practical applications can be formulated as a KP, such as cutting stock problems, portfolio optimization, scheduling problems, and cryptography [27]. This problem has been proven to be NP-hard; hence it cannot be solved in polynomial time unless P = NP [1]. The classical 0-1 knapsack problem can be defined as follows.
Let $U = \{u_1, u_2, \ldots, u_n\}$ be a set of $n$ items, and let $w_j$ and $p_j$ represent the weight and profit of item $j$, respectively. Here $w_j$, $p_j$, and $j$ are all positive integers. The problem is to choose a subset of the items so that their total weight does not exceed a given capacity $C$ while the total profit is maximized. Without loss of generality, it may be assumed that the weight of each item is smaller than the capacity $C$, so that each item fits into the knapsack. We can use the binary decision variable $x_i$, with $x_i = 1$ if item $i$ is selected and $x_i = 0$ otherwise. The problem can be formulated as follows:

$$\max f = \sum_{i=1}^{n} p_i x_i, \qquad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le C. \tag{1}$$
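The objective and constraint in (1) are straightforward to evaluate in code. The following Python sketch (the function name and the toy item data are ours, not an instance from the paper) returns the total profit of a selection x, or None when the capacity constraint is violated:

```python
def knapsack_value(x, profits, weights, capacity):
    """Evaluate formulation (1): total profit of selection x, or None if infeasible."""
    weight = sum(w for w, xi in zip(weights, x) if xi)
    if weight > capacity:
        return None  # violates sum(w_i * x_i) <= C
    return sum(p for p, xi in zip(profits, x) if xi)

profits = [55, 10, 47, 5, 4]
weights = [95, 4, 60, 32, 23]
print(knapsack_value([1, 0, 0, 1, 1], profits, weights, 150))  # prints 64 (weight 95+32+23 = 150 <= 150)
```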
3. The Improved Cuckoo Search Algorithm (ICS)
Although the original CS algorithm possesses some excellent features, such as structural simplicity and the ability to escape from local optima more easily than several traditional optimization approaches, it still suffers from a slow convergence rate and low accuracy. That is to say, the basic algorithm does not adequately exploit the potential of CS. Therefore, in order to improve the convergence rate and precision of CS, we designed a series of appropriate strategies, yielding a more efficient algorithm (ICS).
ICS introduces the following five improving strategies:

(1) using an adaptive step size to adjust the search range;
(2) using a confidence interval to enhance the local search;
(3) using a genetic mutation operation with a low probability to prevent the ICS from being trapped in a local optimum;
(4) using hybrid encoding to represent each individual in the population;
(5) using the greedy transform method to repair infeasible solutions and optimize feasible solutions.

More detailed descriptions of these strategies are given in the following subsections.
3.1. Hybrid Encoding. The standard CS algorithm operates in continuous space; consequently, we cannot use it directly for optimization in binary space. Additionally, the operations of the original CS algorithm are closed on the set of real numbers but not on the binary set {0, 1}. Since binary optimization problems are widely applied in real-world engineering, the main objective of the ICS algorithm is to deal with binary optimization problems. One of the most significant features of the ICS is that it adopts a hybrid coding scheme [28], in which each cuckoo individual is represented by a two-tuple.
Definition 1 (auxiliary search space). An auxiliary search space $S'$ is a subspace of the $n$-dimensional real space $R^n$, $S' \subset R^n$. The auxiliary search space $S'$ corresponds to a solution space $S = \{0,1\}^n$; $S$ and $S'$ are two parallel search spaces. The search in $S'$ is called active search, while the search in $S$ is called passive search.

Definition 2 (hybrid encoding representation). Each cuckoo individual in the population is represented by the two-tuple $\langle \mathbf{x}_i, \mathbf{b}_i \rangle$ ($i = 1, 2, \ldots, n$), where $\mathbf{x}_i$ works in the auxiliary search space, $\mathbf{b}_i$ performs in the solution space accordingly, and $n$ is the dimensionality of the solution. Further, the sigmoid function [26] is adopted to transform a real-coded vector $\mathbf{x}_i = (x_1, x_2, \ldots, x_n)^T \in [-3.0, 3.0]^n$ into a binary vector $\mathbf{b}_i = (b_1, b_2, \ldots, b_n)^T \in \{0,1\}^n$. The procedure works as follows:

$$b_i = \begin{cases} 1 & \text{if } \operatorname{sig}(x_i) \ge 0.5, \\ 0 & \text{otherwise}, \end{cases} \tag{2}$$

where $\operatorname{sig}(x) = 1/(1 + e^{-x})$ is the sigmoid function.
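Equation (2) can be stated directly in code. The sketch below (function names ours) maps a real-coded vector to its binary counterpart; note that sig(x_i) >= 0.5 exactly when x_i >= 0:

```python
import math

def sig(x):
    """Sigmoid function sig(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(x_real):
    """Eq. (2): b_i = 1 if sig(x_i) >= 0.5, else 0."""
    return [1 if sig(xi) >= 0.5 else 0 for xi in x_real]

print(binarize([-2.1, 0.0, 1.7]))  # prints [0, 1, 1]
```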
3.2. Greedy Transform Method. Many optimization problems are constrained; accordingly, constraint handling is crucial for the efficient design of metaheuristics. Constraint handling strategies, which mainly act on the representation of solutions or on the objective function, can be classified as reject strategies, penalizing strategies, repairing strategies, decoding strategies, and preserving strategies [29]. Repairing strategies, most of which are greedy heuristics, can be applied to the knapsack problem [29]. However, the traditional greedy strategy has some disadvantages in solving the knapsack problem [30]. Truong invented a new repair operator that depends on both a greedy strategy and random selection [31]. Although this method can correct infeasible solutions, the random selection reduces efficiency because it is not greedy enough to improve the convergence speed and accuracy. In this paper, a novel greedy transform method (GTM) is introduced to solve this problem [32]. It can effectively repair infeasible solutions and optimize feasible solutions.
The GTM consists of two stages. The first stage (called RS, the repair stage) examines each variable in descending order of p_i/w_i and confirms each variable with value one as long as feasibility is not violated. The second stage (called OS, the optimize stage) changes the remaining variables from zero to one until feasibility would be violated. The purpose of the RS stage is to repair an abnormal chromosome coding into a normal chromosome, while the OS stage seeks the best chromosome coding. According to the mathematical model in Section 2, pseudocode of the GTM is given in Algorithm 1.
3.3. New Position Updating with Adaptive Step and Genetic Mutation. An outstanding characteristic of PSO is that each individual tends to mimic its successful companions: each individual follows the simple rule of emulating the successful experience of adjacent individuals, and the accumulated behavior is to search for the best area of a high-dimensional space [33]. Compared with PSO, CS shows both differences and similarities. First, in PSO the particle velocity consists of three parts: the previous velocity, the cognitive component, and the social component; the role of the social component is to pull particles in the direction of the global optimum. In CS, a new cuckoo individual is generated with a probability p_a in a completely random manner, which can be seen as the social component of CS; however, it does not reflect well the influence of the entire population on the individual. Second, PSO demonstrates adaptive behavior because the population state changes in step with the tracked individual and global optima, whereas a cuckoo individual does not fully show adaptive behavior in the CS algorithm. Third, the PSO position update formula performs mutation in an embedded memory manner, similar to that used in CS. From the above analysis, we conclude that the CS algorithm also has some minor disadvantages. Inspired by particle swarm optimization, a novel position updating operator is proposed and utilized to strengthen the local search. The ICS and the CS differ in two aspects:

(1) the position updating with adaptive step in ICS completely replaces the random walk of CS in the local search stage;
(2) the probability p_a of alien eggs being found by host birds is excluded from CS, and a genetic mutation probability p_m is included in ICS.

The concept of the "confidence interval" is introduced first, and its schematic is given as well.
Definition 3 (confidence interval). Let $x_{\text{best},j}(t)$ be the $j$th component of $\mathbf{x}_{\text{best}}$ in generation $t$, where $\mathbf{x}_{\text{best}} = (x_{\text{best},1}, x_{\text{best},2}, \ldots, x_{\text{best},n})$ is the global best cuckoo individual in generation $t$. Let $x_{\text{worst},j}(t)$ be the $j$th component of $\mathbf{x}_{\text{worst}}$ in generation $t$, where $\mathbf{x}_{\text{worst}} = (x_{\text{worst},1}, x_{\text{worst},2}, \ldots, x_{\text{worst},n})$ is the global worst cuckoo individual in generation $t$, accordingly. Then $\text{step}_j = |x_{\text{best},j} - x_{\text{worst},j}|$ is the adaptive step of the $j$th component of individual $\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{in})^T$, and the confidence interval (CI) of every component $x_{ij}$ ($j = 1, 2, \ldots, n$) of $\mathbf{x}_i$ ($i = 1, 2, \ldots, m$) is defined as $\text{CI}_{ij} \in [-\text{step}_j, \text{step}_j]$. Figure 1 gives a schematic representation of the confidence interval.
Two major components of any metaheuristic algorithm are intensification and diversification, or exploitation and exploration [19], and their interaction can significantly affect the efficiency of a metaheuristic algorithm. The confidence interval is essentially a region near the global best cuckoo. It is significant that the search step size is adjusted gradually during the evolutionary process, which can effectively balance the contradiction between exploration and exploitation. In the early stage of the search, cuckoo individuals are randomly distributed over the entire search space, so most adaptive
Input: X = (x_1, x_2, ..., x_n) ∈ {0,1}^n
Step 1 (sort): Sort the items according to the value-to-weight ratio p_i/w_i (i = 1, 2, ..., n) in descending order, forming a queue s_1, s_2, ..., s_n of length n such that p_{s_i}/w_{s_i} ≥ p_{s_j}/w_{s_j} for i < j.
Step 2 (repair stage):
    Tempc = 0
    for i = 1 to n
        if (x_{s_i} = 1) and (Tempc + w_{s_i} ≤ C) then
            y_{s_i} = 1; Tempc = Tempc + w_{s_i}
        end if
    end for
Step 3 (optimize stage):
    for j = 1 to n
        if (y_{s_j} = 0) and (Tempc + w_{s_j} ≤ C) then
            y_{s_j} = 1; Tempc = Tempc + w_{s_j}
        end if
    end for
Output: Y = (y_1, y_2, ..., y_n); computation is terminated.

Algorithm 1: Greedy transform method.
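A compact Python rendering of Algorithm 1 may also help; it is a sketch under our own naming, with the two passes written as loops over the ratio-sorted index order:

```python
def greedy_transform(x, profits, weights, capacity):
    """GTM sketch: the repair stage (RS) keeps x's selected items in decreasing
    p/w order while feasible; the optimize stage (OS) then packs any remaining
    items that still fit. Returns a feasible, maximal selection y."""
    n = len(x)
    # Step 1: indices sorted by value-to-weight ratio, descending (stable sort).
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    y, used = [0] * n, 0
    # Step 2 (RS): confirm selected items while capacity allows.
    for i in order:
        if x[i] == 1 and used + weights[i] <= capacity:
            y[i], used = 1, used + weights[i]
    # Step 3 (OS): flip remaining zeros to one while capacity allows.
    for i in order:
        if y[i] == 0 and used + weights[i] <= capacity:
            y[i], used = 1, used + weights[i]
    return y

print(greedy_transform([1, 1, 1], [10, 7, 6], [5, 4, 3], 7))  # prints [1, 0, 0]
```

Note that the output is always feasible and maximal (no further item fits), which is exactly what the ICS needs from its repair operator.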
Figure 1: The schematic representation of the confidence interval (an interval of width step_j on either side of x_best,j, with x_worst,j at one end).
steps are large and most confidence intervals are wide, which is very beneficial for exploration. As the iterations continue, most adaptive steps gradually become small and most confidence intervals become narrow accordingly. Thus, the exploitation capability is gradually strengthened.
The purpose of mutation is to introduce new genes so as to increase the diversity of the population. Mutation can also play a role in balancing the exploration-exploitation contradiction. A genetic mutation operation with a small probability is carried out, as it can effectively prevent premature convergence of the ICS. The new position updating formula of ICS is shown in Algorithm 2.
Here "best" and "worst" are the indexes of the global best cuckoo and the worst cuckoo, respectively, and r, r_1, and rand(·) are uniformly generated random numbers in [0, 1]. Based on the above analysis, the pseudocode of the ICS for 0-1 knapsack problems is described in Algorithm 3.
The time complexity of the proposed algorithm is approximately O(maxT · m · 2n) + O(n log n), and it is still
for i = 1 to n
    step_i = |x_{best,i} − x_{worst,i}|
    x_i' = x_{best,i} ± r · step_i
    if (r_1 ≤ p_m) then
        x_i' = a_i + rand(·) · (b_i − a_i)
    end if
end for

Algorithm 2: New position updating formula of ICS.
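In Python, the loop of Algorithm 2 might look as follows. This is a sketch: the bounds low and high stand in for a_i and b_i of the real-coded interval, and the function name is ours.

```python
import random

def update_position(x_best, x_worst, pm, low=-3.0, high=3.0):
    """Algorithm 2 sketch: move each component near the global best within
    the confidence interval [-step_j, step_j]; with small probability pm,
    reset the component uniformly in [low, high] (genetic mutation)."""
    x_new = []
    for bj, wj in zip(x_best, x_worst):
        step = abs(bj - wj)                                     # adaptive step
        xj = bj + random.choice((-1.0, 1.0)) * random.random() * step
        if random.random() < pm:                                # genetic mutation
            xj = low + random.random() * (high - low)
        x_new.append(min(high, max(low, xj)))                   # keep in bounds
    return x_new
```

With pm = 0 the update degenerates to a pure pull toward the global best; the mutation is what keeps diversity once the population has collapsed around it.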
essentially linear, up to the O(n log n) sorting term. Thus, the time complexity of the new algorithm does not increase in order of magnitude compared with the original CS algorithm.
4. Experimental Results and Analysis
In order to test the optimization ability of ICS and investigate the effectiveness of the algorithms on different instance types, we
Step 1 (sorting): According to the value-to-weight ratio p_i/w_i (i = 1, 2, ..., n) in descending order, a queue s_1, s_2, ..., s_n of length n is formed.
Step 2 (initialization): Generate m cuckoo nests randomly: ⟨x_1, b_1⟩, ⟨x_2, b_2⟩, ..., ⟨x_m, b_m⟩. Calculate the fitness of each individual, f(b_i), 1 ≤ i ≤ m, and determine ⟨x_best, b_best⟩. Set the generation counter G = 1 and the mutation parameter p_m.
Step 3: while (the stopping criterion is not satisfied)
    for i = 1 to m
        for j = 1 to n
            x_i(j) = x_i(j) + α ⊕ Levy(λ)
            Apply the new position updating formula of ICS (Algorithm 2)
            Repair illegal individuals and optimize legal individuals (Algorithm 1)
        end for
    end for
Step 4 (keep best solutions): Rank the solutions and find the current best (b_best, f(b_best)); G = G + 1.
Step 5: end while

Algorithm 3: The main procedure of ICS.
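Putting the pieces together, a condensed, self-contained Python sketch of the ICS loop is given below. It is our own simplification: the Lévy-flight move of Step 3 is omitted for brevity, keeping only the adaptive-step update with genetic mutation (Algorithm 2), sigmoid binarization (eq. (2)), and the greedy repair (Algorithm 1); all names are ours.

```python
import math
import random

def ics(profits, weights, capacity, m=20, pm=0.15, iters=100, lo=-3.0, hi=3.0):
    """Condensed ICS sketch (no Levy flight): adaptive-step pull toward the
    global best, genetic mutation, sigmoid binarization, greedy repair."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)

    def binarize(x):                      # eq. (2)
        return [1 if 1.0 / (1.0 + math.exp(-v)) >= 0.5 else 0 for v in x]

    def repair(b):                        # Algorithm 1 (GTM)
        y, used = [0] * n, 0
        for keep in (1, 0):               # pass 1: b's ones; pass 2: the rest
            for i in order:
                if y[i] == 0 and b[i] == keep and used + weights[i] <= capacity:
                    y[i], used = 1, used + weights[i]
        return y

    def fit(b):
        return sum(p * bi for p, bi in zip(profits, b))

    nests = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(m)]
    bits = [repair(binarize(x)) for x in nests]
    best = max(range(m), key=lambda i: fit(bits[i]))
    best_x, best_b = nests[best][:], bits[best][:]
    for _ in range(iters):
        worst_x = nests[min(range(m), key=lambda i: fit(bits[i]))][:]
        for i in range(m):
            x = nests[i]
            for j in range(n):            # Algorithm 2, per component
                step = abs(best_x[j] - worst_x[j])
                x[j] = best_x[j] + random.choice((-1.0, 1.0)) * random.random() * step
                if random.random() < pm:  # genetic mutation
                    x[j] = random.uniform(lo, hi)
                x[j] = min(hi, max(lo, x[j]))
            bits[i] = repair(binarize(x))
            if fit(bits[i]) > fit(best_b):
                best_x, best_b = x[:], bits[i][:]
    return best_b, fit(best_b)
```

On a classical 10-item instance with known optimum 295 (profits (55, 10, 47, 5, 4, 50, 8, 61, 85, 87), weights (95, 4, 60, 32, 23, 72, 80, 62, 65, 46), C = 269), the sketch typically reaches an optimal or near-optimal value within a few generations, largely because the greedy repair alone already yields high-quality maximal solutions.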
Table 1: Experimental results of three algorithms on small KP instances.

Fun   Size   HS         CS         ICS
f1    10     295        295        295
f2    20     1024       1024       1024
f3    4      35         35         35
f4    4      23         23         23
f5    15     481.0694   481.0694   481.0694
f6    10     50         52         52
f7    7      107        107        107
f8    23     9761       9776       9777
f9    5      130        130        130
f10   20     1025       1025       1025
consider twenty 0-1 knapsack problems, involving ten small-scale instances, six medium-scale instances, and four large-scale instances. The solution quality and performance are compared with a binary version of HS and a binary version of CS, denoted for simplicity as HS and CS, respectively.

Test problems 1–10 are taken from [12]. Test problems 11 and 13 of He et al. [28] are used in our numerical experiments, and test problem 15 is constructed from test problems 11 and 13. Test problem 16 is generated following Kellerer et al. [34]. Test problems 12 and 14 are generated following Gherboudj et al. [24]. Test problems 17–20 are taken from [12].
All the algorithms were implemented in Visual C++ 6.0. The test environment is a personal computer with an AMD Athlon(tm) II X2 250 processor (3.01 GHz) and 1.75 GB RAM, running Windows XP. Three groups of experiments were performed to assess the efficiency and performance of our algorithm. In all experiments, the parameters of CS were set as follows: p_a = 0.25, and the number of cuckoos is 20. For the HS algorithm, the harmony memory size HMS = 5, the harmony memory consideration rate HMCR = 0.9, the pitch adjusting rate PAR = 0.3, and the bandwidth b_w = x^U − x^L. For the ICS algorithm, the number of cuckoos is 20, and the genetic mutation
Figure 2: Best profits (100 items).
probability p_m = 0.15. The experiments on each function were repeated 30 times independently. The results are tabulated in Tables 1–3.
4.1. Comparison among Three Algorithms on Small-Dimension Knapsack Problems. Table 1 shows the experimental results of the ICS, HS, and CS algorithms on ten KP tests of different dimensions. The results in Table 1 indicate that the proposed ICS algorithm performs better than the HS and CS algorithms on f8. The optimal solution of test problem 8 found by ICS is x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) with f8(x*) = 9777. Additionally, CS and ICS obtain the same result on f6, which is better than that obtained by HS, and the three algorithms obtain the same results on the other instances. In a word, the solutions obtained by the three algorithms are similar, and there is almost no
Table 2: Experimental results of three algorithms on medium KP instances.

Fun  Dim  Algorithm  V_best/C_best  V_worst/C_worst  T_best/T_avg  G_best/G_avg  Mean   Std dev  SR
f11  50   HS         3103/1000      3103/1000        0000007       96498         3103   0        100
          CS         3103/1000      3097/1000        00151231      6502          3102   2009     95
          ICS        3103/1000      3103/1000        0000064       222           3103   0        100
f12  80   HS         9201/1505      9199/1505        0000033       1881445       9200   045      95
          CS         9199/1505      9106/1505        29243776      757980        9176   2627     15
          ICS        9201/1505      9199/1505        00151359      5315          9200   103      50
f13  100  HS         26559/6717     26559/6717       0000460       15016521      26559  0        100
          CS         26559/6717     25882/6713       17192565      351529        26447  17858    20
          ICS        26559/6717     26534/6706       00620999      12182         26558  559      95
f14  120  HS         7393/1109      7393/1109        0000123       523787        7393   0        100
          CS         7393/1109      7334/1109        13281399      151159        7361   1388     6
          ICS        7393/1109      7393/1109        03442298      53355         7393   0        100
f15  150  HS         30085/7718     30081/7718       05782429      1614068532    30081  123      10
          CS         30081/7718     29943/7717       06563142      182872        30072  3213     90
          ICS        30085/7718     30081/7718       12031843      144215        30082  1642     20
f16  200  HS         88228/71893    88129/71893      09211718      1911535429    88150  4147     25
          CS         88228/71893    88018/71881      42004225      868872        88160  6742     10
          ICS        88228/71893    88221/71886      15703479      145323        88225  297      30
Table 3: Experimental results of three algorithms on large KP instances.

Fun  Dim   Algorithm  V_best/C_best  V_worst/C_worst  T_best/G_best  Mean   Std dev
f17  300   HS         14328/1700     14307/1700       339150277      14325  523
           CS         14306/1696     13027/1698       3422239        14021  27506
           ICS        14318/1700     14279/1698       2125131        14303  1061
f18  500   HS         16031/2000     15989/1999       182817885      16015  1121
           CS         16009/2000     15353/2000       2500104        15755  17470
           ICS        16042/2000     15991/1999       215679         16016  1376
f19  800   HS         39759/5000     39656/5000       381222659      39720  2875
           CS         38987/4999     38296/4998       321883         38630  19160
           ICS        39775/4995     39432/4996       396891         39578  8506
f20  1000  HS         67635/10000    67630/10000      31115208       67633  159
           CS         66992/9997     66712/9998       01091          66877  7068
           ICS        67123/10000    66975/10000      068713         67042  4125
significant difference among the three algorithms; furthermore, the ICS algorithm does not show its advantages thoroughly. Therefore, in order to further test the performance of the algorithm, we conducted the experiments described in the next subsection.
4.2. Comparison among Three Algorithms on Medium-Dimension Knapsack Problems. Figures 2, 3, 4, and 5 show convergence curves of the best profits of the ICS over 30 runs on four test problems with 100, 120, 150, and 200 items, indicating the global search ability and the convergence behavior of the ICS. Several observations follow.

The best profit on the 100-item test problem increases quickly, reaching the approximately optimal profit at nearly one second. Although the algorithm evolves slowly for a moment on the 120-item problem, the best profit is still obtained after about 2.5 seconds. On the 150-item test problem, the best profit increases quickly for more than a second. For the larger 200-item test problem, the best profit also increases very rapidly over 2.5 seconds. The performance of the ICS can be further understood and analyzed from Table 2.

We observe from Table 2 that the ICS has demonstrated an overwhelming advantage over the other two algorithms in solving 0-1 knapsack problems of medium scale. ICS and HS obtained the same optimal solution on all test problems. CS has the worst performance, and the best solutions found by CS are worse than those obtained by the other two algorithms on f12 and f15. Furthermore, the worst solutions found by the ICS are all better than those obtained by CS. The ICS and the HS obtained the same worst solutions except on f13 and f16; unfortunately, there the worst solution obtained by ICS does not exceed that of the HS. The ICS uses less time and less average time than CS on almost all
Figure 3: Best profits (120 items).
Figure 4: Best profits (150 items).
Figure 5: Best profits (200 items).
Figure 6: Comparison of the average computation time of the ICS with the HS and the CS on f11–f16.
Figure 7: The performance on function 17.
of the test problems. In addition, the G_best/G_avg of most problems is much smaller than that of the CS and the HS, which shows that the ICS converges quickly. The SR is more than 95% for almost all problems except f15 and f16; further, the SR on f15 and f16 is slightly higher than that of the other two algorithms, which indicates the high efficiency of the ICS in solving 0-1 knapsack problems. The Std dev is much smaller than that of the CS, and the difference between ICS and HS is not very large, which indicates the good stability and superior approximation ability of the ICS.

Figure 6 shows a comparison of the average computation time, in seconds, on f11 to f16 for the proposed
Figure 8: The performance on function 18.
Figure 9: The performance on function 19.
algorithm, the HS, and the CS. In terms of average computation time, Figure 6 shows that the HS algorithm is the best and the CS algorithm is the worst; moreover, ICS converges to the optimum faster than CS on most instances.

Although ICS has shown some advantages in solving the 0-1 knapsack problem on medium-scale instances, the optimal solutions obtained by ICS are not very prominent compared with the other two algorithms. Therefore, in order to further verify the efficiency of the proposed algorithm, we designed the large-scale knapsack tests as follows.

4.3. Comparison among Three Algorithms on 0-1 Knapsack Problems with Large Dimension. Similar to the results on medium-scale knapsack instances, for the large-scale
Figure 10: The performance on function 20.
knapsack problems we observe from Table 3 that the ICS algorithm obtains better solutions in a shorter time and has more obvious advantages over the CS algorithm. Regrettably, ICS is slightly inferior to HS in terms of optimal solution quality on functions 17 and 20. In a word, the ICS has demonstrated better performance, and it thus provides an efficient alternative for solving 0-1 knapsack problems.

The convergence curves shown in Figures 7, 8, 9, and 10 similarly establish that ICS is more effective than CS on all four large-scale KP instances. Through careful observation, it can be seen that HS obtains outstanding profits in the initial stage of the evolution and the best value in the final population. Compared with HS and ICS, CS obtained the worst mean profits at various stages. ICS and HS have roughly the same convergence speed. In addition, it is obvious from Figure 10 that CS and ICS get stuck at local optima quickly, whereas HS converges to the global optimum rapidly.
5 Conclusions
In this paper the ICS algorithm has been proposed basedon the CS framework and a greedy transition method tosolve 0-1 knapsack problem efficiently An adaptive step iscarefully designed to balance the local search and the globalsearch Genetic mutation operator helps the algorithm toyield fast convergence and avoids local optima The simu-lation results demonstrate that the proposed algorithm hassuperior performance when compared with HS and CS Theproposed algorithm thus provides a new method for solving0-1 knapsack problems
Further studies will focus on the two issues On onehand we would apply our proposed approach on othercombinatorial optimization problems such as multidimen-sional knapsack problem (MKP) and traveling salesman
Computational Intelligence and Neuroscience 9
problem (TSP) On the other hand we would examine newmeta-hybrid to solve 0-1 knapsack problems which are toocomplicated
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
This project is supported by the National Natural ScienceFoundation of China (Grant no 10971052)
References
[1] G B Dantzig ldquoDiscrete-variable extremum problemsrdquo Opera-tions Research vol 5 no 2 pp 266ndash288 1957
[2] L Jourdan M Basseur and E-G Talbi ldquoHybridizing exactmethods and metaheuristics a taxonomyrdquo European Journal ofOperational Research vol 199 no 3 pp 620ndash629 2009
[3] A V Cabot ldquoAn enumeration algorithm for knapsack prob-lemsrdquo Operations Research vol 18 no 2 pp 306ndash311 1970
[4] R J W James and Y Nakagawa ldquoEnumeration methods forrepeatedly solving Multidimensional Knapsack sub-problemsrdquoIEICE Transactions on Information and Systems vol 88 no 10pp 2329ndash2340 2005
[5] P J Kolesar ldquoA branch and bound algorithm for the knapsackproblemrdquoManagement Science vol 13 no 9 pp 723ndash735 1967
[6] SMartelloKnapsack Problem Algorithms andComputer Imple-mentations John Wiley amp Sons New York NY USA 1990
[7] F-T Lin ldquoSolving the knapsack problem with imprecise weightcoefficients using genetic algorithmsrdquo European Journal ofOperational Research vol 185 no 1 pp 133ndash145 2008
[8] J C Bansal and K Deep ldquoA modified binary particle swarmoptimization for knapsack problemsrdquo Applied Mathematics andComputation vol 218 no 22 pp 11042ndash11061 2012
[9] M Kong P Tian and Y Kao ldquoA new ant colony optimizationalgorithm for the multidimensional Knapsack problemrdquo Com-puters and Operations Research vol 35 no 8 pp 2672ndash26832008
[10] M H Kashan N Nahavandi and A H Kashan ldquoDisABC anew artificial bee colony algorithm for binary optimizationrdquoApplied Soft Computing Journal vol 12 no 1 pp 342ndash352 2012
[11] LWang X Fu Y Mao et al ldquoA novel modified binary differen-tial evolution algorithm and its applicationsrdquo Neurocomputingvol 98 pp 55ndash75 2012
[12] D Zou L Gao S Li and J Wu ldquoSolving 0-1 knapsack problemby a novel global harmony search algorithmrdquo Applied SoftComputing Journal vol 11 no 2 pp 1556ndash1564 2011
[13] L H Guo G G Wang H Wang et al ldquoAn effective hybridfirefly algorithm with harmony search for global numericaloptimizationrdquoThe Scientific World Journal vol 2013 Article ID125625 9 pages 2013
[14] G GWang A H Gandomi and A H Alavi ldquoAn effective krillherd algorithmwith migration operator in biogeography-basedoptimizationrdquo Applied Mathematical Modelling 2013
[15] G G Wang A H Gandomi and A H Alavi ldquoStud krill herdalgorithmrdquo Neurocomputing 2013
[16] G G Wang L H Guo H Wang et al ldquoIncorporatingmutation scheme into krill herd algorithm for global numericaloptimizationrdquo Neural Computing and Applications 2012
[17] X-S Yang and S Deb ldquoCuckoo search via Levy flightsrdquo in Pro-ceedings of the IEEE World Congress on Nature and BiologicallyInspired Computing (NABIC rsquo09) pp 210ndash214 December 2009
[18] X-S Yang and S Deb ldquoEngineering optimisation by cuckoosearchrdquo International Journal of Mathematical Modelling andNumerical Optimisation vol 1 no 4 pp 330ndash343 2010
[19] X S Yang Nature-Inspired Metaheuristic Algorithms LuniverPress Frome UK 2nd edition 2010
[20] K Chaowanawate and A Heednacram ldquoImplementation ofcuckoo search in RBF neural network for flood forecastingrdquoin Proceedings of the 4th International Conference on Computa-tional In-telligence Communication Systems and Networks pp22ndash26 2012
[21] V R Chifu C B Pop I Salomie D S Suia and A N NiculicildquoOptimizing the semantic web service composition processusing Cuckoo Searchrdquo Intelligent Distributed Computing V vol382 pp 93ndash102 2011
[22] K Choudhary andGN Purohit ldquoA new testing approach usingcuckoo search to achieve multi-objective genetic algorithmrdquoJournal of Computing vol 3 no 4 pp 117ndash119 2011
[23] MDhivyaM Sundarambal and LN Anand ldquoEnergy efficientcomputation of data fusion in wireless sensor networks usingcuckoo based particle approach (CBPA)rdquo International Journalof Computer Networks and Security vol 4 no 4 pp 249ndash2552011
[24] A Gherboudj A Layeb and S Chikhi ldquoSolving 0-1 knapsackproblems by a discrete binary version of cuckoo search algo-rithmrdquo International Journal of Bio-Inspired Computation vol4 no 4 pp 229ndash236 2012
[25] A Layeb ldquoA novel quantum inspired cuckoo search for knap-sack problemsrdquo International Journal of Bio-Inspired Computa-tion vol 3 no 5 pp 297ndash305 2011
[26] X S Yang and S Deb ldquoCuckoo search recent advances andapplicationsrdquo Neural Computing and Applications 2013
[27] T K Truong K Li and YM Xu ldquoChemical reaction optimiza-tion with greedy strategy for the 0-1 knapsack problemrdquoAppliedSoft Computing vol 13 no 4 pp 1774ndash1780 2013
[28] Y C He X Z Wang and Y Z Kou ldquoA binary differential evo-lution algorithm with hybrid encodingrdquo Journal of ComputerResearch and Development vol 44 no 9 pp 1476ndash1484 2007
[29] E G Talbi Metaheuristics From Design To ImplementationJohn Wiley amp Sons New York NY USA 2009
[30] J Kennedy and R C Eberhart ldquoA discrete binary versionof the particle swarm algorithmrdquo in Proceedings of the IEEEInternational Conference on Computational Cybernetics andSimulation vol 5 pp 4104ndash4108 October 1997
[31] J Zhao T Huang F Pang and Y Liu ldquoGenetic algorithm basedon greedy strategy in the 0-1 knapsack problemrdquo in Proceedingsof the 3rd International Conference on Genetic and EvolutionaryComputing (WGEC rsquo09) pp 105ndash107 October 2009
[32] Y C He K Q Liu and C J Zhang ldquoGreedy genetic algorithmfor solving knapsack problems and its applicationsrdquo ComputerEngineering and Design vol 28 no 11 pp 2655ndash2657 2007
[33] A P Engelbrecht Fundamentals of Computational SwarmIntelligence John Wiley amp Sons Hoboken NJ USA 2009
[34] H Kellerer U Pferschy and D Pisinger Knapsack ProblemsSpringer Berlin Germany 2004
Computational Intelligence and Neuroscience 3
optimization problems. One of the most significant features of the ICS is that it adopts a hybrid coding scheme [28], in which each cuckoo individual is represented by a two-tuple.
Definition 1 (auxiliary search space). An auxiliary search space S′ is a subspace of the n-dimensional real space R^n, that is, S′ ⊂ R^n. An auxiliary search space S′ corresponds to a solution space S = {0, 1}^n. Additionally, S and S′ are two parallel search spaces. The search in S′ is called active search, while the search in S is called passive search.

Definition 2 (hybrid encoding representation). Each cuckoo individual in the population is represented by the two-tuple ⟨x_i, b_i⟩ (i = 1, 2, ..., n), where x_i works in the auxiliary search space, b_i performs in the solution space accordingly, and n is the dimensionality of the solution. Further, the sigmoid function [26] is adopted to transform a real-coded vector x_i = (x_1, x_2, ..., x_n)^T ∈ [−3.0, 3.0]^n into a binary vector b_i = (b_1, b_2, ..., b_n)^T ∈ {0, 1}^n. The procedure works as follows:

    b_i = 1,  if sig(x_i) ≥ 0.5;
    b_i = 0,  otherwise,                (2)

where sig(x) = 1/(1 + e^(−x)) is the sigmoid function.
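As a concrete illustration, the sigmoid decoding of Definition 2 can be sketched in a few lines of Python (the function name is ours; the coding range and the component-wise rule follow the definition above, and since sig(x) ≥ 0.5 holds exactly when x ≥ 0, the rule reduces to a sign test):

```python
import numpy as np

def decode(x):
    """Map a real-coded vector x (auxiliary space S') to a binary
    vector b (solution space S) via b_i = 1 iff sig(x_i) >= 0.5."""
    sig = 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))
    return (sig >= 0.5).astype(int)

print(decode([-2.1, 0.0, 1.7, -0.3]))  # -> [0 1 1 0]
```

Every cuckoo thus evolves in the continuous space while its fitness is always evaluated on the decoded binary vector.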
3.2. Greedy Transform Method. Many optimization problems are constrained; accordingly, constraint handling is crucial for the efficient design of metaheuristics. Constraint handling strategies, which mainly act on the representation of solutions or on the objective function, can be classified as reject strategies, penalizing strategies, repairing strategies, decoding strategies, and preserving strategies [29]. Repairing strategies, most of which are greedy heuristics, can be applied to the knapsack problem [29]. However, the traditional greedy strategy has some disadvantages in solving the knapsack problem [30]. Truong invented a new repair operator which depends on both the greedy strategy and random selection [31]. Although this method can correct infeasible solutions, random selection reduces its efficiency because it is not greedy enough to improve the convergence speed and accuracy. In this paper, a novel greedy transform method (GTM) is introduced to solve this problem [32]. It can effectively repair infeasible solutions and optimize feasible solutions.
This GTM consists of two stages. The first stage (called RS) examines each variable in descending order of p_i/w_i and confirms a variable value of one as long as feasibility is not violated. The second stage (called OS) changes the remaining variables from zero to one until feasibility is violated. The purpose of the RS stage is to repair an abnormal chromosome coding into a normal one, while the OS stage is to achieve a better chromosome coding. Then, according to the mathematical model in Section 2, the pseudocode of the GTM is described in Algorithm 1.
3.3. New Position Updating with Adaptive Step and Genetic Mutation. An outstanding characteristic of PSO is that each individual tends to mimic its successful companions: each individual follows the simple rule of emulating the successful experience of adjacent individuals, and the accumulated behavior is to search for the best area of a high-dimensional space [33]. Compared with PSO, the CS shows some differences and similarities. Firstly, in PSO the particle velocity consists of three parts: the previous velocity term, the cognitive component, and the social component; the role of the social component is to pull the particles in the direction of the global optimum. In the CS algorithm, a new cuckoo individual is generated with probability p_a in a completely random manner, which can be seen as the social component of the CS; however, it does not well reflect the impact of the entire population on the individual. Secondly, PSO demonstrates adaptive behavior because the population state changes in pace with the tracked individual optimum and global optimum, whereas the cuckoo individual does not fully show adaptive behavior in the CS algorithm. Thirdly, in PSO the position update formula performs mutation in an embedded memory manner, which is similar to that used in CS. From the above analyses, we can conclude that the CS algorithm also has some minor disadvantages. Inspired by the idea of particle swarm optimization, a novel position updating operator is proposed and utilized to strengthen the ability of local search. The ICS and the CS differ in two aspects, as follows.

(1) The position updating with adaptive step in ICS completely replaces the random walk of CS in the local search stage.

(2) The probability p_a of alien eggs being found by host birds is excluded from the CS, and a genetic mutation probability p_m is included in the ICS.
The concept of "confidence interval" is introduced first, and a schematic is given as well.

Definition 3 (confidence interval). Let x_best,j(t) be the jth component of x_best in generation t, where x_best = (x_best,1, x_best,2, ..., x_best,n) is the global best cuckoo individual in generation t. Let x_worst,j(t) be the jth component of x_worst in generation t, where x_worst = (x_worst,1, x_worst,2, ..., x_worst,n) is the global worst cuckoo individual in generation t, accordingly. Then step_j = |x_best,j − x_worst,j| is the adaptive step of the jth component of an individual x_i = (x_i1, x_i2, ..., x_in)^T, and the confidence interval (CI) of every component x_ij (j = 1, 2, ..., n) of x_i (i = 1, 2, ..., m) is defined as CI_ij ∈ [−step_j, step_j]. Figure 1 gives the schematic representation of the confidence interval.
Two major components of any metaheuristic algorithm are intensification and diversification, or exploitation and exploration [19], and their interaction can have a significant effect on the efficiency of a metaheuristic algorithm. The confidence interval is essentially a region near the global best cuckoo. Notably, the search step size is adjusted gradually during the evolutionary process, which can effectively balance the contradiction between exploration and exploitation. In the early stage of the search, cuckoo individuals are randomly distributed over the entire response space, so most adaptive
Input: X = (x_1, x_2, ..., x_n) ∈ {0, 1}^n
Step 1 (sort). The items are sorted according to the value-to-weight ratio p_i/w_i (i = 1, 2, ..., n) in descending order, forming a queue s_1, s_2, ..., s_n of length n; that is, p_{s_1}/w_{s_1} ≥ p_{s_2}/w_{s_2} for s_1 < s_2.
Step 2 (repair stage).
    Tempc = 0
    for i = 1 to n
        if (x_{s_i} = 1 and Tempc + w_{s_i} ≤ C) then
            y_{s_i} = 1; Tempc = Tempc + w_{s_i}
        else
            y_{s_i} = 0
        end if
    end for
Step 3 (optimize stage).
    for j = 1 to n
        if (y_{s_j} = 0 and Tempc + w_{s_j} ≤ C) then
            y_{s_j} = 1; Tempc = Tempc + w_{s_j}
        end if
    end for
Output: Y = (y_1, y_2, ..., y_n); computation is terminated.

Algorithm 1: Greedy transform method.
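A direct transcription of Algorithm 1 into Python may help; the function and variable names are ours, but the two-stage logic (repair in decreasing p_i/w_i order, then greedy optimization of the leftover capacity) follows the pseudocode:

```python
def greedy_transform(x, p, w, C):
    """Greedy transform method (GTM) of Algorithm 1.
    x: candidate 0/1 vector (possibly infeasible), p: profits,
    w: weights, C: knapsack capacity.  Returns a feasible vector Y."""
    n = len(x)
    # Step 1 (sort): indices in descending order of value-to-weight ratio
    order = sorted(range(n), key=lambda i: p[i] / w[i], reverse=True)
    y = [0] * n
    total = 0
    # Step 2 (repair stage): keep selected items while they still fit
    for i in order:
        if x[i] == 1 and total + w[i] <= C:
            y[i] = 1
            total += w[i]
    # Step 3 (optimize stage): greedily add unselected items that fit
    for i in order:
        if y[i] == 0 and total + w[i] <= C:
            y[i] = 1
            total += w[i]
    return y

# An infeasible candidate (total weight 12 > C = 8) is repaired and
# then optimized: the best-ratio items that still fit are retained.
print(greedy_transform([1, 1, 1], p=[10, 7, 4], w=[5, 4, 3], C=8))  # -> [1, 0, 1]
```

Because the optimize stage only ever adds items that fit, the output is always feasible regardless of the input.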
Figure 1: The schematic representation of the confidence interval.
steps are large and most confidence intervals are wide, which is very beneficial for performing a lot of exploration. As the iterations continue, most adaptive steps gradually become small and most confidence intervals become narrow accordingly; thus, the exploitation capability is gradually strengthened.

The purpose of mutation is to introduce new genes so as to increase the diversity of the population. Mutation can also play a role in balancing the exploration-exploitation contradiction. Genetic mutation with a small probability is carried out, for it can effectively prevent the premature convergence of the ICS. The new position updating formula of the ICS is shown in Algorithm 2.
Here, "best" and "worst" are the indexes of the global best cuckoo and the worst cuckoo, respectively, and r, r_1, and rand(·) are all uniformly generated random numbers in [0, 1].

Based on the above-mentioned analyses, the pseudocode of the ICS for 0-1 knapsack problems is described in Algorithm 3.
The time complexity of our proposed algorithm is approximately O(MaxT · m · 2n) + O(n log n), and it is still
for i = 1 to n
    step_i = |x_best,i − x_worst,i|
    x_i′ = x_best,i ± r · step_i
    if (r_1 ≤ p_m) then
        x_i′ = a_i + rand(·) · (b_i − a_i)
    end if
end for

Algorithm 2: New position updating formula of ICS.
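The loop of Algorithm 2 can be sketched as follows; a_i and b_i are the lower and upper bounds of the coding range, taken here as −3.0 and 3.0 per Definition 2, and the random sign models the "±" in the update (the function name is ours):

```python
import random

def ics_update(x_best, x_worst, p_m, a=-3.0, b=3.0):
    """New position update of ICS (Algorithm 2): each component is
    drawn inside the confidence interval around the global best, with
    step_i = |x_best_i - x_worst_i|; with small probability p_m the
    component is instead reinitialized uniformly in [a, b]
    (genetic mutation)."""
    x_new = []
    for xb, xw in zip(x_best, x_worst):
        step = abs(xb - xw)                       # adaptive step
        r = random.random()
        sign = 1.0 if random.random() < 0.5 else -1.0
        xi = xb + sign * r * step                 # x_i' = x_best_i +/- r*step_i
        if random.random() <= p_m:                # genetic mutation
            xi = a + random.random() * (b - a)
        x_new.append(xi)
    return x_new
```

With p_m = 0, every new component stays within step_i of the corresponding best component, which is exactly the confidence interval of Definition 3.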
linear. The time complexity of the newly proposed algorithm does not increase in magnitude compared with the original CS algorithm.
4. Experimental Results and Analysis
In order to test the optimization ability of ICS and investigate the effectiveness of the algorithms for different instance types, we
Step 1 (sorting). According to the value-to-weight ratio p_i/w_i (i = 1, 2, ..., n) in descending order, a queue s_1, s_2, ..., s_n of length n is formed.
Step 2 (initialization). Generate m cuckoo nests randomly: ⟨x_1, b_1⟩, ⟨x_2, b_2⟩, ..., ⟨x_m, b_m⟩. Calculate the fitness of each individual f(b_i), 1 ≤ i ≤ m, and determine ⟨x_best, b_best⟩. Set the generation counter G = 1. Set the mutation parameter p_m.
Step 3. While (the stopping criterion is not satisfied)
    for i = 1 to m
        for j = 1 to n
            x_i(j) = x_i(j) + α ⊕ Levy(λ)
        end for
        Apply the new position updating formula of ICS (Algorithm 2)
        Repair the illegal individuals and optimize the legal individuals (Algorithm 1)
    end for
Step 4 (keep best solutions). Rank the solutions and find the current best (b_best, f(b_best)); G = G + 1.
Step 5. End while.

Algorithm 3: The main procedure of ICS.
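The term x_i(j) + α ⊕ Levy(λ) in Step 3 requires Lévy-distributed step lengths. The paper does not spell out how these are drawn, but Mantegna's algorithm is the scheme commonly used in cuckoo search implementations [17, 19]; a sketch, with the conventional exponent β = 1.5 and scale α = 0.01 as assumed defaults:

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1),
    where sigma_u is chosen so the ratio follows a Levy(beta) law."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# one component update of Step 3, with an assumed scale alpha = 0.01
alpha = 0.01
x_ij = 1.2
x_ij = x_ij + alpha * levy_step()
```

The heavy tail of the Lévy distribution occasionally produces long jumps, which is what gives the global (Lévy-flight) phase of Step 3 its exploration power.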
Table 1: Experimental results of three algorithms with small KP instances.

Fun   Size   HS         CS         ICS
f1    10     295        295        295
f2    20     1024       1024       1024
f3    4      35         35         35
f4    4      23         23         23
f5    15     481.0694   481.0694   481.0694
f6    10     50         52         52
f7    7      107        107        107
f8    23     9761       9776       9777
f9    5      130        130        130
f10   20     1025       1025       1025
consider twenty 0-1 knapsack problems involving ten small-scale instances, six medium-scale instances, and four large-scale instances. The solution quality and performance are compared with a binary version of HS and a binary version of CS, for simplicity denoted as HS and CS, respectively.

Test problems 1-10 are taken from [12]. Test problems 11 and 13 of He et al. [28] are used in our numerical experiments. Test problem 15 is constructed from test problems 11 and 13. Test problem 16 is generated by Kellerer et al. [34]. Test problems 12 and 14 are generated by Gherboudj et al. [24]. Test problems 17-20 are taken from [12].
All the algorithms were implemented in Visual C++ 6.0. The test environment is a personal computer with an AMD Athlon(tm) II X2 250 processor (3.01 GHz) and 1.75 GB RAM, running Windows XP. Three groups of experiments were performed to assess the efficiency and performance of our algorithm. In all experiments, we set the parameters of CS as follows: p_a = 0.25 and the number of cuckoos is 20. For the HS algorithm, harmony memory size HMS = 5, harmony memory consideration rate HMCR = 0.9, pitch adjusting rate PAR = 0.3, and bandwidth b_w = x^U − x^L. For the ICS algorithm, the number of cuckoos is 20 and the genetic mutation
Figure 2: Best profits (100 items).
probability p_m = 0.15. The experiments on each function were repeated 30 times independently. The quantification of the solutions is tabulated in Tables 1-3.
4.1. Comparison among Three Algorithms on Small Dimension Knapsack Problems. Table 1 shows the experimental results of our ICS algorithm, the HS, and the CS on ten KP tests with different dimensions. Observation of the results presented in Table 1 indicates that the proposed ICS algorithm performs better than the HS algorithm and the CS algorithm on f8. The optimal solution of test problem 8 found by ICS is x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) and f8(x*) = 9777. Additionally, CS and ICS have the same result on f6, which is better than that obtained by the HS algorithm, and the three algorithms have the same results on the other instances. In a word, the solutions obtained by the three algorithms are similar and there is almost no
Table 2: Experimental results of three algorithms with medium KP instances.

Fun   Dim   Algorithm   V_best/C_best   V_worst/C_worst   T_best/T_avg   G_best/G_avg   Mean    Std dev   SR (%)
f11   50    HS          3103/1000       3103/1000         0000007        96498          3103    0         100
f11   50    CS          3103/1000       3097/1000         00151231       6502           3102    2009      95
f11   50    ICS         3103/1000       3103/1000         0000064        222            3103    0         100
f12   80    HS          9201/1505       9199/1505         0000033        1881445        9200    045       95
f12   80    CS          9199/1505       9106/1505         29243776       757980         9176    2627      15
f12   80    ICS         9201/1505       9199/1505         00151359       5315           9200    103       50
f13   100   HS          26559/6717      26559/6717        0000460        15016521       26559   0         100
f13   100   CS          26559/6717      25882/6713        17192565       351529         26447   17858     20
f13   100   ICS         26559/6717      26534/6706        00620999       12182          26558   559       95
f14   120   HS          7393/1109       7393/1109         0000123        523787         7393    0         100
f14   120   CS          7393/1109       7334/1109         13281399       151159         7361    1388      6
f14   120   ICS         7393/1109       7393/1109         03442298       53355          7393    0         100
f15   150   HS          30085/7718      30081/7718        05782429       1614068532     30081   123       10
f15   150   CS          30081/7718      29943/7717        06563142       182872         30072   3213      90
f15   150   ICS         30085/7718      30081/7718        12031843       144215         30082   1642      20
f16   200   HS          88228/71893     88129/71893       09211718       1911535429     88150   4147      25
f16   200   CS          88228/71893     88018/71881       42004225       868872         88160   6742      10
f16   200   ICS         88228/71893     88221/71886       15703479       145323         88225   297       30
Table 3: Experimental results of three algorithms with large KP instances.

Fun   Dim    Algorithm   V_best/C_best   V_worst/C_worst   T_best/G_best   Mean    Std dev
f17   300    HS          14328/1700      14307/1700        339150277       14325   523
f17   300    CS          14306/1696      13027/1698        3422239         14021   27506
f17   300    ICS         14318/1700      14279/1698        2125131         14303   1061
f18   500    HS          16031/2000      15989/1999        182817885       16015   1121
f18   500    CS          16009/2000      15353/2000        2500104         15755   17470
f18   500    ICS         16042/2000      15991/1999        215679          16016   1376
f19   800    HS          39759/5000      39656/5000        381222659       39720   2875
f19   800    CS          38987/4999      38296/4998        321883          38630   19160
f19   800    ICS         39775/4995      39432/4996        396891          39578   8506
f20   1000   HS          67635/10000     67630/10000       31115208        67633   159
f20   1000   CS          66992/9997      66712/9998        01091           66877   7068
f20   1000   ICS         67123/10000     66975/10000       068713          67042   4125
significant difference among the three algorithms. Furthermore, the ICS algorithm does not show its advantages thoroughly. Therefore, in order to further test the performance of the algorithm, we conducted the following experiments in the next subsection.

4.2. Comparison among Three Algorithms on Medium Dimension Knapsack Problems. Figures 2, 3, 4, and 5 show convergence curves of the best profits of the ICS over 30 runs on four test problems with 100, 120, 150, and 200 items. They indicate the global search ability and the convergence ability of the ICS. Several observations are given as follows.

The best profit on the 100-item test problem increases quickly, reaching the approximately optimal profit at nearly one second. Although the algorithm shows a slow evolution only for a moment on 120 items, the best profit is still obtained after about 2.5 seconds. On the 150-item test problem, the best profit increases quickly for more than a second. For the larger 200-item test problem, the best profit also increases very rapidly over 2.5 seconds. The performance of the ICS can be further understood and analyzed from Table 2.

We observe from Table 2 that the ICS has demonstrated an overwhelming advantage over the other two algorithms on solving 0-1 knapsack problems with medium scales. ICS and HS obtained the same optimal solution on all test problems. The CS has the worst performance, and the best solutions found by CS are worse than those obtained by the other two algorithms for f12 and f15. Furthermore, the worst solutions found by the ICS are all better than those obtained by CS. The ICS and the HS obtained the same worst solutions except on f13 and f16, where, unfortunately, the worst solution obtained by ICS cannot exceed that of the HS. The ICS uses less "time" and less "average time" compared with CS for almost all
Figure 3: Best profits (120 items).
Figure 4: Best profits (150 items).
Figure 5: Best profits (200 items).
Figure 6: Comparison of average computation time of the ICS with the HS and the CS.
Figure 7: The performance on function 17.
of the test problems. In addition, the G_best/G_avg of most problems is much smaller than that of the CS and the HS, which shows that the ICS has fast convergence. The SR is more than 95% for almost all problems except f15 and f16; further, the SR for f15 and f16 is slightly higher than that of the other two algorithms, which indicates the high efficiency of the ICS on solving 0-1 knapsack problems. The standard deviation is much smaller than that of the CS, and the difference between ICS and HS is not very large, which indicates the good stability of the ICS and its superior approximation ability.

Figure 6 shows a comparison of the average computation time, in seconds, on f11 to f16 for the proposed
Figure 8: The performance on function 18.
Figure 9: The performance on function 19.
algorithms, the HS and the CS. In terms of the average computation time, Figure 6 shows that the HS algorithm is the best one and the CS algorithm is the worst one. Moreover, ICS converges to the optima faster than CS on most instances.

Although ICS has shown some advantages on solving the 0-1 knapsack problem with medium-scale instances, the optimal solutions obtained by ICS are not very prominent compared with those of the other two algorithms. Therefore, in order to further verify the efficiency of our proposed algorithm, we designed the large-scale knapsack tests as follows.

4.3. Comparison among Three Algorithms on Solving 0-1 Knapsack Problems with Large Dimension. Similar to the results on medium-scale knapsack instances, for the large-scale
Figure 10: The performance on function 20.
knapsack problems, we observe from Table 3 that the ICS algorithm obtains better solutions in a shorter time and has more obvious advantages over the CS algorithm. Regrettably, ICS is slightly inferior to HS in terms of optimal solution quality on functions 17 and 20. In a word, the ICS has demonstrated better performance, and it thus provides an efficient alternative for solving 0-1 knapsack problems.

The convergence curves shown in Figures 7, 8, 9, and 10 similarly establish the fact that ICS is more effective than CS on all four large-scale KP instances. Through careful observation, it can be seen that HS gets outstanding profits in the initial stage of the evolution and the best value in the final population. Compared with HS and ICS, CS obtained the worst mean profits at various stages. ICS and HS have roughly the same convergence speed. In addition, it is obvious to infer that CS and ICS get stuck at local optima quickly, as can be seen from Figure 10; however, HS converges to the global optimum rapidly.
5. Conclusions

In this paper, the ICS algorithm has been proposed, based on the CS framework and a greedy transform method, to solve the 0-1 knapsack problem efficiently. An adaptive step is carefully designed to balance the local search and the global search. The genetic mutation operator helps the algorithm to yield fast convergence and avoid local optima. The simulation results demonstrate that the proposed algorithm has superior performance when compared with HS and CS. The proposed algorithm thus provides a new method for solving 0-1 knapsack problems.

Further studies will focus on two issues. On one hand, we would apply our proposed approach to other combinatorial optimization problems, such as the multidimensional knapsack problem (MKP) and the traveling salesman
problem (TSP). On the other hand, we would examine new meta-hybrids to solve more complicated 0-1 knapsack problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This project is supported by the National Natural Science Foundation of China (Grant no. 10971052).
References

[1] G. B. Dantzig, "Discrete-variable extremum problems," Operations Research, vol. 5, no. 2, pp. 266-288, 1957.
[2] L. Jourdan, M. Basseur, and E.-G. Talbi, "Hybridizing exact methods and metaheuristics: a taxonomy," European Journal of Operational Research, vol. 199, no. 3, pp. 620-629, 2009.
[3] A. V. Cabot, "An enumeration algorithm for knapsack problems," Operations Research, vol. 18, no. 2, pp. 306-311, 1970.
[4] R. J. W. James and Y. Nakagawa, "Enumeration methods for repeatedly solving multidimensional knapsack sub-problems," IEICE Transactions on Information and Systems, vol. 88, no. 10, pp. 2329-2340, 2005.
[5] P. J. Kolesar, "A branch and bound algorithm for the knapsack problem," Management Science, vol. 13, no. 9, pp. 723-735, 1967.
[6] S. Martello, Knapsack Problems: Algorithms and Computer Implementations, John Wiley & Sons, New York, NY, USA, 1990.
[7] F.-T. Lin, "Solving the knapsack problem with imprecise weight coefficients using genetic algorithms," European Journal of Operational Research, vol. 185, no. 1, pp. 133-145, 2008.
[8] J. C. Bansal and K. Deep, "A modified binary particle swarm optimization for knapsack problems," Applied Mathematics and Computation, vol. 218, no. 22, pp. 11042-11061, 2012.
[9] M. Kong, P. Tian, and Y. Kao, "A new ant colony optimization algorithm for the multidimensional knapsack problem," Computers and Operations Research, vol. 35, no. 8, pp. 2672-2683, 2008.
[10] M. H. Kashan, N. Nahavandi, and A. H. Kashan, "DisABC: a new artificial bee colony algorithm for binary optimization," Applied Soft Computing Journal, vol. 12, no. 1, pp. 342-352, 2012.
[11] L. Wang, X. Fu, Y. Mao et al., "A novel modified binary differential evolution algorithm and its applications," Neurocomputing, vol. 98, pp. 55-75, 2012.
[12] D. Zou, L. Gao, S. Li, and J. Wu, "Solving 0-1 knapsack problem by a novel global harmony search algorithm," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1556-1564, 2011.
[13] L. H. Guo, G. G. Wang, H. Wang et al., "An effective hybrid firefly algorithm with harmony search for global numerical optimization," The Scientific World Journal, vol. 2013, Article ID 125625, 9 pages, 2013.
[14] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "An effective krill herd algorithm with migration operator in biogeography-based optimization," Applied Mathematical Modelling, 2013.
[15] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "Stud krill herd algorithm," Neurocomputing, 2013.
[16] G. G. Wang, L. H. Guo, H. Wang et al., "Incorporating mutation scheme into krill herd algorithm for global numerical optimization," Neural Computing and Applications, 2012.
[17] X.-S. Yang and S. Deb, "Cuckoo search via Levy flights," in Proceedings of the IEEE World Congress on Nature and Biologically Inspired Computing (NaBIC '09), pp. 210-214, December 2009.
[18] X.-S. Yang and S. Deb, "Engineering optimisation by cuckoo search," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330-343, 2010.
[19] X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2nd edition, 2010.
[20] K. Chaowanawate and A. Heednacram, "Implementation of cuckoo search in RBF neural network for flood forecasting," in Proceedings of the 4th International Conference on Computational Intelligence, Communication Systems and Networks, pp. 22-26, 2012.
[21] V. R. Chifu, C. B. Pop, I. Salomie, D. S. Suia, and A. N. Niculici, "Optimizing the semantic web service composition process using Cuckoo Search," Intelligent Distributed Computing V, vol. 382, pp. 93-102, 2011.
[22] K. Choudhary and G. N. Purohit, "A new testing approach using cuckoo search to achieve multi-objective genetic algorithm," Journal of Computing, vol. 3, no. 4, pp. 117-119, 2011.
[23] M. Dhivya, M. Sundarambal, and L. N. Anand, "Energy efficient computation of data fusion in wireless sensor networks using cuckoo based particle approach (CBPA)," International Journal of Computer Networks and Security, vol. 4, no. 4, pp. 249-255, 2011.
[24] A. Gherboudj, A. Layeb, and S. Chikhi, "Solving 0-1 knapsack problems by a discrete binary version of cuckoo search algorithm," International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 229-236, 2012.
[25] A Layeb ldquoA novel quantum inspired cuckoo search for knap-sack problemsrdquo International Journal of Bio-Inspired Computa-tion vol 3 no 5 pp 297ndash305 2011
[26] X S Yang and S Deb ldquoCuckoo search recent advances andapplicationsrdquo Neural Computing and Applications 2013
[27] T K Truong K Li and YM Xu ldquoChemical reaction optimiza-tion with greedy strategy for the 0-1 knapsack problemrdquoAppliedSoft Computing vol 13 no 4 pp 1774ndash1780 2013
[28] Y C He X Z Wang and Y Z Kou ldquoA binary differential evo-lution algorithm with hybrid encodingrdquo Journal of ComputerResearch and Development vol 44 no 9 pp 1476ndash1484 2007
[29] E G Talbi Metaheuristics From Design To ImplementationJohn Wiley amp Sons New York NY USA 2009
[30] J Kennedy and R C Eberhart ldquoA discrete binary versionof the particle swarm algorithmrdquo in Proceedings of the IEEEInternational Conference on Computational Cybernetics andSimulation vol 5 pp 4104ndash4108 October 1997
[31] J Zhao T Huang F Pang and Y Liu ldquoGenetic algorithm basedon greedy strategy in the 0-1 knapsack problemrdquo in Proceedingsof the 3rd International Conference on Genetic and EvolutionaryComputing (WGEC rsquo09) pp 105ndash107 October 2009
[32] Y C He K Q Liu and C J Zhang ldquoGreedy genetic algorithmfor solving knapsack problems and its applicationsrdquo ComputerEngineering and Design vol 28 no 11 pp 2655ndash2657 2007
[33] A P Engelbrecht Fundamentals of Computational SwarmIntelligence John Wiley amp Sons Hoboken NJ USA 2009
[34] H Kellerer U Pferschy and D Pisinger Knapsack ProblemsSpringer Berlin Germany 2004
4 Computational Intelligence and Neuroscience
Input: X = (x_1, x_2, ..., x_n) ∈ {0, 1}^n
Step 1 (sort). Sort the items according to the value-to-weight ratio p_i/w_i (i = 1, 2, ..., n)
in descending order, forming a queue s_1, s_2, ..., s_n of length n such that
p_{s_1}/w_{s_1} ≥ p_{s_2}/w_{s_2} ≥ ... ≥ p_{s_n}/w_{s_n}.
Step 2 (repair stage). Set Tempc = 0. For i = 1 to n:
    if (x_{s_i} = 1 and Tempc + w_{s_i} ≤ C) then y_{s_i} = 1; Tempc = Tempc + w_{s_i}
    else y_{s_i} = 0
    end if
end for
Step 3 (optimize stage). For j = 1 to n:
    if (y_{s_j} = 0 and Tempc + w_{s_j} ≤ C) then y_{s_j} = 1; Tempc = Tempc + w_{s_j}
    end if
end for
Output: Y = (y_1, y_2, ..., y_n); the computation is terminated.

Algorithm 1: Greedy transform method.
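A minimal Python sketch of the greedy transform (our reading of Algorithm 1, not the authors' Visual C++ implementation; the instance data below are made up):

```python
def greedy_transform(x, p, w, C):
    """Greedy transform of a 0-1 vector x for a knapsack with profits p,
    weights w, and capacity C: first repair (keep selected items in
    value-to-weight order while capacity allows), then optimize (greedily
    add unselected items that still fit)."""
    n = len(x)
    # Step 1: item indices sorted by p_i / w_i in descending order.
    order = sorted(range(n), key=lambda i: p[i] / w[i], reverse=True)
    y = [0] * n
    temp_c = 0  # accumulated weight of the kept items
    # Step 2 (repair stage): keep x's selected items while capacity allows.
    for i in order:
        if x[i] == 1 and temp_c + w[i] <= C:
            y[i] = 1
            temp_c += w[i]
    # Step 3 (optimize stage): fill the remaining capacity greedily.
    for i in order:
        if y[i] == 0 and temp_c + w[i] <= C:
            y[i] = 1
            temp_c += w[i]
    return y

# An infeasible solution (total weight 14 > C = 9) is repaired to the
# feasible selection of items 0 and 1.
y = greedy_transform([1, 1, 1, 1], p=[10, 7, 4, 3], w=[5, 4, 3, 2], C=9)
# y == [1, 1, 0, 0]
```

Note that the output is always feasible by construction, which is why ICS never has to discard candidates.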
Figure 1: The schematic representation of the confidence interval.
steps are large and most confidence intervals are wide, which is very beneficial for extensive exploration. As the iterations continue, most adaptive steps gradually become small and most confidence intervals narrow accordingly. Thus, the exploitation capability is gradually strengthened.
The purpose of mutation is to introduce new genes so as to increase the diversity of the population. Mutation also helps to balance the contradictory goals of exploration and exploitation. A genetic mutation operation with a small probability is carried out, for it can effectively prevent premature convergence of the ICS. The new position updating formula of ICS is shown in Algorithm 2.
Here, "best" and "worst" are the indexes of the global best cuckoo and the worst cuckoo, respectively, and r, r_1, and rand(·) are uniformly generated random numbers in [0, 1].
Based on the above analyses, the pseudocode of the ICS for 0-1 knapsack problems is given in Algorithm 3.
For i = 1 to n
    step_i = |x_best,i − x_worst,i|
    x_i' = x_best,i ± r * step_i
    if (r_1 ≤ p_m) then x_i' = a_i + rand() * (b_i − a_i)
    end if
end for

Algorithm 2: New position updating formula of ICS.

The time complexity of our proposed algorithm is approximately O(MaxT * m * 2n) + O(n log n), which is still linear in the problem size. Hence the time complexity of the proposed algorithm does not increase in magnitude compared with the original CS algorithm.
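Algorithm 2 can be sketched as follows. The pseudocode does not specify how the ± sign is drawn, so a fair coin flip is assumed here, and a_i, b_i are taken to be the search-space bounds of component i:

```python
import random

def new_position(x_best, x_worst, a, b, p_m):
    """Sketch of Algorithm 2: each component is drawn from a confidence
    interval around the global best, whose half-width step_i adapts to the
    gap between the best and worst cuckoos; with small probability p_m a
    genetic mutation resets the component uniformly in its bounds."""
    n = len(x_best)
    x_new = [0.0] * n
    for i in range(n):
        step = abs(x_best[i] - x_worst[i])             # adaptive step
        r = random.random()
        sign = 1.0 if random.random() < 0.5 else -1.0  # the "±" (assumed fair coin)
        x_new[i] = x_best[i] + sign * r * step
        if random.random() <= p_m:                     # genetic mutation
            x_new[i] = a[i] + random.random() * (b[i] - a[i])
    return x_new
```

Early in a run |x_best − x_worst| tends to be large, so the interval is wide (exploration); as the population converges the interval narrows (exploitation), matching the behaviour described above.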
4 Experimental Results and Analysis
In order to test the optimization ability of ICS and investigate the effectiveness of the algorithms for different instance types, we
Step 1 (sorting). Sort the items according to the value-to-weight ratio p_i/w_i (i = 1, 2, ..., n)
in descending order, forming a queue s_1, s_2, ..., s_n of length n.
Step 2 (initialization). Generate m cuckoo nests randomly: <x_1, b_1>, <x_2, b_2>, ..., <x_m, b_m>.
Calculate the fitness of each individual f(b_i), 1 ≤ i ≤ m, and determine <x_best, b_best>.
Set the generation counter G = 1 and the mutation parameter p_m.
Step 3. While (the stopping criterion is not satisfied)
    for i = 1 to m
        for j = 1 to n
            x_i(j) = x_i(j) + α ⊕ Levy(λ)
            Apply the new position updating formula of ICS (Algorithm 2)
            Repair the illegal individuals and optimize the legal individuals (Algorithm 1)
        end for
    end for
Step 4 (keep best solutions). Rank the solutions and find the current best (b_best, f(b_best)); G = G + 1.
Step 5. End while.

Algorithm 3: The main procedure of ICS.
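The main loop can be sketched end to end. This is a sketch under stated assumptions, not the authors' code: Levy(λ) is approximated by a crude heavy-tailed step, and the hybrid encoding's mapping from the real vector x to the binary solution b is assumed to be a sigmoid rule (a common choice in binary CS variants); every candidate is then made feasible by the greedy repair-and-optimize pass of Algorithm 1:

```python
import math
import random

def ics_sketch(p, w, C, m=20, p_m=0.15, max_gen=50):
    """Skeleton of the ICS main procedure (Algorithm 3), illustrative only."""
    n = len(p)
    order = sorted(range(n), key=lambda i: p[i] / w[i], reverse=True)

    def to_feasible(x):
        # Assumed sigmoid binarization, then the greedy transform (Algorithm 1).
        bits = [1 if random.random() < 1.0 / (1.0 + math.exp(-xj)) else 0 for xj in x]
        y, temp_c = [0] * n, 0
        for i in order:                       # repair stage
            if bits[i] == 1 and temp_c + w[i] <= C:
                y[i], temp_c = 1, temp_c + w[i]
        for i in order:                       # optimize stage
            if y[i] == 0 and temp_c + w[i] <= C:
                y[i], temp_c = 1, temp_c + w[i]
        return y

    nests = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]
    best_b, best_v = None, -1
    for _ in range(max_gen):
        for x in nests:
            for j in range(n):
                # Crude heavy-tailed stand-in for alpha ⊕ Levy(lambda).
                u = max(random.random(), 1e-9)
                x[j] += 0.1 * random.choice((-1.0, 1.0)) * u ** -0.5
                if random.random() <= p_m:    # genetic mutation, simplified
                    x[j] = random.uniform(-5.0, 5.0)
                x[j] = max(-5.0, min(5.0, x[j]))  # keep positions bounded
            b = to_feasible(x)
            v = sum(pi * bi for pi, bi in zip(p, b))
            if v > best_v:
                best_b, best_v = b, v
    return best_b, best_v
```

Because every candidate passes through the greedy transform, the best solution reported is feasible by construction.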
Table 1: Experimental results of three algorithms with small KP instances.

Fun   Size   HS         CS         ICS
f1    10     295        295        295
f2    20     1024       1024       1024
f3    4      35         35         35
f4    4      23         23         23
f5    15     481.0694   481.0694   481.0694
f6    10     50         52         52
f7    7      107        107        107
f8    23     9761       9776       9777
f9    5      130        130        130
f10   20     1025       1025       1025
consider twenty 0-1 knapsack problems involving ten small-scale instances, six medium-scale instances, and four large-scale instances. The solution quality and performance are compared with a binary version of HS and a binary version of CS, denoted for simplicity as HS and CS, respectively.
Test problems 1–10 are taken from [12]. Test problems 11 and 13 of He et al. [28] are used in our numerical experiments. Test problem 15 is constructed from test problems 11 and 13. Test problem 16 is generated by Kellerer et al. [34]. Test problems 12 and 14 are generated by Gherboudj et al. [24]. Test problems 17–20 are taken from [12].
All the algorithms were implemented in Visual C++ 6.0. The test environment is a personal computer with an AMD Athlon(tm) II X2 250 processor (3.01 GHz) and 1.75 GB of RAM, running Windows XP. Three groups of experiments were performed to assess the efficiency and performance of our algorithm. In all experiments, the parameters of CS were set as follows: p_a = 0.25 and the number of cuckoos is 20. For the HS algorithm, the harmony memory size HMS = 5, the harmony memory consideration rate HMCR = 0.9, the pitch adjusting rate PAR = 0.3, and the bandwidth b_w = x^U − x^L. For the ICS algorithm, the number of cuckoos is 20 and the genetic mutation
Figure 2: Best profits (100 items).
probability p_m = 0.15. The experiments on each function were repeated 30 times independently. The quantitative results are tabulated in Tables 1–3.
4.1. Comparison among Three Algorithms on Small Dimension Knapsack Problems. Table 1 shows the experimental results of our ICS algorithm, the HS, and the CS on ten KP tests with different dimensions. The results presented in Table 1 indicate that the proposed ICS algorithm performs better than the HS and CS algorithms on f8. The optimal solution of test problem 8 found by ICS is x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
and f8(x*) = 9777. Additionally, CS and ICS have the same result on f6, which is better than that obtained by the HS algorithm, and the three algorithms have the same results on the other instances. In a word, the solutions obtained by the three algorithms are similar, and there is almost no
Table 2: Experimental results of three algorithms with medium KP instances.

Fun   Dim   Algorithm   Vbest/Cbest   Vworst/Cworst   Tbest/Tavg (s)   Gbest/Gavg   Mean    Std dev   SR (%)
f11   50    HS          3103/1000     3103/1000       0.000/0.007      96498        3103    0         100
            CS          3103/1000     3097/1000       0.015/1.231      6502         3102    2009      95
            ICS         3103/1000     3103/1000       0.000/0.064      222          3103    0         100
f12   80    HS          9201/1505     9199/1505       0.000/0.033      1881445      9200    045       95
            CS          9199/1505     9106/1505       2.924/3.776      757980       9176    2627      15
            ICS         9201/1505     9199/1505       0.015/1.359      5315         9200    103       50
f13   100   HS          26559/6717    26559/6717      0.000/0.460      15016521     26559   0         100
            CS          26559/6717    25882/6713      1.719/2.565      351529       26447   17858     20
            ICS         26559/6717    26534/6706      0.062/0.999      12182        26558   559       95
f14   120   HS          7393/1109     7393/1109       0.000/0.123      523787       7393    0         100
            CS          7393/1109     7334/1109       1.328/1.399      151159       7361    1388      6
            ICS         7393/1109     7393/1109       0.344/2.298      53355        7393    0         100
f15   150   HS          30085/7718    30081/7718      0.578/2.429      1614068532   30081   123       10
            CS          30081/7718    29943/7717      0.656/3.142      182872       30072   3213      90
            ICS         30085/7718    30081/7718      1.203/1.843      144215       30082   1642      20
f16   200   HS          88228/71893   88129/71893     0.921/1.718      1911535429   88150   4147      25
            CS          88228/71893   88018/71881     4.200/4.225      868872       88160   6742      10
            ICS         88228/71893   88221/71886     1.570/3.479      145323       88225   297       30
Table 3: Experimental results of three algorithms with large KP instances.

Fun   Dim    Algorithm   Vbest/Cbest    Vworst/Cworst   Tbest/Gbest   Mean    Std dev
f17   300    HS          14328/1700     14307/1700      339150277     14325   523
             CS          14306/1696     13027/1698      3422239       14021   27506
             ICS         14318/1700     14279/1698      2125131       14303   1061
f18   500    HS          16031/2000     15989/1999      182817885     16015   1121
             CS          16009/2000     15353/2000      2500104       15755   17470
             ICS         16042/2000     15991/1999      215679        16016   1376
f19   800    HS          39759/5000     39656/5000      381222659     39720   2875
             CS          38987/4999     38296/4998      321883        38630   19160
             ICS         39775/4995     39432/4996      396891        39578   8506
f20   1000   HS          67635/10000    67630/10000     31115208      67633   159
             CS          66992/9997     66712/9998      01091         66877   7068
             ICS         67123/10000    66975/10000     068713        67042   4125
significant difference among all the three algorithms. Further, the ICS algorithm does not show its advantages thoroughly here. Therefore, in order to further test the performance of the algorithm, we conducted the experiments described in the next subsection.
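Since the small instances in Table 1 have known optima, the reported values can be checked exactly; a standard O(nC) dynamic-programming sketch (toy data, not the paper's instances):

```python
def knapsack_dp(p, w, C):
    """Exact 0-1 knapsack optimum by dynamic programming over capacities:
    O(n*C) time and O(C) space (pseudo-polynomial)."""
    best = [0] * (C + 1)
    for pi, wi in zip(p, w):
        # Iterate capacities in descending order so each item is used at most once.
        for c in range(C, wi - 1, -1):
            best[c] = max(best[c], best[c - wi] + pi)
    return best[C]

# For this toy instance the optimum is 17 (the items with profits 10 and 7).
opt = knapsack_dp([10, 7, 4, 3], [5, 4, 3, 2], C=9)  # opt == 17
```

Such an exact check is only practical at small scale, which is exactly why the medium and large instances below are compared heuristically.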
4.2. Comparison among Three Algorithms on Medium Dimension Knapsack Problems. Figures 2, 3, 4, and 5 show convergence curves of the best profits of the ICS over 30 runs on four test problems with 100, 120, 150, and 200 items. They indicate the global search ability and the convergence ability of the ICS. Several observations are given as follows.
The best profit of the 100-item test problem increases quickly, reaching the approximately optimal profit at nearly one second. Although the algorithm shows a slow evolution only for a moment on 120 items, the best profit is still obtained after about 2.5 seconds. In the 150-item test problem, the best profit increases quickly for more than a second. For the large 200-item test problem, the best profit also increases very rapidly over 2.5 seconds. The performance of the ICS can be further understood and analyzed from Table 2.
We observe from Table 2 that the ICS has demonstrated an overwhelming advantage over the other two algorithms on solving 0-1 knapsack problems with medium scales. ICS and HS obtained the same optimal solution in all test problems. The CS has the worst performance, and the best solutions found by CS are worse than those obtained by the other two algorithms for f12 and f15. Furthermore, the worst solutions found by the ICS are all better than those obtained by CS. The ICS and the HS obtained the same worst solutions except on f13 and f16, where unfortunately the worst solution obtained by ICS cannot exceed that of the HS. The ICS uses less time and less average time than CS for almost all
Figure 3: Best profits (120 items).

Figure 4: Best profits (150 items).

Figure 5: Best profits (200 items).

Figure 6: Comparison of average computation time of the ICS with the HS and the CS on f11–f16.

Figure 7: The performance on function 17.
of the test problems. In addition, the "Gbest/Gavg" of most problems is much smaller than that of the CS and the HS, which shows that the ICS has fast convergence. "SR" is more than 95% for almost all problems except f15 and f16. Further, "SR" for f15 and f16 is slightly higher than that of the other two algorithms, which indicates the high efficiency of the ICS on solving 0-1 knapsack problems. "Std dev" is much smaller than that of the CS, and the difference between ICS and HS is not very large, which indicates the good stability of the ICS and its superior approximation ability.
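The per-problem statistics reported in Tables 2 and 3 can be reproduced from the 30 run profits; a sketch, with the SR definition (share of runs hitting the known best value) assumed from context:

```python
import statistics

def summarize_runs(profits, optimum):
    """Sketch of the statistics in Tables 2 and 3 (exact definitions assumed
    from context): best/worst profit, mean, population standard deviation,
    and success rate SR as the percentage of runs reaching `optimum`."""
    return {
        "best": max(profits),
        "worst": min(profits),
        "mean": statistics.mean(profits),
        "std_dev": statistics.pstdev(profits),
        "SR": 100.0 * sum(1 for v in profits if v == optimum) / len(profits),
    }

# e.g. three hypothetical runs on f11, two of which reach the optimum 3103:
stats = summarize_runs([3103, 3103, 3097], optimum=3103)
```

Reporting both "Std dev" and "SR" is informative here because a low standard deviation with a low SR would indicate stable convergence to a suboptimal value.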
Figure 6 shows a comparison of the average computation time (in seconds) on f11 to f16 for the proposed
Figure 8: The performance on function 18.

Figure 9: The performance on function 19.
algorithm, the HS, and the CS. In terms of the average computation time, Figure 6 shows that the HS algorithm is the best one and the CS algorithm is the worst one. Moreover, ICS converges to the optima faster than CS on most instances.
Although ICS has shown some advantages in solving the 0-1 knapsack problem on medium-scale instances, the optimal solutions obtained by ICS are not very prominent compared with those of the other two algorithms. Therefore, in order to further verify the efficiency of our proposed algorithm, we designed the large-scale knapsack tests as follows.
4.3. Comparison among Three Algorithms on Solving 0-1 Knapsack Problems with Large Dimension. Similar to the results on the medium-scale knapsack instances, for the large-scale
Figure 10: The performance on function 20.
knapsack problems, we observe from Table 3 that the ICS algorithm obtains better solutions in shorter time and has more obvious advantages over the CS algorithm. Regrettably, ICS is slightly inferior to HS in terms of optimal solution quality on function 17 and function 20. In a word, the ICS has demonstrated better performance, and it thus provides an efficient alternative for solving 0-1 knapsack problems.
Convergence curves shown in Figures 7, 8, 9, and 10 similarly establish the fact that ICS is more effective than CS on all four large-scale KP instances. Through careful observation, it can be seen that HS gains outstanding profits in the initial stage of the evolution and attains the best value in the final population. Compared with HS and ICS, CS obtained the worst mean profits at various stages. ICS and HS have roughly the same convergence speed. In addition, it is obvious that CS and ICS get stuck at local optima quickly, as can be seen from Figure 10, whereas HS converges to the global optimum rapidly.
5 Conclusions
In this paper, the ICS algorithm has been proposed, based on the CS framework and a greedy transform method, to solve the 0-1 knapsack problem efficiently. An adaptive step is carefully designed to balance the local search and the global search. The genetic mutation operator helps the algorithm to yield fast convergence and avoid local optima. The simulation results demonstrate that the proposed algorithm has superior performance when compared with HS and CS. The proposed algorithm thus provides a new method for solving 0-1 knapsack problems.
Further studies will focus on two issues. On the one hand, we would apply our proposed approach to other combinatorial optimization problems, such as the multidimensional knapsack problem (MKP) and the traveling salesman
problem (TSP). On the other hand, we would examine new metaheuristic hybrids to solve 0-1 knapsack problems that are too complicated for a single method.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This project is supported by the National Natural Science Foundation of China (Grant no. 10971052).
References
[1] G. B. Dantzig, "Discrete-variable extremum problems," Operations Research, vol. 5, no. 2, pp. 266–288, 1957.
[2] L. Jourdan, M. Basseur, and E.-G. Talbi, "Hybridizing exact methods and metaheuristics: a taxonomy," European Journal of Operational Research, vol. 199, no. 3, pp. 620–629, 2009.
[3] A. V. Cabot, "An enumeration algorithm for knapsack problems," Operations Research, vol. 18, no. 2, pp. 306–311, 1970.
[4] R. J. W. James and Y. Nakagawa, "Enumeration methods for repeatedly solving multidimensional knapsack sub-problems," IEICE Transactions on Information and Systems, vol. 88, no. 10, pp. 2329–2340, 2005.
[5] P. J. Kolesar, "A branch and bound algorithm for the knapsack problem," Management Science, vol. 13, no. 9, pp. 723–735, 1967.
[6] S. Martello, Knapsack Problem: Algorithms and Computer Implementations, John Wiley & Sons, New York, NY, USA, 1990.
[7] F.-T. Lin, "Solving the knapsack problem with imprecise weight coefficients using genetic algorithms," European Journal of Operational Research, vol. 185, no. 1, pp. 133–145, 2008.
[8] J. C. Bansal and K. Deep, "A modified binary particle swarm optimization for knapsack problems," Applied Mathematics and Computation, vol. 218, no. 22, pp. 11042–11061, 2012.
[9] M. Kong, P. Tian, and Y. Kao, "A new ant colony optimization algorithm for the multidimensional knapsack problem," Computers and Operations Research, vol. 35, no. 8, pp. 2672–2683, 2008.
[10] M. H. Kashan, N. Nahavandi, and A. H. Kashan, "DisABC: a new artificial bee colony algorithm for binary optimization," Applied Soft Computing Journal, vol. 12, no. 1, pp. 342–352, 2012.
[11] L. Wang, X. Fu, Y. Mao et al., "A novel modified binary differential evolution algorithm and its applications," Neurocomputing, vol. 98, pp. 55–75, 2012.
[12] D. Zou, L. Gao, S. Li, and J. Wu, "Solving 0-1 knapsack problem by a novel global harmony search algorithm," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1556–1564, 2011.
[13] L. H. Guo, G. G. Wang, H. Wang et al., "An effective hybrid firefly algorithm with harmony search for global numerical optimization," The Scientific World Journal, vol. 2013, Article ID 125625, 9 pages, 2013.
[14] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "An effective krill herd algorithm with migration operator in biogeography-based optimization," Applied Mathematical Modelling, 2013.
[15] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "Stud krill herd algorithm," Neurocomputing, 2013.
[16] G. G. Wang, L. H. Guo, H. Wang et al., "Incorporating mutation scheme into krill herd algorithm for global numerical optimization," Neural Computing and Applications, 2012.
[17] X.-S. Yang and S. Deb, "Cuckoo search via Levy flights," in Proceedings of the IEEE World Congress on Nature and Biologically Inspired Computing (NABIC '09), pp. 210–214, December 2009.
[18] X.-S. Yang and S. Deb, "Engineering optimisation by cuckoo search," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
[19] X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2nd edition, 2010.
[20] K. Chaowanawate and A. Heednacram, "Implementation of cuckoo search in RBF neural network for flood forecasting," in Proceedings of the 4th International Conference on Computational Intelligence, Communication Systems and Networks, pp. 22–26, 2012.
[21] V. R. Chifu, C. B. Pop, I. Salomie, D. S. Suia, and A. N. Niculici, "Optimizing the semantic web service composition process using cuckoo search," Intelligent Distributed Computing V, vol. 382, pp. 93–102, 2011.
[22] K. Choudhary and G. N. Purohit, "A new testing approach using cuckoo search to achieve multi-objective genetic algorithm," Journal of Computing, vol. 3, no. 4, pp. 117–119, 2011.
[23] M. Dhivya, M. Sundarambal, and L. N. Anand, "Energy efficient computation of data fusion in wireless sensor networks using cuckoo based particle approach (CBPA)," International Journal of Computer Networks and Security, vol. 4, no. 4, pp. 249–255, 2011.
[24] A. Gherboudj, A. Layeb, and S. Chikhi, "Solving 0-1 knapsack problems by a discrete binary version of cuckoo search algorithm," International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 229–236, 2012.
[25] A. Layeb, "A novel quantum inspired cuckoo search for knapsack problems," International Journal of Bio-Inspired Computation, vol. 3, no. 5, pp. 297–305, 2011.
[26] X. S. Yang and S. Deb, "Cuckoo search: recent advances and applications," Neural Computing and Applications, 2013.
[27] T. K. Truong, K. Li, and Y. M. Xu, "Chemical reaction optimization with greedy strategy for the 0-1 knapsack problem," Applied Soft Computing, vol. 13, no. 4, pp. 1774–1780, 2013.
[28] Y. C. He, X. Z. Wang, and Y. Z. Kou, "A binary differential evolution algorithm with hybrid encoding," Journal of Computer Research and Development, vol. 44, no. 9, pp. 1476–1484, 2007.
[29] E. G. Talbi, Metaheuristics: From Design to Implementation, John Wiley & Sons, New York, NY, USA, 2009.
[30] J. Kennedy and R. C. Eberhart, "A discrete binary version of the particle swarm algorithm," in Proceedings of the IEEE International Conference on Computational Cybernetics and Simulation, vol. 5, pp. 4104–4108, October 1997.
[31] J. Zhao, T. Huang, F. Pang, and Y. Liu, "Genetic algorithm based on greedy strategy in the 0-1 knapsack problem," in Proceedings of the 3rd International Conference on Genetic and Evolutionary Computing (WGEC '09), pp. 105–107, October 2009.
[32] Y. C. He, K. Q. Liu, and C. J. Zhang, "Greedy genetic algorithm for solving knapsack problems and its applications," Computer Engineering and Design, vol. 28, no. 11, pp. 2655–2657, 2007.
[33] A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, Hoboken, NJ, USA, 2009.
[34] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.
Computational Intelligence and Neuroscience 5
Step 1 Sorting According to value-to-weight ratio 119901119894119908119894(119894 = 1 2 3 119899) in descending order
a queue 1199041 1199042 119904
119899 of length n is formed
Step 2 Initialization Generate m cuckoo nests randomly ⟨x1 b1⟩⟨x2 b2⟩ ⟨xm bm⟩Calculate the fitness for each individual 119891(bi) 1 le 119894 le 119898 determine ⟨xbest bbest⟩Set the generation counter 119866 = 1 Set mutation parameter 119901
119898
Step 3While (the stopping criterion is not satisfied)for 119894 = 1 to m
for119895 = 1 to nxi(j) = xi(j) + 120572 oplus Levy(120582)Apply new position updating formula of ICS (Algorithm 2)Repair the illegal individuals and optimize the legal individuals (Algorithm 1)
end forend for
Step 4 Keep best solutions Rank the solutions and find the current best (bbest 119891(bbest))119866 = 119866 + 1
Step 5 end while
Algorithm 3 The main procedure of ICS
Table 1 Experimental result of three algorithms with small KPinstances
Fun Size HS CS ICS1198911 10 295 295 2951198912 20 1024 1024 10241198913 4 35 35 351198914 4 23 23 231198915 15 4810694 4810694 48106941198916 10 50 52 521198917 7 107 107 1071198918 23 9761 9776 97771198919 5 130 130 13011989110 20 1025 1025 1025
consider twenty 0-1 knapsack problems involving ten small-scale instances six medium-scale instances and four large-scale instances The solution quality and performance arecompared with binary version HS and binary version CS forsimplicity denoted as HS and CS respectively
Test problems 1ndash10 are taken from [12] Test problems 11and 13 ofHe et al [28] are used in our numerical experimentsTest problem 15 is conducted from test problems 11 and 13Test problem 16 is generated by Kellerer et al [34] Testproblems 12 and 14 are generated by Gherboudj et al [24]Test problems 17ndash20 are taken from [12]
All the algorithms were implemented in Visual C++ 60The test environment is set up on personal computer withAMDAthlon(tm) II X2 250 Processor 301 GHz 175 G RAMrunning on Windows XP Three groups of experiments wereperformed to assess the efficiency and performance of ouralgorithm In all experiments we have set the parameters ofCS as follows 119901
119886= 025 the number of cuckoo is 20 For
the HS algorithm harmony memory size HMS = 5 harmonymemory consideration rate HMCR = 09 pitch adjusting ratePAR = 03 and bandwidth 119887
119908= 119909119880minus 119909119871 For the ICS
algorithm the number of cuckoo is 20 the generic mutation
0 1 2 3 4 5256
257
258
259
26
261
262
263
264
265
266Av
erge
pro
fits
Time (s)
times104
Figure 2 Best profits (100 items)
probability 119901119898= 015 The experiments on each function
were repeated 30 times independently The quantification ofthe solutions is tabulated in Tables 1ndash3
41 Comparison amongThree Algorithms on Small DimensionKnapsack Problems Table 1 shows the experimental resultsof our ICS algorithm the HS and the CS on ten KPtests with different dimension Observation of the presentedresults in Table 1 indicates that the proposed ICS algorithmperforms better than HS algorithm and CS algorithm in 1198918The optimal solution of the test problem 8 found by ICSis 119909lowast = (1 1 1 1 1 1 1 1 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0)
and 1198918 (119909lowast) = 9777 Additionally CS and ICS have thesame results in 1198916 which is better than that obtained by theHS algorithm and three algorithms have the same resultsin the other instances In a word the solutions obtainedby three algorithms are similar and there is almost no
6 Computational Intelligence and Neuroscience
Table 2: Experimental results of the three algorithms on medium KP instances.

| Fun | Dim | Algorithm | Vbest/Cbest | Vworst/Cworst | Tbest/Tavg (s) | Gbest/Gavg | Mean | Std dev | SR (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| f11 | 50 | HS | 3103/1000 | 3103/1000 | 0.000/0.007 | 96498 | 3103 | 0 | 100 |
| | | CS | 3103/1000 | 3097/1000 | 0.015/1.231 | 6502 | 3102 | 20.09 | 95 |
| | | ICS | 3103/1000 | 3103/1000 | 0.000/0.064 | 222 | 3103 | 0 | 100 |
| f12 | 80 | HS | 9201/1505 | 9199/1505 | 0.000/0.033 | 1881445 | 9200 | 0.45 | 95 |
| | | CS | 9199/1505 | 9106/1505 | 2.924/3.776 | 757980 | 9176 | 26.27 | 15 |
| | | ICS | 9201/1505 | 9199/1505 | 0.015/1.359 | 5315 | 9200 | 1.03 | 50 |
| f13 | 100 | HS | 26559/6717 | 26559/6717 | 0.000/0.460 | 15016521 | 26559 | 0 | 100 |
| | | CS | 26559/6717 | 25882/6713 | 1.719/2.565 | 351529 | 26447 | 178.58 | 20 |
| | | ICS | 26559/6717 | 26534/6706 | 0.062/0.999 | 12182 | 26558 | 5.59 | 95 |
| f14 | 120 | HS | 7393/1109 | 7393/1109 | 0.000/0.123 | 523787 | 7393 | 0 | 100 |
| | | CS | 7393/1109 | 7334/1109 | 1.328/1.399 | 151159 | 7361 | 13.88 | 6 |
| | | ICS | 7393/1109 | 7393/1109 | 0.344/2.298 | 53355 | 7393 | 0 | 100 |
| f15 | 150 | HS | 30085/7718 | 30081/7718 | 0.578/2.429 | 1614068532 | 30081 | 1.23 | 10 |
| | | CS | 30081/7718 | 29943/7717 | 0.656/3.142 | 182872 | 30072 | 32.13 | 90 |
| | | ICS | 30085/7718 | 30081/7718 | 1.203/1.843 | 144215 | 30082 | 16.42 | 20 |
| f16 | 200 | HS | 88228/71893 | 88129/71893 | 0.921/1.718 | 1911535429 | 88150 | 41.47 | 25 |
| | | CS | 88228/71893 | 88018/71881 | 4.200/4.225 | 868872 | 88160 | 67.42 | 10 |
| | | ICS | 88228/71893 | 88221/71886 | 1.570/3.479 | 145323 | 88225 | 2.97 | 30 |
Table 3: Experimental results of the three algorithms on large KP instances.

| Fun | Dim | Algorithm | Vbest/Cbest | Vworst/Cworst | Tbest (s)/Gbest | Mean | Std dev |
| --- | --- | --- | --- | --- | --- | --- | --- |
| f17 | 300 | HS | 14328/1700 | 14307/1700 | 3.391/50277 | 14325 | 5.23 |
| | | CS | 14306/1696 | 13027/1698 | 3.422/239 | 14021 | 275.06 |
| | | ICS | 14318/1700 | 14279/1698 | 2.125/131 | 14303 | 10.61 |
| f18 | 500 | HS | 16031/2000 | 15989/1999 | 1.828/17885 | 16015 | 11.21 |
| | | CS | 16009/2000 | 15353/2000 | 2.500/104 | 15755 | 174.70 |
| | | ICS | 16042/2000 | 15991/1999 | 2.156/79 | 16016 | 13.76 |
| f19 | 800 | HS | 39759/5000 | 39656/5000 | 3.812/22659 | 39720 | 28.75 |
| | | CS | 38987/4999 | 38296/4998 | 3.218/83 | 38630 | 191.60 |
| | | ICS | 39775/4995 | 39432/4996 | 3.968/91 | 39578 | 85.06 |
| f20 | 1000 | HS | 67635/10000 | 67630/10000 | 3.111/5208 | 67633 | 1.59 |
| | | CS | 66992/9997 | 66712/9998 | 0.109/1 | 66877 | 70.68 |
| | | ICS | 67123/10000 | 66975/10000 | 0.687/13 | 67042 | 41.25 |
significant difference among the three algorithms. Furthermore, the ICS algorithm does not show its advantages thoroughly on these small instances. Therefore, in order to further test the performance of the algorithm, we conducted the experiments reported in the next subsection.
4.2. Comparison among Three Algorithms on Medium Dimension Knapsack Problems. Figures 2, 3, 4, and 5 show convergence curves of the best profits of the ICS over 30 runs on four test problems with 100, 120, 150, and 200 items. They indicate the global search ability and the convergence ability of the ICS. Several observations follow.

The best profit on the 100-item test problem increases quickly and reaches an approximately optimal profit at nearly one second. Although the algorithm evolves slowly for a moment on the 120-item problem, the best profit is still obtained after about 2.5 seconds. On the 150-item test problem the best profit increases quickly for more than a second. For the large 200-item test problem the best profit also increases very rapidly over 2.5 seconds. The performance of the ICS can be further understood and analyzed from Table 2.

We observe from Table 2 that the ICS demonstrates an overwhelming advantage over the other two algorithms in solving 0-1 knapsack problems of medium scale. ICS and HS obtained the same optimal solution on all test problems. The CS has the worst performance, and the best solutions found by CS are worse than those obtained by the other two algorithms for f12 and f15. Furthermore, the worst solutions found by the ICS are all better than those obtained by CS. The ICS and the HS obtained the same worst solutions except on f13 and f16; unfortunately, on those two problems the worst solutions obtained by ICS do not exceed those of the HS. The ICS uses less time and less average time than CS for almost all
Figure 3: Best profits (120 items); average profits versus time (s).
Figure 4: Best profits (150 items); average profits (×10^4) versus time (s).
Figure 5: Best profits (200 items); average profits (×10^4) versus time (s).
Figure 6: Comparison of the average computation time (s) of the ICS with the HS and the CS on f11–f16.
Figure 7: The performance on function f17; average profits (log scale) versus time (s) for HS, CS, and ICS.
of the test problems. In addition, the "Gbest/Gavg" values of most problems are much smaller than those of the CS and the HS, which shows that the ICS converges quickly. "SR" is more than 95% for almost all problems except f15 and f16; further, the "SR" for f15 and f16 is slightly higher than that of the other two algorithms, which indicates the high efficiency of the ICS in solving 0-1 knapsack problems. "Std dev" is much smaller than that of the CS, and the difference between ICS and HS is small, which indicates the good stability and superior approximation ability of the ICS.
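The per-problem statistics reported in Tables 1–3 (mean profit, standard deviation, and success rate over the 30 independent runs) can be computed as in the following sketch. The paper does not state whether a population or sample standard deviation is used, nor the exact success criterion, so both are assumptions here.

```python
def summarize_runs(best_profits, optimum):
    """Aggregate the best profit of each independent run into the
    quantities reported per algorithm: mean profit, population
    standard deviation ("Std dev"), and success rate ("SR"), taken
    here as the percentage of runs reaching the known optimum."""
    n = len(best_profits)
    mean = sum(best_profits) / n
    std = (sum((p - mean) ** 2 for p in best_profits) / n) ** 0.5
    sr = 100.0 * sum(1 for p in best_profits if p >= optimum) / n
    return mean, std, sr
```

For instance, four runs with best profits 10, 10, 9, 10 against a known optimum of 10 give a mean of 9.75 and an SR of 75%.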
Figure 6 shows a comparison of the average computation time, in seconds, on f11 to f16 for the proposed
Figure 8: The performance on function f18; average profits (log scale) versus time (s) for HS, CS, and ICS.
Figure 9: The performance on function f19; average profits (log scale) versus time (s) for HS, CS, and ICS.
algorithm, the HS, and the CS. In terms of the average computation time, Figure 6 shows that the HS algorithm is the best and the CS algorithm is the worst. Moreover, ICS converges to the optima faster than CS on most instances.

Although ICS has shown some advantages in solving 0-1 knapsack problems on medium-scale instances, the optimal solutions obtained by ICS are not very prominent compared with those of the other two algorithms. Therefore, in order to further verify the efficiency of our proposed algorithm, we designed the large-scale knapsack tests as follows.

4.3. Comparison among Three Algorithms on Solving 0-1 Knapsack Problems with Large Dimension. Similar to the results on the medium-scale knapsack instances, for the large-scale
Figure 10: The performance on function f20; average profits (log scale) versus time (s) for HS, CS, and ICS.
knapsack problems, we observe from Table 3 that the ICS algorithm obtains better solutions in a shorter time and has a more obvious advantage over the CS algorithm. Regrettably, ICS is slightly inferior to HS in terms of optimal solution quality on functions f17 and f20. In a word, the ICS demonstrates better overall performance and thus provides an efficient alternative for solving 0-1 knapsack problems.

The convergence curves shown in Figures 7, 8, 9, and 10 similarly establish that ICS is more effective than CS on all four large-scale KP instances. Careful observation shows that HS attains outstanding profits in the initial stage of the evolution and the best value in the final population. Compared with HS and ICS, CS obtained the worst mean profits at all stages. ICS and HS have roughly the same convergence speed. In addition, as can be seen from Figure 10, CS and ICS get stuck at local optima quickly, whereas HS converges to the global optimum rapidly.
5. Conclusions

In this paper, the ICS algorithm has been proposed, based on the CS framework and a greedy transform method, to solve the 0-1 knapsack problem efficiently. An adaptive step is carefully designed to balance local search and global search. The genetic mutation operator helps the algorithm achieve fast convergence while avoiding local optima. The simulation results demonstrate that the proposed algorithm has superior performance compared with HS and CS. The proposed algorithm thus provides a new method for solving 0-1 knapsack problems.
Further studies will focus on two issues. On the one hand, we will apply our proposed approach to other combinatorial optimization problems, such as the multidimensional knapsack problem (MKP) and the traveling salesman
problem (TSP). On the other hand, we will examine new metaheuristic hybrids for solving more complicated 0-1 knapsack problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This project is supported by the National Natural Science Foundation of China (Grant no. 10971052).
References
[1] G. B. Dantzig, "Discrete-variable extremum problems," Operations Research, vol. 5, no. 2, pp. 266–288, 1957.
[2] L. Jourdan, M. Basseur, and E.-G. Talbi, "Hybridizing exact methods and metaheuristics: a taxonomy," European Journal of Operational Research, vol. 199, no. 3, pp. 620–629, 2009.
[3] A. V. Cabot, "An enumeration algorithm for knapsack problems," Operations Research, vol. 18, no. 2, pp. 306–311, 1970.
[4] R. J. W. James and Y. Nakagawa, "Enumeration methods for repeatedly solving multidimensional knapsack sub-problems," IEICE Transactions on Information and Systems, vol. 88, no. 10, pp. 2329–2340, 2005.
[5] P. J. Kolesar, "A branch and bound algorithm for the knapsack problem," Management Science, vol. 13, no. 9, pp. 723–735, 1967.
[6] S. Martello, Knapsack Problems: Algorithms and Computer Implementations, John Wiley & Sons, New York, NY, USA, 1990.
[7] F.-T. Lin, "Solving the knapsack problem with imprecise weight coefficients using genetic algorithms," European Journal of Operational Research, vol. 185, no. 1, pp. 133–145, 2008.
[8] J. C. Bansal and K. Deep, "A modified binary particle swarm optimization for knapsack problems," Applied Mathematics and Computation, vol. 218, no. 22, pp. 11042–11061, 2012.
[9] M. Kong, P. Tian, and Y. Kao, "A new ant colony optimization algorithm for the multidimensional knapsack problem," Computers and Operations Research, vol. 35, no. 8, pp. 2672–2683, 2008.
[10] M. H. Kashan, N. Nahavandi, and A. H. Kashan, "DisABC: a new artificial bee colony algorithm for binary optimization," Applied Soft Computing Journal, vol. 12, no. 1, pp. 342–352, 2012.
[11] L. Wang, X. Fu, Y. Mao et al., "A novel modified binary differential evolution algorithm and its applications," Neurocomputing, vol. 98, pp. 55–75, 2012.
[12] D. Zou, L. Gao, S. Li, and J. Wu, "Solving 0-1 knapsack problem by a novel global harmony search algorithm," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1556–1564, 2011.
[13] L. H. Guo, G. G. Wang, H. Wang et al., "An effective hybrid firefly algorithm with harmony search for global numerical optimization," The Scientific World Journal, vol. 2013, Article ID 125625, 9 pages, 2013.
[14] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "An effective krill herd algorithm with migration operator in biogeography-based optimization," Applied Mathematical Modelling, 2013.
[15] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "Stud krill herd algorithm," Neurocomputing, 2013.
[16] G. G. Wang, L. H. Guo, H. Wang et al., "Incorporating mutation scheme into krill herd algorithm for global numerical optimization," Neural Computing and Applications, 2012.
[17] X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proceedings of the IEEE World Congress on Nature and Biologically Inspired Computing (NaBIC '09), pp. 210–214, December 2009.
[18] X.-S. Yang and S. Deb, "Engineering optimisation by cuckoo search," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
[19] X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2nd edition, 2010.
[20] K. Chaowanawate and A. Heednacram, "Implementation of cuckoo search in RBF neural network for flood forecasting," in Proceedings of the 4th International Conference on Computational Intelligence, Communication Systems and Networks, pp. 22–26, 2012.
[21] V. R. Chifu, C. B. Pop, I. Salomie, D. S. Suia, and A. N. Niculici, "Optimizing the semantic web service composition process using Cuckoo Search," Intelligent Distributed Computing V, vol. 382, pp. 93–102, 2011.
[22] K. Choudhary and G. N. Purohit, "A new testing approach using cuckoo search to achieve multi-objective genetic algorithm," Journal of Computing, vol. 3, no. 4, pp. 117–119, 2011.
[23] M. Dhivya, M. Sundarambal, and L. N. Anand, "Energy efficient computation of data fusion in wireless sensor networks using cuckoo based particle approach (CBPA)," International Journal of Computer Networks and Security, vol. 4, no. 4, pp. 249–255, 2011.
[24] A. Gherboudj, A. Layeb, and S. Chikhi, "Solving 0-1 knapsack problems by a discrete binary version of cuckoo search algorithm," International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 229–236, 2012.
[25] A. Layeb, "A novel quantum inspired cuckoo search for knapsack problems," International Journal of Bio-Inspired Computation, vol. 3, no. 5, pp. 297–305, 2011.
[26] X. S. Yang and S. Deb, "Cuckoo search: recent advances and applications," Neural Computing and Applications, 2013.
[27] T. K. Truong, K. Li, and Y. M. Xu, "Chemical reaction optimization with greedy strategy for the 0-1 knapsack problem," Applied Soft Computing, vol. 13, no. 4, pp. 1774–1780, 2013.
[28] Y. C. He, X. Z. Wang, and Y. Z. Kou, "A binary differential evolution algorithm with hybrid encoding," Journal of Computer Research and Development, vol. 44, no. 9, pp. 1476–1484, 2007.
[29] E. G. Talbi, Metaheuristics: From Design to Implementation, John Wiley & Sons, New York, NY, USA, 2009.
[30] J. Kennedy and R. C. Eberhart, "A discrete binary version of the particle swarm algorithm," in Proceedings of the IEEE International Conference on Computational Cybernetics and Simulation, vol. 5, pp. 4104–4108, October 1997.
[31] J. Zhao, T. Huang, F. Pang, and Y. Liu, "Genetic algorithm based on greedy strategy in the 0-1 knapsack problem," in Proceedings of the 3rd International Conference on Genetic and Evolutionary Computing (WGEC '09), pp. 105–107, October 2009.
[32] Y. C. He, K. Q. Liu, and C. J. Zhang, "Greedy genetic algorithm for solving knapsack problems and its applications," Computer Engineering and Design, vol. 28, no. 11, pp. 2655–2657, 2007.
[33] A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, Hoboken, NJ, USA, 2009.
[34] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.
6 Computational Intelligence and Neuroscience
Table 2 Experimental result of three algorithms with medium KP instances
Fun Dim Algorithm 119881best119862best 119881worst119862worst 119879best119879avg 119866best119866avg Mean Std dev SR
11989111 50HS 31031000 31031000 0000007 96498 3103 0 100CS 31031000 30971000 00151231 6502 3102 2009 95ICS 31031000 31031000 0000064 222 3103 0 100
11989112 80HS 92011505 91991505 0000033 1881445 9200 045 95CS 91991505 91061505 29243776 757980 9176 2627 15ICS 92011505 91991505 00151359 5315 9200 103 50
11989113 100HS 265596717 265596717 0000460 15016521 26559 0 100CS 265596717 258826713 17192565 351529 26447 17858 20ICS 265596717 265346706 00620999 12182 26558 559 95
11989114 120HS 73931109 73931109 0000123 523787 7393 0 100CS 73931109 73341109 13281399 151159 7361 1388 6ICS 73931109 73931109 03442298 53355 7393 0 100
11989115 150HS 300857718 300817718 05782429 1614068532 30081 123 10CS 300817718 299437717 06563142 182872 30072 3213 90ICS 300857718 300817718 12031843 144215 30082 1642 20
11989116 200HS 8822871893 8812971893 09211718 1911535429 88150 4147 25CS 8822871893 8801871881 42004225 868872 88160 6742 10ICS 8822871893 8822171886 15703479 145323 88225 297 30
Table 3 Experimental results of three algorithms with large KP instances
Fun Dim Algorithm 119881best119862best 119881worst119862worst 119879best119866best Mean Std dev
11989117 300HS 143281700 143071700 339150277 14325 523CS 143061696 130271698 3422239 14021 27506ICS 143181700 142791698 2125131 14303 1061
11989118 500HS 160312000 159891999 182817885 16015 1121CS 160092000 153532000 2500104 15755 17470ICS 160422000 159911999 215679 16016 1376
11989119 800HS 397595000 396565000 381222659 39720 2875CS 389874999 382964998 321883 38630 19160ICS 397754995 394324996 396891 39578 8506
11989120 1000HS 6763510000 6763010000 31115208 67633 159CS 669929997 667129998 01091 66877 7068ICS 6712310000 6697510000 068713 67042 4125
significant difference among all the three algorithms FurtherICS algorithm does not show its advantages thoroughlyTherefore in order to further test the performance of thealgorithm we conducted the following experiments in thenext subsection
42 Comparison amongThree Algorithms onMedium Dimen-sion Knapsack Problems Figures 2 3 4 and 5 show conver-gence curves of the best profits of the ICS over 30 runs on fourtest problemswith 100 120 150 and 200 items It indicates theglobal search ability and the convergence ability of the ICSThere are several observations and they are given as follows
The best profit of 100 items test problem is quicklyincreasing and reaching the approximately optimal profit atnearly one second Although the algorithm shows a slowevolution only for a moment in 120 items the best profitis still obtained after about 25 seconds In the 150 items
test problem the best profit is quickly increasing for morethan a second For the large 200 items test problem thebest profit is also increasing very rapidly over 25 secondsThe performance of the ICS can be further understood andanalyzed from Table 2
We observed from Table 2 that the ICS has demonstratedan overwhelming advantage over the other two algorithms onsolving 0-1 knapsack problems with medium scales ICS andHS obtained the same optimal solution in all test problemsThe CS has the worst performance and the best solutionsfound by CS are worse than those obtained by the other twoalgorithms for11989112 and11989115 Furthermore theworst solutionsfound by the ICS are all better than those obtained by CSThe ICS and the HS obtained the same worst solutions except11989113 and 11989116 Unfortunately the worst solution obtained byICS cannot exceed that of the HS The ICS uses little ldquotimerdquoand little ldquoaverage timerdquo compared with CS for almost all
Computational Intelligence and Neuroscience 7
0 1 2 3 4 57280
7290
7300
7310
7320
7330
7340
7350
7360
7370
Time (s)
Aver
ge p
rofit
s
Figure 3 Best profits (120 items)
0 1 2 3 4 5284
286
288
29
292
294
296
298
3
302
Time (s)
Aver
ge p
rofit
s
times104
Figure 4 Best profits (150 items)
0 1 2 3 4 5879
8795
88
8805
881
8815
882
8825
Aver
ge p
rofit
s
Time (s)
times104
Figure 5 Best profits (200 items)
0
05
1
15
2
25
3
35
4
45
Function
Aver
age t
ime
HSCSICS
f11 f12 f13 f14 f15 f16
Figure 6 Comparison of average computation time of the ICS withthe HS and the CS
0 1 2 3 4 5
Aver
ge p
rofit
s (lo
g)
Time (s)
HSCSICS
10415
10414
10413
10412
10411
1041
10409
10408
Figure 7 The performance on function 17
of the test problems In addition the ldquo119866best119866avgrdquo of mostproblems is much smaller than that of the CS and the HSwhich shows that the ICS has a fast convergence ldquoSRrdquo is morethan 95 for almost all of problems except 11989115 and 11989116Further ldquoSRrdquo for 11989115 and 11989116 is slightly higher than that ofother two algorithms which indicates the high efficiency ofthe ICS on solving 0-1 knapsack problems ldquoStddevrdquo is muchsmaller than that of the CS and the difference is not very poorbetween ICS andHS which indicates the good stability of theICS and superior approximation ability
Figure 6 shows a comparison of average computationtime with 11989111 to 11989116 estimated by seconds for the proposed
8 Computational Intelligence and Neuroscience
0 1 2 3 4 5
Aver
ge p
rofit
s (lo
g)
Time (s)
HSCSICS
10419
10417
10415
10413
10411
Figure 8 The performance on function 18
0 1 2 3 4 5
Aver
ge p
rofit
s (lo
g)
Time (s)
HSCSICS
10459
10458
10457
10456
10455
10454
Figure 9 The performance on function 19
algorithms the HS and the CS In terms of the averagecomputation time Figure 6 shows that the HS algorithm isthe best one and the CS algorithm is the worst oneMoreoverICS converges to the optima faster thanCS onmost instances
Although ICS has shown some advantages on solving 0-1knapsack problem with medium-scale instances howeverthe optimal solution obtained by ICS is not very prominentcompared with other two algorithms Therefore in order tofurther verify the efficiency of our proposed algorithm wedesigned the large scale knapsack tests as follows
43 Comparison amongThreeAlgorithms on Solving 0-1 Knap-sack Problems with Large Dimension Similar to the resultsof medium-scale knapsack instances for the large scale
0 1 2 3 4 5
Aver
ge p
rofit
s (lo
g)
Time (s)
HSCSICS
10483
104829
104828
104827
104826
Figure 10 The performance on function 20
knapsack problem we observe that the ICS algorithm obtainsbetter solutions in shorter time and has more obviousadvantages over the CS algorithm from Table 3 RegrettablyICS is slightly inferior to HS in terms of the optimal solutionquality on function 17 and function 20 In a word the ICShas demonstrated better performance and it thus provides anefficient alternative on solving 0-1 knapsack problems
Convergence curves shown in Figures 7 8 9 and 10similarly establish the fact that ICS is more effective thanCS in all four large-scale KP instances Through carefulobservation it can be seen that HS gets outstanding profitsin the initial stage of the evolution and the best value in thefinal population Compared with HS and ICS CS obtainedthe worst mean profits at various stages ICS and HS haveroughly the same convergence speed In addition it is obviousto infer that CS and ICS get stuck at local optima quickly ascan be seen from Figure 10 However HS converges to theglobal optimum rapidly
5 Conclusions
In this paper the ICS algorithm has been proposed basedon the CS framework and a greedy transition method tosolve 0-1 knapsack problem efficiently An adaptive step iscarefully designed to balance the local search and the globalsearch Genetic mutation operator helps the algorithm toyield fast convergence and avoids local optima The simu-lation results demonstrate that the proposed algorithm hassuperior performance when compared with HS and CS Theproposed algorithm thus provides a new method for solving0-1 knapsack problems
Further studies will focus on the two issues On onehand we would apply our proposed approach on othercombinatorial optimization problems such as multidimen-sional knapsack problem (MKP) and traveling salesman
Computational Intelligence and Neuroscience 9
problem (TSP) On the other hand we would examine newmeta-hybrid to solve 0-1 knapsack problems which are toocomplicated
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
This project is supported by the National Natural ScienceFoundation of China (Grant no 10971052)
References
[1] G B Dantzig ldquoDiscrete-variable extremum problemsrdquo Opera-tions Research vol 5 no 2 pp 266ndash288 1957
[2] L Jourdan M Basseur and E-G Talbi ldquoHybridizing exactmethods and metaheuristics a taxonomyrdquo European Journal ofOperational Research vol 199 no 3 pp 620ndash629 2009
[3] A V Cabot ldquoAn enumeration algorithm for knapsack prob-lemsrdquo Operations Research vol 18 no 2 pp 306ndash311 1970
[4] R J W James and Y Nakagawa ldquoEnumeration methods forrepeatedly solving Multidimensional Knapsack sub-problemsrdquoIEICE Transactions on Information and Systems vol 88 no 10pp 2329ndash2340 2005
[5] P J Kolesar ldquoA branch and bound algorithm for the knapsackproblemrdquoManagement Science vol 13 no 9 pp 723ndash735 1967
[6] SMartelloKnapsack Problem Algorithms andComputer Imple-mentations John Wiley amp Sons New York NY USA 1990
[7] F-T Lin ldquoSolving the knapsack problem with imprecise weightcoefficients using genetic algorithmsrdquo European Journal ofOperational Research vol 185 no 1 pp 133ndash145 2008
[8] J C Bansal and K Deep ldquoA modified binary particle swarmoptimization for knapsack problemsrdquo Applied Mathematics andComputation vol 218 no 22 pp 11042ndash11061 2012
[9] M Kong P Tian and Y Kao ldquoA new ant colony optimizationalgorithm for the multidimensional Knapsack problemrdquo Com-puters and Operations Research vol 35 no 8 pp 2672ndash26832008
[10] M H Kashan N Nahavandi and A H Kashan ldquoDisABC anew artificial bee colony algorithm for binary optimizationrdquoApplied Soft Computing Journal vol 12 no 1 pp 342ndash352 2012
[11] LWang X Fu Y Mao et al ldquoA novel modified binary differen-tial evolution algorithm and its applicationsrdquo Neurocomputingvol 98 pp 55ndash75 2012
[12] D Zou L Gao S Li and J Wu ldquoSolving 0-1 knapsack problemby a novel global harmony search algorithmrdquo Applied SoftComputing Journal vol 11 no 2 pp 1556ndash1564 2011
[13] L H Guo G G Wang H Wang et al ldquoAn effective hybridfirefly algorithm with harmony search for global numericaloptimizationrdquoThe Scientific World Journal vol 2013 Article ID125625 9 pages 2013
[14] G GWang A H Gandomi and A H Alavi ldquoAn effective krillherd algorithmwith migration operator in biogeography-basedoptimizationrdquo Applied Mathematical Modelling 2013
[15] G G Wang A H Gandomi and A H Alavi ldquoStud krill herdalgorithmrdquo Neurocomputing 2013
[16] G G Wang L H Guo H Wang et al ldquoIncorporatingmutation scheme into krill herd algorithm for global numericaloptimizationrdquo Neural Computing and Applications 2012
[17] X-S Yang and S Deb ldquoCuckoo search via Levy flightsrdquo in Pro-ceedings of the IEEE World Congress on Nature and BiologicallyInspired Computing (NABIC rsquo09) pp 210ndash214 December 2009
[18] X-S Yang and S Deb ldquoEngineering optimisation by cuckoosearchrdquo International Journal of Mathematical Modelling andNumerical Optimisation vol 1 no 4 pp 330ndash343 2010
[19] X S Yang Nature-Inspired Metaheuristic Algorithms LuniverPress Frome UK 2nd edition 2010
[20] K Chaowanawate and A Heednacram ldquoImplementation ofcuckoo search in RBF neural network for flood forecastingrdquoin Proceedings of the 4th International Conference on Computa-tional In-telligence Communication Systems and Networks pp22ndash26 2012
[21] V R Chifu C B Pop I Salomie D S Suia and A N NiculicildquoOptimizing the semantic web service composition processusing Cuckoo Searchrdquo Intelligent Distributed Computing V vol382 pp 93ndash102 2011
[22] K Choudhary andGN Purohit ldquoA new testing approach usingcuckoo search to achieve multi-objective genetic algorithmrdquoJournal of Computing vol 3 no 4 pp 117ndash119 2011
[23] MDhivyaM Sundarambal and LN Anand ldquoEnergy efficientcomputation of data fusion in wireless sensor networks usingcuckoo based particle approach (CBPA)rdquo International Journalof Computer Networks and Security vol 4 no 4 pp 249ndash2552011
[24] A Gherboudj A Layeb and S Chikhi ldquoSolving 0-1 knapsackproblems by a discrete binary version of cuckoo search algo-rithmrdquo International Journal of Bio-Inspired Computation vol4 no 4 pp 229ndash236 2012
[25] A Layeb ldquoA novel quantum inspired cuckoo search for knap-sack problemsrdquo International Journal of Bio-Inspired Computa-tion vol 3 no 5 pp 297ndash305 2011
[26] X S Yang and S Deb ldquoCuckoo search recent advances andapplicationsrdquo Neural Computing and Applications 2013
[27] T K Truong K Li and YM Xu ldquoChemical reaction optimiza-tion with greedy strategy for the 0-1 knapsack problemrdquoAppliedSoft Computing vol 13 no 4 pp 1774ndash1780 2013
[28] Y C He X Z Wang and Y Z Kou ldquoA binary differential evo-lution algorithm with hybrid encodingrdquo Journal of ComputerResearch and Development vol 44 no 9 pp 1476ndash1484 2007
[29] E G Talbi Metaheuristics From Design To ImplementationJohn Wiley amp Sons New York NY USA 2009
[30] J Kennedy and R C Eberhart ldquoA discrete binary versionof the particle swarm algorithmrdquo in Proceedings of the IEEEInternational Conference on Computational Cybernetics andSimulation vol 5 pp 4104ndash4108 October 1997
[31] J Zhao T Huang F Pang and Y Liu ldquoGenetic algorithm basedon greedy strategy in the 0-1 knapsack problemrdquo in Proceedingsof the 3rd International Conference on Genetic and EvolutionaryComputing (WGEC rsquo09) pp 105ndash107 October 2009
[32] Y C He K Q Liu and C J Zhang ldquoGreedy genetic algorithmfor solving knapsack problems and its applicationsrdquo ComputerEngineering and Design vol 28 no 11 pp 2655ndash2657 2007
[33] A P Engelbrecht Fundamentals of Computational SwarmIntelligence John Wiley amp Sons Hoboken NJ USA 2009
[34] H Kellerer U Pferschy and D Pisinger Knapsack ProblemsSpringer Berlin Germany 2004
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience 7
Figure 3: Best profits (120 items). Average profits versus time (s).

Figure 4: Best profits (150 items). Average profits (×10⁴) versus time (s).

Figure 5: Best profits (200 items). Average profits (×10⁴) versus time (s).

Figure 6: Comparison of the average computation time of the ICS with the HS and the CS on f11–f16.

Figure 7: The performance on function f17. Average profits (log) versus time (s) for HS, CS, and ICS.
of the test problems. In addition, the "Gbest/Gavg" of most problems is much smaller than that of the CS and the HS, which shows that the ICS converges quickly. "SR" is more than 95% for almost all problems except f15 and f16; further, "SR" for f15 and f16 is slightly higher than that of the other two algorithms, which indicates the high efficiency of the ICS in solving 0-1 knapsack problems. "Std dev" is much smaller than that of the CS, and the difference between the ICS and the HS is small, which indicates the good stability and superior approximation ability of the ICS.
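The statistics discussed above (best value, mean, standard deviation, and success rate over repeated runs) are straightforward to reproduce from raw per-run results. The sketch below is illustrative, not the paper's code; `summarize_runs` and its arguments are assumed names.

```python
import statistics

def summarize_runs(profits, optimum, tol=0):
    """Summarize repeated independent runs of a stochastic KP solver.

    profits: best profit found in each run
    optimum: known optimal profit of the instance
    Returns (best, mean, population std dev, success rate in percent).
    """
    best = max(profits)
    mean = statistics.mean(profits)
    std = statistics.pstdev(profits)
    # a run "succeeds" if it reaches the optimum (within tol)
    sr = 100.0 * sum(p >= optimum - tol for p in profits) / len(profits)
    return best, mean, std, sr

best, mean, std, sr = summarize_runs([100, 100, 98, 100], optimum=100)
print(best, mean, round(std, 3), sr)  # best=100, mean=99.5, std≈0.866, SR=75.0
```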
Figure 6 shows a comparison of the average computation time, measured in seconds, on f11 to f16 for the proposed algorithm, the HS, and the CS. In terms of average computation time, Figure 6 shows that the HS algorithm is the fastest and the CS algorithm is the slowest. Moreover, ICS converges to the optima faster than CS on most instances.

Figure 8: The performance on function f18. Average profits (log) versus time (s) for HS, CS, and ICS.

Figure 9: The performance on function f19. Average profits (log) versus time (s) for HS, CS, and ICS.
Although ICS shows some advantages on medium-scale 0-1 knapsack instances, the optimal solutions it obtains are not markedly better than those of the other two algorithms. Therefore, to further verify the efficiency of the proposed algorithm, we designed the large-scale knapsack tests as follows.
4.3. Comparison among Three Algorithms on Solving 0-1 Knapsack Problems with Large Dimension. Similar to the results on the medium-scale knapsack instances, for the large-scale knapsack problems we observe from Table 3 that the ICS algorithm obtains better solutions in a shorter time and has a more pronounced advantage over the CS algorithm. Regrettably, ICS is slightly inferior to HS in terms of optimal solution quality on functions f17 and f20. In short, the ICS demonstrates better overall performance and thus provides an efficient alternative for solving 0-1 knapsack problems.

Figure 10: The performance on function f20. Average profits (log) versus time (s) for HS, CS, and ICS.
Convergence curves shown in Figures 7, 8, 9, and 10 similarly establish that ICS is more effective than CS on all four large-scale KP instances. Careful observation shows that HS obtains outstanding profits in the initial stage of the evolution and the best value in the final population. Compared with HS and ICS, CS obtained the worst mean profits at all stages, while ICS and HS have roughly the same convergence speed. In addition, as can be seen from Figure 10, CS and ICS quickly become stuck at local optima, whereas HS converges rapidly to the global optimum.
5 Conclusions
In this paper, the ICS algorithm has been proposed based on the CS framework and a greedy transform method to solve the 0-1 knapsack problem efficiently. An adaptive step is carefully designed to balance local search and global search, and a genetic mutation operator helps the algorithm converge quickly while avoiding local optima. The simulation results demonstrate that the proposed algorithm has superior performance compared with HS and CS. The proposed algorithm thus provides a new method for solving 0-1 knapsack problems.
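The greedy transform idea summarized here, dropping low-density items until a bit string is feasible and then greedily refilling spare capacity, can be sketched as follows. This is a minimal illustration of the general repair-and-improve technique for 0-1 knapsack, assuming the usual profit/weight density ordering; the function and argument names are illustrative, not the paper's exact routine.

```python
def greedy_repair(x, profits, weights, capacity):
    """Repair an infeasible 0-1 knapsack solution, then improve it.

    x: list of 0/1 decisions. Selected items are dropped in increasing
    profit/weight density until the weight fits, then unselected items
    are added back greedily, best density first, while capacity allows.
    """
    n = len(x)
    # item indices sorted by profit density, best first
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    x = list(x)
    load = sum(w for w, xi in zip(weights, x) if xi)
    # drop the worst-density selected items until the solution is feasible
    for i in reversed(order):
        if load <= capacity:
            break
        if x[i]:
            x[i] = 0
            load -= weights[i]
    # greedily add unselected items that still fit
    for i in order:
        if not x[i] and load + weights[i] <= capacity:
            x[i] = 1
            load += weights[i]
    return x

# example: capacity 10, initial selection is overweight (total weight 15)
print(greedy_repair([1, 1, 1], profits=[10, 4, 7], weights=[6, 5, 4], capacity=10))
# -> [1, 0, 1]: the lowest-density item is dropped, profit 17 at weight 10
```

Because the same routine also improves feasible inputs (the second loop fills unused capacity), it can serve both roles mentioned in the abstract: repairing infeasible solutions and optimizing feasible ones.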
Further studies will focus on two issues. On one hand, we will apply the proposed approach to other combinatorial optimization problems, such as the multidimensional knapsack problem (MKP) and the traveling salesman problem (TSP). On the other hand, we will examine new hybrid metaheuristics for solving more complicated 0-1 knapsack problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This project is supported by the National Natural Science Foundation of China (Grant no. 10971052).
References
[1] G. B. Dantzig, "Discrete-variable extremum problems," Operations Research, vol. 5, no. 2, pp. 266–288, 1957.

[2] L. Jourdan, M. Basseur, and E.-G. Talbi, "Hybridizing exact methods and metaheuristics: a taxonomy," European Journal of Operational Research, vol. 199, no. 3, pp. 620–629, 2009.

[3] A. V. Cabot, "An enumeration algorithm for knapsack problems," Operations Research, vol. 18, no. 2, pp. 306–311, 1970.

[4] R. J. W. James and Y. Nakagawa, "Enumeration methods for repeatedly solving multidimensional knapsack sub-problems," IEICE Transactions on Information and Systems, vol. 88, no. 10, pp. 2329–2340, 2005.

[5] P. J. Kolesar, "A branch and bound algorithm for the knapsack problem," Management Science, vol. 13, no. 9, pp. 723–735, 1967.

[6] S. Martello, Knapsack Problems: Algorithms and Computer Implementations, John Wiley & Sons, New York, NY, USA, 1990.

[7] F.-T. Lin, "Solving the knapsack problem with imprecise weight coefficients using genetic algorithms," European Journal of Operational Research, vol. 185, no. 1, pp. 133–145, 2008.

[8] J. C. Bansal and K. Deep, "A modified binary particle swarm optimization for knapsack problems," Applied Mathematics and Computation, vol. 218, no. 22, pp. 11042–11061, 2012.

[9] M. Kong, P. Tian, and Y. Kao, "A new ant colony optimization algorithm for the multidimensional knapsack problem," Computers and Operations Research, vol. 35, no. 8, pp. 2672–2683, 2008.

[10] M. H. Kashan, N. Nahavandi, and A. H. Kashan, "DisABC: a new artificial bee colony algorithm for binary optimization," Applied Soft Computing Journal, vol. 12, no. 1, pp. 342–352, 2012.

[11] L. Wang, X. Fu, Y. Mao et al., "A novel modified binary differential evolution algorithm and its applications," Neurocomputing, vol. 98, pp. 55–75, 2012.

[12] D. Zou, L. Gao, S. Li, and J. Wu, "Solving 0-1 knapsack problem by a novel global harmony search algorithm," Applied Soft Computing Journal, vol. 11, no. 2, pp. 1556–1564, 2011.

[13] L. H. Guo, G. G. Wang, H. Wang et al., "An effective hybrid firefly algorithm with harmony search for global numerical optimization," The Scientific World Journal, vol. 2013, Article ID 125625, 9 pages, 2013.

[14] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "An effective krill herd algorithm with migration operator in biogeography-based optimization," Applied Mathematical Modelling, 2013.

[15] G. G. Wang, A. H. Gandomi, and A. H. Alavi, "Stud krill herd algorithm," Neurocomputing, 2013.

[16] G. G. Wang, L. H. Guo, H. Wang et al., "Incorporating mutation scheme into krill herd algorithm for global numerical optimization," Neural Computing and Applications, 2012.

[17] X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proceedings of the IEEE World Congress on Nature and Biologically Inspired Computing (NaBIC '09), pp. 210–214, December 2009.

[18] X.-S. Yang and S. Deb, "Engineering optimisation by cuckoo search," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.

[19] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2nd edition, 2010.

[20] K. Chaowanawate and A. Heednacram, "Implementation of cuckoo search in RBF neural network for flood forecasting," in Proceedings of the 4th International Conference on Computational Intelligence, Communication Systems and Networks, pp. 22–26, 2012.

[21] V. R. Chifu, C. B. Pop, I. Salomie, D. S. Suia, and A. N. Niculici, "Optimizing the semantic web service composition process using cuckoo search," Intelligent Distributed Computing V, vol. 382, pp. 93–102, 2011.

[22] K. Choudhary and G. N. Purohit, "A new testing approach using cuckoo search to achieve multi-objective genetic algorithm," Journal of Computing, vol. 3, no. 4, pp. 117–119, 2011.

[23] M. Dhivya, M. Sundarambal, and L. N. Anand, "Energy efficient computation of data fusion in wireless sensor networks using cuckoo based particle approach (CBPA)," International Journal of Computer Networks and Security, vol. 4, no. 4, pp. 249–255, 2011.

[24] A. Gherboudj, A. Layeb, and S. Chikhi, "Solving 0-1 knapsack problems by a discrete binary version of cuckoo search algorithm," International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 229–236, 2012.

[25] A. Layeb, "A novel quantum inspired cuckoo search for knapsack problems," International Journal of Bio-Inspired Computation, vol. 3, no. 5, pp. 297–305, 2011.

[26] X.-S. Yang and S. Deb, "Cuckoo search: recent advances and applications," Neural Computing and Applications, 2013.

[27] T. K. Truong, K. Li, and Y. M. Xu, "Chemical reaction optimization with greedy strategy for the 0-1 knapsack problem," Applied Soft Computing, vol. 13, no. 4, pp. 1774–1780, 2013.

[28] Y. C. He, X. Z. Wang, and Y. Z. Kou, "A binary differential evolution algorithm with hybrid encoding," Journal of Computer Research and Development, vol. 44, no. 9, pp. 1476–1484, 2007.

[29] E.-G. Talbi, Metaheuristics: From Design to Implementation, John Wiley & Sons, New York, NY, USA, 2009.

[30] J. Kennedy and R. C. Eberhart, "A discrete binary version of the particle swarm algorithm," in Proceedings of the IEEE International Conference on Computational Cybernetics and Simulation, vol. 5, pp. 4104–4108, October 1997.

[31] J. Zhao, T. Huang, F. Pang, and Y. Liu, "Genetic algorithm based on greedy strategy in the 0-1 knapsack problem," in Proceedings of the 3rd International Conference on Genetic and Evolutionary Computing (WGEC '09), pp. 105–107, October 2009.

[32] Y. C. He, K. Q. Liu, and C. J. Zhang, "Greedy genetic algorithm for solving knapsack problems and its applications," Computer Engineering and Design, vol. 28, no. 11, pp. 2655–2657, 2007.

[33] A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, Hoboken, NJ, USA, 2009.

[34] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.