

Research Article
Solving the Traveling Salesman's Problem Using the African Buffalo Optimization

Julius Beneoluchi Odili and Mohd Nizam Mohmad Kahar

Faculty of Computer Systems & Software Engineering, Universiti Malaysia Pahang, 26300 Kuantan, Malaysia

Correspondence should be addressed to Julius Beneoluchi Odili; odili_julest@yahoo.com

Received 16 April 2015; Revised 2 August 2015; Accepted 27 August 2015

Academic Editor: Hong Man

Copyright © 2015 J. B. Odili and M. N. Mohmad Kahar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper proposes the African Buffalo Optimization (ABO), a new metaheuristic algorithm derived from careful observation of African buffalos, a species of wild cows, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search of food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 benchmark symmetric Traveling Salesman's Problem instances and six difficult asymmetric instances from the TSPLIB. This study shows that the buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of their previous personal exploits, as well as tapping into the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained by using other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive.

1. Introduction

The growing need for profit maximization and cost minimization has never been greater in human history than it is today. This need has made optimization a highly favoured area of scientific investigation and has led to the design of a number of optimization algorithms. Some of the most popular are Particle Swarm Optimization [1], Ant Colony Optimization [2], the Genetic Algorithm [3], and the Artificial Bee Colony [4], among many others. However, these algorithms do have some drawbacks, ranging from premature convergence [5], delay in obtaining results, easily getting stuck in local minima, and complicated fitness functions to having many parameters that require setting up [6]. An attempt to proffer solutions to some of these weaknesses is the motivation for the development of the African Buffalo Optimization (ABO).

The ABO is a population-based stochastic optimization technique inspired by the behaviour of African buffalos, a species of wild cows similar to their domestic counterparts, which navigate their way through thousands of kilometres in the African rain forests and savannahs by moving together in a large herd of, sometimes, over a thousand buffalos. Their migration is driven by their search for lush grazing lands. They tend to track the movement of the rainy seasons, when they can get lush grazing pastures. As the seasons differ from location to location in the vast African landscape, the buffalos are always mobile in pursuit of their target pastures. In ABO, our interest is in how the buffalos are able to organize themselves in searching the solution space with two basic modes of communication: the alarm "waaa" sound, which indicates the presence of danger or a lack of good grazing fields and so asks the animals to explore other locations that may hold greater promise, and the alert "maaa" sound, which indicates a favourable grazing area and tells the animals to stay on to exploit the available resources.

The Traveling Salesman Problem. The Traveling Salesman's Problem (TSP) is the problem faced by a salesman who, starting from a particular town, has the assignment of finding the shortest possible round trip through a given set of customer towns or cities. The salesman has a mandate to visit each city once before finally returning to the starting town/city.

Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Article ID 929547

The Travelling Salesman's Problem can be represented by a complete weighted graph G = (V, E), with V being the set of n nodes (cities) and E being the set of edges fully connecting the nodes in the graph G. In this graph, each edge (i, j) ∈ E is given a weight d_ij, which represents the distance between cities i and j. It may be important to emphasize that the distance between towns/cities may be symmetric (where the distances between the cities are the same whether going to or returning from the towns/cities) or asymmetric (where, due to some restrictions, possibly one-way lanes or other reasons, the distance from city A to city B may not be the same as the return distance). The basic minimization equation for the TSP is as follows: given n towns and their coordinates, find an integer permutation π = {C_1, C_2, C_3, ..., C_n}, with C_n being city "n", that minimizes the sum of the distances [7, 8]:

    f(π) = ∑_{i=1}^{n−1} d(C_i, C_{i+1}) + d(C_n, C_1).   (1)

Here d(C_i, C_{i+1}) represents the distance between city i and city i+1, and d(C_n, C_1) is the distance between city n and city 1. The aim is to find the shortest path between the adjacent cities in the form of a vector of cities.
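Equation (1) translates directly into code. The sketch below (illustrative Python, with made-up coordinates, not code from the paper) computes the length of a closed tour from a distance matrix:

```python
import math

def tour_length(tour, dist):
    """Total length of the closed tour: the sum of d(C_i, C_i+1) over the
    tour plus the return edge d(C_n, C_1), as in equation (1)."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Illustrative 4-city symmetric instance (coordinates are made up).
coords = [(0, 0), (0, 3), (4, 3), (4, 0)]
dist = [[math.dist(a, b) for b in coords] for a in coords]

length = tour_length([0, 1, 2, 3], dist)  # perimeter of the rectangle: 14.0
```

For an asymmetric instance, the same function applies unchanged; only the `dist` matrix loses its symmetry.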

This paper is organized in this way: Section 1 introduces the African Buffalo Optimization and the Travelling Salesman's Problem; Section 2 presents a brief review of relevant literature; Section 3 discusses the basic flow of the African Buffalo Optimization (ABO) algorithm; Section 4 is concerned with how the ABO solves the TSP; Section 5 deals with the experiments and discussion of the results on the symmetric TSP instances; Section 6 is concerned with the experiments on the asymmetric TSP instances and the discussion of the results; Section 7 examines the performance of ABO vis-a-vis Neural Networks methods; Section 8 draws conclusions on the study.

2. Literature Review

Nature-inspired algorithms (NAs) draw their inspiration from the complex but highly organised attributes of nature. Nature-inspired algorithms, which are generally stochastic, became the hotbed of several researchers due to the inefficiencies of the exact optimization algorithms (as the problem size enlarges), like Linear Programming [9], Dynamic Programming [10], finite element [11], and finite volume methods. In general, NAs simulate the interplay and sometimes interdependence of natural elements such as plants, animals, and rivers. The most popular class of algorithms among the NAs are the biology-inspired algorithms. A handful of other NAs, however, are inspired by Chemistry or Physics; these include the Harmony Search (HS) algorithm, Intelligent Water Drop (IWD), Simulated Annealing (SA), and Black Hole (BH) [12]. In this study, our interest is in the biology-inspired optimization algorithms.

Biology-inspired algorithms (BAs) can be categorised into three broad classes, namely, Evolutionary Algorithms (EAs), which are inspired by natural evolution; Swarm Intelligence (SI), which simulates the collective behavior in plants and animals; and, thirdly, Ecology algorithms, which are concerned with simulating the inter- and intracooperative or competitive interactions within the ecosystem [13].

The Evolutionary Algorithms (EAs) generally simulate the iterative progress comprising growth, development, reproduction, selection, and survival as seen in a population. EAs are concerned with the feasible solutions that emanate iteratively from generation to generation towards the best solution. EAs employ fitness-based selection within the population of solutions in such a way that fitter solutions are selected for survival into the next generation of solutions. In this category of EAs are Genetic Programming (GP), the Paddy Field Algorithm (PFA), Evolutionary Strategies (ES), the Genetic Algorithm (GA), and Differential Evolution (DE) [12]. It may be necessary to state that experts have differing opinions on the classification of Differential Evolution as an Evolutionary Algorithm. The classification of DE as an Evolutionary Algorithm stems from its use of "Evolution" as one of its parameters (just like other EAs); a critical expert may not actually class Differential Evolution as being bioinspired [14].

The second category of BAs is the Swarm Intelligence (SI) algorithms, which are concerned with the collective social behavior of organisms. The motivation of Swarm Intelligence is the collective intelligence of groups of simple agents, such as insects, fishes, birds, bacteria, worms, and other animals, based on their behavior in real life. As simple as these animals are, they are able to exhibit exceptional intelligence whenever they work collectively as a group. These algorithms track the collective behavior of animals that exhibit decentralized, self-organized patterns in their foraging duties. Examples of these algorithms are Bee Colony Optimization (BCO), the Firefly Algorithm (FFA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Artificial Bee Colony (ABC), the Bacteria Foraging Algorithm (BFA), and so on [15].

The third category of BAs is the Ecology algorithms, which are concerned with the numerous inter- or intraspecies competitive or cooperative interactions within natural ecosystems. The ecosystem is made up of living organisms along with their abiotic environment, such as water, air, and soil, with which the organisms interact. Cooperation among organisms includes division of labor and represents the core of their sociality. Some of the interactions are cooperative and others are competitive, leading to a complex and harmonious balancing within the ecosystem. Algorithms in this category are PS2O, Invasive Weed Colony Optimization, and biogeography-based optimization [16].

The development of the African Buffalo Optimization (ABO) is in response to the observed lapses in the existing algorithms. The ABO belongs to the Swarm Intelligence class of algorithms, based on the social behavior of animals, with the aim of achieving greater exploitation and exploration of the search space, ease of use, and faster speed in achieving the optimal result through its use of relatively few parameters in solving combinatorial optimization problems.

2.1. Ant Colony Optimization (ACO). The Ant Colony Optimization algorithm is a population-based optimization technique developed by Marco Dorigo and has been successfully applied to solve several NP-hard combinatorial optimization problems. This algorithm was inspired by the behavior of ant colonies, especially by their foraging behavior in real life. Usually, ants, when leaving their nests, move randomly around the areas surrounding their nests in search of food. If any of the ants comes across food, it first collects some pieces of the food and, on its way back to the nest, deposits a chemical substance called pheromone as a way of communicating to its peers that there has been a breakthrough. Other nearby ants, on perceiving the fragrance of the pheromone, understand and move towards the pheromone path. Once they discover the food source, they in turn drop fresh pheromone as a way of alerting other ants. In a matter of time, several ants pick up this information and are on the pheromone path.

Another interesting part of the ants' behavior is that, as they return to the nest, they optimize their route. In a short while, the ants create a shorter route to the food source than the previous routes. Moreover, in case an obstruction is put on the shorter route, making movement impossible, the ants are able to find another short route among the available options to evade the obstacle. The highlights of this algorithm include tapping into the indirect communication of a colony of (artificial) ants using pheromone trails as a means of communication, tracking their cooperative ability to solve a complex problem, and harnessing their capacity to optimize their routes from the food source to the nest and vice versa.

There have been several modifications of the ant colony algorithms, starting from the initial Ant System (AS), to the Ant Colony System (ACS), to the Max-Min Ant System (MMAS), and then to the Ant Colony Optimization (ACO) algorithm, and so forth [17]. In ACO, a colony of ants in each iteration constructs a solution probabilistically as ant k at node i selects the next node j to move to. The choice of node is influenced by the pheromone trail value τ_ij(t) and the available heuristic η_ij. In the TSP, η_ij = 1/d_ij. So an ant moves from location i to location j with the probability

    P^k_ij(t) = [τ_ij(t)]^α [η_ij]^β / ∑_{l∈N^k_i} [τ_il(t)]^α [η_il]^β   if j ∈ N^k_i,
    P^k_ij(t) = 0   otherwise.   (2)

Here τ_ij(t) represents the pheromone trail, η_ij represents the local heuristic information, t represents the iteration, N^k_i represents the nodes ant k can go to, and α and β are parameters that bias the pheromone trails. By the end of an iteration, the pheromone trail on each edge ij is updated using the following equation:

    τ_ij(t+1) = (1 − ρ) τ_ij(t) + Δτ^best_ij(t).   (3)

In (3), τ_ij(t+1) represents the pheromone trail in iteration t+1, and ρ takes values from 0.1 to 0.9. The amount of pheromone deposited by the best ant is represented by

    Δτ^best_ij(t) = 1 / f(s^best(t))   if the best ant used edge ij in iteration t,
    Δτ^best_ij(t) = 0   otherwise.   (4)

In (4), f(s^best(t)) represents the cost of the best solution, s^best(t).
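As a sketch of how (2), (3), and (4) operate together, the following illustrative Python (not the paper's code; the α, β, and ρ values are assumptions) computes an ant's transition probabilities and applies the best-ant pheromone update:

```python
def transition_probs(i, feasible, tau, eta, alpha=1.0, beta=2.0):
    """Equation (2): each feasible node j is weighted by
    tau_ij^alpha * eta_ij^beta and normalised over the neighbourhood."""
    weights = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in feasible}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def update_pheromone(tau, best_tour, best_cost, rho=0.5):
    """Equations (3) and (4): evaporate every trail by (1 - rho), then
    deposit 1/f(s_best) on each edge used by the best ant's tour."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1 - rho)
    deposit = 1.0 / best_cost
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += deposit

# Tiny 3-city illustration: eta_ij = 1/d_ij as in the TSP case.
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
eta = [[0 if i == j else 1.0 / dist[i][j] for j in range(3)] for i in range(3)]
tau = [[1.0] * 3 for _ in range(3)]

probs = transition_probs(0, [1, 2], tau, eta)  # nearer city 1 is favoured
update_pheromone(tau, [0, 1, 2], best_cost=2.0)
```

With uniform initial trails, the probabilities are driven entirely by the heuristic term, so the closer city receives the larger share; after the update, only edges on the best tour keep their full trail strength.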

A critical examination of the Ant Colony Optimization technique of solving optimization problems reveals that there is little or no similarity between the ACO's search mechanism and that of the ABO. This could be due to their application of different search techniques in arriving at solutions: while the ACO employs a path construction technique, the ABO favours a path improvement search mechanism.

2.2. Particle Swarm Optimization. Particle Swarm Optimization, which was inspired by the social behavior of bird flocking or fish schooling, is one of the biology-inspired computation techniques developed by Eberhart and Kennedy [18]. This algorithm obtains solutions using a swarm of particles, where each particle represents a candidate solution. When compared to evolutionary computation paradigms, a swarm is similar to a population and a particle represents an individual. In searching for a solution, the particles are flown through a multidimensional search space, where the position of each particle is adjusted according to its own experience and that of its neighbors. The velocity vector drives the optimization process as well as reflecting each particle's experience and the information exchange within its neighborhood. Just like the ABO, PSO has two controlling equations in its search for a solution, namely (5) and (6):

    v^{t+1}_ij = χ (v^t_ij + φ1 U1 (b^t_ij − x^t_ij) + φ2 U2 (b^t_(n)ij − x^t_ij)),   (5)

where v^{t+1}_ij represents the present velocity, v^t_ij is the previous velocity, χ is the constriction factor, φ1 and φ2 are the acceleration coefficients, U1 and U2 are random numbers, b^t_ij is the individual particle's best position, x^t_ij is the present position, and b^t_(n)ij is the swarm's best position. The next equation in PSO, which calculates the position of the swarm, is

    x^{t+1}_ij = x^t_ij + v^{t+1}_ij.   (6)

In the PSO algorithm, the particles move in the domain of an objective function f: Θ ⊆ R^n, where n represents the number of variables to be optimized. Each particle, at a particular iteration, is associated with three vectors:

(a) Present position, denoted by x. This vector records the present position of a particular particle.

(b) Present velocity, denoted by v. This vector stores the particle's direction of search.

(c) Individual's best position, denoted by b. This vector records the particular particle's best solution so far since the search began (since the beginning of the algorithm's execution). In addition to these, the individual particles relate with the best particle in the swarm, which the PSO algorithm tracks in each iteration, to help direct the search to promising areas [19].
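A minimal sketch of equations (5) and (6) in Python (illustrative, not the authors' code; χ, φ1, and φ2 are set to common constriction-coefficient defaults rather than values from the paper):

```python
import random

def pso_step(x, v, b, b_swarm, chi=0.729, phi1=2.05, phi2=2.05):
    """One particle update: equation (5) builds the new velocity from the
    old velocity plus random attractions toward the personal best b and
    the swarm best b_swarm; equation (6) then moves the particle."""
    u1, u2 = random.random(), random.random()
    v_new = [chi * (vi + phi1 * u1 * (bi - xi) + phi2 * u2 * (gi - xi))
             for xi, vi, bi, gi in zip(x, v, b, b_swarm)]
    x_new = [xi + vni for xi, vni in zip(x, v_new)]
    return x_new, v_new

# When a particle already sits on both bests, only the damped old
# velocity remains: v_new = chi * v.
x_new, v_new = pso_step([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0])
```

This makes the contrast with ABO concrete: the random factors u1 and u2 and the constriction factor χ have no counterparts in the ABO update discussed later.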

2.3. Artificial Bee Colony. This algorithm, which is inspired by the behavior of the natural honey bee swarm, was proposed by Karaboga and Akay in 2009 [20]. It searches for solutions through the use of three classes of bees: scout bees, onlooker bees, and employed bees. Scout bees are those that fly over the search space in search of solutions (food sources). The onlooker bees, on the other hand, are the ones that stay in the nest waiting for the report of the scout bees, while the employed bees refer to the class of bees which, after watching the waggle dance of the scout bees, opt to join in harvesting the food source (exploitation). A particular strength of this algorithm lies in its bee transformation capabilities. For instance, a scout bee could transform to an employed bee once it (the same scout bee) is involved in harvesting the food source, and vice versa.

Generally, the bees can change their statuses depending on the needs of the algorithm at a particular point in time. In this algorithm, the food source represents a solution to the optimization problem. The volume of nectar in a food source represents the quality (fitness) of the solution. Moreover, each employed bee is supposed to exploit only one food source, meaning that the number of employed bees is the same as the number of food sources. The scout bees are always exploring for new food sources v_m with higher nectar quantity and/or quality (fitness) within the neighbourhood. The bees evaluate the nectar fitness using

    v_mi = x_mi + φ_mi (x_mi − x_ki),   (7)

where i is a randomly chosen parameter index, φ_mi is a random number within a given range, and x_ki is a food source. The quality (fitness) of a solution, fit_m(x_m), is calculated using the following equation:

    fit_m(x_m) = 1 / (1 + f_m(x_m))   if f_m(x_m) ≥ 0,
    fit_m(x_m) = 1 + |f_m(x_m)|   if f_m(x_m) < 0.   (8)
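The two ABC formulas can be sketched as follows (illustrative Python, not the authors' code; the range of φ is an assumption):

```python
import random

def candidate(x_m, x_k, i):
    """Equation (7): perturb dimension i of food source x_m using a
    random phi and a randomly chosen partner source x_k.
    The phi range [-1, 1] is an assumed, commonly used choice."""
    phi = random.uniform(-1.0, 1.0)
    v = list(x_m)
    v[i] = x_m[i] + phi * (x_m[i] - x_k[i])
    return v

def fitness(f_value):
    """Equation (8): map an objective value to a positive fitness."""
    if f_value >= 0:
        return 1.0 / (1.0 + f_value)
    return 1.0 + abs(f_value)

v = candidate([1.0, 2.0], [0.0, 0.0], 0)  # only dimension 0 changes
```

Note that only one randomly chosen dimension is perturbed per candidate, which is what keeps each employed bee's search local to its current food source.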

From the foregoing discussion, it is clear that there is a slight similarity between (5) in PSO and (9) in ABO, since each algorithm subtracts a variable from the personal and individual bests of the particles/buffalos. For PSO, the subtracted variable is the present position, and for ABO it is the immediate-past explored location (the waaa values, w_k). However, the two equations are different in several respects: while the PSO uses χ (the constriction factor) or ω (the inertia factor in some versions of the PSO), there are no such equivalents in ABO. Moreover, while the PSO employs random numbers (U1 and U2), ABO does not use random numbers, only learning parameters. Also, PSO uses acceleration coefficients (φ1 and φ2); ABO does not. In the case of the ABC, even though it employs the same search technique in arriving at solutions, the algorithm procedures are quite different.

2.4. Information Propagation. In searching for solutions to an optimization problem, the ACO employs the path construction technique, while the PSO, ABC, and ABO use the path improvement technique. However, while the PSO uses the Von Neumann topology (see Figure 1) as its best technique for information propagation [21], the ACO obtains good results using the ring topology [22], and the ABO uses the star topology, which connects all the buffalos together. The Von Neumann topology enables the particles to connect to neighboring particles on the east, west, north, and south. Effectively, a particular particle relates with the four other particles surrounding it. The ABO employs the star topology such that a particular buffalo is connected to every other buffalo in the herd. This enhances ABO's information dissemination at any particular point in time.
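The three topologies can be made concrete with small neighbourhood functions (an illustrative sketch; the agent indexing scheme is an assumption):

```python
def ring_neighbours(k, n):
    """Ring (used by ACO here): each agent exchanges information with
    the two agents adjacent to it."""
    return [(k - 1) % n, (k + 1) % n]

def star_neighbours(k, n):
    """Star as described for ABO: every buffalo is connected to every
    other buffalo in the herd."""
    return [j for j in range(n) if j != k]

def von_neumann_neighbours(k, rows, cols):
    """Von Neumann (used by PSO here): agents on a wrap-around grid
    talk to their north, south, west, and east neighbours."""
    r, c = divmod(k, cols)
    return [((r - 1) % rows) * cols + c, ((r + 1) % rows) * cols + c,
            r * cols + (c - 1) % cols, r * cols + (c + 1) % cols]
```

The trade-off follows directly: the star topology spreads the herd best to all n buffalos in one step, while ring and Von Neumann neighbourhoods propagate information more slowly but keep subgroups exploring independently for longer.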

3. African Buffalo Optimization Algorithm

In using the ABO to proffer solutions in the search space, the buffalos are first initialized within the herd population and are made to search for the global optimum by updating their locations as they follow the current best buffalo, bgmax, in the herd. Each buffalo keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far. This value is called bpmax_k, representing the best location of the particular buffalo in relation to the optimal solution. The ABO algorithm follows this pattern: at each step, it tracks the dynamic location of each buffalo towards bpmax_k and bgmax, depending on where the emphasis is placed at a particular iteration. The speed of each animal is influenced by the learning parameters.

3.1. ABO Algorithm. The ABO algorithm is presented below:

(1) Initialization: randomly place the buffalos at nodes in the solution space.

(2) Update the buffalos' fitness values using

    m_{k+1} = m_k + lp1 (bgmax − w_k) + lp2 (bpmax_k − w_k),   (9)

where w_k and m_k represent the exploration and exploitation moves, respectively, of the kth buffalo (k = 1, 2, ..., N); lp1 and lp2 are learning factors; bgmax is the herd's best fitness; and bpmax_k is the individual buffalo k's best found location.

(3) Update the location of buffalo k (bpmax_k and bgmax) using

    w_{k+1} = (w_k + m_k) / (±0.5).   (10)

(4) Is bgmax updating? Yes: go to (5). No: go to (2).

(5) If the stopping criteria are not met, go back to step (3); else go to (6).

(6) Output the best solution.
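Steps (2) and (3) can be sketched for a one-dimensional buffalo as follows (an illustrative reading, not the authors' code: lp1 and lp2 take the values given later in Section 4, the denominator lam alternates between +0.5 and −0.5 across iterations, and applying (10) to the just-updated move value is one plausible reading of the step order):

```python
def abo_step(m, w, bp_max, bg_max, lp1=0.6, lp2=0.4, lam=0.5):
    """Equation (9): democratic fitness update pulling the buffalo toward
    the herd best bg_max and its own best bp_max.
    Equation (10): movement update with the alternating +/-0.5
    denominator, here passed in as lam."""
    m_new = m + lp1 * (bg_max - w) + lp2 * (bp_max - w)
    w_new = (w + m_new) / lam
    return m_new, w_new

m_new, w_new = abo_step(m=1.0, w=0.0, bp_max=2.0, bg_max=3.0)
```

Worked through by hand: m_new = 1.0 + 0.6(3.0 − 0.0) + 0.4(2.0 − 0.0) = 3.6, and w_new = (0.0 + 3.6)/0.5 = 7.2, so both the herd's and the buffalo's own experience pull the move in the same direction here.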

A closer look at the ABO algorithm reveals that (9), which shows the democratic attitude of the buffalos, has three parts: the first, m_k, represents the memory of the buffalo's past location. The buffalo has an innate memory ability that enables it to tell where it has been before. This is crucial in its search for solutions, as it helps it to avoid areas that produced bad results. The memory of each buffalo is a list of solutions that can be used as an alternative for the current local maximum location. The second, lp1(bgmax − w_k), is concerned with the caring or cooperative part of the buffalos and is a pointer to the buffalo's social and information-sharing experience, and the third part, lp2(bpmax_k − w_k), indicates the intelligence part of the buffalos. So the ABO exploits the memory, caring, and intelligence capabilities of the buffalos in the democratic equation (9). Similarly, (10) is the waaa vocalization equation that propels the animals to move on and explore other environments, as the present area has been fully exploited or is unfavourable for further exploration and exploitation.

Figure 1: Information propagation topologies (star, ring, and Von Neumann).

3.2. Highlights of the ABO Algorithm. They are as follows:

(1) Stagnation handling through regular update of the best buffalo, bgmax, in each iteration.

(2) Use of relatively few parameters to ensure speed and fast convergence.

(3) A very simple algorithm that requires fewer than 100 lines of code in any programming language.

(4) Ensuring adequate exploration by tracking the location of the best buffalo (bgmax) and each buffalo's personal best location (bpmax_k).

(5) Ensuring adequate exploitation through tapping into the exploits of other buffalos, lp1(bgmax − w_k).

3.3. Initialization. The initialization phase is done by randomly placing the kth buffalo in the solution space. For initialization, some prior knowledge of the problem can help the algorithm to converge in fewer iterations.

3.4. Update Speed and Location. In each iteration, each buffalo updates its location according to its former maximum location (bpmax) and some information gathered from the exploits of the neighboring buffalos. This is done using (9) and (10) (refer to the ABO algorithm, steps (2) and (3), above). This enables the algorithm to track the movement of the buffalos in relation to the optimal solution.

4. Using ABO to Solve the Travelling Salesman's Problem

The ABO has the advantage of using very simple steps to solve complex optimization problems such as the TSP. The basic solution steps are as follows:

(a) Choose, according to some criterion, a start city for each of the buffalos and randomly locate them in those cities. Consider

    P_ab = (w^{lp1}_ab · m^{lp2}_ab) / ∑_{i=1}^{n} (w^{lp1}_ab · m^{lp2}_ab),   ab = ±0.5.   (11)

(b) Update buffalo fitness using (9) and (10), respectively.

(c) Determine bpmax_k and bgmax.

(d) Using (11) and heuristic values, probabilistically construct a buffalo tour by adding cities that the buffalos have not visited.

(e) Is bgmax updating? Yes: go to (f). No: go to (a).

(f) Are the exit criteria reached? Yes: go to (g). No: return to (b).

(g) Output the best result.

Here lp1 and lp2 are learning parameters and are 0.6 and 0.4, respectively; ab takes the values 0.5 and −0.5 in alternate iterations; m is the positive reinforcement alert invitation, which tells the animals to stay to exploit the environment since there are enough pastures; and w is the negative reinforcement alarm, which tells the animals to keep on exploring the environment since the present location is not productive. For buffalo k, the probability p_k of moving from city j to city k depends on the combination of two values, namely, the desirability of the move, as computed by some heuristic indicating the previous attractiveness of that move, and the summative benefit of the move to the herd, indicating how productive it has been in the past to make that particular move. The denominator values represent an indication of the desirability of that move.
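Step (d)'s probabilistic tour construction with equation (11) can be sketched like this (illustrative Python, not the paper's implementation; the per-edge w and m matrices and their values are assumptions, taken as positive):

```python
import random

def next_city(current, unvisited, w, m, lp1=0.6, lp2=0.4):
    """Equation (11): weight each candidate edge by w^lp1 * m^lp2 and
    sample a city in proportion to its share of the total weight.
    The w and m entries are assumed positive per-edge scores."""
    weights = [(j, (w[current][j] ** lp1) * (m[current][j] ** lp2))
               for j in unvisited]
    total = sum(wt for _, wt in weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for j, wt in weights:
        acc += wt
        if acc >= r:
            return j
    return weights[-1][0]  # guard against floating-point shortfall

# With a single unvisited city the choice is forced.
chosen = next_city(0, [1], [[0, 2], [2, 0]], [[0, 3], [3, 0]])
```

This is the standard roulette-wheel selection pattern: edges that have been both attractive (w) and productive (m) in the past receive proportionally more of the probability mass.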


Table 1: TSPLIB datasets.

1st experimental datasets: Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, Tsp225.
2nd experimental datasets: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, Brd14051.
ATSP datasets: Ry48p, Ft70, Kro124p, Ftv70, P43, Ftv170.
NN comparative datasets: Eil51, Eil76, Eil101, Berlin52, Bier127, Ch130, Ch150, Rd100, Lin105, Lin318, KroA100, KroA200, KroB100, KroB150, KroB200, KroC100, KroD100, KroE100, Rat575, RL1323, FL1400, D1655.

4.1. ABO Solution Mechanism for the TSP. The ABO applies the modified Karp-Steele algorithm in its solution of the Travelling Salesman's Problem [23]. It follows a simple solution procedure: first, construct a cycle factor F of the cheapest weight in the graph K. Next, select a pair of edges taken from different cycles of the graph K and patch them in a way that will result in a minimum weight. Patching is simply removing the selected edges in the two cycle factors and then replacing them with cheaper edges, in this way forming a larger cycle factor and thus reducing the number of cycle factors in graph K by one. Thirdly, the second step is repeated until we arrive at a single cycle factor in the entire graph K. This technique fits the path-improvement technique description [24]. ABO overcomes the problem of delay in this process through the use of two primary parameters to ensure speed, namely, lp1 and lp2, coupled with the algorithm keeping track of the route of the bg_max as well as bp_max.k in each construction step.
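One patching step of the kind described above can be sketched as follows (a minimal Python sketch, our own naming; it exhaustively tries every pair of removable edges from the two cycles, which is sufficient for illustration but not the paper's optimized implementation):

```python
def patch_cycles(cycle_a, cycle_b, dist):
    # Merge two cycle factors into one larger cycle: drop one edge from
    # each cycle and reconnect the open ends, keeping the pairing whose
    # merged cycle is cheapest. dist[u][v] is the (possibly asymmetric)
    # edge weight.
    best_cost, best_cycle = float("inf"), None
    na, nb = len(cycle_a), len(cycle_b)
    for i in range(na):
        for j in range(nb):
            # drop edges (a_i, a_{i+1}) and (b_j, b_{j+1}),
            # add (a_i, b_{j+1}) and (b_j, a_{i+1})
            merged = (cycle_a[:i + 1] + cycle_b[j + 1:] + cycle_b[:j + 1]
                      + cycle_a[i + 1:])
            cost = sum(dist[merged[k]][merged[(k + 1) % len(merged)]]
                       for k in range(len(merged)))
            if cost < best_cost:
                best_cost, best_cycle = cost, merged
    return best_cycle, best_cost
```

Repeating this step until one cycle remains yields a single tour over all cities, which is the path-improvement behavior the section describes.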

5. Experiments and Discussion of Results

In this study, the ABO was implemented on three sets of symmetric TSP datasets and a set of asymmetric TSP (ATSP) instances, ranging from 48 to 14461 cities, from TSPLIB95 [25]. The first experiment was concerned with comparing the performance of ABO on TSP instances with the results obtained from a recent study [26] involving Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, and Tsp225. The second set of experiments was concerned with testing ABO's performance against another recently published study [27] on Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The third experiment examined ABO's performance on asymmetric TSP instances. The fourth set of experiments involved comparing ABO results with those obtained using some popular Artificial Neural Network methods [28]. The TSP benchmark datasets are presented in Table 1.

5.1. Experimental Parameters Setting. For the sets of experiments involving PSO, the parameters are as follows: population size, 200; iteration (T_max), 1000; inertia weight, 0.85; C1, 2; C2, 2; rand1(0, 1); rand2(0, 1). For the HPSACO, the experimental parameters are as follows: population, 200; iteration (T_max), 1000; inertia weight, 0.85; C1, 2; C2, 2; ants (N), 100; pheromone factor (α), 1.0; heuristic factor (β), 2.0; evaporation coefficient (ρ), 0.05; pheromone amount, 100. For the experiments involving the other algorithms in this study, Table 2 contains the details of the parameters used.

5.2. Experiment Analysis and Discussion of Results. The experiments were performed using MATLAB on an Intel Core i7-3770 CPU at 3.40 GHz with 4 GB RAM. The experiments on the asymmetric Travelling Salesman's Problems were executed on a Pentium Dual-Core 1.80 GHz processor with 2 GB RAM. Similarly, the experiments on the Artificial Neural Networks were executed using Microsoft Visual C++ 2008 on an Intel Core i7 CPU. To authenticate the veracity of the ABO algorithm in solving the TSP, we initially carried out experiments on eight TSP cases. The city-coordinates data are available in [29]. The results are presented in Table 3.

In Table 3, the "Average Value" refers to the average fitness of each algorithm, and the "relative error" values are obtained by calculating

Relative error = ((Average value − Best value) / Best value) × 100.  (12)

As can be seen in Table 3, the ABO outperformed the other algorithms in all test instances under investigation. The ABO, for instance, obtained the optimal solution to Berlin52 and Eil76; no other algorithm did. Besides this, the ABO obtained the nearest-optimal solution to the remaining TSP instances compared to any other algorithm. In terms of the average results obtained by each algorithm, the ABO still has the best performance. It is rather surprising that the Hybrid Algorithm (HA), which uses a memory matrix similar to the ABO's, could not post a better result. This is traceable to its use of several parameters, since the HA is a combination of the ACO and the ABC, two algorithms that have some of the largest numbers of parameters to tune in order to obtain good solutions.

The dominance of the ABO can also be seen in the use of computer resources (CPU time), where it is clearly seen that the ABO is the fastest of all four algorithms. In Berlin52, for instance, the ABO is 58,335 times faster than the ACO, 1,085 times faster than the ABC, and 30,320 times faster than the Hybrid Algorithm (HA). This trend continues in all the


Table 2: Experimental parameters (total number of runs: 50 per algorithm).

ABO: population, 40; m.k, 1.0; bg_max/bp_max, 0.6; lp1/lp2, 0.5; w.k, 1.0.
ACO: ants, D*; β, 5.0; ρ, 0.65; α, 1.0; O, 200; q0, 0.9.
ABC: population, D*; φij, rand(−1, 1); ωij, rand(0, 1.5); SN, NP/2; limit, D*·SN; max cycle number, 500; colony, 50.
HA: population, D*; β, 5.0; ρ, 0.65; α, 1.0; φij, rand(−1, 1); ωij, rand(0, 1.5); SN, NP/2; limit, D*·SN; max cycle number, 500; colony, 50; O, 200; q0, 0.9.
CGAS: generation, 100; β, 2.0; ρ, 0.1; Ro, 0.33; crossover rate, 1.0; q0, 0.9; φr, 0.3; φρ, 0.2; τmin, τmax/20; τmax, 1 − (1 − ρ).

Table 3: Comparative experimental results (Best, Mean, Rel. err. (%), Time (s)).

Berlin52 (52 cities, optimum 7542):
ABO: 7542, 7616, 0, 0.002. ACO: 7548.99, 7659.31, 1.52, 116.67. ABC: 9479.11, 10390.26, 37.72, 2.17. HA: 7544.37, 7544.37, 0.03, 60.64.

St70 (70 cities, optimum 675):
ABO: 676, 678.33, 0.15, 0.08. ACO: 696.05, 709.16, 4.73, 226.06. ABC: 1162.12, 1230.49, 81.73, 3.15. HA: 687.24, 700.58, 3.47, 115.65.

Eil76 (76 cities, optimum 538):
ABO: 538, 563.04, 0, 0.03. ACO: 554.46, 561.98, 3.04, 271.98. ABC: 877.28, 931.44, 70.78, 3.49. HA: 551.07, 557.98, 2.31, 138.82.

Pr76 (76 cities, optimum 108159):
ABO: 108167, 108396, 0.007, 0.08. ACO: 115166.66, 116321.22, 7.55, 272.41. ABC: 195198.9, 205119.61, 89.65, 3.50. HA: 113798.56, 115072.29, 6.39, 138.92.

KroA100 (100 cities, optimum 21282):
ABO: 21311, 22163.8, 0.4, 0.00. ACO: 22455.89, 22880.12, 7.49, 615.06. ABC: 49519.51, 53840.03, 152.94, 5.17. HA: 22122.75, 22435.31, 5.40, 311.12.

Eil101 (101 cities, optimum 629):
ABO: 640, 640, 1.7, 0.027. ACO: 678.04, 693.42, 7.96, 527.42. ABC: 1237.31, 1315.95, 104.88, 5.17. HA: 672.71, 683.39, 6.39, 267.08.

Ch150 (150 cities, optimum 6528):
ABO: 6532, 6601, 0.06, 0.032. ACO: 6648.51, 6702.87, 2.61, 1387.65. ABC: 20908.89, 21617.48, 230.93, 8.95. HA: 6641.69, 6677.12, 2.21, 698.61.

Tsp225 (225 cities, optimum 3916):
ABO: 3917, 3982, 0.03, 0.09. ACO: 4112.35, 4176.08, 8.22, 4038.75. ABC: 16998.41, 17955.12, 365.27, 16.68. HA: 4090.54, 4157.85, 7.74, 2037.33.


Table 4: Comparative optimal results (Best, Avg, Err (%)).

att48 (optimum 33522): ABO 33524, 33579, 0.16. PSO 33734, 33982, 0.63. ACO 33649, 33731, 0.62. HPSACO 33524, 33667, 0.16.
st70 (675): ABO 676, 678.33, 0.15. PSO 691.2, 702.6, 2.40. ACO 685.7, 694.7, 1.59. HPSACO 680.3, 698.6, 0.79.
eil76 (538): ABO 538, 563.04, 0.00. PSO 572.3, 589.1, 6.38. ACO 550.7, 560.4, 2.36. HPSACO 546.2, 558.1, 1.52.
pr152 (73682): ABO 73730, 73990, 0.07. PSO 75361, 75405, 2.28. ACO 74689, 74936, 1.37. HPSACO 74165, 74654, 0.66.
gil262 (2378): ABO 2378, 2386, 0.00. PSO 2513, 2486, 5.68. ACO 2463, 2495, 3.57. HPSACO 2413, 2468, 1.47.
rd400 (15281): ABO 15301, 15304, 5.00. PSO 16964, 17024, 11.01. ACO 16581, 16834, 8.51. HPSACO 16067, 16513, 5.14.
pr1002 (259045): ABO 259132, 261608, 0.03. PSO 278923, 279755, 7.67. ACO 269758, 271043, 4.14. HPSACO 267998, 269789, 3.46.
d1291 (50801): ABO 50839, 50839, 0.07. PSO 53912, 54104, 6.12. ACO 52942, 53249, 4.21. HPSACO 52868, 52951, 4.07.
fnl4461 (182566): ABO 182745, 183174, 0.10. PSO 199314, 199492, 9.17. ACO 192964, 194015, 5.70. HPSACO 191352, 192585, 4.81.
brd14051 (469385): ABO 469835, 479085, 0.10. PSO 518631, 519305, 10.49. ACO 505734, 511638, 7.74. HPSACO 498471, 503594, 6.20.

test cases under investigation. To solve all the TSP problems here, it took ABO a cumulative time of 0.279 seconds, to ACO's 7456 seconds, ABC's 43.11 seconds, and HA's 3362.27 seconds. The speed of the ABO is traceable to its effective memory-management technique, since the ABO uses the path-improvement technique as against the ACO, which uses the slower path-construction method. The difference in speed from the ABC, which uses a technique similar to ABO's, is due to the ABC's use of several parameters. The speed of the HA could have been affected by its combination of the path-construction and path-improvement techniques, coupled with the use of several parameters.

5.3. ABO and Other Algorithms. Encouraged by the extremely good performance of the ABO in the first experiments, more experiments were performed and the results obtained were compared with those from PSO, ACO, and HPSACO from another recently published study [27]. The HPSACO is a combination of three techniques, namely, the Collaborative Strategy, Particle Swarm Optimization, and the Ant Colony Optimization algorithms. The datasets used in this experiment are from TSPLIB95: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The results are presented in Table 4.

Table 4 further underscores the excellent performance of ABO. The ABO obtained the optimal solution to Eil76 and Gil262 and very near-optimal solutions in the rest of the test cases. It is obvious from the relative error calculations that the ABO obtained over 99% accuracy in all the TSP instances here except the difficult Rd400, where it obtained 95%. It is worthy of note that the 95% accuracy of the ABO is still superior to the performance of the other comparative algorithms in this TSP instance. The cumulative relative error of the ABO is 5.68%, to the PSO's 61.83%, the ACO's 39.81%, and the HPSACO's 28.28%. Clearly, from this analysis, the ABO has a superior search capability. This is traceable to its use of relatively fewer parameters than most other Swarm Intelligence techniques. The controlling parameters of the ABO are just the learning parameters (lp1 and lp2).

In designing the ABO, the authors deliberately tried to avoid the "Frankenstein phenomena" [30], that is, the practice of developing intricate techniques with so many parameters/operators that the individual contributions of some of those parameters/operators to the workings of the algorithm are difficult to pinpoint. The ability to achieve this "lean metaheuristic" design (which is what we tried to do in designing the ABO) is a mark of good algorithm design [30].

6. ABO on Asymmetric TSP Test Cases

Moreover, we experimented on some ATSP test cases to verify the performance of the novel algorithm on such cases. The results obtained from this experiment using ABO are compared with the results obtained from solving some benchmark asymmetric TSP instances available in TSPLIB95 using the Randomized Arbitrary Insertion algorithm (RAI) [31], Ant Colony System 3-opt (ACS), Min-Max Ant System (MMAS), and Iterated Local Search (ILS), which is reported to be one of the best methods for solving TSP problems. These ATSP instances are ry48p, ft70, kro124p, ftv70, p43, and ftv170 [32].

Table 5 shows the comparative results obtained by applying five different algorithms to a number of benchmark asymmetric TSP instances, with the aim of evaluating the performance of the African Buffalo Optimization (ABO) algorithm. The first column lists the ATSP instances as recorded in the TSPLIB; the second column indicates the number of cities/locations/nodes involved in each instance; the third indicates the optimal solutions; these are followed by the performances of the different algorithms in at most 50 test runs.

A quick look at the table shows the excellent performance of the ILS and the MMAS, with both obtaining 100% results in the only four cases available in the literature that they solved. The ABO performed very well, achieving over 99% in all test cases but one. The ABO has approximately the same performance as RAI and the ACS, obtaining about 99% optimal results in virtually all cases. It is significant to note, however, that the ABO performed better than ACS and RAI in ft70, which is said to be a difficult ATSP instance to solve in spite of its relatively small size [33].

Next, the authors examined the speed with which each algorithm achieves its result, since one of the aims of the ABO is to


Table 5: Comparative results (Best, Avg, Rel. error (%); "—" means not reported).

Ry48p (48 cities, optimum 14422): ABO 14440, 14455, 0.12. RAI 14422, 14543.20, 0. MMAS 14422, 14422, 0. ACS 14422, 14565.45, 0. ILS 14422, 14422, 0.
Ft70 (70 cities, optimum 38673): ABO 38753, 38870.5, 0.21. RAI 38855, 39187.75, 0.47. MMAS 38673, 38687, 0. ACS 38781, 39099.05, 0.28. ILS 38673, 38687, 0.
Kro124p (100 cities, optimum 36230): ABO 36275, 36713, 0.12. RAI 36241, 36594.23, 0.04. MMAS 36230, 36542, 0. ACS 36241, 36857, 0.04. ILS 36230, 36542, 0.
Ftv70 (71 cities, optimum 1950): ABO 1955, 1958.5, 0.26. RAI 1950, 1968.44, 0. MMAS —. ACS —. ILS —.
P43 (43 cities, optimum 5620): ABO 5645, 5698, 0.44. RAI 5620, 5620.65, 0. MMAS —. ACS —. ILS —.
Ftv170 (171 cities, optimum 2755): ABO 2795, 2840.5, 1.45. RAI 2764, 2832.74, 0.33. MMAS 2755, 2755, 0. ACS 2774, 2826, 0.69. ILS 2755, 2756, 0.


Table 6: Comparative speed of algorithms (average time in seconds).

Ry48p (48 cities): ABO 0.07; MIMM-ACO 7.83; MMAS 7.97; CGAS 12.35; RAI 15.98.
Ft70 (70): ABO 0.05; MIMM-ACO 9.85; MMAS 10.15; CGAS 15.32; RAI 70.68.
Kro124p (100): ABO 0.08; MIMM-ACO 33.25; MMAS 23.4; CGAS 78.52; RAI 30.34.
Ftv70 (71): ABO 0.09; MIMM-ACO 64.53; MMAS 61.25; CGAS 69.64; RAI 73.76.
P43 (43): ABO 0.01; MIMM-ACO 8.35; MMAS 9.38; CGAS 0.997; RAI 0.997.
Ftv170 (171): ABO 0.65; MIMM-ACO 108.28; MMAS 96.73; CGAS 27.61; RAI 27.61.

solve the problem of delay in achieving optimal solutions, since speed is one of the hallmarks of a good algorithm [34]. In Table 6, we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Min-Max Ant System (MMAS), Cooperative Genetic Ant Systems (CGAS) [35], and RAI.

Table 6 displays the exceptional capacity of the novel African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions in record time. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, to MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 4529.27 seconds, and RAI's 32.34 seconds. Undoubtedly, the ABO outperformed the other algorithms in its quest to obtain results using very limited CPU time and resources. This is attributable to ABO's use of very few parameters, coupled with its straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization to the known solutions of some popular Neural Network algorithms. These are Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. Comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions in Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method in Berlin52. Aside from these, ABO outperformed the other methods in getting near-optimal solutions in Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroB150, KroA200, KroB200, Rat575, Rat783, Rl1323, Fl1400, and D1655. It was only in Eil101 that Masutti and Castro's method outperformed ABO. This is further evidence that ABO is an excellent performer, even when in competition with Neural Network methods.

8. Conclusion

In general, this paper introduces the novel algorithm, the African Buffalo Optimization, and shows its capacity to solve the Travelling Salesman's Problem. The performance obtained from using the ABO is weighed against the performance of some popular algorithms, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), Min-Max Ant System (MMAS), and the Randomized Insertion Algorithm (RAI); some hybrid methods, such as Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), the Hybrid Algorithm (HA), and Cooperative Genetic Ant Systems (CGAS); and some popular Neural Network-based optimization methods. In total, 33 TSP dataset cases ranging from 48 to 14461 cities were investigated, and the ABO results obtained were compared with results obtained from 11 other optimization algorithms and methods. The results show the novel algorithm's capacity to obtain optimal or near-optimal solutions at a remarkably fast rate. The ABO's outstanding performance is a testimony to the fact that the ABO has immense potential for solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problem with encouraging outcomes, it is recommended that more investigations be carried out to ascertain the veracity or otherwise of this new algorithm in solving other problems, such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors assert that there is no conflict of interests in the publication of this research paper.

Acknowledgments

The authors acknowledge the support of the anonymous reviewers, whose constructive reviews helped to


Table 7: ABO versus NN results (Best, Mean).

eil51 (optimum 426): ABO 426, 427. Angeniol 432, 442.90. Somhom et al. 433, 440.57. Pasti and Castro 429, 438.70. Masutti and Castro 427, 437.47.
eil76 (538): ABO 538, 563.04. Angeniol 554, 563.20. Somhom 552, 562.27. Pasti 542, 556.10. Masutti 541, 556.33.
eil101 (629): ABO 640, 640. Angeniol 655, 665.93. Somhom 640, 655.57. Pasti 641, 654.83. Masutti 638, 648.63.
berlin52 (7542): ABO 7542, 7659.31. Angeniol 7778, 8363.70. Somhom 7715, 8025.07. Pasti 7716, 8073.97. Masutti 7542, 7932.50.
bier127 (118282): ABO 118297, 118863. Angeniol 120110, 128920.33. Somhom 119840, 121733.33. Pasti 118760, 121780.33. Masutti 118970, 120886.33.
ch130 (6110): ABO 6111, 6307.14. Angeniol 6265, 6416.80. Somhom 6203, 6307.23. Pasti 6142, 6291.77. Masutti 6145, 6282.40.
ch150 (6528): ABO 6532, 6601. Angeniol 6634, 6842.80. Somhom 6631, 6751. Pasti 6629, 6753.20. Masutti 6602, 6738.37.
rd100 (7910): ABO 7935, 7956. Angeniol 8088, 8444.50. Somhom 8028, 8239.40. Pasti 7947, 8253.93. Masutti 7982, 8199.77.
lin105 (14379): ABO 14419, 14452.7. Angeniol 14999, 16111.37. Somhom 14379, 14475.60. Pasti 14379, 14702.23. Masutti 14379, 14400.17.
lin318 (42029): ABO 42101, 42336. Angeniol 44869, 45832.83. Somhom 43154, 43922.90. Pasti 42975, 43704.97. Masutti 42834, 43696.87.
kroA100 (21282): ABO 21311, 22163.8. Angeniol 23009, 24678.80. Somhom 21410, 21616.77. Pasti 21369, 21868.47. Masutti 21333, 21522.73.
kroA150 (26524): ABO 26526, 27205. Angeniol 28948, 29960.90. Somhom 26930, 27401.33. Pasti 26932, 27346.43. Masutti 26678, 27355.97.
kroA200 (29368): ABO 29370, 30152. Angeniol 31669, 33228.33. Somhom 30144, 30415.67. Pasti 29594, 30257.53. Masutti 29600, 30190.27.
kroB100 (22141): ABO 22160, 22509. Angeniol 24026, 25966.40. Somhom 22548, 22622.50. Pasti 22596, 22853.60. Masutti 22343, 22661.47.
kroB150 (26130): ABO 26169, 26431. Angeniol 27886, 29404.53. Somhom 26342, 26806.33. Pasti 26395, 26752.13. Masutti 26264, 26631.87.
kroB200 (29437): ABO 29487, 29534. Angeniol 32351, 33838.13. Somhom 29703, 30286.47. Pasti 29831, 30415.60. Masutti 29637, 30135.00.
kroC100 (20749): ABO 20755, 20881.7. Angeniol 22344, 23496.13. Somhom 20921, 21149.87. Pasti 20915, 21231.60. Masutti 20915, 20971.23.
kroD100 (21294): ABO 21347, 21462. Angeniol 23076, 23909.03. Somhom 21500, 21845.73. Pasti 21457, 22027.87. Masutti 21374, 21697.37.
kroE100 (22068): ABO 22088, 22702. Angeniol 23642, 24828.03. Somhom 22379, 22682.47. Pasti 22427, 22815.50. Masutti 22395, 22715.63.
rat575 (6773): ABO 6774, 6810. Angeniol 8107, 8301.83. Somhom 7090, 7173.63. Pasti 7039, 7125.07. Masutti 7047, 7115.67.
rat783 (8806): ABO 8811, 8881.75. Angeniol 10532, 10721.60. Somhom 9316, 9387.57. Pasti 9185, 9326.30. Masutti 9246, 9343.77.
rl1323 (270199): ABO 270480, 278977. Angeniol 293350, 301424.33. Somhom 295780, 300899.00. Pasti 295060, 300286.00. Masutti 300770, 305314.33.
fl1400 (20127): ABO 20134, 20167. Angeniol 20649, 21174.67. Somhom 20558, 20742.60. Pasti 20745, 21070.57. Masutti 20851, 21110.00.
d1655 (62128): ABO 62346, 62599.5. Angeniol 68875, 71168.07. Somhom 67459, 68046.37. Pasti 70323, 71431.70. Masutti 70918, 72113.17.

enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36-39, Springer, 2010.

[3] E. Cantu-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459-471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49-60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224-228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189-224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243-252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137-151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881-900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1-5, 2012.


[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498-1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355-366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137-172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stutzle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49-75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1-4, pp. 61-85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671-1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. Garcia-Sanchez, J. J. Merelo, and P. A. Castillo, "Migration study on a pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57-64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73-81, 1997.

[24] M. G. C. Resende, R. Marti, M. Gallego, and A. Duarte, "GRASP and path relinking for the max-min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498-508, 2010.

[25] G. Reinelt, "TSPLIB95," 2012, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103-117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1-10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477-2482, July 2010.

[29] S. Georg, "MP-TESTDATA: the TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sorensen, "Metaheuristics: the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3-18, 2015.

[31] J. Brest and J. Zerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145-150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53-66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57-75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567-578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365-1375, 2013.

[36] N. Aras, B. J. Oommen, and I. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273-1284, 1999.



The Travelling Salesman's Problem (TSP) can be represented by a complete weighted graph G = (V, E), with V being the set of n nodes (cities) and E being the set of edges fully connecting the nodes in the graph G. In this graph, each edge E is given a weight d_ij, which represents the distance between cities i and j. It may be important to emphasize that the distance between towns/cities may be symmetric (where the distances between the cities are the same in either going or returning) or asymmetric (where, due to some restrictions, possibly one-way lanes or other reasons, the distance from city A to city B may not be the same as the return distance). The basic minimization formulation of the TSP is: given n towns and their coordinates, find an integer permutation π = {C_1, C_2, C_3, ..., C_n}, with C_n being city "n"; the task is to minimize the sum of the distances [7, 8]. Consider

f(π) = Σ_{i=1}^{n−1} d(C_i, C_{i+1}) + d(C_n, C_1).  (1)

Here, d(C_i, C_{i+1}) represents the distance between city "i" and city "i + 1", and d(C_n, C_1) is the distance between city "n" and city "1". The aim is to find the shortest path through the adjacent cities in the form of a vector of cities.

This paper is organized as follows. Section 1 introduces the African Buffalo Optimization and the Travelling Salesman's Problem. Section 2 presents a brief review of relevant literature. Section 3 discusses the basic flow of the African Buffalo Optimization (ABO) algorithm. Section 4 is concerned with how the ABO solves the TSP. Section 5 deals with the experiments and discussion of the results on the symmetric TSP instances. Section 6 is concerned with the experiments on the asymmetric TSP instances and the discussion of those results. Section 7 examines the performance of ABO vis-a-vis Neural Network methods. Section 8 draws conclusions on the study.

2. Literature Review

Nature-inspired algorithms (NAs) draw their inspiration from the complex but highly organised attributes of nature. Nature-inspired algorithms, which are generally stochastic, became the hotbed of several researchers due to the inefficiencies of the exact optimization algorithms (as the problem size enlarges), such as Linear Programming [9], Dynamic Programming [10], finite element [11], and finite volume methods. In general, NAs simulate the interplay and sometimes interdependence of natural elements such as plants, animals, and rivers. The most popular class of algorithms among the NAs are the biology-inspired algorithms. A handful of other NAs, however, are inspired by Chemistry or Physics; these include the Harmony Search (HS) algorithm, Intelligent Water Drop (IWD), Simulated Annealing (SA), and Black Hole (BH) [12]. In this study, our interest is the biology-inspired optimization algorithms.

Biology-inspired algorithms (BAs) can be categorised into three broad classes, namely, Evolutionary Algorithms (EAs), which are inspired by natural evolution; Swarm Intelligence (SI), which simulates the collective behavior in plants and animals; and, thirdly, Ecology algorithms, which are concerned with simulating the inter- and intracooperative or competitive interactions within the ecosystem [13].

The Evolutionary Algorithms (EAs) generally simulate the iterative progress comprising growth, development, reproduction, selection, and survival as seen in a population. EAs are concerned with the feasible solutions that emanate iteratively from generation to generation towards the best solution. EAs employ fitness-based selection within the population of solutions, in such a way that fitter solutions are selected for survival into the next generation of solutions. In this category of EAs are Genetic Programming (GP), the Paddy Field Algorithm (PFA), Evolutionary Strategies (ES), the Genetic Algorithm (GA), and Differential Evolution (DE) [12]. It may be necessary to state that experts have different opinions on the classification of Differential Evolution as an Evolutionary Algorithm. The classification of DE as an Evolutionary Algorithm stems from its use of "evolution" as one of its parameters (just like other EAs); a critical expert may not actually class Differential Evolution as being bioinspired [14].

The second category of BAs is the Swarm Intelligence (SI) algorithms, which are concerned with the collective social behavior of organisms. The motivation of Swarm Intelligence is the collective intelligence of groups of simple agents, such as insects, fishes, birds, bacteria, worms, and other animals, based on their behavior in real life. As simple as these animals are, they are able to exhibit exceptional intelligence whenever they work collectively as a group. These algorithms track the collective behavior of animals that exhibit decentralized, self-organized patterns in their foraging duties. Examples of these algorithms are Bee Colony Optimization (BCO), the Firefly Algorithm (FFA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Artificial Bee Colony (ABC), the Bacteria Foraging Algorithm (BFA), and so on [15].

The third category of BAs is the Ecology algorithms, which are concerned with the numerous inter- or intraspecies competitive or cooperative interactions within natural ecosystems. The ecosystem is made up of living organisms along with their abiotic environment, with which the organisms interact, such as water, air, and soil. Cooperation among organisms includes division of labor and represents the core of their sociality. Some of the interactions are cooperative and others are competitive, leading to a complex and harmonious balancing within the ecosystem. Algorithms in this category are PS2O, Invasive Weed Colony Optimization, and biogeography-based optimization [16].

The development of the African Buffalo Optimization (ABO) is in response to the observed lapses in the existing algorithms. The ABO belongs to the Swarm Intelligence class of algorithms based on the social behavior in animals, with the aim of achieving greater exploitation and exploration of the search space, ease of use, and faster speed in achieving the optimal result through its use of relatively fewer parameters in solving combinatorial optimization problems.

2.1. Ant Colony Optimization (ACO). The Ant Colony Optimization algorithm is a population-based optimization technique developed by Marco Dorigo that has been successfully applied to solve several NP-hard combinatorial optimization

Computational Intelligence and Neuroscience 3

problems. This algorithm was inspired by the behavior of ant colonies, especially by their foraging behavior in real life. Usually, ants, when leaving their nests, move randomly around the areas surrounding their nests in search of food. If any of the ants comes across food, it first collects some pieces of the food and, on its way back to the nest, deposits a chemical substance called pheromone as a way of communicating to its peers that there has been a breakthrough. Other nearby ants, on perceiving the fragrance of the pheromone, understand and move towards the pheromone path. Once they discover the food source, they in turn drop fresh pheromone as a way of alerting other ants. In a matter of time, several ants pick up this information and are on the pheromone path.

Another interesting part of the ants' behavior is that, as they return to the nest, they optimize their route. In a short while, the ants have created a shorter route to the food source than the previous routes. Moreover, in case an obstruction is put on the shorter route, making movements impossible, the ants are able to find another short route among the available options to evade the obstacle. The highlights of this algorithm include tapping into the indirect communication of a colony of (artificial) ants using pheromone trails as a means of communication, tracking their cooperative ability to solve a complex problem, and harnessing their capacity to optimize their routes from the food source to the nest and vice versa.

There have been several modifications of the ant colony algorithms, starting from the initial Ant System (AS), to the Ant Colony System (ACS), to the Min-Max Ant System (MMAS), and then to the Ant Colony Optimization (ACO) algorithms, and so forth [17]. In ACO, a colony of ants in each iteration constructs a solution probabilistically as ant k at node i selects the next node j to move on to. The choice of node is influenced by the pheromone trail value τ_ij(t) and the available heuristic η_ij; in the TSP, η_ij = 1/d_ij. So an ant moves from location i to location j with the probability

P_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{l ∈ N_i^k} [τ_il(t)]^α [η_il]^β, if j ∈ N_i^k; 0 otherwise. (2)

Here τ_ij(t) represents the pheromone trail, η_ij represents the local heuristic information, t represents the iteration, N_i^k represents the nodes ant k can go to, and α and β are parameters that bias the pheromone trails. By the end of an iteration, the pheromone trail on each edge ij is updated using the following equation:

τ_ij(t + 1) = (1 − ρ) τ_ij(t) + Δτ_ij^best(t). (3)

In (3), τ_ij(t + 1) represents the pheromone trail in iteration t + 1, and ρ takes any value from 0.1 to 0.9. The amount of pheromone deposited by the best ant is represented by

Δτ_ij^best(t) = 1 / f(s_best(t)), if the best ant used edge ij in iteration t; 0 otherwise. (4)

In (4), f(s_best(t)) represents the cost of the best solution, s_best(t).
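To make the transition rule (2) and the update rules (3)-(4) concrete, they can be sketched in Python as below. The three-city distance matrix, the parameter settings (α = 1, β = 5, ρ = 0.5), and the function names are illustrative assumptions, not values taken from this study.

```python
import random

def choose_next_node(i, unvisited, tau, eta, alpha=1.0, beta=5.0):
    """Equation (2): pick ant k's next node j with probability
    proportional to tau[i][j]^alpha * eta[i][j]^beta over N_i^k."""
    weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in unvisited]
    r = random.random() * sum(weights)
    for j, w in zip(unvisited, weights):
        r -= w
        if r <= 0:
            return j
    return unvisited[-1]

def update_pheromones(tau, best_tour, best_cost, rho=0.5):
    """Equations (3)-(4): evaporate all trails, then let the best ant
    deposit 1/f(s_best) on the edges of its tour."""
    n = len(tau)
    for a in range(n):
        for b in range(n):
            tau[a][b] *= (1.0 - rho)
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += 1.0 / best_cost

# Tiny three-city example: heuristic eta_ij = 1/d_ij as in the TSP
d = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
eta = [[0.0 if i == j else 1.0 / d[i][j] for j in range(3)] for i in range(3)]
tau = [[1.0] * 3 for _ in range(3)]        # uniform initial pheromone
next_node = choose_next_node(0, [1, 2], tau, eta)
```

With uniform pheromone, the short edge from city 0 to city 1 (distance 2) dominates the long edge to city 2 (distance 9), so the ant almost always moves to city 1.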

A critical examination of the Ant Colony Optimization technique of solving optimization problems reveals that there is little or no similarity between the ACO's search mechanism and that of the ABO. This could be due to their application of different search techniques in arriving at solutions: while the ACO employs the path construction technique, the ABO favours the path improvement search mechanism.

2.2. Particle Swarm Optimization. Particle Swarm Optimization, which was inspired by the social behavior of bird flocking or fish schooling, is one of the biology-inspired computation techniques developed by Eberhart and Kennedy [18]. This algorithm obtains solutions using a swarm of particles, where each particle represents a candidate solution. When compared to evolutionary computation paradigms, a swarm is similar to a population and a particle represents an individual. In searching for a solution, the particles are flown through a multidimensional search space, where the position of each particle is adjusted according to its own experience and that of its neighbors. The velocity vector drives the optimization process as well as reflecting each particle's experience and the information exchange within its neighborhood. Just like the ABO, PSO has two controlling equations in its search for solutions, namely (5) and (6). Consider

v_ij^(t+1) = χ (v_ij^t + φ_1 U_1 (b_ij^t − x_ij^t) + φ_2 U_2 (b_(n)ij^t − x_ij^t)), (5)

where v_ij^(t+1) represents the present velocity, v_ij^t is the previous velocity, χ is the constriction factor, φ_1 and φ_2 are the acceleration coefficients, U_1 and U_2 are random numbers, b_ij^t is the individual particle's best position, x_ij^t is the present position, and b_(n)ij^t is the swarm's best position. The next equation in PSO, which calculates the position of the swarm, is

x_ij^(t+1) = x_ij^t + v_ij^(t+1). (6)

In the PSO algorithm, the particles move in the domain of an objective function f: Θ ⊆ R^n, where n represents the number of variables to be optimized. Each particle, at a particular iteration, is associated with three vectors:

(a) Present position, denoted by x. This vector records the present position of a particular particle.

(b) Present velocity, denoted by v. This vector stores the particle's direction of search.

(c) Individual's best position, denoted by b. This vector records the particular particle's best solution so far since the search began (since the beginning of the algorithm's execution). In addition to these, the individual particles relate with the best particle in the swarm, which the PSO algorithm tracks in each iteration to help direct the search to promising areas [19].
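As an illustration of how (5) and (6) work together, the sketch below minimizes a one-dimensional quadratic. The constriction factor χ = 0.729 and acceleration coefficients φ1 = φ2 = 2.05 are common settings in the PSO literature; they are assumptions here, not the exact values used in this paper's experiments.

```python
import random

def pso_minimize(f, n_particles=20, iters=100, chi=0.729, phi1=2.05, phi2=2.05):
    """Sketch of PSO: equation (5) updates velocities, equation (6) positions."""
    x = [random.uniform(-10.0, 10.0) for _ in range(n_particles)]  # positions
    v = [0.0] * n_particles                                        # velocities
    pbest = x[:]                                                   # b: personal bests
    gbest = min(x, key=f)                                          # b_(n): swarm best
    for _ in range(iters):
        for k in range(n_particles):
            u1, u2 = random.random(), random.random()
            # equation (5): memory + cognitive + social terms, scaled by chi
            v[k] = chi * (v[k] + phi1 * u1 * (pbest[k] - x[k])
                                + phi2 * u2 * (gbest - x[k]))
            x[k] = x[k] + v[k]                                     # equation (6)
            if f(x[k]) < f(pbest[k]):
                pbest[k] = x[k]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(lambda z: (z - 3.0) ** 2)
```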

2.3. Artificial Bee Colony. This algorithm, which is inspired by the behavior of the natural honey bee swarm, was proposed by Karaboga and Akay in 2009 [20]. It searches for solutions through the use of three classes of bees: scout bees, onlooker bees, and employed bees. Scout bees are those that fly over


the search space in search of solutions (food sources). The onlooker bees, on the other hand, are the ones that stay in the nest waiting for the report of the scout bees, while the employed bees refer to the class of bees which, after watching the waggle dance of the scout bees, opt to join in harvesting the food source (exploitation). A particular strength of this algorithm lies in its bee transformation capabilities. For instance, a scout bee could transform to an employed bee once it (the same scout bee) is involved in harvesting the food source, and vice versa.

Generally, the bees can change their statuses depending on the needs of the algorithm at a particular point in time. In this algorithm, the food source represents a solution to the optimization problem. The volume of nectar in a food source represents the quality (fitness) of the solution. Moreover, each employed bee is supposed to exploit only one food source, meaning that the number of employed bees is the same as the number of food sources. The scout bees are always exploring for new food sources v_m with higher nectar quantity and/or quality (fitness) within the neighbourhood. The bees evaluate the nectar fitness using

v_mi = x_mi + φ_mi (x_mi − x_ki), (7)

where i is a randomly chosen parameter index, φ_mi is a random number within a given range, and x_ki is a food source. The quality (fitness) of a solution, fit_m(x_m), is calculated using the following equation:

fit_m(x_m) = 1 / (1 + f_m(x_m)), if f_m(x_m) ≥ 0; 1 + |f_m(x_m)|, if f_m(x_m) < 0. (8)
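Equations (7) and (8) can be sketched as follows; the search bounds and helper names are illustrative assumptions, not part of the original ABC specification.

```python
import random

def neighbor_source(x, m, k, i, lo=-5.0, hi=5.0):
    """Equation (7): candidate v_m from food source x[m], perturbing
    parameter i relative to a randomly chosen source x[k]."""
    phi = random.uniform(-1.0, 1.0)        # phi_mi, a random number in a range
    v = x[m][:]
    v[i] = x[m][i] + phi * (x[m][i] - x[k][i])
    v[i] = max(lo, min(hi, v[i]))          # keep the candidate within bounds
    return v

def abc_fitness(fm):
    """Equation (8): map an objective value f_m to a positive fitness."""
    return 1.0 / (1.0 + fm) if fm >= 0 else 1.0 + abs(fm)
```

A candidate replaces its parent source only if `abc_fitness` improves (greedy selection), which is how the nectar comparison is carried out in ABC.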

From the foregoing discussion, it is clear that there is a slight similarity between (5) in PSO and (9) in ABO, since each algorithm subtracts a variable from the personal and individual bests of the particles/buffalos. For PSO, the subtracted variable is the present position, and for ABO it is the immediate-past explored location (the "waaa" values w_k). However, the two equations are different in several respects: while the PSO uses χ (being the constriction factor) or ω (the inertia factor in some versions of the PSO), there are no such equivalents in ABO. Moreover, while the PSO employs random numbers (U_1 and U_2), ABO does not use random numbers, only learning parameters. Also, PSO uses acceleration coefficients (φ_1 and φ_2); ABO does not. In the case of the ABC, even though it employs the same search technique in arriving at solutions, the algorithm procedures are quite different.

2.4. Information Propagation. In searching for solutions to an optimization problem, the ACO employs the path construction technique, while the PSO, ABC, and the ABO use the path improvement technique. However, while the PSO uses the Von Neumann topology (see Figure 1) as its best technique for information propagation [21], the ACO obtains good results using the ring topology [22], and the ABO uses the star topology, which connects all the buffalos together. The Von Neumann topology enables the particles to connect to neighboring particles on the east, west, north, and south. Effectively, a particular particle relates with the four other particles surrounding it. The ABO employs the star topology such that a particular buffalo is connected to every other buffalo in the herd. This enhances ABO's information dissemination at any particular point in time.

3. African Buffalo Optimization Algorithm

In using the ABO to proffer solutions in the search space, the buffalos are first initialized within the herd population and are made to search for the global optima by updating their locations as they follow the current best buffalo, bgmax, in the herd. Each buffalo keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far. This value is called bpmax_k, representing the best location of the particular buffalo in relation to the optimal solution. The ABO algorithm follows this pattern: at each step, it tracks the dynamic location of each buffalo towards bpmax_k and bgmax, depending on where the emphasis is placed at a particular iteration. The speed of each animal is influenced by the learning parameters.

3.1. ABO Algorithm. The ABO algorithm is presented below:

(1) Initialization: randomly place buffalos at nodes in the solution space.

(2) Update the buffalos' fitness values using

m_(k+1) = m_k + lp1 (bgmax − w_k) + lp2 (bpmax_k − w_k), (9)

where w_k and m_k represent the exploration and exploitation moves, respectively, of the kth buffalo (k = 1, 2, ..., N); lp1 and lp2 are learning factors; bgmax is the herd's best fitness; and bpmax_k is the individual buffalo k's best found location.

(3) Update the location of buffalo k (bpmax_k and bgmax) using

w_(k+1) = (w_k + m_k) / ±0.5. (10)

(4) Is bgmax updating? Yes: go to (5). No: go to (2).

(5) If the stopping criteria are not met, go back to algorithm step (3); else go to (6).

(6) Output the best solution.

A closer look at the algorithm (the ABO algorithm) reveals that (9), which shows the democratic attitude of the buffalos, has three parts: the first, m_k, represents the memory of the buffalo's past location. The buffalo has an innate memory ability that enables it to tell where it has been before. This is crucial in its search for solutions, as it helps it to avoid areas that produced bad results. The memory of each buffalo is a list of solutions that can be used as an alternative for the current local maximum location. The second, lp1(bgmax − w_k), is


Figure 1: Information propagation topologies (star topology, ring topology, Von Neumann topology).

concerned with the caring or cooperative part of the buffalos and is a pointer to the buffalo's social and information-sharing experience; and the third part, lp2(bpmax_k − w_k), indicates the intelligence part of the buffalos. So the ABO exploits the memory, caring, and intelligent capabilities of the buffalos in the democratic equation (9). Similarly, (10) is the "waaa" vocalization equation that propels the animals to move on to explore other environments, as the present area has been fully exploited or is unfavourable for further exploration and exploitation.
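For concreteness, one iteration of (9) and (10) for scalar (one-dimensional) locations can be sketched as below. The learning parameters 0.6 and 0.4 follow the values stated later in Section 4, and λ = ±0.5 follows (10); this is an illustrative sketch, not the authors' reference implementation.

```python
def abo_step(w, m, bpmax, bgmax, lp1=0.6, lp2=0.4, lam=0.5):
    """One ABO iteration for scalar locations: equation (9) updates the
    exploitation move m_k, equation (10) updates the location w_k."""
    m_new = [m[k] + lp1 * (bgmax - w[k]) + lp2 * (bpmax[k] - w[k])
             for k in range(len(w))]                          # equation (9)
    w_new = [(w[k] + m_new[k]) / lam for k in range(len(w))]  # equation (10)
    return w_new, m_new

# One buffalo at w = 1.0, with herd best bgmax = 0.0 and personal best 0.5
w_new, m_new = abo_step([1.0], [0.0], bpmax=[0.5], bgmax=0.0)
```

Here the democratic term pulls the move towards both bests: m_new is −0.8 and the buffalo relocates to 0.4.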

3.2. Highlights of the ABO Algorithm. They are as follows:

(1) Stagnation handling through regular update of the best buffalo, bgmax, in each iteration.

(2) Use of relatively few parameters to ensure speed and fast convergence.

(3) A very simple algorithm that requires less than 100 lines of code in any programming language.

(4) Ensuring adequate exploration by tracking the location of the best buffalo (bgmax) and each buffalo's personal best location (bpmax_k).

(5) Ensuring adequate exploitation through tapping into the exploits of other buffalos, lp1(bgmax − w_k).

3.3. Initialization. The initialization phase is done by randomly placing the kth buffalo in the solution space. For initialization, some prior knowledge of the problem can help the algorithm to converge in fewer iterations.

3.4. Update Speed and Location. In each iteration, each buffalo updates its location according to its former maximum location (bpmax) and some information gathered from the exploits of the neighboring buffalos. This is done using (9) and (10) (refer to the ABO algorithm, steps (2) and (3) above). This enables the algorithm to track the movement of the buffalos in relation to the optimal solution.

4. Using ABO to Solve the Travelling Salesman's Problem

The ABO has the advantage of using very simple steps to solve complex optimization problems such as the TSP. The basic solution steps are as follows:

(a) Choose, according to some criterion, a start city for each of the buffalos and randomly locate the buffalos in those cities. Consider

P_ab = w_ab^lp1 · m_ab^lp2 / Σ_{i=1}^{n} (w_ab^lp1 · m_ab^lp2), ab = ±0.5. (11)

(b) Update buffalo fitness using (9) and (10), respectively.

(c) Determine bpmax_k and bgmax.

(d) Using (11) and heuristic values, probabilistically construct a buffalo tour by adding cities that the buffalos have not visited.

(e) Is the bgmax updating? Yes: go to (f). No: go to (a).

(f) Is the exit criterion reached? Yes: go to (g). No: return to (b).

(g) Output the best result.

Here lp1 and lp2 are learning parameters, set at 0.6 and 0.4, respectively; ab takes the values of 0.5 and −0.5 in alternate iterations; m is the positive reinforcement alert invitation, which tells the animals to stay and exploit the environment since there are enough pastures; and w is the negative reinforcement alarm, which tells the animals to keep on exploring the environment since the present location is not productive. For buffalo k, the probability p_k of moving from city j to city k depends on the combination of two values, namely, the desirability of the move, as computed by some heuristic indicating the previous attractiveness of that move, and the summative benefit the move has for the herd, indicating how productive it has been in the past to make that particular move. The denominator values represent an indication of the desirability of that move.
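The tour-construction steps above can be sketched as follows. Since the paper does not give the exact data structures, the per-edge matrices w and m (holding the negative and positive reinforcement values for each move) and the helper names are illustrative assumptions.

```python
import random

def next_city(current, unvisited, w, m, lp1=0.6, lp2=0.4):
    """Step (d)/equation (11): choose the next city with probability
    proportional to w_ab^lp1 * m_ab^lp2 over the unvisited cities."""
    weights = [(w[current][j] ** lp1) * (m[current][j] ** lp2) for j in unvisited]
    r = random.random() * sum(weights)
    for j, wt in zip(unvisited, weights):
        r -= wt
        if r <= 0:
            return j
    return unvisited[-1]

def build_tour(n, start, w, m):
    """Construct one buffalo tour by repeatedly adding unvisited cities."""
    tour, unvisited = [start], [c for c in range(n) if c != start]
    while unvisited:
        j = next_city(tour[-1], unvisited, w, m)
        unvisited.remove(j)
        tour.append(j)
    return tour

# Five cities with uniform (illustrative) reinforcement values
w5 = [[1.0] * 5 for _ in range(5)]
m5 = [[1.0] * 5 for _ in range(5)]
tour = build_tour(5, 0, w5, m5)
```

With uniform w and m the construction degenerates to a random permutation; in a real run the reinforcement values bias the choice towards historically productive moves.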


Table 1: TSPLIB datasets.

1st experimental datasets: Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, Tsp225.
2nd experimental datasets: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, Brd14051.
ATSP datasets: Ry48p, Ft70, Kro124p, Ftv70, P43, Ftv170.
NN comparative datasets: Eil51, Eil76, Eil101, Berlin52, Bier127, Ch130, Ch150, Rd100, Lin105, Lin318, KroA100, KroA200, KroB100, KroB150, KroB200, KroC100, KroD100, KroE100, Rat575, RL1323, FL1400, D1655.

4.1. ABO Solution Mechanism for the TSP. The ABO applies the modified Karp-Steele algorithm in its solution of the Travelling Salesman's Problem [23]. It follows a simple solution step of first constructing a cycle factor F of the cheapest weight in the K graph. Next, it selects a pair of edges taken from different cycles of the K graph and patches them in a way that will result in a minimum weight. Patching is simply removing the selected edges in the two cycle factors and then replacing them with cheaper edges, in this way forming a larger cycle factor and thus reducing the number of cycle factors in graph K by one. Thirdly, the second step is repeated until we arrive at a single cycle factor in the entire graph K. This technique fits the path improvement technique description [24]. The ABO overcomes the problem of delay in this process through the use of two primary parameters to ensure speed, namely, lp1 and lp2, coupled with the algorithm keeping track of the route of the bgmax as well as bpmax_k in each construction step.

5. Experiments and Discussion of Results

In this study, the ABO was implemented on three sets of symmetric TSP datasets and a set of asymmetric TSP (ATSP) instances, ranging from 48 to 14461 cities, from TSPLIB95 [25]. The first experiment was concerned with comparing the performance of ABO on TSP instances with the results obtained from a recent study [26] involving Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, and Tsp225. The second set of experiments was concerned with testing ABO's performance against another recently published study [27] on Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The third experiment examined ABO's performance on asymmetric TSP instances. The fourth set of experiments involved comparing ABO results with those obtained using some popular Artificial Neural Networks methods [28]. The TSP benchmark datasets are presented in Table 1.

5.1. Experimental Parameters Setting. For the sets of experiments involving PSO, the parameters are as follows: population size, 200; iterations (T_max), 1000; inertia weight, 0.85; C1, 2; C2, 2; rand1, (0, 1); rand2, (0, 1). For the HPSACO, the experimental parameters are as follows: population, 200; iterations (T_max), 1000; inertia weight, 0.85; C1, 2; C2, 2; ants (N), 100; pheromone factor (α), 1.0; heuristic factor (β), 2.0; evaporation coefficient (ρ), 0.05; pheromone amount, 100. For the experiments involving the other algorithms in this study, Table 2 contains the details of the parameters used.

5.2. Experiment Analysis and Discussion of Results. The experiments were performed using MATLAB on an Intel Duo Core i7-3770 CPU, 3.40 GHz, with 4 GB RAM. The experiments on the asymmetric Travelling Salesman's Problems were executed on a Pentium Duo Core, 1.80 GHz processor and 2 GB RAM desktop computer. Similarly, the experiments on the Artificial Neural Networks were executed using Microsoft Visual C++ 2008 on an Intel Duo Core i7 CPU. To authenticate the veracity of the ABO algorithm in solving the TSP, we initially carried out experiments on eight TSP cases. The city-coordinates data are available in [29]. The results are presented in Table 3.

In Table 3, the "Average Value" refers to the average fitness of each algorithm, and the "relative error" values are obtained by calculating

Relative error = ((Average value − Best value) / Best value) × 100. (12)

As can be seen in Table 3, the ABO outperformed the other algorithms in all test instances under investigation. The ABO, for instance, obtained the optimal solution to Berlin52 and Eil76; no other algorithm did. Besides this, the ABO obtained the nearest-optimal solution to the remaining TSP instances compared to any other algorithm. In terms of the average results obtained by each algorithm, the ABO still has the best performance. It is rather surprising that the Hybrid Algorithm (HA), which uses a similar memory matrix to the ABO, could not post a better result. This is traceable to its use of several parameters, since the HA is a combination of the ACO and the ABC, the two algorithms that have some of the largest numbers of parameters to tune in order to obtain good solutions.

The dominance of the ABO can also be seen in the use of computer resources (CPU time), where it is clearly seen that the ABO is the fastest of all four algorithms. In Berlin52, for instance, the ABO is 5,833.5 times faster than the ACO, 1,085 times faster than the ABC, and 30,320 times faster than the Hybrid Algorithm (HA). This trend continues in all the


Table 2: Experimental parameters (total number of runs: 50 for each algorithm).

ABO: population, 40; mk, 1.0; bgmax/bpmax, 0.6; lp1/lp2, 0.5; wk, 1.0.
ACO: ants, D*; β, 5.0; ρ, 0.65; α, 1.0; Q, 200; q0, 0.9.
ABC: population, D*; φij, rand(−1, 1); ωij, rand(0, 1.5); SN, NP/2; limit, D*·SN; maximum cycle number, 500; colony, 50.
HA: population, D*; β, 5.0; ρ, 0.65; α, 1.0; φij, rand(−1, 1); ωij, rand(0, 1.5); SN, NP/2; limit, D*·SN; maximum cycle number, 500; colony, 50; Q, 200; q0, 0.9.
CGAS: generation, 100; β, 2.0; ρ, 0.1; Ro, 0.33; crossover rate, 1.0; q0, 0.9; φr, 0.3; φρ, 0.2; τmin, τmax/20; τmax, 1 − (1 − ρ).

Table 3: Comparative experimental results.

Berlin52 (52 cities, optimum 7542):
  ABO: best 7542, mean 7616, rel. err. 0%, time 0.002 s
  ACO: best 7548.99, mean 7659.31, rel. err. 1.52%, time 11.667 s
  ABC: best 9479.11, mean 10390.26, rel. err. 37.72%, time 2.17 s
  HA: best 7544.37, mean 7544.37, rel. err. 0.03%, time 60.64 s

St70 (70 cities, optimum 675):
  ABO: best 676, mean 678.33, rel. err. 0.15%, time 0.08 s
  ACO: best 696.05, mean 709.16, rel. err. 4.73%, time 22.606 s
  ABC: best 1162.12, mean 1230.49, rel. err. 81.73%, time 3.15 s
  HA: best 687.24, mean 700.58, rel. err. 3.47%, time 11.565 s

Eil76 (76 cities, optimum 538):
  ABO: best 538, mean 563.04, rel. err. 0%, time 0.03 s
  ACO: best 554.46, mean 561.98, rel. err. 3.04%, time 27.198 s
  ABC: best 877.28, mean 931.44, rel. err. 70.78%, time 3.49 s
  HA: best 551.07, mean 557.98, rel. err. 2.31%, time 13.882 s

Pr76 (76 cities, optimum 108159):
  ABO: best 108167, mean 108396, rel. err. 0.007%, time 0.08 s
  ACO: best 115166.66, mean 116321.22, rel. err. 7.55%, time 27.241 s
  ABC: best 195198.9, mean 205119.61, rel. err. 89.65%, time 3.50 s
  HA: best 113798.56, mean 115072.29, rel. err. 6.39%, time 13.892 s

KroA100 (100 cities, optimum 21282):
  ABO: best 21311, mean 22163.8, rel. err. 0.4%, time 0.00 s
  ACO: best 22455.89, mean 22880.12, rel. err. 7.49%, time 61.506 s
  ABC: best 49519.51, mean 53840.03, rel. err. 152.94%, time 5.17 s
  HA: best 22122.75, mean 22435.31, rel. err. 5.40%, time 31.112 s

Eil101 (101 cities, optimum 629):
  ABO: best 640, mean 640, rel. err. 1.7%, time 0.027 s
  ACO: best 678.04, mean 693.42, rel. err. 7.96%, time 52.742 s
  ABC: best 1237.31, mean 1315.95, rel. err. 104.88%, time 5.17 s
  HA: best 672.71, mean 683.39, rel. err. 6.39%, time 26.708 s

Ch150 (150 cities, optimum 6528):
  ABO: best 6532, mean 6601, rel. err. 0.06%, time 0.032 s
  ACO: best 6648.51, mean 6702.87, rel. err. 2.61%, time 138.765 s
  ABC: best 20908.89, mean 21617.48, rel. err. 230.93%, time 8.95 s
  HA: best 6641.69, mean 6677.12, rel. err. 2.21%, time 69.861 s

Tsp225 (225 cities, optimum 3916):
  ABO: best 3917, mean 3982, rel. err. 0.03%, time 0.09 s
  ACO: best 4112.35, mean 4176.08, rel. err. 8.22%, time 403.875 s
  ABC: best 16998.41, mean 17955.12, rel. err. 365.28%, time 16.68 s
  HA: best 4090.54, mean 4157.85, rel. err. 7.74%, time 203.733 s


Table 4: Comparative optimal results (best / average / relative error in %).

att48 (optimum 33522): ABO 33524 / 33579 / 0.16; PSO 33734 / 33982 / 0.63; ACO 33649 / 33731 / 0.62; HPSACO 33524 / 33667 / 0.16
st70 (optimum 675): ABO 676 / 678.33 / 0.15; PSO 691.2 / 702.6 / 2.40; ACO 685.7 / 694.7 / 1.59; HPSACO 680.3 / 698.6 / 0.79
eil76 (optimum 538): ABO 538 / 563.04 / 0.00; PSO 572.3 / 589.1 / 6.38; ACO 550.7 / 560.4 / 2.36; HPSACO 546.2 / 558.1 / 1.52
pr152 (optimum 73682): ABO 73730 / 73990 / 0.07; PSO 75361 / 75405 / 2.28; ACO 74689 / 74936 / 1.37; HPSACO 74165 / 74654 / 0.66
gil262 (optimum 2378): ABO 2378 / 2386 / 0.00; PSO 2513 / 2486 / 5.68; ACO 2463 / 2495 / 3.57; HPSACO 2413 / 2468 / 1.47
rd400 (optimum 15281): ABO 15301 / 15304 / 5.00; PSO 16964 / 17024 / 11.01; ACO 16581 / 16834 / 8.51; HPSACO 16067 / 16513 / 5.14
pr1002 (optimum 259045): ABO 259132 / 261608 / 0.03; PSO 278923 / 279755 / 7.67; ACO 269758 / 271043 / 4.14; HPSACO 267998 / 269789 / 3.46
d1291 (optimum 50801): ABO 50839 / 50839 / 0.07; PSO 53912 / 54104 / 6.12; ACO 52942 / 53249 / 4.21; HPSACO 52868 / 52951 / 4.07
fnl4461 (optimum 182566): ABO 182745 / 183174 / 0.10; PSO 199314 / 199492 / 9.17; ACO 192964 / 194015 / 5.70; HPSACO 191352 / 192585 / 4.81
brd14051 (optimum 469385): ABO 469835 / 479085 / 0.10; PSO 518631 / 519305 / 10.49; ACO 505734 / 511638 / 7.74; HPSACO 498471 / 503594 / 6.20

test cases under investigation. To solve all the TSP problems here, it took ABO a cumulative time of 0.279 seconds, to ACO's 745.6 seconds, ABC's 43.11 seconds, and HA's 3362.27 seconds. The speed of the ABO is traceable to its effective memory management technique, since the ABO uses the path improvement technique as against the ACO, which uses the slower path construction method. The difference in speed with the ABC, which uses a similar technique to the ABO, is due to the ABC's use of several parameters. The speed of the HA could have been affected by the combination of the path construction and path improvement techniques, coupled with the use of several parameters.

5.3. ABO and Other Algorithms. Encouraged by the extremely good performance of the ABO in the first experiments, more experiments were performed, and the results obtained were compared with those from PSO, ACO, and HPSACO from another recently published study [27]. The HPSACO is a combination of three techniques, namely, the Collaborative Strategy, Particle Swarm Optimization, and the Ant Colony Optimization algorithms. The datasets used in this experiment are from TSPLIB95, and they are Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The results are presented in Table 4.

Table 4 further underscores the excellent performance of ABO. The ABO obtained the optimal solution to Eil76 and Gil262 and very near-optimal solutions in the rest of the test cases. It is obvious from the relative error calculations that the ABO obtained over 99% accuracy in all the TSP instances here except the difficult rd400, where it obtained 95%. It is worthy of note that the 95% accuracy of the ABO is still superior to the performance of the other comparative algorithms on this TSP instance. The cumulative relative error of the ABO is 5.68%, to the PSO's 61.83%, the ACO's 39.81%, and the HPSACO's 28.28%. Clearly, from this analysis, the ABO has a superior search capability. This is traceable to its use of relatively fewer parameters than most other Swarm Intelligence techniques. The controlling parameters of the ABO are just the learning parameters (lp1 and lp2).

In designing the ABO, the authors deliberately tried to avoid the "Frankenstein phenomena" [30], that is, a case of developing intricate techniques with several parameters/operators such that, most times, the individual contributions of some of such parameters/operators to the workings of the algorithm are difficult to pinpoint. The ability to achieve this "lean metaheuristic" design (which is what we tried to do in designing the ABO) is a mark of good algorithm design [30].

6. ABO on Asymmetric TSP Test Cases

Moreover, we experimented on some ATSP test cases to verify the performance of the novel algorithm on such cases. The results obtained from this experiment using the ABO are compared with the results obtained from solving some benchmark asymmetric TSP instances available in TSPLIB95 using the Randomized Arbitrary Insertion algorithm (RAI) [31], the Ant Colony System with 3-opt (ACS), the Min-Max Ant System (MMAS), and Iterated Local Search (ILS), which is reported to be one of the best methods for solving TSP problems. These ATSP instances are ry48p, ft70, Kro124p, ftv70, p43, and ftv170 [32].

Table 5 shows the comparative results obtained by applying five different algorithms to solving a number of benchmark asymmetric TSP instances, with the aim of evaluating the performance of the African Buffalo Optimization (ABO) algorithm. The first column lists the ATSP instances as recorded in the TSPLIB; the second column indicates the number of cities/locations/nodes involved in each instance; the third indicates the optimal solutions, followed by the performances of the different algorithms in at most 50 test runs.

A quick look at the table shows the excellent performance of the ILS and the MMAS, with both obtaining 100% results in the only four cases available in the literature that they solved. The ABO performed very well, achieving over 99% in all test cases but one. The ABO has approximately the same performance as RAI and the ACS, obtaining about 99% optimal results in virtually all cases. It is significant to note, however, that the ABO performed better than ACS and RAI on ft70, which is said to be a difficult ATSP instance to solve in spite of its relatively small size [33].

Next, the authors examined the speed of each algorithm in achieving results, since one of the aims of the ABO is to


Table 5: Comparative results (best / average / relative error in %; "—" indicates results not available in the literature).

Ry48p (48 cities, optimum 14422):
  ABO 14440 / 14455 / 0.12; RAI 14422 / 14543.20 / 0; MMAS 14422 / 14422 / 0; ACS 14422 / 14565.45 / 0; ILS 14422 / 14422 / 0

Ft70 (70 cities, optimum 38673):
  ABO 38753 / 38870.5 / 0.21; RAI 38855 / 39187.75 / 0.47; MMAS 38673 / 38687 / 0; ACS 38781 / 39099.05 / 0.28; ILS 38673 / 38687 / 0

Kro124p (100 cities, optimum 36230):
  ABO 36275 / 36713 / 0.12; RAI 36241 / 36594.23 / 0.04; MMAS 36230 / 36542 / 0; ACS 36241 / 36857 / 0.04; ILS 36230 / 36542 / 0

Ftv70 (71 cities, optimum 1950):
  ABO 1955 / 1958.5 / 0.26; RAI 1950 / 1968.44 / 0; MMAS —; ACS —; ILS —

P43 (43 cities, optimum 5620):
  ABO 5645 / 5698 / 0.44; RAI 5620 / 5620.65 / 0; MMAS —; ACS —; ILS —

Ftv170 (171 cities, optimum 2755):
  ABO 2795 / 2840.5 / 1.45; RAI 2764 / 2832.74 / 0.33; MMAS 2755 / 2755 / 0; ACS 2774 / 2826 / 0.69; ILS 2755 / 2756 / 0


Table 6: Comparative speed of algorithms (average time in seconds).

TSP case | Cities | ABO | MIMM-ACO | MMAS | CGAS | RAI
Ry48p | 48 | 0.07 | 7.83 | 7.97 | 12.35 | 15.98
Ft70 | 70 | 0.05 | 9.85 | 10.15 | 15.32 | 70.68
Kro124p | 100 | 0.08 | 33.25 | 23.40 | 78.52 | 30.34
Ftv70 | 71 | 0.09 | 64.53 | 61.25 | 69.64 | 73.76
P43 | 43 | 0.10 | 8.35 | 9.38 | 0.997 | 0.997
Ftv170 | 171 | 0.65 | 108.28 | 96.73 | 27.61 | 27.61

solve the problem of delay in achieving optimal solutions, since speed is one of the hallmarks of a good algorithm [34]. In Table 6 we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Min-Max Ant System (MMAS), Cooperative Genetic Ant Systems (CGAS) [35], and RAI.

Table 6 displays the exceptional capacity of the novel African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions at record times. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, compared to MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 4529.27 seconds, and RAI's 323.4 seconds. Undoubtedly, the ABO outperformed the other algorithms in its quest to obtain results using very limited CPU time and resources. This is a result of ABO's use of very few parameters coupled with a straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization with the known solutions of some popular Neural Network algorithms. These are Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. Comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions in Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method in Berlin52. Aside from these, ABO outperformed the other methods in getting near-optimal solutions in Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroB150, KroA200, KroB200, Rat575, Rat783, rl1323, and fl1400. It was only in Eil101 that Masutti and Castro's method outperformed ABO. This is

further evidence that ABO is an excellent performer even when in competition with Neural Network methods.

8. Conclusion

In general, this paper introduces the novel algorithm, the African Buffalo Optimization, and shows its capacity to solve the Traveling Salesman's Problem. The performance obtained from using the ABO is weighed against the performance of some popular algorithms such as the Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), Min-Max Ant System (MMAS), and Randomized Insertion Algorithm (RAI); some hybrid methods such as Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Hybrid Algorithm (HA), and Cooperative Genetic Ant Systems (CGAS); and some popular Neural Networks-based optimization methods. In total, 33 TSP datasets ranging from 48 to 14461 cities were investigated, and the ABO results obtained were compared with results obtained from 11 other optimization algorithms and methods. The results demonstrate the novel algorithm's capacity to obtain optimal or near-optimal solutions at an incredibly fast rate. The ABO's outstanding performance is a testimony to the fact that ABO has immense potential in solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problem with encouraging outcomes, it is recommended that more investigations be carried out to ascertain the veracity or otherwise of this new algorithm in solving other problems such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors assert that there is no conflict of interests regarding the publication of this research paper.

Acknowledgments

The authors acknowledge the support of the anonymous reviewers for their constructive reviews that helped to


Table 7: ABO versus NN results.

TSP instance | Optimum | ABO (best/mean) | Angeniol's method (best/mean) | Somhom et al.'s method (best/mean) | Pasti and Castro's method (best/mean) | Masutti and Castro's method (best/mean)
eil51 | 426 | 426/427 | 432/442.90 | 433/440.57 | 429/438.70 | 427/437.47
eil76 | 538 | 538/563.04 | 554/563.20 | 552/562.27 | 542/556.10 | 541/556.33
eil101 | 629 | 640/640 | 655/665.93 | 640/655.57 | 641/654.83 | 638/648.63
berlin52 | 7542 | 7542/7659.31 | 7778/8363.70 | 7715/8025.07 | 7716/8073.97 | 7542/7932.50
bier127 | 118282 | 118297/118863 | 120110/128920.33 | 119840/121733.33 | 118760/121780.33 | 118970/120886.33
ch130 | 6110 | 6111/6307.14 | 6265/6416.80 | 6203/6307.23 | 6142/6291.77 | 6145/6282.40
ch150 | 6528 | 6532/6601 | 6634/6842.80 | 6631/6751 | 6629/6753.20 | 6602/6738.37
rd100 | 7910 | 7935/7956 | 8088/8444.50 | 8028/8239.40 | 7947/8253.93 | 7982/8199.77
lin105 | 14379 | 14419/14452.7 | 14999/16111.37 | 14379/14475.60 | 14379/14702.23 | 14379/14400.17
lin318 | 42029 | 42101/42336 | 44869/45832.83 | 43154/43922.90 | 42975/43704.97 | 42834/43696.87
kroA100 | 21282 | 21311/22163.8 | 23009/24678.80 | 21410/21616.77 | 21369/21868.47 | 21333/21522.73
kroA150 | 26524 | 26526/27205 | 28948/29960.90 | 26930/27401.33 | 26932/27346.43 | 26678/27355.97
kroA200 | 29368 | 29370/30152 | 31669/33228.33 | 30144/30415.67 | 29594/30257.53 | 29600/30190.27
kroB100 | 22141 | 22160/22509 | 24026/25966.40 | 22548/22622.50 | 22596/22853.60 | 22343/22661.47
kroB150 | 26130 | 26169/26431 | 27886/29404.53 | 26342/26806.33 | 26395/26752.13 | 26264/26631.87
kroB200 | 29437 | 29487/29534 | 32351/33838.13 | 29703/30286.47 | 29831/30415.60 | 29637/30135.00
kroC100 | 20749 | 20755/20881.7 | 22344/23496.13 | 20921/21149.87 | 20915/21231.60 | 20915/20971.23
kroD100 | 21294 | 21347/21462 | 23076/23909.03 | 21500/21845.73 | 21457/22027.87 | 21374/21697.37
kroE100 | 22068 | 22088/22702 | 23642/24828.03 | 22379/22682.47 | 22427/22815.50 | 22395/22715.63
rat575 | 6773 | 6774/6810 | 8107/8301.83 | 7090/7173.63 | 7039/7125.07 | 7047/7115.67
rat783 | 8806 | 8811/8881.75 | 10532/10721.60 | 9316/9387.57 | 9185/9326.30 | 9246/9343.77
rl1323 | 270199 | 270480/278977 | 293350/301424.33 | 295780/300899.00 | 295060/300286.00 | 300770/305314.33
fl1400 | 20127 | 20134/20167 | 20649/21174.67 | 20558/20742.60 | 20745/21070.57 | 20851/21110.00
d1655 | 62128 | 62346/62599.5 | 68875/71168.07 | 67459/68046.37 | 70323/71431.70 | 70918/72113.17

enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36–39, Springer, 2010.

[3] E. Cantú-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49–60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224–228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189–224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243–252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137–151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881–900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.

[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498–1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stützle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49–75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671–1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. García-Sánchez, J. J. Merelo, and P. A. Castillo, "Migration study on a pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57–64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73–81, 1997.

[24] M. G. C. Resende, R. Martí, M. Gallego, and A. Duarte, "GRASP and path relinking for the max–min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498–508, 2010.

[25] G. Reinelt, "TSPLIB95, 1995," 2012, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103–117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1–10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477–2482, July 2010.

[29] S. Georg, "MP-TESTDATA—the TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sörensen, "Metaheuristics—the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3–18, 2015.

[31] J. Brest and J. Žerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145–150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57–75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567–578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365–1375, 2013.

[36] N. Aras, B. J. Oommen, and I. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273–1284, 1999.




problems. This algorithm was inspired by the behavior of ant colonies, especially their foraging behavior in real life. Usually, when ants leave their nests, they move randomly around the areas surrounding the nests in search of food. If any of the ants comes across food, it first collects some pieces of the food and, on its way back to the nest, deposits a chemical substance called pheromone as a way of communicating to its peers that there has been a breakthrough. Other nearby ants, on perceiving the fragrance of the pheromone, understand and move towards the pheromone path. Once they discover the food source, they in turn drop fresh pheromone as a way of alerting other ants. In a matter of time, several ants pick up this information and are on the pheromone path.

Another interesting part of the ants' behavior is that, as they return to the nest, they optimize their route. In a short while, the ants create a shorter route to the food source than the previous routes. Moreover, in case an obstruction is put on the shorter route, making movement impossible, the ants are able to find another short route among the available options to evade the obstacle. The highlights of this algorithm include tapping into the indirect communication of a colony of (artificial) ants using pheromone trails as a means of communication, tracking their cooperative ability to solve a complex problem, and harnessing their capacity to optimize their routes from the food source to the nest and vice versa.

There have been several modifications of the ant colony algorithms, starting from the initial Ant System (AS) to the Ant Colony System (ACS), to the Min-Max Ant System (MMAS), and then to the Ant Colony Optimization (ACO) algorithms, and so forth [17]. In ACO, a colony of ants in each iteration constructs a solution probabilistically as ant $k$ at node $i$ selects the next node $j$ to move on to. The choice of node is influenced by the pheromone trail value $\tau_{ij}(t)$ and the available heuristic $\eta_{ij}$; in the TSP, $\eta_{ij} = 1/d_{ij}$. So an ant moves from location $i$ to location $j$ with the probability

$$p_{ij}^{k}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in N_{i}^{k}} [\tau_{il}(t)]^{\alpha}\,[\eta_{il}]^{\beta}} & \text{if } j \in N_{i}^{k}, \\[2mm] 0 & \text{otherwise.} \end{cases} \tag{2}$$

Here $\tau_{ij}(t)$ represents the pheromone trail, $\eta_{ij}$ the local heuristic information, $t$ the iteration, $N_{i}^{k}$ the set of nodes ant $k$ can still go to, and $\alpha$ and $\beta$ are parameters that bias the pheromone trails. By the end of an iteration, the pheromone trail on each edge $(i,j)$ is updated using the following equation:

$$\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \Delta\tau_{ij}^{\text{best}}(t). \tag{3}$$

In (3), $\tau_{ij}(t+1)$ represents the pheromone trail in iteration $t+1$, and $\rho$ takes values from 0.1 to 0.9. The amount of pheromone deposited by the best ant is represented by

$$\Delta\tau_{ij}^{\text{best}}(t) = \begin{cases} \dfrac{1}{f(s^{\text{best}}(t))} & \text{if the best ant used edge } (i,j) \text{ in iteration } t, \\[2mm] 0 & \text{otherwise.} \end{cases} \tag{4}$$

In (4), $f(s^{\text{best}}(t))$ represents the cost of the best solution $s^{\text{best}}(t)$.
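Equations (2)–(4) can be sketched in Python as follows. This is an illustrative implementation of the ACO transition and pheromone-update rules described above, not the authors' code; function names and the example parameter values are ours:

```python
import random

def aco_next_node(i, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Equation (2): pick ant's next node j with probability proportional to
    [tau_ij]^alpha * [eta_ij]^beta, where eta_ij = 1/d_ij (positive distances)."""
    weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
               for j in unvisited]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]  # numerical safety net

def evaporate_and_deposit(tau, best_tour, best_cost, rho=0.5):
    """Equations (3)/(4): evaporate all trails by (1 - rho), then deposit
    1/f(s_best) on every edge of the best tour (including the closing edge)."""
    n = len(tau)
    for a in range(n):
        for b in range(n):
            tau[a][b] *= (1.0 - rho)
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += 1.0 / best_cost
```

With a uniform trail matrix, one update leaves edges on the best tour at `(1 - rho) * tau + 1/best_cost` and all other edges merely evaporated.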

A critical examination of the Ant Colony Optimization technique of solving optimization problems reveals that there is little or no similarity between the ACO's search mechanism and that of the ABO. This could be due to their application of different search techniques in arriving at solutions: while the ACO employs a path-construction technique, the ABO favours a path-improvement search mechanism.

2.2. Particle Swarm Optimization. Particle Swarm Optimization, which was inspired by the social behavior of bird flocking or fish schooling, is one of the biology-inspired computation techniques developed by Eberhart and Kennedy [18]. This algorithm obtains solutions using a swarm of particles, where each particle represents a candidate solution. When compared to evolutionary computation paradigms, a swarm is similar to a population and a particle represents an individual. In searching for a solution, the particles are flown through a multidimensional search space, where the position of each particle is adjusted according to its own experience and that of its neighbors. The velocity vector drives the optimization process as well as highlighting each particle's experience and the information exchange within its neighborhood. Just like the ABO, PSO has two controlling equations in its search for solutions, namely, (5) and (6):

$$v_{ij}^{t+1} = \chi\left(v_{ij}^{t} + \varphi_{1}U_{1}\left(b_{ij}^{t} - x_{ij}^{t}\right) + \varphi_{2}U_{2}\left(b_{(n)ij}^{t} - x_{ij}^{t}\right)\right), \tag{5}$$

where $v_{ij}^{t+1}$ represents the present velocity, $v_{ij}^{t}$ is the previous velocity, $\chi$ is the constriction factor, $\varphi_{1}$ and $\varphi_{2}$ are the acceleration coefficients, $U_{1}$ and $U_{2}$ are random numbers, $b_{ij}^{t}$ is the individual particle's best position, $x_{ij}^{t}$ is the present position, and $b_{(n)ij}^{t}$ is the swarm's best position. The next equation in PSO, which calculates the position of the swarm, is

$$x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}. \tag{6}$$

In the PSO algorithm, the particles move in the domain of an objective function $f: \Theta \subseteq \mathbb{R}^{n} \to \mathbb{R}$, where $n$ represents the number of variables to be optimized. Each particle, at a particular iteration, is associated with three vectors:

(a) Present position, denoted by $x$. This vector records the present position of a particular particle.

(b) Present velocity, denoted by $v$. This vector stores the particle's direction of search.

(c) Individual's best position, denoted by $b$. This vector records the particular particle's best solution so far since the search began (since the beginning of the algorithm's execution). In addition to these, the individual particles relate with the best particle in the swarm, which the PSO algorithm tracks in each iteration to help direct the search to promising areas [19].
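The two PSO controlling equations (5) and (6) can be sketched in Python as below. The constriction and acceleration values shown are common defaults from the PSO literature, not values taken from this paper, and the function name is ours:

```python
import random

def pso_step(x, v, pbest, gbest, chi=0.729, phi1=2.05, phi2=2.05):
    """One PSO update: the constriction-factor velocity update of
    equation (5), followed by the position update of equation (6)."""
    new_x, new_v = [], []
    for xi, vi, bi, gi in zip(x, v, pbest, gbest):
        u1, u2 = random.random(), random.random()  # U1, U2 in (5)
        vi_next = chi * (vi + phi1 * u1 * (bi - xi) + phi2 * u2 * (gi - xi))
        new_v.append(vi_next)
        new_x.append(xi + vi_next)                 # equation (6)
    return new_x, new_v
```

When a particle already sits at both its personal best and the swarm best, the attraction terms vanish and the velocity simply shrinks by the constriction factor.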

2.3. Artificial Bee Colony. This algorithm, which is inspired by the behavior of the natural honey bee swarm, was proposed by Karaboga and Akay in 2009 [20]. It searches for solutions through the use of three classes of bees: scout bees, onlooker bees, and employed bees. Scout bees are those that fly over


the search space in search of solutions (food sources). The onlooker bees, on the other hand, are the ones that stay in the nest waiting for the report of the scout bees, while the employed bees refer to the class of bees which, after watching the waggle dance of the scout bees, opt to join in harvesting the food source (exploitation). A particular strength of this algorithm lies in its bee transformation capabilities. For instance, a scout bee could transform to an employed bee once it (the same scout bee) is involved in harvesting the food source, and vice versa.

Generally, the bees can change their statuses depending on the needs of the algorithm at a particular point in time. In this algorithm, the food source represents a solution to the optimization problem. The volume of nectar in a food source represents the quality (fitness) of the solution. Moreover, each employed bee is supposed to exploit only one food source, meaning that the number of employed bees is the same as the number of food sources. The scout bees are always exploring for new food sources $v_{m}$ with higher nectar quantity and/or quality (fitness) within the neighbourhood. The bees evaluate the nectar fitness using

$$v_{mi} = x_{mi} + \phi_{mi}\left(x_{mi} - x_{ki}\right), \tag{7}$$

where $i$ is a randomly chosen parameter index, $\phi_{mi}$ is a random number within a given range, and $x_{k}$ is a food source. The quality (fitness) of a solution, $\mathrm{fit}_{m}(\vec{x}_{m})$, is calculated using the following equation:

$$\mathrm{fit}_{m}\left(\vec{x}_{m}\right) = \begin{cases} \dfrac{1}{1 + f_{m}\left(\vec{x}_{m}\right)} & \text{if } f_{m}\left(\vec{x}_{m}\right) > 0, \\[2mm] 1 + \left|f_{m}\left(\vec{x}_{m}\right)\right| & \text{if } f_{m}\left(\vec{x}_{m}\right) < 0. \end{cases} \tag{8}$$
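Equations (7) and (8) can be sketched in Python as follows. This is an illustrative rendering of the ABC candidate-generation and fitness rules quoted above (names are ours, and the range of the random factor is a common choice, not specified here):

```python
import random

def abc_candidate(x_m, x_k, i=None):
    """Equation (7): perturb food source x_m towards/away from a
    neighbour x_k in one randomly chosen dimension i."""
    if i is None:
        i = random.randrange(len(x_m))
    phi = random.uniform(-1.0, 1.0)     # assumption: phi in [-1, 1]
    v = list(x_m)
    v[i] = x_m[i] + phi * (x_m[i] - x_k[i])
    return v

def abc_fitness(f_value):
    """Equation (8): map an objective value to a fitness in (0, +inf)."""
    if f_value >= 0:
        return 1.0 / (1.0 + f_value)
    return 1.0 + abs(f_value)
```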

From the foregoing discussion, it is clear that there is a slight similarity between (5) in PSO and (9) in ABO, since each algorithm subtracts a variable from the personal and individual bests of the particles/buffalos. For PSO the subtracted variable is the present position, and for ABO it is the immediate-past explored location (the waaa values, $w_{k}$). However, the two equations are different in several respects: while the PSO uses $\chi$ (being the constriction factor) or $\omega$ (as an inertia factor in some versions of the PSO), there are no such equivalents in ABO. Moreover, while the PSO employs random numbers ($U_{1}$ and $U_{2}$), ABO does not use random numbers, only learning parameters. Also, PSO uses acceleration coefficients ($\varphi_{1}$ and $\varphi_{2}$); ABO does not. In the case of the ABC, even though it employs the same search technique in arriving at solutions, the algorithm procedures are quite different.

2.4. Information Propagation. In searching for solutions to an optimization problem, the ACO employs the path-construction technique, while the PSO, ABC, and the ABO use the path-improvement technique. However, while the PSO uses the Von Neumann topology (see Figure 1) as its best technique for information propagation [21], the ACO obtains good results using the ring topology [22], and the ABO uses the star topology, which connects all the buffalos together. The Von Neumann topology enables the particles to connect to

neighboring particles on the east, west, north, and south. Effectively, a particular particle relates with the four other particles surrounding it. The ABO employs the star topology such that a particular buffalo is connected to every other buffalo in the herd. This enhances ABO's information dissemination at any particular point in time.
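The three topologies can be made concrete with small neighborhood functions. This is an illustrative sketch (the index conventions are ours); it shows why the star topology gives each buffalo immediate access to every other member of the herd:

```python
def ring_neighbors(k, n):
    """Ring: individual k communicates only with its two adjacent peers."""
    return [(k - 1) % n, (k + 1) % n]

def star_neighbors(k, n):
    """Star (as used by ABO): buffalo k is connected to all other buffalos."""
    return [j for j in range(n) if j != k]

def von_neumann_neighbors(k, rows, cols):
    """Von Neumann (as used by PSO): particles sit on a toroidal grid and
    communicate with the north, south, west, and east cells."""
    r, c = divmod(k, cols)
    return [((r - 1) % rows) * cols + c,   # north
            ((r + 1) % rows) * cols + c,   # south
            r * cols + (c - 1) % cols,     # west
            r * cols + (c + 1) % cols]     # east
```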

3. African Buffalo Optimization Algorithm

In using the ABO to proffer solutions in the search space, the buffalos are first initialized within the herd population and are made to search for the global optima by updating their locations as they follow the current best buffalo, $bg_{\max}$, in the herd. Each buffalo keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far. This value is called $bp_{\max,k}$, representing the best location of the particular buffalo in relation to the optimal solution. The ABO algorithm follows this pattern: at each step, it tracks the dynamic location of each buffalo towards $bp_{\max,k}$ and $bg_{\max}$, depending on where the emphasis is placed at a particular iteration. The speed of each animal is influenced by the learning parameters.

3.1. ABO Algorithm. The ABO algorithm is presented below.

(1) Initialization: randomly place the buffalos at nodes in the solution space.

(2) Update the buffalos' fitness values using

$$m_{k+1} = m_{k} + lp_{1}\left(bg_{\max} - w_{k}\right) + lp_{2}\left(bp_{\max,k} - w_{k}\right), \tag{9}$$

where $w_{k}$ and $m_{k}$ represent the exploration and exploitation moves, respectively, of the $k$th buffalo ($k = 1, 2, \ldots, N$); $lp_{1}$ and $lp_{2}$ are learning factors; $bg_{\max}$ is the herd's best fitness; and $bp_{\max,k}$ is the individual buffalo $k$'s best found location.

(3) Update the location of buffalo $k$ ($bp_{\max,k}$ and $bg_{\max}$) using

$$w_{k+1} = \frac{w_{k} + m_{k}}{\pm 0.5}. \tag{10}$$

(4) Is $bg_{\max}$ updating? Yes: go to (5). No: go to (2).

(5) If the stopping criteria are not met, go back to step (3); else go to (6).

(6) Output the best solution.

A closer look at the ABO algorithm reveals that (9), which shows the democratic attitude of the buffalos, has three parts. The first, $m_{k}$, represents the memory of the buffalo's past location. The buffalo has an innate memory ability that enables it to tell where it has been before. This is crucial in its search for solutions, as it helps it to avoid areas that produced bad results. The memory of each buffalo is a list of solutions that can be used as an alternative for the current local maximum location. The second, $lp_{1}(bg_{\max} - w_{k})$, is


Figure 1: Information propagation topologies (star, ring, and Von Neumann).

concerned with the caring or cooperative part of the buffalos and is a pointer to the buffalo's social and information-sharing experience; the third part, $lp_{2}(bp_{\max,k} - w_{k})$, indicates the intelligence part of the buffalos. So the ABO exploits the memory, caring, and intelligence capabilities of the buffalos in the democratic equation (9). Similarly, (10) is the waaa vocalization equation that propels the animals to move on and explore other environments when the present area has been fully exploited or is unfavourable for further exploration and exploitation.

3.2. Highlights of the ABO Algorithm. They are as follows:

(1) Stagnation handling through regular update of the best buffalo, $bg_{\max}$, in each iteration.

(2) Use of relatively few parameters to ensure speed and fast convergence.

(3) A very simple algorithm that requires less than 100 lines of coding effort in any programming language.

(4) Ensuring adequate exploration by tracking the location of the best buffalo ($bg_{\max}$) and each buffalo's personal best location ($bp_{\max,k}$).

(5) Ensuring adequate exploitation through tapping into the exploits of other buffalos, $lp_{1}(bg_{\max} - w_{k})$.

3.3. Initialization. The initialization phase is done by randomly placing the $k$th buffalo in the solution space. For initialization, some prior knowledge of the problem can help the algorithm to converge in fewer iterations.

3.4. Update Speed and Location. In each iteration, each buffalo updates its location according to its former maximum location ($bp_{\max}$) and some information gathered from the exploits of the neighboring buffalos. This is done using (9) and (10) (refer to the ABO algorithm, steps (2) and (3) above). This enables the algorithm to track the movement of the buffalos in relation to the optimal solution.

4. Using ABO to Solve the Travelling Salesman's Problem

The ABO has the advantage of using very simple steps to solve complex optimization problems such as the TSP. The basic solution steps are as follows:

(a) Choose, according to some criterion, a start city for each of the buffalos and randomly locate them in those cities. Consider

$$P_{ab} = \frac{w_{ab}^{lp_{1}}\, m_{ab}^{lp_{2}}}{\sum_{i=1}^{n} w_{ab}^{lp_{1}}\, m_{ab}^{lp_{2}}}, \quad ab = \pm 0.5. \tag{11}$$

(b) Update buffalo fitness using (9) and (10), respectively.

(c) Determine $bp_{\max,k}$ and $bg_{\max}$.

(d) Using (11) and heuristic values, probabilistically construct a buffalo tour by adding cities that the buffalos have not visited.

(e) Is $bg_{\max}$ updating? Yes: go to (f). No: go to (a).

(f) Are the exit criteria reached? Yes: go to (g). No: return to (b).

(g) Output the best result.

Here $lp_{1}$ and $lp_{2}$ are learning parameters and are 0.6 and 0.4, respectively; $ab$ takes the values of 0.5 and $-0.5$ in alternate iterations; $m$ is the positive reinforcement alert invitation, which tells the animals to stay and exploit the environment since there are enough pastures; and $w$ is the negative reinforcement alarm, which tells the animals to keep on exploring the environment since the present location is not productive. For buffalo $k$, the probability $p_{k}$ of moving from city $j$ to city $k$ depends on the combination of two values, namely, the desirability of the move, as computed by some heuristic indicating the previous attractiveness of that move, and the summative benefit of the move to the herd, indicating how productive it has been in the past to make that particular move. The denominator values represent an indication of the desirability of that move.
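Equation (11) amounts to roulette-wheel selection over the unvisited cities, with each candidate move weighted by $w^{lp_1} \cdot m^{lp_2}$. A minimal sketch under that reading (the dictionary-of-edges representation and the function name are ours):

```python
import random

def choose_next_city(current, unvisited, w, m, lp1=0.6, lp2=0.4):
    """Probabilistic tour construction per equation (11): move a->b is
    chosen with probability proportional to w[(a,b)]^lp1 * m[(a,b)]^lp2,
    where w/m hold per-edge negative/positive reinforcement values."""
    scores = {b: (w[(current, b)] ** lp1) * (m[(current, b)] ** lp2)
              for b in unvisited}
    total = sum(scores.values())
    r = random.uniform(0.0, total)      # spin the roulette wheel
    acc = 0.0
    for b, s in scores.items():
        acc += s
        if acc >= r:
            return b
    return b                            # numerical safety net
```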

6 Computational Intelligence and Neuroscience

Table 1: TSPLIB datasets.

1st experimental datasets: Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, Tsp225.
2nd experimental datasets: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, Brd14051.
ATSP datasets: Ry48p, Ft70, Kro124p, Ftv70, P43, Ftv170.
NN comparative datasets: Eil51, Eil76, Eil101, Berlin52, Bier127, Ch130, Ch150, Rd100, Lin105, Lin318, KroA100, KroA200, KroB100, KroB150, KroB200, KroC100, KroD100, KroE100, Rat575, Rl1323, Fl1400, D1655.

4.1. ABO Solution Mechanism for the TSP. The ABO applies the Modified Karp-Steele algorithm in its solution of the Travelling Salesman's Problem [23]. It follows a simple solution procedure: first, construct a cycle factor F of the cheapest weight in the graph K. Next, select a pair of edges taken from different cycles of the graph K and patch them in a way that results in a minimum weight. Patching simply means removing the selected edges in the two cycle factors and replacing them with cheaper edges, thereby forming a larger cycle factor and reducing the number of cycle factors in graph K by one. Thirdly, the second step is repeated until a single cycle factor remains in the entire graph K. This technique fits the path-improvement description [24]. The ABO overcomes the problem of delay in this process by using two primary parameters to ensure speed, namely, lp1 and lp2, coupled with keeping track of the route of bg_max as well as bp_max.k in each construction step.
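The patching step described above can be illustrated with a small sketch. This is a toy two-cycle merge only, not the full Modified Karp-Steele procedure: the function name, the exhaustive search over edge pairs, and the example distance matrix are all illustrative assumptions.

```python
def patch_two_cycles(c1, c2, dist):
    """Merge two cycles into one larger cycle by removing one edge from each
    and reconnecting across, keeping the cheapest reconnection found."""
    best = None
    n1, n2 = len(c1), len(c2)
    for i in range(n1):          # candidate edge (c1[i], c1[i+1]) in cycle 1
        for j in range(n2):      # candidate edge (c2[j], c2[j+1]) in cycle 2
            a, b = c1[i], c1[(i + 1) % n1]
            u, v = c2[j], c2[(j + 1) % n2]
            # cost change of replacing edges a-b and u-v with a-u and v-b
            delta = dist[a][u] + dist[v][b] - dist[a][b] - dist[u][v]
            if best is None or delta < best[0]:
                best = (delta, i, j)
    _, i, j = best
    # splice cycle 2 (traversed in reverse from c2[j] round to c2[j+1])
    merged = c1[:i + 1] + c2[j::-1] + c2[:j:-1] + c1[i + 1:]
    return merged

# Toy symmetric distances between six nodes, and two cycle factors.
dist = [[abs(a - b) for b in range(6)] for a in range(6)]
merged = patch_two_cycles([0, 1, 2], [3, 4, 5], dist)
print(sorted(merged))  # → [0, 1, 2, 3, 4, 5]
```

Repeating such merges until one cycle remains yields a single tour, which is the third step of the mechanism described above.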

5. Experiments and Discussion of Results

In this study, the ABO was implemented on three sets of symmetric TSP datasets and a set of asymmetric TSP (ATSP) instances ranging from 48 to 14461 cities, all from TSPLIB95 [25]. The first experiment compared the performance of the ABO on TSP instances with the results obtained in a recent study [26] involving Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, and Tsp225. The second set of experiments tested ABO's performance against another recently published study [27] on Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The third experiment examined ABO's performance on asymmetric TSP instances. The fourth set of experiments compared ABO results with those obtained using some popular Artificial Neural Network methods [28]. The TSP benchmark datasets are presented in Table 1.

5.1. Experimental Parameters Setting. For the sets of experiments involving PSO, the parameters are as follows: population size 200; iteration (T_max) 1000; inertia weight 0.85; C1 = 2; C2 = 2; rand1(0, 1); rand2(0, 1). For the HPSACO, the experimental parameters are as follows: population 200; iteration (T_max) 1000; inertia weight 0.85; C1 = 2; C2 = 2; ants (N) 100; pheromone factor (α) 1.0; heuristic factor (β) 2.0; evaporation coefficient (ρ) 0.05; pheromone amount 100. For the experiments involving the other algorithms in this study, Table 2 contains the details of the parameters used.

5.2. Experiment Analysis and Discussion of Results. The experiments were performed using MATLAB on an Intel Core i7-3770 CPU at 3.40 GHz with 4 GB RAM. The experiments on the asymmetric Travelling Salesman's Problems were executed on a desktop computer with a Pentium Dual-Core 1.80 GHz processor and 2 GB RAM. Similarly, the experiments on the Artificial Neural Networks were executed using Microsoft Visual C++ 2008 on an Intel Core i7 CPU. To authenticate the veracity of the ABO algorithm in solving the TSP, we initially carried out experiments on eight TSP cases. The city-coordinates data are available in [29]. The results are presented in Table 3.

In Table 3, the "Average Value" refers to the average fitness of each algorithm, and the "relative error" values are obtained by calculating

relative error = ((Average value − Best value) / Best value) × 100.  (12)
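In code form, (12) is a one-line helper; the numbers in the example are abstract, not taken from the tables.

```python
def relative_error(average_value, best_value):
    """Relative error (%) as defined in (12)."""
    return (average_value - best_value) / best_value * 100.0

print(round(relative_error(103.0, 100.0), 2))  # → 3.0
```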

As can be seen in Table 3, the ABO outperformed the other algorithms in all test instances under investigation. The ABO, for instance, obtained the optimal solution to Berlin52 and Eil76; no other algorithm did. Besides this, the ABO obtained the nearest-optimal solution to the remaining TSP instances compared with any other algorithm. In terms of the average results obtained by each algorithm, the ABO still has the best performance. It is rather surprising that the Hybrid Algorithm (HA), which uses a similar memory matrix to the ABO's, could not post a better result. This is traceable to its use of several parameters, since the HA is a combination of the ACO and the ABC, two algorithms that have some of the largest numbers of parameters to tune in order to obtain good solutions.

The dominance of the ABO can also be seen in the use of computer resources (CPU time), where it is clearly seen that the ABO is the fastest of the four algorithms. In Berlin52, for instance, the ABO is 58335 times faster than the ACO, 1085 times faster than the ABC, and 30320 times faster than the Hybrid Algorithm (HA). This trend continues in all the


Table 2: Experimental parameters (total number of runs: 50 for each algorithm; D* denotes the problem dimension).

ABO — population: 40; m.k: 1.0; bg_max/bp_max: 0.6; lp1/lp2: 0.5; w.k: 1.0.
ACO — ants: D*; β: 5.0; ρ: 0.65; α: 1.0; O: 200; qo: 0.9.
ABC — population: D*; φij: rand(−1, 1); ωij: rand(0, 1.5); SN: NP/2; limit: D*·SN; max cycle number: 500; colony: 50.
HA — population: D*; β: 5.0; ρ: 0.65; α: 1.0; φij: rand(−1, 1); ωij: rand(0, 1.5); SN: NP/2; limit: D*·SN; max cycle number: 500; colony: 50; O: 200; qo: 0.9.
CGAS — generation: 100; β: 2.0; ρ: 0.1; Ro: 0.33; crossover rate: 1.0; qo: 0.9; φr: 0.3; φρ: 0.2; τmin: τmax/20; τmax: 1/(1 − ρ).

Table 3: Comparative experimental results.

| Problem  | Cities | Optimum | Method | Best      | Mean      | Rel. err (%) | Time (s) |
| Berlin52 | 52     | 7542    | ABO    | 7542      | 7616      | 0            | 0.002    |
|          |        |         | ACO    | 7548.99   | 7659.31   | 1.52         | 116.67   |
|          |        |         | ABC    | 9479.11   | 10390.26  | 37.72        | 2.17     |
|          |        |         | HA     | 7544.37   | 7544.37   | 0.03         | 60.64    |
| St70     | 70     | 675     | ABO    | 676       | 678.33    | 0.15         | 0.08     |
|          |        |         | ACO    | 696.05    | 709.16    | 4.73         | 226.06   |
|          |        |         | ABC    | 1162.12   | 1230.49   | 81.73        | 3.15     |
|          |        |         | HA     | 687.24    | 700.58    | 3.47         | 115.65   |
| Eil76    | 76     | 538     | ABO    | 538       | 563.04    | 0            | 0.03     |
|          |        |         | ACO    | 554.46    | 561.98    | 3.04         | 271.98   |
|          |        |         | ABC    | 877.28    | 931.44    | 70.78        | 3.49     |
|          |        |         | HA     | 551.07    | 557.98    | 2.31         | 138.82   |
| Pr76     | 76     | 108159  | ABO    | 108167    | 108396    | 0.007        | 0.08     |
|          |        |         | ACO    | 115166.66 | 116321.22 | 7.55         | 272.41   |
|          |        |         | ABC    | 195198.9  | 205119.61 | 89.65        | 3.50     |
|          |        |         | HA     | 113798.56 | 115072.29 | 6.39         | 138.92   |
| KroA100  | 100    | 21282   | ABO    | 21311     | 22163.8   | 0.4          | 0.001    |
|          |        |         | ACO    | 22455.89  | 22880.12  | 7.49         | 615.06   |
|          |        |         | ABC    | 49519.51  | 53840.03  | 152.94       | 5.17     |
|          |        |         | HA     | 22122.75  | 22435.31  | 5.40         | 311.12   |
| Eil101   | 101    | 629     | ABO    | 640       | 640       | 1.7          | 0.027    |
|          |        |         | ACO    | 678.04    | 693.42    | 7.96         | 527.42   |
|          |        |         | ABC    | 1237.31   | 1315.95   | 104.88       | 5.17     |
|          |        |         | HA     | 672.71    | 683.39    | 6.39         | 267.08   |
| Ch150    | 150    | 6528    | ABO    | 6532      | 6601      | 0.06         | 0.032    |
|          |        |         | ACO    | 6648.51   | 6702.87   | 2.61         | 1387.65  |
|          |        |         | ABC    | 20908.89  | 21617.48  | 230.93       | 8.95     |
|          |        |         | HA     | 6641.69   | 6677.12   | 2.21         | 698.61   |
| Tsp225   | 225    | 3916    | ABO    | 3917      | 3982      | 0.03         | 0.09     |
|          |        |         | ACO    | 4112.35   | 4176.08   | 8.22         | 4038.75  |
|          |        |         | ABC    | 16998.41  | 17955.12  | 365.27       | 16.68    |
|          |        |         | HA     | 4090.54   | 4157.85   | 7.74         | 2037.33  |


Table 4: Comparative optimal results (Best / Avg / Err %).

| TSP instance | Optimum | ABO                      | PSO                      | ACO                      | HPSACO                   |
| Att48        | 33522   | 33524 / 33579 / 0.16     | 33734 / 33982 / 0.63     | 33649 / 33731 / 0.62     | 33524 / 33667 / 0.16     |
| St70         | 675     | 676 / 678.33 / 0.15      | 691.2 / 702.6 / 2.40     | 685.7 / 694.7 / 1.59     | 680.3 / 698.6 / 0.79     |
| Eil76        | 538     | 538 / 563.04 / 0.00      | 572.3 / 589.1 / 6.38     | 550.7 / 560.4 / 2.36     | 546.2 / 558.1 / 1.52     |
| Pr152        | 73682   | 73730 / 73990 / 0.07     | 75361 / 75405 / 2.28     | 74689 / 74936 / 1.37     | 74165 / 74654 / 0.66     |
| Gil262       | 2378    | 2378 / 2386 / 0.00       | 2513 / 2486 / 5.68       | 2463 / 2495 / 3.57       | 2413 / 2468 / 1.47       |
| Rd400        | 15281   | 15301 / 15304 / 5.00     | 16964 / 17024 / 11.01    | 16581 / 16834 / 8.51     | 16067 / 16513 / 5.14     |
| Pr1002       | 259045  | 259132 / 261608 / 0.03   | 278923 / 279755 / 7.67   | 269758 / 271043 / 4.14   | 267998 / 269789 / 3.46   |
| D1291        | 50801   | 50839 / 50839 / 0.07     | 53912 / 54104 / 6.12     | 52942 / 53249 / 4.21     | 52868 / 52951 / 4.07     |
| Fnl4461      | 182566  | 182745 / 183174 / 0.10   | 199314 / 199492 / 9.17   | 192964 / 194015 / 5.70   | 191352 / 192585 / 4.81   |
| Brd14051     | 469385  | 469835 / 479085 / 0.10   | 518631 / 519305 / 10.49  | 505734 / 511638 / 7.74   | 498471 / 503594 / 6.20   |

test cases under investigation. To solve all the TSP problems here, it took the ABO a cumulative time of 0.279 seconds, to the ACO's 7456 seconds, the ABC's 43.11 seconds, and the HA's 3362.27 seconds. The speed of the ABO is traceable to its effective memory-management technique, since the ABO uses the path-improvement technique as against the ACO, which uses the slower path-construction method. The difference in speed from the ABC, which uses a technique similar to the ABO's, is due to the ABC's use of several parameters. The speed of the HA could have been affected by its combination of the path-construction and path-improvement techniques, coupled with the use of several parameters.

5.3. ABO and Other Algorithms. Encouraged by the extremely good performance of the ABO in the first experiments, more experiments were performed and the results obtained were compared with those from PSO, ACO, and HPSACO from another recently published study [27]. The HPSACO is a combination of three techniques, namely, the Collaborative Strategy, Particle Swarm Optimization, and the Ant Colony Optimization algorithms. The datasets used in this experiment are from TSPLIB95: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The results are presented in Table 4.

Table 4 further underscores the excellent performance of the ABO. The ABO obtained the optimal solution to Eil76 and Gil262 and very near-optimal solutions in the rest of the test cases. It is obvious from the relative error calculations that the ABO obtained over 99% accuracy in all the TSP instances here except the difficult Rd400, where it obtained 95%. It is worthy of note that the 95% accuracy of the ABO is still superior to the performance of the other comparative algorithms on this TSP instance. The cumulative relative error of the ABO is 5.68%, to the PSO's 61.83%, the ACO's 39.81%, and the HPSACO's 28.28%. Clearly, from this analysis, the ABO has a superior search capability. This is traceable to its use of relatively fewer parameters than most other Swarm Intelligence techniques. The controlling parameters of the ABO are just the learning parameters (lp1 and lp2).

In designing the ABO, the authors deliberately tried to avoid the "Frankenstein phenomena" [30], that is, a case of developing intricate techniques with several parameters/operators such that, most times, the individual contributions of some of such parameters/operators to the workings of the algorithm are difficult to pinpoint. The ability to achieve this "lean metaheuristic" design (which is what we tried to do in designing the ABO) is a mark of good algorithm design [30].

6. ABO on Asymmetric TSP Test Cases

Moreover, we experimented on some ATSP test cases to verify the performance of the novel algorithm on such instances. The results obtained using the ABO are compared with the results obtained from solving some benchmark asymmetric TSP instances available in TSPLIB95 using the Randomized Arbitrary Insertion algorithm (RAI) [31], Ant Colony System with 3-opt (ACS), Min-Max Ant System (MMAS), and Iterated Local Search (ILS), which is reported to be one of the best methods for solving TSP problems. These ATSP instances are Ry48p, Ft70, Kro124p, Ftv70, P43, and Ftv170 [32].

Table 5 shows the comparative results obtained by applying five different algorithms to a number of benchmark asymmetric TSP instances, with the aim of evaluating the performance of the African Buffalo Optimization (ABO) algorithm. The first column lists the ATSP instances as recorded in the TSPLIB; the second column indicates the number of cities/locations/nodes involved in each instance; the third indicates the optimal solutions; these are followed by the performances of the different algorithms in at most 50 test runs.

A quick look at the table shows the excellent performance of the ILS and the MMAS, with both obtaining 100% results in the only four cases available in the literature that they solved. The ABO performed very well, achieving over 99% in all test cases but one. The ABO has approximately the same performance as the RAI and the ACS, obtaining about 99% optimal results in virtually all cases. It is significant to note, however, that the ABO performed better than the ACS and the RAI in Ft70, which is said to be a difficult ATSP instance to solve in spite of its relatively small size [33].

Next, the authors examined the speed of each algorithm in achieving results, since one of the aims of the ABO is to


Table 5: Comparative results (Best / Avg / Rel. error %; "—" means no published result).

| TSP case | Cities | Optimum | ABO                     | RAI                      | MMAS                  | ACS                      | ILS                   |
| Ry48p    | 48     | 14422   | 14440 / 14455 / 0.12    | 14422 / 14543.20 / 0     | 14422 / 14422 / 0     | 14422 / 14565.45 / 0     | 14422 / 14422 / 0     |
| Ft70     | 70     | 38673   | 38753 / 38870.5 / 0.21  | 38855 / 39187.7 / 0.47   | 38673 / 38687 / 0     | 38781 / 39099.0 / 0.28   | 38673 / 38687 / 0     |
| Kro124p  | 100    | 36230   | 36275 / 36713 / 0.12    | 36241 / 36594.23 / 0.04  | 36230 / 36542 / 0     | 36241 / 36857 / 0.04     | 36230 / 36542 / 0     |
| Ftv70    | 71     | 1950    | 1955 / 1958.5 / 0.26    | 1950 / 1968.44 / 0       | — / — / —             | — / — / —                | — / — / —             |
| P43      | 43     | 5620    | 5645 / 5698 / 0.44      | 5620 / 5620.65 / 0       | — / — / —             | — / — / —                | — / — / —             |
| Ftv170   | 171    | 2755    | 2795 / 2840 / 1.45      | 2764 / 2832.74 / 0.33    | 2755 / 2755 / 0       | 2774 / 2826 / 0.69       | 2755 / 2756 / 0       |


Table 6: Comparative speed of algorithms (average time in seconds).

| TSP case | Cities | ABO  | MIMM-ACO | MMAS  | CGAS  | RAI   |
| Ry48p    | 48     | 0.07 | 7.83     | 7.97  | 12.35 | 15.98 |
| Ft70     | 70     | 0.05 | 9.85     | 10.15 | 15.32 | 70.68 |
| Kro124p  | 100    | 0.08 | 33.25    | 23.4  | 78.52 | 30.34 |
| Ftv70    | 71     | 0.09 | 64.53    | 61.25 | 69.64 | 73.76 |
| P43      | 43     | 0.1  | 8.35     | 9.38  | 0.997 | 0.997 |
| Ftv170   | 171    | 0.65 | 108.28   | 96.73 | 276.1 | 276.1 |

solve the problem of delay in achieving optimal solutions, since speed is one of the hallmarks of a good algorithm [34]. In Table 6, we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Min-Max Ant System (MMAS), Cooperative Genetic Ant Systems (CGAS) [35], and RAI.

Table 6 displays the exceptional capacity of the novel African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions in record time. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, to MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 452.927 seconds, and RAI's 323.4 seconds. Undoubtedly, the ABO outperformed the other algorithms in its quest to obtain results using very limited CPU time and resources. This is brought about by the ABO's use of very few parameters coupled with a straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization with the known solutions of some popular Neural Network algorithms. These are Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. Comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions in Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method in Berlin52. Aside from these, the ABO outperformed the other methods in getting near-optimal solutions in Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroB150, KroA200, KroB200, Rat575, Rl1323, Fl1400, and Rat783. It was only in Eil101 that Masutti and Castro's method outperformed the ABO. This is further evidence that the ABO is an excellent performer even in competition with Neural Network methods.

8. Conclusion

In general, this paper introduces the novel African Buffalo Optimization algorithm and shows its capacity to solve the Traveling Salesman's Problem. The performance obtained from using the ABO is weighed against the performance of some popular algorithms, such as the Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), Min-Max Ant System (MMAS), and Randomized Insertion Algorithm (RAI); some hybrid methods, such as Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Hybrid Algorithm (HA), and Cooperative Genetic Ant Systems (CGAS); and some popular Neural Network-based optimization methods. In total, 33 TSP dataset cases ranging from 48 to 14461 cities were investigated, and the ABO results obtained were compared with results obtained from 11 other optimization algorithms and methods. The results show the novel algorithm's capacity to obtain optimal or near-optimal solutions at an incredibly fast rate. The ABO's outstanding performance is a testimony to the fact that the ABO has immense potential in solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problems with encouraging outcomes, it is recommended that more investigations be carried out to ascertain the veracity or otherwise of this new algorithm in solving other problems such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors assert that there is no conflict of interests regarding the publication of this research paper.

Acknowledgments

The authors acknowledge the support of the anonymous reviewers for their constructive reviews that helped to


Table 7: ABO versus NN results (Best / Mean).

| TSP instance | Optimum | ABO               | Angeniol's method    | Somhom et al.'s method | Pasti and Castro's method | Masutti and Castro's method |
| eil51        | 426     | 426 / 427         | 432 / 442.90         | 433 / 440.57           | 429 / 438.70              | 427 / 437.47                |
| eil76        | 538     | 538 / 563.04      | 554 / 563.20         | 552 / 562.27           | 542 / 556.10              | 541 / 556.33                |
| eil101       | 629     | 640 / 640         | 655 / 665.93         | 640 / 655.57           | 641 / 654.83              | 638 / 648.63                |
| berlin52     | 7542    | 7542 / 7659.31    | 7778 / 8363.70       | 7715 / 8025.07         | 7716 / 8073.97            | 7542 / 7932.50              |
| bier127      | 118282  | 118297 / 118863   | 120110 / 128920.33   | 119840 / 121733.33     | 118760 / 121780.33        | 118970 / 120886.33          |
| ch130        | 6110    | 6111 / 6307.14    | 6265 / 6416.80       | 6203 / 6307.23         | 6142 / 6291.77            | 6145 / 6282.40              |
| ch150        | 6528    | 6532 / 6601       | 6634 / 6842.80       | 6631 / 6751            | 6629 / 6753.20            | 6602 / 6738.37              |
| rd100        | 7910    | 7935 / 7956       | 8088 / 8444.50       | 8028 / 8239.40         | 7947 / 8253.93            | 7982 / 8199.77              |
| lin105       | 14379   | 14419 / 14452.7   | 14999 / 16111.37     | 14379 / 14475.60       | 14379 / 14702.23          | 14379 / 14400.17            |
| lin318       | 42029   | 42101 / 42336     | 44869 / 45832.83     | 43154 / 43922.90       | 42975 / 43704.97          | 42834 / 43696.87            |
| kroA100      | 21282   | 21311 / 22163.8   | 23009 / 24678.80     | 21410 / 21616.77       | 21369 / 21868.47          | 21333 / 21522.73            |
| kroA150      | 26524   | 26526 / 27205     | 28948 / 29960.90     | 26930 / 27401.33       | 26932 / 27346.43          | 26678 / 27355.97            |
| kroA200      | 29368   | 29370 / 30152     | 31669 / 33228.33     | 30144 / 30415.67       | 29594 / 30257.53          | 29600 / 30190.27            |
| kroB100      | 22141   | 22160 / 22509     | 24026 / 25966.40     | 22548 / 22622.50       | 22596 / 22853.60          | 22343 / 22661.47            |
| kroB150      | 26130   | 26169 / 26431     | 27886 / 29404.53     | 26342 / 26806.33       | 26395 / 26752.13          | 26264 / 26631.87            |
| kroB200      | 29437   | 29487 / 29534     | 32351 / 33838.13     | 29703 / 30286.47       | 29831 / 30415.60          | 29637 / 30135.00            |
| kroC100      | 20749   | 20755 / 20881.7   | 22344 / 23496.13     | 20921 / 21149.87       | 20915 / 21231.60          | 20915 / 20971.23            |
| kroD100      | 21294   | 21347 / 21462     | 23076 / 23909.03     | 21500 / 21845.73       | 21457 / 22027.87          | 21374 / 21697.37            |
| kroE100      | 22068   | 22088 / 22702     | 23642 / 24828.03     | 22379 / 22682.47       | 22427 / 22815.50          | 22395 / 22715.63            |
| rat575       | 6773    | 6774 / 6810       | 8107 / 8301.83       | 7090 / 7173.63         | 7039 / 7125.07            | 7047 / 7115.67              |
| rat783       | 8806    | 8811 / 8881.75    | 10532 / 10721.60     | 9316 / 9387.57         | 9185 / 9326.30            | 9246 / 9343.77              |
| rl1323       | 270199  | 270480 / 278977   | 293350 / 301424.33   | 295780 / 300899.00     | 295060 / 300286.00        | 300770 / 305314.33          |
| fl1400       | 20127   | 20134 / 20167     | 20649 / 21174.67     | 20558 / 20742.60       | 20745 / 21070.57          | 20851 / 21110.00            |
| d1655        | 62128   | 62346 / 62599.5   | 68875 / 71168.07     | 67459 / 68046.37       | 70323 / 71431.70          | 70918 / 72113.17            |

enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36–39, Springer, 2010.

[3] E. Cantu-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49–60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224–228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189–224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243–252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137–151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881–900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.

[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498–1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stutzle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49–75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671–1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. Garcia-Sanchez, J. J. Merelo, and P. A. Castillo, "Migration study on a pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57–64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73–81, 1997.

[24] M. G. C. Resende, R. Marti, M. Gallego, and A. Duarte, "GRASP and path relinking for the max–min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498–508, 2010.

[25] G. Reinelt, "TSPLIB95," 1995, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103–117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1–10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477–2482, July 2010.

[29] S. Georg, "MP-TESTDATA—the TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sorensen, "Metaheuristics—the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3–18, 2015.

[31] J. Brest and J. Zerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145–150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57–75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567–578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365–1375, 2013.

[36] N. Aras, B. J. Oommen, and I. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273–1284, 1999.




the search space in search of solutions (food sources). The onlooker bees, on the other hand, are the ones that stay in the nest waiting for the report of the scout bees, while the employed bees refer to the class of bees which, after watching the waggle dance of the scout bees, opt to join in harvesting the food source (exploitation). A particular strength of this algorithm lies in its bee transformation capabilities. For instance, a scout bee could transform to an employed bee once it (the same scout bee) is involved in harvesting the food source, and vice versa.

Generally, the bees can change their statuses depending on the needs of the algorithm at a particular point in time. In this algorithm, the food source represents a solution to the optimization problem. The volume of nectar in a food source represents the quality (fitness) of the solution. Moreover, each employed bee is supposed to exploit only one food source, meaning that the number of employed bees is the same as the number of food sources. The scout bees are always exploring for new food sources v_m with higher nectar quantity and/or quality (fitness) within the neighbourhood. The bees evaluate the nectar fitness using

v_mi = x_mi + φ_mi (x_mi − x_ki),  (7)

where i is a randomly chosen parameter index, φ_mi is a random number within a given range, and x_ki is a food source.

The quality (fitness) of a solution, fit_m(x_m), is calculated using the following equation:

fit_m(x_m) = 1 / (1 + f_m(x_m)),  if f_m(x_m) ≥ 0,
fit_m(x_m) = 1 + |f_m(x_m)|,  if f_m(x_m) < 0.  (8)
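The fitness mapping of (8) can be transcribed directly; the absolute value in the negative branch follows the standard ABC formulation, and the sample objective values are illustrative.

```python
def abc_fitness(fm):
    """ABC fitness mapping of (8): shrink non-negative objective values
    toward (0, 1], and lift negative ones above 1."""
    if fm >= 0:
        return 1.0 / (1.0 + fm)
    return 1.0 + abs(fm)

print(abc_fitness(3.0), abc_fitness(-0.5))  # → 0.25 1.5
```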

From the foregoing discussion, it is clear that there is a slight similarity between (5) in PSO and (7) in ABO, since each algorithm subtracts a variable from the personal and individual bests of the particles/buffalos. For PSO, the subtracted variable is the present position, and for ABO it is the immediate-past explored location (the "waaa" values, w.k). However, the two equations are different in several respects: while the PSO uses χ (being the constriction factor) or ω (as inertia factor in some versions of the PSO), there are no such equivalents in ABO. Moreover, while the PSO employs random numbers (U1 and U2), ABO does not use random numbers, only learning parameters. Also, PSO uses acceleration coefficients (φ1 and φ2); ABO does not. In the case of the ABC, even though it employs the same search technique in arriving at solutions, the algorithm procedures are quite different.

2.4. Information Propagation. In searching for solutions to an optimization problem, the ACO employs the path-construction technique, while the PSO, ABC, and ABO use the path-improvement technique. However, while the PSO uses the Von Neumann topology (see Figure 1) as its best technique for information propagation [21], the ACO obtains good results using the ring topology [22], and the ABO uses the star topology, which connects all the buffalos together. The Von Neumann topology enables the particles to connect to neighbouring particles on the east, west, north, and south. Effectively, a particular particle relates with the four other particles surrounding it. The ABO employs the star topology such that a particular buffalo is connected to every other buffalo in the herd. This enhances ABO's information dissemination at any particular point in time.
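The three propagation topologies just mentioned can be sketched as neighbour-index functions. This is a toy illustration; the wrapped-grid layout for the Von Neumann case and the population sizes are assumptions, not details from the source.

```python
def ring_neighbors(i, n):
    """Ring topology (as used by ACO here): two adjacent peers."""
    return [(i - 1) % n, (i + 1) % n]

def von_neumann_neighbors(i, rows, cols):
    """Von Neumann topology (as used by PSO): north, south, west, east
    on a grid with wrap-around."""
    r, c = divmod(i, cols)
    return [((r - 1) % rows) * cols + c, ((r + 1) % rows) * cols + c,
            r * cols + (c - 1) % cols, r * cols + (c + 1) % cols]

def star_neighbors(i, n):
    """Star topology (as in ABO): every buffalo sees every other buffalo."""
    return [j for j in range(n) if j != i]

print(ring_neighbors(0, 5))            # → [4, 1]
print(von_neumann_neighbors(0, 3, 3))  # → [6, 3, 2, 1]
print(len(star_neighbors(2, 40)))      # → 39
```

The star topology's full connectivity is what lets every buffalo read bg_max in the same iteration it is updated.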

3. African Buffalo Optimization Algorithm

In using the ABO to proffer solutions in the search space, the buffalos are first initialized within the herd population and are made to search for the global optima by updating their locations as they follow the current best buffalo bg_max in the herd. Each buffalo keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far. This value is called bp_max.k, representing the best location of the particular buffalo in relation to the optimal solution. The ABO algorithm follows this pattern: at each step, it tracks the dynamic location of each buffalo towards bp_max.k and bg_max, depending on where the emphasis is placed at a particular iteration. The speed of each animal is influenced by the learning parameters.

3.1. ABO Algorithm. The ABO algorithm is presented below:

(1) Initialization: randomly place buffalos to nodes at the solution space.

(2) Update the buffalos' fitness values using

$$m_{k+1} = m_k + lp_1\,(bg_{\max} - w_k) + lp_2\,(bp_{\max.k} - w_k) \tag{9}$$

where w_k and m_k represent the exploration and exploitation moves, respectively, of the k-th buffalo (k = 1, 2, ..., N); lp_1 and lp_2 are learning factors; bg_max is the herd's best fitness; and bp_max.k is the individual buffalo k's best found location.

(3) Update the location of buffalo k (bp_max.k and bg_max) using

$$w_{k+1} = \frac{w_k + m_k}{\pm 0.5} \tag{10}$$

(4) Is bg_max updating? Yes, go to (5). No, go to (2).

(5) If the stopping criteria are not met, go back to algorithm step (3); else go to (6).

(6) Output best solution.
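Steps (1)-(6) can be sketched for a continuous objective as follows. The herd size, the search bounds, the clamping of locations to those bounds, and the alternation of the ±0.5 divisor are illustrative assumptions, not part of the published algorithm:

```python
import random

def abo_minimize(objective, dim, herd=10, iters=100, lp1=0.6, lp2=0.4,
                 bounds=(-5.0, 5.0), seed=0):
    """Illustrative sketch of ABO steps (1)-(6) on a continuous objective."""
    rng = random.Random(seed)
    lo, hi = bounds
    clamp = lambda v: max(lo, min(hi, v))
    w = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(herd)]  # locations
    m = [[0.0] * dim for _ in range(herd)]                                # exploitation moves
    bp = [row[:] for row in w]                                            # bp_max.k locations
    bp_val = [objective(row) for row in w]
    g = min(range(herd), key=lambda k: bp_val[k])
    bg, bg_val = bp[g][:], bp_val[g]                                      # bg_max location/value
    for it in range(iters):
        divisor = 0.5 if it % 2 == 0 else -0.5      # the +/-0.5 of equation (10), alternated
        for k in range(herd):
            for d in range(dim):
                # equation (9): democratic update of the exploitation move
                m[k][d] += lp1 * (bg[d] - w[k][d]) + lp2 * (bp[k][d] - w[k][d])
                # equation (10): "waaa" movement to a new location (clamped to bounds)
                w[k][d] = clamp((w[k][d] + m[k][d]) / divisor)
            val = objective(w[k])
            if val < bp_val[k]:                     # update bp_max.k
                bp[k], bp_val[k] = w[k][:], val
                if val < bg_val:                    # step (4): is bg_max updating?
                    bg, bg_val = w[k][:], val
    return bg, bg_val
```

Calling `abo_minimize(lambda x: sum(t * t for t in x), dim=3)` returns the best location found and its objective value.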

A closer look at the algorithm reveals that (9), which shows the democratic attitude of the buffalos, has three parts: the first, m_k, represents the memory of the buffalo's past location. The buffalo has an innate memory ability that enables it to tell where it has been before. This is crucial in its search for solutions, as it helps it to avoid areas that produced bad results. The memory of each buffalo is a list of solutions that can be used as an alternative for the current local maximum location. The second, lp_1(bg_max − w_k), is

Computational Intelligence and Neuroscience 5

Figure 1: Information propagation topologies (star, ring, and Von Neumann).

concerned with the caring or cooperative part of the buffalos and is a pointer to the buffalo's social and information-sharing experience, and the third part, lp_2(bp_max.k − w_k), indicates the intelligence part of the buffalos. So the ABO exploits the memory, caring, and intelligence capabilities of the buffalos in the democratic equation (9). Similarly, (10) is the "waaa" vocalization equation that propels the animals to move on to explore other environments, as the present area has been fully exploited or is unfavourable for further exploration and exploitation.

3.2. Highlights of the ABO Algorithm. They are as follows:

(1) Stagnation handling through regular update of the best buffalo bg_max in each iteration.

(2) Use of relatively few parameters to ensure speedy convergence.

(3) A very simple algorithm that requires less than 100 lines of coding effort in any programming language.

(4) Ensuring adequate exploration by tracking the location of the best buffalo (bg_max) and each buffalo's personal best location (bp_max.k).

(5) Ensuring adequate exploitation through tapping into the exploits of other buffalos, lp_1(bg_max − w_k).

3.3. Initialization. The initialization phase is done by randomly placing the k-th buffalo in the solution space. For initialization, some previous knowledge of the problem can help the algorithm to converge in fewer iterations.

3.4. Update Speed and Location. In each iteration, each buffalo updates its location according to its former maximum location (bp_max) and some information gathered from the exploits of the neighboring buffalos. This is done using (9) and (10) (refer to the ABO algorithm, steps (2) and (3) above). This enables the algorithm to track the movement of the buffalos in relation to the optimal solution.

4. Using ABO to Solve the Travelling Salesman's Problem

The ABO has the advantage of using very simple steps to solve complex optimization problems such as the TSP. The basic solution steps are as follows:

(a) Choose, according to some criterion, a start city for each of the buffalos and randomly locate them in those cities. Consider

$$P_{ab} = \frac{w_{ab}^{lp_1}\, m_{ab}^{lp_2}}{\sum_{i=1}^{n} w_{ab}^{lp_1}\, m_{ab}^{lp_2}}, \qquad ab = \pm 0.5 \tag{11}$$

(b) Update buffalo fitness using (9) and (10), respectively.

(c) Determine bp_max.k and bg_max.

(d) Using (11) and heuristic values, probabilistically construct a buffalo tour by adding cities that the buffalos have not visited.

(e) Is the bg_max updating? Yes, go to (f). No, go to (a).

(f) Are the exit criteria reached? Yes, go to (g). No, return to (b).

(g) Output the best result.

Here lp_1 and lp_2 are learning parameters and are 0.6 and 0.4, respectively; ab takes the values of 0.5 and −0.5 in alternate iterations; m is the positive reinforcement alert invitation, which tells the animals to stay to exploit the environment since there are enough pastures; and w is the negative reinforcement alarm, which tells the animals to keep on exploring the environment since the present location is not productive. For buffalo k, the probability p_k of moving from city j to city k depends on the combination of two values, namely, the desirability of the move, as computed by some heuristic indicating the previous attractiveness of that move, and the summative benefit of the move to the herd, indicating how productive it has been in the past to make that particular move. The denominator values represent an indication of the desirability of that move.
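The probabilistic move rule in (11) amounts to a roulette-wheel selection over the unvisited cities. A minimal sketch follows; the edge-keyed dictionaries `w` and `m` are an assumed data layout, not the authors':

```python
import random

def choose_next_city(current, unvisited, w, m, lp1=0.6, lp2=0.4, rng=random):
    """Roulette-wheel selection following equation (11): the probability of
    moving along edge (current, c) is proportional to w^lp1 * m^lp2,
    restricted to cities not yet visited. `w` and `m` hold the negative and
    positive reinforcement values per edge (assumed to be positive numbers)."""
    weights = [(w[(current, c)] ** lp1) * (m[(current, c)] ** lp2)
               for c in unvisited]
    total = sum(weights)
    r = rng.random() * total          # spin the wheel
    acc = 0.0
    for city, weight in zip(unvisited, weights):
        acc += weight
        if acc >= r:
            return city
    return unvisited[-1]              # numerical safety fallback
```

With equal reinforcement values, each remaining city is equally likely; a larger `w`/`m` product biases the choice toward that edge.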


Table 1: TSPLIB datasets.

1st experimental datasets: Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, Tsp225.
2nd experimental datasets: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, Brd14051.
ATSP datasets: Ry48p, Ft70, Kro124p, Ftv70, P43, Ftv170.
NN comparative datasets: Eil51, Eil76, Eil101, Berlin52, Bier127, Ch130, Ch150, Rd100, Lin105, Lin318, KroA100, KroA150, KroA200, KroB100, KroB150, KroB200, KroC100, KroD100, KroE100, Rat575, Rat783, Rl1323, Fl1400, D1655.

4.1. ABO Solution Mechanism for the TSP. The ABO applies the Modified Karp-Steele algorithm in its solution of the Travelling Salesman's Problem [23]. It follows a simple solution procedure: first, construct a cycle factor F of the cheapest weight in the graph K. Next, select a pair of edges taken from different cycles of the graph K and patch them in a way that will result in a minimum weight. Patching is simply removing the selected edges in the two cycle factors and then replacing them with cheaper edges, in this way forming a larger cycle factor and thus reducing the number of cycle factors in graph K by one. Thirdly, the second step is repeated until we arrive at a single cycle factor in the entire graph K. This technique fits the path improvement technique description [24]. ABO overcomes the problem of delay in this process through the use of two primary parameters to ensure speed, namely, lp_1 and lp_2, coupled with the algorithm keeping track of the route of bg_max as well as bp_max.k in each construction step.
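One patching step described above can be sketched as follows: two cycle factors are merged by removing one edge from each and reconnecting them with the cheapest pair of cross edges. This brute-force version is for illustration only and is not the authors' implementation:

```python
def patch(cycle1, cycle2, dist):
    """Merge two cycle factors into one, Karp-Steele style: try every pair
    of edges (a, b) in cycle1 and (c, d) in cycle2, and splice the cycles
    along the pair whose replacement edges (a, d) and (c, b) are cheapest."""
    best = None
    n1, n2 = len(cycle1), len(cycle2)
    for i in range(n1):
        a, b = cycle1[i], cycle1[(i + 1) % n1]       # edge (a, b) of cycle 1
        for j in range(n2):
            c, d = cycle2[j], cycle2[(j + 1) % n2]   # edge (c, d) of cycle 2
            # cost change of swapping (a, b), (c, d) for (a, d), (c, b)
            delta = dist[a][d] + dist[c][b] - dist[a][b] - dist[c][d]
            if best is None or delta < best[0]:
                best = (delta, i, j)
    _, i, j = best
    # splice cycle2, rotated to start at d and end at c, between a and b
    return cycle1[:i + 1] + cycle2[j + 1:] + cycle2[:j + 1] + cycle1[i + 1:]
```

Repeating `patch` until a single cycle remains yields a complete tour, which is the path improvement loop the text refers to.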

5. Experiments and Discussion of Results

In this study, the ABO was implemented on three sets of symmetric TSP datasets and a set of asymmetric TSP (ATSP) instances, ranging from 48 to 14461 cities, from TSPLIB95 [25]. The first experiment was concerned with comparing the performance of the ABO on TSP instances with the results obtained from a recent study [26] involving Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, and Tsp225. The second set of experiments was concerned with testing the ABO's performance against another recently published study [27] on Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The third experiment examined the ABO's performance on asymmetric TSP instances. The fourth set of experiments involved comparing ABO results with those obtained using some popular Artificial Neural Networks methods [28]. The TSP benchmark datasets are presented in Table 1.

5.1. Experimental Parameters Setting. For the sets of experiments involving PSO, the parameters are as follows: population size 200; iteration (T_max) 1000; inertia weight 0.85; C_1 = 2; C_2 = 2; rand1(0, 1); rand2(0, 1). For the HPSACO, the experimental parameters are as follows: population 200; iteration (T_max) 1000; inertia weight 0.85; C_1 = 2; C_2 = 2; ants (N) 100; pheromone factor (α) 1.0; heuristic factor (β) 2.0; evaporation coefficient (ρ) 0.05; pheromone amount 100. For the experiments involving the other algorithms in this study, Table 2 contains the details of the parameters used.

5.2. Experiment Analysis and Discussion of Results. The experiments were performed using MATLAB on an Intel Core i7-3770 CPU, 3.40 GHz, with 4 GB RAM. The experiments on the asymmetric Travelling Salesman's Problems were executed on a Pentium Duo Core, 1.80 GHz, 2 GB RAM desktop computer. Similarly, the experiments on the Artificial Neural Networks were executed using Microsoft Visual C++ 2008 on an Intel Duo Core i7 CPU. To authenticate the veracity of the ABO algorithm in solving the TSP, we initially carried out experiments on eight TSP cases. The city-coordinates data are available in [29]. The results are presented in Table 3.

In Table 3, the "Average Value" refers to the average fitness of each algorithm, and the "relative error" values are obtained by calculating

$$\text{relative error} = \frac{\text{Average value} - \text{Best value}}{\text{Best value}} \times 100. \tag{12}$$
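Equation (12) is straightforward to compute; the function name below is illustrative:

```python
def relative_error(average_value, best_value):
    """Equation (12): percentage deviation of the average result
    from the best known value."""
    return (average_value - best_value) / best_value * 100.0
```

For instance, an average of 110 against a best value of 100 gives a relative error of 10%.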

As can be seen in Table 3, the ABO outperformed the other algorithms in all test instances under investigation. The ABO, for instance, obtained the optimal solution to Berlin52 and Eil76; no other algorithm did. Besides this, the ABO obtained the nearest-optimal solution to the remaining TSP instances compared with any other algorithm. In terms of the average results obtained by each algorithm, the ABO still has the best performance. It is rather surprising that the Hybrid Algorithm (HA), which uses a similar memory matrix to the ABO, could not post a better result. This is traceable to its use of several parameters, since the HA is a combination of the ACO and the ABC, two algorithms that have some of the largest numbers of parameters to tune in order to obtain good solutions.

The dominance of the ABO can also be seen in the use of computer resources (CPU time), where it is clearly seen that the ABO is the fastest of all four algorithms. In Berlin52, for instance, the ABO is 5833.5 times faster than the ACO, 1085 times faster than the ABC, and 3032 times faster than the Hybrid Algorithm (HA). This trend continues in all the


Table 2: Experimental parameters (total number of runs: 50 for each algorithm).

ABO: population 40; m.k 1.0; bg_max/bp_max 0.6; lp_1/lp_2 0.5; w.k 1.0.
ACO: ants D*; β 5.0; ρ 0.65; α 1.0; O 200; q_0 0.9.
ABC: population D*; φ_ij rand(−1, 1); ω_ij rand(0, 1.5); SN NP/2; limit D* × SN; max cycle number 500; colony 50.
HA: population D*; β 5.0; ρ 0.65; α 1.0; φ_ij rand(−1, 1); ω_ij rand(0, 1.5); SN NP/2; limit D* × SN; max cycle number 500; colony 50; O 200; q_0 0.9.
CGAS: generation 100; β 2.0; ρ 0.1; R_0 0.33; crossover rate 1.0; q_0 0.9; φ_r 0.3; φ_ρ 0.2; τ_min τ_max/20; τ_max 1/(1 − ρ).

Table 3: Comparative experimental results. Each entry gives Best / Mean / Rel. err (%) / Time (s).

Berlin52 (52 cities, optimum 7542):
  ABO 7542 / 7616 / 0 / 0.002
  ACO 7548.99 / 7659.31 / 1.52 / 11.667
  ABC 9479.11 / 10390.26 / 37.72 / 2.17
  HA 7544.37 / 7544.37 / 0.03 / 6.064

St70 (70 cities, optimum 675):
  ABO 676 / 678.33 / 0.15 / 0.08
  ACO 696.05 / 709.16 / 4.73 / 22.606
  ABC 1162.12 / 1230.49 / 81.73 / 3.15
  HA 687.24 / 700.58 / 3.47 / 11.565

Eil76 (76 cities, optimum 538):
  ABO 538 / 563.04 / 0 / 0.03
  ACO 554.46 / 561.98 / 3.04 / 27.198
  ABC 877.28 / 931.44 / 70.78 / 3.49
  HA 551.07 / 557.98 / 2.31 / 13.882

Pr76 (76 cities, optimum 108159):
  ABO 108167 / 108396 / 0.007 / 0.08
  ACO 115166.66 / 116321.22 / 7.55 / 27.241
  ABC 195198.9 / 205119.61 / 89.65 / 3.50
  HA 113798.56 / 115072.29 / 6.39 / 13.892

KroA100 (100 cities, optimum 21282):
  ABO 21311 / 22163.8 / 0.4 / 0.001
  ACO 22455.89 / 22880.12 / 7.49 / 61.506
  ABC 49519.51 / 53840.03 / 152.94 / 5.17
  HA 22122.75 / 22435.31 / 5.40 / 31.112

Eil101 (101 cities, optimum 629):
  ABO 640 / 640 / 1.7 / 0.027
  ACO 678.04 / 693.42 / 7.96 / 52.742
  ABC 1237.31 / 1315.95 / 104.88 / 5.17
  HA 672.71 / 683.39 / 6.39 / 26.708

Ch150 (150 cities, optimum 6528):
  ABO 6532 / 6601 / 0.06 / 0.032
  ACO 6648.51 / 6702.87 / 2.61 / 138.765
  ABC 20908.89 / 21617.48 / 230.93 / 8.95
  HA 6641.69 / 6677.12 / 2.21 / 69.861

Tsp225 (225 cities, optimum 3916):
  ABO 3917 / 3982 / 0.03 / 0.09
  ACO 4112.35 / 4176.08 / 8.22 / 403.875
  ABC 16998.41 / 17955.12 / 365.28 / 16.68
  HA 4090.54 / 4157.85 / 7.74 / 203.733


Table 4: Comparative optimal results. Each entry gives Best / Avg / Err (%).

TSP instance (optimum): ABO; PSO; ACO; HPSACO
att48 (33522): 33524 / 33579 / 0.16; 33734 / 33982 / 0.63; 33649 / 33731 / 0.62; 33524 / 33667 / 0.16
st70 (675): 676 / 678.33 / 0.15; 691.2 / 702.6 / 2.40; 685.7 / 694.7 / 1.59; 680.3 / 698.6 / 0.79
eil76 (538): 538 / 563.04 / 0.00; 572.3 / 589.1 / 6.38; 550.7 / 560.4 / 2.36; 546.2 / 558.1 / 1.52
pr152 (73682): 73730 / 73990 / 0.07; 75361 / 75405 / 2.28; 74689 / 74936 / 1.37; 74165 / 74654 / 0.66
gil262 (2378): 2378 / 2386 / 0.00; 2513 / 2486 / 5.68; 2463 / 2495 / 3.57; 2413 / 2468 / 1.47
rd400 (15281): 15301 / 15304 / 5.00; 16964 / 17024 / 11.01; 16581 / 16834 / 8.51; 16067 / 16513 / 5.14
pr1002 (259045): 259132 / 261608 / 0.03; 278923 / 279755 / 7.67; 269758 / 271043 / 4.14; 267998 / 269789 / 3.46
d1291 (50801): 50839 / 50839 / 0.07; 53912 / 54104 / 6.12; 52942 / 53249 / 4.21; 52868 / 52951 / 4.07
fnl4461 (182566): 182745 / 183174 / 0.10; 199314 / 199492 / 9.17; 192964 / 194015 / 5.70; 191352 / 192585 / 4.81
brd14051 (469385): 469835 / 479085 / 0.10; 518631 / 519305 / 10.49; 505734 / 511638 / 7.74; 498471 / 503594 / 6.20

test cases under investigation. To solve all the TSP problems here, it took the ABO a cumulative time of 0.279 seconds, to the ACO's 745.6 seconds, the ABC's 43.11 seconds, and the HA's 336.227 seconds. The speed of the ABO is traceable to its effective memory management technique, since the ABO uses the path improvement technique, as against the ACO, which uses the slower path construction method. The difference in speed from the ABC, which uses a similar technique to the ABO, is due to the ABC's use of several parameters. The speed of the HA could have been affected by its combination of the path construction and path improvement techniques, coupled with the use of several parameters.
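The quoted speed ratios follow directly from the per-instance CPU times in Table 3; for Berlin52:

```python
# CPU times in seconds for Berlin52, as reported in Table 3
abo_t, aco_t, abc_t, ha_t = 0.002, 11.667, 2.17, 6.064

print(round(aco_t / abo_t, 1))  # ACO vs ABO: 5833.5
print(round(abc_t / abo_t, 1))  # ABC vs ABO: 1085.0
print(round(ha_t / abo_t, 1))   # HA vs ABO: 3032.0
```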

5.3. ABO and Other Algorithms. Encouraged by the extremely good performance of the ABO in the first experiments, more experiments were performed, and the results obtained were compared with those from the PSO, ACO, and HPSACO from another recently published study [27]. The HPSACO is a combination of three techniques, namely, the Collaborative Strategy, Particle Swarm Optimization, and the Ant Colony Optimization algorithms. The datasets used in this experiment are from the TSPLIB95, and they are Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The results are presented in Table 4.

Table 4 further underscores the excellent performance of the ABO. The ABO obtained the optimal solution to Eil76 and Gil262 and very near-optimal solutions in the rest of the test cases. It is obvious from the relative error calculations that the ABO obtained over 99% accuracy in all the TSP instances here except the difficult Rd400, where it obtained 95%. It is worthy of note that the 95% accuracy of the ABO is still superior to the performance of the other comparative algorithms on this TSP instance. The cumulative relative error of the ABO is 5.68%, to the PSO's 61.83%, the ACO's 39.81%, and the HPSACO's 28.28%. Clearly, from this analysis, the ABO has a superior search capability. This is traceable to its use of relatively fewer parameters than most other Swarm Intelligence techniques. The controlling parameters of the ABO are just the learning parameters (lp_1 and lp_2).

In designing the ABO, the authors deliberately tried to avoid the "Frankenstein phenomena" [30], that is, a case of developing intricate techniques with several parameters/operators, such that, most times, the individual contributions of some of such parameters/operators to the workings of the algorithm are difficult to pinpoint. The ability to achieve this "lean metaheuristic" design (which is what we tried to do in designing the ABO) is a mark of good algorithm design [30].

6. ABO on Asymmetric TSP Test Cases

Moreover, we experimented on some ATSP test cases to verify the performance of the novel algorithm on such instances. The results obtained from this experiment using the ABO are compared with the results obtained from solving some benchmark asymmetric TSP instances available in TSPLIB95 using the Randomized Arbitrary Insertion algorithm (RAI) [31], Ant Colony System 3-opt (ACS), Min-Max Ant System (MMAS), and Iterated Local Search (ILS), which is reported to be one of the best methods for solving TSP problems. These ATSP instances are Ry48p, Ft70, Kro124p, Ftv70, P43, and Ftv170 [32].

Table 5 shows the comparative results obtained by applying five different algorithms to solving a number of benchmark asymmetric TSP instances, with the aim of evaluating the performance of the African Buffalo Optimization (ABO) algorithm. The first column lists the ATSP instances as recorded in the TSPLIB; the second column indicates the number of cities/locations/nodes involved in each instance; the third indicates the optimal solutions; then follow the performances of the different algorithms in at most 50 test runs.

A quick look at the table shows the excellent performance of the ILS and the MMAS, with both obtaining 100% results in the only four cases available in the literature that they solved. The ABO performed very well, achieving over 99% in all test cases but one. The ABO has approximately the same performance as the RAI and the ACS, obtaining about 99% optimal results in virtually all cases. It is significant to note, however, that the ABO performed better than the ACS and RAI on Ft70, which is said to be a difficult ATSP instance to solve in spite of its relatively small size [33].

Next, the authors examined the speed of each algorithm in achieving results, since one of the aims of the ABO is to


Table 5: Comparative results. Each entry gives Best / Avg / Rel. error (%); "—" marks values not reported.

Ry48p (48 cities, optimum 14422):
  ABO 14440 / 14455 / 0.12; RAI 14422 / 14543.20 / 0; MMAS 14422 / 14422 / 0; ACS 14422 / 14565.45 / 0; ILS 14422 / 14422 / 0

Ft70 (70 cities, optimum 38673):
  ABO 38753 / 38870.5 / 0.21; RAI 38855 / 39187.75 / 0.47; MMAS 38673 / 38687 / 0; ACS 38781 / 39099.05 / 0.28; ILS 38673 / 38687 / 0

Kro124p (100 cities, optimum 36230):
  ABO 36275 / 36713 / 0.12; RAI 36241 / 36594.23 / 0.04; MMAS 36230 / 36542 / 0; ACS 36241 / 36857 / 0.04; ILS 36230 / 36542 / 0

Ftv70 (71 cities, optimum 1950):
  ABO 1955 / 1958.5 / 0.26; RAI 1950 / 1968.44 / 0; MMAS —; ACS —; ILS —

P43 (43 cities, optimum 5620):
  ABO 5645 / 5698 / 0.44; RAI 5620 / 5620.65 / 0; MMAS —; ACS —; ILS —

Ftv170 (171 cities, optimum 2755):
  ABO 2795 / 2840.5 / 1.45; RAI 2764 / 2832.74 / 0.33; MMAS 2755 / 2755 / 0; ACS 2774 / 2826 / 0.69; ILS 2755 / 2756 / 0


Table 6: Comparative speed of algorithms (average time in seconds).

TSP cases (cities): ABO; MIMM-ACO; MMAS; CGAS; RAI
Ry48p (48): 0.07; 7.83; 7.97; 12.35; 15.98
Ft70 (70): 0.05; 9.85; 10.15; 15.32; 70.68
Kro124p (100): 0.08; 33.25; 23.4; 78.52; 30.34
Ftv70 (71): 0.09; 64.53; 61.25; 69.64; 73.76
P43 (43): 0.1; 8.35; 9.38; 0.997; 0.997
Ftv170 (171): 0.65; 108.28; 96.73; 27.61; 27.61

solve the problem of delay in achieving optimal solutions, since speed is one of the hallmarks of a good algorithm [34]. In Table 6, we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), the Min-Max Ant System (MMAS), the Cooperative Genetic Ant Systems (CGAS) [35], and the RAI.

Table 6 displays the exceptional capacity of the novel African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions in record time. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, to MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 452.927 seconds, and RAI's 323.4 seconds. Undoubtedly, the ABO outperformed the other algorithms in its quest to obtain results using very limited CPU time and resources. This is a result of the ABO's use of very few parameters, coupled with its straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization with the known solutions of some popular Neural Network algorithms. These are Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. Comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions to Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method on Berlin52. Aside from these, the ABO outperformed the other methods in getting near-optimal solutions on Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroB150, KroA200, KroB200, Rat575, Rl1323, Fl1400, and Rat783. It was only on Eil101 that Masutti and Castro's method outperformed the ABO. This is further evidence that the ABO is an excellent performer, even in competition with Neural Network methods.

8. Conclusion

In general, this paper introduces the novel African Buffalo Optimization algorithm and shows its capacity to solve the Travelling Salesman's Problem. The performance obtained from using the ABO is weighed against the performance of some popular algorithms, such as the Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), Min-Max Ant System (MMAS), and Randomized Insertion Algorithm (RAI); some hybrid methods, such as the Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), the Hybrid Algorithm (HA), and the Cooperative Genetic Ant Systems (CGAS); and some popular Neural Network-based optimization methods. In total, 33 TSP dataset cases ranging from 48 to 14461 cities were investigated, and the ABO results obtained were compared with results obtained from 11 other optimization algorithms and methods. The results show the novel algorithm's capacity to obtain optimal or near-optimal solutions at an incredibly fast rate. The ABO's outstanding performance is a testimony to the fact that the ABO has immense potential in solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problem with encouraging outcomes, it is recommended that more investigations be carried out to ascertain the veracity or otherwise of this new algorithm in solving other problems, such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors assert that there is no conflict of interests in the publication of this research paper.

Acknowledgments

The authors acknowledge the support of the anonymous reviewers for their constructive reviews that helped to


Table 7: ABO versus NN results. Each entry gives Best / Mean.

TSP instance (optimum): ABO; Angeniol's method; Somhom et al.'s method; Pasti and Castro's method; Masutti and Castro's method
eil51 (426): 426 / 427; 432 / 442.90; 433 / 440.57; 429 / 438.70; 427 / 437.47
eil76 (538): 538 / 563.04; 554 / 563.20; 552 / 562.27; 542 / 556.10; 541 / 556.33
eil101 (629): 640 / 640; 655 / 665.93; 640 / 655.57; 641 / 654.83; 638 / 648.63
berlin52 (7542): 7542 / 7659.31; 7778 / 8363.70; 7715 / 8025.07; 7716 / 8073.97; 7542 / 7932.50
bier127 (118282): 118297 / 118863; 120110 / 128920.33; 119840 / 121733.33; 118760 / 121780.33; 118970 / 120886.33
ch130 (6110): 6111 / 6307.14; 6265 / 6416.80; 6203 / 6307.23; 6142 / 6291.77; 6145 / 6282.40
ch150 (6528): 6532 / 6601; 6634 / 6842.80; 6631 / 6751; 6629 / 6753.20; 6602 / 6738.37
rd100 (7910): 7935 / 7956; 8088 / 8444.50; 8028 / 8239.40; 7947 / 8253.93; 7982 / 8199.77
lin105 (14379): 14419 / 14452.7; 14999 / 16111.37; 14379 / 14475.60; 14379 / 14702.23; 14379 / 14400.17
lin318 (42029): 42101 / 42336; 44869 / 45832.83; 43154 / 43922.90; 42975 / 43704.97; 42834 / 43696.87
kroA100 (21282): 21311 / 22163.8; 23009 / 24678.80; 21410 / 21616.77; 21369 / 21868.47; 21333 / 21522.73
kroA150 (26524): 26526 / 27205; 28948 / 29960.90; 26930 / 27401.33; 26932 / 27346.43; 26678 / 27355.97
kroA200 (29368): 29370 / 30152; 31669 / 33228.33; 30144 / 30415.67; 29594 / 30257.53; 29600 / 30190.27
kroB100 (22141): 22160 / 22509; 24026 / 25966.40; 22548 / 22622.50; 22596 / 22853.60; 22343 / 22661.47
kroB150 (26130): 26169 / 26431; 27886 / 29404.53; 26342 / 26806.33; 26395 / 26752.13; 26264 / 26631.87
kroB200 (29437): 29487 / 29534; 32351 / 33838.13; 29703 / 30286.47; 29831 / 30415.60; 29637 / 30135.00
kroC100 (20749): 20755 / 20881.7; 22344 / 23496.13; 20921 / 21149.87; 20915 / 21231.60; 20915 / 20971.23
kroD100 (21294): 21347 / 21462; 23076 / 23909.03; 21500 / 21845.73; 21457 / 22027.87; 21374 / 21697.37
kroE100 (22068): 22088 / 22702; 23642 / 24828.03; 22379 / 22682.47; 22427 / 22815.50; 22395 / 22715.63
rat575 (6773): 6774 / 6810; 8107 / 8301.83; 7090 / 7173.63; 7039 / 7125.07; 7047 / 7115.67
rat783 (8806): 8811 / 8881.75; 10532 / 10721.60; 9316 / 9387.57; 9185 / 9326.30; 9246 / 9343.77
rl1323 (270199): 270480 / 278977; 293350 / 301424.33; 295780 / 300899.00; 295060 / 300286.00; 300770 / 305314.33
fl1400 (20127): 20134 / 20167; 20649 / 21174.67; 20558 / 20742.60; 20745 / 21070.57; 20851 / 21110.00
d1655 (62128): 62346 / 62599.5; 68875 / 71168.07; 67459 / 68046.37; 70323 / 71431.70; 70918 / 72113.17

enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36-39, Springer, 2010.

[3] E. Cantú-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459-471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49-60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224-228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189-224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243-252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137-151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881-900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1-5, 2012.

[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498-1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355-366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137-172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stützle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49-75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1-4, pp. 61-85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671-1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. García-Sánchez, J. J. Merelo, and P. A. Castillo, "Migration study on a pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57-64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73-81, 1997.

[24] M. G. C. Resende, R. Martí, M. Gallego, and A. Duarte, "GRASP and path relinking for the max-min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498-508, 2010.

[25] G. Reinelt, "TSPLIB95, 1995," 2012, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103-117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1-10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477-2482, July 2010.

[29] S. Georg, "MP-TESTDATA: the TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sörensen, "Metaheuristics: the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3-18, 2015.

[31] J. Brest and J. Žerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145-150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53-66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57-75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567-578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365-1375, 2013.

[36] N. Aras, B. J. Oommen, and İ. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273-1284, 1999.



Computational Intelligence and Neuroscience 5

Figure 1: Information propagation topologies (star topology, ring topology, and Von Neumann topology).

concerned with the caring or cooperative part of the buffalos and is a pointer to the buffalo's social and information-sharing experience; the third part, lp2(bp_max.k - w_k), indicates the intelligence part of the buffalos. So the ABO exploits the memory, caring, and intelligent capabilities of the buffalos in the democratic equation (9). Similarly, (10) is the waaa vocalization equation that propels the animals to move on to explore other environments, as the present area has been fully exploited or is unfavourable for further exploration and exploitation.
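To make the two rules concrete, here is a minimal numeric sketch in Python. It is illustrative rather than the authors' code: it assumes the democratic equation (9) has the form m_{k+1} = m_k + lp1(bg_max - w_k) + lp2(bp_max.k - w_k) and the waaa equation (10) the form w_{k+1} = (w_k + m_{k+1})/λ; the time unit λ and the function name abo_update are assumptions made for the sketch.

```python
def abo_update(w_k, m_k, bp_max_k, bg_max, lp1=0.6, lp2=0.4, lam=1.0):
    """One buffalo's move: democratic equation (9), then waaa equation (10).

    w_k: exploration (waaa) move; m_k: exploitation (alert) move;
    bp_max_k: buffalo k's personal best; bg_max: the herd's best.
    lp1 weights the herd's experience, lp2 the buffalo's own memory.
    """
    # Equation (9): pull the exploitation move toward the herd best and
    # the personal best, each measured relative to the current waaa move.
    m_next = m_k + lp1 * (bg_max - w_k) + lp2 * (bp_max_k - w_k)
    # Equation (10): derive the next exploration move; lam is an assumed
    # unit of time.
    w_next = (w_k + m_next) / lam
    return w_next, m_next
```

For example, a buffalo at w = 0.0, m = 0.0 whose personal and herd bests both sit at 1.0 is pulled directly toward 1.0 in one step.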

3.2. Highlights of the ABO Algorithm. They are as follows:

(1) Stagnation handling through regular update of the best buffalo bg_max in each iteration.

(2) Use of relatively few parameters to ensure fast convergence.

(3) A very simple algorithm that requires fewer than 100 lines of code in any programming language.

(4) Ensuring adequate exploration by tracking the location of the best buffalo (bg_max) and each buffalo's personal-best location (bp_max.k).

(5) Ensuring adequate exploitation through tapping into the exploits of other buffalos, lp1(bg_max - w_k).

3.3. Initialization. The initialization phase is done by randomly placing the kth buffalo in the solution space. For initialization, some previous knowledge of the problem can help the algorithm to converge in fewer iterations.

3.4. Update Speed and Location. In each iteration, each buffalo updates its location according to its former maximum location (bp_max) and some information gathered from the exploits of the neighboring buffalos. This is done using (9) and (10) (refer to the ABO algorithm, steps (2) and (3) above). This enables the algorithm to track the movement of the buffalos in relation to the optimal solution.

4. Using ABO to Solve the Travelling Salesman's Problem

The ABO has the advantage of using very simple steps to solve complex optimization problems such as the TSP. The basic solution steps are as follows:

(a) Choose, according to some criterion, a start city for each of the buffalos and randomly locate them in those cities. Consider

P_ab = (w_ab^{lp1} · m_ab^{lp2}) / (Σ_{i=1}^{n} w_ab^{lp1} · m_ab^{lp2}),   ab = ±0.5.   (11)

(b) Update buffalo fitness using (9) and (10), respectively.

(c) Determine bp_max.k and bg_max.

(d) Using (11) and heuristic values, probabilistically construct a buffalo tour by adding cities that the buffalos have not visited.

(e) Is bg_max updating? Yes: go to (f). No: go to (a).

(f) Is the exit criterion reached? Yes: go to (g). No: return to (b).

(g) Output the best result.

Here lp1 and lp2 are learning parameters, set to 0.6 and 0.4, respectively; ab takes the values 0.5 and -0.5 in alternate iterations; m is the positive reinforcement alert invitation, which tells the animals to stay to exploit the environment since there are enough pastures; and w is the negative reinforcement alarm, which tells the animals to keep on exploring the environment since the present location is not productive. For buffalo k, the probability p_k of moving from city j to city k depends on the combination of two values, namely, the desirability of the move, as computed by some heuristic indicating the previous attractiveness of that move, and the summative benefit of the move to the herd, indicating how productive it has been in the past to make that particular move. The denominator values represent an indication of the desirability of that move.
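The probabilistic construction step amounts to a roulette-wheel choice over the unvisited cities, weighted as in equation (11). The sketch below is illustrative, not the paper's implementation: the matrices w (heuristic desirability) and m (accumulated herd benefit) and the helper name next_city are assumptions.

```python
import random

def next_city(current, unvisited, w, m, lp1=0.6, lp2=0.4, rng=random):
    """Pick the next city with probability proportional to
    w[current][c]**lp1 * m[current][c]**lp2, as in equation (11)."""
    weights = [(w[current][c] ** lp1) * (m[current][c] ** lp2) for c in unvisited]
    total = sum(weights)
    pick = rng.uniform(0.0, total)
    running = 0.0
    for city, weight in zip(unvisited, weights):
        running += weight
        if running >= pick:
            return city
    return unvisited[-1]  # guard against floating-point round-off
```

A city whose combined weight dominates the others is chosen almost every time, which is how the herd's accumulated experience biases tour construction.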


Table 1: TSPLIB datasets.

1st experimental datasets: Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, Tsp225.
2nd experimental datasets: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, Brd14051.
ATSP datasets: Ry48p, Ft70, Kro124p, Ftv70, P43, Ftv170.
NN comparative datasets: Eil51, Eil76, Eil101, Berlin52, Bier127, Ch130, Ch150, Rd100, Lin105, Lin318, KroA100, KroA200, KroB100, KroB150, KroB200, KroC100, KroD100, KroE100, Rat575, RL1323, FL1400, D1655.

4.1. ABO Solution Mechanism for the TSP. The ABO applies the Modified Karp-Steele algorithm in its solution of the Travelling Salesman's Problem [23]. It follows a simple solution step of first constructing a cycle factor F of the cheapest weight in the graph K. Next, it selects a pair of edges taken from different cycles of the graph K and patches them in a way that will result in a minimum weight. Patching is simply removing the selected edges in the two cycle factors and then replacing them with cheaper edges, in this way forming a larger cycle factor and thus reducing the number of cycle factors in graph K by one. Thirdly, the second step is repeated until we arrive at a single cycle factor in the entire graph K. This technique fits the path improvement technique description [24]. ABO overcomes the problem of delay in this process through the use of two primary parameters to ensure speed, namely, lp1 and lp2, coupled with the algorithm keeping track of the route of bg_max as well as bp_max.k in each construction step.
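The patching step can be sketched as follows. This is an illustrative reconstruction of Karp-Steele-style patching from the description above, not the authors' code; the names dist and patch_cycles are assumptions.

```python
from itertools import product

def patch_cycles(c1, c2, dist):
    """Merge two cycle factors into one larger cycle.

    c1, c2: city lists, each read cyclically; dist[a][b] is the edge weight
    from a to b. One edge (a, b) is removed from c1 and one edge (x, y)
    from c2, and the cycles are reconnected as a->y and x->b, choosing the
    pair of removed edges that minimizes the added weight.
    """
    best = None
    for i, j in product(range(len(c1)), range(len(c2))):
        a, b = c1[i], c1[(i + 1) % len(c1)]
        x, y = c2[j], c2[(j + 1) % len(c2)]
        # cost change: add the two new edges, drop the two removed ones
        delta = dist[a][y] + dist[x][b] - dist[a][b] - dist[x][y]
        if best is None or delta < best[0]:
            best = (delta, i, j)
    _, i, j = best
    # splice: ...a, then y..x taken from c2, then b... from c1
    return c1[:i + 1] + c2[j + 1:] + c2[:j + 1] + c1[i + 1:]
```

Repeatedly patching the two cheapest-to-join cycles reduces the number of cycle factors by one per step until a single tour remains, which is the repetition described in the paragraph above.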

5. Experiments and Discussion of Results

In this study, the ABO was implemented on three sets of symmetric TSP datasets and a set of asymmetric TSP (ATSP) instances ranging from 48 to 14461 cities from TSPLIB95 [25]. The first experiment was concerned with comparing the performance of ABO on TSP instances with the results obtained from a recent study [26] involving Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, and Tsp225. The second set of experiments was concerned with testing ABO's performance against another recently published study [27] on Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The third experiment examined ABO's performance on asymmetric TSP instances. The fourth set of experiments involved comparing ABO results with those obtained using some popular Artificial Neural Networks methods [28]. The TSP benchmark datasets are presented in Table 1.

5.1. Experimental Parameters Setting. For the sets of experiments involving PSO, the parameters are as follows: population size 200; iterations (T_max) 1000; inertia weight 0.85; C1 = 2; C2 = 2; rand1(0, 1); rand2(0, 1). For the HPSACO, the experimental parameters are as follows: population 200; iterations (T_max) 1000; inertia weight 0.85; C1 = 2; C2 = 2; ants (N) 100; pheromone factor (α) 1.0; heuristic factor (β) 2.0; evaporation coefficient (ρ) 0.05; pheromone amount 100. For the experiments involving the other algorithms in this study, Table 2 contains the details of the parameters used.

5.2. Experiment Analysis and Discussion of Results. The experiments were performed using MATLAB on an Intel Duo Core i7-3770 CPU, 3.40 GHz, with 4 GB RAM. The experiments on the asymmetric Travelling Salesman's Problems were executed on a Pentium Duo Core 1.80 GHz processor and 2 GB RAM desktop computer. Similarly, the experiments on the Artificial Neural Networks were executed using Microsoft Visual C++ 2008 on an Intel Duo Core i7 CPU. To authenticate the veracity of the ABO algorithm in solving the TSP, we initially carried out experiments on eight TSP cases. The city-coordinates data are available in [29]. The results are presented in Table 3.

In Table 3, the "Average Value" refers to the average fitness of each algorithm, and the "relative error" values are obtained by calculating

Relative error = ((Average value - Best value) / Best value) × 100.   (12)
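Equation (12) translates directly into code; the function name is illustrative:

```python
def relative_error(average_value, best_value):
    """Equation (12): percentage deviation of the average fitness
    from the best value found."""
    return (average_value - best_value) / best_value * 100
```

For example, an average of 110 against a best of 100 gives a relative error of 10 percent.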

As can be seen in Table 3, the ABO outperformed the other algorithms in all test instances under investigation. The ABO, for instance, obtained the optimal solution to Berlin52 and Eil76; no other algorithm did. Besides this, the ABO obtained the nearest-optimal solution to the remaining TSP instances compared to any other algorithm. In terms of the average results obtained by each algorithm, the ABO still has the best performance. It is rather surprising that the Hybrid Algorithm (HA), which uses a memory matrix similar to the ABO's, could not post a better result. This is traceable to its use of several parameters, since the HA is a combination of the ACO and the ABC, two algorithms that have some of the largest numbers of parameters to tune in order to obtain good solutions.

Table 2: Experimental parameters (50 runs for each algorithm).

ABO: population 40; m_k 1.0; bg_max/bp_max 0.6; lp1/lp2 0.5; w_k 1.0.
ACO: ants D*; β 5.0; ρ 0.65; α 1.0; O 200; q0 0.9.
ABC: population D*; φ_ij rand(-1, 1); ω_ij rand(0, 1.5); SN NP/2; limit D*SN; max cycle number 500; colony 50.
HA: population D*; β 5.0; ρ 0.65; α 1.0; φ_ij rand(-1, 1); ω_ij rand(0, 1.5); SN NP/2; limit D*SN; max cycle number 500; colony 50; O 200; q0 0.9.
CGAS: generation 100; β 2.0; ρ 0.1; R_o 0.33; crossover rate 1.0; q0 0.9; φ_r 0.3; φ_ρ 0.2; τ_min = τ_max/20; τ_max = 1 - (1 - ρ).

Table 3: Comparative experimental results.

Problem | Cities | Optimum | Method | Best | Mean | Rel. err (%) | Time (s)
Berlin52 | 52 | 7542 | ABO | 7542 | 7616 | 0 | 0.002
 | | | ACO | 7548.99 | 7659.31 | 1.52 | 116.67
 | | | ABC | 9479.11 | 10390.26 | 37.72 | 2.17
 | | | HA | 7544.37 | 7544.37 | 0.03 | 60.64
St70 | 70 | 675 | ABO | 676 | 678.33 | 0.15 | 0.08
 | | | ACO | 696.05 | 709.16 | 4.73 | 226.06
 | | | ABC | 1162.12 | 1230.49 | 81.73 | 3.15
 | | | HA | 687.24 | 700.58 | 3.47 | 115.65
Eil76 | 76 | 538 | ABO | 538 | 563.04 | 0 | 0.03
 | | | ACO | 554.46 | 561.98 | 3.04 | 271.98
 | | | ABC | 877.28 | 931.44 | 70.78 | 3.49
 | | | HA | 551.07 | 557.98 | 2.31 | 138.82
Pr76 | 76 | 108159 | ABO | 108167 | 108396 | 0.007 | 0.08
 | | | ACO | 115166.66 | 116321.22 | 7.55 | 272.41
 | | | ABC | 195198.9 | 205119.61 | 89.65 | 3.50
 | | | HA | 113798.56 | 115072.29 | 6.39 | 138.92
KroA100 | 100 | 21282 | ABO | 21311 | 22163.8 | 0.4 | 0.00
 | | | ACO | 22455.89 | 22880.12 | 7.49 | 615.06
 | | | ABC | 49519.51 | 53840.03 | 152.94 | 5.17
 | | | HA | 22122.75 | 22435.31 | 5.40 | 311.12
Eil101 | 101 | 629 | ABO | 640 | 640 | 1.7 | 0.027
 | | | ACO | 678.04 | 693.42 | 7.96 | 527.42
 | | | ABC | 1237.31 | 1315.95 | 104.88 | 5.17
 | | | HA | 672.71 | 683.39 | 6.39 | 267.08
Ch150 | 150 | 6528 | ABO | 6532 | 6601 | 0.06 | 0.032
 | | | ACO | 6648.51 | 6702.87 | 2.61 | 1387.65
 | | | ABC | 20908.89 | 21617.48 | 230.93 | 8.95
 | | | HA | 6641.69 | 6677.12 | 2.21 | 698.61
Tsp225 | 225 | 3916 | ABO | 3917 | 3982 | 0.03 | 0.09
 | | | ACO | 4112.35 | 4176.08 | 8.22 | 4038.75
 | | | ABC | 16998.41 | 17955.12 | 365.28 | 16.68
 | | | HA | 4090.54 | 4157.85 | 7.74 | 2037.33

The dominance of the ABO can also be seen in the use of computer resources (CPU time), where it is clearly seen that the ABO is the fastest of all four algorithms. In Berlin52, for instance, the ABO is 58335 times faster than the ACO, 1085 times faster than the ABC, and 30320 times faster than the Hybrid Algorithm (HA). This trend continues in all the test cases under investigation. To solve all the TSP problems here, it took ABO a cumulative time of 0.279 seconds, to ACO's 7456 seconds, ABC's 43.11 seconds, and HA's 3362.27 seconds. The speed of the ABO is traceable to its effective memory-management technique, since the ABO uses the path improvement technique as against the ACO, which uses the slower path construction method. The difference in speed from the ABC, which uses a technique similar to the ABO's, is due to the ABC's use of several parameters. The speed of the HA could have been affected by its combination of the path construction and path improvement techniques, coupled with the use of several parameters.

Table 4: Comparative optimal results (Best / Avg / Err %).

TSP instance | Optimum | ABO | PSO | ACO | HPSACO
att48 | 33522 | 33524 / 33579 / 0.16 | 33734 / 33982 / 0.63 | 33649 / 33731 / 0.62 | 33524 / 33667 / 0.16
st70 | 675 | 676 / 678.33 / 0.15 | 691.2 / 702.6 / 2.40 | 685.7 / 694.7 / 1.59 | 680.3 / 698.6 / 0.79
eil76 | 538 | 538 / 563.04 / 0.00 | 572.3 / 589.1 / 6.38 | 550.7 / 560.4 / 2.36 | 546.2 / 558.1 / 1.52
pr152 | 73682 | 73730 / 73990 / 0.07 | 75361 / 75405 / 2.28 | 74689 / 74936 / 1.37 | 74165 / 74654 / 0.66
gil262 | 2378 | 2378 / 2386 / 0.00 | 2513 / 2486 / 5.68 | 2463 / 2495 / 3.57 | 2413 / 2468 / 1.47
rd400 | 15281 | 15301 / 15304 / 5.00 | 16964 / 17024 / 11.01 | 16581 / 16834 / 8.51 | 16067 / 16513 / 5.14
pr1002 | 259045 | 259132 / 261608 / 0.03 | 278923 / 279755 / 7.67 | 269758 / 271043 / 4.14 | 267998 / 269789 / 3.46
d1291 | 50801 | 50839 / 50839 / 0.07 | 53912 / 54104 / 6.12 | 52942 / 53249 / 4.21 | 52868 / 52951 / 4.07
fnl4461 | 182566 | 182745 / 183174 / 0.10 | 199314 / 199492 / 9.17 | 192964 / 194015 / 5.70 | 191352 / 192585 / 4.81
brd14051 | 469385 | 469835 / 479085 / 0.10 | 518631 / 519305 / 10.49 | 505734 / 511638 / 7.74 | 498471 / 503594 / 6.20

5.3. ABO and Other Algorithms. Encouraged by the extremely good performance of the ABO in the first experiments, more experiments were performed, and the results obtained were compared with those from PSO, ACO, and HPSACO from another recently published study [27]. The HPSACO is a combination of three techniques, namely, the Collaborative Strategy, Particle Swarm Optimization, and the Ant Colony Optimization algorithms. The datasets used in this experiment are from TSPLIB95: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The results are presented in Table 4.

Table 4 further underscores the excellent performance of ABO. The ABO obtained the optimal solution to Eil76 and Gil262 and very near-optimal solutions in the rest of the test cases. It is obvious from the relative error calculations that the ABO obtained over 99% accuracy in all the TSP instances here except the difficult Rd400, where it obtained 95%. It is worthy of note that the 95% accuracy of the ABO is still superior to the performance of the other comparative algorithms on this TSP instance. The cumulative relative error of the ABO is 5.68%, to the PSO's 61.83%, the ACO's 39.81%, and the HPSACO's 28.28%. Clearly, from this analysis, the ABO has a superior search capability. This is traceable to its use of relatively fewer parameters than most other Swarm Intelligence techniques. The controlling parameters of the ABO are just the learning parameters (lp1 and lp2).

In designing the ABO, the authors deliberately tried to avoid the "Frankenstein phenomena" [30], that is, a case of developing intricate techniques with several parameters/operators such that, most times, the individual contributions of some of those parameters/operators to the workings of the algorithm are difficult to pinpoint. The ability to achieve this "lean metaheuristic" design (which is what we tried to do in designing the ABO) is a mark of good algorithm design [30].

6. ABO on Asymmetric TSP Test Cases

Moreover, we experimented on some ATSP test cases to verify the performance of the novel algorithm on such instances. The results obtained from this experiment using ABO are compared with the results obtained from solving some benchmark asymmetric TSP instances available in TSPLIB95 using the Randomized Arbitrary Insertion algorithm (RAI) [31], Ant Colony System 3-opt (ACS), Min-Max Ant System (MMAS), and Iterated Local Search (ILS), which is reported to be one of the best methods for solving TSP problems. These ATSP instances are ry48p, ft70, kro124p, ftv70, p43, and ftv170 [32].

Table 5 shows the comparative results obtained by applying five different algorithms to solving a number of benchmark asymmetric TSP instances, with the aim of evaluating the performance of the African Buffalo Optimization (ABO) algorithm. The first column lists the ATSP instances as recorded in the TSPLIB; the second column indicates the number of cities/locations/nodes involved in each instance; the third indicates the optimal solutions; these are followed by the performances of the different algorithms in at most 50 test runs.

A quick look at the table shows the excellent performance of the ILS and the MMAS, with both obtaining 100% results in the only four cases available in the literature that they solved. The ABO performed very well, achieving over 99% in all test cases but one. The ABO has approximately the same performance as RAI and the ACS, obtaining about 99% optimal results in virtually all cases. It is significant to note, however, that the ABO performed better than ACS and RAI on ft70, which is said to be a difficult ATSP instance to solve in spite of its relatively small size [33].

Table 5: Comparative results (Best / Avg / Rel. error %).

TSP case | Cities | Optimum | ABO | RAI | MMAS | ACS | ILS
Ry48p | 48 | 14422 | 14440 / 14455 / 0.12 | 14422 / 14543.20 / 0 | 14422 / 14422 / 0 | 14422 / 14565.45 / 0 | 14422 / 14422 / 0
Ft70 | 70 | 38673 | 38753 / 38870.5 / 0.21 | 38855 / 39187.75 / 0.47 | 38673 / 38687 / 0 | 38781 / 39099.05 / 0.28 | 38673 / 38687 / 0
Kro124p | 100 | 36230 | 36275 / 36713 / 0.12 | 36241 / 36594.23 / 0.04 | 36230 / 36542 / 0 | 36241 / 36857 / 0.04 | 36230 / 36542 / 0
Ftv70 | 71 | 1950 | 1955 / 1958.5 / 0.26 | 1950 / 1968.44 / 0 | — | — | —
P43 | 43 | 5620 | 5645 / 5698 / 0.44 | 5620 / 5620.65 / 0 | — | — | —
Ftv170 | 171 | 2755 | 2795 / 2840.5 / 1.45 | 2764 / 2832.74 / 0.33 | 2755 / 2755 / 0 | 2774 / 2826 / 0.69 | 2755 / 2756 / 0

Next, the authors examined the speed of each algorithm in achieving its result, since one of the aims of the ABO is to solve the problem of delay in achieving optimal solutions and speed is one of the hallmarks of a good algorithm [34]. In Table 6 we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Min-Max Ant System (MMAS), Cooperative Genetic Ant Systems (CGAS) [35], and RAI.

Table 6: Comparative speed of the algorithms (average time in seconds).

TSP case | Cities | ABO | MIMM-ACO | MMAS | CGAS | RAI
Ry48p | 48 | 0.07 | 7.83 | 7.97 | 12.35 | 15.98
Ft70 | 70 | 0.05 | 9.85 | 10.15 | 15.32 | 70.68
Kro124p | 100 | 0.08 | 33.25 | 23.4 | 78.52 | 30.34
Ftv70 | 71 | 0.09 | 64.53 | 61.25 | 69.64 | 73.76
P43 | 43 | 0.01 | 8.35 | 9.38 | 0.997 | 0.997
Ftv170 | 171 | 0.65 | 108.28 | 96.73 | 276.1 | 276.1

Table 6 displays the exceptional capacity of the novel African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions at record times. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, to MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 452.927 seconds, and RAI's 323.4 seconds. Undoubtedly, the ABO outperformed the other algorithms in its quest to obtain results using very limited CPU time and resources. This is brought about by ABO's use of very few parameters coupled with a straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization to the known solutions of some popular Neural Network algorithms: Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. Comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions to Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method on Berlin52. Aside from these, ABO outperformed the other methods in getting near-optimal solutions on Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroB150, KroA200, KroB200, Rat575, Rat783, Rl1323, and Fl1400. It was only on Eil101 that Masutti and Castro's method outperformed the ABO. This is further evidence that ABO is an excellent performer even in competition with Neural Networks methods.

8. Conclusion

In general, this paper introduces the novel African Buffalo Optimization algorithm and shows its capacity to solve the Travelling Salesman's Problem. The performance obtained from using the ABO is weighed against the performance of some popular algorithms such as the Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), Min-Max Ant System (MMAS), and Randomized Insertion Algorithm (RAI); some hybrid methods such as Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Hybrid Algorithm (HA), and Cooperative Genetic Ant Systems (CGAS); and some popular Neural Networks-based optimization methods. In total, 33 TSP dataset cases ranging from 48 to 14461 cities were investigated, and the ABO results obtained were compared with results obtained from 11 other optimization algorithms and methods. The results show the novel algorithm's capacity to obtain optimal or near-optimal solutions at an incredibly fast rate. The ABO's outstanding performance is a testimony to the fact that ABO has immense potential in solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problems with encouraging outcomes, it is recommended that more investigations be carried out to ascertain the veracity or otherwise of this new algorithm in solving other problems such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors assert that there is no conflict of interests in the publication of this research paper.

Table 7: ABO versus NN results (Best / Mean).

TSP instance | Optimum | ABO | Angeniol's method | Somhom et al.'s method | Pasti and Castro's method | Masutti and Castro's method
eil51 | 426 | 426 / 427 | 432 / 442.90 | 433 / 440.57 | 429 / 438.70 | 427 / 437.47
eil76 | 538 | 538 / 563.04 | 554 / 563.20 | 552 / 562.27 | 542 / 556.10 | 541 / 556.33
eil101 | 629 | 640 / 640 | 655 / 665.93 | 640 / 655.57 | 641 / 654.83 | 638 / 648.63
berlin52 | 7542 | 7542 / 7659.31 | 7778 / 8363.70 | 7715 / 8025.07 | 7716 / 8073.97 | 7542 / 7932.50
bier127 | 118282 | 118297 / 118863 | 120110 / 128920.33 | 119840 / 121733.33 | 118760 / 121780.33 | 118970 / 120886.33
ch130 | 6110 | 6111 / 6307.14 | 6265 / 6416.80 | 6203 / 6307.23 | 6142 / 6291.77 | 6145 / 6282.40
ch150 | 6528 | 6532 / 6601 | 6634 / 6842.80 | 6631 / 6751 | 6629 / 6753.20 | 6602 / 6738.37
rd100 | 7910 | 7935 / 7956 | 8088 / 8444.50 | 8028 / 8239.40 | 7947 / 8253.93 | 7982 / 8199.77
lin105 | 14379 | 14419 / 14452.7 | 14999 / 16111.37 | 14379 / 14475.60 | 14379 / 14702.23 | 14379 / 14400.17
lin318 | 42029 | 42101 / 42336 | 44869 / 45832.83 | 43154 / 43922.90 | 42975 / 43704.97 | 42834 / 43696.87
kroA100 | 21282 | 21311 / 22163.8 | 23009 / 24678.80 | 21410 / 21616.77 | 21369 / 21868.47 | 21333 / 21522.73
kroA150 | 26524 | 26526 / 27205 | 28948 / 29960.90 | 26930 / 27401.33 | 26932 / 27346.43 | 26678 / 27355.97
kroA200 | 29368 | 29370 / 30152 | 31669 / 33228.33 | 30144 / 30415.67 | 29594 / 30257.53 | 29600 / 30190.27
kroB100 | 22141 | 22160 / 22509 | 24026 / 25966.40 | 22548 / 22622.50 | 22596 / 22853.60 | 22343 / 22661.47
kroB150 | 26130 | 26169 / 26431 | 27886 / 29404.53 | 26342 / 26806.33 | 26395 / 26752.13 | 26264 / 26631.87
kroB200 | 29437 | 29487 / 29534 | 32351 / 33838.13 | 29703 / 30286.47 | 29831 / 30415.60 | 29637 / 30135.00
kroC100 | 20749 | 20755 / 20881.7 | 22344 / 23496.13 | 20921 / 21149.87 | 20915 / 21231.60 | 20915 / 20971.23
kroD100 | 21294 | 21347 / 21462 | 23076 / 23909.03 | 21500 / 21845.73 | 21457 / 22027.87 | 21374 / 21697.37
kroE100 | 22068 | 22088 / 22702 | 23642 / 24828.03 | 22379 / 22682.47 | 22427 / 22815.50 | 22395 / 22715.63
rat575 | 6773 | 6774 / 6810 | 8107 / 8301.83 | 7090 / 7173.63 | 7039 / 7125.07 | 7047 / 7115.67
rat783 | 8806 | 8811 / 8881.75 | 10532 / 10721.60 | 9316 / 9387.57 | 9185 / 9326.30 | 9246 / 9343.77
rl1323 | 270199 | 270480 / 278977 | 293350 / 301424.33 | 295780 / 300899.00 | 295060 / 300286.00 | 300770 / 305314.33
fl1400 | 20127 | 20134 / 20167 | 20649 / 21174.67 | 20558 / 20742.60 | 20745 / 21070.57 | 20851 / 21110.00
d1655 | 62128 | 62346 / 62599.5 | 68875 / 71168.07 | 67459 / 68046.37 | 70323 / 71431.70 | 70918 / 72113.17

Acknowledgments

The authors acknowledge the support of the anonymous reviewers, whose constructive reviews helped to enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36–39, Springer, 2010.

[3] E. Cantú-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49–60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224–228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189–224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243–252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137–151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881–900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.

[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498–1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stützle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49–75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671–1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. García-Sánchez, J. J. Merelo, and P. A. Castillo, "Migration study on a pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57–64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73–81, 1997.

[24] M. G. C. Resende, R. Martí, M. Gallego, and A. Duarte, "GRASP and path relinking for the max–min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498–508, 2010.

[25] G. Reinelt, "Tsplib95, 1995," 2012, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103–117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1–10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477–2482, July 2010.

[29] S. Georg, "MP-TESTDATA: The TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sörensen, "Metaheuristics: the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3–18, 2015.

[31] J. Brest and J. Žerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145–150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57–75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567–578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365–1375, 2013.

[36] N. Aras, B. J. Oommen, and I. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273–1284, 1999.


6 Computational Intelligence and Neuroscience

Table 1: TSPLIB datasets.

1st experimental datasets: Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, Tsp225.
2nd experimental datasets: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, Brd14051.
ATSP datasets: Ry48p, Ft70, Kro124p, Ftv70, P43, Ftv170.
NN comparative datasets: Eil51, Eil76, Eil101, Berlin52, Bier127, Ch130, Ch150, Rd100, Lin105, Lin318, KroA100, KroA150, KroA200, KroB100, KroB150, KroB200, KroC100, KroD100, KroE100, Rat575, Rat783, RL1323, FL1400, D1655.

4.1. ABO Solution Mechanism for the TSP. The ABO applies the modified Karp-Steele patching algorithm in its solution of the Travelling Salesman's Problem [23]. It follows a simple solution procedure: first, construct a cycle factor F of the cheapest weight in the graph K. Next, select a pair of edges taken from different cycles of the graph K and patch them in the way that results in the minimum weight. Patching simply removes the selected edges from the two cycle factors and replaces them with cheaper edges, forming a larger cycle factor and thus reducing the number of cycle factors in the graph K by one. Thirdly, the second step is repeated until a single cycle factor covers the entire graph K. This technique fits the path-improvement technique description [24]. The ABO overcomes the problem of delay in this process through the use of two primary parameters that ensure speed, namely, lp1 and lp2, coupled with the algorithm keeping track of the route of bgmax as well as bpmax.k in each construction step.
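As a rough illustration of the patching step described above, the following Python sketch merges cycle factors pairwise until a single tour covers the graph. This is an illustrative simplification under assumed names (`patch_two_cycles`, `merge_cycle_factors`), not the authors' MATLAB implementation; a faithful Karp-Steele variant would also choose which pair of cycles to patch by minimum patching cost.

```python
def patch_two_cycles(c1, c2, dist):
    """Merge two cycles (node lists) by removing one edge from each and
    reconnecting them at minimum extra cost (the patching operation)."""
    best_delta, best_tour = None, None
    for i in range(len(c1)):
        a, b = c1[i], c1[(i + 1) % len(c1)]          # edge (a, b) leaves c1
        for j in range(len(c2)):
            c, d = c2[j], c2[(j + 1) % len(c2)]      # edge (c, d) leaves c2
            # replace edges (a, b) and (c, d) with (a, d) and (c, b)
            delta = dist[a][d] + dist[c][b] - dist[a][b] - dist[c][d]
            if best_delta is None or delta < best_delta:
                best_delta = delta
                # splice c2 into c1 between a and b
                best_tour = c1[:i + 1] + c2[j + 1:] + c2[:j + 1] + c1[i + 1:]
    return best_tour

def merge_cycle_factors(cycles, dist):
    """Repeatedly patch pairs of cycles until one tour spans the graph."""
    cycles = list(cycles)
    while len(cycles) > 1:
        cycles.append(patch_two_cycles(cycles.pop(), cycles.pop(), dist))
    return cycles[0]
```

Each patch reduces the number of cycle factors by one, mirroring the three-step procedure above.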

5. Experiments and Discussion of Results

In this study, the ABO was implemented on three sets of symmetric TSP datasets and a set of asymmetric TSP (ATSP) datasets, ranging from 48 to 14,461 cities, from TSPLIB95 [25]. The first experiment compared the performance of the ABO on TSP instances with the results obtained in a recent study [26] involving Berlin52, St70, Eil76, Pr76, KroA100, Eil101, Ch150, and Tsp225. The second set of experiments tested the ABO's performance against another recently published study [27] on Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The third experiment examined the ABO's performance on asymmetric TSP instances. The fourth set of experiments compared the ABO results with those obtained using some popular Artificial Neural Network methods [28]. The TSP benchmark datasets are presented in Table 1.

5.1. Experimental Parameter Settings. For the sets of experiments involving PSO, the parameters are as follows: population size 200, iterations (Tmax) 1000, inertia weight 0.85, C1 = 2, C2 = 2, rand1(0, 1), rand2(0, 1). For the HPSACO, the experimental parameters are as follows: population 200, iterations (Tmax) 1000, inertia weight 0.85, C1 = 2, C2 = 2, ants (N) 100, pheromone factor (α) 1.0, heuristic factor (β) 2.0, evaporation coefficient (ρ) 0.05, pheromone amount 100. For the experiments involving the other algorithms in this study, Table 2 contains the details of the parameters used.
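For context, the PSO parameters listed above enter the standard velocity and position update. The following is a minimal sketch assuming plain global-best PSO on a continuous space (the cited studies apply PSO to the TSP through problem-specific encodings, so this is illustrative only):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.85, c1=2.0, c2=2.0):
    """One iteration of the standard PSO update, using the inertia weight w
    and acceleration coefficients c1, c2 listed above."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()   # rand1(0,1), rand2(0,1)
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
```

Each particle is pulled toward its own best position (pbest) and the swarm's best (gbest), with the inertia weight damping the previous velocity.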

5.2. Experiment Analysis and Discussion of Results. The experiments were performed using MATLAB on an Intel Core i7-3770 CPU at 3.40 GHz with 4 GB RAM. The experiments on the asymmetric Travelling Salesman's Problems were executed on a desktop computer with a Pentium Dual-Core 1.80 GHz processor and 2 GB RAM. Similarly, the experiments on the Artificial Neural Networks were executed using Microsoft Visual C++ 2008 on an Intel Core i7 CPU. To authenticate the veracity of the ABO algorithm in solving the TSP, we initially carried out experiments on eight TSP cases. The city-coordinate data are available in [29]. The results are presented in Table 3.

In Table 3, the "Average Value" refers to the average fitness of each algorithm, and the "Relative Error" values are obtained by calculating

Relative error (%) = ((Average value − Best value) / Best value) × 100. (12)
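Equation (12) translates directly into code; a small helper (hypothetical name) reproducing the relative-error calculation:

```python
def relative_error(average_value, best_value):
    """Percentage deviation of an algorithm's average tour length from the
    best (reference) tour length, as in equation (12)."""
    return (average_value - best_value) / best_value * 100.0
```

For example, an average tour of 110 against a best tour of 100 gives a relative error of 10%.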

As can be seen in Table 3, the ABO outperformed the other algorithms in all test instances under investigation. The ABO, for instance, obtained the optimal solution to Berlin52 and Eil76; no other algorithm did. Besides this, the ABO obtained the nearest-optimal solution to the remaining TSP instances compared with any other algorithm. In terms of the average results obtained by each algorithm, the ABO still has the best performance. It is rather surprising that the Hybrid Algorithm (HA), which uses a memory matrix similar to the ABO's, did not produce a better result. This is traceable to its use of several parameters, since the HA is a combination of the ACO and the ABC, two algorithms that have some of the largest numbers of parameters to tune in order to obtain good solutions.

The dominance of the ABO can also be seen in the use of computer resources (CPU time), where the ABO is clearly the fastest of the four algorithms. On Berlin52, for instance, the ABO is 5,833.5 times faster than the ACO, 1,085 times faster than the ABC, and 30,320 times faster than the Hybrid Algorithm (HA). This trend continues in all the


Table 2: Experimental parameters.

ABO: population 40; mk 1.0; bgmax/bpmax 0.6; lp1/lp2 0.5; wk 1.0.
ACO: ants D*; β 5.0; ρ 0.65; α 1.0; O 200; qo 0.9.
ABC: population D*; φij rand(−1, 1); ωij rand(0, 1.5); SN NP/2; limit D* · SN; max cycle number 500; colony 50.
HA: population D*; β 5.0; ρ 0.65; α 1.0; φij rand(−1, 1); ωij rand(0, 1.5); SN NP/2; limit D* · SN; max cycle number 500; colony 50; O 200; qo 0.9.
CGAS: generation 100; β 2.0; ρ 0.1; Ro 0.33; crossover rate 1.0; qo 0.9; φr 0.3; φρ 0.2; τmin τmax/20; τmax 1 − (1 − ρ).
Total number of runs: 50 for each algorithm.

Table 3: Comparative experimental results.

| Problem | Cities | Optimum | Method | Best | Mean | Rel. err. (%) | Time (s) |
| Berlin52 | 52 | 7542 | ABO | 7542 | 7616 | 0 | 0.002 |
| | | | ACO | 7548.99 | 7659.31 | 1.52 | 116.67 |
| | | | ABC | 9479.11 | 10390.26 | 37.72 | 2.17 |
| | | | HA | 7544.37 | 7544.37 | 0.03 | 60.64 |
| St70 | 70 | 675 | ABO | 676 | 678.33 | 0.15 | 0.08 |
| | | | ACO | 696.05 | 709.16 | 4.73 | 226.06 |
| | | | ABC | 1162.12 | 1230.49 | 81.73 | 3.15 |
| | | | HA | 687.24 | 700.58 | 3.47 | 115.65 |
| Eil76 | 76 | 538 | ABO | 538 | 563.04 | 0 | 0.03 |
| | | | ACO | 554.46 | 561.98 | 3.04 | 271.98 |
| | | | ABC | 877.28 | 931.44 | 70.78 | 3.49 |
| | | | HA | 551.07 | 557.98 | 2.31 | 138.82 |
| Pr76 | 76 | 108159 | ABO | 108167 | 108396 | 0.007 | 0.08 |
| | | | ACO | 115166.66 | 116321.22 | 7.55 | 272.41 |
| | | | ABC | 195198.90 | 205119.61 | 89.65 | 3.50 |
| | | | HA | 113798.56 | 115072.29 | 6.39 | 138.92 |
| KroA100 | 100 | 21282 | ABO | 21311 | 22163.8 | 0.4 | 0.001 |
| | | | ACO | 22455.89 | 22880.12 | 7.49 | 615.06 |
| | | | ABC | 49519.51 | 53840.03 | 152.94 | 5.17 |
| | | | HA | 22122.75 | 22435.31 | 5.40 | 311.12 |
| Eil101 | 101 | 629 | ABO | 640 | 640 | 1.7 | 0.027 |
| | | | ACO | 678.04 | 693.42 | 7.96 | 527.42 |
| | | | ABC | 1237.31 | 1315.95 | 104.88 | 5.17 |
| | | | HA | 672.71 | 683.39 | 6.39 | 267.08 |
| Ch150 | 150 | 6528 | ABO | 6532 | 6601 | 0.06 | 0.032 |
| | | | ACO | 6648.51 | 6702.87 | 2.61 | 1387.65 |
| | | | ABC | 20908.89 | 21617.48 | 230.93 | 8.95 |
| | | | HA | 6641.69 | 6677.12 | 2.21 | 698.61 |
| Tsp225 | 225 | 3916 | ABO | 3917 | 3982 | 0.03 | 0.09 |
| | | | ACO | 4112.35 | 4176.08 | 8.22 | 4038.75 |
| | | | ABC | 16998.41 | 17955.12 | 365.27 | 16.68 |
| | | | HA | 4090.54 | 4157.85 | 7.74 | 2037.33 |


Table 4: Comparative optimal results.

| TSP instance | Optimum | ABO Best / Avg / Err (%) | PSO Best / Avg / Err (%) | ACO Best / Avg / Err (%) | HPSACO Best / Avg / Err (%) |
| att48 | 33522 | 33524 / 33579 / 0.16 | 33734 / 33982 / 0.63 | 33649 / 33731 / 0.62 | 33524 / 33667 / 0.16 |
| st70 | 675 | 676 / 678.33 / 0.15 | 691.2 / 702.6 / 2.40 | 685.7 / 694.7 / 1.59 | 680.3 / 698.6 / 0.79 |
| eil76 | 538 | 538 / 563.04 / 0.00 | 572.3 / 589.1 / 6.38 | 550.7 / 560.4 / 2.36 | 546.2 / 558.1 / 1.52 |
| pr152 | 73682 | 73730 / 73990 / 0.07 | 75361 / 75405 / 2.28 | 74689 / 74936 / 1.37 | 74165 / 74654 / 0.66 |
| gil262 | 2378 | 2378 / 2386 / 0.00 | 2513 / 2486 / 5.68 | 2463 / 2495 / 3.57 | 2413 / 2468 / 1.47 |
| rd400 | 15281 | 15301 / 15304 / 5.00 | 16964 / 17024 / 11.01 | 16581 / 16834 / 8.51 | 16067 / 16513 / 5.14 |
| pr1002 | 259045 | 259132 / 261608 / 0.03 | 278923 / 279755 / 7.67 | 269758 / 271043 / 4.14 | 267998 / 269789 / 3.46 |
| d1291 | 50801 | 50839 / 50839 / 0.07 | 53912 / 54104 / 6.12 | 52942 / 53249 / 4.21 | 52868 / 52951 / 4.07 |
| fnl4461 | 182566 | 182745 / 183174 / 0.10 | 199314 / 199492 / 9.17 | 192964 / 194015 / 5.70 | 191352 / 192585 / 4.81 |
| brd14051 | 469385 | 469835 / 479085 / 0.10 | 518631 / 519305 / 10.49 | 505734 / 511638 / 7.74 | 498471 / 503594 / 6.20 |

test cases under investigation. To solve all the TSP problems here, it took the ABO a cumulative time of 0.279 seconds, to the ACO's 7,456 seconds, the ABC's 43.11 seconds, and the HA's 3,362.27 seconds. The speed of the ABO is traceable to its effective memory-management technique, since the ABO uses the path-improvement technique, as against the ACO, which uses the slower path-construction method. The difference in speed from the ABC, which uses a technique similar to the ABO's, is due to the ABC's use of several parameters. The speed of the HA could have been affected by its combination of the path-construction and path-improvement techniques, coupled with the use of several parameters.

5.3. ABO and Other Algorithms. Encouraged by the extremely good performance of the ABO in the first experiments, more experiments were performed, and the results obtained were compared with those from the PSO, ACO, and HPSACO reported in another recently published study [27]. The HPSACO is a combination of three techniques, namely, a collaborative strategy, Particle Swarm Optimization, and the Ant Colony Optimization algorithm. The datasets used in this experiment are from TSPLIB95: Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The results are presented in Table 4.

Table 4 further underscores the excellent performance of the ABO. The ABO obtained the optimal solution to Eil76 and Gil262 and very near-optimal solutions in the rest of the test cases. It is obvious from the relative-error calculations that the ABO obtained over 99% accuracy in all the TSP instances here except the difficult Rd400, where it obtained 95%. It is worthy of note that the 95% accuracy of the ABO is still superior to the performance of the other comparative algorithms on this TSP instance. The cumulative relative error of the ABO is 5.68%, to the PSO's 61.83%, the ACO's 39.81%, and the HPSACO's 28.28%. Clearly, from this analysis, the ABO has a superior search capability. This is traceable to its use of relatively fewer parameters than most other Swarm Intelligence techniques: the controlling parameters of the ABO are just the learning parameters (lp1 and lp2).

In designing the ABO, the authors deliberately tried to avoid the "Frankenstein phenomenon" [30], that is, the development of intricate techniques with so many parameters/operators that the individual contributions of some of those parameters/operators to the workings of the algorithm are difficult to pinpoint. The ability to achieve this "lean metaheuristic" design (which is what we tried to do in designing the ABO) is a mark of good algorithm design [30].

6. ABO on Asymmetric TSP Test Cases

In addition, we experimented on some ATSP test cases to verify the performance of the novel algorithm on such problems. The results obtained in this experiment using the ABO are compared with the results obtained by solving some benchmark asymmetric TSP instances available in TSPLIB95 using the Randomized Arbitrary Insertion algorithm (RAI) [31], Ant Colony System with 3-opt (ACS), the Min-Max Ant System (MMAS), and Iterated Local Search (ILS), which is reported to be one of the best methods for solving TSP problems. These ATSP instances are ry48p, ft70, kro124p, ftv70, p43, and ftv170 [32].

Table 5 shows the comparative results obtained by applying five different algorithms to a number of benchmark asymmetric TSP instances, with the aim of evaluating the performance of the African Buffalo Optimization (ABO) algorithm. The first column lists the ATSP instances as recorded in the TSPLIB; the second column indicates the number of cities/locations/nodes involved in each instance; the third indicates the optimal solutions; these are followed by the performances of the different algorithms over at most 50 test runs.

A quick look at the table shows the excellent performance of the ILS and the MMAS, with both obtaining 100% results in the only four cases, available in the literature, that they solved. The ABO performed very well, achieving over 99% in all test cases but one. The ABO has approximately the same performance as the RAI and the ACS, obtaining about 99%-optimal results in virtually all cases. It is significant to note, however, that the ABO performed better than the ACS and the RAI on ft70, which is said to be a difficult ATSP instance to solve in spite of its relatively small size [33].

Next, the authors examined the speed of each algorithm in achieving results, since one of the aims of the ABO is to


Table 5: Comparative results.

| TSP case | Cities | Optimum | ABO Best / Avg / Err (%) | RAI Best / Avg / Err (%) | MMAS Best / Avg / Err (%) | ACS Best / Avg / Err (%) | ILS Best / Avg / Err (%) |
| Ry48p | 48 | 14422 | 14440 / 14455 / 0.12 | 14422 / 14543.20 / 0 | 14422 / 14422 / 0 | 14422 / 14565.45 / 0 | 14422 / 14422 / 0 |
| Ft70 | 70 | 38673 | 38753 / 38870.5 / 0.21 | 38855 / 39187.75 / 0.47 | 38673 / 38687 / 0 | 38781 / 39099.05 / 0.28 | 38673 / 38687 / 0 |
| Kro124p | 100 | 36230 | 36275 / 36713 / 0.12 | 36241 / 36594.23 / 0.04 | 36230 / 36542 / 0 | 36241 / 36857 / 0.04 | 36230 / 36542 / 0 |
| Ftv70 | 71 | 1950 | 1955 / 1958.5 / 0.26 | 1950 / 1968.44 / 0 | n/a | n/a | n/a |
| P43 | 43 | 5620 | 5645 / 5698 / 0.44 | 5620 / 5620.65 / 0 | n/a | n/a | n/a |
| Ftv170 | 171 | 2755 | 2795 / 2840 / 1.45 | 2764 / 2832.74 / 0.33 | 2755 / 2755 / 0 | 2774 / 2826 / 0.69 | 2755 / 2756 / 0 |


Table 6: Comparative speed of the algorithms (average time, seconds).

| TSP case | Cities | ABO | MIMM-ACO | MMAS | CGAS | RAI |
| Ry48p | 48 | 0.07 | 7.83 | 7.97 | 12.35 | 15.98 |
| Ft70 | 70 | 0.05 | 9.85 | 10.15 | 15.32 | 70.68 |
| Kro124p | 100 | 0.08 | 33.25 | 23.4 | 78.52 | 30.34 |
| Ftv70 | 71 | 0.09 | 64.53 | 61.25 | 69.64 | 73.76 |
| P43 | 43 | 0.01 | 8.35 | 9.38 | 0.997 | 0.997 |
| Ftv170 | 171 | 0.65 | 108.28 | 96.73 | 27.61 | 27.61 |

solve the problem of delay in achieving optimal solutions, since speed is one of the hallmarks of a good algorithm [34]. In Table 6, we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), the Min-Max Ant System (MMAS), Cooperative Genetic Ant Systems (CGAS) [35], and the RAI.

Table 6 displays the exceptional capacity of the novel African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions in record time. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, to MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 4,529.27 seconds, and the RAI's 32.34 seconds. Undoubtedly, the ABO outperformed the other algorithms in obtaining results with very limited CPU time and resources. This is attributable to the ABO's use of very few parameters coupled with a straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization with the known solutions of some popular Neural Network algorithms: Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. Comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions in Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method in Berlin52. Aside from these, the ABO outperformed the other methods in obtaining near-optimal solutions in Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroA200, KroB150, KroB200, Rat575, Rat783, Rl1323, and Fl1400. It was only in Eil101 that Masutti and Castro's method outperformed the ABO. This is further evidence that the ABO is an excellent performer even in competition with Neural Network methods.

8. Conclusion

In general, this paper introduces the novel African Buffalo Optimization algorithm and demonstrates its capacity to solve the Traveling Salesman's Problem. The performance obtained using the ABO is weighed against the performance of some popular algorithms, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), the Min-Max Ant System (MMAS), and the Randomized Insertion Algorithm (RAI); some hybrid methods, such as Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), the Hybrid Algorithm (HA), and Cooperative Genetic Ant Systems (CGAS); and some popular Neural Network-based optimization methods. In total, 33 TSP datasets ranging from 48 to 14,461 cities were investigated, and the ABO results obtained were compared with results obtained from 11 other optimization algorithms and methods. The results show the novel algorithm's capacity to obtain optimal or near-optimal solutions at an incredibly fast rate. The ABO's outstanding performance is a testimony to the fact that the ABO has immense potential for solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problem with encouraging outcomes, it is recommended that further investigations be carried out to ascertain the veracity or otherwise of this new algorithm on other problems, such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors acknowledge the anonymous reviewers for their constructive reviews that helped to

Computational Intelligence and Neuroscience 11

Table 7: ABO versus NN results.

| TSP instance | Optimum | ABO Best / Mean | Angeniol Best / Mean | Somhom et al. Best / Mean | Pasti and Castro Best / Mean | Masutti and Castro Best / Mean |
| eil51 | 426 | 426 / 427 | 432 / 442.90 | 433 / 440.57 | 429 / 438.70 | 427 / 437.47 |
| eil76 | 538 | 538 / 563.04 | 554 / 563.20 | 552 / 562.27 | 542 / 556.10 | 541 / 556.33 |
| eil101 | 629 | 640 / 640 | 655 / 665.93 | 640 / 655.57 | 641 / 654.83 | 638 / 648.63 |
| berlin52 | 7542 | 7542 / 7659.31 | 7778 / 8363.70 | 7715 / 8025.07 | 7716 / 8073.97 | 7542 / 7932.50 |
| bier127 | 118282 | 118297 / 118863 | 120110 / 128920.33 | 119840 / 121733.33 | 118760 / 121780.33 | 118970 / 120886.33 |
| ch130 | 6110 | 6111 / 6307.14 | 6265 / 6416.80 | 6203 / 6307.23 | 6142 / 6291.77 | 6145 / 6282.40 |
| ch150 | 6528 | 6532 / 6601 | 6634 / 6842.80 | 6631 / 6751 | 6629 / 6753.20 | 6602 / 6738.37 |
| rd100 | 7910 | 7935 / 7956 | 8088 / 8444.50 | 8028 / 8239.40 | 7947 / 8253.93 | 7982 / 8199.77 |
| lin105 | 14379 | 14419 / 14452.7 | 14999 / 16111.37 | 14379 / 14475.60 | 14379 / 14702.23 | 14379 / 14400.17 |
| lin318 | 42029 | 42101 / 42336 | 44869 / 45832.83 | 43154 / 43922.90 | 42975 / 43704.97 | 42834 / 43696.87 |
| kroA100 | 21282 | 21311 / 22163.8 | 23009 / 24678.80 | 21410 / 21616.77 | 21369 / 21868.47 | 21333 / 21522.73 |
| kroA150 | 26524 | 26526 / 27205 | 28948 / 29960.90 | 26930 / 27401.33 | 26932 / 27346.43 | 26678 / 27355.97 |
| kroA200 | 29368 | 29370 / 30152 | 31669 / 33228.33 | 30144 / 30415.67 | 29594 / 30257.53 | 29600 / 30190.27 |
| kroB100 | 22141 | 22160 / 22509 | 24026 / 25966.40 | 22548 / 22622.50 | 22596 / 22853.60 | 22343 / 22661.47 |
| kroB150 | 26130 | 26169 / 26431 | 27886 / 29404.53 | 26342 / 26806.33 | 26395 / 26752.13 | 26264 / 26631.87 |
| kroB200 | 29437 | 29487 / 29534 | 32351 / 33838.13 | 29703 / 30286.47 | 29831 / 30415.60 | 29637 / 30135.00 |
| kroC100 | 20749 | 20755 / 20881.7 | 22344 / 23496.13 | 20921 / 21149.87 | 20915 / 21231.60 | 20915 / 20971.23 |
| kroD100 | 21294 | 21347 / 21462 | 23076 / 23909.03 | 21500 / 21845.73 | 21457 / 22027.87 | 21374 / 21697.37 |
| kroE100 | 22068 | 22088 / 22702 | 23642 / 24828.03 | 22379 / 22682.47 | 22427 / 22815.50 | 22395 / 22715.63 |
| rat575 | 6773 | 6774 / 6810 | 8107 / 8301.83 | 7090 / 7173.63 | 7039 / 7125.07 | 7047 / 7115.67 |
| rat783 | 8806 | 8811 / 8881.75 | 10532 / 10721.60 | 9316 / 9387.57 | 9185 / 9326.30 | 9246 / 9343.77 |
| rl1323 | 270199 | 270480 / 278977 | 293350 / 301424.33 | 295780 / 300899.00 | 295060 / 300286.00 | 300770 / 305314.33 |
| fl1400 | 20127 | 20134 / 20167 | 20649 / 21174.67 | 20558 / 20742.60 | 20745 / 21070.57 | 20851 / 21110.00 |
| d1655 | 62128 | 62346 / 62599.5 | 68875 / 71168.07 | 67459 / 68046.37 | 70323 / 71431.70 | 70918 / 72113.17 |

enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36–39, Springer, 2010.

[3] E. Cantu-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49–60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224–228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189–224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243–252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137–151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881–900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.

[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498–1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stutzle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49–75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671–1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. Garcia-Sanchez, J. J. Merelo, and P. A. Castillo, "Migration study on a Pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57–64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73–81, 1997.

[24] M. G. C. Resende, R. Marti, M. Gallego, and A. Duarte, "GRASP and path relinking for the max-min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498–508, 2010.

[25] G. Reinelt, "TSPLIB95, 1995," 2012, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103–117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1–10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477–2482, July 2010.

[29] S. Georg, "MP-TESTDATA—The TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sorensen, "Metaheuristics—the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3–18, 2015.

[31] J. Brest and J. Zerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145–150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57–75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567–578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365–1375, 2013.

[36] N. Aras, B. J. Oommen, and I. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273–1284, 1999.


Page 7: Solving the Traveling Salesman’s Problem Using the African ...umpir.ump.edu.my/id/eprint/11365/1/Solving the Traveling Salesman’… · ResearchArticle Solving the Traveling Salesman’s

Computational Intelligence and Neuroscience 7

Table 2: Experimental parameters (each algorithm: 50 runs in total; D* denotes the problem dimension).

ABO: population 40; m_k 1.0; bg_max/bp_max 0.6; lp1, lp2 0.5; w_k 1.0.
ACO: ants D*; β 5.0; ρ 0.65; α 1.0; O 200; q0 0.9.
ABC: population D*; φ_ij rand(−1, 1); ω_ij rand(0, 1.5); SN NP/2; limit D*·SN; max cycle number 500; colony 50.
HA: population D*; β 5.0; ρ 0.65; α 1.0; φ_ij rand(−1, 1); ω_ij rand(0, 1.5); SN NP/2; limit D*·SN; max cycle number 500; colony 50; O 200; q0 0.9.
CGAS: generation 100; β 2.0; ρ 0.1; R_o 0.33; crossover rate 1.0; q0 0.9; φ_r 0.3; φ_ρ 0.2; τ_min τ_max/20; τ_max 1/(1 − ρ).

Table 3: Comparative experimental results. For each method: Best / Mean / Rel. err (%) / Time (s).

Berlin52 (52 cities, optimum 7542):
  ABO 7542 / 7616 / 0 / 0.002; ACO 7548.99 / 7659.31 / 1.52 / 116.67; ABC 9479.11 / 10390.26 / 37.72 / 2.17; HA 7544.37 / 7544.37 / 0.03 / 60.64
St70 (70 cities, optimum 675):
  ABO 676 / 678.33 / 0.15 / 0.08; ACO 696.05 / 709.16 / 4.73 / 226.06; ABC 1162.12 / 1230.49 / 81.73 / 3.15; HA 687.24 / 700.58 / 3.47 / 115.65
Eil76 (76 cities, optimum 538):
  ABO 538 / 563.04 / 0 / 0.03; ACO 554.46 / 561.98 / 3.04 / 271.98; ABC 877.28 / 931.44 / 70.78 / 3.49; HA 551.07 / 557.98 / 2.31 / 138.82
Pr76 (76 cities, optimum 108159):
  ABO 108167 / 108396 / 0.007 / 0.08; ACO 115166.66 / 116321.22 / 7.55 / 272.41; ABC 195198.9 / 205119.61 / 89.65 / 3.50; HA 113798.56 / 115072.29 / 6.39 / 138.92
KroA100 (100 cities, optimum 21282):
  ABO 21311 / 22163.8 / 0.4 / 0.00; ACO 22455.89 / 22880.12 / 7.49 / 615.06; ABC 49519.51 / 53840.03 / 152.94 / 5.17; HA 22122.75 / 22435.31 / 5.40 / 311.12
Eil101 (101 cities, optimum 629):
  ABO 640 / 640 / 1.7 / 0.027; ACO 678.04 / 693.42 / 7.96 / 527.42; ABC 1237.31 / 1315.95 / 104.88 / 5.17; HA 672.71 / 683.39 / 6.39 / 267.08
Ch150 (150 cities, optimum 6528):
  ABO 6532 / 6601 / 0.06 / 0.032; ACO 6648.51 / 6702.87 / 2.61 / 1387.65; ABC 20908.89 / 21617.48 / 230.93 / 8.95; HA 6641.69 / 6677.12 / 2.21 / 698.61
Tsp225 (225 cities, optimum 3916):
  ABO 3917 / 3982 / 0.03 / 0.09; ACO 4112.35 / 4176.08 / 8.22 / 4038.75; ABC 16998.41 / 17955.12 / 365.27 / 16.68; HA 4090.54 / 4157.85 / 7.74 / 2037.33


Table 4: Comparative optimal results. For each method: Best / Avg / Err (%).

TSP instance (optimum) | ABO | PSO | ACO | HPSACO
att48 (33522) | 33524 / 33579 / 0.16 | 33734 / 33982 / 0.63 | 33649 / 33731 / 0.62 | 33524 / 33667 / 0.16
st70 (675) | 676 / 678.33 / 0.15 | 691.2 / 702.6 / 2.40 | 685.7 / 694.7 / 1.59 | 680.3 / 698.6 / 0.79
eil76 (538) | 538 / 563.04 / 0.00 | 572.3 / 589.1 / 6.38 | 550.7 / 560.4 / 2.36 | 546.2 / 558.1 / 1.52
pr152 (73682) | 73730 / 73990 / 0.07 | 75361 / 75405 / 2.28 | 74689 / 74936 / 1.37 | 74165 / 74654 / 0.66
gil262 (2378) | 2378 / 2386 / 0.00 | 2513 / 2486 / 5.68 | 2463 / 2495 / 3.57 | 2413 / 2468 / 1.47
rd400 (15281) | 15301 / 15304 / 5.00 | 16964 / 17024 / 11.01 | 16581 / 16834 / 8.51 | 16067 / 16513 / 5.14
pr1002 (259045) | 259132 / 261608 / 0.03 | 278923 / 279755 / 7.67 | 269758 / 271043 / 4.14 | 267998 / 269789 / 3.46
d1291 (50801) | 50839 / 50839 / 0.07 | 53912 / 54104 / 6.12 | 52942 / 53249 / 4.21 | 52868 / 52951 / 4.07
fnl4461 (182566) | 182745 / 183174 / 0.10 | 199314 / 199492 / 9.17 | 192964 / 194015 / 5.70 | 191352 / 192585 / 4.81
brd14051 (469385) | 469835 / 479085 / 0.10 | 518631 / 519305 / 10.49 | 505734 / 511638 / 7.74 | 498471 / 503594 / 6.20

test cases under investigation. To solve all the TSP problems here, it took ABO a cumulative time of 0.279 seconds, to ACO's 74.56 seconds, ABC's 43.11 seconds, and HA's 3362.27 seconds. The speed of the ABO is traceable to its effective memory-management technique, since the ABO uses the path-improvement technique, as against the ACO, which uses the slower path-construction method. The difference in speed from the ABC, which uses a technique similar to the ABO's, is due to the ABC's use of several parameters. The speed of the HA could have been affected by its combination of the path-construction and path-improvement techniques, coupled with the use of several parameters.
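The path-improvement idea contrasted above can be illustrated with a generic 2-opt local search, which starts from a complete tour and repeatedly reverses a segment whenever doing so shortens the tour. This is a minimal sketch of the general technique, not the ABO's own operator; the function names are ours.

```python
import math

def tour_length(tour, dist):
    """Length of the closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Improve an existing tour by reversing segments until no reversal helps."""
    tour = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # Only two edges change when tour[i:j+1] is reversed.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

# Four corners of a unit square; the crossed tour 0-3-1-2 is repaired to length 4.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
print(tour_length(two_opt([0, 3, 1, 2], dist), dist))
```

Because each move only swaps two edges of an already complete tour, its cost check is constant-time, which is one reason improvement-based methods tend to be faster per iteration than construction-based ones.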

5.3. ABO and Other Algorithms. Encouraged by the extremely good performance of the ABO in the first experiments, more experiments were performed, and the results obtained were compared with those from PSO, ACO, and HPSACO from another recently published study [27]. The HPSACO is a combination of three techniques, namely, the Collaborative Strategy, Particle Swarm Optimization, and the Ant Colony Optimization algorithms. The datasets used in this experiment are from the TSPLIB95, and they are Att48, St70, Eil76, Pr152, Gil262, Rd400, Pr1002, D1291, Fnl4461, and Brd14051. The results are presented in Table 4.

Table 4 further underscores the excellent performance of ABO. The ABO obtained the optimal solution to Eil76 and Gil262 and very near-optimal solutions in the rest of the test cases. It is obvious from the relative-error calculations that the ABO obtained over 99% accuracy in all the TSP instances here except the difficult Rd400, where it obtained 95%. It is worthy of note that the 95% accuracy of the ABO is still superior to the performance of the other comparative algorithms in this TSP instance. The cumulative relative error of the ABO is 5.68%, to the PSO's 61.83%, the ACO's 39.81%, and the HPSACO's 28.28%. Clearly, from this analysis, the ABO has a superior search capability. This is traceable to its use of relatively fewer parameters than most other Swarm Intelligence techniques. The controlling parameters of the ABO are just the learning parameters (lp1 and lp2).
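The relative-error figures quoted throughout can be reproduced directly from a tour length and the known optimum; a one-line sketch:

```python
def relative_error(found, optimum):
    """Percentage deviation of a tour length from the known optimum."""
    return 100.0 * (found - optimum) / optimum

# St70: the best ABO tour of 676 against the optimum 675 gives about 0.15%.
print(round(relative_error(676, 675), 2))
```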

In designing the ABO, the authors deliberately tried to avoid the "Frankenstein phenomena" [30], that is, the development of intricate techniques with several parameters/operators, where the individual contributions of some of those parameters/operators to the workings of the algorithm are difficult to pinpoint. The ability to achieve this "lean metaheuristic" design (which is what we tried to do in designing the ABO) is a mark of good algorithm design [30].
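To show how few moving parts two learning parameters imply, a single buffalo's update can be sketched roughly as below. This is our hedged paraphrase, not the paper's equations: the variable names (exploration position w, exploitation move m, personal best bp, herd best bg) and the normalizer lam are assumptions, and the exact update rules should be taken from the original ABO formulation.

```python
def abo_step(w, m, bp, bg, lp1=0.5, lp2=0.5, lam=1.0):
    """Hedged sketch of one buffalo update: the exploitation move m is pulled
    toward the herd best bg and the personal best bp (weighted by the learning
    parameters lp1, lp2 of Table 2), and the position w is refreshed from it."""
    m_new = m + lp1 * (bg - w) + lp2 * (bp - w)
    w_new = (w + m_new) / lam  # lam is an assumed normalizing constant
    return w_new, m_new

print(abo_step(0.0, 0.0, 1.0, 1.0))
```

The point of the sketch is structural: with only lp1 and lp2 to tune, each parameter's effect on exploration versus exploitation is easy to attribute, which is the "lean" property the text describes.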

6. ABO on Asymmetric TSP Test Cases

Moreover, we experimented on some ATSP test cases to verify the performance of the novel algorithm on such cases. The results obtained from this experiment using ABO are compared with the results obtained from solving some benchmark asymmetric TSP instances available in TSPLIB95 using the Randomized Arbitrary Insertion algorithm (RAI) [31], Ant Colony System 3-opt (ACS), Min-Max Ant System (MMAS), and Iterated Local Search (ILS), which is reported to be one of the best methods for solving TSP problems. These ATSP instances are ry48p, ft70, kro124p, ftv70, p43, and ftv170 [32].

Table 5 shows the comparative results obtained by applying five different algorithms to a number of benchmark asymmetric TSP instances, with the aim of evaluating the performance of the African Buffalo Optimization (ABO) algorithm. The first column lists the ATSP instances as recorded in the TSPLIB; the second column indicates the number of cities/locations/nodes involved in each instance; the third indicates the optimal solutions; these are followed by the performances of the different algorithms in at most 50 test runs.
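The defining feature of these asymmetric instances is that the distance from i to j need not equal the distance from j to i, so a tour and its reversal can have very different lengths. A small illustration (the matrix is made up for the example):

```python
def atsp_tour_length(tour, dist):
    """Closed-tour length; dist need not be symmetric."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Toy 3-city asymmetric matrix: hops "with the grain" cost 1, against it cost 5,
# so a tour and its reversal differ sharply.
dist = [[0, 1, 5],
        [5, 0, 1],
        [1, 5, 0]]
print(atsp_tour_length([0, 1, 2], dist), atsp_tour_length([0, 2, 1], dist))
```

This directionality is what makes instances like ft70 hard despite their small size: reversing a promising segment, which is free in the symmetric case, can be very costly here.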

A quick look at the table shows the excellent performance of the ILS and the MMAS, with both obtaining 100% results in the only four cases available in the literature that they solved. The ABO performed very well, achieving over 99% in all test cases but one. The ABO has approximately the same performance as RAI and the ACS, obtaining about 99% optimal results in virtually all cases. It is significant to note, however, that the ABO performed better than ACS and RAI in ft70, which is said to be a difficult ATSP instance to solve in spite of its relatively small size [33].

Next, the authors examined the speed with which each algorithm achieves its result, since one of the aims of the ABO is to


Table 5: Comparative results. For each method: Best / Avg / Rel. error (%).

Ry48p (48 cities, optimum 14422):
  ABO 14440 / 14455 / 0.12; RAI 14422 / 14543.20 / 0; MMAS 14422 / 14422 / 0; ACS 14422 / 14565.45 / 0; ILS 14422 / 14422 / 0
Ft70 (70 cities, optimum 38673):
  ABO 38753 / 38870.5 / 0.21; RAI 38855 / 39187.75 / 0.47; MMAS 38673 / 38687 / 0; ACS 38781 / 39099.05 / 0.28; ILS 38673 / 38687 / 0
Kro124p (100 cities, optimum 36230):
  ABO 36275 / 36713 / 0.12; RAI 36241 / 36594.23 / 0.04; MMAS 36230 / 36542 / 0; ACS 36241 / 36857 / 0.04; ILS 36230 / 36542 / 0
Ftv70 (71 cities, optimum 1950):
  ABO 1955 / 1958.5 / 0.26; RAI 1950 / 1968.44 / 0; MMAS, ACS, and ILS not reported
P43 (43 cities, optimum 5620):
  ABO 5645 / 5698 / 0.44; RAI 5620 / 5620.65 / 0; MMAS, ACS, and ILS not reported
Ftv170 (171 cities, optimum 2755):
  ABO 2795 / 2840 / 1.45; RAI 2764 / 2832.74 / 0.33; MMAS 2755 / 2755 / 0; ACS 2774 / 2826 / 0.69; ILS 2755 / 2756 / 0


Table 6: Comparative speed of algorithms (average time, seconds).

TSP cases | Cities | ABO | MIMM-ACO | MMAS | CGAS | RAI
Ry48p | 48 | 0.07 | 7.83 | 7.97 | 12.35 | 15.98
Ft70 | 70 | 0.05 | 9.85 | 10.15 | 15.32 | 70.68
Kro124p | 100 | 0.08 | 33.25 | 23.40 | 78.52 | 30.34
Ftv70 | 71 | 0.09 | 64.53 | 61.25 | 69.64 | 73.76
P43 | 43 | 0.01 | 8.35 | 9.38 | 0.997 | 0.997
Ftv170 | 171 | 0.65 | 108.28 | 96.73 | 276.1 | 27.61

solve the problem of delay in achieving optimal solutions, since speed is one of the hallmarks of a good algorithm [34]. In Table 6 we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Min-Max Ant System (MMAS), Cooperative Genetic Ant Systems (CGAS) [35], and RAI.

Table 6 displays the exceptional capacity of the novel African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions in record times. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, to MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 452.927 seconds, and RAI's 323.4 seconds. Undoubtedly, the ABO outperformed the other algorithms in obtaining results with very limited CPU time and resources. This is attributable to ABO's use of very few parameters, coupled with a straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.
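The per-run times in Table 6 are wall-clock measurements; a minimal harness of the kind typically used for such comparisons can be sketched as follows (the solver interface is assumed, not the authors'):

```python
import time

def timed_run(solver, *args):
    """Return a solver's result together with its wall-clock time in seconds."""
    start = time.perf_counter()
    result = solver(*args)
    return result, time.perf_counter() - start

# Any callable stands in for a TSP solver here.
result, secs = timed_run(sorted, [3, 1, 2])
print(result, secs >= 0.0)
```

`time.perf_counter` is preferred over `time.time` for interval timing because it is monotonic and has higher resolution; the reported averages would come from repeating such runs, here 50 per instance.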

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization to the known solutions of some popular Neural Network algorithms. These are Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. Comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions in Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method in Berlin52. Aside from these, ABO outperformed the other methods in getting near-optimal solutions in Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroB150, KroA200, KroB200, Rat575, Rl1323, Fl1400, and Rat783. It was only in Eil101 that Masutti and Castro's method outperformed ABO. This is further evidence that ABO is an excellent performer, even in competition with Neural Network methods.

8. Conclusion

In general, this paper introduces the novel algorithm, the African Buffalo Optimization, and shows its capacity to solve the Traveling Salesman's Problem. The performance obtained from using the ABO is weighed against the performance of some popular algorithms, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), Min-Max Ant System (MMAS), and the Randomized Insertion Algorithm (RAI); some hybrid methods, such as Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), the Hybrid Algorithm (HA), and Cooperative Genetic Ant Systems (CGAS); and some popular Neural-Network-based optimization methods. In total, 33 TSP datasets ranging from 48 to 14461 cities were investigated, and the ABO results obtained were compared with the results of 11 other optimization algorithms and methods. The results show the novel algorithm's capacity to obtain optimal or near-optimal solutions at an incredibly fast rate. The ABO's outstanding performance is a testimony to the fact that ABO has immense potential in solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problem with encouraging outcomes, it is recommended that more investigations be carried out to ascertain the veracity or otherwise of this new algorithm in solving other problems, such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors acknowledge the support of the anonymous reviewers for their constructive reviews that helped to enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

Table 7: ABO versus NN results. For each method: Best / Mean.

TSP instance (optimum) | ABO | Angeniol's method | Somhom et al.'s method | Pasti and Castro's method | Masutti and Castro's method
eil51 (426) | 426 / 427 | 432 / 442.90 | 433 / 440.57 | 429 / 438.70 | 427 / 437.47
eil76 (538) | 538 / 563.04 | 554 / 563.20 | 552 / 562.27 | 542 / 556.10 | 541 / 556.33
eil101 (629) | 640 / 640 | 655 / 665.93 | 640 / 655.57 | 641 / 654.83 | 638 / 648.63
berlin52 (7542) | 7542 / 7659.31 | 7778 / 8363.70 | 7715 / 8025.07 | 7716 / 8073.97 | 7542 / 7932.50
bier127 (118282) | 118297 / 118863 | 120110 / 128920.33 | 119840 / 121733.33 | 118760 / 121780.33 | 118970 / 120886.33
ch130 (6110) | 6111 / 6307.14 | 6265 / 6416.80 | 6203 / 6307.23 | 6142 / 6291.77 | 6145 / 6282.40
ch150 (6528) | 6532 / 6601 | 6634 / 6842.80 | 6631 / 6751 | 6629 / 6753.20 | 6602 / 6738.37
rd100 (7910) | 7935 / 7956 | 8088 / 8444.50 | 8028 / 8239.40 | 7947 / 8253.93 | 7982 / 8199.77
lin105 (14379) | 14419 / 14452.7 | 14999 / 16111.37 | 14379 / 14475.60 | 14379 / 14702.23 | 14379 / 14400.17
lin318 (42029) | 42101 / 42336 | 44869 / 45832.83 | 43154 / 43922.90 | 42975 / 43704.97 | 42834 / 43696.87
kroA100 (21282) | 21311 / 22163.8 | 23009 / 24678.80 | 21410 / 21616.77 | 21369 / 21868.47 | 21333 / 21522.73
kroA150 (26524) | 26526 / 27205 | 28948 / 29960.90 | 26930 / 27401.33 | 26932 / 27346.43 | 26678 / 27355.97
kroA200 (29368) | 29370 / 30152 | 31669 / 33228.33 | 30144 / 30415.67 | 29594 / 30257.53 | 29600 / 30190.27
kroB100 (22141) | 22160 / 22509 | 24026 / 25966.40 | 22548 / 22622.50 | 22596 / 22853.60 | 22343 / 22661.47
kroB150 (26130) | 26169 / 26431 | 27886 / 29404.53 | 26342 / 26806.33 | 26395 / 26752.13 | 26264 / 26631.87
kroB200 (29437) | 29487 / 29534 | 32351 / 33838.13 | 29703 / 30286.47 | 29831 / 30415.60 | 29637 / 30135.00
kroC100 (20749) | 20755 / 20881.7 | 22344 / 23496.13 | 20921 / 21149.87 | 20915 / 21231.60 | 20915 / 20971.23
kroD100 (21294) | 21347 / 21462 | 23076 / 23909.03 | 21500 / 21845.73 | 21457 / 22027.87 | 21374 / 21697.37
kroE100 (22068) | 22088 / 22702 | 23642 / 24828.03 | 22379 / 22682.47 | 22427 / 22815.50 | 22395 / 22715.63
rat575 (6773) | 6774 / 6810 | 8107 / 8301.83 | 7090 / 7173.63 | 7039 / 7125.07 | 7047 / 7115.67
rat783 (8806) | 8811 / 8881.75 | 10532 / 10721.60 | 9316 / 9387.57 | 9185 / 9326.30 | 9246 / 9343.77
rl1323 (270199) | 270480 / 278977 | 293350 / 301424.33 | 295780 / 300899.00 | 295060 / 300286.00 | 300770 / 305314.33
fl1400 (20127) | 20134 / 20167 | 20649 / 21174.67 | 20558 / 20742.60 | 20745 / 21070.57 | 20851 / 21110.00
d1655 (62128) | 62346 / 62599.5 | 68875 / 71168.07 | 67459 / 68046.37 | 70323 / 71431.70 | 70918 / 72113.17

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36–39, Springer, 2010.

[3] E. Cantu-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49–60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224–228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189–224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243–252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137–151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881–900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.


[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498–1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stutzle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49–75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671–1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. Garcia-Sanchez, J. J. Merelo, and P. A. Castillo, "Migration study on a pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57–64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73–81, 1997.

[24] M. G. C. Resende, R. Marti, M. Gallego, and A. Duarte, "GRASP and path relinking for the max–min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498–508, 2010.

[25] G. Reinelt, "TSPLIB95," 2012, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103–117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1–10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477–2482, July 2010.

[29] S. Georg, "MP-TESTDATA—The TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sorensen, "Metaheuristics—the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3–18, 2015.

[31] J. Brest and J. Zerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145–150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57–75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567–578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365–1375, 2013.

[36] N. Aras, B. J. Oommen, and I. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273–1284, 1999.



enhance the quality of this research paper Similarly theauthors are grateful to the Faculty of Computer Systems andSoftware Engineering Universiti Malaysia Pahang KuantanMalaysia for providing the conducive environment facilitiesand funds for this research through Grant GRS 1403118

References

[1] J Kennedy ldquoParticle swarm optimizationrdquo in Encyclopedia ofMachine Learning pp 760ndash766 Springer 2010

[2] M Dorigo and M Birattari ldquoAnt colony optimizationrdquo inEncyclopedia of Machine Learning pp 36ndash39 Springer 2010

[3] E Cantu-Paz ldquoA summary of research on parallel geneticalgorithmsrdquo IlliGAL Report 95007 Illinois Genetic AlgorithmsLaboratory 1995

[4] D Karaboga and B Basturk ldquoA powerful and efficient algo-rithm for numerical function optimization artificial bee colony(ABC) algorithmrdquo Journal of Global Optimization vol 39 no 3pp 459ndash471 2007

[5] J Andre P Siarry and T Dognon ldquoAn improvement of thestandard genetic algorithm fighting premature convergence incontinuous optimizationrdquo Advances in Engineering Softwarevol 32 no 1 pp 49ndash60 2001

[6] N Kumbharana and G M Pandey ldquoA comparative study ofACO GA and SA for solving travelling salesman problemrdquo

International Journal of Societal Applications of Computer Sci-ence vol 2 pp 224ndash228 2013

[7] Z Michalewicz and D B Fogel ldquoThe traveling salesmanproblemrdquo in How to Solve It Modern Heuristics pp 189ndash224Springer 2000

[8] J B Odili H Deng C Yang Y Sun M C d E Santo Ramosand M C G Ramos ldquoApplication of ant colony optimizationto solving the traveling salesmanrsquos problemrdquo Science Journal ofElectrical amp Electronic Engineering In press

[9] A Ben-Israel ldquoA Newton-Raphson method for the solution ofsystems of equationsrdquo Journal of Mathematical Analysis andApplications vol 15 pp 243ndash252 1966

[10] D P Bertsekas Dynamic Programming and Optimal Controlvol 1 Athena Scientific Belmont Mass USA 1995

[11] R G Ghanem and P D Spanos Stochastic Finite Elements ASpectral Approach Courier Corporation 2003

[12] S Binitha and S S Sathya ldquoA survey of bio inspired optimiza-tion algorithmsrdquo International Journal of Soft Computing andEngineering vol 2 pp 137ndash151 2012

[13] F Dressler and O B Akan ldquoA survey on bio-inspired network-ingrdquo Computer Networks vol 54 no 6 pp 881ndash900 2010

[14] X-S Yang and S Deb ldquoTwo-stage eagle strategy with differen-tial evolutionrdquo International Journal of Bio-Inspired Computa-tion vol 4 no 1 pp 1ndash5 2012

12 Computational Intelligence and Neuroscience

[15] S K Dhurandher S Misra P Pruthi S Singhal S Aggarwaland I Woungang ldquoUsing bee algorithm for peer-to-peer filesearching in mobile ad hoc networksrdquo Journal of Network andComputer Applications vol 34 no 5 pp 1498ndash1508 2011

[16] A RMehrabian and C Lucas ldquoA novel numerical optimizationalgorithm inspired from weed colonizationrdquo Ecological Infor-matics vol 1 no 4 pp 355ndash366 2006

[17] M Dorigo G Di Caro and LM Gambardella ldquoAnt algorithmsfor discrete optimizationrdquo Artificial Life vol 5 no 2 pp 137ndash172 1999

[18] M Clerc Particle Swarm Optimization vol 93 John Wiley ampSons 2010

[19] Z YuanM AM deOcaM Birattari and T Stutzle ldquoContinu-ous optimization algorithms for tuning real and integer param-eters of swarm intelligence algorithmsrdquo Swarm Intelligence vol6 no 1 pp 49ndash75 2012

[20] D Karaboga and B Akay ldquoA survey algorithms simulating beeswarm intelligencerdquo Artificial Intelligence Review vol 31 no 1ndash4 pp 61ndash85 2009

[21] J Kennedy and R Mendes ldquoPopulation structure and particleswarm performancerdquo in Proceedings of the Congress on Evo-lutionary Computation (CEC rsquo02) vol 2 pp 1671ndash1676 IEEEHonolulu Hawaii USA May 2002

[22] A M Mora P Garcıa-Sanchez J J Merelo and P A CastilloldquoMigration study on a pareto-based island model for MOA-COsrdquo in Proceedings of the 15th Genetic and EvolutionaryComputation Conference (GECCO rsquo13) pp 57ndash64 July 2013

[23] M Dorigo and L M Gambardella ldquoAnt colonies for thetravelling salesman problemrdquo BioSystems vol 43 no 2 pp 73ndash81 1997

[24] MG C Resende RMartıM Gallego andADuarte ldquoGRASPand path relinking for the maxndashmin diversity problemrdquo Com-puters amp Operations Research vol 37 no 3 pp 498ndash508 2010

[25] G Reinelt ldquoTsplib95 1995rdquo 2012 httpcomoptifiuni-heidel-bergdesoftwareTSPLIB95

[26] M Gunduz M S Kiran and E Ozceylan ldquoA hierarchicapproach based on swarm intelligence to solve the travelingsalesmanproblemrdquoTurkish Journal of Electrical Engineering andComputer Sciences vol 23 no 1 pp 103ndash117 2015

[27] H Jia ldquoA novel hybrid optimization algorithm and its appli-cation in solving complex problemrdquo International Journal ofHybrid Information Technology vol 8 pp 1ndash10 2015

[28] S-M Chen and C-Y Chien ldquoA new method for solving thetraveling salesman problem based on the genetic simulatedannealing ant colony system with particle swarm optimizationtechniquesrdquo in Proceedings of the International Conference onMachine Learning and Cybernetics (ICMLC rsquo10) pp 2477ndash2482July 2010

[29] S Georg ldquoMP-TESTDATAmdashThe TSPLIB symmetric travelingsalesman problem instancesrdquo 2008

[30] K Sorensen ldquoMetaheuristicsmdashthemetaphor exposedrdquo Interna-tional Transactions in Operational Research vol 22 no 1 pp3ndash18 2015

[31] J Brest and J Zerovnik ldquoA heuristic for the asymmetric travel-ing salesman problemrdquo in Proceedings of the 6th MetaheuristicsInternational Conference (MIC rsquo05) pp 145ndash150 Vienna Aus-tria 2005

[32] M Dorigo and L M Gambardella ldquoAnt colony system a coop-erative learning approach to the traveling salesman problemrdquoIEEETransactions on Evolutionary Computation vol 1 no 1 pp53ndash66 1997

[33] O C Martin and S W Otto ldquoCombining simulated annealingwith local search heuristicsrdquoAnnals of Operations Research vol63 pp 57ndash75 1996

[34] B Baritompa and E M Hendrix ldquoOn the investigation ofstochastic global optimization algorithmsrdquo Journal of GlobalOptimization vol 31 no 4 pp 567ndash578 2005

[35] J Bai G-K Yang Y-WChen L-SHu andC-C Pan ldquoAmodelinduced max-min ant colony optimization for asymmetrictraveling salesman problemrdquo Applied Soft Computing vol 13no 3 pp 1365ndash1375 2013

[36] N Aras B J Oommen and I K Altinel ldquoThe Kohonennetwork incorporating explicit statistics and its application tothe travelling salesman problemrdquo Neural Networks vol 12 no9 pp 1273ndash1284 1999

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 9: Solving the Traveling Salesman’s Problem Using the African ...umpir.ump.edu.my/id/eprint/11365/1/Solving the Traveling Salesman’… · ResearchArticle Solving the Traveling Salesman’s

Computational Intelligence and Neuroscience

Table 5: Comparative results (Best tour, Average tour, and relative error in %; "—" indicates results not reported).

Instance | Cities | Optimum | ABO (Best / Avg / Rel. err %) | RAI (Best / Avg / Rel. err %) | MMAS (Best / Avg / Rel. err %) | ACS (Best / Avg / Rel. err %) | ILS (Best / Avg / Rel. err %)
Ry48p | 48 | 14422 | 14440 / 14455 / 0.12 | 14422 / 14543.20 / 0 | 14422 / 14422 / 0 | 14422 / 14565.45 / 0 | 14422 / 14422 / 0
Ft70 | 70 | 38673 | 38753 / 38870.5 / 0.21 | 38855 / 39187.75 / 0.47 | 38673 / 38687 / 0 | 38781 / 39099.05 / 0.28 | 38673 / 38687 / 0
Kro124p | 100 | 36230 | 36275 / 36713 / 0.12 | 36241 / 36594.23 / 0.04 | 36230 / 36542 / 0 | 36241 / 36857 / 0.04 | 36230 / 36542 / 0
Ftv70 | 71 | 1950 | 1955 / 1958.5 / 0.26 | 1950 / 1968.44 / 0 | — | — | —
P43 | 43 | 5620 | 5645 / 5698 / 0.44 | 5620 / 5620.65 / 0 | — | — | —
Ftv170 | 171 | 2755 | 2795 / 2840.5 / 1.45 | 2764 / 2832.74 / 0.33 | 2755 / 2755 / 0 | 2774 / 2826 / 0.69 | 2755 / 2756 / 0
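The relative-error column in Table 5 is simply the percentage gap between an algorithm's best tour and the known optimum. A minimal sketch of that arithmetic, using the Ry48p values as reconstructed here (the function name `rel_error` is illustrative, not from the paper):

```python
# Percent relative error as used in Table 5: 100 * (best - optimum) / optimum.
def rel_error(best: float, optimum: float) -> float:
    return 100.0 * (best - optimum) / optimum

# ABO on Ry48p: best tour 14440 against the known optimum 14422.
print(round(rel_error(14440, 14422), 2))  # 0.12, matching the ABO column
# An algorithm that reaches the optimum has a relative error of 0.
print(rel_error(14422, 14422))  # 0.0
```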


Table 6: Comparative speed of algorithms (average time in seconds).

Instance | Cities | ABO | MIMM-ACO | MMAS | CGAS | RAI
Ry48p | 48 | 0.07 | 7.83 | 7.97 | 12.35 | 15.98
Ft70 | 70 | 0.05 | 9.85 | 10.15 | 15.32 | 70.68
Kro124p | 100 | 0.08 | 33.25 | 23.40 | 78.52 | 30.34
Ftv70 | 71 | 0.09 | 64.53 | 61.25 | 69.64 | 73.76
P43 | 43 | 0.10 | 8.35 | 9.38 | 0.997 | 0.997
Ftv170 | 171 | 0.65 | 108.28 | 96.73 | 276.10 | 276.10

solve the problem of delay in achieving optimal solutions, since speed is one of the hallmarks of a good algorithm [34]. In Table 6 we compare the speed of the ABO with those of the recently published Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), Min-Max Ant System (MMAS), Cooperative Genetic Ant Systems (CGAS) [35], and RAI.

Table 6 displays the exceptional capacity of the African Buffalo Optimization (ABO) to obtain optimal or near-optimal solutions in record time. It took the ABO less than a second (precisely 0.95 seconds) to solve all the benchmark cases under investigation, compared with MIMM-ACO's 232.09 seconds, MMAS's 208.88 seconds, CGAS's 452.927 seconds, and RAI's 323.4 seconds. Undoubtedly, the ABO outperformed the other algorithms in obtaining results with very limited CPU time and resources. This is attributable to ABO's use of very few parameters coupled with a straightforward fitness function. The exceptional speed of the ABO, combined with its ability to obtain very competitive results, recommends the ABO as an algorithm of choice when speed is a factor.
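The quoted totals can be cross-checked by summing the per-instance average times in Table 6. A small sketch, using the figures as reconstructed here (the variable names are illustrative):

```python
# Cross-check the runtime totals quoted in the text against the per-instance
# average times (seconds) from Table 6, as reconstructed in this document.
avg_times = {
    "MIMM-ACO": [7.83, 9.85, 33.25, 64.53, 8.35, 108.28],
    "MMAS":     [7.97, 10.15, 23.40, 61.25, 9.38, 96.73],
    "CGAS":     [12.35, 15.32, 78.52, 69.64, 0.997, 276.10],
}
for algo, times in avg_times.items():
    print(algo, round(sum(times), 3))
# MIMM-ACO 232.09, MMAS 208.88, CGAS 452.927 -- matching the totals in the text
```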

7. Comparing ABO to Neural Network Solutions to TSP

Moreover, following the popularity and efficiency of Neural Networks in obtaining good solutions to the Travelling Salesman's Problem [36], the authors compared the performance of the African Buffalo Optimization with the known solutions of some popular Neural Network algorithms: Angeniol's method, Somhom et al.'s method, Pasti and Castro's method, and Masutti and Castro's method. The comparative experimental figures are obtained from [28].

From the experimental results in Table 7, it can be seen that only the ABO obtained the optimal solutions in Eil51 and Eil76, in addition to jointly obtaining the optimal result with Masutti and Castro's method in Berlin52. Aside from these, ABO outperformed the other methods in getting near-optimal solutions in Bier127, KroA100, KroB100, KroC100, KroD100, KroE100, Ch130, Ch150, KroA150, KroB150, KroA200, KroB200, Rat575, Rl1323, Fl1400, and Rat783. It was only in Eil101 that Masutti and Castro's method outperformed ABO. This is further evidence that ABO is an excellent performer even in competition with Neural Network methods.
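The per-instance comparison above reduces to picking the method with the smallest best-tour length for each TSP instance. A minimal sketch using two rows of Table 7 as reconstructed here (the data structure and function name are illustrative, not from the paper):

```python
# Best-tour lengths for two instances, taken from the "Best" columns of Table 7.
best_tours = {
    "eil51":  {"ABO": 426, "Angeniol": 432, "Somhom": 433, "Pasti": 429, "Masutti": 427},
    "eil101": {"ABO": 640, "Angeniol": 655, "Somhom": 640, "Pasti": 641, "Masutti": 638},
}

def winner(instance: str) -> str:
    """Return the method with the shortest best tour for a given instance."""
    methods = best_tours[instance]
    return min(methods, key=methods.get)

print(winner("eil51"))   # ABO, which also matches the known optimum of 426
print(winner("eil101"))  # the one instance where Masutti and Castro's method wins
```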

8. Conclusion

In general, this paper introduces a novel algorithm, the African Buffalo Optimization, and shows its capacity to solve the Traveling Salesman's Problem. The performance obtained from using the ABO is weighed against that of some popular algorithms, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony Optimization (ABC), Min-Max Ant System (MMAS), and the Randomized Insertion Algorithm (RAI); some hybrid methods, such as Model Induced Max-Min Ant Colony Optimization (MIMM-ACO), the Hybrid Algorithm (HA), and Cooperative Genetic Ant Systems (CGAS); and some popular Neural Network-based optimization methods. In total, 33 TSP datasets ranging from 48 to 14461 cities were investigated, and the ABO results were compared with those obtained from 11 other optimization algorithms and methods. The results show the novel algorithm's capacity to obtain optimal or near-optimal solutions at remarkable speed. The ABO's performance indicates that it has immense potential for solving optimization problems using relatively fewer parameters than most other optimization algorithms in the literature.

Having observed the robustness of the ABO in solving the Travelling Salesman's Problem with encouraging outcomes, it is recommended that further investigations be carried out to ascertain the effectiveness or otherwise of this new algorithm on other problems, such as PID tuning, the knapsack problem, vehicle routing, job scheduling, and urban transportation problems.

Conflict of Interests

The authors assert that there is no conflict of interests in the publication of this research paper.

Acknowledgments

The authors acknowledge the support of the anonymous reviewers, whose constructive reviews helped to enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

Table 7: ABO versus Neural Network methods (best and mean tour lengths).

Instance | Optimum | ABO (Best / Mean) | Angeniol (Best / Mean) | Somhom et al. (Best / Mean) | Pasti & Castro (Best / Mean) | Masutti & Castro (Best / Mean)
eil51 | 426 | 426 / 427 | 432 / 442.90 | 433 / 440.57 | 429 / 438.70 | 427 / 437.47
eil76 | 538 | 538 / 563.04 | 554 / 563.20 | 552 / 562.27 | 542 / 556.10 | 541 / 556.33
eil101 | 629 | 640 / 640 | 655 / 665.93 | 640 / 655.57 | 641 / 654.83 | 638 / 648.63
berlin52 | 7542 | 7542 / 7659.31 | 7778 / 8363.70 | 7715 / 8025.07 | 7716 / 8073.97 | 7542 / 7932.50
bier127 | 118282 | 118297 / 118863 | 120110 / 128920.33 | 119840 / 121733.33 | 118760 / 121780.33 | 118970 / 120886.33
ch130 | 6110 | 6111 / 6307.14 | 6265 / 6416.80 | 6203 / 6307.23 | 6142 / 6291.77 | 6145 / 6282.40
ch150 | 6528 | 6532 / 6601 | 6634 / 6842.80 | 6631 / 6751 | 6629 / 6753.20 | 6602 / 6738.37
rd100 | 7910 | 7935 / 7956 | 8088 / 8444.50 | 8028 / 8239.40 | 7947 / 8253.93 | 7982 / 8199.77
lin105 | 14379 | 14419 / 14452.7 | 14999 / 16111.37 | 14379 / 14475.60 | 14379 / 14702.23 | 14379 / 14400.17
lin318 | 42029 | 42101 / 42336 | 44869 / 45832.83 | 43154 / 43922.90 | 42975 / 43704.97 | 42834 / 43696.87
kroA100 | 21282 | 21311 / 22163.8 | 23009 / 24678.80 | 21410 / 21616.77 | 21369 / 21868.47 | 21333 / 21522.73
kroA150 | 26524 | 26526 / 27205 | 28948 / 29960.90 | 26930 / 27401.33 | 26932 / 27346.43 | 26678 / 27355.97
kroA200 | 29368 | 29370 / 30152 | 31669 / 33228.33 | 30144 / 30415.67 | 29594 / 30257.53 | 29600 / 30190.27
kroB100 | 22141 | 22160 / 22509 | 24026 / 25966.40 | 22548 / 22622.50 | 22596 / 22853.60 | 22343 / 22661.47
kroB150 | 26130 | 26169 / 26431 | 27886 / 29404.53 | 26342 / 26806.33 | 26395 / 26752.13 | 26264 / 26631.87
kroB200 | 29437 | 29487 / 29534 | 32351 / 33838.13 | 29703 / 30286.47 | 29831 / 30415.60 | 29637 / 30135.00
kroC100 | 20749 | 20755 / 20881.7 | 22344 / 23496.13 | 20921 / 21149.87 | 20915 / 21231.60 | 20915 / 20971.23
kroD100 | 21294 | 21347 / 21462 | 23076 / 23909.03 | 21500 / 21845.73 | 21457 / 22027.87 | 21374 / 21697.37
kroE100 | 22068 | 22088 / 22702 | 23642 / 24828.03 | 22379 / 22682.47 | 22427 / 22815.50 | 22395 / 22715.63
rat575 | 6773 | 6774 / 6810 | 8107 / 8301.83 | 7090 / 7173.63 | 7039 / 7125.07 | 7047 / 7115.67
rat783 | 8806 | 8811 / 8881.75 | 10532 / 10721.60 | 9316 / 9387.57 | 9185 / 9326.30 | 9246 / 9343.77
rl1323 | 270199 | 270480 / 278977 | 293350 / 301424.33 | 295780 / 300899.00 | 295060 / 300286.00 | 300770 / 305314.33
fl1400 | 20127 | 20134 / 20167 | 20649 / 21174.67 | 20558 / 20742.60 | 20745 / 21070.57 | 20851 / 21110.00
d1655 | 62128 | 62346 / 62599.5 | 68875 / 71168.07 | 67459 / 68046.37 | 70323 / 71431.70 | 70918 / 72113.17

References

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer, 2010.

[2] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning, pp. 36–39, Springer, 2010.

[3] E. Cantu-Paz, "A summary of research on parallel genetic algorithms," IlliGAL Report 95007, Illinois Genetic Algorithms Laboratory, 1995.

[4] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.

[5] J. Andre, P. Siarry, and T. Dognon, "An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization," Advances in Engineering Software, vol. 32, no. 1, pp. 49–60, 2001.

[6] N. Kumbharana and G. M. Pandey, "A comparative study of ACO, GA and SA for solving travelling salesman problem," International Journal of Societal Applications of Computer Science, vol. 2, pp. 224–228, 2013.

[7] Z. Michalewicz and D. B. Fogel, "The traveling salesman problem," in How to Solve It: Modern Heuristics, pp. 189–224, Springer, 2000.

[8] J. B. Odili, H. Deng, C. Yang, Y. Sun, M. C. d. E. Santo Ramos, and M. C. G. Ramos, "Application of ant colony optimization to solving the traveling salesman's problem," Science Journal of Electrical & Electronic Engineering, in press.

[9] A. Ben-Israel, "A Newton-Raphson method for the solution of systems of equations," Journal of Mathematical Analysis and Applications, vol. 15, pp. 243–252, 1966.

[10] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, Belmont, Mass, USA, 1995.

[11] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, 2003.

[12] S. Binitha and S. S. Sathya, "A survey of bio inspired optimization algorithms," International Journal of Soft Computing and Engineering, vol. 2, pp. 137–151, 2012.

[13] F. Dressler and O. B. Akan, "A survey on bio-inspired networking," Computer Networks, vol. 54, no. 6, pp. 881–900, 2010.

[14] X.-S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.

[15] S. K. Dhurandher, S. Misra, P. Pruthi, S. Singhal, S. Aggarwal, and I. Woungang, "Using bee algorithm for peer-to-peer file searching in mobile ad hoc networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1498–1508, 2011.

[16] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.

[17] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[18] M. Clerc, Particle Swarm Optimization, vol. 93, John Wiley & Sons, 2010.

[19] Z. Yuan, M. A. M. de Oca, M. Birattari, and T. Stutzle, "Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms," Swarm Intelligence, vol. 6, no. 1, pp. 49–75, 2012.

[20] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.

[21] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1671–1676, IEEE, Honolulu, Hawaii, USA, May 2002.

[22] A. M. Mora, P. Garcia-Sanchez, J. J. Merelo, and P. A. Castillo, "Migration study on a pareto-based island model for MOACOs," in Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO '13), pp. 57–64, July 2013.

[23] M. Dorigo and L. M. Gambardella, "Ant colonies for the travelling salesman problem," BioSystems, vol. 43, no. 2, pp. 73–81, 1997.

[24] M. G. C. Resende, R. Marti, M. Gallego, and A. Duarte, "GRASP and path relinking for the max–min diversity problem," Computers & Operations Research, vol. 37, no. 3, pp. 498–508, 2010.

[25] G. Reinelt, "TSPLIB95," 1995, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95.

[26] M. Gunduz, M. S. Kiran, and E. Ozceylan, "A hierarchic approach based on swarm intelligence to solve the traveling salesman problem," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 23, no. 1, pp. 103–117, 2015.

[27] H. Jia, "A novel hybrid optimization algorithm and its application in solving complex problem," International Journal of Hybrid Information Technology, vol. 8, pp. 1–10, 2015.

[28] S.-M. Chen and C.-Y. Chien, "A new method for solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," in Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '10), pp. 2477–2482, July 2010.

[29] S. Georg, "MP-TESTDATA—The TSPLIB symmetric traveling salesman problem instances," 2008.

[30] K. Sorensen, "Metaheuristics—the metaphor exposed," International Transactions in Operational Research, vol. 22, no. 1, pp. 3–18, 2015.

[31] J. Brest and J. Zerovnik, "A heuristic for the asymmetric traveling salesman problem," in Proceedings of the 6th Metaheuristics International Conference (MIC '05), pp. 145–150, Vienna, Austria, 2005.

[32] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.

[33] O. C. Martin and S. W. Otto, "Combining simulated annealing with local search heuristics," Annals of Operations Research, vol. 63, pp. 57–75, 1996.

[34] B. Baritompa and E. M. Hendrix, "On the investigation of stochastic global optimization algorithms," Journal of Global Optimization, vol. 31, no. 4, pp. 567–578, 2005.

[35] J. Bai, G.-K. Yang, Y.-W. Chen, L.-S. Hu, and C.-C. Pan, "A model induced max-min ant colony optimization for asymmetric traveling salesman problem," Applied Soft Computing, vol. 13, no. 3, pp. 1365–1375, 2013.

[36] N. Aras, B. J. Oommen, and I. K. Altinel, "The Kohonen network incorporating explicit statistics and its application to the travelling salesman problem," Neural Networks, vol. 12, no. 9, pp. 1273–1284, 1999.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 10: Solving the Traveling Salesman’s Problem Using the African ...umpir.ump.edu.my/id/eprint/11365/1/Solving the Traveling Salesman’… · ResearchArticle Solving the Traveling Salesman’s

10 Computational Intelligence and Neuroscience

Table 6 Comparative speed of algorithms

TSP cases Number of citiesABO MIMM-ACO MMAS CGAS RAI

Avg time(secs)

Avg time(secs)

Avg time(secs)

Avg time(secs)

Avg time(secs)

Ry48p 48 007 783 797 1235 1598Ft70 70 005 985 1015 1532 7068Kro124p 100 008 3325 234 7852 3034Ftv70 71 009 6453 6125 6964 7376P43 43 01 835 938 0997 0997Ftv170 171 065 10828 9673 2761 2761

solve the problem of delay in achieving optimal solutionssince speed is one of the hallmarks of a good algorithm[34] In Table 6 we compare the speed of the ABO withthose of the recently publishedModel InducedMax-Min AntColony Optimization (MIMM-ACO) Min-Max Ant System(MMAS)CooperativeGenetic Ant Systems (CGAS) [35] andRAI

Table 6 displays the exceptional capacity of the novelAfrican Buffalo Optimization (ABO) to obtain optimal ornear-optimal solutions at record times It took the ABOless than a second (precisely 095 seconds) to solve allthe benchmark cases under investigation to MIMM-ACOrsquos23209 seconds MMASrsquos 20888 seconds CGASrsquo 452927seconds and RAIrsquos 3234 seconds Undoubtedly the ABOoutperformed the other algorithms in its quest to obtainresults using very limited CPU time and resources Thisbrought aboutABOrsquos use of very few parameters coupledwithstraightforward fitness functionThe exceptional speed of theABO compared with its competitive ability to obtain verycompetitive results recommends the ABO as an algorithm ofchoice when speed is a factor

7 Comparing ABO to Neural NetworkSolutions to TSP

Moreover following the popularity and efficiency of NeuralNetworks in obtaining good solutions to the Travelling Sales-manrsquos Problem [36] the authors compared the performanceof African Buffalo Optimization to the known solutions ofsome popular Neural Network algorithms These are Ange-niolrsquos method Somhom et alrsquos method Pasti and Castrorsquosmethod and Masutti and Castrorsquos method Comparativeexperimental figures are obtained from [28]

From the experimental results in Table 7 it can beseen that only the ABO obtained the optimal solutionsin Eil51 and Eil76 in addition to jointly obtaining theoptimal result with Masutti and Castrorsquos method in Berlin52Aside from these ABO outperformed the other meth-ods in getting near-optimal solutions in Bier127 KroA100KroB100 KroB100 KroC100 KroD100 KroE100 Ch130Ch150 KroA150 KroB150 KroA200 KroB150 KroB200Rat575 rl1323 fl1400 fl1400 and Rat783 It was only in Eil101that Masutti and Castrorsquos method outperformed ABOThis is

a further evidence that ABO is an excellent performer evenwhen in competition with Neural Networks methods

8 Conclusion

In general this paper introduces the novel algorithm theAfrican Buffalo Optimization and shows its capacity to solvethe Traveling Salesmanrsquos ProblemThe performance obtainedfrom using the ABO is weighed against the performanceof some popular algorithms such as the Ant Colony Opti-mization (ACO) Particle Swarm Optimization (PSO) Arti-ficial Bee Colony Optimization (ABO) Min-Max Ant Sys-tem (MMAS) and Randomized Insertion Algorithm (RAI)some hybrid methods such as Model Induced Max-MinAnt ColonyOptimization (MIMM-ACO) Hybrid Algorithm(HA) and Cooperative Genetic Ant Systems (CGAS) andsome popular Neural Networks-based optimization meth-ods In total 33 TSP datasets cases ranging from 48 to 14461cities were investigated and the ABO results obtained werecompared with results obtained from 11 other optimizationalgorithms and methods The results show the amazingperformance of the novel algorithmrsquos capacity to obtainoptimal or near-optimal solutions at an incredibly fast rateThe ABOrsquos outstanding performance is a testimony to thefact that ABOhas immense potentials in solving optimizationproblems using relatively fewer parameters than most otheroptimization algorithms in literature

Having observed the robustness of theABO in solving theTravelling Salesmanrsquos Problems with encouraging outcomeit is recommended that more investigations be carried outto ascertain the veracity or otherwise of this new algorithmin solving other problems such as PID tuning knapsackproblem vehicle routing job scheduling and urban trans-portation problems

Conflict of Interests

The authors assert that there is no conflict of interests in thepublication of this research paper

Acknowledgments

The authors acknowledge the support of the anonymous reviewers, whose constructive reviews helped to enhance the quality of this research paper. Similarly, the authors are grateful to the Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang, Kuantan, Malaysia, for providing the conducive environment, facilities, and funds for this research through Grant GRS 1403118.

Computational Intelligence and Neuroscience 11

Table 7: ABO versus NN results.

| TSP instance | Optimum | ABO best | ABO mean | Angeniol's method best | Angeniol's method mean | Somhom et al.'s method best | Somhom et al.'s method mean | Pasti and Castro's method best | Pasti and Castro's method mean | Masutti and Castro's method best | Masutti and Castro's method mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| eil51 | 426 | 426 | 427 | 432 | 442.90 | 433 | 440.57 | 429 | 438.70 | 427 | 437.47 |
| eil76 | 538 | 538 | 563.04 | 554 | 563.20 | 552 | 562.27 | 542 | 556.10 | 541 | 556.33 |
| eil101 | 629 | 640 | 640 | 655 | 665.93 | 640 | 655.57 | 641 | 654.83 | 638 | 648.63 |
| berlin52 | 7542 | 7542 | 7659.31 | 7778 | 8363.70 | 7715 | 8025.07 | 7716 | 8073.97 | 7542 | 7932.50 |
| bier127 | 118282 | 118297 | 118863 | 120110 | 128920.33 | 119840 | 121733.33 | 118760 | 121780.33 | 118970 | 120886.33 |
| ch130 | 6110 | 6111 | 6307.14 | 6265 | 6416.80 | 6203 | 6307.23 | 6142 | 6291.77 | 6145 | 6282.40 |
| ch150 | 6528 | 6532 | 6601 | 6634 | 6842.80 | 6631 | 6751 | 6629 | 6753.20 | 6602 | 6738.37 |
| rd100 | 7910 | 7935 | 7956 | 8088 | 8444.50 | 8028 | 8239.40 | 7947 | 8253.93 | 7982 | 8199.77 |
| lin105 | 14379 | 14419 | 14452.7 | 14999 | 16111.37 | 14379 | 14475.60 | 14379 | 14702.23 | 14379 | 14400.17 |
| lin318 | 42029 | 42101 | 42336 | 44869 | 45832.83 | 43154 | 43922.90 | 42975 | 43704.97 | 42834 | 43696.87 |
| kroA100 | 21282 | 21311 | 22163.8 | 23009 | 24678.80 | 21410 | 21616.77 | 21369 | 21868.47 | 21333 | 21522.73 |
| kroA150 | 26524 | 26526 | 27205 | 28948 | 29960.90 | 26930 | 27401.33 | 26932 | 27346.43 | 26678 | 27355.97 |
| kroA200 | 29368 | 29370 | 30152 | 31669 | 33228.33 | 30144 | 30415.67 | 29594 | 30257.53 | 29600 | 30190.27 |
| kroB100 | 22141 | 22160 | 22509 | 24026 | 25966.40 | 22548 | 22622.50 | 22596 | 22853.60 | 22343 | 22661.47 |
| kroB150 | 26130 | 26169 | 26431 | 27886 | 29404.53 | 26342 | 26806.33 | 26395 | 26752.13 | 26264 | 26631.87 |
| kroB200 | 29437 | 29487 | 29534 | 32351 | 33838.13 | 29703 | 30286.47 | 29831 | 30415.60 | 29637 | 30135.00 |
| kroC100 | 20749 | 20755 | 20881.7 | 22344 | 23496.13 | 20921 | 21149.87 | 20915 | 21231.60 | 20915 | 20971.23 |
| kroD100 | 21294 | 21347 | 21462 | 23076 | 23909.03 | 21500 | 21845.73 | 21457 | 22027.87 | 21374 | 21697.37 |
| kroE100 | 22068 | 22088 | 22702 | 23642 | 24828.03 | 22379 | 22682.47 | 22427 | 22815.50 | 22395 | 22715.63 |
| rat575 | 6773 | 6774 | 6810 | 8107 | 8301.83 | 7090 | 7173.63 | 7039 | 7125.07 | 7047 | 7115.67 |
| rat783 | 8806 | 8811 | 8881.75 | 10532 | 10721.60 | 9316 | 9387.57 | 9185 | 9326.30 | 9246 | 9343.77 |
| rl1323 | 270199 | 270480 | 278977 | 293350 | 301424.33 | 295780 | 300899.00 | 295060 | 300286.00 | 300770 | 305314.33 |
| fl1400 | 20127 | 20134 | 20167 | 20649 | 21174.67 | 20558 | 20742.60 | 20745 | 21070.57 | 20851 | 21110.00 |
| d1655 | 62128 | 62346 | 62599.5 | 68875 | 71168.07 | 67459 | 68046.37 | 70323 | 71431.70 | 70918 | 72113.17 |

