
[IEEE 2009 4th International Symposium on Computational Intelligence and Intelligent Informatics (ISCIII) - Luxor, Egypt, 21–25 October 2009]

978-1-4244-5382-5/09/$26.00 ©2009 IEEE

Ant Supervised by PSO

Walid ELLOUMI, Nizar ROKBANI and Adel M. ALIMI

REGIM: Research Group on Intelligent Machines, University of Sfax, ENIS, BP - 1173

Sfax 3038, Tunisia
{elloumiwalid, nizar.rokbani, adel.alimi}@ieee.org

Abstract— Swarm-inspired optimization has become an attractive research field. Since most real-world problems are multi-criteria ones, multi-objective algorithms seem the most fitted to solve them. Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) have attracted the interest of researchers. Our proposal is to make PSO supervise an ant optimizer. In this paper we propose an ant colony algorithm supervised by Particle Swarm Optimization to solve continuous optimization problems. Traditional ACO is used for discrete optimization, while PSO is designed for continuous optimization problems. Separately, PSO and ACO have shown great potential in solving a wide range of optimization problems. Aimed at solving continuous problems effectively, this paper develops a novel ant algorithm, "Ant Supervised by PSO" (A.S.PSO). The proposed algorithm can reduce the probability of being trapped in local optima and enhance the global search capability and accuracy. An elitist strategy is also employed to preserve the most valuable points. The pheromone deposited by the ants' mechanisms is used by the PSO as a weight for its particles, ensuring a better global search strategy. By using the A.S.PSO design method, ants supervised by PSO in the feasible domain can explore their chosen regions rapidly and efficiently.

Keywords- Ant Colony Optimization; Particle Swarm Optimization; continuous optimization

I. INTRODUCTION

Russell Eberhart and James Kennedy [1] initially proposed particle swarm optimization, PSO, as an approach for modeling the social and cultural organization of groups of animals and insects, their aptitude for organization and for solving problems. PSO was then proposed as an optimization technique [2][3].

Ant-based approaches were proposed for combinatorial optimization problems [4][5], originally motivated by the attempt to solve the well-known Travelling Salesperson Problem (TSP); the inventors of the approach soon recognized that their technique is applicable to a much larger range of problems. In an explicit form, this insight was established by the creation of the Ant Colony Optimization (ACO) metaheuristic by Dorigo and Di Caro [6]. Gutjahr carried out a proof of ACO convergence [8][9]; Stützle and Dorigo, later on, gave evidence of convergence to an optimal solution [7][10].

The analytical investigation of the runtime of ACO algorithms is still a very new issue. There are already some first results available, and they promise that a large amount of knowledge on this issue can be won within the next years. Some of the existing results already allow a comparison of specific ACO algorithms with counterparts from the field of Evolutionary Algorithms, in particular with the so-called (1+1) Evolutionary Algorithm (EA) [11].

In parallel to the development of ACO, a few years ago a new paradigm, PSO, emerged. At first it was simply an approach, yet now it has attracted the interest of researchers around the globe, with a body of important work that gave direction to research in particle swarms as well as interesting new directions and applications.

The subject matter we will test consists of joining ACO and PSO to build a new hybrid approach to the TSP problem. As soon as one passes from the question of convergence of an algorithm to that of the speed of convergence, one cannot expect anymore to obtain very general results. This is a consequence of the so-called "no-free-lunch theorems," which basically state that for each algorithm that performs well on some optimization problems, there must necessarily be other problems where it fails to be efficient.

The aim of the present article is to introduce a new PSO-ACO paradigm based on which results of the outlined type can be derived in a mathematically sound way, to present already available results, to give an introduction to the techniques by which these results have been obtained, and to outline their scope of application.

The paper is organized as follows: Section 2 introduces the social intelligence paradigm with a focus on social insects. Section 3 presents PSO and ant-based optimization, since these techniques are the key issues of our proposal. In Section 4 we detail A.S.PSO, the Ant Supervised by PSO algorithm. The paper is concluded in Section 5.

II. ACCOMPLISHMENTS OF THE SOCIAL INSECTS

An insect may have only a few hundred brain cells, but insect organizations are capable of architectural marvels, elaborate communication systems, and terrific resistance to the threats of nature [12]. Extrapolating from Lorenz's descriptions of the fixed action patterns, the stereotypical



"instinctive" behaviors of organisms, Wilson theorized that the dazzling accomplishments of ant societies could be explained and understood in terms of simple fixed action patterns, which, Wilson discovered, included behavioral responses to pheromones, chemicals that possess a kind of odor that can be detected by other ants.

The study of complex systems and the rise to prominence of computer simulation models of such systems gave scientists the tools they needed to model the simple behaviors of ants and how they combine to produce an effect that is much more than the sum of its parts. These insights have in turn led to more insights about the nature of man and society and about the physical world. Insect sociality is a classic example of the emergence of global effects from local interactions [13].

In fact, all their published models derive from the activities of the social insects. Our use of the term is even less restrictive than Bonabeau, Dorigo, and Theraulaz's [14]. We note, for instance, that the term "swarm intelligence" has been used in the field of semiotics to describe the kind of irrational buzz of ideas in the mind that underlies the communication of signs between two individuals [15].

III. COMPUTATIONAL SWARM INTELLIGENCE

We use the term "swarm" in a general sense to refer to any such loosely structured collection of interacting agents [16]. The classic example of a swarm is a swarm of bees, but the metaphor of a swarm can be extended to other systems with a similar architecture. An ant colony can be thought of as a swarm whose individual agents are ants [18], a flock of birds is a swarm whose agents are birds [17], traffic is a swarm of cars, a crowd is a swarm of people, an immune system is a swarm of cells and molecules, and an economy is a swarm of economic agents.

A. PSO

The initial ideas on particle swarms of Kennedy (a social psychologist) and Eberhart (an electrical engineer) were essentially aimed at producing computational intelligence by exploiting simple analogues of social interaction, rather than purely individual cognitive abilities. The first simulations [1] were influenced by Heppner and Grenander's work [21] and involved analogues of bird flocks searching for corn. These soon developed [1][22][23][24] into a powerful optimization method, Particle Swarm Optimization (PSO).

In PSO a number of simple entities, the particles, are placed in the search space of some problem or function, and each evaluates the objective function at its current location. Each particle then determines its movement through the search space by combining some aspect of the history of its own current and best (best-fitness) locations with those of one or more members of the swarm, with some random perturbations. The next iteration takes place after all particles have been moved. Eventually the swarm as a whole, like a flock of birds collectively foraging for food, is likely to move close to an optimum of the fitness function.

Each individual in the particle swarm is composed of three D-dimensional vectors, where D is the dimensionality of the search space. These are the current position xi, the previous best position pi, and the velocity vi.

The current position xi can be considered as a set of coordinates describing a point in space. On each iteration of the algorithm, the current position is evaluated as a problem solution. If that position is better than any that has been found so far, then its coordinates are stored in the second vector, pi. The value of the best function result so far is stored in a variable that can be called pbesti (for "previous best"), for comparison on later iterations. The objective, of course, is to keep finding better positions and updating pi and pbesti. New points are chosen by adding the vi coordinates to xi, and the algorithm operates by adjusting vi, which can effectively be seen as a step size.

The particle swarm is more than just a collection of particles. A particle by itself has almost no power to solve any problem; progress occurs only when the particles interact.

V^i_{j,t+1} = W·V^i_{j,t} + C1·R1·(P^i_{j,t} − X^i_{j,t}) + C2·R2·(P^{i,g}_{j,t} − X^i_{j,t})   (1)

X^i_{j,t+1} = X^i_{j,t} + V^i_{j,t+1}   (2)

where j = 1, ..., n and i = 1, ..., N; C1 and C2 are two positive constants; R1 and R2 are random values in the range [0, 1]; and

• W is called the inertia weight of the particle. It is employed to control the impact of the previous history of velocities on the current velocity, and thus to influence the trade-off between the global and local exploration abilities of the particles. A larger inertia weight W facilitates global exploration, while a smaller inertia weight tends to facilitate local exploration to fine-tune the current search area. Suitable selection of the inertia weight W can provide a balance between global and local exploration abilities, requiring fewer iterations on average to find the optimum. A nonzero inertia weight introduces a preference for the particle to continue moving in the same direction as in the previous iteration.

• C1R1 and C2R2 are called control parameters. These two control parameters determine the type of trajectory the particle travels. If R1 and R2 are 0.0, it is obvious that v = v + 0 and x = x + v (for w = 1), meaning the particles move linearly. If they are set to very small



values, the trajectory of x rises and falls slowly over time.

• P^{i,g}_t is the position of the global best particle in the population, which guides the particles to move towards the optimum. The important part in MOPSO is to determine the best global particle P^{i,g}_t for each particle i of the population. In single-objective PSO, the global best particle is determined easily by selecting the particle that has the best position. But in MOPSO, P^{i,g}_t must be selected from the updated set of non-dominated solutions stored in the archive A_{t+1}. Selecting the best local guide is achieved in the function FindBestGlobal(A_{t+1}, X^i_t) for each particle i.

• P^i_t is the best position that particle i could find so far. This serves as a memory for particle i and keeps the non-dominated (best) position of the particle by comparing the new position X^i_{t+1} in the objective space with P^i_t (P^i_t is the last non-dominated (best) position of particle i).
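Updates (1) and (2) can be illustrated by the following minimal single-objective PSO sketch in Python. The objective function, bounds, and parameter values (w = 0.7, c1 = c2 = 1.5) are illustrative assumptions, not the authors' settings:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal single-objective PSO implementing updates (1) and (2)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions p_i
    pbest = [f(x) for x in X]                   # personal best values pbest_i
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best position and value
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (1): inertia + cognitive + social terms
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (P[i][j] - X[i][j])
                           + c2 * r2 * (G[j] - X[i][j]))
                # Eq. (2): move the particle by its velocity
                X[i][j] += V[i][j]
            fx = f(X[i])
            if fx < pbest[i]:                   # better than p_i: remember it
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

# minimize the 2-D sphere function as a toy example
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```

The MOPSO variant described above would replace the scalar `gbest` selection with a choice from the non-dominated archive A_{t+1}.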

B. ACO

A metaheuristic that now belongs to the most prominent and most frequently applied techniques for search and heuristic optimization started its development fifteen years ago out of the seminal work by Dorigo et al. [4][5]: the ant-based approach to the solution of combinatorial optimization problems. Originally motivated by the attempt to solve the well-known Travelling Salesperson Problem (TSP), the inventors of the approach soon recognized that their technique is applicable to a much larger range of problems. In an explicit form, this insight was established by the creation of the Ant Colony Optimization (ACO) metaheuristic by Dorigo and Di Caro [6]. In the meantime, there exist several hundreds of publications reporting on successful applications of ACO in a large variety of areas. For a recent survey, we refer the reader to the profound and comprehensive textbook by Dorigo and Stützle [7].

Notwithstanding the mentioned fact that there are already numerous experimental investigations on ACO, much less has been done in the field of ACO theory. It was not before the year 2000 that the first convergence proofs for ACO algorithms appeared [8][9][10]. At present, convergence results are already available for several ACO variants. Even if an ACO algorithm is ensured to converge to an optimal solution, there remains the practically relevant question of how much time it takes (e.g., on average) until such a solution is found. Contrary to the situation in the Genetic Algorithms (GA) field, results concerning this last question are still scarce in the ACO area.

Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni [20] (1996; Dorigo and Gambardella, 1997) [19] showed how a very simple pheromone-following behavior could be used to optimize the travelling salesman problem.

The travelling salesman problem (TSP) is a classical example of a combinatorial optimization problem, which has proved to be NP-hard. In the TSP, the objective is to find the salesman's tour visiting all the N cities on his list once and only once, returning to the starting point after travelling the shortest possible distance. Additionally, we assume that the distance from city i to city j is the same as from city j to city i (symmetric TSP). A tour can be represented as an ordered list of N cities. In this case, for N > 2 there are N!/(2N) different tours (the same tour may be started from any city among the N cities and traversed either clockwise or anti-clockwise).
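The N!/(2N) count can be checked directly for small N by enumerating all permutations and treating tours that differ only by starting city or direction as identical (an illustrative sketch, not part of the original paper):

```python
from itertools import permutations
from math import factorial

def distinct_tours(n):
    """Count tours of n cities, treating rotations and reversals as the same tour."""
    seen = set()
    for perm in permutations(range(n)):
        # canonical form: the smallest among all rotations of the tour and its reversal
        candidates = []
        for tour in (perm, perm[::-1]):
            for k in range(n):
                candidates.append(tour[k:] + tour[:k])
        seen.add(min(candidates))
    return len(seen)

# agrees with N!/(2N) for small symmetric instances
for n in (3, 4, 5):
    assert distinct_tours(n) == factorial(n) // (2 * n)
```

For N = 4 this gives 4!/(2·4) = 3 distinct tours, which the enumeration confirms.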

C. Procedure ACO

1. Initialize pheromone trails τij on the edges (i, j) of C;

2. For iteration m = 1, 2, . . . do
       For agent s = 1, . . . , S do
           set i, the current position of the agent, equal to the start node of C;
           set u, the current partial path of the agent, equal to the empty list;

3.         While a feasible continuation (i, j) of the path u of the agent exists do
               select successor node j with probability pij, where
               pij = 0, if continuation (i, j) is infeasible, and
               pij = g(τij, ηij(u)) / Σ(i,r) g(τir, ηir(u)),
               where the sum is over all feasible continuations (i, r), otherwise;
               continue the current path u of the agent by adding edge (i, j) to u and setting i = j;
           end while

4.     Update the pheromone trails τij;

5. End of both For loops
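The procedure above can be sketched for a small symmetric TSP instance as follows. This is a hedged illustration, not the authors' implementation: the parameter values (α = 1, β = 2, evaporation rate ρ = 0.5) and the simple evaporate-then-deposit pheromone update are one common choice among the many ACO variants:

```python
import random

def aco_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Basic ACO for the symmetric TSP: construct tours, then update pheromones."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone trails tau_ij
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            i = random.randrange(n)
            tour, visited = [i], {i}
            while len(tour) < n:
                feas = [j for j in range(n) if j not in visited]
                # transition weight g(tau_ij, eta_ij) = tau^alpha * eta^beta, Eq. (3)
                weights = [tau[i][j] ** alpha * eta[i][j] ** beta for j in feas]
                j = random.choices(feas, weights=weights)[0]
                tour.append(j); visited.add(j); i = j
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour[:], length
        # pheromone update: evaporation, then deposit proportional to tour quality
        for a in range(n):
            for b in range(n):
                tau[a][b] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length
    return best_tour, best_len

# toy instance: four cities on a unit square; the optimal tour has length 4
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for (xb, yb) in pts] for (xa, ya) in pts]
tour, length = aco_tsp(dist)
```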


• The values ηij(u) are called heuristic information values.

• Contrary to τij, the quantities ηij(u) are allowed to depend on the entire partial path u traversed, as indicated by the argument u.

• The function g combines pheromone trail and heuristic information. The most popular choice is:

g(τ, η) = τ^α · η^β   (3)

with parameters α > 0 (often chosen as α = 1) and β > 0.

• The diverse ACO variants mainly differ in the way the update of the pheromone trails is performed. This is what led us to make PSO supervise an ant optimizer, called "Ant Supervised by PSO" (A.S.PSO), to solve continuous optimization problems. This idea can be seen in figure 1.

Fig. 1. Search space

IV. THE ANT COLONY SUPERVISED BY PSO ALGORITHM (A.S.PSO)

Our proposal is to make PSO supervise an ant optimizer. In this paper we propose an ant colony algorithm supervised by Particle Swarm Optimization to solve continuous optimization problems. Traditional ACO is used for discrete optimization, while PSO is designed for continuous optimization problems. Separately, PSO and ACO have shown great potential in solving a wide range of optimization problems.

Aimed at solving continuous problems effectively, this paper develops a novel ant algorithm, "Ant Supervised by PSO" (A.S.PSO). The proposed algorithm can reduce the probability of being trapped in local optima and enhance the global search capability and accuracy. An elitist strategy is also employed to preserve the most valuable points. The pheromone deposited by the ants' mechanisms is used by the PSO as a weight for its particles, ensuring a better global search strategy. By using the A.S.PSO design method, ants supervised by PSO in the feasible domain can explore their chosen regions rapidly and efficiently.

1) Initialize the particle swarm

2) Initialize the ants and their respective search spaces

3) Run the ants (n iterations) and find the best solution

4) Assimilate the best solutions into the PSO particles

5) Iterate the PSO

Fig. 2. Procedure A.S.PSO

First we start by initializing the search space, the number of iterations for the PSO, and the cost function. The second step is to initialize the ants, their cost function, and to assign an ant to each PSO particle. Thereafter we simulate the ant algorithm and pass the best ant solution on to the PSO. Then we evaluate the best local solutions and the best position of the PSO. The last step is to update the velocities and positions of the PSO particles, and the positions of the ants. If the stopping conditions are satisfied we stop; otherwise we go back to the third stage.
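One plausible reading of the procedure described above is sketched below. This is a hypothetical interpretation, since the paper does not fully specify the coupling: here each PSO particle is the center of an ant sub-swarm, the best ant sample is assimilated into the particle, and PSO then updates the centers:

```python
import random

def aspso(f, dim, n_groups=5, ants_per_group=10, iters=40,
          w=0.7, c1=1.5, c2=1.5, radius=1.0, bounds=(-5.0, 5.0)):
    """Hypothetical A.S.PSO sketch: ants sample around each PSO particle (step 3),
    the best ant solution is assimilated into the particle (step 4), and one PSO
    velocity/position update is then applied to the centers (step 5)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_groups)]
    V = [[0.0] * dim for _ in range(n_groups)]
    P = [x[:] for x in X]
    pbest = [f(x) for x in X]
    g = min(range(n_groups), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]
    for _ in range(iters):
        for i in range(n_groups):
            # step 3: run the ants — sample candidate points around particle i
            for _ in range(ants_per_group):
                cand = [X[i][j] + random.uniform(-radius, radius) for j in range(dim)]
                if f(cand) < f(X[i]):
                    X[i] = cand            # step 4: assimilate the better ant solution
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
            # step 5: PSO update of the group's center, Eqs. (1)-(2)
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (P[i][j] - X[i][j])
                           + c2 * r2 * (G[j] - X[i][j]))
                X[i][j] += V[i][j]
        radius *= 0.9                      # shrink the ant search radius over time
    return G, gbest

best, val = aspso(lambda x: sum(v * v for v in x), dim=2)
```

The parameter choices and the shrinking-radius ant step are illustrative assumptions; the paper's pheromone-as-particle-weight coupling could replace the simple accept-if-better rule here.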

Since walking has symmetry characteristics, we assigned sub-ants respectively to the left foot and to the right one. Both must cooperate to ensure the generation of joint trajectories yielding a stable walk. The joints considered here are the hip, the knee and the ankle joints.

The inference graph of the particle swarm is conceived as follows: the memory particles are connected hierarchically from bottom to top. Such a connection means that particle (p0.2), particle 2 of swarm 0, communicates its position to both particles (p0.1) and (p0.3), and also gathers the respective positions of these particles. Particle (p0.3) has only the position of (p0.2). P0 is a specific particle used to represent the center of mass of the body. This particle is critical since the analysis of its coordinates will help the decision on whether the generated joint position is a statically stable one or not. The architecture of the proposed Ant Supervised by PSO (A.S.PSO) can be seen in figure 3.

Fig. 3. Case of application

V. CONCLUSION

In this paper we first introduced the PSO and ACO algorithms, then we developed our proposal called A.S.PSO, issued from a hybridization of both techniques. It takes the best of ACO and integrates it with PSO. This proposal will soon be applied to solve classical test-bench problems such as the TSP; it will also be used to optimize the control of a humanoid robot.

The model of the locomotion system is to be extended to a higher degree of freedom, including a new and more complex particle representation called A.S.PSO. We will conduct more tests to evaluate and enhance the A.S.PSO proposal methodology.

Humans have a natural knowledge of their bodies' limits; even in cases of amputation, a human still perceives his leg or arm while it is no longer part of the body. For its locomotion system, IZIman will include a virtual model with a limited degree of freedom; such a model is useful to help the robot optimize a path according to different criteria such as global time and energy minimization. Like a human thinking ahead, IZIman will carry out simulations to optimize its strategy choices.

Acknowledgments

I would like to thank Mr Mohamed Adel ALIMI for the opportunity he gave me to have my training in the computer science laboratory. "IZIman" is a research project conducted in the REGIM laboratory, Research Group on Intelligent Machines, whose main challenge is to propose intelligent architectures, controllers and algorithms that are humanly inspired.

Several ideas that were of central importance to our research were the outcome of our discussions with Mr Nizar; with his sound collaboration we worked out the notion of Ant Supervised by PSO, which constitutes in fact the subject matter of the current study.

REFERENCES

[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.

[2] J. Dreo, A. Petrowski and P. Siarry, "Métaheuristiques pour l'optimisation difficile," Eyrolles, Paris, 2003.

[3] Y. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," Congress on Evolutionary Computation, Seoul, 2001.

[4] M. Dorigo, V. Maniezzo and A. Colorni, "Positive feedback as a search strategy," Technical Report TR-POLI-91-016, Dipartimento di Elettronica, Politecnico di Milano, Milan, Italy, 1991.

[5] M. Dorigo, V. Maniezzo and A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, pp. 1–13, 1996.

[6] M. Dorigo and G. Di Caro, "The ant colony optimization metaheuristic," in D. Corne, M. Dorigo and F. Glover (Eds.), New Ideas in Optimization, Maidenhead: McGraw-Hill, pp. 11–32, 1999.

[7] M. Dorigo and T. Stützle, "Ant Colony Optimization," Cambridge: MIT Press, 2004.

[8] W. J. Gutjahr, "A graph-based ant system and its convergence," Future Generation Computer Systems, pp. 873–888, 2000.

[9] W. J. Gutjahr, "ACO algorithms with guaranteed convergence to the optimal solution," Information Processing Letters, pp. 145–153, 2002.


[10] T. Stützle and M. Dorigo, "A short convergence proof for a class of ACO algorithms," IEEE Transactions on Evolutionary Computation, pp. 358–365, 2002.

[11] S. Droste, T. Jansen and I. Wegener, "On the analysis of the (1+1) evolutionary algorithm," Theoretical Computer Science, pp. 51–81, 2002.

[12] W. Elloumi and A. M. Alimi, "The behavior of swarm intelligence: ACO and PSO for optimization," Journal of Mathematical Modelling and Algorithms (JMMA), submitted, 2009.

[13] G. K. Purushothama and L. Jenkins, "Simulated annealing with local search: A hybrid algorithm for unit commitment," IEEE Trans. Power Syst., 18(1): pp. 273–278, 2003.

[14] E. Bonabeau, M. Dorigo, and G. Theraulaz, "Swarm Intelligence: From Natural to Artificial Systems," Oxford: Oxford University Press, 1999.

[15] S. N. Beshers and J. H. Fewell, "Models of division of labor in social insects," Annual Review of Entomology, 46(1): pp. 413–440, 2001.

[16] W. Elloumi and A. M. Alimi, "Combinatory optimization of ACO and PSO," in Conf. META, Oct. 29–31, 2008.

[17] V. Miranda, "Evolutionary algorithms with particle swarm movements," in IEEE 13th Int. Conf. ISAP, Nov. 6–10, pp. 6–21, 2005.

[18] K. Vittori, G. Talbot, J. Gautrais, V. Fourcassié, A. F. Araujo, and G. Theraulaz, "Path efficiency of ant foraging trails in an artificial network," Journal of Theoretical Biology, pp. 507–515, 2006.

[19] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the travelling salesman problem," IEEE Transactions on Evolutionary Computation, 1997.

[20] M. Dorigo, V. Maniezzo, and A. Colorni, "The ant system: Optimization by a colony of cooperating agents," IEEE Trans. Syst. Man Cybern. B, Cybern., 26(2): pp. 29–41, 1996.

[21] H. Heppner and U. Grenander, "A stochastic non-linear model for coordinated bird flocks," in S. Krasner (Ed.), The Ubiquity of Chaos, pp. 233–238, Washington: AAAS, 1990.

[22] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, Piscataway: IEEE, 1995.

[23] R. C. Eberhart, P. K. Simpson and R. W. Dobbins, "Computational Intelligence PC Tools," Boston: Academic Press, 1996.

[24] N. Rokbani, A. M. Alimi and B. Ammar, "Architectural proposal for a robotized intelligent humanoid, IZiman," Proceedings of the IEEE International Conference on Automation and Logistics, pp. 1941–1946, Jinan, China, 2007.
