SWARM INTELLIGENCE
A Technical Seminar Report submitted to the Faculty of Computer Science and Engineering
Geethanjali College of Engineering & Technology
(Cheeryal (V), Keesara(M), R.R. Dist., Hyderabad-A.P.)
Accredited by NBA
(Affiliated to J.N.T.U.H, Approved by AICTE, New Delhi)
In partial fulfillment of the requirement for the award of degree of
BACHELOR OF TECHNOLOGY IN
COMPUTER SCIENCE AND ENGINEERING
Under the esteemed guidance of
Mr. P. Srinivas, M.Tech, (Ph.D), Sr. Associate Professor
By
G.RAHUL
09R11A0549
Department of Computer Science & Engineering
Year: 2012-2013
Geethanjali College of Engineering & Technology
(Affiliated to J.N.T.U.H, Approved by AICTE, NEW DELHI.)
Accredited by NBA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Date:
CERTIFICATE
This is to certify that the Technical Seminar report on “Swarm Intelligence” is a bonafide work done by G. Rahul (09R11A0549) in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering from J.N.T.U.H, Hyderabad during the year 2012-2013.
Technical Seminar Coordinator: Mr. P. Srinivas, Sr. Associate Professor
HOD-CSE: Prof. Dr. P.V.S. Srinivas
ABSTRACT
Swarm intelligence (SI) is the collective behaviour of decentralized, self-
organized systems, natural or artificial. The concept is employed in work on
artificial intelligence. The expression was introduced by Gerardo Beni and Jing
Wang in 1989, in the context of cellular robotic systems.
SI systems are typically made up of a population of simple agents or
boids interacting locally with one another and with their environment. The agents
follow very simple rules, and although there is no centralized control
structure dictating how individual agents should behave, local and, to a
certain degree, random interactions between such agents lead to the emergence
of "intelligent" global behavior, unknown to the individual agents. Natural
examples of SI include ant colonies, bird flocking, animal herding, bacterial
growth, and fish schooling. The application of swarm principles to robots is
called swarm robotics, while 'swarm intelligence' refers to the more general set
of algorithms. 'Swarm prediction' has been used in the context of forecasting
problems.
Swarm describes the behaviour of an aggregate of animals of similar size and body
orientation, often moving en masse or migrating in the same direction.
Swarming is a general term that can be applied to any animal that swarms. The
term is applied particularly to insects, but can also be applied to birds, fish,
various microorganisms such as bacteria, and people.
The term flocking is usually used to refer to swarming behaviour in birds, while
the terms shoaling or schooling are used to refer to swarming behaviour in fish.
The swarm size is a major parameter of a swarm.
Contents

Chapters
1. Introduction
2. Properties of Swarm Intelligence
3. Modelling Swarm Behaviour
4. Algorithms of Swarm Intelligence
5. Applications of Swarm Intelligence
6. Advantages & Disadvantages of Swarm Intelligence
7. Conclusion
8. Future Scope
9. List of Abbreviations
10. References
Chapter 1
INTRODUCTION
Swarm Intelligence is the property of a system whereby the collective behaviours
of agents interacting locally with their environment cause coherent functional
global patterns to emerge. SI provides a basis with which it is possible to explore
distributed problem solving without centralized control or the provision of a global
model. One of the core tenets of SI work is that a decentralized, bottom-up
approach to controlling a system is often much more effective than a traditional,
centralized approach. A group performing tasks effectively using only a small
set of rules for individual behaviour exhibits swarm intelligence; in other
words, swarm intelligence is a property of systems of non-intelligent agents
exhibiting collectively intelligent behaviour. In swarm intelligence, two
individuals interact indirectly when one of them modifies the environment and
the other responds to the new environment at a later time. For years, scientists
have studied social insects such as ants, bees and termites. The most amazing
thing about social insect colonies is that no individual is in charge. Consider
the case of ants: social insects form highways and other amazing structures such
as bridges, chains and nests, and perform complex tasks, because they
self-organize through direct and indirect interactions. The
characteristics of social insects are
1. Flexibility
2. Robustness
3. Self-Organization
Chapter 2
PROPERTIES OF SWARM INTELLIGENCE
The typical swarm intelligence system has the following properties:
- it is composed of many individuals;
- the individuals are relatively homogeneous (i.e., they are either all identical or they belong to a few typologies);
- the interactions among the individuals are based on simple behavioral rules that exploit only local information that the individuals exchange directly or via the environment (stigmergy);
- the overall behaviour of the system results from the interactions of individuals with each other and with their environment; that is, the group behavior self-organizes.
The characterizing property of a swarm intelligence system is its ability to act in a
coordinated way without the presence of a coordinator or of an external controller.
Many examples can be observed in nature of swarms that perform some collective
behavior without any individual controlling the group, or being aware of the overall
group behavior. Notwithstanding the lack of individuals in charge of the group, the
swarm as a whole can show an intelligent behavior. This is the result of the
interaction of spatially neighboring individuals that act on the basis of simple rules.
Most often, the behavior of each individual of the swarm is described in
probabilistic terms: each individual has a stochastic behavior that depends on its
local perception of the neighborhood.
Because of the above properties, it is possible to design swarm intelligence
systems that are scalable, parallel, and fault tolerant.
Scalability means that a system can maintain its function while increasing its
size without the need to redefine the way its parts interact.
Because in a swarm intelligence system interactions involve only neighboring
individuals, the number of interactions tends not to grow with the overall number of
individuals in the swarm: each individual's behavior is only loosely influenced by
the swarm dimension. In artificial systems, scalability is interesting because a
scalable system can increase its performance by simply increasing its size, without
the need for any reprogramming.
Parallel action is possible in swarm intelligence systems because individuals
composing the swarm can perform different actions in different places at the same
time. In artificial systems, parallel action is desirable because it can help to make
the system more flexible, that is, capable of self-organizing into teams that
simultaneously take care of different aspects of a complex task.
Fault tolerance is an inherent property of swarm intelligence systems due to the
decentralized, self-organized nature of their control structures. Because the system
is composed of many interchangeable individuals and none of them is in charge of
controlling the overall system behavior, a failing individual can be easily dismissed
and substituted by another one that is fully functioning.
Chapter 3
MODELLING SWARM BEHAVIOUR
The simplest mathematical models of animal swarms generally represent
individual animals as following three rules:
1. Move in the same direction as your neighbour
2. Remain close to your neighbours
3. Avoid collisions with your neighbours
Many current models use variations on these rules, often implementing them by
means of concentric "zones" around each animal. In the zone of repulsion, very
close to the animal, the focal animal will seek to distance itself from its
neighbours to avoid collision. Slightly further away, in the zone of alignment,
the focal animal will seek to align its direction of motion with its neighbours.
In the outermost zone of attraction, which extends as far from the focal animal
as it is able to sense, the focal animal will seek to move towards a neighbour.
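These three zone-based rules can be sketched in code. The following Python sketch is illustrative only: the zone radii, the priority given to repulsion, and the dict-based boid representation are assumptions, not part of any standard model.

```python
import math

# Assumed zone radii (illustrative values)
REPULSION, ALIGNMENT, ATTRACTION = 1.0, 3.0, 6.0

def steer(boid, neighbours):
    """Return a new heading for `boid` using the three concentric zones.

    Each boid is a dict with position `x`, `y` and heading `theta` (radians).
    Repulsion is given priority, then alignment, then attraction.
    """
    repel_x = repel_y = 0.0
    align_headings = []
    attract_x = attract_y = 0.0
    for other in neighbours:
        dx, dy = other["x"] - boid["x"], other["y"] - boid["y"]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            continue
        if dist < REPULSION:              # zone of repulsion: move away
            repel_x -= dx / dist
            repel_y -= dy / dist
        elif dist < ALIGNMENT:            # zone of alignment: match heading
            align_headings.append(other["theta"])
        elif dist < ATTRACTION:           # zone of attraction: move closer
            attract_x += dx / dist
            attract_y += dy / dist
    if repel_x or repel_y:
        return math.atan2(repel_y, repel_x)
    if align_headings:
        return math.atan2(sum(math.sin(t) for t in align_headings),
                          sum(math.cos(t) for t in align_headings))
    if attract_x or attract_y:
        return math.atan2(attract_y, attract_x)
    return boid["theta"]                  # no neighbours: keep heading
```

For example, a neighbour 0.5 units ahead falls in the zone of repulsion, so `steer` returns a heading pointing directly away from it.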
The shape of these zones will necessarily be affected by the sensory capabilities
of the given animal. For example, the visual field of a bird does not extend
behind its body. Fish rely on both vision and on hydrodynamic perceptions
relayed through their lateral line, while Antarctic krill rely both on vision and
hydrodynamic signals relayed through antennae. Some of the animals that
exhibit swarm behaviour are
1. Insects – Ants, bees, locusts, termites, mosquitoes and insect migration.
2. Bacteria
3. Birds
4. Land animals
5. Aquatic animals – fish, krill and other aquatic animals
6. People
Chapter 4
ALGORITHMS OF SWARM INTELLIGENCE
Algorithms of Swarm Intelligence are:
- Ant colony optimization (ACO)
- River formation dynamics (RFD)
- Particle swarm optimization (PSO)
- Stochastic diffusion search (SDS)
- Gravitational search algorithm (GSA)
- Intelligent water drops (IWD)
- Charged system search (CSS)
- Backtracking search optimization algorithm (BSA)
- Bat algorithm
- Differential search algorithm (DSA)
- Firefly algorithm (FA)
- Glowworm swarm optimization (GSO)
- Krill herd algorithm (KH)
- Magnetic optimization algorithm (MOA)
- Self-propelled particles (SPP)
I. Ant Colony Optimization
Ant colony optimization (ACO) is a class of optimization algorithms modeled on
the actions of an ant colony. ACO methods are useful in problems that need to
find paths to goals. Artificial 'ants' (simulation agents) locate optimal solutions
by moving through a parameter space representing all possible solutions. Real
ants lay down pheromones directing each other to resources while exploring their
environment. The simulated 'ants' similarly record their positions and the quality
of their solutions, so that in later simulation iterations more ants locate better
solutions. One variation on this approach is the bees algorithm, which is more
analogous to the foraging patterns of the honey bee.
In other words, the ant colony optimization algorithm (ACO) is a
probabilistic technique for solving computational problems which can be reduced
to finding good paths through graphs. In the real world, ants wander randomly,
and upon finding food return to their colony while laying down pheromone trails.
If other ants find such a path, they are likely not to keep travelling at random,
but instead to follow the trail, returning and reinforcing it if they eventually
find food along it.
This algorithm is inspired by the foraging behaviour of ants.
1. The first ant finds the food source (F) via any path (a), then returns to the nest (N), leaving behind a pheromone trail (b).
2. Ants indiscriminately follow four possible paths, but the strengthening of the trail makes the shortest route more attractive.
3. Ants take the shortest route; long portions of other paths lose their pheromone trails.
FIG: Ant Colony Optimization
In a series of experiments on a colony of ants with a choice between two unequal
length paths leading to a source of food, biologists have observed that ants
tended to use the shortest route. A model explaining this behaviour is as follows:
1. An ant (called "blitz") runs more or less at random around the colony.
2. If it discovers a food source, it returns more or less directly to the nest, leaving in its path a trail of pheromone.
3. These pheromones are attractive, so nearby ants will be inclined to follow the track more or less directly.
4. Returning to the colony, these ants will strengthen the route.
5. If there are two routes to reach the same food source then, in a given amount of time, the shorter one will be travelled by more ants than the long route.
6. The short route will be increasingly enhanced, and therefore become more attractive.
7. The long route will eventually disappear because pheromones are volatile.
8. Eventually, all the ants have determined and therefore "chosen" the shortest route.
Pseudo code of ACO

repeat
    if antCount < maxAnts then
        create a new ant
        set initial state
    end if
    for all ants do
        determine all feasible neighbour states {considering the ant's visited states}
        if solution found or no feasible neighbour state then
            kill ant
            if we use delayed pheromone update then
                evaluate solution
                deposit pheromone on all used edges
            end if
        else
            stochastically select a feasible neighbour state {directed by the ant's
            memory, the pheromone concentration on the edges and local heuristics}
            if we use step-by-step pheromone update then
                deposit pheromone on the used edge
            end if
        end if
    end for
    evaporate pheromone
until termination criterion satisfied {e.g., found a satisfying solution}
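The two-path experiment above can be simulated with a few lines of code. The following Python sketch is a minimal illustration; the path lengths, evaporation rate, deposit amount and colony size are assumed values, not taken from the experiments described.

```python
import random

random.seed(1)  # for a reproducible run

def two_path_aco(len_short=1.0, len_long=2.0, n_ants=100, rounds=50,
                 evaporation=0.1, deposit=1.0):
    """Simulate ants choosing between a short and a long path to food."""
    tau = [1.0, 1.0]                 # pheromone on [short, long]
    lengths = [len_short, len_long]
    for _ in range(rounds):
        counts = [0, 0]
        for _ in range(n_ants):
            # probabilistic choice, biased by pheromone concentration
            p_short = tau[0] / (tau[0] + tau[1])
            counts[0 if random.random() < p_short else 1] += 1
        for i in (0, 1):
            tau[i] *= 1.0 - evaporation                  # pheromone is volatile
            tau[i] += deposit * counts[i] / lengths[i]   # shorter path: more trips
    return tau

tau_short, tau_long = two_path_aco()
```

With these assumptions, positive feedback drives almost all pheromone onto the shorter path, matching the convergence described above.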
II. River Formation Dynamics
River formation dynamics (RFD) is a heuristic method similar to ant
colony optimization (ACO). In fact, RFD can be seen as a gradient version of ACO,
based on copying how water forms rivers by eroding the ground and
depositing sediments. As water transforms the environment, altitudes of places
are dynamically modified, and decreasing gradients are constructed. The
gradients are followed by subsequent drops to create new gradients, reinforcing
the best ones. By doing so, good solutions are given in the form of decreasing
altitudes. This method has been applied to solve different NP-complete
problems (for example, finding a minimum distance tree and finding a minimum
spanning tree in a variable-cost graph). The gradient orientation of RFD makes
it especially suitable for solving these problems and provides a good tradeoff
between finding good results and not spending much computational time. In fact,
RFD fits particularly well for problems that consist of forming a kind of
covering tree.
III. Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic
optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995,
inspired by social behavior of bird flocking or fish schooling. Particle
swarm optimization (PSO) is a global optimization algorithm for dealing
with problems in which a best solution can be represented as a point or
surface in an n-dimensional space. Hypotheses are plotted in this space and
seeded with an initial velocity, as well as a communication channel between the
particles. Particles then move through the solution space, and are evaluated
according to some fitness criterion after each time step. Over time, particles are
accelerated towards those particles within their communication grouping which
have better fitness values. The main advantage of such an approach over other
global minimization strategies such as simulated annealing is that the large
number of members that make up the particle swarm makes the technique
impressively resilient to the problem of local minima.
PSO shares many similarities with evolutionary computation techniques such
as Genetic Algorithms (GA). The system is initialized with a population of random
solutions and searches for optima by updating generations. However, unlike
GA, PSO has no evolution operators such as crossover and mutation. In PSO, the
potential solutions, called particles, fly through the problem space by following the
current optimum particles.
Ex. Birds flocking
FIG: PARTICLE SWARM OPTIMIZATION
Algorithm of PSO
As stated before, PSO simulates the behaviors of bird flocking. Suppose the
following scenario: a group of birds are randomly searching food in an area. There
is only one piece of food in the area being searched. None of the birds knows
where the food is, but each knows how far the food is in every iteration. So
what is the best strategy to find the food? The effective one is to follow the
bird which is nearest to the food.
PSO learned from the scenario and used it to solve the optimization problems. In
PSO, each single solution is a "bird" in the search space. We call it a "particle".
All particles have fitness values which are evaluated by the fitness function
to be optimized, and have velocities which direct the flying of the particles.
The particles fly through the problem space by following the current optimum
particles.
PSO is initialized with a group of random particles (solutions) and then searches for
optima by updating generations. In every iteration, each particle is updated by
following two "best" values. The first one is the best solution (fitness) it has
achieved so far. (The fitness value is also stored.) This value is called pbest.
Another "best" value that is tracked by the particle swarm optimizer is the best
value obtained so far by any particle in the population. This best value is a
global best and is called gbest. When a particle takes part of the population as
its topological neighbors, the best value is a local best and is called lbest.
After finding the two best values, the particle updates its velocity and
positions with following equation (a) and (b).
v[] = v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[])   ---- (a)

present[] = present[] + v[]   ---- (b)

Where:
v[] is the particle velocity,
present[] is the current particle (solution),
pbest[] and gbest[] are the personal best and global best, as stated before,
rand() is a random number between (0, 1),
c1, c2 are learning factors; usually c1 = c2 = 2.
The pseudo code of the procedure is as follows:

for each particle
    initialize particle
end for
do
    for each particle
        calculate fitness value
        if the fitness value is better than the best fitness value (pbest) in history
            set current value as the new pbest
        end if
    end for
    choose the particle with the best fitness value of all the particles as the gbest
    for each particle
        calculate particle velocity according to equation (a)
        update particle position according to equation (b)
    end for
while maximum iterations or minimum error criteria is not attained
Particles' velocities on each dimension are clamped to a maximum velocity Vmax,
which is a parameter specified by the user. If the sum of accelerations would
cause the velocity on a dimension to exceed Vmax, then the velocity on that
dimension is limited to Vmax.
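Equations (a) and (b) and the pseudo code above can be combined into a short runnable sketch. The objective function (a 2-D sphere function), swarm size, iteration count and Vmax below are illustrative assumptions.

```python
import random

random.seed(0)  # for a reproducible run

def f(x):
    """Sphere function to minimize; the optimum is at the origin."""
    return sum(xi * xi for xi in x)

DIM, N, ITERS, VMAX = 2, 20, 100, 1.0
c1 = c2 = 2.0                       # learning factors, as stated above

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=f)[:]

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            # equation (a): velocity update, clamped to [-Vmax, Vmax]
            vel[i][d] += (c1 * random.random() * (pbest[i][d] - pos[i][d])
                          + c2 * random.random() * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-VMAX, min(VMAX, vel[i][d]))
            # equation (b): position update
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]        # new personal best
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]     # new global best
```

After the loop, `gbest` should lie close to the optimum at the origin.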
IV. Stochastic Diffusion Search
Stochastic diffusion search (SDS) is an agent-based probabilistic global
search and optimization technique best suited to problems where the
objective function can be decomposed into multiple independent partial-
functions. Each agent maintains a hypothesis which is iteratively tested by
evaluating a randomly selected partial objective function parameterized by the
agent's current hypothesis. In the standard version of SDS such partial function
evaluations are binary, resulting in each agent becoming active or inactive.
Information on hypotheses is diffused across the population via inter-agent
communication. Unlike the stigmergic communication used in ACO, in
SDS agents communicate hypotheses via a one-to-one communication
strategy analogous to the tandem running procedure observed in some
species of ant. A positive feedback mechanism ensures that, over time, a
population of agents stabilizes around the global-best solution. SDS is both an
efficient and robust search and optimization algorithm, which has been
extensively mathematically described.
In simple words, SDS belongs to a family of swarm intelligence and naturally
inspired search and optimization algorithms which includes ant colony
optimization, particle swarm optimization and genetic algorithms.
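As an illustration, SDS is often demonstrated on a string-search task, where an agent's hypothesis is a position in the text and the partial objective function is a test of one randomly chosen character. The search text, model string, population size and iteration count below are all hypothetical.

```python
import random

random.seed(4)  # for a reproducible run

text = "xxswarmyyintelligencezzswarmaa"   # search space (hypothetical)
model = "swarm"                            # the pattern to locate
N, ITERS = 30, 50                          # agents and iterations (assumed)

positions = range(len(text) - len(model) + 1)
hyps = [random.choice(positions) for _ in range(N)]
active = [False] * N

for _ in range(ITERS):
    # Test phase: each agent evaluates one randomly selected partial
    # function, i.e. checks a single character of its hypothesised position.
    for i in range(N):
        k = random.randrange(len(model))
        active[i] = text[hyps[i] + k] == model[k]
    # Diffusion phase: an inactive agent polls a random agent; if that
    # agent is active it copies its hypothesis, otherwise it restarts.
    for i in range(N):
        if not active[i]:
            j = random.randrange(N)
            hyps[i] = hyps[j] if active[j] else random.choice(positions)
```

After a few iterations, most agents cluster on the positions where "swarm" actually occurs in the text, illustrating how the population stabilizes around good solutions.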
V. Gravitational Search Algorithm
Gravitational search algorithm (GSA) is constructed based on the law of Gravity
and the notion of mass interactions. The GSA algorithm uses the theory of
Newtonian physics and its searcher agents are the collection of masses. In GSA,
we have an isolated system of masses. Using the gravitational force, every mass
in the system can see the situation of other masses. The gravitational force is
therefore a way of transferring information between different masses. In GSA,
agents are considered as objects and their performance is measured by their
masses. All these objects attract each other by a gravity force, and this force
causes a movement of all objects globally towards the objects with heavier masses.
The heavy masses correspond to good solutions of the problem. The position of the
agent corresponds to a solution of the problem, and its mass is determined using a
fitness function. Over time, masses are attracted by the heaviest mass, which
ideally presents an optimum solution in the search space. The
GSA could be considered as an isolated system of masses. It is like a small artificial
world of masses obeying the Newtonian laws of gravitation and motion. A multi-
objective variant of GSA, called Non-dominated Sorting Gravitational Search
Algorithm (NSGSA), was proposed by Nobahari and Nikusokhan in 2011.
VI. Intelligent Water Drops
Intelligent Water Drops algorithm (IWD) is a swarm-based nature-inspired
optimization algorithm, which has been inspired from natural rivers and how
they find almost optimal paths to their destination. These near optimal or optimal
paths follow from actions and reactions occurring among the water drops and the
water drops with their riverbeds. In the IWD algorithm, several artificial water
drops cooperate to change their environment in such a way that the optimal path
is revealed as the one with the lowest soil on its links. The solutions are
incrementally constructed by the IWD algorithm. Consequently, the IWD
algorithm is generally a constructive population-based optimization algorithm.
VII. Charged System Search
Charged System Search (CSS) is a new optimization algorithm based on some
principles from physics and mechanics. CSS utilizes the governing laws of
Coulomb and Gauss from electrostatics and the Newtonian laws of mechanics. CSS
is a multi-agent approach in which each agent is a Charged Particle (CP). CPs can
affect each other based on their fitness values and their separation distances. The
quantity of the resultant force is determined by using the electrostatics laws and
the quality of the movement is determined using Newtonian mechanics laws. CSS
is applicable to all optimization fields; it is especially suitable for non-smooth
or non-convex domains. CSS provides a good balance between the exploration and
exploitation paradigms, which can considerably improve its efficiency, so it can
be considered a good global and local optimizer simultaneously.
VIII. Backtracking Search Optimization Algorithm
Backtracking Search Optimization Algorithm (BSA) is a new evolutionary algorithm
(EA) for solving real-valued numerical optimization problems. EAs are popular
stochastic search algorithms that are widely used to solve non-linear, non-
differentiable and complex numerical optimization problems. Current research aims
at mitigating the effects of problems that are frequently encountered in EAs, such
as excessive sensitivity to control parameters, premature convergence and slow
computation. In this vein, development of BSA was motivated by studies that
attempt to develop simpler and more effective search algorithms. Unlike many
search algorithms, BSA has a single control parameter. Moreover, BSA’s problem-
solving performance is not over sensitive to the initial value of this parameter. BSA
has a simple structure that is effective, fast and capable of solving multimodal
problems and that enables it to easily adapt to different numerical optimization
problems. BSA’s strategy for generating a trial population includes two new
crossover and mutation operators. BSA’s strategies for generating trial populations
and controlling the amplitude of the search-direction matrix and search-space
boundaries give it very powerful exploration and exploitation capabilities. In
particular, BSA possesses a memory in which it stores a population from a
randomly chosen previous generation for use in generating the search-direction
matrix. Thus, BSA's memory allows it to take advantage of experiences gained
from previous generations when it generates a trial population. The original
BSA paper uses the Wilcoxon signed-rank test to statistically compare BSA's
effectiveness in solving numerical optimization problems with the performances
of six widely used EA algorithms: PSO, CMAES, ABC, JDE, CLPSO and SADE. The
comparison, which uses 75 boundary-constrained benchmark problems and three
constrained real-world benchmark problems, shows that, in general, BSA can solve
the benchmark problems more successfully than the comparison algorithms.
IX. Differential search algorithm
Differential search algorithm (DSA) is inspired by the migration of
superorganisms. DSA is a population-based, single/multi-objective optimization
algorithm utilizing the concept of Brownian-like motion. The problem-solving
success of DSA
was compared to the successes of ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011
and CMA-ES algorithms for solution of numerical optimization problems in 2012.
X. Firefly algorithm
The Firefly algorithm (FA) is a metaheuristic algorithm, inspired by the flashing
behaviour of fireflies. The primary purpose for a firefly's flash is to act as a signal
system to attract other fireflies. Xin-She Yang formulated this firefly algorithm by
assuming:
All fireflies are unisexual, so that one firefly will be attracted to all other fireflies;
Attractiveness is proportional to their brightness, and for any two fireflies, the
less bright one will be attracted by (and thus move towards) the brighter one;
however, brightness decreases as their distance increases;
If there are no fireflies brighter than a given firefly, it will move randomly.
The brightness should be associated with the objective function.
Firefly algorithm is a nature-inspired metaheuristic optimization algorithm.
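The three rules above can be sketched for a one-dimensional objective as follows. The objective function, the attractiveness constant beta0, the absorption coefficient gamma and the noise amplitude alpha are all illustrative assumptions, and for simplicity this sketch omits the random walk of the brightest firefly from the third rule.

```python
import math
import random

random.seed(2)  # for a reproducible run

def brightness(x):
    """Objective to maximize; brightest at x = 3 (illustrative choice)."""
    return -(x - 3.0) ** 2

N, ITERS = 15, 60
beta0, gamma, alpha = 1.0, 0.01, 0.05   # assumed algorithm parameters

x = [random.uniform(0.0, 6.0) for _ in range(N)]
for _ in range(ITERS):
    for i in range(N):
        for j in range(N):
            if brightness(x[j]) > brightness(x[i]):
                # attractiveness falls off with the squared distance
                beta = beta0 * math.exp(-gamma * (x[i] - x[j]) ** 2)
                # move toward the brighter firefly, plus a small random step
                x[i] += beta * (x[j] - x[i]) + alpha * (random.random() - 0.5)

best = max(x, key=brightness)
```

Under these assumptions the swarm contracts around the brightest region, and `best` should land near the maximum of the objective.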
XI. Glowworm swarm optimization
Glowworm swarm optimization (GSO) was introduced by Krishnanand and Ghose in 2005
for the simultaneous computation of multiple optima of multimodal functions. The
algorithm shares a few features with some better known algorithms, such as ant
colony optimization and particle swarm optimization, but with several significant
differences. The agents in GSO are thought of as glowworms that carry a
luminescence quantity called luciferin along with them. The glowworms encode the
fitness of their current locations, evaluated using the objective function, into a
luciferin value that they broadcast to their neighbors. The glowworm identifies its
neighbors and computes its movements by exploiting an adaptive neighborhood,
which is bounded above by its sensor range. Each glowworm selects, using a
probabilistic mechanism, a neighbor that has a luciferin value higher than its own
and moves toward it. These movements—based only on local information and
selective neighbor interactions—enable the swarm of glowworms to partition into
disjoint subgroups that converge on multiple optima of a given multimodal function.
XII. Krill herd algorithm
Krill herd (KH) is a novel biologically inspired algorithm proposed by Gandomi and
Alavi in 2012. The KH algorithm is based on simulating the herding behavior
of krill individuals. The minimum distances of each individual krill from food
and from the highest density of the herd are considered as the objective
function for the krill movement.
The time-dependent position of the krill individuals is formulated by three main
factors:
movement induced by the presence of other individuals;
foraging activity; and
random diffusion.
The derivative information is not necessary in the KH algorithm because it uses a
stochastic random search instead of a gradient search. For
each metaheuristic algorithm, it is important to tune its related parameters. One
of the interesting parts of the proposed algorithm is that it carefully simulates
krill behavior and uses real-world empirical studies to obtain the coefficients.
Because of this fact, only the time interval should be fine-tuned in the KH algorithm.
This can be considered as a remarkable advantage of the proposed algorithm in
comparison with other nature-inspired algorithms. The validation phases indicate
that the KH method is very encouraging for its future application to optimization
tasks.
XIII. Magnetic optimization algorithm
Magnetic Optimization Algorithm (MOA), proposed by Tayarani in 2008, is an
optimization algorithm inspired by the interaction among some magnetic particles
with different masses. In this algorithm, the possible solutions are some particles
with different masses and different magnetic fields. Based on the fitness of the
particles, the mass and the magnetic field of each particle is determined, thus the
better particles are more massive objects with stronger magnetic fields. The
particles in the population apply attractive forces to each other and so move in the
search space. Since the better solutions have greater mass and magnetic field, the
inferior particles tend to move toward the fitter solutions and thus migrate to area
around the better local optima, where they wander in search of better solutions.
XIV. Self-Propelled Particles
Self-propelled particles (SPP), also referred to as the Vicsek model, was introduced
in 1995 by Vicsek et al. as a special case of the boids model introduced in 1986
by Reynolds. A swarm is modeled in SPP by a collection of particles that move with
a constant speed but respond to a random perturbation by adopting at each time
increment the average direction of motion of the other particles in their local
neighbourhood. SPP models predict that swarming animals share certain properties
at the group level, regardless of the type of animals in the swarm. Swarming
systems give rise to emergent behaviours which occur at many different scales,
some of which are turning out to be both universal and robust. It has become a
challenge in theoretical physics to find minimal statistical models that capture
these behaviours.
The SPP model is based on a collection of points or particles, each functioning
individually as an autonomous agent, and each following the same simple rules
which govern their behaviour. The particles move in a plane with constant speed
but in different directions. The direction of each particle is updated using a "nearest
neighbor rule", a local rule which replaces the direction of each particle with the
average of the particle's own direction plus the directions of its immediate
neighbours.
Simulations demonstrate that a suitable "nearest neighbour rule" eventually results
in all the particles swarming together, or moving in the same direction. This
emerges, even though there is no centralized coordination, and even though the
neighbours for each particle constantly change over time.
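The nearest-neighbour rule can be sketched as a small simulation. Box size, interaction radius, speed, noise amplitude and step count below are illustrative assumptions.

```python
import math
import random

random.seed(3)  # for a reproducible run

# Assumed parameters: particle count, box size (periodic boundaries),
# interaction radius, speed per step, noise amplitude, and step count.
N, L, R, SPEED, NOISE, STEPS = 40, 5.0, 1.0, 0.05, 0.1, 200

x = [random.uniform(0, L) for _ in range(N)]
y = [random.uniform(0, L) for _ in range(N)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def order_parameter(angles):
    """1.0 when all particles move in the same direction, near 0 when random."""
    sx = sum(math.cos(t) for t in angles)
    sy = sum(math.sin(t) for t in angles)
    return math.hypot(sx, sy) / len(angles)

for _ in range(STEPS):
    new_theta = []
    for i in range(N):
        # nearest-neighbour rule: average the directions of all particles
        # within radius R (including the particle itself)...
        sx = sy = 0.0
        for j in range(N):
            if (x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2 <= R * R:
                sx += math.cos(theta[j])
                sy += math.sin(theta[j])
        # ...then add a small random perturbation
        new_theta.append(math.atan2(sy, sx) + random.uniform(-NOISE, NOISE))
    theta = new_theta
    for i in range(N):  # constant speed, periodic boundaries
        x[i] = (x[i] + SPEED * math.cos(theta[i])) % L
        y[i] = (y[i] + SPEED * math.sin(theta[i])) % L
```

Starting from random headings (order parameter near 0), the particles align and the order parameter should climb toward 1, even though there is no centralized coordination.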
Although more realistic swarming models have been explored, the SPP model
remains important because of its simplicity and the strength and the variety of its
emergent phenomena. The SPP model is an agent-based model based on
a Lagrangian viewpoint, which follows individual particles rather than working with
the density of the swarm. It is a discrete switched linear system which is stable,
even though no common quadratic Lyapunov function exists. It is an analogue of
the Ising model in ferromagnetism, where temperature corresponds to particle
randomness and spin clusters correspond to particle clusters.
SPP models have been applied in areas such as marching locusts, bird landings,
schooling fish, robotic swarms, molecular motors, the development of human
stampedes, and the evolution of human trails in urban green spaces.
FIG: Flocks of birds make a unanimous group decision to land
Chapter 5
APPLICATIONS OF SWARM INTELLIGENCE
Swarm Intelligence-based techniques can be used in a number of applications.
The U.S. military is investigating swarm techniques for controlling unmanned
vehicles. The European Space Agency is considering an orbital swarm for
self-assembly and interferometry. NASA is investigating the use of swarm
intelligence for planetary mapping. A 1992 paper by M. Anthony Lewis and
George A. Bekey discusses the possibility of using swarm intelligence to
control nanobots within the body for the purpose of killing cancer tumors.
Here are some of the applications of swarm intelligence.
a. Crowd Simulation
Artists are using swarm intelligence as a means of creating complex interactive
systems or simulating crowds.
Stanley and Stella in: Breaking the Ice was the first movie to make use of swarm
intelligence for rendering, realistically depicting the movements of groups of fish
and birds using the Boids system. Tim Burton's Batman Returns also made
use of swarm technology for showing the movements of a group of bats. The
Lord of the Rings film trilogy made use of similar technology, known as
Massive, during battle scenes. Swarm technology is particularly attractive
because it is cheap, robust, and simple.
Airlines have used swarm theory to simulate passengers boarding a plane.
Southwest Airlines researcher Douglas A. Lawson used an ant-based computer
simulation employing only six interaction rules to evaluate boarding times under
various boarding methods.
FIG: Crowd Simulation in Maya
b. Ant-Based Routing
The use of Swarm Intelligence in Telecommunication Networks has also
been researched, in the form of Ant Based Routing. This was pioneered separately
by Dorigo et al. and Hewlett Packard in the mid-1990s, with a number of
variations since. Basically, this uses a probabilistic routing table that
rewards/reinforces the route successfully traversed by each "ant" (a small
control packet); the ants flood the network. Reinforcement of the route in the
forward direction, the reverse direction, and both simultaneously has been
researched: backward reinforcement requires a symmetric network and couples the
two directions together; forward reinforcement rewards a route before the outcome
is known (but then you pay for the cinema before you know how good the film is).
As the system behaves stochastically and is therefore lacking repeatability, there
are large hurdles to commercial deployment. Mobile media and new
technologies have the potential to change the threshold for collective action
due to swarm intelligence.
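The probabilistic routing table with reinforcement described above can be illustrated with a toy class for a single node. This is a simplified sketch, not AntNet or the Hewlett-Packard algorithm; the class name, reward value, and evaporation rate are all assumptions for demonstration.

```python
import random

class AntRouter:
    """Illustrative ant-based probabilistic routing table for one node.

    pheromone[dest][next_hop] holds the desirability of forwarding a packet
    bound for dest via next_hop. Ants that successfully traverse a route
    reinforce the entries they used (backward reinforcement), while all
    entries slowly evaporate so stale routes are forgotten.
    """

    def __init__(self, neighbours, evaporation=0.1):
        self.neighbours = neighbours
        self.evaporation = evaporation
        self.pheromone = {}  # dest -> {next_hop: value}

    def table(self, dest):
        # Unknown destinations start with a uniform table over neighbours.
        return self.pheromone.setdefault(
            dest, {n: 1.0 for n in self.neighbours})

    def choose_next_hop(self, dest):
        # Sample a next hop with probability proportional to its pheromone.
        t = self.table(dest)
        r = random.uniform(0, sum(t.values()))
        for hop, value in t.items():
            r -= value
            if r <= 0:
                return hop
        return hop  # guard against floating-point rounding

    def reinforce(self, dest, next_hop, reward=1.0):
        # Evaporate all entries slightly, then reward the successful hop.
        t = self.table(dest)
        for hop in t:
            t[hop] *= (1.0 - self.evaporation)
        t[next_hop] += reward
```

Repeated reinforcement of one next hop makes its pheromone dominate the table, so future ants (and data packets routed by the same table) favour the proven route while still occasionally exploring alternatives.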
Airlines have also used ant-based routing in assigning aircraft arrivals to airport
gates. At Southwest Airlines, a software program uses swarm theory, or swarm
intelligence -- the idea that a colony of ants works better than one ant alone. Each
pilot acts like an ant searching for the best airport gate. "The pilot learns from his
experience what the best is for him, and it turns out that that's the best solution for
the airline," Dr. Douglas A. Lawson explains. As a result, the "colony" of pilots
always goes to gates where they can arrive and depart quickly. The program can even
alert a pilot to plane back-ups before they happen. "We can anticipate that it's going
to happen, so we'll have a gate available," Dr. Lawson says.
FIG: Swarm Intelligence used in Airlines
c. Clustering Behavior Of Ants
Ants build cemeteries by collecting dead bodies into a single place in the nest.
They also organize the spatial disposition of larvae into clusters with the younger,
smaller larvae in the cluster center and the older ones at its periphery.
This clustering behavior has motivated a number of scientific studies.
FIG: Clustering Behaviour of Ants
d. Nest Building Behaviour of Wasps and Termites
Wasps build nests with a highly complex internal structure that is well beyond
the cognitive capabilities of a single wasp. Termites build nests whose dimensions
are enormous when compared to a single individual, which can measure as little
as a few millimeters. Scientists have been studying the coordination mechanisms
that allow the construction of these structures and have proposed probabilistic
models exploiting insect behavior. Some of these models are implemented in
computer programs to produce simulated structures that recall the morphology of
the real nests.
FIG: Nest building behaviour of Wasps and Termites
e. Flocking and Schooling In Birds and Fish
Scientists have shown that these elegant swarm-level behaviors can be
understood as the result of a self-organized process where no leader is in charge
and each individual bases its movement decisions solely on locally available
information: the distance, perceived speed, and direction of movement of
neighbours. These studies have inspired a number of computer simulations that
are now used in the computer graphics industry for the realistic reproduction of
flocking in movies and computer games.
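The local rules just described (reacting only to the positions and velocities of nearby neighbours) can be sketched as a minimal boids-style update. This is an illustrative sketch only; the weights, radii, and tuple layout are assumed values for demonstration, not Reynolds's original parameters.

```python
def boids_step(boids, radius=2.0, sep_dist=0.5,
               w_sep=0.05, w_align=0.05, w_coh=0.005):
    """One update of a minimal boids-style flock (illustrative sketch).

    Each boid is a tuple (x, y, vx, vy) and reacts only to neighbours
    within `radius`, combining three local rules: separation (avoid
    crowding), alignment (match velocity), and cohesion (steer toward
    the neighbours' centre of mass).
    """
    new = []
    for (x, y, vx, vy) in boids:
        nbrs = [b for b in boids
                if b[:2] != (x, y)
                and (b[0] - x) ** 2 + (b[1] - y) ** 2 <= radius ** 2]
        ax = ay = 0.0
        if nbrs:
            n = len(nbrs)
            cx = sum(b[0] for b in nbrs) / n   # neighbours' centre of mass
            cy = sum(b[1] for b in nbrs) / n
            mvx = sum(b[2] for b in nbrs) / n  # neighbours' mean velocity
            mvy = sum(b[3] for b in nbrs) / n
            ax += w_coh * (cx - x) + w_align * (mvx - vx)
            ay += w_coh * (cy - y) + w_align * (mvy - vy)
            # Separation: push away from neighbours that are too close.
            for (bx, by, _, _) in nbrs:
                if (bx - x) ** 2 + (by - y) ** 2 < sep_dist ** 2:
                    ax += w_sep * (x - bx)
                    ay += w_sep * (y - by)
        vx, vy = vx + ax, vy + ay
        new.append((x + vx, y + vy, vx, vy))
    return new
```

A boid with no neighbours simply continues in a straight line, while two nearby boids with different velocities become more aligned after each step, which is exactly the leaderless convergence the paragraph above describes.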
FIG: Flock of Birds
FIG: Flocking Simulation
f. Ant Colony Optimization
In ant colony optimization (ACO), a set of software agents called "artificial ants"
search for good solutions to a given optimization problem, transformed into the
problem of finding the minimum-cost path on a weighted graph. The artificial
ants incrementally build solutions by moving on the graph. The solution
construction process is stochastic and is biased by a pheromone model, that
is, a set of parameters associated with graph components, the values of which
are modified at runtime by the ants.
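A toy version of this construction process can be sketched in code. The function below is a simplified illustrative sketch, not the full Ant System of Dorigo and Stützle: ants build paths on a weighted graph, biased by pheromone and inverse edge cost, and completed paths deposit pheromone in proportion to their quality. All parameter values are assumptions.

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=30,
                      evaporation=0.5, seed=0):
    """Illustrative ACO sketch for a minimum-cost path.

    graph[u][v] is the cost of edge u -> v. Pheromone values bias the
    stochastic construction of paths; shorter completed paths deposit
    more pheromone, reinforcing good routes over iterations.
    """
    rng = random.Random(seed)
    tau = {u: {v: 1.0 for v in graph[u]} for u in graph}  # pheromone
    best_path, best_cost = None, float("inf")

    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            node, path, cost = start, [start], 0.0
            while node != goal:
                choices = [v for v in graph[node] if v not in path]
                if not choices:     # dead end: abandon this ant
                    path = None
                    break
                # Probability proportional to pheromone / edge cost.
                weights = [tau[node][v] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                cost += graph[path[-1]][node]
                path.append(node)
            if path is not None:
                paths.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # Evaporate everywhere, then deposit pheromone on completed paths.
        for u in tau:
            for v in tau[u]:
                tau[u][v] *= (1.0 - evaporation)
        for path, cost in paths:
            for u, v in zip(path, path[1:]):
                tau[u][v] += 1.0 / cost
    return best_path, best_cost
```

On a small graph with a cheap route and an expensive one, the pheromone trail concentrates on the cheap route within a few iterations, mirroring how real ant colonies converge on short trails.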
FIG: Ant Colony Optimization
g. Particle Swarm Optimization
It is inspired by the social behaviors of flocks of birds and schools of fish. In
practice, in the initialization phase each particle is given a random initial position
and an initial velocity. The position of the particle represents a solution of the
problem and therefore has a value, given by the objective function. At each iteration
of the algorithm, each particle moves with a velocity that is a weighted sum of
three components: the old velocity, a velocity component that drives the particle
towards the location in the search space where it previously found its best
solution, and a velocity component that drives the particle towards the location in
the search space where the neighbouring particles found their best solution.
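The three velocity components described above map directly onto a short implementation. The function below is a minimal illustrative PSO sketch using a global-best neighbourhood; the inertia weight and acceleration coefficients are commonly used values, assumed here for demonstration.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch for minimizing f over a box (illustrative).

    Each particle's new velocity is a weighted sum of three components:
    its old velocity (inertia w), attraction to its own best position
    (cognitive term c1), and attraction to the swarm's best position
    (social term c2).
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a simple convex objective such as the sphere function, the swarm collapses onto the minimum within a few dozen iterations, which is the behaviour the paragraph above describes.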
FIG: Graph based on Particle Swarm Optimization
h. Swarm Based Network Management
Schoonderwoerd et al. proposed Ant-based Control (ABC), an algorithm for routing
and load balancing in circuit-switched networks; Di Caro and Dorigo
proposed AntNet, an algorithm for routing in packet-switched networks. While
ABC was a proof-of-concept, AntNet, which is an ACO algorithm, was compared
to many state-of-the-art algorithms and its performance was found to be
competitive, especially in situations of highly dynamic and stochastic data traffic
such as can be observed in Internet-like networks. An extension of AntNet has been
successfully applied to ad-hoc networks.
FIG: Network Management using Swarm Intelligence
i. Cooperative Behaviour in Swarms of Robots
There are a number of swarm behaviours observed in natural systems that have
inspired innovative ways of solving problems by using swarms of robots. This is
what is called swarm robotics. In other words, swarm robotics is the application of
swarm intelligence principles to the control of swarms of robots. As with swarm
intelligence systems in general, swarm robotics systems can have either a
scientific or an engineering flavour. Clustering in a swarm of robots was mentioned
above as an example of an artificial/scientific system.
FIG: Swarm Robotics
FIG: Swarm Robot
j. Swarmic art
In a series of works, al-Rifaie et al. have successfully used two swarm
intelligence algorithms – one mimicking the foraging behaviour of one species of
ants (Leptothorax acervorum) (stochastic diffusion search, SDS) and the other
mimicking the behaviour of birds flocking (particle swarm optimization, PSO) –
to describe a novel integration strategy exploiting the local
search properties of the PSO with global SDS behaviour. The resulting hybrid
algorithm is used to sketch novel drawings of an input image, exploiting an artistic
tension between the local behaviour of the ‘birds flocking’ - as they seek to follow
the input sketch - and the global behaviour of the ‘ants foraging’ - as they seek to
encourage the flock to explore novel regions of the canvas. The 'creativity’ of this
hybrid swarm system has been analyzed under the philosophical light of the
‘rhizome’ in the context of Deleuze’s well known ‘Orchid and Wasp’ metaphor.
Chapter 6
ADVANTAGES & DISADVANTAGES OF SWARM INTELLIGENCE
Advantages of Swarm Intelligence
It is easily adaptable, as conventional workgroups devise various standard
operating procedures to react to predetermined stimuli, but swarms have a better
ability to adjust to new situations or to change beyond a narrow range of options.
It is evolvable, as evolution is the result of adaptation. Conventional
bureaucratic systems can shift the locus of adaptation (slowly) from one part of
the system to another. In swarm systems, individual variation and imperfection
lead to perpetual novelty, which leads to evolution.
It is resilient, as a swarm is a collective system made up of multitudes working
in parallel, which results in enormous redundancy. Because the swarm is highly
adaptable and evolves quickly, failures tend to be minimal.
Disadvantages of Swarm Intelligence
It is non-optimal and uncontrollable, as it is very difficult to exercise control
over a swarm. Swarm systems require guidance in the way that a shepherd
drives a herd: by applying force at crucial leverage points.
It is unpredictable, as the complexity of a swarm system leads to
unforeseeable results. Emergent novelty is a primary characteristic of self-
organization by adaptive systems.
It is non-understandable, as sequential systems are understandable while
complex adaptive systems are a jumble of intersecting logic. Instead of A
causing B, which in turn causes C, A indirectly causes everything, and everything
indirectly causes A.
It is non-immediate as linear systems tend to be very direct. Flip a switch and
the light comes on. Simple collective systems tend to operate simply. But
complex swarm systems with rich hierarchies take time.
CONCLUSION
The idea of swarm behavior may still seem strange because we are used to
relatively linear bureaucratic models. In fact, this kind of behaviour characterizes
natural systems ranging from flocks of birds to schools of fish. Humans are more
complex than ants or fish and have far more capacity for novel behavior, so some
unexpected results are likely, and for this reason leading scientists and
organizations will further pursue swarm approaches. Swarm intelligence provides
a distributed approach to problem solving, mimicking the very simple natural
process of cooperation. According to my survey, many problems that had
previously been solved using other Artificial Intelligence (AI) approaches, such as
genetic algorithms and neural networks, are also solvable by this approach. Due
to their simple architecture and adaptive nature, techniques like Ant Colony
Optimization (ACO) are likely to be seen much more in the future.
FUTURE SCOPE
In the future, Swarm Intelligence will be an important tool for researchers and
engineers interested in solving certain classes of complex problems. To build the
foundations of this discipline and to develop an appropriate methodology, we
should proceed in parallel both at an abstract level and by tackling a number of
challenging problems in selected research domains. The research domains that
have been chosen are optimization, robotics, networks and data mining, pipe
inspection, miniaturization, telecommunications, medicine, self-assembling robots,
engine maintenance, cleaning ship hulls, satellite maintenance, and pest eradication.
LIST OF ABBREVIATIONS
SI - Swarm Intelligence
ST - Swarm Technology
ACO - Ant Colony Optimization
PSO - Particle Swarm Optimization
GSA - Gravitational Search Algorithm
BSA - Backtracking Search Optimization Algorithm
SPP - Self-Propelled Particles
REFERENCES
1. http://en.wikipedia.org/wiki/Swarm_intelligence
2. Beni, G., Wang, J. Swarm Intelligence in Cellular Robotic Systems, Proceed.
NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June
26–30 (1989).
3. Altruism helps swarming robots fly better, genevalunch.com, 4 May 2011.
4. Ant Colony Optimization by Marco Dorigo and Thomas Stützle, MIT Press,
2004. ISBN 0-262-04219-3
5. Karaboga, Dervis (2010). "Artificial bee colony algorithm". Scholarpedia 5 (3):
6915.
6. Civicioglu, Pinar (2013). "Artificial cooperative search algorithm for numerical
optimization problems". Information Sciences 229: 58–76.
7. Civicioglu, P. (2013). "Backtracking Search Optimization Algorithm for numerical
optimization problems". Applied Mathematics and Computation 219: 8121–8144.
8. Möglich, M.; Maschwitz, U.; Hölldobler, B., Tandem Calling: A New Kind of Signal
in Ant Communication, Science, Volume 186, Issue 4168, pp. 1046-1047.
9. Nasuto, S.J., Bishop, J.M. & Lauria, S., Time complexity analysis of the Stochastic
Diffusion Search, Proc. Neural Computation '98, pp. 260-266, Vienna, Austria,
(1998).
10. Nasuto, S.J., & Bishop, J.M., (1999), Convergence of the Stochastic Diffusion
Search, Parallel Algorithms, 14:2, pp: 89-107.
11. Myatt, D.M., Bishop, J.M., Nasuto, S.J., (2004), Minimum stable convergence
criteria for Stochastic Diffusion Search, Electronics Letters, 22:40, pp. 112-113.
12. al-Rifaie, M.M., Bishop, J.M. & Blackwell, T., An investigation into the merger of
stochastic diffusion search and particle swarm optimisation,