
    UNIT : V

SOFT COMPUTING - II SEMESTER (MCSE 205)

    PREPARED BY ARUN PRATAP SINGH


In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.


The evolution usually starts from a population of randomly generated individuals and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.

A typical genetic algorithm requires:

1. a genetic representation of the solution domain,

2. a fitness function to evaluate the solution domain.

A standard representation of each candidate solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.

    Once the genetic representation and the fitness function are defined, a GA proceeds to initialize

    a population of solutions and then to improve it through repetitive application of the mutation,

    crossover, inversion and selection operators.
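A minimal sketch of this loop is given below, assuming a fixed-length bit-string encoding, the toy OneMax objective (number of 1s), fitness-proportionate selection, one-point crossover and bit-flip mutation; a real application would plug in its own representation and fitness function.

import random

def fitness(chrom):
    # Toy objective (OneMax): count of 1s; replace with the problem's own fitness.
    return sum(chrom)

def select(pop):
    # Fitness-proportionate ("roulette wheel") selection.
    total = sum(fitness(c) for c in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for c in pop:
        acc += fitness(c)
        if acc >= r:
            return c
    return pop[-1]

def crossover(a, b):
    # One-point crossover: swap the tails of two parents at a random cut point.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom, rate=0.01):
    # Bit-flip mutation applied gene by gene with a small probability.
    return [1 - g if random.random() < rate else g for g in chrom]

def run_ga(n_bits=20, pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            c1, c2 = crossover(select(pop), select(pop))
            children += [mutate(c1), mutate(c2)]
        pop = children[:pop_size]          # the new generation replaces the old one
    return max(pop, key=fitness)

print(run_ga())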


    WORKING PRINCIPLE OF GENETIC ALGORITHMS:


Genetic Algorithms are search algorithms that are based on concepts of natural selection and natural genetics. The genetic algorithm was developed to simulate some of the processes observed in natural evolution, a process that operates on chromosomes (organic devices for encoding the structure of living beings). The genetic algorithm differs from other search methods in that it searches among a population of points, and works with a coding of the parameter set rather than the parameter values themselves. It also uses objective function information without any gradient information.

The transition scheme of the genetic algorithm is probabilistic, whereas traditional methods use gradient information. Because of these features, genetic algorithms are used as general-purpose optimization algorithms. They also provide a means to search irregular spaces and hence are applied to a variety of function optimization, parameter estimation and machine learning applications.

    Basic Principle

The working principle of a canonical GA is illustrated in Fig. 1. The major steps involved are the generation of a population of solutions, finding the objective function and fitness function, and the application of genetic operators. These aspects are described briefly below and in detail in the following subsections.

Fig. 1: The working principle of a simple genetic algorithm


Fig. 2: The basic GA operations. One generation is broken down into a selection phase and a recombination phase. Strings are assigned into adjacent slots during selection.

An important characteristic of a genetic algorithm is the coding of the variables that describe the problem. The most common coding method is to transform the variables to a binary string or vector; GAs perform best when solution vectors are binary. If the problem has more than one variable, a multi-variable coding is constructed by concatenating as many single-variable codings as there are variables in the problem. A genetic algorithm processes a number of solutions simultaneously. Hence, in the first step a population having P individuals is generated by pseudo-random generators, whose individuals represent feasible solutions. This is a representation of solution vectors in the solution space and is called the initial solution. This ensures that the search is robust and unbiased, as it starts from a wide range of points in the solution space.

In the next step, individual members of the population are evaluated to find the objective function value. In this step, the exterior penalty function method is utilized to transform a constrained optimization problem into an unconstrained one. This is exclusively problem specific. In the third step, the objective function is mapped into a fitness function that computes a fitness value for each member of the population. This is followed by the application of GA operators.


    Working Principle

To illustrate the working principles of GAs, an unconstrained optimization problem is considered. Let us consider the following maximization problem,

Maximize f(x), subject to x^(L) <= x <= x^(U),   (1)

where x^(L) and x^(U) are the lower and upper bounds the variable can take. Although a maximization problem is considered here, a minimization problem can also be handled using GAs. The working of GAs is completed by performing the following tasks.
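Before f(x) can be evaluated, the binary chromosome must be decoded into a value of x lying within these bounds; the sketch below shows one common way to do this (the bit lengths, bounds and objective are illustrative assumptions):

def decode(bits, x_low, x_high):
    # Map an l-bit substring to a real value in the interval [x_low, x_high].
    integer = int("".join(str(b) for b in bits), 2)
    return x_low + (x_high - x_low) * integer / (2 ** len(bits) - 1)

# A two-variable chromosome: the first 8 bits encode x1 in [0, 5],
# the next 8 bits encode x2 in [-1, 1] (multi-variable coding by concatenation).
chrom = [1,0,1,1,0,0,1,0, 0,1,1,1,1,0,0,1]
x1 = decode(chrom[:8], 0.0, 5.0)
x2 = decode(chrom[8:], -1.0, 1.0)
f_value = x1 * x1 + x2      # placeholder objective f(x1, x2)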

    OPERATORS OF GENETIC ALGORITHM :

A genetic operator is an operator used in genetic algorithms to maintain genetic diversity (mutation) and to combine existing solutions into new ones (crossover). The main difference between them is that mutation operators operate on one chromosome, that is, they are unary, while crossover operators are binary operators.

Genetic variation is a necessity for the process of evolution. Genetic operators used in genetic algorithms are analogous to those in the natural world: survival of the fittest, or selection; reproduction (crossover, also called recombination); and mutation.
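As a concrete illustration of the unary/binary distinction, the sketch below uses a bit-string chromosome with the standard bit-flip mutation and one-point crossover as representative operators:

import random

def mutation(parent, rate=0.05):
    # Unary operator: acts on a single chromosome, flipping each gene
    # with a small probability to maintain diversity.
    return [1 - g if random.random() < rate else g for g in parent]

def crossover(parent1, parent2):
    # Binary operator: acts on two chromosomes, exchanging their tails
    # at a random cut point to combine existing solutions.
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:], parent2[:point] + parent1[point:]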


    DELETION AND INVERSION :


    FITNESS FUNCTION:

A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims.

In particular, in the fields of genetic programming and genetic algorithms, each design solution is represented as a string of numbers (referred to as a chromosome). After each round of testing, or simulation, the idea is to delete the n worst design solutions and to breed n new ones from the best design solutions. Each design solution, therefore, needs to be awarded a figure of merit, to indicate how close it came to meeting the overall specification, and this is generated by applying the fitness function to the test, or simulation, results obtained from that solution.

    The reason that genetic algorithms cannot be considered to be a lazy way of performing design

    work is precisely because of the effort involved in designing a workable fitness function. Even

    though it is no longer the human designer, but the computer, that comes up with the final design,

    it is the human designer who has to design the fitness function. If this is designed badly, the

    algorithm will either converge on an inappropriate solution, or will have difficulty converging at all.

    Moreover, the fitness function must not only correlate closely with the designer's goal, it must also

    be computed quickly. Speed of execution is very important, as a typical genetic algorithm must

    be iterated many times in order to produce a usable result for a non-trivial problem.

Fitness approximation may be appropriate, especially in the following cases:

Fitness computation time of a single solution is extremely high.

A precise model for fitness computation is missing.

The fitness function is uncertain or noisy.

Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases.

Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome.

Definition of the fitness function is not straightforward in many cases and is often performed iteratively if the fittest solutions produced by the GA are not what is desired. In some cases, it is very hard or impossible to come up even with a guess of what the fitness function definition might be. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents (normally humans).


As mentioned earlier, GAs mimic the survival-of-the-fittest principle of nature to carry out a search process. Therefore, GAs are naturally suitable for solving maximization problems. Minimization problems are usually transformed into maximization problems by a suitable transformation. In general, a fitness function F(x) is first derived from the objective function and used in successive genetic operations. Fitness in the biological sense is a quality value which is a measure of the reproductive efficiency of chromosomes. In a genetic algorithm, fitness is used to allocate reproductive traits to the individuals in the population and thus acts as some measure of goodness to be maximized. This means that individuals with a higher fitness value will have a higher probability of being selected as candidates for further examination. Certain genetic operators require that the fitness function be non-negative, although other operators do not have this requirement. For maximization problems, the fitness function can be considered to be the same as the objective function, F(x) = f(x). For minimization problems, to generate non-negative values in all cases and to reflect the relative fitness of individual strings, it is necessary to map the underlying natural objective function to a fitness function form. A number of such transformations are possible. Two commonly adopted fitness mappings are presented below.

F(x) = 1 / (1 + f(x))   (4)

This transformation does not alter the location of the minimum, but converts a minimization problem into an equivalent maximization problem. An alternative function to transform the objective function to get the fitness value is given below.

F(x_i) = V - f(x_i),  i = 1, 2, ..., P   (5)

where f(x_i) is the objective function value of individual i, P is the population size and V is a large value chosen to ensure non-negative fitness values. The value of V adopted in this work is the maximum value of the second term of equation (5), so that the fitness value corresponding to the maximum value of the objective function is zero. This transformation also does not alter the location of the solution, but converts a minimization problem into an equivalent maximization problem. The fitness function value of a string is known as the string fitness.
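A compact sketch of both mappings is given below; the form used for equation (5), F(x_i) = V - f(x_i) with V taken as the largest objective value in the population, follows the description above:

def fitness_reciprocal(f_value):
    # Eq. (4): F(x) = 1 / (1 + f(x)); assumes f(x) >= 0.
    # Minimising f is equivalent to maximising F, and the optimum does not move.
    return 1.0 / (1.0 + f_value)

def fitness_shifted(f_values):
    # Eq. (5): F(x_i) = V - f(x_i), with V = max_j f(x_j) over the population,
    # so the worst (largest) objective value receives fitness zero.
    V = max(f_values)
    return [V - f for f in f_values]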


    ENCODING IN GENETIC ALGORITHM :


    OPTIMIZATION :


    GENETIC MODELING :


    JOB-SHOP SCHEDULING PROBLEM (JSSP):

Job shop scheduling (or the job-shop problem) is an optimization problem in computer science and operations research in which jobs are assigned to resources at particular times.

The most basic version is as follows:

We are given n jobs J1, J2, ..., Jn of varying sizes, which need to be scheduled on m identical machines, while trying to minimize the makespan. The makespan is the total length of the schedule (that is, when all the jobs have finished processing). Nowadays, the problem is often presented as an online problem (dynamic scheduling): each job is presented in turn, and the online algorithm needs to make a decision about that job before the next job is presented.

This problem is one of the best known online problems, and was the first problem for which competitive analysis was presented, by Graham in 1966.[1] The best problem instances for the basic model with the makespan objective are due to Taillard.
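The greedy rule analysed by Graham, which assigns each arriving job to the currently least-loaded of the m identical machines, can be sketched as follows (a sketch, not part of these notes; the job sizes are made up for illustration):

import heapq

def list_schedule_makespan(job_sizes, m):
    # Graham-style list scheduling: place each job, in arrival order,
    # on the machine with the smallest current load.
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in job_sizes:
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)    # makespan of the resulting schedule

print(list_schedule_makespan([3, 1, 4, 1, 5, 9, 2, 6], m=3))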


    Problem variations:

    Many variations of the problem exist, including the following:

Machines can be related, independent, or equal.

Machines can require a certain gap between jobs, or allow no idle time.

Machines can have sequence-dependent setups.

The objective function can be to minimize the makespan, the Lp norm, tardiness, maximum lateness, etc. It can also be a multi-objective optimization problem.

Jobs may have constraints, for example a job i needs to finish before job j can be started (see workflow). Also, the objective function can be multi-criteria.

Jobs and machines have mutual constraints, for example, certain jobs can be scheduled on some machines only.


Sets of jobs can relate to different sets of machines

    Deterministic (fixed) processing times or probabilistic processing times

    There may also be some other side constraints

    NP-hardness:

If one already knows that the travelling salesman problem is NP-hard (as it is), then the job-shop problem is clearly also NP-hard, since the TSP is a special case of the JSP with m = 1 (the salesman is the machine and the cities are the jobs).

    Problem representation:

The disjunctive graph[4] is one of the popular models used for describing job shop scheduling problem instances.[5]

    A mathematical statement of the problem can be made as follows:

Let M = {M_1, M_2, ..., M_m} and J = {J_1, J_2, ..., J_n} be two finite sets. On account of the industrial origins of the problem, the M_i are called machines and the J_j are called jobs.

Let X denote the set of all sequential assignments of jobs to machines, such that every job is done by every machine exactly once; elements x of X may be written as n x m matrices, in which column i lists the jobs that machine M_i will do, in order. For example, the matrix

x = [ 1 2
      2 3
      3 1 ]

means that machine M_1 will do the three jobs J_1, J_2, J_3 in that order, while machine M_2 will do the jobs in the order J_2, J_3, J_1.

Suppose also that there is some cost function C : X -> [0, +infinity]. The cost function may be interpreted as a "total processing time", and may have some expression in terms of times C_ij : M x J -> [0, +infinity], the cost/time for machine M_i to do job J_j.

The job-shop problem is to find an assignment of jobs x in X such that C(x) is a minimum, that is, there is no y in X such that C(y) < C(x).

    The problem of infinite cost :

One of the first problems that must be dealt with in the JSP is that many proposed solutions have infinite cost: i.e., there exists an x in X such that C(x) = +infinity. In fact, it is quite simple to concoct such examples by ensuring that two machines will deadlock, so that each waits for the output of the other's next step.


    TRAVELING SALESMAN PROBLEM :

    "The traveling salesman problem, or TSP for short, is this: given a finite number of 'cities' along with

    the cost of travel between each pair of them, find the cheapest way of visiting all the cities and

    returning to your starting point."

    A solution: distance = 941

    The traveling salesman must visit every city in his territory exactly once (possibly then return to

    the starting point).

    Given the cost of travel between all cities, how should he plan his itinerary for minimum total

    cost of the entire tour?

The TSP is NP-complete.

    Complexities :

Testing every possibility for an N-city tour would require O(N!) additions.

A 30-city tour would take about 2.65 x 10^32 additions.

Assuming 1 billion additions per second, this would take over 8,000,000,000,000,000 years, i.e., 8 million billion years. The age of the universe since the big bang is only about 14 billion years.

    GA approach :

    Can find a pretty good solution in less than a minute


Representation: a random permutation, e.g. a tour for 9 cities,

(6 7 8 9 3 4 1 2 5)

Each tour represents an individual animal or organism.

Population: a group of random tours, i.e. a community of individuals.
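For such a permutation representation, the fitness of an individual is simply the length of the closed tour it encodes; the sketch below evaluates the 9-city tour above using made-up city coordinates:

import math

def tour_length(tour, coords):
    # Sum the distances between consecutive cities, including the edge
    # from the last city back to the first (a closed tour).
    total = 0.0
    for i in range(len(tour)):
        a = coords[tour[i]]
        b = coords[tour[(i + 1) % len(tour)]]
        total += math.dist(a, b)
    return total

coords = {1: (0, 0), 2: (1, 0), 3: (1, 1), 4: (0, 1), 5: (2, 2),
          6: (3, 1), 7: (3, 0), 8: (2, 0), 9: (2, 1)}
print(tour_length([6, 7, 8, 9, 3, 4, 1, 2, 5], coords))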


    A solution, cost = 800


    A solution, distance = 652

    Best Solution (Distance = 420)


    I have developed a solution to the Traveling Salesman Problem (TSP) using a Genetic

    Algorithm (GA). In the Traveling Salesman Problem, the goal is to find the shortest distance

    between N different cities. The path that the salesman takes is called a tour.

Testing every possibility for an N-city tour would require N! additions. A 30-city tour would mean measuring the total distance of 2.65 x 10^32 different tours. Assuming a trillion additions per second, this would take 252,333,390,232,297 years. Adding one more city would cause the time to increase by a factor of 31. Obviously, this is an impossible approach.

A genetic algorithm can be used to find a solution in much less time. Although it might not find the best solution, it can find a near-perfect solution for a 100-city tour in less than a minute. There are a couple of basic steps to solving the traveling salesman problem using a GA.

First, create a group of many random tours in what is called a population. This algorithm uses a greedy initial population that gives preference to linking cities that are close to each other.

Second, pick 2 of the better (shorter) tours as parents in the population and combine them to make 2 new child tours. Hopefully, these child tours will be better than either parent.

A small percentage of the time, the child tours are mutated. This is done to prevent all tours in the population from looking identical.

The new child tours are inserted into the population, replacing two of the longer tours. The size of the population remains the same.

New child tours are repeatedly created until the desired goal is reached.

    As the name implies, Genetic Algorithms mimic nature and evolution using the principles

    of Survival of the Fittest.

The two complex issues with using a Genetic Algorithm to solve the Traveling Salesman Problem are the encoding of the tour and the crossover algorithm that is used to combine the two parent tours to make the child tours.

In a standard Genetic Algorithm, the encoding is a simple sequence of numbers and crossover is performed by picking a random point in the parents' sequences and switching every number in the sequence after that point. In this example, the crossover point is between the 3rd and 4th item in the list. To create the children, every item in the parents' sequences after the crossover point is swapped.

Parent 1: F A B | E C G D

Parent 2: D E A | C G B F

Child 1: F A B | C G B F

Child 2: D E A | E C G D
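The snippet below reproduces this naive one-point crossover on the two parent strings, showing how the resulting children duplicate some cities and drop others:

def naive_crossover(p1, p2, point):
    # Swap everything after the cut point; with city tours this can
    # duplicate or omit cities, producing invalid children.
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

parent1 = list("FABECGD")
parent2 = list("DEACGBF")
child1, child2 = naive_crossover(parent1, parent2, 3)
print("".join(child1))   # FABCGBF: visits F and B twice, never D or E
print("".join(child2))   # DEAECGD: visits D and E twice, never B or F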


The difficulty with the Traveling Salesman Problem is that every city can only be used once in a tour. If the letters in the above example represented cities, the child tours created by this crossover operation would be invalid. Child 1 goes to cities F and B twice, and never goes to cities D or E.

    The encoding cannot simply be the list of cities in the order they are traveled. Other encoding

    methods have been created that solve the crossover problem. Although these methods will not

    create invalid tours, they do not take into account the fact that the tour "A B C D E F G" is the

    same as "G F E D C B A". To solve the problem properly the crossover algorithm will have to

    get much more complicated.

My solution stores the links in both directions for each tour. In the above tour example, Parent 1 would be stored as:

    City First Connection Second Connection

    A F B

    B A E

    C E G

    D G F

    E B C

    F D A

    G C D

The crossover operation is more complicated than combining 2 strings. The crossover will take every link that exists in both parents and place those links in both children. Then, for Child 1 it alternates between taking links that appear in Parent 2 and then Parent 1. For Child 2, it alternates between Parent 2 and Parent 1, taking a different set of links. For either child, there is a chance that a link could create an invalid tour where, instead of a single path in the tour, there are several disconnected paths. These links must be rejected. To fill in the remaining missing links, cities are chosen at random. Since the crossover is not completely random, this is considered a greedy crossover.

    Eventually, this GA would make every solution look identical. This is not ideal. Once every tour

    in the population is identical, the GA will not be able to find a better solution. There are two ways


    around this. The first is to use a very large initial population so that it takes the GA longer to

    make all of the solutions the same. The second method is mutation, where some child tours

    are randomly altered to produce a new unique tour.
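A sketch of such a mutation (matching the description given later, where one randomly chosen city is moved to another position in the tour) might look like this:

import random

def mutate_tour(tour):
    # Remove one randomly chosen city and reinsert it at a random position,
    # producing a new, still valid tour.
    t = list(tour)
    city = t.pop(random.randrange(len(t)))
    t.insert(random.randrange(len(t) + 1), city)
    return t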

This Genetic Algorithm also uses a greedy initial population. The city links in the initial tours are not completely random. The GA will prefer to make links between cities that are close to each other. This is not done 100% of the time, because that would cause every tour in the initial population to be very similar.

    There are 6 parameters to control the operation of the Genetic Algorithm:

    Population Size - The population size is the initial number of random tours that are

    created when the algorithm starts. A large population takes longer to find a result. A

    smaller population increases the chance that every tour in the population will eventually

    look the same. This increases the chance that the best solution will not be found.

Neighborhood / Group Size - Each generation, this number of tours is randomly chosen from the population. The best 2 tours are the parents. The worst 2 tours get replaced by the children. For group size, a high number will increase the likelihood that the really good tours will be selected as parents, but it will also cause many tours to never be used as parents. A large group size will cause the algorithm to run faster, but it might not find the best solution.

Mutation % - The percentage chance that each child after crossover will undergo mutation. When a tour is mutated, one of the cities is randomly moved from one point in the tour to another.

# Nearby Cities - As part of the greedy initial population, the GA will prefer to link cities that are close to each other to make the initial tours. When creating the initial population, this is the number of cities that are considered to be close.

Nearby City Odds % - This is the percent chance that any one link in a random tour in the initial population will prefer to use a nearby city instead of a completely random city. If the GA chooses to use a nearby city, then there is an equally random chance that it will be any one of the cities from the previous parameter.

    Maximum Generations - How many crossovers are run before the algorithm is

    terminated.

    The other options that can be configured are (note: these are only available in the downloadable

    version):

Random Seed - This is the seed for the random number generator. By having a fixed instead of a random seed, you can duplicate previous results as long as all other parameters are the same. This is very helpful when looking for errors in the algorithm.

City List - The downloadable version allows you to import city lists from XML files. Again, when debugging problems it is useful to be able to run the algorithm with the exact same parameters.

    The starting parameter values are:


    Parameter Initial Value

    Population Size 10,000

    Group Size 5

    Mutation 3 %

    # Nearby Cities 5

    Nearby City Odds 90 %

Note: I originally wrote this program in 1995 in straight C. The tours in the population were stored as an array of 32-bit ints, where each bit indicated a connection. E.g., if tour[0] = 00000000000001000000010000000000 in binary, then city 0 connected to cities 11 and 19. That implementation was much faster than the current C# version. The greedy part of crossover could be performed by doing a binary AND on the two tours. While that code was very fast, it had a lot of binary operations, was limited in the number of cities it could support, and the code wasn't readable. Hopefully, this new version will allow for more re-use.


    APPLICATIONS OF GENETIC ALGORITHMS :

The keys to success in GA applications are an effective GA representation and a meaningful fitness evaluation. The appeal of genetic algorithms comes from their elegance and simplicity as robust search algorithms, and from their power to discover good solutions quickly for complex high-dimensional problems. A benefit of the GA approach is the ease with which it handles arbitrary kinds of constraints and objectives. GAs have been used for problem solving and modeling, and are applied to a number of engineering and scientific problems, as well as in business and entertainment, including the following:

1. Automatic Programming - These algorithms are used to evolve computer programs for specific tasks and to design other computational structures, for instance sorting networks and cellular automata.

2. Optimization - These algorithms are used in various optimization tasks, including numerical optimization and combinatorial optimization problems such as circuit design, job-shop scheduling (JSP) and the travelling salesman problem.

3. Models of Social Systems - These algorithms are used to study evolutionary aspects of social systems, like the evolution of communication, trail-following behaviour in ants, and the evolution of cooperation.

4. Ecological Models - GAs are used to model ecological phenomena like host-parasite co-evolution, biological arms races, symbiosis, and resource flow in ecologies.

5. Economic Models - GAs are used to model processes of innovation, the development of bidding strategies, and the emergence of economic markets.

6. Population Genetics Models - Genetic algorithms are employed to study questions in population genetics, such as: under what conditions will a gene for recombination be evolutionarily viable?

7. Immune System Models - GAs are used to model various aspects of the natural immune system, including somatic mutation during an individual's lifetime and the discovery of multi-gene families over evolutionary time.

8. Machine and Robot Learning - Genetic algorithms are used to control and design robots and symbolic production systems, to evolve rules for learning classifier systems, and to design neural networks. These algorithms are also used in machine learning applications such as prediction and classification.

9. Interaction between Evolution and Learning - GAs are used to study how individual learning and species evolution affect one another.


    SOME OTHER APPLICATIONS OF GENETIC ALGORITHM

Airlines revenue management

Artificial creativity

Audio watermark insertion/detection

Automated design (computer-automated design)

Biology and computational chemistry

Control engineering

Financial mathematics

File allocation for a distributed system

Filtering and signal processing

Finding hardware bugs


    Game theory equilibrium resolution

    Economics

    Mechanical engineering

Mobile communications infrastructure optimization

    Power electronics design

    DIFFERENCE BETWEEN GENETIC ALGORITHM AND TRADITIONAL METHOD:

GAs are radically different from most traditional optimization methods. Genetic algorithms work with a string coding of the variables instead of the variables themselves. An advantage of working with a coding of variables is that the coding discretizes the search space, even though the function may be continuous. On the other hand, since a GA requires only function values at discrete points, a discrete or discontinuous function can be handled with no extra cost. This allows GAs to be applied to a wide variety of problems. Genetic algorithm operators exploit similarities in string structure to make an effective search. A genetic algorithm works with a population of points instead of a single point. In a GA, previously found good information is emphasized using the reproduction operator and propagated adaptively through the crossover and mutation operators. A genetic algorithm is a population-based search algorithm, so multiple optimal solutions can be captured, reducing the effort of running the algorithm many times.


Figure: A multi-modal function

Even though GAs are different from most traditional search algorithms, there are some similarities. In traditional search methods, where a search direction is used to find a new point, at least two points are either implicitly or explicitly used to define the search direction. In the crossover operator, two points are likewise used to create new points. Thus, the crossover operator is similar to a directional search method, with the exception that the search direction is not fixed for all points in the population and that no effort is made to find the optimal point in any particular direction. Since the two points used in the crossover operator are chosen at random, many search directions are possible. Among them, some may lead to the global basin and some may not. The reproduction operator has the indirect effect of filtering out the good search directions and helps to guide the search. The purpose of the mutation operator is to create a point in the vicinity of the current point. The search in the mutation operator is similar to a local search method, such as the exploratory search used in the Hooke-Jeeves method.

    ISSUES OF GENETIC ALGORITHM:-

    The following issues are important while applying GA to practical problems, namely

    1. Choosing basic implementation issues such as

    a. Representation

    b. Population size and mutation rate

    c. Selection, deletion policies

d. Crossover and mutation operators.

2. Termination criterion

    3. Performance and scalability

    4. Solution is only as good as the evaluation functions.

    BENEFITS OF GENETIC ALGORITHM:-

    The benefits of genetic algorithm are-


    1. Easy to understand.

    2. Modular, separate from application

    3. Supports multi-objective optimization

    4. Good for noisy environment

5. We always get an answer, and the answer gets better with time.

    6. Inherently parallel and easily distributed.

7. There are many ways to speed up and improve a GA-based application as knowledge about the problem domain is gained.

8. Easy to exploit previous or alternate solutions.

    9. Flexible in forming building blocks for hybrid applications.

    EVOLUTIONARY COMPUTATION :

In computer science, evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) that involves continuous optimization and combinatorial optimization problems. Its algorithms can be considered global optimization methods with a metaheuristic or stochastic optimization character and are mostly applied to black-box problems (no derivatives known), often in the context of expensive optimization.

Evolutionary computation uses iterative progress, such as growth or development in a population. This population is then selected in a guided random search using parallel processing to achieve the desired end. Such processes are often inspired by biological mechanisms of evolution.

As evolution can produce highly optimised processes and networks, it has many applications in computer science.

Techniques -

Evolutionary computing techniques mostly involve metaheuristic optimization algorithms. Broadly speaking, the field includes:

    Evolutionary algorithms

    Gene expression programming

    Genetic algorithm

    Genetic programming

Evolutionary programming

Evolution strategy

    Differential evolution

    Swarm intelligence

    Ant colony optimization

    Particle swarm optimization

    Artificial Bee Colony Algorithm


    Bees algorithm

Artificial life (also see digital organism)

    Artificial immune systems

    Cultural algorithms

    Harmony search

    Learning classifier systems

    Learnable Evolution Model

Self-organization, such as self-organizing maps, competitive learning

    Evolutionary algorithms

Evolutionary algorithms form a subset of evolutionary computation in that they generally only involve techniques implementing mechanisms inspired by biological evolution such as reproduction, mutation, recombination, natural selection and survival of the fittest. Candidate solutions to the optimization problem play the role of individuals in a population, and the cost function determines the environment within which the solutions "live" (see also fitness function). Evolution of the population then takes place after the repeated application of the above operators.

In this process, there are two main forces that form the basis of evolutionary systems: recombination and mutation create the necessary diversity and thereby facilitate novelty, while selection acts as a force increasing quality.

Many aspects of such an evolutionary process are stochastic. Changed pieces of information due to recombination and mutation are randomly chosen. On the other hand, selection operators can be either deterministic or stochastic. In the latter case, individuals with a higher fitness have a higher chance to be selected than individuals with a lower fitness, but typically even the weak individuals have a chance to become a parent or to survive.

    Evolutionary algorithm :

In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators. Artificial evolution (AE) describes a process involving individual evolutionary algorithms; EAs are individual components that participate in an AE.

  • 7/21/2019 Soft Computing Unit-5 by Arun Pratap Singh

    73/78

    PREPARED BY ARUN PRATAP SINGH 72

    72

Evolutionary algorithms often perform well at approximating solutions to all types of problems because they ideally make no assumption about the underlying fitness landscape; this generality is shown by successes in fields as diverse as engineering, art, biology, economics, marketing, genetics, operations research, robotics, social sciences, physics, politics and chemistry.

Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes. The computer simulations Tierra and Avida attempt to model macroevolutionary dynamics.

In most real applications of EAs, computational complexity is a prohibiting factor, and this complexity is mostly due to fitness function evaluation. Fitness approximation is one way to overcome this difficulty. However, a seemingly simple EA can often solve complex problems, so there may be no direct link between algorithm complexity and problem complexity.

Implementation of biological processes :

1. Generate the initial population of individuals randomly - first generation
2. Evaluate the fitness of each individual in that population
3. Repeat on this generation until termination (time limit, sufficient fitness achieved, etc.):
   1. Select the best-fit individuals for reproduction - the parents
   2. Breed new individuals through crossover and mutation operations to give birth to offspring
   3. Evaluate the individual fitness of the new individuals
   4. Replace the least-fit individuals of the population with the new individuals

A minimal sketch of this loop is given below.
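The following Python sketch (not part of the original text) illustrates the generational loop above on a toy problem: fixed-length binary chromosomes whose fitness is simply the number of 1-bits. The population size, mutation rate, selection scheme and termination condition are arbitrary illustrative choices, not values prescribed by the text.

import random

# --- illustrative settings; all values here are arbitrary choices ---
GENOME_LEN = 20        # length of each binary chromosome
POP_SIZE = 30          # number of individuals per generation
MUTATION_RATE = 0.02   # per-bit mutation probability
GENERATIONS = 100      # termination condition: fixed number of generations

def fitness(genome):
    """Example fitness: number of 1-bits (the 'OneMax' toy problem)."""
    return sum(genome)

def random_individual():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(parent_a, parent_b):
    """Single-point crossover producing one offspring."""
    point = random.randint(1, GENOME_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    """Flip each bit independently with probability MUTATION_RATE."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

# 1. Generate the initial population randomly (first generation)
population = [random_individual() for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # 2. Evaluate the fitness of each individual
    population.sort(key=fitness, reverse=True)
    # 3.1 Select the best-fit individuals as parents (simple truncation selection)
    parents = population[:POP_SIZE // 2]
    # 3.2 Breed new individuals through crossover and mutation
    offspring = [mutate(crossover(*random.sample(parents, 2)))
                 for _ in range(POP_SIZE - len(parents))]
    # 3.4 Replace the least-fit individuals with the new offspring
    population = parents + offspring

best = max(population, key=fitness)
print("best fitness:", fitness(best))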

    Evolutionary algorithm types :

    Similar techniques differ in the implementation details and the nature of the particular applied

    problem.

    Genetic algorithm - This is the most popular type of EA. One seeks the solution of a problem

    in the form of strings of numbers (traditionally binary, although the best representations are

    usually those that reflect something about the problem being solved), by applying operators

    such as recombination and mutation (sometimes one, sometimes both). This type of EA is

often used in optimization problems.

    Genetic programming - Here the solutions are in the form of computer programs, and their

    fitness is determined by their ability to solve a computational problem.

    Evolutionary programming - Similar to genetic programming, but the structure of the program

    is fixed and its numerical parameters are allowed to evolve.


    Gene expression programming - Like genetic programming, GEP also evolves computer

    programs but it explores a genotype-phenotype system, where computer programs of

    different sizes are encoded in linear chromosomes of fixed length.

Evolution strategy - Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates.

Differential evolution - Based on vector differences and is therefore primarily suited for numerical optimization problems.

Neuroevolution - Similar to genetic programming, but the genomes represent artificial neural networks by describing their structure and connection weights. The genome encoding can be direct or indirect.

Learning classifier system - Here the solutions are classifiers (rules or conditions). A Michigan-LCS works with individual classifiers, whereas a Pittsburgh-LCS uses populations of classifier sets. Initially, classifiers were only binary, but now include real, neural-net, or S-expression types. Fitness is determined with either a strength-based or accuracy-based reinforcement learning approach.

SWARM INTELLIGENCE :

Swarm intelligence (SI) is artificial intelligence based on the collective behavior of decentralized, self-organized systems.

Swarm intelligence is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.[1]

SI systems typically consist of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local and to a certain degree random interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents. Examples of SI in natural systems include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling. The definition of swarm intelligence is still not quite clear; in principle, it should be a multi-agent system with self-organized behaviour that shows some intelligent behaviour.

The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general set of algorithms. 'Swarm prediction' has been used in the context of forecasting problems.


    Example algorithms :

    Particle swarm optimization

Particle swarm optimization (PSO) is a global optimization algorithm for dealing with problems in which a best solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles. Particles then move through the solution space, and are evaluated according to some fitness criterion after each time step. Over time, particles are accelerated towards those particles within their communication grouping which have better fitness values. The main advantage of such an approach over other global minimization strategies such as simulated annealing is that the large number of members that make up the particle swarm makes the technique impressively resilient to the problem of local minima.
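As a rough illustration of these update rules, the Python sketch below minimises a simple test function (the sphere function). The inertia, cognitive and social coefficients, swarm size and iteration count are common textbook-style defaults chosen for the example, not values taken from the text.

import random

DIM, SWARM_SIZE, ITERATIONS = 2, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5   # inertia, cognitive and social coefficients (illustrative)

def sphere(x):
    """Example objective to minimise: f(x) = sum(x_i^2), optimum at the origin."""
    return sum(v * v for v in x)

# initialise positions, velocities, and personal/global bests
positions = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM_SIZE)]
velocities = [[0.0] * DIM for _ in range(SWARM_SIZE)]
personal_best = [p[:] for p in positions]
global_best = min(personal_best, key=sphere)

for _ in range(ITERATIONS):
    for i in range(SWARM_SIZE):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + pull towards personal best + pull towards global best
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (personal_best[i][d] - positions[i][d])
                                + C2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
        # update personal and global bests after each time step
        if sphere(positions[i]) < sphere(personal_best[i]):
            personal_best[i] = positions[i][:]
            if sphere(personal_best[i]) < sphere(global_best):
                global_best = personal_best[i][:]

print("best found:", global_best, "value:", sphere(global_best))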

    Ant colony optimization

    Ant colony optimization (ACO), introduced by Dorigo in his doctoral dissertation, is a class

    of optimization algorithms modeled on the actions of an ant colony. ACO is a probabilistic

technique useful in problems that deal with finding better paths through graphs. Artificial 'ants' (simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Natural ants lay down pheromones directing each other to resources while exploring their environment.

    exploring their environment. The simulated 'ants' similarly record their positions and the quality of

    their solutions, so that in later simulation iterations more ants locate better solutions.
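The pheromone mechanism can be sketched on a toy shortest-path problem. The Python code below is a heavily simplified illustration, not a full ACO implementation: the small graph, evaporation rate and deposit rule are invented for the example. Ants choose edges in proportion to pheromone, and shorter paths receive more pheromone, so they become more likely in later iterations.

import random

# toy directed graph: node -> {neighbour: edge length}  (invented example)
graph = {
    'A': {'B': 1.0, 'C': 4.0},
    'B': {'D': 5.0},
    'C': {'D': 1.0},
}
pheromone = {(u, v): 1.0 for u, nbrs in graph.items() for v in nbrs}
EVAPORATION, ANTS, ITERATIONS = 0.5, 10, 30

def build_path(start, goal):
    """One ant walks from start to goal, choosing edges in proportion to pheromone."""
    path, node = [], start
    while node != goal:
        choices = list(graph[node].items())
        weights = [pheromone[(node, nxt)] for nxt, _ in choices]
        nxt, _ = random.choices(choices, weights=weights)[0]
        path.append((node, nxt))
        node = nxt
    return path

def path_length(path):
    return sum(graph[u][v] for u, v in path)

for _ in range(ITERATIONS):
    paths = [build_path('A', 'D') for _ in range(ANTS)]
    # evaporation: old pheromone decays on every edge
    for edge in pheromone:
        pheromone[edge] *= (1.0 - EVAPORATION)
    # deposit: shorter paths receive more pheromone
    for path in paths:
        for edge in path:
            pheromone[edge] += 1.0 / path_length(path)

print("pheromone levels:", pheromone)   # the shorter A->C->D route accumulates more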

    Artificial bee colony algorithm

Artificial bee colony algorithm (ABC) is a meta-heuristic algorithm introduced by Karaboga in 2005 that simulates the foraging behaviour of honey bees. The ABC algorithm has three phases: employed bee, onlooker bee and scout bee. In the employed bee and onlooker bee phases, bees exploit the food sources by local searches in the neighbourhood of selected solutions; the selection is deterministic in the employed bee phase and probabilistic in the onlooker bee phase. The scout bee phase is an analogy of abandoning exhausted food sources in the foraging process: solutions that no longer benefit search progress are abandoned, and new solutions are inserted in their place to explore new regions of the search space. The algorithm has a well-balanced exploration and exploitation ability.
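A heavily simplified Python sketch of this three-phase cycle is given below. The objective function, colony size, abandonment limit and neighbour-generation rule are illustrative assumptions, and the phases are reduced to their simplest form rather than reproducing Karaboga's full formulation.

import random

DIM, SOURCES, LIMIT, CYCLES = 2, 10, 20, 100

def objective(x):
    """Example function to minimise."""
    return sum(v * v for v in x)

def random_source():
    return [random.uniform(-5, 5) for _ in range(DIM)]

def neighbour(x, other):
    """Local search: perturb one coordinate of x using another food source."""
    j = random.randrange(DIM)
    y = x[:]
    y[j] += random.uniform(-1, 1) * (x[j] - other[j])
    return y

sources = [random_source() for _ in range(SOURCES)]
trials = [0] * SOURCES   # how long each source has gone without improvement

for _ in range(CYCLES):
    # employed-bee phase: each source gets one deterministic local search;
    # onlooker-bee phase: better sources are selected probabilistically for extra searches
    fitness = [1.0 / (1.0 + objective(s)) for s in sources]
    onlooker_picks = random.choices(range(SOURCES), weights=fitness, k=SOURCES)
    for i in list(range(SOURCES)) + onlooker_picks:
        candidate = neighbour(sources[i], random.choice(sources))
        if objective(candidate) < objective(sources[i]):
            sources[i], trials[i] = candidate, 0
        else:
            trials[i] += 1
    # scout-bee phase: abandon exhausted sources and explore new regions
    for i in range(SOURCES):
        if trials[i] > LIMIT:
            sources[i], trials[i] = random_source(), 0

best = min(sources, key=objective)
print("best:", best, "value:", objective(best))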

    Differential evolution

Differential evolution is similar to genetic algorithms and pattern search. It uses multiple agents, or search vectors, to carry out the search. It has mutation and crossover, but does not use the global best solution in its search equations, in contrast with particle swarm optimization.
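A minimal Python sketch of the classic DE/rand/1/bin scheme illustrates this use of vector differences. The objective function, population size, scale factor F and crossover rate CR are illustrative assumptions rather than values specified in the text.

import random

DIM, NP, F, CR, GENERATIONS = 2, 20, 0.8, 0.9, 200

def objective(x):
    """Example function to minimise."""
    return sum(v * v for v in x)

population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for _ in range(GENERATIONS):
    for i in range(NP):
        # mutation: build a donor vector from the scaled difference of two other vectors
        a, b, c = random.sample([x for j, x in enumerate(population) if j != i], 3)
        donor = [a[d] + F * (b[d] - c[d]) for d in range(DIM)]
        # binomial crossover between the target and donor vectors
        j_rand = random.randrange(DIM)
        trial = [donor[d] if (random.random() < CR or d == j_rand) else population[i][d]
                 for d in range(DIM)]
        # greedy selection: keep the trial vector only if it is at least as good
        if objective(trial) <= objective(population[i]):
            population[i] = trial

best = min(population, key=objective)
print("best:", best, "value:", objective(best))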


    The bees algorithm

The bees algorithm in its basic formulation was created by Pham and his co-workers in 2005 and further refined in the following years.[7] Modelled on the foraging behaviour of honey bees, the algorithm combines global explorative search with local exploitative search. A small number of artificial bees (scouts) explores the solution space (environment) randomly for solutions of high fitness (highly profitable food sources), whilst the bulk of the population searches (harvests) the neighbourhood of the fittest solutions looking for the fitness optimum. A deterministic recruitment procedure which simulates the waggle dance of biological bees is used to communicate the scouts' findings to the foragers, and to distribute the foragers depending on the fitness of the neighbourhoods selected for local search. Once the search in the neighbourhood of a solution stagnates, the local fitness optimum is considered to be found, and the site is abandoned. In summary, the bees algorithm concurrently searches the most promising regions of the solution space, whilst continuously sampling it in search of new favourable regions.

    Artificial immune systems

Artificial immune systems (AIS) concern the application of the abstract structure and function of the immune system to computational systems, and investigate how such systems can be applied to solving computational problems from mathematics, engineering, and information technology. AIS is a sub-field of biologically inspired computing and natural computation, with interests in machine learning, and belongs to the broader field of artificial intelligence.

    Grey wolf optimizer

The grey wolf optimizer (GWO) algorithm, proposed by Mirjalili et al. in 2014,[8] mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves (alpha, beta, delta, and omega) are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are implemented to perform optimization.

    Bat algorithm

Bat algorithm (BA) is a swarm-intelligence-based algorithm inspired by the echolocation behavior of microbats. BA uses frequency tuning and an automatic balance between exploration and exploitation, controlled by loudness and pulse emission rates.

    Gravitational search algorithm

Gravitational search algorithm (GSA) is based on the law of gravity and the notion of mass interactions. The GSA algorithm uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In GSA, there is an isolated system of masses. Using the gravitational force, every mass in the system can see the situation of the other masses; the gravitational force is therefore a way of transferring information between different masses (Rashedi, Nezamabadi-pour and Saryazdi 2009).[10] In GSA, agents are considered as objects


and their performance is measured by their masses. All these objects attract each other by a gravity force, and this force causes a global movement of all objects towards the objects with heavier masses. The heavy masses correspond to good solutions of the problem. The position of an agent corresponds to a solution of the problem, and its mass is determined using a fitness function. Over time, masses are attracted by the heaviest mass, which ideally presents an optimum solution in the search space. The GSA can be considered as an isolated system of masses; it is like a small artificial world of masses obeying the Newtonian laws of gravitation and motion (Rashedi, Nezamabadi-pour and Saryazdi 2009). A multi-objective variant of GSA, called Non-dominated Sorting Gravitational Search Algorithm (NSGSA), was proposed by Nobahari and Nikusokhan in 2011.
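The mass/force mechanics described above can be sketched in a few lines of Python. This is only a rough, simplified illustration under stated assumptions: the objective function, number of agents, and the decay schedule for the gravitational constant G are invented for the example, and several refinements of the original GSA (such as the Kbest set) are omitted.

import math
import random

DIM, N, ITERATIONS, G0 = 2, 15, 100, 100.0

def objective(x):
    """Example function to minimise."""
    return sum(v * v for v in x)

positions = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
velocities = [[0.0] * DIM for _ in range(N)]

for t in range(ITERATIONS):
    G = G0 * math.exp(-20.0 * t / ITERATIONS)          # decaying gravitational constant
    fit = [objective(p) for p in positions]
    best, worst = min(fit), max(fit)
    # masses: a better (lower) objective value gives a heavier mass
    raw = [(worst - f) / (worst - best + 1e-12) for f in fit]
    total = sum(raw) + 1e-12
    masses = [r / total for r in raw]
    for i in range(N):
        accel = [0.0] * DIM
        for j in range(N):
            if i == j:
                continue
            dist = math.dist(positions[i], positions[j]) + 1e-12
            for d in range(DIM):
                # attraction of agent i towards agent j, weighted by a random factor
                accel[d] += random.random() * G * masses[j] * \
                            (positions[j][d] - positions[i][d]) / dist
        for d in range(DIM):
            velocities[i][d] = random.random() * velocities[i][d] + accel[d]
            positions[i][d] += velocities[i][d]

print("best value found:", min(objective(p) for p in positions))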

    Altruism algorithm

    Researchers in Switzerland have developed an algorithm based on Hamilton's rule of kin

    selection. This algorithm shows how altruism in a swarm of entities can, over time, evolve and

    result in more effective swarm behaviour.

    Glowworm swarm optimization

Glowworm swarm optimization (GSO) was introduced by Krishnanand and Ghose in 2005 for the simultaneous computation of multiple optima of multimodal functions.[14][15][16][17] The algorithm shares a few features with some better-known algorithms, such as ant colony optimization and particle swarm optimization, but with several significant differences. The agents in GSO are thought of as glowworms that carry a luminescence quantity called luciferin along with them. The glowworms encode the fitness of their current locations, evaluated using the objective function, into a luciferin value that they broadcast to their neighbors. A glowworm identifies its neighbors and computes its movements by exploiting an adaptive neighborhood, which is bounded above by its sensor range. Each glowworm selects, using a probabilistic mechanism, a neighbor that has a luciferin value higher than its own and moves toward it. These movements, based only on local information and selective neighbor interactions, enable the swarm of glowworms to partition into disjoint subgroups that converge on multiple optima of a given multimodal function.

    Self-propelled particles

Self-propelled particles (SPP), also referred to as the Vicsek model, was introduced in 1995 by Vicsek et al. as a special case of the boids model introduced in 1986 by Reynolds. A swarm is modelled in SPP by a collection of particles that move with a constant speed but respond to a random perturbation by adopting at each time increment the average direction of motion of the other particles in their local neighbourhood. SPP models predict that swarming animals share certain properties at the group level, regardless of the type of animals in the swarm. Swarming systems give rise to emergent behaviours which occur at many different scales, some of which


    are turning out to be both universal and robust. It has become a challenge in theoretical physics

    to find minimal statistical models that capture these behaviours.

    Stochastic diffusion search

Stochastic diffusion search (SDS) is an agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial functions. Each agent maintains a hypothesis, which is iteratively tested by evaluating a randomly selected partial objective function parameterized by the agent's current hypothesis. In the standard version of SDS such partial function evaluations are binary, resulting in each agent becoming active or inactive. Information on hypotheses is diffused across the population via inter-agent communication. Unlike the stigmergic communication used in ACO, in SDS agents communicate hypotheses via a one-to-one communication strategy analogous to the tandem running procedure observed in Leptothorax acervorum.[27] A positive feedback mechanism ensures that, over time, a population of agents stabilizes around the global-best solution. SDS is both an efficient and robust global search and optimization algorithm, which has been extensively described mathematically. Recent work has involved merging the global search properties of SDS with other swarm intelligence algorithms.
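The test-and-diffuse cycle can be illustrated with a classic toy task: finding where a short pattern occurs inside a longer string, where each character comparison acts as one binary partial evaluation. The Python sketch below is only a rough illustration of the standard scheme; the search strings, agent count and iteration count are invented for the example.

import random

# toy problem: find where `pattern` occurs inside `text` (invented example data)
text = "xxxhelxloxxxhelloxxxx"
pattern = "hello"
AGENTS, ITERATIONS = 30, 50

# each agent holds a hypothesis: a candidate start position in the text
hypotheses = [random.randrange(len(text) - len(pattern) + 1) for _ in range(AGENTS)]
active = [False] * AGENTS

for _ in range(ITERATIONS):
    # test phase: each agent checks one randomly chosen character of the pattern
    # (a binary partial evaluation that makes the agent active or inactive)
    for i in range(AGENTS):
        j = random.randrange(len(pattern))
        active[i] = text[hypotheses[i] + j] == pattern[j]
    # diffusion phase: an inactive agent contacts a random agent and copies its
    # hypothesis if that agent is active; otherwise it picks a fresh hypothesis
    for i in range(AGENTS):
        if not active[i]:
            other = random.randrange(AGENTS)
            if active[other]:
                hypotheses[i] = hypotheses[other]
            else:
                hypotheses[i] = random.randrange(len(text) - len(pattern) + 1)

# the largest cluster of agents indicates the best-supported position
best = max(set(hypotheses), key=hypotheses.count)
print("most supported start position:", best, "->", text[best:best + len(pattern)])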

    Multi-swarm optimization

    Multi-swarm optimization is a variant of particle swarm optimization (PSO) based on the use of

    multiple sub-swarms instead of one (standard) swarm. The general approach in multi-swarm

    optimization is that each sub-swarm focuses on a specific region while a specific diversification

method decides where and when to launch the sub-swarms. The multi-swarm framework is especially suited to the optimization of multi-modal problems, where multiple (local) optima exist.
