
Magnetic Particle Swarm Optimization with Estimation of Distribution

Paulo S. Prampero
Federal Institute of Science, Technology and Education of São Paulo – Campus Salto, Salto, Brazil
Department of Computer Engineering and Industrial Automation, FEEC – University of Campinas (Unicamp), Campinas, Brazil
[email protected]

Romis Attux
Department of Computer Engineering and Industrial Automation, FEEC – University of Campinas (Unicamp), Campinas, Brazil
[email protected]

Abstract— The magnetic particle swarm (MPS) algorithm is an approach based on the phenomenon of dipole repulsion by a magnetic field. In this paper, a new version of this algorithm, which employs an estimation of distribution step as an additional search heuristic, is proposed and applied to seven well-known test functions for 30-dimensional, 120-dimensional and 1000-dimensional search spaces. The performance of the new approach is compared to that of benchmark differential evolution and estimation of distribution methods, and the results reveal its potential in terms of multimodal search in the context of complex optimization tasks.

Index Terms – particle swarm optimization, magnetic particle swarm, global search methods, differential evolution, estimation of distribution algorithms.

I. INTRODUCTION

Real-valued optimization problems arise as crucial aspects of several key theoretical formulations and practical applications. Many algorithms have been proposed to solve these problems, from classical approaches based on the derivatives of the cost function of interest [22] to metaheuristics inspired by physical and biological phenomena [21].

Insofar as global search is concerned, all of these methods face an increasingly complex task as the number of parameters to be optimized – i.e. the dimension of the underlying search space – grows. Ultimately, this leads to the idea of large-scale optimization, a research field in its own right [23], and one whose practical relevance only increases.

In this work, we propose a new optimization metaheuristic – the magnetic particle swarm (MPS) algorithm with estimation of distribution – which can be understood as an extension of the original MPS algorithm, first proposed in [24]. The MPS approach corresponds, in simple terms, to a version of the particle swarm optimization (PSO) paradigm inspired by a model of the effect of repulsion by a magnetic field. The introduction of the notion of estimation of distribution [7] corresponds to an attempt to incorporate specific information concerning the problem to be solved, which can be decisive in allowing a more effective exploration of high-dimensional search spaces. To support this assumption, the new approach is tested on search problems ranging from 30 to 1000 dimensions, and the obtained results are compared with those of a representative set of metaheuristics, which includes differential evolution, an estimation of distribution algorithm and a combination of both.

The work is structured as follows. In section II, we describe the general idea of differential evolution. Section III presents the estimation of distribution algorithm, which is used as a heuristic in the proposed method. In section IV, we briefly explain a combination of DE and EDA. The magnetic particle swarm with ED is analyzed in section V. Section VI presents the obtained results, and section VII closes the work with our conclusions.

II. DIFFERENTIAL EVOLUTION

The differential evolution (DE) approach, developed by Storn and Price in 1995 [2], is a stochastic, population-based search method. The algorithm extracts differential information – distance and direction – from the current population to guide its search.

The algorithm starts by generating random points in the search space, each point being an individual of the associated population [4]. The size of each individual is the dimension of the problem to be solved, and the number of individuals is a free parameter of the algorithm. For each individual, three others are randomly selected, one of which, specifically, must have higher fitness. Crossover is modeled as described in eq. (1) [4], and its occurrence is followed by a mutation, which combines the new individual and the current one in accordance with a certain probability of recombination, another parameter of the algorithm.

After crossover and mutation, if the fitness of the new individual is better than the fitness of the current one, the new individual is passed to the next generation; otherwise, the current individual is kept.

The algorithm was implemented as shown in the following pseudo-code.

Algorithm 1: Differential Evolution

1) generate N points as initial population
2) while stopping condition is not true do
3)   for each point x do
3.1)    randomly select a point x̄ such that f(x̄) is at least as good as f(x)
3.2)    randomly select two points, xa and xb
3.3)    for each dimension j do
          if rand() > pr
            P_j = x_j
          else
            P_j = x̄_j + F (x_{a,j} − x_{b,j})        (1)
3.4)    if f(P) is better than f(x)
          add P to the new population
        else
          add x to the new population

In the above code, pr is the probability of recombination, N is the population size and F is the scaling factor, which controls the amplification of differential variations.

According to [5], eq. (1) improves the equilibrium between exploitation and exploration.
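To make the procedure concrete, the following Python sketch implements Algorithm 1 for minimization. The function and parameter names are ours, and the trial-vector expression is the standard DE formula that eq. (1) appears to denote; treat it as a minimal illustration rather than the authors' exact implementation.

```python
import numpy as np

def differential_evolution(f, bounds, N=30, pr=0.5, F=0.6, max_evals=100000):
    """Minimize f over a box; a sketch of Algorithm 1 (names are ours)."""
    n = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = lo + np.random.rand(N, n) * (hi - lo)            # 1) initial population
    cost = np.apply_along_axis(f, 1, pop)
    evals = N
    while evals < max_evals:                               # 2) stopping condition
        for i in range(N):                                 # 3) for each point
            better = np.where(cost <= cost[i])[0]          # 3.1) at least as fit
            xbar = pop[np.random.choice(better)]
            a, b = pop[np.random.choice(N, 2, replace=False)]  # 3.2) two random points
            trial = pop[i].copy()
            mask = np.random.rand(n) <= pr                 # 3.3) per-dimension crossover
            trial[mask] = xbar[mask] + F * (a[mask] - b[mask])  # eq. (1)
            trial = np.clip(trial, lo, hi)
            c = f(trial)
            evals += 1
            if c < cost[i]:                                # 3.4) greedy selection
                pop[i], cost[i] = trial, c
    return pop[np.argmin(cost)], cost.min()
```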

III. ESTIMATION OF DISTRIBUTION ALGORITHM

Estimation of Distribution Algorithms (EDAs) are population-based algorithms that employ operators based on repeated estimation of, and sampling from, a probabilistic model. In the context of an algorithm of this kind, which is also known as a Probabilistic Model Building Genetic Algorithm (PMBGA) [6], each individual is a candidate solution, i.e. a point in a search space, and the modus operandi is based on the idea of identifying important patterns in the population of promising solutions [7].

At first, a random set of points is generated to form an initial population. These points are evaluated using the objective function, which indicates the adequacy of each solution to the problem at hand. Based on this evaluation, a subset of promising points is selected, whose role is to allow an efficient model of such solutions to be built and used to guide the next stages of search-space sampling. In the following pseudo-code, we present the adopted implementation of an EDA.

Algorithm 2: Estimation of Distribution Algorithm

1) generate N individuals randomly in the search space
2) while stopping condition is not true do
2.1)   evaluate population
2.2)   select M promising solutions
2.3)   build a probabilistic model from the M selected individuals
2.4)   generate new solutions according to the probabilistic model

In the above code, N is the population size and M is the number of individuals selected to create the probabilistic model.
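A minimal Gaussian EDA in the spirit of Algorithm 2 is sketched below in Python. The per-dimension normal model matches the model the paper adopts in Section VI; the names and default values of N and M are ours (in the paper's own EDA experiments, all individuals are selected, i.e. M = N).

```python
import numpy as np

def eda(f, bounds, N=30, M=15, max_evals=100000):
    """A sketch of Algorithm 2 with a per-dimension Gaussian model."""
    n = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = lo + np.random.rand(N, n) * (hi - lo)        # 1) random initial population
    evals = 0
    best, best_cost = None, np.inf
    while evals < max_evals:                            # 2) stopping condition
        cost = np.apply_along_axis(f, 1, pop)           # 2.1) evaluate population
        evals += N
        order = np.argsort(cost)
        if cost[order[0]] < best_cost:
            best, best_cost = pop[order[0]].copy(), cost[order[0]]
        elite = pop[order[:M]]                          # 2.2) M promising solutions
        mu = elite.mean(axis=0)                         # 2.3) build the model
        sigma = elite.std(axis=0) + 1e-12
        pop = np.clip(np.random.normal(mu, sigma, size=(N, n)), lo, hi)  # 2.4) sample
    return best, best_cost
```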

Naturally, the question of how to select the M individuals is of the highest relevance, as is the question of which probabilistic model to use for a certain kind of problem. These points have led to several proposals, such as the following:

Factorized Distribution Algorithm (FDA) [8]

Bayesian Optimization Algorithm (BOA) [9]

Univariate Marginal Distribution Algorithm (UMDA) [10]

Population-Based Incremental Learning (PBIL) [11]

Mutual Information Maximization for Input Clustering (MIMIC) [12]

One criterion that could be followed in the choice of a suitable method is to weigh the complexity of the probabilistic model against the computational cost of storing and learning it. Both issues are also related to the problem of dimensionality and to the type of representation. Another point is whether there is any prior knowledge about the problem structure, and what kind of probabilistic model is most adequate to represent this knowledge.

IV. DE / EDA ALGORITHM

The DE / EDA algorithm is a combination of the two presented techniques, in an attempt to make use of the good features of both. Essentially, this algorithm combines the global information obtained by the EDA with the differential information employed by the DE to create promising solutions [4]. The following pseudo-code presents our implementation of this algorithm.

Algorithm 3: DE / EDA

1) generate N individuals randomly in the search space
2) while stopping condition is not true do
2.1)   evaluate population
2.2)   select M promising solutions
2.3)   build a probabilistic model Prob(µ,σ) from the M selected individuals
2.4)   for each individual x do
2.4.1)    randomly select a point x̄ such that f(x̄) is at least as good as f(x)
2.4.2)    randomly select two points, xa and xb
2.4.3)    for each dimension j do
            if rand() > L
              P_j = x̄_j + F (x_{a,j} − x_{b,j})
            else
              sample P_j from Prob(µ_j, σ_j)        (2)
2.5)   if f(P) is better than f(x)
         add P to the new population
       else
         add x to the new population

In the above code, Prob(µ,σ) is the probability distribution model and L is used to balance the contributions from the global information and the differential information.
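The following Python sketch illustrates one plausible reading of Algorithm 3: each coordinate of the trial point comes either from the DE difference step or from the per-dimension Gaussian model Prob(µ,σ), with L balancing the two. The names are ours, and the branch assignment is an assumption consistent with the text.

```python
import numpy as np

def de_eda(f, bounds, N=30, M=15, L=0.5, F=0.6, max_evals=100000):
    """Sketch of Algorithm 3: DE differential information mixed with
    samples from a Gaussian model of the M best individuals."""
    n = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = lo + np.random.rand(N, n) * (hi - lo)
    cost = np.apply_along_axis(f, 1, pop)
    evals = N
    while evals < max_evals:
        order = np.argsort(cost)
        elite = pop[order[:M]]                          # 2.2) promising solutions
        mu = elite.mean(axis=0)                         # 2.3) Prob(mu, sigma)
        sigma = elite.std(axis=0) + 1e-12
        for i in range(N):                              # 2.4)
            better = np.where(cost <= cost[i])[0]
            xbar = pop[np.random.choice(better)]        # 2.4.1)
            a, b = pop[np.random.choice(N, 2, replace=False)]  # 2.4.2)
            trial = np.empty(n)
            for j in range(n):                          # 2.4.3)
                if np.random.rand() > L:                # differential information
                    trial[j] = xbar[j] + F * (a[j] - b[j])
                else:                                   # global (model) information
                    trial[j] = np.random.normal(mu[j], sigma[j])
            trial = np.clip(trial, lo, hi)
            c = f(trial)
            evals += 1
            if c < cost[i]:                             # 2.5) greedy replacement
                pop[i], cost[i] = trial, c
    return pop[np.argmin(cost)], cost.min()
```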

V. MAGNETIC PARTICLE SWARM OPTIMIZATION WITH ESTIMATION OF DISTRIBUTION

Inspired by the idea of combining probabilistic models with information brought by a different metaheuristic, we propose a new optimization method, the Magnetic Particle Swarm Algorithm with Estimation of Distribution, which is an extension of the so-called Magnetic Particle Swarm Algorithm (MPS), proposed by the authors in [24]. Note that the proposal is not made in the “naïve” spirit of building a sort of “ultimate search method”, which would be in conflict with the essence of no free lunch theorems [14], but aims at enhancing the potential for efficient exploration associated with the MPS, which, in [24], already showed a promising performance in several types of complex search problems.

The MPS can be defined as an algorithm based on the framework of Particle Swarm Optimization (PSO) and also on the behavior of dipoles in a magnetic field. In [1], it is stated that a particle swarm algorithm should include the following behaviors:

Assess – the particles (individuals) should have the ability to sense the environment and to estimate their own behavior;

Compare – individuals should use each other as a comparative reference;

Mimic – individuals should imitate one another, keeping in mind that imitation is central to human social organization and important for the acquisition and maintenance of mental abilities.

These fundamental behaviors can be found in the original PSO approach, developed by Kennedy and Eberhart in 1995 [13]. The method employs a population of particles - each one corresponding to a potential solution to a given problem – which are capable of moving in the search space and of “recalling” favorable positions visited by themselves and by their neighbors.

On the other hand, the magnetic particle swarm algorithm [24] corresponds, in essence, to a method that uses elements of the behavior of dipoles¹ in a magnetic field to establish search mechanisms.

¹ We will use the term “particle” while referring to a magnetic dipole.

A. The Algorithm

The algorithm can be initialized by uniformly distributing the particles in the search space. Afterwards, the maximum and minimum radii of each particle's repulsion region are calculated based on the number of particles and the size of the search space. The maximum radius can be calculated as the size of the search space divided by twice the number of particles; in this way, it is possible to place all particles with their maximum radii inside the search space without contact between their repulsion regions. The minimum radius is set throughout this work as one thousandth of the maximum radius. This is arbitrary, but preliminary tests showed that the performance of the algorithm is not particularly sensitive to this choice.

The repulsion region prevents the particles from getting excessively close to one another. Nothing happens when there is an intersection only between repulsion regions but, when the algorithm detects that a particle has invaded another's repulsion region, the worse particle (in terms of the cost function) is removed from it. The size of the repulsion region of each particle is calculated during the algorithm execution, being bounded by the maximum and minimum radii. This size also depends on the values of the cost function, being increased or decreased depending on the positive or negative evolution of the cost associated with the particle in question.

At first, the particle moves randomly in the search space until it finds a better point. When this happens, the particle stores the found solution and the direction along which it has moved; in the next step, the algorithm generates the new point on this line. If the new point is worse than the current one, two perpendicular points, generated inside the repulsion region, are studied. The best perpendicular point is adopted if it is better than the current point, and, in this case, the perpendicular line is also adopted. If no point along the perpendicular line is better than the current one, the particle loses its direction and the new point is randomly generated within the repulsion region. In this case, the repulsion region decreases until the algorithm finds a point better than the current one.

If the particle is unable to find better points, the radius of its repulsion region is decreased by 1% per cycle until the minimum value is reached, which accounts for a refinement of the good solutions obtained. As mentioned earlier, if a particle enters another's region, the particle with the worse cost value is taken away along the line of the better particle. The size of this position change should be enough to place the particle outside the repulsion region.

However, if the better particle has no direction to follow, the worse particle uses its own direction to leave the repulsion region, with a step of sufficient size. Finally, if neither particle has an associated direction, the worse particle is sent towards the best particle.

The novel MPS, which is combined with an estimation of distribution step, differs from the standard MPS in that, every T iterations, K particles are created using a probabilistic model. The mean and standard deviation of each dimension are calculated using all current particles, and K new particles are generated from a normal distribution. Algorithm 4 presents the main structure of the proposed algorithm.


Algorithm 4: Magnetic Particle Swarm Algorithm (with estimation of distribution)

initialize()
while termination criteria are not satisfied do
  Move particles()
  Verify confronts()
  if (cycle is a multiple of T)
    Estimation of Distribution()
  end if
end while

Let us now consider in more detail each function present in the above pseudo-code.

1) Initialize()

This function distributes the particles in the space of feasible solutions in accordance with a uniform distribution, and calculates the upper and the lower bounds of the repulsion regions. The upper bound of the repulsion region (ubrr) is

$$ubrr = \frac{1}{n}\sum_{k=1}^{n}\frac{u_k - l_k}{2m}, \qquad (3)$$

where $u_k$ and $l_k$ are the upper and lower bounds of the search space in dimension $k$, $n$ is the number of dimensions, $k = 1, \dots, n$, and $m$ is the number of particles. The lower bound of the repulsion region (lbrr) is

$$lbrr = \frac{ubrr}{1000}. \qquad (4)$$
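A small Python sketch of this initialization follows, under our reading of eqs. (3) and (4); the averaging over dimensions in eq. (3) is an assumption, since the text only states that the maximum radius is the size of the search space divided by twice the number of particles.

```python
import numpy as np

def initialize(bounds, m):
    """Uniform particle placement plus repulsion-radius bounds, cf. eqs. (3)-(4).
    'bounds' is a list of (l_k, u_k) pairs; all names are ours."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    particles = lo + np.random.rand(m, len(bounds)) * (hi - lo)
    ubrr = np.mean(hi - lo) / (2 * m)   # search-space size / twice the particles
    lbrr = ubrr / 1000.0                # "one thousandth of the maximum radius"
    return particles, ubrr, lbrr
```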

2) Move particles()

for each particle do
  if (particle has a direction)
    generate one point in this direction
    if (the new point is better)
      store this point
      increase the repulsion region by 10%, limited by ubrr
    else
      generate two perpendicular points
      if (the best of them is better than the current point)
        store the best of them and its direction
      else
        lose direction
      end if
    end if
  else
    generate three points using eqs. (5), (6) and (7)
    if (the best generated point is better than the current point)
      store the best generated point and its direction
    else if (the particle never had a direction)
      store the best generated point
    else
      decrease the repulsion region by 1%, limited by lbrr
    end if
  end if
end for

At the beginning, the particle does not have a defined direction; therefore, it moves in a random way in the search space. When a point better than the current one is found, it is stored and defines a good direction. In the next step, the first point to be tested is along this line (direction). If the new point is better than the current one, the particle continues to move along this line and the repulsion region is increased. When the particle has a direction, all points are generated inside the particle's repulsion region.

When the line / direction is not good, two points on a perpendicular line are tested. If the best of these points is better than the current one, the new direction is stored. In problems with more than two dimensions, the perpendicular is generated on a randomly selected plane.

When the particle “does not know how to move”, its position is randomly modified in accordance with the following expressions:

(5)

(6)

(7)

where $k = 1,\dots,n$ and $i = 1,\dots,m$, with $n$ and $m$ as in equation (3). $RepRegion_i$ is the repulsion region of particle $i$, and $\lambda$ is a random number belonging to $[-1,1]$, drawn for each dimension $k$.

The difference between equations (5) and (7) is that, in (5), a random number is drawn for each dimension, while, in (7), the same random number is used for all dimensions. If the best point of the group Paux1, Paux2 and Paux3 is better than the current point, the algorithm accepts it and seeks a new line to define the movement; if this is not the case, the repulsion region has its size decreased.
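Only the verbal descriptions of eqs. (5) and (7) survive in the text, so the Python sketch below should be read accordingly: Paux1 uses a fresh λ per dimension (eq. (5)) and Paux3 a single λ shared by all dimensions (eq. (7)); the form of eq. (6) is not recoverable, and Paux2 is sketched here as a mirrored step purely as an assumption.

```python
import numpy as np

def random_candidates(x, rep_region):
    """Candidate points for a particle with no direction, cf. eqs. (5)-(7).
    'x' is the particle position, 'rep_region' its repulsion-region size."""
    n = len(x)
    lam = np.random.uniform(-1.0, 1.0, size=n)
    paux1 = x + lam * rep_region                            # eq. (5): per-dimension lambda
    paux2 = x - lam * rep_region                            # eq. (6): assumed mirrored step
    paux3 = x + np.random.uniform(-1.0, 1.0) * rep_region   # eq. (7): shared lambda
    return paux1, paux2, paux3
```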

3) Verify confronts ()

for each particle pair do
  if (distance between the pair is less than the largest repulsion region)
    if (the better particle has a direction)
      move the worse particle in this direction
    else if (the worse particle has a direction)
      move it in this direction
    else
      move the worse particle in the direction of the best particle
    end if
  end if
end for

Essentially, this function is responsible for verifying whether a particle invades the repulsion region of another. If this happens, the worse of the two particles is removed from this region. If the better of them has a direction to move along, the worse one is forced towards this direction. If this is not so, but the worse particle has a defined direction, then it follows it with a step large enough to place it outside the repulsion region. Finally, if neither of them has a defined direction, the worse one follows the direction of the globally best particle.
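A Python sketch of this conflict-resolution rule follows; all names are ours, and the fixed step length is a simplification of the text's "step large enough to place it outside the repulsion region".

```python
import numpy as np

def verify_confronts(pos, cost, direction, rep_radius, best_idx):
    """Push the worse particle of each conflicting pair out of the repulsion
    region, choosing the escape direction as described in the text."""
    m = len(pos)
    for i in range(m):
        for j in range(i + 1, m):
            r = max(rep_radius[i], rep_radius[j])
            if np.linalg.norm(pos[i] - pos[j]) < r:
                w, b = (i, j) if cost[i] > cost[j] else (j, i)   # worse, better
                if direction[b] is not None:
                    step = direction[b]              # follow the better particle's line
                elif direction[w] is not None:
                    step = direction[w]              # else the worse particle's own line
                else:
                    step = pos[best_idx] - pos[w]    # else head towards the best particle
                unit = step / (np.linalg.norm(step) + 1e-12)
                pos[w] = pos[w] + unit * r           # a step sized to clear the region
    return pos
```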

4) Estimation of Distribution()

using all particles, calculate the mean and standard deviation of each dimension
generate K new particles with a normal distribution
include these K new particles in the population
select the M best particles

This function is the essence of the new proposal: an estimation of distribution step that is periodically executed with the aim of improving the search for promising areas. Each execution of this module generates K individuals that replace the K worst particles.
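The periodic ED step itself is compact; below is a sketch under the paper's own description (per-dimension mean and standard deviation, K newcomers replacing the K worst particles; note that Section VI computes the statistics from the best five particles and uses K = 1).

```python
import numpy as np

def estimation_of_distribution(pos, cost, f, K=1):
    """Fit a per-dimension normal model and replace the K worst particles."""
    mu = pos.mean(axis=0)                  # mean of each dimension
    sigma = pos.std(axis=0) + 1e-12        # standard deviation of each dimension
    new = np.random.normal(mu, sigma, size=(K, pos.shape[1]))
    new_cost = np.apply_along_axis(f, 1, new)
    worst = np.argsort(cost)[-K:]          # indices of the K worst particles
    pos[worst] = new
    cost[worst] = new_cost
    return pos, cost
```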

VI. EXPERIMENTS AND RESULTS

In this section, we present the results obtained from several tests of the new proposal and of the selected benchmark techniques, in the context of seven different cost functions and distinct search space sizes.

A. Algorithms

In order to establish a representative comparison with well-established methods, we chose the following algorithms as benchmarks: DE, EDA and DE/EDA, which were discussed, respectively, in sections II, III and IV. Two versions of the proposed algorithm were used, one with the Verify confronts() module and the other without it, which allows an investigation of the relevance of this module for problems with different features.

B. Cost Functions

All algorithms were tested in the context of seven optimization problems defined by cost functions with different characteristics, all of which are well-known in the literature [22]; a code sketch of the seven functions is given after the list.

1) Sphere Function

$$f(\mathbf{x}) = \sum_{k=1}^{n} x_k^2, \qquad (8)$$

where $\mathbf{x} \in [-100,100]^n$. The global minimum is located at $\mathbf{x}^* = [0,0,\dots,0]$.

2) Rosenbrock Function

$$f(\mathbf{x}) = \sum_{k=1}^{n-1}\left[100\,(x_{k+1} - x_k^2)^2 + (x_k - 1)^2\right], \qquad (9)$$

where $\mathbf{x} \in [-30,30]^n$ and the global optimum is $\mathbf{x}^* = [1,1,\dots,1]$.

3) Griewank Function

$$f(\mathbf{x}) = \frac{1}{4000}\sum_{k=1}^{n} x_k^2 - \prod_{k=1}^{n}\cos\!\left(\frac{x_k}{\sqrt{k}}\right) + 1, \qquad (10)$$

where $\mathbf{x} \in [-600,600]^n$. The global optimum is $\mathbf{x}^* = [0,0,\dots,0]$.

4) Rastrigin Function

$$f(\mathbf{x}) = \sum_{k=1}^{n}\left[x_k^2 - 10\cos(2\pi x_k) + 10\right], \qquad (11)$$

where $\mathbf{x} \in [-5.12, 5.12]^n$ and the global optimum is $\mathbf{x}^* = [0,0,\dots,0]$.

5) Salomon Function

$$f(\mathbf{x}) = 1 - \cos\!\left(2\pi\sqrt{\sum_{k=1}^{n} x_k^2}\,\right) + 0.1\sqrt{\sum_{k=1}^{n} x_k^2}, \qquad (12)$$

where $\mathbf{x} \in [-100,100]^n$ and the optimum is $\mathbf{x}^* = [0,0,\dots,0]$.

6) Schwefel Function

$$f(\mathbf{x}) = -\sum_{k=1}^{n} x_k \sin\!\left(\sqrt{|x_k|}\right), \qquad (13)$$

where $\mathbf{x} \in [-500,500]^n$ and the global optimum is $\mathbf{x}^* = [420.97,\dots,420.97]$.

7) Levy Function

$$f(\mathbf{x}) = \sin^2(\pi w_1) + \sum_{k=1}^{n-1}(w_k - 1)^2\left[1 + 10\sin^2(\pi w_k + 1)\right] + (w_n - 1)^2\left[1 + \sin^2(2\pi w_n)\right], \qquad (14)$$

where $w_k = 1 + \frac{x_k - 1}{4}$, $\mathbf{x} \in [-10,10]^n$ and the global optimum is $\mathbf{x}^* = [1,1,\dots,1]$.
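For reference, the textbook forms of eqs. (8)-(14) given above translate directly into Python. These are the standard definitions consistent with the stated domains and optima, though the exact constants used in [22] may differ in minor details.

```python
import numpy as np

def sphere(x):                                                       # eq. (8)
    return np.sum(x**2)

def rosenbrock(x):                                                   # eq. (9)
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2)

def griewank(x):                                                     # eq. (10)
    k = np.arange(1, len(x) + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(k))) + 1

def rastrigin(x):                                                    # eq. (11)
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def salomon(x):                                                      # eq. (12)
    r = np.sqrt(np.sum(x**2))
    return 1 - np.cos(2 * np.pi * r) + 0.1 * r

def schwefel(x):                                                     # eq. (13)
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))

def levy(x):                                                         # eq. (14)
    w = 1 + (x - 1) / 4
    return (np.sin(np.pi * w[0])**2
            + np.sum((w[:-1] - 1)**2 * (1 + 10 * np.sin(np.pi * w[:-1] + 1)**2))
            + (w[-1] - 1)**2 * (1 + np.sin(2 * np.pi * w[-1])**2))
```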

C. Results

In the experiments, the performance of all algorithms is compared with that of the magnetic particle swarm with estimation of distribution. To allow a common basis for comparison, all algorithms used 30 particles (individuals), one per dimension, and were allowed 1,000,000 (one million) evaluations of each cost function. We tried to use similar settings for analogous methods and employed several preliminary trials to define all parameters. All algorithms were run 30 times for each case, and the mean and standard deviation of the attained cost were calculated.

The DE algorithm used the following parameters for all functions:

Probability of recombination = 50%;

Scaling factor F=0.6.


The EDA used the following parameters for all functions:

All individuals were selected to generate the new population;

The probability model used was based on the normal distribution.

The DE / EDA algorithm used a normal distribution and the following parameters for all functions:

Balance of contribution L = 50%;

Size of population = 30;

Scaling factor F=0.6.

The two versions of the proposed modified magnetic particle swarm algorithm executed the ED step every 100 cycles, with the mean and standard deviation calculated from the best five particles, and a new particle generated to replace the current worst particle.

TABLE I. MEAN AND STANDARD DEVIATION VALUES OF THE BEST COST VALUE – SEARCH SPACE WITH 30 DIMENSIONS.

Function   | Global    | DE                    | EDA                | DE/EDA            | MPS with ED, without verify confronts | MPS with ED
Sphere     | 0.00      | 0.37 ± 1.26           | 2.86 ± 5.17        | 0.00 ± 0.00       | 0.00 ± 0.00         | 0.00 ± 0.00
Rosenbrock | 0.00      | 258565.89 ± 404585.16 | 525.77 ± 881.36    | 0.62 ± 1.26       | 0.03 ± 0.02         | 0.15 ± 0.70
Griewank   | 0.00      | 0.22 ± 0.48           | 0.10 ± 0.12        | 0.00 ± 0.00       | 0.00 ± 0.00         | 0.00 ± 0.00
Rastrigin  | 0.00      | 98.83 ± 9.24          | 6.25 ± 3.08        | 68.64 ± 15.07     | 0.00 ± 0.00         | 0.00 ± 0.00
Salomon    | 0.00      | 0.22 ± 0.10           | 0.39 ± 0.21        | 0.11 ± 0.05       | 0.06 ± 0.05         | 0.09 ± 0.03
Schwefel   | -12569.49 | -10385.89 ± 316.91    | -10708.41 ± 257.83 | -8846.75 ± 923.57 | -11753.31 ± 1308.24 | -12569.38 ± 0.12
Levy       | 0.00      | 1.52 ± 0.73           | 0.06 ± 0.14        | 0.00 ± 0.00       | 0.00 ± 0.00         | 0.00 ± 0.00

Table I shows that, for a 30-dimensional search space, at least one of the two versions of the Magnetic Particle Swarm Algorithm, with or without verify confronts, reached the best performance for all problems. It is important to mention that the DE/EDA algorithm was also capable of finding the global optimum in three cases. Notice that the conflict verification module was particularly relevant for dealing with Schwefel's function.

In order to provide more insight into the statistical relevance of the obtained results, the following figures present the performance histograms generated for the Rosenbrock function, using the first 10 executions. In particular, these histograms show that the performances of all methods, except for the MPS with ED (and no conflict verification), are compromised by “outliers”. This indicates that the latter approach showed a significant degree of robustness over the trials.
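The histograms in Figures 1 through 5 are built from the per-run best costs; a minimal matplotlib sketch of the procedure is shown below with placeholder data, since the paper's raw per-run values are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data standing in for the best cost attained in each of the
# first 10 runs of one algorithm on Rosenbrock (illustrative values only;
# the paper does not publish the raw per-run numbers).
best_costs = np.random.lognormal(mean=2.0, sigma=2.0, size=10)

plt.hist(best_costs, bins=10)   # isolated right-hand bars reveal "outlier" runs
plt.xlabel("best cost after 1,000,000 evaluations")
plt.ylabel("number of runs")
plt.title("Rosenbrock, 30 dimensions")
plt.show()
```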

Figure 1: Histogram of DE

Figure 2: Histogram of EDA

Figure 3: Histogram of DE/EDA


Figure 4: Histogram of Magnetic Particle Swarm with ED without verify confronts

Figure 5: Histogram of Magnetic Particle Swarm with ED

The following test was more challenging, as it was based on a search space that can be considered to have a high number of dimensions – 120. The population size was changed to 120, following the previous pattern of having one individual per dimension, and no other parameters were changed. The results are shown in Tab. II.

TABLE II. MEAN AND STANDARD DEVIATION VALUES OF THE BEST COST VALUE – SEARCH SPACE WITH 120 DIMENSIONS.

Function   | Global     | DE                       | EDA               | DE/EDA             | MPS with ED, without verify confronts | MPS with ED
Sphere     | 0.00       | 255.60 ± 90.80           | 0.00 ± 0.00       | 0.00 ± 0.00        | 0.00 ± 0.00       | 0.01 ± 0.03
Rosenbrock | 0.00       | 51897591.61 ± 9365980.42 | 160.48 ± 43.93    | 111.05 ± 0.17      | 0.09 ± 0.08       | 4.31 ± 8.34
Griewank   | 0.00       | 3.83 ± 0.91              | 0.00 ± 0.00       | 0.00 ± 0.00        | 0.00 ± 0.00       | 0.00 ± 0.00
Rastrigin  | 0.00       | 1070.91 ± 38.42          | 924.05 ± 20.41    | 943.37 ± 27.16     | 0.00 ± 0.00       | 0.00 ± 0.00
Salomon    | 0.00       | 8.75 ± 0.63              | 0.29 ± 0.00       | 0.49 ± 0.00        | 0.05 ± 0.04       | 0.10 ± 0.00
Schwefel   | -50277.948 | -23363.84 ± 247.18       | -8150.40 ± 337.54 | -14719.93 ± 559.19 | -50215.89 ± 61.45 | -50272.48 ± 17.26
Levy       | 0.00       | 129.14 ± 15.32           | 0.00 ± 0.00       | 0.00 ± 0.00        | 0.00 ± 0.00       | 0.01 ± 0.01

Note that, once more, one of the versions of the proposed algorithm reached the best performance for all problems. Once again, the robustness of the method is indicated by a standard deviation that can be considered small in comparison with that of other methods.

The EDA improved its performance in 120 dimensions relative to 30 dimensions because the number of individuals was also changed. Thus, although the ratio between the number of individuals and the number of dimensions remained the same, the EDA result was better with 120 dimensions due to a better probability estimate: the estimation of the probability model depends on the quality and size of the sample, and, in this case, the larger population of the 120-dimensional setting yielded the better estimate.

Finally, the magnetic particle swarm with estimation of distribution and DE/EDA were tested with the same functions in a 1000-dimensional search space (with 100 individuals in the population). The results, which show once more that the method is capable of reaching a very good average performance without significant fluctuations in terms of the standard deviation, are shown in Tab. III.


TABLE III. MEAN AND STANDARD DEVIATION VALUES OF THE BEST COST VALUE – SEARCH SPACE WITH 1000 DIMENSIONS.

           |            | Magnetic Particle Swarm with Estimation of Distribution |            |            | DE/EDA               |            |
Function   | Global     | Mean ± Std. Dev.    | Worst      | Best       | Mean ± Std. Dev.     | Worst      | Best
Sphere     | 0.00       | 0.00 ± 0.00         | 0.026      | 0.00       | 1059.74 ± 60.73      | 1109.73    | 992.42
Rosenbrock | 0.00       | 12.04 ± 13.85       | 39.68      | 0.42       | 331460.81 ± 42780.31 | 367249.82  | 284078.76
Griewank   | 0.00       | 0.00 ± 0.00         | 0.00       | 0.00       | 11.45 ± 1.10         | 12.72      | 10.73
Rastrigin  | 0.00       | 0.00 ± 0.01         | 0.04       | 0.00       | 10337.03 ± 56.98     | 10372.60   | 10271.31
Salomon    | 0.00       | 0.12 ± 0.06         | 0.30       | 0.10       | 15.55 ± 0.13         | 15.63      | 15.40
Schwefel   | -418982.9  | -418958.34 ± 31.17  | -418912.52 | -418982.88 | -43622.91 ± 1670.93  | -41694.32  | -44636.23
Levy       | 0.00       | 0.06 ± 0.06         | 0.21       | 0.00       | 33.89 ± 2.67         | 36.30      | 31.00

VII. CONCLUSIONS

The magnetic particle swarm (MPS) algorithm has points of contact with particle swarm methods and electromagnetism-inspired approaches. In this work, the original MPS was modified by the introduction of a stage of solution generation via estimation of distribution, in analogy with what was done in the context of the differential evolution framework. The proposed algorithm was tested on a number of well-known cost functions in 30-, 120- and 1000-dimensional search spaces, and its performance can be considered very satisfactory in comparison with that of the chosen benchmark algorithms. In particular, the last of the tested scenarios revealed that the proposal has potential even for large-scale optimization problems, and a further investigation of this point is a concrete perspective for future work. Other perspectives include a more detailed analysis of the proposed method, including comparisons with other algorithms, and a study of its sensitivity to parameter tuning.

REFERENCES

[1] J. Kennedy, R. C. Eberhart and Y. Shi, “Swarm Intelligence,” San Francisco: Morgan Kaufmann / Academic Press, 2001.

[2] R. Storn and K. Price, “Differential Evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces,” Technical Report TR-95-012, International Computer Science Institute, Berkeley, 1995.

[3] A. P. Engelbrecht, “Computational Intelligence: an Introduction,” 2nd ed., John Wiley and Sons, Chichester, England, 2007, pp. 237-260.

[4] J. Sun, Q. Zhang and E. P. K. Tsang, “DE/EDA: A New Evolutionary Algorithm for Global Optimization,” Information Sciences, 2005.

[5] K. Price, “Differential Evolution vs. the Functions of the 2nd ICEO,” in Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC '97), Indianapolis, USA, 13-16 April 1997, pp. 153-157.

[6] M. Pelikan, D. E. Goldberg and F. Lobo, “A survey of optimization by building and using probabilistic models,” Computational Optimization and Applications, 21(1):5-20, 2002.

[7] P. Larrañaga and J. A. Lozano, Eds., “Estimation of Distribution Algorithms,” ser. GENA, Kluwer Academic Publishers, 2002.

[8] H. Mühlenbein, T. Mahnig and A. Ochoa, “Schemata, distributions and graphical models in evolutionary optimization,” Journal of Heuristics, 5(2):213-247, 1999.

[9] M. Pelikan, D. E. Goldberg and E. Cantú-Paz, “BOA: The Bayesian optimization algorithm,” Evolutionary Computation, 8(3):311-340, 2000.

[10] H. Mühlenbein and G. Paaß, “From recombination of genes to the estimation of distributions I. Binary parameters,” Lecture Notes in Computer Science 1141: Parallel Problem Solving from Nature, PPSN IV, pp. 178-187, 1996.

[11] S. Baluja, “Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning,” Tech. Rep. CMU-CS-94-163, Carnegie Mellon University, Pittsburgh, PA, 1994.

[12] J. S. De Bonet, C. L. Isbell and P. Viola, “MIMIC: Finding optima by estimating probability densities,” in Advances in Neural Information Processing Systems, vol. 9, M. C. Mozer, M. I. Jordan and T. Petsche, Eds., The MIT Press, 1997, pp. 424-430.

[13] J. Kennedy and R. Eberhart, “Particle Swarm Optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.

[14] D. H. Wolpert and W. G. Macready, “No Free Lunch Theorems for Optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.

[15] R. Eberhart and Y. Shi, “Particle Swarm Optimization: Developments, Applications and Resources,” IEEE, 2001.

[16] Q. Bai, “Analysis of Particle Swarm Optimization Algorithm,” Computer and Information Science, vol. 3, no. 1, pp. 180-184, 2010.

[17] Y. Shi and R. C. Eberhart, “A modified particle swarm optimizer,” IEEE World Congress on Computational Intelligence, pp. 69-73, 1998.

[18] R. C. Eberhart and Y. Shi, “Comparing inertia weights and constriction factors in Particle Swarm Optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 84-88, 2000.

[19] M. Clerc, “The swarm and the queen: towards a deterministic and adaptive particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, Piscataway, NJ: IEEE Service Center, pp. 1951-1957, 1999.

[20] Z. S. Lu and Z. R. Hou, “Particle Swarm Optimization with Adaptive Mutation,” Acta Electronica Sinica, vol. 32, no. 3, pp. 416-420, 2004.

[21] L. N. de Castro, “Fundamentals of Natural Computing: basic concepts, algorithms, and applications,” Florida: CRC Press, 2006.

[22] M. Molga and C. Smutnicki, “Test functions for optimization needs,” 2005, available at http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf

[23] F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, 8(3):225-239, 2004.

[24] P. S. Prampero and R. Attux, “Magnetic Particle Swarm Optimization,” accepted for publication in the Proceedings of the IEEE Swarm Intelligence Symposium (SIS 2011), Paris, France, 2011.
