

Annals of Operations Research 52(1994)183-208

Combinations of simulation and Evolutionary Algorithms in management science and economics

Jörg Biethahn and Volker Nissen Interdisziplinäres Graduiertenkolleg, Universität Göttingen, Goßlerstr. 12a,

D-37073 Göttingen, Germany

Evolutionary Algorithms are robust search methods that mimic basic principles of evolution. We discuss different combinations of Evolutionary Algorithms and the versatile simulation method, resulting in powerful tools not only for complex decision situations but also for explanatory models. Realised and suggested applications from the domains of management and economics demonstrate the relevance of this approach. In a practical example, three EA-variants produce better results than two conventional methods when optimising the decision variables of a stochastic inventory simulation. We show that EA are also more robust optimisers when only few simulations of each trial solution are performed. This characteristic may be used to reduce the generally higher CPU-requirements of population-based search methods like EA as opposed to point-based traditional optimisation techniques.

1. Introduction

This paper identifies alternative combinations of Evolutionary Algorithms and the simulation method for applications in complex economic situations. Simulation can be regarded as a well-established method for studying the behaviour of complex systems.

Evolutionary Algorithms (EA) are powerful general-purpose search and optimisation techniques of a heuristic character. They abstract the basic principles of replication, variation (including operators like mutation and recombination) as well as selection from evolution theory (figure 1). EA proved robust in solving very diverse complex optimisation and adaptation tasks. Table 1 gives an overview of the most important application areas of EA in management science. A more exhaustive overview and many references to relevant work are given in Nissen [28]. In many of these applications EA have successfully been hybridised with other technologies like neural networks, fuzzy set theory or more conventional approaches. We will focus on possible combinations of EA and the simulation method here. The most obvious approach would be to use

* We build upon ideas which were presented in Biethahn and Nissen [8] and Nissen [27].

© J.C. Baltzer AG, Science Publishers


J. Biethahn, V. Nissen / Evolutionary Algorithms

Fig. 1. Basic EA-cycle.

Table 1
Applications of EA (frequently hybridised with other technologies) in management science.

Industry: line balancing; lotsizing; sequencing and production scheduling; process planning; stacking; personnel scheduling; siting of retail outlets.
Financial services: assessing insurance risks; predicting bankruptcy; rule induction for trading; credit scoring; security selection; time series analysis; credit card application scoring; loan default prediction.
Energy: load management; core optimisation of nuclear reactors.
Traffic: routing and scheduling of freight trains; vehicle routing; scheduling aircraft landing times.
Others: scheduling patients in a hospital; designing telecommunications networks; siting of local waste disposal systems; scheduling a flight simulator to pilots; planning water supply systems.
Education: school timetable problem.


EA for tuning decision variables of simulation models. This is a way to transform simulation into a powerful tool for optimisation in complex decision tasks. However, there are further possibilities for combining these two methods, as will be discussed later.

2. Evolutionary Algorithms

2.1. OVERVIEW

While more EA-types exist, we will focus on Genetic Algorithms (Goldberg [16]), Evolution Strategies (Schwefel [34]) and Evolutionary Programming (Fogel [13]) here. They are the most prominent variants of EA and, besides biologically inspired operators and terminology, differ from conventional search methods in a number of ways:

(1) They operate on a string or vector representation of the decision variables.

(2) They generally process a "population" of solutions ("individuals"), exploring the search space from many different points simultaneously.

(3) Using them for goal-directed search, only information on the quality ("fitness") of solutions derived from objective function values is required, but no auxiliary knowledge such as derivatives. However, incorporating available domain knowledge in problem representation, initialisation, operators or decoding function may substantially increase the competitiveness of an EA, at the cost of a reduced scope of application.

(4) Stochastic elements are deliberately employed. However, this means no pure random search, but an intelligent exploration of the search space.

EA show a certain similarity to Simulated Annealing (Kirkpatrick et al. [22]), a thermodynamically motivated local search heuristic. Notable differences exist with respect to the population approach of EA as well as the selection mechanisms and diversity of evolutionary operators, which allow for more flexibility in the design of an EA as compared to Simulated Annealing. Our own experiments (Nissen [29]) have shown mutation-based EA to be a strong competitor of Tabu Search (Glover [15]) and Simulated Annealing on the Quadratic Assignment Problem, a very tough combinatorial optimisation problem.

Table 2 gives an overview of main advantages and disadvantages of EA as an optimisation method.

2.2. GENETIC ALGORITHMS

The standard Genetic Algorithm (GA) operates on a population of bitstrings. For many applications (in particular combinatorial optimisation), however, decimal problem representations are frequently used. Each individual represents a


Table 2 Main advantages and disadvantages of EA as an optimisation method.

Advantages:
- robust, broadly applicable technique
- reliable
- well suited for high-dimensional, complex search spaces
- relatively easy to develop and implement
- no restrictive assumptions about the objective function
- no prior knowledge about the search topology required
- allow for multiple-criteria optimisation
- flexible algorithmic design options
- easily combined with other solution methods (initialisation heuristics, local search)
- efficient use of parallel hardware (inherently parallel algorithms)

Disadvantages:
- heuristic character (no guarantee to reach the global optimum in limited time)
- comparatively time-consuming (alleviated by rapid progress in hardware)
- often ineffective in fine-tuning the final solution (hybridising with local search has proved useful)
- finding good settings for strategy parameters (e.g. crossover rate) can be difficult

problem solution, coding all the decision variables. A full population of individuals is replaced during each generation cycle. In the reproduction phase individuals are selected according to their "fitness" in terms of solution quality. Then, recombination of mating partners takes place through crossover (figure 2), the main genetic operator within GA, according to a prespecified crossover probability p_c (usually ≥ 0.6).

The resulting two offspring may then undergo mutation (i.e. bit inversion) depending on some mutation rate p_m before they are inserted in the new population. Usually this probability is set to values between 0.001 and 0.01 per bit. When the new population is complete this cycle is repeated. The whole process continues until a prespecified termination criterion holds (figure 3).

There are many variations to this basic scheme. GA usually employ either fitness-proportional selection or ranking to determine the expected number of offspring for a given individual. With proportional selection the absolute differences

Fig. 2. Simple 1-point crossover of two chromosomes. Generally, more complex forms of crossover are employed.
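The crossover and bit-mutation operators just described can be sketched in Python (our illustration, not the authors' code; the default rates follow the values quoted in the text):

```python
import random

def one_point_crossover(a, b, p_c=0.6, rng=random):
    """With probability p_c, cut both parents at one random point and swap tails."""
    if rng.random() < p_c and len(a) > 1:
        cut = rng.randrange(1, len(a))           # cut point strictly inside the string
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]                            # no crossover: offspring copy parents

def mutate(bits, p_m=0.01, rng=random):
    """Invert each bit independently with probability p_m."""
    return [1 - x if rng.random() < p_m else x for x in bits]
```

Note that at every string position the two offspring together inherit exactly the two parental bits, as in figure 2.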


start
  generate initial population of binary coded individuals at random;
  repeat
    evaluate all individuals using some fitness function;
    new population = {};
    while new population not full do
      select two individuals for mating according to their fitness;
      recombine individuals according to p_c to form two offspring;
      mutate each bit of offspring with probability p_m;
      insert offspring into new population;
    end of while;
    old population = new population;
  until termination criterion holds;
  print results;
stop;

Fig. 3. Standard GA in pseudocode.
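The cycle of figure 3 can be made concrete in Python. The following sketch is our illustration (parameter values are arbitrary defaults, and the test problem is a simple bit-count "one-max" fitness); it uses fitness-proportional selection, 1-point crossover and bitwise mutation:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, p_c=0.6, p_m=0.01,
                      generations=50, rng=None):
    """Standard generational GA: proportional selection, 1-point crossover,
    bitwise mutation. Fitness is maximised."""
    rng = rng or random.Random(42)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        total = sum(scores)

        def select():
            # fitness-proportional ("roulette-wheel") selection
            if total <= 0:
                return rng.choice(pop)
            r = rng.uniform(0, total)
            acc = 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return ind
            return pop[-1]

        new_pop = []
        while len(new_pop) < pop_size:
            a, b = select(), select()
            if rng.random() < p_c:                       # recombine with probability p_c
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):                         # mutate each bit with p_m
                new_pop.append([1 - x if rng.random() < p_m else x for x in child])
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# "one-max" toy problem: fitness is simply the number of 1-bits
best = genetic_algorithm(sum)
```

On this toy problem the returned individual is close to the all-ones optimum after a few dozen generations.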

in fitness between solutions are important. Ranking, however, takes the order of individuals in terms of solution quality as a basis for reproduction. In both cases even the worst individual has a minor chance to reproduce. Moreover, instead of replacing a full population each generation, only one or two individuals (usually the worst) might be replaced in a generational cycle. This approach is termed steady-state GA (Syswerda [38]).
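Ranking selection as described here might be sketched as follows (a minimal linear-ranking illustration of ours, not a prescribed scheme):

```python
import random

def rank_select(pop, fitness, rng=random):
    """Linear ranking: selection pressure depends on rank order, not on
    absolute fitness differences; even the worst keeps a small chance."""
    ranked = sorted(pop, key=fitness)              # worst first ... best last
    weights = [i + 1 for i in range(len(ranked))]  # worst gets weight 1, best gets n
    return rng.choices(ranked, weights=weights, k=1)[0]
```

Because only ranks enter the weights, a single extremely fit individual cannot dominate reproduction the way it can under proportional selection.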

Strategy parameters of the algorithm, like mutation rate and crossover probability, are generally either fine-tuned by experiment or (seldom) by using a meta-GA for this purpose. Lately, researchers have begun to investigate the possibility of keeping these parameters self-adaptive (Bäck [5]).

2.3. EVOLUTION STRATEGIES

Evolution Strategies (ES) are similar to GA with some notable differences (Hoffmeister and Bäck [19]). ES directly process real-valued vectors of the objective variables instead of bitstrings. GA rely mainly on recombination to explore the search space, with mutation serving as a background operator to reintroduce lost bit values. Contrary to that, mutation is the dominating operator within ES, implementing a kind of hill-climbing search procedure when coupled with selection. Recombination, though, takes a prominent part in standard ES as well.

Imitating the observation that children are similar to their parents, mutation adds normally distributed random variables with zero mean and dynamically adjusted standard deviation (mutation step size) to all components of each solution in the population. The number of different step sizes can vary between one and the number of decision variables n. Also, mutations with respect to different decision variables may be correlated to guide the search process more effectively (Schwefel [34]).


start
  generate initial population of μ real-valued individuals at random;
  evaluate all initial individuals according to some fitness function;
  repeat
    intermediate population = {};
    for i = 1 to λ do begin
      select two individuals for mating with equal probability;
      discrete recombination (component values are randomly copied from either parent) to determine values for objective variables of offspring;
      intermediate recombination (averaging parent values) to determine strategy parameter values of offspring;
      mutate offspring using normally distributed random variables;
      insert ith offspring into intermediate population;
    end;
    evaluate all new individuals according to some fitness function;
    select μ best individuals (including or excluding parents) as new population;
  until termination criterion holds;
  print results;
stop;

Fig. 4. Standard multimembered ES in pseudo-code.

An additional feature of ES is the self-adaptation of mutation standard deviations. This is achieved by incorporating these strategy parameters into each solution's representation. They then become objects of mutation and recombination themselves. With "extinctive selection" ES also employ a "tougher" selection scheme than GA. There are two general methods for maintaining a population of solutions in ES. In the first, λ offspring are generated from μ parents (μ < λ) and all μ + λ solutions are placed in competition. Only the μ best individuals survive to form the next generation. This is denoted by the term (μ + λ)-ES. In the second, the parents are eliminated after each generation. Again, only the μ best children survive to form the next generation, and this is called a (μ, λ)-ES. Figure 4 gives an overview of the standard ES-procedure. Again, there are many variants of this basic scheme.
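A minimal (μ + λ)-ES with one self-adapted step size per individual, minimising a simple sphere function, could look like this (our illustrative sketch; a standard ES would add the recombination steps of figure 4, and all parameter values here are arbitrary):

```python
import math
import random

def evolution_strategy(f, n=5, mu=5, lam=20, generations=100, rng=None):
    """Minimising (mu + lam)-ES with one self-adapted step size per
    individual. Recombination is omitted for brevity."""
    rng = rng or random.Random(1)
    tau = 1.0 / math.sqrt(n)                 # learning rate for step-size adaptation
    # an individual is a pair (objective variables x, mutation step size sigma)
    pop = [([rng.uniform(-5.0, 5.0) for _ in range(n)], 1.0) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            x, sigma = rng.choice(pop)                        # pick one parent
            sigma = sigma * math.exp(tau * rng.gauss(0, 1))   # mutate sigma first ...
            x = [xi + sigma * rng.gauss(0, 1) for xi in x]    # ... then the variables
            offspring.append((x, sigma))
        # plus-selection: parents and offspring compete, the best mu survive
        pop = sorted(pop + offspring, key=lambda ind: f(ind[0]))[:mu]
    return pop[0]

best_x, best_sigma = evolution_strategy(lambda x: sum(xi * xi for xi in x))
```

Mutating σ before the objective variables is the usual ordering, so that a good step size is judged by the offspring it produces.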

2.4. EVOLUTIONARY PROGRAMMING

Evolutionary Programming (EP) is very similar to Evolution Strategies, although independently developed. It starts from the assumption that evolution optimises behaviour (phenotype level) and not the encoding genetics. EP, therefore, focusses on evolution at the phenotypic level. EP has no restrictions on the kind of problem representation employed, since coding should follow naturally from the given optimisation task (Fogel [13]). However, mutation is the only source of variation in the algorithm. No recombination or other genetic operators are used, since they are attributed to the genotype level.


start
  generate initial population of m individuals at random using some adequate problem representation;
  evaluate initial individuals according to some fitness function;
  current population = initial population;
  repeat
    intermediate population = current population (parents);
    for i = 1 to m do begin
      generate ith offspring by duplicating ith parent;
      mutate ith offspring using normally distributed random variables;
      evaluate ith offspring according to some fitness function;
      insert ith offspring into intermediate population;
    end;
    for j = 1 to 2m do
      hold sequential competitions between the jth individual and c other randomly determined individuals from the intermediate population, using their respective fitness values to determine wins;
    rank individuals in the intermediate population in descending order according to number of wins in the competition;
    select m highest ranking individuals as new current population;
  until termination criterion holds;
  print results;
stop;

Fig. 5. Standard EP in pseudo-code.

In a standard EP, offspring are created from parent solutions by duplicating them. Mutation is implemented as adding normally distributed random variables with zero mean and dynamically adjusted variance (or standard deviation) to the components of all new trial solutions. In contrast to ES, the mutation variance is commonly derived from the parent's fitness score when fitness measures an error term. This promotes convergence to a globally optimal solution, assuming the optimum is known. In our example (section 4), where the global optimum is unknown, we have made mutation a function of parent fitness and generation number to promote convergence. Selection is different from ES, too, employing some form of stochastic tournament between individuals (children and parents) that leads to a final ranking.

All in all, a standard EP is roughly comparable to a strictly mutation-based (μ + λ)-ES with μ = λ. Self-adaptive mutation step sizes and correlated mutations as in some ES are not part of standard EP, although corresponding variants exist (see Fogel [13] for further details). Figure 5 gives an overview of the standard EP approach in pseudo-code.
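The stochastic EP tournament (each individual competes against c randomly drawn opponents, then the m individuals with the most wins survive) might be sketched as follows (our illustration; the tie-break by fitness is an added assumption):

```python
import random

def ep_select(intermediate, fitness, m, c=10, rng=None):
    """EP selection: each of the 2m individuals in the intermediate population
    competes against c randomly drawn opponents; the m individuals with the
    most wins survive. Fitness is maximised."""
    rng = rng or random.Random(7)
    wins = []
    for ind in intermediate:
        opponents = [rng.choice(intermediate) for _ in range(c)]
        wins.append(sum(1 for opp in opponents if fitness(ind) >= fitness(opp)))
    # rank by win count (ties broken by fitness) and keep the m best
    order = sorted(range(len(intermediate)),
                   key=lambda i: (wins[i], fitness(intermediate[i])), reverse=True)
    return [intermediate[i] for i in order[:m]]
```

Because opponents are drawn at random, a mediocre individual can occasionally survive, which keeps the selection "soft" compared with the strictly extinctive ES schemes.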

3. Combining simulation and Evolutionary Algorithms: an overview

We identify six alternatives for combining the simulation method with EA. In this section we will briefly comment on these variants and their application potential for the fields of management science and economics.


3.1. VARIANT I: TUNING DECISION VARIABLES

Optimising a real system with simulation is frequently a very difficult task, one of the reasons being the often large, complex and constrained search space of decision variables which prohibits the use of more conventional optimisation methods.

EA are particularly well suited for determining good decision variable settings of a simulation model in the case of a complex search space. For a given parameter setting a fitness value can be calculated by executing the simulation. Fitness values enable the EA to perform a goal-oriented search in the space of parameter settings. The design flexibility of EA allows for shifting the focus of EA-search from high solution quality to quick convergence if required. However, since EA are population-based, they are comparatively time-consuming. One should, therefore, assess the complexity of the optimisation problem and alternative optimisation techniques first before deciding to implement an EA.

Constraints may be incorporated in evolutionary search by the following means:

- adapted problem representation and operators;
- intelligent decoding schemes and repair mechanisms to guarantee valid solutions;
- constraint propagation;
- use of penalty functions which degrade an individual's performance according to the violation of constraints;
- application of multi-criteria optimisation techniques, where the number of violated constraints becomes an additional goal to be minimised.
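As an example of the penalty-function approach, fitness can be degraded in proportion to how far a trial solution violates each constraint. The sketch below is hypothetical: the order-quantity variable q, the constraint bounds and the penalty weight are all invented for illustration.

```python
def constraint_violations(q, capacity=50.0, min_lot=5.0):
    """Non-negative violation amounts for two hypothetical constraints
    on an order quantity q: q <= capacity and q >= min_lot."""
    return [max(0.0, q - capacity), max(0.0, min_lot - q)]

def penalised_fitness(profit, q, weight=100.0):
    """Degrade the raw fitness (here: profit, maximised) by the weighted
    total constraint violation; feasible solutions are unaffected."""
    return profit - weight * sum(constraint_violations(q))
```

Graded penalties like these let the EA approach the feasible region from outside instead of discarding infeasible individuals outright.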

For stochastic models optimisation is more complicated. Due to inherent random noise in the simulation output an objective function must be constructed first. Generally, the simulation must be run a reasonable number of times to obtain a reliable estimate for the expected performance of the current decision values. This makes optimisation a time-consuming task. We discuss the example of a stochastic inventory simulation in section 4.
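For such stochastic models, the objective function is typically the mean over several replications of the simulation. A minimal sketch (the noisy "simulation" here is invented for illustration):

```python
import random

def expected_fitness(decision, simulate, n_runs=20, rng=None):
    """Estimate the expected simulation output for one decision-variable
    setting by averaging n_runs independent replications."""
    rng = rng or random.Random(3)
    return sum(simulate(decision, rng) for _ in range(n_runs)) / n_runs

def simulate(q, rng):
    """Hypothetical stochastic simulation: true cost (q - 30)^2 plus noise."""
    return (q - 30.0) ** 2 + rng.gauss(0.0, 10.0)
```

The choice of n_runs trades estimation accuracy against computing time; section 4 examines how robust different optimisers are when only few replications per trial solution are affordable.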

Assessing the attained quality of simulation results might pose a problem since information about the global optimum is usually not available. This should be treated in a pragmatic way, e.g. by comparing to historic data or the given situation of the simulated real system. The EA could also be run a few times with varying initialisations.

Applications of the type described here (variant I, figure 6) have been realised in the areas of production scheduling (Ablay [1], Storer et al. [37]), parameterising of material-flow systems (Noche [30]) and allocation of orders to loading docks in a large brewery (Starkweather et al. [36]).

EA are also capable of processing decision vectors in management simulations. This is useful for practical assistance in business planning in the following


Fig. 6. Tuning decision variables (variant I). [Figure labels: Evolutionary Algorithm; simulation model; determination of decision variables and of objective values.]

way. First, a simulation model of the decision situation must be implemented. Management decisions to be taken at different points in time are coded as variables on a decision vector. Starting from a given or random initialisation, the coded decision values can then be improved by an EA that uses simulation for fitness evaluations. During optimisation, "how-to-achieve" questions could be analysed as well. The EA must then optimise the decision vector with the goal of minimising the difference to some desired final simulation result. It is even possible to optimise simultaneously with respect to multiple goals (Schaffer [32], Kursawe [24]) or to locate alternative good solutions within one EA-run by operators abstracting ecological niches (Goldberg [16]). This will not lead to some form of automated decision-making, but should be a helpful assistance for planning.

With some modifications, this type of application also extends to complex educational management games where, so far, the game supervisor lacks an objective benchmark to rate the quality of players' decisions and achievements in a final review of the game. EA can be used to calculate such benchmarks for a given team (Nissen [27] further elaborates this point).

3.2. VARIANT II: EVOLVING MODEL STRUCTURE

EA have successfully been used for structure optimisation tasks in the design of neural networks and CAD applications. In the domain of economic systems we might also wish to optimise structural aspects of (planned) real systems. This can, in principle, be supported by evolving the corresponding simulation model structure


Fig. 7. Evolving model structure with EA (variant II). [Figure label: determination of decision variables.]

(variant II, figure 7) - a difficult task that does not seem to have been approached in this domain yet. Some may view structure as yet another parameter. We believe that structure is of a rather different quality, giving meaning to parameters. This is best illustrated with an example from neural network design, where one must decide on the number of layers and nodes as well as the connection topology before starting to optimise connection weights or other parameters. EA have successfully been used for these purposes (Whitley [39]). One has to admit, though, that finding a suitable representation for the task of combined structure and parameter optimisation will be considerably more difficult in the case of complex industrial simulation models. Graph grammars and object-oriented programming look promising in alleviating this difficulty.

Fruitful areas of application in management science are recognised in the design of material flow systems where, for instance, such structural decisions as quantity and location of buffers must be taken. In the distribution area we find many structure optimisation problems, such as determining quantity and siting of warehouses, where a combination of EA and simulation can provide assistance for the decision-maker.

3.3. VARIANT III: METALEVEL-EA

In other domains EA have successfully been applied to the task of tuning the strategy parameters of a subsidiary method which is actually solving the practical problem. Güvenir and Sirin [17] give an illustrative example of this hybridisation.


Fig. 8. Two-stage optimisation with metalevel-EA (variant III). [Figure labels: situation; goals, constraints; optimisation method; strategy parameter values; fitness determination (optimisation success).]

They employ a GA to optimise weights and generalisation limits of a feature partitioning classification algorithm. Fitness here means the practical success (accuracy) of the tuned classification algorithm with an EA-determined set of strategy parameters.

In the context of simulation (figure 8), the mix of local scheduling heuristics for a complex production process could be the object of an EA-optimisation. A more advanced approach would concern the tuning of certain PPS-parameters using a production simulator to evaluate particular settings. Tuning PPS-systems by hand has often led to insufficient parameter settings due to the high-dimensional, complex search space. EA look like a promising alternative here.

3.4. VARIANT IV: SIMULATION METAMODELS

A simulation metamodel is defined as a model of a given simulation model. The metamodeling problem is to determine, using the results of preliminary simulation experiments, a function such that for given values of the influencing factors the differences between estimations of the response variables using the simulation model and estimations using the metamodel are as small as possible (Pierreval [31]).
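A minimal illustration of this idea, assuming a single influencing factor and an ordinary least-squares fit (our sketch, not from the paper):

```python
def fit_linear_metamodel(xs, ys):
    """Fit y ≈ a + b*x by ordinary least squares to pairs of (factor value,
    simulation response) obtained from preliminary simulation runs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x   # cheap surrogate for the simulation model
```

Once fitted, the returned function can answer what-if questions at a tiny fraction of the cost of running the full simulation.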

Frequently, metamodels take the form of linear regressions. Experiments with the simulation model are used to estimate the parameters of the regression model. The two main advantages of metamodels over the original simulation


Fig. 10. Standard CS. [Schematic: a classifier system interacting with a black-box environment.]

- Genetic Algorithm,
- input interface (detectors),
- output interface (effectors).

Messages from the environment enter the system via detectors, which code environmental input as messages for placement on the internal message list. Messages may activate simple string rules (called classifiers) in the rule base by matching their condition parts. Activated classifiers become candidates for posting their action part as an internal message to the message list. Since more classifiers might have been activated than the space of the message list permits, a conflict resolution component is required. Whether candidate classifiers can post their action to the message list is determined by the outcome of a stochastic activation auction which in turn depends on the individual importance (weight, strength) of the classifiers involved. The message list is purged and chosen classifiers place their messages on the list. If effectors (fixed rules whose action affects the environment) are


matched by the current message list, then they are allowed to post their actions to the environment.

Feedback from the environment is distributed by the credit allocation component. It varies the relative importance (strength) of individual classifiers according to their ability to contribute to correct output. Holland's bucket brigade is the most prominent credit allocation algorithm; it sets up the mechanisms of a competitive service economy and links conflict resolution with credit allocation (see Goldberg [16] for further details). From time to time a Genetic Algorithm is invoked to search the rule space and replace weak classifiers by new rules generated by mutating and recombining good classifiers. Over time the CS builds up an internal model of its environment and learns to produce correct output from a given input. The functional similarity of CS to neural networks is obvious. However, production rules exhibit some advantages over neural nodes with respect to semantic content, inclusion of domain knowledge and scalability of the rule base.

In the framework of metamodeling one can imagine a CS that produces output equivalent to that of the underlying simulation model from identical input. However, before achieving this goal, technical difficulties of today's CS must be overcome. These difficulties mainly refer to the apportionment-of-credit component and the question of adequate problem representation. Finally, variant IV may, of course, be combined with variant I or III for the purpose of determining good values of decision variables.

3.5. VARIANT V: EMERGENT COMPUTATION

There is a growing interest in applying certain types of EA within explanatory models of economics. The basic idea is to simulate institutions, markets or economies and study their behaviour (figure 11). Also common to these approaches is a concern with artificial adaptive agents - consumers, firms, government agencies, banks etc. - that face and rank alternative actions which impinge upon one another, often in a setting of constrained resources (Arthur [2]). Artificial agents may be designed with GA, CS or the Echo model (Holland [21]) as well as Genetic Programming (Koza [23]), other recent developments in the area of EA.

These agents interact with one another and their simulated environment, following and adapting local rules, thereby creating a complex, global system behaviour that mirrors aspects of economic reality. Varying the experimental design gives rise to different system evolutions, which may be helpful in understanding real-world phenomena. Computational systems where interesting global behaviour emerges from local interactions are generally referred to as emergent computations (see Forrest [14] for further details).

Viewing companies and organisations as evolving, self-organising systems was an approach also put forward by Malik et al. under the theme of "Evolutionary Management" since the late 1970s (see Malik [25] for a concise overview). Variant V


[Figure 11 sketches a set of artificial adaptive agents ("aaa") embedded in a simulated environment: after initialisation, each agent perceives its environment, draws inferences from local rules, and interacts with the other agents.]

Fig. 11. Emergent computation (variant V).

may be interpreted as a tool for the experimental analysis of system behaviour under this paradigm.

Several authors have conducted investigations along these lines of argument. Marengo [26] has looked at the relationship between the organisational structure of a firm and its capability to learn and adapt to changing environmental conditions. He used a simulation model of organisational decision making and learning in which the members of the organisation, modeled by CS, possess no prior knowledge of the environment they are facing. Marengo puts his emphasis on the formation and evolution of a common knowledge base, shared by all members of an organisation, under different environmental conditions and organisational structures.

Bruderer [10] develops a theory of strategic learning that attempts to explain and predict strategic behaviour in complex games. To evaluate his theory, Bruderer implements the common good game as a game among artificial agents (CS), comparing the results to economic laboratory experiments.

Holland and Arthur have set up a model of financial markets where 100 artificial adaptive agents can buy or sell units of stock trying to maximise their profit (Arthur [3]). They find no evidence that the market behaviour ever settles down and observe phenomena like booms, slumps and crashes that are so difficult to predict using standard economic theory.


Note that mathematical models of evolution based on differential equations are being intensively discussed in the fields of organisation ecology (Hannan and Freeman [18]) and evolutionary economics (see Witt [40] for an overview).

4. A practical example: optimising a stochastic inventory simulation

In this section we want to look in detail at a practical example of variant I, i.e. using EA to optimise decision variables of a simulation model. Although it is not a "real-world" problem but rather serves illustrative purposes, the application is of particular interest due to its stochastic character. It also serves as a vehicle to compare various EA and more conventional optimisation methods under different criteria. We report on work in progress here.

The investigated optimisation problem is one of inventory management in a make-to-stock plant and is discussed in Biethahn [7]. He describes a capital-oriented one-product inventory model with two decision variables: order point (s) and order quantity (q). A quantity of q is ordered whenever the item's inventory position (inventory on-hand + orders outstanding - demand backlog) is observed to have fallen to or below the level s. We assume that the item in question is continuously reviewed. Starting with a given amount of capital and a certain inventory level, the aim is to maximise equity capital over a period of one year by setting s and q to optimal integer values. The model is further characterised by stochastic demand sizes and demand time intervals as well as stochastic replenishment times. To find an optimal (s, q)-policy the model was implemented as an event-driven simulation. Note that, due to the stochastic character of the simulation, different runs with the same decision values will not lead to fully identical results.
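The structure of such a continuous-review (s, q) simulation can be sketched as follows. This is a toy illustration only: the demand distributions, cost parameters, initial capital and lead-time range are our assumptions and do not reproduce Biethahn's model.

```python
import random

def simulate_sq_policy(s, q, horizon=365.0, seed=0):
    """Toy continuous-review (s, q) inventory simulation (illustrative sketch).

    Whenever the inventory position (on-hand + on-order) falls to or below s,
    a quantity q is ordered; demand sizes, demand intervals and replenishment
    lead times are stochastic. Returns the equity capital after one year.
    """
    rng = random.Random(seed)
    capital, on_hand, on_order = 1000.0, 100, 0
    t, pending = 0.0, []            # pending: list of (arrival_time, quantity)
    while t < horizon:
        t += rng.expovariate(1.0)   # stochastic demand inter-arrival time
        # Receive all replenishment orders that have arrived by now.
        arrived = [(a, x) for (a, x) in pending if a <= t]
        pending = [(a, x) for (a, x) in pending if a > t]
        for _, x in arrived:
            on_hand += x
            on_order -= x
        demand = rng.randint(1, 5)  # stochastic demand size
        sold = min(demand, on_hand)
        on_hand -= sold
        capital += 10.0 * sold      # revenue per unit sold
        capital -= 0.05 * on_hand   # holding cost on remaining stock
        if on_hand + on_order <= s: # inventory position at or below order point
            lead = rng.uniform(2.0, 8.0)   # stochastic replenishment time
            pending.append((t + lead, q))
            on_order += q
            capital -= 6.0 * q      # purchase cost of the order
    return capital
```

Running the same policy with different seeds gives different results, which is precisely the property that makes the optimisation problem stochastic.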

To implement a simulation, one can either use a programming language like Pascal or employ a more user-friendly simulation tool. We tried the second possibility first and encountered the following difficulties. The first tool, GRAMOS, a Pascal implementation of GPSS, did not allow a natural and easy interface between optimiser and simulation model. In particular, GRAMOS assumed that the total number of simulations required was known in advance. Since this was not the case, we then tried a second, more elaborate tool, PACE, based on Smalltalk. Although an excellent medium for production simulations, PACE was not well suited for our type of inventory simulation. The greatest problem was the high CPU requirement for each simulation, induced by the user interface, which made optimisation prohibitively time-consuming. Finally, we implemented the inventory simulation and all optimisation methods in Pascal on 486/33 MHz PCs, avoiding trouble with interfaces and excessive computational requirements.

The next question was how to account for the stochastic character of the simulation when determining good settings for order point and order quantity. To be able to compare alternative parameter settings, a substitute objective function must be constructed first. In Biethahn [7] the first author systematically simulated equidistant points in a grid of (s, q)-policies and concluded that by taking the mean of 40 simulations/policy the resulting multimodal landscape of objective function values was approximately stable. However, 10 simulations/policy were sufficient to locate the relevant optima. With only one simulation/policy one would simply ignore the stochastic factors in the inventory model.
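The substitute objective described above amounts to averaging the stochastic simulation outcome over N independent runs of the same policy. A minimal sketch (the function name, the `simulate` callable signature and the seeding scheme are our assumptions):

```python
def mean_fitness(simulate, s, q, n_sims=40, base_seed=0):
    """Substitute objective function for a stochastic simulation (sketch).

    Evaluates the (s, q)-policy n_sims times with independent seeds and
    returns the mean outcome as the policy's fitness value.
    """
    total = 0.0
    for i in range(n_sims):
        total += simulate(s, q, seed=base_seed + i)
    return total / n_sims
```

With n_sims = 40 the landscape is approximately stable; with n_sims = 1 the stochastic factors are ignored entirely, which is exactly the trade-off studied in the experiments below.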

To investigate the influence of the number of simulations per solution on the final result under different optimisation techniques we conducted experiments with 1, 10 and 40 simulations per (s, q)-policy. With 40 and 10 simulations the mean was taken to be the "true" objective function value as a basis for comparing different solutions. The final result of each optimisation was always evaluated with 40 simulations to allow for a more reliable estimate of the true performance of this particular (s, q)-policy. Since the substitute objective function was neither continuous nor differentiable, only derivative-free optimisation methods were applicable. Results for a GA, an ES, and an EP as well as for two conventional optimisation techniques are given in tables 3 and 4. All EA start from the same initial solutions, which are also taken individually as starting points for the two conventional optimisation methods. Results are based on five runs per method that vary only with respect to the random number seed. The flows of random numbers are identical for all optimisation techniques compared. It is planned to implement further numerical optimisation methods and compare them on the given inventory model.

Our three EA-implementations follow the pseudo-code given in figures 3-5 with the following specifications and extensions:

(1) GA: We employ a population size of 20 individuals. It is common practice to use Gray code instead of standard binary code within GAs. Both codes are bit-based. The distinguishing feature is that adjacent integers, when Gray-coded, differ by only one bit (a Hamming distance of 1). We follow this common practice in our own implementation. 9 bits are used to code s and another 9 bits for q, allowing values between 0 and 511 for both decision variables. (This scope was also adopted for the other methods of optimisation.) The entire solution space, therefore, totals approx. 2.6 × 10^5 different possible (s, q)-policies. The initial population (generation 0) is identically initialised for all EA with the (s, q)-policies (20, 20); (20, 150); (150, 20); (150, 150). The other 16 starting solutions of the GA are random. To determine mating partners for producing offspring we employ deterministic binary tournament selection. Two individuals are randomly drawn from the parent generation and the one with higher fitness is kept. This procedure is then repeated to determine the second mating partner. Crossover is applied with a probability p_c = 0.8. We use uniform crossover (Syswerda [38]), a variant that leads to a stronger mixing of parental information in the children than 1-point crossover. Mutation is implemented as bit inversion with a mutation probability per bit set to p_m = 0.01. The termination criterion for all EA is a predefined number of 25 generations, leading to a total of 500 evaluated solutions (0.2% of the solution space), not considering the initial solutions.
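The Gray-coding property mentioned above - adjacent integers differ in exactly one bit - can be illustrated with the standard reflected binary Gray code (a generic sketch; the paper does not specify which Gray-code construction was used):

```python
def to_gray(n):
    """Standard reflected binary Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray coding by cumulative XOR over the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent integers have Hamming distance 1 under Gray coding, even across
# binary "carry" boundaries such as 7 (0111) -> 8 (1000):
assert bin(to_gray(7) ^ to_gray(8)).count("1") == 1
# The coding is a bijection on the 9-bit range 0..511 used for s and q:
assert all(from_gray(to_gray(i)) == i for i in range(512))
```

Under standard binary code, 7 and 8 differ in four bits, so a single bit-inversion mutation can rarely produce this small step; Gray coding removes such "Hamming cliffs".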


(2) ES: We use a (4, 20)-ES. Following standard practice (Bäck and Schwefel [6]), mutation is implemented for each individual by first mutating the mutation step sizes (standard deviations) σ_s and σ_q and then applying the modified step sizes σ_s' and σ_q' to mutate s and q:

σ_s' = σ_s · exp(t1 · N(0, 1) + t2 · N_s(0, 1)),   s' = s + N(0, σ_s'),
σ_q' = σ_q · exp(t1 · N(0, 1) + t2 · N_q(0, 1)),   q' = q + N(0, σ_q'),

where N(μ, σ) is a (μ, σ)-normally distributed random variable, and t1 = 0.5, t2 = 0.6.

Mutations leading to s or q values outside the interval [0, 511] are considered lethal and assigned a very low fitness value. The standard deviations σ_s and σ_q are set to a value of 50.0 in the initial population.
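The self-adaptive mutation step can be sketched as follows. This follows the general log-normal scheme of Bäck and Schwefel [6] with the t1, t2 and bounds given above; the rounding to integers and the exact shape of the update are our assumptions, since the paper's formula is only paraphrased here.

```python
import math
import random

def es_mutate(s, q, sigma_s, sigma_q, rng, t1=0.5, t2=0.6, lo=0, hi=511):
    """Self-adaptive ES mutation sketch: mutate the step sizes log-normally
    first, then use the new step sizes to perturb the decision variables."""
    common = t1 * rng.gauss(0.0, 1.0)   # shared factor, same for both step sizes
    sigma_s2 = sigma_s * math.exp(common + t2 * rng.gauss(0.0, 1.0))
    sigma_q2 = sigma_q * math.exp(common + t2 * rng.gauss(0.0, 1.0))
    s2 = round(s + rng.gauss(0.0, sigma_s2))
    q2 = round(q + rng.gauss(0.0, sigma_q2))
    # Offspring outside [lo, hi] are marked lethal (assigned very low fitness).
    lethal = not (lo <= s2 <= hi and lo <= q2 <= hi)
    return s2, q2, sigma_s2, sigma_q2, lethal
```

Because the step sizes are inherited and mutated alongside s and q, selection implicitly tunes the mutation strength to the local shape of the fitness landscape.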

(3) EP: We employ a (4 + 20)-EP with real-coded individuals. It is not sensible to stick to the standard approach with an equal number of parents and children, for the following reason. Under such conditions survival of the best individuals is almost guaranteed. This is undesirable since it could then happen that a mediocre (s, q)-policy achieves by chance a good series of evaluations and hence a high fitness value. With the standard EP approach such an individual is very likely to persist in the population for many generations while its true quality does not merit survival. This problem is particularly acute with 1 or 10 simulations/solution. For the same reason a steady-state GA is not applicable to this optimisation problem.

We, therefore, decided to increase the number of children and kept the competition for survival at C = 6 for each individual. Thereby, survival of the best individual in the population is not guaranteed, leading to a stochastic selection process. Note that survival of the best is also not guaranteed in the GA or the ES. Since children inherit (modified) parameter values from their parents, the information of good solutions is not completely lost, though, but rather subject to further testing and exploitation while new regions of the search space continue to be explored. Mutation alters the parental values of s and q to produce the corresponding decision values s' and q' of the child by setting


where N(0, σ) is a (0, σ)-normally distributed random variable, β_s = β_q = 100 are scaling constants, Φ_w is the worst fitness value in the start population, and Φ_p is the fitness value of the parent.

The mutation step size is a function of generation number and parent fitness. The factor α decreases with a rising number of generations and increasing quality of solutions in the population, promoting convergence at the end of the run. Mutations leading to s or q values outside the interval [0, 511] are considered lethal and assigned a very low fitness.
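The stochastic survivor selection with competition C = 6 described above is the standard EP tournament idea, sketched below. The function name and tie handling are our assumptions; the key property is that even the best individual can fail to survive.

```python
import random

def ep_select(population, fitness, mu=4, c=6, rng=None):
    """EP survivor-selection sketch: each individual in the joint
    parent+offspring pool competes against c randomly drawn members of the
    pool (possibly including itself) and scores a 'win' for every opponent
    it beats; the mu individuals with the most wins survive."""
    rng = rng or random.Random()
    wins = []
    for ind in population:
        opponents = rng.sample(population, c)
        wins.append(sum(fitness(ind) > fitness(o) for o in opponents))
    # Rank by number of wins; ties are broken by original position.
    ranked = sorted(range(len(population)), key=lambda i: wins[i], reverse=True)
    return [population[i] for i in ranked[:mu]]
```

A lucky mediocre individual may win a round or two, but repeated tournaments over generations tend to filter it out, which is exactly why this scheme is preferred here over deterministic elitist survival.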

The two conventional optimisation techniques implemented are an alternating variable search (AVS) and a simplified gradient search (SGS).

The basic idea of AVS is to vary the variables individually in turn. Starting from an initial point in solution space, variable s is increased with a constant stepsize of 20. If this is unsuccessful, decreasing s is tested. In case of success the search proceeds in the chosen direction until failure. Then q is varied. Both variables are iteratively changed in turn until no further progress occurs. Then the steplength is halved and the search continues as before. AVS terminates after a minimum steplength of 1 has been employed.
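The AVS procedure just described can be sketched as follows (a sketch under the stated parameters: initial step 20, halving on failure, termination after step 1; the bounds handling is our assumption):

```python
def avs(f, s, q, step=20, min_step=1, lo=0, hi=511):
    """Alternating variable search sketch: vary one variable at a time,
    keep moving in a successful direction until failure, halve the step
    when neither variable can be improved, stop after the minimum step."""
    best = f(s, q)
    while step >= min_step:
        improved = True
        while improved:
            improved = False
            for axis in (0, 1):             # 0: vary s, 1: vary q
                for d in (step, -step):     # try increasing, then decreasing
                    while True:
                        ns = s + d if axis == 0 else s
                        nq = q + d if axis == 1 else q
                        if not (lo <= ns <= hi and lo <= nq <= hi):
                            break
                        v = f(ns, nq)
                        if v > best:        # success: continue in this direction
                            s, q, best = ns, nq, v
                            improved = True
                        else:
                            break
        step //= 2
    return s, q, best
```

Note that f here would be the substitute objective (the mean over N simulations); with N = 1 a single lucky evaluation can freeze the search, which is the weakness analysed below.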

SGS proceeds in a similar way, but instead of varying only one variable at a time finds the direction of steepest ascent by changing s and/or q with predefined

Table 3 Empirical results using five different optimisation techniques to determine good settings for order point (s) and order quantity (q) in a stochastic inventory simulation. Each trial solution is evaluated with 40 simulations, taking the mean as fitness. (OF-value = objective function value; Ever-best = best solution ever discovered during the run; Last-best = best solution of the final population.)

Optimisation method (starting point) | Eval. sol. (avg.) excl. illegal sol. | Best run: OF-value, (s, q)-policy | Worst run: OF-value, (s, q)-policy | Average OF-value (5 runs)

AVS (20, 20); (20, 150); (150, 20); (150, 150)

SGS (20, 20); (20, 150); (150, 20); (150, 150)

[The numerical entries of table 3 were lost in extraction.]


steplength. It then proceeds in this direction until failure, determines the steepest ascent anew and so on. When no further progress is achieved the steplength is halved. This procedure terminates after a minimum steplength of 1. Both AVS and SGS employ identical steplengths and start from the same initial solutions.
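Analogously to AVS, the simplified gradient search can be sketched like this (a sketch; we assume "steepest ascent" means picking the best of the eight neighbouring moves obtained by changing s and/or q by one steplength):

```python
def sgs(f, s, q, step=20, min_step=1, lo=0, hi=511):
    """Simplified gradient search sketch: among the eight neighbouring moves
    (+/- step in s and/or q) pick the steepest ascent, follow that direction
    until failure, re-determine it, and halve the step when stuck."""
    best = f(s, q)
    dirs = [(ds, dq) for ds in (-1, 0, 1) for dq in (-1, 0, 1)
            if (ds, dq) != (0, 0)]
    while step >= min_step:
        while True:
            # Evaluate all in-bounds neighbours and pick the best one.
            cands = []
            for ds, dq in dirs:
                ns, nq = s + ds * step, q + dq * step
                if lo <= ns <= hi and lo <= nq <= hi:
                    cands.append((f(ns, nq), ns, nq, ds, dq))
            v, ns, nq, ds, dq = max(cands)
            if v <= best:
                break                       # no ascent direction left
            s, q, best = ns, nq, v
            # Follow the chosen direction until it stops improving.
            while True:
                ns, nq = s + ds * step, q + dq * step
                if not (lo <= ns <= hi and lo <= nq <= hi):
                    break
                v = f(ns, nq)
                if v <= best:
                    break
                s, q, best = ns, nq, v
        step //= 2
    return s, q, best
```

Because SGS evaluates up to eight neighbours per re-orientation, it inspects more trial solutions than AVS, matching the observation in table 3 that SGS performs better but at higher cost.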

Empirical results for all five optimisation techniques tested are given in table 3 (40 simulations/solution) and table 4 (1 and 10 simulations/solution). For the EA we give two solutions. The first and more important one refers to the best

Table 4 Empirical results for 1 and 10 simulations/solution. The last column gives the average objective function value over 5 runs after evaluating each final solution with 40 simulations. This must not be confused with the data in table 3.

Optimisation method (starting point) | Sim./point | Avg. total simulations | Best run OF-value | Worst run OF-value | Avg. OF-value after 40 simulations

AVS (20, 20)

SGS (20, 20)

[The numerical entries of table 4 were lost in extraction.]


individual ever discovered during the search. The second is the best solution found in the final population. Note that all three EA are designed in a way that the best solution does not necessarily survive forever. This ability to "forget" has proven useful in overcoming local optima. In optimisation it is, however, sensible to additionally store the best solution found so far during one run in a separate "golden cage", and update it whenever a better (s, q)-policy is discovered. This does not change the basic search process.

Interpreting table 3 first, one finds that all three EA-variants produce similar results which are on average (Ever-best) better than those of SGS and particularly AVS. SGS performs better than AVS but also requires the evaluation of more trial solutions. However, all three EA evaluate even more solutions during each run of 25 generations and, hence, are rather time-consuming when 40 simulations/solution are performed. The difference in solution quality between best and worst run is smaller for the population-based EA as compared to the point-based conventional optimisation methods. Moreover, AVS in particular shows a certain dependency of the final result on the initial point of the search. Thus, the three EA are more reliable optimisers than both SGS and AVS, at the price of higher CPU requirements. One should also point out that both conventional methods terminate after the minimum steplength has been reached. The GA, ES and EP can, in principle, continue as long as resources are available, possibly raising the attained solution quality even further. For ES and EP roughly 20% of the trial solutions fall outside the interval [0, 511] with respect to s and q and are thus considered lethal. Due to the 9-bit coding employed, the GA produces only legal solutions. The recommended (s, q)-policy is roughly similar in the final results of all three EA, leading to a very small value (≤10) for s and a rather high value (≈250) for q. The fitness value of the best individual in the final population is generally a bit lower than that of the best solution ever discovered during the search. Recalling the design of our EA with their built-in ability to "forget" good (local) optima, this comes as no surprise and is just a sign of continuing search efforts.

Table 4 gives empirical results for 1 and 10 simulations/solution respectively. An obvious finding is that the fewer evaluations one performs, the higher the fitness of the final solution becomes. This is, however, only true before we evaluate these final solutions with 40 simulations to determine their "real" quality. Then we see that AVS and SGS produce rather bad results with only 1 simulation per point. With 10 simulations the results are somewhat better but still strongly dependent on the starting solution employed. Moreover, we found that the fluctuations between best and worst run (after 40 simulations of the final solution) are rather pronounced for both conventional optimisation methods. It is fair to say that w.r.t. AVS and SGS, simulating each trial solution 1 or 10 times is no reliable indicator of a really good result after 40 simulations/solution.

The explanation is that with a point-based search lucky evaluations of mediocre solutions manifest themselves and lead to a premature termination of the search process. With fewer simulations this effect becomes even more pronounced. Population-based approaches are not so easily misled. It is useful and adequate to think of a population-based EA not as many individual trials scattered over the entire search space but as a gradually moving and tightening cloud of trial solutions. The search is concentrated on promising regions by exploiting the information contained in good solutions already discovered. At the same time, new search areas are continuously explored due to the stochastic element inherent in EA. Keeping a proper balance between such exploitation and exploration is important for the success of an EA.

Table 4 shows that the average results with 10 simulations/solution for all three EA are comparable to the two conventional methods, AVS and SGS, with 40 simulations/solution. Particularly EP and GA produce good results even with only 1 simulation per trial solution while ES performs slightly worse here.

The average results of all three EA with only 1 simulation for each (s, q)-policy are equal to or better than what can be expected when evaluating each solution 10 times in AVS or SGS. Thus, to achieve comparable results in the case of a stochastic optimisation it suffices to evaluate each trial solution less frequently than with point-based conventional techniques. The similarity of parents and children as well as the cloud-like type of search in EA lead to an implicit, intensive evaluation of the information about the search space contained in the population as a whole. This gives EA increased reliability as opposed to conventional point-based search techniques when only few evaluations per trial solution are performed. It also explains why in table 4 the detailed evaluation of the best individual in the final generation often leads to similar, and sometimes even better, results than the best individual discovered during the entire EA-search. The information contained in the final population has been implicitly tested quite intensively, since convergence is rather pronounced in all three EA-variants at generations 20-25. So, when only few simulations of each (s, q)-policy are performed it is sensible to cross-check both EA-solutions (Ever-best and Last-best) for more reliable results. The robustness of EA in stochastic optimisation with few evaluations of each trial solution also alleviates their high CPU requirements when 40 simulations/solution are performed. It may be taken as a counter-argument to the impression that EA always have drastically higher CPU requirements than more conventional optimisation methods.

Finally, we would like to point out that we have chosen an optimisation problem of only moderate complexity to gain first experience. It is our view that EA are particularly advantageous in very complex search spaces, such as simulations with many decision parameters. Classical optimisation techniques are generally not applicable in these domains since they are too easily misled by local optima. It remains a topic for further study to compare EA with one another and with modern point-based search techniques (including, e.g., simulated annealing, tabu search) on more complex stochastic optimisation tasks.


5. Conclusions

We have presented various alternatives for combining EA with the simulation method. These concerned decision models as well as explanatory models. Using EA as a means to determine good values of decision variables can turn simulation into a powerful optimisation tool for complex economic situations. Simulation metamodels, built with EA, reduce practical run-time disadvantages of simulations and allow for sensitivity analysis and system optimisation. Finally, techniques from the field of EA are helpful in structure optimisation problems and can be used for the experimental analysis of emerging economic patterns.

Practical experience is still rather sparse, and some technical difficulties have to be overcome, particularly w.r.t. classifier systems. However, we are only beginning to explore these promising new techniques. In our own, though only moderately complex, example all three main EA-variants produce results superior to two conventional techniques in optimising a stochastic inventory simulation. Moreover, the robustness of EA in stochastic optimisation when only few simulations are performed for each trial solution counters the otherwise high CPU requirements of population-based EA as compared to point-based conventional optimisers. Enhanced hardware, especially parallel systems, will reduce run-time requirements and make combinations of EA and simulation even more attractive. This effect will be increased by the development of better EA and simulation tools.

For the future, we envisage a remarkable potential for research and practical applications (not only) in complex economic systems.

Acknowledgements

We thank Matthias Krause for technical assistance and the anonymous referees for their comments that helped to improve the presentation of this paper. The second author gratefully acknowledges financial support by the Stiftung Volkswagenwerk.

References

[1] P. Ablay, Konstruktion kontrollierter Evolutionsstrategien zur Lösung schwieriger Optimierungsprobleme der Wirtschaft, in: Evolution und Evolutionsstrategien in Biologie, Technik und Gesellschaft, 2nd ed., ed. J. Albertz (Freie Akademie, Wiesbaden, 1990) pp. 73-106.

[2] W.B. Arthur, The economy and complexity, in: Lectures in the Sciences of Complexity, SFI Studies in the Sciences of Complexity, ed. E. Stein (Addison-Wesley, Redwood City, 1989) pp. 713-740.

[3] W.B. Arthur, On learning and adaptation in the economy, Research Report 92-07-038, Santa Fe Institute (SFI), Santa Fe (1992).

[4] R.M. Axelrod, The evolution of strategies in the iterated Prisoner's Dilemma, in: Genetic Algorithms and Simulated Annealing, ed. L.D. Davis (Morgan Kaufmann, Los Altos, 1987) pp. 32-41.

[5] T. Bäck, The interaction of mutation rate, selection, and self-adaptation within a genetic algorithm, in: Parallel Problem Solving from Nature 2, 2nd Workshop PPSN II, eds. R. Männer and B. Manderick (North-Holland, 1992) pp. 85-94.

[6] T. Bäck and H.-P. Schwefel, An overview of evolutionary algorithms for parameter optimization, Evol. Comp. 1 (1993) 1-23.

[7] J. Biethahn, Optimierung und Simulation (Gabler, Wiesbaden, 1978).

[8] J. Biethahn and V. Nissen, Combining simulation and evolutionary algorithms for applications in complex economic systems, in: Modelling and Simulation ESM 1993, Proc. 1993 Simulation Multiconference Lyon, ed. A. Pave (SCS, Ghent, 1993) pp. 351-357.

[9] E. Bruderer, How organizational learning guides environmental selection, Working Paper, University of Michigan, School of Business Administration, Ann Arbor (1992).

[10] E. Bruderer, Strategic learning, Working Paper, University of Michigan, School of Business Administration, Ann Arbor (1992).

[11] K. Carley, J. Kjaer-Hansen, A. Newell and M. Prietula, Plural-SOAR: a prolegomenon to artificial agents and organizational behavior, in: Artificial Intelligence in Organization and Management Theory, eds. M. Masuch and M. Warglien (North-Holland, 1992) pp. 87-118.

[12] G. Dosi, L. Marengo, A. Bassanini and M. Valente, Microbehaviours and dynamical systems: economic routines as emergent properties of adaptive learning, in: Path-Dependent Economics, eds. C. Antonelli and P.A. David (Kluwer, 1993).

[13] D.B. Fogel, Evolving artificial intelligence, Doctoral Dissertation, University of California, San Diego (1992).

[14] S. Forrest, Emergent computation: self-organizing, collective, and cooperative phenomena in natural and artificial computing networks, Physica D 42 (1990) 1-11.

[15] F. Glover, Tabu search - Part I, ORSA J. Comp. 1 (1989) 190-206.

[16] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, Reading, MA, 1989).

[17] H.A. Güvenir and J. Sirin, A genetic algorithm for classification by feature partitioning, in: Proc. 5th Int. Conf. on Genetic Algorithms, Urbana-Champaign, IL, ed. S. Forrest (Morgan Kaufmann, San Mateo, 1993) pp. 543-548.

[18] M.T. Hannan and J. Freeman, Organizational Ecology (Harvard University Press, Cambridge, MA, 1989).

[19] F. Hoffmeister and T. Bäck, Genetic algorithms and evolution strategies: similarities and differences, in: Parallel Problem Solving from Nature, 1st Workshop PPSN I, eds. H.-P. Schwefel and R. Männer (Springer, Berlin, 1991) pp. 455-469.

[20] J.H. Holland, K.J. Holyoak, R.E. Nisbett and P.R. Thagard, Induction: Processes of Inference, Learning, and Discovery (MIT Press, Cambridge, MA, 1986).

[21] J.H. Holland, Adaptation in Natural and Artificial Systems, 2nd ed. (MIT Press, Cambridge, MA, 1992).

[22] S. Kirkpatrick, C. Gelatt and M. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671-680.

[23] J.R. Koza, Genetic Programming (MIT Press, Cambridge, MA, 1993).

[24] F. Kursawe, A variant of evolution strategies for vector optimization, in: Parallel Problem Solving from Nature, 1st Workshop PPSN I, eds. H.-P. Schwefel and R. Männer (Springer, Berlin, 1991) pp. 193-197.

[25] F. Malik and G. Probst, Evolutionäres Management, Die Unternehmung 35 (1981) 121-140.

[26] L. Marengo, Structure, competence and learning in an adaptive model of the firm, Papers on Economics & Evolution, #9203, European Study Group for Evolutionary Economics (1992).

[27] V. Nissen, Simulation und evolutionäre Algorithmen, Technical Report, Institut für Wirtschaftsinformatik, Abteilung I, Universität Göttingen, Göttingen, Germany (1992).

[28] V. Nissen, Evolutionary algorithms in management science. An overview and list of references, Papers on Economics & Evolution, #9303, European Study Group for Evolutionary Economics (1993).

[29] V. Nissen, Solving the quadratic assignment problem with clues from nature, IEEE Trans. Neural Networks, special issue on Evolutionary Programming (1994), to appear.

[30] B. Noche, Simulation in Produktion und Materialfluß (Verlag TÜV Rheinland, Köln, 1990).

[31] H. Pierreval, Rule-based simulation metamodels, Eur. J. Oper. Res. 61 (1992) 6-17.

[32] J.D. Schaffer, Some experiments in machine learning using vector-evaluated genetic algorithms, Doctoral Dissertation, Vanderbilt University, Nashville (1984).

[33] E. Schöneburg, Zeitreihenanalyse und -Prognose mit Evolutionsalgorithmen, Research Paper, Expert Informatik GmbH, Berlin (1993).

[34] H.-P. Schwefel, Numerical Optimization of Computer Models (Wiley, Chichester, 1981).

[35] R. Sikora and M. Shaw, The evolutionary model of group problem solving: a computational study of distributed rule learning, Inf. Syst. Res., to appear.

[36] T. Starkweather, S. McDaniel, K. Mathias and D. Whitley, A comparison of genetic sequencing operators, in: Proc. 4th Int. Conf. on Genetic Algorithms, San Diego, CA, eds. R.K. Belew and L.B. Booker (Morgan Kaufmann, San Mateo, 1991) pp. 69-76.

[37] R.H. Storer, S.D. Wu and R. Vaccari, Local search in problem and heuristic space for job shop scheduling genetic algorithms, in: New Directions for Operations Research in Manufacturing, eds. G. Fandel, T. Gulledge and A. Jones (Springer, Berlin, 1992) pp. 149-160.

[38] G. Syswerda, Uniform crossover in genetic algorithms, in: Proc. 3rd Conf. on Genetic Algorithms, George Mason University, ed. J.D. Schaffer (Morgan Kaufmann, San Mateo, 1989) pp. 2-9.

[39] D. Whitley and J.D. Schaffer (eds.), COGANN-92, Int. Workshop on Combinations of Genetic Algorithms and Neural Networks, Baltimore (IEEE Computer Society Press, Washington, 1992).

[40] U. Witt, Evolutionary economics - an interpretative survey, Papers on Economics & Evolution, #9104, European Study Group for Evolutionary Economics (1991).