
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 10, NO. 3, JUNE 2006

Evolutionary Algorithms + Domain Knowledge = Real-World Evolutionary Computation

Piero P. Bonissone, Fellow, IEEE, Raj Subbu, Senior Member, IEEE, Neil Eklund, Member, IEEE, and Thomas R. Kiehl

Abstract—We discuss implicit and explicit knowledge representation mechanisms for evolutionary algorithms (EAs). We also describe offline and online metaheuristics as examples of explicit methods to leverage this knowledge. We illustrate the benefits of this approach with four real-world applications. The first application is automated insurance underwriting—a discrete classification problem, which requires a careful tradeoff between the percentage of insurance applications handled by the classifier and its classification accuracy. The second application is flexible design and manufacturing—a combinatorial assignment problem, where we optimize design and manufacturing assignments with respect to time and cost of design and manufacturing for a given product. Both problems use metaheuristics as a way to encode domain knowledge. In the first application, the EA is used at the metalevel, while in the second application, the EA is the object-level problem solver. In both cases, the EAs use a single-valued fitness function that represents the required tradeoffs. The third application is a lamp spectrum optimization that is formulated as a multiobjective optimization problem. Using domain-customized mutation operators, we obtain a well-sampled Pareto front showing all the nondominated solutions. The fourth application describes a scheduling problem for the maintenance tasks of a constellation of 25 low earth orbit satellites. The domain knowledge in this application is embedded in the design of a structured chromosome, a collection of time-value transformations to reflect static constraints, and a time-dependent penalty function to prevent schedule collisions.

Index Terms—Automated insurance underwriting, combinatorial optimization, design and manufacturing planning, evolutionary algorithms (EAs), knowledge representation, lamp spectrum optimization, metaheuristics, multiobjective optimization, satellite scheduling, soft computing.

Manuscript received April 30, 2004; revised March 3, 2005. P. P. Bonissone, R. Subbu, and N. Eklund are with General Electric Global Research, Niskayuna, NY 12309 USA (e-mail: [email protected]; [email protected]; [email protected]). T. R. Kiehl is with Rensselaer Polytechnic Institute, Troy, NY 12180 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TEVC.2005.857695

I. INTRODUCTION

Most decision and control problems may be cast in the semantics of an optimization or search problem. It is no surprise, therefore, that research in optimization and search algorithms has a rich and diverse history and is a principal focus across several scientific and engineering disciplines. When the structure of a problem is such that its constraints and objectives are all linear, or one or more of them are nonlinear but convex, a host of mathematical-programming-based algorithms may be readily applied to realize exact solutions. Mathematical-programming-based algorithms are also applicable in some circumstances even if the search space is discrete, provided the problem constraints have certain desirable geometrical properties such as unimodularity [1]. However, a wide variety of real-world decision and control problems have significant nonlinearities, complex coupling, and multiple dimensions, and are oftentimes discrete as well, with complex constraints.

Evolutionary algorithms (EAs) utilize principles of natural selection and are robust adaptive search schemes suitable for searching nonlinear, discontinuous, and high-dimensional spaces. This class of algorithms is being increasingly applied to obtain optimal or near-optimal solutions to many complex real-world optimization problems. It has therefore been tempting for the EA research community at large to categorize these algorithms as universal problem solvers. However, it is also being increasingly realized that EAs, in the absence of additional problem-specific knowledge incorporation, do not perform as well as mathematical-programming-based algorithms on certain classes of problems. Mathematical-programming-based approaches in general attempt to leverage unique geometrical characteristics, such as linearity and convexity, of the problem classes to which they are applied. It is also our collective experience that, even for problems that do not admit mathematical-programming-based algorithms and for which EAs can be readily applied, the quality of the optimization is in general remarkably better when knowledge about the domain or the geometrical properties of the search space is incorporated in the evolutionary problem solving. This knowledge may be incorporated either implicitly, in the design of data structures, encoding, and constraint representations, or explicitly, via the initialization of the first population and in the control of EA parameters. Insight from the no free lunch theorems (NFLTs) for optimization [2], [3] has placed a critical focus on the effect of knowledge incorporation on EAs' performance, although it is argued that the NFLT applies to problems closed under permutation [4].

The NFLT states that "taken over the set of all possible combinatorial optimization problems, the performance of any two search algorithms is the same" and that "any algorithm performs only as well as the knowledge concerning the cost function put into the algorithm" [3].

We could formally define the NFLT as follows: for any two "black box" optimization algorithms $a_1$ and $a_2$, the performance averaged over all combinatorial optimization problems is constant for any pair of algorithms

$$\sum_{f} P(d_m \mid f, m, a_1) = \sum_{f} P(d_m \mid f, m, a_2) \qquad (1)$$

where $a_1$ and $a_2$ are the two optimization algorithms, $m$ is the number of algorithm iterations, $d_m$ is the time-ordered set of distinct points visited, and $f$ is the combinatorial optimization problem.

This theorem states that, for any optimization algorithm, an elevated performance over one class of problems is offset by degraded performance over another class. Thus, the performance average of a given optimization algorithm over the entire class of potential problems is constant. If an algorithm performs better than random search for some problems, it will perform worse than random search for other problems, maintaining a constant performance average. These results suggest that any one set of potentially optimal algorithms can be considered so only for a limited subset of problems, and their use cannot result in consistently superior performance over the entire space of optimization problems, as was previously generally expected. The only way a strategy can outperform another is if "it is specialized to the structure of the specific problem under consideration" [5].

The NFLT's conclusions highlight the need to embed domain knowledge into an EA to achieve good performance. Equally critical is to define a methodology for representing and reasoning with such domain knowledge. In the following sections, we explore various approaches to the representation of domain knowledge in evolutionary algorithms, ranging from customized data structures to the use of metaheuristics to provide external EA parameter control.

A. Objectives

When addressing real-world problems, we are faced with systems that are usually ill defined, difficult to model, and possess large solution spaces. In these cases, precise models are impractical, too expensive, or nonexistent. Therefore, we need to generate approximate solutions by leveraging the two types of resources that are generally available: problem domain knowledge of the process (or product) and field data that characterize the system's behavior. The relevant available domain knowledge is typically a combination of first principles and empirical knowledge. This knowledge is often incomplete and sometimes erroneous. The available data are typically a collection of input–output measurements, representing instances of the system's behavior, and are generally incomplete and noisy.

In this paper, we illustrate different approaches for incorporating domain knowledge at the algorithm's representation (or object) level and at its control (or meta) level. We also highlight the versatility of hybrid soft computing (SC) techniques [6] for integrating knowledge with data. The applications described in this paper are far from trivial. In some cases, we used competitive alternatives to baseline the performance of our hybrid SC systems, as illustrated in [7]. We were able to develop and, in some cases, deploy robust solutions for these applications by leveraging hybrid soft computing techniques. SC is a flexible framework offering a broad spectrum of design choices to perform the integration of imprecise knowledge and data to improve the performance of our hybrid models [8], [9]. Fuzzy systems exploit the tolerance for imprecision and offer a flexible representation for imprecise knowledge. Evolutionary algorithms offer a great amount of robustness in their search when efficient encoding schemes and good strategies to maintain population diversity are used. Integration of fuzzy systems and evolutionary algorithms allows the represented knowledge to help initialize and guide the search for a solution in an efficient manner [10].

We also make a few remarks regarding the customization aspects of the algorithms. The NFLT formally supports leveraging domain knowledge within the search algorithm. However, ad hoc approaches for representing and integrating such knowledge could lead to algorithms that are virtually impossible to maintain over time. It is essential to have a process in which the knowledge is described explicitly rather than implicitly via procedural changes. The use of hybrid systems based on an explicit knowledge base (KB) and the automation of their KB's tuning allow us to deploy high-performance systems that can also be supported and updated during their lifecycle. This concept is illustrated in the first two applications described in this paper, where we use metaheuristics, and is further described in [11] and [12].

B. Structure of This Paper

In the next section, we address the issue of knowledge representation in evolutionary algorithms and consider implicit representation approaches (specialized data structures, encoding, and constraints) and explicit representation approaches (population initialization, customized variational operators, offline and online metaheuristics). We describe the role of hybrid fuzzy systems in incorporating domain knowledge within evolutionary algorithms and emphasize the natural framework that they provide for these hybridizations. In the following sections, we illustrate four real-world applications in which hybrid soft computing systems were successfully developed, tested, and in some cases deployed. All applications are described using a common six-part structure: a) problem description; b) related work; c) solution architecture; d) domain knowledge representation; e) results; and f) remarks. Our intention is to present them as self-contained units, while highlighting their common problem-solution aspects and the benefits of incorporating domain knowledge into the algorithms.

The first application is the automation of risk classification in underwriting life insurance applications. This application showcases the use of an evolutionary algorithm as an offline metaheuristic used to tune the parameters of a fuzzy classifier. The second application describes the optimization of design and manufacturing planning and illustrates the role of a fuzzy system as an online metaheuristic to control the real-time parameters of a genetic algorithm. The third application addresses the optimization of lamp spectrum by explicitly incorporating domain knowledge into the variational operators used by the evolutionary algorithms. The last application is devoted to the scheduling of the maintenance tasks for a constellation of 25 satellites in low earth orbit (LEO). This application illustrates the use of implicit knowledge representation approaches, in which we used customized data structures to reflect static constraints (verifiable at compile time) and penalty functions to represent dynamic constraints. In the Conclusions section, we note again the importance of representing and integrating domain knowledge with search algorithms and the natural ease of defining these hybrid systems within the framework of soft computing.

II. A STRUCTURED APPROACH TO REPRESENTING DOMAIN KNOWLEDGE IN EAS

There are two principal methods for embedding such knowledge: implicitly, in the design of data structures, encoding, and constraint representations, and explicitly, via initialization of the population and the control of EA parameters. We will briefly describe some of these available options.

A. Implicit Knowledge Representation

1) Data Structure: The representation of the solution space is one of the most critical design decisions for any EA, since it will likely have a strong impact on the algorithm's overall performance [13]. EA designers should be parsimonious in defining the solution representation, to limit the size of the search space and to avoid potentially generating infeasible solutions. For example, in a traveling salesman problem (TSP), the data structure should be designed in such a way as to detect (and perhaps correct) the presence of loops in a circuit of cities. Other constraints, such as the feasible range of values for each position in the chromosome, should be included in the data structure to prevent the generation of infeasible solutions. Another example of embedding knowledge in the data structure is the case when the solution has a tree structure. In such a case, the chromosome should not be a general connectivity matrix, which could represent any other type of graph besides a tree. Rather, a data structure that can only generate trees, such as is possible via Prim's algorithm [14], would be a more efficient representation.
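As an illustration of this point (not from the original paper), the following Python sketch encodes a TSP tour as a permutation of city indices; with this data structure, duplicate visits and sub-loops are unrepresentable, so the swap mutation can never produce an infeasible tour. All names and parameters are illustrative.

```python
import random

class TourChromosome:
    """Illustrative TSP chromosome: a tour is stored as a permutation of city
    indices, so duplicate visits (and therefore sub-loops) are unrepresentable."""

    def __init__(self, n_cities, order=None):
        self.n_cities = n_cities
        # Random feasible tour if no explicit order is supplied.
        self.order = list(order) if order is not None else random.sample(range(n_cities), n_cities)
        assert sorted(self.order) == list(range(n_cities)), "not a valid permutation"

    def mutate(self):
        # Swap mutation preserves the permutation property, so every
        # offspring remains a feasible tour by construction.
        i, j = random.sample(range(self.n_cities), 2)
        self.order[i], self.order[j] = self.order[j], self.order[i]

    def length(self, dist):
        # dist[a][b] is the distance between cities a and b.
        return sum(dist[self.order[k]][self.order[(k + 1) % self.n_cities]]
                   for k in range(self.n_cities))
```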

2) Encoding: EA designers have several options when it comes to solution encoding: binary, real, integer, finite state automata, and tree structures are just a few of the possible choices. For a given problem, a certain encoding (or encodings) may result in a relatively compact search space, which is advantageous. For example, if the solution space for a problem is known to be discrete (e.g., month of the year, pointer in a list), then it would be wasteful to use a real-number representation for the problem unless there are other overriding reasons. Similarly, real encoding is the most natural way to represent real-valued parameters, thus avoiding the need to predefine their required resolution, as is required when using binary encoding. Structural encoding, such as grammatical encoding [15], is one of the most efficient ways to represent network topologies.
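The resolution issue mentioned above can be made concrete with a small sketch (illustrative, not from the paper): a binary-encoded gene fixes the attainable precision in advance, whereas a real-coded gene stores the value directly.

```python
def decode_binary(bits, lo, hi):
    """Decode a fixed-length bit string into a real value in [lo, hi].
    The resolution is fixed in advance by the number of bits."""
    value = int("".join(map(str, bits)), 2)
    return lo + value * (hi - lo) / (2 ** len(bits) - 1)

# With 8 bits, a parameter in [0, 10] can only take 256 distinct values
# (step of about 0.039); a real-valued gene simply stores the float itself
# and needs no predefined resolution.
coarse = decode_binary([1, 0, 1, 1, 0, 0, 1, 0], 0.0, 10.0)
real_gene = 6.98  # real encoding: the phenotype value itself
```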

3) Constraints: Constraint representation falls into two categories: static and dynamic. Static constraints are those constraints that remain the same throughout the evolution process. One method to tackle these constraints is by incorporating them into the data structures used [13]. Alternatively, a penalty function approach may be used for constraint satisfaction. Dynamic constraints are more difficult to handle. Usually, generation-dependent penalty functions may be used to capture the tradeoff between pushing the search to the limits and violating constraints. An early use of dynamic penalty functions and their justification can be found in [16]. In their approach, the authors promote the use of graded penalty functions, stating, "Care must be taken to write an evaluator which balances the preservation of information with the pressure for feasibility. Well-chosen, graded penalties which preserve the information of all strings should be advantageous to harsh penalty functions." Furthermore, they suggest the benefit of generation-dependent constraints, which begin in a relaxed state and then gradually tighten over time. Additionally, Siedlecki and Sklansky [17] demonstrate that a genetic algorithm with a variable penalty coefficient can outperform one with a fixed penalty factor.
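A minimal sketch of a generation-dependent penalty in the spirit of the ideas above (the linear schedule and the coefficient `c` are assumptions, not the cited authors' settings): the penalty weight starts relaxed and tightens as the run progresses.

```python
def penalized_fitness(raw_fitness, violation, generation, max_generations, c=10.0):
    """Generation-dependent penalty: the penalty weight tightens as the run
    progresses (relaxed early, harsh late). `violation` is a nonnegative
    measure of how badly the candidate breaks the constraints."""
    weight = c * (generation / max_generations)  # grows from 0 to c
    return raw_fitness - weight * violation
```

A variable penalty coefficient in the spirit of [17] could also be controlled or evolved instead of being scheduled.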

B. Explicit Knowledge Representation

1) Seeding the Initial Population: The initial population is an excellent place to embed knowledge from the problem domain. The idea of smart initialization of the initial population can be traced back at least to the work of Kubalik and Lazanski [18]. In their paper, the authors claim that "'lucky' initialization can increase the likelihood of successful composing of the global solution's chromosome through the iteration process of information exchange." In their approach, they initialize the population by resorting to prior information about the desired solution itself or about the structure of the solution space and its more promising parts. To achieve this goal, they use a preprocessing phase, in which they perform several short runs with a small population and take some individuals as members of the initial population. The initial population can then better sample those more useful chromosomes, which contain gene clusters with the important pieces of information. In addition, when the initial population is generated, boundary conditions should be taken into account, and no solutions outside of these boundaries should be added to the initial population. Seeding the initial population with a large set of samples from the boundary of the search space is a strategy that has been successfully applied by the authors to real-world decision problems [19]. This strategy is especially helpful when constraint satisfaction, solution repair, or penalty imposition methodologies are highly complex and expensive. Samples along the boundary are then used to generate new interior samples via variational (mutation and crossover) operators. This idea is further developed in the context of multicriteria optimization and decision problems [19].

It is also the authors' experience that, oftentimes in engineering design, an experienced designer may be able to achieve a few solutions that are good but not necessarily optimal or near optimal. Though an EA may be directly applied to a given design problem and be expected to perform well, incorporation of the expert knowledge from the design engineer via seeding of the initial population has the ability to favorably bias the search and achieve optimal or near-optimal solutions faster. Thus, any prior knowledge about ranges of values where the optimal solution might be found should be seeded in the initial population to test the conjecture and promote convergence if appropriate.
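The following sketch (illustrative names and structure, not the authors' code) seeds part of an initial real-coded population from expert-supplied designs and from the bounds of each decision variable, filling the remainder with random interior points.

```python
import random

def seed_population(pop_size, bounds, expert_solutions=()):
    """Seed an initial real-coded population from (a) known expert designs,
    (b) boundary samples of the box-constrained search space, and (c) random
    interior points.  `bounds` is a list of (low, high) pairs per gene."""
    population = [list(sol) for sol in expert_solutions][:pop_size]
    while len(population) < pop_size // 2:
        # Boundary sample: each gene sits at one of its bounds.
        population.append([random.choice(pair) for pair in bounds])
    while len(population) < pop_size:
        # Random interior sample within the feasible box.
        population.append([random.uniform(lo, hi) for lo, hi in bounds])
    return population
```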

2) Interweaving Local Exploitation Within Global Search: Another approach to leverage knowledge is to intertwine local exploitation (search) within the global exploration performed by the EAs. In this case, the knowledge of the landscape is probed by local hill-climbing searches that improve each individual in the population before sharing information through crossover operators and competing via the selection process. Usually these approaches are referred to as hybrid genetic algorithms or genetic local search. Memetic algorithms (MAs) [20], [21] are a prime example of combining local search heuristics with crossover operators. In MAs, each individual attempts to improve its performance up to a predetermined level by exploring possible correlations in the landscape. Then they recombine to create new individuals and use crossover operators to share this new information about local optima. Finally, the new improved offspring compete among themselves through a selection process. A similar philosophy can be found in the approaches suggested by Renders and Bersini [22], named genetic hill climbing (GHC) and the simplex GA. GHC also interweaves evolution with local learning, with its hill-climbing component acting as an accelerator for the fitness of the GA population. In this approach, individuals extend their life and improve their fitness using a small number of local search steps before undergoing crossover, mutation, and selection. The simplex GA uses search mechanisms derived from the simplex method to design new crossover operators. In all these cases, the computational complexity increases, but the overall quality of the solution improves and is attained in a small number of generations [22].
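A minimal sketch of one memetic-style generation, assuming user-supplied `fitness`, `local_search`, and `crossover` functions (all illustrative): each individual is locally improved before recombination, and the offspring then compete with the improved parents.

```python
import random

def memetic_step(population, fitness, local_search, crossover, elite_frac=0.5):
    """One illustrative memetic-algorithm generation: each individual is first
    improved by a local search (exploitation), then parents recombine, and the
    offspring compete with the improved parents (selection)."""
    improved = [local_search(ind) for ind in population]          # local exploitation
    improved.sort(key=fitness, reverse=True)
    offspring = []
    while len(offspring) < len(population):
        p1, p2 = random.sample(improved[:max(2, int(elite_frac * len(improved)))], 2)
        offspring.append(crossover(p1, p2))                       # share local-optima info
    pool = improved + offspring
    pool.sort(key=fitness, reverse=True)                          # competition
    return pool[:len(population)]
```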

3) Variational and Selection Operators: Another convenient method to embed knowledge in EAs is via customization of the variational and selection operators. Mutation operators should be designed to take into account static constraints embedded in the data structure and, if needed, should be augmented by a repair mechanism to fix infeasible solutions created by the mutation. For instance, when a Gaussian mutation generates an infeasible value for an allele, it could reapply itself to the original allele value, using a systematically decreasing standard deviation, and iterate until the new value no longer violates the allele value constraint. Similarly, crossover operators should take into account static constraints embedded in the data structure and repair infeasible solutions created by their application. For both variational operators, special customized versions could be implemented, such as those used for TSP-type problems originally suggested by Michalewicz [13].
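The retry-with-shrinking-standard-deviation mutation described above can be sketched as follows (parameter names and defaults are illustrative assumptions):

```python
import random

def bounded_gaussian_mutation(value, lo, hi, sigma=1.0, shrink=0.5, max_tries=10):
    """Gaussian mutation that respects an allele's feasible range [lo, hi]:
    if the perturbed value is infeasible, retry from the original value with a
    systematically smaller standard deviation."""
    for attempt in range(max_tries):
        candidate = value + random.gauss(0.0, sigma * (shrink ** attempt))
        if lo <= candidate <= hi:
            return candidate
    return value  # fall back to the (feasible) original allele
```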

Parent selection pressure could be modified with respect to the generations of the search to increase pressure over time, varying from linear to nonlinear, to manage the transition from exploration to exploitation. This could be done with an external fuzzy controller, in a similar fashion to the control of population size and probability of mutation [23].

Fig. 1. EA parameter setting and control (adapted from [24]).

4) Selection, Tuning, and Control of EA Parameters: The selection, tuning, and control of EA parameters could be equally important design decisions. The population size, number of generations, probability of crossover, probability of mutation, and generational gap are all parameters that need to be set for most EAs. There has been much research in the area of dynamically controlling parameters that change during runtime using fuzzy controllers. Other approaches to handling dynamic constraints include scheduling the modification of parameters, or offline tuning, possibly using another EA, to find optimal parameter values for a particular algorithm. Basically, there are three options, as illustrated in Fig. 1. This was first noted in [24] and further elaborated in [25] and [26].

a) Prior design from the literature: EA designers may choose to select the values for parameters based on those values published in the literature. Early work due to De Jong [27] in the area of genetic algorithms popularized the use of a static parameter set, and these suggested parameters were generally accepted for several years by researchers and practitioners alike. However, the NFLT suggests that any one set of potentially optimal algorithm parameters cannot be expected to result in consistently superior performance over the entire space of optimization problems, as was previously generally accepted.


b) Offline tuning: Another option is to tune these parameters offline (usually with a meta-EA). This method embeds knowledge in the design of the meta-EA so that the tuned values provide optimal performance for the metrics of interest. This method was first proposed by Grefenstette [28], who used a metalevel EA to search for the optimal parameter values for the same suite of five object-level landscapes used in De Jong's early work. In this case, a metalevel EA replaced the manual analysis performed by De Jong. Each individual in the meta-EA represented an instance of six parameter values for the object-level EA. Each trial at the metalevel EA caused the object-level EA to run with the instantiated parameters for its maximum number of generations. The results of the runs, evaluated in terms of performance measures, provided the fitness value for the individual in the metalevel EA. Note that, given the implications of the NFLT, the optimized parameters obtained by the meta-EA should be recomputed for each type of problem. However, for most practical applications, such a priori tuning may be very expensive or time consuming, or both.
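A hedged sketch of the offline meta-EA idea, assuming a caller-supplied `run_object_ea(params)` that runs the object-level EA with the given parameter vector and returns a performance score; the population sizes, mutation scale, and simple truncation selection are illustrative choices, not Grefenstette's settings.

```python
import random

def meta_ea_tune(run_object_ea, param_bounds, meta_pop=20, meta_gens=30):
    """Illustrative offline meta-EA: each meta-individual is a parameter vector
    for the object-level EA; its fitness is the performance the object-level EA
    achieves when run with those parameters (averaging runs would reduce noise)."""
    def random_params():
        return [random.uniform(lo, hi) for lo, hi in param_bounds]

    population = [random_params() for _ in range(meta_pop)]
    for _ in range(meta_gens):
        scored = sorted(population, key=run_object_ea, reverse=True)
        survivors = scored[: meta_pop // 2]                 # truncation selection
        children = []
        for parent in survivors:
            child = [g + random.gauss(0.0, 0.1 * (hi - lo))
                     for g, (lo, hi) in zip(parent, param_bounds)]
            child = [min(max(g, lo), hi) for g, (lo, hi) in zip(child, param_bounds)]
            children.append(child)
        population = survivors + children
    return max(population, key=run_object_ea)
```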

c) Online control: The third option is to tune the parameters online via some control mechanism. As depicted in Fig. 1, these corrections can be generated by an open-loop, deterministic procedure (e.g., a scheduler), by a closed-loop, adaptive system (e.g., a controller), or by a self-adaptation mechanism. The first option is to use a deterministic schedule to allocate resource changes over time. Unfortunately, obtaining an optimal schedule might be as difficult as the original optimization problem. The second option is to use a controller that explicitly represents the best heuristics regarding such resource allocation. Recent research into using fuzzy logic controllers to provide an online closed-loop control system for EA parameters shows promising results [23], [29]. These controllers dynamically manipulate parameters such as population size and mutation rate based on metrics extracted during runtime. In this case, the knowledge of the problem is embedded in the heuristics of the controller. The third option is to tune the parameters by adding them to the genomes so that they evolve with the population, as in evolutionary strategies with self-adaptation [30], which use this technique to embed knowledge in the representation of individuals.
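As a much-simplified, crisp stand-in for the fuzzy controllers cited above (not their knowledge base), the sketch below adapts the mutation rate from a runtime diversity metric; the thresholds and gains are illustrative.

```python
def adapt_mutation_rate(rate, diversity, low=0.05, high=0.25,
                        min_rate=0.001, max_rate=0.2):
    """Simplified online control of the mutation rate from a runtime metric:
    if population diversity collapses, raise the rate to restore exploration;
    if diversity is ample, lower it to favor exploitation.  A fuzzy controller
    would interpolate smoothly between such rules instead of switching."""
    if diversity < low:
        rate = min(rate * 1.5, max_rate)
    elif diversity > high:
        rate = max(rate * 0.75, min_rate)
    return rate
```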

C. Metaheuristics: A Generalization

The offline tuning and online control of EA parameters described in the previous sections are specific instances of metaheuristics applied to an EA-based problem solver. In general, metaheuristics are heuristic procedures defined at the metalevel to control, guide, tune, allocate computational resources for, or reason about object-level problem solvers, in order to improve their quality, performance, or efficiency. The use of metareasoning is a common approach in artificial intelligence (AI), e.g., in resource-bounded agent applications [31], planning [32], machine learning and case-based reasoning [33], [34], real-time fuzzy reasoning systems [35], etc. Therefore, it is a natural step to extend it from its AI origins to the field of soft computing, and in particular to evolutionary algorithms.

Offline metaheuristics are used when we are not concerned with runtime modifications or control of the object-level problem solver. In these cases, the goal is usually to define the best structural and/or parametric configuration for a model of the problem solver that works on the object-level task. Most parameter tuning or optimization efforts follow this architecture. Fig. 2 illustrates a typical approach for parametric tuning. A suite of representative problems from a problem class (or a sample of instances from the same problem) is used offline to test and optimize the solver. At runtime, the problem solver is instantiated to perform its tasks with the resulting tuned parameters. In general, offline metaheuristics can be used if there is no need for the object-level problem solver to adapt to new runtime situations. In this case, the offline heuristics will define the best structure and parameters of the problem solver and generate a configuration file containing such information. An instance of the problem solver generated in accordance with this configuration information will perform at runtime without any further change. Recent examples of offline metaheuristics are the tuning of fuzzy controllers by EAs [36], the EA-based generation of neural networks [37] and Bayesian belief networks [38], and the EA-driven feature space selection and parametric tuning of an instance-based model [12]. In Section V, we will describe the implementation of offline metaheuristics using EAs to develop a fuzzy rule-based classifier.

Fig. 2. Schematic of offline metaheuristics.

Alternatively, online metaheuristics can be used when we want to generate runtime corrections for the behavior of the object-level problem solver. In these cases, we might want to steer the solver toward the most promising reasoning paths, identify opportunistic or critical situations, invoke the appropriate specialized routines or knowledge bases, reallocate computational resources to improve the solver's performance, manage the transition between two different modalities, etc. Examples of online metaheuristics are supervisory fuzzy controllers and metacontrollers for EAs. Supervisory control decomposes complex problems into smaller and simpler ones, which are then assigned to low-level controllers. Classical supervisory controllers recombine their outputs by selecting the most appropriate controller to solve the larger problem. Fuzzy supervisory controllers perform a soft recombination by taking into account the partial degree of applicability of each low-level controller and mixing their outputs accordingly. This soft switching among operational modes allows the control engineer to explicitly specify tradeoff policies of efficiency against performance, within safety constraints. Fuzzy supervisory controllers execute these policies in a smooth fashion, interpolating, instead of switching, among operational modes [39]. Metacontrollers for EAs play a similar role. As shown in Fig. 3, they monitor the performance of the object-level EAs and, by leveraging the same interpolation capabilities of the supervisory fuzzy controllers, control key resources and parameters to provide smooth transitions from the exploration to the exploitation stages of the EAs. In Section VI, we will describe the implementation of online metaheuristics using a fuzzy controller to improve the runtime performance of an EA.

Fig. 3. Schematic of online metaheuristics.

Metaheuristics provide an explicit way to represent domain knowledge to guide an EA-based object-level solver. However, in light of the NFLT implications, we need to remind ourselves that: a) the use of metaheuristics will improve the performance of optimization algorithms for a subset of problems and b) the metaheuristics will not be universal, but specific to a problem or a class of problems. Not even the KB used by the online adaptation scheme can be considered of general applicability. These conclusions have been validated by several experiments in the parametric control of EAs [23], [29] and were originally reported in [26].

III. HYBRID SOFT COMPUTING: A UNIFIED FRAMEWORK FOR DEVELOPING METAHEURISTICS

The literature covering SC is expanding at a rapid pace, as evidenced by the numerous congresses, books, and journals devoted to this issue. Its original definition, provided by Zadeh [6], denotes systems that "exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution cost, and better rapport with reality." As discussed in previous papers [8], [9], we view soft computing as the synergistic association of computing methodologies that includes as its principal members fuzzy logic, neurocomputing, evolutionary computing, and probabilistic computing. We have also stressed the synergy derived from hybrid SC systems that are based on a loose or tight integration of their constituent technologies. This integration provides complementary reasoning and search methods that allow us to combine domain knowledge and empirical data to develop flexible computing tools and solve complex problems. Thus, we consider soft computing a framework in which we can derive models that capture heuristics for object-level tasks or metaheuristics for metalevel tasks.

A. Using SC to Develop Metaheuristics: Design Tradeoffs

Since leveraging domain knowledge and problem structure is critical to the performance of the object-level problem solver, we will briefly review how such knowledge can be encoded in SC systems.

In the development of hybrid soft computing systems, we usually face one critical design tradeoff: performance accuracy versus interpretability [40], [41]. This tradeoff is usually derived from legal or compliance regulations, and it constrains the underlying technologies used to implement the SC system. When we use SC techniques, the equation "model = structure + parameters" assumes a more important role, as we have a much richer repertoire to represent the structure, to tune the parameters, and to iterate this process [9]. This repertoire enables us to choose among different tradeoffs between the model's interpretability and its accuracy. For instance, one approach aimed at maintaining the model's transparency usually starts with knowledge-derived linguistic models, in which domain knowledge is translated into an initial structure and parameters. The model's accuracy is further improved by using global or local data-driven search methods to tune the structure and/or parameters. An alternative approach, aimed at building models that are more accurate, might start with data-driven search methods. Then, we can embed domain knowledge into the search operators to control or limit the search space or to maintain the model's interpretability. Postprocessing approaches can also be used to extract explicit structural information from the models. The commonalities among these models are the tight integration of knowledge and data leveraged in their construction and the loose integration of their outputs, exploited in their offline use. For brevity, we will focus on fuzzy systems, one of SC's main components, which provide the most explicit mechanism for representing metaheuristic knowledge directly in a knowledge base.

B. Fuzzy Logic Systems

Fuzzy set theory (and the isomorphic fuzzy logic) was proposed by Zadeh [42] as a way to represent and reason with the ill-defined concepts that are so prevalent in common knowledge. The original treatment of imprecision and vagueness can be traced back to the work of Post, Kleene, and Lukasiewicz, multiple-valued logicians who in the early 1930s proposed the use of three-valued logic systems (later followed by infinite-valued logic) to represent undetermined, unknown, or other possible intermediate truth values between the classical Boolean true and false values [43]. In 1937, Black suggested the use of a consistency profile to represent vague concepts [44] but did not provide a calculus to reason with the profiles. While vagueness relates to ambiguity, fuzziness addresses the lack of sharp set boundaries. It was not until Zadeh's work in 1965 that we had a complete theory of fuzzy sets and fuzzy logic. A comprehensive review of fuzzy logic and fuzzy computing can be found in [45].


Fuzzy logic (FL) provides us with a language (syntax and local semantics) into which we can translate qualitative knowledge about the problem to be solved [46]. In particular, FL allows the use of linguistic variables to model dynamic systems. These variables take fuzzy values that are characterized by a label (a sentence generated from the syntax) and a meaning (a membership function determined by a local semantic procedure). The meaning of a linguistic variable may be interpreted as an elastic constraint on its value. These constraints are propagated by fuzzy inference operations, based on the generalized modus ponens [47]. This reasoning mechanism, with its interpolation properties, gives FL a robustness with respect to variations in the system's parameters, disturbances, etc., which is one of FL's main characteristics [48].

The use of linguistic variables to define complex dynamic systems was first proposed by Zadeh in a pioneering paper [49] and extended by Mamdani and Assilian to synthesize the first fuzzy controller [50]. In this controller, the domain knowledge is represented by a set of fuzzy if-then rules that approximate a mapping from a state space $X$ to an output space $Y$. Each rule's left-hand side describes a fuzzy partition of the state space, while each rule's right-hand side defines the value of the corresponding control action. In these fuzzy systems, this value is a fuzzy set defined on the universe of control actions. In Takagi–Sugeno–Kang (TSK) fuzzy systems [51], this value is represented by a linear polynomial in the state space. In a Mamdani-type fuzzy system, the KB is completely defined by a set of scaling factors, determining the ranges of values for the state and output variables; a term set, defining the membership function of the values taken by each state and output variable; and a rule set, characterizing a syntactic mapping of symbols from $X$ to $Y$. The structure of the underlying model is the rule set, while the model parameters are the scaling factors and term sets. The inference obtained from such a system is the result of interpolating among the outputs of all relevant rules. The inference's outcome is a membership function defined on the output space, which is then aggregated (defuzzified) to produce a crisp output. With this inference mechanism, we can define a deterministic mapping between each point in the state space and its corresponding output. Therefore, we are able to equate a fuzzy KB to a response surface in the cross-product of state and output spaces, which approximates the original relationship. A TSK-type fuzzy system increases its representational power by allowing the use of a first-order polynomial, defined on the state space, as the output of each rule in the rule set. This enhanced representational power comes at the expense of local legibility [52] and results in a model that is equivalent to radial basis functions [53].
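To make the Mamdani inference/defuzzification cycle concrete, here is a minimal single-input sketch (illustrative rules and universes, not the systems discussed above): rule firing strengths clip the consequents, the clipped sets are aggregated with max, and the result is defuzzified by its centroid.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_infer(error, rules, y_grid):
    """Minimal Mamdani inference: each rule is (antecedent_mf, consequent_mf).
    Rule firing strength clips the consequent (min implication), clipped
    consequents are aggregated with max, and the aggregate membership function
    is defuzzified by its centroid over the discretized output universe."""
    def aggregate(y):
        return max(min(ant(error), cons(y)) for ant, cons in rules)
    num = sum(y * aggregate(y) for y in y_grid)
    den = sum(aggregate(y) for y in y_grid)
    return num / den if den else 0.0

# Illustrative two-rule controller: "if error is negative, output is low;
# if error is positive, output is high".
rules = [
    (lambda e: tri(e, -2, -1, 0), lambda y: tri(y, 0, 2, 4)),
    (lambda e: tri(e, 0, 1, 2),  lambda y: tri(y, 6, 8, 10)),
]
y_grid = [i / 10.0 for i in range(0, 101)]
print(mamdani_infer(0.5, rules, y_grid))
```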

C. Hybrid Fuzzy Logic Systems

Fuzzy systems (FSs) provide us with an intuitive translation of domain knowledge into an executable model. However, with few exceptions, such as the self-organizing fuzzy controller [54], the determination of the model's structure and parameters is usually a manual process. Therefore, it is a natural extension to hybridize these systems and augment their effectiveness by integrating local and/or global search methods to tune their parameters or modify their structures. We will briefly describe two typical hybrid fuzzy systems: FSs tuned by neural networks (NNs) and FSs tuned by evolutionary algorithms. In Section VI, we will describe the reciprocal role, in which an FS tunes the runtime parameters of an EA. For a more comprehensive treatment of hybrid genetic and fuzzy systems, the reader is referred to [8], [9], and [55].

1) FL Tuned by NNs: The TSK model can be translated into a structured network, such as the adaptive neural fuzzy inference system (ANFIS) [56]. In ANFIS, the rule set determines the topology of the net (model structure), while dedicated nodes in the corresponding layers of the net (model parameters) define the term sets and the polynomial coefficients. ANFIS does not modify the structure and only tries to fine-tune the model parameters. ANFIS consists of a six-layer generalized network. The first and sixth layers correspond, respectively, to the system inputs and outputs. The second layer defines the fuzzy partitions (term sets) on the input space, while the third layer performs a differentiable T-norm operation, such as the product or the soft minimum. The fourth layer normalizes the evaluation of the left-hand side of each rule, so that their degrees of applicability add up to one. The fifth layer computes the polynomial coefficients in the right-hand side of each Takagi–Sugeno rule. Jang's approach is based on a two-stroke optimization process. During the forward stroke, the term sets of the second layer are kept constant, while the coefficients of the fifth layer are computed using a least mean squares method. ANFIS' output is compared with that of the training set to produce an error. In the backward stroke, using a backpropagation-like algorithm, the error gradient drives the modification of the fuzzy partitions of the second layer. This process is continued until convergence is reached. This local search method produces efficient results, at the risk of entrapment in local minima (as is the case with other hill-climbing methods). On the other hand, the initial values used by ANFIS are derived from the domain knowledge and, as such, should not be extremely distant from the desired parameter values.

2) FS Tuned by EAs: In designing a fuzzy controller, it is not easy to create a training set that associates each instance of the state vector with its corresponding desired controller output, as required by supervised learning and local search methods. On the other hand, it is relatively easy to specify a cost function that evaluates the state trajectory of the closed-loop system. Thus, it is common to resort to global search methods, like evolutionary algorithms, to evolve the fuzzy controller. Many researchers have explored the use of EAs to tune fuzzy logic controllers. In the mid-1990s, Cordon et al. compiled a bibliography of 544 papers combining genetic algorithms with fuzzy logic, with more than 50% related to the tuning and design of fuzzy controllers by genetic algorithms [57]. Since then, the literature has expanded even further. In our brief discussion, we will only mention a few historical contributions in this area. These methods differ mostly in the order or the selection of the various fuzzy logic controller components that are tuned (term sets, rules, scaling factors).

Karr, one of the early investigators in this quest, used genetic algorithms to modify the membership functions in the term sets of the variables used by the fuzzy logic controller [58]. Karr used a binary encoding to represent the three parameters defining a membership value in each term set. The binary chromosome was the concatenation of all term sets. The fitness function was a quadratic error calculated for four randomly chosen initial conditions.
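In the spirit of the binary encodings described in this subsection (an illustration, not Karr's actual representation), a chromosome can be decoded into triangular membership functions as follows; the bit width and value range are assumptions.

```python
def decode_term_sets(bits, n_mfs, bits_per_param=8, lo=0.0, hi=1.0):
    """Decode a binary chromosome into triangular membership functions.
    Each MF is three parameters (left foot, peak, right foot), each encoded
    with a fixed number of bits; the chromosome is their concatenation."""
    params = []
    for k in range(3 * n_mfs):
        chunk = bits[k * bits_per_param:(k + 1) * bits_per_param]
        value = int("".join(map(str, chunk)), 2) / (2 ** bits_per_param - 1)
        params.append(lo + value * (hi - lo))
    # Group into (a, b, c) triples and sort each so a <= b <= c stays feasible.
    return [tuple(sorted(params[i:i + 3])) for i in range(0, len(params), 3)]
```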

Lee and Takagi tuned both rules and term sets [59]. They used a binary encoding for each three-tuple characterizing a triangular membership distribution. Each chromosome represents a TSK rule, concatenating the membership distributions in the rule antecedent with the polynomial coefficients of the consequent. Most of the above approaches used the sum of quadratic errors as a fitness function. Surmann et al. [60] extended this quadratic function by adding to it an entropy term describing the number of activated rules. Kinzel et al. [61] departed from the string representation and used a cross-product matrix to encode the rule set (as if it were in table form). They also proposed customized (point-radius) crossover operators that were similar to the two-point crossover for string encoding. They first initialized the rule base according to intuitive heuristics, then used genetic algorithms to generate a better rule base, and finally tuned the membership functions of the best rule base. This order of the tuning process is similar to that typically used by self-organizing controllers [62]. Herrera et al. [63] used a genetic algorithm to tune each rule used by a fuzzy logic controller. They utilized a real encoding for a four-parameter representation of a trapezoidal membership function in each term set. A rule is obtained by the concatenation of membership functions. A member of the genetic search is such a concatenation of the encodings of membership functions.

Bonissone et al. [64] followed the tuning order suggested by Zheng [65] for manual tuning. They began with macroscopic effects by tuning the fuzzy logic controller state and control variable scaling factors while using a standard uniformly spread term set and a homogeneous rule base. After obtaining the best scaling factors, they proceeded to tune the term sets, causing medium-size effects. Finally, if additional improvements were needed, they tuned the rule base to achieve microscopic effects. Tsang and Yeung [66] describe a method that combines genetic algorithms and neural networks for automated tuning of the parameters of a fuzzy expert system used as an advisor for job placement. Grauel et al. [67] present a methodology for optimizing fuzzy classifiers based on B-splines using an evolutionary algorithm. The tuning algorithm maximizes the performance of breast cancer detection and at the same time minimizes the size of the classifier. Other reports that explore the evolutionary tuning of fuzzy classifiers appear in [68]–[70].

All these approaches demonstrate the synergy obtained by integrating domain knowledge, represented by fuzzy systems, with robust data-driven search methods, exemplified by evolutionary algorithms.

IV. LEVERAGING DOMAIN KNOWLEDGE IN INDUSTRIAL AND COMMERCIAL EA APPLICATIONS

To illustrate the benefits of integrating domain knowledge with evolutionary algorithms, we discuss four real-world applications in the areas of classification, scheduling, configuration management, and optimization. These applications were developed, tested, and in some cases deployed. The first two applications illustrate the use of domain knowledge in metaheuristics, while the last two incorporate such knowledge in variational operators and data structures. The first area is a discrete classification problem in which the object-level problem solver is a fuzzy-knowledge-based classifier and the metalevel is an evolutionary algorithm. The second application relies on an EA as the object-level problem solver. In this case, a fuzzy controller is used at the metalevel to provide online corrections to the EA parameters, such as population size and probability of mutation. The third application is a multicriteria lamp spectrum optimization problem where the domain knowledge is incorporated via the use of efficiency-boosting, problem-specific variational operators. The last application illustrates a scheduling problem in which an EA is the object-level problem solver. In this case, the domain knowledge is embedded in the data structure and other design choices that define the EA. The first three applications have previously been individually reported in the literature, and references to those articles are made in the appropriate locations throughout this paper.

A common six-part presentation structure is followed for each of these applications—a) problem description, b) related work, c) solution architecture, d) domain knowledge representation, e) results, and f) remarks—in an attempt to present each application as a self-contained unit and, at the same time, to highlight the common problem-solution aspects across these diverse applications related to the incorporation of domain knowledge.

V. RISK CLASSIFICATION FOR UNDERWRITING INSURANCE APPLICATIONS

A. Problem Description

Insurance underwriting is a complex decision-making task that is traditionally performed by trained individuals. An underwriter must evaluate each insurance application in terms of its potential risk for generating a claim, such as mortality in the case of term life insurance. An application is compared against standards adopted by the insurance company, which are derived from actuarial principles related to mortality. Based on this comparison, the application is classified into one of the risk categories available for the type of insurance requested by the applicant. The accept/reject decision is also part of this risk classification, since risks above a certain tolerance level will typically be rejected. The estimated risk, in conjunction with other factors such as gender, age, and policy face value, will determine the appropriate price (premium) for the insurance policy. When all other factors are the same, to retain the fair value of expected return, higher risk entails a higher premium.

We represent an insurance application as an input vector $\bar{x}$ that contains a combination of discrete, continuous, and attribute variables. These variables represent the applicant's medical and demographic information that has been identified by actuarial studies to be pertinent to the estimation of the applicant's claim risk. Similarly, we represent the output space $Y$, e.g., the underwriting decision space, as an ordered list of rate classes. Due to the intrinsic difficulty of representing risk as a real number on a scale, e.g., 97% of nominal mortality, the output space is subdivided into bins (rate classes) containing similar risks. For example, 96%–104% nominal mortality could be labeled the standard rate class. Therefore, we consider the underwriting (UW) process as a discrete classifier mapping an input vector $\bar{x}$ into a discrete decision space $Y$, where $\bar{x} \in X$ and $Y = \{C_1, \ldots, C_T\}$ is the ordered set of rate classes.

This problem is not straightforward, due to several requirements.

1) The UW mapping is highly nonlinear, since small incremental changes in one of the input components can cause large changes in the corresponding rate class.

2) Most inputs require interpretation. Underwriting standards cannot explicitly cover all possible variations of an insurance application, causing ambiguity. Thus, the underwriter's subjective judgment will almost always play a role in this process. Variations in factors such as underwriter training and experience will likely cause variability in their decisions.

3) These interpretations require an intrinsic amount of flexibility to preserve a balance between risk tolerance, necessary to preserve price competitiveness, and risk avoidance, necessary to prevent overexposure to risk.

4) Legal and compliance regulations require that the models used to make the underwriting decisions be transparent and interpretable.

To address these requirements, we decided to extend some traditional AI reasoning methodologies, such as rule-based and case-based reasoning, with soft computing techniques, such as fuzzy logic and evolutionary algorithms. With this hybrid system, we were able to provide both flexibility and consistency, while maintaining interpretability and accuracy, as part of an underwriting and risk management platform.

B. Related Work

1) Automated Insurance Underwriting: Reported research in the area of automated insurance underwriting is quite sparse. However, there are a few documented approaches. Collins et al. [71] describe the application of a neural network to replicate the decision making of mortgage insurance underwriters by training the system on a database of certified cases. Insurance underwriting based on neural networks or similar modeling approaches leads to opaque decision making, wherein the learned and encoded interrelationships between decision variables that are used to arrive at decisions are not well defined and explainable. Nikolopoulos and Duvendack [72] describe the application of evolutionary learning and classification tree techniques to build a knowledge base that determines the termination criteria for an insurance policy.

2) Evolutionary Optimization of Fuzzy Systems: In Section III-C2, we have presented a broad review of the literature in this area.

C. Solution Architecture

The fuzzy logic engine (FLE) uses rule sets to encode underwriting standards. Each rule set represents a set of fuzzy constraints defining the boundaries between rate classes. These constraints were first determined from the underwriting guidelines. They were then refined using knowledge engineering sessions with expert underwriters to identify factors, such as blood pressure levels and cholesterol levels, which are critical in defining the applicant's risk and corresponding premium. The goal of

Fig. 4. Example of three fuzzy constraints for rate class Z.

the classifier is to assign an applicant to the most competitive rate class, providing that the applicant's vital data meet all of the constraints of that particular rate class to a minimum degree of satisfaction. The constraints for each rate class $r$ are represented by fuzzy sets $A_1^r, A_2^r, \ldots, A_n^r$. Each constraint $A_i^r(x_i)$ can be interpreted as the degree of preference induced by value $x_i$ for satisfying constraint $A_i^r$. After evaluating all constraints, we compute two measures for each rate class $r$. The first one is the degree of intersection of all the constraints and measures the weakest constraint satisfaction:1

$$I(r) = \bigcap_{i=1}^{n} A_i^r(x_i) = \min_{i=1,\ldots,n} A_i^r(x_i).$$

The second one is a cumulative measure of missing points (the complement of the average satisfaction of all constraints) and measures the overall tolerance allowed to each applicant, i.e.,

$$M(r) = \frac{1}{n}\sum_{i=1}^{n}\bigl(1 - A_i^r(x_i)\bigr).$$

The final classification is obtained by comparing the two measures $I(r)$ and $M(r)$ against two lower bounds defined by thresholds $\tau_1$ and $\tau_2$. The parametric definition of each fuzzy constraint and the values of $\tau_1$ and $\tau_2$ are design parameters that were initialized following knowledge engineering sessions with domain experts.

1This expression implies that each criterion has equal weight. If we want to attach a weight $w_i$ to each criterion $A_i^r$, we could use the weighted minimum operator $I_w(r) = \bigcap_i w_i A_i^r(x_i) = \min_i\bigl(\max\bigl((1 - w_i), A_i^r(x_i)\bigr)\bigr)$, where $w_i \in [0, 1]$.



Fig. 5. FLE optimization using EA.

Fig. 4 illustrates an example of three constraints (trapezoidal membership functions) associated with rate class Z, the input data corresponding to an application, and the evaluation of the first measure, indicating the weakest degree of satisfaction of all constraints.
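For concreteness, the following minimal Python sketch evaluates one rate class in the style described above, using trapezoidal constraints and the reconstructed measures $I(r)$ and $M(r)$. The constraint parameters, input fields, and comparison against the thresholds are hypothetical placeholders, not the actual FLE configuration.

# Minimal sketch (not the production FLE): evaluate one rate class against a
# set of trapezoidal fuzzy constraints; all names and numbers are illustrative.

def trapezoid(x, a, b, c, d):
    """Degree of membership of x in a trapezoidal fuzzy set with a <= b <= c <= d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def evaluate_rate_class(applicant, constraints):
    """Return (I, M): weakest constraint satisfaction and cumulative missing points."""
    degrees = [trapezoid(applicant[name], *params) for name, params in constraints.items()]
    I = min(degrees)                                   # degree of intersection of all constraints
    M = sum(1.0 - d for d in degrees) / len(degrees)   # complement of the average satisfaction
    return I, M

# Hypothetical constraint parameters for one rate class.
standard_class = {
    "cholesterol": (120.0, 140.0, 230.0, 260.0),
    "systolic_bp": (90.0, 100.0, 140.0, 155.0),
    "bmi":         (17.0, 19.0, 27.0, 30.0),
}
applicant = {"cholesterol": 245.0, "systolic_bp": 128.0, "bmi": 24.0}
I, M = evaluate_rate_class(applicant, standard_class)
# I and M would then be compared against the design thresholds tau1 and tau2
# to decide whether this rate class can be assigned to the applicant.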

D. Domain Knowledge Representation

The FLE design parameters must be tuned, monitored, and maintained to assure the classifier's optimal performance. To this end, we have chosen EAs. Our EA is composed of a population of individuals ("chromosomes"), each of which contains a vector of elements that represent distinct tunable parameters to configure the FLE classifier, i.e., the parametric definition of the fuzzy constraints and the thresholds $\tau_1$ and $\tau_2$.

A chromosome, the genotypic representation of an individual, defines a complete parametric configuration of the classifier. Thus, an instance of such a classifier can be initialized for each chromosome, as shown in Fig. 5. Each chromosome of the population (left-hand side of Fig. 5) goes through a decoding process to initialize the classifier on the right. Each classifier is then tested on all the cases in the case base, assigning a rate class to each case. We can determine the quality of the configuration encoded by the chromosome (the "fitness" of the chromosome) by analyzing the results of the test. Our EA uses stochastic variation to produce new individuals in the population. The fitter chromosomes in generation $t$ are more likely to be selected and pass their genetic material to the next generation $t+1$. Similarly, the less fit solutions will be culled from the population. At the conclusion of the EA's execution, the best chromosome of the last generation determines the classifier's configuration. The EA for this application employs a population size of 100 and a mutation-based stochastic variation for generating new individuals over a significant search duration (250 generations). The mutation rate is scheduled to decrease as the search progresses, such that a more aggressive mutation occurs during the earlier parts of the search, and a more conservative mutation is applied during the later stages of the search. This is motivated by the preference to explore during the earlier stages of an evolutionary search, with a gradually increasing preference toward exploitation as the search matures. A crossover heuristic could have been employed as well in this real-space search as an additional recombination-oriented stochastic variation operation. However, the scheduled mutation technique achieves the purpose of transition from exploration to exploitation that a crossover operation could realize in this real-space search problem.
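A compact sketch of such a mutation-driven search with a decreasing mutation schedule is shown below; the linear annealing schedule, the rate endpoints, and the placeholder fitness function are our own illustrative choices, not the parameters used in the deployed system.

import random

POP_SIZE, GENERATIONS, N_PARAMS = 100, 250, 20

def mutation_rate(gen, generations, start=0.30, end=0.02):
    """Linearly anneal the per-gene mutation probability from 'start' to 'end'."""
    return start + (end - start) * gen / (generations - 1)

def mutate(parent, rate, sigma=0.1, lo=0.0, hi=1.0):
    """Gaussian perturbation of a fraction 'rate' of the genes, clipped to bounds."""
    child = parent[:]
    for i in range(len(child)):
        if random.random() < rate:
            child[i] = min(hi, max(lo, child[i] + random.gauss(0.0, sigma)))
    return child

def fitness(chromosome):
    # Placeholder: in the application this decodes a classifier configuration,
    # runs it over the case base, and returns the misclassification cost.
    return sum((g - 0.5) ** 2 for g in chromosome)

population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    rate = mutation_rate(gen, GENERATIONS)        # aggressive early, conservative late
    scored = sorted(population, key=fitness)      # lower cost is better
    parents = scored[: POP_SIZE // 2]             # cull the less fit half
    population = parents + [mutate(random.choice(parents), rate)
                            for _ in range(POP_SIZE - len(parents))]
best = min(population, key=fitness)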

1) Fitness Function: In discrete classification problems such as this one, we can use two matrices to construct the fitness function that we want to optimize. The first matrix is a $(T+1) \times (T+1)$ confusion matrix $M(\bar{c})$ that contains the frequencies of correct and incorrect classifications for all possible combinations of the standard reference decisions (SRDs)2 and classifier decisions. The first $T$ columns represent the rate classes available to the classifier. Column $T+1$ represents the classifier's choice of not assigning any rate class, sending the case to a human underwriter. The same ordering is used to sort the rows for the SRD. The second matrix is a $(T+1) \times (T+1)$ penalty matrix $P$ that contains the cost of misclassification. The fitness function combines the values of $M(\bar{c})$ resulting from a test run of the classifier configured with chromosome $\bar{c}$ with the penalty matrix $P$ to produce a single value

$$f(\bar{c}) = \sum_{i=1}^{T+1} \sum_{j=1}^{T+1} M_{i,j}(\bar{c})\, P_{i,j}. \qquad (2)$$

Function $f(\bar{c})$ represents the overall misclassification cost.
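With the confusion and penalty matrices in hand, the fitness of (2) reduces to an elementwise product and sum, as in the following sketch. The matrices shown are made-up numbers for a small three-column example (two rate classes plus the "send to a human underwriter" column), not data from the application.

import numpy as np

def misclassification_cost(confusion, penalty):
    """Overall cost f(c): elementwise product of the (T+1)x(T+1) confusion
    matrix produced by the classifier configured with chromosome c and the
    penalty matrix, summed over all cells, as in equation (2)."""
    confusion = np.asarray(confusion, dtype=float)
    penalty = np.asarray(penalty, dtype=float)
    return float((confusion * penalty).sum())

confusion = [[40,  3, 2],
             [ 5, 35, 4],
             [ 1,  2, 8]]
penalty   = [[0.0, 1.0, 0.2],
             [2.0, 0.0, 0.2],
             [0.5, 0.5, 0.0]]
print(misclassification_cost(confusion, penalty))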

E. Results

After defining measures of coverage and (relative and global) accuracy, we performed a comparison against the SRD. The results, partially reported in [73], show a remarkable improvement in all measures. We evaluated the performance of the decision systems based on the three metrics below.

2Standard reference decisions represent ground-truth rate class decisions as reached by consensus among senior expert underwriters for a set of insurance applications.



TABLE I
TYPICAL PERFORMANCE OF THE UNTUNED AND TUNED RULE-BASED DECISION SYSTEM (FLE)

• Coverage: Percentage of cases, as a fraction of the total number of input cases, whose rate class assignments are decided by a decision system. Each decision system has the option to not make a decision on a case and to refer an undecided case to a human underwriter.

• Relative accuracy: Percentage of correct decisions on those cases that were not referred to the human underwriter.

• Global accuracy: Percentage of correct decisions, including making correct rate class decisions and making a correct decision to refer cases to human underwriters, as a fraction of total input cases.

Specifically, we obtained the following results. Using the initial parameters (first column of Table I), we can observe a large coverage (∼94%) associated with a low relative accuracy (∼76%) and a lower global accuracy (∼75%). These performance values are the result of applying a strict interpretation of the UW guidelines, without allowing for any tolerance. Had we implemented such crisp rules with a traditional rule-based system, we would have obtained these same evaluations. This strictness would prevent the insurer from being price competitive and would not represent the typical modus operandi of human underwriters. However, by allowing each underwriter to use his/her own interpretation of such guidelines, we could introduce a large underwriters' variability. One of the main goals of this project was to provide a uniform interpretation, while still allowing for some tolerance. This goal is addressed in the second column of Table I, which shows the results of performing knowledge engineering and encoding the desired tradeoff between risk and price competitiveness as fuzzy constraints with preference semantics. This intermediate stage shows a different tradeoff, since both global and relative accuracy have improved. Coverage slightly decreases (∼90%) for a considerable gain in relative accuracy (∼93%). Although we obtained this initial parameter set by interviewing the experts, we had no guarantee that such parameters were optimal. Therefore, we used an evolutionary algorithm to tune them. We allowed the parameters to move within a predefined range centered around their initial values and, using the SRD and the fitness function described above, obtained an optimized parameter set, whose results are described in the third column of Table I. The results of the optimization show that the point corresponding to the final parameter set dominates the second set's point (in a Pareto sense), since both coverage and relative accuracy were improved.

TABLE II
MEAN (μ) AND STANDARD DEVIATION (σ) OF FLE PERFORMANCE OVER FIVE TUNING CASE SETS COMPARED TO FIVE DISJOINT TEST SETS

Finally, we can observe that the final metric, global accuracy (last row in Table I), improves monotonically as we move from using the strict interpretation of the guidelines (∼75%) through the knowledge-engineered parameters (∼90%), to the optimized parameters (∼94%). While the reported performance of the optimized parameters (in Table I) is typical of the performance we achieved through the optimization, a fivefold cross-validation on the optimization was also performed to identify stable parameters in the design space and stable metrics in the performance space (see Table II).

F. Remarks

We have developed a design methodology using a fuzzy knowledge-based classifier and an evolutionary algorithm. According to Fig. 2, the classifier is the object-level problem solver, while the EA is the offline metaheuristic. The domain knowledge has been distributed into the design of the classifier, the encoding of the chromosome for the EA, the definition of a fitness function to enforce a specific tradeoff between classification coverage and accuracy, and the establishment of a repository of cases representing ground-truth decisions, i.e., the SRD.

We have created a decision system that is able to automatically determine risk categories for insurance applications. The decision thresholds and internal parameters of these decision-making systems are tuned using a multistage mutation-based evolutionary algorithm to achieve a specific tradeoff between accuracy and coverage. The fitness function selectively penalizes different degrees of misclassification and serves as a forcing function that drives correct classifications. The tunable parameters have a critical impact on the coverage and accuracy of decision-making, and a reliable method to optimally tune these parameters is critical to the quality of decision-making and the maintainability of these systems.

Maintenance of the classification accuracy over time is an important requirement, considering that decision guidelines may evolve, and so can the set of certified cases. Therefore, it is of paramount importance to maintain the quality of the set of certified test cases that will be used as a benchmark for representing any new behavior desired of the decision-making systems. In [11], we describe the salient steps in the life cycle of a fuzzy knowledge-based classifier and show how the classifiers' knowledge base could be maintained over time by using the same EA and a fusion module to support the SRD life cycle and identify the most recent and reliable cases to be used in the updating of the SRD.



VI. OPTIMIZATION OF DESIGN AND MANUFACTURING PLANNING

A. Problem Description

In the printed circuit board industry, designers face increased complexity in coordinating suppliers and manufacturers of subassemblies and components to be competitive in cost, time, and quality. Design, supplier, and manufacturing decisions are made using the experience base in an organization and are often difficult to adapt to changing needs. Systems that support a coupling among design, supplier, and manufacturing decisions, simultaneously reduce supply and manufacturing costs and lead times, and better utilize available manufacturing facilities are critical to improved performance in these industries. A method and architecture for optimal design, supplier, and manufacturing planning for printed circuit assemblies is presented in [74]. The problem formulation from this reference is used as a basis for generating object-level test problems in this paper. This formulation poses the optimal selection of designs that realize a given functional specification, the selection of parts to realize a design, the selection of suppliers to supply these parts, and the selection of a production facility to manufacture the chosen design as a global optimization problem, where each selection has the potential to affect other selections. The goal is to minimize an aggregate nonlinear objective function of the total cost and total time of the $i$th design3 assigned to the $j$th manufacturing facility. The total cost and total time for realizing a printed circuit assembly are each coupled nonlinear functions dependent on characteristics of a chosen design, parts supply chain characteristics, and characteristics of a chosen manufacturing facility. This decision problem is in the class of nonlinear discrete assignment problems, and its characteristics do not support optimization using traditional techniques based on mathematical programming. An evolutionary optimization technique is more easily applied to this problem domain, is robust, and simultaneously searches multiple solutions.

B. Related Work

Subbu et al. [75] present a comparison of the performance of a fuzzy logic controlled genetic algorithm (FLC-GA) and a parameter-tuned genetic algorithm (TGA) for an agile manufacturing application. These strategies are benchmarked using a genetic algorithm (GA) that utilizes a canonical static parameter set. In the FLC-GA, fuzzy logic controllers dynamically schedule the population size, crossover rate, and mutation rate of the object-level GA, using as inputs diversity (genotypic and phenotypic) measures of the population. A fuzzy knowledge base is automatically identified using a meta-GA. In the TGA, a meta-GA is used to determine an optimal static parameter set for the object-level GA. The object-level GA supports a global evolutionary optimization of design, supplier, and manufacturing planning decisions for realizing printed circuit assemblies in an agile environment. The authors demonstrate that high-level control system identification (for the FLC-GA) or tuning (for the TGA) performed with small object-level search

3Each realizable design requires assignments of several parts, and each assigned part requires an assignment of a supplier who is capable of supplying that part.

Fig. 6. Architecture of the adaptive control system for an evolutionary algorithm.

spaces can be extended to more elaborate object-level search spaces, without employing additional identification or tuning. The TGA performs superior searches but incurs large search times. The FLC-GA performs faster searches than a TGA and is slower than the GA that utilizes a canonical static parameter set. However, search quality measured by the variance in search performance of the FLC-GA is comparable to that of the GA that utilizes a canonical static parameter set. This latter negative result served as the key motivation for investigating the alternative, less complex approach discussed in this paper and reported earlier in [23]. It is our opinion that searching for a knowledge base for a fuzzy controller used to control the parameters of the object-level algorithm via a metalevel algorithm not only adds a second layer of complexity but also generates control surfaces that do not generally correspond with expert knowledge. Evolutionary algorithms, on the other hand, have been successfully used to tune the performance of an existing fuzzy knowledge base, since this is typically a much smaller problem space.

Herrera and Lozano [76] present a detailed review of a variety of approaches for adapting the parameters of an evolutionary algorithm using fuzzy logic controllers, and in [77], they present an approach where rule bases for the fuzzy logic controllers simultaneously coevolve with the object-level evolutionary algorithm. Tettamanzi and Tomassini [78] discuss several combinations of fuzzy logic and evolutionary algorithms that include the development of fuzzy rule-based systems for adaptive control of an evolutionary algorithm.

C. Solution Architecture

The architecture for adaptive fuzzy control of an evolutionary algorithm's resources appears in Fig. 6. During evolutionary search, the fuzzy logic controller observes the population diversity and the percentage of completed trials and, using the embedded expert knowledge base, specifies changes to the population size and mutation rate.

D. Knowledge Representation (Explicit)

We use a fuzzy controller at the metalevel to manage the transition from exploration to exploitation, by modifying over



time the population size and the probability of mutation. The fuzzy controller takes the state vector [Genotypic Diversity, Percentage Completed Trials] and produces the output vector [Delta Population Size, Delta Mutation Rate].

Genotypic diversity (GD) of a population is the first input and is a search-space-level measure of the similarity of genes in members of a population. Since the object-level problem space is inherently discrete, the evolutionary algorithm utilizes an integer representation, which maps well to the problem formulation. Given two integer-coded genomes of the same length, we compute their Hamming distance $H$. A Hamming distance is usually applied to bit strings and computes the number of bit locations that differ in strings of the same length. So, given two bit strings 00111 and 11011, $H = 3$. We extend this idea to integer strings—if the integer alleles for corresponding genes in two genomes are unequal, their distance count is incremented by one. The genotypic diversity of two integer-coded genomes is normalized using the genome length $L$. The genotypic diversity for a population of size $N$ is therefore given by

$$\mathrm{GD} = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \frac{H(g_i, g_j)}{L} \qquad (3)$$

where $g_i$ and $g_j$ are members of the population. We use a normalization term of $N(N-1)/2$ since, given $N$ members, only $N(N-1)/2$ comparisons are distinct and nonreflexive. The range for GD is [0, 1]. When GD is close to zero, the diversity is low, indicating convergence of the population; and when it approaches one, the diversity is very high.
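The measure in (3), as reconstructed here, can be computed directly from the integer-coded population, for example:

def genotypic_diversity(population):
    """GD of an integer-coded population per (3): pairwise Hamming-style
    distances, normalized by the genome length and by the N(N-1)/2 distinct,
    nonreflexive pairs.  The result lies in [0, 1]."""
    n = len(population)
    length = len(population[0])
    total = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            total += sum(a != b for a, b in zip(population[i], population[j]))
    return 2.0 * total / (n * (n - 1) * length)

pop = [[0, 0, 1, 1, 1],
       [1, 1, 0, 1, 1],
       [0, 0, 1, 1, 1]]
print(genotypic_diversity(pop))  # low values indicate a converging population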

Percentage completed trials (PCT) is the second input and is a number in the range [0, 1], where "1" signifies exhaustion of all allowed trials, and consequently a maturity of the search given a fixed amount of resources.

In our prior work [75], we used genotypic diversity and phenotypic (fitness-level) diversity as the two inputs to a fuzzy controller. In that paper, we did not explicitly introduce time or search maturity as an input, and implicitly used the phenotypic diversity measure to serve this purpose. Since the objectives of the evolutionary search are different in the early and later stages, it is desirable to include time or search maturity as an explicit input. In addition, this leads to a simpler and more intuitive design of the fuzzy knowledge base.

Fuzzy membership distributions that partition the input and output spaces are shown in Fig. 7. Triangular, proportionally distributed membership functions are used for specification of the knowledge base of the fuzzy logic controller. At the early stage of the evolutionary process, it is useful to reduce greediness and increase the perturbation factor in order to explore the search space as much as possible. With the development of the evolutionary process, i.e., with an increasing generation count, exploration needs to be reduced gradually, while increasing the emphasis on exploitation. When there is only a small percentage of generations remaining, most emphasis should be placed on exploitation in order to refine the solutions. The knowledge bases for population size control and mutation rate control that encode this knowledge are shown in Tables III and IV, respectively. These knowledge bases are also similar to the well-known class of fuzzy proportional-integral (PI) controllers. As noted in [79], [80], a fuzzy PI is a generalization of a

Fig. 7. Linguistic terms and membership distributions associated with the fuzzy logic controller.

TABLE III
KNOWLEDGE BASE FOR Δ POPULATION SIZE AS A FUNCTION OF GD AND PCT

TABLE IV
KNOWLEDGE BASE FOR Δ MUTATION RATE AS A FUNCTION OF GD AND PCT

two-dimensional sliding-mode (SM) controller [81] and shares a similar structure. In Table III, we can observe the no change (NC) values in the main diagonal cells, which correspond to the switching line of an SM controller. Similarly, all control actions at opposite sides of the switching line have opposite signs, and their magnitude does not decrease as we move away (perpendicularly) from the switching line. Half of the structure in Table IV is similar to Table III, with M (medium) playing the same role as NC in Table III. The other half saturates at the medium value.

Given a population size, the new population size is computed as the product of the current population size and the current population factor. The search is initialized with a population size of 50, and further population sizes are bounded to the range [25, 150] to prevent a search with very small or very large populations. The fuzzy control for population size is fired (applied) at each generation. A new mutation rate scaled



to the range [0.005, 0.1] is realized as a transient spike (Δ mutation rate) relative to the baseline mutation rate of 0.005 every ten generations. The mutation rate is returned to the baseline level at the generation following the one where the control action occurs. Such a mutation is designed to serve as a temporary disruption to introduce additional population diversity, and we return the mutation to the baseline level between control actions in order to exploit the potential benefit due to the disruption.

The min operator is used for conjunction of clauses in the IF part of a rule, the min operator is used to fire each rule, the max operator is used to aggregate the outputs of all fired rules, and a center-of-gravity method is used for defuzzification.
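The following sketch illustrates this style of metalevel control for the population-size output only. The triangular membership functions and rule consequents are illustrative stand-ins rather than the entries of Table III, and the defuzzification is simplified to a weighted average of rule consequents instead of a full center-of-gravity over aggregated output sets.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Three proportionally distributed linguistic terms on [0, 1] for both inputs.
TERMS = {"L": (-0.5, 0.0, 0.5), "M": (0.0, 0.5, 1.0), "H": (0.5, 1.0, 1.5)}

# Illustrative rule consequents (population-size factor) indexed by (GD, PCT);
# values below 1 shrink the population, values above 1 grow it.
RULES = {
    ("L", "L"): 1.20, ("L", "M"): 1.10, ("L", "H"): 1.00,
    ("M", "L"): 1.10, ("M", "M"): 1.00, ("M", "H"): 0.90,
    ("H", "L"): 1.00, ("H", "M"): 0.90, ("H", "H"): 0.80,
}

def population_factor(gd, pct):
    """Fire all rules with min-conjunction and combine them by a weighted
    average of the consequents (a simplified defuzzification step)."""
    num = den = 0.0
    for (gd_term, pct_term), factor in RULES.items():
        w = min(tri(gd, *TERMS[gd_term]), tri(pct, *TERMS[pct_term]))
        num += w * factor
        den += w
    return num / den if den > 0.0 else 1.0

pop_size = 50
factor = population_factor(gd=0.15, pct=0.7)                  # low diversity, mature search
pop_size = int(min(150, max(25, round(pop_size * factor))))   # keep within [25, 150]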

E. Results

In this section, we compare the performance of two basic types of evolutionary algorithms to their respective performance when fuzzy control is introduced. The first algorithm, called the standard evolutionary algorithm (SEA), has a population size of 50, a crossover rate of 0.6, and a mutation rate of 0.005, uses proportional selection, applies uniform crossover to two selected parents to produce two offspring, and completely replaces (with elitism) the parent population with the offspring population. The second algorithm, called the steady-state evolutionary algorithm (SSEA), is identical to the SEA except that only 25% of the population is replaced at each generation. The fuzzy controlled versions of these respective evolutionary algorithms are the fuzzy standard evolutionary algorithm (F-SEA) and the fuzzy steady-state evolutionary algorithm (F-SSEA).

The experimental setup is based on an underlying discrete decision problem space of the order of 6.4 feasible options. Three different minimization problems are defined over this decision space by introducing various aggregation functions given by

$f_1(\text{Cost}, \text{Time})$ (4a)

$f_2(\text{Cost}, \text{Time})$ (4b)

$f_3(\text{Cost}, \text{Time})$ (4c)

These objectives represent various heuristic tradeoffs between the total cost and total time objectives in the design, supplier, and manufacturing planning. Experiments are conducted using 3000, 5000, 7000, 9000, and 11 000 allowed fitness trials per evolutionary search. For each experimental setup, an algorithm's performance is observed over 20 repeat trials. The resulting experiment space consists of the cross-product of four algorithm types, three versions of objectives, five versions of maximum fitness trials, and 20 repeat trials, resulting in a total of 1200 experiments. Algorithm performance measures include a) the mean of the optimum found in 20 repeat trials and b) the associated standard deviation. The t-test for the mean and the F-test for the variance are used to evaluate statistically significant differences, and we utilize a standard value of 0.05 for significance determination.
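A sketch of the corresponding significance tests is given below. The trial data are synthetic and generated only to show the mechanics; the paper does not specify whether a pooled or Welch t-test was used, so the unequal-variance version is assumed here.

import numpy as np
from scipy import stats

# Hypothetical best-of-run values from 20 repeat trials of two algorithms.
rng = np.random.default_rng(0)
sea   = rng.normal(loc=105.0, scale=6.0, size=20)   # e.g., SEA optima
f_sea = rng.normal(loc=100.0, scale=3.0, size=20)   # e.g., F-SEA optima

alpha = 0.05

# t-test on the means (Welch's version: no equal-variance assumption).
t_stat, t_p = stats.ttest_ind(f_sea, sea, equal_var=False)

# F-test on the variances: ratio of sample variances against the F distribution.
f_stat = np.var(f_sea, ddof=1) / np.var(sea, ddof=1)
f_p = 2.0 * min(stats.f.cdf(f_stat, 19, 19), 1.0 - stats.f.cdf(f_stat, 19, 19))

print("mean difference significant:", t_p < alpha)
print("variance difference significant:", f_p < alpha)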

Table V shows a rounded-up performance comparison of the SEA and F-SEA. The mean (μ) performance of the F-SEA is superior to the μ performance of the SEA 93% of the time,

TABLE V
PERFORMANCE COMPARISON OF THE STANDARD EVOLUTIONARY ALGORITHM (SEA) AND FUZZY STANDARD EVOLUTIONARY ALGORITHM (F-SEA). A SHADED CELL DENOTES COMPARATIVELY SUPERIOR PERFORMANCE. A SHADED CELL WITH A MARKER SYMBOL DENOTES STATISTICALLY SIGNIFICANT SUPERIOR PERFORMANCE

and the standard deviation (σ) performance of the F-SEA is superior to the σ performance of the SEA 73% of the time. The μ performance of the F-SEA is statistically superior to the μ performance of the SEA 40% of the time, and the σ performance of the F-SEA is statistically superior to the σ performance of the SEA 60% of the time. Statistically superior performance occurs when the threshold for the corresponding t-test or F-test is smaller than 0.05. Each such occurrence is highlighted and marked with a symbol in the table.

Table VI shows a rounded-up performance comparison of the SSEA and F-SSEA. The mean (μ) performance of the F-SSEA is superior to the μ performance of the SSEA 73% of the time, and the standard deviation (σ) performance of the F-SSEA is superior to the σ performance of the SSEA 80% of the time. The μ performance of the F-SSEA is statistically superior to the μ performance of the SSEA 20% of the time, while the μ performance of the F-SSEA is statistically inferior to the μ performance of the SSEA 7% of the time, and the σ performance of the F-SSEA is statistically superior to the σ performance of the SSEA 47% of the time.

The above results support the statistically based inference that introducing fuzzy control to manage the resources of the SEA has more of an impact than when fuzzy control is introduced to manage the resources of an SSEA. Regardless, fuzzy control is able to improve the performance of the SSEA as well.

In Table VII, we compare the performance of the SEA to the SSEA in an effort to infer the relative superiority of the basic approaches. These results support the statistical inference that the performance of the SSEA is significantly better than the performance of the SEA. A potential explanation for this phenomenon is that, since the SSEA only replaces a portion of its population at each generation, it delays the onset of premature convergence, which is detrimental to the evolutionary search process.



TABLE VI
PERFORMANCE COMPARISON OF THE STEADY-STATE EVOLUTIONARY ALGORITHM (SSEA) AND FUZZY STEADY-STATE EVOLUTIONARY ALGORITHM (F-SSEA). A SHADED CELL DENOTES COMPARATIVELY SUPERIOR PERFORMANCE. A SHADED CELL WITH A MARKER SYMBOL DENOTES STATISTICALLY SIGNIFICANT SUPERIOR PERFORMANCE

TABLE VII
PERFORMANCE COMPARISON OF THE STANDARD EVOLUTIONARY ALGORITHM AND STEADY-STATE EVOLUTIONARY ALGORITHM. A SHADED CELL DENOTES COMPARATIVELY SUPERIOR PERFORMANCE. A SHADED CELL WITH A MARKER SYMBOL DENOTES STATISTICALLY SIGNIFICANT SUPERIOR PERFORMANCE

F. Remarks

We have presented statistical results based on a large suite of experiments, which support the argument that adaptive control of an evolutionary algorithm's resources via fuzzy logic may be realized in a simple, intuitive, and efficient manner. We have observed that the overhead due to fuzzy control intervention is minimal, in spite of activating the fuzzy controller for population size at each generation. This overhead could increase for some applications that require larger and more complex fuzzy rule bases. In such cases, rule compilation [82] could be used as a novel technique to achieve up to an order of magnitude speedup.

We have demonstrated that expert heuristic knowledge on managing an evolutionary algorithm can be suitably encoded using a fuzzy logic framework to remarkably improve an evolutionary algorithm's search performance. In this function, the fuzzy logic control scheme serves as an intelligent manager that routinely observes the progress of the search and makes modifications to increase or decrease population diversity as necessary, both to improve the search and to control the transition from exploration in the initial stages to exploitation in the later stages.

The principal finding of our work based on hypothesis testing is that an adaptive evolutionary algorithm generally results in a much tighter variance in search results and is a significant improvement over evolutionary algorithms based on a static canonical parameter set. Moreover, this approach leads to a higher degree of search confidence, as evidenced by a narrower variance in the search. What is also remarkable is that, except for one outlier in the space of experiments, the fuzzy controlled evolutionary algorithm approaches are inherently superior to their static parameter versions.

In the approach to adaptive fuzzy control presented, we have emphasized the explicit use of time as a state variable via the introduction of the percentage of completed trials measure. This is motivated by the fact that the nature of the evolution is different in the early, intermediate, and mature stages of the search, and striking a balance between diversity and resource-constrained convergence is important in solving practical problems. Optimal diversity maintenance without considering time or computational resources would imply postponement of convergence to infinity, which is not a practical option. Considering time or search maturity as an explicit input also leads to a simpler and more intuitive design of the fuzzy knowledge base.

Fitness-based diversity measures do not differentiate dissimilar solutions that may have the same or similar fitness, such as when the solutions correspond to points along a fitness contour or when they lie near different modes with comparable fitness. However, genotypic diversity, such as the measure used in this paper, is general and would differentiate the solutions in the above examples. It is our opinion that selecting an appropriate diversity measure is an important task, and a practitioner must clearly understand the goals of the optimization application in order to choose the most applicable diversity measure. For instance, in multimodal function optimization, where identification of several modes becomes necessary, one may have to consider clustering-based diversity measures instead. Nonstationary function optimization is another challenging example that requires the selection of a novel diversity measure.

VII. LAMP SPECTRUM OPTIMIZATION

A. Problem Description

In the United States, 20% of electrical generation (which is responsible for 39 million tons of carbon dioxide emissions) is dedicated to lighting [83]. In less industrialized countries,



Fig. 8. Luminous efficacy of common electric light sources (and the sulphur lamp).

this fraction can be much higher (e.g., 37% in Tunisia [83]). Thus, one of the principal goals motivating advances in lighting technology is maximization of luminous efficacy, the ratio of the total luminous flux to total power input (i.e., the "amount of light" per watt [84]). However, colorimetric properties have a strong influence on the application and adoption of new light source technologies, probably because an observer can easily assess them. For example, even though metal halide lamps, which produce a white light that renders color well, are less efficient and more expensive to manufacture than low-pressure sodium lamps, which produce a yellow light that renders color poorly, there are many more practical applications for metal halide lamps because they have much better colorimetric properties.

Ideally, a new electric light source technology will have high luminous efficacy, have an acceptable apparent color (typically "white"), and render colors well. While getting all three properties in a new light source technology just right is rare, there are cases where efficacy is very high and the color properties are just slightly off. For example, the sulphur lamp [85], [86] has very high luminous efficacy (see Fig. 8), a color-rendering index (CRI) of 78, and a greenish-white apparent color (in 1931 CIE chromaticity coordinates), which is a poor color for many applications.

It is possible to exchange some efficacy for better colorimetric properties: there are an infinite number of ways to filter a broad-spectrum light so that it has better colorimetric properties. However, almost all of these filters will reduce luminous efficacy by an unacceptable amount. The lamp spectrum optimization (LSO) problem is a multiobjective optimization problem concerned with the tradeoff between luminous efficacy and one or more colorimetric properties of lamp spectra.

The spectral power distributions [(SPDs)—the radiant power per unit wavelength as a function of wavelength] of four light sources employed in this research are plotted in Fig. 9. Note that SPDs can be smooth and continuous (e.g., incandescent lamps, sulfur lamps) or spiky (e.g., metal halide), with energy either spread throughout the visible spectrum (e.g., metal halide lamps, fluorescent lamps) or concentrated principally in one portion of the visible spectrum (e.g., high-pressure sodium lamps, low-pressure sodium lamps). Thus, one might expect very different filters for each lamp type for a given chromaticity coordinate.

Fig. 9. SPDs for metal halide, high-pressure sodium, sulphur, and incandescent lamps.

B. Related Work

MacAdam [87] presents a proof of a theorem that allows the optimal spectral reflectance for a pigment to achieve a maximum colorimetric purity for a given illuminant and dominant wavelength to be determined. This can also be used to determine how an arbitrary SPD may be filtered to achieve any chromaticity at maximum efficiency [87], [88]. However, this method offers no guarantee that the color rendering will be acceptable, and it is likely to be poor for many chromaticities.

Koedam and Opstelten [89] use trial and error and color theory to develop three-line spectra on the blackbody locus with high CRI. Koedam et al. [90] examined (again via trial and error) the effect of using three bands of differing bandwidth on color rendering and efficacy, although they only look at a few points. Thornton [91] uses a similar approach to explore the tradeoff between CRI and efficiency of some three-line spectra for white light. All three of these papers identify similar regions of the spectrum (around 450, 540, and 610 nm) as being particularly important for color rendering in line or band spectra.

Einhorn and Einhorn [92], Walter [93], Haft and Thornton [94], and Opstelten et al. [95] were the first researchers to examine in a systematic way the relationship between CRI and efficacy for certain chromaticities, using three-line or three-band spectra. All four papers present calculations, starting with different assumptions, of CRI/efficacy Pareto optimal fronts for different colors and note that the lighting industry was manufacturing many lamps that were far from this front (i.e., even given physical limitations, there was substantial room for improvement).

Walter [96] applied nonlinear programming to the spectrum optimization problem for efficacy and color rendering at a particular chromaticity. Although he was never able to get the algorithm to converge, he was able to find three- and four-line spectra with high efficiency and high CRI.

Other researchers have applied a variety of approaches to solve related problems. Ohta and Wyszecki [97] use linear programming to design illuminants that render a limited number of objects at desired chromaticities. Ohta and Wyszecki [98] use nonlinear programming to explore the relationship between illuminant changes and color change in relation to setting



tolerances for illuminant differences. DuPont [99] applied a variety of methods, including neural networks, the simplex method, genetic algorithms, simulated annealing, etc., to reconstruct reflectance curves from tristimulus values for a given SPD.

C. Solution Architecture

Genetic algorithms were used to solve the LSO problem. The population size for the search was set at 175, the number of generations was set at 500, the mutation probability was set at 5%, and the population is replaced at each generation with new individuals, except for the elite individuals, which are transferred across generations. The crossover and selection methods were relatively generic [100]–[102]. However, domain knowledge was incorporated into the mutation methods to develop very effective problem-specific operators.

The portion of the visible spectrum between 400 and 700 nm was partitioned into 150 bins, each 2 nm wide. The influence on chromaticity coordinates of the visible spectrum outside this range is trivial, e.g., less than 0.26% for a uniform spectral power distribution. Each chromosome consisted of 150 genes, with each floating-point gene representing the transmittance of the filter in one of the bins. The order of the genes corresponded to the order of the wavelength range in the spectrum; i.e., the first gene represented the 400 to 402 nm interval, the second gene represented the 402 to 404 nm interval, and so on. The 2 nm bin width was chosen as a compromise between smoothness and computational tractability. Valid allele values for each gene could range from zero to one.
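A minimal sketch of this encoding is given below: a chromosome is a vector of 150 transmittance values on a 2 nm grid, and a candidate filter is applied to a lamp SPD sampled on the same grid. The SPD used here is a flat stand-in, and the efficacy, chromaticity, and CRI objectives computed from the filtered SPD are omitted.

import numpy as np

# 150 bins of 2 nm covering 400-700 nm; one transmittance gene per bin in [0, 1].
WAVELENGTHS = np.arange(401.0, 700.0, 2.0)   # bin centers
N_GENES = len(WAVELENGTHS)                    # 150

def random_chromosome(rng):
    """A random filter: per-bin transmittance values in [0, 1]."""
    return rng.random(N_GENES)

def apply_filter(spd, chromosome):
    """Filtered spectral power distribution: per-bin product of the lamp SPD
    (sampled on the same 2 nm grid) and the filter transmittance."""
    return spd * chromosome

rng = np.random.default_rng(42)
chromosome = random_chromosome(rng)
spd = np.ones(N_GENES)           # stand-in for a measured lamp SPD on this grid
filtered = apply_filter(spd, chromosome)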

D. Domain Knowledge Representation

This encoding permits the use of several problem-specific mutation methods by capitalizing on the expected properties of good chromosomes. Optimal filters tend to have several common properties. MacAdam [87], [88] showed that for the chromaticity-only problem (i.e., ignoring color rendering), optimal filters had a single notch (narrow band) of zero transmittance, and 100% transmittance otherwise. Early results showed that in relatively fit chromosomes, many gene values are exactly at the limits of the allele (i.e., 100% transmission or 0% transmission) and that there is a smooth transition in values across adjacent genes. Finally, from a colorimetric perspective, the portion of the visible spectrum at either extreme has very little effect on efficiency. Therefore, there is little selection pressure applied by these regions. By the same argument, the central portion of the visible spectrum exerts a much stronger influence on the fitness than either extreme.

This suggests several methods of mutation specific to the spectrum optimization problem that might be expected to produce a substantial decrease in the number of generations required to converge to a good solution. For example, consider the filter in Fig. 10. Because region A has very little influence on efficiency, there is not much pressure to either smooth out the genes or move them toward a boundary. In region B, there is clearly a notch developing, but many generations may be required to smooth it out. Region C is near but not at the boundary and, like region A, has little effect on fitness. Our domain knowledge was embedded into a set of three

Fig. 10. Suboptimal filter.

Fig. 11. BCM applied to part of region A (bold line).

customized mutation operators which were designed to address these issues: boundary chunk mutation (BCM), push mutation (PM), and smooth mutation (SM), which are described in this section.

1) Boundary Chunk Mutation: BCM selects a random contiguous portion, up to 10% of the total length of the chromosome, and sets it to one of the allele limits (either 1 or 0). Moreover, because most of the genes for many chromaticities could be expected to be at 100% transmission, the selection of which boundary to mutate to was biased slightly: 65% of the time it went to 1, 35% of the time it went to 0. For the genes $g_i$ in the randomly selected region

$$g_i = \begin{cases} 1 & \text{if rand} \le 0.65 \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$

where "rand" is a [0, 1] uniformly distributed random value. This mutation is effective because many of the genes of fit solutions were at the maximum or minimum value (i.e., either 100% transmission or 0%). Fig. 11 shows BCM applied to part of region A of the solution depicted in Fig. 10.

2) Push Mutation: PM selects a random contiguous portion, up to 20% of the total length of the chromosome, and scales the genes from their current values toward either 1 or 0 by a randomly chosen value $\delta$, which is uniformly distributed between 0.0 and 0.2. For the genes $g_i$ in the randomly selected region

$$g_i = \begin{cases} g_i + \delta\,(1 - g_i) & \text{when pushing toward } 1 \\ g_i\,(1 - \delta) & \text{when pushing toward } 0 \end{cases} \qquad (6)$$



Fig. 12. PM applied to part of region C (bold line).

Fig. 13. SM applied to part of region B (bold line).

Fig. 12 shows PM applied to part of region C of the solution depicted in Fig. 10. Note that this might be effectively applied to region B as well, to help shape the developing notch.

3) Smooth Mutation: SM selects a random contiguous portion, up to 20% of the total length of the chromosome, and smooths it. Specifically, the value of each gene in the mutated portion is weighted by the value of its neighboring genes

$$g_i = \frac{g_{i-1} + g_i + g_{i+1}}{3} \qquad (7)$$

where $i$ is the order of the gene in the chromosome. Fig. 13 shows SM applied to part of region B of the solution depicted in Fig. 10.
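A sketch of the three operators is given below. Where the source leaves details open (the direction chosen by PM and the exact neighbor weighting used by SM), simple assumptions are made: PM pushes toward 1 or 0 with equal probability, and SM uses a three-point moving average, matching the reconstructions of (6) and (7) above.

import numpy as np

def _random_region(rng, n_genes, max_frac):
    """Pick a random contiguous gene range covering up to max_frac of the chromosome."""
    length = rng.integers(1, max(2, int(max_frac * n_genes)) + 1)
    start = rng.integers(0, n_genes - length + 1)
    return start, start + length

def boundary_chunk_mutation(chrom, rng):
    """BCM: set a random region (<=10% of genes) to an allele limit, biased 65/35 toward 1."""
    child = chrom.copy()
    lo, hi = _random_region(rng, len(child), 0.10)
    child[lo:hi] = 1.0 if rng.random() <= 0.65 else 0.0
    return child

def push_mutation(chrom, rng):
    """PM: scale a random region (<=20% of genes) toward 1 or 0 by a U[0, 0.2] step."""
    child = chrom.copy()
    lo, hi = _random_region(rng, len(child), 0.20)
    delta = rng.uniform(0.0, 0.2)
    if rng.random() < 0.5:                             # direction choice is an assumption
        child[lo:hi] += delta * (1.0 - child[lo:hi])   # push toward 1
    else:
        child[lo:hi] *= (1.0 - delta)                  # push toward 0
    return child

def smooth_mutation(chrom, rng):
    """SM: replace each gene in a random region (<=20% of genes) by the mean of
    itself and its two neighbors (assumed three-point average)."""
    child = chrom.copy()
    lo, hi = _random_region(rng, len(child), 0.20)
    padded = np.pad(chrom, 1, mode="edge")
    child[lo:hi] = (padded[lo:hi] + padded[lo + 1:hi + 1] + padded[lo + 2:hi + 2]) / 3.0
    return child

rng = np.random.default_rng(7)
chrom = rng.random(150)
for op in (boundary_chunk_mutation, push_mutation, smooth_mutation):
    chrom = op(chrom, rng)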

E. Results

Fig. 14 presents filters evolved by three typical runs of this technique for three spectra (MH, HPS, and incandescent) at a selected 1931 CIE chromaticity. This chromaticity was selected to be about equidistant from the unfiltered spectra of the three lamps considered and achievable, albeit at low efficiency, by all three lamps. Also plotted are optimal filters produced for that color following MacAdam's method [87].

The incandescent and metal halide filters are simple notches, very similar to the optimal notches found by MacAdam's technique. The form of the HPS filter is quite different from the MacAdam optimal; however, the difference in efficiency between the two is less than 0.56%. This example illustrates two important features of genetic algorithms. First, the exact optimum is rarely found, although the difference is usually trivial. Second, the GA approach is easily able to develop a variety of

Fig. 14. Filter transmittance for three spectra (for the given chromaticity coordinates).

Fig. 15. Comparison of the Pareto-optimal surface found using the GA approach to the known optimum (using MacAdam's [87] method) on the three-dimensional problem (filtered x, y, and efficiency). Contour lines are efficiency at a given chromaticity.

solutions that have dissimilar form but similar performance, allowing an engineer to choose from these Pareto-optimal solutions based on criteria not encoded in the fitness function (e.g., ease of manufacturing).

Fig. 15 is a comparison of the chromaticity-efficiency Pareto optimal surfaces obtained using both methods for the HPS



Fig. 16. Tradeoff between efficiency and CRI at each chromaticity for the four-dimensional problem (filtered x, y, CRI, and efficiency). For ease of visualization, chromaticity is represented here by the corresponding color temperature on the blackbody locus.

spectrum. The GA approach does a good job of defining the MacAdam limit of efficiency at any chromaticity, particularly where filtered efficiency is above 50%, which encompasses the area of likely interest for any industrial application. The GA performs less well at very low efficiencies; this is a relatively more difficult area in which to find the optimum filter, so the stopping criterion (500 generations in this case) was probably too restrictive for this region, i.e., if it had run longer, it would have performed better.

Fig. 16 is a plot of the Pareto optimal front for the four-dimensional problem of 1931 CIE x and y chromaticity, color rendering index, and filtered efficiency. Note that the maximum efficiency is quite close to the MacAdam limit for each chromaticity,4 except for metal halide at low color temperatures. This is further evidence that the GA approach is finding solutions at or very near the true global optima: at some CRI, the efficiency for just chromaticity and for chromaticity and CRI optimized must be the same. Note also that as color temperature is increased, the effect of CRI on maximum efficiency is decreased.

4The appearance of efficiency exceeding the MacAdam limit at high color temperature is illusory, a result of mapping two-dimensional chromaticity space to one-dimensional color temperature space.

F. Remarks

The use of domain knowledge to develop problem-specific mutation methods had a large beneficial effect on the performance of the GA in this case. For a relatively simple problem with a smooth spectral power distribution, as obtained from an incandescent source, the number of function evaluations required to arrive at a solution of comparable quality is reduced by a factor of four by using the problem-specific mutation methods. For a more complex problem, such as one with a very spiky spectral power distribution (as obtained from a metal halide lamp), the effect is even more profound, reducing the number of function evaluations by a factor of 20.

VIII. SCHEDULING MAINTENANCE TASKS FOR A CONSTELLATION OF LOW EARTH ORBITING SATELLITES

A. Problem Description

The objective of this problem is to schedule the maintenance tasks for a constellation of 25 satellites in LEO. These satellites' orbits do not follow a regular pattern. Thus, any one of these 25 satellites does not necessarily follow the same orbit twice within a finite period of time. As these satellites follow their orbits, they provide services to the cities that have them in view.

A given city on the surface sees a steady stream of satellites appearing on the horizon (see Table VIII), remaining in view for a widely varying period of time, and then disappearing over the horizon. A city may have several satellites in view during any given period of time. These satellites provide a variety of services to the cities while they are in view. Table VIII represents a sample of the information known about the orbit of a satellite and when it will be available to a specific city. Along with the schedule of satellite–city interactions, there were two other schedules. One schedule was for satellite–eclipse interactions, which tells us when a satellite will be in eclipse, and one for satellite–ground station interactions, which tells us when a satellite will be able to contact certain control centers.

The satellite owners are responsible for providing a certain level of service to customers in these cities of interest. The satellites cannot be reallocated, that is, they cannot be moved into another orbit in order to provide more service to a city that is not in their immediate path. It is to the satellite owners' financial advantage to ensure that these cities receive the highest level of service possible. Often a city is provided with more service than is necessary. However, the goal in this project is to maximize overall coverage.

Our goal is to schedule a set of maintenance tasks on these satellites. These tasks must be scheduled so that they infringe minimally on the satellites' ability to provide services. There are two primary reasons why one might utilize an evolutionary algorithm for this problem. First, there are some constraints that may add significant nonlinearity to the solution space. Second, there may be other conditions outside of the framework given to the evolutionary algorithm that a decision maker may be aware of. A side effect of utilizing an EA is the production of a set of viable alternatives. These alternatives are attractive to a decision maker who may be aware of other information that was not incorporated into the EA.



TABLE VIII
SATELLITE–CITY INTERACTION SCHEDULE

B. Related Work

This application has been previously reported in [103]. To the best of our knowledge, there is no other prior work related to the use of EAs to schedule satellite maintenance tasks. Other scheduling tools have been used in this problem domain, such as binary/integer programming [104], neural networks [105], [106], dynamic constraint satisfaction [107], etc. EAs have been applied to a broad range of scheduling tasks, from the ubiquitous traveling salesman problem to standard job shop scheduling. This basic optimization task is nicely covered in many introductory genetic algorithm texts [13], [108].

C. Solution Architecture

As illustrated in Fig. 17, the EA utilizes an event simulation to determine the total coverage provided in a given schedule. This simulation is provided by a data structure that is constructed based on a known schedule of coverage events and the EA-scheduled task starting times. After constructing the data structure, the final adjusted coverage events are read directly from the data structure. These final events have been reduced in accordance with any overlapping tasks.

Fig. 17. Architecture of satellite scheduling problem.

D. Domain Knowledge Representation

The representation utilized was a chromosome with real-valued alleles. Each allele represents the start time for a maintenance task. In one instance of the problem, we had six tasks to schedule on each of 25 satellites, resulting in a chromosome with 150 real-valued alleles.

This problem has static and dynamic constraints, each of which was handled differently in the EA. Static constraints are those that are based on information known prior to the EA being run. In contrast, dynamic constraints are not known until the EA is run.

Static constraints are based on a priori information. For example, certain tasks, like battery reconditioning, cannot be performed during a satellite eclipse. Therefore, its start time and its entire duration should not overlap with any satellite eclipse. This is illustrated in Table IX (second row). The schedule of each eclipse is known for each satellite prior to running the EA. Thus, this information can be encoded into the chromosome. For each allele representing a battery-reconditioning task, we need to ensure that it is not scheduled during an eclipse. Rather than allowing the start time to span the full duration of the simulated



TABLE IX
INTERACTIONS BETWEEN TASKS AND EVENTS

Fig. 18. Mapping between real time and an artificial time horizon.

Fig. 19. This set of static constraints starts with valid and potentially overlapping windows of starting times and maps them to a continuous range of valid start times.

period, we truncate the period to account for the time that is not available for this task.

As shown in Fig. 18, we map real time (except for eclipses) into a truncated time horizon that represents only valid start times. During the evaluation of this individual, we can then calculate a mapping from our truncated starting time to the actual starting time.

A similar problem arises for those tasks that must occur while a satellite is in contact with a ground station. Since we know when valid times for this task occur, we can once again create an artificial time scale that incorporates only the valid range of times for the tasks. This is shown in Fig. 19.
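The sketch below illustrates this time-value transformation under simple assumptions: given blackout windows (eclipses or out-of-contact periods), we build the valid segments once and then map a start time on the truncated, artificial axis back to real mission time. The window values and the time unit are hypothetical.

def build_valid_segments(horizon, blackouts):
    """Return the sub-intervals of [0, horizon) that avoid the given blackout
    windows (e.g., eclipses), as (start, end) pairs sorted in time."""
    segments, t = [], 0.0
    for b_start, b_end in sorted(blackouts):
        if b_start > t:
            segments.append((t, b_start))
        t = max(t, b_end)
    if t < horizon:
        segments.append((t, horizon))
    return segments

def artificial_to_real(tau, segments):
    """Map a start time tau on the truncated, artificial time axis (valid time
    only) back to real mission time."""
    for start, end in segments:
        span = end - start
        if tau < span:
            return start + tau
        tau -= span
    raise ValueError("tau exceeds the total amount of valid time")

# Hypothetical 100-minute horizon with two eclipse windows.
segments = build_valid_segments(100.0, [(10.0, 25.0), (60.0, 70.0)])
total_valid = sum(end - start for start, end in segments)   # 75 minutes of valid time
print(artificial_to_real(40.0, segments))                   # falls in (25, 60) -> 55.0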

Dynamic constraints must be enforced differently, as they are based on information not known until the EA is run. They are dependent on the nature of the solution generated by the EA. The primary dynamic constraint in this problem is the issue of task collisions. There are several types of tasks, and each task has specific interactions with other task types and other events. These interactions are shown in Table IX.

The interaction types are defined as follows.

• Shrinks: indicates that the task reduces the usable time in that event.

• No Interaction: indicates that the task may be scheduled during the event or task.

• Within: indicates that the usable time for the task is contained entirely within the event.

• No Overlap: the task is not allowed to intersect the other event or task type.

As these schedule conflicts arise dynamically, they must be resolved dynamically as well. In this case, we opted for a penalty function that penalizes an individual for invalid overlapping tasks. This constraint was enforced with a dynamic penalty so as to maintain diversity: over time, that is, in later generations, the penalty for a collision is more severe than in early generations

$$f' = f - \left(\frac{t}{T}\right)^{\alpha} W\, N_{c} \qquad (8)$$

In the above equation, $f'$ is the final fitness, $f$ is the fitness before any penalty is applied, $\alpha$ is a linearity factor, $W$ is a constant weight to be multiplied with the number of collisions $N_c$, and $t/T$ is the fraction of generations completed. This method was important in that it allowed for greater diversity in early generations, which increased the likelihood of optimum results in later generations.
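A minimal sketch of such a time-dependent penalty, following the reconstructed form of (8), is shown below; the weight and linearity factor are illustrative values only.

def penalized_fitness(raw_fitness, n_collisions, generation, max_generations,
                      weight=10.0, alpha=2.0):
    """Time-dependent collision penalty: mild in early generations (preserving
    diversity) and increasingly severe as the search matures.  'alpha' plays the
    role of the linearity factor and 'weight' the per-collision cost; both values
    here are illustrative, not the paper's."""
    severity = (generation / max_generations) ** alpha
    return raw_fitness - weight * severity * n_collisions

# The same two collisions cost little early on and much more near the end of the run.
print(penalized_fitness(1000.0, 2, generation=10,  max_generations=400))
print(penalized_fitness(1000.0, 2, generation=390, max_generations=400))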

E. Results

Since a real-coded representation was used, the genetic operators assumed a principal role in the solution process. Our crossover and mutation methods sought to mimic effective methods common in binary representations. In both operators, we utilized normal distributions around parents to generate children. One early method used in this domain was the BLX crossover method [109]. We found that this method was insufficient to improve our performance. We sought to develop a method analogous to crossover methods typically employed in binary representations. In these methods, the genetic material in subsequent generations is directly inherited from either parent. We attempted to utilize a method that mapped more readily onto that construct than does the BLX method. In our parent-weighted crossover method, the children inherited values that were generated from distributions centered on the parent values. This is similar to the parent-centric recombination [110], which tracks the direction in which children are moving from parents. Similarly, our mutation operator selected a value from a distribution around the allele value being modified. This mutation method proved to be much more effective than a method which generated completely new allele values. These methods improved our results significantly. The EA in our approach was a steady-state GA using a 50% population replacement strategy with a mutation probability of 0.1% and a crossover probability of 60%. In our experiments, the number of generations varied in the range [300, 500], and the population sizes varied in the range [25, 500].
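The following sketch shows one way to realize these operators under the stated idea (children drawn from normal distributions centered on parent values, and mutation sampling around the current allele); the standard deviations and the per-gene parent choice are our own illustrative assumptions.

import numpy as np

def parent_weighted_crossover(p1, p2, rng, sigma=0.05):
    """Each child gene is drawn from a normal distribution centered on the
    corresponding gene of one of the two parents, so genetic material is
    inherited from either parent, as in a binary-coded crossover."""
    pick = rng.random(p1.shape) < 0.5
    centers = np.where(pick, p1, p2)
    return centers + rng.normal(0.0, sigma, size=p1.shape)

def gaussian_allele_mutation(chrom, rng, p_mut=0.001, sigma=0.05):
    """Mutate each allele with small probability by sampling around its current
    value rather than drawing a completely new value."""
    mask = rng.random(chrom.shape) < p_mut
    return np.where(mask, chrom + rng.normal(0.0, sigma, size=chrom.shape), chrom)

rng = np.random.default_rng(1)
p1, p2 = rng.random(150), rng.random(150)   # 6 tasks x 25 satellites, normalized start times
child = gaussian_allele_mutation(parent_weighted_crossover(p1, p2, rng), rng)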

In parallel to our EA, the problem was also solved using a linear programming (LP) method. The LP was allowed to run until it found a provable objective function optimum of 69 366.58 min. This number represents the aggregate time coverage of a Walker constellation of 25 LEO satellites. The tuned EA typically returns values that are one to two minutes short of this value. The LP method had to run for approximately ten minutes in order to find a provable optimum. As shown in Table X, the EA ran for a comparable amount of time with a very large



TABLE X
AVERAGE RUNNING TIMES FOR CONVERGENCE RELATIVE TO POPULATION SIZE

population. One advantage provided by the EA is an end result that is a population, not just a point solution. This offers a decision maker various options that may be considered in light of constraints that may not be encoded in the EA, such as energy management restrictions.

F. Remarks

In this application, we have incorporated a priori information into the representation as static constraints, e.g., the schedule of known events. Dynamic constraints, which are dependent on each scheduled individual, are represented with a time-dependent penalty function. We performed several experiments to obtain the structure and parameters of the most suitable penalty function that would remove task collisions without inhibiting the search for good solutions.

IX. CONCLUSION

A. Summary

We have discussed the impact of the "no free lunch" theorem and its implied requirements for leveraging domain knowledge in evolutionary search. There have been many studies proving the applicability of the NFLT to problems that are closed under permutation [4]. There are still discussions as to its extensibility to other classes of search and decision problems. Nevertheless, we found that the solution of real-world problems via evolutionary search can greatly benefit from the incorporation of domain knowledge.

We have examined implicit and explicit knowledge representation mechanisms for evolutionary algorithms. We have also described offline and online metaheuristics as examples of explicit methods to leverage this knowledge. To illustrate the benefits of this knowledge integration approach, we have described four real-world applications: a discrete classification, a combinatorial assignment, a design optimization, and a scheduling problem.

The first application is an insurance underwriting problem in which we try to balance risk assessment and price competitiveness to determine the rate class of the applicant. In arriving at such a decision, we also want to find the best tradeoff between the percentage of insurance applications handled by the classifier (coverage) and its accuracy. In this application the object-level problem solver is a fuzzy knowledge-based classifier, while the EA is the metalevel search heuristic used to tune and maintain the classifier’s parameters.
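
A minimal sketch of the kind of single-valued fitness such a metalevel EA could maximize is given below; the weighted-sum form and the caller-supplied classify function are illustrative assumptions, not the aggregation actually used in the application.

```python
def metalevel_fitness(classify, params, cases, coverage_weight=0.5):
    """Illustrative single-valued coverage/accuracy tradeoff for a metalevel EA.
    `classify(params, case)` is a caller-supplied function returning a rate class, or
    None when the classifier abstains; each case carries its correct class in `label`.
    The weighted sum and the 0.5 weight are assumptions, not the paper's aggregation."""
    predictions = [(classify(params, case), case.label) for case in cases]
    decided = [(pred, label) for pred, label in predictions if pred is not None]
    coverage = len(decided) / len(cases)
    accuracy = sum(pred == label for pred, label in decided) / len(decided) if decided else 0.0
    return coverage_weight * coverage + (1.0 - coverage_weight) * accuracy
```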

The second application is a flexible manufacturing problem in which we optimize the time and cost of design and manufacturing of a given product, and we realize the optimal assignments of parts to a design, suppliers to parts, and design to a manufacturing facility. This approach leverages online metaheuristics as a way to encode domain knowledge. A fuzzy knowledge-based controller guides the evolutionary search for optimal planning decisions and controls its transition from exploration to exploitation.
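
The sketch below is a crude, non-fuzzy stand-in for such an online controller, illustrating only the idea of mapping a diversity proxy and run progress to mutation and crossover rates; the thresholds and rates are assumed values, not those of our controller.

```python
import statistics

def control_ea_parameters(population_fitnesses, generation, max_generations):
    """Crude stand-in for an online controller: derive mutation and crossover rates from
    the spread of fitness values (a diversity proxy) and from run progress.
    All thresholds and rates below are illustrative assumptions."""
    spread = statistics.pstdev(population_fitnesses)
    progress = generation / max_generations
    if spread < 0.01 and progress < 0.8:
        return {"mutation_rate": 0.05, "crossover_rate": 0.6}   # diversity collapsed early: explore
    if progress >= 0.8:
        return {"mutation_rate": 0.001, "crossover_rate": 0.9}  # late in the run: exploit
    return {"mutation_rate": 0.01, "crossover_rate": 0.7}       # default middle ground
```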

The third application is a multiobjective lamp-spectrum optimization problem. This approach leverages domain knowledge via the use of customized mutation operators. Such incorporation has a profound effect on the quality and speed of evolutionary search in the exploration of the Pareto front.

The fourth application describes a scheduling problem for the maintenance tasks of a constellation of 25 low earth orbit satellites. In this approach, the domain knowledge is embedded in the design of a structured chromosome, a collection of time-value transformations to reflect static constraints, and a time-dependent penalty function to prevent schedule collisions. The use of domain knowledge in the design of the EA’s data structures allowed the evolutionary search to focus on the satisfaction of dynamic constraints and resulted in a considerable reduction of the search space.

“Knowledge is power” was a common AI slogan of the 1980s. We claim that “leveraged domain-knowledge is power” is still very relevant from the perspective of evolutionary search. As we address control, optimization, and other decision-making problems, we need to exploit domain knowledge to properly design, initialize, and modify the structures and parameters of the problem solvers. The implication for future work is clear: researchers, and particularly practitioners, interested in solving difficult problems quickly and reliably are well served to eschew generic approaches when domain knowledge can be leveraged directly to make a more efficient evolutionary algorithm tailored to a specific problem.

If “leveraged domain-knowledge is power,” then proper domain-knowledge representation is key to the success of this endeavor. We have illustrated several approaches, both implicit and explicit, that support this statement. We have stressed the usefulness of fuzzy systems to represent domain knowledge and the synergy derived from hybrid soft computing systems that are based on a loose or tight integration of their constituent technologies. This integration provides complementary reasoning and search methods that allow us to combine domain knowledge and empirical data to develop flexible computing tools and solve complex problems. Thus, we consider soft computing as a framework in which we can derive models that capture heuristics for object-level tasks or metaheuristics for metalevel tasks.

B. Closing Remarks on Relevant Work

In closing, we would like to briefly describe one of our research and development efforts wherein the problem would be unsolvable without the suitable incorporation of domain-specific knowledge within the evolutionary optimization framework [19].

The problem is the optimal allocation of available financial resources to a diversified portfolio of long- and short-term financial assets in accordance with risk, liability, and regulatory constraints. The problems we have tackled typically involve several hundreds to thousands of financial assets, and investment decisions of several billion dollars. In this application, the goal is the simultaneous maximization of return measures and the minimization of risk measures for the portfolio of assets. The return and risk measures are complex linear or nonlinear functions of a variety of market and asset factors. The principal characteristic of this problem class is the presence of a large number of linear allocation constraints, and multiple linear and nonlinear objectives defined over the resulting feasible space.

We have developed and tested evolutionary multiobjective optimization algorithms that are able to robustly identify the Pareto frontier of optimal portfolios defined over the space of returns and risks. However, the key challenge in solving this problem is presented by the large number of linear allocation constraints. The feasible space defined by these constraints is a high-dimensional real-valued space (up to 2000 dimensions) and a highly compact convex polytope, making for an enormously challenging constraint satisfaction problem.

Linear programming methodologies can routinely handle problems with thousands of linear constraints, but they are unable to tackle nonlinear objectives. We leveraged knowledge of the geometrical nature of the feasible space by designing a specialized LP-based algorithm that robustly samples the boundary vertices of the convex feasible space. These extremity samples are seeded into the initial population and then exclusively used by the evolutionary multiobjective algorithm to generate interior points (via convex crossover) that are always geometrically feasible.
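
The feasibility guarantee rests on convexity: any convex combination of feasible points lies inside the polytope. A minimal sketch of such a convex crossover is shown below; the LP-based vertex sampler mentioned in the usage comment (sample_polytope_vertices) is a hypothetical name, and the sampler itself is not shown.

```python
import random

def convex_crossover(parent_a, parent_b):
    """Blend two feasible allocation vectors. The feasible region defined by the linear
    allocation constraints is a convex polytope, so any convex combination of two feasible
    points is itself feasible and no repair step is needed."""
    alpha = random.random()  # mixing coefficient in [0, 1]
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(parent_a, parent_b)]

# Usage sketch: seed the initial population with boundary vertices returned by an LP-based
# sampler (a hypothetical sample_polytope_vertices routine), then generate offspring only
# through convex_crossover so that every interior point remains geometrically feasible.
```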

In this specific application, we have explicitly incorporated domain knowledge via a specialized LP-based population initializer and convex crossover operators that always generate geometrically feasible interior points. This application further reinforces our central thesis that evolutionary algorithms + domain knowledge = real-world evolutionary computation.

REFERENCES

[1] M. Gondran and M. Minoux, Graphs and Algorithms. New York: Wiley, 1984.

[2] D. H. Wolpert and W. G. Macready, “No free lunch theorems for search,” Santa Fe Institute, Santa Fe, NM, Tech. Rep. SFI-TR-95-02-010, 1995.

[3] ——, “No free lunch theorems for optimization,” IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, 1997.

[4] C. R. Reeves and J. E. Rowe, Genetic Algorithms – Principles and Perspectives: A Guide to GA Theory. Amsterdam, The Netherlands: Kluwer Academic, 2003, ch. 4.

[5] Y.-C. Ho and D. L. Pepyne, “Simple explanation of the no free lunch theorem of optimization,” Cybern. Syst. Anal., vol. 38, no. 2, pp. 292–298, 2002.

[6] L. A. Zadeh, “Fuzzy logic and soft computing: Issues, contentions and perspectives,” in Proc. IIZUKA’94: Third Int. Conf. Fuzzy Logic, Neural Nets and Soft Computing, Iizuka, Japan, 1994, pp. 1–2.

[7] P. Bonissone, “Automating the quality assurance of an on-line knowledge-based classifier by fusing multiple off-line classifiers,” in Proc. IPMU, Perugia, Italy, Jul. 4–9, 2004, pp. 309–316.

[8] ——, “Soft computing: The convergence of emerging reasoning technologies,” Soft Comput., vol. 1, no. 1, pp. 6–18, 1997.

[9] P. Bonissone, Y.-T. Chen, K. Goebel, and P. Khedkar, “Hybrid soft computing systems: Industrial and commercial applications,” Proc. IEEE, vol. 87, pp. 1641–1667, 1999.

[10] T. Pal and N. R. Pal, “SOGARG: A self-organized genetic algorithm-based rule generation scheme for fuzzy controllers,” IEEE Trans. Evol. Comput., vol. 7, pp. 397–415, 2003.

[11] P. Bonissone, “The life cycle of a fuzzy knowledge-based classifier,” in Proc. 2003 North Amer. Fuzzy Information Processing Soc., Chicago, IL, Jul. 2003, pp. 488–494.

[12] P. Bonissone, A. Varma, and K. Aggour, “An evolutionary process for designing and maintaining a fuzzy instance-based model (FIM),” in Proc. 2005 Genetic Fuzzy Systems, Granada, Spain, Mar. 2005.

[13] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. New York: Springer-Verlag, 1996.

[14] R. C. Prim, “Shortest connection networks and some generalizations,” Bell Syst. Tech. J., vol. 36, pp. 1389–1401, 1957.

[15] E. Vonk, L. C. Jain, and R. P. Johnson, Automatic Generation of Neural Network Architecture Using Evolutionary Computation. Singapore: World Scientific, 1997.

[16] J. T. Richardson, M. R. Palmer, G. E. Liepins, and M. Hilliard, “Some guidelines for genetic algorithms with penalty functions,” in Proc. 3rd Int. Conf. Genetic Algorithms, San Mateo, CA, 1989, pp. 191–197.

[17] W. Siedlecki and J. Sklansky, “Constrained genetic optimization via dynamic reward-penalty balancing and its use in pattern recognition,” in Proc. 3rd Int. Conf. Genetic Algorithms, San Mateo, CA, 1989, pp. 141–150.

[18] J. Kubalik and J. Lazansky, “Genetic algorithms and their tuning,” in Computing Anticipatory Systems, D. M. Dubois, Ed. Liege, Belgium: American Institute of Physics, 1999, pp. 217–229.

[19] R. Subbu, P. Bonissone, N. Eklund, S. Bollapragada, and K. Chalermkraivuth, “Multiobjective financial portfolio design: A hybrid evolutionary approach,” in Proc. IEEE Int. Congr. Evol. Comput., Edinburgh, U.K., Sep. 2–5, 2005, pp. 1722–1729.

[20] P. Moscato, “On evolution, search, optimization, genetic algorithms and martial arts: Toward memetic algorithms,” Caltech Concurrent Computation Program, Caltech, CA, C3P Rep. 826, 1989.

[21] ——, “Memetic algorithms: A short introduction,” in New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. London, U.K.: McGraw-Hill, 1999, pp. 219–234.

[22] J.-M. Renders and H. Bersini, “Hybridizing genetic algorithms with hill-climbing methods for global optimization: Two possible ways,” in Proc. Int. Conf. Evol. Comput., 1994, pp. 312–317.

[23] R. Subbu and P. Bonissone, “A retrospective view of fuzzy control of evolutionary algorithm resources,” in Proc. IEEE Int. Conf. Fuzzy Syst., St. Louis, MO, May 25, 2003, pp. 143–148.

[24] E. Eiben, R. Hinterding, and Z. Michalewicz, “Parameter control in evolutionary algorithms,” IEEE Trans. Evol. Comput., vol. 3, no. 2, pp. 124–141, 1999.

[25] F. Herrera and M. Lozano, “Fuzzy adaptive genetic algorithms: Design, taxonomy, and future directions,” Soft Comput., vol. 7, no. 8, pp. 545–562, 2003.

[26] P. Bonissone, “Soft computing and meta-heuristics: Using knowledge and reasoning to control search and vice-versa,” in Proc. SPIE Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation VI, vol. 5200, San Diego, CA, Aug. 2003, pp. 133–149.

[27] K. A. De Jong, “An analysis of the behavior of a class of genetic adaptive systems,” Ph.D. dissertation, Univ. of Michigan, Ann Arbor, MI, 1975.

[28] J. J. Grefenstette, “Optimization of control parameters for genetic algorithms,” IEEE Trans. Syst., Man, Cybern., vol. 16, no. 1, pp. 122–128, 1986.

[29] F. Xue, “Fuzzy logic controlled multi-objective differential evolution,” Rensselaer Polytechnic Inst., Troy, NY, Electronics Agile Manufacturing Research Institute Research Rep. ER03-4, 2003.

[30] T. Bäck, Evolutionary Algorithms in Theory and Practice. New York: Oxford Univ. Press, 1996.

[31] M. Schut and J. Wooldridge, “The control of reasoning in resource-bounded agents,” Knowl. Eng. Rev., vol. 16, no. 3, pp. 215–240, 2000.

[32] E. Melis and A. Meier, “Proof planning with multiple strategies,” in Lecture Notes in Computer Science. Berlin, Germany: Springer-Verlag, 2000, vol. 1861, pp. 644–659.

[33] M. Cox, “Machines that forget: Learning from retrieval failure of mis-indexed explanations,” in Proc. 16th Annu. Conf. Cognitive Science Soc., 1994, pp. 225–230.

[34] S. Fox, “Introspective learning for case-based reasoning,” Ph.D. dissertation, Dept. of Computer Science, Indiana Univ., Bloomington, IN, 1995.

[35] P. Bonissone and P. Halverson, “Time-constrained reasoning under uncertainty,” J. Real-Time Syst., vol. 2, no. 1/2, pp. 25–45, 1990.

[36] P. P. Bonissone, P. S. Khedkar, and Y. Chen, “Genetic algorithms for automated tuning of fuzzy controllers: A transportation application,” in Proc. IEEE Conf. Fuzzy Syst., New Orleans, LA, 1996, pp. 674–680.

[37] X. Yao, “Evolving artificial neural networks,” Proc. IEEE, vol. 87, pp. 1423–1447, 1999.

[38] P. Larrañaga and J. Lozano, “Synergies between evolutionary computation and probabilistic graphical models,” Int. J. Approx. Reason., vol. 31, no. 3, pp. 155–156, Nov. 2002.

[39] P. Bonissone, V. Badami, K. Chiang, P. Khedkar, K. Marcelle, and M. Schutten, “Industrial applications of fuzzy logic at General Electric,” Proc. IEEE, vol. 83, pp. 450–465, 1995.

[40] S. Guillaume, “Designing fuzzy inference systems from data: An interpretability-oriented review,” IEEE Trans. Fuzzy Syst., vol. 9, no. 3, pp. 426–443, 2001.

[41] J. Casillas, O. Cordón, F. Herrera, and L. Magdalena, Eds., “Interpretability issues in fuzzy modeling, and accuracy improvements in linguistic fuzzy modeling,” in Studies in Fuzziness and Soft Computing. Berlin, Germany: Springer-Verlag, 2003, vol. 128–129.

[42] L. A. Zadeh, “Fuzzy sets,” Inf. Contr., vol. 8, pp. 338–353, 1965.

[43] N. Rescher, Many-Valued Logic. New York: McGraw-Hill, 1969.

[44] M. Black, “Vagueness: An exercise in logical analysis,” Phil. Sci., vol. 4, pp. 427–455, 1937.

[45] E. H. Ruspini, P. P. Bonissone, and W. Pedrycz, Handbook of Fuzzy Computation. Bristol, U.K.: Institute of Physics, 1998.

[46] L. A. Zadeh, “Quantitative fuzzy semantics,” Inf. Sci., vol. 3, pp. 159–176, 1971.

[47] ——, “The concept of a linguistic variable and its application to approximate reasoning, Part 1,” Inf. Sci., vol. 8, pp. 199–249, 1975.

[48] Y.-M. Pok and J.-X. Xu, “Why is fuzzy control robust,” in Proc. 3rd IEEE Int. Conf. Fuzzy Syst., Orlando, FL, 1994, pp. 1018–1022.

[49] L. A. Zadeh, “Outline of a new approach to the analysis of complex systems and decision processes,” IEEE Trans. Syst., Man, Cybern., vol. SMC-3, pp. 28–44, 1973.

[50] E. H. Mamdani and S. Assilian, “An experiment in linguistic synthesis with a fuzzy logic controller,” Int. J. Man Mach. Studies, vol. 7, no. 1, pp. 1–13, 1975.

[51] T. Takagi and M. Sugeno, “Fuzzy identification of systems and its applications to modeling and control,” IEEE Trans. Syst., Man, Cybern., vol. SMC-15, pp. 116–132, 1985.

[52] R. Babuska, R. Jager, and H. B. Verbruggen, “Interpolation issues in Sugeno-Takagi reasoning,” in Proc. 3rd IEEE Int. Conf. Fuzzy Syst., Orlando, FL, 1994, pp. 859–863.

[53] H. Bersini, G. Bontempi, and C. Decaestecker, “Comparing RBF and fuzzy inference systems on theoretical and practical basis,” in Proc. Int. Conf. Artificial Neural Networks, vol. 1, Paris, France, 1995, pp. 169–174.

[54] T. J. Procyck and E. H. Mamdani, “A linguistic self-organizing process controller,” Automatica, vol. 15, pp. 15–30, 1979.

[55] F. Herrera and J. L. Verdegay, Eds., “Genetic algorithms and soft computing,” in Studies in Fuzziness and Soft Computing. Berlin, Germany: Physica-Verlag, 1996, vol. 8.

[56] J. S. R. Jang, “ANFIS: Adaptive-network-based fuzzy inference system,” IEEE Trans. Syst., Man, Cybern., vol. 23, no. 3, pp. 665–685, 1993.

[57] O. Cordón, F. Herrera, and M. Lozano, “A classified review on the combination fuzzy logic-genetic algorithms bibliography,” Dept. of Computer Science and A.I., Univ. of Granada, Granada, Spain, Tech. Rep. DECSAI-95129, 1995.

[58] C. L. Karr, “Design of an adaptive fuzzy logic controller using genetic algorithms,” in Proc. Int. Conf. Genetic Algorithms, San Diego, CA, 1991, pp. 450–456.

[59] M. A. Lee and H. Takagi, “Dynamic control of genetic algorithm using fuzzy logic techniques,” in Proc. 5th Int. Conf. Genetic Algorithms, CA, 1993, pp. 76–83.

[60] H. Surmann, A. Kanstein, and K. Goser, “Self-organizing and genetic algorithms for an automatic design of fuzzy control and decision systems,” in Proc. EUFIT, Aachen, Germany, 1993, pp. 1097–1104.

[61] J. Kinzel, F. Klawonn, and R. Kruse, “Modifications of genetic algorithms for designing and optimizing fuzzy controllers,” in Proc. 1st IEEE Conf. Evol. Comput., Orlando, FL, 1994, pp. 28–33.

[62] D. Burkhardt and P. P. Bonissone, “Automated fuzzy knowledge base generation and tuning,” in Proc. 1st IEEE Int. Conf. Fuzzy Syst., San Diego, CA, 1992, pp. 179–188.

[63] F. Herrera, M. Lozano, and J. L. Verdegay, “Tuning fuzzy logic controllers by genetic algorithms,” Int. J. Approx. Reason., vol. 12, no. 3/4, pp. 299–315, 1995.

[64] P. P. Bonissone, P. S. Khedkar, and Y.-T. Chen, “Genetic algorithms for automated tuning of fuzzy controllers: A transportation application,” in Proc. 5th IEEE Int. Conf. Fuzzy Syst., New Orleans, LA, 1996, pp. 674–680.

[65] L. Zheng, “A practical guide to tune proportional and integral (PI) like fuzzy controllers,” in Proc. 1st IEEE Int. Conf. Fuzzy Syst., San Diego, CA, 1992, pp. 633–640.

[66] E. C. C. Tsang and D. S. Yeung, “Optimizing fuzzy knowledge base by genetic algorithms and neural networks,” in Proc. IEEE Int. Conf. Syst., Man, Cybern., Tokyo, Japan, 1999, pp. 367–371.

[67] A. Grauel, I. Renners, and L. A. Ludwig, “Optimizing fuzzy classifiers by evolutionary algorithms,” in Proc. IEEE 4th Int. Conf. Knowledge-Based Intelligent Engineering Systems and Allied Technologies, 2000, pp. 353–356.

[68] S.-Y. Ho, T.-K. Chen, and S.-J. Ho, “Designing an efficient fuzzy classifier using an intelligent genetic algorithm,” in Proc. IEEE 24th Annu. Int. Computer Software and Applications Conf. (COMPSAC), 2000, pp. 293–298.

[69] H. Ishibuchi, T. Nakashima, and T. Murata, “Genetic-algorithm-based approaches to the design of fuzzy systems for multi-dimensional pattern classification problems,” in Proc. IEEE Int. Conf. Evolutionary Computation, 1996, pp. 229–234.

[70] C.-C. Wong, C.-C. Chen, and B.-C. Lin, “Design of fuzzy classification system using genetic algorithms,” in Proc. IEEE 9th Int. Conf. Fuzzy Systems, San Antonio, TX, 2000, pp. 297–301.

[71] E. Collins, S. Ghosh, and C. Scofield, “An application of a multiple neural network learning system to emulation of mortgage underwriting judgments,” in Proc. IEEE Int. Conf. Neural Networks, 1988, pp. 351–357.

[72] C. Nikolopoulos and S. Duvendack, “A hybrid machine learning system and its application to insurance underwriting,” in Proc. IEEE Int. Congr. Evol. Comput., 1994, pp. 692–695.

[73] P. Bonissone, R. Subbu, and K. Aggour, “Evolutionary optimization of fuzzy decision systems for automated insurance underwriting,” in Proc. 2002 IEEE World Conf. Comput. Intell., Honolulu, HI, 2002, pp. 1003–1008.

[74] R. Subbu, C. Hocaoglu, and A. C. Sanderson, “A virtual design environment using evolutionary agents,” in Proc. IEEE Int. Conf. Robotics and Automation, Leuven, Belgium, 1998, pp. 247–253.

[75] R. Subbu, A. C. Sanderson, and P. P. Bonissone, “Fuzzy logic controlled genetic algorithms versus tuned genetic algorithms: An agile manufacturing application,” in Proc. ISIC/CIRA/ISAS Conf., Gaithersburg, MD, 1998, pp. 434–440.

[76] F. Herrera and M. Lozano, “Adaptation of genetic algorithm parameters based on fuzzy logic controllers,” in Genetic Algorithms and Soft Computing, F. Herrera and J. L. Verdegay, Eds. Berlin, Germany: Physica-Verlag, 1996, pp. 95–129.

[77] ——, “Adaptive genetic operators based on coevolution with fuzzy behaviors,” IEEE Trans. Evol. Comput., vol. 5, no. 2, pp. 149–165, 2001.

[78] A. Tettamanzi and M. Tomassini, “Fuzzy evolutionary algorithms,” in Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems. Berlin, Germany: Springer-Verlag, 2001, ch. 7.

[79] R. Palmer, “Sliding mode fuzzy control,” in Proc. IEEE Int. Conf. Fuzzy Systems, 1992, pp. 519–526.

[80] P. Bonissone and K. H. Chiang, “Fuzzy logic controllers: From development to deployment,” in Proc. IEEE Int. Conf. Neural Networks, 1993, pp. 610–619.

[81] J. E. Slotine and W. Li, Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice-Hall, 1991.

[82] P. Bonissone, “A compiler for fuzzy logic controllers,” in Proc. Int. Fuzzy Eng. Symp., 1991, pp. 706–717.

[83] “Memorandum of understanding for the implementation of a European concerted research action designated as COST Action 529: Efficient lighting for the 21st century,” in European Co-Operation in the Field of Scientific and Technical Research: COST, 2001.

[84] G. Wyszecki and W. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. New York: Wiley, 1982.

[85] M. Siminovitch, C. Gould, and E. Page, “A high-efficiency indirect lighting system utilizing the solar 1000 sulfur lamp,” in Proc. Right Light 4 Conf., vol. 2, Copenhagen, Denmark, Nov. 19–21, 1997, pp. 35–40.

[86] B. Turner, M. Ury, Y. Leng, and W. Love, “Sulfur lamps – Progress in their development,” J. Illum. Eng. Soc., vol. 26, pp. 10–16, 1997.

[87] D. MacAdam, “The theory of the maximum visual efficiency of colored materials,” J. Opt. Soc. Amer., vol. 25, pp. 249–252, 1935.

[88] ——, “Maximum visual efficiency of colored materials,” J. Opt. Soc. Amer., vol. 25, pp. 361–367, 1935.

[89] M. Koedam and J. Opstelten, “Measurement and computer-aided optimization of spectral power distributions,” Light. Res. Technol., vol. 3, no. 3, pp. 205–210, 1971.

[90] M. Koedam, J. Opstelten, and D. Radielovic, “The application of simulated spectral power distributions in lamp development,” J. Illum. Eng. Soc., vol. 1, pp. 285–289, 1972.

[91] W. Thornton, “Luminosity and color-rendering capability of white light,” J. Opt. Soc. Amer., vol. 61, no. 9, pp. 1155–1163, 1971.

[92] H. Einhorn and F. Einhorn, “Inherent efficiency and color rendering of white light sources,” Illum. Eng., vol. 62, pp. 154–158, 1967.

[93] W. Walter, “Optimum phosphor blends for fluorescent lamps,” Appl. Opt., vol. 10, pp. 1108–1113, 1971.

[94] H. Haft and W. Thornton, “High performance fluorescent lamps,” J. Illum. Eng. Soc., vol. 1, pp. 29–35, 1972.

[95] J. Opstelten, D. Radielovic, and J. Verstegen, “Optimum spectra for light sources,” Phil. Tech. Rev., vol. 35, pp. 361–370, 1975.

[96] W. Walter, “Optimum lamp spectra,” J. Illum. Eng. Soc., vol. 7, no. 1, pp. 66–73, 1978.

[97] N. Ohta and G. Wyszecki, “Designing illuminants that render given objects in prescribed colors,” J. Opt. Soc. Amer., vol. 66, pp. 269–275, 1976.

[98] ——, “Color changes caused by specified changes in the illuminant,” Color Res. Appl., vol. 1, pp. 17–21, 1976.

[99] D. DuPont, “Study of the reconstruction of reflectance curves based on tristimulus values: Comparison of methods of optimization,” Color Res. Appl., vol. 27, pp. 88–99, 2002.

[100] N. Eklund and M. Embrechts, “GA-based multi-objective optimization of visible spectra for lamp design,” in Smart Engineering System Design: Neural Networks, Fuzzy Logic, Evolutionary Programming, Data Mining and Complex Systems, C. H. Dagli, A. L. Buczak, J. Ghosh, M. J. Embrechts, and O. Ersoy, Eds. New York: ASME Press, 1999, pp. 451–456.

[101] ——, “Determining the color-efficiency Pareto optimal surface for filtered light sources,” in Evolutionary Multi-Criterion Optimization, Zitzler, Deb, Thiele, Coello, and Corne, Eds. Berlin, Germany: Springer-Verlag, 2001, vol. 1993, Lecture Notes in Computer Science, pp. 603–611.

[102] N. Eklund, “Multiobjective visible spectrum optimization: A genetic algorithm approach,” Ph.D. dissertation, Engineering Science Dept., Rensselaer Polytechnic Inst., Troy, NY, 2002.

[103] T. Kiehl, “Genetic algorithms for autonomous satellite control,” M.S. thesis, Computer Science Dept., Rensselaer Polytechnic Inst., Troy, NY, 1999.

[104] K.-D. Lee, H.-J. Lee, Y.-H. Cho, and D. G. Oh, “Throughput-maximizing timeslot scheduling for interactive satellite multiclass services,” IEEE Commun. Lett., vol. 7, pp. 263–265, 2003.

[105] N. Funabiki and S. Nishikawa, “A binary Hopfield neural-network approach for satellite broadcast scheduling problems,” IEEE Trans. Neural Netw., vol. 8, pp. 441–445, 1997.

[106] T. Tambouratzis, “Decomposition co-ordination artificial neural network for satellite broadcast scheduling,” Electron. Lett., vol. 34, no. 15, pp. 1503–1504, 1998.

[107] C. Plaunt, A. K. Jonsson, and J. Frank, “Run-time satellite telecommunications call handling as dynamic constraint satisfaction,” in Proc. IEEE Aerospace Conf., vol. 5, 1999, pp. 165–174.

[108] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.

[109] F. Herrera, M. Lozano, and J. L. Verdegay, “Tackling real-coded genetic algorithms: Operators and tools for behavioral analysis,” Artif. Intell. Rev., vol. 12, no. 4, pp. 265–319, 1998.

[110] K. Deb, A. Anand, and D. Joshi, “A computationally efficient evolutionary algorithm for real-parameter optimization,” Evol. Comput., vol. 10, no. 4, pp. 371–395, 2002.

Piero P. Bonissone (S’75–M’79–SM’02–F’04) received the Ph.D. degree in electrical engineering and computer science from the University of California at Berkeley in 1979.

Since then, he has been a Computer Scientist with General Electric Global Research, where he has carried out research in artificial intelligence, expert systems, fuzzy sets, evolutionary algorithms, and soft computing, ranging from the control of turbo-shaft engines to the use of fuzzy logic in dishwashers, locomotives, power supplies, and financial applications. He has developed case-based and fuzzy-neural systems to accurately estimate the value of residential properties when used as mortgage collaterals and to predict paper-web breakages in paper mills. He has led a large internal project that uses a fuzzy-rule base to partially automate the underwriting process of life insurance applications. Recently he led a soft computing group in the development of prognostics of products’ remaining life. He is also an Adjunct Professor with Rensselaer Polytechnic Institute, Troy, NY. Since 1993, he has been Editor-in-Chief of the International Journal of Approximate Reasoning. He has coedited four books and published more than 120 articles. He received 33 U.S. patents (with more than 25 pending). He has been the Keynote Speaker of many important conferences in the artificial intelligence and soft computing field.

Dr. Bonissone is a Fellow of the American Association for Artificial Intelligence and the International Fuzzy Systems Association. In 1993, he received the Coolidge Fellowship Award from GE CRD for overall technical accomplishments. From 1993 to 2000, he was Vice-President of Finance of the IEEE NNC. In 2002, he became President of the IEEE Neural Networks Society. He is Vice-President of Finances of the IEEE Computational Intelligence Society.

Raj Subbu (M’00–SM’04) received the Ph.D. degree in computer and systems engineering from Rensselaer Polytechnic Institute (RPI), Troy, NY, in 2000.

Since 2001, he has been a Senior Research Scientist in the Computing and Decision Sciences Group, General Electric Global Research Center, Niskayuna, NY. His research interests are in the areas of information systems, control systems, novel multiobjective optimization methodologies, and soft computing. At General Electric, he serves as a principal technologist at the intersection of these areas for several high-business-impact projects. In addition, he was Co-Principal Investigator in a multiyear (2001–2004) National Science Foundation funded project in scalable enterprise decision systems, in collaboration with RPI. He has authored over 30 publications and proceedings, has received two U.S. patents, and has over 16 U.S. patents pending. He is the principal coauthor of Network-Based Distributed Planning Using Coevolutionary Algorithms (Singapore: World Scientific, 2004).

Dr. Subbu received the Best Paper Award at the IEEE International Conference on Fuzzy Systems in 2003. He is an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS and PART C: APPLICATIONS AND REVIEWS.

Neil Eklund (S’91–M’98) received the B.S. degree, two M.S. degrees, and the Ph.D. degree from Rensselaer Polytechnic Institute, Troy, NY, in 1991, 1998, and 2002, respectively.

He was a Research Scientist with the Lighting Research Center from 1993 to 1999. He was in the Network Planning Department of PSINet from 1999 to 2002 before joining General Electric Global Research, Niskayuna, NY, in 2002. He has worked on a wide variety of research projects, including early detection of cataract using intraocular photoluminescence, multiobjective bond portfolio optimization, and on-wing fault detection and accommodation in gas turbine aircraft engines. His current research interests involve developing hybrid soft/hard computing approaches for real-world problems, particularly real-time monitoring, diagnostics, and prognostics.

Thomas R. Kiehl received the B.S. and M.S. degrees in computer science from Rensselaer Polytechnic Institute, Troy, NY, in 1996 and 1999, respectively, where he is currently pursuing a Ph.D. in multidisciplinary science.

From 1996 to 2005, he was a Computer Scientist with General Electric Global Research, where he worked on a variety of applications of genetic algorithms from scheduling to combinatorial chemistry, systems biology, and machine learning. In his tenure there he also worked in systems architecture/security and biological simulation.

Mr. Kiehl is a National Science Foundation Graduate Research Fellow. He is a member of IEEE CIS and ACM SIGEVO.