
Page 1:

Graduate School of Information, Production and Systems, Waseda University

2. Constrained Optimization

Page 2:

2. Constrained Optimization

1. Unconstrained Optimization
1.1 Introduction to Unconstrained Optimization
1.2 Ackley's Function
1.3 Genetic Approach for Ackley's Function

2. Nonlinear Programming
2.1 Handling Constraints
2.2 Penalty Function
2.3 Genetic Operators
2.4 Numerical Examples

3. Stochastic Optimization
3.1 Stochastic Programming Model
3.2 Monte Carlo Simulation
3.3 Genetic Algorithm for Stochastic Optimization Problems

Page 3:

2. Constrained Optimization

4. Nonlinear Goal Programming
4.1 Formulation of Nonlinear Goal Programming
4.2 Genetic Approach for Nonlinear Goal Programming

5. Interval Programming
5.1 Interval Programming
5.2 Interval Inequality
5.3 Order Relation of Interval
5.4 Transforming Interval Programming
5.5 Pareto Solution for Interval Programming
5.6 GA Procedure for Interval Programming

Page 4:

2. Constrained Optimization

1. Unconstrained Optimization
1.1 Introduction to Unconstrained Optimization
1.2 Ackley's Function
1.3 Genetic Approach for Ackley's Function

2. Nonlinear Programming

3. Stochastic Optimization

4. Nonlinear Goal Programming

5. Interval Programming

Page 5:

1.1 Introduction to Unconstrained Optimization

For a real function of several real variables, we want to find an argument vector which corresponds to a minimal function value. The optimization problem:

Find x* = argmin_x { f(x) | xL ≤ x ≤ xR }

where the function f is called the objective function and x* is the minimizer. In some cases we want a maximum of a function; this is easily determined if we find a minimum of the function with the opposite sign.

Optimization models play an important role in many branches of science and engineering: economics, operations research, network analysis, engineering design, and electrical systems.

Page 6:

1.1 Introduction to Unconstrained Optimization

Example: In this example we consider functions of one variable. The function f(x) = (x - x*)^2 has one unique minimum x*.

Periodic optimization: the function f(x) = -2cos(x - x*) has many minima, at x = x* + 2pπ, where p is an integer.

[Figure: plots of the two functions over -4 ≤ x ≤ 4.]

Page 7:

1.1 Introduction to Unconstrained Optimization

Multi-modal optimization: the function f(x) = 0.015(x - x*)^2 - 2cos(x - x*) has a unique global minimum x*. Besides that, it also has several so-called local minima, such as x1 and x2, each giving the minimal function value inside a certain region.

[Figure: plot of f(x) showing the global minimum x* and the local minima x1 and x2.]

Page 8:

1.1 Introduction to Unconstrained Optimization

The ideal situation for optimization computations is that the objective function has a unique minimum. We call this the global minimum.

In some cases the objective function has several (or even infinitely many) minima. In such problems it may be sufficient to find one of these minima.

In many objective functions from applications we have a global minimum and several local minima. It is very difficult to develop methods which can find the global minimum with certainty in this situation.

The methods described here can find a local minimum of the objective function. When a local minimum has been discovered, we do not know whether it is the global minimum or one of the local minima. We cannot even be sure that the optimization method will find the local minimum closest to the starting point.

In order to explore several local minima we can try several runs with different starting points, or better still, examine the intermediate results produced by a global optimization method.

Page 9:

1.1 Introduction to Unconstrained Optimization

In general, an unconstrained optimization problem can be mathematically represented as follows [Luenberger, D.: Linear and Nonlinear Programming, 2nd ed., Addison-Wesley, Reading, MA, 1984]:

min f(x)
s.t. x ∈ Ω

where f is a real-valued function and Ω, the feasible set, is a subset of E^n. When attention is restricted to the case where Ω = E^n, it corresponds to the completely unconstrained case.

A point x* is said to be a local minimum of f over Ω if there is an ε > 0 such that f(x) ≥ f(x*) for all x ∈ Ω within a distance ε of x*.

A point x* is said to be a global minimum of f over Ω if f(x) ≥ f(x*) for all x ∈ Ω.

Page 10:

1.2 Ackley's Function

Ackley's function is a continuous and multimodal test function obtained by modulating an exponential function with a cosine wave of moderate amplitude [Ackley, D.: A Connectionist Machine for Genetic Hillclimbing, Kluwer Academic Publishers, Boston, 1987].

As Ackley pointed out, this function causes moderate complications to the search: a strictly local optimization algorithm that performs hill-climbing would surely get trapped in a local optimum, but a search strategy that scans a slightly bigger neighborhood would be able to cross intervening valleys towards increasingly better optima.

Therefore, Ackley's function provides a reasonable test case for genetic search.

Page 11:

1.3 Genetic Approach for Ackley's Function

To minimize Ackley's function, we simply use the following implementation of the Genetic Algorithm (GA): real-number encoding, arithmetic crossover, nonuniform mutation, and top popSize selection.

Real-Number Encoding
v = [x1, x2, ..., xn], where xi is a real number, i = 1, 2, ..., n.

Arithmetic Crossover
The arithmetic crossover is defined as the combination of two chromosomes v1 and v2:

v1' = λ·v1 + (1 - λ)·v2
v2' = λ·v2 + (1 - λ)·v1

where λ ∈ (0, 1).
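To make the operator concrete, here is a minimal Python sketch of arithmetic crossover on real-coded chromosomes; the function name and the choice of drawing a fresh λ per crossover are illustrative assumptions, not prescribed by the slides:

import random

def arithmetic_crossover(v1, v2):
    # v1' = lam*v1 + (1 - lam)*v2 and v2' = lam*v2 + (1 - lam)*v1, lam in (0, 1)
    lam = random.random()
    c1 = [lam * a + (1 - lam) * b for a, b in zip(v1, v2)]
    c2 = [lam * b + (1 - lam) * a for a, b in zip(v1, v2)]
    return c1, c2

# Example with two chromosomes v = [x1, x2]:
print(arithmetic_crossover([4.954222, 0.169225], [-4.806207, -1.630757]))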

Page 12:

1.3 Genetic Approach for Ackley's Function

Nonuniform Mutation
For a given parent v, if the element xk is selected for mutation, the resulting offspring is

v' = [x1, ..., xk', ..., xn]

where xk' is randomly selected from two possible choices:

xk' = xk + Δ(t, xkU - xk)  or  xk' = xk - Δ(t, xk - xkL)

where xkU and xkL are the upper and lower bounds for xk. The function Δ(t, y) returns a value in the range [0, y] such that the value of Δ(t, y) approaches 0 as t increases (t is the generation number):

Δ(t, y) = y · (1 - r^((1 - t/T)^b))

where r is a random number from [0, 1], T is the maximal generation number, and b is a parameter determining the degree of nonuniformity.
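A minimal Python sketch of this mutation operator follows; the defaults T = 1000 and b = 3.0 are illustrative (the slides fix T via maxGen but do not specify b):

import random

def delta(t, y, T=1000, b=3.0):
    # Delta(t, y) = y * (1 - r^((1 - t/T)^b)); shrinks toward 0 as t -> T
    r = random.random()
    return y * (1.0 - r ** ((1.0 - t / T) ** b))

def nonuniform_mutation(v, k, t, lower, upper, T=1000, b=3.0):
    # Mutate gene k of chromosome v at generation t, staying in [lower, upper]
    x = list(v)
    if random.random() < 0.5:
        x[k] += delta(t, upper - x[k], T, b)   # move toward the upper bound
    else:
        x[k] -= delta(t, x[k] - lower, T, b)   # move toward the lower bound
    return x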

Page 13:

1.3 Genetic Approach for Ackley's Function

Top popSize Selection
Top popSize selection produces the next generation by selecting the best popSize chromosomes from among parents and offspring. For this minimization problem, we can simply use the objective values as fitness values and rank chromosomes according to these values.

Fitness Function
eval(v) = f(x)

Parameter Setting of the Genetic Algorithm (GA)
Population size: popSize = 10
Crossover probability: pC = 0.20
Mutation probability: pM = 0.03
Maximum generation: maxGen = 1000
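As a sketch, top popSize selection can be written in a few lines of Python (minimization convention as used here; the names are illustrative):

def top_popsize_selection(parents, offspring, f, pop_size=10):
    # Keep the best popSize chromosomes among parents and offspring;
    # for minimization, "best" means the lowest objective value eval(v) = f(v).
    combined = parents + offspring
    combined.sort(key=f)
    return combined[:pop_size]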

Page 14:

1.3 Genetic Approach for Ackley's Function

GA for Unconstrained Optimization

procedure: GA for Unconstrained Optimization (uO)
input: uO data set, GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by real-number encoding;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by arithmetic crossover;
    mutation P(t) to yield C(t) by nonuniform mutation;
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by top popSize selection;
    t ← t + 1;
  end
  output best solution;
end

Page 15:

1.3 Genetic Approach for Ackley’s Function

min f(x1, x2) = -c1 · exp(-c2 · sqrt((1/2) Σ_{j=1}^{2} xj^2)) - exp((1/2) Σ_{j=1}^{2} cos(c3·xj)) + c1 + e

where c1 = 20, c2 = 0.2, c3 = 2π, e = 2.71282, and -5 ≤ xj ≤ 5, j = 1, 2.

Optimal solution: (x1*, x2*) = (0, 0), f(x1*, x2*) = 0

f = -20 Exp[-0.2 Sqrt[0.5 (x1^2 + x2^2)]] - Exp[0.5 (Cos[2 Pi x1] + Cos[2 Pi x2])] + 20 + 2.71282;
Plot3D[f, {x1, -5, 5}, {x2, -5, 5}, PlotPoints -> 50, AxesLabel -> {"x1", "x2", "f(x1,x2)"}]

[Figure: 3D surface plot of f(x1, x2) over -5 ≤ x1, x2 ≤ 5, produced by the Plot3D command above.]
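For readers who prefer to reproduce the walk-through on the following pages in Python rather than Mathematica, a direct translation of the function is given below; the constant e = 2.71282 is kept as on this slide so that the reported fitness values can be reproduced:

import math

def ackley(x1, x2, c1=20.0, c2=0.2, c3=2 * math.pi, e=2.71282):
    s1 = 0.5 * (x1 ** 2 + x2 ** 2)
    s2 = 0.5 * (math.cos(c3 * x1) + math.cos(c3 * x2))
    return -c1 * math.exp(-c2 * math.sqrt(s1)) - math.exp(s2) + c1 + e

print(ackley(4.954222, 0.169225))   # ~10.73, matching eval(v1) on Page 17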

Page 16:

1.3 Genetic Approach for Ackley’s Function

Initial Population
The initial population v = [x1, x2] is randomly created within [-5, 5]:

v1 = [4.954222, 0.169225]
v2 = [-4.806207, -1.630757]
v3 = [4.672536, -1.867275]
v4 = [1.897794, -0.196387]
v5 = [-2.127598, 0.750603]
v6 = [-3.832667, -0.959655]
v7 = [-3.792383, 4.064608]
v8 = [1.182745, -4.712821]
v9 = [3.812220, -3.441115]
v10 = [-4.515976, 4.539171]

Page 17:

1.3 Genetic Approach for Ackley’s Function

The corresponding fitness values eval(v) = f(v) are calculated:

eval(v1) = f(4.954222, 0.169225) = 10.731945
eval(v2) = f(-4.806207, -1.630757) = 12.110259
eval(v3) = f(4.672536, -1.867275) = 11.788221
eval(v4) = f(1.897794, -0.196387) = 5.681900
eval(v5) = f(-2.127598, 0.750603) = 6.757691
eval(v6) = f(-3.832667, -0.959655) = 9.194728
eval(v7) = f(-3.792383, 4.064608) = 11.795402
eval(v8) = f(1.182745, -4.712821) = 11.559363
eval(v9) = f(3.812220, -3.441115) = 12.279653
eval(v10) = f(-4.515976, 4.539171) = 14.251764

Page 18:

1.3 Genetic Approach for Ackley's Function

Crossover
The sequence of random numbers for crossover was generated, one for each chromosome:

0.828211 0.199683 0.639149 0.629170 0.957427 0.149358 0.304788 0.058504 0.149693 0.326670

Since pC = 0.20, the chromosomes v2, v6, v8 and v9 (those with random numbers below pC) were selected for crossover. The offspring generated by the arithmetic operation are as follows:

v2 = [-4.806207, -1.630757] and v6 = [-3.832667, -0.959655] produce
v1' = [-4.444387, -1.383817] and v2' = [-4.194488, -1.206594];

v8 = [1.182745, -4.712821] and v9 = [3.812220, -3.441115] produce
v3' = [3.683262, -4.521950] and v4' = [1.311703, -3.631985].

Page 19:

1.3 Genetic Approach for Ackley's Function

Mutation
A sequence of random numbers rk (k = 1, 2, ..., 20), one for each gene in the population, is generated from the range [0, 1]. The corresponding gene to be mutated is:

bitPos  chromNum  variable  randomNum
11      6         x1        0.081393

The resulting offspring is v5' = [-4.068506, -0.959655].

The fitness value for each offspring is:

eval(v1') = f(-4.444387, -1.383817) = 11.927451
eval(v2') = f(-4.194488, -1.206594) = 10.566867
eval(v3') = f(3.683262, -4.521950) = 13.449167
eval(v4') = f(1.311703, -3.631985) = 10.538330
eval(v5') = f(-4.068506, -0.959655) = 9.083240

Page 20:

1.3 Genetic Approach for Ackley's Function

Selection
The best 10 chromosomes among parents and offspring, and the corresponding fitness values, are as follows:

v1 = [4.954222, 0.169225],    eval(v1) = f(4.954222, 0.169225) = 10.731945
v2 = [1.311703, -3.631985],   eval(v2) = f(1.311703, -3.631985) = 10.538330
v3 = [4.672536, -1.867275],   eval(v3) = f(4.672536, -1.867275) = 11.788221
v4 = [1.897794, -0.196387],   eval(v4) = f(1.897794, -0.196387) = 5.681900
v5 = [-2.127598, 0.750603],   eval(v5) = f(-2.127598, 0.750603) = 6.757691
v6 = [-3.832667, -0.959655],  eval(v6) = f(-3.832667, -0.959655) = 9.194728
v7 = [-3.792383, 4.064608],   eval(v7) = f(-3.792383, 4.064608) = 11.795402
v8 = [1.182745, -4.712821],   eval(v8) = f(1.182745, -4.712821) = 11.559363
v9 = [-4.194488, -1.206594],  eval(v9) = f(-4.194488, -1.206594) = 10.566867
v10 = [-4.068506, -0.959655], eval(v10) = f(-4.068506, -0.959655) = 9.083240

Page 21:

1.3 Genetic Approach for Ackley's Function

After the 1000th generation, all ten chromosomes v1, ..., v10 have converged to [-0.000002, -0.000000].

The fitness value: f(x1*, x2*) = 0.005456

Page 22:

1.3 Genetic Approach for Ackley's Function

Evolutionary Process
[Figure: evolution of the objective value over the generations (simulation).]

Page 23:

2. Constrained Optimization

1. Unconstrained Optimization

2. Nonlinear Programming
2.1 Handling Constraints
2.2 Penalty Function
2.3 Genetic Operators
2.4 Numerical Examples

3. Stochastic Optimization

4. Nonlinear Goal Programming

5. Interval Programming

Page 24:

2. Nonlinear Programming

Nonlinear programming (or constrained optimization) deals with the problem of optimizing an objective function in the presence of equality and/or inequality constraints.

Many practical problems can be successfully modeled as a nonlinear program (NP). The general NP model may be written as follows:

max f(x)
s.t. gi(x) ≤ 0, i = 1, 2, ..., m1
hi(x) = 0, i = m1 + 1, ..., m1 + m2
x ∈ X

where f is the objective function, each gi(x) ≤ 0 is an inequality constraint, each hi(x) = 0 is an equality constraint, and the set X is the domain constraint, which includes lower and upper bounds on the variables.

The nonlinear programming problem is to find a feasible point y such that f(y) ≥ f(x) for any feasible point x.

Page 25:

2.1 Handling Constraints

Several techniques have been proposed to handle constraints with Genetic Algorithms (GAs):

Rejecting Strategy
The rejecting strategy discards all infeasible chromosomes created throughout the evolutionary process.

Repairing Strategy
Repairing a chromosome involves taking an infeasible chromosome and generating a feasible one through some repairing procedure. The repairing strategy depends on the existence of a deterministic repair procedure to convert an infeasible offspring into a feasible one.

Modifying Genetic Operator Strategy
One reasonable approach for dealing with the issue of feasibility is to invent problem-specific representations and specialized genetic operators that maintain the feasibility of chromosomes.

Penalty Strategy
The three strategies above have the advantage that they never generate infeasible solutions, but they have the disadvantage that they consider no points outside the feasible region; the penalty strategy, in contrast, admits infeasible solutions into the population and penalizes them through the fitness function.

Page 26:

2.2 Penalty Function

Penalty Methods [Gen, M. & R. Cheng: Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, 1997]

Penalty techniques transform a constrained problem into an unconstrained problem by penalizing infeasible solutions: a penalty term is added to the objective function for any violation of the constraints. The basic idea of the penalty technique is borrowed from conventional optimization.

Conventional optimization
Key issue: how to choose a proper value of the penalty term so as to get fast convergence and avoid premature termination.
Approach: use the penalty method to generate a sequence of feasible points whose limit is an optimal solution to the original problem.

Genetic algorithms
Key issue: how to determine the penalty term so as to strike a proper balance between information preservation and selection pressure on the infeasible solutions, avoiding both under-penalty and over-penalty.
Approach: keep some infeasible solutions in the population so as to force the genetic search toward the optimal solution from both sides of the feasible-region boundary.

Page 27:

2.2 Penalty Function

Evaluation Function with Penalty Term
There are two possible methods to construct the evaluation function with a penalty term. One method is to take the addition form, expressed as follows:

eval(x) = f(x) + p(x)

where x represents a chromosome, f(x) the objective function of the problem, and p(x) the penalty term. For maximization problems, we usually require that

p(x) = 0 if x is feasible
p(x) < 0 otherwise

Let |p(x)|max and |f(x)|min be the maximum of |p(x)| and the minimum of |f(x)| among the infeasible solutions in the current population, respectively. We also require that

|p(x)|max ≤ |f(x)|min

to avoid negative fitness values.

Page 28:

2.2 Penalty Function

The second method is to take the multiplication form, expressed as follows:

eval(x) = f(x) · p(x)

In this case, for maximization problems we require that

p(x) = 1 if x is feasible
0 ≤ p(x) < 1 otherwise

and for minimization problems we require that

p(x) = 1 if x is feasible
p(x) > 1 otherwise

Note that for minimization problems, the fitter chromosome has the lower value of eval(x).
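As an illustration, a minimal Python sketch of the addition form for a maximization problem is given below; the quadratic penalty shape and the weight M are illustrative assumptions, not fixed by the slides:

def eval_with_penalty(x, f, constraints, M=1000.0):
    # eval(x) = f(x) + p(x), with p(x) = 0 if x is feasible and p(x) < 0 otherwise;
    # here p(x) = -M * sum of squared violations of constraints g(x) <= 0.
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) - M * violation

# Example: maximize f(x) = x1 + x2 subject to x1 + x2 - 1 <= 0
print(eval_with_penalty([0.8, 0.4], lambda x: x[0] + x[1],
                        [lambda x: x[0] + x[1] - 1.0]))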

Page 29:

2.2 Penalty Function

Several techniques for handling infeasibility have been proposed in the area of genetic algorithms. In general, we can classify them into two classes:

Constant penalty approach: known to be less effective for complex problems; most recent research focuses on variable penalties.

Variable penalty approach: contains two components:
a variable penalty ratio, which can be adjusted according to the degree of violation of the constraints and the iteration number of the genetic algorithm;
the penalty amount for the violation of the constraints.

Essentially, the penalty is a function of the distance from the feasible area. This can be given in three possible ways: the absolute distance of a single infeasible solution; the relative distance of all infeasible solutions in the current population; an adaptive penalty term.

The penalty approaches can be further distinguished by their dependence on the problem or by the existence of parameters.

Page 30:

2.3 Genetic Operators

Real Coding Technique
Each chromosome is encoded as a vector of real numbers as follows:

x = [x1, x2, ..., xn]

Such coding is known as floating-point representation, real-number representation, or continuous representation.

Several genetic operators have been proposed for such coding, which can be roughly classified into three classes:
Conventional operators: extend the operators for the binary representation to the real-coding case.
Arithmetic operators: borrow the concept of the linear combination of vectors from the theory of convex sets.
Direction-based operators: introduce the approximate gradient (sub-gradient) direction or negative direction into genetic operators.

Page 31:

2.3 Genetic Operators

Conventional Operators
Simple crossover: one-cut-point crossover is the basic one [Davis, L.: Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991].

parents (crossing point at the kth position):
x = [x1, x2, ..., xk, xk+1, ..., xn]
y = [y1, y2, ..., yk, yk+1, ..., yn]

offspring:
x' = [x1, x2, ..., xk, yk+1, ..., yn]
y' = [y1, y2, ..., yk, xk+1, ..., xn]

Generalizations of one-cut-point crossover are two-cut-point, multi-cut-point, and uniform crossover.
[Spears, W. & K. De Jong: "On the virtues of parameterized uniform crossover," Proc. of the 4th Inter. Conf. on GA, pp.230-236, 1991.]
[Syswerda, G.: "Uniform crossover in genetic algorithms," Proc. of the 3rd Inter. Conf. on GA, pp.2-9, 1989.]
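A one-cut-point crossover for list-encoded chromosomes can be sketched in Python as follows (the names are illustrative):

import random

def one_cut_point_crossover(x, y):
    # Pick a cut position k and exchange the tails of the two parents
    k = random.randint(1, len(x) - 1)
    return x[:k] + y[k:], y[:k] + x[k:]

print(one_cut_point_crossover([1, 2, 3, 4], [5, 6, 7, 8]))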

Page 32:

2.3 Genetic Operators

Conventional Operators
Random crossover: essentially, these kinds of crossover operators create offspring randomly within a hyper-rectangle defined by the parent points.

Flat crossover: makes an offspring by uniformly picking a value for each gene from the range formed by the values of the two corresponding parents' genes.

Blend crossover: introduced to add more variance; it uniformly picks values that lie between two points that contain the two parents.

Page 33:

2.3 Genetic Operators

Conventional Operators
Mutation: mutation operators are quite different from the traditional ones: a gene, being a real number, is mutated in a context-dependent range.

Uniform mutation: simply replaces a gene (real number) with a randomly selected real number within a specified range.

parent (mutating point at the kth position):
x = [x1, x2, ..., xk, ..., xn]
offspring:
x' = [x1, x2, ..., xk', ..., xn]

This kind of variation is called boundary mutation when the range of xk' is formed as [xkL, xkR], while it is called plain mutation when the range of xk' is formed as [xk-1, xk+1].

Page 34:

2.3 Genetic Operators

Arithmetical Operators
Crossover: suppose that there are two parents x1 and x2; the offspring can be obtained as λ1·x1 + λ2·x2 with different multipliers λ1 and λ2:

x1' = λ1·x1 + λ2·x2
x2' = λ1·x2 + λ2·x1

Convex crossover: if λ1 + λ2 = 1, λ1 > 0, λ2 > 0
Affine crossover: if λ1 + λ2 = 1
Linear crossover: if λ1 + λ2 ≤ 2, λ1 > 0, λ2 > 0

Fig. 2.1 Illustration showing the convex, affine, and linear hulls of two points x1 and x2 in the solution space.

Page 35:

2.3 Genetic Operators

Arithmetical Operators
Nonuniform mutation (dynamic mutation): for a given parent x, if the element xk of it is selected for mutation, the resulting offspring is x' = [x1, ..., xk', ..., xn], where xk' is randomly selected from two possible choices:

xk' = xk + Δ(t, xkU - xk)  or  xk' = xk - Δ(t, xk - xkL)

where xkU and xkL are the upper and lower bounds for xk. The function Δ(t, y) returns a value in the range [0, y] such that the value of Δ(t, y) approaches 0 as t increases (t is the generation number):

Δ(t, y) = y · (1 - r^((1 - t/T)^b))

where r is a random number from [0, 1], T is the maximal generation number, and b is a parameter determining the degree of nonuniformity.

Page 36:

2.3 Genetic Operators

Direction-Based Operators
These operators use the values of the objective function in determining the direction of the genetic search.

Direction-based crossover: generate a single offspring x' from two parents x1 and x2 according to the following rule:

x' = r · (x2 - x1) + x2

where 0 < r ≤ 1 and x2 is not worse than x1, i.e., f(x2) ≥ f(x1) for a maximization problem.

Directional mutation: the offspring after mutation would be

x' = x + r · d

where r is a random nonnegative real number and the ith component of the direction d is approximated by

di = (f(x1, ..., xi + Δxi, ..., xn) - f(x1, ..., xi, ..., xn)) / Δxi
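A minimal Python sketch of both direction-based operators follows; the step size r, the finite-difference step dx, and the maximization convention are illustrative assumptions:

import random

def direction_based_crossover(x1, x2, f):
    # x' = r*(x2 - x1) + x2, with x2 not worse than x1 (maximization)
    if f(x2) < f(x1):
        x1, x2 = x2, x1
    r = random.random()
    return [r * (b - a) + b for a, b in zip(x1, x2)]

def directional_mutation(x, f, r=0.1, dx=1e-6):
    # x' = x + r*d, where d approximates the gradient of f by forward differences
    d = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += dx
        d.append((f(xp) - f(x)) / dx)
    return [xi + r * di for xi, di in zip(x, d)]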

Page 37:

2.3 Genetic Operators

GA for Nonlinear Programming Problem

procedure: GA for Nonlinear Programming (NP) Problem
input: NP data set, GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by real-number encoding;
  fitness eval(P) by the penalty function method;
  while (not termination condition) do
    crossover P(t) to yield C(t) by convex crossover;
    mutation P(t) to yield C(t) by nonuniform mutation;
    fitness eval(C) by the penalty function method;
    select P(t+1) from P(t) and C(t) by top popSize selection;
    t ← t + 1;
  end
  output best solution;
end

Page 38:

2.4 Numerical Examples

Example 1: This problem was originally given by Bracken and McCormick [Bracken, J. & G. McCormick: Selected Applications of Nonlinear Programming, John Wiley & Sons, New York, 1968]. Homaifar, Qi and Lai solved it with genetic algorithms, using the penalty function p(x) = r1·g1(x) + r2·g2(x), where r1 and r2 are penalty factors [Homaifar, A., C. Qi & S. Lai: "Constrained optimization via genetic algorithms," Simulation, Vol.62, No.4, pp.242-254, 1994].

min f(x) = (x1 - 2)^2 + (x2 - 1)^2
s.t. g1(x) = x1 - 2x2 + 1 = 0
g2(x) = x1^2/4 + x2^2 - 1 ≤ 0

Page 39:

2.4 Numerical Examples

Solutions of Numerical Experimentation
The GA parameters are chosen as follows: popSize = 400, pC = 0.85, pM = 0.02.

The results show that both constraints are satisfied by the GA solutions, where GRG stands for the generalized reduced gradient method [Gabriele, D. & K. Ragsdell: "Large scale nonlinear programming using the generalized reduced gradient method," ASME J. of Mechanical Design, Vol.102, pp.566-573, 1980].

g1(x) = x1 - 2x2 + 1
g2(x) = x1^2/4 + x2^2 - 1

Table 2.1: Solutions of numerical experimentation

Items  | Reference solution | GRG solution | GA solution
f(x)   | 1.3934             | 1.3930       | 1.0021
x1     | 0.8229             | 0.8230       | 0.9989
x2     | 0.9115             | 0.9110       | 0.9994
g1(x)  | 1.0001×10^-4       | 1.100×10^-3  | 0.0000
g2(x)  | 5.1018×10^-5       | 7.1046×10^-4 | 0.2500

Page 40:

2.4 Numerical Examples

Evolutionary Process
[Figure: evolution of the objective value z over the generations (×100).]

Page 41:

2.4 Numerical Examples

Example 2: This is an interesting nonlinear optimization problem provided by Himmelblau [Himmelblau, M.: Applied Nonlinear Programming, McGraw-Hill, New York, 1972]. Homaifar, Qi and Lai solved it with genetic algorithms [Homaifar, A., C. Qi & S. Lai: "Constrained optimization via genetic algorithms," Simulation, Vol.62, No.4, pp.242-254, 1994].

min f(x) = 5.3578547·x3^2 + 0.8356891·x1·x5 + 37.293239·x1 - 40792.141
s.t. g1(x) = 85.334407 + 0.0056858·x2·x5 + 0.00026·x1·x4 - 0.0022053·x3·x5, with 0 ≤ g1(x) ≤ 92
g2(x) = 80.51249 + 0.0071317·x2·x5 + 0.0029955·x1·x2 + 0.0021813·x3^2, with 90 ≤ g2(x) ≤ 110
g3(x) = 9.300961 + 0.0047026·x3·x5 + 0.0012547·x1·x3 + 0.0019085·x3·x4, with 20 ≤ g3(x) ≤ 25
78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45, 27 ≤ x3 ≤ 45, 27 ≤ x4 ≤ 45, 27 ≤ x5 ≤ 45

Page 42:

2.4 Numerical Examples

Solutions of Numerical Experimentation
The GA parameters are chosen as follows: popSize = 400, pC = 0.8, pM = 0.088.

The results indicate that the solution based on the local reference is slightly better than that based on the global reference.

Table 2.2: Solutions of numerical experimentation

Items  | Reference solution | GA solution | GRG solution
f(x)   | -30665.500         | -30876.964  | -30373.950
x1     | 78.000             | 78.632      | 78.620
x2     | 33.000             | 33.677      | 33.440
x3     | 29.995             | 27.790      | 31.070
x4     | 45.000             | 43.554      | 44.180
x5     | 36.776             | 43.574      | 35.220
g1(x)  | 90.715             | 91.898      | 90.521
g2(x)  | 98.841             | 100.595     | 98.893
g3(x)  | 24.073             | 20.047      | 20.132

Bounds: 78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45, 27 ≤ x3 ≤ 45, 27 ≤ x4 ≤ 45, 27 ≤ x5 ≤ 45; 0 ≤ g1(x) ≤ 92, 90 ≤ g2(x) ≤ 110, 20 ≤ g3(x) ≤ 25.

Page 43:

2.4 Numerical Examples

Evolutionary Process
[Figure: evolution of the objective value z over the generations.]
Objective function: f(x) = -30876.964609730574

Page 44:

2. Constrained Optimization

1. Unconstrained Optimization

2. Nonlinear Programming

3. Stochastic Optimization
3.1 Stochastic Programming Model
3.2 Monte Carlo Simulation
3.3 Genetic Algorithm for Stochastic Optimization Problems

4. Nonlinear Goal Programming

5. Interval Programming

Page 45:

3. Stochastic Optimization

In the practical application of mathematical programming, it is difficult to determine the proper values of model parameters [Liu, B.: Uncertainty Theory, Springer-Verlag, New York, 2004]. Some or all of them may be random variables; that is, they are often influenced by random events that are impossible to predict. It is necessary to formulate the problem so that the optimization directly takes the uncertainty into account.

Stochastic Programming: an approach for mathematical programming under uncertainty.

Page 46:

3.1 Stochastic Programming Model

Let f(x, ξ) be a random function for any given x and ξ, where x = [x1, x2, ..., xn] is a vector of real variables and ξ = [ξ1, ξ2, ..., ξl] is a vector of random variables. Since it is meaningless to maximize a random variable, it is natural to use its expected value E[f(x, ξ)] instead. Thus the stochastic programming problem can be generally written as follows:

max E[f(x, ξ)]
s.t. E[gi(x, ξ)] ≤ 0, i = 1, 2, ..., m1
E[hi(x, ξ)] = 0, i = m1 + 1, ..., m1 + m2

If the distribution and density functions of the stochastic vector ξ are Φ(ξ) and φ(ξ), respectively, then we have

E[f(x, ξ)] = ∫ f(x, ξ) dΦ(ξ) = ∫ f(x, ξ) φ(ξ) dξ
E[gi(x, ξ)] = ∫ gi(x, ξ) dΦ(ξ) = ∫ gi(x, ξ) φ(ξ) dξ
E[hi(x, ξ)] = ∫ hi(x, ξ) dΦ(ξ) = ∫ hi(x, ξ) φ(ξ) dξ

where the integrals are taken over R^l.

Page 47:

3.2 Monte Carlo Simulation

Monte Carlo simulation is a scheme employing independent and identically distributed random variables to approximate the solutions of problems. One of the standard applications is to evaluate the integral

I = ∫_a^b g(x) dx

where g(x) is a real-valued function that cannot be integrated analytically. To see how this deterministic problem can be approached by Monte Carlo simulation, let Y be the random variable (b - a)·g(X), where X is a continuous random variable distributed uniformly on [a, b], denoted by U(a, b). Then the expected value of Y is as follows:

E[Y] = E[(b - a)·g(X)]
     = (b - a)·E[g(X)]
     = (b - a) ∫_a^b g(x) f_X(x) dx
     = ∫_a^b g(x) dx
     = I

Page 48:

3.2 Monte Carlo Simulation

Here f_X(x) = 1/(b - a) is the probability density function of a U(a, b) random variable. Thus the problem of evaluating the integral has been reduced to one of estimating the expected value E[Y]. In particular, we can estimate the value by the sample mean

Ȳ(n) = (1/n) Σ_{j=1}^{n} Yj = ((b - a)/n) Σ_{j=1}^{n} g(Xj)

where X1, X2, ..., Xn are independent and identically distributed U(a, b) random variables. Furthermore, it can be shown that Ȳ(n) is an unbiased estimator of I:

E[Ȳ(n)] = I,  Var[Ȳ(n)] = Var[Y]/n
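A minimal Python sketch of this estimator (the function names and the test integrand are illustrative):

import random

def mc_integral(g, a, b, n=100000):
    # Y_bar(n) = (b - a)/n * sum_j g(X_j), with X_j ~ U(a, b)
    total = sum(g(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

print(mc_integral(lambda x: x * x, 0.0, 1.0))   # ~1/3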

Page 49:

3.2 Monte Carlo Simulation

How to evaluate a stochastic integral: for a fixed vector x, let Y be the random variable |D|·f(x, ξ)·φ(ξ), where ξ is a random vector uniformly distributed over a bounded domain D ⊆ R^m and |D| is the volume of the bounded domain. Then

E[f(x, ξ)] = ∫_D f(x, ξ) φ(ξ) dξ

and E[Y] is estimated by the sample mean

Ȳ(n) = (|D|/n) Σ_{j=1}^{n} f(x, ξj)·φ(ξj)

where ξ1, ξ2, ..., ξn are independent random vectors uniformly distributed over the bounded domain. Ȳ(n) is an unbiased estimator of the value of the integral, and Var[Ȳ(n)] = Var[Y]/n.

Page 50:

3.2 Monte Carlo Simulation

Procedure of Monte Carlo Simulation

procedure: Monte Carlo Simulation
input: the number of simulations ns
output: the objective obj
begin
  obj ← 0;
  for j ← 1 to ns do
    ξj ← a random vector uniformly distributed over D;
    obj ← obj + f(x, ξj)·φ(ξj);
  end
  obj ← obj · |D| / ns;
  output the objective obj;
end

Page 51:

3.3 Genetic Algorithm for Stochastic Optimization Problems

Gen, Liu and Ida have proposed a genetic algorithm for the following stochastic optimization problem [Gen, M., B. Liu & K. Ida: "Evolution program for deterministic and stochastic optimizations," Euro. J. of OR, Vol.94, pp.618-625, 1996]:

max E[f(x, ξ)]
s.t. gi(x) ≤ 0, i = 1, 2, ..., m

They termed their algorithm an evolution program. The concept of the evolution program was first given by Michalewicz [Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, 2nd ed., Springer-Verlag, New York, 1994]. It is based entirely on the idea of genetic algorithms; the difference is that it suggests using any data structure suitable for a problem together with any set of meaningful genetic operators.

Page 52:

3.3 Genetic Algorithm for Stochastic Optimization Problems

Representation
v = [x1, x2, ..., xn], where xi is a real number, i = 1, 2, ..., n

Initialization
The initialization procedure begins with a predetermined point v0 interior to the constraint set and a large positive number M0.

procedure: Initialization
input: a large positive number M0, a predetermined interior point v0
output: chromosomes vk, k = 1, 2, ..., popSize
begin
  for k ← 1 to popSize do
    M ← M0;
    produce a random direction d;
    vk ← v0 + M·d;
    while (vk is not feasible) do
      M ← random(0, M);
      vk ← v0 + M·d;
    end
  end
  output vk, k = 1, 2, ..., popSize;
end
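A Python sketch of this initialization routine is given below; sampling the random direction uniformly from a cube and the form of the feasibility predicate are illustrative assumptions:

import random

def initialize(pop_size, v0, M0, feasible):
    # Shoot from the interior point v0 along a random direction d with step M,
    # shrinking M at random until the generated point becomes feasible.
    population = []
    for _ in range(pop_size):
        M = M0
        d = [random.uniform(-1.0, 1.0) for _ in v0]
        v = [v0i + M * di for v0i, di in zip(v0, d)]
        while not feasible(v):
            M = random.uniform(0.0, M)
            v = [v0i + M * di for v0i, di in zip(v0, d)]
        population.append(v)
    return population

Since M shrinks toward 0, the candidate point slides back toward the feasible interior point v0, so the loop terminates with probability one.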

Page 53:

3.3 Genetic Algorithm for Stochastic Optimization Problems

Evaluation function: exponential fitness scaling scheme.

First: the objective function for each chromosome is evaluated with Monte Carlo simulation, and then the chromosomes are ranked according to their objective function values so that the worst one is in position 1 and the best one is in position popSize.

Second: three preference parameters p0, p1 and p2 (0 < p1 < p0 < p2 < 1) are defined; these are used to determine three critical chromosomes with rankings u0, u1 and u2, respectively, such that u0 = p0·popSize, u1 = p1·popSize and u2 = p2·popSize.

Page 54:

3.3 Genetic Algorithm for Stochastic Optimization Problems

Finally: the chromosome with ranking u1 is assigned a fitness value of e^(-1) ≈ 0.37, the one with ranking u0 is assigned a fitness value of 1, and the one with ranking u2 is assigned a fitness value of 2 - e^(-1) ≈ 1.63. For a chromosome v with ranking u, the relation between the ranking u and the exponential fitness eval(v) is given as follows:

eval(v) = exp(-(u0 - u)/(u0 - u1)),      if u ≤ u0
eval(v) = 2 - exp(-(u - u0)/(u2 - u0)),  if u > u0

Fig. 2.2 Exponential fitness: the fitness rises from e^(-1) ≈ 0.37 at ranking u1, through 1 at u0, toward 2 - e^(-1) ≈ 1.63 at u2.

Page 55:

3.3 Genetic Algorithm for Stochastic Optimization Problems

Procedure of Evaluation Function

procedure: Evaluation
input: chromosomes vk, k = 1, 2, ..., popSize
output: exponential fitness eval(vk), k = 1, 2, ..., popSize
begin
  for k ← 1 to popSize do
    compute the objective fk for vk by Monte Carlo simulation;
  rank chromosomes according to the objectives;
  for k ← 1 to popSize do
    compute the exponential fitness eval(vk) based on the ranking;
  output eval(vk), k = 1, 2, ..., popSize;
end

Page 56:

3.3 Genetic Algorithm for Stochastic Optimization Problems

Crossover and Mutation
Arithmetical crossover is adopted, and the nonuniform mutation operator uses the same procedure as the initialization to mutate chromosomes in a free direction. The offspring are made, respectively, as follows:

Arithmetical crossover:
v1' = c1·v1 + c2·v2
v2' = c2·v1 + c1·v2
where c1, c2 ≥ 0 and c1 + c2 = 1.

Nonuniform mutation:
v' = v + M·d
where d is a randomly generated direction of mutation and M is a large positive number.

Page 57:

3.3 Genetic Algorithm for Stochastic Optimization Problems

GA Procedure for Stochastic Optimization Problem

procedure: GA for Stochastic Optimization Problem (sOP)
input: sOP data set, GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by the initialization routine;
  fitness eval(P) by the evaluation routine;
  while (not termination condition) do
    crossover P(t) to yield C(t) by arithmetical crossover;
    mutation P(t) to yield C(t) by nonuniform mutation;
    fitness eval(C) by the evaluation routine;
    select P(t+1) from P(t) and C(t) by roulette wheel selection;
    t ← t + 1;
  end
  output best solution;
end

Page 58:

2. Constrained Optimization

1. Unconstrained Optimization

2. Nonlinear Programming

3. Stochastic Optimization

4. Nonlinear Goal Programming
4.1 Formulation of Nonlinear Goal Programming
4.2 Genetic Approach for Nonlinear Goal Programming

5. Interval Programming

Page 59:

4. Nonlinear Goal Programming

Goal programming is one of the powerful techniques for solving multicriteria optimization problems. The basic ideas of goal programming are to:
establish a specific numeric goal for each objective;
formulate a goal objective function for each objective;
seek a solution that minimizes the (weighted) sum of deviations of these objective functions from their respective goals.

There are two kinds of goal programming problems:
Nonpreemptive goal programming: all of the goals are of roughly comparable importance.
Preemptive goal programming: there is a hierarchy of priority levels for the goals, so that the goals of primary importance receive first-priority attention, those of secondary importance receive second-priority attention, and so forth. The term goal programming today usually refers to the preemptive approach.

Since nonlinear goal programming problems can have different structures and degrees of nonlinearity, how to select a nonlinear goal programming technique is problem-dependent.

Page 60:

4.1 Formulation of Nonlinear Goal Programming

The generalized formulation of nonlinear goal programming is given as follows:

min z = Σ_{k=1}^{q} Pk Σ_{i=1}^{m0} (wki+ · di+ + wki- · di-)
s.t. fi(x) + di- - di+ = bi, i = 1, 2, ..., m0
gi(x) ≤ 0, i = m0 + 1, ..., m0 + m1
hi(x) = 0, i = m0 + m1 + 1, ..., m0 + m1 + m2
di+, di- ≥ 0, i = 1, 2, ..., m0

We sometimes rewrite the objective function as follows:

lexmin { z1 = Σ_{i=1}^{m0} (w1i+ · di+ + w1i- · di-), ..., zq = Σ_{i=1}^{m0} (wqi+ · di+ + wqi- · di-) }

where lexmin means lexicographically minimizing the objective.

Page 61:

4.1 Formulation of Nonlinear Goal Programming

Notation

Indices
i: index of goals
k: index of preemptive priorities

Parameters
Pk: the kth preemptive priority (Pk >> Pk+1 for all k)
wki+: the positive weight assigned to di+ at priority Pk
wki-: the negative weight assigned to di- at priority Pk
x: n-dimensional decision vector
fi: Rn → R1 in the goal constraints
gi: Rn → R1 in the real inequality constraints
hi: Rn → R1 in the real equality constraints
bi: the target value or aspiration level of goal i
q: the number of priorities
m0: the number of goal constraints
m1: the number of real inequality constraints
m2: the number of real equality constraints

Decision Variables
di+: the positive deviation variable, representing the over-achievement of goal i
di-: the negative deviation variable, representing the under-achievement of goal i

Page 62:

4.2 Genetic Approach for Nonlinear Goal Programming

Gen and Liu have proposed an alternative approach: a genetic algorithm for solving the nonlinear goal programming problem.
[Gen, M. & R. Cheng: Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, 1997.]
[Gen, M. & B. Liu: "A Genetic Algorithm for Nonlinear Goal Programming," Evolutionary Optimization, Vol.1, No.1, pp.65-76, 1999.]

Genetic algorithms belong to the class of problem-independent, probabilistic algorithms and can handle any kind of objective function and any kind of constraints. Due to their evolutionary nature, genetic algorithms perform a multidirectional and robust search in complex spaces by maintaining a population of potential solutions. This gives us much greater ability to deal with complex real-world nonlinear goal programming problems.

Page 63:

4.2 Genetic Approach for Nonlinear Goal Programming

Representation and Initialization
The same real-number representation v = [x1, x2, ..., xn] and the same interior-point initialization procedure as in Section 3.3 (see Page 52) are used.

Page 64:

4.2 Genetic Approach for Nonlinear Goal Programming

Evaluation function
step 1: Calculate the objective values. There are q objective values associated with each chromosome:

z1 = Σ_{i=1}^{m0} (w1i+ · di+ + w1i- · di-), z2 = Σ_{i=1}^{m0} (w2i+ · di+ + w2i- · di-), ..., zq = Σ_{i=1}^{m0} (wqi+ · di+ + wqi- · di-)

step 2: Sort chromosomes on the value of the first-priority objective Σ_{i=1}^{m0} (w1i+ · di+ + w1i- · di-). If some chromosomes have the same value of this objective, sort them on the second-priority objective Σ_{i=1}^{m0} (w2i+ · di+ + w2i- · di-), and so forth. Ties are broken randomly. Thus chromosomes are rearranged in ascending order according to the values of the objectives.

step 3: Assign each chromosome a rank-based fitness value. Let rk be the rank of chromosome vk, where rk = 1 means the best chromosome and rk = popSize means the worst chromosome. For a parameter a ∈ (0, 1) specified by the user, the rank-based fitness function is defined as follows:

eval(vk) = a·(1 - a)^(rk - 1), k = 1, 2, ..., popSize

so that Σ_{k=1}^{popSize} eval(vk) ≈ 1.
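The rank-based fitness of step 3 is easy to sketch in Python (the value a = 0.1 is an illustrative choice):

def rank_based_fitness(pop_size, a=0.1):
    # eval(v_k) = a * (1 - a)^(r_k - 1), with rank 1 for the best chromosome
    return [a * (1 - a) ** (rank - 1) for rank in range(1, pop_size + 1)]

print(rank_based_fitness(10))   # geometrically decreasing fitness values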

Page 65:

4.2 Genetic Approach for Nonlinear Goal Programming

Crossover
Arithmetical crossover is adopted; it is defined as a convex combination of two vectors. For each pair of parents v1 and v2, the crossover operator will produce two children v1' and v2' as follows:

v1' = c1·v1 + c2·v2
v2' = c2·v1 + c1·v2

where c1, c2 ≥ 0 and c1 + c2 = 1.

Mutation
The mutation operator uses the same procedure as the initialization to mutate chromosomes in a free direction. Let a randomly generated direction of mutation be d. An offspring is made as follows:

v' = v + M·d

If the offspring is not feasible, then M is reset to a random number in (0, M) until v + M·d is feasible.

Page 66:

4.2 Genetic Approach for Nonlinear Goal Programming

GA Procedure for Nonlinear Goal Programming

procedure: GA for Nonlinear Goal Programming (nGP) Problem
input: nGP data set, GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by the initialization routine;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by arithmetical crossover;
    mutation P(t) to yield C(t) by nonuniform mutation;
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by roulette wheel selection;
    t ← t + 1;
  end
  output best solution;
end

Page 67:

4.2 Genetic Approach for Nonlinear Goal Programming

Taguchi, Ida and Gen proposed an implementation of genetic algorithms to solve the nonlinear goal programming problem with interval coefficients [Taguchi, T., K. Ida & M. Gen: "Method for solving nonlinear goal programming with interval coefficients using genetic algorithms," Computers & Industrial Eng., Vol.33, No.1-4, pp.579-600, 1997].

For a given individual x, the fitness function proposed by them takes the following weighted-sum form:

eval(x) = Σ_{k=1}^{q} Pk Σ_{i=1}^{m0} (wki+ · di+ + wki- · di-)

where Pk is the preemptive priority, calculated as follows:

Pk = 10^(2(q - k)), k = 1, 2, ..., q

Note that this fitness function has the property that the smaller the value of the fitness, the fitter the individual. Therefore, individuals are sorted on their fitness values in ascending order, and the next generation is selected as the first popSize distinct individuals.

Page 68:

2. Constrained Optimization

1. Unconstrained Optimization

2. Nonlinear Programming

3. Stochastic Optimization

4. Nonlinear Goal Programming

5. Interval Programming
5.1 Interval Programming

5.2 Interval Inequality

5.3 Order Relation of Interval

5.4 Transforming Interval Programming

5.5 Pareto Solution for Interval Programming

5.6 GA Procedure for Interval Programming

Page 69:

5. Interval Programming

In the past decade, two different approaches have been proposed for interval optimization:
Interval analysis: interval variables and normal coefficients.
Interval programming: interval coefficients and normal variables.

Motivation for developing the interval programming technique: when using mathematical programming methods to solve practical problems, it is usually not easy for decision makers to determine the proper values of model parameters; such uncertainty can, however, be roughly represented as an interval of confidence.

Basic idea of the interval programming approach:
transform the interval programming model into an equivalent bicriteria programming model;
find the Pareto solutions of the bicriteria programming problem using genetic algorithms.

Page 70:

5.1 Introduction

Interval arithmetic: an interval is defined as an ordered pair of real numbers as follows:

A = [aL, aR] = {x | aL ≤ x ≤ aR, x ∈ R1}

where aL and aR are the left and right bounds of interval A, respectively. The interval can also be defined by its center and width:

A = <aC, aW> = {x | aC - aW ≤ x ≤ aC + aW, x ∈ R1}

where aC and aW are the center and width of interval A, respectively. They are calculated as follows:

aC = (aR + aL)/2,  aW = (aR - aL)/2

Fig. 2.3 Illustration of an interval (left bound aL, center aC, width aW, right bound aR).

Page 71:

5.1 Introduction

For two intervals A = [aL, aR] and B = [bL, bR], the basic definitions of interval arithmetic are given as follows:

A + B = [aL + bL, aR + bR]
A - B = [aL - bR, aR - bL]
k·A = [k·aL, k·aR] if k ≥ 0
k·A = [k·aR, k·aL] if k < 0
A·B = [aL·bL, aR·bR] if aL ≥ 0, bL ≥ 0
A/B = [aL/bR, aR/bL] if aL ≥ 0, bL > 0
log(A·B) = log(A) + log(B) if aL > 0, bL > 0
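A few of these rules translate directly into Python, representing an interval as a pair (aL, aR); the function names are illustrative:

def iadd(A, B):
    # [aL, aR] + [bL, bR] = [aL + bL, aR + bR]
    return (A[0] + B[0], A[1] + B[1])

def isub(A, B):
    # [aL, aR] - [bL, bR] = [aL - bR, aR - bL]
    return (A[0] - B[1], A[1] - B[0])

def iscale(k, A):
    # k*[aL, aR] = [k*aL, k*aR] if k >= 0, else [k*aR, k*aL]
    return (k * A[0], k * A[1]) if k >= 0 else (k * A[1], k * A[0])

print(iadd((15, 17), (15, 20)), isub((15, 17), (10, 30)), iscale(-2, (15, 17)))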

Page 72:

5.2 Interval Inequality

Interval inequality: consider the following constraint with interval coefficients:

Σ_{j=1}^{n} Aj·xj ≤ B

where B = [bL, bR], Aj = [ajL, ajR], and xj ≥ 0, j = 1, 2, ..., n.

How to determine the feasible region for this constraint is a fundamental problem for interval programming. Several definitions of the feasible region have been proposed in past years. The definition given by Ishibuchi and Tanaka is based on the concept of the degree to which an inequality between two intervals holds true [Ishibuchi, H. & H. Tanaka: "Formulation and analysis of linear programming problem with interval coefficients," J. of Japan Industrial Management Assoc., Vol.40, No.5, pp.320-329, 1989 (in Japanese)].

Fig. 2.4 Illustration of the degree of the inequality holding true: the degree g(A ≤ x) rises from 0 to 1 as x moves from ajL to ajR.

Page 73:

5.2 Interval Inequality

Definition 1: For an interval A and a real number x, the degree to which the inequality A ≤ x holds true is given as follows:

g(A ≤ x) = max{0, min{1, (x - aL)/(aR - aL)}}

Definition 2: For intervals A and B, the degree to which the inequality A ≤ B holds true is given as follows:

q(A ≤ B) = max{0, min{1, (bR - aL)/((aR - aL) + (bR - bL))}}

According to Definition 2, the feasible region of the interval constraint can be determined by the following theorem:

Theorem 1: For a given degree q of the inequality holding true, the interval inequality constraint can be transformed equivalently into the following crisp inequality constraint:

Σ_{j=1}^{n} (q·ajR + (1 - q)·ajL)·xj ≤ q·bL + (1 - q)·bR
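Definition 2 and Theorem 1 can be sketched in Python as follows (intervals are pairs (aL, aR); the names are illustrative):

def degree(A, B):
    # q(A <= B) = max(0, min(1, (bR - aL) / ((aR - aL) + (bR - bL))))
    (aL, aR), (bL, bR) = A, B
    return max(0.0, min(1.0, (bR - aL) / ((aR - aL) + (bR - bL))))

def crisp_coefficient(A, q):
    # Theorem 1 replaces an interval coefficient [aL, aR] by q*aR + (1 - q)*aL
    return q * A[1] + (1 - q) * A[0]

print(degree((1, 3), (2, 4)), crisp_coefficient((1, 3), 0.9))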

Page 74:

5.3 Order Relation of Interval

[Okada, S. & M. Gen: "Order Relation Between Intervals and Its Application to Shortest Path Problem," Computers & Industrial Eng., Vol.25, No.1-4, pp.147-150, 1993.]

Order relation between intervals: consider the following interval programming problem:

max { Z(x) = Σ_{j=1}^{n} Cj·xj | x ∈ S ⊆ Rn }

where S is the feasible region of x and Cj is an interval coefficient which represents the uncertain unit profit from xj. For a given x, the total profit Z(x) is an interval, and we need to make a decision based on such interval profits. The order relation of intervals is proposed to help us do this; it represents the decision-maker's preference among interval profits. The following order relations are defined for the maximization problem:

Page 75:

5.3 Order Relation of Interval

Definition 3: For two intervals A and B, the order relation ≤LR is defined as follows:

A ≤LR B if aL ≤ bL and aR ≤ bR

This order relation represents the decision-maker's preference for the alternative with the higher minimum profit and higher maximum profit.

Definition 4: For two intervals A and B, the order relation ≤CW is defined as follows:

A ≤CW B if aC ≤ bC and aW ≥ bW

This order relation represents the decision-maker's preference for the alternative with the higher expected value and less uncertainty.

Definition 5: For two intervals A and B, the order relation ≤LC is defined as follows:

A ≤LC B if aL ≤ bL and aC ≤ bC

Note that the orders given above are partial orders, which are transitive, reflexive, and anti-symmetric.

Page 76:

5.4 Transforming Interval Programming

Definition 6: A vector x ∈ S is a solution of the problem if and only if there is no x' ∈ S which satisfies

Z(x) ≤LC Z(x')

According to Definition 6, the interval programming problem can be transformed into an equivalent crisp bicriteria programming problem.

Theorem 2: The solution set of the problem defined by Definition 6 can be obtained as the Pareto solutions of the following bicriteria programming problem:

max { zL(x), zC(x) | x ∈ S ⊆ Rn }

Page 77:

5.4 Transforming Interval Programming

Now consider the following nonlinear interval programming problem, where the objective takes a product form:

max { Z(x) = Π_{j=1}^{n} Cj^xj | x ∈ S ⊆ Rn }

By the following theorem, this kind of nonlinear interval programming problem can be transformed into an equivalent linear interval programming problem.

Theorem 3: For two positive intervals A and B, the following statement holds true:

A ≤LC B  ⇔  log(A) ≤LC log(B)

Corollary 1: The nonlinear interval programming problem above is equivalent to the following linear interval programming problem:

max { Z'(x) = Σ_{j=1}^{n} log(Cj)·xj | x ∈ S ⊆ Rn }

Page 78:

5.4 Transforming Interval Programming

Maximization problem: there are two key steps when transforming an interval program into a bicriteria linear program:
using the definition of the degree of an inequality between two intervals holding true, transform the interval constraints into equivalent crisp constraints;
using the definition of the order relation between intervals, transform the interval objective into two equivalent crisp objectives.

Let us consider the maximization interval programming problem:

max Z(x) = Σ_{j=1}^{n} Cj·xj
s.t. Gi(x) = Σ_{j=1}^{n} Aij·xj ≤ Bi, i = 1, 2, ..., m
xjL ≤ xj ≤ xjU, xj integer, j = 1, 2, ..., n

where Cj = [cjL, cjR], Aij = [aijL, aijR], Bi = [biL, biR], and xjL and xjU are the lower and upper bounds for xj, respectively.

Page 79:

5.4 Transforming Interval Programming

The above problem can be transformed into the following bicriteria problem:

max zL(x) = Σ_{j=1}^{n} cjL·xj
max zC(x) = Σ_{j=1}^{n} (1/2)(cjL + cjR)·xj
s.t. gi(x) = Σ_{j=1}^{n} aij·xj ≤ bi, i = 1, 2, ..., m
xjL ≤ xj ≤ xjU, xj integer, j = 1, 2, ..., n

where aij = q·aijR + (1 - q)·aijL and bi = q·biL + (1 - q)·biR.

Page 80:

5.5 Pareto Solution for Interval Programming

Definition 7: Let F be the set of feasible solutions. A feasible solution y ∈ F is said to be a nondominated solution if and only if there is no x ∈ F such that

z(x) ≥ z(y) and z(x) ≠ z(y), where z(x) = (z1(x), z2(x), ..., zq(x))

Definition 8: The positive ideal solution (PIS) is composed of all the best objective values attainable; it is denoted as z+ = {z1+, z2+, ..., zq+}, where zq+ is the best value for the qth objective without considering the other objectives.

Definition 9: The negative ideal solution (NIS) is composed of all the worst objective values attainable; it is denoted as z- = {z1-, z2-, ..., zq-}, where zq- is the worst value for the qth objective without considering the other objectives.

Page 81:

5.5 Pareto Solution for Interval Programming

Three primary approaches, or philosophies, form the basis for nearly all candidate multiobjective techniques:

Weight or utility methods: these approaches attempt to express all objectives in terms of a single measure. They are attractive from a strictly computational point of view; the obvious drawback, however, is the difficulty of developing truly credible weights.

Ranking or prioritizing methods: these methods try to circumvent the problem indicated above. They assign priorities to each objective according to their perceived importance; most decision makers can do this.

Efficient solution or generation methods: these avoid the problems of finding weights and satisfying the ranking. They generate the entire set of nondominated solutions, or an approximation of this set, and then allow the decision makers to select the nondominated solution which best represents their trade-off among the objectives.

Page 82:

5.6 GA Procedure for Interval Programming

procedure: GA for Interval Programming (IntP)
input: IntP data set, GA parameters
output: Pareto optimal solutions E(P, C)
begin
  t ← 0;
  initialize P(t) by integer vectors;
  objectives f1(P), f2(P);
  create Pareto set E(P);
  fitness eval(P) by the hyper-plane routine;
  while (not termination condition) do
    crossover P(t) to yield C(t) by uniform crossover;
    mutation P(t) to yield C(t) by random perturbation mutation;
    objectives f1(C), f2(C);
    update Pareto set E(P, C);
    fitness eval(P, C) by the hyper-plane routine;
    select P(t+1) from P(t) and C(t) by top popSize selection;
    t ← t + 1;
  end
  output Pareto optimal solutions E(P, C);
end

Page 83:

5.7 Numerical Example

Interval programming problem: let us consider the following example with an interval objective function:

max Z(x) = [15, 17]·x1 + [15, 20]·x2 + [10, 30]·x3
s.t. g1(x) = x1 + x2 + x3 ≤ 30
g2(x) = x1 + 2x2 + x3 ≤ 40
g3(x) = 4x1 + x3 ≤ 60
xj ≥ 0, integer, j = 1, 2, 3

Page 84:

5.7 Numerical Example

The problem can be transformed into the following bicriteria linear programming problem:

max zL(x) = 15x1 + 15x2 + 10x3
max zC(x) = 16x1 + 17.5x2 + 20x3
s.t. g1(x) = x1 + x2 + x3 ≤ 30
g2(x) = x1 + 2x2 + x3 ≤ 40
g3(x) = 4x1 + x3 ≤ 60
xj ≥ 0, integer, j = 1, 2, 3

Page 85:

5.7 Numerical Example

The Pareto solutions found by the proposed method are shown in Fig. 2.5.

Fig. 2.5 The Pareto solutions of the numerical example

Page 86:

Conclusion

In many objective functions from applications we have a global minimum and several local minima. It is very difficult to develop methods which can find the global minimum with certainty in this situation.

In this chapter, we have introduced the following:
Unconstrained optimization: introduction and Ackley's function
Nonlinear programming: handling constraints and penalty functions
Stochastic optimization: Monte Carlo simulation
Nonlinear goal programming
Interval programming: order relation of intervals, transforming interval programming, and Pareto solutions for interval programming