CHAPTER 5
Approximating nonlinear optimization problem with
fuzzy relation equations
______________________________________________________________________________
5.1 Introduction
In general, an abstract system is defined by the relations among its various possible inputs
and outputs, so its behavior can be described as a collection of facts and if-then rules
that in turn can be represented in the form of fuzzy relations. The inference process for
such systems ends up with solving a system of fuzzy relational equations. Fuzzy relation
equations (FRE) offer an appropriate tool to handle and model the imprecision present in
relational structures. The notion of fuzzy relation equations (FRE) and fuzzy relational
calculus lies at the centre of fuzzy set theory and its applications, particularly in the
areas of fuzzy modeling, diagnostics, fuzzy control, etc. For this reason, several resolution
strategies for fuzzy inverse problems based on various heuristics and metaheuristics
have been proposed in the past, and the search is still on [35,73,79,124,157]. However,
solving a system of fuzzy relational equations is not straightforward. It is well
established that the solution set of a consistent system of sup-t FRE comprises a
unique maximal solution and several minimal solutions [4,49]. When the solution set of
the system of fuzzy relation equations is nonempty, the system is said to be consistent,
and inconsistent otherwise. The consistency of a system can be easily verified by
checking the potential maximum solution. As the complexity of the system increases or
perturbed systems occur, it becomes difficult to find exact solutions. This limits the
application of FRE to many practical problems. Often the solution set of
FRE is empty. In such situations, when no exact solution exists, the notion of an
"approximate solution" of FRE is addressed, as a solution is essential from the practical
aspect. Hence, methods for determining approximate solutions of FRE need to be explored.
Numerous researchers [35,36,73,124,128] have investigated the issue of approximate
solutions of FRE. But despite their wide applicability, the stream of investigation into
methods for finding approximate solutions is still not rich enough.
Pedrycz [124] first came up with a numerical method, proposing a quasi-Newton
method for solving fuzzy relation equations. Perfilieva and Gottwald [138] studied
solvability and approximate solvability of fuzzy relation equations. There are also
methods that deal with the removal of one or more equations of a system of fuzzy relation
equations when no solution exists. Further, Pedrycz [128] showed that the approximate
solutions of fuzzy relation equations of the type $X \circ A = B$ are those fuzzy relations
$X$ for which the distance of $X \circ A$ from $B$ is minimal. But the structure of the
approximate solution set obtained by this method is not clear. A statistical approach to
solve the system of FRE is presented in [130]. Gottwald and Pedrycz [35] discussed the
solvability indices of fuzzy relation equations based on the equality index introduced for
fuzzy sets by Gottwald [34], and found that the degree of difficulty in solving an FRE
system depends upon the solvability index. Yuan and Klir [204] introduced a method based
on a goodness measure of the performance of approximate solutions, and derived a lower
bound and an upper bound of solvability of systems of fuzzy relation equations.
Later on, various metaheuristics were applied to solve FRE. Different neural network
based approaches have been suggested to solve the system of fuzzy relation equations
[79,131,153,181]. Wu [185] described approximate solutions of fuzzy relation equations
based on approximate reasoning. Liu, Lur and Wu [88] studied fuzzy relational
equations with max-Łukasiewicz composition and proposed an algorithm that yields the
best approximate solution of the considered system.
The credit for applying genetic algorithms to solve fuzzy relation equations goes to Sanchez
[157]. Further work and results in this direction can be found in Negoita et al.
[108]. Recently, Luoh and Liaw [94] gave a novel genetic algorithm to find approximate
solutions of a system of fuzzy relational equations based on max-product composition.
Optimization problems with consistent fuzzy relation equations as constraints have
been studied extensively by numerous researchers [27,89,121,166,167,174], but little
attention has been paid to optimization problems when the system of FRE is inconsistent.
Thapar, Pandey and Gaur [176] studied a nonlinear optimization problem with an
inconsistent system of fuzzy relation equations based on max-min composition. The
method proceeds in two steps: first, the search space is reduced; then, a random jump
method is applied that results in a good approximate solution of the optimization
problem.
This chapter considers two nonlinear optimization problems subjected to a system of
max-$*$ fuzzy relational equations, when the system has no unique solution and $*$ is any
continuous t-norm. Two different approaches are proposed to solve the presented
nonlinear optimization problems.
5.2 A system of fuzzy relational equations
Consider the following system of fuzzy relational equations
$$x \circ A = b \qquad (5.1)$$
where $A = [a_{ij}]$, $0 \le a_{ij} \le 1$, is an $m \times n$ dimensional fuzzy matrix, $b = [b_1, b_2, \ldots, b_n]$,
$0 \le b_j \le 1$, is an $n$-dimensional vector, $*$ is a continuous t-norm operator from the
residuated lattice $L = \langle [0,1], \wedge, \vee, *, \Theta_*, 0, 1 \rangle$, and "$\circ$" denotes the max-$*$ composition of
$x$ and $A$. The resolution problem of FRE is to determine a solution vector
$x = [x_1, x_2, \ldots, x_m]$, with $0 \le x_i \le 1$, such that
$$\max_{1 \le i \le m} (x_i * a_{ij}) = b_j, \quad \forall j = 1, 2, \ldots, n \qquad (5.2)$$
Let $I = \{1, 2, \ldots, m\}$ and $J = \{1, 2, \ldots, n\}$ be the index sets. The maximum solution of (5.1)
can be computed explicitly by the residual implicator (pseudo-complement) as follows:
$$\hat{x} = A \,\Theta_*\, b = \min_{j \in J} (a_{ij} \,\Theta_*\, b_j), \quad i \in I \qquad (5.3)$$
where $a_{ij} \,\Theta_*\, b_j = \sup\{x_i \in [0,1] \mid (x_i * a_{ij}) \le b_j\}$.
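The closed form (5.3) can be sketched in a few lines of Python for the minimum t-norm, whose residuum is $a \,\Theta\, b = 1$ if $a \le b$ and $b$ otherwise; the data here are taken from Example 5.3 below (a minimal sketch, not the thesis implementation):

```python
# Sketch of computing the maximum solution x^ = A Θ b of x∘A = b, eq. (5.3),
# for the minimum t-norm. A and b are the data of Example 5.3 below.

def residuum_min(a, b):
    # Pseudo-complement of the minimum t-norm: sup{x | min(x, a) <= b}
    return 1.0 if a <= b else b

def maximum_solution(A, b, residuum=residuum_min):
    # x^_i = min over j of (a_ij Θ b_j)
    return [min(residuum(a_ij, b_j) for a_ij, b_j in zip(row, b)) for row in A]

A = [[0.9, 0.8, 0.8],
     [0.8, 0.7, 0.8],
     [0.9, 0.7, 0.6]]
b = [0.7, 0.6, 0.5]
print(maximum_solution(A, b))  # [0.5, 0.5, 0.5]
```

Other t-norms only change the `residuum` callable, e.g. `min(1, b/a)` for the product t-norm.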
The solution set of a consistent system of FRE is given by one unique maximal solution $\hat{x}$
and a possibly finite number of minimal solutions, say $L$ of them. Let $X^p$ denote the $p$th
sub-feasible region, given as $X^p = \{x \in X \mid \check{x}^p \le x \le \hat{x}\}$,
which is a lattice. The entire solution set $X(A,b)$ of system (5.1) is given as:
$$X(A,b) = \bigcup_{p=1}^{L} X^p = [\check{x}^1, \hat{x}] \cup [\check{x}^2, \hat{x}] \cup \cdots \cup [\check{x}^L, \hat{x}].$$
Lemma 5.2.1. If in the $j$th equation of system (5.1) we have $a_{ij} < b_j$, $\forall i \in I$, then the
solution set $X(A,b) = \phi$.
Proof. If in the $j$th equation $a_{ij} < b_j$ holds for all $i \in I$, then for $x_i \ne a_{ij}$,
$(x_i * a_{ij}) \le (1 * a_{ij}) = a_{ij} < b_j$, and for $x_i = a_{ij}$, $(x_i * a_{ij}) = (a_{ij} * a_{ij}) \le a_{ij} < b_j$. Thus, in
both cases $(x_i * a_{ij}) < b_j$, $\forall i \in I$. Hence $\max_{i \in I}(x_i * a_{ij}) < b_j$ and there exists no solution for the
$j$th equation. Thus, $X(A,b) = \phi$.
Lemma 5.2.1 gives only a sufficient condition for inconsistency: if its hypothesis does not
hold, it does not follow that a solution exists.
Lemma 5.2.2 [49]. Let $\check{X}(A,b)$ be the set of all minimal solutions of (5.1). Then
$$X(A,b) \ne \phi \;\Leftrightarrow\; \check{X}(A,b) \ne \phi \;\Leftrightarrow\; \hat{x} \in X(A,b).$$
Lemma 5.2.3 gives a necessary condition for a vector to belong to $X(A,b)$.
Lemma 5.2.3. If $x \in X(A,b)$, then for each $j \in J$ there exists $\tilde{i} \in I$ such that
$(x_{\tilde{i}} * a_{\tilde{i}j}) = b_j$ and $(x_i * a_{ij}) \le b_j$, $\forall i \in I$.
Proof. For $x \in X(A,b)$, $\max_{i \in I}(x_i * a_{ij}) = b_j$, $\forall j \in J$. This implies $(x_i * a_{ij}) \le b_j$, $\forall i \in I$.
Therefore, in order to satisfy the equality constraint, there must exist at least one $\tilde{i} \in I$
such that $(x_{\tilde{i}} * a_{\tilde{i}j}) = b_j$.
We first consider a nonlinear optimization problem with a general nonlinear objective
function constrained by a system of continuous t-norm based FRE. A well-structured
real-coded genetic algorithm, designed around the structure of the domain under
consideration, is suggested.
5.3 The problem I
We are interested in solving the following nonlinear optimization problem
$$\text{Min } f(x) \quad \text{s.t. } x \in X(A,b) \qquad (5.4)$$
where $f(x)$ is a nonlinear function and $x = [x_1, x_2, \ldots, x_m]$ is the solution vector. We
consider the case when the solution set of (5.1) is empty, i.e. $X(A,b) = \phi$. A vector
$x = [x_1, x_2, \ldots, x_m]$ satisfying $x \circ A = b' \ne b$ is then said to be an approximate solution of (5.1).
The goodness of the solution $x$ is measured on the basis of the (squared) Euclidean
distance of $x \circ A$ from $b$, also called the error associated with that solution. The error of
a particular solution $x$, with $x_i \in [0,1]$, $i = 1, 2, \ldots, m$, is calculated as follows:
$$e(x) = d(b, b') = \sum_{j \in J} (b_j - b'_j)^2 \qquad (5.5)$$
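The max-$*$ composition and the error measure above can be sketched as follows; the product t-norm and the data of Example 5.1 below are used purely for illustration, and the squared-distance form of (5.5) reproduces the threshold error 0.0144 reported for that example:

```python
# Sketch of b' = x∘A under a chosen t-norm and of the error e(x) of (5.5).
# The product t-norm and Example 5.1's data are illustrative assumptions.

def max_t_composition(x, A, t_norm):
    # b'_j = max over i of (x_i * a_ij)
    n = len(A[0])
    return [max(t_norm(x[i], A[i][j]) for i in range(len(x))) for j in range(n)]

def error(x, A, b, t_norm=lambda u, v: u * v):
    # e(x) of (5.5): squared deviation between b' = x∘A and b
    b_prime = max_t_composition(x, A, t_norm)
    return sum((bj - bpj) ** 2 for bj, bpj in zip(b, b_prime))

A = [[0.4350, 0.0128, 0.5065],
     [0.4352, 0.4229, 0.5323],
     [0.3440, 0.2057, 0.4385]]
b = [0.5000, 0.4092, 0.6159]
x_hat = [1.0000, 0.9676, 1.0000]   # maximum solution of Example 5.1
print(round(error(x_hat, A, b), 4))  # 0.0144
```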
5.4 Implementation of the genetic algorithm
We propose a real valued genetic algorithm (RVGA) that is designed specifically for the
considered optimization problem. The genetic operators are designed such that they
accelerate the procedure and help the algorithm to converge easily.
In the considered fuzzy optimization problem, the feasible domain given by the fuzzy relation
equations has no exact solutions. So an error-based genetic algorithm is applied that leads
to good, convergent approximate solutions of the considered optimization
problem. The design of the proposed algorithm is described as follows:
The procedure starts by generating a population of finite size, with each chromosome a
string of random values in the unit interval (0, 1). Once the population has been created,
the individuals are evaluated using some fitness criterion (or fitness function). In a general
real-valued genetic algorithm, the objective function itself is used as the fitness function.
In the considered case, the feasible domain has no exact solution, so the goal is not
just optimization but also a thorough exploration of the search space, so as to find good
approximate and converging solutions of the optimization problem. For this, a pre-fixed
threshold error value $\varepsilon_{\max}$ is set, and the aim is to find solutions having distance (or error)
less than this threshold $\varepsilon_{\max}$ while also optimizing the objective function. To
serve this purpose, a modified combined objective is formulated as follows, which
optimizes the objective and minimizes the error function in parallel:
$$f'(x) = \begin{cases} f(x), & \text{if } \varepsilon \le \varepsilon_{\max} \\[4pt] f_{\max}(x) + \displaystyle\sum_{j \in J} \Big(b_j - \max_{i \in I}(x_i * a_{ij})\Big)^2, & \text{otherwise} \end{cases} \qquad (5.6)$$
where $f_{\max}(x)$ is the value of the original objective function for the solution giving the
maximum error in the interval $(0, \varepsilon_{\max}]$ in that population (i.e. the most unfit individual
for that run). This design of the objective lowers the possibility of transmission of
unwanted solutions from the current population to the subsequent generations.
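The two branches of the modified fitness can be sketched as below; the toy objective, the stand-in error function and the values of `eps_max` and `f_max` are illustrative assumptions, not the thesis test data:

```python
# Sketch of the modified fitness f'(x) of (5.6): below the threshold error the
# raw objective is used; above it, the fitness is f_max plus the error term.

def modified_fitness(x, f, error, eps_max, f_max):
    eps = error(x)
    if eps <= eps_max:
        return f(x)          # in-threshold: rank by the objective itself
    return f_max + eps       # out-of-threshold: penalized by the error of (5.5)

# Toy illustration on [0,1]^2 with a synthetic error function.
f = lambda x: x[0] ** 2 + x[1] ** 2
error = lambda x: abs(x[0] - x[1])   # stand-in for e(x) of (5.5)
f_max = 1.5                          # worst in-threshold objective value (assumed)
print(round(modified_fitness([0.3, 0.3], f, error, 0.1, f_max), 2))  # 0.18
print(round(modified_fitness([0.9, 0.1], f, error, 0.1, f_max), 2))  # 2.3
```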
5.4.1 Selection
After the fitness function, the design of the selection scheme is the second factor
responsible for the fast and efficient operation of the proposed algorithm. Keeping in
view the considered problem, a binary tournament selection operator with a dual
selection criterion is designed. Two solutions are selected at random from the
population and compared at a time. The comparison of two solutions is performed
according to the following conditions:
1. Of two solutions having errors less than the threshold error $\varepsilon_{\max}$, the
solution giving the smaller value of the original objective function is selected.
2. Of two solutions having errors greater than the pre-specified threshold $\varepsilon_{\max}$,
the solution giving the smaller error value is selected.
3. A solution giving an error value in the range $(0, \varepsilon_{\max}]$ is preferred to a
solution giving an error value greater than the threshold $\varepsilon_{\max}$.
Using the above selection scheme, a good population of fixed size is selected that
undergoes the further cycles of the genetic algorithm.
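The three selection rules above can be sketched as a single comparator; the objective and error callables and the sample solutions are illustrative assumptions:

```python
# Sketch of the dual-criterion binary tournament of section 5.4.1.
import random

def better(a, b, f, err, eps_max):
    ea, eb = err(a), err(b)
    if ea <= eps_max and eb <= eps_max:   # rule 1: both in-threshold, compare f
        return a if f(a) <= f(b) else b
    if ea > eps_max and eb > eps_max:     # rule 2: both out, compare errors
        return a if ea <= eb else b
    return a if ea <= eps_max else b      # rule 3: prefer the in-threshold one

def tournament(pop, f, err, eps_max, size):
    # Pick pairs at random and keep the winner of each comparison.
    return [better(random.choice(pop), random.choice(pop), f, err, eps_max)
            for _ in range(size)]

f = lambda x: x[0]                        # stand-in objective
err = lambda x: x[1]                      # stand-in error
a, b = [0.2, 0.05], [0.1, 0.5]            # a in threshold, b outside
print(better(a, b, f, err, eps_max=0.1))  # [0.2, 0.05]
```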
5.4.2 Crossover and mutation
The crossover operator is the main search tool and the major exploratory mechanism of
genetic algorithms. The arithmetic crossover operation [63] is used. The operator
pushes the solutions towards the feasible region, as it is based on the ideas of linear
contraction and linear extraction (as discussed in section 4.5.2 of chapter 4) of two
individuals. In arithmetic crossover, two parents, say $x^1, x^2$, are randomly selected from the
population to produce two offspring. The two offspring $y^1, y^2$ are generated as follows:
$$y_i^1 \leftarrow \lambda_i x_i^1 + (1 - \lambda_i) x_i^2$$
$$y_i^2 \leftarrow (1 - \lambda_i) x_i^1 + \lambda_i x_i^2$$
where $\lambda_i \in (0,1)$, $\forall i = 1, 2, \ldots, m$, are uniform random numbers in the unit interval. As a
result, there might be a case where the variable components attain values outside their
range (0,1). Therefore, if the value of a variable is larger than 1 it is set to 1,
and if the value is negative it is set to 0; values between 0 and 1 are left
unchanged. After the crossover operation, the resulting crossed population undergoes the
mutation operation.
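The crossover step, including the clipping to $[0,1]$, can be sketched as follows (a minimal sketch; fresh $\lambda_i$ are drawn per component as described above):

```python
# Sketch of arithmetic crossover with clipping of offspring components to [0, 1].
import random

def arithmetic_crossover(x1, x2):
    clip = lambda v: min(1.0, max(0.0, v))
    y1, y2 = [], []
    for a, b in zip(x1, x2):
        lam = random.random()                 # λ_i ∈ [0, 1)
        y1.append(clip(lam * a + (1 - lam) * b))
        y2.append(clip((1 - lam) * a + lam * b))
    return y1, y2

y1, y2 = arithmetic_crossover([0.2, 0.9], [0.6, 0.1])
# Each offspring component lies between the two parent components.
```

Since the offspring are convex combinations of parents already inside $[0,1]$, the clip is a safety net rather than a frequent correction.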
Mutation is a genetic operator responsible for introducing new genetic information into the
generation; it prevents premature convergence of the algorithm. To meet this purpose,
feasible mutation has been adopted, which is applied to a solution probabilistically. For the
mutation operation, a random element is selected for the $i$th variable, $\forall i \in \{1, 2, \ldots, m\}$, of a
chromosome, according to the mutation probability, and is replaced with a feasible
random value lying in the range $(0, \hat{x}_i)$. For example, let $x$ be the chromosome and the
mutation probability be $\delta \in (0,1)$. The mutation is applied to $x$ probabilistically as:
1. For each $i = 1, 2, \ldots, m$, generate a random number $r_i \in (0,1)$.
2. For $i = 1, 2, \ldots, m$, if $r_i \le \delta$, then randomly assign $x_i$ a number in the range $(0, \hat{x}_i)$.
The newly obtained population again undergoes the cycle of the RVGA, and the algorithm
keeps running until some termination criterion is met.
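The two mutation steps above can be sketched as follows; the sample chromosome and maximum solution are illustrative:

```python
# Sketch of the feasible mutation: with probability δ each component x_i is
# replaced by a random value in (0, x^_i), x^ being the maximum solution (5.3).
import random

def feasible_mutation(x, x_hat, delta=0.1):
    return [random.uniform(0.0, x_hat[i]) if random.random() <= delta else xi
            for i, xi in enumerate(x)]

x_hat = [1.0, 0.9676, 1.0]                      # e.g. Example 5.1's maximum solution
mutated = feasible_mutation([0.3, 0.5, 0.7], x_hat, delta=0.5)
```

Drawing replacements below $\hat{x}_i$ keeps mutants inside the region where solutions of (5.1) can live, which is what makes the operator "feasible".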
The whole cycle of the applied GA can be summarized in the following steps:
Algorithm 1: Procedure to solve optimization problem (5.4)
Step 1: Get the matrices $A$, $b$ and the nonlinear objective function $f$.
Step 2: Find the maximum solution $\hat{x}$ using (5.3).
Step 3: If the system of FRE is not solvable, i.e. $\hat{x} \circ A \ne b$, go to step 4; else stop.
Step 4: Initialize a population of fixed size, say $k$; set the threshold error value as
$\varepsilon_{\max} = |\hat{x} \circ A - b|$ and set the generation counter gen = 1.
Step 5: Evaluate the population using the fitness function defined in equation (5.6).
Step 6: Select a fixed number of good solutions by the binary tournament selection operator
described in section 5.4.1; take the best-fit individual for that generation, say $x'$, and
determine its corresponding error $\varepsilon' = |x' \circ A - b|$ by (5.5).
Step 7: Apply the crossover and mutation operators as described in section 5.4.2.
Step 8: If $\varepsilon' < \varepsilon_{\max}$, update the threshold error as $\varepsilon_{\max} = \varepsilon'$.
Step 9: If the termination criterion is met, stop; otherwise set gen = gen + 1 and go to step 5.
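A condensed, self-contained sketch of Algorithm 1 is given below for the product t-norm, using the data and the reconstructed objective of Example 5.1. The mutation step is omitted and the operators are simplified for brevity, so this is a sketch under stated assumptions rather than the thesis implementation:

```python
import random
random.seed(0)

A = [[0.4350, 0.0128, 0.5065],
     [0.4352, 0.4229, 0.5323],
     [0.3440, 0.2057, 0.4385]]
b = [0.5000, 0.4092, 0.6159]
f = lambda x: 3000*x[0] + 1000*x[0]**3 + 2000*x[1] + 666.667*x[1]**3

def err(x):
    # e(x) of (5.5) under max-product composition
    bp = [max(x[i] * A[i][j] for i in range(3)) for j in range(3)]
    return sum((u - v) ** 2 for u, v in zip(b, bp))

# Steps 2-4: maximum solution via the product residuum min(1, b/a), threshold
x_hat = [min(min(1.0, bj / aij) for aij, bj in zip(row, b)) for row in A]
eps_max = err(x_hat)
pop = [x_hat] + [[random.random() for _ in range(3)] for _ in range(199)]

for gen in range(100):                      # steps 5-9, condensed
    key = lambda x: f(x) if err(x) <= eps_max else 1e9 + err(x)
    pop.sort(key=key)                       # fitness of (5.6), simplified
    best = pop[0]
    eps_max = min(eps_max, err(best))       # step 8: tighten the threshold
    parents = pop[:100]                     # truncation stands in for tournament
    children = []
    for _ in range(100):                    # arithmetic crossover only
        p, q = random.sample(parents, 2)
        lam = random.random()
        children.append([lam*u + (1 - lam)*v for u, v in zip(p, q)])
    pop = parents + children

print(round(err(best), 4), round(f(best), 1))
```

Seeding the population with $\hat{x}$ guarantees at least one in-threshold individual from the start, so the best solution's error never exceeds the initial threshold.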
5.5 Illustrative examples
For illustration of the proposed procedure, we consider some optimization problems with
nonlinear objective functions subjected to max-$*$ composition based fuzzy relation
equations having no unique solution, i.e. $\hat{x} \circ A \ne b$. The nonlinear objective functions are
taken from a well-known source [52]. Algorithm 1 is used to obtain the best
converging solution of the considered optimization problem. Due to the large search
space, a large population size generally results in faster convergence of the algorithm. In
our case we set the threshold as
$$\varepsilon_{\max} = \sum_{j \in J} \Big(b_j - \max_{i \in I}(\hat{x}_i * a_{ij})\Big)^2.$$
For our algorithm, we take the following parameter settings:
Mutation probability $\delta$: 0.1
Population size $k$: 200
The results obtained are presented numerically in tables 5.1-5.3 and graphically in
figures 5.1-5.5.
Example 5.1. Min $f(x) = 3000x_1 + 1000x_1^3 + 2000x_2 + 666.667x_2^3$
s.t. $x \circ A = b$, where $(x * a) = x \cdot a$ and
$$A = \begin{bmatrix} 0.4350 & 0.0128 & 0.5065 \\ 0.4352 & 0.4229 & 0.5323 \\ 0.3440 & 0.2057 & 0.4385 \end{bmatrix}, \quad b = [0.5000 \;\; 0.4092 \;\; 0.6159]$$
The maximum solution of the system comes out to be $[1.0000 \;\; 0.9676 \;\; 1.0000]$,
giving $\varepsilon_{\max} = 0.0144$.
Table 5.1: Objective values by iterations - Example 5.1

Iterations   x1       x2       x3       f(x) x 10^-3   e(x)
1            0.2490   0.9829   0.4491   3.3613         0.0139
5            0.0588   0.9847   0.3426   2.7826         0.0136
7            0.0185   0.9852   0.2225   2.7451         0.0135
13           0.0438   0.9867   0.7883   2.6974         0.0133
51           0.0254   0.9885   0.6087   2.6631         0.0130
66           0.0000   0.9834   0.5994   2.6207         0.0130
Figure 5.1: Performance of GA for Example 5.1
Figure 5.2: Nonlinear function with optimal point in Example 5.1
Example 5.2. Min $f(x) = x_1 x_2 x_3 x_4 x_5$
s.t. $x \circ A = b$, where $(x * a) = \min(x, a)$ and
$$A = \begin{bmatrix} 0.5 & 0.7 & 0.5 & 0.8 \\ 0.6 & 0.3 & 0.6 & 0.9 \\ 0.1 & 0.9 & 1 & 0 \\ 0.8 & 0.5 & 0.9 & 0.6 \\ 0.1 & 0.4 & 0.7 & 0.9 \end{bmatrix}, \quad b = [0.8 \;\; 0.7 \;\; 0.6 \;\; 0.5]$$
The maximum solution is obtained as $[0.5000 \;\; 0.5000 \;\; 0.6000 \;\; 0.5000 \;\; 0.5000]$,
giving $\varepsilon_{\max} = 0.1000$.

Table 5.2: Objective values by iterations - Example 5.2

Iterations   x1       x2       x3       x4       x5       f(x)     e(x)
1            0.6501   0.2210   0.0401   0.7244   0.2987   0.0012   0.0462
3            0.3044   0.6373   0.6476   0.7088   0.0026   0.0002   0.0417
8            0.0365   0.0266   0.6947   0.7148   0.3362   0.0002   0.0305
14           0.0407   0.0091   0.6942   0.7104   0.3039   0.0001   0.0303
17           0.0595   0.3228   0.6940   0.7090   0.0036   0.0000   0.0302

Figure 5.3: Performance of GA for Example 5.2
Example 5.3. Min $f(x) = 10(x_1 - 0.5)^2 + 10(x_2 - 0.5)^2 + 5$
s.t. $x \circ A = b$, where $(x * a) = \min(x, a)$ and
$$A = \begin{bmatrix} 0.9 & 0.8 & 0.8 \\ 0.8 & 0.7 & 0.8 \\ 0.9 & 0.7 & 0.6 \end{bmatrix}, \quad b = [0.7 \;\; 0.6 \;\; 0.5]$$
The maximum solution is obtained as $[0.5000 \;\; 0.5000 \;\; 0.5000]$, giving $\varepsilon_{\max} = 0.05$.

Table 5.3: Objective values by iterations - Example 5.3

Iterations   x1       x2       x3       f(x)     e(x)
1            0.4423   0.4704   0.6688   5.0421   0.0157
2            0.4631   0.4853   0.6334   5.0158   0.0156
3            0.4784   0.5164   0.6617   5.0074   0.0153
4            0.4802   0.4933   0.6586   5.0044   0.0151
6            0.5027   0.4976   0.6506   5.0001   0.0150
31           0.5002   0.4990   0.6461   5.0000   0.0150

Figure 5.4: Performance of GA for Example 5.3
Figure 5.5: Nonlinear function with optimal point in Example 5.3

The following section considers the second nonlinear optimization problem, with a special
kind of geometric objective function called a generalized monomial, subjected to a system
of continuous t-norm based FRE, when the system has no unique solution.

5.6 The problem II
We consider the following nonlinear optimization problem:
$$\text{Min } Z(x) \quad \text{s.t. } x \in X(A,b) \qquad (5.7)$$
where $Z(x)$ is the generalized monomial function defined as:
$$Z(x) = \max_{k \in K}\{f_k(x)\} = \max_{k \in K}\Big\{c_k \prod_{i=1}^{m} x_i^{r_{ik}}\Big\}$$
where $f_k(x)$ is a monomial geometric objective function in $x$, with each coefficient
$c_k > 0$ and $r_{ik} \in R$ $(1 \le k \le K, \; 1 \le i \le m)$ the corresponding exponent of variable $x_i$ in the $k$th
monomial, and $x = [x_1, x_2, \ldots, x_m]$ is the solution vector. The maximum of two monomials
is generally not differentiable (where the two monomials are equal), whereas a monomial
is everywhere differentiable. A more detailed discussion of the behavior and extensions of GP
can be found in [12].
5.7 Approximate solutions of FRE
The problem of approximating fuzzy relation equations (5.1) is to find one or more
vectors $x = [x_1, x_2, \ldots, x_m]$ having the least distance (error) between the left and right
parts of system (5.1), i.e. finding the approximate solutions that have minimum
distance of $x \circ A$ from $b$.
To get such solutions, a real-coded genetic algorithm (RCGA) is designed that finds a
vector $x^{\min} = [x_1^{\min}, x_2^{\min}, \ldots, x_m^{\min}]$ which provides the least distance between the left and
right parts of system (5.1) among all the solution vectors, i.e. $e^{\min} = \min_x (e(x))$. Once the
solution giving the least error value is obtained, the uncertainty interval providing the
essential range of each decision variable is determined. For this, the algorithm is operated
until a set of solution vectors having the same distance as the solution vector $x^{\min}$ is
obtained. When such a set of equivalent solutions and the vector $x^{\min}$ have been obtained,
upper and lower bounds of the individual components of the obtained vectors are found.
Let $\{[x_1^l, x_2^l, \ldots, x_m^l]\}$, $l = 1, 2, \ldots, L$, be the collection of the $L$ equivalent vectors obtained.
The uncertainty interval $[\underline{x}_i, \overline{x}_i]$ representing the essential range of each component $x_i$ is
determined by selecting the lower bound $\underline{x}_i$ and the upper bound $\overline{x}_i$ of the $i$th component,
respectively determined as $\underline{x}_i = \min_{l=1}^{L}\{x_i^l\}$ and $\overline{x}_i = \max_{l=1}^{L}\{x_i^l\}$.
The RCGA starts by randomly generating an initial population of several solutions, where every
component $x_i$, $i = 1, 2, \ldots, m$, of each solution vector is a random number in the unit
interval (0,1).
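The component-wise bounds over the equivalent solutions can be sketched as follows; the sample vectors are illustrative, not the thesis data:

```python
# Sketch of deriving the uncertainty interval [x̲_i, x̄_i] of each component
# from the L equivalent minimum-error vectors returned by the RCGA.

def uncertainty_intervals(solutions):
    m = len(solutions[0])
    return [(min(s[i] for s in solutions), max(s[i] for s in solutions))
            for i in range(m)]

equiv = [[0.10, 0.47, 0.8714],     # illustrative equivalent solutions
         [0.02, 0.31, 0.8719],
         [0.38, 0.05, 0.8716]]
print(uncertainty_intervals(equiv))
# [(0.02, 0.38), (0.05, 0.47), (0.8714, 0.8719)]
```

A tight interval (like the third component here) signals that the error forces that variable into a narrow essential range, which is exactly what the examples in section 5.8 exhibit.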
The chromosomes are evaluated using the distance function (error) as the fitness function,
given as follows:
$$e(x) = |x \circ A - b| = \sum_{j \in J} \Big[b_j - \max_{i \in I}(x_i * a_{ij})\Big]^2 \qquad (5.8)$$
After evaluation of the chromosomes, selection is performed according to the rank-based
selection procedure discussed in section 4.5.1 of chapter 4. A lower fitness value (error
or distance) of a chromosome represents a better candidate for the optimization
criterion. For speedy convergence of the algorithm, and to avoid unnecessary
exploration, we set the upper bound of the error value (distance function) to a real
number $\varepsilon$.
After selecting good individuals, we apply the arithmetic crossover and feasible mutation
described in section 5.4.2 above. The newly generated solutions are processed until some
termination criterion is met. The proposed procedure to solve system (5.1) can be
summarized in Algorithm 2.
Algorithm 2: Procedure to solve system (5.1)
1. Get the matrices $A$ and $b$.
2. Find the maximum solution $\hat{x}$ using (5.3).
3. Check whether system (5.1) is solvable, i.e. $\hat{x} \circ A = b$; if yes, stop.
4. Run the real-coded genetic algorithm (RCGA) to find vectors $x^{\min}$ s.t. $e(x^{\min}) = e^{\min}$.
Determine $L$ such equivalent solutions having the same error value $e^{\min}$.
5. Obtain the uncertainty intervals $[\underline{x}_i, \overline{x}_i]$ for each of the decision variables $x_i$,
$\forall i = 1, 2, \ldots, m$.
Once the essential ranges of the decision variables have been determined by Algorithm 2, the
modified problem is formed as follows:
$$\text{Min } Z(x) \quad \text{s.t. } x_i \in [\underline{x}_i, \overline{x}_i] \subseteq [0,1], \;\; \forall i = 1, 2, \ldots, m \qquad (5.9)$$
Again, the genetic procedure RCGA described above for finding the essential range is
used to solve problem (5.9). The design of the RCGA remains the same, except that the fitness
function is now the objective function $Z(x)$ itself. To improve the convergence of
the genetic algorithm, the elitism criterion [18] is used, in which a fraction of the best
chromosomes from the previous population is placed into the new population, so that the
best chromosomes never disappear from the population through crossover or mutation.
The procedure to solve the considered optimization problem (5.9) is described in
Algorithm 3.
Algorithm 3: Procedure to solve optimization problem (5.9)
1. Find the uncertainty intervals for each of the decision variables using Algorithm 2.
2. Solve the optimization problem with the considered objective using the RCGA.
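The elitism step used inside the RCGA above can be sketched as follows; the elite fraction, the fitness callable and the toy populations are illustrative assumptions:

```python
# Sketch of elitism [18]: a fraction of the best chromosomes of the previous
# generation is copied unchanged into the new population, so crossover and
# mutation can never lose them. Minimization: lower fitness is better.

def apply_elitism(old_pop, new_pop, fitness, frac=0.05):
    k = max(1, int(frac * len(old_pop)))
    elites = sorted(old_pop, key=fitness)[:k]
    survivors = sorted(new_pop, key=fitness)[:len(new_pop) - k]
    return elites + survivors

old = [[0.9], [0.1], [0.5]]
new = [[0.8], [0.7], [0.6]]
print(apply_elitism(old, new, fitness=lambda x: x[0], frac=0.34))
# [[0.1], [0.6], [0.7]]
```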
5.8 Illustrative examples
We consider some optimization problems with generalized monomial objectives subject
to max-$*$ composition based fuzzy relation equations having no unique solution.
Algorithm 2 is applied to obtain the approximate solutions of the fuzzy relation equations
within their uncertainty intervals, and then the modified optimization problem is solved.
The results obtained are presented with the help of tables 5.4-5.9. The behaviour of the
algorithm is presented graphically via figures 5.6-5.11.
Example 5.4. Min $Z(x) = \max\{f_1(x), f_2(x), f_3(x)\}$
where $f_1(x) = 10x_1^{-0.2}x_2^{-0.3}x_3^{2}$, $f_2(x) = 0.2x_1^{-3}x_2^{0.2}x_3^{1.5}$, $f_3(x) = 0.3x_1^{0.3}x_2^{-2}x_3^{-1.5}$
s.t. $x \circ A = b$, where $(x * a) = \min(x, a)$ and
$$A = \begin{bmatrix} 0.9 & 0.8 & 0.8 \\ 0.8 & 0.7 & 0.8 \\ 0.9 & 0.7 & 0.6 \end{bmatrix}, \quad b = [0.7 \;\; 0.6 \;\; 0.5]$$
The maximum solution comes out to be $[0.5000 \;\; 0.5000 \;\; 0.5000]$. Since $\hat{x} \circ A \ne b$, the
system is inconsistent. After running Algorithm 2, the solution of the fuzzy relation
equations can be represented in the form of uncertainty intervals as $x_1 \in [0, 0.6]$,
$x_2 \in [0, 0.6]$, $x_3 \in [0.6450, 0.6550]$.
Table 5.4: Error values by iterations from RCGA - Example 5.4

Iteration   x1       x2       x3       e(x)
1           0.5000   0.5000   0.5000   0.0500
98          0.5615   0.6800   0.3486   0.0392
194         0.2399   0.5276   0.1762   0.0357
300         0.0732   0.6105   0.8094   0.0342
409         0.6128   0.0901   0.2476   0.0205
467         0.1586   0.0380   0.6173   0.0189
600         0.0516   0.5711   0.6061   0.0184
635         0.2367   0.3015   0.6088   0.0171
801         0.1955   0.4756   0.6754   0.0163
914         0.1454   0.5591   0.6394   0.0157
935         0.5705   0.5158   0.6648   0.0154
1025        0.3444   0.3877   0.6400   0.0152
1200        0.1957   0.3804   0.6481   0.0150
Figure 5.6: Performance of GA for solving FRE in Example 5.4
Table 5.5: Objective values by iterations - Example 5.4

Iterations   x1       x2       x3       Z(x)
1            0.5949   0.5801   0.6503   5.5244
20           0.5607   0.5964   0.6466   5.4802
27           0.5812   0.5969   0.6488   5.4767
45           0.5715   0.5936   0.6454   5.4477
86           0.5885   0.5914   0.6463   5.4363
110          0.5993   0.5890   0.6459   5.4171
165          0.5841   0.5998   0.6456   5.4096
204          0.5975   0.5926   0.6453   5.3999
224          0.5932   0.5997   0.6455   5.3920
303          0.5947   0.5980   0.6453   5.3905
309          0.5937   0.5995   0.6453   5.3884
378          0.5959   0.5988   0.6453   5.3865
500          0.5973   0.5985   0.6450   5.3806
536          0.5989   0.5980   0.6450   5.3791
840          0.5996   0.5993   0.6450   5.3741
2196         0.5989   0.5999   0.6450   5.3734
2490         0.5998   0.5999   0.6450   5.3716
9730         0.5999   0.5999   0.6450   5.3714
12637        0.5999   0.5999   0.6450   5.3712
14935        0.6000   0.6000   0.6450   5.3710
Figure 5.7: Performance of GA for Example 5.4

Example 5.5. Min $Z(x) = \max\{f_1(x), f_2(x), f_3(x)\}$
where $f_1(x) = 2x_1^{0.5}x_2^{2}$, $f_2(x) = 3x_1^{0.3}x_2^{-0.2}x_3^{1.5}$, $f_3(x) = 0.5x_1^{0.5}x_2^{-2}x_3^{-2.5}$
s.t. $x \circ A = b$, where $(x * a) = \max(0, x + a - 1)$ and
$$A = \begin{bmatrix} 0.5000 & 0.6000 & 0.2000 & 0.3000 \\ 0.7000 & 0.2000 & 0.6000 & 0.4000 \\ 0.8000 & 0.1000 & 0.2000 & 0.4000 \end{bmatrix}, \quad b = [1.000 \;\; 0 \;\; 0 \;\; 0]$$
The maximum solution of the considered FRE is computed by assigning:
$$\hat{x}_i = \min_{j \in J}(a_{ij} \,\Theta_*\, b_j), \quad \text{where } a_{ij} \,\Theta_*\, b_j = \min(1, 1 - a_{ij} + b_j)$$
The maximum solution is obtained as $[0.4000 \;\; 0.4000 \;\; 0.6000]$. Here $\hat{x} \circ A \ne b$, i.e. the
system does not have a unique solution. After running Algorithm 2, the solution of the
fuzzy relation equations can be represented in the form of uncertainty intervals as
$x_1 \in [0.0011, 0.4084]$, $x_2 \in [0.0021, 0.4683]$, $x_3 \in [0.8714, 0.8719]$.
Table 5.6: Error values by iterations from RCGA - Example 5.5

Iterations   x1       x2       x3       e(x)
1            0.4000   0.4000   0.6000   0.3600
2            0.3800   0.4274   0.7971   0.2019
18           0.3432   0.3264   0.8064   0.1976
94           0.2261   0.1064   0.8358   0.1895
113          0.4154   0.3050   0.8744   0.1871
149          0.1709   0.1044   0.8577   0.1869
800          0.4028   0.0347   0.8603   0.1868
1000         0.1400   0.2418   0.8692   0.1867

Figure 5.8: Performance of GA for solving FRE in Example 5.5
Table 5.7: Objective values by iterations - Example 5.5

Iterations   x1       x2       x3       Z(x)
1            0.0110   0.3969   0.8717   0.7592
4            0.0071   0.3933   0.8715   0.6667
22           0.0050   0.2845   0.8716   0.6401
54           0.0032   0.4622   0.8715   0.5081
74           0.0015   0.4489   0.8716   0.4104
186          0.0014   0.4422   0.8717   0.3974
570          0.0012   0.4676   0.8714   0.3795
792          0.0012   0.4682   0.8714   0.3787
1073         0.0011   0.4619   0.8714   0.3722
2412         0.0011   0.4668   0.8714   0.3720
2426         0.0011   0.4676   0.8714   0.3685
7467         0.0011   0.4682   0.8714   0.3680

Figure 5.9: Performance of GA for Example 5.5
Example 5.6. Min $Z(x) = \max\{f_1(x), f_2(x), f_3(x)\}$
where $f_1(x) = 2x_2^{0.2}x_3^{-3.9}$, $f_2(x) = 0.1x_1^{1.7}x_2^{0.7}x_3^{-0.5}$, $f_3(x) = 0.2x_1^{2.2}x_2^{-0.5}x_3^{0.5}$
s.t. $x \circ A = b$, where $(x * a) = x \cdot a$ and
$$A = \begin{bmatrix} 0.2 & 0.5 & 0.1 \\ 0.6 & 0.8 & 0.1 \\ 0.3 & 0.1 & 0.5 \end{bmatrix}, \quad b = [0.7 \;\; 0.3 \;\; 0.54]$$
The maximum solution comes out to be $[0.6000 \;\; 0.3750 \;\; 1.0000]$. Since $\hat{x} \circ A \ne b$, the
system is inconsistent. After running Algorithm 2, the solution of the fuzzy relation
equations can be represented in the form of uncertainty intervals as
$x_1 \in [0.0151, 0.9238]$, $x_2 \in [0.6534, 0.6653]$, $x_3 \in [0.9988, 1]$.
Table 5.8: Error values by iterations from RCGA - Example 5.6

Iterations   x1       x2       x3       e(x)
1            0.0178   0.7341   0.9449   0.1545
2            0.4151   0.6194   0.9202   0.1524
4            0.1542   0.6346   0.9825   0.1474
18           0.5204   0.6388   0.9891   0.1469
103          0.3204   0.6717   0.9877   0.1467
170          0.6145   0.6597   0.9943   0.1462
711          0.3281   0.6566   0.9998   0.1460

Figure 5.10: Performance of GA for solving FRE in Example 5.6
Table 5.9: Objective values by iterations - Example 5.6

Iterations   x1                  x2                  x3                  Z(x)
1            0.33150000000000    0.65460000000000    0.99990000000000    1.838208
10           0.05359783524843    0.65497163207262    0.99997144103778    1.837904
30           0.37525811320337    0.65429688731845    0.99992731972972    1.837842
53           0.56627835452936    0.65478799778839    0.99998623143182    1.837695
76           0.17900064235879    0.65422067161832    0.99994951632460    1.837640
93           0.33293273955089    0.65471758308899    0.99999864362696    1.837567
109          0.28431221943608    0.65428387117017    0.99998899203319    1.837392
258          0.21421775187278    0.65423073084255    0.99998727173686    1.837375
460          0.45424581988486    0.65434033833548    0.99999930512376    1.837350
764          0.58563359959818    0.65424282346368    0.99999505619264    1.837326
908          0.31251051189700    0.65423061585757    0.99999875990685    1.837292
1132         0.34095105249210    0.65420464530495    0.99999931816709    1.837274
4797         0.25349123838667    0.65420530804149    0.99999990118377    1.837270
5304         0.48401520815171    0.65420181545464    0.99999984605373    1.837268

Figure 5.11: Performance of GA for Example 5.6

5.9 Conclusion
This chapter considers two nonlinear optimization problems subjected to a continuous
t-norm based system of max-$*$ fuzzy relational equations. As the system of constraints is
unsolvable, the approximate solutions are determined, leading to the least error and
optimizing the objective simultaneously. Such optimization models are common in
practical situations when we face perturbed systems.
The first problem states a nonlinear optimization problem with a general nonlinear
objective function. A well-structured genetic algorithm is applied to obtain good
converging solutions of the nonlinear programming problem. The problem-specific designs of
the selection operator and the fitness function are key features of the algorithm, responsible
for its faster convergence. The generation-wise update of the threshold error value results in
dual optimization of the objective function and the error function simultaneously. Experimental
results for optimization problems with different composition-based FRE systems are
presented that validate the capability of the proposed algorithm.
The second problem considers a generalized geometric optimization problem with an
inconsistent system of constraints. A method is proposed to find approximate solutions of
such fuzzy relation equations within uncertainty intervals described by lower and upper
bounds of each component of the solution vector. The problem is then modified to an
optimization problem with a reduced search space. A well-structured genetic algorithm is
applied to obtain good converging solutions of the nonlinear programming problem, where
the value of each component of the solution vector lies within its respective uncertainty
interval. Experimental results for optimization problems with different composition-based
FRE systems are presented to demonstrate the working of the proposed procedure.
**********