
Nonlinear Analysis 63 (2005) 225–236, www.elsevier.com/locate/na

An η-approximation method in nonlinear vector optimization

Tadeusz Antczak∗
Faculty of Mathematics, University of Łódź, Banacha 22, 90-238 Łódź, Poland

Received 27 January 2005; accepted 6 May 2005

Abstract

In this paper, a new approach to the solution of a nonlinear multiobjective programming problem is introduced. An equivalent η-approximated vector optimization problem is constructed by a modification of the objective and the constraint functions in the original multiobjective programming problem. The connection between (weak) efficient points in the original multiobjective programming problem and its equivalent η-approximated vector optimization problem is proved. In this way, optimality conditions for nonlinear constrained multiobjective programming problems having invex and/or generalized invex objective and constraint functions (with respect to the same function η) are obtained.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Multiobjective programming; Invex function with respect to η; η-approximated vector optimization problem; (weak) Pareto optimal point

1. Introduction

Multiobjective programming has in recent times become an important area of investigation, mainly because of its practical applications, for example, in economics, decision theory, optimal control, and game theory. A large number of optimization problems are in fact multiobjective programming problems in which the objectives conflict. As a result, there is usually no single solution which optimizes all objectives simultaneously.

∗ Tel.: +48 42 355949; fax: +48 42 354266. E-mail address: [email protected].

0362-546X/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.na.2005.05.008


The concept of a (weakly) efficient point (a (weak) Pareto optimal point) has played a useful role in the analysis and solution of this type of optimization problem.

The study of optimal solutions of multiobjective programming problems may be approached from two directions: one is to relate them to the solutions of scalar problems, whose resolution has been studied extensively; another is to find conditions which are easier to deal with computationally and which guarantee Pareto optimality.

The properties of the objective function and the constraints determine the applicable method. For example, in linear programming the multiobjective simplex algorithm can be used to solve this type of vector optimization problem, but there is no general solution method for an arbitrary problem. The convex vector optimization problem has been extensively studied in recent years, and various generalizations of convexity have been applied successfully to obtain efficiency and weak efficiency necessary conditions, alternative theorems, multiplier rules, duality results, etc. (see, for example, [4,6–8,11,12,15–18], and others).

In recent years, Hanson [9] introduced the concept of a differentiable invex function, which is a generalization of the differentiable convex function, and he proved the Karush–Kuhn–Tucker sufficient optimality theorem and the Wolfe duality results for a nonlinear scalar mathematical programming problem involving invex functions.

Several authors have extended the relevant results in the theory of multiobjective optimization using this concept. Weir [18] considered a multiobjective programming problem involving invex functions and obtained Karush–Kuhn–Tucker type necessary and sufficient conditions for a feasible point to be properly efficient. Using the concept of V-invexity as a generalization of invexity in the vector case, Jeyakumar and Mond [10] obtained some weak efficiency conditions and duality results for a nonconvex multiobjective programming problem. Osuna-Gomez et al. [13] gave new characterizations of weakly efficient solutions for constrained multiobjective programming problems via a generalization of the Karush–Kuhn–Tucker optimality conditions.

Considerable attention has been given recently to devising new methods which solve the original multiobjective mathematical programming problem and its duals with the help of some associated vector optimization problem. One such method is that proposed by Antczak [1]. He introduced a new approach with a modified objective function for solving differentiable multiobjective optimization problems involving invex functions. He obtained optimality conditions for Pareto optimality by constructing, for the considered multiobjective programming problem, an equivalent vector minimization problem and then using an invexity concept in multiobjective programming. Moreover, a definition of the so-called η-Lagrange function for such a vector optimization problem was given, for which modified vector-valued saddle point results were presented.

This paper develops the modified objective function method introduced by Antczak [1] for solving a multiobjective programming problem with nonlinear objective functions, and also the η-approximation method introduced by Antczak [2] for solving a nonlinear scalar mathematical programming problem involving invex functions with respect to the same function η.

The aim of the present paper is to show how one can obtain optimality conditions for Pareto optimality by constructing, for the considered nonlinear multiobjective programming problem, an equivalent vector minimization problem and then using an invexity concept in multiobjective programming. In contrast to the modified objective function method [1], the equivalent vector optimization problem is obtained by a modification of both the objective functions and the constraints in the given multiobjective programming problem at an arbitrary but fixed feasible point x̄. This construction depends heavily on results proved in this paper which connect the (weakly) efficient points of the original multiobjective programming problem to the (weakly) efficient points of the modified vector minimization problem. In this way, we obtain the so-called associated η-approximated vector optimization problem with the same (weak) Pareto optimal solution x̄ and the same optimal value as in the original multiobjective programming problem. Furthermore, we establish this equivalence under weaker restrictions imposed on the function η than Condition (A) defined by Antczak in [1].

2. Preliminaries

The following convention for equalities and inequalities will be used throughout the paper. For any x = (x₁, x₂, …, xₙ)ᵀ, y = (y₁, y₂, …, yₙ)ᵀ, we define:

(i) x = y if and only if xᵢ = yᵢ for all i = 1, 2, …, n;
(ii) x < y if and only if xᵢ < yᵢ for all i = 1, 2, …, n;
(iii) x ≦ y if and only if xᵢ ≤ yᵢ for all i = 1, 2, …, n;
(iv) x ≤ y if and only if x ≦ y and x ≠ y.

In the past few years, an extensive literature has grown on families of more general functions that can substitute for convex functions in optimization theory. Such functions are called generalized convex. Among these, and because of their importance, we single out the invex functions, introduced by Hanson [9] and later so named by Craven [5]. To simplify the presentation, we state the invexity and generalized invexity definitions for vector-valued functions; they coincide with those given in the scalar case (see [9]).

Definition 1. Let f : X → Rᵏ be a differentiable function on a nonempty open set X ⊂ Rⁿ. Then f is invex with respect to η at u ∈ X on X if there exists η : X × X → Rⁿ such that, for all x ∈ X,

f(x) − f(u) ≧ ∇f(u)η(x, u).   (1)

If inequality (1) holds for any u ∈ X, then f is invex with respect to η on X.
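For intuition, a differentiable convex function is invex with respect to the simplest kernel η(x, u) = x − u, since (1) then reduces to the usual gradient inequality. The following minimal sketch (illustrative only; the sample function and the sampling scheme are not taken from the paper) checks inequality (1) numerically for such a choice.

```python
import numpy as np

# Illustrative check of Definition 1 (not from the paper): every differentiable
# convex function is invex with eta(x, u) = x - u, because the gradient
# inequality f(x) - f(u) >= grad f(u)^T (x - u) holds for convex f.
def f(x):          # sample convex function f : R^2 -> R
    return x[0] ** 2 + np.exp(x[1])

def grad_f(x):     # its gradient
    return np.array([2.0 * x[0], np.exp(x[1])])

def eta(x, u):     # the simplest invexity kernel: eta(x, u) = x - u
    return x - u

rng = np.random.default_rng(0)
ok = all(
    f(x) - f(u) >= grad_f(u) @ eta(x, u) - 1e-12
    for x, u in (rng.uniform(-2, 2, (2, 2)) for _ in range(1000))
)
print("inequality (1) held on all sampled pairs:", ok)
```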

Definition 2. Let f : X → Rᵏ be a differentiable function on a nonempty open set X ⊂ Rⁿ. Then f is strictly invex with respect to η at u ∈ X on X if there exists η : X × X → Rⁿ such that, for all x ∈ X with x ≠ u,

f(x) − f(u) > ∇f(u)η(x, u).   (2)

If inequality (2) holds for any u ∈ X, then f is strictly invex with respect to η on X.


Definition 3. Let f : X → Rᵏ be a differentiable function on a nonempty open set X ⊂ Rⁿ. Then f is pseudo-invex (pseudo-incave) with respect to η at u ∈ X on X if there exists η : X × X → Rⁿ such that, for all x ∈ X,

∇f(u)η(x, u) ≧ 0 ⟹ f(x) − f(u) ≧ 0,
(∇f(u)η(x, u) ≦ 0 ⟹ f(x) − f(u) ≦ 0).   (3)

If implication (3) holds for any u ∈ X, then f is pseudo-invex (pseudo-incave) with respect to η on X.

Definition 4. Let f : X → Rᵏ be a differentiable function on a nonempty open set X ⊂ Rⁿ. Then f is strictly pseudo-invex with respect to η at u ∈ X on X if there exists η : X × X → Rⁿ such that, for all x ∈ X, x ≠ u,

∇f(u)η(x, u) ≧ 0 ⟹ f(x) − f(u) > 0.   (4)

Definition 5. Let f : X → Rᵏ be a differentiable function on a nonempty open set X ⊂ Rⁿ. Then f is quasi-invex (quasi-incave) with respect to η at u ∈ X on X if there exists η : X × X → Rⁿ such that, for all x ∈ X,

f(x) − f(u) ≦ 0 ⟹ ∇f(u)η(x, u) ≦ 0,
(f(x) − f(u) ≧ 0 ⟹ ∇f(u)η(x, u) ≧ 0).   (5)

If implication (5) holds for any u ∈ X, then f is quasi-invex (quasi-incave) with respect to η on X.

The following two propositions follow immediately from the definitions.

Theorem 6. f : X → Rᵏ is quasi-invex with respect to η at u on X and pseudo-incave with respect to η at u on X if and only if, for any x ∈ X,

f(x) ≦ f(u) ⟺ ∇f(u)η(x, u) ≦ 0.   (6)

Theorem 7. f : X → Rᵏ is quasi-incave with respect to η at u on X and pseudo-invex with respect to η at u on X if and only if, for any x ∈ X,

f(x) ≧ f(u) ⟺ ∇f(u)η(x, u) ≧ 0.

We consider the multiobjective programming problem

f(x) := (f₁(x), …, fₖ(x)) → min
subject to g(x) := (g₁(x), …, gₘ(x)) ≦ 0,     (VP)

where f : X → Rᵏ and g : X → Rᵐ are differentiable functions on a nonempty open set X ⊂ Rⁿ. We will refer to (VP) as the original multiobjective optimization problem.


Let

D := {x ∈ X : gⱼ(x) ≤ 0, j = 1, …, m}

denote the set of all feasible solutions of (VP).
Unlike single-objective problems, in which there may exist an optimal solution in the sense that it minimizes the objective function, in a multiobjective programming problem there does not necessarily exist a point which is optimal for all objectives. Consequently, the concept of optimality for single-objective optimization problems cannot be applied directly to (VP). The concept of an ideal point, one that minimizes each objective, is also, in general, not feasible. For such multicriteria optimization problems, the solution is defined in terms of a (weak) Pareto optimal solution (a (weakly) efficient solution) [14] in the following sense:

Definition 8. A feasible point x̄ is said to be a Pareto optimal solution (efficient solution) for (VP) if and only if there exists no x ∈ D such that

f(x) ≤ f(x̄).

Definition 9. A feasible point x̄ is said to be a weak Pareto optimal solution (weakly efficient solution, weak minimum) for (VP) if and only if there exists no x ∈ D such that

f(x) < f(x̄).
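As a purely computational aside (not part of the paper), Definitions 8 and 9 can be tested directly on a finite sample of feasible points. The sketch below uses a hypothetical bi-objective function on the box D = [0, 1]² to illustrate the dominance relations f(x) ≤ f(x̄) and f(x) < f(x̄).

```python
import numpy as np

# Illustrative sketch (not from the paper): testing Definitions 8 and 9 on a
# finite sample of feasible points.  "dominates" encodes f(x) <= f(xbar) with
# f(x) != f(xbar); "strictly_dominates" encodes f(x) < f(xbar) componentwise.
def dominates(fx, fxbar):
    return bool(np.all(fx <= fxbar) and np.any(fx < fxbar))

def strictly_dominates(fx, fxbar):
    return bool(np.all(fx < fxbar))

def is_pareto(xbar, sample, f):
    return not any(dominates(f(x), f(xbar)) for x in sample)

def is_weak_pareto(xbar, sample, f):
    return not any(strictly_dominates(f(x), f(xbar)) for x in sample)

# hypothetical bi-objective function on the sampled box D = [0, 1]^2
f = lambda x: np.array([x[0] ** 2 + x[1], (x[0] - 1.0) ** 2 + x[1]])
grid = np.linspace(0.0, 1.0, 21)
sample = [np.array([a, b]) for a in grid for b in grid]
xbar = np.array([0.0, 0.0])
print(is_pareto(xbar, sample, f), is_weak_pareto(xbar, sample, f))   # True True
```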

It is easy to verify that every Pareto optimal solution is a weak Pareto optimal solution.
It is well known (see, for example, [6,8,16,19]) that the Karush–Kuhn–Tucker conditions are necessary conditions for optimality in such vector optimization problems.

Theorem 10. Let x̄ be a (weak) Pareto optimal solution of problem (VP) and let a suitable constraint qualification [3] be satisfied at x̄. Then there exist λ ∈ Rᵏ and ξ ∈ Rᵐ such that

λ∇f(x̄) + ξ∇g(x̄) = 0,   (7)

ξg(x̄) = 0,   (8)

λ ≥ 0, ξ ≧ 0.   (9)

3. An equivalent vector optimization problem and optimality conditions

Let x̄ be a feasible solution of (VP). We consider the following η-approximated vector optimization problem (VPη(x̄)):

(f₁(x̄) + ∇f₁(x̄)η(x, x̄), …, fₖ(x̄) + ∇fₖ(x̄)η(x, x̄)) → min
subject to gⱼ(x̄) + ∇gⱼ(x̄)η(x, x̄) ≤ 0, j = 1, …, m,     (VPη(x̄))

where f, g, X are defined as in (VP) and η is a vector-valued function η : X × X → Rⁿ. We denote by η(·, x̄) the function x → η(x, x̄). Throughout the paper we assume that η(·, x̄) is differentiable at the point x = x̄ with respect to the first component and, moreover, satisfies η(x, x̄) ≠ 0 for some x ∈ D such that x ≠ x̄.


Let

D(x̄) := {x ∈ X : gⱼ(x̄) + ∇gⱼ(x̄)η(x, x̄) ≤ 0, j = 1, …, m}

denote the set of all feasible solutions of (VPη(x̄)).
Note that if some extra invex-type conditions are imposed on the constraint function g, then the sets of all feasible solutions of (VP) and (VPη(x̄)) are the same. Indeed, we have the following theorem:

Theorem 11. Let x̄ be a feasible point of (VP) such that g(x̄) = 0. Further, we assume that the constraint function g is quasi-invex with respect to η at x̄ on D and pseudo-incave with respect to η at x̄ on D. Then the sets of feasible solutions of (VP) and (VPη(x̄)) are the same.

Proof. Follows from Theorem 6 and from the forms of the problems (VP) and (VPη(x̄)). □
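The construction of (VPη(x̄)) is purely mechanical: each objective and each constraint is replaced by its η-linearization at x̄. The following sketch is an illustration under assumed data, not the paper's method; the sample functions and the kernel η(x, u) = x − u are hypothetical, and gradients are taken by finite differences.

```python
import numpy as np

# Illustrative sketch (not from the paper): given callables for f, g, and eta,
# build the eta-approximated objectives and constraints of (VP_eta(xbar)) by
# linearizing each component of f and g at the fixed feasible point xbar.
def numerical_grad(h, x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    return np.array([(h(x + eps * e) - h(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])

def eta_approximation(f_list, g_list, eta, xbar):
    """Return (objectives, constraints) of (VP_eta(xbar)) as lists of callables."""
    objs = [lambda x, fi=fi: fi(xbar) + numerical_grad(fi, xbar) @ eta(x, xbar)
            for fi in f_list]
    cons = [lambda x, gj=gj: gj(xbar) + numerical_grad(gj, xbar) @ eta(x, xbar)
            for gj in g_list]
    return objs, cons

# hypothetical data: a two-variable problem and the simplest kernel eta(x, u) = x - u
f_list = [lambda x: x[0] ** 2 + x[1] ** 2, lambda x: (x[0] - 1.0) ** 2]
g_list = [lambda x: x[0] + x[1] - 1.0]
eta = lambda x, u: np.asarray(x) - np.asarray(u)
objs, cons = eta_approximation(f_list, g_list, eta, np.array([0.0, 0.0]))
print([phi(np.array([0.5, 0.25])) for phi in objs], cons[0](np.array([0.5, 0.25])))
```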

Now, we establish the equivalence between the vector optimization problems (VP) and (VPη(x̄)); that is, we prove that if x̄ is a (weak) Pareto optimal solution of the original multiobjective programming problem (VP), then it is also a (weak) Pareto optimal solution of its η-approximated vector optimization problem (VPη(x̄)), and conversely, if x̄ is a (weak) Pareto optimal solution of the η-approximated vector optimization problem (VPη(x̄)), then it is also a (weak) Pareto optimal solution of the original multiobjective programming problem (VP). It turns out that, to establish this equivalence, we need not assume that the sets of all feasible solutions of problems (VP) and (VPη(x̄)), respectively, are the same.
To prove the equivalence between the original multiobjective programming problem and its associated modified vector optimization problem, Antczak [1] used the following restriction imposed on the function η:
Condition (A). It will be said that η satisfies Condition (A) when η(·, x̄) is a differentiable function at the point x = x̄ with respect to the first component and satisfies the following conditions: η(x̄, x̄) = 0 and ηₓ(x̄, x̄) = α · 1, where ηₓ(x̄, x̄) denotes the derivative of η(·, x̄) at the point x = x̄, and α is some positive real number.

Remark 12. In [1], Antczak introduced Condition (A) to prove the equivalence between the original multiobjective programming problem and its associated vector optimization problem with a modified objective function. However, in this paper, we prove the equivalence between the original multiobjective programming problem (VP) and its associated η-approximated vector optimization problem (VPη(x̄)) without Condition (A). We use only one of the relations from Condition (A), namely η(x̄, x̄) = 0. This is useful from the practical point of view, since weaker restrictions imposed on η extend the class of functions η with respect to which all functions involved in the original multiobjective programming problem (VP) are invex at x̄ on the set of all feasible solutions D.

Now, we show that the η-approximation method introduced by Antczak [2] in the scalar case can also be used in the more general case of multiobjective programming problems.


As for the original multiobjective programming problem, the Karush–Kuhn–Tucker conditions are necessary optimality conditions for the η-approximated vector optimization problem (VPη(x̄)), and they have the following form:

Theorem 13. Let x̄ be a (weak) Pareto optimal solution of (VPη(x̄)) and let a suitable constraint qualification [3] be satisfied at x̄. Then there exist λ ∈ Rᵏ and ξ ∈ Rᵐ such that

(λ∇f(x̄) + ξ∇g(x̄))ηₓ(x̄, x̄) = 0,   (10)

ξ(g(x̄) + ∇g(x̄)η(x̄, x̄)) = 0,   (11)

λ ≥ 0, ξ ≧ 0.   (12)

Remark 14. In [2], Antczak proved that the Karush–Kuhn–Tucker necessary optimality conditions for the original scalar programming problem and its associated η-approximated scalar optimization problem have the same form if the function η is assumed to satisfy Condition (A). It is not difficult to see that the same result also holds in the vectorial case.

Now, we prove that a (weak) Pareto optimal solution x̄ of the original multiobjective programming problem (VP) is also (weak) Pareto optimal in its associated η-approximated vector optimization problem (VPη(x̄)).

Theorem 15. Let x̄ be a (weak) Pareto optimal solution of (VP) and let some suitable constraint qualification [3] be fulfilled at x̄. If η satisfies η(x̄, x̄) = 0, then x̄ is also (weak) Pareto optimal in (VPη(x̄)).

Proof. Assume that x̄ is a weak Pareto optimal solution of (VP) and that some suitable constraint qualification is fulfilled at x̄. Then there exist λ ∈ Rᵏ and ξ ∈ Rᵐ such that the Karush–Kuhn–Tucker necessary optimality conditions (7)–(9) are satisfied.
We proceed by contradiction. Suppose that x̄ is not a weak Pareto optimal solution of (VPη(x̄)). This implies that there exists x feasible for (VPη(x̄)) such that

f(x̄) + ∇f(x̄)η(x, x̄) < f(x̄) + ∇f(x̄)η(x̄, x̄).   (13)

Thus, the condition η(x̄, x̄) = 0 gives

∇f(x̄)η(x, x̄) < 0,

and so, by λ ≥ 0,

λ∇f(x̄)η(x, x̄) < 0.   (14)

Since x ∈ D(x̄), then by the Karush–Kuhn–Tucker optimality condition (8) we obtain

ξ∇g(x̄)η(x, x̄) ≤ 0.   (15)

By (14) and (15), we get the inequality

[λ∇f(x̄) + ξ∇g(x̄)]η(x, x̄) < 0,

which contradicts (7). Hence, x̄ is a weak Pareto optimal solution of (VPη(x̄)).


The proof for efficiency is similar (but the Lagrange multiplier λ should then be assumed to satisfy λ > 0). □

Remark 16. Note that we establish Theorem 15 without any assumption on the class of functions to which the functions involved in problems (VP) and (VPη(x̄)), respectively, belong. However, we assume that a suitable constraint qualification is fulfilled at the (weak) Pareto optimal solution x̄ of problem (VP). It turns out that this assumption is essential for Theorem 15 and cannot be omitted. To illustrate this, we give the following example.

Example 17. We consider the following nonlinear multiobjective programming problem:

f(x) = (exp(x² − 2x + 3), arctan x) → min
subject to g(x) ≤ 0, where g(x) = x² for x ≤ 0 and g(x) = 0 for x ≥ 0.     (VP)

Note that the set of all feasible solutions is D = {x ∈ R : x ≥ 0}, and, moreover, f and g are differentiable on R. Further, x̄ = 0 and x̄ = 1 are Pareto optimal solutions of the considered multiobjective programming problem (VP). If, for example, we set

η(x, x̄) = (1 + x̄²)(arctan x − arctan x̄),

then, using the η-approximation approach to solve problem (VP), we obtain the following vector optimization problems (VPη(x̄)), respectively:

(e³ − 2e³ arctan x, arctan x) → min, x ∈ R,     (VPη(0))
(e², arctan x) → min, x ∈ R.     (VPη(1))

Note that the introduced approach enlarges the feasible set of the considered vector optimization problem from D to D(x̄) = R. Thus, the above η-approximated vector optimization problems have an unbounded set of feasible solutions and, therefore, neither x̄ = 0 nor x̄ = 1 is a Pareto optimal solution of the associated η-approximated vector optimization problems (VPη(0)) and (VPη(1)), respectively. This follows from the fact that no constraint qualification is fulfilled at the Pareto optimal solutions x̄ of the considered multiobjective programming problem (VP). It is also not difficult to prove from Definition 1 that both the objective function f and the constraint function g are invex at x̄ on D with respect to the function η defined above. Nevertheless, as this example shows, the invexity assumption alone, without a suitable constraint qualification, is not sufficient for the above theorem to hold.
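A small numerical sketch (illustrative only, not part of the paper) reproduces the construction used in Example 17: linearizing f and g at x̄ ∈ {0, 1} with the η given above and confirming that the η-approximated constraint vanishes (numerically) for every x, so that D(x̄) = R.

```python
import numpy as np

# Illustrative sketch (not from the paper): reproducing the construction of
# (VP_eta(0)) and (VP_eta(1)) in Example 17 with finite-difference derivatives,
# and checking that the eta-approximated constraint is (numerically) zero for
# every x, so that the feasible set of the approximated problem is all of R.
f = lambda x: np.array([np.exp(x**2 - 2.0*x + 3.0), np.arctan(x)])
g = lambda x: x**2 if x <= 0 else 0.0
eta = lambda x, xb: (1.0 + xb**2) * (np.arctan(x) - np.arctan(xb))

def d(h, x, eps=1e-6):                                   # central difference
    return (np.asarray(h(x + eps)) - np.asarray(h(x - eps))) / (2.0 * eps)

for xbar in (0.0, 1.0):
    obj = lambda x, xb=xbar: f(xb) + d(f, xb) * eta(x, xb)   # approximated objectives
    con = lambda x, xb=xbar: g(xb) + d(g, xb) * eta(x, xb)   # approximated constraint
    print(xbar, obj(2.0).round(4), [round(con(x), 6) for x in (-5.0, 0.0, 5.0)])
```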

Now, under an invexity assumption, with respect to the same function η, imposed on the objective function f and the constraint function g, we establish that a (weak) Pareto optimal solution x̄ of the η-approximated vector optimization problem (VPη(x̄)) is also (weak) Pareto optimal in the original multiobjective programming problem (VP). To prove this result, we also assume that η satisfies the condition η(x̄, x̄) = 0 and, moreover, that some suitable constraint qualification is fulfilled at x̄.


Theorem 18. Let x̄ be a (weak) Pareto optimal solution of (VPη(x̄)) and let some suitable constraint qualification [3] be satisfied at x̄. We also assume that f and g are invex at x̄ on D with respect to η and, moreover, that η(x̄, x̄) = 0. Then x̄ is also (weak) Pareto optimal in (VP).

Proof. By assumption, g is invex with respect to η at x̄ on D. Hence, any feasible solution of (VP) is also feasible for (VPη(x̄)), that is, D ⊂ D(x̄). Let x̄ be Pareto optimal in (VPη(x̄)). Then, by Definition 8, there is no x ∈ D(x̄) such that

f(x̄) + ∇f(x̄)η(x, x̄) ≤ f(x̄) + ∇f(x̄)η(x̄, x̄).   (16)

Since D ⊂ D(x̄), it follows that there is no x ∈ D such that (16) is satisfied. Then, using η(x̄, x̄) = 0 in (16), we have that there is no x ∈ D such that

∇f(x̄)η(x, x̄) ≤ 0.   (17)

We proceed by contradiction. Suppose that x̄ is not Pareto optimal in (VP). Then, by Definition 8, there exists x ∈ D such that

f(x) ≤ f(x̄).   (18)

By assumption, f is invex with respect to η at x̄ on D. Therefore, (18) together with the invexity of f gives the inequality

∇f(x̄)η(x, x̄) ≤ 0,   (19)

which contradicts (17). Hence, the theorem is proved.
The proof for x̄ a weak Pareto optimal solution is similar. □

In view of Theorems 15 and 18, if we assume that f and g satisfy some invex-type conditions with respect to the same function η at x̄ on the set of all feasible solutions D, that η satisfies the relation η(x̄, x̄) = 0, and that some suitable constraint qualification is satisfied at x̄, then the vector optimization problems (VP) and (VPη(x̄)) are equivalent in the sense discussed above.
Now, we prove this result under weakened assumptions on the functions involved. This follows from the proof of Theorem 18, in which, in fact, we only used assumptions of generalized invexity (that is, pseudo-invexity). Therefore, we replace the invexity assumption on f by (pseudo-invexity) strict pseudo-invexity to prove the relationship between (weak) Pareto optimal points of the original multiobjective programming problem (VP) and its η-approximated vector optimization problem (VPη(x̄)). Also, the invexity assumption on g can be replaced by quasi-invexity, as the following theorem shows:

Theorem 19. Let x̄ be a feasible point of (VPη(x̄)) such that g(x̄) = 0. Further, we assume that f is (pseudo-invex) strictly pseudo-invex with respect to η at x̄ on D, that g is quasi-invex with respect to η at x̄ on D, and, moreover, that η satisfies the condition η(x̄, x̄) = 0. If x̄ is a (weak) Pareto optimal point of (VPη(x̄)), then x̄ is also (weak) Pareto optimal in (VP).

Remark 20. If the function η : X × X → Rⁿ is linear with respect to the first component, then (VPη(x̄)) is a linear vector optimization problem.


Now, we give an example of a nonlinear multiobjective programming problem (VP) which, by using the approach discussed in this paper, is transformed into a linear vector optimization problem (VPη(x̄)).

Example 21. We consider the following nonlinear multiobjective programming problem:

f(x) = (x₂³ arctan(x₁²) + x₁² − x₁ + 2, 2x₂² ln(x₁² + 1) + (1/2)x₂ + 1) → min
g₁(x) = ln(x₁² − x₁ + 1) ≤ 0,
g₂(x) = −ln(x₂/(x₁ + 1) + 1) ≤ 0.     (VP)

Note that D = {(x₁, x₂) ∈ R² : 0 ≤ x₁ ≤ 1 ∧ x₂ ≥ 0} and x̄ = (0, 0) is a Pareto optimal point of the considered nonlinear multiobjective programming problem. It is not difficult to prove that f and g are invex with respect to the same function η at x̄ on D, defined, for example, by

η(x, x̄) = [α(x₁ − x̄₁), x₂ − x̄₂]ᵀ,

where α is a positive real number such that α ≤ 1.
Now, using the approach discussed in the paper, we construct the associated vector optimization problem (VPη(x̄)) by transforming at x̄ both the objective function f and the constraint function g. Thus, we obtain the following linear vector optimization problem:

(2 − αx₁, 1 + (1/2)x₂) → min
−αx₁ ≤ 0,
−x₂ ≤ 0.     (VPη(x̄))

It is not difficult to see that, similarly as in the original multiobjective programming problem, x̄ = (0, 0) is also a Pareto optimal solution of the above vector optimization problem, that is, of the associated η-approximated vector optimization problem (VPη(x̄)).
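The linearization in Example 21 can be verified symbolically. The sketch below (assuming SymPy is available; it is an illustration, not part of the paper) recovers the stated linear objectives and constraints at x̄ = (0, 0).

```python
import sympy as sp

# Illustrative verification of the construction in Example 21 (a sketch, not part
# of the paper): symbolically linearize f and g at xbar = (0, 0) with
# eta(x, xbar) = (alpha*(x1 - xbar1), x2 - xbar2) and recover the linear problem.
x1, x2, alpha = sp.symbols('x1 x2 alpha', real=True)
xbar = {x1: 0, x2: 0}

f = [x2**3 * sp.atan(x1**2) + x1**2 - x1 + 2,
     2 * x2**2 * sp.log(x1**2 + 1) + sp.Rational(1, 2) * x2 + 1]
g = [sp.log(x1**2 - x1 + 1),
     -sp.log(x2 / (x1 + 1) + 1)]
eta = sp.Matrix([alpha * x1, x2])          # eta(x, xbar) at xbar = (0, 0)

def linearize(h):
    grad = sp.Matrix([sp.diff(h, v) for v in (x1, x2)]).subs(xbar)
    return sp.simplify(h.subs(xbar) + (grad.T * eta)[0])

print([linearize(h) for h in f])   # expected: [2 - alpha*x1, x2/2 + 1]
print([linearize(h) for h in g])   # expected: [-alpha*x1, -x2]
```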

Remark 22. As the example above shows, one can construct for the original multiobjective programming problem (VP) more than one associated η-approximated vector optimization problem (VPη(x̄)). Moreover, every constructed η-approximated vector optimization problem (VPη(x̄)) is equivalent to the original multiobjective programming problem (VP) in the sense discussed in the paper. This property is useful from the practical point of view.

Remark 23. The assumption that the function η satisfies the condition η(x̄, x̄) = 0 is essential for the equivalence between the vector optimization problems (VP) and (VPη(x̄)) in the sense discussed in the paper. In the example below, we show that, when this condition does not hold, there is no equivalence between (VP) and (VPη(x̄)).


Example 24. We consider the following nonlinear multiobjective programming problem:

f(x) = (x₂, 2x₂) → min
g₁(x) = exp(x₁ − 1) − 2 ln x₁ − x₂ ≤ 0,
g₂(x) = −ln x₁ ≤ 0.

Note that D = {(x₁, x₂) ∈ R² : x₁ ≥ 1 ∧ x₂ ≥ 1} and x̄ = (1, 1) is a Pareto optimal point of the considered multiobjective programming problem. Further, it can be proved that f and g are strictly invex at x̄ with respect to the same function η defined by

η(x, x̄) = [x₁ + 3, x₂ − 2]ᵀ.

It is not difficult to see that the condition η(x̄, x̄) = 0 is not fulfilled by the function η defined above. For the considered nonlinear multiobjective programming problem we construct its associated η-approximated vector optimization problem (VPη(x̄)). Thus, we have

(−1 + x₂, −2 + 2x₂) → min
−x₁ − x₂ − 1 ≤ 0,
−x₁ − 3 ≤ 0.     (VPη(x̄))

Note that the above vector optimization problem is unbounded on its set of feasible solutions. Thus, the considered multiobjective programming problem (VP) and its associated η-approximated vector optimization problem (VPη(x̄)) are not equivalent in the sense discussed in the paper.
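A short numerical check (illustrative only, not part of the paper) makes the failure of equivalence in Example 24 concrete: along the feasible ray x = (t, −t), t ≥ 0, of (VPη(x̄)), both η-approximated objectives decrease without bound.

```python
import numpy as np

# Illustrative check for Example 24 (a sketch, not from the paper): along the
# feasible ray x = (t, -t), t >= 0, of (VP_eta(xbar)) with xbar = (1, 1), both
# eta-approximated objectives tend to -infinity, so the approximated problem has
# no (weak) Pareto optimal solution and the equivalence with (VP) fails.
objectives = lambda x: np.array([x[1] - 1.0, 2.0 * x[1] - 2.0])
feasible = lambda x: (-x[0] - x[1] - 1.0 <= 0.0) and (-x[0] - 3.0 <= 0.0)

for t in (0.0, 10.0, 100.0, 1000.0):
    x = (t, -t)
    print(t, feasible(x), objectives(x))
```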

4. Conclusion

In this paper, we introduce a new approach for solving a strongly nonlinear (nonconvex) differentiable multiobjective programming problem. The formulation of the introduced η-approximated vector optimization problem requires the Lagrange multipliers of the original multiobjective programming problem. (Although, as follows from Theorem 13, the Lagrange multipliers for an η-approximated vector optimization problem are the same under Condition (A).) Thus, apparently, one cannot compute with the η-approximated vector optimization problem without first computing the original multiobjective programming problem. This follows from the formulation of the introduced η-approximation approach, since we need a point x̄ which is suspected to be optimal. Then, an η-approximated vector optimization problem can be constructed at such a selected point. In general, we obtain a simpler vector optimization problem to solve than the original nonlinear multiobjective programming problem. Hence, to solve the constructed (in most cases linear) vector optimization problem, we can apply known computational procedures. Moreover, there may exist more than one associated η-approximated vector optimization problem (Example 21 and Remark 22). These properties are also useful from the practical point of view. It follows from the above that we are in a position to solve strongly nonlinear nonconvex multiobjective programming problems by using computational procedures developed, for example, for solving linear vector optimization problems.
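As an illustration of this last point (a sketch under the assumption that SciPy is available; the problem data are hypothetical and not taken from the paper), a linear vector optimization problem of the kind produced by the η-approximation can be treated by weighted-sum scalarization: for positive weights, a minimizer of the scalarized linear program is a weakly efficient point.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch (not from the paper): solving a linear vector optimization
# problem by weighted-sum scalarization.  For positive weights, a minimizer of the
# scalarized LP is a weakly efficient point of the linear vector problem.
# Hypothetical data: minimize (c1.x, c2.x) subject to A x <= b, 0 <= x <= 10.
c1 = np.array([1.0, -2.0])
c2 = np.array([-3.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([8.0])

for w in (0.2, 0.5, 0.8):                       # weight on the first objective
    c = w * c1 + (1.0 - w) * c2                 # scalarized objective
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 10), (0, 10)])
    print(w, res.x, (c1 @ res.x, c2 @ res.x))
```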


References

[1] T. Antczak, A new approach to multiobjective programming with a modified objective function, J. Global Optim. 27 (2003) 485–495.
[2] T. Antczak, An η-approximation approach for nonlinear mathematical programming problems involving invex functions, Numer. Funct. Anal. Optim. 25 (5&6) (2004) 423–438.
[3] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, New York, 1991.
[4] S. Brumelle, Duality for multiple objective convex programs, Math. Oper. Res. 6 (1981) 159–172.
[5] B.D. Craven, Invex functions and constrained local minima, Bull. Austral. Math. Soc. 24 (1981) 357–366.
[6] B.D. Craven, Quasimin and quasisaddlepoint for vector optimization, Numer. Funct. Anal. Optim. 11 (1990) 45–54.
[7] M.A. Geoffrion, Proper efficiency and the theory of vector maximization, J. Math. Anal. Appl. 22 (1968) 613–630.
[8] G. Giorgi, A. Guerraggio, The notion of invexity in vector optimization: smooth and nonsmooth case, in: J.P. Crouzeix, J.E. Martinez-Legaz, M. Volle (Eds.), Generalized Convexity, Generalized Monotonicity, Proceedings of the Fifth Symposium on Generalized Convexity, Luminy, France, Kluwer Academic Publishers, Dordrecht, 1997.
[9] M.A. Hanson, On sufficiency of the Kuhn–Tucker conditions, J. Math. Anal. Appl. 80 (1981) 545–550.
[10] V. Jeyakumar, B. Mond, On generalized convex mathematical programming, J. Austral. Math. Soc. Ser. B 34 (1992) 43–53.
[11] R.N. Kaul, S.K. Suneja, M.K. Srivastava, Optimality criteria and duality in multiple objective optimization involving generalized invexity, J. Optim. Theory Appl. 80 (1994) 465–482.
[12] D.T. Luc, Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems, vol. 319, Springer, Berlin, 1989.
[13] R. Osuna-Gomez, A. Rufian-Lizana, P. Ruiz-Canales, Invex functions and generalized convexity in multiobjective programming, J. Optim. Theory Appl. 98 (1998) 651–661.
[14] V. Pareto, Cours d'économie politique, Rouge, Lausanne, 1896.
[15] P. Ruiz-Canales, A. Rufián-Lizana, A characterization of weakly efficient points, Math. Programming 68 (1995) 205–212.
[16] C. Singh, Optimality conditions in multiobjective differentiable programming, J. Optim. Theory Appl. 53 (1987) 115–123.
[17] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, New York, 1986.
[18] T. Weir, A note on invex functions and duality in multiple objective optimization, Opsearch 25 (1988) 98–104.
[19] T. Weir, B. Mond, B.D. Craven, On duality for weakly minimized vector valued optimization problems, Optimization 17 (1986) 711–721.