

Theory and Methodology

On the numerical treatment of linearly constrained semi-infinite optimization problems ¹

T. León a, S. Sanmatías b, E. Vercher a,*

a Departamento de Estadística e Investigación Operativa, Universitat de València, C/ Doctor Moliner 50, 46100 Burjassot, Valencia, Spain
b Departamento de Estadística e Investigación Operativa, Universidad Politécnica de Valencia, Camino de Vera s/n, Valencia, Spain

Received 17 February 1998; accepted 24 November 1998

Abstract

We consider the application of two primal algorithms to solve linear semi-infinite programming problems depending on a real parameter. Combining a simplex-type strategy with a feasible-direction scheme we obtain a descent algorithm which enables us to manage the degeneracy of the extreme points efficiently. The second algorithm runs a feasible-direction method first and then switches to the purification procedure. The linear programming subproblems that yield the search direction involve only a small subset of the constraints. These subsets are updated at each iteration using a multi-local optimization algorithm. Numerical test examples, taken from the literature in order to compare the numerical effort with other methods, show the efficiency of the proposed algorithms. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Linear programming; Semi-infinite optimization; Pivoting rules; Feasible-direction methods

1. Introduction

In this paper we study the problem (P) that seeks the minimum of a linear objective function $c^T x$ subject to a system of linear constraints on $x \in \mathbb{R}^n$, expressed as $f_i(s)^T x - b_i(s) \ge 0$ for all $s \in S_i$, $S_i$ being a closed interval in $\mathbb{R}$, $i \in q$, and $l \le x \le u$, where $l$ and $u$ are given vectors of bounds and $q := \{1, \ldots, q\}$. The term semi-infinite programming (SIP) derives from the property that $x$ denotes finitely many variables, while each $S_i$ indexes an infinite set of linear constraints.

There are a number of motivating applications of semi-infinite optimization in a wide variety of fields, such as Chebyshev and one-sided $L_1$-approximation, computing solutions of monotonic linear boundary value problems, experimental design in regression, and engineering design (see, for instance, the excellent review paper by Hettich and Kortanek [11]). All of them involve specifications that have to be satisfied over a set of values of an independent parameter such as time, frequency, temperature, etc.

European Journal of Operational Research 121 (2000) 78–91. www.elsevier.com/locate/orms

* Corresponding author.
¹ This work was partially supported by the Ministerio de Educación y Ciencia, Spain, DGICYT under the grant PB93-0703.

0377-2217/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0377-2217(99)00042-9


With regard to the methods that solve SIP problems, it is appropriate to point out that a number of them replace the semi-infinite problem by a sequence of finite ones which are solved by classical constrained optimization techniques. They can thus be classified according to the way the finite problems are generated. The discretization methods (see, for instance, Refs. [9,13,21,25]) deal with finite constrained programs, are globally convergent and are usually external methods, so they do not have a feasible solution until the optimum is reached. In fact, their optimality check is the feasibility of the iterate. On the other hand, in methods based on local reduction, the infinitely many constraints are replaced by a finite set of locally determined restrictions. The reduced problems are usually solved using SQP techniques ([7], for instance). A common approach yielding global convergence is the use of SQP techniques in conjunction with penalty functions [2,24,29]. Based on the theory of consistent approximations, Polak has developed some descent methods that treat the SIP problem in the context of non-differentiable optimization (see his book [23] and the references given there for this approach).

For the linearly constrained case some specifically designed algorithms have been developed. The finite linear programs that arise along the application of the discretization, the primal–dual, and the cutting plane methods have been solved by the simplex method or using interior point techniques ([3,10,12,30], for instance). In addition, some simplex-like methods are available (Refs. [1,19] for the primal problem; Refs. [4,5,27] for the dual problem).

The advantages of extreme point solution methods for linear semi-infinite problems have been exhaustively discussed in Ref. [1], although these discussions do not include any comparison with interior point approaches. A complementary reference is the paper by Reemtsen and Görner [26], which reviews the state-of-the-art numerical methods for general smooth semi-infinite programming problems.

Our method for solving (P) is based on the infinite-dimensional setting of the LP problem given by Nash [20]. The author defines, in an algebraic framework, the fundamental objects and operations of linear programming posed in infinite-dimensional spaces. In this way, he extends the definitions of degenerate and non-degenerate basic feasible solutions, reduced costs and pivoting of finite linear programming to the infinite-dimensional case. Let us briefly recall some of these definitions. Consider the following problem:

$$(\mathrm{LP})\quad \min\ \langle x, c^*\rangle \quad \text{s.t.}\ Ax = b,\ x \ge \theta,\ x \in X,$$

where $A: X \to Z$ is a linear map, $X$ and $Z$ are real vector spaces and $\theta$ is the zero element of $X$. For $c^* \in X^*$, the dual of $X$, denote the image of $x$ under $c^*$ by $\langle x, c^*\rangle$. According to his notation, for our problem we have that

$$X = \Big(\prod_{j=1,\ldots,n} [l_j, u_j]\Big) \times \prod_{i=1,\ldots,q} C(S_i),$$

where

$$\prod_{i \in q} C(S_i) = \{v = (v_1, v_2, \ldots, v_q):\ v_i \in C(S_i)\}$$

and the linear map is

$$A: X \to \prod_{i \in q} C(S_i),\qquad (x, v_1, \ldots, v_q) \mapsto \big(f_1(\cdot)^T x - v_1(\cdot), \ldots, f_q(\cdot)^T x - v_q(\cdot)\big),$$

$C(S_i)$ being the space of continuous functions on $S_i$. Let $N(A) = \{(x, v) \in X:\ v_i(s) = f_i(s)^T x,\ i \in q\}$ be the null space of $A$ and $B(x, v) = \{(\xi, t) \in X:\ \exists\, \lambda > 0,\ \lambda \in \mathbb{R},\ \text{with}\ (x, v) + \lambda(\xi, t) \in X\ \text{and}\ v + \lambda t \ge 0\}$ the solidification of $(x, v)$. A result developed by Nash ensures that if an optimal solution $(x^*, v^*) \in X$ exists such that the dimension of $B(x^*, v^*) \cap N(A)$ is finite, then there also exists an optimal extreme point. In our case, the finiteness of the dimension of $N(A)$ is easily checked, and so the optimum of the problem is reached at an extreme point.

The methods that we propose to solve (P) combine a simplex-type pivoting strategy with a feasible-direction scheme which enables us to manage the eventual degeneracy of the extreme points efficiently. They also include a purification phase to proceed from a feasible solution to an extreme point.

2. Preliminary results and definitions

We are concerned with the following problem:




$$(\mathrm{P})\quad \min\ c^T x\quad \text{s.t.}\ f_i(s)^T x \ge b_i(s),\ s \in S_i,\ i \in q;\qquad l_j \le x_j \le u_j,\ j = 1, 2, \ldots, n,$$

where the $n$ components of each function $f_i(\cdot)$ and the functions $b_i(\cdot)$ are $n$-times continuously differentiable on $S_i = [a_i, b_i]$ for $i \in q$, and any or all of the lower (respectively upper) bounds may be $-\infty$ ($+\infty$). For the sake of simplicity, throughout the paper we shall assume that $S_i = S$ for all $i$. In order to deal with the standard form of (P), it is necessary to introduce $q$ slack functions $z_i(x, \cdot) := f_i(\cdot)^T x - b_i(\cdot)$, $s \in S$. We assume that (P) is consistent and denote by $F$ the feasible set. For each $(x, z) \in F$, where $z = (z_1, z_2, \ldots, z_q)$, we have that $z_i(x, s) \ge 0$ for all $s \in S$. Since our algorithms treat the bounds on the variables as constraints, we define the set of active indices, $\mathrm{constr}(x)$, as the union of all the roots of the slack functions $Z_i(x)$ together with the indices for which the bounds are achieved:

$$\mathrm{constr}(x) = \bigcup_{i \in q} Z_i(x) \cup J_l(x) \cup J_u(x),$$

where

$$Z_i(x) = \{s \in S:\ z_i(x, s) = 0\},\qquad J_l(x) = \{j:\ l_j = x_j,\ 1 \le j \le n\}\quad \text{and}\quad J_u(x) = \{j:\ x_j = u_j,\ 1 \le j \le n\}.$$
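As a small illustration (not part of the original paper), the set $\mathrm{constr}(x)$ can be approximated on a grid. The one-constraint instance, bounds and tolerances below are our own assumptions:

```python
import numpy as np

# Hypothetical single-constraint instance (q = 1) on S = [0, 1]:
# f(s) = (3 - 7s, -4s), b(s) = 4s^2 - 11s + 7; bounds l = (-10, -10), u = (10, 10).
def slack(x, s):
    return (3 - 7 * s) * x[0] - 4 * s * x[1] - (4 * s**2 - 11 * s + 7)

def active_indices(x, l, u, tol=1e-8):
    """Grid approximation of constr(x) = Z(x) U J_l(x) U J_u(x)."""
    s = np.linspace(0.0, 1.0, 100001)
    z = slack(x, s)
    Z = s[np.abs(z) < tol]                              # roots of the slack function
    Jl = [j for j in range(len(x)) if abs(x[j] - l[j]) < tol]   # lower bounds hit
    Ju = [j for j in range(len(x)) if abs(x[j] - u[j]) < tol]   # upper bounds hit
    return Z, Jl, Ju

# at x = (2.5, -2.5) the slack is -4s^2 + 3.5s + 0.5, vanishing only at s = 1
Z, Jl, Ju = active_indices(np.array([2.5, -2.5]), l=[-10.0, -10.0], u=[10.0, 10.0])
```

In an actual implementation the roots would of course come from the multi-local optimization routine of Ref. [16], not from a raw grid.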

Assumption 1. We assume that the set of active constraints has finite cardinality for every feasible solution $(x, z)$, and we denote $\bigcup_{i \in q} Z_i(x) = \{s_1, s_2, \ldots, s_m\}$, $|J_l(x)| = m_l$ and $|J_u(x)| = m_u$.

Definition. Let $(x, z) \in F$, with $\mathrm{constr}(x) \ne \emptyset$. Let $s_p \in Z_i(x)$; then $d(s_p^i)$ denotes the degree of $s_p$ as a zero of the slack function $z_i(x, \cdot)$, that is, $d(s_p^i)$ is an integer such that

$$z_i^{(k)}(x, s_p) = 0,\ k = 0, 1, \ldots, d(s_p^i);\qquad z_i^{(k)}(x, s_p) \ne 0,\ k = d(s_p^i) + 1.$$

The points whose indices belong to $J_l(x)$ or $J_u(x)$ are always of degree zero. And, when it is clear that $s_p \in Z_i(x)$, we write $d(s_p)$ instead of $d(s_p^i)$.
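For polynomial slack data the degree of a zero can be computed directly from successive derivatives. The following sketch is our own illustration, with hypothetical polynomial slacks, and follows the definition above:

```python
import numpy as np

def zero_degree(coeffs, s, tol=1e-9, kmax=10):
    """Degree d(s) of s as a zero of a polynomial slack function:
    the largest d with p^(k)(s) = 0 for k = 0, ..., d."""
    d = -1
    p = np.poly1d(coeffs)
    for k in range(kmax):
        if abs(p(s)) > tol:
            break          # k-th derivative is non-null: degree is k - 1
        d = k
        p = p.deriv()
    return d

# z(x, s) = -4s^2 + 3.5s + 0.5 has a simple zero at s = 1: degree 0
d_simple = zero_degree([-4.0, 3.5, 0.5], 1.0)
# z(x, s) = (s - 0.5)^2 touches zero at s = 0.5; z' also vanishes there: degree 1
d_double = zero_degree([1.0, -1.0, 0.25], 0.5)
```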

We have checked that the adaptation for (P) of Nash's characterizations of extreme points and of degenerate and non-degenerate basic feasible solutions is similar to those obtained by Anderson and Lewis [1]. Hence we do not include the proof of the next theorem.

Theorem 1. Let $(x, z) \in F$. Then $(x, z)$ is an extreme point if and only if $V(x) = \{0_n\}$, where

$$V(x) = \bigcap_{i \in q} \{y \in \mathbb{R}^n:\ f_i^{(k)}(s_p)^T y = 0,\ k = 0, 1, \ldots, d(s_p)\ \text{for each}\ s_p \in Z_i(x)\} \cap \{y \in \mathbb{R}^n:\ (e^j)^T y = 0\ \text{for}\ j \in J_l(x) \cup J_u(x)\}.$$

Moreover, $(x, z)$ is a non-degenerate extreme point if and only if $B(x)$ is invertible, where $B(x)$ is the matrix whose columns are

$$f_1(s_1), \ldots, f_1(s_r),\ f_2(s_{r+1}), \ldots, f_q(s_m), \ldots,\ -e^j, \ldots, e^k, \ldots,\ f_1^{(1)}(s_1), \ldots, f_1^{(d(s_1))}(s_1), \ldots, f_1^{(d(s_r))}(s_r),\ f_2^{(1)}(s_{r+1}), \ldots, f_2^{(d(s_{r+1}))}(s_{r+1}), \ldots,\ f_q^{(1)}(s_m), \ldots, f_q^{(d(s_m))}(s_m),$$

for $j \in J_u(x)$ and $k \in J_l(x)$.

Remarks.

1. We could consider that there are two classes of vectors involved in the definition of $V(x)$: the active gradients and 'their derivatives' as functions of $s$. In the context of linear programming it only makes sense to define a subspace $V(x)$ with the first kind of vectors, and the characterization of extreme points in Theorem 1 would also be valid. Roughly speaking, we would say that an active constraint provides more information in a linear semi-infinite program than in a finite one.

2. The condition $V(x) = \{0_n\}$ is equivalent to the linear independence of the rows of the matrix $B(x)$. If $\#C(x)$ denotes the number of columns of $B(x)$, then $\#C(x) = m + m_l + m_u + \sum_{p=1,\ldots,m} d(s_p)$; in particular, for a non-degenerate extreme point $\#C(x) = n$.

3. Notice that in order to check whether a feasible solution $(x, z)$ is basic, and also to decide about its degeneracy, we need to find the set of global minimizers $Z_i(x)$ of the slack functions.

The following result is an extension of a characterization of optimality for non-degenerate extreme points in Ref. [1]. It is easy to prove that it is a Kuhn–Tucker type optimality condition with the linear independence constraint qualification [17].



Theorem 2. A non-degenerate extreme point $(x, z)$ is optimal if and only if $\lambda \ge 0_{m+m_l+m_u}$ and $\rho = 0_{n-(m+m_l+m_u)}$, where $(\lambda, \rho)^T = B(x)^{-1} c$.

Notice that the optimality condition above is based on the existence of the inverse of the matrix $B(x)$; otherwise we must introduce some other criterion. Instead of defining a specific rule for degenerate extreme points, we decided to look for an optimality criterion useful for feasible solutions with some active constraints that satisfy a regularity condition. The former requirement does not imply any loss of generality, because the only candidates to be optima are the points with this feature. Moreover, in order to establish Karush–Kuhn–Tucker conditions a constraint qualification must be imposed.

Assumption 2. We assume that the Slater condition holds for problem (P), i.e., there exists a point $x^0$ such that $l_j < x_j^0 < u_j$ for $j = 1, 2, \ldots, n$, and $f_i(s)^T x^0 - b_i(s) > 0$ for all $s \in S$, $i \in q$.

This condition is equivalent to the following: at every $x \in F$ with $\mathrm{constr}(x) \ne \emptyset$, there exists a vector $u \in \mathbb{R}^n$ such that $f_i(s)^T u > 0$ for all $s \in Z_i(x)$, $u_j > 0$ if $j \in J_l(x)$ and $u_j < 0$ if $j \in J_u(x)$, which is the Mangasarian–Fromovitz constraint qualification (see, for instance, [17] and [28]). We have also shown that if there exists a non-degenerate extreme point of (P), then the Slater condition follows. Then we use a KKT type theorem:

Theorem 3. Let $(x, z) \in F$ be optimal for (P) and suppose that Assumption 2 holds. Then there exist $t_1, \ldots, t_m \in \bigcup_{i \in q} Z_i(x)$, $j_1, \ldots, j_k \in J_l(x) \cup J_u(x)$ and non-negative numbers $\lambda_1, \lambda_2, \ldots, \lambda_m, \nu_1, \ldots, \nu_k$ such that

$$c = \sum_{i=1,\ldots,r} \lambda_i f_1(t_i) + \cdots + \sum_{i=m-p,\ldots,m} \lambda_i f_q(t_i) + \sum_{j \in J_l(x)} \nu_j e^j + \sum_{j \in J_u(x)} \nu_j (-e^j).$$

3. Algorithmic statements

The first extension of the simplex algorithm to semi-infinite linear programs was developed by Anderson and Lewis [1]. They proposed a primal method where the pivot operation is sometimes replaced by either descent steps or purification steps. For non-degenerate extreme points (Case 1.a below) our proposal coincides with theirs, but they require $Z(x)$ to be in the interior of $S$. For the other cases in Section 3.1, and for degenerate extreme points (Section 3.3), the rules that these authors give are quite different from those that we introduce here. In fact, in the presence of degeneracy, the solution of ordinary LP problems allows us to recognize an optimal solution or to determine feasible descent directions, avoiding the practical disadvantages of using the local reduction properties and orthogonal projections required in Ref. [1].

Next, we present the foundations for the main procedures appearing in the algorithms in Section 4. The simplex pivoting procedures are based on the existence of the inverse of the matrix $B(x)$, and therefore they can only be applied at non-degenerate extreme points. A purification step is necessary if we are interested in obtaining an extreme point, starting from a feasible one and without worsening the objective value. And finally, notice that the descent rules for feasible solutions are more general than the pivoting rules, in the sense that they do not require the current point to be a basic feasible solution.

3.1. Pivoting rules for non-degenerate extreme points

Let us suppose that for $(x, z)$ the optimality condition of Theorem 2 fails: then either $\lambda_p < 0$ for some $p \in \{1, 2, \ldots, m, J_u, J_l\}$ or $\rho \ne 0_{n-(m+m_l+m_u)}$. We consider the two cases separately, and in both of them we generate a descent direction $d$.

Case 1: If $\lambda_p < 0$, we determine a suitable direction from the geometrical meaning of the simplex method pivoting rule: moving along an edge of the feasible region to an adjacent improved extreme point. Let $\lambda_p < 0$ for some $p \in \{1, 2, \ldots, m, J_u, J_l\}$, and let $d = g^p$ be the $p$th row of $B(x)^{-1}$.

(1.a) If $\lambda_p$ is associated with $s_p \in Z_i(x)$, then the following relations hold:
(i) $d^T f_i^{(k)}(s_t) = 0$, for all $s_t \in Z_i(x)$, $t \ne p$, and for all $k$, $0 \le k \le d(s_t)$,
(ii) $d^T f_i(s_p) = 1$,
(iii) $d^T f_i^{(k)}(s_p) = 0$, for all $k$, $1 \le k \le d(s_p)$, and
(iv) $d^T f_r^{(k)}(s_t) = 0$, for all $s_t \in Z_r(x)$, $r = 1, \ldots, q$, $r \ne i$, and for all $k$, $0 \le k \le d(s_t)$.

Page 5: On the numerical treatment of linearly constrained semi-infinite optimization problems

Let us denote the next iterate by $(x^*, z^*)$, with $x^* := x + \mu(x)\, d$ and $z_i^*(x^*, \cdot) = z_i(x, \cdot) + \mu(x)\, d^T f_i(\cdot)$, where $\mu(x) > 0$ is the maximum steplength from $x$ along $d$. Then, by construction, it is an improved feasible solution (possibly an extreme point, possibly not) which maintains all the previous zeros except $s_p$, and at least one constraint that was not active at $(x, z)$ becomes active at $(x^*, z^*)$.

(1.b) If $\lambda_p$ corresponds to a column of $B(x)$ associated with a component $x_j$ at its upper (respectively lower) bound, then, following a similar reasoning, we have that at $x^*$ the value of $(x_j)^*$ decreases (respectively increases) and a new constraint becomes active.
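Since the Case 1 direction is the $p$th row of $B(x)^{-1}$, it can be computed as the solution $g$ of the linear system $B(x)^T g = e^p$. A minimal sketch, for a hypothetical non-degenerate extreme point in $\mathbb{R}^2$ with two degree-zero active indices of a single constraint (all data made up for illustration):

```python
import numpy as np

# Hypothetical active gradients f(s) = (1, s) at two degree-zero active
# indices s1, s2, so B(x) = [f(s1) f(s2)] is square and invertible.
s1, s2 = 0.3, 0.9
B = np.column_stack([[1.0, s1], [1.0, s2]])

# direction for pivoting out s1 (p = 1): the first row of B(x)^{-1},
# i.e. the solution g of B(x)^T g = e^1
e1 = np.array([1.0, 0.0])
g = np.linalg.solve(B.T, e1)

# relations (1.a): g^T f(s1) = 1 (the pivoted zero is released) while
# g^T f(s2) = 0 (every other active constraint stays active)
```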

Case 2: If $\lambda \ge 0_{m+m_l+m_u}$ and $\rho \ne 0_{n-(m+m_l+m_u)}$, the simplex rule cannot be directly extended. We therefore propose another one that also maintains all the constraints active, except that related to the non-null component of $\rho$.

(2.a) Let $s_p \in Z_i(x)$ with $d(s_p) \ge 1$, such that $\lambda_p \ge 0$ and $\rho_{p,1} \ne 0$, where $\rho_{p,1}$ denotes the component of $(\lambda, \rho)^T$ associated with $f_i^{(1)}(s_p)$. Then the non-null component of $\rho$ corresponds to a first derivative. Let us suppose that $\rho_{p,1} > 0$ and define the direction $d$ from the solution of the following equations:
(i) $d^T f_i^{(k)}(s_t) = 0$, for all $s_t \in Z_i(x)$, $t \ne p$, and for all $k$, $0 \le k \le d(s_t)$,
(ii) $d^T f_i(s_p) = 1$,
(iii) $d^T f_i^{(1)}(s_p) = -1$,
(iv) $d^T f_i^{(k)}(s_p) = 0$, for all $k$, $2 \le k \le d(s_p)$, and
(v) $d^T f_r^{(k)}(s_t) = 0$, for all $s_t \in Z_r(x)$, $r = 1, \ldots, q$, $r \ne i$, and for all $k$, $0 \le k \le d(s_t)$.

The pivoting rule has a plain geometrical meaning: at the next iterate $x^*$ it holds that $s_p \notin Z_i(x^*)$ and that the first derivative of the slack function, $z_i^{(1)}(x^*, s_p)$, will also be non-null. Actually, since $\rho_{p,1}$ is positive at $(x, z)$, the slack function $z_i(x^*, \cdot)$ is strictly decreasing at the 'old' zero $s_p$.

By construction $c^T d = \lambda_p - \rho_{p,1}$. So, in order to ensure that $d$ is a descent direction, condition (ii) may be replaced by

(ii)′ $d^T f_i(s_p) = \rho_{p,1}/(2\lambda_p)$,

hence $c^T d = -\rho_{p,1}/2$. Likewise, if $\rho_{p,1} < 0$, condition (iii) has a positive sign and we may replace (ii) by (ii)″ $d^T f_i(s_p) = -\rho_{p,1}/(2\lambda_p)$.

(2.b) If $\rho_{p,u} \ne 0$ for some $u > 1$, where $\rho_{p,u}$ denotes the component of $(\lambda, \rho)^T$ associated with $f_i^{(u)}(s_p)$, for $s_p \in Z_i(x)$ with $d(s_p) \ge u$, we choose the descent direction $d$ in such a way that the following equations hold:
(i) $d^T f_i^{(k)}(s_t) = 0$, for all $s_t \in Z_i(x)$, $t \ne p$, and for all $k$, $0 \le k \le d(s_t)$,
(ii) $d^T f_i(s_p) = |\rho_{p,u}|/(2\lambda_p)$,
(iii) $d^T f_i^{(u)}(s_p) = \beta$, where $\beta = 1$ if $\rho_{p,u} < 0$ and $\beta = -1$ if $\rho_{p,u} > 0$,
(iv) $d^T f_i^{(k)}(s_p) = 0$, for all $k \ne u$, $1 \le k \le d(s_p)$, and
(v) $d^T f_r^{(k)}(s_t) = 0$, for all $s_t \in Z_r(x)$, $r = 1, \ldots, q$, $r \ne i$, and for all $k$, $0 \le k \le d(s_t)$.

The geometrical meaning of $d$ for $u = 2$ is the following: let us suppose that $\rho_{p,2} > 0$; then $\beta < 0$, and therefore $s_p$ will be a local maximum of the slack function at the next iterate. Clearly, this case can only occur at the endpoints of the interval, because in the interior the degree of the zeros is even.

Remarks.

1. It is easy to prove that the obtained direction $d$ is either a recession direction of $F$ or an improving feasible direction at $(x, z)$. Since the eventual unboundedness of (P) cannot be discarded if some of the variables are not bounded, both the pivoting and descent rules, and the purification procedure, incorporate a test to detect whether a feasible descent direction is actually a recession direction of the feasible set and conclude the unboundedness of (P).

2. The new iterate $(x^*, z^*)$, which is a feasible solution, is not necessarily an extreme point. In many cases we need to use a purification procedure in order to reach an improved basic feasible solution.

3.2. Purification procedure

The characterization of extreme points in terms of the subspaces $V(x)$ has given rise to several purification algorithms, whose finite convergence has been proved under different assumptions (see Refs. [1,6,18]).

The next procedure is an extension of the purification algorithm appearing in Ref. [18]. Let us summarize its most relevant key points: starting



from a feasible solution, the algorithm generates a new feasible point whose slack functions maintain all the zeros of the previous iterate. Moreover, it adds a new zero or increases the multiplicity of an existing one. The objective function value at the successive iterates does not get worse. On the other hand, because of the addition of more constraints and the steplength determination process, the dimension of the subspace $V(x)$ is iteratively reduced, and then either an extreme point is obtained in a finite number of iterations or the algorithm concludes with the unboundedness of (P).

The algorithm is well defined under the following rank assumption, which is compatible with the eventual unboundedness of (P):

Assumption 3. There exists at least one index $i \in q$ such that $\mathrm{rank}\{f_i(s):\ s \in S\} = n$.

The purification scheme can be structured as follows. First, the set of active indices $\mathrm{constr}(x)$ of the initial feasible solution is found. Then it is checked whether the subspace $V(x)$ reduces to the null vector. With this aim, the subproblems $(\mathrm{SP}_j)\ \min\{y_j:\ y \in V(x)\}$ for $j = 1, \ldots, n$ are sequentially solved. If $v(\mathrm{SP}_j) = 0$ for all $j$, then $x$ is an extreme point and the algorithm ends. Otherwise, the optimal solution of the LP problem

$$(\mathrm{P1})\quad \min\{c^T d:\ d \in V(x),\ -1 \le d_j \le 1,\ j = 1, \ldots, n\}$$

yields a descent direction $d^*$. Finally, the maximum steplength is calculated through a line search conducted from $x$ along $d^*$ to obtain the next iterate. The process is repeated until either an extreme point or a recession direction of $F$ is found.
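Once $V(x)$ is written as a set of homogeneous equality constraints, (P1) is an ordinary finite linear program. A hedged sketch using `scipy.optimize.linprog` (assumed available; the active-gradient data below are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog  # assumed available; any LP solver would do

# Hypothetical data: one degree-zero active index with gradient a = f(s1),
# so V(x) = {d : a^T d = 0}; objective gradient c.
a = np.array([-4.0, -4.0])
c = np.array([-0.75, 0.0])

# (P1): min c^T d  s.t.  d in V(x), -1 <= d_j <= 1
res = linprog(c, A_eq=a.reshape(1, -1), b_eq=[0.0],
              bounds=[(-1, 1), (-1, 1)], method="highs")
d_star = res.x              # a descent direction whenever res.fun < 0
```

For these data the solver returns $d^* = (1, -1)$ with objective value $-0.75$.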

Example 1 [21].

$$(\mathrm{P})\quad \inf\ \{(-3/4)\, x_1\}\quad \text{s.t.}\ (3 - 7s)\, x_1 - 4s\, x_2 \ge 4s^2 - 11s + 7,\ s \in [0, 1].$$

The problem is consistent and Assumption 3 holds. Take $x = (2.5, -2.5)$ as a starting feasible point: $z(s) = -4s^2 + 3.5s + 0.5$, $\mathrm{constr}(x) = \{1\}$ with $d(1) = 0$. Since $V(x) = \{y \in \mathbb{R}^2:\ y_1 + y_2 = 0\}$, $x$ is not an extreme point, and the optimal solution of (P1) is $d^* = (1, -1)$, which is a recession direction of $F$.
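The claims of Example 1 can be checked numerically; the following sketch (ours, not from the paper) verifies feasibility of the starting point and that $d^* = (1, -1)$ is a recession direction:

```python
import numpy as np

# Example 1 data: constraint (3 - 7s) x1 - 4s x2 >= 4s^2 - 11s + 7 on S = [0, 1]
s = np.linspace(0.0, 1.0, 10001)
x = np.array([2.5, -2.5])

# slack at the starting point: z(s) = -4s^2 + 3.5s + 0.5, vanishing at s = 1
z = (3 - 7 * s) * x[0] - 4 * s * x[1] - (4 * s**2 - 11 * s + 7)

# d* = (1, -1): f(s)^T d* = (3 - 7s) + 4s = 3 - 3s >= 0 on [0, 1], while
# c^T d* = -3/4 < 0, so moving along d* stays feasible forever: (P) is unbounded
d = np.array([1.0, -1.0])
fd = (3 - 7 * s) * d[0] - 4 * s * d[1]
```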

Example 2 [30]. Consider the problem

$$(\mathrm{P})\quad \min\ x_2\quad \text{s.t.}\ x_1 \cos(s) + x_2 \sin(s) \le 1,\ s \in [0, 2\pi].$$

The starting solution $x = (0.599, 0.8)$, with $\mathrm{constr}(x) = \emptyset$, is not an extreme point. The direction $(1, -1)$, given by the purification algorithm, leads to $x^* = (0.80293979, 0.59606021)$ with $\mathrm{constr}(x^*) = \{0.6386\}$, a non-degenerate extreme point. Then we apply the pivoting rule of Case 1.a (because $\lambda < 0$) and obtain the non-degenerate extreme point $x = (-0.80292231, -0.59608363)$, with $\mathrm{constr}(x) = \{3.78\}$. Hence it is not necessary to purify, but to apply a pivoting rule again.
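A quick numerical check of Example 2 (our own sketch): the constraint is tight exactly where $x_1 \cos(s) + x_2 \sin(s)$ peaks, so the reported active index should be the argmax of that function:

```python
import numpy as np

# Example 2 constraint: x1 cos(s) + x2 sin(s) <= 1 on S = [0, 2*pi]
s = np.linspace(0.0, 2 * np.pi, 40001)
x_star = np.array([0.80293979, 0.59606021])

g = x_star[0] * np.cos(s) + x_star[1] * np.sin(s)
peak = g.max()              # ~1: the point lies on the boundary
s_active = s[g.argmax()]    # ~0.6386, the single active index reported above
```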

3.3. Descent rules for feasible solutions

Now let us introduce a new assumption, named Assumption 4, in order to avoid the jamming phenomena that appear in the computation of the search directions in feasible-direction methods ([22], for instance). It ensures that (P) is tractable, and permits us to take into account the gradients of the local minimizers in the linear problem that reports on either the optimality of the current iterate or yields the search direction.

Assumption 4. For each $x \in \mathbb{R}^n$, the set of local minimizers $m_i(x)$ of $z_i(x, s)$ over $S$ is finite, for $i \in q$.

Theorem 4. Let $(x, z) \in F$ and suppose that Assumptions 2 and 4 hold. Then $(x, z)$ is optimal if and only if $v(Q(x)) = 0$, where $v(\cdot)$ is the objective value of the linear problem

$$(Q(x))\quad \min\ c^T d\quad \text{s.t.}\ f_i(s)^T d \ge -z_i(x, s),\ s \in m_i(x),\ i \in q;\quad d_j \ge 0\ \text{for}\ j \in J_l(x);\quad d_j \le 0\ \text{for}\ j \in J_u(x);\quad -1 \le d_j \le 1\ \text{for all}\ j.$$

Proof. Let $(x^*, z^*)$ be a feasible solution such that $v(Q(x^*)) = 0$ and let us suppose that it is not optimal. Then a feasible solution $(x^\#, z^\#)$ for (P)



would exist such that $c^T x^\# < c^T x^*$. Then the segment $X = [x^*, x^\#]$ would be a subset of feasible solutions. We could take $x \in X$, $x \ne x^*$, such that $\max\{|x_j - x_j^*|:\ j = 1, \ldots, n\} \le 1$, where $x_j$ denotes the $j$th component of $x$, and consider the direction $u = x - x^*$; note that $c^T x < c^T x^*$. By construction, $f_i(t)^T u = f_i(t)^T x - f_i(t)^T x^* = z_i(x, t) - z_i(x^*, t) \ge -z_i(x^*, t)$, for $t \in m_i(x^*)$ and $i \in q$. So $u$ would be a feasible solution for $(Q(x^*))$ with $c^T u = c^T x - c^T x^* < 0$, and we would get a contradiction.

Conversely, let us suppose that $(x^*, z^*)$ is optimal. Since $Z_i(x^*) \subseteq m_i(x^*)$, from Theorem 3 we have that $v(Q(x^*)) \ge 0$; but $d = 0_n$ is a feasible solution to $Q(x^*)$, and then $v(Q(x^*)) = 0$. □
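The optimality test of Theorem 4 amounts to solving the finite LP $(Q(x))$ over the current local minimizers. A hedged sketch on the Example 2 constraint at the strictly feasible point $x = (0, 0.8)$ (our own choice), with `scipy.optimize.linprog` assumed available and the local minimizers found by inspection:

```python
import numpy as np
from scipy.optimize import linprog  # assumed available

# Toy instance: min x2  s.t.  x1 cos(s) + x2 sin(s) <= 1, i.e.
# f(s) = -(cos s, sin s) and b(s) = -1, so z(x, s) = 1 - x1 cos(s) - x2 sin(s).
x = np.array([0.0, 0.8])
c = np.array([0.0, 1.0])

# local minimizers of z(x, s) = 1 - 0.8 sin(s) on [0, 2*pi], by inspection:
# the interior minimizer s = pi/2 (z = 0.2) and the endpoint s = 2*pi (z = 1)
m_x = [np.pi / 2, 2 * np.pi]
A_ub = np.array([[np.cos(t), np.sin(t)] for t in m_x])   # -f(s)^T d <= z(x, s)
b_ub = np.array([1 - 0.8 * np.sin(t) for t in m_x])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-1, 1), (-1, 1)], method="highs")
# v(Q(x)) = res.fun < 0: x is not optimal and res.x is a descent direction
```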

If the current iterate $(x, z)$ is a degenerate extreme point, we use Theorem 4 both for checking its optimality and for improving the objective value. If $v(Q(x)) < 0$, then every optimal solution $d^*$ of the LP is a descent direction at $(x, z)$. However, in many cases the maximum steplength from $x$ along $d^*$ is zero. We avoid this difficulty by constructing a new descent direction $g$ such that

$$f_i(s)^T g > -z_i(x, s),\ s \in m_i(x)\ \text{for}\ i \in q;\qquad g_j > 0\ \text{for}\ j \in J_l(x)\quad \text{and}\quad g_j < 0\ \text{for}\ j \in J_u(x).$$

The existence of such a direction $g$ is ensured by Assumption 2. The method obtains it by solving a second linear program whose solution $g^*$ is added to $d^*$ to construct $g$. In Ref. [19] we present a complete discussion of this subject for a less general but analogous problem.

Example 3 (OET 1 in Ref. [32]).

$$(\mathrm{P})\quad \min\ x_0\quad \text{s.t.}\ x_0 + s x_1 + \exp(s)\, x_2 \ge s^2,\quad x_0 - s x_1 - \exp(s)\, x_2 \ge -s^2,\ s \in [0, 2].$$

The starting point $x^0 = (5.3890561, 1, 1)$ has $Z_1(x^0) = \emptyset$, $Z_2(x^0) = \{2\}$, $d(2) = 0$ and $m_1(x^0) = \{0\}$; hence $x^0$ is not a basic feasible solution. The purification algorithm (after two iterations) leads to the non-degenerate extreme point $x^2 = (0.91135768, -0.91135768, 0.91135768)$, with $Z_1(x^2) = \emptyset$, $Z_2(x^2) = \{0, 2\}$, $d(0) = 1$, $d(2) = 0$, $m_1(x^2) = \{1.4091\}$ and $m_2(x^2) = \{0, 2\}$. The pivoting rule of Case 1.a ($\lambda_2 < 0$) yields a degenerate extreme point: $x^3 = (0.62607057, -0.62607057, 0.62607057)$. The optimality test provides a descent direction $d^*$ (the solution of $Q(x^3)$). We check that the maximum steplength along $d^*$ is equal to zero, so a second LP problem is solved to get $g^*$ and $d = d^* + g^* = (-0.14900342, 1, -0.23840584)$. We thereby reach an improved feasible solution $x^4 = (0.56011812, -0.18344684, 0.52054649)$, with $Z_1(x^4) = \emptyset$, $Z_2(x^4) = \{0.23873\}$, $d(0.23873) = 1$ and $m_1(x^4) = \{0, 2\}$, which is not an extreme point. Now we can decide whether to follow it with either a purification step or a pure descent step.

3.4. Steplength computation

Once we have a search direction provided by either the pivoting or the descent rules, we must determine the step size. Moving as far as possible along $d$, a new improved feasible point $x^* = x + \mu(x)\, d$ is obtained, where

$$\mu(x) = \min\Big\{\min_{i \in q}\{\mu_i(x)\},\ l(x),\ u(x)\Big\},$$

with $\mu_i(x) = \inf\{-z_i(x, s)/(f_i(s)^T d):\ s \in S_i^-(d)\}$, $S_i^-(d) = \{s \in S:\ f_i(s)^T d < 0\}$ for $i \in q$, $l(x) = \min_{1 \le j \le n}\{(l_j - x_j)/d_j:\ d_j < 0\}$ and $u(x) = \min_{1 \le j \le n}\{(u_j - x_j)/d_j:\ d_j > 0\}$. Notice that if $S_i^-(d) = \emptyset$ for all $i \in q$, and $l_j = -\infty$ and $u_j = +\infty$ for all $j$, then $\mu(x) = +\infty$ and the problem (P) is unbounded. In what follows, and for the sake of simplicity, we will consider that $S_i^-(d) \ne \emptyset$ for $i \in q$.
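The steplength formula can be evaluated on a grid. A sketch on the toy constraint $x_1 \cos(s) + x_2 \sin(s) \le 1$ again (point and direction are our own choices for illustration; no finite bounds, so $\mu(x)$ is governed by $\mu_i(x)$ alone):

```python
import numpy as np

# constraint x1 cos(s) + x2 sin(s) <= 1, i.e. f(s) = -(cos s, sin s),
# slack z(x, s) = 1 - x1 cos(s) - x2 sin(s)
s = np.linspace(0.0, 2 * np.pi, 40001)
x = np.array([0.0, 0.8])
d = np.array([0.0, -1.0])           # a descent direction for min x2

z = 1 - x[0] * np.cos(s) - x[1] * np.sin(s)
fd = -(np.cos(s) * d[0] + np.sin(s) * d[1])   # f(s)^T d

neg = fd < 0                        # S^-(d): where moving along d eats slack
mu = float(np.min(-z[neg] / fd[neg]))         # maximum feasible steplength
x_new = x + mu * d                  # lands on the boundary: here (0, -1)
```

For these data the ratio $-z/(f^T d) = 0.8 - 1/\sin(s)$ is minimized at $s = 3\pi/2$, giving $\mu = 1.8$ and the optimum $x^* = (0, -1)$ in one step.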

In practice, we do not use the formula above to obtain $\mu_i(x)$ for $i \in q$. We have implemented two different and more operative versions: one corresponds to a direction obtained by using the descent rules (in Lemma 5), and the other is the version for directions from the pivoting rules and the purification algorithm.

Lemma 5. Suppose that $(x, z)$ is not the optimum of (P) and $d^* \in \mathrm{Argmin}\ Q(x)$. Let $\mu_i(x) = \inf\{-z_i(x, s)/(f_i(s)^T d^*):\ s \in S_i^-(d^*)\}$; then $\mu_i(x) = 1/\kappa_i$, where $\kappa_i = \max\{-f_i(s)^T d^*/z_i(x, s):\ s \in S\}$, for $i \in q$.

Proof. We check that for every $i \in q$, $\kappa_i$ cannot be attained at an index $s$ such that $f_i(s)^T d^* \ge 0$. Let us denote $L_i(s) = -f_i(s)^T d^*/z_i(x, s)$. Suppose that $f_i(s_0)^T d^* \ge 0$ for some $s_0 \in S$; then we have that

(i) if $z_i(x, s_0) > 0$ then $L_i(s_0) \le 0$, and
(ii) if $z_i(x, s_0) = 0$, then by construction $f_i(s_0)^T d^* > 0$, and $L_i(s_0) = -\infty$.

However, for any $s \in S_i^-(d^*)$, we have that $z_i(x, s) > 0$ and $L_i(s) > 0$. □

So, for computing the maximum steplength along these kinds of directions, we have to maximize several univariate functions $L_i(s)$, $i \in q$. Each $L_i(s)$ has a finite number of discontinuities (at the active indices), but we need not be concerned about them, because near the active indices the function being maximized is negative and therefore our computer code jumps over them. However, for a direction provided by the purification procedure or the pivoting rules the situation is different, because $\mu_i(x)$ could be attained at an active point. When the algorithm constructs one of these directions it takes into account the derivatives of $z_i(x, s)$ at the active indices, and L'Hôpital's rule allows us to properly define the value of each $L_i(s)$ at the active constraints by taking limits. Hence the algorithm maximizes univariate functions with a finite number of avoidable discontinuities.

3.5. Implementation issues

We now discuss several implementation issues related to the procedures described in the preceding subsections. We have introduced two kinds of rules, which we call pivoting rules and descent rules. Both of them provide the search directions by solving finite linear programming problems. The first ones are used for non-degenerate extreme points, and the second ones for the degenerate case. However, descent rules are more general and can be applied to improve every non-optimal feasible solution. So a question arises: is it worthwhile using the pivoting rules? From our previous computational experience, for $q = 1$, we have checked that the pivoting rules are preferable because they lead to the optimum more efficiently.

On the other hand, a purification procedure is essential for using pivoting rules, both to construct an initial extreme point to begin with and to obtain a basic feasible solution when the current iterate is not one. Moreover, since purifying never worsens the objective value and maintains feasibility, it can be considered a good idea to move from a feasible solution in a feasible-direction scheme. In fact, we also consider a new approach that combines descent rules and purification procedures. This cross-over scheme runs a feasible-direction method first and later switches to the purification procedure. One important issue is to decide when to start the purification phase. Our best choice, from the computational point of view, has been given by a heuristic rule: 'begin with an extreme point; along $n$ iterations use a pure feasible-direction scheme; check if the current iterate is an extreme point; if it is not, then purify; and repeat'.

Checking whether a feasible solution $(x, z)$ is basic, and also computing descent directions, requires us to find all the global and some local minimizers of $z_i(x, s)$, $i \in q$, with respect to $s$. This subproblem is referred to as the multi-local optimization subproblem. The algorithms that solve it are described in more detail in Ref. [16].

The step size computation involves the evaluation of the global maxima of several univariate functions, each one with a finite number of discontinuities. So the scheme that we have developed to find $\mu_i(x)$, $i \in q$, is a three-stage procedure, and it can be considered an application of the multi-local optimization procedure presented in Ref. [16]. First, it identifies the avoidable discontinuities, or breakpoints, and isolates them; notice that accuracy is essential at this stage. Second, it calculates the global maximum of $L_i(s)$ on each subinterval between two consecutive breakpoints by applying the multi-local routine. If we are computing the steplength along a direction generated by the pivoting rules or the purification procedure, the value of $L_i(\cdot)$ at the avoidable discontinuities is obtained as the ratio of the corresponding derivatives at them (according to L'Hôpital's rule). Finally, it compares all the maxima, saves just the global one, $\kappa_i$, and takes its inverse.

4. Description of the algorithms

In this section, we discuss two algorithms forsolving (P) based on the procedures described in the



previous section. For instance, Algorithm 1 approaches the optimal solution through a sequence of basic feasible solutions, combining a simplex-type strategy with a search direction scheme. Now we can formally state the following method:

Algorithm 1.

Step 0. Let $(x, z) \in F$. Find the global minimizers of $z$ with respect to $s$.
Step 1. Purification phase.
If $V(x) \ne \{0\}$, apply a purification algorithm in order to determine a basic feasible solution $(x^*, z^*)$.
Step 2. Search direction finding step.
2.1. If the matrix $B(x^*)$ is invertible, evaluate $\lambda$ and $\rho$. If $\lambda > 0$ and $\rho = 0$, STOP. Otherwise, find $d$ through the suitable pivoting rule, and go to Step 3.
2.2. If the matrix $B(x^*)$ is non-invertible, determine the set of local minimizers $m_i(x^*)$, $i \in q$. Compute the optimality function value $v(Q(x^*))$. If $v(Q(x^*)) = 0$, STOP. Otherwise, compute the search direction $d \in \mathrm{Argmin}\ Q(x^*)$, and go to Step 3.
Step 3. Steplength calculation.
Evaluate $\mu(x^*)$. Set $x = x^* + \mu(x^*)\, d$, and go to Step 1.

In summary, it checks if the current solution is basic, and if it is not, a purification step is performed. When the solution is an extreme point, it applies the optimality criterion to stop or to generate a feasible descent direction.
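Schematically, the main loop of Algorithm 1 can be written as follows. Every helper (`purify`, `is_basic`, `B_invertible`, `pivot_step`, `descent_step`, `steplength`) is a hypothetical placeholder for the corresponding procedure described in the text, not actual code from the paper.

```python
def algorithm1(x, purify, is_basic, B_invertible,
               pivot_step, descent_step, steplength):
    """Skeleton of Algorithm 1: purify, test optimality, then move
    along a feasible descent direction.  The direction-finding helpers
    return (done, d): done=True signals the optimality test fired."""
    while True:
        # Step 1: purification phase -- reach a basic feasible solution.
        if not is_basic(x):
            x = purify(x)
        # Step 2: optimality test and search direction.
        if B_invertible(x):
            done, d = pivot_step(x)      # simplex-type pivoting rule
        else:
            done, d = descent_step(x)    # feasible-direction subproblem
        if done:
            return x
        # Step 3: steplength calculation and update.
        t = steplength(x, d)
        x = [xi + t * di for xi, di in zip(x, d)]
```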

On the other hand, Algorithm 2 is a feasible-direction method that only uses the descent rules, although sometimes it also works with extreme points. In fact, it combines descent rules with a purification scheme: first it runs a feasible-direction method, and later it switches to the purification procedure. The basic structure of each iteration of the algorithm is as follows:

Algorithm 2.

Step 0. Let (x, z) ∈ F. Find the global minimizers of z with respect to s.

Step 1. If V(x) ≠ {0}, apply a purification algorithm in order to determine a basic feasible solution (x*, z*). Let y_1 = x* and k = 1.

Step 2. Determine the set of local minimizers m_i(y_k), i ∈ q. Compute the optimality function value v(Q(y_k)). If v(Q(y_k)) = 0, STOP. Otherwise, compute the search direction d_k ∈ Argmin Q(y_k), and go to Step 3.

Step 3. Evaluate l(y_k). Let y_{k+1} = y_k + l(y_k)d_k. If k < n, replace k by k + 1 and repeat Step 2. If k = n, set x = y_{k+1} and go to Step 1.

We note here that the inner loop of the foregoing algorithm resets the procedure every n steps (whenever k = n at Step 3). We have also run other rules to start the purification phase, but our computational experience for q = 2 shows that this choice is far better, since it results in a considerable saving of computer time [15].
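The cross-over loop just described can be sketched as follows; again this is our own schematic, with the helper names as placeholders for the procedures of Section 3.

```python
def algorithm2(x, n, purify, is_basic, descent_step, steplength):
    """Skeleton of Algorithm 2: n feasible-direction steps, then a
    purification phase, repeated until the optimality function value
    vanishes (signalled here by descent_step returning done=True)."""
    while True:
        # Step 1: purification -- restart from a basic feasible solution.
        if not is_basic(x):
            x = purify(x)
        y = x
        for _ in range(n):               # inner feasible-direction loop
            done, d = descent_step(y)    # Step 2: direction finding
            if done:
                return y
            t = steplength(y, d)         # Step 3: steplength
            y = [yi + t * di for yi, di in zip(y, d)]
        x = y                            # k = n: back to purification
```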

5. Computational results

The algorithms described in the previous section were tested on a number of problems, for q = 1 and q = 2. The numerical results were obtained on an HP Vectra VL4/100. Our source code is written in the C language and compiled as Windows applications. The linear programs are solved with routines of the CPLEX Callable Library. Concerning the parameters, we consider that q equals zero if it belongs to (-10^-4, 10^-4), that k is negative if it is less than -10^-10, and that a local minimum s* of z(x, .) is a zero if z(x, s*) < 10^-8. The descent rules continue only while v(Q(x)) < -10^-10.

Most of our test problems have been borrowed from the literature. Some of them are relatively small but well known, and they permit us to establish comparisons with the results obtained in relevant papers. Our results are summarised in Tables 1-3, where the column IT shows the number of iterations performed in solving the problem and NF denotes the number of evaluations of slack functions. ND is the total number of gradient evaluations, and OBJECT indicates the value of the objective function at the stopping point. By (!) we mean that the algorithm does not find the optimal solution.



5.1. Comments on Table 1 instances

In Ref. [30] some interior point algorithms are applied to solve problem CC: a dual affine-scaling method that converges to a non-optimal solution x = (0.6445, 0.7646), and a dual projective-scaling algorithm with a strict upper bound that converges to (1.01 × 10^-4, -1 + 5.52 × 10^-9) in 8 iterations.

Problem PT was introduced by Panier and Tits [21] to illustrate how to overcome some difficulties in the process of discretization. In Ref. [32] an SQP-type algorithm for discretized minimax problems is presented. Their results, for a discretization of 501 points, are as follows: IT = 8, NF = 7337, ND = 2 and OBJECT = 0.23606791.

Goberna and Lopez [6] use problem GL to compare the performance of several methods (two grid and two cutting-plane discretization algorithms) that solve semi-infinite linear problems. This problem is very sensitive with respect to the boundary of its level sets and has a unique optimal solution.

In Ref. [8], in the section concerning well-posedness in SIP, the author introduces two examples, which we call GU2 and GU3, in which the optimal value varies very strongly under small perturbations of the input data. Actually, v(P) is poorly determined if |e| is small in comparison with the computational errors. This is justified by the non-existence of n linearly independent vectors {f(s_j), j = 1, ..., n} such that c = Σ_{j=1,...,n} a_j f(s_j), with a_j > 0 and s_j ∈ [0, 1], for j = 1, ..., n. For these kinds of problems, discrete approximations are not recommended. Moreover, in example GU2, for e > 0 the problem is unbounded and, for several values of e, both of our algorithms detect it in at most 2 iterations. Concerning GU3, we have solved it for several e's and our results are quite good for |e| > 10^-5.

One of the few examples in the literature stated with bounds on the variables appears in Ref. [31]. This is a simple example in which all the proce-

Table 1
Numerical results for problems with q = 1

Problem               n      ALGO      IT    NF     ND      OBJECT
CC                    2      A 1       6     142    996     -1
                             A 2       14    304    2129    -0.99999999999996
PT                    2      A 1       2     31     89      0.2360679779
                             A 2       13    200    466     0.2360679775
GL                    2      A 1       1     16     55      0.6666666667
                             A 2       14    211    1697    0.6666666667
GU2, e = -0.5         2      A 1       4     42     134     -0.6666663856
                             A 2       15    267    529     -0.6666666667
GU3, e = 1.5 x 10^-6  3      A 1       4     104    10080   1.7262807025
                             A 2(!)    2     5      63      1.7394272761
WBI                   3      A 1, A 2  1     2      42      -2
TAN                   3      A 1       3     74     137     0.6490424215
                             A 2       24    557    936     0.6490420934
                      6      A 1       9     302    434     0.6160851913
                             A 2(!)    76    3435   4181    0.6160862020
                      8      A 1       24    1063   1304    0.6156532268
                      10     A 1       31    1705   1945    0.6156280585
                      12(*)  A 1       17    2422   2215    0.6156265660



dures described in Section 4 lead to the optimum in one iteration, for every starting solution that we have tried.

TAN is a widely used test problem (see Refs. [1,2,24,27,29], for instance). It arises from the one-sided approximation of tan(s) by a polynomial. In Ref. [2] there is a detailed exposition of numerical results for this problem, and it is pointed out that it is severely ill-conditioned for n > 6. In Ref. [24] a very interesting discussion appears, comparing some quasi-Newton and Newton-type algorithms; concerning TAN, it can be seen that for n = 3 and 6 all of these algorithms take more iterations to reach the solution than Algorithm 1, although we must say that our starting point is not the same, because we need a feasible solution. For n = 8

Table 3
Numerical results for other minimax problems

Problem   n   ALGO   IT    NF      ND      OBJECT
OET 1     3   A 1    7     152     759     0.5382453182
              A 2    19    499     1987    0.5382453186
LCA 6     5   A 1    121   9018    31372   0.0020997300
              A 2    13    757     1156    0.0020997412
LCA 7     7   A 1    158   9484    17027   0.0542221647
              A 2    18    1717    16544   0.0542221955
LCA 8     7   A 1    119   11786   22192   0.1633811960
              A 2    30    2549    9421    0.1633812290
LBV 1     3   A 1    4     54      242     0.0484144275
              A 2    10    194     591     0.0484144407
LBV 2     4   A 1    23    972     1703    0.0056471544
              A 2    13    530     964     0.0056471571

Table 2
Numerical results for uniform approximation problems using polynomials

Problem   n   ALGO   IT   NF     ND      OBJECT
OET 3     4   A 1    19   756    1404    0.0045050731
              A 2    9    406    848     0.0045050895
LCA 1     6   A 1    44   4257   6855    0.0000418826
              A 2    16   1257   4183    0.0000419117
LCA 2     6   A 1    88   7564   23677   0.0005219913
              A 2    19   1418   2951    0.0005220463
LCA 3     7   A 1    42   5256   21256   0.0026028298
              A 2    15   1858   2838    0.0026028248
LCA 4     8   A 1    54   8334   50381   0.0142565680
              A 2    18   2678   5594    0.0142565963
LCA 5     5   A 1    57   3661   14469   0.0001554092
              A 2    12   699    1193    0.0001554075



some of these methods terminate with fewer iterations but without locating all the global minimizers. For the purpose of comparison, we include the set of active points at the optimum in the Appendix. Notice that Algorithm 2 does not find the optimal solution for n ≥ 6.

In our opinion, Algorithm 1 may be considered competitive with other semi-infinite programming algorithms for problems with q = 1. On the other hand, we can show that Algorithm 2 performs better on minimax-type problems.

5.2. Minimax problem set

The problems in Tables 2 and 3 are continuous minimax problems. We have considered uniform approximation problems using polynomials, which can be restated as follows:

minimize   x0
s.t.       x0 + Σ_{j=1,...,n-1} s^(j-1) x_j ≥ b(s),
           x0 - Σ_{j=1,...,n-1} s^(j-1) x_j ≥ -b(s),
           (x0, x) ∈ R^n,  s ∈ S,

with the following b(s), S and starting solution x0:

OET 3: b(s) = sin(s), S = [0, 1], x0 = (2.158529, 1, 1, 1).
LCA 1: b(s) = cos(s), S = [-1, 1], x0 = (1, 0, 0, 0, 0, 0).
LCA 2: b(s) = 1/(1 + s^2), S = [0, 1], x0 = (1, 0, 0, 0, 0, 0).
LCA 3: b(s) = tan(s), S = [-1, 1], x0 = (tan(1), 0, 0, 0, 0, 0, 0).
LCA 4: b(s) = 1 - exp(-s^2), S = [-2, 2], x0 = (b(2), 0, 0, 0, 0, 0, 0, 0).
LCA 5: b(s) = sin(s), S = [0, 1], x0 = (sin(1), 0, 0, 0, 0).

Actually, all the LCA problems are linear Chebyshev approximations to several functions, either by polynomials (problems 1 to 5) or using different sets of approximating functions that we denote by W (problems 6 to 8):

LCA 6: b(s) = 1/(1 + s^2), S = [0, 1], W = {1, s, sin(s), cos(s)}, x0 = (1, 0, 0, 0, 0).
LCA 7: b(s) = 1 - exp(-s^2), S = [-2, 2], W = {1, s, s^2, s^3, sin(s), cos(s)}, x0 = (b(2), 0, 0, 0, 0, 0, 0).
LCA 8: b(s) = 1 - exp(-s^2), S = [-3, 3], W = {1, s, 2s^2 - 1, 4s^3 - 3s, 8s^4 - 8s^2 + 1, 16s^5 - 20s^3 + 5s}, x0 = (b(3), 0, 0, 0, 0, 0, 0).
OET 1: b(s) = s^2, S = [0, 2], W = {s, e^s}, x0 = (5.3890561, 1, 1).

We have also considered a linear boundary value problem of monotonic type: find an approximate solution of L[y](t) = -y''(t) + (1 + t^2)y(t) = t^2, t ∈ [-1, 1], with boundary conditions y(-1) = y(1) = 0. Its statement as a semi-infinite program can be found in Ref. [14]:

minimize   x0
s.t.       x0 + Σ_{j=1,...,n-1} w_j(s) x_j ≥ s,
           x0 - Σ_{j=1,...,n-1} w_j(s) x_j ≥ -s,
           (x0, x) ∈ R^n,  s ∈ [0, 1],

where w_j(s) := 2j(2j - 1)s^(j-1) + (1 + s)(1 - s^j). We have solved it for n = 3 (LBV 1) and n = 4 (LBV 2) with starting solutions (3, 1, 0) and (3, 1, 0, 0), respectively.
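For illustration only, the uniform-approximation template can be discretized on a finite grid of S. The sketch below is our own, not the authors' code (their algorithms work with S directly rather than with a grid); the name `chebyshev_rows` and the grid size are assumptions. It assembles, for an arbitrary basis of functions, the constraint rows x0 ± Σ_j φ_j(s)x_j ≥ ±b(s), instantiated here with the w_j of problem LBV 1.

```python
def chebyshev_rows(b, basis, lo, hi, m=101):
    """Constraint data of the discretized uniform-approximation LP:
    for each grid point s, two rows (coefficients, rhs), each meaning
    coefficients . (x0, x) >= rhs."""
    rows = []
    for i in range(m):
        s = lo + (hi - lo) * i / (m - 1)
        phi = [f(s) for f in basis]
        rows.append(([1.0] + phi, b(s)))                 # x0 + sum phi_j x_j >= b(s)
        rows.append(([1.0] + [-p for p in phi], -b(s)))  # x0 - sum phi_j x_j >= -b(s)
    return rows

def w(j):
    # Coefficient functions of the boundary value problem:
    # w_j(s) = 2j(2j - 1) s^(j-1) + (1 + s)(1 - s^j).
    return lambda s: 2 * j * (2 * j - 1) * s ** (j - 1) + (1 + s) * (1 - s ** j)

# Problem LBV 1 (n = 3): basis {w_1, w_2}, b(s) = s, S = [0, 1].
rows = chebyshev_rows(lambda s: s, [w(1), w(2)], 0.0, 1.0)
```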

We think that these examples may illustrate the computational behaviour of our algorithms. In our opinion, it is difficult to provide serious comparisons with other approaches in the literature, because the methods may be very sensitive to the selection of the starting point, and the tolerances in the termination tests are not the same. Let us comment on some results for the instances of Tables 2 and 3:

Numerical results concerning problems OET 3 and OET 1 appear in Ref. [32]; for a discretization of 501 points they report IT = 7, NF = 8920, ND = 9, OBJECT = 0.00450552 and IT = 11, NF = 13420, ND = 6, OBJECT = 0.53824312, respectively. We would like to point out that problem OET 1 is ill-conditioned if it is solved with the dual version of the simplex method, because the maximal error of the best approximation is reached at only two points of S.

Some numerical results concerning LCA 5 are shown in Ref. [10]. For the finest grid of 181 points, they achieve a maximal approximation error of 1.55 × 10^-4; the solution of three quadratic subproblems (QP) is required (19 iterations), for which the average number of constraints is 15.

With respect to the mean size of the ordinary LP subproblems solved by our algorithms, it can be pointed out that at each step just one discrete set of points is retained: either the points in constr(x) or the points in m_i(x), i ∈ q. Hence the mean size has not exceeded 2n; in fact, at the purification steps it is always less than n.

In our opinion, it is not difficult to advise which method (A 1 or A 2) should be selected for the minimax problems: Algorithm 2.

Appendix A

The test problems used in the numerical experiments of Section 5.1 are defined here. In all cases x0 denotes the starting point, x* the optimal solution if it is known, and constr(x*) the set of active points at x*.

CC: minimize x2
s.t. x1 cos(s) + x2 sin(s) ≤ 1, s ∈ [0, 2π];
x0 = (0.599, 0.8).
x* = (0, -1) is a non-degenerate extreme point; constr(x*) = {4.7124}.
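The active point reported for CC can be checked numerically; the following small script (our own check, not part of the paper) scans the slack of the constraint at x* over a grid of [0, 2π]:

```python
import math

# At x* = (0, -1) the slack of the CC constraint is
# 1 - (x1 cos s + x2 sin s) = 1 + sin s, which vanishes at s = 3*pi/2.
x_star = (0.0, -1.0)

def cc_slack(s, x=x_star):
    return 1.0 - (x[0] * math.cos(s) + x[1] * math.sin(s))

grid = [2 * math.pi * i / 10000 for i in range(10001)]
s_active = min(grid, key=cc_slack)   # approximately 4.7124
```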

PT: minimize x1
s.t. x1 + (1 - s - s^2)x2 ≥ s - s^2, s ∈ [0, 1];
x0 = (0.25, 0).
x* = (√5 - 2, 1 - 2√5/5) is a non-degenerate extreme point; constr(x*) = {0.6181}.

GL: minimize 2x1 + x2
s.t. s x1 + (1 - s)x2 ≥ s - s^2, s ∈ [0, 1];
x0 = (1, 0).
x* = (1/9, 4/9) is a non-degenerate extreme point with constr(x*) = {0.6666}.

GU2: minimize x1 + (1 + e)x2
s.t. x1 + s x2 ≥ -1/(1 + s), s ∈ [0, 1];
x0 = (0, 0).
For e = 0 the optimal value is -0.5; for e < 0 the optimal value is -1/(2 + e); and for e > 0 the problem is unbounded. For e = -0.5, x* = (-0.888600, 0.443867), a non-degenerate extreme point with constr(x*) = {0.50097}.

GU3: minimize x1 + (1/2)x2 + ((1/2) + e/3)x3
s.t. x1 + s x2 + (s + e s^2)x3 ≥ e s, s ∈ [0, 1];
x0 = (e, 0, 0).
For e = 0 the optimal value is 1/2(1 + e), and for e ≠ 0 the optimal value is 1/4(3e^(1/3) + e). Actually, for e = 1.5 x 10^-6, x* = (1.027996, -587431.709682, 587432.518819), a non-degenerate extreme point with constr(x*) = {0.3333, 1}.

WBI: minimize x1 - 2x2 - x3
s.t. x1 + s x2 + s^2 x3 ≤ s^5, s ∈ [1, 2];
x_j ≥ 0, j = 1, 2, 3;
with x0 = (0, 0, 0) and x0 = (0, 0.5, 0.5).
x* = (0, 1, 0) is a non-degenerate extreme point with constr(x*) = {1}, J_l(x*) = {1, 3}.

TAN: minimize Σ_{i=1,...,n} (1/i) x_i
s.t. Σ_{i=1,...,n} s^(i-1) x_i ≥ tan(s), s ∈ [0, 1];
x0 = (tan(1), 0_{n-1}),
except for 12(*), where the solution of the n = 10 problem, completed with zeros in the last two components, is used as the starting point. Let us show the set of active points at the optimum, for

n = 3: constr(x*) = {0, 0.3325, 1};



n = 6: constr(x*) = {0, 0.2763, 0.7251, 1};
n = 8: constr(x*) = {0, 0.1716, 0.4985, 0.8273, 1};
n = 10: constr(x*) = {0, 0.1168, 0.3558, 0.6419, 0.8827, 1}; and
n = 12: constr(x*) = {0, 0.0854, 0.2602, 0.5048, 0.7303, 0.9142, 1}.

References

[1] E.J. Anderson, A.S. Lewis, An extension of the simplex algorithm for semi-infinite linear programming, Mathematical Programming 44 (1989) 247-269.
[2] I.D. Coope, G.A. Watson, A projected Lagrangian algorithm for semi-infinite programming, Mathematical Programming 32 (1985) 337-356.
[3] M.C. Ferris, A.B. Philpott, An interior point algorithm for semi-infinite linear programming, Mathematical Programming 43 (1989) 257-276.
[4] K. Glashoff, S.A. Gustafson, Linear Optimization and Approximation, Springer, Berlin, 1983.
[5] M.A. Goberna, V. Jornet, Geometric fundamentals of the simplex method in semi-infinite programming, OR Spektrum 10 (1988) 145-152.
[6] M.A. Goberna, M.A. Lopez, Linear Semi-Infinite Optimization, Wiley, Chichester, 1998.
[7] G. Gramlich, R. Hettich, E.W. Sachs, Local convergence of SQP methods in semi-infinite programming, SIAM Journal on Optimization 5 (1995) 641-658.
[8] S.A. Gustafson, On numerical analysis in semi-infinite programming, in: R. Hettich (Ed.), Semi-Infinite Programming, Springer, Berlin, 1979, pp. 51-65.
[9] R. Hettich, An implementation of a discretization method for semi-infinite programming, Mathematical Programming 34 (1986) 354-361.
[10] R. Hettich, G. Gramlich, A note on an implementation of a method for quadratic semi-infinite programming, Mathematical Programming 46 (1990) 249-254.
[11] R. Hettich, K.O. Kortanek, Semi-infinite programming: theory, methods and applications, SIAM Review 35 (1993) 380-429.
[12] H. Hu, A one-phase algorithm for semi-infinite linear programming, Mathematical Programming 46 (1990) 85-104.
[13] K.O. Kortanek, H. No, A central cutting plane algorithm for convex semi-infinite programming problems, SIAM Journal on Optimization 3 (1993) 901-918.
[14] W. Krabs, Optimization and Approximation, Wiley, Chichester, 1979.
[15] T. Leon, S. Sanmatias, E. Vercher, Un metodo primal de optimizacion semi-infinita para la aproximacion uniforme de funciones, Questiio 22 (1998) 313-335.
[16] T. Leon, S. Sanmatias, E. Vercher, A multi-local optimization algorithm, TOP 6 (1998) 1-18.
[17] T. Leon, E. Vercher, An optimality test for semi-infinite linear programming, Optimization 26 (1992) 51-60.
[18] T. Leon, E. Vercher, A purification algorithm for semi-infinite programming, European Journal of Operational Research 57 (1992) 412-420.
[19] T. Leon, E. Vercher, New descent rules for solving the linear semi-infinite programming problem, Operations Research Letters 15 (1994) 105-114.
[20] P. Nash, Algebraic fundamentals of linear programming, in: E.J. Anderson, A.B. Philpott (Eds.), Infinite Programming, Springer, Berlin, 1985, pp. 37-52.
[21] E.R. Panier, A. Tits, A globally convergent algorithm with adaptively refined discretization for semi-infinite optimization problems arising in design, IEEE Transactions on Automatic Control 34 (1989) 903-908.
[22] E. Polak, L. He, Unified steerable phase I-phase II method of feasible directions for semi-infinite optimization, Journal of Optimization Theory and Applications 69 (1991) 83-107.
[23] E. Polak, Optimization: Algorithms and Consistent Approximations, Springer, New York, 1997.
[24] C.J. Price, I.D. Coope, Numerical experiments in semi-infinite programming, Computational Optimization and Applications 6 (1996) 169-189.
[25] R. Reemtsen, Discretization methods for the solution of semi-infinite programming problems, Journal of Optimization Theory and Applications 71 (1991) 85-103.
[26] R. Reemtsen, S. Gorner, Numerical methods for semi-infinite programming: a survey, in: R. Reemtsen, J.J. Ruckmann (Eds.), Semi-Infinite Programming, Kluwer Academic, 1998, pp. 195-275.
[27] R. Roleff, A stable multiple exchange algorithm for linear SIP, in: R. Hettich (Ed.), Semi-Infinite Programming, Springer, Berlin, 1979, pp. 83-96.
[28] A. Shapiro, Directional differentiability of the optimal value function in convex semi-infinite programming, Mathematical Programming 70 (1995) 149-157.
[29] Y. Tanaka, M. Fukushima, T. Ibaraki, A comparative study of several semi-infinite nonlinear programming algorithms, European Journal of Operational Research 36 (1988) 92-100.
[30] M.J. Todd, Interior point algorithms for semi-infinite programming, Mathematical Programming 65 (1994) 217-245.
[31] H. Wolkowicz, A. Ben-Israel, A recursive, volume-reducing algorithm for semi-infinite linear programming, in: F.Y. Phillips, J.J. Rousseau (Eds.), Systems and Management Science by Extremal Methods, Kluwer, Boston, 1992, pp. 479-490.
[32] J.L. Zhou, A.L. Tits, An SQP algorithm for finely discretized continuous minimax problems and other minimax problems with many objective functions, SIAM Journal on Optimization 6 (1996) 461-487.
