
How to Solve a Semi-infinite Optimization Problem

Oliver Stein∗

March 27, 2012

Abstract

After an introduction to the main ideas of semi-infinite optimization, this article surveys recent developments in theory and numerical methods for standard and generalized semi-infinite optimization problems. Particular attention is paid to connections with mathematical programs with complementarity constraints, lower level Wolfe duality, semi-smooth approaches, as well as branch-and-bound techniques in adaptive convexification procedures. A section on recent genericity results includes a discussion of the symmetry effect in generalized semi-infinite optimization.

Keywords: Semi-infinite optimization, design centering, robust optimization, mathematical program with complementarity constraints, Wolfe duality, semi-smooth equation, adaptive convexification, genericity, symmetry.

AMS subject classifications: 90C34, 90C30, 49M37, 65K10, 00-02

∗Institute of Operations Research, Karlsruhe Institute of Technology (KIT), Germany, [email protected]


1 Introduction

This article reviews recent developments in theory, applications and numerical methods of so-called semi-infinite optimization problems, where finitely many variables are subject to infinitely many inequality constraints. In a general form, these problems can be stated as

GSIP : minimize f(x) subject to x ∈ M

with the set of feasible points

M = {x ∈ Rn| g(x, y) ≤ 0 for all y ∈ Y (x)} (1)

and a set-valued mapping Y : Rn ⇒ Rm describing the index set of inequality constraints. The defining functions f and g are assumed to be real-valued and at least continuous on their respective domains. Standard assumptions on the set-valued mapping Y are somewhat technical and will be stated in Section 3 below. Clearly, the main numerical challenge in semi-infinite optimization is to find ways to treat the infinitely many constraints.

While in the case of an x-dependent index set Y(x) one speaks of a generalized semi-infinite problem (thus the acronym GSIP), the important subclass formed by problems with a constant index set Y ⊂ Rm is termed standard semi-infinite optimization (SIP).
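As a simple illustration (our own, not taken from the cited literature): the constraint x₂ − y² ≤ 0 for all y ∈ [x₁, x₁ + 1] on x ∈ R² is a generalized semi-infinite constraint, since its index set moves with the decision variable, whereas the same inequality with the fixed index set Y = [0, 1] is a standard semi-infinite constraint.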

As basic references we mention [34] for an introduction to semi-infinite optimization, [36, 74] for numerical methods in SIP, and the monographs [23] for linear semi-infinite optimization as well as [70] for algorithmic aspects. The monograph [85] contains a detailed study of generalized semi-infinite optimization.

The most recent comprehensive reviews on theory, applications and numerical methods of standard and generalized semi-infinite optimization are [58] and [31], respectively. The reader is referred to these articles for descriptions of the state of the art in semi-infinite optimization around the year 2007. The present paper, on the other hand, will focus on important developments of this very active field during the past five years. As internet resources we recommend the semi-infinite programming bibliography [59] with several hundreds of references, and the NEOS semi-infinite programming directory [22]. Furthermore, since the most recent results on stability theory for linear semi-infinite problems are excellently surveyed in [57], we will not discuss them in the present paper.


This article is structured as follows. In Section 2 we will briefly discuss some classical as well as recent applications of semi-infinite optimization. Section 3 introduces basic theoretical concepts on which both the derivation of optimality conditions and the design of numerical methods rely. Section 4 explains how the bilevel structure of semi-infinite optimization can be employed numerically, where besides a well-known lifting approach leading to mathematical programs with complementarity constraints, we also present recent results on a lifting idea resulting in nondegenerate smooth problems. Section 5 describes a feasible point algorithm for standard SIPs which is based on recent developments in global optimization. Important genericity results in generalized semi-infinite optimization, which were obtained only in the past few years, are reviewed in Section 6, before we close the article with some final remarks in Section 7.

2 Applications

Among the numerous applications of semi-infinite optimization, in this section we focus on design centering and on robust optimization. We emphasize, however, that semi-infinite optimization historically emerged as a smooth reformulation of the intrinsically nonsmooth Chebyshev approximation problem (cf., e.g., [31, 36, 85]). Further applications include the optimal layout of an assembly line ([47, 102]), time minimal control ([51, 54, 102]), disjunctive optimization ([85]) and, more recently, robust portfolio optimization ([13, 105, 106]), the identification of regression models as well as the dynamics of networks in the presence of uncertainty ([55, 103]). We also remark that semi-definite optimization ([100, 109]) can be interpreted as a special case of SIP. This approach is elaborated in [18, 101].

2.1 Design centering

A design centering problem consists in maximizing some measure f(x), for example the volume, of a parametrized body Y(x) while it is inscribed into a container set G(x):

DC : max_{x ∈ Rn} f(x) s.t. Y(x) ⊂ G(x).

Design centering problems have been studied extensively, see for example [25, 38, 68, 69, 87] and the references therein. They are also related to the so-called set containment problem from [61].


In applications the set G(x) often is independent of x and has a complicated structure, while Y(x) possesses a simpler geometry. For example, in the so-called maneuverability problem of a robot from [24] the authors determine lower bounds for the volume of a complicated container set G by inscribing an ellipsoid Y(x) into G whose volume can be calculated. This approach actually gave rise to one of the first formulations of a generalized semi-infinite optimization problem in [35].

An obvious application of design centering is the cutting stock problem. The problem of cutting a gem of maximal volume with prescribed shape features from a raw gem is treated in [68] and, with the bilevel algorithm for semi-infinite optimization from [93] (cf. also Sec. 4.1), in [108]. For a gentle description of this industrial application see [56].

The connection with semi-infinite optimization becomes apparent when we assume that the container set G(x) is described by a functional constraint,

G(x) = {y ∈ Rm| g(x, y) ≤ 0}. (2)

Then the inclusion constraint Y(x) ⊂ G(x) of DC is equivalent to the generalized semi-infinite constraint

g(x, y) ≤ 0 for all y ∈ Y (x),

so that the design centering problem becomes a GSIP. Note that an (easier to treat) standard SIP appears if the body Y is independent of x, while the container G(x) is allowed to vary with x. Many design centering problems can actually be reformulated as standard SIPs if the x-dependent transformations of Y(x) are, for example, translations, rotations, or scalings, whose inverse transformations can as well be imposed on the container set.

We point out that any GSIP with a given constraint function g can be interpreted as a design centering problem by defining G(x) as in (2). Thus, generalized semi-infinite optimization and design centering are equivalent problem classes.

2.2 Robust optimization

Robustness questions arise when an optimization problem is subject to uncertain data. If, for example, an inequality constraint function g(x, y) depends on some uncertain parameter vector y from a so-called uncertainty set Y ⊂ Rm, then the ‘robust’ or ‘pessimistic’ way to deal with this constraint


is to use its worst case reformulation

g(x, y) ≤ 0 for all y ∈ Y,

which is clearly of semi-infinite type. A point x which is feasible for this semi-infinite constraint satisfies g(x, y) ≤ 0, no matter what the actual parameter y ∈ Y turns out to be. This approach is also known as the ‘principle of guaranteed results’ (cf. [21]). When the uncertainty set Y also depends on the decision variable x, we arrive at a generalized semi-infinite constraint. For example, uncertainties concerning small displacements of an aircraft may be modeled as being dependent on its speed. For an example from portfolio analysis see [93].

In [7] it is shown that under special structural assumptions the semi-infinite problem appearing in robust optimization can be reformulated as a semi-definite problem and then be solved with polynomial time algorithms ([5]). Under similarly special assumptions a saddle point approach for robust programs is given in [99]. We will discuss these assumptions in some more detail in the subsequent Section 3. The current state of the art in robust optimization is surveyed in [8].

Again, any GSIP can be interpreted as a robust optimization problem, so that also these two problem classes are equivalent. The reason for rather separate bodies of literature on semi-infinite optimization on the one hand, and robust optimization on the other hand, lies in the fact that solution methods for robust optimization are mainly motivated from a complexity point of view, whereas methods in semi-infinite optimization try to deal with the numerical difficulties which arise if the stronger assumptions of robust optimization are relaxed.

3 The lower level problem

The key to the theoretical as well as algorithmic treatment of semi-infinite optimization problems lies in their bilevel structure. In fact, it is not hard to see that an alternative description of the feasible set in (1) is given by

M = {x ∈ Rn| ϕ(x) ≤ 0} (3)

with the function

ϕ(x) := sup_{y ∈ Y(x)} g(x, y).


The common convention ‘sup ∅ = −∞’ is consistent here, as an empty index set Y(x) corresponds, loosely speaking, to ‘the absence of restrictions’ at x and, hence, to the feasibility of x.

In applications (cf. Sec. 2) often finitely many semi-infinite constraints g_i(x, y) ≤ 0, y ∈ Y_i(x), i ∈ I, describe the feasible set M of GSIP, along with finitely many equality constraints, so that with ϕ_i(x) = sup_{y ∈ Y_i(x)} g_i(x, y) one obtains feasible sets of the form

M = {x ∈ Rn | ϕ_i(x) ≤ 0, i ∈ I, h_j(x) = 0, j ∈ J}.

In order to avoid technicalities, however, in this article we focus on the basic case of a single semi-infinite constraint and refer the interested reader to [85] for more general formulations.

3.1 Topological properties of the feasible set

At this point we introduce the usual assumptions on the set-valued mapping Y : Rn ⇒ Rm, as they can be viewed as sufficient conditions for desirable properties of the function ϕ. In fact, Y is assumed to be

• nontrivial, that is, its graph

gphY = {(x, y) ∈ Rn × Rm| y ∈ Y (x)}

is nonempty,

• outer semi-continuous, that is, gphY is a closed set, and

• locally bounded, that is, for each x̄ ∈ Rn there exists a neighborhood U of x̄ such that ⋃_{x ∈ U} Y(x) is bounded in Rm.

In standard semi-infinite optimization, that is, for a constant set-valued mapping Y(x) ≡ Y, these assumptions are obviously equivalent to Y being a nonempty and compact subset of Rm.

Together with the continuity of g, standard results from parametric optimization (cf., e.g., [3]) imply that ϕ is at least upper semi-continuous on Rn if Y is outer semi-continuous and locally bounded. Since for each x ∈ Rn the set Y(x) is compact, we also have ϕ(x) < +∞ on Rn. Moreover, the nontriviality of Y ensures that ϕ(x) > −∞ holds at least for one x ∈ Rn so that, altogether, ϕ is upper semi-continuous and proper on Rn.


In the standard semi-infinite case, one can even conclude that ϕ is a continuous and real-valued function for a nonempty and compact index set Y. In view of (3), the feasible set M is, thus, closed. On the other hand, in the generalized semi-infinite case upper semi-continuity of ϕ is not sufficient to guarantee closedness of M (see [85] for examples of nonclosed feasible sets). A natural assumption which ensures lower semi-continuity of ϕ and, thus, closedness of M for GSIP, is inner semi-continuity of Y (cf. [85]). This assumption often turns out to be satisfied in practical applications.

3.2 Computation of the lower level optimal value

The function ϕ actually is the optimal value function of some subordinate optimization problem, the so-called lower level problem

Q(x) : max_{y ∈ Rm} g(x, y) s.t. y ∈ Y(x). (4)

In contrast to the upper level problem, which consists in minimizing f over M, in the lower level problem x plays the role of an n-dimensional parameter, and y is the decision variable. The main computational problem in semi-infinite optimization is that the lower level problem has to be solved to global optimality, even if only a stationary point of the upper level problem is sought.

In fact, by (3) a point x ∈ Rn is feasible for GSIP if and only if the globally maximal value ϕ(x) of Q(x) is nonpositive. Clearly, the calculation of some locally maximal point y_loc(x) of Q(x) with g(x, y_loc(x)) ≤ 0 is not sufficient to ensure feasibility of x, since then still indices y ∈ Y(x) with g(x, y) > 0 might exist, making x infeasible.

The need to numerically calculate a globally maximal value and the need to check infinitely many inequality constraints to ensure feasibility of a given point are equivalent problems. However, most algorithms for (standard) semi-infinite optimization, in particular discretization and exchange methods (cf., e.g., [74]), contain steps which check feasibility of an iterate, at least up to some tolerance. In implementations, such steps are usually performed by evaluating g on a ‘very fine’ discretization of the index set which, from the global optimization point of view, corresponds to the brute force method of evaluating the objective function at a huge number of feasible points. While this approach is at least disputable from a conceptual point of view, it may certainly not be expected to work efficiently for index sets of already moderate dimensions.
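As a minimal sketch of such a brute-force feasibility check (our own illustration with a made-up constraint function, not code from the cited methods), consider:

```python
import numpy as np

def is_feasible_on_grid(g, x, y_lo, y_hi, num=10_001, tol=1e-8):
    """Check g(x, y) <= tol on a fine grid of a one-dimensional index set.

    This only certifies feasibility up to the grid resolution: a violation
    between grid points goes undetected, and for an m-dimensional index
    set the number of grid points grows exponentially in m.
    """
    ys = np.linspace(y_lo, y_hi, num)
    return np.max(g(x, ys)) <= tol

# Made-up example constraint g(x, y) = x*y - 1 with index set Y = [0, 1]:
g = lambda x, y: x * y - 1.0
print(is_feasible_on_grid(g, 0.9, 0.0, 1.0))   # True:  phi(0.9) = -0.1
print(is_feasible_on_grid(g, 1.1, 0.0, 1.0))   # False: phi(1.1) = +0.1
```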


An obvious remedy for this situation is to make assumptions under which it is easy to solve the lower level problem to global optimality, for example convexity. More precisely, if for any x ∈ Rn the set Y(x) is convex and the function g(x, ·) is concave on Y(x), then the globally maximal value ϕ(x) of the convex problem Q(x) can be calculated efficiently (at least in theory). This seemingly obvious assumption has appeared only recently in the literature on semi-infinite optimization (cf., e.g., [93]), since it is violated in many classical applications, like Chebyshev approximation. In the subsequent Sections 4 and 5 we will see, however, that not only interesting applications exist, but that also nonconvex lower level problems can be treated in this way by an approximation procedure.

From a computational point of view, it is of course desirable to have a functional description of the index set at hand, for example

Y (x) = {y ∈ Rm| v(x, y) ≤ 0}

with some at least continuous function v with vector values in Rs. Again, we omit additional equality constraints for the ease of presentation. Note that then outer semi-continuity of Y is automatic, and certain constraint qualifications imply the inner semi-continuity of Y (cf. [85]). Moreover, Y(x) is convex if each component of v(x, ·) is convex on Rm.

In this framework, the main idea of robust optimization is to strengthen the convexity assumption on Q(x) further, so that ϕ(x) is not only efficiently computable, but also the resulting inequality constraint ϕ(x) ≤ 0 in the upper level problem may be treated efficiently. For a stylized illustration of this idea, assume that we have n = m, that g(x, y) = xᵀ(y − a) with a ∈ Rn is a bilinear function, and that Y = {y ∈ Rm | ‖y‖₂ ≤ 1} is the unit ball. Then a closed form of the lower level optimal value function, ϕ(x) = ‖x‖₂ − aᵀx, is available, and ϕ(x) ≤ 0 becomes the second-order cone constraint ‖x‖₂ ≤ aᵀx. Eventually, via reformulation as a semi-definite constraint and assuming a linear objective function f, polynomial time algorithms are available to solve the upper level problem ([7, 8]).
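A quick numerical sanity check of this closed form (our own sketch; the sphere sampling merely approximates the lower level maximization and is no substitute for the exact second-order cone reformulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
a, x = rng.normal(size=n), rng.normal(size=n)

# Closed form of the lower level optimal value: phi(x) = ||x||_2 - a^T x.
phi_closed = np.linalg.norm(x) - a @ x

# Approximate max_{||y||_2 <= 1} x^T (y - a) by sampling the unit sphere;
# the exact maximizer is y = x / ||x||_2, so the sampled maximum
# approaches phi_closed from below.
ys = rng.normal(size=(100_000, n))
ys /= np.linalg.norm(ys, axis=1, keepdims=True)
phi_sampled = np.max(ys @ x) - a @ x

print(phi_closed, phi_sampled)   # nearly equal
```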

3.3 Optimality conditions

While smoothness of ϕ may not be expected even for smooth functions g and v, numerous results about properties of optimal value functions from parametric optimization ([3, 11, 76]) can be employed to prove first and second order optimality conditions (cf., e.g., [41, 77, 78, 84, 91, 85, 110]). A recent


review on optimality conditions and duality in semi-infinite optimization is presented in [81].

Optimality conditions for standard and generalized semi-infinite optimization problems with nonsmooth data functions f, g, and v have recently been derived in several articles. In particular, using Clarke calculus, first order necessary and sufficient conditions along with different constraint qualifications are studied in [48, 49, 50], while [63] develops first order necessary and sufficient conditions along with duality results by Mordukhovich's limiting subdifferentials. The articles [66, 111] show that Mordukhovich calculus still yields first order conditions under very weak assumptions.

4 Bilevel reformulations

While discretization methods can be formulated at least conceptually even for GSIPs ([97, 98]), implementation issues (like the need to compute lower level globally optimal points and, at least in GSIP, the need to control x-dependent discretization points of Y(x)) motivate the design of alternative methods. The key idea behind bilevel approaches in semi-infinite optimization is the reformulation of GSIP as a so-called Stackelberg game. In fact, in [92] the equivalence of GSIP with the problem

SG : min_{x,y} f(x) s.t. g(x, y) ≤ 0, y is a solution of Q(x)

is shown, as long as Y(x) is nonempty for all x ∈ Rn. Note that the former index variable y is treated as an additional decision variable in SG, which makes this reformulation a lifting approach. Since a part of the decision variables is constrained to solve an optimization problem depending on the other decision variables, this problem has the structure of a Stackelberg game ([4, 14]). One may wonder why it should be convenient to formulate the problem SG, in which even an optimal point of Q(x) has to be determined, while in GSIP only the optimal value is needed. The reason is that, under additional assumptions, y solves Q(x) if and only if more tractable conditions hold.

4.1 The MPCC reformulation

The main such assumption is convexity of the optimization problem Q(x) for each x ∈ Rn, as mentioned before in Section 3.2. If, in addition, a constraint


qualification like Slater's condition holds in the feasible set Y(x) of Q(x) for each x, and the involved functions are at least continuously differentiable, then the optimal points y of Q(x) can be characterized by solutions of the corresponding Karush-Kuhn-Tucker (KKT) systems. To formulate the KKT systems, let

L(x, y, γ) = g(x, y) − γᵀ v(x, y)

denote the Lagrangian of Q(x) with multiplier vector γ ∈ Rs, and let ∇_y stand for the partial gradient with respect to the variable vector y.

Then SG is equivalent ([93, 85]) to the mathematical program with complementarity constraints

MPCC : min_{x,y,γ} f(x) s.t. g(x, y) ≤ 0,
       ∇_y L(x, y, γ) = 0,
       0 ≤ −v(x, y) ⊥ γ ≥ 0.

Note that in MPCC also the multiplier vector γ serves as a decision variable, so that GSIP is lifted twice. For introductions to MPCC and the larger class of mathematical programs with equilibrium constraints (MPEC) see [52, 60]. They turn out to be numerically challenging, since the so-called Mangasarian-Fromovitz constraint qualification (MFCQ) is violated everywhere in their feasible set ([80]).

A first numerical approach to the MPCC reformulation of GSIP was given in [93, 85] by applying the smoothing procedure for MPCC from [17]. In fact, each scalar complementarity constraint 0 ≤ −v_ℓ(x, y) ⊥ γ_ℓ ≥ 0, ℓ = 1, . . . , s, is first equivalently replaced by the equation ψ(−v_ℓ(x, y), γ_ℓ) = 0 with a complementarity function ψ like the natural residual function ψ_NR(a, b) = min(a, b) or the Fischer-Burmeister function ψ_FB(a, b) = a + b − √(a² + b²). The nonsmooth function ψ is then equipped with a smoothing parameter τ > 0, for example

ψ_NR^τ(a, b) = (1/2) ( a + b − √((a − b)² + 4τ²) )    or    ψ_FB^τ(a, b) = a + b − √(a² + b² + 2τ²),

so that ψ_τ is smooth and ψ_0 coincides with ψ. This gives rise to the family of smoothed problems

MPCC_τ : min_{x,y,γ} f(x) s.t. g(x, y) ≤ 0,
         ∇_y L(x, y, γ) = 0,
         ψ_τ(−v(x, y), γ) = 0


with τ > 0, where ψ_τ is extended to vector arguments componentwise. Under mild assumptions, in [93, 85] it is shown that MPCC_τ is numerically tractable, and that stationary points of MPCC_τ tend to a stationary point of GSIP for τ → 0. This approach was successfully applied to the industrial problem of gemstone cutting in [108] (cf. also [56]), which can be formulated as a design centering problem with convex lower level problems.
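To make the approach concrete, here is a self-contained toy instance (our own construction, solved with an off-the-shelf SciPy solver rather than the specialized methods of [93, 85]): the standard SIP min (x − 2)² s.t. xy − 1 ≤ 0 for all y ∈ [0, 1] has the feasible set M = {x ≤ 1}, lower level constraints v₁(y) = −y ≤ 0 and v₂(y) = y − 1 ≤ 0, and lower level stationarity x + γ₁ − γ₂ = 0.

```python
import numpy as np
from scipy.optimize import minimize

def fb_tau(a, b, tau):
    """Smoothed Fischer-Burmeister function psi_FB^tau(a, b)."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * tau**2)

tau = 1e-3
constraints = [
    # upper level constraint g(x, y) = x*y - 1 <= 0
    {"type": "ineq", "fun": lambda z: -(z[0] * z[1] - 1.0)},
    # lower level stationarity: d/dy L = x + gamma1 - gamma2 = 0
    {"type": "eq", "fun": lambda z: z[0] + z[2] - z[3]},
    # smoothed complementarity for -v1 = y >= 0 and -v2 = 1 - y >= 0
    {"type": "eq", "fun": lambda z: fb_tau(z[1], z[2], tau)},
    {"type": "eq", "fun": lambda z: fb_tau(1.0 - z[1], z[3], tau)},
]

# decision vector z = (x, y, gamma1, gamma2); a suitable start helps SLSQP
res = minimize(lambda z: (z[0] - 2.0)**2, x0=np.array([0.5, 0.5, 0.1, 0.1]),
               constraints=constraints, method="SLSQP")
print(res.x)   # roughly (1, 1, 0, 1); x slightly exceeds 1 for tau > 0
```

Note that the computed x lies slightly above 1, i.e., it is marginally infeasible for the SIP; this is exactly the outer approximation effect discussed next.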

While MPCC is still an equivalent formulation of GSIP, the smoothed problem MPCC_τ is only an approximation. This leads to the question how the ‘x-part’ of the feasible set of MPCC_τ, that is, the orthogonal projection M_τ of this set to Rn, is related to the original feasible set M of GSIP. In [85] it is shown that M_τ is an outer approximation of M for τ > 0, so that optimal points of MPCC_τ must be expected to be infeasible for GSIP. This is unfortunate, since infeasibility of iterates is also a major drawback of more classical numerical approaches in semi-infinite optimization, like discretization and exchange methods.

However, in [96] it could recently be shown that a simple modification of MPCC_τ leads to inner approximations of M and, thus, to feasible iterates. In fact, an error analysis for the approximation of the lower level optimal value proves that the orthogonal projection M̃_τ of the feasible set of the modified problem

MPCC̃_τ : min_{x,y,γ} f(x) s.t. g(x, y) + sτ² ≤ 0,
          ∇_y L(x, y, γ) = 0,
          ψ_τ(−v(x, y), γ) = 0

to Rn is contained in M (where s denotes the number of lower level inequality constraints). The numerical tractability and the computational cost are identical to those of the formulation MPCC_τ, so that it is strongly recommended to use the modified reformulation MPCC̃_τ instead of MPCC_τ. Moreover, a combination of both approaches leads to ‘sandwiching’ procedures for the feasible set of GSIP (cf. [96]).
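In the toy example above (with s = 2 lower level inequality constraints), this modification would amount to replacing the first constraint of the sketch by the following, after which the computed x stays slightly below 1 and, thus, inside M:

```python
# shifted upper level constraint g(x, y) + s*tau^2 <= 0 with s = 2
{"type": "ineq", "fun": lambda z: -(z[0] * z[1] - 1.0 + 2.0 * tau**2)}
```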

Numerical approaches to the MPCC reformulation of GSIP other than smoothing are currently under investigation, among them the lifting approach for MPCCs from [88] (which amounts to lifting GSIP a third time).

4.2 The reformulation by lower level Wolfe duality

An alternative approach to treat the Stackelberg game reformulation SG of GSIP, inspired by approaches from robust optimization (e.g., [6]), is presented


in the recent article [15]. This reformulation can do without the numerically demanding complementarity conditions of MPCC, which comes at the price of slightly stronger assumptions, namely convexity of the functions −g(x, ·), v_ℓ(x, ·), ℓ = 1, . . . , s, on all of Rm for each x ∈ Rn.

Under this assumption, for the Wolfe dual problem of Q(x)

D(x) : min_{y,γ} L(x, y, γ) s.t. ∇_y L(x, y, γ) = 0, γ ≥ 0,

its feasible set Y_D(x), and its optimal value function

ϕ_D(x) = inf_{(y,γ) ∈ Y_D(x)} L(x, y, γ),

it is well-known that the existence of a KKT point of Q(x) implies solvability of both Q(x) and D(x) as well as strong duality. Thus, if Q(x) possesses a KKT point for each x, then we have

M = {x ∈ Rn | min_{(y,γ) ∈ Y_D(x)} L(x, y, γ) ≤ 0}.

This motivates the introduction of the lifted Wolfe problem

LWP : min_{x,y,γ} f(x) s.t. L(x, y, γ) ≤ 0, ∇_y L(x, y, γ) = 0, γ ≥ 0.

As f does not depend on the variables y and γ, the minimizers of GSIP coincide with the x-components of the minimizers of LWP whenever further assumptions guarantee that Q(x) possesses a Karush-Kuhn-Tucker point for each x ∈ Rn (see [15] for details). The article [15] mainly deals with the fact that under mild assumptions the problem LWP is not only smooth, but also numerically tractable, that is, not intrinsically degenerate.
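For the toy SIP from Section 4.1 the lifted Wolfe problem reads as follows (again our own sketch with a general-purpose solver); note that no complementarity constraint appears, and the projection of the feasible set onto x is exactly M = {x ≤ 1}:

```python
import numpy as np
from scipy.optimize import minimize

# z = (x, y, gamma1, gamma2); lower level Lagrangian of Q(x) for
# g(x, y) = x*y - 1, v1(y) = -y, v2(y) = y - 1:
def lagrangian(z):
    x, y, g1, g2 = z
    return x * y - 1.0 + g1 * y - g2 * (y - 1.0)

constraints = [
    {"type": "ineq", "fun": lambda z: -lagrangian(z)},      # L(x,y,gamma) <= 0
    {"type": "eq",   "fun": lambda z: z[0] + z[2] - z[3]},  # grad_y L = 0
]
bounds = [(None, None), (None, None), (0.0, None), (0.0, None)]  # gamma >= 0

res = minimize(lambda z: (z[0] - 2.0)**2, x0=np.array([0.0, 0.5, 0.5, 0.5]),
               bounds=bounds, constraints=constraints, method="SLSQP")
print(res.x[0])   # approximately 1, the exact SIP minimizer
```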

A drawback of the LWP reformulation against the MPCC reformulation of GSIP is the fact that the lower level constraint v(x, y) ≤ 0 is not stated explicitly but follows implicitly by duality arguments. For lower level problems in which g(x, ·) is only concave on Y(x), but not on all of Rm, this can actually lead to lower level infeasibility, as examples in [15] show. In particular, this problem arises routinely for the lower level problems constructed in the adaptive convexification procedure which we will discuss below in Section 5. As a remedy, [15] suggests to explicitly add the constraint v(x, y) ≤ 0 to the constraints of LWP. It is proven and illustrated that the resulting lifted problem still is numerically tractable.


4.3 The formulation as a semi-smooth equation

A related approach, which is not based on the Stackelberg game reformulation SG of GSIP, was introduced in [73] and further developed in [94, 95]. Here, the focus is on calculating a stationary point of GSIP by reformulating an appropriate first order necessary optimality condition as a semi-smooth system of equations.

We sketch the main ideas for the standard semi-infinite case, while [94, 95] also cover GSIP. To state the first order optimality condition, we first introduce the set of active indices

Y0(x) = {y ∈ Y | g(x, y) = 0}

of a feasible point x ∈ M. Note that Y0(x) coincides with the set of globally maximal points of Q(x) in the case ϕ(x) = 0, and that only the latter case is interesting for local optimality conditions, as ϕ(x) < 0 forces x to lie in the topological interior of M.

The Extended Mangasarian-Fromovitz Constraint Qualification (EMFCQ) ([36, 45, 86]) holds at x if there is a vector d ∈ Rn with

〈∇xg(x, y), d〉 < 0 for all y ∈ Y0(x),

where 〈·, ·〉 denotes the standard inner product. A combination of Fritz John's optimality condition for SIP from the seminal paper [39] with EMFCQ immediately yields the KKT type result that at any local minimizer x of SIP at which EMFCQ is satisfied, there exist a p ∈ {0, . . . , n}, multipliers λ_i ≥ 0 and active indices y_i ∈ Y0(x), i = 1, . . . , p, such that

∇f(x) + Σ_{i=1}^{p} λ_i ∇_x g(x, y_i) = 0

holds.

If, again, the lower level problem Q(x) is assumed to be convex with Slater's condition holding in its feasible set for each x ∈ Rn, then the condition y_i ∈ Y0(x) may equivalently be replaced by the lower level KKT conditions. Altogether, the first order optimality condition for SIP can then be replaced by the combination of upper and lower level KKT systems, resulting in the


system of equations

T(x, λ, y_1, . . . , y_p, γ_1, . . . , γ_p) = ( ∇f(x) + Σ_{i=1}^{p} λ_i ∇_x g(x, y_i),
ψ(−g(x, y_1), λ_1), . . . , ψ(−g(x, y_p), λ_p),
∇_y L(x, y_1, γ_1), ψ(−v(y_1), γ_1), . . . , ∇_y L(x, y_p, γ_p), ψ(−v(y_p), γ_p) ) = 0, (5)

where ψ is again a complementarity function (cf. Sec. 4.1).

To deal with the intrinsic nonsmoothness of complementarity functions like ψ_NR and ψ_FB, one may apply the so-called semi-smooth Newton method from [72] to (5). In fact, for a locally Lipschitzian function F : Rn → Rm let ∂F(x) denote Clarke's generalized Jacobian of F at x ([12]). F is called strongly semismooth at x if F is directionally differentiable at x and if for all V ∈ ∂F(x + d) and d → 0 we have

V d − F′(x; d) = O(‖d‖²).

If, in the definition of T, we use the special NCP functions ψ_NR or ψ_FB, and if the data functions are at least twice continuously differentiable, then a result from [71] guarantees that T is strongly semismooth.

In [72] it is shown that the semi-smooth Newton method for solving the equation F(x) = 0 with F : Rn → Rn, defined by

x^{k+1} = x^k − (W^k)^{−1} F(x^k)

and W^k ∈ ∂F(x^k), is well-defined and q-quadratically convergent in a neighborhood of a solution point x for strongly semi-smooth F, if F is additionally CD-regular at x, that is, if all matrices W ∈ ∂F(x) are nonsingular.
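A minimal illustration of the iteration (our own sketch, not the full system (5)): solve F(z) = 0 for F(z) = (min(z₁, z₂), z₁ − z₂ − 1), which encodes the complementarity condition 0 ≤ z₁ ⊥ z₂ ≥ 0 together with the linear relation z₂ = z₁ − 1 and has the root z* = (1, 0).

```python
import numpy as np

def F(z):
    return np.array([min(z[0], z[1]), z[0] - z[1] - 1.0])

def jacobian_element(z):
    """One element W of Clarke's generalized Jacobian of F at z."""
    row_min = np.array([1.0, 0.0]) if z[0] <= z[1] else np.array([0.0, 1.0])
    return np.vstack([row_min, np.array([1.0, -1.0])])

z = np.array([5.0, 3.0])
for _ in range(20):
    Fz = F(z)
    if np.linalg.norm(Fz) < 1e-12:
        break
    z = z - np.linalg.solve(jacobian_element(z), Fz)   # semi-smooth Newton step
print(z)   # (1, 0); here F is piecewise linear, so convergence is even finite
```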

Thus, the semi-smooth Newton method can be expected to work efficiently for the solution of (5) if CD-regularity of T is guaranteed in a solution point. While [73] derives somewhat technical conditions for CD-regularity involving strict complementarity in the upper and lower level problems (implying that the semi-smooth Newton method collapses to the usual Newton method), in [94] it is shown that natural conditions imply CD-regularity if the so-called Reduction Ansatz (cf. Sec. 6) holds in the lower level, and in [95] also the case of violated strict complementarity in the lower level problems is treated successfully.


5 Adaptive convexification

The adaptive convexification algorithm is a method to solve standard semi-infinite optimization problems via a sequence of feasible iterates, even if the lower level problems are nonconvex. Its main idea ([20]) is to tessellate the index set into finitely many smaller index sets (as opposed to the approach of discretization methods, which choose finitely many elements of the index set), and to convexify the resulting subproblems. Ideas of spatial branching [62] are then used to efficiently refine the tessellation.

Alternative feasible point methods from [9, 10, 65] apply spatial branching even simultaneously to the decision and index variables, so that a branch-and-bound approach for the global solution of SIP generates convergent sequences of lower and upper bounds for its globally optimal value. While these approaches may be time consuming, in the following we explain the main ideas of the approach from [20], which only takes care of global optimality in the lower level, but not necessarily in the upper level problem. It is, thus, a local numerical method for the upper level problem which guarantees semi-infinite feasibility by global optimization ideas for the lower level problem.

As in [20], let us first focus on a standard SIP with a one-dimensional index set of the form Y = [y̲, ȳ], and with twice continuously differentiable data functions f and g. Furthermore, the feasible set M is assumed to be contained in the n-dimensional box X = [x̲, x̄] ⊂ Rn with x̲ < x̄.

For any x ∈ X, the lower level problem Q(x) then consists in maximizing g(x, ·) under the constraints v1(y) = y̲ − y ≤ 0 and v2(y) = y − ȳ ≤ 0, which gives rise to the lower level Lagrangian

L(x, y, γ̲, γ̄) = g(x, y) − γ̲ (y̲ − y) − γ̄ (y − ȳ).

If we assume for a moment that Q(x) is a convex problem for all x ∈ X, then the MPCC reformulation from Section 4.1 yields that SIP is equivalent to the problem

MPCC : min_{x,y,γ̲,γ̄} f(x) s.t. x ∈ X, g(x, y) ≤ 0,
       ∇_y g(x, y) + γ̲ − γ̄ = 0,
       0 ≤ y − y̲ ⊥ γ̲ ≥ 0,
       0 ≤ ȳ − y ⊥ γ̄ ≥ 0.

Next, to treat nonconvex lower level problems, [20] uses ideas of the αBB method from [1, 2], which is a spatial branching method for nonconvex global


optimization. In fact, since the lower level feasible set Y = [y̲, ȳ] is certainly convex, nonconvexity of Q(x) for some x ∈ X can only be introduced by nonconcavity of g(x, ·). The main idea is to replace g(x, ·) by a concave overestimator.

If no additional information is available (cf. [19]), αBB constructs concave overestimators by adding a quadratic relaxation function

ψ(y; α, y̲, ȳ) = (α/2) (y − y̲)(ȳ − y) (6)

to the original function g(x, ·), which results in

ḡ(x, y; α, y̲, ȳ) = g(x, y) + ψ(y; α, y̲, ȳ).

In the sequel we will temporarily suppress the dependence of ḡ on α, y̲, ȳ. For α ≥ 0 the function ḡ(x, ·) clearly is an overestimator of g(x, ·) on [y̲, ȳ], and it coincides with g(x, ·) at the boundary points y̲, ȳ of the index set. Moreover, ḡ is twice continuously differentiable with second derivative

D²_y ḡ(x, y) = D²_y g(x, y) − α

on [y̲, ȳ]. Consequently ḡ(x, ·) is concave on [y̲, ȳ] for

α ≥ max_{y ∈ [y̲, ȳ]} D²_y g(x, y)

(cf. also [1, 2]), and ḡ(x, ·) even is concave for any choice x ∈ X, if α satisfies

α ≥ max_{(x,y) ∈ X × [y̲, ȳ]} D²_y g(x, y). (7)

Unfortunately, the computation of α thus involves a global optimization problem itself. Note, however, that α may be any upper bound for the right-hand side in (7). Such upper bounds can be provided by interval methods (see, e.g., [19, 32, 67]) under natural assumptions on the function g.
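A minimal sketch of the construction (our own example; the crude sampling bound for α stands in for the rigorous interval arithmetic bound):

```python
import numpy as np

# Nonconcave lower level objective (made-up example): g(y) = sin(3y) on [0, 2].
g = lambda y: np.sin(3.0 * y)
d2g = lambda y: -9.0 * np.sin(3.0 * y)
y_lo, y_hi = 0.0, 2.0

# alpha must dominate max D^2_y g over the index set; interval methods would
# give a rigorous bound, here we merely sample (for illustration only).
ys = np.linspace(y_lo, y_hi, 2001)
alpha = max(0.0, d2g(ys).max())                 # roughly 9

# Concave overestimator g_bar(y) = g(y) + (alpha/2)(y - y_lo)(y_hi - y).
g_bar = lambda y: g(y) + 0.5 * alpha * (y - y_lo) * (y_hi - y)

assert np.all(g_bar(ys) >= g(ys))               # overestimation on [y_lo, y_hi]
assert np.all(np.diff(g_bar(ys), 2) <= 1e-9)    # discrete concavity check
print(alpha, g(ys).max(), g_bar(ys).max())      # the maximum is overestimated
```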

Combining these facts shows that for

α ≥ max ( 0, max_{(x,y) ∈ X × [y̲, ȳ]} D²_y g(x, y) )

and arbitrary x ∈ X, the function ḡ(x, ·) is a concave overestimator of g(x, ·) on [y̲, ȳ]. By the overestimation property, any feasible point (if it exists) of the approximating semi-infinite problem

min_{x ∈ X} f(x) s.t. ḡ(x, y; α, y̲, ȳ) ≤ 0 for all y ∈ [y̲, ȳ]


is also feasible for the original problem. On the other hand, the concavity of the overestimator entails that the lower level problem of the approximating problem is convex and can be solved, for example, by the MPCC reformulation.

A straightforward generalization of this idea relies on the obvious fact that the single semi-infinite constraint

g(x, y) ≤ 0 for all y ∈ Y

is equivalent to the finitely many semi-infinite constraints

g(x, y) ≤ 0 for all y ∈ Y^k, k ∈ K,

if the sets Y^k, k ∈ K, form a tessellation of Y, that is, for N ∈ N we choose y̲ = η_0 < η_1 < . . . < η_{N−1} < η_N = ȳ and put K = {1, . . . , N} as well as

Y^k = [η_{k−1}, η_k], k ∈ K.

Given such a tessellation, one can construct concave overestimators

ḡ^k(x, y; α, η_{k−1}, η_k) = g(x, y) + ψ(y; α, η_{k−1}, η_k)

for each of these finitely many semi-infinite constraints, and solve the corresponding semi-infinite problem with finitely many convex lower level problems by the MPCC formulation. Again, any element (if it exists) of the approximating feasible set

M_αBB(E, α) = {x ∈ Rn | ḡ^k(x, y) ≤ 0 for all y ∈ Y^k, k ∈ K}

is also feasible for the original problem, where E = {η_k | k ∈ K} denotes the set of subdivision points defining the tessellation of Y. This means that any solution concept for

SIP_αBB(E, α) : min_{x ∈ X} f(x) s.t. x ∈ M_αBB(E, α),

be it global solutions, local solutions or stationary points, will at least generate a feasible point of SIP.

Given a consistent approximating problem, the adaptive convexification algorithm computes a stationary point x of SIP_αBB(E, α) by the MPCC reformulation, and terminates if x is also stationary for SIP within given tolerances (where a suitable stationarity condition from Sec. 3.3 is employed). If x is not stationary, the algorithm refines the tessellation in the spirit of exchange


methods ([34, 74]) by adding the active indices of the solution point x to the set of subdivision points E, constructs a refined approximating problem, and iterates this procedure.

Error bounds for the concave overestimators ([20]) indicate that finer tessellations of Y should lead to better approximations of the original feasible set M. In fact, the resulting algorithm in [20] is well-defined, convergent and finitely terminating. Numerical examples for the performance of the method from Chebyshev approximation and design centering as well as an approach to calculate a consistent initial approximation of the feasible set are given in [20].
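The following self-contained sketch mimics this algorithmic loop on a toy SIP of our own (min −x s.t. x + sin(3y) − 1 ≤ 0 for all y ∈ [0, 2], with solution x* = 0); since the overestimators are concave on each cell, the lower level maximizations are solved by a local bounded search, which here plays the role of the MPCC solve, and the stopping test uses the known value max_y sin(3y) = 1 of this toy problem instead of a proper stationarity check:

```python
import numpy as np
from scipy.optimize import minimize_scalar

y_lo, y_hi, alpha = 0.0, 2.0, 9.0    # alpha >= max D^2_y sin(3y) = 9

def cell_max(eta0, eta1):
    """Global max of the concave overestimator of sin(3y) on one cell."""
    g_bar = lambda y: np.sin(3.0 * y) + 0.5 * alpha * (y - eta0) * (eta1 - y)
    res = minimize_scalar(lambda y: -g_bar(y), bounds=(eta0, eta1),
                          method="bounded")
    return -res.fun, res.x           # cell optimal value and its active index

E = [y_lo, y_hi]                      # subdivision points of the tessellation
for _ in range(50):
    pts = sorted(set(E))
    worst_val, active_y = max(cell_max(a, b) for a, b in zip(pts[:-1], pts[1:]))
    x = 1.0 - worst_val               # best x feasible for SIP_alphaBB(E, alpha)
    if worst_val - 1.0 < 1e-5:        # overestimation gap small enough
        break
    E.append(active_y)                # refine the tessellation at the active index
print(x)   # approaches the SIP solution x* = 0, always from the feasible side
```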

A generalization of the adaptive convexification algorithm for higher dimensional index sets was recently presented in [89]. Here, the index set Y is not necessarily assumed to be box-shaped, which gives rise to further approximation issues. Again, the resulting algorithm is well-defined, convergent and finitely terminating. An implementation is freely available at [90].

6 Genericity results

Throughout optimization theory and numerics (and beyond), assumptions are made to derive optimality conditions, ensure convergence of algorithms, etc. Usually it is cumbersome or even impossible to check these assumptions, for example if they are requested to hold in a solution point which has yet to be determined. Then a fundamental question is whether such assumptions are mild, so that an urgent need to check them a priori becomes obsolete but, instead, they may be expected to hold ‘typically’.

One way to translate ‘mild’ and ‘typically’ into mathematical terms is the concept of genericity. For optimization problems, it is formulated by topologizing the linear space of their data functions by a strong (or Whitney) C^k-topology ([37, 40]) with k ∈ N₀, denoted by C^k_s. The latter is generated by allowing perturbations of functions and their derivatives up to k-th order, where the perturbations are controlled by continuous positive functions. The space of C^k functions endowed with the C^k_s-topology turns out to be a Baire space, that is, every countable intersection of open and dense sets is dense. A set is called C^k_s-generic if it contains such a countable intersection of C^k_s-open and dense sets. Clearly, generic sets in a Baire space are dense as well. A property is called generic if it holds on a generic set.

To give an example from [40], in smooth finite optimization the linear independence constraint qualification (LICQ) generically holds everywhere in the feasible set. More explicitly, this means that the data functions defining finite optimization problems in which the gradients of active constraints are linearly independent at each point of the feasible set form a generic set in the space of all data functions. In particular, the frequent assumption that some unknown optimal point satisfies LICQ is mild in this sense. Moreover, since under LICQ a local C¹ change of coordinates shows that the feasible set looks like the Cartesian product of a linear space and finitely many one-dimensional halfspaces, the generic local structure of the feasible set is clear.

In standard semi-infinite optimization, however, from [46] an example is known where the feasible set contains the upper part of the so-called swallow tail singularity. This example is stable under perturbations, so that even generically it is not possible to describe the whole feasible set of a semi-infinite problem locally by finitely many smooth inequality constraints satisfying LICQ.

On the other hand, one may ask if such a ‘nice’ local description of M is at least possible in optimal points of SIP, since this would be sufficiently helpful for the formulation of optimality conditions and for convergence results of algorithms. For standard semi-infinite problems it was established already in [112, 82] that generically the so-called Reduction Ansatz holds at all locally minimal points, and even on the larger set of all generalized critical points (which contains the Fritz John and Karush-Kuhn-Tucker points of SIP). The Reduction Ansatz was first introduced in [33, 107] and constitutes natural regularity conditions at each active index (i.e., at each optimal point of the lower level problem), namely LICQ, strict complementary slackness (SCS), and a standard second order condition (SOC). Under these assumptions, M can locally be described by finitely many smooth functions, and then LICQ, SCS as well as SOC even hold generically in this local description of the upper level problem.

A long-standing open question was whether such a result also holds for generalized SIPs. Partial positive answers were given already in [83] for the case of ‘sufficiently many’ active indices, and in [92] for the case of affine data functions. Only in the recent series of articles [27, 28, 29] were we able to show that a certain modification of the Reduction Ansatz, the Symmetric Reduction Ansatz, generically holds for GSIPs at each locally minimal point, and that under this set of regularity assumptions the closure of M can be described by finitely many smooth functions. While this genericity result only holds at locally minimal points, the generic structure of the closure of M at Karush-Kuhn-Tucker points was shown in [42] to be that of a disjunctive optimization problem. Based on these results, a Morse theory for GSIP was developed in [43].

While we will not review the Symmetric Reduction Ansatz here, at least some explanation of the symmetry observation for GSIP is appropriate, as it constitutes a fundamental and, at the same time, fruitful difference between standard and generalized semi-infinite optimization. We will illustrate the symmetry effect by a description of the closure of the (not necessarily closed, cf. Sec. 3.1) feasible set M of GSIP.

In fact, consider the ‘index set without boundary’

Y^<(x) = {y ∈ Rm | v_ℓ(x, y) < 0, 1 ≤ ℓ ≤ s}

and define the set

M̃ = {x ∈ Rn | g(x, y) ≤ 0 for all y ∈ Y^<(x)}.

In view of Y^<(x) ⊂ Y(x) we clearly have M ⊂ M̃. As (under mild assumptions) Y^<(x) is ‘only slightly smaller’ than Y(x), the set M̃ may be expected to be ‘only slightly larger’ than M. In [27] it was first shown that generically the set M̃ actually coincides with the topological closure M̄ of M. While the corresponding proof is rather technical, recently this result was significantly improved in [30] by formulating the symmetric Mangasarian-Fromovitz constraint qualification (Sym-MFCQ) for GSIP. It was shown that, generically, Sym-MFCQ holds everywhere in M̃ and that, under this natural generic condition, M̃ coincides with M̄.

To see the symmetry aspect, note that an alternative description of M̃ is

M̃ = {x ∈ Rn | σ(x, y) ≤ 0 for all y ∈ Rm}, (8)

with the continuous function

σ(x, y) = min {g(x, y), −v_1(x, y), . . . , −v_s(x, y)}.
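As a small check of this symmetric description (our own toy instance with m = s = 1): take g(x, y) = y − 1 and v(x, y) = y − x, so that Y^<(x) = {y | y < x} and M̃ = {x | x ≤ 1}.

```python
import numpy as np

def sigma(x, y):
    # sigma treats the lower level objective g and the constraint -v symmetrically
    return np.minimum(y - 1.0, -(y - x))        # min{g(x,y), -v(x,y)}

def in_M_tilde(x, ys=np.linspace(-10.0, 10.0, 100_001)):
    """Grid stand-in for the condition 'sigma(x, y) <= 0 for all y'."""
    return float(np.max(sigma(x, ys))) <= 1e-9

print(in_M_tilde(0.5), in_M_tilde(1.5))   # True, False
```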

Symmetry refers to the fact that, via σ, all data functions g, −v_1, . . . , −v_s contribute in the same way to the definition of M̃, as opposed to the lower level objective function g playing a different role in Q(x) than the lower level constraint functions v_ℓ, ℓ = 1, . . . , s. While this effect was implicitly used already in [84, 87], its full consequences were only understood recently in the above mentioned articles. In fact, coarsely speaking, the Symmetric Reduction Ansatz is a set of nondegeneracy assumptions for active indices of σ in the description (8) of M̃, and Sym-MFCQ is a Mangasarian-Fromovitz type condition for this description (jointly in the variables x and y).

Note, in particular, that symmetry entails the following surprising fact: the set M̃ stays invariant when the lower level objective function g is exchanged with one of the lower level constraint functions −v_ℓ which, in general, leads to a different lower level problem. The difference to the standard SIP case is that this exchange operation would replace an SIP by a GSIP, while in the GSIP case the considered optimization problem stays in the same class.

7 Final remarks

In this survey we tried to explain some major developments in theory and numerics of semi-infinite optimization during the past couple of years. Many other interesting topics could not be treated explicitly, including stability of the feasible set ([26]), the convexity structure of critical value functions ([16]), multiplier rules via augmented Lagrangians ([79]), smoothing of the lower level optimal value function by mollifiers ([44]), optimality conditions in degenerate cases ([53]) and, with respect to numerics, combinations of discretization with interval methods ([64]), and special purpose exchange methods ([104]). All these different contributions clearly indicate, however, that semi-infinite optimization will stay a broad and highly active field of research also in the years to come.

References

[1] C.S. Adjiman, I.P. Androulakis, C.A. Floudas, A global optimization method, αBB, for general twice-differentiable constrained NLPs - I: Theoretical advances, Computers and Chemical Engineering, Vol. 22 (1998), 1137-1158.

[2] C.S. Adjiman, I.P. Androulakis, C.A. Floudas, A global optimization method, αBB, for general twice-differentiable constrained NLPs - II: Implementation and computational results, Computers and Chemical Engineering, Vol. 22 (1998), 1159-1179.

[3] B. Bank, J. Guddat, D. Klatte, B. Kummer, K. Tammer, Nonlinear Parametric Optimization, Birkhäuser, Basel, 1983.


[4] J.F. Bard, Practical Bilevel Optimization, Kluwer, Dordrecht, 1998.

[5] A. Ben-Tal, L. El Ghaoui, A. Nemirovski, Robustness, in: H. Wolkowicz (ed) et al., Handbook of Semidefinite Programming, Kluwer, 2000, 139-162.

[6] A. Ben-Tal, A. Nemirovski, Robust convex optimization, Mathematics of Operations Research, Vol. 23 (1998), 769-805.

[7] A. Ben-Tal, A. Nemirovski, Robust solutions of uncertain linear programs, Operations Research Letters, Vol. 25 (1999), 1-13.

[8] D. Bertsimas, D.B. Brown, C. Caramanis, Theory and applications of robust optimization, SIAM Review, Vol. 53 (2011), 464-501.

[9] B. Bhattacharjee, W.H. Green Jr., P.I. Barton, Interval methods for semi-infinite programs, Computational Optimization and Applications, Vol. 30 (2005), 63-93.

[10] B. Bhattacharjee, P. Lemonidis, W.H. Green, P.I. Barton, Global solution of semi-infinite programs, Mathematical Programming, Vol. 103 (2005), 283-307.

[11] J.F. Bonnans, A. Shapiro, Perturbation Analysis of Optimization Problems, Springer, New York, 2000.

[12] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, 1983.

[13] S. Daum, R. Werner, A novel feasible discretization method for linear semi-infinite programming applied to basket option pricing, Optimization, Vol. 60 (2011), 1379-1398.

[14] S. Dempe, Foundations of Bilevel Programming, Kluwer, 2002.

[15] M. Diehl, B. Houska, O. Stein, P. Steuermann, A lifting method for generalized semi-infinite programs based on lower level Wolfe duality, Optimization Online, 2011.

[16] D. Dorsch, F. Guerra Vázquez, H. Günzel, H.Th. Jongen, J.-J. Rückmann, SIP: critical value functions have finite modulus of non-convexity, Mathematical Programming, to appear.

[17] F. Facchinei, H. Jiang, L. Qi, A smoothing method for mathematical programs with equilibrium constraints, Mathematical Programming, Vol. 85 (1999), 107-134.


[18] U. Faigle, W. Kern, G. Still, Algorithmic Principles of Mathematical Programming, Kluwer, 2002.

[19] C.A. Floudas, Deterministic Global Optimization: Theory, Methods and Applications, Kluwer, 2000.

[20] C.A. Floudas, O. Stein, The adaptive convexification algorithm: a feasible point method for semi-infinite programming, SIAM Journal on Optimization, Vol. 18 (2007), 1187-1208.

[21] Y.B. Germeyer, Einführung in die Theorie des Operations Research, Akademie Verlag, Berlin, 1976.

[22] M.A. Goberna, NEOS Semiinfinite Programming Directory, www.neos-guide.org/NEOS/index.php/Semiinfinite Programming Directory

[23] M.A. Goberna, M.A. López, Linear Semi-infinite Optimization, Wiley, Chichester, 1998.

[24] T.J. Graettinger, B.H. Krogh, The acceleration radius: a global performance measure for robotic manipulators, IEEE Journal of Robotics and Automation, Vol. 4 (1988), 60-69.

[25] P. Gritzmann, V. Klee, On the complexity of some basic problems in computational convexity. I. Containment problems, Discrete Mathematics, Vol. 136 (1994), 129-174.

[26] H. Günzel, H.Th. Jongen, J.-J. Rückmann, On stable feasible sets in generalized semi-infinite programming, SIAM Journal on Optimization, Vol. 19 (2008), 644-654.

[27] H. Günzel, H.Th. Jongen, O. Stein, On the closure of the feasible set in generalized semi-infinite programming, Central European Journal of Operations Research, Vol. 15 (2007), 271-280.

[28] H. Günzel, H.Th. Jongen, O. Stein, Generalized semi-infinite programming: the Symmetric Reduction Ansatz, Optimization Letters, Vol. 2 (2008), 415-424.

[29] H. Günzel, H.Th. Jongen, O. Stein, Generalized semi-infinite programming: on generic local minimizers, Journal of Global Optimization, Vol. 42 (2008), 413-421.


[30] F. Guerra Vázquez, H.Th. Jongen, V. Shikhman, General semi-infinite programming: symmetric Mangasarian-Fromovitz constraint qualification and the closure of the feasible set, SIAM Journal on Optimization, Vol. 20 (2010), 2487-2503.

[31] F. Guerra Vázquez, J.-J. Rückmann, O. Stein, G. Still, Generalized semi-infinite programming: a tutorial, Journal of Computational and Applied Mathematics, Vol. 217 (2008), 394-419.

[32] E. Hansen, Global Optimization Using Interval Analysis, M. Dekker, New York, 1992.

[33] R. Hettich, H.Th. Jongen, Semi-infinite programming: conditions of optimality and applications, in: J. Stoer (ed): Optimization Techniques, Part 2, Lecture Notes in Control and Information Sciences, Vol. 7, Springer, Berlin, 1978, 1-11.

[34] R. Hettich, K.O. Kortanek, Semi-infinite programming: theory, methods, and applications, SIAM Review, Vol. 35 (1993), 380-429.

[35] R. Hettich, G. Still, Semi-infinite programming models in robotics, in: J. Guddat, H.Th. Jongen, B. Kummer, F. Nožička (eds): Parametric Optimization and Related Topics II, Akademie Verlag, Berlin, 1991, 112-118.

[36] R. Hettich, P. Zencke, Numerische Methoden der Approximation und semi-infiniten Optimierung, Teubner, Stuttgart, 1982.

[37] M.W. Hirsch, Differential Topology, Springer, New York, 1976.

[38] R. Horst, H. Tuy, Global Optimization, Springer, Berlin, 1990.

[39] F. John, Extremum problems with inequalities as subsidiary conditions, in: Studies and Essays, R. Courant Anniversary Volume, Interscience, New York, 1948, 187-204.

[40] H.Th. Jongen, P. Jonker, F. Twilt, Nonlinear Optimization in Finite Dimensions, Kluwer, Dordrecht, 2000.

[41] H.Th. Jongen, J.-J. Rückmann, O. Stein, Generalized semi-infinite optimization: a first order optimality condition and examples, Mathematical Programming, Vol. 83 (1998), 145-158.


[42] H.Th. Jongen, V. Shikhman, On generic one-parametric semi-infinite optimization, SIAM Journal on Optimization, Vol. 21 (2011), 193-211.

[43] H.Th. Jongen, V. Shikhman, General semi-infinite programming: critical point theory, Optimization, Vol. 60 (2011), 859-873.

[44] H.Th. Jongen, O. Stein, Smoothing by mollifiers. Part I: Semi-infinite optimization, Journal of Global Optimization, Vol. 41 (2008), 319-334.

[45] H.Th. Jongen, F. Twilt, G.-W. Weber, Semi-infinite optimization: structure and stability of the feasible set, Journal of Optimization Theory and Applications, Vol. 72 (1992), 529-552.

[46] H.Th. Jongen, G. Zwier, On the local structure of the feasible set in semi-infinite optimization, in: Brosowski, Deutsch (eds): International Series of Numerical Mathematics, Vol. 72, Birkhäuser, Basel, 1984, 185-202.

[47] C. Kaiser, W. Krabs, Ein Problem der semi-infiniten Optimierung im Maschinenbau und seine Verallgemeinerung, Working paper, Darmstadt University of Technology, Department of Mathematics, 1986.

[48] N. Kanzi, S. Nobakhtian, Nonsmooth semi-infinite programming problems with mixed constraints, Journal of Mathematical Analysis and Applications, Vol. 351 (2009), 170-181.

[49] N. Kanzi, S. Nobakhtian, Optimality conditions for non-smooth semi-infinite programming, Optimization, Vol. 59 (2010), 717-727.

[50] N. Kanzi, S. Nobakhtian, Necessary optimality conditions for non-smooth generalized semi-infinite programming problems, European Journal of Operational Research, Vol. 205 (2010), 253-261.

[51] A. Kaplan, R. Tichatschke, On a class of terminal variational problems, in: J. Guddat, H.Th. Jongen, F. Nožička, G. Still, F. Twilt (eds): Parametric Optimization and Related Topics IV, Peter Lang, Frankfurt a.M., 1997, 185-199.

[52] M. Kočvara, J. Outrata, J. Zowe, Nonsmooth Approach to Optimization Problems with Equilibrium Constraints: Theory, Applications and Numerical Results, Kluwer, Dordrecht, 1998.


[53] O.I. Kostyukova, T.V. Tchemisova, S.A. Yermalinskaya, Convex semi-infinite programming: implicit optimality criterion based on the concept of immobile indices, Journal of Optimization Theory and Applications, Vol. 145 (2010), 325-342.

[54] W. Krabs, On time-minimal heating or cooling of a ball, in: International Series of Numerical Mathematics, Vol. 81, Birkhäuser, Basel, 1987, 121-131.

[55] E. Kropat, G.W. Weber, Robust regression analysis for gene-environment and eco-finance networks under polyhedral and ellipsoidal uncertainty, Preprint at IAM, METU.

[56] K.-H. Küfer, O. Stein, A. Winterfeld, Semi-infinite optimization meets industry: a deterministic approach to gemstone cutting, SIAM News, Vol. 41 (2008), 1 and 16.

[57] M. López, Stability in linear optimization and related topics. A personal tour, TOP, DOI 10.1007/s11750-011-0213-9, to appear.

[58] M. López, G. Still, Semi-infinite Programming, European Journal of Operational Research, Vol. 180 (2007), 491-518.

[59] M. López, G. Still, References in Semi-infinite Optimization, wwwhome.math.utwente.nl/∼stillgj/sip/lit-sip.pdf

[60] Z. Luo, J. Pang, D. Ralph, Mathematical Programs with Equilibrium Constraints, Cambridge University Press, Cambridge, 1996.

[61] O.L. Mangasarian, Set containment characterization, Journal of Global Optimization, Vol. 24 (2002), 473-480.

[62] G.P. McCormick, Computability of global solutions to factorable nonconvex programs. Part I. Convex underestimating problems, Mathematical Programming, Vol. 10 (1976), 146-175.

[63] S.K. Mishra, M. Jaiswal, H.A. Le Thi, Nonsmooth semi-infinite programming problem using limiting subdifferentials, Journal of Global Optimization, DOI 10.1007/s10898-011-9690-5, to appear.

[64] A. Mitsos, Global optimization of semi-infinite programs via restriction of the right-hand side, Optimization, Vol. 60 (2011), 1291-1308.

[65] A. Mitsos, P. Lemonidis, C.K. Lee, P.I. Barton, Relaxation-based bounds for semi-infinite programs, SIAM Journal on Optimization, Vol. 19 (2007), 77-113.

[66] B. Mordukhovich, N. Tran, Subdifferentials of nonconvex supremum functions and their applications to semi-infinite and infinite programs with Lipschitzian data, Optimization Online, 2011.

[67] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, 1990.

[68] V.H. Nguyen, J.J. Strodiot, Computing a global optimal solution to a design centering problem, Mathematical Programming, Vol. 53 (1992), 111-123.

[69] E. Polak, An implementable algorithm for the optimal design centering, tolerancing and tuning problem, Journal of Optimization Theory and Applications, Vol. 37 (1982), 45-67.

[70] E. Polak, Optimization. Algorithms and Consistent Approximations, Springer, Berlin, 1997.

[71] L. Qi, H. Jiang, Semismooth KKT equations and convergence analysis of Newton and quasi-Newton methods for solving these equations, Mathematics of Operations Research, Vol. 22 (1997), 301-325.

[72] L. Qi, J. Sun, A nonsmooth version of Newton's method, Mathematical Programming, Vol. 58 (1993), 353-367.

[73] L. Qi, S.-Y. Wu, G. Zhou, Semismooth Newton methods for solving semi-infinite problems, Journal of Global Optimization, Vol. 27 (2003), 215-232.

[74] R. Reemtsen, S. Görner, Numerical methods for semi-infinite programming: a survey, in [75], 195-275.

[75] R. Reemtsen, J.-J. Rückmann (eds), Semi-Infinite Programming, Kluwer, Boston, 1998.

[76] R.T. Rockafellar, R.J.B. Wets, Variational Analysis, Springer, Berlin, 1998.

[77] J.-J. Rückmann, A. Shapiro, First-order optimality conditions in generalized semi-infinite programming, Journal of Optimization Theory and Applications, Vol. 101 (1999), 677-691.

[78] J.-J. Rückmann, A. Shapiro, Second-order optimality conditions in generalized semi-infinite programming, Set-Valued Analysis, Vol. 9 (2001), 169-186.

[79] J.-J. Rückmann, A. Shapiro, Augmented Lagrangians in semi-infinite programming, Mathematical Programming, Vol. 116 (2009), 499-512.

[80] H. Scheel, S. Scholtes, Mathematical programs with complementarity constraints: stationarity, optimality, and sensitivity, Mathematics of Operations Research, Vol. 25 (2000), 1-22.

[81] A. Shapiro, Semi-infinite programming, duality, discretization and optimality conditions, Optimization, Vol. 58 (2009), 133-161.

[82] O. Stein, On parametric semi-infinite optimization, Thesis, Shaker, Aachen, 1997.

[83] O. Stein, On level sets of marginal functions, Optimization, Vol. 48 (2000), 43-67.

[84] O. Stein, First order optimality conditions for degenerate index sets in generalized semi-infinite programming, Mathematics of Operations Research, Vol. 26 (2001), 565-582.

[85] O. Stein, Bi-level Strategies in Semi-infinite Programming, Kluwer, Boston, 2003.

[86] O. Stein, On constraint qualifications in non-smooth optimization, Journal of Optimization Theory and Applications, Vol. 121 (2004), 647-671.

[87] O. Stein, A semi-infinite approach to design centering, in: S. Dempe, V. Kalashnikov (eds): Optimization with Multivalued Mappings, Springer, 2006, 209-228.

[88] O. Stein, Lifting mathematical programs with complementarity constraints, Mathematical Programming, Vol. 131 (2012), 71-94.

[89] O. Stein, P. Steuermann, The adaptive convexification algorithm for semi-infinite programming with arbitrary index sets, Mathematical Programming B, to appear.

[90] O. Stein, P. Steuermann, An implementation of the adaptive convexification algorithm for semi-infinite programming with arbitrary index sets, kop.ior.kit.edu/downloads.php

[91] O. Stein, G. Still, On optimality conditions for generalized semi-infinite programming problems, Journal of Optimization Theory and Applications, Vol. 104 (2000), 443-458.

[92] O. Stein, G. Still, On generalized semi-infinite optimization and bilevel optimization, European Journal of Operational Research, Vol. 142 (2002), 444-462.

[93] O. Stein, G. Still, Solving semi-infinite optimization problems with interior point techniques, SIAM Journal on Control and Optimization, Vol. 42 (2003), 769-788.

[94] O. Stein, A. Tezel, The semismooth approach for semi-infinite programming under the Reduction Ansatz, Journal of Global Optimization, Vol. 41 (2008), 245-266.

[95] O. Stein, A. Tezel, The semismooth approach for semi-infinite programming without strict complementarity, SIAM Journal on Optimization, Vol. 20 (2009), 1052-1072.

[96] O. Stein, A. Winterfeld, A feasible method for generalized semi-infinite programming, Journal of Optimization Theory and Applications, Vol. 146 (2010), 419-443.

[97] G. Still, Generalized semi-infinite programming: theory and methods, European Journal of Operational Research, Vol. 119 (1999), 301-313.

[98] G. Still, Generalized semi-infinite programming: numerical aspects, Optimization, Vol. 49 (2001), 223-242.

[99] R.H. Tütüncü, M. Koenig, Robust asset allocation, Annals of Operations Research, Vol. 132 (2004), 157-187.

[100] L. Vandenberghe, S. Boyd, Semidefinite programming, SIAM Review, Vol. 38 (1996), 49-95.

[101] L. Vandenberghe, S. Boyd, Connections between semi-infinite and semi-definite programming, in [75], 277-294.

[102] G.-W. Weber, Generalized Semi-Infinite Optimization and Related Topics, Heldermann Verlag, Lemgo, 2003.

[103] G.-W. Weber, A. Tezel, On generalized semi-infinite optimization of genetic networks, TOP, Vol. 15 (2007), 65-77.

[104] R. Werner, Cascading: an adjusted exchange method for robust conic programming, Central European Journal of Operations Research, Vol. 16 (2008), 179-189.

[105] R. Werner, Consistency of robust optimization with application to portfolio optimization, Optimization Online, 2010.

[106] R. Werner, Costs and benefits of robust optimization, Optimization Online, 2010.

[107] W. Wetterling, Definitheitsbedingungen für relative Extrema bei Optimierungs- und Approximationsaufgaben, Numerische Mathematik, Vol. 15 (1970), 122-136.

[108] A. Winterfeld, Application of general semi-infinite programming to lapidary cutting problems, European Journal of Operational Research, Vol. 191 (2008), 838-854.

[109] H. Wolkowicz, R. Saigal, L. Vandenberghe (eds), Handbook of Semidefinite Programming: Theory, Algorithms and Applications, Kluwer, Dordrecht, 2000.

[110] J.J. Ye, S.-Y. Wu, First order optimality conditions for generalized semi-infinite programming problems, Journal of Optimization Theory and Applications, Vol. 137 (2008), 419-434.

[111] X.Y. Zheng, X. Yang, Lagrange multipliers in nonsmooth semi-infinite optimization problems, Mathematics of Operations Research, Vol. 32 (2007), 168-181.

[112] G. Zwier, Structural Analysis in Semi-Infinite Programming, Thesis, University of Twente, 1987.
