
Solving Chance-Constrained Optimization Problems with Stochastic Quadratic Inequalities

Miguel A. Lejeune∗, F. Margot†

Abstract

We study a complex class of stochastic programming problems involving a joint chance constraint with random technology matrix and stochastic quadratic inequalities. We present a basic mixed-integer nonlinear reformulation based on Boolean modeling and derive several variants of it. We present detailed empirical results comparing the various reformulations and several easy-to-implement algorithmic ideas that improve the performance of the mixed-integer nonlinear solver Couenne for solving these problems. Guidelines on how to tune the solver and select reformulations are presented. The test instances are epidemiology and disaster management facility location models and cover the three types of stochastic quadratic inequalities, namely products of two variables that are (i) both binary, (ii) binary and continuous, or (iii) both continuous.

Keywords: Stochastic Programming, Mixed-Integer Nonlinear Programming, Boolean Programming, Nonlinear Branch-and-Bound Algorithm, Quadratic Stochastic Inequality, Joint Probabilistic Constraint, Random Technology Matrix, Epidemiology, Facility Location

1 Introduction

A chance-constrained problem is a stochastic programming optimization problem involving one or more stochastic constraints that must hold simultaneously with a minimum given probability. Applications of chance constraint problems are numerous, e.g., management of resources [62], financial risk management [14], process engineering [24] (see [53] for additional examples and references).

Chance constraint problems are very challenging to solve in practice as they typically involve nonlinearities, nonconvexities, discrete variables, and random variables. Even checking the feasibility of a given solution might already be hard in practice and NP-hard in theory. In this paper, we assume no particular form for the joint distribution of the random variables. We consider that the distribution is given as a collection of joint realizations of the random variables, i.e., a problem with a discrete joint distribution. Probabilistic programming problems with finitely distributed random variables (see, e.g., [21, 52, 58]) are challenging and pervasive. Discrete distributions can capture features such as skewness or kurtosis, and are frequently used either directly or as empirical approximations of a general distribution.

The problems studied in this paper are significantly more complex than typical chance constraint problems studied in the literature. They have three distinguishing features: (i) the presence of several stochastic inequalities that must be satisfied jointly with a specified probability (as opposed to having a single stochastic inequality); (ii) the presence of random variables in the left-hand side of the inequalities (as opposed to only in the right-hand side); (iii) the stochastic inequalities are quadratic, i.e., sums of terms that may involve the product of up to two decision variables and one random variable (as opposed to linear, where only sums of products of a single decision variable with a random variable can occur). Chance constraint problems with features (i) and (ii) are known as having a multi-row random

∗ George Washington University, Washington, DC, USA; [email protected]
† Carnegie Mellon University, Pittsburgh, PA, USA; [email protected]. Supported by ONR grant N00014-12-10032.


technology matrix. We call problems with all three features Quadratic and Multi-row chance-constrained optimization problems and denote them by QM.

In this paper, we develop theory and algorithmic approaches to solve QM to optimality. Problem QM has $r$ decision variables that can be continuous or binary. We assume that $x_j \in \mathbb{R}$ for $j = 1, 2, \ldots, r_1$ and $x_j \in \{0,1\}$ for $j = r_1+1, r_1+2, \ldots, r_1+r_2 = r$. In addition, it also has a set of random variables $\xi$ following a joint discrete distribution with finite support and not assumed to be independent. The objective function is linear in the decision variables and does not involve the random variables. The problem has a set of stochastic inequalities (indexed by $i \in I$) that must hold jointly with a specified probability level $p$. Each such inequality includes products of two decision variables ($x_{j_1}$ and $x_{j_2}$) and one stochastic variable $\xi_{ij_1j_2}$, and its right-hand side is deterministic. The multi-row technology matrix is therefore random. Let $J$ be the set of all pairs of indices of decision variables appearing in a monomial of these stochastic constraints. The problem might have additional deterministic constraints $g_b(x) \le 0$ for $b = 1, 2, \ldots, m$ that are convex. The notation $\mathbb{P}$ refers to a probability measure.

The base formulation of problem QM thus reads:
\[ \text{QM}: \quad \max\ q^T x \tag{1} \]
\[ \text{subject to} \quad g_b(x) \le 0, \quad b = 1,\ldots,m \tag{2} \]
\[ \mathbb{P}\Big( \sum_{j=(j_1,j_2)\in J} s_{ij}\, x_{j_1} x_{j_2}\, \xi_{ij} \le d_i,\ i \in I \Big) \ge p \tag{3} \]
\[ x \in \mathbb{R}^{r_1}_+ \times \{0,1\}^{r_2}. \tag{4} \]

The constraints (3) are known as a multi-row probabilistic constraint with random technology matrix. For discretely distributed random variables, the feasible set of QM is non-convex even when all the decision variables are continuous and the deterministic constraints are linear [11]. The type of reformulation studied in this paper assumes that the problem QM is monotone in $\xi$, i.e., if $x$ is feasible for $\xi$, then $x$ is also feasible for all $\xi'$ in the support of the distribution with $\xi' \le \xi$. While this condition might seem restrictive, it is actually a very general property that most applications exhibit either in their natural formulation or after replacing some $\xi_{ij}$ by $-\xi_{ij}$ and changing the sign of the corresponding coefficients $s_{ij}$. Note that the monotone property holds if for all $j \in J$ and all $i, i' \in I$ we have $s_{ij}s_{i'j} \ge 0$, and in particular if each random variable appears in at most one of the stochastic inequalities.
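As a concrete illustration of constraint (3) under a discrete distribution, the following sketch checks whether a candidate solution satisfies the joint chance constraint by direct enumeration of the scenarios. It is only a minimal illustration, assuming the scenario data are stored as NumPy arrays; the function and variable names are ours and are not part of the model.

```python
import numpy as np

def joint_chance_satisfied(x, s, xi, d, pairs, probs, p):
    """Check constraint (3) for a candidate x under a discrete distribution.

    x     : (r,) candidate decision vector
    s     : (|I|, |J|) deterministic coefficients s_{ij}
    xi    : (K, |I|, |J|) realizations of the random coefficients xi_{ij}
    d     : (|I|,) right-hand sides
    pairs : list of |J| index pairs (j1, j2) defining the monomials
    probs : (K,) scenario probabilities summing to one
    p     : required probability level
    """
    z = np.array([x[j1] * x[j2] for (j1, j2) in pairs])   # bilinear terms, one per pair in J
    lhs = np.einsum('kij,ij,j->ki', xi, s, z)             # h_i(xi^k, x) for every scenario k and row i
    ok = np.all(lhs <= d + 1e-9, axis=1)                  # scenarios where all rows i hold jointly
    return probs[ok].sum() >= p
```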

For such a monotone problem QM, an optimal solution of the proposed reformulation gives a feasible but possibly suboptimal solution. In Section 2.2.1, we give sufficient conditions under which an optimal solution of the reformulation gives an optimal solution of QM. These conditions are satisfied in the applications studied in this paper, namely propagation of diseases or viruses [22, 25] and facility location in the plane [23]. Additional examples of applications fitting the QM model are some transportation problems [27], network flow problems with quadratic costs [63], pooling problems [42], international supply chain problems with transfer pricing [50], siting problems of electrical substations [49], and transmission network expansion problems [38]. This shows that the proposed approach can be used in multiple application areas and industrial sectors.

The left-hand sides of the stochastic inequalities (3) are denoted by $h_i(\xi, x)$. They include bilinear terms $x_{j_1} x_{j_2}$ multiplied by a random variable $\xi_{ij}$ and by a constant coefficient $s_{ij}$, where $j = (j_1,j_2) \in J$. They are not assumed to be convex or even separable. The problem QM is general enough to encompass the following cases:


(i) Some monomials involve fewer than three variables, with at most one random variable and at most two decision variables. In that case, we can add at most two artificial decision variables and at most one random variable and fix them to 1 to fit the base formulation QM.

(ii) Some decision variables are general integers with finite upper and lower bounds instead of being binary variables. In that case, we can replace each general integer variable by a sum of binary variables. Note that modifying the method presented in this paper to handle general integer variables directly is possible and is likely to be more efficient, but since the applications we consider below involve only binary variables, we restrict the presentation to the binary case.

Probabilistic programming problems with random technology matrix were first studied by Kataoka [32]. He considers an individual (one-row) chance constraint with random technology matrix where the vector $\xi$ of random variables is assumed to be normally distributed. If the prescribed probability level $p$ is at least 0.5, he shows that the chance constraint can be equivalently reformulated as a second-order cone constraint. We focus our review here on chance constraints with joint (multi-row) random technology matrix. The complexity of these problems increases with the number $|I|$ of inequalities that must hold jointly. Very few results are known for constraints with multi-row random technology matrix. They are generally limited to the case where the probabilistic constraints $\mathbb{P}(h_i(\xi, x) \ge 0,\ i \in I) \ge p$ are linear, i.e., of the form $h_i(\xi, x) = \xi_i x - d_i$, $i \in I$. Inequalities containing products of one random variable by one decision variable are called stochastic linear inequalities in the stochastic programming literature.

For discretely distributed random variables, Ruszczynski [56] derived cutting planes based on precedence knapsack constraints and used them within a branch-and-cut algorithm. Tanner and Ntaimo [59] proposed a Mixed-integer Linear Programming (MILP) formulation in which they incorporate irreducibly infeasible optimality cuts. Beraldi and Bruni [11] and Beraldi et al. [12] have both relied upon the concept of p-efficiency [52] and specialized branch-and-bound algorithms to solve a reformulation of the probabilistic set covering problem. Using Monte Carlo approaches, sample approximation methods have been shown to find good solutions for the original problem and provide statistical bounds on the solution quality. Recently, Kogan and Lejeune [33] proposed a Boolean method to solve problems with joint probabilistic constraints in which the elements of the multi-row random technology matrix follow a joint probability distribution. For continuously distributed random variables, the focus has been on the identification of conditions under which the feasible set defined by the chance constraint is convex (see, e.g., [29, 51]) and on methods allowing for the computation of the gradients and values of multivariate continuous distributions, in particular the Gaussian one (see, e.g., [28, 62]). All the above papers considered only the case where the stochastic constraints are linear, i.e., $h_i(\xi, x) = \xi_i x - d_i$.

The method developed in this paper builds on the deterministic Boolean programming [20] reformulation of linear chance constraint problems introduced in [39, 40]. The method uses for each random variable $\xi_{ij}$ a number of discrete values called cut points and an associated binary vector to represent the value of $\xi_{ij}$. When the cut points are chosen appropriately, the mapping of the values of the random variables in a scenario to a set of binary vectors allows for a linear description of the set of feasible realizations. Strengths of this method are: (i) the approach can handle any type of dependency between random variables; (ii) the method can be applied to any type (linear, quadratic, etc.) of stochastic inequalities; (iii) the size of the reformulated problem, in particular its number of binary variables, does not grow linearly with the number of scenarios used to represent uncertainty, but with the number of cut points used in the binarization process [39, 33]. Point (iii) is particularly important and sets apart the


binarization approach from other methods. The base reformulation can then be further simplified into a mixed-integer (linear or not) optimization problem in which the number of binary variables or quadratic mixed-integer terms (depending on the specifics of the problem) is only a function of the number of cut points used in the binarization process and not of the number of scenarios. The number of cut points grows only very slowly with the number of scenarios.

To the best of our knowledge, this study is the first one that proposes a reformulation and algorithmic framework allowing for the solution of chance-constrained programming problems of type QM. Our contributions span the modeling, algorithmic, and computational fronts. The work straddles stochastic, mixed-integer nonlinear, and Boolean programming as well as application areas within the Operations Research discipline. Indeed, the models studied are relevant in multiple industrial sectors and have extended applicability.

We present a new mixed-integer nonlinear reformulation based on Boolean modeling and derive several sparser variants of it that provide significant computational benefits. We present detailed empirical results comparing the various reformulations, and several algorithmic ideas are implemented to provide two new nonlinear branch-and-bound algorithms that improve the performance of the mixed-integer nonlinear solver Couenne [9, 17]. The improvement is linked to one of the most difficult decisions when solving mixed-integer nonlinear programs, namely anticipating the effect of branching on an integer variable versus branching on a continuous one. While comparing the effect of branching on two integer variables or on two continuous variables can be done relatively well, comparing between two variables of different types is empirically hard.

Guidelines on how to tune the solver and to select reformulations are presented. The test instances are epidemiology and disaster management facility location models and cover the three types of stochastic quadratic inequalities, namely products of two variables that are (i) both binary, (ii) binary and continuous, or (iii) both continuous.

In Section 2, we succinctly describe the Boolean reformulation framework and propose new extensions for the models studied in this paper. In Section 3, we derive a series of reformulations for the stochastic programming problem QM. Section 4 presents the disaster management facility location and epidemiology problems used to benchmark our method. Section 5 describes the algorithmic methods. Section 6 details the new nonlinear branch-and-bound algorithms and analyzes the computational results, while Section 7 provides concluding remarks.

2 Boolean Modeling Framework

In this section, we present the Boolean programming framework introduced in [33, 39, 40] that we will extend to reformulate the type of probabilistic constraint studied in this paper. The Boolean programming method was initially designed to handle joint probabilistic constraints with dependent random right-hand sides [39, 40] and later extended to stochastic programming problems with joint probabilistic constraints and multi-row random technology matrix [33]. In [41], the Boolean framework is implemented to tackle multiobjective probabilistic problems in which the reliability level is a decision variable. While in the above studies all stochastic inequalities are linear, we consider here quadratic stochastic inequalities, which poses additional challenges in terms of reformulation and solution methods. The Boolean modeling framework involves the construction of the set of recombinations, the binarization of the probability distribution and the partially defined Boolean function representation of a chance constraint (Section 2.1), and the modeling of the feasible area of a chance constraint with a system of mixed-integer nonlinear inequalities (Section 2.2). Readers familiar with the topic will see that Section 2.1 and Section 2.2.1 are essentially based on Sections 2.1 and 2.2 in [33]. New results extending the Boolean modeling method are presented in Section 2.2.2 and Section 3.

2.1 Recombinations, Binarization, and Boolean functions

To simplify the exposition, we assume that $\xi_{ij} = \xi_{i'j}$ for all $i, i' \in I$. The method can handle the case where this assumption is not met without any conceptual modification, but requires heavier notation. We thus drop the subscript $i$ in $\xi_{ij}$ and rewrite the chance constraint (3) as follows:

\[ \mathbb{P}\Big( \sum_{j=(j_1,j_2)\in J} s_{ij}\, x_{j_1} x_{j_2}\, \xi_j \le d_i,\ i \in I \Big) \ge p. \tag{5} \]

Let $\Omega$ be the set of all realizations (scenarios) $\omega^k$, $k = 1, 2, \ldots, |\Omega|$, that characterize the joint probability distribution function $F$ of $\xi$.

Definition 1 [39] The realization $\omega^k \in \Omega$ is p-sufficient if and only if $\mathbb{P}(\xi \le \omega^k) = F(\omega^k) \ge p$ and is p-insufficient otherwise.

The concept of p-sufficiency defined above must not be confused with the p-efficiency concept [52] that imposes stricter conditions. A p-sufficient realization defines sufficient conditions for (3) to hold. Let $F_j$ be the marginal probability distribution of $\xi_j$. The inequalities based on the univariate quantiles
\[ F_j(\omega^k_j) \ge p, \quad j = 1,\ldots,|J| \tag{6} \]

define necessary conditions for $\mathbb{P}(\xi \le \omega^k) \ge p$. The direct product $\Omega = C_1 \times \ldots \times C_{|J|}$ of the sets
\[ C_j = \big\{ \omega^k_j : F_j(\omega^k_j) \ge p,\ k = 1,\ldots,|\Omega| \big\}, \quad j \in J \tag{7} \]
provides the set $\Omega$ of recombinations [33] that defines the exhaustive list of points that can be p-sufficient based on the univariate quantile rule. The set $\Omega$ is partitioned into the set of p-sufficient recombinations $\Omega^+ := \{\omega^k \in \Omega : F(\omega^k) \ge p\}$ and the set of p-insufficient recombinations $\Omega^- := \{\omega^k \in \Omega : F(\omega^k) < p\}$. We index the set $\Omega$ (resp. $\Omega^+$, $\Omega^-$) with the index set $K$ (resp. $K^+$, $K^-$), i.e., $\Omega = \{\omega^k : k \in K\}$.

We now binarize the probability distribution and the set of recombinations with a set of values called cut points [15]. For each $j \in J$, a collection $c_{j1} < c_{j2} < \ldots < c_{jn_j}$ of cut points is selected, with $n_j$, $j \in J$, being the number of cut points associated with $j$. Then the binarization of the value $\omega^k_j$ maps it to the vector $\beta^k_j = \big[\beta^k_{j1}, \ldots, \beta^k_{jn_j}\big]$ with
\[ \beta^k_{jl} = \begin{cases} 1 & \text{if } \omega^k_j \ge c_{jl} \\ 0 & \text{otherwise.} \end{cases} \tag{8} \]

The binarization of a vector $\omega^k$ is obtained by the binarization of each of its components. By definition, the set of relevant Boolean vectors $\beta^k$ is regularized (as defined in [20]), i.e.,
\[ \beta^k_{jl} \ge \beta^k_{jl'}, \quad j \in J,\ l < l',\ k \in K. \tag{9} \]
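The binarization (8) has a simple implementation; the sketch below (our own helper, with illustrative values) produces the regularized "block of ones" pattern discussed above.

```python
def binarize(value, cuts):
    """Binarization (8) of one component against increasing cut points:
    beta_{jl} = 1 exactly when the value reaches cut point c_{jl}."""
    return [1 if value >= c else 0 for c in cuts]

# with cut points c_{j1} < c_{j2} < c_{j3}, the vector is a run of 1s followed by 0s
assert binarize(2.7, [1.0, 2.5, 4.0]) == [1, 1, 0]
```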

For the binarization to be useful, the set of cut points must be selected such that the set $\Omega^+_B$ of binarizations of all recombinations in $\Omega^+$ is disjoint from the set $\Omega^-_B$ of those obtained from $\Omega^-$. A set of cut points satisfying this condition is said to be consistent. To derive a consistent set, we use the method proposed in [39, 40], which is itself inspired by constructive approaches stemming from the combinatorial data mining literature [15].

Definition 2 [40] The sufficient-equivalent consistent set of cut points is given by:
\[ C^e = \bigcup_{j=1}^{|J|} C_j, \tag{10} \]
where the sets $C_j$ are defined by (7).

The use of the sufficient-equivalent set of cut points partitions the set of relevant Boolean vectors into the disjoint sets of p-sufficient ($\Omega^+_B$) and p-insufficient ($\Omega^-_B$) relevant Boolean vectors, and permits the derivation of a partially defined Boolean function modeling the satisfiability of the chance constraint (3).

Definition 3 [33] Given two disjoint subsets $\Omega^+_B$, $\Omega^-_B$ with $\Omega^+_B \cup \Omega^-_B = \Omega_B \subseteq \{0,1\}^n$, $g\big(\Omega^+_B, \Omega^-_B\big)$ is a partially defined Boolean function defining the mapping $(\Omega^+_B \cup \Omega^-_B) \to \{0,1\}$ such that $g(k) = 1$ (resp., $g(k) = 0$) if $\omega^k \in \Omega^+_B$ (resp., $\Omega^-_B$).

A numerical example of application of the Boolean method is given in Appendix I.

2.2 Chance Constraint Feasibility with System of Mixed-Integer Nonlinear Inequalities

In this section, we use the partially defined Boolean function (pdBf) representation of the chance constraint to obtain a system of mixed-integer nonlinear inequalities enforcing the feasibility of (3). We first list properties of threshold Boolean functions and then extend recent results obtained for the stochastic linear case [33, 39].

2.2.1 Properties of Threshold Boolean Function

The following two definitions are useful to derive a set of mixed-integer nonlinear inequalities defining the feasible set of the chance constraint (3).

Definition 4 [20] A function $f : \{0,1\}^n \to \{0,1\}$ is a threshold Boolean function if there exist $\lambda \in \mathbb{R}^n$ and $\theta \in \mathbb{R}$ such that, for all $(a_1,\ldots,a_n) \in \{0,1\}^n$, $f(a_1,\ldots,a_n) = 1$ if and only if $\sum_{l=1}^{n} \lambda_l a_l \ge \theta$.

The $(n+1)$-tuple $(\lambda, \theta)$ is a separating structure for the threshold Boolean function $f$.

Definition 5 Let $f$ be a Boolean function and let $T$ (resp. $F$) be the set of points for which $f$ takes value 1 (resp. 0). The function $f$ is said to be a tight minorant of a pdBf $g\big(\Omega^+_B, \Omega^-_B\big)$ if (1) $\Omega^-_B \subseteq F$, and (2) $T \cap \Omega^+_B \ne \emptyset$.

Theorem 6 [33] A threshold Boolean function $f$ defined by the separating structure $(\lambda, \theta)$ is a tight minorant of a pdBf $g(\Omega^+_B, \Omega^-_B)$ if the system of inequalities
\[ \sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\, \beta^k_{jl} \ge \theta, \quad \text{for at least one } k \in K^+ \tag{11} \]
\[ \sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\, \beta^k_{jl} \le \theta - 1, \quad k \in K^- \tag{12} \]
has a feasible solution.


Inequalities (12) mean that each p-insufficient realization is also defined as p-insufficient by the separating structure $(\lambda, \theta)$ characterizing the tight minorant $f$.

Theorem 7 [33] Every feasible solution $\lambda^*$ of the system of inequalities
\[ \sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\, \beta^k_{jl} \le |J| - 1, \quad k \in K^- \tag{13} \]
\[ \sum_{l=1}^{n_j} \lambda_{jl} = 1, \quad j \in J \tag{14} \]
\[ \lambda_{jl} \in \{0,1\}, \quad j \in J,\ l = 1,\ldots,n_j \tag{15} \]
defines a threshold tight minorant $f$ with integral separating structure $(\lambda^*, |J|) \in \{0,1\}^n \times \mathbb{Z}_+$ for $g\big(\Omega^+_B, \Omega^-_B\big)$. Every $\omega^k \in \Omega$ such that
\[ \sum_{j\in J}\sum_{l=1}^{n_j} \lambda^*_{jl}\, \beta^k_{jl} = |J| \tag{16} \]
belongs to $\Omega^+$. There is at least one $\omega^k \in \Omega^+$ for which (16) holds.

Any feasible solution $\lambda^*$ of the system (13)-(15) permits the derivation of a p-sufficient recombination with components $\sum_{l=1}^{n_j} \lambda^*_{jl}\, c_{jl}$, $j \in J$, and can thus be used to obtain a feasible solution for (3).
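For small instances, the system (13)-(15) can be solved by brute force: (14)-(15) force exactly one cut point per component, so it suffices to enumerate these choices and keep those that satisfy (13) for every p-insufficient Boolean vector. A minimal sketch under that reading (names are ours; it assumes the binarized p-insufficient vectors are available as nested lists):

```python
import itertools

def threshold_separators(beta_insufficient, n):
    """Enumerate lambda satisfying (13)-(15). `beta_insufficient[k][j][l]` holds
    beta^k_{jl} for k in K^-, and n[j] is the number of cut points of component j.
    Each returned tuple gives, for every j, the index l with lambda_{jl} = 1."""
    J = len(n)
    feasible = []
    for choice in itertools.product(*[range(n_j) for n_j in n]):
        if all(sum(beta[j][choice[j]] for j in range(J)) <= J - 1
               for beta in beta_insufficient):
            feasible.append(choice)
    return feasible
```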

2.2.2 Mixed-Integer Nonlinear Feasible Set

We now propose a new system of linear inequalities having the same properties as (13)-(15). This new system is smaller than (13)-(15). We first construct a partially ordered set in the space of p-insufficient recombinations $\omega^k$, $k \in K^-$.

Definition 8 For any $\omega^k, \omega^{k'} \in \Omega^-$, the partial order $\preceq$ over the set $\Omega^-_B$ is defined by
\[ \omega^k \preceq \omega^{k'} \ \Leftrightarrow\ \beta^k \le \beta^{k'}. \tag{17} \]

If (17) holds, the p-insufficient recombination $\omega^k$ is said to be dominated by $\omega^{k'}$. We define the set $\check{\Omega}^-$ of all recombinations in $\Omega^-$ whose binarization is not dominated by the binarization of any other (distinct) recombination in $\Omega^-$. We index the set $\check{\Omega}^-$ with $\check{K}^-$ and denote by $\check{\Omega}^-_B$ the corresponding set of binarizations. We have the following strengthening of (13)-(15).

Lemma 9 The feasible set defined by the system of inequalities
\[ \sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\, \beta^k_{jl} \le |J| - 1, \quad k \in \check{K}^- \tag{18} \]
is equivalent to the one defined by (13).

Proof. Consider two arbitrary $\omega^k, \omega^{k'} \in \Omega^-$ such that $\omega^k \preceq \omega^{k'}$. Then $\sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\,\beta^k_{jl} \le \sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\,\beta^{k'}_{jl}$ for any $\lambda_{jl} \in \{0,1\}$. The result follows.

Lemma 9 implies that instead of deriving a threshold tight minorant for $g\big(\Omega^+_B, \Omega^-_B\big)$, it is enough and equivalent to derive a threshold tight minorant for $g\big(\Omega^+_B, \check{\Omega}^-_B\big)$.


Theorem 10 Every feasible solution of the system of inequalities (14)-(15); (18) defines a threshold tight minorant with integral separating structure $(\lambda, |J|) \in \{0,1\}^n \times \mathbb{Z}_+$ for $g\big(\Omega^+_B, \check{\Omega}^-_B\big)$. Every $\omega^k \in \Omega$ for which (16) holds belongs to $\Omega^+$.

For any pdBf $g\big(\Omega^+_B, \Omega^-_B\big)$, there exists a tight minorant of $g\big(\Omega^+_B, \Omega^-_B\big)$ that is threshold [33]. An identical result can be obtained and proven the same way for $g\big(\Omega^+_B, \check{\Omega}^-_B\big)$.

Theorem 11 For any pdBf $g\big(\Omega^+_B, \check{\Omega}^-_B\big)$, there exists a tight minorant of $g\big(\Omega^+_B, \check{\Omega}^-_B\big)$ that is threshold.

The next step is to use the threshold structure of the tight minorant of the pdBf $g\big(\Omega^+_B, \check{\Omega}^-_B\big)$ to derive a system of inequalities representing exactly the feasible area defined by the chance constraint (3).

Theorem 12 Any solution feasible for the set of mixed-integer trilinear inequalities
\[ (4);\ (14)-(15);\ (18) \]
\[ \sum_{j=(j_1,j_2)\in J} s_{ij}\, x_{j_1} x_{j_2} \Big( \sum_{l=1}^{n_j} \lambda_{jl}\, c_{jl} \Big) \le d_i, \quad i \in I \tag{19} \]
is feasible for the chance constraint (3).

Proof. Let $(x, \lambda)$ be a feasible solution of the above system. As shown in Theorem 7, $\lambda$ defines a threshold tight minorant $f$ and
\[ G = \Big\{ \omega^k \in \Omega : \sum_{j=(j_1,j_2)\in J}\sum_{l=1}^{n_j} \lambda_{jl}\,\beta^k_{jl} = |J| \Big\} \subseteq \Omega^+. \]
Let $L = \big\{(j,l) : \lambda_{jl} = 1,\ j \in J,\ l = 1,\ldots,n_j\big\}$. The binarization (8) implies that $\omega^k_j \ge \sum_{p=1}^{n_j}\lambda_{jp}\, c_{jp} = c_{jl}$, $(j,l) \in L$, $\omega^k \in G$.

Furthermore, due to the definition of the sufficient-equivalent set of cut points, it is always possible to find $\omega^{k'} \in G$ such that:
\[ \omega^{k'}_j = c_{jl}, \quad (j,l) \in L. \tag{20} \]
Since $\omega^{k'} \in G \subseteq \Omega^+_B$, we have $\mathbb{P}\big(\xi \le \omega^{k'}\big) \ge p$, and:
\[ \mathbb{P}\Big( \sum_{(j_1,j_2)\in J} s_{ij}\,\xi_j\, x_{j_1} x_{j_2} \le \sum_{(j_1,j_2)\in J} s_{ij}\,\omega^{k'}_j\, x_{j_1} x_{j_2},\ i \in I \Big) \ge p \tag{21} \]
\[ \sum_{(j_1,j_2)\in J} s_{ij}\,\omega^{k'}_j\, x_{j_1} x_{j_2} \le d_i,\ i \in I \ \Rightarrow\ \mathbb{P}\Big( \sum_{(j_1,j_2)\in J} s_{ij}\,\xi_j\, x_{j_1} x_{j_2} \le d_i,\ i \in I \Big) \ge p. \tag{22} \]
Therefore, it follows from (20) and (22) that:
\[ \sum_{(j_1,j_2)\in J} s_{ij}\, x_{j_1} x_{j_2} \Big( \sum_{l=1}^{n_j} \lambda_{jl}\, c_{jl} \Big) \le d_i,\ i \in I \ \Rightarrow\ (3) \text{ holds.} \tag{23} \]

We now characterize in Corollary 13 a series of sufficient conditions under which the feasible sets defined by the chance constraint (3) and the system of inequalities given in Theorem 12 are identical.


Corollary 13 The feasible set of the system
\[ (4);\ (14)-(15);\ (18)-(19) \]
is equivalent to the feasible set of the chance constraint (3) if the set of stochastic inequalities subject to the probabilistic requirement takes one of the following forms:

(i) Each stochastic inequality in (3) has exactly one monomial, i.e.,
\[ \mathbb{P}\big( s_{ij}\, x_{j_1} x_{j_2}\, \xi_i \le d_i,\ i \in I \big) \ge p. \tag{24} \]

(ii) Each stochastic inequality in (3) contains exactly one random variable, i.e.,
\[ \mathbb{P}\Big( \sum_{j=(j_1,j_2)\in J} \big( s_{ij}\, x_{j_1} x_{j_2} \big)\, \xi_i \le d_i,\ i \in I \Big) \ge p. \tag{25} \]

(iii) Each stochastic inequality in (3) contains exactly one pair of decision variables $x_{i_1} x_{i_2}$:
\[ \mathbb{P}\Big( s_i\, x_{i_1} x_{i_2} \sum_{j=(j_1,j_2)\in J} \xi_{ij} \le d_i,\ i \in I \Big) \ge p \tag{26} \]

(iv) Each stochastic inequality in (3) is the product of a sum of random variables by a sum of products of two decision variables, i.e.,
\[ \mathbb{P}\Big( \Big(\sum_{j\in J} \xi_{ij}\Big) \sum_{j=(j_1,j_2)\in J} \big( s_{ij}\, x_{j_1} x_{j_2} \big) \le d_i,\ i \in I \Big) \ge p. \tag{27} \]

(v) Each stochastic inequality in (3) is a sum of trilinear terms $x_{j_1} x_{j_2}\, \xi_{ij}$ and at most one of them can be non-zero.

(vi) The system of inequalities subject to the probabilistic requirement includes any mix or combination of stochastic inequalities of the forms described in (i) to (v).

Proof. Theorem 12 shows that if $(x, \lambda)$ is a feasible solution of (4); (14)-(15); (18)-(19), then $x$ is feasible for (3). We show now that there exists a p-sufficient recombination $\omega \in \Omega$ for which $x$ is feasible.

(i) Let
\[ t_i = \begin{cases} \infty & \text{if } s_{ij}\, x_{j_1} x_{j_2} = 0 \\ \dfrac{d_i}{s_{ij}\, x_{j_1} x_{j_2}} & \text{otherwise} \end{cases} \qquad \text{and} \qquad t'_i = \begin{cases} -\infty & \text{if } s_{ij}\, x_{j_1} x_{j_2} = 0 \\ \dfrac{d_i}{s_{ij}\, x_{j_1} x_{j_2}} & \text{otherwise.} \end{cases} \]
The probability that a feasible $x$ for QM satisfies (3) is $\mathbb{P}(\xi_i \le t_i,\ i \in I)$. Then $x$ is feasible for all $\omega$ such that $\omega_i \ge t'_i$. As we assume that the probability distribution is discrete, all such $\omega$ whose components $\omega_i$, $i \in I$, have a positive probability of occurrence are recombinations in a set $S$ with $S \subseteq \Omega^+$. Define $\omega_i = \min_{s\in S} s_i$ for $i \in I$. Observe that $\omega \in \Omega^+$ and that $x$ is feasible for $\omega$.

(ii): Similar proof as in (i), as we have a single random variable in each stochastic inequality.

(iii): Let $\zeta_i = \sum_{j=(j_1,j_2)\in J} \xi_{ij}$, $i \in I$. We can now rewrite (26) as
\[ \mathbb{P}\big( s_i\, x_{i_1} x_{i_2}\, \zeta_i \le d_i,\ i \in I \big) \ge p. \tag{28} \]
As $\xi_{ij}$ is discretely distributed with finite support, so is $\zeta_i$. As (28) has the same form as (24), the proof of (i) applies here too.

(iv): The chance constraint (27) can be recast as
\[ \mathbb{P}\Big( \sum_{j'=(j'_1,j'_2)\in J} \big( s_{ij'}\, x_{j'_1} x_{j'_2} \big)\, \zeta_i \le d_i,\ i \in I \Big) \ge p, \tag{29} \]
which is similar to (25).

(v): For all $i \in I$, the stochastic inequality $i$ is equivalent to the stochastic inequalities $s_{ij}\, x_{j_1} x_{j_2}\, \xi_{ij} \le d_i$ for all $j \in J$, and thus the proof of (i) applies.

(vi): The proofs of (i)-(v) essentially show that each stochastic inequality is equivalent to a collection of inequalities of type (i).

3 Mixed-Integer Nonlinear Programming Reformulations

In this section, we derive a series of mixed-integer nonlinear programming reformulations for problem QM and present their characteristics.

3.1 Basic Model M1

The first reformulation M1 is obtained by a direct application of Theorem 12.

Lemma 14 The mixed-integer nonlinear programming problem M1
\[ \text{M1}: \quad \max\ q^T x \quad \text{subject to } (2);\ (4);\ (14)-(15);\ (18)-(19) \]
is equivalent to the probabilistic programming problem QM.

Problem M1 is a Mixed-Integer NonLinear Programming (MINLP) problem involving trilinear terms $x_{j_1} x_{j_2} \lambda_{jl}$ for $j = (j_1,j_2) \in J$, and its continuous relaxation is in general non-convex. Such problems are usually NP-hard to solve. Problem M1 contains $n = \sum_{j\in J} n_j$ binary variables $\lambda_{jl}$ as defined in (15), $|\check{\Omega}^-_B|$ knapsack constraints of the form (18), $|J|$ set partitioning constraints of the form (14), and $|I|$ nonlinear constraints (19).

We now present a number of reformulations whose relevance depends on the properties of the trilinear terms $x_{j_1} x_{j_2} \lambda_{jl}$. These properties are here understood in terms of the continuous or binary nature of the decision variables $x_{j_1}$ and $x_{j_2}$. The variables $\lambda_{jl}$ are always binary. The MINLP reformulations presented in Sections 3.1 and 3.2 are the most general ones and are always applicable regardless of whether the variables $x_{j_1}$ and $x_{j_2}$ are continuous or not. The MILP reformulation presented in Section 3.3 is applicable if at least one of the two variables $x_{j_1}$ and $x_{j_2}$ in each trilinear term is integer.

3.2 Sparse Reformulation M2

The derivation of the model proposed in this section involves two main steps. First, we derive a new and sparser system of mixed-integer inequalities (different from those mentioned in Section 2.2.2 and in [33]) to define the feasible set of a chance constraint. Second, we use the McCormick convexification step to reformulate the mixed-integer trilinear terms as bilinear ones.

Theorem 15 Consider an arbitrary $\omega^k \in \check{\Omega}^-_B$ and let $\bar{l}(j) = \max\{1 \le l \le n_j \mid \beta^k_{jl} = 1\}$, $j \in J$. The feasible set defined by the constraints
\[ \sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\, \beta^k_{jl} \le |J| - 1 \tag{30} \]
\[ (14)-(15) \]
can be equivalently reformulated as:
\[ \sum_{j\in J} \gamma_{j,\bar{l}(j)+1} \ge 1 \tag{31} \]
\[ \gamma_{j,l-1} \ge \gamma_{jl}, \quad j \in J,\ 2 \le l \le n_j \tag{32} \]
\[ \gamma_{j1} = 1, \quad j \in J \tag{33} \]
\[ \gamma_{jl} \in \{0,1\}, \quad j \in J,\ 1 \le l \le n_j \tag{34} \]
using the one-to-one correspondence between the $\lambda$ and $\gamma$ variables given by
\[ \lambda_{jl} = \begin{cases} \gamma_{jl} - \gamma_{j,l+1} & \text{if } l < n_j \\ \gamma_{jl} & \text{if } l = n_j \end{cases} \qquad \text{and} \qquad \gamma_{jl} = \sum_{k=l}^{n_j} \lambda_{jk}. \tag{35} \]

Proof. As the transformation $T$ given by (35) is one-to-one and linear, the polytope $P$ corresponding to the linear relaxation of (30); (14)-(15) is mapped to a polytope $Q := T(P)$ and there is a one-to-one correspondence between their respective facets. Observe that the facets of $P$ are (30), (14), and $\lambda_{jl} \ge 0$ for all $j \in J$, $l \in \{1,2,\ldots,n_j\}$. Moreover, (14) is mapped to (33), and the inequalities $\lambda_{jl} \ge 0$ for all $j \in J$, $l \in \{1,2,\ldots,n_j\}$ are mapped to (32) together with $\gamma_{jn_j} \ge 0$. The result thus holds if (30) is mapped to (31), as the remaining inequalities in the system (31)-(34) are implied by the others.

To show that (30) is mapped to (31), observe that the constraint
\[ \sum_{j\in J}\sum_{l=1}^{n_j} \lambda_{jl}\,\big(1 - \beta^k_{jl}\big) \ge 1 \tag{36} \]
is equivalent to (30). From the definition of $\bar{l}(j)$, we have $1 - \beta^k_{jl} = 0$ for $l \le \bar{l}(j)$, and $1 - \beta^k_{jl} = 1$ for $l = \bar{l}(j)+1,\ldots,n_j$. Therefore, (36) is equivalent to
\[ \sum_{j\in J}\sum_{l=\bar{l}(j)+1}^{n_j} \lambda_{jl} \ge 1. \tag{37} \]
Using (35), this becomes $\sum_{j\in J} \gamma_{j,\bar{l}(j)+1} \ge 1$, proving the result.

Note that although the two systems in the above theorem are equivalent in terms of their number of facets, number of extreme points, and integer feasible solutions, there are notable advantages to using (31)-(34). First, the number of nonzero entries in the facet-defining inequalities is smaller. This usually translates into faster solution times. Second, in terms of branching, one can note that branching on a variable $\lambda_{jl}$ yields very unbalanced benefits in the two branches. In the branch where $\lambda_{jl}$ is set to 1, all variables $\lambda_{jk}$ for $k \ne l$ are set to 0, while not much progress is made in the branch where $\lambda_{jl}$ is set to 0. On the other hand, if we branch on a variable $\gamma_{jl}$, then in the branch where $\gamma_{jl}$ is set to 1, all variables $\gamma_{jk}$ for $k < l$ are set to 1, and in the branch where $\gamma_{jl}$ is set to 0, all variables $\gamma_{jk}$ for $k > l$ are also set to 0. The effect of the transformation is of course similar to defining the variables $\lambda_{jl}$ as an SOS1 [3] set, but the sparsity of the system (31)-(34) also has benefits in terms of the bound tightening propagation used in mixed-integer nonlinear solvers such as Couenne. Bound tightening procedures are described in more detail in Section 5.
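The change of variables (35) is easy to apply and invert; the following sketch (our own helpers) illustrates the mapping and the "staircase" structure of the $\gamma$ variables that makes branching more balanced.

```python
def lambda_to_gamma(lam):
    """gamma_{jl} = sum_{k >= l} lambda_{jk}, cf. (35); each row becomes non-increasing."""
    return [[sum(row[l:]) for l in range(len(row))] for row in lam]

def gamma_to_lambda(gam):
    """Inverse map: lambda_{jl} = gamma_{jl} - gamma_{j,l+1}, and lambda_{j,n_j} = gamma_{j,n_j}."""
    return [[row[l] - (row[l + 1] if l + 1 < len(row) else 0) for l in range(len(row))]
            for row in gam]

lam = [[0, 1, 0]]                      # the second of three cut points is selected
gam = lambda_to_gamma(lam)             # [[1, 1, 0]]
assert gamma_to_lambda(gam) == lam
```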

In order to reformulate M1 using Theorem 15, we need to take care of the computation of $\sum_{l=1}^{n_j} \lambda_{jl}\, c_{jl}$, $j \in J$, appearing in (19). To this aim, we define the constants $o_{jl}$, $j \in J$, $l = 1,\ldots,n_j$, as
\[ o_{j,l+1} = c_{j,l+1} - c_{jl}, \quad j \in J,\ l = 1,\ldots,n_j - 1, \qquad o_{j1} = c_{j1}, \quad j \in J. \tag{38} \]
Each $o_{j,l+1}$ measures the distance between two consecutive cut points $c_{jl}$ and $c_{j,l+1}$. It follows immediately from (38) and the correspondence (35) (the partial sums of the $o_{jl}$ telescope to the cut points) that
\[ \sum_{l=1}^{n_j} o_{jl}\,\gamma_{jl} = \sum_{l=1}^{n_j} c_{jl}\,\lambda_{jl}, \quad j \in J. \tag{39} \]
Lemma 16 follows.
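A quick numerical check of identity (39) with illustrative cut points:

```python
c = [1.0, 2.5, 4.0]                                        # cut points c_{j1} < c_{j2} < c_{j3}
o = [c[0]] + [c[l] - c[l - 1] for l in range(1, len(c))]   # (38): o = [1.0, 1.5, 1.5]
lam = [0, 0, 1]                                            # lambda selects the third cut point
gam = [sum(lam[l:]) for l in range(len(lam))]              # gamma_{jl} = sum_{k >= l} lambda_{jk}
lhs = sum(o[l] * gam[l] for l in range(len(c)))            # 1.0 + 1.5 + 1.5
rhs = sum(c[l] * lam[l] for l in range(len(c)))            # 4.0
assert lhs == rhs
```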

Lemma 16 Let $\bar{l}^k(j) = \max\{1 \le l \le n_j \mid \beta^k_{jl} = 1\}$, $j \in J$, $\omega^k \in \check{\Omega}^-_B$. The feasible region defined by (4); (14)-(15); (18); (19) is equivalent to the one defined by the system of inequalities
\[ \sum_{j\in J} \gamma_{j,\bar{l}^k(j)+1} \ge 1, \quad k \in \check{K}^- \tag{40} \]
\[ \sum_{j\in J} s_{ij}\, x_{j_1} x_{j_2} \Big( \sum_{l=1}^{n_j} \gamma_{jl}\, o_{jl} \Big) \le d_i, \quad i \in I \tag{41} \]
\[ (4);\ (32)-(34). \]
The mixed-integer trilinear problem M2-1
\[ \text{M2-1}: \quad \max\ q^T x \quad \text{subject to } (2);\ (4);\ (32)-(34);\ (40)-(41) \]
is equivalent to M1.

We now present reformulations of M2-1 using the McCormick reformulation of bilinear terms [48] in various ways. This assumes that the variables involved in the substitution are bounded, a typical situation. In the application we consider, we always have $0 \le x_{j_1} \le u_{j_1} < \infty$ for all $j_1 = 1,\ldots,r$. We thus present the McCormick reformulation for that particular case. We linearize each mixed-integer bilinear term $x_{j_1}\gamma_{jl}$ in (41) and introduce an auxiliary decision variable $y_{jl}$ that the four McCormick inequalities
\[ y_{jl} \le u_{j_1}\gamma_{jl}, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j \tag{42} \]
\[ y_{jl} \le x_{j_1}, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j \tag{43} \]
\[ y_{jl} \ge x_{j_1} + u_{j_1}\gamma_{jl} - u_{j_1}, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j \tag{44} \]
\[ y_{jl} \ge 0, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j \tag{45} \]
force to take the value $x_{j_1}\gamma_{jl}$. Indeed, (42) and (45) guarantee that $y_{jl}$ is equal to 0 if $\gamma_{jl} = 0$, while (43) and (44) make sure that $y_{jl} = x_{j_1}$ if $\gamma_{jl} = 1$. This, in turn, allows us to replace the trilinear term $x_{j_1} x_{j_2}\gamma_{jl}$ by the bilinear one $x_{j_2} y_{jl}$. The above results lead to the MINLP problem M2-1-MC presented in Lemma 17.
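The McCormick step is exact here because $\gamma_{jl}$ is binary and $x_{j_1}$ is bounded; a small sketch (our own helper, illustrative values) shows that the bounds (42)-(45) collapse onto the product:

```python
def mccormick_bounds(x, gamma, u):
    """Bounds that (42)-(45) put on y = x * gamma when 0 <= x <= u and gamma is binary."""
    lo = max(0.0, x + u * gamma - u)   # (44) and (45)
    hi = min(u * gamma, x)             # (42) and (43)
    return lo, hi

assert mccormick_bounds(3.5, 1, 10.0) == (3.5, 3.5)   # gamma = 1 forces y = x
assert mccormick_bounds(3.5, 0, 10.0) == (0.0, 0.0)   # gamma = 0 forces y = 0
```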

Lemma 17 The mixed-integer nonlinear programming problem M2-1-MC
\[ \text{M2-1-MC}: \quad \max\ q^T x \]
\[ \text{subject to} \quad \sum_{j\in J}\sum_{l=1}^{n_j} s_{ij}\, x_{j_2}\, y_{jl}\, o_{jl} \le d_i, \quad i \in I \tag{46} \]
\[ (2);\ (4);\ (32)-(34);\ (40);\ (42)-(45) \]
is equivalent to M1.

Problem M2-1-MC is also an MINLP problem whose continuous relaxation is in general non-convex and in which the set partitioning constraints (14) are removed and some binary variables are fixed.

In M2-1, if $s_{ij} \ge 0$, we introduce the auxiliary variables $e_j$, $j \in J$, constrained by
\[ e_j \ge \sum_{l=1}^{n_j} \gamma_{jl}\, o_{jl}, \quad j \in J, \tag{47} \]
in order to reduce the number of trilinear terms in the inequalities (41). Recall that $x \in \mathbb{R}^{r_1}_+$.

This yields the mixed-integer nonlinear programming problem M2-2
\[ \text{M2-2}: \quad \max\ q^T x \]
\[ \text{subject to} \quad \sum_{j\in J} s_{ij}\, x_{j_1} x_{j_2}\, e_j \le d_i, \quad i \in I \tag{48} \]
\[ (2);\ (4);\ (32)-(34);\ (40);\ (47), \]
equivalent to M1 and M2-1.

3.3 Mixed-Integer Linear Reformulations with Trilinear Terms Including at Least Two Binary Variables

In this section, we consider the cases where the trilinear terms $x_{j_1} x_{j_2}\gamma_{jl}$ include at least two binary variables. Recall that the variables $\gamma_{jl}$ are always binary. Using two McCormick reformulations, we derive an MILP reformulation for this case. Note that the MINLP formulations derived in Sections 3.1 and 3.2 remain valid.

In each trilinear term $x_{j_1} x_{j_2}\gamma_{jl}$, one of the two variables $x_{j_1}$ or $x_{j_2}$ is assumed to be binary. Without loss of generality, we assume that this variable is always $x_{j_1}$. As in Section 3.2, we start with the linearization of each bilinear term $x_{j_1}\gamma_{jl}$ in (19) by introducing the auxiliary decision variable $y_{jl}$ and the McCormick inequalities (42)-(45). As $y_{jl}$ is the product of two binary variables, it is automatically binary. We can thus apply a second round of McCormick reformulation, introducing variables $v_{jl}$ for the bilinear terms $y_{jl} x_{j_2}$ and the inequalities:
\[ v_{jl} \le x_{j_2}, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j \tag{49} \]
\[ v_{jl} \le y_{jl}, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j \tag{50} \]
\[ v_{jl} \ge y_{jl} + x_{j_2} - 1, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j \tag{51} \]
\[ v_{jl} \ge 0, \quad j = (j_1,j_2) \in J,\ l = 1,\ldots,n_j. \tag{52} \]


The two successive series of McCormick inequalities lead to the MILP problem M3.

Lemma 18 The mixed-integer linear programming problem M3
\[ \text{M3}: \quad \max\ q^T x \]
\[ \text{subject to} \quad (2);\ (4);\ (32)-(34);\ (40);\ (42)-(52) \]
\[ \sum_{j\in J} s_{ij} \Big( \sum_{l=1}^{n_j} v_{jl}\, c_{jl} \Big) \le d_i, \quad i \in I \tag{53} \]
is equivalent to the probabilistic programming problem QM in which, for all $(j_1,j_2) \in J$, at least one of the two variables $x_{j_1}$ and $x_{j_2}$ is binary.

4 Application Problems

In this section we describe two applications used for the computational testing of the reformulations presented in Section 3. Algorithmic methods and specific algorithmic developments for solving these instances are described in Section 5. The problems described below are selected so that, depending on the reformulation used, we obtain a problem of type QM with trilinear terms involving either (i) one continuous and two binary variables (Section 4.1), or (ii) two continuous and one binary variable (Section 4.2), or (iii) three binary variables (Section 4.3).

4.1 Probabilistic Facility Location and Assignment with Random Demand: Trilinear Terms with One Continuous and Two Binary Variables

In this section, we study a class of facility location problems in which (i) the demand is stochastic [5, 13, 16, 35]; (ii) a fixed number of facilities can be opened anywhere in the Euclidean plane [4, 10, 6]; (iii) the distances are measured with the Manhattan (or L1) metric [23, 36]; (iv) the facilities are capacitated; and (v) each customer is served by a single facility.

The objective is to minimize an upper bound $U$ on the weighted total distance (i.e., the sum of the product of the demand of each customer times the distance to the facility serving that customer) such that the bound $U$ is satisfied with a specified reliability level $p$. A related model was studied in [16], in which the demand is normally distributed and a set of candidate locations to open the facilities is pre-defined.

Let $D$ be the set of demand points and $M$ be the number of facilities to be opened. The random demand originating from $d$ is denoted by $\xi_d$. The maximum demand level stemming from $d$ is given by $U_d$. The parameter $C_i$ is the capacity of facility $i$. The binary variable $y_{id}$ takes value 1 if the demand point $d$ is served by facility $i$. The probabilistic total weighted distance is $t = \sum_{d\in D} t_d$, with $t_d$ denoting the probabilistic weighted distance to deliver to demand point $d$.

The facilities can be located anywhere. The Cartesian coordinates of facility $i$ are represented by a pair of continuous decision variables $(a_i, b_i)$, $i = 1, 2, \ldots, M$, while the known coordinates of demand point $d$ are the constants $(a_d, b_d)$, $d \in D$. The Manhattan distance $h_{id}$ between facility $i$ and demand point $d$ is a non-negative continuous decision variable
\[ h_{id} = |a_i - a_d| + |b_i - b_d|, \quad i = 1,\ldots,M,\ d \in D. \tag{54} \]

This distance function can be linearized with the introduction of non-negative auxiliary variables $s_{id}$ and $s'_{id}$ and using the constraints (55)-(59):


\[ s_{id} \ge a_i - a_d, \quad i = 1,\ldots,M,\ d \in D \tag{55} \]
\[ s_{id} \ge a_d - a_i, \quad i = 1,\ldots,M,\ d \in D \tag{56} \]
\[ s'_{id} \ge b_i - b_d, \quad i = 1,\ldots,M,\ d \in D \tag{57} \]
\[ s'_{id} \ge b_d - b_i, \quad i = 1,\ldots,M,\ d \in D \tag{58} \]
\[ h_{id} = s_{id} + s'_{id}, \quad i = 1,\ldots,M,\ d \in D \tag{59} \]
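The linearization is the standard absolute-value trick: the smallest $s_{id}$ compatible with (55)-(56) is exactly $|a_i - a_d|$, as the toy check below illustrates (our own helper).

```python
def min_feasible_s(a_i, a_d):
    """Smallest s satisfying s >= a_i - a_d and s >= a_d - a_i, i.e. (55)-(56)."""
    return max(a_i - a_d, a_d - a_i)

assert min_feasible_s(3.0, 7.5) == abs(3.0 - 7.5)   # equals the Manhattan component 4.5
```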

The problem is thus formulated as:
\[ \text{SFL1}: \quad \min\ \sum_{d\in D} t_d \tag{60} \]
\[ \text{subject to} \quad (55)-(59) \]
\[ \mathbb{P}\Big( \sum_{i=1}^{M} \xi_d\, y_{id}\, h_{id} \le t_d,\ d \in D \Big) \ge p \tag{61} \]
\[ \sum_{i=1}^{M} y_{id} = 1, \quad d \in D \tag{62} \]
\[ \sum_{d\in D} y_{id}\, U_d \le C_i, \quad i = 1,\ldots,M \tag{63} \]
\[ l_{a_i} \le a_i \le u_{a_i}, \quad i = 1,\ldots,M \tag{64} \]
\[ l_{b_i} \le b_i \le u_{b_i}, \quad i = 1,\ldots,M \tag{65} \]
\[ y_{id} \in \{0,1\}, \quad i = 1,\ldots,M,\ d \in D \tag{66} \]

The plane where the facilities can be opened is delineated, which provides lower and upper bounds for the coordinates of the facilities (see (64) and (65)). The constraints (62) stipulate that each demand point is assigned to and served by exactly one facility. The constraints (63) ensure that the capacity of a facility is never exceeded. Using the proposed Boolean approach, the reformulation of the above stochastic problem takes the form of an MINLP problem and includes trilinear terms involving the product of two binary variables by one continuous variable.

The instances handled in the numerical section are based on a real-life disaster network (see, e.g., [30, 55]) that corresponds to the US Southeastern area. This region is frequently hit by hurricanes and it is crucial to have a network enabling a timely provision of emergency commodities (water, medical kits, food) [55]. The network includes 30 demand points (Brownsville, Corpus Christi, San Antonio, Dallas, Houston, Little Rock, Memphis, Biloxi, Jackson, Monroe, Lake Charles, Baton Rouge, Hammond, New Orleans, Mobile, Birmingham, Nashville, Atlanta, Savannah, Columbia, Charlotte, Wilmington, Charleston, Tallahassee, Lake City, Jacksonville, Orlando, Tampa, Miami, and Key West). We have generated problem instances that differ in terms of the number of scenarios (5,000, 10,000), the reliability level enforced (0.8, 0.85, 0.9, 0.95), and the number of warehouses (2, 3, 4) that can be opened. The AMPL and nl files corresponding to all instances used in this paper can be downloaded from [47].

4.2 Probabilistic Facility Location with Random Demand: Trilinear Terms with Two Continuous and One Binary Variable

The difference between the model proposed in this section and the one proposed in Section 4.1 is that each client may be served by more than one facility. The basic reformulation includes mixed-integer trilinear terms involving one binary variable and two continuous ones. The decision variables $y_{id}$ are now defined as continuous with lower bound 0 and upper bound 1.

The problem is formulated as:
\[ \text{SFL2}: \quad \min\ \sum_{d\in D} t_d \]
\[ \text{subject to} \quad (55)-(59);\ (61)-(65) \]
\[ y_{id} \ge 0, \quad i = 1,\ldots,M,\ d \in D \tag{67} \]

As in Section 4.1, the test instances correspond to the real disaster network considered in [55].

4.3 Probabilistic Response Model to Propagation of Epidemics: Trilinear Terms with Three Binary Variables

We consider here a stochastic response model that limits the propagation of epidemics or viral attacks in a network. The recent literature devoted to this area is abundant [22, 25, 37, 54]. The Centers for Disease Control and Prevention plays an instrumental role in supporting this research and has, for example, developed resource allocation to keep the HIV incidence under control [37]. The time at which the disease is identified, its lethality level, and the rate at which it propagates are difficult to estimate. Decisions have to be taken under a substantial amount of uncertainty [54], and stochastic models explicitly accounting for this uncertainty are needed.

Consider a network modeled as an undirected graph $G = (V, E)$ with vertex set $V$ and edge set $E$. The set of nodes may correspond to locations, users, or patients, and the edges represent connections or propagation links. Let $V_c \subseteq V$ be a set of infected nodes and $V_u = V \setminus V_c$ be the set of nodes susceptible to be infected. The objective is to maximize the utility of the network, defined as the number of currently non-infected nodes that can remain open. Decisions to be taken concern the nodes that must be shut down so that the tolerable level of threat is not exceeded. A binary decision variable $y_i$, $i \in V_u$, defines whether node $i$, susceptible to be contaminated, can remain open ($y_i = 1$) or must be closed ($y_i = 0$). The random rate at which a disease propagates from node $i$ to node $j$ is denoted by $\xi_{ij}$. The acceptable threat level at node $i$ is upper bounded by the parameter $t_i$, while $t_{ij}$ is the maximal threat level admissible for the arc $(i,j)$. The notation $r^0_i$ refers to the level of threat of node $i$ if $i$ is isolated. The set $N(i) = \{j \in V : (i,j) \in E\}$ contains the nodes directly connected to $i$. The problem is formulated as a probabilistically constrained stochastic programming problem including bilinear stochastic inequalities:

\[ \text{EPID}: \quad \max\ \sum_{i\in V_u} y_i \tag{68} \]
\[ \text{subject to} \quad \mathbb{P}\Big( r^0_i + \sum_{j\in N(i)} \xi_{ij}\, y_i y_j \le t_i;\ \ \xi_{ij}\, y_i y_j \le t_{ij},\ j \in N(i) \Big) \ge p, \quad i \in V_u \tag{69} \]
\[ y \in \{0,1\}^{|V_u|}. \tag{70} \]

The reformulations M1, M2-1, and M2-2 of the above chance constraint contain trilinear terms, each involving the product of three binary variables. Each trilinear term is multiplied by a coefficient $c_{ijl} \in [0,1]$, which can be understood as a possible value taken by the random rate at which node $i$ can be infected from node $j$ (see [25]).


The test instances are similar to those used in recent epidemics papers [22, 25]. The instances differ in terms of the number of scenarios (5,000 or 10,000), the reliability level enforced (0.8, 0.85, 0.9, 0.95), the density of the graph (10%, 20%), the maximum degree of each node in the graph (5, 10), and the ratio of the number of infected nodes to the number of uncompromised nodes (10%, 20%, 40%).

5 Algorithmic Methods

The reformulations presented in Sections 3.1 and 3.2 are nonconvex MINLP problems. Solving problems of this type to optimality is in general difficult. Many solution techniques for MINLP have been proposed. The surveys [8, 26, 43] give a good overview of several of these methods, and additional information can be found in [31, 60, 61].

The method of choice for solution is spatial Branch-and-Bound, based on convexification using linear relaxations of the problem and applying usual Branch-and-Bound or Branch-and-Cut techniques for solving the resulting MILP. Several software packages based on this method are available, either commercial (Baron [57], Lindoglobal [44]) or open-source (Couenne [9, 17]). The complexity of dealing with nonlinear problems makes the performance of the solvers very unpredictable. In addition, mathematically equivalent reformulations of an instance can also result in solution times orders of magnitude apart. The goal of this section is to illustrate how reformulation selection and customization of a solver can help solve problems involving chance constraints. We base our experiments on the open-source solver Couenne, as having access to the source code allows for experiments that could not be conducted with the other software. Due to the large number of implementation options, there is no guarantee that conclusions drawn for Couenne are valid for another software package, but it is likely that experimenting similarly with another software package would yield a wide range of solution times and that default settings can be dramatically improved when solving a very specific type of MINLP instances, such as those of interest here.

5.1 Couenne Reformulation and Bound Tightening

This section presents very briefly the framework implemented in Couenne. For more details, the interested reader is referred to [7, 9, 17]. Couenne has many options that can be activated or disabled using an option file named couenne.opt. A file with default settings is provided with the code. All comments in this section refer to the code in [18].

We assume that the reader is familiar with the Branch-and-Bound framework for MILP, including reliability branching [1] and strong branching [2]. The extension of Branch-and-Bound to nonlinear programs requires two essential additional components. The first one is the construction of a linear relaxation of the feasible set, the second one is dealing with the fact that branching on continuous variables might be needed (for example, to find the global optimum of a nonconvex problem having only continuous variables).

To construct the linear relaxation of the problem, Couenne first builds an acyclic directed graph representing the nonlinear expressions appearing in the instance. Each node $v$ of the graph represents a mathematical operation $f^v$ (from a fixed set) using all the immediate successors $x^v_1, \ldots, x^v_t$ of $v$. A variable $x^v$ is associated with $v$ and the equality $x^v = f^v(x^v_1, \ldots, x^v_t)$ is associated with $v$. Since the possible operations $f^v(\cdot)$ are known, a linear programming convexification of the feasible set of each of them is also known. This linearization is essentially the convex hull of feasible solutions of $x^v = f^v(x^v_1, \ldots, x^v_t)$ and depends on the upper and lower bounds of all the involved variables. If the function $f^v$ is not linear, Couenne associates a nonlinear object with the node $v$. Note that imposing integrality on a variable is one such nonlinearity and the corresponding object is called an integer object. Once all auxiliary variables $x^v$ are introduced, we get the Couenne reformulation of the problem. Tighter bounds on the variables lead to tighter linear relaxations, making bound tightening a crucial operation. Several options for performing bound tightening are implemented.

Bounds on variable $x_k$ can be obtained by minimizing and maximizing the value of $x_k$ over the current linear relaxation of the problem. This procedure is called optimality based bound tightening (OBBT). It is the most effective bound-tightening technique, but it is expensive as it involves solving $2n$ LPs for a problem with $n$ variables. The frequency at which OBBT is used is controlled by the value $\ell$ of the parameter log_num_obbt_per_level. At a node at depth $k$ of the enumeration tree, OBBT is applied with probability $\min\{1, \ell\, 2^{-k}\}$. The default settings for OBBT are:

optimality_bt yes
log_num_obbt_per_level 1

A weaker but much faster procedure, called feasibility based bound tightening (FBBT), is also available. It amounts to bound propagation through the expression tree. The default setting for FBBT is:

feasibility_bt yes

Finally, an aggressive FBBT procedure, called ABT, using information from the solution of the nonlinear continuous relaxation of the problem is also available. The frequency at which ABT is used is controlled by the value $\ell$ of the parameter log_num_abt_per_level in a way similar to the control of OBBT. The default settings for ABT are:

aggressive_fbbt yes
log_num_abt_per_level 2

5.2 Precision on the solution

The requirement on the precision of the solution is set to $10^{-6}$, meaning that any linear or nonlinear expression in the Couenne reformulation must have an absolute violation smaller than that number for a solution to be considered feasible. Ideally, one would like to find a solution feasible (within tolerances) for the initial formulation, but as Couenne works with its reformulation, it might obtain a solution satisfying all constraints of the reformulation within tolerances, while taking the values of the original variables in that solution does not give a solution of the original formulation within tolerances. For example, if the original formulation has a constraint $x_1 x_2 + x_3 x_4 = 1$, the Couenne reformulation is
\[ w_1 = x_1 x_2, \qquad w_2 = x_3 x_4, \qquad w_1 + w_2 = 1. \]

The solution x1 = x3 = 0.5, x2 = x4 = 1 + 10^-6 + 10^-7, w1 = w2 = 0.5 is feasible within tolerances for the reformulation, but x1 = x3 = 0.5, x2 = x4 = 1 + 10^-6 + 10^-7 is not feasible within tolerances for the initial formulation.
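
A quick numeric check of this example (illustration only, using the 10^-6 absolute tolerance):

eps = 1e-6
x1 = x3 = 0.5
x2 = x4 = 1 + 1e-6 + 1e-7
w1 = w2 = 0.5

# Every constraint of the reformulation is violated by less than eps ...
print(abs(w1 - x1 * x2) <= eps, abs(w2 - x3 * x4) <= eps, abs(w1 + w2 - 1) <= eps)  # True True True
# ... but the original constraint x1*x2 + x3*x4 = 1 is violated by about 1.1e-6.
print(abs(x1 * x2 + x3 * x4 - 1) <= eps)   # False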


On the other hand, the Couenne reformulation sometimes imposes more precision on the original variables than the initial formulation. For example, if x1, x2 are binary variables while x3 is continuous with 0 ≤ x3 ≤ 100, and the initial formulation contains the equation (100x1 + x2)x3 = 1, the Couenne reformulation is

w1 = 100x1 + x2

w1x3 = 1

with the additional requirement that w1 must be integer. Adding this requirement is in general beneficial, as it strengthens bound tightening operations and branching. However, the solution x1 = 1 + 10^-6, x2 = −10^-6, x3 = 10^-2 is within tolerances for the initial formulation, while 100x1 + x2 = 100 + 10^-4 − 10^-6 is not within precision to be considered integer.

Couenne of course checks whether any solution it runs into is feasible within tolerance for the initial formulation. Couenne updates the upper bound when this is the case, but it still has to process the node if its lower bound does not match the upper bound. For most problems, this has little practical impact, as the objective values of slightly infeasible solutions are close to each other. However, to avoid having Couenne sometimes branch very deep to obtain matching upper and lower bounds, we set a tolerance for the final gap between upper and lower bounds to 0.5% of the upper bound using the option

bonmin.allowable_fraction_gap 0.005

5.3 Scaling of Constraints and Objective Function

As the precision of the solution is set to 10^-6 and Couenne does not rescale the constraints or variables, the scaling of the original constraints is very important. For the facility location problem instances, the demand for commodities varies widely between locations (between 10 and 15,000 units), and its magnitude differs from the values taken by the decision variables representing the coordinates of the opened facilities and the distances between demand points and facilities. We have rescaled the demand so that it takes values in (0, 100]. The problem instances corresponding to the epidemic propagation model of Section 4.3 did not need any scaling.
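
A minimal sketch of this kind of rescaling; the exact scheme below is an assumption, and any scaling mapping the demands into (0, 100] plays the same role.

# Map raw demands, here spanning 10 to 15000 units, into (0, 100] so that all
# constraint coefficients have comparable magnitude under the 1e-6 tolerance.
raw_demand = {1: 10.0, 2: 480.0, 3: 15000.0}            # hypothetical data
scale = max(raw_demand.values()) / 100.0
demand = {d: v / scale for d, v in raw_demand.items()}
print(demand)   # roughly {1: 0.067, 2: 3.2, 3: 100.0}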

5.4 Strong Branching vs. Osi Branching

Deciding on which variable to branch is a crucial and difficult decision, even more so in spatial Branch-and-Bound than in "regular" Branch-and-Bound, where branching variables are usually selected among the integer ones. At a given node of the Branch-and-Bound tree, Couenne computes the optimal solution x of the linear relaxation at the node, scans the nonlinear objects, and flags the violated ones (i.e., those for which x^v ≠ f^v(x^v_1, x^v_2, . . . , x^v_t)). It then has two major options to select the branching variable:

• Osi branching. With this option, Couenne essentially selects a most violated nonlinear object, applies rules to select one variable involved in the object, and chooses a way to split the current domain of that variable in two parts. This is very fast, but usually provides a poor choice of branching variable. On easy problems, this option might be the fastest, but on hard problems this is rarely the case.

• Strong branching. As for MILP, strong branching in Couenne works by building a list of candidate branchings, computing the lower bounds resulting from each of the branchings, and then selecting the most promising one according to a scoring function. The candidate selection is quite involved. It may use essentially two methods:

– Reliability Branching. Introduced by Achterberg et al. [1] in the context of MILP, it can be generalized to MINLP, but this generalization is far from unique. It relies on the idea of pseudo-costs, but the computation of reliable pseudo-costs is very challenging for MINLP. The Couenne implementation of pseudo-costs is unsatisfactory and we disable it, turning reliability branching into strong branching.

– Violation Transfer. Proposed by Tawarmalani and Sahinidis [61], its implementation in Couenne is a variant of the original version, due to internal constraints on the branching selection process in Couenne. The basic idea is to scan the violated objects and attribute part of the violation of each object to each of the variables involved in the corresponding expression. The reader may consult [9, 61] for details.

The choice between Osi and Strong branching is made using one of the two options

variable_selection osi-simple

variable_selection osi-strong

With the latter option, candidates can be selected using reliability branching or violation transfer, using respectively one of the options

branching_object var_obj

branching_object vt_obj

If strong branching is used, the parameters selecting the number of candidates to consider at the root (say, 30) and at other nodes of the enumeration tree (say, 20) are set using the options

bonmin.number_strong_branch_root 30
bonmin.number_strong_branch 20

Disabling the use of pseudo-costs for all practical purposes can be done using

bonmin.number_before_trust 1000000
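
For concreteness, the sketch below shows a generic strong-branching loop: the two child relaxations of each candidate are evaluated and the lower-bound improvements are combined by a scoring function. The weighted-sum score shown is a common choice in the MILP literature, not necessarily the exact rule used by Couenne, and solve_child_lb is a hypothetical callback.

# Generic strong-branching sketch (not Couenne's exact rule).
def strong_branching_choice(candidates, current_lb, solve_child_lb, mu=1.0 / 6.0):
    best, best_score = None, float("-inf")
    for cand in candidates:
        gain_down = max(solve_child_lb(cand, "down") - current_lb, 0.0)
        gain_up = max(solve_child_lb(cand, "up") - current_lb, 0.0)
        # Favor candidates that improve the bound on both children.
        score = (1.0 - mu) * min(gain_down, gain_up) + mu * max(gain_down, gain_up)
        if score > best_score:
            best, best_score = cand, score
    return best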

5.5 Branching First on Classes of Variables

When using strong branching, the candidate selection is a rule involving a mix of a priority order on the nonlinear objects (integer objects being preferred) and a usefulness criterion that tries to estimate the impact of branching on the object. It turns out that, for some classes of problems, selecting as candidates for strong branching only violated integer objects when some are violated, and using other nonlinear objects only when all integer objects are satisfied, is much better than the default candidate selection. While it is possible to select different priorities for continuous and integer variables in the couenne.opt file, this does not guarantee that branching will occur on an integer variable when a violated integer object exists. It only guarantees that the strong branching candidate list will first be filled with violated integer objects and, if space in the list is still available, that additional variables involved in violated non-integer objects might then be selected. We implemented the requirement that branching on an integer variable with fractional value in the current LP solution must occur if one exists, by modifying the strong branching candidate selection. We call this modification Branching First on Integer (BFI).
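
A sketch of the BFI rule (illustration only; the actual change is a patch to Couenne's strong branching candidate selection, and the data structures below are hypothetical):

# Restrict the strong-branching candidate list to violated integer objects
# whenever at least one integer variable is fractional in the LP solution.
def bfi_candidates(violated_objects, lp_value, int_tol=1e-6):
    # violated_objects: list of (variable, is_integer) pairs flagged by the solver.
    fractional = [(v, is_int) for (v, is_int) in violated_objects
                  if is_int and abs(lp_value[v] - round(lp_value[v])) > int_tol]
    return fractional if fractional else violated_objects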

Branching candidates selected by Couenne are always among variables appearing in a nonlinear object. The justification for this is that all linear constraints of the problem are in the linear relaxation used by Couenne, so branching to make sure that all nonlinear expressions are satisfied is sufficient for correctness. But in specific situations, branching on variables that are not involved in any nonlinear expression is far superior (see Section 6.1). Modifications to the code of Couenne to implement this option are nontrivial. We coded this option by modifying the strong branching candidate selection and the computation of violations for nonlinear objects, and call it Branching for Facility Location (BFL).

5.6 Tight Variable Bounds

Variables with no lower or no upper bounds create numerical problems for bound propagation. As much as possible, giving finite bounds on all variables is advisable and can make a huge performance difference. For each instance considered in this paper, we have computed the tightest lower and upper bounds that can be assigned a priori to each decision variable and have explicitly listed these bounds in the initial formulation provided to Couenne.

6 Computational Results

In this section, we evaluate the empirical strength of the proposed reformulations and algorithmic methods for the applications described in Section 4.

The binarization process employed for deriving the proposed MILP formulations is implemented in C++. The AMPL modeling language is used to formulate the mathematical programming problems. The mixed-integer programming formulations are solved with the Cplex 12.6 solver, and the mixed-integer nonlinear programming ones (with nonconvex continuous relaxations) are solved with the Couenne solver. The machine used is a 64-bit PowerEdge R515 with twelve AMD Opteron 4176 2.4GHz processors, of which we used only one, 64GB of memory, the Linux Fedora 19 operating system, and the gcc 4.8.2 20131212 (Red Hat 4.8.2-7) compiler.

6.1 Stochastic Facility Location Model – Part I: Trilinear Terms with Two Continuous and One Binary Variables

This section describes the steps typically needed when trying to evaluate the strength of various MINLP formulations and algorithmic options using the Couenne solver. We start with the default solvers dubbed St and Tr (out-of-the-box codes from [19] (stable version) and [18] (trunk version)) and continue with two new variants Trb and Trb-I of the latter, both altering the strong branching code, with Trb-I in addition using branching first on integer variables (BFI) as described in Section 5.5. All solvers use strong branching for branching decisions. Unless otherwise specified, the parameters for maximum cpu time, branching object type, relative optimality gap, strong branching, and bound tightening are set as follows (see Section 5.1 for the explanation of the parameters); all other parameters are set to their default values:


• time_limit 9000

• bonmin.allowable_fraction_gap 0.005

• branching_object vt_obj

• bonmin.number_strong_branch_root 20

• bonmin.number_strong_branch 2

• bonmin.number_before_trust 1000000

• feasibility_bt yes

• optimality_bt no

6.1.1 Preliminary Results with Default Solvers

For the Facility Location Problem with possible split satisfaction of demands described in Section 4.2, the basic formulation is M1, containing trilinear terms that are products of a binary variable (γ_jℓ, from the discretization of the stochastic variable) with two continuous variables (h_id, the distance between facility i and demand point d, and y_id, the fraction of the demand of customer d satisfied by facility i). As discussed in Section 3.2, the reformulations M2-2 are expected to work better than the reformulation M1. However, the results obtained by the four codes St, Tr, Trb, and Trb-I on M1 and M2-2 are horrible: none of the codes solves any of the instances within the time limit of 9,000 seconds.

6.1.2 Nonlinear Branch-and-Bound Algorithms Trb-L and Trb-L-I

When studying the branching decisions made during the tests with Trb and Trb-I, we noticed that the codes often branch on variables h_id or y_id. Due to the bound propagation techniques used by Couenne and the presence of equations (54), it should be much better to branch on variables a_i or b_i instead of h_id. Indeed, a_i and b_i are involved in equation (54) for all customers, and reducing the range of one of them will, in general, reduce the ranges of the variables h_id for all d. On the other hand, branching on variable h_id for one particular d might or might not reduce the range of variable a_i or b_i and is likely to induce far less bound tightening. This option is called Branching for Facility Location (BFL).

A second improvement is generated by the following simple observation. Let R be the ranges of all variables y_id at a node of the enumeration tree. Let G(R) be the bipartite graph whose vertices correspond to the facilities and demand points and that contains an edge between facility i and demand point d if and only if the range of variable y_id excludes both 0 and 1, i.e., if and only if y_id must take a fractional value in any feasible solution. Then there exists an optimal solution with ranges S such that G(S) is acyclic. Thus, any node of the enumeration tree for which G(R) is not acyclic can be pruned.
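
A sketch of this pruning test (illustration only, assuming the current ranges of the fraction variables are available as a dictionary):

# Prune the node if the bipartite graph of facility/demand pairs whose fraction
# variable is forced to be strictly fractional contains a cycle.
def can_prune(ranges, tol=1e-9):
    # ranges: {(i, d): (lo, hi)} current bounds on the fraction of demand d served by facility i.
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for (i, d), (lo, hi) in ranges.items():
        if not (lo > tol and hi < 1.0 - tol):   # not forced to be fractional
            continue
        a, b = find(("facility", i)), find(("demand", d))
        if a == b:                              # this edge closes a cycle
            return True
        parent[a] = b
    return False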

Based on these two observations, we modified the codes of Trb and Trb-I such that, when Couenne wishes to include a variable h_id in its strong branching candidates, it includes either a_i or b_i instead (the one with the largest range), and prunes nodes according to the second observation. This gives us two new codes specifically designed to solve these facility location instances, named Trb-L and Trb-L-I. Note that Trb-L-I applies BFL first and, if no variable is selected, then uses BFI, and then the regular candidate selection.


6.1.3 Results with Trb-L and Trb-L-I

We characterize each problem instance with the notation bB cC M p Ω, where b is the number of binary variables in the trilinear terms, c is the number of continuous variables in the trilinear terms, M is the number of facilities to be opened, p is the enforced reliability level, and Ω is the number of scenarios used to represent the joint distribution of the random variables.

Table 1: Results for Trb-L and Trb-L-I for formulations M1, M2-1, and M2-2.

                              Trb-L                          Trb-L-I
                      M1      M2-1     M2-2         M1      M2-1     M2-2
1B2C 2 95 5000      724.3     11.9      8.7       1825.0    13.4     12.6
1B2C 3 95 5000      507.3    353.1    331.3        335.6   621.5    797.2
1B2C 4 95 5000     9000     4700.3   3868.3       9000    3803.6   4302.1
1B2C 2 90 5000     9000      119.7     93.2       9000     254.2    212.1
1B2C 3 90 5000     9000     3743.5   2841.4       9000    9000     9000
1B2C 4 90 5000     9000     9000     9000         9000    9000     9000
1B2C 2 85 5000     9000      509.1    587.3       9000     532.2    571.7
1B2C 3 85 5000     9000     9000     9000         9000    9000     9000
1B2C 4 85 5000     9000     9000     9000         9000    9000     9000
1B2C 2 80 5000     9000     9000     4820.5       9000    3192.7   3393.4
1B2C 3 80 5000     9000     9000     9000         9000    9000     9000
1B2C 4 80 5000     9000     9000     9000         9000    9000     9000
1B2C 2 95 10000     422.2     15.2     14.0        524.5    21.8     17.5
1B2C 3 95 10000    9000      477.7    401.2       9000     492.0    488.9
1B2C 4 95 10000    9000     3585.7   3588.3       9000    4075.6   9000
1B2C 2 90 10000    9000      141.5    127.9       9000     187.2    253.4
1B2C 3 90 10000    9000     3356.7   3342.0       9000    4608.7   6936.6
1B2C 4 90 10000    9000     9000     9000         9000    9000     9000
1B2C 2 85 10000    9000      706.1    648.0       9000     771.1   1023.9
1B2C 3 85 10000    9000     9000     9000         9000    9000     9000
1B2C 4 85 10000    9000     9000     9000         9000    9000     9000
1B2C 2 80 10000    9000     9000     9000         9000    5176.6   4348.7
1B2C 3 80 10000    9000     9000     9000         9000    9000     9000
1B2C 4 80 10000    9000     9000     9000         9000    9000     9000

Average Time       7943.9   5238.4   4986.3       7986.9  5114.6   5431.6
Minimal Time        507.3     11.9      8.7        335.6    13.4     12.6
Maximal Time       9000     9000     9000         9000    9000     9000

Unsolved Instances   21       12       11           21      11       12

Table 1 clearly shows that both formulations M2-1 and M2-2 are superior to M1. Only 4 instances can be solved within the time limit with the M1 formulation. Solving M2-2 with Trb-L is clearly the formulation–algorithm combination that works best.

We evaluate the difficulty of solving a problem instance in terms of the computational time and the number of instances that could not be solved to optimality within the time limit. The cpu time and the number of unsolved instances increase with the number M of facilities to be opened and as the probability level p decreases. This is expected, as this is directly related to the number of trilinear terms in the reformulation.


On the other hand, note that the cpu time and the number of unsolved instances are not much impacted by the number of scenarios used to represent uncertainty. The first 12 instances in Table 1 and the last 12 can be paired in the obvious way such that the only difference within a pair is the number of scenarios used, 5,000 vs. 10,000. If both instances in a pair can be solved within the time limit, the instance using 10,000 scenarios requires about 25% more cpu time than its mate, far from the typical super-linear increase for other methods (see, e.g., the method proposed in [34] for linear stochastic inequalities with random right-hand sides and a deterministic technology matrix). This is a major advantage of the proposed method. It reflects the fact that the employed Boolean method provides reformulations in which the number of included binary variables does not depend directly on the number of scenarios. This further translates into the fact that the number of trilinear terms involving these binary variables is not a function of the number of scenarios.

When formulation M2-2 is passed to Couenne, it breaks the trilinear term h_id · y_id · γ_dℓ by creating a new variable for v_id = h_id · y_id and then a variable z_idℓ = v_id · γ_dℓ. Another possibility is to introduce in M2-2 a new variable v_idℓ = γ_dℓ · y_id, apply a McCormick reformulation on these equalities, and then introduce a variable z_idℓ = v_idℓ · h_id. This gives us formulation M2-2-MC1. A third possibility is to introduce in M2-2 a new variable v_idℓ = γ_dℓ · h_id, apply a McCormick reformulation on these equalities, and then introduce a variable z_idℓ = v_idℓ · y_id. This gives us formulation M2-2-MC2.
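
For reference (this restates the standard envelopes from [48], not anything specific to our implementation), the McCormick reformulation of a bilinear equality v = xy with x ∈ [x^L, x^U] and y ∈ [y^L, y^U] replaces it by the four inequalities

    v ≥ x^L y + y^L x − x^L y^L,        v ≥ x^U y + y^U x − x^U y^U,
    v ≤ x^U y + y^L x − x^U y^L,        v ≤ x^L y + y^U x − x^L y^U,

which describe the convex hull of the graph of v = xy over the box; when one of the factors is binary, as for γ_dℓ above, the relaxation is exact.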

Table 2 gives the results obtained by Trb-L on the formulations M2-2, M2-2-MC1, and M2-2-MC2. It is worth noting that Trb-L-I does not perform well on the M2-2-MC1 and M2-2-MC2 formulations, being unable to solve any of the instances. The performances on M2-2 are slightly better than with M2-2-MC2, and much better than with M2-2-MC1.

At first sight, it is surprising to observe such a difference between the reformulations M2-2-MC1 and M2-2-MC2. Both involve the incorporation of the same number of auxiliary variables due to the linearization of bilinear terms with the McCormick approach. One likely reason for the difference is that, since Trb-L looks first at violated nonlinear objects involving variables h_id, in M2-2-MC1 it might branch on the objects z_idℓ = v_idℓ · h_id while the variables v_idℓ are not yet set to values close to satisfying v_idℓ = γ_dℓ · y_id. This creates useless branches. On the other hand, in M2-2-MC2, Trb-L branches more logically first on the objects v_idℓ = γ_dℓ · h_id and, only when these objects are satisfied, moves on to branching on the objects z_idℓ = v_idℓ · y_id. This type of ordering information between objects is available in Couenne, but is not used in branching decisions. This observation might lead to improvements in branching decisions in nonlinear solvers, as it has relevance beyond the very specific instances at hand.

Next, we investigate the effect of changing some of the parameters of Trb-L. First, we look at three settings for the following parameters:

• bonmin.number_strong_branch_root: 30, 20, or 10

• bonmin.number_strong_branch: 10, 5, or 2.

This gives us 9 parameter settings, labeled (a, b), where a (resp. b) is the value for the first (resp. second) parameter. The results are given in Table 8 in the appendix. The variation in the results is not dramatic, with a difference of about 10% in computing time between the best and the worst setting, but the number of unsolved instances (8) is smallest for the setting (10, 10), the other variants having between 9 and 11 unsolved instances.

Finally, we look at variations of the bound tightening options of Couenne, namely


Table 2: Results for Trb-L on formulation obtained from M2-2 by various McCormick reformulations.

                     M2-2   M2-2-MC1   M2-2-MC2
1B2C 2 95 5000        8.7     1962.3       15.6
1B2C 3 95 5000      331.3     9000        806.2
1B2C 4 95 5000     3868.3     9000       9000
1B2C 2 90 5000       93.2     9000        140.7
1B2C 3 90 5000     2841.4     9000       4646.0
1B2C 4 90 5000     9000       9000       9000
1B2C 2 85 5000      587.3     9000       1008.4
1B2C 3 85 5000     9000       9000       9000
1B2C 4 85 5000     9000       9000       9000
1B2C 2 80 5000     4820.5     9000       3080.7
1B2C 3 80 5000     9000       9000       9000
1B2C 4 80 5000     9000       9000       9000
1B2C 2 95 10000      14.0     9000         16.3
1B2C 3 95 10000     401.2     9000        595.1
1B2C 4 95 10000    3588.3     1893.2     7347.2
1B2C 2 90 10000     127.9     9000         69.1
1B2C 3 90 10000    3342.0     9000       4826.0
1B2C 4 90 10000    9000       9000       9000
1B2C 2 85 10000     648.0     9000        624.2
1B2C 3 85 10000    9000       9000       9000
1B2C 4 85 10000    9000       9000       9000
1B2C 2 80 10000    9000       9000       5630.2
1B2C 3 80 10000    9000       9000       9000
1B2C 4 80 10000    9000       9000       9000

Average Time       4986.3     8410.6     5325.2
Minimal Time          8.7     1893.2       15.6
Maximal Time       9000       9000       9000

Unsolved Instances   11         22         11

• feasibility_bt: yes or no

• optimality_bt: yes or no.

This gives us 4 parameter settings, labeled "yy", "yn", "ny", and "nn", the first letter referring to the setting of feasibility_bt. The results are similar to those for varying the strong branching parameters (see Table 9 in the appendix), with the setting "yn" being the fastest by a small margin and solving 16 out of 24 instances within the time limit, all other variants solving fewer.

6.2 Stochastic Facility Location Model – Part II: Trilinear Terms with Two Binary and One Continuous Variables

When the demand of a customer is required to be satisfied from a single facility (see Section 4.1), we obtain a problem with trilinear terms identical to those in the previous section, except that each term now has two binary and one continuous variables. We start by testing the four codes described in the previous section, together with the Stable and Tr codes, on the formulation M2-2, which proved to be vastly superior to M1 in the previous section.

Table 3: Results for codes with formulation M2-2.

                    Stable       Tr    Trb-I      Trb   Trb-L-I    Trb-L
2B1C 2 95 5000        20.5     11.2      4.5     22.6       6.1     20.7
2B1C 3 95 5000       110.3    244.9     43.4    197.2      39.1   2489.9
2B1C 4 95 5000       446.8   9000      106.2    644.5      72.6   9000
2B1C 2 90 5000     15435.2     57.99    48.2     50.9      17.8   9000
2B1C 3 90 5000      1183.1    852.62    65.2    535.4      65.8   9000
2B1C 4 90 5000      9000     9000      888.1   9000       197.7   9000
2B1C 2 85 5000      9000      142.0    176.1    254.4      43.2   9000
2B1C 3 85 5000      9000     9000.3   1091.0   1512.7     425.6   9000
2B1C 4 85 5000      9000     9000     9000     9000      3621.5   9000
2B1C 2 80 5000      9000     1107.8   1226.9   2155.4     304.0   9000
2B1C 3 80 5000      9000     9000     7017.6   8070.9    4521.5   9000
2B1C 4 80 5000      9000     9000     9000     9000      9000     9000

2B1C 2 95 10000       25.2     12.2      8.8     18.2       7.1     38.4
2B1C 3 95 10000      148.4    543.4     41.1    188.4      39.4   9000
2B1C 4 95 10000      450.2   9000      103.9    472.0      82.5   9000
2B1C 2 90 10000      147.2     55.0     54.3    138.8      18.6   9000
2B1C 3 90 10000     1515.5   1102.8    116.2    710.9     122.9   9000
2B1C 4 90 10000     9000     4010.8    843.7   3033.8     375.0   9000
2B1C 2 85 10000     1080.3    350.1     59.9    491.4      78.6   9000
2B1C 3 85 10000     9000     2907.0   2055.1   2009.0     549.3   9000
2B1C 4 85 10000     9000     9000     9000     9000      9000     9000
2B1C 2 80 10000     9000     9000     1119.5   9000       387.5   9000
2B1C 3 80 10000     9000     9000     9000     9000      4721.3   9000
2B1C 4 80 10000     9000     9000     9000     9000      9000     9000

Average Time        5731.8   4599.9   2502.9   3479.4    1779.1   7981.2
Minimal Time          20.5     11.2      4.5     18.2       6.1     20.7
Maximal Time       15435     9000     9000     9000      9000     9000

Unsolved Instances    14       11        5        7         3       21

Table 3 shows that the codes branching first on integer variables (Trb-I and Trb-L-I) dominate the others and that, between these two, the latter is the best on formulation M2-2. We nevertheless present tests for both codes on alternative formulations, due to the difficulty of estimating the importance of branching first on integer variables, as Trb-I does, or first on facility coordinates and then on integer variables, as Trb-L-I does.

We now report the results for three reformulations of M2-2 obtained by using McCormick linearizations of various bilinear terms:

• formulation M2-2-MC0, obtained by introducing in M2-2 a new variable v_id = h_id · y_id and applying the McCormick reformulation on these equalities.

• formulation M2-2-MC1, obtained by introducing in M2-2 a new binary variable v_idℓ = γ_dℓ · y_id and applying the McCormick reformulation on these equalities.

• formulation M2-2-MC2, obtained by introducing in M2-2 a new variable v_idℓ = γ_dℓ · h_id and applying the McCormick reformulation on these equalities.

The last two reformulations are identical to those presented in the previous section, except that here the variables y are binary, justifying that we re-use these names.

Table 4: Results for Trb-L-I on various McCormick reformulations.

                     M2-2   M2-2-MC0   M2-2-MC1   M2-2-MC2
2B1C 2 95 5000        6.1        4.2       12.3        4.3
2B1C 3 95 5000       39.1       34.1     9000         35.8
2B1C 4 95 5000       72.6      354.4      258.2       96.5
2B1C 2 90 5000       17.8       11.6       56.8       10.2
2B1C 3 90 5000       65.8       68.4      530.3       90.3
2B1C 4 90 5000      197.7      795.9     1669.6      185.8
2B1C 2 85 5000       43.2       25.5      170.9       34.1
2B1C 3 85 5000      425.6      359.8     9000        248.7
2B1C 4 85 5000     3621.5     1023.2      816.3      761.0
2B1C 2 80 5000      304.0       61.9     9000        152.5
2B1C 3 80 5000     4521.5     1578.0     9000       2092.1
2B1C 4 80 5000     9000       1982.3     9000       2916.5
2B1C 2 95 10000       7.1        4.7       12.6        4.2
2B1C 3 95 10000      39.4       49.9      187.0      610.6
2B1C 4 95 10000      82.5      140.4      592.3      122.6
2B1C 2 90 10000      18.6       11.8       36.2       16.2
2B1C 3 90 10000     122.9       62.9      451.8     1472.3
2B1C 4 90 10000     375.0      368.0     9000        940.2
2B1C 2 85 10000      78.6       29.3      200.6       42.3
2B1C 3 85 10000     549.3      307.6     9000        726.5
2B1C 4 85 10000    9000        581.6     9000        463.1
2B1C 2 80 10000     387.5      134.7     1761.3      129.2
2B1C 3 80 10000    4721.3     6233.4     9000       5088.7
2B1C 4 80 10000    9000       1542.7     9000       6395.7

Average Time       1778.7      656.9     4030.7      943.3
Minimal Time          6.1        4.2       12.3        4.2
Maximal Time       9000       6233.4     9000       6395.7

Unsolved Instances    3          0         10          0

Table 4 shows that the reformulation M2-2-MC0 improves the results of Trb-L-I significantly. Note that the discussion of the performances of Trb-L-I on M2-2-MC1 and M2-2-MC2 in the previous section seems to be pertinent here as well: M2-2-MC0 and M2-2-MC2 are expected to perform better than M2-2-MC1 due to the prioritization of branching. In addition, M2-2-MC0 introduces fewer nonlinear objects, and this is probably the explanation for its better performances.

Table 5 shows that the reformulation M2-2-MC0 improves the results of Trb-I even more, so that on this formulation Trb-I is clearly the best performer.


Table 5: Results for Trb-I on various McCormick reformulations.

                     M2-2   M2-2-MC0   M2-2-MC1   M2-2-MC2
2B1C 2 95 5000        4.5        5.0       13.8        3.9
2B1C 3 95 5000       43.4       29.8      170.8       28.5
2B1C 4 95 5000      106.2      418.6      267.4      228.6
2B1C 2 90 5000       48.2       10.1       53.9       12.2
2B1C 3 90 5000       65.2       77.3      716.2       90.7
2B1C 4 90 5000      888.1      384.4     3047.7      217.7
2B1C 2 85 5000      176.1       28.5      565.3       32.6
2B1C 3 85 5000     1091.0      388.3     9000        342.4
2B1C 4 85 5000     9000        835.5     9000       1158.9
2B1C 2 80 5000     1226.9       56.7     1775.7      132.1
2B1C 3 80 5000     7017.6     1239.8     9000        892.4
2B1C 4 80 5000     9000       1617.0     9000       2229.7

2B1C 2 95 10000       8.8        5.1       11.7        4.0
2B1C 3 95 10000      41.1       48.1      156.8      606.4
2B1C 4 95 10000     103.9      160.5      567.9      145.8
2B1C 2 90 10000      54.3       10.0       54.3       17.6
2B1C 3 90 10000     116.2       60.9      659.6      212.5
2B1C 4 90 10000     843.7      371.0     2950.3      317.2
2B1C 2 85 10000      59.9       29.5      363.8       37.7
2B1C 3 85 10000    2055.1      329.3     9000        590.4
2B1C 4 85 10000    9000        567.3     9000        498.1
2B1C 2 80 10000    1119.5      110.9     4177.3      128.9
2B1C 3 80 10000    9000       1507.3     9000       6039.1
2B1C 4 80 10000    9000       1609.6     9000       3561.1

Average Time       2502.9      412.5     3646.7      730.4
Minimal Time          4.5        5.0       11.7        3.9
Maximal Time       9000       1617       9000       6039.1

Unsolved Instances    5          0          8          0

It is difficult to explain why the performances of Trb-I on M2-2-MC1 are so much worse than on the other two formulations. The McCormick reformulation used to obtain M2-2-MC1 gives the convex hull of the feasible points for the three binary variables involved. One might expect that this would lead to an easier problem to solve, as the nonlinear constraints become quadratic instead of trilinear, but this is not the case.

Next, we investigate the effect of changing two of the strong branching parameters, as explained in the previous section. For both Trb-I and Trb-L-I the setting (10, 2) is best. For the average cpu time, the difference between the best and the worst setting is a factor of up to 3, with the best setting solving all instances (see Tables 10 and 11 in the appendix for full details).

Finally, we look at variations of the bound tightening options feasibility_bt and optimality_bt, as described in the previous section. For Trb-L-I, the best option is "yn" by a small margin. For Trb-I all settings are more or less equivalent (see Tables 12 and 13 in the appendix for full details).

As mentioned above, we can apply two rounds of McCormick linearizations if the trilinear terms include at least two binary variables. We can then obtain the linear MIP formulation M3, which can be solved with Cplex. Note that Cplex is about eight times faster than Couenne on average.
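
As an illustration of these two rounds (with notation chosen here for the example: γ1, γ2 ∈ {0, 1} binary and 0 ≤ h ≤ h^U continuous), a trilinear term z = γ1 γ2 h is first linearized by the binary-product McCormick system and then by the McCormick system for the product of the resulting binary variable with h; both systems are exact in this case, yielding a linear MIP:

    v ≤ γ1,      v ≤ γ2,      v ≥ γ1 + γ2 − 1,      v ≥ 0              (first round, v = γ1 γ2)
    z ≤ h^U v,   z ≤ h,       z ≥ h − h^U (1 − v),  z ≥ 0              (second round, z = v h)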

6.3 Stochastic Epidemics Propagation Model: Trilinear Terms with Products of Three Binary Variables

For the epidemics problem instances, we only report the results obtained by solving the MIP linear reformulation obtained by applying two rounds of McCormick linearizations with the Cplex solver. All instances are solved in about 8 seconds on average. This is much faster than anything that can be obtained using Couenne. In general, using an MILP reformulation and Cplex is likely to be the fastest way to solve instances of this type.

7 Conclusion

This paper presents reformulations, algorithms, and empirical guidelines for solving a complex class of probabilistically constrained stochastic programming problems QM. To our knowledge, it is the first study that proposes a method to solve this class of stochastic nonlinear programming problems. This class covers applications in multiple areas, e.g., epidemiology [22, 25], transportation [27], supply chain [50], natural gas and refinery operations [42], and energy [38], to name a few. The proposed reformulation method is particularly appealing, since: (i) it can tackle any type of dependency between random variables; (ii) it is applicable in a uniform way whether the stochastic inequalities are linear, quadratic, or include polynomials of higher degree; (iii) the size of the reformulation, and in particular the number of binary variables and nonlinear terms, does not increase linearly with the number of scenarios used to represent uncertainty.

The theoretical contributions of this study span the stochastic, mixed-integer nonlinear, and Boolean programming disciplines. We propose a Boolean programming approach to derive deterministic reformulations for problems in QM with trilinear inequalities whose terms involve the product of two decision variables with one random variable. We classify problems in QM according to the types (continuous or binary) of decision variables appearing in the trilinear terms.

We evaluate computationally the performance of the proposed reformulations on facility location and epidemic propagation problems, covering the three main problem classes in QM. For problems with trilinear terms involving two continuous decision variables, we modify the behavior of the generic solver Couenne and improve its performance dramatically. We show how knowledge of the inner workings of a software package can give clues on how to improve its performance on a particular class of problems.

We also derive potential explanations for the superiority of some reformulations over others. These observations, if valid in a larger context, could help in developing rules of thumb for selecting reformulations for MINLP problems. We also evaluate several priority orders for selecting branching variables and observe that matching the formulation used with the branching rule is important. When a problem in QM has trilinear terms that involve at least one binary decision variable, our advice is to use two rounds of McCormick linearization to obtain an MILP problem that can be solved with an MILP solver such as Cplex. If both decision variables appearing in the trilinear terms are continuous, we observe that the best option is problem dependent. This should serve as a reminder that a fair empirical comparison between a tailor-made code and a generic solver such as Couenne on a class of instances is difficult to obtain, as minor modifications of the generic code might improve its performance dramatically.


References

[1] Achterberg T., Koch T., Martin A. 2005. Branching Rules Revisited. Operations Research Letters 33, 42–54.

[2] Applegate D.L., Bixby R.E., Chvatal V., Cook W.J. 2006. The Traveling Salesman Problem: A Computational Study. Princeton University Press.

[3] Beale E.M.L., Forrest J.J.H. 1976. Global Optimization Using Special Ordered Sets. Mathematical Programming 10, 52–69.

[4] Averbakh I., Bereg S. 2005. Facility Location Problems with Uncertainty on the Plane. Discrete Optimization 2 (1), 3–34.

[5] Baron O., Berman O., Krass D. 2008. Facility Location with Stochastic Demand and Constraints on Waiting Time. Manufacturing & Service Operations Management 100 (3), 484–505.

[6] Baron O., Berman O., Krass D., Wang Q. 2007. The Equitable Location Problem on the Plane. European Journal of Operational Research 90, 183–578.

[7] Belotti P. 2009. Couenne: A User’s Manual. Technical Report, Lehigh University.

[8] Belotti P., Kirches C., Leyffer S., Linderoth J., Luedtke J., Mahajan A. 2013. Mixed-Integer Nonlinear Optimization. Acta Numerica 22, 1–131.

[9] Belotti P., Lee J., Liberti L., Margot F., Wachter A. 2009. Branching and Bound Tightening Techniques for Nonconvex MINLPs. Optimization Methods and Software 24, 597–634.

[10] Benkoczi R., Bhattacharya B.K., Das S., Sember J. 2009. Single Facility Collection Depots Location Problem in the Plane. Computational Geometry 42 (5), 403–418.

[11] Beraldi P., Bruni M.E. 2010. An Exact Approach for Solving Integer Problems Under Probabilistic Constraints with Random Technology Matrix. Annals of Operations Research 177 (1), 127–137.

[12] Beraldi P., Bruni M.E., Violi A. 2012. Capital Rationing Problems Under Uncertainty and Risk. Computational Optimization and Applications 51 (3), 1375–1396.

[13] Berman O., Krass D. 2011. On n-Facility Median Problem with Facilities Subject to Failure Facing Uniform Demand. Discrete Applied Mathematics 159, 420–432.

[14] Bonami P., Lejeune M.A. 2009. An Exact Solution Approach for Portfolio Optimization Problems Under Stochastic and Integer Constraints. Operations Research 57 (3), 650–670.

[15] Boros E., Hammer P.L., Ibaraki T., Kogan A. 1997. Logical Analysis of Numerical Data. Mathematical Programming 79 (1-3), 163–190.

[16] Carbone R. 1974. Public Facilities Location under Stochastic Demand. INFOR 12 (3), 261–270.

[17] COIN-OR. Computational Infrastructure for Operations Research Project. http://www.coin-or.org/

[18] https://projects.coin-or.org/svn/Couenne/trunk, revision 1070.

[19] https://projects.coin-or.org/svn/Couenne/stable/0.4, revision 1068.

[20] Crama Y., Hammer P.L. 2011. Boolean Functions: Theory, Algorithms, and Applications. Cambridge University Press.

[21] Dentcheva D., Prekopa A., Ruszczynski A. 2001. Concavity and Efficient Points of Discrete Distributions in Probabilistic Programming. Mathematical Programming 47 (3), 1997–2009.

[22] Enns E.A., Mounzer J.J., Brandeau M.L. 2012. Optimal Link Removal for Epidemic Mitigation: A Two-Way Partitioning Approach. Mathematical Biosciences 235, 138–147.


[23] Farahani R.Z., Hekmatfar M. 2009. Facility Location: Concepts, Models, Algorithms and Case Studies. Springer, Heidelberg, Germany.

[24] Geletu A., Kloeppel M., Zhang H., Li P. 2013. Advances and Applications of Chance-Constrained Approaches to Systems Optimization under Uncertainty. International Journal of Systems Science 44, 1209–1232.

[25] Goldberg N., Leyffer S., Safro I. 2012. Optimal Response to Epidemics and Cyber Attacks in Networks. Preprint, Mathematics and Computer Science Division, Argonne National Laboratory, ANL/MCS-1992-0112, 1–24.

[26] Grossmann I. 2002. Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques. Optimization and Engineering 3, 227–252.

[27] Gunluk D., Lee J., Weismantel R. 2007. MINLP Strengthening for Separable Convex Quadratic Transportation-Cost UFL. IBM Research Report, 1–16.

[28] Henrion R., Moller A. 2012. A Gradient Formula for Linear Chance Constraints under Gaussian Distribution. Mathematics of Operations Research 37, 475–488.

[29] Henrion R., Strugarek C. 2008. Convexity of Chance Constraints with Independent Random Variables. Computational Optimization and Applications 41 (2), 263–276.

[30] Hong X., Lejeune M.A., Noyan N. 2014. Stochastic Network Design for Disaster Preparedness. IIE Transactions. In Press.

[31] Horst R., Tuy H. 1996. Global Optimization: Deterministic Approaches. Springer, Berlin.

[32] Kataoka S. 1963. A Stochastic Programming Model. Econometrica 31 (1-2), 181–196.

[33] Kogan A., Lejeune M.A. 2013. Threshold Boolean Form for Joint Probabilistic Constraints with Random Technology Matrix. Mathematical Programming. In Press.

[34] Kucukyavuz S. 2012. On Mixing Sets Arising in Probabilistic Programming. Mathematical Programming 132, 31–56.

[35] Laporte G., Louveaux F.V., van Hamme L. 1994. Exact Solution to a Location Problem with Stochastic Demands. Transportation Science 28 (2), 95–103.

[36] Larson R.C., Sadiq G. 1983. Facility Locations with the Manhattan Metric in the Presence of Barriers to Travel. Operations Research 31 (4), 652–669.

[37] Lasry A., Sansom S.L., Hicks K.A., Uzunangelov V. 2011. A Model for Allocating CDC's HIV Prevention Resources in the United States. Health Care Management Science 14, 115–124.

[38] Latorre G., Cruz R.D., Areiza J.M., Villegas A. 2003. Classification of Publications and Models on Transmission Expansion Planning. IEEE Transactions on Power Systems 18 (2), 938–946.

[39] Lejeune M.A. 2012. Pattern-Based Modeling and Solution of Probabilistically Constrained Optimization Problems. Operations Research 60 (6), 1356–1372.

[40] Lejeune M.A. 2012. Pattern Definition of the p-Efficiency Concept. Annals of Operations Research 200 (1), 23–36.

[41] Lejeune M.A., Shen S. 2013. Multi-Objective Probabilistically Constrained Programming with Variable Risk: New Models and Applications. Working Paper. Submitted.

[42] Li X., Tomasgard A., Barton P.I. 2012. Decomposition Strategy for the Stochastic Pooling Problem. Journal of Global Optimization 54, 765–790.

[43] Liberti L. 2006. Writing Global Optimization Software. In: Liberti L., Maculan N. (Eds.) Global Optimization: from Theory to Implementation. Springer, Berlin, 211–262.


[44] LINDO. Solver Suite. Lindo Systems Inc., Chicago, IL. http://www.lindo.com/

[45] Lipsitch M., Riley S., Cauchemez S., Ghani A.C., Ferguson N.M. 2009. Managing and Reducing Uncertainty in an Emerging Influenza Pandemic. New England Journal of Medicine 361 (2), 112–115.

[46] Marathe M., Vullikanti A.K.S. 2013. Computational Epidemiology. Communications of the ACM 56 (7), 88–96.

[47] wpweb2.tepper.cmu.edu/fmargot/stoch.html

[48] McCormick G.P. 1976. Computability of Global Solutions to Factorable Nonconvex Programs: Part I. Convex Underestimating Problems. Mathematical Programming 10, 147–175.

[49] Murray W., Shanbhag U.V. 2006. A Local Relaxation Approach for the Siting of Electrical Substations. Computational Optimization and Applications 33, 7–49.

[50] Perron S., Hansen P., Le Digabel S., Mladenovic N. 2010. Exact and Heuristic Solutions of the Global Supply Chain Problem with Transfer Pricing. European Journal of Operational Research 202, 864–879.

[51] Prekopa A. 1974. Programming under Probabilistic Constraints with a Random Technology Matrix. Mathematische Operationsforschung und Statistik 5 (2), 109–116.

[52] Prekopa A. 1990. Dual Method for a One-Stage Stochastic Programming with Random rhs Obeying a Discrete Probability Distribution. Zeitschrift für Operations Research 34, 441–461.

[53] Prekopa A. 2003. Probabilistic Programming Models. Chapter 5 in: Stochastic Programming: Handbook in Operations Research and Management Science 10. Eds: Ruszczynski A., Shapiro A. Elsevier Science Ltd, 267–351.

[54] Ray J., Boggs P.T., Gay D.M., Lemaster M.N., Ehlen M.E. 2009. Risk-Based Decision Making for Staggered Bioterrorist Attacks: Resource Allocation and Risk Reduction in Reload Scenarios. Sandia Report 2009-6008, 1–61.

[55] Rawls C.G., Turnquist M.A. 2010. Pre-positioning of Emergency Supplies for Disaster Response. Transportation Research Part B 44, 521–534.

[56] Ruszczynski A. 2002. Probabilistic Programming with Discrete Distribution and Precedence Constrained Knapsack Polyhedra. Mathematical Programming 93 (2), 195–215.

[57] Sahinidis N.V. 1996. Baron: A General Purpose Global Optimization Software Package. Journal of Global Optimization 8, 201–205.

[58] Sen S. 1992. Relaxations for Probabilistically Constrained Programs with Discrete Random Variables. Operations Research Letters 11, 81–86.

[59] Tanner M.W., Ntaimo L. 2010. IIS Branch-and-Cut for Joint Chance-Constrained Stochastic Programs and Application to Optimal Vaccine Allocation. European Journal of Operational Research 207 (1), 290–296.

[60] Tawarmalani M., Sahinidis N.V. 2002. Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming: Theory, Algorithms, Software and Applications. Nonconvex Optimization and Its Applications 65, Kluwer Academic Publishers, Dordrecht.

[61] Tawarmalani M., Sahinidis N.V. 2004. Global Optimization of Mixed-Integer Nonlinear Programs: A Theoretical and Computational Study. Mathematical Programming 99, 563–591.

[62] van Ackooij W., Henrion R., Moller A., Zorgati R. 2011. On Joint Probabilistic Constraints with Gaussian Coefficient Matrix. Operations Research Letters 39 (2), 99–102.

[63] Ventura J.A. 1991. Computational Development of a Lagrangian Dual Approach for Quadratic Networks. Networks 21, 469–485.


Appendix

Appendix I: Illustration of the Boolean Modeling Method with a Numerical Example

We use the stochastic programming problem (71) to illustrate the different steps of the Boolean reformulation method:

max 3x1 + x2 + x3

subject to

    P( 2 ξ12 x1 x2 + 4 ξ13 x1 x3 − 24 ≤ 0,
       3 ξ21 x2 x1 + 6 ξ22 x2^2 − 29 ≤ 0,
       ξ12 x1 x2 ≤ 4,
       ξ13 x1 x3 ≤ 4,
       ξ22 x2^2 ≤ 3 )  ≥  0.7,

    x1, x2, x3 ≥ 0,                                                      (71)

where s11 = 2, s12 = s21 = 4, s13 = s23 = 0, s22 = 6. The cumulative probability distribution F of the random variable ξ = [ξ1, ξ2, ξ3] = [ξ12, ξ13, ξ22], with ξ12 = ξ21, is given in Table 6. The realizations ω^k = [ω^k_1, ω^k_2, ω^k_3] of ξ are equally likely.

Table 6: Probability Distribution

k          1     2     3     4     5     6     7     8     9    10
ω^k_1      9     1     1     4     2     4     9     9     9     9
ω^k_2     10     1     3     5     3     5     8     2     8     5
ω^k_3      2     1     3     2     1     1     1     4     3     6
F(ω^k)   0.5   0.1   0.2   0.4   0.2   0.3   0.6   0.2   0.8   0.9
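
For illustration, the joint probability in (71) can be evaluated for a fixed x by enumerating the ten equally likely realizations of Table 6 (a feasibility check only, not the reformulation method itself; variable names are chosen for the sketch):

# Realizations (omega_1, omega_2, omega_3) of (xi_12, xi_13, xi_22) from Table 6.
scenarios = [(9, 10, 2), (1, 1, 1), (1, 3, 3), (4, 5, 2), (2, 3, 1),
             (4, 5, 1), (9, 8, 1), (9, 2, 4), (9, 8, 3), (9, 5, 6)]

def joint_ok(x, omega):
    x1, x2, x3 = x
    xi12, xi13, xi22 = omega          # recall xi_21 = xi_12 in this example
    return (2 * xi12 * x1 * x2 + 4 * xi13 * x1 * x3 - 24 <= 0
            and 3 * xi12 * x2 * x1 + 6 * xi22 * x2 ** 2 - 29 <= 0
            and xi12 * x1 * x2 <= 4
            and xi13 * x1 * x3 <= 4
            and xi22 * x2 ** 2 <= 3)

def chance_level(x):
    return sum(joint_ok(x, w) for w in scenarios) / len(scenarios)

# x is feasible for (71) when chance_level(x) >= 0.7 and x >= 0.
print(chance_level((0.5, 0.5, 0.5)))   # 1.0 for this small x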

The sufficient-equivalent set of cut points is

C^e = {c11 = 9; c21 = 5; c22 = 8; c23 = 10; c31 = 3; c32 = 4; c33 = 6}.

Table 7 displays the recombinations and their binary images (i.e., relevant Boolean vectors) obtained with C^e.

Table 7: Reformulations and Relevant Boolean Vectors

       Numerical Vectors      Boolean Vectors                                        Relevant
 k    ω^k_1  ω^k_2  ω^k_3    β^k_11  β^k_21  β^k_22  β^k_23  β^k_31  β^k_32  β^k_33  Boolean Set
 11      9      5      3        1       1       0       0       1       0       0   Ω^-_B
 12      9      5      4        1       1       0       0       1       1       0   Ω^-_B
  9      9      8      3        1       1       1       0       1       0       0   Ω^+_B
 10      9      5      6        1       1       0       0       1       1       1   Ω^+_B
 13      9      8      4        1       1       1       0       1       1       0   Ω^+_B
 14      9      8      6        1       1       1       0       1       1       1   Ω^+_B
 15      9     10      3        1       1       1       1       1       0       0   Ω^+_B
 16      9     10      4        1       1       1       1       1       1       0   Ω^+_B
 17      9     10      6        1       1       1       1       1       1       1   Ω^+_B

Cut Points                      9       5       8      10       3       4       6
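
The Boolean images in Table 7 can be reproduced with the binarization rule β^k_ij = 1 if and only if ω^k_i ≥ c_ij; the sketch below assumes this rule, which is inferred from the table rather than restated from the method's formal definition.

# Cut points C^e from above, grouped by component of the random vector.
cut_points = {1: [9], 2: [5, 8, 10], 3: [3, 4, 6]}

def boolean_image(omega):
    return [int(omega[i - 1] >= c) for i in (1, 2, 3) for c in cut_points[i]]

print(boolean_image((9, 5, 3)))    # k = 11: [1, 1, 0, 0, 1, 0, 0]
print(boolean_image((9, 10, 6)))   # k = 17: [1, 1, 1, 1, 1, 1, 1]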


Appendix II: Additional Results for Stochastic Facility Location Models

Table 8: Modification of strong branching parameters for Trb-L with formulation M2-2

(30,10) (30,5) (30,2) (20,10) (20,5) (20,2) (10,10) (10,5) (10,2)1B2C 2 95 5000 7.8 7.5 10.6 7.8 7.9 8.7 8.6 7.2 9.31B2C 3 95 5000 284.1 257.7 309.3 290.4 263.1 331.3 394.5 486.0 296.81B2C 4 95 5000 4201.1 4145.9 4262.4 4359.8 4296.8 3868.3 4254.8 4600.0 4595.41B2C 2 90 5000 105.0 106.6 101.0 99.6 110.1 93.2 106.2 88.7 92.41B2C 3 90 5000 1707.2 1758.3 3637.5 1606.9 2233.9 2841.4 2708.0 2240.6 3001.11B2C 4 90 5000 9000 9000 9000 9000 9000 9000 9000 9000 90001B2C 2 85 5000 281.0 350.4 694.4 349.4 392.9 587.3 403.6 940.1 705.91B2C 3 85 5000 7002.5 5397.4 8895.8 9000 5779.6 9000 6673.7 5794.0 90001B2C 4 85 5000 9000 9000 9000 9000 9000 9000 9000 9000 90001B2C 2 80 5000 9000 3487.6 6666.5 4554.4 6167.1 4820.5 2883.8 5254.8 5106.31B2C 3 80 5000 9000 9000 9000 9000 9000 9000 9000 9000 90001B2C 4 80 5000 9000 9000 9000 9000 9000 9000 9000 9000 90001B2C 2 95 10000 19.0 15.7 15.7 12.5 16.3 14.0 14.0 15.8 13.71B2C 3 95 10000 325.7 362.1 388.9 350.7 361.1 401.2 292.3 363.7 386.81B2C 4 95 10000 5347.0 3205.0 3877.1 9000 3897.6 3588.3 4139.7 3683.1 4219.21B2C 2 90 10000 120.3 121.8 131.1 115.1 96.3 127.9 140.4 141.9 139.21B2C 3 90 10000 2651.0 3808.5 2629.1 2944.0 1772.6 3342.0 4151.9 1882.5 3210.11B2C 4 90 10000 9000 9000 9000 9000 9000 9000 9000 9000 90001B2C 2 85 10000 431.8 491.0 674.7 448.3 545.9 648.0 322.9 397.3 688.61B2C 3 85 10000 9000 9000 9000 8362.1 9000 9000 6358.2 9000 90001B2C 4 85 10000 9000 9000 9000 9000 9000 9000 9000 9000 90001B2C 2 80 10000 5546.1 9000 9000 2915.3 3379.4 9000 5599.6 5092.2 90001B2C 3 80 10000 9000 9000 9000 9000 9000 9000 9000 9000 90001B2C 4 80 10000 9000 9000 9000 9000 9000 9000 9000 9000 9000

Average Time 4917.9 4729.8 5095.6 4850.7 4596.7 4986.3 4602.2 4666.2 5061.0Minimal Time 7.8 7.5 10.6 7.8 7.9 8.7 8.6 7.2 9.3Maximal Time 9000 9000 9000 9000 9000 9000 9000 9000 9000

Unsolved Instances 10 10 11 10 9 11 8 9 11


Table 9: Modification of bound tightening parameters for Trb-L with formulation M2-2

yy yn ny nn1B2C 2 95 5000 9.9 8.6 13.5 9.91B2C 3 95 5000 272.1 394.5 264.8 373.21B2C 4 95 5000 4040.7 4254.8 3982.1 3981.41B2C 2 90 5000 116.4 106.2 112.1 96.71B2C 3 90 5000 1968.4 2708.0 1950.5 2679.31B2C 4 90 5000 9000 9000 9000 90001B2C 2 85 5000 380.6 403.6 388.8 419.91B2C 3 85 5000 5898.3 6673.7 6112.5 90001B2C 4 85 5000 9000 9000 9000 90001B2C 2 80 5000 3713.8 2883.8 9000 90001B2C 3 80 5000 9000 9000 9000 90001B2C 4 80 5000 9000 9000 9000 9000

1B2C 2 95 10000 17.6 14.0 15.8 13.41B2C 3 95 10000 333.3 292.3 295.0 317.11B2C 4 95 10000 4164.0 4139.7 4428.4 4009.41B2C 2 90 10000 114.9 140.4 120.6 130.71B2C 3 90 10000 1893.2 4151.9 7177.4 1617.21B2C 4 90 10000 9000 9000 9000 9030.81B2C 2 85 10000 322.9 322.9 1033.2 542.61B2C 3 85 10000 9000 6358.2 9000 6267.41B2C 4 85 10000 9000 9000 9000 90001B2C 2 80 10000 8284.6 5599.6 9000 5480.11B2C 3 80 10000 9000 9000 9000 90001B2C 4 80 10000 9000 9000 9000 9000

Average Time 4688.8 4602.2 5203.9 4830.8Minimal Time 9.9 8.6 13.5 9.9Maximal Time 9000 9000 9000 9000

Unsolved Instances 9 8 11 10


Table 10: Modification of strong branching parameters for Trb-I on formulation M2-2-MC1-1.

(30,10) (30,5) (30,2) (20,10) (20,5) (20,2) (10,10) (10,5) (10,2)2B1C 2 95 5000 7.5 4.5 4.5 4.3 3.8 3.8 4.7 5.3 5.02B1C 3 95 5000 37.9 46.3 41.5 45.0 23.8 52.5 76.0 49.6 29.82B1C 4 95 5000 154.1 289.0 206.9 501.5 235.5 131.9 363.3 108.8 418.62B1C 2 90 5000 15.5 8.0 9.6 8.8 11.3 8.9 8.6 9.1 10.12B1C 3 90 5000 73.7 59.6 96.7 109.0 195.3 41.5 146.6 84.7 77.32B1C 4 90 5000 365.2 419.6 581.2 733.9 332.4 160.1 704.2 298.0 384.42B1C 2 85 5000 43.0 58.2 41.6 57.6 34.4 27.2 24.5 43.0 28.52B1C 3 85 5000 3545.4 321.5 227.4 415.5 745.7 425.1 427.6 287.3 388.32B1C 4 85 5000 911.3 1807.2 4391.9 1418.1 506.8 2123.6 6154.2 1037.7 835.52B1C 2 80 5000 140.6 205.2 146.2 124.8 163.8 129.1 152.1 140.6 56.72B1C 3 80 5000 1160.9 3222.3 2032.1 1065.9 1460.5 9000 4404.0 7009.6 1239.82B1C 4 80 5000 1402.5 3949.7 2887.7 6854.9 8564.5 4454.0 5934.5 3420.4 1617.0

2B1C 2 95 10000 8.8 7.5 5.2 4.2 3.8 5.2 4.9 5.0 5.12B1C 3 95 10000 58.9 54.9 32.4 47.1 62.3 36.9 52.5 43.2 48.12B1C 4 95 10000 159.2 407.1 223.9 358.6 557.3 230.9 432.0 220.3 160.52B1C 2 90 10000 17.2 11.9 9.5 11.4 10.4 9.9 16.2 11.7 10.02B1C 3 90 10000 74.2 148.5 104.3 126.9 1213.7 114.8 115.1 1832.9 60.92B1C 4 90 10000 395.5 672.2 177.3 1164.8 1128.4 1149.7 958.2 253.8 371.02B1C 2 85 10000 41.5 79.2 32.9 73.3 38.2 23.7 50.7 43.6 29.52B1C 3 85 10000 455.8 667.2 837.2 908.6 329.2 9000 328.9 252.6 329.32B1C 4 85 10000 575.0 2198.5 3172.2 4160.1 2765.4 1361.6 4571.8 1453.2 567.32B1C 2 80 10000 174.4 238.2 176.5 209.6 145.8 130.3 126.2 95.2 110.92B1C 3 80 10000 1328.0 1136.4 2106.7 1760.0 831.5 9000 1540.6 2399.9 1507.32B1C 4 80 10000 1600.5 2090.9 9000 6995.7 5932.9 1078.5 7864.7 9000 1609.6

Average Time 531.1 754.3 1106.1 1131.6 1054.0 1612.5 1435.9 1171.0 412.5Minimal Time 7.5 4.5 4.5 4.2 3.8 3.8 4.7 5.0 5.0Maximal Time 3545.4 3949.7 9000 6995.7 8564.5 9000 7864.8 9000 1617.0

Unsolved Instances 0 0 1 0 0 3 0 1 0


Table 11: Modification of strong branching parameters for Trb-L-I on formulation M2-2-MC1-1.

(30,10) (30,5) (30,2) (20,10) (20,5) (20,2) (10,10) (10,5) (10,2)2B1C 2 95 5000 5.5 5.4 4.7 4.8 3.1 4.4 6.2 5.8 4.22B1C 3 95 5000 71.2 35.2 40.7 61.0 24.0 25.8 73.9 48.2 34.12B1C 4 95 5000 352.1 325.6 146.0 505.8 391.7 178.3 355.3 114.7 354.42B1C 2 90 5000 9.5 9.5 10.6 9.4 13.5 8.8 9.0 9.0 11.62B1C 3 90 5000 132.3 58.3 68.4 111.9 128.1 51.5 230.5 77.9 68.42B1C 4 90 5000 979.6 590.2 379.8 730.3 328.4 163.4 804.6 4580.8 795.92B1C 2 85 5000 71.4 54.0 47.9 53.9 32.5 20.5 24.4 37.7 25.52B1C 3 85 5000 641.5 282.4 346.7 369.2 1572.4 495.0 430.4 558.4 359.82B1C 4 85 5000 2701.6 3836.3 590.8 3568.1 563.0 1203.3 7186.6 1097.7 1023.22B1C 2 80 5000 260.6 158.6 769.0 140.1 177.1 96.5 129.4 115.5 61.92B1C 3 80 5000 3687.6 3174.0 2225.5 1955.0 1315.1 9000 1507.1 1737.9 1578.02B1C 4 80 5000 9000 9000 1513.0 7900.7 7760.7 2213.8 7100.8 2593.2 1982.32B1C 2 95 10000 9.5 7.8 6.1 4.3 3.9 5.2 4.1 5.3 4.72B1C 3 95 10000 113.5 51.5 30.6 47.7 57.5 35.9 53.9 40.0 49.92B1C 4 95 10000 578.9 424.6 199.3 379.9 531.0 231.0 353.0 272.4 140.42B1C 2 90 10000 12.8 11.1 10.9 11.3 10.7 10.3 13.0 15.6 11.82B1C 3 90 10000 99.0 135.9 174.4 134.0 1906.3 67.8 126.0 68.5 62.92B1C 4 90 10000 965.2 580.0 696.8 905.1 472.6 190.8 784.9 257.1 368.02B1C 2 85 10000 85.7 145.2 45.2 46.6 33.9 27.5 67.2 42.2 29.32B1C 3 85 10000 986.1 525.9 469.5 794.0 735.2 3177.4 386.6 289.2 307.62B1C 4 85 10000 7269.7 1152.0 9000 5912.0 2448.6 1296.4 5166.7 2763.0 581.62B1C 2 80 10000 322.9 253.4 788.0 134.6 133.0 142.3 122.1 93.5 134.72B1C 3 80 10000 1723.7 936.2 4075.0 3199.8 1061.7 940.8 1588.8 2480.2 6233.42B1C 4 80 10000 7717.3 4100.0 2320.8 8721.3 9000 1350.9 9000 7389.9 1542.7

Average Time 1574.9 1077.2 998.3 1487.5 1196.0 872.4 1480.2 1028.9 656.9Minimal Time 5.5 5.4 4.7 4.3 3.1 4.4 4.1 5.3 4.2Maximal Time 9000 9000 9000 8721.3 9000 9000 9000 7389.9 6233.4

Unsolved Instances 1 1 1 0 1 1 1 0 0


Variations of the bound tightening options of Couenne, namely

• feasibility_bt: yes or no

• optimality_bt: yes or no.

This gives us 4 parameter settings, labeled "yy", "yn", "ny", and "nn", the first letter referring to the setting of feasibility_bt.

Table 12: Modification of bound tightening parameters for Trb-I with formulation M2-2-MC1-1.

yy yn ny nn2B1C 2 95 5000 9.9 5.0 8.1 4.02B1C 3 95 5000 65.2 29.8 59.8 28.52B1C 4 95 5000 160.5 418.6 159.6 414.52B1C 2 90 5000 15.0 10.1 15.7 9.52B1C 3 90 5000 87.9 77.3 73.6 73.72B1C 4 90 5000 393.5 384.4 381.9 373.32B1C 2 85 5000 42.1 28.5 43.6 26.12B1C 3 85 5000 447.5 388.3 356.1 391.92B1C 4 85 5000 1137.6 835.5 620.9 1098.22B1C 2 80 5000 138.4 56.7 153.0 56.02B1C 3 80 5000 1192.5 1239.8 1531.9 924.02B1C 4 80 5000 1328.6 1617.0 1677.1 1283.6

2B1C 2 95 10000 8.8 5.1 1727.9 4.42B1C 3 95 10000 60.1 48.1 8.2 47.32B1C 4 95 10000 173.0 160.5 64.9 153.32B1C 2 90 10000 15.7 10.0 167.6 9.92B1C 3 90 10000 75.2 60.9 15.2 56.82B1C 4 90 10000 379.1 371.0 84.8 362.62B1C 2 85 10000 42.6 29.5 384.3 23.12B1C 3 85 10000 390.6 329.3 38.0 313.72B1C 4 85 10000 580.1 567.3 397.7 541.92B1C 2 80 10000 156.0 110.9 808.0 121.22B1C 3 80 10000 952.6 1507.3 106.1 1178.52B1C 4 80 10000 1664.4 1609.6 1319.2 1536.1

Average Time 418.2 424.3 423.4 390.3Minimal Time 9.9 5.0 8.1 4.0Maximal Time 1328.6 1617.0 1677.1 1283.6

Unsolved Instances 0 0 0 0


Table 13: Modification of bound tightening parameters for Trb-L-I with formulation M2-2-MC1-1.

yy yn ny nn2B1C 2 95 5000 7.5 4.2 8.3 28.62B1C 3 95 5000 37.9 34.1 33.5 327.82B1C 4 95 5000 154.1 354.4 154.4 9.02B1C 2 90 5000 15.5 11.6 16.6 62.32B1C 3 90 5000 73.7 68.4 86.3 752.82B1C 4 90 5000 365.2 795.9 371.4 26.02B1C 2 85 5000 43.0 25.5 40.6 3328.02B1C 3 85 5000 3545.4 359.8 350.3 936.52B1C 4 85 5000 911.3 1023.2 804.7 59.62B1C 2 80 5000 140.6 61.9 86.6 890.02B1C 3 80 5000 1160.9 1578.0 1184.1 1311.52B1C 4 80 5000 1402.5 1982.3 9000 4.0

2B1C 2 95 10000 8.8 4.7 9.2 47.12B1C 3 95 10000 58.9 49.9 57.9 133.52B1C 4 95 10000 159.2 140.4 152.5 10.62B1C 2 90 10000 17.2 11.8 15.1 59.02B1C 3 90 10000 74.2 62.9 77.2 349.22B1C 4 90 10000 395.5 368.0 391.0 23.02B1C 2 85 10000 41.5 29.3 47.4 246.22B1C 3 85 10000 455.8 307.6 483.8 540.12B1C 4 85 10000 575.0 581.6 605.1 119.72B1C 2 80 10000 174.4 134.7 154.9 1353.82B1C 3 80 10000 1328.0 6233.4 5830.7 1577.02B1C 4 80 10000 1600.5 1542.7 1676.1 1577.0

Average Time 654.8 524.9 1011.4 644.7Minimal Time 7.5 4.2 8.3 4.0Maximal Time 3545.4 1982.3 9000 3328.0

Unsolved Instances 0 0 1 0
