

European Journal of Operational Research 92 (1996) 148-156

Theory and Methodology

Structural approach to parametric analysis of an IP On the case of the right-hand side

Hsiao-Fan Wang *, Jyh-Shing Horng Institute of Industrial Engineering, National Tsing Hua University, Hsinchu, Taiwan, ROC

Received August 1994; revised January 1995

Abstract

In this paper, we define a stepsize to parametrize the right-hand side of an integer programming problem. Based on the properties of the defined stepsize, we derive an algorithm for solving families of integer linear programming problems with the right-hand side in the form b + θb′, where b and b′ are vectors and the single parameter θ is a scalar. The proposed algorithm is applicable to cases in which the vector b′ contains positive and/or negative components. A complexity analysis of solving the entire problem is given, together with numerical examples.

Keywords: Parametric programming; Right-hand side; Integer programming; Complexity

1. Introduction

Consider an integer linear programming problem (IP) in the form

Model (P)

Minimize z = c^T x (1)

s.t. Ax ≥ b

x ≥ 0 and x is integer, (2)

where the cost coefficient c and the solution of the model, x, are n × 1 vectors with respective elements c_j and x_j, j = 1, …, n; the constraint matrix A is an m × n matrix with elements a_ij, i = 1, …, m, j = 1, …, n; and the right-hand side (RHS) b is an m × 1 vector with elements b_i, i = 1, …, m. When a sensitivity analysis is carried out, the aim is to find the largest

* Corresponding author.

tolerance interval of a parameter within which the current optimal basis remains optimal. A parametric analysis, on the other hand, refers to identifying the sequence of optimal solutions together with the corresponding tolerance intervals of the parameters. Therefore, a parametric integer linear programming problem can be formulated in the form

Model (Pφλθ)

Minimize z = (c + φc′)^T x (3)

s.t. (A + λA′)x ≥ b + θb′

x ≥ 0 and x is integer,

where c′ and b′ are given vectors corresponding to c and b, respectively; A′ is a matrix corresponding to A; and the scalars φ, λ and θ are the perturbation parameters of the problem. Without loss of generality, they are allowed to vary from 0 to 1. In the following, z(φ) and z(θ) will denote the optimal value as a function of φ and θ when the perturbations occur on the objectives

0377-2217/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved. SSDI 0377-2217(95)00046-1


and RHS, respectively.

In this study, we shall pay particular attention to the parametric analysis of IPs when a perturbation occurs on the RHS. Then, Model (Pφλθ) reduces to the following model:

Model (Pθ)

Minimize z = c^T x (4)

s.t. Ax ≥ b + θb′

x ≥ 0 and x is integer, (5)

where c, A, b, b′ and θ are defined as in (Pφλθ). In all the discussions that follow, we assume that (P) has a bounded, feasible solution set, with a single optimal solution existing in each allowed interval of θ.

Although sensitivity and parametric analyses in linear programming (LP) have been developed extensively [17], those in IP are still in their infancy. Due to the lack of continuity, there is no algorithm for IP as powerful as the simplex method for LP. Besides, since a general IP model has no shadow prices or dual variables, if the influence of changing the supply level of a resource on the optimal solution needs to be determined, then, in general, one must re-solve the problem with the alternative resource. In order to make the best use of the current optimal information, the development of an efficient algorithm for parametric analysis in integer programming is necessary.

The commonly used algorithms for solving IPs are enumeration [1], branch-and-bound [2], and cutting planes [3,4]. Implicit enumeration, including branch-and-bound, and cutting planes have been modified and applied to parametric analyses [5-13].

All of these parametric methods are based on solving an IP at some values of the changed parameter; the optimal solution at adjacent values of the parameter is then determined. Therefore, each subsequent solution depends on the information obtained from the current optimal solution. In other words, such approaches require additional computer storage, which tends to grow exponentially as the model size and the range of perturbation increase. Therefore, all reported analyses are limited in the size of the IP, the range of perturbation, or both.

In order to overcome these disadvantages, Jenkins [14,15] developed another way to perform a parametric analysis. Based on the theoretical results developed by Geoffrion and Nauss [9], he solved the IPs at different values of the parameters. Each step of solving the sequential IPs is independent of the others. Thus, the storage and computation of intermediate information are saved.

Jenkins' method was originally developed for a perturbation occurring on the objectives of a minimization IP problem. Based on the fact that the optimal value of the objective, z(φ), is a piecewise linear concave function of φ, he solved the IP several times to find this function. Because of the concavity and the piecewise linearity, z(φ) can be determined exactly in finitely many steps. That is, if there are p different optimal solutions, then this function can be completely determined by solving 2p - 1 IPs.

However, when the perturbation occurs on the RHS, z(θ) is a step function of θ. Although Jenkins suggested applying the above procedure directly to (Pθ), there is no guarantee of performing a complete parametric analysis of the IP for all 0 ≤ θ ≤ 1 [14]. This shortcoming remained when Jenkins improved his algorithm from a complexity of 2p - 1 to p [15].

A detailed review of parametric IP over the last fifteen years can be found in Ref. [16].

Because the feasible region of an IP is discrete, one difficulty when performing a parametric analysis is to identify where and which is the next optimal, and thus feasible, solution. That is, one of the major issues when solving parametric IP problems is to decide the stepsize, which, however, is rarely discussed in the literature. In this paper, we shall focus on the determination of the stepsize and its properties in parametric IPs on the RHS. It is expected that from the stepsize we shall gain a clear insight into the structure of parametric IPs, and develop a more general and efficient procedure for the analysis.

In the next section, we first present some basic theory. Then, based on these results, an algorithm is developed in Section 3. In the last section, we summarize our results and discuss some issues for future studies.

2. Basic definitions and theoretical results

First, we define some basic concepts for the investigation of the theorems below.


Definition 1 ([9]). An optimization problem (Q) is said to be a restriction of problem (P) if the feasible region of (Q) is entirely contained within that of (P), and if the objective function of (Q) is at least as great as that of (P) everywhere on the feasible region of (Q). On the other hand, problem (P) is said to be a relaxation of problem (Q).

Property 1 ([9]). If an optimal solution x* of (P) remains feasible in a restriction (Q) of (P), and if it has the same objective function value in (Q) as in (P), then it must be an optimal solution of (Q).

This property tells us that a global characteristic will be preserved in a local region. Now, let us state a simple but important concept:

Corollary 1 ([9]). Let b′ satisfy Ax* ≥ b′ ≥ b. Then x* remains optimal for (P) with b replaced by b′.

It says that the allowed perturbation range of the RHS is as large as that shown in the corollary, without any computation. Thus, geometrically, we can move all the hyperplanes defined by the constraints from (Ax)_i = b_i to (Ax)_i = (Ax*)_i = b′_i, i = 1, …, m, directly, so that although the feasible domain shrinks, the optimality of the current optimal solution is not destroyed.

Now, for any constraint i in model (Pθ), written as Σ_j a_ij x_j ≥ b_i + θb′_i, if all entries a_ij of the constraint matrix are integer, we have the following observation.

Theorem 1. When θ varies, the hyperplane defined by constraint i cannot pass any integral point unless the integer portion of b_i + θb′_i changes.

Proof. Suppose, on the contrary, that when θ varies from θ′ to θ″, where θ′ < θ″, the hyperplane passes some integral point, say x*, without the integer portion of b_i + θb′_i changing. Because the a_ij and x* are integers, there exists a scalar θ* such that b_i + θ*b′_i = Σ_j a_ij x*_j is integer and b_i + θ′b′_i < b_i + θ*b′_i < b_i + θ″b′_i. This contradicts the fact that the integer portion of the RHS does not change. □

This theorem tells us that, if x* is a current optimal solution, then when the perturbation occurs on the RHS of constraint i, it remains optimal unless the integer portion of b_i + θb′_i changes.

Based on the theorem, one can see that if we take d_i = 1/b′_i for constraint i, then the hyperplane Σ_j a_ij x_j = b_i + θb′_i will not pass any integral point if (w - 1)d_i < θ < w d_i, with w = 1, …, 1/d_i an integer variable. Therefore, when two or more constraints (m ≥ 2) are considered simultaneously, d can be taken to be

d = 1 / (least common multiple of {b′_1, …, b′_m}). (6)

Then, no b_i + θb′_i, for i = 1, …, m, will be integer when (w - 1)d < θ < wd with w = 1, …, 1/d. Thus, none of the m hyperplanes Ax = b + θb′ passes any integral point within these intervals of θ.

Remark 1. From Eq. (6), we define a stepsize of (Pθ) by d, and call its multiples wd, with w = 0, 1, …, 1/d, the "candidates of θ".

When θ varies, because of the following two reasons, we only have to consider those values that make b_i + θb′_i integer for some i: (1) the current optimal solution will not change until at least one hyperplane passes some integral point; and (2) not every candidate of θ makes some b_i + θb′_i integer such that the i-th hyperplane is binding at some integral point. We call these θ values "the principal candidates of θ". In the following sections, if there are R of them, they will be denoted by ξ^(r), arranged in the order ξ^(r) < ξ^(r+1) for r = 1, …, R - 1.

Now the question is how many optimal solutions may exist between two adjacent principal candidates of θ. We first want to know how many there can be; then we shall find them without missing any.

Theorem 2. If ξ^(r) and ξ^(r+1) are two adjacent principal candidates of θ, then when θ varies within (ξ^(r), ξ^(r+1)), there exists at most one optimal solution of (Pθ).

Proof. Suppose, on the contrary, that when θ varies within (ξ^(r), ξ^(r+1)) there is more than one optimal solution, say x′, x″, …, of which x′ is assumed to be the first to appear. There are just two possibilities by which x″ can take the place of x′. The first is that x′ becomes infeasible in some subinterval of (ξ^(r), ξ^(r+1)), and the second is that x″ becomes feasible and superior to x′ in some subinterval of this interval. But neither case can happen. Since x′ and x″ are integral points, if either case took place, then as θ varies, at least one constraint must pass x′ in the first case, or x″ in the second. In each case, at least one principal candidate of θ would be generated in the interval (ξ^(r), ξ^(r+1)), because the RHS of the corresponding constraint becomes integral. But this contradicts the fact that ξ^(r) and ξ^(r+1) are two adjacent principal candidates of θ. □
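The stepsize of Eq. (6) and the candidates of θ can be computed directly. Below is a minimal Python sketch; the function names are ours, and we take the lcm of the absolute values of the nonzero components of b′, which is how we read Eq. (6) for mixed-sign perturbations:

```python
from fractions import Fraction
from math import lcm

def stepsize(b_prime):
    # Eq. (6): d = 1 / lcm of the perturbation components; we take absolute
    # values and skip zeros (an assumption for mixed-sign b').
    nonzero = [abs(v) for v in b_prime if v != 0]
    return Fraction(1, lcm(*nonzero))

def principal_candidates(b, b_prime):
    # Among the candidates w*d, w = 0, 1, ..., 1/d, keep those at which some
    # perturbed RHS b_i + theta*b_i' is integral ("principal candidates").
    d = stepsize(b_prime)
    out = []
    theta = Fraction(0)
    while theta <= 1:
        if any(bp != 0 and (bi + theta * bp).denominator == 1
               for bi, bp in zip(b, b_prime)):
            out.append(theta)
        theta += d
    return out
```

For the data of Example 2 below (b = (8, 10, 1), b′ = (-5, 4, 0)), this gives d = 1/20 and the nine principal candidates listed in its solution.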

Remark 2. A fact used in the proof of Theorem 2 is worth emphasizing. The feasibility of a solution of (Pθ) can be violated only if certain constraints with positive perturbation pass it at some value of θ, say ξ. Due to the integrality of the feasible solution, when θ = ξ, the RHS of the binding constraint must be integral. Thus, ξ is a principal candidate. Therefore, feasibility is preserved in the open interval constituted by ξ and the next principal candidate of θ. From this concept, the following corollary can be derived.

Corollary 2. If ξ ∈ (ξ^(r), ξ^(r+1)), and x(ξ) is the optimal solution of (Pθ) at θ = ξ, then x(ξ) is optimal in the interval (ξ^(r), ξ^(r+1)).

Proof. From Theorem 2 we find that, when θ varies in (ξ^(r), ξ^(r+1)), x(ξ) is the only optimal solution. Because ξ^(r) and ξ^(r+1) are two adjacent principal candidates of θ, and x(ξ) is feasible, x(ξ) will remain feasible in this open interval, and thus optimal in this interval. □

Thus, a similar corollary can be stated without proof.

Corollary 3. If ξ ∈ (ξ^(r), ξ^(r+1)), and the feasible domain of (Pθ) at θ = ξ is empty, then the feasible domains of (Pθ) are empty for all θ ∈ (ξ^(r), ξ^(r+1)).

Remark 3. Based on the developed theorems and corollaries, one can perform a parametric analysis on the RHS as follows:
- First, solve (Pθ) at every candidate of θ.
- Second, solve (Pθ) at a value of θ arbitrarily chosen between any two adjacent principal candidates; this solution will be optimal in this open interval.

Up to now, we have dealt with the question of how, when θ varies, to find all optimal solutions and the intervals that preserve the optimal bases. But what does it mean when optimal solutions generated at different values of θ, say θ′ and θ″, are the same? Does it mean that this solution is optimal in the closed interval [θ′, θ″]? In 1982, Jenkins [14] faced this problem. He adopted a heuristic rule proposed for parametrization of the objective function as follows:

If x(θ′) = x(θ″), then x(θ′) is assumed to be optimal for θ′ ≤ θ ≤ θ″.

However, it is not always true. Let us consider the following counterexample.

Example 1.

Min 3x_1 + 2x_2 (7)

s.t. -4x_2 ≥ -18 (8)

25x_1 + 10x_2 ≥ 129 + 2θ (9)

5x_1 + 20x_2 ≥ 82 - 4θ (10)

x_i ≥ 0 and x_i is integer, for i = 1, 2.

Solution. The stepsize is 1/4, and the principal candidates of θ are 0, 1/4, 2/4, 3/4 and 1. The optimal solutions obtained at these values of θ are (4,4)^T, (4,4)^T, (4,3)^T, (4,4)^T and (4,4)^T. It is evident that, even though (4,4)^T is the same optimal solution at θ = 0, 1/4, 3/4 and 1, this does not guarantee that (4,4)^T is optimal on [θ′, θ″] where θ′ = 0 or 1/4, and θ″ = 3/4 or 1.

The reason is very clear: (4,3)^T becomes feasible and is superior to (4,4)^T at θ = 2/4, and thus it is optimal there. However, it becomes infeasible as soon as θ > 2/4, so (4,4)^T returns to being optimal at the same time.
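This behaviour can be cross-checked by brute-force enumeration. Below is a minimal sketch; the enumeration bounds and the function name are ours:

```python
from fractions import Fraction

def solve_example1(theta):
    # Enumerate small nonnegative integer points of Example 1 and return
    # (objective, solution); bounds 0..30 are ample for this instance.
    best = None
    for x1 in range(31):
        for x2 in range(31):
            feasible = (-4 * x2 >= -18
                        and 25 * x1 + 10 * x2 >= 129 + 2 * theta
                        and 5 * x1 + 20 * x2 >= 82 - 4 * theta)
            if feasible:
                z = 3 * x1 + 2 * x2
                if best is None or z < best[0]:
                    best = (z, (x1, x2))
    return best

# Optimal solutions at the principal candidates 0, 1/4, 2/4, 3/4, 1:
print([solve_example1(Fraction(k, 4))[1] for k in range(5)])
```

Running this reproduces the sequence (4,4), (4,4), (4,3), (4,4), (4,4) stated in the solution above.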

Therefore, Jenkins’ statement should be modified into the following theorem.

Theorem 3. For (Pθ), if ξ^(r) and ξ^(r+1) are two adjacent principal candidates and x(ξ^(r)) = x(ξ^(r+1)), then x(ξ^(r)) is optimal for all ξ^(r) ≤ θ ≤ ξ^(r+1).

Proof. From Corollary 2, if it can be proved that there exists an optimal solution x(ξ) such that x(ξ) = x(ξ^(r)) for some ξ ∈ (ξ^(r), ξ^(r+1)), then the proof is given.

First, it should be noted that, when θ varies from 0 to 1, once the feasibility of a solution is violated, it will never be recovered. That is, when x(ξ^(r)) = x(ξ^(r+1)), x(ξ) will remain feasible in [ξ^(r), ξ^(r+1)].

Now suppose, on the contrary, that x(ξ) ≠ x(ξ^(r)). Since x(ξ) is generated somewhere in (ξ^(r), ξ^(r+1)) by some constraint, the RHS of the binding constraint must be integral; thus a principal candidate of θ appears in (ξ^(r), ξ^(r+1)). This contradicts the fact that ξ^(r) and ξ^(r+1) are two adjacent principal candidates of θ. □

Remark 4. Therefore, if the optimal solutions solved at two adjacent principal candidates of θ are the same, then from Theorem 3 this solution will be optimal in the closed interval constituted by these two principal candidates of θ.

Although the steps of the parametric analysis stated above are able to find all optimal solutions with their respective intervals, the process is too cumbersome to execute. In the next section, we shall propose a more efficient algorithm based on these theorems and corollaries.

3. The algorithm

When the perturbation occurs on the RHS, the components of the perturbation vector b′ can be positive, negative, or zero. We divide all constraints into three subsets. The first, called the positive subset and denoted by S+, contains those constraints with b′_i > 0. The second, called the negative subset and denoted by S-, contains those with b′_i < 0; and the third, called the null subset and denoted by S0, contains those with b′_i = 0.

It is noted that Corollary 1 implies that if b′ ≥ 0, then (Pθ″) is a restriction of (Pθ′) when θ″ > θ′. The feasible domain can be shrunk to that of (Pθ*) directly while retaining the optimality of the current solution x*, where θ* can be obtained from x*. The exact value of θ* is

θ* = min over i ∈ S+ of ((Ax*)_i - b_i) / b′_i, (11)

that is, the minimal value of θ at which there is at least one constraint binding at x*.

This concept also tells us that if some constraint from the positive subset is binding at the current optimal solution x*, then x* is no longer feasible as soon as the binding constraint moves away. Incorporating these two concepts into the definition of the principal candidates of θ, we have the following theorem.

Theorem 4. Suppose ξ^(r) is a principal candidate of θ. If some constraint in the positive subset is bound at x(ξ^(r)), then x(ξ^(r)) is no longer optimal as θ > ξ^(r); otherwise, the optimality of x(ξ^(r)) is retained when θ ∈ [ξ^(r), ξ^(r+1)).

Proof. The first case is trivial. For the latter case, we choose ξ ∈ [ξ^(r), ξ^(r+1)). Since no constraint in the positive subset is binding at x(ξ^(r)), from Corollary 2, x(ξ^(r)) will remain feasible and thus optimal. □

For b′ < 0, the constraints in the negative subset "relax" the feasible region. Thus, if an optimal solution x(ξ^(r)) is bound by some constraint in the negative subset, it means that x(ξ^(r)) is not feasible and optimal until θ = ξ^(r). A similar theorem can be stated.

Theorem 5. Suppose ξ^(r) is a principal candidate of θ. If some constraint in the negative subset is binding at x(ξ^(r)), then x(ξ^(r)) will not be optimal until θ ≥ ξ^(r); otherwise, x(ξ^(r)) is optimal when θ ∈ (ξ^(r-1), ξ^(r)].

Proof. The first case is trivial. As for the second case, if we can prove that x(ξ^(r)) is feasible in (ξ^(r-1), ξ^(r)), then of course, from Corollary 2, it will be optimal in (ξ^(r-1), ξ^(r)].

Suppose, on the contrary, that x(ξ^(r)) is not feasible in (ξ^(r-1), ξ^(r)). Since x(ξ^(r)) is feasible at θ = ξ^(r), if x(ξ^(r)) is not feasible in this open interval, then there must be a constraint in the negative subset which relaxes x(ξ^(r)) at θ = ξ^(r). But this contradicts the fact that no constraint in the negative subset is binding at x(ξ^(r)). □

Combining these two theorems, we have the fol- lowing corollaries.

Page 6: Structural approach to parametric analysis of an IP on the case of the right-hand side

H.E Wang, J.S. HomglEuropean Journal of Operational Research 92 (1996) 148-156 153

Corollary 4. If x(ξ^(r)) is bound both by some constraint in the negative subset and by some constraint in the positive subset, then it can be optimal at θ = ξ^(r) only.

See Example 1 at θ = 1/2.

Corollary 5. If x(ξ^(r)) is not bound by any constraint, then it will be optimal in (ξ^(r-1), ξ^(r+1)).

For a general b′, we can combine the concepts of the special cases b′ > 0 and b′ < 0, and make the following observation. If the current optimal solution x* is obtained at θ = ξ, with principal candidate ξ* of θ, we can find the first binding constraint in the positive subset with θ* in Eq. (11). Then, from Corollary 1, it seems likely that the hyperplanes defined by the constraints can be moved until the first one in the positive subset is bound at x*, without violating the optimality of x* with respect to S0. But even though x* will remain feasible for θ ∈ [ξ, ξ*], some integral points generated by the constraints in the negative subset may be superior to it. Thus, the optimality of x* is violated, and the generated integral point becomes a new optimal solution.
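The value θ* of Eq. (11) can be computed directly. The display of Eq. (11) is partly illegible in the scan, so the sketch below implements our reading of it, following the text's description: the smallest θ at which a positive-subset constraint becomes binding at x* (the function name and data layout are ours):

```python
from fractions import Fraction

def theta_star(A, b, b_prime, x_star):
    # Our reading of Eq. (11): min over i in S+ of ((A x*)_i - b_i) / b_i',
    # i.e. the smallest theta at which a constraint with b_i' > 0 becomes
    # binding at x_star.
    ratios = []
    for row, bi, bp in zip(A, b, b_prime):
        if bp > 0:  # positive subset S+
            lhs = sum(a * x for a, x in zip(row, x_star))
            ratios.append(Fraction(lhs - bi, bp))
    return min(ratios) if ratios else None
```

For Example 2 with x* = (1, 7), this gives (15 - 10)/4 = 5/4 > 1, matching the step in Table 1 where θ* = 1.25 and ξ* is then reset to 1.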

Due to the fact that better integral points may appear again and again, we have to check them at every principal candidate of θ produced by the constraints in the negative subset. Besides, because the constraints in the positive subset may violate the feasibility of the generated integral points, the life cycle of the optimality of the new optimal solutions may end within (ξ, ξ*]. Of course, the last newly generated optimal solution may cross two iterations, but x* cannot. Therefore, for each iteration, first solve (Pθ) at the current value of θ, namely ξ, and obtain x* = x(ξ); also, ξ* is calculated. Second, if any principal candidate of θ is generated by constraints in the negative subset, and if it lies within (ξ, ξ*], then we have to detect whether any new optimal solution is generated. If so, we have to determine the tolerance intervals of the new optimal solutions.

It should be noted that another optimal solution may appear between two different optimal solutions obtained from two adjacent principal candidates of θ. In each iteration, since x(ξ) should be optimal in [ξ, ξ*], its optimality is lost only when new superior integral points appear. Thus, when the life cycles of these newly appeared optimal solutions end, x(ξ) will be optimal in the remaining intervals of [ξ, ξ*]. This is due to the nonexistence of a better competitor, so both feasibility and optimality are retained. However, when one iteration ends and the next iteration starts at the next principal candidate of θ greater than ξ*, no previous optimal solution is assumed between these two adjacent principal candidates. Thus, one needs to solve (Pθ) at a specified θ within these two adjacent ones. Therefore, in each iteration, "the third different optimal solution" is just x(ξ), and it may be the only unidentified one when θ varies between two iterations.

Therefore, there are two major steps for solving (Pθ):
- First, at each iteration, determine the tolerance intervals of the newly generated optimal solutions lying in (ξ, ξ*].
- Second, determine the optimal solution between two adjacent [ξ, ξ*]'s.

After these discussions, we can now propose the algorithm. But first, the notation used is listed:
- g(S): the set of principal candidates of θ generated by the constraint set S;
- I(x): the tolerance interval of x;
- S+, S-, S0: the positive, negative, and null subsets;
- NG(x) = {x(θ) | θ ∈ g(S-), ξ < θ ≤ ξ*, x(θ) ≠ x(ξ)};
- x(ξ): the optimal solution of (Pθ) solved at θ = ξ;
- x(ξ) ∈ bind(S): x(ξ) is bound by some binding constraints in S;
- ξ: the current value of θ;
- ξ*: the value of θ at which the first constraint in the positive subset is bound at the current optimal solution;
- ξ^(r): the principal candidates of θ, where r = 1, …, R.

Algorithm for parametric analysis on the RHS

Step 1. Set ξ = 0, ξ^(R+1) = ∞, and initialize r = r_next = 1.
Step 2.
- Solve (Pθ) at θ = ξ.
- If an optimal solution exists, set x* = x(ξ); otherwise, goto Step 13.
- Calculate ξ* = θ* by Eq. (11).
- Set ξ* = 1 if θ* > 1.
Step 3. If ξ = ξ^(r) with r ≠ 1, then
(i) if x(ξ) ∈ bind(S+)\bind(S-), then x(ξ) is optimal in (ξ^(r-1), ξ^(r)); otherwise,
(ii) if x(ξ) ∉ bind(S+)\bind(S-), then solve (Pθ) at θ = (ξ^(r-1) + ξ^(r))/2, and this solution will be optimal in (ξ^(r-1), ξ^(r)).
Step 4. If ξ = ξ*, then
- I(x(ξ)) = {ξ*}, and
- goto Step 12; otherwise,
Step 5. Solve (Pθ) at θ ∈ {θ | θ ∈ g(S-), ξ < θ ≤ ξ*} to obtain NG(x). If NG(x) is empty, then
- I(x(ξ)) = [ξ, ξ*], and
- goto Step 12; otherwise,
Step 6. If ξ^(r) > ξ*, then goto Step 11.
Step 7. If ξ^(r) ∈ g(S+)\g(S-), then set x(ξ^(r)) = x*; otherwise, set x* = x(ξ^(r)).
Step 8. Determine the tolerance interval of x(ξ^(r)) as follows:
(i) If x(ξ^(r)) ∈ bind(S+) ∩ bind(S-), then I(x(ξ^(r))) = ξ^(r); else
(ii) if x(ξ^(r)) ∈ bind(S+)\bind(S-), then it is optimal in (ξ^(r-1), ξ^(r)]; else
(iii) if x(ξ^(r)) ∈ bind(S-)\bind(S+), then it is optimal in [ξ^(r), ξ^(r+1)); else
(iv) if x(ξ^(r)) is not bound by any constraint, then it is optimal in (ξ^(r-1), ξ^(r+1)).
Set r_next = r + 1.
Step 9. If there exists r* such that x(θ) = x(ξ^(r)) for all θ ∈ [ξ^(r), ξ^(r*)] ∩ g(S-), then
- x(ξ^(r)) is optimal in [ξ^(r), ξ^(r*)], and
- update r_next = r*.
Step 10. Set r = r_next, and goto Step 6.
Step 11. Set I(x(ξ)) = {θ | ξ ≤ θ ≤ ξ*} \ ∪_{x ∈ NG(x)} I(x).
Step 12.
- Set r_next = r_next + 1,
- set ξ = ξ^(r) with r = r_next,
- if ξ ≤ 1, goto Step 2; otherwise,
Step 13. Output the results.
Step 14. Stop.

Remark 5.
(1) This algorithm determines the tolerance intervals of the optimal solutions. We check every principal candidate of θ, but (Pθ) is not solved at each one. The two major steps of our algorithm are treated as follows:
- The optimal solutions in the interval [ξ, ξ*] are treated from Step 1 to Step 5.
- At each iteration, if any new optimal solutions are generated in (ξ, ξ*], i.e. NG(x) is not empty, then they are treated from Step 6 to Step 11.
(2) Any principal candidate of θ can be distributed to one and only one of the four disjoint subsets stated in Step 8. After the tolerance interval of the current optimal solution is determined, we use r_next to store the next principal candidate of θ. Usually, it is increased by 1 each time. However, if the condition stated in Step 9 is satisfied, then from Theorem 3 the intervals corresponding to the same optimal solution can be merged; thus, the value of r_next can be increased by a larger number.

Because the convex hull may change without affecting the current optimal solution, we have to merge some intervals of θ which have the same optimal solution. This extra computation is caused by the negative subset. It is necessary because, when we relax the constraints with b′_i < 0, we are unable to predict at which candidate of θ the optimal solution appears. Thus, the fewer the constraints in the negative subset, the more efficient the algorithm. Now, let us explain the algorithm with the following example.

Example 2. Consider the following problem:

min x_1 + x_2 (12)

s.t. -3x_1 + 2x_2 ≥ 8 - 5θ (13)

x_1 + 2x_2 ≥ 10 + 4θ (14)

2x_1 ≥ 1 (15)

x_i ≥ 0 and x_i is integer, i = 1, 2.

Solution. From the algorithm, d = 1/20, and the principal candidates of θ are 0, 4/20, 5/20, 8/20, 10/20, 12/20, 15/20, 16/20, and 1. The solution process is summarized in Table 1.

In Table 1, when θ varies, each "iteration" means that model (Pθ) is solved over the interval [ξ′, ξ] ∪ [ξ, ξ*], where ξ′ is the previous adjacent candidate of θ. Thus, the number of IPs to be solved may exceed 1 at each iteration. When θ varies outside this range, it is marked by "-".
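Before turning to Table 1, the per-θ optima can be cross-checked by brute force, as with Example 1. A minimal sketch (the enumeration bounds and function name are ours):

```python
from fractions import Fraction

def solve_example2(theta):
    # Enumerate small nonnegative integer points of Example 2 and return
    # (objective, solution); bounds 0..25 are ample for this instance.
    best = None
    for x1 in range(26):
        for x2 in range(26):
            feasible = (-3 * x1 + 2 * x2 >= 8 - 5 * theta
                        and x1 + 2 * x2 >= 10 + 4 * theta
                        and 2 * x1 >= 1)
            if feasible:
                z = x1 + x2
                if best is None or z < best[0]:
                    best = (z, (x1, x2))
    return best
```

At θ = 0, 4/20 and 8/20 this returns (1,6), (1,5) and (1,6), in agreement with the solution process of Table 1.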


Table 1
Solution process of Example 2

Iteration 1:
- Step 1. Set ξ = 0, ξ^(R+1) = ∞ and r = r_next = 1.
- Step 2. x* = x(0) = (1,6)^T; ξ* = 15/20.
- Step 3. ξ = ξ^(1) with r = 1, do nothing.
- Step 4. ξ ≠ ξ*, goto the next step.
- Step 5. NG(x) = {x(4/20) = (1,5)^T} is not empty.
- Step 6. ξ^(1) < ξ*, goto the next step.
- Step 7. x* = x(0) = (1,6)^T.
- Step 8. x(0) is not bound by any constraint, so (1,6)^T is optimal in [0, 4/20). Set r_next = 2.
- Step 9. There is no such r*.
- Step 10. Set r = 2, goto Step 6.
- Step 6. ξ^(2) < ξ*, goto the next step.
- Step 7. x* = x(4/20) = (1,5)^T.
- Step 8. x(4/20) ∈ bind(S-)\bind(S+), so (1,5)^T is optimal in [4/20, 5/20). Set r_next = 3.
- Step 9. There is no such r*.
- Step 10. Set r = 3, goto Step 6.
- Step 6. ξ^(3) < ξ*, goto the next step.
- Step 7. x(5/20) = x* = (1,5)^T.
- Step 8. x(5/20) ∈ bind(S+)\bind(S-), so (1,5)^T is optimal in (4/20, 5/20]. Set r_next = 4.
- Step 9. There is no such r*.
- Step 10. Set r = 4, goto Step 6.
- Step 6. ξ^(4) < ξ*, goto the next step.
- Step 7. x* = x(8/20) = (1,6)^T.
- Step 8. x(8/20) is not bound by any constraint, so (1,6)^T is optimal in (5/20, 10/20). Set r_next = 5.
- Step 9. r* = 7: (1,6)^T is also optimal in [8/20, 15/20]. Update r_next = 7.
- Step 10. Set r = 7, goto Step 6.
- Step 6. ξ^(7) = ξ*, goto Step 11.
- Step 11. I((1,6)^T) = [0, 4/20) ∪ (5/20, 15/20].
- Step 12. Set r_next = 8, ξ = ξ^(8) = 16/20; ξ ≤ 1, goto Step 2.

Iteration 2:
- Step 2. x* = x(16/20) = (1,7)^T. Because θ* = 1.25 > 1, set ξ* = 1.
- Step 3. ξ = ξ^(8) with r = 8; x(ξ^(8)) ∈ bind(S+)\bind(S-), so (1,7)^T is optimal in (15/20, 16/20).
- Step 4. ξ ≠ ξ*, goto the next step.
- Step 5. NG(x) is empty, so I((1,7)^T) = [16/20, 1]. Goto Step 12.
- Step 12. Set r_next = 10, and thus ξ = ∞ > 1; goto Step 13.
- Steps 13, 14. Output the results and stop.

Resulting optimal solutions and tolerance intervals:
x* = (1,6)^T on [0, 4/20); x* = (1,5)^T on [4/20, 5/20]; x* = (1,6)^T on (5/20, 15/20]; x* = (1,7)^T on (15/20, 1].

Theorem 6. Each tolerance interval found in the algorithm is the largest that retains the respective solution optimal.

Proof. Let I be the tolerance interval of x*. Suppose, on the contrary, that I is not the largest; that is, there is another I' ⊋ I such that for all θ ∈ I'\I = {θ | θ ∈ I', θ ∉ I}, x(θ) = x*.

Suppose there are R principal candidates of θ. The principle of our algorithm is to solve implicitly the following 2R − 1 subproblems:

(Pθ)(N) = (Pθ) with θ ∈ (ξ^(N/2), ξ^(N/2+1))   if N is even,
          (Pθ) with θ = ξ^((N+1)/2)            if N is odd,     (16)

where N = 1, 2, ..., 2R − 1, and then the intervals having the same optimal solutions are merged. From Theorem 2, each subproblem has at most one optimal solution. Thus, the tolerance intervals are composed of principal candidates and the open intervals between two adjacent principal candidates of θ. Because optimality is preserved in such open intervals, for convenience, I' can be taken to be such a composition.
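To make the decomposition concrete, here is a minimal Python sketch (our own illustration; the function names `regions` and `merge` and the abstract solution labels are assumptions, not part of the paper). It enumerates the 2R − 1 regions of Eq. (16) from the sorted principal candidates and then merges adjacent regions that share the same optimal solution, as in the composition argument above:

```python
def regions(xi):
    """Enumerate the 2R - 1 regions of Eq. (16) from the sorted
    principal candidates xi = [xi_1, ..., xi_R]: odd N gives the
    single point xi_((N+1)/2), even N gives the open interval
    (xi_(N/2), xi_(N/2 + 1))."""
    R = len(xi)
    out = []
    for N in range(1, 2 * R):
        if N % 2 == 1:                      # N odd: a principal candidate
            out.append(('point', xi[(N + 1) // 2 - 1]))
        else:                               # N even: the open interval between neighbours
            out.append(('open', (xi[N // 2 - 1], xi[N // 2])))
    return out

def merge(regions_with_solutions):
    """Merge adjacent regions sharing the same optimal solution,
    mimicking how tolerance intervals are composed in Theorem 6."""
    merged = []
    for region, x in regions_with_solutions:
        if merged and merged[-1][1] == x:
            merged[-1][0].append(region)    # extend the current tolerance interval
        else:
            merged.append(([region], x))    # start a new tolerance interval
    return merged
```

For R = 3 principal candidates this yields five regions, matching the worst-case subproblem count 2R − 1 of the analysis.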

Since x* is optimal in both I and I'\I, these two adjacent intervals will be merged into I', and thus I' = I. This contradicts the fact that I' ⊋ I. □

Corollary 6. If there are R principal candidates of θ, then the worst-case complexity of our algorithm is 2R − 1.

Proof. Each subproblem has at most one optimal solution, and each subproblem is solved no more than once. Therefore, from Eq. (16), the worst-case complexity is 2R − 1. □

4. Summary and conclusions

In this paper, based on Jenkins' analysis of the perturbed RHS in an IP, we present a stepsize for parametric analysis of the RHS. From it, the structure of the principal candidates of θ can be derived. The significance of this structure is that the basic properties, such as optimality and feasibility of solutions, or the emptiness of the feasible region, are preserved in the open intervals formed by any two adjacent principal candidates of θ. Then, the difficulty of completely solving (Pθ) that Jenkins [14] faced in 1982 can be overcome.

In summary, although Geoffrion and Nauss [9] in 1977 proposed some very important theoretical results which paved the way for the development of algorithms [14,15] by Jenkins, the advantage of these algorithms in avoiding a large amount of storage cannot offset their shortcomings: (1) (Pθ) cannot be completely solved, and (2) the algorithm for (Pθ) cannot be adapted efficiently to its special case of b' ≥ 0. Therefore, based on Jenkins' principle, we proposed an algorithm with the defined stepsize which is able to completely solve (Pθ) with reduced complexity in the general case.

Now, there are some issues for future studies. According to the proposed stepsize, we move from one principal candidate to another, which is a conservative step. Let us consider the following equation:

Σ_{j=1}^{n} a_j x_j = b + Δb,     (17)

where a_j, j = 1, ..., n, b, and Δb are integers. If we can find the exact variation of b that causes the x_j's to have integer solutions, the stepsize can be taken to be Δb. Then this will be the largest stepsize, and the complexity will be considerably reduced. This integer problem is also a very important one in the area of algebraic analysis and needs further study.
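If the x_j in Eq. (17) were allowed to range over all integers (setting aside the nonnegativity constraint of model (P), which turns this into a harder, Frobenius-type question), the attainable values of Σ a_j x_j would be exactly the multiples of g = gcd(a_1, ..., a_n), so the largest stepsize is simply the distance from b to the next multiple of g. A sketch of this relaxed case only (the function name `largest_stepsize` is our own):

```python
from math import gcd
from functools import reduce

def largest_stepsize(a, b):
    """Smallest Delta_b > 0 such that b + Delta_b is attainable as an
    integer combination sum_j a_j * x_j with x_j unrestricted in sign.
    By Bezout's identity the attainable values are exactly the
    multiples of g = gcd(a_1, ..., a_n)."""
    g = reduce(gcd, a)
    # distance from b to the next strictly larger multiple of g
    return g - (b % g) if b % g else g
```

For example, with a = (4, 6) we have g = 2, so from b = 7 the next attainable value is 8 and the stepsize is 1. With the sign restriction of model (P) restored, small targets may be unattainable even when divisible by g, which is why the exact variation of b remains an open question in the text above.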

For the parametric analysis of LPs, perturbations of the RHS, the objective and the constraint matrix can be considered simultaneously. However, because of the lack of continuity of IPs, the problem has not been fully investigated in the literature. Now that the conservative step is proposed, some basic properties can be preserved between two adjacent principal candidates of θ. Thus, another perturbation, occurring in the objective or the constraint matrix, can be discussed while θ varies in such intervals. Therefore, in the future, it is expected that the parametric analysis of IPs with perturbations occurring in the RHS, objective and constraint matrix can be performed.

Acknowledgements

The authors acknowledge with gratitude the financial support from the National Science Council of the Republic of China under project number NSC 81-0415-E007-02.

References

[1] Balas, E., "An additive algorithm for solving linear programs with zero-one variables", Operations Research 13 (1965) 517-546.

[2] Geoffrion, A.M., and Marsten, R.E., "Integer programming: A framework and state-of-the-art survey", Management Science 18 (1972) 465-491.

[3] Gomory, R.E., "An algorithm for integer solutions to linear programs", in: R.L. Graves and P. Wolfe (eds.), Recent Advances in Mathematical Programming, McGraw-Hill, New York, 1963.

[4] Salkin, H.M., Integer Programming, Addison-Wesley, Reading, MA, 1975.

[5] Roodman, G.M., "Postoptimality analysis in zero-one programming by implicit enumeration", Naval Research Logistics Quarterly 19 (1972) 435-447.

[6] Piper, C.J., and Zoltners, A.A., "Implicit enumeration based algorithms for postoptimizing zero-one programs", Naval Research Logistics Quarterly 22 (1975) 791-809.

[7] Piper, C.J., and Zoltners, A.A., "Some easy postoptimality analysis for zero-one programming", Management Science 22 (1976) 759-765.

[8] Roodman, G.M., "Postoptimality analysis in integer programming by implicit enumeration: the mixed integer case", Naval Research Logistics Quarterly 21 (1974) 595-607.

[9] Geoffrion, A.M., and Nauss, R., "Parametric and post-optimality analysis in integer linear programming", Management Science 23 (1977) 453-466.

[10] Marsten, R.E., and Morin, T.L., "Parametric integer programming: the right-hand side case", Annals of Discrete Mathematics 1 (1977) 375-390.

[11] Holm, S., and Klein, D., "Discrete right hand side parameterization for linear integer programs", European Journal of Operational Research 2 (1978) 50-53.

[12] Klein, D., and Holm, S., "Integer programming post-optimal analysis with cutting planes", Management Science 25 (1979) 64-72.

[13] Bailey, M.G., and Gillett, B.E., "Parametric integer programming analysis: a contraction approach", Journal of the Operational Research Society 31 (1980) 257-262.

[14] Jenkins, L., "Parametric mixed integer programming: an application to solid waste management", Management Science 28 (1982) 1270-1284.

[15] Jenkins, L., "Using parametric integer programming to plan the mix of an air transport fleet", INFOR 25 (1987) 117-135.

[16] Jenkins, L., "Parametric methods in integer linear programming", Annals of Operations Research 27 (1990) 77-96.

[17] Ward, J.E., and Wendell, R.E., "Approaches to sensitivity analysis in linear programming", Annals of Operations Research 27 (1990) 23-38.