
AN ALL-INTEGER CUTTING PLANE ALGORITHM

WITH AN ADVANCED START

by

MICHAEL EDWARD HANNA, B.A., M.S.

A DISSERTATION

IN

BUSINESS ADMINISTRATION

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for

the Degree of

DOCTOR OF BUSINESS ADMINISTRATION

Approved

August, 1981

ACKNOWLEDGEMENTS

I am deeply indebted to Dr. Larry Austin for his

continued guidance and assistance throughout the last five

years. His contributions to this research are most

sincerely appreciated. The limitless energy which he

displays will forever be an inspiration to me.

I am grateful to Dr. Richard Barton and Dr. Richard

Sparkman, Jr., who served as members of the committee and

provided numerous helpful suggestions. I would also like

to thank Dr. Robert Koester and Dr. Shrikant Panwalkar for

serving as readers of this dissertation.

I would like to dedicate this dissertation to the

person who has shared the most in its development — my

wife and best friend, Susan Hanna. Although I can never

adequately repay her for the years of support and

encouragement she has afforded me, I can think of no better

way to express my gratitude than with this dedication.


CONTENTS

page

ACKNOWLEDGEMENTS ii

LIST OF TABLES v

LIST OF FIGURES vi

CHAPTER

I. INTRODUCTION 1

ILP in Perspective 2

Organization of the Research 5

II. LITERATURE REVIEW 6

Applications 6

Branch and Bound 7

Implicit Enumeration 9

Group Theory 10

Fractional Cutting Planes 12

All-Integer Cutting Planes 14

Other Considerations 17

III. THE ADVANCED START ALGORITHM 19

The ILP Problem 21

The BDA 24

Difficulties with Other Initial Solutions 27

Tests for Optimality 35

Derivation of the Advanced Starts 42

Advanced Start Algorithm 44


page

IV. COMPUTATIONAL COMPARISON 61

Basis for Comparison 61

The Commercial Cutting Plane Codes 63

The Test Problems 65

The Computational Results 70

V. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS 77

Summary 77

Conclusions 78

Recommendations for Future Research 80

REFERENCES 82


LIST OF TABLES

Table page

1. Results of Computational Testing for Allocation Problems 71

2. Results of Computational Testing for Fixed Charge Problems 72

LIST OF FIGURES

Figure page

1. ILP as a Part of Mathematical Programming 3

2. The Standard Integer Programming Tableau 23

3. Tableau of Coefficients with Upper Bound Constraints Explicitly Stated 30

4. Expanded Form of Initial BDA Tableau 31

5. Initial Tableau for BDA in Reduced Form 34

6. Tableau of Coefficients with Upper Bound Constraints and "z" Column 38

7. Expanded Form of Initial Advanced Start Tableau 40

8. Initial Tableau for Advanced Start Algorithm 41

9. Initial Tableau for Example 1 53

10. Second Tableau for Example 1 54

11. Third and Fourth Tableaus for Example 1 57

12. All Three Tableaus for Example 2 59


CHAPTER I

INTRODUCTION

Many large scale business problems may be modeled

using integer linear programming (ILP); unfortunately, the

state of the art is such that many problems modeled as

ILP problems cannot be solved in a reasonable amount of time.

Although many approaches have been tried, there does not

exist at the present time one solution technique which

efficiently solves all types of ILP problems. However, new

algorithms which are based on existing methods may help to

alleviate some of the computational difficulties associated

with ILP.

The purpose of this research is to investigate the use

of alternate starting solutions for an all-integer cutting

plane algorithm. The need for further study in this area

has been noted in the literature. Taha [1975, p. 351] states

In spite of the tremendous importance of integer linear programming, its methods of solution are still inadequate, particularly from the computational standpoint ... Perhaps an optimal algorithm capable of solving large practical problems will not be available in the foreseeable future; but because of the importance of the integer problem, research on both the theoretical and computational aspects of integer programming must continue.

Young [1965, p. 218] also stresses the need for

further research, particularly in the area of starting

solutions when he notes:

It may be convenient and useful in many cases to express good solutions - achieved by heuristic or other means - as initial basic solutions to an integer programming problem.

Much work has been done in recent years in modeling

problems using ILP, in trying to improve existing solution

methods, and in developing new techniques to solve the

problem. However, little has been done in the area suggested

by Young — using good initial basic solutions. This

investigation fills this void with respect to the all-integer

cutting plane algorithms.

ILP in Perspective

Integer linear programming, linear programming (LP),

and nonlinear programming are all members of the more

general class of mathematical programming models (see Figure

1.). The difference between ILP and LP is that in LP

problems the variables must be continuous, while in ILP they

must be integer valued. If all the variables are required to

be integers, then the problem is referred to as a pure integer

programming problem; if some of the variables are allowed to

be continuous then the problem is called a mixed integer

problem (MIP).

[Figure 1 is a diagram depicting integer linear programming, linear programming, and nonlinear programming as members of the class of mathematical programming models, and showing the ILP solution techniques: branch and bound, implicit enumeration, group theory, and cutting planes, the last being either fractional or all-integer.]

Figure 1. ILP as a Part of Mathematical Programming.

As is true for many operations research methods, the

field of ILP consists of two main areas — modeling the

problem and solving the problem. The solution techniques

may be categorized as either heuristic or analytic. The four commonly used analytic techniques are implicit enumeration, branch and bound, group theory, and cutting planes, which are either fractional or all-integer in nature. An all-integer cutting plane is one in which all

coefficients begin as integers and remain integer throughout

the calculations.

A major portion of ILP modeling consists of problems

in which some or all of the variables are restricted to be

either zero or one. Specialized algorithms have been

developed to exploit the special structure of these problems.

Although it is true that any ILP problem in which upper

bounds may be found for all the variables may be modeled as

a zero-one problem (see for example Wagner [1975, p. 483]),

the number of new variables which are created compounds the

computational difficulty associated with the ILP problem.

As will be seen in Chapter 2, the all-integer cutting

plane literature does not discuss the use of advanced starts.

This dissertation discusses an approach to the use of an

advanced start in an all-integer cutting plane algorithm.

Included in this dissertation are the results of computational

tests which compare the advanced start algorithm with other

cutting plane techniques.

Organization of the Research

A review of the relevant literature concerning ILP

applications and solution techniques is provided in

Chapter 2. Chapter 3 describes a dual all-integer cutting

plane algorithm and the modifications necessary to implement

advanced starting solutions with this algorithm. Chapter 4

exhibits the computational results obtained by using an

advanced start algorithm on a set of standard "hard" test

problems. Chapter 5 presents the conclusions based on the

results of this study, and makes recommendations for future

research.

CHAPTER II

LITERATURE REVIEW

This chapter will discuss some previous research on

solution techniques for ILP problems. Some of the

applications of ILP are reviewed, as well as solution

techniques currently available. Emphasis is placed on

all-integer cutting planes, since they are the focal point

of this investigation.

Applications

When discussing ILP applications it is convenient to

consider two cases. The first case is one in which

fractional values for the variables involved in the

problem are meaningless. For example, it makes no sense to

suggest that a company should purchase 2.37 airplanes or

that 8.62 employees should be used for a particular task.

In these cases only integer values would have meaning, but

any positive integer value would be acceptable. This is

not true for the second category of ILP applications, which

involves the use of variables which are allowed to take on

only the values of zero and one. Dantzig [1960] discussed many of the applications of integer programming in the early days of ILP, and later Cushing [1969] pointed out some

potential applications of ILP in the business environment.

Scheduling, sequencing, capital budgeting, equipment

utilization, and fixed charges are common problems which

may be modeled using the zero-one format. Other situations

exist in which one of two constraints must be binding

depending upon which variables are basic (have values which

are not zero) in the optimal solution to the problem. This

type of either-or constraint may be modeled as a zero-one

problem. Portfolio selection and job shop scheduling (e.g., Manne [1960]) are examples of this type of problem. This

concept may be extended to the situation in which "k" of

"n" constraints must be binding (see Dantzig [I960]).

While finding applications for ILP is not difficult,

finding the solutions to the problems is. Many techniques

have been tried, but none possesses the computational

efficiency necessary to achieve widespread application.

Branch and Bound

Perhaps the most widely used method for solving the

ILP problem is the branch and bound technique, first used

by Land and Doig [1960]. This method begins by solving the

associated LP problem; if this solution is integer, then the

ILP problem has also been solved. If this solution is not

integer, one of the non-integer variables is selected for

branching. In one of the branches the variable chosen must


equal the "rounded down" value, while in the other branch it

is required to equal the "rounded up" value. These new

problems are then solved and the process is repeated. If an

integer solution is found in one of the branches, it yields

a bound on the value of the objective function for the ILP

problem. Other branches are then considered until they either

provide a better integer solution, or give solutions which

are worse than a previously calculated bound. The process

continues until all branches are either explicitly or implicitly investigated (fathomed).

Dakin [1965] modified the Land and Doig algorithm by

using inequalities rather than equalities for the new

restrictions for the branching variables. For example, if

the variable x had a value of 2.5 in the original solution

to the LP problem, the branching would involve the use of

the constraints x<=2 and x>=3 rather than x=2 and x=3

as Land and Doig proposed. This greatly increases the

efficiency of the algorithm.
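The branching scheme just described can be sketched in code. The Python fragment below is an illustrative toy, not any of the published algorithms: it solves a 0-1 knapsack problem, whose LP relaxation happens to be solvable by a greedy rule, and applies Dakin-style branching (x <= 0 and x >= 1) to the fractional variable. All function names are hypothetical.

```python
from fractions import Fraction

def lp_bound(values, weights, cap, fixed):
    """Greedy fractional-knapsack relaxation; returns (bound, fractional item or None)."""
    total_v = sum(values[j] for j in fixed if fixed[j] == 1)
    total_w = sum(weights[j] for j in fixed if fixed[j] == 1)
    if total_w > cap:
        return None, None  # this branch is infeasible
    order = sorted((j for j in range(len(values)) if j not in fixed),
                   key=lambda j: Fraction(values[j], weights[j]), reverse=True)
    bound, room = Fraction(total_v), cap - total_w
    for j in order:
        if weights[j] <= room:
            bound += values[j]; room -= weights[j]
        else:
            bound += Fraction(values[j] * room, weights[j])
            return bound, j  # item j is fractional in the LP solution
    return bound, None  # LP solution is already integer

def branch_and_bound(values, weights, cap):
    best = 0
    stack = [dict()]  # each node fixes a subset of the variables at 0 or 1
    while stack:
        fixed = stack.pop()
        bound, frac_j = lp_bound(values, weights, cap, fixed)
        if bound is None or bound <= best:
            continue  # fathomed: infeasible, or no better than the incumbent
        if frac_j is None:
            best = int(bound)  # integer LP solution: new incumbent bound
            continue
        # Dakin-style branching on the fractional variable: x <= 0 and x >= 1
        stack.append({**fixed, frac_j: 0})
        stack.append({**fixed, frac_j: 1})
    return best

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # 220
```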

Driebeek [1966], Beale and Small [1965] and Tomlin

[1971] have all proposed the use of penalties for choosing the

variable which will be used in the branching. A penalty is

simply the possible change in the value of the objective

function which would occur if branching were done on a

particular variable.

Recently the idea of using advanced starts with branch

and bound techniques has been investigated by Faaland and


Hillier [1979]. Using heuristic methods they found "good"

solutions which are used to provide "bounds" in a branch and

bound algorithm. These "bounds" may allow some branches to

be fathomed more quickly than with the traditional bounding

methods.

Although progress is being made due to the development

of larger and faster computers, branch and bound algorithms

inherently face two problems — large storage requirements

and possible round-off errors. For these reasons the "optimal"

solution found by such algorithms may or may not be the true

optimal solution. This difficulty is not present in any

all-integer cutting plane algorithm since the only values

used are integers. The algorithm to be presented in this

dissertation will be of the all-integer cutting plane type.

Implicit Enumeration

When all variables in an ILP problem are restricted to

be either zero or one, implicit enumeration is often used.

Balas [1965] developed the so-called "additive algorithm"

which is generally recognized as being very efficient for

this type of problem. A branch-and-bound format is used, but

solutions are found through actual enumeration rather than by

using the simplex method. This enumeration is facilitated by

the fact that the variables are either zero or one.

Driebeek [1966] developed the idea of using penalties for improving the branching rules in this enumeration algorithm. Further improvement in the enumeration method was made by Glover [1965b, 1968b], who suggested the use of surrogate constraints. A surrogate constraint is simply a constraint implied by other explicit constraints in the

problem. For example, a linear combination of several

constraints may quickly show that no feasible solution to the

problem exists. Other work with surrogate constraints was

done by Balas [1967] and by Geoffrion [1969].

The original additive algorithm used as an initial

solution all variables equal to zero. Salkin [1970] and

Spielberg [1969] have demonstrated how to use points other

than zero as the initial solution in an enumeration

algorithm.

Despite the fact that the "additive algorithm" with

penalties was developed over 15 years ago, it remains

recognized today as a very efficient algorithm for solving

the zero-one problem.
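As a rough illustration of the enumeration idea (a sketch, not Balas' additive algorithm itself), the Python fragment below searches the 0-1 assignments for a minimization problem, fathoming branches that cannot be repaired to feasibility and pruning by the incumbent objective value. The names and the simple infeasibility test are illustrative assumptions.

```python
def implicit_enumeration(c, A, b):
    """Toy enumeration: minimize c.x subject to A.x >= b, x in {0,1}^n.
    Assumes all c[j] >= 0, so the all-zero start is optimal but perhaps infeasible."""
    n = len(c)
    best = [None]  # incumbent objective value

    def shortfall(x_fixed):
        """Total constraint shortfall; None if no setting of the free
        variables could repair some constraint (fathom the branch)."""
        short = 0
        for row, bi in zip(A, b):
            lhs = sum(row[j] * x_fixed[j] for j in range(len(x_fixed)))
            free_help = sum(max(row[j], 0) for j in range(len(x_fixed), n))
            if lhs + free_help < bi:
                return None
            short += max(bi - lhs, 0)
        return short

    def search(x_fixed, cost):
        if best[0] is not None and cost >= best[0]:
            return  # cost only grows as variables are raised to 1
        short = shortfall(x_fixed)
        if short is None:
            return
        if len(x_fixed) == n:
            if short == 0:  # complete, feasible assignment
                best[0] = cost
            return
        j = len(x_fixed)
        search(x_fixed + [0], cost)          # leave x_j at zero
        search(x_fixed + [1], cost + c[j])   # raise x_j to one

    search([], 0)
    return best[0]

print(implicit_enumeration([5, 7, 10], [[1, 3, 5]], [6]))  # 15
```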

Group Theory

Gomory [1965] introduced the idea of using group theory

for solving the ILP problem. This method involves solving

the associated LP problem, dropping the non-negativity


constraints for the basic variables in this solution, and

examining the resulting solution space (which is called the

linear programming cone). The set of integer solutions in

this cone is referred to as the corner polyhedron and is a finite Abelian group. Two methods may be used to solve the

resulting problem — dynamic programming (see Gomory [1965])

or network analysis (see Hu [1969] and Shapiro [1968]).

Unfortunately, the solution to the group problem may

or may not be feasible with respect to the original ILP

problem (i.e., some of the variables may be negative).

Sufficient conditions for the solution to the group problem

to directly solve the ILP problem were found by Hu [1969]

and by Shapiro [1968b]. Extensions of the group theoretic

techniques have been provided by Shapiro [1969b] and by Taha

[1975], which insure that a solution is found to the ILP

problem through further computations. Unfortunately, the resulting algorithms are much less efficient than the algorithms for solving the group problem.

Further modifications and refinements of algorithms

which rely on group theory have been provided by Bell

[1977, 1979] and by Bell and Shapiro [1979]. However, at

the present time there does not exist any evidence to

suggest that the group theoretic techniques will alleviate

the computational difficulty associated with ILP.


Fractional Cutting Planes

The use of "cuts" in solving ILP problems was

investigated by Dantzig, Fulkerson and Johnson [1954] and

by Markowitz and Manne [1958]. However, it was not until

the work of Gomory [1958] that a cutting plane algorithm which

converged in a finite number of iterations was available.

Gomory's fractional algorithm solves the LP problem first and checks to see if the solution meets the integer requirements. If any variables are not integer, a new constraint or

cut is generated which cuts off the current optimal point

from the feasible solution space. The dual simplex method

is then used to restore feasibility. This new optimal (and

feasible) solution is checked for integrality and the process

is then repeated.
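The cut-generation step can be made concrete. Assuming a tableau row of the form x_B = b - (a1 x1 + ... + an xn) with fractional b, the Gomory fractional cut is frac(a1) x1 + ... + frac(an) xn >= frac(b), where frac(v) = v - floor(v). A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

def gomory_fractional_cut(b, coeffs):
    """Given a simplex-tableau source row  x_B = b - sum(a_j * x_j)  with
    fractional b, return (frac(b), [frac(a_j)]) so that the Gomory cut is
    sum(frac(a_j) * x_j) >= frac(b).  frac(v) = v - floor(v)."""
    frac = lambda v: v - (v.numerator // v.denominator)  # // floors toward -inf
    return frac(b), [frac(a) for a in coeffs]

# Hypothetical source row: x_B = 7/2 - (3/4)x1 - (-5/2)x2
b_cut, a_cut = gomory_fractional_cut(Fraction(7, 2),
                                     [Fraction(3, 4), Fraction(-5, 2)])
print(b_cut, a_cut)  # 1/2 [Fraction(3, 4), Fraction(1, 2)]
```

Note that frac() of a negative coefficient is still nonnegative (frac(-5/2) = 1/2), which is what makes the cut valid for all integer points.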

The cut which Gomory proposed could come from any of

the rows corresponding to non-integer variables or from any

linear combination of these rows. Srinivasan [1965] found

that choosing the row corresponding to the hyperplane

farthest from the origin seemed to work relatively well.

Another type of fractional cut was called a convexity

cut by Glover [1973]. This is actually a generalization of

the intersection cut of Balas [1971a] and the hypercylindrical

cut of Young [1971]. All of these cuts are geometrically

derived and are used in an algorithm similar to the one used

by Gomory. As Salkin [1975] notes however, a Gomory cut must


be used periodically in the algorithm if finite convergence

is to be assured.

An important feature of convexity cut methods is the

derivation of a convex set which contains the optimal solution

to the LP problem, since different convex sets lead to

different cuts. Work involving the derivation of these

convex sets has been done by Balas [1971b, 1972], Charnes, Granot, and Granot [1977], Glover [1973] and Glover and

Klingman [1973a, 1973b].

The power of cutting planes lies in their ability to

quickly reduce the feasible region. This is a desirable

property which may be incorporated into branch and bound

algorithms as well. Some hybrid algorithms which use cutting

planes have been developed by Marsten and Morin [1978] and

by Lev and Soyster [1978].

A difficulty with fractional cutting planes (and with

other methods which use fractional values in the calculations)

is that round-off errors may occur in the computer. This may

cause an optimal integer solution to be unrecognizable —

which would then cause a suboptimal solution to be considered

optimal. For example, the solution to the problem at the end

of an iteration might have integer values for all but one

variable which would have a value of 5.999999. Since this is

not an integer, the algorithm would continue and cut off this

point from the feasible region. The next solution would be


found, and the process would continue until an integer

solution were found. If the 5.999999 value in the computer

was due to round-off error and should actually have been 6,

then any subsequent integer solution would be suboptimal.

Techniques for minimizing the possibility of round-off errors

(such as storing numerators and denominators of fractions

separately) are available, but storage capabilities of the

computer are often exceeded when these are used. Thus, the

round-off error problem associated with fractional cutting

planes cannot always be eliminated.
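A small Python demonstration of the difficulty (a toy illustration, not drawn from the dissertation's tests): repeated floating-point operations drift away from the exact value, while storing numerators and denominators separately, as the text describes, stays exact.

```python
from fractions import Fraction

# Floating-point arithmetic drifts: ten additions of 0.1 do not give exactly
# 1.0, which is how an "integer" answer of 5.999999 can appear where 6 was meant.
total = sum([0.1] * 10)
print(total == 1.0)   # False on IEEE-754 doubles
print(total)          # 0.9999999999999999

# Exact rational arithmetic (numerator and denominator kept separately)
# avoids the round-off problem, at the cost of extra storage:
exact = sum([Fraction(1, 10)] * 10)
print(exact == 1)     # True
```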

All-Integer Cutting Planes

The problem of round-off error was recognized and

eliminated by Gomory [1963b] when he developed the first

all-integer cutting plane algorithm. Using a dual simplex

technique with an initial tableau containing only integer

values, it can easily be demonstrated that, if the pivot

element is always a "-1", all subsequent tableaus will also

be integer valued. Gomory consequently developed a cut

which insured that the pivot element would be a "-1".
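The mechanics of such a cut can be sketched as follows. Assuming a source row s = b - (a1 x1 + ... + an xn) and a multiplier lam already chosen by Gomory's rule, the cut is obtained by floor-dividing every entry in the row by lam; the function and the example values below are hypothetical.

```python
def all_integer_cut(b_i, row, lam, pivot_col):
    """Derive an all-integer cut from the source row  s_i = b_i - sum(a_ij x_j)
    by floor-dividing every entry by a multiplier lam >= 1.  Python's //
    floors toward minus infinity, which is what the derivation requires.
    When lam is chosen by Gomory's rule, the pivot-column entry becomes -1,
    so the dual simplex pivot keeps every tableau entry integral."""
    cut_b = b_i // lam
    cut_row = [a // lam for a in row]
    assert cut_row[pivot_col] == -1, "lam was not chosen to make the pivot -1"
    return cut_b, cut_row

# Hypothetical source row s = -7 - (-3)x1 - (2)x2 with pivot column x1;
# lam = 3 gives floor(-7/3) = -3, floor(-3/3) = -1, floor(2/3) = 0:
print(all_integer_cut(-7, [-3, 2], 3, 0))  # (-3, [-1, 0])
```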

Another dual all-integer algorithm was developed by

Glover [1965a], which he called the bound escalation

algorithm. Although this is considered to be a cutting plane

method, the cuts are never explicitly generated. Instead,


lower bounds are found for the variables and these bounds are escalated at each iteration. Pneuli [1971] suggested using the solution to the LP problem as an initial starting point in this algorithm, but computational results on the use of

these starts were inconclusive.

Another dual algorithm has recently appeared — the

so-called bounded descent algorithm (BDA) developed by

Austin [1979]. Austin and Hanna [1980] have demonstrated

that using upper bounds on the variables as an initial

solution in a Gomory type dual algorithm may help to speed

convergence of the algorithm. Using a standard set of test

problems which were exhibited by Trauth and Woolsey [1969],

Austin and Hanna found that the BDA performed considerably

better than several computer codes based on other cutting

plane techniques.

One criticism of all dual algorithms is that — until

the optimal solution is found — no feasible solution is

known. To alleviate this problem, Ben-Israel and Charnes

[1962] developed a primal all-integer algorithm. However,

finite convergence was not guaranteed, as Mathis [1971]

illustrated.

Young [1965] modified the primal algorithm of

Ben-Israel and Charnes by deriving very complicated rules for

selecting the source row and pivot column, thus insuring

convergence. Further modifications were made by Young [1968],

16

resulting in his simplified primal algorithm (SPA). Glover

[1968a] also found simpler ways to insure convergence of the

primal algorithm.

A major problem with the SPA is the existence of long

sequences of iterations in which the basis (and consequently

the objective function value) remains unchanged. Such an

iteration is called a stationary cycle. In some cases, the

optimal solution has been reached, but the simplex evaluation

criterion does not indicate this fact until a long sequence

of stationary cycles occurs. Arnold and Bellmore [1974a]

used an enumerative branch and bound algorithm to find an

upper bound on the value of the objective function, which

recognized that the optimal solution had been reached —

without actually performing the stationary cycles.

In another attempt to deal with stationarity, Arnold

and Bellmore [1974b] developed a cut which causes stationary

cycles in the SPA to end sooner than they normally would.

The regular cut is used until a stationary cycle is

encountered; the new cut is then used in an attempt to speed

convergence. In yet another attempt to avoid stationarity

problems, Arnold and Bellmore [1974c] found that in certain

situations stationary cycles could be implicitly performed.

Thus the tableau which follows the long series of stationary

cycles could be found much more quickly than if the stationary

cycles were explicitly performed.


In addition to the primal and the dual cutting plane

algorithms previously mentioned, there exist other cutting

plane methods which are best described as primal-dual in

nature. The first of these was the pseudo primal-dual

algorithm developed by Glover [1966]. This is similar to

the dual algorithm of Gomory in the dual stage and the

primal algorithm of Young in the primal stage. A later

primal-dual method which was designed to avoid stationarity

problems in the primal stage was developed by Ghandforoush

[1980]. Advanced starts in this algorithm were investigated

by Austin, Ghandforoush, and Hanna [1980].

A partitioning type of algorithm for ILP problems

called the eclectic primal algorithm (EPA) was developed by

Young [1975]. This is basically a primal cutting plane

algorithm which decomposes the problem into smaller integer

programs, which are then solved by any ILP solution technique.

Thus this algorithm should actually be classified as a hybrid

technique rather than as a pure primal cutting plane technique.

Other Considerations

If the starting solutions used in the various techniques

are compared, some interesting results are found. The branch

and bound, group theoretic, and fractional cutting plane

algorithms all use the solution to the LP problem as a

starting point. The original enumeration method of Balas was modified by Salkin [1970] and by Spielberg [1969] to allow points other than zero as the initial solution. However, all primal all-integer cutting plane methods begin with all variables equal to zero, and the original dual and primal-dual methods also start at zero. Modifications of the

dual (by Austin and Hanna [1980]) and the primal-dual (by

Austin, Ghandforoush, and Hanna [198O]) algorithms have

permitted the initial solutions to be set at upper bounds

on the variables. The bound escalation algorithm of Glover

was modified by Pneuli to allow the use of values from the

solution to the LP problem in the initial solution. However,

the dual algorithm which Gomory developed has never been

configured to allow the use of other values (other than

upper bounds) in the initial solution.

It is intuitively appealing to modify Gomory's dual

all-integer algorithm to allow the use of values related

to the solution of the LP problem as starting points. Thus

the advantage of an all-integer technique (i.e., no round-off

errors) can be combined with the advanced start advantage of

the other methods to provide a more efficient way of solving

ILP problems. It is this rationale which provided the impetus

for this investigation.

CHAPTER III

THE ADVANCED START ALGORITHM

The algorithm to be presented in this chapter is an

all-integer cutting plane algorithm. This algorithm not only

incorporates many of the features found to speed convergence

in other algorithms (such as the objective cut), it also

provides a major contribution to the area of all-integer

cutting planes by allowing any point to be chosen as the

initial solution. This starting point may be optimal, better

than optimal but infeasible, feasible but less than optimal,

or both infeasible and less than optimal. Regardless of the

feasibility and optimality status of this starting point,

the algorithm provides a method for moving to a feasible

solution; and if optimality is not present at this point, the

algorithm will continue until the optimal solution is found.

The Bounded Descent Algorithm (BDA), which can be

considered as a forerunner of the Advanced Start Algorithm,

also uses a point other than zero as the initial solution.

Upper bound constraints are explicitly included in the BDA

tableau and a transformation is made to begin with each

structural variable set equal to its upper bound. However,

the same procedure cannot be used with any starting point

because the problem could be overconstrained — the variables


might initially be set at values less than their optimal

values and the upper bound constraints would be binding on

these variables. The Advanced Start Algorithm provides a way

to initially set the variables at any value, move to a

feasible solution, check to see if the problem was initially

over-constrained, and, if it was, to then relax these

constraints and move to an optimal solution.

In this chapter the standard integer linear programming

problem will be presented with mention of the starting point

used in Young's Simplified Primal Algorithm (SPA) and the

starting point in Gomory's dual all-integer cutting plane

algorithm. Important features of the BDA which are incorporated in the Advanced Start Algorithm will be explained as

well as the aspect of overconstraining the problem. The

method for allowing the advanced start constraints to be

nonbinding will then be presented followed by a discussion of

the derivation of the advanced start itself. Finally, the

Advanced Start Algorithm will be presented together with

examples and further discussion of the steps in the algorithm.


The ILP Problem

The integer linear programming problem may be stated

as follows:

^ ""•- . n

Maximize Xn * c x* (1) ^ j,l 0 J

n Subject to Z a..X. <« b, , i = l,2,...,m (2)

J=l J J 1

X. >« 0 and integer, J = l,2,...,n (3)

where a^., c., b^ are integer for all i, J. (4)

If the slack variables are added to the constraints and

appropriate arithmetic operations are performed, the problem

may be stated in an expanded form (P1) which resembles the

standard Beale tableau. This form is:

Maximize      x0 = b0 - c1 x1 - c2 x2 - ... - cn xn                   (5)

Subject to    s1 = b1 - a11 x1 - a12 x2 - ... - a1n xn                (6)

              s2 = b2 - a21 x1 - a22 x2 - ... - a2n xn                (7)
(P1)
              ...

              sm = bm - am1 x1 - am2 x2 - ... - amn xn                (8)

              xj >= 0 and integer for j = 1,2,...,n                   (9)

where cj, aij, bi are integer for all i, j.                           (10)


From this statement is derived the standard integer programming tableau, which explicitly includes the nonnegativity constraints. This tableau is given in Figure

2; the "t" variables are the slacks for the nonnegativity

constraints. If all the values in the first column are

non-negative, then this tableau represents an initial

feasible solution in which all structural variables are

equal to zero. This tableau then could be used as the initial

tableau in a primal all-integer cutting plane algorithm. If

an additional row were added to be used for insuring convergence, then the tableau would be the initial tableau for

Young's SPA. Therefore, it can be seen that the initial

solution used by Young in the SPA has all decision variables

set equal to zero.

If all the objective function coefficients are

originally negative or zero (as might be the case if the

problem were a minimization problem), then this tableau would

represent a dual feasible solution. All the structural

variables are set equal to zero as mentioned previously, but

if any of the slack variables are negative, then the tableau

would be primal infeasible. This tableau could be used as

the initial tableau for a dual algorithm such as Gomory's

all-integer cutting plane algorithm. The additional row used

for insuring convergence in a primal algorithm is not needed

in a dual algorithm since a lexicographic dual simplex


              1     -t1    -t2    ...    -tn

   x0 =      b0    -c1    -c2    ...    -cn

   x1 =       0     -1      0    ...      0

   x2 =       0      0     -1    ...      0

   ...

   xn =       0      0      0    ...     -1

   s1 =      b1    a11    a12    ...    a1n

   s2 =      b2    a21    a22    ...    a2n

   ...

   sm =      bm    am1    am2    ...    amn

Figure 2. The Standard Integer Programming Tableau.


procedure will guarantee convergence.
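The two starting conditions just described can be checked mechanically. The sketch below follows the text's conventions (primal feasible if every right-hand-side value is nonnegative; dual feasible if every original objective coefficient is nonpositive, as in a minimization problem); the function name is hypothetical.

```python
def classify_tableau(b_col, obj_coeffs):
    """Classify the standard tableau's starting solution (all structural
    variables zero):
      - primal feasible if every right-hand-side value b_i is nonnegative,
        the starting condition for a primal algorithm such as Young's SPA;
      - dual feasible if every original objective coefficient c_j <= 0,
        the starting condition Gomory's dual all-integer algorithm requires."""
    primal = all(bi >= 0 for bi in b_col)
    dual = all(cj <= 0 for cj in obj_coeffs)
    return primal, dual

# One negative slack value, nonpositive objective coefficients:
print(classify_tableau([4, -2, 7], [-3, -1]))  # (False, True): dual start
```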

Every one of the original all-integer cutting plane algorithms begins with a tableau which is similar to the one given in Figure 2. All structural variables are set equal to zero, as is seen in this tableau. This same initial solution was used in all-integer cutting plane algorithms until the development of the BDA by Austin [1979].

The BDA

The Bounded Descent Algorithm was the first true

all-integer cutting plane algorithm which used an "advanced start." The basic idea behind the BDA was to approach the

optimal lattice point from a different direction and from a

point which is closer to the optimal solution.

In this algorithm each variable is initially set equal

to its upper bound which is either an explicit upper bound

(such as 1 for a zero-one variable) or an implicit upper

bound which may be derived from the constraints. If the

upper bounds are derived from the constraints, they are

chosen to be as small as possible so that the initial solution

is "close" to the optimal solution. However, each upper

bound represents a binding constraint in the problem and

thus cannot arbitrarily be set at a lower value or the

problem will be overconstrained. This difficulty will be


addressed and alleviated in the Advanced Start Algorithm.

The BDA has some features which will be incorporated

into the Advanced Start Algorithm — the use of an objective

cut and the selective ordering of the constraints. The

objective cut is a constraint appended to the tableau which

limits the value of the objective function in the integer

problem to be less than or equal to the integer portion of

the optimal objective function value of the relaxed problem.

In many problems this cut helps to speed convergence.
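As a concrete illustration of the objective cut, the following sketch (the helper name is illustrative, not from the dissertation) builds the cut's coefficients and right-hand side from the relaxed optimal value; the data are those of Example 1, solved later in this chapter:

```python
import math

def objective_cut(c, relaxed_obj):
    """Objective cut described above: constrain sum(c_j * x_j) to be at
    most the integer portion of the relaxed optimal objective value.
    Returns (coefficients, right-hand side)."""
    return list(c), math.floor(relaxed_obj)

# Example 1 data: c = (5, 6), relaxed optimum x_0* = 74.9.
coeffs, rhs = objective_cut([5, 6], 74.9)  # cut: 5*x_1 + 6*x_2 <= 74
```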

The selective ordering of the constraints in the BDA

also results in fewer iterations in many cases. This

involves determining which constraint is most violated by the

initial solution (i.e., the constraint with the most negative

"b" value) and making this the first constraint; the second

most violated constraint is placed second, etc. The rationale

for doing this involves the row selection criterion which is

necessary to insure convergence. The rule used is to choose

the first violated constraint as the source row. If the

usual dual simplex criterion of choosing the most violated

constraint were used, convergence would not be guaranteed. By

ordering the rows so that the most violated constraint is also

the first violated constraint the two criteria are actually

combined at the first iteration. While this order may be

lost in subsequent iterations, this procedure seems to work

well in practice. An alternative selection rule which aims

at making the deepest cut possible while still insuring


convergence is to choose the most violated constraint unless

this choice leads to a dual degenerate iteration. If this is

the case, then the second most violated constraint is tried.

If this also leads to a dual degenerate iteration, then the

next constraint is tried, and this continues until an

appropriate row is found. If such a row is found, then it is

used as the source row. Convergence is guaranteed as long as

such a source row is chosen, but if no such source row is

found (i.e., a dual degenerate iteration must occur), then

convergence can be proven if the first violated constraint

is chosen. While this technique reduces the number of

iterations in many problems, it is often more time consuming

than simply always choosing the first violated constraint.

Consequently, the row selection criterion used in the

Advanced Start Algorithm will be the same as that for the

BDA.
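The ordering and row-selection rules above can be sketched as follows, with each constraint row represented as a hypothetical (b_i, coefficients) pair; the numbers are drawn from Example 1's initial tableau, worked later in this chapter:

```python
def order_by_violation(rows):
    """Sort constraint rows so the most violated (most negative b_i) is
    first, the second most violated is second, and so on."""
    return sorted(rows, key=lambda row: row[0])  # stable sort on b_i

def first_violated(rows):
    """Source-row rule used by the BDA and kept for the Advanced Start
    Algorithm: the FIRST violated constraint (smallest i with b_i < 0)."""
    for i, (b_i, _) in enumerate(rows):
        if b_i < 0:
            return i
    return None  # no violated row: the solution is feasible

# Hypothetical rows as (b_i, coefficient list) pairs.
rows = order_by_violation([(0, [-4, -5]), (-1, [-5, -3]), (-1, [-5, -6])])
r = first_violated(rows)  # after ordering, the two rules coincide here
```

With the rows pre-sorted, the first violated row is also (at the first iteration) the most violated one, which is exactly how the BDA combines the two criteria.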

Hence the BDA provides two features which are used in

the algorithm developed in this paper. However, rather than

using upper bounds as the initial solution, other starting

points will be used and a technique will be developed to

prevent these advanced starts from overconstraining the

problem.


Difficulties with Other Initial Solutions

Using upper bounds as initial solutions to ILP problems

will guarantee that the initial solution to the problem is
infeasible except in the trivial case in which this is the
optimal solution to the problem. Other starting points may

or may not possess this characteristic. If the integer

portion of the relaxed solution is used as the initial

solution it may or may not be feasible. Examples may easily

be constructed to illustrate each of these situations.

If all objective function coefficients are positive,

then using upper bounds at the start will insure that the

initial objective function value is "better" than the

objective function value for the optimal solution. If the

integer portion of the relaxed solution were used, the same

situation would not necessarily exist. Were it true that

the integer portion of the relaxed solution always resulted

in a feasible, optimal solution to the integer problem, then

there would be no need for special integer programming

algorithms. Obviously this is not true.

Arbitrarily choosing other starting points which are

less than the upper bounds will result in similar difficulties.
However, judiciously choosing the initial values may insure

that the initial objective function value is better than the

optimal value. This selection process will be discussed

later.


If a variable were initially set at a value less than
the optimal value and the BDA were used to solve the
problem, the variable would never exceed this value. If
this value were below the true optimal value, then the
problem would be overconstrained and the algorithm would
stop at a point which was suboptimal. However, if the
upper bound constraints were somehow allowed to be nonbinding,
then the situation could be rectified.

Consider the following expression (P2) of the ILP

problem with the upper bound constraints explicitly

stated.

Maximize     x_0 - Σ c_j x_j = 0                        (11)

Subject to   x_j + t_j = u_j,     j = 1,2,...,n         (12)

(P2)         Σ a_ij x_j + s_i = b_i,  i = 1,2,...,m     (13)

             Σ c_j x_j + t_0 = [x_0*]                   (14)

             x_j, t_j, s_i, t_0 >= 0 and integer        (15)

             c_j, a_ij, b_i, u_j integer                (16)

(all sums run over j = 1,2,...,n)


The variables are defined as follows:

x_j     structural variable

t_j     slack variable for upper bound constraint

u_j     upper bound on variable x_j

s_i     slack variable for original constraint i

t_0     slack variable for objective constraint

x_0*    value of objective function for relaxed problem

The tableau which represents this problem is given in

Figure 3. Note that this is an expanded form and not the

form which is actually used for computational purposes. It

will not be necessary to use all of these columns in the

pivot operations and consequently some will be dropped from

the tableau. As in other all-integer cutting plane tableaus

the nonnegativity constraints are explicitly stated, but the

slack variables for these constraints will not be explicitly

included in the tableau.

If a normal pivot operation is used to set each

structural variable equal to its upper bound, then the

tableau is transformed to yield the beginning tableau for

the BDA; this is presented in Figure 4. It should be noted
that the values in the first column always refer to the
variables in the same order. Thus, the first value in the


          b       x_1  x_2  ... x_n    t_1  t_2  ... t_n    s_1  s_2  ... s_m   t_0

x_0  =    0      -c_1 -c_2  ... -c_n    0    0   ...  0      0    0   ...  0     0
x_1  =    0       -1    0   ...   0     0    0   ...  0      0    0   ...  0     0
x_2  =    0        0   -1   ...   0     0    0   ...  0      0    0   ...  0     0
 ...
x_n  =    0        0    0   ...  -1     0    0   ...  0      0    0   ...  0     0
t_1  =   u_1       1    0   ...   0     1    0   ...  0      0    0   ...  0     0
t_2  =   u_2       0    1   ...   0     0    1   ...  0      0    0   ...  0     0
 ...
t_n  =   u_n       0    0   ...   1     0    0   ...  1      0    0   ...  0     0
s_1  =   b_1     a_11 a_12  ... a_1n    0    0   ...  0      1    0   ...  0     0
s_2  =   b_2     a_21 a_22  ... a_2n    0    0   ...  0      0    1   ...  0     0
 ...
s_m  =   b_m     a_m1 a_m2  ... a_mn    0    0   ...  0      0    0   ...  1     0
t_0  = [x_0*]    c_1  c_2   ... c_n     0    0   ...  0      0    0   ...  0     1

Figure 3. Tableau of Coefficients with Upper Bound Constraints Explicitly Stated.


          b                   x_1  x_2 ... x_n    t_1   t_2  ... t_n     s_1  s_2 ... s_m   t_0

x_0  =  Σ c_j u_j              0    0  ...  0     c_1   c_2  ... c_n      0    0  ...  0     0
x_1  =   u_1                   1    0  ...  0      1     0   ...  0       0    0  ...  0     0
x_2  =   u_2                   0    1  ...  0      0     1   ...  0       0    0  ...  0     0
 ...
x_n  =   u_n                   0    0  ...  1      0     0   ...  1       0    0  ...  0     0
t_1  =    0                    0    0  ...  0     -1     0   ...  0       0    0  ...  0     0
t_2  =    0                    0    0  ...  0      0    -1   ...  0       0    0  ...  0     0
 ...
t_n  =    0                    0    0  ...  0      0     0   ... -1       0    0  ...  0     0
s_1  =  b_1 - Σ a_1j u_j       0    0  ...  0   -a_11  -a_12 ... -a_1n    1    0  ...  0     0
s_2  =  b_2 - Σ a_2j u_j       0    0  ...  0   -a_21  -a_22 ... -a_2n    0    1  ...  0     0
 ...
s_m  =  b_m - Σ a_mj u_j       0    0  ...  0   -a_m1  -a_m2 ... -a_mn    0    0  ...  1     0
t_0  = [x_0*] - Σ c_j u_j      0    0  ...  0   -c_1   -c_2  ... -c_n     0    0  ...  0     1

Figure 4. Expanded Form of Initial BDA Tableau.


first column is always the value of the objective function,

the next value is always the value of the first structural

variable, etc. Consequently, it is not necessary to carry

along the identity columns or even to keep track of which

variables the other columns represent. The tableau for the

BDA may be further reduced by eliminating the rows corresponding
to the slack variables for the upper bound constraints if

the upper bounds are derived from the original constraints,

because the upper bound constraints in this case would be

redundant. Thus the size of the tableau for the BDA is

approximately the same as the size for most other all-integer

cutting plane algorithms. This reduced tableau is given in

Figure 5.

In order to allow any variable to be used in an

initial solution, there must be some way to allow the

variables to exceed the initial values (if necessary) to

reach the optimal solution. While eliminating the rows

corresponding to the slack variables for the upper bound

constraints does have the effect of relaxing these limits

(unless they are implied by the other constraints), the first

feasible solution found will not necessarily be optimal if

the initial values are set too low. What this means is that

while all the "basic" variables in a final suboptimal solution

are the variables which must be "basic" in the optimal
solution, they are set at values lower than their optimal
values. This would be analogous to a final simplex solution

to an LP problem at an interior point rather than at an

extreme point.

Further study of the initial BDA tableau and the
corresponding nonbasic variables helps to demonstrate how
the upper bound constraints (or advanced start constraints)
may be relaxed. Consider the expanded tableau in Figure 4.

The nonbasic variables in this tableau are the slack

variables for the advanced start constraints, and this

particular tableau explicitly displays the values of these

variables. When a pivot column is chosen, the corresponding

variable becomes basic and the value is explicitly displayed.

This means that the advanced start constraint is redundant

and consequently is not a binding constraint. If the row

which explicitly gives the value of this slack variable is

dropped along with the cut which has been generated, then in

later iterations it will be possible for this variable to

become negative without affecting the feasibility status of

the problem (as indicated by the first column of the tableau).

This is analogous to dropping the cut constraints in

fractional cutting plane algorithms whenever the slack

variable for a cut becomes a basic variable. A discussion

of dropping cuts in fractional algorithms is provided in

most integer programming texts.

          b                    t_1    t_2   ...  t_n

x_0  =  Σ c_j u_j              c_1    c_2   ...  c_n
x_1  =   u_1                    1      0    ...   0
x_2  =   u_2                    0      1    ...   0
 ...
x_n  =   u_n                    0      0    ...   1
s_1  =  b_1 - Σ a_1j u_j     -a_11  -a_12   ... -a_1n
s_2  =  b_2 - Σ a_2j u_j     -a_21  -a_22   ... -a_2n
 ...
s_m  =  b_m - Σ a_mj u_j     -a_m1  -a_m2   ... -a_mn
t_0  = [x_0*] - Σ c_j u_j    -c_1   -c_2    ... -c_n

Figure 5. Initial Tableau for BDA in Reduced Form.


As one of the slack variables for an advanced start

constraint becomes basic, a slack variable for a cut becomes

nonbasic. If all columns have been used as the pivot column,
then the nonbasic variables in the tableau will be slack

variables for cut constraints. Since the rows corresponding

to the advanced start slack variables are dropped from the

tableau, there is nothing left in the tableau which would

indicate what values were used in the initial solution. What

is present in the tableau at this point is a basic solution

generated from the cut constraints which in turn were derived

from the original constraints in the problem. The advanced

start constraints merely provide a means of reaching such a

solution rather quickly, and once this is done, the tableau

is at a point consistent with the traditional dual all-integer

cutting plane type of algorithm. Gomory [1963] developed the

theory to show that the first feasible solution found from

this point will be the optimal solution.

Tests for Optimality

One way to determine if a problem has been overconstrained
is to determine whether the final value of a

variable is the same as the initial value. If no values are

equal to the initial values, then the problem has not been

overconstrained. If any variable is equal to the initial


value, then the problem may or may not have been

overconstrained, and further analysis is necessary.

The method that will be used to test the final

solution for optimality if the problem may have been

overconstrained will now be developed. First, problem P2

is modified to make the advanced start constraints nonbinding.

This is accomplished by introducing only one new variable "z"
to all of the advanced start constraints. Such a constraint
would now have the form

x_j + t_j - z = u_j                                     (17)

If the value of z is greater than zero in the final

solution, then the value for the "x" variable will be above

the initial value. This procedure differs from the normal

handling of this situation in linear programming in which

either the slack variable or the z variable would have to be

zero. In all-integer methods it is possible that all variables

in this equation would be greater than zero. The statement

of the problem with the inclusion of the z variable is the

following:

Maximize     x_0 - Σ c_j x_j = 0                        (18)

Subject to   x_j + t_j - z = u_j,     j = 1,2,...,n     (19)

(P3)         Σ a_ij x_j + s_i = b_i,  i = 1,2,...,m     (20)

             Σ c_j x_j + t_0 = [x_0*]                   (21)

             x_j, t_j, s_i, t_0, z >= 0 and integer     (22)

             c_j, a_ij, b_i, u_j integer                (23)

The tableau representing the statement of the problem

as in P3 is given in Figure 6. The "u" values in the BDA

tableau will be used to represent the initial values for the

variables, and are not necessarily upper bounds. This tableau

also differs from the BDA tableau by the introduction of the

"z" column. If the normal pivot operations are performed to

set each structural variable equal to the initial value, the

tableau is transformed to the tableau in Figure 7.

As mentioned previously the identity columns are not

needed in the actual computational procedures so they are

dropped from the tableau. Also the slack variables for the

"upper bound" constraints (they should more accurately be

called initial value constraints) will be taken out of the

tableau because leaving them in might cause these constraints

to be binding and the problem would be overconstrained. The

"z" column is the mechanism used to move from the first

feasible solution to the optimal solution if any variables

are equal to their original values. This column will not be

eligible for selection as the pivot column until a feasible

solution is found. At this point it will be the first pivot

column in the primal phase, and it will remain eligible

for selection until the optimal solution is found. The actual


          b       x_1  x_2  ... x_n    t_1  t_2  ... t_n    s_1  s_2  ... s_m   t_0    z

x_0  =    0      -c_1 -c_2  ... -c_n    0    0   ...  0      0    0   ...  0     0     0
x_1  =    0       -1    0   ...   0     0    0   ...  0      0    0   ...  0     0     0
x_2  =    0        0   -1   ...   0     0    0   ...  0      0    0   ...  0     0     0
 ...
x_n  =    0        0    0   ...  -1     0    0   ...  0      0    0   ...  0     0     0
t_1  =   u_1       1    0   ...   0     1    0   ...  0      0    0   ...  0     0    -1
t_2  =   u_2       0    1   ...   0     0    1   ...  0      0    0   ...  0     0    -1
 ...
t_n  =   u_n       0    0   ...   1     0    0   ...  1      0    0   ...  0     0    -1
s_1  =   b_1     a_11 a_12  ... a_1n    0    0   ...  0      1    0   ...  0     0     0
s_2  =   b_2     a_21 a_22  ... a_2n    0    0   ...  0      0    1   ...  0     0     0
 ...
s_m  =   b_m     a_m1 a_m2  ... a_mn    0    0   ...  0      0    0   ...  1     0     0
t_0  = [x_0*]    c_1  c_2   ... c_n     0    0   ...  0      0    0   ...  0     1     0

Figure 6. Tableau of Coefficients with Upper Bound Constraints and "z" Column.


          b                   x_1  x_2 ... x_n    t_1   t_2  ... t_n     s_1  s_2 ... s_m   t_0      z

x_0  =  Σ c_j u_j              0    0  ...  0     c_1   c_2  ... c_n      0    0  ...  0     0    -Σ c_j
x_1  =   u_1                   1    0  ...  0      1     0   ...  0       0    0  ...  0     0      -1
x_2  =   u_2                   0    1  ...  0      0     1   ...  0       0    0  ...  0     0      -1
 ...
x_n  =   u_n                   0    0  ...  1      0     0   ...  1       0    0  ...  0     0      -1
t_1  =    0                    0    0  ...  0     -1     0   ...  0       0    0  ...  0     0      -1
t_2  =    0                    0    0  ...  0      0    -1   ...  0       0    0  ...  0     0      -1
 ...
t_n  =    0                    0    0  ...  0      0     0   ... -1       0    0  ...  0     0      -1
s_1  =  b_1 - Σ a_1j u_j       0    0  ...  0   -a_11  -a_12 ... -a_1n    1    0  ...  0     0    Σ a_1j
s_2  =  b_2 - Σ a_2j u_j       0    0  ...  0   -a_21  -a_22 ... -a_2n    0    1  ...  0     0    Σ a_2j
 ...
s_m  =  b_m - Σ a_mj u_j       0    0  ...  0   -a_m1  -a_m2 ... -a_mn    0    0  ...  1     0    Σ a_mj
t_0  = [x_0*] - Σ c_j u_j      0    0  ...  0   -c_1   -c_2  ... -c_n     0    0  ...  0     1    Σ c_j

Figure 7. Expanded Form of Initial Advanced Start Tableau.


initial tableau for this algorithm is given in Figure 8.

After the first feasible solution has been reached, the

"z" column will always appear to indicate that the solution

is not optimal regardless of the validity of this indication.

However, if no variables are equal to their initial values,

then the solution must be optimal. If a primal phase were

entered at this point, it would only serve to confirm this

fact by going through a series of stationary cycles until

optimality was indicated. The actual solution would not

change in any of these iterations. In the case where the

first feasible solution is not the optimal solution, the "z"

column will allow a primal phase to begin and the values of

the variables may then increase until the true optimal

solution is found. It is possible that a first feasible

solution with some variables equal to the initial values does

represent the optimal solution. If this happens then the

primal phase will merely confirm this. Again comparing this

to the simplex procedure for LP problems, this primal phase

would either begin at a feasible interior point and move to

the optimal extreme point or it would begin at the optimal

point and perform some degenerate iterations until the

optimality criterion indicated that this point was in fact

the desired final solution.


          b                    t_1    t_2   ...  t_n       z

x_0  =  Σ c_j u_j              c_1    c_2   ...  c_n    -Σ c_j
x_1  =   u_1                    1      0    ...   0       -1
x_2  =   u_2                    0      1    ...   0       -1
 ...
x_n  =   u_n                    0      0    ...   1       -1
s_1  =  b_1 - Σ a_1j u_j     -a_11  -a_12   ... -a_1n   Σ a_1j
s_2  =  b_2 - Σ a_2j u_j     -a_21  -a_22   ... -a_2n   Σ a_2j
 ...
s_m  =  b_m - Σ a_mj u_j     -a_m1  -a_m2   ... -a_mn   Σ a_mj
t_0  = [x_0*] - Σ c_j u_j    -c_1   -c_2    ... -c_n    Σ c_j

Figure 8. Initial Tableau for Advanced Start Algorithm.
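The rows of this initial tableau can be assembled mechanically; the sketch below (function name illustrative) keeps the constraints in their original order, whereas Step 4 of the algorithm later sorts them by b value. The data are those of Example 1, solved later in this chapter, with u taken as the rounded start:

```python
import math

def initial_asa_tableau(c, A, b, u, x0_star):
    """Assemble the reduced initial tableau of Figure 8 as rows of
    [value, entries under t_1..t_n, entry under z]."""
    n = len(c)
    rows = [[sum(cj * uj for cj, uj in zip(c, u))] + list(c) + [-sum(c)]]
    for j in range(n):                       # x_j = u_j rows
        rows.append([u[j]] + [1 if k == j else 0 for k in range(n)] + [-1])
    for a_i, b_i in zip(A, b):               # s_i rows
        rows.append([b_i - sum(aij * uj for aij, uj in zip(a_i, u))]
                    + [-aij for aij in a_i] + [sum(a_i)])
    rows.append([math.floor(x0_star) - sum(cj * uj for cj, uj in zip(c, u))]
                + [-cj for cj in c] + [sum(c)])   # objective-cut row t_0
    return rows

# Example 1 data; u = (9, 5) is the rounded advanced start.
T = initial_asa_tableau([5, 6], [[4, 5], [5, 3]], [61, 59], [9, 5], 74.9)
```

Once the constraint rows are reordered by b value, these rows reproduce the numerical tableau of Figure 9.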


Derivation of the Advanced Starts

While the Advanced Start Algorithm will allow any
starting point to be used as an initial solution, it is

intuitively appealing to begin at some point close to the

solution of the relaxed problem. The solution to the ILP

problem is not necessarily close to the solution of the LP

problem and at times may be extremely different from the

relaxed solution. However, for many problems either the

two solutions are close to each other or there exists a

feasible solution which is close to the relaxed solution

and which yields an objective function value which is

close to the optimal objective function value for the

integer problem. For this reason and also because

fractional cutting planes, group theoretic methods, and

branch and bound methods all begin with the solution to

the relaxed problem, all of the starting points discussed

in this section will be based on the relaxed solution.

The most obvious and probably the most appealing

starting point is the one found by using the traditional

rounding technique to get an integer solution from the

relaxed solution. This method is simply to add 0.5 to

each of the values in the relaxed solution and use the

integer part of the result as the start. This will result

in a starting point which is closest (in terms of distance)

to the relaxed solution, although it will not always result


in an objective function value which is closest to the value

for the relaxed solution. Because of this proximity to the

relaxed solution, this will be the starting point used in the

Advanced Start Algorithm.
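The rounding rule can be stated in one line; a sketch (the function name is illustrative):

```python
import math

def rounded_start(relaxed):
    """Traditional rounding start used by the ASA: u_j = [x_j* + 0.5],
    the integer part after adding 0.5 (round half up)."""
    return [math.floor(x + 0.5) for x in relaxed]

# Relaxed solution of Example 1, solved later in this chapter.
start = rounded_start([8.6, 5.3])  # -> [9, 5]
```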

Another starting point which could be used and which

has certain desirable properties is one found by rounding

the relaxed solution values either up or down depending upon

the sign of the objective function coefficient for each

variable. By rounding the value up if the coefficient is

positive and rounding down if the coefficient is negative

the initial solution will always be "better than optimal"

and infeasible. Most dual algorithms begin with a solution

which has these two properties, and consequently this method

for generating an initial solution is somewhat appealing.

Obviously, there are many other rounding techniques

available. Heuristics based on objective function

coefficients or on coefficients in binding constraints could

be used. Special structures in some problems- might suggest

other ways to generate good starting points. However, for

the general ILP problem there is no logical reason to expect

any other method to perform better than the one actually used

in the algorithm.

The initial solution found by adding 0.5 and using

the integer portion of the result could yield a feasible

solution. If it does, then a lower bound on the value of the

objective function is immediately available. Should this

occur, the Advanced Start Algorithm could begin the primal

phase to see if this solution could be improved. However,

if a feasible solution should be so readily apparent, it

would seem logical to try to improve upon this solution

before actually starting the algorithm. One way to do this

would be to add one to some or all of the variables and to

use the result as the initial solution. The variable with

the lowest positive objective function coefficient could be

selected, and one would be added to the value found by rounding.
If this still gives a feasible solution, then the next

smallest objective function coefficient would be used to

select the next variable to increase by one. This process

would continue until an infeasible solution were found. The

final outcome of this technique would be an infeasible

solution which is close to the relaxed solution. This is the

approach which will be used in this algorithm.
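A sketch of this greedy improvement pass (names are illustrative; a single pass over the variables is shown for brevity):

```python
def improve_feasible_start(start, c, A, b):
    """While the current start satisfies A x <= b, add 1 to the variable
    with the smallest positive objective coefficient, then the next
    smallest, stopping as soon as the start becomes infeasible."""
    def feasible(x):
        return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
                   for row, bi in zip(A, b))
    for j in sorted((j for j in range(len(c)) if c[j] > 0),
                    key=lambda j: c[j]):
        if not feasible(start):
            break
        start[j] += 1
    return start

# Hypothetical feasible rounding (8, 5) under Example 1's constraints:
start = improve_feasible_start([8, 5], [5, 6], [[4, 5], [5, 3]], [61, 59])
```

The pass terminates with an infeasible point close to the relaxed solution, which is exactly the kind of start the dual phase expects.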

Advanced Start Algorithm

The topics discussed thus far in this chapter include

a description of the BDA, how the BDA can be modified to

allow the upper bound constraints to be nonbinding, how the

"z". column will cause the algorithm to search for the

optimal solution, and a way to derive a good advanced start.


Each of these elements contributed to the development of the

Advanced Start Algoritto. The actual algorithm will now be

presented with further discussion and examples to follow.

Step 1. Solve the associated linear programming problem
(the integer problem with the integer restrictions
relaxed). Let x_0* be the value of the objective
function. If all variables are integers, stop.
Otherwise, go to Step 2.

Step 2. Generate the following advanced start:
u_j = [x_j* + 0.5], j = 1,2,...,n, where the
notation [ ] means the largest integer less than
or equal to the value in the brackets. If this
yields a feasible solution, then add 1 to the
value of the variable corresponding to the smallest
positive objective function coefficient. If this
is still feasible, go to the next variable and add
1. Continue until the start becomes infeasible.

Step 3. Generate the following objective cut, which limits
the value of the objective function to be less than
or equal to the integer portion of the optimal value
of the relaxed problem:

    Σ c_j x_j <= [x_0*]    (sum over j = 1,2,...,n).


Step 4. Transform the tableau to set the variables equal
to the starting values (Figure 5). Rearrange the
original constraints in order of increasing b_i.

Step 5. Generate the "z" column and append this to the

tableau (Figure 8).

Step 6. If all b_i >= 0, then the solution is feasible;
go to Step 12. Otherwise, find the source row
indexed by r such that r = Min {i | b_i < 0}.

Step 7. Omitting the z column, choose the lexicographically
smallest column such that a_rj < 0. Let s be the
index of this pivot column.

index of this pivot column.

Step 8. Let p be the row index of the first nonzero
element in column s. Compute y_j = [a_pj / a_ps] for
each column j with a_rj < 0 and such that a_pj is the
first nonzero element in column j; otherwise let
y_j be infinite.

Step 9. Compute λ = Max { |a_rj| / y_j } for a_rj < 0 and
y_j finite.

Step 10. Generate the cut constraint as follows:

    Σ [a_rj / λ] t_j <= [b_r / λ]    (sum over j = 1,2,...,n).


Step 11. Append the cut to the tableau and perform a dual

simplex iteration with column s as the pivot column

and the cut constraint as the pivot row. Drop the

cut from the tableau and go to Step 6.

Step 12. If x_0 = [x_0*], or if no variables are equal to their
starting values, or if all columns have been used as
the pivot column, then stop. This solution is the
optimal solution. If none of these conditions are
met, then the solution may or may not be the optimal
solution; go to Step 13.


Step 13. If all c_j >= 0, then stop; otherwise choose the
lexicographically smallest column as the pivot
column s (the z column is now eligible for
selection as the pivot column, and it will be the
pivot column in the first iteration of this
primal phase). If this column leads to a stationary
cycle, choose the next smallest column with c_j < 0;
if this column results in a transition cycle, let
it be the pivot column, otherwise choose the next
smallest column. Continue until a transition cycle
is found. If no transition cycle is possible, then
simply use the lexicographically smallest column.

Step 14. Choose the source row r as the row which minimizes
b_i / a_is for all i such that a_is > 0.


Step 15. Let λ = a_rs. Generate the cut constraint as in
Step 10.

Step 16. Append the cut to the tableau and perform a

primal simplex iteration with column s as the

pivot column and with the cut constraint as the

pivot row. If x_0 = [x_0*], then stop; otherwise, go
to Step 13.
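Step 10's cut generation is the computational core of both phases; a sketch follows (the numeric rows below are an assumption drawn from the tableaus of Example 1, worked later in this chapter):

```python
import math

def cut_row(source_row, lam):
    """Step 10's cut from a source row [b_r, a_r1, a_r2, ...]: coefficient
    [a_rj / lam] for each nonbasic column and right-hand side [b_r / lam],
    where [.] is the floor.  lam comes from Step 9 in the dual phase or
    Step 15 in the primal phase."""
    return ([math.floor(a / lam) for a in source_row[1:]],
            math.floor(source_row[0] / lam))

# First dual iteration of Example 1: source row b_r = -1 with entries
# (-5, -3, 8) and lambda = 5.
coeffs, rhs = cut_row([-1, -5, -3, 8], 5)
```

Because λ is chosen so the pivot element of the appended cut is -1 (dual phase) or +1 (primal phase), the subsequent pivot keeps the tableau all-integer.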

Since there is both a primal and a dual phase to this

algorithm, it should be classified as a primal-dual algorithm.

However, it differs greatly from the Pseudo Primal-Dual

Algorithm of Glover [1966] and the Constructive Primal-Dual

Algorithm of Ghandforoush [1980] in that it does not alternate

between the primal phase and the dual phase. The algorithm

begins in a dual phase, and only after feasibility is

achieved does a primal stage begin. Once the primal phase

begins, no further dual iterations are ever required. Both

of the algorithms mentioned above have specific criteria

which dictate the type of iteration to be performed, and

consequently there may be several primal phases and several

dual phases in the solution of one problem.

Convergence of the dual phase of the algorithm is

assured because of the source row selection rule used with

a lexicographic dual simplex type of algorithm. Salkin

[1975, p. 118] provides a complete proof of this, and many


similar proofs are available in the literature. This proof

is based on the following features of the lexicographic dual

simplex algorithm and the source row selection rule:

(1) The initial tableau is lexicographically positive.

(2) The choice of the cut constant insures that the

tableau will remain lexicographically positive.

(3) The "b" column is decreasing lexicographically

from one iteration to the next.

(4) If the first violated constraint is chosen as the

source row (r), then one of the "b" values above

b_r must decrease because b_r will increase. This
means that eventually b_1 must decrease, and thus an

infinite number of dual degenerate iterations may

not occur.

(5) An all-integer algorithm results in integer changes

in the value of the objective function, and if there

is a bound on this value, it must be reached in a

finite number of steps unless an infinite number of

dual degenerate iterations occur. Since only a

finite number of dual degenerate iterations may

occur, the algorithm must converge in a finite

number of steps.


The proof provided by Salkin which has been outlined above

may be directly applied to the ASA.

The primal phase of this algorithm (Step 13 through
Step 16) is simply a statement of a primal all-integer

algorithm similar to the Rudimentary Primal Algorithm with

a search for a transition cycle added. If the "L" row used

by Young was added to the algorithm, then this phase of the

ASA would essentially be the Simplified Primal Algorithm

(SPA). This part of the algorithm would be slightly

modified to insure that the primal phase would converge.

In Step 13, each column would be divided by the corresponding

value in the L row, and the lexicographically smallest column

would indicate the pivot column. The source row criterion

would change only if a transition cycle could not be found.

To insure convergence it is only necessary to guarantee that

every series of stationary cycles must eventually end, and

there are several possible row selection rules which will do

this (see Salkin [1975], p. 132). However, regardless of the

rule used, stationary cycles remain very much a problem with

all primal cutting plane algorithms, and thousands of

iterations may occur without ever changing the values of the

objective function or the values of any of the variables. As

will be seen in Chapter 4, when stationary cycles occur, the

primal phase of the ASA suffers despite various attempts to

remedy this. The use of the L row would guarantee convergence


in a finite number of iterations, but the finite number may

be impractically large.

The first feasible solution found in Step 13 is

usually a good solution (and sometimes an optimal solution)

which may be used with an all-integer cutting plane algorithm
as an advanced start for that algorithm. In addition, it

could be used as a bound for a branch and bound type of

algorithm. Thus, some useful information is available

even before the primal phase is used.

To illustrate this algorithm two examples will be

presented. The first will be an example in which both the

dual phase and the primal phase of this algorithm must be

utilized because the first feasible solution found will not

be the optimal solution. The second example will be one in

which the first feasible solution found is also the optimal

solution to the problem.

Consider the following ILP problem.

Maximize     x_0 = 5x_1 + 6x_2                (24)

Subject to   4x_1 + 5x_2 <= 61                (25)

             5x_1 + 3x_2 <= 59                (26)

             x_1, x_2 >= 0 and integer        (27)

The relaxed solution to this problem is x_0* = 74.9, x_1* = 8.6,
and x_2* = 5.3. From this solution is derived the following


advanced start: u_1 = [8.6 + 0.5] = 9, u_2 = [5.3 + 0.5] = 5.

The objective cut used in this problem will be the following:

5x_1 + 6x_2 <= 74                             (28)

It is easily verified that this advanced start does not

violate the first constraint but does violate the second

constraint and the objective constraint by one unit each.

Consequently, the slack variables for these constraints will

equal -1 in the initial tableau, and thus the initial

solution is infeasible. The beginning tableau is given in

Figure 9. Notice that the constraints have been rearranged

to place the most violated of the original constraints first

and the least violated constraint last. Therefore, the slack

variables will not appear in the same order as they normally

would. However, the slack variables will maintain their

relative positions throughout all the subsequent tableaus.
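The infeasibility of this start can be verified directly; a small sketch (the helper name is illustrative):

```python
def slacks(A, b, x):
    """Slack b_i - (row i of A) . x of each constraint A x <= b at the
    point x; a negative slack marks a violated constraint."""
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(A, b)]

# Example 1's two constraints plus the objective cut 5x1 + 6x2 <= 74,
# evaluated at the advanced start (9, 5).
s = slacks([[4, 5], [5, 3], [5, 6]], [61, 59, 74], [9, 5])
# -> [0, -1, -1]: the first constraint is binding, while the second
#    constraint and the objective cut are violated by one unit each
```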

The source row and the pivot column are indicated on

all tableaus by the arrows. The last row on each tableau is

the cut which is generated from the source row. The slack

variable for this constraint is omitted as is the common

procedure for most integer programming illustrations in the

textbooks in this area (e.g., Garfinkel and Nemhauser [1972],

Greenberg [1971], Hu [1969], Salkin [1975], Taha [1975], and

Zionts [1974]). The reason for this is that the column which

would correspond to this slack variable will always look like

the previous pivot column, and the column which was the pivot


            b     t_1   t_2     z

x_0  =     75      5     6    -11
x_1  =      9      1     0     -1
x_2  =      5      0     1     -1
s_2  =     -1     -5    -3      8   <- source row
t_0  =     -1     -5    -6     11
s_1  =      0     -4    -5      9
cut  =     -1     -1    -1      1
                   ^
             pivot column

Figure 9. Initial Tableau for Example 1.


            b    s_c1   t_2     z

x_0  =     70      5     1     -6
x_1  =      8      1    -1      0
x_2  =      5      0     1     -1
s_2  =      4     -5     2      3
t_0  =      4     -5    -1      6   <- source row
s_1  =      4     -4    -1      5
cut  =      0     -1    -1      1
                                ^
                          pivot column

(s_c1 denotes the slack variable for the first cut)

Figure 10. Second Tableau for Example 1.


column will contain a one and zeroes like an identity column.

Since all other columns of the identity matrix are not

explicitly stated, this other identity column is likewise

not explicitly stated. Figure 10 shows the next iteration

for this example problem, and it is clear that the pivot

column is unchanged.

Again consider the first tableau which is presented in

Figure 9. The first violated constraint is chosen as the

source row, and the lexicographically smallest column (other

than the "z" column) is chosen as the pivot column. The cut

is generated and appended to the bottom of the tableau. When

a dual simplex pivot operation is performed with -1 as the

pivot element, the tableau in Figure 10 results. This tableau

represents a feasible solution to this problem because all

variables have values which are greater than or equal to zero.

However, since X2 is equal to the original starting value,

this solution may or may not represent the optimal solution

to the problem. Therefore, the primal phase of the algorithm

must be used to either verify that this is the optimal

solution or to move from this point to the optimal solution.

The primal phase begins with the tableau in Figure 10. The

lexicographically smallest column (in this case the "z"

column) is chosen as the pivot column, and the source row is

selected according to the criterion in Step 14. This row

is indicated by the arrow in Figure 10. A cut is generated


and appended to the bottom of the tableau. This first cut

is an example of a stationary cut — the values of the

variables will remain unchanged in the next iteration. The

next tableau and the final tableau are given in Figure 11.

As can be seen in the third tableau, selecting the

lexicographically smallest column would result in another

stationary cycle, and consequently the other column with a

negative coefficient in the first row is chosen as the pivot

column. The cut which is then found results in a transition

cycle which leads to the optimal tableau. This solution is

the optimal solution because all values in the first row

are nonnegative.
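The lexicographic comparison used above to choose the pivot column can be sketched as follows. This is a hypothetical helper; in the algorithm itself it is applied only to the columns eligible for pivoting.

```python
def lex_smaller(col_a, col_b):
    """True if col_a precedes col_b lexicographically: at the first
    position where the columns differ, col_a has the smaller entry."""
    for a, b in zip(col_a, col_b):
        if a != b:
            return a < b
    return False                      # identical columns

def lex_smallest(columns):
    """Index of the lexicographically smallest of several columns."""
    best = 0
    for j in range(1, len(columns)):
        if lex_smaller(columns[j], columns[best]):
            best = j
    return best

# Hypothetical columns, read top to bottom.
cols = [[-1, 5, 0], [-1, 3, 2], [0, -4, 1]]
print(lex_smallest(cols))             # 1: the -1 entries tie, then 3 < 5
```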

The Advanced Start Algorithm has thus solved this

problem with one dual iteration and two primal iterations.

The next example will be a problem in which the first

feasible solution found is the optimal solution, and

consequently the primal phase will not even be needed to

solve the problem. This problem is as follows:

Maximize     X0 = 3X1 + 5X2                  (29)

Subject to:  2X1 - 3X2 <= 37                 (30)

             4X1 + 6X2 <= 71                 (31)

             X1, X2 >= 0 and integer         (32)


[The tableaus themselves are not legible in this transcription.]

Figure 11. Third and Fourth Tableaus for Example 1.


The solution to the relaxed problem is X0 = 244.4, X1 = 48.3,
and X2 = 19.9; thus the advanced start will be U1 = 48 and
U2 = 20. Implementing this advanced start, adding the
objective constraint, and rearranging the constraints leads
to the tableau in Figure 12. Since the value of the

objective function is 244 (the same as the relaxed solution),

this tableau would represent the optimal solution if it were

feasible. However, the second constraint is violated so the

solution is not feasible, and consequently the dual phase of

the algorithm must begin. As can be seen in Figure 12, two

iterations are required to reach the first feasible solution.

Because the variables are not equal to their original values,

this solution is also the optimal solution despite the fact

that the "z" column still has a negative value in the first

row. Therefore, it is not necessary to use the primal

phase, and the optimal solution was found in two iterations.

If the primal phase were used in this problem, some stationary

cycles would occur and then it would be verified that this

current solution is indeed the optimal solution.
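The advanced start used in both examples can be sketched as follows. The rounding rule is inferred from the example above (48.3 becomes 48 and 19.9 becomes 20, i.e., rounding to the nearest integer), and the constraint data below are hypothetical.

```python
def advanced_start(x_lp):
    """Round each variable of the relaxed (LP) solution to the
    nearest integer."""
    return [int(round(v)) for v in x_lp]

def order_by_violation(A, b, u):
    """Constraint indices ordered from most violated to least violated
    at the point u, where the violation of row i is A[i]*u - b[i]."""
    viol = [sum(a * x for a, x in zip(row, u)) - bi
            for row, bi in zip(A, b)]
    return sorted(range(len(b)), key=lambda i: viol[i], reverse=True)

u = advanced_start([48.3, 19.9])
print(u)                              # [48, 20]
# Hypothetical constraints: only the second is violated at u.
A = [[2, -3], [4, 6]]
b = [37, 71]
print(order_by_violation(A, b, u))    # [1, 0]
```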

These two examples illustrate the use of the Advanced

Start Algorithm, In both cases the advanced start was very

close to the optimal integer solution as well as the optimal

relaxed solution. Obviously these problems are small and not

necessarily representative of ILP problems in general. In

the next chapter computational results will be reported to

59

[The tableaus themselves are not legible in this transcription.]

Figure 12. All Three Tableaus for Example 2.


demonstrate how this algorithm performs on a set of standard

"hard" test problems. These results will be compared with

the results of several commercial codes which were tested

on these same test problems and which are based on other

cutting plane algorithms.


CHAPTER IV

COMPUTATIONAL COMPARISON

In order to demonstrate the efficiency of the

Advanced Start Algorithm, computational results obtained

using this algorithm will be compared to computational

results previously reported by Trauth and Woolsey [1969]

for several commercial codes based on cutting plane

algorithms. Included in this chapter will be a discussion

of the basis used for comparison, a presentation of the

problems on which the algorithms were tested, an

explanation of why these are hard problems, a brief

discussion of the commercial codes used in the comparison,

and the actual results of the testing accompanied by further

comments and explanations.

Basis for Comparison

It is difficult to find a basis for comparison of the

various types of ILP algorithms due to various factors.

Obviously, the CPU time required to solve a problem is of

utmost importance to the user of the code, but this is a

function not only of the algorithm used but also of the

programmer, the computer language which is used, and the


machine on which the code is tested. For this reason the CPU

time is not necessarily a valid indication of the efficiency

of the algorithm.

Another basis for comparison in some cases could be the

number of iterations required to reach the optimal solution.

An iteration is defined the same for any cutting plane

algorithm; it is simply the change in the tableau resulting

from a pivot operation. Although one cut in a fractional

algorithm may result in several iterations and one cut in an

all-integer cutting plane will result in only one iteration,

the computations involved in one iteration of a fractional

algorithm are roughly equivalent to the computations involved

in one iteration of an all-integer algorithm. Since it is

true that the number of iterations is independent of the

language used, the expertise of the programmer, and the type

of computer used, this is the most appropriate way to

compare cutting plane algorithms.

The only difficulty with using the number of iterations

as the efficiency criterion is that other types of algorithms

(such as branch-and-bound) may not be compared to cutting

plane methods since an iteration is not clearly defined for

such techniques. Ultimately the CPU time must be used to

compare different types of algorithms. Were it possible to

use one programmer, one language, and one machine for a

computational test, the results based on CPU time might be


valid. However, since all algorithms which are directly

related to this investigation are cutting planes, it is

appropriate to use the number of iterations as the basis for
comparison.

The Commercial Cutting Plane Codes

The Advanced Start Algorithm will be compared with five

commercial integer programming codes as well as the BDA.

Since the BDA has been explained in the last chapter, further

discussion of this is unnecessary. The other five codes, all

of which are based on cutting plane algorithms, will be

briefly described. These descriptions are taken from Trauth

and Woolsey [1969], in which the original comparison of these

codes was made.

The first of these five codes is IPM3, which is

primarily based on Gomory's fractional algorithm — although

all-integer cuts are also used. An interesting feature of

this code is that it may generate several constraints rather

than just one at each iteration. Because this code was

developed for the IBM 7090 series computers, it is not

useable on other machines. A complete description of this

code is provided by Levitan [1961].

The second code, LIP1, is also derived from Gomory's
fractional algorithm, but the approach taken is quite different


than the one used in the original fractional algorithm. To

begin, the solution to the LP problem is found and cuts are

generated from the objective function in an attempt to make
the value of the objective function an integer. Once this

is done cuts are developed from other constraints which force

the variables one at a time to become integer valued while

not affecting previously developed integer values. If this

forces the last variable to be fractional, then the process

is restarted. Like IPM3, this code is useable only on the

IBM 7090 series computers because of the programming language

used. Haldi and Isaacson [1965] are the developers of this

code.

The remaining three codes are all based on the

all-integer rather than the fractional technique developed

by Gomory. Two of these codes, ILP2-1 and ILP2-2, are

actually variations of the same code. Differing only in

the source row selection, both of these codes are CDC

proprietary codes which were written to implement Gomory's

all-integer algorithm. The first code (ILP2-1) uses the

most violated constraint as the source of the cut, while the

second (ILP2-2) considers the resultant change in the value

of the objective function when selecting the source row.

Details of this procedure are found in Trauth and Woolsey

[1968].

The final code used in this study is IPSC which uses

a source row selection procedure similar to the one used in


ILP2-2. An interesting feature of this code is the fact
that it is written entirely in FORTRAN II, which essentially
makes it a "universal" code. The complete description of

this algorithm is given in Woolsey and Trauth [1966]. The

set of problems on which these were tested are designed to

be "hard" for cutting plane algorithms to solve. These

problems will now be listed, and an explanation of why they

are difficult will be provided.

The Test Problems

The computational comparison of the Advanced Start

Algorithm with the BDA and with the commercial codes will be

made using nineteen test problems. Nine of these are

allocation problems (0-1 knapsack problems) developed by

Trauth and Woolsey [1969] to test the sensitivity of integer

programming codes to small changes in a problem. The ten

remaining problems are fixed charge problems which were

generated by Haldi [1964] and which were also used by Trauth

and Woolsey in their computational study.

All variables included in the following statements of

the nineteen test problems are restricted to be integer

valued and nonnegative.

All nine allocation problems have the following forms:

Maximize: Z = 20X1 + 18X2 + 17X3 + 15X4 + 15X5 + 10X6 + 5X7 + 3X8 + X9

Subject to:

30X1 + 25X2 + 20X3 + 15X4 + 7X5 + 11X6 + 5X7 + 2X8 + X9 + X10 <= B

Xi <= 1 for i = 1, 2, ..., 10

The value of B in each of the nine problems is given below.

Problem   1    2    3    4    5    6    7    8    9
B        35   60   65   70   75   80   85   90  100

It is clear that the problems only differ in the B value, and

these differences are not very great. However, these small

changes may greatly affect the efficiency of some cutting

plane algorithms. This will be seen when the results are

presented later in this chapter.
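Each allocation problem is a 0-1 knapsack problem with ten variables, small enough to solve by complete enumeration. The sketch below uses a hypothetical four-variable instance rather than the coefficients above, parts of which are illegible in this transcription.

```python
from itertools import product

def solve_knapsack(c, a, B):
    """Enumerate every 0-1 vector and keep the best feasible one."""
    best_z, best_x = None, None
    for x in product((0, 1), repeat=len(c)):
        if sum(ai * xi for ai, xi in zip(a, x)) <= B:
            z = sum(ci * xi for ci, xi in zip(c, x))
            if best_z is None or z > best_z:
                best_z, best_x = z, x
    return best_z, best_x

# Hypothetical instance in the same form as the allocation problems.
c = [20, 18, 17, 5]          # objective coefficients
a = [30, 25, 20, 5]          # constraint coefficients
z, x = solve_knapsack(c, a, 35)
print(z, x)                  # 25 (1, 0, 0, 1)
```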

The ten fixed charge problems are not very large, but

they were designed to be very difficult to solve. The first

four of these have the following form:

Maximize     Z = X1 + X2 + X3

Subject to:  2F1 + 3F2 + X1 + 2X2 + 2X3 <= B1

             3F1 + 2F2 + 2X1 + X2 + 2X3 <= B2

             -R1F1 + X1 <= 0

             -R2F2 + X2 <= 0


The values of R1, R2, B1, and B2 are given below:

Problem   R1   R2   B1   B2
   1       6    7   18   15
   2       9    7   18   17
   3       9    9   21   21
   4       6    8   19   15

Problems 5 and 6 both have the following form:

Maximize     Z = X1 + X2 + X3

Subject to:  20F1 + 30F2 + X1 + 2X2 + 2X3 <= B1

             30F1 + 20F2 + 2X1 + X2 + 2X3 <= B2

             -R1F1 + X1 <= 0

             -R2F2 + X2 <= 0

             F1 <= 1

             F2 <= 1

The values for R1, R2, B1, and B2 are given below:

Problem   R1   R2   B1   B2
   5      60   75  180  150
   6      90   90  210  210


Problems 7 and 8 are identical to 5 and 6 respectively with
the last two constraints removed. The solutions are not
changed when these constraints are removed, but the
difficulty of the problem is affected when using certain
algorithms to solve them. This will also be seen later in

this chapter.

Problem 9 is:

Maximize     Z = X1 + X2 + X3

Subject to:  2F1 + 2F2 + X1 + X2 <= 10

             2F1 + 2F3 + X1 + X3 <= 10

             2F2 + 2F3 + X2 + X3 <= 10

             -8F1 + X1 <= 0

             -8F2 + X2 <= 0

             -8F3 + X3 <= 0

Due to the size of Problem 10, it will be presented in
a detached coefficient format.

[The detached coefficient matrix for Problem 10 is not legible in this
transcription. The problem has six F variables and six X variables, an
objective row of ones over the X variables, resource constraints with
right-hand sides of 110, 95, 80, and 100, and fixed charge constraints
with zero right-hand sides.]

All nineteen of these problems are relatively small;

however, they are not easy to solve due to the prevalence of

dual degeneracy. This degeneracy contributes to the

occurrence of iterations in which the value of the objective

function does not change. Consequently, many iterations may

be performed which result in no progress toward reaching the

optimal solution. This dual degeneracy is recognized by the

occurrence of zeroes in the objective row. In many of these

problems the objective row for the final tableau contains only

one non-zero element (other than the actual "z" value), and

in all of them the final tableau has at least 50 per cent

zeroes in the objective row.
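The zero-count signal just described is easy to compute. A small check with a hypothetical objective row:

```python
def zero_fraction(objective_row):
    """Fraction of zero coefficients in the objective row, excluding
    the leading "z" value itself."""
    coeffs = objective_row[1:]
    return sum(1 for a in coeffs if a == 0) / len(coeffs)

# Hypothetical final objective row: the z value, then coefficients.
row = [244, 0, 0, 3, 0, 0, 0]
print(zero_fraction(row))        # 5 of 6 coefficients are zero
```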


The Computational Results

When the Advanced Start Algorithm (ASA), the BDA, and
the five commercial codes were used to solve these test
problems, some interesting results were found. The small
change in the right hand side value for the allocation
problems resulted in some rather pronounced differences in
iterations required for convergence for some of the
algorithms, and the fixed charge problems proved to be
exceedingly difficult for some of the algorithms. The

number of iterations required for solving the allocation

problems and the fixed charge problems are given in Table 1

and Table 2, respectively.

The number of iterations required for the ASA to solve

the problems is given in two parts: the first part is the

number of iterations required to reach the first feasible

solution, and the second part is the number of primal

iterations required to prove that the first feasible solution

found was optimal. In all of these test problems the first

feasible solution was the optimal solution, and in all but

three of these problems this fact was recognized before the

primal phase was begun. However, in the three problems in

which the primal phase was needed, the primal phase did not

converge in several thousand iterations even though the first

feasible solution was optimal. Several modifications were

TABLE 1

RESULTS OF COMPUTATIONAL TESTING FOR
ALLOCATION PROBLEMS

                         Problem
Code       1    2    3    4    5    6    7    8    9

ASA        2   61   16    6    0   40  131   22    0
          (0)  (-)  (0)  (0)  (0)  (0)  (0)  (-)  (0)
BDA       13   20   14   12    3   35  395   38    2
IPM3      14   31   30   18   11   18   61   21   12
LIP1      19   55   41   19   12   40   81   51   12
ILP2-1    54  163  168  192  139  157  504  370  201
ILP2-2    51   77   59   48   32   54  119   57   34
IPSC      46   64   71   62   50   81  131  102   44

Note. For the ASA the number in parentheses is the number of
primal iterations required to prove optimality. No
convergence is indicated by "(-)".

TABLE 2

RESULTS OF COMPUTATIONAL TESTING FOR
FIXED CHARGE PROBLEMS

                              Problem
Code       1    2    3    4      5      6      7      8     9     10

ASA       15    5   13    6    105     76    105     76    40      7
         (0)  (-)  (0)  (0)    (0)    (0)    (0)    (0)   (0)    (-)
BDA       12   14   10   12    107     83    107     83    44    879
IPM3      54   81   37   91  7000+  7000+  7000+  7000+   118   1396
LIP1      24   15   26   18    158    123    159    126    42    102
ILP2-1   135   94  154   93  7000+  7000+  7000+  7000+ 7000+  7000+
ILP2-2    36   47  104   18  7000+    311  7000+    306   298  7000+
IPSC      32   45   56   22   6104   3320  7000+  7000+   339  7000+

Note. For the ASA the number in parentheses is the number of
primal iterations required to prove optimality. No
convergence is indicated by "(-)".


tried to make the primal phase converge, including the use of
Glover's reference row and various row selection criteria,
but none of these were successful. This failure to converge
can be attributed to the stationarity problem which exists
in primal all-integer cutting plane algorithms. Stationarity
is such a problem that it led to the series of articles by
Arnold and Bellmore [1974a,b,c] mentioned in Chapter 2 which
were aimed at minimizing this difficulty.

It should be noted here that the primal phase of the
algorithm is not new but is simply a primal all-integer
cutting plane algorithm attributable to both Young and
Glover who both provided theoretical proofs of convergence.

It is evident from these three problems that theoretical

convergence is not always satisfactory in a practical sense.

This sentiment is most aptly expressed by Hesse and Woolsey

[1980, p. 259] who noted: "Unfortunately, what is finite

for a mathematician is infinite to a person who only lives

100 years or so." Further discussions of this problem with

the primal phase will be provided in Chapter 5, but for the

remainder of this chapter the number of dual iterations for

these problems will be used in making comparisons, since

this phase did result in the optimal solution even though it

could not be proven to be optimal.

In comparing the effectiveness of the different

algorithms on these test problems, there are obviously many

criteria which could be used. The average number of


iterations required for each algorithm to solve each set of

problems is one such criterion. Another could be the number

of iterations required in the "worst" problem. However, the

criterion used with this set of results will be the number

of problems on which the algorithm performed best (required
the fewest number of iterations).

As seen in Table 1, the ASA required the fewest number

of iterations for four of the nine allocation problems. The

BDA performed best on three of these problems, and IPM3

worked best on two of them. In problems 5 and 9 the advanced

start coincided with the optimal solution, and consequently

no iterations were required. This was evident because the

advanced start was a feasible solution with an objective

function value equal to the integer portion of the optimal

objective function value for the relaxed (LP) problem.
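That recognition rule can be stated compactly: a feasible integer point whose objective value equals the integer portion of the relaxed optimum must be optimal, since no integer solution can do better than floor(z_LP). A sketch with hypothetical data:

```python
import math

def start_is_optimal(c, A, b, u, z_lp):
    """True if u is feasible and its objective value equals the
    integer portion of the relaxed (LP) optimum z_lp."""
    feasible = (all(v >= 0 for v in u) and
                all(sum(a * x for a, x in zip(row, u)) <= bi
                    for row, bi in zip(A, b)))
    z = sum(ci * xi for ci, xi in zip(c, u))
    return feasible and z == math.floor(z_lp)

# Hypothetical instance: the LP optimum is about 51.67, and the
# integer point (2, 9) achieves floor(51.67) = 51.
c = [3, 5]
A = [[2, 3]]
b = [31]
print(start_is_optimal(c, A, b, [2, 9], 51.67))   # True
```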

Table 2 gives the results for the fixed charge problems.

On eight of the ten fixed charge problems the ASA required the

fewest number of iterations, and on the other two it was a

very close second to the BDA. For the largest of the test

problems (number 10), the ASA only required seven iterations,

while most of the others required several thousand iterations.

In a comparison of the ASA with the BDA alone, the ASA

requires fewer iterations in 14 of the 19 test problems.

Since the dual phases of the two algorithms are very similar,

and since the ASA does start at a point closer to the optimal


solution, it is logical (although incorrect) to expect the
ASA to always perform better than the BDA. However, because
the starting points used are different, the paths taken to
arrive at the optimal solution are different, and the path
taken by the ASA may be longer. Why this occurs in some
problems can at this point only be attributed to the sometimes

erratic behavior of cutting plane algorithms. There is

nothing evident from these problems which would indicate why

the slight changes in the parameters cause the varying

results. Perhaps future study in this area would lead to an

explanation of this.

Comparing the computational results of the ASA on the

allocation problems with the results of the ASA on the fixed

charge problems indicates that problem type may have an

effect on the efficiency of the algorithm. This is

consistent with conclusions drawn by Trauth and Woolsey

[1969] about the codes used in their computational study;

they observed that the type of problem can affect the

performance of the different codes.

Based on the results presented in this chapter, it

appears that the Advanced Start Algorithm works rather well

on a small set of test problems. These problems are not

purported to be the most difficult integer programming

problems, nor are they intended to be completely representative

of the entire domain of such problems. However, they are


examples from two classes of integer problems, and the

efficiency of the ASA on these problems indicates that this

algorithm might be an effective way of solving integer

programming problems. At the very least this small set of

computational tests indicates that further study of this

algorithm is warranted. The next chapter will contain

some suggestions for this future work as well as a summary

of what has been done, and additional conclusions and
conjectures.

CHAPTER V

SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS

This chapter has three purposes. First, it summarizes
the actual results of this research and compares them with
the originally stated goals. Second, it presents the

conclusions which may be drawn from this study. Third, this

chapter provides recommendations for further research based

upon the results of this investigation.

Summary

The original goal of this investigation was to provide

a way to allow the use of any starting solution in an

all-integer cutting plane algorithm. The algorithm

presented in Chapter III accomplishes this goal. Although

the algorithm provides a specific way to generate advanced

starts, any start may be used with the algorithm.

Consequently, a method is now available for an all-integer
cutting plane algorithm to begin at a point which is close

to the optimal solution of the relaxed (LP) problem. Thus,

an advantage which previously was inherent with fractional

cutting planes, branch and bound methods, and group theoretic

methods is now extended to all-integer cutting planes as well.


It is worth reiterating that all-integer methods possess one
advantage which is not a part of any other technique — the

total elimination of round-off errors.

The major theoretical development of this research is

presented in Chapter 3. It was demonstrated in this chapter

that the advanced start constraints may be relaxed by

eliminating constraints corresponding to the advanced start
slack variables. It was also shown that the selection of all

columns as the pivot column resulted in a basic solution from

which a Gomory type algorithm could move to insure that the

first feasible solution was in fact the optimal solution.

Computational results were provided to show that this

algorithm does work relatively well on a set of test problems

which have proven to be quite difficult for other cutting

plane algorithms. At the very least it can be said that this

algorithm shows promise in contributing to the alleviation

of the computational difficulty associated with integer

programming problems.

Conclusions

The effectiveness of this algorithm is still very much

open to question despite the favorable results presented in

Chapter IV. Results based on a small set of test problems

or even on a large set of problems do not prove that an


algorithm works well or that it performs better than others
on every integer programming problem. It is often true that

a pathological problem can be constructed which would make a

particular algorithm perform relatively poorly. However, test

results do provide an indication of the relative efficiency

of algorithms and consequently are an important basis for
comparison. From the test results previously presented, the

following conclusions can be made.

(1) The Advanced Start Algorithm appears to be a relatively
effective algorithm when compared with other cutting
plane techniques.


(2) The first feasible solution found by this algorithm is

the optimal solution in many cases (and in all of the

test problems).

(3) The primal phase of the algorithm, when required, does

not always perform well, and further study of this

phase of the algorithm is warranted.

As mentioned previously, the primal phase is not original with

this algorithm and is merely a variation of the Simplified

Primal Algorithm. It appears that stationarity (primal

degeneracy) in primal algorithms is a greater problem than is

dual degeneracy in dual algorithms. Suggestions for improving

this phase of the algorithm are discussed in the next section.


Recommendations for Future Research

Just as the "end" of a simplex algorithm (the solution
to the relaxed problem) represents the "beginning" (starting
point) for the algorithm developed in this dissertation, so
too is the "end" of this research actually the "beginning" of

future endeavors. One obvious area for further work will be

the generation of additional computational results to give
more information about the efficiency of the Advanced Start

Algorithm. This will include testing the algorithm on larger

and larger problems to determine whether any correlation

exists between the size of the problem and the effectiveness

of the algorithm. Also a part of future computational work

will be an attempt to convert the current computer program

which was written in SAS (Statistical Analysis System) to a

computer language which is computationally faster. When this

is done, comparisons based on CPU time will be used to

compare this algorithm to existing branch and bound

algorithms.

Since the primal phase of this algorithm does not always

work well, additional study of this feature is recommended.

However, because the first feasible solution found by this

algorithm is usually the optimal solution, several alternative

approaches are apparent. First, alternate optimality criteria

might be found so that the primal phase would be needed less

often. As seen by three of the test problems in this study,
the first feasible solution can be optimal, but several
thousand iterations might be performed without ever verifying
this fact. A second approach would be to take the first
feasible solution and to use this as a lower bound in a

branch and bound algorithm. A third approach would be to

develop a different type of primal algorithm which could be

used with the ASA. When the ASA was tested on some other

small problems, an interesting pattern emerged. Whenever the

first feasible solution was suboptimal, the optimal solution

could be found by adding a multiple of one of the columns in

the tableau to the solution column. This technique seems to

be a variation of the simplified iteration developed by

Glover [1968], but further development is needed to determine

if this will always work.
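The pattern observed on those small problems can be tested by brute force. The sketch below searches over each tableau column and small integer multiples; the data are hypothetical, and whether such a step always reaches the optimum is exactly the open question noted above.

```python
def improve_by_column(c, A, b, u, columns, t_max=10):
    """Search for the best feasible point of the form u + t*col over
    the given columns and multiples t = 1, ..., t_max."""
    def feasible(x):
        return (all(v >= 0 for v in x) and
                all(sum(a * xi for a, xi in zip(row, x)) <= bi
                    for row, bi in zip(A, b)))
    def z(x):
        return sum(ci * xi for ci, xi in zip(c, x))
    best = list(u)
    for col in columns:
        for t in range(1, t_max + 1):
            cand = [ui + t * cj for ui, cj in zip(u, col)]
            if feasible(cand) and z(cand) > z(best):
                best = cand
    return best

# Hypothetical data: stepping along the column (1, -1) trades one unit
# of x2 for one unit of x1 and improves the objective.
c = [3, 1]
A = [[1, 1]]
b = [10]
print(improve_by_column(c, A, b, [4, 6], [[1, -1]]))   # [10, 0]
```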

To conclude this dissertation, one final comment is in

order. This author believes in the philosophy expressed on

several occasions by Woolsey (e.g., Hesse and Woolsey [1980,

p. 244]) — integer programming is an art, and as with any

type of art the tools available are important but the final

result depends on the person using the tools. There are

many tools available to the integer programmer, and the

algorithm developed in this dissertation is but another tool

which may be used. Only further testing and research will

show exactly how valuable this tool actually is.

REFERENCES

1. Arnold, L.R., and Bellmore, M. "Iteration skipping in ?Qjk? i' ®?® programming," Operations Research. 197*ra, 22a 129-136.

2. Arnold, L. R., and Bellmore, M. "A generated cut for primal integer programming," Operations Research, 1974b, 22, 137-143.

iu"';'

3« Arnold, L. R., and Bellmore, M. "A bounded minimization problem for primal integer programming," Operations Research. 1974c, 22, 383-392.

4. Austin, L. M. The bounded descent algorithm for integer programming. unpublished monograph. College of Business Administration, Texas Tech University, 1979.

5. Austin, L. M., Ghandforoush, P., and Hanna, M. E. "Advemced starts in a primal-dual cutting plane algorithm for all-integer programming," Proceedings of American Institute for Decision Sciences, 19^07 160-162.

6. Austin, L. M., and Hanna, M. E. "A bounded dual (all-integer) integer programming algorithm with an objective cut," presented at the Joint national ORSA/TIMS meeting; Washington, D.C. (May, 1980).

7. Balas, E. "An additive algorithm for solving linear programs with zero-one variables," Operations Research, 1965, 11, 517-546.

8. Balas, E. "Discrete programming by the filter method," Operations Research, 1967, li, 915-957.

9. Balas, E. "Intersection cuts — a new type of cutting planes for integer programming," Operations Research, 1971a, 19, 19-39.

82

83

10. Balas, E* "A duality theorem and an algorithm for (mijed)* integer prdgramming," OperatioriS Research, 1971b, 4 , jSt-<35a.

11. Beale, E.M.L., and Small, R.E. "Mixed integer proipramming by a branch-and-bound technique," froceedings of IFIP Congress > New York, May 1965,

*.

12. Bell, D.E. "A simple algorithm for integer programs using group constraints," Operational Research Quarterly. 1977, 28, 453-458":

13. Bell, D.E. "Efficient group cuts for integer programs," Mathematical Programming, 1979, H , 176-183.

14.. Bell, D.E., and Shapiro, J.E. "A convergent duality theory for integer prograimning," Operations Research, 1977, 25, 419-434.

15. Ben-Israel, A., and Charnes, A. "On some problems in diophantine programming," Cahlers Cent. d^Etudes Recherchle Operationnelle, 1962, j, 215-2bQ.

16. Charnes, A., Granot, D., and Granot, F. "On intersection cuts in interval integer linear programming. Operations Research, 1977, 25,, 352-355.

17. Gushing, B. E. "The application potential of integer programming," Journal of Business, 1970, 43., ^57-467.

18. Dakin, R. M. "A tree search algorithm f'or mixed integer programming problems," Computer Journal, 1965, £, 250-255.

19. Dantzig, G. B. "On the significance of solving linear programming problems with some Integer variables, Econometrica, I960, 28 , 30-44.

84

20. °^"^^iS£ 3 B^.^»uUcer«on. D. R.. and Johnson. S. M. problems S J . i J f ' * * 2*"^* travell ing salesinan proDiem," Xft)eratlonB Reaeareh. I95U, 2, 393-JllO.

IS 21. ^i^*J[t N, J. "An algorithm for the solution of ^ ^

SlScSf il66 ^^^^^--^ 22. Faaland, B. H., and Hillier, F. S. "Interior path

methods for heuristic integer programming methods," Operations Research. 1979, 27, IO69-IO87.

23. Garflnkel, R. S., and Nemhauser, G. L. Integer Programming, Wiley, New York, N.Y., 1972?—

24. (jeoffrlon, A.M. "An improved implicit enumeration approach for integer programming," Operations Research. 1969, 17, 437-454.

25. Ghandforoush, P. "A constructive primal-dual cutting-plane algorithm for integer programming," D.B.A, Dissertation, College of Business Administration, Texas Tech University, I98O.

26. Glover, G.A. "A bound escalation method for the solution of integer linear programs," Cahiers Cent, d*Etudes de Recherche Operationnelle, 1965a, 6,

,. .131-168. .

27. Glover, P. A. "A multiphase-dual algorithm for the zero-one integer programming problem," Operations Research, 1965b, 13_, 879-919.

28. Glover, F.A. "Generalized cuts in diophantine programming," Management Science, 1966, IJ, 254-268.

29. Glover, F.A. "A new foundation for a simplified primal integer programming algorithm," Operations Research, 1968a, 16, 727-740.

85

'°- ''°l968b: lg'^?5?$;H*'^^»*»'" operations Rese...h.

31. [Entry illegible in the original.]

32. Glover, F., and Klingman, D. "Concave programming applied to a special class of 0-1 integer problems," Operations Research, 1973a, 21, 135-140.

33. Glover, F., and Klingman, D. "The generalized lattice point problem," Operations Research, 1973b, 21, 141-155.

34. Gomory, R. E. "Outline of an algorithm for integer solutions to linear programs," Bulletin of the American Mathematical Society, 1958, 64, 275-278.

35. Gomory, R.E. "An algorithm for integer solutions to linear programs," in Recent Advances in Mathematical Programming (R.L. Graves and P. Wolfe, eds.), pp. 269-302, McGraw-Hill, New York, 1963.

36. Greenberg, H. Integer Programming, Academic Press, New York, 1971.

37. Haldi, J. 25 Integer Programming Test Problems, Working Paper No. 43, Graduate School of Business, Stanford University.

38. Haldi, J., and Isaacson, L. M. "A computer code for integer solutions to linear programs," Operations Research, 1965, 13, 946-959.

39. Hesse, R., and Woolsey, G. Applied Management Science, SRA, Chicago, 1980.

40. Land, A. H., and Doig, A. G. "An automatic method for solving discrete programming problems," Econometrica, 1960, 28, 497-520.


41. Lev, G., and Soyster, A. L. "Integer programming with bounded variables via canonical separation," Journal of the Operational Research Society, 1978, 29, 477-488.

42. Levitan, R. E. IPM3, SHARE Distribution Number 1190, 1961.

43. Manne, A. S. "On the job-shop scheduling problem," Operations Research, 1960, 8, 219-223.

44. Markowitz, H. M., and Manne, A. S. "On the solution of discrete programming problems," Econometrica, 1957, 25, 84-110.

45. Marsten, R. E., and Morin, T. L. "A hybrid approach to discrete mathematical programming," Mathematical Programming, 1978, 14, 21-40.

46. Mathis, S. J., Jr. "A counterexample to the rudimentary primal integer programming algorithm," Operations Research, 1971, 19, 1518-1522.

47. Pnueli, A. "An improved starting point for integer linear programming algorithms," in Developments in Operations Research (B. Avi-Itzhak, ed.), pp. 63-91, Gordon and Breach, New York, 1971.

48. Salkin, H. M. "On the merit of the generalized origin and restarts in implicit enumeration," Operations Research, 1970, 18, 549-554.

49. Salkin, H. M. Integer Programming, Addison-Wesley, Reading, Mass., 1975.

50. Shapiro, J. F. "Dynamic programming algorithms for the integer programming problem-I: The integer programming problem viewed as a knapsack type problem," Operations Research, 1968a, 16, 103-121.


51. Shapiro, J. F. "Group theoretic algorithms for the integer programming problem-II: Extension to a general algorithm," Operations Research, 1968b, 16, 928-947.

52. Spielberg, K. "Algorithms for the simple plant location problem with some side conditions," Operations Research, 1969, 17, 85-111.

53. Srinivasan, A. V. "An investigation of some computational aspects of integer programming," Journal of the Association for Computing Machinery, 1965, 12, 525-535.

54. Taha, H.A. Integer Programming: Theory, Applications, and Computations, Academic Press, New York, 1975.

55. Tomlin, J. A. "An improved branch-and-bound method for integer programming," Operations Research, 1971, 19, 1070-1074.

56. Trauth, C.A., and Woolsey, R. E. MESA, A Heuristic Integer Linear Programming Technique, Sandia Laboratories Research Report, SC-RR-68-299, August 1968.

57. Trauth, C. A., and Woolsey, R. E. "Integer linear programming: A study in computational efficiency," Management Science, 1969, 15, 481-493.

58. Wagner, H.M. Principles of Operations Research, 2nd ed., Prentice-Hall, Englewood Cliffs, N.J., 1975.

59. Young, R. D. "A primal (all-integer) integer programming algorithm," Journal of Research of the National Bureau of Standards, Section B, 1965, 69, 213-250.

60. Young, R. D. "A simplified primal (all-integer) integer programming algorithm," Operations Research, 1968, 16, 750-782.


61. Young, R. D. "Hypercylindrically deduced cuts in 0-1 integer programming," Operations Research, 1971, 19, 1393-1405.

62. Young, R. D. "The eclectic primal algorithm: Cutting plane method that accommodates hybrid subproblem solution techniques," Mathematical Programming, 1975, 9, 294-312.

63. Zionts, S. Linear and Integer Programming, Prentice-Hall, Englewood Cliffs, N.J., 1973.